\begin{document} \title[]{A Generic Framework for Reasoning about \\ Dynamic Networks of Infinite-State Processes\rsuper*} \author[A.~Bouajjani]{Ahmed Bouajjani} \address{LIAFA, University Paris Diderot and CNRS, Case 7014, 75205 Paris Cedex 13, France.} \email{\{abou,cezarad,cenea,jurski,sighirea\}@liafa.jussieu.fr} \thanks{This work is partially supported by the French ANR project AVERISS} \author[C.~Dr\u{a}goi]{Cezara Dr\u{a}goi} \author[C.~Enea]{Constantin Enea} \author[Y.~Jurski]{Yan Jurski} \author[M.~Sighireanu]{Mihaela Sighireanu} \keywords{dynamic networks, colored Petri nets, first-order logic, verification} \subjclass{E.1, F.3.1, F.4.1, F.4.3, I.2.2} \titlecomment{{\lsuper*}A shorter version of this paper has been published in the Proceedings of TACAS 2007, LNCS 4424.} \begin{abstract} We propose a framework for reasoning about unbounded dynamic networks of infinite-state processes. We introduce Constrained Petri Nets (${\sf CPN}$) as generic models for these networks. They can be seen as Petri nets where tokens (representing occurrences of processes) are colored by values over some potentially infinite data domain such as the integers or the reals. Furthermore, we define a logic, called ${\sf CML}$ (colored markings logic), for the description of ${\sf CPN}$ configurations. ${\sf CML}$ is a first-order logic over tokens that allows reasoning about their locations and their colors. Both ${\sf CPN}$s and ${\sf CML}$ are parametrized by a color logic for expressing constraints on the colors (data) associated with tokens. We investigate the decidability of the satisfiability problem of ${\sf CML}$ and its applications in the verification of ${\sf CPN}$s. We identify a fragment of ${\sf CML}$ for which the satisfiability problem is decidable (whenever this is the case for the underlying color logic), and which is closed under the computation of ${\sf post}$ and ${\sf pre}$ images for ${\sf CPN}$s.
These results can be used for several kinds of analysis such as invariance checking, pre-post condition reasoning, and bounded reachability analysis. \end{abstract} \maketitle \section{Introduction} \label{sect-intro} The verification of software systems requires in general the consideration of infinite-state models. The sources of infinity in software models are multiple. One of them is the manipulation of variables and data structures ranging over infinite domains (such as integers, reals, arrays, etc.). Another source of infinity is the fact that the number of processes running in parallel in the system can be either a parameter (fixed but arbitrarily large) or dynamically changing due to process creation. While the verification of parameterized systems requires reasoning uniformly about the infinite family of (static) networks corresponding to any possible number of processes, the verification of dynamic systems requires reasoning about the infinite number of all possible dynamically changing network configurations. There are many works and several approaches on the verification of infinite-state systems, taking into account either the aspects related to infinite data domains or the aspects related to unbounded network structures due to parametrization or dynamic creation of processes. Concerning systems with data manipulation, a lot of work has been devoted to the verification of, for instance, finite-structure systems with unbounded counters, clocks, stacks, queues, etc. (see, e.g., \cite{AJ,BEM97,WB98,Boigelot,AAB00,FS01,FL02}).
On the other hand, a lot of work has been done on the verification of parameterized and dynamic networks of Boolean (or finite-data domain) processes, proposing either exact model-checking and reachability analysis techniques for specific classes of systems (such as broadcast protocols, multithreaded programs, etc.) \cite{EN98,EFM99,DRB02,BT05,BMOT05}, or generic algorithmic techniques (which can be approximate, or not guaranteed to terminate) such as network invariants-based approaches \cite{WL89,CGJ97} and (abstract) regular model checking \cite{RMC,Bou01,SurveyRMC,BHV04}. However, only a few works consider both infinite data manipulation and parametric/dynamic network structures (see the paragraph on related work). In this paper, we propose a generic framework for reasoning about parameterized and dynamic networks of concurrent processes which can manipulate (local and global) variables over infinite data domains. Our framework is parameterized by a data domain and a first-order theory on it (e.g., Presburger arithmetic over natural numbers). It consists of (1) expressive models covering a wide class of systems, and (2) a logic for specifying and reasoning about the configurations of these models. The models we propose are called Constrained Petri Nets (${\sf CPN}$ for short). They are based on (place/transition) Petri nets where tokens are colored by data values. Intuitively, tokens represent different occurrences of processes, and places are associated with control locations and contain tokens corresponding to processes at the same control location. Since processes can manipulate local variables, each token (process occurrence) has several colors corresponding to the values of these variables. Then, configurations of our models are markings where each place contains a set of colored tokens, and transitions modify the markings as usual by removing tokens from some places and creating new ones in some other places.
Transitions are guarded by constraints on the colors of tokens before and after firing the transition. We show that ${\sf CPN}$s allow modeling various aspects such as unbounded dynamic creation of processes, manipulation of local and global variables over unbounded domains such as integers, synchronization, communication through shared variables, locks, etc. The logic we propose for specifying configurations of ${\sf CPN}$s is called Colored Markings Logic (${\sf CML}$ for short). It is a first-order logic over tokens and their colors. It allows reasoning about the presence of tokens in places, and also about the relations between the colors of these tokens. The logic ${\sf CML}$ is parameterized by a first-order logic over the color domain for expressing constraints on tokens. We investigate the decidability of the satisfiability problem of ${\sf CML}$ and its applications in the verification of ${\sf CPN}$s. While the logic is decidable for finite color domains (such as the Booleans), we show that, unfortunately, the satisfiability problem of this logic becomes undecidable as soon as we consider the color domain to be the set of natural numbers with the usual ordering relation (and without any arithmetical operations). We prove that this undecidability result holds already for the fragment $\forall^*\exists^*$ of the logic (in the alternation hierarchy of the quantifiers over token variables) with this color domain. On the other hand, we prove that the satisfiability problem is decidable for the fragment $\exists^*\forall^*$ of ${\sf CML}$ whenever the underlying color logic has a decidable satisfiability problem, e.g., Presburger arithmetic, the first-order logic of addition and multiplication over the reals, etc.
Moreover, we prove that the fragment $\exists^*\forall^*$ of ${\sf CML}$ is effectively closed under ${\sf post}$ and ${\sf pre}$ image computations (i.e., computation of immediate successors and immediate predecessors) for ${\sf CPN}$s where all transition guards are also in $\exists^*\forall^*$. We show also that the same closure results hold when we consider the fragment $\exists^*$ instead of $\exists^*\forall^*$. These generic decidability and closure results can be applied in the verification of ${\sf CPN}$ models following different approaches such as pre-post condition (Hoare triples based) reasoning, bounded reachability analysis, and inductive invariant checking. More precisely, we derive from the results mentioned above that (1) it is decidable whether, starting from an $\exists^*\forall^*$ pre-condition, a $\forall^*\exists^*$ condition holds after the execution of a transition, (2) the bounded reachability problem between two $\exists^*\forall^*$-definable sets is decidable, and (3) checking whether a formula defines an inductive invariant is decidable for Boolean combinations of $\exists^*$ formulas. These results can be used to deal with nontrivial examples of systems. Indeed, in many cases, program invariants and the assertions needed to establish them fall in the considered fragments of our logic. We illustrate this by carrying out in our framework the verification of several parameterized systems (including the examples usually considered in the literature, such as the Bakery mutual exclusion protocol~\cite{Lamport-74}). In particular, we provide an inductive proof of correctness for the parametric version of the Reader-Writer lock system introduced in~\cite{Flanagan-Freund-Qadeer-02}. Flanagan et al. give a proof of this case study for the case of one reader and one writer.
We consider here an arbitrarily large number of reader and writer processes and carry out (for the first time, to our knowledge) its verification by inductive invariant checking. We provide experimental results obtained for these examples using a prototype tool we have implemented based on our decision and verification procedures. \subsection*{Related work:} The use of unbounded Petri nets as models for parameterized networks of processes has been proposed in many existing works such as \cite{GS92,EN98,DRB02}. However, these works consider networks of {\em finite-state} processes and do not address the issue of manipulating infinite data domains. The extension of this idea to networks of infinite-state processes has been addressed in only a few works \cite{AJ98,Delzanno-01,BD02,AD06}. In \cite{AJ98}, Abdulla and Jonsson consider the case of networks of 1-clock timed systems and show, using the theory of well-structured systems and well-quasi-orderings \cite{AJ,FS01}, that the verification problem for a class of safety properties is decidable. Their approach has been extended in \cite{Delzanno-01,BD02} to a particular class of multiset rewrite systems with constraints (see also \cite{AD06} for recent developments of this approach). Our modeling framework is actually inspired by these works. However, while they address the issue of deciding the verification problem of safety properties (by reduction to the coverability problem) for specific classes of systems, we consider in our work a general framework that deals in a generic way with various classes of systems, in which the user can express assertions about the configurations of the system and check automatically that they hold (using pre-post reasoning and inductive invariant checking) or that they do not hold (using bounded reachability analysis).
Our framework allows reasoning automatically about systems which are beyond the scope of the techniques proposed in \cite{AJ98,Delzanno-01,BD02,AD06}, such as, for instance, the parameterized Reader-Writer lock system presented in this paper. In parallel to our work, Abdulla et al. developed in~\cite{ADHR-07,AHDR08} abstract backward reachability analysis for a restricted class of constrained multiset rewrite systems. Basically, they consider constraints which are Boolean combinations of universally quantified formulas, where data constraints are in the particular class of existentially quantified gap-order constraints. The abstraction they consider consists of taking, after each pre-image computation, the upward closure of the obtained set. This helps the iterative computation to terminate and yields an upper approximation of the backward reachability set. However, this abstract analysis can be too imprecise for some systems. In contrast, our approach allows carrying out pre-post reasoning, invariance checking, as well as bounded analysis, for a larger class of systems. Techniques like those used in~\cite{ADHR-07,AHDR08} could be integrated into our framework in the future in order to discover (local) invariants automatically. In a series of papers, Pnueli et al. developed an approach for the verification of parameterized systems combining abstraction and proof techniques (see, e.g., \cite{Arons:ParamVerWAutCompIndInv:01}). This is probably one of the most advanced existing approaches for dealing with unbounded networks of infinite-state processes. We propose here a different framework for reasoning about these systems. In \cite{Arons:ParamVerWAutCompIndInv:01}, the authors consider a logic on (parametric-bound) {\em arrays} of integers, and they identify a fragment of this logic for which the satisfiability problem is decidable.
In this fragment, they restrict the shape of the formula (quantification over indices) to formulas in the fragment $\exists^* \forall^*$, similarly to what we do, and also the class of arithmetical constraints used on indices and on the associated values. In a recent work by Bradley et al.~\cite{Bradley}, the satisfiability problem of the logic of unbounded arrays with any kind of element values is investigated, and the authors provide a new decidable fragment, which is incomparable to the one defined in \cite{Arons:ParamVerWAutCompIndInv:01}, but which again imposes similar restrictions on the quantifier alternation in the formulas and on the kind of constraints on indices that can be used. In contrast with these works, we consider a logic on \emph{multisets} of elements with any kind of associated data values, provided that the theory used on the data domain is decidable. For instance, we can use in our logic general Presburger constraints, whereas \cite{Arons:ParamVerWAutCompIndInv:01} allows limited classes of constraints. On the other hand, we cannot specify faithfully unbounded arrays in our decidable fragment because formulas of the form $\forall^* \exists^*$ are needed to express that every non-extremal element has a successor/predecessor. Nevertheless, for the verification of safety properties and invariant checking, expressing this fact is not necessary, and therefore it is possible to handle (model and verify) in our framework all usual examples of parameterized systems (such as mutual exclusion protocols) considered in the works cited above. Let us finally mention that there are recent works on logics (first-order logics, or temporal logics) over finite/infinite structures (words or trees) over infinite alphabets (which can be considered as abstract infinite data domains) \cite{AncaLICS06,AncaPODS06,DemriLICS06}.
The positive results obtained so far concern logics with very limited data domains (basically infinite sets with only equality, or sometimes with an ordering relation), and are based on reductions to complex problems such as reachability in Petri nets. \section{Colored Markings Logic} \label{sect-cml} \subsection{Preliminaries} Consider an enumerable set of \emph{tokens} and let us identify this set with the set of natural numbers $\mathbb{N}$. Intuitively, tokens represent occurrences of (parallel) processes. We assume that tokens may have colors corresponding, for instance, to data values attached to the corresponding processes. We consider that each token has $N$ colors, for some fixed natural number $N > 0$. Let $\mathbb{C}$ be a (potentially infinite) \emph{token color domain}. Examples of color domains are the set of natural numbers $\mathbb{N}$ and the set of real numbers $\mathbb{R}$. Also, we consider that tokens can be located at \emph{places}. Let $\mathbb{P}$ be a finite set of such places. Intuitively, places represent control locations of processes. An $N$-dim \emph{colored marking} is a mapping $M \in [\mathbb{N} \flc (\mathbb{P} \cup \{\bot\})\times \mathbb{C}^N]$ which associates with each token its place (or $\bot$ if it is undefined) and the values of its colors. Let $M$ be an $N$-dim colored marking, let $t \in \mathbb{N}$ be a token, and let $M(t) = (p,c_1,\ldots,c_N)$ $\in (\mathbb{P} \cup \{\bot\})\times \mathbb{C}^N$. Then, we consider that $\mathit{place}_M(t)$ denotes the element $p$, that $\mathit{color}_M(t)$ denotes the vector $(c_1,\ldots,c_N)$, and that for every $k\in\{1,\ldots,N\}$, $\mathit{color}_{M,k}(t)$ denotes the element $c_k$. We omit the subscript $M$ when it is clear from the context.
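For illustration, here is a small example of ours (not part of the development above): take $N=1$, $\mathbb{C}=\mathbb{N}$, and $\mathbb{P}=\{p,q\}$. A $1$-dim colored marking with three defined tokens is then

```latex
M(0) = (p,\, 3), \qquad M(1) = (p,\, 5), \qquad M(2) = (q,\, 5),
\qquad M(t) = (\bot,\, 0) \;\; \text{for all } t \geq 3.
```

Here $\mathit{place}_M(1)=p$, $\mathit{color}_{M,1}(1)=5$, and every token $t\geq 3$ lies in no place.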
\subsection{Colored Markings Logic (\texorpdfstring{${\sf CML}$}{CML})} The logic ${\sf CML}$ is parameterized by a (first-order) logic on the considered token color domain $\mathbb{C}$, ${\sf FO}(\mathbb{C},\Omega,\Xi)$, i.e., by the set of operations $\Omega$ and the set of basic predicates (relations) $\Xi$ allowed on $\mathbb{C}$. In the sequel, we omit all or some of the parameters of ${\sf CML}$ when their specification is not necessary. Let $T$ be a set of \emph{token variables} ranging over $\mathbb{N}$ (the set of tokens), let $C$ be a set of \emph{color variables} ranging over $\mathbb{C}$, and assume that $T \cap C = \emptyset$. Then, the set of terms of ${\sf CML}(\mathbb{C}^N,\Omega,\Xi)$ (called \emph{token color terms}) is given by the grammar: \[ t ::= z \; | \; \delta_k(x) \; | \; o(t_1, \ldots, t_n) \] where $z \in C$, $k\in\{1,\ldots,N\}$, $x \in T$, and $o \in \Omega$. Intuitively, the term $\delta_k(x)$ represents the $k$th color (data value) attached to the token associated with the token variable $x$. We denote by $\equiv$ the syntactic equality relation on terms. The set of \emph{formulas} of ${\sf CML}(\mathbb{C}^N,\Omega,\Xi)$ is given by: \[\varphi ::= \mathit{true} \; | \; x=y\; | \; p(x) \; | \; r(t_1, \ldots, t_m) \; | \; \neg \varphi \; | \; \varphi \vee \varphi \; | \; \exists z . \; \varphi \; | \; \exists x . \; \varphi \] where $x,y \in T$, $z \in C$, $p \in \mathbb{P} \cup \{ \bot \}$, and $r \in \Xi$. As usual, $\mathit{false}$, the boolean connectives such as conjunction ($\wedge$) and implication ($\imp$), and universal quantification $(\forall)$ can be defined in terms of $\mathit{true}$, $\neg$, $\vee$, and $\exists$. We also use $\exists x \in p . \; \varphi$ (resp. $\forall x \in p . \; \varphi$) as an abbreviation of the formula $\exists x . \; p(x) \wedge \varphi$ (resp. $\forall x . \; p(x) \imp \varphi$).
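To make the satisfaction relation defined next concrete, the following is a minimal executable sketch of ours (not part of the paper; all names are hypothetical). It represents a finite colored marking as a map from tokens to pairs (place, colors) and checks closed formulas of the shape $\exists \vv{x} \in \vv{p}.\; \forall \vv{y} \in \vv{q}.\; \phi$, with the quantifier-free body $\phi$ supplied as a Python predicate.

```python
from itertools import product

# A colored marking over a finite set of tokens: token -> (place, colors).
# None would play the role of the undefined place (bottom).
M = {
    0: ("p", (3,)),   # token 0 in place p with color 3
    1: ("p", (5,)),   # token 1 in place p with color 5
    2: ("q", (5,)),   # token 2 in place q with color 5
}

def tokens_in(marking, place):
    """Tokens currently located in the given place."""
    return [t for t, (pl, _) in marking.items() if pl == place]

def holds(marking, ex_places, univ_places, body):
    """Check the closed formula
         exists x1 in ex_places[0], x2 in ex_places[1], ...
         forall y1 in univ_places[0], y2 in univ_places[1], ...
         body(marking, xs, ys)
       where body is a quantifier-free Python predicate."""
    for xs in product(*(tokens_in(marking, p) for p in ex_places)):
        if all(body(marking, xs, ys)
               for ys in product(*(tokens_in(marking, q) for q in univ_places))):
            return True
    return False

# Example: exists x in p. forall y in p. delta_1(y) <= delta_1(x)
# ("some token of p carries a maximal first color among the tokens of p").
has_max = holds(M, ["p"], ["p"],
                lambda m, xs, ys: m[ys[0]][1][0] <= m[xs[0]][1][0])
```

Here `has_max` evaluates to `True` (token 1 carries the maximal color 5 in place `p`); replacing `<=` by `<` yields `False`, since no token strictly dominates itself.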
The notions of free/bound occurrences of variables in formulas and the notions of closed/open formulas are defined as usual in first-order logics. Given a formula $\varphi$, the set of free variables in $\varphi$ is denoted $\mathit{FV}(\varphi)$. In the sequel, we assume w.l.o.g. that in every formula, each variable is quantified at most once. We define a satisfaction relation between colored markings and ${\sf CML}$ formulas. For that, we need first to define the semantics of ${\sf CML}$ terms. Given valuations $\theta \in [T \flc \mathbb{N}]$, $\nu \in [C \flc \mathbb{C}]$, and a colored marking $M$, we define a mapping $\semdl{\cdot}_{M,\theta,\nu}$ which associates with each color term a value in $\mathbb{C}$: \begin{eqnarray*} \semdl{z}_{M,\theta,\nu} & = & \nu(z) \\ \semdl{\delta_k(x)}_{M,\theta,\nu} & = & \mathit{color}_{M,k}(\theta(x)) \\ \semdl{o(t_1, \ldots, t_n)}_{M,\theta,\nu} & = & o (\semdl{t_1}_{M,\theta,\nu}, \ldots, \semdl{t_n}_{M,\theta,\nu}) \end{eqnarray*} Then, we define inductively the satisfaction relation $\models_{\theta,\nu}$ between colored markings $M $ and ${\sf CML}$ formulas as follows: \begin{eqnarray*} M \models_{\theta,\nu} \mathit{true} & \; \; & \mbox{always} \\ M \models_{\theta,\nu} x=y & \; \mbox{iff} \; & \theta(x) = \theta(y) \\ M \models_{\theta,\nu} p(x) & \; \mbox{iff} \; & \mathit{place}_M(\theta(x)) = p \\ M \models_{\theta,\nu} r(t_1, \ldots, t_m) & \; \mbox{iff} \;& r(\semdl{t_1}_{M,\theta,\nu}, \ldots, \semdl{t_m}_{M,\theta,\nu}) \\ M \models_{\theta,\nu} \neg \varphi & \; \mbox{iff} \; & M \not\models_{\theta,\nu} \varphi \\ M \models_{\theta,\nu} \varphi_1 \vee \varphi_2 & \; \mbox{iff} \; & M \models_{\theta,\nu} \varphi_1 \; \mbox{or} \; M \models_{\theta,\nu} \varphi_2 \\ M \models_{\theta,\nu} \exists x . \; \varphi & \; \mbox{iff} \; & \exists t \in \mathbb{N} . \; M \models_{\theta[x \leftarrow t],\nu} \varphi \\ M \models_{\theta,\nu} \exists z . \; \varphi & \; \mbox{iff} \; & \exists c \in \mathbb{C} . 
\; M \models_{\theta,\nu[z \leftarrow c]} \varphi \end{eqnarray*} For every formula $\varphi$, we define $\semcro{\varphi}_{\theta,\nu}$ to be the set of colored markings $M$ such that $M \models_{\theta,\nu} \varphi$. A formula $\varphi$ is {\em satisfiable} iff there exist valuations $\theta$ and $\nu$ s.t. $\semcro{\varphi}_{\theta,\nu} \neq \emptyset$. The subscripts of $\models$ and $\semcro{\cdot}$ are omitted in the case of a closed formula. \subsection{Syntactical forms and fragments} \subsubsection{Prenex normal form:} A formula is in {\em prenex normal form} (PNF) if it is of the form $$ Q_1y_1Q_2 y_2 \ldots Q_m y_m . \; \varphi $$ where (1) $Q_1, \ldots, Q_m$ are (existential or universal) quantifiers, (2) $y_1, \ldots, y_m$ are variables in $T \cup C$, and $\varphi$ is a quantifier-free formula. It can be proved that for every formula $\varphi$ in ${\sf CML}$, there exists an equivalent formula $\varphi'$ in prenex normal form. \subsubsection{Quantifier alternation hierarchy:} We consider two families $\{\Sigma_n\}_{n\geq 0}$ and $\{\Pi_n\}_{n\geq 0}$ of fragments of ${\sf CML}$ defined according to the alternation depth of existential and universal quantifiers in their PNF: \begin{enumerate}[$\bullet$] \item Let $\Sigma_0 = \Pi_0$ be the set of formulas in PNF where all quantified variables are in $C$, \item For $n \geq 0$, let $\Sigma_{n+1}$ (resp. $\Pi_{n+1}$) be the set of formulas $ Q y_1\ldots y_m . \; \varphi $ in PNF where $y_1, \ldots, y_m \in T \cup C$, $Q$ is the existential (resp. universal) quantifier $\exists$ (resp. $\forall$), and $\varphi$ is a formula in $\Pi_n$ (resp. $\Sigma_n$). \end{enumerate} It is easy to see that, for every $n \geq 0$, $\Sigma_n$ and $\Pi_n$ are closed under conjunction and disjunction, and that the negation of a $\Sigma_n$ formula is a $\Pi_n$ formula and vice versa. For every $n \geq 0$, let $B(\Sigma_n)$ denote the set of all boolean combinations of $\Sigma_n$ formulas. 
Clearly, $B(\Sigma_n)$ subsumes both $\Sigma_n$ and $\Pi_n$, and is included in both $\Sigma_{n+1}$ and $\Pi_{n+1}$. \subsubsection{Special form:} \label{sect_special_form} The set of formulas in special form is given by the grammar: \[ \varphi ::= \mathit{true} \; | \; x = y \; | \; r(t_1,\ldots,t_n) \; | \; \neg \varphi \; | \; \varphi \vee \varphi \; | \; \exists z . \; \varphi \; | \; \exists x \in p . \; \varphi \] where $x,y \in T$, $z \in C$, $p \in \mathbb{P} \cup \{ \bot \}$, $r \in \Xi$, and $t_1,\ldots, t_n$ are token color terms. So, formulas in special form do not contain atoms of the form $p(x)$. It is not difficult to see that for every closed formula $\varphi$ in ${\sf CML}$, there exists an equivalent formula $\varphi'$ in special form. The transformation is based on the following fact: since variables are assumed to be quantified at most once in formulas, each formula $\exists x . \; \phi$ can be replaced by $\bigvee_{p \in \mathbb{P} \cup \{ \bot \} } \exists x \in p . \; \phi_{x,p}$ where $\phi_{x,p}$ is obtained by substituting in $\phi$ each occurrence of $p(x)$ by $\mathit{true}$, and each occurrence of $q(x)$, with $p \neq q$, by $\mathit{false}$. \subsubsection{Examples of properties expressible in \texorpdfstring{${\sf CML}$}{CML}:} The fact that ``the place $p$ is empty'' is expressed by the $\Pi_1$ formula $\forall x. \; \neg p(x)$. The fact that ``$p$ contains precisely one token'' is expressed by the $B(\Sigma_1)$ formula: $ (\exists x\in p. \; true) \land (\forall y,z\in p. \; y=z) $. The $\Pi_1$ formula $ \forall x,y\in p. \; x=y $ expresses the fact that $p$ contains at most one token. The properties above do not depend on the colors of the tokens. The following examples show that the number of tokens in a place can also be constrained by properties of the colors attached to tokens. Let us now consider the logic ${\sf CML}(\mathbb{N}, \{0\}, \{\leq\})$.
Then, the fact that ``$p$ contains an infinite number of tokens'' is implied by the $\Pi_2$ formula: $$\forall x\in p. \; \exists y\in p. \; \delta_1(x) < \delta_1(y) $$ Conversely, the fact that ``$p$ has a finite number of tokens'' is implied by the $\Sigma_2$ formula: $$ \exists x,y\in p. \;\forall z,u\in p. \; \delta_1(x)\le \delta_1(z)\le \delta_1(y) \land (\delta_1(z)=\delta_1(u) \Longrightarrow z=u) $$ \section{Satisfiability Problem: Undecidability} \label{sect-unsat} We show hereafter that the satisfiability problem of the logic ${\sf CML}$ is undecidable as soon as we consider formulas in $\Pi_2$, and this holds even for simple theories on colors. \begin{theorem} \label{thm-sat-undec} The satisfiability problem of the fragment $\Pi_2$ of ${\sf CML}(\mathbb{N}^2,\{0\},\{\leq\})$ is undecidable. \end{theorem} \begin{proof} The proof is by reduction from the halting problem of Turing machines. The idea is to encode a computation of a machine, seen as a sequence of tape configurations, using tokens with integer colors. Each token represents a cell in the tape of the machine at some computation step. Therefore, the token has two integer colors: its position in the tape, and the position of its configuration in the computation (the computation step). The place of a token identifies uniquely the letter stored in the associated cell, the control state of the machine in the computation step of the cell, and the position of the head. Then, it is possible to express using formulas in $\Pi_2$ that two consecutive configurations correspond indeed to a valid transition of the machine. Intuitively, this is possible because $\Pi_2$ formulas allow relating each cell at some configuration to the corresponding cell at the next configuration. Let us fix the notation used for Turing machines.
A Turing machine is defined by $M=(Q,\Gamma,B,q_0,q_f,\Delta)$ where $Q$ is its finite set of states, $\Gamma$ is the finite tape alphabet containing the default blank symbol $B$, $q_0,q_f \in Q$ are the initial and the final state, respectively, and $\Delta$, called the transition relation, is a subset of $Q\times \Gamma \times Q \times \Gamma \times \{L,R\}$. A configuration of the machine is given by a triple $(q,\mathcal{T},i)$ where $q\in Q$, $\mathcal{T} \in [\mathbb{N} \mapsto \Gamma]$ is the tape of cells identified by their position $j\in\mathbb{N}$ and storing a letter $\mathcal{T}(j)\in\Gamma$, and $i$ is the position of the head on the tape. A transition $(q,X,q',Y,d)\in\Delta$ relates two configurations $(q,\mathcal{T},i)$ and $(q',\mathcal{T}',i')$ iff (i) either $i'=i+1$ and $d=R$, or $i'=i-1$ and $d=L$; (ii) the machine reads $X$ at position $i$, i.e., $\mathcal{T}(i)=X$, and writes $Y$ at the same position, i.e., $\mathcal{T}'(i)=Y$; and (iii) at any other position $k$ different from $i$, the tapes $\mathcal{T}$ and $\mathcal{T}'$ are equal, i.e., $\forall k. \; k\neq i \implies \mathcal{T}(k)=\mathcal{T}'(k)$. The initial configuration of the machine is $(q_0,\mathcal{T}_0,0)$ where $\mathcal{T}_0$ is the tape with all cells containing the blank symbol $B$. Without loss of generality, we suppose that (a) the machine has no deadlocks, (b) the head never goes left when it is at position $0$, and (c) when the final state is reached, the machine loops in this state. We proceed now to the encoding of a computation that reaches the final state using a $\Pi_2$ formula of ${\sf CML}(\mathbb{N}^2,\{0\},\{\leq\})$. Instead of the generic names $\delta_1$ and $\delta_2$ for color functions, we use the more intuitive names $step$ and $cell$, respectively. A token $x$ with $step(x)=j$ and $cell(x)=i$ represents the $i^{th}$ cell of the $j^{th}$ configuration in a computation.
We define the set of places $\mathbb{P} = \Gamma \times \{\mathit{Head},\mathit{NotHead}\} \times Q$ and, for convenience, we denote members of $\mathbb{P}$ by strings, e.g., $\mathit{A\_Head\_q}$ with $A\in\Gamma$ and $q\in Q$. A token $x$ in a place named $\mathit{A\_Head\_q}$ encodes a cell labeled by the letter $A$ in a configuration where the head is at the position $cell(x)$ and the current state is $q$. Since in a given configuration the head and the control state have a unique occurrence, our encoding includes the property that, among all tokens that have the same $step$ color, there is only one token in a place containing $Head$ in its name. First, we encode the properties of tapes. For this, we introduce the shorthand notation $\mathtt{Head}(x)$, parametrized by a token variable $x$, expressing that the token represented by $x$ encodes a cell that carries the head, i.e., the name of its place has $\mathit{Head}$ as substring. $$\mathtt{Head}(x) = \bigvee_{q\in Q}\bigvee_{A\in\Gamma}\mathit{A\_Head\_q}(x)$$ The following $\Pi_2$ formula $\mathtt{Tapes}$ expresses that, for any tape $j$ in an infinite computation, any cell $i$ is represented by a unique token $x$ (conditions (3.1) and (3.2)), and there is exactly one token which represents the position of the head (conditions (3.3) and (3.4)). \begin{eqnarray} \mathtt{Tapes} & = & \forall i,j. \; \exists x. \; cell(x) =i \land step(x)=j\\ & \land & \forall x,y . \; (step(x)=step(y)\land cell(x)=cell(y))\implies x=y\\ & \land & \forall j. \; \exists x. \; step(x)=j \land \mathtt{Head}(x)\\ & \land & \forall x,y. \; (\mathtt{Head}(x) \land \mathtt{Head}(y) \land step(x)=step(y)) \implies x=y \end{eqnarray} Second, we encode the initial configuration using the following $B(\Sigma_1)$ formula: \begin{eqnarray*} \mathtt{Init} & = & \forall x. \; (step(x)=0 \land cell(x)>0) \implies \mathit{B\_NotHead\_q}_0(x)\\ & \land & \exists x.
\; step(x)=0 \land cell(x)=0 \land \mathit{B\_Head\_q}_0(x) \end{eqnarray*} Third, we encode the termination condition saying that, at some step, the computation reaches the final state: $$ \mathtt{Acceptance} = \exists x. \; \bigvee_{A\in \Gamma} \mathit{A\_Head\_q}_f(x) $$ Finally, we encode each transition, i.e., the condition defining when two successive configurations correspond to a valid transition in the machine. For this, we have to fix the token storing the head in the current configuration ($x$), the tokens to the left ($x_l$) and to the right ($x_r$) of the head in the current configuration, and the tokens in the next configuration having the same position as $x$, $x_l$, and $x_r$ ($x'$, $x_l'$, resp. $x_r'$). When this identification is done (see the left part of the implication), we have to decompose the global transition over all transitions $\delta\in\Delta$: $$ \mathtt{Trans} = \begin{array}[t]{l} \forall x,x_l,x_r . \; \forall x',x'_l,x'_r . \; \\ \phantom{\forall x,x_l,x_r} \left(\begin{array}{ll} & \mathtt{Head}(x) \\ \land & step(x)=step(x_l) \land step(x)=step(x_r) \\ \land & \lnot (\exists y. \; cell(x_l) < cell(y) < cell(x)) \\ \land & \lnot (\exists y. \; cell(x) < cell(y) < cell(x_r)) \\ \land & step(x) < step(x') \land \lnot (\exists y. \; step(x) < step(y) < step(x')) \\ \land & step(x')=step(x'_l) \land step(x')=step(x'_r) \\ \land & cell(x)=cell(x')\land cell(x_l)=cell(x'_l) \land cell(x_r)=cell(x'_r)\\ \end{array}\right) \\ \phantom{\forall x,x_l,x_r\forall x'} \implies \bigvee_{\delta\in \Delta} \mathtt{Trans}_\delta(x,x_l,x_r,x',x'_l,x'_r) \end{array} $$ where $\mathtt{Trans}_\delta$ relates its parameters according to the transition $\delta$.
For example, if the transition $\delta$ is of the form $(q,X,q',Y,L)$ (the case of the head moving right is symmetric), then we obtain the following $\Pi_1$ formula: $$ \mathtt{Trans}_\delta(x,x_l,x_r,x',x'_l,x'_r) = \begin{array}[t]{ll} & \mathit{X\_Head\_q}(x) \land \mathit{Y\_NotHead\_q}'(x') \\ \land & \bigwedge_{A\in \Gamma}(\mathit{A\_NotHead\_q}(x_l) \Longrightarrow \mathit{A\_Head\_q}'(x'_l)) \\ \land & \forall y,y'. \; \left(\begin{array}{ll} & y\neq x \land y \neq x_l\\ \land & y'\neq x' \land y' \neq x'_l \\ \land & step(y)=step(x)\\ \land & step(y')=step(x')\\ \land & cell(y)=cell(y') \end{array}\right)\implies \mathtt{Same}(y,y')\\ \end{array} $$ where the shorthand notation $\mathtt{Same}(y,y')$ stands for $$ \bigwedge_{A\in\Gamma,p\in Q} \mathit{A\_NotHead\_p}(y) \Leftrightarrow \mathit{A\_NotHead\_p}(y') $$ and expresses that the two tokens $y$ and $y'$ carry the same letter. Then, the $\mathtt{Trans}$ formula is in $B(\Sigma_1)$. The conjunction $\mathtt{Tapes} \land \mathtt{Init} \land \mathtt{Trans} \land \mathtt{Acceptance}$ is a $\Pi_2$ formula which is satisfiable iff there is an accepting run. This reduction shows the undecidability of satisfiability for the $\Pi_2$ fragment of ${\sf CML}(\mathbb{N}^2,\{0\},\{\le\})$. \end{proof} \section{Satisfiability problem: A Generic Decidability Result} \label{sect-sat} We prove in this section that the satisfiability problem for formulas in the fragment $\Sigma_2$ of ${\sf CML}$ is decidable whenever this problem is decidable for the underlying color logic. \begin{theorem} \label{thm-sat-dec} The satisfiability problem of the fragment $\Sigma_2$ of ${\sf CML}(\mathbb{C}^N,\Omega,\Xi)$, for any $N\ge 1$, is decidable provided that the satisfiability problem of ${\sf FO}(\mathbb{C},\Omega,\Xi)$ is decidable. \end{theorem} \begin{proof} The idea of the proof is to reduce the satisfiability problem of $\Sigma_2$ formulas to the satisfiability problem of $\Sigma_0$ formulas.
We proceed as follows: we prove first that the fragment $\Sigma_2$ has the small model property, i.e., every satisfiable formula $\varphi$ in $\Sigma_2$ has a model of a bounded size (where the size is the number of tokens in each place). This bound is actually the number of existentially quantified token variables in the formula. Notice that this fact does not lead directly to an enumerative decision procedure for the satisfiability problem since the number of models of a bounded size is infinite in general (due to infinite color domains). Then we use the fact that over a finite model, the universal quantifications in $\varphi$ can be transformed into finite conjunctions in order to build a formula $\widehat{\varphi}$ in $\Sigma_1$ which is satisfiable if and only if the original formula $\varphi$ is satisfiable. Actually, $\widehat{\varphi}$ defines precisely the upward-closure of the set of markings defined by $\varphi$ (w.r.t. the inclusion ordering between sets of colored markings, extended to vectors of places). Finally we show that the $\Sigma_1$ formula $\widehat{\varphi}$ is satisfiable if and only if the $\Sigma_0$ formula obtained by transforming existential quantification over tokens into existential quantification over colors is satisfiable. We define the size of a marking $M$ to be the number of tokens $x$ for which $place_M(x)\neq \bot$. A marking $M'$ is said to be a sub-marking of a marking $M$ if every token $x$ for which $place_{M'}(x) \neq \bot$ is mapped identically by $M$ and $M'$. We also define the upward closure of a set of markings $\mathcal{M}$ to be the set of all the markings that have a sub-marking in $\mathcal{M}$. First, we show the following lemma: \begin{lem} Let $\varphi$ be a $\Sigma_2$ closed formula $\varphi = \exists\vv{x} . \; \exists\vv{z}. \; \forall\vv{y} . \; \phi$ where $\vv{x}$ and $\vv{y}$ are token variables, $\vv{z}$ are color variables, and $\phi$ is a $\Sigma_0$ formula.
Then: \begin{enumerate}[\em(1)] \item $\varphi$ has a model iff it has a model of size less than or equal to $|\vv{x}|$. \item The upward closure of $\semcro{\varphi}$ w.r.t. the sub-marking ordering is effectively definable in $\Sigma_1$. \end{enumerate} \end{lem} \begin{proof} \emph{Point (1):} $(\Leftarrow)$ Immediate. \noindent $(\Rightarrow)$ Let $M$ be a model of $\varphi$. Then, there exists a vector of tokens $\vv{t} \subset\mathbb{N}$, a vector of colors $\vv{c}\subset \mathbb{C}$, and two mappings $\theta : \vv{x}\mapsto \vv{t}$ and $\nu : \vv{z}\mapsto\vv{c}$ such that $M \models_{\theta,\nu}\forall \vv{y}. \;\phi$. Any universally quantified formula that is satisfied by a marking is also satisfied by all its sub-markings (w.r.t. the inclusion ordering). In particular, we define $M'$ to be the sub-marking of $M$ that agrees only on tokens in $\vv{t}$. Then, we have $M'\models_{\theta,\nu}\forall \vv{y}. \; \phi$, and therefore $M'\models \exists\vv{x}. \;\exists\vv{z}. \;\forall \vv{y}. \;\phi$. Therefore, for the fragment $\Sigma_2$, every satisfiable formula $\varphi=\exists \vv{x}. \;\exists\vv{z}. \;\forall \vv{y}. \;\phi$ has a model of size less than or equal to $|\vv{x}|$. However, this fact does not imply the decidability of the satisfiability problem since the color domain is infinite. \noindent\emph{Point (2):} We show that for any formula $\varphi$ in $\Sigma_2$ there exists a formula $\widehat{\varphi}$ such that any model $M$ of $\varphi$ has a sub-marking $M'$ which is a model of $\widehat{\varphi}$, i.e., the upward closure of the set of models of $\varphi$ is given by the set of models of $\widehat{\varphi}$. Let $\Theta$ be the set of all (partial or total) mappings $\sigma$ from elements of $\vv{y}$ to elements of $\vv{x}$. Then, we have that any model $M$ of $\varphi$ is also a model of $\exists\vv{x}. \;\exists\vv{z}. \;\varphi^{(1)}$ where $$ \varphi^{(1)} = \bigwedge_{\sigma\in\Theta} \forall\vv{y}.
\;\Big( \big( (\bigwedge_{y\in dom(\sigma)} y=\sigma(y)) \land (\bigwedge_{y\not\in dom(\sigma)} \bigwedge_{x\in\vv{x}} y\ne x) \big)\Longrightarrow \varphi \Big) $$ This means that there exists a vector of tokens $\vv{t} \subset\mathbb{N}$, a vector of colors $\vv{c}\subset \mathbb{C}$, and two mappings $\theta : \vv{x}\mapsto \vv{t}$ and $\nu : \vv{z}\mapsto\vv{c}$ such that $M \models_{\theta,\nu}\varphi^{(1)}$. Consider now $M'$ to be the sub-marking of $M$ that agrees only on tokens in $\vv{t}$. Then, $M \models_{\theta,\nu}\varphi^{(1)}$ implies that: $$ M' \models_{\theta,\nu} \bigwedge\limits_{\substack{\sigma\in\Theta \\ dom(\sigma)=\vv{y}}} \forall\vv{y}. \;\big((\bigwedge_{y\in \vv{y}} y=\sigma(y)) \Longrightarrow \varphi\big) $$ which is equivalent to $M' \models \widehat{\varphi}$ with: $$ \widehat{\varphi} = \exists \vv{x} . \; \exists \vv{z} . \; \bigwedge\limits_{\substack{\sigma\in\Theta \\ dom(\sigma)=\vv{y}}} \varphi [\sigma(\vv{y})/\vv{y}] $$ By definition of $\widehat{\varphi}$, any of its minimal models is also a model of $\varphi$, and any model of $\varphi$ has a sub-marking that is a model of $\widehat{\varphi}$. \end{proof} A direct consequence of the lemma above is that it is possible to reduce the satisfiability problem from $\Sigma_2$ to $\Sigma_1$. To prove the main theorem, we have to show that the satisfiability problem of $\Sigma_1$ can be reduced to that of $\Sigma_0$. Let us consider a $\Sigma_1$ formula $\varphi= \exists\vv{x}. \;\phi$ with $\phi$ in $\Sigma_0$. We do the following transformations: (1) we eliminate token equality by enumerating all the possible equivalence relations for equality between the finite number of variables in $\vv{x}$, then (2) we eliminate formulas of the form $p(x)$ by enumerating all the possible mappings from a token variable $x$ to places in $\mathbb{P}$, and (3) we replace terms of the form $\delta_k(x)$ by fresh color variables. Let us describe these three transformations more formally.
\paragraph{\emph{Step 1:}} Let $\mathcal{B}(\vv{x})$ be the set of all possible equivalence relations for equality over the elements of $\vv{x}$: an element $e$ in $\mathcal{B}(\vv{x})$ is a mapping from $\vv{x}$ to a vector of variables $\vv{x}^{(e)}\subseteq\vv{x}$ that contains only one variable for each equivalence class. We define $\phi_e$ to be $\phi[\vv{x}^{(e)} / \vv{x}]$ where, after the substitution, each atomic formula that is a token equality is replaced by ``$\mathit{true}$'' if it is a trivial equality $x=x$ and by ``$\mathit{false}$'' otherwise. Clearly, $\varphi$ is equivalent to $$ \bigvee_{e\in \mathcal{B}(\vv{x})} \exists \vv{x}^{(e)}. \;\bigwedge_{i\neq j} (x^{(e)}_i\neq x^{(e)}_j) \land \phi_e $$ \paragraph{\emph{Step 2:}} Similarly, we eliminate from $\phi_e$ the occurrences of formulas $p(x)$. For a mapping $\sigma \in [\vv{x}^{(e)} \flc \mathbb{P}]$ and a variable $x$, $\sigma(x)(x)$ is a formula saying that the variable $x$ is in the place $\sigma(x)$. We use the notation $\sigma(\vv{x})(\vv{x})$ instead of $\bigwedge_i \sigma(x_i)(x_i)$. Again, for each value of $\sigma$ and $e$ we define $\phi_{e,\sigma}$ to be $\phi_e$ where each atomic sub-formula $p(x)$ is replaced by ``$\mathit{true}$'' if $\sigma(x)=p$ and by ``$\mathit{false}$'' otherwise. Then, we obtain an equivalent formula $\varphi_{=,p}$: $$ \bigvee_{e\in \mathcal{B}(\vv{x})} \;\; \exists\vv{x}^{(e)}. \; \bigwedge_{i\neq j} (x_i^{(e)}\neq x^{(e)}_j) \land \bigvee_{\sigma \in [\vv{x}^{(e)} \flc \mathbb{P}]}\sigma(\vv{x}^{(e)})(\vv{x}^{(e)}) \land \phi_{e,\sigma}$$ where sub-formulas $\phi_{e,\sigma}$ do not contain any atoms of the form $x^{(e)}_i=x^{(e)}_j$ or $p(x^{(e)}_i)$. Still, $\phi_{e,\sigma}$ is not a $\Sigma_0$ formula, because it contains terms of the form $\delta_k(x)$. \paragraph{\emph{Step 3:}} For each coloring symbol $\delta_k$ and each token variable $x\in\vv{x}^{(e)}$, we define a color variable $s_{k,x}$.
Let $\vv{s}^{(e)}$ be a vector containing all such color variables for each variable in $\vv{x}^{(e)}$. Then the formula $\varphi_{=,p}$ is satisfiable iff the following $\Sigma_0$ formula is satisfiable: $$ \bigvee_{e\in \mathcal{B}(\vv{x})} \;\; \exists \vv{s}^{(e)}. \; \bigvee_{\sigma \in [\vv{x}^{(e)} \flc \mathbb{P}]} \phi_{e,\sigma}[s_{k,x}/\delta_k(x)]_{1\le k\le N, x \in \vv{x}^{(e)}} $$ Therefore, the satisfiability problem of $\Sigma_2$ can be reduced to the satisfiability problem of $\Sigma_0$, which is decidable by hypothesis. \end{proof} \paragraph{\bf Complexity:} From the last part of the proof, it follows that the satisfiability problem of a $\Sigma_1$ formula can be reduced in NP time to the satisfiability problem of a formula in the color logic ${\sf FO}(\mathbb{C},\Omega,\Xi)$. Indeed, in \emph{Step 1} an equivalence relation between the existentially quantified variables $\vv{x}$ is guessed and in \emph{Step 2} a place in $\mathbb{P}$ for the representative of each equivalence class is guessed, and given these guesses, a $\Sigma_0$ formula of linear size (w.r.t. the size of the original $\Sigma_1$ formula) is built. From the first part of the proof, it follows that the reduction from the satisfiability problem of a $\Sigma_2$ formula to the satisfiability of a $\Sigma_1$ formula is in general exponential. More precisely, if $\varphi=\exists\vv{x}. \;\exists\vv{z}. \;\forall \vv{y}. \;\phi$ is a $\Sigma_2$ formula, then the equi-satisfiable $\Sigma_1$ formula $\widehat{\varphi}$ is of size $O(|\vv{x}|^{|\vv{y}|} |\varphi|)$. Therefore, the reduction of a $\Sigma_2$ formula to an equi-satisfiable formula in $\Sigma_0$ is in NEXPTIME. If the number of universally quantified variables (i.e., $|\vv{y}|$) is fixed, the reduction to an equi-satisfiable $\Sigma_1$ formula $\widehat{\varphi}$ becomes polynomial in the number of existentially quantified variables (i.e., $|\vv{x}|$).
Then, in this case, the complexity of the reduction from $\Sigma_2$ formulas to equi-satisfiable $\Sigma_0$ formulas is in NP. \section{Constrained Petri Nets} We introduce hereafter models for networks of processes based on multiset rewriting systems with data. A \emph{Constrained Petri Net} (${\sf CPN}$) over the logic ${\sf CML}(\mathbb{C}^N,\Omega,\Xi)$ is a tuple $S = (\mathbb{P}, \Delta)$ where $\mathbb{P}$ is a finite set of places used in ${\sf CML}$, and $\Delta$ is a finite set of \emph{constrained transitions} of the form: \begin{equation} p_1, \ldots, p_n \; \hookrightarrow \; q_1, \ldots, q_m \; : \; \varphi \label{rule-eqn} \end{equation} where $p_i,q_j\in\mathbb{P}$ for all $i\in \{1,\ldots,n\}$ and $j\in\{1,\ldots,m\}$, and $\varphi$ is a ${\sf CML}(\mathbb{C}^N,\Omega,\Xi)$ formula called the \emph{transition guard} such that (1) $FV(\varphi) = \{x_1,\ldots,x_n\}\cup\{y_1,\ldots,y_m\}$, and (2) all occurrences of variables $y_j$ in $\varphi$, for any $j \in \{1,\ldots, m\}$, are in terms of the form $\delta_k(y_j)$, for some $k \in \{1, \ldots, N \}$. Configurations of ${\sf CPN}$s are colored markings. Intuitively, the application of a constrained transition to a colored marking $M$ (leading to a colored marking $M'$) consists in (1) deleting the tokens represented by the variables $x_i$ from the corresponding places $p_i$, and in (2) creating the tokens represented by the variables $y_j$ in the places $q_j$, provided that the formula $\varphi$ is satisfied. The formula $\varphi$ expresses constraints on the tokens in the marking $M$ (especially on the tokens which are deleted) as well as constraints on the colors of the created tokens (relating these colors to those of the tokens in $M$).
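To illustrate this intuition concretely, the following Python sketch (not part of the formal development) applies a constrained transition to an explicitly represented colored marking. The ${\sf CML}$ guard is simplified here to a Python function that inspects the colors of the deleted tokens and returns the colors of the created tokens; the names \lstinline{fire}, \lstinline{pre}, and \lstinline{post} are ours.

```python
from itertools import permutations

def fire(marking, pre, post, guard):
    """Apply one constrained transition to a colored marking.

    marking : dict token -> (place, colors), colors a tuple in C^N
    pre     : input places p_1..p_n (one token deleted from each)
    post    : output places q_1..q_m (one token created in each)
    guard   : simplified stand-in for the CML guard: given the tuple of
              color vectors of the deleted tokens, returns None if the
              guard is violated, or the list of color vectors of the
              created tokens otherwise.
    Returns all successor markings reachable by this transition.
    """
    successors = []
    fresh = max(marking, default=-1) + 1
    # choose pairwise-distinct tokens t_1..t_n with place(t_i) = p_i
    for toks in permutations(marking, len(pre)):
        if any(marking[t][0] != p for t, p in zip(toks, pre)):
            continue
        created = guard(tuple(marking[t][1] for t in toks))
        if created is None:
            continue  # guard not satisfied for this choice of tokens
        succ = {t: v for t, v in marking.items() if t not in toks}
        for j, (q, c) in enumerate(zip(post, created)):
            succ[fresh + j] = (q, tuple(c))
        successors.append(succ)
    return successors

# A token of color (5,) moves from place 'p' to 'q', incrementing its color.
m0 = {0: ('p', (5,))}
succs = fire(m0, ['p'], ['q'], lambda cs: [(cs[0][0] + 1,)])
```

Here \lstinline{fire} returns all immediate successors of the marking w.r.t. a single transition, i.e., its contribution to the ${\sf post}$ image.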
Formally, given a ${\sf CPN}$ $S$, we define a transition relation $\flc_S$ between colored markings as follows: for every two colored markings $M$ and $M'$, we have $M \flc_S M'$ iff there exists a constrained transition of the form (\ref{rule-eqn}), and there exist tokens $ t_1, \ldots, t_n$ and $ t'_1, \ldots, t'_m$ s.t. $\forall i, j \in \{1, \ldots, n\} . \; i \neq j \imp t_i \neq t_j$, and $\forall i, j \in \{1, \ldots, m\} . \; i \neq j \imp t'_i \neq t'_j$, and \begin{enumerate}[(1)] \item $\forall i \in \{1, \ldots, n\} . \; place_M(t_i) = p_i$ and $place_{M'}(t_i) = \bot$, \item $\forall i \in \{1, \ldots, m\} . \; place_M(t'_i) = \bot$ and $place_{M'}(t'_i) = q_i $, \item $\forall t \in \mathbb{N}$, if $\forall i \in \{1, \ldots, n\} . \; t \neq t_i $ and $\forall j \in \{1, \ldots, m \} . \; t \neq t'_j$, then $M(t) = M'(t)$, \item $M \models_{\theta,\nu_\emptyset} \varphi[\mathit{color}_{M',k}(t'_j)/\delta_k(y_j)]_{1\le k\le N,1\le j\le m}$, where $\theta \in [T \flc \mathbb{N}]$ is a valuation of token variables such that $\forall i \in \{1, \ldots, n \} . \; \theta(x_i) = t_i$, and $\nu_\emptyset$ is the empty domain valuation of color variables. \end{enumerate} Given a colored marking $M$ let ${\sf post}_S(M) = \{ M' \; : \; M \rightarrow_S M' \}$ be the set of all immediate successors of $M$, and let ${\sf pre}_S(M) = \{ M' \; : \; M' \rightarrow_S M \}$ be the set of all immediate predecessors of $M$. These definitions can be generalized straightforwardly to sets of markings. Given a set of colored markings $\mathcal{M}$, let $\widetilde{pre}_S(\mathcal{M}) = \overline{{\sf pre}_S(\overline{\mathcal{M}})}$, where $(\overline{\, \cdot \,})$ denotes complementation (w.r.t. the set of all colored markings). Given a fragment $\Theta$ of ${\sf CML}$, we denote by ${\sf CPN}[\Theta]$ the class of ${\sf CPN}$ where all transition guards are formulas in the fragment $\Theta$. 
Due to the (un)decidability results of Sections~\ref{sect-unsat} and~\ref{sect-sat}, we focus in the sequel on the classes ${\sf CPN}[\Sigma_2]$ and ${\sf CPN}[\Sigma_1]$. \section{Modeling Power of \texorpdfstring{${\sf CPN}$}{CPN}} \label{sect-modeling} We show in this section how constrained Petri nets can be used to model (unbounded) dynamic networks of parallel processes. We assume that each process is defined by an extended automaton, i.e., a finite-control state machine supplied with variables and data structures ranging over potentially infinite domains (such as integer variables, reals, etc.). Processes running in parallel can communicate and synchronize using various kinds of mechanisms (rendez-vous, shared variables, locks, etc.). Moreover, they can dynamically spawn new (copies of) processes in the network. More precisely, let $\mathcal{Q}$ be the finite set of control locations of the extended automata, and let $\vv{l}=(l_1,\ldots,l_N)$ and $\vv{g}=(g_1,\ldots,g_G)$ be the sets of local and global variables, respectively, manipulated by these automata. Transitions between control locations are labeled by actions which combine (1) tests over the values of local/global variables, (2) assignments of local/global variables, (3) creation of a new process in a control location, and (4) synchronization (e.g., CCS-like rendez-vous, locks, priorities, etc.). Tests over variables are first-order assertions based on a set of predicates $\Xi$. Variables are assigned expressions built from local and global variables using a set of operations $\Omega$. \begin{exa} \label{ex:RW-EA} Reader-writer is a classical synchronization scheme used in operating systems and other large-scale systems. It allows processes to work (read and write) on shared data. Reader processes may read the data in parallel, but they are mutually exclusive with writers. Writer processes can only work in exclusive mode with respect to all other processes.
A \emph{reader-writer lock} is used to implement this kind of synchronization for any number of readers and writers. For this, readers have to acquire the lock in \emph{read mode} and writers in \emph{write mode}. Let us consider the program proposed in~\cite{Flanagan-Freund-Qadeer-02}, which uses the reader-writer lock as shown in Table~\ref{tab:RW-use}. It consists of several \lstinline{Reader} and \lstinline{Writer} processes. The code of each process is given in Table~\ref{tab:RW-use}. (To keep the example readable, we omit the processes spawning the readers and writers.) The program uses a global reader-writer lock variable \lstinline{l} and a global variable \lstinline{x} representing the shared data. Each \lstinline{Reader} process has a local variable \lstinline{y}. Moreover, each process has a unique identifier represented by the \lstinline{_pid} local variable. Let us assume that \lstinline{x}, \lstinline{y}, and \lstinline{_pid} are of integer type. \lstinline{Writer} processes change the value of the global variable \lstinline{x} after acquiring the lock in write mode. \lstinline{Reader} processes set their local variable \lstinline{y} to a value depending on \lstinline{x} after acquiring the lock in read mode.
\begin{table} \begin{center} \begin{tabular}{lp{3ex}l} \begin{lstlisting}[language=Java] process Writer: 1: l.acq_write(_pid); 2: x = g(x); 3: l.rel_write(_pid); 4: \end{lstlisting} & & \begin{lstlisting}[language=Java] process Reader: 1: l.acq_read(_pid); 2: y = f(x); 3: l.rel_read(_pid); 4: \end{lstlisting} \end{tabular} \end{center} \caption{Example of a program using a reader-writer lock.} \label{tab:RW-use} \end{table} \begin{figure} \caption{Extended automata model for the program in Table~\ref{tab:RW-use}.} \label{fig:RW-ea} \end{figure} \end{exa} Then, the extended automata model for the program in Table~\ref{tab:RW-use} is obtained by associating a control location with each line of the program and by labeling the transitions between control locations with the statements of the program. The extended automata model is provided in Figure~\ref{fig:RW-ea}. \medskip We show hereafter how to build a ${\sf CPN}$ model for a network of extended automata as described above. The logic of markings used by the ${\sf CPN}$ model is ${\sf CML}(\mathbb{C}^N,\Omega,\Xi)$ where $N\ge 1$ is the (maximal) number of local variables of each process. With each control location in $\mathcal{Q}$ and each global variable in $\vv{g}$ is associated a unique place in $\mathbb{P}$. Then, each running process is represented by a token, and in every marking, the place associated with the control location $q\in\mathcal{Q}$ contains precisely the tokens representing the processes which are at the control location $q$. The value of a local variable $l_i$ of a process represented by a token $t$ is given by $\delta_i(t)$. For global variables which are scalar, the associated place in $\mathbb{P}$ (for convenience, we use the same name for the place and the global variable) contains a single token whose first color stores the current value of the global variable.
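To make this encoding concrete, here is a small, purely illustrative Python representation of a colored marking for the reader-writer program: one place per control location and per global variable, one token per running process carrying the colors \lstinline{(_pid, y)}, and a single token in the place of a scalar global variable whose first color stores its value. The helper names are ours.

```python
from collections import Counter

# A colored marking as a multiset of (place, colors) pairs.
marking = Counter({
    ('r1', (1, 0)): 1,   # Reader with _pid 1 at location r1, y = 0
    ('r1', (2, 0)): 1,   # Reader with _pid 2 at location r1, y = 0
    ('w1', (3, 0)): 1,   # Writer with _pid 3 at location w1
    ('x',  (42,)): 1,    # global x: a single token, first color = value 42
})

def value_of_global(marking, g):
    """Read the unique token in the place of a scalar global variable."""
    [(_, colors)] = [t for t in marking.elements() if t[0] == g]
    return colors[0]

def tokens_at(marking, place):
    """Number of tokens at a control location (processes at that line)."""
    return sum(n for (p, _), n in marking.items() if p == place)
```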
Global variables representing parametric-size collections may also be modeled by a place storing for each element of the collection a token whose first color gives the value of the element. However, we cannot express in the decidable fragment $\Sigma_2$ of ${\sf CML}$ the fact that a multiset indeed encodes an array of elements indexed by integers in some given interval. The reason is that, while we can express in $\Pi_1$ the fact that each token has a unique color in the interval, we need to use $\Pi_2$ formulas to say that for each color in the interval there exists a token with that color. Nevertheless, for the verification of safety properties and for checking invariants, it is not necessary to require the latter property. The set of constrained transitions of the ${\sf CPN}$ associated with the network are obtained using the following general rules: \paragraph{\textit{Test:}} A process action $q \by{\varphi(\vv{l},\vv{g})} q'$ where $\varphi$ is a ${\sf FO}(\mathbb{C},\Omega,\Xi)$ formula, is modeled by: $$ q,g_1,\ldots,g_G \; \hookrightarrow \; q',g_1,\ldots,g_G \; : \; \varphi\eta \land \bigwedge^{G+1}_{i=1} \varphi_{id}(i) $$ where $\eta$ is the substitution $[\delta_k(x_1)/l_k]_{1\le k\le N} [\delta_1(x_{k+1})/g_k]_{1\le k \le G}$, and $$ \varphi_{id}(i) = \bigwedge^{N}_{j=1} \delta_j(y_i)=\delta_j(x_i) $$ \paragraph{\textit{Assignment:}} A process action $q \by{(\vv{l},\vv{g}) := \vv{t}(\vv{l},\vv{g})} q'$ where $\vv{t}$ is a vector of $N+G$ $\Omega$-terms, is modeled by: $$ q,g_1,\ldots,g_G \; \hookrightarrow \; q',g_1,\ldots,g_G \; : \; \bigwedge^{N}_{i=1}\delta_i(y_1)=t_i\eta ~\land~ \bigwedge^{G}_{j=1}\delta_1(y_{j+1})=t_{N+j}\eta $$ where $\eta$ is the substitution defined in the previous case. In the modeling above, we consider that the execution of the process action is atomic. 
When tests and assignments are not atomic, we must transform each of them into a sequence of atomic operations: first read the global variables and assign their values to local variables, then compute locally the new values to be assigned/tested, and finally, assign/test these values. \paragraph{\textit{Process creation:}} An action spawning a new process $q \by{{\sf spawn}(q_0)}q'$ is modeled using a transition which creates a new token in the initial control location $q_0$ of the new process: $$ q \hookrightarrow q', q_0 \; : \; \varphi_{id}(1) \land \varphi_0 $$ where $\varphi_0$ is $\bigwedge^{N}_{i=1} \delta_i(y_2)=null$ with $null$ the default initial value for local variables. Moreover, it is possible to associate with each newly created process an identity, classically defined by a positive integer. For that, let us consider that the first color $\delta_1$ gives the identity of the process represented by the token. To ensure that different processes have different identities, we express in the guard of every transition which creates a process the fact that the identity of this process does not already occur among the tokens in the places corresponding to control locations. This can easily be done using a universally quantified ($\Pi_1$) formula. Therefore, a spawn action $q \by{{\sf spawn}(q_0)}q'$ is modeled by: $$ q \hookrightarrow q', q_0 \; : \; \varphi_{id}(1) \land \varphi_0' $$ where $$\varphi_0' = \bigwedge^{N}_{i=2} \delta_i(y_2)=null ~\land~ \bigwedge_{\ell \in \mathcal{Q}} \forall t \in \ell . \; \neg (\delta_1(y_2)=\delta_1(t)) $$ The modeling of other actions (such as local/global variable assignments/tests) can be modified accordingly in order to propagate the process identity through the transition. Notice that process identities are different from token values.
Indeed, in some cases (e.g., for modeling value passing as described further in this section), we may use different tokens (at some special places representing buffers, for instance) having the same identity $\delta_1$. \paragraph{\textit{Synchronization using locks:}} Locks can simply be modeled using global variables storing the identity of the owner process, or a special value (e.g., $-1$) if the lock is free. A process that acquires the lock must check that it is free, and then write its identity: $$ q, lock \hookrightarrow q', lock ~:~ \delta_1(x_2) = -1 \land \delta_1(y_2) = \delta_1(x_1) \land ... $$ To release the lock, a process assigns $-1$ to the lock, which can be modeled in a similar way. Other kinds of locks, such as reader-writer locks, can also be modeled in our framework, as we show in the following example. \begin{exa} \label{ex:RW-CPN} Let us consider the extended automaton using the reader-writer lock given in Figure~\ref{fig:RW-ea}. For each of its states we introduce a place (e.g., place $r3$ for state \lstinline{r3}). For the scalar global variable \lstinline{x}, we create a place $x$ containing a single token. The global variable representing the reader-writer lock is modeled following the classical implementation~\cite{BenAri-05}, which uses two variables: \begin{enumerate}[$\bullet$] \item a global \emph{integer} \lstinline{w} to store the identifier of the process holding the lock in write mode, or $-1$ if no such process exists (process identifiers are supposed to be positive integers), and \item a global \emph{set of integers} \lstinline{r} to represent the processes holding the lock in read mode. \end{enumerate} Acquire (\lstinline{acq_read}, \lstinline{acq_write}) and release (\lstinline{rel_read}, \lstinline{rel_write}) operations access the variables \lstinline{w} and \lstinline{r} atomically. Then, we introduce a place $w$ (containing a single token) for the scalar global variable \lstinline{w}.
For the global set variable \lstinline{r}, we introduce a place which contains a token for each \lstinline{Reader} process owning the lock. Consequently, we need two colors for each token in the system: $\delta_1$ to store the identity of processes, and $\delta_2$ to store the local variable \lstinline{y} for tokens representing \lstinline{Reader} processes and the value of the global variables \lstinline{w} and \lstinline{x} for tokens in the places $w$ and $x$, respectively. Therefore, the ${\sf CPN}$ model obtained is defined over the logic ${\sf CML}(\mathbb{N}^2, \{0, f,g\}, \{\le\})$, its set of places is $\mathbb{P}=\{r1,r2,r3,r4,w1,w2,w3,w4,r,w,x\}$, and its transition set $\Delta$ is given in Table~\ref{tab:RW}. This model belongs to the class ${\sf CPN}[\Pi_1]$. \begin{table*} \begin{center} \begin{cmrs} w_1: & \cmrsrule{w1,w}{w2,w}{ \begin{array}[t]{l} \lnot(\exists z\in r . \; true) \land \delta_2(x_2)< 0 \land \delta_2(y_2)=\delta_1(x_1) \land \\ \delta_1(y_2)=\delta_1(x_2) \land \varphi_{id}(1) \end{array}} \\ w_2: & \cmrsrule{w2, x}{w3, x}{\delta_2(y_2) = g(\delta_2(x_2)) \land \delta_1(y_2)=\delta_1(x_2) \land \varphi_{id}(1)} \\ w_3: & \cmrsrule{w3, w}{w4, w}{ \delta_2(x_2)=\delta_1(x_1) \land \delta_2(y_2)=-1 \land \delta_1(y_2)=\delta_1(x_2) \land \varphi_{id}(1) } \\ ~\\ r_1: & \cmrsrule{r1}{r2, r}{ (\forall z\in w. \; \delta_2(z)<0) \land \delta_1(y_2)=\delta_1(x_1) \land \varphi_{id}(1)} \\ r_2: & \cmrsrule{r2, x}{r3, x}{\delta_2(y_1)= f(\delta_2(x_2)) ~\land~ \delta_1(x_1)=\delta_1(y_1) ~\land~\varphi_{id}(2)} \\ r_3: & \cmrsrule{r3, r}{r4}{ \delta_1(x_1)=\delta_1(x_2) \land \varphi_{id}(1)} \\ \end{cmrs} \end{center} \caption{${\sf CPN}$ model of the reader-writer program.} \label{tab:RW} \end{table*} \end{exa} \paragraph{\textit{Value passing, return values:}} Processes may pass/wait for values to/from other processes with specific identities. For that, they can use shared arrays of data indexed by process identities.
Such an array $A$ can be modeled in our framework using a special place containing one token per process. Initially, this place is empty, and whenever a new process is created, a token with the same identity is added to this place. Then, to model that a process reads/writes $A[k]$, we use a transition which takes from the place associated with $A$ the token whose color $\delta_1$ is equal to $k$, reads/modifies the value attached to this token, and puts the token back in the same place. For instance, an assignment action $q \by{A[k] := e} q'$ executed by some process is modeled by the transition: $$ q, A \hookrightarrow q', A \; : \; \delta_1(x_2)=k ~\land~ \delta_2(y_2)=e ~\land~ \delta_1(y_2)=\delta_1(x_2) ~\land~ \varphi_{id}(1) $$ \paragraph{\textit{Rendez-vous synchronization:}} Synchronization between a finite number of processes can be modeled as in Petri nets. In addition, ${\sf CPN}$s make it possible to put constraints on the colors (data) of the involved processes. \paragraph{\textit{Priorities:}} Various notions of priority, such as priorities between different classes of processes (defined by properties of their colors), or priorities between different actions, can be modeled in ${\sf CPN}$s. This can be done by imposing in transition guards that transitions (performed by processes or corresponding to actions) of higher priority are not enabled. These constraints can be expressed using $\Pi_1$ formulas. In particular, checking that a place $p$ is empty can be expressed by $\forall x . \; \lnot p(x)$. (This shows that, as soon as universally quantified formulas are allowed in guards, our models are as powerful as Turing machines, even for color logics over finite domains.) \section{Computing Post and Pre Images} \label{sec:postpre} We address in this section the problem of characterizing in ${\sf CML}$ the immediate successors/predecessors of ${\sf CML}$-definable sets of colored markings.
\begin{theorem} \label{thm-reach} Let $S$ be a ${\sf CPN}[\Sigma_n]$, for $n \in \{1, 2 \}$. Then, for every ${\sf CML}$ closed formula $\varphi$ in the fragment $\Sigma_n$, the sets ${\sf post}_S (\semcro{\varphi})$ and ${\sf pre}_S (\semcro{\varphi})$ are effectively definable by ${\sf CML}$ formulas in the same fragment $\Sigma_n$. \end{theorem} \begin{proof} Let $\varphi$ be a closed formula, and let $\tau$ be a transition $\vv{p} \hookrightarrow \vv{q} : \psi$ of the system $S$. W.l.o.g., we suppose that $\varphi$ and $\psi$ are in special form (see the definition in Section~\ref{sect_special_form}). Moreover, we suppose that the variables in $\vv{x}$ and $\vv{y}$ introduced by $\tau$ have fresh names, i.e., different from those of the variables quantified in $\varphi$ and $\psi$. We define hereafter the formulas $\varphi_{\sf post} = {\sf post}_S (\semcro{\varphi})$ and $\varphi_{\sf pre} = {\sf pre}_S (\semcro{\varphi})$ for this single transition. The generalization to the set of all transitions is straightforward. The construction of the formulas $\varphi_{\sf post}$ and $\varphi_{\sf pre}$ is not trivial because our logic allows neither quantification over places nor quantification over color mappings in $[\mathbb{N}\flc\mathbb{C}]$. Intuitively, the idea is first to express the effect of deleting/adding tokens, and then to compose these operations in order to compute the effect of a transition. Let us introduce two transformations $\ominus$ and $\oplus$ corresponding to the deletion and creation of tokens. These operations are defined inductively on the structure of special form formulas in Tables~\ref{tab:ominus} and~\ref{tab:oplus}. The operation $\ominus$ is parameterized by a vector $\vv{z}$ of token variables to be deleted, a mapping $\mathtt{loc}$ associating with the token variables in $\vv{z}$ the places from which they will be deleted, and a mapping $\mathtt{col}$ associating with each token variable in $\vv{z}$ and each $k\in\{1,\ldots,N\}$ a fresh color variable in $C$.
Intuitively, $\ominus$ projects a formula on all variables in $\vv{z}$. Rule $\ominus_2$ substitutes in a color formula $r(\vv{t})$ all occurrences of colored tokens in $\vv{z}$ by the fresh color variables given by the mapping $\mathtt{col}$. A formula $x=y$ is unchanged by the application of $\ominus$ if the token variables $x$ and $y$ are not in $\vv{z}$; otherwise, rule $\ominus_3$ replaces $x=y$ by ``$\mathit{true}$'' if it is trivially true (i.e., we have the same variable on both sides of the equality) or by ``$\mathit{false}$'' if $x$ (or $y$) is in $\vv{z}$. Indeed, each token variable in $\vv{z}$ represents (by the semantics of ${\sf CPN}$) a different token, and since this token is deleted by the transition rule, it cannot appear in the reached configuration. Rules $\ominus_4$ and $\ominus_5$ are straightforward. Finally, rule $\ominus_6$ performs a case split according to whether a deleted token is precisely the one referenced by the existential token quantification or not. The operation $\oplus$ is parameterized by a vector $\vv{z}$ of token variables to be added and a mapping $\mathtt{loc}$ associating with each variable in $\vv{z}$ the place in which it will be added. Intuitively, $\oplus$ transforms a formula taking into account that the tokens added by the transition were not present in the previous configuration (and therefore not constrained by the original formula describing the configuration before the transition). Then, the application of $\oplus$ has no effect on color formulas $r(\vv{t})$ (rule $\oplus_2$). When equality of tokens is tested, rule $\oplus_3$ takes into account that all added tokens are distinct and different from the existing tokens. For token quantification, rule $\oplus_6$ says that the quantified tokens of the previous configuration cannot be equal to the added tokens.
\newcounter{ominusrule} \def\omrule{\addtocounter{ominusrule}{1}\ominus_{\arabic{ominusrule}}:} \begin{table*} \begin{center} $ \begin{array}{lrcl} \omrule & \mathit{true} \ominus(\vv{z},\mathtt{loc},\mathtt{col}) &\; \; = \; \; & \mathit{true} \\ \omrule & r(\vv{t}) \ominus(\vv{z},\mathtt{loc},\mathtt{col}) &\; \; = \; \; & r(\vv{t}) [\mathtt{col}(z)(k) / \delta_k(z) ]_{1\le k \le N, z \in \vv{z} } \\ \omrule & (x=y)\ominus(\vv{z},\mathtt{loc},\mathtt{col}) & \; \; = \; \; & \left\{ \begin{array}{ll} x=y & \quad\mathrm{if}~x,y\not\in\vv{z} \\ \mathit{true} & \quad\mathrm{if}~x\equiv y\\ \mathit{false}& \quad\mathrm{otherwise} \end{array}\right. \\ \omrule & (\neg \varphi ) \ominus(\vv{z},\mathtt{loc},\mathtt{col}) & \; \; = \; \; & \neg (\varphi \ominus(\vv{z},\mathtt{loc},\mathtt{col})) \\ \omrule & (\varphi_1 \vee \varphi_2) \ominus(\vv{z},\mathtt{loc},\mathtt{col}) &\; \; = \; \; & (\varphi_1 \ominus(\vv{z},\mathtt{loc},\mathtt{col})) \vee (\varphi_2 \ominus(\vv{z},\mathtt{loc},\mathtt{col})) \\ \omrule & (\exists x \in p. \; \varphi)\ominus(\vv{z},\mathtt{loc},\mathtt{col}) & \; \; = \; \; & \exists x \in p. \; (\varphi\ominus(\vv{z},\mathtt{loc},\mathtt{col})) \lor \\ & && \phantom{\exists x \in p.
\;} \bigvee_{z \in \vv{z} : \mathtt{loc}(z)=p} (\varphi[z/x])\ominus(\vv{z},\mathtt{loc},\mathtt{col}) \end{array} $ \end{center} \caption{Definition of the $\ominus$ operator.} \label{tab:ominus} \end{table*} \newcounter{oplusrule} \def\addtocounter{oplusrule}{1} \oplus_{\arabic{oplusrule}}:{\addtocounter{oplusrule}{1} \oplus_{\arabic{oplusrule}}:} \begin{table*} \begin{center} $ \begin{array}{lrcl} \addtocounter{oplusrule}{1} \oplus_{\arabic{oplusrule}}: & \mathit{true} \oplus(\vv{z},\mathtt{loc}) &\; \; = \; \; & \mathit{true} \\ \addtocounter{oplusrule}{1} \oplus_{\arabic{oplusrule}}: & r(\vv{t}) \oplus(\vv{z},\mathtt{loc}) & = & r(\vv{t}) \\ \addtocounter{oplusrule}{1} \oplus_{\arabic{oplusrule}}: & (x=y)\oplus(\vv{z},\mathtt{loc}) & = & \left\{ \begin{array}{ll} x=y & \quad\mathrm{if}~x,y\not\in\vv{z} \\ \mathit{true} & \quad\mathrm{if}~x\equiv y\\ \mathit{false}& \quad\mathrm{otherwise} \end{array}\right. \\ \addtocounter{oplusrule}{1} \oplus_{\arabic{oplusrule}}: & (\neg \varphi ) \oplus(\vv{z},\mathtt{loc}) & = & \neg (\varphi \oplus(\vv{z},\mathtt{loc})) \\ \addtocounter{oplusrule}{1} \oplus_{\arabic{oplusrule}}: & (\varphi_1 \vee \varphi_2) \oplus(\vv{z},\mathtt{loc}) & = & (\varphi_1 \oplus(\vv{z},\mathtt{loc})) \vee (\varphi_2 \oplus(\vv{z},\mathtt{loc})) \\ \addtocounter{oplusrule}{1} \oplus_{\arabic{oplusrule}}: & (\exists x \in p.~\varphi)\oplus(\vv{z},\mathtt{loc}) & = & \exists x \in p.~(\varphi\oplus(\vv{z},\mathtt{loc})) \land \bigwedge_{z \in \vv{z}: \mathtt{loc}(z)=p} \lnot(x=z) \end{array} $ \end{center} \caption{Definition of the $\oplus$ operator.} \label{tab:oplus} \end{table*} Therefore, we define $\varphi_{{\sf post}_\tau}$ to be the formula: \begin{equation} \label{eq-post} \exists\vv{y} \in \vv{q} . \; \exists\vv{c} . 
\; \big( (\varphi \land\psi) \ominus(\vv{x},\vv{x}\mapsto \vv{p},\vv{x} \mapsto [1,N] \mapsto \vv{c}) \big) \oplus(\vv{y},\vv{y}\mapsto \vv{q}) \end{equation} In the formula above, we first delete the tokens corresponding to $\vv{x}$ from the current configuration $\varphi$ intersected with the guard $\psi$ of the rule. Then, we add the tokens corresponding to $\vv{y}$. Finally, we close the formula by existentially quantifying (1) the color variables $\vv{c}$ corresponding to the colors of the deleted tokens $\vv{x}$, and (2) the token variables $\vv{y}$ corresponding to the added tokens. Similarly, we define $\varphi_{{\sf pre}_\tau}$ to be the formula: \begin{equation} \label{eq-pre} \exists\vv{x} \in \vv{p} . \; \exists\vv{c} . \; \big((\varphi \oplus(\vv{x},\vv{x}\mapsto\vv{p})) \land \psi \big) \ominus(\vv{y},\vv{y}\mapsto\vv{q},\vv{y}\mapsto [1,N]\mapsto \vv{c}) \end{equation} In the formula above, we first add to the current configuration the tokens $\vv{x}$ represented by the left-hand side of the rule, in order to obtain a configuration to which the guard $\psi$ applies. Then, we remove the tokens produced by the rule, represented by the token variables $\vv{y}$. Finally, we close the formula by existentially quantifying (1) the color variables $\vv{c}$ corresponding to the colors of the removed tokens $\vv{y}$, and (2) the token variables $\vv{x}$ corresponding to the added tokens. It is easy to see that if $\varphi$ and $\psi$ are in the $\Sigma_n$ fragment, for any $n \geq 1$, then both formulas $\varphi_{{\sf post}_\tau}$ and $\varphi_{{\sf pre}_\tau}$ are also in the same fragment $\Sigma_n$. \end{proof} \paragraph{\bf Complexity:} Let $\varphi$ be a $\Sigma_2$ formula, and let $\tau = \vv{p} \hookrightarrow \vv{q} : \psi$ be a transition of a system $S\in{\sf CPN}[\Sigma_2]$. Then the sizes of the formulas ${\sf post}_\tau(\varphi)$ and ${\sf pre}_\tau(\varphi)$ are in general exponential in the number of quantifiers in $\varphi\land \psi$.
More precisely, the size of the ${\sf post}$ (resp. ${\sf pre}$) image of $\varphi$ is $O(|\vv{p}|^n)$ (resp. $O(|\vv{q}|^n)$) times greater than the size of the formula $\varphi\land \psi$, where $n$ is the number of quantifiers in $\varphi\land \psi$. This exponential blow-up is due to rule $\ominus_6$ in Table~\ref{tab:ominus}. If the number of quantified variables in $\varphi\land\psi$ is fixed, then the size of ${\sf post}_\tau(\varphi)$ (resp. ${\sf pre}_\tau(\varphi)$) increases polynomially w.r.t. the size of the formula $\varphi\land \psi$. \begin{exa} \label{ex:post} To illustrate the construction given in the proof above, we consider the logic ${\sf CML}(\mathbb{N},\{0\},\{\le\})$ and the ${\sf CPN}$ $S=(\mathbb{P},\Delta)$ with $\mathbb{P}=\{p,q,r\}$ and $\Delta$ containing the following transition: $$ \tau:\;\; p \hookrightarrow q \;:\; \delta_1(x_1) \ge 0 \land \lnot(\exists t\in q. \;\delta_1(t)=\delta_1(y_1)) $$ Intuitively, this transition moves a token with non-negative color from place $p$ to place $q$ and assigns to its color a value non-deterministically chosen in $\mathbb{N}$ but different from all colors of tokens in place $q$. We illustrate the computation of the ${\sf post}$-image of $\tau$ on two formulas in special form, $\varphi_1= (\exists x\in r. \; \mathit{true})$ and $\varphi_2= (\forall x,y\in p. \; x=y)$. Intuitively, $\varphi_1$ says that the place $r$ contains at least one token, and $\varphi_2$ says that any two tokens in place $p$ are equal, i.e., place $p$ contains at most one token. Since $\varphi_1$ does not speak about the places involved in the transition $\tau$ (i.e., $p$ and $q$), we expect to obtain a stable ${\sf post}$-image by $\tau$, i.e., $\varphi_{1,{\sf post}_\tau} \implies \varphi_1$. In contrast, $\varphi_2$ speaks about a place changed by $\tau$, so its image cannot be stable.
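Operationally, the $\ominus$ rules of Table~\ref{tab:ominus} describe a recursive rewriting of the formula tree. As a rough illustration only, the following Python sketch models rules $\ominus_1$--$\ominus_6$ on a tiny tuple-based syntax tree; the encoding and all names are ours, not part of the framework.

```python
# Illustrative model of the "ominus" rules (Table tab:ominus); the tuple
# encoding and all names are ours, not part of the CPN framework.
# Formulas:  ("true",) | ("atom", pred, terms) | ("eq", x, y)
#            ("not", f) | ("or", f, g) | ("exists", x, p, f)
# A color term delta_k(x) inside an atom is encoded as ("color", k, x).

def subst(f, old, new):
    """Rename token variable `old` to `new` throughout formula f."""
    if isinstance(f, tuple):
        return tuple(subst(c, old, new) for c in f)
    return new if f == old else f

def ominus(f, zs, loc, col):
    """Eliminate the deleted tokens zs; loc maps tokens to places,
    col maps a token and a color index to a fresh color variable."""
    tag = f[0]
    if tag == "true":                              # rule ominus_1
        return f
    if tag == "atom":                              # rule ominus_2: delta_k(z) -> col(z)(k)
        fresh = tuple(("var", col[t[2]][t[1]])
                      if isinstance(t, tuple) and t[0] == "color" and t[2] in zs
                      else t
                      for t in f[2])
        return ("atom", f[1], fresh)
    if tag == "eq":                                # rule ominus_3
        _, x, y = f
        if x not in zs and y not in zs:
            return f
        return ("true",) if x == y else ("false",)
    if tag == "not":                               # rule ominus_4
        return ("not", ominus(f[1], zs, loc, col))
    if tag == "or":                                # rule ominus_5
        return ("or", ominus(f[1], zs, loc, col), ominus(f[2], zs, loc, col))
    if tag == "exists":                            # rule ominus_6: case split over deleted tokens
        _, x, p, body = f
        res = ("exists", x, p, ominus(body, zs, loc, col))
        for z in zs:
            if loc[z] == p:
                res = ("or", res, ominus(subst(body, x, z), zs, loc, col))
        return res
    raise ValueError(f"unknown connective: {tag}")
```

For instance, eliminating a deleted token $z_1$ located in $p$ from $\exists t\in p.\; t=z_1$ yields the disjunction of $\exists t\in p.\;\mathit{false}$ (the quantified token differs from $z_1$) and $\mathit{true}$ (it is $z_1$), mirroring the case split of rule $\ominus_6$.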
In the remainder of this example we give the details of the construction of the ${\sf post}$-images by $\tau$ for $\varphi_1$ and $\varphi_2$. By applying equation~(\ref{eq-post}) to $\varphi_1$, we obtain: \begin{eqnarray*} \varphi_{1,{\sf post}_\tau} & = & \exists y_1\in q. \;\exists c_{1,x_1}. \; \begin{array}[t]{l} (\varphi_1\land \delta_1(x_1) \ge 0\land \lnot(\exists t\in q. \;\delta_1(t)=\delta_1(y_1)) \\ )\ominus(\{x_1\},\{x_1\mapsto p\}, \{x_1\mapsto 1 \mapsto c_{1,x_1}\}) \\ \phantom{)}\oplus (\{y_1\}, \{y_1\mapsto q\}) \end{array} \end{eqnarray*} In the following, we denote by $\mathtt{loc}_{x_1}$, $\mathtt{col}_{x_1}$, and $\mathtt{loc}_{y_1}$ the mappings $\{x_1\mapsto p\}$, $\{x_1\mapsto 1 \mapsto c_{1,x_1}\}$, and $\{y_1\mapsto q\}$, respectively. First, we compute the effect of applying the $\ominus$ operation on $\varphi_1$ and the guard of $\tau$ using the rules given in Table~\ref{tab:ominus}. By applying rules $\ominus_4$ and $\ominus_5$ several times to distribute $\ominus$ over $\land$ and $\lnot$, we obtain: \begin{eqnarray*} & & \varphi_1\ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \\ & \land & (\delta_1(x_1) \ge 0) \ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \\ & \land & \lnot \big( (\exists t\in q. \;\delta_1(t)=\delta_1(y_1)) \ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \big) \end{eqnarray*} By applying rule $\ominus_6$ twice, rule $\ominus_2$ once, and by replacing the empty disjunction by $\mathit{false}$, we obtain: \begin{eqnarray*} & & \big(\exists x\in r. \; \mathit{true}\ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \lor \mathit{false}\big)\\ & \land & (c_{1,x_1} \ge 0) \\ & \land & \lnot \big(\exists t\in q. \; (\delta_1(t)=\delta_1(y_1)) \ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \lor \mathit{false}\big) \end{eqnarray*} Rules $\ominus_1$ and $\ominus_2$ are applied to obtain the final result: \begin{eqnarray*} & & (\exists x\in r.
\; \mathit{true}) \\ & \land & (c_{1,x_1} \ge 0) \\ & \land & \lnot (\exists t\in q. \;\delta_1(t)=\delta_1(y_1)) \end{eqnarray*} The $\oplus$ transformation is then applied to the formula above, using the rules given in Table~\ref{tab:oplus}. By applying rules $\oplus_4$ and $\oplus_5$ several times to distribute $\oplus$ over $\land$ and $\lnot$, we obtain: \begin{eqnarray*} & & (\exists x\in r. \; \mathit{true})\oplus(\{y_1\}, \mathtt{loc}_{y_1}) \\ & \land & (c_{1,x_1} \ge 0) \oplus(\{y_1\}, \mathtt{loc}_{y_1}) \\ & \land & \lnot \big( (\exists t\in q. \;\delta_1(t)=\delta_1(y_1))\oplus(\{y_1\}, \mathtt{loc}_{y_1})\big) \end{eqnarray*} By applying rules $\oplus_6$ and $\oplus_2$ twice, and by replacing empty conjunctions by $\mathit{true}$, we obtain: \begin{eqnarray*} & & (\exists x\in r. \; \mathit{true}\oplus(\{y_1\}, \mathtt{loc}_{y_1}) ~\land~\mathit{true}) \\ & \land & (c_{1,x_1} \ge 0) \\ & \land & \lnot \big(\exists t\in q. \;(\delta_1(t)=\delta_1(y_1))\oplus(\{y_1\}, \mathtt{loc}_{y_1}) ~\land~ \lnot(t=y_1)\big) \end{eqnarray*} Rules $\oplus_1$ and $\oplus_2$ are applied to obtain the final result: \begin{eqnarray*} & & (\exists x\in r. \; \mathit{true}) \\ & \land & (c_{1,x_1} \ge 0) \\ & \land & \lnot \big(\exists t\in q. \; \delta_1(t)=\delta_1(y_1) ~\land~ \lnot(t=y_1)\big) \end{eqnarray*} Therefore, the immediate successors of $\varphi_1$ by $\tau$ are given by the following ${\sf CML}$ formula: \begin{eqnarray*} \lefteqn{\varphi_{1,{\sf post}_\tau}} \\ & = & \exists y_1\in q. \;\exists c_{1,x_1}. \; (\exists x\in r. \; \mathit{true}) \land (c_{1,x_1} \ge 0) \land \lnot \big(\exists t\in q. \; \delta_1(t)=\delta_1(y_1) \;\land\; \lnot(t=y_1)\big) \\ & = & (\exists x\in r. \; \mathit{true}) \land \big(\exists y_1\in q. \;\exists c_{1,x_1}. \; (c_{1,x_1} \ge 0) \land \lnot \big(\exists t\in q.
\; \delta_1(t)=\delta_1(y_1) \;\land\; \lnot(t=y_1)\big)\big) \end{eqnarray*} where the last equality is obtained by applying classical rules for quantifiers. It is now easy to see that $\varphi_{1,{\sf post}_\tau}\implies \varphi_1$. Now, we consider $\varphi_2$ and apply equation~(\ref{eq-post}) to obtain: \begin{eqnarray*} \varphi_{2,{\sf post}_\tau} & = & \exists y_1\in q. \;\exists c_{1,x_1}. \; \begin{array}[t]{l} (\varphi_2\land \delta_1(x_1) \ge 0\land \lnot(\exists t\in q. \;\delta_1(t)=\delta_1(y_1)) \\ )\ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \\ \phantom{)}\oplus(\{y_1\}, \mathtt{loc}_{y_1}) \end{array} \end{eqnarray*} We only detail the effect of the $\oplus$ and $\ominus$ operators on $\varphi_2$, since the computation for the conjunct representing the guard of $\tau$ is the same as for $\varphi_1$. In order to apply $\ominus$ to $\varphi_2$, we use the equivalent form of $\varphi_2$, i.e., $\lnot(\exists x\in p . \; \exists y\in p. \; \lnot(x=y))$. Then, the effect of the $\ominus$ operation on $\varphi_2$ is obtained by applying rules $\ominus_4$ and $\ominus_6$ twice, as follows: \begin{eqnarray*} \varphi_2 \ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) & = & \lnot\big( (\exists x\in p. \; \begin{array}[t]{l} (\exists y\in p . \; \lnot(x=y)\ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1})) \\ ~~\lor \lnot(x=x_1)\ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \;) \end{array} \\ & & \quad\lor~(\exists y\in p. \; \lnot(x_1=y))\ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \big) \end{eqnarray*} By applying rules $\ominus_3$, $\ominus_4$, and $\ominus_6$ several times, we obtain: \begin{eqnarray*} \lefteqn{\varphi_2 \ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1})} \\ & = & \lnot\big( \begin{array}[t]{l} (\exists x\in p. \; \exists y\in p . \; \lnot(x=y) \lor \lnot(\mathit{false})\;) \\ \lor~(\exists y\in p.
\; \lnot(x_1=y)\ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1}) \lor \lnot(x_1=x_1)\ominus(\{x_1\},\mathtt{loc}_{x_1},\mathtt{col}_{x_1})) \big) \end{array} \\ & = & \lnot\big( \begin{array}[t]{l} (\exists x\in p. \; \exists y\in p . \; \mathit{true}) \\ \lor~(\exists y\in p. \; \lnot(\mathit{false}) \lor \lnot(\mathit{true})) \big) \end{array} \\ & = & \lnot(\exists x\in p. \;\exists y\in p . \; \mathit{true})\land\lnot(\exists y\in p. \; \mathit{true}) \\ & = & (\forall x,y\in p . \; \mathit{false})\land(\forall y\in p. \; \mathit{false})\\ & = & (\forall x\in p . \; \mathit{false}) \end{eqnarray*} The last equivalence above is obtained from the classical properties of quantifiers. The final result is the one expected intuitively: the effect of removing the token $x_1$ from place $p$ in a configuration with at most one token in $p$ (the meaning of $\varphi_2$) is a configuration with no token in $p$. It is easy to show that the application of $\oplus(\{y_1\}, \mathtt{loc}_{y_1})$ leaves the last formula above unchanged. Therefore, the immediate successors of $\varphi_2$ by $\tau$ are given by the following ${\sf CML}$ formula: \begin{eqnarray*} \lefteqn{\varphi_{2,{\sf post}_\tau}} \\ & = & \exists y_1\in q. \;\exists c_{1,x_1}. \; (\forall x\in p. \; \mathit{false}) \land (c_{1,x_1} \ge 0) \land \lnot \big(\exists t\in q. \; \delta_1(t)=\delta_1(y_1) \;\land\; \lnot(t=y_1)\big) \\ & = & (\forall x\in p. \; \mathit{false}) \land \big(\exists y_1\in q. \;\exists c_{1,x_1}. \; (c_{1,x_1} \ge 0) \land \lnot \big(\exists t\in q. \; \delta_1(t)=\delta_1(y_1) \;\land\; \lnot(t=y_1)\big)\big) \end{eqnarray*} More complex examples of ${\sf post}$-image computations for the Reader-Writer lock example are provided in Section~\ref{ex:RW-INV}. \end{exa} \section{Applications in Verification} \label{sect-verif} We show in this section how to use the results of the previous section to perform various kinds of analysis.
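The analyses developed below share a common skeleton: compute symbolic ${\sf post}$ or ${\sf pre}$ images and discharge a satisfiability query. As a rough illustration of this skeleton only, the following Python sketch replays the Hoare-triple and bounded reachability checks over an explicit finite state space, with sets of configurations standing in for ${\sf CML}$ formulas and set operations standing in for the symbolic computations (the encoding and all names are ours).

```python
# Toy versions of the checks below: sets of configurations stand in for CML
# formulas, a successor function stands in for the symbolic post-image, and
# set intersection stands in for the satisfiability query (names are ours).

def hoare_triple(pre, post_tau, post_cond):
    """<pre, tau, post_cond> holds iff post_tau(pre) /\\ not(post_cond) is empty."""
    return all(c2 in post_cond for c in pre for c2 in post_tau(c))

def bounded_reach(init, target, k, post):
    """Can a target configuration be reached from init in at most k steps?
    Mirrors: Target /\\ (post^0(Init) U ... U post^k(Init)) is non-empty."""
    reached = set(init)
    frontier = set(init)
    for _ in range(k):
        if reached & set(target):
            return True
        frontier = {c2 for c in frontier for c2 in post(c)} - reached
        reached |= frontier
    return bool(reached & set(target))

# Example: a single counter whose only transition increments it.
step = lambda n: {n + 1}
print(bounded_reach({0}, {3}, 3, step))   # configuration 3 is reachable in 3 steps
print(bounded_reach({0}, {3}, 2, step))   # but not in 2 steps
```

In the symbolic setting of the following subsections, `reached & target` becomes a satisfiability check on a ${\sf CML}$ formula, which is what Theorems \ref{thm-reach} and \ref{thm-sat-dec} make effective.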
Let us fix for the rest of the section a first-order logic ${\sf FO}(\mathbb{C},\Omega,\Xi)$ with a decidable satisfiability problem and a ${\sf CPN}$ $S$. \subsection{Pre-post condition reasoning} Given a transition $\tau$ in $S$ and two formulas $\varphi$ and $\varphi'$, $\langle \varphi, \tau, \varphi' \rangle$ is a Hoare triple if, whenever the condition $\varphi$ holds, the condition $\varphi'$ holds after the execution of $\tau$. In other words, we must have ${\sf post}_\tau (\semcro{\varphi}) \subseteq \semcro{\varphi'}$, or equivalently that ${\sf post}_\tau (\semcro{\varphi}) \cap \semcro{\neg \varphi'} = \emptyset$. Then, by Theorem \ref{thm-reach} and Theorem \ref{thm-sat-dec} we deduce the following: \begin{theorem} If $S$ is a ${\sf CPN}[\Sigma_2]$, then the problem whether $\langle \varphi, \tau, \varphi' \rangle$ is a Hoare triple is decidable for every transition $\tau$ of $S$, every formula $\varphi \in \Sigma_2$, and every formula $\varphi' \in \Pi_2$. \end{theorem} \subsection{Bounded reachability analysis} An instance of the bounded reachability analysis problem is a triple $(Init, Target, k)$ where $Init$ and $Target$ are two sets of configurations, and $k$ is a positive integer. The problem consists in deciding whether there exists a computation of length at most $k$ which starts from some configuration in $Init$ and reaches a configuration in $Target$. In other words, the problem consists in deciding whether $Target \cap \bigcup_{0\leq i \leq k} {\sf post}_S^i(Init) \neq \emptyset$, or equivalently whether $Init \cap \bigcup_{0\leq i \leq k} {\sf pre}_S^i(Target) \neq \emptyset$. The following result is a direct consequence of Theorem \ref{thm-reach} and Theorem \ref{thm-sat-dec}. \begin{theorem} If $S$ is a ${\sf CPN}[\Sigma_2]$, then, for every $k \in \mathbb{N}$, and for every two formulas $\varphi_{I}, \varphi_T \in \Sigma_2$, the bounded reachability problem $(\semcro{\varphi_{I}}, \semcro{\varphi_T}, k)$ is decidable.
\end{theorem} \subsection{Checking invariance properties} Invariance checking consists in deciding whether a given property (1) is satisfied by the set of initial configurations, and (2) is stable under the transition relation of a system. Formally, given a ${\sf CPN}$ $S$ with transitions in $\Delta$ and a closed formula $\varphi_{init}$ defining the set of initial configurations, we say that a closed formula $\varphi$ is an \emph{inductive invariant} of $(\Delta,\varphi_{init})$ if and only if (1) $\semcro{\varphi_{init}}\subseteq\semcro{\varphi}$, and (2) ${\sf post}_{\tau}(\semcro{\varphi})\subseteq\semcro{\varphi}$ for any $\tau\in\Delta$. Clearly, (1) is equivalent to $\semcro{\varphi_{init}}\cap\semcro{\lnot\varphi} =\emptyset$, and (2) is equivalent to ${\sf post}_{\tau}(\semcro{\varphi})\cap\semcro{\lnot\varphi} =\emptyset$. By Theorem \ref{thm-reach} and Theorem \ref{thm-sat-dec}, we have: \begin{theorem} The problem whether a formula $\varphi\in B(\Sigma_1)$ is an inductive invariant of $(\Delta,\varphi_{init})$, where $\Delta\in{\sf CPN}[\Sigma_2]$ and $\varphi_{init}\in\Sigma_2$, is decidable. \end{theorem} The deductive approach for establishing an invariance property considers the \emph{inductive invariance checking problem} given by a triple $(\varphi_{init}, \varphi_{inv}, \varphi_{aux})$ of closed formulas expressing sets of configurations, which consists in deciding whether (1) $\semcro{\varphi_{init}} \subseteq \semcro{\varphi_{aux}}$, (2) $\semcro{\varphi_{aux}} \subseteq \semcro{\varphi_{inv}}$, and (3) $\varphi_{aux}$ is an inductive invariant. The following result is a direct consequence of Theorem \ref{thm-reach}, Theorem \ref{thm-sat-dec}, and of the previous theorem.
\begin{theorem}\label{thm-ci} If $S$ is a ${\sf CPN}[\Sigma_2]$, then the inductive invariance checking problem is decidable for every instance $(\varphi_{init}, \varphi_{inv}, \varphi_{aux})$ where $\varphi_{init} \in \Sigma_2$, and $\varphi_{inv}, \varphi_{aux} \in B(\Sigma_1)$ are all closed formulas. \end{theorem} Of course, the difficult part in applying the deductive approach is to find useful auxiliary inductive invariants. One approach to tackle this problem is to try to compute the largest inductive invariant included in $\varphi_{inv}$, which is the set $\bigcap_{k \geq 0} \widetilde{{\sf pre}}^k_S (\varphi_{inv})$. Therefore, a method to derive auxiliary inductive invariants is to try iteratively the sets $\varphi_{inv}$, $\varphi_{inv} \cap \widetilde{{\sf pre}}_S (\varphi_{inv})$, $\varphi_{inv} \cap \widetilde{{\sf pre}}_S (\varphi_{inv}) \cap \widetilde{{\sf pre}}^2_S (\varphi_{inv})$, etc. In many practical cases, only a few strengthening steps are needed to find an inductive invariant. (Indeed, the user is in general able to provide accurate invariant assertions for each control point of the system.) The result below implies that the steps of this iterative strengthening method can be automated when ${\sf CPN}[\Sigma_1]$ models and $\Pi_1$ invariants are considered. \begin{theorem} If $S$ is a ${\sf CPN}[\Sigma_1]$, then for every closed formula $\varphi$ in $\Pi_1$ and every positive integer $k$, it is possible to construct a formula in $\Pi_1$ defining the set $\bigcap_{0\leq i \leq k} \widetilde{{\sf pre}}^i_S (\semcro{\varphi})$. \end{theorem} The theorem above is a consequence of the fact that, by Theorem \ref{thm-reach}, for every $S$ in ${\sf CPN}[\Sigma_1]$ and for every formula $\varphi$ in $\Pi_1$, it is possible to construct a formula $\varphi_{\widetilde{{\sf pre}}}$ also in $\Pi_1$ such that $\semcro{\varphi_{\widetilde{{\sf pre}}}}= \widetilde{{\sf pre}}_S (\semcro{\varphi})$.
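The strengthening iteration can be pictured on an explicit finite state space, with sets standing in for $\Pi_1$ formulas and $\widetilde{{\sf pre}}(X)$ computed as the set of configurations all of whose successors lie in $X$. The Python sketch below is our own toy model of this iteration, not the symbolic construction of Theorem~\ref{thm-reach}; it converges to the largest inductive invariant included in the initial candidate.

```python
# Toy model of the iterative strengthening: sets stand in for Pi_1 formulas,
# pre_tilde(X) is the set of configurations whose successors all lie in X
# (the explicit-state setting and all names are ours).

def pre_tilde(x, states, post):
    return {c for c in states if post(c) <= x}

def strengthen(candidate, states, post):
    """Iterate inv, inv /\\ pre~(inv), inv /\\ pre~(inv) /\\ pre~^2(inv), ...
    until a fixpoint: the largest inductive invariant inside the candidate."""
    inv = set(candidate)
    while True:
        stronger = inv & pre_tilde(inv, states, post)
        if stronger == inv:
            return inv
        inv = stronger

# Example: counter 0 -> 1 -> 2 -> 3 -> 3. The candidate {0, 1, 3} is not
# inductive (1's successor 2 is excluded); its largest inductive subset,
# reached after a few strengthening steps, is {3}.
step = lambda n: {min(n + 1, 3)}
print(strengthen({0, 1, 3}, {0, 1, 2, 3}, step))
```

Each iteration of the loop corresponds to one conjunction with $\widetilde{{\sf pre}}_S$, which, for ${\sf CPN}[\Sigma_1]$ models and $\Pi_1$ invariants, the theorem above allows computing symbolically.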
\paragraph{\bf Complexity:} Let $\tau = \vv{p} \hookrightarrow \vv{q} : \psi$ be a transition of a system $S \in {\sf CPN}[\Sigma_2]$, and let $\varphi$ be a $B(\Sigma_1)$ formula. The satisfiability of ${\sf post}_{\tau}(\varphi)\land\lnot\varphi$ can be reduced in nondeterministic doubly-exponential time to the satisfiability problem of the color logic. This is due to the fact that (1) the reduction to the satisfiability problem of the color logic is in nondeterministic exponential time w.r.t. the maximal number of universally quantified variables in the formulas $\lnot \varphi$ and ${\sf post}_\tau(\varphi)$, and that (2) the number of universally quantified variables in ${\sf post}_\tau(\varphi)$ is exponential in the number of universally quantified variables in $\varphi \land \psi$. Now, for fixed sizes of $\vv{p}$ and $\vv{q}$, and for a fixed number of quantified variables in $\varphi \land \psi$, the reduction to the satisfiability problem of the color logic is in NP. Such assumptions are in fact quite realistic in practice (as shown in the following section for different examples of parameterized systems). Indeed, in models of parameterized systems (see Section \ref{sect-modeling}), communication involves only a few processes (usually at most two). This justifies the bound on the sizes of the left- and right-hand sides of the transition rules. Moreover, invariants are usually expressible using a small number of process indices (for instance, mutual exclusion needs two indices) and relate only a few of their local variables. \section{Case Studies and Experimental Results} We illustrate the use of our framework on several examples of parameterized systems. First, we consider the parameterized version of the Reader-Writer lock example provided in~\cite{Flanagan-Freund-Qadeer-02}. We give for this case study the inductive invariant that allows proving a suitable safety property, and we show significant parts of its proof.
Then, we describe briefly a prototype tool for checking invariance properties based on our framework, and we give the experimental results obtained on several examples of parameterized mutual exclusion protocols and on the Reader-Writer lock case study. \subsection{Verification of the Reader-Writer Lock} \label{ex:RW-INV} A safety property of our example is ``for all \lstinline{Reader} processes at control location 3, the local variable \lstinline{y} has the same value, equal to \lstinline{f(x)}''; its specification in ${\sf CML}$ is the following $\Pi_1$ formula: $$ RF = \forall a\in r3, t\in x. \; \delta_2(a)=f(\delta_1(t)) $$ Of course, this property is true only if all \lstinline{Reader} and \lstinline{Writer} processes respect the procedure of acquiring the lock, i.e., there are no other processes in the system accessing the global variable \lstinline{x}. Therefore, a correct initial configuration of the ${\sf CPN}$ model given in Table~\ref{tab:RW} has no token in places $r2$, $r3$, $w2$, and $w3$, and only one token in place $x$. Moreover, all process identities stored in color $\delta_1$ are non-negative. We suppose that the lock is free initially, i.e., the place $r$ is empty and the place $w$ contains a unique token with negative $\delta_2$ color. Then, a correct initial configuration of the system is given by the following $\mathit{Init}$ formula in $B(\Sigma_1)$: $$ \mathit{Init} = G_x\land \mathit{Ids}\land \mathit{Init}_{lock} \land \Big(\forall t. \; \lnot \big(r2(t)\lor r3(t) \lor w2(t) \lor w3(t)\big)\Big) $$ where \[ G_x = (\exists t\in x. \; true) \land (\forall t,t'\in x. \; t=t') \] expresses that the place $x$ contains a unique token, \[ \mathit{Ids} = \forall t . \; \delta_1(t) \ge 0 \] expresses that all tokens have a non-negative color $\delta_1$ (representing their identity), and \[ \mathit{Init}_{lock} = (\exists u\in w. \; \delta_2(u)<0) \land (\forall u,u'\in w. \; u=u') \land (\forall t\in r .
\; \mathit{false}) \] specifies the initial state of the lock: there is exactly one token in place $w$, its color $\delta_2$ is negative, and the place $r$ is empty. The premises of Theorem~\ref{thm-ci} are fulfilled since the model proposed in Table~\ref{tab:RW} is in ${\sf CPN}[\Pi_1]$, and $\mathit{Init}$ and $\mathit{RF}$ are both in $B(\Sigma_1)$. It follows that we have to find an inductive invariant $\varphi_{aux}\in B(\Sigma_1)$ such that $\mathit{Init} \Longrightarrow \varphi_{aux}$ and $\varphi_{aux}\Longrightarrow \mathit{RF}$. We consider the following $B(\Sigma_1)$ formula as a candidate for $\varphi_{aux}$: $$ \mathit{Aux} = G_x \land \mathit{Ids} \land RW_w \land RW_r \land RF $$ where $G_x$, $\mathit{Ids}$, and $RF$ are defined above and \[ RW_w = (\exists u\in w. \; true) \land (\forall u,u'\in w. \; u=u') \land ((\exists t. \; w2(t)\lor w3(t)) \Leftrightarrow (\exists u\in w. \;\delta_2(u)\ge 0)) \] specifies that the place $w$ contains exactly one token, whose color $\delta_2$ is non-negative exactly when a writer process is accessing the global variable (because $\delta_2$ then stores the identity of the writer), and \[ RW_r = (\exists v. \; r2(v)\lor r3(v)) \Leftrightarrow (\exists \ell\in r. \; true) \] expresses that the place $r$ must contain a token if and only if a reader process is accessing the global variable (i.e., it is at location $r2$ or $r3$). Therefore, to check the safety property $RF$ we have to show that: (1) $\mathit{Init} \Longrightarrow \mathit{Aux}$, (2) for any transition $\tau$ in the system, ${\sf post}_{\tau}(Aux)\Longrightarrow \mathit{Aux}$, and (3) $Aux\Longrightarrow RF$. We leave point (1) as an exercise. Point (3) follows trivially from the definition of $Aux$. In the following, we detail the proof of point (2) for one transition of the system, namely $w_1$, which we recall here for readability: \begin{cmrs} w_1: & \cmrsrule{w1,w}{w2,w}{ \begin{array}[t]{l} \lnot(\exists z\in r .
\; true) \land \delta_2(x_2)< 0 \land \delta_2(y_2)=\delta_1(x_1) \land \\ \delta_1(y_2)=\delta_1(x_2) \land \varphi_{id}(1) \end{array}} \\ \end{cmrs} Using equation~(\ref{eq-post}), we obtain that the ${\sf post}$-image of $Aux$ by the transition $w_1$ has the following form: \begin{eqnarray*} \lefteqn{Aux_{{\sf post}_{w_1}}}\\ &\! =\! & \exists y_1\in w2. \;\exists y_2\in w. \; \exists c_{1,x_1},c_{2,x_1},c_{1,x_2},c_{2,x_2}. \; \\ & & \phantom{\exists y_1} \big(\!\begin{array}[t]{l} (Aux \land \lnot(\exists z\in r . \; true) ~\land \\ ~\delta_2(x_2)< 0 \land \delta_2(y_2)=\delta_1(x_1) \land \delta_1(y_2)=\delta_1(x_2) \land \varphi_{id}(1) \\ ~~)\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \end{array} \\ & & \phantom{\exists y_1}\big)\oplus (\vv{y}, \mathtt{loc}_{\vv{y}}) \end{eqnarray*} where $\vv{x}=(x_1,x_2)$, $\mathtt{loc}_{\vv{x}}=[x_1\mapsto w1,x_2\mapsto w]$, $\mathtt{col}_{\vv{x}}=[x_i\mapsto k \mapsto c_{k,x_i}]_{1\le i\le 2, 1\le k\le 2}$, $\vv{y}=(y_1,y_2)$, and $\mathtt{loc}_{\vv{y}}=[y_1\mapsto w2, y_2\mapsto w]$. Before applying the operators $\ominus$ and $\oplus$, let us observe that the closed sub-formulas $G_x$, $RW_r$, and $RF$ of $Aux$, as well as the guard conjunct $\lnot(\exists z\in r . \; true)$, concern places that are not involved in the transition $w_1$. It can be shown (Example~\ref{ex:post} gives an illustration of this fact) that these sub-formulas are not changed by the application of the $\ominus$ and $\oplus$ operators. Therefore, we have to apply these operators only to the remaining sub-formulas of $Aux$ and to the guard of $w_1$, i.e.: \begin{eqnarray*} \lefteqn{Aux_{{\sf post}_{w_1}}}\\ &\!=\!& G_x\land RW_r \land RF \land \lnot(\exists z\in r . \; true) \land \\ & & \exists y_1\in w2. \;\exists y_2\in w. \; \exists c_{1,x_1},c_{2,x_1},c_{1,x_2},c_{2,x_2}.
\; \\ & & \phantom{\exists y_1} \big(\!\begin{array}[t]{l} (\mathit{Ids} \land RW_w \land \\ ~\delta_2(x_2)< 0 \land \delta_2(y_2)=\delta_1(x_1) \land \delta_1(y_2)=\delta_1(x_2) \land \varphi_{id}(1) \\ ~~)\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \end{array} \\ & & \phantom{\exists y_1}\big)\oplus (\vv{y}, \mathtt{loc}_{\vv{y}}) \end{eqnarray*} By distributing the $\ominus$ operator over $\land$ (rules $\ominus_4$ and $\ominus_5$) and by applying rule $\ominus_2$ three times, we obtain: \begin{eqnarray*} \lefteqn{Aux_{{\sf post}_{w_1}}}\\ &\! =\! & G_x\land RW_r \land RF \land \lnot(\exists z\in r . \; true) \land \\ & & \exists y_1\in w2. \;\exists y_2\in w. \; \exists c_{1,x_1},c_{2,x_1},c_{1,x_2},c_{2,x_2}. \; \\ & & \phantom{\exists y_1} \big(\!\begin{array}[t]{l} (\mathit{Ids}\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \land RW_w\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \land \\ c_{2,x_2} < 0 \land \delta_2(y_2) = c_{1,x_1}\land \delta_1(y_2)=c_{1,x_2}\land \varphi_{id}(1)\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \end{array} \\ & & \phantom{\exists y_1} \big)\oplus (\vv{y}, \mathtt{loc}_{\vv{y}}) \end{eqnarray*} The application of $\ominus$ to the $\mathit{Ids}$ sub-formula uses rules $\ominus_4$ and $\ominus_6$ and introduces constraints on the color variables $c_{1,x_1}$ and $c_{1,x_2}$: \begin{eqnarray*} \mathit{Ids}\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) & = & (\forall t. \; \delta_1(t) \ge 0)\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \\ & = & \mathit{Ids} \land c_{1,x_1} \ge 0 \land c_{1,x_2}\ge 0 \end{eqnarray*} The result of applying $\ominus$ to the $RW_w$ sub-formula is (we sometimes omit the arguments of $\ominus$ for legibility): \begin{eqnarray*} \lefteqn{RW_w\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) }\\ & = & \big(\begin{array}[t]{l} (\exists u\in w.
\; true) \land \\ (\forall u,u'\in w. \; u=u') \land \\ ((\exists t. \; w2(t)\lor w3(t)) \Leftrightarrow (\exists u\in w. \;\delta_2(u)\ge 0)) \big)\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \end{array} \\ & = & (\exists u\in w. \; true)\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \land \\ & & (\forall u,u'\in w. \; u=u')\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \land \\ & & ((\exists t. \; w2(t)\lor w3(t)) \Leftrightarrow (\exists u\in w. \;\delta_2(u)\ge 0))\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \\ & = & ((\exists u\in w. \; true) \lor true) \land \\ & & (\forall u\in w. \; (\forall u'\in w. \; (u=u')\ominus) \land (u=x_2)\ominus) \land (\forall u'\in w. \; (x_2=u')\ominus) \land (x_2=x_2)\ominus \land \\ & & ((\exists t. \; w2(t)\lor w3(t)) \Leftrightarrow (\exists u\in w. \;\delta_2(u)\ge 0 \lor c_{2,x_2} \ge 0)) \\ & = & true \land \\ & & (\forall u\in w. \; (\forall u'\in w. \; u=u') \land \mathit{false}) \land (\forall u'\in w. \; \mathit{false}) \land true \land \\ & & ((\exists t. \; w2(t)\lor w3(t)) \Leftrightarrow (\exists u\in w. \;\delta_2(u)\ge 0 \lor c_{2,x_2} \ge 0)) \end{eqnarray*} After some trivial simplification, we obtain: \begin{eqnarray*} RW_w\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) & = & (\forall u\in w. \; \mathit{false}) \land \\ & & ((\exists t. \; w2(t)\lor w3(t)) \Leftrightarrow (\exists u\in w. \;\delta_2(u)\ge 0 \lor c_{2,x_2} \ge 0)) \end{eqnarray*} As expected, the first conjunct of the result obtained above says that after the deletion of the $x_2$ token in $w$, there is no more token in $w$. 
The result of applying $\ominus$ on the $\varphi_{id}(1)$ sub-formula is: \begin{eqnarray*} \varphi_{id}(1)\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) & = & (\delta_1(x_1)=\delta_1(y_1) \land \delta_2(x_1)=\delta_2(y_1))\ominus (\vv{x}, \mathtt{loc}_{\vv{x}},\mathtt{col}_{\vv{x}}) \\ & = & \delta_1(y_1) = c_{1,x_1}\land \delta_2(y_1) = c_{2,x_1} \end{eqnarray*} \noindent Therefore, after applying the $\ominus$ operator we obtain: \begin{eqnarray*} \lefteqn{Aux_{{\sf post}_{w_1}}}\\ &\! =\! & G_x\land RW_r \land RF \land \lnot(\exists z\in r . \; true) \land \\ & & \exists y_1\in w2. \;\exists y_2\in w. \; \exists c_{1,x_1},c_{2,x_1},c_{1,x_2},c_{2,x_2}. \; \\ & & \phantom{\exists y_1} \big(\!\begin{array}[t]{l} (\mathit{Ids} \land c_{1,x_1} \ge 0 \land c_{1,x_2}\ge 0 \land (\forall u\in w. \; \mathit{false}) \land \\ ((\exists t. \; w2(t)\lor w3(t)) \Leftrightarrow (\exists u\in w. \;\delta_2(u)\ge 0 \lor c_{2,x_2} \ge 0)) \land \\ c_{2,x_2} < 0 \land \delta_2(y_2) = c_{1,x_1}\land \delta_1(y_2)=c_{1,x_2}\land \delta_1(y_1) = c_{1,x_1}\land \delta_2(y_1) = c_{2,x_1} \end{array} \\ & & \phantom{\exists y_1} \big)\oplus (\vv{y}, \mathtt{loc}_{\vv{y}}) \end{eqnarray*} The $\oplus$ operation transforms all sub-formulas containing quantifiers. Indeed, after distributing $\oplus$ over conjunctions (rules $\oplus_4$ and $\oplus_5$) and after applying several times rules $\oplus_2$ and $\oplus_6$, we obtain: \begin{eqnarray*} \lefteqn{Aux_{{\sf post}_{w_1}}}\\ &\! =\! & G_x\land RW_r \land RF \land \lnot(\exists z\in r . \; true) \land \\ &&\exists y_1\in w2. \;\exists y_2\in w. \; \exists c_{1,x_1},c_{2,x_1},c_{1,x_2},c_{2,x_2}. \; \\ &&\phantom{\exists y_1} \big(\! \begin{array}[t]{l} (\forall t. \; \delta_1(t) \ge 0 \lor (t=y_1) \lor (t=y_2)) \land c_{1,x_1} \ge 0 \land c_{1,x_2}\ge 0 \land \\ (\forall u\in w. \; \mathit{false} \lor u=y_2) \land \\ \big((\exists t. \; (w2(t)\lor w3(t))\land (t\neq y_1))\!\Leftrightarrow\! ((\exists u\in w. 
\;\delta_2(u)\ge 0 \land (u\neq y_2)) \lor c_{2,x_2} \ge 0)\big) \land \\ c_{2,x_2} < 0 \land \delta_2(y_2) = c_{1,x_1}\land \delta_1(y_2)=c_{1,x_2}\land \delta_1(y_1) = c_{1,x_1}\land \delta_2(y_1) = c_{2,x_1} \end{array} \\ & & \phantom{\exists y_1}\big) \end{eqnarray*} We can now apply the decision procedure defined in Section~\ref{sect-sat} to prove that $Aux_{{\sf post}_{w_1}}\Longrightarrow Aux$, i.e., $Aux_{{\sf post}_{w_1}}\land\lnot Aux$ is unsatisfiable. Instead of doing this proof, we give some hints about the validity of this implication. First, we remark that by projecting out the color variables $c_{1,x_1}$ and $c_{1,x_2}$, the $\mathit{Ids}$ sub-formula of $Aux$ is implied by the sub-formula $(\forall t. \; \delta_1(t) \ge 0 \lor (t=y_1) \lor (t=y_2))$ and the constraints on $\delta_1(y_1)$ and $\delta_1(y_2)$: \begin{eqnarray*} \lefteqn{Aux_{{\sf post}_{w_1}}}\\ &\! =\! & G_x\land RW_r \land RF \land \lnot(\exists z\in r . \; true) \land \\ & & \exists y_1\in w2. \;\exists y_2\in w. \; \exists c_{2,x_1},c_{2,x_2}. \; \\ & & \phantom{\exists y_1} \big(\! \begin{array}[t]{l} (\forall t. \; \delta_1(t) \ge 0 \lor (t=y_1) \lor (t=y_2)) \land \delta_1(y_1) \ge 0 \land \delta_1(y_2)\ge 0 \land \\ (\forall u\in w. \; u=y_2) \land \\ \big((\exists t. \; (w2(t)\lor w3(t))\land (t\neq y_1)) \Leftrightarrow ((\exists u\in w. \;\delta_2(u)\ge 0 \land (u\neq y_2)) \lor c_{2,x_2} \ge 0)\big) \land \\ c_{2,x_2} < 0 \land \delta_2(y_2) \ge 0 \land \delta_2(y_1) = c_{2,x_1} \end{array} \\ & & \phantom{\exists y_1}\big) \end{eqnarray*} Second, the $RW_w$ sub-formula of $Aux$ is implied by the sub-formula $\exists y_1\in w2. \;\exists y_2\in w. \;\dots(\forall u\in w. \; u=y_2) \land \dots \land \delta_2(y_2) \ge 0\dots$. Finally, in the context of the conjuncts $c_{2,x_2} < 0$ and $(\forall u\in w. \; u=y_2)$, the right member of the equivalence: $$\big((\exists t. \; (w2(t)\lor w3(t))\land (t\neq y_1)) \Leftrightarrow ((\exists u\in w.
\;\delta_2(u)\ge 0 \land (u\neq y_2)) \lor c_{2,x_2} \ge 0)\big)$$ is false, so we can replace it by $\lnot(\exists t. \; (w2(t)\lor w3(t))\land (t\neq y_1))$, which expresses, as expected, that only one writer (here $y_1$) can be present at location $w2$. \subsection{Experimental results} We have implemented the algorithms for the decision procedure of ${\sf CML}$, the ${\sf post}$- and ${\sf pre}$-image computations, and the inductive invariant checking. Our prototype tool, implemented in OCaml, takes as input an invariant $\varphi_{inv}$ in $B(\Sigma_1)$ which is a conjunction of local invariants written in a special form (see the definition in Section~\ref{sect_special_form}). Indeed, the invariants are usually conjunctions of formulas, each of them being an assertion which must hold when the control is at some particular location. Then, it decomposes the inductive invariant checking problem (i.e., checking that ${\sf post}(\varphi_{inv})\land\lnot \varphi_{inv}$ is unsatisfiable) into several lemmas, one lemma for each transition of the input ${\sf CPN}$ model and for each local invariant in $\varphi_{inv}$ which contains places involved in the transition. For example, the tool generates 70 lemmas for the verification of the inductive invariant for the $RF$ property on the Reader-Writer lock example. However, lemma generation stops as soon as the decision procedure for ${\sf CML}$ returns satisfiable for one of the lemmas (which implies that $\varphi_{inv}$ is not an inductive invariant). The implemented decision procedure for ${\sf CML}$ is parameterized by the decision procedure for the logic of colors ${\sf FO}(\mathbb{C},\Omega,\Xi)$. In practice, we generate lemmas in the SMTLIB format, and we provide an interface to most of the well-known SMT solvers. Therefore, we can allow as color logic any theory supported by state-of-the-art SMT solvers. Using this prototype, we modeled and verified several parameterized versions of mutual exclusion algorithms.
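To make the decomposition scheme concrete, the following is a minimal Python sketch of the lemma enumeration described above (an illustration only, not the OCaml tool itself; `places_of` and `is_satisfiable` are hypothetical stand-ins for, respectively, extracting the places mentioned by a transition or a local invariant, and discharging one lemma through an SMT solver via SMTLIB):

```python
# Minimal sketch (not the authors' OCaml tool) of the lemma-based
# inductive invariant check described above.  `places_of` and
# `is_satisfiable` are hypothetical stand-ins: the real tool computes
# post-images of CML formulas and discharges each lemma, i.e. checks
# unsatisfiability of post_t(inv) /\ not inv, with an SMT solver.

def check_inductive(transitions, local_invariants, places_of, is_satisfiable):
    """One lemma per (transition, local invariant sharing a place with it);
    stop at the first satisfiable lemma, which refutes inductiveness."""
    lemmas = 0
    for t in transitions:
        for inv in local_invariants:
            # Only invariants mentioning places involved in t yield a lemma.
            if not places_of(t) & places_of(inv):
                continue
            lemmas += 1
            if is_satisfiable(t, inv):
                return False, lemmas  # not an inductive invariant
    return True, lemmas
```

With these stubs, the enumeration stops at the first satisfiable lemma, mirroring the early termination of the tool once $\varphi_{inv}$ is shown not to be inductive.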
The experimental results are given in Table~\ref{tab:exp}. (The considered models of the Burns and Bakery algorithms use atomic global condition checks over all the processes, although our framework allows in principle the consideration of models where global conditions are checked using non-atomic iterations over the set of processes.) For all these examples, the color logic is the difference logic over integers, for which we have used the decision procedure of Yices~\cite{Yices-06}. For each example, Table~\ref{tab:exp} gives the number of rules of the model, the number of conjuncts of the inductive invariant (in CNF), the number of lemmas generated for the SMT solver, and the global execution time. \begin{table} \begin{tabular}{|l||r|r|r|r|}\hline \textit{Algorithm} & \textit{Nb. rules} & \textit{Inv. size} & \textit{SMT Lemmas} & \textit{Time (sec.)} \\\hline\hline Burns~\cite{Burns-Lynch-80} & 9 & 6 & 92 & 0.81 \\ Ticket & 3 & 9 & 28 & 26.23 \\ Bakery~\cite{Lamport-74} & 3 & 5 & 10 & 0.15 \\ Dijkstra~\cite{Dijkstra-65} & 11 & 9 & 1177 & 18390.97 \\ Martin~\cite{Martin-86} & 8 & 7 & 837 & 980.97 \\ Szymanski~\cite{Szymanski-88} & 9 & 12 & 293 & 1065.1 \\ Reader-writer lock~\cite{Flanagan-Freund-Qadeer-02} & 6 & 9 & 70 & 2195.68 \\\hline \end{tabular} \caption{Experimental results.} \label{tab:exp} \end{table} \section{Conclusion} \label{sec:concl} We have presented a framework for reasoning about dynamic/parametric networks of processes manipulating data over infinite domains. We have provided generic models for these systems and a logic allowing us to specify their configurations, both being parametrized by a logic on the considered data domain. We have identified a fragment of this logic having a decidable satisfiability problem and which is closed under ${\sf post}$- and ${\sf pre}$-image computation, and we have shown the application of these results in verification.
Our framework allows us to deal in a uniform way with all classes of systems manipulating infinite data domains with a decidable first-order theory. In this paper, we have considered instantiations of this framework based on logics over integers or reals (which allows us to consider systems with numerical variables). Different data domains can be considered in order to deal with other classes of systems such as multithreaded programs where each process (thread) has an unbounded stack (due to procedure calls). Our future work also includes the extension of our framework to other classes of systems and features such as dynamic networks of timed processes, networks of processes with broadcast communication, interruptions and exception handling, etc. \end{document}
\begin{document} \title{Generalized Drazin invertibility of operator matrices} \begin{abstract} \noindent We study the generalized Drazin invertibility, as well as the Drazin and ordinary invertibility, of an operator matrix $M_C=\left( \begin{array}{cc} A & C \\ 0 & B\end{array} \right)$ acting on a Banach space $\mathcal{X} \oplus \mathcal{Y}$ or on a Hilbert space $\mathcal{H} \oplus \mathcal{K}$. As a consequence, some recent results are extended. \end{abstract} 2010 {\it Mathematics subject classification\/}. 47A10, 47A53. {\it Key words and phrases\/}. Operator matrices, generalized Drazin invertibility, generalized Kato decomposition, point spectrum, defect spectrum. \section{Introduction and Preliminaries} Let $\mathcal{X}$ and $\mathcal{Y}$ be infinite dimensional Banach spaces. The set of all bounded linear operators from $\mathcal{X}$ to $\mathcal{Y}$ will be denoted by $L(\mathcal{X},\mathcal{Y})$. For simplicity, we write $L(\mathcal{X})$ for $L(\mathcal{X}, \mathcal{X})$. The set \[\mathcal{X} \oplus \mathcal{Y}=\{(x,y): x \in \mathcal{X}, y \in \mathcal{Y} \} \] is a vector space with standard addition and multiplication by scalars. Under the norm \[||(x,y)||=(||x||^2+||y||^2)^{\frac{1}{2}}\] $\mathcal{X} \oplus \mathcal{Y}$ becomes a Banach space. If $\mathcal{X}_1$ and $\mathcal{Y}_1$ are closed subspaces of $\mathcal{X}$ and $\mathcal{Y}$ respectively, then we will sometimes use the notation $\left( \begin{array}{c} \mathcal{X}_1 \\ \mathcal{Y}_1 \end{array} \right)$ instead of $\mathcal{X}_1 \oplus \mathcal{Y}_1$. If $A \in L(\mathcal{X})$, $B \in L(\mathcal{Y})$ and $C \in L(\mathcal{Y}, \mathcal{X})$ are given, then \[\left( \begin{array}{cc} A & C \\ 0 & B\end{array} \right): \mathcal{X} \oplus \mathcal{Y} \to \mathcal{X} \oplus \mathcal{Y} \] represents a bounded linear operator on $\mathcal{X} \oplus \mathcal{Y}$ and is called an {\em upper triangular operator matrix}.
If $A$ and $B$ are given, then we write $M_C=\left( \begin{array}{cc} A & C \\ 0 & B \end{array} \right)$ in order to emphasize the dependence on $C$. Let $\mathbb{N} \, (\mathbb{N}_0)$ denote the set of all positive (non-negative) integers, and let $\mathbb{C}$ denote the set of all complex numbers. Given $T \in L(\mathcal{X}, \mathcal{Y})$, we denote by $N(T)$ and $R(T)$ the {\em kernel} and the {\em range} of $T$. The numbers $\alpha(T)={\rm dim} N(T)$ and $\beta(T)={\rm codim} R(T)$ are the {\em nullity} and the {\em deficiency} of $T \in L(\mathcal{X}, \mathcal{Y})$, respectively. The set \[\sigma(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not invertible} \} \] is the {\em spectrum} of $T \in L(\mathcal{X})$. An operator $T \in L(\mathcal{X})$ is {\em bounded below} if there exists some $c>0$ such that $c\|x\|\leq \|Tx\| \; \; \text{for every} \; \; x \in \mathcal{X}$. It is worth mentioning that the set of bounded below operators and the set of left invertible operators coincide in the Hilbert space setting. Also, the set of surjective operators and the set of right invertible operators coincide on a Hilbert space. Recall that $T \in L(\mathcal{X})$ is {\em nilpotent} when $T^n=0$ for some $n \in \mathbb{N}$, while $T \in L(\mathcal{X})$ is {\em quasinilpotent} if $T-\lambda$ is invertible for all complex $\lambda\ne 0$. The {\em ascent} of $T \in L(\mathcal{X})$ is defined as ${\rm asc}(T)=\inf\{n \in \mathbb{N}_0:N(T^n)=N(T^{n+1})\}$, and the {\em descent} of $T$ is defined as ${\rm dsc}(T)=\inf\{n \in \mathbb{N}_0:R(T^n)=R(T^{n+1})\}$, where the infimum over the empty set is taken to be infinity. If $K \subset \mathbb{C}$, then ${\rm acc} \, K$ denotes the set of accumulation points of $K$. If $M$ is a subspace of $\mathcal{X}$ such that $T(M) \subset M$, where $T \in L(\mathcal{X})$, then $M$ is said to be {\em $T$-invariant}. We define $T_M:M \to M$ as $T_Mx=Tx, \, x \in M$.
If $M$ and $N$ are two closed $T$-invariant subspaces of $\mathcal{X}$ such that $\mathcal{X}=M \oplus N$, we say that $T$ is {\em completely reduced} by the pair $(M,N)$, and this is denoted by $(M,N) \in Red(T)$. In this case we write $T=T_M \oplus T_N$ and say that $T$ is a {\em direct sum} of $T_M$ and $T_N$. An operator $T \in L(\mathcal{X})$ is said to be {\em Drazin invertible} if there exist $S \in L(\mathcal{X})$ and $k \in \mathbb{N}$ such that \[TS=ST, \; \; \; STS=S, \; \; \; T^kST=T^k.\] It is a classical result that $T \in L(\mathcal{X})$ is Drazin invertible if and only if $T=T_1 \oplus T_2$, where $T_1$ is invertible and $T_2$ is nilpotent; see \cite{K, lay}. The {\em Drazin spectrum} of $T$ is defined as \[\sigma_D(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not Drazin invertible} \} \] and it is compact \cite[Proposition 2.5]{Ber1}. An operator $T \in L(\mathcal{X})$ is {\em left Drazin invertible} if ${\rm asc}(T)<\infty$ and $R(T^{{\rm asc}(T)+1})$ is closed, and $T \in L(\mathcal{X})$ is {\em right Drazin invertible} if ${\rm dsc}(T)<\infty$ and $R(T^{{\rm dsc}(T)})$ is closed. J. Koliha extended the concept of Drazin invertibility in \cite{koliha}. An operator $T \in L(\mathcal{X})$ is said to be {\em generalized Drazin invertible} if there exists $S \in L(\mathcal{X})$ such that \[TS=ST, \; \; \; STS=S, \; \; \; TST-T \; \; \text{is quasinilpotent}.\] \noindent According to \cite[Theorem 4.2 and Theorem 7.1]{koliha}, $T \in L(\mathcal{X})$ is generalized Drazin invertible if and only if $0 \not \in {\rm acc} \, \sigma(T)$, which is exactly the case when $T=T_1 \oplus T_2$ with $T_1$ invertible and $T_2$ quasinilpotent. Naturally, the set \[\sigma_{gD}(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not generalized Drazin invertible} \} \] is the {\em generalized Drazin spectrum} of $T$.
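As a toy finite-dimensional illustration of these definitions (ours, not taken from the cited works), consider the matrix $T$ below: it splits as a direct sum of an invertible part and a nilpotent part, so it is Drazin invertible, and one checks directly that $S=T$ satisfies the three defining identities with $k=1$.

```latex
% Toy example (not from the cited works): T = (1) \oplus (0) on \mathbb{C}^2.
\[
T = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right),
\qquad S = T: \qquad
TS = ST = T, \qquad STS = S, \qquad T^{1}ST = T^{1}.
\]
```

Here $T_1=(1)$ is invertible and $T_2=(0)$ is nilpotent, matching the characterization just recalled; replacing $T_2$ by any quasinilpotent block gives, in the same way, an instance of generalized Drazin invertibility.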
We also recall the definitions of the following spectra of $T \in L(\mathcal{X})$:\par \noindent the point spectrum: $\sigma_p(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not injective} \}$,\par \noindent the defect spectrum: $\sigma_d(T)=\{\lambda \in \mathbb{C}: \overline{R(T-\lambda)} \neq \mathcal{X} \}$,\par \noindent the approximate point spectrum: $\sigma_{ap}(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not bounded below} \}$,\par \noindent the surjective spectrum: $\sigma_{su}(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not surjective} \}$,\par \noindent the left spectrum: $\sigma_l(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not left invertible} \}$,\par \noindent the right spectrum: $\sigma_r(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not right invertible} \}$.\par Let $M$ and $L$ be two closed linear subspaces of $\mathcal{X}$ and set \[\delta(M,L)=\sup \{{\rm dist}(u,L): u \in M, \, \|u\|=1\},\] in the case that $M \neq \{0\}$; otherwise we define $\delta(\{0\}, L)=0$ for any subspace $L$. The {\em gap} between $M$ and $L$ is defined by \[\hat{\delta}(M,L)=\max\{\delta(M,L), \delta(L,M)\}.\] It is known \cite[Corollary 10.10]{Mu} that \begin{equation}\label{gp1} \hat{\delta}(M,L)<1\Longrightarrow{\rm dim} M={\rm dim} L. \end{equation} An operator $T \in L(\mathcal{X})$ is {\em semi-regular} if $R(T)$ is closed and if one of the following equivalent statements holds:\par \noindent {\rm (i)} $N(T) \subset R(T^m)$ for each $m \in \mathbb{N}$;\par \noindent {\rm (ii)} $N(T^n) \subset R(T)$ for each $n \in \mathbb{N}$.\par \noindent It is clear that left and right invertible operators are semi-regular. V. M\"{u}ller showed \cite[Corollary 12.4]{Mu} that if $T \in L(\mathcal{X})$ is semi-regular and $0 \not \in {\rm acc} \, \sigma(T)$, then $T$ is invertible.
In particular, \begin{align} T \; \; \text{is left invertible and} \; \; 0 \not \in {\rm acc} \, \sigma(T) \; \; \Longrightarrow T \; \; \text{is invertible}, \label{leftsemi}\\ T \; \; \text{is right invertible and} \; \; 0 \not \in {\rm acc} \, \sigma(T) \; \; \Longrightarrow T \; \; \text{is invertible}.\label{rightsemi} \end{align} An operator $T \in L(\mathcal{X})$ is said to admit a {\em generalized Kato decomposition}, abbreviated as GKD, if there exists a pair $(M,N) \in Red(T)$ such that $T_M$ is semi-regular and $T_N$ is quasinilpotent. A relevant case is obtained if we assume that $T_N$ is nilpotent. In this case $T$ is said to be of {\em Kato type}. The invertibility, Drazin invertibility and generalized Drazin invertibility of upper triangular operator matrices have been studied by many authors; see for example \cite{Du, Han, ELA, Zhong, maroko, Dragan, Dragan1, Zhang, pseudoBW}. In this article we study primarily the generalized Drazin invertibility, but also the Drazin and ordinary invertibility, of operator matrices by using a technique that involves gap theory; see the auxiliary results: \eqref{gp1}, \eqref{leftsemi}, \eqref{rightsemi}, Lemma~\ref{lema1}, Lemma~\ref{lema3}. Let $A \in L(\mathcal{H})$ and $B \in L(\mathcal{K})$, where $\mathcal{H}$ and $\mathcal{K}$ are separable Hilbert spaces, and consider the following conditions:\par \noindent {\rm (i)} $A=A_1 \oplus A_2$, $A_1$ is bounded below and $A_2$ is quasinilpotent; \par \noindent{\rm (ii)} $B=B_1 \oplus B_2$, $B_1$ is surjective and $B_2$ is quasinilpotent; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$. \par \noindent Our main result states that if the conditions {\rm (i)}-{\rm (iii)} are satisfied, then there exists $C \in L(\mathcal{K}, \mathcal{H})$ such that $M_C$ is generalized Drazin invertible.
The converse is also true under the assumption that $A$ and $B$ admit a GKD. Moreover, we obtain the corresponding results concerning the Drazin invertibility of operator matrices. Further, let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$, where $\mathcal{X}$ and $\mathcal{Y}$ are Banach spaces. It is also shown that if $0$ is not an accumulation point of the defect spectrum of $A$ or of the point spectrum of $B$, and if the operator matrix $M_C=\left( \begin{array}{cc} A & C \\ 0 & B\end{array} \right)$ is invertible (resp. Drazin invertible, generalized Drazin invertible) for some $C \in L(\mathcal{Y}, \mathcal{X})$, then $A$ and $B$ are both invertible (resp. Drazin invertible, generalized Drazin invertible). What is more, we give several corollaries that improve some recent results. \section{Results} From now on $\mathcal{X}$ and $\mathcal{Y}$ will denote Banach spaces, while $\mathcal{H}$ and $\mathcal{K}$ will denote separable Hilbert spaces. We begin with several auxiliary lemmas. \begin{lemma} \label{lema1} Let $T \in L(\mathcal{X})$ be semi-regular. Then there exists $\epsilon >0$ such that $\alpha(T-\lambda)$ and $\beta(T-\lambda)$ are constant for $|\lambda|<\epsilon$. \end{lemma} \begin{proof} Let $T \in L(\mathcal{X})$ be semi-regular. The mapping $\lambda \to N(T-\lambda)$ is continuous at the point $0$ in the gap metric; see \cite[Theorem 1.38]{aiena} or \cite[Theorem 12.2]{Mu}. In particular, there exists $\epsilon_1>0$ such that $|\lambda|<\epsilon_1$ implies $\hat{\delta}(N(T), N(T-\lambda))<1$. From \eqref{gp1} we obtain ${\rm dim} N(T)={\rm dim} N(T-\lambda)$, i.e. $\alpha(T)=\alpha(T-\lambda)$ for $|\lambda|<\epsilon_1$. Further, $T^{\prime}$ is also semi-regular, where $T^{\prime}$ is the adjoint operator of $T$ \cite[Theorem 1.19]{aiena}. As above, we conclude that $\alpha(T^{\prime})=\alpha(T^{\prime}-\lambda)$ on an open disc centered at $0$.
Since $T-\lambda$ has closed range for sufficiently small $|\lambda|$ \cite[Theorem 1.31]{aiena}, it follows that there exists $\epsilon_2>0$ such that \[\beta(T)=\alpha(T^{\prime})=\alpha(T^{\prime}-\lambda)=\beta(T-\lambda) \; \; \text{for} \; \; |\lambda|<\epsilon_2.\] We put $\epsilon=\min\{\epsilon_1, \epsilon_2\}$, and the lemma follows. \end{proof} \begin{lemma} [\cite{Han}] \label{invertibility} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$. If $A$ and $B$ are both invertible, then $M_C$ is invertible for every $C \in L(\mathcal{Y}, \mathcal{X})$. In addition, if $M_C$ is invertible for some $C \in L(\mathcal{Y}, \mathcal{X})$, then $A$ is invertible if and only if $B$ is invertible. \end{lemma} \begin{lemma} [\cite{joint}] \label{lema3} \noindent {\rm (a)} Let $T \in L(\mathcal{X})$. The following conditions are equivalent:\par \noindent {\rm (i)} There exists $(M,N) \in Red(T)$ such that $T_M$ is bounded below (resp. surjective) and $T_N$ is quasinilpotent;\par \noindent {\rm (ii)} $T$ admits a GKD and $0 \not \in {\rm acc} \; \sigma_{ap}(T)$ (resp. $0 \not \in {\rm acc} \; \sigma_{su}(T))$. \noindent {\rm (b)} Let $T \in L(\mathcal{X})$. The following conditions are equivalent:\par \noindent {\rm (i)} There exists $(M,N) \in Red(T)$ such that $T_M$ is bounded below (resp. surjective) and $T_N$ is nilpotent;\par \noindent {\rm (ii)} $T$ is of Kato type and $0 \not \in {\rm acc} \; \sigma_{ap}(T)$ (resp. $0 \not \in {\rm acc} \; \sigma_{su}(T))$. \end{lemma} We now prove our first main result. 
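As a finite-dimensional sanity check of Lemma~\ref{lema1}'s companion, Lemma~\ref{invertibility} (a plain Python illustration of ours; the lemma itself concerns bounded operators on arbitrary Banach spaces), note that for an upper triangular matrix with scalar diagonal blocks the determinant, and hence invertibility, does not depend on the corner entry:

```python
# Illustration only: in finite dimensions, det M_C = det A * det B for a
# block upper triangular matrix, so M_C is invertible for every corner C
# exactly when the diagonal blocks A and B are both invertible.

def det2(m):
    """Determinant of a 2x2 matrix [[m00, m01], [m10, m11]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def m_c(a, b, c):
    """The operator matrix M_C with scalar (1x1) blocks: [[a, c], [0, b]]."""
    return [[a, c], [0, b]]

def invertible(a, b, c):
    """M_C is invertible iff its determinant a*b is nonzero."""
    return det2(m_c(a, b, c)) != 0
```

For instance, with $a=2$ and $b=3$ the determinant is $6$ whatever the corner entry is, while no corner entry makes $M_C$ invertible once a diagonal block is singular.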
\begin{theorem} \label{GD} Let $A \in L(\mathcal{H})$ and $B \in L(\mathcal{K})$ be given operators on separable Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$, respectively, such that:\par \noindent {\rm (i)} $A=A_1 \oplus A_2$, $A_1$ is bounded below and $A_2$ is quasinilpotent; \par \noindent{\rm (ii)} $B=B_1 \oplus B_2$, $B_1$ is surjective and $B_2$ is quasinilpotent; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$. \par \noindent Then there exists an operator $C \in L(\mathcal{K}, \mathcal{H})$ such that $M_C$ is generalized Drazin invertible. \end{theorem} \begin{proof} By assumption, there exist closed $A$-invariant subspaces $\mathcal{H}_1$ and $\mathcal{H}_2$ of $\mathcal{H}$ such that $\mathcal{H}_1 \oplus \mathcal{H}_2=\mathcal{H}$, $A_{\mathcal{H}_1}=A_1$ is bounded below and $A_{\mathcal{H}_2}=A_2$ is quasinilpotent. Also, there exist closed $B$-invariant subspaces $\mathcal{K}_1$ and $\mathcal{K}_2$ of $\mathcal{K}$ such that $\mathcal{K}_1 \oplus \mathcal{K}_2=\mathcal{K}$, $B_{\mathcal{K}_1}=B_1$ is surjective and $B_{\mathcal{K}_2}=B_2$ is quasinilpotent. It is clear that $\beta(A-\lambda)=\beta(A_1-\lambda)+\beta(A_2-\lambda)$ and $\alpha(B-\lambda)=\alpha(B_1-\lambda)+\alpha(B_2-\lambda)$ for every $\lambda \in \mathbb{C}$. Since $A_2$ and $B_2$ are quasinilpotent we have \begin{align} \beta(A-\lambda)=\beta(A_1-\lambda) \label{beta}, \\ \alpha(B-\lambda)=\alpha(B_1-\lambda), \label{alpha} \end{align} for every $\lambda \in \mathbb{C} \setminus \{0\}$. Further, according to Lemma~\ref{lema1}, there exists $\epsilon >0$ such that \begin{equation}\label{const} \beta(A_1-\lambda) \; \; \text{and} \; \; \alpha(B_1-\lambda) \; \; \text{are constant for} \; \; |\lambda|<\epsilon. \end{equation} Consider $\lambda_0 \in \mathbb{C}$ such that $0<|\lambda_0|<\min\{\epsilon, \delta\}$, where $\delta$ is from condition (iii).
Using \eqref{beta}, \eqref{alpha}, \eqref{const} and condition (iii), we obtain \[\beta(A_1)=\beta(A_1-\lambda_0)=\beta(A-\lambda_0)=\alpha(B-\lambda_0)=\alpha(B_1-\lambda_0)=\alpha(B_1).\] On the other hand, it is easy to see that $\left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right)$ and $\left( \begin{array}{c} \mathcal{H}_2 \\ \mathcal{K}_2 \end{array} \right)$ are closed subspaces of $\left( \begin{array}{c} \mathcal{H} \\ \mathcal{K} \end{array} \right)$ and that $\left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right) \oplus \left( \begin{array}{c} \mathcal{H}_2 \\ \mathcal{K}_2 \end{array} \right)=\left( \begin{array}{c} \mathcal{H} \\ \mathcal{K} \end{array} \right)$. $\mathcal{H}_1$ and $\mathcal{K}_1$ are separable Hilbert spaces in their own right, so from \cite[Theorem 2]{Du} it follows that there exists an operator $C_1 \in L(\mathcal{K}_1, \mathcal{H}_1)$ such that the operator $\left( \begin{array}{cc} A_1 & C_1 \\ 0 & B_1 \end{array} \right): \left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right) \to \left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right)$ is invertible. We define an operator $C \in L(\mathcal{K}, \mathcal{H})$ by \[C=\left( \begin{array}{cc} C_1 & 0 \\ 0 & 0 \end{array} \right): \left( \begin{array}{c} \mathcal{K}_1 \\ \mathcal{K}_2 \end{array} \right) \to \left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{H}_2 \end{array} \right).\] An easy computation shows that $\left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right)$ and $\left( \begin{array}{c} \mathcal{H}_2 \\ \mathcal{K}_2 \end{array} \right)$ are invariant for $M_C=\left( \begin{array}{cc} A & C \\ 0 & B \end{array} \right)$ and also \begin{align*} (M_C)_{\mathcal{H}_1 \oplus \mathcal{K}_1}=\left( \begin{array}{cc} A_1 & C_1 \\ 0 & B_1 \end{array} \right), \\ (M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}=\left( \begin{array}{cc} A_2 & 0\\ 0 & B_2 \end{array} \right).
\end{align*} Since $A_2$ and $B_2$ are quasinilpotent, from $\sigma((M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2})=\sigma(A_2) \cup \sigma(B_2)=\{0\}$, it follows that $(M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}$ is quasinilpotent. Finally, $(M_C)_{\mathcal{H}_1 \oplus \mathcal{K}_1}$ is invertible and thus $M_C$ is generalized Drazin invertible. \end{proof} \noindent Using Theorem~\ref{GD}, we obtain \cite[Theorem 2.1]{maroko} in a simpler way. \begin{theorem} [\cite{maroko}] \label{D} Let $A \in L(\mathcal{H})$ and $B \in L(\mathcal{K})$ be given operators on separable Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$, respectively, such that:\par \noindent {\rm (i)} $A$ is left Drazin invertible; \par \noindent{\rm (ii)} $B$ is right Drazin invertible; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$. \par \noindent Then there exists an operator $C \in L(\mathcal{K}, \mathcal{H})$ such that $M_C$ is Drazin invertible. \end{theorem} \begin{proof} By \cite[Theorem 3.12]{Ber0} it follows that there exist pairs $(\mathcal{H}_1, \mathcal{H}_2) \in Red(A)$ and $(\mathcal{K}_1, \mathcal{K}_2) \in Red(B)$ such that $A_{\mathcal{H}_1}=A_1$ is bounded below, $B_{\mathcal{K}_1}=B_1$ is surjective, $A_{\mathcal{H}_2}=A_2$ and $B_{\mathcal{K}_2}=B_2$ are nilpotent. From the proof of Theorem~\ref{GD} we conclude that there exists $C \in L(\mathcal{K}, \mathcal{H})$ such that \begin{align*} M_C=(M_C)_{\mathcal{H}_1 \oplus \mathcal{K}_1} \oplus (M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}, \\ (M_C)_{\mathcal{H}_1 \oplus \mathcal{K}_1} \; \text{is invertible},\\ (M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}=\left( \begin{array}{cc} A_2 & 0\\ 0 & B_2 \end{array} \right).
\end{align*} For sufficiently large $n \in \mathbb{N}$ we have \[\left( \begin{array}{cc} A_2 & 0\\ 0 & B_2 \end{array} \right)^n=\left( \begin{array}{cc} (A_2)^n & 0\\ 0 & (B_2)^n \end{array} \right)=\left( \begin{array}{cc} 0 & 0\\ 0 & 0 \end{array} \right). \] It means that $(M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}$ is nilpotent, and the proof is complete. \end{proof} Under additional assumptions, the converse implications in Theorem~\ref{GD} and Theorem~\ref{D} are also true, even in the context of Banach spaces. \begin{theorem} \label{converse1} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$ admit a GKD. If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is generalized Drazin invertible, then the following hold: \noindent {\rm (i)} $A=A_1 \oplus A_2$, $A_1$ is bounded below and $A_2$ is quasinilpotent; \par \noindent{\rm (ii)} $B=B_1 \oplus B_2$, $B_1$ is surjective and $B_2$ is quasinilpotent; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$. \par \end{theorem} \begin{proof} Let $M_C$ be generalized Drazin invertible for some $C \in L(\mathcal{Y}, \mathcal{X})$. Then there exists $\delta >0$ such that $M_C-\lambda$ is invertible for $0<|\lambda|<\delta$. According to \cite[Theorem 2]{Han}, we have $0 \not \in {\rm acc} \; \sigma_l(A)$ and $0 \not \in {\rm acc} \; \sigma_r(B)$, and statement {\rm (iii)} is satisfied. This also means that $0 \not \in {\rm acc} \; \sigma_{ap}(A)$ and $0 \not \in {\rm acc} \; \sigma_{su}(B)$. By Lemma~\ref{lema3}, statements {\rm (i)} and {\rm (ii)} are also satisfied. \end{proof} \begin{theorem} \label{converse2} Let $A \in L(\mathcal{X})$ be of Kato type and let $B \in L(\mathcal{Y})$ admit a GKD.
If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is Drazin invertible, then the following hold:\par \noindent {\rm (i)} $A$ is left Drazin invertible; \par \noindent{\rm (ii)} $B$ is right Drazin invertible; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$. \par \end{theorem} \begin{proof} Applying the same argument as in Theorem~\ref{converse1}, we obtain that statement {\rm (iii)} holds and $0 \not \in {\rm acc} \; \sigma_{ap}(A) \cup {\rm acc} \; \sigma_{su}(B)$. Now Lemma~\ref{lema3} implies that there exist $(\mathcal{X}_1, \mathcal{X}_2) \in Red(A)$ and $(\mathcal{Y}_1, \mathcal{Y}_2) \in Red(B)$ such that $A_{\mathcal{X}_1}=A_1$ is bounded below, $A_{\mathcal{X}_2}=A_2$ is nilpotent, $B_{\mathcal{Y}_1}=B_1$ is surjective and $B_{\mathcal{Y}_2}=B_2$ is quasinilpotent. Since ${\rm dsc}(B)<\infty$ by \cite[Lemma 2.6]{Zhong}, we have ${\rm dsc}(B_2)<\infty$, so $B_2$ is a quasinilpotent operator with finite descent. We conclude that $B_2$ is nilpotent by \cite[Corollary 10.6, p. 332]{TL}. Let $n \geq d$, where $d \in \mathbb{N}$ is such that $(A_2)^d=0$ and $(A_2)^{d-1} \neq 0$. We have \begin{align*} N(A^n)=N((A_1)^n) \oplus N((A_2)^n)=\mathcal{X}_2, \\ N(A^{d-1})=N((A_1)^{d-1}) \oplus N((A_2)^{d-1})=N((A_2)^{d-1}) \subsetneq \mathcal{X}_2. \end{align*} It follows that ${\rm asc}(A)=d<\infty$. From $R(A^n)=R((A_1)^n) \oplus R((A_2)^n)=R((A_1)^n)$ we conclude that $R(A^n)$ is closed, and therefore $A$ is left Drazin invertible. In a similar way we prove that $B$ is right Drazin invertible. \end{proof} \noindent An operator $T \in L(\mathcal{X})$ is {\em semi-Fredholm} if $R(T)$ is closed and if $\alpha(T)$ or $\beta(T)$ is finite. The class of semi-Fredholm operators is contained in the class of Kato type operators \cite[Theorem 16.21]{Mu}. According to this observation, Theorem~\ref{converse2} is an extension of \cite[Corollary 2.3]{maroko}.
For $\delta > 0$, set $\mathbb{D}(0, \delta)=\{\lambda \in \mathbb{C}: |\lambda|<\delta\}$. The following theorem is our second main result. \begin{theorem} \label{KolihaDrazin} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$ be given operators such that $0 \not \in {\rm acc} \, \sigma_d(A)$ or $0 \not \in {\rm acc} \, \sigma_p(B)$. \noindent {\rm (i)} If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is generalized Drazin invertible, then $A$ and $B$ are both generalized Drazin invertible.\par \noindent {\rm (ii)} If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is Drazin invertible, then $A$ and $B$ are both Drazin invertible.\par \noindent {\rm (iii)} If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is invertible, then $A$ and $B$ are both invertible. \end{theorem} \begin{proof} {\rm (i)}. Suppose that $0 \not \in {\rm acc} \, \sigma_d(A)$ and that $M_C$ is generalized Drazin invertible for some $C \in L(\mathcal{Y}, \mathcal{X})$. Since $0 \not \in {\rm acc} \, \sigma(M_C)$ and $0 \not \in {\rm acc} \, \sigma_d(A)$, there exists $\delta > 0$ such that \begin{align} M_C-\lambda \; \text{is invertible}, \label{eq1}\\ \overline{R(A-\lambda)}=\mathcal{X}, \label{eq2} \end{align} for every $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$. From \cite[Theorem 2]{Han} it follows that $A-\lambda$ is left invertible for every $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$. Since $A-\lambda$ has closed range for every $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$, from \eqref{eq2} we conclude $R(A-\lambda)=\overline{R(A-\lambda)}=\mathcal{X}$ for $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$, i.e. $A-\lambda$ is surjective for $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$. Moreover, $A-\lambda$ is injective for $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$, hence $A-\lambda$ is invertible for $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$.
It means that $0 \not \in {\rm acc} \, \sigma(A)$, so $A$ is generalized Drazin invertible. Now, \eqref{eq1} and Lemma~\ref{invertibility} imply that $B$ is generalized Drazin invertible. Assume that $0 \not \in {\rm acc} \, \sigma_p(B)$ and that $M_C$ is generalized Drazin invertible for some $C \in L(\mathcal{Y}, \mathcal{X})$. There exists $\delta>0$ such that \eqref{eq1} holds and \begin{equation}\label{eq3} B-\lambda \; \text{is injective for} \; \lambda \in \mathbb{D}(0, \delta)\setminus \{0\}. \end{equation} $B$ is generalized Drazin invertible by \eqref{eq1}, \eqref{eq3} and \cite[Theorem 2]{Han}. We apply Lemma~\ref{invertibility} again and obtain that $A$ is generalized Drazin invertible.\par \noindent{\rm (ii)}. If there exists $C \in L(\mathcal{Y},\mathcal{X})$ such that $M_C$ is Drazin invertible, then $M_C$ is also generalized Drazin invertible. According to part {\rm (i)}, it follows that $B$ is generalized Drazin invertible, i.e. $0 \not \in {\rm acc} \, \sigma(B)$. Further, ${\rm dsc}(B)<\infty$ by \cite[Lemma 2.6]{Zhong}, so from \cite[Corollary 1.6]{dsc} it follows that $B$ is Drazin invertible. We apply \cite[Lemma 2.7]{Zhong} and obtain that $A$ is also Drazin invertible. \par \noindent{\rm (iii)}. $A$ is left invertible and $B$ is right invertible by \cite[Theorem 2]{Han}. On the other hand, part {\rm (i)} now leads to $0 \not \in {\rm acc} \, \sigma(A)$ and $0 \not \in {\rm acc} \, \sigma(B)$. The result follows from \eqref{leftsemi} and \eqref{rightsemi}. \end{proof} \noindent Part {\rm (ii)} of Theorem~\ref{KolihaDrazin} is also an extension of \cite[Corollary 2.3]{maroko}. The following result is an immediate consequence of Theorem~\ref{KolihaDrazin}. The second inclusion of Corollary~\ref{inkluzije} is proved in \cite[Corollary 4]{Han}, \cite[Theorem 5.1]{Dragan} and \cite[Lemma 2.3]{Dragan1}. \begin{corollary} \label{inkluzije} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$.
If $\sigma_{\ast} \in \{\sigma, \sigma_{D}, \sigma_{gD}\}$, then we have \[(\sigma_{\ast}(A) \cup \sigma_{\ast}(B))\setminus ({\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B)) \subset \sigma_{\ast}(M_C) \subset \sigma_{\ast}(A) \cup \sigma_{\ast}(B) \] for every $C \in L(\mathcal{Y}, \mathcal{X})$. \end{corollary} \begin{remark} \label{remark} {\em Let $T \in L(\mathcal{X})$. The inclusions ${\rm acc} \, \sigma_p(T) \subset {\rm acc} \, \sigma_l(T) \subset {\rm acc} \, \sigma(T) \subset \sigma(T)$ and ${\rm acc} \, \sigma_d(T) \subset {\rm acc} \, \sigma_r(T) \subset {\rm acc} \, \sigma(T) \subset \sigma(T)$ are clear. According to \cite[Theorem 12(iii)]{Bo} and from the fact that the Drazin spectrum is compact, we have ${\rm acc} \, \sigma_p(T) \subset {\rm acc} \, \sigma(T)={\rm acc} \, \sigma_D(T) \subset \sigma_D(T)$ and also ${\rm acc} \, \sigma_d(T) \subset {\rm acc} \, \sigma(T)={\rm acc} \, \sigma_D(T) \subset \sigma_D(T)$. From these considerations we obtain the following statements for every $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$: \noindent {\rm (i)} ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B) \subset \sigma(A) \cap \sigma(B)$;\par \noindent {\rm (ii)} ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B) \subset \sigma_D(A) \cap \sigma_D(B)$;\par \noindent {\rm (iii)} ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B) \subset {\rm acc} \, \sigma_r(A) \cap {\rm acc} \, \sigma_l(B)\subset \sigma_r(A) \cap \sigma_l(B)$. } \end{remark} \noindent Remark~\ref{remark} shows that Corollary~\ref{inkluzije} is an improvement of \cite[Corollary 4]{Han}, \cite[Corollary 3.17]{pseudoBW} and of equation (1) in \cite{Zhong}. Another extension of equation (1) from \cite{Zhong} may be found in \cite[Proposition 2.2]{maroko}. We also generalize \cite[Corollary 2.6]{maroko}. \begin{corollary} \label{cormar} Let $A \in L(\mathcal{X})$, $B \in L(\mathcal{Y})$ and let $\sigma_{\ast} \in \{\sigma, \sigma_{D}, \sigma_{gD}\}$.
If ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B)=\emptyset$, then \begin{equation} \label{formula} \sigma_{\ast}(M_C)=\sigma_{\ast}(A) \cup \sigma_{\ast}(B) \; \; \text{for every} \; \; C \in L(\mathcal{Y}, \mathcal{X}). \end{equation} In particular, if $\sigma_r(A) \cap \sigma_l(B)=\emptyset$, then \eqref{formula} is satisfied. \end{corollary} \begin{proof} Apply Corollary~\ref{inkluzije} and {\rm (iii)} of Remark~\ref{remark}. \end{proof} \begin{example} {\em Let $\mathcal{X}=\mathcal{Y}=\ell^2(\mathbb{N})$ and let $U$ be the forward unilateral shift operator on $\ell^2(\mathbb{N})$. It is known that \[\sigma(U)=\sigma_D(U)=\sigma_{gD}(U)=\{\lambda \in \mathbb{C}:|\lambda| \leq 1\}.\] Since $\sigma_p(U)=\emptyset$, from Corollary~\ref{cormar} we conclude \[\sigma_{\ast}\left( \left(\begin{array}{cc} A & C \\ 0 & U\end{array} \right) \right)=\sigma_{\ast}(A) \cup \sigma_{\ast}(U),\] where $\sigma_{\ast} \in \{\sigma, \sigma_D, \sigma_{gD} \}$ and $A, C \in L(\ell^2(\mathbb{N}))$ are arbitrary operators. } \end{example} \noindent We finish with a slight extension of \cite[Corollary 7]{Han}, \cite[Theorem 2.9]{Zhong} and \cite[Theorem 3.18]{pseudoBW}. \begin{theorem} \label{holest} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$. If $\sigma_{\ast} \in \{\sigma, \sigma_{D}, \sigma_{gD}\}$, then for every $C \in L(\mathcal{Y}, \mathcal{X})$ we have \begin{equation} \label{holes} \sigma_{\ast}(A) \cup \sigma_{\ast}(B)=\sigma_{\ast}(M_C) \cup W, \end{equation} where $W$ is the union of certain holes in $\sigma_{\ast}(M_C)$, which happen to be subsets of ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B)$. \end{theorem} \begin{proof} Expression \eqref{holes} is the main result of \cite{Han}, \cite{Zhong} and \cite{Zhang}. Corollary~\ref{inkluzije} shows that the holes in $\sigma_{\ast}(M_C)$ that get filled must be contained in ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B)$.
\end{proof} \noindent {\bf Acknowledgements.} The author is supported by the Ministry of Education, Science and Technological Development, Republic of Serbia, grant no. 174007. \noindent \author{Milo\v s D. Cvetkovi\'c} \noindent University of Ni\v s\\ Faculty of Sciences and Mathematics\\ P.O. Box 224, 18000 Ni\v s, Serbia \noindent {\it E-mail}: {\tt miloscvetkovic83@gmail.com} \end{document}
\begin{document} \begin{abstract} We consider weighted $L^p$-Hardy inequalities involving the distance to the boundary of a domain in the $n$-dimensional Euclidean space with nonempty boundary. Using criticality theory, we give an alternative proof of the following result of F.~G.~Avkhadiev (2006). \noindent{\bf Theorem.} {\em Let $\Omega \subsetneqq \mathbb{R}^n$, $n\geq 2$, be an arbitrary domain, $1<p<\infty$ and $\alpha + p>n$. Let $\mathrm{d}_\Omega(x) =\mathrm{dist}(x,\partial \Omega )$ denote the distance of a point $x\in \Omega$ to $\partial \Omega$. Then the following Hardy-type inequality holds $$ \int_{\Omega }\frac{|\nabla \varphi |^p}{\mathrm{d}_\Omega^{\alpha}}\,\mathrm{d}x \geq \left( \frac{\alpha +p-n}{p}\right)^p \int_{\Omega }\frac{|\varphi|^p}{\mathrm{d}_\Omega^{p+\alpha}}\,\mathrm{d}x \qquad \forall \varphi\in C^{\infty }_c(\Omega),$$ and the lower bound constant $\left( \frac{\alpha +p-n}{p}\right)^p$ is sharp. } \end{abstract} \maketitle \section{Introduction}\label{sec0} Let $\Omega$ be a domain in ${\mathbb{R}}^n$, $n\geq 2$, with nonempty boundary, and let $\mathrm{d}_\Omega (x)=\mathrm{dist}(x,\partial \Omega )$ denote the distance of a point $x\in \Omega$ to the boundary of $\Omega$. Fix $p\in (1,\infty )$. We say that the {\em $L^p$-Hardy inequality} is satisfied in $\Omega $ if there exists $c>0$ such that \begin{equation} \label{mainhardy} \int_{\Omega }|\nabla \varphi|^p\,\mathrm{d}x \geq c \int_{\Omega }\frac{|\varphi|^p}{\mathrm{d}_\Omega^p}\,\mathrm{d}x \qquad \mbox{ for all $\varphi\in C^{\infty }_c(\Omega)$}. \end{equation} The {\em $L^p$-Hardy constant} of $\Omega$ is the best constant $c$ in inequality \eqref{mainhardy}, denoted here by $H_p(\Omega )$.
It is a classical result that goes back to Hardy himself (see for example \cite{BEL, permalkuf}) that if $n=1$ and $\Omega \subsetneqq\mathbb{R}$ is a bounded or unbounded interval, then the $L^p$-Hardy inequality holds and $H_{p}(\Omega)$ coincides with the widely known constant $$ c_p=\biggl(\frac{p-1}{p}\biggr)^p. $$ Recall that if $\Omega $ is bounded and has a sufficiently regular boundary in ${\mathbb{R}}^n$, then the $L^p$-Hardy inequality holds and $0< H_p(\Omega )\le c_p$ (for instance, see \cite{anc,mamipi}). Moreover, if $\Omega$ is convex, or more generally, if it is weakly mean convex, i.e., if $\Delta \mathrm{d}_\Omega\leq 0$ in the distributional sense in $\Omega$ (see \cite{gromov,giga and giovanni, LLL, psaradakis}), then $H_p(\Omega )=c_p $ \cite{barfilter, dam, mamipi}. On the other hand, it is also well-known (see for example \cite{BEL,permalkuf}) that if $\Omega ={\mathbb{R}}^n\setminus\{0\}$ and $p\ne n$, then the $L^p$-Hardy inequality holds and $H_{p}(\Omega)$ coincides with the other widely known constant $$ c_{p,n}=\biggl|\frac{p-n}{p}\biggr|^p, $$ which indicates that the $L^p$-Hardy inequality does not hold for ${\mathbb{R}}^n\setminus\{0\}$ if $p=n$. In the present paper we study a {\em weighted} $L^p$-Hardy inequality involving the distance function to the boundary. We give a new proof of the following result. \begin{theorem}\label{main_thm} Let $\Omega \subsetneqq \mathbb{R}^n$ be an arbitrary domain, where $n\geq 2$. Fix $1<p<\infty$ and $\alpha + p>n$.
Then \begin{equation}\label{dp15} \int_{\Omega }\frac{|\nabla \varphi|^p}{\mathrm{d}_\Omega^{\alpha}}\,\mathrm{d}x \geq \left( \frac{\alpha +p-n}{p}\right)^p \int_{\Omega }\frac{|\varphi|^p}{\mathrm{d}_\Omega^{p+\alpha}}\,\mathrm{d}x \qquad \forall \varphi\in C^{\infty }_c(\Omega), \end{equation} and the lower bound constant $$c_{\alpha,p,n}:= \left( \frac{\alpha +p-n}{p}\right)^p$$ is sharp. In particular, for $p>n$ we have $$H_p(\Omega )\geq c_{p,n}=\left( \frac{p-n}{p}\right)^p \quad \mbox{for any domain } \Omega \subsetneqq \mathbb{R}^n.$$ \end{theorem} \begin{remark} Theorem~\ref{main_thm} was proved by F.~G.~Avkhadiev in \cite{avk} using a cubic approximation of $\Omega$. One should note that J.~L.~Lewis \cite{Lewis} proved that \eqref{mainhardy} holds true (for $\alpha=0$) with {\em a fixed positive constant independent of} $\Omega$, and in \cite{Wannebo}, A.~Wannebo generalized Lewis' result to the case $\alpha + p>n$. \end{remark} We need the following version of the Harnack convergence principle, which will be used several times throughout the paper.
\begin{proposition}[Harnack convergence principle]\label{propdp1} Consider an exhaustion $\{\Omega_i\}_{i=1}^\infty$ of $\Omega$ by smooth bounded domains such that $$ \left\{ x \in \overline{\Omega}~:~ \mathrm{d}_\Omega(x)>\frac{1}{i}\right\} \subseteq \Omega_i \Subset \Omega_{i+1}, \mbox{ and } \cup_{i \in \mathbb{N}}\Omega_i = \Omega.$$ For each $i\in \mathbb{N}$, let $u_i$ be a positive (weak) solution of the equation \begin{equation*}\label{dp2} -\mathrm{div}\,(\mathrm{d}_{\Omega_i}^{-\alpha} |\nabla u_i|^{p-2}\nabla u_i ) -\mu_i\frac{|u_i|^{p-2}u_i}{\mathrm{d}_{\Omega_i}^{\alpha+p}}=0 \qquad \text{ in } \Omega_i \end{equation*} such that $u_i(x_0)=1$, where $x_0 \in \Omega_1$, and $\mu_i \in \mathbb{R}$. If $\mu_i\to\mu$, then there exists $0<\beta<1$ such that, up to a subsequence, $\{u_i\}$ converges in $C^{0,\beta}_{\mathrm{loc}}(\Omega)$ to a positive (weak) solution $u\in W^{1,p}_{\mathrm{loc}}(\Omega)$ of the equation \begin{equation*} -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla u|^{p-2}\nabla u ) -\mu\frac{|u|^{p-2}u}{\mathrm{d}_\Omega^{\alpha+p}}=0 \qquad \text{ in } \Omega . \end{equation*} \end{proposition} \begin{proof} Since $\mathrm{d}_{\Omega_i}\to \mathrm{d}_{\Omega}$, the proposition follows directly from \cite[Proposition 2.7]{GP}.
\end{proof} The paper is organized as follows: In Section~\ref{sec2} we give our proof of Theorem~\ref{main_thm}, while in the Appendix we outline two alternative proofs. \section{Proof of Theorem \ref{main_thm}}\label{sec2} Our proof of Theorem~\ref{main_thm} is based on a simple construction of (weak) positive supersolutions to the associated Euler-Lagrange equation \begin{equation}\label{EL_eq} -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha}|\nabla u|^{p-2}\nabla u) - \mu \frac{|u|^{p-2}u}{\mathrm{d}_{\Omega}^{\alpha+p}} =0 \qquad \mbox{in } \Omega, \end{equation} for any $\mu<c_{\alpha,p,n}$. Theorem~\ref{main_thm} then follows from the Harnack convergence principle (Proposition~\ref{propdp1}) together with the Agmon-Allegretto-Piepenbrink-type (AAP) theorem~\cite[Theorem~4.3]{pinpsa}, which asserts that the Hardy inequality \eqref{dp15} holds true if and only if \eqref{EL_eq} admits a positive (super)solution for $\mu=c_{\alpha,p,n}$. It seems that the method of the proof can be used to prove lower bounds for the best Hardy constant in other situations as well. \begin{proof}[Proof of Theorem \ref{main_thm}] A direct computation shows that for any $y \in \mathbb{R}^n$, the function $$u_{y}(x):=|x-y|^{(\alpha+p-n)/(p-1)}$$ is a positive solution of the equation $$-\mathrm{div}\,(\mathrm{d}_{\Omega_y}^{-\alpha}(x)|\nabla u|^{p-2}\nabla u)=0 \qquad \mbox{in } \Omega_y := \mathbb{R}^n\setminus \{ y \},$$ where $\mathrm{d}_{\Omega_y}(x)=|x-y|$.
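The direct computation invoked here is routine but worth recording; the following sketch uses the shorthand $r=|x-y|$ and $\gamma=(\alpha+p-n)/(p-1)$, introduced only for this verification:

```latex
% Sketch: u_y(x) = r^{\gamma}, with r = |x-y| and \gamma = (\alpha+p-n)/(p-1) > 0.
% Since \gamma(p-1) = \alpha + p - n, the exponent collapses:
%   (\gamma-1)(p-1) - \alpha = (\alpha+p-n) - (p-1) - \alpha = 1-n.
\[
\mathrm{d}_{\Omega_y}^{-\alpha}\,|\nabla u_y|^{p-2}\nabla u_y
  = \gamma^{p-1}\, r^{(\gamma-1)(p-1)-\alpha}\,\frac{x-y}{r}
  = \gamma^{p-1}\,\frac{x-y}{|x-y|^{n}}\,,
\]
% and the radial field (x-y)/|x-y|^n is divergence free in \mathbb{R}^n\setminus\{y\},
% so -\mathrm{div}(\mathrm{d}_{\Omega_y}^{-\alpha}|\nabla u_y|^{p-2}\nabla u_y)=0 there.
```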
Hence, using the supersolution construction \cite{depi}, it follows that $$v_y(x):= u_{y}^{(p-1)/p}(x) =|x-y|^{(\alpha+p-n)/p}$$ is a positive solution of the equation $$-\mathrm{div}\,(\mathrm{d}_{\Omega_y}^{-\alpha}|\nabla u|^{p-2}\nabla u) - c_{\alpha,p,n}\frac{|u|^{p-2}u}{\mathrm{d}_{\Omega_y}^{\alpha+p}} =0 \qquad \mbox{in } \Omega_y.$$ Moreover, it is known \cite{BEL} (see also \cite{avk06}) that $c_{\alpha,p,n}$ is the best constant $\mu$ for the inequality \begin{equation*} \int_{\Omega_y} \frac{|\nabla \varphi|^p}{|x-y|^{\alpha}} \,\mathrm{d}x \geq \mu \int_{\Omega_y}\frac{|\varphi|^p}{|x-y|^{p+\alpha}}\,\mathrm{d}x \qquad \forall \varphi\in C^{\infty}_c(\Omega_y). \end{equation*} Hence, any lower bound $\mu$ for the best constant in the functional inequality \begin{equation*} \int_{\Omega }\frac{|\nabla \varphi|^p}{\mathrm{d}_\Omega^{\alpha}}\,\mathrm{d}x \geq \mu \int_{\Omega }\frac{|\varphi|^p}{\mathrm{d}_\Omega^{p+\alpha}}\,\mathrm{d}x \qquad \forall \varphi\in C^{\infty }_c(\Omega), \end{equation*} valid for every domain $\Omega\subsetneqq \mathbb{R}^n$, is less than or equal to $c_{\alpha,p,n}$.
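The exponent bookkeeping behind this step can be checked directly; the following sketch uses the shorthand $r=|x-y|$, $\beta=(\alpha+p-n)/p$ and $E:=(\beta-1)(p-1)-\alpha$ (not used elsewhere). From $\beta p=\alpha+p-n$ one gets $E=1-n-\beta$, and for a radial field, $\mathrm{div}\,\bigl(r^{E}\,\tfrac{x-y}{r}\bigr)=(E+n-1)\,r^{E-1}$, hence:

```latex
% v_y = r^{\beta}, so r^{-\alpha}|\nabla v_y|^{p-2}\nabla v_y = \beta^{p-1} r^{E}\,(x-y)/r.
\[
-\mathrm{div}\,\bigl(r^{-\alpha}|\nabla v_y|^{p-2}\nabla v_y\bigr)
  = -\beta^{p-1}(E+n-1)\,r^{E-1}
  = \beta^{p}\, r^{-n-\beta}
  = c_{\alpha,p,n}\,\frac{v_y^{\,p-1}}{r^{\alpha+p}}\,,
\]
% using -(E+n-1) = \beta, \quad \beta^p = c_{\alpha,p,n},
% and \beta(p-1) - \alpha - p = -n-\beta.
```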
It remains to prove that \eqref{EL_eq} admits positive supersolutions in $\Omega$ for $$\mu= \mu_\delta:= c_{\alpha,p,n}-\delta>0, \qquad \forall\, 0<\delta<c_{\alpha,p,n},$$ where $\Omega \subsetneqq \mathbb{R}^n$ is an arbitrary domain. We divide the proof into two steps. \textbf{Step 1:} Assume first that $\Omega$ is a smooth bounded domain. Fix $\delta$ as above, and choose $\varepsilon>0$ small enough such that \begin{align}\label{dp12} \varepsilon< \min \left\{ \left(\frac{ c_{\alpha,p,n}}{\mu_\delta}\right)^{1/p} \!-1,\; \frac{\mu_\delta(\alpha+p-n)}{p|\alpha| c_{\alpha,p,n} }\right\}. \end{align} For $x \in \Omega$, let $P(x)\in \partial \Omega$ be the projection of $x$ onto $\partial \Omega$, that is, $|x-P(x)|=\mathrm{d}_\Omega(x)$, which is well defined for a.e. $x\in \Omega$. For any $y\in \partial \Omega$, consider the set \begin{align*} D_{y,\varepsilon}:=\Big\{ x\in \Omega \mid & ~ |x-y|< (1+ \varepsilon) \mathrm{d}_\Omega(x),~ \cos(x-y,x-P(x))>1-\varepsilon, \\ & \quad \text{ and }\mathrm{d}_\Omega(x)>\varepsilon/2 \Big\}. \end{align*} If \begin{align}\label{dp14} \Omega_\varepsilon= \{ x \in \Omega \mid \mathrm{d}_\Omega(x)>\varepsilon\}, \end{align} then $\displaystyle\cup_{y \in \partial \Omega} D_{y,\varepsilon}$ is an open covering of the compact set $\overline{\Omega}_\varepsilon$.
Therefore, there exist $y_i$, $i=1,2,\ldots,m$, such that $\Omega_\varepsilon \subseteq \displaystyle\cup_{i=1}^{m} D_{y_i,\varepsilon}$. We note that $u_y$ is a positive supersolution to the equation \begin{equation*} -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla u|^{p-2}\nabla u ) + \varepsilon |\alpha| k_{\alpha,p,n}\frac{|u|^{p-2}u }{\mathrm{d}_{\Omega}^{\alpha+p}} =0 \qquad \text{ in } D_{y,\varepsilon}, \end{equation*} where $k_{\alpha,p,n}:= \left( \frac{\alpha+p-n}{p-1}\right)^{p-1}$. Indeed, for $\alpha\geq0$, \begin{align*} -\mathrm{div}\,(&\mathrm{d}_{\Omega}^{-\alpha} |\nabla u_y|^{p-2}\nabla u_y ) \\& = \alpha \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}\left(\frac{ \nabla \mathrm{d}_\Omega\cdot (x-y)|x-y|^{\alpha-n}}{\mathrm{d}_\Omega} -|x-y|^{\alpha-n} \right)\\ & \geq \alpha \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \left(\frac{ |\nabla \mathrm{d}_\Omega||x-y|(1-\varepsilon)}{\mathrm{d}_\Omega} -1\right)\\ &\geq \alpha \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \left( 1-\varepsilon-1 \right)\\ & = -\varepsilon \alpha \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \qquad \mbox{in } D_{y,\varepsilon}. \end{align*} Hence, \begin{align*} -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla u_y|^{p-2}\nabla u_y )& + \varepsilon |\alpha| k_{\alpha,p,n}\frac{|u_y|^{p-2}u_y }{\mathrm{d}_{\Omega}^{\alpha+p}}\\ & \geq \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \left(-\varepsilon\alpha + \varepsilon |\alpha| \frac{|x-y|^p}{\mathrm{d}_{\Omega}^{p}} \right)\\ & \geq \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \left(-\varepsilon\alpha + \varepsilon |\alpha| \right)=0 \quad \mbox{in } D_{y,\varepsilon}.
\end{align*} Similarly, for $\alpha<0$, \begin{align*} -\mathrm{div}\,(&\mathrm{d}_{\Omega}^{-\alpha} |\nabla u_y|^{p-2}\nabla u_y ) + \varepsilon |\alpha| k_{\alpha,p,n}\frac{|u_y|^{p-2}u_y }{\mathrm{d}_{\Omega}^{\alpha+p}}\\ & \geq \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \left( \frac{ \alpha \nabla \mathrm{d}_\Omega\cdot (x-y)}{\mathrm{d}_\Omega} -\alpha + \varepsilon |\alpha| \frac{|x-y|^p}{\mathrm{d}_{\Omega}^{p}} \right)\\ & \geq \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \left( \frac{\alpha |\nabla \mathrm{d}_\Omega||x-y|}{\mathrm{d}_\Omega} -\alpha + \varepsilon |\alpha| \right)\\ & \geq \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \left( \alpha(1+\varepsilon) -\alpha + \varepsilon |\alpha| \right)\\ & \geq \mathrm{d}_\Omega^{-\alpha} k_{\alpha,p,n}|x-y|^{\alpha-n} \left( \varepsilon\alpha + \varepsilon |\alpha|
\right) =0 \qquad \mbox{in } D_{y,\varepsilon}. \end{align*} Now, the weak comparison principle \cite[Lemma~5.1]{pinpsa} implies that $$u_\delta:= \min \{u_{y_i}\mid 1\leq i\leq m\}$$ is a supersolution to the equation \begin{equation}\label{dp1} -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla u|^{p-2}\nabla u ) + \varepsilon |\alpha| k_{\alpha,p,n}\frac{|u|^{p-2}u }{\mathrm{d}_{\Omega}^{\alpha+p}} =0\quad \text{ in } \Omega_\varepsilon. \end{equation} \textbf{Claim 1:} There exists a positive solution to the following equation \begin{equation}\label{dp11} -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla u|^{p-2}\nabla u ) - \left( \mu_\delta- \frac{\varepsilon p|\alpha| c_{\alpha,p,n}}{\alpha+p-n}\right) \frac{|u|^{p-2}u }{\mathrm{d}_{\Omega}^{\alpha+p}} =0 \text{ in } \Omega_\varepsilon. \end{equation} Employing the AAP-type theorem \cite[Theorem~4.3]{pinpsa}, it is enough to prove that there exists a positive supersolution to \eqref{dp11} in $\Omega_\varepsilon$. We use the supersolution construction \cite{depi} and prove that $v_\delta:= u_\delta^{(p-1)/p}$ is a supersolution to \eqref{dp11}.
Using the fact that $u_\delta$ is a supersolution to \eqref{dp1}, we deduce that \begin{align*} & -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla v_\delta|^{p-2}\nabla v_\delta ) - \left( \mu_\delta- \frac{\varepsilon p|\alpha| c_{\alpha,p,n}}{\alpha+p-n}\right)\frac{|v_\delta|^{p-2}v_\delta}{\mathrm{d}_\Omega^{\alpha+p}}\\ & = - \left(\frac{p-1}{p}\right)^{p-1}\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla u_\delta|^{p-2}\nabla u_\delta\, u_\delta^{-(p-1)/p}) - \left( \mu_\delta - \frac{\varepsilon p|\alpha| c_{\alpha,p,n}}{\alpha+p-n}\right) \frac{|u_\delta|^{(p-1)^2/p}}{\mathrm{d}_\Omega^{\alpha+p}}\\ & \geq \left(\frac{p-1}{p}\right)^{p}\mathrm{d}_{\Omega}^{-\alpha} |\nabla u_\delta|^{p} u_\delta^{-(2p-1)/p} -\mu_\delta\frac{|u_\delta|^{(p-1)^2/p}}{\mathrm{d}_\Omega^{\alpha+p}}\\ & = \frac{|u_\delta|^{(p-1)^2/p}}{\mathrm{d}_\Omega^{\alpha+p}}\left[\left(\frac{p-1}{p}\right)^p\frac{|\nabla u_\delta|^p\,\mathrm{d}_\Omega^p}{u_\delta^p}- \mu_\delta \right] \qquad \mbox{in } \Omega_{\varepsilon}. \end{align*} Therefore, it suffices to prove that $\left(\frac{p-1}{p}\right)^p\frac{|\nabla u_\delta|^p\,\mathrm{d}_\Omega^p}{u_\delta^p}- \mu_\delta \geq 0$. Indeed, for a.e. $x \in \Omega_\varepsilon$, $u_\delta= u_{y_{i_0}}$ for some $i_0$ in a neighborhood of $x$.
Using the definition of $\varepsilon$ and $D_{y,\varepsilon}$, we get \begin{align*} & \left(\frac{p-1}{p}\right)^p\frac{|\nabla u_\delta|^p\,\mathrm{d}_\Omega^p}{u_\delta^p}- \mu_\delta\\ &= \left(\frac{p-1}{p}\right)^p\left(\frac{\alpha+p-n}{p-1}\right)^p \frac{\mathrm{d}_\Omega^p}{|x-y_{i_0}|^p}- \mu_\delta\geq \frac{c_{\alpha,p,n}}{(1+\varepsilon)^p} - \mu_\delta >0. \end{align*} Hence, Claim 1 is proved. \textbf{Claim 2:} There exists a positive solution to the following equation \begin{equation}\label{dp13} -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla u|^{p-2}\nabla u ) -\mu_\delta\frac{|u|^{p-2}u}{\mathrm{d}_\Omega^{\alpha+p}}=0 \quad \text{ in } \Omega. \end{equation} Let $\varepsilon_0>0$ be small enough such that \eqref{dp12} holds, and set $\varepsilon_i := \min\{ \varepsilon_0,\frac{1}{i}\}$. Clearly, $\Omega_{\varepsilon_i} \Subset \Omega_{\varepsilon_{i+1}}$ for $i$ large enough, and $\Omega= \cup_{i=1}^{\infty} \Omega_{\varepsilon_i}$, where $\Omega_{\varepsilon_i}$ is defined in \eqref{dp14}. Employing Claim~1, it follows that for $i\geq 1$ there exists a positive solution $u_i$ to \eqref{dp11} in $\Omega_{\varepsilon_i}$ satisfying $u_i(x_0)=1$. In light of the Harnack convergence principle (Proposition \ref{propdp1}), it follows that Claim~2 holds. \textbf{Step 2:} Assume now that $\Omega$ is an arbitrary domain. Choose a smooth compact exhaustion $\{ \Omega_i\}$ of $\Omega$.
That is, $\{ \Omega_i\}$ is a sequence of smooth bounded domains such that $ \Omega_i \Subset \Omega_{i+1}\Subset \Omega$, $\Omega= \cup_{i=1}^{\infty} \Omega_i$, and $$\max_{\substack{x\in \partial\Omega_i\cap B_i\\ y\in \partial\Omega\cap B_i}}\{\text{dist}(x,\partial \Omega) , \text{dist}(y,\partial \Omega_i) \} < \frac{1}{i},$$ where $B_i=\{|x|<i\}$. Observe that $\mathrm{d}_{\Omega_i} \rightarrow \mathrm{d}_{\Omega}$ a.e. in $\Omega$. Indeed, for $x\in \overline{\Omega_i\cap B_i}$ one has $$|\mathrm{d}_{\Omega}(x) - \mathrm{d}_{\Omega_i}\!(x)|=|\text{dist}(x,\partial \Omega)-\text{dist}(x,\partial \Omega_i)|<\frac1i \,.$$ Invoking Claim 2, it follows that for each $i\geq 1$, there exists $u_i>0$ satisfying $u_i(x_0)=1$ and the equation \begin{equation*} -\mathrm{div}\,(\mathrm{d}_{\Omega_i}^{-\alpha} |\nabla u_i|^{p-2}\nabla u_i ) - \mu_\delta \frac{|u_i|^{p-2}u_i }{\mathrm{d}_{\Omega_i}^{\alpha+p}} =0 \quad \text{ in } \Omega_i. \end{equation*} Using again the Harnack convergence principle (Proposition \ref{propdp1}), we obtain a positive solution $u_\delta$ to \eqref{dp13} satisfying $u_\delta(x_0)=1$.
Letting $\delta\to 0$, we get by the Harnack convergence principle a positive solution $u_0$ to the equation \begin{equation}\label{dp21} -\mathrm{div}\,(\mathrm{d}_{\Omega}^{-\alpha} |\nabla u|^{p-2}\nabla u ) - c_{\alpha,p,n} \frac{|u|^{p-2}u }{\mathrm{d}_{\Omega}^{\alpha+p}} =0 \quad \text{ in } \Omega \end{equation} that satisfies $u_0(x_0)=1$. In light of the AAP-type theorem we obtain the Hardy inequality \begin{align*} \int_{\Omega }\frac{|\nabla \varphi|^p}{\mathrm{d}_\Omega^{\alpha}}\,\mathrm{d}x \geq \left( \frac{\alpha +p-n}{p}\right)^p \int_{\Omega }\frac{|\varphi|^p}{\mathrm{d}_\Omega^{p+\alpha}}\,\mathrm{d}x \qquad \forall \varphi\in C^{\infty }_c(\Omega). \qquad \qedhere \end{align*} \end{proof} \appendix \section{Different Proofs} Here we give two alternative proofs of Theorem~\ref{main_thm}, neither of which uses an exhaustion argument. On the other hand, both rely on the following folklore lemma, which is of independent interest; see, for example, Propositions 1.1.3 and 2.2.2 in \cite{CS} (cf. \cite[Theorem~1.6]{LLL}, where the case of $C^2$-domains is discussed). \begin{lemma}\label{semiconcave} Let $\Omega \subsetneqq \mathbb{R}^n$ be a domain. (i) The inequality \[-\Delta \mathrm{d}_{\Omega} \geq -\frac{n-1}{\mathrm{d}_{\Omega}} \, ,\] holds true in the sense of distributions in $\Omega$. (ii) Moreover, \begin{equation}\label{lapdq} \int_\Omega \nabla\psi\cdot\nabla{\rm d}_\Omega \,{\rm d}x \geq -(n-1)\int_\Omega\frac{\psi}{{\rm d}_\Omega}\,{\rm d}x\quad\forall~\psi\in C^\infty_c(\Omega),\ \psi\geq0.
\end{equation} \end{lemma} \begin{proof}\label{semiconcave remark} (i) Since the function $|x|^2-\mathrm{d}_{\Omega}^2(x)$ is convex, it follows that its distributional Laplacian is a nonnegative Radon measure (see \cite[Theorem 2-\S6.3]{evans and gariepy} and \cite[Lemma 2.1]{psaradakis} for the details). Hence, $$ \langle(n-1)-\mathrm{d}_{\Omega}\Delta \mathrm{d}_{\Omega}, \varphi \rangle = \int_\Omega \varphi \,\mathrm{d}\nu \qquad \forall~\varphi \in C^\infty_c(\Omega),$$ where $\nu$ is a nonnegative Radon measure, and $\langle \cdot, \cdot\rangle : \mathcal{D}'(\Omega)\times \mathcal{D}(\Omega)\to\mathbb{R}$ is the canonical duality pairing between distributions and test functions. Consequently, the distributional Laplacian of $- \mathrm{d}_{\Omega}$ is itself a signed Radon measure $\mu$, with $\mu \geq -(n-1)\,\mathrm{d}_\Omega^{-1}\,\mathrm{d}x$. Thus, \begin{equation*}\label{lapd} - \langle\Delta \mathrm{d}_{\Omega}, \psi \rangle=-\int_\Omega \Delta \psi \,{\rm d}_\Omega\,{\rm d}x = \int_\Omega \psi\,{\rm d}\mu \geq -(n-1)\int_\Omega \frac{\psi}{{\rm d}_\Omega}\,{\rm d}x \;\; \forall~\psi\in C^\infty_c(\Omega),\ \psi\geq0 .
\end{equation*} (ii) Since $\nabla {\rm d}_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \in L^\infty(\Omega} \def\Gx{\Xi} \def\Gy{\Psi,\mathbb{R}^n)$, it follows that $$-\langle (\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi})_{x_i, x_i}, \psi \rangle = \int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi (\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi})_{x_i} \psi_{x_i}\,\mathrm{d}x.$$ Therefore, $\Delta \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}$, the distributional divergence of $\nabla \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}$, satisfies $$ - \langle\Delta \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}, \psi \rangle= \int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \nabla {\rm d}_\Omega} \def\Gx{\Xi} \def\Gy{\Psi\cdot \nabla \psi \,\mathrm{d}x \qquad\forall~\psi\in C^\infty_c(\Omega} \def\Gx{\Xi} \def\Gy{\Psi).$$ Hence, $$ \int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \nabla {\rm d}_\Omega} \def\Gx{\Xi} \def\Gy{\Psi\cdot \nabla \psi \,\mathrm{d}x = - \langle\Delta \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}, \psi \rangle \geq -(n-1)\int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi\frac{\psi}{{\rm d}_\Omega} \def\Gx{\Xi} \def\Gy{\Psi}{\rm d}x \;\; \forall~\psi\in C^\infty_c(\Omega} \def\Gx{\Xi} \def\Gy{\Psi), \psi\geq0 . \qedhere$$ \end{proof} \begin{lemma}\label{a3} Let $\Omega} \def\Gx{\Xi} \def\Gy{\Psi\! \subsetneqq \!\mathbb{R}^n$ be a domain. 
Let $$1<p<\infty, \quad \alpha} \def\gb{\beta} \def\gg{\gamma\in \mathbb{R}^n, \quad \mbox{and }\;\; 0<\gg < \frac{\alpha} \def\gb{\beta} \def\gg{\gamma+p-n}{p-1}\,.$$ Then $\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^\gg$ is a (weak) positive supersolution of the equation \begin{equation*} -\mathrm{div}\,(\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{-\alpha} \def\gb{\beta} \def\gg{\gamma} |\nabla u|^{p-2}\nabla u)- C_{\alpha} \def\gb{\beta} \def\gg{\gamma,p,n,\gg} \frac{|u|^{p-2}u}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{p+\alpha} \def\gb{\beta} \def\gg{\gamma}}=0 \quad \text{ in } \Omega} \def\Gx{\Xi} \def\Gy{\Psi, \end{equation*} where $C_{\alpha} \def\gb{\beta} \def\gg{\gamma,p,n,\gg} := |\gg|^{p-1}(\alpha} \def\gb{\beta} \def\gg{\gamma-n+1-(\gg-1)(p-1))>0$. \end{lemma} \begin{proof} Using \eqref{lapdq} we obtain \begin{multline*} \int_{\Omega }\!\!\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{-\alpha} \def\gb{\beta} \def\gg{\gamma}|\nabla( \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^\gg)|^{p-2}\nabla( \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^\gg) \!\cdot \!\nabla \vgf \mathrm{d}x \! =\! |\gg|^{p-2}\gg \!\! \int_{\Omega } \!\!\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{(\gg-1)(p-1)-\alpha} \def\gb{\beta} \def\gg{\gamma} \nabla \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi} \! \cdot \! \nabla \vgf \mathrm{d}x\\ =\!|\gg|^{p-1}\!\!\! \int_{\Omega }\!\!\! \left(\! \nabla( \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}) \! \cdot \! \nabla (\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{\!(\gg-1)(p-1)-\alpha} \def\gb{\beta} \def\gg{\gamma} \!\! \vgf) \!-\! ((\gg \! - \!1)(p \! - \! 1)-\alpha} \def\gb{\beta} \def\gg{\gamma)\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{(\!\gg-1)(p-1)-\alpha} \def\gb{\beta} \def\gg{\gamma-1}\! 
\vgf \!\right) \!\mathrm{d}x\\ \geq C_{\alpha} \def\gb{\beta} \def\gg{\gamma,p,n,\gg} \int_{\Omega } \mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{{(\gg-1)(p-1)-\alpha} \def\gb{\beta} \def\gg{\gamma-1}}\vgf \,\mathrm{d}x . \qedhere \end{multline*} \end{proof} \begin{remark} Observe that $$ C_{\alpha} \def\gb{\beta} \def\gg{\gamma,p,n}= \max\left\{ C_{\alpha} \def\gb{\beta} \def\gg{\gamma,p,n,\gg} \mid {\gg \in \left(0,\frac{\alpha} \def\gb{\beta} \def\gg{\gamma+p-n}{p-1}\right)}\right\} ,$$ and the maximum is obtained with $\gg=(\alpha} \def\gb{\beta} \def\gg{\gamma+p-n)/p$. \end{remark} \begin{proof}[Alternative proof of Theorem~\ref{main_thm} I] Using Lemma \ref{a3} for $\gg=(\alpha} \def\gb{\beta} \def\gg{\gamma+p-n)/p$, we deduce that $\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{(\alpha} \def\gb{\beta} \def\gg{\gamma+p-n)/p}$ is positive (weak) supersolution to \eqref{dp21}. Consequently, the AAP-type theorem \cite[Theorem~4.3]{pinpsa} implies the Hardy-type inequality \eqref{dp15}. \end{proof} \begin{proof}[Alternative proof of Theorem~\ref{main_thm} II] Let $\Omega} \def\Gx{\Xi} \def\Gy{\Psi\! \subsetneqq \!\mathbb{R}^n$ be a domain, and fix $s>n$. Using Lemma~\ref{semiconcave}, the following $L^1$-Hardy inequality is proved in \cite[Theorem 2.3]{psaradakis}: \begin{align}\label{dp17} \int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \frac{|\nabla \vgf|}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{s-1}}\,\mathrm{d}x\geq (s-n)\int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \frac{| \vgf |}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{s}}\,\mathrm{d}x \qquad \forall~ \vgf \in C_c^\infty(\Omega} \def\Gx{\Xi} \def\Gy{\Psi). 
\end{align} Substituting $\vgf=|\psi|^p$ in \eqref{dp17} and using H\"older inequality, we obtain \begin{align*} \frac{s-n}{p}\int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \frac{| \psi|^p}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{s}} \,\mathrm{d}x &\leq \int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \frac{|\psi|^{p-1}|\nabla \psi|}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{s-1}} \,\mathrm{d}x = \int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \frac{| \psi|^{p-1}}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{s-s/p}}\frac{| \nabla \psi|}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{s/p-1}} \,\mathrm{d}x\\[2mm] &\leq \left(\int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \frac{| \psi|^p}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{s}} \right)^{1-1/p}\left(\int_\Omega} \def\Gx{\Xi} \def\Gy{\Psi \frac{| \nabla \psi|^p}{\mathrm{d}_{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}^{s-p}} \right)^{1/p} \qquad \forall~ \psi \in C_c^\infty(\Omega} \def\Gx{\Xi} \def\Gy{\Psi). \end{align*} Hence, for $s= \alpha} \def\gb{\beta} \def\gg{\gamma+p$, we get \eqref{dp15}. \end{proof} \begin{center} {\bf Acknowledgements} \end{center} D.~G. and Y.~P. acknowledge the support of the Israel Science Foundation (grant 637/19) founded by the Israel Academy of Sciences and Humanities. \end{document} \end{document}
\begin{document}
\title{A fresh look at ignorability for likelihood inference}
\begin{abstract}
When data are incomplete, a random vector $Y$ for the data process and a binary random vector $R$ for the process that causes missing data are modelled jointly. We review conditions under which $R$ can be ignored for drawing likelihood inferences about the distribution for~$Y$. The standard approach of Rubin\;(1976) and Seaman et al.\,(2013), \textit{Statist.\,Sci.}\;\textbf{28}(2), pp.\,257--268, emulates complete-data methods exactly, and directs an investigator to choose a full model in which missing at random (MAR) and distinctness of parameters hold if the goal is not to use a full model. Another interpretation of ignorability lurking in the literature considers ignorable likelihood estimation independently of any model for the conditional distribution of $R$ given~$Y$. We discuss shortcomings of the standard approach, and argue that the alternative gives the `right' conditions for ignorability because it treats the problem on its merits, rather than emulating methodology developed for when the investigator is in possession of all of the data.

\vspace*{2mm}
\noindent
\textit{Key words and phrases:} incomplete data, missing data, ignorable, ignorability, missing at random, distinctness of parameters, likelihood theory.
\end{abstract}

\section{Introduction}
\label{Sect:Intro}

Missing data are a common problem in empirical research, and particularly so in medical and epidemiological studies. A central feature of the statistical methods for dealing with incomplete data pertains to conditions under which the random vector for the process causing the missing data need not be modelled. The modern framework was introduced by Rubin\;(1976).
If $Y$ is a random vector representing the data generation process, Rubin\;(1976) introduced the concept of modelling missingness through a corresponding binary random vector $R$ of the same dimension as~$Y$, together with a joint probability distribution for $(Y, R)$. The realisations of $Y$ comprise complete data, both observed and unobserved, and realisations of $R$ determine which values of $Y$ are observed and which are missing. The conditional distribution for $R$ given~$Y$ represents the process that causes missing data, hereafter called the \textbf{missingness process}. Given a model of joint densities for $(Y, R)$, Rubin (1976) identified conditions on the model under which the same inferences result whether the full model is used or the model for the missingness process is discarded. Rubin considered direct likelihood, Bayesian and sampling distribution paradigms, but not frequentist likelihood inference specifically. Seaman et al.\,(2013) reviewed use of the ignorability conditions in the literature to promote unity amongst writers, and adapted Rubin's conditions specifically for frequentist likelihood inference. We refer to the conditions derived in these works as the \textbf{standard conditions} for ignorability. Seaman et al.\,(2013,\,p.\,266) identified an alternative interpretation of ignorability lurking in the literature. This approach treats ignorable likelihood as an estimation process in its own right, independently of any model for the missingness process. Our aim is to review the two approaches, to explain some shortcomings of the standard approach, and to argue that the alternative interpretation gives the `right' conditions for ignorability because it treats the problem on its merits, rather than adopting methodology developed for when the investigator is in possession of all of the data.
\section{Ignorability for direct likelihood inference (Rubin,\,1976)} \label{Sect:Rubin} We retain the notation $Y$\;and\;$R$ from Section~\ref{Sect:Intro}. The starting point for ignorability in the sense of Rubin (1976) is a (full) model of joint densities for $(Y, R)$: \begin{equation} \mathcal{M}_g \,=\, \{\, f_\theta(\mathbf{y})\,g_\psi(\mathbf{r} |\, \mathbf{y}) : (\theta, \psi)\in\Delta \,\}. \label{Eq:MgModel} \end{equation} If $\Theta$ and $\Psi$ are the images of the projections $(\theta,\psi)\mapsto\theta$ and $(\theta,\psi)\mapsto\psi$, respectively, then the \textbf{data model} of $\mathcal{M}_g$ is \begin{equation} \mathcal{M}_s \,=\, \{\, f_\theta(\mathbf{y}) : \theta\in\Theta \,\} \label{Eq:MsModel} \end{equation} and the \textbf{missingness model} is $\{\,g_\psi(\mathbf{r}|\,\mathbf{y})\,:\,\psi\in\Psi\,\}$. We call each density in the missingness model a \textbf{missingness mechanism}. Recall from Section~\ref{Sect:Intro} that the two conditions under which the missingness model could be discarded and identical direct likelihood inferences drawn from a model for the observed data derived from\;(\ref{Eq:MsModel}) were called MAR and distinctness of parameters. We consider these in turn. Given a realisation $(\mathbf{y}, \mathbf{r})$ of the random vector~$(Y, R)$, let~$\mathbf{y}^{ob(\mathbf{r})}$ and~$\mathbf{y}^{mi(\mathbf{r})}$ denote the vectors of observed and missing components of~$\mathbf{y}$, respectively. 
Analysis of the observed data $(\mathbf{y}^{ob(\mathbf{r})}, \mathbf{r})$ then proceeds by restriction of the random vector $(Y, R)$ to the event
\begin{equation}
\{\, (\mathbf{y}_*, \mathbf{r}_*) \,:\, \mathbf{r}_* = \mathbf{r} \text{ and } \mathbf{y}_*^{ob(\mathbf{r})} = \mathbf{y}^{ob(\mathbf{r})} \,\}
\label{Eq:ODE}
\end{equation}
comprising all datasets $\mathbf{y}_*$ (together with~$\mathbf{r}$) which agree with~$(\mathbf{y}, \mathbf{r})$ on the observed data values~$\mathbf{y}^{ob(\mathbf{r})}$ but may differ on the unobserved values~$\mathbf{y}^{mi(\mathbf{r})}$. A missingness mechanism $g_\psi(\mathbf{r} |\, \mathbf{y})$ is called \textbf{missing at random} (MAR) with respect to~$(\mathbf{y}, \mathbf{r})$ if $g_\psi(\mathbf{r} |\, \mathbf{y})$ is a constant function on the set~(\ref{Eq:ODE}), where $g_\psi$ is considered to be a function of $\mathbf{y}$ with $\mathbf{r}$ held fixed. Rubin (1976) defined missing at random to be a property of the full model (\ref{Eq:MgModel}) by requiring each density in the model to be MAR with respect to~$(\mathbf{y}, \mathbf{r})$. The terminology \textbf{realised MAR} was introduced in Seaman et al.\,(2013) to distinguish this weaker form of MAR from a stronger form more suited to frequentist likelihood inference: a missingness mechanism is called \textbf{everywhere MAR} if it is realised MAR with respect to all possible data vectors and response patterns, $(\mathbf{y}, \mathbf{r})$, not just the realised pair representing the observed and missing data. The second of Rubin's conditions, \textbf{distinctness of parameters}, requires the parameter space $\Delta$ of~(\ref{Eq:MgModel}) to be the direct product $\Delta = \Theta\times\Psi$ of the parameter spaces $\Theta$ and $\Psi$ of the data model and the missingness model, respectively.
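The constancy condition defining realised MAR can be checked mechanically on a finite domain. The following sketch is our own construction, not from the paper: the helper names (`completions`, `is_realised_mar`) and the two toy mechanisms are illustrative assumptions. It enumerates the event~(\ref{Eq:ODE}) for binary data and tests whether a mechanism is constant over the completions of the missing coordinates.

```python
from itertools import product

def completions(y, r, values=(0, 1)):
    # All complete data vectors agreeing with y on the coordinates
    # observed under r (r[i] == 1); missing coordinates range freely.
    slots = [(y[i],) if r[i] == 1 else values for i in range(len(y))]
    return list(product(*slots))

def is_realised_mar(g, y, r):
    # g is realised MAR w.r.t. (y, r) iff g(r | .) takes a single value
    # on the completions of the missing coordinates.
    return len({g(r, y_star) for y_star in completions(y, r)}) == 1

# Toy mechanisms: the first depends only on y[0], the second on y[1].
g_mar = lambda r, y: 0.9 if y[0] == 1 else 0.2
g_mnar = lambda r, y: 0.9 if y[1] == 1 else 0.2

y, r = (1, 0), (1, 0)                  # second coordinate missing
print(is_realised_mar(g_mar, y, r))    # True: ignores the missing value
print(is_realised_mar(g_mnar, y, r))   # False: peeks at the missing value
```

Note that `g_mar` is realised MAR for this particular $(\mathbf{y}, \mathbf{r})$ because it depends only on a coordinate that happens to be observed; running the same check over every possible pair would establish the stronger everywhere-MAR property.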
If every missingness mechanism in (\ref{Eq:MgModel}) is realised MAR with respect to the realised data vector~$\mathbf{y}$ and response pattern~$\mathbf{r}$, then the likelihood function for that part of (\ref{Eq:MgModel}) pertaining to the observable data factorises as
\begin{equation}
L_g(\theta, \psi) \,=\, \int f_\theta(\mathbf{y}) \, g_\psi(\mathbf{r} |\, \mathbf{y}) \, \text{d}\mathbf{y}^{mi(\mathbf{r})} \,=\, g_\psi(\mathbf{r} |\, \mathbf{y}) \, \int f_\theta(\mathbf{y}) \, \text{d}\mathbf{y}^{mi(\mathbf{r})}\,.
\label{Eq:MgLikelihood}
\end{equation}
If, in addition, distinctness of parameters holds for (\ref{Eq:MgModel}), then the (maximal) domain of each mapping $\theta\mapsto L_g(\theta,\psi)$ is the same for every value of~$\psi$, and likelihood estimates for $\theta$ can be obtained by maximising the simpler function
\begin{equation}
L_s(\theta) \,=\, \int f_\theta(\mathbf{y})\, \text{d}\mathbf{y}^{mt(\mathbf{r})}
\label{Eq:MsLikelihood}
\end{equation}
over its full domain. We refer the reader to Rubin (1976), Seaman et al.\,(2013) and Mealli and Rubin (2015) for additional details.

\vspace*{3mm}
\textbf{An aside.} In (\ref{Eq:MsLikelihood}) we have used $\mathbf{y}^{mt(\mathbf{r})}$ instead of $\mathbf{y}^{mi(\mathbf{r})}$ to denote the unobserved variables because overlaying the response pattern $\mathbf{r}$ onto the marginal distribution for $Y$ involves a different relationship between $R$ and $Y$ compared to $(Y, R)$. The former does not respect the stochastic relationship encoded in the random vector~$(Y, R)$ because it involves holding $R$ fixed and allowing the marginal distribution for~$Y$ to vary. The `t' in $\mathbf{y}^{mt(\mathbf{r})}$ can be interpreted to mean `these are the variables of the marginal distribution for $Y$ that were missing this time.'
Note also that we do \textbf{not} do the same on the right hand side of (\ref{Eq:MgLikelihood}) because the set being integrated over is all of $R\times Y,$ whereas in (\ref{Eq:MsLikelihood}) it is only~$Y$. See Galati (2019) for further details. $\qedsymbol$

\vspace*{3mm}

It is helpful to view the two ignorability conditions, MAR and distinctness of parameters, in a hierarchy as follows:
\begin{itemize}
\item[(\textbf{a})] Does the investigator wish to enforce a relationship $\Delta\subsetneq\Theta\times\Psi$ between $\theta$ and $\psi$ for models (\ref{Eq:MgModel}) when estimating~$\theta$? If so, the analyst has no option but to consider only full models for which distinctness of parameters \textbf{does not} hold, irrespective of whether or not densities in the model are realised MAR.
\item[(\textbf{b})] If the answer to (a) is no, then is every missingness mechanism in the model (\ref{Eq:MgModel}) realised MAR? If so, the analyst can discard the missingness mechanisms from the model.
\end{itemize}
Viewed in this way, ignorability for direct likelihood inferences is seen to comprise two components: the distinctness of parameters criterion, which really is just a statement that the investigator has no relationship of the form $\Delta\subsetneq\Theta\times\Psi$ to enforce when estimating~$\theta$, and the MAR component, which establishes the relationship between (\ref{Eq:MgLikelihood}) and (\ref{Eq:MsLikelihood}) for fixed~$\psi$. Note also that this does \textbf{not} mean that the MAR condition is of no use when a relationship of the form $\Delta\subsetneq\Theta\times\Psi$ \textbf{is} enforced (that is, when distinctness of parameters does \textbf{not} hold). In this case, the MAR condition allows a likelihood $L(\psi)=g_\psi(\mathbf{r}|\,\mathbf{y})$ for the missingness model to be maximised independently of~$\theta$, and then $\Delta$ can be used to determine an appropriate restriction on the domain of (\ref{Eq:MsLikelihood}) for estimating~$\theta$.
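The factorisation behind (\ref{Eq:MgLikelihood}) and (\ref{Eq:MsLikelihood}) can be illustrated numerically. The sketch below is a toy construction of ours, not a model from the paper: $Y=(Y_1,Y_2)$ are i.i.d.\ Bernoulli$(\theta)$, and $Y_2$ is missing with a probability that depends only on the always-observed $Y_1$, so the mechanism is MAR; for any fixed $\psi$, the full observed-data likelihood and the ignorable likelihood then share the same maximiser in $\theta$.

```python
import math

# Toy full model (our construction): Y = (Y1, Y2) i.i.d. Bernoulli(theta);
# Y2 is missing (r2 = 0) with probability psi when Y1 = 1 and psi/2 when
# Y1 = 0.  The mechanism looks only at the observed Y1, so it is MAR.
def g(r2, y1, psi):
    p_miss = psi if y1 == 1 else psi / 2
    return p_miss if r2 == 0 else 1 - p_miss

def f(y, theta):
    # Joint density of the complete data (independent Bernoullis).
    return math.prod(theta if yi == 1 else 1 - theta for yi in y)

# Observed records: (y1, y2 or None, r2).
data = [(1, 1, 1), (0, None, 0), (1, None, 0), (0, 0, 1), (1, 1, 1)]

def L_full(theta, psi):
    # Observed-data likelihood: integrate the missing Y2 out of f * g.
    out = 1.0
    for y1, y2, r2 in data:
        vals = (0, 1) if r2 == 0 else (y2,)
        out *= sum(f((y1, v), theta) * g(r2, y1, psi) for v in vals)
    return out

def L_ignorable(theta):
    # The same integral with the missingness mechanism discarded.
    out = 1.0
    for y1, y2, r2 in data:
        vals = (0, 1) if r2 == 0 else (y2,)
        out *= sum(f((y1, v), theta) for v in vals)
    return out

grid = [i / 200 for i in range(1, 200)]
argmax_full = max(grid, key=lambda t: L_full(t, psi=0.3))
argmax_ign = max(grid, key=L_ignorable)
print(argmax_full == argmax_ign)  # True: g only rescales the likelihood
```

Swapping this mechanism for any other MAR mechanism changes $L_g$ only by a factor free of $\theta$, which is exactly the sense in which the missingness model can be discarded for fixed $\psi$.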
We will call full models (\ref{Eq:MgModel}) satisfying the distinctness of parameters and MAR criteria \textbf{ignorable models}, and we emphasise that Rubin's (1976) ignorability theory for direct likelihood inference identifies a subset of models of the form (\ref{Eq:MgModel}), the ignorable models, from which an investigator can choose their model if they wish to draw inferences for $\theta$ free from the inconvenience of needing to model the missingness process explicitly.

\section{Limitations caused by missing data}
\label{Sect:Limitations}

Ignorability is often presented as having something to do with drawing valid inferences. For example, Rubin\;(1976,\;Summary) states that the ignorability conditions are ``\textit{the weakest general conditions under which ignoring the process that causes missing data always leads to correct inferences}.'' `Correct inferences' in this instance seems to mean that inferences will be drawn from the correct likelihood given the chosen model. It has nothing to do with whether or not the choice of model is valid for the given data, or whether or not valid conclusions will be drawn from the data.

In the model-based paradigms, validity in the latter sense mentioned above is a subjective assessment of the goodness of fit of the model to the data. If the model fits poorly, then in some sense the inferences are not justifiable, and if the model fits too well, then the model becomes more a description of the specific realised dataset rather than a description of the process which generated the data.

When data are incomplete, the philosophy of the model-based likelihood paradigm breaks down in two essential ways. Firstly, it is impossible to validate the investigator's choice of missingness model against the data because the data required for this are missing (Molenberghs et al.\,2008).
So consideration of a missingness model becomes hypothetical in a manner analogous to the frequentist paradigm's hypothetical assumptions about $(Y, R)$. In the literature, this feature of incomplete data methods typically is referred to as `untestable assumptions.'

The second way in which the paradigm breaks down is that it becomes impossible to validate even a model for the observed data against the observed data. The reason for this is a little more subtle. If the possible missingness patterns realisable from $R$ are $\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_k$, and these occur with marginal probabilities $p_1, p_2, \ldots, p_k$, then a density $f(\mathbf{y})$ for the marginal distribution for $Y$ can be written as the mixture
\begin{equation}
f(\mathbf{y}) \;=\; \sum_{i=1}^k\;p_i\,p(\mathbf{y}|\,\mathbf{r}_i)
\label{Eq:PatternMixture}
\end{equation}
where $p(\mathbf{y}|\,\mathbf{r}_i)$ is the conditional density for $Y$ given $R=\mathbf{r}_i$. If $\mathbf{r}_1=(1,1,\ldots,1)$ is the pattern for a complete case, then $p(\mathbf{y}|\,\mathbf{r}_1)$ gives the distribution for the complete cases, which may differ from the marginal distribution for $Y$ given by~$f(\mathbf{y})$. In general, for the $i^{th}$ missingness pattern~$\mathbf{r}_i$, the distribution of the $\mathbf{y}$ values realised with $\mathbf{r}_i$ is $p(\mathbf{y}|\,\mathbf{r}_i)$ and \textbf{not} $f(\mathbf{y})$. Additionally, the distribution for the \textit{observed} values realised with $\mathbf{r}_i$ is given by the marginal density $\int\,p(\mathbf{y}|\,\mathbf{r}_i)\,d\mathbf{y}^{mi(\mathbf{r}_i)}$. This marginalisation stratifies the distribution for $Y$ into pieces of different shapes such that the complete data underlying each shape typically are not distributed according to~$f(\mathbf{y})$, and the differing shapes make it impossible to mix the pieces back together to recover $f(\mathbf{y})$ via~(\ref{Eq:PatternMixture}).
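A small exact computation makes the distortion visible. In the toy setting below (our construction, not the paper's), $Y$ is a single Bernoulli$(1/2)$ variable whose chance of being observed depends on its own value, so the mechanism is not MAR; the complete cases then follow $p(\mathbf{y}|\,\mathbf{r}_1)$ rather than $f(\mathbf{y})$, even though the mixture (\ref{Eq:PatternMixture}) still recovers $f$ exactly.

```python
# Toy illustration (our construction): a single binary Y with f(y) = 1/2,
# observed with a probability that depends on the value of y (not MAR).
f = {0: 0.5, 1: 0.5}          # marginal density f(y)
p_obs = {0: 0.75, 1: 0.25}    # P(R = 1 | Y = y)

p1 = sum(f[y] * p_obs[y] for y in f)               # P(R = 1)
p0 = 1 - p1                                        # P(R = 0)
p_y_given_r1 = {y: f[y] * p_obs[y] / p1 for y in f}        # complete cases
p_y_given_r0 = {y: f[y] * (1 - p_obs[y]) / p0 for y in f}  # missing cases

print(p_y_given_r1[1])  # 0.25: complete cases are biased away from 0.5
# The mixture over patterns recovers the marginal density exactly:
print(p1 * p_y_given_r1[1] + p0 * p_y_given_r0[1])  # 0.5
```

The point of the surrounding text is that with more than one coordinate, the observed parts of each stratum live on different marginal spaces, so this reconstruction is unavailable to the analyst.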
The result is that the observed data comprise a collection of subsamples from different distributions, no one of which can be used to assess the fit of the data model, and the irregular shapes prevent the subsamples from being pooled together. To overcome the difficulties associated with checking the fit of the data model to the observed data, we note that imputation-based methods combined with posterior predictive checks have been considered, but we do not elaborate on these techniques.

The points we wished to make are summarised below:
\begin{itemize}
\item[(\textbf{c})] When carrying over a model-based philosophy to the case of incomplete data, the types of hypothetical considerations typically rejected by these philosophies become inescapable due to the impossibility of validating the fit of the missingness model to the observed data.
\item[(\textbf{d})] Validating the fit of the data model to the observed data becomes substantially more complicated when the data are incomplete.
\end{itemize}

\section{The middle road: When $R$ is MAR}
\label{Sect:RisMAR}

With any data analysis, the intention typically is to model the data to answer some substantive question under investigation. But incompleteness of the data impedes analysis in two ways: the dataset has an irregular shape, and the underlying missingness process can distort the distribution of the data that the investigator can observe. Methods for taking these factors into account differ in their difficulty and inconvenience, and as a matter of practicality, less inconvenient methods often are accorded priority ahead of more difficult and inconvenient ones. Methods like multiple imputation overcome the irregular shape of the data by filling in missing values with plausible values, and adjusting the precision of estimates accordingly (Molenberghs et al.\,2015).
A first step in creating imputations is often to consider a model of the form (\ref{Eq:MsModel}) to model jointly the variables in the dataset. To estimate~$\theta$, likelihood estimation might be employed. However, this estimation is further impeded by the potential distorting effect of the missingness process on the distribution of the observable data. The choice the analyst has at hand is to use ignorable likelihood estimation based on (\ref{Eq:MsLikelihood}) anyway, or to model the missingness process and base estimation of $\theta$ on the left hand side of~(\ref{Eq:MgLikelihood}). Apart from the class of ignorable models, the latter adds a substantial layer of complexity and inconvenience, and understanding conditions under which this can be avoided is important.

In the situation just described, the primary concern is not in understanding the conditions under which modelling the missingness process would be unnecessary, as Rubin (1976) considered. Rather, the primary concern is simply to obtain an estimate for~$\theta$, and to understand the conditions under which this can be done without the need to consider models for the missingness process at all. This question cannot be answered using the approach in Rubin\;(1976), reviewed in Section~\ref{Sect:Rubin}, because no model for the missingness process is posited against which to compare the estimates from the ignorable likelihood estimation. Seaman et al.\,(2013, p.\,266) note that it is this question that some writers seem to have taken as their interpretation of ignorability. While these writers were considering frequentist properties of estimation, the same ideas apply to direct likelihood inferences. We elaborate on the details.
Suppose that
\begin{equation}
h(\mathbf{y}, \mathbf{r}) = f(\mathbf{y})g(\mathbf{r}|\,\mathbf{y})
\label{Eq:FullDensity}
\end{equation}
is a joint density for the random vector~$(Y, R)$, and consider the model
\begin{equation}
\mathcal{M}_t \,=\, \{\, f_\theta(\mathbf{y})\,g(\mathbf{r} |\, \mathbf{y}) : \theta\in\Theta \,\}.
\label{Eq:MtModel}
\end{equation}
By definition, (\ref{Eq:MtModel}) is correctly specified for the missingness process. We can ask under what conditions the likelihood for the observable data for this model,
\begin{equation}
L_t(\theta) \,=\, \int f_\theta(\mathbf{y}) \, g(\mathbf{r} |\, \mathbf{y}) \, \text{d}\mathbf{y}^{mi(\mathbf{r})},
\label{Eq:MtLikelihoodNotMAR}
\end{equation}
can be maximised without needing to evaluate the unknown density function $g(\mathbf{r}|\,\mathbf{y})$. If $g(\mathbf{r} |\, \mathbf{y})$ is MAR with respect to the realised values~$(\mathbf{y}, \mathbf{r})$, then~(\ref{Eq:MtLikelihoodNotMAR}) factorises in the usual way
\begin{equation}
L_t(\theta) \,=\, g(\mathbf{r} |\, \mathbf{y}) \, \int f_\theta(\mathbf{y})\, \text{d}\mathbf{y}^{mi(\mathbf{r})}
\label{Eq:MtLikelihoodMAR}
\end{equation}
and~(\ref{Eq:MtLikelihoodMAR}) can be maximised without needing to evaluate~$g(\mathbf{r} |\, \mathbf{y})$. Therefore, under this MAR assumption about~$g(\mathbf{r} |\, \mathbf{y})$, maximising (\ref{Eq:MtLikelihoodMAR}) is equivalent to maximising~(\ref{Eq:MsLikelihood}). Hence, if the investigator would have no reason to impose a relationship $\Delta\subsetneq\Theta\times\Psi$ on the parameters of a model~(\ref{Eq:MgModel}), if such a model were to be considered, and if the investigator is happy to assert that the missingness process itself is realised MAR, then direct likelihood inferences for $\theta$ can be obtained by ignorable likelihood estimation without the need to consider a model for the missingness process at all.
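The content of (\ref{Eq:MtLikelihoodMAR}) is that the unknown mechanism enters $L_t$ only as a multiplicative constant. This can be checked directly in a toy discrete case of our own construction (the density and mechanism below are illustrative assumptions): one record $\mathbf{y}=(y_1,y_2)$ with $y_2$ missing, independent Bernoulli$(\theta)$ components, and a mechanism $g$ that depends only on the observed $y_1$.

```python
# Joint density of the complete data: independent Bernoulli(theta).
def f(y1, y2, theta):
    return (theta if y1 else 1 - theta) * (theta if y2 else 1 - theta)

# The mechanism, unknown to the analyst: MAR, since it uses y1 only.
g = lambda y1: 0.4 if y1 else 0.7      # P(R = (1, 0) | Y = y)

y1 = 1                                 # the observed part of the record
L_t = lambda t: sum(f(y1, v, t) * g(y1) for v in (0, 1))  # cf. Eq:MtLikelihoodNotMAR
L_s = lambda t: sum(f(y1, v, t) for v in (0, 1))          # cf. Eq:MsLikelihood

ratios = {round(L_t(t) / L_s(t), 12) for t in (0.1, 0.3, 0.5, 0.8)}
print(ratios == {0.4})   # the ratio is g(r | y), free of theta
```

Because the ratio $L_t/L_s$ is free of $\theta$, any maximiser of $L_s$ maximises $L_t$, even though the analyst never evaluates $g$.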
In particular, considering some hypothetical model (\ref{Eq:MgModel}) and asserting distinctness of parameters (that is, choosing an ignorable model as the starting point) is simply unnecessary. We record this formally.

\begin{theorem}[Missingness-model-free Ignorability]
\label{Thm:MMFIgnorabilty}
If the investigator would have no reason to impose a relationship $\Delta\subsetneq\Theta\times\Psi$ on the parameters of a model~(\ref{Eq:MgModel}), and if the distribution for the random vector $R$ conditional on~$Y$ is realised MAR, then there is no need to consider models of the form (\ref{Eq:MgModel}) at all. In this case, direct likelihood inferences for $\theta$ can be obtained by ignorable likelihood estimation, and this equates to the investigator using the (unknown) conditional distribution for $R$ given $Y$ directly in the analysis. $\qedsymbol$
\end{theorem}

When interpreting ignorability in this `model-free' sense, writers typically have gone further and adopted a frequentist view in which ignorable likelihood estimation retains the asymptotic properties of likelihood theory (Seaman et al.\,2013,\;p.\,266). For completeness, we review the conditions that would be needed for ignorability in this sense.

Recall from Section~\ref{Sect:Rubin} that ignorability for likelihood inferences in the sense of Rubin (1976) has two facets, with one being whether or not the investigator wishes to impose a relationship $\Delta\subsetneq\Theta\times\Psi$ onto the estimation of~$\theta$. This consideration applies irrespective of the mode of likelihood inference, and we retain consideration of non-distinctness of parameters as the first step in the decision making process. When this would be of no interest to the investigator, a correctly specified ignorable model together with Theorem~\ref{Thm:MMFIgnorabilty} implies that the investigator can dispense with consideration of the ignorable model, and instead assert directly that $R$ given $Y$ is MAR.
For reasons explained in Seaman et al.\,(2013), to consider frequentist properties of likelihood theory, we strengthen our assumption to $R$ given $Y$ being everywhere MAR. This is then sufficient for ignorable likelihood estimation to be valid in the frequentist sense of likelihood theory provided the additional hypotheses of likelihood theory are satisfied. These requirements are summarised below.

The model for the observable data is obtained by removing from each vector in $Y\times R$ the coordinates pertaining to the missing values. This creates an irregularly-shaped set. The probability measure on this irregularly-shaped set is obtained by pulling back events in this set to unions of events of the form (\ref{Eq:ODE}) in~$Y\times R$, and integrating the densities in (\ref{Eq:MtModel}) over these corresponding events for~$(Y, R)$ (by applying iterated integrals as per Fubini's Theorem (Ash and Dol\'{e}ans-Dade 2000,\,p.\,101)). In this way, the functions on the right hand side of~(\ref{Eq:MtLikelihoodNotMAR}) are seen to give a model of densities for the observable data. By construction, $\mathcal{M}_t$ is correctly specified if, and only~if, $f(\mathbf{y})\in\mathcal{M}_s$.

The integration in~(\ref{Eq:MtLikelihoodNotMAR}) sets up a mapping from $\mathcal{M}_t$ to the observable data model. The observable data model therefore will be correctly specified if, and only if, $\mathcal{M}_t$ is, and identifiable provided $\mathcal{M}_s$ is identifiable (different values of $\theta$ correspond to different density functions $f_\theta$) and the mapping to it from $\mathcal{M}_t$ is one-to-one. A sufficient condition for the latter to be true is that the missingness process assigns non-zero probability to complete cases for all values of~$\mathbf{y}$, since then densities~(\ref{Eq:MtLikelihoodMAR}) corresponding to different values of $\theta$ will disagree for at least one $\mathbf{y}$ value on that part of the model pertaining to the complete cases.
Finally, the appropriate regularity conditions must be satisfied by $L_t(\theta)$, $\Theta$ and the value $\theta_0\in\Theta$ for which $f_{\theta_0}=f$. We summarise this formally as follows.

\begin{theorem}[Ignorability for frequentist likelihood inference]
Sufficient conditions for ignoring the missingness process when drawing frequentist likelihood inferences are:
\begin{enumerate}
\item there is no relationship $\Delta\subsetneq\Theta\times\Psi$ (see (\ref{Eq:MgModel})) to be imposed on the analysis,
\item \label{Item:MAR} the distribution of $R$ given $Y$ is everywhere MAR,
\item the additional requirements of likelihood theory (summarised above) are satisfied.
\end{enumerate}
Moreover, when condition\;\ref{Item:MAR} holds, ignorable likelihood estimation equates to using the (unknown) distribution for $R$ given $Y$ directly in the analysis. In these circumstances, ignoring the missingness process is preferable to modelling it explicitly. $\qedsymbol$
\end{theorem}

\section{Discussion}
\label{Sect:Discussion}

The foundations for ignorability of the process that causes missing data were put in place by Rubin (1976). With the exception of a stronger form of MAR framed in Seaman et al.\,(2013) for frequentist likelihood theory, these foundations have been accepted essentially unaltered for more than four decades. Despite this, the conditions for ignorability seem to be more confusing than they should be. One reason for this might be that Rubin (1976) presented distinctness of parameters as a mathematical requirement of ignorable likelihood estimation. We suggest that this criterion is better understood from a statistical perspective, namely, whether or not the investigators wish to impose on the analysis a relationship $\Delta\subsetneq\Theta\times\Psi$ between the parameters for the data densities and the missingness mechanisms.
From this perspective, non-distinctness of parameters is a choice of restriction incorporated in the analysis by the investigator, not a mathematical requirement that makes ignorable estimation `work.' Moreover, the situations in which imposing such a restriction could be considered reasonable would seem to be rare. In many cases, problems similar to defining statistical significance to be~`$p<0.05$' would arise. We suggest that the condition should be expressed as non-distinctness of parameters, and that it should serve more as a footnote to the theory, rather than being given such a prominent place.

Another factor which may be contributing to confusion about the concept is the linking of ignorability with notions of `valid' inferences. For example, Rubin (1976, Summary) communicates the implications of these conditions as ``\textit{always leads to correct inferences},'' while Little and Rubin (2002,\;p.\,120) refer to inferences being ``\textit{valid from the frequency perspective},'' and Seaman et al.\,(2013, Abstract) simply refer to ``\textit{valid inference}.'' In these cases, what the terms mean is left undefined. However, linking ignorability with validity of inferences at all would seem to be highly misleading because ignorability is silent on whether or not a particular choice of missingness model is a `valid' choice for the data at hand, and it is equally silent on whether the chosen data model is `valid' for the data at hand. So it is difficult to see that there can be any meaningful sense in which satisfaction of the ignorability conditions implies that inferences drawn from the model will be valid.

We suggest, however, that a primary source of confusion surrounding ignorability is likely to be that, as framed, it is derived by emulating complete-data methods without modification.
Specifically, a full model for $(Y, R)$ is taken as the starting point for the model-based paradigms, and correct specification of the full model is added for frequentist likelihood inference. These paradigms are predicated on the investigator being in possession of the data to enable validation of the model against the data, and this is not the case when data are incomplete. As discussed in Section~\ref{Sect:Limitations}, these complete-data paradigms do not carry over completely to incomplete data because the model for the missingness process cannot be validated against the data. This feature of the incomplete data setting undermines the rationale for considering a model for the missingness process as the starting point for an analysis. Additionally, by framing ignorability in terms of properties of the model for the missingness process, instead of in terms of the missingness process itself, the usual causal link between choice of model and properties of estimation is partially severed. Changes to the missingness model can be made without altering the ignorable likelihood estimator in any way at all. While it is true that swapping one ignorable missingness model for another merely results in a proportional change in the likelihood, this forces a user of the tools to be in possession of unnecessary detail about the relationship between the estimation process and some `hypothetical' set of models for the missingness process. Possibly the strongest argument against the standard conditions is the convoluted nature of the way the question posed is answered. Specifically, for an investigator to choose \textbf{not to use} a full model, the investigator is directed \textbf{to use} a full model with specific properties, when making any choice of full model at all is unnecessary. The alternative interpretation reviewed and fleshed out in Section~\ref{Sect:RisMAR} avoids these issues by making an assumption directly about the conditional distribution $R$ given~$Y$. 
This has no analogue in the corresponding complete data methods (which do not make direct assumptions about the data random vector~$Y$). However, doing so is the most direct and natural way to answer the ignorability question: the two scenarios under which a full model is required are (i) $R$ given $Y$ is not MAR, or (ii) the investigator has prior information about a relationship between the data distribution and missingness process to incorporate into the estimation of~$\theta$. Otherwise, ignorable estimation is appropriate because it equates to (the unknown) $R$ given $Y$ being used directly in the analysis. Proponents of model-based paradigms might argue that the properties of $(Y, R)$ can never be known in reality. While it is true that the MAR assumption is `hypothetical' (untestable), ascribing it to a model for $R$ given~$Y$ has no advantages over ascribing it directly to $R$ given~$Y$ because it is impossible to validate this property for the missingness model against the data. In short, choosing an untestable model is not an improvement on making an untestable assumption, and comes with the disadvantages discussed above. In relation to the ignorable models identified by Rubin (1976), the primary difference between the standard conditions and the alternative interpretation of ignorability reviewed in Section~\ref{Sect:RisMAR} can be summed up as follows: in the former case, the ignorable models are the full models that an investigator would choose from in order to not use a full model; in the latter case, the ignorable models are the full models that can be ignored because the need for an investigator to contemplate ever choosing one never arises. \end{document}
\begin{document} \thispagestyle{plain} {\footnotesize {{\bf General Mathematics Vol. xx, No. x (201x), xx--xx}}} \vspace*{2cm} \begin{center} {\Large {\bf A Note On Multi Poly-Euler Numbers And Bernoulli Polynomials}} \footnote{\it Received 08 Jun, 2009 \hspace{0.1cm} Accepted for publication (in revised form) 29 November, 2013} {\large Hassan Jolany, Mohsen Aliabadi, Roberto B. Corcino, and M.~R. Darafsheh } \end{center} \begin{abstract} In this paper we introduce a generalization of the Multi Poly-Euler polynomials and investigate some relationships involving Multi Poly-Euler polynomials. Obtaining a closed formula for the generalized Multi Poly-Euler numbers therefore seems to be a natural and important problem. \end{abstract} \begin{center} {\bf 2010 Mathematics Subject Classification:} 11B73, 11A07 {\bf Key words and phrases:} Euler numbers, Bernoulli numbers, Poly-Bernoulli numbers, Poly-Euler numbers, Multi Poly-Euler numbers and polynomials \end{center} \vspace*{0.3cm} \section{Introduction} In the 17th century a topic of mathematical interest was finite sums of powers of integers, such as the series $1+2+\cdots+(n-1)$ or the series $1^2 + 2^2 + \cdots + (n-1)^2$. Closed forms for these finite sums were known, but the sum of the more general series $1^k+2^k+\cdots+(n-1)^k$ was not. It was the mathematician Jacob Bernoulli who solved this problem. The Bernoulli numbers arise in the Taylor expansion \begin{equation}\label{e1} \begin{array}{c} \frac{x}{e^x-1}=\sum\limits_{n=0}^{\infty}B_n\frac{x^n}{n!} \end{array}. \end{equation} and we have \begin{equation}\label{e1} \begin{array}{c}S_m(n) = \sum_{k=1}^{n-1} k^m = 1^m + 2^m+ \cdots + (n-1)^m= {1\over{m+1}}\sum_{k=0}^m {m+1\choose{k}} B_k\; n^{m+1-k} \end{array}. \end{equation} We also have the following matrix representation for the Bernoulli numbers (for $n\in \mathbf{N}$), see [1-4].
\begin{align} B_{n} &=\frac{(-1)^n} {(n-1)!}~ \begin{vmatrix} \frac{1}{2}& \frac{1}{3} & \frac{1}{4} &\cdots \frac{1}{n}&~\frac{1}{n+1}\\ 1& 1 & 1 &\cdots 1 & 1 \\ 0& 2 & 3 &\cdots {n-1} & n\\ 0& 0 & \binom{3}{2} &\cdots \binom{n-1}{2} & \binom{n}{2} \\ \vdots & ~ \vdots & ~ \vdots &\ddots~~ \vdots & \vdots & \\ 0& 0& 0& \cdots \binom{n-1}{n-2} & \binom{n}{n-2} \\ \end{vmatrix}. \end{align} Euler, on page 499 of [5], introduced the Euler polynomials to evaluate the alternating sum \begin{equation}\label{e1} \begin{array}{c} A_n(m)=\sum\limits_{k=1}^{m}(-1)^ {m-k}k^n=m^n-(m-1)^n+...+(-1)^ {m-1}1^n \end{array}. \end{equation} The Euler numbers may be defined by the following generating function \begin{equation}\label{e1} \begin{array}{c} \frac{2}{e^{t}\!+\!1}\;=\;\sum\limits_{{n=0}}^{\infty}E_{n}\frac{t^{n}}{n!} \end{array}. \end{equation} and we have the following matrix representation for the Euler numbers [1,2,3,4]. \begin{align} E_{2n} &=(-1)^n (2n)!~ \begin{vmatrix} \frac{1}{2!}& 1 &~& ~&~\\ \frac{1}{4!}& \frac{1}{2!} & 1 &~&~\\ \vdots & ~ & \ddots~~ &\ddots~~ & ~\\ \frac{1}{(2n-2)!}& \frac{1}{(2n-4)!}& ~&\frac{1}{2!} & 1\\ \frac{1}{(2n)!}&\frac{1}{(2n-2)!}& \cdots & \frac{1}{4!} & \frac{1}{2!}\end{vmatrix}. \end{align} The poly-Bernoulli polynomials have been studied by many researchers in recent decades. The history of these polynomials goes back to Kaneko. The poly-Bernoulli polynomials have wide-ranging applications in number theory, combinatorics, and other fields of applied mathematics. One application of the poly-Bernoulli numbers, investigated by Chad Brewbaker in [6,7,8,9], concerns the number of $(0,1)$-matrices with $n$ rows and $k$ columns. He showed that the number of $(0,1)$-matrices with $n$ rows and $k$ columns that are uniquely reconstructable from their row and column sums is the poly-Bernoulli number of negative index $B_n^{(-k)}$. Let us briefly recall poly-Bernoulli numbers and polynomials.
For an integer $k\in \mathbf{Z}$, put \begin{equation}\label{e1} \begin{array}{c} \operatorname{Li}_k(z) = \sum_{n=1}^\infty {z^n \over n^k} \end{array}. \end{equation} which is the $k$-th polylogarithm if $k\geq 1$, and a rational function if $k\leq 0$. The name of the function comes from the fact that it may alternatively be defined as a repeated integral of itself. This formal power series can be used to define the Poly-Bernoulli numbers and polynomials. The polynomials $B_n^{(k)}(x)$ are said to be poly-Bernoulli polynomials if they satisfy \begin{equation}\label{e1} \begin{array}{c} {Li_{k}(1-e^{-t}) \over 1-e^{-t}}e^{xt}=\sum\limits_{n=0}^{\infty}B_{n}^{(k)}(x){t^{n}\over n!} \end{array}. \end{equation} In fact, the Poly-Bernoulli polynomials are a generalization of the Bernoulli polynomials because, for $n\geq 0$, we have \begin{equation}\label{e1} \begin{array}{c} (-1)^nB_n^{(1)}(-x)=B_n(x) \end{array}. \end{equation} Sasaki [10], a Japanese mathematician, found the Euler-type version of these polynomials. In fact, by using the following relation for the Euler numbers, \begin{equation}\label{e1} \begin{array}{c} \frac{1}{\cosh t}=\sum\limits_{n=0}^{\infty}\frac{E_n}{n!}t^n \end{array}. \end{equation} he found a poly-Euler version as follows \begin{equation}\label{e1} \begin{array}{c} \frac{Li_k(1-e^{-4t})}{4t\cosh t}=\sum\limits_{n=0}^{\infty}E_{n}^{(k)}{t^{n}\over n!} \end{array}. \end{equation} Moreover, by defining the following $L$-function, he interpolated his Poly-Euler numbers: \begin{equation}\label{e1} \begin{array}{c} L_k(s)=\frac{1}{\Gamma(s)}\int_0^{\infty}t^{s-1}\frac{Li_k(1-e^{-4t})}{4(e^t+e^{-t})}dt \end{array}. \end{equation} and Sasaki showed that \begin{equation}\label{e1} \begin{array}{c} L_k(-n)=(-1)^nn\frac{E_{n-1}^{(k)}}{2} \end{array}. \end{equation} However, working with this type of generating function to derive identities is not easy.
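As an aside, Brewbaker's combinatorial interpretation of the negative-index poly-Bernoulli numbers recalled above can be checked mechanically. The following Python sketch (our illustration, not part of the paper) uses the closed form $B_n^{(-k)}=\sum_{j\geq 0}(j!)^2 S(n+1,j+1)S(k+1,j+1)$, where $S$ denotes the Stirling numbers of the second kind (the formula known from the literature for the negative-index case), and compares it with a brute-force count of uniquely reconstructable $(0,1)$-matrices:

```python
from collections import Counter
from itertools import product
from math import comb, factorial

def stirling2(n, k):
    # Stirling number of the second kind, by inclusion-exclusion
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

def poly_bernoulli_neg(n, k):
    # known closed form for B_n^{(-k)}
    return sum(factorial(j) ** 2 * stirling2(n + 1, j + 1) * stirling2(k + 1, j + 1)
               for j in range(min(n, k) + 1))

def unique_matrices(n, k):
    # brute force: count n x k (0,1)-matrices uniquely determined
    # by their row and column sums
    def sums(M):
        return (tuple(map(sum, M)), tuple(sum(col) for col in zip(*M)))
    mats = [tuple(tuple(bits[i * k:(i + 1) * k]) for i in range(n))
            for bits in product((0, 1), repeat=n * k)]
    freq = Counter(sums(M) for M in mats)
    return sum(1 for M in mats if freq[sums(M)] == 1)
```

Both counts agree on small instances; for example, both give $14$ for $n=k=2$ (of the $16$ matrices, only the two $2\times 2$ permutation matrices share their row and column sums).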
So, inspired by the definitions of the Euler numbers and Bernoulli numbers, we can define Poly-Euler numbers and polynomials as follows; A. Bayad [11] defined them by the same method at about the same time. \begin{defin}\label{d1} (Poly-Euler polynomials): The Poly-Euler polynomials may be defined by the following generating function, \begin{equation}\label{e1} \begin{array}{c} \frac{2Li_k(1-e^{-t})}{1+e^t}e^{xt}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{(k)}{t^{n}\over n!} \end{array}. \end{equation} \end{defin} If we replace $t$ by $4t$, take $x=1/2$, and use the definition $\cosh t=\frac{e^t+e^{-t}}{2}$, we recover the Poly-Euler numbers introduced by Sasaki and Bayad, and we can find the same interpolating function for them (with some additional constant coefficient). The generalization of the polylogarithm is defined by the infinite series \begin{equation}\label{e1} \begin{array}{c} Li_{(k_1,k_2,...,k_r)}(z)=\sum\limits_{m_1,m_2,...,m_r}\frac{z^{m_r}}{m_1^{k_1}...m_r^{k_r}} \end{array}. \end{equation} where the summation runs over $0<m_1<m_2<\cdots<m_r$. Kim-Kim [12], a student of Taekyun Kim, introduced the Multi poly-Bernoulli numbers and proved that special values of certain zeta functions at non-positive integers can be described in terms of these numbers. The study of Multi poly-Bernoulli numbers and their combinatorial relations has received much attention in [6-13]. The Multi Poly-Bernoulli numbers may be defined as follows \begin{equation}\label{e1} \begin{array}{c} \frac{Li_{(k_1,k_2,...,k_r)}(1-e^{-t})}{(1-e^{-t})^r}=\sum\limits_{n=0}^{\infty}B_n^{(k_1,k_2,...,k_r)}\frac{t^n}{n!} \end{array}. \end{equation} Inspired by this definition, we can define the Multi Poly-Euler numbers and polynomials.
\begin{defin}\label{d1} Multi Poly-Euler polynomials $\mathbf{E}_n^{(k_1,...,k_r)}(x)$, $(n = 0, 1, 2, ...)$ are defined for integers $k_1, k_2, ..., k_r$ by the generating series \begin{equation}\label{e1} \begin{array}{c} \frac{2Li_{(k_1,...,k_r)}(1-e^{-t})}{(1+e^t)^r}e^{rxt}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{(k_1,...,k_r)}(x){t^{n}\over n!} \end{array}. \end{equation} \end{defin} If $x=0$, we obtain the Multi Poly-Euler numbers $\mathbf{E}_{n}^{(k_1,...,k_r)}=\mathbf{E}_{n}^{(k_1,...,k_r)}(0)$. Now we introduce the parameters $a,b,c$ for the Multi Poly-Euler polynomials and numbers as follows. \begin{defin}\label{d1} Multi Poly-Euler polynomials $\mathbf{E}_n^{(k_1,...,k_r)}(x,a,b)$, $(n = 0, 1, 2, ...)$ are defined for integers $k_1, k_2, ..., k_r$ by the generating series \begin{equation}\label{e1} \begin{array}{c} \frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r}e^{rxt}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(x,a,b){t^{n}\over n!} \end{array}. \end{equation} \end{defin} In the same way, if $x=0$, we define the Multi Poly-Euler numbers with parameters $a,b$ by $\mathbf{E}_{n}^{(k_1,...,k_r)}(a,b)=\mathbf{E}_{n}^{(k_1,...,k_r)}(0,a,b)$. In the following theorem, we find a relation between $\mathbf{E}_{n}^{(k_1,...,k_r)}(a,b)$ and $\mathbf{E}_{n}^{(k_1,...,k_r)}(x)$. \begin{theo}\label{t1} Let $a,b>0$, $ab\neq \pm1$; then we have \begin{equation}\label{e1} \begin{array}{c} \mathbf{E}_{n}^{(k_1,k_2,...,k_r)}(a,b)=\mathbf{E}_{n}^{(k_1,k_2,...,k_r)}\left(\frac{\ln a}{\ln a+\ln b}\right)(\ln a+\ln b)^n \end{array}.
\end{equation} \end{theo} Proof. By applying Definitions 2 and 3, we have \begin{align*} \frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r} &= \sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(a,b){t^{n}\over n!} \\ & = e^{rt\ln a}\frac{2Li_{(k_1,...,k_r)}(1-e^{-t\ln ab})}{(1+e^{t\ln ab})^r}\\ \end{align*} So, we get \begin{align*} \frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}\left(\frac{\ln a}{\ln a +\ln b}\right)(\ln a +\ln b)^n{t^{n}\over n!} \end{align*} Therefore, by comparing the coefficients of $t^n$ on both sides, we get the desired result. $\square$ In the next theorem, we give a more direct relationship between $\mathbf{E}_{n}^{(k_1,k_2,...,k_r)}(a,b)$ and $\mathbf{E}_{n}^{(k_1,k_2,...,k_r)}$. \begin{theo}\label{t1} Let $a,b>0$, $ab\neq \pm1$; then we have \begin{equation}\label{e1} \begin{array}{c} \mathbf{E}_{n}^{(k_1,k_2,...,k_r)}(a,b)=\sum\limits_{i=0}^{n}r^{n-i}(\ln a +\ln b)^i(\ln a)^{n-i}\binom{n}{i} \mathbf{E}_{i}^{(k_1,k_2,...,k_r)} \end{array}. \end{equation} \end{theo} Proof. By applying Definitions 2 and 3, we have \begin{align*} \sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(a,b){t^{n}\over n!}&=\frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r} \\ & = e^{rt\ln a}\frac{2Li_{(k_1,...,k_r)}(1-e^{-t\ln ab})}{(1+e^{t\ln ab})^r}\\ & = \left(\sum\limits_{k=0}^{\infty}\frac{r^kt^k(\ln a)^k}{k!}\right)\left(\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(\ln a+\ln b)^n\frac{t^n}{n!}\right)\\ & = \sum\limits_{j=0}^{\infty}\left( \sum\limits_{i=0}^{j}r^{j-i}\frac{\mathbf{E}_i^{(k_1,...,k_r)}(\ln a+\ln b)^i(\ln a)^{j-i}}{i!(j-i)!}t^j\right)\\ \end{align*} So, by comparing the coefficients of $t^n$ on both sides, we get the desired result.
$\square$ By applying Definition 3 and a simple manipulation, we get the following corollary. \begin{cor}\label{c1}For non-zero numbers $a,b$, with $ab \neq -1$ we have \begin{equation}\label{e2} \begin{array}{c} \mathbf{E}_n^{(k_1,...,k_r)}(x;a,b)=\sum\limits_{i=0}^{n}\binom{n}{i}r^{n-i}\mathbf{E}_i^{(k_1,...,k_r)}(a,b)x^{n-i} \end{array}. \end{equation} \end{cor} Furthermore, by combining the results of Theorem 2 and Corollary 1, we get the following relation between the generalized Multi Poly-Euler polynomials with parameters $a,b$, $\mathbf{E}_{n}^{{(k_1,...,k_r)}}(x;a,b)$, and the Multi Poly-Euler numbers $\mathbf{E}_{n}^{{(k_1,...,k_r)}}$: \begin{equation}\label{e1} \begin{array}{c} \mathbf{E}_{n}^{{(k_1,...,k_r)}}(x;a,b)= \sum\limits_{k=0}^{n} \sum\limits_{j=0}^{k}r^{n-k}\binom{n}{k}\binom{k}{j}(\ln a)^{k-j}(\ln a+\ln b)^j\mathbf{E}_{j}^{{(k_1,...,k_r)}}x^{n-k} \end{array}. \end{equation} Now we state the ``addition formula'' for the generalized Multi Poly-Euler polynomials. \begin{cor}\label{c1}(Addition formula) For non-zero numbers $a,b$, with $ab \neq -1$ we have \begin{equation}\label{e2} \begin{array}{c} \mathbf{E}_n^{(k_1,...,k_r)}(x+y;a,b)=\sum\limits_{k=0}^{n}\binom{n}{k}r^{n-k}\mathbf{E}_k^{(k_1,...,k_r)}(x;a,b)y^{n-k} \end{array}. \end{equation} \end{cor} Proof. We can write \begin{align*} \sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(x+y;a,b){t^{n}\over n!}&=\frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r} e^{(x+y)rt} \\ & = \frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r} e^{xrt}e^{yrt}\\ & = \left(\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(x;a,b){t^{n}\over n!}\right) \left(\sum\limits_{i=0}^{\infty}\frac{y^i r^i}{i!}t^i\right) \\ & = \sum\limits_{n=0}^{\infty}\left( \sum\limits_{k=0}^{n}\binom{n}{k}r^{n-k}y^{n-k}\mathbf{E}_{k}^{{(k_1,...,k_r)}}(x;a,b)\right)\frac{t^n}{n!}\\ \end{align*} So, by comparing the coefficients of $t^n$ on both sides, we get the desired result.
$\square$ \section{Explicit formula for Multi Poly-Euler polynomials} Here we present an explicit formula for the Multi Poly-Euler polynomials. \begin{theo}\label{t1} The Multi Poly-Euler polynomials have the following explicit formula \begin{equation}\label{e2} \begin{array}{c} \mathbf{E}^{(k_1,k_2,\ldots, k_r)}_n(x)=\sum\limits_{i=0}^n\sum\limits_{ 0< m_1< m_2<\ldots< m_r \atop c_1+c_2+\ldots=r}\sum\limits_{j=0}^{m_r}\frac{2(rx-j)^{n-i}r!(-1)^{j+c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^i\binom{m_r}{j}\binom{n}{i}}{(c_1!c_2!\ldots)(m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r})}. \end{array}. \end{equation} \end{theo} Proof. We have $$Li_{(k_1,k_2,\ldots, k_r)}(1-e^{-t})e^{rxt}=\sum_{ 0< m_1< m_2<\ldots< m_r }\frac{(1-e^{-t})^{m_r}}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}e^{rxt}\qquad\qquad\qquad\qquad\qquad\qquad$$ \begin{align*} =&\sum_{ 0< m_1< m_2<\ldots< m_r }\frac{1}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}\sum_{j=0}^{m_r}(-1)^j\binom{m_r}{j}\sum_{n\ge0}(rx-j)^n\frac{t^n}{n!}\\ =&\sum_{n\ge0}\left(\sum_{ 0< m_1< m_2<\ldots< m_r }\sum_{j=0}^{m_r}\frac{(-1)^{j}(rx-j)^n\binom{m_r}{j}}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}\right)\frac{t^n}{n!}. \end{align*} On the other hand, \begin{align*} \left(\frac{1}{1+e^{t}}\right)^r=&\left(\sum_{ n\ge0 }(-1)^ne^{nt}\right)^r\\ =&\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}}{c_1!c_2!\ldots}e^{t(c_1+2c_2+\ldots)}\\ =&\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}}{c_1!c_2!\ldots}\sum_{n\ge0}(c_1+2c_2+\ldots)^n\frac{t^n}{n!}\\ =&\sum_{n\ge0}\left(\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^n}{c_1!c_2!\ldots}\right)\frac{t^n}{n!}.
\end{align*} Hence, $$\frac{2Li_{(k_1,k_2,\ldots, k_r)}(1-e^{-t})}{(1+e^{t})^r}e^{rxt}=2Li_{(k_1,k_2,\ldots, k_r)}(1-e^{-t})e^{rxt}\left(\frac{1}{1+e^{t}}\right)^r\qquad\qquad\qquad\qquad\qquad\qquad$$ \begin{align*} =2\left(\sum_{n\ge0}\left(\sum_{ 0< m_1< m_2<\ldots< m_r }\sum_{j=0}^{m_r}\frac{(-1)^{j}(rx-j)^n\binom{m_r}{j}}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}\right)\frac{t^n}{n!}\right)\times\\ \;\;\;\;\times\left(\sum_{n\ge0}\left(\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^n}{c_1!c_2!\ldots}\right)\frac{t^n}{n!}\right)\\ =2\sum_{n\ge0}\sum_{i=0}^n\left(\sum_{ 0< m_1< m_2<\ldots< m_r }\sum_{j=0}^{m_r}\frac{(-1)^{j}(rx-j)^{n-i}\binom{m_r}{j}}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}\right)\frac{t^{n-i}}{(n-i)!}\times\\ \;\;\;\;\times\left(\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^i}{c_1!c_2!\ldots}\right)\frac{t^i}{i!}\\\end{align*} \begin{align*}=2\sum_{n\ge0}\sum_{i=0}^n\sum_{ 0< m_1< m_2<\ldots< m_r \atop c_1+c_2+\ldots=r}\sum_{j=0}^{m_r}\frac{(rx-j)^{n-i}r!(-1)^{j+c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^i\binom{m_r}{j}\binom{n}{i}}{(c_1!c_2!\ldots)(m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r})}\frac{t^{n}}{n!}\end{align*} By comparing the coefficient of $t^n/n!$, we obtain the desired explicit formula. $\square$ \begin{defin}\label{d1} (Poly-Euler polynomials with $a,b,c$ parameters): The Poly-Euler polynomials with $a,b,c$ parameters may be defined by the following generating function, \begin{equation}\label{e1} \begin{array}{c} \frac{2Li_k(1-(ab)^{-t})}{a^{-t}+b^t}c^{xt}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{(k)}(x;a,b,c){t^{n}\over n!} \end{array}. \end{equation} \end{defin} Now, in the next theorem, we give an explicit formula for the Poly-Euler polynomials with $a,b,c$ parameters.
\begin{theo}\label{t1} The generalized Poly-Euler polynomials with $a,b,c$ parameters have the following explicit formula \begin{equation}\label{e2} \begin{array}{c} \mathbf{E}^{(k)}_n(x;a,b,c)=\\ \sum\limits_{m=0}^n\sum\limits_{j=1}^m\sum\limits_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}(x\ln c-(m-j+i)\ln a-(m-j+i+1)\ln b)^n. \end{array}. \end{equation} \end{theo} Proof. We can write \begin{align*}\sum_{n\ge0}\mathbf{E}^{(k)}_n(x;a,b,c)\frac{t^n}{n!}=\frac{2Li_k(1-(ab)^{-t})}{b^{t}((ab)^{-t}+1)}c^{xt} =2b^{-t}\left(\sum_{m\ge0}(-1)^m(ab)^{-mt}\right)\left(\sum_{j\ge1}\frac{\left(1-(ab)^{-t}\right)^j}{j^k}\right)c^{xt}.\end{align*} \begin{align*} =&b^{-t}\sum_{m\ge0}\sum_{j=1}^m\sum_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}(ab)^{-t(m-j+i)}c^{xt}\\ =&\sum_{m\ge0}\sum_{j=1}^m\sum_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}e^{-t(m-j+i)\ln(ab)}e^{-t\ln b}e^{xt\ln c}\\ =&\sum_{m\ge0}\sum_{j=1}^m\sum_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}\sum_{n\ge0}(x\ln c-(m-j+i)\ln a-(m-j+i+1)\ln b)^n\frac{t^n}{n!}\\ =&\sum_{n\ge0}\sum_{m=0}^n\sum_{j=1}^m\sum_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}(x\ln c-(m-j+i)\ln a-(m-j+i+1)\ln b)^n\frac{t^n}{n!}. \end{align*} By comparing the coefficient of $t^n/n!$, we obtain the desired explicit formula. $\square$ \noindent $\begin{array}{ll} \textrm{\bf Hassan Jolany}\\ \textrm{Université des Sciences et Technologies de Lille}\\ \textrm{UFR de Mathématiques}\\ \textrm{Laboratoire Paul Painlevé}\\ \textrm{CNRS-UMR 8524 59655 Villeneuve d'Ascq Cedex/France}\\ \textrm{e-mail: hassan.jolany@math.univ-lille1.fr} \end{array}$ \noindent $\begin{array}{ll} \textrm{\bf Mohsen Aliabadi}\\ \textrm{Department of Mathematics, Statistics and Computer Science,}\\ \textrm{University of Illinois at Chicago, USA}\\ \textrm{e-mail: mohsenmath88@gmail.com} \end{array}$ \noindent $\begin{array}{ll} \textrm{\bf Roberto B.
Corcino}\\ \textrm{Department of Mathematics}\\ \textrm{Mindanao State University, Marawi City, 9700 Philippines}\\ \textrm{e-mail: rcorcino@yahoo.com} \end{array}$ \noindent $\begin{array}{ll} \textrm{\bf M.R.Darafsheh}\\ \textrm{Department of Mathematics, Statistics and Computer Science }\\ \textrm{Faculty of Science}\\ \textrm{University of Tehran, Iran}\\ \textrm{e-mail: darafsheh@ut.ac.ir} \end{array}$ \end{document}
\begin{document} \maketitle \begin{abstract} We study two computational problems, parameterised by a fixed tree~$H$. $\nHom{H}$ is the problem of counting homomorphisms from an input graph~$G$ to~$H$. $\wHom{H}$ is the problem of counting weighted homomorphisms to~$H$, given an input graph~$G$ and a weight function for each vertex~$v$ of~$G$. Even though $H$ is a tree, these problems turn out to be sufficiently rich to capture all of the known approximation behaviour in $\mathrm{\#P}$. We give a complete trichotomy for $\wHom{H}$. If $H$ is a star then $\wHom{H}$ is in $\mathrm{FP}$. If $H$ is not a star but it does not contain a certain induced subgraph~$J_3$ then $\wHom{H}$ is equivalent under approximation-preserving (AP) reductions to $\textsc{\#BIS}$, the problem of counting independent sets in a bipartite graph. This problem is complete for the class $\text{\#RH$\Pi_1$}$ under AP-reductions. Finally, if $H$ contains an induced~$J_3$ then $\wHom{H}$ is equivalent under AP-reductions to $\textsc{\#Sat}$, the problem of counting satisfying assignments to a CNF Boolean formula. Thus, $\wHom{H}$ is complete for $\mathrm{\#P}$ under AP-reductions. The results are similar for $\nHom{H}$ except that a rich structure emerges if $H$ contains an induced~$J_3$. We show that there are trees~$H$ for which $\nHom{H}$ is $\textsc{\#Sat}$-equivalent (disproving a plausible conjecture of Kelk). However, it is still not known whether $\nHom{H}$ is $\textsc{\#Sat}$-hard for \emph{every} tree~$H$ which contains an induced~$J_3$. It turns out that there is an interesting connection between these homomorphism-counting problems and the problem of approximating the partition function of the \emph{ferromagnetic Potts model}. In particular, we show that for a family of graphs $J_q$, parameterised by a positive integer~$q$, the problem $\nHom{J_q}$ is AP-interreducible with the problem of approximating the partition function of the $q$-state Potts model. 
It was not previously known that the Potts model had a homomorphism-counting interpretation. We use this connection to obtain some additional upper bounds for the approximation complexity of $\nHom{J_q}$. \end{abstract} \section{Introduction} A \emph{homomorphism} from a graph~$G$ to a graph~$H$ is a mapping $\sigma:V(G)\rightarrow V(H)$ such that the image $(\sigma(u),\sigma(v))$ of every edge $(u,v) \in E(G)$ is in $E(H)$. Let $\Hom GH$ denote the set of homomorphisms from~$G$ to~$H$ and let $Z_H(G)=|\Hom GH|$. For each fixed~$H$, we consider the following computational problem. \begin{description} \item[Problem] $\nHom{H}$. \item[Instance] Graph $G$. \item[Output] $Z_H(G)$. \end{description} The vertices of~$H$ are often referred to as ``colours'' and a homomorphism from~$G$ to~$H$ can be thought of as an assignment of colours to the vertices of $G$ which satisfies certain constraints along each edge of~$G$. The constraints guarantee that adjacent vertices in~$G$ are assigned colours which are adjacent in~$H$. A homomorphism in $\Hom GH$ is therefore often called an ``$H$-colouring'' of~$G$. When $H=K_q$, the complete graph with $q$~vertices, the elements of $\Hom G{K_q}$ are proper $q$-colourings of~$G$. There has been much work on determining the complexity of the $H$-colouring decision problem, which is the problem of determining whether $Z_H(G)=0$, given input~$G$. This work will be described in Section~\ref{sec:prev}, but at this point it is worth mentioning the dichotomy result of Hell and Ne{\v{s}}et{\v{r}}il~\cite{HN}, which shows that the decision problem is solvable in polynomial time if $H$ is bipartite and that it is NP-hard otherwise. There has also been work~\cite{DG,Kelk} on determining the complexity of exactly or approximately solving the related counting problem~$\nHom{H}$. This paper is concerned with the computational difficulty of $\nHom{H}$ when $H$ is bipartite, and particularly when $H$ is a tree. 
As an example, consider the case where $H$ is the four-vertex path~$P_4$ (of length three). Label the vertices (or colours) $1,2,3,4$, in sequence. If $G$ is not bipartite then $\Hom GH=\emptyset$, so the interesting case is when $G$ is bipartite. Suppose for simplicity that $G$ is connected. Then one side of the vertex bipartition of $G$ must be assigned even colours and the other side must be assigned odd colours. It is easy to see that the vertices assigned colours~$1$ and~$4$ form an independent set of $G$, and that every independent set arises in exactly two ways as a homomorphism. Thus, $Z_{P_4}(G)$ is equal to twice the number of independent sets in the bipartite graph~$G$. We will return to this example presently. It will sometimes be useful to consider a weighted generalisation of the homomorphism-counting problem. Suppose, for each $v\in V(G)$, that $w_v:V(H)\rightarrow \mathbb{Q}_{\geq 0}$ is a weight function, assigning a non-negative rational weight to each colour. Let $W(G,H)$ be an indexed set of weight functions, containing one weight function for each vertex $v\in V(G)$. Thus, $$W(G,H) =\{ w_v \mid v\in V(G)\}.$$ Our goal is to compute the weighted sum of homomorphisms from~$G$ to~$H$, which is expressed as the partition function $$Z_{H}(G, W(G,H)) = \sum_{\sigma\in \Hom GH} \prod_{v\in V(G)} w_v(\sigma(v)).$$ Given a fixed~$H$, each weight function $w_v\in W(G,H)$ can be represented succinctly as a list of $|V(H)|$ rational numbers. This representation is used in the following computational problem. \begin{description} \item[Problem] $\wHom{H}$. \item[Instance] A graph $G$ and an indexed set of weight functions $W(G,H)$. \item[Output] $Z_{H}(G,W(G,H))$. \end{description} The complexity of \emph{exactly} solving $\nHom{H}$ and $\wHom{H}$ is already understood. Dyer and Greenhill have observed \cite[Lemma 4.1]{DG} that $\nHom{H}$ is in~$\mathrm{FP}$ if $H$ is a complete bipartite graph.
It is easy to see (see Observation~\ref{obs:star}) that the same is true of $\wHom{H}$. On the other hand, Dyer and Greenhill showed that $\nHom{H}$ is $\mathrm{\#P}$-complete for every bipartite graph~$H$ that is not complete. Since $\nHom{H}$ is a special case of the more general problem $\wHom{H}$, we conclude that both problems are in~$\mathrm{FP}$ if $H$ is a star (a tree in which some ``centre'' vertex is an endpoint of every edge), and that both problems are $\mathrm{\#P}$-complete for every other tree~$H$. This paper maps the complexity of \emph{approximately} solving $\nHom{H}$ and $\wHom{H}$ when $H$ is a tree. Dyer, Goldberg, Greenhill and Jerrum~\cite{APred} introduced the concept of ``AP-reduction'' for studying the complexity of approximate counting problems. Informally, an AP-reduction is an efficient reduction from one counting problem to another, which preserves closeness of approximation; two counting problems that are interreducible using this kind of reduction have the same complexity when it comes to finding good approximate solutions. We have already encountered an extremely simple example of two AP-interreducible problems, namely $\nHom{P_4}$ and $\textsc{\#BIS}$, the problem of counting independent sets in a bipartite graph. Using less trivial reductions, Dyer et al.\ showed (\cite[Theorem 5]{APred}) that several natural counting problems in addition to $\nHom{P_4}$ are interreducible with $\textsc{\#BIS}$, and moreover that they are all complete for the complexity class $\text{\#RH$\Pi_1$}$ with respect to AP-reductions. The class $\text{\#RH$\Pi_1$}$ is conjectured to contain problems that do not have an FPRAS; however it is not believed to contain $\textsc{\#Sat}$, the classical hard problem of computing the number of satisfying assignments to a CNF Boolean formula. Refer to Section~\ref{sec:prelim} for more detail on the technical concepts mentioned here and elsewhere in the introduction. 
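The $\nHom{P_4}$/$\textsc{\#BIS}$ correspondence described above is easy to confirm by brute force on small graphs. The following sketch (ours, for illustration only, with hypothetical helper names) counts homomorphisms and independent sets directly:

```python
from itertools import product

def count_homs(g_n, g_edges, h_vertices, h_edges):
    # Z_H(G): try every map V(G) -> V(H); h_edges is a symmetric set of pairs
    return sum(1 for sigma in product(h_vertices, repeat=g_n)
               if all((sigma[u], sigma[v]) in h_edges for u, v in g_edges))

def count_independent_sets(g_n, g_edges):
    # number of vertex subsets of G containing no edge
    return sum(1 for s in product((0, 1), repeat=g_n)
               if all(not (s[u] and s[v]) for u, v in g_edges))

# P4: the path 1-2-3-4, stored as a symmetric edge relation
P4 = {(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3)}
```

For a connected bipartite $G$, such as a single edge or a three-vertex path, $Z_{P_4}(G)$ comes out as exactly twice the number of independent sets of~$G$, matching the argument above.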
Steven Kelk's PhD thesis~\cite{Kelk} examined the approximation complexity of the problem $\nHom{H}$ for general~$H$. He identified certain families of graphs~$H$ for which $\nHom{H}$ is AP-interreducible with $\textsc{\#BIS}$ and other large families for which $\nHom{H}$ is AP-interreducible with $\textsc{\#Sat}$. He noted \cite[Section 5.7.1]{Kelk} that, during the study, he did not encounter \emph{any} bipartite graphs~$H$ for which $\textsc{\#Sat} \leq_\mathrm{AP} \nHom{H}$, and that he suspected \cite[Section 7.3]{Kelk} that there were ``structural barriers'' which would prevent homomorphism-counting problems to bipartite graphs from being $\textsc{\#Sat}$-hard. An interesting test case is the tree $J_3$ which is depicted in Figure~\ref{fig:J3}. Kelk referred to this tree \cite[Section 7.4]{Kelk} as ``the junction'', and conjectured that $\nHom{J_3}$ is neither $\textsc{\#BIS}$-easy nor $\textsc{\#Sat}$-hard. Thus, he conjectured that unlike the setting of Boolean constraint satisfaction, where every parameter leads to a computational problem which is FPRASable, $\textsc{\#BIS}$-equivalent, or $\textsc{\#Sat}$-equivalent \cite{trichotomy}, the complexity landscape for approximate $H$-colouring may be more nuanced, in the sense that there might be graphs~$H$ for which none of these hold. The purpose of this paper is to describe the interesting complexity landscape of the approximation problems $\nHom{H}$ and $\wHom{H}$ when $H$ is a tree. It turns out that even the case in which $H$ is a tree is sufficiently rich to include all of the known approximation complexity behaviour in $\mathrm{\#P}$. First, consider the weighted problem~$\wHom{H}$. For this problem, we show that there is a complexity trichotomy, and the trichotomy depends upon the induced subgraphs of~$H$. We say that $H$ {\it contains an induced $H'$} if $H$ has an induced subgraph that is isomorphic to~$H'$. Here is the result. 
If $H$ contains no induced $P_4$ then it is a star, so $\wHom{H}$ is in $\mathrm{FP}$ (Observation~\ref{obs:star}). If $H$ contains an induced $P_4$ but it does not contain an induced $J_3$ then it turns out that $\wHom{H}$ is AP-interreducible with $\textsc{\#BIS}$ (Lemma~\ref{lem:intermediate}). Finally, if $H$ contains an induced $J_3$, then $\textsc{\#Sat} \leq_\mathrm{AP} \wHom{H}$ (Lemma~\ref{lem:hardweighted}). Thus, the complexity of $\wHom{H}$ is completely determined by the induced subgraphs of the tree~$H$, and there are no possibilities other than those that arise in the Boolean constraint satisfaction trichotomy~\cite{trichotomy}. Now consider the problem~$\nHom{H}$. Like its weighted counterpart, the unweighted problem $\nHom{H}$ is in $\mathrm{FP}$ if $H$ is a star, and it is $\textsc{\#BIS}$-equivalent if $H$ contains an induced~$P_4$ but it does not contain an induced~$J_3$. However, it is not known whether $\nHom{H}$ is $\textsc{\#Sat}$-hard for every $H$ which contains an induced $J_3$. The structure that has emerged is already quite rich. First, we have discovered (Theorem~\ref{thm:hardH}) that there are trees $H$ for which $\nHom{H}$ is $\textsc{\#Sat}$-hard. This result is surprising --- it disproves the plausible conjecture of Kelk that $\nHom{H}$ is not $\textsc{\#Sat}$-hard for any bipartite graph~$H$. We don't know whether $\nHom{H}$ is $\textsc{\#Sat}$-hard for \emph{every} tree~$H$ which contains an induced~$J_3$. In fact, we have discovered an interesting connection between these homomorphism-counting problems and the problem of approximating the partition function of the \emph{ferromagnetic Potts model}. In particular, Theorem~\ref{thm:junction} shows that for a family of graphs $J_q$, parameterised by a positive integer~$q$, the problem $\nHom{J_q}$ is AP-interreducible with the problem of approximating the partition function of the $q$-state Potts model.
This is surprising because it was not known that the Potts model had a homomorphism-counting interpretation. The Potts-model connection allows us to give a non-trivial upper bound for the complexity of $\nHom{J_q}$. In particular, Corollary~\ref{cor:bqcol} shows that this problem is AP-reducible to the problem of counting proper $q$-colourings of bipartite graphs. We are not aware of any complexity relationships between the problems $\nHom{J_q}$, for $q>2$. At one extreme, they might all be AP-interreducible; at the other, they might all be incomparable. Another conceivable situation is that $\nHom{J_q}$ is AP-reducible to $\nHom{J_{q'}}$ exactly when $q\leq q'$. There is no real evidence for or against any of these or other possibilities. However, in the final section we exhibit a natural problem that provides an upper bound on the complexity of infinite families of problems of the form $\nHom{J_q}$ where $q$ is a prime power. Specifically, we show (Corollary~\ref{newcor}) that $\nHom{J_{p^k}}$ is AP-reducible to the weight enumerator of a linear code over the field~$\mathbb{F}_p$. \subsection{Previous Work} \label{sec:prev} We have already mentioned Hell and Ne{\v{s}}et{\v{r}}il's classic work~\cite{HN} on the complexity of the $H$-colouring decision problem. They showed that this problem is solvable in polynomial time if $H$ is bipartite, and that it is NP-complete otherwise. Our paper is concerned with the situation in which $H$ is an undirected graph (specifically, an undirected tree) but it is worth noting that the decision problem becomes much more complicated if $H$ is allowed to be a \emph{directed} graph. Indeed, Feder and Vardi showed~\cite{FV} that every constraint satisfaction problem (CSP) is equivalent to some digraph homomorphism problem. Despite much research, a complete dichotomy theorem for the digraph homomorphism decision problem is not known.
Bang-Jensen and Hell~\cite{BJH} had conjectured a dichotomy for the special case in which the digraph~$H$ has no sources and no sinks. This conjecture was proved in important recent work of Barto, Kozik and Niven~\cite{BKN}. Given the conjecture, Hell, Ne{\v{s}}et{\v{r}}il, and Zhu~\cite{HNZ} stated that ``digraphs with sources and sinks, and in particular oriented trees, seem to be the hard part of the problem.'' Gutjahr, Woeginger and Welzl~\cite{GWW} constructed a directed tree~$H$ such that determining whether a digraph~$G$ has a homomorphism to~$H$ is NP-complete. Of course, for some other trees, this problem is solvable in polynomial time. For example, they showed that it is solvable in polynomial time whenever $H$ is an oriented path (a path in which edges may go in either direction). Hell, Ne{\v{s}}et{\v{r}}il\ and Zhu~\cite{HNZ} constructed a whole family of directed trees for which the homomorphism decision problem is NP-hard, and studied the problem of characterising NP-hard trees by forbidden subtrees. The reader is referred to Hell and Ne{\v{s}}et{\v{r}}il's book~\cite{HNbook} and to their survey paper~\cite{HNsurvey} for more details about these decision problems. As mentioned in the introduction, there is already some existing work~\cite{DG, Kelk} on determining the complexity of exactly or approximately counting homomorphisms. This work is discussed in more detail elsewhere in this paper. The problem of sampling homomorphisms uniformly at random (or, in the weighted case, of sampling homomorphisms with probability proportional to their contributions to the partition function) is closely related to the approximate counting problem. We will later discuss some existing work~\cite{GKP} on the complexity of the homomorphism-sampling problem. First, we describe some related results on a particular approach to this problem, namely the application of the Markov chain Monte Carlo (MCMC) method.
Here the idea is to simulate a Markov chain whose states correspond to homomorphisms from~$G$ to~$H$. The chain will be constructed so that the probability of a particular homomorphism~$\sigma$ in the stationary distribution of the chain is proportional to the contribution of~$\sigma$ to the partition function. If the Markov chain is \emph{rapidly mixing} then it is possible to efficiently sample homomorphisms from a distribution that is very close to the appropriate distribution. This, in turn, leads to a good approximate counting algorithm~\cite{HColSampleCount}. First, Cooper, Dyer and Frieze~\cite{CDF} considered the unweighted problem. They showed that, for any non-trivial $H$, any Markov chain on $H$-colourings that changes the colours of up to some constant fraction of the vertices of~$G$ in a single step will have exponential mixing time (so will not lead to an efficient approximate counting algorithm). When $H$ is a tree with a self-loop on every vertex, they construct a weight function $w_H\colon V(H) \to \mathbb{Q}_{\geq 0}$ so that rapid mixing does occur for the special case of the weighted homomorphism problem in which every vertex $v$ of~$G$ has weight function $w_v=w_H$. Thus, their result gives an FPRAS for this special case of $\wHom{H}$. The slow-mixing results of \cite{CDF} have been extended in~\cite{BS} and in \cite{BCDT}. In particular, Borgs et al.~\cite{BCDT} considered the case in which $H$ is a rectangular subset of the hypercubic lattice, and constructed a weight function~$w_H$ for which quasi-local Markov chains (which change the colours of up to some constant fraction of the vertices in a small sublattice at each step) have slow mixing. \section{Preliminaries} \label{sec:prelim} This section brings together the main complexity-theoretic notions that are specific to the study of approximate counting problems. A more detailed account can be found in~\cite{APred}. 
A \emph{randomised approximation scheme\/} is an algorithm for approximately computing the value of a function~$f:\Sigma^*\rightarrow \mathbb{R}_{\geq 0}$. The approximation scheme has a parameter~$\varepsilon\in(0,1)$ which specifies the error tolerance. A \emph{randomised approximation scheme\/} for~$f$ is a randomised algorithm that takes as input an instance $x\in \Sigma^{\ast}$ (e.g., in the case of $\nHom{H}$, the input would be an encoding of a graph~$G$) and a rational error tolerance $\varepsilon \in(0,1)$, and outputs a rational number $z$ (a random variable depending on the ``coin tosses'' made by the algorithm) such that, for every instance~$x$, $\Pr \big[e^{-\varepsilon} f(x)\leq z \leq e^\varepsilon f(x)\big]\geq \tfrac{3}{4}$. We adopt the convention that~$z$ is represented as a pair of integers representing the numerator and the denominator. The randomised approximation scheme is said to be a \emph{fully polynomial randomised approximation scheme}, or \emph{FPRAS}, if it runs in time bounded by a polynomial in $|x|$ and $\varepsilon^{-1}$. As in~\cite{FerroPotts}, we say that a real number~$z$ is \emph{efficiently approximable} if there is an FPRAS for the constant function $f(x)=z$. Our main tool for understanding the relative difficulty of approximate counting problems is the \emph{approximation-preserving reduction}. We use the notion of approximation-preserving reduction from Dyer et al.~\cite{APred}. Suppose that $f$ and $g$ are functions from $\Sigma^{\ast }$ to~$\mathbb{R}_{\geq 0}$. An AP-reduction from~$f$ to~$g$ gives a way to turn an FPRAS for~$g$ into an FPRAS for~$f$. The actual definition in~\cite{APred} applies to functions whose outputs are natural numbers. The generalisation that we use here follows McQuillan~\cite{McQuillan}. An {\it approximation-preserving reduction\/} (AP-reduction) from $f$ to~$g$ is a randomised algorithm~$\mathcal{A}$ for computing~$f$ using an oracle for~$g$.
The algorithm~$\mathcal{A}$ takes as input a pair $(x,\varepsilon)\in\Sigma^*\times(0,1)$, and satisfies the following three conditions: (i)~every oracle call made by~$\mathcal{A}$ is of the form $(w,\delta)$, where $w\in\Sigma^*$ is an instance of~$g$, and $\delta \in (0,1)$ is an error bound satisfying $\delta^{-1}\leq\mathop{\mathrm{poly}}(|x|, \varepsilon^{-1})$; (ii) the algorithm~$\mathcal{A}$ meets the specification for being a randomised approximation scheme for~$f$ (as described above) whenever the oracle meets the specification for being a randomised approximation scheme for~$g$; and (iii)~the run-time of~$\mathcal{A}$ is polynomial in $|x|$ and $\varepsilon^{-1}$ and the bit-size of the values returned by the oracle. If an approximation-preserving reduction from $f$ to~$g$ exists we write $f\leq_\mathrm{AP} g$, and say that {\it $f$ is AP-reducible to~$g$}. Note that if $f\leq_\mathrm{AP} g$ and $g$ has an FPRAS then $f$ has an FPRAS\null. (The definition of AP-reduction was chosen to make this true.) If $f\leq_\mathrm{AP} g$ and $g\leq_\mathrm{AP} f$ then we say that {\it $f$ and $g$ are AP-interreducible}, and write $f\equiv_\mathrm{AP} g$. A word of warning about terminology: the notation $\leq_\mathrm{AP}$ has been used (see, e.g., \cite{CrescenziGuide}) to denote a different type of approximation-preserving reduction which applies to optimisation problems. We will not study optimisation problems in this paper, so hopefully this will not cause confusion. Dyer et al.~\cite{APred} studied counting problems in \#P and identified three classes of counting problems that are interreducible under approx\-imation-preserving reductions. The first class contains the problems that have an FPRAS; these are trivially AP-interreducible, since all the work can be embedded into the reduction (which declines to use the oracle).
The second class is the set of problems that are AP-interreducible with \textsc{\#Sat}, the problem of counting satisfying assignments to a Boolean formula in CNF\null. Zuckerman~\cite{zuckerman} has shown that \textsc{\#Sat}{} cannot have an FPRAS unless $\mathrm{RP}=\mathrm{NP}$. The same is obviously true of any problem to which \textsc{\#Sat}{} is AP-reducible. The third class appears to be of intermediate complexity. It contains all of the counting problems expressible in a certain logically-defined complexity class, $\text{\#RH$\Pi_1$}$. Typical complete problems include counting the downsets in a partially ordered set~\cite{APred}, computing the partition function of the ferromagnetic Ising model with local external magnetic fields~\cite{Ising}, and counting the independent sets in a bipartite graph; the last of these is defined as follows. \begin{description} \item[Problem] $\textsc{\#BIS}$. \item[Instance] A bipartite graph $G$. \item[Output] The number of independent sets in $G$. \end{description} In \cite{APred} it was shown that $\textsc{\#BIS}$ is complete for the logically-defined complexity class $\text{\#RH$\Pi_1$}$ with respect to approximation-preserving reductions. We conjecture~\cite{FerroPotts} that there is no FPRAS for $\textsc{\#BIS}$. A problem that is closely related to approximate counting is the problem of sampling configurations almost uniformly at random. The analogue of an FPRAS in the context of sampling problems is the PAUS, or \emph{Polynomial Almost Uniform Sampler}. Goldberg, Kelk, and Paterson~\cite{GKP} have studied the problem of sampling $H$-colourings almost uniformly at random. They gave a hardness result for every fixed tree~$H$ that is not a star. In particular, their theorem \cite[Theorem 2]{GKP} shows that there is no PAUS for sampling $H$-colourings unless $\textsc{\#BIS}$ has an FPRAS. In general, there is a close connection between approximate counting and almost-uniform sampling.
Indeed, in the presence of a technical condition called ``self-reducibility'', the counting and sampling variants of a problem are interreducible~\cite{JVV}. The weighted problem $\wHom{H}$ is self-reducible, so the result of~\cite{GKP} immediately gives an AP-reduction from~$\textsc{\#BIS}$ to $\wHom{H}$ for every tree~$H$ that is not a star. However, it is not known whether the unweighted problem~$\nHom{H}$ is self-reducible. As mentioned in Section~\ref{sec:prev}, the paper \cite{HColSampleCount} shows how to turn a PAUS for $H$-colourings into an FPRAS for $\nHom{H}$, but it is not known whether there is a reduction in the other direction. Thus, we cannot directly apply the hardness result of~\cite{GKP} to reduce~$\textsc{\#BIS}$ to~$\nHom{H}$. However, we will see in the next section that the complexity gap between problems with an FPRAS and those that are $\textsc{\#BIS}$-equivalent still holds for $\nHom{H}$ in the special case when $H$ is a tree, which is the focus of this paper. \section{Weighted tree homomorphisms} \label{sec:weighted} First, we introduce some notation and a few graphs that are of special interest. In this paper, the graphs that we consider are undirected and simple --- they do not have self-loops or multiple edges between vertices. For every positive integer~$n$, let $[n]$ denote $\{1,2,\ldots,n\}$. We use $\Gamma_H(v)$ to denote the set of neighbours of vertex~$v$ in graph~$H$ and we use $d_H(v)$ to denote the degree of~$v$, which is~$|\Gamma_H(v)|$. Let $P_n$ be the $n$-vertex path (with $n-1$ edges). An $n$-leaf \emph{star} is the complete bipartite graph $K_{1,n}$. Let $J_q$ be the graph with vertex set $$V(J_q) = \{w\} \cup \{c_i\mid i\in[q]\} \cup \{c'_i\mid i\in[q]\},$$ and edge set $$ E(J_q) = \{(c_i,c'_i) \mid i\in[q]\} \cup \{(c'_i,w)\mid i\in[q]\}.$$ $J_3$ is depicted in Figure~\ref{fig:J3}.
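For concreteness, the edge list of $J_q$ can be generated directly from this definition. The following sketch is only an illustration (the vertex labels \texttt{c1}, \texttt{cp1}, \texttt{w}, etc.\ are ours, with \texttt{cp$i$} standing for $c'_i$); it confirms the counts: $J_q$ has $2q+1$ vertices and $2q$ edges (a connected graph with these counts is a tree), and $w$ has degree~$q$.

```python
def build_Jq(q):
    """Edge list of J_q: vertices w, c_1..c_q, c'_1..c'_q;
    edges (c_i, c'_i) and (c'_i, w) for every i in [q]."""
    edges = [("c%d" % i, "cp%d" % i) for i in range(1, q + 1)]
    edges += [("cp%d" % i, "w") for i in range(1, q + 1)]
    return edges

J3 = build_Jq(3)
verts = {v for edge in J3 for v in edge}
deg_w = sum(1 for edge in J3 if "w" in edge)  # exact element membership
print(len(verts), len(J3), deg_w)   # 7 6 3
```

Each $c_i$ is a leaf at distance two from the centre~$w$, which is exactly the shape drawn in Figure~\ref{fig:J3} for $q=3$.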
\begin{figure} \caption{The tree $J_3$.} \label{fig:J3} \end{figure} \subsection{Stars} As Dyer and Greenhill observed \cite[Lemma 4.1]{DG}, $\nHom{H}$ is in $\mathrm{FP}$ if $H$ is a complete bipartite graph. We now show that $\wHom{H}$ is also in $\mathrm{FP}$ in this case. Suppose that $H$ is a complete bipartite graph with bipartition $(U,U')$ where $U=\{u_1,\ldots,u_h\}$ and $U'=\{u'_1,\ldots,u'_{h'}\}$. Let $G$ be an input to $\wHom{H}$ with connected components $G^1,\ldots,G^\kappa$. Clearly, $Z_H(G)=\prod_{i=1}^\kappa Z_{H}(G^i)$. Also, if $G^i$ is non-bipartite then $Z_{H}(G^i)=0$. Suppose that $G^i$ is a connected bipartite graph with bipartition $(V,V')$ where $V=\{v_1,\ldots,v_n\}$ and $V'=\{v'_1,\ldots,v'_{n'}\}$. Then $$Z_{H}(G^i) = \prod_{j=1}^n\sum_{c=1}^{h} w_{v_j}(u_c) \prod_{j'=1}^{n'}\sum_{c'=1}^{h'} w_{v'_{j'}}(u'_{c'}) + \prod_{j=1}^{n'}\sum_{c=1}^{h} w_{v'_{j}}(u_c) \prod_{j'=1}^{n}\sum_{c'=1}^{h'} w_{v_{j'}}(u'_{c'}) .$$ In the context of this paper, where $H$ is a tree, we can draw the following conclusion. \begin{observation} \label{obs:star}\label{obs:triv} Suppose that $H$ is a star. Then $\wHom{H}$ is in $\mathrm{FP}$. \end{observation} \subsection{Trees with intermediate complexity} The purpose of this section is to prove Lemma~\ref{lem:intermediate}, which says that if $H$ is a tree that is not a star and has no induced~$J_3$ then $\textsc{\#BIS} \equiv_\mathrm{AP}\nHom{H}$ and $\textsc{\#BIS}\equiv_\mathrm{AP} \wHom{H}$. The main work of the section is in the proof of Lemma~\ref{lem:intermediate}, but first we need some existing results. In particular, Lemma~\ref{lem:kelk} below is due to Kelk, and Lemma~\ref{lem:CSP} is an easy consequence of earlier work by the authors and their coauthors on counting CSPs.
We have chosen to include a proof sketch of the former because the work of Kelk is unpublished~\cite{Kelk} and a proof of the latter because we did not state or prove it explicitly in earlier work, and it might be rather difficult for the reader to see why it is implied by that work. If $H$ is a tree with no induced~$P_4$ then it is a star, so, by Observation~\ref{obs:triv}, $\wHom{H}$ is in $\mathrm{FP}$. On the other hand, the following lemma shows that if $H$ contains an induced~$P_4$ then even the unweighted problem $\nHom{H}$ is $\textsc{\#BIS}$-hard. To motivate the lemma, suppose that $H$ contains an induced~$P_4$. Then it is a bipartite graph which is not complete, so by Goldberg et al.~\cite[Theorem 2]{GKP} the (uniform) sampling problem for $H$-colourings of a graph is as hard as the sampling problem for independent sets in a bipartite graph. This is not quite the result we are seeking, but it is close in spirit, given the close connection between sampling and approximate counting. The following lemma, which is a special case of \cite[Lemma 2.19]{Kelk}, is exactly what we need. \begin{lemma} [Kelk] \label{lem:kelk} Let $H$ be a tree containing an induced~$P_4$. Then $$\textsc{\#BIS} \leq_\mathrm{AP} \nHom{H}.$$ \end{lemma} \begin{proof}[Proof sketch] We will not give a complete proof of Lemma~\ref{lem:kelk} since it is a special case of a lemma of Kelk, but here is a sketch to give the reader a high-level idea of the construction. Let $\Delta$ be the maximum degree of vertices of~$H$ and let $\Delta'\leq \Delta$ be the maximum degree taken by a neighbour of a degree-$\Delta$ vertex in~$H$. Note that $\Delta'\geq2$ since $H$ cannot be a star. Let $(c,c')$ be any edge in~$H$ with $d_H(c)=\Delta$ and $d_H(c')=\Delta'$. Let $N_c$ be the set $\Gamma_H(c)-\{c'\}$ and let $N_{c'} = \Gamma_H(c')-\{c\}$. Since $H$ is a tree, there are no edges in $H$ between $N_c$ and $N_{c'}$.
Now consider a connected instance $G$ of $\textsc{\#BIS}$ with bipartition $V(G)=(V,V')$. Let $G'$ be the bipartite graph with vertex set $V(G)\cup \{C,C'\}$ (where~$C$ and~$C'$ are new vertices that are not in $V(G)$) and edge set $E(G) \cup \{(C,C')\} \cup \{C\}\times V' \cup \{C'\} \times V$. Consider an $H$-colouring $\sigma$ of~$G$ with $\sigma(C)=c$ and $\sigma(C')=c'$. (Standard constructions can be used to augment $G'$ so that almost all homomorphisms to $H$ have this property.) For every vertex $v\in V$, $\sigma(v) \in N_{c'} \cup \{c\}$ and for every vertex $v'\in V'$, $\sigma(v') \in N_c \cup \{c'\}$. Also, $\{v \in V \mid \sigma(v)\in N_{c'} \} \cup \{ v'\in V' \mid \sigma(v')\in N_c\}$ is an independent set of~$G$. Thus, there is an injection from independent sets of~$G$ into these $H$-colourings of~$G'$. Standard tricks can be used to adjust the construction so that almost all of the homomorphisms correspond to \emph{maximum} independent sets of~$G$ and so that all maximum independent sets correspond to approximately the same number of homomorphisms. The proof follows from the fact that counting maximum independent sets in a bipartite graph is equivalent to~$\textsc{\#BIS}$~\cite{APred}. \end{proof} As mentioned above, the main result of this section is Lemma~\ref{lem:intermediate}, which will be presented below. Its proof relies on earlier work on counting \emph{constraint satisfaction problems} (CSPs). Suppose that $x$ and $x'$ are Boolean variables. An assignment $\sigma: \{x,x'\}\to \{0,1\}$ is said to satisfy the implication constraint $\mathrm{IMP}(x,x')$ if $(\sigma(x),\sigma(x'))$ is in $\{ (0,0),(0,1),(1,1)\}$. The idea is that ``$\sigma(x)=1$'' implies ``$\sigma(x')=1$''. The assignment~$\sigma$ is said to satisfy the ``pinning'' constraint $\delta_0(x)$ if $\sigma(x)=0$ and the pinning constraint $\delta_1(x)$ if $\sigma(x)=1$.
If $X$ is a set of Boolean variables then a set~$C$ of $\{\mathrm{IMP},\delta_0,\delta_1\}$ constraints on~$X$ is a set of constraints of the form $\delta_0(x)$, $\delta_1(x)$ and $\mathrm{IMP}(x,x')$ for $x$ and $x'$ in $X$. The set $S(X,C)$ of \emph{satisfying assignments} is the set of all assignments $\sigma: X \to \{0,1\}$ which simultaneously satisfy all of the constraints in~$C$. We will consider the following computational problem. \begin{description} \item[Problem] $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$. \item[Instance] A set $X$ of Boolean variables and a set $C$ of $\{\mathrm{IMP},\delta_0,\delta_1\}$ constraints on~$X$. \item[Output] $|S(X,C)|$. \end{description} We will also consider the following weighted version of $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$. Suppose, for each $x\in X$, that $\gamma_x:\{0,1\} \rightarrow \mathbb{Q}_{> 0}$ is a weight function. For an indexed set $\gamma(X) = \{\gamma_x \mid x\in X\}$ of weight functions, let $$Z(X,C,\gamma) = \sum_{\sigma\in S(X,C)} \prod_{x\in X} \gamma_x(\sigma(x)).$$ \begin{description} \item[Problem] $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. \item[Instance] A set $X$ of Boolean variables, a set $C$ of $\{\mathrm{IMP},\delta_0,\delta_1\}$ constraints on $X$, and an indexed set $\gamma(X)$ of weight functions. \item[Output] $Z(X,C,\gamma)$. \end{description} We will use the following lemma, which follows from earlier work on counting CSPs. \begin{lemma} \label{lem:CSP} $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1) \equiv_\mathrm{AP} \textsc{\#BIS}$. \end{lemma} \begin{proof} Dyer, Goldberg, and Jerrum~\cite[Theorem 3]{trichotomy} showed that $\textsc{\#CSP}(\mathrm{IMP},\delta_0, \delta_1) \equiv_\mathrm{AP} \textsc{\#BIS}$. $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$ trivially reduces to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ since it is a special case.
Thus, it suffices to give an AP-reduction from $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ to $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$. The idea behind the construction that we use comes from Bulatov et al.~\cite[Lemma 36, Item~(i)]{LSM}. We give the details in order to translate the construction into the current context. Let $(X,C,\gamma)$ be an instance of $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. We can assume without loss of generality that all of the weights $\gamma_x(b)$ are positive integers by multiplying all of the weights by the product of the denominators. The construction that follows is not difficult but the details are a little bit complicated, so we use the following running example to illustrate. Let $X=\{y,z\}$, $C = \{\mathrm{IMP}(y,z)\}$, $\gamma_y(0)=5$, $\gamma_y(1) = 2$, $\gamma_z(0)=1$ and $\gamma_z(1)=1$. For every variable $x\in X$, consider the weight function~$\gamma_x$. Let $k_x = \max(\lceil \lg \gamma_x(0) \rceil ,\lceil \lg \gamma_x(1) \rceil)$. For every $b\in \{0,1\}$, write the bit-expansion of $\gamma_x(1\oplus b)$ as $$ \gamma_x(1\oplus b) = a_{x,b,0} + a_{x,b,1} 2^1 + \cdots + a_{x,b,k_{x}} 2^{k_{x}},$$ where each $a_{x,b,i}\in \{0,1\}$. Note that $\gamma_x(1\oplus b)>0$ so there is at least one~$i$ with $a_{x,b,i}=1$. Let $\min_{x,b} = \min\{i \mid a_{x,b,i}=1\}$ and $\max_{x,b} = \max \{ i \mid a_{x,b,i} = 1\}$. If $i<\max_{x,b}$ and $a_{x,b,i}=1$ then let $\text{next}_{x,b,i} = \min\{j>i \mid a_{x,b,j}=1\}$. If $i>\min_{x,b}$ and $a_{x,b,i}=1$ then let $\text{prev}_{x,b,i} = \max\{j<i \mid a_{x,b,j}=1\}$. For the running example, \begin{itemize} \item $k_y= \lceil \lg 5 \rceil = 3$ and $k_z = \lceil \lg 1 \rceil =0$. \item For the variable $y$, taking $b=0$ we have $\gamma_y(1\oplus 0) = 2^1$ so $a_{y,0,0}=0$, $a_{y,0,1}=1$, and $a_{y,0,2}= a_{y,0,3}=0$. Also, $\min_{y,0}=1=\max_{y,0}$.
\item Similarly, taking $b=1$ gives $\gamma_y(1\oplus 1) = 2^0+2^2$ so $a_{y,1,0}=1$, $a_{y,1,1}=0$, $a_{y,1,2}=1$ and $a_{y,1,3}=0$. Thus $\min_{y,1}=0$ and $\max_{y,1}=2$. Then $\text{next}_{y,1,0}=2$ and $\text{prev}_{y,1,2}=0$. \item Finally, for the variable $z$ and $b\in \{0,1\}$, we have $\gamma_z(1\oplus b)=2^0$ so $a_{z,b,0}=1$ and $\min_{z,b}=0=\max_{z,b}$. \end{itemize} Now for every $x\in X$, for every $ i \in \{1,\ldots,k_x\}$ and every $b\in\{0,1\}$ with $a_{x,b,i}=1$ let $A_{x,b,i}$ be the set of $i+2$ variables $\{x_{b,i,1},\ldots,x_{b,i,i}\} \cup \{ L_{x,b,i},R_{x,b,i}\}$. Let $C_{x,b,i}$ be the set of implication constraints $\bigcup_{j\in[i]} \{\mathrm{IMP}(L_{x,b,i},x_{b,i,j}),\mathrm{IMP}(x_{b,i,j},R_{x,b,i})\}$. Note that there are $2^i+2$ satisfying assignments to the $\textsc{\#CSP}$ instance $(A_{x,b,i},C_{x,b,i})$: one with $\sigma(L_{x,b,i})=\sigma(R_{x,b,i})=0$, one with $\sigma(L_{x,b,i})=\sigma(R_{x,b,i})=1$, and $2^i$ with $\sigma(L_{x,b,i})=0$ and $\sigma(R_{x,b,i})=1$. The point here is that the sets $A_{x,b,i}$ will be combined for different values of~$i$. The satisfying assignments with $\sigma(L_{x,b,i})=\sigma(R_{x,b,i})=0$ will correspond to contributions from a different index $i'>i$ and the satisfying assignments with $\sigma(L_{x,b,i})=\sigma(R_{x,b,i})=1$ will correspond to contributions from a different index $i'<i$. There are exactly $2^i$ satisfying assignments with $\sigma(L_{x,b,i})=0$ and $\sigma(R_{x,b,i})=1$ and these will correspond to the $a_{x,b,i} 2^i$ summand in the bit-expansion of $\gamma_x(1\oplus b)$. For the running example, \begin{itemize} \item for the variable $y$ and for $b=0$ and $i=1$ we have $A_{y,0,1} = \{y_{0,1,1} \} \cup \{ L_{y,0,1},R_{y,0,1}\}$. Then $C_{y,0,1}$ contains $ \{\mathrm{IMP}(L_{y,0,1},y_{0,1,1}),\mathrm{IMP}(y_{0,1,1},R_{y,0,1})\}$ and there are $2+2^1=4$ solutions. 
\item For the variable $y$ and for $b=1$ and $i=2$ we have $A_{y,1,2} = \{y_{1,2,1}, y_{1,2,2}\} \cup \{ L_{y,1,2},R_{y,1,2}\}$. Then $C_{y,1,2}$ contains the constraints $\mathrm{IMP}(L_{y,1,2},y_{1,2,1})$, $\mathrm{IMP}(y_{1,2,1},R_{y,1,2})$, $\mathrm{IMP}(L_{y,1,2},y_{1,2,2})$, and $\mathrm{IMP}(y_{1,2,2},R_{y,1,2})$ and there are $2+2^2=6$~solutions. \end{itemize} We now add some constraints corresponding to the $i=0$ case above. For every $x\in X$ and every $b\in \{0,1\}$ with $a_{x,b,0}=1$ let $A_{x,b,0}$ be the set of variables $\{ L_{x,b,0},R_{x,b,0}\}$. Let $C_{x,b,0}$ be the set containing the constraint $\mathrm{IMP}(L_{x,b,0},R_{x,b,0})$. Note that there are $2^0+2=3$ satisfying assignments to the $\textsc{\#CSP}$ instance $(A_{x,b,0},C_{x,b,0})$: one with $\sigma(L_{x,b,0})=\sigma(R_{x,b,0})=0$, one with $\sigma(L_{x,b,0})=\sigma(R_{x,b,0})=1$, and $2^0=1$ with $\sigma(L_{x,b,0})=0$ and $\sigma(R_{x,b,0})=1$. For the running example, \begin{itemize} \item $A_{y,1,0} = \{ L_{y,1,0},R_{y,1,0}\}$ and $C_{y,1,0} = \{ \mathrm{IMP}(L_{y,1,0},R_{y,1,0})\}$. \item For $b\in \{0,1\}$, $A_{z,b,0} = \{ L_{z,b,0},R_{z,b,0}\}$ and $C_{z,b,0} = \{ \mathrm{IMP}(L_{z,b,0},R_{z,b,0})\}$. \end{itemize} Now for every $x\in X$ and $b\in\{0,1\}$ let $C'_{x,b}$ be the set of constraints forcing equality of $\sigma(R_{x,b,i})$ and $\sigma(L_{x,b,j})$ when $i$ and $j$ are adjacent one-bits in the bit-expansion of $\gamma_x(1\oplus b)$. In particular, $$C'_{x,b} = \bigcup_{\text{next}_{x,b,i}=j, \text{prev}_{x,b,j}=i} \{ \mathrm{IMP}(R_{x,b,i},L_{x,b,j}), \mathrm{IMP}(L_{x,b,j},R_{x,b,i}) \} $$ For the running example, \begin{itemize} \item $C'_{y,0} = C'_{z,0} = C'_{z,1} = \emptyset$ since these variables have only one positive coefficient in the bit expansion. \item For the variable $y$ and $b=1$ the relevant non-zero coefficients are $i=0$ and $j=2$ so we get $$C'_{y,1} = \{ \mathrm{IMP}(R_{y,1,0},L_{y,1,2}), \mathrm{IMP}(L_{y,1,2},R_{y,1,0}) \}. 
$$ \end{itemize} Now consider $x\in X$. Let $C''_{x,0}=C'_{x,0} \cup \{\delta_0(L_{x,0,\min_{x,0}})\}$ and let $C''_{x,1} = C'_{x,1} \cup \{\delta_1(R_{x,1,\max_{x,1}})\}$. For $x\in X$ and $b\in \{0,1\}$ let $$A_{x,b} = \bigcup_{i\in \{0,\ldots,k_x\}, a_{x,b,i}=1} A_{x,b,i}$$ and let $$C_{x,b} = C''_{x,b} \cup \bigcup_{i\in\{0,\ldots,k_x\}, a_{x,b,i}=1} C_{x,b,i}. $$ We now show that there are $\gamma_x(1)$ satisfying assignments to the $\textsc{\#CSP}$ instance $(A_{x,0},C_{x,0})$ which have the property that $\sigma(R_{x,0,\max_{x,0}})=1$ and one satisfying assignment in which $\sigma(R_{x,0,\max_{x,0}})=0.$ To see this, note that the constraint $\delta_0(L_{x,0,\min_{x,0}})$ forces $\sigma(L_{x,0,\min_{x,0}})=0$. If $\sigma(R_{x,0,\max_{x,0}})=0$ then all of the variables in $A_{x,0}$ are assigned spin~$0$ by~$\sigma$. Otherwise, there is exactly one~$i$ with $a_{x,0,i}=1$ and $\sigma(L_{x,0,i})=0$ and $\sigma(R_{x,0,i})=1$. As we noted above, there are $2^i$ assignments to the variables in~$A_{x,0,i}$. But $\sum_{i: a_{x,0,i}=1} 2^i = \gamma_x(1)$, as required. Similarly, there are $\gamma_x(0)$ satisfying assignments to the $\textsc{\#CSP}$ instance $(A_{x,1},C_{x,1})$ in which $\sigma(L_{x,1,\min_{x,1}})=0$ and there is one satisfying assignment in which $\sigma(L_{x,1,\min_{x,1}})=1.$ Let us quickly apply this to the running example. \begin{itemize} \item Taking variable $y$ and $b=0$ we have $A_{y,0} = A_{y,0,1}$ and $C_{y,0} = \{\delta_0(L_{y,0,1})\} \cup C_{y,0,1}$. Then $\max_{y,0}=1$. From above, there is one solution~$\sigma$ with $\sigma(R_{y,0,\max_{y,0}})=0$ and there are $2^1=\gamma_y(1)$ solutions $\sigma$ with $\sigma(R_{y,0,\max_{y,0}})=1$. \item Taking variable $y$ and $b=1$ we have $$A_{y,1} = A_{y,1,0} \cup A_{y,1,2}$$ and $$C_{y,1} = \{ \delta_1(R_{y,1,2}), \mathrm{IMP}(R_{y,1,0},L_{y,1,2}), \mathrm{IMP}(L_{y,1,2},R_{y,1,0}) \} \cup C_{y,1,0} \cup C_{y,1,2} .$$ There is one solution $\sigma$ with $\sigma(L_{y,1,0})=1$.
There are $2^0+2^2=\gamma_y(0)$ solutions $\sigma$ with $\sigma(L_{y,1,0})=0$. \item Taking variable $z$ we have $A_{z,b} = A_{z,b,0} = \{L_{z,b,0},R_{z,b,0}\}$. Then, taking $b=0$, $C_{z,0} = \{ \delta_0(L_{z,0,0}),\mathrm{IMP}(L_{z,0,0},R_{z,0,0})\}$, so there is $2^0=1=\gamma_z(1)$ assignment with $\sigma(R_{z,0,0})=1$ and one with $\sigma(R_{z,0,0})=0$. Taking $b=1$, $C_{z,1} = \{\delta_1(R_{z,1,0}),\mathrm{IMP}(L_{z,1,0},R_{z,1,0})\}$ so there is $2^0=1=\gamma_z(0)$ assignment with $\sigma(L_{z,1,0})=0$ and one with $\sigma(L_{z,1,0})=1$. \end{itemize} Finally, consider $x\in X$. Let $C_x$ be the set of constraints containing the four implications $\mathrm{IMP}(x,R_{x,0,\max_{x,0}})$, $\mathrm{IMP}(R_{x,0,\max_{x,0}},x)$, $\mathrm{IMP}(x,L_{x,1,\min_{x,1}})$, and $\mathrm{IMP}(L_{x,1,\min_{x,1}},x)$. Now there are $\gamma_x(1)$ solutions to $(A_{x,0} \cup A_{x,1} \cup \{x\},C_{x,0} \cup C_{x,1} \cup C_x)$ with $\sigma(x)=1$ and $\gamma_x(0)$ solutions with $\sigma(x)=0$. Thus, we have simulated the weight function $\gamma_x$ with $\{\mathrm{IMP},\delta_0,\delta_1\}$ constraints. For the running example, \begin{itemize} \item first consider the variable $y$. \begin{itemize} \item With $\sigma(y)=1$ the constraints in $C_y$ force $\sigma(R_{y,0,\max_{y,0}})=1$ which, from above, gives $\gamma_y(1)$ solutions to $(A_{y,0},C_{y,0})$. The constraints in $C_y$ also force $\sigma(L_{y,1,\min_{y,1}})=1$, which, from above, gives one solution to $(A_{y,1},C_{y,1})$. \item With $\sigma(y)=0$ the constraints in $C_y$ force $\sigma(R_{y,0,\max_{y,0}})=0$ so there is only one solution to $(A_{y,0},C_{y,0})$. The constraints in $C_y$ also force $\sigma(L_{y,1,\min_{y,1}})=0$ so there are $\gamma_y(0)$ solutions to $(A_{y,1},C_{y,1})$. \end{itemize} \item The argument for variable~$z$ is similar.
\end{itemize} Thus, the correct output for the $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ instance $(X,C,\gamma)$ is the same as the correct output for the $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$ instance obtained from $(X,C,\gamma)$ by adding new variables and constraints to simulate each weight function $\gamma_x$. \end{proof} We can now prove the main lemma of this section. \begin{lemma} \label{lem:intermediate} Suppose that $H$ is a tree which is not a star and which has no induced~$J_3$. Then $$\textsc{\#BIS} \equiv_\mathrm{AP}\nHom{H} \mbox{ and } \textsc{\#BIS}\equiv_\mathrm{AP} \wHom{H}.$$ \end{lemma} \begin{proof} $\nHom{H}$ is a special case of $\wHom{H}$ so it is certainly AP-reducible to $\wHom{H}$. By Lemma~\ref{lem:kelk}, $\textsc{\#BIS}$ is AP-reducible to $\nHom{H}$ and therefore it is AP-reducible to $\wHom{H}$. So it suffices to give an AP-reduction from $\wHom{H}$ to $\textsc{\#BIS}$. Applying Lemma~\ref{lem:CSP}, it suffices to give an AP-reduction from $\wHom{H}$ to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. In order to do the reduction, we will order the vertices of~$H$ using the fact that it has no induced~$J_3$. (This ordering is similar to the one arising from the ``crossing property'' of the authors that is mentioned in \cite[Section 7.3.3]{Kelk}.) A ``convex ordering'' of a connected bipartite graph with bipartition $(U,U')$ with $|U|=h$ and $|U'|=h'$ and edge set $E\subseteq U\times U'$ is a pair of bijections $\pi:U \rightarrow [h]$ and $\pi':U' \rightarrow [h']$ such that there are monotonically non-decreasing functions $m:[h]\to[h']$, $M:[h]\to[h']$, $m':[h']\to[h]$ and $M':[h']\to[h]$ satisfying the following conditions. \begin{itemize} \item If $\pi(u)=i$ then $\{\pi'(u') \mid (u,u')\in E \} = \{ \ell \in [h'] \mid m(i) \leq \ell \leq M(i)\}$. \item If $\pi'(u')=i$ then $\{\pi(u) \mid (u,u')\in E \} = \{ \ell \in [h] \mid m'(i) \leq \ell \leq M'(i)\}$.
\end{itemize} The purpose of~$\pi$ and~$\pi'$ is just to put the vertices in the correct order. For example, in Figure~\ref{fig:referee}, \begin{figure} \caption{An example of a convex ordering} \label{fig:referee} \end{figure} $\pi$ is the identity map on the set $U=\{1,2,3,4\}$ and $\pi'$ is the identity map on the set $U'=\{1,2,3\}$. Vertex~$3$ in~$U$ is connected to the sequence containing vertices $1$, $2$ and $3$ in~$U'$, so $m(3)=1$ and $M(3)=3$. Every other vertex in~$U$ has degree~$1$ and in particular $m(1)=M(1)=1$, $m(2)=M(2)=1$ and $m(4)=M(4)=3$. Similarly, vertex~$1$ in~$U'$ is attached to the sequence containing vertices $1$, $2$ and $3$ in $U$ so $m'(1)=1$ and $M'(1)=3$, while $m'(2)=M'(2)=3$, $m'(3)=3$ and $M'(3)=4$. To see that a convex ordering of~$H$ always exists, consider the following algorithm. The input is a tree~$H$ with no induced~$J_3$, a bipartition $(U,U')$ of the vertices of~$H$, and a distinguished leaf~$u\in U$ whose parent~$u'$ is adjacent to at most one non-leaf. (Note that such a leaf~$u$ always exists since $H$ is a tree.) The output is a convex ordering of~$H$ in which $\pi(u)=h$ and $\pi'(u')=h'$. Here is what the algorithm does. If all of the neighbours of~$u'$ are leaves, then $h'=1$ so take any bijection $\pi$ from $U-\{u\}$ to $[h-1]$ and set $\pi(u)=h$ and $\pi'(u')=h'$. Return this output. Otherwise, let $u''$ be the neighbour of $u'$ that is not a leaf. Let $H'$ be the graph formed from $H$ by removing all of the $d_H(u')-1$ neighbours of $u'$ other than $u''$. Since $H$ has no induced $J_3$, the graph $H'$ has the following property: $u'$ is a leaf whose parent, $u''$, is adjacent to at most one non-leaf. Recursively, construct a convex ordering for $H'$ in which $\pi'(u')=h'$ and $\pi(u'')=h-(d_H(u')-1)$. Extend $\pi$ by assigning values to the leaf-neighbours of~$u'$, ensuring that $\pi(u)=h$. We will now show how to reduce $\wHom{H}$ to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$.
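As a sanity check, the interval bounds in the example of Figure~\ref{fig:referee} can be recomputed mechanically. The short Python sketch below (the edge list is our encoding of the figure, and the helper names are ours, for illustration only) reads off $m$, $M$, $m'$ and $M'$ from the edge set and verifies that every neighbourhood is a contiguous interval:

```python
# Recompute the interval bounds m, M, m', M' for the example convex
# ordering.  The vertex labels already equal pi and pi' (the identity
# maps), so each neighbourhood must be a contiguous interval of integers.
E = [(1, 1), (2, 1), (3, 1), (3, 2), (3, 3), (4, 3)]  # pairs (u, u')

def interval(neighbours):
    """Return (min, max) if `neighbours` is a contiguous interval of
    integers, and fail otherwise."""
    lo, hi = min(neighbours), max(neighbours)
    assert neighbours == set(range(lo, hi + 1)), "neighbourhood not convex"
    return (lo, hi)

U = sorted({u for u, _ in E})
Uprime = sorted({up for _, up in E})

# (m(i), M(i)) for each i in U, and (m'(i'), M'(i')) for each i' in U'.
m_M = {u: interval({up for (v, up) in E if v == u}) for u in U}
mp_Mp = {up: interval({v for (v, w) in E if w == up}) for up in Uprime}

print(m_M)
print(mp_Mp)
```

The same check rejects any labelling under which some neighbourhood fails to be an interval, which is exactly the convexity requirement.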
Let $G$ be a connected bipartite graph with bipartition $(V,V')$ and let $W(G,H)$ be an indexed set of weight functions. Let $$Z'_{H}(G,W(G,H)) = \sum_{\sigma\in \Hom GH\text{ with $\sigma(V)\subseteq U$}}\, \prod_{v\in V(G)} w_v(\sigma(v))$$ and let $$Z''_{H}(G,W(G,H)) = \sum_{\sigma\in \Hom GH\text{ with $\sigma(V)\subseteq U'$}}\, \prod_{v\in V(G)} w_v(\sigma(v)).$$ Clearly, $Z_{H}(G,W(G,H)) = Z'_{H}(G,W(G,H))+Z''_{H}(G,W(G,H))$. We will show how to reduce the computation of $Z'_{H}(G,W(G,H))$, given the input $(G,W(G,H))$, to the problem $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. In the same way, we can reduce the computation of $Z''_{H}(G,W(G,H))$ to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. Since we are considering assignments which map $V$ to $U$ and $V'$ to $U'$, the vertices in~$U$ will not get mixed up with the vertices in~$U'$. We can simplify the notation by relabelling the vertices so that $\pi$ and~$\pi'$ are the identity permutations. Then, given the convex ordering property, we can assume that $U=[h]$ and that $U'=[h']$ and that we have monotonically non-decreasing functions $m:[h]\to[h']$, $M:[h]\to[h']$, $m':[h']\to[h]$ and $M':[h']\to[h]$ such that \begin{itemize} \item for $i\in U$, $\Gamma_H(i) = \{ \ell \in [h'] \mid m(i) \leq \ell \leq M(i)\}$, and \item for $i\in U'$, $\Gamma_H(i) = \{ \ell \in [h] \mid m'(i) \leq \ell \leq M'(i)\}$. \end{itemize} A configuration $\sigma$ contributing to $Z'_{H}(G,W(G,H))$ is a map from $V$ to $[h]$ together with a map from $V'$ to $[h']$ such that the following is true for every edge $(v,v')\in V\times V'$. \begin{enumerate}[(1)] \item \label{one} $m({\sigma(v)}) \leq \sigma(v') \leq M({\sigma(v)})$, and \item \label{two} $m'({\sigma(v')}) \leq \sigma(v) \leq M'({\sigma(v')})$. \end{enumerate} Since $m$, $M$, $m'$ and $M'$ are monotonically non-decreasing, we can re-write the conditions in a less natural way which will be straightforward to apply below.
\begin{enumerate}[($1'$)] \item \label{onep} $\sigma(v) \leq i$ implies $\sigma(v') \leq M(i)$, \item \label{twop} $\sigma(v') \leq i'$ implies $\sigma(v) \leq M'(i')$, \item \label{threep} $\sigma(v') \leq m(i)-1$ implies $\sigma(v) \leq i-1$, and \item \label{fourp} $\sigma(v) \leq m'(i')-1$ implies $\sigma(v') \leq i'-1$. \end{enumerate} Using monotonicity, (\ref{onep}$'$) and (\ref{twop}$'$) follow from the right-hand side of (\ref{one}) and (\ref{two}). Suppose that $\sigma(v') < m(i)$. Then the left-hand side of (\ref{one}) gives $m(\sigma(v))< m(i)$, so by monotonicity, $\sigma(v)< i$. Equation~(\ref{threep}$'$) follows. In the same way, Equation~(\ref{fourp}$'$) follows from the left-hand side of (\ref{two}). Going the other direction, the right-hand sides of (\ref{one}) and (\ref{two}) follow from (\ref{onep}$'$) and (\ref{twop}$'$). To derive the left-hand side of (\ref{one}), take the contrapositive of (\ref{threep}$'$), which says that $\sigma(v) \geq i$ implies $\sigma(v') \geq m(i)$, and then plug in $i=\sigma(v)$. The derivation of the left-hand side of (\ref{two}) is similar. We now construct an instance of $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. For each vertex $v\in V$ introduce Boolean variables $v_0,\ldots,v_{h}$. Introduce constraints $\delta_0(v_0)$ and $\delta_1(v_{h})$ and, for every $i\in[h]$, $\mathrm{IMP}(v_{i-1},v_i)$. For each vertex $v'\in V'$ introduce Boolean variables $v'_0,\ldots,v'_{h'}$. Introduce constraints $\delta_0(v'_0)$ and $\delta_1(v'_{h'})$ and, for every $i'\in[h']$, $\mathrm{IMP}(v'_{i'-1},v'_{i'})$. Now there is a one-to-one correspondence between assignments $\sigma$ mapping $V$ to~$U$ and $V'$ to~$U'$, and assignments $\tau$ to the Boolean variables that satisfy the above constraints. In particular, $\sigma(v)=\min\{i \mid \tau(v_i)=1\}$. Similarly, $\sigma(v')=\min\{i' \mid \tau(v'_{i'})=1\}$. Now, $\sigma(v) \leq i$ is exactly equivalent to $\tau(v_i) =1$.
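The chain of variables $v_0,\ldots,v_h$ acts as a unary (``thermometer'') encoding of $[h]$, and the claimed one-to-one correspondence is easy to confirm by exhaustive enumeration. A minimal Python sketch, with the illustrative choice $h=3$ (the encoding of the constraints as Python checks is ours):

```python
from itertools import product

h = 3  # illustrative domain size; Boolean variables v_0, ..., v_h

def satisfies(tau):
    """Check delta_0(v_0), delta_1(v_h) and IMP(v_{i-1}, v_i) for i in [h]."""
    if tau[0] != 0 or tau[h] != 1:
        return False
    # IMP(a, b) forbids a = 1 together with b = 0.
    return all(not (tau[i - 1] == 1 and tau[i] == 0) for i in range(1, h + 1))

sat = [tau for tau in product([0, 1], repeat=h + 1) if satisfies(tau)]
# sigma(v) = min{ i | tau(v_i) = 1 }, as in the text.
sigma = sorted(min(i for i in range(h + 1) if tau[i] == 1) for tau in sat)

print(len(sat))  # -> 3: one satisfying assignment per element of [h]
print(sigma)     # -> [1, 2, 3]
```

Each satisfying $\tau$ has the form $0\cdots 01\cdots 1$, and the position of the first $1$ recovers $\sigma(v)$.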
Thus, we can add the following further constraints to rule out assignments $\sigma$ that do not satisfy (\ref{onep}$'$), (\ref{twop}$'$), (\ref{threep}$'$) and (\ref{fourp}$'$). Add all of the following constraints where $v\in V$, $v'\in V'$, $i\in [h]$ and $i'\in [h']$: $\mathrm{IMP}(v_{i},v'_{M(i)})$, $\mathrm{IMP}(v'_{i'}, v_{M'(i')})$, $\mathrm{IMP}(v'_{m(i)-1},v_{i-1})$, and $\mathrm{IMP}(v_{m'(i')-1},v'_{i'-1})$. Now the assignments~$\tau$ of Boolean values to the variables satisfy all of the constraints if and only if they correspond to assignments~$\sigma$ which satisfy (\ref{onep}$'$), (\ref{twop}$'$), (\ref{threep}$'$) and (\ref{fourp}$'$), and so should contribute to $$Z'_{H}(G,W(G,H)) = \sum_{\sigma\in \Hom GH\text{ with $\sigma(V)\subseteq U$}} \, \prod_{v\in V(G)} w_v(\sigma(v)).$$ We will next construct weight functions for the instance of $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ in order to reproduce the effect of the weight functions in $W(G,H)$. In order to avoid division by~$0$, we first modify the construction. Suppose that for some variable $v\in V$ and some $i\in [h]$, $w_v(i)=0$. Configurations $\sigma$ with $\sigma(v)=i$ make no contribution to $Z'_{H}(G,W(G,H))$. Thus, it does no harm to rule out such configurations by modifying the $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ instance to ensure that $\tau(v_i)=1$ implies $\tau(v_{i-1})=1$. We do this by adding the constraint $\mathrm{IMP}(v_i,v_{i-1})$. Similarly, if $w_{v'}(i')=0$ for $v'\in V'$ and $i'\in[h']$ then we add the constraint $\mathrm{IMP}(v'_{i'},v'_{i'-1})$. Once we've made this change, we can replace $W(G,H)$ with an equivalent indexed set of weight functions $W'(G,H)$ where $w'_v(i)=w_v(i)$ if $w_v(i)>0$ and $w'_v(i)=1$ otherwise. The weight functions for the $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ instance are then constructed as follows, for each $v\in V$. For each $i\in[h]$, let $\gamma_{v_{i-1}}(0)=1$. Let $\gamma_{v_h}(1)=w'_v(h)$.
For each $i\in [h-1]$, let $\gamma_{v_i}(1)=w'_v(i)/w'_v(i+1)$. Note that $\gamma_{v_h}(0)$ and $\gamma_{v_0}(1)$ have not yet been defined --- these values can be chosen arbitrarily. They will not be relevant given the constraints $\delta_0(v_0)$ and $\delta_1(v_h)$. Now if $\sigma(v)=i$ we have $\tau(v_0)=\cdots = \tau(v_{i-1})=0$ and $\tau(v_i)=\cdots = \tau(v_h)=1$ so $\prod_j \gamma_{v_j}(\tau(v_j)) = w'_v(i)$, as required. Similarly, for each $v'\in V'$, define the weight functions as follows. For each $i\in[h']$, let $\gamma_{v'_{i-1}}(0)=1$. Let $\gamma_{v'_{h'}}(1)=w'_{v'}(h')$. For each $i\in [h'-1]$, let $\gamma_{v'_i}(1)=w'_{v'}(i)/w'_{v'}(i+1)$. Using these weight functions, we obtain the desired reduction from the computation of $Z'_{H}(G,W(G,H))$ to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. \end{proof} \subsection{Intractable trees} Lemma~\ref{lem:intermediate} shows that if $H$ has no induced~$J_3$ then $\wHom{H}$ is AP-reducible to $\textsc{\#BIS}$. The purpose of this section is to prove Lemma~\ref{lem:hardweighted}, below, which shows, by contrast, that if $H$ does have an induced~$J_3$, then $\wHom{H}$ is $\textsc{\#Sat}$-hard. In order to prepare for the proof of Lemma~\ref{lem:hardweighted}, we introduce the notion of a multiterminal cut. Given a graph~$G=(V,E)$ with distinguished vertices~$\alpha$, $\beta$ and~$\gamma$, which we refer to as ``terminals'', a {\it multiterminal cut\/} is a set $E'\subseteq E$ whose removal disconnects the terminals in the sense that the graph $(V,E\setminus E')$ does not contain a path between any two distinct terminals. The size of the multiterminal cut is the number of edges in~$E'$. Consider the following computational problem. \begin{description} \item[Problem] \MultiCutCount{$3$}. \item[Instance] A positive integer~$b$, a connected graph $G=(V,E)$ and $3$ distinct vertices $\alpha$, $\beta$ and $\gamma$ from~$V$. The input has the property that every multiterminal cut has size at least~$b$. 
\item[Output] The number of size-$b$ multiterminal cuts for $G$ with terminals $\alpha$, $\beta$, and $\gamma$. \end{description} We will use the following technical lemma, which we used before in~\cite{Ising} (without stating it formally). \begin{lemma} \label{lem:cut} \MultiCutCount{$3$} $\equiv_\mathrm{AP} \textsc{\#Sat}$. \end{lemma} \begin{proof} This follows essentially from the proof of Dahlhaus et al.~\cite{Dalhaus} that the decision version of \MultiCutCount{$3$} is NP-hard and from the fact \cite[Theorem 1]{APred} that the NP-hardness of a decision problem implies that the corresponding counting problem is AP-interreducible with $\textsc{\#Sat}$. The details are given in \cite[Section 4]{Ising}. \end{proof} \begin{lemma} \label{lem:hardweighted} Suppose that $H$ is a tree with an induced~$J_3$. Then $$\textsc{\#Sat} \leq_\mathrm{AP}\wHom{H}.$$ \end{lemma} \begin{proof} We will prove the lemma by giving an AP-reduction from \MultiCutCount{$3$} to $\wHom{H}$. The lemma will then follow from Lemma~\ref{lem:cut}. Suppose that $H$ has an induced subgraph which is isomorphic to~$J_3$. To simplify the notation, label the vertices and edges of~$H$ in such a way that the induced subgraph is (identically) the graph $J$ depicted in Figure~\ref{fig:J}. \begin{figure} \caption{The tree $J$.} \label{fig:J} \end{figure} Let $b$, $G=(V,E)$, $\alpha$, $\beta$ and $\gamma$ be an input to \MultiCutCount{$3$}. Let $s= 2 + |E(G)|+2|V(G)|$. (The exact size of~$s$ is not important, but it has to be at least this big to make the calculation work, and it has to be at most a polynomial in the size of~$G$.) Let $G'$ be the graph defined as follows. First, let $V'(G)= \{(e,i) \mid e\in E,i\in[s]\}$. Thus, $V'(G)$ contains $s$ vertices for each edge $e$ of~$G$.
Then let $G'$ be the graph with vertex set $V(G') = V(G) \cup V'(G)$ and edge set $$E(G') = \{(u,(e,i)) \mid u\in V(G), (e,i)\in V'(G), \mbox{and $u$ is an endpoint of~$e$} \}.$$ We will define weight functions $w_v$ for $v\in V(G')$ so that an approximation to the number of size-$b$ multiterminal cuts for $G$ with terminals $\alpha$, $\beta$ and $\gamma$ can be obtained from an approximation to $Z_H(G',W(G',H))$. We start by defining the set of pairs $(v,c)\in V(G')\times V(H)$ for which we will specify $w_v(c)>0$. In particular, define the set $\Omega$ as follows. $$\Omega = \{(\alpha,x_0),(\beta,y_0),(\gamma,z_0)\} \cup \big((V(G)-\{\alpha,\beta,\gamma\}) \times \{x_0,y_0,z_0\} \big) \cup \left(V'(G) \times \{w,x_1,y_1,z_1\}\right).$$ Let $w_v(c)=1$ if $(v,c)\in \Omega$. Otherwise, let $w_v(c)=0$. Thus, $Z_H(G',W(G',H))$ is the number of homomorphisms $\sigma$ from~$G'$ to~$H$ with $\sigma(V(G)) = \{x_0,y_0,z_0\}$, $\sigma(V'(G)) \subseteq \{w,x_1,y_1,z_1\}$, $\sigma(\alpha)=x_0$, $\sigma(\beta)=y_0$ and $\sigma(\gamma)=z_0$. We will refer to these as ``valid'' homomorphisms. If $\sigma$ is a valid homomorphism, then let \begin{align*} \bichrom{\sigma} = \{ e \in E(G) \mid \quad & \mbox{the vertices of~$V(G)$ corresponding to } \\ & \mbox{the endpoints of~$e$ are mapped to different colours by~$\sigma$} \}.\end{align*} Note that, for every valid homomorphism~$\sigma$, $\bichrom{\sigma}$ is a multiterminal cut for the graph~$G$ with terminals~$\alpha$, $\beta$ and~$\gamma$. For every multiterminal cut $E'$, let $\components{E'}$ denote the number of components in the graph $(V,E\setminus E')$. For each multiterminal cut~$E'$, let $Z_{E'}$ denote the number of valid homomorphisms~$\sigma$ from~$G'$ to~$H$ such that $\bichrom{\sigma} = E'$. From the definition of multiterminal cut, $\components{E'}\geq 3$. If $\components{E'}=3$ then $$Z_{E'} = 2^{s(|E(G)|-|E'|)}$$ since there are two choices for the colours of each vertex $(e,i)$ with $e\in E(G)-E'$.
(Since the endpoints of each such edge~$e$ are assigned the same colour by~$\sigma$, the vertex $(e,i)$ can either be coloured~$w$, or it can be coloured with one other colour.) Also, $$Z_{E'} \leq 2^{s(|E(G)|-|E'|)} 3^{\components{E'}-3},$$ since the component of~$\alpha$ is mapped to~$x_0$ by~$\sigma$, the component of~$\beta$ is mapped to~$y_0$, the component of~$\gamma$ is mapped to~$z_0$, and each remaining component is mapped to a colour in $\{x_0,y_0,z_0\}$. Let $Z^*= 2^{s(|E(G)|-b)} $. If $E'$ has size~$b$ then $\components{E'}=3$. (Otherwise, there would be a smaller multiterminal cut, contrary to the definition of \MultiCutCount{$3$}.) So, in this case, \begin{equation} Z_{E'} = Z^*. \label{eq:smgoodcuts} \end{equation} If $E'$ has size $b'>b$ then $$Z_{E'} \leq 2^{s(|E(G)|-b')} 3^{\components{E'}-3} = 2^{-s(b'-b)} 3^{\components{E'}-3} Z^* \leq 2^{-s} 3^{|V(G)|} Z^*. $$ Clearly, there are at most $2^{|E(G)|}$ multiterminal cuts~$E'$. So, using the definition of~$s$, \begin{equation} \label{eq:smbigcuts} \sum_{E' : |E'|>b} Z_{E'} \leq \frac{Z^*}{4}. \end{equation} From Equation~(\ref{eq:smgoodcuts}), we find that, if there are $N$ size-$b$ multiterminal cuts then $$Z_H(G',W(G',H)) = N Z^* + \sum_{E' : |E'|>b} Z_{E'} .$$ So, applying Equation~(\ref{eq:smbigcuts}), we get $$ N \leq \frac{Z_H(G',W(G',H))}{Z^*} \leq N + \frac{1}{4}.$$ Thus, we have an AP-reduction from \MultiCutCount{$3$} to $\wHom{H}$. To determine the accuracy with which $Z_H(G',W(G',H))$ should be approximated in order to achieve a given accuracy in the approximation to~$N$, see the proof of Theorem 3 of \cite{APred}. \end{proof} \section{Tree homomorphisms capture the ferromagnetic Potts model} \label{sec:potts} The problem $\nHom{H}$ counts colourings of a graph satisfying ``hard'' constraints: two colours (corresponding to vertices of $H$) are either allowed on adjacent vertices of the instance or disallowed.
By contrast, the Potts model (to be described presently) is ``permissive'': every pair of colours is allowed on adjacent vertices, but some pairs are favoured relative to others. The strength of interactions between colours is controlled by a real parameter~$\gamma$. In this section, we will show that approximating the number of homomorphisms to $J_q$ is equivalent in difficulty to the problem of approximating the partition function of the ferromagnetic $q$-state Potts model. Since the latter problem is not known to be \textsc{\#BIS}-easy for any $q>2$, we might speculate that approximating $\nHom{J_q}$ is not \textsc{\#BIS}-easy for any $q>2$. If so, $J_3$ would be the smallest tree with this property. It is interesting that, for fixed~$q$, a continuously parameterised class of permissive problems can be shown to be computationally equivalent to a single counting problem with hard constraints. Suppose, for example, that we wanted to investigate the possibility that computing the partition function of the $q$-state ferromagnetic Potts model formed a hierarchy of problems of increasing complexity with increasing~$q$. We could equivalently investigate the sequence of problems $\nHom{J_q}$, which seems intuitively to be an easier proposition. We start with some definitions. Let $q$ be a positive integer. The $q$-state Potts model is a statistical mechanical model of Potts~\cite{Potts} which generalises the classical Ising model from two to $q$~spins. In this model, spins interact along edges of a graph~$G=(V,E)$. The strength of each interaction is governed by a parameter~$\gamma$ (a real number which is always at least~$-1$, and is greater than~$0$ in the \emph{ferromagnetic} case which we study, where like spins attract each other). The $q$-state Potts partition function is defined as follows. 
\begin{equation}\label{eq:PottsGph} Z_\mathrm{Potts}(G;q,\gamma) = \sum_{\sigma:V\rightarrow [q]} \prod_{e=\{u,v\}\in E} \big(1+\gamma\,\delta(\sigma(u) ,\sigma(v))\big), \end{equation} where $\delta(s,s')$ is~$1$ if $s=s'$, and is~$0$ otherwise. The Potts partition function is well-studied. In addition to the complexity-theory literature mentioned below, we refer the reader to Sokal's survey~\cite{Sokal05}. In order to state our results in the strongest possible form, we use the notion of ``efficiently approximable real number'' from Section~\ref{sec:prelim}. Recall that a real number $\gamma$ is efficiently approximable if there is an FPRAS for the problem of computing it. The notion of ``efficiently approximable'' is not important to the constructions below --- the reader who prefers to assume that the parameters are rational will still appreciate the essence of the reductions. Let $q$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. Consider the following computational problem, which is parameterised by~$q$ and~$\gamma$. \begin{description} \item[Problem] $\textsc{Potts}(q,\gamma)$. \item[Instance] Graph $G=(V,E)$. \item[Output] $Z_\mathrm{Potts}(G;q,\gamma)$. \end{description} This problem may be defined more generally for non-integers~$q$ via the Tutte polynomial. We will use some results from \cite{FerroPotts} which are more general, but we do not need the generality here. In an important paper, Jaeger, Vertigan and Welsh~\cite{JVW90} examined the problem of evaluating the Tutte polynomial. Their result gave a complete classification of the computational complexity of $\textsc{Potts}(q,\gamma)$. For every fixed positive integer~$q$, apart from the trivial $q=1$, and for every fixed~$\gamma$, they showed that this computational problem is \#P-hard. When $q=1$ and $\gamma$ is rational, $Z_\mathrm{Potts}(G;q,\gamma)$ can easily be exactly evaluated in polynomial time. 
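For intuition, the definition~(\ref{eq:PottsGph}) can be evaluated by brute force on small graphs. The Python sketch below (exponential in $|V|$, for illustration only; the function name is ours) sums over all $q^{|V|}$ spin assignments:

```python
from itertools import product

def potts_partition_function(n, edges, q, gamma):
    """Brute-force evaluation of Z_Potts(G; q, gamma) for a graph on
    vertices 0..n-1, directly from the definition: a sum over all q^n
    spin assignments, with each monochromatic edge contributing a
    factor (1 + gamma)."""
    total = 0.0
    for sigma in product(range(q), repeat=n):
        weight = 1.0
        for u, v in edges:
            weight *= 1.0 + gamma * (sigma[u] == sigma[v])
        total += weight
    return total

# A triangle with q = 3 and gamma = 1: each edge doubles the weight
# of a configuration when its endpoints agree.
triangle = [(0, 1), (1, 2), (0, 2)]
print(potts_partition_function(3, triangle, q=3, gamma=1.0))  # -> 66.0
print(potts_partition_function(3, triangle, q=1, gamma=1.0))  # -> 8.0
```

With $q=1$ there is a single assignment, every edge is monochromatic, and the sum collapses to $(1+\gamma)^{|E|}$, in line with the easy case noted above.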
The complexity of the approximation problem has also been partially resolved. In the positive direction, Jerrum and Sinclair~\cite{JS93} gave an FPRAS for the case $q=2$. In the negative direction, Goldberg and Jerrum~\cite{FerroPotts} showed that approximation is $\textsc{\#BIS}$-hard for every fixed $q>2$. They left open the question of whether approximating $Z_\mathrm{Potts}(G;q,\gamma)$ is as easy as $\textsc{\#BIS}$ (or whether it might be even harder). In this paper, we show that the approximation problem is equivalent in complexity to a tree homomorphism problem. In particular, we show that $\textsc{Potts}(q,\gamma)$ is AP-equivalent to the problem of approximately counting homomorphisms to the tree~$J_q$. We first give an AP-reduction from $\textsc{Potts}(q,1)$ to $\nHom{J_q}$. \begin{lemma} \label{lem:tocol} Let $q>2$ be a positive integer. $$\textsc{Potts}(q,1) \leq_\mathrm{AP} \nHom{J_q}.$$ \end{lemma} \begin{proof} Let $G$ be an instance of $\textsc{Potts}(q,1)$. We can assume without loss of generality that $G$ is connected, since it is clear from~(\ref{eq:PottsGph}) that a graph~$G$ with connected components $G_1,\ldots,G_\kappa$ satisfies $Z_\mathrm{Potts}(G;q,\gamma)=\prod_{i=1}^{\kappa} Z_\mathrm{Potts}(G_i;q,\gamma)$. Let $G'$ be the graph with $$V(G') = V(G) \cup E(G)$$ and $$E(G') = \{(u,e) \mid u\in V(G), e \in E(G), \mbox{and $u$ is an endpoint of~$e$} \}.$$ $G'$ is sometimes referred to as the ``$2$-stretch'' of~$G$. For clarity, when we consider an element $e\in E(G)$ as a vertex of $G'$ (rather than an edge of $G$), we shall refer to it as the ``midpoint vertex corresponding to edge~$e$''. Let $s$ be an integer satisfying \begin{equation} \label{eq:firsts} 8 q {(q+1)}^{|V(G)|+|E(G)|} \leq {\left(\frac{q}{2}\right)}^s . \end{equation} For concreteness, take $s$ to be the smallest integer satisfying (\ref{eq:firsts}). The exact size of~$s$ is not so important. 
The calculation below relies on the fact that $s$ is large enough to satisfy~(\ref{eq:firsts}). On the other hand, $s$ must be at most a polynomial in the size of~$G$, to make the reduction feasible. We will construct an instance~$G''$ of $\nHom{J_q}$ by adding some gadgets to~$G'$. Fix a vertex $v\in V(G)$. Let $G''$ be the graph with $V(G'')=V(G) \cup E(G) \cup \{v_0,\ldots,v_s\}$ and $E(G'') = E(G') \cup \{(v,v_0)\} \cup \{(v_0,v_i) \mid i\in [s]\}$. See Figure~\ref{fig:firstinstance}. \begin{figure} \caption{The instance~$G''$. The thick curved line between $V(G)$ and $E(G)$ indicates that the edges in~$E(G')$ go between elements of~$V(G)$ and elements of~$E(G)$, but these are not shown. } \label{fig:firstinstance} \end{figure} We say that a homomorphism~$\sigma$ from~$G''$ to~$J_q$ is \emph{typical} if $\sigma(v_0)=w$. Note that, in a typical homomorphism, every vertex in $V(G)$ is mapped by~$\sigma$ to one of the colours from $\{c'_1,\ldots,c'_q\}$. Let $Z_{J_q}^t(G'')$ denote the number of typical homomorphisms from~$G''$ to~$J_q$. Given a mapping $\sigma: V(G) \rightarrow \{c'_1,\ldots,c'_q\}$, the number of typical homomorphisms which induce this mapping is $2^{\mathrm{mono}(\sigma)} q^s$, where $\mathrm{mono}(\sigma)$ is the number of edges $e\in E(G)$ whose endpoints in $V(G)$ are mapped to the same colour by~$\sigma$. (To see this, note that there are two possible colours for the midpoint vertices corresponding to such edges, whereas the other midpoint vertices have to be mapped to~$w$ by~$\sigma$. Also, there are $q$ possible colours for each vertex in $\{v_1,\ldots,v_s\}$.) Thus, using the definition~(\ref{eq:PottsGph}), we conclude that $$Z_{J_q}^t(G'') = \sum_{\sigma: V(G) \rightarrow \{c'_1,\ldots,c'_q\}} 2^{\mathrm{mono}(\sigma)} q^s = q^s Z_\mathrm{Potts}(G;q,1).$$ The number of atypical homomorphisms from~$G''$ to~$J_q$, which we denote by $Z_{J_q}^a(G'')$, is at most $2q 2^s {(q+1)}^{|V(G)|+|E(G)|}$. 
(To see this, note that there are $2q$ alternative colours for~$v_0$. For each of these, there are at most~$2$ colours for each vertex in $\{v_1,\ldots,v_s\}$ and at most $q+1$ colours for each vertex in $V(G)\cup E(G)$.) Using Equation~(\ref{eq:firsts}), we conclude that $Z_{J_q}^a(G'') \leq q^s/4$. Since $Z_{J_q}(G'') = Z_{J_q}^t(G'') + Z_{J_q}^a(G'')$, we have \begin{equation} \label{done} Z_\mathrm{Potts}(G;q,1) \leq \frac{Z_{J_q}(G'')}{q^s} \leq Z_\mathrm{Potts}(G;q,1) + \frac{1}{4}. \end{equation} Equation~(\ref{done}) guarantees that the construction is an AP-reduction from $\textsc{Potts}(q,1)$ to the problem $\nHom{J_q}$. To determine the accuracy with which $Z_{J_q}(G'')$ should be approximated in order to achieve a given desired accuracy in the approximation to $Z_\mathrm{Potts}(G;q,1)$, see the proof of Theorem 3 of \cite{APred}. \end{proof} In order to get a reduction going the other direction, we need to generalise the Potts partition function to a hypergraph version. Let $\mathcal{H}=(\mathcal{V},\mathcal{E})$ be a hypergraph with vertex set $\mathcal{V}$ and hyperedge (multi)set~$\mathcal{E}$. Let $q$ be a positive integer. The $q$-state Potts partition function of $\mathcal{H}$ is defined as follows: $$ Z_\mathrm{Potts}(\mathcal{H};q,\gamma) = \sum_{\sigma:\mathcal{V}\rightarrow [q]} \prod_{f\in\mathcal{E}} \big(1+\gamma \delta(\{\sigma(v) \mid v\in f\})\big),$$ where $\delta(S)$ is~$1$ if its argument is a singleton, and is~$0$ otherwise. Let $q$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. We consider the following computational problem, which is parameterised by~$q$ and~$\gamma$. \begin{description} \item[Problem] $\textsc{HyperPotts}(q,\gamma)$. \item[Instance] A hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$. \item[Output] $Z_\mathrm{Potts}(\mathcal{H};q,\gamma)$.
\end{description} We start by reducing $\nHom{J_q}$ to the problem of approximating the Potts partition function of a hypergraph with parameters~$q$ and~$1$. \begin{lemma}\label{lem:fromcol} Let $q$ be a positive integer. $$\nHom{J_q} \leq_\mathrm{AP} \textsc{HyperPotts}(q,1).$$ \end{lemma} \begin{proof} We can assume without loss of generality that the instance of $\nHom{J_q}$ is bipartite, since otherwise the output is zero. We can also assume that it is connected since a graph~$G$ with connected components $G_1,\ldots,G_\kappa$ satisfies $Z_{J_q}(G) = \prod_{i=1}^\kappa Z_{J_q}(G_i)$. Finally, it is easy to find a bipartition of a connected bipartite graph in polynomial time, so we can assume without loss of generality that this is provided as part of the input. Let $B=(U,V,E)$ be a connected instance of $\nHom{J_q}$ consisting of vertex sets~$U$ and~$V$ and edge set $E$ (a subset of $U\times V$). Let $Z_{J_q}^U(B)$ be the number of homomorphisms from $B$ to $J_q$ in which vertices in~$U$ are coloured with colours in~$\{c'_1,\ldots,c'_q\}$. Similarly, let $Z_{J_q}^V(B)$ be the number of homomorphisms from $B$ to $J_q$ in which vertices in~$V$ are coloured with colours in~$\{c'_1,\ldots,c'_q\}$. Clearly, $Z_{J_q}(B) = Z_{J_q}^U(B) + Z_{J_q}^V(B)$. We will show how to approximate $Z_{J_q}^U(B)$ using an approximation oracle for $\textsc{HyperPotts}(q,1)$. The approximation of $Z_{J_q}^V(B)$ is similar. The construction is straightforward. For every $v\in V$, let $\Gamma(v)$ denote the set of neighbours of vertex~$v$ in~$B$. Let $F = \{\Gamma(v) \mid v\in V\}$. Let $H=(U,F)$ be an instance of $\textsc{HyperPotts}(q,1)$. The reduction is immediate, because $Z_{J_q}^U(B) = Z_\mathrm{Potts}(H;q,1)$. To see this, note that every configuration $\sigma: U \rightarrow \{c'_1,\ldots,c'_q\}$ contributes weight $2^{\mathrm{mono}(\sigma)}$ to $Z_\mathrm{Potts}(H;q,1)$, where ${\mathrm{mono}(\sigma)}$ is the number of hyperedges in~$F$ that are monochromatic in~$\sigma$.
Also, the configuration~$\sigma$ can be extended in exactly $2^{\mathrm{mono}(\sigma)}$ ways to homomorphisms from~$B$ to~$J_q$. \end{proof} The next step is to reduce the problem of approximating the Potts partition function of a hypergraph to the problem of approximating the Potts partition function of a \emph{uniform} hypergraph, which is a hypergraph in which all hyperedges have the same size. The reason for this step is that the paper \cite{FerroPotts} shows how to reduce the latter to the approximation of the Potts partition function of a \emph{graph}, which is the desired target of our reduction. Let $q$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. We consider the following computational problem, which, like $\textsc{HyperPotts}(q,\gamma)$, is parameterised by~$q$ and~$\gamma$. \begin{description} \item[Problem] $\textsc{UniformHyperPotts}(q,\gamma)$. \item[Instance] A uniform hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$. \item[Output] $Z_\mathrm{Potts}(\mathcal{H};q,\gamma)$. \end{description} We will actually only use the following lemma with $\gamma=1$ but we state, and prove, the more general lemma, since it is no more difficult to prove. \begin{lemma}\label{lem:touniform} Let $q$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. Then $$\textsc{HyperPotts}(q,\gamma) \leq_\mathrm{AP} \textsc{UniformHyperPotts}(q,\gamma).$$ \end{lemma} \begin{proof} Let $\mathcal{H}=(\mathcal{V},\mathcal{E})$ be an instance of $\textsc{HyperPotts}(q,\gamma)$ with $|\mathcal{V}|=n$ and $|\mathcal{E}|=m$ and $\max\{|f| \mid f \in \mathcal{E}\}=t$.
Let~$s$ be any positive integer that is at least $$ \frac{\log(4 q^{n+m(t-1)}{(1+\gamma)}^m)} {\log(1+\gamma)}.$$ As with our other reductions, the exact value of~$s$ is not important, as long as it satisfies the above inequality, it is bounded from above by a polynomial in~$n$ and~$m$, and it can be computed in polynomial time (as a function of~$n$ and~$m$). An appropriate~$s$ can be readily computed by computing crude upper and lower bounds for~$\gamma$ and evaluating different values of~$s$ one-by-one to find one that is sufficiently large, in terms of these bounds. For every hyperedge $f\in\mathcal{E}$, fix some vertex $v_f\in f$. Introduce new vertices $\{u_{f,i}\mid f\in\mathcal{E},i\in[t-1]\}$, and let $\mathcal{V}' = \mathcal{V} \cup \{u_{f,i}\mid f\in\mathcal{E}, i\in[t-1]\}$. Let $$\mathcal{E}' = \Big\{ f \cup \big\{u_{f,i}\bigm| i\in[\, t-|f|\, ]\big\} \Bigm| f\in\mathcal{E} \Big\} \cup \Big\{ \{v_f,u_{f,1},\ldots,u_{f,t-1}\} \times [s] \Bigm| f \in \mathcal{E} \Big\}.$$ That is, the multi-set $\mathcal{E}' $ has $s$ copies of the edge $\{v_f,u_{f,1},\ldots,u_{f,t-1}\} $ and one copy of the edge $f \cup \{u_{f,i}\mid i\in[t-|f|\,]\}$ for each hyperedge $f\in \mathcal{E}$. Let $\mathcal{H}' = (\mathcal{V}', \mathcal{E}')$. Note that $\mathcal{H}'$ is $t$-uniform. Now, the total contribution to $Z_\mathrm{Potts}(\mathcal{H}';q,\gamma)$ from configurations~$\sigma$ which are monochromatic on every edge $\{v_f,u_{f,1},\ldots,u_{f,t-1}\}$ is exactly $Z_\mathrm{Potts}(\mathcal{H};q,\gamma) {(1+\gamma)}^{s m}$. Also, the total contribution to $Z_\mathrm{Potts}(\mathcal{H}';q,\gamma)$ from any other configurations~$\sigma$ is at most $q^{n+m(t-1)} {(1+\gamma)}^{m} {(1+\gamma)}^{s(m-1)}$ since there are at most $q^{n+m(t-1)}$ such configurations and $\gamma>0$.
So \begin{align*} Z_\mathrm{Potts}(\mathcal{H};q,\gamma) \leq \frac{Z_\mathrm{Potts}(\mathcal{H}';q,\gamma)}{{(1+\gamma)}^{s m}} &\leq Z_\mathrm{Potts}(\mathcal{H};q,\gamma) + \frac{q^{n+m(t-1)} {(1+\gamma)}^{m}} { {(1+\gamma)}^s}\\ &\leq Z_\mathrm{Potts}(\mathcal{H};q,\gamma) + \frac14 \end{align*} which completes the reduction. \end{proof} Finally, we are ready to put together the pieces to show that, for every integer $q>2$, the problem of approximating the Potts partition function is equivalent to a tree homomorphism problem. \begin{theorem} Let $q>2$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. Then $\textsc{Potts}(q,\gamma)\equiv_\mathrm{AP} \nHom{J_q}$. \label{thm:junction} \end{theorem} \begin{proof} We start by establishing the reduction from $\nHom{J_q}$ to $\textsc{Potts}(q,\gamma)$. By Lemmas \ref{lem:fromcol} and~\ref{lem:touniform}, $$\nHom{J_q}\leq_\mathrm{AP}\textsc{HyperPotts}(q,1)\leq_\mathrm{AP}\textsc{UniformHyperPotts}(q,1).$$ To complete the sequence of reductions we need to know that the last problem is reducible to $\textsc{Potts}(q,\gamma)$. Fortunately, this step already appears in the literature in a slightly different guise, so we just need to explain how to translate the terminology from the earlier result to the current setting. For every positive integer~$q$, the partition function $Z_\mathrm{Potts}(\mathcal{H};q,\gamma)$ of the Potts model on hypergraphs is equal to the \emph{Tutte polynomial} $Z_\mathrm{Tutte}(\mathcal{H};q,\gamma)$ (whose definition we will not need here). This equality is proved in \cite[Observation 2.1]{FerroPotts}, using the same basic line of argument that Fortuin and Kasteleyn~\cite{FK} used in the graph case.
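The sandwich inequality above, which completes the proof of Lemma~\ref{lem:touniform}, is easy to sanity-check by exhaustive enumeration on a tiny instance. The following Python sketch (the instance, the choice of each $v_f$ as the first vertex of $f$, and all identifiers are ours) builds $\mathcal{H}'$ as in that proof and verifies the bound in exact integer arithmetic:

```python
from itertools import product
from math import ceil, log

def z_potts_hyper(n, edges, q, gamma):
    # Z_Potts(H; q, gamma): sum over all colourings of (1+gamma)^(# monochromatic hyperedges).
    total = 0
    for sigma in product(range(q), repeat=n):
        total += (1 + gamma) ** sum(1 for f in edges if len({sigma[v] for v in f}) == 1)
    return total

# Tiny instance: n = 3 vertices, hyperedges {0,1} and {0,1,2}, so m = 2 and t = 3.
q, gamma = 3, 1
edges = [(0, 1), (0, 1, 2)]
n, m, t = 3, len(edges), max(len(f) for f in edges)
s = ceil(log(4 * q ** (n + m * (t - 1)) * (1 + gamma) ** m) / log(1 + gamma))

# Build the t-uniform H': pad each hyperedge f with fresh vertices u_{f,i},
# and add s copies of the booster hyperedge {v_f, u_{f,1}, ..., u_{f,t-1}}.
new_edges, next_v = [], n
for f in edges:
    us = tuple(range(next_v, next_v + t - 1))
    next_v += t - 1
    new_edges.append(tuple(f) + us[: t - len(f)])   # the padded copy of f
    new_edges.extend([(f[0],) + us] * s)            # s booster copies (v_f = f[0])
assert all(len(e) == t for e in new_edges)          # H' is t-uniform

zh = z_potts_hyper(n, edges, q, gamma)
zh_prime = z_potts_hyper(next_v, new_edges, q, gamma)
denom = (1 + gamma) ** (s * m)
# Sandwich bound Z(H) <= Z(H')/(1+gamma)^{sm} <= Z(H) + 1/4, in exact arithmetic:
assert zh * denom <= zh_prime <= zh * denom + denom // 4
```

Since $\gamma=1$ here, the division by ${(1+\gamma)}^{sm}$ can be carried out exactly over the integers.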
Furthermore, for $q>2$, Lemmas~9.1 and~10.1 of \cite{FerroPotts} reduce the problem of approximating the Tutte partition function $Z_\mathrm{Tutte}(\mathcal{H};q,1)$, where $\mathcal{H}$ is a \emph{uniform hypergraph}, to that of approximating the Tutte partition function $Z_\mathrm{Tutte}(G;q,\gamma)$, where $G$ is a \emph{graph}. Given the equivalence between $Z_\mathrm{Tutte}(G;q,\gamma)$ and $Z_\mathrm{Potts}(G;q,\gamma)$ mentioned earlier, we see that $$\textsc{UniformHyperPotts}(q,1)\leq_\mathrm{AP}\textsc{Potts}(q,\gamma),$$ completing the chain of reductions. For the other direction, we will establish an AP-reduction from $\textsc{Potts}(q,\gamma)$ to the problem $\nHom{J_q}$. To start, we note that since a graph is a special case of a uniform hypergraph, Lemmas~9.1 and 10.1 of \cite{FerroPotts} give an AP-reduction from $\textsc{Potts}(q,\gamma)$ to $\textsc{Potts}(q,1)$. (It is definitely not necessary to go via hypergraphs for this reduction, but here it is easier to use the stated result than to repeat the work.) Finally, Lemma~\ref{lem:tocol} shows that $\textsc{Potts}(q,1) \leq_\mathrm{AP} \nHom{J_q}$. \end{proof} \section{Inapproximability of counting tree homomorphisms} \label{sec:hard} Until now, it was not known whether or not a bipartite graph~$H$ exists for which approximating $\nHom H$ is \textsc{\#Sat}-hard. It is perhaps surprising, then, to discover that $\nHom H$ may be \textsc{\#Sat}-hard even when $H$ is a tree. However, the hardness result from Section~\ref{sec:weighted} provides a clue. There it was shown that the weighted version $\wHom{H}$ is \textsc{\#Sat}-hard whenever $H$ is a tree containing $J_3$ as an induced subgraph. If we were able to construct a tree~$H$, containing $J_3$, that is able, at least in some limited sense, to simulate vertex weights, then we might obtain a reduction from $\wHom{J_3}$ to $\nHom{H}$. That is roughly how we proceed in this section. 
We will obtain our hard tree~$H$ by ``decorating'' the leaves of~$J_3$. These decorations will match certain structures in the instance~$G$, so that particular distinguished vertices in $G$ will preferentially be coloured with particular colours. Carrying through this idea requires~$H$ to have a certain level of complexity, and the tree~$J_3^*$ that we actually use (see Figure~\ref{fig:JS}) is about the smallest for which this approach works. Presumably the same approach could also be applied starting at $J_q$, for $q>3$. It is possible that there are trees~$H$ that are much smaller than~$J_3^*$ for which $\nHom{H}$ is \textsc{\#Sat}-hard. It is even possible that $\nHom{J_3}$ is \textsc{\#Sat}-hard. But demonstrating this would require new ideas. Define vertex sets \begin{align*} X &=\{x_0,x_1\} \cup \{x_{2,i}\mid i\in[5]\} ,\\ Y &= \{y_0,y_1\} \cup \{y_{2,i}\mid i\in[4]\} \cup \{y_{3,i,j}\mid i\in[4],j\in[3]\}, \\ Z &= \{z_0,z_1\} \cup \{z_{2,i}\mid i\in[3]\} \cup \{z_{3,i,j}\mid i\in[3],j\in[3]\} \cup \{z_{4,i,j,k}\mid i\in[3],j\in[3],k\in[2]\}, \end{align*} and edge sets \begin{align*} E_X &=\{(x_0,x_1)\} \cup \{(x_1,x_{2,i})\mid i\in[5]\} ,\\ E_Y &= \{(y_0,y_1)\} \cup \{(y_1,y_{2,i})\mid i\in[4]\} \cup \{(y_{2,i},y_{3,i,j})\mid i\in[4],j\in[3]\} ,\\ E_Z &= \{(z_0,z_1)\} \cup \{(z_1,z_{2,i})\mid i\in[3]\} \cup \{(z_{2,i},z_{3,i,j})\mid i\in[3],j\in[3]\} \\ &\qquad\null\cup \{ ( z_{3,i,j},z_{4,i,j,k})\mid i\in[3],j\in[3],k\in[2]\}. \end{align*} Let $J_3^*$ be the tree with vertex set $V(J_3^*)=\{w\} \cup X \cup Y \cup Z$ and edge set $$E(J_3^*)=\{(w,x_0),(w,y_0),(w,z_0)\} \cup E_X \cup E_Y \cup E_Z.$$ See Figure~\ref{fig:JS}. Consider the equivalence relation on $V(J_3^*)$ defined by graph isomorphism --- two vertices of~$J_3^*$ are in the same equivalence class if there is an isomorphism of~$J_3^*$ mapping one to the other. 
The canonical representatives of the equivalence classes are the vertices $w$, $x_0$, $x_1$, $x_{2,1}$, $y_0$, $y_1$, $y_{2,1}$, $y_{3,1,1}$, $z_0$, $z_1$, $z_{2,1}$, $z_{3,1,1}$ and $z_{4,1,1,1}$. These are shown in the figure. \begin{figure} \caption{The tree $J_3^*$.} \label{fig:JS} \end{figure} In this section, we will show that $\textsc{\#Sat}$ is AP-reducible to~$\nHom{J_3^*}$. We start by identifying relevant structure in~$J_3^*$. A simple path in a graph is a path in which no vertices are repeated. For every vertex~$h$ of~$J_3^*$, and every positive integer~$k$, let $d_k(h)$ be the number of simple length-$k$ paths from~$h$. The values $d_1(h)$, $d_2(h)$ and $d_3(h)$ can be calculated for each canonical representative $h\in V(J_3^*)$ by inspecting the definition of~$J_3^*$ (or its drawing in Figure~\ref{fig:JS}). These values are recorded in the first four columns of the table in Figure~\ref{JStable}. \begin{figure} \caption{ For each canonical representative $h\in V(J_3^*)$, we record the values of $w_1(h)=d_1(h)$, $w_2(h)=d_1(h)+d_2(h)$ and $w_3(h)=d_1(h)^2+d_2(h)+d_3(h)$.} \label{JStable} \end{figure} Now let $w_k(h)$ denote the number of length-$k$ walks from~$h$ in~$J_3^*$. Clearly, $w_1(h)=d_1(h)$ since $J_3^*$ has no self-loops, so all length-$1$ walks are simple paths. Next, note that $w_2(h)=d_1(h) + d_2(h)$. To see this, note that every length-$2$ walk from~$h$ is either a simple length-$2$ path from~$h$, or it is a walk obtained by taking an edge from~$h$, and then going back to~$h$. Finally, $w_3(h) = d_1(h)^2 + d_2(h)+d_3(h)$ since every length-$3$ walk from~$h$ is one of the following: \begin{itemize} \item a simple length-$3$ path from~$h$, \item a simple length-$2$ path from~$h$, with the last edge repeated in reverse, or \item a simple length-$1$ path from~$h$ with the last edge repeated in reverse, followed by another simple length-$1$ path from~$h$.
\end{itemize} These values are recorded, for each canonical representative $h\in V(J_3^*)$, in the last three columns of the table in Figure~\ref{JStable}. The important fact that we will use is that $w_1(h)$ is uniquely maximised at $h=x_1$, $w_2(h)$ is uniquely maximised at $h=y_1$, and $w_3(h)$ is uniquely maximised at $h=z_1$. (These are shown in boldface in the table.) We are now ready to prove the following theorem. \begin{theorem} \label{thm:hardH} $\textsc{\#Sat} \leq_\mathrm{AP} \nHom{J_3^*}$. \end{theorem} \begin{proof} By Lemma~\ref{lem:cut}, it suffices to give an AP-reduction from \MultiCutCount{$3$} to $\nHom{J_3^*}$. The basic construction follows the outline of the reduction developed in the proof of Lemma~\ref{lem:hardweighted}. However, unlike the situation of Lemma~\ref{lem:hardweighted}, the target problem $\nHom{J_3^*}$ does not include weights, so we must develop gadgetry to simulate the role of these. Let $b$, $G=(V,E)$, $\alpha$, $\beta$ and $\gamma$ be an input to \MultiCutCount{$3$}. Let $s= 3 + |E(G)|+2|V(G)|$. (As before, the exact size of~$s$ is not important, but it has to be at least this big to make the calculation work, and it has to be at most a polynomial in the size of~$G$.) Let $G'$ be the graph defined in the proof of Lemma~\ref{lem:hardweighted}. In particular, let $V'(G)= \{(e,i) \mid e\in E(G),i\in[s]\}$. Then let $G'$ be the graph with vertex set $V(G') = V(G) \cup V'(G)$ and edge set $$E(G') = \{(u,(e,i)) \mid u\in V(G), (e,i)\in V'(G), \mbox{and $u$ is an endpoint of~$e$} \}.$$ Now let $r$ be any positive integer such that \begin{equation} \label{eq:r} {\left( \frac{46}{40} \right)}^r \geq 8 {|V(J_3^*)|}^{|V(G)|+ s |E(G)| + 7}. \end{equation} For concreteness, take $r$ to be the smallest integer satisfying (\ref{eq:r}). Once again, the exact value of~$r$ is not so important. Any~$r$ would work as long as it is at most a polynomial in the size of~$G$, and it satisfies (\ref{eq:r}). 
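The walk counts recorded in Figure~\ref{JStable}, and the uniqueness of the three maximisers, can be checked mechanically from the definition of~$J_3^*$. A Python sketch (our transliteration of the vertex names) computes $w_k(h)$ as the $h$-entry of $A^k\mathbf{1}$, where $A$ is the adjacency matrix:

```python
# Build J_3^* exactly as defined above; vertex names transliterate the text.
V = (['w', 'x0', 'x1'] + [f'x2_{i}' for i in range(1, 6)]
     + ['y0', 'y1'] + [f'y2_{i}' for i in range(1, 5)]
     + [f'y3_{i}_{j}' for i in range(1, 5) for j in range(1, 4)]
     + ['z0', 'z1'] + [f'z2_{i}' for i in range(1, 4)]
     + [f'z3_{i}_{j}' for i in range(1, 4) for j in range(1, 4)]
     + [f'z4_{i}_{j}_{k}' for i in range(1, 4) for j in range(1, 4) for k in range(1, 3)])
E = ([('w', 'x0'), ('w', 'y0'), ('w', 'z0'),
      ('x0', 'x1'), ('y0', 'y1'), ('z0', 'z1')]
     + [('x1', f'x2_{i}') for i in range(1, 6)]
     + [('y1', f'y2_{i}') for i in range(1, 5)]
     + [(f'y2_{i}', f'y3_{i}_{j}') for i in range(1, 5) for j in range(1, 4)]
     + [('z1', f'z2_{i}') for i in range(1, 4)]
     + [(f'z2_{i}', f'z3_{i}_{j}') for i in range(1, 4) for j in range(1, 4)]
     + [(f'z3_{i}_{j}', f'z4_{i}_{j}_{k}')
        for i in range(1, 4) for j in range(1, 4) for k in range(1, 3)])
adj = {v: set() for v in V}
for a, b in E:
    adj[a].add(b)
    adj[b].add(a)

def walk_counts(k):
    # (A^k 1)[v] = number of length-k walks starting at v.
    c = {v: 1 for v in V}
    for _ in range(k):
        c = {v: sum(c[u] for u in adj[v]) for v in V}
    return c

w1, w2, w3 = walk_counts(1), walk_counts(2), walk_counts(3)
for wk, best in [(w1, 'x1'), (w2, 'y1'), (w3, 'z1')]:
    assert max(wk.values()) == wk[best]
    assert sum(1 for v in V if wk[v] == wk[best]) == 1    # unique maximiser
assert (w1['x1'], w2['y1'], w3['z1'], w3['y1']) == (6, 18, 46, 40)
assert 6 * 18 * 46 == 4968
```

The final assertions confirm the constants $6$, $18$, $46$ (whose product is $4968$) and $40$ that appear in the counting arguments below.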
We will construct an instance~$G''$ of $\nHom{J_3^*}$ by adding some gadgets to~$G'$. First, we define the gadgets. \begin{itemize} \item Let $\Gamma_{x}$ be a graph with vertex set $V(\Gamma_{x}) = \{ v_{x_1} \} \cup \bigcup_{i\in[r]} \{v_{x,i}\}$ and edge set $E(\Gamma_x) = \bigcup_{i\in [r]} \{(v_{x_1},v_{x,i})\}$. \item Let $\Gamma_y$ be a graph with vertex set $V(\Gamma_y) = \{v_{y_1}\} \cup \bigcup_{i\in[r]} \{v_{y,i},v'_{y,i}\} $ and edge set $E(\Gamma_y) = \bigcup_{i\in [r]} \{(v_{y_1},v_{y,i}),(v_{y,i},v'_{y,i}) \}$. \item Let $\Gamma_z$ be a graph with vertex set $V(\Gamma_z) = \{v_{z_1}\} \cup \bigcup_{i\in[r]} \{ v_{z,i}, v'_{z,i}, v''_{z,i} \} $ and edge set $E(\Gamma_z) = \bigcup_{i\in [r]} \{ (v_{z_1},v_{z,i}),(v_{z,i},v'_{z,i}),(v'_{z,i},v''_{z,i}) \}$. \end{itemize} Finally, let $$V(G'') = V(G') \cup \{v_w,v_{x_0},v_{y_0},v_{z_0}\} \cup V(\Gamma_x) \cup V(\Gamma_y) \cup V(\Gamma_z),$$ and \begin{align*} E(G'') &= \{ (v_w,v_{x_0}),(v_w,v_{y_0}),(v_w,v_{z_0}), (v_{x_0},v_{x_1}),(v_{y_0},v_{y_1}),(v_{z_0},v_{z_1}), (v_{x_1},\alpha),(v_{y_1},\beta),(v_{z_1},\gamma) \} \\ & \cup E(G') \cup \{(v_w,v) \mid v\in V(G)\} \cup E(\Gamma_x) \cup E(\Gamma_y) \cup E(\Gamma_z). \end{align*} A picture of the instance $G''$ is shown in Figure~\ref{fig:GInstance}. \begin{figure} \caption{The instance~$G''$. The thick curved line between $V(G)$ and $V'(G)$ indicates that the edges in~$E(G')$ go between vertices in~$V(G)$ and vertices in~$V'(G)$, but these are not shown. Vertex $v_w$ is connected to each vertex in~$V(G)$. } \label{fig:GInstance} \end{figure} We say that a homomorphism~$\sigma$ from~$G''$ to~$J_3^*$ is \emph{typical} if $\sigma(v_{x_1})=x_1$, $\sigma(v_{y_1})=y_1$, and $\sigma(v_{z_1})=z_1$. Note that, in a typical homomorphism, $\sigma(v_w)=w$, so $\sigma(V(G))=\{x_0,y_0,z_0\}$ and $\sigma(V'(G)) \subseteq \{w,x_1,y_1,z_1\}$. Also, $\sigma(\alpha)=x_0$, $\sigma(\beta)=y_0$, and $\sigma(\gamma)=z_0$.
If $\sigma$ is a typical homomorphism, then let \begin{align*} \bichrom{\sigma} = \{ e \in E(G) \mid \quad & \mbox{the vertices of~$V(G)$ corresponding to } \\ & \mbox{the endpoints of~$e$ are mapped to different colours by~$\sigma$} \}.\end{align*} Note that, for every typical homomorphism~$\sigma$, $\bichrom{\sigma}$ is a multiterminal cut for the graph~$G$ with terminals~$\alpha$, $\beta$ and~$\gamma$. For every multiterminal cut $E'$ of~$G$, let $\components{E'}$ denote the number of components in the graph $(V,E\setminus E')$. For each multiterminal cut~$E'$, let $Z_{E'}$ denote the number of typical homomorphisms~$\sigma$ from~$G''$ to~$J_3^*$ such that $\bichrom{\sigma} = E'$. As in the proof of Lemma~\ref{lem:hardweighted}, $\components{E'}\geq 3$. If $\components{E'}=3$ then $$Z_{E'} = 2^{s|E(G)-E'|} 6^r 18^r 46^r = 2^{s|E(G)-E'|} 4968^r.$$ The $2^{s|E(G)-E'|}$ comes from the two choices for the colour of each vertex $(e,i)$ with $e\in E(G)-E'$, as before. The $6^r$ comes from the choices for the vertices in $V(\Gamma_x)\setminus \{v_{x_1}\}$ according to column~5 of the table in Figure~\ref{JStable}. The $18^r$ comes from the choices for the vertices in $V(\Gamma_y)\setminus \{v_{y_1}\}$ (in column~6) and the $46^r$ comes from the choices for the vertices in $V(\Gamma_z)\setminus \{v_{z_1}\}$ (in column~7). Also, for any multiterminal cut $E'$ of~$G$, $$Z_{E'} \leq 2^{s|E(G)-E'|} 3^{\components{E'}-3} 4968^r,$$ since in any typical homomorphism~$\sigma$, the component of~$\alpha$ is mapped to~$x_0$ by~$\sigma$, the component of~$\beta$ is mapped to~$y_0$, the component of~$\gamma$ is mapped to~$z_0$, and each remaining component is mapped to a colour in $\{x_0,y_0,z_0\}$. Let $Z^*= 2^{s(|E(G)|-b)} 4968^r$. If $E'$ has size~$b$ then $\components{E'}=3$. (Otherwise, there would be a smaller multiterminal cut, contrary to the definition of \MultiCutCount{$3$}.) So, in this case, \begin{equation} Z_{E'} = Z^*.
\label{eq:goodcuts} \end{equation} If $E'$ has size $b'>b$ then $$Z_{E'} \leq 2^{s(|E(G)|-b')} 3^{\components{E'}-3} 4968^r = 2^{-s(b'-b)} 3^{\components{E'}-3} Z^* \leq 2^{-s} 3^{|V(G)|} Z^*. $$ Clearly, there are at most $2^{|E(G)|}$ multiterminal cuts~$E'$. So, using the definition of~$s$, \begin{equation} \label{eq:bigcuts} \sum_{E' : |E'|>b} Z_{E'} \leq \frac{Z^*}{8}. \end{equation} Now let $Z^-$ denote the number of homomorphisms from~$G''$ to~$J_3^*$ that are not typical. Then $$ Z^- \leq |V(J_3^*)|^{|V(G)|+|V'(G)|+7 } {(40/46)}^r 4968^r, $$ since there are at most $|V(J_3^*)|$ colours for each of the vertices in $$V(G)\cup V'(G) \cup \{v_w,v_{x_0},v_{y_0},v_{z_0},v_{x_1},v_{y_1},v_{z_1}\}.$$ Also, given that the assignment to $v_{x_1}$, $v_{y_1}$ and $v_{z_1}$ is not precisely $x_1$, $y_1$ and $z_1$, respectively, it can be seen from the table in Figure~\ref{JStable} that the number of possibilities for the remaining vertices is at most $(40/46)^r$ times as large as it would otherwise have been. (For example, from the last column of the table, colouring $v_{z_1}$ with $y_1$ instead of with~$z_1$ would give exactly $40^r$ choices for the colours of the vertices in $V(\Gamma_z) \setminus \{v_{z_1}\}$ instead of $46^r$ choices. The differences in the other columns are more substantial than this.) Since $|V'(G)|=s |E(G)|$, $$Z^- \leq {|V(J_3^*)|}^{|V(G)|+s|E(G)|+7} {(40/46)}^r 4968^r.$$ We can assume that $b\leq |E(G)|$ (otherwise, the number of size-$b$ multiterminal cuts is trivially~$0$) so from the definition of~$Z^*$, $$ Z^- \leq {|V(J_3^*)|}^{|V(G)|+s|E(G)|+7} {(40/46)}^r Z^*. $$ Using Equation~(\ref{eq:r}), we get \begin{equation} \label{eq:nocut} Z^- \leq \frac{Z^*}{8}.
\end{equation} From Equation~(\ref{eq:goodcuts}), we find that, if there are $N$ size-$b$ multiterminal cuts then $$Z_{J_3^*}(G'') = N Z^* + \sum_{E' : |E'|>b} Z_{E'} + Z^-.$$ So applying Equations (\ref{eq:bigcuts}) and (\ref{eq:nocut}), we get $$ N \leq \frac{Z_{J_3^*}(G'')}{Z^*} \leq N + \frac{1}{4}.$$ Thus, we have an AP-reduction from \MultiCutCount{$3$} to $\nHom{J_3^*}$. To determine the accuracy with which~$Z_{J_3^*}(G'')$ should be approximated in order to achieve a given accuracy in the approximation to~$N$, see the proof of Theorem 3 of \cite{APred}. \end{proof} \section{The Potts partition function and proper colourings of bipartite graphs} \label{sec:bqcol} Let $q$ be any integer greater than~$2$. Consider the following computational problem. \begin{description} \item[Problem] $\bqcol q$. \item[Instance] A bipartite graph $G$. \item[Output] The number of proper $q$-colourings of $G$. \end{description} Dyer et al.~\cite[Theorem 13]{APred} showed that $\textsc{\#BIS} \leq_\mathrm{AP} \bqcol q$. However, it may be the case that $\bqcol q$ is easier to approximate than $\textsc{\#Sat}$. Certainly, no AP-reduction from $\textsc{\#Sat}$ to $\bqcol q$ has been discovered (despite some effort!). Therefore, it seems worth recording the following upper bound on the complexity of $\nHom{J_q}$, which is an easy consequence of Theorem~\ref{thm:junction}. \begin{corollary}\label{cor:bqcol} Let $q>2$ be a positive integer. Then $\nHom{J_q} \leq_\mathrm{AP} \bqcol q$. \end{corollary} Corollary~\ref{cor:bqcol} follows immediately from Lemma~\ref{lem:bqcol} below by applying Theorem~\ref{thm:junction} with $\gamma=1/(q-2)$. \begin{lemma}\label{lem:bqcol} Let $q>2$ be a positive integer. Then $\textsc{Potts}(q,1/(q-2)) \leq_\mathrm{AP} \bqcol q$. \end{lemma} \begin{proof} Let $G=(V,E)$ be an input to $\textsc{Potts}(q,1/(q-2))$. Let $G'$ be the two-stretch of $G$ constructed as in the proof of Lemma~\ref{lem:tocol}.
In particular, $G'$ is the bipartite graph with $$V(G') = V(G) \cup E(G)$$ and $$E(G') = \{(u,e) \mid u\in V(G), e \in E(G), \mbox{and $u$ is an endpoint of~$e$} \}.$$ Consider an assignment $\sigma\colon V(G) \to [q]$ and an edge $e=(u,v)$ of $G$. If $\sigma(u)\neq \sigma(v)$ then there are $q-2$ ways to colour the midpoint vertex corresponding to~$e$ so that it receives a different colour from~$\sigma(u)$ and~$\sigma(v)$. However, if $\sigma(u)=\sigma(v)$ then there are $q-1$ possible colours for the midpoint vertex. Let $N$ denote the number of proper $q$-colourings of~$G'$. Then since $(q-1)/(q-2)-1=1/(q-2)$, we have $$ N = {(q-2)}^{|E|} \sum_{\sigma:V\rightarrow[q]} {\left(\frac{q-1}{q-2}\right)}^{\mathrm{mono}(\sigma)} = {(q-2)}^{|E|} Z_\mathrm{Potts}(G;q, 1/(q-2)),$$ where $\mathrm{mono}(\sigma)$ is the number of edges $e\in E(G)$ whose endpoints in $V(G)$ are mapped to the same colour by~$\sigma$. \end{proof} \section{The Potts partition function and the weight enumerator of a code} \label{sec:we} A {\it linear code\/} $C$ of length $N$ over a finite field $\mathbb{F}_q$ is a linear subspace of $\mathbb{F}_q^N$. If the subspace has dimension~$r$ then the code may be specified by an $r\times N$ {\it generating matrix}~$M$ over~$\mathbb{F}_q$ whose rows form a basis for the code. For any real number $\lambda$, the weight enumerator of the code is given by $W_M(\lambda)=\sum_{w\in C}\lambda^{\|w\|}$ where $\|w\|$ is the number of non-zero entries in~$w$. ($\|w\|$ is usually called the {\it Hamming weight\/} of~$w$.) We consider the following computational problem, parameterised by $q$ and~$\lambda$. \begin{description} \item[Problem] $\WE q\lambda$. \item[Instance] A generating matrix $M$ over $\mathbb{F}_q$. \item[Output] $W_M(\lambda)$. \end{description} In \cite{WeightEnum}, the authors considered the special case $q=2$ and obtained various results on the complexity of $\WE 2\lambda$, depending on $\lambda$. 
Here we show that, for any prime~$p$, $\WE p\lambda$ provides an upper bound on the complexity of $\textsc{Potts}(p^k,\gamma)$. \begin{theorem}\label{thm:PottsToWE} Suppose that $p$ is a prime, $k$ is a positive integer satisfying $p^k>2$ and $\lambda\in(0,1)$ is an efficiently computable real. Then $$\textsc{Potts}(p^k,1)\leq_\mathrm{AP}\WE p\lambda.$$ \end{theorem} The following corollary follows immediately from Theorem~\ref{thm:PottsToWE} and Theorem~\ref{thm:junction}. \begin{corollary} \label{newcor} Suppose that $p$ is a prime, $k$ is a positive integer satisfying $p^k>2$ and $\lambda\in(0,1)$ is an efficiently computable real. Then $\nHom{J_{p^k}} \leq_\mathrm{AP} \WE p\lambda$. \end{corollary} The condition $p^k>2$ can in fact be removed from Corollary~\ref{newcor}, even though the result does not follow from Theorem~\ref{thm:PottsToWE} in this situation. For the missing case where $p=2$ and $k=1$, Lemma~\ref{lem:intermediate} gives $\nHom{J_{2}} \leq_\mathrm{AP} \textsc{\#BIS}$ and \cite[Cor.~7, Part~(4)]{WeightEnum} shows $\textsc{\#BIS} \leq_\mathrm{AP} \WE {2}{\lambda}$. A striking feature of Corollary~\ref{newcor} is that it provides a uniform upper bound on the complexity of the infinite sequence of problems $\nHom{J_{p^k}}$, with $p$ fixed and $k$ varying. This uniform upper bound is interesting if (as we suspect) $\WE p\lambda$ is not itself equivalent to \textsc{\#Sat}{} via AP-reducibility. \begin{proof}[Proof of Theorem~\ref{thm:PottsToWE}] Let $q=p^k$ and let $\gamma=\lambda^{-q(p-1)/p}-1>0$. Since Theorem~\ref{thm:junction} shows $\textsc{Potts}(p^k,1)\equiv_\mathrm{AP} \nHom{J_{p^k}}\equiv_\mathrm{AP}\textsc{Potts}(p^k,\gamma)$, it is enough to give an AP-reduction from $\textsc{Potts}(p^k,\gamma)$ to $\WE p\lambda$. So suppose $G=(V,E)$ is a graph with $n$ vertices and $m$ edges. We wish to evaluate \begin{equation}\label{eq:PottsDef} Z_\mathrm{Potts}(G;q,\gamma)=\sum_{\sigma:V\to[q]}(1+\gamma)^{\mathop{\mathrm{mono}}(\sigma)}.
\end{equation} Our aim is to construct an instance of the weight enumerator problem whose solution is the above expression, modulo an easily computable factor. Introduce a collection of variables $X=\{x^v_i\mid v\in V \text{ and }i\in[k]\}$. For each assignment $\sigma:V\to[q]$ we define an associated assignment $\hat\sigma:X\to\mathbb{F}_p$ as follows: for all $v\in V$, $$ \big(\hat\sigma(x_1^v),\hat\sigma(x_2^v), \ldots,\hat\sigma(x_k^v)\big)=\phi(\sigma(v)), $$ where $\phi$ is any fixed bijection $[q]\to \mathbb{F}_p^k$. Note that $\sigma\mapsto\hat\sigma$ is a bijection from assignments $V\to[q]$ to assignments $X\to\mathbb{F}_p$. (Informally, we have coded the spin at each vertex as a $k$-tuple of variables taking values in~$\mathbb{F}_p$.) Let $\ell_1(z_1,\ldots,z_k),\ldots,\ell_q(z_1,\ldots,z_k)$ be an enumeration of all linear forms $\alpha_1z_1+\alpha_2z_2+\cdots+\alpha_kz_k$ over $\mathbb{F}_p$, where $(\alpha_1,\alpha_2,\ldots,\alpha_k)$ ranges over $\mathbb{F}_p^k$. This collection of linear forms has the following property: \begin{equation}\label{eq:prop} \begin{split} &\text{If $z_1=z_2=\cdots=z_k=0$, then all of $\ell_1(z_1,\ldots,z_k),\ldots,\ell_q(z_1,\ldots,z_k)$ are zero;}\\ &\text{otherwise, precisely $q/p=p^{k-1}$ of $\ell_1(z_1,\ldots,z_k),\ldots,\ell_q(z_1,\ldots,z_k)$ are zero.} \end{split} \end{equation} The first claim in~(\ref{eq:prop}) is trivial. To see the second, assume without loss of generality that $z_1\not=0$. Then, for any choice of $(\alpha_2,\ldots,\alpha_k)\in\mathbb{F}_p^{k-1}$, there is precisely one choice for $\alpha_1\in\mathbb{F}_p$ that makes $\alpha_1z_1+\cdots+\alpha_kz_k=0$. Now give an arbitrary direction to each edge $(u,v)\in E$ and consider the system $\Lambda$ of linear equations $$ \Big\{ \ell_j \big(\hat\sigma(x^v_1)-\hat\sigma(x^u_1),\,\hat\sigma(x^v_2)-\hat\sigma(x^u_2),\, \ldots,\,\hat\sigma(x^v_k)-\hat\sigma(x^u_k)\big)=0: j\in[q] \text{ and } (u,v) \in E\Big\}.
$$ (We view $\Lambda$ as a multiset, so the trivial equation $0=0$ arising from the linear form $\ell_j$ with $\alpha_1=\alpha_2 = \cdots = \alpha_k=0$ occurs $m$ times, a convention that makes the following calculation simpler.) Denote by $\mathop{\mathrm{sat}}(\hat\sigma)$ the number of satisfied equations in $\Lambda$. Then, from~(\ref{eq:prop}), $$\mathop{\mathrm{sat}}(\hat\sigma)=q\mathop{\mathrm{mono}}(\sigma)+\frac qp(m-\mathop{\mathrm{mono}}(\sigma)),$$ and hence $$ \mathop{\mathrm{mono}}(\sigma)=\frac p{(p-1)q}\mathop{\mathrm{sat}}(\hat\sigma)-\frac m{p-1}. $$ Noting that $1+\gamma=\lambda^{-q(p-1)/p}$, \begin{align} \sum_{\sigma:V\to[q]}(1+\gamma)^{\mathop{\mathrm{mono}}(\sigma)} &=\sum_{\hat\sigma: X \to\mathbb{F}_p}(1+\gamma)^{(p/(p-1)q)\mathop{\mathrm{sat}}(\hat\sigma)-m/(p-1)}\notag\\ &= \lambda^{qm/p} \sum_{\hat\sigma: X \to\mathbb{F}_p}\lambda^{-\mathop{\mathrm{sat}}(\hat\sigma)}\notag\\ &= \lambda^{-(1-1/p)qm} \sum_{\hat\sigma: X \to\mathbb{F}_p}\lambda^{\mathop{\mathrm{unsat}}(\hat\sigma)},\label{eq:unsat} \end{align} where $\mathop{\mathrm{unsat}}(\hat\sigma)=qm-\mathop{\mathrm{sat}}(\hat\sigma)$ is the number of unsatisfied equations in $\Lambda$. The system $\Lambda$ has $qm$ equations in $kn$ variables, so we may write it in matrix form $A\boldsymbol{\hat\sigma}=\mathbf0$, where $A$ is a $(qm\times kn)$-matrix, and $\boldsymbol{\hat\sigma}$ is a $kn$-vector over~$\mathbb{F}_p$. The columns of $A$ and the components of $\boldsymbol{\hat\sigma}$ are indexed by pairs $(i,v)\in[k]\times V$, and the $(i,v)$-component of $\boldsymbol{\hat\sigma}$ is $\hat\sigma(x_i^v)$. Enumerating the columns of $A$ as $\mathbf a_i^v\in\mathbb{F}_p^{qm}$ for $(i,v)\in[k]\times V$, we may re-express $\Lambda$ in the form $$ \sum_{i\in[k],v\in V}\hat\sigma(x_i^v)\,\mathbf a_i^v=\mathbf0, $$ where $\mathbf0$ is the length-$qm$ zero vector. 
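Property~(\ref{eq:prop}) and the resulting relation between $\mathop{\mathrm{sat}}(\hat\sigma)$ and $\mathop{\mathrm{mono}}(\sigma)$ can be confirmed by direct enumeration for small parameters. A Python sketch (taking $p=2$, $k=2$, a triangle for~$G$, and the bit-tuple bijection for~$\phi$; these choices are ours):

```python
from itertools import product

p, k = 2, 2
q = p ** k                                  # q = p^k = 4 here
forms = list(product(range(p), repeat=k))   # all q linear forms, as coefficient tuples
                                            # (includes the zero form, per the multiset convention)

def ell(alpha, z):
    # The linear form alpha_1 z_1 + ... + alpha_k z_k evaluated over F_p.
    return sum(a * zi for a, zi in zip(alpha, z)) % p

# Property (eq:prop): all q forms vanish at z = 0; exactly q/p vanish at z != 0.
for z in product(range(p), repeat=k):
    vanishing = sum(1 for alpha in forms if ell(alpha, z) == 0)
    assert vanishing == (q if not any(z) else q // p)

# The sat/mono relation, on a triangle with arbitrarily directed edges.
edges = [(0, 1), (1, 2), (0, 2)]
n, m = 3, len(edges)
phi = list(product(range(p), repeat=k))     # a fixed bijection [q] -> F_p^k

for sigma in product(range(q), repeat=n):
    hat = [phi[c] for c in sigma]           # \hat\sigma: one k-tuple over F_p per vertex
    sat = sum(1 for (u, v) in edges for alpha in forms
              if ell(alpha, [(hat[v][i] - hat[u][i]) % p for i in range(k)]) == 0)
    mono = sum(1 for (u, v) in edges if sigma[u] == sigma[v])
    assert sat == q * mono + (q // p) * (m - mono)
```

A monochromatic edge satisfies all $q$ equations in its group, while any other edge satisfies exactly $q/p$ of them, which is precisely the counting step above.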
Then $\mathop{\mathrm{unsat}}(\hat\sigma)$ is the Hamming weight of the length-$qm$ vector $\mathbf b(\hat\sigma)=\sum_{i,v}\hat\sigma(x_i^v)\,\mathbf a_i^v$. As $\hat\sigma$ ranges over all assignments $X\to\mathbb{F}_p$, so $\mathbf b(\hat\sigma)$ ranges over the vector space (or code) $$C=\Big\{\sum_{i,v}\hat\sigma(x_i^v)\,\mathbf a_i^v\Bigm| \hat\sigma: X\to\mathbb{F}_p\Big\} =\langle \mathbf a_i^v\mid i\in[k],v\in V\rangle$$ generated by the vectors $\{\mathbf a_i^v\}$. We will argue that the mapping sending $\hat\sigma$ to $\mathbf b(\hat\sigma)$ is $q$ to~1, from which it follows that $\sum_{\hat\sigma}\lambda^{\mathop{\mathrm{unsat}}(\hat\sigma)}$ is $q$ times the weight enumerator of the code~$C$. Then, from (\ref{eq:PottsDef}) and~(\ref{eq:unsat}), letting $M$ be any generating matrix for~$C$, $$ Z_\mathrm{Potts}(G;q,\gamma)=q\lambda^{-(1-1/p)qm} \,W_M(\lambda). $$ To see where the factor~$q$ comes from, consider the assignments~$\hat\sigma$ satisfying \begin{equation}\label{eq:qto1} \sum_{i\in[k],v\in V}\hat\sigma(x_i^v)\,\mathbf a_i^v=\mathbf b, \end{equation} for some $\mathbf b\in\mathbb{F}_p^{qm}$. For every $i\in [k]$ and every edge $(u,v)\in E$, there is an equation in $\Lambda$ specifying the value of $\hat\sigma(x_i^v)-\hat\sigma(x_i^u)$. Thus, since $G$ is connected, the vector $\mathbf b$ determines $\hat\sigma$ once the partial assignment $(\hat\sigma(x_1^r),\ldots,\hat\sigma(x_k^r))$ is specified for some distinguished vertex $r\in V$. Conversely, each of the $q$ partial assignments $(\hat\sigma(x_1^r),\ldots,\hat\sigma(x_k^r))$ extends to a total assignment satisfying~(\ref{eq:qto1}). \end{proof} \end{document}
\begin{document} \title{Reidemeister Moves and Groups} \begin{abstract}Recently, the author discovered an interesting class of knot-like objects called {\em free knots}. These purely combinatorial objects are equivalence classes of Gauss diagrams modulo Reidemeister moves (the same notion in the language of words was introduced by Turaev \cite{Turaev}, who thought all free knots to be trivial). As it turned out, these new objects are highly non-trivial, see \cite{Parity}, and even admit non-trivial cobordism classes \cite{Cobordisms}. An important issue is the existence of invariants where a {\em diagram evaluates to itself}, which makes such objects ``similar'' to free groups: an element has its minimal representative which ``lives inside'' any representative equivalent to it. In the present paper, we consider generalizations of free knots by means of (finitely presented) groups. These new objects have lots of non-trivial properties coming from both knot theory and group theory: functoriality, coverings, etc. This connection allows one not only to apply group theory to various problems in knot theory but also to apply Reidemeister moves to the study of (finitely presented) groups. Groups appear naturally in this setting when graphs are embedded in $2$-surfaces. {\bf Keywords:} Group, Graph, Reidemeister Move, Knot, Free Knot. \end{abstract} {\bf AMS MSC} 05C83, 57M25, 57M27 \section{Introduction. Basic Definitions and Notation} Knots and virtual knots are encoded by $4$-valent graphs $\Gamma$ with a {\em framing} (opposite edge structure at vertices) and some decorations at vertices (the structure of overpasses and underpasses) modulo {\em Reidemeister moves}. When skipping decorations of crossings, we get certain equivalence classes of virtual knots, called {\em free knots}.
If such a graph $\Gamma$ is treated as the image of an immersion of a curve $\gamma$ in an oriented $2$-surface $\Sigma$ in such a way that $\Sigma\backslash \gamma$ admits a checkerboard colouring, then there is a natural presentation of the quotient group $\pi_{1}(\Sigma)\slash \langle [\gamma]\rangle$ by the normal closure of the element corresponding to $\gamma$ where {\em vertices are generators} and {\em regions are relators}; for more details, see \cite{IMN,FedoseevManturov}. Thus, considering $\Gamma$ as a homotopy class of $\gamma$, vertices of $\Gamma$ get some natural interpretations in terms of homotopy classes. When performing Reidemeister moves on $\Gamma$, labels of vertices undergo natural transformations. When considering more abstract graphs unrelated to any surfaces, one gets the notion of a {\em free knot} (in the case of many components $\gamma_{i}$, one can talk about a {\em free link}). A {\em free knot} is a $1$-component free link. Free knots possess highly non-trivial invariants demonstrating the following principle: {\em if a free link diagram $K$ is complicated enough, then every diagram $K'$ equivalent to $K$ contains $K$ inside}. This result is achieved by using the parity bracket $[\cdot]$, an invariant of free links valued in {\em diagrams of free links} such that $[K]=K$ for diagrams which are complicated enough, see ahead. This allowed the author to get new approaches to various problems in virtual \cite{Projection,Crossing} and classical knot theory \cite{ChrismanManturov,KrasnovManturov}, and various questions of topology. Thus, free links are interesting objects by themselves; on the other hand, vertices of their diagrams (framed $4$-graphs) can be labeled by elements of certain groups. This leads to the notion of {\em group free knots} or {\em $G$-free knots} where $G$ is a (finitely presented) group.
When forgetting labelings of vertices of framed $4$-graphs, we get usual (classical, virtual, flat or free) knot theory; when embedding a framed $4$-graph into an oriented $2$-surface, we get a natural labeling. When applying Reidemeister moves to diagrams within a $2$-surface, we naturally get moves for labeled diagrams. Thus, considering various questions concerning equivalence classes of group free knots, we can study arbitrary groups. Indeed, every group $G$ leads to its own $G$-labeled free knot theory, and depending on which graphs turn out to be equivalent in this group, one can draw conclusions about the structure of the group. Let us now pass to formal definitions of the objects we are going to deal with. \begin{rk} Throughout the rest of the text, all groups are assumed to be finitely presented. \end{rk} We shall deal with framed $4$-graphs which naturally appear as shadows of knot diagrams. \begin{rk} We say {\em ``$4$-graph''} instead of {\em ``$4$-valent graph''}, because we also admit objects to have {\em circular components}, i.e., circles which are disjoint from the graph. \end{rk} \begin{dfn} A $4$-valent graph $\Gamma$ is {\em framed} if at every vertex $X$, the four half-edges incident to it are split into two sets; (half)edges from the same set are called {\em opposite}; (half)edges which are not opposite are called {\em adjacent}; such a choice is called a {\em framing} of $\Gamma$. \end{dfn} \begin{dfn} By a {\em rotating cycle} of a framed $4$-graph $\Gamma$ we mean either a circular component of $\Gamma$ or a sequence of vertices (possibly, non-distinct) $v_{i}$ and distinct edges $e_{i}, i=0,\dots, k-1,$ such that $e_{i}$ connects $v_{i}$ and $v_{i+1}$ and at $v_{i},$ the edges $e_{i}$ and $e_{i-1}$ are not opposite. Here indices are counted modulo $k$.
\end{dfn} \begin{dfn} When talking about the number of components, we separately count {\em circular components} and separately count {\em unicursal components}; by the latter, we mean an equivalence class of edges under the equivalence relation generated by declaring every two edges opposite at some vertex to be equivalent. \end{dfn} Thus, the diagram of the disjoint sum of the simplest $2$-vertex Hopf link and the unknot has three components: a circular one corresponding to the unknot and the two unicursal components of the Hopf link. \begin{dfn} A {\em source-sink structure} of a framed $4$-graph $\Gamma$ is an orientation of all edges of $\Gamma$ such that at every vertex $X$ of it some two opposite (half)edges are emanating and the other two are incoming, see Fig. \ref{SourceSink}. Circular components are just assumed to be oriented. \end{dfn} \begin{figure} \caption{A Source-Sink Structure at a Vertex; A Graph with a Source-Sink Structure} \label{SourceSink} \end{figure} \begin{dfn} Let $\Gamma$ be a framed $4$-graph, let $X$ be a vertex of $\Gamma$, let $a,b$ be one pair of opposite (half)edges, and let $c,d$ be the other pair of opposite (half)edges at $X$; by the {\em vertex removal} we mean the operation of deleting the vertex $X$ from $\Gamma$ and connecting $a$ to $b$ and connecting $c$ to $d$. \end{dfn} \begin{rk} Note that if a connected framed $4$-graph admits a source-sink structure then there exist exactly two such structures which differ by the overall orientation change. \end{rk} \begin{notat} We adopt the following convention. Whenever we draw an immersion of a framed $4$-graph in ${\mathbb R}^{2}$, we depict its vertices by solid circles; those points which are encircled are artifacts of projection caused by immersion. At every vertex, the framing is assumed to be {\em induced from the plane}: (half)edges which are locally opposite on the plane are opposite.
\end{notat} Note that if at a vertex $X$ of a framed $4$-graph a half-edge $a$ is opposite to a half-edge $c$ and a half-edge $b$ is opposite to a half-edge $d$, then, when drawing on the plane, the counterclockwise order of edges can be $a,b,c,d$ or $a,d,c,b$. This leads to two ``moves'' for planar diagrams of $4$-graphs which do not change the framed $4$-graph: the {\em detour move} and the {\em virtualization move}. The {\em detour move} removes one piece of an edge with all encircled crossings inside it and redraws it elsewhere with new encircled crossings with itself and other edges, see Fig.~\ref{detour}; the {\em virtualization move} changes the local counterclockwise order at a vertex without changing its framing; one can represent this move by flanking the classical crossing by two encircled intersection points of $a,b$ and $c,d$. \begin{figure} \caption{The detour move} \label{detour} \end{figure} \begin{rk} In the sequel, when drawing some transformation on the plane, we shall depict only the changing part of the move, assuming the remaining part to be fixed. \end{rk} \begin{ex} Consider the framed $4$-graph with one vertex $A$ and two loops $p$ and $q$ such that the two half-edges of $p$ are opposite to each other at $A$, and likewise for $q$. This graph admits no source-sink structure: whichever orientation of the edge $p$ we take, one of its two opposite half-edges at $A$ is emanating and the other is incoming, so the two emanating half-edges at $A$ can never form an opposite pair. \end{ex} \begin{dfn} We say that a framed $4$-graph is {\em good} if it admits a source-sink structure. We say that a good framed $4$-graph is {\em oriented} if a source-sink structure of it is selected. Likewise, we call a $G$-framed $4$-graph $(\Gamma, f)$ {\em good} (resp., {\em oriented}) if $\Gamma$ is good (resp., oriented). \end{dfn} \begin{rk} We shall refer to {\em oriented} $G$-framed $4$-graphs simply as $G$-graphs. Moreover, we shall often omit $f$ from the notation if the definition of $f$ is clear from the context.
\end{rk} \begin{dfn} Let $G$ be a group. An {\em oriented} $G$-free link is an equivalence class of $G$-graphs under the three Reidemeister moves shown in Fig.~\ref{rmoves}. \begin{figure} \caption{The Reidemeister moves for free links} \label{rmoves} \end{figure} \end{dfn} Here we assume that for the first Reidemeister move, the vertex taking part in the move is marked by the unit element of the group; for the second Reidemeister move, the two elements are inverse to each other; and for the third move we have some three elements $a,b,c$ in the left hand side such that $a\cdot b\cdot c=1$, and the inverse elements $a^{-1},b^{-1},c^{-1}$ in the right hand side, see Fig.~\ref{rmoves}. Note that for the first Reidemeister move the choice of the source-sink orientation does not matter; it only matters whether these orientations agree for the edges touching the boundary of the picture; for the third Reidemeister move, it is crucial to require that the cyclic order of the three vertices along the source-sink orientation is $a,b,c$ and not $a,c,b$. \begin{dfn} Let $G$ be a finitely presented group. By a {\em $G$-framed $4$-graph} we mean a pair $(\Gamma,f)$ where $\Gamma$ is a framed $4$-graph and $f$ is a map from the set of vertices of $\Gamma$ to $G$. \end{dfn} Let $\Gamma$ be a $G$-graph. Let $\gamma$ be a rotating cycle on $\Gamma$ with $k$ vertices (a vertex is counted twice if it is passed twice). Then $\gamma$ inherits a natural orientation from the source-sink structure of $\Gamma$. If we choose a starting point on $\gamma$, we can write down the product of the elements of $G$ corresponding to the vertices of $\gamma$: $[\gamma]=\gamma_{1}\cdot\dots\cdot\gamma_{k}\in G$. If no starting point is chosen, we obtain a well-defined conjugacy class in $G$. \begin{dfn} The {\em minimal crossing number} of a $G$-free link $L$ is the minimal number of vertices of oriented $G$-framed $4$-graphs representing $L$.
\end{dfn} This definition means that every group $G$ leads to a well-defined {\em free knot theory} corresponding to this group. Even for the trivial group, this theory is extremely interesting: it gives rise to the theory of {\em even} free knots (i.e., free knots whose diagrams admit a source-sink structure), which has non-trivial invariants where sufficiently complicated diagrams reproduce themselves \cite{Parity,Doklady}. Thus, one can use this theory in both directions: it is possible to study groups by means of free knots and their generalizations, and to study various knot theories by using group-valued labelings of their vertices. \begin{rk} A similar theory can be constructed for all free knots, not necessarily having a source-sink structure; however, in this way one should overcome some difficulties with orientations corresponding to relations for Reidemeister moves. We shall touch on this question in another paper. Note also that, in the case of the group ${\mathbb Z}_{2}$, the theory can be constructed even in the case of arbitrary graphs (see ahead). \end{rk} The paper is organized as follows. In the next section, we present the three main tools for constructing invariants of our objects: {\em the parity bracket}, {\em the parity delta}, and {\em the covering}. In the third section, we consider various examples where the group labeling arises in a topological situation, and discuss which relations on the group are imposed by this labeling in each concrete situation. We shall mostly concentrate on the case of groups coming from $2$-surfaces. We conclude the paper with a list of unsolved problems relating Reidemeister moves to group theory. \section{Basic Invariants} We are now ready to construct our first invariants of groups. Let $G$ be a finite group. \begin{dfn} Let ${\cal R}(G)$ be the ${\mathbb Z}_{2}$-module freely generated by all $G$-knots. \end{dfn} \begin{dfn} Let $G$ be a group. By $\Na(G,k)$ we mean the formal integral linear combination of all $G$-links $K$ having minimal crossing number $k$.
\end{dfn} From the definition, it follows that $\Na(G,k)$ is an invariant of the group $G$. \begin{rk} For the case of infinite $G$, the number of all $G$-links $K$ having minimal crossing number $k$ can be infinite. \end{rk} \begin{ex} Let $G$ be the trivial group. Then the $G$-link theory coincides with the theory of oriented free links admitting a source-sink structure. \end{ex} One of the main phenomena in virtual knot theory is the {\em local crossing information}. Having two (or many) different types of crossings which behave nicely under Reidemeister moves, we can enhance many invariants by localizing some information at crossings; moreover, this allows one to reduce the study of objects to the study of their diagrams. In the case of the group ${\mathbb Z}_{2}$, the main invariants are called {\em the parity bracket}, {\em the parity projection}, and {\em the parity covering}. We are going to describe the parity bracket and to generalize the projection to the case of arbitrary groups. The {\em group bracket}, unlike the parity bracket, does not reduce the study of knots to the study of their diagrams, but rather is a functorial mapping from the category of $G$-knots to $G$-knots. Actually, iterative use of projections, coverings, and other tools leads to enhancements of many invariants constructed combinatorially. In a similar way, one can enhance various combinatorial invariants in the case of an arbitrary group $G$. We shall touch on these problems in subsequent papers. Another important feature of virtual knot theory is that many invariants (like linking numbers or writhe numbers) become picture-valued: instead of some count of crossings (with signs), we may count the pictures arising at these crossings. \subsection{The Parity Bracket} In the present section, we will concentrate on the case of the group ${\mathbb Z}_{2}$.
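As a concrete combinatorial aside (the encoding is ours, not the paper's): a one-component diagram can be stored as a double-occurrence (Gauss) word listing its vertices along the unicursal component, and the Gaussian parity then assigns to each vertex the parity of the number of letters between its two occurrences. A minimal sketch:

```python
def gaussian_parity(word):
    """Gaussian parity of a one-component diagram encoded as a
    double-occurrence word: a vertex is odd iff an odd number of
    letters lies strictly between its two occurrences.  (The word
    has even length, so both arcs give the same parity.)"""
    parity = {}
    for v in set(word):
        i, j = (k for k, x in enumerate(word) if x == v)
        parity[v] = (j - i - 1) % 2
    return parity
```

For instance, every vertex of the word $1\,2\,3\,1\,2\,3$ is even, while both vertices of $1\,2\,1\,2$ are odd.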
We will define the {\em parity bracket}, which is well-defined for ${\mathbb Z}_{2}$-graphs and which realizes the principle ``if a knot (link) diagram is complicated enough then it reproduces itself''. In this case, {\em complicated enough} will mean {\em irreducible and odd}, see ahead; however, for other groups $G$ there are many other situations where the principle works. A ${\mathbb Z}_{2}$-labeling $f$ of a framed $4$-graph $\Gamma$ is usually referred to as a {\em parity}. This bracket was first defined in \cite{Parity} for the so-called {\em Gaussian parity}, which is trivial for all good framed $4$-graphs. Nevertheless, all the results stated below (as well as their proofs) remain true {\em whatever parity we take.} \begin{dfn} By a {\em smoothing} of a framed $4$-graph at a vertex we mean the result of deleting this vertex and repasting the half-edges into two edges in one of the two possible ways $\skcr\to \skcrv$, $\skcr\to \skcrh$. \end{dfn} \begin{rk} Note that a smoothing may result in new circular components of a free graph. Note also that a smoothing of a {\em good} framed $4$-graph is {\em good}. \end{rk} Denote by ${\mathfrak G}$ the linear space of ${\mathbb Z}_{2}$-linear combinations of all framed $4$-graphs (not necessarily good ones), considered modulo the second Reidemeister moves. By a {\em bigon} of a framed $4$-graph $\Gamma$ we mean two edges $e_{1}, e_{2}$ which connect some two vertices $v_{1}$ and $v_{2}$ and are non-opposite at both these vertices; bigons appear in the second Reidemeister moves. A framed $4$-graph is {\em irreducible} if it contains no bigons. \begin{rk} It can easily be seen that each equivalence class modulo the second Reidemeister moves is characterized by its minimal representative, i.e., a framed $4$-graph without bigons. Thus, we shall use the term ``graph'' when talking about an element of ${\mathfrak G}$, meaning the minimal representative of its equivalence class.
\end{rk} Let $K$ be a good framed ${\mathbb Z}_{2}$-labeled $4$-graph. The bracket invariant $[\cdot]$ (see \cite{Parity}) is given by $$ [K]=\sum_{s}K_{s}\in {\mathfrak G}, $$ where the sum is taken over all smoothings $s$ at even vertices which lead to one-component diagrams $K_{s}$. This sum is considered as an element of ${\mathfrak G}$. The following theorem is proved in \cite{Parity}. \begin{thm} $[\cdot]$ is an invariant of free knots; in other words, if two framed $4$-graphs $\Gamma_{1}$ and $\Gamma_{2}$ are equivalent then $[\Gamma_{1}]=[\Gamma_{2}]\in {\mathfrak G}$. \end{thm} Let us call a ${\mathbb Z}_{2}$-graph {\em odd} if all vertices of it are odd. It follows from the definition of the bracket that if $\Gamma$ is odd and irreducible then $[\Gamma]=\Gamma$. In particular, this means that {\em for every irreducible and odd ${\mathbb Z}_{2}$-graph $\Gamma$ and every graph $\Gamma'$ equivalent to it, $\Gamma$ can be obtained from $\Gamma'$ by means of smoothings at some vertices.} \subsection{Turaev's Delta and its Generalizations} Actually, many integer-valued invariants of topological objects can be calculated as certain (algebraic) sums taken over certain reference points (e.g., some intersection points in some configuration spaces, etc.). It turns out that in many situations arising in low-dimensional topology, these points can themselves be endowed with certain (topological or combinatorial) information which makes them responsible for the non-triviality of the object itself and for various properties of it. Moreover, in some cases these points can be associated with objects similar to the initial one. This switch from numbers to pictures changes the situation crucially. The (algebraic) sums, instead of being just integer-valued invariants of the initial object, transform into functorial mappings from objects to similar objects. For curves in $2$-surfaces, there are two such operations, the multiplication and the comultiplication.
The first operation is due to W.~Goldman \cite{Goldman}, and the second one is due to V.~G.~Turaev \cite{Turaev2}. The main idea is that we can get knots from $2$-component links and $2$-component links from knots by taking smoothings at some crossings. Then we take sums over such crossings and get an invariant map. We are mostly interested in the second operation and call it and its generalization ``Turaev's delta''. However, if we forget about $2$-surfaces and deal with curves in general position, we can keep track not of their homotopy classes, but rather of the free knots or links which appear at these crossings (see \cite{IMN} for free knots). Moreover, if we deal not with bare free knots but with $G$-knots, we can enhance the definition and take some group information into account. Let us be more specific. Let $\gamma$ be a one-component framed $4$-graph. At each crossing $c$, one can perform the following {\em oriented smoothing} $\skcrosso\to \skcrho$. Let us denote the result of such a smoothing at $c$ by $\gamma_{c}$. Then $\gamma_{c}$ is a framed $4$-graph with two unicursal components. Consider the linear space ${\cal M}$ generated by free two-component links. Now let ${\cal L}$ be the quotient of ${\cal M}$ by the subspace generated by links having one trivial component. Let $\Delta(\gamma)=\sum_{c} \gamma_{c}.$ \begin{thm} $\Delta$ is a well-defined map from the set of free knots to ${\cal L}$. \end{thm} The proof follows from a consideration of all Reidemeister moves. Looking carefully at the Reidemeister moves, we see that when taking the sum in ${\cal M}$, invariance under the first Reidemeister move requires two-component free links with one trivial component to vanish; this is exactly why we pass to the quotient ${\cal L}$. \begin{rk} For the case of $2$-surfaces, we have homotopy classes of curves which are represented by curves in general position; the latter have intersection points which correspond to vertices of the framed $4$-graph.
\end{rk} If we have not curves in $2$-surfaces but $G$-knots, it suffices to define $\Delta_{G}$ as $$\Delta_{G}=\sum_{c,\ l(c)\neq 1}\gamma_{c},$$ the sum, over all crossings $c$ with non-trivial label $l(c)$, of the resulting two-component free links. Note that the invariance under the third Reidemeister move does not allow one to consider the summands as $G$-links. However, one can split $\Delta_{G}$ with respect to the elements of $G$. For every element $g\neq 1$ of $G$ we define $\Delta_{g}=\Delta_{g^{-1}}$ to be $$\sum_{c,\ l(c)\in \{g,g^{-1}\}} \gamma_{c}.$$ Then all the $\Delta_{g}$ are invariants of the initial link, and $\Delta_{G}$ naturally splits into the sum over all classes $\{g,g^{-1}\}$ of non-trivial elements. As we shall see later, the structure of $G$-knots naturally appears for curves in $2$-surfaces. \begin{thm} For each $g$, the mapping $\Delta_{g}$ is invariant. Consequently, $\Delta_{G}$ is an invariant mapping. \end{thm} \subsection{The group bracket} As we have seen before, ${\mathbb Z}_{2}$-knots admit a natural bracket $[\cdot]$ which takes ${\mathbb Z}_{2}$-knots to ${\mathbb Z}_{2}$-linear combinations of equivalence classes of free knots modulo second Reidemeister moves. This means that we pass from knots to graphs, and this is possible because there exists exactly one non-trivial element in ${\mathbb Z}_{2}$. For an arbitrary group $G$, there is a similar bracket operation which takes $G$-knots to equivalence classes of diagrams modulo moves; however, the target space will consist of equivalence classes of $G$-graphs modulo not only second but also third Reidemeister moves. Thus, the group bracket, unlike the parity bracket, cannot be treated as a graph-valued invariant, but rather as a functorial map for $G$-knots, similar to Turaev's delta. Let us be more specific.
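Before the formal definition, a concrete aside in the same toy Gauss-word encoding (ours, not the paper's): both Turaev's delta above and the brackets are built from smoothings, and the oriented smoothing of a one-component diagram at a crossing $c$ simply splits the cyclic double-occurrence word $A\,c\,B\,c$ into the two unicursal components $A$ and $B$:

```python
def oriented_smoothing(word, c):
    """Oriented smoothing of a one-component diagram, encoded as a
    double-occurrence word, at the crossing c.  The circle splits
    into the two arcs between the occurrences of c, so gamma_c has
    exactly two unicursal components."""
    i = word.index(c)
    j = word.index(c, i + 1)
    return word[i + 1:j], word[j + 1:] + word[:i]
```

Crossings occurring in both resulting words become the mixed crossings of the resulting two-component link.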
Let ${\cal S}_{G}$ be the set of ${\mathbb Z}_{2}$-linear combinations of $G$-graphs with no vertex marked by the unit element of the group, modulo the second and the third Reidemeister moves. Let $K$ be a $G$-graph. Consider the set of all vertices of $K$ marked by the unit element of the group $G$. We define smoothings $s$ at unit vertices of $K$ as before. Now, we set $$ [K]_{G}=\sum_{s}K_{s}\in {\cal S}_{G}, $$ where the sum is taken over all smoothings $s$ at unit vertices which lead to one-component diagrams $K_{s}$. \begin{thm} The map $[\cdot]_{G}$ is an invariant of $G$-knots. \end{thm} \begin{proof} We have to check the invariance under the three Reidemeister moves. Let $K$ and $K'$ be two $G$-graphs which differ by a Reidemeister move at some vertex. We shall show that $[K]_{G}=[K']_{G}$. Assume first $K'$ is obtained from $K$ by the first Reidemeister move which adds one vertex $v$ with trivial label. Then the summands of $[K']_{G}$ where the smoothing at $v$ yields a small circular component do not count in the bracket, since they have at least two components. All other summands in $[K']_{G}$ are in one-to-one correspondence with the summands in $[K]_{G}$. Assume now $K'$ is obtained from $K$ by a second Reidemeister move where the two additional crossings in $K'$ are both non-trivial. Then all summands in $[K']_{G}$ are in one-to-one correspondence with those of $[K]_{G}$, and they differ exactly by the same second Reidemeister move. The same situation happens when $K'$ differs from $K$ by a third Reidemeister move with three non-trivially labeled vertices: we have a one-to-one correspondence of equal summands. The cases when $K'$ differs from $K$ by a second Reidemeister move with two additional trivial crossings, or by a third Reidemeister move with all three labels trivial, are actually the same as in the case of the usual bracket; for more detail, see \cite{Parity}. Finally, assume $K'$ differs from $K$ by a third Reidemeister move with labels $a,b,c$ such that $b=a^{-1}$ and $c=1$ (see Fig.~\ref{rmoves}).
In the right hand side, we will have labels $a^{-1}$, $b^{-1}=a$, and $1^{-1}=1$. Now we see that one smoothing at the vertex of $K$ labeled $1$ coincides identically with the similar smoothing of $K'$; the other smoothing of $K$ differs from the similar smoothing of $K'$ by two second Reidemeister moves. \end{proof} \subsection{Coverings and Projections} First, let $\alpha:G\to H$ be a homomorphism of groups. Then, for every $G$-graph $\Gamma$, we get an $H$-graph $\alpha_{*}(\Gamma)$ by changing each vertex label $g\in G$ to $\alpha(g)\in H$. From the definition of $G$-knots we get the following. \begin{thm} The operation $\alpha_{*}$ leads to a well-defined mapping from $G$-knots to $H$-knots, i.e., if $\Gamma_{1},\Gamma_{2}$ are two equivalent $G$-graphs then their images $\alpha_{*}(\Gamma_{1})$ and $\alpha_{*}(\Gamma_{2})$ are two equivalent $H$-graphs. \end{thm} Below, we briefly sketch the idea of how to construct a covering of $G$-links for a subgroup $G'$ of a group $G$; we define the corresponding projection in the general case; as for the covering, we define it only in the abelian case; in the general case it is defined quite analogously (see also Subsection \ref{subscurves} on curves in $2$-surfaces). Now, let $G'$ be a subgroup of a group $G$. Then for every $G$-graph $\Gamma$, we can define the $G'$-graph $\beta_{G'}(\Gamma)$ obtained from $\Gamma$ by removing all vertices of $\Gamma$ whose labels do not belong to $G'$; the remaining vertices keep their labels, considered now as elements of $G'$. \begin{thm} The operation $\beta_{G'}$ is a well-defined mapping from $G$-knots to $G'$-knots. \end{thm} Thus, we have constructed a map $\beta_{G'}$ which {\em removes} some vertices. In particular, this projection maps knots to knots. In the simplest case when $G={\mathbb Z}_{2}$, $G'=\{1\}$, the {\em parity projection} map can be refined to a parity covering map.
Namely, the covering map takes knots to $2$-component links and, more generally, $l$-component links to $2l$-component links with two sets of $l$ components each. Over each crossing of the initial knot $X$, there are two crossings $X_{1}$ and $X_{2}$ of the obtained link. The {\em odd} crossings (those marked by non-trivial elements of ${\mathbb Z}_{2}$, which disappear under the projection map) will belong to two different components (sets of components) \cite{NewParity} (``mixed crossings''). Thus, over each crossing marked by the trivial element there will be two {\em pure} crossings belonging to one component (in the case of links, to one set of components), while each crossing marked by the non-trivial element will be covered by two crossings, each being mixed (i.e., belonging to two different sets of components). Now, let $G$ be an abelian group. Let $G'$ be a subgroup\footnote{In the case of general $G$, one should consider normal subgroups only.} of $G$ and let $H$ be the quotient group, so that we have a short exact sequence $$G'\stackrel{\beta}{\to} G\stackrel{\alpha}{\to} H.$$ Let $(L,f)$ be a $G$-graph with $k$ components; we denote the $G$-link represented by $(L,f)$ by the same letter $L$. We shall construct {\em the covering} corresponding to the inclusion $\beta:G'\to G$, as follows. This covering will be a $G'$-link having $k\cdot |H|$ components; it will be denoted by $L^{\beta}$. With each vertex $X$ of $(L,f)$ with label $g\in G$, where $g=g'h$, $g'\in G'$, $h=\alpha(g)\in H$, we associate $|H|$ vertices $X^{t}$ indexed by the elements $t\in H$; all these vertices will have label $g'$. Now, with every edge $XY$ connecting some two vertices $X,Y$ of $L$ we associate $|H|$ edges, namely, $X^{t}$ is connected to $Y^{t\cdot h}$ for each $t\in H$, where $h=\alpha(g)$ and $g$ is the label of $X$. \begin{thm} The map $L\to L^{\beta}$ is a well-defined map from $G$-free links to $G'$-free links.
\end{thm} \begin{proof} This theorem follows from the consideration of Reidemeister moves: when looking at the product of labels along the loop, bigon, or triangle, we see that these products are all equal to the unit of the group $G$, which means that both ``the $G'$-factor'' and ``the $H$-factor'' of them are the units of $G'$ and $H$, respectively. This means that the corresponding edges of $L^{\beta}$ will close up to a loop, bigon, or triangle, respectively. Hence, the Reidemeister move applied to $L$ will result in the corresponding Reidemeister move applied to $L^{\beta}$. \end{proof} \section{Further Applications and Examples} \subsection{Minimal Crossing Number} Consider the example shown in Fig.~\ref{F1}. \begin{figure} \caption{The graph $F_{1}$} \label{F1} \end{figure} For every group $G$, the $G$-knot given by the framed graph $F_{1}$ is equivalent to the unknot if the label on the unique vertex is equal to the unit of the group $G$. \begin{thm} For every non-trivial group $G$, there exists a non-trivial labeling $f$ of the graph $F_{1}$ such that the $G$-knot $(F_{1},f)$ is not equivalent to the unknot. \end{thm} Indeed, $\Delta_{G}(F_{1})$ has one non-trivial summand. Now, let us consider the example shown in Fig.~\ref{F2}. \begin{figure} \caption{The graph $F_{2}$} \label{F2} \end{figure} Like $F_{1}$, the framed $4$-graph $F_{2}$ is trivial for the one-element group. Moreover, if we take ${\mathbb Z}_{2}$, then for every labeling of $F_{2}$ the resulting ${\mathbb Z}_{2}$-knot can be represented by a diagram with fewer vertices. Thus, $F_{2}$ is not minimal for ${\mathbb Z}_{2}$. \begin{thm} For every group $G$ containing at least three elements, there exists a labeling $f$ of the vertices of $F_{2}$ which makes this labeled graph minimal in its $G$-knot class. \end{thm} \subsection{Curves in $2$-surfaces} \label{subscurves} In the present section, we give the main example where the above theory comes from. Let $\Sigma$ be a closed oriented $2$-surface of genus $g$.
We say that an embedding $f$ of a framed $4$-graph $\Gamma$ in $\Sigma$ is {\em cellular} if $\Sigma\backslash \Gamma$ is a disjoint union of $2$-cells. We say that a cellular embedding $f:\Gamma \to \Sigma$ is {\em checkerboard} if one can colour all cells of $\Sigma$ with two colours in such a way that no two cells of the same colour are adjacent along an edge. The following fact can be proved by the reader as a simple exercise. \begin{exs} If a framed $4$-graph $\Gamma$ admits a checkerboard cellular embedding into some $\Sigma$ then $\Gamma$ admits a source-sink structure. Moreover, if $\Gamma$ admits a source-sink structure then every cellular embedding of $\Gamma$ in every $\Sigma$ (of any genus) is checkerboard. \end{exs} Now, let us fix a framed graph $\Gamma$ and a checkerboard cellular embedding $f$ of it into $\Sigma=S_{g}$. Consider the group $G(\Gamma,f)$ given by the following presentation $P$. The generators of $P$ are in one-to-one correspondence with the vertices of $\Gamma$. The relations of $P$ are in one-to-one correspondence with the cells of $\Sigma\backslash f(\Gamma)$. More precisely, with each cell $C$, one can associate its boundary, which is a {\em rotating cycle} on $\Gamma$. Since $\Gamma$ admits a source-sink structure, this cycle is naturally oriented, and hence all vertices of this cycle are cyclically ordered. This generates a cyclic word which we take to be the relator corresponding to the cell. Now, assume $\Gamma$ has one unicursal component. Then $f(\Gamma)$ can be thought of as the image of an immersed curve; we shall call this image $\gamma$. In \cite{Nikonov}, I.~M.~Nikonov proved the following. \begin{thm} If $\Gamma$ has one unicursal component, then the group $G(\Gamma,f)$ is isomorphic to the quotient of the fundamental group $\pi_{1}(\Sigma)$ by the normal closure of an element representing the free homotopy class $[\gamma]$. \end{thm} This leads us to the natural source where the labels for the vertices of $\Gamma$ may come from.
With every such embedding $f$ of $\Gamma$ into $\Sigma$, one gets a natural labeling of the vertices by elements of $G(\Gamma,f)$. Thus, curves in $S_{g}$ satisfying the checkerboard colorability condition naturally acquire a group labeling, which allows one to define the group bracket and the group delta for these knots. \section{Unsolved Problems} The bracket and the Turaev delta for curves in $2$-surfaces can be treated as follows. We study curves as conjugacy classes $[\gamma]$ of elements $\gamma$ of the fundamental group of an oriented $2$-surface $S_{g}$. Every summand of $\Delta$ transforms one such conjugacy class $[\gamma]$ into two conjugacy classes $[\gamma_{1}],[\gamma_{2}]$ such that $\gamma_{1}\cdot \gamma_{2}=\gamma$. Here $\gamma_{1}\cup \gamma_{2}$ is the result of smoothing $\gamma$ at some crossing $c$. For which groups $G$ can one define a $\Delta$-like comultiplication map ${\mathbb Z}_{2}G\to {\mathbb Z}_{2}G\otimes {\mathbb Z}_{2}G$, where the sum is taken along ``reference points''? How can one define these ``reference points'' in a way similar to crossings of a diagram? Possibly, if the comultiplication operation is to be defined in the language of words for some presentation of $G$, it should be defined in terms of some letters selected in a specific way. A similar question can be asked about the group bracket and the parity bracket: we expect a well-defined map from $G$ to the direct sum of various tensor products $G\otimes G\otimes \dots \otimes G$. Can we define such a bracket for elements $g\in G$ realizing the principle $[g]=g$ and solving the word problem or the conjugacy problem for some groups? \end{document}
\begin{document} \title{A Unified Framework for Task-Driven Data Quality Management} \begin{abstract} High-quality data is critical to train performant \textit{Machine Learning}~(ML) models, highlighting the importance of \textit{Data Quality Management}~(DQM). Existing DQM schemes often cannot satisfactorily improve ML performance because, by design, they are oblivious to downstream ML tasks. Moreover, they cannot handle various data quality issues (especially those caused by adversarial attacks) and apply only to certain types of ML models. Recently, data valuation approaches (e.g., based on the Shapley value) have been leveraged to perform DQM; yet, empirical studies have observed that their performance varies considerably based on the underlying data and training process. In this paper, we propose a \emph{task-driven, multi-purpose, model-agnostic} DQM framework, \textsc{DataSifter}\xspace, which is optimized towards a given downstream ML task, capable of effectively removing data points with various defects, and applicable to diverse models. Specifically, we formulate DQM as an optimization problem and devise a scalable algorithm to solve it. Furthermore, we propose a theoretical framework for comparing the worst-case performance of different DQM strategies. Remarkably, our results show that the popular strategy based on the Shapley value may end up choosing the worst data subset in certain practical scenarios. Our evaluation shows that \textsc{DataSifter}\xspace matches and most often significantly improves on the state-of-the-art performance over a wide range of DQM tasks, including backdoor, poisoned, and noisy/mislabeled data detection, data summarization, and data debiasing. \end{abstract} \section{Introduction} \label{sec:intro} High-quality data is a critical enabler for high-quality \emph{Machine Learning}~(ML) applications.
However, due to inevitable errors, bias, and adversarial attacks occurring during the data generation and collection processes, real-world datasets often suffer from various defects that can adversely impact the learned ML models. Hence, \textit{Data Quality Management}~(DQM) has become an essential prerequisite step for building ML applications. DQM has been extensively studied by the database community in the past. Early works~\cite{neutatz2021cleaning,chu2015katara,schelter2019unit} treat DQM as a standalone exercise, without considering its connection with downstream ML applications. Studies have shown that such ML-oblivious DQM may not necessarily improve model performance~\cite{neutatz2021cleaning}; worse yet, it may even degrade model performance~\cite{amershi2019software}. More recent work started to tailor DQM strategies to specific ML applications~\cite{krishnan2016activeclean,karlavs2020nearest,krishnan2019alphaclean}. Still, they apply only to simple models, such as convex models and nearest neighbors, and to specific data quality issues, such as outlier detection. In parallel with these research efforts, the ML community has intensively investigated techniques addressing a broad variety of data quality issues, such as adversarial~\cite{wang2019neural,chen2019deepinspect} and mislabeled~\cite{koh2017understanding,pruthi2020estimating} data detection, anomaly detection~\cite{du2019robust}, and dataset debiasing~\cite{zemel2013learning,madras2018learning,wang2021learning}. However, a DQM scheme that can comprehensively remedy various types of data defects is still lacking. Our paper aims to address the limitations of prior DQM schemes by developing a unified DQM framework with the following properties: (1) \emph{multi-purpose} -- to handle various data quality issues; (2) \emph{task-driven} -- to effectively utilize the information from downstream ML tasks; and (3) \emph{model-agnostic} -- to incorporate different ML models.
The line of existing work closest to achieving these goals is what we will later refer to as data valuation-based approaches. These approaches first adopt some importance quantification metric, e.g., influence functions~\cite{koh2017understanding}, Shapley values~\cite{ghorbani2019data,jia2019towards}, and least cores~\cite{yan2020ifyoulike}, to score each training point according to its contribution to the training process, and then decide which data to retain or remove based on the valuation rankings. While some existing data valuation-based approaches satisfy the three desiderata, empirical studies have shown that their performance varies considerably based on the underlying data and the learning process. Moreover, there is no clear understanding of such performance variation or formal characterization of the worst-case performance. In this paper, we start by formulating various DQM tasks as optimal data selection problems. The goal is to find a subset of data points that achieves the highest performance for a given ML task. We propose \textsc{DataSifter}\xspace, a multi-purpose, task-driven, model-agnostic DQM framework that first learns a data utility model from a small validation set, then selects a subset of data points by optimizing the acquired utility model. With the acquired data utility model, \textsc{DataSifter}\xspace can go beyond the functionalities offered by existing DQM schemes and further estimate the utility of selected data points. Such information could help data analysts decide how many data points to choose or whether there is a need to acquire new data. Furthermore, we present a novel theoretical framework based on domination analysis which allows one to rigorously analyze the worst-case performance of data valuation-based DQM approaches and compare them with our approach. Specifically, we show that data valuation-based DQM approaches have unsatisfying worst-case performance guarantees.
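For concreteness, the permutation-based Monte Carlo estimation underlying Shapley data values (as in Perm-/TMC-Shapley, here without the truncation heuristic) can be sketched as follows; the function names and the utility-as-callable interface are our own illustrative assumptions, not the cited implementations:

```python
import random

def monte_carlo_shapley(n, utility, num_perms=100, seed=0):
    """Estimate the Shapley value of each of n data points: average,
    over random permutations, of the marginal gain in task utility
    when the point joins the set of points preceding it."""
    rng = random.Random(seed)
    phi = [0.0] * n
    for _ in range(num_perms):
        perm = list(range(n))
        rng.shuffle(perm)
        subset = set()
        prev = utility(subset)
        for i in perm:
            subset.add(i)
            cur = utility(subset)
            phi[i] += cur - prev
            prev = cur
    return [p / num_perms for p in phi]
```

Points are then ranked by the estimated values and the lowest-ranked ones are removed; the domination analysis in this paper concerns exactly the worst-case quality of subsets chosen this way.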
In particular, the popular Shapley value-based approach will select the worst data in some commonly occurring scenarios. We conduct a thorough empirical study on a range of ML tasks, including adversarially perturbed data detection, noisy label/feature detection, data summarization, and data debiasing. Our experiments demonstrate that \textsc{DataSifter}\xspace matches, and most often significantly improves upon, the state-of-the-art performance of data valuation-based approaches across various tasks. \section{Related Work} The major differences between this paper and related work are summarized in Table~\ref{table:summary-of-properties}. \begin{wraptable}{R}{0.59\linewidth} \centering \scriptsize \begin{tabular}{@{}l|p{0.85cm}<{\centering}p{0.85cm}<{\centering}p{0.85cm}<{\centering}p{0.85cm}<{\centering}@{}} \multirow{2}{*}{Method Type} & \textbf{Multi-} & \textbf{Task-} & \textbf{Model-} & \textbf{Est.} \\ & \textbf{purpose} & \textbf{Driven} & \textbf{Agnostic} & \textbf{Utility}\\ \hline \textbf{Traditional} & $\times$ & $\times$ & $\times$ & $\times$ \\ \textbf{Data Cleaning} & $\times$ & $\circ$ & $\circ$ & $\times$ \\ \textbf{Perm-Shapley \cite{maleki2015addressing}} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ \\ \textbf{TMC-Shapley \cite{ghorbani2019data}} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ \\ \textbf{G-Shapley \cite{ghorbani2019data}} & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ \\ \textbf{KNN-Shapley \cite{jia2019efficient}} & $\times$ & $\times$ & $\checkmark$ & $\times$ \\ \textbf{Least Core \cite{yan2020ifyoulike}} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ \\ \textbf{Leave-one-out \cite{koh2017understanding}} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ \\ \textbf{Infl. Func.
\cite{koh2017understanding}} & $\times$ & $\checkmark$ & $\times$ & $\times$ \\ \textbf{TracIn \cite{pruthi2020estimating}} & $\times$ & $\checkmark$ & $\times$ & $\times$ \\ \textbf{\textsc{DataSifter}\xspace} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ \Xhline{1pt} \end{tabular} \caption{Summary of the differences between previous works and our method (\textsc{DataSifter}\xspace). $\circ$ means that only some of the techniques of that type satisfy the property. } \label{table:summary-of-properties} \end{wraptable} \textbf{Data Cleaning.} Classical data cleaning methods are based on simple attributes of a dataset such as completeness~\cite{schelter2019unit}, consistency~\cite{kandel2011wrangler}, and timeliness~\cite{chu2015katara}; however, these attributes do not necessarily correlate with the actual utility of the data for training machine learning models. Recent works leverage information about the downstream ML task to guide the cleaning process. ActiveClean \cite{krishnan2016activeclean} explored task-driven data cleaning for convex models by selecting data points for human screening. BoostClean \cite{krishnan2017boostclean} sought to automate manual cleaning by determining a predefined cleaning strategy from a library using boosting. AlphaClean \cite{krishnan2019alphaclean} also aimed to automate the cleaning process but relied on parallelized tree search. However, these frameworks' efficacy and generalizability are limited by the cleaning library. Furthermore, the recursive nature of the automatic selection process constrains the use of these methods to small models and datasets. CPClean \cite{karlavs2020nearest} proposed a different strategy for nearest neighbor models based on the concept of Certain Prediction. Still, CPClean is designed explicitly for SQL datasets with greedy repairing, making it difficult to generalize to larger-scale settings such as image datasets.
In summary, the state-of-the-art data cleaning methods are only applicable to certain classes of ML models and datasets. Besides, adapting these cleaning approaches to other domains requires manually re-collecting the cleaning library or human intervention, which can be impractical in many cases. \textbf{Data Importance Quantification.} One simple idea to quantify data importance is to use the leave-one-out error. \cite{koh2017understanding} provides an efficient algorithm to approximate the leave-one-out error for each training point. Recent works leverage credit allocation schemes originating from cooperative game theory to quantify data importance. In particular, the Shapley value has been widely used \citep{ghorbani2019data, jia2019towards, jia2019efficient, jia2019scalability, wang2020principled}, as it uniquely satisfies a set of desirable axiomatic properties. More recently, \cite{yan2020ifyoulike} suggests that the Least core is also a viable alternative to the Shapley value for measuring data importance. However, computing the exact Shapley and Least core values is generally NP-hard. Several approximation heuristics, such as TMC-Shapley \citep{ghorbani2019data}, G-Shapley \citep{ghorbani2019data}, and KNN-Shapley \citep{jia2019efficient}, have been proposed for the Shapley value. Despite their computational advantage, they are biased in nature. On the other hand, unbiased estimators such as Permutation Sampling \citep{maleki2015addressing} and Group Testing \citep{jia2019towards} still require retraining the model many times for any decent approximation accuracy. TracIn \cite{pruthi2020estimating} estimates importance by tracing the change in test loss caused by a training example during the training process. The representer point method~\cite{yeh2018representer} captures the importance of a training point by decomposing the pre-activation prediction of a neural network into a linear combination of activations of training points.
Many of the aforementioned works can only be applied to differentiable models. \section{Formalism and Algorithmic Framework} \label{sec:method} In general, DQM aims to find a subset of data points with the highest utility. We use a data utility function to characterize the mapping from a set of data points to its utility. Formally, given a dataset $\mathcal{D}$ of size $n$, a \emph{data utility function} $U: 2^\mathcal{D} \rightarrow \mathbb{R}$ maps a set of data points $S \subseteq \mathcal{D}$ to a real number indicating the performance of the ML model trained on the set, such as test accuracy or fairness. With the notion of the data utility function, one can abstract DQM tasks as a \emph{data selection problem}: $\max_{|S|=k}U(S)$, where $k$ indicates the selection budget with $0<k<n$, which can be predetermined (e.g., based on prior knowledge about potential data defects or computational requirements). Moreover, DQM tasks without a specific selection budget can be reduced to a sequence of data selection problems with different values of $k$. With the abstraction above, one straightforward way to optimally select data is to exhaustively evaluate $U(S)$ for all possible size-$k$ subsets $S \subseteq \mathcal{D}$ and choose the one that achieves the highest utility. Of course, this naive algorithm requires prohibitively large computational resources because the number of utility evaluations, ${n \choose k}$, grows exponentially; worse yet, each evaluation of the data utility function requires retraining the model. Fortunately, recent work shows that many common data utility functions can be effectively learned from a relatively small number of samples \citep{wang2021one} because they are ``approximately'' submodular~\cite{balcan2011learning}.
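As a concrete reference point, the exhaustive baseline above can be sketched in a few lines; the coverage-style utility below is purely illustrative (in the paper, $U$ would be the validation performance of a model retrained on each subset):

```python
from itertools import combinations

def brute_force_select(points, utility, k):
    """Exhaustively evaluate U(S) over all C(n, k) size-k subsets
    and return the best one -- feasible only for tiny n."""
    best_set, best_u = None, float("-inf")
    for subset in combinations(points, k):
        u = utility(set(subset))
        if u > best_u:
            best_set, best_u = set(subset), u
    return best_set, best_u

# Illustrative utility: number of distinct labels covered by the subset
# (a stand-in for validation performance of a retrained model).
data = [(0, "a"), (1, "a"), (2, "b"), (3, "c")]
toy_utility = lambda s: len({label for _, label in s})

best, u = brute_force_select(data, toy_utility, 2)  # u == 2
```

Even for this four-point toy, the search visits all ${4 \choose 2} = 6$ subsets; replacing `toy_utility` with model retraining makes the exhaustive approach intractable at realistic scales.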
The ``approximate submodularity'' property allows efficient maximization of data utility functions through simple greedy algorithms~\citep{minoux1978accelerated, horel2016maximization, hassidim2018optimization, das2018approximate, chierichetti2020on}. Hence, combining data utility function learning and greedy search enables an efficient algorithm for data selection problems. Specifically, we extend and generalize the data utility learning and optimization technique originally proposed in \citep{wang2021one} for active learning to DQM. The proposed DQM framework, termed \textsc{DataSifter}\xspace, proceeds in two phases: a \emph{learning} phase and a \emph{selection} phase. \begin{figure} \caption{Overview of the learning phase, which consists of two functional steps: the utility sampling step and the utility model training step. During the sampling step, we randomly select subsets from the raw data and assign a utility score to each set based on the downstream ML model's performance evaluated over a validation set. Then, we train the utility model on those scored pairs so that it learns to predict a performance score for a collection of data.} \label{fig:flow-chart} \end{figure} \textbf{Learning Phase. } Figure \ref{fig:flow-chart} depicts the learning phase of \textsc{DataSifter}\xspace, which consists of a utility sampling step and a utility model training step. In particular, we assume access to a small validation set representative of potential test samples. Thus, the utility of any given subset can be estimated by training the model on this subset and then evaluating its performance over the validation set. In the utility sampling step, we randomly sample subsets of the training set, estimate the utility of each sampled subset, and label each with its utility score. We will refer to the scored subsets as \emph{utility samples} hereinafter.
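The utility sampling step can be sketched as follows; the subset-size distribution and the stand-in utility function are illustrative assumptions, since in practice `utility_fn` would retrain the (proxy) model on the subset and score it on the validation set:

```python
import random

def sample_utility_pairs(train_indices, utility_fn, n_samples, seed=0):
    """Utility sampling step (a sketch): draw random subsets of the training
    set and label each with its utility score.  The (subset, score) pairs
    are the `utility samples' used to fit the utility model."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_samples):
        size = rng.randint(1, len(train_indices))
        subset = frozenset(rng.sample(train_indices, size))
        # utility_fn: train the (proxy) model on `subset` and evaluate it
        # on the validation set; stubbed out below.
        pairs.append((subset, utility_fn(subset)))
    return pairs

# Stand-in utility: fraction of the training set covered by the subset.
pairs = sample_utility_pairs(list(range(20)), lambda s: len(s) / 20,
                             n_samples=100)
```

The resulting list of `(subset, score)` pairs is exactly the supervised training data consumed by the utility model in the next step.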
To accelerate this step for large models such as deep nets, a small proxy model (such as logistic regression) can be used to approximate the utility, since data utilities evaluated on deep nets and on logistic regression are positively correlated, as shown in~\cite{wang2021learning}. In the utility model training step, we learn a parametric model for the data utility function using the utility samples; in particular, our experiments adopt DeepSets~\citep{zaheer2017deep} as the utility model. For a large dataset, the utility sampling step can be conducted on a small portion of the dataset. Our empirical studies show that the learned utility model can still extrapolate the utility for the unseen part of the dataset. \begin{wrapfigure}{R}{0.46\linewidth} \centering \includegraphics[width=0.65\linewidth]{images/synthetic.png} \caption{Predicted vs.\ true utility for unseen subsets with a logistic regression classifier trained on a synthetic dataset. Details are presented in the supplementary materials.} \label{fig:utility-accuracy} \end{wrapfigure} \textbf{Selection Phase. } We select high-quality data by optimizing the data utility model learned in the previous phase; specifically, we adopt a linear-time stochastic greedy algorithm~\cite{mirzasoleiman2015lazier} to perform the optimization. Clearly, \textsc{DataSifter}\xspace yields an optimal solution to the data selection problem if the validation data matches the test data exactly and there are no computational constraints. In practice, despite the limited validation set and limited computational resources, \textsc{DataSifter}\xspace is still very effective in selecting high-quality data or filtering bad data, as we will show in the evaluation section.
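The selection phase's stochastic greedy optimization~\cite{mirzasoleiman2015lazier} can be sketched as below; here `utility` stands in for the learned utility model, and the candidate-sample size follows the $(n/k)\log(1/\epsilon)$ rule from the original algorithm:

```python
import math
import random

def stochastic_greedy(items, utility, k, eps=0.1, seed=0):
    """Stochastic (`lazier than lazy') greedy maximization: at each of the
    k steps, marginal gains are evaluated only on a random candidate sample
    of size (n/k) * log(1/eps) instead of on all remaining items."""
    rng = random.Random(seed)
    selected, remaining = [], list(items)
    sample_size = max(1, int(len(remaining) / k * math.log(1 / eps)))
    for _ in range(k):
        candidates = rng.sample(remaining, min(sample_size, len(remaining)))
        base = utility(selected)
        # Pick the candidate with the largest (predicted) marginal gain.
        best = max(candidates, key=lambda x: utility(selected + [x]) - base)
        selected.append(best)
        remaining.remove(best)
    return selected

# Modular (hence submodular) stand-in for the learned utility model.
weights = {i: float(i) for i in range(10)}
picked = stochastic_greedy(range(10), lambda s: sum(weights[i] for i in s),
                           k=3)
```

For submodular objectives this sampling scheme retains a $(1 - 1/e - \epsilon)$ approximation guarantee in expectation while issuing only a linear number of utility-model queries.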
In addition, with the learned data utility model, \textsc{DataSifter}\xspace can provide an estimate of the utility of the selected dataset (see the example in Figure~\ref{fig:utility-accuracy}), which is useful for data analysts deciding how many data points to select. \section{Worst-Case Analysis} \label{sec:theory} This section presents a theoretical framework for comparing the worst-case performance of \textsc{DataSifter}\xspace with that of data valuation-based DQM schemes, such as \emph{leave-one-out}~(LOO), Shapley value, and Least core\footnote{The Least core may not be unique. In this paper, when we talk about the least core, we always refer to the least core vector with the smallest $\ell_2$ norm, following the tie-breaking rule in the original literature \citep{yan2020ifyoulike}.}; throughout, we assume no computational constraints. We start by abstracting a general notion from the data valuation-based DQM schemes in the literature. We call an algorithm that returns $S \subseteq \mathcal{D}$ of size $k$ a \emph{heuristic} for a (size-$k$) data selection problem on $\mathcal{D}$. The typical pattern of data valuation-based heuristics is that they first rank the data points according to their corresponding data importance metric and then prioritize the points with the highest importance scores. We define the heuristics matching this selection pattern as \emph{linear heuristics}. \begin{definition}[Linear heuristic] \label{def:linearheuristic} We say $\mathcal{M}$ is a linear heuristic for the data selection problem if for every instance $\mathcal{I} = (\mathcal{D}, U)$, $\mathcal{M}$ works as follows: \begin{enumerate} \item Assign a score vector $v = (v_1, \ldots, v_n)$, with one score $v_i$ per data point $i \in \mathcal{D}$. \item Sort $\mathcal{D}$ in descending order according to $v$ to obtain the sorted data sequence $\mathcal{D}'$, applying a fixed rule to break ties.
\item For any query of selecting $k$ high-quality data points, return the first $k$ data points in $\mathcal{D}'$. \end{enumerate} \end{definition} Our theoretical framework for studying the worst-case performance of data selection heuristics extends the domination analysis initially proposed in \cite{glover1997travelling}. Our worst-case performance metric is the \emph{domination number}, which measures how many subsets achieve lower utility than the selected set in the worst case. \begin{definition}[Domination number] The domination number of a heuristic $\mathcal{M}$ for the data selection problem is the maximum integer $d(n, k)$ s.t., for every problem instance $\mathcal{I} = (\mathcal{D}, U)$ on a dataset $\mathcal{D}$ of size $n$ with utility function $U$, $\mathcal{M}(\mathcal{I}, k)$ produces a size-$k$ subset $S \subseteq \mathcal{D}$ whose utility $U(S)$ is no worse than that of at least $d(n, k)$ size-$k$ subsets. \end{definition} The domination number is well defined for every data selection heuristic. A heuristic with a higher domination number may be preferable to one with a smaller domination number, owing to its better worst-case guarantee. The best heuristic for data selection has domination number $d(n, k) = {n \choose k}$ for every $k \le n$, meaning that it selects the size-$k$ data subset with the highest utility for every possible data utility function. Clearly, assuming no computational constraints, \textsc{DataSifter}\xspace is among the best heuristics, achieving the largest possible domination number. In contrast, the following result shows that no linear heuristic is best whenever $n \ge 3$. We defer all proofs to the Appendix. \begin{theorem} \label{thm:nooptimal} For $n \ge 3$, there exists no linear heuristic $\mathcal{M}$ s.t. $d(n, k) = {n \choose k}$ for every $k \in \{1, \ldots, n\}$.
\end{theorem} Furthermore, we can tighten the upper bound on the domination number for data valuation-based heuristics by noticing another common property: two data points receive the same importance score if they contribute equally to all possible subsets of the training data. This property is often referred to as the \emph{symmetry axiom} in the literature. \begin{definition}[Symmetry axiom] \label{def:symmetry} We say a linear heuristic $\mathcal{M}$ satisfies the symmetry axiom if its scoring mechanism satisfies: $ [(\forall S \subseteq \mathcal{D} \setminus \{i, j\})\ U(S \cup\{i\})=U(S \cup\{j\})] \Longrightarrow v_{i}=v_{j} $. \end{definition} \begin{wrapfigure}{R}{0.33\linewidth} \centering \includegraphics[width=0.29\textwidth]{images/cases/theory.png} \caption{Results of data selection with different heuristics on a tiny dataset with natural redundancy. The dataset and implementation are detailed in the Appendix.} \label{fig:theory} \end{wrapfigure} The symmetry axiom may be desirable in application scenarios requiring fairness, e.g., when data importance scores are used to assign monetary rewards for data sharing or responsibility for ML decisions. However, for data selection, the symmetry axiom may be undesirable because simply gathering high-value data points may lead to a set of redundant points. Based on this intuition, the following theorem gives an upper bound on the domination number for non-trivial linear heuristics with the symmetry property. \begin{theorem} \label{thm:symmetrybound} If a linear heuristic $\mathcal{M}$ assigns different scores to different data points and satisfies the symmetry axiom, then the domination number of $\mathcal{M}$ satisfies $d(n, k) \le c{\ceil{n/c} \choose k}$ where $c = \left \lfloor{\frac{n}{k}}\right \rfloor$.
\end{theorem} To better illustrate the issue raised by the symmetry axiom, we evaluate the LOO, Shapley, and Least core heuristics on a synthetic dataset with 15 training data points (so that we can compute the exact Shapley and Least core values, as well as obtain the optimal solution to the data selection problem). The utility metric is the test accuracy of a \emph{Support Vector Machine}~(SVM) classifier trained on the dataset. We simulate the natural redundancy in a dataset by replicating 5 data points three times each and adding slight Gaussian noise to differentiate the copies. Figure \ref{fig:theory} shows that with small selection budgets, the subsets selected by all the heuristics have low utility, as the heuristics fail to promote diversity during selection. Notably, we show that the Shapley value heuristic selects the data subset with the lowest utility for certain data utility functions, including submodular ones. The Shapley value of a training point is calculated as a weighted average of the point's contribution to all possible subsets of the training set, and the weights are independent of the selection budget $k$. Moreover, the Shapley value weights a point's marginal contributions on small subsets more heavily. Thus, data points that contribute substantially on tiny subsets may be assigned a higher Shapley value, even if they make little or negative contribution in every subset of the desired selection size $k$. \begin{theorem} \label{thm:shapleybound} For any $n \ge 4$ and $k \in \{1, \ldots, n\}$, the domination number of the Shapley value is $d(n, k) = 1$, even if the utility function $U$ is submodular.
\end{theorem} \begin{wraptable}{R}{0.5\linewidth} \scriptsize \centering \begin{tabular}{@{}p{0.05cm}l|l|l@{}} \Xhline{1pt} \textbf{} & \multirow{2}{*}{\textbf{Task}} & \multicolumn{2}{c}{\textbf{Datasets}}\\ \cline{3-4} & & Main Text & Appendix\\ \Xhline{1pt} \textbf{I.} & Backdoor Detection & CIFAR-10 \citep{krizhevsky2009learning}& MNIST \citep{lecun1998mnist} \\ \hline \textbf{II.} & Poisoned Data Detection & CIFAR-10 \citep{krizhevsky2009learning}& Dog vs. Cat \citep{dogcat} \\ \hline \textbf{III.} & Noisy Feature Detection & CIFAR-10 \citep{krizhevsky2009learning} & MNIST \citep{lecun1998mnist} \\ \hline \textbf{IV.} & Mislabeling Detection & SPAM \citep{shams2013classifying}& CIFAR-10 \citep{krizhevsky2009learning} \\ \Xhline{1pt} \textbf{V.}& Data Summarization & PubFig83 \citep{pinto2011scaling}& COVID-CT \citep{zhao2020COVID-CT-Dataset} \\ \hline \textbf{VI.} & Data Debiasing & Adult \citep{uci-dataset}& COMPAS \citep{yoon2018machine} \\ \Xhline{1pt} \end{tabular} \caption{Summary of DQM tasks and datasets. We discuss one dataset for each task and defer the results on the other dataset to the Appendix. } \label{tb:summary-of-experiments} \end{wraptable} \section{Evaluation} \label{sec:eval} We evaluate \textsc{DataSifter}\xspace on six DQM tasks, as listed in Table \ref{tb:summary-of-experiments}. For each DQM task, we consider various benchmark models and datasets used in past literature. Since we observe similar results across datasets, this section describes the results on \emph{one} representative dataset for each task and defers the other dataset to the Appendix. Finally, we discuss the scalability of \textsc{DataSifter}\xspace on larger datasets. The implementation details and additional results are presented in the Appendix.
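On a dataset as small as the 15-point synthetic example of Section~\ref{sec:theory}, exact Shapley values can be computed by direct enumeration. The sketch below uses an illustrative coverage utility with duplicated points (rather than the paper's SVM accuracy) and shows the symmetry axiom at work: duplicated points receive identical scores, so a Shapley ranking cannot separate them.

```python
from itertools import permutations
from math import factorial

def exact_shapley(points, utility):
    """Exact Shapley values by enumerating all orderings: each point's value
    is its average marginal contribution over all n! permutations.  Only
    feasible for tiny datasets (we use 4 points here to keep it fast)."""
    values = {p: 0.0 for p in points}
    for order in permutations(points):
        prefix = set()
        for p in order:
            values[p] += utility(prefix | {p}) - utility(prefix)
            prefix.add(p)
    n_fact = factorial(len(points))
    return {p: v / n_fact for p, v in values.items()}

# Illustrative submodular utility: number of distinct groups covered.
# Points 0 and 1 are redundant duplicates from the same group "a".
groups = {0: "a", 1: "a", 2: "b", 3: "c"}
sv = exact_shapley(list(groups), lambda s: len({groups[p] for p in s}))
# By symmetry, sv[0] == sv[1]: the ranking cannot tell duplicates apart.
```

Here the unique points 2 and 3 each get value 1, while the duplicates 0 and 1 split a single unit of value between them, illustrating why symmetric scores alone cannot promote diversity.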
\subsection{Baselines} We focus on comparing against data valuation-based approaches, as they are the closest to achieving the properties of being multi-purpose, task-driven, and model-agnostic. We omit the data cleaning methods from the comparison as their applicability is limited to specific DQM tasks and specific models. Specifically, we consider the following eight state-of-the-art data valuation-based approaches, along with random selection as a reference: (1) \textit{Shapley Permutation Sampling}~\textbf{(Perm-SV)} \citep{maleki2015addressing}, a Monte Carlo-based algorithm for Shapley value estimation. (2) \textit{TMC-Shapley}~\textbf{(TMC-SV)} \citep{ghorbani2019data}, a refined version of Perm-SV that focuses computation on the subsets whose utility changes significantly when an extra point is added. (3) \textit{G-Shapley}~\textbf{(G-SV)} \citep{ghorbani2019data}, which approximates the Shapley value by estimating the utility change caused by an extra point from its gradients. (4) \textit{KNN-Shapley}~\textbf{(KNN-SV)} \citep{jia2019efficient}, which approximates the Shapley value by using a K-Nearest-Neighbor classifier as a proxy model. (5) \textit{Least Core}~\textbf{(LC)} \citep{yan2020ifyoulike}, another data value notion from cooperative game theory.
(6) \textit{Leave-one-out}~\textbf{(LOO)} \citep{ghorbani2019data}, which evaluates the change in model performance when a data point is removed. (7) \textit{Influence Function}~\textbf{(INF)} \citep{koh2017understanding}, which approximates the LOO error with influence functions. (8) \textbf{TracIn} \citep{pruthi2020estimating}, which traces the change in test loss during the training process whenever the training point of interest is utilized. (9) \textbf{Random}, which selects a subset uniformly at random from the target dataset. \add{For a fair comparison, we fix the number of utility samples at 4,000 for \textsc{DataSifter}\xspace and the baseline algorithms that require utility sampling. } The implementations of \textsc{DataSifter}\xspace and the baseline algorithms are detailed in the Appendix. We repeat model training ten times for each selected set of data points to obtain the error bars. \subsection{Results} \subsubsection{Filtering out Harmful Data} \begin{figure} \caption{Experimental results and comparisons of \textsc{DataSifter}\xspace for filtering out harmful data (applications I--IV). The light blue region in each (a) graph represents the area in which a method is no better than random selection. For I.(b) and II.(b), we depict the Attack Success Rate (ASR), where a lower ASR indicates a more effective detection. For III.(b) and IV.(b), we show the model test accuracy, where a higher accuracy means a better selection.} \label{fig:all_detection} \end{figure} \label{sec:remoe-harmful} Training data can be contaminated by various kinds of harmful examples, e.g., backdoor triggers, poisoned samples, and noisy or mislabeled data. Our goal here is to identify the data points that are most likely to be harmful. These points can either be discarded or presented with high priority to human experts for manual cleaning.
To evaluate the performance of different DQM techniques, we examine the training instances according to the quality ranks output by each method and plot the fraction of detected corrupted data against the fraction of inspected training data. Additionally, for poisoned/backdoor data detection, we plot the change in \textit{Attack Success Rate}~(ASR), and for noisy feature/label detection, we plot the change in model accuracy after filtering out the low-quality data points selected by each technique. \add{The validation data in utility sampling are 300 clean data points sampled from the test data of the corresponding datasets. } \textbf{I. Backdoor Detection.} Backdoor attacks~\citep{zeng2021rethinking,gu2017badnets} embed an exploit at training time that is subsequently invoked by the presence of a ``trigger'' at test time. They are considered particularly dangerous since they make models predict a target output on inputs with predefined triggers while still retaining state-of-the-art performance on clean data. Since data points carrying backdoor triggers contribute little to the learning of clean validation samples, we can expect to identify them by minimizing the learned data utility model. This experiment studies the effectiveness of \textsc{DataSifter}\xspace for removing backdoored examples. We evaluate BadNets \cite{gu2017badnets} and the Trojan attack \cite{liu2017trojaning}, two of the best-known backdoor attacks in the literature. We adopt a three-layer CNN as the target model, a poison rate of 0.2, and a target label `Airplane.' Figure \ref{fig:all_detection} I.(a) and I.(b) present the Trojan attack detection results for a 1,000-size randomly selected subset of the CIFAR-10 dataset. As we can see, \textsc{DataSifter}\xspace significantly outperforms other DQM approaches; for instance, it achieves a detection rate of 90\% with \textbf{51.17\%} fewer inspected data points than the others.
\textbf{II. Poisoned Data Detection.} In data poisoning attacks, adversaries make slight modifications to some training samples to cause malicious behaviors in the test phase (e.g., misclassifying targeted test examples). We evaluate the different DQM techniques on two popular attacks, namely the feature collision attack \cite{shafahi2018poison} and the influence function-based attack \cite{koh2017understanding}. Both are clean-label poisoning attacks in which the attacker does not need to control the labeling of the training data. We leave the detailed descriptions of the attacks to the Appendix.
Figure \ref{fig:all_detection} II.(a) and II.(b) show the results for the feature collision attack \cite{shafahi2018poison} on a 500-size randomly selected CIFAR-10 subset, where 50 data points of class `cat' are perturbed with features extracted from a `frog' sample in the test set. We see that \textsc{DataSifter}\xspace significantly outperforms all other DQM methods in the poisoned data detection task; for instance, it attains a 90\% detection rate with \textbf{75.41\%} fewer examined data points. \textbf{III. Noisy Feature Detection.} Feature noise originating from sampling or transmission (e.g., Gaussian noise) can decrease classification accuracy. Following the settings in \cite{wang2021one}, we add white noise to clean samples and evaluate the performance of each DQM technique in detecting those samples. For the CIFAR-10 dataset, we corrupt 25\% of the training images by adding white noise. Based on Figure \ref{fig:all_detection} III.(a) and III.(b), we can conclude that \textsc{DataSifter}\xspace significantly outperforms all other methods on this task; for example, it achieves a 90\% detection rate by examining \textbf{67.25\%} fewer data points. Meanwhile, the KNN-SV approach exhibits a distinctive trend -- it starts finding the noisy data points only after filtering out a certain amount of clean data. This is mainly because all noisy data points are out-of-distribution (OOD). The mechanism of KNN-SV tends to assign zero value to OOD data points while also assigning negative values to some clean data points. We provide a more detailed explanation in the Appendix. \textbf{IV. Mislabeling Detection.} Labels in the real world are often noisy due to automatic or non-expert labeling. Following \cite{ghorbani2019data, jia2019scalability}, we perform experiments on two datasets and present the results of an SVM trained on the Enron1 SPAM dataset \citep{shams2013classifying} and on the CIFAR-10 dataset.
We adopt a bag-of-words representation of the Enron1 dataset for training. The label noise (flipping) ratio is 15\%. Under this setting, influence-based techniques and G-SV are not applicable since they require the model to be trained with gradient-based approaches. Figure \ref{fig:all_detection} IV.(a) and IV.(b) show that although \textsc{DataSifter}\xspace does not attain the highest detection rate, the accuracy of the model trained on the selected data is competitive with the most effective approaches. For the Enron SPAM dataset, a small number of mislabeled data points does not significantly affect model performance; thus, those mislabeled samples can evade our detection, which is based on validation performance. Comparing Figure \ref{fig:all_detection} IV.(a) and IV.(b), such evasion is acceptable, as the model trained on the data points selected by \textsc{DataSifter}\xspace still achieves competitive accuracy. On the other hand, we find that KNN-SV and LOO accomplish a decent detection rate but end up with a lower validation accuracy. This is because they select very unbalanced data points, as both satisfy the symmetry axiom discussed in Section \ref{sec:theory}. \begin{wrapfigure}{R}{0.65\textwidth} \centering \includegraphics[width=0.65\textwidth]{images/cases/Second_Pics.png} \caption{Experimental results and comparison of \textsc{DataSifter}\xspace for selecting high-quality data (applications V and VI). We depict validation accuracy for both cases; higher accuracy indicates better performance. } \label{fig:all-selection} \end{wrapfigure} \subsubsection{Selecting High-quality Data} The DQM tasks considered in this section aim to select a subset that is most likely to improve model test accuracy and fairness. \textbf{V. Data Summarization.} Data summarization aims to select a small, representative subset from a massive dataset that retains utility comparable to that of the whole dataset.
We use a convolutional neural network trained on the PubFig83 dataset in this experiment. Figure \ref{fig:all-selection} V shows that \textsc{DataSifter}\xspace and KNN-SV significantly outperform all the other DQM techniques, which have similar performance to random selection. \textbf{VI. Data Debiasing. } We explore whether DQM techniques can help select a subset of training data that improves both fairness and performance for the ML task. We use logistic regression trained on the UCI Adult Census dataset as the task model. We measure fairness by weighted accuracy -- the average of the model's classification accuracy over females and that over males. G-SV, KNN-SV, and influence-based techniques are not applicable for this application, since they either require the model to be trained using SGD or are designed to compute data importance only when the metric is test accuracy or loss. Therefore, we compare only with the remaining six baselines. Figure \ref{fig:all-selection} VI shows that \textsc{DataSifter}\xspace achieves top-tier performance along with Perm-SV. \begin{wrapfigure}{R}{0.65\textwidth} \centering \includegraphics[width=0.65\textwidth]{images/cases/Large_result.png} \caption{ Experimental results and comparison of \textsc{DataSifter}\xspace and the baseline algorithms for detecting backdoored data on larger datasets. } \label{fig:large} \end{wrapfigure} \subsection{Comparisons on Larger Datasets} \label{sec:largedata} We compare the scalability of \textsc{DataSifter}\xspace and the other baselines on large datasets. Here we show the results for backdoor detection on a 10,000-size Trojan-square-poisoned CIFAR-10 subset. For \textsc{DataSifter}\xspace, we sample data subset utilities from only 1,000 data points, as we did in Section \ref{sec:remoe-harmful} I, but use the learned utility model to select from the entire 10,000 data points.
When executed on an NVIDIA Tesla K80 GPU, the clock time for the utility sampling step is within 5 hours for 4000 utility samples with a small CNN model, as the data size is fairly small. LOO, the Least core, and all the Shapley value-based approaches except KNN-SV did not terminate in 24 hours, so we remove them from the comparison. As we can see from Figure \ref{fig:large}, \textsc{DataSifter}\xspace once again outperforms all the remaining approaches. The results show that the learned utility model can also provide utility estimations for sets of unseen data points, which largely improves the scalability of \textsc{DataSifter}\xspace. In contrast, the existing valuation-based approaches cannot predict the importance of unseen data points, so their utility sampling has to be conducted over the entire dataset. \section{Limitations of the \textsc{DataSifter}\xspace} \label{sec:limitations} We now discuss two limitations of the \textsc{DataSifter}\xspace. \paragraph{Scalability.} While Section \ref{sec:largedata} shows that \textsc{DataSifter}\xspace is often much more efficient than most of the other DQM schemes with similar design goals, its scalability to large datasets still needs further investigation. \textsc{DataSifter}\xspace can be slow because the utility sampling step requires retraining models thousands of times. Although we already have several ways to improve scalability, such as using a smaller proxy model and/or only conducting utility sampling on a small portion of the dataset, it may still require hours of training. Further improving the scalability of \textsc{DataSifter}\xspace through efficient approximation heuristics for data utility functions would be interesting future work. \paragraph{Utility Learning Model. } In this work, we use the popular set function learning model DeepSets as our utility learning model for all of the experiments.
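For reference, the DeepSets structure $f(S) = \rho\left(\sum_{x \in S}\phi(x)\right)$ used as our utility model can be sketched in a few lines of NumPy. The single linear layers below are placeholder assumptions (in our experiments both $\phi$ and $\rho$ are three-layer MLPs); the point is only the permutation-invariant sum pooling.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder weights; in the paper, phi and rho are three-layer MLPs.
W_phi = rng.standard_normal((4, 8))   # phi: R^4 -> R^8 (one linear layer here)
W_rho = rng.standard_normal(8)        # rho: R^8 -> R

def deepsets(S):
    """f(S) = rho(sum_x phi(x)): sum-pool per-element features, then score."""
    pooled = sum(np.maximum(x @ W_phi, 0.0) for x in S)  # ReLU(phi(x)), summed
    return float(pooled @ W_rho)

S = [rng.standard_normal(4) for _ in range(5)]
# Sum pooling makes the output independent of the element order.
assert np.isclose(deepsets(S), deepsets(S[::-1]))
```

Because the pooling is a plain sum, the model is permutation-invariant by construction, which is exactly the property a set-valued utility function must have.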
However, as shown in several previous works \citep{wei2015submodularity, han2020replication, wang2021one}, many data utility functions arising from commonly used learning algorithms are close to submodular functions. While DeepSets-based utility learning models have already shown promising results in our experiments, DeepSets does not provide a mechanism to incorporate such prior knowledge. As another interesting line of future work, we would like to exploit the approximate submodularity of these data utility functions and use more fine-grained architectures or training algorithms for utility learning, e.g., submodular regularizations \cite{alievalearning}. \section{Conclusion} This paper presents \textsc{DataSifter}\xspace as a unified framework for realizing task-driven, multi-purpose, model-agnostic data quality management. We theoretically analyzed the worst-case performance of existing data valuation-based DQM schemes and showed that these approaches suffer from unsatisfactory performance guarantees. This sheds light on the empirical observation that existing data valuation-based DQM schemes exhibit significant performance variation over different datasets and tasks. Based on an extensive evaluation of the \textsc{DataSifter}\xspace over six types of DQM tasks and eight different datasets, we showed that \textsc{DataSifter}\xspace is more comprehensive and robust than the state-of-the-art DQM approaches with similar design goals. For future work, we would like to further improve the scalability of the \textsc{DataSifter}\xspace as well as design utility learning models that are better aligned with the properties of data utility functions. \begin{comment} \section*{Checklist} The checklist follows the references. Please read the checklist guidelines carefully for information on how to answer these questions. For each question, change the default \answerTODO{} to \answerYes{}, \answerNo{}, or \answerNA{}.
You are strongly encouraged to include a {\bf justification to your answer}, either by referencing the appropriate section of your paper or providing a brief inline description. For example: \begin{itemize} \item Did you include the license to the code and datasets? \answerYes{See Section 1 } \item Did you include the license to the code and datasets? \answerNo{The code and the data are proprietary.} \item Did you include the license to the code and datasets? \answerNA{} \end{itemize} Please do not modify the questions and only use the provided macros for your answers. Note that the Checklist section does not count towards the page limit. In your paper, please delete this instructions block and only keep the Checklist section heading above along with the questions/answers below. \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{See Section \ref{sec:limitations}. } \item Did you discuss any potential negative societal impacts of your work? \answerNo{We do not identify any potential negative societal impacts of our work.} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{} \item Did you include complete proofs of all theoretical results? \answerYes{See Supplementary Material. Sketch/intuition of proofs provided along with the theorem statements in Section \ref{sec:theory}. } \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{See Supplementary Material. 
} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{See Section \ref{sec:eval} and Supplementary Material. } \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{See Section \ref{sec:eval}, we include error bars for all plots with randomness in model training.} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{See Supplementary Material. } \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerNA{All datasets we use are free for public use. } \item Did you include any new assets either in the supplemental material or as a URL? \answerNo{} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNo{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \end{comment} \appendix \section{Proof of Theorem \ref{thm:nooptimal}} \begin{customthm}{1} For $n \ge 3$, there exists no linear heuristic $\mathcal{M}$ s.t. 
$d(n, k) = {n \choose k}$ for every $k \in \{1, \ldots, n\}$. \end{customthm} \begin{proof} Suppose, for contradiction, that there exists a linear heuristic $\mathcal{M}$ s.t. $d(n, k) = {n \choose k}$ for all $k$. For a dataset $\mathcal{D}$ and utility function $U$, assume WLOG that the ranks (in non-ascending order) output by $\mathcal{M}$ in Step 2 of Definition \ref{def:linearheuristic} are $(1, \ldots, n)$. This means that \begin{align} &U(\{1\}) \ge U(S) \text{ for all } S \text{ s.t. } |S|=1, \nonumber \\ &U(\{1, 2\}) \ge U(S) \text{ for all } S \text{ s.t. } |S|=2, \nonumber \\ &\ldots \nonumber \\ &U(\{1, \ldots, n-1\}) \ge U(S) \text{ for all } S \text{ s.t. } |S|=n-1. \nonumber \end{align} We construct a simple counterexample $U$ to show that no such $\mathcal{M}$ exists: let $n = 3$ and define $U$ as follows: \begin{align} &U(\emptyset) = 0, \nonumber \\ &U(\{1\}) = 7, U(\{2\}) = U(\{3\}) = 5, \nonumber \\ &U(\{1, 2\}) = 9, U(\{1, 3\}) = 9, U(\{2, 3\}) = 10, \nonumber \\ &U(\{1, 2, 3\}) = 10. \nonumber \end{align} To make $d(3, 1) = 3$, $\mathcal{M}$ must choose $1$ for $k=1$. However, for size-$2$ subsets, $\mathcal{M}$ can only choose between $\{1, 2\}$ and $\{1, 3\}$, whose utilities are both $9 < U(\{2, 3\})$. Therefore, $d(3, 2) = 2 < {3 \choose 2} = 3$. \end{proof} \section{Proof of Theorem \ref{thm:symmetrybound}} To formally state and prove Theorem \ref{thm:symmetrybound}, we first introduce the formal definition of a data type. \begin{definition} Given a dataset $\mathcal{D}$ and utility function $U$, if for every subset $S \subseteq \mathcal{D} \setminus \{i, j\}$ we have $$ U(S \cup\{i\})=U(S \cup\{j\}), $$ we say the two data points $i$ and $j$ are of the same \emph{type}. \end{definition} In other words, two data points are of the same type if they are scored equally by every linear heuristic that satisfies the symmetry axiom.
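Both the same-type relation and the counterexample utility from the proof of Theorem \ref{thm:nooptimal} can be checked mechanically by enumerating subsets; a minimal sketch (the function names are our own):

```python
from itertools import combinations

def same_type(i, j, points, U):
    """Check the definition: U(S + {i}) == U(S + {j}) for every S avoiding i, j."""
    rest = [p for p in points if p not in (i, j)]
    return all(U(set(S) | {i}) == U(set(S) | {j})
               for r in range(len(rest) + 1)
               for S in combinations(rest, r))

# The utility function from the Theorem 1 counterexample (n = 3).
table = {frozenset(): 0, frozenset({1}): 7, frozenset({2}): 5,
         frozenset({3}): 5, frozenset({1, 2}): 9, frozenset({1, 3}): 9,
         frozenset({2, 3}): 10, frozenset({1, 2, 3}): 10}
U = lambda S: table[frozenset(S)]
assert same_type(2, 3, [1, 2, 3], U)       # points 2 and 3 are interchangeable
assert not same_type(1, 2, [1, 2, 3], U)   # point 1 is of a different type
```

The brute-force check is exponential in the dataset size, so it is only a sanity check for small constructions like the one above.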
Theorem \ref{thm:symmetrybound} essentially says that for every linear heuristic that assigns different scores to different types of data points, the domination number can be further upper bounded. We stress that this is a very mild assumption, especially when the space of scores is continuous, which is the case for most existing data importance scoring mechanisms. To simplify set notation, we use $k \times \{D\}$ to denote a dataset that contains $k$ replicates of data point $D$, and we write the union of two datasets $S_1 \cup S_2$ as $S_1 + S_2$. The proof idea of Theorem \ref{thm:symmetrybound} is to construct a \emph{balanced} dataset that contains (nearly) the same number of data points of each type. If a linear heuristic $\mathcal{M}$ satisfies the symmetry axiom, then $\mathcal{M}$ has to select data points of the same type when the target selection number is small, as all data points of the same type receive the same scores. Of course, a dataset with only one type of data points will have nearly no utility. \begin{customthm}{2}[Restated] If a linear heuristic $\mathcal{M}$ satisfies the symmetry axiom and always assigns different scores to different types of data points, then the domination number $d(n, k)$ of $\mathcal{M}$ is upper bounded by $\floor{n/k} \binom{\ceil{\frac{n}{\floor{n/k}}}}{k}$ for each $k \in \{1, \ldots, n\}$. \end{customthm} \begin{proof} Suppose there are $c$ \emph{types} of data points: $D_1, \ldots, D_c$. Let $r = n \mod c$. We construct a dataset $\mathcal{D}$ that contains $\floor{n/c}$ data points of each of the types $D_1, \ldots, D_{c-r}$, and $\ceil{n/c}$ data points of each of the types $D_{c-r+1}, \ldots, D_{c}$. We construct the utility function $U$ as follows: \begin{align} &U(\emptyset) = 0; \nonumber \\ &U(i_1 \times \{ D_1 \} + \ldots + i_c \times \{D_c\}) = 1, \nonumber \end{align} for every tuple of non-negative integers $(i_1, \ldots, i_c)$ s.t.
$1 \le \sum_{j=1}^c i_j \le n$, except that \begin{align} U( k \times \{ D_1 \} ) = \ldots = U( k \times \{ D_c \} ) = 0 \nonumber \end{align} for all $k \le \floor{\frac{n}{c}}$. This construction reflects the rationale that a dataset containing only one type of data points (e.g., all of the same label) provides little information for the ML task. Since $\mathcal{M}$ satisfies the symmetry axiom, all data points of the same type receive the same scores. Moreover, data points of different types receive different scores. Therefore, when the target selection size $k \le \floor{\frac{n}{c}}$, $\mathcal{M}$ will return some $k \times \{ D_j \}$, which has the worst utility among subsets of size $k$, and there are $(c-r){\floor{n/c} \choose k} + r{\ceil{n/c} \choose k}$ such subsets containing only a single type of data points. For each $k$, taking the largest possible $c$ such that $k \le \floor{\frac{n}{c}}$ yields the desired bound. \end{proof} We note that the upper bound is non-trivial for every $k \le n/2$. We also note that the assumption that $\mathcal{M}$ always assigns different scores to different data types can be relaxed: it suffices that \emph{there exists} a balanced dataset, as described in the proof, on which $\mathcal{M}$ assigns different scores to different data types. \section{Proof of Theorem \ref{thm:shapleybound}} Given a dataset $\mathcal{D} = \{1, \ldots, n\}$ and a submodular utility function $U$, the Shapley value is computed as \begin{equation} v_{\text{shap}}(i) = \frac{1}{n} \sum_{S \subseteq \mathcal{D} \setminus\{i\}} \frac{1}{{n-1 \choose |S|}} \big[U(S\cup \{i\})-U(S)\big] \label{eq:shapley} \end{equation} \begin{customthm}{3}[Restated] The domination number $d(n, k)$ of the Shapley value is $1$ for every $n \ge 4$ and every $k \in \{1, \ldots, n\}$, even if we restrict the utility function $U$ to be submodular. \end{customthm} \begin{proof} We first consider the case when $k \ge 3$.
We construct an instance of a dataset $\mathcal{D} = \{1, \ldots, n\}$ and a submodular utility function $U$ as follows: \begin{align} &U(\emptyset) = 0; \nonumber \\ &U(\{1\}) = U(\{2\}) = \ldots = U(\{k\}) = 7, U(\{i\}) = 5 \text{ for }i \ge k+1; \nonumber \\ &U(S)=2|S|+5 \text{ for all }S \text{ s.t. } 2 \le |S| \le k-1; \nonumber \\ &U(\{1, \ldots, k\}) = 2k+4,~~U(S)=2k+5 \text{ for all other }S\text{ s.t. } |S| = k; \nonumber \\ &U(S)=2k+5 \text{ for all }S \text{ s.t. }|S| \ge k+1. \nonumber \end{align} We can compute the Shapley value according to its definition in (\ref{eq:shapley}): \begin{align} v_{shap}(1) = \ldots = v_{shap}(k) &= \frac{1}{n} \left[7+\frac{2(k-1)+4(n-k)}{n-1} + 2(k-3) + \frac{2{n-1 \choose k-1}-1}{ {n-1 \choose k-1} }\right] \nonumber \\ &= \frac{1}{n} \left[2k+3 + \frac{4n-2k-2}{n-1} -\frac{1}{{n-1 \choose k-1}}\right] \nonumber \end{align} \begin{align} v_{shap}(k+1) = \ldots = v_{shap}(n) &= \frac{1}{n} \left[ 5 + \frac{2k+4(n-k-1)}{n-1} + 2(k-3) + \frac{1}{{n-1 \choose k-1}} \right] \nonumber \\ &= \frac{1}{n} \left[ 2k+1 + \frac{4n-2k-4}{n-1} - \frac{1}{{n-1 \choose k-1}} \right] \nonumber \end{align} Since $$ v_{shap}(1) - v_{shap}(k+1) = \frac{1}{n} \left[ 2 + \frac{2}{n-1} - \frac{1}{{n-1 \choose k-1}} - \frac{1}{{n-1 \choose k}} \right] \ge \frac{2}{n(n-1)} >0, $$ we know that $\mathcal{M}$ will always output $\{1, \ldots, k\}$, which achieves the lowest utility among all data subsets of size $k$. Therefore, the Shapley value's domination number $d(n, k)=1$ for all $3 \le k \le n-1$. We then consider the case when $k = 2$. The submodular data utility function for the case $k \ge 3$ can be easily adapted as follows: \begin{align} &U(\emptyset) = 0; \nonumber \\ &U(\{1\}) = U(\{2\}) = 7, U(\{i\}) = 5 \text{ for }i \ge 3; \nonumber \\ &U(\{1, 2\}) = 8,~~U(S)=9 \text{ for all other }S\text{ s.t. } |S| = 2; \nonumber \\ &U(S)=9 \text{ for all }S \text{ s.t. }|S| \ge 3.
\nonumber \end{align} The Shapley value is computed as follows: \begin{align} v_{shap}(1) = v_{shap}(2) &= \frac{1}{n} \left[7+\frac{1+4(n-2)}{n-1}\right] \nonumber \\ &= \frac{1}{n} \left[11-\frac{3}{n-1}\right] \nonumber \end{align} \begin{align} v_{shap}(3) = \ldots = v_{shap}(n) &= \frac{1}{n} \left[ 5 + \frac{4+4(n-3)}{n-1} + \frac{2}{(n-1)(n-2)} \right] \nonumber \\ &= \frac{1}{n} \left[ 9 - \frac{4}{n-1} + \frac{2}{(n-1)(n-2)} \right] \nonumber \end{align} Since $$ v_{shap}(1) - v_{shap}(3) = \frac{1}{n} \left[ 2 + \frac{2}{n-1} - \frac{1}{{n-1 \choose k-1}} - \frac{1}{{n-1 \choose k}} \right] \ge \frac{2}{n(n-1)} >0, $$ we know that $\mathcal{M}$ will always output $\{1, 2\}$, which achieves the lowest utility among all data subsets of size $2$. Therefore, for the Shapley value, $d(n, 2)=1$. Finally, we consider the case when $k = 1$. Similarly, we construct a submodular utility function as follows: \begin{align} &U(\emptyset) = 0; \nonumber \\ &U(\{1\}) = 6, U(\{i\}) = 7 \text{ for }i \ge 2; \nonumber \\ &U(\{1, i\}) = 11 \text{ for }i \ge 2, U(\{i, j\}) = 9 \text{ for }i, j \ge 2; \nonumber \\ &U(S) = 11 \text{ for all }S \text{ s.t. }|S| \ge 3. \nonumber \end{align} The Shapley value is computed as follows: $$ v_{shap}(1) = \frac{1}{n}[6+4+2] = \frac{12}{n} $$ \begin{align} v_{shap}(i) &= \frac{1}{n} \left[ 7 + \frac{5+2(n-2)}{n-1} + \frac{2(n-2)(n-3)}{(n-1)(n-2)} \right] \nonumber \\ &= \frac{1}{n}\left[11-\frac{1}{n-1}\right] \nonumber \\ &< \frac{12}{n} = v_{shap}(1). \nonumber \end{align} Therefore, the Shapley value's domination number $d(n, k)=1$ for $k = 1$. \end{proof} \section{Experiment Details and Results on More Datasets} \subsection{Details of Figure 2} In Figure 2 of the main text, we showed the predicted vs. true data utility values (test accuracy) for a synthetic dataset with logistic regression. For the synthetic data generation, we sample 200 data points from a 50-dimensional standard Gaussian distribution.
All of the 50-dimensional parameters are drawn independently and uniformly from $[-1, 1]$. Each data point is labeled by the sign of its vector's sum. The data utility model we use is a three-layer MLP (note that a set function can be represented by a function on $\{0, 1\}^n$ in a natural way). \subsection{Details of Figure 3} In Figure 3 of the main text, the tiny synthetic dataset is generated by sampling data points from a 2-dimensional standard Gaussian distribution, where the mean vector of the Gaussian distribution is $(0.1, -0.1)$. Each data point is labeled by the sign of its vector's sum. We first sample 9 data points with positive label and 2 data points with negative label. We then replicate each of the two negatively labeled data points twice. To simulate natural noise, we add Gaussian noise with scale $10^{-5}$ to the copied data vectors. By sampling and copying, we obtain 15 data points with natural redundancy. Since there are only 6 data points with negative label, they tend to be assigned larger (and similar) importance scores by linear heuristics like the Shapley value. Both Shapley and Least core thus rank negative points as more important. This means that when the target selection size is less than 6, the selected dataset will contain only a single label class and no information about the other class at all. As shown in Figure 3, both Shapley and Least core achieve trivial utility for the first 6 selected data points. \subsection{Baseline Implementation} For fair comparisons between \textsc{DataSifter}\xspace and the baselines, we fix the total number of utility samples at 4000 for \textsc{DataSifter}\xspace and the baseline algorithms that require utility sampling, including Perm-SV, TMC-SV, G-SV, and LC. Following the settings in \cite{ghorbani2019data}, we set the performance tolerance in TMC-Shapley as $10^{-3}$. Following the settings in \cite{jia2019efficient}, we set $K=5$ for KNN-Shapley.
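For reference, the truncated Monte Carlo (TMC) Shapley estimator used as a baseline can be sketched as below. The additive `utility` is a toy stand-in for model retraining, chosen so that each point's exact Shapley value equals its own weight; function names are our own.

```python
import random

def tmc_shapley(points, utility, n_perms=300, tol=1e-3, rng=random.Random(0)):
    """Truncated Monte Carlo Shapley: average each point's marginal
    contribution over random permutations, treating marginals as zero once
    the running utility is within tol of the full-set utility."""
    full = utility(points)
    values = {p: 0.0 for p in points}
    for t in range(1, n_perms + 1):
        perm = rng.sample(points, len(points))   # random permutation
        prefix, u_prev = [], utility([])
        for p in perm:
            if abs(full - u_prev) < tol:         # truncation
                marginal = 0.0
            else:
                u_new = utility(prefix + [p])
                marginal, u_prev = u_new - u_prev, u_new
            prefix.append(p)
            values[p] += (marginal - values[p]) / t   # running mean
    return values

# Toy additive utility: each point's Shapley value equals its own weight.
weights = {0: 0.5, 1: 0.3, 2: 0.2}
u = lambda S: sum(weights[p] for p in S)
v = tmc_shapley(list(weights), u, n_perms=300)
assert abs(v[0] - 0.5) < 0.05 and abs(v[2] - 0.2) < 0.05
```

In the actual baselines, `utility` retrains the task model on the prefix and reports validation accuracy, which is why these estimators become expensive on large datasets.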
We use the CVXOPT\footnote{\url{https://cvxopt.org/}} library to solve the constrained minimization problem in the least core calculation. For the influence function technique, we rank training data points according to their influences on the model loss over the validation data. The code is adapted from the PyTorch implementation of influence functions on GitHub\footnote{\url{https://github.com/nimarb/pytorch_influence_functions}}. For the TracIn technique, we only use the parameters in the last layer, following the settings in \cite{pruthi2020estimating}. We sample checkpoints every 15 epochs. The implementation is adapted from the official GitHub repository\footnote{\url{https://github.com/frederick0329/TracIn}}. \subsection{Details of Datasets Used in Section \ref{sec:eval}} \paragraph{CIFAR-10 \citep{krizhevsky2009learning}.} CIFAR-10 consists of 60,000 3-channel images in 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck). Each image is of size $32 \times 32$. \paragraph{MNIST \citep{lecun1998mnist}.} MNIST consists of 70,000 handwritten digits. The images are $28 \times 28$ grayscale pixels. \paragraph{Dog vs. Cat \citep{dogcat}.} The Dog vs. Cat dataset consists of 2000 images (1000 for `dog' and 1000 for `cat') extracted from the CIFAR-10 dataset. Each image is of size $32 \times 32$. \paragraph{Enron SPAM \citep{shams2013classifying}.} The Enron SPAM dataset consists of 2000 emails extracted from the Enron corpus \cite{klimt2004enron}. The bag-of-words representation has 10714 dimensions. \paragraph{PubFig83 \citep{pinto2011scaling}.} PubFig83 is a real-life dataset of 13,837 facial images of 83 individuals. Each image is resized to $32 \times 32$. \paragraph{Covid-CT \citep{zhao2020COVID-CT-Dataset}.} The COVID-CT dataset has 746 CT images in total, containing 349 images from 216 COVID-19 patients; the rest are from healthy people. The dataset is separated into 543 training images and 203 test images.
We resized each image to $32 \times 32$. \paragraph{UCI Adult Census \citep{uci-dataset}.} The Adult dataset contains 48,842 records from the 1994 Census database. Each record has 14 attributes, including gender and race information. The task is to predict whether one's income exceeds \$50K/yr based on census data. \paragraph{COMPAS \citep{yoon2018machine}.} We use a subset of the COMPAS dataset that contains 6172 data records used by the COMPAS algorithm in scoring defendants, along with their outcomes within two years of the decision, for criminal defendants in Broward County, Florida. Each data record has features including the number of priors, age, race, etc. \subsection{Implementation Details} For the experiments of backdoor detection, data poisoning detection, noisy data detection, and mislabeled data detection on the CIFAR-10 dataset, the CNN model we use has two convolutional layers, each followed by a max-pooling layer, with ReLU as the activation function. For the experiments of backdoor detection and noisy feature detection on the MNIST dataset, we use LeNet adapted from \cite{lecun1998gradient}, which has two convolutional layers, two max-pooling layers, and one fully-connected layer. For the data summarization experiment, a larger CNN model is trained on the PubFig83 dataset; it has six convolutional layers, each followed by a batch normalization layer and a ReLU activation function. For the poisoning detection experiment on the Dog vs. Cat dataset as well as the data summarization experiment on the COVID-CT dataset, we use a small CNN model adapted from the PyTorch tutorial\footnote{\url{https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html}}, which contains two convolutional layers, two max-pooling layers, and three fully-connected layers. We use the Adam optimizer with learning rate $10^{-3}$ and mini-batch size 32 to train all of the models mentioned above for 30 epochs, except that we train LeNet for five epochs on MNIST.
For the data debiasing experiment on the Adult dataset, we implement logistic regression in scikit-learn \cite{pedregosa2011scikit} and use the LibLinear solver. For the mislabeling detection experiment on SPAM and the data debiasing experiment on COMPAS, we adopt the SVM implementation from the scikit-learn library \cite{pedregosa2011scikit} with an RBF kernel. A DeepSets model is a set function $f(S) = \rho \left( \sum_{x \in S} \phi(x) \right)$ where both $\rho$ and $\phi$ are neural networks. In our experiments, both the $\phi$ and $\rho$ networks have three fully-connected layers. For the COMPAS dataset, we set the number of neurons in every hidden layer and the dimension of the set features (i.e., the output of the $\phi$ network) to be 64. For all other datasets, we set the number of neurons and the set feature dimension to be 128. We use the Adam optimizer with learning rate $10^{-4}$, mini-batch size 32, $\beta_1=0.9$, and $\beta_2=0.999$ to train all of the DeepSets utility models for up to 20 epochs. \subsection{Additional Results} In this section, we present experiment details and results on more datasets corresponding to the applications introduced in the main body (see Section \ref{sec:eval}). \subsubsection{Backdoor Attack} We consider the two most popular types of backdoor attacks, namely BadNets \cite{gu2017badnets} and the Trojan square trigger \cite{liu2017trojaning}. The major difference between the two attacks is the trigger itself: BadNets adopts a white block trigger at the right corner, while the Trojan attack adopts a square trigger. Here, we show the results of \textsc{DataSifter}\xspace and the baseline techniques for detecting BadNets triggers on the MNIST dataset. The poisoning rate is 0.25, and the target label is `0'. The performance of the different DQM techniques is illustrated in Figure \ref{fig:appen_detection} I.(a) and I.(b).
We can see that \textsc{DataSifter}\xspace outperforms all other methods in detection rate and significantly reduces the attack accuracy after filtering out bad data points. \subsubsection{Data Poisoning Attack} We discuss two popular types of clean-label data poisoning attacks. The feature collision attack \cite{shafahi2018poison} crafts poison images that collide with a target image in feature space, thus making it difficult for a model to discriminate between the two. The influence function-based poisoning attack \cite{koh2017understanding} identifies the most influential training data points for the target image and generates the adversarial training perturbation that causes the largest increase in the loss on the target image. The Attack Success Rate is measured by the model's confidence in the prediction on poisoned data (with respect to the target label). Figure \ref{fig:appen_detection} II.(a) (b) show the results for the influence function-based attack on the Dog vs. Cat dataset, where 50 data points of class `cat' are perturbed to increase the model loss on a `dog' sample in the test set. As we can see, \textsc{DataSifter}\xspace is more effective at detecting poisoned data points than all other baselines. \subsubsection{Noisy Feature} We follow the same evaluation method for noisy data detection as in Section \ref{sec:eval} with another setting: a LeNet model trained on noise-polluted MNIST. We randomly select 1000 data points and corrupt 25\% of them with white noise. As shown in Figure \ref{fig:appen_detection} III.(a) (b), although KNN-Shapley achieves slightly better performance in detecting noisy data points, \textsc{DataSifter}\xspace still retains a higher model accuracy. Besides, similar to the case for CIFAR-10, we find that the KNN-SV approach only starts finding the noisy data points after filtering out a certain amount of clean data.
This is mainly because all noisy data points are out-of-distribution (OOD), as shown in Figure \ref{fig:noisydataexample} (b). The mechanism of KNN-SV, however, tends to assign 0 values to OOD data points while assigning negative values to clean data points that are in-distribution but have different labels from their neighbors. Figure \ref{fig:noisydataexample} (c) gives a visualization of the distribution of KNN-Shapley values. \begin{figure} \caption{The experimental results and comparisons of the \textsc{DataSifter}\xspace under the case of filtering out harmful data (applications I-IV). The light blue region in each (a) graph represents the area where a method is no better than random selection. For I.(b) and II.(b), we depict the Attack Success Rate (ASR), where a lower ASR indicates a more effective detection. For III.(b) and IV.(b), we show the model test accuracy, where a higher accuracy means a better selection.} \label{fig:appen_detection} \end{figure} \begin{figure} \caption{(a) a normal image from CIFAR-10, (b) an example of a noisy data image, (c) a sample of KNN-Shapley values, where data points with index $<500$ are noisy. A data point with a higher KNN-Shapley value is considered more important.} \label{fig:noisydataexample} \end{figure} \subsubsection{Mislabeled Data} We conduct another experiment on noisy label detection: a small CNN model trained on 500 data points from the CIFAR-10 dataset. The noise flipping ratio is 25\%. The performance of mislabel detection is shown in Figure \ref{fig:appen_detection} IV.(a). As we can see, no DQM technique is particularly effective in detecting mislabeled data for this task. Only KNN-SV achieves slightly better performance than the other approaches.
We conjecture that the difficulty of mislabel detection on the CIFAR-10 dataset is due to the following reason: since an oracle for detecting mislabeled data points can also be used to implement a classifier, mislabel detection is at least as hard as classification. A classifier directly trained on the 500 clean data points in this experiment, however, can only attain around 28\% test classification accuracy. Nevertheless, Figure \ref{fig:appen_detection} IV.(b) shows that \textsc{DataSifter}\xspace achieves only slightly worse model accuracy than KNN-SV after filtering out the selected bad data points. \begin{wrapfigure}{R}{0.65\textwidth} \centering \includegraphics[width=0.65\textwidth]{images/appendix/Appendix_2nd.png} \caption{The experimental results and comparison of the \textsc{DataSifter}\xspace under the case of selecting high-quality data (applications V and VI). We depict the validation accuracy for both cases. A higher accuracy indicates a better performance. } \label{fig:appen-selection} \end{wrapfigure} \subsubsection{Data Summarization} As another setting for the data summarization application, we use the patient CT images from the COVID-CT dataset for a binary classification task, which aims to determine whether an individual is diagnosed with COVID-19. The CNN model trained on the dataset achieves around 72\% classification accuracy. Figure \ref{fig:appen-selection} V. shows the results for selecting up to 400 data points with different DQM techniques. As we can see, \textsc{DataSifter}\xspace achieves the best model accuracy on the selected data points along with KNN-SV. \subsubsection{Data Debiasing} We introduce another data debiasing experiment on the criminal recidivism prediction (COMPAS) task, where race is considered the sensitive attribute. The utility metric we adopt here is the average accuracy across different race groups. The learning algorithm we use is SVM with an RBF kernel.
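The fairness metric described above -- averaging accuracy over sensitive groups so that every group counts equally regardless of its size -- can be sketched as follows (function and variable names are our own):

```python
def group_balanced_accuracy(y_true, y_pred, groups):
    """Average the classification accuracy over each sensitive group,
    so every group counts equally regardless of its size."""
    accs = []
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        accs.append(correct / len(idx))
    return sum(accs) / len(accs)

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "B", "B", "B"]
# Group A: 2/3 correct; group B: 2/3 correct; balanced accuracy = 2/3.
assert abs(group_balanced_accuracy(y_true, y_pred, groups) - 2 / 3) < 1e-9
```

Using this metric as the subset utility is what makes gradient- and accuracy-specific baselines inapplicable here, while any utility-sampling approach can adopt it unchanged.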
Baselines including G-SV, KNN-SV, and Influence-based techniques are not applicable for this application due to the utility metric and learning algorithm we use. Figure \ref{fig:appen-selection} VI. shows the results for \textsc{DataSifter}\xspace and the remaining five baselines. We can see that \textsc{DataSifter}\xspace again achieves top-tier performance. \subsubsection{Large Datasets} We follow the same protocol as in Section \ref{sec:largedata} for comparing the scalability of \textsc{DataSifter}\xspace and the other baselines in a different setting: noisy data detection on a 20,000-size CIFAR-10 subset. The corruption ratio is 25\%. Again, for \textsc{DataSifter}\xspace, we use the learned utility model from Section \ref{sec:largedata} to select data points on the 20,000-size set. We remove LOO, the Least core, and all the Shapley value-based approaches except KNN-SV from the comparison, as they did not terminate in 24 hours for 4000 utility samples on the 20,000-size set. As we can see from Figure \ref{fig:appen-large} (a) and (b), \textsc{DataSifter}\xspace significantly outperforms all other baseline techniques. The results demonstrate that although the utility sampling step can be expensive, the scalability of \textsc{DataSifter}\xspace can be boosted by the predictive power of the learned utility model. \begin{figure} \caption{ The experimental results and comparison of the \textsc{DataSifter}\xspace and baseline algorithms for detecting noisy data on larger datasets. } \label{fig:appen-large} \end{figure} \end{document}
\begin{document} \title{A sneak preview of proof theory of ordinals\thanks{This is a revised version of the r\'esum\'e for a talk at the Kobe seminar on Logic and Computer Science, 5--6 Dec.\ 1997}} \begin{abstract} This talk is a sneak preview of the project `proof theory for theories of ordinals'. Background, aims, a survey, and future work on the project are given. Subsystems of second order arithmetic are embedded in recursively large ordinals and then the latter are analysed. We scarcely touch upon proof-theoretical matters. \end{abstract} \section{Proof theory \`a la Gentzen-Takeuti}\label{sec:1} Let T be a sound and recursive theory containing arithmetic. The {\it proof-theoretical ordinal\/} $|\mbox{T}|_{\Pi^{1}_{1}}<\omega^{CK}_{1}$ is defined to be the ordinal: \[\sup\{\alpha<\omega^{CK}_{1}: \mbox{T}\vdash Wo[\prec_{\alpha}] \mbox{ for some recursive well ordering } \prec_{\alpha} \mbox{ of type } \alpha\}\] ($Wo[\prec]$ denotes a $\Pi^{1}_{1}$-sentence saying that $\prec$ is a well ordering.) \begin{enumerate} \item (Gentzen 1936, 1938, 1943) $|\mbox{PA}|_{\Pi^{1}_{1}}=|\mbox{ACA}_{0}|_{\Pi^{1}_{1}}=\varepsilon_{0}$ \item (Takeuti 1967) $|\Pi^{1}_{1}\mbox{-CA}_{0}|_{\Pi^{1}_{1}}=|O(\omega,1)|_{<_{0}}=\psi_{\Omega}\Omega_{\omega}$ and\\ $|\Pi^{1}_{1}\mbox{-CA+BI}|_{\Pi^{1}_{1}}=|O(\omega+1,1)|_{<_{0}}=\psi_{\Omega}\varepsilon_{\Omega_{\omega}+1}$ \end{enumerate} {\bf Axiom schemata in second order arithmetic}. Let $\Phi$ denote a set of formulae in the language of second order arithmetic.
\begin{enumerate} \item $\Phi\mbox{-CA}$: For each $\varphi\in\Phi$ \[\forall Y\exists X[X=\{n\in\omega: \varphi(n,Y)\}]\] \item $\Phi^{-}\mbox{-CA}$ denotes the set-parameter free version of $\Phi\mbox{-CA}$: \[\exists X[X=\{n\in\omega: \varphi(n)\}]\] \item $\Delta^{1}_{n}\mbox{-CA}$: For $\varphi,\psi\in\Sigma^{1}_{n}$ \[\{n\in\omega:\varphi(n,Y)\}=\{n\in\omega:\neg\psi(n,Y)\}\rightarrow\exists X[X=\{n\in\omega: \varphi(n,Y)\}]\] \item $\Phi\mbox{-AC}$: For each $\varphi\in\Phi$ \[\forall n\exists X \varphi(n,X)\rightarrow\exists \{X_{n}\}\forall n\varphi(n,X_{n})\] \item $\Phi\mbox{-DC}$: For each $\varphi\in\Phi$ \[\forall n\forall X\exists Y\varphi(n,X,Y)\rightarrow\exists\{X_{n}\}\forall n\varphi(n,X_{n},X_{n+1})\] \item BI: For each formula $\varphi$ \[Wf[X]\rarw TI[X,\varphi]\] \end{enumerate} Proof theory \`a la Gentzen-Takeuti \cite{G3}, \cite{T} proceeds as follows: \begin{description} \item[(G1)] Let $P$ be a proof whose endsequent $\Gamma$ has a restricted form, e.g., an arithmetical sequent. Define a reduction procedure $r$ which rewrites such a proof $P$ to yield other proofs $\{r(P,n):n\in I\}$ of sequents $\Gamma_{n}$, provided that $P$ has not yet been reduced to a certain canonical form. For example, when we want to show that the arithmetical sequent $\Gamma$ is true, the sequents $\Gamma_{n}$ are chosen so that $\Gamma$ is true iff every $\Gamma_{n}\, (n\in I)$ is true. Also, if $P$ is in an irreducible form, then the endsequent is true outright. \item[(G2)] From the structure of the proof $P$, we abstract a structure related to this procedure $r$ and throw irrelevant residue away. Thus we get a finite figure $o(P)$. We call the figure $o(P)$ the {\em ordinal diagram} (o.d.) following G. Takeuti \cite{T}. Let ${\cal O}$ denote the set of o.d.'s. \item[(G3)] Define a relation $<$ on ${\cal O}$ so that $o(r(P,n))<o(P)$ for any $n\in I$. \item[(G4)] Show the relation $<$ on ${\cal O}$ to be well founded.
\\ Usually $<$ is a linear ordering and hence $({\cal O},<)$ is a notation system for ordinals. \end{description} When the endsequent of a proof $P$ is an arithmetical sequent, we in fact construct an $\omega$ cut-free proof of the sequent whose height is less than or equal to (the order type of) the o.d. $o(P)$ attached to $P$. O.d.'s are constructed so that each constructor for o.d.'s reflects a reduction step on proofs. We attach an o.d. $o(\Gamma;P)$ to each sequent $\Gamma$ occurring in a proof $P$. The o.d. $o(\Gamma;P)$ is built by applying constructors for o.d.'s. The constructors applied in building the term $o(\Gamma;P)$ correspond to the inference rules occurring above $\Gamma$. \section{$\omega$-proofs}\label{sec:2} In the latter half of the 1960s Sch\"utte, Tait, Feferman et al.\ analysed predicative parts of second order arithmetic using infinitary proofs with the $\omega$-rule: infer $\forall n A(n)$ from $A(n)$ for any $n\in\omega$. Their main result is \[|\mbox{ATR}_{0}|_{\Pi^{1}_{1}}=\Gamma_{0}=_{df}\min\{\alpha>0:\forall\beta,\gamma<\alpha(\varphi\beta\gamma<\alpha)\}\] where $\varphi$ denotes the binary Veblen function: For each $\alpha<\omega_{1}$ define inductively a normal (strictly increasing and continuous) function $\varphi_{\alpha}:\omega_{1}\rarw\omega_{1}$ as follows: First set $\varphi 0\beta=\varphi_{0}\beta=\omega^{\beta}$. Since the ranges $rng(\varphi_{\beta})$ of $\varphi_{\beta}\, (\beta<\alpha)$ are club sets in $\omega_{1}$, so are the sets of their fixed points $fp(\varphi_{\beta})=\{\gamma<\omega_{1}:\varphi_{\beta}\gamma=\gamma\}$. Thus the intersection $\bigcap\{fp(\varphi_{\beta}):\beta<\alpha\}$ is also a club set in $\omega_{1}$. $\varphi_{\alpha}$ is defined to be the enumerating function of the set $\bigcap\{fp(\varphi_{\beta}):\beta<\alpha\}$.
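By way of illustration, the first levels of the Veblen hierarchy under these conventions take the following standard values (routine facts, not specific to this paper):

```latex
\begin{eqnarray*}
\varphi_{0}\beta &=& \omega^{\beta} \\
\varphi_{1}\beta &=& \varepsilon_{\beta}, \mbox{ since } \varphi_{1}
  \mbox{ enumerates the fixed points of } \beta\mapsto\omega^{\beta} \\
\varphi_{2}0 &=& \min\{\gamma:\varepsilon_{\gamma}=\gamma\} \\
\Gamma_{0} &=& \min\{\alpha>0:\varphi_{\alpha}0=\alpha\},
  \mbox{ the first strongly critical number}
\end{eqnarray*}
```

The last line restates the characterization of $\Gamma_{0}$ displayed above: $\Gamma_{0}$ is the least ordinal closed under the binary function $\varphi$.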
\section{Buchholz-Pohlers}\label{sec:3} In their Habilitationsschriften (1977) Buchholz \cite{Buchholz77} ($\Omega_{\mu+1}$-rule) and \\ Pohlers \cite{Pohlers77} (local predicativity method) analysed theories for iterated inductive definitions. These theories formalize least fixed points of positive elementary induction on $\omega$. For a monotone operator $\Gamma:{\cal P}(\omega)\rarw{\cal P}(\omega)$ define inductively sets $I^{\Gamma}_{\alpha}$ by \[I^{\Gamma}_{\alpha}=\Gamma(I^{\Gamma}_{<\alpha})\mbox{ with } I^{\Gamma}_{<\alpha}=\bigcup\{I^{\Gamma}_{\beta}:\beta<\alpha\}\] $I^{\Gamma}=\bigcup\{I^{\Gamma}_{\alpha}:\alpha<\omega^{CK}_{1}\}$ is the least fixed point of $\Gamma$. When $\Gamma$ is given by a positive elementary formula $A(X^{+},n)$, $\Gamma(X)=\{n\in\omega:\omega\models A[X,n]\}$, we write $I^{A}$ for $I^{\Gamma}$ and $I^{A}_{\alpha}$ for $I^{\Gamma}_{\alpha}$. Iterated $\Pi^{1-}_{1}\mbox{-CA}_{\nu}$ can be simulated in $\mbox{ID}_{\nu}$ since \[\Pi^{1}_{1} \mbox{ on } \omega=\mbox{ inductive on } \omega=\Sigma_{1} \mbox{ on } \omega^{CK}_{1}\] This is seen from the Brouwer-Kleene $\Pi^{1}_{1}$-normal form: for each $\Pi^{1}_{1}$ $A(n,X)\, (n\in\omega,X\subseteq \omega)$ there exists a recursive relation $Q_{n,X}$ on $\omega$, i.e., there exists a recursive function $\{e\}^{X}(n)$, so that \begin{eqnarray} && A(n,X)\leftrightarrow Wf[Q_{n,X}]\leftrightarrow \{e\}^{X}(n)\in{\cal O}\leftrightarrow \nonumber \\ && \exists f\in L_{\omega^{CK}_{1}}[f \mbox{ collapsing function for } Q_{n,X}]\leftrightarrow \nonumber \\ && \exists\alpha<\omega^{CK}_{1}\forall x\in dom(Q_{n,X})(x\in I^{Q_{n,X}}_{\alpha}) \label{eqn:BK} \end{eqnarray} ($x\in I^{Q_{n,X}}_{\alpha}$ designates that the order type of $Q_{n,X}| x$ is less than or equal to $\alpha$.) They showed ({\it cf}. \cite{LNM897} (1981).)
\begin{enumerate} \item $|\mbox{ID}_{\nu}|_{\Pi^{1}_{1}}=|\Pi^{1-}_{1}\mbox{-CA}_{\nu}|_{\Pi^{1}_{1}}=\psi_{\Omega}\varepsilon_{\Omega_{\nu}+1}$ \item $|\mbox{ID}_{<\lambda}|_{\Pi^{1}_{1}}=|\Pi^{1-}_{1}\mbox{-CA}_{<\lambda}|_{\Pi^{1}_{1}}=\psi_{\Omega}\Omega_{\lambda}$ for limit $\lambda$ \end{enumerate} $\Omega_{\nu}$ denotes either $\omega_{\nu}$ or the continuous closure of the enumerating function of the recursively regular ordinals. \begin{remark} {\rm Recently (May 1997) Buchholz \cite{Buchholz1997} showed that Sch\"utte's cut elimination procedure for infinitary proofs with the $\omega$-rule is nothing but the infinitary image of Gentzen's: $\mbox{Gentzen}^{\infty}=\mbox{Sch\"utte}$ and $\mbox{Takeuti}^{\infty}=\mbox{Buchholz}$. I conjecture that $\mbox{Arai}^{\infty}=\mbox{Pohlers-J\"ager}$ for $\mbox{KP}\omega$.} \end{remark} \section{J\"ager}\label{sec:4} G. J\"ager \cite{J} has shifted the object of proof-theoretic study from second order arithmetic to set theories. \begin{definition}($\Pi_2^\Omega$-ordinal of a theory) {\rm Let T be a recursive theory of sets such that} $\mbox{{\rm KP}}\omega\subseteq \mbox{{\rm T}}\subseteq \mbox{{\rm ZF+V=L}}$, {\rm where} $\mbox{{\rm KP}}\omega$ {\rm denotes Kripke-Platek set theory with the Axiom of Infinity.} {\rm For a sentence} $A$ {\rm let} $A^{L_{\alpha}}$ {\rm denote the result of replacing unbounded quantifiers} $Qx \, (Q\in\{ \forall, \exists\})$ {\rm in} $A$ {\rm by} $Qx\in L_{\alpha}$. {\rm Here for an ordinal} $\alpha\in Ord$, $L_\alpha$ {\rm denotes an initial segment of G\"odel's constructible sets.} {\rm Let} $\Omega$ {\rm denote the (individual constant corresponding to the) ordinal} $\omega^{CK}_1$. {\rm If} $T\not\vdash\exists \omega^{CK}_1$, {\rm e.g.,} $\mbox{{\rm T=KP}}\omega$, {\rm then} $A^{L_\Omega }=_{df}A $.
{\rm Define the} $\Pi_{2}^{\Omega}$-ordinal $|\mbox{{\rm T}}|_{\Pi_{2}^{\Omega}}$ of {\rm T by} \[|\mbox{{\rm T}}|_{\Pi_{2}^{\Omega}}=_{df}\inf\{\alpha\leq\omega^{CK}_1 :\forall\Pi_2\mbox{ {\rm sentence }}A(\mbox{{\rm T}}\vdash A^{L_\Omega} \: \Rightarrow \: L_\alpha\models A)\}<\omega^{CK}_1\] \end{definition} Here note that $|\mbox{T}|_{\Pi_{2}^{\Omega}}<\omega^{CK}_1$ since we have for any $\Pi_2$ sentence $A$, $\mbox{T}\vdash A^{L_\Omega} \Rightarrow L_\Omega\models A$, and $\Omega=\omega^{CK}_1$ is recursively regular, i.e., $\Pi_2$-reflecting. G. J\"ager \cite{J} shows that $|\mbox{KP}\omega|_{\Pi_{2}^{\Omega}}=\psi_{\Omega}\varepsilon_{\Omega+1}=d_{\Omega}\varepsilon_{\Omega+1}=$ the Howard ordinal, and G. J\"ager and W. Pohlers \cite{J-P} give the ordinal $|\mbox{KPi}|_{\Pi_{2}^{\Omega}}=\psi_{\Omega}\varepsilon_{I+1}$, where KPi denotes a set theory for recursively inaccessible universes and $I$ the first (recursively) weakly inaccessible ordinal. These include and imply proof-theoretic ordinals of subsystems of second order arithmetic corresponding to set theories. $\mbox{KP}\omega$ includes $\mbox{ID}_{1}$: Using (\ref{eqn:BK}), the axiom schema $n\in W(\prec)(\Leftrightarrow_{df}Wf[\prec| n]) \rarw TI[\prec| n,F]$ for arithmetical $\prec$ (note that this expresses that the well-founded part $W(\prec)$ is the least fixed point of the operator determined by the formula $\forall m\prec n(m\in X)$) is derivable from $\Sigma\mbox{-rfl}\, (\Delta_{0}\mbox{-Coll})$ and the {\em Foundation axiom schema} $\forall x[\forall y\in x \varphi(y)\rightarrow\varphi(x)]\rightarrow\forall x\varphi(x)$. \begin{remark} \begin{enumerate} \item {\rm (J\"ager, \cite{Jaeger84b}) $|\mbox{KP}\omega_{0}|_{\Pi^{1}_{1}}=\varepsilon_{0}$. In $\mbox{KP}\omega_{0}$ Foundation is restricted to sets $x\in a$.
\item (Rathjen, \cite{Rathjen92}) For $n\geq 2$, $|\Pi_{n}\mbox{-Fund}|_{\Pi_{2}^{\Omega}}=d_{\Omega}\Omega_{n-1}(\omega)$ with $\Omega_{0}(\omega)=\omega\,\&\, \Omega_{n+1}(\omega)=\Omega^{\Omega_{n}(\omega)}$. In $\Pi_{n}\mbox{-Fund}$ Foundation is restricted to $\Pi_{n}$-formulae $\varphi(x)$. Note that $d_{\Omega}\Omega^{\omega}$ is the Ackermann ordinal, {\it cf}. \cite{R-W93}, and $d_{\Omega}\Omega^{2}=\Gamma_{0}$, the first strongly critical number. } \end{enumerate} \end{remark} {\bf Ramification, level and hierarchy}. In the proof-theoretic analysis of predicative parts of second order arithmetic (Sch\"utte et al.) second order variables $X$ are {\em stratified} into the {\em ramified analytic hierarchy} according to contexts (occurrences of $X$ in proofs): $\omega$-models ${\cal M}_{\alpha}=(\omega,M_{\alpha};0,+,\cdot,\ldots)$. Put $M_{0}=\Delta^{0}_{1}$ and let $M_{\alpha+1}$ denote the collection of definable subsets of $\omega$ in ${\cal M}_{\alpha}$. E.g., $M_{1}=\Pi^{1}_{0}$. Alternatively we can set $M_{\alpha}$ as the jump hierarchy. For the ID theories by Pohlers (local predicativity) the least fixed point $I=I_{<\Omega}$ is stratified into $\bigcup\{I_{\alpha}:\alpha<\Omega\}$. In J\"ager's case the $L_{\alpha}$ do the same job. KPi is a constructive ZF in a sense: KPi is equivalent to each one of Feferman's $T_{0}$, Martin-L\"of's type theory (1984), and $\Delta^{1}_{2}\mbox{-CA+BI}$. Using the following lemma we see that $\Delta^{1}_{2}\mbox{-CA}$ is derived from $\Delta_{1}\mbox{-Sep}$. $Ad(d)$ designates that $d$ is admissible. Note that there is a $\Pi^{0}_{3}$ sentence $\theta$ so that for any transitive $d$, $Ad(d)\leftrightarrow \theta^{d}$, a $\Delta_{0}$ formula. \begin{lemma}\label{lem:Quant}Let $\sigma$ be a limit of admissible ordinals. \begin{enumerate} \item For each $\Pi^{1}_{1}$ formula $A(n,X)$ there exists a $\Sigma_{1}$ formula $A_{\Sigma}(n,X)$ in the language of set theory so that (cf. (\ref{eqn:BK}).)
\[L_{\sigma}\models Ad(d)\,\&\, n\in\omega\,\&\, X\subseteq\omega\,\&\, X\in d\rightarrow[A(n,X)\leftrightarrow A_{\Sigma}^{d}(n,X)]\] \item For each $\Sigma^{1}_{2}$ formula $F(n,Y)$ with a set parameter $Y$ there exists a $\Sigma_{1}$ formula $A_{\Sigma}(n,Y)$ so that, for \begin{eqnarray} && F_{\Sigma}(n,Y)\Leftrightarrow_{df}\exists d[Ad(d)\,\&\, Y\in d\,\&\, A^{d}_{\Sigma}(n,Y)] \nonumber \\ && L_{\sigma}\models n\in\omega\,\&\, Y\subseteq\omega\rightarrow\{F(n,Y)\leftrightarrow F_{\Sigma}(n,Y)\} \label{eq:Quant} \end{eqnarray} \item For each $\Sigma^{1}_{m+1}$ formula $F(n,Y)$ with a set parameter $Y$ there exists a $\Sigma_{m}$ formula $F_{\Sigma_{m}}(n,Y)$ so that \[L_{\sigma}\models n\in\omega\,\&\, Y\subseteq\omega\rightarrow\{F(n,Y)\leftrightarrow F_{\Sigma_{m}}(n,Y)\} \] \end{enumerate} \end{lemma} \section{Prehistory to Mahlo}\label{sec:5} J\"ager \cite{Jaeger84a}, Pohlers \cite{Pohlers87} and Sch\"utte \cite{Schuette88} (1984-1988) investigated $\rho$-inaccessible ordinals. $0$-inaccessibles are regular cardinals. $(\rho+1)$-inaccessibles are regular fixed points of the function $\pi_{\rho}(\alpha)$. For limit $\lambda$, $\lambda$-inaccessibles are $\rho$-inaccessibles for any $\rho<\lambda$. $\pi_{\rho}(\alpha)$ is the enumerating function of the continuous closure of the $\rho$-inaccessibles. E.g., $\pi_{0}(\alpha)=\omega_{\alpha}$, $1$-inaccessibles are weakly inaccessibles, and $\pi_{1}(\alpha)$ enumerates the weakly inaccessibles and their limits. This hierarchy $\pi_{\rho}(\alpha)$ of functions reminds us of the Veblen function $\varphi_{\alpha}\beta$. \section{Recursive notation systems of ordinals}\label{sec:6} {\em Ordinal diagrams} by Takeuti and us are just finite sequences of symbols together with an order relation between them. Set-theoretic interpretations may be given for the constructors of o.d.'s a posteriori. The order relation and constructors on o.d.'s reflect rewriting steps on finite proof figures. To show the well-foundedness of o.d.'s is the central matter.
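As the simplest instance of the scheme (G1)-(G4) of Section \ref{sec:1}, one may keep in mind Gentzen's analysis of PA; the following is only a sketch under the standard reading:

```latex
\begin{eqnarray*}
{\cal O} &=& \{\mbox{terms } \omega^{\alpha_{1}}+\cdots+\omega^{\alpha_{n}}
  \mbox{ in Cantor normal form, } \alpha_{1}\geq\cdots\geq\alpha_{n}\},
  \mbox{ the terms below } \varepsilon_{0} \\
o(r(P,n)) &<& o(P) \mbox{ for each reduction step, so well-foundedness of }
  <\mbox{ below } \varepsilon_{0} \mbox{ yields } |\mbox{PA}|_{\Pi^{1}_{1}}\leq\varepsilon_{0}
\end{eqnarray*}
```

Here the well-foundedness proof (G4) is carried out by transfinite induction up to each $\alpha<\varepsilon_{0}$.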
By contrast, the recursive notation systems of ordinals by Buchholz, Rathjen et al.\ are built within set theory. First, (large) cardinals are supposed to exist, {\it cf}. subsubsection \ref{subsubsec:8.1.1}. Then one defines some functions ({\em collapsing functions}) on ordinals to get a structure $(T;<,\Omega,\psi_{\Omega},\ldots)$. Thus we have a set-theoretic interpretation and the well-foundedness of the structure in hand a priori, assuming the existence of the relevant large cardinals. After that the structure is shown to be isomorphic to a {\em recursive} structure $(T;<,\Omega,\psi_{\Omega},\ldots)\simeq (\hat{T};\hat{<},\hat{\Omega},\hat{\psi_{\Omega}},\ldots)$. Further, if the latter is shown to be well-founded in a relevant theory, then the assumption of the existence of large cardinals is finally discarded as a figure of speech. Another route to discarding the assumption is to show that either the {\em recursive analogue} of a large cardinal suffices to model the structure, \\ $(T;<,\Omega,\psi_{\Omega},\ldots)\simeq (\check{T};<,\check{\Omega}=\omega^{CK}_{1},\check{\psi_{\Omega}},\ldots)$ ({\it cf}. Pohlers \cite{LNM1407} (1989).), or the construction of the structure $(T;<,\Omega,\psi_{\Omega},\ldots)$ is carried out (mimicked) in a constructive set theory or a type theory (Rathjen, Griffor, Setzer). When the latter route is pursued, we have to show further that, e.g., a constructive set theory is reduced to a recursive analogue. \section{Proof theory of recursively large ordinals}\label{sec:7} Let ${\cal L}_0$ denote the first order language whose constants are: $=$ (equal), $<$ (less than), $0$ (zero), $1$ (one), $+$ (plus), $\cdot$ (times), $j$ (G\"odel's pairing function on $Ord$),\\ $( )_0 ,( )_1$ (projections, i.e., inverses to $j$).
For each $\Delta_0$ formula ${\cal A}(X,a,b)$ with a binary predicate $X$ in ${\cal L}_{0}\cup\{X\}$ we introduce a binary predicate constant $R^{\cal A}$ and a ternary one $R^{\cal A}_{<}$ by transfinite recursion on ordinals $a$: \[ b\in R^{\cal A}_{a} \: \Leftrightarrow_{df} \: R^{\cal A} (a,b)\: \Leftrightarrow \: {\cal A}(R^{\cal A}_{<a} ,a,b) \] with $R^{\cal A}_{<a} =\sum_{x<a}R^{\cal A}_x = \{(x,y):x<a \, \& \, y\in R^{\cal A}_x\}$. The language ${\cal L}_1$ is obtained from ${\cal L}_0$ by adding the predicate constants $R^{\cal A}$ and $R^{\cal A}_{<}$ for each bounded formula ${\cal A}(X,a,b)$ in ${\cal L}_{0}\cup\{X\}$. Let $F:Ord\rightarrow L$ denote (a variant of) G\"odel's onto map from the class $Ord$ of ordinals to the class $L$ of constructible sets. The language ${\cal L}_{1}$ is chosen so that the set-theoretic membership relation $\in$ on $L$ is interpretable by a $\Delta_{0}$-formula $\in (E,a,b)$ in ${\cal L}_{1}$: \[a\varepsilon b \Leftrightarrow_{df}F(a)\in F(b)\Leftrightarrow \in (E,a,b) \,\&\, a\equiv b \Leftrightarrow_{df} F(a)=F(b)\Leftrightarrow =(E,a,b)\] Thus instead of developing an ordinal analysis of a set theory we can equally develop a proof theory for theories of ordinals. Every multiplicative principal number $\alpha=\omega^{\omega^{\beta}}$ is closed under each function constant in ${\cal L}_{0}$. In particular $\alpha$ is closed under the pairing function $j$ and hence each finite sequence $\bar{\beta}<\alpha$ is coded by a single $\beta<\alpha$. Let $\alpha=\langle \alpha;0,1,+,\cdot,\ldots,R^{\cal A}|\alpha,\ldots\rangle$ denote the ${\cal L}_{1}$-model with the universe $\alpha$. We sometimes identify the set $L_{\alpha}$ with a multiplicative principal number $\alpha$ since $L_{\alpha}=F"\alpha$. The $\Pi_2^\Omega$-ordinal $|\mbox{T}|_{\Pi_{2}^{\Omega}}$ of a sound and recursive theory T of ordinals is defined similarly as before.
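For the closure of multiplicative principal numbers under $+$ and $\cdot$ used above, a routine check with Cantor normal forms suffices; e.g., in $\omega^{\omega}$, with $\gamma=\omega^{3}\cdot 2$ and $\delta=\omega^{5}$ (both $<\omega^{\omega}$):

```latex
\begin{eqnarray*}
\gamma+\delta &=& \omega^{3}\cdot 2+\omega^{5} \;=\; \omega^{5} \;<\; \omega^{\omega} \\
\gamma\cdot\delta &=& (\omega^{3}\cdot 2)\cdot\omega^{5} \;=\; \omega^{3+5}
  \;=\; \omega^{8} \;<\; \omega^{\omega}
\end{eqnarray*}
```

Closure under the pairing function $j$ and its projections is the part that actually matters for the coding of finite sequences, and it depends on the particular choice of $j$.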
In order to get an upper bound for the $\Pi_2^\Omega$-ordinal $|\mbox{T}|_{\Pi_{2}^{\Omega}}$ of a theory T, we attach a {\it term\/} $o(\Gamma;P)$ to each sequent $\Gamma$ occurring in a proof $P$ in the theory T which ends with a $\Pi_{2}^{\Omega}$ sentence. The term $o(\Gamma;P)$ is built up from atomic diagrams and {\it variables\/} by applying constructors in a system $(O(\mbox{T}),<)$ of o.d.'s for T. The variables occurring in the term $o(\Gamma;P)$ are eigenvariables occurring below $\Gamma$. Thus the term $o(\Gamma_{end};P)$ attached to the endsequent of $P$ is a closed term, i.e., denotes an o.d. Also, each redex in our transformation is on the main branch, i.e., the rightmost branch of a proof tree, and is the lowermost one. Therefore when we resolve an inference rule $J$, no free variable occurs below $J$. Finally set \[o(P)=d_{\Omega}o(\Gamma_{end};P)\in O(\mbox{T})\!|\!\Omega(=\{\alpha\in O(T):\alpha<\Omega\}),\] where $d_{\Omega}\alpha$ is a collapsing function \[d_{\Omega}:\alpha\mapsto d_{\Omega}\alpha<\Omega\] The constructors applied in building the term $o(\Gamma;P)$ correspond to the inference rules occurring above $\Gamma$. For example, at an inference rule $(b\exists)$ \[\infer[(b\exists)]{\Gamma,\exists x<tA(x)}{\Gamma,s<t & \Gamma,A(s)}\] we set, with a complexity measure $gr(A)$ of formulae $A$, \[o(\Gamma,\exists x<tA(x))=o(\Gamma,s<t)\# o(\Gamma,A(s))\#s\# gr(A(s))\] Note that the instance term $s$ may contain variables, e.g., $s\equiv y\cdot z$.
Also, at an inference rule $(b\forall)$ \[\infer[(b\forall)]{\Gamma,\forall x<t A(x)}{\Gamma,x\not<t,A(x)}\] we substitute the term $t$ for the eigenvariable $x$ in the term $o(\Gamma,\forall x<t A(x))$; \[o(\Gamma,\forall x<t A(x))=o(\Gamma,x\not<t,A(x))[x:=t]\] Also, for example, to analyze (the inference rule corresponding to) the following axiom saying that $\Omega$ is $\Pi_{2}$-reflecting \[\forall u<\Omega [A^{\Omega}(u)\rightarrow \exists z<\Omega (u<z\,\&\, A^{z}(u))]\: (A \mbox{ is a } \Pi_{2}\mbox{ formula)}\] we introduce a new rule together with a new constructor $(\Omega,\alpha)\mapsto d_{\Omega}\alpha<\Omega$ of o.d.'s: \[\infer[(c)^{\Omega}_{d_{\Omega}\alpha}]{\Gamma^{\Omega},A^{d_{\Omega}\alpha}}{\Gamma^{\Omega},A^{\Omega}}\] with a set $\Gamma$ of $\Sigma_{1}$ sentences. $\alpha$ is chosen so that $\alpha=o(\Gamma^{\Omega},A^{\Omega})$. Now our theorem for an upper bound is stated as follows. \begin{theorem}\label{th:up} If $P$ is a proof of a $\Pi_{2}^{\Omega}$-sentence $A^{\Omega}$ in {\rm T}, then $A^\alpha$ is true with $\alpha=o(P)$. \end{theorem} \section{Reflecting ordinals}\label{sec:8} \begin{definition} {\rm (Richter and Aczel \cite{R-A}) } {\rm Let} $X\subseteq Ord$ {\rm denote a class of ordinals and} $\Phi$ {\rm a set of formulae in the language of set theory (or the language of theories of ordinals). Put} $X\!|\!\alpha=_{df}\{\beta\in X :\beta<\alpha \}$. {\rm We say that an ordinal} $\alpha\in Ord$ {\rm is} $\Phi$-reflecting on $X$ {\rm if} \[\forall A \in\Phi \mbox{ {\rm with parameters from }} L_\alpha [L_\alpha\models A \: \Rightarrow \: \exists\beta\in X\!|\!\alpha (L_\beta\models A)]\] {\rm If a parameter} $\gamma<\alpha$ {\rm occurs in} $A ${\rm , then it should be understood that} $\gamma<\beta${\rm .} \\ $\alpha$ {\rm is} $\Phi$-reflecting {\rm if} $\alpha$ {\rm is} $\Phi${\rm -reflecting on the class of ordinals} $Ord$.
\end{definition} This is known to be a recursive analogue to an indescribable cardinal $\kappa$: \[\forall R\subseteq V_{\kappa}[\langle V_{\kappa},\in,R\rangle\models A\Rightarrow\exists\alpha<\kappa(\langle V_{\alpha},\in,R\cap V_{\alpha}\rangle\models A)]\] {\bf Facts and definitions}. \cite{R-A} \begin{enumerate} \item $\alpha\in Ad \, \& \, \alpha>\omega \: \Leftrightarrow \: \alpha$ is recursively regular $\Leftrightarrow \: \alpha$ is $\Pi_2$-reflecting (on $Ord$) \\ with $Ad=_{df}$ the class of admissible ordinals \item $\alpha$ is recursively Mahlo $\Leftrightarrow$ $\alpha$ is $\Pi_2$-reflecting on $Ad.$ \item Put $M_n (X)=_{df}\{\alpha\in X:\alpha$ is $\Pi_n$-reflecting on $X\}$. Then for $n>0$, \[M_{n+1}(Ad)\subseteq M_n^\triangle (Ad), (M_n^\triangle)^\triangle (Ad), etc.,\] where $M_n^\triangle$ denotes the diagonal intersection of the operation \\ $X \mapsto M_n(X)$.\\ The least $\Pi_{n+1}$-reflecting ordinal is greater than, e.g., the least ordinal in $M_n^\triangle (Ad)$. \end{enumerate} From \cite{R-A} we know that $\Pi_{3}$-reflecting ordinals are recursive analogues to $\Pi^{1}_{1}$-indescribable cardinals, i.e., weakly compact cardinals. We say that $\kappa$ is $2${\it -regular} if for every $\kappa$-bounded $F:{}^{\kappa}\kappa\rarw{}^{\kappa}\kappa$ there exists an $\alpha$ such that $0<\alpha<\kappa$ and for any $f\in{}^{\kappa}\kappa$, if $\alpha$ is closed under $f$, then $\alpha$ is also closed under $F(f)$. Here $F$ is $\kappa$-bounded if \[\forall f\in{}^{\kappa}\kappa\forall\xi<\kappa\exists\gamma<\kappa\forall g\in{}^{\kappa}\kappa[g\gamma=f\gamma\rarw F(f)(\xi)=F(g)(\xi)]\] Then $\kappa$ is $2$-regular iff $\kappa$ is weakly compact. Let $\kappa$ be an admissible ordinal and $\xi<\kappa$.
We say that $\{\xi\}_{\kappa}$ maps $\kappa$-recursive functions to $\kappa$-recursive functions if \[\forall\beta<\kappa[\{\beta\}_{\kappa}:\kappa\rarw\kappa\Rightarrow\{\{\xi\}_{\kappa}(\beta)\}_{\kappa}:\kappa\rarw\kappa]\] An admissible $\kappa$ is said to be $2${\it -admissible} iff for any $\xi<\kappa$, if $\{\xi\}_{\kappa}$ maps $\kappa$-recursive functions to $\kappa$-recursive functions, then there exists an $\eta$ such that $\xi<\eta<\kappa$ and $\{\xi\}_{\eta}$ maps $\eta$-recursive functions to $\eta$-recursive functions. Then $\kappa$ is $2$-admissible iff $\kappa$ is $\Pi_{3}$-reflecting. \subsection{$\Pi_{2}$-reflection}\label{subsec:8.1} \subsubsection{A system $O(\Omega)$ of ordinal diagrams}\label{subsubsec:8.1.1} We define a system $O(\Omega)$ of ordinal diagrams. $O(\Omega)$ is equivalent to Takeuti's system $O(2,1)$, and the Howard ordinal is denoted by the o.d. $d_{\Omega}\varepsilon_{\Omega+1}$. Let $0,\Omega,+,\omega^{\alpha}$ (exponentiation with base $\omega$) and $d$ be distinct symbols. Each element of the set $O(\Omega)$, called an ordinal diagram, is a finite sequence of these symbols. $0,\Omega$ are atomic diagrams, and the constructors in the system $O(\Omega)$ are $+,\omega^{\alpha}$ and $d_{\Omega}:\alpha\mapsto d_{\Omega}\alpha$.\footnote{$\alpha$ in $d_{\Omega}\alpha$ is not restricted to the case $\alpha\geq\Omega$.} Each diagram of the form $d_{\Omega}\alpha$, as well as $\Omega$, is defined to be an epsilon number: \[\beta<d_{\Omega}\alpha\Rarw\omega^{\beta}<d_{\Omega}\alpha\] The order relations between epsilon numbers are defined as follows. \begin{enumerate} \item $d_{\Omega}\alpha<\Omega$ \item $d_{\Omega}\alpha<d_{\Omega}\beta$ holds if one of the following conditions is fulfilled.
\begin{enumerate} \item $d_{\Omega}\alpha\leq K_{\Omega}\beta(\Leftrightarrow_{df}\exists\delta\in K_{\Omega}\beta(d_{\Omega}\alpha\leq\delta))$ \item $K_{\Omega}\alpha<d_{\Omega}\beta(\Leftrightarrow_{df}\forall\gamma\in K_{\Omega}\alpha(\gamma<d_{\Omega}\beta))\,\&\, \alpha<\beta$ \end{enumerate} \item $K_{\Omega}\alpha$ denotes the finite set of subdiagrams of $\alpha$ which are of the form $d_{\Omega}\gamma$, i.e., $K_{\Omega}\alpha$ consists of the epsilon numbers below $\Omega$ which are needed for the unique representation of $\alpha$ in Cantor normal form. \end{enumerate} Then we have the following facts. \begin{description} \item [($<1$)] $d_{\Omega}\alpha<\Omega$ \item [($<2$)]$K_{\Omega}\alpha<d_{\Omega}\alpha$ \item [($<3$)]$K_{\Omega}\alpha\leq\alpha$ \item[($<4$)] $\beta<\Omega \, \&\, K_{\Omega}\beta<d_{\Omega}\alpha \: \Rightarrow\: \beta<d_{\Omega}\alpha$ \end{description} An essentially (or collapsibly) less than relation $\alpha\ll\beta$ is defined by \[\alpha\ll\beta\Leftrightarrow K_{\Omega}\alpha<d_{\Omega}\beta\,\&\,\alpha<\beta\Leftrightarrow d_{\Omega}\alpha<d_{\Omega}\beta\,\&\, \alpha<\beta\] The system $O(\Omega)$ is nothing but the notation system $D(\varepsilon_{\Omega+1})$ defined in \cite{R-W93}. Put \[k_{\Omega}\alpha=\max(K_{\Omega}\alpha\cup\{0\}) \,\&\, \Omega=\omega_{1}\mbox{ (the first uncountable cardinal)}\] Define sets $D(\alpha)$ and ordinals $d_{\Omega}\alpha$ by simultaneous recursion on $\alpha$ as follows: \begin{enumerate} \item $\{\Omega\}\cup (k_{\Omega}\alpha+1)\subseteq D(\alpha)$ \item $D(\alpha)$ is closed under $+,\omega^{\beta}$.
\item $\delta\in D(\alpha)\cap\alpha\Rarw d_{\Omega}\delta\in D(\alpha)$ \item $d_{\Omega}\alpha=\min\{\xi:\xi\not\in D(\alpha)\}$ \end{enumerate} Then we see \begin{enumerate} \item $d_{\Omega}\alpha<\Omega=\omega_{1}$ \item $d_{\Omega}\beta\leq K_{\Omega}\alpha\Rarw d_{\Omega}\beta<d_{\Omega}\alpha$ \item $\alpha<\beta\,\&\, K_{\Omega}\alpha<d_{\Omega}\beta\Rarw d_{\Omega}\alpha<d_{\Omega}\beta$ \item $d_{\Omega}\alpha=d_{\Omega}\beta\Rarw\alpha=\beta$ \item $d_{\Omega}\alpha=D(\alpha)\cap\Omega$ \item $\alpha\in D(\beta)\Leftrightarrow K_{\Omega}\alpha<d_{\Omega}\beta$ \end{enumerate} \subsubsection{Finitary analysis}\label{subsubsec:8.1.2} We explain our approach to an ordinal analysis by taking theories of $\Pi_{2}$-reflecting ordinals as an example. The fact that $\Omega$ is $\Pi_{2}$-reflecting is expressed by the following inference rule: \[\infer[(\Pi_{2}\mbox{-rfl})]{\Gamma} { \Gamma,A^{\Omega} & \neg\exists z(t<z<\Omega\wedge A^{z}),\Gamma} \] for any $\Pi_{2}$-formula $A^{\Omega}\equiv A\equiv\forall x\exists y B(x,y,t)$ with a {\it parameter term} $t$. $\mbox{T}_{2}$ denotes the theory obtained from $\mbox{T}_{0}$ by adding the inference rule $(\Pi_{2}\mbox{-rfl})$. $\mbox{T}_2$ is formulated in Tait's logic calculus. Let ${\cal L}_{c}$ denote the extension of ${\cal L}_{1}$ obtained by adding an individual constant $\beta$ for each o.d. $\beta<\Omega$. \[{\cal L}_{c}={\cal L}_{1}\cup\{\beta\in O(\Omega):\beta<\Omega\}\] We show \begin{theorem}\label{th:T2} \[\forall\Pi_2\: A(\mbox{{\rm T}}_2\vdash A^{\Omega} \: \Rightarrow \: \exists\alpha\in O(\Omega)\!|\!d_{\Omega}\varepsilon_{\Omega+1}\:A^\alpha ).\] \end{theorem} Let $P$ be a proof ending with a $\Pi^{\Omega}_{2}$ sentence $A^{\Omega}$. To each sequent $\Gamma$ in $P$, we assign a term $o(\Gamma;P)\in{\cal F}$ so that $A^\alpha$ is true with $\alpha=d_{\Omega}\alpha_0$ and $\alpha_0=o(P)$. This is proved by induction on $\alpha$.
To deal with the rule $(\Pi_2\mbox{-rfl})$ we introduce a new rule: \[\infer[(c)^{\Omega}_{d_{\Omega}\alpha}]{\Gamma,A^{d_{\Omega}\alpha}}{\Gamma,A^{\Omega}}\] where $\Gamma$ is a set of $\Sigma^{\Omega}_{1}$ sentences, $A^{\Omega}\equiv\forall x\exists yB$ is a $\Pi^{\Omega}_{2}$-sentence, and the following condition has to be satisfied: \begin{equation}\label{eq:cnd} o(\Gamma,A^{\Omega})\ll\alpha \end{equation} This rule is plausible in view of the Collapsing Lemma \ref{lem:Collapsing}. \begin{lemma}\label{lem:Collapsing}{\rm (\cite{J})} Collapsing Lemma: $\vdash^\alpha_\Omega\Gamma \, \& \, \Gamma\subset\Sigma_1 \: \Rightarrow \: d_{\Omega}\alpha\models\Gamma$ \end{lemma} where $\beta\models\Gamma \: \Leftrightarrow_{df} \: \bigvee\Gamma^\beta=\bigvee\{\exists x_1<\beta B_1,\ldots,\exists x_n<\beta B_n\}$ ($B_1,\ldots,B_n$ are bounded) is true in the model $\langle O(\Omega)\!|\!\beta;+,\cdot,j,\ldots,R^{\cal A}|\beta,\ldots\rangle$. When a $(\Pi_2\mbox{-rfl})$ \[\infer[(\Pi_{2}\mbox{-rfl})]{\Gamma} { \Gamma,A^{\Omega} & \neg\exists z(t<z<\Omega\wedge A^{z}),\Gamma} \] is to be analyzed, roughly speaking, we set $\alpha=o(\Gamma,A^{\Omega})$, substitute $d_{\Omega}\alpha$ for the variable $z$ [originally $z$ is replaced by $\Omega$], and replace the $(\Pi_2\mbox{-rfl})$ by a $(cut)$. The inference rule $(\Pi_{2}\mbox{-rfl})$ is resolved as follows: \[ \infer[J]{\Lambda} { \infer[(c)^{\Omega}_{d_{\Omega}\alpha}]{\Lambda,A^{d_{\Omega}\alpha}} { \infer*{\Lambda,A^{\Omega}}{\Gamma,A^{\Omega}} } & \infer*{\neg A^{d_{\Omega}\alpha},\Lambda} { \infer*{} { \infer*{\neg A^{d_{\Omega}\alpha},\Gamma}{\delta} } } } \] where \begin{enumerate} \item $\alpha=o(\Lambda,A^{\Omega})$. \item $(c)^{\Omega}_{d_{\Omega}\alpha}$ is the new inference rule, which says: if the $\Pi^{\Omega}_{2}$-sentence $A^{\Omega}$ is derivable with $\Sigma^{\Omega}_{1}$ side formulae $\Lambda$ and an o.d. $\alpha$, then we have $\Lambda,A^{d_{\Omega}\alpha}$, viz.
after substituting any $\delta<d_{\Omega}\alpha$ coming from the right upper part of the $(cut)\, J$ for the universal quantifier $\forall x<\Omega$ in $A^{\Omega}$, we should have $\beta<d_{\Omega}\alpha$ for any instance term $\beta<\Omega$ of the existential quantifier $\exists y<\Omega$ in $A^{\Omega}$. \item The right upper part of $J$ is obtained by inversion, i.e., by substituting the individual constant $d_{\Omega}\alpha$ for the variable $z$. $t<d_{\Omega}\alpha<\Omega$ follows from $t<\Omega$ and the fact that $t$ is contained in $\alpha$, {\it cf}. ($<4$). \end{enumerate} Then the points are that we have to retain the condition (\ref{eq:cnd}) $o(\Gamma,A)\ll\alpha$ in the rule $(c)$, and that if we have \[\infer[(c)]{\Gamma,\exists y<d_{\Omega}\alpha B}{\infer[(\exists)]{\Gamma,\exists yB(y)}{\Gamma,B(\beta_{0})}}\] then it should be the case that $\beta_{0}<d_{\Omega}\alpha$, i.e., $d_{\Omega}\beta\in K_{\Omega}\beta_{0}\Rarw d_{\Omega}\beta<d_{\Omega}\alpha$. First of all, $d_{\Omega}\beta$ occurs in a proof only because $d_{\Omega}\beta$ was generated at a $(c)$ and then substituted at a $(\Pi_2\mbox{-rfl})$. The latter condition $d_{\Omega}\beta<d_{\Omega}\alpha$ is ensured by the former (\ref{eq:cnd}) since $d_{\Omega}\beta\ll o(\Gamma,A)\ll\alpha$. The former condition (\ref{eq:cnd}) is retained since the only unbounded universal quantifier in $\Gamma,A$ is the outermost one $\forall x$ in $A$, and o.d.'s $\geq d_{\Omega}\alpha$ are forbidden to be substituted for $x$ by the restriction $\exists x<d_{\Omega}\alpha$ in $\neg A$. Observe that there exists a {\em gap} $[d_{\Omega}\alpha,\Omega)$ for o.d.'s occurring above a rule $(c)^{\Omega}_{d_{\Omega}\alpha}$. Namely, if $\beta<\Omega$ occurs above $(c)^{\Omega}_{d_{\Omega}\alpha}$, then $\beta<d_{\Omega}\alpha$.
This follows from the condition (\ref{eq:cnd}) and the fact ($<4$): \[\beta<\Omega \, \&\, K_{\Omega}\beta<d_{\Omega}\alpha \: \Rightarrow\: \beta<d_{\Omega}\alpha\] Thus Theorem \ref{th:T2} has been shown by a finitary analysis. \subsection{Summary of results}\label{subsec:8.2} \begin{tabular}{|c|c|c|c|} \hline ordinal & set-ordinal & arithmetic & ordinal diagrams \\ & theory & & \\ \hline rec. regular & $KP\omega$ & $\exists{\cal O}, \, ID_1$ & $O(\Omega)^*$ \\ \hline rec. inacc. & $KPi$ & $\Sigma^1_2-AC+BI$, & $O(1;I)^*$ \\ & & $SBL$ & \\ \hline rec. Mahlo & $KPM$ & & $O(\mu)^*$ \\ \hline $\Pi_n$-reflecting & $T_{n}$ & & $O(\Pi_n)$ \\ \hline $\Pi^1_1$-reflecting & $T_{1}^{1},S(2;1,1)$ & & $O(2;1,1)$ \\ \hline \end{tabular} $^*$ designates that the o.d.'s are shown to be optimal. In a letter \cite{W2} A. Weiermann informed me that an inspection of his work in \cite{W1} yields an embedding of $O(\mu)$ in the notation system $T(M)$ by Rathjen \cite{Rathjen90}. Thus via Rathjen's well-ordering proof in \cite{Rathjen94a} we get indirectly that $O(\mu)$ is best possible. Recently we showed that $\mbox{KPM}\vdash Wo[O(\mu)|\alpha]$ for each $\alpha<d_{\Omega}\varepsilon_{\mu+1}$ without referring to \cite{Rathjen94a}. \section{Stability}\label{sec:9} Reflecting ordinals are too small to model the axiom $\Sigma^{1}_{2}\mbox{-CA}$ of second order arithmetic, and hence theories for these ordinals are intermediate stages towards $\Sigma^{1}_{2}\mbox{-CA}$. We have to consider theories for ordinals below which there are stable ordinals. \begin{definition}\label{df:10nstbl} {\rm Let} $\kappa$ {\rm and} $\sigma<\kappa$ {\rm be ordinals and} $k$ {\rm a positive integer.
We say that} $\sigma$ {\rm is} $(\kappa,k)$-stable {\rm if} \[L_{\sigma}\prec_{\Sigma_{k}}L_{\kappa},\] {\rm that is, for any} $\Sigma_{k}$ {\rm formula} $A$ {\rm with parameters from} $L_{\sigma}$ \[L_{\kappa}\models A \Rarw L_{\sigma}\models A.\] \end{definition} Note that $(\kappa,1)$-stability is equivalent to $\kappa$-stability. \\ \\ {\bf Facts}. (cf.\ \cite{R-A} and \cite{M}.) For a countable $\sigma$, \begin{enumerate} \item $\sigma$ is $\Pi^1_{0}$-reflecting $\Leftrightarrow \: \sigma$ is weakly stable, i.e., $\beta$-stable for some $\beta>\sigma$. \item $\sigma$ is $\Pi^1_1$-reflecting $\Leftrightarrow \: \sigma$ is $\sigma^+$-stable. \item $\Pi^1_1$ on $L_\sigma=$ inductive on $L_\sigma=\Sigma_1$ on $L_{\sigma^+}$. \\ ($\sigma^+$ denotes the next admissible to $\sigma$.) \end{enumerate} \subsection{Summary of results}\label{subsec:9.1} The reason for this turn to stability is that the $\Sigma^1_{k+1}$-Comprehension Axiom for $k\geq 1$ is interpretable in a universe $L_{\kappa}$ such that $L_{\kappa}$ has $(\kappa,k)$-stable ordinals and $L_{\kappa}$ is a limit of admissible sets. Let $\sigma_{0}$ denote a $\Pi_{3}$ sentence in the language of set theory so that a transitive set $x$ is admissible iff $x\models\sigma_{0}$, {\it cf}. pp.315-316 in \cite{R-A}.
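As a concrete reading of Definition \ref{df:10nstbl} in the case $k=1$ (our illustration, not taken from the text): the formula ``$f$ maps $a$ onto $b$'' is $\Delta_{0}$ in $f,a,b$, so prefixing $\exists f$ yields a $\Sigma_{1}$ statement, and $(\kappa,1)$-stability of $\sigma$ then gives, for parameters $a,b\in L_{\sigma}$,

```latex
% Illustration: (kappa,1)-stability unwound on a sample Sigma_1 statement.
% "f maps a onto b" is Delta_0, so the quantified statement is Sigma_1;
% stability reflects its truth from L_kappa down to L_sigma:
\[
L_{\kappa}\models\exists f(f \mbox{ maps } a \mbox{ onto } b)
\ \Rarw\
L_{\sigma}\models\exists f(f \mbox{ maps } a \mbox{ onto } b)
\]
% i.e., any surjection between elements of L_sigma that exists in the
% larger universe L_kappa already exists in L_sigma.
```

Thus every surjection between elements of $L_{\sigma}$ that $L_{\kappa}$ can see is already present in $L_{\sigma}$.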
Let $st_{0}(x)$ denote the $\Pi_{0}$ formula: \[st_{0}(x)\equiv\sigma_{0}^{x}\,\&\, x \mbox{ is transitive.}\] Also for $k\geq 1$ let $st_{k}(x)$ denote a $\Pi_{k}$ formula such that for any admissible $\kappa$ \[L_{\kappa}\models st_{k}(\sigma) \Leftrightarrow L_{\sigma}\prec_{\Sigma_{k}}L_{\kappa}\] Let $\mbox{KP}\ell^{r}_{k}$ denote a set theory for limits of ordinals $\sigma$ with $st_{k}(\sigma)$.\footnote{The superscript $r$ in $\mbox{KP}\ell^{r}_{k}$ indicates that the foundation schema is restricted to sets.} \[(Lim)_{k} \: \forall x\exists y(x\in y\,\&\, st_{k}(y))\] Using Lemma \ref{lem:Quant} one can model the axiom $\Sigma^{1-}_{k+1}\mbox{-CA}$: \\ $\exists X[X=\{n\in\omega: F(n)\}]$ ($F(n)$ is a $\Sigma^{1-}_{k+1}$ formula without set parameter.) in the universe $L_{\kappa}$ which contains a $(\kappa,k)$-stable ordinal $\sigma<\kappa$ and $L_{\kappa}\models (Lim)_{0}$: \begin{eqnarray*} \{n\in\omega:L_{\kappa}\models F^{set}(n)\} & = & \{n\in\omega:L_{\kappa}\models F_{\Sigma_{k}}(n)\} \\ & = & \{n\in\omega:L_{\sigma}\models F_{\Sigma_{k}}(n)\}\in L_{\sigma+1}\subseteq L_{\kappa} \end{eqnarray*} For $\Sigma_{k+1}$ formula $\varphi(x)\equiv\exists y\theta(y,x)$ and $L_{\kappa}\models (Lim)_{k}$ \[\varphi(x)\leftrightarrow \exists\alpha[st_{k}(\alpha)\,\&\, x\in L_{\alpha}\,\&\,\varphi^{L_{\alpha}}(x)]\] This enables us to iterate $\Sigma_{1}$-stability proof theory in analysing $\Sigma_{k+1}$-stability. 
\begin{tabular}{|c|c|c|c|} \hline $A+1$ stables & $S(2;A+1)$ & $\Sigma^{1-}_2\mbox{-CA}_{1+A+1}$ & $O(2;A+1)^*$ \\ \hline limit $A$ stables & $S(2;A)$ & $\Sigma^1_2\mbox{-AC +BI}+$ & $O(2;A)^*$ \\ & & $\Sigma^{1-}_2\mbox{-CA}_{A}$ & \\ \hline $<\omega$-stables & $S(2;<\omega)$ & $\Sigma^1_2\mbox{-CA}_0$ & $O(2;<\omega)^*$ \\ \hline $\omega$-stables, & $\Sigma_1$-Sep, & $\Sigma^1_2\mbox{-CA+BI}$ & $O(2;\omega)^*$ \\ nonprojectible & $S(2;\omega)$ & & \\ \hline $<\omega^\omega$-stables & $S(2;<\omega^{\omega})$ & $\Sigma^1_3\mbox{-DC}_0$ & $O(2;<\omega^\omega)^*$ \\ \hline $<\varepsilon_0$-stables & $S(2;<\varepsilon_{0})$ & $\Sigma^1_3\mbox{-DC}$ & $O(2;<\varepsilon_0)^*$ \\ \hline $\Pi_2(St)$-reflecting & $\Pi_1$-Coll., & $\Sigma^1_3\mbox{-AC+BI}$ & $O(2;I)^*$ \\ on stables $St$ & $S(2;I)$ & & \\ \hline $A+1$ 2-stables & $S(3;A+1)$ & $\Sigma^{1-}_{3}\mbox{-CA}_{1+A+1}$ & $O(3;A+1)^*$ \\ &&& (?) \\ \hline $<\omega$ 2-stables & $S(3;<\omega)$ & $\Sigma^1_{3}\mbox{-CA}_0$ & $O(3;<\omega)^*$ \\ &&& (?) \\ \hline $<\omega^\omega$ 2-stables & $S(3;<\omega^{\omega})$ & $\Sigma^1_{4}\mbox{-DC}_0$ & $O(3;<\omega^{\omega})^*$ \\ &&& (?) \\ \hline $<\varepsilon_0$ 2-stables & $S(3;<\varepsilon_0)$ & $\Sigma^1_{4}\mbox{-DC}$ & $O(3;<\varepsilon_0)^*$ \\ &&& (?) \\ \hline \end{tabular} \\ \\ \[\Sigma^{1-}_2\mbox{-CA}_{1+A+1}\,: \: \exists\{X_a\}_{a<_{A}1\oplus A}\forall a<_{A}1\oplus A(X_a=\{n: F(n,a,X_{<_{A}\,a})\})\] $S(2;I)$ denotes a theory of ordinals $I$ such that $I$ is $\Pi_2(St)$-reflecting, where $St$ denotes the set of stable ordinals below $I$ and $\Pi_2(St)$ the set of $\Pi_2$ formulae $A$ in the language ${\cal L}_{1}\cup\{St\}$ so that the predicate constant $St$ may occur. Then the set theory $\mbox{KP}\omega+\Pi_1 \mbox{-Collection+V=L}$ is interpretable in $S(2;I)$. 
\subsection{Proof theory for $\Pi^{1}_{1}$-reflection}\label{subsec:9.2} A baby case for ordinals below which there is a stable ordinal is an ordinal $\pi^{+}$ such that $\pi^{+}$ is the next admissible to a $\pi^{+}$-stable ordinal $\pi$, viz. a $\Pi^{1}_{1}$-reflecting ordinal. Such a universe $L_{\pi^{+}}$ can be modelled in a theory $T^{1}_{1}$ for positive elementary inductive definitions on $L_{\pi}$: Fix an $X$-positive formula $A\equiv A(X^+,a)$ in the language ${\cal L}_{1}\cup\{X\}$. Let $Mp$ denote the set of multiplicative principal numbers $a\leq\pi$. Define a ternary predicate $I_<$ by: \[\forall a\in Mp\forall b<a^+[I^a_{<b}=\bigcup_{d<b}I^a_d=\bigcup_{d<b}\{c<a:A^a(I^a_{<d},c)\}]\] That is to say, for each $a\in Mp,a\leq\pi$ and $b<a^+$, $I^a_{<b}$ is the subset of $a=\{c:c<a\}$ generated inductively by the positive formula $A$ on the model $<a;+,\cdot,\ldots,R^{\cal A},\ldots>$, uniformly with respect to the multiplicative principal number $a$. The axioms of the theory $T^{1}_{1}$ then say that the universe $\pi^+$ is $\Pi_2$-reflecting, together with the axiom $(\Pi^1_1\mbox{-rfl})$: \[\forall c<\pi[c\in I^\pi_{<\pi^+}\rightarrow\exists \beta\in (c,\pi)\cap Mp(c\in I^\beta_{<\beta^+})]\] where $c\in I^a_{<a^+}\Leftrightarrow_{df}\exists z<a^+\, A^a(I^a_{<z},c)$. The theory $T^{1}_{1}$ is designed so that a theory $S_{1}^{1}$ for ordinals $\pi^{+}$ with $L_{\pi}\prec_{\Sigma_{1}}L_{\pi^{+}}$ is interpretable in $T_{1}^{1}$. Let us examine the crucial case. \[\infer[(\Pi^1_1\mbox{-rfl})\, J]{} { \neg(\alpha<b<\pi),\forall x<b^+\neg A^b(I^b_{<x},\alpha) & \infer[(\exists)]{\exists x<\pi^+ A^\pi(I^\pi_{<x},\alpha)} {A^\pi(I^\pi_{<\xi},\alpha)} }\] with $\alpha\in I^\pi_{<\pi^+}\equiv\exists x<\pi^+ A^\pi(I^\pi_{<x},\alpha)$, etc. First consider the easy case:\\ {\bf Case1}. $\xi<\pi$: Then the above $(\Pi^1_1\mbox{-rfl})\, J$ says that $\pi$ is $\Pi_\xi$-reflecting.
So define $\sigma=d_\pi$ such that $\xi,\alpha<\sigma<\pi$ and substitute $\sigma$ for the variable $b$.\\ Next consider the general case:\\ {\bf Case2}. $\xi\geq\pi$: Pick a $\sigma=d_\pi$ as above and substitute $\sigma$ for $b$. We need to compute a $\xi'$ such that $\sigma\leq\xi'<\sigma^+$ and resolve the $(\Pi^1_1\mbox{-rfl})\, J$: \[\infer[(cut)]{} { \neg A^\sigma(I^\sigma_{<\xi'},\alpha) & \infer[(c)^\pi_\sigma \, I]{A^\sigma (I^\sigma _{<\xi'},\alpha)} { \infer[J]{A^\pi(I^\pi_{<\xi},\alpha)} { \alpha\not\in I^b_{<b^+} & \alpha\in I^\pi_{<\pi^+},A^\pi(I^\pi_{<\xi},\alpha) } } } \] The problem is that we have to be consistent with the part \[\infer{} { \neg A^\sigma(I^\sigma_{<\xi'},\alpha) & \infer{A^\sigma (I^\sigma _{<\xi'},\alpha)} {A^\pi(I^\pi_{<\xi},\alpha)} } \] namely \[A^\sigma(I^\sigma_{<\xi'},\alpha)\leftrightarrow A^\pi(I^\pi_{<\xi},\alpha)\] This requires a function $F :\xi\mapsto\xi'$ such that \begin{description} \item[$(F1)$] $F$ is order preserving, \\ and in view of {\bf Case1}, \item[$(F2)$] $F$ is the identity on $<\pi$, i.e., $\xi\in dom(F)|\pi\Rarw F(\xi)=\xi$ \item [$(F3)$] $rng(F)<\sigma^+$. \end{description} Note that, here, $dom(F)$ is a {\em proper subset} of $\{\xi\in O(\Pi^1_1):\xi<\pi^+\}$ with a system $O(\Pi^1_1)$ of o.d.'s for the theory $T^1_1$. We can safely set \[dom(F )=\{\xi\in O(\Pi^1_1):K_\pi\xi<\sigma\} \] i.e., every subdiagram $\beta<\pi$ of a $\xi\in dom(F)$ is $<\sigma$, since $dom(F)$ is the set of o.d.'s that may occur in the upper part of the $(c)^\pi_\sigma\, I$. In particular we have \[dom(F )|\pi=O(\Pi^1_1)|\sigma\] This is possible since there exists a {\em gap} $[\sigma,\pi)$ for o.d.'s occurring above the rule $(c)^\pi_\sigma$. Can we take the function $F$ to be a collapsing function, e.g., $d_\pi$? The answer is no. We cannot expect, for $\xi,\zeta\in dom(F)$, that $\xi<\zeta \Rightarrow\xi\ll_\pi\zeta$ or something like an essentially less than relation.
And what is worse is that the function $F$ has to preserve atomic sentences in ${\cal L}_0$. \begin{description} \item[$(F4)$] $F$ preserves atomic sentences in ${\cal L}_0$, i.e., the diagrams of the ${\cal L}_0$ models \\ $< dom(F ) ; +,\cdot,\ldots>$ and $< rng(F ) ; +,\cdot,\ldots>$. \end{description} To sum up $(F1)-(F4)$, \begin{description} \item[(*)] $F$ is an embedding from the ${\cal L}_0$ model $<dom(F);+,\cdot,\ldots>$ \\ to $<rng(F); +,\cdot,\ldots>$ over $O(\Pi_1^1)|\sigma$. \end{description} Now our solution for $F$ is a trite one: a {\em substitution} $[\pi:=\sigma]$. \begin{description} \item [$(F5)$] $F(\xi)=\xi$ if $\xi<\pi\, (\Leftrightarrow \xi<\sigma)$ \item [$(F6)$] $F$ commutes with $+$ and the Veblen function $\varphi$, e.g., \\ $F(\xi+\zeta)=F(\xi)+F(\zeta)$. \item [$(F7)$] $F(\pi)=\sigma$ and $F(\pi^+)=\sigma^+$. \item [$(F8)$] $F(d_{\pi^+}\beta)=d_{\sigma^+}F(\beta)$. \end{description} Assume $\pi<\xi<\pi^+$ with a strongly critical $\xi$. Such a $\xi$ is of the form $d_{\pi^+}\beta$ and is introduced when a $(\Pi_2\mbox{-rfl})$ for the universe $\pi^+$ is resolved. Then this $F$ meets (*), in particular $(F1)$: Note that we have \begin{description} \item[$(F9)$] $F(K_{\pi^+}\beta)=K_{\sigma^+}F(\beta)$, \end{description} and by definition $d_{\pi^+}\beta<d_{\pi^+}\gamma \, \Leftrightarrow \, 1. \, \beta<\gamma \,\&\, K_{\pi^+}\beta< d_{\pi^+}\gamma \mbox{ or } 2. \, d_{\pi^+}\beta\leq K_{\pi^+}\gamma$ and similarly for $\sigma^+$. In fact a miniature $[\sigma,\varepsilon_{\sigma^{+}+1})$ of $[\pi,\varepsilon_{\pi^{+}+1})$ is formed by a realisation $F$ of the Mostowski collapsing function. In this way we can resolve a $(\Pi_1^1\mbox{-rfl})$ by setting $\xi'=F(\xi)$: each o.d. $\xi$ in the upper sequent of a $(c)$ is replaced by $F(\xi)$ in the lower sequent. \section{The future: uncountable cardinals}\label{sec:10} It is important to find an {\it equivalent and right axiom} when beginning a proof-theoretic analysis of recursively large ordinals.
For example, the nonprojectible ordinal $\kappa$ was analysed by us as a limit of $\kappa$-stable ordinals. At least for us the latter formulation was essential: if we adopted other axioms, e.g., there is no $\kappa$-recursive injection $f:\kappa\rarw\alpha<\kappa$ or $L_{\kappa}\models \Sigma_{1}\mbox{-Separation}$, then an analysis of these axioms would be difficult for us. Therefore in this final section we give an equivalent condition for $\kappa$ to be an uncountable cardinal. The condition is again a submodel condition, saying that $\kappa$ has an appropriate submodel. So it may be possible to analyse such a universe by extending the proof theory for $\Sigma_{k}$-stability in the near future. \begin{definition}\label{df:crd} {\rm Let} $\sigma$ {\rm be a recursively regular ordinal and} $\omega\leq\alpha<\kappa<\sigma$. \begin{enumerate} \item {\rm We say that} $\kappa<\sigma$ {\rm is} a $\sigma$-cardinal{\rm , denoted} $L_{\sigma}\models \kappa \mbox{ is a cardinal }$ iff \[L_{\sigma}\models\forall\alpha\in[\omega,\kappa)[\mbox{there is no surjective map } f:\alpha\rarw\kappa]\] \item \[L_{\sigma}\models card(\alpha)<card(\kappa)\Leftrightarrow_{df}L_{\sigma}\models \mbox{there is no surjective map } f:\alpha\rarw\kappa\] \end{enumerate} \end{definition} \begin{theorem}\label{th:crdlocal}Let $\sigma$ be a recursively regular ordinal and $\kappa,\alpha$ multiplicative principal numbers with $\omega\leq\alpha<\kappa<\sigma$.
Then the following conditions are mutually equivalent: \begin{enumerate} \item \begin{eqnarray} && \exists(\pi,\pi\kappa,\pi\sigma)[\alpha<\pi\leq\pi\kappa<\pi\sigma<\kappa\,\&\, \nonumber \\ && \forall \Sigma_{1}\,\varphi\forall a<\pi(L_{\sigma}\models\varphi[\kappa,a]\rarw L_{\pi\sigma}\models\varphi[\pi\kappa,a])] \label{eqn:crdlocal1} \end{eqnarray} \item \begin{equation}\label{eqn:crdlocal2} {\cal P}(\alpha)\cap L_{\sigma}\subseteq L_{\kappa} \end{equation} \item \begin{equation}\label{eqn:crdlocal3} L_{\sigma}\models card(\alpha)<card(\kappa) \end{equation} \end{enumerate} \end{theorem} In what follows $\sigma$ denotes a recursively regular ordinal and $\alpha,\kappa$ multiplicative principal numbers with $\omega\leq\alpha<\kappa<\sigma$. \begin{lemma}\label{lem:crdlocal1} (\ref{eqn:crdlocal1})$\Rarw$(\ref{eqn:crdlocal2}) \end{lemma} {\bf Proof}.\hspace{2mm} First note that $L_{\kappa}\prec_{\Sigma_{1}}L_{\sigma}$. Define a $\Delta_{1}$-partial map $S:dom(S)\rarw{\cal P}(\alpha)\cap L_{\kappa}\, (dom(S)\subseteq\kappa)$ as follows. First set $S_{0}=\emptyset$ and let $S_{\beta}$ denote the $<_{L}$ least $X\in{\cal P}(\alpha)\cap L_{\kappa}$ such that $\forall\gamma<\beta(X\not\in S_{\gamma})$. It suffices to show that ${\cal P}(\alpha)\cap L_{\sigma}\subseteq\{S_{\beta}\}=rng(S)$. Suppose there exists an $X\in{\cal P}(\alpha)\cap L_{\sigma}$ so that $\forall\beta<\kappa(S_{\beta}\neq X)$ and let $X_{0}$ denote the $<_{L}$-least such set. 
Then $X_{0}$ is $\Sigma_{1}$ definable in $L_{\sigma}$: there exists a $\Delta_{1}$ formula \[\varphi(X,\alpha,\kappa)\Leftrightarrow_{df} \theta(X,\alpha,\kappa)\,\&\, \forall Y<_{L}X\neg\theta(Y,\alpha,\kappa)\] with \[\theta(X,\alpha,\kappa)\Leftrightarrow_{df}X\subseteq\alpha\,\&\,\forall\beta<\kappa(S_{\beta}\neq X)\] so that \[L_{\sigma}\models\varphi(X_{0},\alpha,\kappa)\,\&\, L_{\sigma}\models\exists !X\varphi(X,\alpha,\kappa)\] By (\ref{eqn:crdlocal1}) we have $L_{\pi\sigma}\models\exists X\varphi(X,\alpha,\pi\kappa)$, i.e., there exists the $<_{L}$-least $X_{1}\in{\cal P}(\alpha)\cap L_{\pi\sigma}\subseteq {\cal P}(\alpha)\cap L_{\kappa}$ such that $\forall\beta<\pi\kappa(<\kappa)(S_{\beta}\neq X_{1})$. This means that $X_{1}=S_{\pi\kappa}$. We show $X_{1}=X_{0}$; this yields a contradiction, since then $X_{0}=S_{\pi\kappa}$ with $\pi\kappa<\kappa$, whereas $X_{0}$ was chosen so that $\forall\beta<\kappa(S_{\beta}\neq X_{0})$. Denote $x\in a$ by $x\in^{+}a$ and $x\not\in a$ by $x\in^{-}a$. For any $\gamma<\alpha$, again by (\ref{eqn:crdlocal1}) we have \begin{eqnarray*} && \gamma\in^{\pm} X_{0}\Leftrightarrow L_{\sigma}\models\exists X(\gamma\in^{\pm}X\,\&\, \varphi(X,\alpha,\kappa)) \Rightarrow\\ && L_{\pi\sigma}\models\exists X(\gamma\in^{\pm}X\,\&\, \varphi(X,\alpha,\pi\kappa)) \Leftrightarrow \gamma\in^{\pm}X_{1} \end{eqnarray*} \hspace*{\fill} $\Box$ \begin{lemma}\label{lem:crdlocal2} (\ref{eqn:crdlocal2})$\Rarw$(\ref{eqn:crdlocal3}) \end{lemma} {\bf Proof}.\hspace{2mm} Argue in $L_{\sigma}$. Suppose there exists a surjective map $f:\alpha\rarw\kappa$. Pick a surjective map (in $L_{\sigma}$) $g:\kappa\rarw L_{\kappa}$. Let $F:\alpha\rarw{\cal P}(\alpha)(\cap L_{\sigma})$ denote the map given by \[F(\beta)=\left\{ \begin{array}{ll} g(f(\beta)) & g(f(\beta))\in{\cal P}(\alpha) \\ \emptyset & \mbox{otherwise} \end{array} \right. \] Then by (\ref{eqn:crdlocal2}), i.e., ${\cal P}(\alpha)\cap L_{\sigma}\subseteq L_{\kappa}$, the map $F$ is surjective. Also $F\subseteq\alpha\times L_{\kappa}$ is $\Delta_{0}$ and hence $F\in L_{\sigma}$ by $\Delta_{0}$-Separation.
Define $X\in{\cal P}(\alpha)\cap L_{\sigma}$ by \[X=\{\beta<\alpha:\beta\not\in F(\beta)\}\] Then $X=F(\gamma)$ for some $\gamma<\alpha$ and $\gamma\in X\Leftrightarrow \gamma\not\in F(\gamma)=X$. This is a contradiction. \hspace*{\fill} $\Box$ \begin{lemma}\label{lem:crdlocal3} (\ref{eqn:crdlocal3})$\Rarw$(\ref{eqn:crdlocal1}) \end{lemma} {\bf Proof}.\hspace{2mm} Since $\alpha$ is a multiplicative principal number, each finite sequence $\bar{\beta}<\alpha$ is coded by a single $\beta<\alpha$. We define a $\Sigma_{1}$ subset $X$ of $L_{\sigma}$ (the $\Sigma_{1}$-Skolem hull of $\alpha\cup\{\alpha,\kappa\}$ in $L_{\sigma}$): Let $\{\varphi_{i}:i\in\omega\}$ denote an enumeration of $\Sigma_{1}$-formulae of the form $\varphi_{i}\equiv\exists y\theta_{i}(x,y;z,u,v)$ with fixed variables $x,y,z,u,v$. Set for $\beta<\alpha$ \[r(i,\beta)\simeq \mbox{ the } <_{L} \mbox{ least } c\in L_{\sigma}[L_{\sigma}\models\theta_{i}((c)_{0},(c)_{1};\beta,\alpha,\kappa)]\] \[h(i,\beta)\simeq(r(i,\beta))_{0}\] and \[X=rng(h)=\{h(i,\beta)\in L_{\sigma}:i\in\omega, \beta<\alpha\}\] Clearly $r$ and $h$ are partial $\Sigma_{1}$ maps whose domains are $\Sigma_{1}$ subsets of $\omega\times\alpha$. First note that \begin{equation}\label{eqn:crdlocal3.1} \alpha\cup\{\alpha,\kappa\}\subseteq X \end{equation} Next we show \begin{claim}\label{clm:crdlocal3.1} For any $\Sigma_{1}(X)$-sentence $\varphi(\bar{a})$ with parameters $\bar{a}$ from $X$ \[L_{\sigma}\models \varphi(\bar{a})\Leftrightarrow X\models \varphi(\bar{a})\] Namely \[X\prec_{\Sigma_{1}}L_{\sigma}\] \end{claim} {\bf Proof} of Claim \ref{clm:crdlocal3.1}. Suppose $L_{\sigma}\models\exists v\theta(v,\bar{a})$ with $\bar{a}\subseteq X$. It suffices to show that there exists a $b\in X$ so that $L_{\sigma}\models \theta(b,\bar{a})$. For each $a_{k}\in\bar{a}$ pick a $\Sigma_{1}$-formula $\varphi_{i_{k}}\equiv \exists y\theta_{i_{k}}(x,y;z,u,v)$ and $\beta_{k}<\alpha$ so that $h(i_{k},\beta_{k})\simeq a_{k}$.
Then \[L_{\sigma}\models \exists v\exists\bar{x}[\theta(v,(\bar{x})_{0})\,\&\, \bigwedge_{k}[x_{k} \mbox{ is the } <_{L}\mbox{-least } w \mbox{ such that } \theta_{i_{k}}((w)_{0},(w)_{1};\beta_{k},\alpha,\kappa)]]\] where $(\bar{x})_{0}=(x_{0})_{0},\ldots,(x_{n})_{0}$ with $\bar{x}=x_{0},\ldots,x_{n}$. Hence the assertion follows. \hspace*{\fill} {\it End of Proof of Claim} Suppose for the moment that the $\Sigma_{1}$-subset $dom(h)\subseteq\omega\times\alpha$ is an element of $L_{\sigma}$ ($\sigma$-finite). Then $h$ is $\Delta_{1}$ and $X=rng(h)$ is a $\Delta_{1}$-subset of $L_{\sigma}$. We show \begin{claim}\label{clm:crdlocal3.2} Assume $dom(h)\in L_{\sigma}$. Then there exists a triple $(\pi,\pi\kappa,\pi\sigma)$ satisfying (\ref{eqn:crdlocal1}). \end{claim} {\bf Proof} of Claim \ref{clm:crdlocal3.2}. By Claim \ref{clm:crdlocal3.1} and the Condensation Lemma ({\it cf}. p.80 in \cite{Devlin}) we have an isomorphism (Mostowski collapsing function) $F:X\leftrightarrow L_{\pi\sigma}$ for an ordinal $\pi\sigma\leq\sigma$ such that $F| Y=id| Y$ for any transitive $Y\subseteq X$. We show first that $\pi\sigma<\kappa$. Suppose $\kappa\leq\pi\sigma$. The collapsing function $F$ is defined by the following recursion: \[F(x)=\{F(y):y\in x\,\&\, y\in X\}\] Since $X$ is $\Delta_{1}$, $F$ is a $\Delta_{1}$-function. The $\Delta_{1}$ map $h$ maps $\omega\times\alpha$ onto $X$ and the $\Delta_{1}$ map $F$ maps $X$ onto $L_{\pi\sigma}\supseteq L_{\kappa}$. Hence the composition $F\circ h$ maps $\omega\times\alpha$ onto $L_{\pi\sigma}$. Let $G$ denote a restriction of $F\circ h$ so that $rng(G)=L_{\kappa}$. Then its domain $dom(G)$ is a $\Delta_{1}$-subset of $dom(h)$ and hence $dom(G)\in L_{\sigma}$. Therefore by combining this with a surjective map from $\alpha$ onto $\omega\times\alpha$ we get a $\Delta_{1}$ map $f\subseteq\alpha\times\kappa$ such that $dom(f)=\alpha$ and $rng(f)=\kappa$. $\Delta_{1}$-Separation in $L_{\sigma}$ yields $f\in L_{\sigma}$.
This is a contradiction since $card(\alpha)<card(\kappa)$ in $L_{\sigma}$. Thus we have shown $\pi\sigma<\kappa$. Let $\pi$ denote the least ordinal not in $X$ and set $\pi\kappa=F(\kappa)$. Then $F(a)=a$ for any $a<\pi$. Also clearly $\alpha<\pi\leq\pi\kappa<\pi\sigma<\kappa$. For a $\Sigma_{1}$ sentence $\varphi[\kappa,a]$ with a parameter $a<\pi$ assume $L_{\sigma}\models\varphi[\kappa,a]$. Then $X\models\varphi[\kappa,a]$ and hence $L_{\pi\sigma}\models\varphi[\pi\kappa,a]$ as desired. \hspace*{\fill} {\it End of Proof of Claim} \\ \\ Thus it remains to show the \begin{claim}\label{clm:crdlocal3.3} $dom(h)\in L_{\sigma}$. \end{claim} {\bf Proof} of Claim \ref{clm:crdlocal3.3}. $dom(h)=\{(i,\beta)\in\omega\times\alpha: L_{\sigma}\models\exists c\theta_{i}((c)_{0},(c)_{1};\beta,\alpha,\kappa)\}$. Let $\sigma^{*}$ denote the $\Sigma_{1}$-projectum of $\sigma$. $dom(h)$ is a $\Sigma_{1}$-subset of $\omega\times\alpha\leftrightarrow\alpha$. Thus it suffices to show ({\it cf}. Theorem 6.11\footnote{Any $\sigma$-r.e. subset of $\beta<\sigma^{*}$ is $\sigma$-finite for admissible $\sigma$.} on p.177, \cite{Barwise}.) \[\alpha<\sigma^{*}\] Suppose $\sigma^{*}\leq\alpha$. Let $F:\sigma\rarw\sigma^{*}$ denote a $\Sigma_{1}$ injection and $f=F|\kappa$ the restriction of $F$ to $\kappa$. Then $f\in L_{\sigma}$ would be an injection from $\kappa$ to $\sigma^{*}\leq\alpha$. This is a contradiction since \[L_{\sigma}\models card(\alpha)<card(\kappa)\leftrightarrow \mbox{there is no injective map } f:\kappa\rarw\alpha\] \hspace*{\fill} $\Box$ \begin{theorem}\label{th:crd}Let $\sigma$ be a recursively regular ordinal and $\kappa$ a multiplicative principal number with $\omega<\kappa<\sigma$. 
Then the following conditions are mutually equivalent: \begin{enumerate} \item \begin{eqnarray*} && \forall\alpha\in[\omega,\kappa)\exists(\pi,\pi\kappa,\pi\sigma)[\alpha<\pi\leq\pi\kappa<\pi\sigma<\kappa\,\&\, \\ && \forall \Sigma_{1}\,\varphi\forall a<\pi(L_{\sigma}\models\varphi[\kappa,a]\rarw L_{\pi\sigma}\models\varphi[\pi\kappa,a])] \end{eqnarray*} \item \[ \forall\alpha\in[\omega,\kappa)[{\cal P}(\alpha)\cap L_{\sigma}\subseteq L_{\kappa}] \] \item \[ L_{\sigma}\models \kappa \mbox{ is a cardinal}>\omega \] \end{enumerate} \end{theorem} \begin{theorem}\label{th:crdexi}Let $\sigma$ be a recursively regular ordinal and $\gamma$ a multiplicative principal number with $\gamma<\sigma$. The following conditions are mutually equivalent. \begin{enumerate} \item $\exists\kappa\in Mp|\sigma \exists(\pi,\pi\kappa,\pi\sigma)[\gamma<\pi\leq\pi\kappa<\pi\sigma<\kappa\,\&\, \forall \Sigma_{1}\,\varphi\forall a<\pi(L_{\sigma}\models\varphi[\kappa,a]\rarw L_{\pi\sigma}\models\varphi[\pi\kappa,a])] $ \item ${\cal P}(\gamma)\cap L_{\sigma}\in L_{\sigma}$ \item $L_{\sigma}\models\exists\kappa(\kappa \mbox{ is a cardinal}>\gamma)$ \end{enumerate} \end{theorem} {\bf Proof}.\hspace{2mm} By Theorem \ref{th:crdlocal} it suffices to show the last condition assuming the second one. Assume ${\cal P}(\gamma)\cap L_{\sigma}\in L_{\sigma}$. Then there exists a multiplicative principal $\kappa_{0}<\sigma$ such that ${\cal P}(\gamma)\cap L_{\sigma}\subseteq L_{\kappa_{0}}$. By Lemma \ref{lem:crdlocal2} we have $L_{\sigma}\models card(\gamma)<card(\kappa_{0})$. Let $\kappa<\sigma$ denote the least ordinal satisfying this. Then we claim that $L_{\sigma}\models \kappa \mbox{ is a cardinal}$. For suppose $L_{\sigma}\not\models card(\alpha)<card(\kappa)$ for some $\alpha<\kappa$; note that $\gamma<\alpha$, since $L_{\sigma}\models card(\gamma)<card(\kappa)$. Pick a surjective map $f\in L_{\sigma}$ with $f:\alpha\rarw\kappa$.
Also pick a surjective map $g\in L_{\sigma}$ with $g:\gamma\rarw\alpha$ by the minimality of $\kappa$, i.e., $L_{\sigma}\not\models card(\gamma)<card(\alpha)$. The composition $f\circ g:\gamma\rightarrow\kappa$ is a surjective map in $L_{\sigma}$, contradicting $L_{\sigma}\models card(\gamma)<card(\kappa)$. \hspace*{\fill} $\Box$ \end{document}
\begin{document} \title[On finite groups with exactly two non-abelian centralizers]{On finite groups with exactly two non-abelian centralizers} \author[S. J. Baishya ]{Sekhar Jyoti Baishya} \address{S. J. Baishya, Department of Mathematics, Pandit Deendayal Upadhyaya Adarsha Mahavidyalaya, Behali, Biswanath-784184, Assam, India.} \email{sekharnehu@yahoo.com} \begin{abstract} In this paper, we characterize finite groups $G$ with a unique proper non-abelian centralizer. This improves \cite[Theorem 1.1]{nab}. Among other results, we prove that if $C(a)$ is the proper non-abelian centralizer of $G$ for some $a \in G$, then $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$, $C(a)$ is the Fitting subgroup of $G$ and $G' \subseteq C(a)$, where $G'$ is the commutator subgroup of $G$. \end{abstract} \subjclass[2010]{20D60, 20D99} \keywords{Finite group, Centralizer, Partition of a group} \maketitle \section{Introduction} \label{S:intro} Throughout this paper $G$ is a finite group with center $Z(G)$ and commutator subgroup $G'$. Given a group $G$, let $\Cent(G)$ denote the set of centralizers of $G$, i.e., $\Cent(G)=\lbrace C(x) \mid x \in G\rbrace $, where $C(x)$ is the centralizer of the element $x$ in $G$. The study of finite groups in terms of $|\Cent(G)|$ has become an interesting research topic in the last few years. Starting with Belcastro and Sherman \cite{ctc092} in 1994, many authors have studied and characterised finite groups $G$ in terms of $\mid \Cent(G)\mid$. More information on this and related concepts may be found in \cite{ed09, amiri2019, amiri20191, rostami, en09, ctc09, ctc091, ctc099, baishya, baishya1, baishya2,baishya5, zarrin0941, zarrin0942, non, con}. Amiri and Rostami \cite{nab} in 2015 introduced the notion of $nacent(G)$, the set of all non-abelian centralizers of $G$. Schmidt \cite{schmidt} characterized all groups $G$ with $\mid nacent(G)\mid=1$, which are called CA-groups.
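For a small worked instance of these notions (a standard example, not taken from the papers cited above), consider $G=S_3$:

```latex
% Example: G = S_3. The distinct element centralizers are
%   C(e) = S_3,
%   C((12)) = {e,(12)},   C((13)) = {e,(13)},   C((23)) = {e,(23)},
%   C((123)) = C((132)) = {e,(123),(132)},
% so |Cent(S_3)| = 5. Every proper centralizer is abelian, hence the only
% non-abelian centralizer is C(e) = S_3 itself:
\[
\Cent(S_3)=\bigl\{S_3,\ \langle(12)\rangle,\ \langle(13)\rangle,\ \langle(23)\rangle,\ \langle(123)\rangle\bigr\},
\qquad \mid nacent(S_3)\mid=1,
\]
% so S_3 is a CA-group in the sense of Schmidt's characterization.
```

Groups with $\mid nacent(G)\mid=2$ are thus the next case beyond CA-groups: exactly one proper centralizer fails to be abelian.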
The authors in \cite{nab} initiated the study of finite groups with $\mid nacent(G)\mid=2$ and proved the following result (\cite[Theorem 1.1]{nab}): \begin{thm}\label{nabcentamiri} Let $G$ be a finite group such that $\mid nacent(G)\mid=2$. If $C(a)$ is a proper non-abelian centralizer for some $a \in G$, then one of the following assertions holds: \begin{enumerate} \item $\frac{G}{Z(G)}$ is a $p$-group for some prime $p$. \item $C(a)$ is the Fitting subgroup of $G$ of prime index $p$, $p$ divides $|C(a)|$ and $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ j+1 $, where $j$ is the number of distinct centralizers $C(g)$ for $g \in G \setminus C(a)$. \item $\frac{G}{Z(G)}$ is a Frobenius group with cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G$. \end{enumerate} \end{thm} In this paper, we revisit finite groups $G$ with $\mid nacent(G)\mid=2$ and improve this result. Among other results, we also prove that if $C(a)$ is the unique proper non-abelian centralizer of $G$, then $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$, $C(a)$ is the Fitting subgroup of $G$ and $G' \subseteq C(a)$. \section{The main results} In this section, we prove the main results of the paper. We make the following Remark from \cite[pp. 571--575]{zappa} which will be used in the sequel. \begin{rem}\label{rem1} A collection $\Pi$ of non-trivial subgroups of a group $G$ is called a partition if every non-trivial element of $G$ belongs to a unique subgroup in $\Pi$. If $\mid \Pi \mid=1$, the partition is said to be trivial. The subgroups in $\Pi$ are called components of $\Pi$. Following Miller, if $\Pi$ is a non-trivial partition of a non-abelian $p$-group $G$ ($p$ a prime), then all the elements of $G$ having order $>p$ belong to the same component of $\Pi$. A partition $\Pi$ of a group $G$ is said to be normal if $g^{-1}Xg \in \Pi$ for every $X \in \Pi$ and $g \in G$.
A non-trivial partition $\Pi$ of a group $G$ is said to be elementary if $G$ has a normal subgroup $K$ such that all cyclic subgroups which are not contained in $K$ have order $p$ ($p$ a prime) and are components of $\Pi$. All normal non-trivial partitions of a $p$-group of exponent $> p$ are elementary. A non-trivial partition $\Pi$ of a group $G$ is said to be non-simple if there exists a proper normal subgroup $N$ of $G$ such that for every component $X \in \Pi$, either $X \leq N$ or $X \cap N=1$. Let $G$ be a group and $\Pi$ a normal non-trivial partition. Suppose $\Pi$ is not a Frobenius partition and is non-simple. Then $G$ has a normal subgroup $K$ of index $p$ ($p$ a prime) in $G$ which is generated by all elements of $G$ having order $\neq p$. So $\Pi$ is elementary. Let $G$ be a group and $p$ be a prime. We recall that the subgroup generated by all the elements of $G$ whose order is not $p$ is called the Hughes subgroup and denoted by $H_p(G)$. The group $G$ is said to be a group of Hughes-Thompson type if $G$ is not a $p$-group and $H_p(G) \neq G$ for some prime $p$. In such a group we have $\mid G : H_p(G) \mid=p$ and $H_p(G)$ is nilpotent. \end{rem} We now determine the structure of finite groups $G$ with $\mid nacent(G)\mid=2$, which improves (\cite[Theorem 1.1]{nab}). \begin{thm}\label{nabcent1} Let $G$ be a finite group and $a \in G \setminus Z(G)$. Then $nacent(G)=\lbrace G, C(a) \rbrace$ if and only if one of the following assertions holds: \begin{enumerate} \item $\frac{G}{Z(G)}$ is a non-abelian $p$-group of exponent $>p$ ($p$ a prime), $\mid \frac{G}{Z(G)}: H_p(\frac{G}{Z(G)})\mid=p$, $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$, $\mid \frac{C(x)}{Z(G)} \mid=p$ for any $x \in G \setminus C(a)$ and $C(a)$ is a CA-group. \item $\frac{G}{Z(G)}$ is a group of Hughes-Thompson type, $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$ ($p$ a prime), $\mid \frac{C(x)}{Z(G)} \mid=p$ for any $x \in G \setminus C(a)$ and $C(a)$ is a CA-group.
\item $\frac{G}{Z(G)}= \frac{C(a)}{Z(G)} \rtimes \frac{C(x)}{Z(G)}$ is a Frobenius group with Frobenius kernel $\frac{C(a)}{Z(G)}$, cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G\setminus C(a)$ and $C(a)$ is a CA-group. \end{enumerate} \end{thm} \begin{proof} Let $G$ be a finite group such that $nacent(G)=\lbrace G, C(a) \rbrace$, $a \in G \setminus Z(G)$. Then clearly $C(a)$ is a CA-group. Note that we have $C(s) \subseteq C(a)$ for any $s \in C(a)\setminus Z(G)$, $C(a) \cap C(x)=Z(G)$ for any $x \in G \setminus C(a)$ and $C(x) \cap C(y)=Z(G)$ for any $x, y \in G \setminus C(a)$ with $C(x) \neq C(y)$. Hence $\Pi= \lbrace \frac{C(a)}{Z(G)}, \frac{C(x)}{Z(G)} \mid x \in G \setminus C(a)\rbrace$ is a non-trivial partition of $\frac{G}{Z(G)}$. In the present scenario we have $(gZ(G))^{-1}\frac{C(x)}{Z(G)}gZ(G)=\frac{g^{-1}C(x)g}{Z(G)}=\frac{C(g^{-1}xg)}{Z(G)}$ for any $gZ(G) \in \frac{G}{Z(G)}$ and $\frac{C(a)}{Z(G)}\lhd \frac{G}{Z(G)}$. Therefore $(gZ(G))^{-1}XgZ(G) \in \Pi$ for every $X \in \Pi$ and $gZ(G) \in \frac{G}{Z(G)}$. Hence $\Pi$ is a normal non-simple partition of $\frac{G}{Z(G)}$. In the present scenario, if $\Pi$ is a Frobenius partition of $\frac{G}{Z(G)}$, then $\frac{G}{Z(G)}= \frac{C(a)}{Z(G)} \rtimes \frac{C(x)}{Z(G)}$ is a Frobenius group with Frobenius kernel $\frac{C(a)}{Z(G)}$ and cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G\setminus C(a)$. Next, suppose $\Pi$ is not a Frobenius partition. Then in view of Remark \ref{rem1}, $\frac{G}{Z(G)}$ has a normal subgroup of index $p$ ($p$ a prime) in $\frac{G}{Z(G)}$ which is generated by all elements of $\frac{G}{Z(G)}$ having order $\neq p$. In the present situation if $\frac{G}{Z(G)}$ is not a $p$-group ($p$ a prime), then in view of Remark \ref{rem1}, $\frac{G}{Z(G)}$ is a group of Hughes-Thompson type and $\Pi$ is elementary.
That is, $\frac{G}{Z(G)}$ has a normal subgroup $\frac{K}{Z(G)}$ such that all cyclic subgroups which are not contained in $\frac{K}{Z(G)}$ have order $p$ ($p$ a prime) and are components of $\Pi$. In the present scenario we have $\frac{K}{Z(G)}=H_p(\frac{G}{Z(G)})$. Therefore $\Pi$ has $\frac{\mid G \mid}{p}$ components of order $p$ and these are precisely $\frac{C(x)}{Z(G)}, x \in G \setminus C(a)$. Consequently, we have $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$. On the other hand, if $\frac{G}{Z(G)}$ is a $p$-group ($p$ a prime), then in view of Remark \ref{rem1}, $\frac{G}{Z(G)}$ is non-abelian of exponent $>p$ and $\mid \frac{G}{Z(G)}: H_p(\frac{G}{Z(G)})\mid=p$. In the present situation by Remark \ref{rem1}, $\Pi$ is elementary. Therefore using Remark \ref{rem1} again, $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$ and all cyclic subgroups which are not contained in $\frac{C(a)}{Z(G)}$ have order $p$ and are components of $\Pi$. Therefore $\Pi$ has $\frac{\mid G \mid}{p}$ components of order $p$ and these are precisely $\frac{C(x)}{Z(G)}, x \in G \setminus C(a)$. Conversely, suppose $G$ is a finite group such that one of (a), (b) or (c) holds. Then it is easy to see that $nacent(G)=\lbrace G, C(a) \rbrace$ for some $a \in G \setminus Z(G)$. \end{proof} As an immediate consequence we have the following result. Recall that for a finite group $G$, the Fitting subgroup, denoted by $F(G)$, is the largest normal nilpotent subgroup of $G$. \begin{thm}\label{sjb1} Let $G$ be a finite group with a unique proper non-abelian centralizer $C(a)$ for some $a \in G$. Then we have \begin{enumerate} \item $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ \frac{\mid G \mid}{p}+1$ ($p$ a prime) or $\mid \Cent(C(a)) \mid+ \mid \frac{C(a)}{Z(G)} \mid+1$. \item $G' \subseteq C(a)$. \item $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$. \item $C(a)$ is the Fitting subgroup of $G$.
\item $C(a)=P \times A$, where $A$ is an abelian subgroup and $P$ is a CA-group of prime power order. \item $\frac{G}{C(a)}$ is cyclic. \end{enumerate} \end{thm} \begin{proof} (a) In view of Theorem \ref{nabcent1}, if $\frac{G}{Z(G)}$ is a Frobenius group then by \cite[Proposition 3.1]{amiri2019}, we have $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ \mid \frac{C(a)}{Z(G)} \mid+1$. On the other hand, if $\frac{G}{Z(G)}$ is not a Frobenius group, then it follows from the proof of Theorem \ref{nabcent1} that the non-trivial partition $\Pi= \lbrace \frac{C(a)}{Z(G)}, \frac{C(x)}{Z(G)} \mid x \in G \setminus C(a)\rbrace$ of $\frac{G}{Z(G)}$ has $\frac{\mid G \mid}{p}$ components of order $p$ and these are precisely $\frac{C(x)}{Z(G)}, x \in G \setminus C(a)$. Hence $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ \frac{\mid G \mid}{p}+1$. (b) If $\frac{G}{Z(G)}$ is a Frobenius group, then by Theorem \ref{nabcent1}, $\frac{G}{Z(G)}= \frac{C(a)}{Z(G)} \rtimes \frac{C(x)}{Z(G)}$ with cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G\setminus C(a)$. Therefore $\frac{G}{C(a)}$ is cyclic and hence $G' \subseteq C(a)$. On the other hand, if $\frac{G}{Z(G)}$ is not a Frobenius group, then it follows from Theorem \ref{nabcent1} that $C(a) \lhd G$ and $\mid \frac{G}{C(a)} \mid=p$ ($p$ a prime). Hence $G' \subseteq C(a)$. (c) If $\frac{G}{Z(G)}$ is a Frobenius group, then by Theorem \ref{nabcent1}, $\frac{G}{Z(G)}= \frac{C(a)}{Z(G)} \rtimes \frac{C(x)}{Z(G)}$ with cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G\setminus C(a)$. In the present scenario, by \cite[p. 3]{mukti}, $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$. On the other hand, if $\frac{G}{Z(G)}$ is not a Frobenius group, then it follows from Theorem \ref{nabcent1} that $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$ and $\mid \frac{G}{Z(G)}: H_p(\frac{G}{Z(G)})\mid=p$. Hence $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$, noting that we have $C(a) \lhd G$.
(d) It follows from (c), noting that $F(\frac{G}{Z(G)})=\frac{F(G)}{Z(G)}=\frac{C(a)}{Z(G)}$. (e) By (d), $C(a)$ is a nilpotent CA-group. Therefore, using \cite[Theorem 3.10 (5)]{abc}, $C(a)=P \times A$, where $A$ is an abelian subgroup and $P$ is a CA-group of prime power order. (f) It is clear from Theorem \ref{nabcent1} that if $\frac{G}{Z(G)}$ is a Frobenius group, then $\frac{G}{C(a)}$ is cyclic, and otherwise $\mid \frac{G}{C(a)} \mid=p$ ($p$ a prime), so $\frac{G}{C(a)}$ is cyclic in either case. \end{proof} \end{document}
\begin{document} \title{Numerical scheme for stochastic differential equations driven by fractional Brownian motion with $ 1/4<H <1/2$.} \abstract{In this article, we study a numerical scheme for stochastic differential equations driven by fractional Brownian motion with Hurst parameter $ H \in \left( 1/4, 1/2 \right)$. Towards this end, we apply the Doss-Sussmann representation of the solution and an approximation of this representation using a first order Taylor expansion. The obtained rate of convergence is $n^{-2H +\rho}$, for $\rho$ small enough.} \textbf{ Key words}: Doss-Sussmann representation, fractional Brownian motion, stochastic differential equation, Taylor expansion. \section{Introduction} In this article we are interested in a pathwise approximation of the solution to the stochastic differential equation \begin{equation}\label{eqest} X_{t} = x + \int_{0}^{t} b(X_{s})ds + \int_{0}^{t} \sigma(X_{s}) \circ dB_{s}, \quad t \in [0,T], \end{equation} where $x \in \mathbb{R}$ and $b, \sigma : \mathbb{R} \rightarrow \mathbb{R} $ are measurable functions. The stochastic integral in (\ref{eqest}) is understood in the sense of Stratonovich (see Al{\`o}s et al. \cite{alos1} for details) and $B= \lbrace B_{t} , t \in [0,T] \rbrace$ is a fractional Brownian motion (fBm) with Hurst parameter $H \in (1/4,1/2)$. $B$ is a centered Gaussian process with a covariance structure given by \begin{equation}\label{covariance} \mathbb{E}\left( B_{t} B_{s} \right) = {1 \over 2}\left( t^{2H} + s^{2H} - \vert t-s \vert^{2H} \right), \quad s, t \in [0,T]. \end{equation} In \cite{alos1}, the existence and uniqueness of the solution of equation (\ref{eqest}) have been established under suitable conditions, which follow from our assumption (see hypothesis (H) in Section \ref{doss-sussmann}). Equation (\ref{eqest}) has been analyzed by several authors, for different interpretations of the stochastic integral, because of the properties of the fractional Brownian motion $B$.
Among these properties, we can mention self-similarity, stationary increments, $\rho$-H\"older continuity for any $\rho\in(0,H)$, and the fact that the covariance of its increments on intervals decays asymptotically as a negative power of the distance between the intervals. Therefore, equation (\ref{eqest}) becomes quite useful in applications in different areas such as physics, biology, finance, etc.\ (see, e.g., \cite{Alos2007, kaj2008, KLUP}). Hence, it is important to provide approximations to the solution of (\ref{eqest}). For $H=1/2$ (i.e., $B$ is a Brownian motion), a large number of numerical schemes to approximate the unique solution of (\ref{eqest}) have been considered in the literature. The reader can consult Kloeden and Platen \cite{klo} (and the references therein) for a complete exposition of this topic. In particular, Talay \cite{Talay} introduces the Doss-Sussmann transformation \cite{doss, sussmann} in the study of numerical methods for the solution of stochastic differential equations (see Section \ref{doss-sussmann} for the definition of this transformation). For $H>1/2$, numerical schemes for equation (\ref{eqest}) have been analyzed by several authors. For instance, we can mention \cite{araya, hu, mish1} and \cite{nourdin}, where the stochastic integral is interpreted as the extension of the Young integral given in \cite{za} and as the forward integral, respectively. It is well-known that these integrals agree with the Stratonovich one under suitable conditions (see Al{\`o}s and Nualart \cite{alos2}). In this paper we are interested in the case $H<1/2$, because numerical schemes for the solution to (\ref{eqest}) have been studied only in some particular situations. Namely, Garz\'on et al.\ \cite{garzon} use the Doss-Sussmann transformation in order to prove the convergence of the Euler scheme associated with (\ref{eqest}) by means of an approximation of fBm via fractional transport processes.
In \cite{nourdin}, the authors also take advantage of the Doss-Sussmann transformation in order to discuss the Crank-Nicolson method, for $ H \in \left(1/6 , 1/2\right)$ and $b\equiv0$. Here, they show convergence in law of the error to a random variable, which depends on the solution of the equation and an independent Gaussian random variable. Specifically, the authors state that the rate of convergence of the scheme is of order $n^{1/2-3H}$. In \cite{tindel1} the authors consider the so-called modified Euler scheme for multidimensional stochastic differential equations driven by fBm with $ H \in \left(1/3 , 1/2\right)$. They utilize rough paths techniques in order to obtain the convergence rate of order $n^{1/2-2H}$. Also, they prove that this rate is sharp. In \cite{deya} a numerical scheme for stochastic differential equations driven by a multidimensional fBm with Hurst parameter greater than $1/3$ is introduced. The method is based on a second-order Taylor expansion, where the L\'evy area terms are replaced by products of increments of the driving fBm. Here, the order of convergence is $n^{-(H-\rho)}$, with $\rho \in \left( 1/3 , H \right)$. In order to get this rate of convergence, the authors use a combination of rough paths techniques and error bounds for the discretization of the L\'evy area terms. In this work we propose an approximation scheme for the solution to (\ref{eqest}) with $H \in \left( 1/4 , 1/2 \right)$. To do so, we use a first order Taylor expansion in the Doss-Sussmann representation of the solution. We consider the case $H \in \left( 1/4 , 1/2 \right)$ because it is shown in \cite{alos2} that the solution of (\ref{eqest}) is given by this transformation. However, even in the case $H \in \left( 0 , 1/4 \right)$, our scheme tends to the mentioned transformation. The rate of convergence in this paper is $ n^{- 2H + \rho}$, where $\rho < 2H$ is small enough, improving the ones given in \cite{nourdin}, \cite{Talay}, \cite{deya} and \cite{tindel1}.
Also, our rate is better than the one obtained in \cite{garzon} when the fBm is not approximated by means of fractional transport processes. We observe that our method only establishes this rate of convergence for $H<1/2$ because we have only been able to verify that the auxiliary inequality (\ref{dify1}) below is satisfied in this case. However, the same construction holds for $H>1/2$ (see \cite{nourdin}, Proposition 1). In this case, the rate of convergence for the scheme is not the same as in the case $1/4 < H < 1/2$. In fact, for $H>1/2$, we only get that the rate of convergence is $n^{-1 + \rho}$ for $\rho$ small enough. The paper is organized as follows: In Section \ref{sec2} we introduce the notation needed in this article. In particular, we explain the Doss-Sussmann-type transformation related to the unique solution to (\ref{eqest}). Also, in this section, the scheme is presented and the main result is stated (Theorem \ref{teo1} below). In Section \ref{sec3}, we state the auxiliary lemmas that are needed to show, in Section \ref{sec4}, that the main result is true. The proofs of the auxiliary lemmas are presented in Section \ref{sec5}. Finally, in the Appendix (Section \ref{apen}), another auxiliary result is studied, since it is a general result concerning the Taylor expansion of certain continuous functions. \section{Preliminaries and main result} \label{sec2} In this section, we introduce the basic notions and the framework that we use in this paper. That is, we first describe the Doss-Sussmann transformation given in Doss \cite{doss} and Sussmann \cite{sussmann}, which is the link between the stochastic and ordinary differential equations (see Al\`os et al. \cite{alos1}, or Nourdin and Neuenkirch \cite{nourdin}, for the fractional Brownian motion case). Then, we provide a numerical method and its rate of convergence for the unique solution of (\ref{eqest}). This is the main result of this article (see Theorem \ref{teo1}).
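All quantities in the construction below are pathwise functionals of $B$, so any numerical experiment first requires a sampled path of $B$ on a time grid. The following sketch (not part of the paper's construction; the function name and the use of a Cholesky factorization of the covariance (\ref{covariance}) are our own illustrative choices, assuming {\tt numpy} is available) generates such a path:

```python
import numpy as np

def fbm_path(H, T, n, seed=0):
    """Sample (B_{t_1}, ..., B_{t_n}) at t_k = k T / n via a Cholesky
    factorization of the fBm covariance
    E[B_t B_s] = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    # a small jitter keeps the factorization numerically stable for large n
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))  # B_0 = 0
```

The Cholesky method is exact in distribution but costs $O(n^3)$; for long paths, faster circulant-embedding samplers are commonly preferred.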
\subsection{Doss-Sussmann transformation}\label{doss-sussmann} Henceforth, we consider the stochastic differential equation \begin{equation}\label{eqest2} X_{t} = x + \int_{0}^{t} b(X_{s})ds + \int_{0}^{t} \sigma(X_{s}) \circ dB_{s}, \quad t \in [0,T], \end{equation} where $B=\{B_t:t\in[0,T]\}$ is a fractional Brownian motion with Hurst parameter $1/4 < H < 1/2$, $x \in \mathbb{R}$ and the stochastic integral in (\ref{eqest2}) is understood in the sense of Stratonovich, which is introduced in \cite{alos1}. Recall that the covariance of $B$ is given in (\ref{covariance}). The coefficients $b,\sigma:\mathbb{R}\rightarrow\mathbb{R}$ are measurable functions such that \begin{itemize} \item [(H)] $b \in C^{2}_{b}(\mathbb{R})$ and $\sigma \in C^{2}_{b}(\mathbb{R})$. \end{itemize} \begin{rem} \label{cotas} By assumption (H), we have, for $z \in \mathbb{R}$, \begin{itemize} \item $\vert b(z) \vert \leq M_{1} $, $\vert b'(z) \vert \leq M_{4} $ and $\vert b''(z) \vert \leq M_{6} $. \item $\vert \sigma (z) \vert \leq M_{5} $, $\vert \sigma ' (z) \vert \leq M_{2} $ and $\vert \sigma '' (z) \vert \leq M_{3} $. \end{itemize} We explicitly give these constants so that it will be clear where we use them in our analysis. \end{rem} Now, we explain the relation between (\ref{eqest2}) and ordinary differential equations: the so-called Doss-Sussmann transformation. In Al\`os et al. \cite{alos1} (Proposition 6) it is proven that equation (\ref{eqest2}) has a unique solution of the form \begin{equation}\label{sol} X_{t} = \phi \left(Y_{t}, B_{t} \right).
\end{equation} The function $\phi:\mathbb{R}^2\rightarrow\mathbb{R}$ is the solution of the ordinary differential equation \begin{eqnarray}\label{phiori} {\partial \phi \over \partial \beta }(\alpha, \beta) &= &\sigma(\phi(\alpha, \beta)),\quad \alpha, \ \beta \in \mathbb{R},\nonumber\\ \phi(\alpha , 0) &=& \alpha , \end{eqnarray} and the process $Y$ is the pathwise solution to the equation \begin{equation*}\label{y1} Y_{t} = x + \int_{0}^{t} \left( {\partial \phi \over \partial \alpha } (Y_{s}, B_{s}) \right)^{-1} b \left( \phi (Y_{s}, B_{s} ) \right) ds,\quad t\in[0,T]. \end{equation*} By Doss \cite{doss}, we have \begin{equation}\label{eq:phi} {\partial \phi \over \partial \alpha }(\alpha, \beta) = \exp \left( \int_{0}^{\beta} \sigma'(\phi(\alpha, s )) ds \right), \end{equation} which implies \begin{equation}\label{y2} Y_{t} = x + \int_{0}^{t} \exp \left( -\int_{0}^{B_{s}} \sigma'(\phi(Y_{s}, u )) du \right) b \left( \phi (Y_{s}, B_{s} ) \right) ds. \end{equation} \subsection{Numerical Method} In this section, we describe our numerical scheme associated with the unique solution of (\ref{eqest2}). Towards this end, in Section \ref{sec:2.2.1}, we first propose an approximation to the function $\phi$ given in (\ref{eq:phi}), and then, in Section \ref{sec:2.2.2} we approximate the process $Y$. In both sections we suppose that (H) holds. \subsubsection{Approximation of $\phi$}\label{sec:2.2.1} Note that, for $x\in\mathbb{R}$, equation (\ref{phiori}) has the form \begin{equation}\label{phi} \phi(x,u) = x + \int_{0}^{u} \sigma(\phi(x,s))ds. \end{equation} For each $l\in\mathbb{N}$, we take the partition $\left\lbrace u_{i}^{l} , i\in\{-l,\ldots, l\}\right\rbrace$ of the interval $[- \Vert B\Vert_{\infty}, \Vert B\Vert_{\infty}]$ given by $-\Vert B \Vert_{\infty}=u^{l}_{-l} < \ldots <u^{l}_{-1}< u^{l}_{0}=0< u^{l}_{1}< \ldots < u^{l}_{l} = \Vert B \Vert_{\infty}$.
Here, $ \Vert B \Vert_{\infty} =\sup_{t\in[0,T]}|B_t|$, \begin{equation} u^{l}_{i+1} = u^{l}_{i} + {\Vert B \Vert_{\infty} \over l} = {(i+1)\Vert B \Vert_{\infty} \over l},\quad u^{l}_{-(i+1)} = u^{l}_{-i} - {\Vert B \Vert_{\infty} \over l} = -{(i+1)\Vert B \Vert_{\infty} \over l}. \nonumber \end{equation} Let $x\in\mathbb{R}$ be given in (\ref{y2}). Set \begin{equation}\label{M} M:= \vert x \vert + T\left( M_{1}\exp(M_{2} \Vert B \Vert _{\infty}) + \Vert B\Vert_{H-\rho}C_{3}T^{H-\rho} \right) , \end{equation} where $\rho\in(0,H)$, $\Vert B\Vert_{H-\rho}$ is the $(H-\rho$)-H\"older norm of $B$ on $[0,T]$, $$C_{3}= M_{1}M_{2} \exp \left(M_{2}\Vert B \Vert_{\infty} \right) + M_{4} \exp \left(M_{2} \Vert B \Vert_{\infty} \right) M_{5} \Vert B \Vert_{\infty}\left( 1 +M_2 \right) $$ and $M_i$, $i\in\{1,\ldots,6\}$ are defined in Remark \ref{cotas}. Now, we define the function $\phi^{l} : \mathbb{R}^{2} \rightarrow \mathbb{R}$ by \begin{equation}\label{phi0} \phi^{l}(z,u) = 0 \ \ \ \mbox{if} \ \ \ (z,u) \not\in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]; \end{equation} and, for $k=1, \ldots ,l$, \begin{equation}\label{phin1} \phi^{l}(z,u) = \phi^{l}(z,u^{l}_{k-1}) + \int_{u_{k-1}^{l}}^{u} \sigma \left(\phi^{l}(z,u^{l}_{k-1}) + (s-u_{k-1}^{l}) \sigma \left( \phi^{l}(z,u^ {l}_{k-1}) \right) \right) ds, \end{equation} if $z\in[-M,M]$ and $u \in (u^{l}_{k-1} ,u^{l}_{k} ]$, with \begin{equation}\label{phinicial} \phi^{l}(z,u^{l}_{0}) = z,\quad\hbox{ if}\ \ z\in[-M,M]. \end{equation} The definition of $\phi^l$ for the case $k=-l,\ldots, 0$ is similar. That is, \begin{equation}\label{phin1n} \phi^{l}(z,u) = \phi^{l}(z,u^{l}_{k}) -\int^{u_{k}^{l}}_{u} \sigma \left(\phi^{l}(z,u^{l}_{k}) + (s-u_{k}^{l}) \sigma \left( \phi^{l}(z,u^ {l}_{k}) \right) \right) ds, \end{equation} if $z\in[-M,M]$ and $u \in [u^{l}_{k-1} ,u^{l}_{k} )$. 
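To illustrate the cell-by-cell structure of this construction, the recursion for $\phi^{l}$ can be evaluated numerically once the inner integral is discretized. The following sketch treats the branch $u \geq 0$ only; the function name, the trapezoid quadrature of the cell integral, and the {\tt numpy} dependency are our own illustrative choices, not part of the construction:

```python
import numpy as np

def phi_l(z, u, sigma, l, B_sup, n_quad=64):
    """Evaluate the piecewise approximation phi^l(z, u) on [0, B_sup]:
    in each cell [u_{k-1}, u_k] the argument of sigma is replaced by its
    Euler predictor frozen at the left endpoint, and the remaining
    one-dimensional integral is computed by the trapezoid rule."""
    h = B_sup / l                      # uniform grid u_k = k * B_sup / l
    val = z                            # phi^l(z, u_0) = z
    for k in range(1, l + 1):
        left = (k - 1) * h
        right = min(k * h, u)
        if right <= left:              # u lies before this cell: done
            break
        s = np.linspace(left, right, n_quad)
        f = sigma(val + (s - left) * sigma(val))   # frozen Euler predictor
        ds = (right - left) / (n_quad - 1)
        val += ds * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return val
```

As a sanity check, for $\sigma(x)=x$ the exact flow is $\phi(z,u)=z e^{u}$, and `phi_l(1.0, 0.5, lambda x: x, 200, 1.0)` is close to $e^{0.5}\approx 1.6487$, consistent with the $O(l^{-2})$ error stated later in Lemma \ref{lemaphi1}.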
Also, we consider the function $\Psi^l: \mathbb{R}^{2} \rightarrow \mathbb{R}$, which is equal to \begin{equation}\label{psi0} \Psi^{l}(z,u) = 0 \ \ \ \mbox{if} \ \ \ (z,u) \not\in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}], \end{equation} and, for $k=1, \ldots ,l$, \begin{eqnarray} \Psi^{l}(z,u) &=& \Psi^{l}(z,u^{l}_{k-1}) + \int_{u_{k-1}^{l}}^{u} \left( \sigma \left( \Psi^{l}(z,u^{l}_{k-1})\right) + \sigma' \sigma \left( \Psi^{l}(z,u^{l}_{k-1}) \right) (s-u_{k-1}^{l}) \right) ds \nonumber \\ & = & \Psi^{l}(z,u^{l}_{k-1}) + (u-u_{k-1}^{l}) \left( \sigma \left(\Psi^{l}(z,u^{l}_{k-1})\right) + \sigma' \sigma \left( \Psi^{l}(z,u^{l}_{k-1}) \right) {(u-u_{k-1}^{l}) \over 2 } \right) \nonumber \\ \label{psin} \end{eqnarray} if $z\in[-M,M]$ and $u \in (u^{l}_{k-1} ,u^{l}_{k} ]$, with \begin{equation*}\label{psinicial} \Psi^{l}(z,u^{l}_{0}) = z, \quad\hbox{ if}\ \ z\in[-M,M]. \end{equation*} For $k=-l,\ldots, 0$, $\Psi^{l}$ is introduced as $$ \Psi^{l}(z,u) = \Psi^{l}(z,u^{l}_{k}) - \int^{u_{k}^{l}}_u \left( \sigma \left( \Psi^{l}(z,u^{l}_{k})\right) + \sigma' \sigma \left( \Psi^{l}(z,u^{l}_{k}) \right) (s-u_{k}^{l}) \right) ds , $$ if $z\in[-M,M]$ and $u \in [u^{l}_{k-1} ,u^{l}_{k} )$. From equation (\ref{psin}) and the last equality, it can be seen that $\Psi^l(z, \cdot)$ is continuous on $[-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$. We remark that the function $\phi^l$ given in (\ref{phin1}) and (\ref{phin1n}) is an auxiliary tool that allows us to use Taylor's theorem in the analysis of the numerical scheme proposed in this paper (i.e., Theorem \ref{teo1}). Indeed, Taylor's theorem is utilized in Lemma \ref{lemapsi1}. \subsubsection{Approximation of $Y$} \label{sec:2.2.2} Here, we approximate the solution of equation (\ref{y2}).
For $l\in\mathbb{N}$, we define the process $Y^l$ as the solution of the following ordinary differential equation, where the existence and uniqueness are guaranteed since the coefficient $g^l: \mathbb{R}^2 \to \mathbb{R}$ satisfies Lipschitz and linear growth conditions in the second variable (see Remark \ref{cotas} and Lemma \ref{lemapsin}). \begin{eqnarray}\label{yl} Y^{l}_{t} &=& x + \int_{0}^{t} g^{l}\left(B_{s} , Y^{l}_{s}\right) ds, \quad Y^{l}_{0} = x, \end{eqnarray} where \begin{equation}\label{gl1} g^{l}\left(B_{s} , Y^{l}_{s}\right) = \exp \left( -\int_{0}^{B_{s}} \sigma'(\Psi^{l}(Y^{l}_{s}, u )) du \right) b \left( \Psi^{l}(Y^{l}_{s}, B_{s} ) \right). \end{equation} Now, for $m\in\mathbb{N}$, we set the partition $0=t^{m}_{0} < \ldots <t^{m}_{m} = T$ of $[0,T]$ with $t^{m}_{i+1} = t^{m}_{i} + {T \over m}$ and we define the process $Y^{l,m}$ as: \begin{small} \begin{eqnarray} Y_{0}^{l,m} &=& x , \nonumber \\ Y_{t}^{l,m} &=& Y_{t^{m}_{k}}^{l,m} + \int_{t_{k}^{m}}^{t} \left[ g^{l}\left( B_{t^{m}_{k}} , Y_{t^{m}_{k}}^{l,m} \right) + h_{1}^{l}\left( B_{t^{m}_{k}} , Y_{t^{m}_{k}}^{l,m} \right) \left( B_{s} - B_{t^{m}_{k}} \right) \right] ds \label{ynn}, \end{eqnarray} \end{small} for $t_{k}^{m} \leq t < t_{k+1}^{m}$, where $h_{1}^{l}(u,z) = {\partial g^{l}(u,z) \over \partial u}$ and $g^{l}$ is given by (\ref{gl1}). So \begin{equation}\label{dergl} {\partial g^{l}(u,z) \over \partial u} = - g^{l}(u,z) \sigma'(\Psi^{l}(z,u)) + \exp \left( - \int_{0}^{u} \sigma'( \Psi^{l}(z,r)) dr \right) b'(\Psi^{l}(z,u)) { \partial \Psi^{l}(z,u) \over \partial u} . \end{equation} By Remark \ref{cotas}, we can see \begin{equation} \left\vert g^{l}(u,z) \right\vert \leq M_{1} \exp \left( M_{2} \vert u\vert \right). \label{cotagl1} \end{equation} Also we have \begin{equation} \vert h_{1}^{l}(B_{t^{m}_{k}},Y^{l,m}_{t^{m}_{k}}) \vert \leq C_{3}, \label{cotah1} \end{equation} where $C_{3} $ is given in (\ref{M}).
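For illustration, the time iteration (\ref{ynn}) can be coded once $g^{l}$ and $h_{1}^{l}$ are available as callables. In the sketch below (the function name is ours), the pathwise integral $\int_{t_{k}^{m}}^{t_{k+1}^{m}} (B_{s} - B_{t_{k}^{m}})\, ds$ over one step is replaced by its trapezoid approximation ${\Delta t \over 2}(B_{t_{k+1}^{m}} - B_{t_{k}^{m}})$, an extra discretization not present in the pathwise definition of the scheme:

```python
import numpy as np

def run_scheme(x, B, T, g, h1):
    """Iterate the frozen-coefficient step of (ynn) on the uniform grid
    t_k = k T / n, where B[k] = B_{t_k}; g(u, y) plays the role of g^l
    and h1(u, y) the role of h_1^l = dg^l/du."""
    n = len(B) - 1
    dt = T / n
    y = np.empty(n + 1)
    y[0] = x
    for k in range(n):
        # trapezoid approximation of the integral of (B_s - B_{t_k})
        corr = 0.5 * dt * (B[k + 1] - B[k])
        y[k + 1] = y[k] + dt * g(B[k], y[k]) + h1(B[k], y[k]) * corr
    return y
```

Taking $g(u,y)=y$ and $h_{1}\equiv 0$ reduces the loop to the explicit Euler method for $y'=y$, a quick sanity check of the iteration.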
Moreover, assuming that (\ref{cotagl1}) and (\ref{cotah1}) are satisfied, it is not hard to prove by induction that \begin{equation*} \sup\limits_{t\in[0,T]} \vert Y_{t}^{n,n} \vert \leq \vert x \vert + T\left(M_{1} \exp(M_{2} \Vert B \Vert_{\infty} ) + T^{H-\rho} \Vert B \Vert_{H-\rho} C_{3} \right)=M. \end{equation*} Finally, in a similar way to Garz\'on et al. \cite{garzon}, for $n\in\mathbb{N}$, we define the approximation $X^n$ of $X$ by: \begin{equation}\label{metodo} X_{t}^{n} = \Psi^{n} \left( Y^{n,n}_{t} , B_{t} \right), \end{equation} where $\Psi^{n}$ and $Y_{t}^{n,n}$ are given by (\ref{psin}) and (\ref{ynn}), respectively.\\ Now, we are in a position to state our main result.\\ \begin{teo} \label{teo1} Assume that (H) is satisfied and $1/4< H < 1/2$. Then $$ \left\vert X_{t} - X_{t}^{n} \right\vert \leq C n^{-2(H-\rho)}, $$ where $\rho >0 $ is small enough and $C$ is a constant that does not depend on $n$. \end{teo} \begin{rem} The constant $C$ has the form \begin{eqnarray*} C &=& \exp(2M_{2} \Vert B \Vert_{\infty}) \left[ C_{2} \exp(C_{1}T) + \frac{ M_{2}^{2} M_{5} \Vert B \Vert^{3}_{\infty} }{6} + \frac{ M_{5}^{2} M_{3} \Vert B \Vert^{3}_{\infty} }{6} \right. \\ & + & \left.
C_{6} T \exp(C_{7} T) \right], \end{eqnarray*} with {\scriptsize \begin{eqnarray} C_{1} &=& (M_{4} + M_{1}M_{3} \Vert B \Vert_{\infty} ) \exp( M_2 (\Vert B \Vert_{\infty} + T)), \nonumber \\ C_{2} &=& \exp( M_2 \Vert B \Vert_{\infty}) (M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty} )({M_2^2 M_5 \Vert B \Vert^{3}_{\infty} \over 6} \exp(M_2 \Vert B \Vert_{\infty}) + {M_3 M_5^2 \Vert B \Vert^{3}_{\infty} \over 6} \exp(2M_2 \Vert B \Vert_{\infty}) ), \nonumber \\ C_{3} &=& M_{1}M_{2} \exp \left(M_{2}\Vert B \Vert_{\infty} \right) + M_{4} \exp \left(M_{2} \Vert B \Vert_{\infty} \right) M_{5} \Vert B \Vert_{\infty}\left( 1 +M_2 \right), \nonumber \\ C_4 &=& M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) + C_3 T^{H - \rho} \Vert B \Vert_{H-\rho}, \nonumber \\ C_{5} &=& \exp (M_{2} \Vert B \Vert_{\infty}) \left[ \Vert B \Vert_{\infty} (1+M_{2}) \left( M_{3} M_{1}M_{5} + M_{2} M_{4} M_{5} + M_{6}M_{5} \Vert B \Vert_{\infty} (1+ M_{2}) \right) \right. \nonumber \\ &+& \left. M_{1}M_{2} + M_{4}M_{5}( 1+ M_{2}) \right], \nonumber \\ C_{6} &=& \left[ C_{4} \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty} ]T^{1-2(H - \rho)} + (C_{5} + C_{8}) \Vert B \Vert_{H-\rho} \right], \nonumber \\ C_{7} &=& \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1} M_{3} \Vert B \Vert_{\infty}], \nonumber \\ C_{8} & = & M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) \left[ (M_{5} + M_{5}M_{2}) \Vert B \Vert_{\infty} + M_{5}M_{2} \right]. \nonumber \end{eqnarray} } \end{rem} \begin{rem}\label{MR} We choose the constant $M$ because the processes given in (\ref{yl}) and (\ref{ynn}), as well as the solution to (\ref{y2}), are bounded by $M$, as is pointed out in this section. \end{rem} \section{Preliminary lemmas}\label{sec3} In this section, we state the auxiliary results that we need in order to prove our main result, Theorem \ref{teo1}. The first four lemmas are related to the a priori estimates of $\phi$.
We recall that the constants $M_{i}, i \in \{1, \ldots, 6 \}$, are introduced in Remark \ref{cotas}. \begin{lem}\label{lemaphi1} Let $\phi$ and $\phi^{l}$ be given by (\ref{phi}) and (\ref{phin1}), respectively. Then, Hypothesis (H) implies that, for $ (z,u) \in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$, we have \begin{equation*}\label{3.1} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq {M_{2}^{2} M_{5} \Vert B\Vert_{\infty}^{3} \over {6 l^2}} \exp \left( M_{2} \Vert B\Vert_{\infty} \right). \end{equation*} \end{lem} \begin{lem}\label{lemapsi1} Let $\phi^{l}$ and $\Psi^{l}$ be given by (\ref{phin1}) and (\ref{psin}), respectively. Then, Hypothesis (H) implies \begin{equation*} \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u) \right\vert \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^2} \exp \left(2 M_{2} \Vert B \Vert_{\infty} \right), \end{equation*} for $ (z,u) \in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$. \end{lem} \begin{lem}\label{lemapsin} Let $\Psi^{l}$ be introduced in (\ref{psin}) and Hypothesis (H) hold. Then, for $(z_1, z_2,u) \in [-M,M]^2 \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}] $, \begin{equation*} \left\vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \right\vert \leq \left\vert z_{1} -z_{2} \right\vert \exp(2M_2 \Vert B \Vert_{\infty} ). \end{equation*} \end{lem} \begin{lem}\label{difphin1} Let $\phi^{l}$ be given in (\ref{phin1}). Then, under Hypothesis (H), \begin{equation}\label{difphin1-1} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert \leq \vert z_{1}-z_{2} \vert \exp(2M_2 \Vert B \Vert_{\infty} ), \end{equation} for $(z_1, z_2,u) \in [-M,M]^2 \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}] $. \end{lem} Now we proceed to state the lemmas concerning the estimates on $Y^n - Y$. \begin{lem}\label{lemayyn} Assume that Hypothesis (H) is satisfied. Let $Y$ and $Y^{n}$ be given in (\ref{y2}) and (\ref{yl}), respectively.
Then, for $t \in [0,T]$, \begin{equation*} \left\vert Y_{t} - Y_{t}^{n} \right\vert \leq \exp (C_1 T) {C_2 \over n^{2}}, \end{equation*} where \begin{equation*} C_{1} = (M_{4} + M_{1}M_{3} \Vert B \Vert_{\infty} ) \exp( M_2 (\Vert B \Vert_{\infty} + T)) \end{equation*} and \begin{eqnarray*} C_{2} &=& \exp( M_2 \Vert B \Vert_{\infty}) (M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty}) \\ & \times & \left({T M_2^2 M_5 \Vert B \Vert_{\infty}^{3} \over 6} \exp(M_2 \Vert B \Vert_{\infty}) + {T M_3 M_5^2 \Vert B \Vert_{\infty}^3 \over 6} \exp(2M_2 \Vert B \Vert_{\infty}) \right). \end{eqnarray*} \end{lem} \begin{lem}\label{difynn} Let $Y^{n,n}$ be defined in (\ref{ynn}). Then Hypothesis (H) implies, for $s \in (t_{k}^{n}, t_{k+1}^{n}]$, \begin{equation*} \vert Y^{n,n}_{s} - Y^{n,n}_{t^{n}_{k}} \vert \leq C_{4} (s-t_{k}^{n}), \end{equation*} where $C_4 = M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) + C_3 T^{H - \rho} \Vert B \Vert_{H-\rho}$ and \begin{equation} C_{3} = M_{1}M_{2} \exp \left(M_{2}\Vert B \Vert_{\infty} \right) + M_{4} \exp \left(M_{2} \Vert B \Vert_{\infty} \right) M_{5} \Vert B \Vert_{\infty}\left( 1 +M_2 \right). \nonumber \end{equation} \end{lem} \begin{lem}\label{difynynn} Suppose that Hypothesis (H) holds. Let $Y^{n}$ and $Y^{n,n}$ be given in (\ref{yl}) and (\ref{ynn}), respectively. Then, \begin{equation*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq C_{6} T \left({T \over n} \right)^{2(H-\rho)} \exp(C_{7}T), \quad t \in [0,T], \end{equation*} where $0<\rho <H$, \begin{eqnarray*} C_{5} &=& \exp (M_{2} \Vert B \Vert_{\infty}) \left[ \Vert B \Vert_{\infty} (1+M_{2}) \left( M_{3} M_{1}M_{5} + M_{2} M_{4} M_{5} + M_{6}M_{5} \Vert B \Vert_{\infty} (1+ M_{2}) \right) \right. \\ &+& \left.
M_{1}M_{2} + M_{4}M_{5}( 1+ M_{2}) \right], \end{eqnarray*} \begin{equation} C_{6} = \left[ C_{4} \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty} ]T^{1-2(H - \rho)} + (C_{5} + C_{8}) \Vert B \Vert_{H-\rho} \right] , \label{c6} \end{equation} with $C_{4}$ given in Lemma \ref{difynn}, and \begin{equation}\label{c7} C_{7} = \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1} M_{3} \Vert B \Vert_{\infty}], \end{equation} \begin{equation*} C_{8} = M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) \left[ (M_{5} + M_{5}M_{2}) \Vert B \Vert_{\infty} + M_{5}M_{2} \right]. \end{equation*} \end{lem} \section{Convergence of the Scheme: Proof of Theorem \ref{teo1}}\label{sec4} We are now ready to prove the main result of this article, which gives a theoretical bound on the speed of convergence for $X^n$ defined in (\ref{metodo}). Remember that the constants $M_{i}, i \in \{1, \ldots, 6 \}$, are given in Remark \ref{cotas}. \begin{proof} By (\ref{sol}) and (\ref{metodo}), we have, for $t\in [0,T]$, \begin{equation*} \vert X_{t} - X_{t}^{n} \vert \leq H_{1}(t) + H_{2}(t) + H_{3}(t), \end{equation*} where \begin{small} \begin{eqnarray} H_{1}(t) &=& \vert \phi\left( Y_{t}, B_{t} \right) - \phi^{n} ( Y^{n}_{t}, B_{t} ) \vert \nonumber \\ H_{2}(t) &=& \vert \phi^{n} ( Y^{n}_{t}, B_{t} ) - \phi^{n} ( Y^{n,n}_{t}, B_{t} ) \vert \nonumber \\ \nonumber H_{3}(t) &=& \vert \phi^{n} ( Y^{n,n}_{t}, B_{t} ) - \Psi^{n} ( Y^{n,n}_{t}, B_{t} )\vert. \nonumber \end{eqnarray} \end{small} Now we proceed to obtain estimates of $H_{1}$, $H_{2}$ and $H_{3}$. By property (\ref{eq:phi}), we get \begin{small} \begin{eqnarray*} H_{1}(t) & \leq & \vert \phi\left( Y_{t}, B_{t} \right) - \phi ( Y^{n}_{t}, B_{t} ) \vert + \vert \phi ( Y^{n}_{t}, B_{t} ) - \phi^{n} ( Y^{n}_{t}, B_{t} ) \vert \nonumber \\ & \leq & \exp \left(M_{2} \Vert B \Vert_{\infty} \right) \left\vert Y_{t} - Y_{t}^{n} \right\vert + \vert \phi ( Y^{n}_{t}, B_{t} ) - \phi^{n} ( Y^{n}_{t}, B_{t} ) \vert. 
\nonumber \\ \end{eqnarray*} \end{small} Therefore, by Lemmas \ref{lemaphi1} and \ref{lemayyn}, \begin{equation} H_{1}(t) \leq \exp \left(M_{2} \Vert B \Vert_{\infty} \right) \exp(C_{1}T) {C_{2} \over n^2 }+ {M_{2}^{2} M_{5} \Vert B \Vert_{\infty}^{3} \over 6n^2 } \exp \left( M_{2} \Vert B \Vert_{\infty} \right). \label{h1} \end{equation} Also, Lemmas \ref{difphin1} and \ref{difynynn} yield \begin{small} \begin{eqnarray} H_{2}(t) & \leq & \exp(2M_{2} \Vert B \Vert_{\infty}) \vert Y^{n}_{t} - Y^{n,n}_{t} \vert \nonumber \\ & \leq &\exp(2M_{2} \Vert B \Vert_{\infty})C_{6} T \left({T \over n} \right)^{2(H-\rho)} \exp(C_{7}T).\label{h2} \end{eqnarray} \end{small} For $H_{3}$, we use Lemma \ref{lemapsi1}. So \begin{equation} H_{3}(t) \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6n^2} \exp(2M_{2}\Vert B \Vert_{\infty}).\label{h3} \end{equation} Finally, from (\ref{h1}) to (\ref{h3}), we have \begin{equation*} \vert X_{t} - X_{t}^{n} \vert \leq C n^{-2(H -\rho) } , \end{equation*} which shows that Theorem \ref{teo1} holds. \end{proof} \section{Proofs of preliminary lemmas}\label{sec5} Here, we provide the proofs of Lemmas \ref{lemaphi1} to \ref{difynynn}. First, we will prove, by induction, that the statements of Lemmas \ref{lemaphi1} to \ref{difphin1} hold for all $k=1,2, \ldots, l$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$. We will consider for simplicity the case $u>0$; the other case can be treated similarly. \subsection*{Proof of Lemma \ref{lemaphi1}} \begin{proof} Let $z\in[-M,M]$.
We will prove by induction that, for all $k \in \{1,\ldots, l\}$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$, we have \begin{equation}\label{philema1} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq {M_{2}^{2} M_{5} \Vert B\Vert_{\infty}^{3} \over 6 l^3} \tilde{C}_k, \end{equation} where $\tilde{C}_k = \exp \left( M_{2} (u^{l}_{k} - u^{l}_{0})\right) + \ldots + \exp \left( M_{2} (u^{l}_{k} - u^{l}_{k-1})\right).$ As a consequence we obtain the global bound \begin{equation}\label{philema} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq {M_{2}^{2} M_{5} \Vert B\Vert_{\infty}^{3} \over 6 l^2} \exp \left( M_{2} \Vert B\Vert_{\infty} \right), \end{equation} where $M_2$ and $M_5$ are constants independent of $k$ and they are given in Remark \ref{cotas}. First, for $k=1$, let $0=u_{0}^{l} < u \leq u^{l}_{1}$. Then (\ref{phi}), (\ref{phin1}), the Lipschitz condition on $\sigma$ (with constant $M_2$) and the fact that $\phi(z,u_0^l) = \phi^l(z,u_0^l)=z$ imply \begin{small} \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & M_{2} \int_{u_{0}^{l}}^{u} \left\vert \phi(z,s) - z - (s-u_{0}^{l}) \sigma \left( z \right) \right\vert ds \nonumber \\ & \leq & M_{2} \int_{u_{0}^{l}}^{u}\left\vert \phi(z,s) - \phi^{l}(z,s) \right\vert ds + M_{2}\int_{u_{0}^{l}}^{u} \left\vert \phi^{l}(z,s) -z-(s-u_{0}^{l})\sigma(z)\right\vert ds \nonumber \\ & = & M_{2}\int_{u_{0}^{l}}^{u} \left\vert \phi(z,s) - \phi^{l}(z,s) \right\vert ds + M_{2} \bold{I}_{0}^{l}. \label{i0} \end{eqnarray} \end{small} Next, we bound the term $\bold{I}_{0}^{l} $, \begin{small} \begin{equation*} \bold{I}_{0}^{l}= \int_{u_{0}^{l}}^{u} \left\vert \phi^{l}(z,s) -z-(s-u_{0}^{l})\sigma(z)\right\vert ds = \int_{u_{0}^{l}}^{u} \left\vert \phi^{l}(z,s) -z- \int_{u_0^l}^s \sigma(z)dr \right\vert ds.
\end{equation*} From (\ref{phin1}), the Lipschitz condition and the bound on $\sigma$, we get \begin{eqnarray} \bold{I}_{0}^{l}& = & \int_{u_{0}^{l}}^{u} \left\vert \int_{u_{0}^{l}}^{s} \sigma \left(z+(r-u_{0}^{l})\sigma(z) \right)dr - \int_{u_{0}^{l}}^{s} \sigma(z) dr \right\vert ds \nonumber \\ & \leq & \int_{u_{0}^{l}}^{u}\int_{u_{0}^{l}}^{s} \left\vert \sigma \left(z + (r-u_{0}^{l})\sigma(z)\right) -\sigma(z)\right\vert dr ds \nonumber \\ & \leq & M_{2}\int_{u_{0}^{l}}^{u}\int_{u_{0}^{l}}^{s}(r-u_{0}^{l})\left\vert \sigma(z) \right\vert dr ds \leq { M_{2} M_{5} \Vert B\Vert_{\infty}^3 \over 6 l^3}. \label{I0} \end{eqnarray} \end{small} Therefore, by (\ref{i0}), (\ref{I0}) and the Gronwall lemma, we obtain \begin{small} \begin{equation*} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq { M_{2}^{2} M_{5} \Vert B\Vert_{\infty}^3 \over 6 l^3} \exp \left( M_{2} (u_1^l - u_0^l)\right), \quad \mbox{for} \ u \in (0 , u_{1}^{l}]. \label{phid0} \end{equation*} \end{small} Now, consider an index $k \in \{1,\ldots, l-1\}$. Our induction assumption is that (\ref{philema1}) is true for $u \in (u_{k-1}^{l},u^{l}_{k}]$; we shall now propagate the induction, that is, prove that the inequality is also true for its successor, $k+1$. We will thus study (\ref{philema1}) for $u \in (u_{k}^{l} ,u^{l}_{k+1}]$.
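For later reference, we recall the integral form of the Gronwall lemma used above and throughout this section: if $f$ is a nonnegative integrable function satisfying
\begin{equation*}
f(u) \leq \alpha + \beta \int_{a}^{u} f(s) \, ds, \quad u \in [a,b],
\end{equation*}
for some constants $\alpha, \beta \geq 0$, then $f(u) \leq \alpha \exp \left( \beta (u-a) \right)$ for all $u \in [a,b]$. Here it is applied on each subinterval $(u_{k}^{l} , u^{l}_{k+1}]$, which produces the factors $\exp \left( M_{2}(u^{l}_{k+1} - u^{l}_{k}) \right)$.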
Following (\ref{phi}), (\ref{phin1}) and our induction hypothesis, we establish \begin{small} \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & \left\vert \phi(z,u^{l}_{k}) - \phi^{l}(z,u^{l}_{k}) \right\vert \nonumber \\ & + & \int_{u^{l}_{k}}^{u} \left\vert \sigma \left( \phi(z,s) \right) -\sigma \left(\phi^{l}(z,u^{l}_{k}) + (s- u^{l}_{k}) \sigma \left( \phi^{l}(z,u^{l}_{k}) \right) \right) \right\vert ds \nonumber \\ & \leq & \tilde{C}_k {M_2^2 M_5 \Vert B\Vert_{\infty}^3 \over 6{l}^{3}} + \int_{u^{l}_{k}}^{u} \left\vert \sigma \left( \phi(z,s) \right)-\sigma \left(\phi^{l}(z,u^{l}_{k}) + (s- u^{l}_{k}) \sigma \left( \phi^{l}(z,u^{l}_{k}) \right) \right) \right\vert ds. \nonumber \end{eqnarray} From the Lipschitz condition on $\sigma$, \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & \tilde{C}_k {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}}+ M_{2} \int_{u^{l}_{k}}^{u}\left\vert \phi(z,s)-\phi^{l}(z,s) \right\vert ds \nonumber \\ & + & M_{2}\int_{u^{l}_{k}}^{u}\left\vert \phi^{l}(z,s)-\phi^{l}(z,u^{l}_{k})- (s- u^{l}_{k}) \sigma\left(\phi^{l}(z,u^{l}_{k})\right)\right\vert ds \nonumber \\ & = & \tilde{C}_k {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} + M_{2} \int_{u^{l}_{k}}^{u} \left\vert \phi(z,s)-\phi^{l}(z,s) \right\vert ds + M_{2} {\bold I}^{l}_{k}, \label{ik} \end{eqnarray} \end{small} where $\tilde{C}_k = \exp \left( M_{2} (u^{l}_{k} - u^{l}_{0})\right) +\ldots + \exp \left( M_{2} (u^{l}_{k} - u^{l}_{k-1})\right).$ Now, we analyze the term ${\bold I}^{l}_{k}$, given in equation (\ref{ik}).
From (\ref{phin1}), the Lipschitz condition and the bound on $\sigma$, we obtain \begin{small} \begin{eqnarray} {\bold I}^{l}_{k} & \leq & \int_{u^{l}_{k}}^{u} \int_{u^{l}_{k}}^{s} \left\vert \sigma \left(\phi^{l}(z,u^{l}_{k}) + (r- u^{l}_{k}) \sigma \left( \phi^{l}(z,u^{l}_{k}) \right) \right) - \sigma \left(\phi^{l}(z,u^{l}_{k}) \right) \right\vert dr ds \nonumber \\ & \leq & M_{2} \int_{u^{l}_{k}}^{u} \int_{u^{l}_{k}}^{s} (r - u^{l}_{k}) \left\vert \sigma \left( \phi^{l}(z,u^{l}_{k}) \right) \right\vert drds \leq {M_2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}}. \label{IK} \end{eqnarray} \end{small} Therefore, inequalities (\ref{ik}) and (\ref{IK}) yield \begin{small} \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & \tilde{C}_k {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} +{M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} + M_{2} \int_{u^{l}_{k}}^{u} \left\vert\phi(z,s)-\phi^{l}(z,s) \right\vert ds. \nonumber \end{eqnarray} \end{small} Thus, the Gronwall lemma allows us to establish \begin{eqnarray} & & \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq \left(\tilde{C}_k + 1 \right)\left( {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} \right) \exp(M_2(u^{l}_{k+1} - u^l_k)) \nonumber \\ &=& \left( \exp \left( M_{2} (u^{l}_{k} - u^{l}_{0})\right) +\ldots + \exp \left( M_{2} (u^{l}_{k} - u^{l}_{k-1}) \right) +1 \right) \exp(M_2(u^{l}_{k+1} - u^{l}_k)) \left( {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} \right) \nonumber \\ &=& \tilde{C}_{k+1} {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}}, \nonumber \end{eqnarray} which shows that (\ref{philema1}) is satisfied for $k+1$. Now, we prove that (\ref{philema}) is true.
For all $(z,u) \in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$, there is some $k \in \lbrace 1,\ldots, l \rbrace$ such that $u^{l}_{k-1} < u \leq u^{l}_{k} $ and by the previous calculations \begin{small} \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & { M_{2}^{2} M_{5} \Vert B \Vert^3_{\infty} \over 6 l^3} \left[ \exp \left( M_{2} (u^{l}_{k} - u^{l}_{0})\right) +\ldots + \exp \left( M_{2} (u^{l}_{k} - u^{l}_{k-1})\right) \right] \nonumber \\ & \leq & { M_{2}^{2} M_{5} \Vert B \Vert^3_{\infty} \over 6 l^3} k \exp \left( {M_{2} \Vert B \Vert_{\infty}} \right)\nonumber \\ & \leq & { M_{2}^{2} M_{5} \Vert B \Vert^3_{\infty} \over 6 l^2} \exp \left( {M_{2} \Vert B \Vert_{\infty}}\right), \nonumber \end{eqnarray} \end{small} proving the lemma. \end{proof} \subsection*{Proof of Lemma \ref{lemapsi1}} \begin{proof} As in the proof of Lemma \ref{lemaphi1}, we will prove by induction that, for all $k \in \{1,\ldots, l\}$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$, we have \begin{equation}\label{lemadifl-1} \left\vert \phi^{l}(z,u) -\Psi^{l}(z,u)\right\vert\leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^3} k \left( 1 + A_l \right) ^k, \end{equation} with $A_l = {M_2 \Vert B \Vert_{\infty} \over l} + {1 \over 2} \left({M_2 \Vert B \Vert_{\infty} \over l}\right)^2$. Hence, \begin{equation}\label{lemadifl} \left\vert \phi^{l}(z,u) -\Psi^{l}(z,u)\right\vert\leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^2} \exp(2 M_2 \Vert B \Vert_{\infty}), \end{equation} where $M_2$, $M_3$ and $M_5$ are constants independent of $k$, given in Remark \ref{cotas}. We first assume that $k=1$.
If $0=u_{0}^{l} < u \leq u^{l}_{1}$, then from equalities (\ref{phin1}) to (\ref{psin}) we obtain \begin{small} \begin{eqnarray*} \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert & \leq & \int_{u_{0}^{l}}^{u} \left\vert \sigma \left[ \phi^{l}\left(z,u^{l}_{0} \right) + (s-u^{l}_{0}) \sigma \left( \phi^{l}\left(z,u^{l}_{0} \right) \right) \right] \right. \nonumber \\ & - & \left. \left[ \sigma \left( \Psi^{l}\left(z,u^{l}_{0} \right) \right) + \sigma \sigma' \left( \Psi^{l}\left(z,u^{l}_{0} \right) \right) (s-u^{l}_{0}) \right] \right\vert ds \nonumber \\ & = & \int_{u_{0}^{l}}^{u} \left\vert \sigma \left( z + (s-u^{l}_{0}) \sigma \left( z \right) \right) - \sigma(z) - \sigma'(z) (s-u^{l}_{0})\sigma(z) \right\vert ds. \end{eqnarray*} \end{small} By Taylor's theorem there exists a point $\theta \in \left( \inf \{z , z + (s-u^{l}_{0}) \sigma (z) \} , \sup \{z , z + (s-u^{l}_{0}) \sigma (z) \} \right)$ such that \begin{small} \begin{eqnarray*} \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u) \right\vert & \leq & \int_{u_{0}^{l}}^{u} { \left\vert \sigma '' ( \theta ) \right\vert \over 2 } (s-u^{l}_{0})^{2} \left\vert \sigma (z) \right\vert^{2} ds \nonumber \\ & \leq & {M_{3}M_{5}^{2} \Vert B \Vert^3_{\infty} \over 6 l^3} \nonumber \\ & \leq & {M_{3}M_{5}^{2} \Vert B \Vert^3_{\infty} \over 6 l^3} 1 ( 1 + A_l) ^1. \end{eqnarray*} \end{small} Now, let us consider $k \in \{1,\ldots, l-1\}$. Our induction assumption is that (\ref{lemadifl-1}) is true for $u \in (u_{k-1}^{l},u^{l}_{k}]$. We will thus study (\ref{lemadifl-1}) for $u \in (u_{k}^{l} , u^{l}_{k+1}]$.
Following equations (\ref{phin1}) to (\ref{psin}) and our induction hypothesis, we get \begin{small} \begin{eqnarray} & &\left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert \nonumber \\ &\leq & \left\vert \phi^{l}(z,u^{l}_{k}) - \Psi^{l}(z,u^{l}_{k})\right\vert + \int_{u_{k}^{l}}^{u}\left\vert \sigma \left[\phi^{l}\left(z,u^{l}_{k}\right) + (s-u^{l}_{k}) \sigma \left( \phi^{l}\left(z,u^{l}_{k} \right) \right) \right] \right. \nonumber \\ & - & \left. \left[ \sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) + \sigma' \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) (s-u^{l}_{k})\sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) \right] \right\vert ds \nonumber \\ & \leq & {M_{3} M_{5}^{2} \Vert B \Vert^{3}_{\infty} \over 6 l^3} k \left( 1 + A_l \right) ^k \nonumber \\ & + & \int_{u_{k}^{l}}^{u} \left\vert \sigma \left[ \phi^{l}\left(z,u^{l}_{k} \right) + (s-u^{l}_{k}) \sigma \left( \phi^{l}\left(z,u^{l}_{k} \right) \right) \right] - \sigma \left[ \Psi^{l}\left(z,u^{l}_{k} \right) + (s-u^{l}_{k}) \sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) \right] \right\vert ds \nonumber \\ & + & \int_{u_{k}^{l}}^{u} \left\vert \sigma \left[ \Psi^{l}\left(z,u^{l}_{k} \right) + (s-u^{l}_{k}) \sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) \right] - \left[ \sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) + \sigma \sigma' \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) (s-u^{l}_{k}) \right] \right\vert ds \nonumber \\ &= & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} k \left( 1 + A_l \right) ^k + \bold{J}_1^k +\bold{J}_2^k \nonumber \end{eqnarray} From the Lipschitz condition on $\sigma$, and our induction hypothesis \begin{eqnarray} \bold{J}_1^k & \leq & M_2 \int_{u_{k}^{l}}^{u} \left\vert \phi^{l}\left(z,u^{l}_{k} \right) - \Psi^{l}\left(z,u^{l}_{k} \right) \right\vert ds + M_2^2 \int_{u_{k}^{l}}^{u} (s-u_k^l) \left\vert \phi^{l}\left(z,u^{l}_{k} \right) - \Psi^{l}\left(z,u^{l}_{k} \right) \right\vert ds \nonumber \\ & \leq & {M_2 
M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} k \left( 1 + A_l \right) ^k (u - u^{l}_{k}) + {M_2^2 M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 12 l^3} k \left( 1 + A_l\right) ^k (u - u^{l}_{k})^2. \nonumber \end{eqnarray} By Taylor's theorem there exists a point \begin{equation*} \theta_k \in \left( \inf \{ \Psi^{l}(z,u^{l}_{k}), \Psi^{l}(z,u^{l}_{k}) + (s-u^{l}_{k}) \sigma (\Psi^{l}(z,u^{l}_{k})) \} , \sup \{ \Psi^{l}(z,u^{l}_{k}), \Psi^{l}(z,u^{l}_{k}) + (s-u^{l}_{k}) \sigma (\Psi^{l}(z,u^{l}_{k})) \} \right) \end{equation*} such that \begin{eqnarray} \bold{J}_2^k & \leq & \int_{u_{k}^{l}}^{u} {\vert \sigma''(\theta_k) \vert \over 2} \left\vert \sigma \left( \Psi^{l} \left(z,u^{l}_{k} \right) \right) \right\vert ^2 (s-u^{l}_{k})^{2} ds \nonumber \\ & \leq & { M_3 M_5^2 \over 6} (u - u^{l}_{k} )^3. \nonumber \end{eqnarray} \end{small} Therefore \begin{eqnarray} & & \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} k \left( 1 +A_l \right) ^k + {M_2 M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} k \left( 1 + A_l \right) ^k (u - u^{l}_{k}) \nonumber \\ &+& {M_2^2 M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 12 l^3} k \left( 1 + A_l \right) ^k (u - u^{l}_{k})^2 + { M_3 M_5^2 \over 6} (u - u^{l}_{k} )^3. \nonumber \end{eqnarray} Since $(u - u^{l}_{k}) \leq {\Vert B \Vert_{\infty} \over l}$, we obtain \begin{eqnarray} & & \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} \left[ k\left(1 + A_l \right)^k + {M_2 \Vert B \Vert_{\infty} \over l} k\left(1 + A_l \right)^k+ {M_2^2 \Vert B \Vert_{\infty}^2 \over 2l^2 }k\left(1 + A_l \right)^k+ 1\right].
\nonumber \end{eqnarray} Since $1 < \left(1 + A_l \right)^{k+1}$, we get \begin{eqnarray} & & \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} \left[ k\left(1 + A_l \right)^{k+1} + 1\right] \nonumber \\ &\leq & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} \left[ k\left(1 + A_l \right)^{k+1} + \left(1 + A_l \right)^{k+1}\right] = {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} (k+1) \left(1 + A_l \right)^{k+1}. \nonumber \end{eqnarray} Thus (\ref{lemadifl-1}) holds for any $k \in \{1, \ldots ,l\}$. Finally, we see that (\ref{lemadifl}) is satisfied. For all $(z,u) \in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$, there is some $k \in \lbrace 1,\ldots, l \rbrace$ such that $u^{l}_{k-1} < u \leq u^{l}_{k} $ and by (\ref{lemadifl-1}), \begin{small} \begin{eqnarray} \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u) \right\vert & \leq & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^3} k \left(1 + A_l \right)^{k} \nonumber \\ & \leq & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^2} \left(1 + A_l \right)^{k} \nonumber \\ & \leq & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^2} \exp(2 M_2 \Vert B \Vert_{\infty}). \nonumber \end{eqnarray} \end{small} Thus, the proof is complete. \end{proof} \subsection*{Proof of Lemma \ref{lemapsin}} \begin{proof} We will prove by induction that, for all $k \in \{1,2, \ldots ,l\}$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$, we have \begin{equation}\label{difpsi1} \vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \prod_{j=1}^{k} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right]. \end{equation} As a consequence, we obtain the global bound \begin{equation*} \vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \exp(2M_{2} \Vert B \Vert_{\infty} ).
\end{equation*} In a similar way as in the previous lemmas, if $0=u_{0}^{l} < u \leq u^{l}_{1}$, then by equation (\ref{psin}) and the fact that $\Psi^{l}(z,u_0^l) =z$, we have \begin{equation*} \left\vert \Psi^{l}(z_{1},u) -\Psi^{l}(z_{2},u) \right\vert \leq \left\vert z_{1}-z_{2} \right\vert \left[ 1 + M_2 (u_{1}^{l}-u_{0}^{l}) + (M_2^2 + M_3 M_5) {(u_{1}^{l}-u_{0}^{l})^{2} \over 2} \right]. \end{equation*} Thus, (\ref{difpsi1}) is satisfied for $k=1$. Now, assume that (\ref{difpsi1}) is true for $k$; we prove that the inequality also holds for its successor, $k+1$. For that, we study (\ref{difpsi1}) for $u \in (u_{k}^{l} , u^{l}_{k+1}]$. Following (\ref{psin}), the Lipschitz condition and the hypothesis on the second derivative of $\sigma$, we have \begin{eqnarray*} & &\left\vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \right\vert \\ &\leq & \left\vert \Psi^{l}(z_1,u^{l}_{k}) - \Psi^{l}(z_2,u^{l}_{k})\right\vert + M_2 \int_{u_{k}^{l}}^{u}\left\vert \Psi^{l}\left(z_1,u^{l}_{k}\right) -\Psi^{l}\left(z_2,u^{l}_{k}\right) \right\vert ds \\ &+& (M_2^2 + M_3 M_5) \int_{u_{k}^{l}}^{u}\left\vert \Psi^{l}\left(z_1,u^{l}_{k}\right) -\Psi^{l}\left(z_2,u^{l}_{k}\right) \right\vert (s-u^{l}_{k}) ds \\ &=& \left\vert \Psi^{l}(z_1,u^{l}_{k}) - \Psi^{l}(z_2,u^{l}_{k})\right\vert \left( 1+M_2 (u-u^{l}_{k}) + (M_2^2 + M_3 M_5) { (u-u^{l}_{k})^2 \over 2} \right).
\end{eqnarray*} Consequently, from our induction hypothesis, we get \begin{eqnarray*} \left\vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \right\vert & \leq & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right] \\ &\times& \left( 1 + M_{2}(u^{l}_{k+1}-u_{k}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{k+1}-u_{k}^{l})^{2} \over 2 }\right) \\ &= & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k+1} \left( 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right), \end{eqnarray*} which implies that (\ref{difpsi1}) is satisfied. Now, for all $(z,u) \in [-M,M] \times [- \Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$, there is some $k \in \lbrace 1,\ldots, l \rbrace$ such that $u^{l}_{k-1} < u \leq u^{l}_{k} $ and from (\ref{difpsi1}) \begin{eqnarray*} \left\vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \right\vert & \leq & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k} \left( 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right) \\ &\leq & \left\vert z_{1}-z_{2} \right\vert \left[ 1 + {2M_{2} \Vert B \Vert_{\infty} \over l} \right] ^{l} \leq \left\vert z_{1}-z_{2} \right\vert \exp (2M_{2} \Vert B \Vert_{\infty} ), \end{eqnarray*} where the last inequality is due to the fact that for $l$ large enough ${M_2 \Vert B \Vert_{\infty} + M_3M_5 \Vert B \Vert_{\infty} /M_2 \over 2 l} < 1$ and $\left( 1 + {2M_2 \Vert B \Vert_{\infty} \over l} \right)^l < \exp(2 M_2 \Vert B \Vert_{\infty})$. Thus the proof is complete.
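Let us also note that the last step is an instance of the elementary inequality $1+x \leq \exp(x)$, valid for all $x \geq 0$, which gives
\begin{equation*}
\left( 1 + {2M_{2} \Vert B \Vert_{\infty} \over l} \right)^{l} \leq \exp \left( 2M_{2} \Vert B \Vert_{\infty} \right) \quad \mbox{for every} \ l \geq 1;
\end{equation*}
the same bound reappears in the proof of Lemma \ref{difphin1} below.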
\end{proof} \subsection*{Proof of Lemma \ref{difphin1}} \begin{proof} We will prove by induction that, for all $k \in \{1,\ldots, l\}$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$, we have \begin{small} \begin{equation}\label{difphi1} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \prod_{j=1}^{k} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + M_{2}^{2} {(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right]. \end{equation} \end{small} As a consequence, we obtain the global bound \begin{equation*} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \exp \left( 2M_{2} \Vert B \Vert_{\infty} \right). \end{equation*} If $0=u_{0}^{l} < u \leq u^{l}_{1}$, then by equations (\ref{phi0}) and (\ref{phin1}) and the fact that $\phi^{l}(z,u_{0}^{l}) =z$, we have \begin{small} \begin{equation*} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \left[ 1 + M_{2}(u^{l}_{1}-u_{0}^{l}) + M_{2}^{2} {(u^{l}_{1}-u_{0}^{l})^{2} \over 2 }\right]. \end{equation*} \end{small} Therefore, (\ref{difphi1}) is satisfied for $k=1$. Now, assume that (\ref{difphi1}) holds for $k$; it remains to prove that the inequality is true for its successor, $k+1$. For that, we choose $u \in (u_{k}^{l} , u^{l}_{k+1}]$. Using (\ref{phin1}) and the Lipschitz condition on $\sigma$ again, we can write \begin{eqnarray*} & &\left\vert \phi^{l}(z_{1},u) -\phi^{l}(z_{2},u) \right\vert \\ &\leq & \left\vert \phi^{l}(z_1,u^{l}_{k}) - \phi^{l}(z_2,u^{l}_{k})\right\vert + M_2 \int_{u_{k}^{l}}^{u}\left\vert \phi^{l}\left(z_1,u^{l}_{k}\right) -\phi^{l}\left(z_2,u^{l}_{k}\right) \right\vert ds \\ &+& M_2^2 \int_{u_{k}^{l}}^{u}\left\vert \phi^{l}\left(z_1,u^{l}_{k}\right) -\phi^{l}\left(z_2,u^{l}_{k}\right) \right\vert (s-u^{l}_{k}) ds \\ &=& \left\vert \phi^{l}(z_1,u^{l}_{k}) - \phi^{l}(z_2,u^{l}_{k})\right\vert \left( 1+M_2 (u-u^{l}_{k}) + M_2^2 { (u-u^{l}_{k})^2 \over 2} \right).
\end{eqnarray*} Our induction hypothesis leads us to \begin{eqnarray*} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert & \leq & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + M_{2}^{2}{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right] \\ &\times& \left[ 1 + M_{2}(u^{l}_{k+1}-u_{k}^{l}) + M_{2}^{2} {(u^{l}_{k+1}-u_{k}^{l})^{2} \over 2 }\right] \\ &= & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k+1} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + M_{2}^{2} {(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right]. \end{eqnarray*} Therefore, (\ref{difphi1}) holds for any $k \in \{ 1, \ldots , l \}$. Now, for all $u \in [- \Vert B \Vert_{\infty} ,\Vert B \Vert_{\infty}]$, there is some $k \in \lbrace 1,\ldots, l \rbrace$ such that $u^{l}_{k-1} < u \leq u^{l}_{k} $ and, by (\ref{difphi1}), \begin{eqnarray*} \left\vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \right\vert & \leq & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k} \left( 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + M_{2}^{2}{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right) \\ &\leq & \left\vert z_{1}-z_{2} \right\vert \left[ 1 + {2M_{2}\Vert B \Vert_{\infty} \over l} \right] ^{l} \leq \left\vert z_{1}-z_{2} \right\vert \exp (2M_{2}\Vert B \Vert_{\infty}), \end{eqnarray*} where the last inequality is due to the fact that for $l$ large enough ${M_2 \over l} < 1$ and $\left( 1 + {2M_2 \Vert B \Vert_{\infty} \over l} \right)^l < \exp(2 M_2 \Vert B \Vert_{\infty})$. Therefore (\ref{difphin1-1}) is satisfied and the proof is complete. \end{proof} \subsection*{Proof of Lemma \ref{lemayyn}} \begin{proof} By equations (\ref{y2}) and (\ref{yl}), we have, for $t \in [0,T]$, \begin{small} \begin{eqnarray*} \left\vert Y_{t} - Y_{t}^{n} \right\vert & \leq & \int_{0}^{t} \left\vert \exp \left( -\int_{0}^{B_{s}} \sigma' \left( \phi(Y_{s} ,u) \right) du \right) b \left( \phi(Y_{s} ,B_{s}) \right) \right. \nonumber \\ & - & \left.
\exp \left( -\int_{0}^{B_{s}} \sigma' ( \Psi^{n}(Y^{n}_{s} ,u) ) du \right) b \left( \Psi^{n}(Y^{n}_{s} ,B_{s}) \right) \right\vert ds \nonumber \\ & \leq & \bold{K}_{1} + \bold{K}_{2}, \end{eqnarray*} \end{small} with \begin{small} \begin{equation} \bold{K}_{1} = \int_{0}^{t} \left\vert \exp \left( -\int_{0}^{B_{s}} \sigma' \left( \phi(Y_{s} ,u) \right) du \right) \right\vert \left\vert b \left( \phi(Y_{s} ,B_{s}) \right) - b \left( \Psi^{n}(Y^{n}_{s} ,B_{s}) \right) \right\vert ds, \nonumber \end{equation} \end{small} and \begin{small} \begin{equation} \bold{K}_{2} = \int_{0}^{t} \left\vert \exp \left( -\int_{0}^{B_{s}} \sigma' \left( \phi(Y_{s} ,u) \right) du \right) - \exp \left( -\int_{0}^{B_{s}} \sigma' \left( \Psi^{n}(Y^{n}_{s} ,u) \right) du \right)\right\vert \left\vert b \left( \Psi^{n}(Y^{n}_{s} ,B_{s}) \right) \right\vert ds. \nonumber \end{equation} \end{small} Therefore by (\ref{eq:phi}), the Lipschitz properties on $b$ and $\sigma$, and Lemmas \ref{lemaphi1} and \ref{lemapsi1} we obtain \begin{small} \begin{eqnarray} &&\bold{K}_{1} \leq M_{4} \exp \left( M_2\Vert B \Vert_{\infty} \right) \int_{0}^{t} \vert \phi(Y_{s} , B_{s}) - \Psi^{n}(Y^{n}_{s},B_{s}) \vert ds \nonumber \\ & \leq & M_{4} \exp \left( M_2 \Vert B \Vert_{\infty} \right) \left( \int_{0}^{t} \vert \phi(Y_{s} , B_{s}) - \phi(Y^{n}_{s},B_{s}) \vert ds + \int_{0}^{t} \vert \phi(Y^{n}_{s} , B_{s}) - \phi^{n}(Y^{n}_{s},B_{s}) \vert ds \right. \nonumber \\ & + & \left. \int_{0}^{t} \vert \phi^{n}(Y^{n}_{s} , B_{s}) - \Psi^{n}(Y^{n}_{s},B_{s}) \vert ds \right)\nonumber \\ &\leq & M_{4} \exp \left( M_2\Vert B \Vert_{\infty} \right) \left( \int_{0}^{t} \exp(M_2 \Vert B \Vert_{\infty}) \vert Y_{s} - Y^{n}_{s}|ds \right. \nonumber \\ && \hspace{3cm} +\left. {T M_2^2 M_5 \Vert B \Vert^{3}_{\infty} \over 6n^2} \exp(M_2 \Vert B \Vert_{\infty}) + {T M_3 M_5^2 \Vert B \Vert^{3}_{\infty} \over 6n^2} \exp(2M_2 \Vert B \Vert_{\infty}) \right). 
\nonumber \end{eqnarray} \end{small} Now, by the mean value theorem, we get \begin{eqnarray*} \bold{K}_2 & \leq & M_{1}M_{3} \exp \left( M_{2} \Vert B \Vert_{\infty} \right)\int_{0}^{t} \int_{0}^{\vert B_{s} \vert } \vert \phi(Y_{s},u) - \Psi^{n}(Y^{n}_{s},u) \vert du ds. \end{eqnarray*} Hence, proceeding as in $\bold{K}_{1}$, we obtain \begin{small} \begin{eqnarray*} \bold{K}_{2} & \leq & M_{1}M_{3} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) \Vert B \Vert_{\infty} \left( \int_{0}^{t} \exp(M_2 \Vert B \Vert _\infty ) \vert Y_{s} - Y^{n}_{s} \vert ds \right.\nonumber \\ & +& \left. {T M_2^2 M_5 \Vert B \Vert_{\infty}^{3} \over 6n^2} \exp(M_2 \Vert B \Vert_{\infty}) + {TM_3 M_5^2 \Vert B \Vert_{\infty}^{3} \over 6n^2} \exp(2M_2 \Vert B \Vert_{\infty}) \right). \end{eqnarray*} \end{small} Taking into account the inequalities for $\bold{K}_{1}$ and $\bold{K}_{2}$, we have \begin{eqnarray*} \left\vert Y_{t} - Y_{t}^{n} \right\vert & \leq & C_{1} \int_{0}^{t} \vert Y_{s}- Y^{n}_{s} \vert ds + {C_{2} \over n^{2}}, \end{eqnarray*} where \begin{equation*} C_{1} = (M_{4} + M_{1}M_{3} \Vert B \Vert_{\infty} ) \exp(2 M_2 \Vert B \Vert_{\infty}) \end{equation*} and \begin{eqnarray*} C_{2} &=& \exp( M_2 \Vert B \Vert_{\infty}) (M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty}) \\ & \times & \left({T M_2^2 M_5 \Vert B \Vert_{\infty}^{3} \over 6} \exp(M_2 \Vert B \Vert_{\infty}) + {T M_3 M_5^2 \Vert B \Vert_{\infty}^3 \over 6} \exp(2M_2 \Vert B \Vert_{\infty}) \right). \end{eqnarray*} Finally, the desired result is achieved by direct application of the Gronwall lemma. \end{proof} \subsection*{Proof of Lemma \ref{difynn}} \begin{proof} Recall that $h_{1}^{n}(z,u) = {\partial g^{n}(z,u) \over \partial z}$.
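Recall also that $\Vert B \Vert_{H-\rho}$ stands for the $(H-\rho)$-H\"older seminorm of the path $B$ on $[0,T]$, that is,
\begin{equation*}
\Vert B \Vert_{H-\rho} = \sup_{0 \leq s < t \leq T} {\vert B_{t} - B_{s} \vert \over (t-s)^{H-\rho}},
\end{equation*}
so that $\vert B_{u} - B_{t^{n}_{k}} \vert \leq \Vert B \Vert_{H-\rho} (u - t^{n}_{k})^{H-\rho}$ for $u \geq t^{n}_{k}$; this estimate is used repeatedly below.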
Then, by equations (\ref{yl}) to (\ref{ynn}) we obtain \begin{small} \begin{eqnarray*} \vert Y^{n,n}_{s} - Y^{n,n}_{t^{n}_{k}} \vert & \leq & \int_{t_{k}^{n}}^{s} \left\vert g^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) + h_{1}^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) \left( B_{u} - B_{t^{n}_{k}} \right) \right\vert du \nonumber \\ & \leq & M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) (s-t_{k}^{n}) + \left\vert h_{1}^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) \right\vert \int_{t_{k}^{n}}^{s} \left\vert B_{u} - B_{t^{n}_{k}} \right\vert du \nonumber \\ & \leq & M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) (s-t_{k}^{n}) + \Vert B \Vert_{H- \rho} C_{3}(s-t_{k}^{n})^{1+H-\rho} \nonumber \\ & \leq & C_{4}(s-t_{k}^{n}), \end{eqnarray*} \end{small} where \begin{small} \begin{equation*} \left\vert h_{1}^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) \right\vert \leq M_{1}M_{2} \exp \left(M_{2}\Vert B \Vert_{\infty} \right) + M_{4} \exp \left(M_{2} \Vert B \Vert_{\infty} \right) M_{5} \Vert B \Vert_{\infty}\left( 1 +M_2 \right) = C_{3}, \end{equation*} and \begin{equation*}\label{C4} C_4 = M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) + C_3 T^{H - \rho} \Vert B \Vert_{H-\rho}. \end{equation*} \end{small} The computation of the bound on the term $h_{1}^{n}(z,u)$ is deferred to the Appendix (Section \ref{apen}). \end{proof} \subsection*{Proof of Lemma \ref{difynynn}} \begin{proof} Let $n \in \mathbb{N}$ be fixed. We will prove Lemma \ref{difynynn} by induction on $k$ again; that is, for every $k \in \{1, \ldots,n \}$ and $t \in (t_{k-1}^{n} , t_{k}^{n}]$, we have \begin{equation}\label{dify1} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq C_{6} \sum_{j=1}^{k} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k} - t^n_{j-1} )), \end{equation} where $0<\rho<H$.
As a consequence, we obtain the global bound \begin{equation*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq { C_{6} T^{1+2(H-\rho)} \exp(C_{7}T) \over n^{2(H-\rho)} }, \end{equation*} where $C_{6}$ and $C_{7}$ are given in (\ref{c6}) and (\ref{c7}), respectively. First, for $k=1$ and $t \in (t_{0}^{n}, t_{1}^{n}]$, equations (\ref{yl}) to (\ref{ynn}) imply \begin{small} \begin{eqnarray} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert & \leq & \int_{t_{0}^{n}}^{t} \left\vert g^{n}(B_{s}, Y^{n}_{s}) - \left[ g^{n}(B_{t_{0}^{n}},x) + h_{1}^{n}(B_{t_{0}^{n}},x)(B_{s} - B_{t_{0}^{n}}) \right] \right\vert ds \nonumber \\ & \leq & \bold{F}_{1} + \bold{F}_{2} + \bold{F}_{3}, \nonumber \end{eqnarray} \end{small} where \begin{eqnarray} \bold{F}_{1} &=& \int_{t_{0}^{n}}^{t} \left\vert g^{n}(B_{s}, Y^{n}_{s}) - g^{n}(B_{s}, Y^{n,n}_{s}) \right\vert ds\nonumber \\ \bold{F}_{2} &=& \int_{t_{0}^{n}}^{t} \left\vert g^{n}(B_{s},Y^{n,n}_{s}) - g^{n}(B_{s},Y^{n,n}_{t_{0}^{n}}) \right\vert ds \quad \mbox{and}\nonumber \\ \bold{F}_{3} &=& \int_{t_{0}^{n}}^{t} \left\vert g^{n}(B_{s}, Y^{n,n}_{t_{0}^{n}}) - \left[ g^{n}(B_{t_{0}^{n}},Y^{n,n}_{t_{0}^{n}}) + h_{1}^{n}(B_{t_{0}^{n}},Y^{n,n}_{t_{0}^{n}})(B_{s} - B_{t_{0}^{n}}) \right] \right\vert ds. \nonumber \end{eqnarray} Equality (\ref{gl1}) and the triangle inequality allow us to write \begin{small} \begin{eqnarray*} \bold{F}_{1} &=& \int_{t_{0}^{n}}^{t} \left\vert \exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n}, r)) dr \right) b(\Psi^{n}(Y_{s}^{n}, B_s)) \right.\\ &-& \left.
\exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n,n}, r)) dr \right) b(\Psi^{n}(Y_{s}^{n,n}, B_s)) \right\vert ds \\ & \leq & \int_{t_{0}^{n}}^{t} \left\vert \exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n}, r))dr \right) \right\vert \left\vert b(\Psi^{n}(Y_{s}^{n}, B_s)) - b(\Psi^{n}(Y_{s}^{n,n}, B_s)) \right\vert ds \\ &+& \int_{t_{0}^{n}}^{t} \left\vert \exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n}, r)) dr \right) -\exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n,n}, r)) dr \right) \right\vert \\ & \times & \left\vert b(\Psi^{n}(Y_{s}^{n,n}, B_s)) \right\vert ds. \end{eqnarray*} \end{small} Therefore, the Lipschitz property on $b$ and the mean value theorem yield \begin{small} \begin{eqnarray*} \bold{F}_{1} & \leq & M_{4}\exp(M_{2} \Vert B \Vert_{\infty}) \int_{t_{0}^{n}}^{t} \left\vert \Psi^{n}(Y_{s}^{n}, B_s) - \Psi^{n}(Y_{s}^{n,n}, B_s) \right\vert ds, \\ &+& M_{1}M_{3} \exp \left( M_{2} \Vert B \Vert_{\infty} \right)\int_{0}^{t} \int_{0}^{\vert B_{s} \vert } \vert \Psi^{n}(Y_{s}^{n}, r) - \Psi^{n}(Y_{s}^{n,n}, r) \vert dr ds. 
\end{eqnarray*} \end{small} Consequently, Lemma \ref{lemapsin} leads us to \begin{small} \begin{eqnarray*} \bold{F}_{1} & \leq & M_{4}\exp(3M_{2} \Vert B \Vert_{\infty}) \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds \nonumber \\ &+& M_{1}M_3\exp(3M_{2} \Vert B \Vert_{\infty} ) \Vert B \Vert_{\infty} \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds \\ &=& \exp(3M_{2} \Vert B \Vert_{\infty}) \left[ M_{4} + M_{1}M_3 \Vert B \Vert_{\infty} \right] \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds. \end{eqnarray*} \end{small} Proceeding as for $\bold{F}_{1}$, \begin{small} \begin{eqnarray*} \bold{F}_{2} & \leq & M_{4}\exp(M_{2} \Vert B \Vert_{\infty}) \int_{t_{0}^{n}}^{t} \left\vert \Psi^{n}(Y_{s}^{n,n}, B_s) - \Psi^{n}(Y^{n,n}_{t_{0}^{n}}, B_s) \right\vert ds \\ &+& M_{1}M_{3} \exp \left( M_{2} \Vert B \Vert_{\infty} \right)\int_{t_{0}^{n}}^{t} \int_{0}^{\vert B_{s} \vert } \vert \Psi^{n}(Y_{s}^{n,n}, r) - \Psi^{n}(Y^{n,n}_{t_{0}^{n}}, r) \vert dr ds \\ &\leq & \exp(3M_{2} \Vert B \Vert_{\infty}) \left[ M_{4} + M_{1}M_3 \Vert B \Vert_{\infty} \right]\int_{t_{0}^{n}}^{t} \vert Y_{s}^{n,n}-Y^{n,n}_{t_{0}^{n}} \vert ds. \end{eqnarray*} \end{small} Hence, using Lemma \ref{difynn}, we can establish $$\bold{F}_{2} \leq C_{4} \exp(3M_{2} \Vert B \Vert_{\infty}) \left[ M_{4} + M_{1}M_3 \Vert B \Vert_{\infty} \right] {(t-t_{0}^{n})^2 \over 2}.$$ Now, we deal with $\bold{F}_{3}$.
From Lemma \ref{derivadagl} (Section \ref{apen}), \begin{eqnarray}\label{eq:F3delta} \bold{F}_{3} & \leq & C_{5} \int_{t_{0}^{n}}^{t} (B_{s} - B_{t_{0}^{n}})^{2} ds + \int_{t_{0}^{n}}^{t} \left\vert \sum_{j=1}^{n} 1_{ \{B_{s} \in (u_{j-1}, u_{j}] \}} \sum_{k=1}^{j} (B_{s} - u_{k}^{n}) \Delta_{j+k} g^n{'} \right\vert ds \nonumber \\ & + & \int_{t_{0}^{n}}^{t} \left\vert \sum_{j=-n+1}^{0} 1_{ \{B_{s} \in (u_{j-1}, u_{j}] \}} \sum_{k=-j}^{0} (B_{s} - u_{k}^{n}) \Delta_{j+k} g^n{'} \right\vert ds, \end{eqnarray} where \begin{equation*} \Delta_{j+k} g^n{'} = {\partial g^{n}(u_{j+k}+, Y_{t_{0}^{n}}^{n,n}) \over \partial u} - {\partial g^{n}(u_{j+k}-, Y_{t_{0}^{n}}^{n,n}) \over \partial u} \end{equation*} and \begin{eqnarray*} C_{5} &=& \exp (M_{2} \Vert B \Vert_{\infty}) \left[ \Vert B \Vert_{\infty} (1+M_{2}) \left( M_{3} M_{1}M_{5} + M_{2} M_{4} M_{5} + M_{6}M_{5} \Vert B \Vert_{\infty} (1+ M_{2}) \right) \right. \\ &+& \left. M_{1}M_{2} + M_{4}M_{5}( 1+ M_{2}) \right]. \end{eqnarray*} Note that (\ref{dergl}) implies \begin{eqnarray*} \vert \Delta_{j+k} g^n{'} \vert &\leq& M_{4} \exp(M_{2} \Vert B \Vert_{\infty}) \left\vert \frac{\partial \Psi^{n}(Y_{t_{0}^{n}}^{n,n}, u_{j+k}+)}{\partial u} - \frac{\partial \Psi^{n}(Y_{t_{0}^{n}}^{n,n}, u_{j+k}-)}{\partial u} \right\vert \nonumber \\ & = & M_{4} \exp(M_{2} \Vert B \Vert_{\infty}) \left\vert \sigma \left( \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k} ) \right) - \sigma \left( \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k-1} ) \right) \right. \nonumber \\ &-& \left. \sigma \sigma' \left( \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k-1} ) \right) \left(u_{j+k} - u_{j+k-1} \right) \right\vert \nonumber \\ & \le & M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) \left[ \left\vert \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k} ) - \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k-1} ) \right\vert \right. \nonumber \\ & + & \left. 
M_{5}M_{2} (u_{j+k}-u_{j+k-1}) \right] \nonumber \\ & \le & M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) \left[ (M_{5} + M_{5}M_{2}) {(u_{j} -u_{j-1})^{2} \over 2 } \right. \nonumber \\ & + & \left. M_{5}M_{2} (u_{j+k}-u_{j+k-1}) \right] \nonumber \\ & \le & M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) (u_{j} -u_{j-1}) \left[ (M_{5} + M_{5}M_{2}) \Vert B \Vert_{\infty} \right. \nonumber \\ & + & \left. M_{5}M_{2} \right] \nonumber \\ &=& C_{8} (u_{j} -u_{j-1}). \end{eqnarray*} Hence, (\ref{eq:F3delta}) implies \begin{equation*} \bold{F}_{3} \leq ( C_{5} +C_{8} ) \int_{t_{0}^{n}}^{t} (B_{s} - B_{t_{0}^{n}})^{2} ds \leq ( C_{5} +C_{8} ) \Vert B \Vert_{H-\rho}^{2} (t - t_{0}^{n})^{1+2(H-\rho)}. \end{equation*} Since $H < 1/2$, the previous estimations for $\bold{F}_{1}$, $\bold{F}_{2}$ and $\bold{F}_{3}$ give \begin{eqnarray*} & &\left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq \nonumber \\ & & \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1} M_{3} \Vert B \Vert_{\infty}] \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds \\ & &+ \left[ C_{4} \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty} ]T^{1-2(H - \rho)} \right. \\ && \left. + (C_{5} + C_{8}) \Vert B \Vert_{H-\rho}^{2} \right] (t-t_{0}^{n})^{1 + 2(H-\rho)} \\ & & = C_{6}(t-t_{0}^{n})^{1+2(H-\rho)} + C_{7} \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds. \end{eqnarray*} Then, by the Gronwall lemma and the fact that $t \in (t_{0}^{n} , t_{1}^{n}]$, we conclude \begin{equation*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq C_{6} \exp(C_{7}(t^{n}_{1}-t_{0}^{n}) ) (t^{n}_{1}-t_{0}^{n})^{1+2(H-\rho)}. \end{equation*} Now we show that (\ref{dify1}) is true for $k+1$ if it holds for $k$. So we choose $t \in (t_{k}^{n}, t_{k+1}^{n}]$.
Towards this end, we proceed as in the case $k=1$: \begin{small} \begin{eqnarray*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert & \leq & \left\vert Y^{n}_{t^n_{k}} - Y_{t^n_{k}}^{n,n} \right\vert \\ & + & \left| \int_{t_{k}^{n}}^t \left( g^{n}\left(B_{s} , Y^{n}_{s}\right)- \left[ g^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) + h_{1}^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) \left( B_{s} - B_{t^{n}_{k}} \right) \right] \right) ds \right| \\ & \leq & C_{6} \sum_{j=1}^{k} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k} - t^n_{j-1} )) + C_{6}(t_{k+1}^n - t_{k}^n )^{1 + 2(H-\rho)} \\ &+& C_{7} \int_{t_{k}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds. \end{eqnarray*} \end{small} Therefore, using the Gronwall lemma again and the fact that $t \in (t_{k}^{n} , t_{k+1}^{n}]$, we obtain \begin{small} \begin{eqnarray*} & & \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \\ & \leq & \left[ C_{6} \sum_{j=1}^{k} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k} - t^n_{j-1} )) + C_{6}(t_{k+1}^n - t_{k}^n )^{1 + 2(H-\rho)} \right] \exp(C_{7}(t_{k+1}^n - t_{k}^n )) \\ &=& C_{6} \sum_{j=1}^{k} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k+1} - t^n_{j-1} )) + C_{6}(t_{k+1}^n - t_{k}^n )^{1 + 2(H-\rho)} \exp(C_{7}(t_{k+1}^n - t_{k}^n )) \\ &=& C_{6} \sum_{j=1}^{k+1} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k+1} - t^n_{j-1} )). \end{eqnarray*} \end{small} Therefore (\ref{dify1}) is true for any $k \leq n$. Finally, for all $t \in [0,T]$, there exists $k \in \lbrace 1,\ldots, n \rbrace$ such that $t^{n}_{k-1} < t \leq t^{n}_{k} $. Thus (\ref{dify1}) implies \begin{eqnarray*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert & \leq & C_{6} \left({T \over n} \right)^{1+2(H-\rho)} k \exp(C_{7}(t^n_{k} - t^n_{0} )) \\ & \leq & C_{6} T \left({T \over n} \right)^{2(H-\rho)} \exp(C_{7}T), \end{eqnarray*} and the proof is complete.
\end{proof} \section{Appendix}\label{apen} Here, we consider the following useful result for the analysis of the convergence of the scheme. \begin{lem}\label{derivadagl} Let $\left\lbrace u_{i}^{l} \right\rbrace$ be a partition of the interval $[-R,R]$ given by $-R=u^{l}_{-l} < \ldots <u^{l}_{-1}< u^{l}_{0}=0< u^{l}_{1}< \ldots < u^{l}_{l} = R$ and $f:[-R,R] \rightarrow \mathbb{R} $ a $C^{2}([u_{j}^l,u_{j+1}^l])$-function for each $j \in \{-l, \ldots, l-1 \}$. Also let $f$ be continuous on $[-R,R]$, $C$ a constant such that \begin{equation*} \sup\limits_{j \in \{-l,...,l-1 \}} \Vert f'' \Vert_{\infty , [u_{j}, u_{j+1} ]} =C< \infty, \end{equation*} and $x \in (u_{j}^{l} , u^{l}_{j+1}]$ and $y \in (u_{j+k}^{l} , u^{l}_{j+k+1}]$. Then, \begin{equation}\label{conj} \vert f(y)-f(x)-f'(x+)(y-x) \vert \leq {C \over 2} (y-x)^2 + \sum_{p=1}^{k} \Delta_{j+p}f' \cdot (y-u_{j+p}), \end{equation} where \begin{equation*} \Delta_{j+p}f' = \vert f'(u_{j+p}+)-f'(u_{j+p}-) \vert. \end{equation*} \end{lem} \begin{proof} We will prove that (\ref{conj}) holds via induction on $k$. We start our induction with $k=1$; that is, we consider two consecutive intervals. Let $x \in (u_{j},u_{j+1}]$ and $y \in (u_{j+1},u_{j+2}]$. Then, \begin{eqnarray*}\lefteqn{ \vert f(y)-f(x)-f'(x+)(y-x) \vert} \\ & \leq & \vert f(y)-f(u_{j+1}) - f'(u_{j+1}+)(y-u_{j+1})\vert \\ &+& \vert f(u_{j+1}) + f'(u_{j+1}+)(y-u_{j+1}) - f(x) - f'(x+)(y-x) \vert \\ & \leq & {C \over 2} (y- u_{j+1})^2 + \vert f(u_{j+1})-f(x) - f'(x+)(u_{j+1}-x)\vert \\ &+& \vert f'(u_{j+1}+)-f'(x+) \vert (y-u_{j+1}) \\ & \leq & {C \over 2} (y- u_{j+1})^2 + {C \over 2} (u_{j+1}-x)^2 \\ &+& \left[ \vert f'(u_{j+1}+)-f'(u_{j+1}-) \vert + \vert f'(u_{j+1}-)-f'(x+) \vert \right] (y-u_{j+1})\\ &=& {C \over 2} (y- u_{j+1})^2 + {C \over 2} (u_{j+1}-x)^2 + (y-u_{j+1}) (\Delta_{j+1}f') \\ &+& C(u_{j+1} -x )(y-u_{j+1})\\ &=& {C \over 2} (y- x)^2 + (\Delta_{j+1}f')(y-u_{j+1}). \end{eqnarray*} Thus, (\ref{conj}) holds for $k=1$.
It remains to prove that inequality (\ref{conj}) holds for $k+1$, assuming that it holds up to $k$. To do so, choose $x \in [u_{j},u_{j+1}]$ and $y \in [u_{j+k+1},u_{j+k+2}]$. Then, \begin{eqnarray*}\lefteqn{ \vert f(y)-f(x)-f'(x+)(y-x) \vert } \\ & \leq & \vert f(y)-f(u_{j+k+1}) - f'(u_{j+k+1}+)(y-u_{j+k+1})\vert \\ &+& \vert f(u_{j+k+1}) + f'(u_{j+k+1}+)(y-u_{j+k+1}) - f(x) - f'(x+)(y-x) \vert \\ & \leq & {C \over 2} (y- u_{j+k+1})^2 + \vert f(u_{j+k+1})-f(x) - f'(x+)(u_{j+k+1}-x)\vert \\ &+& \vert f'(u_{j+k+1}+)-f'(x+) \vert (y-u_{j+k+1}). \end{eqnarray*} Hence, our induction hypothesis implies \begin{eqnarray*}\lefteqn{ \vert f(y)-f(x)-f'(x+)(y-x) \vert} \\ & \leq & {C \over 2} (y- u_{j+k+1})^2 + {C \over 2} (u_{j+k+1}-x)^2 + \sum_{p=1}^{k} (u_{j+k+1} -u_{j+p}) \Delta_{j+p}f' \\ & + & \left[ \Delta_{j+k+1}f' + C (u_{j+k+1} - u_{j+k}) + \left\vert f'(u_{j+k}+) - f'(x+) \right\vert \right] (y-u_{j+k+1}) \\ & \leq & {C \over 2} (y- u_{j+k+1})^2 + {C \over 2} (u_{j+k+1}-x)^2 + \sum_{p=1}^{k} (u_{j+k+1} -u_{j+p}) \Delta_{j+p}f' \\ &+& \left[ \Delta_{j+k+1}f' + C (u_{j+k+1} - u_{j+k}) \phantom{ \sum_{p=1}^{k} } \right. \\ &+& \left. \sum_{p=1}^{k} \Delta_{j+p} f' + C(u_{j+k} - x) \right] (y-u_{j+k+1}) \\ & \leq & \frac{C}{2}(y-x)^{2} + \sum_{p=1}^{k} (y -u_{j+p}) \Delta_{j+p}f' + (y-u_{j+k+1}) \Delta_{j+k+1}f'. \end{eqnarray*} Therefore, (\ref{conj}) is satisfied for $k+1$ and the proof is complete. \end{proof} {\bf Acknowledgments} We would like to thank the anonymous referee for useful comments and suggestions that improved the presentation of the paper. Part of this work was done while Jorge A. Le\'on was visiting CIMFAV, Chile, and H\'ector Araya and Soledad Torres were visiting Cinvestav-IPN, Mexico. The authors thank both institutions for their hospitality and financial support. Jorge A. Le\'on was partially supported by the CONACYT grant 220303.
Soledad Torres was partially supported by Proyecto ECOS C15E05; Fondecyt 1171335, REDES 150038 and Mathamsud 16MATH03. H\'ector Araya was partially supported by Beca CONICYT-PCHA/Doctorado Nacional/2016-21160138; Proyecto ECOS C15E05, REDES 150038 and Mathamsud 16MATH03. \end{document}
\begin{document} \title{Development of an approximate method for quantum optical models and their pseudo-Hermiticity} \date{\today} \author{Ramazan Ko\c{c}} \email{koc@gantep.edu.tr} \affiliation{Department of Physics, Faculty of Engineering University of Gaziantep, 27310 Gaziantep, Turkey} \begin{abstract} An approximate method is suggested to obtain analytical expressions for the eigenvalues and eigenfunctions of some quantum optical models. The method is based on a Lie-type transformation of the Hamiltonians. In a particular case it is demonstrated that the $E\times \varepsilon $ Jahn-Teller Hamiltonian can easily be solved within the framework of the suggested approximation. The method presented here is conceptually simple and can easily be extended to other quantum optical models. We also show that for a purely imaginary coupling the $E\times \varepsilon $ Hamiltonian becomes non-Hermitian but $P\sigma _{0}$-symmetric. A possible generalization of this approach is outlined. \end{abstract} \pacs{03.65.Fd, 42.50.Ap} \keywords{Algebraic Methods, Quantum Optical Models, Pseudo-Hermiticity} \maketitle \section{Introduction} It is well known that the rotating wave approximation (RWA) is a useful method in the determination of the eigenvalues and associated eigenfunctions of various quantum optical Hamiltonians. The approximation gives accurate results when the frequency associated with the free evolution of the system is considerably larger than the transition frequencies induced by the interaction between the subsystems or with an external source. In quantum physics the application of the RWA usually leads to symmetry breaking: the representation space of the whole system is then divided into invariant subspaces, which greatly reduces the mathematical complexity of the problem and usually provides the exact solution of the Hamiltonian.
The simplest model which describes a two-level atom interacting with a single mode cavity field is the Jaynes-Cummings (JC) model \cite{jaynes}. Considerable attention has been devoted to the interaction of a radiation field with atoms since the paper of Dicke \cite{dicke}. Such a system is commonly termed the Dicke model. In spite of its simplicity, the whole spectrum of the Dicke Hamiltonian cannot be obtained exactly, and the model has usually been treated in the framework of the RWA. Besides its solution within the RWA, some papers attempt to go beyond the RWA \cite{tur}. The continual integration methods are based on variational principles. The perturbative approach \cite{zaheer,zeng} leads to more complicated mathematical treatments, and the theory converges only for certain relationships between the parameters of the Hamiltonian. In a more recent study, Klimov and his co-workers \cite{klimov} have developed a general perturbative approach to quantum optical models beyond the RWA, based on a Lie-type transformation. The Jahn-Teller (JT) interaction \cite{jahn} is one of the most fascinating phenomena in modern physics and chemistry, providing a general approach to understanding the properties of molecules and crystals and their origins. This phenomenon has inspired some of the most important recent scientific discoveries, such as the concept of high temperature superconductivity. The JT interaction is an example of electron-phonon coupling. Therefore it seems that the RWA can be applied to solve JT problems. Most of the JT Hamiltonians are more complicated than the Dicke Hamiltonian. At present, a few of them (i.e. $E\otimes \beta ,\ E\otimes \epsilon $) have been analyzed in the framework of quasi-exact solvability \cite{koc1,koc2} or isolated exact solvability \cite{judd,longuet,reik,loorits,klenner,kus}, both providing a finite number of exact eigenvalues and eigenfunctions in closed form.
In this paper we devise a novel method for solving JT Hamiltonians, as well as other quantum optical Hamiltonians, in the framework of the RWA. It will be shown that the eigenvalues and the associated eigenfunctions can be obtained in closed form when the coupling constant is smaller than the natural frequency of the oscillator. Part of the motivation for the method described here is provided by the connection between JT Hamiltonians and the Dicke model. Here we concentrate our attention on the solution of the $E\otimes \epsilon $ JT Hamiltonian. Its solution has been treated previously by many authors \cite{judd,longuet,reik,kulak,lo,szopa}. We develop a new approximation method which is based on a similarity transformation. The method introduced here is analogous to the RWA that is usually used to solve the Dicke Hamiltonian. An interesting and somewhat simpler form of the JT Hamiltonian is obtained by the RWA. Another purpose of this paper is to show that for some purely imaginary couplings the $E\otimes \epsilon $ JT Hamiltonian becomes non-Hermitian, but the low-lying part of its spectrum is real. It will be shown that the non-Hermitian Hamiltonian is not $PT$-invariant \cite{bender1,bender2,znojil,bagchi,ahmed}, but it is pseudo-Hermitian \cite{must1,must2,must3,bhabani,piju}. In the following section, we demonstrate our procedure on the $E\otimes \epsilon $ JT Hamiltonian: we present a transformation procedure and obtain an approximate form of the $E\otimes \epsilon $ JT Hamiltonian. We show that the Hamiltonian can be transformed into the form of a Dicke-type Hamiltonian. We also obtain explicit expressions for the eigenstates and eigenvalues of the JT Hamiltonian. In section 3, we discuss the pseudo-Hermiticity of the Hamiltonian. Finally, we summarize our results.
\section{Method and Summary of the Previous Results} The well-known form of the $E\otimes \varepsilon $ JT Hamiltonian describing a two-level fermionic subsystem coupled to two boson modes was obtained by Reik \cite{reik} and is given by \begin{equation} H=\omega \left( a_{1}^{+}a_{1}+a_{2}^{+}a_{2}+1\right) +\omega _{0}\sigma _{0}+\kappa \lbrack (a_{1}+a_{2}^{+})\sigma _{+}+(a_{1}^{+}+a_{2})\sigma _{-}], \label{1} \end{equation} where $\omega _{0}$ is the level separation, $\omega $ is the frequency of the oscillator and $\kappa $ is the coupling strength. The Pauli matrices $ \sigma _{0,\pm }$ are given by \begin{equation} \sigma _{+}=\left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right] ,\quad \sigma _{-}=\left[ \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right] ,\;\sigma _{0}=\left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right] . \label{2} \end{equation} The annihilation and creation operators, $a_{i}$ and $a_{i}^{+}$, satisfy the usual commutation relations, \begin{equation} \lbrack a_{i}^{+},a_{j}^{+}]=[a_{i},a_{j}]=0,\quad \lbrack a_{i},a_{j}^{+}]=\delta _{ij}. \label{eq:4} \end{equation} The Hamiltonian (\ref{1}) can be solved in the framework of quasi-exactly solvable problems \cite{koc2} or by using a numerical diagonalization method \cite{tur}.
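The operator algebra above, and hence the Hamiltonian (\ref{1}) itself, can be represented numerically on a truncated Fock space; this is the usual starting point for the numerical diagonalization just mentioned. The sketch below is our own illustration (the helper name and the truncation size are assumptions, not from the paper): it builds truncated matrices for $a$ and $a^{+}$ and checks the commutation relations of (\ref{eq:4}) and (\ref{2}).

```python
import numpy as np

def annihilation(N):
    """Truncated boson annihilation operator on an N-dimensional Fock space:
    a|n> = sqrt(n)|n-1>, i.e. a[n-1, n] = sqrt(n)."""
    return np.diag(np.sqrt(np.arange(1, N)), k=1)

N = 20
a = annihilation(N)
ad = a.conj().T  # creation operator a^+

comm = a @ ad - ad @ a  # [a, a^+]
# Truncation spoils only the last diagonal entry; elsewhere [a, a^+] = 1.
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))

# Pauli matrices sigma_+, sigma_-, sigma_0 as in Eq. (2).
sp = np.array([[0, 1], [0, 0]])
sm = np.array([[0, 0], [1, 0]])
s0 = np.array([[1, 0], [0, -1]])
assert np.allclose(sp @ sm - sm @ sp, s0)  # [sigma_+, sigma_-] = sigma_0
```

Tensor products of such truncated operators (e.g. via `np.kron`) give a finite matrix for $H$ whose low-lying spectrum converges as the truncation grows.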
In order to obtain the rotating-wave-approximated form of the $ E\otimes \varepsilon $ Hamiltonian, we use a similarity transformation by introducing the operator \begin{equation} T=\frac{\kappa }{\omega +\omega _{0}}\left( \sigma _{+}a_{2}^{+}-\sigma _{-}a_{2}\right) +\frac{\kappa }{\omega -\omega _{0}}\left( \sigma _{-}a_{2}^{+}-\sigma _{+}a_{2}\right) , \label{3} \end{equation} and imposing the condition $\left| \omega \pm \omega _{0}\right| \gg \kappa $, which usually holds in the weak-coupling regime. The transformation of the Hamiltonian (\ref{1}) then yields \begin{eqnarray} \widetilde{H} &=&e^{T}He^{-T}\approx \omega \left( a_{1}^{+}a_{1}+a_{2}^{+}a_{2}+1\right) +\omega _{0}\sigma _{0}+\kappa \lbrack (a_{1}+a_{2})\sigma _{+}+(a_{1}^{+}+a_{2}^{+})\sigma _{-}]+ \notag \\ &&\left[ \frac{\kappa ^{2}}{\omega +\omega _{0}}\left( a_{1}^{+}a_{2}^{+}+a_{1}a_{2}\right) +\frac{\kappa ^{2}}{\omega -\omega _{0}} \left( a_{1}^{+}a_{2}+a_{1}a_{2}^{+}\right) \right] \sigma _{0}+ \notag \\ &&\frac{\omega \kappa ^{2}}{\omega ^{2}-\omega _{0}^{2}}\left( a_{2}^{+2}+a_{2}^{2}+2a_{2}^{+}a_{2}\right) \sigma _{0}+ \label{4} \\ &&\frac{\kappa ^{2}\sigma _{+}\sigma _{-}}{\omega -\omega _{0}}-\frac{\kappa ^{2}\sigma _{-}\sigma _{+}}{\omega +\omega _{0}}+O\left( \frac{\kappa ^{3}}{ \omega ^{2}-\omega _{0}^{2}}\right) . \notag \end{eqnarray} Since $\frac{\kappa ^{2}}{\omega \pm \omega _{0}}\ll 1$, the terms of order $\kappa ^{2}$ and higher can be neglected, which gives \begin{equation} \widetilde{H}\approx \omega \left( a_{1}^{+}a_{1}+a_{2}^{+}a_{2}+1\right) +\omega _{0}\sigma _{0}+\kappa \lbrack (a_{1}+a_{2})\sigma _{+}+(a_{1}^{+}+a_{2}^{+})\sigma _{-}]. \label{5} \end{equation} This Hamiltonian is analytically solvable due to the neglect of the counter-rotating terms; this is the so-called RWA. Now, we turn our attention to the solution of the Hamiltonian (\ref{5}).
The rotation of the bosons generated by the operator \begin{equation} U=\exp \left( \frac{\pi }{4}(a_{1}^{+}a_{2}-a_{2}^{+}a_{1})\right) \label{5a} \end{equation} provides the expressions \begin{subequations} \begin{eqnarray} &&U(a_{1}+a_{2})U^{-1}=\sqrt{2}a_{1},\quad U(a_{1}^{+}+a_{2}^{+})U^{-1}= \sqrt{2}a_{1}^{+} \notag \\ &&U(a_{1}^{+}a_{1}+a_{2}^{+}a_{2})U^{-1}=a_{1}^{+}a_{1}+a_{2}^{+}a_{2} \label{5b} \end{eqnarray} \end{subequations} Under $U$, the Hamiltonian becomes \begin{equation} \widetilde{H}\approx \omega \left( a_{1}^{+}a_{1}+a_{2}^{+}a_{2}+1\right) +\omega _{0}\sigma _{0}+\sqrt{2}\kappa \lbrack a_{1}\sigma _{+}+a_{1}^{+}\sigma _{-}]. \label{5x} \end{equation} The resultant Hamiltonian can easily be solved, because its matrix decomposes into an infinite set of independent $2\times 2$ blocks on the subspaces $\left\{ \left| \uparrow ,n_{1}\right\rangle \left| n_{2}\right\rangle ,\left| \downarrow ,n_{1}+1\right\rangle \left| n_{2}\right\rangle \right\} $, where $n_{1}$ and $n_{2}$ are the numbers of photons. The eigenvalue problem can be written as \begin{table}[t] \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{Ground state} & \multicolumn{2}{|c|}{First excited state} \\ \hline $\kappa ^{2}$ & $E_{rwa}$ & $E_{exact}$ & $E_{rwa}$ & $E_{exact}$ \\ \hline $0.1$ & 0.90455 & 0.90442 & 1.85982 & 1.82286 \\ \hline $0.2$ & 0.81678 & 0.81595 & 1.73508 & 1.67515 \\ \hline $0.3$ & 0.73508 & 0.73277 & 1.62159 & 1.54472 \\ \hline $0.4$ & 0.65835 & 0.65371 & 1.51676 & 1.36373 \\ \hline $0.5$ & 0.58578 & 0.57798 & 1.41886 & 1.31592 \\ \hline $0.6$ & 0.51676 & 0.50498 & 1.32667 & 1.21248 \\ \hline $0.7$ & 0.45080 & 0.43429 & 1.23931 & 1.11438 \\ \hline $0.8$ & 0.38754 & 0.36557 & 1.15609 & 1.02070 \\ \hline $0.9$ & 0.32667 & 0.29856 & 1.07646 & 0.93072 \\ \hline \end{tabular} \caption{Ground-state and first excited-state energies of the $E\otimes \protect\varepsilon $ JT Hamiltonian.} \label{tab:b} \end{table}
\begin{equation} \widetilde{H}\left| \psi \right\rangle =E\left| \psi \right\rangle \label{6} \end{equation} where $\left| \psi \right\rangle $ is the two-component eigenstate \begin{equation} \left| \psi \right\rangle =\left( \begin{array}{l} c_{1}\left| n_{1}\right\rangle \left| n_{2}\right\rangle \\ c_{2}\left| n_{1}+1\right\rangle \left| n_{2}\right\rangle \end{array} \right) , \label{7} \end{equation} where $c_{1}$ and $c_{2}$ are normalization constants. Action of $\widetilde{H}$ on $\left| \psi \right\rangle $ yields the following expressions \begin{subequations} \begin{align} \left( c_{1}\left( \omega \left( n_{1}+n_{2}+1\right) +\omega _{0}\right) +c_{2}\sqrt{2}\kappa \sqrt{n_{1}+1}\right) \left| n_{1}\right\rangle \left| n_{2}\right\rangle & =c_{1}E\left| n_{1}\right\rangle \left| n_{2}\right\rangle \label{8a} \\ \left( c_{2}\left( \omega \left( n_{1}+n_{2}+2\right) -\omega _{0}\right) +c_{1}\sqrt{2}\kappa \sqrt{n_{1}+1}\right) \left| n_{1}+1\right\rangle \left| n_{2}\right\rangle & =c_{2}E\left| n_{1}+1\right\rangle \left| n_{2}\right\rangle . \label{8b} \end{align} \end{subequations} Eliminating $c_{1}$ and $c_{2}$ between (\ref{8a}) and (\ref{8b}) and solving the resultant equation for $E$, we obtain \begin{equation} E=\left( j+1\right) \omega \pm \frac{1}{2}\sqrt{8\kappa ^{2}(n+1)+\left( \omega -2\omega _{0}\right) ^{2}}, \label{9} \end{equation} where $j=n_{1}+n_{2}$ is the total number of bosons and $n=0,1,2,\cdots ,2j$. The eigenstates can easily be written using boson operators acting on the vacuum state $\left| 0\right\rangle $: \begin{equation} \left| \psi \right\rangle =\left[ c_{1}a_{2}^{+(j-n)}a_{1}^{+n}\left| 0\right\rangle ,c_{2}a_{2}^{+(j-n)}a_{1}^{+(n+1)}\left| 0\right\rangle \right] ^{T}. \label{10} \end{equation} We conclude that in the weak-coupling limit the oscillators do not couple to each other and each of them oscillates at its own frequency.
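As a sanity check on (\ref{9}), one can diagonalize the $2\times 2$ block read off from (\ref{8a})--(\ref{8b}) and compare the level splitting with the square-root term. The sketch below is our own illustration (the helper name and parameter values are arbitrary assumptions, not from the paper):

```python
import numpy as np

def rwa_block(omega, omega0, kappa, n1, n2):
    """2x2 block of the RWA Hamiltonian on {|up, n1, n2>, |down, n1+1, n2>},
    with matrix elements read off from Eqs. (8a)-(8b)."""
    off = np.sqrt(2) * kappa * np.sqrt(n1 + 1)
    return np.array([[omega * (n1 + n2 + 1) + omega0, off],
                     [off, omega * (n1 + n2 + 2) - omega0]])

omega, omega0, kappa, n1, n2 = 1.0, 0.3, 0.2, 2, 1
E = np.linalg.eigvalsh(rwa_block(omega, omega0, kappa, n1, n2))  # ascending

# The gap between the two levels equals the square-root term in Eq. (9).
gap = np.sqrt(8 * kappa**2 * (n1 + 1) + (omega - 2 * omega0)**2)
assert np.isclose(E[1] - E[0], gap)
```

The splitting is independent of which basis the block is written in, which makes it a robust quantity to compare against the closed form.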
We have proven that when the interaction between the $E$ ion and the $\varepsilon $-modes is weak, the $E\otimes \varepsilon $ JT Hamiltonian can be reduced to the JC model. Our formalism provides a solution of the problem which allows us to discuss the JT effects in the Dicke model. The accuracy of the approximate eigenvalues can be checked by means of the (quasi) exact solution of the $E\otimes \varepsilon $ JT Hamiltonian. The material parameters are chosen to be $\omega =1$ and $\omega _{0}=0$. The results are tabulated in Table~\ref{tab:b}. The results of our study show that the eigenvalues and eigenstates of the $E\otimes \varepsilon $ JT Hamiltonian can be approximately described when the frequency $\omega $ of the oscillator is larger than the coupling constant. \section{Non-Hermitian interaction} We now show that for a purely imaginary coupling constant $\kappa $, the low-lying part of the spectrum of the $E\otimes \varepsilon $ JT Hamiltonian is real, although the Hamiltonian is non-Hermitian. Let us consider the Hamiltonian (\ref{5x}) with the imaginary coupling $\kappa =i\gamma $: \begin{equation} h=\omega \left( a_{1}^{+}a_{1}+a_{2}^{+}a_{2}+1\right) +\omega _{0}\sigma _{0}+i\sqrt{2}\gamma \lbrack a_{1}\sigma _{+}+a_{1}^{+}\sigma _{-}]. \label{11} \end{equation} This Hamiltonian is not Hermitian, since \begin{equation} h^{\dagger }=\omega \left( a_{1}^{+}a_{1}+a_{2}^{+}a_{2}+1\right) +\omega _{0}\sigma _{0}-i\sqrt{2}\gamma \lbrack a_{1}\sigma _{+}+a_{1}^{+}\sigma _{-}]\neq h. \label{12} \end{equation} Under the parity transformation, the Pauli matrices remain invariant but both the creation and annihilation operators change sign. The time reversal operator for this Hamiltonian is $T=-i\sigma _{y}K$, where $K$ is the complex conjugation operator. The time reversal operator changes the sign of the Pauli matrices and of the boson operators.
It is easy to see that the Hamiltonian (\ref{11}) is not $PT$-symmetric: \begin{equation} (PT)h(PT)^{-1}=\omega \left( a_{1}^{+}a_{1}+a_{2}^{+}a_{2}+1\right) -\omega _{0}\sigma _{0}+i\sqrt{2}\gamma \lbrack a_{1}\sigma _{+}+a_{1}^{+}\sigma _{-}]\neq h. \label{13} \end{equation} Although the Hamiltonian is not $PT$-symmetric, it nevertheless yields a real (low-lying) spectrum. Mostafazadeh \cite{must1,must2,must3} has shown that the reality of the spectrum of a non-Hermitian Hamiltonian is due to the pseudo-Hermiticity of the Hamiltonian. A Hamiltonian is called $\eta $-pseudo-Hermitian if it satisfies the relation \begin{equation} \eta h\eta ^{-1}=h^{\dagger }, \end{equation} where $\eta $ is a linear Hermitian operator. The Hamiltonian $h$ and its adjoint $h^{\dagger }$ can be related to each other by the operator $\sigma _{0}$, using the relation $\sigma _{0}\sigma _{\pm }\sigma _{0}^{-1}=-\sigma _{\pm }$: \begin{equation} \sigma _{0}h\sigma _{0}^{-1}=h^{\dagger }. \end{equation} Thus the Hamiltonian (\ref{11}) is $\sigma _{0}$-pseudo-Hermitian. Our Hamiltonian is also pseudo-Hermitian with respect to the parity operator. As shown in \cite{must1}, if a Hamiltonian is pseudo-Hermitian with respect to two different operators $\eta _{1}$ and $\eta _{2}$, then the system is symmetric under the transformation generated by $\eta _{1}\eta _{2}^{-1}$. Therefore our Hamiltonian is invariant under the symmetry generated by the combined operator $P\sigma _{0}$: \begin{equation} \left[ h,P\sigma _{0}\right] =0. \end{equation} \section{Conclusion} The aim of this paper was to illustrate how the $E\otimes \varepsilon $ JT Hamiltonian can be solved by developing a transformation procedure. We have found an approximate form of the $E\otimes \varepsilon $ JT Hamiltonian in the framework of the RWA. The resultant Hamiltonian can be solved analytically and its eigenvalues can be obtained in closed form.
We have shown that in the weak-coupling limit the JT models may be recognized as the Dicke model, and that when the coupling constant is imaginary the Hamiltonian is non-Hermitian but $P\sigma _{0}$-symmetric. We also hope to extend the method to other JT and quantum optical systems. \end{document}
\begin{document} \title{Clustering with Missing Features: a Penalized Dissimilarity Measure based Approach} \author{Shounak Datta \and Supritam Bhattacharjee \and Swagatam Das* \thanks{*Corresponding Author} } \institute{S. Datta \and S. Das \at Electronics and Communication Sciences Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata-700 108, India. \\ Tel.: +91-33-2575-2323\\ \email{swagatam.das@isical.ac.in} \and S. Bhattacharjee \at formerly at Instrumentation \& Electronics Engineering Department, Jadavpur University Salt Lake Campus, Salt Lake City, Block-LB, Plot No. 8, Sector - III, Kolkata - 700 098, India. \\ } \date{Received: date / Accepted: date} \maketitle \begin{abstract} Many real-world clustering problems are plagued by incomplete data characterized by missing or absent features for some or all of the data instances. Traditional clustering methods cannot be directly applied to such data without preprocessing by imputation or marginalization techniques. In this article, we overcome this drawback by utilizing a penalized dissimilarity measure which we refer to as the Feature Weighted Penalty based Dissimilarity (FWPD). Using the FWPD measure, we modify the traditional k-means clustering algorithm and the standard hierarchical agglomerative clustering algorithms so as to make them directly applicable to datasets with missing features. We present time complexity analyses for these new techniques and also undertake a detailed theoretical analysis showing that the new FWPD based k-means algorithm converges to a local optimum within a finite number of iterations. We also present a detailed method for simulating random as well as feature dependent missingness. We report extensive experiments on various benchmark datasets for different types of missingness showing that the proposed clustering techniques generally achieve better results than some of the most well-known imputation methods which are commonly used to handle such incomplete data.
We append a possible extension of the proposed dissimilarity measure to the case of absent features (where the unobserved features are known to be undefined). \keywords{Missing Features \and Penalized Dissimilarity Measure \and k-means \and Hierarchical Agglomerative Clustering \and Absent Features} \end{abstract} \section{Introduction}\label{sec:intro} In data analytics, clustering is a fundamental technique concerned with partitioning a given dataset into useful groups (called \emph{clusters}) according to the relative similarity among the data instances. Clustering algorithms attempt to partition a set of data instances (characterized by some \emph{features}) into different clusters such that the member instances of any given cluster are akin to each other and are different from the members of the other clusters. The greater the similarity within a group and the dissimilarity between groups, the better is the clustering obtained by a suitable algorithm. \par Clustering techniques are of extensive use and are hence being constantly investigated in statistics, machine learning, and pattern recognition. Clustering algorithms find applications in various fields such as economics, marketing, electronic design, space research, etc. For example, clustering has been used to group related documents for web browsing \cite{broder1997syntactic,haveliwala2000scalable}, by banks to cluster the previous transactions of clients to identify suspicious (possibly fraudulent) behaviour \cite{sabau2012survey}, for formulating effective marketing strategies by clustering customers with similar behaviour \cite{chaturvedi1997feature}, in earthquake studies for identifying dangerous zones based on previous epicentre locations \cite{weatherill2009delineation,shelly2009precise,lei2010identify}, and so on. However, when we analyze such real-world data, we may encounter incomplete data where some features of some of the data instances are missing.
For example, web documents may have some expired hyper-links. Such missingness may be due to a variety of reasons such as data input errors, inaccurate measurement, equipment malfunction or limitations, and measurement noise or data corruption. This is known as unstructured missingness \cite{chan1972oldest,rubin1976inference}. Alternatively, not all the features may be defined for all the data instances in the dataset. This is termed structural missingness or absence of features \cite{chechik2008absent}. For example, credit-card details may not be defined for non-credit card clients of a bank. \par Missing features have always been a challenge for researchers because traditional learning methods (which assume all data instances to be \emph{fully observed}, i.e. all the features are observed) cannot be directly applied to such incomplete data without suitable preprocessing. When the rate of missingness is low, the data instances with missing values may be ignored. This approach is known as \emph{marginalization}. Marginalization cannot be applied to data having a sizable number of missing values, as it may lead to the loss of a considerable amount of information. Therefore, sophisticated methods are required to fill in the vacancies in the data, so that traditional learning methods can be applied subsequently. This approach of filling in the missing values is called \emph{imputation}. However, inferences drawn from data having a large fraction of missing values may be severely warped, despite the use of such sophisticated imputation methods \cite{acuna2004treatment}. \subsection{Literature}\label{sec:lit} The initial models for feature missingness are due to Rubin and Little \citeyear{rubin1976inference,little1987statistical}. They proposed a three-fold classification of missing data mechanisms, viz. Missing Completely At Random (MCAR), Missing At Random (MAR), and Missing Not At Random (MNAR).
MCAR refers to the case where missingness is entirely haphazard, i.e. the likelihood of a feature being unobserved for a certain data instance depends neither on the observed nor on the unobserved characteristics of the instance. For example, in an annual income survey, a citizen is unable to participate, due to unrelated reasons such as traffic or schedule problems. MAR refers to cases where the missingness is conditional on the observed features of an instance, but is independent of the unobserved features. Suppose college-goers are less likely to report their income than office-goers. But whether a college-goer will report his or her income is independent of the actual income. MNAR is characterized by the dependence of the missingness on the unobserved features. For example, people who earn less are less likely to report their incomes in the annual income survey. Datta et al. \citeyear{datta2016fwpd} further classified MNAR into two sub-types, namely MNAR-I, where the missingness only depends on the unobserved features, and MNAR-II, where the missingness is governed by both observed as well as unobserved features. Schafer \& Graham \citeyear{schafer2002missing} and Zhang et al. \citeyear{zhang2012software} have observed that MCAR is a special case of MAR and that MNAR can also be converted to MAR by appending a sufficient number of additional features. Therefore, most learning techniques are based on the validity of the MAR assumption. \par A lot of research on the problem of learning with missing or absent features has been conducted over the past few decades, mostly focussing on imputation methods. Several works such as \cite{little1987statistical} and \cite{schafer1997analysis} provide elaborate theories and analyses of missing data.
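The three mechanisms can be made concrete with a small simulation (our own sketch; the thresholds and feature roles are illustrative assumptions, not a protocol from the literature): under MCAR the missingness mask is independent of the data, under MAR it depends only on an always-observed feature, and under MNAR it depends on the very value being hidden, which biases the observed sample.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # column 0: always observed; column 1: may go missing

# MCAR: missingness of column 1 is independent of everything.
mcar = rng.random(1000) < 0.3

# MAR: missingness of column 1 depends only on the observed column 0.
mar = X[:, 0] > 0.5

# MNAR: missingness of column 1 depends on the (hidden) value itself.
mnar = X[:, 1] > 0.5

# Under MNAR the observed part of column 1 is biased low; under MCAR it is not.
assert X[~mnar, 1].mean() < X[:, 1].mean()
assert abs(X[~mcar, 1].mean() - X[:, 1].mean()) < 0.15
```

The same three masks, applied to the data before clustering, are what distinguish the missingness regimes compared in experiments of this kind.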
Common imputation methods \cite{donders2006review} involve filling the missing features of data instances with zeros (Zero Imputation (ZI)), or the means of the corresponding features over the entire dataset (Mean Imputation (MI)). Class Mean Imputation or Concept Mean Imputation (CMI) is a slight modification of MI that involves filling the missing features with the average of all observations having the same label as the instance being filled. Yet another common imputation method is k-Nearest Neighbor Imputation (kNNI) \cite{dixon1979pattern}, where the missing features of a data instance are filled in by the means of the corresponding features over its k-Nearest Neighbors (kNN) in the observed subspace. Grzymala-Busse \& Hu \citeyear{grzymala2001comparison} suggested various novel imputation schemes such as treating missing attribute values as special values. Rubin \citeyear{rubin1987multiple} proposed a technique called Multiple Imputation (MtI) to model the uncertainty inherent in imputation. In MtI, the missing values are imputed by a typically small (e.g. 5-10) number of simulated versions, depending on the percentage of missing data \cite{chen2013nomore,horton2001multiple}. Some more sophisticated imputation techniques have been developed, especially by the bioinformatics community, to impute the missing values by exploiting the correlations between data. A prominent example is the Singular Value Decomposition based Imputation (SVDI) technique \cite{troyanskaya2001missing} which performs regression based estimation of the missing values using the k most significant eigenvectors of the dataset. Other examples include Least Squares Imputation (LSI) \cite{bo2004lsimpute}, Non-Negative LSI (NNLSI) and Collateral Missing Value Estimation (CMVE) \cite{sehgal2005collateral}. Model-based methods are related to yet distinct from imputation techniques.
These methods attempt to model the distributions for the missing values instead of filling them in \cite{dempster1983incomplete,ahmad1993some,wang2002empirical1,wang2002empirical2}. \par However, most of these techniques assume the pattern of missingness to be MCAR or MAR because this allows the use of simpler models of missingness \cite{heitjan1996distinguishing}. Such simple models are not likely to perform well in the case of MNAR, as the pattern of missingness itself holds information. Hence, other methods have to be developed to tackle incomplete data due to MNAR \cite{marlin2008missing}. Moreover, imputation may often lead to the introduction of noise and uncertainty in the data \cite{dempster1983incomplete,little1987statistical,barcelo2008impact,myrtveit2001analyzing}. \par In light of the observations made in the preceding paragraph, some learning methods avoid the inexact methods of imputation (as well as marginalization) altogether, while dealing with missingness. A common paradigm is random subspace learning, where an ensemble of learners is trained on projections of the data in random subspaces and an inference is drawn based on the consensus among the ensemble \cite{krause2003ensemble,juszczak2004combining,nanni2012classifier}. Chechik et al. \citeyear{chechik2008absent} used the geometrical insight of max-margin classification to formulate an objective function which was optimized to directly classify the incomplete data. This was extended to the max-margin regression case for software effort prediction with absent features in \cite{zhang2012software}. Wagstaff et al. \citeyear{wagstaff2004clustering,wagstaff2005making} suggested a k-means algorithm with Soft Constraints (KSC) where soft constraints determined by fully observed objects are introduced to facilitate the grouping of instances with missing features.
Himmelspach \& Conrad \citeyear{himmelspach2010clustering} provided a good review of partitional clustering techniques for incomplete datasets, which mentions some other techniques that do not make use of imputation. \par The idea to modify the distance between the data instances to directly tackle missingness (without having to resort to imputation) was first put forth by Dixon \citeyear{dixon1979pattern}. The Partial Distance Strategy (PDS) proposed in \cite{dixon1979pattern} scales up the \emph{observed distance}, i.e. the distance between two data instances in their \emph{common observed subspace} (the subspace consisting of the observed features common to both data instances) by the ratio of the total number of features (observed as well as unobserved) and the number of common observed features between them to obtain an estimate of their distance in the fully observed space. Hathaway \& Bezdek \citeyear{hathaway2001fcm} used the PDS to extend the Fuzzy C-Means (FCM) clustering algorithm to cases with missing features. Furthermore, Mill{\'a}n-Giraldo et al. \citeyear{millan2010dissimilarity} and Porro-Mu{\~n}oz et al. \citeyear{porro2013missing} generalized the idea of the PDS by proposing to scale the observed distance by factors other than the fraction of observed features. However, neither the PDS nor its extensions can always provide a good estimate of the actual distance as the observed distance between two instances may be unrelated to the distance between them in the unobserved subspace. \subsection{Motivation}\label{sec:motiv} As observed earlier, one possible way to adapt supervised as well as unsupervised learning methods to problems with missingness is to modify the \emph{distance} or \emph{dissimilarity measure} underlying the learning method. The idea is that the modified dissimilarity measure should use the common observed features to provide approximations of the distances between the data instances if they were to be fully observed. 
PDS is one such measure. Such approaches neither require marginalization nor imputation and are likely to yield better results than either of the two. For example, let $X_{full}=\{\mathbf{x}_1=(1,2),\mathbf{x}_2=(1.8,1),\mathbf{x}_3=(2,2.5)\}$ be a dataset consisting of three points in $\mathbb{R}^2$. Then, we have $d_{E}(\mathbf{x}_1,\mathbf{x}_2)=1.28$ and $d_{E}(\mathbf{x}_1,\mathbf{x}_3)=1.12$, $d_{E}(\mathbf{x}_i,\mathbf{x}_j)$ being the Euclidean distance between any two fully observed points $\mathbf{x}_i$ and $\mathbf{x}_j$ in $X_{full}$. Suppose that the first coordinate of the point $(1,2)$ is unobserved, resulting in the incomplete dataset $X=\{\widetilde{\mathbf{x}}_1=(*,2),\mathbf{x}_2=(1.8,1),\mathbf{x}_3=(2,2.5)\}$ (`*' denotes a missing value), on which learning must be carried out. Notice that this is a case of unstructured missingness (because the unobserved value is known to exist), as opposed to the structural missingness of \cite{chechik2008absent}. Using ZI, MI and 1NNI respectively, we obtain the following filled in datasets: \begin{equation*} \begin{aligned} &X_{ZI} = \{\widehat{\mathbf{x}_1}=(0,2),\mathbf{x}_2=(1.8,1),\mathbf{x}_3=(2,2.5)\},\\ &X_{MI} = \{\widehat{\mathbf{x}_1}=(1.9,2),\mathbf{x}_2=(1.8,1),\mathbf{x}_3=(2,2.5)\},\\ \text{and } &X_{1NNI} = \{\widehat{\mathbf{x}_1}=(2,2),\mathbf{x}_2=(1.8,1),\mathbf{x}_3=(2,2.5)\}, \end{aligned} \end{equation*} where $\widehat{\mathbf{x}_1}$ denotes an estimate of $\mathbf{x}_1$.
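The three filled-in datasets above can be reproduced with a short sketch. The following pure-Python code is our own minimal illustration (the function names are ours, not from the literature); it implements ZI, MI and kNNI on the toy dataset, with `None` standing in for the missing value `*`:

```python
import math

# Toy dataset from the text; None marks the unobserved first feature of x1.
X = [[None, 2.0],
     [1.8,  1.0],
     [2.0,  2.5]]

def zero_impute(X):
    """Zero Imputation (ZI): replace every missing entry with 0."""
    return [[0.0 if v is None else v for v in row] for row in X]

def mean_impute(X):
    """Mean Imputation (MI): replace each missing entry with the mean of
    the observed values of the corresponding feature."""
    m = len(X[0])
    means = []
    for l in range(m):
        obs = [row[l] for row in X if row[l] is not None]
        means.append(sum(obs) / len(obs))
    return [[means[l] if row[l] is None else row[l] for l in range(m)]
            for row in X]

def knn_impute(X, k=1):
    """kNNI: replace each missing entry with the mean of that feature over
    the k nearest neighbors, found in the common observed subspace."""
    out = [row[:] for row in X]
    for i, row in enumerate(X):
        miss = [l for l, v in enumerate(row) if v is None]
        if not miss:
            continue
        cand = []  # (observed-subspace distance, index) pairs
        for j, other in enumerate(X):
            if j == i or any(other[l] is None for l in miss):
                continue
            common = [l for l in range(len(row))
                      if row[l] is not None and other[l] is not None]
            d = math.sqrt(sum((row[l] - other[l]) ** 2 for l in common))
            cand.append((d, j))
        cand.sort()
        for l in miss:
            nbrs = [X[j][l] for _, j in cand[:k]]
            out[i][l] = sum(nbrs) / len(nbrs)
    return out
```

On this dataset the three functions return exactly $X_{ZI}$, $X_{MI}$ and $X_{1NNI}$ above; in particular, the nearest neighbor of $\widetilde{\mathbf{x}}_1$ in the observed subspace is $\mathbf{x}_3$, whose first feature is 2.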
If PDS is used to estimate the corresponding distances in $X$, then the distance $d_{PDS}(\mathbf{x}_1,\mathbf{x}_i)$ between the implicit estimate of $\mathbf{x}_1$ and some other instance $\mathbf{x}_i \in X$ is obtained by \begin{equation*} d_{PDS}(\mathbf{x}_1,\mathbf{x}_i) = \sqrt{\frac{2}{1}(x_{1,2}-x_{i,2})^2}, \end{equation*} where $x_{1,2}$ and $x_{i,2}$ respectively denote the 2nd features of $\mathbf{x}_1$ and $\mathbf{x}_i$; the numerator 2 of the multiplying factor is due to the fact that $\mathbf{x}_i \in \mathbb{R}^2$, while the denominator 1 owes to the fact that only the 2nd feature is observed for both $\mathbf{x}_1$ and $\mathbf{x}_i$. Then, we get \begin{equation*} \begin{aligned} &d_{PDS}(\mathbf{x}_1,\mathbf{x}_2) = \sqrt{\frac{2}{1}(2-1)^2} = 1.41,\\ \text{and } &d_{PDS}(\mathbf{x}_1,\mathbf{x}_3) = \sqrt{\frac{2}{1}(2-2.5)^2} = 0.71. \end{aligned} \end{equation*} The improper estimates obtained by PDS are due to the fact that the distance in the common observed subspace does not reflect the distance in the unobserved subspace. This is the principal drawback of the PDS method, as discussed earlier. Since the observed distance between two data instances is essentially a lower bound on the Euclidean distance between them (if they were to be fully observed), adding a suitable penalty to this lower bound can result in a reasonable approximation of the actual distance. This approach, called the Penalized Dissimilarity Measure (PDM) \cite{datta2016fwpd}, may be able to overcome the drawback which plagues PDS. Let the penalty between $\mathbf{x}_1$ and $\mathbf{x}_i$ be given by the ratio of the number of features which are unobserved for at least one of the two data instances and the total number of features in the entire dataset.
Then, the dissimilarity $\delta_{PDM}(\mathbf{x}_1,\mathbf{x}_i)$ between the implicit estimate of $\mathbf{x}_1$ and some other $\mathbf{x}_i \in X$ is \begin{equation*} \delta_{PDM}(\mathbf{x}_1,\mathbf{x}_i) = \sqrt{(x_{1,2}-x_{i,2})^2} + \frac{1}{2}, \end{equation*} where the 1 in the numerator of the penalty term is due to the fact that the 1st feature of $\mathbf{x}_1$ is unobserved. Therefore, the dissimilarities $\delta_{PDM}(\mathbf{x}_1,\mathbf{x}_2)$ and $\delta_{PDM}(\mathbf{x}_1,\mathbf{x}_3)$ are \begin{equation*} \begin{aligned} &\delta_{PDM}(\mathbf{x}_1,\mathbf{x}_2) = \sqrt{(2-1)^2} + \frac{1}{2} = 1.5,\\ \text{and } &\delta_{PDM}(\mathbf{x}_1,\mathbf{x}_3) = \sqrt{(2-2.5)^2} + \frac{1}{2} = 1. \end{aligned} \end{equation*} \begin{figure} \caption{Comparison of various techniques for handling missing features.} \label{figVariousMissing1} \label{figVariousMissing2} \label{figVarious} \end{figure} \par The situation is illustrated in Figure \ref{figVariousMissing1}. The reader should note that the points estimated using ZI, MI and 1NNI exist in the same 2-D Cartesian space to which $X_{full}$ is native. On the other hand, the points estimated by both PDS and PDM exist in their individual abstract spaces (likely distinct from the native 2-D space). Therefore, for the sake of easy comparison, we illustrate all the estimates together by superimposing both these abstract spaces on the native 2-D space so as to coincide at the points $\mathbf{x}_2$ and $\mathbf{x}_3$. It can be seen that the approach based on the PDM does not suffer from the drawback of PDS and is better able to preserve the relationship between the points. Moreover, it should be noted that there are two possible images for each of the estimates obtained by both PDS and PDM. 
Therefore, had the partially observed point instead been $\mathbf{x'}_1 = (3,2)$ with the first feature missing (giving rise to the same incomplete dataset $X$; $\widetilde{\mathbf{x'}}_1$ replacing the identical incomplete point $\widetilde{\mathbf{x}}_1$), PDS and PDM would still find reasonably good estimates (PDM still being better than PDS). This situation is also illustrated in Figure \ref{figVariousMissing2}. In general, \begin{enumerate} \item ZI works well only for missing values in the vicinity of the origin and is also origin dependent; \item MI works well only when the missing value is near the observed mean of the missing feature; \item kNNI is reliant on the assumption that neighbors have similar features, but suffers from the drawbacks that missingness may give rise to erroneous neighbor selection and that the estimates are restricted to the range of observed values of the feature in question; \item PDS suffers from the assumption that the common observed distances reflect the unobserved distances; and \item none of these methods differentiate between identical incomplete points, i.e. $\widetilde{\mathbf{x}}_1$ and $\widetilde{\mathbf{x}}'_1$ are not differentiated between. \end{enumerate} However, a PDM successfully steers clear of all these drawbacks (notice that $\delta(\mathbf{x}_1,\mathbf{x}'_1) = \frac{1}{2}$). Furthermore, such a PDM can also be easily applied to the case of absent features, by slightly modifying the penalty term (see Appendix \ref{apd:first}). This knowledge motivates us to use a PDM to adapt traditional clustering methods to problems with missing features. \subsection{Contribution}\label{sec:contrib} The FWPD measure is a PDM used in \cite{datta2016fwpd} for kNN classification of datasets with missing features\footnote{The work of \citeauthor{datta2016fwpd} \citeyear{datta2016fwpd} is based on the FWPD measure originally proposed in the archived version of the current article \cite{DattaBD16}.}. 
The FWPD between two data instances is a weighted sum of two terms: the first term is the observed distance between the instances and the second is a penalty term. The penalty term is a sum of the penalties corresponding to each of the features which are missing from at least one of the data instances; each penalty is directly proportional to the probability of its corresponding feature being observed. Such a weighting scheme imposes a greater penalty if a feature which is observed for a large fraction of the data is missing for a particular instance. On the other hand, if the missing feature is unobserved for a large fraction of the data, then a smaller penalty is imposed. \par The contributions of the current article are as follows: \begin{enumerate} \item In the current article, we formulate the k-means clustering problem for datasets with missing features based on the proposed FWPD and develop an algorithm to solve the new formulation. \item We prove that the proposed algorithm is guaranteed to converge to a locally optimal solution of the modified k-means optimization problem formulated with the FWPD measure. \item We also propose Single Linkage, Average Linkage, and Complete Linkage based HAC methods for datasets plagued by missingness, based on the proposed FWPD. \item We provide an extensive discussion on the properties of the FWPD measure. The said discussion is more thorough compared to that of \cite{datta2016fwpd}. \item We further provide a detailed algorithm for simulating the four types of missingness enumerated in \cite{datta2016fwpd}, namely MCAR, MAR, MNAR-I (missingness only depends on the unobserved features) and MNAR-II (missingness depends on both observed as well as unobserved features).
\item Moreover, since this work presents an alternative to imputation and can be useful in scenarios where imputation is not practical (such as structural missingness), we append an extension of the proposed FWPD to the case of absent features (where the absent features are known to be undefined or non-existent). We also show that the FWPD becomes a semi-metric in the case of structural missingness. \end{enumerate} \par Experiments are reported on diverse datasets and cover all four types of missingness. The results are compared with popular imputation techniques. The comparative results indicate that our approach generally achieves better performance than the common imputation approaches used to handle incomplete data. \subsection{Organization}\label{sec:organiz} The rest of this paper is organized in the following way. In Section \ref{sec:fwpd}, we elaborate on the properties of the FWPD measure. The next section (Section \ref{sec:Kmeans}) presents a formulation of the k-means clustering problem which is directly applicable to datasets with missing features, based on the FWPD discussed in Section \ref{sec:fwpd}. This section also puts forth an algorithm to solve the optimization problem posed by this new formulation. The subsequent section (Section \ref{sec:Hier}) covers the HAC algorithm formulated using FWPD to be directly applicable to incomplete datasets. Experimental results (based on the missingness simulating mechanism discussed in the same section) are presented in Section \ref{sec:ExpRes}. Relevant conclusions are drawn in Section \ref{sec:conclsn}. Subsequently, Appendix \ref{apd:first} deals with the extension of the proposed FWPD to the case of absent features (structural missingness). \section{Feature Weighted Penalty based Dissimilarity Measure for Datasets with Missing Features}\label{sec:fwpd} Let the dataset be $X \subset \mathbb{R}^m$, i.e. the data instances in $X$ are each characterized by $m$ feature values in $\mathbb{R}$.
Further, let $X$ consist of $n$ instances $\mathbf{x}_i$ ($i \in \{1, 2, \cdots, n\}$), some of which have missing features. Let $\gamma_{\mathbf{x}_i}$ denote the set of observed features for the data point $\mathbf{x}_i$. Then, the set of all features $S=\bigcup_{i=1}^{n}\gamma_{\mathbf{x}_i}$ and $|S|=m$. The set of features which are observed for all data instances in $X$ is defined as $\gamma_{obs}=\bigcap_{i=1}^{n}\gamma_{\mathbf{x}_i}$. $|\gamma_{obs}|$ may or may not be non-zero. $\gamma_{miss}=S\backslash\gamma_{obs}$ is the set of features which are unobserved for at least one data point in $X$. The important notations used in this section (and beyond) are summarized in Table \ref{tabNota1}. \begin{table}[h] \begin{center} \caption{Some important notations used in Section \ref{sec:fwpd} and beyond.}\label{tabNota1} \begin{tabular}{c | l} \hline Notation & Meaning\\ \hline $X$ & Dataset with incomplete data points\\ $n$ & Number of data points in $X$\\ $\mathbf{x}_i$ & A data point in $X$\\ $x_{i,l}$ & $l$-th feature of $\mathbf{x}_i$\\ $S$ & Set of all features in $X$\\ $m$ & Number of features in $S$, i.e. 
$|S|$\\ $\gamma$ & General notation for a set of features in $S$\\ $\gamma_{\mathbf{x}_i}$ & Set of features observed for point $\mathbf{x}_i$\\ $\gamma_{obs}$ & Set of features observed for all instances in $X$\\ $\gamma_{miss}$ & Set of features which are unobserved for some point in $X$\\ $d_{\gamma}(\mathbf{x}_i,\mathbf{x}_j)$ & Distance between points $\mathbf{x}_i$ and $\mathbf{x}_j$ in the subspace defined by the features in $\gamma$\\ $d(\mathbf{x}_i,\mathbf{x}_j)$ & Observed distance between points $\mathbf{x}_i$ and $\mathbf{x}_j$\\ $d_E(\mathbf{x}_i,\mathbf{x}_j)$ & Euclidean distance between fully observed points $\mathbf{x}_i$ and $\mathbf{x}_j$\\ $w_l$ & Number of instances in $X$ having observed values for the $l$-th feature\\ $p(\mathbf{x}_i,\mathbf{x}_j)$ & Feature Weighted Penalty (FWP) between $\mathbf{x}_i$ and $\mathbf{x}_j$\\ $p_{\gamma}$ & FWP corresponding to the subspace defined by $\gamma$\\ $\delta(\mathbf{x}_i,\mathbf{x}_j)$ & Feature Weighted Penalty based Dissimilarity (FWPD) between $\mathbf{x}_i$ and $\mathbf{x}_j$\\ $d_{max}$ & Maximum observed distance between any two data points in $X$\\ $\alpha$ & Coefficient of relative importance between observed distance and FWP for FWPD\\ $\rho_{i,j,k}$ & $p(\mathbf{x}_i,\mathbf{x}_j) + p(\mathbf{x}_j,\mathbf{x}_k) - p(\mathbf{x}_k,\mathbf{x}_i)$ for some $\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in X$\\ $\phi$ & An empty set\\ \hline \end{tabular} \end{center} \end{table} \begin{dfn}\label{defLowerDist} Let the distance between any two data instances $\mathbf{x}_i,\mathbf{x}_j \in X$ in a subspace defined by $\gamma$ be denoted as $d_{\gamma}(\mathbf{x}_i,\mathbf{x}_j)$.
Then the observed distance (distance in the common observed subspace) between these two points can be defined as \begin{equation}\label{eqnLowerDist} d_{\gamma_{\mathbf{x}_i}\bigcap\gamma_{\mathbf{x}_j}}(\mathbf{x}_i,\mathbf{x}_j)=\sqrt{\sum_{l \in \gamma_{\mathbf{x}_i}\bigcap\gamma_{\mathbf{x}_j}}(x_{i,l}-x_{j,l})^2}, \end{equation} where $x_{i,l}$ denotes the $l$-th feature of the data instance $\mathbf{x}_i$. For the sake of convenience, $d_{\gamma_{\mathbf{x}_i}\bigcap\gamma_{\mathbf{x}_j}}(\mathbf{x}_i,\mathbf{x}_j)$ is simplified to $d(\mathbf{x}_i,\mathbf{x}_j)$ in the rest of this paper. \end{dfn} \begin{dfn}\label{defEDist} If both $\mathbf{x}_i$ and $\mathbf{x}_j$ were to be fully observed, the Euclidean distance $d_{E}(\mathbf{x}_i,\mathbf{x}_j)$ between $\mathbf{x}_i$ and $\mathbf{x}_j$ would be defined as \begin{equation*} d_{E}(\mathbf{x}_i,\mathbf{x}_j)=\sqrt{\sum_{l \in S}(x_{i,l}-x_{j,l})^2}. \end{equation*} \end{dfn} Now, since $(\gamma_{\mathbf{x}_i}\cap\gamma_{\mathbf{x}_j})\subseteq S$, and $(x_{i,l}-x_{j,l})^2 \geq 0$ $\forall$ $l \in S$, it follows that \begin{equation*} d(\mathbf{x}_i,\mathbf{x}_j) \leq d_{E}(\mathbf{x}_i,\mathbf{x}_j) \text{ } \forall \text{ } \mathbf{x}_i,\mathbf{x}_j \in X. \end{equation*} Therefore, to compensate for the distance in the unobserved subspace, we add a Feature Weighted Penalty (FWP) $p(\mathbf{x}_i,\mathbf{x}_j)$ (defined below) to $d(\mathbf{x}_i,\mathbf{x}_j)$. \begin{dfn}\label{defFwp} The FWP between $\mathbf{x}_i$ and $\mathbf{x}_j$ is defined as \begin{equation}\label{eqnDefFwp} p(\mathbf{x}_i,\mathbf{x}_j)=\frac{\underset{l \in S \backslash (\gamma_{\mathbf{x}_i}\bigcap \gamma_{\mathbf{x}_j})}{\sum}\;w_l}{\underset{l' \in S}{\sum}\;w_{l'}}, \end{equation} where $w_l \in (0,n]$ is the number of instances in $X$ having observed values of the feature $l$. 
It should be noted that the FWP exacts a greater penalty for unobserved occurrences of those features which are observed for a large fraction of the data instances. Moreover, since the value of the FWP solely depends on the penalized subspace $S \backslash (\gamma_{\mathbf{x}_i}\bigcap \gamma_{\mathbf{x}_j})$, we define an alternative notation for the FWP, viz. $p_{\gamma}={\underset{l \in \gamma}{\sum}\;w_l}/{\underset{l' \in S}{\sum}\;w_{l'}}$. Hence, $p(\mathbf{x}_i,\mathbf{x}_j)$ can also be written as $p_{S \backslash (\gamma_{\mathbf{x}_i}\bigcap \gamma_{\mathbf{x}_j})}$. \end{dfn} Then, the definition of the proposed FWPD follows. \begin{dfn}\label{defFwpd} The FWPD between $\mathbf{x}_i$ and $\mathbf{x}_j$ is \begin{equation}\label{eqnFwpd} \delta(\mathbf{x}_i,\mathbf{x}_j)=(1-\alpha)\times \frac{d(\mathbf{x}_i,\mathbf{x}_j)}{d_{max}} + \alpha \times p(\mathbf{x}_i,\mathbf{x}_j), \end{equation} where $\alpha \in (0,1]$ is a parameter which determines the relative importance between the two terms and $d_{max}$ is the maximum observed distance between any two points in $X$ in their respective common observed subspaces. \end{dfn} \subsection{Properties of the proposed FWPD}\label{sec:propFwpd} In this subsection, we discuss some of the important properties of the proposed FWPD measure. The following theorem establishes several of these properties, while the subsequent discussion is concerned with the triangle inequality in the context of FWPD.
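As a concrete illustration of Definitions \ref{defLowerDist}, \ref{defFwp} and \ref{defFwpd}, the following pure-Python sketch (our own illustrative code with hypothetical function names, not the authors' implementation) computes the observed distance, the FWP and the FWPD for the toy dataset of Section \ref{sec:motiv}, in which the first feature of $\mathbf{x}_1$ is missing:

```python
import math

# Toy incomplete dataset; None marks the unobserved first feature of x1.
X = [[None, 2.0],
     [1.8,  1.0],
     [2.0,  2.5]]

m = len(X[0])

# w_l: number of instances having an observed value for feature l.
w = [sum(1 for row in X if row[l] is not None) for l in range(m)]

def observed_distance(xi, xj):
    """Euclidean distance restricted to the common observed subspace."""
    common = [l for l in range(m) if xi[l] is not None and xj[l] is not None]
    return math.sqrt(sum((xi[l] - xj[l]) ** 2 for l in common))

def fwp(xi, xj):
    """Feature Weighted Penalty: observation counts of the features missing
    from xi or xj, normalized by the total observation count."""
    miss = [l for l in range(m) if xi[l] is None or xj[l] is None]
    return sum(w[l] for l in miss) / sum(w)

# Maximum observed distance over all pairs, used to normalize distances.
d_max = max(observed_distance(X[i], X[j])
            for i in range(len(X)) for j in range(i + 1, len(X)))

def fwpd(xi, xj, alpha=0.5):
    """FWPD: weighted combination of normalized observed distance and FWP."""
    return (1 - alpha) * observed_distance(xi, xj) / d_max + alpha * fwp(xi, xj)
```

Here $w = (2, 3)$, so the FWP between $\mathbf{x}_1$ and any fully observed point is $2/5 = 0.4$; the FWPD is symmetric, and the self-dissimilarity of the fully observed point $\mathbf{x}_2$ is exactly zero.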
\begin{theorem}\label{thmPropFwpd} The FWPD measure satisfies the following important properties: \begin{enumerate} \item $\delta(\mathbf{x}_i,\mathbf{x}_i) \leq \delta(\mathbf{x}_i,\mathbf{x}_j)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X$, \item $\delta(\mathbf{x}_i,\mathbf{x}_i) \geq 0$ $\forall$ $\mathbf{x}_i \in X$, \item $\delta(\mathbf{x}_i,\mathbf{x}_i) = 0$ iff $\gamma_{\mathbf{x}_i}=S$, and \item $\delta(\mathbf{x}_i,\mathbf{x}_j)=\delta(\mathbf{x}_j,\mathbf{x}_i)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X$. \end{enumerate} \begin{proof}\label{pfPropFwpd} \begin{enumerate} \item From Equations (\ref{eqnLowerDist}) and (\ref{eqnFwpd}), it follows that \begin{equation}\label{eqnFwpdSelf} \delta(\mathbf{x}_i,\mathbf{x}_i) = \alpha \times p(\mathbf{x}_i,\mathbf{x}_i). \end{equation} It also follows from Equation (\ref{eqnDefFwp}) that $p(\mathbf{x}_i,\mathbf{x}_i) \leq p(\mathbf{x}_i,\mathbf{x}_j)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X$. Therefore, $\delta(\mathbf{x}_i,\mathbf{x}_i) \leq \alpha \times p(\mathbf{x}_i,\mathbf{x}_j)$. Moreover, since the observed distance term in Equation (\ref{eqnFwpd}) is non-negative, we have $\alpha \times p(\mathbf{x}_i,\mathbf{x}_j) \leq \delta(\mathbf{x}_i,\mathbf{x}_j)$. Hence, we get $\delta(\mathbf{x}_i,\mathbf{x}_i) \leq \delta(\mathbf{x}_i,\mathbf{x}_j)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X$. \item It can be seen from Equation (\ref{eqnFwpdSelf}) that $\delta(\mathbf{x}_i,\mathbf{x}_i) = \alpha \times p(\mathbf{x}_i,\mathbf{x}_i)$. Moreover, it follows from Equation (\ref{eqnDefFwp}) that $p(\mathbf{x}_i,\mathbf{x}_i) \geq 0$. Hence, $\delta(\mathbf{x}_i,\mathbf{x}_i) \geq 0$ $\forall$ $\mathbf{x}_i \in X$. \item It is easy to see from Equation (\ref{eqnDefFwp}) that $p(\mathbf{x}_i,\mathbf{x}_i)=0$ iff $\gamma_{\mathbf{x}_i}=S$.
Hence, it directly follows from Equation (\ref{eqnFwpdSelf}) that $\delta(\mathbf{x}_i,\mathbf{x}_i) = 0$ iff $\gamma_{\mathbf{x}_i}=S$. \item From Equation (\ref{eqnFwpd}) we have \begin{equation*} \begin{aligned} & \delta(\mathbf{x}_i,\mathbf{x}_j)=(1-\alpha)\times \frac{d(\mathbf{x}_i,\mathbf{x}_j)}{d_{max}} + \alpha \times p(\mathbf{x}_i,\mathbf{x}_j),\\ \text{and } & \delta(\mathbf{x}_j,\mathbf{x}_i)=(1-\alpha)\times \frac{d(\mathbf{x}_j,\mathbf{x}_i)}{d_{max}} + \alpha \times p(\mathbf{x}_j,\mathbf{x}_i). \end{aligned} \end{equation*} However, $d(\mathbf{x}_i,\mathbf{x}_j)=d(\mathbf{x}_j,\mathbf{x}_i)$ and $p(\mathbf{x}_i,\mathbf{x}_j)=p(\mathbf{x}_j,\mathbf{x}_i)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X$ (by definition). Therefore, it can be easily seen that $\delta(\mathbf{x}_i,\mathbf{x}_j)=\delta(\mathbf{x}_j,\mathbf{x}_i)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X$. \end{enumerate} \end{proof} \end{theorem} The triangle inequality is an important criterion which lends some useful properties to the space induced by a dissimilarity measure. Therefore, the conditions under which FWPD satisfies the said criterion are investigated below. However, it should be stressed that the satisfaction of the said criterion is not essential for the functioning of the clustering techniques proposed in the subsequent text. \begin{dfn}\label{defTri} For any three data instances $\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in X$, the triangle inequality with respect to (w.r.t.) the FWPD measure is defined as \begin{equation}\label{eqnTri1} \delta(\mathbf{x}_i,\mathbf{x}_j) + \delta(\mathbf{x}_j,\mathbf{x}_k) \geq \delta(\mathbf{x}_k,\mathbf{x}_i). \end{equation} \end{dfn} The three following lemmas deal with the conditions under which Inequality (\ref{eqnTri1}) will hold. 
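The first of these conditions, the non-negativity of $\rho_{i,j,k}$ (see Table \ref{tabNota1}), can also be verified numerically. The following pure-Python sketch (our own illustrative code, not part of the original study) generates a random incomplete dataset and exhaustively checks $\rho_{i,j,k} \geq 0$ over all ordered triples:

```python
import itertools
import random

random.seed(7)

# Random incomplete dataset: n points, m features; each entry of the points
# beyond the first is unobserved (None) with probability 0.3. The first
# point is kept fully observed so that every observation count w_l > 0.
n, m = 6, 4
X = []
for i in range(n):
    row = []
    for l in range(m):
        if i == 0 or random.random() > 0.3:
            row.append(random.gauss(0, 1))
        else:
            row.append(None)
    X.append(row)

# w_l: number of points with feature l observed.
w = [sum(1 for row in X if row[l] is not None) for l in range(m)]

def p(xi, xj):
    """Feature Weighted Penalty of the preceding definition."""
    miss = [l for l in range(m) if xi[l] is None or xj[l] is None]
    return sum(w[l] for l in miss) / sum(w)

# Exhaustive check: rho_{i,j,k} >= 0 for every ordered triple (i, j, k).
rhos = [p(X[i], X[j]) + p(X[j], X[k]) - p(X[k], X[i])
        for i, j, k in itertools.permutations(range(n), 3)]
min_rho = min(rhos)
```

Since every term in the subspace decomposition of $\rho_{i,j,k}$ is non-negative, `min_rho` is non-negative regardless of the random masking pattern.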
\begin{lemma}\label{penTri} For any three data instances $\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in X$ let $\rho_{i,j,k} = p(\mathbf{x}_i,\mathbf{x}_j) + p(\mathbf{x}_j,\mathbf{x}_k) - p(\mathbf{x}_k,\mathbf{x}_i)$. Then $\rho_{i,j,k} \geq 0$ $\forall$ $\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in X$. \begin{proof} Let us rewrite the penalty term $p(\mathbf{x}_i,\mathbf{x}_j)$ in terms of the spanned subspaces as $p(\mathbf{x}_i,\mathbf{x}_j) = p_{S \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j})} + p_{\gamma_{\mathbf{x}_i} \backslash \gamma_{\mathbf{x}_j}} + p_{\gamma_{\mathbf{x}_j} \backslash \gamma_{\mathbf{x}_i}}$. Now, accounting for the subspaces overlapping with the observed subspace of $\mathbf{x}_k$, we get \begin{equation*} \begin{aligned} p(\mathbf{x}_i,\mathbf{x}_j) = p_{S \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} & + p_{\gamma_{\mathbf{x}_k} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j})} + p_{\gamma_{\mathbf{x}_i} \backslash (\gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} \\ & + p_{(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}) \backslash \gamma_{\mathbf{x}_j}} + p_{\gamma_{\mathbf{x}_j} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})} + p_{(\gamma_{\mathbf{x}_j} \bigcap \gamma_{\mathbf{x}_k}) \backslash \gamma_{\mathbf{x}_i}}.\\ \text{Similarly, } p(\mathbf{x}_j,\mathbf{x}_k) = p_{S \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} & + p_{\gamma_{\mathbf{x}_i} \backslash (\gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} + p_{\gamma_{\mathbf{x}_j} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})} \\ & + p_{(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_j}) \backslash \gamma_{\mathbf{x}_k}} + p_{\gamma_{\mathbf{x}_k} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j})} + p_{(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}) \backslash 
\gamma_{\mathbf{x}_j}},\\ \text{and } p(\mathbf{x}_k,\mathbf{x}_i) = p_{S \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} & + p_{\gamma_{\mathbf{x}_j} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})} + p_{\gamma_{\mathbf{x}_k} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j})} \\ & + p_{(\gamma_{\mathbf{x}_j} \bigcap \gamma_{\mathbf{x}_k}) \backslash \gamma_{\mathbf{x}_i}} + p_{\gamma_{\mathbf{x}_i} \backslash (\gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} + p_{(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_j}) \backslash \gamma_{\mathbf{x}_k}}.\\ \end{aligned} \end{equation*} Hence, after canceling out appropriate terms, we get \begin{equation*} \rho_{i,j,k} = p_{S \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} + p_{\gamma_{\mathbf{x}_k} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j})} + 2 p_{(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}) \backslash \gamma_{\mathbf{x}_j}} + p_{\gamma_{\mathbf{x}_i} \backslash (\gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} + p_{\gamma_{\mathbf{x}_j} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})}. 
\end{equation*} Now, since \begin{equation*} p_{\gamma_{\mathbf{x}_i} \backslash (\gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})} + p_{\gamma_{\mathbf{x}_k} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j})} + p_{(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}) \backslash \gamma_{\mathbf{x}_j}} = p_{(\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k}) \backslash \gamma_{\mathbf{x}_j}}, \end{equation*} we can further simplify to \begin{equation}\label{defRho} \rho_{i,j,k} = p_{(\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})\backslash \gamma_{\mathbf{x}_j}} + p_{(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k})\backslash \gamma_{\mathbf{x}_j}} + p_{\gamma_{\mathbf{x}_j} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})} + p_{S \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})}. \end{equation} Since all the terms in Expression (\ref{defRho}) must be either zero or positive, this proves that $\rho_{i,j,k} \geq 0$ $\forall$ $\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in X$. \end{proof} \end{lemma} \begin{lemma}\label{lemTri1} For any three data points $\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in X$, Inequality (\ref{eqnTri1}) is satisfied when $(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_j})=(\gamma_{\mathbf{x}_j} \bigcap \gamma_{\mathbf{x}_k})=(\gamma_{\mathbf{x}_k} \bigcap \gamma_{\mathbf{x}_i})$. \begin{proof}\label{pfLemTri1} From Equation (\ref{eqnFwpd}) the Inequality (\ref{eqnTri1}) can be rewritten as \begin{equation}\label{eqnTri2} \begin{aligned} (1-\alpha) \times \frac{d(\mathbf{x}_i,\mathbf{x}_j)}{d_{max}} + \alpha \times & p(\mathbf{x}_i,\mathbf{x}_j) \\ + (1-\alpha) \times & \frac{d(\mathbf{x}_j,\mathbf{x}_k)}{d_{max}} + \alpha \times p(\mathbf{x}_j,\mathbf{x}_k) \\ \geq & (1-\alpha) \times \frac{d(\mathbf{x}_k,\mathbf{x}_i)}{d_{max}} + \alpha \times p(\mathbf{x}_k,\mathbf{x}_i). 
\end{aligned} \end{equation} Further simplifying (\ref{eqnTri2}) by moving the penalty terms to the Left Hand Side (LHS) and the observed distance terms to the Right Hand Side (RHS), we get \begin{equation}\label{eqnTri3} \alpha \times \rho_{i,j,k} \geq \frac{(1-\alpha)}{d_{max}} \times (d(\mathbf{x}_k,\mathbf{x}_i) - (d(\mathbf{x}_i,\mathbf{x}_j)+d(\mathbf{x}_j,\mathbf{x}_k))). \end{equation} When $(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_j})=(\gamma_{\mathbf{x}_j} \bigcap \gamma_{\mathbf{x}_k})=(\gamma_{\mathbf{x}_k} \bigcap \gamma_{\mathbf{x}_i})$, as $d(\mathbf{x}_i,\mathbf{x}_j)+d(\mathbf{x}_j,\mathbf{x}_k) \geq d(\mathbf{x}_k,\mathbf{x}_i)$, the RHS of Inequality (\ref{eqnTri3}) is less than or equal to zero. Now, it follows from Lemma \ref{penTri} that the LHS of Inequality (\ref{eqnTri3}) is always greater than or equal to zero as $\rho_{i,j,k} \geq 0$ and $\alpha \in (0,1]$. Hence, LHS $\geq$ RHS, which completes the proof. \end{proof} \end{lemma} \begin{lemma}\label{lemTri2} If $|\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_j}| \rightarrow 0$, $|\gamma_{\mathbf{x}_j} \bigcap \gamma_{\mathbf{x}_k}| \rightarrow 0$ and $|\gamma_{\mathbf{x}_k} \bigcap \gamma_{\mathbf{x}_i}| \rightarrow 0$, then Inequality (\ref{eqnTri3}) tends to be satisfied. \begin{proof}\label{pfLemTri2} When $|\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_j}| \rightarrow 0$, $|\gamma_{\mathbf{x}_j} \bigcap \gamma_{\mathbf{x}_k}| \rightarrow 0$ and $|\gamma_{\mathbf{x}_k} \bigcap \gamma_{\mathbf{x}_i}| \rightarrow 0$, each of the penalty terms tends to $1$, so that $\rho_{i,j,k} \rightarrow 1$; hence the LHS of Inequality (\ref{eqnTri3}) tends to $\alpha$ while the RHS tends to $0$. As $\alpha \in (0,1]$, Inequality (\ref{eqnTri3}) tends to be satisfied. \end{proof} \end{lemma} The following lemma deals with the value of the parameter $\alpha \in (0,1]$ for which a relaxed form of the triangle inequality is satisfied for any three data instances in a dataset $X$.
\begin{lemma}\label{lemTri3} Let $\mathcal{P} = \min \{\rho_{i,j,k}:\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in X, \rho_{i,j,k} > 0\}.$ Then, for any arbitrary constant $\epsilon$ satisfying $0 \leq \epsilon \leq \mathcal{P}$, if $\alpha \geq (1-\epsilon)$, then the following relaxed form of the triangle inequality \begin{equation}\label{eqnRelTri1} \delta(\mathbf{x}_i,\mathbf{x}_j) + \delta(\mathbf{x}_j,\mathbf{x}_k) \geq \delta(\mathbf{x}_k,\mathbf{x}_i) - {\epsilon}^2, \end{equation} is satisfied for any $\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in X$. \begin{proof}\label{pflemTri3} \begin{enumerate} \item If $\mathbf{x}_i$, $\mathbf{x}_j$, and $\mathbf{x}_k$ are all fully observed, then Inequality (\ref{eqnTri1}) holds. Now, since $\epsilon \geq 0$, therefore $\delta(\mathbf{x}_k,\mathbf{x}_i) \geq \delta(\mathbf{x}_k,\mathbf{x}_i) - {\epsilon}^2$. This implies $\delta(\mathbf{x}_i,\mathbf{x}_j) + \delta(\mathbf{x}_j,\mathbf{x}_k) \geq \delta(\mathbf{x}_k,\mathbf{x}_i) \geq \delta(\mathbf{x}_k,\mathbf{x}_i) - {\epsilon}^2$. Hence, Inequality (\ref{eqnRelTri1}) must hold. \item If $(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_j} \bigcap \gamma_{\mathbf{x}_k}) \neq S$ i.e. at least one of the data instances is not fully observed, and $\rho_{i,j,k} = 0$, then $(\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})\backslash \gamma_{\mathbf{x}_j} = \phi$, $(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k})\backslash \gamma_{\mathbf{x}_j} = \phi$, $S \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k}) = \phi$, and \\ $\gamma_{\mathbf{x}_j} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k}) = \phi$. This implies that $\gamma_{\mathbf{x}_j} = S$, and $\gamma_{\mathbf{x}_k} \bigcup \gamma_{\mathbf{x}_i} = \gamma_{\mathbf{x}_j}$.
Moreover, since $\rho_{i,j,k} = 0$, we have $\delta(\mathbf{x}_i,\mathbf{x}_j) + \delta(\mathbf{x}_j,\mathbf{x}_k) - \delta(\mathbf{x}_k,\mathbf{x}_i) = d(\mathbf{x}_i,\mathbf{x}_j) + d(\mathbf{x}_j,\mathbf{x}_k) - d(\mathbf{x}_k,\mathbf{x}_i)$. Now, $\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k} \subseteq \gamma_{\mathbf{x}_i}$, $\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k} \subseteq \gamma_{\mathbf{x}_k}$ and $\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k} \subseteq \gamma_{\mathbf{x}_j}$ as $\gamma_{\mathbf{x}_k} \bigcup \gamma_{\mathbf{x}_i} = \gamma_{\mathbf{x}_j} = S$. Therefore, $d(\mathbf{x}_i,\mathbf{x}_j) + d(\mathbf{x}_j,\mathbf{x}_k) - d(\mathbf{x}_k,\mathbf{x}_i) \geq d_{\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}}(\mathbf{x}_i,\mathbf{x}_j) + d_{\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}}(\mathbf{x}_j,\mathbf{x}_k) - d_{\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}}(\mathbf{x}_k,\mathbf{x}_i)$. Now, by the triangle inequality in subspace $\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}$, $d_{\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}}(\mathbf{x}_i,\mathbf{x}_j) + d_{\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}}(\mathbf{x}_j,\mathbf{x}_k) - d_{\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k}}(\mathbf{x}_k,\mathbf{x}_i) \geq 0$. Hence, $\delta(\mathbf{x}_i,\mathbf{x}_j) + \delta(\mathbf{x}_j,\mathbf{x}_k) - \delta(\mathbf{x}_k,\mathbf{x}_i) \geq 0$, i.e. Inequalities (\ref{eqnTri1}) and (\ref{eqnRelTri1}) are satisfied. 
\item If $(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_j} \bigcap \gamma_{\mathbf{x}_k}) \neq S$ and $\rho_{i,j,k} \neq 0$, as $\alpha \geq (1-\epsilon)$, LHS of Inequality (\ref{eqnTri3}) $\geq (1-\epsilon) \times (p_{(\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})\backslash \gamma_{\mathbf{x}_j}} + p_{(\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{x}_k})\backslash \gamma_{\mathbf{x}_j}} + p_{\gamma_{\mathbf{x}_j} \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_k})} + p_{S \backslash (\gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{x}_j} \bigcup \gamma_{\mathbf{x}_k})})$. Since $\rho_{i,j,k} \geq \mathcal{P} \geq \epsilon$, we further get that LHS $\geq (1-\epsilon)\epsilon$. Moreover, as $\frac{1}{d_{max}}(d(\mathbf{x}_k,\mathbf{x}_i) - (d(\mathbf{x}_i,\mathbf{x}_j) + d(\mathbf{x}_j,\mathbf{x}_k))) \leq 1$ and $(1-\alpha) \leq \epsilon$, we get RHS of Inequality (\ref{eqnTri3}) $\leq \epsilon$. Therefore, LHS - RHS $\geq (1-\epsilon)\epsilon - \epsilon = -{\epsilon}^2$. Now, as Inequality (\ref{eqnTri3}) is obtained from Inequality (\ref{eqnTri1}) after some algebraic manipulation, it must hold that (LHS - RHS) of Inequality (\ref{eqnTri3}) = (LHS - RHS) of Inequality (\ref{eqnTri1}). Hence, we get $\delta(\mathbf{x}_i,\mathbf{x}_j) + \delta(\mathbf{x}_j,\mathbf{x}_k) - \delta(\mathbf{x}_k,\mathbf{x}_i) \geq -{\epsilon}^2$ which can be simplified to obtain Inequality (\ref{eqnRelTri1}). This completes the proof. \end{enumerate} \end{proof} \end{lemma} \par Let us now elucidate the proposed FWP (and consequently the proposed FWPD measure) by using the following example. \begin{eg} \end{eg} Let $X \subset \mathbb{R}^3$ be a dataset consisting of $n=5$ data points, each having three features ($S=\{1,2,3\}$), some of which (marked by '*') are unobserved. The dataset is presented below (along with the feature observation counts and the observed feature sets for each of the instances).
\begin{small} \begin{center} \begin{tabular*}{0.551\textwidth}{|c|c c c|c|} \hline Data Point & $x_{i,1}$ & $x_{i,2}$ & $x_{i,3}$ & $\gamma_{\mathbf{x}_i}$ \\ \hline $\mathbf{x}_1$ & * & $3$ & $2$ & $\{2,3\}$ \\ $\mathbf{x}_2$ & $1.2$ & * & $4$ & $\{1,3\}$ \\ $\mathbf{x}_3$ & * & $0$ & $0.5$ & $\{2,3\}$ \\ $\mathbf{x}_4$ & $2.1$ & $3$ & $1$ & $\{1,2,3\}$ \\ $\mathbf{x}_5$ & $-2$ & * & * & $\{1\}$ \\ \hline Obs. Count & $w_1=3$ & $w_2=3$ & $w_3=4$ & - \\ \hline \end{tabular*} \end{center} \end{small} The pairwise observed distance matrix $A_{d}$ and the pairwise penalty matrix $A_{p}$ are as follows: \begin{equation*} A_{d} = \left[ \begin{matrix} 0, & 2, & 3.35, & 1, & 0 \\ 2, & 0, & 3.5, & 3.13, & 3.2 \\ 3.35, & 3.5, & 0, & 3.04, & 0 \\ 1, & 3.13, & 3.04, & 0, & 4.1 \\ 0, & 3.2, & 0, & 4.1, & 0 \end{matrix} \right] \text{ and } A_{p} = \left[ \begin{matrix} 0.3, & 0.6, & 0.3, & 0.3, & 1 \\ 0.6, & 0.3, & 0.6, & 0.3, & 0.7 \\ 0.3, & 0.6, & 0.3, & 0.3, & 1 \\ 0.3, & 0.3, & 0.3, & 0, & 0.7 \\ 1, & 0.7, & 1, & 0.7, & 0.7 \end{matrix} \right]. \end{equation*} From $A_{d}$ it is observed that the maximum pairwise observed distance $d_{max}=4.1$. Then, the normalized observed distance matrix $A_{\bar{d}}$ is \begin{equation*} A_{\bar{d}} = \left[ \begin{matrix} 0, & 0.49, & 0.82, & 0.24, & 0 \\ 0.49, & 0, & 0.85, & 0.76, & 0.78 \\ 0.82, & 0.85, & 0, & 0.74, & 0 \\ 0.24, & 0.76, & 0.74, & 0, & 1 \\ 0, & 0.78, & 0, & 1, & 0 \end{matrix} \right]. \end{equation*} Here, $\mathcal{P} = 0.3$. While any $\alpha \in (0,1]$ is admissible, let us choose $\alpha=0.7$, which satisfies the condition of Lemma \ref{lemTri3} with $\epsilon = \mathcal{P} = 0.3$. Using Equation (\ref{eqnFwpd}) to calculate the FWPD matrix $A_{\delta}$, we get (entries rounded to two decimal places): \begin{equation*} A_{\delta} = 0.3 \times A_{\bar{d}} + 0.7 \times A_{p} = \left[ \begin{matrix} 0.21, & 0.57, & 0.46, & 0.28, & 0.7 \\ 0.57, & 0.21, & 0.68, & 0.44, & 0.72 \\ 0.46, & 0.68, & 0.21, & 0.43, & 0.7 \\ 0.28, & 0.44, & 0.43, & 0, & 0.79 \\ 0.7, & 0.72, & 0.7, & 0.79, & 0.49 \end{matrix} \right].
\end{equation*} It should be noted that in keeping with the properties of the FWPD described in Subsection \ref{sec:propFwpd}, $A_{\delta}$ is a symmetric matrix with the diagonal elements being the smallest entries in their corresponding rows (and columns) and the diagonal element corresponding to the fully observed point $\mathbf{x}_4$ being the only zero element. Moreover, it can be easily checked that the relaxed form of the triangle inequality, as given in Inequality (\ref{eqnRelTri1}), is always satisfied. \section{k-means Clustering for Datasets with Missing Features using the proposed FWPD}\label{sec:Kmeans} This section presents a reformulation of the k-means clustering problem for datasets with missing features, using the FWPD measure proposed in Section \ref{sec:fwpd}. The important notations used in this section (and beyond) are summarized in Table \ref{tabNota2}. The k-means problem (a term coined by MacQueen \citeyear{macqueen1967some}) deals with the partitioning of a set of $n$ data instances into $k(< n)$ clusters so as to minimize the sum of within-cluster dissimilarities. The standard heuristic algorithm to solve the k-means problem, referred to as the \emph{k-means algorithm}, was first proposed by Lloyd in 1957 \cite{lloyd1982least} and rediscovered by Forgy \citeyear{forgy1965cluster}. Starting with a random assignment of each data instance to one of the $k$ clusters, the k-means algorithm alternately recalculates the $k$ cluster centroids and reassigns each data instance to its nearest cluster (the cluster corresponding to the nearest cluster centroid). Selim \& Ismail \citeyear{selim1984kmeans} showed that the k-means algorithm converges to a local optimum of the non-convex optimization problem posed by the k-means problem, when the dissimilarity used is the Euclidean distance between data points.
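The alternating scheme just described can be sketched in a few lines of plain Python for fully observed data. This is a minimal illustration of the classical Lloyd iteration, against which the algorithm proposed below can be compared; all identifiers are our own, and reseeding an empty cluster to a random data point is one common convention among several:

```python
import math
import random

def lloyd_kmeans(X, k, max_iter=100, seed=0):
    # Classical Lloyd iteration: alternate centroid recomputation and
    # nearest-centroid reassignment until the assignments stabilize.
    rng = random.Random(seed)
    assign = [rng.randrange(k) for _ in X]      # random initial assignment
    cents = []
    for _ in range(max_iter):
        # Centroid step: mean of the points currently in each cluster.
        cents = []
        for j in range(k):
            members = [x for x, a in zip(X, assign) if a == j]
            if not members:                     # reseed an empty cluster
                members = [X[rng.randrange(len(X))]]
            cents.append([sum(c) / len(members) for c in zip(*members)])
        # Assignment step: each point goes to its nearest centroid.
        new = [min(range(k), key=lambda j: math.dist(x, cents[j])) for x in X]
        if new == assign:                       # converged
            break
        assign = new
    return assign, cents
```

The k-means-FWPD algorithm of the next subsection retains this alternating structure but replaces the Euclidean distance by the FWPD and constrains which features each centroid may observe.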
\begin{table}[h] \begin{center} \caption{Some important notations used in Section \ref{sec:Kmeans} and beyond.}\label{tabNota2} \begin{tabular}{c | c | l} \hline Notation & $\begin{array}{c} \text{Counter-part in}\\ \text{k-means-FWPD}\\ \text{iteration } t\\ \end{array}$ & Meaning\\ \hline $k$ & - & Number of clusters for k-means\\ $C_j$ & $C^t_j$ & $j$-th cluster for k-means\\ $u_{i,j}$ & $u^t_{i,j}$ & Membership of the data point $\mathbf{x}_i$ in the cluster $C_j$\\ $U$ & $U^t$ & $n \times k$ matrix of cluster memberships\\ $\mathcal{U}$ & - & Set of all possible $U$ values\\ $\mathbf{z}_j$ & $\mathbf{z}^t_j$ & Centroid of cluster $C_j$\\ $z_{j,l}$ & $z^t_{j,l}$ & $l$-th feature of the cluster centroid $\mathbf{z}_j$\\ $Z$ & $Z^t$ & Set of cluster centroids\\ $\mathcal{Z}$ & - & Set of all possible $Z$ values\\ $f(U,Z)$ & $f(U^t,Z^t)$ & k-means objective function defined on $\mathcal{U} \times \mathcal{Z}$\\ $X_l$ & - & Set of all $\mathbf{x}_i \in X$ having observed values for feature $l$\\ $U^*$ & - & Final cluster memberships found by k-means-FWPD\\ $Z^*$ & - & Final cluster centroids found by k-means-FWPD\\ $T$ & - & The convergent iteration of k-means-FWPD\\ - & $\tau$ & Any iteration preceding the current iteration $t$\\ $\mathcal{F}(Z)$ & - & Set of feasible membership matrices for $Z$\\ $\mathcal{F}(U)$ & - & Set of feasible centroid sets for $U$\\ $\mathcal{S}(U)$ & - & Set of super-feasible centroid sets for $U$\\ $(\tilde{U},\tilde{Z})$ & - & A partial optimal solution of the k-means-FWPD problem\\ $D$ & - & A feasible direction of movement for $U^*$\\ $\mathcal{O}$ & - & Big O notation\\ \hline \end{tabular} \end{center} \end{table} \par The proposed formulation of the k-means problem for datasets with missing features using the proposed FWPD measure, referred to as the \emph{k-means-FWPD problem} hereafter, differs from the standard k-means problem not only in that the underlying dissimilarity measure used is FWPD (instead of Euclidean
distance), but also in the addition of a new constraint which ensures that a cluster centroid has observable values for exactly those features which are observed for at least one of the points in its corresponding cluster. Therefore, the k-means-FWPD problem to partition the dataset $X$ into $k$ clusters $(2 \leq k < n)$ can be formulated in the following way: \begin{subequations}\label{eqnKmeansForm} \begin{flalign} \text{P: minimize } f(U,&Z)=\sum_{i=1}^{n} \sum_{j=1}^{k} u_{i,j}((1-\alpha)\times \frac{d(\mathbf{x}_{i},\mathbf{z}_{j})}{d_{max}} + \alpha \times p(\mathbf{x}_{i},\mathbf{z}_{j})), \\ \text{subject to } & \sum_{j=1}^{k} u_{i,j}=1 \text{ } \forall \text{ } i \in \{1, 2, \cdots, n\}, \label{constr1}\\ & u_{i,j} \in \{0,1\} \text{ } \forall \text{ } i \in \{1, 2, \cdots, n\},j \in \{1, 2, \cdots, k\}, \label{constr2}\\ \text{and }& \gamma_{\mathbf{z}_j} = \bigcup_{\mathbf{x}_{i} \in C_{j}} \gamma_{\mathbf{x}_i} \text{ } \forall \text{ } j \in \{1, 2, \cdots, k\}, \label{constr3} \end{flalign} \end{subequations} where $U=[u_{i,j}]$ is the $n \times k$ matrix of memberships, $d_{max}$ denotes the maximum observed distance between any two data points $\mathbf{x}_i,\mathbf{x}_j \in X$, $\gamma_{\mathbf{z}_j}$ denotes the set of observed features for $\mathbf{z}_j$ $(j \in \{1, 2, \cdots, k\})$, $C_{j}$ denotes the $j$-th cluster (corresponding to the centroid $\mathbf{z}_j$), $Z=\{\mathbf{z}_1, \cdots, \mathbf{z}_k\}$, and it is said that $\mathbf{x}_i \in C_{j}$ when $u_{i,j}=1$. \subsection{The k-means-FWPD Algorithm}\label{sec:KmeansFwpdAlgo} To find a solution to the problem P, which is a non-convex program, we propose a Lloyd-like heuristic algorithm based on the FWPD (referred to as the \emph{k-means-FWPD algorithm}), as follows: \begin{enumerate} \item Start with a random initial set of cluster assignments $U$ such that $\sum_{j=1}^{k} u_{i,j}=1$ for each $i$. Set $t=1$ and specify the maximum number of iterations $MaxIter$.
\item For each cluster $C_{j}^{t}$ $(j = 1, 2, \cdots, k)$, calculate the observed features of the cluster centroid $\mathbf{z}_{j}^{t}$. The value for the $l$-th feature of a centroid $\mathbf{z}_{j}^{t}$ should be the average of the corresponding feature values for all the data instances in the cluster $C_{j}^{t}$ having observed values for the $l$-th feature. If none of the data instances in $C_{j}^{t}$ have observed values for the feature in question, then the value $z_{j,l}^{t-1}$ of the feature from the previous iteration should be retained. Therefore, the feature values are calculated as follows: \begin{equation}\label{eqnClustCentroid} z_{j,l}^{t}=\left\{ \begin{array}{cl} (\underset{\mathbf{x}_i \in X_l}{\sum}\; u_{i,j}^{t} \times x_{i,l})\bigg/(\underset{\mathbf{x}_i \in X_l}{\sum}\; u_{i,j}^{t}) \text{ }, & \mbox{$\forall \text{ } l \in \bigcup_{\mathbf{x}_i \in C_{j}^{t}} \gamma_{\mathbf{x}_i}$},\\ \;\;\;\;\;\;\;\;\;z_{j,l}^{t-1}\;\;\;\;\;\;\;\;\;, & \mbox{$\forall \text{ } l \in \gamma_{\mathbf{z}_j^{t-1}} \backslash \bigcup_{\mathbf{x}_i \in C_{j}^{t}} \gamma_{\mathbf{x}_i}$},\\ \end{array} \right. \end{equation} where $X_l$ denotes the set of all $\mathbf{x}_i \in X$ having observed values for the feature $l$. \item Assign each data point $\mathbf{x}_i$ $(i=1, 2, \cdots, n)$ to the cluster corresponding to its nearest (in terms of FWPD) centroid, i.e. \begin{equation*} u_{i,j}^{t+1} = \left\{ \begin{array}{ll} 1, & \mbox{if $\mathbf{z}_{j}^{t}=\underset{\mathbf{z} \in Z^t}{\argmin}\; \delta(\mathbf{x}_i,\mathbf{z})$},\\ 0, & \mbox{otherwise}.\\ \end{array} \right. \end{equation*} Set $t=t+1$. If $U^{t}=U^{t-1}$ or $t = MaxIter$, then go to Step 4; otherwise go to Step 2. 
\item Calculate the final cluster centroid set $Z^*$ as: \begin{equation}\label{eqnFinClustCentroid} z_{j,l}^{*}=\frac{\underset{\mathbf{x}_i \in X_l}{\sum}\; u_{i,j}^{t+1} \times x_{i,l}}{\underset{\mathbf{x}_i \in X_l}{\sum}\; u_{i,j}^{t+1}} \text{ } \forall \text{ } l \in \bigcup_{\mathbf{x}_i \in C_{j}^{t+1}} \gamma_{\mathbf{x}_i}. \end{equation} Set $U^* = U^{t+1}$. \end{enumerate} \begin{figure} \caption{Comparison of the convergence of the traditional k-means and k-means-FWPD algorithms.} \label{figConvTrad} \label{figConvNew} \label{figConv} \end{figure} \begin{rmk} Each iteration of the traditional k-means algorithm is known to result in a decrease in the value of the objective function $f$ \cite{selim1984kmeans} (Figure \ref{figConvTrad}). However, for the k-means-FWPD algorithm, the $Z^t$ calculations for some of the iterations may result in a finite increase in $f$, as shown in Figure \ref{figConvNew}. We show in Theorem \ref{thmFinFin} that only a finite number of such increments may occur during a given run of the algorithm, thus ensuring eventual convergence. Moreover, the final feasible, locally-optimal solution is obtained using Step 4 (denoted by the dotted line), which does not result in any further change to the objective function value. \end{rmk} \subsection{Notions of Feasibility in Problem P} Let $\mathcal{U}$ and $\mathcal{Z}$ respectively denote the sets of all possible $U$ and $Z$. Unlike in the traditional k-means problem, the entire $\mathcal{U} \times \mathcal{Z}$ space is not feasible for Problem P. There exists a set of feasible $U$ for a given $Z$. Similarly, there exist sets of feasible and super-feasible $Z$ (a super-set of the set of feasible $Z$) for a given $U$. In this subsection, we formally define these notions.
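Before doing so, it may help to see the centroid update of Step 2 (Equation (\ref{eqnClustCentroid})) in code. The sketch below is illustrative Python, with missing values encoded as \texttt{None}; the function name and the encoding are our own choices, not part of the algorithm's specification:

```python
def update_centroid(cluster, prev_centroid, m):
    # Step 2 of the k-means-FWPD algorithm: feature l of the new centroid
    # is the mean over the cluster members observing l; a feature observed
    # by no current member keeps its value from the previous centroid
    # (None marks a feature the centroid has never observed).
    z = list(prev_centroid)
    for l in range(m):
        vals = [x[l] for x in cluster if x[l] is not None]
        if vals:
            z[l] = sum(vals) / len(vals)
    return z
```

Note that under this update the set of observed centroid features can only grow from one iteration to the next; this monotonicity is used later in Lemma \ref{lemNewPoint}.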
\begin{dfn}\label{dfnFeasU} Given a cluster centroid set $Z$, the set $\mathcal{F}(Z)$ of feasible membership matrices is given by \begin{equation*} \mathcal{F}(Z) = \{U : u_{i,j} = 0 \text{ } \forall j \in \{1,2, \cdots, k\} \text{ such that } \gamma_{\mathbf{x}_i} \backslash \gamma_{\mathbf{z}_j} \neq \phi \}, \end{equation*} i.e. $\mathcal{F}(Z)$ is the set of all such membership matrices which do not assign any $\mathbf{x}_i \in X$ to a centroid in $Z$ missing some feature $l \in \gamma_{\mathbf{x}_i}$. \end{dfn} \begin{dfn}\label{dfnFeasZ} Given a membership matrix $U$, the set $\mathcal{F}(U)$ of feasible cluster centroid sets can be defined as \begin{equation*} \mathcal{F}(U) = \{Z : Z \text{ satisfies constraint } (\ref{constr3}) \}. \end{equation*} \end{dfn} \begin{dfn}\label{dfnSupfeasZ} Given a membership matrix $U$, the set $\mathcal{S}(U)$ of super-feasible cluster centroid sets is \begin{equation}\label{constr3rel} \mathcal{S}(U) = \{Z : \gamma_{\mathbf{z}_j} \supseteq \bigcup_{\mathbf{x}_{i} \in C_{j}} \gamma_{\mathbf{x}_i} \text{ } \forall \text{ } j \in \{1, 2, \cdots, k\} \}, \end{equation} i.e. $\mathcal{S}(U)$ is the set of all such centroid sets which ensure that any centroid has observed values at least for those features which are observed for any of the points assigned to its corresponding cluster in $U$. \end{dfn} \begin{rmk} The k-means-FWPD problem differs from traditional k-means in that not all $U \in \mathcal{U}$ are feasible for a given $Z$. Additionally, for a given $U$, there exists a set $\mathcal{S}(U)$ of super-feasible $Z$, with $\mathcal{F}(U) \subseteq \mathcal{S}(U)$ being the set of feasible $Z$. The traversal of the k-means-FWPD algorithm is illustrated in Figure \ref{fig:algo1} where the grey solid straight lines denote the set of feasible $Z$ for the current $U^t$ while the rest of the super-feasible region is marked by the corresponding grey dotted straight line.
Furthermore, the grey jagged lines denote the feasible set of $U$ for the current $Z^t$. Starting with a random $U^1 \in \mathcal{U}$ (Step 1), the algorithm finds $Z^1 \in \mathcal{S}(U^1)$ (Step 2), $U^2 \in \mathcal{F}(Z^1)$ (Step 3), and $Z^2 \in \mathcal{F}(U^2)$ (Step 2). However, it subsequently finds $U^3 \not\in \mathcal{F}(Z^2)$ (Step 3), necessitating a feasibility adjustment (see Section \ref{sec:feas}) while calculating $Z^3$ (Step 2). Subsequently, the algorithm converges to $(U^5,Z^4)$. For the convergent $(U^{T+1},Z^T)$, $U^{T+1} \in \mathcal{F}(Z^T)$ but it is possible that $Z^T \in \mathcal{S}(U^{T+1})\backslash \mathcal{F}(U^{T+1})$ (as seen in the case of Figure \ref{fig:algo1}). However, the final $(U^*,Z^*)$ (obtained by the dotted black line transition denoting Step 4) is seen to be feasible in both respects and is shown (in Theorem \ref{thmLocalOpt}) to be locally optimal in the corresponding feasible region. \end{rmk} \begin{figure} \caption{Simplified representation of how the k-means-FWPD algorithm traverses the $\mathcal{U} \times \mathcal{Z}$ space ($\mathcal{U}$ and $\mathcal{Z}$ are shown as unidimensional for the sake of visualizability).} \label{fig:algo1} \end{figure} \subsection{Partial Optimal Solutions}\label{sec:partOptSoln} This subsection deals with the concept of \emph{partial optimal solutions} of the problem P, to one of which the k-means-FWPD algorithm is shown to converge (prior to Step 4). The following definition formally presents the concept of a partial optimal solution.
\begin{dfn}\label{dfnPOS} A partial optimal solution $(\tilde{U},\tilde{Z})$ of problem P, satisfies the following conditions \cite{wendel1976minimization}: \begin{equation*} \begin{aligned} & f(\tilde{U},\tilde{Z}) \leq f(U,\tilde{Z}) \text{ } \forall \text{ } U \in \mathcal{F}(\tilde{Z}) \text{ where } \tilde{U} \in \mathcal{F}(\tilde{Z}),\\ \text{and } & f(\tilde{U},\tilde{Z}) \leq f(\tilde{U},Z) \text{ } \forall \text{ } Z \in \mathcal{S}(\tilde{U}) \text{ where } \tilde{Z} \in \mathcal{S}(\tilde{U}). \end{aligned} \end{equation*} \end{dfn} To obtain a partial optimal solution of P, the two following subproblems are defined: \begin{equation*} \begin{aligned} &\text{P1: Given } U \in \mathcal{U}, \text{ minimize } f(U,Z) \text{ over } Z \in \mathcal{S}(U).\\ &\text{P2: Given } Z \text{ satisfying (\ref{constr3}), minimize } f(U,Z) \text{ over } U \in \mathcal{U}. \end{aligned} \end{equation*} The following lemmas establish that Steps 2 and 3 of the k-means-FWPD algorithm respectively solve the problems P1 and P2 for a given iterate. The subsequent theorem shows that the k-means-FWPD algorithm converges to a partial optimal solution of P. \begin{lemma}\label{lemSolveP1} Given a $U^t$, the centroid matrix $Z^t$ calculated using Equation (\ref{eqnClustCentroid}) is an optimal solution of the Problem P1.\\ \begin{proof} For a fixed $U^t \in \mathcal{U}$, the objective function is minimized when $\frac{\partial f}{\partial z_{j,l}^t}=0 \text{ } \forall j \in \{1, \cdots, k\}, l \in \gamma_{\mathbf{z}^t_j}$. For a particular $\mathbf{z}^t_j$, it follows from Definition \ref{defFwp} that $\{p(\mathbf{x}_{i},\mathbf{z}^t_{j}): \mathbf{x}_i \in C^t_j\}$ is independent of the values of the features of $\mathbf{z}^t_j$, as $\gamma_{\mathbf{x}_i} \bigcap \gamma_{\mathbf{z}^t_j} = \gamma_{\mathbf{x}_i}\ \text{ } \forall \mathbf{x}_i \in C^t_j$. 
Since an observed feature $l \in \gamma_{\mathbf{z}^t_j} \backslash (\bigcup_{\mathbf{x}_{i} \in C^t_{j}} \gamma_{\mathbf{x}_i})$ of $\mathbf{z}^t_j$ has no contribution to the observed distances, $\frac{\partial f}{\partial z_{j,l}^t}=0 \text{ } \forall l \in \gamma_{\mathbf{z}^t_j} \backslash (\bigcup_{\mathbf{x}_{i} \in C^t_{j}} \gamma_{\mathbf{x}_i})$. For an observed feature $l\in \bigcup_{\mathbf{x}_{i} \in C^t_{j}} \gamma_{\mathbf{x}_i}$ of $\mathbf{z}^t_j$, differentiating $f(U^t,Z^t)$ w.r.t. $z_{j,l}^t$ we get \begin{equation*} \frac{\partial f}{\partial z_{j,l}^t} = -\frac{(1-\alpha)}{d_{max}} \times \underset{\mathbf{x}_i \in X_l}{\sum} u_{i,j}^t\left(\frac{x_{i,l}-z_{j,l}^t}{d(\mathbf{x}_i,\mathbf{z}^t_j)}\right). \end{equation*} Setting $\frac{\partial f}{\partial z_{j,l}^t} = 0$ and solving for $z_{j,l}^t$, we obtain \begin{equation*} z_{j,l}^t=\frac{\underset{\mathbf{x}_i \in X_l}{\sum} u_{i,j}^t \times x_{i,l}}{\underset{\mathbf{x}_i \in X_l}{\sum} u_{i,j}^t}. \end{equation*} Since Equation (\ref{eqnClustCentroid}) is in keeping with this criterion and ensures that constraint (\ref{constr3rel}) is satisfied, the centroid matrix $Z^t$ calculated using Equation (\ref{eqnClustCentroid}) is an optimal solution of P1. \end{proof} \end{lemma} \begin{lemma}\label{lemSolveP2} For a given $Z^t$, problem P2 is solved if $u^{t+1}_{i,j}=1$ and $u^{t+1}_{i,j^{'}}=0$ $\forall$ $i \in \{1, \cdots, n\}$ when $\delta(\mathbf{x}_i,\mathbf{z}^t_j) \leq \delta(\mathbf{x}_i,\mathbf{z}^t_{j^{'}})$, for all $j^{'} \neq j$.\\ \begin{proof} It is clear that the contribution of $\mathbf{x}_i$ to the total objective function is $\delta(\mathbf{x}_i,\mathbf{z}^t_j)$ when $u^{t+1}_{i,j}=1$ and $u^{t+1}_{i,j^{'}}=0$ $\forall$ $j^{'} \neq j$. Since any alternative solution is an extreme point of $\mathcal{U}$ \cite{selim1984kmeans}, it must satisfy (\ref{constr2}).
Therefore, the contribution of $\mathbf{x}_i$ to the objective function for an alternative solution will be some $\delta(\mathbf{x}_i,\mathbf{z}^t_{j^{'}}) \geq \delta(\mathbf{x}_i,\mathbf{z}^t_j)$. Hence, the contribution of $\mathbf{x}_i$ is minimized by assigning $u^{t+1}_{i,j}=1$ and $u^{t+1}_{i,j^{'}}=0$ $\forall$ $j^{'} \neq j$. This argument holds true for all $\mathbf{x}_i \in X$, i.e. $\forall$ $i \in \{1, \cdots, n\}$. This completes the proof. \end{proof} \end{lemma} \begin{theorem}\label{thmConvergePOS} The k-means-FWPD algorithm finds a partial optimal solution of P.\\ \begin{proof} Let $T$ denote the terminal iteration. Since Step 2 and Step 3 of the k-means-FWPD algorithm respectively solve P1 and P2, the algorithm terminates only when the obtained iterate $(U^{T+1},Z^{T})$ solves both P1 and P2. Therefore, $f(U^{T+1},Z^{T}) \leq f(U^{T+1},Z) \text{ } \forall Z \in \mathcal{S}(U^{T+1})$. Since Step 2 ensures that $Z^{T} \in \mathcal{S}(U^{T})$ and $U^{T+1} = U^{T}$, we must have $Z^{T} \in \mathcal{S}(U^{T+1})$. Moreover, $f(U^{T+1},Z^{T}) \leq f(U,Z^T)$ $\forall U \in \mathcal{U}$ which implies $f(U^{T+1},Z^{T}) \leq f(U,Z^T) \text{ } \forall U \in \mathcal{F}(Z^T)$. Now, Step 2 ensures that $\gamma_{\mathbf{z}^{T}_j} \supseteq \bigcup_{\mathbf{x}_{i} \in C^T_{j}} \gamma_{\mathbf{x}_i} \text{ } \forall \text{ } j \in \{1, 2, \cdots, k\}$. Since we must have $U^{T+1} = U^{T}$ for convergence to occur, it follows that $\gamma_{\mathbf{z}^{T}_j} \supseteq \bigcup_{\mathbf{x}_{i} \in C^{T+1}_{j}} \gamma_{\mathbf{x}_i} \text{ } \forall \text{ } j \in \{1, 2, \cdots, k\}$, hence $u^{T+1}_{i,j} = 1$ implies $\gamma_{\mathbf{z}^T_j} \supseteq \gamma_{\mathbf{x}_i}$. Therefore, $U^{T+1} \in \mathcal{F}(Z^T)$. Consequently, the terminal iterate of Step 3 of the k-means-FWPD algorithm must be a partial optimal solution of P. 
\end{proof} \end{theorem} \subsection{Feasibility Adjustments}\label{sec:feas} Since it is possible for the number of observed features of the cluster centroids to increase over the iterations to maintain feasibility w.r.t. constraint (\ref{constr3}), we now introduce the concept of \emph{feasibility adjustment}, the consequences of which are discussed in this subsection. \begin{dfn}\label{dfnFA} A feasibility adjustment for cluster $j$ ($j \in \{1,2,\cdots,k\}$) is said to occur in iteration $t$ if $\gamma_{\mathbf{z}^t_j} \supset \gamma_{\mathbf{z}^{t-1}_j}$ (equivalently, $\gamma_{\mathbf{z}^t_j} \backslash \gamma_{\mathbf{z}^{t-1}_j} \neq \phi$), i.e. if the centroid $\mathbf{z}^t_j$ acquires an observed value for at least one feature which was unobserved for its counter-part $\mathbf{z}^{t-1}_j$ in the previous iteration. \end{dfn} The following lemma shows that feasibility adjustment can only occur for a cluster as a result of the addition of a new data point previously unassigned to it. \begin{lemma}\label{lemNewPoint} Feasibility adjustment occurs for a cluster $C_j$ in iteration $t$ iff at least one data point $\mathbf{x}_i$ with $\gamma_{\mathbf{x}_i} \backslash \gamma_{\mathbf{z}^{\tau}_j} \neq \phi$ $\forall \tau < t$, previously unassigned to $C_j$ (i.e. $u^{\tau}_{i,j} = 0$ $\forall \tau < t$), is assigned to it in iteration $t$.\\ \begin{proof} Due to Equation (\ref{eqnClustCentroid}), all features defined for $\mathbf{z}^{t-1}_j$ are also retained for $\mathbf{z}^t_j$. Therefore, for $\gamma_{\mathbf{z}^t_j} \backslash \gamma_{\mathbf{z}^{t-1}_j} \neq \phi$ there must exist some $\mathbf{x}_i$ such that $u^t_{i,j} = 1$, $u^{t-1}_{i,j} = 0$, and $\gamma_{\mathbf{x}_i} \backslash \gamma_{\mathbf{z}^{t-1}_j} \neq \phi$. Since the set of defined features for any cluster centroid is a monotonically growing set, we have $\gamma_{\mathbf{x}_i} \backslash \gamma_{\mathbf{z}^{\tau}_j} \neq \phi$ $\forall \tau < t$.
It then follows from constraint (\ref{constr3}) that $u^{\tau}_{i,j} = 0$ $\forall \tau < t$. Now, to prove the converse, let us assume the existence of some $\mathbf{x}_i$ such that $\gamma_{\mathbf{x}_i} \backslash \gamma_{\mathbf{z}^{\tau}_j} \neq \phi$ $\forall \tau < t$ and $u^{\tau}_{i,j} = 0$ $\forall \tau < t$, which is assigned to $C_j$ in iteration $t$ (i.e. $u^t_{i,j} = 1$). Since $\gamma_{\mathbf{x}_i} \backslash \gamma_{\mathbf{z}^{t-1}_j} \neq \phi$ and $\gamma_{\mathbf{z}^t_j} \supseteq \gamma_{\mathbf{x}_i} \bigcup \gamma_{\mathbf{z}^{t-1}_j}$, it follows that $\gamma_{\mathbf{z}^t_j} \backslash \gamma_{\mathbf{z}^{t-1}_j} \neq \phi$. \end{proof} \end{lemma} The following theorem deals with the consequences of the feasibility adjustment phenomenon. \begin{theorem}\label{thmFinFin} During a single run of the k-means-FWPD algorithm, the occurrence of feasibility adjustments may cause a finite increment in the objective function $f$, but only in a finite number of iterations.\\ \begin{proof} It follows from Lemma \ref{lemSolveP1} that $f(U^t,Z^t) \leq f(U^t,Z)$ $\forall Z \in \mathcal{S}(U^t)$. If there is no feasibility adjustment in iteration $t$, $\mathcal{S}(U^{t-1}) = \mathcal{S}(U^t)$. Hence, $f(U^t,Z^t) \leq f(U^t,Z^{t-1})$. However, if a feasibility adjustment occurs in iteration $t$, then $\gamma_{\mathbf{z}^t_j} \supset \gamma_{\mathbf{z}^{t-1}_j}$ for at least one $j \in \{1,2,\cdots,k\}$. Hence, $Z^{t-1} \in \mathcal{Z} \backslash \mathcal{S}(U^t)$ and we may have $f(U^t,Z^t) > f(U^t,Z^{t-1})$. Since both $f(U^t,Z^t)$ and $f(U^t,Z^{t-1})$ are finite, $(f(U^t,Z^t) - f(U^t,Z^{t-1}))$ must also be finite. Now, the maximum number of feasibility adjustments occurs in the worst-case scenario where each data point, having a unique set of observed features (which are unobserved for all other data points), traverses all the clusters before convergence. Therefore, the maximum number of possible feasibility adjustments during a single run of the k-means-FWPD algorithm is $n(k-1)$, which is finite.
\end{proof} \end{theorem} \subsection{Convergence of the k-means-FWPD Algorithm}\label{sec:KmeansFwpdConverge} We now show that the k-means-FWPD algorithm converges to the partial optimal solution, within a finite number of iterations. The following lemma and the subsequent theorem are concerned with this. \begin{lemma}\label{lemConvOrFeas} Starting with a given iterate $(U^t,Z^t)$, the k-means-FWPD algorithm either reaches convergence or encounters a feasibility adjustment, within a finite number of iterations.\\ \begin{proof} Let us first note that there are a finite number of extreme points of $\mathcal{U}$. Then, we observe that an extreme point of $\mathcal{U}$ is visited at most once by the algorithm before either convergence or the next feasibility adjustment. Suppose this is not true, and let $U^{t_1}=U^{t_2}$ for distinct iterations $t_1$ and $t_2$ $(t_1 \geq t, t_1 < t_2)$ of the algorithm. Applying Step 2 of the algorithm, we get $Z^{t_1}$ and $Z^{t_2}$ as optimal centroid sets for $U^{t_1}$ and $U^{t_2}$, respectively. Then, $f(U^{t_1},Z^{t_1}) = f(U^{t_2},Z^{t_2})$ since $U^{t_1}=U^{t_2}$. However, it is clear from Lemmas \ref{lemSolveP1}, \ref{lemSolveP2} and Theorem \ref{thmFinFin} that $f$ strictly decreases subsequent to the iterate $(U^t,Z^t)$ and prior to either the next feasibility adjustment (in which case the value of $f$ may increase) or convergence (in which case $f$ remains unchanged), which contradicts $f(U^{t_1},Z^{t_1}) = f(U^{t_2},Z^{t_2})$. Hence, $U^{t_1} \neq U^{t_2}$. Therefore, it is clear from the above argument that the k-means-FWPD algorithm either converges or encounters a feasibility adjustment within a finite number of iterations.
\end{proof} \end{lemma} \begin{theorem}\label{thmConvFwpd} The k-means-FWPD algorithm converges to a partial optimal solution within a finite number of iterations.\\ \begin{proof} It follows from Lemma \ref{lemConvOrFeas} that the first feasibility adjustment is encountered within a finite number of iterations since initialization and that each subsequent feasibility adjustment occurs within a finite number of iterations of the previous. Moreover, we know from Theorem \ref{thmFinFin} that there can only be a finite number of feasibility adjustments during a single run of the algorithm. Therefore, the final feasibility adjustment must occur within a finite number of iterations. Moreover, it follows from Lemma \ref{lemConvOrFeas} that the algorithm converges within a finite number of subsequent iterations. Hence, the k-means-FWPD algorithm must converge within a finite number of iterations. \end{proof} \end{theorem} \subsection{Local Optimality of the Final Solution}\label{sec:Finoptim} In this subsection, we establish the local optimality of the final solution obtained in Step 4 of the k-means-FWPD algorithm, subsequent to convergence in Step 3. \begin{lemma}\label{lemZStaropt} $Z^*$ is the unique optimal feasible cluster centroid set for $U^*$, i.e. $Z^* \in \mathcal{F}(U^*)$ and $f(U^*,Z^*) \leq f(U^*,Z) \text{ } \forall Z \in \mathcal{F}(U^*)$. \begin{proof} Since $Z^*$ satisfies constraint (\ref{constr3}) for $U^*$, $Z^* \in \mathcal{F}(U^*)$. We know from Lemma \ref{lemSolveP1} that for $f(U^*,Z^*) \leq f(U^*,Z) \text{ } \forall Z \in \mathcal{F}(U^*)$, we must have \begin{equation*} z^*_{j,l}=\frac{\underset{\mathbf{x}_i \in X_l}{\sum} u^*_{i,j} \times x_{i,l}}{\underset{\mathbf{x}_i \in X_l}{\sum} u^*_{i,j}}. \end{equation*} As this is ensured by Step 4, $Z^*$ must be the unique optimal feasible cluster centroid set for $U^*$. 
\end{proof} \end{lemma} \begin{lemma}\label{lemFeasZSupfeasZ} If $Z^*$ is the unique optimal feasible cluster centroid set for $U^*$, then $f(U^*,Z^*) \leq f(U,Z^*) \text{ } \forall U \in \mathcal{F}(Z^*)$. \begin{proof} We know from Theorem \ref{thmConvergePOS} that $f(U^{*},Z^{T}) \leq f(U,Z^T) \text{ } \forall U \in \mathcal{F}(Z^T)$. Now, $\gamma_{\mathbf{z}^*_j} \subseteq \gamma_{\mathbf{z}^T_j} \text{ } \forall j \in \{1,2,\cdots,k\}$. Therefore, $\mathcal{F}(Z^*) \subseteq \mathcal{F}(Z^T)$ must hold. It therefore follows that $f(U^*,Z^*) \leq f(U,Z^*) \text{ } \forall U \in \mathcal{F}(Z^*)$. \end{proof} \end{lemma} Now, the following theorem shows that the final solution obtained by Step 4 of the k-means-FWPD algorithm is locally optimal. \begin{theorem}\label{thmLocalOpt} The final solution $(U^*,Z^*)$ obtained by Step 4 of the k-means-FWPD algorithm is a locally optimal solution of P. \begin{proof} We have from Lemma \ref{lemFeasZSupfeasZ} that $f(U^*,Z^*) \leq f(U,Z^*) \text{ } \forall U \in \mathcal{F}(Z^*)$. Therefore, $f(U^*,Z^*) \leq \min_{U} \{f(U,Z^*):U \in \mathcal{F}(Z^*)\}$, which implies that for all feasible directions $D$ at $U^*$, the one-sided directional derivative \cite{lasdon2013optimization}, \begin{equation}\label{eqnTraceGeqZero} trace(\nabla_{U}f(U^*,Z^*)^{\mathrm{T}}D) \geq 0. \end{equation} Now, since $Z^*$ is the unique optimal feasible cluster centroid set for $U^*$ (Lemma \ref{lemZStaropt}), $(U^*,Z^*)$ is a local optimum of problem P. \end{proof} \end{theorem} \subsection{Time Complexity of the k-means-FWPD Algorithm}\label{sec:TimeCompKmeansFwpd} In this subsection, we present a brief discussion on the time complexity of the k-means-FWPD algorithm. The k-means-FWPD algorithm consists of four basic steps, which are repeated iteratively.
These steps are: \begin{enumerate} \item \emph{Centroid Calculation}: As a maximum of $m$ features must be calculated for each of the $k$ centroids, each feature being an average over at most $n$ points, the complexity of centroid calculation is at most $\mathcal{O}(kmn)$. \item \emph{Distance Calculation}: As each distance calculation involves at most $m$ features, the observed distance calculation between $n$ data instances and $k$ cluster centroids is at most $\mathcal{O}(kmn)$. \item \emph{Penalty Calculation}: The penalty calculation between a data point and a cluster centroid involves at most $m$ summations. Hence, penalty calculation over all possible pairings is at most $\mathcal{O}(kmn)$. \item \emph{Cluster Assignment}: Assigning the $n$ data points to $k$ clusters requires comparing the dissimilarities of each point to the $k$ cluster centroids, which is $\mathcal{O}(nk)$. \end{enumerate} Therefore, if the algorithm runs for $T$ iterations, the total computational complexity is $\mathcal{O}(kmnT)$, which is the same as that of the standard k-means algorithm. \section{Hierarchical Agglomerative Clustering for Datasets with Missing Features using the proposed FWPD}\label{sec:Hier} In this section, we present HAC methods using the proposed FWPD measure that can be directly applied to data with missing features. The important notations used in this section (and beyond) are summarized in Table \ref{tabNota3}.
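The per-step costs above, as well as the HAC variants developed below, all revolve around evaluating the FWPD dissimilarity between points with missing features. The following Python sketch illustrates one plausible form of such a dissimilarity, assuming an unweighted penalty equal to the fraction of features not observed in both points (the actual FWPD definition given earlier in this paper normalizes the distance term and weights each feature); the function name is ours and `None` marks a missing value:

```python
import math

def fwpd(x, y, alpha=0.25):
    """Illustrative FWPD-style dissimilarity between two points with
    missing features (None marks a missing entry).

    Distance term: Euclidean distance over the commonly observed features.
    Penalty term: fraction of features NOT observed in both points
    (a simplifying assumption; the paper's definition weights features).
    """
    m = len(x)
    common = [l for l in range(m) if x[l] is not None and y[l] is not None]
    d = math.sqrt(sum((x[l] - y[l]) ** 2 for l in common))
    p = (m - len(common)) / m  # penalty for the non-shared features
    return (1 - alpha) * d + alpha * p
```

With this form, two fully observed points reduce to a scaled Euclidean distance, while each feature missing from either point contributes $\alpha/m$ to the dissimilarity.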
\begin{table}[h] \begin{center} \caption{Some important notations used in Section \ref{sec:Hier} and beyond.}\label{tabNota3}
\begin{tabular}{c | l} \hline
Notation & Meaning\\ \hline
$B^t$ & Set of hierarchical clusters obtained in iteration $t$ of HAC-FWPD\\
$\beta^t_i$ & $i$-th hierarchical cluster in $B^t$\\
$Q^t$ & Matrix of dissimilarities between the hierarchical clusters in $B^t$\\
$q^t(i,j)$ & $(i,j)$-th element of $Q^t$\\
$q^t_{min}$ & Smallest non-zero value in $Q^t$\\
$M$ & List of locations in $Q^t$ having value $q^t_{min}$\\
$G$ & New hierarchical cluster formed by merging two of the closest hierarchical clusters in $B^t$\\
$i_G$ & Location of $G$ in the set $B^{t+1}$\\
$L(G,\beta)$ & Linkage between two hierarchical clusters $G$ and $\beta$\\ \hline
\end{tabular} \end{center} \end{table} Hierarchical agglomerative schemes for data clustering seek to build a multi-level hierarchy of clusters, starting with each data point as a single cluster, by combining two (or more) of the most proximal clusters at one level to obtain a lower number of clusters at the next (higher) level. A survey of HAC methods can be found in \cite{murtagh2012algorithms}. However, these methods cannot be directly applied to datasets with missing features. Therefore, in this section, we develop variants of HAC methods based on the proposed FWPD measure. Various proximity measures may be used to merge the clusters in an agglomerative clustering method. Modifications of the three most popular such proximity measures (Single Linkage (SL), Complete Linkage (CL) and Average Linkage (AL)), with FWPD as the underlying dissimilarity measure, are as follows: \begin{enumerate} \item \emph{Single Linkage with FWPD (SL-FWPD)}: The SL between two clusters $C_i$ and $C_j$ is $\min \{\delta(\mathbf{x}_i,\mathbf{x}_j):\mathbf{x}_i \in C_i, \mathbf{x}_j \in C_j\}$.
\item \emph{Complete Linkage with FWPD (CL-FWPD)}: The CL between two clusters $C_i$ and $C_j$ is $\max \{\delta(\mathbf{x}_i,\mathbf{x}_j):\mathbf{x}_i \in C_i, \mathbf{x}_j \in C_j\}$. \item \emph{Average Linkage with FWPD (AL-FWPD)}: The AL between two clusters $C_i$ and $C_j$ is $\frac{1}{|C_i| \times |C_j|} \underset{\mathbf{x}_i \in C_i}{\sum} \underset{\mathbf{x}_j \in C_j}{\sum} \delta(\mathbf{x}_i,\mathbf{x}_j)$, where $|C_i|$ and $|C_j|$ are respectively the number of instances in the clusters $C_i$ and $C_j$. \end{enumerate} \subsection{The HAC-FWPD Algorithm}\label{sec:AggFwpdAlgo} To achieve hierarchical clusterings in the presence of unstructured missingness, the HAC method based on SL-FWPD, CL-FWPD, or AL-FWPD (referred to as the \emph{HAC-FWPD algorithm} hereafter) is as follows: \begin{enumerate} \item Set $B^0 = X$. Compute pairwise dissimilarities $\delta(\mathbf{x}_i,\mathbf{x}_j)$, $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X$ and construct the dissimilarity matrix $Q^0$ so that $q^0(i,j) = \delta(\mathbf{x}_i,\mathbf{x}_j)$. Set $t=0$. \item Search $Q^t$ to identify the set $M = \{(i_1,j_1), (i_2,j_2), \cdots, (i_k,j_k)\}$ containing all the pairs of indexes such that $q^t(i_r,j_r) = q^t_{min}$ $\forall$ $r \in \{1, 2, \cdots, k\}$, $q^t_{min}$ being the smallest non-zero element in $Q^t$. \item Merge the elements corresponding to any one pair in $M$, say $\beta^t_{i_r}$ and $\beta^t_{j_r}$ corresponding to the pair $(i_r,j_r)$, into a single cluster $G = \beta^t_{i_r} \cup \beta^t_{j_r}$. Construct $B^{t+1}$ by removing $\beta^t_{i_r}$ and $\beta^t_{j_r}$ from $B^t$ and inserting $G$.
\item Define $Q^{t+1}$ on $B^{t+1} \times B^{t+1}$ as $q^{t+1}(i,j) = q^t(i,j)$ $\forall$ $i, j \text{ such that } \beta^t_i, \beta^t_j \neq G$ and $q^{t+1}(i,i_G) = q^{t+1}(i_G,i) = L(G,\beta^t_i)$, where $i_G$ denotes the location of $G$ in $B^{t+1}$ and \begin{equation*} L(G,\beta) = \left\{ \begin{array}{cc} \underset{\mathbf{x}_i \in G,\mathbf{x}_j \in \beta}{\min} \delta(\mathbf{x}_i,\mathbf{x}_j) &\mbox{for SL-FWPD},\\ \underset{\mathbf{x}_i \in G,\mathbf{x}_j \in \beta}{\max} \delta(\mathbf{x}_i,\mathbf{x}_j) &\mbox{for CL-FWPD},\\ \frac{1}{|G| \times |\beta|} \underset{\mathbf{x}_i \in G}{\sum} \underset{\mathbf{x}_j \in \beta}{\sum} \delta(\mathbf{x}_i,\mathbf{x}_j) &\mbox{for AL-FWPD}.\\ \end{array}\right. \end{equation*} Set $t=t+1$. \item Repeat Steps 2-4 until $B^t$ contains a single element. \end{enumerate} Since FWPD (rather than a metric such as the Euclidean distance) serves as the underlying dissimilarity measure, the HAC-FWPD algorithm can be directly applied to obtain SL, CL, or AL based hierarchical clusterings of datasets with missing feature values. \subsection{Time Complexity of the HAC-FWPD Algorithm}\label{sec:TimeCompHierFwpd} Irrespective of whether SL-FWPD, CL-FWPD or AL-FWPD is used as the proximity measure, the HAC-FWPD algorithm consists of the following three basic steps: \begin{enumerate} \item \emph{Distance Calculation}: As each distance calculation involves at most $m$ features, the calculation of all pairwise observed distances among $n$ data instances is at most $\mathcal{O}(n^{2}m)$. \item \emph{Penalty Calculation}: The penalty calculation between a pair of data instances involves at most $m$ summations. Hence, penalty calculation over all possible pairings is at most $\mathcal{O}(n^{2}m)$. \item \emph{Cluster Merging}: The merging of two clusters takes place in each of the $n-1$ steps of the algorithm, and each merge has a time complexity of at most $\mathcal{O}(n^2)$, dominated by the search for the smallest entry of $Q^t$.
\end{enumerate} Therefore, assuming $m \ll n$, the total computational complexity of the HAC-FWPD algorithm is $\mathcal{O}(n^{3})$, which is the same as that of the standard HAC algorithm based on SL, CL or AL. \section{Experimental Results}\label{sec:ExpRes} In this section, we report the results of several experiments carried out to validate the merit of the proposed k-means-FWPD and HAC-FWPD clustering algorithms. In the following subsections, we describe the experimental setup used to validate the proposed techniques; the results of the experiments for the k-means-FWPD algorithm and the HAC-FWPD algorithm are then presented in turn. \subsection{Experiment Setup}\label{sec:ExpPhil} The Adjusted Rand Index (ARI) \cite{hubert1985comparing} is a popular validity index used to judge the merit of clustering algorithms. When the true class labels are known, ARI provides a measure of the similarity between the cluster partition obtained by a clustering technique and the true class labels. Therefore, a high value of ARI is thought to indicate better clusterings. But the class labels may not always be in keeping with the natural cluster structure of the dataset. In such cases, good clusterings are likely to achieve lower values of this index compared to possibly erroneous partitions (which are more akin to the class labels). However, the purpose of our experiments is to find out how close the clusterings obtained by the proposed methods (and the contending techniques) are to the clusterings obtained by the standard algorithms (the k-means algorithm and the HAC algorithm); the proposed methods (and their contenders) are run on the datasets with missingness, while the standard methods are run on the corresponding fully observed datasets. Hence, the clusterings obtained by the standard algorithms are used as the ground-truths with respect to which the ARI values are calculated for the proposed methods (and their contenders).
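For reference, ARI can be computed directly from the contingency table of the two partitions via the Hubert-Arabie formula; a minimal sketch follows (libraries such as scikit-learn provide an equivalent `adjusted_rand_score`):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index of two partitions of the same n points,
    computed from their contingency table (Hubert & Arabie, 1985)."""
    n = len(labels_a)
    cont = Counter(zip(labels_a, labels_b))          # contingency counts
    sum_ij = sum(comb(c, 2) for c in cont.values())  # co-clustered pairs
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)            # chance agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

The index equals 1 for identical partitions (up to a relabeling of the clusters) and is close to 0 for partitions that agree no more than chance would predict.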
ZI, MI, kNNI (with $k \in \{3,5,10,20\}$) and SVDI (using the most significant 10\% of the eigenvectors) are used for comparison with the proposed methods. The variant of MI used in these experiments differs from the traditional technique in that we impute with the average of the per-class averages, instead of the overall average. This is done to minimize the effects of severe class imbalances that may exist in the datasets. We also conduct Wilcoxon's signed rank test \cite{wilcoxon1945individual} to evaluate the statistical significance of the observed results. The performance of k-means depends on the initial cluster assignment. Therefore, to ensure fairness, we use the same set of random initial cluster assignments for both the standard k-means algorithm on the fully observed dataset and the proposed k-means-FWPD method (and its contenders). The maximum number of iterations of the k-means variants is set as $MaxIter = 500$. Results are recorded in terms of average ARI values over 50 different runs on each dataset. The number of clusters is assumed to be the same as the number of classes. For the HAC experiments, results are reported as average ARI values obtained over 20 independent runs. AL is chosen over SL and CL as it is observed to generally achieve higher ARI values. The number of clusters is again assumed to be the same as the number of classes. \subsubsection{Datasets} We take 20 real-world datasets from the University of California at Irvine (UCI) repository \cite{dheeru2017uci} and the Jin Genomics Dataset (JGD) repository \cite{jin2017data}. Each feature of each dataset is normalized so as to have zero mean and unit standard deviation. The details of these 20 datasets are listed in Table \ref{dataDetails}.
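The per-feature normalization mentioned above is the usual z-scoring of each column; a minimal sketch (the function name and the use of the population standard deviation are our assumptions):

```python
import math

def zscore_columns(X):
    """Normalize each feature (column) of the row-major data matrix X
    to zero mean and unit (population) standard deviation, in place."""
    n, m = len(X), len(X[0])
    for l in range(m):
        col = [row[l] for row in X]
        mu = sum(col) / n
        sd = math.sqrt(sum((v - mu) ** 2 for v in col) / n)
        for row in X:
            # A constant feature (sd == 0) is mapped to 0.
            row[l] = (row[l] - mu) / sd if sd > 0 else 0.0
    return X
```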
\subsubsection{Simulating Missingness Mechanisms}\label{mechanism} Experiments are conducted by removing features from each of the datasets according to the four missingness mechanisms, namely MCAR, MAR, MNAR-I and MNAR-II \cite{datta2016fwpd}. The detailed algorithm for simulating the four missingness mechanisms is as follows: \begin{enumerate} \item Specify the number of entries $MissCount$ to be removed from the dataset. Select the missingness mechanism as one out of MCAR, MAR, MNAR-I or MNAR-II. \item If the mechanism is MAR or MNAR-II, select a random subset $\gamma_{miss} \subset S$ containing half of the features in $S$ (i.e. $|\gamma_{miss}| = \frac{m}{2}$ if $|S|$ is even or $\frac{m+1}{2}$ if $|S|$ is odd). If the mechanism is MNAR-I, set $\gamma_{miss} = S$. Identify $\gamma_{obs} = S \backslash \gamma_{miss}$. If the mechanism is MCAR, go directly to Step 5. \item If the mechanism is MAR or MNAR-II, for each feature $l \in \gamma_{miss}$, randomly select a feature $l_c \in \gamma_{obs}$ on which the missingness of feature $l$ may depend. \item For each feature $l \in \gamma_{miss}$ randomly choose a type of missingness $MissType_l$ as one out of CENTRAL, INTERMEDIATE or EXTREMAL. \item Randomly select a non-missing entry $x_{i,l}$ from the data matrix. If the mechanism is MCAR, mark the entry as missing, decrement $MissCount = MissCount - 1$, and go to Step 11. \item If the mechanism is MAR, set $\lambda = x_{i,l_c}$, $\mu = \mu_{l_c}$ and $\sigma = \sigma_{l_c}$, where $\mu_{l_c}$ and $\sigma_{l_c}$ are the mean and standard deviation of the $l_c$-th feature over the dataset. If the mechanism is MNAR-I, set $\lambda = x_{i,l}$, $\mu = \mu_{l}$ and $\sigma = \sigma_{l}$. If the mechanism is MNAR-II, randomly set either $\lambda = x_{i,l}$, $\mu = \mu_{l}$ and $\sigma = \sigma_{l}$ or $\lambda = x_{i,l_c}$, $\mu = \mu_{l_c}$ and $\sigma = \sigma_{l_c}$. \item Calculate $z = \frac{|\lambda - \mu|}{\sigma}$. \item If $MissType_l = \text{CENTRAL}$, set $\mu_z = 0$.
If $MissType_l = \text{INTERMEDIATE}$, set $\mu_z = 1$. If $MissType_l = \text{EXTREMAL}$, set $\mu_z = 2$. Set $\sigma_z = 0.35$. \item Calculate $pval = \frac{1}{\sqrt{2 \pi \sigma_z}} \exp(- \frac{(z - \mu_z)^2}{2 \sigma_z^2})$. \item Randomly generate a value $qval$ in the interval $[0,1]$. If $pval \geq qval$, then mark the entry $x_{i,l}$ as missing and decrement $MissCount = MissCount - 1$. \item If $MissCount > 0$, then go to Step 5. \end{enumerate} In the above algorithm, the dependence of the missingness on feature values for MAR, MNAR-I and MNAR-II is achieved by removing entries based on the values of control features for their corresponding data points. The control feature may be the feature itself (for MNAR-I and MNAR-II) or another feature for the same data point (as in the case of MAR and MNAR-II). The dependence can be of three types, namely CENTRAL, INTERMEDIATE or EXTREMAL. CENTRAL dependence removes a feature when its corresponding control feature has a value close to the mean of the control feature over the dataset. EXTREMAL dependence removes a feature when the value of its control feature lies near the extremes. INTERMEDIATE dependence removes a feature when the value of the control feature lies between the mean and the extremes. For our experiments, we set $MissCount = \frac{nm}{4}$ to remove 25\% of the features from each dataset. Thus, an average of $\frac{m}{4}$ of the features are missing from each data instance.
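Steps 7-10 of the simulation reduce to an accept/reject rule on the z-score of the control value; a sketch follows (the helper names are ours, and `removal_probability` reproduces the $pval$ expression exactly as written in Step 9):

```python
import math
import random

MU_Z = {"CENTRAL": 0.0, "INTERMEDIATE": 1.0, "EXTREMAL": 2.0}  # Step 8
SIGMA_Z = 0.35

def removal_probability(lam, mu, sigma, miss_type):
    """Steps 7-9: score the control value lam against a Gaussian-shaped
    profile centred at 0 (CENTRAL), 1 (INTERMEDIATE) or 2 (EXTREMAL)."""
    z = abs(lam - mu) / sigma
    mu_z = MU_Z[miss_type]
    return math.exp(-((z - mu_z) ** 2) / (2 * SIGMA_Z ** 2)) \
        / math.sqrt(2 * math.pi * SIGMA_Z)

def mark_missing(lam, mu, sigma, miss_type, rng=random):
    """Step 10: the entry is removed when pval >= a uniform draw in [0,1]."""
    return removal_probability(lam, mu, sigma, miss_type) >= rng.random()
```

For example, under EXTREMAL dependence an entry whose control value lies two standard deviations from the mean is removed with the highest probability, whereas under CENTRAL dependence that same entry is almost never removed.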
\begin{table}[h] \begin{center} \caption{Details of the 20 real-world datasets}\label{dataDetails}
\begin{tabular}{c c c c c} \hline
Dataset & \#Instances & \#Features & \#Classes & Repository\\ \hline
Chronic Kidney & 800 & 24 & 2 & UCI\\
Colon & 62 & 2000 & 2 & JGD\\
GSAD$^{*}$ 1$^{\dagger}$ & 445 & 128 & 6 & UCI\\
Glass & 214 & 9 & 6 & UCI\\
Iris & 150 & 4 & 3 & UCI\\
Isolet 5$^{\dagger}$ & 1559 & 617 & 26 & UCI\\
Landsat & 6435 & 36 & 6 & UCI\\
Leaf & 340 & 15 & 36 & UCI\\
Libras & 360 & 90 & 15 & UCI\\
Lung & 181 & 12533 & 2 & JGD\\
Lung Cancer & 27 & 56 & 3 & UCI\\
Lymphoma & 62 & 4026 & 3 & JGD\\
Pendigits & 10992 & 16 & 10 & UCI\\
Prostate & 102 & 6033 & 2 & JGD\\
Seeds & 210 & 7 & 3 & UCI\\
Sensorless$^{\dagger}$ & 6000 & 48 & 11 & UCI\\
Sonar & 208 & 60 & 2 & UCI\\
Theorem Proving$^{\dagger}$ & 3059 & 51 & 6 & UCI\\
Vehicle & 94 & 18 & 4 & UCI\\
Vowel Context & 990 & 14 & 11 & UCI\\ \hline
\multicolumn{5}{l}{\footnotesize{$^{*}$Gas Sensor Array Drift}}\\
\multicolumn{5}{l}{\footnotesize{$^{\dagger}$Only a meaningful subset of the dataset is used.}}\\
\end{tabular} \end{center} \end{table} \subsubsection{Selecting the parameter $\alpha$}\label{sec:alpha} In order to conduct experiments using the FWPD measure, we need to select a value of the parameter $\alpha$. Proper selection of $\alpha$ may help to boost the performance of the proposed k-means-FWPD and HAC-FWPD algorithms. Therefore, in this section, we undertake a study on the effects of $\alpha$ on the performance of FWPD. Experiments are conducted using $\alpha \in \{0.1, 0.25, 0.5, 0.75, 0.9\}$ on the datasets listed in Table \ref{dataDetails} using the experimental setup detailed above. The summary of the results of this study is shown in Table \ref{tabAlphas} in terms of average ARI values. A choice of $\alpha = 0.25$ performs best overall, as well as for each individual missingness mechanism except MAR (where $\alpha=0.1$ proves to be a better choice).
This seems to indicate some correlation between the extent of missingness and the optimal value of $\alpha$ (25\% of the features are missing in our experiments, as mentioned in Section \ref{mechanism}). However, the correlation is rather weak for k-means-FWPD, for which all values of $\alpha$ seem to have competitive performance. On the other hand, the correlation is seen to be much stronger for HAC-FWPD. This indicates that the optimal $\alpha$ varies considerably with the pattern of missingness for k-means-FWPD but not as much for HAC-FWPD. Another interesting observation is that the performance of HAC-FWPD deteriorates considerably for high values of $\alpha$, implying that the distance term in FWPD must be given greater importance for HAC methods. As $\alpha=0.25$ has the best performance overall, we report the detailed experimental results in the subsequent sections for this choice of $\alpha$. \begin{table}[h] \begin{center} \caption{Summary of results for different choices of $\alpha$ in terms of average ARI values.}\label{tabAlphas} \scriptsize{
\begin{tabular}{c | c | c c c c c} \hline
Clustering & Type of & \multicolumn{5}{c}{$\alpha$}\\
Algorithm & Missingness & 0.1 & 0.25 & 0.5 & 0.75 & 0.9\\ \hline
k-means- & MCAR & 0.682 & 0.712 & 0.664 & 0.691 & 0.683 \\
-FWPD & MAR & 0.738 & 0.730 & 0.723 & 0.729 & 0.711 \\
& MNAR-I & 0.649 & 0.676 & 0.675 & 0.613 & 0.666 \\
& MNAR-II & 0.711 & 0.718 & 0.689 & 0.665 & 0.678 \\ \cline{2-7}
& Overall & 0.695 & \textbf{0.709} & 0.688 & 0.675 & 0.685 \\ \hline
HAC-FWPD & MCAR & 0.665 & 0.709 & 0.389 & 0.073 & 0.017 \\
& MAR & 0.740 & 0.724 & 0.441 & 0.210 & 0.094 \\
& MNAR-I & 0.720 & 0.721 & 0.458 & 0.158 & 0.036 \\
& MNAR-II & 0.708 & 0.716 & 0.443 & 0.140 & 0.025 \\ \cline{2-7}
& Overall & 0.709 & \textbf{0.718} & 0.433 & 0.145 & 0.043 \\ \hline
\multicolumn{7}{l}{Best values shown in \textbf{boldface}.}\\
\end{tabular} } \end{center} \end{table} \subsection{Experiments with the k-means-FWPD Algorithm}\label{sec:kmeansExps} We
compare the proposed k-means-FWPD algorithm to the standard k-means algorithm run on the datasets obtained after performing ZI, MI, SVDI and kNNI. All runs of k-means-FWPD were found to converge within the stipulated budget of $MaxIter = 500$. The results of the experiments are listed in terms of the means and standard deviations of the obtained ARI values in Tables \ref{mcarKmResults}-\ref{mnar2KmResults}. Only the best results for kNNI are reported, along with the corresponding best $k$ values. The statistical significance of the listed results is summarized at the bottom of each table in terms of average ranks as well as signed rank test hypotheses and p-values ($H_0$ signifying that the ARI values achieved by the proposed method and the contending method originate from identical distributions having the same medians; $H_1$ implying that the ARI values achieved by the proposed method and the contender originate from different distributions). \begin{table*}[!t] \begin{center} \caption{Means and standard deviations of ARI values for the k-means-FWPD algorithm against MCAR.}\label{mcarKmResults} \scriptsize{ \begin{tabular}{c c c c c c c} \hline Dataset & $\begin{array}{ll} \text{k-means-}\\ \text{-FWPD}\\ \end{array}$ & ZI & MI & SVDI & kNNI & $\begin{array}{cc} \text{Best } k\\ \text{(for kNNI)}\\ \end{array}$\\ \hline Chronic Kidney & 0.807$\pm$0.002 & 0.813$\pm$0.005 & 0.229$\pm$0.013 & 0.763$\pm$0.005 & \textbf{0.815}$\pm$0.003 & 5 \\ Colon & \textbf{0.681}$\pm$0.341 & 0.656$\pm$0.323 & 0.659$\pm$0.322 & 0.662$\pm$0.314 & 0.656$\pm$0.323 & 3 \\ GSAD 1 & \textbf{0.798}$\pm$0.236 & 0.625$\pm$0.199 & 0.722$\pm$0.172 & 0.552$\pm$0.203 & 0.711$\pm$0.195 & 3 \\ Glass & 0.488$\pm$0.097 & 0.466$\pm$0.119 & 0.131$\pm$0.066 & 0.417$\pm$0.114 & \textbf{0.505}$\pm$0.134 & 5 \\ Iris & \textbf{0.799}$\pm$0.119 & 0.672$\pm$0.113 & 0.116$\pm$0.083 & 0.732$\pm$0.159 & 0.758$\pm$0.157 & 3 \\ Isolet 5 & \textbf{0.679}$\pm$0.117 & 0.623$\pm$0.093 & 0.625$\pm$0.097 & 0.626$\pm$0.105 &
0.614$\pm$0.072 & 3 \\ Landsat & \textbf{0.937}$\pm$0.001 & 0.807$\pm$0.001 & 0.798$\pm$0.001 & 0.838$\pm$0.104 & 0.937$\pm$0.010 & 5 \\ Leaf & 0.455$\pm$0.014 & 0.328$\pm$0.010 & 0.339$\pm$0.019 & 0.354$\pm$0.037 & \textbf{0.465}$\pm$0.029 & 5 \\ Libras & \textbf{0.656}$\pm$0.070 & 0.642$\pm$0.069 & 0.103$\pm$0.019 & 0.619$\pm$0.067 & 0.625$\pm$0.077 & 20\\ Lung & \textbf{0.731}$\pm$0.341 & 0.718$\pm$0.261 & 0.659$\pm$0.341 & 0.694$\pm$0.318 & 0.718$\pm$0.261 & 3 \\ Lung Cancer & \textbf{0.542}$\pm$0.249 & 0.541$\pm$0.202 & 0.537$\pm$0.214 & 0.529$\pm$0.192 & 0.525$\pm$0.189 & 5 \\ Lymphoma & \textbf{0.755}$\pm$0.167 & 0.743$\pm$0.175 & 0.700$\pm$0.165 & 0.733$\pm$0.175 & 0.743$\pm$0.175 & 3 \\ Pendigits & 0.729$\pm$0.089 & 0.659$\pm$0.082 & 0.083$\pm$0.013 & 0.604$\pm$0.063 & \textbf{0.832}$\pm$0.105 & 3 \\ Prostate & \textbf{0.961}$\pm$0.025 & 0.944$\pm$0.043 & 0.944$\pm$0.043 & 0.946$\pm$0.041 & 0.944$\pm$0.043 & 3 \\ Seeds & \textbf{0.866}$\pm$0.030 & 0.735$\pm$0.021 & 0.242$\pm$0.039 & 0.745$\pm$0.041 & 0.865$\pm$0.025 & 5 \\ Sensorless & \textbf{0.765}$\pm$0.031 & 0.684$\pm$0.028 & 0.687$\pm$0.021 & 0.719$\pm$0.060 & 0.726$\pm$0.051 & 20 \\ Sonar & \textbf{0.697}$\pm$0.195 & 0.681$\pm$0.187 & 0.672$\pm$0.188 & 0.434$\pm$0.234 & 0.656$\pm$0.162 & 5 \\ Theorem Proving & \textbf{0.714}$\pm$0.197 & 0.671$\pm$0.229 & 0.661$\pm$0.188 & 0.565$\pm$0.139 & 0.672$\pm$0.197 & 20 \\ Vehicle & 0.715$\pm$0.143 & 0.674$\pm$0.139 & 0.114$\pm$0.060 & 0.646$\pm$0.105 & \textbf{0.723}$\pm$0.134 & 10\\ Vowel Context & 0.458$\pm$0.031 & 0.366$\pm$0.028 & 0.360$\pm$0.029 & 0.352$\pm$0.022 & \textbf{0.461}$\pm$0.060 & 3 \\ \hline Average Ranks & \textbf{1.38} & 3.33 & 4.20 & 3.65 & 2.45 & \\ \hline \multicolumn{2}{c}{Signed Rank Hypotheses (p-values)} & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.03)$ & \\ \hline \multicolumn{7}{l}{$H_1$ significantly different from k-means-FWPD}\\ \multicolumn{7}{l}{$H_0$ statistically similar to k-means-FWPD}\\ \multicolumn{7}{l}{Best 
values shown in \textbf{boldface}.}\\ \end{tabular} } \end{center} \end{table*} \begin{table*}[!t] \begin{center} \caption{Means and standard deviations of ARI values for the k-means-FWPD algorithm against MAR.}\label{marKmResults} \scriptsize{ \begin{tabular}{c c c c c c c} \hline Dataset & $\begin{array}{ll} \text{k-means-}\\ \text{-FWPD}\\ \end{array}$ & ZI & MI & SVDI & kNNI & $\begin{array}{cc} \text{Best } k\\ \text{(for kNNI)}\\ \end{array}$\\ \hline Chronic Kidney & 0.793$\pm$0.006 & 0.792$\pm$0.004 & 0.803$\pm$0.003 & 0.787$\pm$0.008 & \textbf{0.838}$\pm$0.011 & 20\\ Colon & 0.812$\pm$0.177 & 0.826$\pm$0.149 & \textbf{0.827}$\pm$0.157 & 0.796$\pm$0.202 & 0.826$\pm$0.149 & 3 \\ GSAD 1 & \textbf{0.801}$\pm$0.163 & 0.737$\pm$0.187 & 0.673$\pm$0.179 & 0.719$\pm$0.165 & 0.760$\pm$0.144 & 3 \\ Glass & \textbf{0.617}$\pm$0.125 & 0.455$\pm$0.168 & 0.565$\pm$0.128 & 0.411$\pm$0.171 & 0.479$\pm$0.152 & 5 \\ Iris & 0.776$\pm$0.185 & 0.776$\pm$0.185 & \textbf{0.851}$\pm$0.144 & 0.776$\pm$0.163 & 0.764$\pm$0.176 & 5 \\ Isolet 5 & \textbf{0.729}$\pm$0.072 & 0.713$\pm$0.051 & 0.691$\pm$0.046 & 0.704$\pm$0.027 & 0.713$\pm$0.051 & 3 \\ Landsat & \textbf{0.940}$\pm$0.002 & 0.828$\pm$0.127 & 0.850$\pm$0.123 & 0.773$\pm$0.150 & 0.899$\pm$0.058 & 3 \\ Leaf & 0.510$\pm$0.022 & 0.392$\pm$0.042 & 0.440$\pm$0.046 & 0.501$\pm$0.021 & \textbf{0.532}$\pm$0.046 & 3 \\ Libras & 0.731$\pm$0.076 & 0.700$\pm$0.077 & \textbf{0.778}$\pm$0.065 & 0.697$\pm$0.082 & 0.675$\pm$0.071 & 3 \\ Lung & \textbf{0.754}$\pm$0.131& 0.711$\pm$0.230 & 0.717$\pm$0.232 & 0.625$\pm$0.312 & 0.711$\pm$0.230 & 3 \\ Lung Cancer & \textbf{0.606}$\pm$0.223 & 0.526$\pm$0.224 & 0.476$\pm$0.219 & 0.509$\pm$0.202 & 0.493$\pm$0.231 & 3 \\ Lymphoma & 0.790$\pm$0.164 & 0.883$\pm$0.109 & \textbf{0.885}$\pm$0.102 & 0.875$\pm$0.120 & 0.883$\pm$0.109 & 3 \\ Pendigits & 0.717$\pm$0.068 & 0.494$\pm$0.072 & 0.852$\pm$0.062 & 0.705$\pm$0.067 & \textbf{0.903}$\pm$0.065 & 5 \\ Prostate & \textbf{0.990}$\pm$0.021 & 0.984$\pm$0.036 & 
0.986$\pm$0.036 & 0.918$\pm$0.017 & 0.984$\pm$0.036 & 3 \\ Seeds & 0.785$\pm$0.026 & 0.755$\pm$0.025 & 0.774$\pm$0.027 & 0.752$\pm$0.025 & \textbf{0.834}$\pm$0.033 & 3 \\ Sensorless & \textbf{0.759}$\pm$0.038 & 0.663$\pm$0.089 & 0.629$\pm$0.145 & 0.662$\pm$0.132 & 0.685$\pm$0.087 & 3 \\ Sonar & \textbf{0.620}$\pm$0.289 & 0.599$\pm$0.326 & 0.598$\pm$0.325 & 0.524$\pm$0.315 & 0.574$\pm$0.359 & 10 \\ Theorem Proving & \textbf{0.672}$\pm$0.179 & 0.636$\pm$0.165 & 0.617$\pm$0.182 & 0.618$\pm$0.189 & 0.649$\pm$0.145 & 10 \\ Vehicle & \textbf{0.699}$\pm$0.142 & 0.545$\pm$0.155 & 0.644$\pm$0.147 & 0.566$\pm$0.154 & 0.537$\pm$0.149 & 3 \\ Vowel Context & 0.497$\pm$0.044 & 0.461$\pm$0.072 & 0.436$\pm$0.064 & 0.408$\pm$0.054 & \textbf{0.577}$\pm$0.043 & 3 \\ \hline Average Ranks & \textbf{1.85} & 3.33 & 2.90 & 4.25 & 2.67 & \\ \hline \multicolumn{2}{c}{Signed Rank Hypotheses (p-values)} & $H_1 (0.00)$ & $H_0 (0.13)$ & $H_1 (0.00)$ & $H_0 (0.37)$ & \\ \hline \multicolumn{7}{l}{$H_1$ significantly different from k-means-FWPD}\\ \multicolumn{7}{l}{$H_0$ statistically similar to k-means-FWPD}\\ \multicolumn{7}{l}{Best values shown in \textbf{boldface}.}\\ \end{tabular} } \end{center} \end{table*} \begin{table*}[!t] \begin{center} \caption{Means and standard deviations of ARI values for the k-means-FWPD algorithm against MNAR-I.}\label{mnar1KmResults} \scriptsize{ \begin{tabular}{c c c c c c c} \hline Dataset & $\begin{array}{ll} \text{k-means-}\\ \text{-FWPD}\\ \end{array}$ & ZI & MI & SVDI & kNNI & $\begin{array}{cc} \text{Best } k\\ \text{(for kNNI)}\\ \end{array}$\\ \hline Chronic Kidney & \textbf{0.729}$\pm$0.011 & 0.714$\pm$0.010 & 0.399$\pm$0.051 & 0.599$\pm$0.005 & 0.616$\pm$0.022 & 3 \\ Colon & 0.789$\pm$0.214 & 0.781$\pm$0.202 & 0.770$\pm$0.205 & \textbf{0.801}$\pm$0.147 & 0.781$\pm$0.202 & 3 \\ GSAD 1 & 0.791$\pm$0.112 & \textbf{0.799}$\pm$0.097 & 0.790$\pm$0.110 & 0.689$\pm$0.175 & \textbf{0.799}$\pm$0.097 & 3 \\ Glass & \textbf{0.439}$\pm$0.101 & 0.391$\pm$0.117 & 
0.152$\pm$0.048 & 0.388$\pm$0.097 & 0.438$\pm$0.105 & 10\\ Iris & 0.662$\pm$0.073 & 0.709$\pm$0.144 & 0.137$\pm$0.077 & \textbf{0.739}$\pm$0.132 & 0.658$\pm$0.168 & 5 \\ Isolet 5 & \textbf{0.708}$\pm$0.098 & 0.680$\pm$0.103 & 0.680$\pm$0.082 & 0.663$\pm$0.067 & 0.680$\pm$0.103 & 3 \\ Landsat & \textbf{0.869}$\pm$0.048 & 0.701$\pm$0.149 & 0.712$\pm$0.159 & 0.858$\pm$0.058 & 0.813$\pm$0.001 & 10 \\ Leaf & 0.493$\pm$0.052 & 0.416$\pm$0.029 & 0.403$\pm$0.020 & 0.439$\pm$0.023 & \textbf{0.522}$\pm$0.040 & 3 \\ Libras & \textbf{0.717}$\pm$0.083 & 0.667$\pm$0.076 & 0.378$\pm$0.058 & 0.638$\pm$0.070 & 0.656$\pm$0.067 & 3 \\ Lung & \textbf{0.636}$\pm$0.201 & 0.592$\pm$0.199 & 0.606$\pm$0.210 & 0.578$\pm$0.192 & 0.592$\pm$0.199 & 3 \\ Lung Cancer & \textbf{0.529}$\pm$0.235 & 0.497$\pm$0.184 & 0.459$\pm$0.129 & 0.457$\pm$0.217 & 0.497$\pm$0.184 & 3 \\ Lymphoma & 0.796$\pm$0.118 & \textbf{0.798}$\pm$0.133 & 0.764$\pm$0.130 & 0.764$\pm$0.129 & \textbf{0.798}$\pm$0.133 & 3 \\ Pendigits & 0.666$\pm$0.079 & 0.635$\pm$0.067 & 0.135$\pm$0.025 & 0.619$\pm$0.054 & \textbf{0.756}$\pm$0.093 & 5 \\ Prostate & \textbf{0.975}$\pm$0.032 & 0.958$\pm$0.075 & 0.962$\pm$0.075 & 0.910$\pm$0.061 & 0.958$\pm$0.075 & 3 \\ Seeds & 0.776$\pm$0.019 & 0.705$\pm$0.044 & 0.298$\pm$0.065 & 0.725$\pm$0.042 & \textbf{0.819}$\pm$0.052 & 20\\ Sensorless & \textbf{0.693}$\pm$0.041 & 0.638$\pm$0.036 & 0.636$\pm$0.039 & 0.610$\pm$0.069 & 0.593$\pm$0.051 & 10 \\ Sonar & \textbf{0.600}$\pm$0.297 & 0.537$\pm$0.287 & 0.546$\pm$0.292 & 0.326$\pm$0.282 & 0.537$\pm$0.287 & 3 \\ Theorem Proving & \textbf{0.540}$\pm$0.211 & 0.399$\pm$0.196 & 0.388$\pm$0.178 & 0.513$\pm$0.224 & 0.465$\pm$0.195 & 3 \\ Vehicle & \textbf{0.639}$\pm$0.141 & 0.551$\pm$0.123 & 0.298$\pm$0.078 & 0.613$\pm$0.103 & 0.536$\pm$0.107 & 3 \\ Vowel Context & 0.473$\pm$0.044 & 0.412$\pm$0.049 & 0.435$\pm$0.056 & 0.383$\pm$0.043 & \textbf{0.512}$\pm$0.036 & 3 \\ \hline Average Ranks & \textbf{1.55} & 3.02 & 4.08 & 3.67 & 2.67 & \\ \hline 
\multicolumn{2}{c}{Signed Rank Hypotheses (p-values)} & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.05)$ & \\ \hline \multicolumn{7}{l}{$H_1$ significantly different from k-means-FWPD}\\ \multicolumn{7}{l}{$H_0$ statistically similar to k-means-FWPD}\\ \multicolumn{7}{l}{Best values shown in \textbf{boldface}.}\\ \end{tabular} } \end{center} \end{table*} \begin{table*}[!t] \begin{center} \caption{Means and standard deviations of ARI values for the k-means-FWPD algorithm against MNAR-II.}\label{mnar2KmResults} \scriptsize{ \begin{tabular}{c c c c c c c} \hline Dataset & $\begin{array}{ll} \text{k-means-}\\ \text{-FWPD}\\ \end{array}$ & ZI & MI & SVDI & kNNI & $\begin{array}{cc} \text{Best } k\\ \text{(for kNNI)}\\ \end{array}$\\ \hline Chronic Kidney & 0.751$\pm$0.014 & 0.659$\pm$0.032 & 0.744$\pm$0.015 & 0.636$\pm$0.037 & \textbf{0.770}$\pm$0.017 & 3 \\ Colon & \textbf{0.804}$\pm$0.210 & 0.797$\pm$0.226 & 0.798$\pm$0.220 & 0.781$\pm$0.208 & 0.797$\pm$0.226 & 3 \\ GSAD 1 & \textbf{0.731}$\pm$0.198 & 0.665$\pm$0.242 & 0.712$\pm$0.212 & 0.689$\pm$0.215 & 0.665$\pm$0.242 & 3 \\ Glass & \textbf{0.530}$\pm$0.089 & 0.423$\pm$0.101 & 0.413$\pm$0.099 & 0.395$\pm$0.109 & 0.451$\pm$0.101 & 5 \\ Iris & \textbf{0.773}$\pm$0.165 & 0.718$\pm$0.172 & 0.635$\pm$0.189 & 0.702$\pm$0.174 & 0.756$\pm$0.175 & 10\\ Isolet 5 & \textbf{0.789}$\pm$0.061 & 0.765$\pm$0.076 & 0.747$\pm$0.056 & 0.728$\pm$0.063 & 0.765$\pm$0.076 & 3 \\ Landsat & \textbf{0.892}$\pm$0.083 & 0.871$\pm$0.084 & 0.868$\pm$0.083 & 0.794$\pm$0.130 & 0.836$\pm$0.083 & 3 \\ Leaf & \textbf{0.476}$\pm$0.021 & 0.385$\pm$0.036 & 0.381$\pm$0.031 & 0.389$\pm$0.033 & 0.454$\pm$0.028 & 3 \\ Libras & \textbf{0.698}$\pm$0.079 & 0.675$\pm$0.078 & 0.681$\pm$0.077 & 0.648$\pm$0.080 & 0.669$\pm$0.081 & 3 \\ Lung & 0.686$\pm$0.220 & 0.649$\pm$0.226 & \textbf{0.707}$\pm$0.089 & 0.674$\pm$0.216 & 0.649$\pm$0.226 & 3 \\ Lung Cancer & \textbf{0.641}$\pm$0.200 & 0.640$\pm$0.283 & 0.567$\pm$0.243 & 0.568$\pm$0.190 & 0.640$\pm$0.283 & 
3 \\ Lymphoma & \textbf{0.856}$\pm$0.115 & 0.824$\pm$0.136 & 0.842$\pm$0.123 & 0.818$\pm$0.153 & 0.824$\pm$0.136 & 3 \\ Pendigits & 0.608$\pm$0.082 & 0.589$\pm$0.095 & 0.561$\pm$0.097 & 0.557$\pm$0.096 & \textbf{0.825}$\pm$0.083 & 5 \\ Prostate & \textbf{0.984}$\pm$0.032 & \textbf{0.984}$\pm$0.032 & \textbf{0.984}$\pm$0.032 & 0.969$\pm$0.053 & \textbf{0.984}$\pm$0.032 & 3 \\ Seeds & \textbf{0.884}$\pm$0.028 & 0.738$\pm$0.039 & 0.772$\pm$0.038 & 0.771$\pm$0.038 & 0.831$\pm$0.029 & 10\\ Sensorless & \textbf{0.747}$\pm$0.041 & 0.667$\pm$0.130 & 0.727$\pm$0.030 & 0.704$\pm$0.056 & 0.698$\pm$0.068 & 3 \\ Sonar & \textbf{0.704}$\pm$0.227 & 0.662$\pm$0.235 & 0.658$\pm$0.221 & 0.314$\pm$0.152 & 0.662$\pm$0.235 & 3 \\ Theorem Proving & \textbf{0.640}$\pm$0.141 & 0.600$\pm$0.105 & 0.605$\pm$0.077 & 0.610$\pm$0.175 & 0.593$\pm$0.106 & 3 \\ Vehicle & 0.677$\pm$0.145 & \textbf{0.734}$\pm$0.132 & 0.571$\pm$0.167 & 0.635$\pm$0.153 & 0.712$\pm$0.141 & 3 \\ Vowel Context & 0.478$\pm$0.048 & 0.404$\pm$0.041 & 0.396$\pm$0.064 & 0.345$\pm$0.032 & \textbf{0.511}$\pm$0.056 & 3 \\ \hline Average Ranks & \textbf{1.38} & 3.30 & 3.27 & 4.25 & 2.80 & \\ \hline \multicolumn{2}{c}{Signed Rank Hypotheses (p-values)} & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.03)$ & \\ \hline \multicolumn{7}{l}{$H_1$ significantly different from k-means-FWPD}\\ \multicolumn{7}{l}{$H_0$ statistically similar to k-means-FWPD}\\ \multicolumn{7}{l}{Best values shown in \textbf{boldface}.}\\ \end{tabular} } \end{center} \end{table*} We know from Theorem \ref{thmFinFin} that the maximum number of feasibility adjustments that can occur during a single run of k-means-FWPD is $n(k-1)$. This raises the question of whether one should choose $MaxIter \geq n(k-1)$. However, k-means-FWPD was observed to converge within the stipulated $MaxIter = 500$ iterations even for datasets like Isolet 5, Pendigits and Sensorless, which have relatively large values of $n(k-1)$.
This indicates that the number of feasibility adjustments that occur during a run is much lower in practice. Therefore, we conclude that it is not necessary to set $MaxIter \geq n(k-1)$ for practical problems. It is seen from Tables \ref{mcarKmResults}-\ref{mnar2KmResults} that the k-means-FWPD algorithm performs best, as indicated by its consistently lowest average ranks for all types of missingness. The proposed method performs best on the majority of datasets for all kinds of missingness. kNNI is seen to be the second-best performer overall (being statistically comparable to k-means-FWPD in the case of MAR). It is also interesting to observe that the performance of MI improves in the cases of MAR and MNAR-II, indicating that MI tends to be useful for partitional clustering when the missingness depends on the observed features. Moreover, SVDI is generally observed to perform poorly irrespective of the type of missingness, implying that the linear model assumed by SVDI is unable to preserve the convexity of the clusters (which is essential for good performance in the case of partitional clustering). \subsection{Experiments with the HAC-FWPD Algorithm}\label{sec:hacExps} The experimental setup described in Section \ref{sec:ExpPhil} is also used to compare the HAC-FWPD algorithm (with AL-FWPD as the proximity measure) to the standard HAC algorithm (with AL as the proximity measure) in conjunction with ZI, MI, SVDI and kNNI. Results are reported as means and standard deviations of the obtained ARI values over 20 independent runs. The results of the experiments are listed in Tables \ref{mcarHacResults}-\ref{mnar2HacResults}.
The statistical significance of the listed results is also summarized at the bottom of the respective tables in terms of average ranks as well as signed rank test hypotheses and p-values ($H_0$ signifying that the ARI values achieved by the proposed method and the contending method originate from identical distributions having the same medians; $H_1$ implying that the ARI values achieved by the proposed method and the contender originate from different distributions). \begin{table*}[!t] \begin{center} \caption{Means and standard deviations of ARI values for the HAC-FWPD algorithm against MCAR.}\label{mcarHacResults} \scriptsize{ \begin{tabular}{c c c c c c c} \hline Dataset & $\begin{array}{ll} \text{HAC-}\\ \text{-FWPD}\\ \end{array}$ & ZI & MI & SVDI & kNNI & $\begin{array}{cc} \text{Best } k\\ \text{(for kNNI)}\\ \end{array}$\\ \hline Chronic Kidney & \textbf{1.000}$\pm$0.000 & 0.967$\pm$0.031 & 0.933$\pm$0.033 & 0.933$\pm$0.033 & 0.000$\pm$0.000 & 3 \\ Colon & \textbf{0.690}$\pm$0.240 & 0.469$\pm$0.145 & 0.286$\pm$0.174 & 0.380$\pm$0.304 & 0.000$\pm$0.000 & 3 \\ GSAD 1 & \textbf{0.454}$\pm$0.309 & 0.367$\pm$0.088 & 0.271$\pm$0.112 & 0.311$\pm$0.087 & 0.022$\pm$0.018 & 3 \\ Glass & \textbf{0.737}$\pm$0.081 & 0.671$\pm$0.089 & 0.680$\pm$0.089 & 0.638$\pm$0.090 & 0.033$\pm$0.032 & 3 \\ Iris & 0.885$\pm$0.072 & \textbf{0.922}$\pm$0.047 & 0.831$\pm$0.053 & 0.917$\pm$0.049 & 0.559$\pm$0.129 & 20\\ Isolet 5 & \textbf{0.855}$\pm$0.111 & 0.044$\pm$0.003 & 0.046$\pm$0.003 & 0.081$\pm$0.037 & 0.064$\pm$0.003 & 3 \\ Landsat & \textbf{0.712}$\pm$0.098 & 0.228$\pm$0.034 & 0.254$\pm$0.012 & 0.217$\pm$0.033 & 0.300$\pm$0.018 & 10 \\ Leaf & \textbf{0.497}$\pm$0.046 & 0.200$\pm$0.016 & 0.221$\pm$0.017 & 0.290$\pm$0.077 & 0.140$\pm$0.011 & 3 \\ Libras & \textbf{0.845}$\pm$0.054 & 0.276$\pm$0.031 & 0.298$\pm$0.033 & 0.381$\pm$0.030 & 0.156$\pm$0.050 & 10\\ Lung & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 &
0.001$\pm$0.002 & 3 \\ Lung Cancer & \textbf{0.458}$\pm$0.193 & 0.408$\pm$0.223 & 0.356$\pm$0.229 & 0.436$\pm$0.335 & 0.034$\pm$0.035 & 3 \\ Lymphoma & \textbf{0.885}$\pm$0.058 & 0.718$\pm$0.373 & 0.335$\pm$0.498 & 0.713$\pm$0.372 & 0.547$\pm$0.297 & 3 \\ Pendigits & \textbf{0.712}$\pm$0.082 & 0.242$\pm$0.194 & 0.228$\pm$0.224 & 0.252$\pm$0.260 & 0.365$\pm$0.147 & 3 \\ Prostate & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.001$\pm$0.001 & 3 \\ Seeds & 0.534$\pm$0.173 & 0.332$\pm$0.046 & 0.317$\pm$0.055 & 0.436$\pm$0.127 & \textbf{0.563}$\pm$0.110 & 10\\ Sensorless & \textbf{0.416}$\pm$0.303 & 0.196$\pm$0.024 & 0.203$\pm$0.017 & 0.249$\pm$0.102 & 0.005$\pm$0.008 & 3 \\ Sonar & \textbf{0.440}$\pm$0.419 & 0.329$\pm$0.473 & 0.128$\pm$0.298 & 0.261$\pm$0.365 & 0.001$\pm$0.000 & 3 \\ Theorem Proving & \textbf{0.802}$\pm$0.085 & 0.691$\pm$0.088 & 0.691$\pm$0.088 & 0.654$\pm$0.082 & 0.002$\pm$0.001 & 5 \\ Vehicle & \textbf{0.807}$\pm$0.108 & 0.315$\pm$0.295 & 0.315$\pm$0.295 & 0.645$\pm$0.232 & 0.084$\pm$0.008 & 5 \\ Vowel Context & \textbf{0.453}$\pm$0.081 & 0.248$\pm$0.042 & 0.211$\pm$0.066 & 0.194$\pm$0.029 & 0.101$\pm$0.019 & 3 \\ \hline Average Ranks & \textbf{1.30} & 2.95 & 3.52 & 2.88 & 4.35 & \\ \hline \multicolumn{2}{c}{Signed Rank Hypotheses (p-values)} & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & \\ \hline \multicolumn{7}{l}{$H_1$ significantly different from HAC-FWPD}\\ \multicolumn{7}{l}{$H_0$ statistically similar to HAC-FWPD}\\ \multicolumn{7}{l}{Best values shown in \textbf{boldface}.}\\ \end{tabular} } \end{center} \end{table*} \begin{table*}[!t] \begin{center} \caption{Means and standard deviations of ARI values for the HAC-FWPD algorithm against MAR.}\label{marHacResults} \scriptsize{ \begin{tabular}{c c c c c c c} \hline Dataset & $\begin{array}{ll} \text{HAC-}\\ \text{-FWPD}\\ \end{array}$ & ZI & MI & SVDI & kNNI & $\begin{array}{cc} \text{Best } k\\ \text{(for kNNI)}\\ 
\end{array}$\\ \hline Chronic Kidney & \textbf{0.799}$\pm$0.394 & 0.398$\pm$0.494 & 0.398$\pm$0.494 & 0.398$\pm$0.494 & 0.003$\pm$0.001 & 3 \\ Colon & \textbf{1.000}$\pm$0.000 & 0.463$\pm$0.516 & 0.463$\pm$0.516 & 0.601$\pm$0.423 & 0.016$\pm$0.002 & 3 \\ GSAD 1 & \textbf{0.619}$\pm$0.230 & 0.359$\pm$0.115 & 0.419$\pm$0.227 & 0.346$\pm$0.150 & 0.007$\pm$0.005 & 3 \\ Glass & \textbf{0.650}$\pm$0.188 & 0.617$\pm$0.124 & 0.603$\pm$0.129 & 0.590$\pm$0.184 & 0.057$\pm$0.082 & 20 \\ Iris & \textbf{0.949}$\pm$0.051 & 0.893$\pm$0.083 & 0.893$\pm$0.083 & 0.854$\pm$0.146 & 0.587$\pm$0.068 & 10 \\ Isolet 5 & \textbf{0.725}$\pm$0.186 & 0.491$\pm$0.011 & 0.491$\pm$0.011 & 0.464$\pm$0.076 & 0.076$\pm$0.005 & 3 \\ Landsat & \textbf{0.734}$\pm$0.044 & 0.638$\pm$0.323 & 0.601$\pm$0.301 & 0.721$\pm$0.142 & 0.162$\pm$0.113 & 3 \\ Leaf & 0.463$\pm$0.107 & 0.420$\pm$0.099 & 0.415$\pm$0.058 & \textbf{0.471}$\pm$0.053 & 0.154$\pm$0.013 & 3 \\ Libras & \textbf{0.864}$\pm$0.098 & 0.855$\pm$0.048 & 0.815$\pm$0.081 & 0.813$\pm$0.064 & 0.469$\pm$0.012 & 3\\ Lung & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.008$\pm$0.024 & 3 \\ Lung Cancer & \textbf{0.709}$\pm$0.270 & 0.522$\pm$0.438 & 0.538$\pm$0.422 & 0.443$\pm$0.358 & 0.036$\pm$0.019 & 3 \\ Lymphoma & \textbf{1.000}$\pm$0.000 & 0.903$\pm$0.089 & 0.772$\pm$0.132 & 0.890$\pm$0.104 & 0.788$\pm$0.000 & 3 \\ Pendigits & \textbf{0.493}$\pm$0.138 & 0.351$\pm$0.124 & 0.292$\pm$0.076 & 0.483$\pm$0.103 & 0.407$\pm$0.035 & 3 \\ Prostate & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.001$\pm$0.000 & 3 \\ Seeds & \textbf{0.564}$\pm$0.131 & 0.487$\pm$0.016 & 0.444$\pm$0.103 & 0.535$\pm$0.224 & 0.556$\pm$0.135 & 3 \\ Sensorless & \textbf{0.439}$\pm$0.288 & 0.276$\pm$0.163 & 0.174$\pm$0.053 & 0.256$\pm$0.136 & 0.000$\pm$0.000 & 10 \\ Sonar & \textbf{0.396}$\pm$0.551 & 0.005$\pm$0.000 & 0.005$\pm$0.000 & 0.094$\pm$0.222 & 
0.001$\pm$0.000 & 3 \\ Theorem Proving & \textbf{0.725}$\pm$0.102 & 0.677$\pm$0.031 & 0.685$\pm$0.107 & 0.641$\pm$0.121 & 0.001$\pm$0.006 & 3 \\ Vehicle & \textbf{0.827}$\pm$0.123 & 0.431$\pm$0.279 & 0.431$\pm$0.278 & 0.825$\pm$0.105 & 0.075$\pm$0.044 & 3 \\ Vowel Context & \textbf{0.517}$\pm$0.127 & 0.451$\pm$0.095 & 0.445$\pm$0.126 & 0.401$\pm$0.207 & 0.104$\pm$0.024 & 3 \\ \hline Average Ranks & \textbf{1.20} & 2.83 & 3.27 & 3.00 & 4.70 & \\ \hline \multicolumn{2}{c}{Signed Rank Hypotheses (p-values)} & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & \\ \hline \multicolumn{7}{l}{$H_1$ significantly different from HAC-FWPD}\\ \multicolumn{7}{l}{$H_0$ statistically similar to HAC-FWPD}\\ \multicolumn{7}{l}{Best values shown in \textbf{boldface}.}\\ \end{tabular} } \end{center} \end{table*} \begin{table*}[!t] \begin{center} \caption{Means and standard deviations of ARI values for the HAC-FWPD algorithm against MNAR-I.}\label{mnar1HacResults} \scriptsize{ \begin{tabular}{c c c c c c c} \hline Dataset & $\begin{array}{ll} \text{HAC-}\\ \text{-FWPD}\\ \end{array}$ & ZI & MI & SVDI & kNNI & $\begin{array}{cc} \text{Best } k\\ \text{(for kNNI)}\\ \end{array}$\\ \hline Chronic Kidney & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.002$\pm$0.000 & 3 \\ Colon & \textbf{0.926}$\pm$0.166 & 0.025$\pm$0.000 & 0.025$\pm$0.000 & 0.025$\pm$0.000 & 0.014$\pm$0.000 & 3 \\ GSAD 1 & \textbf{0.473}$\pm$0.237 & 0.326$\pm$0.085 & 0.343$\pm$0.098 & 0.271$\pm$0.112 & 0.005$\pm$0.003 & 3 \\ Glass & 0.736$\pm$0.135 & 0.697$\pm$0.119 & \textbf{0.738}$\pm$0.133 & 0.614$\pm$0.145 & 0.018$\pm$0.006 & 10 \\ Iris & 0.852$\pm$0.142 & 0.527$\pm$0.437 & 0.543$\pm$0.456 & \textbf{0.881}$\pm$0.180 & 0.540$\pm$0.026 & 20\\ Isolet 5 & \textbf{0.586}$\pm$0.085 & 0.326$\pm$0.207 & 0.401$\pm$0.169 & 0.223$\pm$0.153 & 0.048$\pm$0.033 & 3 \\ Landsat & \textbf{0.786}$\pm$0.085 & 0.420$\pm$0.298 & 0.443$\pm$0.375 & 0.765$\pm$0.086 & 
0.072$\pm$0.004 & 5 \\ Leaf & \textbf{0.514}$\pm$0.043 & 0.345$\pm$0.065 & 0.267$\pm$0.080 & 0.437$\pm$0.054 & 0.110$\pm$0.035 & 5 \\ Libras & \textbf{0.843}$\pm$0.109 & 0.750$\pm$0.101 & 0.750$\pm$0.101 & 0.782$\pm$0.087 & 0.419$\pm$0.042 & 3 \\ Lung & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.008$\pm$0.024 & 3 \\ Lung Cancer & \textbf{0.651}$\pm$0.316 & 0.516$\pm$0.269 & 0.516$\pm$0.269 & 0.561$\pm$0.336 & 0.020$\pm$0.010 & 3 \\ Lymphoma & \textbf{0.942}$\pm$0.130 & 0.861$\pm$0.000 & 0.861$\pm$0.000 & 0.861$\pm$0.000 & 0.651$\pm$0.000 & 3 \\ Pendigits & \textbf{0.641}$\pm$0.172 & 0.405$\pm$0.255 & 0.341$\pm$0.238 & 0.399$\pm$0.139 & 0.416$\pm$0.043 & 5 \\ Prostate & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.001$\pm$0.000 & 3 \\ Seeds & \textbf{0.584}$\pm$0.154 & 0.429$\pm$0.141 & 0.479$\pm$0.047 & 0.421$\pm$0.084 & 0.555$\pm$0.082 & 3 \\ Sensorless & \textbf{0.300}$\pm$0.298 & 0.225$\pm$0.025 & 0.217$\pm$0.027 & 0.216$\pm$0.009 & 0.000$\pm$0.000 & 3 \\ Sonar & \textbf{0.598}$\pm$0.550 & 0.196$\pm$0.449 & 0.196$\pm$0.449 & 0.329$\pm$0.473 & 0.001$\pm$0.000 & 3 \\ Theorem Proving & \textbf{0.775}$\pm$0.121 & 0.762$\pm$0.068 & 0.731$\pm$0.025 & 0.742$\pm$0.031 & 0.002$\pm$0.000 & 3 \\ Vehicle & \textbf{0.797}$\pm$0.121 & 0.321$\pm$0.194 & 0.522$\pm$0.264 & 0.699$\pm$0.290 & 0.068$\pm$0.040 & 3 \\ Vowel Context & \textbf{0.447}$\pm$0.168 & 0.300$\pm$0.073 & 0.282$\pm$0.068 & 0.306$\pm$0.114 & 0.095$\pm$0.029 & 3 \\ \hline Average Ranks & \textbf{1.33} & 3.15 & 3.05 & 2.83 & 4.65 & \\ \hline \multicolumn{2}{c}{Signed Rank Hypotheses (p-values)} & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & \\ \hline \multicolumn{7}{l}{$H_1$ significantly different from HAC-FWPD}\\ \multicolumn{7}{l}{$H_0$ statistically similar to HAC-FWPD}\\ \multicolumn{7}{l}{Best values shown in \textbf{boldface}.}\\ \end{tabular} } \end{center} 
\end{table*} \begin{table*}[!t] \begin{center} \caption{Means and standard deviations of ARI values for the HAC-FWPD algorithm against MNAR-II.}\label{mnar2HacResults} \scriptsize{ \begin{tabular}{c c c c c c c} \hline Dataset & $\begin{array}{ll} \text{HAC-}\\ \text{-FWPD}\\ \end{array}$ & ZI & MI & SVDI & kNNI & $\begin{array}{cc} \text{Best } k\\ \text{(for kNNI)}\\ \end{array}$\\ \hline Chronic Kidney & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.002$\pm$0.000 & 3 \\ Colon & \textbf{1.000}$\pm$0.000 & 0.147$\pm$0.384 & 0.147$\pm$0.384 & 0.147$\pm$0.384 & 0.013$\pm$0.004 & 3 \\ GSAD 1 & \textbf{0.429}$\pm$0.190 & 0.353$\pm$0.011 & 0.356$\pm$0.021 & 0.353$\pm$0.017 & 0.009$\pm$0.001 & 3 \\ Glass & 0.717$\pm$0.125 & \textbf{0.733}$\pm$0.103 & 0.696$\pm$0.113 & 0.690$\pm$0.172 & 0.020$\pm$0.008 & 20\\ Iris & 0.885$\pm$0.079 & 0.718$\pm$0.394 & 0.718$\pm$0.394 & \textbf{0.928}$\pm$0.024 & 0.552$\pm$0.012 & 10\\ Isolet 5 & \textbf{0.711}$\pm$0.047 & 0.296$\pm$0.214 & 0.479$\pm$0.007 & 0.184$\pm$0.174 & 0.043$\pm$0.034 & 3 \\ Landsat & \textbf{0.763}$\pm$0.048 & 0.229$\pm$0.004 & 0.229$\pm$0.004 & 0.651$\pm$0.278 & 0.082$\pm$0.003 & 3 \\ Leaf & \textbf{0.477}$\pm$0.036 & 0.255$\pm$0.109 & 0.277$\pm$0.081 & 0.342$\pm$0.124 & 0.110$\pm$0.044 & 3 \\ Libras & \textbf{0.817}$\pm$0.053 & 0.742$\pm$0.057 & 0.758$\pm$0.065 & 0.780$\pm$0.097 & 0.392$\pm$0.040 & 3 \\ Lung & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.009$\pm$0.019 & 3 \\ Lung Cancer & \textbf{0.527}$\pm$0.158 & 0.515$\pm$0.194 & 0.515$\pm$0.194 & 0.503$\pm$0.216 & 0.035$\pm$0.021 & 3 \\ Lymphoma & \textbf{0.925}$\pm$0.122 & 0.916$\pm$0.076 & 0.876$\pm$0.034 & 0.876$\pm$0.034 & 0.706$\pm$0.075 & 3 \\ Pendigits & \textbf{0.485}$\pm$0.132 & 0.219$\pm$0.053 & 0.300$\pm$0.117 & 0.427$\pm$0.108 & 0.387$\pm$0.030 & 20\\ Prostate & \textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 
\textbf{1.000}$\pm$0.000 & \textbf{1.000}$\pm$0.000 & 0.001$\pm$0.000 & 3 \\ Seeds & \textbf{0.581}$\pm$0.154 & 0.398$\pm$0.079 & 0.319$\pm$0.149 & 0.491$\pm$0.149 & 0.580$\pm$0.125 & 3\\ Sensorless & \textbf{0.475}$\pm$0.378 & 0.204$\pm$0.061 & 0.210$\pm$0.044 & 0.214$\pm$0.045 & 0.000$\pm$0.000 & 3 \\ Sonar & \textbf{0.295}$\pm$0.449 & 0.196$\pm$0.450 & 0.196$\pm$0.450 & 0.261$\pm$0.436 & 0.001$\pm$0.000 & 3 \\ Theorem Proving & \textbf{0.885}$\pm$0.078 & 0.681$\pm$0.057 & 0.681$\pm$0.057 & 0.711$\pm$0.059 & 0.001$\pm$0.002 & 3 \\ Vehicle & \textbf{0.821}$\pm$0.072 & 0.518$\pm$0.289 & 0.664$\pm$0.251 & 0.700$\pm$0.278 & 0.041$\pm$0.037 & 3 \\ Vowel Context & \textbf{0.533}$\pm$0.154 & 0.377$\pm$0.232 & 0.430$\pm$0.109 & 0.425$\pm$0.109 & 0.103$\pm$0.024 & 3 \\ \hline Average Ranks & \textbf{1.33} & 3.33 & 3.00 & 2.60 & 4.75 & \\ \hline \multicolumn{2}{c}{Signed Rank Hypotheses (p-values)} & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & $H_1 (0.00)$ & \\ \hline \multicolumn{7}{l}{$H_1$ significantly different from HAC-FWPD}\\ \multicolumn{7}{l}{$H_0$ statistically similar to HAC-FWPD}\\ \multicolumn{7}{l}{Best values shown in \textbf{boldface}.}\\ \end{tabular} } \end{center} \end{table*} It is seen from Tables \ref{mcarHacResults}-\ref{mnar2HacResults} that the HAC-FWPD algorithm performs best on all types of missingness, as evident from the consistently lowest average ranks. The proposed method performs best on the majority of datasets for all types of missingness. Moreover, the performance of HAC-FWPD is observed to be significantly better than kNNI, which performs poorly overall, indicating that kNNI may not be useful for hierarchical clustering applications with missingness. Interestingly, in the case of MAR and MNAR-II, both of which are characterized by the dependence of the missingness on the observed features, ZI, MI as well as SVDI show improved performance.
This indicates that the dependence of the missingness on the observed features aids these imputation methods in the case of hierarchical clustering. Another intriguing observation is that all the contending HAC methods consistently achieved the best possible performance on the high-dimensional datasets Lung and Prostate. This may indicate that while the convexity of the cluster structures may be harmed by missingness, the local proximity among the points is preserved owing to the sparse nature of such high-dimensional datasets. \section{Conclusions}\label{sec:conclsn} In this paper, we propose to use the FWPD measure as a viable alternative to imputation and marginalization approaches to handle the problem of missing features in data clustering. The proposed measure attempts to estimate the original distances between the data points by adding a penalty term to those pair-wise distances which cannot be calculated on the entire feature space due to missing features. Therefore, unlike existing methods for handling missing features, FWPD is also able to distinguish between distinct data points which look identical due to missing features. Yet, FWPD also ensures that the dissimilarity of any data instance from itself is never greater than its dissimilarity from any other point in the dataset. Intuitively, these advantages of FWPD should help us better model the original data space, which may help in achieving better clustering performance on the incomplete data. Therefore, we use the proposed FWPD measure to put forth the k-means-FWPD and the HAC-FWPD clustering algorithms, which are directly applicable to datasets with missing features. We conduct extensive experimentation on the new techniques using various benchmark datasets and find the new approach to produce generally better results (for both partitional as well as hierarchical clustering) compared to some of the popular imputation methods which are generally used to handle the missing feature problem.
In fact, it is observed from the experiments that the performance of the imputation schemes varies with the type of missingness and/or the clustering algorithm being used (for example, kNNI is useful for k-means clustering but not for HAC clustering; SVDI is useful for HAC clustering but not for k-means clustering; MI is effective when the missingness depends on the observed features). The proposed approach, on the other hand, exhibits good performance across all types of missingness as well as both partitional and hierarchical clustering paradigms. The experimental results attest to the ability of FWPD to better model the original data space, compared to existing methods. However, it must be stressed that the performance of all these methods, including the FWPD-based ones, can vary depending on the structure of the dataset concerned, the choice of the proximity measure used (for HAC), and the pattern and extent of missingness plaguing the data. Fortunately, the $\alpha$ parameter embedded in FWPD can be varied in accordance with the extent of missingness to achieve desired results. The results in Section \ref{sec:alpha} indicate that it may be useful to choose a high value of $\alpha$ when a large fraction of the features are unobserved, and to choose a smaller value when only a few of the features are missing. However, in the presence of a sizable amount of missingness and the absence of ground truths to validate the merit of the achieved clusterings, it is safest to choose a value of $\alpha$ proportional to the percentage of missing features restricted to the range $[0.1,0.25]$. We also present an appendix dealing with an extension of the FWPD measure to problems with absent features and show that this modified form of FWPD is a semi-metric.
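For concreteness, the penalty mechanism discussed above can be sketched in a few lines of code. The sketch below implements the structural-missingness form of the FWP and FWPD given in Appendix \ref{apd:first}; the observed distance $d$ is taken here to be the Euclidean distance over commonly defined features, and $d_{max}$ the largest such distance in the dataset. These are simplifying assumptions of this illustration (the paper defines $d$ and $d_{max}$ earlier in the text), and the function and variable names are ours.

```python
import math

def fwpd_abs(X, i, j, alpha=0.2):
    """FWPD for structural missingness (Appendix A of the paper).

    X is a list of dicts mapping feature name -> value; absent
    features are simply not keys. alpha lies in (0,1). The observed
    distance d is taken as the Euclidean distance over commonly
    defined features and d_max as the largest such distance in X --
    both are assumptions of this sketch.
    """
    nu = {}  # nu[l] = number of instances defined on feature l
    for x in X:
        for l in x:
            nu[l] = nu.get(l, 0) + 1

    def dist(a, b):
        common = set(a) & set(b)
        return math.sqrt(sum((a[l] - b[l]) ** 2 for l in common))

    d_max = max(dist(a, b) for a in X for b in X) or 1.0
    zi, zj = set(X[i]), set(X[j])
    # penalty: features defined for exactly one of the two points,
    # weighted by how commonly each feature is observed in X
    penalty = sum(nu[l] for l in zi ^ zj) / sum(nu[l] for l in zi | zj)
    return (1 - alpha) * dist(X[i], X[j]) / d_max + alpha * penalty
```

The semi-metric properties of Theorem \ref{thmPropAbsFwpd} (non-negativity, identity of indiscernibles, symmetry) are easy to check on small examples with this sketch.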
\par An obvious follow-up to this work is the application of the proposed PDM variant to practical clustering problems which are characterized by large fractions of unobserved data that arise in various fields such as economics, psychiatry, web-mining, etc. Studies can be undertaken to better understand the effects that the choice of $\alpha$ has on the clustering results. Another rewarding topic of research is the investigation of the abilities of the FWPD variant for absent features (see Appendix \ref{apd:first}) by conducting proper experiments using benchmark applications characterized by this rare form of missingness (structural missingness). \appendix \normalsize{ \section{Extending the FWPD to problems with Absent Features}\label{apd:first} This appendix proposes an extension of the FWPD measure to the case of absent features or structural missingness. The principal difference between missing and absent features lies in the fact that the unobserved features are known to be undefined in the latter case, unlike the former. Therefore, while it makes sense to add penalties for features which are observed for only one of the data instances (as the very existence of such a feature sets the points apart), it makes little sense to add penalties for features which are undefined for both the data points. This is in contrast to problems with unstructured missingness where a feature missing from both the data instances is known to be defined for both points (which potentially have distinct values of this feature). Thus, the fundamental difference between the problems of missing and absent features is that two points observed in the same subspace and having identical observed features should (unlike the missing data problem) essentially be considered identical instances in the case of absent features, as the unobserved features are known to be non-existent. 
But, if the unobserved features are merely unknown (rather than non-existent), such data points should be considered distinct because the unobserved features are likely to have distinct values (making the points distinct when completely observed). Hence, it is essential to add penalties for features missing from both points in the case of missing features, but not in the case of absent features. Keeping this in mind, we modify the proposed FWPD (essentially, the underlying FWP) as follows to serve as a dissimilarity measure for structural missingness. \par Let the dataset $X_{abs}$ consist of $n$ data instances $\mathbf{x}_i$ ($i \in \{1, 2, \cdots, n\}$). Let $\zeta_{\mathbf{x}_i}$ denote the set of features on which the data point $\mathbf{x}_i \in X_{abs}$ is defined. \begin{dfn}\label{defAbsFwp} The FWP between the instances $\mathbf{x}_i$ and $\mathbf{x}_j$ in $X_{abs}$ is defined as \begin{equation}\label{eqnDefAbsFwp} p_{abs}(\mathbf{x}_i,\mathbf{x}_j)=\frac{\underset{l \in (\zeta_{\mathbf{x}_i}\bigcup \zeta_{\mathbf{x}_j}) \backslash (\zeta_{\mathbf{x}_i}\bigcap \zeta_{\mathbf{x}_j})}{\sum}\;\nu_l}{\underset{l' \in \zeta_{\mathbf{x}_i}\bigcup \zeta_{\mathbf{x}_j}}{\sum}\;\nu_{l'}} \end{equation} where $\nu_s \in (0,n]$ is the number of instances in $X_{abs}$ that are characterized by the feature $s$. As in the case of unstructured missingness, this FWP also exacts a greater penalty for the non-existence of commonly observed features. \end{dfn} Then, the definition of the FWPD modified for structural missingness is as follows.
\begin{dfn}\label{defAbsFwpd} The FWPD between $\mathbf{x}_i$ and $\mathbf{x}_j$ in $X_{abs}$ is \begin{equation}\label{eqnAbsFwpd} \delta_{abs}(\mathbf{x}_i,\mathbf{x}_j)=(1-\alpha)\times \frac{d(\mathbf{x}_i,\mathbf{x}_j)}{d_{max}} + \alpha \times p_{abs}(\mathbf{x}_i,\mathbf{x}_j), \end{equation} where $\alpha \in (0,1)$ is a parameter which determines the relative importance of the two terms, and $d(\mathbf{x}_i,\mathbf{x}_j)$ and $d_{max}$ retain their former definitions (but in the context of structural missingness). \end{dfn} \par Now, having modified the FWPD to handle structural missingness, we show in the following theorem that the modified FWPD is a semi-metric. \begin{theorem}\label{thmPropAbsFwpd} The FWPD for absent features is a semi-metric, i.e. it satisfies the following important properties: \begin{enumerate} \item $\delta_{abs}(\mathbf{x}_i,\mathbf{x}_j) \geq 0$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X_{abs}$, \item $\delta_{abs}(\mathbf{x}_i,\mathbf{x}_j) = 0$ iff $\mathbf{x}_i = \mathbf{x}_j$, i.e. $\zeta_{\mathbf{x}_i} = \zeta_{\mathbf{x}_j}$ and $x_{i,l} = x_{j,l}$ $\forall$ $l \in \zeta_{\mathbf{x}_i}$, and \item $\delta_{abs}(\mathbf{x}_i,\mathbf{x}_j)=\delta_{abs}(\mathbf{x}_j,\mathbf{x}_i)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X_{abs}$. \end{enumerate} \begin{proof}\label{pfPropAbsFwpd} \begin{enumerate} \item From Equation (\ref{eqnDefAbsFwp}) we can see that $p_{abs}(\mathbf{x}_i,\mathbf{x}_j) \geq 0$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X_{abs}$ and Equation (\ref{eqnLowerDist}) implies that $d(\mathbf{x}_i,\mathbf{x}_j) \geq 0$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X_{abs}$. Hence, it follows that $\delta_{abs}(\mathbf{x}_i,\mathbf{x}_j) \geq 0$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X_{abs}$. \item It is easy to see from Equation (\ref{eqnDefAbsFwp}) that $p_{abs}(\mathbf{x}_i,\mathbf{x}_j)=0$ iff $\zeta_{\mathbf{x}_i}=\zeta_{\mathbf{x}_j}$.
Now, if $x_{i,l} = x_{j,l}$ $\forall$ $l \in \zeta_{\mathbf{x}_i}$, then $d(\mathbf{x}_i,\mathbf{x}_j) = 0$. Hence, $\delta_{abs}(\mathbf{x}_i,\mathbf{x}_j) = 0$ when $\zeta_{\mathbf{x}_i} = \zeta_{\mathbf{x}_j}$ and $x_{i,l} = x_{j,l}$ $\forall$ $l \in \zeta_{\mathbf{x}_i}$. The converse is also true as $\delta_{abs}(\mathbf{x}_i,\mathbf{x}_j) = 0$ implies $\zeta_{\mathbf{x}_i} = \zeta_{\mathbf{x}_j}$ and $d(\mathbf{x}_i,\mathbf{x}_j) = 0$; the latter, in turn, implies that $x_{i,l} = x_{j,l}$ $\forall$ $l \in \zeta_{\mathbf{x}_i}$. \item From Equation (\ref{eqnAbsFwpd}) we have \begin{equation*} \begin{aligned} & \delta_{abs}(\mathbf{x}_i,\mathbf{x}_j)=(1-\alpha)\times \frac{d(\mathbf{x}_i,\mathbf{x}_j)}{d_{max}} + \alpha \times p_{abs}(\mathbf{x}_i,\mathbf{x}_j),\\ \text{and } & \delta_{abs}(\mathbf{x}_j,\mathbf{x}_i)=(1-\alpha)\times \frac{d(\mathbf{x}_j,\mathbf{x}_i)}{d_{max}} + \alpha \times p_{abs}(\mathbf{x}_j,\mathbf{x}_i). \end{aligned} \end{equation*} But, $d(\mathbf{x}_i,\mathbf{x}_j)=d(\mathbf{x}_j,\mathbf{x}_i)$ and $p_{abs}(\mathbf{x}_i,\mathbf{x}_j)=p_{abs}(\mathbf{x}_j,\mathbf{x}_i)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X_{abs}$ (by definition). Therefore, it can be easily seen that $\delta_{abs}(\mathbf{x}_i,\mathbf{x}_j)=\delta_{abs}(\mathbf{x}_j,\mathbf{x}_i)$ $\forall$ $\mathbf{x}_i,\mathbf{x}_j \in X_{abs}$. \end{enumerate} \end{proof} \end{theorem} } \end{document}
\begin{document} \title[L\"owner equation]{Exponential driving function for the L\"owner equation} \author[D.~Prokhorov]{Dmitri Prokhorov} \subjclass[2010]{Primary 30C35; Secondary 30C20, 30C80} \keywords{L\"owner equation, singular solution} \address{D.~Prokhorov: Department of Mathematics and Mechanics, Saratov State University, Saratov 410012, Russia} \email{ProkhorovDV@info.sgu.ru} \begin{abstract} We consider the chordal L\"owner differential equation with the model driving function $\root3\of t$. Holomorphic and singular solutions are represented by their series. It is shown that a disposition of values of different singular and branching solutions is monotonic, and solutions to the L\"owner equation map slit domains onto the upper half-plane. The slit is a $C^1$-curve. We give an asymptotic estimate for the ratio of harmonic measures of the two slit sides. \end{abstract} \maketitle \section{Introduction} The L\"owner differential equation introduced by K.~L\"owner \cite{Loewner} served a source to study properties of univalent functions on the unit disk. Nowadays it is of growing interest in many areas, see, e.g., \cite{Markina}. The L\"owner equation for the upper half-plane $\mathbb H$ appeared later (see, e.g., \cite{Aleksandrov}) and became popular during the last decades. Define a function $w=f(z,t)$, $z\in\mathbb H$, $t\geq0$, \begin{equation} f(z,t)=z+\frac{2t}{z}+O\left(\frac{1}{z^2}\right), \;\;\; z\to\infty, \label{exp} \end{equation} which maps $\mathbb H\setminus K_t$ onto $\mathbb H$ and solves the {\it chordal} L\"owner ordinary differential equation \begin{equation} \frac{df(z,t)}{dt}=\frac{2}{f(z,t)-\lambda(t)},\;\;\;f(z,0)=z,\;\;\;z\in\mathbb H, \label{Leo} \end{equation} where the driving function $\lambda(t)$ is continuous and real-valued. The conformal maps $f(z,t)$ are continuously extended onto $z\in\mathbb R$ minus the closure of $K_t$ and the extended map also satisfies equation (\ref{Leo}). 
Following \cite{LMR}, we address the old problem of determining, in terms of $\lambda$, when $K_t$ is a Jordan arc, $K_t=\gamma(t)$, $t\geq0$, emanating from the real axis $\mathbb R$. In this case $f(z,t)$ are continuously extended onto the two sides of $\gamma(t)$, \begin{equation} \lambda(t)=f(\gamma(t),t),\;\;\;\gamma(t)=f^{-1}(\lambda(t),t). \label{gam} \end{equation} Points $\gamma(t)$ are treated as prime ends, which are different for the two sides of the arc. Note that Kufarev \cite{Kufarev} proposed a counterexample of a non-slit mapping for the {\it radial} L\"owner equation in the disk. For the chordal L\"owner equation, Kufarev's example corresponds to $\lambda(t)=3\sqrt2\sqrt{1-t}$, see \cite{Kager}, \cite{LMR} for details. Equation (\ref{Leo}) admits integration in quadratures for particular cases of $\lambda(t)$ studied in \cite{Kager}, \cite{PrZ}. The integrability cases of (\ref{Leo}) are invariant under linear and scaling transformations of $\lambda(t)$, see, e.g., \cite{LMR}. Therefore, assume without loss of generality that $\lambda(0)=0$ and, equivalently, $\gamma(0)=0$. The picture of singularity lines for driving functions $\lambda(t)$ belonging to the Lipschitz class $\text{Lip}(1/2)$ with exponent 1/2 is well studied, see, e.g., \cite{LMR} and references therein. This article aims to show that, in the case of the cubic root driving function $\lambda(t)=\root3\of t$ in (\ref{Leo}), that is, \begin{equation} \frac{df(z,t)}{dt}=\frac{2}{f(z,t)-\root3\of t},\;\;\;f(z,0)=z,\;\;\;\text{\rm Im } z\geq0, \label{Le3} \end{equation} the solution $w=f(z,t)$ is a slit mapping for $t>0$ small enough, i.e., $K_t=\gamma(t)$, $0<t<T$. The driving function $\lambda(t)=\root3\of t$ is chosen as a typical function of the Lipschitz class $\text{Lip}(1/3)$. We do not try to cover the most general case but hope that the model driving function serves as a demonstration for a wider class.
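Equation (\ref{Le3}) is also easy to explore numerically. The following sketch integrates (\ref{Le3}) by an explicit Euler scheme for a point $z\in\mathbb H$ and checks two expected features: $\text{\rm Im}\, f(z,t)$ decreases in $t$, and for large $|z|$ the value agrees with the hydrodynamic normalization (\ref{exp}). The scheme, step size, and sample points are illustrative choices, not part of the paper.

```python
def loewner_cubic_root(z, T, n_steps=20000):
    """Explicit Euler integration of df/dt = 2/(f - t**(1/3)), f(0) = z,
    i.e. the chordal Loewner equation with driving function t**(1/3).
    A numerical illustration only; the scheme and step size are ad hoc."""
    f = complex(z)
    dt = T / n_steps
    for k in range(n_steps):
        t = k * dt
        # Im f strictly decreases along solutions in the upper half-plane,
        # since d(Im f)/dt = -2 Im f / |f - lambda(t)|^2 < 0
        f += dt * 2.0 / (f - t ** (1.0 / 3.0))
    return f
```

For $z$ far from the singularity the trajectory stays well inside $\mathbb H$ on small time intervals, while $\text{\rm Im}\, f$ shrinks monotonically, consistent with points being "pulled down" toward the growing hull.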
Incidentally, the case when the trace $\gamma$ is a circular arc meeting the real axis tangentially is studied in \cite{ProkhVas}. The explicit solution for the inverse function gave a driving term of the form $\lambda(t)=C t^{1\over3}+\dots$, which corresponds to the above driving function asymptotically. The main result of the article is contained in the following theorem, which shows that $f(z,t)$ is a mapping from a slit domain $D(t)=\mathbb H\setminus\gamma(t)$. \begin{theorem} Let $f(z,t)$ be a solution to the L\"owner equation (\ref{Le3}). Then $f(\cdot,t)$ maps $D(t)=\mathbb H\setminus\gamma(t)$ onto $\mathbb H$ for $t>0$ small enough, where $\gamma(t)$ is a $C^1$-curve, except possibly at the point $\gamma(0)=0$. \end{theorem} The preliminary results of Section 2 concern the theory of differential equations and prepare for the main proof. Theorem 1, together with auxiliary lemmas, is proved in Section 3. Section 4 is devoted to estimates for harmonic measures of the two sides of the slit generated by the L\"owner equation (\ref{Le3}). Theorem 2 in this section gives the asymptotic relation for the ratio of these harmonic measures as $t\to0$. In Section 5 we consider holomorphic solutions to (\ref{Le3}) represented by power series and propose asymptotic expansions for the radius of convergence of the series. \section{Preliminary statements} Change variables $t\to\tau^3$, $g(z,\tau):=f(z,\tau^3)$, and reduce equation (\ref{Le3}) to \begin{equation} \frac{dg(z,\tau)}{d\tau}=\frac{6\tau^2}{g(z,\tau)-\tau}, \;\;\;g(z,0)=z, \;\;\;\text{\rm Im } z\geq0. \label{Le2} \end{equation} Note that differential equations $$\frac{dy}{dx}=\frac{Q(x,y)}{P(x,y)}$$ with holomorphic functions $P(x,y)$ and $Q(x,y)$ are well known both for complex and real variables, especially in the case of polynomials $P$ and $Q$, see, e.g., \cite{Bendixson}, \cite{Borel}, \cite{Golubev}, \cite{Poincare}, \cite{Sansone-1}, \cite{Sansone-2}.
If $z\neq0$, then $g(z,0)\neq0$, and there exists a {\it regular} solution $g(z,\tau)$ to (\ref{Le2}) holomorphic in $\tau$ for $|\tau|$ small enough, which is unique for every $z\neq0$. We are interested mostly in studying {\it singular} solutions to (\ref{Le2}), i.e., those which do not satisfy the uniqueness conditions for equation (\ref{Le2}). Every point $(g(z_0,\tau_0),\tau_0)$ such that $g(z_0,\tau_0)=\tau_0$ is a {\it singular point} for equation (\ref{Le2}). If $\tau_0\neq0$, then $(g(z_0,\tau_0),\tau_0)$ is an {\it algebraic solution critical point}, and corresponding singular solutions to (\ref{Le2}) through this point are expanded in series in powers of $(\tau-\tau_0)^{1/m}$, $m\in\mathbb N$. So these singular solutions are different branches of the same analytic function, see [17,~Chap.9, \S1]. The point $(g(z_0,\tau_0),\tau_0)=(0,0)$ is the only {\it singular point of indefinite character} for (\ref{Le2}). It is the point at which the numerator and denominator on the right-hand side of (\ref{Le2}) vanish simultaneously. All the singular solutions to (\ref{Le2}) which are not branches of the same analytic function pass through this point $(0,0)$ [17,~Chap.9, \S1]. Regular and singular solutions to (\ref{Le2}) behave according to the Poincar\'e-Bendixson theorems \cite{Poincare}, \cite{Bendixson}, [17,~Chap.9, \S1]. Namely, two integral curves of differential equation (\ref{Le2}) intersect only at the singular point $(0,0)$. An integral curve of (\ref{Le2}) can have multiple points only at $(0,0)$. Bendixson \cite{Bendixson} considered real integral curves globally and stated that they have endpoints at nodes and foci and extend through a saddle.
Under these assumptions, the Bendixson theorem \cite{Bendixson} leaves only three possibilities for equation (\ref{Le2}) in a neighborhood of $(0,0)$: (a) an integral curve is closed, i.e., it is a cycle; (b) an integral curve is a spiral which tends to a cycle asymptotically; (c) an integral curve has an endpoint at $(0,0)$. Recall the integrability case \cite{Kager} of the L\"owner differential equation (\ref{Leo}) with the square root forcing $\lambda(t)=c\sqrt t$. After changing variables $t\to\tau^2$, the singular point $(0,0)$ in this case is a saddle according to the Poincar\'e classification \cite{Poincare} for linear differential equations. On the other hand, another integrability case \cite{Kager} with the square root forcing $\lambda(t)=c\sqrt{1-t}$, after changing variables $t\to1-\tau^2$, leads to a focus at $(0,0)$. Returning to equation (\ref{Le2}), note that its solutions are infinitely differentiable with respect to the real variable $\tau$, see [4,~Chap.1, \S1], [17,~Chap.9, \S1]. Hence a recursive evaluation of Taylor coefficients can help to find singular solutions, provided that the resulting series has a positive radius of convergence [16,~Chap.3, \S1]. We apply this method to equation (\ref{Le2}). Let \begin{equation} g_s(0,\tau)=\sum_{n=1}^{\infty}a_n\tau^n \label{Si1} \end{equation} be a formal power series for singular solutions to (\ref{Le2}). Note that $g_s$ is not necessarily unique. It depends on the path along which $z$ approaches 0, $z\notin K_{\tau}$. Substituting (\ref{Si1}) into (\ref{Le2}) we see that \begin{equation} \sum_{n=1}^{\infty}na_n\tau^{n-1}\left(\sum_{n=1}^{\infty}a_n\tau^n-\tau\right)=6\tau^2. \label{Si2} \end{equation} Equating coefficients of like powers on both sides of (\ref{Si2}) we obtain \begin{equation} a_1(a_1-1)=0. \label{Si3} \end{equation} This equation gives two possible values, $a_1=1$ and $a_1=0$, corresponding to two singular solutions $g^+(0,\tau)$ and $g^-(0,\tau)$.
In both cases equation (\ref{Si2}) implies recurrence formulas for the coefficients $a_n^+$ and $a_n^-$ of $g^+(0,\tau)$ and $g^-(0,\tau)$ respectively, \begin{equation} a_1^+=1,\;\;a_2^+=6,\;\;a_n^+=-\sum_{k=2}^{n-1}ka_k^+a_{n+1-k}^+, \;\;n\geq3, \label{Si4} \end{equation} \begin{equation} a_1^-=0,\;\;a_2^-=-3,\;\; a_n^-=\frac{1}{n}\sum_{k=2}^{n-1}ka_k^-a_{n+1-k}^-,\;\;n\geq3. \label{Si5} \end{equation} We now show that the series $\sum_{n=1}^{\infty}a_n^+\tau^n$, formally representing $g^+(0,\tau)$, diverges for all $\tau\neq0$. \begin{lemma} For $n\geq2$, the inequalities \begin{equation} 6^{n-1}(n-1)!\leq|a_n^+|\leq12^{n-1}n^{n-3} \label{Si6} \end{equation} hold. \end{lemma} \proof For $n=2$, the lower estimate in (\ref{Si6}) holds with equality. Suppose that these estimates are true for $k=2,\dots,n-1$ and substitute them in (\ref{Si2}). Note that, by induction, $a_n^+=(-1)^n|a_n^+|$ for $n\geq2$, so all the terms in the sum in (\ref{Si4}) have the same sign. For $n\geq3$, we have $$|a_n^+|=\sum_{k=2}^{n-1}k|a_k^+||a_{n+1-k}^+| \geq\sum_{k=2}^{n-1}k6^{k-1}(k-1)!6^{n-k}(n-k)!=$$ $$6^{n-1}\sum_{k=2}^{n-1}k!(n-k)!\geq6^{n-1}(n-1)!\;.$$ This proves the lower estimate in (\ref{Si6}) by induction. Similarly, for $n=2,3$, the upper estimate in (\ref{Si6}) is easily verified. Suppose that these estimates are true for $k=2,\dots,n-1$ and substitute them in (\ref{Si2}). For $n\geq4$, we have $$|a_n^+|=\sum_{k=2}^{n-1}k|a_k^+||a_{n+1-k}^+| \leq\sum_{k=2}^{n-1}k12^{k-1}k^{k-3}12^{n-k}(n+1-k)^{n-2-k}=$$ $$12^{n-1}\sum_{k=2}^{n-1}k^{k-2}(n+1-k)^{n-2-k} \leq12^{n-1}\left(\sum_{k=2}^{n-2}(n-1)^{k-2}(n-1)^{n-2-k}+\frac{(n-1)^{n-3}}{2}\right)$$ $$<12^{n-1}\left(\sum_{k=2}^{n-2}(n-1)^{n-4}+(n-1)^{n-4}\right)<12^{n-1}n^{n-3}$$ which completes the proof. \endproof Evidently, the upper estimate in (\ref{Si6}) also holds for $|a_n^-|$, $n\geq2$. The lower estimate in (\ref{Si6}) implies the divergence of $\sum_{n=1}^{\infty}a_n^+\tau^n$ for every $\tau\neq0$. Therefore equation (\ref{Le2}) has no solution holomorphic in a neighborhood of $\tau_0=0$.
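These recurrences and the bounds of Lemma 1 are easy to check by direct computation. The following sketch (an illustration, not part of the paper) evaluates (\ref{Si4})--(\ref{Si5}) in exact arithmetic:

```python
# Illustration only: compute a_n^+ and a_n^- from the recurrences (Si4)-(Si5)
# and verify the bounds of Lemma 1 in exact integer/rational arithmetic.
from fractions import Fraction
from math import factorial

def coeffs_plus(N):
    # a_1^+ = 1, a_2^+ = 6, a_n^+ = -sum_{k=2}^{n-1} k a_k^+ a_{n+1-k}^+
    a = {1: 1, 2: 6}
    for n in range(3, N + 1):
        a[n] = -sum(k * a[k] * a[n + 1 - k] for k in range(2, n))
    return a

def coeffs_minus(N):
    # a_1^- = 0, a_2^- = -3, a_n^- = (1/n) sum_{k=2}^{n-1} k a_k^- a_{n+1-k}^-
    a = {1: Fraction(0), 2: Fraction(-3)}
    for n in range(3, N + 1):
        a[n] = sum(k * a[k] * a[n + 1 - k] for k in range(2, n)) / n
    return a

ap, am = coeffs_plus(12), coeffs_minus(12)
print(ap[3])  # -72, the value used later in the proof of Lemma 2

# Lemma 1: 6^(n-1) (n-1)! <= |a_n^+| <= 12^(n-1) n^(n-3) for n >= 2; the
# upper bound is checked as |a_n^+| n^3 <= 12^(n-1) n^n to stay in integers.
for n in range(2, 13):
    assert 6**(n - 1) * factorial(n - 1) <= abs(ap[n])
    assert abs(ap[n]) * n**3 <= 12**(n - 1) * n**n
    assert abs(am[n]) * n**3 <= 12**(n - 1) * n**n  # upper bound for a_n^- too
```

The factorial growth of the lower bound makes the divergence of $\sum a_n^+\tau^n$ visible already for small $n$.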
There exist methods to sum the divergent series $\sum_{n=1}^{\infty}a_n^+\tau^n$, among them the regular Borel method \cite{Borel}, [16,~Chap.3, \S1]. Let $$G(\tau)=\sum_{n=1}^{\infty}\frac{a_n^+}{n!}\tau^n;$$ this series converges for $|\tau|<1/(12e)$ according to Lemma 1. The Borel sum equals $$h(\tau)=\int_0^{\infty}e^{-x}G(\tau x)dx$$ and solves (\ref{Le2}) provided it determines an analytic function. The same approach applies to $\sum_{n=1}^{\infty}a_n^-\tau^n$. In any case, solutions $g_1(0,\tau)$, $g_2(0,\tau)$ to (\ref{Le2}) emanating from the singular point $(0,0)$ satisfy the asymptotic relations $$g_1(0,\tau)=\sum_{k=1}^na_k^+\tau^k+o(\tau^n),\;\;\; g_2(0,\tau)=\sum_{k=1}^na_k^-\tau^k+o(\tau^n),\;\;\;\tau\to0,$$ for all $n\geq2$; the terms $o(\tau^n)$ in both representations depend on $n$. Let $f_1(0,t):=g_1(0,\root3\of t)$, $f_2(0,t):=g_2(0,\root3\of t)$. Since $f_1(0,t)=\root3\of t+6\root3\of{t^2}+o(\root3\of{t^2})$ and $f_2(0,t)=-3\root3\of{t^2}+o(\root3\of{t^2})$ as $t\to0$, the inequality $$f_2(0,t)<\root3\of t<f_1(0,t)$$ holds for all $t>0$ small enough. Let us find representations for all other singular solutions to equation (\ref{Le3}) which appear at $t>0$. Suppose there are $z_0\in\mathbb H$ and $t_0>0$ such that $f(z_0,t_0)=\root3\of{t_0}$. Then $(f(z_0,t_0),t_0)$ is a singular point of equation (\ref{Le3}), and $f(z_0,t)$ is expanded in a series in powers $(t-t_0)^{n/m}$, $m\in\mathbb N$, \begin{equation} f(z_0,t)=\root3\of{t_0}+\sum_{n=1}^{\infty}b_{n/m}(t-t_0)^{n/m}. \label{Si7} \end{equation} Substitute (\ref{Si7}) into (\ref{Le3}) and see that $$\sum_{n=1}^{\infty}\frac{nb_{n/m}(t-t_0)^{n/m-1}}{m}\times$$ \begin{equation} \left(\sum_{n=1}^{\infty}b_{n/m} (t-t_0)^{n/m}-\root3\of{t_0}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}2\cdot5\dots(3n-4)}{n!} \frac{(t-t_0)^n}{(3t_0)^n}\right)=2. \label{Si8} \end{equation} Equating coefficients of like powers on both sides of (\ref{Si8}) we obtain $m=2$ and \begin{equation} (b_{1/2})^2=4.
\label{Si9} \end{equation} This equation gives two possible values $b_{1/2}=2$ and $b_{1/2}=-2$, corresponding to the two branches $f_1(z_0,t)$ and $f_2(z_0,t)$ of the solution (\ref{Si7}). In fact, we may accept only one of the possibilities, say $b_{1/2}=2$, since the second case is obtained by passing to the other branch of $(t-t_0)^{n/2}$ when going through $t=t_0$. So we have recurrence formulas for the coefficients $b_{n/2}$ of $f_1(z_0,t)$ and $f_2(z_0,t)$, \begin{equation} b_{1/2}=2,\;\;b_{n/2}=\frac{1}{n+1}\left(c_{n/2}-\frac{1}{2}\sum_{k=2}^{n-1}kb_{k/2} (b_{(n+1-k)/2}-c_{(n+1-k)/2})\right),\;\;n\geq2, \label{Si10} \end{equation} where \begin{equation} c_{(2k-1)/2}=0,\;\;c_k=\frac{(-1)^{k-1}2\cdot5\dots(3k-4)}{3^kt_0^{k-1/3}k!},\;\; k=1,2,\dots\;. \label{Si11} \end{equation} Since $$f_1(z_0,t)=\root3\of{t_0}+2\sqrt{t-t_0}+o(\sqrt{t-t_0}),\; f_2(z_0,t)=\root3\of{t_0}-2\sqrt{t-t_0}+o(\sqrt{t-t_0}),$$ $$\root3\of t= \root3\of{t_0}+\frac{1}{3\root3\of{t_0^2}}(t-t_0)+o(t-t_0),\;\;\;t\to t_0+0,$$ the inequality $$f_2(z_0,t)<\root3\of t<f_1(z_0,t)$$ holds for all $t>t_0$ close to $t_0$. \section{Proof of the main results} It is known from the theory of differential equations that integral curves of equation (\ref{Le3}) intersect only at the singular point $(0,0)$ [17,~Chap.9, \S1]. In particular, this implies the local inequalities $f_2(0,t)<f_2(z_0,t)<\root3\of t<f_1(z_0,t)<f_1(0,t)$, where $(f(z_0,t_0),t_0)$ is an algebraic solution critical point for equation (\ref{Le3}). We give an independent proof of these inequalities, which can be useful for more general driving functions. \begin{lemma} For $t>0$ small enough and a singular point $(f(z_0,t_0),t_0)$ for equation (\ref{Le3}), $0<t_0<t$, the following inequalities $$f_2(0,t)<f_2(z_0,t)<\root3\of t<f_1(z_0,t)<f_1(0,t)$$ hold.
\end{lemma} \proof To show that $f_1(z_0,t)<f_1(0,t)$, subtract the equations $$\frac{df_1(0,t)}{dt}=\frac{2}{f_1(0,t)-\root3\of t},\;\;\;f_1(0,0)=0,$$ $$\frac{df_1(z_0,t)}{dt}=\frac{2}{f_1(z_0,t)-\root3\of t}, \;\;\;f_1(z_0,t_0)=\root3\of{t_0},$$ and obtain $$\frac{d(f_1(0,t)-f_1(z_0,t))}{dt}=\frac{2(f_1(z_0,t)-f_1(0,t))} {(f_1(0,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)},$$ which can be written in the form $$\frac{d\log(f_1(0,t)-f_1(z_0,t))}{dt}=\frac{-2} {(f_1(0,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)}.$$ Suppose that $T>t_0$ is the smallest number for which $f_1(0,T)=f_1(z_0,T)$. This implies that \begin{equation} \int_{t_0}^T\frac{dt}{(f_1(0,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)}= \infty. \label{Si12} \end{equation} To evaluate the integral in (\ref{Si12}) we study the behavior of $f_1(z_0,t)-\root3\of t$ with the help of the differential equation \begin{equation} \frac{d(f_1(z_0,t)-\root3\of t)}{dt}= \frac{2}{f_1(z_0,t)-\root3\of t}-\frac{1}{3\root3\of{t^2}}=\frac{\root3\of t+6\root3\of{t^2}-f_1(z_0,t)}{3\root3\of{t^2} (f_1(z_0,t)-\root3\of t)}. \label{Si13} \end{equation} A calculation gives $a_3^+=-72$, whence $$f_1(0,t)=\root3\of t+6\root3\of{t^2}-72t+o(t),\;\;t\to+0.$$ There exists a number $T'>0$ such that for $0<t<T'$, $\root3\of t+6\root3\of{t^2}>f_1(0,t)$. Consequently, the right-hand side in (\ref{Si13}) is positive for $0<t<T'$. Note that $T'$ does not depend on $t_0$. The condition ``$t>0$ small enough'' in Lemma 2 is understood from now on as $0<t<T'$. We see from (\ref{Si13}) that for such $t$, $f_1(z_0,t)-\root3\of t$ is increasing with $t$, $t_0<t<T<T'$. Therefore, the integral on the left-hand side of (\ref{Si12}) is finite. This contradiction with (\ref{Si12}) rules out the existence of such $T$ and proves the third and the fourth inequalities in Lemma 2. The remaining inequalities in Lemma 2 are proved similarly, and even more easily.
To show that $f_2(z_0,t)>f_2(0,t)$, subtract the equations $$\frac{df_2(0,t)}{dt}=\frac{2}{f_2(0,t)-\root3\of t},\;\;\;f_2(0,0)=0,$$ $$\frac{df_2(z_0,t)}{dt}=\frac{2}{f_2(z_0,t)-\root3\of t}, \;\;\;f_2(z_0,t_0)=\root3\of{t_0},$$ and obtain $$\frac{d(f_2(0,t)-f_2(z_0,t))}{dt}=\frac{2(f_2(z_0,t)-f_2(0,t))} {(f_2(0,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)},$$ which can be written in the form $$\frac{d\log(f_2(z_0,t)-f_2(0,t))}{dt}=\frac{-2} {(f_2(0,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)}.$$ Suppose that $T>t_0$ is the smallest number for which $f_2(z_0,T)=f_2(0,T)$. This implies that \begin{equation} \int_{t_0}^T\frac{dt}{(f_2(0,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)}= \infty. \label{Si14} \end{equation} To evaluate the integral in (\ref{Si14}) we study the behavior of $f_2(z_0,t)-\root3\of t$ with the help of the differential equation \begin{equation} \frac{d(f_2(z_0,t)-\root3\of t)}{dt}= \frac{2}{f_2(z_0,t)-\root3\of t}-\frac{1}{3\root3\of{t^2}}=\frac{\root3\of t+6\root3\of{t^2}-f_2(z_0,t)}{3\root3\of{t^2} (f_2(z_0,t)-\root3\of t)}. \label{Si15} \end{equation} Since $$f_2(0,t)=-3\root3\of{t^2}+o(\root3\of{t^2}),\;\;t\to+0,$$ the factor $f_2(0,t)-\root3\of t$ is negative and bounded away from 0 on $[t_0,T]$. Moreover, $f_2(z_0,t)<\root3\of t<\root3\of t+6\root3\of{t^2}$, so the numerator on the right-hand side of (\ref{Si15}) is positive while the denominator is negative. Consequently, the right-hand side in (\ref{Si15}) is negative, and $f_2(z_0,t)-\root3\of t$ is decreasing with $t$, $t_0<t<T$. Therefore, the integrand in (\ref{Si14}) is integrable near $t=t_0$ and bounded on the rest of $[t_0,T]$, and the integral on the left-hand side of (\ref{Si14}) is finite. This contradiction with (\ref{Si14}) rules out the existence of such $T$, which completes the proof. \endproof We complement the inequalities of Lemma 2 with the following statements, which demonstrate the monotone disposition of the values of different singular solutions.
\begin{lemma} For $t>0$ small enough and singular points $(f(z_1,t_1),t_1)$, $(f(z_0,t_0),t_0)$ for equation (\ref{Le3}), $0<t_1<t_0<t$, the following inequalities $$f_2(z_1,t)<f_2(z_0,t),\;\;\;f_1(z_0,t)<f_1(z_1,t)$$ hold. \end{lemma} \proof Similarly to Lemma 2, subtract the equations $$\frac{df_1(z_1,t)}{dt}=\frac{2}{f_1(z_1,t)-\root3\of t}, \;\;\;f_1(z_1,t_1)=\root3\of{t_1},$$ $$\frac{df_1(z_0,t)}{dt}=\frac{2}{f_1(z_0,t)-\root3\of t}, \;\;\;f_1(z_0,t_0)=\root3\of{t_0},$$ and obtain $$\frac{d(f_1(z_1,t)-f_1(z_0,t))}{dt}= \frac{2(f_1(z_0,t)-f_1(z_1,t))}{(f_1(z_1,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)},$$ which can be written in the form $$\frac{d\log(f_1(z_1,t)-f_1(z_0,t))}{dt}=\frac{-2} {(f_1(z_1,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)}.$$ Suppose that $T>t_0$ is the smallest number for which $f_1(z_1,T)=f_1(z_0,T)$. This implies that \begin{equation} \int_{t_0}^T\frac{dt}{(f_1(z_1,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)}= \infty. \label{Si16} \end{equation} To evaluate the integral in (\ref{Si16}) we appeal to (\ref{Si13}) and obtain that there exists a number $T'>0$ such that for $0<t<T'$, $f_1(z_0,t)-\root3\of t$ is increasing with $t$, $t_0<t<T<T'$. Therefore, the integral on the left-hand side of (\ref{Si16}) is finite. This contradiction with (\ref{Si16}) rules out the existence of such $T$ and proves the second inequality of Lemma 3. To prove the first inequality of Lemma 3, subtract the equations $$\frac{df_2(z_1,t)}{dt}=\frac{2}{f_2(z_1,t)-\root3\of t},\;\;\; f_2(z_1,t_1)=\root3\of{t_1},$$ $$\frac{df_2(z_0,t)}{dt}=\frac{2}{f_2(z_0,t)-\root3\of t}, \;\;\;f_2(z_0,t_0)=\root3\of{t_0},$$ and obtain, after dividing by $f_2(z_1,t)-f_2(z_0,t)$, $$\frac{d\log(f_2(z_0,t)-f_2(z_1,t))}{dt}=\frac{-2} {(f_2(z_1,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)}.$$ Suppose that $T>t_0$ is the smallest number for which $f_2(z_0,T)=f_2(z_1,T)$. This implies that \begin{equation} \int_{t_0}^T\frac{dt}{(f_2(z_1,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)}=\infty.
\label{Si17} \end{equation} To evaluate the integral in (\ref{Si17}) we appeal to (\ref{Si15}) and note that $\root3\of t+6\root3\of{t^2}>\root3\of t>f_2(z_0,t)$. Consequently, the right-hand side in (\ref{Si15}) is negative, and $f_2(z_0,t)-\root3\of t$ is decreasing with $t$, $t_0<t<T$. Therefore, the integral on the left-hand side of (\ref{Si17}) is finite. This contradiction with (\ref{Si17}) rules out the existence of such $T$, which completes the proof. \endproof {\it Proof of Theorem 1.} For $t_0>0$, there is a hull $K_{t_0}\subset\mathbb H$ such that $f(\cdot,t_0)$ maps $\mathbb H\setminus K_{t_0}$ onto $\mathbb H$. We refer to \cite{LMR} for definitions and more details. The hull $K_{t_0}$ is driven by $\root3\of t$. The function $f(\cdot,t_0)$ extends continuously onto the set of prime ends on $\partial(\mathbb H\setminus K_{t_0})$ and maps this set onto $\mathbb R$. One of the prime ends is mapped onto $\root3\of{t_0}$. Let $z_0=z_0(t_0)$ represent this prime end. Lemmas 2 and 3 describe the structure of the pre-image of $\mathbb H$ under $f(\cdot,t)$. All the singular solutions $f_1(0,t)$, $f_2(0,t)$, $f_1(z_0,t)$, $f_2(z_0,t)$, $0<t_0<t<T'$, are real-valued and satisfy the inequalities of Lemmas 2 and 3. So the segment $I=[f_2(0,t),f_1(0,t)]$ is the union of the segments $I_2=[f_2(0,t),\root3\of t]$ and $I_1=[\root3\of t,f_1(0,t)]$. The segment $I_2$ consists of the points $f_2(z(\tau),t)$, $0\leq\tau<t$, and the segment $I_1$ consists of the points $f_1(z(\tau),t)$, $0\leq\tau<t$. All these points belong to the boundary $\mathbb R=\partial\mathbb H$. This means that all the points $z(\tau)$, $0\leq\tau<t$, belong to the boundary $\partial(\mathbb H\setminus K_t)$ of $\mathbb H\setminus K_t$. Moreover, every point $z(\tau)$ except for the tip determines exactly two prime ends corresponding to $f_1(z(\tau),t)$ and $f_2(z(\tau),t)$. Evidently, $z(\tau)$ is continuous on $[0,t]$.
This proves that $\gamma(\tau):=z(\tau)$ represents a curve $\gamma:=K_t$ with prime ends corresponding to points on the two sides of $\gamma$. Hence $f^{-1}(w,t)$ maps $\mathbb H$ onto the slit domain $\mathbb H\setminus\gamma(t)$ for $t>0$ small enough. It remains to show that $\gamma(t)$ is a $C^1$-curve. Fix $t_0>0$ from a neighborhood of $t=0$. Denote by $g(w,t)=f^{-1}(w,t)$ the inverse of $f(z,t)$, and set $h(w,t):=f(g(w,t_0),t)$, $t\geq t_0$. The arc $\gamma[t_0,t]:=K_t\setminus K_{t_0}$ is mapped by $f(z,t_0)$ onto a curve $\gamma_1(t)$ in $\mathbb H$ emanating from $\root3\of{t_0}\in\mathbb R$. So the function $h(w,t)$ is well defined on $\mathbb H\setminus\gamma_1(t)$, $t\geq t_0$. Expand $h(w,t)$ near infinity, $$h(w,t)=g(w,t_0)+\frac{2t}{g(w,t_0)}+O\left(\frac{1} {g^2(w,t_0)}\right)=w+\frac{2(t-t_0)}{w}+O\left(\frac{1}{w^2}\right).$$ This expansion agrees with (\ref{exp}) after the change of variables $t\to t-t_0$. The function $h(w,t)$ satisfies the differential equation $$\frac{dh(w,t)}{dt}=\frac{2} {h(w,t)-\root3\of t},\;\;\;h(w,t_0)=w,\;\;\;w\in\mathbb H.$$ This equation becomes the L\"owner differential equation if $t_1:=t-t_0$, $h_1(w,t_1):=h(w,t_0+t_1)$, \begin{equation} \frac{dh_1(w,t_1)}{dt_1}=\frac{2}{h_1(w,t_1)-\root3\of{t_1+t_0}},\;\;\; h_1(w,0)=w,\;\;\;w\in\mathbb H. \label{Cu1} \end{equation} The driving function $\lambda(t_1)=\root3\of{t_1+t_0}$ in (\ref{Cu1}) is analytic for $t_1\geq0$. It is known [1,~p.59] that under this condition $h_1(w,t_1)$ maps $\mathbb H\setminus\gamma_1$ onto $\mathbb H$, where $\gamma_1$ is a $C^1$-curve in $\mathbb H$ emanating from $\lambda(0)=\root3\of{t_0}$. The same holds for the function $h(w,t)$. Going back to $f(z,t)=h(f(z,t_0),t)$, we see that $f(z,t)$ maps $\mathbb H\setminus\gamma(t)$ onto $\mathbb H$, $\gamma(t)=\gamma[0,t_0]\cup\gamma[t_0,t]$, and $\gamma[t_0,t]$ is a $C^1$-curve. Letting $t_0$ tend to 0, we conclude that $\gamma(t)$ is a $C^1$-curve, except possibly for the point $\gamma(0)=0$.
This completes the proof. \section{Harmonic measures of the slit sides} The function $f(z,t)$ solving (\ref{Le3}) maps $\mathbb H\setminus\gamma(t)$ onto $\mathbb H$. The curve $\gamma(t)$ has two sides. Denote by $\gamma_1=\gamma_1(t)$ the side of $\gamma$ which is mapped by the extended function $f(z,t)$ onto $I_1=[\root3\of t,f_1(0,t)]$. Similarly, $\gamma_2=\gamma_2(t)$ is the side of $\gamma$ which is the pre-image of $I_2=[f_2(0,t),\root3\of t]$ under $f(z,t)$. Recall that the harmonic measures $\omega(f^{-1}(i,t);\gamma_k,\mathbb H\setminus\gamma(t),t)$ of $\gamma_k$ at $f^{-1}(i,t)$ with respect to $\mathbb H\setminus\gamma(t)$ are defined by the functions $\omega_k$ which are harmonic on $\mathbb H\setminus\gamma(t)$ and extend continuously to its closure except for the endpoints of $\gamma$, $\omega_k|_{\gamma_k(t)}=1$, $\omega_k|_{\mathbb R\cup(\gamma(t)\setminus\gamma_k(t))}=0$, $k=1,2$, see, e.g., [6,~Chap.3, \S3.6]. Denote $$m_k(t):=\omega(f^{-1}(i,t);\gamma_k,\mathbb H\setminus\gamma(t),t),\;\;\;k=1,2.$$ \begin{theorem} Let $f(z,t)$ be a solution to the L\"owner equation (\ref{Le3}). Then \begin{equation} \lim_{t\to+0}\frac{m_1(t)}{m_2^2(t)}=6\pi. \label{har} \end{equation} \end{theorem} \proof The harmonic measure is invariant under conformal transformations. So $$\omega(f^{-1}(i,t);\gamma_k,\mathbb H\setminus\gamma(t),t)=\Omega(i;f(\gamma_k,t),\mathbb H,t)$$ are given by the functions $\Omega_k$ which are harmonic on $\mathbb H$ and extend continuously to $\mathbb R$ except for the endpoints of $f(\gamma_k,t)$, $\Omega_k|_{f(\gamma_k,t)}=1$, $\Omega_k|_{\mathbb R\setminus f(\gamma_k,t)}=0$, $k=1,2$. The solution of this problem is known, see, e.g., [5,~p.334]. Namely, $$m_k(t)=\frac{\alpha_k(t)}{\pi}$$ where $\alpha_k(t)$ is the angle under which the segment $I_k=I_k(t)$ is observed from the point $w=i$, $k=1,2$. It remains to find asymptotic expansions for $\alpha_k(t)$.
Since $$f_1(0,t)=\root3\of t+6\root3\of{t^2}+O(t),\;\;\;f_2(0,t)=-3\root3\of{t^2}+O(t),\;\;\;t\to+0,$$ after elementary geometric considerations we have $$\alpha_1(t)=\arctan f_1(0,t)-\arctan\root3\of t=6\root3\of{t^2}+O(t),\;\;\;t\to+0,$$ $$\alpha_2(t)=\arctan\root3\of t-\arctan f_2(0,t)=\root3\of t+3\root3\of{t^2}+O(t),\;\;\;t\to+0.$$ This implies that $$\frac{m_1(t)}{m_2^2(t)}=\pi\frac{6\root3\of{t^2}+O(t)}{(\root3\of t+3\root3\of{t^2}+O(t))^2}= 6\pi(1+O(\root3\of t)),\;\;\;t\to+0,$$ which leads to (\ref{har}) and completes the proof. \endproof \begin{remark} A relation similar to (\ref{har}) follows from \cite{ProkhVas} for the two sides of the circular slit $\gamma(t)$ in $\mathbb H$ such that $\gamma(t)$ is tangential to $\mathbb R$ at $z=0$. \end{remark} \section{Representation of holomorphic solutions} Holomorphic solutions to (\ref{Le3}) or, equivalently, to (\ref{Le2}) appear in a neighborhood of every non-singular point $(z_0,0)$. We will be interested in real solutions corresponding to $z_0\in\mathbb R$. Put $z_0=\epsilon>0$ and let \begin{equation} f(\epsilon,t)=\epsilon+\sum_{n=1}^{\infty}a_n(\epsilon)t^{n/3} \label{hol} \end{equation} be a solution of equation (\ref{Le3}) holomorphic with respect to $\tau=\root3\of t$. Replace $\root3\of t$ by $\tau$ and substitute (\ref{hol}) into (\ref{Le2}) to get \begin{equation} \sum_{n=1}^{\infty}na_n(\epsilon)\tau^{n-1}\left[\epsilon-\tau+\sum_{n=1}^{\infty} a_n(\epsilon)\tau^n\right]=6\tau^2. \label{coe} \end{equation} Equating coefficients of like powers on both sides of (\ref{coe}) we obtain \begin{equation} a_1(\epsilon)=0,\;\;\;a_2(\epsilon)=0,\;\;\;a_k(\epsilon)=\frac{6}{k\epsilon^{k-2}}, \;\;\;k=3,4,5, \label{de1} \end{equation} and \begin{equation} a_n(\epsilon)=\frac{1}{n\epsilon}\left[(n-1)a_{n-1}(\epsilon)- \sum_{k=3}^{n-3}(n-k)a_{n-k}(\epsilon)a_k(\epsilon)\right], \;\;\;n\geq6. \label{de2} \end{equation} The series in (\ref{hol}) converges for $|\tau|=|\root3\of t|<R(\epsilon)$.
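The recursion (\ref{de1})--(\ref{de2}) is straightforward to evaluate. The sketch below (an illustration, not part of the paper) computes the coefficients $a_n(\epsilon)$ and compares the truncated series with a direct Runge--Kutta integration of (\ref{Le2}) started at $z=\epsilon$:

```python
# Illustration only: evaluate a_n(eps) from (de1)-(de2) and compare the
# truncated series eps + sum_n a_n tau^n with a numerical solution of
# dg/dtau = 6 tau^2 / (g - tau), g(0) = eps, at a point tau well inside
# the disk of convergence.

def series_coeffs(eps, N):
    a = [0.0] * (N + 1)                      # a_1 = a_2 = 0 by (de1)
    for k in (3, 4, 5):
        a[k] = 6.0 / (k * eps**(k - 2))      # (de1)
    for n in range(6, N + 1):                # (de2)
        s = sum((n - k) * a[n - k] * a[k] for k in range(3, n - 2))
        a[n] = ((n - 1) * a[n - 1] - s) / (n * eps)
    return a

def rk4(rhs, y0, t1, steps):
    h, t, y = t1 / steps, 0.0, y0
    for _ in range(steps):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h * k1 / 2)
        k3 = rhs(t + h / 2, y + h * k2 / 2)
        k4 = rhs(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

eps, tau = 1.0, 0.1
a = series_coeffs(eps, 40)
g_series = eps + sum(a[n] * tau**n for n in range(3, 41))
g_ode = rk4(lambda s, y: 6 * s**2 / (y - s), eps, tau, 2000)
print(abs(g_series - g_ode))  # small: the truncated series matches the ODE
```

For $\epsilon=1$ the first coefficients are $a_3=2$, $a_4=3/2$, $a_5=6/5$, $a_6=-1$, in agreement with (\ref{de1})--(\ref{de2}).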
\begin{theorem} The series in (\ref{hol}) converges for \begin{equation} |t|<\epsilon^3+o(\epsilon^3),\;\;\;\epsilon\to+0. \label{rad} \end{equation} \end{theorem} \proof We estimate the radius of convergence $R(\epsilon)$ following the Cauchy majorant method, see, e.g., [4,~Chap.1, \S\S2-3], [16,~Chap.3, \S1]. The Cauchy theorem states: if the right-hand side in (\ref{Le2}) is holomorphic on a product of the closed disks $|g-\epsilon|\leq\rho_1$ and $|\tau|\leq r_1$ and is bounded there by $M$, then the series $\sum_{n=1}^{\infty}a_n(\epsilon)\tau^n$ converges in the disk $$|\tau|<R(\epsilon)=r_1\left(1-\exp\left\{-\frac{\rho_1}{2Mr_1}\right\}\right).$$ In the case of equation (\ref{Le2}) we have $$\rho_1+r_1<\epsilon,\;\;\text{and}\;\;M=\frac{6r_1^2}{\epsilon-(\rho_1+r_1)}.$$ This implies that for $\rho_1+r_1=\epsilon-\delta,$ $\delta>0$, $$R(\epsilon)=r_1\left(1-\exp\left\{-\frac{\epsilon- \delta-r_1}{12r_1^3}\delta\right\}\right).$$ So $R(\epsilon)$ depends on $\delta$ and $r_1$. The maximum of $R$ with respect to $\delta$ is attained at $\delta=(\epsilon-r_1)/2$. Hence, this maximum is equal to \begin{equation} R_1(\epsilon)=r_1\left(1-\exp\left\{-\frac{(\epsilon-r_1)^2}{48r_1^3}\right\}\right), \label{eR1} \end{equation} where $R_1(\epsilon)$ depends on $r_1$. Let us find the maximum of $R_1$ with respect to $r_1\in(0,\epsilon)$. Notice that $R_1$ vanishes for $r_1=0$ and $r_1=\epsilon$. Therefore the maximum of $R_1$ is attained at a certain root $r_1=r_1(\epsilon)\in(0,\epsilon)$ of the derivative of $R_1$ with respect to $r_1$. To simplify the calculations we put $r_1(\epsilon)=\epsilon c(\epsilon)$, $0<c(\epsilon)<1$. Now the derivative of $R_1$ vanishes for $c=c(\epsilon)$ satisfying \begin{equation} 1-\exp\left\{-\frac{(1-c)^2}{48\epsilon c^3}\right\}\left(1+\frac{(1-c)(3-c)}{48\epsilon c^3}\right)=0.
\label{der} \end{equation} Choose a sequence $\{\epsilon_n\}_{n=1}^{\infty}$ of positive numbers, $\lim_{n\to\infty}\epsilon_n=0$, such that $c(\epsilon_n)$ converges to $c_0$ as $n\to\infty$. Suppose that $c_0<1$. Then $$\exp\left\{-\frac{(1-c(\epsilon_n))^2}{48\epsilon_nc^3(\epsilon_n)}\right\} \left(1+\frac{(1-c(\epsilon_n))(3-c(\epsilon_n))}{48\epsilon_nc^3(\epsilon_n)}\right)<1$$ for $n$ large enough. Therefore $c(\epsilon_n)$ is not a root of equation (\ref{der}) for $\epsilon=\epsilon_n$ and $n$ large enough. This contradiction shows that $c_0=1$ for every sequence $\{\epsilon_n>0\}_{n=1}^{\infty}$ tending to 0 with $\lim_{n\to\infty}c(\epsilon_n)=c_0$. So we have proved that $c(\epsilon)\to1$ as $\epsilon\to+0$. Consequently, the maximum of $R_1$ with respect to $r_1$ is attained at $r_1(\epsilon)=\epsilon c(\epsilon)=\epsilon(1+o(1))$ as $\epsilon\to+0$. Let $R_2=R_2(\epsilon)$ denote the maximum of $R_1$ with respect to $r_1$. It follows from (\ref{eR1}) that \begin{equation} R_2(\epsilon)=r_1(\epsilon) \left(1-\exp\left\{-\frac{(\epsilon-r_1(\epsilon))^2}{48r_1^3(\epsilon)}\right\}\right)= \epsilon c(\epsilon)\left(1-\exp\left\{-\frac{(1-c(\epsilon))^2}{48\epsilon c^3(\epsilon)}\right\}\right). \label{max} \end{equation} Let us examine how fast $c(\epsilon)$ tends to 1 as $\epsilon\to+0$. Choose a sequence $\{\epsilon_n>0\}_{n=1}^{\infty}$, $\lim_{n\to\infty}\epsilon_n=0$, such that the sequence $(1-c(\epsilon_n))^2/\epsilon_n$ converges to a non-negative number or to $\infty$. Denote $$l:=\lim_{n\to\infty}\frac{(1-c(\epsilon_n))^2}{\epsilon_n},\;\;\;0\leq l\leq\infty.$$ If $0<l<\infty$, then $(1-c(\epsilon_n))/\epsilon_n$ tends to $\infty$, and equality (\ref{der}) fails for $\epsilon=\epsilon_n$, $c=c(\epsilon_n)$, for $n$ large enough.
If $l=0$, then, according to (\ref{der}), $\lim_{n\to\infty}(1-c(\epsilon_n))/\epsilon_n=0$, and $$\exp\left\{-\frac{(1-c(\epsilon_n))^2}{48\epsilon_n c^3(\epsilon_n)}\right\}\left(1+\frac{(1-c(\epsilon_n))(3-c(\epsilon_n))}{48\epsilon_n c^3(\epsilon_n)}\right)=$$ $$\left(1-\frac{(1-c(\epsilon_n))^2}{48\epsilon_n c^3(\epsilon_n)}+o\left(\frac{(1-c(\epsilon_n))^2}{\epsilon_n}\right)\right) \left(1+\frac{(1-c(\epsilon_n))(3-c(\epsilon_n))}{48\epsilon_n c^3(\epsilon_n)}\right)=$$ $$1+\frac{1-c(\epsilon_n)}{24\epsilon_n}+ o\left(\frac{1-c(\epsilon_n)}{\epsilon_n}\right), \;\;\;n\to\infty.$$ This again implies that equality (\ref{der}) fails for $\epsilon=\epsilon_n$, $c=c(\epsilon_n)$, for $n$ large enough. Thus the only possible case is $l=\infty$ for all sequences $\{\epsilon_n>0\}_{n=1}^{\infty}$ converging to 0. It follows from (\ref{max}) that \begin{equation} R_2(\epsilon)=\max_{0<r_1<\epsilon}R_1(\epsilon)=\epsilon+o(\epsilon), \;\;\;\epsilon\to0. \label{R2} \end{equation} In other words, the series in (\ref{hol}) converges for $|t|<(\epsilon+o(\epsilon))^3$, $\epsilon\to0$, which implies the statement of Theorem 3 and completes the proof. \endproof \begin{remark} Evidently, a similar conclusion with the same formulas (\ref{de1}) and (\ref{de2}) is true for $\epsilon<0$. \end{remark} \end{document}
\begin{document} \title{\large{\textbf{ON THE ASYMPTOTIC EXPANSION OF THE \\ SUM OF THE FIRST $n$ PRIMES}}} { \begin{flushleft} \textit{Dedicated to Dr. Madan Mohan Singh, my first mathematics teacher.} \end{flushleft} } \begin{abstract} \small{An asymptotic formula for the sum of the first $n$ primes is derived. This result improves the previous results of P. Dusart. Using this asymptotic expansion, we prove the conjectures of R. Mandl and G. Robin on the upper and the lower bound of the sum of the first $n$ primes respectively.} \end{abstract} \maketitle \section{Introduction} Let $p_n$ denote the $n^{th}$ prime \footnote{ 2000 \textit{Mathematics Subject Classification.} 11A41.} \footnote{ \textit{Key words and phrases.} Primes, Inequalities.}. Robert Mandl conjectured that \begin{equation} \sum_{r \le n}p_r < \frac{np_n}{2}. \label{mandl} \end{equation} This conjecture was proven by Rosser and Schoenfeld in \cite{rs} and is now referred to as Mandl's inequality. An alternate version of the proof was given by Dusart in \cite{pd}. In the same paper, Dusart also showed that \begin{equation} \sum_{r \le n}p_r = \frac{n^2}{2}(\ln n + \ln\ln n - 3/2 + o(1)). \label{dusart1} \end{equation} Currently, the best improvement of Mandl's inequality is due to Hassani, who showed (see \cite{mh}) that for $n \ge 10$, \begin{equation} \sum_{r \le n}p_r < \frac{np_n}{2} - \frac{n^2}{14}. \label{hasanni} \end{equation} With regard to the lower bound, G. Robin conjectured that \begin{equation} n p_{[n/2]} < \sum_{r <n}p_r. \label{robin} \end{equation} This conjecture was also proved by Dusart in \cite{pd}. However, neither \ref{hasanni} nor \ref{robin} gives the exact growth rate of $\sum_{r \le n}p_r$. In this paper, we shall derive an asymptotic formula for $\sum_{r \le n}p_r$. Both Hassani's improvement of Mandl's inequality and Robin's conjecture follow, for all sufficiently large $n$, as corollaries of our asymptotic formula. \section{Asymptotic expansion of $\sum_{r \le n}p_r$} \begin{theorem} (\textbf{M.
Cipolla}) \label{cipolla1} There exists a sequence $(P_m)_{m \ge 1}$ of polynomials with rational coefficients such that, for any integer $m$, \begin{displaymath} p_n = n \Bigg[\ln n + \ln\ln n - 1 + \sum_{r=1}^{m}\frac{(-1)^{r+1}P_r (\ln\ln n)}{\ln^r n} + o\Bigg(\frac{1}{\ln^m n}\Bigg)\Bigg]. \end{displaymath} \end{theorem} This was proved by M. Cipolla in a beautiful paper (see \cite{mc}) in 1902. In the same paper, Cipolla gives a recurrence formula for $P_m$ and shows that every $P_m$ has degree $m$ and leading coefficient $\frac{1}{m}$. In particular, \begin{equation} P_1 (x) = x-2, \;\; P_2 (x) = \frac{1}{2}(x^2 - 6x + 11). \label{cipolla2} \end{equation} \begin{lemma} \label{lemma2.2} If $f$ is monotonic, continuous and defined on $[1,n]$, then \begin{displaymath} \sum_{r\le n}f(r)= \int_{1}^{n} f(x)dx + O(|f(n)|+|f(1)|). \end{displaymath} \end{lemma} \begin{proof} Well known (see \cite{ik}, 1.62--1.67, pp.~19--20). \qedhere\ \end{proof} \begin{theorem} There exists a sequence $(S_m)_{m \ge 1}$ of polynomials with rational coefficients such that, for any integer $m$, \begin{displaymath} \sum_{r \le n}p_r = \frac{n^2}{2} \Bigg[\ln n + \ln\ln n - \frac{3}{2} + \sum_{r=1}^{m}\frac{(-1)^{r+1}S_r (\ln\ln n)}{r\ln^r n} + o\Bigg(\frac{1}{\ln^m n}\Bigg)\Bigg]. \end{displaymath} Further, every $S_m$ has degree $m$ and leading coefficient $1/m$. In particular \\ \begin{displaymath} S_1 (x) = x-\frac{5}{2}, \;\; S_2 (x) = x^2 - 7x + \frac{29}{2}. \end{displaymath} \end{theorem} \begin{proof} We define $p(x)$ as \begin{equation} p(x) = x\ln x + x\ln\ln x - x + \sum_{r=1}^{m}\frac{(-1)^{r+1}xP_r (\ln\ln x)}{\ln^r x}, \label{theorem2.3a} \end{equation} where $P_r(x)$ is the same sequence of polynomials as in Theorem \ref{cipolla1}. It follows from Lemma \ref{lemma2.2} that \begin{displaymath} \sum_{r \le n}p_r = 2 + 3 + \int_{3}^{n}p(x)dx + O(p_n) + o\Bigg(\int_{3}^{n}\frac{x\,dx}{\ln^m x}\Bigg).
\end{displaymath} Now $p_n \sim n \ln n$, whereas \begin{displaymath} \int_{3}^{n}\frac{x\,dx}{\ln^m x} \sim \frac{n^2}{2\ln^m n}, \end{displaymath} which grows much faster than $n \ln n$. Hence \begin{equation} \sum_{r \le n}p_r = \int_{3}^{n}p(x)dx + o\Bigg(\frac{n^2}{\ln^2 n}\Bigg). \label{theorem2.3b} \end{equation} All that remains is to integrate each term of \ref{theorem2.3a}. Except for a couple of simple terms, integrating the terms of \ref{theorem2.3a} results in infinite series; owing to \ref{theorem2.3b}, we may truncate each of these series as soon as a new term grows no faster than the error term in \ref{theorem2.3b}. Since $P_m(\ln\ln x)$ is a polynomial of degree $m$ with rational coefficients and leading coefficient $1/m$, the integration of each term of $p(x)$ results in an infinite series of the type \begin{displaymath} \int_{3}^{n}\frac{xP_m(\ln\ln x)}{\ln^m x}dx = \frac{n^2}{2}\sum_{i=1}^{\infty}\frac{Q_{m,i}(\ln\ln n)}{\ln^{m+i-1} n} + O(1), \end{displaymath} where $Q_{m,i}(x)$ is a polynomial of degree $m$ with rational coefficients and leading coefficient $1/m$. Thus the polynomial $S_m(x)$ is of degree $m$ and has rational coefficients with leading coefficient $1/m$. \\ To find $S_1(x)$ and $S_2(x)$ we integrate the terms of $p(x)$ with $m=2$, namely \begin{equation} x\ln x + x\ln\ln x - x + \frac{x\ln\ln x - 2x}{\ln x} - \frac{x\ln^2 \ln x -6x\ln\ln x + 11x}{2\ln^2 x}.
\label{theorem2.3c} \end{equation} Integrating each term separately, we have \begin{equation} \int_{3}^{n} x \ln x dx = \frac{n^2 \ln n}{2} - \frac{n^2}{4} + O(1) \label{theorem2.3d} \end{equation} \begin{equation} \int_{3}^{n} x \ln\ln x dx = \frac{n^2 \ln\ln n}{2} - \frac{n^2}{4\ln n} - \frac{n^2}{8\ln^2 n} + O\Bigg(\frac{n^2}{\ln^3 n}\Bigg) \end{equation} \begin{equation} -\int_{3}^{n} x dx = -\frac{n^2}{2} + O(1) \end{equation} \begin{equation} \int_{3}^{n} \frac{x\ln \ln x}{\ln x} dx = \frac{n^2 \ln \ln n}{2 \ln n} + \frac{n^2 \ln \ln n}{4 \ln^2 n} - \frac{n^2}{4 \ln^2 n} + O\Bigg(\frac{n^2 \ln \ln n}{\ln^3 n}\Bigg) \end{equation} \begin{equation} - 2\int_{3}^{n} \frac{x}{\ln x}dx = - \frac{n^2}{\ln n} - \frac{n^2}{2\ln^2 n} + O\Bigg(\frac{n^2}{\ln^3 n}\Bigg) \end{equation} \begin{equation} -\frac{1}{2}\int_{3}^{n} \frac{x\ln^2 \ln x}{\ln^2 x} dx = -\frac{n^2 \ln^2 \ln n}{4 \ln^2 n} + O\Bigg(\frac{n^2 \ln^2 \ln n}{\ln^3 n}\Bigg) \end{equation} \begin{equation} 3 \int_{3}^{n} \frac{x\ln\ln x}{\ln^2 x} dx = \frac{3n^2 \ln\ln n}{2\ln^2 n} + O\Bigg(\frac{n^2 \ln^2 \ln n}{\ln^3 n}\Bigg) \end{equation} \begin{equation} -\frac{11}{2} \int_{3}^{n} \frac{x}{\ln^2 x} dx = -\frac{11n^2}{4\ln^2 n} + O\Bigg(\frac{n^2}{\ln^3 n}\Bigg) \label{theorem2.3e} \end{equation} Adding \ref{theorem2.3d}--\ref{theorem2.3e} we obtain \begin{displaymath} \sum_{r \le n}p_r = \frac{n^2}{2}\Bigg[\ln n + \ln\ln n - \frac{3}{2} + \frac{\ln\ln n}{\ln n} - \frac{5}{2\ln n} - \frac{\ln^2 \ln n}{2\ln^2 n} \end{displaymath} \begin{equation} + \frac{7 \ln \ln n}{2\ln^2 n} - \frac{29}{4\ln^2 n} + o\Bigg(\frac{1}{\ln^2 n}\Bigg) \Bigg]. \label{theorem2.3f} \end{equation} \\ Notice that taking only the first three terms of \ref{theorem2.3f} we obtain Dusart's result \ref{dusart1}. This proves the theorem.
\qedhere\ \end{proof} \section{The inequality of Robin} From the asymptotic expansion of $\sum_{r \le n}p_r$ we can not only prove the inequalities of Mandl \ref{mandl} and Robin \ref{robin} but also refine them. \begin{lemma}\label{lemma3.2} \begin{displaymath} \sum_{r < n}p_r = n p_{[n/2]} + \frac{2\ln 2 - 1}{4}n^2 + O\Bigg(\frac{n^2 \ln\ln n}{\ln n}\Bigg). \end{displaymath} \end{lemma} \begin{proof} Taking $[n/2]$ in place of $n$ in the asymptotic expansion of the $n^{th}$ prime we obtain \begin{displaymath} n p_{[n/2]} = \frac{n^2}{2}(\ln n + \ln \ln n - 1 - \ln 2) + O\Bigg(\frac{n^2 \ln\ln n}{\ln n}\Bigg) \end{displaymath} \begin{displaymath} = \frac{n p_n}{2} -\frac{n^2 \ln 2}{2} + O\Bigg(\frac{n^2 \ln\ln n}{\ln n}\Bigg). \end{displaymath} Using \ref{theorem2.3f} we can reduce this to \begin{displaymath} = \sum_{r <n}p_r + \frac{n^2}{4} -\frac{n^2 \ln 2}{2} + O\Bigg(\frac{n^2 \ln\ln n}{\ln n}\Bigg). \end{displaymath} This proves the lemma. \qedhere\ \end{proof} Since the second term in Lemma \ref{lemma3.2} is positive, it follows that for all sufficiently large $n$, Robin's inequality \ref{robin} is true. \small{e-mail: \texttt{nilotpalsinha@gmail.com, nilotpal.sinha@greatlakes.edu.in}} \end{document}
\begin{document} \title{Gromov hyperbolicity, John spaces and quasihyperbolic geodesics} \author{Qingshan Zhou} \address{Qingshan Zhou, School of Mathematics and Big Data, Foshan University, Foshan, Guangdong 528000, People's Republic of China} \email{q476308142@qq.com} \author{Yaxiang Li${}^{\mathbf{*}}$} \address{Yaxiang Li, Department of Mathematics, Hunan First Normal University, Changsha, Hunan 410205, People's Republic of China} \email{yaxiangli@163.com} \author{Antti Rasila} \address{Antti Rasila, College of Science, Guangdong Technion -- Israel Institute of Technology, Shantou, Guangdong 515063, People's Republic of China} \email{antti.rasila@gtiit.edu.cn; antti.rasila@iki.fi} \def\@arabic\c@footnote{} \footnotetext{ \texttt{\tiny File:~\jobname .tex, printed: \number\year-\number\month-\number\day, \thehours.\ifnum\theminutes<10{0}\fi\theminutes} } \makeatletter\def\@arabic\c@footnote{\@arabic\c@footnote}\makeatother \date{} \subjclass[2010]{Primary: 30C65, 30L10, 30F45; Secondary: 30C20} \keywords{ Quasihyperbolic metric, Gromov hyperbolic spaces, John spaces, quasihyperbolic geodesic.\\ ${}^{\mathbf{*}}$ Corresponding author} \begin{abstract} We show that every quasihyperbolic geodesic in a John space admitting a roughly starlike Gromov hyperbolic quasihyperbolization is a cone arc. This result provides a new approach to the elementary metric geometry question, formulated in \cite[Question 2]{Hei89}, which has been studied by Gehring, Hag, Martio and Heinonen. As an application, we obtain a simple geometric condition connecting uniformity of the space with the existence of a Gromov hyperbolic quasihyperbolization. \end{abstract} \thanks{The research was partly supported by NNSF of China (Nos. 11601529, 11671127, 11571216).} \maketitle{} \pagestyle{myheadings} \markboth{}{} \section{Introduction} The unit disk or Poincar\'e disk $\mathbb{D}$ serves as a canonical model in the study of conformal mappings and hyperbolic geometry in complex analysis.
It is a noncomplete metric space with the metric inherited from the two-dimensional Euclidean space $\mathbb{R}^2$. On the other hand, the unit disk equipped with the Poincar\'e metric is a complete Riemannian $2$-manifold with constant negative curvature. This observation can be used in investigating the hyperbolic metric on planar domains and conformal mappings between them. A generalization of this idea to higher dimensional spaces, involving quasihyperbolic metrics and Gromov hyperbolicity, was studied by Bonk, Heinonen and Koskela in \cite{BHK}. Well-known geometric properties of a hyperbolic geodesic $[x,y]\subset \mathbb{D}$ with respect to the Euclidean metric are: \begin{itemize} \item $\ell([x,y])\leq C|x-y|$, \item $\min\{\ell([x,z]),\ell([z,y])\}\leq C{\operatorname{dist}}(z,\partial \mathbb{D})$ \end{itemize} for all $z\in [x,y]$, where $C$ is a universal constant. The first of the above conditions says that a hyperbolic geodesic essentially minimizes the length among all curves connecting its endpoints; this is the Gehring-Hayman condition. The second one is called the cone condition or the double twisted condition. Martio and Sarvas studied in \cite{MS78} global injectivity properties of locally injective mappings. They considered a class of domains of $\mathbb{R}^n$, called {\it uniform domains}, in which every pair of points can be connected by a curve satisfying the above two conditions for some constant $C\geq 1$. In \cite{GO}, Gehring and Osgood investigated the geometric properties of the {\it quasihyperbolic metric}, which was introduced by Gehring and Palka \cite{GP76}, and proved that every quasihyperbolic geodesic in a Euclidean uniform domain also satisfies the above two conditions. It should be noted that the class of domains in $\mathbb{R}^n$ satisfying only the second condition, known as {\it John domains}, is large and of independent interest. For instance, the slit disk in $\mathbb{R}^2$ is an example of such a domain.
This class was first considered by John \cite{Jo61} in the context of elasticity theory. Many characterizations of uniform and John domains can be found in the literature, and the importance of these classes of domains in function theory is well established; see for example \cite{GGKN17, LVZ17}. From a geometric point of view, it is a natural question whether each quasihyperbolic geodesic of a John domain is a cone arc. This question was pointed out already in 1989 by Gehring, Hag and Martio \cite{GHM}: \begin{ques}\label{q-1} Suppose $D\subset \mathbb{R}^n$ is a $c$-John domain and that $\gamma$ is a quasihyperbolic geodesic in $D$. Is $\gamma$ a $b$-cone arc for some $b=b(c)$? \end{ques} They proved in \cite[Theorem $4.1$]{GHM} that every quasihyperbolic geodesic in a simply connected planar John domain is a cone arc. They also constructed several examples to show that a similar result does not hold in higher dimensions. Furthermore, Heinonen has posed the following closely related problem concerning John disks: \begin{ques}\label{q-2}$($\cite[Question 2]{Hei89}$)$ Suppose $D\subset \mathbb{R}^n$ is a $c$-John domain which is quasiconformally equivalent to the unit ball $\mathbb{B}$ and that $\gamma$ is a quasihyperbolic geodesic in $D$. Is $\gamma$ a $b$-cone arc for some constant $b$? \end{ques} With the help of the conformal modulus of path families and the Ahlfors $n$-regularity of the $n$-dimensional Hausdorff measure on $\mathbb{R}^n$, Bonk, Heinonen and Koskela \cite[Theorem $7.12$]{BHK} gave an affirmative answer to Question \ref{q-2} for bounded domains, with the constant depending on the dimension $n$ of the space. Recently, Guo \cite[Remark 3.10]{Guo15} provided a geometric method to deal with this question. His method was based on the result that a noncomplete metric space with a roughly starlike Gromov hyperbolic quasihyperbolization satisfies the Gehring-Hayman condition and the ball separation condition.
These properties were established by Koskela, Lammi and Manojlovi\'{c} in \cite[Theorem $1.2$]{KLM14}. The constant $b$ in their results depends on the dimension $n$ as well. The second author of this paper considered a related question for quasihyperbolic quasigeodesics in the setting of Banach spaces \cite{Li}. Note that quasihyperbolic geodesics may not exist in infinite-dimensional spaces, even under a convexity assumption \cite{RT2}. The concept of uniformity in a metric space setting was first introduced by Bonk, Heinonen and Koskela \cite{BHK}, where they connected uniformity to the negative curvature of the space, understood in the sense of Gromov. Moreover, they generalized the result of Gehring and Osgood and showed that every quasihyperbolic geodesic in a $c$-uniform space must be a $C$-uniform arc with $C=C(c)$, see \cite[Theorem 2.10]{BHK}. They also proved that a $c$-uniform space is Gromov $\delta$-hyperbolic with respect to its quasihyperbolic metric for some constant $\delta=\delta(c)$, see \cite[Theorem 3.6]{BHK}. In view of the above results, it is natural to consider the following more general question: \begin{ques}\label{q-3} Let $D$ be a locally compact, rectifiably connected noncomplete metric space. If $D$ is an $a$-John space and $(D,k)$ is $\delta$-hyperbolic, is every quasihyperbolic geodesic $\gamma$ a $b$-cone arc with $b$ depending only on $a$ and $\delta$? \end{ques} In this paper, we study these questions. Our main result is the following: \begin{thm}\label{thm-1} Let $D$ be a locally compact, rectifiably connected noncomplete metric space. If $D$ is $a$-John and $(D,k)$ is $K$-roughly starlike and $\delta$-hyperbolic, then every quasihyperbolic geodesic in $D$ is a $b$-cone arc, where $b$ depends only on $a, \delta$ and $K$. \end{thm} Every proper domain $D$ in $\mathbb{R}^n$ is a locally compact, rectifiably connected noncomplete metric space.
Following the terminology of \cite{BB03}, we call a locally compact, rectifiably connected noncomplete metric space $(D,d)$ {\it minimally nice}. For a minimally nice space $(D,d)$, we say that $D$ has a {\it Gromov hyperbolic quasihyperbolization} if $(D,k)$ is $\delta$-hyperbolic for some constant $\delta\geq 0$, where $k$ is the quasihyperbolic metric (for the definition see Subsection \ref{sub-2.2}). \begin{rem} The class of minimally nice John metric spaces which admit a roughly starlike Gromov hyperbolic quasihyperbolization is very wide. For example, it includes (inner) uniform domains (more generally, uniform metric spaces), simply connected John domains in the plane, and Gromov $\delta$-hyperbolic John domains in $\mathbb{R}^n$. \end{rem} \begin{rem} In view of the above, Theorem \ref{thm-1} states that all of the quasihyperbolic geodesics in the mentioned spaces are cone arcs. Moreover, Theorem \ref{thm-1} answers Question \ref{q-2} in the affirmative, and also Question \ref{q-3} under a relatively mild additional condition. \end{rem} \begin{rem} The main tool in the proof of Theorem \ref{thm-1} is the uniformization process for (Gromov) hyperbolic spaces, which was introduced by Bonk, Heinonen and Koskela in \cite{BHK}. They proved that each proper, geodesic and roughly starlike $\delta$-hyperbolic space is quasihyperbolically equivalent to a $c$-uniform space; see \cite[4.5 and 4.37]{BHK}. The uniformization process of Bonk, Heinonen and Koskela has many applications and is an important tool in many related papers, see e.g. \cite{BB03, KLM14}. \end{rem} From \cite[Theorem 3.22]{Vai05} it follows that every $\delta$-hyperbolic domain of ${\mathbb R}^n$ is $K$-roughly starlike with $K$ depending only on $\delta$. Then we have the following corollary of Theorem \ref{thm-1}. \begin{cor} Every quasihyperbolic geodesic in an $a$-John, $\delta$-hyperbolic domain $D$ of ${\mathbb R}^n$ is a $b$-cone arc with $b$ depending only on $a$ and $\delta$.
\end{cor} \begin{rem} A proper domain $D$ in $\mathbb{R}^n$ is called $\delta$-{\it hyperbolic} for some $\delta\geq 0$, if $D$ has a Gromov hyperbolic quasihyperbolization. We remark that the above result is an improvement of \cite[Lemma $3.9$]{Guo15} whenever $\varphi(t)=Ct$ for some positive constant $C$. Also, we do not require the domain to be bounded. \end{rem} \begin{rem} There are many applications of the above mentioned classes of domains of $\mathbb{R}^n$ in quasiconformal mapping theory and potential theory, see e.g. \cite{BHK, CP17,GNV94,Guo15, NV}. A crucial ingredient in the related arguments is the fact that quasihyperbolic geodesics in Gromov hyperbolic John domains of $\mathbb{R}^n$ are inner uniform curves. \end{rem} As another motivation for this study, we remark that Bonk, Heinonen and Koskela established the following characterization of Gromov hyperbolic domains on the $2$-sphere in \cite{BHK}. \begin{Thm}\label{Thm-1} $($\cite[Theorem 1.12]{BHK}$)$ Gromov hyperbolic domains on the $2$-sphere are precisely the conformal images of inner uniform slit domains. \end{Thm} A {\it slit domain} is a proper subdomain $D$ of the Riemann sphere such that each component of its complement is a point or a line segment parallel to the real or imaginary axis. It is well known that every domain in the Riemann sphere is conformally equivalent to a slit domain. In \cite{BHK}, Bonk, Heinonen and Koskela also pointed out that their proof of Theorem \Ref{Thm-1} is ``surprisingly indirect, using among other things the theory of modulus and Loewner spaces as developed recently in \cite{HK}, plus techniques from harmonic analysis", and asked for an elementary proof as well. In \cite{BB03}, Balogh and Buckley proved that a minimally nice metric space has a Gromov hyperbolic quasihyperbolization if and only if it satisfies the Gehring-Hayman condition and a ball separation condition.
Their proof is also based on an analytic assumption that the space supports a suitable Poincar\'{e} inequality. Recently, Koskela, Lammi and Manojlovi\'{c} in \cite{KLM14} observed that Poincar\'{e} inequalities are not critical for this characterization of Gromov hyperbolicity, see \cite[Theorem 1.2]{KLM14}. By using the above results, and as an application of Theorem \ref{thm-1}, we give the following simple geometric condition connecting the uniformity of a space to its other properties: \begin{thm}\label{thm-2} Let $Q>1$ and let $(X,d,\mu)$ be a proper, $Q$-regular $A$-annularly quasiconvex length metric measure space. Let $D$ be a bounded proper subdomain of $X$. Then $D$ is uniform if and only if it is John or linearly locally connected, quasiconvex, and has a Gromov hyperbolic quasihyperbolization. \end{thm} \begin{rem} With the aid of Theorem \ref{thm-1} and some auxiliary results obtained in \cite{KLM14}, the proof of Theorem \ref{thm-2} is essentially elementary and needs only techniques from metric geometry and some estimates concerning the quasihyperbolic metric. It is not difficult to see that Theorem \Ref{Thm-1} is a direct corollary of Theorem \ref{thm-2}. \end{rem} This paper is organized as follows. Section 2 contains notation, basic definitions and auxiliary lemmas. In Section 3, we prove Theorem \ref{thm-1}. The proof of Theorem \ref{thm-2} is presented in Section 4. \section{Preliminaries} \subsection{Metric geometry} Let $(D, d)$ be a metric space, and let $B(x,r)$ and $\overline{B}(x,r)$ be the open ball and closed ball (of radius $r$ centered at the point $x$) in $D$, respectively. For a set $A$ in $D$, we use $\overline{A}$ to denote the metric completion of $A$ and $\partial A=\overline{A}\setminus A$ to denote its metric boundary. A metric space $D$ is called {\it proper} if its closed balls are compact.
Following the terminology of \cite{BB03}, we call a locally compact, rectifiably connected noncomplete metric space $(D,d)$ {\it minimally nice}. By a curve, we mean a continuous function $\gamma:$ $I=[a,b]\to D$. If $\gamma$ is an embedding of $I$, it is also called an {\it arc}. The image set $\gamma(I)$ of $\gamma$ is also denoted by $\gamma$. A curve $\gamma$ is called {\it rectifiable} if its length $\ell_d(\gamma)<\infty$. A metric space $(D, d)$ is called {\it rectifiably connected} if every pair of points in $D$ can be joined by a rectifiable curve $\gamma$. The length function associated with a rectifiable curve $\gamma$: $[a,b]\to D$ is $z_{\gamma}$: $[a,b]\to [0, \ell(\gamma)]$, given by $z_{\gamma}(t)=\ell(\gamma|_{[a,t]})$. For any rectifiable curve $\gamma:$ $[a,b]\to D$, there is a unique map $\gamma_s:$ $[0, \ell(\gamma)]\to D$ such that $\gamma=\gamma_s\circ z_{\gamma}$. Obviously, $\ell(\gamma_s|_{[0,t]})=t$ for $t\in [0, \ell(\gamma)]$. The curve $\gamma_s$ is called the {\it arclength parametrization} of $\gamma$. For a rectifiable curve $\gamma$ in $D$, the line integral over $\gamma$ of a Borel function $\varrho:$ $D\to [0, \infty)$ is $$\int_{\gamma}\varrho ds=\int_{0}^{\ell(\gamma)}\varrho\circ \gamma_s(t) dt.$$ We say that an arc $\gamma$ is a {\it geodesic} joining $x$ and $y$ in $D$ if $\gamma$ is a map from an interval $[0,l]$ to $D$ such that $\gamma(0)=x$, $\gamma(l)=y$ and $$d(\gamma(t),\gamma(t'))=|t-t'| \quad \mbox{for all}\;\; t,t'\in [0,l].$$ Every rectifiably connected metric space $(D, d)$ admits a natural (or intrinsic) metric, the so-called length distance, given by $$\ell(x, y) := \inf\ell(\gamma),$$ where the infimum is taken over all rectifiable curves $\gamma$ joining $x$ and $y$ in $D.$ A metric space $(D, d)$ is a {\it length space} provided that $d(x, y) = \ell(x, y)$ for all points $x, y\in D$. It is also common to call such a $d$ an intrinsic distance function.
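To make the distinction between a metric and its length distance concrete, here is a small numerical illustration (our own toy example, not taken from the paper): the unit circle with the metric inherited from $\mathbb{R}^2$ is rectifiably connected but not a length space, since antipodal points have distance $2$ while their length distance is $\pi$.

```python
# Toy example (ours): on the unit circle with the chordal metric from R^2,
# the length distance l(x, y) between antipodal points is pi, while the
# chordal distance d(x, y) is 2 -- so the chordal metric is not intrinsic.
import math

def polyline_length(points):
    """Euclidean length of a polyline given as a list of (x, y) points."""
    return sum(
        math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)
    )

# Approximate the upper half-circle from (1, 0) to (-1, 0) by a fine polyline;
# its length converges to the intrinsic distance l((1, 0), (-1, 0)) = pi.
N = 100000
arc = [(math.cos(math.pi * i / N), math.sin(math.pi * i / N)) for i in range(N + 1)]
ell = polyline_length(arc)
d = math.dist((1.0, 0.0), (-1.0, 0.0))
print(d, ell)  # chordal distance 2, length distance close to pi
```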
\subsection{Quasihyperbolic metric, quasigeodesics and solid arcs}\label{sub-2.2} Suppose $\gamma $ is a rectifiable curve in a minimally nice space $(D,d)$; its {\it quasihyperbolic length} is the number $$\ell_{k_D}(\gamma)=\int_{\gamma}\frac{|dz|}{d_D(z)}, $$ where $d_D(z)={\operatorname{dist}}(z,\partial D)$ is the distance from $z$ to the boundary of $D$. For each pair of points $x$, $y$ in $D$, the {\it quasihyperbolic distance} $k_D(x,y)$ between $x$ and $y$ is defined by $$k_D(x,y)=\inf\ell_{k_D}(\gamma), $$ where the infimum is taken over all rectifiable curves $\gamma$ joining $x$ to $y$ in $D$. We remark that the resulting space $(D,k_D)$ is complete, proper and geodesic (cf. \cite[Proposition $2.8$]{BHK}). We recall the following basic estimates for the quasihyperbolic distance, first used by Gehring and Palka \cite[2.1]{GP76} (see also \cite[(2.3), (2.4)]{BHK}): \begin{equation}\label{li-1} k_D(x,y)\geq \log\Big(1+\frac{d(x,y)} {\min\{d_D(x), d_D(y)\}}\Big)\geq \Big|\log\frac{d_D(x)}{d_D(y)}\Big|.\end{equation} In fact, more generally, for any rectifiable curve $\gamma$ joining $x$ and $y$ we have \begin{equation}\label{li-2} \ell_{k_D}(\gamma)\geq \log\Big(1+\frac{\ell(\gamma)} {\min\{d_D(x), d_D(y)\}}\Big). \end{equation} Moreover, we have the following estimate: \begin{lem}\label{newlemlabel} Let $D$ be a minimally nice length space. Then for $x,y\in D$ with $d(x,y) < d_D(x)$, we have $$k_D(x,y)\leq \frac{d(x,y)}{d_D(x)-d(x,y)}.$$ \end{lem} \begin{pf} Let $0<\epsilon<\frac{1}{2}(d_D(x)-d(x,y))$. Since $D$ is a length space, there is a curve $\alpha$ joining $x$ and $y$ such that $\ell(\alpha)\leq d(x,y)+\epsilon$. Thus we have $\ell(\alpha)<d_D(x)$, which implies that $\alpha\subset B(x,d_D(x))\cap D$. Hence, we compute $$k_D(x,y)\leq \int_{\alpha}\frac{|dz|}{d_D(z)}\leq \frac{\ell(\alpha)}{d_D(x)-\ell(\alpha)}<\frac{d(x,y)+\epsilon}{d_D(x)-d(x,y)-\epsilon}.$$ By letting $\epsilon\to 0$, we get the desired inequality. \end{pf} \begin{defn} \label{def1.4} Suppose $\gamma$ is an arc in a minimally nice space $D$.
The arc may be closed, open or half open. Let $\overline{x}=(x_0,$ $\ldots,$ $x_n)$, $n\geq 1$, be a finite sequence of successive points of $\gamma$. For $h\geq 0$, we say that $\overline{x}$ is {\it $h$-coarse} if $k_D(x_{j-1}, x_j)\geq h$ for all $1\leq j\leq n$. Let $\Phi_{k_D}(\gamma,h)$ denote the family of all $h$-coarse sequences of $\gamma$. Set $$z_{k_D}(\overline{x})=\sum^{n}_{j=1}k_D(x_{j-1}, x_j)$$ and $$\ell_{k_D}(\gamma, h)=\sup \{z_{k_D}(\overline{x}): \overline{x}\in \Phi_{k_D}(\gamma,h)\}$$ with the agreement that $\ell_{k_D}(\gamma, h)=0$ if $\Phi_{k_D}(\gamma,h)=\emptyset$. Then the number $\ell_{k_D}(\gamma, h)$ is the {\it $h$-coarse quasihyperbolic length} of $\gamma$. \end{defn} \begin{defn} \label{def1.5} Let $D$ be a minimally nice space. An arc $\gamma\subset D$ is {\it $(\nu, h)$-solid} with $\nu\geq 1$ and $h\geq 0$ if $$\ell_{k_D}(\gamma[x,y], h)\leq \nu\;k_D(x,y)$$ for all $x$, $y\in \gamma$. \end{defn} Let $\lambda\geq 1$ and $\mu\geq 0$. A curve $\gamma$ in $D$ is a {\it $(\lambda, \mu)$-quasigeodesic} if $$\ell_{k_D}(\gamma[x,y]) \leq \lambda k_D(x,y)+\mu$$ for all $x,y\in \gamma.$ If $\lambda=1$ and $\mu=0$, then $\gamma$ is a quasihyperbolic geodesic. \begin{defn}Let $D$ and $D'$ be two minimally nice metric spaces and let $M\geq 1$. We say that a homeomorphism $f: D\to D'$ is an {\it $M$-quasihyperbolic mapping}, or briefly {\it $M$-QH}, if for all $x$, $y\in D$, $$\frac{1 }{M}k_D(x,y)\leq k_{D'}(f(x),f(y))\leq M\;k_D(x,y) .$$\end{defn} In the following, we use $x$, $y$, $z$, $\ldots$ to denote the points in $D$, and $x'$, $y'$, $z'$, $\ldots$ the images of $x$, $y$, $z$, $\ldots$ in $D'$, respectively, under $f$. For arcs $\alpha$, $\beta$, $\gamma$, $\ldots$ in $D$, we also use $\alpha'$, $\beta'$, $\gamma'$, $\ldots$ to denote their images in $D'$. Under quasihyperbolic mappings, we have the following useful relationship between $(\lambda, \mu)$-quasigeodesics and solid arcs.
\begin{lem}\label{ll-001} Suppose that $G$ and $G'$ are minimally nice metric spaces. If $f:\;G\to G'$ is $M$-QH, and $\gamma$ is a $(\lambda, \mu)$-quasigeodesic in $G$, then there are constants $\nu=\nu(\lambda, \mu, M)$ and $h=h(\lambda, \mu, M)$ such that the image $\gamma'$ of $\gamma$ under $f$ is $(\nu,h)$-solid in $G'$. \end{lem} \begin{pf} Let $\gamma$ be a $(\lambda,\mu)$-quasigeodesic and let $$h=M(\lambda+\mu)\;\; \mbox{and}\;\; \nu=M^2(\lambda+\mu).$$ To show that $\gamma'$ is $(\nu,h)$-solid, we only need to verify that for all $x$, $y\in \gamma$, \begin{equation}\label{new-eq-3}\ell_{k_{G'}}(\gamma'[x',y'],h)\leq\nu k_{G'}(x',y').\end{equation} We prove this by considering two cases. The first case is $k_G(x,y)<1$. Then for $z$, $w\in\gamma[x, y]$, we have $$k_{G'}(z',w')\leq Mk_G(z,w)\leq M(\lambda k_G(x,y)+\mu)<M(\lambda+\mu)=h,$$ and so \begin{equation}\label{ma-3}\ell_{k_{G'}}(\gamma'[x',y'],h)=0.\end{equation} Now, we consider the other case, $k_G(x,y)\geq 1$. Then with the aid of \cite[Theorem 4.9]{Vai6}, we have \begin{eqnarray}\label{ma-4} \ell_{k_{G'}}(\gamma'[x',y'],h) &\leq& M\ell_{k_G}(\gamma[x,y])\leq M(\lambda k_G(x,y)+\mu)\\ \nonumber &\leq& M(\lambda+\mu)k_{G}(x,y) \leq M^2(\lambda+\mu)k_{G'}(x',y').\end{eqnarray} It follows from \eqref{ma-3} and \eqref{ma-4} that \eqref{new-eq-3} holds, completing the proof.\end{pf} \subsection{Uniform spaces and John spaces} In this subsection we first recall the definitions of John spaces, cone arcs and uniform spaces. We also give some results on certain special arcs which will be useful later in the proof of the main result. \begin{defn} Let $a\geq 1$. A minimally nice space $(D,d)$ is called {\it $a$-John} if each pair of points $x,y\in D$ can be joined by a rectifiable arc $\alpha$ in $D$ such that for all $z\in \alpha$, $$\min\{\ell(\alpha[x,z]), \ell(\alpha[z,y])\}\leq a d_D(z),$$ where $\alpha[x,z]$ and $\alpha[z,y]$ denote the two subarcs of $\alpha$ divided by the point $z$.
The arc $\alpha$ is called an {\it $a$-cone} arc. \end{defn} \begin{defn} Let $c\geq 1$. A minimally nice space $(D,d)$ is called {\it $c$-uniform} if each pair of points $x,y\in D$ can be joined by a $c$-uniform arc. An arc $\alpha$ with endpoints $x$ and $y$ is called {\it $c$-uniform} if it is a $c$-cone arc and satisfies the $c$-quasiconvexity condition, that is, $\ell(\alpha)\leq c\, d(x,y).$ \end{defn} \begin{Lem}\label{Lem-uniform}$($\cite[(2.16)]{BHK}$)$\; If $D$ is a $c$-uniform metric space, then for all $x,y\in D$, we have $$ k_{D}(x,y)\leq 4c^2 \log\Big(1+\frac{d(x,y)}{\min\{d_{D}(x),d_{D}(y)\}}\Big).$$\end{Lem} The following properties of solid arcs in uniform metric spaces, which will be used in our proofs, are from \cite{LVZ2}. \begin{Lem}\label{Lem13''}$($\cite[Lemma 3]{LVZ2}$)$\, Suppose that $D$ is a $c$-uniform space, and that $\gamma$ is a $(\nu,h)$-solid arc in $D$ with endpoints $x$, $y$. Let $d_D(x_0)=\max_{p\in \gamma}d_D(p)$. Then there exist constants $a_1=a_1( c, \nu, h)\geq 1$ and $a_2=a_2(c, \nu, h)\geq 1$ such that \begin{enumerate} \item ${\operatorname{diam}}(\gamma[x,u])\leq a_1 d_D(u)$ for $u\in \gamma[x,x_0],$ and ${\operatorname{diam}}(\gamma[y,v])\leq a_1 d_D(v)$ for $v\in \gamma[y, x_0]$; \item ${\operatorname{diam}}(\gamma)\leq \max\big\{a_2 d(x,y), 2(e^h-1)\min\{d_D(x),d_D(y)\}\big\}.$ \end{enumerate} \end{Lem} Next we discuss the properties of cone arcs. \begin{lem}\label{eq-8} Let $\alpha[x,y]$ be an $a$-cone arc in $D$ and let $z_0$ bisect the arclength of $\alpha[x,y]$. Then for each $z_1$, $z_2\in\alpha[x,z_0]$ $($or $\alpha[y,z_0]$$)$ with $z_2\in \alpha[z_1,z_0]$, we have $$k_D(z_1,z_2)\leq \ell_k(\alpha[z_1,z_2])\leq 2a \log\big(1+\frac{2\ell(\alpha[z_1,z_2])}{d_D(z_1)}\big)$$ and $$\ell_{k}(\alpha[z_1,z_2])\leq 4a^2k_{D}(z_1,z_2)+4a^2.$$ \end{lem} \begin{pf} By symmetry, we only need to verify the assertion in the case $z_1$, $z_2\in\alpha[x,z_0]$.
To this end, let $z_2\in \alpha[z_1,z_0]$ be given. By the cone condition, we have $$d_D(z_2)\geq \frac{1}{a}\ell(\alpha[z_1,z_2]).$$ If $z_2\in B(z_1, \frac{1}{2}d_D(z_1))$, then $d_D(z_2)\geq \frac{1}{2}d_D(z_1)$. Otherwise, we have $d_D(z_2)\geq\frac{1}{2a}d_D(z_1)$. Hence in both cases we obtain $$d_D(z_2)\geq \frac{1}{4a}[2\ell(\alpha[z_1,z_2])+d_D(z_1)],$$ which yields that \begin{eqnarray*}k_{D}(z_1,z_2) &\leq& \ell_k(\alpha[z_1,z_2])= \int_{\alpha[z_1,z_2]}\frac{|dz|}{d_D(z)}\\ \nonumber &\leq& 2a\log\Big(1+\frac{2\ell(\alpha[z_1,z_2])}{d_D(z_1)}\Big)\\ \nonumber &\leq& 4a^2\log\Big(1+\frac{d_D(z_2)}{d_D(z_1)}\Big)\\ \nonumber &\leq& 4a^2k_{D}(z_1,z_2)+4a^2,\end{eqnarray*} as desired. \end{pf} \begin{lem} \label{lem13-0-0}Suppose that $f: D\to D'$ is an $M$-QH mapping from an $a$-John minimally nice space $D$ to a $c$-uniform space $D'$. Let $\alpha$ be an $a$-cone arc in $D$ with end points $x$ and $y$, let $z_0$ bisect the arclength of $\alpha$, and let $d_{D'}(v'_1)=\max\{d_{D'}(u'): u'\in\alpha'[x',z'_0]\}$ and $d_{D'}(v'_2)=\max\{d_{D'}(u'): u'\in\alpha'[y',z'_0]\}$. Then there is a constant $a_3=a_3(a,c,M)$ such that \begin{enumerate} \item for each $z'\in \alpha'[x', v'_1]$, $d' (x',z')\leq a_3\;d_{D'}(z')$ and for each $z'\in \alpha'[v'_1, z'_0]$, $d' (z'_0,z')\leq a_3\;d_{D'}(z')$; \item for each $z'\in \alpha'[y', v'_2]$, $d' (y',z')\leq a_3\;d_{D'}(z')$ and for each $z'\in \alpha'[v'_2, z'_0]$, $d' (z'_0,z')\leq a_3\;d_{D'}(z')$. \end{enumerate} \end{lem} \begin{pf} First, in the light of Lemma \ref{eq-8}, we see that $\alpha[x,z_0]$ and $\alpha[z_0,y]$ are $(4a^2,4a^2)$-quasigeodesics. Since $f: D\to D'$ is $M$-QH, we thus know from Lemma \ref{ll-001} that $\alpha'[x',z'_0]$ and $\alpha'[z'_0,y']$ are solid arcs. Moreover, by the choice of $v_1'$ and $v_2'$, $(1)$ and $(2)$ follow from Lemma \Ref{Lem13''}. \end{pf} \subsection{Uniformization theory of Bonk, Heinonen and Koskela } Let $(X,d)$ be a geodesic metric space and let $\delta\geq 0$.
If for every triple of geodesics $[x,y], [y,z], [z,x]$ in $(X,d)$, every point in $[x,y]$ is within distance $\delta$ from $[y,z]\cup [z,x]$, then the space $(X,d)$ is called a {\it $\delta$-hyperbolic space}. For simplicity, in the rest of this paper, when we say that a minimally nice space $X$ is {\it Gromov hyperbolic} we mean that the space is $\delta$-hyperbolic with respect to the quasihyperbolic metric for some nonnegative constant $\delta$. In \cite{BHK}, Bonk, Heinonen and Koskela introduced the concept of rough starlikeness of a Gromov hyperbolic space with respect to a given base point. Let $X$ be a proper, geodesic $\delta$-hyperbolic space and let $w\in X$. We say that $X$ is {\it $K$-roughly starlike} with respect to $w$ if for each $x\in X$ there is some point $\xi\in\partial^* X$ and a geodesic ray $\gamma=[w,\xi]$ with ${\operatorname{dist}}(x,\gamma)\leq K$. They also proved that both bounded uniform spaces and hyperbolic domains in ${\mathbb R}^n$ $($domains that, equipped with the quasihyperbolic metric, are Gromov hyperbolic spaces$)$ are roughly starlike. It turns out that this property serves as an important tool in several studies, for instance \cite{BB03}, \cite{ZR} and \cite{KLM14}. Next we recall the conformal deformations which were introduced by Bonk, Heinonen and Koskela (cf. \cite[Chapter $4$]{BHK}). Let $(X,d)$ be a minimally nice space and $w\in X$. Consider the family of conformal deformations of $(X,k)$ by the densities $$\rho_\epsilon(x)=e^{-\epsilon k(x,w)}\;\;(\epsilon>0).$$ For $u$, $v\in X$, let $$d_\epsilon(u,v)=\inf\int_{\gamma} \rho_\epsilon ds_k,$$ where $ds_k$ is the arc-length element with respect to the metric $k$ and the infimum is taken over all rectifiable curves $\gamma$ in $X$ with endpoints $u$ and $v$. Then $d_\epsilon$ are metrics on $X$, and we denote the resulting metric spaces by $X_\epsilon=(X,d_\epsilon)$.
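To see the densities $\rho_\epsilon$ at work in the simplest possible case, consider the following toy example (ours, not from \cite{BHK}): $X=(0,\infty)$ with the Euclidean metric, so that $\partial X=\{0\}$, $d_X(x)=x$ and $k(x,y)=|\log(x/y)|$. With base point $w=1$, the substitution $t=\log x$ turns the $d_\epsilon$-diameter of $X$ into $\int_{-\infty}^{\infty}e^{-\epsilon|t|}\,dt=2/\epsilon$, which matches the diameter bound in the lemma below. The sketch verifies this numerically:

```python
# Toy example (ours, not from the paper): X = (0, oo) with the Euclidean
# metric has d_X(x) = x and quasihyperbolic metric k(x, y) = |log(x/y)|.
# With base point w = 1, rho_eps(x) = exp(-eps*|log x|), and the
# d_eps-diameter of X is the integral of exp(-eps*|t|) over t = log x in R,
# i.e. exactly 2/eps.
import math

def deformed_diameter(eps, T=40.0, steps=400_000):
    """Trapezoidal approximation of the integral of exp(-eps*|t|) on [-T, T]."""
    h = 2 * T / steps
    total = math.exp(-eps * T)  # endpoint contributions: 0.5*f(-T) + 0.5*f(T)
    for i in range(1, steps):
        total += math.exp(-eps * abs(-T + i * h))
    return total * h

for eps in (0.5, 1.0, 2.0):
    diam = deformed_diameter(eps)
    print(eps, diam, 2 / eps)  # numeric diameter vs the exact value 2/eps
```

In this example the bound $\operatorname{diam} X_\epsilon\leq 2/\epsilon$ is attained with equality.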
The next result shows that the deformations $X_{\epsilon}$ are uniform spaces and that each proper, geodesic and roughly starlike $\delta$-hyperbolic space is {\it quasihyperbolically equivalent} to a $c$-uniform space; see \cite[Propositions 4.5 and 4.37]{BHK}. \begin{Lem}\label{lem-1}$($\cite[Propositions $4.5$ and $4.37$]{BHK} or \cite[Lemma $4.12$]{BB03}$)$ Suppose $(X,d)$ is minimally nice, locally compact and that $(X,k)$ is both $\delta$-Gromov hyperbolic and $K$-roughly starlike, for some $\delta\geq 0$, $K>0$. Then $X_\epsilon$ has diameter at most $2/\epsilon$ and there are positive numbers $c, \epsilon_0$ depending only on $\delta, K$ such that $X_\epsilon$ is $c$-uniform for all $0<\epsilon\leq \epsilon_0$. Furthermore, there exists $c_0=c_0(\delta,K)\in(0,1)$ such that the quasihyperbolic metrics $k$ and $k_\epsilon$ satisfy the quasi-isometric condition $$c_0\epsilon k(x,y)\leq k_\epsilon(x,y)\leq e \epsilon k(x,y).$$ \end{Lem} \section{The proof of Theorem \ref{thm-1}} Let $(D,d)$ be a minimally nice $a$-John metric space and let $(D,k)$ be $K$-roughly starlike and $\delta$-hyperbolic, where $k$ is the quasihyperbolic metric of $D$. Then by Lemma \Ref{lem-1}, we know that there is a positive number $\epsilon=\epsilon(\delta,K)$ such that $(D,d_{\epsilon})$ is a $c$-uniform metric space and the identity map from $(D,d)$ to $(D,d_{\epsilon})$ is $M$-QH, where $c$ and $M$ depend only on $\delta$ and $K$. For simplicity, we write $D=(D,d)$ and $(D',d')=(D,d_{\epsilon})$, and let $f$ denote the identity map from $D$ to $D'$. We may assume without loss of generality that $D$ is a length space, because the length of an arc, and therefore the quasihyperbolic metric, is the same for the original metric and the induced length metric. Fix $z_1$, $z_2\in D$ and let $\gamma$ be a quasihyperbolic geodesic joining $z_1$, $z_2$ in $D$.
Let $b=4a_4e^{a_4}$, $a_4=a_5^{8c^2M}$, $a_5=a_6^{4a^2M}$ and $a_6=(8a_1^2a_3)^{16c^2M}a^2$, where $a_1$ and $a_3$ are the constants from Lemmas \Ref{Lem13''} and \ref{lem13-0-0}, respectively. In the following, we shall prove that $\gamma$ is a $b$-cone arc, that is, for each $y\in\gamma$, $$\min\{\ell(\gamma[z_1, y]),\; \ell(\gamma[z_2, y])\}\leq b\,d_D(y).$$ Let $x_0\in \gamma$ be a point such that $d_D(x_0)=\max\limits_{z\in \gamma}d_D(z). $ By symmetry, we only need to prove that for $y\in\gamma[z_1,x_0]$, \begin{eqnarray} \label{John}\ell(\gamma[z_1, y])\leq b\,d_D(y).\end{eqnarray} To this end, let $m \geq 0$ be an integer such that $$2^{m}\, d_D(z_1) \leq d_D(x_0)< 2^{m+1}\, d_D(z_1), $$ and let $y_0$ be the first point in $\gamma[z_1,x_0]$ from $z_1$ to $x_0$ with $$d_D(y_0)=2^{m}\, d_D(z_1). $$ Observe that if $d_D(x_0)=d_D(z_1)$, then $y_0=z_1=x_0$. Let $y_1=z_1$. If $z_1=y_0$, we let $y_2=x_0$. It is possible that $y_2=y_1$. If $z_1\not= y_0$, then we let $y_2,\ldots ,y_{m+1}$ be the points such that for each $i\in \{2,\ldots,m+1\}$, $y_i$ denotes the first point in $\gamma[z_1,x_0]$ from $y_1$ to $x_0$ satisfying $$d_D(y_i)=2^{i-1}\, d_D(y_1).$$ Then $y_{m+1}=y_0$. We let $y_{m+2}=x_0$. It is possible that $y_{m+2}=y_{m+1}=x_0=y_0$; this happens precisely when $x_0=y_0$. From the choice of the $y_i$ we observe that for $y\in \gamma[y_i,y_{i+1}]$ $(i\in\{1, 2, \ldots, m+1\})$, \begin{equation}\label{li-newadd-1} d_D(y)<d_D(y_{i+1})=2d_D(y_i)\end{equation} and so for all $i\in\{1, 2, \ldots, m+1\}$, \begin{equation}\label{li-newadd-2} k_{D}(y_i,y_{i+1}) =\ell_k(\gamma[y_i,y_{i+1}])\geq \frac{\ell(\gamma[y_i,y_{i+1}])}{2d_D(y_i)}.\end{equation} To prove Theorem \ref{thm-1}, we shall estimate an upper bound for the quasihyperbolic distance between $y_i$ and $y_{i+1}$, which we state as follows.
\begin{lem}\label{eq-0}For each $i\in \{1,\ldots, m+1\}$, $k_{D}(y_i,y_{i+1})\leq a_4$.\end{lem} We note that Theorem \ref{thm-1} can be obtained from Lemma \ref{eq-0} as follows. First, we observe from \eqref{li-newadd-2} and Lemma \ref{eq-0} that for all $i\in\{1,\ldots, m+1\}$, \begin{equation}\label{li-1'} \ell(\gamma[y_i,y_{i+1}])\leq 2a_4 \,d_D(y_i).\end{equation} Further, for each $y\in \gamma[y_1,x_{0}]$, there is some $i\in \{1,\ldots,m+1\}$ such that $y\in \gamma[y_i,y_{i+1}]$. It follows from \eqref{li-1} that $$ \log \frac{d_D(y_i)}{d_D(y)}\leq k_D(y,y_i)\leq k_D(y_i,y_{i+1})\leq a_4 ,$$ whence $$d_D(y_i)\leq e^{ a_4 }d_D(y).$$ From this and \eqref{li-1'} it follows that \begin{eqnarray}\label{eq(li-3)} \ell(\gamma[z_1,y])&=& \ell(\gamma[y_1,y_2])+\ell(\gamma[y_2,y_3])+\ldots+\ell(\gamma[y_i,y]) \\ \nonumber &\leq& 2a_4 (d_D(y_1)+d_D(y_2)+\ldots+d_D(y_i))\\ \nonumber &\leq& 4a_4 \,d_D(y_i)\leq 4a_4 e^{a_4 }\,d_D(y),\end{eqnarray} as desired; here the second inequality uses $d_D(y_j)=2^{j-1}\,d_D(y_1)$, so that $\sum_{j=1}^{i}d_D(y_j)=(2^{i}-1)\,d_D(y_1)<2\,d_D(y_i)$. This proves \eqref{John} and so Theorem \ref{thm-1} follows. Hence to complete the proof of Theorem \ref{thm-1}, we only need to prove Lemma \ref{eq-0}. \subsection{The proof of Lemma \ref{eq-0}} Without loss of generality, we may assume that $d_{D'}(y'_i)\leq d_{D'}(y'_{i+1})$. We note that if $ d (y_i, y_{i+1})<\frac{1}{2}d_D(y_i),$ then by Lemma \ref{newlemlabel} we have $$k_D(y_i, y_{i+1})\leq 1,$$ as desired. Therefore, we assume in the following that \begin{eqnarray}\label{eq(4-2)}d (y_i, y_{i+1})\geq \frac{1}{2}d_D(y_i).\end{eqnarray} Let $\alpha_i$ be an $a$-cone arc joining $y_i$ and $y_{i+1}$ in $D$ and let $v_i$ bisect the arclength of $\alpha_i$.
Then Lemma \ref{eq-8} implies that \begin{eqnarray}\label{hl-eq(4-1-2)}\;\;\;\;\;k_{D}(y_i,y_{i+1})&\leq& k_{D}(y_i,v_i)+k_{D}(v_i,y_{i+1})\\ \nonumber &\leq& 2a\bigg(\log \Big( 1+\frac{2\ell(\alpha_i[y_i,v_i])} {d_D(y_i)}\Big)+\log \Big( 1+\frac{2\ell(\alpha_i[y_{i+1},v_i])} {d_D(y_{i+1})}\Big)\bigg)\\ \nonumber &\leq& 4a\log \Big( 1+\frac{\ell(\alpha_i)} {d_D(y_i)}\Big). \nonumber \end{eqnarray} Now we divide the proof of Lemma \ref{eq-0} into two cases. \begin{ca} \label{ca1} $\ell(\alpha_i)< a_5 d (y_i, y_{i+1}).$\end{ca} Then by \eqref{li-newadd-2} and \eqref{hl-eq(4-1-2)} we compute \begin{eqnarray}\label{eq(h-h-4-2')} \frac{d(y_i,y_{i+1})}{2d_D(y_i)}&\leq& k_{D}(y_i,y_{i+1}) \leq 4a \log \Big( 1+\frac{\ell(\alpha_i)} {d_D(y_i)}\Big) \\ \nonumber &\leq& 4a \log \Big( 1+\frac{ a_5d(y_i,y_{i+1})} {d_D(y_i)}\Big).\end{eqnarray} A necessary condition for \eqref{eq(h-h-4-2')} is $$ d (y_i,y_{i+1})\leq a_5^2\,d_D(y_i).$$ Hence we deduce from (\ref{eq(h-h-4-2')}) that $k_{D}(y_i,y_{i+1})\leq a_4$, as desired. \begin{ca} \label{ca2} $\ell(\alpha_i)\geq a_5 d (y_i, y_{i+1}).$\end{ca} In this case we argue by contradiction.
Suppose on the contrary that \begin{eqnarray}\label{eq(h-4-2)}k_{D}(y_i,y_{i+1})> a_4.\end{eqnarray} Then by Lemma \Ref{Lem-uniform}, we get \begin{eqnarray*}a_4<k_{D}(y_i,y_{i+1})\leq M k_{D'}(y'_i,y'_{i+1}) \leq 4c^2M\log\Big(1+\frac{d' (y'_i,y'_{i+1})}{d_{D'}(y'_i)}\Big),\end{eqnarray*} and so \begin{eqnarray}\label{eq(h-4-1')}d' (y'_i,y'_{i+1})\geq a_5d_{D'}(y'_i).\end{eqnarray} Therefore, by the choice of $v_i\in\alpha_i$ we obtain $$d_D(v_i)\geq \frac{\ell(\alpha_i)}{2a}\geq \frac{a_5}{2a} d (y_i,y_{i+1})>a_6\, d (y_i,y_{i+1}).$$ From this and \eqref{eq(4-2)} we deduce that there exists a point $v_{i,0}\in \alpha_i[y_i,v_i]$ such that \begin{equation}\label{eq-11} d_D(v_{i,0})=a_6\, d (y_i,y_{i+1}).\end{equation} Moreover, we claim that \begin{equation}\label{claim1}k_{D}(y_i,v_{i,0})\leq \frac{1}{a_5}k_{D}(y_i,y_{i+1}).\end{equation} Otherwise, we would see from Lemma \ref{eq-8} and \eqref{eq-11} that \begin{eqnarray*}k_{D}(y_i,y_{i+1})&<& a_5 k_{D}(y_i,v_{i,0})\leq 4aa_5 \log\Big(1+\frac{\ell(\alpha_i[y_{i},v_{i,0}])}{d_D(y_i)}\Big) \\ \nonumber &\leq& 4aa_5 \log\Big(1+\frac{a\,d_D(v_{i,0})}{d_D(y_i)}\Big) \leq 4a^2a_5a_6\log\Big(1+\frac{ d (y_i,y_{i+1})}{d_D(y_i)}\Big), \end{eqnarray*} which together with \eqref{li-newadd-2} shows that $$\frac{d(y_i,y_{i+1})}{d_D(y_i)}\leq 8a^2a_5a_6\log\Big(1+\frac{ d (y_i,y_{i+1})}{d_D(y_i)}\Big).$$ A necessary condition for the above inequality is $$ d (y_i,y_{i+1})\leq a_5^2\,d_D(y_i).$$ This shows that $k_{D}(y_i,y_{i+1})\leq a_4$, which contradicts \eqref{eq(h-4-2)}. Thus we get \eqref{claim1}. Then it follows from Lemma \Ref{Lem-uniform} and \eqref{claim1} that \begin{eqnarray*} k_{D'}(y'_i,v'_{i,0})&<& Mk_{D}(y_i,v_{i,0}) \leq\frac{M}{a_5}k_{D}(y_i,y_{i+1}) \\ &\leq& \frac{M^2}{ a_5}k_{D'}(y'_i,y'_{i+1}) \leq \frac{4c^2M^2}{ a_5}\log\Big(1+\frac{d' (y'_i,y'_{i+1})}{d_{D'}(y'_i)}\Big).
\end{eqnarray*} Hence, by an elementary computation we see from \eqref{li-1} and \eqref{eq(h-4-1')} that \begin{eqnarray*} \log \Big(1+\frac{d' (y'_i,v'_{i,0})}{d_{D'}(y'_i)}\Big) \leq k_{D'}(y'_i,v'_{i,0})\leq \log\Big(1+\frac{d' (y'_i,y'_{i+1})}{a_5d_{D'}(y'_i)}\Big), \end{eqnarray*} which implies that \begin{eqnarray}\label{eq(hl-41-5)}d' (y'_i,v'_{i,0})< \frac{1}{a_5}d' (y'_i,y'_{i+1}).\end{eqnarray} Moreover, we deduce from (\ref{eq(hl-41-5)}) and (\ref{eq(h-4-1')}) that \begin{eqnarray}\label{eq--2} d_{D'}(v'_{i,0})\leq d' (y'_i,v'_{i,0})+d_{D'}(y'_i)\leq \frac{2}{a_5}d' (y'_i,y'_{i+1}).\end{eqnarray} We recall that $v_i$ is the point in the cone arc $\alpha_i[y_i,y_{i+1}]$ which bisects the arclength of $\alpha_i$. Next we need to estimate the location of the image point $v'_i$ in $\alpha'_i$. We claim the following. \begin{cl}\label{eq--6} $d'(y'_i,v'_i)<\frac{1}{2}d' (y'_i,y'_{i+1}).$ \end{cl} We prove this claim by contradiction. Suppose on the contrary that \begin{equation}\label{neweqlabel}d' (y'_i,v'_i)\geq \frac{1}{2}d' (y'_i,y'_{i+1}).\end{equation} Let $u'_{0,i}\in\gamma'[y'_{i}, y'_{i+1}]$ be a point satisfying $$d_{D'}(u'_{0,i})=\max\{d_{D'}(w'):w'\in\gamma'[y'_{i}, y'_{i+1}]\}.$$ Then we see from Lemma \Ref{Lem13''} that \begin{equation}\label{e---1} d_{D'}(u'_{0,i})\geq \frac{1}{a_1}\max\{d' (y'_{i+1},u'_{0,i}), d' (u'_{0,i},y'_i)\} \geq \frac{d' (y'_i,y'_{i+1})}{2a_1}.\end{equation} This together with (\ref{eq(h-4-1')}) shows that there exists some point $y'_{0,i}\in \gamma'[y'_i,u'_{0,i}]$ satisfying \begin{eqnarray}\label{eq(W-l-6-1)}d_{D'}(y'_{0,i})=\frac{d' (y'_i,y'_{i+1})}{2a_1}. \end{eqnarray} It follows from Lemma \Ref{Lem13''} that \begin{eqnarray}\label{eq(W-l-6-1add)}d' (y'_i,y'_{0,i})\leq a_1\,d_{D'}(y'_{0,i}).\end{eqnarray} Let $v'_0\in\alpha'_i[y'_{i}, v'_{i}]$ satisfy $d_{D'}(v'_0)=\max\{d_{D'}(u'):u'\in\alpha'_i[y'_{i}, v'_{i}]\}$, see Figure \ref{fig01}.
Then we see from Lemma \ref{lem13-0-0} that for each $z'\in \alpha'_i[ v'_i, v'_0]$, \begin{eqnarray}\label{cla-3}d' (v'_i,z')\leq a_3 d_{D'}(z').\end{eqnarray} On the other hand, we recall that $v'_{i,0}$ is the image of the point $v_{i,0}\in \alpha_i[y_i,v_i]$ satisfying \eqref{eq-11} and \eqref{eq(hl-41-5)}. Then by \eqref{eq(hl-41-5)} and \eqref{eq--2} we have \begin{eqnarray*}d' (v'_i,v'_{i,0})&\geq& d' (v'_i,y'_i)-d' (v'_{i,0},y'_i)\geq (\frac{1}{2}-\frac{1}{a_5})d' (y'_i,y'_{i+1})>a_3 d_{D'}(v'_{i,0}). \end{eqnarray*} This means that $v'_0\in \alpha'_i[ v'_{i,0}, v'_i]$. Moreover, we know from Lemma \ref{lem13-0-0} and \eqref{neweqlabel} that $$d_{D'}(v'_0)\geq \frac{1}{a_3}\max\{d' (v'_{i},v'_0), d' (v'_0,y'_i)\}\geq \frac{d' (y'_i,v_i')}{2a_3}\geq \frac{d' (y'_i,y'_{i+1})}{4a_3}.$$ Hence, it follows from (\ref{eq--2}) that there exists some point $u'_0\in \alpha'_i[v'_{i,0},v'_{0}]$ such that \begin{eqnarray}\label{eq(W-l-6-2)} d_{D'}(u'_0)=\frac{d' (y'_i,y'_{i+1})}{4a_3},\end{eqnarray} and so Lemma \ref{lem13-0-0} leads to $$d' (y'_i,u'_0)\leq a_3\,d_{D'}(u'_0).$$ This together with \eqref{eq(W-l-6-1)}, \eqref{eq(W-l-6-1add)} and \eqref{eq(W-l-6-2)} shows that $$d' (u'_0,y'_{0,i})\leq d' (u'_0,y'_i)+d' (y'_i,y'_{0,i})\leq 3a_3d_{D'}(u'_0).$$ Now we are ready to finish the proof of Claim \ref{eq--6}.
It follows from \eqref{li-1} and Lemma \Ref{Lem-uniform} that \begin{eqnarray*} \log \frac{d_D(u_0)}{d_D(y_{0,i})}&\leq& k_{D}(y_{0,i},u_0) \leq M k_{D'}(y'_{0,i},u'_0) \\ \nonumber &\leq& 4c^2M\log\Big(1+\frac{d' (u'_0,y'_{0,i})}{\min\{d_{D'}(u'_0), d_{D'}(y'_{0,i})\}}\Big)\\ \nonumber &<&4c^2M\log (1+3a_3), \end{eqnarray*} which yields that \begin{eqnarray}\label{eq(W-l-6-4)}d_D(u_0)\leq (1+3a_3)^{4c^2M}d_D(y_{0,i})<a_6d_D(y_{0,i}).\end{eqnarray} On the other hand, by Lemma \ref{eq-8} we can get \begin{eqnarray*} k_{D}(v_{i,0},u_0)\leq 4{a}^2\log\Big(1+\frac{d_D(u_0)}{d_D(v_{i,0})}\Big)\end{eqnarray*} and by \eqref{li-1}, \eqref{eq--2} and \eqref{eq(W-l-6-2)} we have that \begin{eqnarray*} k_{D}(v_{i,0},u_0) \geq k_{D'}(v'_{i,0},u'_0) \geq\log\frac{d_{D'}(u'_0)}{d_{D'}(v'_{i,0})} \geq\log\frac{a_5}{8a_3},\end{eqnarray*} which yields $$d_D(u_0)\geq a_6d_D(v_{i,0}).$$ Therefore, we infer from \eqref{eq(4-2)} and \eqref{eq-11} that \begin{align*} d_D(u_0)\geq a_6d_D(v_{i,0})=a_6^2 d (y_i, y_{i+1}) \geq \frac{ a_6^2}{4}d_D(y_{i+1})\geq \frac{ a_6^2}{4}d_D(y_{0,i}),\end{align*} which contradicts (\ref{eq(W-l-6-4)}). Hence Claim \ref{eq--6} holds. Now we continue the proof of Lemma \ref{eq-0}.
We first see from Claim \ref{eq--6} that $$d' (y'_{i+1},v'_i)\geq d'(y'_i,y'_{i+1})-d' (y'_i,v'_i)> \frac{d' (y'_i,y'_{i+1})}{2}\geq d' (y'_i,v'_i).$$ Let $q'_0\in \alpha'_i[y'_i,v'_i]$ and $u'_1\in\alpha'_i[y'_{i+1},v'_i]$ be points such that \begin{eqnarray}\label{112}\;\;\;\;\frac{d' (y'_i,v'_i)}{2a_3}= d' (q'_0,v_i')\,\,\;{\rm and}\,\,\; \frac{d'(y'_i,v'_i)}{2a_3}= d' (u'_1,v_i').\end{eqnarray} Then \begin{align*} d'(y'_i,q_0')\geq d'(y'_i,v_i')-d'(q'_0,v_i')=(2a_3-1)d'(q'_0,v_i')>d'(q'_0,v_i')\end{align*} and \begin{align*} d'(y'_{i+1},u_1')>d'(u'_1,v_i').\end{align*} Thus we get from Lemma \ref{lem13-0-0} that \begin{eqnarray}\label{eq(W-l-6'-0)}\;\;\;\;\;\;\;\;\;\;\;\; d_{D'}(q'_0)\geq \frac{d' (q'_0,v_i')}{a_3}\geq \frac{d' (y'_i,v'_i)}{2a^2_3} \,\; \mbox{and}\;\, d_{D'}(u'_1)\geq\frac{d' (u'_1,v_i')}{a_3}\geq \frac{d' (y'_i,v'_i)}{2a_3^2}.\end{eqnarray} Then it follows from Lemma \Ref{Lem-uniform}, \eqref{li-1}, \eqref{112} and \eqref{eq(W-l-6'-0)} that \begin{eqnarray} \label{eq--7} \Big|\log\frac{d_D(u_1)}{d_D(q_0)}\Big| &\leq&k_{D}(u_1,q_0) \leq M k_{D'}(u'_1, q'_0) \\\nonumber &\leq& 4c^2M\log\Big(1+\frac{d' (u'_1,q'_0)}{\min\{d_{D'}(q'_0), d_{D'}(u'_1)\}}\Big) \\\nonumber &\leq& 4c^2M\log\Big(1+\frac{d' (u'_1,v'_i)+d' (v'_i,q'_0)}{\min\{d_{D'}(q'_0), d_{D'}(u'_1)\}}\Big) \\\nonumber &\leq& 4c^2M\log (1+2a_3 ), \end{eqnarray} which implies that \begin{eqnarray}\label{eq(W-l-6'-2)}\frac{d_D(u_1)}{(1+2a_3 )^{4c^2M}}\leq d_D(q_0)\leq (1+2a_3 )^{4c^2M}d_D(u_1).\end{eqnarray} On the other hand, by \eqref{eq(h-4-1')}, \eqref{e---1} and Claim \ref{eq--6} we have $$d' (u'_{0,i},y'_i)\geq d_{D'}(u'_{0,i})-d_{D'}(y'_i)\geq (\frac{1}{2a_1}-\frac{1}{a_5})d' (y'_{i+1},y'_i)>\frac{1}{2a_1}d' (y'_{i},v'_i).$$ Then there exists $p'_0\in \gamma'[y'_i,u'_{0,i}]$ such that \begin{eqnarray}\label{132}d' ( y'_i,p'_0)=\frac{d' (y'_i,v'_i)}{2a_1},\end{eqnarray} see Figure \ref{fig02}.
This combined with \eqref{112} and Lemma \Ref{Lem13''} shows that $$d' (p'_0,q'_0)\leq d' ( p'_0,y'_i)+d' (y'_i,v'_i)+d' (v'_i,q'_0)\leq (1+\frac{1}{a_1}+\frac{1}{a_3})d' (y'_i,v'_i),$$ and $$d' (y'_i,p'_0 )\leq a_1 d_{D'}(p'_0).$$ Then by \eqref{eq(W-l-6'-0)} and \eqref{132} we have $$\min\{d_{D'}(q'_0),d_{D'}(p'_0)\}\geq \min\{\frac{1}{2a_1^2},\frac{1}{2a_3^2}\}d'(y_i',v_i') \geq\frac{1}{2a_1^2a_3^2}d'(y_i',v_i').$$ Therefore, Lemma \Ref{Lem-uniform} and \eqref{li-1} lead to \begin{eqnarray*}\log \frac{d_D(q_0)}{d_D(p_{0})}&\leq& k_{D}(q_0, p_{0}) \leq M k_{D'}(q'_0,p'_0)\\ \nonumber &\leq& 4c^2M\log\Big(1+\frac{d' (p'_0,q'_0)}{\min\{d_{D'}(q'_0),d_{D'}(p'_0)\}}\Big) \\ \nonumber &\leq& 4c^2M\log(6a_1^2a_3^2).\end{eqnarray*} We infer from \eqref{eq-11} that \begin{eqnarray}\label{eq-new-add2}d_D(q_0)&\leq& (6a_1^2a_3^2)^{4c^2M} d_D(p_0)\\ \nonumber&\leq& 2(6a_1^2a_3^2)^{4c^2M} d_D(y_i) \\ \nonumber&\leq& 2(6a_1^2a_3^2)^{4c^2M} d (y_i,y_{i+1}).\end{eqnarray} Finally, it follows from Lemma \ref{eq-8} and the choice of $q_0$ and $u_1$ that $$k_{D}(y_i, q_0)\leq 4a^2\log\Big(1+\frac{d_D( q_0)}{d_D(y_i)}\Big)\;\;{\rm and }\;\;k_{D}(u_1, y_{i+1})\leq 4a^2\log\Big(1+\frac{d_D(u_1)}{d_D(y_{i+1})}\Big).$$ Then by Lemma \Ref{Lem-uniform}, \eqref{eq--7}, \eqref{eq(W-l-6'-2)} and \eqref{eq-new-add2} we get \begin{eqnarray}\label{eq(W-l-6'-2')} k_{D}(y_i, y_{i+1})&\leq& k_{D}(y_i, q_0)+k_{D}(q_0, u_1)+k_{D}(u_1, y_{i+1}) \\ \nonumber &\leq& 4a^2 \log\Big(1+\frac{d_D( q_0)}{d_D(y_i)}\Big)+4c^2M \log\Big(1+2a_3\Big) \\ \nonumber &&+4a^2 \log\Big(1+\frac{d_D( u_1)}{d_D(y_{i+1})}\Big)\\ \nonumber &<& a_5 \log\Big(1+\frac{ d (y_i, y_{i+1})}{d_D(y_i)}\Big),\end{eqnarray} which together with \eqref{li-newadd-2} shows that $$\frac{d (y_i,y_{i+1})}{2d_D(y_i)}\leq a_5 \log\Big(1+\frac{ d (y_i,y_{i+1})}{d_D(y_i)}\Big).$$ A necessary condition for this inequality is $ d (y_i, y_{i+1})\leq a_5^2d_D(y_i)$.
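The necessary-condition step used here (and already in Case \ref{ca1}) is the following elementary observation, which we sketch for completeness. Write $t=\frac{d(y_i,y_{i+1})}{d_D(y_i)}$. Since the function $t\mapsto \frac{\log(1+t)}{t}$ is decreasing on $(0,\infty)$, if we had $t>a_5^2$, then $$a_5\log(1+t)\;\leq\; \frac{\log(1+a_5^2)}{a_5}\,t\;<\;\frac{t}{2},$$ because $\log(1+a_5^2)<\frac{a_5}{2}$ for the (very large) constant $a_5$; this contradicts $\frac{t}{2}\leq a_5\log(1+t)$, and hence $t\leq a_5^2$.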
Hence by (\ref{eq(W-l-6'-2')}), we know that $$k_{D}(y_i, y_{i+1})\leq a_5\log(1+a_5^2)<a_4,$$ which contradicts \eqref{eq(h-4-2)}. This proves Lemma \ref{eq-0}, and so Theorem \ref{thm-1} follows. \section{The proof of Theorem \ref{thm-2}} In this section, we will prove Theorem \ref{thm-2} by means of Theorem \ref{thm-1} and some results established in \cite{KLM14}. We begin by recalling the necessary definitions and results. \begin{defn} Let $(X, d,\mu)$ be a metric measure space. Given $Q> 1$, we say that $X$ is {\it $Q$-regular} if there exists a constant $C>0$ such that for each $x\in X$ and $0<r\leq {\operatorname{diam}}(X)$, $$C^{-1}r^Q\leq \mu(B(x,r))\leq Cr^Q.$$\end{defn} \begin{defn} Let $(X,d)$ be a locally compact and rectifiably connected metric space, $D\subset X$ be a domain (an open rectifiably connected set), and $C_{gh}\geq 1$ be a constant. We say that $D$ satisfies the {\it $C_{gh}$-Gehring-Hayman inequality} if for all $x$, $y$ in $D$ and for each quasihyperbolic geodesic $\gamma$ joining $x$ and $y$, we have $$\ell(\gamma)\leq C_{gh}\ell(\beta_{x,y}),$$ where $\beta_{x,y}$ is any other curve joining $x$ and $y$ in $D$. In other words, quasihyperbolic geodesics are essentially the shortest curves in $D$. \end{defn} \begin{defn} Let $(X,d)$ be a metric space, $D\subset X$ be a domain, and $C_{bs}\geq 1$ be a constant. We say that $D$ satisfies the {\it $C_{bs}$-ball separation condition} if for all $x$, $y$ in $D$ and for each quasihyperbolic geodesic $\gamma$ joining $x$ and $y$, we have for every $z\in \gamma$, $$B(z,C_{bs}d_D(z)) \cap \beta_{x,y} \not=\emptyset ,$$ where $\beta_{x,y}$ is any other curve joining $x$ and $y$ in $D$. \end{defn} \begin{defn} Let $(X,d)$ be a metric space, $D\subset X$ be a domain and let $c\geq 1$ be a constant. We say that $D$ is \begin{enumerate} \item {\it $c$-$LLC_1$}, if for all $x\in D$ and $r>0$, every pair of points in $B(x,r)$ can be joined by a curve in $B(x,cr)$.
\item {\it $c$-$LLC_2$}, if for all $x\in D$ and $r>0$, every pair of points in $D\backslash B(x,r)$ can be joined by a curve in $D\backslash B(x,\frac{r}{c})$. \item {\it $c$-$LLC$}, if it is both $c$-$LLC_1$ and $c$-$LLC_2$. \end{enumerate} Moreover, $D$ is called {\it linearly locally connected} or {$LLC$} if it is $c$-$LLC$ for some constant $c\geq 1$. \end{defn} \begin{defn} Let $c\geq 1$. A noncomplete metric space $(X,d)$ is {\it $c$-locally externally connected} ($c$-$LEC$) provided the $c$-$LLC_2$ property holds for all points $x\in X$ and all $r\in (0,d(x)/c)$. \end{defn} In \cite{BH}, Buckley and Herron obtained the following interesting characterization of uniform metric spaces. \begin{Thm}\label{bhthm4.2}$($\cite[Theorem 4.2]{BH}$)$ A minimally nice metric space $(X, d)$ is uniform and $LEC$ if and only if it is quasiconvex, $LLC$ with respect to curves, and satisfies a weak slice condition. These implications are quantitative.\end{Thm} \begin{defn} A metric space $(X,d)$ is called {\it annular quasiconvex} if there is a constant $\lambda \geq 1$ so that, for any $x\in X$ and all $0 < r' < r$, each pair of points $y, z$ in $B(x, r) \backslash B(x, r')$ can be joined with a curve $\gamma_{yz}$ in $B(x, \lambda r)\backslash B(x, r'/\lambda)$ such that $\ell(\gamma_{yz})\leq \lambda d(y, z)$. \end{defn} It is not difficult to see that the $\lambda$-annular quasiconvexity property implies $C$-$LLC_2$, and hence $C$-$LEC$, where $C=2\lambda^2$. \subsection{The proof of Theorem \ref{thm-2}} Necessity: Suppose that $D$ is uniform. Then we know that $D$ is John and quasiconvex. Moreover, it follows from \cite[Theorem 3.6]{BHK} that $(D,k)$ is a roughly starlike Gromov hyperbolic space because $D$ is bounded, where $k$ is the quasihyperbolic metric of $D$. It remains to show that $D$ is $LLC$. Since $X$ is $A$-annularly quasiconvex, it follows that $D$ is $LEC$. Then we deduce from Theorem \Ref{bhthm4.2} that $D$ is $LLC$.
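For completeness, we sketch the observation used above that annular quasiconvexity implies the $LLC_2$ condition (we ignore the routine adjustments near the boundary): given $x$, $r>0$ and two points $y$, $z$ outside $B(x,r)$, set $R=\max\{d(x,y),d(x,z)\}\geq r$, so that $y,z\in B(x,2R)\backslash B(x,r)$; the $\lambda$-annular quasiconvexity then provides a curve $\gamma_{yz}$ joining $y$ and $z$ with $$\gamma_{yz}\subset B(x,2\lambda R)\backslash B\Big(x,\frac{r}{\lambda}\Big)\subset X\backslash B\Big(x,\frac{r}{2\lambda^{2}}\Big),$$ which is the $2\lambda^2$-$LLC_2$ condition.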
Sufficiency: To prove the uniformity of $D$, we only need to prove that every quasihyperbolic geodesic $\gamma$ in $D$ is a uniform arc. We assume that $D$ is $c$-quasiconvex and $\delta$-hyperbolic. By \cite[Theorem 1.2]{KLM14}, we find that $D$ satisfies both the $C_{gh}$-Gehring-Hayman condition and the $C_{bs}$-ball separation condition for some constants $C_{gh},C_{bs}\geq 1$. So to prove the sufficiency, we only need to show that each quasihyperbolic geodesic in $D$ is a cone arc. We first assume that $D$ is $a$-John. Since $D$ is a bounded $\delta$-hyperbolic domain of $X$, we see from \cite[Theorem 3.1]{BB03} that $(D,k)$ is $K$-roughly starlike, because $X$ is annularly quasiconvex. Then from Theorem \ref{thm-1} the uniformity of $D$ follows. We are thus left to assume that $D$ is $c_0$-LLC. Again by virtue of the Gehring-Hayman condition, we only need to show that there is a uniform upper bound for the constant $\Lambda$ defined by $$\min\{\ell(\gamma[x,z]),\ell(\gamma[z,y])\}=\Lambda d_D(z)$$ for each pair of points $x,y\in D$, for any quasihyperbolic geodesic $\gamma$ in $D$ joining $x$ and $y$, and for every point $z\in \gamma$. To this end, we deduce from the $C_{gh}$-Gehring-Hayman condition that $$\ell(\gamma[x,z])\leq cC_{gh}d(x,z)\;\;{\rm and}\;\;\ell(\gamma[y,z])\leq cC_{gh}d(y,z),$$ because the subarcs $\gamma[x,z]$ and $\gamma[z,y]$ are also quasihyperbolic geodesics. Thus we have $$\min\{d(x,z),d(y,z)\}\geq \frac{\Lambda}{cC_{gh}}d_D(z).$$ On the other hand, since $D$ is $c_0$-LLC, we know that there is a curve $\beta$ joining $x$ to $y$ with \begin{eqnarray}\label{eq-new1}\beta\subset X\setminus \overline{B}(z,\frac{\Lambda}{cc_0C_{gh}}d_D(z)).\end{eqnarray} Furthermore, since $\gamma$ is a quasihyperbolic geodesic and $D$ satisfies the $C_{bs}$-ball separation condition, we see that $$\beta\cap B(z,C_{bs}d_D(z))\not=\emptyset,$$ which together with \eqref{eq-new1} shows that $$\Lambda\leq cc_0C_{gh}C_{bs},$$ as required.
Hence, the proof of Theorem \ref{thm-2} is complete. \end{document}
\begin{document} \title{ Conditional Density Matrix: \ Systems and Subsystems in Quantum Mechanics } \begin{abstract} A new quantum mechanical notion --- Conditional Density Matrix --- proposed by the authors \cite{Kh}, \cite{Sol}, is discussed and is applied to describe some physical processes. This notion is a natural generalization of the von Neumann density matrix for such processes as divisions of quantum systems into subsystems and reunifications of subsystems into new joint systems. The Conditional Density Matrix assigns a quantum state to a subsystem of a composite system under the condition that another part of the composite system is in some pure state. \end{abstract} \section{Introduction} \noindent The problem of a correct quantum mechanical description of divisions of quantum systems into subsystems and reunifications of subsystems into new joint systems attracts great interest due to the present development of quantum communication. Although the theory of such processes finds room in the general scheme of quantum mechanics proposed by von Neumann in 1927 \cite{Neu}, even now they are often described in a fictitious manner. For example, the authors of the classical photon teleportation experiment \cite{Tel} write {\it The entangled state contains no information on the individual particles; it only indicates that the two particles will be in opposite states. The important property of an entangled pair is that as soon as a measurement on one of the particles projects it, say, onto $|\leftrightarrow >$ the state of the other one is determined to be $|\updownarrow >$, and vice versa. How could a measurement on one of the particles instantaneously influence the state of the other particle, which can be arbitrarily far away? Einstein, among many other distinguished physicists, could simply not accept this "spooky action at a distance". But this property of entangled states has been demonstrated by numerous experiments.
} \section{ The General Scheme of Quantum Mechanics } \noindent It was W. Heisenberg who in 1925 formulated the kinematic postulate of quantum mechanics \cite{Hei}. He proposed that there exists a connection between matrices and physical variables: $$ variable \quad {\cal F} \quad \Longleftrightarrow \quad matrix \quad (\hat F)_{mn}. $$ In modern language the kinematic postulate reads: {\it Each dynamical variable $\cal F$ of a system $\cal S$ corresponds to a linear operator $\hat F$ in Hilbert space $\cal H$} $$ dynamical \quad variable \quad {\cal F} \quad \Longleftrightarrow \quad linear \quad operator \quad {\hat F}. $$ The dynamics is given by the famous Heisenberg equations, formulated in terms of commutators: $$ {d{\hat F} \over dt} \quad = \quad {i \over \hbar}[{\hat H}, {\hat F}]. $$ To compare predictions of the theory with experimental data it was necessary to understand how one can determine the values of dynamical variables in a given state. W. Heisenberg gave a partial answer to this problem: {\it If the matrix that corresponds to the dynamical variable is diagonal, then its diagonal elements define the possible values of the dynamical variable, i.e. its spectrum.} $$ (\hat F)_{mn} = f_{m}{\delta}_{mn} \quad \Longleftrightarrow \quad \lbrace f_{m} \rbrace \quad is \quad the \quad spectrum \quad of \quad {\cal F}. $$ The general solution of the problem was given by von Neumann in 1927. He proposed the following procedure for the calculation of average values of physical variables: $$ < {\cal F} > \quad = \quad Tr({\hat F}{\hat {\rho}}). $$ Here the operator $\hat \rho$ satisfies three conditions: $$ 1) \quad {\hat \rho}^{+} \quad = \quad {\hat \rho}, $$ $$ 2) \quad Tr{\hat \rho} \quad = \quad 1, $$ $$ 3) \quad \forall \psi \in {\cal H} \quad <\psi|{\hat \rho}\psi> \quad \geq 0.
$$ By the formula for average values von Neumann established the correspondence between linear operators $\hat \rho$ and states of quantum systems: $$ \quad state \quad of\quad a\quad system \quad \rho \quad \Longleftrightarrow \quad linear \quad operator \quad {\hat \rho}. $$ In this way, the formula for average values becomes the quantum mechanical definition of the notion "a state of a system". The operator $\hat \rho$ is called the {\bf Density Matrix}. From the relation $$ (<{\cal F}>)^{*} \quad = \quad Tr({\hat F}^{+}{\hat \rho}) $$ one can conclude that Hermitian-conjugate operators correspond to complex-conjugate variables and Hermitian operators correspond to real variables: $$ {\cal F} \leftrightarrow {\hat F} \quad \Longleftrightarrow \quad {\cal F}^{*} \leftrightarrow {\hat F}^{+}, $$ $$ {\cal F} = {\cal F}^{*} \quad \Longleftrightarrow \quad {\hat F} = {\hat F}^{+}. $$ The real variables are called {\it observables.} From the properties of the density matrix and the definition of positive definite operators: $$ {\hat F}^{+} = {\hat F}, \quad \quad \forall \psi \in {\cal H} \quad <\psi|{\hat F}{\psi}> \quad \geq 0, $$ it follows that the average value of a nonnegative variable is nonnegative. Moreover, the average value of a nonnegative variable is equal to zero if and only if this variable equals zero. Now it is easy to give the following definition: {\it a variable $\cal F$ has a definite value in the state $\rho$ if and only if its dispersion in the state $\rho$ is equal to zero. } In accordance with the general definition of the dispersion of an arbitrary variable $$ {\cal D}(A) \quad = \quad <A^{2}> \quad - \quad (<A>)^{2}\,, $$ the expression for the dispersion of a quantum variable $\cal F$ in the state $\rho$ has the form: $$ {\cal D}_{\rho}({\cal F}) \quad = \quad Tr({\hat Q}^{2}{\hat \rho}), $$ where $\hat Q$ is the operator: $$ \hat Q \quad = \quad {\hat F} - <{\cal F}>{\hat E}. $$ If $\cal F$ is an observable, then $Q^{2}$ is a positive definite variable.
It follows that the dispersion of $\cal F$ is nonnegative. All this clarifies the definition given above. Since the density matrix is a positive definite operator whose trace equals 1, its spectrum is purely discrete and it can be written in the form $$ {\hat \rho} \quad = \quad \sum_{n}p_{n}{\hat P}_{n}, $$ where the ${\hat P}_{n}$ form a complete set of self-conjugate projective operators: $$ {{\hat P}_{n}}^{+} = {\hat P}_{n}, \quad {\hat P}_{m}{\hat P}_{n} = {\delta}_{mn}{\hat P}_{m}, \quad \sum_{n}{\hat P}_{n} = {\hat E}. $$ The numbers $\lbrace p_{n} \rbrace$ satisfy the conditions $$ p_{n}^{*} = p_{n}, \quad 0 \le p_{n}, \quad \sum_{n}p_{n}\,Tr{\hat P}_{n} = 1. $$ It follows that $\hat \rho$ acts according to the formula $$ {\hat \rho}{\Psi} \quad = \quad \sum_{n} p_{n} \sum_{\alpha \in {\Delta}_{n}} {\phi}_{n\alpha}\langle \phi_{n\alpha}|{\Psi} \rangle. $$ The vectors $\phi_{n\alpha}$ form an orthonormal basis in the space $\cal H$. The sets ${\Delta}_{n} = \lbrace 1,...,k_{n} \rbrace$ are defined by the degeneracy multiplicities $k_n$ of the eigenvalues $p_{n}$. Now the dispersion of the observable $\cal F$ in the state $\rho$ is given by the equation $$ {\cal D}_{\rho}({\cal F}) \quad = \quad \sum_{n} p_{n} \sum_{\alpha \in {\Delta}_{n}} ||{\hat Q}{\phi}_{n\alpha}||^{2}. $$ All terms in this sum are nonnegative. Hence, if the dispersion is equal to zero, then $$ if \quad p_{n} \not= 0, \quad then \quad {\hat Q}{\phi}_{n\alpha} = 0. $$ Using the definition of the operator $\hat Q$, we obtain $$ if \quad p_{n} \not= 0, \quad then \quad {\hat F}{\phi}_{n\alpha} = {\phi}_{n\alpha}\langle F \rangle. $$ In other words, {\it if an observable ${\cal F}$ has a definite value in the given state ${\rho}$, then this value is equal to one of the eigenvalues of the operator ${\hat F}$.
} In this case we have $$ {\hat \rho}{\hat F}{\phi}_{n\alpha} \quad = \quad {\phi}_{n\alpha}p_{n}\langle {\cal F} \rangle\,, $$ $$ {\hat F}{\hat \rho}{\phi}_{n\alpha} \quad = \quad {\phi}_{n\alpha}\langle {\cal F} \rangle p_{n}\,, $$ which proves that the operators $\hat F$ and $\hat \rho$ commute. It is well known that if $\hat A$ and $\hat B$ are commuting self-conjugate operators, then there exists a self-conjugate operator $\hat T$ with non-degenerate spectrum such that $\hat A$ and $\hat B$ are functions of $\hat T$: $$ {\hat T}{\Psi} \quad = \quad \sum_{n\alpha} {\phi}_{n\alpha}t_{n\alpha} \langle{\phi}_{n\alpha}|{\Psi}\rangle, $$ $$ t_{n\alpha}^{*} = t_{n\alpha}, \quad t_{n\alpha} \not= t_{n^{'}{\alpha}^{'}}, \quad if \quad (n,{\alpha}) \neq (n^{'},{\alpha}^{'}), $$ $$ {\hat F}{\Psi} \quad = \quad \sum_{n\alpha} {\phi}_{n\alpha}f_{1}(t_{n\alpha}) \langle{\phi}_{n\alpha}|{\Psi}\rangle, $$ $$ {\hat \rho}{\Psi} \quad = \quad \sum_{n\alpha} {\phi}_{n\alpha}f_{2}(t_{n\alpha}) \langle{\phi}_{n\alpha}|{\Psi}\rangle. $$ Suppose that $\hat F$ is an operator with non-degenerate spectrum; then {\it if the observable ${\cal F}$ with non-degenerate spectrum has a definite value in the state ${\rho}$, then it is possible to represent the density matrix of this state as a function of the operator ${\hat F}$. } The operator $\hat F$ can be written in the form $$ {\hat F} \quad = \quad \sum_{n}f_{n}{\hat P}_{n}, $$ $$ {{\hat P}_{n}}^{+} = {\hat P}_{n}, \quad {\hat P}_{m}{\hat P}_{n} = {\delta}_{mn}{\hat P}_{m}, \quad Tr({\hat P}_{n}) = 1, \quad \sum_{n}{\hat P}_{n} = {\hat E}. $$ The numbers $\lbrace f_{n} \rbrace$ satisfy the conditions $$ f_{n}^{*} = f_{n}, \quad f_{n} \neq f_{n^{'}}, \quad if \quad n \neq n^{'}. $$ With these projections we obviously have $$ {\hat \rho} \quad = \quad \sum_{n}p_{n}{\hat P}_{n}.
$$ From $$ \langle F \rangle \quad = \quad \sum_{n} p_{n}f_{n} \quad = \quad f_{N}, $$ $$ \langle F^2 \rangle \quad = \quad \sum_{n} p_{n}f_{n}^{2} \quad = \quad f_{N}^{2} $$ we get (since the $f_{n}$ are pairwise distinct) $$ p_{n} \quad = \quad {\delta}_{nN}. $$ In this case the density matrix is a projective operator satisfying the condition $$ {\hat \rho}^{2} \quad = \quad {\hat \rho}. $$ It acts as $$ {\hat \rho}{\Psi} \quad = \quad {\Psi}_{N}\langle {\Psi}_{N}|{\Psi} \rangle, $$ where ${\Psi}_{N}$ is a unit vector in the Hilbert space. The average value of an arbitrary variable in this state is equal to $$ \langle {\cal A} \rangle \quad = \quad \langle {\Psi}_{N}|{\hat A}{\Psi}_{N} \rangle. $$ This is the so-called {\it pure} state. If the state is not pure, it is known as {\it mixed}. Suppose that every vector in $\cal H$ is a square integrable function $\Psi (x)$, where $x$ is a set of continuous and discrete variables. The scalar product is defined by the formula $$ \langle \Psi|\Phi \rangle \quad = \quad \int dx{\Psi}^{*}(x){\Phi}(x). $$ For simplicity we assume that every operator $\hat F$ in $\cal H$ acts as follows: $$ ({\hat F}{\Psi})(x) \quad = \quad \int F(x,x^{'})dx^{'}{\Psi}(x^{'}). $$ That is, for any operator $\hat F$ there is an integral kernel $F(x,x^{'})$ associated with this operator: $$ {\hat F} \quad \Longleftrightarrow \quad F(x,x^{'}). $$ Certainly, we may use the $\delta$-function if necessary. Now the average value of the variable $\cal F$ in the state $\rho$ is given by the equation $$ \langle {\cal F} \rangle_{\rho} \quad = \quad \int F(x,x^{'})dx^{'}{\rho}(x^{'},x)dx. $$ Here the kernel ${\rho}(x,x^{'})$ satisfies the conditions $$ {\rho}^{*}(x,x^{'}) \quad = \quad {\rho}(x^{'},x), $$ $$ \int {\rho}(x,x)dx \quad = \quad 1, $$ $$ \forall {\Psi} \in {\cal H} \quad \int{\Psi}^{*}(x)dx{\rho}(x,x^{'})dx^{'}{\Psi}(x^{'}) \geq 0. $$ \section{Composite System and Reduced Density Matrix} \noindent Suppose the variables $x$ are divided into two parts: $x = \lbrace y,z \rbrace$.
Suppose also that the space $\cal H$ is a direct product of two spaces ${\cal H}_{1}$, ${\cal H}_{2}$: $$ {\cal H} \quad = \quad {\cal H}_{1}\otimes{\cal H}_{2}. $$ So, there is a basis in the space $\cal H$ that can be written in the form $$ {\phi}_{an}(y,z) \quad = \quad f_{a}(y)v_{n}(z)\,. $$ The kernel of the operator $\hat F$ in this basis looks like $$ {\hat F} \quad \Longleftrightarrow \quad F(y,z;y^{'},z^{'})\,. $$ In quantum mechanics this means that the system $S$ is a unification of two subsystems $S_{1}$ and $S_{2}$: $$ S \quad = \quad S_{1} \cup S_{2}\,. $$ The Hilbert space $\cal H$ corresponds to the system $S$ and the spaces ${\cal H}_{1}$ and ${\cal H}_{2}$ correspond to the subsystems $S_{1}$ and $S_{2}$. Now suppose that a physical variable ${\cal F}_{1}$ depends on the variables ${y}$ only. The operator that corresponds to ${\cal F}_{1}$ has the kernel $$ F_{1}(y,z;y^{'},z^{'}) \quad = \quad F_{1}(y,y^{'}){\delta}(z - z^{'})\,. $$ The average value of ${\cal F}_{1}$ in the state $\rho$ is equal to $$ \langle F_{1} \rangle_{\rho} \quad = \quad \int F_{1}(y,y^{'})dy^{'}{\rho}_{1}(y^{'},y)dy\,, $$ where the kernel ${\rho}_{1}$ is defined by the formula $$ {\rho}_{1}(y,y^{'}) \quad = \quad \int {\rho}(y,z;y^{'},z)dz\,. $$ The operator ${\hat \rho}_{1}$ satisfies all the properties of a density matrix in $S_1$. Indeed, we have $$ {{\rho}_1}^{*}(y,y^{'}) \quad = \quad {\rho _1}(y^{'},y)\,, $$ $$ \int {\rho _1}(y,y)dy \quad = \quad 1\,, $$ $$ \forall {\Psi _1} \in {\cal H_1} \quad \int{\Psi _1}^{*}(y)dy{\rho _1}(y,y^{'})dy^{'}{\Psi _1}(y^{'}) \geq 0\,. $$ The operator $$ {\hat \rho}_{1} \quad = \quad Tr_{2}{\hat \rho}_{1+2} $$ is called the {\bf Reduced Density Matrix}. Thus, the state of the subsystem $S_1$ is defined by the reduced density matrix. The reduced density matrix for the subsystem $S_2$ is defined analogously: $$ {\hat \rho}_{2} \quad = \quad Tr_{1}{\hat \rho}_{1+2}.
$$ Quantum states $\rho_{1}$ and $\rho_{2}$ of the subsystems are defined uniquely by the state $\rho_{1+2}$ of the composite system. Suppose the system $S$ is in a pure state; then the quantum state of the subsystem $S_{1}$ is defined by the kernel $$ {\rho}_{1}(y,y^{'}) \quad = \quad \int{\Psi}(y,z)dz{\Psi}^{*}(y^{'},z). $$ If the function ${\Psi}(y,z)$ is the product $$ {\Psi}(y,z) \quad = \quad f(y)w(z), \quad \int|w(z)|^{2}dz = 1, $$ then the subsystem $S_{1}$ is in a pure state, too: $$ {\rho}_{1}(y,y^{'}) \quad = \quad f(y)f^{*}(y^{'}), \quad \int|f(y)|^{2}dy = 1. $$ As was proved by von Neumann, this is the only case in which the purity of the composite system is inherited by its subsystems. Let us consider an example of a system in a pure state having subsystems in mixed states. Let the wave function of the composite system be $$ {\Psi}(y,z) \quad = \quad {1 \over \sqrt{2}}(f(y)w(z) \pm f(z)w(y)), $$ where $\langle f|w \rangle = 0$ and $\langle f|f \rangle = \langle w|w \rangle = 1$. The density matrix of the subsystem $S_{1}$ has the kernel $$ {\rho}_{1}(y,y^{'}) \quad = \quad {1 \over 2} (f(y)f^{*}(y^{'}) + w(y)w^{*}(y^{'})). $$ The kernel of the operator ${{\hat \rho}_{1}}^{2}$ has the form $$ {{\rho}_{1}}^{2}(y,y^{'}) \quad = \quad {1 \over 4} (f(y)f^{*}(y^{'}) + w(y)w^{*}(y^{'})). $$ Therefore, the subsystem $S_{1}$ is in a mixed state. Moreover, its density matrix is proportional to the unit operator on the subspace spanned by $f$ and $w$. This property resolves the perplexities connected with the Einstein--Podolsky--Rosen paradox. \section{EPR - paradox} \noindent It was Schr\"{o}dinger who introduced the term ``EPR-paradox''. The authors of EPR themselves always regarded their article as a demonstration of the inconsistency of the quantum mechanics of their time rather than as a particular curiosity. The main conclusion of the paper \cite{EPR} ``Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?''
published in 1935 (8 years after the von Neumann book) is the statement: {\it ...we proved that either (1) the quantum-mechanical description of reality given by the wave function is not complete or (2) when the operators corresponding to two physical quantities do not commute the two quantities cannot have simultaneous reality. Starting then with the assumption that the wave function does give a complete description of the physical reality, we arrived at the conclusion that two physical quantities, with noncommuting operators, can have simultaneous reality. Thus the negation of (1) leads to the negation of the only other alternative (2). We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete. } After von Neumann's works this statement appears obvious. However, in order to clarify this point of view completely we must understand what ``the physical reality'' is in EPR. In the EPR paper physical reality is defined as follows: {\it If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. } Such a definition of physical reality is a step back as compared to von Neumann's definition. By the EPR definition, a state is actual only when at least one observable has an exact value. This point of view is incomplete and leads to inconsistency. When a subsystem is separated, ``the loss of observables'' results directly from the definition of the density matrix for the subsystem. ``The occurrence'' of observables in the chosen subsystem when quantities are measured in another ``subsidiary'' subsystem can be naturally explained in terms of the conditional density matrix.
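The loss of purity under reduction discussed above can be checked numerically. The following minimal sketch (assuming NumPy; the two-dimensional factor spaces and the concrete vectors $f$ and $w$ are our own illustrative choices) builds the antisymmetric pure state $\Psi = (f\otimes w - w\otimes f)/\sqrt{2}$, traces out the second subsystem, and verifies that the reduced density matrix is mixed:

```python
import numpy as np

# Orthonormal vectors f and w in a two-dimensional factor space
# (illustrative choice; any orthonormal pair works).
f = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

# Antisymmetric pure state of the composite system.
Psi = (np.kron(f, w) - np.kron(w, f)) / np.sqrt(2)
rho = np.outer(Psi, Psi.conj())

# The composite state is pure: rho^2 = rho.
assert np.allclose(rho @ rho, rho)

# Reduced density matrix of S1: trace out the second tensor factor.
rho1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# S1 is mixed: rho1 = I/2, so rho1^2 = rho1/2 differs from rho1.
assert np.allclose(rho1, np.eye(2) / 2)
assert not np.allclose(rho1 @ rho1, rho1)
```

The same partial-trace computation reproduces the factor $1/4$ in the kernel of ${{\hat\rho}_{1}}^{2}$ obtained above.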
\section{Conditional Density Matrix} \noindent The average value of a variable with the kernel $$ F^c (x,x' ) = F_1 (y,y')u(z)u^* (z'), \quad \int |u(z)|^2 dz =1, $$ is equal to $$ \langle F^c \rangle_{\rho} \quad = \quad p \int F_1 (y,y^{'})dy^{'}{\rho}^{c}(y^{'},y)dy, $$ where $$ {\rho}^{c}(y,y^{'}) = {1 \over p} \int u^*(z)dz\, \rho (y,z;y',z')\,u(z')dz'\,, $$ $$ p \quad = \quad \int u^* (z)dz \,\rho (y,z;y,z')\,u(z')dz' dy. $$ Since we can represent $p$ in the form $$ p\quad = \quad \int P(z,z')dz'\, \rho _2 (z',z)dz, $$ $$ P(z,z') \quad = \quad u(z)u^*(z'), $$ we see that $p$ is the average value of a variable $P$ of the subsystem $S_2$. The operator $\hat P$ is a projector ($\hat P^2 = \hat P$). Therefore it is possible to consider the value $p$ as a probability. It is easy to demonstrate that the operator $\hat{\rho}^c $ satisfies all the properties of a density matrix. So the kernel $\rho^c (y,y')$ defines some state of the subsystem $S_1$. What is this state? According to the decomposition of the $\delta$-function $$ \delta (z-z')=\sum_{n} \phi _n (z) {\phi _n}^* (z'), $$ $\{ \phi _n (z) \}$ being a basis in the space ${\cal H}_2$, the reduced density matrix is represented in the form of the sum $$ \rho _1 (y,y') = \sum_{n} p_n {\rho}_{n}^{c} (y,y'). $$ Here $$ {\rho}_{n}^{c} (y,y') = {1 \over p_n } \int {\phi _n}^* (z)dz \, \rho (y,z;y',z')\, {\phi}_n (z')dz' $$ and $$ p_n \quad = \quad \int {\phi _n}^* (z)dz \,\rho (y,z;y,z')\,{\phi _n}(z')dz'dy $$ $$ \quad = \quad \int P_n (z,z')dz'\, {\rho}_2 (z',z)dz. $$ The numbers $p_n$ satisfy the conditions $$ {p_n}^* =p_n,\qquad p_n \geq 0, \qquad \sum_{n} p_n =1 $$ and thus form a probability distribution. The basis $\{ \phi _n \}$ in the space ${\cal H}_2$ corresponds to some observable $\hat G_2$ of the subsystem $S_2$ with a discrete non-degenerate spectrum. It is determined by the kernel $$ G_2 (z,z') \quad = \quad \sum_{n} g_n {\phi }_n (z){\phi }_n^{*}(z'), \quad g_n = g_n^{*}; \quad g_n \neq g_{n'} \quad \mbox{if} \quad n \neq n'.
$$ The average value of $G_2$ in the state $\rho _2$ is equal to $$ \int \rho _2 (z,z')dz'\,G_2 (z',z)dz \quad = \quad \sum_{n} g_n \int \rho _2 (z,z') dz'\, \phi _n (z') {\phi _n}^{*} (z)dz \quad = \quad \sum_{n} p_n g_n. $$ Thus the number $p_n$ is the probability that the observable $\hat G_2$ has the value $g_n$ in the state $\rho _2$. Obviously, the kernel $\rho _{n}^{c} (y,y')$ in this case defines the state of the system $S_1$ under the condition that the value of the variable $G_2$ is equal to $g_n$. Hence it is natural to call the operator $\hat \rho _{n}^{c}$ the Conditional Density Matrix (CDM) \cite{Kh}, \cite{Sol}: $$ \hat \rho _{c1|2} = {Tr_2 (\hat P_2 \hat \rho) \over Tr(\hat P_2 \hat \rho ) }. $$ It is the ({\it conditional}) density matrix for the subsystem $S_1$ under the condition that the subsystem $S_2$ is selected in a pure state $\hat \rho _2 =\hat P_2 $. This is the most important case for quantum communication. The conditional density matrix satisfies all the properties of a density matrix, and it helps to clarify the meaning of the operations in some of the finest experiments. \section{Examples: System and Subsystems} \subsection{ Parapositronium} \noindent As an example we consider parapositronium, i.e., the system consisting of an electron and a positron. The total spin of the system is equal to zero. In this case the nonrelativistic approximation is valid and the state vector of the system is represented in the form of the product $$ {\Psi}({\vec r}_{e},{\sigma}_{e}; {\vec r}_{p}, {\sigma}_{p}) \quad = \quad {\Phi}({\vec r}_{e},{\vec r}_{p}) \chi({\sigma}_{e},{\sigma}_{p}). $$ The spin wave function is equal to $$ \chi({\sigma}_{e},{\sigma}_{p}) \quad = \quad {1 \over \sqrt{2}} ({\chi}_{\vec n}({\sigma}_{e}){\chi}_{-{\vec n}}({\sigma}_{p}) \quad - \quad {\chi}_{\vec n}({\sigma}_{p}){\chi}_{-{\vec n}}({\sigma}_{e})).
$$ Here ${\chi}_{\vec n}(\sigma)$ and ${\chi}_{(-{\vec n})}(\sigma)$ are the eigenvectors of the operator that projects the spin onto the vector $\vec n$: $$ ({\vec {\sigma}}{ \vec n})\ {\chi}_{\pm \vec n}(\sigma) \quad = \quad \pm {\chi}_{\pm \vec n}(\sigma). $$ The spin density matrix of the system is determined by the operator with the kernel $$ \rho({\sigma};{\sigma}^{'}) \quad = \quad {\chi}({\sigma}_{e},{\sigma}_{p})\ {{\chi}}^{*}({\sigma}^{'}_{e},{\sigma}^{'}_{p}). $$ The spin density matrix of the electron is $$ {\rho}_{e}({\sigma},{\sigma}^{'}) \quad = \quad \sum_{\xi}\ {\chi}({\sigma},\xi)\ {{\chi}}^{*}({\sigma}^{'}, \xi) \quad = \quad $$ $$ {1 \over 2}\ ({\chi}_{\vec n}(\sigma)\ {\chi}^{*}_{\vec n}({\sigma}^{'}) \ + \ {\chi}_{(-{\vec n})}(\sigma)\ {\chi}^{*}_{(-{\vec n})}({\sigma}^{'})) \quad = \quad {1 \over 2}\delta (\sigma - {\sigma}^{'}). $$ In this state the electron is completely unpolarized. If an electron passes through a polarization filter, then the pass probability is independent of the filter orientation. The same is true for the positron if its spin state is measured independently of the electron. Now let us consider quite a different experiment. Namely, the positron passes through the polarization filter and the electron polarization is simultaneously measured. The operator that projects the positron spin onto the vector $\vec m$ (determined by the filter) is given by the kernel $$ P(\sigma,{\sigma}^{'}) \quad = \quad {\chi}_{\vec m}(\sigma)\ {\chi}^{*}_{{\vec m}}({\sigma}^{'}). $$ Now the conditional density matrix of the electron is equal to $$ {\rho}_{e/p}(\sigma,{\sigma}^{'}) \ = \ {\sum_{(\sigma,{\sigma}^{'})} {\chi}_{\vec m}(\sigma)\ {\chi}^{*}_{\vec m}({\sigma}^{'})\ {\chi}({\sigma}_{e},{\sigma}^{'})\ {{\chi}^{*}}({\sigma}^{'}_{e},\sigma) \over \sum_{(\xi,\sigma,{\sigma}^{'})} {\chi}_{\vec m}(\sigma)\ {\chi}^{*}_{\vec m}({\sigma}^{'})\ {\chi}(\xi,{\sigma}^{'})\ {{\chi}^{*}}(\xi,\sigma)}.
$$ The result of the summation is $$ {\rho}_{e/p}(\sigma,{\sigma}^{'}) \quad = \quad {\chi}_{(-\vec m )}(\sigma)\ {\chi}^{*}_{(-\vec m )}({\sigma}^{'}). $$ Thus, if the polarization of the positron is selected with the help of a polarizer into a state with well-defined spin, then the electron appears to be polarized in the opposite direction. Of course, this result is in agreement with the fact that the total spin of the composite system is equal to zero. Nevertheless, this natural result is obtained only if the positron and electron spins are measured simultaneously. Otherwise, the simpler experiment shows that the directions of the electron and positron spins are absolutely indefinite. A.~Einstein said ``{\it Raffiniert ist der Herrgott, aber boshaft ist er nicht}'' (``Subtle is the Lord, but malicious He is not''). \subsection{ Quantum Photon Teleportation} \noindent In the Innsbruck experiment \cite{Tel} on photon state teleportation, the initial state of the system is the result of the unification of the pair of photons 1 and 2, which is in the antisymmetric state ${\chi}({\sigma}_{1},{\sigma}_{2})$ with total angular momentum equal to zero, and the photon 3, which is in the state ${\chi}_{\vec m}({\sigma}_{3})$ (that is, polarized along the vector $\vec m $). The joint system state is given by the density matrix $$ \rho(\sigma, {\sigma}^{'}) \quad = \quad {\Psi}(\sigma){{\Psi}^{*}}({\sigma}^{'}), $$ where the wave function of the joint system is the product $$ {\Psi}(\sigma) \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{2})\ {\chi}_{\vec m}({\sigma}_{3}). $$ Considering then the photon 2 only (without fixing the states of the photons 1 and 3) we find the photon 2 to be completely unpolarized, with the density matrix $$ {\rho}({\sigma}_{2},{\sigma}_{2}^{'}) \ = \ Tr_{(1,3)}\ {\rho}({\sigma}_{1},{\sigma}_{2},{\sigma}_{3}; {\sigma}_{1},{\sigma}_{2}^{'},{\sigma}_{3}) \ = \ {1 \over 2}\ \delta ({\sigma}_{2} - {\sigma}_{2}^{'}).
$$ However, if the photon 2 is registered when the state of the photons 1 and 3 has been determined to be ${\chi}({\sigma}_{1},{\sigma}_{3})$, then the state of the photon 2 is given by the conditional density matrix $$ {\rho}_{2/\lbrace 1,3 \rbrace} \quad = \quad {Tr_{(1,3)}\ (P_{1,3}\ {\rho}_{1,2,3}) \over Tr\ (P_{1,3}\ {\rho}_{1,2,3})}. $$ Here $ P_{1,3}$ is the projection operator $$ P_{1,3} \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{3})\ {\chi}^{*}({\sigma}_{1},{\sigma}_{3}). $$ To evaluate the conditional density matrix it is convenient first to find the vectors $$ {\phi}({\sigma}_{1}) \quad = \quad \sum_{3}\ {\chi}^{*}_{\vec m}({\sigma}_{3})\ {\chi}({\sigma}_{1},{\sigma}_{3}) $$ and $$ {\theta}({\sigma}_{2}) \quad = \quad \sum_{1}\ {{\phi}^{*}}({\sigma}_{1})\ {\chi}({\sigma}_{1},{\sigma}_{2}). $$ The vector $\theta$ equals $$ {\theta}({\sigma}_{2}) \quad = \quad - {1 \over 2}\ {\chi}_{\vec m}({\sigma}_{2}) $$ and the conditional density matrix of the photon 2 appears to be equal to $$ {\rho}_{2/\lbrace 1,3 \rbrace}({\sigma}_{2},{\sigma}_{2}^{'}) \quad = \quad {\chi}_{\vec m}({\sigma}_{2})\ {{\chi}}^{*}_{\vec m}({\sigma}_{2}^{'}). $$ Thus, if the subsystem consisting of the photons 1 and 3 is forced to be in the antisymmetric state ${\chi}({\sigma}_{1},{\sigma}_{3})$ (with total angular momentum equal to zero), then the photon 2 appears to be polarized along the vector ${\vec m}$. \subsection{ Entanglement Swapping} \noindent In a recent experiment \cite{Swa}, two pairs of correlated photons emerge simultaneously in the installation. The state of the system is described by the wave function $$ {\Psi}(\sigma) \quad = \quad {\Psi}({\sigma}_{1}, {\sigma}_{2}, {\sigma}_{3}, {\sigma}_{4}) \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{2}){\chi}({\sigma}_{3},{\sigma}_{4}). $$ The photons 2 and 3 are selected into the antisymmetric state $ {\chi}({\sigma}_{2},{\sigma}_{3})$. What is the state of the pair of photons 1 and 4?
The conditional density matrix of the pair (1-4) is $$ {\hat \rho}_{14/23} \quad = \quad {Tr_{23}({\hat P}_{23}{\hat \rho}_{1234}) \over Tr({\hat P}_{23}{\hat \rho}_{1234})}, $$ where the operator that selects the pair (2-3) is defined by the kernel $$ P_{23}(\sigma, {\sigma}^{'}) \quad = \quad {\chi}({\sigma}_{2},{\sigma}_{3}) {\chi}^{*}({\sigma}_{2}^{'},{\sigma}_{3}^{'}) $$ and the density matrix of the four-photon system is determined by the kernel $$ {\rho}_{1234}(\sigma, {\sigma}^{'}) \quad = \quad {\Psi}({\sigma}_{1}, {\sigma}_{2}, {\sigma}_{3}, {\sigma}_{4}) {\Psi}^{*}({\sigma}_{1}^{'}, {\sigma}_{2}^{'}, {\sigma}_{3}^{'}, {\sigma}_{4}^{'}). $$ Direct calculation shows that the pair of photons (1 and 4) has to be in a pure state with the wave function $$ \Phi({\sigma}_{1},{\sigma}_{4}) \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{4}). $$ The experiment confirms this prediction. \subsection{ Pairs of Polarized Photons} \noindent Now consider a modification of the Innsbruck experiment. Let there be two pairs of photons $(1,\ 2)$ and $(3,\ 4)$. Suppose that each pair is in the pure antisymmetric state $\chi$. The spin part of the density matrix of the total system is given by the equation $$ {\rho}(\sigma,{\sigma}^{'}) \quad = \quad {\Psi}(\sigma)\ {{\Psi}^{*}}({\sigma}^{'}), $$ where $$ {\Psi}(\sigma) \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{2})\ {\chi}({\sigma}_{3},{\sigma}_{4}). $$ If the photons 2 and 4 pass through polarizers, so that they are polarized along ${\chi}_{\vec m}({\sigma}_{2})$ and ${\chi}_{\vec s}({\sigma}_{4})$, then the wave function of the system is transformed into $$ {\Phi}(\sigma) \quad = \quad {\chi}_{\vec n}({\sigma}_{1})\ {\chi}_{\vec m}({\sigma}_{2}) \ {\chi}_{\vec r}({\sigma}_{3})\ {\chi}_{\vec s}({\sigma}_{4}). $$ Here ${\vec n},\ {\vec m}$ and ${\vec r},\ {\vec s}$ are pairs of mutually orthogonal vectors.
Now the conditional density matrix of the pair of photons 1 and 3 is $$ {\rho}_{(1,3)/(2,4)}(\sigma,{\sigma}^{'}) \quad = \quad {\Theta}({\sigma}_{1},{\sigma}_{3})\ {{\Theta}^{*}}({\sigma}_{1}^{'},{\sigma}_{3}^{'}). $$ The wave function of the pair is the product of wave functions of each photon with definite polarization $$ {\Theta}({\sigma}_{1},{\sigma}_{3}) \quad = \quad {\chi}_{\vec n}({\sigma}_{1})\ {\chi}_{\vec r}({\sigma}_{3}) . $$ We note that initial correlation properties of the system appear only when the photons pass though polarizers. Although the wave function of the system seems to be a wave function of independent particles the initial correlation exhibits in correlations of polarizations for each pair. Pairs of polarized photons appear to be very useful in quantum communication. \subsection{ Quantum Realization of Verman Communication Scheme} \noindent Let us recall the main idea of Vernam communication scheme \cite{Ver}. In this scheme, Alice encrypts her message (a string of bits denoted by the binary number $m_{1}$) using a randomly generated key $k$. She simply adds each bit of the message with the corresponding bit of the key to obtain the scrambled text ($ s = m_{1} \oplus k $, where $\oplus$ denotes the binary addition modulo 2 without carry). It is then sent to Bob, who decrypts the message by subtracting the key ($s \ominus k = m_{1} \oplus k \ominus k = m_{1}$). Because the bits of the scrambled text are as random as those of the key, they do not contain any information. This cryptosystem is thus provable secure in sense of information theory. Actually, today this is the only probably secure cryptosystem! Although perfectly secure, the problem with this security is that it is essential that Alice and Bob possess a common secret key, which must be at least as long as the message itself. They can only use the key for a single encryption. 
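The classical scheme just described is easy to state in code. The following sketch (a hypothetical Python illustration; the helper name and sample strings are ours) encrypts with bitwise XOR, decrypts by applying the same operation, and shows why the key must never be reused:

```python
import secrets

def vernam(text: bytes, key: bytes) -> bytes:
    # XOR each bit of the text with the corresponding bit of the key.
    # Since x ^ k ^ k = x, the same function encrypts and decrypts.
    assert len(key) >= len(text), "the key must be as long as the message"
    return bytes(t ^ k for t, k in zip(text, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # random one-time key
scrambled = vernam(message, key)
assert vernam(scrambled, key) == message

# Key reuse leaks information: the key cancels in s1 XOR s2,
# leaving the XOR of the two plain texts.
m1, m2 = b"first message!", b"second message"
s1, s2 = vernam(m1, key), vernam(m2, key)
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
assert xor(s1, s2) == xor(m1, m2)
```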
If they used the key more than once, Eve could record all of the scrambled messages and start to build up a picture of the plain texts and thus also of the key. (If Eve recorded two different messages encrypted with the same key, she could add the scrambled texts to obtain the sum of the plain texts: $s_{1} \oplus s_{2} = m_{1} \oplus k \oplus m_{2} \oplus k = m_{1} \oplus m_{2} \oplus k \oplus k = m_{1} \oplus m_{2}$, where we used the fact that $\oplus$ is commutative.) Furthermore, the key has to be transmitted by some trusted means, such as a courier, or through a personal meeting between Alice and Bob. This procedure may be complex and expensive, and may even lead to a loophole in the system. With the help of pairs of polarized photons we can overcome the shortcomings of the classical realization of the Vernam scheme. Suppose Alice sends to Bob pairs of polarized photons obtained according to the rules described in the previous section. Note that the concrete photon polarizations are set up in Alice's laboratory and Eve does not know them. If the polarization of the photon 1 is set up by a random binary number $p_{i}$ and the polarization of the photon 3 is set up by the number $m_{i} \oplus p_{i}$, then each photon (when considered separately) does not carry any information. However, after obtaining these photons Bob can add the corresponding binary numbers and get the number $m_{i}$ containing the information ($m_{i} \oplus p_{i} \oplus p_{i}=m_{i}$). In this scheme, a secret code is created during the process of sending and is transferred to Bob together with the information. This makes the usage of the scheme completely secure. \section{Conclusion} \noindent Provided that the subsystem $S_2$ of the composite quantum system $S=S_1 + S_2$ is selected (or will be selected) in a pure state $\hat P_n$, the quantum state of the subsystem $S_1$ is the conditional density matrix $\hat {\rho}_{1c/2n}$.
The reduced density matrix $\hat {\rho }_1$ is connected with the conditional density matrices by the expansion $$ \hat \rho _1 = \sum_{n} p_n {\hat \rho}_{1n/2n}; $$ here $$ \sum_{n} \hat P_n = \hat E, \qquad \sum_{n} p_n = 1. $$ The coefficients $p_n$ are the probabilities of finding the subsystem $S_2$ in the pure states $\hat P_n$. \end{document}
\begin{document} \title{Tight lower bounds on the time it takes to generate a geometric phase} \author{Niklas H{\"o}rnedal\,\orcidlink{0000-0002-2005-8694}\,} \email{niklas.hornedal@uni.lu} \affiliation{Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg, Luxembourg} \author{Ole S{\"o}nnerborn\,\orcidlink{0000-0002-1726-4892}\,} \email{ole.sonnerborn@kau.se} \affiliation{Department of Mathematics and Computer Science, Karlstad University, 651 88 Karlstad, Sweden} \affiliation{Department of Physics, Stockholm University, 106 91 Stockholm, Sweden} \begin{abstract} Although the non-adiabatic Aharonov-Anandan geometric phase acquired by a cyclically evolving quantum system does not depend on the evolution time, it does set a lower bound on the evolution time. In this paper, we derive three tight bounds on the time required to generate any prescribed Aharonov-Anandan geometric phase. The derivations are based on recent results on the geometric character of the classic Mandelstam-Tamm and Margolus-Levitin quantum speed limits. \end{abstract} \date{\today} \maketitle \section{Introduction} In 1984, Michael Berry reported a discovery that has proven to have a surprisingly wide range of applications. Berry showed \cite{Be1984} that if the Hamiltonian of a quantum mechanical system depends on external parameters that are varied cyclically in an adiabatic manner, then each non-degenerate eigenstate of the Hamiltonian acquires a phase depending only on the geometry of the parameter space. Nowadays, the Berry phase is a concept of central importance in virtually every branch of modern physics \cite{ShWi1989, BoMoKoNiZw2003}, including the recent fields of topological states of matter \cite{BeHu2013, ChTeScRy2016, HaDu2017} and quantum computation \cite{ZaRa1999, SjToAnHeJoSi2012, AlSj2022, ZhKyFiKwSjTo2021}.
A few years after the publication of \cite{Be1984}, Aharonov and Anandan \cite{AhAn1987} extended Berry's work by showing that a geometric phase can be associated with every cyclically evolving system, not only those evolving adiabatically. The Aharonov-Anandan phase is the argument of the holonomy of the closed curve traced by the system's state in the gauge structure established by the Hopf bundle equipped with the Berry-Simon connection. Although often referred to as a non-adiabatic phase, the Aharonov-Anandan phase is also defined for adiabatically evolving systems and then agrees with the Berry phase. Being a holonomy, the Aharonov-Anandan phase factor, and likewise the phase, depends neither on the evolution time nor on the rate at which the system evolves. However, the path followed by a cyclically evolving state acquiring a non-trivial Aharonov-Anandan phase cannot be arbitrarily short. In this paper, we derive a tight lower bound for the Fubini-Study length of a closed curve of states in terms of its Aharonov-Anandan phase. Then, proceeding from the geometric interpretation of the Mandelstam-Tamm quantum speed limit \cite{MaTa1945} provided by \cite{AnAh1990}, we derive a lower bound on the time it takes to generate a specified Aharonov-Anandan phase. Interestingly, the Margolus-Levitin quantum speed limit \cite{MaLe1998} is also connected to the Aharonov-Anandan phase. Using the geometric description of the Margolus-Levitin quantum speed limit in \cite{HoSo2023a}, we derive another lower bound on the time it takes to generate an Aharonov-Anandan phase. Generally, a quantum speed limit is a fundamental lower bound on the time required to transform a quantum system in a specified way \cite{Fr2016, DeCa2017}. As announced, the evolution time estimates derived here originate from geometric characterizations of the Mandelstam-Tamm and Margolus-Levitin quantum speed limits \cite{MaTa1945, Fl1973, MaLe1998, GiLlMa2003, HoAlSo2022, HoSo2023a, HoSo2023b}.
We have organized the paper as follows. We begin by repeating the definition of the Aharonov-Anandan geometric phase (Section \ref{sec: AA phase}). Then we derive and discuss an estimate of Margolus-Levitin type for systems whose dynamics are driven by time-independent Hamiltonians (Section \ref{sec: MLQSL}). Estimates of Margolus-Levitin type do not extend straightforwardly to systems with time-dependent Hamiltonians \cite{HoSo2023b}, but the Mandelstam-Tamm quantum speed limit does \cite{AnAh1990}. We derive a bound of Mandelstam-Tamm type for time-dependent systems and then relate this to the Margolus-Levitin type bound via the Bhatia-Davies inequality \cite{BhDa2000} (Section \ref{sec: MTQSL}). \begin{rmk} The problem addressed here is a kind of Brachistochrone problem \cite{CaHoKoOk2006, CaHoKoOk2007, CaHoKoOk2008, WaAlJaLlLuMo2015, AlHoAn2021}. However, unlike the case in most Brachistochrone problems, no constraints need to be placed on the spectrum of the Hamiltonian. This makes the bounds derived here universally valid. \end{rmk} \section{The Aharonov-Anandan phase}\label{sec: AA phase} The Aharonov-Anandan geometric phase of a cyclic evolution of a pure state is the argument of the holonomy of the evolution in the gauge structure established by the Hopf bundle equipped with the Berry-Simon connection \cite{Si1993, AhAn1987}. Here we briefly review this definition. Consider a quantum system modeled on a finite-dimensional Hilbert space $\mathcal{H}$. Let $\mathcal{S}$ be the unit sphere in $\mathcal{H}$ and $\mathcal{P}$ be the projective Hilbert space of unit rank orthogonal projection operators of $\mathcal{H}$; such operators represent pure states of the system.\footnote{In this paper, we will only consider quantum systems in pure states. The word ``state'' will therefore always refer to a pure quantum state.} The Hopf bundle is the $U(1)$-principal bundle $\eta$ sending $\ket{\psi}$ in $\mathcal{S}$ to $\ketbra{\psi}{\psi}$ in $\mathcal{P}$. 
Two vectors $\ket{\psi}$ and $\ket{\phi}$ in $\mathcal{S}$ belong to the same fiber of $\eta$, that is, get sent to the same state in $\mathcal{P}$, if and only if $\ket{\phi}=e^{i\theta}\ket{\psi}$ for some relative phase $\theta$. The Berry-Simon connection is the $1$-form $\mathcal{A}$ on $\mathcal{S}$ whose action on a tangent vector $\ket{\dot\psi}$ at $\ket{\psi}$ is \begin{equation} \mathcal{A}\ket{\dot\psi}=i\braket{\psi}{\dot\psi}. \end{equation} We say that a tangent vector is horizontal if $\mathcal{A}$ annihilates it. Suppose $\ket{\psi}$ sits in the fiber over $\rho$ in $\mathcal{P}$. Then any tangent vector $\dot\rho$ at $\rho$ lifts to a unique horizontal vector $\ket{\dot\psi}$ at $\ket{\psi}$. That $\ket{\dot\psi}$ is a lift of $\dot\rho$ means that $\mathrm{d}\eta\ket{\dot\psi}=\dot\rho$. More generally, if $\rho_t$ is a curve extending from $\rho$, then $\rho_t$ lifts to a unique curve $\ket{\psi_t}$ that extends from $\ket{\psi}$ and is such that $\ket{\dot\psi_t}$ is horizontal for every $t$. The curve $\ket{\psi_t}$ is called the horizontal lift of $\rho_t$ starting at $\ket{\psi}$. Suppose that $\rho_t$, $0\leq t\leq \tau$, is a closed curve at $\rho$. Let $\ket{\psi_t}$ be a horizontal lift of $\rho_t$. Since $\rho_t$ is closed, $\ket{\psi_t}$ starts and ends in the fiber over $\rho$; see Figure \ref{fig: fas}. The holonomy of $\rho_t$ is the phase factor $e^{i\theta}=\braket{\psi_0}{\psi_\tau}$. \begin{figure} \caption{The figure shows the cyclic evolution of a qubit state $\rho_t$ and one of its horizontal lifts $\ket{\psi_t}$. The upper part of the figure displays the Hopf fibers over the states in the evolution curve, conformally mapped into Euclidean $3$-space by stereographic projection. The horizontal lift intersects each fiber perpendicularly, and the intersections form base points in the fibers allowing us to canonically identify the fiber over each $\rho_t$ with $U(1)$ via $\ket{\phi}\mapsto \braket{\psi_t}{\phi}$. 
Since the initial and final vectors $\ket{\psi_0}$ and $\ket{\psi_\tau}$ belong to the same fiber, colored blue, we can identify $\braket{\psi_0}{\psi_\tau}$ with a phase factor $e^{i\theta}$ via the identification of the blue fiber with $U(1)$ associated with $\ket{\psi_0}$. This phase factor is the Aharonov-Anandan holonomy of $\rho_t$, and the phase $\theta$ is the Aharonov-Anandan geometric phase.} \label{fig: fas} \end{figure} The congruence class modulo $2\pi$ of $\theta$ is called the geometric phase of $\rho_t$. Since the action by $U(1)$ on $\mathcal{S}$ takes horizontal curves to horizontal curves, the geometric phase factor and geometric phase do not depend on the choice of horizontal lift of $\rho_t$. Although the geometric phase of $\rho_t$ is defined using a horizontal lift, the geometric phase can be determined from an arbitrary lift: If $\ket{\psi_t}$ is any lift of $\rho_t$, then \begin{equation}\label{horizontal lift} \ket{\psi_t'}= \ket{\psi_t}\exp\Big(i\int_0^t\mathrm{d}s\,\mathcal{A}\,\ket{\dot\psi_s}\Big) \end{equation} is a horizontal lift of $\rho_t$. The holonomy of $\rho_t$ is, thus, \begin{equation} e^{i\theta} =\braket{\psi_0}{\psi_\tau} \exp\Big(i\int_0^\tau\mathrm{d}t\,\mathcal{A}\ket{\dot\psi_t}\Big), \end{equation} implying that \begin{equation}\label{geometric phase} \theta = \arg\braket{\psi_0}{\psi_\tau}+i\int_0^\tau\mathrm{d}t\,\braket{\psi_t}{\dot\psi_t}\;\operatorname{mod}\,2\pi. \end{equation} Hereafter, we will identify geometric phases with their representatives in the interval $[0,2\pi)$. Thus, when we write that $\rho_t$ has the geometric phase $\theta$, we mean that $\theta$ is the smallest non-negative representative of the geometric phase of $\rho_t$. \section{A Margolus-Levitin type estimate}\label{sec: MLQSL} Consider a quantum system whose dynamics is driven by a time-independent Hamiltonian $H$.
Suppose the evolving state of the system forms a non-stationary closed curve $\rho_t$, $0\leq t\leq \tau$, at $\rho$ with geometric phase $\theta$. Write $\langle H \rangle$ for the expected energy, and let $\overline\epsilon$ be the largest occupied energy that is less than $\langle H \rangle$ and $\underline\epsilon$ be the smallest occupied energy that is greater than $\langle H \rangle$. Then\footnote{We assume all quantities to be expressed in units such that $\hbar=1$.}\textsuperscript{,}\footnote{If $\overline\epsilon$ or $\underline\epsilon$ does not exist, the state is stationary, and we define the right-hand side of \eqref{MLQSL} to be $0$.} \begin{equation}\label{MLQSL} \tau\geq \max\bigg\{ \frac{\theta}{\langle H-\overline\epsilon\,\rangle}, \frac{2\pi-\theta}{\langle\,\underline\epsilon-H \rangle}\bigg\}. \end{equation} The quotients on the right are similar to the Margolus-Levitin quantum speed limit \cite{MaLe1998} and its dual \cite{NeAlSa2022, HoSo2023a}. Therefore, we call \eqref{MLQSL} a bound of Margolus-Levitin type. To prove \eqref{MLQSL}, let $\epsilon$ be any occupied eigenvalue of $H$ and $\ket{\psi}$ be any vector in the fiber over $\rho$. Then, as we shall see, the solution to \begin{equation} \ket{\dot\psi_t}=-i(H-\epsilon)\ket{\psi_t},\quad \ket{\psi_0}=\ket{\psi}, \end{equation} is a closed lift $\ket{\psi_t}$ of $\rho_t$. According to \eqref{geometric phase}, \begin{equation}\label{geometric phase 2} \theta = \int_0^\tau\mathrm{d}t\,\mathcal{A}\ket{\dot\psi_t} =\tau\langle H-\epsilon\rangle\;\operatorname{mod}\,2\pi. \end{equation} If $\epsilon$ is smaller than $\langle H \rangle$, then $\tau\langle H-\epsilon\rangle \geq \theta$, and if $\epsilon$ is larger than $\langle H \rangle$, then $\tau\langle \epsilon - H \rangle\geq 2\pi-\theta$. 
The estimate \eqref{MLQSL} follows from the observation that $\langle H-\epsilon\rangle$ assumes its smallest positive value for $\epsilon=\overline{\epsilon}$ and its largest negative value for $\epsilon=\underline{\epsilon}$. It remains to show that $\ket{\psi_t}$ is a closed lift of $\rho_t$. For this, let $\ket{\phi}$ be an eigenvector of $H$ with eigenvalue $\epsilon$. Since $\epsilon$ is occupied by $\rho$, $\braket{\phi}{\psi}\ne 0$. By adjusting the phase of $\ket{\phi}$ we can assume that $\ket{\psi}$ is in phase with $\ket{\phi}$, that is, $\braket{\phi}{\psi}> 0$. The curve $\ket{\psi_t}$ is a lift of $\rho_t$ since \begin{equation} \begin{split} \ketbra{\psi_t}{\psi_t} &= e^{-it(H-\epsilon)}\ketbra{\psi_0}{\psi_0}e^{it(H-\epsilon)}\\ &= e^{-itH}\ketbra{\psi}{\psi}e^{itH}\\ &=\rho_t, \end{split} \end{equation} and $\ket{\psi_t}$ is in phase with $\ket{\phi}$ for every $t$: \begin{equation} \braket{\phi}{\psi_t} = \bra{\phi} e^{-it(H-\epsilon)}\ket{\psi_0} = \braket{\phi}{\psi} >0. \end{equation} The initial and final vectors $\ket{\psi_0}$ and $\ket{\psi_\tau}$ are both in the fiber over $\rho$, and they are both in phase with $\ket{\phi}$. Since there is only one vector in the fiber over $\rho$ that is in phase with $\ket{\phi}$, the curve $\ket{\psi_t}$ is closed. \subsection{Tightness}\label{ML tightness} Here we show that for every value of $\theta$, the estimate \eqref{MLQSL} can be saturated by a qubit. The bound is thus tight. If $\theta=0$, the estimate is saturated by a stationary state. We assume that $\theta>0$. Consider a system modeled on a Hilbert space spanned by orthonormal vectors $\ket{0}$ and $\ket{1}$. Define the Pauli operators as \begin{align} &\sigma_x=\ketbra{0}{1}+\ketbra{1}{0}, \\ &\sigma_y=i(\ketbra{0}{1}-\ketbra{1}{0}), \\ &\sigma_z=\ketbra{1}{1}-\ketbra{0}{0}. 
\end{align} We can identify the projective Hilbert space with the unit sphere in Euclidean $3$-space by identifying each qubit state $\rho$ with a vector $\mathbf{r}$ defined as \begin{equation} \mathbf{r}=\big(\!\operatorname{tr}(\rho\sigma_x),\operatorname{tr}(\rho\sigma_y),\operatorname{tr}(\rho\sigma_z)\big). \end{equation} The vector $\mathbf{r}$ is called the Bloch vector of $\rho$, and the unit sphere is called the Bloch sphere. One can also assign a vector $\mathbf{\Omega}$ to each qubit Hamiltonian $H$ called the Rabi vector of the Hamiltonian: \begin{equation} \mathbf{\Omega}=\big(\!\operatorname{tr}(H\sigma_x),\operatorname{tr}(H\sigma_y),\operatorname{tr}(H\sigma_z)\big). \end{equation} Suppose the Rabi vector of the system Hamiltonian $H$ points along the positive $z$-axis and has the length $\Omega$. Further, suppose the system is prepared in a state $\rho$ whose Bloch vector $\mathbf{r}$ makes a polar angle $\phi>0$ with the positive $z$-axis; see Figure \ref{fig: qubit}. \begin{figure} \caption{The evolving Bloch vector $\mathbf{r}_t$ rotates about the $z$-axis with a preserved polar angle and an azimuthal angular speed $\Omega$. Thus, it returns to its original position at $\tau=2\pi/\Omega$. The geometric phase equals $2\pi$ minus half the solid angle subtended by $\mathbf{r}_t$. The solid angle is the area of the blue spherical cap.} \label{fig: qubit} \end{figure} As time passes, the Bloch vector $\mathbf{r}_t$ of the evolving state $\rho_t$ rotates around the $z$-axis with a preserved polar angle and azimuthal angular speed $\Omega$. The Bloch vector thus returns to its original position, and $\rho_t$ returns to $\rho$, at time $\tau=2\pi/\Omega$. One can show, see, e.g., \cite{LaHuCh1990}, that the geometric phase $\theta$ of $\rho_t$ is equal to $2\pi$ minus half the solid angle subtended by $\mathbf{r}_t$: \begin{equation}\label{tata} \theta = \pi(1+\cos\phi). 
\end{equation} By adjusting $\phi$, the phase $\theta$ can be given any predetermined value. The eigenvalues of $H$ are $(\operatorname{tr} H\pm \Omega)/2$, both being occupied at all times, and the expected energy is $(\operatorname{tr} H+\Omega\cos\phi)/2$. Consequently, \begin{align} &\langle H - \overline\epsilon\,\rangle = \frac{\Omega}{2}(1+\cos\phi), \\ &\langle\, \underline\epsilon - H\rangle = \frac{\Omega}{2}(1-\cos\phi). \end{align} It follows that \begin{equation} \frac{\theta}{\langle H - \overline\epsilon\,\rangle} =\frac{2\pi-\theta}{\langle\, \underline\epsilon - H\rangle} =\frac{2\pi}{\Omega} =\tau. \end{equation} This shows that \eqref{MLQSL} is tight. \subsection{A qutrit example} A quantum mechanical system with a time-independent Hamiltonian that returns to its original state at some point has periodic dynamics. The previous section shows that every qubit system with a time-independent Hamiltonian is periodic, with a period given by the equal quotients on the right-hand side of \eqref{MLQSL}. In this section, we show that the corresponding statement does not hold in higher dimensions. More precisely, we show that \eqref{MLQSL} may or may not be the period of a qutrit system with all three energy levels occupied. Furthermore, we show that if the right-hand side of \eqref{MLQSL} is the period of the qutrit, then either of the quotients can be greater than the other. That \eqref{MLQSL} can be saturated by a system with an effective dimension greater than two contrasts with the case of the Margolus-Levitin quantum speed limit \cite{HoSo2023a}. Consider a qutrit system with a Hamiltonian $H$ having eigenvalues $\epsilon_0<\epsilon_1<\epsilon_2$. Suppose the three eigenvalues are occupied and the dynamics of the state is periodic with period $\tau$. Let $\theta$ be the geometric phase of the path followed by the state during one period. 
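For a concrete periodic qutrit, the geometric phase and the winding integers appearing in \eqref{geometric phase 2} can be computed numerically. The system below is an illustrative choice (commensurate integer energies, uniform occupation, $\hbar=1$), not tied to any particular physical setup:

```python
import numpy as np

# Hypothetical periodic qutrit: commensurate eigenvalues give period tau = 2*pi.
E = np.array([0.0, 1.0, 3.0])
psi = np.ones(3) / np.sqrt(3)    # all three levels occupied
tau = 2 * np.pi
mean_H = np.sum(E * np.abs(psi) ** 2)

# Geometric phase over one period (total phase plus dynamical phase, mod 2*pi)
theta = (np.angle(psi.conj() @ (np.exp(-1j * E * tau) * psi))
         + tau * mean_H) % (2 * np.pi)

# Winding integers n_j from theta = tau*<H - eps_j> - 2*pi*n_j
n = (tau * (mean_H - E) - theta) / (2 * np.pi)
assert np.allclose(n, np.round(n))    # all three are integers
assert n[0] > n[1] > n[2]             # strictly decreasing
```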
According to \eqref{geometric phase 2}, there are integers $n_0>n_1>n_2$ such that $\theta=\tau\langle H-\epsilon_j\rangle-2\pi n_j$ for $j=0,1,2$. If $\langle H\rangle<\epsilon_1$, then \begin{align} &\hspace{-1pt}\frac{\theta}{\langle H-\overline\epsilon\,\rangle} =\tau\bigg(\frac{\langle H\rangle-\epsilon_0-2\pi n_0/\tau}{\langle H\rangle-\epsilon_0}\bigg),\label{nitton} \\ &\hspace{-1pt}\frac{2\pi-\theta}{\langle\,\underline\epsilon - H \rangle} =\tau\bigg(\frac{2\pi(n_1+1)/\tau+\epsilon_1-\langle H\rangle}{\epsilon_1-\langle H \rangle} \bigg),\label{tjugo} \end{align} and if $\langle H\rangle>\epsilon_1$, then \begin{align} &\hspace{-1pt}\frac{\theta}{\langle H-\overline\epsilon\,\rangle} =\tau\bigg(\frac{\langle H\rangle-\epsilon_1-2\pi n_1/\tau}{\langle H\rangle-\epsilon_1}\bigg),\label{tjuett} \\ &\hspace{-1pt}\frac{2\pi-\theta}{\langle\,\underline\epsilon - H \rangle} =\tau\bigg(\frac{2\pi(n_2+1)/\tau+\epsilon_2-\langle H\rangle}{\epsilon_2-\langle H \rangle} \bigg).\label{tjutva} \end{align} In both cases, with suitable choices of initial state and eigenvalues, we can arrange it so that none, either, or both of the quotients on the right in the Margolus-Levitin type estimate \eqref{MLQSL} assume the value $\tau$. If, on the other hand, $\langle H\rangle=\epsilon_1$, then $\theta=0$, and the first quotient in \eqref{MLQSL} vanishes. The other quotient is \begin{equation} \frac{2\pi-\theta}{\langle\,\underline\epsilon - H \rangle} =\frac{2\pi}{\epsilon_2-\epsilon_1} =\tau\bigg(\frac{1}{n_1-n_2}\bigg).\label{tjutre} \end{equation} We can arrange it so that $n_1=n_2+1$ as well as $n_1>n_2+1$. \subsection{Non-extensibility} In \cite{HoSo2023b}, it was demonstrated that the Margolus-Levitin quantum speed limit (and its generalization to an arbitrary fidelity \cite{GiLlMa2003, HoSo2023a}) does not straightforwardly generalize to an evolution time bound for systems with a time-dependent Hamiltonian.

Here, we similarly show that \eqref{MLQSL} does not generalize to a time bound for time-dependent systems. More specifically, we show that it is in general not possible to replace the denominators in \eqref{MLQSL} with the corresponding time averages by showing that there are systems that violate such an estimate. Consider a qubit system in the $\sigma_x$-eigenstate \begin{equation} \rho=\frac{1}{2}(\ketbra{0}{0}+\ketbra{0}{1}+\ketbra{1}{0}+\ketbra{1}{1}). \end{equation} Fix an $E>0$, let $\mu(\chi)=E/(1-\cos\chi)$ for $0<\chi<\pi/2$, and define a time-dependent Hamiltonian $H_t$ as \begin{equation} H_t=e^{-iAt}He^{iAt} \end{equation} where \begin{align} & A= \mu(\chi)\sin\chi\, \sigma_z,\\ & H=\mu(\chi)(\sin\chi\, \sigma_z- \cos\chi\,\sigma_x). \end{align} Write $\rho_t$ for the state at time $t$ generated from $\rho$ by $H_t$. The propagator associated with $H_t$ is $e^{-iAt}e^{-i(A-H)t}$; see \cite{HoSo2023b}. Since $\rho$ commutes with $A-H$, \begin{equation} \rho_t=e^{-iAt}\rho e^{iAt}. \end{equation} The Bloch vector of $\rho_t$ moves along the equator in the Bloch sphere and returns to its original position at \begin{equation} \tau=\frac{\pi}{E\cot(\chi/2)}. \end{equation} Furthermore, the geometric phase of $\rho_t$ is $\theta=\pi$. The spectrum of $H_t$ is time-independent and consists of the eigenvalues $-\mu(\chi)$ and $\mu(\chi)$, both being occupied at all times. Furthermore, the expected energy of $H_t$ has the constant value $-\mu(\chi)\cos\chi$. Hence, \begin{align} &\langle H_t - \overline\epsilon_t\, \rangle = -\mu(\chi)\cos\chi +\mu(\chi)=E, \\ &\langle\, \underline\epsilon_t - H_t\rangle =\mu(\chi)+\mu(\chi)\cos\chi= E\cot^2(\chi/2). 
\end{align} We conclude that \begin{equation}\label{MLQSL2} \max\bigg\{ \frac{\theta}{\langle\hspace{-2pt}\langle\langle H_t-\overline{\epsilon}_t\rangle\rangle\hspace{-2pt}\rangle}, \frac{2\pi-\theta}{\langle\hspace{-2pt}\langle\langle\, \underline{\epsilon}_t - H_t \rangle\rangle\hspace{-2pt}\rangle}\bigg\}=\frac{\pi}{E}, \end{equation} the double angular brackets denoting time averages of the instantaneous expected values over $[0,\tau]$. Since the right-hand side of \eqref{MLQSL2} is greater than $\tau$, the left-hand side of \eqref{MLQSL2} is, in general, not an evolution time bound. This shows that \eqref{MLQSL} does not extend straightforwardly to systems with time-dependent Hamiltonians. \section{A Mandelstam-Tamm type estimate}\label{sec: MTQSL} Another time bound, also valid for time-dependent systems, can be derived from the fact that the evolution time multiplied by the time average of the energy uncertainty is equal to the Fubini-Study length of the evolution curve. Consider a curve of states $\rho_t$, $0\leq t\leq \tau$, generated by a Hamiltonian $H_t$. The instantaneous Fubini-Study speed of $\rho_t$ equals the energy uncertainty; see \eqref{fubini-study-uncertainty} below. Thus, the Fubini-Study length of $\rho_t$ is \begin{equation} \mathcal{L}(\rho_t) =\int_0^\tau\mathrm{d}t\, \Delta H_t = \tau\langle\hspace{-2pt}\langle \Delta H_t\rangle\hspace{-2pt}\rangle. \end{equation} Here, $\langle\hspace{-2pt}\langle \Delta H_t\rangle\hspace{-2pt}\rangle$ denotes the time average of the energy uncertainty over the evolution time interval $[0,\tau]$. Below we show that if $\rho_t$ is a closed curve with geometric phase $\theta$, then \begin{equation}\label{tjunio} \mathcal{L}(\rho_t) \geq \sqrt{\theta(2\pi-\theta)}. \end{equation} As a consequence, \begin{equation}\label{MTQSL} \tau \geq \frac{\sqrt{\theta(2\pi-\theta)}}{\langle\hspace{-2pt}\langle \Delta H_t\rangle\hspace{-2pt}\rangle}.
\end{equation} We call \eqref{MTQSL} a time estimate of Mandelstam-Tamm type due to its similarity with the Mandelstam-Tamm quantum speed limit \cite{MaTa1945, Fl1973}. The Fubini-Study metric on $\mathcal{P}$ is induced from half of the Hilbert-Schmidt metric on the space of Hermitian operators on $\mathcal{H}$, \begin{equation}\label{fubini-study} g_{\textsc{fs}}(\dot\rho_1,\dot\rho_2)=\frac{1}{2}\operatorname{tr}(\dot\rho_1\dot\rho_2). \end{equation} If $\dot\rho_t=-i[H_t,\rho_t]$, the Fubini-Study speed squared of $\rho_t$ equals the variance of the energy: \begin{equation}\label{fubini-study-uncertainty} g_{\textsc{fs}}(\dot\rho_t,\dot\rho_t) =\frac{1}{2}\operatorname{tr}\big((-i[H_t,\rho_t])^2\big)=\Delta^2H_t. \end{equation} We equip $\mathcal{S}$ with the metric $g$ induced from the real part of the Hermitian inner product on $\mathcal{H}$, \begin{equation} g(\ket{\dot\psi_1},\ket{\dot\psi_2}) =\frac{1}{2}\big(\braket{\dot\psi_1}{\dot\psi_2}+\braket{\dot\psi_2}{\dot\psi_1}\big). \end{equation} Then the Hopf bundle is a Riemannian submersion, that is, $\mathrm{d}\eta$ preserves the inner product between horizontal vectors. As a consequence, $\rho_t$ and all of its horizontal lifts have the same length: If $\ket{\psi_t}$ is a horizontal lift of $\rho_t$, \begin{equation} \mathcal{L}(\rho_t) =\int_0^\tau\!\mathrm{d}t\sqrt{\frac{\operatorname{tr}(\dot\rho_t^2)}{2}} =\int_0^\tau\!\mathrm{d}t\sqrt{\braket{\dot\psi_t}{\dot\psi_t}} =\mathcal{L}(\ket{\psi_t}). \end{equation} We are to determine the smallest length $L(\theta)$ a closed curve at $\rho$ can have given that its geometric phase is $\theta$. If $\theta=0$, then $L(\theta)=0$. We thus assume that $\theta>0$. The calculation of $L(\theta)$ is inspired by \cite{TaNaHa2005}. Let $\rho_t$, $0\leq t\leq \tau$, be a closed curve at $\rho$ with geometric phase $\theta$ and length $L(\theta)$. 
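As an aside, the identity \eqref{fubini-study-uncertainty} is easy to verify numerically for a randomly chosen qubit state and Hamiltonian (the seed and dimension below are arbitrary illustrative choices):

```python
import numpy as np

# Random qubit Hamiltonian and pure state (illustrative seed)
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (M + M.conj().T) / 2                         # Hermitian
v = rng.normal(size=2) + 1j * rng.normal(size=2)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())

drho = -1j * (H @ rho - rho @ H)                 # von Neumann equation
fs_speed_sq = 0.5 * np.trace(drho @ drho).real   # eq. (fubini-study)
mean_H = (v.conj() @ H @ v).real
var_H = (v.conj() @ H @ H @ v).real - mean_H**2
assert np.isclose(fs_speed_sq, var_H)            # eq. (fubini-study-uncertainty)
```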
Since ``geometric phase'' and ``length'' are parameterization-invariant quantities, we may assume that $\rho_t$ has a constant speed and that $\tau=1$. Let $\ket{\psi}$ be any vector in the fiber above $\rho$, and let $\ket{\psi_t}$ be the horizontal lift of $\rho_t$ that starts at $\ket{\psi}$. Since $\rho_t$ is a closed curve with geometric phase $\theta$, $\ket{\psi_t}$ ends at $e^{i\theta}\ket{\psi}$. Also, since $\rho_t$ has minimal length among such curves, $\ket{\psi_t}$ is a minimum for the kinetic energy functional \begin{equation} \mathcal{E}\big(\ket{\phi_t},\lambda_t\big)=\frac{1}{2}\int_0^1 \mathrm{d}t\,\big( \braket{\dot\phi_t}{\dot\phi_t}+2i\lambda_t\braket{\phi_t}{\dot\phi_t}\big), \end{equation} with $\lambda_t$ a Lagrange multiplier enforcing horizontality. The kinetic energy functional is defined on curves in $\mathcal{S}$ that extend between $\ket{\psi}$ and $e^{i\theta}\ket{\psi}$. Every variation vector field along $\ket{\psi_t}$ has the form $-iX_t\ket{\psi_t}$ for some curve of Hermitian operators $X_t$. To keep the endpoints of $\ket{\psi_t}$ fixed, we require that $X_t$ vanishes at $t=0$ and $t=1$. A variation with this variation vector field is $\ket{\psi_{s,t}}=U_{s,t}\ket{\psi_t}$ with $U_{s,t}=\vec{\mathcal{T}}e^{-isX_t}$. The time order is such that $\partial_t U_{s,t} = -isU_{s,t}\dot X_t$. A straightforward calculation shows that \begin{equation}\label{partialintegration} \begin{split} \hspace{-5pt}\frac{\mathrm{d}}{\mathrm{d} s}\mathcal{E}\big(\ket{\psi_{s,t}},\lambda_t\big)\Big|_{s=0} = \frac{1}{2} &\int_0^1\mathrm{d}t \operatorname{tr}\Big( X_t\frac{\mathrm{d}}{\mathrm{d} t}\big(i\ketbra{\psi_t}{\dot\psi_t}-\\ &\hspace{10pt}i\ketbra{\dot\psi_t}{\psi_t} -2\lambda_t\ketbra{\psi_t}{\psi_t}\big)\Big).
\end{split} \end{equation} Since $\ket{\psi_t}$ is a minimum of $\mathcal{E}$, and \eqref{partialintegration} holds for all curves of Hermitian operators $X_t$ vanishing for $t=0$ and $t=1$, \begin{equation}\label{bee} A=i\ket{\dot\psi_t}\bra{\psi_t}-i\ket{\psi_t}\bra{\dot\psi_t} +2\lambda_t\ket{\psi_t}\bra{\psi_t} \end{equation} is a time-independent Hermitian operator. Furthermore, since $\ket{\psi_t}$ is horizontal, $i\ket{\dot\psi_t} = (A-2\lambda_t) \ket{\psi_t}$ and $2\lambda_t = \bra{\psi_t}A\ket{\psi_t}$. These equations imply that $\lambda_t$ is constant: \begin{equation} 2\dot\lambda_t = i \bra{\psi_t} (A-2\lambda_t) A \ket{\psi_t}-i\bra{\psi_t}A(A-2\lambda_t)\ket{\psi_t}=0. \end{equation} We write $\lambda_t=\lambda$ and put $B=A-2\lambda$. Then \begin{align} &\ket{\dot\psi_t} = -iB\ket{\psi_t}, \\ &\langle B\rangle =\bra{\psi_0}B\ket{\psi_0} =0. \end{align} We show that $\rho_t=\ketbra{\psi_t}{\psi_t}$ occupies at most two eigenvalues of $B$, and we determine $\braket{\dot\psi_0}{\dot\psi_0}$, which is the speed of $\rho_t$ squared and, thus, equals $L(\theta)^2$. Since $\ket{\psi_t}$ is horizontal, the vectors $\ket{\psi_0}$ and $\ket{\dot\psi_0}$ are perpendicular. Also, by \eqref{bee}, they span the support of $A$. Any vector which is perpendicular to $\ket{\psi_0}$ and $\ket{\dot\psi_0}$ belongs to the kernel of $A$ and, therefore, is an eigenvector of $B$ with eigenvalue $-2\lambda$. We conclude that $\ket{\psi_0}$, and thus the entire curve $\ket{\psi_t}$, is contained in the span of two eigenvectors of $B$. The corresponding eigenvalues are \begin{equation}\label{eigenvalues} \epsilon_{\pm}=-\lambda\pm\sqrt{\lambda^2+\braket{\dot\psi_0}{\dot\psi_0}}. \end{equation} The requirement $\ket{\psi_1}=e^{i\theta}\ket{\psi_0}$ is satisfied if and only if $\epsilon_+=2\pi k-\theta$ and $\epsilon_-=2\pi l-\theta$ for some integers $k$ and $l$. Notice that $k\geq 1$ and $l\leq 0$ since $\epsilon_+$ is positive and $\epsilon_-$ is negative. 
We have that $-2\lambda=\epsilon_++\epsilon_-=2\big((k+l)\pi-\theta\big)$, implying that $\lambda=\theta-(k+l)\pi$. Equation \eqref{eigenvalues} yields \begin{equation} 2k\pi-\theta=-\theta+(k+l)\pi+\sqrt{(\theta-(k+l)\pi)^2+\braket{\dot\psi_0}{\dot\psi_0}}. \end{equation} It follows that $\braket{\dot\psi_0}{\dot\psi_0} = (2k\pi-\theta)(\theta-2l\pi)$. The right-hand side is minimal for $k=1$ and $l=0$. We conclude that \begin{equation} L(\theta) =\sqrt{\braket{\dot\psi_0}{\dot\psi_0}} =\sqrt{\theta(2\pi-\theta)}. \end{equation} Equation \eqref{tjunio}, and thus \eqref{MTQSL}, follows from the fact that the length of any closed curve at $\rho$ with geometric phase $\theta$ is at least $L(\theta)$. \subsection{Tightness} The map sending a qubit state onto its Bloch vector is a diffeomorphism from the projective Hilbert space onto the Bloch sphere. However, the map is not an isometry. Rather, the Fubini-Study metric pushed forward to the Bloch sphere equals a quarter of the spherical metric. Consequently, the Fubini-Study length of a curve on the Bloch sphere is half its spherical length. Consider the system studied in Section \ref{ML tightness}. We shall show that it also saturates the inequality in \eqref{MTQSL}. Since the radius of the curve traced by the Bloch vector is $\sin\phi$, the Fubini-Study length of the curve is $\pi\sin\phi$. Also, by \eqref{tata}, the geometric phase is $\theta = \pi(1+\cos\phi)$. Thus, \begin{equation} \mathcal{L}(\rho_t) =\pi\sqrt{1-\cos^2\phi} =\sqrt{\theta(2\pi-\theta)} =L(\theta). \end{equation} Since the Fubini-Study length is the smallest possible for a closed evolution with geometric phase $\theta$, the estimate \eqref{MTQSL} is saturated. Explicitly, $\Delta H=\Omega\sin\phi/2$ and, hence, \begin{equation} \frac{\sqrt{\theta(2\pi-\theta)}}{\Delta H} =\frac{2\pi\sin\phi}{\Omega\sin\phi} =\frac{2\pi}{\Omega} =\tau.
\end{equation} \subsection{A time bound of Bhatia-Davies type} In time-independent systems where the state occupies only two energy levels, the Mandelstam-Tamm type bound \eqref{MTQSL} is the geometric mean of the quotients on the right-hand side of the Margolus-Levitin type bound \eqref{MLQSL}. This follows from the Bhatia-Davies inequality \cite{BhDa2000}, stating that the variance of an observable $A$ is bounded from above according to \begin{equation} \Delta^2 A\leq \langle A-a_{\mathrm{min}}\rangle \langle a_{\mathrm{max}}-A\rangle, \end{equation} where $a_{\mathrm{min}}$ is the smallest, and $a_{\mathrm{max}}$ is the largest, occupied eigenvalue of $A$. The Bhatia-Davies inequality is an equality if and only if at most two of the eigenvalues of $A$ are occupied. Consider a time-dependent system with Hamiltonian $H_t$ which returns to its initial state at time $\tau$, by which time it has acquired the geometric phase $\theta$. From the Bhatia-Davies inequality it follows that \begin{equation}\label{timeBD} \langle\hspace{-2pt}\langle \Delta H_t\rangle\hspace{-2pt}\rangle \leq \Big{\langle}\hspace{-4pt}\Big{\langle} \sqrt{\langle H_t-\epsilon_{\mathrm{min};t}\rangle \langle \epsilon_{\mathrm{max};t}-H_t\rangle}\,\Big{\rangle}\hspace{-4pt}\Big{\rangle}, \end{equation} where $\epsilon_{\mathrm{min};t}$ and $\epsilon_{\mathrm{max};t}$ are the smallest and largest occupied instantaneous energy levels. By \eqref{MTQSL} and \eqref{timeBD}, \begin{equation}\label{BDQSL} \tau\geq \frac{\sqrt{\theta(2\pi-\theta)}}{\langle\hspace{-2pt}\langle\sqrt{\langle H_t-\epsilon_{\mathrm{min};t}\rangle \langle \epsilon_{\mathrm{max};t}-H_t\rangle}\,\rangle\hspace{-2pt}\rangle}. \end{equation} We call this a time bound of Bhatia-Davies type. Since, in general, inequality \eqref{timeBD} is strict, the estimate \eqref{BDQSL} is weaker than the Mandelstam-Tamm type estimate \eqref{MTQSL}. For qubits, \eqref{MTQSL} and \eqref{BDQSL} are saturated simultaneously.
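The Bhatia-Davies inequality, and its saturation when only two eigenvalues are occupied, can be checked numerically. The spectra and occupation probabilities below are arbitrary illustrations:

```python
import numpy as np

# Random observable spectrum and occupation probabilities (illustrative)
rng = np.random.default_rng(1)
a = np.sort(rng.normal(size=4))      # occupied eigenvalues of A
p = rng.dirichlet(np.ones(4))        # occupation probabilities
mean = p @ a
var = p @ a**2 - mean**2

# Bhatia-Davies: Var(A) <= <A - a_min><a_max - A>
assert var <= (mean - a[0]) * (a[-1] - mean) + 1e-12

# Equality when only two eigenvalues are occupied
q = np.array([0.3, 0.7])
b = np.array([-1.0, 2.0])
m2 = q @ b
v2 = q @ b**2 - m2**2
assert np.isclose(v2, (m2 - b[0]) * (b[-1] - m2))
```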
\section{Summary and outlook}\label{sec: summary} The Aharonov-Anandan geometric phase factor is the simplest example of an Abelian quantum mechanical holonomy. We have derived three tight lower bounds on the time it takes to generate an Aharonov-Anandan geometric phase. The limits originate in geometric descriptions of the classic Mandelstam-Tamm and Margolus-Levitin quantum speed limits. In holonomic quantum computation, quantum gates and circuits are generated from more complex Abelian and non-Abelian holonomies \cite{An1988, ZaRa1999, SjToAnHeJoSi2012, AlSj2022, ZhKyFiKwSjTo2021}. In a forthcoming paper, we will report lower bounds for the time it takes to generate such holonomies. \end{document}
\begin{document} \vspace*{0.35in} \begin{flushleft} {\Large\textbf{Estimating Nonlinear Dynamics with the ConvNet Smoother}} \newline \\ Luca Ambrogioni\textsuperscript{*}, Umut Güçlü, Eric Maris and Marcel A. J. van Gerven \\ Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands \\ * l.ambrogioni@donders.ru.nl \end{flushleft} \section*{Abstract} Estimating the state of a dynamical system from a series of noise-corrupted observations is fundamental in many areas of science and engineering. The most well-known method, the Kalman smoother (and the related Kalman filter), relies on assumptions of linearity and Gaussianity that are rarely met in practice. In this paper, we introduce a new dynamical smoothing method that exploits the remarkable capabilities of convolutional neural networks to approximate complex nonlinear functions. The main idea is to generate a training set composed of both latent states and observations from an ensemble of simulators and to train the deep network to recover the former from the latter. Importantly, this method only requires the availability of the simulators and can therefore be applied in situations in which either the latent dynamical model or the observation model cannot be easily expressed in closed form. In our simulation studies, we show that the resulting ConvNet smoother has almost optimal performance in the Gaussian case even when the parameters are unknown. Furthermore, the method can be successfully applied to extremely nonlinear and non-Gaussian systems. Finally, we empirically validate our approach via the analysis of measured brain signals. \section{Introduction} Estimating the state of a dynamical system from a finite set of indirect and noisy measurements is a key objective in many fields of science and engineering~\cite{sarkka2013bayesian}.
For example, meteorological forecasting requires the estimation of the dynamical state of a series of atmospheric variables from a sparse set of noisy measurements \cite{krishnamurti2016review}. Another example, essential for several modern technologies, is the localization of physical objects from indirect measurements such as radar, sonar or optical recordings \cite{sarkka2013bayesian}. Even the human brain consistently deals with this problem as it has to integrate a heterogeneous series of indirect sensory inputs in its internal representation of the external world \cite{mathys2014uncertainty}. In the rest of the paper we will refer to this class of problems as \emph{dynamical smoothing}, considering the related problem of \emph{dynamical filtering} as a special case (i.e. 0-lag smoothing). In this paper, we introduce a new nonlinear smoothing approach that only requires the ability to sample from measurements and dynamical models by leveraging the exceptional flexibility and representational capabilities of deep convolutional neural networks (ConvNets)~\cite{schmidhuber2015deep}. The key idea is to generate synthetic samples of signal and noise from an ensemble of generators in order to train a deep neural network to recover the latent dynamical state from the observations. This allows the use of very realistic models, where the signal and noise structure can be tailored to the specific application without any concern about their analytic tractability. Furthermore, the procedure completely circumvents the problem of model selection and parameter estimation since the training set can be constructed by hierarchically sampling the model and its parameters from an appropriate ensemble. Importantly, since we can generate an arbitrarily large number of training data, we can train arbitrarily complex deep networks without over-fitting. 
\subsection{Related works} Conventional solutions to the dynamical smoothing problem usually rely on a series of mathematical assumptions on the nature of the signal and the measurement process. For example, when the state dynamics are linear, the measurement model is Gaussian, and all the parameters are known, the optimal solution is given by the Kalman smoother (also known as the Rauch–Tung–Striebel smoother)~\cite{briers2010smoothing}. For nonlinear state dynamics and/or a non-Gaussian measurement model, the dynamical smoothing problem can no longer be solved exactly. In these cases, a common approximation is the extended Kalman smoother (EKS), which works by linearizing the state dynamics and the measurement model at each time point~\cite{sorenson1985kalman}. A more modern approach is the unscented Kalman smoother (UKS), which approximates the dynamical smoothing distribution by passing a set of selected points (sigma points) through the exact nonlinear functions of the state dynamics and the measurement model~\cite{wan2000unscented}. Unfortunately, these methods may introduce systematic biases in the estimated state and require the availability of both a prior and a likelihood function in closed form. In theory these shortcomings can be overcome by using sampling methods such as particle smoothers~\cite{sarkka2013bayesian}. However, these methods require a large number of samples in order to be reliable and are affected by particle degeneracy. In recent years, several authors pioneered the use of deep neural networks and stochastic optimization for dynamical filtering and smoothing problems. Haarnoja and collaborators introduced the backprop KF, a deterministic filtering method based on a low-dimensional differentiable dynamical model whose input is obtained from high-dimensional measurements through a deep neural network \cite{haarnoja2016backprop}. Importantly, all the parameters of this model can be optimized using stochastic gradient descent.
Improvements in stochastic variational inference led to several applications to dynamical filtering and smoothing problems. Using variational methods, Krishnan and collaborators introduced the deep Kalman filters \cite{krishnan2015deep}. In this work, the authors assume that the distribution of the latent states is Gaussian with mean and covariance matrix determined from the previous state through a parameterized deep neural network. The parameters of the network are trained using stochastic backpropagation \cite{rezende2014stochastic}. Similarly, Archer and collaborators introduced the use of stochastic black-box variational inference as a tool for approximating the posterior distribution of nonlinear smoothing problems \cite{archer2015black}. Despite the impressive performance of modern variational inference techniques, these methods share with the more conventional methods such as EKF and UKF the problem of introducing a systematic bias due to the constrains in the model and in the variational distribution. Conversely, the bias of our method can be arbitrarily reduced since neural networks are universal function approximators and we can use a limitless number of training samples. \section{The model} A state space model is defined as a dynamic equation that determines how the latent state evolves through time, and a measurement model, that determines the outcome of measurements performed on the latent state. If we assume that the measurement process does not affect the latent state, a general stochastic state model can be expressed as follows \begin{flalign} &x(t) = F_{\theta} \big[x(0),...,x(t-1); \xi_t\big]\\ &y(t) = G_{\phi} \big[x(0),...,x(t); \zeta_t\big]~, \label{eq: state space model} \end{flalign} where $x(t)$ is the latent state at time $t$ and $y(t)$ is the measurement at time $t$. The functions $F_{\theta}$ and $G_\phi$ determine the dynamic evolution of the latent state and the measurement process respectively. 
Since the system is causal and is not disturbed by the measurements, the dynamic function $F_{\theta}$ takes as input the past values of $x$ and a random variable $\xi_t$ that introduces stochasticity in the dynamics. Analogously, the measurement function $G_{\phi}$ takes as inputs the past and present values of $x$ and a random variable $\zeta_t$ that accounts for the randomness in the measurements. In a dynamical smoothing problem, we aim to recover the latent state $x$ from the set of measurements $\left\{y(0), ..., y(T)\right\}$, where $T$ is the final time point. Usually, the functional form of $F_{\theta}$ and $G_{\phi}$ is assumed to be known in advance. Nevertheless, these functions typically depend on a set of parameters ($\theta$ and $\phi$ respectively) that need to be inferred from the data. \subsection{Deterministic smoothing as a regression problem} We will focus on deterministic smoothing, where the output of the analysis is a point estimate of $x$ instead of a full probability distribution. If we have a training set where both the dynamical state $x$ and the measurements $y$ are observable, then the problem reduces to a simple regression problem where we construct a deterministic mapping $f(y;w) = \hat{x}$ by minimizing a suitable cost function of the form \begin{equation} C(w) = \sum_j D\big[x(t_j),f\big(y(t_j);w\big)\big]~, \end{equation} over the space of some parameters $w$. In this expression, the function $D\big[x,\hat{x}\big]$ is some sensible measure of deviation between the real dynamical state $x$ and our estimate $\hat{x}$. In most situations, the state $x$ is not directly observable. However, in the usual dynamical smoothing setting, we assume that we have access to the dynamical model that generated the data and the measurements. In this case, the model can simply be used for generating a synthetic training set.
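For concreteness, the synthetic-training-set construction described above can be sketched as follows. The specific dynamic and measurement functions, parameter ranges, and trajectory length below are illustrative stand-ins, not the models used in the paper:

```python
import numpy as np

def simulate(T, theta, phi, rng):
    """Draw one (states, observations) pair from a toy nonlinear state-space
    model. F and G below are illustrative stand-ins, not the paper's models."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = theta * np.sin(x[t - 1]) + 0.1 * rng.normal()   # F(x; xi_t)
    y = phi * x + 0.5 * rng.normal(size=T)                     # G(x; zeta_t)
    return x, y

rng = np.random.default_rng(0)
# Hierarchical sampling: draw the parameters, then a trajectory, per pair
pairs = [simulate(200, theta=rng.uniform(0.5, 1.5),
                  phi=rng.uniform(0.8, 1.2), rng=rng) for _ in range(8)]
assert all(x.shape == y.shape == (200,) for x, y in pairs)
```

A regression model trained on many such pairs then maps observations $y$ back to states $x$.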
If the dynamical model is not exactly known, we can still construct a training set by sampling from a large parametrized family of plausible models. Clearly, this requires the use of a sophisticated regression model that is able to learn very complicated functional dependencies. To this end, we make use of convolutional neural networks, a regression (and classification) method, which has been shown to achieve state-of-the-art performance in many machine learning tasks~\cite{krizhevsky2012imagenet, he2016deep, oord2016wavenet}. \subsection{Convolutional neural networks} In this subsection, we briefly describe the details of the convolutional neural network which was used in our experiments. The network comprised a number of dilated convolution layers~\cite{Yu2015} with 60 one-dimensional kernels of length three and rectified linear units, followed by a fully-connected layer with $n$ one-dimensional kernels of length $n$ where $n$ is the signal length. In most experiments, $n$ was 200. Dilated convolution layers are similar to regular convolution layers with the exception that successive kernel elements have gaps between them, whose size is determined by a dilation factor. As a result, they ensure that the feature map length remains the same while the receptive field length increases. Note that regular convolution layers can be considered dilated convolution layers with a dilation factor of one. The dilation factor of the first two layers was one, and it doubled with every subsequent layer. The number of convolution layers was chosen to be the largest possible value such that the receptive field length of the last convolution layer was less than the signal length. That is, \begin{equation} m = \max\left\{x \,:\, 3 + 2\sum_{i = 0}^{x - 2}2^i < n\right\}, \end{equation} where $m$ is the number of convolution layers. In all experiments, $m$ equalled seven.
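The layer-count rule above can be sketched in a few lines (the helper names are ours; the constants follow the kernel length of three and the stated dilation schedule):

```python
def receptive_field(m):
    """Receptive field of m dilated conv layers with kernel length 3 and
    dilations 1, 1, 2, 4, ... (doubling after the second layer)."""
    dilations = [1] + [2**i for i in range(m - 1)]
    return 1 + 2 * sum(dilations)    # equals 3 + 2 * sum_{i=0}^{m-2} 2^i

def num_layers(n):
    """Largest m whose receptive field is still below the signal length n."""
    m = 1
    while receptive_field(m + 1) < n:
        m += 1
    return m

assert receptive_field(7) == 129     # below the signal length n = 200
assert num_layers(200) == 7          # matches the value used in the paper
```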
We initialized the bias terms to zero and the weights to samples drawn from a scaled Gaussian distribution~\cite{He2015}. We used Adam~\cite{Kingma2014} with initial $\alpha$ = 0.001, $\beta_1$ = 0.9, $\beta_2$ = 0.999, $\epsilon = 10^{-8}$ and a mini-batch size of 1500 to train the network for 20,000 epochs by iteratively minimizing a smooth approximation of the Huber loss function~\cite{Huber1964}, called the pseudo-Huber loss function~\cite{Charbonnier1997}: \begin{equation} C(w) = \sum_j \bigg(\sqrt{1 + \left(x(t_j) - f\big(y(t_j);w\big)\right) ^ 2} - 1\bigg) \,. \end{equation} \subsection{The ConvNet smoother} The idea behind the ConvNet smoother is simple. We train a convolutional neural network to recover the sequence of simulated dynamical states $\{x(0), ..., x(T)\}$ from the set of simulated measurements $\{y(0), ..., y(T)\}$. The network is trained on simulated data that are sampled using the state evolution function $F_{\theta}$ and the measurement function $G_{\phi}$. If we do not know the parameters $\theta$ and $\phi$ in advance, the training set can still be constructed by randomly sampling $\theta$ and $\phi$ prior to each sample of $\{x(0), ..., x(T)\}$ and $\{y(0), ..., y(T)\}$. In this way, we leave to the network the burden of adapting to the specific parameter setting every time a new series of observations is presented as input. Clearly, if the parameter space is large, we would need a more complex network in order to properly learn the mapping. Fortunately, since we can generate an arbitrarily large number of data points, we can potentially train arbitrarily complex networks without overfitting on the training set. \section{Experiments} In the following, we validate the ConvNet smoother in two simulation studies and in an analysis of real brain data acquired using magnetoencephalography (MEG).
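As a quick sanity check of the training objective introduced above, the pseudo-Huber cost behaves quadratically for small residuals and approximately linearly for large ones; a minimal sketch:

```python
import numpy as np

def pseudo_huber(x, x_hat):
    """Pseudo-Huber cost summed over time points."""
    return np.sum(np.sqrt(1.0 + (x - x_hat) ** 2) - 1.0)

# Quadratic near zero, approximately linear for large residuals
small = pseudo_huber(np.zeros(1), np.array([0.01]))
large = pseudo_huber(np.zeros(1), np.array([100.0]))
assert small < 1e-4
assert 98 < large < 100
```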
\subsection{Analysis of Gaussian dynamics} When the latent dynamical state is a Gaussian process (GP) with known covariance function, the optimal smoother (in a least-squares sense) is given by the expected value of a GP regression~\cite{rasmussen2006gaussian}. Here, we compare the performance of the ConvNet smoother with the expectation of both an exact GP regression (with known covariance and noise parameters) and an optimized GP regression (where the parameters are obtained by maximizing the marginal likelihood of the model given the data). The ConvNet was trained with samples drawn from a GP equipped with a squared exponential covariance function~\cite{rasmussen2006gaussian}. For each sample, the length scale and the amplitude parameters were sampled from a log-normal distribution (length scale: $\mu = -1.9$, $\sigma = 0.8$; amplitude: $\mu = 0$, $\sigma = 0.5$). Hence, the ConvNet smoother is effectively trained on a large family of GPs. These samples were then corrupted with Gaussian white noise whose standard deviation was itself sampled from a log-normal distribution ($\mu = -0.9$, $\sigma = 0.5$). The network was trained to recover the ground truth function from the noisy data. Figure~\ref{figure 1} shows the results on a test set comprised of $200$ trials. Panel A shows an example trial. The estimate obtained using the ConvNet smoother is less smooth than the GP estimates but it does a good job at tracking the ground truth signal. Panel B shows the absolute deviations of the ConvNet, exact GP and optimized GP models. The performance of the ConvNet is only slightly worse than the (optimal) exact GP. Furthermore, the ConvNet significantly outperforms the GP optimized by maximum likelihood. \begin{figure} \caption{Analysis of Gaussian signals. A) Analysis of an example signal. B) Performance of ConvNet, exact GP and optimized GP. The boxplot shows the absolute deviation between the reconstructed signal and the ground truth.
The red lines show the median while the red squares show the mean.} \label{figure 1} \end{figure} \subsection{Analysis of nonlinear dynamics} In this section, we show that the ConvNet smoother can be used in complex situations where neither the exact likelihood nor the exact prior can be easily expressed in closed form. This freedom allows us to train the network on a very general noise model that is likely to approximate the real noise structure of the measured data as a special case. As dynamical model, we used the following stochastic anharmonic oscillator equation: \begin{equation} \frac{d^2x}{dt^2} = -\omega_0^2 x - \beta \frac{dx}{dt} + k_2 x^2 + k_3 x^3 ~ + \xi(t)\,, \label{eq: nonlinear SDE} \end{equation} where $\omega_0$ is the asymptotic undamped angular frequency for small oscillations, $\beta$ is the damping coefficient and both $k_2$ and $k_3$ regulate the deviation from harmonicity. The term $\xi(t)$ is additive Gaussian white noise and introduces stochasticity into the dynamics. We discretized the stochastic dynamical process using the Euler-Maruyama scheme with integration step equal to $0.01$ seconds. The parameters of the dynamical model were kept fixed ($\omega_0 = 5$, $\beta = 0.2$, $k_2 = 15$, $k_3 = -0.5$). This procedure gave a total of $N = 200$ time points for each simulated trial. The experiment is divided into two parts. In the first, we used a Gaussian observation model and compared our method with existing dynamical smoothing techniques. In the second, we used a more complex parameterized observation model in order to demonstrate the flexibility of the ConvNet method. \subsubsection{Gaussian observation model} In this first part of the experiment, we use an observation model where the measurements are corrupted by Gaussian white noise. The standard deviation of the measurement noise was equal to $20$. We generated a total of 49900 training pairs $\left(\{x(t_1),...,x(t_N)\},\{y(t_1),...,y(t_N)\}\right)$ and 1000 test pairs.
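The simulation described above can be sketched as follows (our own NumPy sketch, written as a first-order system in $(x, v)$; the noise amplitude `sigma` of $\xi(t)$ is not stated in the text and is an assumption here):

```python
import numpy as np

# Sketch (ours): Euler-Maruyama integration of the stochastic anharmonic
# oscillator above. `sigma` is an assumed noise amplitude.
def simulate_oscillator(n_steps=200, dt=0.01, omega0=5.0, beta=0.2,
                        k2=15.0, k3=-0.5, sigma=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        drift = -omega0 ** 2 * x - beta * v + k2 * x ** 2 + k3 * x ** 3
        x, v = (x + v * dt,
                v + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal())
        xs[i] = x
    return xs
```

With the parameter values quoted above, 200 steps of size 0.01 cover two seconds of simulated dynamics per trial.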
We compared the ConvNet smoother with the EKS and the UKS. Figure~\ref{figure 2}, Panel A shows the latent state of an example trial recovered using the ConvNet smoother, EKS and UKS. The absolute deviations between the recovered latent state and the ground truth signal are shown in Panel B. From the boxplots, it is clear that in this experiment the ConvNet greatly outperforms the other methods. \begin{figure} \caption{Analysis of nonlinear signals corrupted by conditionally independent noise. A) Analysis of an example signal. B) Performance of ConvNet, EKS and UKS. The boxplot shows the log10 of the absolute deviation between the reconstructed signal and the ground truth. The red lines show the median while the red squares show the mean.} \label{figure 2} \end{figure} \subsubsection{Conditionally dependent observation model} In this experiment, we use a complex observation model where the measurements are not statistically independent given the latent state. In these situations, the EKS and UKS cannot be directly applied. The observations $y(t)$ were obtained as follows: \begin{equation} y(t) = x(t) + \eta t + \gamma(t) + \zeta(t)\,,~ \label{eq: observation model} \end{equation} where $\eta t$ is a linear trend with slope $\eta$ sampled at random from a normal distribution with zero mean and standard deviation equal to $10$; $\gamma(t)$ is a pure jump process with exponential inter-jump intervals (mean equal to $0.5$ s) and scaled Cauchy jump sizes (scale equal to 1.5); $\zeta(t)$ is Student-t white noise with scale sampled from a gamma distribution (with scale equal to $0.3$, shape equal to $2$) and degrees of freedom sampled from a uniform distribution over the integers from $2$ to $21$. All the parameters of the noise model were sampled anew every time a trial was generated. We generated a total of 99900 training pairs $\left(\{x(t_1),...,x(t_N)\},\{y(t_1),...,y(t_N)\}\right)$ and 1000 test pairs. Figure~\ref{figure 3}, Panels A--D show the results in the test set for four example trials.
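The conditionally dependent observation model above can be sketched as follows (our own NumPy sketch; the parameter distributions follow the text, while the sampling step `dt` and the exact discretization of the jump process are assumptions):

```python
import numpy as np

# Sketch (ours) of the observation model: linear trend + pure jump process
# + Student-t white noise, all with randomly sampled parameters.
def corrupt(x, dt=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    t = np.arange(n) * dt
    trend = rng.normal(0.0, 10.0) * t                 # eta * t
    # jump process: exponential inter-jump times, scaled Cauchy jump sizes
    gamma = np.zeros(n)
    jump_time = rng.exponential(0.5)
    while jump_time < t[-1]:
        gamma[t >= jump_time] += 1.5 * rng.standard_cauchy()
        jump_time += rng.exponential(0.5)
    # Student-t white noise with random scale and degrees of freedom
    scale = rng.gamma(shape=2.0, scale=0.3)
    df = rng.integers(2, 22)                          # uniform on {2,...,21}
    zeta = scale * rng.standard_t(df, size=n)
    return x + trend + gamma + zeta
```

Because the trend, jump and noise parameters are drawn anew for every call, repeated applications of `corrupt` to the same latent trajectory produce the highly heterogeneous observations the network is trained on.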
We can see that these trials are characterized by highly heterogeneous waveforms of the latent dynamical process and very variable noise structure. Visually, the method seems to maintain high performance for a wide range of signal and noise characteristics. For example, Panel~B shows a trial with a large discontinuous jump while Panel~C shows a trial with very pronounced outliers. The median log10 absolute deviation of the model output from the ground truth dynamical signal was $0.32$ while the lower and upper quartiles were $-0.51$ and $1.38$ respectively. \begin{figure} \caption{Analysis of nonlinear signals corrupted by conditionally dependent noise. Panels A to D show the input data points (blue dots), ground truth signal (blue line) and ConvNet estimates (red line) for four example trials.} \label{figure 3} \end{figure} \subsection{Analysis of brain oscillations} As a final example, we used the ConvNet smoother for recovering 10 Hz brain oscillations (the so-called alpha rhythm; see \cite{cantero2002human}) from MEG measurements. While several neural field equations have been proposed \cite{deco2008dynamic}, so far there is no universally accepted dynamical model of cortical electromagnetic activity. Therefore, for this application, we generated the dynamical latent state using an empirical procedure that is meant to capture the qualitative features of alpha oscillations without resorting to an explicit equation of motion. The idea is to sample from a sufficiently large family of signal and noise models, in order to capture the observed data as a special case. After training, the ConvNet should be able to recognize the appropriate signal and noise characteristics directly from the input time series.
In our example, the oscillatory waveform was generated as follows: \begin{equation} x(t) = \mathcal{A}(t) f\bigg(\cos\big(\omega(t) t + \phi_0\big)\bigg)\,, \label{eq: oscillations model} \end{equation} where the envelope $\mathcal{A}(t)$ and the angular frequency parameter $\omega(t)$ are sampled from a GP and the initial phase $\phi_0$ is sampled from a uniform distribution. Furthermore, the nonlinear function $f(a)$ has the following form: \begin{equation} f(a) = w_1 a + w_2 a^2 + w_3 a^3 + w_4 a^4 + w_5 a^5~, \label{eq: nonlinear functions} \end{equation} where the Taylor coefficients $w_1, w_3$ and $w_5$ were sampled from truncated t distributions (df = 3, from $0$ to $\infty$) and the coefficients $w_2$ and $w_4$ were sampled from t distributions (df = 3). This allows us to generate synthetic alpha oscillations with variable waveform, amplitude and peak frequency. The network was trained on the synthetic data set and then applied to a resting-state MEG data set. The experimental procedure is described in~\cite{ambrogioni2016complex}. We compared the resulting estimate with the estimate obtained by applying a band-pass Butterworth filter (two-pass, 4th order, from 8 Hz to 12 Hz) to the MEG data. Figure~\ref{figure 4} shows the results for two example trials. In Panel A, the oscillatory alpha activity is very prominent across the whole trial. Note that the ConvNet smoother is able to recover the highly anharmonic waveform without introducing a substantial amount of noise. In Panel B, we can see that the oscillatory activity is absent in the first half of the trial and becomes prominent in the second half. Importantly, the ConvNet smoother almost completely suppresses the oscillatory response in the first part while the linear filter exhibits a low amplitude oscillation. \begin{figure} \caption{Analysis of brain oscillations.
Panels A-B show the raw MEG signal (blue line) and the estimates of the alpha oscillation time series obtained using the Butterworth filter (dashed line) and the ConvNet smoother (red line) for two example trials.} \label{figure 4} \end{figure} \section{Conclusions} In this paper, we introduced the use of deep convolutional neural networks trained on synthetic data for nonlinear smoothing. The ConvNet smoother requires no analytic work by the practitioner besides the design of an appropriate ensemble of signal and noise simulators. Importantly, imperfect prior knowledge about the signal and the noise model is compensated by the remarkable capacity of deep convolutional neural networks to recognize patterns in the data. Several improvements are possible. First, the model can easily be used for forecasting by training the network with an initial segment of the noisy time series as input and the full noise-free time series as output. This kind of application could have a major impact given the importance of ensemble forecasting methods in fields such as meteorology~\cite{krishnamurti2016review}. Second, the method can be adapted to online filtering by using an autoregressive convolutional architecture where the filter kernels only have access to previous time points~\cite{uria2016neural}. Third, the uncertainty over the latent dynamical signal can be estimated either by drop-out~\cite{gal2015dropout} or by using a conditional density estimation neural network for estimating the full conditional distribution of the dynamical state given the data~\cite{williams1996using}. This latter approach may be considered as an application of the recently introduced framework for Bayesian conditional density estimation~\cite{papamakarios2016fast}. \end{document}
\begin{document} \begin{center} {\bf INTEGRAL EQUATION FOR THE TRANSITION DENSITY \\ OF THE MULTIDIMENSIONAL MARKOV RANDOM FLIGHT} \end{center} \begin{center} Alexander D. KOLESNIK\\ Institute of Mathematics and Computer Science\\ Academy Street 5, Kishinev 2028, Moldova\\ E-Mail: kolesnik@math.md \end{center} \vskip 0.2cm \begin{abstract} We consider the Markov random flight $\bold X(t)$ in the Euclidean space $\Bbb R^m, \; m\ge 2,$ starting from the origin $\bold 0\in\Bbb R^m$ that, at Poisson-paced times, changes its direction at random according to an arbitrary distribution on the unit $(m-1)$-dimensional sphere $S^m(\bold 0,1)$ having an absolutely continuous density. For any time instant $t>0$, convolution-type recurrent relations for the joint and conditional densities of the process $\bold X(t)$ and of the number of changes of direction are obtained. Using these relations, we derive an integral equation for the transition density of $\bold X(t)$ whose solution is given in the form of a uniformly converging series composed of the multiple double convolutions of the singular component of the density with itself. Two important particular cases, the uniform distribution on $S^m(\bold 0,1)$ and the Gaussian distributions on the unit circumference $S^2(\bold 0,1)$, are considered separately. \end{abstract} \vskip 0.1cm {\it Keywords:} Random flight, random evolution, joint density, conditional density, convolution, Fourier transform, characteristic function \vskip 0.2cm {\it AMS 2010 Subject Classification:} 60K35, 60K99, 60J60, 60J65, 82C41, 82C70 \section{Introduction} \numberwithin{equation}{section} Random motions at finite speed in the multidimensional Euclidean spaces $\Bbb R^m, \; m\ge 2$, also called random flights, have become the subject of intense research in recent decades.
The majority of published works deal with the case of isotropic Markov random flights, in which the motions are controlled by a homogeneous Poisson process and their directions are taken uniformly on the unit $(m-1)$-dimensional sphere \cite{kol2,kol3,kol4,kol5,kol6,kol7}, \cite{mas}, \cite{sta1,sta2}. The limiting behaviour of the Markov random flight with a finite number of fixed directions in $\Bbb R^m$ was examined in \cite{ghosh}. In recent years the non-Markovian multidimensional random walks with Erlang- and Dirichlet-distributed displacements were studied in a series of works \cite{lecaer1,lecaer2,lecaer3,letac}, \cite{pogor1,pogor2}. Such random motions at finite velocities are of great interest due to both their great theoretical importance and numerous fruitful applications in physics, chemistry, biology and other fields. When studying such a motion, its explicit distribution is, undoubtedly, the most attractive aim of the research. However, despite many efforts, closed-form expressions for the distributions of Markov random flights have been obtained in a few cases only. In the spaces of low even-order dimensions such distributions were obtained in explicit form by different methods (see \cite{sta2}, \cite{mas}, \cite{kol5}, \cite{kol7} for the Euclidean plane $\Bbb R^2$, \cite{kol6} for the space $\Bbb R^4$ and \cite{kol2} for the space $\Bbb R^6$). Moreover, in the spaces $\Bbb R^2$ and $\Bbb R^4$ such distributions are surprisingly expressed in terms of elementary functions, while in the space $\Bbb R^6$ the distribution has the form of a series composed of some polynomials. As far as random flights in the odd-dimensional Euclidean spaces are concerned, their analysis is much more complicated in comparison with the even-dimensional cases.
A formula for the transition density of the symmetric Markov random flight with unit speed in the space $\Bbb R^3$ was given in \cite{sta1}; however, it has a very complicated form of an integral with variable limits whose integrand depends on the inverse tangent function of the integration variable (see \cite[formulas (1.3) and (4.21)]{sta1}). Moreover, the density presented in this work raises some questions since its absolutely continuous (integral) part is discontinuous at the origin $\bold 0\in\Bbb R^3$, which seems fairly strange. The characteristic functions of the multidimensional random flights are much more convenient objects for analysis than their densities. This is due to the fact that, while the densities are finitary functions (that is, functions defined on compact sets of $\Bbb R^m$), their characteristic functions (Fourier transforms) are analytical real functions defined everywhere in $\Bbb R^m$. That is why the characteristic functions have become the subject of the vast research whose results were published in \cite{kol3} and \cite{kol8}. In particular, in \cite{kol3} the time-convolutional recurrent relations for the joint and conditional characteristic functions of the Markov random flight in the Euclidean space $\Bbb R^m$ of arbitrary dimension $m\ge 2$ were obtained. By using these recurrent relations, the Volterra integral equation of the second kind with continuous kernel for the unconditional characteristic function was derived and a closed-form expression for its Laplace transform was given. Such a convolutional structure of the characteristic functions hints at a similar one for the respective densities. Discovering such convolutional relations for the densities of Markov random flights in $\Bbb R^m, \; m\ge 2,$ is the main subject of this article. The paper is organized as follows.
In Section 2 we introduce the general Markov random flight in the Euclidean spaces $\Bbb R^m, \; m\ge 2$, with arbitrary dissipation function and describe the structure of its distribution. Some basic properties of the joint, conditional and unconditional characteristic functions of the process are also given. In Section 3 we derive the recurrent relations for the joint and conditional densities of the process and of the number of changes of direction in the form of double convolutions in the space and time variables. Based on these recurrent relations, an integral equation for the transition density of the process is obtained in Section 4, whose solution is given in the form of a uniformly converging series composed of the multiple double convolutions of the singular component of the density with itself. This solution is unique in the class of finitary functions in $\Bbb R^m$. Two important particular cases, the uniform distribution on $S^m(\bold 0,1)$ and the Gaussian distributions on the unit circumference $S^2(\bold 0,1)$, are considered in Section 5. \section{Description of Process and Its Basic Properties} \numberwithin{equation}{section} Consider the following stochastic motion. A particle starts from the origin $\bold 0 = (0, \dots, 0)$ of the Euclidean space $\Bbb R^m, \; m\ge 2,$ at the initial time instant $t=0$ and moves with some constant speed $c$ (note that $c$ is treated as the constant norm of the velocity). The initial direction is a random $m$-dimensional vector with arbitrary distribution (also called the dissipation function) on the unit sphere $$S^m(\bold 0, 1) = \left\{ \bold x=(x_1, \dots ,x_m)\in \Bbb R^m: \; \Vert\bold x\Vert^2 = x_1^2+ \dots +x_m^2=1 \right\} $$ having the absolutely continuous bounded density $\chi(\bold x), \; \bold x\in S^m(\bold 0, 1)$.
We emphasize that here and hereafter the upper index $m$ denotes the dimension of the space in which the sphere $S^m(\bold 0, 1)$ is considered, not its own dimension which, clearly, is $m-1$. The motion is controlled by a homogeneous Poisson process $N(t)$ of rate $\lambda>0$ as follows. At each Poissonian instant, the particle instantaneously takes on a new random direction on $S^m(\bold 0, 1)$ with the same density $\chi(\bold x), \; \bold x\in S^m(\bold 0, 1),$ independently of its previous motion and keeps moving with the same speed $c$ until the next Poisson event occurs, when it takes a new random direction again, and so on. Let $\bold X(t)=(X_1(t), \dots ,X_m(t))$ be the particle's position at time $t>0$, which is referred to as the $m$-dimensional random flight. At an arbitrary time instant $t>0$ the particle, with probability 1, is located in the closed $m$-dimensional ball of radius $ct$ centered at the origin $\bold 0$: $$\bold B^m(\bold 0, ct) = \left\{ \bold x=(x_1, \dots ,x_m)\in \Bbb R^m : \; \Vert\bold x\Vert^2 = x_1^2+ \dots +x_m^2\le c^2t^2 \right\} .$$ Consider the probability distribution function $$\Phi(\bold x, t) = \text{Pr} \left\{ \bold X(t)\in d\bold x \right\}, \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t\ge 0,$$ of the process $\bold X(t)$, where $d\bold x$ is the infinitesimal element in the space $\Bbb R^m$ with Lebesgue measure $\mu(d\bold x) = dx_1 \dots dx_m$. For arbitrary fixed $t>0$, the distribution $\Phi(\bold x, t)$ consists of two components.
The singular component corresponds to the case when no Poisson event occurs in the time interval $(0,t)$ and it is concentrated on the sphere $$S^m(\bold 0, ct) =\partial\bold B^m(\bold 0, ct) = \left\{ \bold x=(x_1, \dots ,x_m)\in \Bbb R^m: \; \Vert\bold x\Vert^2 = x_1^2+ \dots +x_m^2=c^2t^2 \right\} .$$ In this case the particle is located on the sphere $S^m(\bold 0, ct)$ and the probability of this event is $$\text{Pr} \left\{ \bold X(t)\in S^m(\bold 0, ct) \right\} = e^{-\lambda t} .$$ If at least one Poisson event occurs by time instant $t$, then the particle is located strictly inside the ball $\bold B^m(\bold 0, ct)$ and the probability of this event is $$\text{Pr} \left\{ \bold X(t)\in \text{int} \; \bold B^m(\bold 0, ct) \right\} = 1 - e^{-\lambda t} .$$ The part of the distribution $\Phi(\bold x, t)$ corresponding to this case is concentrated in the interior $$\text{int} \; \bold B^m(\bold 0, ct) = \left\{ \bold x=(x_1, \dots ,x_m)\in \Bbb R^m: \; \Vert\bold x\Vert^2 = x_1^2+ \dots +x_m^2<c^2t^2 \right\}$$ of the ball $\bold B^m(\bold 0, ct)$ and forms its absolutely continuous component. Let $p(\bold x,t) = p(x_1, \dots ,x_m,t), \;\; \bold x\in\bold B^m(\bold 0, ct) , \; t>0,$ be the density of distribution $\Phi(\bold x, t)$. It has the form \begin{equation}\label{dens} p(\bold x,t) = p_s(\bold x,t) + p_{ac}(\bold x,t) , \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t>0, \end{equation} where $p_s(\bold x,t)$ is the density (in the sense of generalized functions) of the singular component of $\Phi(\bold x, t)$ concentrated on the sphere $S^m(\bold 0, ct)$ and $p_{ac}(\bold x,t)$ is the density of the absolutely continuous component of $\Phi(\bold x, t)$ concentrated in $\text{int} \; \bold B^m(\bold 0, ct)$.
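The two-component structure can be illustrated by a short Monte Carlo sketch (ours, not from the paper), for the planar case $m = 2$ with the uniform dissipation function: sampling $\bold X(t)$ and checking that the probability mass on the sphere $S^m(\bold 0, ct)$, i.e. of no direction switch in $(0, t)$, equals $e^{-\lambda t}$:

```python
import numpy as np

# Monte Carlo sketch (ours, m = 2, uniform directions): the fraction of
# sample paths ending exactly on the circle of radius c*t estimates the
# singular mass exp(-lambda * t).
def sample_flight(t, lam=1.0, c=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = rng.poisson(lam * t)                      # number of switches
    times = np.sort(rng.uniform(0.0, t, size=n))  # switching instants
    durations = np.diff(np.concatenate(([0.0], times, [t])))
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n + 1)
    steps = c * durations[:, None] * np.column_stack((np.cos(angles),
                                                      np.sin(angles)))
    return steps.sum(axis=0)                      # position X(t) in R^2

rng = np.random.default_rng(0)
pos = np.array([sample_flight(1.0, rng=rng) for _ in range(20000)])
r = np.linalg.norm(pos, axis=1)
on_sphere = np.mean(np.isclose(r, 1.0))           # close to exp(-lambda * t)
```

All sampled positions lie in the closed ball of radius $ct$, in agreement with the support statement above.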
The density $\chi(\bold x), \; \bold x\in S^m(\bold 0, 1),$ on the unit sphere $S^m(\bold 0, 1)$ generates the absolutely continuous and bounded (in $\bold x$ for any fixed $t$) density $\varrho(\bold x,t), \bold x\in S^m(\bold 0, ct),$ on the sphere $S^m(\bold 0, ct)$ of radius $ct$ according to the formula $\varrho(\bold x,t) = \chi(\frac{1}{ct}\bold x), \; \bold x\in S^m(\bold 0, ct), \; t>0$. Therefore, the singular part of density (\ref{dens}) has the form: \begin{equation}\label{densS} p_s(\bold x,t) = e^{-\lambda t} \varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2) , \qquad t>0, \end{equation} where $\delta(x)$ is the Dirac delta-function. The absolutely continuous part of density (\ref{dens}) has the form: \begin{equation}\label{densAC} p_{ac}(\bold x,t) = f_{ac}(\bold x,t) \Theta(ct-\Vert\bold x\Vert) , \qquad t>0, \end{equation} where $f_{ac}(\bold x,t)$ is some positive function absolutely continuous in $\text{int} \; \bold B^m(\bold 0, ct)$ and $\Theta(x)$ is the Heaviside step function given by \begin{equation}\label{heaviside} \Theta(x) = \left\{ \aligned 1, \qquad & \text{if} \; x>0,\\ 0, \qquad & \text{if} \; x\le 0. \endaligned \right. \end{equation} Consider the conditional densities $p_n(\bold x,t), \; n\ge 0,$ of the process $\bold X(t)$ conditioned on the random events $\{ N(t)=n \}, \; n\ge 0,$ where, recall, $N(t)$ is the number of the Poisson events that have occurred in the time interval $(0, t)$. Obviously, $p_0(\bold x,t)=\varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2)$. Therefore, our aim is to examine conditional densities $p_n(\bold x,t)$ for $n\ge 1$.
Consider the conditional characteristic functions (Fourier transforms) of the process $\bold X(t)$: \begin{equation}\label{rec1} G_n(\boldsymbol\alpha, t) = E \left\{ e^{i\langle\boldsymbol\alpha, \bold X(t)\rangle} \vert \; N(t)=n \right\} , \qquad n\ge 1, \end{equation} where $\boldsymbol\alpha =(\alpha_1, \dots ,\alpha_m) \in \Bbb R^m$ is the real $m$-dimensional vector of inversion parameters and $\langle\boldsymbol\alpha, \bold X(t)\rangle$ is the inner product of the vectors $\boldsymbol\alpha$ and $\bold X(t)$. According to \cite[formula (6.8)]{kol3}, functions (\ref{rec1}) are given by the formula: \begin{equation}\label{rec2} G_n(\boldsymbol\alpha, t) = \frac{n!}{t^n} \int_0^t d\tau_1 \int_{\tau_1}^t d\tau_2 \dots \int_{\tau_{n-1}}^t d\tau_n \left\{ \prod_{j=1}^{n+1} \psi(\boldsymbol\alpha, \tau_j-\tau_{j-1}) \right\} , \qquad n\ge 1, \end{equation} where \begin{equation}\label{rec3} \psi(\boldsymbol\alpha, t) = \mathcal F_{\bold x} \left[ \varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2) \right](\boldsymbol\alpha) = \int_{S^m(\bold 0, ct)} e^{i\langle\boldsymbol\alpha, \bold x\rangle} \; \varrho(\bold x, t) \; \nu(d\bold x) \end{equation} is the characteristic function (Fourier transform) of density $\varrho(\bold x,t)$ concentrated on the sphere $S^m(\bold 0, ct)$ of radius $ct$ and $\nu(d\bold x)$ is the surface Lebesgue measure on $S^m(\bold 0, ct)$. Consider separately the integral factor in (\ref{rec2}): \begin{equation}\label{rec4} \mathcal J_n(\boldsymbol\alpha, t) = \int_0^t d\tau_1 \int_{\tau_1}^t d\tau_2 \dots \int_{\tau_{n-1}}^t d\tau_n \left\{ \prod_{j=1}^{n+1} \psi(\boldsymbol\alpha, \tau_j-\tau_{j-1}) \right\} , \qquad n\ge 1.
\end{equation} This function has a quite definite probabilistic meaning, namely \begin{equation}\label{recc4} \tilde G_n(\boldsymbol\alpha, t) = \mathcal F_{\bold x} \bigl[ \tilde p_n(\bold x, t) \bigr](\boldsymbol\alpha) = \frac{(\lambda t)^n \; e^{-\lambda t}}{n!} \; G_n(\boldsymbol\alpha, t) = \lambda^n e^{-\lambda t} \mathcal J_n(\boldsymbol\alpha, t), \end{equation} $$\boldsymbol\alpha\in\Bbb R^m, \qquad t>0, \qquad n\ge 1,$$ is the characteristic function (Fourier transform) of the joint probability density $\tilde p_n(\bold x, t)$ of the particle's position at time instant $t$ and of the number $N(t)=n$ of the Poisson events that have occurred by this instant $t$. It is known (see \cite[Theorem 5]{kol3}) that, for arbitrary $n\ge 1$, functions (\ref{rec4}) are connected with each other by the following recurrent relation: \begin{equation}\label{rec5} \mathcal J_n(\boldsymbol\alpha, t) = \int_0^t \psi(\boldsymbol\alpha, t-\tau) \; \mathcal J_{n-1}(\boldsymbol\alpha, \tau) \; d\tau = \int_0^t \psi(\boldsymbol\alpha, \tau) \; \mathcal J_{n-1}(\boldsymbol\alpha, t-\tau) \; d\tau, \qquad n\ge 1, \end{equation} where, by definition, ${\mathcal J}_0(\boldsymbol\alpha, t) \overset{\text{def}}{=} \psi(\boldsymbol\alpha, t)$. Formula (\ref{rec5}) can also be represented in the following time-convolutional form: \begin{equation}\label{rec6} \mathcal J_n(\boldsymbol\alpha, t) = \psi(\boldsymbol\alpha, t) \overset{t}{\ast} \mathcal J_{n-1}(\boldsymbol\alpha, t), \qquad n\ge 1, \end{equation} where the symbol $\overset{t}{\ast}$ means the convolution operation with respect to time variable $t$. From (\ref{rec6}) it follows that \begin{equation}\label{rec7} \mathcal J_n(\boldsymbol\alpha, t) = \left[ \psi(\boldsymbol\alpha, t) \right]^{\overset{t}{\ast}(n+1)} , \qquad n\ge 1, \end{equation} where $\overset{t}{\ast}(n+1)$ means the $(n+1)$-multiple convolution in $t$.
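The $(n+1)$-fold time-convolution above can be checked numerically in the simplest slice $\boldsymbol\alpha = \bold 0$, where $\psi(\bold 0, t) = 1$ (since $\varrho$ is a probability density on the sphere), so that $\mathcal J_n(\bold 0, t) = t^n/n!$. The following sketch (ours, not from the paper) verifies this with an iterated trapezoid-rule convolution:

```python
import numpy as np

# Numerical sketch (ours): for psi identically 1, the iterated time-
# convolution J_n = psi *t J_{n-1} must give J_n(t) = t**n / n!.
def time_convolve(a, b, dt):
    # (a *t b)(t_k) = int_0^{t_k} a(t_k - tau) b(tau) dtau  (trapezoid rule)
    out = np.empty(len(a))
    for k in range(len(a)):
        f = a[k::-1] * b[:k + 1]
        out[k] = dt * (f.sum() - 0.5 * (f[0] + f[-1]))
    return out

t = np.linspace(0.0, 1.0, 401)
dt = t[1] - t[0]
psi = np.ones_like(t)       # psi(0, t) = 1
J = psi.copy()              # J_0 = psi
for _ in range(2):          # two convolutions give J_2
    J = time_convolve(psi, J, dt)
# J now approximates J_2(0, t) = t**2 / 2
```

Consistently, at $\boldsymbol\alpha = \bold 0$ the series $e^{-\lambda t} \sum_n \lambda^n t^n/n!$ sums to $1$, as it must for a characteristic function evaluated at zero.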
Applying Laplace transformation $\mathcal L_t$ (in the time variable $t$) to (\ref{rec7}), we arrive at the formula \begin{equation}\label{rec8} \mathcal L_t \left[ \mathcal J_n(\boldsymbol\alpha, t) \right] (s) = \bigl( \mathcal L_t \left[ \psi(\boldsymbol\alpha, t) \right] (s) \bigr)^{n+1} , \qquad n\ge 1. \end{equation} It is also known (see \cite[Corollary 5.3]{kol3}) that conditional characteristic functions (\ref{rec2}) satisfy the following recurrent relation \begin{equation}\label{rec8a} G_n(\boldsymbol\alpha, t) = \frac{n}{t^n} \int_0^t \tau^{n-1} \psi(\boldsymbol\alpha, t-\tau) \; G_{n-1}(\boldsymbol\alpha, \tau) \; d\tau, \qquad G_0(\boldsymbol\alpha, t) \overset{\text{def}}{=} \psi(\boldsymbol\alpha, t), \quad n\ge 1. \end{equation} The unconditional characteristic function \begin{equation}\label{rec9} G(\boldsymbol\alpha, t) = E \left\{ e^{i\langle\boldsymbol\alpha, \bold X(t)\rangle} \right\} \end{equation} of the process $\bold X(t)$ satisfies the Volterra integral equation of the second kind (see \cite[Theorem 6]{kol3}): \begin{equation}\label{rec10} G(\boldsymbol\alpha, t) = e^{-\lambda t} \psi(\boldsymbol\alpha, t) + \lambda \int_0^t e^{-\lambda (t-\tau)} \psi(\boldsymbol\alpha, t-\tau) \; G(\boldsymbol\alpha, \tau) \; d\tau , \qquad t\ge 0, \end{equation} or in the convolutional form \begin{equation}\label{rec11} G(\boldsymbol\alpha, t) = e^{-\lambda t} \psi(\boldsymbol\alpha, t) + \lambda \bigl[ \left( e^{-\lambda t} \psi(\boldsymbol\alpha, t) \right) \overset{t}{\ast} G(\boldsymbol\alpha, t) \bigr] . \end{equation} This is the renewal equation for Markov random flight $\bold X(t)$. In the class of continuous functions integral equation (\ref{rec10}) (or (\ref{rec11})) has the unique solution given by the uniformly converging series \begin{equation}\label{rec12} G(\boldsymbol\alpha, t) = e^{-\lambda t} \sum_{n=0}^{\infty} \lambda^n \; \left[ \psi(\boldsymbol\alpha, t) \right]^{\overset{t}{\ast} (n+1)} \; .
\end{equation} From (\ref{rec11}) we obtain the general formula for the Laplace transform of characteristic function (\ref{rec9}): \begin{equation}\label{rec13} \mathcal L_t \left[ G(\boldsymbol\alpha, t) \right] (s) = \frac{\mathcal L_t \left[ \psi(\boldsymbol\alpha, t) \right] (s+\lambda)}{1 - \lambda \; \mathcal L_t \left[ \psi(\boldsymbol\alpha, t) \right](s+\lambda)} \; , \qquad \text{Re} \; s > 0. \end{equation} These properties will be used in the next section for deriving recurrent relations for the joint and conditional densities of Markov random flight $\bold X(t)$. \section{Recurrent Relations} \numberwithin{equation}{section} Consider the joint probability densities $p_n(\bold x, t), \; n\ge 0, \; \bold x\in\bold B^m(\bold 0, ct), \; t>0,$ of the particle's position $\bold X(t)$ at time instant $t>0$ and of the number of the Poisson events $\{ N(t)=n\}$ that have occurred by this instant $t$. For $n=0$, we have \begin{equation}\label{joint0} p_0(\bold x, t) = p_s(\bold x, t) = e^{-\lambda t} \varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2) , \qquad t>0, \end{equation} where, recall, $p_s(\bold x, t)$ is the singular part of density (\ref{dens}) concentrated on the surface of the sphere $S^m(\bold 0, ct) = \partial\bold B^m(\bold 0, ct)$ and given by (\ref{densS}). If $n\ge 1$, then, according to (\ref{densAC}), joint densities $p_n(\bold x, t)$ have the form: \begin{equation}\label{jointN} p_n(\bold x, t) = f_n(\bold x,t) \Theta(ct-\Vert\bold x\Vert) , \qquad n\ge 1, \quad t>0, \end{equation} where $f_n(\bold x,t), \; n\ge 1,$ are some positive functions absolutely continuous in $\text{int} \; \bold B^m(\bold 0, ct)$ and $\Theta(x)$ is the Heaviside step function. The joint density $p_{n+1}(\bold x,t)$ can be expressed through the previous one $p_n(\bold x,t)$ by means of a recurrent relation. This result is given by the following theorem.
{\bf Theorem 1.} {\it The joint densities} $p_n(\bold x, t), \; n\ge 1,$ {\it are connected with each other by the following recurrent relation:} \begin{equation}\label{rec14} p_{n+1}(\bold x, t) = \lambda \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} p_n(\bold x, \tau) \bigr] \; d\tau , \qquad n\ge 1, \quad \bold x\in\text{int} \; \bold B^m(\bold 0, ct), \quad t>0. \end{equation} \vskip 0.2cm \begin{proof} Applying Fourier transformation to the right-hand side of (\ref{rec14}), we have: \begin{equation}\label{rec17} \aligned \lambda \; \mathcal F_{\bold x} & \left[ \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} p_n(\bold x, \tau) \bigr] \; d\tau \right](\boldsymbol\alpha) \\ & = \lambda \int_0^t \mathcal F_{\bold x} \left[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} p_n(\bold x, \tau) \right](\boldsymbol\alpha) \; d\tau \\ & = \lambda \int_0^t \mathcal F_{\bold x} \bigl[ p_0(\bold x, t-\tau) \bigr](\boldsymbol\alpha) \; \mathcal F_{\bold x} \bigl[ p_n(\bold x, \tau) \bigr](\boldsymbol\alpha) \; d\tau \\ & = \lambda \int_0^t e^{-\lambda(t-\tau)} \mathcal F_{\bold x} \bigl[ \varrho(\bold x, t-\tau) \delta(c^2(t-\tau)^2-\Vert\bold x\Vert^2) \bigr](\boldsymbol\alpha) \;\; \mathcal F_{\bold x} \bigl[ p_n(\bold x, \tau) \bigr](\boldsymbol\alpha) \; d\tau \\ & = \lambda \int_0^t e^{-\lambda(t-\tau)} \psi(\boldsymbol\alpha, t-\tau) \; \lambda^n e^{-\lambda\tau} \mathcal J_n(\boldsymbol\alpha, \tau) \; d\tau \\ & = \lambda^{n+1} e^{-\lambda t} \int_0^t \psi(\boldsymbol\alpha, t-\tau) \; \mathcal J_n(\boldsymbol\alpha, \tau) \; d\tau \\ & = \lambda^{n+1} e^{-\lambda t} \mathcal J_{n+1}(\boldsymbol\alpha, t) \\ & = \mathcal F_{\bold x} \bigl[ p_{n+1}(\bold x, t) \bigr](\boldsymbol\alpha) , \endaligned \end{equation} where we have used formulas (\ref{rec3}), (\ref{recc4}), (\ref{rec5}). Thus, both the functions on the left- and right-hand sides of (\ref{rec14}) have the same Fourier transform and, therefore, they coincide.
The change of integration order in the first step of (\ref{rec17}) is justified because the convolution $p_0(\bold x, t-\tau) \overset{\bold x}{\ast} p_n(\bold x, \tau)$ of the singular part $p_0(\bold x, t-\tau)$ of the density with the absolutely continuous one $p_n(\bold x, \tau), \; n\ge 1,$ is an absolutely continuous (and, therefore, uniformly bounded in $\bold x$) function. From this fact it follows that, for any $n\ge 1$, the integral in square brackets on the left-hand side of (\ref{rec17}) converges uniformly in $\bold x$ for any fixed $t$. The theorem is proved. \end{proof} {\it Remark 1.} In view of (\ref{densS}) and (\ref{densAC}), formula (\ref{rec14}) can be represented in the following expanded form: \begin{equation}\label{rec15} \aligned p_{n+1}(\bold x, t) & = \lambda \int_0^t e^{-\lambda(t-\tau)} \\ & \quad \times \left\{ \int \varrho(\bold x-\boldsymbol\xi,t-\tau) \delta(c^2(t-\tau)^2-\Vert\bold x-\boldsymbol\xi\Vert^2) \; f_n(\boldsymbol\xi, \tau) \Theta(c\tau-\Vert\boldsymbol\xi\Vert) \; \nu(d\boldsymbol\xi) \right\} d\tau , \endaligned \end{equation} $$n\ge 1, \quad \bold x\in\text{int} \; \bold B^m(\bold 0, ct), \quad t>0,$$ where the function $f_n(\boldsymbol\xi, \tau)$ is absolutely continuous in the variable $\boldsymbol\xi=(\xi_1,\dots,\xi_m)\in\Bbb R^m$ and $\nu(d\boldsymbol\xi)$ is the surface Lebesgue measure. The integration area of the interior integral on the right-hand side of (\ref{rec15}) is determined by all $\boldsymbol\xi$ for which the integrand takes non-zero values, that is, by the system $$\boldsymbol\xi\in\Bbb R^m \; : \; \left\{ \aligned & \Vert\bold x-\boldsymbol\xi\Vert^2 = c^2(t-\tau)^2 , \\ & \Vert\boldsymbol\xi\Vert < c\tau. \endaligned \right.$$ The first relation of this system determines a sphere $S^m(\bold x, c(t-\tau))$ of radius $c(t-\tau)$ centred at point $\bold x$, while the second one represents an open ball $\text{int} \; \bold B^m(\bold 0, c\tau)$ of radius $c\tau$ centred at the origin $\bold 0$.
Their intersection \begin{equation}\label{setM} M(\bold x, \tau) = S^m(\bold x, c(t-\tau))\cap\text{int} \; \bold B^m(\bold 0, c\tau), \end{equation} which is a part of (or the whole) surface of sphere $S^m(\bold x, c(t-\tau))$ located inside the ball $\bold B^m(\bold 0, c\tau)$, represents the integration area of dimension $m-1$ in the interior integral of (\ref{rec15}). Note that the sum of the radii of $S^m(\bold x, c(t-\tau))$ and $\text{int} \; \bold B^m(\bold 0, c\tau)$ equals $c(t-\tau)+c\tau = ct > \Vert\bold x\Vert$ and is, therefore, greater than the distance $\Vert\bold x\Vert$ between their centres $\bold 0$ and $\bold x$. This fact, as well as some simple geometric reasonings, shows that intersection (\ref{setM}) depends on $\tau\in (0,t)$ as follows. $\bullet$ If $\tau\in (0, \; \frac{t}{2} - \frac{\Vert\bold x\Vert}{2c}]$, then intersection (\ref{setM}) is empty, that is, $M(\bold x, \tau) = \varnothing$. $\bullet$ If $\tau\in (\frac{t}{2} - \frac{\Vert\bold x\Vert}{2c}, \; \frac{t}{2} + \frac{\Vert\bold x\Vert}{2c}]$, then intersection $M(\bold x, \tau)$ is not empty and represents a hypersurface of dimension $m-1$. $\bullet$ If $\tau\in (\frac{t}{2} + \frac{\Vert\bold x\Vert}{2c}, \; t]$, then $S^m(\bold x, c(t-\tau))\subset \text{int} \; \bold B^m(\bold 0, c\tau)$ and, therefore, in this case $M(\bold x, \tau) = S^m(\bold x, c(t-\tau))$.
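Since this classification depends only on the radii $c(t-\tau)$, $c\tau$ and the distance $\Vert\bold x\Vert$ between the centres, it can be verified by elementary interval arithmetic on distances from the origin to points of the sphere. The following sketch (an illustrative check; the function name and the parameter values are ours) confirms the three regimes and the thresholds $\frac{t}{2} \pm \frac{\Vert\bold x\Vert}{2c}$:

```python
def intersection_case(x_norm, c, t, tau):
    """Classify M(x, tau) = S(x, c(t-tau)) ∩ int B(0, c*tau).

    Distances from the origin to points of the sphere S(x, r1) fill the
    interval [|x_norm - r1|, x_norm + r1]; comparing it with the ball
    radius r2 gives the three cases (boundary conventions match the
    half-open intervals of the text)."""
    r1, r2 = c*(t - tau), c*tau
    if abs(x_norm - r1) >= r2:
        return "empty"
    if x_norm + r1 < r2:
        return "whole sphere"
    return "partial"

c, t, xn = 1.0, 2.0, 0.8
t1 = t/2 - xn/(2*c)   # lower threshold t/2 - |x|/(2c)
t2 = t/2 + xn/(2*c)   # upper threshold t/2 + |x|/(2c)
assert intersection_case(xn, c, t, 0.5*t1) == "empty"
assert intersection_case(xn, c, t, (t1 + t2)/2) == "partial"
assert intersection_case(xn, c, t, (t2 + t)/2) == "whole sphere"
```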
Thus, formula (\ref{rec15}), as well as (\ref{rec14}), can be rewritten in the expanded form: \begin{equation}\label{rec16} \aligned p_{n+1}(\bold x, t) & = \lambda \int\limits_{\frac{t}{2} - \frac{\Vert\bold x\Vert}{2c}}^{\frac{t}{2} + \frac{\Vert\bold x\Vert}{2c}} e^{-\lambda(t-\tau)} \left\{ \int\limits_{M(\bold x, \tau)} \varrho(\bold x-\boldsymbol\xi,t-\tau) \; f_n(\boldsymbol\xi, \tau) \; \nu(d\boldsymbol\xi) \right\} d\tau \\ & + \lambda \int\limits_{\frac{t}{2} + \frac{\Vert\bold x\Vert}{2c}}^t e^{-\lambda(t-\tau)} \left\{ \int\limits_{S^m(\bold x, c(t-\tau))} \varrho(\bold x-\boldsymbol\xi,t-\tau) \; f_n(\boldsymbol\xi, \tau) \; \nu(d\boldsymbol\xi) \right\} d\tau \endaligned \end{equation} and the expressions in curly brackets of (\ref{rec16}) represent surface integrals over $M(\bold x, \tau)$ and $S^m(\bold x, c(t-\tau))$. {\it Remark 2.} By means of the double convolution operation of two arbitrary generalized functions $f_1(\bold x, t), f_2(\bold x, t) \in\mathscr{S'}, \; \bold x\in\Bbb R^m, \; t>0,$ \begin{equation}\label{rec16a} f_1(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} f_2(\bold x, t) = \int_0^t \int_{\Bbb R^m} f_1(\boldsymbol\xi, \tau) \; f_2(\bold x - \boldsymbol\xi, t-\tau) \; d\boldsymbol\xi \; d\tau \end{equation} formula (\ref{rec14}) can be represented in the succinct convolutional form \begin{equation}\label{rec16b} p_{n+1}(\bold x, t) = \lambda \left[ p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} p_n(\bold x, t) \right] . \end{equation} Taking into account the well-known connections between the joint and conditional densities, we can extract from Theorem 1 a convolution-type recurrent relation for the conditional probability densities $\tilde p_n(\bold x, t), \; n\ge 1$.
{\bf Corollary 1.1.} {\it The conditional densities} $\tilde p_n(\bold x, t), \; n\ge 1,$ {\it are connected with each other by the following recurrent relation:} \begin{equation}\label{rec18} \tilde p_{n+1}(\bold x, t) = \frac{n+1}{t^{n+1}} \int_0^t \tau^n \bigl[ \tilde p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \tilde p_n(\bold x, \tau) \bigr] \; d\tau , \qquad n\ge 1, \quad \bold x\in\text{int} \; \bold B^m(\bold 0, ct), \quad t>0, \end{equation} {\it where $\tilde p_0(\bold x, t) = \varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2)$ is the conditional density corresponding to the case when no Poisson event occurs up to time instant $t$.} \vskip 0.2cm \begin{proof} The proof immediately follows from Theorem 1 and recurrent formula (\ref{rec8}). \end{proof} {\it Remark 3.} Formulas (\ref{rec14}) and (\ref{rec18}) are also valid for $n=0$. In this case, for arbitrary $t>0$, they take the form: \begin{equation}\label{rec19} p_1(\bold x, t) = \lambda \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p_0(\bold x, \tau) \bigr] \; d\tau , \end{equation} \begin{equation}\label{rec20} \tilde p_1(\bold x, t) = \frac{1}{t} \int_0^t \bigl[ \tilde p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \tilde p_0(\bold x, \tau) \bigr] \; d\tau , \end{equation} where, we recall, the function $p_0(\bold x, t)$ defined by (\ref{joint0}) is the singular part of the density concentrated on the surface of the sphere $S^m(\bold 0, ct)$. The derivation of (\ref{rec19}) simply retraces the proof of Theorem 1, taking into account the boundedness argument that justifies the change of integration order in (\ref{rec17}). Formula (\ref{rec20}) follows from (\ref{rec19}).
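For the reader's convenience, the step behind Corollary 1.1 can be spelled out as follows. Using the well-known relation $p_n(\bold x, t) = e^{-\lambda t} \frac{(\lambda t)^n}{n!} \; \tilde p_n(\bold x, t), \; n\ge 0,$ between the joint and conditional densities, substitution into (\ref{rec14}) yields $$e^{-\lambda t} \frac{(\lambda t)^{n+1}}{(n+1)!} \; \tilde p_{n+1}(\bold x, t) = \lambda \int_0^t e^{-\lambda(t-\tau)} \; e^{-\lambda\tau} \frac{(\lambda\tau)^n}{n!} \bigl[ \tilde p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \tilde p_n(\bold x, \tau) \bigr] \; d\tau = \frac{\lambda^{n+1} e^{-\lambda t}}{n!} \int_0^t \tau^n \bigl[ \tilde p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \tilde p_n(\bold x, \tau) \bigr] \; d\tau ,$$ and dividing both sides by $e^{-\lambda t} \lambda^{n+1} t^{n+1}/(n+1)!$ we recover (\ref{rec18}).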
\section{Integral Equation for Transition Density} \numberwithin{equation}{section} The transition probability density $p(\bold x, t)$ of the multidimensional Markov flight $\bold X(t)$ is defined by the formula \begin{equation}\label{int1} p(\bold x, t) = \sum_{n=0}^{\infty} p_n(\bold x, t) , \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t>0, \end{equation} where the joint densities $p_n(\bold x, t), \; n\ge 0,$ are given by (\ref{joint0}) and (\ref{jointN}). The density (\ref{int1}) is defined everywhere in the ball $\bold B^m(\bold 0, ct)$, while the function \begin{equation}\label{int2} p_{ac}(\bold x, t) = \sum_{n=1}^{\infty} p_n(\bold x, t) \end{equation} forms its absolutely continuous part concentrated in the interior $\text{int} \; \bold B^m(\bold 0, ct)$ of the ball. Therefore, series (\ref{int2}) converges uniformly everywhere in the closed ball $\bold B^m(\bold 0, ct-\varepsilon)$ for arbitrarily small $\varepsilon>0$. In the following theorem we state an integral equation for density (\ref{int1}). {\bf Theorem 2.} {\it The transition probability density $p(\bold x, t)$ of the Markov random flight $\bold X(t)$ satisfies the integral equation:} \begin{equation}\label{int3} p(\bold x, t) = p_0(\bold x, t) + \lambda \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p(\bold x, \tau) \bigr] \; d\tau , \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t>0.
\end{equation} {\it In the class of finitary functions (that is, generalized functions with compact support in $\Bbb R^m$), integral equation} (\ref{int3}) {\it has the unique solution given by the series} \begin{equation}\label{int4} p(\bold x,t) = \sum_{n=0}^{\infty} \lambda^n \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} , \end{equation} {\it where the symbol $\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)$ means the $(n+1)$-multiple double convolution with respect to spatial and time variables defined by} (\ref{rec16a}), {\it that is,} $$\left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} = \underbrace{p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} \dots \overset{\bold x}{\ast} \overset{t}{\ast} p_0(\bold x, t)}_{(n+1) \; \text{times}} .$$ {\it Series} (\ref{int4}) {\it is convergent everywhere in the open ball} $\text{int} \; \bold B^m(\bold 0, ct)$. {\it For any small $\varepsilon>0$, the series} (\ref{int4}) {\it converges uniformly (in $\bold x$ for any fixed $t>0$) in the closed ball $\bold B^m(\bold 0, ct-\varepsilon)$ and, therefore, it determines the density $p(\bold x, t)$ which is continuous and bounded in this ball}.
\vskip 0.2cm \begin{proof} Applying Theorem 1 and taking into account the uniform convergence of series (\ref{int2}) and of the integral in formula (\ref{rec14}), we have: $$\aligned p(\bold x, t) & = \sum_{n=0}^{\infty} p_n(\bold x, t) \\ & = p_0(\bold x, t) + \sum_{n=1}^{\infty} p_n(\bold x, t) \\ & = p_0(\bold x, t) + \lambda \sum_{n=1}^{\infty} \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p_{n-1}(\bold x, \tau) \bigr] \; d\tau \\ & = p_0(\bold x, t) + \lambda \int_0^t \sum_{n=1}^{\infty} \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p_{n-1}(\bold x, \tau) \bigr] \; d\tau \\ & = p_0(\bold x, t) + \lambda \int_0^t \left[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \left\{ \sum_{n=1}^{\infty} p_{n-1}(\bold x, \tau) \right\} \right] \; d\tau \\ & = p_0(\bold x, t) + \lambda \int_0^t \left[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \left\{ \sum_{n=0}^{\infty} p_n(\bold x, \tau) \right\} \right] \; d\tau \\ & = p_0(\bold x, t) + \lambda \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p(\bold x, \tau) \bigr] \; d\tau , \endaligned$$ proving (\ref{int3}). Another way of proving the theorem is to apply the Fourier transformation to both sides of (\ref{int3}). Justifying the change of the order of integration as it was done in (\ref{rec17}), we arrive at the Volterra integral equation (\ref{rec10}) for the Fourier transforms. Using notation (\ref{rec16a}), equation (\ref{int3}) can be represented in the convolutional form \begin{equation}\label{int5} p(\bold x, t) = p_0(\bold x, t) + \lambda \bigl[ p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} p(\bold x, t) \bigr] , \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t>0. \end{equation} Let us check that series (\ref{int4}) satisfies equation (\ref{int5}).
Substituting (\ref{int4}) into the right-hand side of (\ref{int5}), we have: $$\aligned p_0(\bold x, t) + \lambda \biggl[ p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} \biggl( \sum_{n=0}^{\infty} \lambda^n \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} \biggr) \biggr] & = p_0(\bold x, t) + \sum_{n=0}^{\infty} \lambda^{n+1} \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+2)} \\ & = p_0(\bold x, t) + \sum_{n=1}^{\infty} \lambda^n \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} \\ & = \sum_{n=0}^{\infty} \lambda^n \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} \\ & = p(\bold x, t) \endaligned$$ and, therefore, series (\ref{int4}) is indeed a solution of equation (\ref{int5}). Note that applying the Fourier transformation to (\ref{int3}) and (\ref{int4}) and taking into account (\ref{densS}), we arrive at the known results (\ref{rec11}) and (\ref{rec12}), respectively. The uniqueness of solution (\ref{int4}) in the class of finitary functions follows from the uniqueness of the solution of Volterra integral equation (\ref{rec10}) for its Fourier transform (\ref{rec12}) (i.e. characteristic function) in the class of continuous functions. Since the transition density $p(\bold x,t)$ is absolutely continuous in the open ball $\text{int} \; \bold B^m(\bold 0, ct)$, it is, for any $\varepsilon>0$, continuous and uniformly bounded in the closed ball $\bold B^m(\bold 0, ct-\varepsilon)$. From this fact and taking into account the uniqueness of the solution of integral equation (\ref{int3}) in the class of finitary functions, we can conclude that series (\ref{int4}) converges uniformly in $\bold B^m(\bold 0, ct-\varepsilon)$ for any small $\varepsilon>0$. This completes the proof.
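The Neumann-series mechanism just used can also be illustrated numerically on a scalar toy analogue of equation (\ref{int5}), with the singular kernel replaced by a smooth illustrative one, $p_0(t)=e^{-bt}$ (all names and parameter values below are ours, chosen only for the check). For this kernel the series $\sum_n \lambda^n [p_0]^{\ast(n+1)}$ sums to $e^{(\lambda-b)t}$, and a trapezoidal Volterra solver reproduces this solution:

```python
import math

# Scalar toy analogue of p(t) = p0(t) + lambda * (p0 * p)(t) with p0(t) = exp(-b t);
# its Neumann series Sum_n lambda^n p0^{*(n+1)} sums to exp((lambda - b) t).
lam, b, T, N = 0.7, 1.3, 2.0, 2000
h = T / N
p0 = [math.exp(-b*k*h) for k in range(N + 1)]
p = [p0[0]] + [0.0]*N
for i in range(1, N + 1):
    # trapezoidal quadrature of the convolution integral over [0, i*h];
    # the unknown p[i] enters linearly and is solved for explicitly
    inner = 0.5*p0[i]*p[0] + sum(p0[i - j]*p[j] for j in range(1, i))
    p[i] = (p0[i] + lam*h*inner) / (1.0 - 0.5*lam*h*p0[0])

exact = math.exp((lam - b)*T)
assert abs(p[N] - exact) < 1e-4
```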
\end{proof} \section{Some Particular Cases} \numberwithin{equation}{section} In this section we consider two important particular cases of the general Markov random flight described in Section 2, namely, when the dissipation function is either the uniform distribution on the unit sphere $S^m(\bold 0,1)$ or the Gaussian distribution on the unit circumference $S^2(\bold 0,1)$. \subsection{Symmetric Random Flights} Suppose that the initial and every new direction are chosen according to the uniform distribution on the unit sphere $S^m(\bold 0,1)$. Such processes in the Euclidean spaces $\Bbb R^m$ of different dimensions $m\ge 2$, which are referred to as the symmetric Markov random flights, have become the subject of a series of works \cite{kol2,kol3,kol4,kol5,kol6,kol7}, \cite{mas}, \cite{sta1,sta2}. In this symmetric case the function $\varrho(\bold x,t)$ is the density of the uniform distribution on the surface of the sphere $S^m(\bold 0, ct)$ and, therefore, it does not depend on the spatial variable $\bold x$. Then, according to (\ref{densS}), the singular part of the transition density of the process $\bold X(t)$ takes the form: \begin{equation}\label{sym1} p_s(\bold x,t) = e^{-\lambda t} \frac{\Gamma\left( \frac{m}{2} \right)}{2\pi^{m/2}\; (ct)^{m-1}} \; \delta(c^2t^2-\Vert\bold x\Vert^2) , \qquad m\ge 2, \quad t>0.
\end{equation} Therefore, according to Theorem 1, for arbitrary dimension $m\ge 2$, the absolutely continuous parts $f_n(\bold x, t)$, \; $n\ge 0,$ of the joint probability densities of the symmetric Markov random flight are connected with each other by the following recurrent relation: \begin{equation}\label{sym2} f_{n+1}(\bold x, t) = \frac{\lambda \; \Gamma\left( \frac{m}{2} \right)}{2\pi^{m/2}\; c^{m-1}} \int_0^t \frac{e^{-\lambda(t-\tau)}}{(t-\tau)^{m-1}} \biggl\{ \int\limits_{M(\bold x,\tau)} f_n(\boldsymbol\xi,\tau) \; d\boldsymbol\xi \biggr\} \; d\tau , \end{equation} $$\bold x=(x_1,\dots,x_m) \in\text{int} \; \bold B^m(\bold 0, ct), \quad m\ge 2, \quad n\ge 0, \quad t>0,$$ where the integration area $M(\bold x,\tau)$ is given by (\ref{setM}). It is known (see \cite[formula (7)]{kol4}) that, in arbitrary dimension $m\ge 2$, the joint density of the symmetric Markov random flight $\bold X(t)$ and of a single change of direction is given by the formula \begin{equation}\label{sym3} f_1(\bold x,t) = \lambda e^{-\lambda t} \; \frac{2^{m-3} \Gamma\left( \frac{m}{2} \right)}{\pi^{m/2} c^m t^{m-1}} \; F\left( \frac{m-1}{2}, -\frac{m}{2}+2; \; \frac{m}{2}; \; \frac{\Vert\bold x\Vert^2}{c^2t^2} \right) , \end{equation} $$\bold x=(x_1,\dots,x_m)\in\text{int} \; \bold B^m(\bold 0, ct), \quad m\ge 2, \quad t>0,$$ where $$F(\alpha,\beta;\gamma;z) = \sum_{k=0}^{\infty} \frac{(\alpha)_k (\beta)_k}{(\gamma)_k} \; \frac{z^k}{k!}$$ is the Gauss hypergeometric function.
Then, by substituting (\ref{sym3}) into (\ref{sym2}) (for $n=1$), we obtain the following formula for the joint density of the process $\bold X(t)$ and of two changes of direction: \begin{equation}\label{sym4} \aligned f_2(\bold x,t) & = \lambda^2 e^{-\lambda t} \; \frac{2^{m-4} \left[\Gamma\left( \frac{m}{2} \right)\right]^2}{\pi^m \; c^{2m-1}} \\ & \qquad\times \int_0^t \biggl\{ \int\limits_{M(\bold x,\tau)} F\left( \frac{m-1}{2}, -\frac{m}{2}+2; \; \frac{m}{2}; \; \frac{\Vert\boldsymbol\xi\Vert^2}{c^2\tau^2} \right) d\boldsymbol\xi \biggr\} \; \frac{d\tau}{(\tau(t-\tau))^{m-1}} , \endaligned \end{equation} $$\bold x=(x_1,\dots,x_m)\in\text{int} \; \bold B^m(\bold 0, ct), \quad m\ge 2, \quad t>0.$$ In the three-dimensional Euclidean space $\Bbb R^3$, the joint density (\ref{sym3}) was computed explicitly by different methods and has the form (see \cite[formula (25)]{kol5} or \cite[the second term of formulas (1.3) and (4.21)]{sta1}): \begin{equation}\label{sym5} f_1(\bold x,t) = \frac{\lambda e^{-\lambda t}}{4\pi c^2 t \Vert\bold x\Vert} \; \ln\left( \frac{ct+\Vert\bold x\Vert}{ct-\Vert\bold x\Vert} \right) , \end{equation} $$\bold x=(x_1,x_2,x_3)\in\text{int} \; \bold B^3(\bold 0, ct), \quad \Vert\bold x\Vert=\sqrt{x_1^2+x_2^2+x_3^2}, \quad t>0.$$ By substituting this joint density into (\ref{sym2}) (for $n=1, \; m=3$), we arrive at the formula: \begin{equation}\label{sym6} f_2(\bold x,t) = \frac{\lambda^2 e^{-\lambda t}}{16 \pi^2 c^4} \int_0^t \biggl\{ \int\limits_{M(\bold x,\tau)} \ln\left( \frac{c\tau+\Vert\boldsymbol\xi\Vert}{c\tau-\Vert\boldsymbol\xi\Vert} \right) \frac{d\boldsymbol\xi}{\Vert\boldsymbol\xi\Vert} \biggr\} \; \frac{d\tau}{\tau(t-\tau)^2} , \end{equation} $$\bold x=(x_1,x_2,x_3)\in\text{int} \; \bold B^3(\bold 0, ct), \quad t>0.$$ Formula (\ref{sym6}) can also be obtained by setting $m=3$ in (\ref{sym4}).
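As a sanity check of the consistency between (\ref{sym3}) and (\ref{sym5}), the hypergeometric expression at $m=3$ can be compared numerically with the closed logarithmic form (a \texttt{sympy} spot check at arbitrary illustrative parameter values):

```python
import sympy as sp

lam, c, t, r = sp.symbols('lambda c t r', positive=True)
z = r**2/(c**2*t**2)   # argument of the hypergeometric function, r = |x|

# General joint density f_1 from (sym3), specialized to m = 3,
# where F((m-1)/2, -m/2+2; m/2; z) becomes F(1, 1/2; 3/2; z)
m = 3
coeff = lam*sp.exp(-lam*t)*2**(m - 3)*sp.gamma(sp.Rational(m, 2)) \
        / (sp.pi**sp.Rational(m, 2)*c**m*t**(m - 1))
f1_general = coeff*sp.hyper([sp.Rational(m - 1, 2), sp.Rational(-m, 2) + 2],
                            [sp.Rational(m, 2)], z)

# Closed form (sym5) in R^3
f1_closed = lam*sp.exp(-lam*t)/(4*sp.pi*c**2*t*r)*sp.log((c*t + r)/(c*t - r))

# Spot checks at points with r < c t (inside the ball)
for point in [{lam: 0.5, c: 1, t: 2, r: 0.7},
              {lam: 2.0, c: 3, t: 1, r: 2.4}]:
    a = f1_general.subs(point).evalf()
    b_ = f1_closed.subs(point).evalf()
    assert abs(a - b_) < 1e-10*abs(b_)
```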
According to Theorem 2 and (\ref{sym1}), the transition density of the $m$-dimensional symmetric Markov random flight solves the integral equation \begin{equation}\label{sym7} \aligned p(\bold x, t) & = \frac{\Gamma\left( \frac{m}{2} \right)}{2\pi^{m/2}\; c^{m-1}} \biggl\{ \frac{e^{-\lambda t}}{t^{m-1}} \; \delta(c^2t^2-\Vert\bold x\Vert^2) \\ & \qquad + \lambda \int_0^t \biggl[ \left( \frac{e^{-\lambda (t-\tau)}}{(t-\tau)^{m-1}} \; \delta(c^2(t-\tau)^2-\Vert\bold x\Vert^2) \right) \overset{\bold x}{\ast} \; p(\bold x, \tau) \biggr] \; d\tau \biggr\} , \endaligned \end{equation} $$\bold x=(x_1,\dots,x_m)\in\bold B^m(\bold 0, ct), \quad t>0.$$ In the class of finitary functions, equation (\ref{sym7}) has the unique solution given by the series \begin{equation}\label{sym8} p(\bold x,t) = \sum_{n=0}^{\infty} \lambda^n \left( \frac{\Gamma\left( \frac{m}{2} \right)}{2\pi^{m/2} \; c^{m-1}} \right)^{n+1} \left[ \frac{e^{-\lambda t}}{t^{m-1}} \; \delta(c^2t^2-\Vert\bold x\Vert^2) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} . \end{equation} \subsection{Gaussian Distribution on Circumference} Consider now the case of the non-symmetric planar random flight when the initial and each new direction are chosen according to the Gaussian distribution on the unit circumference $S^2(\bold 0,1)$ with the two-dimensional density \begin{equation}\label{gauss1} \chi_k(\bold x) = \frac{1}{2\pi \; I_0(k)} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(1-\Vert\bold x\Vert^2) , \end{equation} $$\bold x =(x_1,x_2)\in\Bbb R^2, \qquad \Vert\bold x\Vert=\sqrt{x_1^2+x_2^2} \qquad k\in\Bbb R,$$ where $I_0(z)$ is the modified Bessel function of order 0. Formula (\ref{gauss1}) determines the one-parametric family of Gaussian densities $\left\{ \chi_k(\bold x), \; k\in\Bbb R \right\}$, and for any fixed real $k\in\Bbb R$ the density $\chi_k(\bold x)$ is absolutely continuous and uniformly bounded on $S^2(\bold 0,1)$. 
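In the angular variable on $S^2(\bold 0,1)$, the density (\ref{gauss1}) amounts to $e^{k\cos\theta}/(2\pi I_0(k))$, and its normalization is exactly the classical integral representation $\int_{-\pi}^{\pi} e^{k\cos\theta}\, d\theta = 2\pi I_0(k)$ of the modified Bessel function. A quick numerical confirmation (with \texttt{scipy}; the values of $k$ are arbitrary):

```python
import math
from scipy.integrate import quad
from scipy.special import i0

# Check that the circular Gaussian density integrates to one for several
# (arbitrary) values of the concentration parameter k, including k = 0
# (the uniform case) and a negative value (i0 is even in k).
for k in [0.0, 0.5, 2.0, -1.3]:
    total, _ = quad(lambda th: math.exp(k*math.cos(th))/(2*math.pi*i0(k)),
                    -math.pi, math.pi)
    assert abs(total - 1.0) < 1e-8
```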
If $k=0$, then formula (\ref{gauss1}) transforms into the density of the uniform distribution on the unit circumference $S^2(\bold 0,1)$, while for $k\neq 0$ it produces pure Gaussian-type densities. In polar coordinates $x_1=\cos\theta, \; x_2=\sin\theta$ on the unit circumference, formula (\ref{gauss1}) takes the form of the circular Gaussian law: \begin{equation}\label{gauss2} \chi_k(\theta) = \frac{e^{k \cos\theta}}{2\pi \; I_0(k)} , \qquad \theta\in [-\pi, \pi), \quad k\in\Bbb R. \end{equation} For arbitrary real $k\in\Bbb R$, the Gaussian density (\ref{gauss1}) on the unit circumference $S^2(\bold 0,1)$ generates the Gaussian density \begin{equation}\label{gauss3} p_s(\bold x,t) = \frac{e^{-\lambda t}}{2\pi ct \; I_0(k)} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(c^2t^2-\Vert\bold x\Vert^2) , \end{equation} $$\bold x =(x_1,x_2)\in\Bbb R^2, \quad \Vert\bold x\Vert=\sqrt{x_1^2+x_2^2}, \quad t>0, \quad k\in\Bbb R,$$ concentrated on the circumference $S^2(\bold 0, ct)$ of radius $ct$.
Then, according to Theorem 1, the joint densities are connected with each other by the recurrent relation \begin{equation}\label{gauss4} \aligned & f_{n+1}(\bold x, t) \\ & = \frac{\lambda}{2\pi c I_0(k)} \int_0^t \biggl\{ \int\limits_{M(\bold x,\tau)} \exp\left( \frac{k (x_1-\xi_1)}{\sqrt{(x_1-\xi_1)^2+(x_2-\xi_2)^2}} \right) \; f_n(\xi_1,\xi_2,\tau) \; d\xi_1 d\xi_2 \biggr\} \frac{e^{-\lambda(t-\tau)}}{t-\tau} \; d\tau , \endaligned \end{equation} $$\bold x =(x_1,x_2)\in \; \text{int} \; \bold B^2(\bold 0, ct), \quad n\ge 0, \quad t>0, \quad k\in\Bbb R.$$ According to Theorem 2 and (\ref{gauss3}), the transition density of the planar Markov random flight with Gaussian dissipation function (\ref{gauss1}) satisfies the integral equation \begin{equation}\label{gauss5} \aligned p(\bold x,t) & = \frac{e^{-\lambda t}}{2\pi ct \; I_0(k)} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(c^2t^2-\Vert\bold x\Vert^2) \\ & \qquad + \frac{\lambda}{2\pi c \; I_0(k)} \int_0^t \left[ \left( \frac{e^{-\lambda\tau}}{\tau} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(c^2\tau^2-\Vert\bold x\Vert^2) \right) \overset{\bold x}{\ast} \; p(\bold x, \tau) \right] d\tau , \endaligned \end{equation} $$\bold x=(x_1,x_2)\in\bold B^2(\bold 0, ct), \quad \Vert\bold x\Vert=\sqrt{x_1^2+x_2^2}, \quad t>0, \quad k\in\Bbb R.$$ In the class of finitary functions, equation (\ref{gauss5}) has the unique solution given by the series \begin{equation}\label{gauss6} p(\bold x,t) = \sum_{n=0}^{\infty} \lambda^n \left( \frac{1}{2\pi c \; I_0(k)} \right)^{n+1} \left[ \frac{e^{-\lambda t}}{t} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(c^2 t^2-\Vert\bold x\Vert^2) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} . \end{equation} \end{document}
\begin{document} \title[On the Krull filtration of ${\mathcal U}$]{The Krull filtration of the category of unstable modules over the Steenrod algebra} \author[Kuhn]{Nicholas J.~Kuhn} \address{Department of Mathematics \\ University of Virginia \\ Charlottesville, VA 22904} \email{njk4x@virginia.edu} \thanks{This research was partially supported by grants from the National Science Foundation} \date{June 25, 2013.} \subjclass[2000]{Primary 55S10; Secondary 18E10} \begin{abstract} In the early 1990's, Lionel Schwartz gave a lovely characterization of the Krull filtration of ${\mathcal U}$, the category of unstable modules over the mod $p$ Steenrod algebra. Soon after, this filtration was used by the author as an organizational tool in posing and studying some topological nonrealization conjectures. In recent years the Krull filtration of ${\mathcal U}$ has been similarly used by Castellana, Crespo, and Scherer in their study of H--spaces with finiteness conditions. Recently, Gaudens and Schwartz have given a proof of some of my conjectures. In light of these topological applications, it seems timely to better expose the algebraic properties of the Krull filtration. \end{abstract} \maketitle \section{Introduction} \label{introduction} The mod $p$ cohomology of a space $H^*(X)$ is naturally an object in both ${\mathcal U}$ and ${\mathcal K}$, the categories of unstable modules and algebras over the mod $p$ Steenrod algebra ${\mathcal A}$. Over the past quarter century, much important work in unstable homotopy theory has been grounded in the tremendous advances made during the 1980's in our understanding of the abelian category ${\mathcal U}$, and the central role played by $H^*(B{\mathbb Z}/p)$. In his 1962 paper [Gab62], Gabriel introduced the Krull filtration of a general abelian category. 
Applied to ${\mathcal U}$, the Krull filtration is the increasing sequence of localizing subcategories $$ {\mathcal U}_0 \subset {\mathcal U}_1 \subset {\mathcal U}_2 \subset {\mathcal U}_3 \subset {\mathcal U}_4 \subset \dots$$ recursively defined as follows: ${\mathcal U}_0$ is the full subcategory of locally finite unstable ${\mathcal A}$-modules, and for $n>0$, ${\mathcal U}_n$ is the full subcategory of unstable modules which project to a locally finite object in the quotient category ${\mathcal U}/{\mathcal U}_{n-1}$. In the early 1990's, Lionel Schwartz gave a lovely characterization of the Krull filtration of ${\mathcal U}$ in terms of Lannes' $T$-functor. Soon after, this filtration was used by the author \cite{k1} as an organizational tool in posing and studying some topological nonrealization conjectures. In particular, the author's Strong Realization Conjecture posited that if $H^*(X) \in {\mathcal U}_n$ for some finite $n$, then $H^*(X) \in {\mathcal U}_0$. A proof of this was recently given by Schwartz and G. Gaudens \cite{gaudens schwartz}. (Special cases were proved earlier in \cite{k1,s3,s4}.) The Krull filtration of ${\mathcal U}$ has been similarly used in recent years by Castellana, Crespo, and Scherer in their study of H--spaces with finiteness conditions \cite{ccs1,ccs2}. The few results about the Krull filtration are scattered in the literature, and are not particularly comprehensive. Moreover, Schwartz just sketched a proof of his key characterization in \cite[\S6.2]{s2}, and we suspect many readers would have trouble filling in the details. In light of the recent topological applications, it seems timely to better expose the properties of this interesting bit of the structure of ${\mathcal U}$. 
\subsection{Schwartz's characterization of ${\mathcal U}_n$} Lannes \cite{L} defines $T: {\mathcal U} \rightarrow {\mathcal U}$ to be the left adjoint to the functor sending $M$ to $H^*(B{\mathbb Z}/p) \otimes M$, and proves that this functor satisfies many remarkable properties; in particular, it is exact and commutes with tensor products. We need the reduced version. Let ${\bar T}: {\mathcal U} \rightarrow {\mathcal U}$ be left adjoint to the functor sending $M$ to $\tilde H^*(B{\mathbb Z}/p) \otimes M$, so that $TM = {\bar T} M \oplus M$. Then let ${\bar T}^n$ denote its $n$th iterate. Schwartz's elegant characterization of ${\mathcal U}_n$ \cite[Thm.6.2.4]{s2} goes as follows. It generalizes the $n=0$ case, proved earlier by Lannes and Schwartz \cite{ls2}. \begin{thm} \label{schwartz thm} \label{T thm} ${\mathcal U}_n = \{ M \in {\mathcal U} \ | \ {\bar T}^{n+1}M = 0 \}$. \end{thm} We will give a proof of this theorem which is simpler than the proof outlined in \cite{s2}. All of our subsequent results about ${\mathcal U}_n$ use this theorem. As an example, since ${\bar T}(M \otimes N) = ({\bar T} M \otimes N) \oplus (M \otimes {\bar T} N) \oplus ({\bar T} M \otimes {\bar T} N)$, the theorem has the following first consequence. \begin{cor} If $M \in {\mathcal U}_m$ and $N \in {\mathcal U}_n$, then $M \otimes N \in {\mathcal U}_{m+n}$. \end{cor} \subsection{The quotient category ${\mathcal U}_n/{\mathcal U}_{n-1}$ and consequences.} Our next theorem usefully identifies the quotient category ${\mathcal U}_n/{\mathcal U}_{n-1}$. We need to introduce some basic unstable modules. Let $F(1)$ be the free unstable module on a $1$--dimensional class. Explicitly, $F(1) = {\mathcal A}\cdot x \subset H^*(B{\mathbb Z}/p)$, where $x$ generates $H^1(B{\mathbb Z}/p)$. 
Thus when $p=2$, $$ F(1) = \langle x, x^2, x^4, \dots \rangle \subset {\mathbb Z}/2[x],$$ and when $p$ is odd, $$ F(1) = \langle x, y, y^p, y^{p^2}, \dots \rangle \subset \Lambda(x) \otimes {\mathbb Z}/p[y].$$ If $M \in {\mathcal U}_n$, then ${\bar T}^nM \in {\mathcal U}_0$. But ${\bar T}^nM$ has more structure than just this: note that ${\bar T}^n$ is left adjoint to tensoring with $\tilde H^*(B{\mathbb Z}/p)^{\otimes n}$, and thus ${\bar T}^n(M)$ has a natural action of the $n$th symmetric group $\Sigma_n$. Let $\Sigma_n\text{--}{\mathcal U}_0$ denote the category of $M \in {\mathcal U}_0$ equipped with a $\Sigma_n$--action. \begin{thm} \label{Un/Un-1 thm} The exact functor ${\bar T}^n: {\mathcal U}_n \rightarrow \Sigma_n\text{--}{\mathcal U}_0$ has as right adjoint the functor $$N \mapsto (N \otimes F(1)^{\otimes n})^{\Sigma_n},$$ and together these functors induce an equivalence $${\mathcal U}_n/{\mathcal U}_{n-1} \simeq \Sigma_n\text{--}{\mathcal U}_0.$$ \end{thm} Using this theorem, one quite easily obtains a recursive description of modules in ${\mathcal U}_n$. \begin{thm} \label{recursive Un thm} $M \in {\mathcal U}_n$ if and only if there exist $K,Q \in {\mathcal U}_{n-1}$, $N \in \Sigma_n\text{--}{\mathcal U}_0$, and an exact sequence $$ 0 \rightarrow K \rightarrow M \rightarrow (N \otimes F(1)^{\otimes n})^{\Sigma_n} \rightarrow Q \rightarrow 0.$$ Furthermore, $\bar T^n M \simeq N$. \end{thm} The case $n=1$, already useful, was previously known \cite[Prop.2.3]{s4}. One obtains a simple generating set for ${\mathcal U}_n$. \begin{thm} \label{F(1) thm} ${\mathcal U}_n$ is the smallest localizing subcategory containing all suspensions of the modules $F(1)^{\otimes m}$ for $0 \leq m \leq n$. \end{thm} \begin{rem} Let $F(n)$ be the free unstable module on an $n$--dimensional class. 
Though $F(n) \in {\mathcal U}_n$, ${\mathcal U}_n$ is generally strictly larger than the localizing subcategory generated by all suspensions of the modules $F(m)$ for $0\leq m \leq n$. See \exref{F(n) example}. \end{rem} \subsection{Interaction with the nilpotent filtration} To put \thmref{Un/Un-1 thm} in perspective, we remind readers of how the Krull filtration of ${\mathcal U}$ interacts with the nilpotent filtration \cite[\S2]{k1}. The nilpotent filtration of ${\mathcal U}$, introduced in \cite{s1}, is the decreasing filtration $$ {\mathcal U} = {\mathcal Nil}_0 \supset {\mathcal Nil}_1 \supset {\mathcal Nil}_2 \supset {\mathcal Nil}_3 \supset \dots$$ defined by letting ${\mathcal Nil}_s$ be the smallest localizing subcategory containing all $s$--fold suspensions of unstable modules. We write ${\mathcal Nil}$ for ${\mathcal Nil}_1$. H.-W. Henn, Lannes, and Schwartz \cite{hls1} identify ${\mathcal U}/{\mathcal Nil}$ as follows. Let ${\mathcal F}$ be the category of functors from finite dimensional ${\mathbb Z}/p$--vector spaces to ${\mathbb Z}/p$--vector spaces. There is a difference operator $\Delta: {\mathcal F} \rightarrow {\mathcal F}$ defined by $$ \Delta F(V) = F(V \oplus {\mathbb Z}/p)/F(V).$$ $F$ is said to be polynomial of degree $n$ if $\Delta^{n+1}F = 0$, and analytic if it is locally polynomial. Let ${\mathcal F}^n$ and ${\mathcal F}^{an}$ denote the subcategories of such functors. The authors of \cite{hls1} show that there is an equivalence $ {\mathcal U}/{\mathcal Nil} \simeq {\mathcal F}^{an}$. Under this equivalence, it is not hard to show that $\bar T$ corresponds to $\Delta$, thus ${\mathcal U}_n$ projects to ${\mathcal F}^n$. Schwartz \cite{s1} further observes that, for all $s$, $M \mapsto \Sigma^s M$ induces an equivalence $ {\mathcal U}/{\mathcal Nil} \simeq {\mathcal Nil}_s/{\mathcal Nil}_{s+1}$. 
Restricting these equivalences to ${\mathcal U}_n$, one learns that, for all $s$, there are equivalences of abelian categories $$ ({\mathcal Nil}_s \cap {\mathcal U}_n)/({\mathcal Nil}_{s+1} \cap {\mathcal U}_n) \simeq {\mathcal F}^n.$$ It is classic (see \cite{pirashvili}, \cite[\S 5.5]{s1}, or \cite{filt gen rep}) that there is an equivalence \begin{equation} \label{Fn/Fn-1thm} {\mathcal F}^n / {\mathcal F}^{n-1} \simeq {\mathbb Z}/p[\Sigma_n]\text{--modules}. \end{equation} The equivalence ${\mathcal U}_n/{\mathcal U}_{n-1} \simeq \Sigma_n\text{--}{\mathcal U}_0$ of \thmref{Un/Un-1 thm} can thus be seen as the correct lift of this result to ${\mathcal U}$. \begin{rem} The strongest form of (\ref{Fn/Fn-1thm}) says that the quotient functor ${\mathcal F}^n \rightarrow {\mathbb Z}/p[\Sigma_n]\text{--modules}$ admits both a left and right adjoint, forming a recollement setting \cite[Example 1.5]{filt gen rep}. By contrast, as $\bar T^n$ does not commute with products, the exact quotient functor ${\mathcal U}_n \rightarrow \Sigma_n\text{--} {\mathcal U}_0$ does not admit a left adjoint. See \remref{no adjoint remark} for related comments. \end{rem} \subsection{The Krull filtration of a module} Let $k_n: {\mathcal U} \rightarrow {\mathcal U}_n$ be right adjoint to the inclusion ${\mathcal U}_n \hookrightarrow {\mathcal U}$. Explicitly, $k_nM \subset M$ is the largest submodule of $M$ contained in ${\mathcal U}_n$. (So $k_0M$ is the locally finite part of $M$.) We obtain a natural increasing filtration of $M$: $$ k_0M \subset k_1M \subset k_2M \subset k_3M \subset \dots .$$ It is useful to let $\bar k_nM$ denote the composition factor $k_nM/k_{n-1}M$. A basic calculation from \cite{hls1} can be interpreted as saying the following. \begin{prop} \label{pnH prop} \cite[Lem.7.6.6]{hls1} $k_n \tilde H^*(B{\mathbb Z}/p)$ is the span of products of elements in $F(1)$ of length at most $n$. \end{prop} The next theorem lists some basic properties. 
In this theorem, $\Phi$ is the Frobenius functor, $nil_sM$ is the maximal submodule of $M$ in ${\mathcal Nil}_s$, $R_sM$ is the reduced module defined by $nil_sM/nil_{s+1}M = \Sigma^s R_sM$, and $\bar R_sM$ is its ${\mathcal Nil}$--closure. (These will be recalled in more detail in \secref{U section}.) \begin{thm} \label{kn thm} The Krull filtration of a module satisfies the following properties, for all $M,N \in {\mathcal U}$. \noindent{\bf (a)} \ $k_n$ is left exact, commutes with filtered colimits, and $\displaystyle \bigcup_{n=0}^{\infty} k_nM = M$. \\ \noindent{\bf (b)} \ $k_n$ commutes with the functors $\Sigma^s$, $\Phi$, $nil_s$, and $\bar R_s$. \\ \noindent{\bf (c)} \ $k_n$ preserves ${\mathcal Nil}$--reduced modules, ${\mathcal Nil}$--closed modules, and \\ ${\mathcal Nil}$--isomorphisms. \\ \noindent{\bf (d)} \ $\displaystyle \bigoplus_{l+m=n} \bar k_lM \otimes \bar k_mN = \bar k_n(M \otimes N)$. \end{thm} In contrast to (b), the natural map $$ R_s(k_nM) \rightarrow k_n(R_sM)$$ need only be a monomorphism (with cokernel in ${\mathcal Nil}$). See \exref{Rs ex}. \subsection{Symmetric sequences of locally finite modules} We construct a functor from ${\mathcal U}$ to the category of symmetric sequences of locally finite modules that seems to nicely encode much of the information about the Krull filtration of a module. A {\em symmetric sequence} in ${\mathcal U}_0$ is a sequence $M = \{M_0,M_1,M_2, \dots\}$, with $M_n \in \Sigma_n\text{--}{\mathcal U}_0$. The category of these, $\Sigma_*\text{--}{\mathcal U}_0$, has a symmetric monoidal structure with product $$ (M \boxtimes N)_n = \bigoplus_{l+m=n} \operatorname{Ind}_{\Sigma_l \times \Sigma_m}^{\Sigma_n}(M_l \otimes N_m).$$ \begin{defn} Let $\sigma_*: {\mathcal U} \longrightarrow \Sigma_*\text{--}{\mathcal U}_0$ be defined by $$ \sigma_nM = {\bar T}^n k_nM.$$ \end{defn} As ${\bar T}$ is exact and ${\bar T}^n k_{n-1}M = 0$, $\sigma_n M$ also equals ${\bar T}^n \bar k_nM$. 
Thus, under the correspondence of \thmref{Un/Un-1 thm}, $\sigma_nM$ corresponds to the image of the ${\mathcal U}_{n-1}$--reduced composition factor $\bar k_nM$ in ${\mathcal U}_n/{\mathcal U}_{n-1}$. \begin{thm} \label{sigma thm} $\sigma_*$ satisfies the following properties. \\ \noindent{\bf (a)} \ $\sigma_n$ is left exact and commutes with filtered colimits.\\ \noindent{\bf (b)} \ $\sigma_n$ commutes with the functors $\Sigma^s$, $\Phi$, $nil_s$, and $\bar R_s$. \\ \noindent{\bf (c)} \ $\sigma_n$ preserves ${\mathcal Nil}$--reduced modules, ${\mathcal Nil}$--closed modules, and \\ ${\mathcal Nil}$--isomorphisms. \\ \noindent{\bf (d)} \ $\sigma_*$ is symmetric monoidal: $ \sigma_*(M \otimes N) = \sigma_*M \boxtimes \sigma_*N$. \\ \noindent{\bf (e)} \ For all $s$ and $n$, there is a natural isomorphism of ${\mathbb Z}/p[\Sigma_n]$--modules $$(\sigma_nM)^s \simeq \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, \bar R_sM).$$ \end{thm} \subsection{Organization of the paper} The next three sections respectively contain needed background material on abelian categories, unstable modules, and polynomial functors. \thmref{schwartz thm}, Schwartz's characterization of ${\mathcal U}_n$, is then proved in \secref{schwartz thm section}. Properties of the Krull filtration of a module are proved in \secref{krull mod section}, and some of these are used in our proof of the identification of ${\mathcal U}_n/{\mathcal U}_{n-1}$ given in \secref{quotient cat section}. In \secref{sym sequences section}, we discuss \thmref{sigma thm} and give examples illustrating it. \section{Background: abelian categories} \label{abelian categories} We recall some standard concepts regarding abelian categories, as in \cite{gabriel}. \subsection{Basic notions} All the categories in this paper satisfy Grothendieck's axioms AB1--AB5 \cite{weibel} and have a set of generators. Standard consequences include that such categories have enough injectives. 
An object in an abelian category ${\mathcal C}$ is {\em Noetherian} if its poset of subobjects satisfies the ascending chain condition. ${\mathcal C}$ itself is said to be {\em locally Noetherian} if every object is the union of its Noetherian subobjects. Standard consequences include that direct sums of injectives are again injective, and that objects admit injective envelopes. A full subcategory ${\mathcal B}$ of an abelian category ${\mathcal C}$ is {\em localizing} if it is closed under sub and quotient objects, extensions, and direct sums. $f: M \rightarrow N$ is a {\em ${\mathcal B}$-isomorphism} if $\ker f, \operatorname{coker} f \in {\mathcal B}$. The quotient category ${\mathcal C}/{\mathcal B}$ has the same objects as ${\mathcal C}$, with morphisms from $M$ to $N$ given by equivalence classes of triples $$ M \overset{f}{\hookleftarrow} M^{\prime} \xrightarrow{g} N^{\prime} \overset{h}{\twoheadleftarrow} N$$ with $\operatorname{coker} f, \ker h \in {\mathcal B}$. It is the initial category under ${\mathcal C}$ in which all ${\mathcal B}$-isomorphisms have been inverted. $M \in {\mathcal C}$ is {\em ${\mathcal B}$--reduced} if $\operatorname{Hom}_{{\mathcal C}}(N,M) = 0$ for all $N \in {\mathcal B}$, and is {\em ${\mathcal B}$--closed} if also $\operatorname{Ext}^1_{{\mathcal C}}(N,M) = 0$ for all $N \in {\mathcal B}$. The exact quotient functor $l: {\mathcal C} \rightarrow {\mathcal C}/{\mathcal B}$ has a right adjoint $r$. The counit of the adjunction $\epsilon_M: lr(M) \rightarrow M$ is always an isomorphism. The induced endofunctor $rl$ is thus idempotent, and the unit $\eta_M: M \rightarrow rl(M)$ is called {\em localization away from ${\mathcal B}$} or {\em ${\mathcal B}$--closure}. The functor $k: {\mathcal C} \rightarrow {\mathcal B}$ right adjoint to the inclusion ${\mathcal B} \hookrightarrow {\mathcal C}$ can be computed by the formula $k(M) = \ker(\eta_M)$. 
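The following standard example, not needed elsewhere in the paper, may help fix these notions. Let ${\mathcal C}$ be the category of abelian groups and ${\mathcal B}$ the localizing subcategory of torsion groups. Then $$ {\mathcal C}/{\mathcal B} \simeq {\mathbb Q}\text{--vector spaces}.$$ A homomorphism is a ${\mathcal B}$--isomorphism exactly when its kernel and cokernel are torsion, the ${\mathcal B}$--reduced objects are the torsion-free groups, and the ${\mathcal B}$--closed objects are the ${\mathbb Q}$--vector spaces. ${\mathcal B}$--closure is the map $\eta_M: M \rightarrow M \otimes {\mathbb Q}$, and $k(M) = \ker(\eta_M)$ is the torsion subgroup of $M$.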
\subsection{Locally finite objects and the Krull filtration} An object in an abelian category ${\mathcal C}$ is {\em simple} if it has no non-zero proper subobjects, {\em finite} if it has a filtration of finite length with simple composition factors, and {\em locally finite} if it is the union of its finite subobjects. We let ${\mathcal C}_0$ denote the full subcategory consisting of the locally finite objects of ${\mathcal C}$. If ${\mathcal C}$ is locally Noetherian, ${\mathcal C}_0$ will be closed under extensions, and it follows that ${\mathcal C}_0$ is localizing. \begin{defn} \cite[p.382]{gabriel} The {\em Krull filtration} of a locally Noetherian abelian category ${\mathcal C}$ is the increasing sequence of localizing subcategories $$ {\mathcal C}_0 \subset {\mathcal C}_1 \subset {\mathcal C}_2 \subset \cdots$$ recursively defined for $n\geq 1$ by letting ${\mathcal C}_n$ be the full subcategory of ${\mathcal C}$ whose objects are the objects in ${\mathcal C}$ that represent locally finite objects in ${\mathcal C}/{\mathcal C}_{n-1}$. \end{defn} \begin{rem} Gabriel defines ${\mathcal C}_{\lambda}$ for any ordinal $\lambda$. For example, ${\mathcal C}_{\omega}$ is defined as the smallest localizing category containing all the ${\mathcal C}_n$. \thmref{F(1) thm} implies that ${\mathcal U}_{\omega} = {\mathcal U}$, so the Krull filtration for ${\mathcal U}$ stops at this point. \end{rem} \section{Background: unstable modules} \label{U section} In this section we recall some basic material about unstable modules. A general reference for this is \cite{s2}. \subsection{The categories ${\mathcal U}$ and ${\mathcal K}$} The mod $p$ Steenrod algebra ${\mathcal A}$ is generated by $Sq^k$, $k\geq 0$, when $p=2$, and $P^k$, $k \geq 0$, together with the Bockstein $\beta$ when $p$ is odd, and satisfying the usual {Adem relations}. The category ${\mathcal U}$ is then defined to be the full subcategory of ${\mathcal A}$--modules $M$ satisfying the {unstable condition}. 
When $p=2$ this means that, for all $x\in M$, $Sq^k x = 0$ if $k> |x|$. When $p$ is odd, the condition is that, for all $x\in M$ and $e = 0$ or $1$, $\beta^{e}P^k x = 0$ if $2k +e > |x|$. The abelian category ${\mathcal U}$ has a tensor product coming from the {Cartan formula}, and ${\mathcal K}$ is then defined to be the category of commutative algebras $K$ in ${\mathcal U}$ also satisfying the {restriction condition}: $Sq^{|x|}x = x^2$ for all $x \in K$ when $p=2$, and $P^{|x|/2}x = x^p$ for all even degree $x \in K$ when $p$ is odd. All of these definitions are, of course, motivated by the fact that, if $X$ is a topological space, its mod $p$ cohomology $H^*(X)$ is naturally an object in both ${\mathcal U}$ and ${\mathcal K}$. \subsection{The Frobenius functor $\Phi$} For $M \in {\mathcal U}$, $P_0: M \rightarrow M$ is defined by \begin{equation*} P_0 x = \begin{cases} Sq^k x & \text{if } k=|x| \text{ and } p=2 \\ \beta^eP^kx & \text{if } 2k + e = |x| \text{ and $p$ is odd}. \end{cases} \end{equation*} When $p=2$, the Frobenius functor $\Phi: {\mathcal U} \rightarrow {\mathcal U}$ is defined by letting \begin{equation*} (\Phi M)^{m} = \begin{cases} M^n & \text{if } m=2n \\ 0 & \text{otherwise,} \end{cases} \end{equation*} with $Sq^{2k}\phi(x) = \phi(Sq^kx)$, where $\phi(x) \in (\Phi M)^{2n}$ corresponds to $x \in M^n$. At odd primes, $\Phi: {\mathcal U} \rightarrow {\mathcal U}$ is defined by letting \begin{equation*} (\Phi M)^{m} = \begin{cases} M^{2n+e} & \text{if } m=2pn+2e, \text{ with } e=0,1 \\ 0 & \text{otherwise,} \end{cases} \end{equation*} with $P^{pk}\phi(x) = \phi(P^kx)$, and $P^{pk+1}\phi(x) = \phi(\beta P^kx)$ when $|x|$ is odd. $\Phi$ is an exact functor, and there is a natural transformation of unstable modules $$ \lambda: \Phi M\rightarrow M$$ defined by $\lambda(\phi(x)) = P_0x$. Let $\Omega: {\mathcal U} \rightarrow {\mathcal U}$ be left adjoint to $\Sigma$. Explicitly $\Omega M$ is the largest unstable submodule of $\Sigma^{-1}M$. 
Being a left adjoint, $\Omega$ is right exact; it has just one nonzero (left) derived functor $\Omega^1$, and both can be calculated via the exact sequence $$ 0 \rightarrow \Sigma \Omega^1 M \rightarrow \Phi(M) \xrightarrow{\lambda} M \rightarrow \Sigma \Omega M \rightarrow 0.$$ $\lambda: \Phi M\rightarrow M$ is monic exactly when $M$ is ${\mathcal Nil}$--reduced, and, in this case, the evident iterated natural map $\lambda_k: \Phi^k M \rightarrow M$ is still monic. \subsection{The nilpotent filtration of a module} Let $nil_s: {\mathcal U} \rightarrow {\mathcal Nil}_s$ be right adjoint to the inclusion ${\mathcal Nil}_s \hookrightarrow {\mathcal U}$. Explicitly, $nil_sM \subset M$ is the largest submodule of $M$ contained in ${\mathcal Nil}_s$. We obtain a natural decreasing filtration of $M$: $$ M = nil_0M \supset nil_1M \supset nil_2M \supset nil_3M \supset \dots .$$ This filtration is complete as $nil_s M$ is $(s-1)$--connected. \begin{prop/def} $nil_sM/nil_{s+1}M = \Sigma^s R_sM$, where $R_sM$ is ${\mathcal Nil}$-reduced. \end{prop/def} For a proof see \cite[Lemma 6.4.1]{s2} or \cite[Prop. 2.2]{k1}. We let $\bar R_sM$ denote the ${\mathcal Nil}$--closure of $R_sM$. \begin{prop} \cite[Cor. 3.2]{k5} $\bar R_s: {\mathcal U} \rightarrow {\mathcal U}$ is left exact. \end{prop} \begin{prop} \cite[Prop. 2.11]{k1} If $M$ is locally finite, then the nilpotent filtration equals the skeletal filtration, so that $M^s = R_s M = \bar R_s M$. \end{prop} \subsection{Finitely generated modules} We need various results about unstable modules which are finitely generated over ${\mathcal A}$. \begin{thm} \label{finite gen thm} {\bf (a)} \ A submodule of a finitely generated unstable module is again finitely generated. Thus ${\mathcal U}$ is locally Noetherian. \noindent{\bf (b)} \ The nilpotent filtration of a finitely generated module has finite length. \noindent{\bf (c)} \ A finitely generated unstable module represents a finite object in ${\mathcal U}/{\mathcal Nil}$. 
\end{thm} Proofs of these appear in \cite{s2} and \cite{k1}. \begin{cor} \label{gen cor} Any localizing subcategory of ${\mathcal U}$ will be generated by modules of the form $\Sigma^s M$, with $M$ finitely generated, ${\mathcal Nil}$--reduced, and representing a simple object in ${\mathcal U}/{\mathcal Nil}$. \end{cor} We will also need the following lemma. \begin{lem} \label{phi lemma} Let $M$ be ${\mathcal Nil}$--reduced and finitely generated, and let $i: N \hookrightarrow M$ be the inclusion of a submodule with $M/N \in {\mathcal Nil}$. Then, for $k\gg 0$, the inclusion $\lambda_k: \Phi^k M \hookrightarrow M$ factors through $i$. \end{lem} \begin{proof} Let $x_1, \dots, x_l$ generate $M$. Since $M/N \in {\mathcal Nil}$, there exists $k$ such that $P_0^k(x_i) \in N$ for all $i$. But then the image of $\lambda_k$ will be contained in $N$. \end{proof} \subsection{${\mathcal U}$--injectives and properties of $\bar T$} Let $J(n)$ be the $n$th `Brown--Gitler module': the finite injective representing $M \rightsquigarrow (M^n)^{\vee}$ \cite[\S2.3]{s2}. \begin{thm} $H^*(BV) \otimes J(n)$ is injective in ${\mathcal U}$, and every ${\mathcal U}$--injective is a direct summand of a direct sum of such modules. \end{thm} \begin{thm} {\bf (a)} $T$, and thus $\bar T$, is exact. \noindent{\bf (b)} The natural map $T(M \otimes N) \rightarrow TM \otimes TN$ is an isomorphism. \noindent{\bf (c)} $T$, and thus $\bar T$, preserves both ${\mathcal Nil}$--reduced and ${\mathcal Nil}$--closed modules. \noindent{\bf (d)} $T$, and thus $\bar T$, commutes with the following functors: $\Sigma^s$, $\Phi$, $nil_s$, $R_s$, and $\bar R_s$. \end{thm} All of this is in the literature: see \cite{L}, \cite{ls2}, \cite{lz1}, \cite{lz2}, \cite{k1}. \section{Background: polynomial functors} \subsection{Definitions and examples} Recall that ${\mathcal F}$ is the category of functors from finite dimensional ${\mathbb Z}/p$--vector spaces to ${\mathbb Z}/p$--vector spaces. 
This is an abelian category in the standard way, e.g., $F \rightarrow G \rightarrow H$ is exact at $G$ means that $F(V) \rightarrow G(V) \rightarrow H(V)$ is exact at $G(V)$ for all $V$. Some objects in ${\mathcal F}$ are $S^n$, $H_n$, $P_W$, and $I_W$, defined by $S^n(V) = (V^{\otimes n})_{\Sigma_n}$, $H_n(V) = H_n(BV)$, $P_W(V) = {\mathbb Z}/p[\operatorname{Hom}(W,V)]$ and $I_W(V) = {\mathbb Z}/p^{\operatorname{Hom}(V,W)}$. Note that $H_0$ is the constant functor `${\mathbb Z}/p$', and $H_1$ is the identity functor $\operatorname{Id}$. Using Yoneda's lemma, one sees that $\operatorname{Hom}_{{\mathcal F}}(P_W,F) \simeq F(W)$, and thus $P_W$ is projective. Similarly $\operatorname{Hom}_{{\mathcal F}}(F, I_W) \simeq F(W)^{\vee}$, and thus $I_W$ is injective. Note also that $P_V \otimes P_W \simeq P_{V \oplus W}$ and $I_V \otimes I_W \simeq I_{V \oplus W}$. The functors $P_W$ and $I_W$ canonically split as $P_W \simeq {\mathbb Z}/p \oplus \bar P_W$ and $I_W \simeq {\mathbb Z}/p \oplus \bar I_W$, and we write $\bar P_{{\mathbb Z}/p}$ and $\bar I_{{\mathbb Z}/p}$ as $\bar P$ and $\bar I$. $F(V)$ is a canonical retract of $F(V \oplus {\mathbb Z}/p)$ and, as in the introduction, one defines the exact functor $\Delta: {\mathcal F} \rightarrow {\mathcal F}$ by $$ (\Delta F)(V) = F(V \oplus {\mathbb Z}/p)/F(V).$$ One easily checks that $\Delta$ has a left adjoint given by $F \mapsto F \otimes \bar P$, and a right adjoint given by $F \mapsto F \otimes \bar I$. $F$ is {\em polynomial of degree $n$} if $\Delta^{n+1}F = 0$. As explained in \cite{genrep1} or \cite{s2}, this agrees with the Eilenberg--MacLane definition used in \cite{hls1}. One then lets ${\mathcal F}^n$ be the category of all degree $n$ polynomial functors and ${\mathcal F}^{an}$ be the category of all locally polynomial functors. (As shown in \cite{genrep1}, ${\mathcal F}^{an}$ is also the locally finite category ${\mathcal F}_0 \subset {\mathcal F}$.) 
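Two elementary computations with $\Delta$, recorded here only for orientation, illustrate these definitions. First, $\Delta \operatorname{Id}(V) = (V \oplus {\mathbb Z}/p)/V \simeq {\mathbb Z}/p$, so that $\Delta \operatorname{Id} = H_0$ and $\Delta^2 \operatorname{Id} = 0$: the identity functor is polynomial of degree 1. Second, from $P_W(V \oplus {\mathbb Z}/p) \simeq P_W(V) \otimes {\mathbb Z}/p[\operatorname{Hom}(W,{\mathbb Z}/p)]$ one gets $$ \Delta P_W \simeq P_W^{\oplus (p^{\dim W}-1)},$$ so that, for $W \neq 0$, no iterate $\Delta^k P_W$ vanishes, and $P_W$ is polynomial of no degree.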
As examples, $\operatorname{Id}^{\otimes n}, S^n, H_n \in {\mathcal F}^n$; $\bar P_W$ has no nonzero polynomial subfunctors; and $I_W \in {\mathcal F}^{an}$. We explain this last fact. There is an identification $$ I_{{\mathbb Z}/p}(V) = S^*(V)/(x^p-x)$$ and thus $I_W$ is visibly a quotient of the sum of the polynomial functors $$ V \mapsto S^n(\operatorname{Hom}(W,V)).$$ \subsection{The polynomial filtration of a functor} The inclusion ${\mathcal F}^n \hookrightarrow {\mathcal F}$ has both a left adjoint $q_n$ and a right adjoint $p_n$. Explicitly $$ q_nF = \operatorname{coker} \{ \epsilon_F: \Delta^{n+1}F \otimes \bar P^{\otimes (n+1)} \rightarrow F\}$$ and $$ p_nF = \ker \{ \eta_F: F \rightarrow \Delta^{n+1}F \otimes \bar I^{\otimes (n+1)} \}.$$ \begin{thm} \label{poly filt thm} {\bf (a)} \ $\displaystyle p_nI_W(V) = S^{* \leq n}(\operatorname{Hom}(W,V))/(x^p-x)$. \\ \noindent{\bf (b)} \ $\displaystyle \sum_{l+m=n} p_lF \otimes p_mG = p_n(F \otimes G)$. \end{thm} Both of these statements are well known. For (a), see \cite[Lemma 7.6.6]{hls1} or the discussion in \cite[\S 6]{genrepsurvey}. Statement (b) is then a consequence as follows. We are asserting that two natural filtrations on $F \otimes G$ agree, with one filtration clearly contained in the other. From (a), one can visibly see that these filtrations agree when $F$ and $G$ are sums of $I_W$'s. For the general case, one finds exact sequences $$ 0 \rightarrow F \rightarrow I_0 \rightarrow I_1$$ and $$ 0 \rightarrow G \rightarrow J_0 \rightarrow J_1,$$ where $I_0$, $I_1$, $J_0$, and $J_1$ are all sums of $I_W$'s. Tensoring these sequences together yields an exact sequence $$ 0 \rightarrow F \otimes G \rightarrow I_0 \otimes J_0 \rightarrow I_1 \otimes J_0 \oplus I_0 \otimes J_1.$$ The two filtrations agree on the last two terms, and thus on $F\otimes G$. 
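To make statement (a) concrete, again write $I = I_{{\mathbb Z}/p}$. Then $p_0 I = H_0$, the constant functions, while $$ p_1 I(V) \simeq {\mathbb Z}/p \oplus V,$$ the affine functions on $V^*$, and in general $p_n I(V)$ is spanned by the polynomial functions on $V^*$ of degree at most $n$. Under the functor $r$ of \secref{U/Nil subsection}, which carries $I$ to $H^*(B{\mathbb Z}/p)$, this polynomial filtration corresponds to the Krull filtration described in \propref{pnH prop}.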
\subsection{The equivalence ${\mathcal U}/{\mathcal Nil} \simeq {\mathcal F}^{an}$} \label{U/Nil subsection} One has adjoint functors $$ {\mathcal U} \begin{array}{c} l \\[-.08in] \longrightarrow \\[-.1in] \longleftarrow \\[-.1in] r \end{array} {\mathcal F}$$ defined by $l(M)(V) = \operatorname{Hom}_{{\mathcal U}}(M,H^*(BV))^{\vee}$ and $r(F)^n = \operatorname{Hom}_{{\mathcal F}}(H_n,F)$. As examples, $l(F(n)) = H_n$, and $r(I_W) = H^*(BW)$. \begin{thm} \cite{hls1} The functor $l$ is exact, and $l$ and $r$ induce an equivalence of abelian categories $$ {\mathcal U}/{\mathcal Nil} \begin{array}{c} l \\[-.08in] \longrightarrow \\[-.1in] \longleftarrow \\[-.1in] r \end{array} {\mathcal F}^{an}.$$ \end{thm} It follows that $M \rightarrow rlM$ is ${\mathcal Nil}$--closure. \begin{prop} \cite{hls1} There are natural isomorphisms $$ l(M \otimes N) \simeq l(M) \otimes l(N) \text{ and } r(F \otimes G) \simeq r(F) \otimes r(G).$$ \end{prop} \begin{prop} There are natural isomorphisms $$ l (\bar T M) \simeq \Delta l(M) \text{ and } r(\Delta F) \simeq \bar T r(F).$$ \end{prop} The first of these is easily checked, and the second formally follows, once one knows that $\bar T$ preserves ${\mathcal Nil}$--closed modules. \section{Proof of \thmref{schwartz thm}} \label{schwartz thm section} In this section, we prove Schwartz's characterization of ${\mathcal U}_n$: $${\mathcal U}_n = {\mathcal U}_n^T,$$ where ${\mathcal U}_n^T$ is the full subcategory of ${\mathcal U}$ with objects $\{ M \in {\mathcal U} \ | \ {\bar T}^{n+1}M = 0 \}$. We prove this by induction on $n$. As our inductive step needs a lemma (\corref{deg 0 cor} below) that is at the heart of the $n=0$ case, we begin with the proof that ${\mathcal U}_0 = {\mathcal U}_0^T$, roughly following the proofs in \cite{ls2,s2}. It is easy to characterize modules in ${\mathcal U}_0$, noting that any module has its decreasing skeletal filtration, with subquotient modules concentrated in one degree. 
\begin{lem} \ Let $M$ be an unstable module. $M$ is simple if and only if it is isomorphic to $\Sigma^s {\mathbb Z}/p$ for some $s$. $M$ is finite in the categorical sense if and only if it is finite as a graded vector space, i.e., of finite total dimension. $M$ is locally finite if and only if ${\mathcal A}\cdot x \subseteq M$ is finite for all $x \in M$. \end{lem} \begin{prop} If $M$ is locally finite, then $\bar T M = 0$. Thus ${\mathcal U}_0 \subseteq {\mathcal U}_0^T$. \end{prop} \begin{proof} As $\bar T$ is exact and commutes with suspensions and directed colimits, this follows from the lemma and the calculation that $\bar T {\mathbb Z}/p = 0$. \end{proof} \begin{prop} If $\bar T M = 0$, then $M$ is locally finite. Thus ${\mathcal U}_0^T \subseteq {\mathcal U}_0$. \end{prop} \begin{proof}[Sketch proof] It suffices to show this when $M$ is finitely generated. Such an $M$ embeds in an injective module of the form $\displaystyle \bigoplus_{i=1}^r H^*(BV_i) \otimes J(n_i)$. Meanwhile, $\bar T M = 0$ implies that $\operatorname{Hom}_{{\mathcal U}}(M, \tilde H^*(BV) \otimes J(n)) = 0$ for all $V$ and $n$. Thus $M$ will embed in $\displaystyle \bigoplus_{i=1}^r H^0(BV_i) \otimes J(n_i) = \bigoplus_{i=1}^r J(n_i)$ and so is finite. \end{proof} \begin{cor} \label{deg 0 cor} If $M$ is ${\mathcal Nil}$--reduced and $\bar T M = 0$, then $M$ is concentrated in degree 0. \end{cor} This corollary is used in the last step of the proof of the following lemma. 
\begin{lem} \label{omega lemma} If $M$ is ${\mathcal Nil}$--reduced, then $$\bar T^{n+1}M = 0 \Leftrightarrow \bar T^n\Omega M = 0.$$ \end{lem} \begin{proof} If $M$ is ${\mathcal Nil}$--reduced, there is a short exact sequence $$ 0 \rightarrow \Phi M \rightarrow M \rightarrow \Sigma \Omega M \rightarrow 0.$$ Since $\bar T^n$ is exact and commutes with $\Phi$ and $\Sigma$, one deduces that \begin{equation*} \begin{split} \bar T^n \Omega M = 0 & \Leftrightarrow\Phi \bar T^n M \simeq \bar T^n M \\ & \Leftrightarrow \bar T^n M \text{ is concentrated in degree 0} \\ & \Leftrightarrow \bar T^{n+1} M = 0. \end{split} \end{equation*} \end{proof} Armed with this lemma, we now give the inductive step of the proof that ${\mathcal U}_n = {\mathcal U}_n^T$. So assume by induction that ${\mathcal U}_{n-1} = {\mathcal U}_{n-1}^T$. \begin{proof}[Proof that ${\mathcal U}_n \subseteq {\mathcal U}_n^T$] A simple object in ${\mathcal U}/{\mathcal U}_{n-1}$ can be represented by $\Sigma^s M$, where $M$ is reduced. We show that then $\Sigma^s M \in {\mathcal U}_n^T$. Consider the exact sequence $$ 0 \rightarrow \Sigma^s \Phi M \rightarrow \Sigma^s M \rightarrow \Sigma^{s+1} \Omega M \rightarrow 0.$$ As $\Sigma^s M$ is simple in ${\mathcal U}/{\mathcal U}_{n-1}$, either $\Sigma^s \Phi M \in {\mathcal U}_{n-1}$ or $\Sigma^{s+1} \Omega M \in {\mathcal U}_{n-1}$. In the first case, we have \begin{equation*} \begin{split} \Sigma^s \Phi M \in {\mathcal U}_{n-1} & \Rightarrow \bar T^n \Sigma^s \Phi M = 0 \Rightarrow \Phi \bar T^n M = 0 \\ & \Rightarrow \bar T^n M = 0 \Rightarrow \bar T^n \Sigma^s M = 0 \\ & \Rightarrow \Sigma^s M \in {\mathcal U}_{n-1}. \end{split} \end{equation*} But this contradicts that $\Sigma^sM$ is nonzero in ${\mathcal U}/{\mathcal U}_{n-1}$. Thus $\Sigma^{s+1} \Omega M \in {\mathcal U}_{n-1}$. 
But then \begin{equation*} \begin{split} \Sigma^{s+1} \Omega M \in {\mathcal U}_{n-1} & \Rightarrow \bar T^n \Sigma^{s+1} \Omega M = 0 \Rightarrow \bar T^n \Omega M = 0 \\ & \Rightarrow \bar T^{n+1} M = 0 \Rightarrow \bar T^{n+1} \Sigma^s M = 0 \\ & \Rightarrow \Sigma^s M \in {\mathcal U}_{n}^T. \end{split} \end{equation*} \end{proof} \begin{proof}[Proof that ${\mathcal U}_n^T \subseteq {\mathcal U}_n$] Suppose that $\Sigma^s M \in {\mathcal U}_n^T$, with $M$ finitely generated, ${\mathcal Nil}$--reduced, and representing a simple object in ${\mathcal U}/{\mathcal Nil}$. We show that then $\Sigma^s M$ is either in ${\mathcal U}_{n-1}$ or represents a simple object in ${\mathcal U}/{\mathcal U}_{n-1}$, and is thus in ${\mathcal U}_n$. Using \lemref{omega lemma}, $\Sigma^s M \in {\mathcal U}_n^T$ implies that $\Omega M \in {\mathcal U}_{n-1}^T$, so that $\Phi M \rightarrow M$ is a ${\mathcal U}_{n-1}^T$--isomorphism. It follows that, for all $k$, $\Sigma^s \Phi^k M \rightarrow \Sigma^s M$ will be a ${\mathcal U}_{n-1}^T$--isomorphism (and thus a ${\mathcal U}_{n-1}$--isomorphism). Any nonzero subobject of $\Sigma^s M$ has the form $\Sigma^s N$ with $0 \neq N < M$. As $M$ is ${\mathcal Nil}$--reduced and simple in ${\mathcal U}/{\mathcal Nil}$, $M/N \in {\mathcal Nil}$. By \lemref{phi lemma}, there exists $k>0$ such that $\Phi^k M \subset N$. Since $\Sigma^s M/\Sigma^s \Phi^kM \in {\mathcal U}_{n-1}$, we deduce that $\Sigma^s M/\Sigma^s N \in {\mathcal U}_{n-1}$. \end{proof} \begin{rem} We comment on how the proof of \thmref{schwartz thm} differs from the one outlined in \cite{s2}. The proof in \cite{s2} makes use of the theorem that ${\mathcal U}/{\mathcal Nil} \simeq {\mathcal F}^{an}$ together with substantial analysis of how the polynomial filtration of ${\mathcal F}^{an}$ is reflected in the category of reduced modules. 
Our proof instead uses \thmref{finite gen thm}(c), which says that finitely generated modules represent finite objects in ${\mathcal U}/{\mathcal Nil}$. The proof of this also uses that ${\mathcal U}/{\mathcal Nil} \simeq {\mathcal F}^{an}$, but replaces the analysis of the polynomial filtration by the observation that the functor in ${\mathcal F}$ given by $V \mapsto H_n(BV)$ is a finite functor, which has a very elementary proof \cite[Prop. 4.10]{genrep1}. \end{rem} \section{Properties of the Krull filtration of a module} \label{krull mod section} For simplicity, we write $\bar H$ for $\tilde H^*(B{\mathbb Z}/p)$. Recall that $M \in {\mathcal U}_n$ if and only if $\bar T^{n+1}M = 0$, and $\bar T^{n+1}$ is both exact and left adjoint to the functor $M \mapsto M \otimes \bar H^{\otimes n+1}$. It follows formally that \begin{equation*} \label{kn defn} k_nM = \ker\{ \eta_M: M \rightarrow \bar T^{n+1} M \otimes \bar H^{\otimes n+1}\}, \end{equation*} where $\eta_M$ is the unit of the adjunction. We run through proofs of various properties of $k_nM$. \\ \noindent{\bf (a)} \ $k_n$ is left exact and commutes with filtered colimits. \begin{proof} This is immediate. \end{proof} \noindent{\bf (b)} \ $\displaystyle \bigcup_{n=0}^{\infty} k_nM = M$. \begin{proof} One easily computes that $\bar T F(n) = F(n-1)$, so that $F(n) \in {\mathcal U}_n$. As every finitely generated unstable module is a quotient of a finite sum of $F(n)$'s, and so lies in some ${\mathcal U}_n$, the claim follows. \end{proof} \noindent{\bf (c)} \ $k_n(N \otimes M) \simeq N \otimes k_nM$ if $N \in {\mathcal U}_0$. In particular, $k_n\Sigma^s M = \Sigma^s k_nM$. \begin{proof} As $\bar TN=0$, $$\eta_{N \otimes M}: N \otimes M \rightarrow \bar T^{n+1}(N \otimes M) \otimes \bar H^{\otimes n+1}$$ identifies with $$ N \otimes \eta_M: N \otimes M \rightarrow N \otimes \bar T^{n+1}M \otimes \bar H^{\otimes n+1}.$$ \end{proof} \noindent{\bf (d)} \ $k_n\Phi M \simeq \Phi k_nM$. 
\begin{proof} As $\Phi$ commutes with tensor products and $\bar T$, one has a commutative diagram \begin{equation*} \xymatrix{ \Phi M \ar@{=}[d] \ar[r]^-{\Phi(\eta_M)} & \Phi (\bar T^{n+1} M \otimes \bar H^{\otimes n+1})\ar[r]^-{\sim} & \bar T^{n+1} \Phi M \otimes (\Phi \bar H)^{\otimes n+1} \ar[d] \\ \Phi M \ar[rr]^-{\eta_{\Phi M}} && \bar T^{n+1} \Phi M \otimes \bar H^{\otimes n+1} } \end{equation*} As $\Phi \bar H \rightarrow \bar H$ is monic, the kernels of the two horizontal maps are equal. The kernel of the bottom map is $k_n\Phi M$, and, since $\Phi$ is exact, the kernel of the top map is $\Phi k_n M$. \end{proof} \noindent{\bf (e)} \ $k_n nil_s M \simeq nil_s k_nM$. \begin{proof} One easily checks that $k_n nil_s M \simeq k_nM \cap nil_s M \simeq nil_s k_nM$. \end{proof} \noindent{\bf (f)} \ If $M = r(F)$, then $k_n M = r(p_n F)$. Thus if $M$ is ${\mathcal Nil}$--closed, so is $k_nM$. \begin{proof} Applying the left exact functor $r$ to the exact sequence $$ 0 \rightarrow p_n F \rightarrow F \rightarrow \Delta^{n+1} F \otimes \bar I^{\otimes n+1}$$ shows that $r(p_n F)$ is the kernel of the map $$ M \rightarrow r(\Delta^{n+1} F \otimes \bar I^{\otimes n+1}).$$ But since $r$ commutes with tensor products and $r \circ \Delta^{n+1} = \bar T^{n+1} \circ r$, this map rewrites as $$ M \rightarrow \bar T^{n+1} M \otimes \bar H^{\otimes n+1},$$ which has kernel $k_n M$. \end{proof} \noindent{\bf (g)} \ $k_n$ preserves ${\mathcal Nil}$--reduced modules. \begin{proof} This follows from (f), noting that ${\mathcal Nil}$--reduced modules are submodules of ${\mathcal Nil}$--closed modules, and $k_n$ preserves monomorphisms. \end{proof} \noindent{\bf (h)} \ If $F = l(M)$, then $p_nF = l(k_nM)$. It follows that $k_n$ preserves ${\mathcal Nil}$--isomorphisms. 
\begin{proof} Applying the exact functor $l$ to the exact sequence $$ 0 \rightarrow k_n M \rightarrow M \rightarrow \bar T^{n+1} M \otimes \bar H^{\otimes n+1}$$ shows that $l(k_n M)$ is the kernel of the map $$ F \rightarrow l(\bar T^{n+1} M \otimes \bar H^{\otimes n+1}).$$ But since $l$ commutes with tensor products and $\Delta^{n+1} \circ l= l \circ \bar T^{n+1}$, this map rewrites as $$ F \rightarrow \Delta^{n+1} F \otimes \bar I^{\otimes n+1},$$ which has kernel $p_n F$. A map $f: M \rightarrow N$ in ${\mathcal U}$ is a ${\mathcal Nil}$--isomorphism precisely when $l(f)$ is an isomorphism. When this happens, $l(k_nf)=p_nl(f)$ will be an isomorphism, so $k_nf$ will be a ${\mathcal Nil}$--isomorphism. \end{proof} \noindent{\bf (i)} \ The natural map $\bar R_sk_nM \rightarrow k_n\bar R_sM$ is an isomorphism. \begin{proof} Applying the left exact functor $\bar R_s$ to the exact sequence $$ 0 \rightarrow k_nM \rightarrow M \rightarrow \bar T^{n+1} M \otimes \bar H^{\otimes n+1}$$ shows that $\bar R_sk_nM$ is the kernel of the map $$ \bar R_sM \rightarrow \bar R_s(\bar T^{n+1} M \otimes \bar H^{\otimes n+1}).$$ But since $\bar H$ is ${\mathcal Nil}$--closed, and $\bar R_s$ commutes with $\bar T$, this map rewrites as $$ \bar R_sM \rightarrow \bar T^{n+1} \bar R_s M \otimes \bar H^{\otimes n+1},$$ which has kernel $k_n\bar R_sM$. \end{proof} \noindent{\bf (j)} \ $\displaystyle \sum_{l+m=n} k_lM \otimes k_mN = k_n(M \otimes N)$. \begin{proof} As with \thmref{poly filt thm}, this says that two natural filtrations of $M \otimes N$ agree, with one filtration certainly contained in the other. Combined with (f), \thmref{poly filt thm} says that this is true if both modules are ${\mathcal Nil}$--closed. Then (c) implies that the statement is also true if both $M$ and $N$ are sums of ${\mathcal Nil}$--closed modules tensored with locally finite modules. This includes all ${\mathcal U}$--injectives. 
The statement then holds for all modules, using the same argument as in the proof of \thmref{poly filt thm}. \end{proof} From the above, one can deduce that the natural map $R_sk_nM \rightarrow k_nR_sM$ is always an inclusion with cokernel in ${\mathcal Nil}$, but the next example shows that this need {\em not} be an isomorphism. \begin{ex} \label{Rs ex} With $p=2$, let $M \in {\mathcal U}$ be defined by the pullback square \begin{equation*} \xymatrix{ M \ar[d] \ar[r] & \Phi^2(F(1)) \ar@{->>}[d] \\ \Sigma F(3) \ar@{->>}[r] & \Sigma^4 {\mathbb Z}/2. } \end{equation*} We claim that, for this $M$, the natural monomorphism $R_0(k_1(M)) \rightarrow k_1(R_0(M))$ identifies with the proper inclusion $ \Phi^3(F(1)) \hookrightarrow \Phi^2(F(1))$, and is thus {\em not} an isomorphism. To see this, recall that $R_0M = M/nil_1 M$. Applying $R_0$ to the short exact sequence $$0 \rightarrow \Sigma(F(3)^{\geq 4}) \rightarrow M \rightarrow \Phi^2F(1) \rightarrow 0$$ shows that $R_0M$, and thus $k_1R_0M$, equals $\Phi^2F(1)$. Meanwhile, applying the left exact functor $k_1$ to the short exact sequence $$ 0 \rightarrow \Phi^3F(1) \rightarrow M \rightarrow \Sigma F(3) \rightarrow 0$$ shows that $k_1M$, and thus $R_0k_1M$, equals $\Phi^3F(1)$. \end{ex} \begin{rem} \label{no adjoint remark} Being a right adjoint, $k_n$ is left exact. It is easy to see that it does not in general preserve surjections. To see this, let $M^{>r} \subset M$ denote the submodule of all elements of degree greater than $r$. Then $F(1) \rightarrow F(1)/F(1)^{>1} = \Sigma {\mathbb Z}/p$ is a surjection, $k_0(F(1)) = 0$, but $k_0(F(1)/F(1)^{>1}) = \Sigma {\mathbb Z}/p$. A similar argument shows that ${\mathcal U}_n \hookrightarrow {\mathcal U}$ does not admit a left adjoint. To see this, note that, for all $M \in {\mathcal U}$ and all $r$, $M/M^{>r}$ is in ${\mathcal U}_0$. Now suppose a left adjoint $q_n: {\mathcal U} \rightarrow {\mathcal U}_n$ exists for some $n$. 
It would follow that $M \rightarrow M/M^{>r}$ would factor through the natural map $M \rightarrow q_n(M)$ for all $r$, and we could deduce that $M \rightarrow q_n(M)$ would be monic. Since this would be true for all $M \in {\mathcal U}$, it would follow that ${\mathcal U}_n = {\mathcal U}$. \end{rem} We end this section with a discussion of \propref{pnH prop}: $k_n \bar H$ is the span of products of elements in $F(1)$ of length at most $n$. This is essentially the content of \cite[Lem.7.6.6]{hls1}, and a proof goes along the following lines. $I = I_{{\mathbb Z}/p}$ is a Hopf algebra object in ${\mathcal F}$, with addition ${\mathbb Z}/p \oplus {\mathbb Z}/p \rightarrow {\mathbb Z}/p$ inducing the coproduct $$ \Psi: I_{{\mathbb Z}/p} \rightarrow I_{{\mathbb Z}/p \oplus {\mathbb Z}/p} \simeq I_{{\mathbb Z}/p} \otimes I_{{\mathbb Z}/p}.$$ Then one observes that $p_n\bar I$ is precisely the kernel of the iterated reduced coproduct $$ \Psi^n: \bar I \rightarrow \bar I^{\otimes n+1}.$$ Applying $r$ to this, it follows that $k_n\bar H$ is the kernel of the iterated reduced coproduct $$ \Psi^n: \bar H \rightarrow \bar H^{\otimes n+1},$$ i.e., is the $n$th stage of the primitive filtration of $H^*(B{\mathbb Z}/p)$. This is then checked to be the span of products of elements in $F(1)$ of length at most $n$. \section{The identification of ${\mathcal U}_n/{\mathcal U}_{n-1}$} \label{quotient cat section} \thmref{Un/Un-1 thm} is a consequence of the next two lemmas. \begin{lem} \label{adjoint lem} The functor $\Sigma_n\text{--}{\mathcal U}_0 \rightarrow {\mathcal U}_n$ given by $$N \mapsto (N \otimes F(1)^{\otimes n})^{\Sigma_n}$$ is right adjoint to ${\bar T}^n: {\mathcal U}_n \rightarrow \Sigma_n\text{--}{\mathcal U}_0$. \end{lem} \begin{lem} \label{counit lem} The counit of the adjunction $$ {\bar T}^n((N \otimes F(1)^{\otimes n})^{\Sigma_n}) \rightarrow N$$ is an isomorphism for all $N \in \Sigma_n\text{--}{\mathcal U}_0$. 
\end{lem} \begin{proof}[Proof of \lemref{adjoint lem}] Given $M \in {\mathcal U}_n$ and $N \in \Sigma_n\text{--}{\mathcal U}_0$, we compute: \begin{equation*} \begin{split} \operatorname{Hom}_{\Sigma_n\text{--}{\mathcal U}_0}({\bar T}^n M, N) & = \operatorname{Hom}_{{\mathcal U}}^{\Sigma_n}({\bar T}^n M, N) \\ & = \operatorname{Hom}_{{\mathcal U}}^{\Sigma_n}(M, N \otimes \bar H^{\otimes n}) \\ & = \operatorname{Hom}_{{\mathcal U}}^{\Sigma_n}(M, k_n(N \otimes \bar H^{\otimes n})) \text{\ (since } M \in {\mathcal U}_n) \\ & = \operatorname{Hom}_{{\mathcal U}}^{\Sigma_n}(M, N \otimes F(1)^{\otimes n}) \\ & = \operatorname{Hom}_{{\mathcal U}_n}(M, (N \otimes F(1)^{\otimes n})^{\Sigma_n}). \\ \end{split} \end{equation*} \end{proof} \begin{proof}[Proof of \lemref{counit lem}] Starting from the calculation that ${\bar T} F(1) \simeq {\mathbb Z}/p$, it is easy to check that ${\bar T}^n (F(1)^{\otimes n}) = {\mathbb Z}/p[\Sigma_n]$. Then one has isomorphisms \begin{equation*} \begin{split} {\bar T}^n((N \otimes F(1)^{\otimes n})^{\Sigma_n}) & \simeq ({\bar T}^n(N \otimes F(1)^{\otimes n}))^{\Sigma_n} \\ & \simeq (N \otimes {\bar T}^n(F(1)^{\otimes n}))^{\Sigma_n} \\ & \simeq (N \otimes {\mathbb Z}/p[\Sigma_n])^{\Sigma_n} \\ & \simeq N. \end{split} \end{equation*} \end{proof} \begin{proof}[Proof of \thmref{recursive Un thm}] Given $M \in {\mathcal U}$, suppose that there exist $K,Q \in {\mathcal U}_{n-1}$, $N \in \Sigma_n\text{--}{\mathcal U}_0$, and an exact sequence $$ 0 \rightarrow K \rightarrow M \rightarrow (N \otimes F(1)^{\otimes n})^{\Sigma_n} \rightarrow Q \rightarrow 0.$$ Applying the exact functor ${\bar T}^n$ to this sequence shows that ${\bar T}^n M \simeq N$, and then applying ${\bar T}$ once more shows that ${\bar T}^{n+1}M = 0$, so that $M \in {\mathcal U}_n$. 
Conversely, given $M \in {\mathcal U}_n$, \thmref{Un/Un-1 thm} tells us that the unit of the adjunction $$ M \rightarrow ({\bar T}^n M \otimes F(1)^{\otimes n})^{\Sigma_n}$$ has kernel and cokernel in ${\mathcal U}_{n-1}$. \end{proof} \begin{ex} \label{F(n) example} ${\mathcal U}_n$ is generally not generated as a localizing category by the modules $\Sigma^s F(m)$ with $m \leq n$. If it were, then ${\mathcal F}^n$ would be generated as a localizing category by the functors $H_m$, for $m\leq n$. But this is not always true. The first example of this is ${\mathcal F}^3$, when $p=2$. There is a splitting of functors $$ \Lambda^2(V) \otimes V \simeq \Lambda^3(V) \oplus L(V).$$ $L$ is a simple object in ${\mathcal F}^3$ which is not among the composition factors of the functors $H_m$, $m\leq 3$: these are just the functors $\Lambda^m$, $m \leq 3$. \end{ex} \section{A functor to symmetric sequences in ${\mathcal U}_0$} \label{sym sequences section} Recall that $\Sigma_*\text{--}{\mathcal U}_0$ is the category of symmetric sequences of locally finite modules, and that the functor $$\sigma_*: {\mathcal U} \longrightarrow \Sigma_*\text{--}{\mathcal U}_0$$ is defined by $\sigma_nM = {\bar T}^n k_nM$. We prove the various properties of this functor asserted in \thmref{sigma thm}. Properties listed in parts (a)--(c) of the theorem are true because the analogous properties have been shown to be true for the functors ${\bar T}$ and $k_n$. \\ \noindent{\bf (d)} \ $\sigma_*$ is symmetric monoidal: $ \sigma_*(M \otimes N) = \sigma_*M \boxtimes \sigma_*N$. \begin{proof} Recall that $\sigma_nM = {\bar T}^n \bar k_nM$, where $\bar k_nM = k_nM/k_{n-1}M$; this agrees with the definition above, since ${\bar T}^n$ vanishes on $k_{n-1}M \in {\mathcal U}_{n-1}$. Then we compute: \begin{equation*} \begin{split} \sigma_n(M\otimes N) & = {\bar T}^n\bar k_n(M\otimes N) \\ & =\bigoplus_{l+m=n}{\bar T}^n(\bar k_lM \otimes \bar k_mN) \\ & =\bigoplus_{l+m=n} \operatorname{Ind}_{\Sigma_l \times \Sigma_m}^{\Sigma_n}({\bar T}^l\bar k_lM \otimes {\bar T}^m\bar k_mN) \\ & = (\sigma_*M \boxtimes \sigma_*N)_n.
\end{split} \end{equation*} \end{proof} \noindent{\bf (e)} \ For all $s$ and $n$, there is a natural isomorphism of ${\mathbb Z}/p[\Sigma_n]$--modules $$(\sigma_nM)^s \simeq \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, \bar R_sM).$$ \begin{proof} We first look at the special case when $M$ is ${\mathcal Nil}$--closed. We claim that then $\sigma_n M$ is concentrated in degree 0, and $$(\sigma_nM)^0 \simeq \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, M).$$ To see this, note that $M = r(F)$ for some $F \in {\mathcal F}$. Thus $$ \sigma_n M = {\bar T}^n k_n r(F)= r(\Delta^n p_n F).$$ Since $p_nF$ is polynomial of degree $n$, $\Delta^n p_nF$ will be polynomial of degree 0, and is thus constant. It follows that \begin{multline*} r(\Delta^n p_n F) = \operatorname{Hom}_{{\mathcal F}}({\mathbb Z}/p, \Delta^n p_n F) = \operatorname{Hom}_{{\mathcal F}}(\bar P^{\otimes n}, p_n F) \\ = \operatorname{Hom}_{{\mathcal F}}(q_n(\bar P^{\otimes n}), F) = \operatorname{Hom}_{{\mathcal F}}(\operatorname{Id}^{\otimes n}, F) = \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, M). \end{multline*} Now we consider the case of a general unstable module $M$. For locally finite modules, the skeletal filtration equals the nilpotent filtration, so $$ (\sigma_nM)^s = \bar R_s \sigma_n M = \sigma_n \bar R_s M = \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, \bar R_s M).$$ \end{proof} We make a couple of general comments regarding the values of $\sigma_*M$. Firstly, from part (e) of \thmref{sigma thm}, one sees that if the nilpotence length of $M$ is finite, there will be a uniform bound on the nonzero degrees of the modules $\sigma_nM$. More precisely, if $nil_{s+1}M = 0$, then $(\sigma_nM)^t = 0$ for all $t>s$ and all $n$. This is the case if $M$ is a module over a Noetherian unstable algebra $K$ which is finitely generated using both the $K$--module and ${\mathcal A}$--module structures \cite[Lemma 6.3.10]{meyer}.
(The more accessible paper \cite{henn} has a weaker version, and both \cite{hls2} and \cite{k5} give examples of bounds on nilpotence.) Our second comment is that if $K$ is an unstable algebra, then $\sigma_*K$ will be an algebra in symmetric sequences. Explicitly, the multiplication on $K$ induces $\Sigma_i \times \Sigma_j$ equivariant maps $\sigma_iK \otimes \sigma_jK \rightarrow \sigma_{i+j}K$ which are suitably associative, commutative, and unital. In particular, each $\sigma_nK$ will be a module over the unstable algebra $\sigma_0K$, the locally finite part of $K$. We end this section with various examples, all restricted to $p=2$. \begin{prop} Suppose $M$ is `homogeneous of degree $m$', i.e., $\bar k_m M = M$. Then ${\bar T}^m M \in \Sigma_m\text{--}{\mathcal U}_0$, and $$ \sigma_nM = \begin{cases} {\bar T}^mM & \text{if } n=m \\ 0 & \text{otherwise}. \end{cases} $$ \end{prop} \begin{proof} The hypothesis is equivalent to saying that the unit of the adjunction $M \rightarrow ({\bar T}^m M \otimes F(1)^{\otimes m})^{\Sigma_m}$ is an embedding, and the result follows from \thmref{Un/Un-1 thm}. \end{proof} \begin{ex} As special cases of the proposition, one has $$ \sigma_nF(m) = \begin{cases} {\mathbb Z}/2 & \text{if } n=m \\ 0 & \text{otherwise}, \end{cases} $$ and $$ \sigma_nF(1)^{\otimes m} = \begin{cases} {\mathbb Z}/2[\Sigma_m] & \text{if } n=m \\ 0 & \text{otherwise}. \end{cases} $$ \end{ex} Let $U: {\mathcal U} \rightarrow {\mathcal K}$ be the usual free functor, left adjoint to the forgetful functor. The next proposition describes $\sigma_*K$ when $K= U(M)$ where $M$ is as in the last example. To state this, we define a functor $$Sh^m: \Sigma_m\text{--}{\mathcal U}_0 \rightarrow \text{algebras in } \Sigma_*\text{--}{\mathcal U}_0$$ as follows. Given a $\Sigma_m$--module $N$, let $$ Sh^m_n(N) = \begin{cases} \operatorname{Ind}_{\Sigma_k \wr \Sigma_m}^{\Sigma_{km}}(N^{\otimes k}) & \text{if } n=km \\ 0 & \text{otherwise}. 
\end{cases} $$ Given $i+j=k$, the multiplication map $ Sh^m_{im}(N) \otimes Sh^m_{jm}(N) \rightarrow Sh^m_{km}(N)$ is defined by the composite $$\operatorname{Ind}_{\Sigma_i \wr \Sigma_m}^{\Sigma_{im}} N^{\otimes i} \otimes \operatorname{Ind}_{\Sigma_j \wr \Sigma_m}^{\Sigma_{jm}} N^{\otimes j} \rightarrow \operatorname{Ind}_{(\Sigma_i \times \Sigma_j) \wr \Sigma_m}^{\Sigma_{im} \times \Sigma_{jm}} N^{\otimes k} \rightarrow \operatorname{Ind}_{\Sigma_k \wr \Sigma_m}^{\Sigma_{km}} N^{\otimes k},$$ where the first map is $\operatorname{Ind}_{(\Sigma_i \times \Sigma_j) \wr \Sigma_m}^{\Sigma_{im} \times \Sigma_{jm}}$ applied to the shuffle product $$ N^{\otimes i} \otimes N^{\otimes j} \rightarrow N^{\otimes k}.$$ \begin{prop} Suppose $M$ satisfies $\bar k_mM = M$. Then $ \sigma_*(U(M)) = Sh^m_*({\bar T}^mM)$. \end{prop} This follows from the next two lemmas. \begin{lem} Suppose $M$ satisfies $\bar k_mM = M$. Then \begin{equation*} \bar k_nU(M) = \begin{cases} \Lambda^k M & \text{if } n = km \\ 0 & \text{otherwise}. \end{cases} \end{equation*} \end{lem} \begin{proof}[Sketch proof] We first observe that $k_{km-1}\Lambda^k M \subseteq k_{km-1}M^{\otimes k} = 0$, so that $\bar k_{km}\Lambda^k M = \Lambda^k M$. Now we recall that $U(M) = S^*(M)/(Sq^{|x|}x-x^2)$. This is filtered with $U^k(M)$ equal to the image of $M^{\otimes k}$ in $U(M)$, and there are exact sequences $$ 0 \rightarrow U^{k-1}(M) \rightarrow U^{k}(M) \rightarrow \Lambda^{k} M \rightarrow 0.$$ Applying $k_n$ to this yields exact sequences $$ 0 \rightarrow k_nU^{k-1}(M) \rightarrow k_nU^{k}(M) \rightarrow k_n\Lambda^{k} M,$$ and one deduces that $$ U^{k-1}(M) = k_{km-1}U^k(M) \text{ so that } \bar k_{km}U^k(M) = \Lambda^k M$$ and $$ k_{km+r}U^k(M) = k_{km+r}U(M)$$ for $0\leq r < m$. \end{proof} \begin{lem} If ${\bar T}^{m+1}M = 0$, then $\bar T^{km}\Lambda^k M = \operatorname{Ind}_{\Sigma_k \wr \Sigma_m}^{\Sigma_{km}}(({\bar T}^mM)^{\otimes k})$.
\end{lem} \begin{proof}[Sketch proof] This follows by iterated use of the following facts: $T\Lambda^kM = \Lambda^kTM$, $TM = {\bar T} M \oplus M$, and $\Lambda^k(M \oplus N) = \bigoplus_{i+j=k} \Lambda^i M \otimes \Lambda^j N$. \end{proof} The proposition is illustrated by the next example. \begin{ex} $\sigma_*(H^*(K(V,m))) = Sh^m_*(V^{\vee})$, where $V^{\vee}$ is the dual of $V$, and is given trivial $\Sigma_m$--module structure. \end{ex} We end with a group cohomology example. \begin{ex} Let $Q_8$ be the quaternion group of order 8. Then $$\sigma_*(H^*(BQ_8)) = H^*(S^3/Q_8) \otimes Sh^1_*({\mathbb Z}/2),$$ with $\sigma_0(H^*(BQ_8)) = H^*(S^3/Q_8) = {\mathbb Z}/2[x,y]/(x^2+xy+y^2,x^2y+xy^2)$. This can be seen quite easily. $Q_8 \subset S^3$ induces an embedding $$\Phi^2 H^*(B{\mathbb Z}/2) = H^*(BS^3) \subset H^*(BQ_8).$$ The image in cohomology of $Q_8 \rightarrow Q_8/[Q_8,Q_8] = ({\mathbb Z}/2)^2$ is $\sigma_0H^*(BQ_8)$, and the composite $\sigma_0H^*(BQ_8) \hookrightarrow H^*(BQ_8) \rightarrow H^*(S^3/Q_8)$ is an isomorphism. These maps combine to define an isomorphism in ${\mathcal K}$: $$ H^*(S^3/Q_8) \otimes \Phi^2 H^*(B{\mathbb Z}/2) \simeq H^*(BQ_8),$$ and the calculation of $\sigma_*H^*(BQ_8)$ follows. \end{ex} \end{document}
\begin{document} \title{Cox rings of surfaces and the anticanonical Iitaka dimension} \author{Michela Artebani} \address{ Departamento de Matem\'atica, \newline Universidad de Concepci\'on, \newline Casilla 160-C, Concepci\'on, Chile} \email{martebani@udec.cl} \author{Antonio Laface} \address{ Departamento de Matem\'atica, \newline Universidad de Concepci\'on, \newline Casilla 160-C, Concepci\'on, Chile} \email{alaface@udec.cl} \subjclass[2000]{14J26, 14C20} \keywords{Cox rings, rational surfaces, Iitaka dimension} \thanks{The first author has been partially supported by Proyecto FONDECYT Regular 2009, N. 1090069. The second author has been partially supported by Proyecto FONDECYT Regular 2008, N. 1080403.} \maketitle \begin{abstract} In this paper we investigate the relation between the finite generation of the Cox ring $R(X)$ of a smooth projective surface $X$ and its anticanonical Iitaka dimension $\kappa(-K_X)$. \end{abstract} \section*{Introduction} This paper discusses the problem of deciding which smooth projective surfaces over the complex numbers have finitely generated Cox ring. More precisely, if the Picard group $\operatorname{Pic}(X)$ of such a surface $X$ is finitely generated, or equivalently $q(X)=0$, then the {\em Cox ring} of $X$ is: \[ R(X):=\bigoplus_{D\in \operatorname{Pic}(X)} H^0(X,\mathcal O_X(D)). \] The Cox ring of a surface $X$ is known to be a polynomial ring if and only if $X$ is toric \cite{cox, hk}. In \cite[Cor.~5.1]{to2} Totaro proved that the Cox ring of a klt Calabi-Yau pair of dimension two over ${\mathbb C}$ is finitely generated if and only if its effective cone is rational polyhedral. In the recent paper ~\cite{tvv} by Testa, V\'arilly-Alvarado and Velasco, the authors prove that if $X$ is a smooth rational surface with $-K_X$ big, i.e.~such that $\dim\varphi_{|-nK_X|}(X)=2$ for $n$ big enough, then $R(X)$ admits a finite number of generators. 
These results motivate the search for a relation between the {\em anticanonical Iitaka dimension} of $X$ (see \cite[Def.~2.1.3]{laz}): \[ \kappa(-K_X) := \max\{\dim \varphi_{|-nK_X|}(X) : n\in{\mathbb N}\}, \] whose values can be $2$, $1$, $0$ and $-\infty$, and the finite generation of $R(X)$. Let ${\rm Eff}(X)$ be the convex cone generated by classes of effective divisors in $N^1(X)=\operatorname{Pic}(X)\otimes {\mathbb R}/\equiv$, where $\equiv$ is numerical equivalence. Our first result is the following. \begin{introthm} Let $X$ be a smooth rational surface with $\kappa(-K_X)=1$. Then the following are equivalent: \begin{enumerate} \item the effective cone ${\rm Eff}(X)$ is rational polyhedral; \item the Cox ring $R(X)$ is finitely generated; \item $X$ contains finitely many $(-1)$-curves. \end{enumerate} \end{introthm} Moreover we prove that, if $\kappa(-K_X)=1$, then the effective cone of $X$ is rational polyhedral if and only if the same is true for its relative minimal model. This allows us to show (Corollary \ref{infty}) that there exist surfaces with finitely generated Cox ring and anticanonical Iitaka dimension $1$ of any Picard number $\geq 9$. In the case where $-K_X$ is nef we prove the following result, which relies on Nikulin's description of surfaces with rational polyhedral effective cone \cite[Ex.~1.4.1]{n1}. \begin{introthm} Let $X$ be a smooth projective surface with $q(X)=0$ and $-K_X$ nef. Then $R(X)$ is finitely generated if and only if one of the following holds: \begin{enumerate} \item $X$ is the minimal resolution of singularities of a Del Pezzo surface with Du Val singularities; \item $\varphi_{|-mK_X|}$ is an elliptic fibration for some $m>0$ and the Mordell-Weil group of the Jacobian fibration of $\varphi_{|-mK_X|}$ is finite; \item $X$ is either a K3-surface or an Enriques surface with finite automorphism group $\operatorname{Aut}(X)$.
\end{enumerate} \end{introthm} Surfaces of type (i) are classically known \cite{na}, surfaces in (ii) can be classified by means of~\cite{d, cd} (for $m=1$) and Ogg-Shafarevich theory \cite{o,sh}, while surfaces of type (iii) have been classified in a series of papers by Nikulin and Kond\=o (see~\cite[Ex.~1.4.1]{n1} for precise references). The paper is structured as follows. In Section 1 we introduce four cones in $N^1(X)$: the effective cone, the closed light cone, the nef cone and the semiample cone. Section 2 deals with the structure of the effective cone of rational surfaces with $\kappa(-K_X)\geq 0$. Our main result here is the following. \begin{introthm} If $X$ is a smooth rational surface with $\kappa(-K_X)\geq 0$ and $\rho(X)\geq 3$, then $\overline{{\rm Eff}(X)}=\overline{E(X)}$, where $E(X)$ is the cone generated by classes of integral curves of $X$ with negative self-intersection. \end{introthm} In Section 3 we prove that on a smooth rational surface $X$ with $\kappa(-K_X)=1$ every nef ${\mathbb Q}$-divisor is semiample. The problem of finite generation of the Cox ring of such surfaces is considered in Section 4. Section 5 is devoted to the problem of finite generation of $R(X)$ under the hypothesis $-K_X$ nef. Finally, Section 6 exhibits an example of a non-rational surface $X$ with $\rho(X)=2$ and rational polyhedral effective cone whose Cox ring does not admit a finite set of generators.\\ \noindent{\em Acknowledgments:} It is a pleasure to thank Tommaso de Fernex and Damiano Testa for several useful comments which helped us to improve and clarify this work. We are also grateful to Jinhyung Park, who informed us about a mistake in the previous version of Lemma 4.4 and suggested how to fix it. \section{Basic setup} In what follows $X$ will denote a smooth projective surface defined over the complex numbers.
Given a divisor $D$ of $X$ we will adopt the short notation $H^i(D)$ for the cohomology group $H^i(X,{\mathcal O}_X(D))$ and we will denote its dimension by $h^i(D)$. Also, we will denote by $\equiv$ the numerical equivalence between divisors, by $[D]$ the class of $D$ in $N^1(X)=\operatorname{Pic}(X)\otimes {\mathbb R}/\equiv$ and by $|D|$ the complete linear series associated to $D$. Observe that $N^1(X)=\operatorname{Pic}(X)\otimes {\mathbb R}$ if $q(X)=0$, in particular this is true if $X$ is a rational surface. We recall that the {\em effective cone} of an algebraic surface $X$ is defined as: \[ {\rm Eff}(X) := \{\sum_i a_i[D_i] : D_i\text{ is an effective divisor}, a_i\in {\mathbb R}_{\geq 0}\}. \] The {\em closed light cone} of $X$ is the cone of classes with non-negative self-intersection: \[ L(X):=\{[D]\in N^1(X) : D^2 \geq 0\}. \] We define $L_a(X)$ to be the half-cone of $L(X)$ which contains an ample class. In what follows we will say that a cone of $N^1(X)$ is {\em polyhedral} if it is generated by finitely many vectors. In particular a polyhedral cone is closed. We start by proving the following (see also~\cite[\S1]{n1}). \begin{proposition}\label{eff-rays} Let $X$ be a smooth projective surface such that $\rho(X)\geq 3$ and ${\rm Eff}(X)$ is polyhedral. Then \[ {\rm Eff}(X) = \sum_{[E]\in\operatorname{Exc}(X)} {\mathbb R}_+ \cdot [E] \] where $\operatorname{Exc}(X)$ is the set of classes of integral curves $E$ of $X$ with $E^2<0$. \end{proposition} \begin{proof} This is a consequence of the following observation: by the Riemann--Roch theorem the interior of $L_a(X)$ is contained in $ {\rm Eff}(X)$. Since the effective cone is polyhedral, it is closed, so that $L_a(X)\subset {\rm Eff}(X)$. Since $\rho=\rho(X)\geq 3$, the boundary $\partial L_a(X)$ is circular because the intersection form is hyperbolic with signature $(1,\rho-1)$ by the Hodge index theorem.
Thus an element of $\partial {L_a}(X)$ cannot be an extremal ray of ${\rm Eff}(X)$, since otherwise ${\rm Eff}(X)$ would not be polyhedral in a neighbourhood of that ray, giving a contradiction. \end{proof} Another important cone associated to $X$ is the cone of numerically effective divisors, or simply the {\em nef cone}: \[ {\rm Nef}(X) := \{[D]\in N^1(X) : D\cdot E\geq 0\text{ for any }[E]\in{\rm Eff}(X)\}. \] This cone is the dual of the effective cone with respect to the intersection form on the surface $X$. Finally, if $q(X)=0$, we define the {\em semiample cone} ${\rm SAmple}(X)$ to be the cone spanned by the classes of semiample divisors, where $D$ is semiample if $|nD|$ is base point free for some $n>0$ (see \cite[Def. 1.1.10, 2.1.26]{laz}). \begin{proposition}\label{eff} Let $X$ be a smooth projective surface with $q(X)=0$. We have the following inclusions: \[ {\rm SAmple}(X)\subset{\rm Nef}(X)\subset\overline{{\rm Eff}(X)}. \]\end{proposition} \begin{proof} The second inclusion is due to the fact that the nef cone is the closure of the ample cone~\cite[Thm.~1.4.23]{laz}. For the first inclusion observe that, if $D$ is semiample, then $\varphi_{|nD|}: X\to{\mathbb P}^r$ is a morphism for $n$ big enough. Thus, if $E$ is an effective divisor, then $nD\cdot E=\deg (\varphi_{|nD|}^*{\mathcal O}_{{\mathbb P}^r}(1)_{|E})\geq 0$, so that $D$ is nef. \end{proof} The following will be useful in the next sections. \begin{proposition}\label{fib} Let $X$ be a smooth projective surface with $q(X)=0$ and let $M$ be a non-trivial effective divisor of $X$ such that $|M|$ has no fixed components and $M^2=0$. Then $M\sim aD$ with $D$ smooth and integral, $h^0(D)=2$ and $H^0(M)\cong\operatorname{Sym}^a H^0(D)$. \end{proposition} \begin{proof} The linear series $|M|$ is base point free, since otherwise two of its distinct elements would intersect at the base points giving $M^2>0$, which is a contradiction.
Let $\varphi_{|M|}: X\to B\subset {\mathbb P}^n$ be the morphism defined by $|M|$. Since holomorphic 1-forms of $B$ pull back to $X$, we have $q(X)\geq p_a(B)$. Thus $q(X)=0$ implies that $B$ is smooth and rational. Consider the Stein factorization of $\varphi_{|M|}$: \[ \xymatrix@1{ X\ar[rr]^-{\varphi_{|M|}}\ar[rd]_-{f} & & {\mathbb P}^n\\ & {\mathbb P}^1\ar[ur]_{\nu} & } \] where $f$ is a morphism with connected fibers and $\nu$ is a finite map. Setting $a:=\deg(\nu)\deg(\nu({\mathbb P}^1))$, we have $M\sim aD$, where ${\mathcal O}_X(D)=f^*{\mathcal O}_{{\mathbb P}^1}(1)$. Since $h^0(D)\geq 2$, we get $n+1=h^0(M)=h^0(aD)\geq a+1$. On the other hand $a\geq\deg(\nu({\mathbb P}^1))\geq n$ since the curve $\nu({\mathbb P}^1)$ is non-degenerate. Thus $n=a$, $h^0(D)=2$ and the map $\nu$ is the $a$-Veronese embedding of ${\mathbb P}^1$. Since $f$ has connected fibers, $D$ is connected so that, by Bertini's second theorem~\cite{is}, the general element of $|D|$ is smooth. \end{proof} \section{The structure of the effective cone} Let $X$ be a projective surface with $q(X)=0$ and $\kappa(-K_X)\geq 0$. Observe that in this case, either $K_X$ is numerically trivial, or $X$ is rational by Castelnuovo's rationality criterion~\cite[Thm.~3.4, VI]{bpv}. We consider the problem of determining under which hypothesis the effective cone of $X$ is rational polyhedral. Let $L_a(X)$ be the component of the closed light cone which contains the ample cone, as in the previous section, and let $E(X)$ be the convex cone generated by the classes of curves in $X$ with negative self-intersection. Given a cone $\sigma\subseteq N^1(X)$ we will adopt the following notation: \[ \sigma_{\geq 0} = \{[D]\in\sigma :D\cdot K_X\geq 0\}, \qquad \sigma_{\leq 0} = \{[D]\in\sigma :D\cdot K_X\leq 0\}. \] The cones $\sigma_{> 0}$ and $\sigma_{< 0}$ are defined in a similar way.
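Before turning to the main theorem, we spell out, for the reader's convenience, the Riemann--Roch argument invoked in the proof of Proposition~\ref{eff-rays}; the computation below is the standard one for a rational surface. If $X$ is rational, then $\chi({\mathcal O}_X)=1$, so the Riemann--Roch theorem and Serre duality give, for any divisor $D$,
\[
h^0(D)+h^0(K_X-D)\ \geq\ \chi({\mathcal O}_X(D))\ =\ 1+\frac{1}{2}\,D\cdot(D-K_X).
\]
In particular, if $[D]$ lies in the interior of $L_a(X)$, that is $D^2>0$ and $D\cdot A>0$ for an ample divisor $A$, then $(K_X-nD)\cdot A<0$ for $n$ big enough, so that $h^0(K_X-nD)=0$, while the right hand side applied to $nD$ grows like $n^2D^2/2$; hence $h^0(nD)>0$ and $[D]\in{\rm Eff}(X)$.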
\begin{theorem}\label{effc} If $X$ is a smooth rational surface with $\kappa(-K_X)\geq 0$ and $\rho(X)\geq 3$, then \[ \overline{{\rm Eff}(X)}=\overline{E(X)}. \] \end{theorem} \begin{proof} We divide the proof into three steps. \noindent {\em Step 1.} We prove that \[ \operatorname{Chull}(\overline{E(X)},L_a(X))=\overline{{\rm Eff}(X)}. \] If the sets are distinct then, since both are closed, there exists a class $[D]\in {\rm Eff}(X)\backslash \operatorname{Chull}(\overline {E(X)},L_a(X))$. In particular $D^2<0$, so that $|D|$ contains at least one negative curve as a fixed component. Thus $D\sim D_1+D_2$, where $D_1, D_2$ are effective and $D_1$ consists of all the negative curves contained in the fixed part of $|D|$. Since $[D_1]\in E(X)$, we get $[D_2]\not \in \operatorname{Chull}(\overline{E(X)},L_a(X))$, so that $D_2^2<0$. Then there is still a negative curve in the fixed part of $|D_2|$ and thus in the fixed part of $|D|$, giving a contradiction. \noindent {\em Step 2.} We now prove that \[ L_a(X)_{\geq 0}\subset\operatorname{Chull}(\overline{E(X)},L_a(X)_{\leq 0}). \] If the interior of $L_a(X)_{\geq 0}$ is empty, then $L_a(X) = L_a(X)_{\leq 0}$, so we get the claim by Step 1. Otherwise, let $[D]$ be a class in the interior of $L_a(X)_{\geq 0}$, i.e. $D^2>0$ and $D\cdot K_X>0$. Observe that for some positive integers $n,m$, the multiples $nD$, $-mK_X$ are effective since $D^2>0$ and $\kappa(-K_X)\geq 0$. Thus, since $nD\cdot (-mK_X)<0$, the linear series $|nD|$ contains at least one negative curve in its fixed locus. Let $D_1$ be given by all curves with negative self-intersection in the fixed locus of $|nD|$. Then the divisor $D_2=nD-D_1$ is nef, so that $D_2^2\geq 0$ and $D_2\cdot (-K_X)\geq 0$. Thus $D_2\in L_a(X)_{\leq 0}$. Together with Step 1, this gives: \[ \operatorname{Chull}(\overline{E(X)},L_a(X)_{\leq 0})=\overline{{\rm Eff}(X)}. \] \noindent {\em Step 3.} Let $l_+\in\partial L_a(X)_{>0}$, so that $l_+^2=0$ and $l_+\cdot K_X>0$.
By Step 2 we have $l_+=e+l_-$, where $e\in\overline{E(X)}$ and $l_-\in L_a(X)_{\leq 0}$. Assume that $l_+$ is an extremal ray of $\overline{{\rm Eff}(X)}$, then $l_+=e$ is an extremal ray of $\overline{E(X)}$. Observe that $\overline{E(X)}_{>0}$ and $E(X)_{>0}$ have the same extremal rays, since the latter cone contains a finite number of extremal rays, which are classes of curves contained in the base locus of an effective multiple of $-K_X$. Then $l_+$ is an extremal ray of $E(X)_{>0}$, so that $l_+^2<0$, which is a contradiction. Assume now that $z_-\in\partial L_a(X)_{<0}$, so that $z_-^2=0$ and $z_-\cdot K_X=-\epsilon<0$. Since $X$ is rational and $\rho(X)\geq 3$, then by the Cone Theorem~\cite[Thm.~1.5.33]{laz} and \cite[Lemma~6.2]{deb} we have: \[ \overline{{\rm Eff}(X)}=\overline{{\rm Eff}(X)}_{\geq 0}+\sum_i {\mathbb R}_+\cdot [D_i], \] for countably many $[D_i]\in E(X)$ which can have accumulation points only on the hyperplane $K_X^{\perp}$. This implies that $z_-=z_++e'$, where $z_+\in \overline{{\rm Eff}(X)}_{\geq 0}$ and $e'\in \overline{E(X)}_{\leq 0}$. If $z_-$ is an extremal ray of $\overline{{\rm Eff}(X)}$, then $z_-=e'$ is an extremal ray of $\overline{E(X)}_{<-\epsilon/2}=E(X)_{<-\epsilon/2}$ by the Cone theorem. Then, since $z_-$ is an extremal ray, we get $z_-^2<0$, which is a contradiction. We have proved that neither $\partial L_a(X)_{<0}$ nor $\partial L_a(X)_{>0}$ can be at the boundary of the effective cone, thus $L_a(X)\subset\overline{E(X)}$. \end{proof} \begin{corollary}\label{effneg} Let $X$ be a smooth rational surface with $\kappa(-K_X)\geq 0$, then the following are equivalent \begin{enumerate} \item ${\rm Eff}(X)$ is rational polyhedral, \item $X$ contains finitely many $(-1)$ and $(-2)$-curves. \end{enumerate} \end{corollary} \begin{proof} If $\rho(X)\leq 2$, then $X$ is a toric surface, either the projective plane or a Hirzebruch surface, so that both conditions are obviously satisfied. So now we assume that $\rho(X)\geq 3$.
The implication $(i)\Rightarrow (ii)$ is given by Proposition~\ref{eff-rays}, since the classes of $(-1)$ and $(-2)$-curves span extremal rays of the effective cone. To prove the converse, by Theorem \ref{effc} it is enough to prove that $E(X)$ is rational polyhedral, or equivalently that $X$ contains finitely many classes of integral curves with negative self-intersection. The set of such classes in ${E(X)}_{>0}$ is finite, since the corresponding curves belong to the base locus of an effective multiple of $-K_X$. By the Cone Theorem and \cite[Lemma~6.2]{deb} the curves with negative self-intersection with classes in ${E(X)}_{<0}$ are rational, thus they are $(-1)$-curves by the adjunction formula. Finally, a curve with negative self-intersection and orthogonal to $K_X$ is a $(-2)$-curve. Thus we conclude by observing that $E(X)\subseteq{\rm Eff}(X)\subseteq\overline{{\rm Eff}(X)}=\overline{E(X)}=E(X)$. \end{proof} \begin{remark}\label{k2} If $\kappa(-K_X)=2$, then ${\rm Eff}(X)$ is rational polyhedral~\cite[Prop.~3.3]{na}. The effective cone of a smooth rational surface $X$ with $\kappa(-K_X)=1$ is not necessarily polyhedral. Consider as an example the blow-up $X$ of ${\mathbb P}^2$ at the nine intersection points $\{p_1,\dots,p_9\}=C_1\cap C_2$ of two general plane cubics. Due to the generality assumption on the $C_i$'s, there are no reducible elements in the linear series $|-K_X|$, so that all the fibers of $\varphi_{|-K_X|}: X\to{\mathbb P}^1$ are integral. Hence the class of $-K_X$ spans an extremal ray of the effective cone. By Proposition~\ref{eff-rays} and the fact that $K_X^2=0$, we deduce that ${\rm Eff}(X)$ is not polyhedral (see also~\cite[Cor. 3.2]{to1}). \end{remark} \section{The nef and the semiample cones} In what follows we will make use of the {\em Zariski decomposition} of a pseudoeffective divisor (see~\cite[Thm.
2.3.19]{laz}): a pseudoeffective divisor $D$ can be written uniquely as a sum $D=N+P$, where $N$ and $P$ are ${\mathbb Q}$-divisors such that $P$ is nef, $N$ is effective with negative definite intersection matrix on its components, and $P$ is orthogonal to each component of $N$. The divisors $P$ and $N$ are called the positive and negative part of $D$ respectively. In what follows, we will denote by $\kappa(D)$ the Iitaka dimension of a divisor $D$ (see \cite[Def.~2.1.3]{laz}). \begin{lemma}\label{genere} Let $X$ be a smooth rational surface with $\kappa(-K_X)\geq 1$. If $E$ is an effective divisor of $X$ such that the intersection matrix on its integral components is negative definite, then $p_a(E)\leq 0$. \end{lemma} \begin{proof} We begin by proving that $h^0(K_X+E)=0$. Observe that $\kappa(E)=0$ since the intersection form on its components is negative definite. If $Z=K_X+E$ is an effective divisor, then $0=\kappa(E)=\kappa(-K_X+Z)\geq\kappa(-K_X)\geq 1$, which is a contradiction. Consider now the exact sequence of sheaves: \[ \xymatrix@1{ 0\ar[r] & {\mathcal O}_X(-E)\ar[r] & {\mathcal O}_X\ar[r] & {\mathcal O}_E\ar[r] & 0. } \] Taking cohomology and using the fact that $h^1({\mathcal O}_X)=h^2({\mathcal O}_X)=0$ because $X$ is rational, we get $h^1({\mathcal O}_E)=h^2(-E)$. The latter equals $h^0(K_X+E)$ by Serre duality. Now, from what we proved before, we deduce that $h^1({\mathcal O}_E)=0$ so that $p_a(E)\leq 0$, which proves the claim. \end{proof} \begin{lemma}\label{nef+big} Let $X$ be a smooth rational surface with $\kappa(-K_X)\geq 1$ and let $L$ be a nef divisor with $\kappa(L)=2$. Then $L$ is semiample. \end{lemma} \begin{proof} We follow the proof of~\cite[Lemma~2.6]{tvv}. Let $\Delta$ be the union of all integral curves orthogonal to $L$. Since $L^2>0$ then, by the Hodge index theorem, the restriction to $\Delta$ of the intersection form of $X$ is negative definite.
Moreover, by Lemma~\ref{genere}, we have that $p_a(E)\leq 0$ for any effective divisor supported on $\Delta$. Thus we can apply Artin's contractibility criterion \cite[Thm. 2.3]{art} to $\Delta$. So, there exists a normal projective surface $Y$ and a birational morphism $\psi: X\to Y$ which contracts only the connected components of $\Delta$. Hence, by \cite[Cor. 2.6]{art}, $L$ is linearly equivalent to a divisor $L'$ whose support is disjoint from $\Delta$, so that $L$ is the pullback of a Cartier divisor on $Y$. By the Nakai-Moishezon criterion $L'$ is ample, so that $L$ is semiample. \end{proof} \begin{lemma}\label{k=1} Let $X$ be a smooth algebraic surface with $q(X)=0$ and let $L$ be a nef divisor with $\kappa(L)=1$. Then $L$ is semiample and $L^2=0$. \end{lemma} \begin{proof} First of all observe that, since $L$ is nef with $\kappa(L)=1$, we have $L^2=0$. Let $n\in {\mathbb N}$ be such that $h^0(nL)>1$. If $B$ is the fixed part of $|nL|$, then $\kappa(B)\leq 1$ so that $B^2\leq 0$. Since $nL-B$ is nef with $1\leq \kappa(nL-B)\leq \kappa(nL)=1$, we get $(nL-B)^2=0$. This implies that $B^2=B\cdot L=0$. The restriction of the intersection form to the space spanned by $[B]$ and $[L]$ is null. By the Hodge index theorem this implies that $[B]=\alpha[L]$ for some $\alpha\in{\mathbb Q}_{>0}$. Thus the base locus of $|(n-\alpha)L|$ is $0$-dimensional and, since $L^2=0$, we see that it is actually empty. This proves the claim. \end{proof} We are finally ready to prove the main theorem of this section. \begin{theorem}\label{nef-big} Let $X$ be a smooth rational surface with $\kappa(-K_X)\geq 1$. Then any nef ${\mathbb Q}$-divisor is semiample. \end{theorem} \begin{proof} If $L$ is a nef divisor of $X$, then $L^2\geq 0$. If $L^2>0$ or $K_X\cdot L<0$, then $h^0(L)\geq 2$ by the Riemann--Roch formula so that $\kappa(L)\geq 1$ and we conclude by Lemmas~\ref{nef+big} and~\ref{k=1}. We assume now that $L^2=K_X\cdot L=0$ and let $-K_X\sim N+P$ be the Zariski decomposition of $-K_X$.
Then $P\cdot L=0$ because $P$, $N$ are effective, $L$ is nef and $(P+N)\cdot L=-K_X\cdot L=0$. If $P^2>0$, then by the Hodge index theorem $L\equiv 0$ and $L$ is trivially semiample; thus we may assume that $P^2=0$, so that the restriction of the intersection form of $\operatorname{Pic}(X)$ to the space spanned by $[P]$ and $[L]$ is null. Thus $L\sim mP$ for some $m\in{\mathbb Q}_{\geq 0}$. Hence $\kappa(P)=\kappa(-K_X)\geq 1$ and $L$ is semiample by Lemmas~\ref{nef+big} and~\ref{k=1}. \end{proof} \begin{remark} Observe that if $\kappa(-K_X)=0$ and the positive part $P$ of the Zariski decomposition of $-K_X$ is non-trivial, then the nef and the semiample cones of $X$ do not coincide. This implies that the Cox ring $R(X)$ is not finitely generated by~\cite[Cor. 2.6]{ahl}. An easy example of such surfaces is given by the blow-up of ${\mathbb P}^2$ at $9$ points in very general position. An example with ${\rm Eff}(X)$ rational polyhedral is given in~\cite[Ex.~1.4.1]{n1}. \end{remark} \section{Cox rings of rational surfaces with $\kappa(-K_X)=1$} We consider the problem of the finite generation of Cox rings of smooth rational surfaces with $\kappa(-K_X)=1$. \begin{proposition}\label{-K} Let $X$ be a smooth rational surface with $\kappa(-K_X)=1$ and let \[ -K_X\sim N+P \] be the Zariski decomposition of $-K_X$. Then $P\sim aC$ for some $a\in{\mathbb Q}_{>0}$, where $C$ is a smooth elliptic curve with $C^2=0$ and $h^0(C)=2$. \end{proposition} \begin{proof} Since $P$ is nef and $\kappa(P)=1$, then $P$ is semiample by Lemma~\ref{k=1}. By Proposition~\ref{fib} we have $P\sim aC$ for some smooth integral curve $C$ with $C^2=0$ and $h^0(C)=2$. By the genus formula and $-K_X\cdot P=0$ we get $2g(C)-2=C^2+C\cdot K_X=0$, so that $C$ is elliptic. \end{proof} \begin{theorem}\label{equiv} Let $X$ be a smooth rational surface with $\kappa(-K_X)=1$. Then the following are equivalent: \begin{enumerate} \item the effective cone ${\rm Eff}(X)$ is rational polyhedral; \item the Cox ring $R(X)$ is finitely generated; \item $X$ contains finitely many $(-1)$-curves. \end{enumerate} \end{theorem} \begin{proof} (i)$\Rightarrow$(ii): by~\cite[Cor.
2.6]{ahl} it is enough to prove that the nef and semiample cones of $X$ coincide, and this is proved in Theorem~\ref{nef-big}. (ii)$\Rightarrow$(iii): if $C$ is a $(-1)$-curve and $x\in H^0(C)$ is a non-zero section, write $x=\sum_im_i$, where $m_i$ are monomials in the generators of $R(X)$. If $D_i$ is the zero locus of $m_i$, then $C\sim D_i$. Since $C$ is integral with $C^2<0$, then $D_i=C$ so that $m_i=\alpha_i x$, with $\alpha_i\in{\mathbb C}$. Hence $x$ appears in any set of homogeneous generators of $R(X)$. Since $R(X)$ is finitely generated, this implies that there are finitely many $(-1)$-curves. (iii)$\Rightarrow$(i): We can assume that $\rho(X)\geq 9$ since otherwise $\kappa(-K_X)=2$. By Corollary~\ref{effneg} it is enough to prove that $X$ contains finitely many $(-2)$-curves. Let $-K_X\sim N+aC$ be the Zariski decomposition of $-K_X$ as given in Proposition~\ref{-K}. If $E$ is a $(-2)$-curve, then $-K_X\cdot E=0$, so that either $E$ is contained in the support of $N$ or $E\cdot C=0$. In the latter case $E$ is contracted by the morphism $\varphi_{|C|}: X\to{\mathbb P}^1$, which is a fibration by Proposition~\ref{-K}. Since the support of $N$ contains a finite number of prime divisors and $\varphi_{|C|}$ has finitely many reducible fibers, then $X$ contains finitely many $(-2)$-curves. \end{proof} Let $X$ be a smooth rational surface with $\kappa(-K_X)=1$ and let $-K_X\sim P+N$ be the Zariski decomposition. By Proposition~\ref{-K} a multiple of $P$ defines an elliptic fibration. We have the following commutative diagram: \[ \xymatrix{ & X\ar[ld]_{\pi} \ar[rd]^{\varphi_{|rP|}}& \\ Y\ar[rr]_{\varphi_{|-mK_Y|}} && {\mathbb P}^1 } \] where $\pi$ is the blow-down map of all the $(-1)$-curves contained in the fibers of $\varphi_{|rP|}$ and $m$ is the smallest positive integer such that $h^0(-mK_Y)=2$. The surface $Y$ is the {\em relative minimal model} and its anticanonical divisor $-K_Y$ is nef. If $m=1$, then $Y$ is called a {\em Jacobian elliptic surface}.
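As a consistency check on the diagram above, one can verify that the fibers of $\varphi_{|-mK_Y|}$ are indeed elliptic curves (a sketch, not part of the original argument; it assumes, as in the situation above, that $\kappa(-K_Y)=1$, so that $K_Y^2=0$ by Lemma~\ref{k=1}): if $F$ is a general fiber, then $F\sim -mK_Y$ and the genus formula gives
\[
2g(F)-2=F^2+F\cdot K_Y=m^2K_Y^2-mK_Y^2=0,
\]
so that $g(F)=1$.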
Let $\varphi:X\to {\mathbb P}^1$ be a fibration with connected fibers and let $L\sim aC$, with $a\in {\mathbb Q}$, where $C$ is an effective divisor whose support is contained in the fibers of $\varphi$. \begin{definition} The multiplicity of $L$ at $p\in X$ is: \[ \mu(L,p):=a\mu(C_p,p), \] where $C_p\in |C|$ is the unique curve through $p$, and $\mu(L,p)=0$ if there is no such curve. \end{definition} \begin{lemma}\label{mu} Let $\pi: \tilde{X}\to X$ be the blow-up at a point $p$ of a smooth rational surface with $\kappa(-K_X)=1$. Then $\kappa(-K_{\tilde{X}})=1$ if and only if $\mu(-K_X,p)>1$. In this case we have the Zariski decomposition $-K_{\tilde{X}}\sim \tilde P+\tilde N$ with $\tilde P=\pi^*P$ if $\mu(N,p)\geq 1$ and otherwise \[ \tilde P= \frac{\mu(-K_X,p)-1}{\mu(P,p) }\,\pi^*P. \] \end{lemma} \begin{proof} Let $-K_X\sim P+N$ be the Zariski decomposition and $\mu_P, \mu_N$ be the multiplicities of $P$ and $N$ at $p$. If $\mu_N\geq 1$ then up to multiples $\pi^*N-E$ is linearly equivalent to an effective divisor which is orthogonal to $\pi^*P$ and such that the intersection matrix on the components of its support is negative definite. Thus the Zariski decomposition of $-K_{\tilde X}$ is: $$-K_{\tilde X}\sim \pi^*P+(\pi^*N-E).$$ Otherwise, if $\mu_N<1$ and $\mu_P+\mu_N\geq 1$, consider the following decomposition $$-K_{\tilde X}=\frac{\mu_P+\mu_N-1}{\mu_P}\pi^*P+(\pi^*N-\mu_NE)+(1-\mu_N)\left(\frac{1}{\mu_P}\pi^*P-E\right).$$ Observe that the second and third terms in the sum are linearly equivalent, up to multiples, to effective divisors whose supports are properly contained in the fibers of $\pi^*P$. In particular, for $\frac{1}{\mu_P}\pi^*P-E$, such support is contained in the unique fiber containing $E$.
Since $E$ is not contained in any of the two supports, by the definition of $\mu_P$ and $\mu_N$, then their sum cannot contain any positive rational multiple of $\pi^*P$, so the intersection form on its components is negative definite by~\cite[Lemma 8.2, III]{bpv}. Since the two divisors are clearly orthogonal to the first term in the sum, then their sum is the negative part of the Zariski decomposition of $-K_{\tilde X}$. Finally, if $\mu_P+\mu_N>1$, then $\kappa(-K_{\tilde X})=1$. Conversely, observe that if $\mu(-K_X,p)= 1$, then $\kappa(-K_{\tilde X})=0$ by the previous decomposition. The same clearly holds if $\mu(-K_X,p)< 1$, since in this case any multiple of $-K_{\tilde X}=\pi^*(-K_X)-E$ is not linearly equivalent to an effective divisor. \end{proof} \begin{remark}\label{rem} Observe that if $\mu_P=\mu(P,p)>1$ we can always decompose $-K_{\tilde X}$ as $\big(1-\frac{1}{\mu_P}\big)\pi^*P+(\frac{1}{\mu_P}\pi^*P+\pi^*N-E)$, where the two terms are effective, orthogonal divisors and the intersection form on the components of the second one is negative semidefinite. \end{remark} \begin{proposition}\label{exist} Let $n\geq 10$ be an integer. There exists a smooth rational surface $X$ with $\kappa(-K_X)=1$ and $\rho(X)=n$. \end{proposition} \begin{proof} Let $X_0$ be a smooth rational surface such that $\varphi_{|-K_{X_0}|}: X_0\to{\mathbb P}^1$ is an elliptic fibration which admits a fiber $P_0$ with a triple point. For example, $X_0$ can be the blow-up of ${\mathbb P}^2$ at the nine base points of the pencil $xy(x+y)+t(x^3+y^3+z^3)=0$. We construct a sequence of blow-ups $\pi_i: X_i\to X_{i-1}$, where $\pi_i$ is the blow-up of $X_{i-1}$ at a point $p_{i-1}$ chosen in the following way. Let $p_0$ be the triple point of $P_0$ and let $p_i$ lie on the intersection of the exceptional divisor $E_{i-1}=\pi_i^{-1}(p_{i-1})$ and the strict transform $E_{i-2}'$ of $E_{i-2}$ (see Figure~\ref{blow-up} below).
Let $-K_{X_i}\sim P_i+N_i$ be the decomposition given in Remark \ref{rem} with $P_{i}\sim (1-\frac{1}{\mu_{i-1}})\pi^*_{i}(P_{i-1})$ where $\mu_{i-1}=\mu(P_{i-1},p_{i-1})$. This gives the formula: \[ P_{i}\sim \prod_{k=0}^{i-1}(1-\frac{1}{\mu_k})~\phi^*_{i}P_0, \] where $\phi_i=\pi_1\circ\cdots\circ\pi_i: X_i\to X_0$ is the blow-down map. In this way we can recursively calculate $\mu_{i}$ obtaining: \begin{equation}\label{mui} \mu_{i}=\prod_{k=0}^{i-1}(1-\frac{1}{\mu_k})~\mu(\phi_i^*P_0,p_{i}). \end{equation} Observe that $\mu_0=3$ since $p_0$ is a triple point of a fiber of the elliptic fibration $\varphi_{|P_0|}$. Let $a_i:=\mu(\phi_i^*P_0,p_i)$, then $a_0=3$, $a_1=4$ and \[ a_i=a_{i-1}+a_{i-2}. \] To see this observe that $\phi_i^*P_0$ contains $E_{i-1}$ with multiplicity $a_{i-1}$ and the strict transform of $E_{i-2}$ with multiplicity $a_{i-2}$. If we denote by $b_i:=a_i/a_{i-1}$, then an easy calculation based on~\eqref{mui} gives \[ \mu_i=(\mu_{i-1}-1)b_i. \] Observe that the $a_i$'s satisfy a Fibonacci-type recursion and the $b_i$'s are rational approximations in the continued fraction expansion of the number $\frac{1}{2}(1+\sqrt{5})$. In particular we claim that $b_i>8/5$ for $i\geq 4$, which can be easily proved by induction using the fact that $b_4=18/11$ and $b_i=1+1/b_{i-1}$. We prove now that $\mu_i\geq 8/3$ for each $i$. A direct computation gives $\mu_0=3$, $\mu_1=8/3$, $\mu_2=35/12$, $\mu_3=253/84$ and $\mu_4=507/154$, which are all at least $8/3$. For $i\geq 5$ we have $\mu_{i-1}\geq 8/3=1+1/(8/5-1)>1+1/(b_i-1)$, so that $\mu_i=(\mu_{i-1}-1)b_i>\mu_{i-1}\geq 8/3$. Since $\mu_i>1$, then $\kappa(-K_{X_i})=1$ by Lemma~\ref{mu}. Thus the surface $X_{n-10}$ has the required properties. \end{proof} \begin{figure} \caption{The sequence of blow-ups in Proposition~\ref{exist}} \label{blow-up} \end{figure} We now want to relate the structure of the effective cone ${\rm Eff}(X)$ of a smooth rational surface $X$ with $\kappa(-K_X)=1$ to that of its relative minimal model $Y$.
Recall that the birational morphism $\pi: X\to Y$ induces an injective linear map $\pi^*:\operatorname{Pic}(Y)\otimes{{\mathbb R}}\to\operatorname{Pic}(X)\otimes{{\mathbb R}}$ which maps ${\rm Eff}(Y)$ into a linear section of ${\rm Eff}(X)$. Thus if ${\rm Eff}(X)$ is rational polyhedral, then the same is true for ${\rm Eff}(Y)$. We will show that the converse statement also holds. \begin{lemma}\label{finite} Let $Y$ be a smooth rational surface with $\kappa(-K_Y)=1$, $-K_Y$ nef and rational polyhedral ${\rm Eff}(Y)$. If $a,b$ are non-negative integers with $b\neq 0$, then there is a finite number of classes $[D]\in {\rm Nef}(Y)$ such that $D^2=a,\ -K_Y\cdot D=b$. \end{lemma} \begin{proof} Observe that $Y$ has a minimal elliptic fibration $\varphi=\varphi_{|-mK_Y|}$ for some $m>0$ by~\cite[Prop. 5.6.1]{cd}. Since the effective cone of $Y$ is rational polyhedral, then there are $8$ components of reducible fibers of $\varphi$ whose intersection matrix $M$ is negative definite (see Proposition~\ref{class}(ii) below). Let $f_1,\dots,f_8$ be the classes of such curves, $f$ be the class of a fiber of $\varphi$ and $s$ be the class of an $m$-section of $\varphi$ so that $f\cdot s=m$. Observe that the lattice $L:=\langle f,s,f_1,\dots,f_8\rangle$ has rank $10$, so that it has finite index $k$ in $\operatorname{Pic}(Y)$. Let \[ [D]=\alpha f+\sum_{i=1}^{8} \alpha_i f_i+\beta s\in \operatorname{Pic}(Y) \] be a nef class with $-K_Y\cdot D=b$ and $D^2=a$. Thus $b=-K_Y\cdot D=\frac{1}{m}f\cdot D=\beta$. Since $D$ is nef and $f-f_i$ is an effective class, then $b_i= D\cdot f_i\leq D\cdot f=mb$. Observe that, since $[kD]\in L$, then the coefficients of $[D]$ are rational with bounded denominators. Hence $b_i$ can take a finite number of non-negative rational values. For any such choice of the $b_i$'s, the coefficients $\alpha_i$ are uniquely determined since the intersection matrix $M$ of the $f_i$'s is nonsingular.
Finally, the condition $D^2=a$ determines $\alpha$, since $D^2=-b^2+(\sum_i\alpha_if_i)^2+2b \sum_i\alpha_i f_i\cdot s+2\alpha b m$. This proves that there is a finite number of classes $[D]$ as in the statement. \end{proof} \begin{theorem}\label{effk1} Let $X$ be a smooth rational surface with $\kappa(-K_X)=1$ and such that its relative minimal model $Y$ has rational polyhedral effective cone. Then ${\rm Eff}(X)$ is rational polyhedral. \end{theorem} \begin{proof} By Theorem \ref{equiv} it is enough to prove that $X$ contains finitely many $(-1)$-curves. Let $\pi:X\to Y$ be the blow-down map. We assume that $X$ is the blow-up of $Y$ at $r$ points, possibly infinitely near, and we call $E$ the exceptional divisor of $\pi$. Observe that we can write $E=\sum_{i=1}^r c_i E_i$, where $c_i$ are positive integers and $E_i$ are curves (not necessarily integral) such that $E_i^2=-1$ and $E_i\cdot E_j=0$ for distinct $i, j$. Let $F$ be a $(-1)$-curve of $X$ and let $-K_X\sim N+P$ be the Zariski decomposition. Observe that $P\sim\alpha\pi^*(-K_Y)$, where $\alpha$ is a rational number with $0<\alpha<1$ by Lemma~\ref{mu}. Since $F\cdot (-K_X)=1$, then either $F$ is a component of $N$, or $F\cdot P=0$, or $0< F\cdot P \leq 1$. In the first two cases $F$ belongs to a finite set of curves, so we may assume that we are in the third case. Then we have \[ F\cdot E=F\cdot(\pi^*(-K_Y)+K_X)=F\cdot \pi^*(-K_Y)-1\leq \frac{1}{\alpha}-1. \] Observe that we can also assume that $F\cdot E_i\geq 0$ for each $i$, since otherwise $F$ would be a component of $E_i$ and again there is only a finite set of such components. If $D=\pi(F)$, then we can write $F$ as \[ F=\pi^*(D)-\sum_{i=1}^ra_iE_i \] with $a_i=F\cdot E_i\leq F\cdot E\leq \frac{1}{\alpha}-1$ so that $D^2\leq -1+r(\frac{1}{\alpha}-1)^2$ since $F^2=-1$. Moreover \[ 0<-K_Y\cdot D=\frac{1}{\alpha} P\cdot (F+\sum_ia_iE_i)\leq \frac{1}{\alpha} (1+(\frac{1}{\alpha}-1)\sum_iP\cdot E_i).
\] Observe that either $D$ is a $(-1)$-curve of $Y$ or $D^2\geq 0$ and $D$ is a nef divisor since it is integral. In the first case there is a finite number of such $D$ by Theorem~\ref{equiv}. In the second case we conclude by Lemma \ref{finite} since both $D^2$ and $-K_Y\cdot D$ are bounded. This implies that $X$ contains finitely many $(-1)$-curves. \end{proof} \begin{corollary}\label{infty} Let $n\geq 10$ be an integer. Then there exists a smooth rational surface $X$ with $\kappa(-K_X)=1$, $\rho(X)=n$ and finitely generated Cox ring $R(X)$. \end{corollary} \begin{proof} Let $Y$ be a minimal elliptic surface with $4$ fibers of Kodaira type $\tilde A_2$. Such a surface exists and its effective cone is rational polyhedral by~\cite[Ex.~1.4.1]{n1}. Observe that $Y$ contains a fiber with a triple point, thus proceeding as in the proof of Proposition~\ref{exist} we construct a surface $X$ of Picard number $n$ with $\kappa(-K_X)=1$ whose relative minimal model is $Y$. Since ${\rm Eff}(Y)$ is rational polyhedral then ${\rm Eff}(X)$ is rational polyhedral by Theorem~\ref{effk1}. We now conclude by Theorem~\ref{equiv}. \end{proof} \section{Smooth projective surfaces with $-K_X$ nef} In this section we consider smooth projective surfaces $X$ such that $q(X)=0$, ${\rm Eff}(X)$ is rational polyhedral and $-K_X$ is nef. Observe that, since $-K_X$ is nef, then $K_X^2\geq 0$ and the integral curves with negative self-intersection are either $(-1)$- or $(-2)$-curves. If $X$ is minimal, i.e. it does not contain $(-1)$-curves, then either $\rho(X)\leq 2$ or $K_X\equiv 0$ by Proposition \ref{eff-rays}. In the first case, $X$ is either ${\mathbb P}^2$ or a Hirzebruch surface $\mathbb{F}_n$ with $n=0,2$. In the second case, $X$ is either a K3 surface or an Enriques surface. In~\cite[Thm.~2.7, Thm.~2.10]{ahl} it is proved that the Cox ring of these surfaces is finitely generated if and only if ${\rm Eff}(X)$ is rational polyhedral.
Moreover, the Picard lattices of those $X$ which admit a rational polyhedral effective cone are classified in a series of papers by Nikulin and Kond\=o (see~\cite[Ex.~1.4.1]{n1} for precise references). If $X$ is non-minimal, then it is rational and it is either a Hirzebruch surface $\mathbb{F}_1$ or one of the surfaces described in the following (for the proof see~\cite[Ex.~1.4.1]{n1}). \begin{proposition}\label{class} Let $X$ be a smooth rational surface with $-K_X$ nef and $\rho(X)\geq 3$. Then ${\rm Eff}(X)$ is rational polyhedral if and only if one of the following holds: \begin{enumerate} \item $K_X^2>0$ and $X$ is the minimal resolution of singularities of a Del Pezzo surface with Du Val singularities; \item $K_X^2=0$ and any connected component of the set of $(-2)$-curves of $X$ is an extended Dynkin diagram of rank $r_i$ with $\sum_i r_i=8$. \end{enumerate} \end{proposition} The surfaces in Proposition~\ref{class} (ii) have been further classified and divided into two classes. If $\kappa(-K_X)=1$, then $\varphi_{|-mK_X|}$ is an elliptic fibration for some $m>0$. Then (ii) is equivalent to asking that the Jacobian fibration of $\varphi_{|-mK_X|}$ has finite Mordell-Weil group. In the case $m=1$ the reducible fibers and the Mordell-Weil groups of such surfaces have been classified in \cite{d,cd}. If $\kappa(-K_X)=0$, then $|-mK_X|$ is zero-dimensional for all positive $m$. There are exactly three families of such surfaces, which have been classified in~\cite[Ex.~1.4.1]{n1}. \begin{remark} If $X$ is a smooth rational surface as in Proposition~\ref{class}, then ${\rm Eff}(X)$ is generated by $(-1)$- and $(-2)$-curves by Proposition \ref{eff-rays}. In particular, if $\varphi=\varphi_{|-K_X|}$ is an elliptic fibration, then these curves are the sections and the components of reducible fibers of $\varphi$, respectively. \end{remark} \begin{theorem}\label{nfg} Let $X$ be a smooth projective surface with $q(X)=0$ and $-K_X$ nef.
Then $R(X)$ is finitely generated if and only if one of the following holds: \begin{enumerate} \item $X$ is the minimal resolution of singularities of a Del Pezzo surface with Du Val singularities; \item $\varphi_{|-mK_X|}$ is an elliptic fibration for some $m>0$ and the Mordell-Weil group of the Jacobian fibration of $\varphi_{|-mK_X|}$ is finite; \item $X$ is either a K3-surface or an Enriques surface with finite automorphism group $\operatorname{Aut}(X)$. \end{enumerate} \end{theorem} \begin{proof} If $K_X\equiv 0$, then $X$ is either a K3 or an Enriques surface and we conclude by~\cite[Thm.~2.7, Thm.~2.10]{ahl}. If $K_X\not\equiv 0$ and $\rho(X)\leq 2$, then $X$ is either ${\mathbb P}^2$ or one of the Hirzebruch surfaces $\mathbb{F}_0$, $\mathbb{F}_2$. In all these cases $X$ is in (i). Assume now that $K_X\not\equiv 0$ and $\rho(X)\geq 3$. If $K_X^2>0$, then $\kappa(-K_X)=2$, so that $R(X)$ is finitely generated by~\cite[Thm.~2.9]{tvv}. If $K_X^2=0$ and $\kappa(-K_X)=1$ then, by Proposition~\ref{class} and Theorem~\ref{equiv}, $R(X)$ is finitely generated if and only if we are in (ii). If $K_X^2=0$ and $\kappa(-K_X)=0$, then $-K_X$ is nef but not semiample since $h^0(-mK_X)= 1$ for any $m>0$. Thus $R(X)$ is not finitely generated by~\cite[Prop.~2.9]{hk} or \cite[Corollary 2.6]{ahl}. \end{proof} \begin{corollary}\label{corell} Let $X$ be a smooth rational surface such that $\varphi_{|-K_X|}: X\to{\mathbb P}^1$ is an elliptic fibration. Then $R(X)$ is finitely generated if and only if the Mordell-Weil group of $\varphi_{|-K_X|}$ is finite. \end{corollary} \begin{remark}\label{non-fg} Either \cite[Prop.~2.9]{hk}, or Theorem~\ref{nfg} together with \cite[Ex.~1.4.1]{n1}, provides a negative answer to~\cite[Question I.3.9]{har}, since they show that there are rational surfaces such that the effective cone is rational polyhedral but the Cox ring is not finitely generated.
\end{remark} \section{An example with $\kappa(-K_X)=-\infty$} In this section we construct a surface $X$ with $\rho(X)=2$ such that ${\rm Eff}(X)$ is rational polyhedral but the Cox ring $R(X)$ is not finitely generated. The surface $X$ will be the blow-up of a smooth, very general quartic $S\subset{\mathbb P}^3$ at a very general point $p$. In particular $\kappa(-K_X)=-\infty$. If $\pi: X\to S$ is the blow-up at $p$ with exceptional divisor $E$ and $H=\pi^*{\mathcal O}_{S}(1)$, we will show that: \[ {\rm Eff}(X)=\langle [E],[H-2E]\rangle,\quad {\rm Nef}(X)=\langle [H],~ [H-2E]\rangle,\quad [H-2E]\not\in{\rm SAmple}(X), \] since $h^0(m(H-2E))=1$ for any $m\geq 1$. This implies, by~\cite[Proposition 2.9]{hk} or~\cite[Corollary 2.6]{ahl}, that $R(X)$ is not finitely generated. \begin{remark} In the given example it is possible to prove directly, without using~\cite[Proposition 2.9]{hk}, that $R(X)$ is not finitely generated. To any $x\in H^0(D)$ we associate its {\em degree}: \[ [x]:=[D]\in\operatorname{Pic}(X). \] Assume that $R(X)$ is finitely generated, so that there exists $[D]\in{\rm Nef}(X)$ such that the degree $[x]$ of any generator either lives in the cone $\langle [D],[E]\rangle$ or is equal to $[H-2E]$. A class $[D']$ as in the picture is ample, since it lives in the interior of the nef cone, so that there is a section $x'\in H^0(nD')$ which is not divisible by $z\in H^0(H-2E)$, for $n$ big enough. Thus $[x']$ is a non-negative linear combination of the degrees of the generators of $R(X)$ distinct from $z$. This means that $[nD']=[x']$ lives in the cone $\langle [D],[E]\rangle$, which is a contradiction. \end{remark} \begin{center} \begin{figure} \caption{Each blue ray provides new generators of $R(X)$.} \end{figure} \end{center} If $S\subset {\mathbb P}^3$ is a smooth quartic surface and $p\in S$ we denote by $\pi:C\to S\cap T_pS$ the normalization at $p$ of the hyperplane section $S\cap T_pS$. 
Observe that, for general $p$, $S\cap T_pS$ has a node at $p$ and $C$ is a smooth genus two curve. \begin{lemma}\label{tor} There exists a smooth quartic surface $S\subset{\mathbb P}^3$ with $\rho(S)=1$ and a point $p\in S$ such that $C$ is smooth of genus two and $K_{C}-q_1-q_2$ is not a torsion point of $J(C)$, where $\{q_1,q_2\}=\pi^{-1}(p)$. \end{lemma} \begin{proof} Let $S_0\subset{\mathbb P}^3$ be a smooth quartic surface with $\rho(S_0)=1$, $p\in S_0$ and $B_0=S_0\cap T_pS_0$. We denote by $B_1\subset T_pS_0$ a plane quartic with just one node at $p$ such that, if $\pi_1: C_1\to B_1$ is its normalization, then $K_{C_1}-q_{11}-q_{12}$ is not a torsion point of $J(C_1)$, where $\{q_{11},q_{12}\}=\pi_1^{-1}(p)$. Observe that such a plane quartic $B_1$ exists because, if $C$ is a genus two curve, then the morphism $\varphi$ associated to the linear system $|K_{C}+q_1+q_2|$ is birational onto a plane quartic with just one node at $\varphi(q_1)=\varphi(q_2)$. Let $f:\mathcal B\to {\mathbb P}^1$ with $\mathcal B\subset T_pS_0\times {\mathbb P}^1$ be the pencil of plane quartics generated by $B_0=f^{-1}(0)$ and $B_1=f^{-1}(1)$. We denote by $\pi: \mathcal C \to \mathcal B$ the normalization of $\mathcal B$ along the section $\{(p,t): t\in {\mathbb P}^1\}$ and by $J(\mathcal C)\to {\mathbb P}^1$ its relative jacobian. We will denote by $B_t$ and $C_t$ the fiber of $f$ and of $f\circ \pi$ respectively over $t\in {\mathbb P}^1$. Let $L_t:=K_{C_t}-q_{t1}-q_{t2}$, where $\{q_{t1},q_{t2}\}=\pi^{-1}(p,t)$. Since $L_1$ is not a torsion point of $J(C_1)$ and the set of torsion sections in $J(\mathcal C)$ is countable, then $L_t$ is not a torsion point of $J(C_t)$ outside of a countable set of $t\in{\mathbb P}^1$. 
Let $\mathcal{S}:=\{(S,t)\in |{\mathcal O}_{{\mathbb P}^3}(4)|\times{\mathbb P}^1 : B_t\subset S\}$ be the family of quartic surfaces which contain one $B_t$ and let $(S_t,t)$ be a curve in $\mathcal{S}$ which maps birationally onto ${\mathbb P}^1$ and which contains $(S_0,0)$. Observe that $B_t=S_t\cap T_pS_0$ for each $t\in{\mathbb P}^1$. Since $S_0$ is smooth with $\rho(S_0)=1$, then the same is true for $S_t$ outside of a countable set of values (see for example~\cite[Thm.~1.1]{og}). Thus there exists $t_0\in{\mathbb P}^1$ such that $S_{t_0}$ is smooth with $\rho(S_{t_0})=1$ and $L_{t_0}$ is not a torsion point of $J(C_{t_0})$. \end{proof} \begin{proposition} There exists a smooth quartic surface $S\subset{\mathbb P}^3$ with $\rho(S)=1$ and a point $p\in S$ such that, if $\pi: X\to S$ is the blow-up at $p$, then ${\rm Eff}(X)$ is rational polyhedral but $R(X)$ is not finitely generated. \end{proposition} \begin{proof} Let $S$ and $p$ be as in Lemma~\ref{tor}. We begin by proving that ${\rm Eff}(X)$ is rational polyhedral. Let $E$ be the exceptional divisor of $\pi$ and $H=\pi^*{\mathcal O}_S(1)$. The class of the strict transform $C$ of $B:=S\cap T_pS$ is $[H-2E]$. Since $C$ is integral and $(H-2E)^2=0$, then $[H-2E]$ is nef. Let $[D]:= [aC-bE]$ where $a,b$ are positive integers. If $D$ is effective, then $D\cdot C<0$ so that $h^0(D) = h^0(D-C)$ and $(a-1)C-bE$ is effective. Applying this reasoning $a$ times we deduce $h^0(-bE)>0$, which is a contradiction. Hence $[D]$ is not effective. Thus ${\rm Eff}(X)=\langle [H-2E],[E]\rangle$ is rational polyhedral. We will now prove that ${\rm SAmple}(X)\subsetneq{\rm Nef}(X)$, so that $R(X)$ is not finitely generated by~\cite[Cor.~2.6]{ahl}. Since $K_X\sim E$, then by the adjunction formula we have $C_{|C}\sim K_C-E_{|C}\sim K_C-q_1-q_2$, where $\pi(q_1)=\pi(q_2)=p$. By Lemma~\ref{tor} this implies that $C_{|C}$ is not a torsion point of $J(C)$ or, equivalently, that $h^0(nC_{|C})=0$ for any $n>0$.
From the exact sequence \[ 0\to H^0((n-1)C)\to H^0(nC)\to H^0(nC_{|C})=0, \] and $h^0({\mathcal O}_X)=1$, we get that $h^0(nC)=1$ for any positive $n$. This implies that $H-2E$ is not semiample. \end{proof} \end{document}
\begin{document} \begin{center} {\Large \bf Comparison theorems and some of their applications} \vskip 5 mm {\Large \bf V.~F.~Babenko, O.~V.~Kovalenko} \vskip 5 mm {Oles Gonchar Dnipropetrovsk National University \\{\it E-mail: babenko.vladislav@gmail.com } {\it E-mail: olegkovalenko90@gmail.com} } \end{center} \begin{abstract} {Analogues of the Kolmogorov comparison theorem and some of their applications are established.} \end{abstract} \section{Notations. Statement of the problem. Known results.} Let $L_\infty({\mathbb R})$ denote the space of all measurable and essentially bounded functions $x\colon {\mathbb R}\to {\mathbb R}$ with the norm $$\|x\|=\|x\|_{L_\infty({\mathbb R})}={\rm ess\,sup}\left\{|x(t)|:t\in{\mathbb R}\right\}.$$ For natural $r$ let $L_\infty^r({\mathbb R})$ denote the space of functions $x\colon {\mathbb R}\to {\mathbb R}$ such that the derivative $x^{(r-1)}, \; x^{(0)}=x,$ is locally absolutely continuous and $x^{(r)}\in L_\infty({\mathbb R})$. Set $L_{\infty,\infty}^r({\mathbb R}):=L_\infty^r({\mathbb R})\bigcap L_\infty({\mathbb R})$. For $r\in {\mathbb N}$ we will denote by $\varphi_r(t)$ the Euler perfect spline of order $r$ (i.~e. the $r$-th periodic integral of the function ${\rm sgn}\sin t$ with zero mean value on the period). For $\lambda> 0$ set $\varphi_{\lambda, r}(t):=\lambda^{-r}\varphi_{r}(\lambda t).$ To prove his outstanding inequality (see \cite{Kol1}---\cite{Kol3}) Kolmogorov proved a statement known as the comparison theorem. {\bf Theorem A. }{\it Let $r\in{\mathbb N}$ and a function $x\in L^r_{\infty,\infty}({\mathbb R})$ be given. Let numbers $a\in{\mathbb R}$ and $\lambda>0$ be such that $$ \|x^{(k)}\|\leq \|a\varphi_{\lambda,r}^{(k)}\|,\,\, k\in\{0,r\}.
$$ If points $\xi,\eta\in{\mathbb R}$ are such that $x(\xi)=a\varphi_{\lambda,r}(\eta)$, then $$|x'(\xi)|\leq |a|\cdot|\varphi'_{\lambda,r}(\eta)|.$$ } Both the Kolmogorov comparison theorem and its proof played an important role in the exact solution of many extremal problems in approximation theory (see~\cite{korn1, korn2, korn3, BKKP}). The goal of this paper is to prove several analogues of the Kolmogorov comparison theorem. In the next section we will introduce a family of splines which will play the same role as the Euler perfect splines play in Theorem A, and study some of their properties. In \S~3 we will prove three analogues of the Kolmogorov comparison theorem for the cases when the norms of a function and its derivatives of orders $r-1$ and $r$ are given; the norms of a function and its derivatives of orders $r-2$ and $r$ are given; the norms of a function and its derivatives of orders $r-2$, $r-1$ and $r$ are given. In \S~4 we will give several applications of the obtained comparison theorems. \section{Comparison functions and their properties.} Let $a_1, a_2\geq 0$. Set $T:=a_1+a_2+2$. Define a function $\psi_1(a_1,a_2;t)$ in the following way. On the segment $[0,T]$ set $$ \psi_1(a_1,a_2;t):=\left\{ \begin{array}{rcl} 0, & t\in[0,a_1], \\ t-a_1, & t\in[a_1,a_1+1], \\ 1, & t\in [a_1+1,a_1+a_2+1], \\ 2+a_1+a_2-t, & t\in [a_1+a_2+1, T].\\ \end{array} \right. $$ Extend the function $\psi_1(a_1,a_2;t)$ to the segment $[T,2\cdot T]$ by the equality \begin{equation}\label{2.0} \psi_1(a_1,a_2;t) = - \psi_1(a_1,a_2;t-T),\,t\in[T,2\cdot T] \end{equation} and then periodically with period $2\cdot T$ to the whole real line. Note that $\psi_1(a_1,a_2;t)\in L_{\infty,\infty}^1({\mathbb R})$. For $r\in {\mathbb N}$ denote by $\psi_r(a_1,a_2;t)$ the $(r-1)$-th $2\cdot T$-periodic integral of the function $\psi_1(a_1,a_2;t)$ with zero mean on the period (so that, in particular, $\psi'_r(a_1,a_2;t)=\psi_{r-1}(a_1,a_2;t)$).
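The location of the zeroes of $\psi_2(a_1,a_2;t)$ recorded below can be obtained from a symmetry argument, which we sketch here (it uses only the definition of $\psi_1$ and the zero mean value condition): the function $\psi_1(a_1,a_2;\cdot)$ is even with respect to the midpoint
\[
t^*:=a_1+\frac{a_2}{2}+1
\]
of the interval $[a_1+1,a_1+a_2+1]$, i.~e. $\psi_1(a_1,a_2;t^*+u)=\psi_1(a_1,a_2;t^*-u)$ for all $u$. Hence its periodic integral with zero mean value on the period is odd with respect to $t^*$, so that $\psi_2(a_1,a_2;t^*)=0$. The second zero $t^*+T=2a_1+\frac{3a_2}{2}+3$ on the period then follows from the equality $\psi_2(a_1,a_2;t)=-\psi_2(a_1,a_2;t-T)$.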
Rodov~\cite{Rodov1946} was the first to consider the functions $\psi_r(a_1,a_2;t)$. We will list several properties of the function $\psi_r(a_1,a_2;t)$, $r\in{\mathbb N}$, which can easily be proved either directly from the definition, or similarly to the corresponding properties of the Euler splines $\varphi_r$ (see, for example,~\cite[Chapter 5]{korn1},~\cite[Chapter 3] {korn2}). Note that the function $\psi_r$ is $2\cdot T$-periodic and for all $r\geq 1$ $$\psi_r(a_1,a_2;t) = - \psi_r(a_1,a_2;t-T),\,t\in[T,2\cdot T].$$ Moreover, the function $\psi_2(a_1,a_2;t)$ has exactly two zeroes on the period --- the points $a_1+\frac{a_2}{2}+1$ and $2a_1+\frac{3a_2}{2}+3$. Hence the functions $\psi_r(a_1,a_2;t)$ for $r\geq 2$ also have exactly two zeroes on the period: for any $k\in {\mathbb N}$ \begin{equation}\label{1} \psi_{2k+1}(a_1,a_2;0)=\psi_{2k+1}(a_1,a_2;a_1+a_2+2)=0, \end{equation} \begin{equation}\label{2} \psi_{2k}\left(a_1,a_2;a_1+\frac{a_2}{2}+1\right)=\psi_{2k}\left(a_1,a_2;2a_1+\frac{3a_2}{2}+3\right)=0. \end{equation} Note that for $a_1 = 0$ the equality~\eqref{1} is true for $k=0$ too. Hence, in turn, we have that for $r\geq 3$ (in the case $a_1=0$ for $r\geq 2$) the function $\psi_r(a_1,a_2;t)$ is strictly monotone between the zeroes of its derivative and the graph of the function $\psi_r(a_1,a_2;t)$ is strictly convex on the intervals of constant sign. Moreover, it is easy to see that the graph of the function $\psi_r(a_1,a_2;t)$ is symmetric with respect to its zeroes and to the lines $t=t_0$, where $t_0$ is a zero of $\psi'_r(a_1,a_2;t)$. Finally, note that $\psi_r(0,0;t)=\varphi_{\pi/2, r}(t)$. For $r\in{\mathbb N}$, $a_1,a_2\geq 0,\,\lambda>0$ and $b\in{\mathbb R}$ set $$\Psi_{a_1,a_2,b,\lambda}(t)=\Psi_{r;a_1,a_2,b,\lambda}(t):=$$ $$=b\left(\frac{\lambda}{2a_1+2a_2+4}\right)^{r}\psi_r\left(a_1,a_2;\frac{2a_1+2a_2+4}{\lambda} t\right).$$ Note that the function $\Psi_{a_1,a_2,b,\lambda}(t)$ is $\lambda$-periodic.
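Before stating the next theorem, we record how the norms of the function $\Psi_{r;a_1,a_2,b,\lambda}$ and of its derivatives depend on the parameters (a sketch of a direct computation from the definition, using the equality $\psi'_k(a_1,a_2;t)=\psi_{k-1}(a_1,a_2;t)$): for $0\leq s\leq r-1$
\[
\Psi^{(s)}_{r;a_1,a_2,b,\lambda}(t)=b\left(\frac{\lambda}{2a_1+2a_2+4}\right)^{r-s}\psi_{r-s}\left(a_1,a_2;\frac{2a_1+2a_2+4}{\lambda}\, t\right),
\]
and hence
\[
\left\|\Psi^{(s)}_{r;a_1,a_2,b,\lambda}\right\|=|b|\left(\frac{\lambda}{2a_1+2a_2+4}\right)^{r-s}\left\|\psi_{r-s}(a_1,a_2;\cdot)\right\|.
\]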
\begin{theorem}\label{th::0} Let $r\in{\mathbb N}$ and $x\in L^r_{\infty,\infty}({\mathbb R})$ be given. Then \begin{enumerate} \item[$a)$] there exist $a_2\geq 0,\,\lambda>0$ and $b\in{\mathbb R}$ such that $$ \left\|\Psi_{0,a_2,b,\lambda}^{(s)}\right\|=\left\|x^{(s)}\right\|,\,\,s\in\left\{0,r-1,r\right\}.$$ \item[$b)$] there exist $a_1\geq 0,\,\lambda>0$ and $b\in{\mathbb R}$ such that $$ \left\|\Psi_{a_1,0,b,\lambda}^{(s)}\right\|=\left\|x^{(s)}\right\|,\,\,s\in\left\{0,r-2,r\right\}.$$ \item[$c)$] there exist $a_1,a_2\geq 0,\,\lambda>0$ and $b\in{\mathbb R}$ such that $$ \left\|\Psi_{a_1,a_2,b,\lambda}^{(s)}\right\|=\left\|x^{(s)}\right\|,\,\,s\in\left\{0,r-1,r-2,r\right\}.$$ \end{enumerate} \end{theorem} Theorem~\ref{th::0} in fact follows from the Kolmogorov comparison theorem. We will prove statement $a)$; the remaining statements can be proved analogously. It is clear that $\Psi_{0,0,b,\lambda}(t)=b\left(\frac\lambda 4\right)^r\varphi_{\pi/2, r}(\frac 4\lambda t)$. Hence the parameters $b$ and $\lambda$ can be chosen in such a way that $ \left\|\Psi_{0,0,b,\lambda}^{(s)}\right\|=\left\|x^{(s)}\right\|$, $s=r-1,r$. Then Theorem~A implies that $ \left\|\Psi_{0,0,b,\lambda}\right\|\leq\left\|x\right\|$. As the parameter $a_2$ increases, $ \left\|\Psi_{0,a_2,b,\lambda}\right\|$ continuously increases from $ \left\|\Psi_{0,0,b,\lambda}\right\|$ to $\infty$, while $ \left\|\Psi_{0,a_2,b,\lambda}^{(s)}\right\|,\; s=r-1, r,$ remain unchanged. Hence we can choose the parameter $a_2$ such that $ \left\|\Psi_{0,a_2,b,\lambda}^{(s)}\right\|=\left\|x^{(s)}\right\|$, $s=0,r-1,r$. \section{Comparison theorems.} The next theorem contains three analogues of the Kolmogorov comparison theorem. \begin{theorem}\label{th::1} Let $r\in{\mathbb N}$ and $x\in L_{\infty,\infty}^r({\mathbb R})$ be given. Let one of the following conditions hold.
\begin{enumerate} \item[$a)$] The numbers $a_1 = 0$, $a_2\geq 0$, $\lambda>0$ and $b\neq 0$ are such that \begin{equation}\label{4.1} \left\|x^{(k)}\right\|\le \left\|\Psi^{(k)}_{a_1,a_2,b,\lambda}\right\|,\,\,k\in\{0,r-1,r\}. \end{equation} \item[$b)$] The numbers $a_1 \geq 0$, $a_2= 0$, $\lambda>0$ and $b\neq 0$ are such that \begin{equation}\label{4.2} \left\|x^{(k)}\right\|\le \left\|\Psi^{(k)}_{a_1,a_2,b,\lambda}\right\|,\,\,k\in\{0,r-2,r\}. \end{equation} \item[$c)$] The numbers $a_1 \geq 0$, $a_2\geq 0$, $\lambda>0$ and $b\neq 0$ are such that \begin{equation}\label{4.3} \left\|x^{(k)}\right\|\le \left\|\Psi^{(k)}_{a_1,a_2,b,\lambda}\right\|,\,\,k\in\{0,r-1,r-2,r\}. \end{equation} \end{enumerate} If points $\tau$ and $\xi$ are such that $x(\tau) = \Psi_{a_1,a_2,b,\lambda}(\xi)$, then \begin{equation}\label{6'}\left|x'(\tau)\right|\le\left|\Psi'_{a_1,a_2,b,\lambda}(\xi)\right|. \end{equation} \end{theorem} {\bf Proof.} For brevity we write $\Psi (t)$ instead of $\Psi_{a_1,a_2,b,\lambda}(t)$ in the proof of this theorem. Considering, if necessary, the function $-x(t)$ instead of $x(t)$ and the function $-\Psi (t)$ instead of $\Psi (t)$, we may assume that $x'(\tau)>0$ and \begin{equation}\label{7} \Psi'(\tau)>0. \end{equation} Moreover, considering an appropriate shift $\Psi (\cdot +\alpha)$ of the function $\Psi$, we may assume that $\tau = \xi$, i.e. \begin{equation}\label{6} x(\tau)=\Psi(\tau). \end{equation} Assume that~\eqref{6} holds but, instead of inequality~\eqref{6'} (with $\xi =\tau$), the opposite inequality $$\left|x'(\tau)\right|>\left|\Psi'(\tau)\right| $$ holds. Denote by $(\tau_1,\tau_2)$ the smallest interval containing $\tau$ on which the function $\Psi$ is monotone and such that $\Psi'(\tau_1)=\Psi'(\tau_2)=0$. By this assumption there exists a number $\delta>0$ such that $x'(t)>\Psi'(t)$ for all $t\in (\tau-\delta,\tau+\delta)$, and hence, by~\eqref{6}, $x(\tau+\delta)>\Psi(\tau+\delta)$ and $x(\tau-\delta)<\Psi(\tau-\delta)$.
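The last two inequalities, which the text leaves implicit, are verified by integrating: by \eqref{6} and the inequality $x'>\Psi'$ on $(\tau-\delta,\tau+\delta)$ we have $$x(\tau+\delta)-\Psi(\tau+\delta)=\int\limits_{\tau}^{\tau+\delta}\left(x'(t)-\Psi'(t)\right)dt>0,$$ and analogously $$x(\tau-\delta)-\Psi(\tau-\delta)=-\int\limits_{\tau-\delta}^{\tau}\left(x'(t)-\Psi'(t)\right)dt<0.$$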
Choose $\varepsilon>0$ so small that for the function $x_\varepsilon(t):=(1-\varepsilon)x(t)$ the following inequalities hold: $x_\varepsilon(\tau+\delta)>\Psi(\tau+\delta)$ and $x_\varepsilon(\tau-\delta)<\Psi(\tau-\delta)$. By virtue of the conditions~\eqref{4.1}--\eqref{4.3} and condition \eqref{7} we have $$x_\varepsilon(\tau_1)>\Psi(\tau_1),\;\; x_\varepsilon(\tau_2)<\Psi(\tau_2).$$ Hence on the interval $(\tau_1,\tau_2)$ the difference $\Delta_\varepsilon(t):=x_\varepsilon(t)-\Psi(t)$ has at least $3$ sign changes. It is easy to see that there exists a sequence of functions $\mu_N\in C^\infty({\mathbb R})$, $N\in {\mathbb N}$, with the following properties: \begin{enumerate} \item $\mu_N(t)=1$ on the interval $[\tau_1,\tau_2]$; $\|\mu_N\|=1$; \item $\mu_N(t)=0$ for all $t$ outside the interval $[\tau_1-N\cdot\frac{\lambda}2,\tau_1+N\cdot\frac{\lambda}2]$ (where, as before, $T=a_1+a_2+2$); \item for all $k=1,2,\dots,r$ $$\max\limits_{j=\overline{1,k}}\left\|\mu_N^{(j)}\right\|< \varepsilon \|x_\varepsilon^{(k)}\|\left(\sum\limits_{i=1}^k C_k^i\left\|x_\varepsilon^{(k-i)}\right\|\right)^{-1},$$ provided $N$ is large enough. \end{enumerate} Below we assume that $N$ is chosen large enough that property 3 holds. Set $$x_N(t):=x_\varepsilon(t)\cdot\mu_N(t)$$ and $$\Delta_N(t):=\Psi(t)-x_N(t).$$ Then $$x_N(t)=x_\varepsilon(t)\quad\text{if } t\in[\tau_1,\tau_2],$$ \begin{equation}\label{7'} \Delta_N(t)=\Psi(t)\quad\text{if } |t-\tau_1|\geq N\cdot\frac\lambda 2, \end{equation} and $$ \|x_N\|\leq\|x_\varepsilon\|=(1-\varepsilon)\|x\|\leq(1-\varepsilon)\|\Psi\|.
$$ Moreover, for $k=1,\dots,r$ $$\left|x_N^{(k)}(t)\right|= \left|\left[x_\varepsilon(t)\mu_N(t)\right]^{(k)}\right|= \left|\sum\limits_{i=0}^k C_k^i x_\varepsilon^{(k-i)}(t)\mu_N^{(i)}(t)\right|\leq \left\|x_\varepsilon^{(k)}\right\|+\sum\limits_{i=1}^k C_k^i\left\|x_\varepsilon^{(k-i)}\right\|\left\|\mu_N^{(i)}\right\|.$$ Hence, by property 3 of the function $\mu_N$ and the choice of the number $N$, we get $$ \left\|x_N^{(k)}\right\|< \left\|x_\varepsilon^{(k)}\right\|+\varepsilon\left\|x^{(k)}_\varepsilon\right\|=(1-\varepsilon^2)\left\|x^{(k)}\right\|\leq\left\|x^{(k)}\right\|. $$ For $t\in [\tau_1,\tau_2]$ we have $\Delta_N(t)=\Psi(t)-x_\varepsilon(t)$, and hence the function $\Delta_N(t)$ has at least three sign changes on the interval $[\tau_1,\tau_2]$. On each of the remaining monotonicity intervals of the function $\Psi$ the function $\Delta_N$ has at least one sign change. Hence on the interval $\left[\tau_1-N\cdot\frac{\lambda}2,\tau_1+N\cdot\frac{\lambda}2\right]$ the function $\Delta_N(t)$ has at least $2N+2$ sign changes. Moreover, by~\eqref{1}, \eqref{2} and \eqref{7'}, for all $i=1,2,\dots,\left[\frac{r-1}{2}\right]$ the following equalities hold: \begin{equation}\label{10} \Delta_N^{(2i-1)}\left(\tau_1-N\cdot\frac\lambda 2\right)=\Delta_N^{(2i-1)} \left(\tau_1+N\cdot\frac\lambda 2\right)=0. \end{equation} All of the arguments above remain valid under any of the conditions $a)$--$c)$. Suppose now that condition $a)$ of the theorem holds.
Applying Rolle's theorem and taking~\eqref{10} into account, we conclude that the function $\Delta_N^{(r-1)}(t)$ has at least $2N+2$ zeroes on the interval $$\left[\tau_1-N\cdot\frac{\lambda} 2,\tau_1+N\cdot\frac{\lambda} 2\right].$$ Hence on some monotonicity interval $$\left[\alpha,\alpha+\frac \lambda 2\right]\subset \left[\tau_1-N\cdot\frac{\lambda}2, \tau_1+N\cdot\frac{\lambda}2\right]$$ of the function $\Psi^{(r-1)}(t)=\Psi^{(r-1)}_{0,a_2,b,\lambda}(t)$ the function $\Delta_N^{(r-1)}(t)$ changes sign at least three times. But then the difference $$ \Psi^{(r-1)}_{0,0,b,\lambda}(t)-x^{(r-1)}_N(t) $$ also changes sign at least three times on some monotonicity interval of the function $\Psi^{(r-1)}_{0,0,b,\lambda}(t)$. However, this contradicts the Kolmogorov comparison theorem (see Theorem~A and, for example,~\cite[Statement 5.5.3]{korn2}), because the Euler spline $\Psi^{(r-1)}_{0,0,b,\lambda}(t)$ is a comparison function for the function $x^{(r-1)}_N(t)$. If condition $b)$ of the theorem holds, then similar arguments lead to a contradiction with the Kolmogorov comparison theorem. If condition $c)$ holds, then similar arguments lead to a contradiction with the already proved case $a)$. The theorem is proved. \section{Some applications.} From Theorem~\ref{th::1} we immediately get the following. \begin{lemma}\label{sl::1} Let $r\in{\mathbb N}$, $x\in L_{\infty,\infty}^r({\mathbb R})$, and let one of the conditions $a)$--$c)$ of Theorem~\ref{th::1} hold. Then on each monotonicity interval of the function $\Psi_{a_1,a_2,b,\lambda}(t)$ the difference $\Psi_{a_1,a_2,b,\lambda}(t)-x(t)$ has at most one sign change. \end{lemma} \noindent For a $1$-periodic non-negative function $x(\cdot)$, integrable on a period, denote by $r(x,\cdot)$ the decreasing rearrangement of the function $x$ (see, for example,~\cite[Chapter~6]{korn1}).
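For completeness we recall one standard way to define the decreasing rearrangement (see~\cite[Chapter~6]{korn1}; the normalization below is one common choice): for such a function $x$ and $t>0$ one may set $$r(x,t):=\inf\left\{y\geq 0:\ \mu\left\{u\in[0,1):\ x(u)>y\right\}\leq t\right\},$$ where $\mu$ denotes Lebesgue measure; the function $r(x,\cdot)$ is non-increasing and equimeasurable with $x$ on the period.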
As a corollary of Theorem~\ref{th::1} and the results of Chapter~3 of the monograph~\cite{korn2}, we get the following theorem. \begin{theorem}\label{th::2} Let $r\in{\mathbb N}$ and a $1$-periodic function $x\in L^{r}_{\infty,\infty}({\mathbb R})$ be given, and let one of the conditions $a)$--$c)$ of Theorem~\ref{th::1} hold. Then for all $t>0$ $$\int\limits_0^tr(|x'|,u)du\leq \lambda^{r-1}\int\limits_0^tr(|\Psi'_{a_1,a_2,b,1}|,u)du.$$ \end{theorem} For $a,b\in{\mathbb R}$, $a<b$, $p\in (0,\infty)$ and a continuous function $x\colon {\mathbb R}\to{\mathbb R}$ set $\|x\|_{L_{p(a,b)}}:=\left(\int\limits_a^b\left|x(t)\right|^pdt\right)^{\frac{1}{p}}$. From Theorem~\ref{th::2} and general theorems on the comparison of rearrangements (see, for example,~\cite[Statement~1.3.10]{korn3}) we get the following analogue of the Ligun inequality~\cite{Ligun} (see also~\cite{BKKP}, Chapter~6). \begin{theorem} Let $r\in{\mathbb N}$ and a $1$-periodic function $x\in L^{r}_{\infty,\infty}({\mathbb R})$ be given, and let one of the conditions $a)$--$c)$ of Theorem~\ref{th::1} hold. Then for all $1\leq p < \infty$ and natural $k<r-2$ (if condition $a)$ holds, then for all natural $k<r-1$) $$\|x^{(k)}\|_{L_{p(0,1)}}\leq \lambda^{r-k}\|\Psi_{a_1,a_2,b,1}^{(k)}\|_{L_{p(0,1)}}.$$ \end{theorem} The following lemma is an analogue of the Bohr--Favard inequality (see, for example,~\cite{korn4}, Chapter~6). \begin{lemma}\label{th::3} Let $r\in{\mathbb N}$ and a $1$-periodic function $x\in L^{r}_{\infty,\infty}({\mathbb R})$ be given, and let for $\lambda = 1$ one of the following conditions hold. \begin{enumerate} \item[$a)$] The numbers $a_1 = 0$, $a_2\geq 0$ and $b\neq 0$ are such that \begin{equation*} \left\|x^{(k)}\right\|\le \left\|\Psi^{(k)}_{a_1,a_2,b,\lambda}\right\|,\,\,k\in\{r-1,r\}. \end{equation*} \item[$b)$] The numbers $a_1 \geq 0$, $a_2= 0$ and $b\neq 0$ are such that \begin{equation*} \left\|x^{(k)}\right\|\le \left\|\Psi^{(k)}_{a_1,a_2,b,\lambda}\right\|,\,\,k\in\{r-2,r\}.
\end{equation*} \item[$c)$] The numbers $a_1 \geq 0$, $a_2\geq 0$ and $b\neq 0$ are such that \begin{equation*} \left\|x^{(k)}\right\|\le \left\|\Psi^{(k)}_{a_1,a_2,b,\lambda}\right\|,\,\,k\in\{r-1,r-2,r\}. \end{equation*} \end{enumerate} Then $$E_0(x):=\inf\limits_{c\in{\mathbb R}}\|x-c\|\leq\left\|\Psi_{a_1,a_2,b,1}\right\|.$$ \end{lemma} We proceed by induction on $r$. The base of the induction follows easily; we dwell on the induction step. Assume the contrary: let $E_0(x)>\left\|\Psi_{a_1,a_2,b,1}\right\|$. Let $c$ be a constant of best uniform approximation of the function $x$. We may assume that $\max\limits_{t\in [0,1]}[x(t)-c]$ is attained at the point $t=0$, that $\min\limits_{t\in [0,1]}[x(t)-c]$ is attained at a point $m$, and that \begin{equation}\label{000} m<\frac 12. \end{equation} This means that $x'(0) = x'(m)=0$. Moreover, $$-\int\limits_0^mx'(t)dt=x(0)-x(m)=2E_0(x)>2\left\|\Psi_{a_1,a_2,b,1}\right\|=-\int\limits_0^m\Psi_{a_1,a_2,b,1}'(t)dt.$$ However, the last inequality together with the induction hypothesis and~\eqref{000} contradicts Lemma~\ref{sl::1}. Using Theorem~\ref{th::1}, Lemma~\ref{th::3} and ideas from~\cite{BKP1} (see also \S~6.4 of the monograph~\cite{BKKP}), we get the following analogue of the Babenko--Kofanov--Pichugov inequality. \begin{theorem}\label{th::4} Let $r\in{\mathbb N}$ and a $1$-periodic function $x\in L^{r}_{\infty,\infty}({\mathbb R})$ be given. Let, for some $\lambda > 0$, one of the conditions $a)$--$c)$ of Lemma~\ref{th::3} hold and $E_0(x)=\left\|\Psi_{a_1,a_2,b,\lambda}\right\|$. Then for all $p\in(0,\infty)$ $$\|x\|_{L_{p(0,1)}}\geq\|\Psi_{a_1,a_2,b,\lambda}\|_{L_{p(0,\lambda)}}.$$ \end{theorem} For a function $x\in L_\infty({\mathbb R})$ let $c(x)$ denote the constant of best approximation of the function $x$ in $L_\infty({\mathbb R})$.
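Note that this best constant is simply the midrange of $x$ (with $\sup$ and $\inf$ understood in the essential sense for general $x\in L_\infty({\mathbb R})$): $$c(x)=\frac12\left(\sup_{t\in{\mathbb R}}x(t)+\inf_{t\in{\mathbb R}}x(t)\right),\qquad \|x-c(x)\|=\frac12\left(\sup_{t\in{\mathbb R}}x(t)-\inf_{t\in{\mathbb R}}x(t)\right).$$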
From Theorem~\ref{th::4}, Theorem~\ref{th::1} and ideas from~\cite{BKP2} (see also \S~6.7 of the monograph~\cite{BKKP}), we get the following analogue of a Nagy-type inequality (see~\cite{Nad1}) obtained by Babenko, Kofanov and Pichugov. \begin{theorem} Let $r\in{\mathbb N}$, a $1$-periodic function $x\in L^{r}_{\infty,\infty}({\mathbb R})$, and numbers $p,q\in (0,\infty)$, $q>p$, be given. Let, for some $\lambda > 0$, one of the conditions $a)$--$c)$ of Lemma~\ref{th::3} hold and $\left\|\Psi_{a_1,a_2,b,\lambda}\right\|_{L_{p(0,\lambda)}}=\|x-c(x)\|_{L_{p(0,1)}}$. Then $$\left\|\Psi_{a_1,a_2,b,\lambda}\right\|_{L_{q(0,\lambda)}}\geq\|x-c(x)\|_{L_{q(0,1)}}.$$ \end{theorem} \begin{thebibliography}{99} \bibitem{Kol1} Kolmogorov A. N. Une g\'en\'eralisation de l'in\'egalit\'e de M. J. Hadamard entre les bornes sup\'erieures des d\'eriv\'ees successives d'une fonction // C. r. Acad. sci. Paris. --- 1938. --- {\bf 207}. --- P. 764--765. \bibitem{Kol2} Kolmogorov A. N. On inequalities between upper bounds of consecutive derivatives of an arbitrary function on the infinite interval // Uchenye zapiski MGU. --- 1939. --- {\bf 30}. --- P. 3--16 (in Russian). \bibitem{Kol3} Kolmogorov A. N. Selected works of A. N. Kolmogorov. Vol. I. Mathematics and mechanics // Mathematics and its Applications (Soviet Series), 25. --- Dordrecht: Kluwer Academic Publishers Group, 1991. \bibitem{korn1} Korneichuk N. P. Extremal problems of approximation theory. --- Moscow: Nauka, 1976. --- 320 p. (in Russian). \bibitem{korn2} Korneichuk N. P. Exact constants in approximation theory. --- Moscow: Nauka, 1987. --- 423 p. (in Russian). \bibitem{korn3} Korneichuk N. P., Babenko V. F., Ligun A. A. Extremal properties of polynomials and splines. --- Kyiv: Nauk. dumka, 1992. --- 304 p. (in Russian). \bibitem{BKKP} Babenko V. F., Korneichuk N. P., Kofanov V. A., Pichugov S. A. Inequalities for derivatives and their applications. --- Kyiv: Nauk. dumka, 2003. --- 590 p. (in Russian).
\bibitem{Rodov1946} Rodov A. M. Dependencies between upper bounds of derivatives of real functions // Izv. AN USSR. Ser. Math. --- 1946. --- {\bf 10}. --- P. 257--270 (in Russian). \bibitem{Ligun} Ligun A. A. Inequalities for upper bounds of functionals // Analysis Math. --- 1976. --- {\bf 2}, N 1. --- P. 11--40. \bibitem{korn4} Korneichuk N. P., Ligun A. A., Doronin V. G. Approximation with constraints. --- Kyiv: Nauk. dumka, 1982. --- 250 p. (in Russian). \bibitem{BKP1} Babenko V. F., Kofanov V. A., Pichugov S. A. Inequalities for norms of intermediate derivatives of periodic functions and their applications // Ibid. --- N 3. --- P. 251--376. \bibitem{BKP2} Babenko V. F., Kofanov V. A., Pichugov S. A. Comparison of rearrangement and Kolmogorov--Nagy type inequalities for periodic functions // Approximation theory: A volume dedicated to Blagovest Sendov (B. Bojanov, Ed.). --- Sofia: Darba, 2002. --- P. 24--53. \bibitem{Nad1} Sz.-Nagy B. \"{U}ber Integralungleichungen zwischen einer Funktion und ihrer Ableitung // Acta Sci. Math. --- 1941. --- {\bf 10}. --- P. 64--74. \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} In this paper, we establish boundary H\"older gradient estimates for solutions to the linearized Monge-Amp\`ere equations with $L^{p}$ ($n<p\leq\infty$) right hand side and $C^{1,\gamma}$ boundary values under natural assumptions on the domain, boundary data and the Monge-Amp\`ere measure. These estimates extend our previous boundary regularity results for solutions to the linearized Monge-Amp\`ere equations with bounded right hand side and $C^{1, 1}$ boundary data. \end{abstract} \noindent \section{Statement of the main results} In this paper, we establish boundary H\"older gradient estimates for solutions to the linearized Monge-Amp\`ere equations with $L^{p}$ ($n<p\leq\infty$) right hand side and $C^{1,\gamma}$ boundary values under natural assumptions on the domain, boundary data and the Monge-Amp\`ere measure. Before stating these estimates, we introduce the following assumptions on the domain $\Omega$ and function $\phi$. Let $\Omega\subset {\mathbb R}^{n}$ be a bounded convex set with \begin{equation}\label{om_ass} B_\rho(\rho e_n) \subset \, \Omega \, \subset \{x_n \geq 0\} \cap B_{\frac 1\rho}, \end{equation} for some small $\rho>0$. Assume that \begin{equation} \Omega~ \text{contains an interior ball of radius $\rho$ tangent to}~ \partial \Omega~ \text{at each point on} ~\partial \Omega\cap\ B_\rho. \label{tang-int} \end{equation} Let $\phi : \overline \Omega \rightarrow {\mathbb R}$, $\phi \in C^{0,1}(\overline \Omega) \cap C^2(\Omega)$ be a convex function satisfying \begin{equation}\label{eq_u} 0 <\lambda \leq \det D^2\phi \leq \Lambda \quad \text{in $\Omega$}. \end{equation} Throughout, we denote by $\Phi = (\Phi^{ij})$ the matrix of cofactors of the Hessian matrix $D^{2}\phi$, i.e., $$\Phi = (\det D^{2} \phi) (D^{2} \phi)^{-1}.$$ We assume that on $\partial \Omega\cap B_\rho$, $\phi$ separates quadratically from its tangent planes on $\partial \Omega$. 
Precisely, we assume that if $x_0 \in \partial \Omega \cap B_\rho$ then \begin{equation} \rho\abs{x-x_{0}}^2 \leq \phi(x)- \phi(x_{0})-\nabla \phi(x_{0}) (x- x_{0}) \leq \rho^{-1}\abs{x-x_{0}}^2, \label{eq_u1} \end{equation} for all $x \in \partial\Omega.$ Let $S_{\phi}(x_0, h)$ be the section of $\phi$ centered at $x_0\in \overline{\Omega}$ and of height $h$: $$S_{\phi}(x_0, h):= \{x\in \overline{\Omega}: \phi(x)<\phi(x_0)+\nabla\phi(x_0)(x-x_0)+ h\}.$$ When $x_0$ is the origin, we denote for simplicity $S_h:= S_{\phi}(0, h).$ Now, we can state our boundary H\"older gradient estimates for solutions to the linearized Monge-Amp\`ere equations with $L^{p}$ right hand side and $C^{1,\gamma}$ boundary data. \begin{theorem} \label{h-bdr-gradient} Assume $\phi$ and $\Omega$ satisfy the assumptions \eqref{om_ass}-\eqref{eq_u1} above. Let $u: B_{\rho}\cap \overline{\Omega}\rightarrow {\mathbb R}$ be a continuous solution to \begin{equation*} \left\{ \begin{alignedat}{2} \Phi^{ij}u_{ij} ~& = f ~&&\text{in} ~ B_{\rho}\cap \Omega, \\\ u &= \varphi~&&\text{on}~\partial \Omega \cap B_{\rho}, \end{alignedat} \right. \end{equation*} where $f\in L^{p}(B_{\rho}\cap\Omega)$ for some $p>n$ and $\varphi \in C^{1,\gamma}(B_{\rho}\cap\partial\Omega)$. Then, there exist $\alpha\in (0, 1)$ and $\theta_0$ small, depending only on $n, p, \rho, \lambda, \Lambda, \gamma$, such that for all $\theta\leq \theta_0$ we have $$\|u- u(0)-\nabla u(0)x\|_{L^{\infty}(S_{\theta})}\leq C\left(\|u\|_{L^{\infty}(B_{\rho}\cap\Omega)} + \|f\|_{L^{p}(B_{\rho}\cap\Omega)} + \|\varphi\|_{C^{1,\gamma}(B_{\rho}\cap\partial\Omega)}\right) (\theta^{1/2})^{1+\alpha},$$ where $C$ depends only on $n, p, \rho, \lambda, \Lambda, \gamma$. We can take $\alpha:= \min\{1-\frac{n}{p}, \gamma\}$ provided that $\alpha<\alpha_0$, where $\alpha_0$ is the exponent in our previous boundary H\"older gradient estimates (see Theorem \ref{LS-gradient}).
\end{theorem} Theorem \ref{h-bdr-gradient} extends our previous boundary H\"older gradient estimates for solutions to the linearized Monge-Amp\`ere equations with bounded right hand side and $C^{1, 1}$ boundary data \cite[Theorem 2.1]{LS1}. This is an affine invariant analogue of the boundary H\"older gradient estimates of Ural'tseva \cite{U1} (see also \cite{U2} for a survey) for uniformly elliptic equations with $L^{p}$ right hand side. \begin{remark} By the Localization Theorem \cite{S1, S2}, we have $$B_{c\theta^{1/2}/\abs{\log \theta}}\cap \overline{\Omega}\subset S_{\theta}\subset B_{C\theta^{1/2}\abs{\log \theta}}\cap\overline{\Omega}.$$ Therefore, Theorem \ref{h-bdr-gradient} easily implies that $\nabla u$ is $C^{0, \alpha'}$ on $B_{\rho/2}\cap\partial\Omega$ for all $\alpha'<\alpha.$ \end{remark} As a consequence of Theorem \ref{h-bdr-gradient}, we obtain global $C^{1,\alpha}$ estimates for solutions to the linearized Monge-Amp\`ere equations with $L^{p}$ ($n<p\leq\infty$) right hand side and $C^{1,\gamma}$ boundary values under natural assumptions on the domain, boundary data and continuity of the Monge-Amp\`ere measure. \begin{theorem} Assume that $\Omega \subset B_{1/\rho}$ contains an interior ball of radius $\rho$ tangent to $\partial \Omega$ at each point on $\partial \Omega.$ Let $\phi : \overline \Omega \rightarrow {\mathbb R}$, $\phi \in C^{0,1}(\overline \Omega) \cap C^2(\Omega)$ be a convex function satisfying $$ \det D^2 \phi =g \quad \quad \mbox{with} \quad \lambda \le g \le \Lambda,\quad g\in C(\overline{\Omega}).$$ Assume further that on $\partial \Omega$, $\phi$ separates quadratically from its tangent planes, namely \begin{equation*} \rho\abs{x-x_{0}}^2 \leq \phi(x)- \phi(x_{0})-\nabla \phi(x_{0}) (x- x_{0}) \leq \rho^{-1}\abs{x-x_{0}}^2, ~\forall x, x_{0}\in\partial\Omega.
\end{equation*} Let $u: \overline{\Omega}\rightarrow {\mathbb R}$ be a continuous function that solves the linearized Monge-Amp\`ere equation \begin{equation*} \left\{ \begin{alignedat}{2} \Phi^{ij}u_{ij} ~& = f ~&&\text{in} ~ \Omega, \\\ u &= \varphi ~&&\text{on}~\partial \Omega, \end{alignedat} \right. \end{equation*} where $\varphi$ is a $C^{1,\gamma}$ function defined on $\partial\Omega$ $(0<\gamma\leq 1)$ and $f\in L^{p}(\Omega)$ with $p>n$. Then \begin{equation*} \|u\|_{C^{1, \beta} (\overline \Omega )} \leq K( \|\varphi\|_{C^{1,\gamma}(\partial \Omega)}+ \|f\|_{L^p(\Omega)}), \end{equation*} where $\beta\in (0,1)$ and $K$ are constants depending on $n, \rho, \gamma, \lambda, \Lambda, p$ and the modulus of continuity of $g$. \label{global-reg} \end{theorem} Theorem \ref{global-reg} extends our previous global $C^{1,\alpha}$ estimates for solutions to the linearized Monge-Amp\`ere equations with bounded right hand side and $C^{1, 1}$ boundary data \cite[Theorem 2.5 and Remark 7.1]{LS1}. It is also the global counterpart of Guti\'errez-Nguyen's interior $C^{1,\alpha}$ estimates for the linearized Monge-Amp\`ere equations. If we assume $\varphi$ to be more regular, say $\varphi \in W^{2, q}(\Omega)$ where $q>p$, then Theorem \ref{global-reg} is a consequence of the global $W^{2, p}$ estimates for solutions to the linearized Monge-Amp\`ere equations \cite[Theorem 1.2]{LN}. In that case, however, the proof in \cite{LN} is quite involved; our proof of Theorem \ref{global-reg} here is much simpler. \begin{remark} The estimates in Theorem \ref{global-reg} can be improved to \begin{equation} \|u\|_{C^{1, \beta} (\overline \Omega )} \leq K( \|\varphi\|_{C^{1,\gamma}(\partial \Omega)}+ \|f/tr~ \Phi\|_{L^p(\Omega)}). \label{improved-est} \end{equation} This follows easily from the estimates in Theorem \ref{global-reg} and the global $W^{2,p}$ estimates for solutions to the standard Monge-Amp\`ere equations with continuous right hand side \cite{S3}.
Indeed, since $$tr ~\Phi\geq n(\det \Phi)^{\frac{1}{n}}\geq n \lambda^{\frac{n-1}{n}},$$ we also have $f/tr ~\Phi\in L^{p} (\Omega)$. Fix $q\in (n, p)$; then by \cite{S3}, $tr~ \Phi\in L^{\frac{pq}{p-q}}(\Omega)$. Now apply the estimates in Theorem \ref{global-reg} to $f\in L^{q}(\Omega)$ and then use the H\"older inequality applied to $f= (f/tr~ \Phi) (tr~ \Phi)$ to obtain (\ref{improved-est}). \end{remark} \begin{remark} The linearized Monge-Amp\`ere operator $L_{\phi}:= \Phi^{ij}\partial_{ij}$ with $\phi$ satisfying the conditions of Theorem \ref{global-reg} is in general degenerate. Here is an explicit example in two dimensions, taken from \cite{Wang1}, showing that $L_{\phi}$ is not uniformly elliptic in $\overline{\Omega}.$ Consider $$\phi (x, y)=\frac{x^2}{\log |\log (x^2 + y^2)|} + y^2 \log|\log (x^2 + y^2)| $$ in a small ball $\Omega= B_{\rho}(0)\subset {\mathbb R}^2$ around the origin. Then $\phi \in C^{0,1}(\overline \Omega) \cap C^2(\Omega)$ is strictly convex with $$\det D^2 \phi(x, y) = 4 + O \left(\frac{\log|\log(x^2 + y^2)|}{\log(x^2 + y^2)}\right)\in C(\overline{\Omega})$$ and $\phi$ has smooth boundary data on $\partial\Omega$. The quadratic separation of $\phi$ from its tangent planes on $\partial\Omega$ can be readily checked (see also \cite[Proposition 3.2]{S2}). However, $\phi\not\in W^{2,\infty}(\Omega).$ \end{remark} \begin{remark} For the global $C^{1,\alpha}$ estimates in Theorem \ref{global-reg}, the condition $p>n$ is sharp, since even in the uniformly elliptic case (for example, when $\phi(x)= \frac{1}{2}|x|^2$, $L_{\phi}$ is the Laplacian), the global $C^{1,\alpha}$ estimates fail when $p=n$. \end{remark} We prove Theorem \ref{h-bdr-gradient} using perturbation arguments in the spirit of Caffarelli \cite{C, CC} (see also Wang \cite{Wang}) in combination with our previous boundary H\"older gradient estimates for the case of bounded right hand side $f$ and $C^{1,1}$ boundary data \cite{LS1}. The next section will provide the proof of Theorem \ref{h-bdr-gradient}.
The proof of Theorem \ref{global-reg} will be given in the final section, Section \ref{proof-sec}. \section{Boundary H\"older gradient estimates} In this section, we prove Theorem \ref{h-bdr-gradient}. {\it We will use the letters $c, C$ to denote generic constants depending only on the structural constants $n, p, \rho, \gamma, \lambda, \Lambda$ that may change from line to line.} Assume $\phi$ and $\Omega$ satisfy the assumptions \eqref{om_ass}-\eqref{eq_u1}. We can also assume that $\phi(0)=0$ and $\nabla \phi(0)=0.$ By the Localization Theorem for solutions to the Monge-Amp\`ere equations proved in \cite{S1, S2}, there exists a small constant $k$ depending only on $n, \rho, \lambda, \Lambda$ such that if $h\leq k$ then \begin{equation}kE_h\cap \overline{\Omega}\subset S_{\phi}(0, h)\subset k^{-1} E_h\cap \overline{\Omega}\ \label{loc-k} \end{equation} where $$E_h:= h^{1/2}A_h^{-1} B_1$$ with $A_h$ being a linear transformation (sliding along the $x_n=0$ plane) \begin{equation}A_h(x) = x- \tau_h x_n,~ \tau_h\cdot e_n =0, ~\det A_h =1\ \label{Amap} \end{equation} and $$|\tau_h|\leq k^{-1}\abs{\log h}.$$ We define the following rescaling of $\phi$ \begin{equation}\phi_h(x):= \frac{\phi(h^{1/2} A^{-1}_h x)}{h} \label{phi-h} \end{equation} in \begin{equation}\Omega_h:= h^{-1/2}A_h\Omega. \label{omega-h} \end{equation} Then $$\lambda \leq \det D^2 \phi_h(x)= \det D^2 \phi(h^{1/2}A_{h}^{-1}x)\leq \Lambda$$ and $$B_{k}\cap \overline{\Omega_h}\subset S_{\phi_h}(0, 1)= h^{-1/2} A_{h}S_h\subset B_{k^{-1}}\cap \overline{\Omega_h}.$$ Lemma 4.2 in \cite{LS1} implies that if $h, r\leq c$ are small, then $\phi_h$ satisfies in $S_{\phi_h}(0, 1)$ the hypotheses of the Localization Theorem \cite{S1, S2} at all $x_0\in S_{\phi_h}(0, r)\cap\partial S_{\phi_h}(0, 1).$ In particular, there exists $\tilde\rho$ small, depending only on $n, \rho, \lambda, \Lambda$, such that if $x_0\in S_{\phi_h}(0, r)\cap\partial S_{\phi_h}(0, 1)$ then \begin{equation} \tilde\rho\abs{x-x_{0}}^2 \leq \phi_h(x)- \phi_h(x_{0})-\nabla \phi_h(x_{0}) (x- x_{0}) \leq \tilde\rho^{-1}\abs{x-x_{0}}^2, \label{loc-h} \end{equation} for all $x \in \partial S_{\phi_h}(0, 1).$ We fix $r$ in what follows. Our previous boundary H\"older gradient estimates \cite{LS1} for solutions to the linearized Monge-Amp\`ere equations with bounded right hand side and $C^{1, 1}$ boundary data hold in $S_{\phi_h}(0, r)$. They will play a crucial role in the perturbation arguments, and we now recall them. \begin{theorem}(\cite[Theorem 2.1 and Proposition 6.1]{LS1}) Assume $\phi$ and $\Omega$ satisfy the assumptions \eqref{om_ass}-\eqref{eq_u1} above. Let $u: S_r\cap \overline{\Omega}\rightarrow {\mathbb R}$ be a continuous solution to \begin{equation*} \left\{ \begin{alignedat}{2} \Phi^{ij}u_{ij} ~& = f ~&&\text{in} ~ S_r\cap \Omega, \\\ u &= 0~&&\text{on}~\partial \Omega \cap S_r, \end{alignedat} \right. \end{equation*} where $f\in L^{\infty}(S_r\cap\Omega)$. Then $$|\partial_n u(0)|\leq C_0 \left(\|u\|_{L^{\infty}(S_r\cap\Omega)} + \|f\|_{L^{\infty}(S_r\cap\Omega)}\right)$$ and for $s\leq r/2$ $$\max_{S_s}|u-\partial_n u(0)x_n|\leq C_0 (s^{1/2})^{1+\alpha_0}\left(\|u\|_{L^{\infty}(S_r\cap\Omega)} + \|f\|_{L^{\infty}(S_r\cap\Omega)} \right)$$ where $\alpha_0\in (0, 1)$ and $C_0$ are constants depending only on $n, \rho, \lambda, \Lambda $. \label{LS-gradient} \end{theorem} Now, we are ready to give the proof of Theorem \ref{h-bdr-gradient}.
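Before starting the proof, we also recall the Aleksandrov--Bakelman--Pucci (ABP) maximum principle in the form in which it is applied below (see, e.g., \cite{CC}): if $\Phi^{ij}w_{ij}=g$ in a bounded open set $S$, then $$\|w\|_{L^{\infty}(S)}\leq \|w\|_{L^{\infty}(\partial S)} + C(n)\, diam (S)\left\|\frac{g}{(\det \Phi)^{1/n}}\right\|_{L^{n}(S)}.$$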
\begin{proof}[Proof of Theorem \ref{h-bdr-gradient}] Since $u|_{\partial\Omega \cap B_{\rho}}$ is $C^{1,\gamma}$, by subtracting a suitable linear function we can assume that on $\partial\Omega\cap B_{\rho}$, $u$ satisfies $$\abs{u(x)}\leq M |x^{'}|^{1+\gamma}.$$ Let $$\alpha:=\min\{\gamma, 1-\frac{n}{p}\}$$ if $\alpha<\alpha_0$; otherwise let $\alpha\in (0, \alpha_0)$, where $\alpha_0$ is as in Theorem \ref{LS-gradient}. The only place where we need $\alpha<\alpha_0$ is (\ref{alpha0}). By dividing our equation by a suitable constant we may assume that for some $\theta$ to be chosen later $$\|u\|_{L^{\infty}(B_{\rho}\cap\Omega)} + \|f\|_{L^{p}(B_{\rho}\cap\Omega)} + M\leq (\theta^{1/2})^{1+\alpha}=: \delta.$$ {\bf Claim.} There exists $0<\theta_0<r/4$ small, depending only on $n, \rho, \lambda, \Lambda, \gamma, p$, and a sequence of linear functions $$l_m(x):= b_m x_n$$ with $b_0= b_1 =0$ such that for all $\theta\leq\theta_0$ and for all $m\geq 1$, we have \begin{myindentpar}{1cm} (i) $$\|u-l_m\|_{L^{\infty}(S_{\theta^m})}\leq (\theta^{m/2})^{1+\alpha},$$ and\\ (ii) $$|b_m-b_{m-1}|\leq C_0 (\theta^{\frac{m-1}{2}})^{\alpha}.$$ \end{myindentpar} Our theorem follows from the claim. Indeed, (ii) implies that $\{l_m\}$ converges uniformly in $S_{\theta}$ to a linear function $l(x)= bx_n$ with $b$ universally bounded since \begin{equation*} \abs{b}\leq \sum_{m=1}^{\infty} \abs{b_m-b_{m-1}}\leq \sum_{m=1}^{\infty} C_0 (\theta^{\alpha/2})^{m-1}= \frac{C_0}{1-\theta^{\alpha/2}}\leq 2C_0. \end{equation*} Furthermore, by (\ref{loc-k}) and (\ref{Amap}), we have $|x_n|\leq k^{-1}\theta^{m/2}$ for $x\in S_{\theta^m}$. Therefore, for any $m\geq 1$, \begin{eqnarray*} \|u-l\|_{L^{\infty}(S_{\theta^m})} &\leq & \|u-l_m\|_{L^{\infty}(S_{\theta^m})} + \sum_{j=m+1}^{\infty} \|l_j-l_{j-1}\|_{L^{\infty}(S_{\theta^m})} \\&\leq& (\theta^{m/2})^{1+\alpha} + \sum_{j=m+1}^{\infty} C_0 (\theta^{\frac{j-1}{2}})^{\alpha} (k^{-1}\theta^{m/2})\\ &\leq& C(\theta^{m/2})^{1+\alpha}.
\end{eqnarray*} We now prove the claim by induction. Clearly (i) and (ii) hold for $m=1$. Suppose (i) and (ii) hold up to $m\geq 1$. We prove them for $m+1$. Let $h= \theta^m$. We define the rescaled domain $\Omega_h$ and function $\phi_h$ as in (\ref{omega-h}) and (\ref{phi-h}). We also define for $x\in \Omega_h$ $$v(x):= \frac{(u-l_m)(h^{1/2} A^{-1}_h x)}{h^{\frac{1+\alpha}{2}}}, ~f_h(x): = h^{\frac{1-\alpha}{2}}f(h^{1/2} A^{-1}_h x).$$ Then $$\|v\|_{L^{\infty}(S_{\phi_h}(0,1))}= \frac{1}{h^{\frac{1+\alpha}{2}}}\|u-l_m\|_{L^{\infty}(S_h)}\leq 1$$ and $$\Phi_h^{ij}v_{ij}=f_h~\text{in}~S_{\phi_h}(0,1)$$ with $$\|f_h\|_{L^{p}(S_{\phi_h}(0,1))} = (h^{1/2})^{1-\alpha-n/p} \|f\|_{L^{p}(S_h)}\leq \delta.$$ Let $w$ be the solution to \begin{equation*} \left\{ \begin{alignedat}{2} \Phi_h^{ij}w_{ij} ~& = 0 ~&&\text{in} ~ S_{\phi_h}(0, 2\theta), \\\ w &= \varphi_h~&&\text{on}~\partial S_{\phi_h}(0, 2\theta), \end{alignedat} \right. \end{equation*} where \begin{equation*} \varphi_h = \left\{\begin{alignedat}{1} 0 ~&~ \text{on} ~\partial S_{\phi_h}(0, 2\theta)\cap\partial\Omega_h \\ v~& ~\text{on}~ \partial S_{\phi_h}(0, 2\theta) \cap \Omega_h. \end{alignedat} \right. 
\end{equation*} By the maximum principle, we have $$\|w\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))}\leq \|v\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))}\leq 1.$$ Let $$\bar{l}(x):= \bar{b}x_n;~\bar{b}:=\partial_n w(0).$$ Then the boundary H\"older gradient estimates in Theorem \ref{LS-gradient} give \begin{equation}\abs{\bar{b}}\leq C_0 \|w\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))}\leq C_0 \label{b-bar} \end{equation} and \begin{eqnarray}\|w-\bar{l}\|_{L^{\infty}(S_{\phi_h}(0, \theta))} \leq C_0\|w\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))} (\theta^{\frac{1}{2}})^{1+\alpha_0} &\leq& C_0 (\theta^{\frac{1}{2}})^{1+\alpha_0}\nonumber\\ &\leq& \frac{1}{2}(\theta^{\frac{1}{2}})^{1+\alpha}, \label{alpha0} \end{eqnarray} provided that $$C_0\theta_0^{\frac{\alpha_0-\alpha}{2}}\leq 1/2.$$ We will show that, by choosing $\theta\leq \theta_0$ where $\theta_0$ is small, we have \begin{equation}\|w-v\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))} \leq \frac{1}{2}(\theta^{\frac{1}{2}})^{1+\alpha}. \label{w-eq} \end{equation} Combining this with (\ref{alpha0}), we obtain $$\|v-\bar{l}\|_{L^{\infty}(S_{\phi_h}(0, \theta))}\leq (\theta^{\frac{1}{2}})^{1+\alpha}.$$ Now, let $$l_{m+1}(x):= l_m(x) + (h^{1/2})^{1+\alpha} \bar{l}(h^{-1/2}A_h x).$$ Then, for $x\in S_{\theta^{m+ 1}}= S_{\theta h}$, we have $h^{-1/2}A_h x\in S_{\phi_h}(0,\theta)$ and $$(u-l_{m+1})(x) = u(x)- l_m(x) - (h^{1/2})^{1+\alpha} \bar{l}(h^{-1/2}A_h x)= (h^{1/2})^{1+\alpha}(v- \bar{l})(h^{-1/2}A_h x).$$ Thus $$\|u-l_{m+1}\|_{L^{\infty}(S_{\theta^{m+1}})} = (h^{1/2})^{1+\alpha}\|v- \bar{l}\|_{L^{\infty}(S_{\phi_h}(0,\theta))}\leq (h^{1/2})^{1+\alpha} (\theta^{1/2})^{1+\alpha}= (\theta^{\frac{m+1}{2}})^{1+\alpha},$$ proving (i). 
On the other hand, we have $$l_{m+1} (x) = b_{m+1}x_n$$ where, by (\ref{Amap}), $$b_{m+1}:= b_m + (h^{1/2})^{1+\alpha} h^{-1/2} \bar{b} = b_m + h^{\alpha/2}\bar{b}.$$ Therefore, the claim is established since (ii) follows from (\ref{b-bar}) and $$\abs{b_{m+1}-b_m}= h^{\alpha/2}\abs{\bar{b}}\leq C_{0}\theta^{m\alpha/2}.$$ It remains to prove (\ref{w-eq}). We will apply the ABP estimate to $w-v$, which solves \begin{equation*} \left\{ \begin{alignedat}{2} \Phi_h^{ij}(w-v)_{ij} ~& = -f_h ~&&\text{in} ~ S_{\phi_h}(0, 2\theta), \\\ w-v &= \varphi_h-v~&&\text{on}~\partial S_{\phi_h}(0, 2\theta). \end{alignedat} \right. \end{equation*} By this estimate and the way $\varphi_h$ is defined, we have \begin{multline*}\|w-v\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))} \leq \|v\|_{L^{\infty}(\partial S_{\phi_h}(0, 2\theta)\cap \partial\Omega_h)} + C(n) diam (S_{\phi_h}(0, 2\theta)) \|\frac{f_h}{(\det \Phi_h)^{\frac{1}{n}}}\|_{L^{n}(S_{\phi_h}(0, 2\theta))}\\=: (I) + (II). \end{multline*} To estimate (I), we denote $y= h^{1/2}A_{h}^{-1}x$ when $x\in \partial S_{\phi_h}(0, 2\theta)\cap \partial\Omega_h.$ Then $y\in \partial S_{\phi}(0, 2\theta h)\cap\partial\Omega$ and moreover, $$y_n = h^{1/2}x_n,~ y^{'}-\tau_h y_n = h^{1/2} x^{'}.$$ Noting that $x\in \partial S_{\phi_h}(0, 2\theta)\cap \partial\Omega_h\subset S_{\phi_h}(0, 1)\subset B_{k^{-1}}$, we have by (\ref{Amap}) $$\abs{y}\leq k^{-1}h^{1/2}\abs{\log h}\abs{x} \leq h^{1/4}\leq \rho$$ if $h=\theta^m$ is small. This is clearly satisfied when $\theta_0$ is small.
Since $\Omega$ has an interior tangent ball of radius $\rho$, we have $$\abs{y_n}\leq \rho^{-1}|y^{'}|^2.$$ Therefore $$\abs{\nu_h y_n}\leq k^{-1}\abs{\log h} \rho^{-1} |y^{'}|^2 \leq k^{-1}\rho^{-1} h^{1/4}\abs{\log h} |y^{'}|\leq \frac{1}{2}|y^{'}|$$ and consequently, $$\frac{1}{2}|y^{'}|\leq |h^{1/2} x^{'}|\leq \frac{3}{2}|y^{'}|.$$ From (\ref{loc-h}) $$\tilde\rho |x^{'}|^2 \leq \phi_h(x)\leq 2\theta,$$ we have $$|y^{'}|\leq 2 h^{1/2} |x^{'}|\leq 2(2\tilde\rho^{-1})^{1/2} (\theta h)^{1/2}.$$ By (ii) and $b_0=0$, we have $$\abs{b_m}\leq \sum_{j=1}^{m}\abs{b_j-b_{j-1}}\leq \sum_{j=1}^{\infty} C_0 (\theta^{\alpha/2})^{j-1}= \frac{C_0}{1-\theta^{\alpha/2}}\leq 2C_0$$ if $$\theta_0^{\alpha/2}\leq 1/2.$$ Now, we obtain from the definition of $v$ that $$h^{\frac{1+\alpha}{2}} |v(x)| = |(u-l_m)(y)| \leq \abs{u(y)} + 2C_0\abs{y_n} \leq \delta |y^{'}|^{1+\gamma} + 2C_0 \rho^{-1} |y^{'}|^2 = |y^{'}|^{1+\gamma}(\delta + 2C_0 \rho^{-1} |y^{'}|^{1-\gamma}).$$ Using $|y^{'}|\leq C \theta^{1/2}$ and $\gamma\geq \alpha$, we find $$v(x) \leq \frac{C ((\theta h)^{1/2})^{1+\gamma}(\delta +\theta^{\frac{1-\gamma}{2}})}{h^{\frac{1+\alpha}{2}}} = Ch^{\frac{\gamma-\alpha}{2}}\theta^{\frac{1+\gamma}{2}} (\theta^{\frac{1+\alpha}{2}} +\theta^{\frac{1-\gamma}{2}})\leq Ch^{\frac{\gamma-\alpha}{2}}\theta\leq \frac{1}{4} (\theta^{1/2})^{1+\alpha}$$ if $\theta_0$ is small.
We then obtain $$(I) \leq \frac{1}{4} (\theta^{1/2})^{1+\alpha}.$$ To estimate (II), we recall $\delta = (\theta^{1/2})^{1+\alpha}$ and $$S_{\phi_h}(0, 2\theta)\subset B_{C(2\theta)^{1/2} \abs{\log (2\theta)}};~ \abs{S_{\phi_h}(0, 2\theta)}\leq C(2\theta)^{n/2}.$$ Since $$\det \Phi_h = (\det D^2\phi_h)^{n-1}\geq \lambda^{n-1},$$ we therefore obtain from the H\"older inequality that \begin{eqnarray*} (II) &\leq& \frac{ C(n)}{\lambda^{\frac{n-1}{n}}} diam (S_{\phi_h}(0, 2\theta)) \|f_h\|_{L^{n}(S_{\phi_h}(0, 2\theta))}\\ &\leq& C(n, \lambda)diam (S_{\phi_h}(0, 2\theta)) \abs{S_{\phi_h}(0, 2\theta)}^{\frac{1}{n}-\frac{1}{p}} \| f_h\|_{L^{p}(S_{\phi_h}(0, 2\theta))} \\ &\leq & C\delta \theta^{1/2}\abs{\log (2\theta)} (\theta^{1/2})^{1-n/p}= C(\theta^{1/2})^{1+\alpha} \abs{\log (2\theta)} (\theta^{1/2})^{2-n/p}\leq \frac{1}{4} (\theta^{1/2})^{1+\alpha} \end{eqnarray*} if $\theta_0$ is small. It follows that $$\|w-v\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))} \leq (I) + (II)\leq \frac{1}{2}(\theta^{\frac{1}{2}})^{1+\alpha},$$ proving (\ref{w-eq}). The proof of our theorem is complete. \end{proof} \section{Global $C^{1,\alpha}$ estimates} \label{proof-sec} In this section, we will prove Theorem \ref{global-reg}. \begin{proof}[Proof of Theorem \ref{global-reg}] We extend $\varphi$ to a $C^{1,\gamma}(\overline{\Omega})$ function in $\overline{\Omega}$. By the ABP estimate, we have \begin{equation} \label{u-max} \norm{u}_{L^{\infty}(\Omega)} \leq C \left(\norm{f}_{L^{p}(\Omega)} + \|\varphi\|_{L^{\infty}(\overline{\Omega})}\right) \end{equation} for some $C$ depending on $n, p, \rho, \lambda$.
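The ABP estimate invoked here is the classical Aleksandrov–Bakelman–Pucci maximum principle; for the reader's convenience, a statement of the form used (standard, with $C(n)$ a dimensional constant):

```latex
Let $(a^{ij})$ be positive definite and $a^{ij}u_{ij} \ge f$ in a
bounded domain $D$. Then
\[
  \sup_{D} u \;\le\; \sup_{\partial D} u
    + C(n)\,diam (D)\,
      \Big\|\frac{f^{-}}{(\det a)^{1/n}}\Big\|_{L^{n}(D)} .
\]
Applied to $\pm u$ with $a^{ij}=\Phi^{ij}$, the cofactor matrix of
$D^2\phi$, and using $\det\Phi=(\det D^2\phi)^{n-1}\ge\lambda^{n-1}$
together with H\"older's inequality ($p>n$), this yields (\ref{u-max}).
```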
By multiplying $u$ by a suitable constant, we can assume that $$\norm{f}_{L^{p}(\Omega)} + \|\varphi\|_{C^{1,\gamma}(\overline{\Omega})}=1.$$ By using Guti\'errez-Nguyen's interior $C^{1,\alpha}$ estimates \cite{GN} and restricting our estimates to small balls of definite size around $\partial\Omega$, we can assume throughout that $1-\varepsilon\leq g\leq 1+ \varepsilon$ where $\varepsilon$ is as in Theorem \ref{h-bdr-gradient}. Let $y\in \Omega $ with $r:=dist (y,\partial\Omega) \le c,$ for $c$ universal, and consider the maximal section $S_{\phi}(y, h)$ of $\phi$ centered at $y$, i.e., $$h=\sup\{t\,| \quad S_{\phi}(y,t)\subset \Omega\}.$$ Since $\phi$ is $C^{1, 1}$ on the boundary $\partial\Omega$, by Caffarelli's strict convexity theorem, $\phi$ is strictly convex in $\Omega$. This implies the existence of the above maximal section $S_{\phi}(y, h)$ of $\phi$ centered at $y$ with $h>0$. By \cite[Proposition 3.2]{LS1} applied at the point $x_0\in \partial S_{\phi}(y,h) \cap \partial \Omega,$ we have \begin{equation} h^{1/2} \sim r, \label{hr} \end{equation} and $ S_{\phi}(y,h)$ is equivalent to an ellipsoid $E$, i.e., $$cE \subset S_{\phi}(y,h)-y \subset CE,$$ where \begin{equation}E :=h^{1/2}A_{h}^{-1}B_1, \quad \mbox{with} \quad \|A_{h}\|, \|A_{ h}^{-1} \| \le C |\log h|; \det A_{h}=1. \label{eh} \end{equation} We denote $$\phi_y:=\phi-\phi(y)-\nabla \phi(y) (x-y).$$ The rescaling $\tilde \phi: \tilde S_1 \to {\mathbb R}$ of $\phi$ $$\tilde \phi(\tilde x):=\frac {1}{ h} \phi_y(T \tilde x) \quad \quad x=T\tilde x:=y+ h^{1/2}A_{h}^{-1}\tilde x,$$ satisfies $$\det D^2\tilde \phi(\tilde x)=\tilde g(\tilde x):=g(T \tilde x), $$ and \begin{equation} \label{normalsect} B_c \subset \tilde S_1 \subset B_C, \quad \quad \tilde S_1= h^{-1/2} A_{h}(S_{\phi}(y, h)- y), \end{equation} where $\tilde S_1:= S_{\tilde \phi} (0, 1)$ represents the section of $\tilde \phi$ at the origin at height 1.
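The identity $\det D^2\tilde\phi(\tilde x)=g(T\tilde x)$ is the affine invariance of the Monge--Amp\`ere measure; explicitly:

```latex
With $x = T\tilde x = y + h^{1/2} A_h^{-1}\tilde x$ and
$\tilde\phi(\tilde x) = h^{-1}\phi_y(T\tilde x)$, the chain rule gives
\[
  D^2 \tilde\phi(\tilde x)
    = h^{-1}\,\big(h^{1/2}A_h^{-1}\big)^{t}\, D^2\phi_y(T\tilde x)\,
      \big(h^{1/2}A_h^{-1}\big)
    = (A_h^{-1})^{t}\, D^2\phi(T\tilde x)\, A_h^{-1},
\]
since $D^2\phi_y = D^2\phi$. Hence, because $\det A_h = 1$ by (\ref{eh}),
\[
  \det D^2\tilde\phi(\tilde x)
    = (\det A_h^{-1})^{2}\,\det D^2\phi(T\tilde x)
    = g(T\tilde x).
\]
```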
We define also the rescaling $\tilde u$ for $u$ $$\tilde u(\tilde x):= h^{-1/2}\left(u(T\tilde x)- u(x_{0})-\nabla u(x_0)(T\tilde x-x_0)\right),\quad \tilde x\in \tilde S_{1}.$$ Then $\tilde u$ solves $$\tilde \Phi^{ij} \tilde u_{ij} = \tilde f(\tilde x):= h^{1/2} f(T\tilde x).$$ Now, we apply Guti\'errez-Nguyen's interior $C^{1,\alpha}$ estimates \cite{GN} to $\tilde u $ to obtain $$\abs{D\tilde u (\tilde z_{1})-D\tilde u(\tilde z_{2})}\leq C\abs{\tilde z_{1}-\tilde z_{2}}^{\beta} \{\norm{\tilde u }_{L^{\infty}(\tilde S_{1})} + \norm{\tilde f}_{L^{p}(\tilde S_{1})}\},\quad\forall \tilde z_{1}, \tilde z_{2}\in \tilde S_{1/2},$$ for some small constant $\beta\in (0,1)$ depending only on $n, \lambda, \Lambda$. By (\ref{normalsect}), we can decrease $\beta$ if necessary and thus we can assume that $2\beta\leq \alpha$ where $\alpha\in (0,1)$ is the exponent in Theorem \ref{h-bdr-gradient}. Note that, by (\ref{eh}) \begin{equation} \norm{\tilde f}_{L^{p}(\tilde S_{1})} = h^{1/2-\frac{n}{2p}}\norm{f}_{L^{p}(S_{\phi}(y, h))}.
\label{scaled-lp} \end{equation} We observe that (\ref{hr}) and (\ref{eh}) give $$B_{C r\abs{\log r}}(y)\supset S_{\phi}(y,h) \supset S_{\phi}(y,h/2)\supset B_{c\frac{r}{\abs{\log r}}}(y)$$ and $$diam ( S_{\phi}(y,h))\leq Cr\abs{\log r}.$$ By Theorem \ref{h-bdr-gradient} applied to the original function $u$, (\ref{u-max}) and (\ref{hr}), we have $$\norm{\tilde u }_{L^{\infty}(\tilde S_{1})} \leq C h^{-1/2} \left(\|u\|_{L^{\infty}(\Omega)} +\norm{f}_{L^{p}(\Omega)} + \|\varphi\|_{C^{1,\gamma}(\overline{\Omega})}\right)diam ( S_{\phi}(y,h))^{ 1+ \alpha} \leq C r^{\alpha}\abs{\log r}^{1 +\alpha}.$$ Hence, using (\ref{scaled-lp}) and the fact that $\alpha\leq 1/2 (1-n/p)$, we get $$\abs{D\tilde u (\tilde z_{1})-D\tilde u(\tilde z_{2})}\leq C\abs{\tilde z_{1}-\tilde z_{2}}^{\beta} r^{\alpha}\abs{\log r}^{1 +\alpha}~\forall \tilde z_{1}, \tilde z_{2}\in \tilde S_{1/2}.$$ Rescaling back and using $$\tilde z_1-\tilde z_2= h^{-1/2}A_{ h}(z_1-z_2),\quad h^{1/2}\sim r,$$ and the fact that $$\abs{\tilde z_1-\tilde z_2}\leq \norm{ h^{-1/2}A_{ h}}\abs{z_1-z_2} \leq C h^{-1/2}\abs{\log h}\abs{z_1-z_2}\leq C r^{-1}\abs{\log r}\abs{z_1-z_2},$$ we find \begin{eqnarray}|Du(z_1)-Du( z_2)| &=&|A_{h}(D\tilde u (\tilde z_{1})-D\tilde u(\tilde z_{2}))| \leq C\abs{\log h}(r^{-1}\abs{\log r}\abs{z_1-z_2})^{\beta} r^{\alpha}\abs{\log r}^{1 +\alpha} \nonumber\\& \le& |z_1-z_2|^{\beta} \quad \forall z_1, z_2 \in S_{\phi}(y,h/2), \label{oscv} \end{eqnarray} where the last inequality holds because $2\beta\leq\alpha$ and $r\leq c$ is small, so that $Cr^{\alpha-\beta}\abs{\log r}^{1+\alpha+\beta}\abs{\log h}\leq 1$. Notice that this inequality holds also in the Euclidean ball $B_{c\frac{r}{\abs{\log r}}}(y)\subset S_{\phi}(y,h/2)$. Combining this with Theorem \ref{h-bdr-gradient}, we easily obtain $$[Du]_{C^\beta(\bar \Omega)} \le C$$ and the desired global $C^{1,\beta}$ bounds for $u$. \end{proof} {\bf Acknowledgments.} The authors would like to thank the referee for constructive comments on the manuscript. The first author was partially supported by the Vietnam Institute for Advanced Study in Mathematics (VIASM), Hanoi, Vietnam. \end{document}
\begin{document} \title{\large \bf Quantum Memory Process with a Four-Level Atomic Ensemble} \author{Xiong-Jun Liu$^{a,b,c}$\footnote{Electronic address: phylx@nus.edu.sg}, Hui Jing$^d$ and Mo-Lin Ge$^{b,c}$} \affiliation{a. Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 \\ b. Theoretical Physics Division, Nankai Institute of Mathematics, Nankai University, Tianjin 300071, P.R.China\\ c. Liuhui Center for Applied Mathematics, Nankai University and Tianjin University, Tianjin 300071, P.R.China\\ d. State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics,\\ Wuhan Institute of Physics and Mathematics, CAS, Wuhan 430071, P. R. China} \begin{abstract} We examine in detail the quantum memory technique for photons in a double $\Lambda$ atomic ensemble in this work. A novel application of the present technique to create two different quantum probe fields, as well as entangled states of them, is proposed. A larger zero-eigenvalue degeneracy class beyond the dark-state subspace is investigated and the adiabatic condition is confirmed in the present model. We extend the single-mode quantum memory technique to the case with multi-mode probe fields, and reveal the exact pulse matching phenomenon between two quantized pulses in the present system.\\ \pacs{03.67.Mn, 42.50.Gy, 03.65.Fd} \end{abstract} \maketitle \section{introduction} \indent Since the remarkable demonstration of ultraslow light speed in a Bose-Einstein condensate in 1999 \cite{1}, rapid advances have been witnessed in both experimental and theoretical aspects towards probing the novel mechanism of Electromagnetically Induced Transparency (EIT) \cite{2} and its many potential applications \cite{3,4,5,entangled,wu}.
Particularly, based on ``dark-state polaritons'' (DSPs) theory \cite{6}, the quantum memory via the EIT technique is actively being explored by transferring the quantum states of photon wave-packets to metastable collective atomic coherence (collective quasi-spin states) in a loss-free and reversible manner \cite{7}. For the three-level EIT quantum memory technique, a semidirect product group under the condition of large atom number and the low-collective-excitation limit \cite{6} was discovered by Sun {\it et al.} \cite{8}, and the validity of the adiabatic condition for the evolution of DSPs has also been confirmed. As a natural extension, controlled light storing in a medium composed of double $\Lambda$ type four-level atoms was mentioned \cite{9} and briefly studied recently \cite{10}. However, in these previous theoretical works, the probe light is treated as classical \cite{10} and the evolution of the total wave function of the probe pulses and atoms is not clear. Thus many properties of quantum memory with a four-level atomic system have remained unexplored. In this paper, we present a quantum description of DSP theory in such a double $\Lambda$ type atomic ensemble interacting with two quantized fields and two classical control fields. A novel application of our model to create two different quantum probe fields as well as their entangled states is proposed. Furthermore, we extend the single-mode quantum memory technique to the case with multi-mode probe fields, and reveal the exact pulse matching phenomenon between two quantized probe pulses. \section{model} \begin{figure} \caption{Double $\Lambda$ type four-level $^{87}$Rb atoms coupled to two single-mode quantized and two classical control fields (a). The schematic setup for experimental realization is shown in part (b). Co-propagating input probe and control fields are used to avoid Doppler-broadening.
The polarizing beam splitter is used to separate probe photons from control ones.} \label{1} \end{figure} Turning to the situation of Fig. 1(a), we assume that a collection of $N$ double $\Lambda$ type four-level atoms ($^{87}$Rb) interact with two single-mode quantized fields with coupling constants $g_1$ and $g_2$, and two classical control ones with time-dependent real Rabi-frequencies $\Omega_1(t)$ and $\Omega_2(t)$. Generalization to the multi-mode probe pulse case will be studied later. All probe and control fields are co-propagating in the $z$ direction (Fig. 1(b)). Considering all transitions at resonance, the interaction Hamiltonian of the total system can be written as: \begin{eqnarray}\label{eqn:1} \hat V=g_1\sqrt{N}\hat a_1\hat A^{\dag}+\Omega_1\hat T_{ac}+g_2\sqrt{N}\hat a_2\hat D^{\dag}+\Omega_2\hat T_{dc}+h.c., \end{eqnarray} where the collective atomic excitation operators: $\hat A=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\hat\sigma_{ba}^{j}, \ \hat C=\frac{1}{\sqrt{N}}\sum_{j=1}^N\hat\sigma_{bc}^{j}, \ \hat D=\frac {1}{\sqrt{N}}\sum_{j=1}^N\hat\sigma _{bd}^{j}$ with $\hat\sigma^i_{\mu\nu}=|\mu\rangle_{ii}\langle\nu| (\mu,\nu=a,b,c,d)$ being the flip operators of the $i$-th atom between states $|\mu\rangle$ and $|\nu\rangle$, and $\hat T^{-}_{\mu\nu}=\hat T_{\mu\nu}=\sum_{j=1}^{N}\hat\sigma _{\mu\nu}^{j}, \ \ \hat T_{\mu\nu}^{+}=(\hat T^{-}_{\mu\nu})^{\dagger }$ with $\mu\neq\nu=a,c,d$. Denoting by $|b\rangle=|b_1,b_2,...,b_N\rangle$ the collective ground state with all $N$ atoms staying in the same single particle ground state $|b\rangle$, we can easily give other quasi-spin wave states by the collective atomic excitation operators: $|a^n\rangle=[n!]^{-1/2}(\hat A^{\dag})^n|b\rangle$, $|c^n\rangle=[n!]^{-1/2}(\hat C^{\dag})^n|b\rangle$, and $|d^n\rangle=[n!]^{-1/2}(\hat D^{\dag})^n|b\rangle$. Following the analysis in ref. 
\cite{8}, one can verify that the dynamical symmetry of our double $\Lambda$ system is governed by a semidirect sum Lie algebra $su(3)\overline{\otimes}h_3$ in the large-$N$ limit and under the low-excitation condition. To give a clear description of the interesting quantum memory process in this double $\Lambda$ type four-level-atom ensemble, we define the new type of dark-state-polariton operator as \begin{equation}\label{eqn:6} \hat d=\cos\theta\cos\phi\hat a_1-\sin\theta \hat C+\cos\theta\sin\phi\hat a_2, \end{equation} where the mixing angles $\theta$ and $\phi$ are defined through $\tan\theta=g_1\sqrt{N}/\sqrt{\Omega_1^2+\Omega_2^2g_1^2/g_2^2}$ and $\tan\phi=g_1\Omega_2/(g_2\Omega_1)$. By a straightforward calculation one can verify that $[\hat d,\hat d^{\dag}]=1$ and $[\hat V,\hat d\,]=0$, hence the general atomic dark states can be obtained through $|D_n\rangle=[n!]^{-1/2}(\hat d^{\dag})^n|0\rangle$, where $|0\rangle=|b\rangle\otimes|0,0\rangle_e$ and $|0,0\rangle_e$ denotes the electromagnetic vacuum of the two quantized probe fields. We thus obtain \begin{eqnarray}\label{eqn:8} |D_n\rangle&=&\sum^n_{k=0}\sum^{n-k}_{j=0}\sqrt{\frac{n!}{k!(n-k-j)!j!}}(-\sin\theta)^k(\cos\theta)^{n-k}\nonumber\\ &&(\sin\phi)^j(\cos\phi)^{n-k-j}|c^k,n-k-j,j\rangle . \end{eqnarray} From this formula it is clear that when the mixing angle $\theta$ is adiabatically rotated from $0$ to $\pi/2$, the quantum state of the DSPs is transferred from pure photonic character to collective excitations, i.e. $|D_n\rangle: \ \sum^{n}_{j=0}\sqrt{\frac{n!}{(n-j)!j!}} (\sin\phi)^j(\cos\phi)^{n-j}|b\rangle|n-j,j\rangle\rightarrow |c^n\rangle|0,0\rangle$. Similarly, another important physical phenomenon can also be predicted through our quantized description of this system.
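As a concrete consistency check (a sketch, not spelled out in the original text), both claimed relations can be verified directly. In the large-$N$, low-excitation limit $[\hat C,\hat C^{\dag}]\approx 1$, so

```latex
\[
  [\hat d,\hat d^{\dag}]
   = \cos^2\theta\cos^2\phi
     + \sin^2\theta\,[\hat C,\hat C^{\dag}]
     + \cos^2\theta\sin^2\phi \;\approx\; 1 .
\]
For the dark-state property, acting with the interaction Hamiltonian of
eq. (\ref{eqn:1}) on $\hat d^{\dag}|0\rangle$ (the $n=1$ case), the only
surviving terms are
\[
  \hat V\hat d^{\dag}|0\rangle
   = \big(g_1\sqrt{N}\cos\theta\cos\phi-\Omega_1\sin\theta\big)
       |a^1\rangle\otimes|0,0\rangle_e
   + \big(g_2\sqrt{N}\cos\theta\sin\phi-\Omega_2\sin\theta\big)
       |d^1\rangle\otimes|0,0\rangle_e .
\]
Inserting $\cos\phi=g_2\Omega_1/\sqrt{g_2^2\Omega_1^2+g_1^2\Omega_2^2}$
and $\sin\phi=g_1\Omega_2/\sqrt{g_2^2\Omega_1^2+g_1^2\Omega_2^2}$, both
coefficients vanish precisely when
\[
  \tan\theta
   = \frac{g_1 g_2\sqrt{N}}{\sqrt{g_2^2\Omega_1^2+g_1^2\Omega_2^2}}
   = \frac{g_1\sqrt{N}}{\sqrt{\Omega_1^2+\Omega_2^2\,g_1^2/g_2^2}} ,
\]
which is exactly the definition of $\theta$ above.
```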
If initially only one quantized field (described by the coherent state $|\alpha_2\rangle$ with $\alpha_2=\alpha_0$) is injected into the atomic ensemble to couple the transition from $|b\rangle$ to $|d\rangle$, and the second control field is chosen to be much stronger than the first one $(g_1\Omega_2(0)\gg g_2\Omega_1(0))$ along with $\sqrt{g_2^2\Omega_1^2(0)+g_1^2\Omega_2^2(0)}\gg g_1g_2\sqrt{N}$ (or $\sin\phi_0=1$ and $\cos\theta_0=1$), the initial total state of the quantized field and atomic ensemble reads \begin{eqnarray}\label{eqn:9} |\Psi_0\rangle=\sum_{n}P_n(\alpha_0)|0,n\rangle\otimes|b\rangle , \end{eqnarray} where $P_n(\alpha_0)=\frac{\alpha_0^n}{\sqrt{n!}}e^{-|\alpha_0|^2/2}$ is the probability amplitude of the coherent state. Subsequently, the mixing angle $\theta$ is adiabatically rotated to $\pi/2$ by turning off the two control fields, hence the quantum state of the probe light $|\alpha_2\rangle$ is fully mapped into the collective atomic excitations. When both of the two control fields are turned back on and the mixing angle $\theta$ is rotated back to $\theta=0$ with $\phi$ rotated to some value $\phi_e$, which is determined only by the Rabi-frequencies of the two re-applied control fields, we finally obtain from eq.(\ref{eqn:8}) \begin{eqnarray}\label{eqn:10} |\Psi_e\rangle&=&\sum_{j}\sum_{k}P_j(\alpha_{e1})P_k(\alpha_{e2}) |b,j,k\rangle\nonumber\\ &&=|b\rangle\otimes|\alpha_{e1}\rangle\otimes|\alpha_{e2}\rangle, \end{eqnarray} where $\alpha_{e1}=\alpha_0\cos\phi_e$ and $\alpha_{e2}=\alpha_0\sin\phi_e$ are the parameters of the two released coherent lights. The above expression shows that the injected quantized field can convert into two different coherent pulses $|\alpha_{ei}\rangle (i=1,2)$ after a proper evolution manipulated by two control fields.
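The step from eq. (\ref{eqn:8}) to the product of two coherent states is the binomial factorization of a coherent-state distribution over two modes (the same identity underlying a beam splitter); explicitly:

```latex
At $\theta=0$, $\phi=\phi_e$, eq. (\ref{eqn:8}) reduces to
$|D_n\rangle=\sum_{j=0}^{n}\sqrt{\tbinom{n}{j}}\,
(\sin\phi_e)^j(\cos\phi_e)^{n-j}\,|b\rangle|n-j,j\rangle$, so
\begin{align*}
\sum_n P_n(\alpha_0)\,|D_n\rangle
 &= e^{-|\alpha_0|^2/2}\sum_n\sum_{j=0}^{n}
    \frac{(\alpha_0\cos\phi_e)^{n-j}}{\sqrt{(n-j)!}}\,
    \frac{(\alpha_0\sin\phi_e)^{j}}{\sqrt{j!}}\,
    |b\rangle|n-j,j\rangle \\
 &= |b\rangle\otimes|\alpha_0\cos\phi_e\rangle
              \otimes|\alpha_0\sin\phi_e\rangle ,
\end{align*}
using $P_n(\alpha_0)\sqrt{\tbinom{n}{j}}(\sin\phi_e)^j(\cos\phi_e)^{n-j}
= P_{n-j}(\alpha_0\cos\phi_e)\,P_{j}(\alpha_0\sin\phi_e)$, which holds
because $\cos^2\phi_e+\sin^2\phi_e=1$ splits the Gaussian factor
$e^{-|\alpha_0|^2/2}$.
```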
For example, i) if $\phi_e=\pi/2$, we have $\alpha_{e1}=0$ and $\alpha_{e2}=\alpha_0$, which means the released pulse is the same as the initially injected one; ii) if $\phi_e=0$, we have $\alpha_{e1}=\alpha_0$ and $\alpha_{e2}=0$, which means that the injected quantized field state is now fully converted into a different light beam $|\alpha_{e1}\rangle$. Obviously, this novel mechanism can be extended to other cases of the injected field, for example, in the presence of a non-classical or squeezed light beam (see the following discussion). In experiments, the scheme also holds promise for actual observation through, e.g., combining a beam splitter and an electro-optic modulator to generate the requisite sidebands \cite{1}. \section{generation of entangled coherent states} It is interesting to note that when we input a non-classical or squeezed probe light, by properly steering the control fields, we can generate two output entangled light beams. Firstly we consider that the injected quantized field is in a macroscopic quantum superposition of coherent states, e.g. for the initial total state \begin{eqnarray}\label{eqn:initial1} |\Psi_0\rangle^{\pm}=\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}|0\rangle\otimes(|\alpha_0\rangle\pm|-\alpha_0\rangle) \otimes|b\rangle, \end{eqnarray} where the normalization factor ${\cal N}_{\pm}(\alpha_0)=2\pm2e^{-2|\alpha_0|^2}$; with the scheme discussed above and from eq.
(\ref{eqn:8}) we find that the injected quantized pulse can evolve into a very interesting entangled coherent state (ECS) of the two output fields ($|\Psi_0\rangle^{\pm}\rightarrow|\Psi_e\rangle^{\pm}$) \begin{widetext} \begin{eqnarray}\label{eqn:entangled} &\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}|0\rangle\otimes\bigr(|\alpha_0\rangle\pm|-\alpha_0\rangle\bigr) \otimes|b\rangle=\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(|0\rangle\otimes|\alpha_0\rangle\pm|0\rangle\otimes|-\alpha_0\rangle\bigr) \otimes|b\rangle\longrightarrow\nonumber\\ &\longrightarrow\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(\sum_{j}\sum_{k}P_j(\alpha_{e1})P_k(\alpha_{e2}) |b,j,k\rangle\pm\sum_{j}\sum_{k}P_j(-\alpha_{e1})P_k(-\alpha_{e2}) |b,j,k\rangle\bigr). \end{eqnarray} The final state in the above formula can be rewritten as \begin{eqnarray}\label{eqn:entangled1} |\Psi_e\rangle^{\pm}=\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(|\alpha_{e1},\alpha_{e2} \rangle\pm|-\alpha_{e1},-\alpha_{e2}\rangle\bigr)_{light} \otimes|b\rangle, \end{eqnarray} \end{widetext} where the subscript $light$ indicates the state of the two output probe pulses. If $\phi_e=0$, then $\alpha_{e1}=\alpha_0$ and $\alpha_{e2}=0$, and the evolution of the quantized fields proceeds as $|0\rangle\otimes(|\alpha_0\rangle\pm|-\alpha_0\rangle)/\sqrt{{\cal N}_{\pm}(\alpha_0)}\rightarrow(|\alpha_0\rangle\pm|-\alpha_0\rangle)\otimes|0\rangle/\sqrt{{\cal N}_{\pm}(\alpha_0)}$, which means the input Schr\"{o}dinger cat state is now fully converted into another one in a different field mode. On the other hand, for the general case of non-zero values of the coherent parameters $\alpha_{e1}$ and $\alpha_{e2}$, the states of the output quantized fields are entangled coherent states.
Since the parameters $\alpha_{ei} (i=1,2)$ are controllable, the entanglement of the output states \cite{entanglement} $E^{\pm}(\alpha_{e1}, \alpha_{e2})=-$ tr$(\rho^{\pm}_{\alpha_{e1}}\ln\rho^{\pm}_{\alpha_{e1}})$ with the reduced density matrix $\rho^{\pm}_{\alpha_{e1}}=$ tr$^{(\alpha_{e2}, atom)}(|\Psi_e\rangle\langle\Psi_e|)^{\pm}$ can also easily be controlled by the re-applied control fields. In particular, for the initial state $|\Psi_0\rangle^{-}$, if $\phi_e=\pi/4$, we have $\alpha_{e1}=\alpha_{e2}=\alpha_0/\sqrt{2}$ and then we obtain the maximally entangled state (MES): \begin{eqnarray}\label{eqn:entangled2} &&|0\rangle\otimes\bigr(|\alpha_0\rangle-|-\alpha_0\rangle\bigr)/\sqrt{{\cal N}_-(\alpha_0)}\longrightarrow\nonumber\\ &&\longrightarrow\bigr(|\frac{\alpha_0}{\sqrt{2}},\frac{\alpha_0} {\sqrt{2}}\rangle-|-\frac{\alpha_0}{\sqrt{2}},-\frac{\alpha_0}{\sqrt{2}}\rangle\bigr)/\sqrt{{\cal N}_-(\alpha_0)}, \end{eqnarray} which is most useful for quantum information processes. With the definitions of the orthogonal basis $|+\rangle=\bigr(|\frac{\alpha_0}{\sqrt{2}}\rangle+|-\frac{\alpha_0}{\sqrt{2}}\rangle\bigr)/\sqrt{{\cal N}_+(\alpha_0/\sqrt{2})}$ and $|-\rangle=\bigr(|\frac{\alpha_0}{\sqrt{2}}\rangle-|-\frac{\alpha_0}{\sqrt{2}}\rangle\bigr)/\sqrt{{\cal N}_-(\alpha_0/\sqrt{2})}$, the output coherent states can be rewritten as \begin{eqnarray}\label{eqn:entangled3} \Phi_{light}(-)=\frac{1}{\sqrt{2}}\bigr(|+\rangle|-\rangle+|-\rangle|+\rangle\bigr)_{light}, \end{eqnarray} which manifestly has one ebit of entanglement (since $\langle+|-\rangle=0$). We should emphasize that all the above results cannot be obtained with the classical DSP theory of a four-level system. Since our scheme for generating the entangled coherent states via quantized DSP theory is linear and controllable and only requires a macroscopic quantum superposition for the initial state, this scheme may be experimentally feasible, given the considerable progress made in recent years \cite{ent}.
Besides our technique, the generation of entangled coherent states via the Kerr effect \cite{entangled} and entanglement swapping using Bell-state measurements \cite{swap} has also been widely studied. If the two output entangled coherent lights are respectively injected into two other atomic ensembles composed of many three-level atoms, and the quantum states of the lights are mapped into quasi spin-waves via separate Raman transitions, it is possible to generate controllable entangled coherence of two atomic ensembles. Consider now a different type of input quantum state corresponding to a single-photon state, i.e. the initial total state $|\Psi_0\rangle=(|0\rangle\otimes|1\rangle)_{light}\otimes|b\rangle$. According to Eq. (\ref{eqn:8}) and after the light state storage process discussed in section II, with $\phi_e=\pi/4$ the final state reads: \begin{eqnarray}\label{eqn:entangled4} \Phi_{light}=\frac{1}{\sqrt{2}}\bigr(|1\rangle|0\rangle+|0\rangle|1\rangle\bigr)_{light}. \end{eqnarray} The entangled states generated with the present scheme have other interesting aspects. Firstly, since the two output probe fields are different in frequency, the generated entangled state is between two quantized fields with different frequencies. Secondly, since the direction of the output probe field can be fully controlled by the corresponding control field \cite{3}, based on our scheme, the output directions of the two entangled probe fields can be controlled by the two reapplied control fields. These features are advantages of our scheme for generating entangled light fields over schemes based on a standard beam splitter. \section{validity of adiabatic condition} As is well known, the condition of adiabatic evolution is most important for the quantum memory technique based on the quantized DSPs theory, because the total system should be confined to the dark states during the process of quantum memory.
One can verify that when $g_1\neq g_2$, no larger zero-eigenvalue subspace exists beyond the dark states, and the adiabatic condition can be guaranteed by the adiabatic theorem. However, the dynamical symmetry of the present system, described by the semidirect sum algebra $su(3)\overline{\otimes}h_3$, indicates that, for the special case $g_1=g_2=g$, we may find a larger degeneracy class of states with zero eigenvalue in this system. We define \begin{eqnarray}\label{eqn:operator1} \hat Q_{\pm}^{\dag}=\hat u^{\dag}\pm\hat b^{\dag}, \ \ \hat P_{\pm}^{\dag}=-\sin\phi\hat a_1^{\dag}+\cos\phi\hat a_2^{\dag}\pm\hat v^{\dag}, \end{eqnarray} where the operators $\hat u$, $\hat v$ and the bright-state-polariton (BSP) operator $\hat b$ are defined as: $\hat u=\cos\phi\hat A+\sin\phi\hat D , \ \hat v=-\sin\phi\hat A+\cos\phi\hat D$ and $\hat b =\sin\theta\cos\phi\hat a_1+\cos\theta\hat C +\sin\theta\sin\phi\hat a_2$. By a straightforward calculation one obtains the commutation relations $[\hat V,\hat Q_{\pm}^{\dag}]=\pm\epsilon_1\hat Q_{\pm}^{\dag}$, $[\hat V,\hat P_{\pm}^{\dag}]=\pm\epsilon_2\hat P_{\pm}^{\dag}$ and $[\hat P_{\pm}^{\dag},\hat Q_{\pm}^{\dag}]=0$ with $\epsilon_1=\sqrt{g^2N+\Omega_1^2+\Omega_2^2}$ and $\epsilon_2=g\sqrt{N}$. Thus we further obtain \begin{eqnarray}\label{eqn:com} [\hat V,\hat P_{\pm}^{\dag}\hat Q_{\pm}^{\dag}]=\pm(\epsilon_1+\epsilon_2)\hat P_{\pm}^{\dag}\hat Q_{\pm}^{\dag}. \end{eqnarray} At this point we have obtained all the commutation relations between the above operators. Thanks to these results we finally obtain a much larger degeneracy class: \begin{eqnarray}\label{eqn:degen} |r(i,j;k,l;n)\rangle=\frac{1}{\sqrt{i!j!k!l!}}(\hat Q_+^{\dag})^i(\hat Q_-^{\dag})^j(\hat P_+^{\dag})^k(\hat P_-^{\dag})^l|D_n\rangle\nonumber, \end{eqnarray} with eigenvalue $E(i,j;k,l)=(i-j)\epsilon_1+(k-l)\epsilon_2$.
Obviously, when $i=j$ and $k=l$, one finds the zero-eigenvalue degeneracy class is \begin{eqnarray}\label{eqn:11} |d(i,k;n)\rangle&=&\frac{1}{i!k!}(\hat Q_+^{\dag}\hat Q_-^{\dag})^i(\hat P_+^{\dag}\hat P_-^{\dag})^k|D_n\rangle,\\ &&(i,k,n=0,1,2,\cdots)\nonumber. \end{eqnarray} The larger class $\{|d(i,k;n)\rangle \mid i,k,n=0,1,2,\cdots \}$ of states of zero eigenvalue is constructed by acting with ($\hat Q_+^{\dag}\hat Q_-^{\dag}$) $i$ times and with ($\hat P_+^{\dag}\hat P_-^{\dag}$) $k$ times on the dark state $|D_n\rangle$. Only when $i=0$ and $k=0$ does the larger degeneracy class reduce to the special dark-state subset $\{|D_n\rangle \mid n=0,1,2,\cdots \}$ of the interaction Hamiltonian. As usual, the quantum adiabatic theorem does not forbid transitions between states of the same eigenvalue, hence it is important also in the present four-level-atom system to confirm that any transitions from the dark states $|D_n\rangle$ to $\{ |d(i,k;n)\rangle \mid ik\neq0,\ n=0,1,2,\cdots \}$ are forbidden. Generally this problem can be studied by defining the zero-eigenvalue subspaces ${\bf S}^{[i,k]}=\{|d(i,k;n)\rangle \mid n=0,1,2,\cdots \}$, in which ${\bf S}^{[0,0]}={\bf S} $ is the dark-state subspace. The complementary part of the direct sum ${\bf DS}={\bf S}^{[0,0]}\oplus $ $ {\bf S}^{[0,1]}\oplus$ $ {\bf S}^{[1,0]}\oplus\cdots$ of all zero-eigenvalue subspaces is denoted by ${\bf ES}={\bf S}^{[\bf ES]}$, in which each state turns out to have some nonzero eigenvalue after some calculations.
Any state $|\phi ^{\lbrack i,k]}(t)\rangle =\sum_{i,k;n}c_{n}^{[i,k]}(t)|d(i,k;n)\rangle $ in ${\bf S} ^{[i,k]}$ evolves according to \cite{8} \begin{equation}\label{eqn:14} i\frac{d}{dt}c_{n}^{[i,k]}(t)=\sum_{i^{\prime },k^{\prime };n^{\prime }}D_{i,k;n}^{i^{\prime },k^{\prime };n^{\prime }}c_{n^{\prime }}^{[i^{\prime },k^{\prime }]}(t)+F[{\bf ES}], \end{equation} where $F[{\bf ES}]$, which can be ignored under adiabatic conditions \cite{8,11}, represents a certain functional of the complementary states and $D_{i,k;n}^{i^{\prime },k^{\prime };n^{\prime }}=-i\langle d(i^{\prime },k^{\prime };n^{\prime })|\partial _{t}|d(i,k;n)\rangle=-i\dot{\theta}\langle d(i^{\prime },k^{\prime };n^{\prime })|\partial _{\theta}|d(i,k;n)\rangle-i\dot{\phi}\langle d(i^{\prime },k^{\prime };n^{\prime })|\partial _{\phi}|d(i,k;n)\rangle $ with $\dot{\theta}=d\theta/dt$ and $\dot{\phi}=d\phi/dt$. With the definitions of these operators, we can easily calculate: \begin{eqnarray}\label{eqn:15} \partial_{\theta}\hat b=\hat d, \ \partial_{\theta}\hat d=-\hat b;\nonumber\\ \partial_{\phi}\hat b=\sin\theta\hat s, \ \partial_{\phi}\hat u=\hat v, \\ \partial_{\phi}\hat v=-\hat u, \ \partial_{\phi}\hat a=\hat s, \ \partial_{\phi}\hat s=-\hat a \nonumber, \end{eqnarray} where $\hat a= \cos\phi\hat a_1+\sin\phi\hat a_2$ and $\hat s= -\sin\phi\hat a_1+\cos\phi\hat a_2$. From these results one can finally determine that the expressions for $\partial _{\theta }|d(i,k;n)\rangle $ and $\partial _{\phi }|d(i,k;n)\rangle $ do not contain the term $|d(i^{\prime },k^{\prime };n^{\prime })\rangle$, hence $\langle d(i^{\prime },k^{\prime };n^{\prime })|\partial _{t}|d(i,k;n)\rangle=0$ and the evolution equation yields $\frac{d}{dt}c_{n}^{[i,k]}(t)=0$, i.e., there is no mixing of different zero-eigenvalue subspaces during the adiabatic process and therefore, even for the special case of $g_1=g_2$, quantum memory may still be robust in the present double $\Lambda$ type atomic ensemble.
\section{quantum memory for multi-mode quantized fields} In this section we shall extend the technique of quantum memory for a single-mode field to the multi-mode case in the double $\Lambda$ atomic-ensemble system. The two quantized fields described by the slowly-varying dimensionless operators are given by \begin{equation}\label{eqn:47} \hat{\cal E}_j(z,t) =\sum_k \hat a_{k_j}(t){\rm e}^{-i\frac{\nu_j}{c}(z-ct)}, \ (j=1,2), \end{equation} where $\nu_1=\omega_{ab}, \nu_2=\omega_{db}$ are the carrier frequencies of the two quantized optical fields. If the (slowly-varying) quantum amplitude does not change much in a small length interval $\Delta z$ which contains $N_z\gg 1$ atoms, we can introduce continuous atomic variables \cite{6} \begin{eqnarray}\label{eqn:48} \widetilde\sigma_{\mu\nu}(z,t) =\frac{1}{N_z}\sum_{z_j\in N_z}{\hat\sigma}_{\mu\nu}^j(t), \end{eqnarray} where $\hat\sigma_{\mu\nu}^j=|\mu_j\rangle\langle\nu_j| \, {\rm e}^{-i\frac{\omega_{\mu\nu}}{c}(z-ct)}$ is the slowly-varying part of the atomic flip operators.
Making the replacement $\sum_{j=1}^N \longrightarrow \frac{N}{L}\int {\rm d} z$ with $L$ the length of the interaction region in the propagation direction of the quantized field, the interaction Hamiltonian then becomes \begin{eqnarray}\label{eqn:49} {\hat V}&=&-\int\frac{dz}{L}\bigr(\hbar g_1N\widetilde\sigma_{ab}(z,t)\hat {\cal E}_1(z,t)+\hbar\Omega_1(t) N\widetilde\sigma_{ac}(z,t)\nonumber\\ &&+\hbar g_2N\widetilde\sigma_{db}(z,t)\hat {\cal E}_2(z,t)+\hbar\Omega_2(t)N\widetilde\sigma_{dc}(z,t)\\ &&+h.c.\bigr).\nonumber \end{eqnarray} The evolution of the Heisenberg operators $\hat{\cal E}_i(z,t)$ corresponding to the two quantum fields can be described by the propagation equations \begin{eqnarray}\label{50} \left(\frac{\partial}{\partial t}+c\frac{\partial}{\partial z}\right) \hat {\cal E}_1(z,t)= ig_1N\widetilde\sigma_{ba}(z,t) \end{eqnarray} and \begin{eqnarray}\label{51} \left(\frac{\partial}{\partial t}+c\frac{\partial}{\partial z}\right) \hat {\cal E}_2(z,t)= ig_2N\widetilde\sigma_{bd}(z,t). \end{eqnarray} Under the low-excitation condition, i.e. $\widetilde\sigma_{bb}\approx1$, the atomic evolution is governed by the Heisenberg-Langevin equations \begin{eqnarray}\label{eqn:52} \dot{\widetilde\sigma}_{ba}=-\gamma_{ba} {\widetilde\sigma}_{ba} +ig_1\hat{\cal E}_1+i\Omega_1{\widetilde\sigma}_{bc} +F_{ba}, \end{eqnarray} \begin{eqnarray}\label{eqn:53} \dot{\widetilde\sigma}_{bc}= i\Omega_1{\widetilde\sigma}_{ba} -ig_1\hat {\cal E}_1{\widetilde\sigma}_{ac}+i\Omega_2{\widetilde\sigma}_{bd} -ig_2\hat {\cal E}_2{\widetilde\sigma}_{dc}, \end{eqnarray} \begin{eqnarray}\label{eqn:54} \dot{\widetilde\sigma}_{bd}=-\gamma_{bd} {\widetilde\sigma}_{bd} +ig_2\hat{\cal E}_2+i\Omega_2{\widetilde\sigma}_{bc} +F_{bd}, \end{eqnarray} where $\gamma_{\mu\nu}$ are the transversal decay rates, which will be assumed to satisfy $\gamma_{ba}=\gamma_{bd}=\Gamma$ in the following derivation, and $F_{\mu\nu}$ are $\delta$-correlated Langevin noise operators. From Eqs.
(\ref{eqn:52}) and (\ref{eqn:54}) we find, to lowest (zeroth) order, \begin{eqnarray}\label{eqn:coherence1} {\widetilde\sigma}_{ba}=(ig_1\hat{\cal E}_1+i\Omega_1{\widetilde\sigma}_{bc} +F_{ba})/\Gamma, \end{eqnarray} \begin{eqnarray}\label{eqn:coherence2} {\widetilde\sigma}_{bd}=(ig_2\hat{\cal E}_2+i\Omega_2{\widetilde\sigma}_{bc} +F_{bd})/\Gamma. \end{eqnarray} Substituting the above two formulae into Eq. (\ref{eqn:53}) yields \begin{eqnarray}\label{eqn:coherence3} \dot{\widetilde\sigma}_{bc}= -\Gamma^{-1}\Omega_0^2{\widetilde\sigma}_{bc} -\Gamma^{-1}(g_1\Omega_1\hat {\cal E}_1+g_2\Omega_2\hat {\cal E}_2), \end{eqnarray} where $\Omega_0=\sqrt{\Omega_1^2+\Omega_2^2}$. The Langevin noise terms are neglected in the above results. For our purpose we shall calculate $\widetilde\sigma_{bc}$ to first order, so \begin{eqnarray}\label{eqn:coherence4} \widetilde\sigma_{bc}\approx-\frac{1}{\Omega_0^2}(g_1\Omega_1\hat {\cal E}_1+g_2\Omega_2\hat {\cal E}_2)\nonumber\\ +\frac{\Gamma}{\Omega_0^4}(g_1\Omega_1\partial_t\hat {\cal E}_1+g_2\Omega_2\partial_t\hat {\cal E}_2). \end{eqnarray} Following the earlier discussion, the dark- and bright-state polaritons in the multi-mode case can be defined in continuous form: \begin{eqnarray}\label{eqn:DSP} \hat\Psi(z,t)=\cos\theta(t)\hat {\cal E}_{12}(z,t) - \sin\theta(t)\, \sqrt{N}\, \widetilde\sigma_{bc}(z,t), \end{eqnarray} \begin{eqnarray}\label{eqn:BSP} \hat\Phi(z,t)=\sin\theta(t)\hat {\cal E}_{12}(z,t) + \cos\theta(t)\, \sqrt{N}\, \widetilde\sigma_{bc}(z,t), \end{eqnarray} where $\hat {\cal E}_{12}(z,t)=\cos\phi(t)\, \hat {\cal E}_1(z,t)+\sin\phi(t)\, \hat {\cal E}_2(z,t)$ is the superposition of the two quantized probe fields. One can transform the equations of motion for the electric field and the atomic variables into the new field variables.
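To make the adiabatic-elimination step above explicit: solving Eq. (\ref{eqn:coherence3}) for $\widetilde\sigma_{bc}$ and iterating once, treating $\Omega_{1,2}$ as slowly varying, gives \begin{eqnarray} \widetilde\sigma_{bc}^{(0)}&=&-\frac{1}{\Omega_0^2}\bigl(g_1\Omega_1\hat {\cal E}_1+g_2\Omega_2\hat {\cal E}_2\bigr),\nonumber\\ \widetilde\sigma_{bc}^{(1)}&=&\widetilde\sigma_{bc}^{(0)}-\frac{\Gamma}{\Omega_0^2}\,\partial_t\widetilde\sigma_{bc}^{(0)}, \end{eqnarray} which reproduces Eq. (\ref{eqn:coherence4}).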
Similar to the single-mode case, we consider the low-excitation approximation and find \begin{eqnarray}\label{eqn:DSP1} \biggl[\frac{\partial}{\partial t} +c\cos^2\theta \frac{\partial}{\partial z}\biggr]\, \hat\Psi(z,t) =\dot\phi\sin\theta\cos^2\theta \, \hat s(z,t)\nonumber\\ -\dot\theta\, \hat\Phi(z,t) -\sin\theta \cos\theta\, c\frac{\partial}{\partial z}\hat\Phi(z,t), \end{eqnarray} and \begin{eqnarray}\label{eqn:BSP1} \hat\Phi&=&\frac{\Gamma}{g_1g_2\sqrt{N}}\frac{\cos^2\theta}{\Omega^2_0}\tan\theta\frac{\partial}{\partial t}(\sin\theta\,\hat\Psi-\cos\theta\,\hat\Phi)\nonumber\\ &&-\sin\theta(\Omega_2^2-\Omega_1^2)\hat s(z,t), \end{eqnarray} where we have defined $\hat s(z,t)=-\sin\phi(t)\, \hat {\cal E}_1(z,t)+\cos\phi(t)\, \hat {\cal E}_2(z,t)$. It is easy to see that when $\hat s=0$ the total system reduces to the usual three-level one. To this end we calculate the equation of motion of $\hat s(z,t)$ in order to study the adiabatic condition. From Eqs. (\ref{50}) and (\ref{51}), together with the results for ${\widetilde\sigma}_{ba}$ and ${\widetilde\sigma}_{bd}$, one can verify that \begin{eqnarray}\label{eqn:58} (\frac{\partial}{\partial t}+c\cos^2\beta\frac{\partial}{\partial z})\hat s&=&-\frac{(g_1^2\Omega_2^2+g_2^2\Omega_1^2)N}{\Gamma}\frac{\cos^2\beta}{\Omega_0^2}\hat s\nonumber\\ &&-\frac{1}{2}g_1g_2\sqrt{N}\sin2\beta\frac{\partial}{\partial t}\hat {\cal E}_{12}, \end{eqnarray} with \begin{eqnarray}\label{eqn:mix3} \tan^2\beta=\frac{N\Omega_1^2\Omega_2^2}{g_1^2\Omega_2^2+g_2^2\Omega_1^2}\frac{(g_1^2-g_2^2)^2}{\Omega_0^2}. \end{eqnarray} The time derivative of the mixing angle $\phi$ is neglected in the above equation. The first term on the right-hand side of Eq. (\ref{eqn:58}) describes a strong absorption of $\hat s(z,t)$, which causes the field $\hat s(z,t)$ to be quickly reduced to zero, so that the present system reaches pulse matching \cite{10,match,liu}: $\hat {\cal E}_2\rightarrow\tan\phi\hat {\cal E}_1$.
For a numerical estimate, we take typical values \cite{3} $g_1\approx g_2\sim10^5\,{\rm s}^{-1}$, $N\approx10^8$, $\Gamma\approx10^8\,{\rm s}^{-1}$; then the lifetime of the field $\hat s(z,t)$ is about $\Delta t\sim10^{-10}\,{\rm s}$, which is much shorter than the storage time \cite{3}. Furthermore, by introducing the adiabaticity parameter $\tau=(g_1g_2\sqrt{N}T/\Gamma)^{-1}$, we calculate the lowest order in Eq. (\ref{eqn:BSP1}) and thus obtain $\hat\Phi\approx0, \hat s\approx0$. Then the formula (\ref{eqn:DSP1}) reduces to the equation of motion of the DSPs defined in the usual three-level $\Lambda$ type system. Consequently we have \begin{eqnarray}\label{eqn:59} \hat {\cal E}_1(z,t)=\cos\theta(t)\cos\phi(t)\hat \Psi(z,t) \end{eqnarray} \begin{eqnarray}\label{eqn:60} \hat {\cal E}_2(z,t)=\cos\theta(t)\sin\phi(t)\hat \Psi(z,t) \end{eqnarray} \begin{eqnarray}\label{eqn:61} \sqrt{N}\widetilde\sigma_{bc}(z,t)=-\sin\theta(t)\hat \Psi(z,t) \end{eqnarray} where $\hat\Psi$ obeys the very simple equation of motion \begin{eqnarray}\label{eqn:62} \biggl[\frac{\partial}{\partial t} +c\cos^2\theta \frac{\partial}{\partial z}\biggr]\, \hat\Psi(z,t) =0. \end{eqnarray} The above results clearly show that, for example, if the initial condition reads $\theta\rightarrow0$ and $\phi\rightarrow0$, i.e., initially the external control fields dominate, $\sqrt{g_2^2\Omega_1^2+g_1^2\Omega_2^2}\gg g_1g_2\sqrt{N}$, and $g_2\Omega_1(0)\gg g_1\Omega_2(0)$ (the first control field is much stronger than the second one), then only $\hat{\cal E}_1(z,t)$ is injected into the medium and the polariton is $\hat\Psi=\hat{\cal E}_1(z,t)$. By adjusting the control fields so that $\sqrt{g_2^2\Omega_1^2+g_1^2\Omega_2^2}\ll g_1g_2\sqrt{N}$, the polariton evolves into $\hat\Psi=-\sqrt{N}\widetilde\sigma_{bc}(z,t)$ and the quantum information of the input probe pulse is stored.
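The order-of-magnitude lifetime estimate above is easy to reproduce (a back-of-envelope sketch: we take $g_1\approx g_2\equiv g$ and drop factors of order unity, so the absorption rate of $\hat s$ in Eq. (\ref{eqn:58}) is roughly $g^2N/\Gamma$):

```python
# Back-of-envelope lifetime of the field s(z,t); all rates in s^-1.
# Assumes g1 ~ g2 ~ g and drops factors of order unity, so the
# absorption rate of s is kappa ~ g^2 N / Gamma.
g = 1e5        # atom-field coupling, g1 ~ g2 ~ 1e5 s^-1
N = 1e8        # number of atoms
Gamma = 1e8    # transverse decay rate

kappa = g**2 * N / Gamma   # absorption rate of s: 1e10 s^-1
dt = 1.0 / kappa           # lifetime of s
print(f"lifetime of s ~ {dt:.1e} s")   # ~1e-10 s, well below the storage time
```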
As in the analysis of Section II, when the mixing angle $\theta$ is rotated back to $\theta=0$ with $\phi$ set to some value $\phi_e$ solely determined by the Rabi frequencies of the two reapplied control fields, one finds from the formulae (\ref{eqn:59}) and (\ref{eqn:60}) that another quantum field $\hat {\cal E}_2(z,t)$ is created. The amplitudes of the two output quantum fields are controllable by the reapplied control fields. Now we shall give a brief discussion of the bandwidth of the probe fields that can be stored. As an example, we deal with the first probe field (the discussion for the other probe field is similar). According to the results of Eq. (\ref{eqn:59}), the spectral width of the probe field narrows (broadens) as the mixing angles change: \begin{eqnarray}\label{eqn:band1} \Delta\omega_{p1}(t)\approx\frac{\cos^2\theta(t)\cos^2\phi(t)}{\cos^2\theta(0) \cos^2\phi(0)}\Delta\omega_{p1}(0). \end{eqnarray} Under the present adiabatic condition, the field ${\cal E}_{12}(z,t)$ propagates in the same way as the probe field in the three-level case, so according to the previous results \cite{6} its EIT transparency window is \begin{eqnarray}\label{eqn:band2} \Delta\omega_{tr}(t)=\frac{\cot^2\theta(t)}{\cot^2\theta(0)}\Delta\omega_{tr}(0). \end{eqnarray} On the other hand, we have the relation ${\cal E}_{1}(z,t)=\cos\phi(t){\cal E}_{12}(z,t)$, while the wave-packet lengths remain constant during propagation (note that the Rabi frequencies of the control fields are independent of space in the present case). Therefore, the transparency window of the field ${\cal E}_{1}(z,t)$ satisfies \begin{eqnarray}\label{eqn:band3} \frac{\Delta\omega^{p1}_{tr}(t)}{\Delta\omega^{p1}_{tr}(0)} \approx\frac{\cos^2\phi(t)}{\cos^2\phi(0)}\frac{\Delta\omega_{tr}(t)}{\Delta\omega_{tr}(0)}.
\end{eqnarray} Combining the three equations (\ref{eqn:band1})--(\ref{eqn:band3}), we easily find \begin{eqnarray}\label{eqn:band4} \frac{\Delta\omega_{p1}(t)}{\Delta\omega^{p1}_{tr}(t)} =\frac{\sin^2\phi(t)}{\sin^2\phi(0)}\frac{\Delta\omega_{p1}(0)}{\Delta\omega^{p1}_{tr}(0)}. \end{eqnarray} In the practical case, $\sin^2\phi(t)/\sin^2\phi(0)$ is always close to unity. Thus absorption can be prevented as long as the input pulse spectrum lies in the initial transparency window: \begin{eqnarray}\label{eqn:band5} \Delta\omega_{p1}(0)\ll\Delta\omega^{p1}_{tr}(0). \end{eqnarray} Obviously, this result is similar to the requirement in the usual three-level ensemble case \cite{6} and can easily be fulfilled when an optically dense medium is used. Finally, we shall give a brief estimate of the effect of atomic motion. In fact, atomic motion leads to an additional phase evolution in the flip operators. For example, for an atom at position $\vec r_j$ we have \begin{eqnarray}\label{eqn:motion1} \hat\sigma_{bc}\rightarrow\hat\sigma_{bc}e^{i\Delta\varphi_j(\vec r_j)}, \end{eqnarray} where $\Delta\varphi_j(\vec r_j)=\Delta\vec k\cdot\vec r_j(t)$ with $\Delta\vec k=\vec k_{cj}-\vec k_{pj}$. Here $\vec k_{cj}$ and $\vec k_{pj}$ are the wave vectors of the control and probe fields, respectively, and for convenience we may assume $\vec k_{c1}-\vec k_{p1}=\vec k_{c2}-\vec k_{p2}$. The above equation shows that free motion results in a highly inhomogeneous phase distribution over atoms at different positions, and thus causes decoherence of the quantum states. Under the adiabatic condition, atomic free motion can be modeled as Wiener diffusion \cite{decoherence}. According to the results of Ref. \cite{decoherence}, the decoherence of a state $|D_n\rangle$ is characterized by the factor $e^{-nDt}$, where $D$ is the constant diffusion rate. On the other hand, for our model we can use co-propagating probe and control fields (see Fig. 1 (b)) so that $\vec k_{cj}\approx\vec k_{pj}$ $(j=1,2)$.
Such a configuration can greatly reduce the phase diffusion and thus avoid the decoherence induced by atomic free motion. \section{conclusions} In conclusion, we have presented a detailed quantized description of DSP theory in a double $\Lambda$ type four-level atomic ensemble interacting with two quantized probe fields and two classical control ones, focusing on the dark-state evolution and the interesting quantum memory process in this configuration. This problem is of interest because, i) rather than recovering a single state of a given probe light, the injected quantized field can be converted into two different output pulses by properly steering the two control fields; ii) by preparing the probe field in a non-classical state, e.g. a macroscopic quantum superposition of coherent states, a feasible scheme to generate optical entangled states is theoretically revealed in this controllable linear system, which may open up the way for DSP-based quantum information processing. The larger class of zero-eigenvalue states beyond dark states is identified for this system and, even in the presence of level degeneracy, we still confirm the validity of the adiabatic passage conditions and thereby the robustness of the quantum memory process. Furthermore, we extend the single-mode quantum memory technique to the case of multi-mode probe fields, and reveal the exact pulse-matching phenomenon between the two quantized probe pulses in the present system. This work suggests many other interesting ways forward; for example, by applying forward and backward control fields in our system, we may obtain stationary pulses of entangled states of light fields \cite{lukin}. Other issues related to interesting statistical phenomena, such as spin squeezing \cite{12}, and possible manipulation of quantum information \cite{13} may also be subjects of future studies. \noindent We thank professors Yong-Shi Wu and J. L. Birman for valuable discussions. We also thank Xin Liu and Min-Si Li for helpful suggestions.
This work is supported by NUS academic research Grant No. WBS: R-144-000-071-305, and by NSF of China under grants No.10275036 and No.10304020. \noindent \end{document}
\begin{document} \title{Demonstration of Shor encoding on a trapped-ion quantum computer} \author{Nhung H. Nguyen} \affiliation{Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA} \author{Muyuan Li} \affiliation{Departments of Electrical and Computer Engineering, Chemistry, and Physics, Duke University, Durham, NC 27708, USA} \author{Alaina M. Green} \affiliation{Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA} \author{Cinthia Huerta Alderete} \affiliation{Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA} \author{Yingyue Zhu} \affiliation{Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA} \author{Daiwei Zhu} \affiliation{Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA} \author{Kenneth R. Brown} \affiliation{Departments of Electrical and Computer Engineering, Chemistry, and Physics, Duke University, Durham, NC 27708, USA} \author{Norbert M. Linke} \affiliation{Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA} \date{\today} \begin{abstract} Fault-tolerant quantum error correction (QEC) is crucial for unlocking the true power of quantum computers. QEC codes use multiple physical qubits to encode a logical qubit, which is protected against errors at the physical qubit level. Here we use a trapped ion system to experimentally prepare $m$-qubit GHZ states and sample the measurement results to construct $m\times m$ logical states of the $[[m^2,1,m]]$ Shor code, up to $m=7$. The synthetic logical fidelity shows how deeper encoding can compensate for additional gate errors in state preparation for larger logical states.
However, the optimal code size depends on the physical error rate and we find that $m=5$ has the best performance in our system. We further realize the direct logical encoding of the $[[9,1,3]]$ Shor code on nine qubits in a thirteen-ion chain for comparison, with $98.8(1)\%$ and $98.5(1)\%$ fidelity for the states $\ket{+}_L$ and $\ket{-}_L$, respectively. \end{abstract} \maketitle \section{Introduction} Fault-tolerant logical qubit encoding and fault-tolerant operations are required for executing quantum algorithms of sufficient depth to solve relevant problems \cite{gidneyHowFactor20482019,vonburgQuantumComputingEnhanced2020,shawQuantumAlgorithmsSimulating2020}. Fault-tolerant operations, such as state preparation, syndrome measurement, error correction, logical gates, and measurements, are designed such that any physical-level error they introduce is corrected at the logical level \cite{nielsenQuantumComputationQuantum2011}. When the physical error rate is below a certain threshold, the logical error can be made arbitrarily small by concatenation, i.e. using multiple layers of encoding \cite{knillThresholdAccuracyQuantum1996}, or taking advantage of natural robustness within the system \cite{kitaevFaulttolerantQuantumComputation2003}. The optimal method for fault-tolerant quantum computation is unknown and current methods offer trade-offs between encoding rate, threshold \cite{KovalevPryadkoPRA2013,LiQCE2020}, and the number of available fault-tolerant gates \cite{Kesselring2018boundariestwist,GutierrezPRA2019,KrishnaPRX2019}. The same is true for near-term quantum error correction where only a limited amount of protection from physical-level errors will be available. The Shor code \cite{shorSchemeReducingDecoherence1995} protects against all physical single-qubit Pauli errors.
While the canonical $[[9,1,3]]$ code is based on triple modular redundancy, larger $[[m^2,1,m]]$ Shor codes can be generated using $m$-modular redundancy, where $m$ is the number of physical qubits in each module. The Shor code, together with the rotated surface code \cite{tomitaLowdistanceSurfaceCodes2014} and the Bacon-Shor subsystem code \cite{baconOperatorQuantumErrorcorrecting2006}, is an example of a compass code \cite{li2DCompassCodes2019}. The surface code has high memory and circuit-level thresholds, and treats phase- and bit-flip errors equivalently \cite{raussendorfFaultTolerantQuantumComputation2007,fowlerPracticalClassicalProcessing2012,stephensFaulttolerantThresholdsQuantum2014}. The Bacon-Shor code has no asymptotic threshold with $m$ for either $X$ or $Z$ errors \cite{nappOptimalBaconShorCodes2012}. The Shor code on the other hand has a memory threshold of 50\% for $Z$ errors and no threshold for $X$ errors as $m$ increases. In practice, this means that, for any physical error rate, there is an optimal size for the Shor and Bacon-Shor codes \cite{nappOptimalBaconShorCodes2012}. These optimal codes can then be concatenated in a modular fashion to further improve performance \cite{aliferisSubsystemFaultTolerance2007}. Theoretical investigations comparing the 17-qubit rotated surface code \cite{tomitaLowdistanceSurfaceCodes2014} to a compass code on a 3$\times$3 qubit lattice find the latter to have much better performance in a realistic ion trap error model \cite{debroyLogicalPerformanceQubit2020}. In this paper, we find the optimal size $m$ of the Shor code that can be implemented on a particular trapped-ion quantum computer and investigate how measurements on a few qubits can predict the performance of larger systems. 
Trapped ions are a promising platform for realizing a large-scale fault-tolerant quantum computer due to their long coherence time \cite{wangSinglequbitQuantumMemory2017}, high connectivity \cite{Linke3305,wrightBenchmarking11qubitQuantum2019}, high-fidelity single- and two-qubit gates \cite{ballanceHighFidelityQuantumLogic2016,gaeblerHighFidelityUniversalGate2016a}, and scalable architectures \cite{pinoDemonstrationQCCDTrappedion2020a,monroeLargescaleModularQuantumcomputer2014}. Also, different components needed for fault-tolerance have been successfully demonstrated on trapped ions, such as logical state preparation \cite{niggExperimentalQuantumComputations2014,eganFaultTolerantOperationQuantum2021}, single-qubit logical operations \cite{niggExperimentalQuantumComputations2014,fluhmannEncodingQubitTrappedion2019,eganFaultTolerantOperationQuantum2021}, quantum error-detection with stabilizer readout \cite{linkeFaulttolerantQuantumError2017,eganFaultTolerantOperationQuantum2021}, magic state preparation \cite{eganFaultTolerantOperationQuantum2021} and multiple rounds of feedback-correction \cite{negnevitskyRepeatedMultiqubitReadout2018a}. Here we prepare $m$-qubit GHZ states on a trapped-ion system and extrapolate the logical error rate classically in order to emulate state preparation and measurement of an $[[m^2,1,m]]$ Shor code, where $m=3,4,5,6,7$. This emulation yields the optimal code size for our current system. We then compare the emulated $m=3$ results to the full $3\times3$ code state preparation on nine physical qubits. The structure of the paper is as follows. In \cref{sec:bs-review} we review the Shor code and describe the methods used to study $[[m^2,1,m]]$ codes. In \cref{sec:trappedion} we outline the experimental setup. In \cref{sec:scaling} we present the experimental results on scaling. In \cref{sec:913} we demonstrate the logical basis state preparation of $m=3$ with 9 qubits.
Lastly, in \cref{sec:dis} we discuss the implications of these results for realizing fault-tolerant quantum computing. \section{The Shor code}\label{sec:bs-review} An $[[m^2,1,m]]$ Shor code uses $m\times m$ physical qubits to encode a single logical qubit with distance $m$, i.e. any two orthogonal logical states differ by at least $m$ bit- or phase-flips. It is constructed from the concatenation of an $m$-bit repetition code that corrects X errors with an $m$-bit repetition code that corrects Z errors \cite{shorSchemeReducingDecoherence1995}. Since all Pauli errors can be described as combinations of $Z$ and $X$ errors, measuring the stabilizers returns one of the potential syndromes, which give the location and type of the physical errors. These can then be remedied by applying suitable $X$ and/or $Z$ correction operations. For the $[[9,1,3]]$ Shor code only a single physical error can be diagnosed unambiguously, since it has distance 3. State preparation starts by fault-tolerantly creating a logical basis state, followed by fault-tolerant logical gates to generate a desired logical state $\ket{\psi}_L$. For an $[[m^2,1,m]]$ Shor code, the logical basis is given by $\ket{\pm}_L = \ket{\GHZ m\pm}^{\otimes m}$, where $\ket{\GHZ m\pm}=\frac1{\sqrt{2}}(\ket{0}^{\otimes m}\pm\ket{1}^{\otimes m})$. Since the $\ket{\pm}_L$ are product states of $\ket{\GHZ m\pm}$, we can prepare and measure many copies of a single $\ket{\GHZ m\pm}$ and randomly sample from these copies to artificially construct results corresponding to an $m\times m$ logical state. For example, with $m=3$, the logical states are $\ket{+}_L=\frac{1}{2\sqrt{2}}(\ket{000}+\ket{111})^{\otimes 3} = \bigotimes_{i=1,2,3} \ket{\GHZ3+}_i$ and $\ket{-}_L=\frac{1}{2\sqrt{2}}(\ket{000}-\ket{111})^{\otimes 3} = \bigotimes_{i=1,2,3} \ket{\GHZ3-}_i$. The circuit for encoding the $\ket{+}_L$ separates into three independent sub-circuits for creating three $3$-qubit GHZ states, see \cref{fig:FTShor3}.
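The product structure of the logical basis states can be checked directly. The following numpy sketch (illustrative only, not part of the experimental analysis) builds $\ket{+}_L$ of the $[[9,1,3]]$ code from three copies of $\ket{\GHZ3+}$ and verifies that it is a $+1$ eigenstate of all eight stabilizers of the code:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def operator_on(ops, n=9):
    """Tensor product acting with ops[q] on qubit q (1-based), identity elsewhere."""
    out = np.array([[1.]])
    for q in range(1, n + 1):
        out = np.kron(out, ops.get(q, I2))
    return out

# |GHZ3+> = (|000> + |111>)/sqrt(2)
ghz3 = np.zeros(8)
ghz3[0] = ghz3[7] = 1 / np.sqrt(2)

# |+>_L of the [[9,1,3]] Shor code: product of three GHZ3 states
plus_L = np.kron(ghz3, np.kron(ghz3, ghz3))

# Z-type stabilizers Z_j Z_{j+1} act within the GHZ blocks {1-3},{4-6},{7-9}
for j in (1, 2, 4, 5, 7, 8):
    S = operator_on({j: Z, j + 1: Z})
    assert np.allclose(S @ plus_L, plus_L)

# X-type stabilizers: X1...X6 and X4...X9
for block in (range(1, 7), range(4, 10)):
    S = operator_on({q: X for q in block})
    assert np.allclose(S @ plus_L, plus_L)

print("all eight stabilizers verified on |+>_L")
```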
This is also the case for state preparation of the Bacon-Shor subsystem code \cite{eganFaultTolerantOperationQuantum2021}. This up-sampling allows us to study an $m\times m$-qubit Shor code with only $m$ qubits. However, some physical errors that come with larger system sizes, such as cross-talk, among others \cite{cetinaQuantumGatesIndividuallyAddressed2020}, are underestimated. For a $[[9,1,3]]$ Shor code, there are eight stabilizers: six $Z$ stabilizers, which detect $X$ errors, and two $X$ stabilizers, which detect $Z$ errors \cite{debroyLogicalPerformanceQubit2020}. Therefore the code is better at detecting $X$ errors than $Z$ errors. To detect a single bit-flip within any GHZ sub-group, we measure the $Z$ stabilizers $ Z_jZ_{j+1}$ for the physical qubit index $j=1,2,4,5,7,8$. To detect a single phase-flip error, we measure the $X$ stabilizers $X_1X_2X_3X_4X_5X_6$ and $X_4X_5X_6X_7X_8X_9$. These error detection measurements can be done in a non-destructive way by projecting the parity onto an ancilla qubit and measuring it without disturbing the code qubits \cite{liDirectMeasurementBaconShor2018}. In our experiment we directly perform the projective measurement on the physical qubits and perform the error detection or correction procedure in post-processing. While not realizing a full error correction scheme, we still observe how logical errors scale with the number of physical qubits and gates used for state preparation. \begin{figure} \caption{Circuits for fault-tolerant preparation of logical states (a) $\ket+_L$ and (b) $\ket-_L$ of the $[[m^2,1,m]]$ Shor code for $m=3$. The circuit separates into $m$ groups, each preparing a $\ket{\GHZ m\pm}$ state. } \label{subfig:9plus} \label{subfig:9minus} \label{fig:FTShor3} \end{figure} \section{Trapped ion setup}\label{sec:trappedion} We carry out this experiment on a chain of trapped ions in a linear Paul trap.
Two states in the hyperfine-split $^{2}$S$_{1/2}$ ground level of \Yb, $\ket{F=0,m_F=0}$ and $\ket{F=1,m_F=0}$, form the qubit. The ions are laser-cooled close to the motional ground state and initialized to $\ket{0}$ via optical pumping. Coherent operations are performed with two counter-propagating Raman laser beams, derived from a pulsed laser at $355$~nm. The difference between relevant frequency components is stabilized to the energy splitting of the qubit. One of the Raman beams is split into an array of individual addressing beams, each of which is tightly focused onto exactly one ion, while the other is a global beam that illuminates the entire chain. We have frequency, amplitude, and phase control over each individual beam to selectively apply single-qubit and two-qubit gate operations. Detection is done via state-dependent fluorescence, where each ion is imaged onto one channel of a photo-multiplier tube array. Detailed performance of the system has been described elsewhere \cite{landsmanVerifiedQuantumInformation2019,figgattParallelEntanglingOperations2019}. For this work, we have extended the setup to operate on up to thirteen ions, at most nine of which act as qubits. The native gate set consists of single-qubit rotations around any axis $\vec{n}_\phi$ in the $x$-$y$ plane, $R^j_\phi(\theta)=e^{-i\vec\sigma \cdot \vec{n}_\phi\theta/2}$, rotations around the $z$-axis applied as classical phase shifts, $R^j_z(\theta)=e^{-i\sigma_z\theta/2}$, and two-qubit entangling gates $X_jX_k(\theta)=e^{i\sigma_x^j\sigma_x^k\theta}$ between any pair. These entangling gates are executed via spin-motion coupling based on the M\o lmer-S\o rensen scheme \cite{sorensenQuantumComputationIons1999,Solano99,milburnIonTrapQuantum2000}. We decouple the spin from the harmonic motion of all the modes by implementing a series of amplitude and frequency modulated pulses \cite{choiOptimalQuantumControl2014a,blumelEfficientStabilizedTwoqubit2021}.
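As a schematic illustration of this gate set (using the ideal-unitary conventions above; the experiment of course uses calibrated pulse sequences rather than ideal unitaries), a single fully entangling $XX(\pi/4)$ gate followed by a virtual $z$-rotation already produces a maximally entangled state:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)

def XX(theta):
    """Molmer-Sorensen-type gate, e^{i sx (x) sx theta} (the paper's convention)."""
    return np.cos(theta) * np.eye(4) + 1j * np.sin(theta) * np.kron(sx, sx)

def Rz(theta):
    """Rotation about z, e^{-i sz theta/2}, applied as a classical phase shift."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

psi = XX(np.pi / 4) @ np.array([1, 0, 0, 0], dtype=complex)  # (|00> + i|11>)/sqrt(2)
psi = np.kron(Rz(-np.pi / 2), I2) @ psi                      # remove the relative phase

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
fidelity = abs(np.vdot(bell, psi)) ** 2
print(f"Bell-state fidelity: {fidelity:.6f}")
```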
The fidelity for both single- and two-qubit gates is mainly limited by beam misalignment, beam-pointing instabilities, imperfect Stark-shift compensation and axial micromotion for all but the center ion. We do not have the ability to apply a quartic axial potential in order to space the ions equally. As a result, the alignment of the equally-spaced individual addressing beams worsens for larger numbers of ions in the trap. We mitigate this effect to some degree by using up to two additional ions at each end of the chain. For five qubits, we trap seven ions; for seven qubits, we trap nine ions; for nine qubits, we trap thirteen ions. Trap imperfections also cause an unwanted axial radio-frequency (RF) field component that leads to axial micromotion. Therefore, in our setup, the single-qubit and two-qubit gate fidelities tend to decrease with the number of ions. The average fidelity for single-qubit gates (except $R_z$) in our experiment is $99.0(5)\%$ after correcting for state-preparation-and-measurement (SPAM) error. Typical fidelities for two-qubit gates are $99\%$ for a five-qubit system, $98.5\%$ for a seven-qubit system and $98\%$ for a nine-qubit system. \section{Scaling of the Shor code}\label{sec:scaling} We use the circuit in \cref{fig:m-cat} to create $\ket{\GHZ m\pm}$ states using our native gate set. The results of measuring in the $Z$-basis for $m=3$ are shown in \cref{fig:GHZ3Xbasis}~(a). To measure in the $X$-basis, we apply $H^{\otimes m}$ before detection which creates an equal superposition of all even- and odd-parity computational states for $\ket{\GHZ m+}$ and $\ket{\GHZ m-}$, respectively, i.e. $\avg{X^{\otimes m}}=1$ and $-1$. The results for $m=3$ are shown in \cref{fig:GHZ3Xbasis}~(b).
The probability of measuring the $\ket{\GHZ m \pm}$ in the $Z$ or $X$ basis, shown in \cref{fig:scaling}~(a), is given by summing the relevant measured state populations, \begin{align} \mathcal{F}_z&=P_{00...0}+P_{11...1},\\ \mathcal{F}_x^\pm &=\sum_{s\ \mathrm{even\,(odd)}} P_s, \end{align} where the sum runs over the even-parity (odd-parity) outcomes $s$ for $\ket{\GHZ m+}$ ($\ket{\GHZ m-}$). \begin{figure} \caption{The circuit to prepare a $\ket{\GHZ m\pm}$ on a trapped-ion quantum computer, with $\phi=0$ for $\ket{\GHZ m+}$ and $\phi=\pi$ for $\ket{\GHZ m-}$.} \label{fig:m-cat} \end{figure} \begin{figure} \caption{Measurement of $\ket{\GHZ3\pm}$ in the (a) $Z$ basis and (b) $X$ basis. In the $Z$ basis, $\ket{\GHZ3\pm}$ have the same measurement outcomes. The dashed line gives the ideal target population of $0.25$. } \label{fig:GHZ3Xbasis} \end{figure} We sample a group of $m$ experimental shots from $\ket{\GHZ m\pm}$ to construct an artificial shot corresponding to the measurement of an $m\times m$ logical state $\ket{\pm}_L$, which we read out by majority voting. For even $m$, ties are assigned randomly. Repeating this $N/m$ times, where $N$ is the total number of experimental repetitions, we arrive at the fidelities $\mathcal{F}_L^\pm$ for $m =3,4,5,6,7$ shown in \cref{fig:scaling}~(b) and \cref{tab:scaling}. For large $N$ the up-sampled logical fidelities are given by \begin{align} \mathcal{F}_L^\pm=\sum_{k=(m+1)/2}^m \binom{m}{k}(\mathcal{F}_x^\pm)^k(1-\mathcal{F}_x^\pm)^{m-k} \label{eq:logfid} \end{align} for odd $m$, and \begin{align} \mathcal{F}_L^\pm &=\sum_{k=(m+2)/2}^m \binom{m}{k}(\mathcal{F}_x^\pm)^k(1-\mathcal{F}_x^\pm)^{m-k}\nonumber\\ &+\frac12 \binom{m}{m/2}(\mathcal{F}_x^\pm)^{m/2}(1-\mathcal{F}_x^\pm)^{m/2} \label{eq:logfideven2} \end{align} for even $m$, due to the random assignment of ties.
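For a quick numerical cross-check, these binomial formulae can be evaluated directly (an illustrative Python sketch using the measured $\mathcal{F}_x^+$ values from \cref{tab:scaling}; this is not the analysis code used in the experiment):

```python
from math import comb

def logical_fidelity(F_x, m):
    """Majority-vote logical fidelity over m up-sampled GHZ copies.
    Implements the odd/even formulae; ties for even m count with weight 1/2."""
    F = sum(comb(m, k) * F_x**k * (1 - F_x)**(m - k)
            for k in range(m // 2 + 1, m + 1))
    if m % 2 == 0:
        F += 0.5 * comb(m, m // 2) * F_x**(m // 2) * (1 - F_x)**(m // 2)
    return F

# Measured GHZ fidelities F_x^+ for m = 3..7 (Table I)
measured = {3: 0.965, 4: 0.947, 5: 0.936, 6: 0.917, 7: 0.869}
for m, Fx in measured.items():
    print(f"m={m}: F_L ~ {logical_fidelity(Fx, m):.4f}")
```

The printed values agree with the majority-vote column of \cref{tab:scaling} to within the quoted uncertainties.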
Assuming depolarizing errors dominate, the GHZ-state fidelity is related to the physical qubit fidelity $f$ roughly by $1-\mathcal{F}_x^\pm\approx m(1-f)$ \cite{nappOptimalBaconShorCodes2012} and therefore the logical fidelity is \begin{align} \mathcal{F}_L^\pm=\sum_{k=\ceil{m/2}}^m \binom{m}{k}\bigl[1-m(1-f)\bigr]^k\bigl[m(1-f)\bigr]^{m-k}. \label{eq:logfid2} \end{align} \cref{fig:logfid2} plots the dependence of the logical error rate, which is $1-\mathcal{F}_L^\pm$, on the physical error rate, which is $1-f$, for different code sizes $m =3,5,7,9$ as given by \cref{eq:logfid2}. There is a cross-over point in the physical error rate where deeper encoding compensates for the larger number of gate errors that can arise when preparing larger GHZ states. The experimental results presented in \cref{tab:scaling} and \cref{fig:scaling} follow the estimated fidelity given by \cref{eq:logfid,eq:logfideven2}. \begin{figure} \caption{(a) $\ket{\GHZ m\pm}$ fidelity measured on $m$ trapped-ion qubits. (b) Up-sampled logical state fidelity of $[[m^2,1,m]]$ Shor codes after majority voting. For even $m$, ties are assigned randomly. The dashed yellow (blue) line is the state preparation and measurement fidelity for state $\ket+$ ($\ket-$) of the physical qubit. Note that the vertical ranges for (a) and (b) are different. The increase in logical fidelity from $m=3$ to $m=5$ shows how deeper encoding can offer increased protection against physical errors.} \label{fig:scaling} \end{figure} Although the fidelity to prepare five-qubit GHZ states is lower than that of the three-qubit GHZ states, the up-sampled logical states for the $[[25,1,5]]$ code have a higher fidelity than those for the up-sampled $[[9,1,3]]$ code after majority voting. This hints at the onset of fault-tolerance, since it demonstrates that deeper encoding can compensate for the increase in physical errors caused by employing more qubits and gates, leading to a lower logical error than a shallower code.
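Within this simple model, the cross-over between two code sizes can be located numerically. The sketch below (illustrative only; it assumes each GHZ copy fails independently with probability $m(1-f)$, as in the depolarizing estimate above) bisects for the physical error rate at which $m=5$ stops outperforming $m=3$:

```python
from math import comb, ceil

def logical_error(eps, m):
    """Logical error rate of the [[m^2,1,m]] code (odd m) in the simple model
    where each GHZ copy fails independently with probability m*eps, eps = 1-f."""
    p = m * eps  # per-GHZ-copy flip probability (valid for eps < 1/m)
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(ceil((m + 1) / 2), m + 1))

# Bisect for the physical error rate where m=5 stops beating m=3
lo, hi = 1e-6, 0.05
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if logical_error(mid, 5) < logical_error(mid, 3):
        lo = mid
    else:
        hi = mid
crossover = 0.5 * (lo + hi)
print(f"m=3 / m=5 cross-over near physical error rate ~ {crossover:.3f}")
```

Below the cross-over the deeper code wins; above it, the extra preparation errors dominate.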
This increase is not replicated when going to $[[49,1,7]]$, which shows that the state preparation errors have increased substantially, as seen in the drop in the fidelity of $\ket{\GHZ7\pm}$ (\cref{fig:scaling}). The random assignment of ties leads to a lower logical fidelity for $m=4, 6$ in \cref{fig:scaling}~(b). \begin{figure} \caption{Scaling of the $[[m^2,1,m]]$ Shor code given by a simple depolarizing error model, \cref{eq:logfid2}. } \label{fig:logfid2} \end{figure} \begin{table}[bt] \caption{ Fidelity of state preparation and measurement for $\ket{\GHZ m\pm}$ (Measure) and logical states of $[[m^2,1,m]]$ Shor codes constructed by up-sampling with majority voting (Majority vote). Data are taken with $N=20000$~shots. The uncertainty is given by the standard deviation of the binomial distribution $\sqrt{\mathcal{F}(1-\mathcal{F})/N}$.}\label{tab:scaling} \begin{tabularx}{0.48\textwidth}{l C Y Y Y Y Y} \toprule \multirow{2}{*}{$m$} & \multirow{2}{*}{Prep.} & \multicolumn{1}{c}{Z Meas.} & \multicolumn{2}{c}{X Meas.} & \multicolumn{2}{c}{Majority vote} \\ & & $\mathcal{F}_z$ & $\mathcal{F}_x^+$ & $\mathcal{F}_x^-$ & $\mathcal{F}_L^+$ & $\mathcal{F}_L^-$ \\ \midrule \multirow{2}{*}{3} &+ & \multirow{2}{*}{0.951(1)} & 0.965(1) & 0.035(1) & 0.9963(4) & 0.0037(1) \\ &- & & 0.033(1) & 0.967(1) & 0.0032(1) & 0.9968(4) \\ \multirow{2}{*}{4} &+ & \multirow{2}{*}{0.917(2)} & 0.947(2) & 0.053(1) & 0.9919(6) & 0.0081(1) \\ &- & & 0.051(1) & 0.949(2) & 0.0076(1) & 0.9924(6)\\ \multirow{2}{*}{5} &+ & \multirow{2}{*}{0.882(2)} & 0.936(2) & 0.064(1) & 0.9976(3) & 0.0024(1) \\ &- & & 0.072(1) & 0.928(2) & 0.0033(1) & 0.9967(4)\\ \multirow{2}{*}{6} &+ & \multirow{2}{*}{0.806(2)} & 0.917(2) & 0.083(1) & 0.9949(5) & 0.0051(1) \\ &- & & 0.086(1) & 0.914(2) & 0.0056(1) & 0.9944(5) \\ \multirow{2}{*}{7} &+ & \multirow{2}{*}{0.723(2)} & 0.869(2) & 0.131(1) & 0.9925(6) & 0.0075(1) \\ &- & & 0.132(1) & 0.868(2) & 0.0076(1) & 0.9924(6) \\\botrule \end{tabularx} \end{table} \begin{table}[t]
\caption{ Fidelity of logical states of $[[m^2,1,m]]$ Shor codes after discarding non-unanimous results (Error detect) and the success probability (Yield). Data are taken with $N=20000$~shots.}\label{tab:discard} \begin{tabularx}{0.45\textwidth}{l C Y Y Y} \toprule \multirow{2}{*}{$m$} & \multirow{2}{*}{Prepare} &\multicolumn{2}{c}{Error detect} & \multirow{2}{*}{Yield} \\ & & $+$ & $-$ & \\ \midrule \multirow{2}{*}{3}&+ & 0.99995(1) & 0.00005(1) & 0.898(4) \\ &- & 0.00003(1) & 0.99997(1) & 0.905(4) \\ \multirow{2}{*}{4}&+ & 0.99999(1) & 0.00001(1) & 0.804(6) \\ &- & 0.000009(1) & 0.999991(1) & 0.810(6)\\ \multirow{2}{*}{5} &+ & 0.999998(1) & 0.000002(1) & 0.717(3) \\ &- & 0.000003(1) & 0.999997(1) & 0.690(4)\\ \multirow{2}{*}{6} &+ &0.9999996(2) & 0.0000004(1) & 0.594(5) \\ &- & 0.0000010(1) & 0.9999990(1) & 0.581(5) \\ \multirow{2}{*}{7} &+ & 0.999997(1) & 0.000003(1) & 0.373(6) \\ &- & 0.000002(1) & 0.999998(1) & 0.371(6) \\\botrule \end{tabularx} \end{table} The additional errors in the logical state preparation and measurement (SPAM) process mainly come from an increase in single- and two-qubit gate errors for longer ion chains, as discussed in \cref{sec:trappedion}, and from read-out errors due to cross-talk between photo-multiplier tube channels, i.e. physical SPAM errors. Physical readout cross-talk accounts for $1$--$5\%$ infidelity in the $Z$-measurements, depending on $m$. The rest is from two-qubit gates, which corresponds to an average error per gate of $0.9\%$ for $m=3$, $1.3\%$ for $m=4$, $1.6\%$ for $m=5$, $1.7\%$ for $m=6$ and $2.2\%$ for $m=7$. These errors come from the increased beam mismatch discussed in \cref{sec:trappedion}. The Shor code can also be used as an error detection code, where non-unanimous votes are discarded rather than corrected, which leads to a finite yield. The fidelity and yield of the logical states after this procedure are presented in \cref{tab:discard}.
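Under the same independent-block assumption as above, the error-detection variant of \cref{tab:discard} can be estimated in a few lines. This is only a sketch: `F` is a hypothetical GHZ-block measurement fidelity, not a measured value. Keeping only unanimous votes gives a yield of $F^m+(1-F)^m$, and a logical error survives post-selection only when all $m$ blocks flip.

```python
def detection_yield(F, m):
    # Probability that all m block votes agree (all correct or all
    # flipped), assuming independent blocks with fidelity F each.
    return F ** m + (1 - F) ** m

def detection_fidelity(F, m):
    # Post-selected logical fidelity: conditioned on a unanimous vote,
    # a logical error requires all m blocks to have flipped.
    return F ** m / detection_yield(F, m)

F = 0.95  # illustrative block fidelity, not from the experiment
for m in (3, 5, 7):
    print(m, detection_yield(F, m), 1 - detection_fidelity(F, m))
```

For comparison, $0.965^3\approx 0.899$, close to the measured $m=3$, $+$ state yield in \cref{tab:discard}.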
The fidelities are higher than for the correction scheme since all $m$ qubits have to flip for a logical error to occur. For large $N$, the yield can be estimated by $(\mathcal{F}_x^\pm)^m+(1-\mathcal{F}_x^\pm)^m$. The optimal code size has increased to $6$, which is a valid size since ties do not play a role in the detection code. Using \cref{eq:logfid} we can estimate the minimal $\ket{\GHZ m\pm}$ fidelities needed in order for the up-sampled $m\times m$-qubit logical state to have the same fidelity as that of the $3\times3$-qubit logical state. For $m=5$, it is $0.93$; for $m=7$, it is $0.895$, which translates to an average infidelity of $1.7\%$ per two-qubit gate. \section{Logical qubit encoding}\label{sec:913} We also perform the full $[[9,1,3]]$ encoding with nine qubits in a 13-ion chain. In this experiment we directly generate and read out the logical states $\ket\pm_L$ of the $[[9,1,3]]$ Shor code. We also characterize the individual $\ket{\GHZ3\pm}$ states with measurements in the $X$-basis. The results are presented in \cref{tab:9qubit} and \cref{fig:9qubit}.
\begin{table}[bt] \caption{Fidelity of state preparation and measurement for the three sets of $\ket{\GHZ3\pm}$ labeled as $1,2,3$ and the logical state $\ket\pm_L$ of the $[[9,1,3]]$ code on a thirteen-ion chain.} \label{tab:9qubit} \begin{tabularx}{0.4\textwidth}{c Y Y Y} \toprule \multicolumn{2}{r}{Prepare} &\multicolumn{2}{c}{Measure}\\ & & $+$ & $-$ \\ \midrule \multirow{2}{*}{Logical}& $+$ & 0.988(1) & 0.012(1) \\ & $-$ & 0.015(1) & 0.985(1) \\ \multirow{2}{*}{$\ket{\GHZ3\pm}_1$}& $+$ & 0.942(1) & 0.058(1) \\ & $-$ & 0.050(1) & 0.950(1) \\ \multirow{2}{*}{$\ket{\GHZ3\pm}_2$}& $+$ & 0.942(1) & 0.058(1) \\ & $-$ & 0.079(1) & 0.921(1) \\ \multirow{2}{*}{$\ket{\GHZ3\pm}_3$}& $+$ & 0.942(1) & 0.058(1) \\ & $-$ & 0.068(1) & 0.932(1) \\\botrule \end{tabularx} \end{table} The fidelity of the logical states $\ket{\pm}_L$ is at the level of the physical state $\ket{-}$ and hence falls short of the average performance of the physical qubit $\ket{\pm}$. Using \cref{eq:logfid}, we estimate that the fidelity of each $\ket{\GHZ3+}$ state must be increased to $0.968$ in order to achieve the same logical fidelity as our physical qubit $\ket+$. It is worth noting that the fidelities of the $\ket{\GHZ3\pm}$ triplet on nine qubits are very similar to each other. This indicates a high level of uniformity among the qubits and gates. \begin{figure} \caption{A full $[[9,1,3]]$ Shor code logical state measurement with nine trapped-ion qubits (Left). We also show the average fidelity of the $\ket{\GHZ3\pm}$ states. For comparison, we show again the up-sampled results with three qubits from \cref{fig:scaling} (Right).
The dashed yellow (blue) line is the state preparation and measurement fidelity for state $\ket+$ ($\ket-$) of the physical qubit.} \label{fig:9qubit} \end{figure} Notice that the fidelities of both the $\ket{\GHZ3\pm}$ states and the logical $\ket{\pm}_L$ states are lower than their corresponding counterparts in the up-sampled experiment with three qubits in a chain of seven ions (see \cref{fig:scaling}). The additional errors reduce the fidelity compared to the up-sampled version by about $1\%$. \section{Discussion \& Outlook}\label{sec:dis} Our preparation and sampling of $\ket{\GHZ m\pm}$ states to synthetically construct $m \times m$ logical Shor code states shows experimentally that deeper encoding can compensate for additional physical errors from logical state preparation. For our specific setup, the $\ket{\GHZ 5\pm}$ state yields the best logical fidelities. The increase in physical errors with larger $m$ that we observe is due to hardware limitations, most of which can be solved by better engineering. For example, detection cross-talk can be eliminated by independent photon-detectors \cite{Slichter:17,crainHighspeedLowcrosstalkDetection2019,eganFaultTolerantOperationQuantum2021}. The beam-ion alignment can be improved by traps with more control electrodes, such as micro-fabricated surface traps \cite{osti_1237003} or blade traps with more segments \cite{Pagano_2018}. Alternatively, near-perfect ion addressing can be achieved with integrated optics \cite{mehtaIntegratedOpticalMultiion2020,niffenegger2020} or beam steering using a micro-electro-mechanical system of mirrors \cite{wangHighFidelityTwoQubitGates2020}. Axial RF stray fields, and hence axial micromotion, are also greatly reduced in precision-fabricated surface or 3D traps \cite{Pyka2014,decaroli2021design}.
The comparison of the emulated $9$-qubit Shor state to a direct preparation of the $9$-qubit Shor code states shows a $2\%$ decrease in the fidelity of individual $\ket{\GHZ m\pm}$ states and a $1\%$ decrease in the logical SPAM fidelity. This result points toward future work where parts of quantum error corrected codes can be used as benchmarks for system scalability and uniformity. Recent work has emphasized that physical errors are not uniform over the Pauli operators \cite{li2DCompassCodes2019,puriBiaspreservingGatesStabilized2020,ataidesXZZXSurfaceCode2020,tuckettFaulttolerantThresholdsSurface2020}. Although the $m\times m$ Shor code has no threshold for one type of Pauli error, it matches the classical threshold for the other Pauli error. This makes the Shor code an exciting choice for quantum memories with asymmetric errors. Further work on asymmetric Shor codes and how they interact with bias-preserving gates is warranted \cite{chamberlandBuildingFaulttolerantQuantum2020}. \section*{Acknowledgments} We are grateful to T. Yoder for helpful discussions and comments on the manuscript. We also thank S. Debnath, K. A. Landsman, C. Figgatt, and C. Monroe for early contributions to this work. Research at UMD was supported by the NSF, grant no. PHY-1430094, to the PFC@JQI. A.M.G. is supported by a JQI Postdoctoral Fellowship. Research at Duke was supported by the Office of the Director of National Intelligence - Intelligence Advanced Research Projects Activity through an Army Research Office contract (W911NF-16-1-0082) and the Army Research Office (W911NF-21-1-0005). \pagebreak \appendix \end{document}
\begin{document} \title{\Large\bfseries Ruin probability for discrete risk processes} \author{\itshape Ivana Ge\v{c}ek Tu{\dj}en\\ \\ Department of Mathematics\\ University of Zagreb, Zagreb, Croatia\\ Email: igecek@math.hr } \date{} \maketitle \begin{abstract} \noindent We study the discrete time risk process modelled by a skip-free random walk and derive results connected to the ruin probability, such as crossing a fixed level, for this kind of process. We use a method relying on the classical ballot theorems to derive these results and compare them to the results obtained for the continuous time version of the risk process. We further generalize this model by adding a perturbation and, still relying on the skip-free structure of that process, we generalize the previous results on crossing a fixed level for the generalized discrete time risk process. \noindent \textit{Mathematics Subject Classification:} Primary 60C05; Secondary 60G50.\\ \noindent \textit{Keywords and phrases:} skip-free random walk, ballot theorem, Kemperman's formula, level crossing, ruin probability \end{abstract} \section{Introduction}\label{sec-1} In the classical ruin theory one usually observes the risk process \begin{equation}\label{1:eq.1} X(t)=ct-\sum_{i=1}^{N(t)}Y_i~,~~t\geq 0~,\end{equation} where $c>0$ represents the premium rate (we assume that incoming premiums arrive from the policy holders), $(Y_i~:~i\in {\mathbb N})$ is an i.i.d. sequence of nonnegative random variables with common distribution $F$ (which usually represent the policy holders' claims) and $(N(t)~:~t\geq 0)$ is a homogeneous Poisson process of rate $\lambda>0$, independent of $(Y_i~:~i\in {\mathbb N})$.
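The ruin probability for the classical process (\ref{1:eq.1}) is easy to estimate by simulation, which gives a concrete feel for the quantity studied below. The following sketch uses illustrative parameters (not taken from any reference): exponential claims with mean $1$, $\lambda=1$ and premium rate $c=1.5$, and it exploits the fact that between claims the surplus only grows, so ruin can occur only at claim arrival times.

```python
import random

def ruin_probability_mc(u, c, lam, claim, horizon, n_paths=10000, seed=0):
    # Monte Carlo estimate of the finite-horizon ruin probability
    # P(u + X(t) < 0 for some t <= horizon) for the classical risk
    # process X(t) = c*t - sum_{i<=N(t)} Y_i.  Between claims the
    # surplus u + c*t only grows, so it suffices to check it at claims.
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)  # next Poisson(lam) arrival
            if t > horizon:
                break
            total_claims += claim(rng)
            if u + c * t - total_claims < 0:
                ruined += 1
                break
    return ruined / n_paths

# Illustrative parameters: Exp(1) claims, lam = 1, c = 1.5 (so the net
# profit condition c > lam*E[Y] holds).  For this case the known
# infinite-horizon answer is (2/3)*exp(-u/3), about 0.126 for u = 5.
print(ruin_probability_mc(u=5.0, c=1.5, lam=1.0,
                          claim=lambda r: r.expovariate(1.0), horizon=100.0))
```

The finite horizon introduces a small downward bias, negligible here because the drift condition makes late ruin extremely unlikely.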
This basic process was generalized by many authors and we will follow the approach used in \cite{HPSV1}, \cite{HPSV2} and \cite{IGT}, which means that in the continuous time case one observes the generalized risk process \begin{equation}\label{1:eq.2}X(t)=ct-C(t)+Z(t)~,~~t\geq 0~,\end{equation} where $(C(t)~:~t\geq 0)$ is a subordinator and $(Z(t)~:~t\geq 0)$ an independent spectrally negative L\'{e}vy process. The overall process $X$ then also has the nice structure of a spectrally negative L\'evy process and the results from the fluctuation theory may be used to analyze it. One of the main questions observed in this model is the question of the \emph{ruin probability}, given some initial capital $u>0$, i.e. \begin{equation}\label{1:eq.3}\vartheta(u)={\mathbb P}(u+X(t)<0~,~~\textrm{for some $t>0$})~.\end{equation} Furthermore, the question of the distribution of the supremum of the dual process $\widehat{X}=-X$ is of main interest, as well as the question directly connected to it, i.e. the first passage over some fixed level. Results for the above questions can be obtained using different approaches, such as decomposing the supremum of the dual of the generalized risk process $\widehat{X}$ or a Laplace transform approach, and in \cite{HPSV2} the authors use the famous Tak\'acs formula in the continuous time case.\\ \\ More precisely, for $m$ independent subordinators $C_1,\ldots,C_m$ without drift and with L\'evy measures $\Lambda_1,\ldots,\Lambda_m$ such that $\mathbb{E}\,(C_i(1))<\infty$, $i=1,2,\ldots,m$, one observes the risk process \begin{equation}\label{1:eq.4}X(t)=ct-C(t)~,~~t\geq 0~,\end{equation} for $C=C_1+\cdots +C_m$ and $c>\mathbb{E}\,(C_1(1))+\cdots +\mathbb{E}\,(C_m(1))$ (the standard net profit assumption).
In \cite{HPSV2} the following result was achieved: \begin{equation}\label{1:eq.5}{\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-)\in dy,\widehat{X}(\widehat{\tau}_0)\in dx,\Delta C(\widehat{\tau}_0)=\Delta C_i(\widehat{\tau}_0))=\frac{1}{c}\Lambda_i(-y+dx)dy~,\end{equation} for $x>0$, $y\leq 0$ and $\widehat{X}(0)=0$, $\widehat{\tau}_0=\inf\{t\geq 0~:~\widehat{X}(t)>0\}$, $\Delta C(t)=C(t)-C(t-)$, $\Delta C_i(t)=C_i(t)-C_i(t-)$, $i=1,\ldots ,m$.\\ \\ Here the authors interpret the processes $C_i$ as independent risk portfolios competing to cause ruin, and the above formula gives the probability that the ruin will be caused by one individual portfolio. The authors further generalize the problem by adding a L\'evy process $Z$ with no positive jumps to the model (this is called the perturbed model) and obtain a similar formula.\\ \\ The focus of this paper is rather on the method which led to the above result, namely the aforementioned Tak\'acs ``magic'' formula. In the continuous time case, this formula can be expressed in the following way (for details see \cite{Tak}). \begin{lemma} For the process $X$ defined as in (\ref{1:eq.4}), with $\widehat{X}(0)=0$, \begin{equation}\label{1:eq.6} {\mathbb P}\big(\sup_{0\leq s\leq t}\widehat{X}(s)>0~|~\widehat{X}(t)\big)=1-\big(-\frac{\widehat{X}(t)}{ct}\big)~. \end{equation} \end{lemma} But this result arises naturally in the discrete time case in view of the well-known ballot theorems, so the main aim of this paper is to discover which results for the ruin probability can be obtained, and how, using the above method, when we observe a discrete time risk process. To establish the connection with the continuous time model, we will model our discrete time risk process by an \emph{upwards skip-free} (or \emph{right continuous}) random walk, i.e. a random walk with increments less than or equal to $1$. These random walks can be observed as a discrete version of the spectrally negative L\'evy processes, i.e.
the processes with no positive jumps. The main connection between this discrete model and the continuous one, which is the focus of this paper, is that in both cases we are able to control the one-sided jumps of the process. More precisely, skip-free random walks can cross levels from one side with jumps of any size, while on the other side they can only take unit steps.\\ \\ In this setting we will prove the main results of the paper, and the main tool will be the following result (details for this type of result can be found in \cite{Tak} or \cite{Dwa}). \begin{lemma} Let $\xi_1,\ldots,\xi_n$ be cyclically interchangeable random variables with values in $\{\ldots,-3,-2,-1,0,1\}$. Let $R(i)=\xi_1+\cdots+\xi_i$, $1\leq i\leq n$, $R(0):=0$. Then for each $0\leq k\leq n$ \begin{equation}\label{1.eq.7} {\mathbb P}(R(i)>0~\textrm{for each $1\leq i\leq n$}~|~R(n)=k)=\frac{k}{n}~. \end{equation} \end{lemma} Using the skip-free structure of the random walks that model our risk process, Lemma 1.2. and some auxiliary results following from the ballot theorems (such as \emph{Kemperman's formula}, which will be explained in detail in Section 2), we will derive the following main results of this paper. \begin{thm} Let $C^1$ and $C^2$ be two independent random walks with nondecreasing increments and $\mu^i:=\mathbb{E}\,(C^i(1))<\infty$, $i=1,2$. Let $C:=C^1+C^2$, $\mu=\mu^1+\mu^2$ and \begin{equation}\label{1:eq.8} X(n)=n-C(n)~,~~n\geq 0~, \end{equation} and let us assume that $\mathbb{E}\, (X(1))>0$, i.e. $\mu<1$. Then \begin{equation}\label{1:eq.9} {\mathbb P}(\widehat{\tau_0}<\infty,\widehat{X}(\widehat{\tau_0}-1)=y, \widehat{X}(\widehat{\tau_0})\geq x,\Delta C^i(\widehat{\tau_0})=x+1-y)={\mathbb P}(C^i(1)=x+1-y)~, \end{equation} for $y\leq 0$, $x>0$, $\widehat{\tau}_0=\inf\{n\geq 0~:~\widehat{X}(n)>0\}$, $\Delta C^i(n)=C^i(n)-C^i(n-1)$ (for $i=1,2$) and $\Delta C(n)=C(n)-C(n-1)$, $n\geq 0$.
\end{thm} The above result will be generalized in the standard way to $m$ independent random walks with nondecreasing increments $C^1,\ldots,C^m$, $m\in{\mathbb N}$, and $C=C^1+\cdots +C^m$.\\ \\ When we generalize the above model by adding a perturbation modelled by an \emph{upwards skip-free} (or \emph{right continuous}) random walk $Z$, i.e. a random walk with increments less than or equal to $1$, we observe the perturbed discrete time risk process \begin{equation}\label{1:eq.10} X(n)=-C(n)+Z(n)~,~~n\geq 0~. \end{equation} Under the assumption that $\mathbb{E}\,(X(1))>0$ (so $\widehat{X}\to -\infty$) and with the same notation as in the previous theorem, we will derive the following result. \begin{thm}\label{1.eq.11} \begin{align*}{\mathbb P}(\widehat{\tau}_0<\infty,&\widehat{X}(\widehat {\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\textrm{the new supremum was caused by the process $C$})\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)+{\mathbb P}(C(1)\geq x-y)\cdot{\mathbb P}(Z(1)\geq 0)~.\end{align*} \end{thm} This result can also be generalized, again in the standard way, so that we observe the probability that the random walk $C^i$ causes the ruin ($i\in\{1,\ldots,m\}$). \section{Auxiliary results}\label{sec-2} \begin{defn} Let $(S(n)~:~n\geq 0)$ be a random walk with integer-valued increments $(Y(i)~:~i\geq 1)$, i.e. $S(n)=\sum_{i=1}^nY(i)$, $n\geq 0$, $S(0)=0$. We say that $S$ is an \emph{upwards skip-free} (or \emph{right continuous}) random walk if ${\mathbb P}(Y(i)\leq 1)=1$ (i.e. its increments take values greater than $1$ with zero probability).\\ If ${\mathbb P}(Y(i)\geq -1)=1$ we say that $S$ is a \emph{downwards skip-free} (or \emph{left continuous}) random walk. \end{defn} To prove the results for the ruin probability for the skip-free class of random walks we will need some auxiliary results. Inspired by the approach used in \cite{HPSV2} for the continuous time case (i.e.
spectrally negative L\'evy processes), we will use the results following from the famous \emph{ballot theorems}, the first dating back to 1887 and formulated by \emph{Bertrand}. More precisely, let us assume that there are two candidates in a voting process with $n$ voters. Candidate $A$ wins with $a$ votes over candidate $B$, who scores $b$ votes, $a\geq b$. Then the probability that throughout the counting the number of votes registered for $A$ is always greater than the number of votes registered for candidate $B$ is equal to $\frac{a-b}{a+b}=\frac{a-b}{n}$. This result was further generalized by \emph{Barbier} and proved in that generalized form by \emph{Aeppli}. Later this result was also proved by \emph{Dvoretzky} and \emph{Motzkin} using the \emph{cyclic lemma}, an approach similar to the one followed in this paper.\\ \\ The main property that lies at the heart of theorems of this type is some kind of cyclic structure. \begin{defn} For random variables $\xi_1,\ldots,\xi_n$ we say that they are \emph{interchangeable} if for each $(r_1,\ldots ,r_n)\in {\mathbb R}^n$ and all permutations $\sigma$ of $\{1,2,\ldots ,n\}$, \begin{equation}\label{2:eq.1}{\mathbb P}(\xi_i\leq r_i~\textrm{for each $1\leq i\leq n$}~)={\mathbb P}(\xi_i\leq r_{\sigma(i)}~\textrm{for each $1\leq i\leq n$}~)~. \end{equation} For $\xi_1,\ldots ,\xi_n$ we say that they are \emph{cyclically interchangeable} if (\ref{2:eq.1}) is valid for each cyclic permutation $\sigma$ of $\{1,2,\ldots ,n\}$. \end{defn} In other words, the random variables are cyclically interchangeable if their distribution law is invariant under cyclic permutations and interchangeable (in some literature, for example see \cite{Dwa}, this property is also called \emph{exchangeable}) if it is invariant under all permutations.
From the definition it is clear that interchangeable variables are also cyclically interchangeable, while the converse is not true.\\ \\ One version of the ballot theorem states that for a random walk $R$ with interchangeable and non-negative increments which starts at $0$ (i.e. $R(0)=0$), the probability that $R(m)<m$ for each $m=1,2,\ldots,n$, conditionally on $R(n)=k$, is equal to $1-\frac{k}{n}$. More precisely, for us, the following result will play the key role. A result of this type was proved independently by \emph{Dwass} (for cyclically interchangeable random variables) and by \emph{Tak\'acs} (in a less general case, for interchangeable random variables) in 1962, appearing in the same issue of the Annals of Mathematical Statistics; for details see \cite{Tak}, \cite{Dwa}, and for a historical overview of these results see \cite{AB}. \begin{lemma} Let $\xi_1,\ldots,\xi_n$ be the cyclically interchangeable random variables with values in the set $\{\ldots,-3,-2,-1,0,1\}$. Let $R(i)=\xi_1+\cdots+\xi_i$, $1\leq i\leq n$, $R(0)=0$. Then for each $0\leq k\leq n$ \begin{equation}\label{2.eq.1} {\mathbb P}(R(i)>0~\textrm{for each $1\leq i\leq n$}~|~R(n)=k)=\frac{k}{n}~. \end{equation} \end{lemma} Let us notice that it follows from Lemma 2.3. that if we have a skip-free random walk and we know its position at some instant $n$, and that position is some $k$, we are able to calculate the exact probability that this random walk stayed below the level $0$ (or above the level $0$, depending on which skip-free random walk we observe, the right or the left continuous one), and that probability is equal to $\frac{k}{n}$. It is also important for our problem to mention that the assumptions used in the above lemma cannot be removed: it is necessary that the variables take values in ${\mathbb Z}$ and that they are bounded from one side, i.e. that we can control the jumps on one side.\\ \\ To prove Lemma 2.3. we need the following result, again for details see \cite{Tak}.
\begin{lemma} Let $\varphi(u)$, $u=0,1,2,\ldots$ be a nondecreasing function for which $\varphi(0)=0$ and $\varphi(t+u)=\varphi(t)+\varphi(u)$, for $u=0,1,2,\ldots$, where $t\in{\mathbb N}$. Define $$\delta(u)=\left\{ \begin{array}{ll} 1, & \hbox{$v-\varphi(v)>u-\varphi(u)$ for $v>u$;} \\ 0, & \hbox{otherwise.} \\ \end{array} \right.$$ Then \begin{equation}\label{2:eq.3}\sum_{u=1}^{t}\delta(u)=\left\{ \begin{array}{ll} t-\varphi(t), & \hbox{$0\leq\varphi(t)\leq t$;} \\ 0, & \hbox{$\varphi(t)\geq t$.} \\ \end{array} \right.\end{equation} \end{lemma} Let us now prove Lemma 2.3.\\ \\ We observe the random variables $\gamma_1:=1-\xi_1$, $\gamma_2:=1-\xi_2$,$\ldots$ (instead of $\xi_1$,$\xi_2$,$\ldots$) and the random walk $R$ defined as $R(i)=\gamma_1+\ldots+\gamma_i$, $i\geq 1$, $R(0)=0$. It is a nondecreasing random walk and its increments $\gamma_i$ are integer-valued and cyclically interchangeable variables. We will show that $${\mathbb P}(R(u)\leq u~:~0\leq u\leq n|R(n))=\left\{ \begin{array}{ll} 1-\frac{R(n)}{n}, & \hbox{$0\leq R(n)\leq n$;} \\ 0, & \hbox{\textrm{otherwise.}} \\ \end{array} \right .$$ First we associate a new process $(R^{*}(u)~:~0\leq u<\infty)$ on $(0,\infty)$ to the process $(R(u)~:~0\leq u\leq n)$ such that $R^{*}(u)=R(u)$, for $0\leq u\leq n$ and $R^{*}(n+u)=R^{*}(n)+R^{*}(u)$, for $u\geq 0$.
We define $$\delta^{*}(u)=\left\{ \begin{array}{ll} 1, & \hbox{if $v-R^{*}(v)\geq u-R^{*}(u)$ for each $v\geq u$;} \\ 0, & \hbox{otherwise.} \\ \end{array} \right.$$ Then $\delta^{*}(u)$ is a random variable and has the same distribution for each $u\geq 0$.\\ \\ Now we have \begin{align*} &{\mathbb P}(R(u)\leq u~,~0\leq u\leq n|R(n))={\mathbb P}(R^{*}(u)\leq u~,~u\geq 0|R(n))\\ &=\mathbb{E}\,[1_{\{v-R^{*}(v)\geq 0~,~\forall v\geq 0\}}|R(n)]=\mathbb{E}\,[\delta^{*}(0)|R(n)]\\ &=\frac{1}{n}\cdot\sum_{u=1}^{n}\mathbb{E}\,[\delta^{*}(u)|R(n)]=\mathbb{E}\,[\frac{1}{n}\cdot\sum_{u=1}^{n}\delta^{*}(u)|R(n)]\\ &=\left\{ \begin{array}{ll} 1-\frac{R(n)}{n}, & \hbox{$0\leq R(n)\leq n$;} \\ 0, & \hbox{otherwise,} \\ \end{array} \right. \end{align*} using the fact that, conditional on the position of $R(n)$, the variables $\delta^{*}(u)$, $0\leq u\leq n$, are identically distributed, and using the result of Lemma 2.4. $\blacksquare$\\ \\ This is the proof which follows the approach used in \cite{Tak} and is also suitable for the continuous time risk process. But this type of result can also be proved following a slightly different approach, in the same way the classic ballot theorems were proved, i.e. using some kind of combinatorial formula. More precisely, we can use the following; for a similar approach see \cite{Lam}. \begin{lemma}(\textbf{combinatorial formula}) Let $(x_1,\ldots ,x_n)$ be a finite sequence of integers with values greater than or equal to $-1$ and let $\sum_{i=1}^{n} x_i=-k$. Let $\sigma_i(x)$ be the cyclic permutation of $x$ which starts with $x_i$, i.e. $\sigma_i(x)=(x_i,x_{i+1},\ldots ,x_n,x_1,\ldots x_{i-1})$, $i\in\{1,2,\ldots ,n\}$. Then there are exactly $k$ different indices $i\in\{1,2,\ldots ,n\}$ such that $$\sum_{l=1}^{j}(\sigma_i(x))_l> -k~,~~\textrm{for each}~j=1,\ldots ,n-1$$ and $$\sum_{l=1}^{n}(\sigma_i(x))_l=-k~,$$ i.e.
there are exactly $k$ different permutations $\sigma_i(x)$ of the sequence $x$ such that the first partial sum of the sequence $\sigma_i(x)$ that is equal to $-k$ is the sum of all its members. \end{lemma} \textit{Proof.}\\ \\ We observe the partial sums $s_j=\sum_{i=1}^j x_i$, $1\leq j\leq n$, $s_0:=0$, and find the lowest one, say $s_m$ (i.e. $m$ is the lowest index such that $s_m=\min_{1\le j \le n} s_j$). Now we take the cyclic permutation $\sigma_m(x)$, i.e. the one that starts with $x_{m+1}$; $\sigma_m(x)=(x_{m+1},x_{m+2},\ldots ,x_m)$. The overall sum of the sequence is $-k$, so $\sigma_m(x)$ hits $-k$ for the first time at the time instant $n$. For $j=1,2,\ldots ,k$ let $t_j$ be the first time of hitting the level $-j$, i.e. $t_j:=\min\{i\geq 1~:~s_i=-j\}$. Now again we can see that $\sigma_i(x)$ hits $-k$ for the first time at the time instant $n$ if and only if $i$ is one of the $t_j$-s, $j=1,2,\ldots ,k$, which proves our formula. $\blacksquare$\\ \\ Let us now observe the random walk $R(j)=\sum_{i=1}^{j} \xi(i)$, $R(0)=0$, with cyclically interchangeable increments $(\xi(1),\ldots ,\xi(n))$. Let $T_{-k}$ be the first time that $R$ reaches the level $-k$ and $T^{(i)}_{-k}$ the first time when the random walk with increments $\sigma_i(\xi)$ reaches $-k$. Using Lemma 2.5., we have \begin{align*} n\cdot{\mathbb P}(T_{-k}=n|R(n)=-k)&=\sum_{i=1}^{n} \mathbb{E}\,[1_{\{T_{-k}=n\}}|R(n)=-k]\\ &=\mathbb{E}\,\big(\sum_{i=1}^{n}1_{\{T_{-k}=n\}}|R(n)=-k\big)\\ &=\mathbb{E}\,\big(\sum_{i=1}^{n}1_{\{T^{(i)}_{-k}=n\}}|R(n)=-k\big)\\ &=k~, \end{align*} where in the second line from the end we used that the increments of the random walk $R$ are cyclically interchangeable, and in the last line the combinatorial formula, i.e. the fact that there are exactly $k$ permutations of the increments of the random walk $R$ which hit the level $-k$ for the first time at the time instant $n$.
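The identity just derived can be verified by brute force for a small walk. In the sketch below the increment distribution is an arbitrary illustrative choice on $\{-1,0,1\}$; any i.i.d. increments bounded below by $-1$ (i.e. downwards skip-free steps) will do.

```python
from itertools import product

def hitting_time_identity(n, k, probs):
    # Exhaustively compute both sides of n*P(T_{-k}=n) = k*P(R(n)=-k)
    # for a random walk with i.i.d. increments distributed by `probs`,
    # a dict over integer values >= -1 (downwards skip-free steps).
    p_first_hit_at_n = 0.0
    p_end_at_minus_k = 0.0
    for steps in product(probs, repeat=n):
        p = 1.0
        for s in steps:
            p *= probs[s]
        partial, first_hit = 0, None
        for i, s in enumerate(steps, start=1):
            partial += s
            if partial == -k and first_hit is None:
                first_hit = i  # T_{-k}: the walk cannot jump past -k downwards
        if partial == -k:
            p_end_at_minus_k += p
            if first_hit == n:
                p_first_hit_at_n += p
    return n * p_first_hit_at_n, k * p_end_at_minus_k

lhs, rhs = hitting_time_identity(n=6, k=2, probs={-1: 0.5, 0: 0.3, 1: 0.2})
print(lhs, rhs)  # the two sides coincide
```

The identity is exact, not asymptotic: it holds for every $n$, $k$ and every increment law bounded below by $-1$.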
This is \textit{Kemperman's formula}, or \emph{the hitting time theorem}, and since we will use it only for independent random variables, i.e. independent increments of the random walk, we will rephrase it in a less general form than the one proved above. \begin{lemma} Let $R$ be an upwards skip-free random walk starting at $0$ (i.e. $R(0)=0$) and $\tau(k)\in\{0,1,2,\ldots\}$ the first time that the random walk $R$ reaches the level $k>0$. Then \begin{equation}\label{2:eq.4}n\cdot{\mathbb P}(\tau(k)=n)=k\cdot{\mathbb P}(R(n)=k)~,~n\geq 1~. \end{equation} \end{lemma} \section{Main results for the discrete time ruin process} \label{sec-3} Let $C^1$ and $C^2$ be independent nondecreasing random walks, i.e. \begin{equation}\label{3:eq.1} C^i(n)=U^i(1)+\cdots +U^i(n)~,~~n\geq 1~, \end{equation} for \begin{equation}\label{3:eq.2} U^i(j)\sim \left( \begin{array}{ccccc} 0 & 1 & 2 & 3 & \ldots \\ p_0 & p_1 & p_2 & p_3 & \ldots \end{array} \right) \end{equation} for some $p_k\geq 0$, $k\geq 0$, $\sum_{k=0}^\infty p_k=1$ and $j\in\{1,2,\ldots\}$, $i=1,2$. We define \begin{equation}\label{3:eq.3} C=C^1+C^2~. \end{equation} Let us assume that $\mathbb{E}\, C^i(1)<\infty$ or, equivalently, $\mathbb{E}\, U^i(1)<\infty$, $i=1,2$.\\ \\ We define \emph{the discrete time risk process with unit drift} by \begin{equation}\label{3:eq.3b} X(n)=n-C(n)~,~~n\geq 0~, \end{equation} which means that $X$ is an upwards skip-free random walk. Let us further assume that, using the notation $\mu:=\mathbb{E}\, C(1)$, $\mu_i:=\mathbb{E}\, C^i(1)$, $i=1,2$, \begin{equation}\label{3:eq.4} \mathbb{E}\, X(1)=1-\mathbb{E}\, C(1)=1-\mu=1-(\mu_1+\mu_2)>0~.
\end{equation} For $\widehat{X}=-X$, we also define \begin{equation*}\widehat{S}(n):=\max_{0\leq s\leq n} \widehat{X}(s)~, \end{equation*} \begin{equation*} \widehat{S}(\infty):=\max_{s\geq 0} \widehat{X}(s) \end{equation*} and \begin{equation*}\widehat{\tau_x}=\inf\{n\geq 0~:~\widehat{X}(n)>x\}~, \end{equation*} the first time that the dual random walk $\widehat{X}$ crosses the level $x\in{\mathbb N}$, \begin{equation*} \Delta C^i(n)=C^i(n)-C^i(n-1)~~\textrm {for} ~~i=1,2 \end{equation*} and \begin{equation*} \Delta C(n)=C(n)-C(n-1)~, \end{equation*} $n\geq 1$, the jumps of the random walks $C^1$, $C^2$ and $C$. Using the linearity of the expectation, the fact that the increments of the random walk are independent and identically distributed, and the standard induction procedure, we can see that the following result is valid. \begin{lemma} \begin{equation}\label{3:eq.5}\mathbb{E}\,\big( \sum_{n=0}^{\infty} \mathcal{H}(n,\omega,\Delta C^i(n)(\omega))\big) =\mathbb{E}\,\big(\sum_{n=0}^{\infty}\int_{(0,\infty)} \mathcal{H}(n,\omega,\varepsilon)\,dF_i(\varepsilon)\big)~, \end{equation} where $\mathcal{H}$ is a non-negative function (depending on $\omega$ only through the increments up to time $n-1$) and $F_i$ the distribution function of the increments of the random walk $C^i$, $i=1,2$. \end{lemma} For $y\leq 0$ and $x>0$ we define $$\mathcal{H}(n,\omega,\varepsilon_i)=1_{\{\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0\}}\cdot 1_{\{\varepsilon_i=x+1-y\}}~.$$ Then, using Lemma 3.1., it follows that \begin{align*} &\mathbb{E}\,\big( \sum_{n=0}^{\infty} \mathcal{H}(n,\omega,\Delta C^i(n)(\omega))\big)\\ &=\sum_{n=1}^{\infty} {\mathbb P}(\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0,\Delta C^i(n)=x+1-y)\\ &={\mathbb P}(\widehat{\tau_0}<\infty,\widehat{X}(\widehat{\tau_0}-1)=y, \widehat{X}(\widehat{\tau_0})\geq x,\Delta C^i(\widehat{\tau_0})=x+1-y)~.
\end{align*} Let us mention that the inequality $\widehat{X}(\widehat{\tau_0})\geq x$ appearing in the last line is a consequence of the fact that in the discrete time case the components of the random walk $C$ may jump simultaneously (unlike in the continuous time case, when the risk process is modelled by a spectrally negative L\'evy process). Since the components of the random walk $C$ are nondecreasing, they can only increase the supremum of the overall risk process, while the drift decreases it by one unit at each time instant, so we have $\Delta C^i(\widehat{\tau_0})=(x-y)+1$ in the last line.\\ \\ On the other hand, using Lemma 2.3. and Lemma 2.6., we have \begin{align*} &\mathbb{E}\,\big(\sum_{n=1}^{\infty}\int_{(0,\infty)} \mathcal{H}(n,\omega,\varepsilon_i)\,dF_i(\varepsilon_i)\big)\\ &=\sum_{n=1}^{\infty}\mathbb{E}\,\big(\int_{(0,\infty)} 1_{\{\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0\}}\cdot 1_{\{\varepsilon_i=x+1-y\}}\,dF_i(\varepsilon_i)\big)\\ &=\sum_{n=1}^{\infty}{\mathbb P}(\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0)\cdot{\mathbb P}(C^i(1)=x+1-y)\\ &=\sum_{n=1}^{\infty}{\mathbb P}(\widehat{S}(n-1)\leq 0~|~\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\cdot{\mathbb P}(C^i(1)=x+1-y)\\ &={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} {\mathbb P}\big(\max_{0\leq m\leq n-1} \widehat{X}(m)\leq 0~|~\widehat{X}(n-1)=y\big)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} {\mathbb P}(\widehat{X}(m)\leq 0~,~\forall~ 0\leq m\leq n-1~|~\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty}{\mathbb P}(C(m)\leq m~,~\forall~ 0\leq m\leq n-1~|~\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty}{\mathbb P}(C(m)\leq m~,~\forall~ 0\leq m\leq n-1~|~C(n-1)=y+(n-1))\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} \Big(1-\frac{y+(n-1)}{n-1}\Big)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} \frac{1}{n-1}\cdot(-y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} \frac{1}{n-1}\cdot(n-1)\cdot{\mathbb P}(\widehat{\tau}(y)=n-1)\\ &={\mathbb P}(C^i(1)=x+1-y)~. \end{align*} Let us notice that in the last line we again used the fact that $\widehat{X}$ is a downwards skip-free random walk which can only take unit steps downwards, i.e. it has to hit each level $y=-k\leq 0$ it crosses, so ${\mathbb P}(\widehat{\tau}(y)<\infty)=1$ for $y\leq 0$.\\ \\ Let us further notice that the above result can be generalized in the same way for finitely many random walks $C^1,C^2,\ldots ,C^m$, for some $m\in{\mathbb N}$. So we have the following result. \begin{thm} Let $C^1, C^2,\ldots ,C^m$, $m\in{\mathbb N}$, be independent random walks with nondecreasing increments (defined as in (\ref{3:eq.1}) and (\ref{3:eq.2})) and \begin{equation}\label{3:eq.7} X(n)=n-(C^1+\cdots +C^m)(n)~,~n\geq 0~. \end{equation} Let $\widehat{X}=-X$ be the dual random walk, with $\widehat{X}(0)=0$, and let us assume that $\mu=\mu^1+\cdots +\mu^m<1$, for $\mu:=\mathbb{E}\, C(1)$ and $\mu^i:=\mathbb{E}\, C^i(1)$, $i=1,2,\ldots,m$. Then for $y\leq 0$ and $x>0$ \begin{equation}\label{3:eq.8}{\mathbb P}(\widehat{\tau_0}<\infty,\widehat{X}(\widehat{\tau_0}-1)=y, \widehat{X}(\widehat{\tau_0})\geq x,\Delta C^i(\widehat{\tau_0})=x+1-y)={\mathbb P}(C^i(1)=x+1-y)~. \end{equation} \end{thm} Summing over all $y\leq 0$, we derive the following result. \begin{cor} For the random walks and assumptions of Theorem 3.2., \begin{equation}\label{3:eq.9}{\mathbb P}(\widehat{\tau_0}<\infty, \widehat{X}(\widehat{\tau_0})\geq x,\Delta C^i(\widehat{\tau_0})\geq x+1)={\mathbb P}(C^i(1)\geq x+1)~. \end{equation} \end{cor} Let us now look at \emph{the discrete risk model with the perturbation}, i.e.
the random walk $X$ such that \begin{equation}\label{3:eq.10} X(n)=-C(n)+Z(n)~,~ n\geq 1~,\end{equation} where $C$ is a random walk with nondecreasing increments and $Z$ an upwards skip-free random walk. In other words, we have \begin{equation}\label{3:eq.11} Z(1)=\xi_Z(i)\sim\left( \begin{array}{ccccc} \ldots & -2 & -1 & 0 & 1 \\ \ldots & q_2 & q_1 & q_0 & \rho \\ \end{array} \right) \end{equation} and \begin{equation}\label{3:eq.12} C(1)=\xi_C(i)\sim\left( \begin{array}{ccccc} 0 & 1 & 2 & 3 & \ldots \\ p_0& p_1 & p_2 & p_3 & \ldots \\ \end{array} \right)~~, \end{equation} for some $\rho,~q_j,~p_j\geq 0$ such that $\sum_{j=0}^\infty q_j=\sum_{j=0}^\infty p_j=1$. We assume that \begin{equation}\label{3:eq.13} \mathbb{E}\,(X(1))>0~, ~~\textrm{i.e.}~~\mathbb{E}\, C(1)<\mathbb{E}\, Z(1)~. \end{equation} $X$ is obviously an upwards skip-free random walk and the dual process, $\widehat{X}=-X$, is a downwards skip-free random walk such that $\widehat{X}\to -\infty$. Furthermore, we can rewrite $\widehat{X}$ so that $$\widehat{X}(n)=\sum_{i=1}^{n}\xi_{\widehat{X}}(i)=\sum_{i=1}^{n}(\xi_W(i)-1)=\sum_{i=1}^n \xi_W(i)-n=:W(n)-n~,~~n\geq 0~,$$ for $\xi_W(i):=\xi_{\widehat{X}}(i)+1$, so we have that ${\mathbb P}(W(1)=k)={\mathbb P}(\widehat{X}(1)=k-1)$, $k\geq 0$.\\ \\ Since in the discrete time case the random walks $C$ and $Z$ jump at the same time, if $\widehat{X}$ was in some position $y\leq 0$ at the time instant just before it crossed the level $0$ and in the position $x>0$ when it crossed the level $0$, the event \{$C$ caused the jump of the process $\widehat{X}$ over the level $0$\} can be written as $$\{\Delta C(\widehat{\tau}_0)\geq -y+1+x, \Delta Z(\widehat{\tau}_0) =-1\}\cup\{\Delta C(\widehat{\tau}_0)\geq -y+x, \Delta Z(\widehat{\tau}_0) \geq 0\}~.$$ So, for $y\leq 0$ and $x>0$ we have \begin{align*} &{\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat {\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\textrm{$C$ caused the jump of the process $\widehat{X}$
over the level $0$ })\\ &={\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\Delta C(\widehat{\tau}_0)\geq -y+1+x,\Delta Z(\widehat{\tau}_0)=-1)\\ &+{\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\Delta C(\widehat{\tau}_0)\geq -y+x,\Delta Z(\widehat{\tau}_0)\geq 0)~. \end{align*} Let us define $$\mathcal{H}(n,\omega,\varepsilon,\eta)=1_{\{\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0\}}\cdot 1_{\{\varepsilon=x+1-y\}}\cdot 1_{\{\eta=-1\}}~.$$ Using Lemmas 3.1, 2.3 and 2.6 we have \begin{align*} &{\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\Delta C(\widehat{\tau}_0)\geq -y+1+x,\Delta Z(\widehat{\tau}_0)=-1)\\ &=\sum_{n=1}^{\infty}{\mathbb P}(\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0)\cdot{\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}{\mathbb P}(\widehat{S}(n-1)\leq 0|\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}{\mathbb P}(W(t)\leq t~,~0\leq t\leq n-1|\widehat{X}(n-1)=y)\\ & \cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}(1-\frac{y+(n-1)}{n-1})\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}\frac{1}{n-1}\cdot(-y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}{\mathbb P}(\widehat{\tau}_{y}=n-1)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)~.\end{align*} In the computation above we used the crucial fact that $\widehat{X}$ is a downwards skip-free random walk, which means that it can only take unit steps to go downwards, i.e.
it visits each level $y\leq 0$.\\ \\ We can apply the same calculation to the second summand, ${\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\Delta C(\widehat{\tau}_0)\geq -y+x,\Delta Z(\widehat{\tau}_0)\geq 0)$, so we have the following result. \begin{thm} Let $C$ and $Z$ be the random walks defined as in (\ref{3:eq.11}) and (\ref{3:eq.12}) and $X$ the discrete time perturbed risk process \begin{equation}\label{3:eq.14} X(n)=-C(n)+Z(n)~,~ n\geq 1~,\end{equation} and let us assume that $\mathbb{E}\,(X(1))>0$, i.e. $\mathbb{E}\, C(1)<\mathbb{E}\, Z(1)$. Then for $y\leq 0$ and $x>0$ \begin{align*}{\mathbb P}(\widehat{\tau}_0<\infty,&\widehat{X}(\widehat {\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\textrm{$C$ caused the jump of the process $\widehat{X}$ over the level $0$})\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)+{\mathbb P}(C(1)\geq x-y)\cdot{\mathbb P}(Z(1)\geq 0)~.\end{align*}\end{thm} Let us notice that a similar result can be derived if we define $C$ as the sum of $m\in{\mathbb N}$ independent nondecreasing random walks $C^1,\ldots,C^m$, $C=C^1+\cdots +C^m$.\\ \\ Let us further notice that the ``problem'' of the simultaneous jumps of the random walks $C$ and $Z$, which is characteristic of discrete time processes and differs from the continuous time version of the same problem, may be overcome by observing a natural connection between these two types of models, namely the compound Poisson processes. For generalizations in this direction and similar results connected to the ruin probability, see \cite{GTV}.\\ \\ \textbf{Acknowledgement:} This work has been supported in part by Croatian Science Foundation under the project 3526. \end{document}
\begin{document} \title {Global solutions in the critical Besov space for the non cutoff Boltzmann equation } \author[M]{Yoshinori Morimoto \corref{cor1}} \address[M]{Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, 606-8501, Japan} \ead[M] {morimoto@math.h.kyoto-u.ac.jp} \author[S]{Shota Sakamoto} \address[S]{Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, 606-8501, Japan} \ead[S] {sakamoto.shota.76r@st.kyoto-u.ac.jp} \cortext[cor1]{corresponding author, tel:81-75-753-6737, fax:81-75-753-2929} \begin{keyword}{Boltzmann equation, without angular cutoff, critical Besov space, global solution} \end{keyword} \date{} \begin{abstract} The Boltzmann equation is studied without the cutoff assumption. Under a perturbative setting, a unique global solution of the Cauchy problem of the equation is established in a critical Chemin-Lerner space. In order to analyse the collisional term of the equation, a Chemin-Lerner norm is combined with a non-isotropic norm with respect to the velocity variable, which yields an a priori bound through an energy estimate. Together with local existence, which follows from commutator estimates and the Hahn-Banach extension theorem, the desired solution is obtained. Also, the non-negativity of the solution is rigorously shown. \end{abstract} \maketitle \section{Introduction} We consider the Boltzmann equation in $\mathbb{R}^3$, \begin{align}\label{eq: Boltzmann} \begin{cases} \partial_t f(x,v,t)+v\cdot \nabla_x f(x,v,t)=Q(f,f)(x,v,t), \\ f(x,v,0)=f_0(x,v), \end{cases} \end{align} where $f=f(x,v,t)$ is the density distribution of particles in a rarefied gas with position $x \in \mathbb{R}^3$ and velocity $v \in \mathbb{R}^3$ at time $t \ge 0$, and $f_0$ is an initial datum. The aim of this paper is to show the global existence and uniqueness of a solution to \eqref{eq: Boltzmann} in an appropriate Besov space under a perturbative framework.
There are many references concerning well-posedness of solutions to the Boltzmann equation around the global Maxwellian equilibrium in different spaces. Recently, Duan, Liu, and Xu \cite{DLX} proved that the Boltzmann equation has a unique global strong solution near the Maxwellian in the Chemin-Lerner space $\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_{2,1})$ (defined later: this space is a space-velocity-time Besov space) under Grad's angular cutoff assumption. This was the first result which applied Besov spaces to the Cauchy problem of the Boltzmann equation around the equilibrium. In this paper, we consider \eqref{eq: Boltzmann} without angular cutoff by using the triple norm, introduced by Alexandre, Morimoto, Ukai, Xu, and Yang (AMUXY in what follows) \cite{AMUXY1, AMUXY2}. The triple norm was originally adopted to discuss the equation in $L^\infty_t (0,\infty; H_l^k(\mathbb{R}^6_{x,v}))$ for suitable $l, k \in \mathbb{N}$. We combine the ideas of both \cite{DLX} and \cite{AMUXY2} (see also \cite{amuxy4-2, amuxy4-3, AMUXY3}) to discuss the Cauchy problem without angular cutoff. For this purpose, we later introduce a Chemin-Lerner type triple norm, which enables us to take full advantage of the two ideas. We discuss \eqref{eq: Boltzmann} in greater detail. The bilinear operator $Q(f,g)$ is called the Boltzmann collision operator and is defined by \begin{align*} Q(f,g)(v)=\int_{\mathbb{R}^3}\int_{\mathbb{S}^2} B(v-v_*,\sigma) (f_*'g'-f_*g)d\sigma dv_*, \end{align*} where $f_*'=f(v_*')$, $g'=g(v')$, $f_*=f(v_*)$, $g=g(v)$, and \begin{align*} v'=\frac{v+v_*}{2}+\frac{\vert v-v_*\vert}{2}\sigma,\ v_*'=\frac{v+v_*}{2}-\frac{\vert v-v_* \vert}{2}\sigma\quad (\sigma \in \mathbb{S}^2). \end{align*} The operator $Q$ acts only on $v$ and $Q(f,g)(x,v,t):=Q(f(x,\cdot,t),g(x,\cdot,t))(v)$. The collision kernel $B$ is defined correspondingly so as to reflect various physical phenomena.
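As a quick numerical sanity check of the $\sigma$-representation above (our own illustration, not part of the paper's argument), the post-collision velocities $v', v_*'$ conserve momentum and kinetic energy, which is what makes the Maxwellian an equilibrium of the collision dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)
v, v_star = rng.normal(size=3), rng.normal(size=3)   # pre-collision velocities
sigma = rng.normal(size=3)
sigma /= np.linalg.norm(sigma)                       # a point on the sphere S^2

# post-collision velocities in the sigma-representation
r = np.linalg.norm(v - v_star) / 2
vp      = (v + v_star) / 2 + r * sigma
v_starp = (v + v_star) / 2 - r * sigma

# momentum and kinetic energy are conserved by the collision
assert np.allclose(vp + v_starp, v + v_star)
assert np.isclose(np.dot(vp, vp) + np.dot(v_starp, v_starp),
                  np.dot(v, v) + np.dot(v_star, v_star))
```

The check holds for any choice of $v$, $v_*$, and $\sigma \in \mathbb{S}^2$, since the $\sigma$-terms cancel in the sum and contribute equal squares to the energy.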
The first assumption is \begin{align*} B(v-v_*, \sigma)=b\left(\frac{v-v_*}{\vert v-v_* \vert}\cdot \sigma\right) \Phi_\gamma(|v-v_*|) \ge 0, \end{align*} where $\Phi_\gamma(|v-v_*|)= \vert v-v_*\vert^\gamma$ for $\gamma > -3$. This model is physically derived when the particle interaction obeys the inverse power law $\phi(r)=r^{-(p-1)}$ ($2<p<\infty$), where $r$ is the distance between two interacting particles. Before stating the second assumption, we should recall that $b$ is a function of the deviation angle $\theta \in [0,\pi]$, defined by $\cos \theta =\frac{v-v_*}{\vert v-v_* \vert}\cdot \sigma$, so we write $b\left(\frac{v-v_*}{\vert v-v_* \vert}\cdot \sigma\right)=b(\cos\theta)$. As is customary, $b(\cos \theta)$ $(\theta \in [0, \pi])$ is replaced by \begin{equation*} \left[b(\cos \theta)+b(\cos(\pi-\theta))\right] \mathbf{1}_{\{ 0\le \theta \le \pi/2\}} \end{equation*} so that the domain of $b$ is $[0,\pi/2]$. The second one is called the non-cutoff assumption: for some $\nu \in (0,2)$ and $K>0$, \begin{align*} b\left(\frac{v-v_*}{\vert v-v_* \vert}\cdot \sigma\right) = b(\cos \theta)\sim K \theta^{-2-\nu}\ (\theta \downarrow 0), \end{align*} which implies that $b$ is not integrable near $\theta=0$. In the physical model of the inverse power law potential, relations among $\gamma$, $\nu$, and $p$ are given by $\gamma=(p-5)/(p-1)$ and $\nu=2/(p-1)$. In this paper, we particularly consider the case when $$\gamma > \max\{-3, -3/2 -\nu\}. $$ The second term in this lower bound is due to a technical reason which will appear when we derive various estimates for the collision operator and a nonlinear term derived from $Q$. However, it is easily verified that $-1 < \gamma + \nu <1$ as long as $\gamma = (p-5)/(p-1)$ and $\nu = 2/(p-1)$, that is, all the physically possible combinations of $\gamma$ and $\nu$ are contained in our restriction.
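For the reader's convenience, the verification of the claim $-1<\gamma+\nu<1$ is a one-line computation from the inverse power law relations:
\[
\gamma+\nu=\frac{p-5}{p-1}+\frac{2}{p-1}=\frac{p-3}{p-1}=1-\frac{2}{p-1}\,,
\]
and since $2<p<\infty$ implies $2/(p-1)\in(0,2)$, we indeed obtain $\gamma+\nu\in(-1,1)$.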
We consider the Boltzmann equation around the normalized Maxwellian \begin{align*} \mu=\mu(v)=(2\pi)^{-3/2}e^{-\vert v \vert^2/2}, \end{align*} thus we set $f=\mu + \mu^{1/2}g$, so that \eqref{eq: Boltzmann} is equivalent to the equation \begin{align}\label{eq: perturbation} \partial_tg+ v\cdot \nabla_x g +\cL g = \Gamma(g,g), \end{align} with initial datum $g_0$ defined from $f_0 = \mu + \mu^{1/2}g_0$, where \begin{align*} \Gamma (g,h)=\mu^{-1/2} Q(\mu^{1/2}g, \mu^{1/2}h),\quad \cL g=\cL_1g+\cL_2g:=-\Gamma(\mu^{1/2}, g)-\Gamma(g, \mu^{1/2}). \end{align*} These are the nonlinear and the linear term of the Boltzmann equation, respectively. It is known that $\cL$ is nonnegative-definite on $L^2_v(\mathbb{R}^3)$ and \begin{equation*} \ker{\cL}=\text{span} \{\mu^{1/2}, \mu^{1/2}v_i\ (i=1,2,3), \mu^{1/2}\vert v \vert^2\}. \end{equation*} The projection operator on $\ker{\mathcal{L}}$ is denoted by $\mathbf{P}$, therefore for each $g$, \begin{align*} \mathbf{P}g(x,v,t)=\left[ a(x,t)+v\cdot b(x,t)+\vert v \vert^2c(x,t)\right] \mu^{1/2}(v). \end{align*} The decomposition $g=\mathbf{P}g+(\mathbf{I}-\mathbf{P})g$ is called the macro-micro decomposition, where $\mathbf{I}$ is the identity map. The energy term and the dissipation term are respectively defined by \begin{align*} \mathcal{E}_T(g) &\sim \Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{3/2}_x)}\\ \intertext{and} \mathcal{D}_T(g) &= \Vert \nabla_x(a,b,c) \Vert_{\tilde{L}^2_T(B^{1/2}_x)} + \Vert (\mathbf{I}-\mathbf{P})g\Vert_{\mathcal{T}^{3/2}_{T,2,2}}. \end{align*} Here and hereafter, $B^s_x:= B^s_{2,1}(\RR^3_x)$. We have just made the hidden indices explicit; the above norms will be rigorously defined in Section 2. The main theorem of this paper is the following. \begin{thm}\label{main theorem} Let $0< \nu <2$ and $\gamma > \max\{ -3, -\nu -3/2\}$.
There are constants $\varepsilon_0>0$ and $ C>0$ such that if \begin{equation*} \Vert g_0 \Vert_{\tilde{L}^2_v (B^{3/2}_x)}\le \varepsilon_0, \end{equation*} then there exists a unique global solution $g(x,v,t)$ of \eqref{eq: perturbation} with initial datum $g_0(x,v)$. This solution satisfies \begin{equation}\label{apriori-estimate-0} \mathcal{E}_T(g) +\mathcal{D}_T(g) \le C \Vert g_0 \Vert_{\tilde{L}^2_v (B^{3/2}_x)} \end{equation} for any $T>0$. Moreover if $f_0(x,v) = \mu + \sqrt \mu g_0(x,v) \ge 0$ then $f(x,v,t) = \mu + \sqrt \mu g(x,v,t) \ge 0$. \end{thm} Let us make some comments on the theorem. We note that the Besov embedding $B^{3/2}_{2,1}(\mathbb{R}^3) \hookrightarrow L^\infty (\mathbb{R}^3)$ ensures that $B^{3/2}_{2,1}(\mathbb{R}^3)$ is a Banach algebra, that is, if $f$ and $g$ are elements of $B^{3/2}_{2,1}(\mathbb{R}^3)$, then so is the product $fg$ (see Corollary 2.86 of \cite{BCD} for instance). This is the reason why we employed this space with respect to $x$, combined with $L^\infty_T L^2_v$. In this sense, we used the word \textit{critical} in the title; compare with the well-known fact that we need $s>3/2$ for $H^s(\mathbb{R}^3) \hookrightarrow L^\infty (\mathbb{R}^3)$. We also remark that the Chemin-Lerner space $\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_{2,1})$ defines a stronger topology than $L^\infty_T L^2_v (B^{3/2}_{2,1})$. Thus we shall adopt the Chemin-Lerner norm for the problem. We review some known results, confining ourselves to research on the Cauchy problem around the global Maxwellian. The first fruitful work was done by Ukai \cite{U1}, \cite{U2}, based on spectral analysis of the collision operator and the bootstrap argument. This was the first result obtaining a global mild solution to the equation. Notable works following this perspective are, for example, Nishida and Imai \cite{NI}, and Ukai and Asano \cite{UA}. Most of these results were obtained in the 1970s and 1980s.
They adopted the $L^\infty$-norm to estimate a solution with respect to $v$. Recent studies based on the $L^2_v$-framework started from similar but independent results by Guo \cite{G} and by Liu and Yu \cite{LY} and Liu, Yang, and Yu \cite{LYY}. They applied the macro-micro decomposition, which clarifies dissipative and equilibrating effects of the equation. This allowed them to utilize the energy method. They succeeded in constructing a classical solution in an $x$- and $v$-Sobolev space. There are two independent results concerning global existence around the equilibrium for the non-cutoff Boltzmann equation. To establish a global solution to the non cut-off Boltzmann equation in the whole space, AMUXY \cite{AMUXY1, amuxy4-2, AMUXY2} followed Guo's framework with intensive pseudo-differential calculus on the collisional term, in both the general hard potential case $\gamma + \nu >0$ and the general soft potential case $\gamma + \nu \le 0$. More precisely, when $\gamma +\nu >0$, \cite{amuxy4-2} constructed global solutions in $L^\infty_t (0,\infty; H_l^k(\mathbb{R}^6_{x,v}))$ with $k\ge 6$ and $l>3/2+\gamma + \nu$. Here, $H^k_l(\mathbb{R}^6_{x,v}) =\{ f\in \mathcal{S}' (\mathbb{R}^6_{x,v}); \langle v \rangle^l f \in H^k (\mathbb{R}^6_{x,v})\}$. For the general soft potential case under an additional condition $\gamma > \max\{ -3, -\nu -3/2\}$, AMUXY \cite{AMUXY2} found a solution in $L^\infty_t (0,\infty; L^2_v (\mathbb{R}^3 ; H^N_x (\mathbb{R}^3)))$ with $\NN \ni N \ge 4$ and another solution in $L^\infty_t (0,\infty; \tilde{\cH}_{\ell}^k(\RR^6_{x,v}))$ with $4 \le k \in \NN, \ell \ge k$, where \[\tilde{\cH}_{\ell}^k(\RR^6_{x,v}) = \{f \in \cS'(\RR^6_{x,v}); \sup_{|\alpha + \beta|\le k} \|\la v \ra^{(\ell-|\beta| )|\gamma+ \nu|} \pa_x^\alpha \pa_v^\beta f\|^2_{L^2(\RR^6)} < \infty\}.\] AMUXY \cite{amuxy4-3} also studied qualitative properties including the smoothing effect, uniqueness, non-negativity, and the convergence rate towards the equilibrium.
When the space variable moves in a torus, the global existence was shown by Gressman and Strain \cite{GS} with convergence rates. They employed an anisotropic metric on the ``lifted'' paraboloid $d(v,v')=\sqrt{\vert v-v'\vert^2 + (\vert v\vert^2 -\vert v'\vert^2)^2/4}$ to capture changes in the power of the weight. For the general hard potential case, their result is sharper than \cite{amuxy4-2} and covers the global well-posedness in non-weighted and non-velocity derivative energy functional spaces $L^\infty_t (0,\infty; L^2_v (\mathbb{R}^3 ; H^N_x (\mathbb{T}^3)))$ with $N \ge 2$. It should be mentioned that the soft potential case was also studied by \cite{GS} in the space $L^\infty_t (0,\infty; \tilde{\cH}_{\ell}^k(\mathbb{T}^3_x \times \RR^3_{v}))$ with $4 \le k \in \NN, \ell \ge 0$, without the additional condition $\gamma > \max\{ -3, -\nu -3/2\}$. The main theorem of the present paper improves the results in non-weighted and non-velocity derivative energy functional spaces, given in \cite{AMUXY2} for the general soft potential case and in \cite{GS} for the general hard potential case, respectively. Our work is motivated by Duan, Liu, and Xu \cite{DLX}. Under the cut-off assumption, they studied the hard potential case $\gamma >0$ by using the Chemin-Lerner space $\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)$, which was originally invented in \cite{CL} to investigate the existence of the flow for solutions of the Navier-Stokes equations. One advantage of this time-velocity-space Besov space over the usual space $L^\infty_T L^2_v B^{3/2}_x$ can be explained as follows: in general, it is easier to bound dyadic blocks of the Littlewood-Paley decomposition in $L^\infty_T L^2_v L^2_x$ than to estimate the solution of the equation directly in $L^\infty_T L^2_v B^{3/2}_x$.
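The relation between the two norms is, at bottom, Minkowski's inequality: moving the time supremum inside the $\ell^1$-sum over dyadic blocks can only increase the value, so the Chemin-Lerner norm dominates the usual mixed norm. A toy numerical illustration of this elementary fact (ours, with made-up block norms $a_q(t)$ in place of $2^{qs}\Vert\Delta_q f\Vert_{L^2_v L^2_x}$):

```python
import numpy as np

rng = np.random.default_rng(1)
# a[q, t]: q-th dyadic-block norm sampled at discrete times t
a = rng.random((5, 100))

usual_norm    = np.max(a.sum(axis=0))   # sup_t sum_q a_q(t): L^inf_T-Besov analogue
chemin_lerner = a.max(axis=1).sum()     # sum_q sup_t a_q(t): tilde-L^inf_T analogue

# sum of suprema dominates the supremum of the sum
assert chemin_lerner >= usual_norm
```

The inequality is strict whenever different blocks attain their time suprema at different instants, which is exactly why the Chemin-Lerner topology is strictly stronger in general.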
Since the global well-posedness theory in $\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)$ was established by \cite{DLX} for the cutoff case, we extend their methods to the non cut-off case by using upper bound estimates of the nonlinear term and calculations of commutators which were devised in a series of AMUXY's works. There are some other papers applying Besov spaces to the analysis of the Boltzmann equation. We briefly introduce some of them concerning the Cauchy problem. Recently, Tang and Liu \cite{TL} followed \cite{DLX} under the cut-off assumption and improved the result by replacing $B^{3/2}_{2,1}$ with narrower spaces $B^s_{2,r}$ ($(s,r)\in (3/2,\infty) \times [1,2]$ or $=(3/2, 1)$). For other research, see Alexandre \cite{A}, Ars\'{e}nio and Masmoudi \cite{AM}, Sohinger and Strain \cite{SS}. This paper is organized as follows. In Section 2, we review the definitions of the Littlewood-Paley decomposition, Besov spaces, and Chemin-Lerner spaces. Taking into account the triple norm defined in \cite{AMUXY1, AMUXY2}, we also define a Chemin-Lerner type triple norm. In Section 3, we deduce Besov-type trilinear estimates. We have to derive three different estimates of Besov type for the nonlinear term $\Gamma$, which lead us to the global and local existence of solutions. Since the $\sigma$-integration on the unit sphere is significant in the non cutoff case, the nonlinear term cannot be estimated by separating the so-called gain and loss terms, unlike the cut-off case in \cite{DLX}. To get estimates analogous to those in Lemma 3.1 of \cite{DLX}, the $\sigma$-integration should be dealt with by the triple norm, involving the two velocity variables $v, v'$. Section 4 is devoted to deducing an a priori estimate for the global existence. The microscopic part $(\mathbf{I}-\mathbf{P})g$ is handled by the estimates established in Section 3; however, we cannot adopt them for the macroscopic part $\mathbf{P}g$.
To overcome this difficulty, we will bring in a fluid-type system of $(a, b, c)$. Applying the energy method to it, we will acquire an estimate of the macroscopic dissipation term. Local existence of a solution will be shown in Section 5. We want to establish a solution by the duality argument and the Hahn-Banach extension theorem, but a problem arises since, as far as we know, the dual space of $\tilde{L}^\infty_T \tilde{L}^2_v(B^{3/2}_x)$ is unknown. Therefore, we first find a solution to a linearized equation in a wider space $L^\infty(0,T; L^2(\mathbb{R}^6_{x,v}))$ and construct an approximating sequence. Then we show that a limit of the sequence is a solution to the equation, and that it indeed belongs to a suitable solution space. This is our strategy; however, we have to employ very delicate commutator estimates involving the nonlinear term $\Gamma$ and cut-off functions with respect to both $x$ and $v$. Proofs of these estimates are fairly technical, so in this section we postpone proving them and focus on solving the equation, assuming the necessary lemmas. Such lemmas are collected in the Appendix with proofs. Non-negativity of the solution obtained will be shown in Section 6. Known inequalities for Besov spaces and the triple norm used in this paper are also collected in the Appendix. \section{Preliminaries}\label{S2} \setcounter{equation}{0} In this section, we define some function spaces for later use. Readers may consult \cite{BCD} for this topic. First, we introduce a dyadic partition of unity, also known as the Littlewood-Paley decomposition. Let $\mathcal{C}$ be the annulus $\{ \xi \in \mathbb{R}^3 \ \vert\ 3/4 \le \vert \xi \vert \le 8/3\}$ and $\mathcal{B}$ be the ball $B(0,4/3)$.
Then there exist radial functions $\chi \in C^\infty_0(\mathcal{B})$ and $\phi \in C^\infty_0(\mathcal{C})$ satisfying the following conditions: \begin{gather*} 0 \le \chi(\xi), \phi(\xi) \le 1 \quad \mathrm{for \ any} \ \xi \in \mathbb{R}^3,\\ \chi(\xi)+\sum_{q \ge 0} \phi(2^{-q}\xi)=1\quad \mathrm{for \ any}\ \xi \in \mathbb{R}^3,\\ \sum_{q \in \mathbb{Z}} \phi(2^{-q}\xi)=1\quad \mathrm{for\ any}\ \xi \in \mathbb{R}^3\setminus\{0\},\\ \vert q-q'\vert \ge 2 \Rightarrow \mathrm{supp}\ \phi(2^{-q}\cdot)\cap\mathrm{supp}\ \phi(2^{-q'}\cdot) =\emptyset,\\ q\ge 1 \Rightarrow \mathrm{supp}\ \chi \cap \mathrm{supp}\ \phi(2^{-q}\cdot )=\emptyset,\\ \vert q-q'\vert \ge 5 \Rightarrow 2^{q'}\tilde{\mathcal{C}}\cap 2^q\mathcal{C}=\emptyset, \end{gather*} where $\tilde{\mathcal{C}}:=B(0,2/3)+\mathcal{C}$. We fix these functions and write $h:=\mathcal{F}^{-1}\phi$ and $\tilde{h}:=\mathcal{F}^{-1}\chi$. For each $f \in \mathcal{S}'(\mathbb{R}^3_x)$, the nonhomogeneous dyadic blocks $\Delta_q$ are defined by \begin{align*} &\Delta_{-1}f:= \chi(D)f =\int_{\mathbb{R}^3} \tilde{h}(y)f(x-y)dy,\\ &\Delta_qf:= \phi(2^{-q}D)f= 2^{3q}\int_{\mathbb{R}^3} h(2^{q}y) f(x-y) dy \quad (q\in \mathbb{N}\cup \{0\}) \end{align*} and $\Delta_qf:=0$ if $q\le -2$. The nonhomogeneous low-frequency cutoff operator $S_q$ is defined by \begin{align*} S_qf=\sum_{q' \le q-1} \Delta_{q'} f. \end{align*} We denote the set of all polynomials on $\mathbb{R}^3$ by $\mathcal{P}$. Since any polynomial $P$, regarded as a distribution, satisfies $\phi(2^{-q}D)P=0$, the homogeneous dyadic blocks $\dot{\Delta}_q$ are well defined by \begin{align*} \dot{\Delta}_qf:= \phi(2^{-q}D)f=2^{3q} \int_{\mathbb{R}^3} h(2^{q}y)f(x-y)dy \end{align*} for any $f \in \mathcal{S}'(\mathbb{R}^3_x)/\mathcal{P}$ and $q \in \mathbb{Z}$. We now give the definition of nonhomogeneous Besov spaces as follows. \begin{defi} Let $1 \le p,r \le \infty$ and $s \in \mathbb{R}$.
The nonhomogeneous Besov space $B^s_{pr}$ is defined by \begin{align*} B^s_{pr}:= \left\{ f \in \mathcal{S}'(\mathbb{R}^3_x) \ \vert \ \Vert f \Vert_{B^s_{pr}} := \left\Vert (2^{qs} \Vert \Delta_q f \Vert_{L^p_x} )_{q \ge -1} \right\Vert_{\ell^r} < \infty \right\}. \end{align*} When $r=\infty$, we set $\Vert f \Vert_{B^s_{p\infty}}:={\displaystyle\sup_{q \ge -1}2^{qs}} \Vert \Delta_q f \Vert_{L^p_x} $. \end{defi} Here $L^p_x:=L^p(\mathbb{R}^3_x)$ and this kind of abuse of notation will be used throughout the paper. The definition of homogeneous Besov spaces is as follows: \begin{defi} Let $1 \le p,r \le \infty$ and $s \in \mathbb{R}$. The homogeneous Besov space $\dot{B}^s_{pr}$ is defined by \begin{align*} \dot{B}^s_{pr}:= \left\{ f \in \mathcal{S}'(\mathbb{R}^3_x)/ \mathcal{P} \ \vert \ \Vert f \Vert_{\dot{B}^s_{pr}} := \left\Vert (2^{qs} \Vert \dot{\Delta}_q f \Vert_{L^p_x} )_{q \in \mathbb{Z}} \right\Vert_{\ell^r} < \infty \right\}. \end{align*} When $r=\infty$, we set $\Vert f \Vert_{\dot{B}^s_{pr}} := {\displaystyle\sup_{q \in \mathbb{Z}}2^{qs}} \Vert \dot{\Delta}_q f \Vert_{L^p_x} $. \end{defi} For simplicity of notation, we will write $B^s_{2,1}=:B^s_x$ and $\dot{B}^s_{2,1}=:\dot{B}^s_x$, as stated in the introduction. Next, we define Chemin-Lerner spaces, which are generalizations of Besov spaces. \begin{defi} Let $ 1\le p,r,\alpha, \beta \le \infty$ and $s\in \mathbb{R}$.
For $T \in [0,\infty)$, the Chemin-Lerner space $\tilde{L}^\alpha_T \tilde{L}^\beta_v (B^s_{pr})$ is defined by \begin{align*} \tilde{L}^\alpha_T \tilde{L}^\beta_v (B^s_{pr}):= \left\{ f(\cdot,v,t) \in \mathcal{S}' \ \vert\ \Vert f \Vert_{\tilde{L}^\alpha_T \tilde{L}^\beta_v (B^s_{pr})} < \infty \right\}, \end{align*} where \begin{gather*} \Vert f \Vert_{\tilde{L}^\alpha_T \tilde{L}^\beta_v (B^s_{pr})} := \left\Vert (2^{qs} \Vert \Delta_q f \Vert_{L^\alpha_T L^\beta_v L^p_x} )_{q \ge -1} \right\Vert_{\ell^r},\\ \Vert \Delta_q f \Vert_{L^\alpha_T L^\beta_v L^p_x}:= \left( \int_0^T \left( \int_{\mathbb{R}^3} \left( \int_{\mathbb{R}^3} \vert \Delta_q f(x,v,t) \vert^p dx \right)^{\beta/p} dv \right)^{\alpha/\beta} dt \right)^{1/\alpha} \end{gather*} with the usual convention when at least one of $p, r, \alpha, \beta$ is equal to $\infty$. We also define $\tilde{L}^\alpha_T \tilde{L}^\beta_v (\dot{B}^s_{pr})$ similarly. \end{defi} We denote $\tilde{L}^\alpha_T \tilde{L}^\beta_v (B^s_{2,1})$ by $\tilde{L}^\alpha_T \tilde{L}^\beta_v (B^s_x)$, and $\tilde{L}^\alpha_T \tilde{L}^\beta_v (\dot{B}^s_{2,1})$ by $\tilde{L}^\alpha_T \tilde{L}^\beta_v (\dot{B}^s_x)$. Finally, we give the definition of the non-isotropic norm $|\!|\!| \cdot |\!|\!|$ and the spaces $\mathcal{T}^s_{Tpr}$ and $\dot{\mathcal{T}}^s_{Tpr}$, which are endowed with the ``Chemin-Lerner type triple norm''. \begin{defi} Let $1\le p, r \le \infty$, $T>0$ and $s\in \mathbb{R}$. $|\!|\!| f |\!|\!|$ is defined by \begin{align*} |\!|\!| f |\!|\!|^{2} :=& \iiint B(v-v_*,\sigma) \mu_* (f'-f)^2 dvdv_*d\sigma \\ &+ \iiint B(v-v_*,\sigma) f^2_* (\sqrt{\mu'}-\sqrt{\mu})^2 dvdv_* d\sigma\\ =& J_1^{\Phi_\gamma}(f) + J_2^{\Phi_\gamma}(f)\,, \end{align*} and the space $\mathcal{T}^s_{Tpr}$ is defined by \begin{align*} \mathcal{T}^s_{Tpr} := \left\{ f \ \vert \ \Vert f \Vert_{\mathcal{T}^s_{Tpr}} = \left\Vert (2^{qs} \Vert |\!|\!| \Delta_q f |\!|\!| \Vert_{L^{p}_T L^r_x} )_{q \ge -1} \right\Vert_{\ell^{1}} <\infty \right\}.
\end{align*} $\dot{\mathcal{T}}^s_{Tpr}$ is defined in the same manner. \end{defi} $|\!|\!| \cdot |\!|\!|$ is called the triple norm, and it is known that this norm is estimated from both above and below by weighted Sobolev norms (see \cite[Proposition 2.2]{AMUXY2}): \begin{equation}\label{up-down-triple} \Vert f \Vert^2_{H^{\nu/2}_{\gamma/2}} +\Vert f \Vert^2_{L^2_{(\nu+\gamma)/2}} \lesssim |\!|\!| f |\!|\!|^{2} \lesssim \Vert f \Vert^2_{H^{\nu/2}_{(\nu+\gamma)/2}}. \end{equation} If $p$ or $r$ is an explicit number, $\mathcal{T}^s_{Tpr}$ is denoted by $\mathcal{T}^s_{T,p,r}$ to avoid ambiguity. In order to deduce Chemin-Lerner estimates in the following sections, we will use some properties of the above spaces. See the Appendix for these properties. It must be emphasized again that $\Vert \cdot \Vert_{\tilde{L}^\alpha_T \tilde{L}^\beta_v (B^s_{pr})}$ is usually easier to handle than $\Vert \cdot \Vert_{L^\alpha_T L^\beta_v (B^s_{pr})}$. \section{Chemin-Lerner type trilinear estimates}\label{3} \setcounter{equation}{0} We will show an important estimate of the nonlinear term $\Gamma$ in a suitable Chemin-Lerner space. This estimate will be used many times throughout the paper. We denote the usual $L^2(\mathbb{R}^3_x \times \mathbb{R}^3_v)$ and $L^2(\mathbb{R}^3_v)$ inner products by $(\cdot,\cdot)_{x,v}$ and $(\cdot, \cdot)_v$, respectively. \begin{lem}\label{trilinear} Let $0<s\le \frac{3}{2}$ and $0<T \le \infty$. Define \begin{equation*} A_T(f, g, h):=\sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2}.
\end{equation*} Then the following trilinear estimates hold: \begin{align}\label{tri1} A_T&(f, g, h) \lesssim \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^{3/2}_{T,2,2}}^{1/2} \Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2} , \\ \label{tri2} A_T&(f, g, h) \lesssim \left( \Vert f \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^s_{T,\infty,2}}^{1/2} + \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^s_x)}^{1/2} \Vert g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}^{1/2} \right)\Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}. \end{align} Moreover, if $\gamma + \nu \le 0$, we have \begin{align}\label{tri3} A_T(f, g, h) \notag \lesssim &\, \bigg(\Vert \mu^{1/10}f \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^s_{T,\infty,2}}^{1/2} \\ &\quad \qquad +\Vert \mu^{1/10}f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^s_x)}^{1/2} \Vert g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}^{1/2}\bigg)\Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2} \notag\\ & +\bigg( \Vert f \Vert_{L^2_T L^2_{v, (\nu+\gamma)/2}(\dot{B}^{3/2}_x)}^{1/2} \Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{s}_x)}^{1/2} \notag\\ &\quad \qquad +\Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_{v, (\nu+\gamma)/2}(B^s_x)}^{1/2} \Vert g \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}^{1/2}\bigg)\Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}. \end{align} \end{lem} \begin{proof} Before starting the proof of this lemma, we recall the Bony decomposition. Since $\sum_j \Delta_j =\mathrm{Id}$, we have, at least formally, $fg=\sum_{j,j'} \Delta_j f \Delta_{j'} g$. Dividing this summation according to the frequencies, we obtain the following Bony decomposition: \begin{align*} fg=\mathcal{T}_fg+\mathcal{T}_gf+\mathcal{R}(f,g), \end{align*} where \begin{align*} \mathcal{T}_fg:=\sum_j S_{j-1}f\Delta_jg,\ \mathcal{T}_gf := \sum_j \Delta_j f S_{j-1} g,\, \\ \mathcal{R}(f,g):=\sum_j \left(\sum_{\vert j-j'\vert \le 1} \Delta_{j'}f\Delta_jg\right).
\end{align*} Using this decomposition, we divide the inner product into three parts: \begin{align*} ( \Delta_q \Gamma (f,g), \Delta_qh) = ( \Delta_q(\Gamma^1(f,g)+\Gamma^2(f,g)+\Gamma^3(f,g)), \Delta_q h), \end{align*} where $\Gamma^1(f,g) :=\sum_{j} \Gamma(S_{j-1}f, \Delta_j g) = \sum_j \Gamma^1_j(f,g)$, and $\Gamma^2(f,g)$ and $\Gamma^3(f,g)$ are similarly defined. First, we treat $\Gamma^1(f,g)$. From the shapes of the supports of $\chi$ and $\phi(2^{-j}\cdot)$, we notice \begin{align*} \Delta_q \sum_j (S_{j-1}f\Delta_jg) = \Delta_q \sum_{\vert j-q \vert \le 4} (S_{j-1}f\Delta_jg), \end{align*} that is, when $\Delta_q$ is applied, the above summation with respect to $j$ becomes finite. Hence we have \begin{align*} \left(\Delta_q \sum_j \Gamma^1_j(f,g), \Delta_q h \right)_{x,v} &=\sum_{\vert j-q \vert \le 4} (\Delta_q \Gamma^1_j(f,g), \Delta_q h)_{x,v} \\ \notag &= \sum_{\vert j-q \vert \le 4} ( \Gamma^1_j(f,g), \Delta_q^2 h)_{x,v} \\ \notag & \lesssim \sum_{\vert j-q \vert \le 4}\int_{\mathbb{R}^3} \Vert S_{j-1}f \Vert_{L^2_v} |\!|\!| \Delta_j g |\!|\!| |\!|\!| \Delta_q^2 h |\!|\!| dx \\ \notag &\lesssim \sum_{\vert j-q \vert \le 4} \Vert f \Vert_{L^2_v L^\infty_x} \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_x} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_x}, \notag \end{align*} where we used Corollary \ref{AMUXYtrilinear} and Lemma \ref{bounded operators}. We also used the inequality $$ \Vert |\!|\!| \Delta_q^2 h |\!|\!| \Vert_{L^2_x} \lesssim \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_x}, $$ which is verified by direct calculation. Hence, we have so far \begin{align}\label{ineq: AfterTrilinear1}\notag &\sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^1 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2} \\ &\lesssim \sum_{q \ge -1}2^{qs} \left(\int_0^T \sum_{\vert j-q \vert \le 4} \Vert f \Vert_{L^2_v L^\infty_x} \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_x} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_x} dt \right)^{1/2}.
\end{align} Starting from \eqref{ineq: AfterTrilinear1}, we shall estimate $\sum_{q} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^1 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2}$ in two different ways. Since the terms that appear will be lengthy, we set \begin{equation*} X_j = \left( \int_0^T \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_x}^2 dt \right)^{1/2},\quad Y_q=\left(\int_0^T \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_x}^2 dt \right)^{1/2}. \end{equation*} Using the Cauchy-Schwarz inequality, the embedding $ B^{3/2}_x \hookrightarrow L^\infty_x$ and Lemma \ref{Besov and Chemin-Lerner}, we have \begin{align}\label{tri-11}\notag &\sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^1 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2} \\ \notag &{\lesssim} \sum_{q \ge -1}2^{qs} \Vert f \Vert_{L^\infty_T L^2_v L^\infty_x}^{1/2} \left( \sum_{\vert j-q \vert \le 4} (X_j)^2 \right)^{1/4} (Y_q)^{1/2} \\ \notag &\lesssim \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \left( \sum_{q \ge -1} 2^{qs} \sum_{\vert j-q \vert \le 4} X_j \right)^{1/2} \notag \left( \sum_{q \ge -1} 2^{qs} Y_q \right)^{1/2} \\ &\le \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2} \left( \sum_{q \ge -1} 2^{qs} \sum_{\vert j-q \vert \le 4} 2^{-js}c_j \right)^{1/2} \Vert g \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}. \end{align} Here, we defined an $\ell^1$-sequence $\{c_j\}$ by $c_j := 2^{js} X_j/\Vert g \Vert_{\mathcal{T}^s_{T,2,2}}$. By abuse of notation, we will write $c_j$ in the sequel for similar sequences, defined by other appropriate $\mathcal{T}^s_{T,p,r}$-norms in each inequality. Note that all of them are in $\ell^1$.
Now, the double summation is finite because, by Fubini's theorem and Young's inequality, we have \begin{align*} \sum_{q \ge -1} 2^{qs} \sum_{\vert j-q \vert \le 4} 2^{-js}c_j &=\sum_{q\ge -1} [(\mathbf{1}_{\vert j \vert\le 4}2^{js})*c_j](q)\\ &\le \sum_j \mathbf{1}_{\vert j\vert\le 4}2^{js}\cdot\sum_j c_j <\infty. \end{align*} Therefore, we have \begin{align*} \sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^1 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2}\lesssim \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}\Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}. \end{align*} The $\Gamma^1$ part of the second estimate \eqref{tri2} is as follows. In this case, we apply the embedding $ \dot{B}^{3/2}_x \hookrightarrow L^\infty_x$ to \eqref{ineq: AfterTrilinear1}. \begin{align*} &\sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^1 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2} \\ &\lesssim \Vert f \Vert_{L^2_TL^2_v (\dot{B}^{3/2}_x)}^{1/2}\sum_{q \ge -1} 2^{qs} \left( \sum_{\vert j-q \vert \le 4} \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^\infty_TL^2_x} \right)^{1/2} \left(\int_0^T \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_x}^2 dt \right)^{1/4} \\ & \le \Vert f \Vert_{L^2_TL^2_v (\dot{B}^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^s_{T,\infty,2}}^{1/2} \Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}\left( \sum_{q \ge -1} 2^{qs} \sum_{\vert j-q \vert \le 4} 2^{-js}c_j \right)^{1/2} \\ & \lesssim \Vert f \Vert_{L^2_TL^2_v (\dot{B}^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^s_{T,\infty,2}}^{1/2} \Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}. \end{align*} Next, we estimate $\sum_q 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^2 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2}$.
Since $\Gamma^1$ and $\Gamma^2$ are symmetric to each other, a similar calculation gives \begin{align*} &\sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^2 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2} \\ & \lesssim \sum_{q \ge -1} 2^{qs} \left( \sum_{\vert j-q \vert \le 4}\int^T_0 \Vert |\!|\!| S_{j-1} g |\!|\!| \Vert_{L^\infty_x} \Vert \Delta_j f \Vert_{L^2_{x,v}} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_x} dt \right)^{1/2} \\ & \le \sum_{q \ge -1} 2^{qs} \left( \sum_{\vert j-q \vert \le 4} \Vert \Delta_j f \Vert_{L^\infty_T L^2_{x,v}} \Vert |\!|\!| S_{j-1} g |\!|\!| \Vert_{L^2_TL^\infty_x} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_TL^2_x} \right)^{1/2} \\ & \lesssim \sum_{q \ge -1} 2^{qs}\left( \sum_{\vert j-q \vert \le 4} \Vert \Delta_j f \Vert_{L^\infty_T L^2_{x,v}} \Vert g\Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}\right)^{1/2} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_TL^2_x}^{1/2} \\ & \le \Vert g\Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}^{1/2} \left(\sum_{q \ge -1} 2^{qs}\sum_{\vert j-q \vert \le 4} \Vert \Delta_j f \Vert_{L^\infty_T L^2_{x,v}} \right)^{1/2} \left( \sum_{q \ge -1} 2^{qs} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_TL^2_x} \right)^{1/2} \\ & \lesssim \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^s_x)}^{1/2} \Vert g\Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}^{1/2} \Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}, \end{align*} where we used Lemma \ref{T-estimate}. Since $\Vert g\Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}} \lesssim \Vert g\Vert_{\mathcal{T}^{3/2}_{T,2,2}}$ and $B^{s_1}_{pr} \hookrightarrow B^{s_2}_{pr}$ when $s_2 \le s_1$, we can deduce that \begin{align*} \sum_q 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^2 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2} \lesssim \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^{3/2}_{T,2,2}}^{1/2} \Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}.
\end{align*} Therefore, the estimates of the second part $\Gamma^2$ for \eqref{tri2} and \eqref{tri1} are complete. Finally, we calculate the third part $\Gamma^3$. Recalling the size of the supports of $\mathcal{F}[\Delta_{j'}f]$ and $\mathcal{F}[\Delta_{j} g]$, first we have \begin{align*} \Delta_q\left( \sum_j \sum_{\vert j-j'\vert \le 1} \Gamma(\Delta_{j'} f, \Delta_j g) \right)= \Delta_q \left(\sum_{\max\{j,j'\} \ge q-2 } \sum_{\vert j-j' \vert \le 1} \Gamma(\Delta_{j'} f, \Delta_j g)\right). \end{align*} We notice that the summation over $j$ is no longer finite this time. However, the double summation appearing later can be shown to be finite. Similarly to $\Gamma^1$, an estimate of $\Gamma^3$ with respect to $x$ and $v$ is \begin{align}\label{ineq: AfterTrilinear3}\notag &\sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^3 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2} \\ & \lesssim \sum_{q \ge -1}2^{qs} \left( \sum_{j \ge q-3} \int^T_0 \Vert f \Vert_{L^2_vL^\infty_x} \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_x} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_x} dt \right)^{1/2}.
\end{align} Using the Cauchy-Schwarz inequality with respect to $t$-integration, we have \begin{align*} &\sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^3 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2} \\ & \lesssim \sum_{q \ge -1}2^{qs} \Vert f \Vert_{L^\infty_T L^2_v (B^{3/2}_x)}^{1/2} \left( \sum_{j \ge q-3} \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_{T,x}} \right)^{1/2} \left(\Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_{T,x}}\right)^{1/2} \\ & \le \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \Big( \sum_{q \ge -1}\sum_{j \ge q-3} 2^{qs}\Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_{T,x}}\Big)^{1/2} \Big(\sum_{q \ge -1}2^{qs} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_{T,x}} \Big)^{1/2} \\ & \lesssim \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2} \Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}\left( \sum_{q \ge -1}2^{qs}\sum_{j \ge q-3} 2^{-js}c_j \right)^{1/2} . \end{align*} The last factor of the previous line is finite because \begin{align*} \sum_{q \ge -1}2^{qs}\sum_{j \ge q-3} 2^{-js}c_j= \sum_{j \ge -4} c_j \sum_{j-q \ge -3}2^{-(j-q)s} <\infty. \end{align*} Here we applied Fubini's theorem and Young's inequality again. This is an estimate corresponding to \eqref{tri1}. 
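The finiteness of these double sums is an instance of Young's inequality for convolutions on $\mathbb{Z}$, and it can be sanity-checked numerically. The following sketch is purely illustrative and not part of the original argument; the exponent $s$ and the summable sequence $c_j$ are arbitrary choices.

```python
s = 0.75                    # any exponent 0 < s <= 3/2 (arbitrary choice)
Q = range(-1, 200)          # truncated range of dyadic indices q, j >= -1
c = {j: 1.0 / (j + 2) ** 2 for j in Q}   # an arbitrary summable sequence c_j >= 0

# Paraproduct-type double sum (|j - q| <= 4), as in the Gamma^1 estimate:
# swapping the order of summation bounds it by (sum_{|k|<=4} 2^{ks}) * sum_j c_j.
S1 = sum(2.0 ** (q * s) * sum(2.0 ** (-j * s) * c[j] for j in Q if abs(j - q) <= 4)
         for q in Q)
bound1 = sum(2.0 ** (k * s) for k in range(-4, 5)) * sum(c.values())

# Remainder-type double sum (j >= q - 3), as in the Gamma^3 estimate:
# the inner geometric sum over j - q >= -3 equals 2^{3s} / (1 - 2^{-s}).
S2 = sum(2.0 ** (q * s) * sum(2.0 ** (-j * s) * c[j] for j in Q if j >= q - 3)
         for q in Q)
bound2 = 2.0 ** (3 * s) / (1.0 - 2.0 ** (-s)) * sum(c.values())

assert 0 < S1 <= bound1 and 0 < S2 <= bound2
```

Both truncated sums stay below the closed-form bounds $\big(\sum_{\vert k \vert \le 4} 2^{ks}\big)\sum_j c_j$ and $2^{3s}(1-2^{-s})^{-1}\sum_j c_j$, matching the convergence used above.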
Moreover, we calculate that \begin{align*} &\sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma^3 (f,g), \Delta_qh)_{x,v} \vert dt \right)^{1/2} \\ & \lesssim \sum_{q \ge -1} 2^{qs} \Vert f \Vert_{L^2_TL^2_v L^\infty_x}^{1/2} \left(\sum_{j \ge q-3} \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^\infty_T L^2_x} \right)^{1/2} \Vert |\!|\!| \Delta_q h |\!|\!| \Vert_{L^2_{T,x}}^{1/2} \\ & \lesssim \Vert f \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}^{1/2} \left( \sum_{q \ge -1} \sum_{j \ge q-3}2^{qs}\Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^\infty_T L^2_x} \right)^{1/2} \Vert h \Vert_{{\mathcal{T}}^s_{T,2,2}}^{1/2}\\ & \lesssim \Vert f \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}^{1/2}\Vert g \Vert_{\mathcal{T}^s_{T,\infty,2}}^{1/2} \Vert h \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}. \end{align*} Combining the above estimates properly, we obtain the second estimate \eqref{tri2}. For the third estimate \eqref{tri3}, we apply Corollary \ref{another-type} instead of Corollary \ref{AMUXYtrilinear}. Then exactly the same argument as for \eqref{tri2} yields \eqref{tri3}. Notice that all we have to do is to replace $f$ with $\mu^{1/10}f$ in the first term on the right hand side of \eqref{tri3}, and to replace $\mathcal{T}^s_{T,\infty,2}$ and $\dot{\mathcal{T}}^{3/2}_{T,2,2}$ with $\tilde{L}^\infty_T \tilde{L}^2_v(B^{s}_x)$ and $L^2_T L^2_v(\dot{B}^{3/2}_x)$ respectively in the second term. \end{proof} \section{Estimate on nonlinear term and a priori estimate}\label{S4} \setcounter{equation}{0} Inserting certain functions into the inequalities of Lemma \ref{trilinear}, we first estimate some nonlinear terms by the energy and the dissipation term. After that, we will derive an a priori estimate. In this section, we decompose $f$ into $\mu + \mu^{1/2}g$, and all lemmas are statements about this $g$. \begin{lem}\label{4-1} Assume $0<s\le\frac{3}{2}$.
Then we have \begin{align*} \sum_{q \ge -1} 2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma (g,g), \Delta_q(\mathbf{I}-\mathbf{P})g) \vert dt \right)^{1/2} \lesssim \sqrt{\mathcal{E}_T(g)}\mathcal{D}_T(g). \end{align*} \end{lem} \begin{proof} We divide $\Gamma(g,g)$ into $\Gamma(g,\mathbf{P}g)$ and $\Gamma(g, (\mathbf{I}-\mathbf{P})g)$, and estimate the respective terms. Using \eqref{tri1} of Lemma \ref{trilinear}, we obtain \begin{align*} \sum_{q \ge -1} &2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma (g,(\mathbf{I}-\mathbf{P})g), \Delta_q(\mathbf{I}-\mathbf{P})g) \vert dt \right)^{1/2} \\ & \lesssim \Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^s_{T,2,2}} \lesssim \sqrt{\mathcal{E}_T(g)}\mathcal{D}_T(g). \end{align*} When $\gamma+ \nu >0$, \eqref{tri2} of Lemma \ref{trilinear} yields \begin{align*} \sum_{q \ge -1} &2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma (g,\mathbf{P}g), \Delta_q(\mathbf{I}-\mathbf{P})g) \vert dt \right)^{1/2} \\ & \lesssim \Vert(\mathbf{I}-\mathbf{P})g\Vert_{\mathcal{T}^s_{T,2,2}}^{1/2} \left( \Vert g \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}^{1/2} \Vert \mathbf{P}g \Vert_{\mathcal{T}^s_{T,\infty,2}}^{1/2} + \Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^s_x)}^{1/2} \Vert \mathbf{P}g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}^{1/2} \right) \\ & \lesssim \sqrt{\mathcal{E}_T(g)}\mathcal{D}_T(g), \end{align*} where we used the following inequalities: \begin{gather*} \Vert g \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)} \lesssim \Vert g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}} \lesssim \Vert \mathbf{P}g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}+ \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^{3/2}_{T,2,2}} \sim \mathcal{D}_T(g), \\ \Vert \mathbf{P}g \Vert_{\mathcal{T}^s_{T,\infty,2}} \lesssim \Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^s_x)}.
\end{gather*} The first inequality is deduced by Lemmas \ref{Besov embedding}-\ref{Besov and Chemin-Lerner}, and the fact that $\Vert \cdot \Vert_{L^2_v} \le \Vert \cdot \Vert_{L^2_{v, (\gamma+\nu)/2}} \lesssim |\!|\!| \cdot |\!|\!|$ when $\gamma + \nu > 0$. We recall $|\!|\!| \mathbf{P}f |\!|\!| \lesssim \Vert f \Vert_{L^2_v}$ and Lemma \ref{Besov and Chemin-Lerner} for the second one. When $\gamma+ \nu \le 0$, \eqref{tri3} of Lemma \ref{trilinear} similarly yields \begin{align*} \sum_{q \ge -1} &2^{qs}\left( \int^T_0 \vert ( \Delta_q \Gamma (g,\mathbf{P}g), \Delta_q(\mathbf{I}-\mathbf{P})g) \vert dt \right)^{1/2} \\ &\lesssim \Vert \mu^{1/10}g \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}^{1/2} \Vert \mathbf{P}g \Vert_{\mathcal{T}^s_{T,\infty,2}}^{1/2}\Vert(\mathbf{I}-\mathbf{P})g\Vert_{\mathcal{T}^s_{T,2,2}}^{1/2} \\ &+ \Vert \mu^{1/10}g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^s_x)}^{1/2} \Vert \mathbf{P}g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}^{1/2}\Vert(\mathbf{I}-\mathbf{P})g\Vert_{\mathcal{T}^s_{T,2,2}}^{1/2} \\ &+ \Vert g \Vert_{L^2_T L^2_{v, (\nu+\gamma)/2}(\dot{B}^{3/2}_x)}^{1/2} \Vert \mathbf{P}g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{s}_x)}^{1/2} \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2}\\ &+ \Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_{v, (\nu+\gamma)/2}(B^s_x)}^{1/2} \Vert \mathbf{P}g \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}^{1/2}\Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^s_{T,2,2}}^{1/2} \\ & \lesssim \sqrt{\mathcal{E}_T(g)}\mathcal{D}_T(g). \end{align*} Owing to the fact $|\!|\!| \mathbf{P}g |\!|\!| \lesssim \Vert g \Vert_{L^2_v}$, $\Vert \mathbf{P}g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{s}_x)}$ and $\Vert \mathbf{P}g \Vert_{\mathcal{T}^s_{T,\infty,2}}$ are similarly estimated. So are $\Vert \mathbf{P}g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}$ and $\Vert \mathbf{P}g \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)}$. \end{proof} \begin{lem}\label{4-2} Let $\phi \in \mathcal{S}(\mathbb{R}^3_v)$ and $0<s\le \frac{3}{2}$. 
Then we have \begin{align*} \sum_{q \ge -1} 2^{qs}\left( \int^T_0 \Vert\Delta_q (\Gamma (g,g), \phi) \Vert^2_{L^2_x}dt \right)^{1/2} \lesssim \mathcal{E}_T(g)\mathcal{D}_T(g). \end{align*} \end{lem} \begin{proof} First, we consider the case $\gamma+\nu >0$. Together with the decomposition $\Gamma(g,g)=\Gamma^1(g,g)+\Gamma^2(g,g)+\Gamma^3(g,g)$, the generalized Minkowski inequality gives \begin{align*} &\left( \int^T_0 \Vert\Delta_q (\Gamma (g,g), \phi) \Vert^2_{L^2_x}dt \right)^{1/2} \le \sum_{i=1}^3\left( \int^T_0 \Vert\Delta_q (\Gamma^i (g,g), \phi) \Vert^2_{L^2_x}dt \right)^{1/2} \\ &\le \sum_{\vert j-q \vert \le 4} \left( \int^T_0 \Vert \Delta_q (\Gamma (S_{j-1}g,\Delta_jg), \phi) \Vert^2_{L^2_x} dt \right)^{1/2} \\ &+\sum_{\vert j-q \vert \le 4} \left( \int^T_0 \Vert \Delta_q (\Gamma (\Delta_j g, S_{j-1}g), \phi) \Vert^2_{L^2_x} dt \right)^{1/2} \\ &+\sum_{\max\{j,j'\} \ge q-2} \sum_{\vert j-j' \vert \le 1} \left( \int^T_0 \Vert \Delta_q (\Gamma ( \Delta_{j'} g,\Delta_jg), \phi) \Vert^2_{L^2_x} dt \right)^{1/2} \\ &=: I^1_q+I^2_q+I^3_q. \end{align*} By using Corollary \ref{AMUXYtrilinear} and Lemma \ref{bounded operators}, each term is calculated as follows; we use the macro-micro decomposition $g= \mathbf{P}g+ (\mathbf{I}-\mathbf{P})g$ to estimate the first and third terms properly.
Then we have \begin{align*} \sum_{q \ge -1} 2^{qs} I^1_q &\le \sum_{q \ge -1} 2^{qs}\sum_{\vert j-q \vert \le 4} \left( \int^T_0\int \Vert S_{j-1}g \Vert_{L^2_v}^2 |\!|\!| \Delta_j \mathbf{P}g |\!|\!|^2 dx dt \right)^{1/2}\\ &+\sum_{q \ge -1} 2^{qs} \sum_{\vert j-q \vert \le 4} \left( \int^T_0\int \Vert S_{j-1}g \Vert_{L^2_v}^2 |\!|\!| \Delta_j (\mathbf{I}-\mathbf{P})g |\!|\!|^2 dx dt \right)^{1/2} \\ &\le \sum_{q \ge -1} 2^{qs}\sum_{\vert j-q \vert \le 4} \left(\int_0^T \Vert g \Vert_{L^2_vL^\infty_x} ^2 dt\right)^{1/2} \Vert\Delta_j \mathbf{P}g \Vert_{L_T^\infty L^2_v L^2_x}\\ &+\sum_{q \ge -1} 2^{qs} \sum_{\vert j-q \vert \le 4} \Vert g \Vert_{L^\infty_T L^2_{v}L^\infty_x} \left( \int^T_0\Vert |\!|\!| \Delta_j (\mathbf{I}-\mathbf{P})g |\!|\!| \Vert_{L^2_x}^2 dt \right)^{1/2} \\ &\lesssim \Vert g \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)} \Vert \mathbf{P}g \Vert_{\tilde L^\infty_T \tilde L^2_v(B_x^{s})} +\Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)} \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^s_{T,2,2}} \\ & \lesssim \mathcal{E}_T(g)\mathcal{D}_T(g), \end{align*} \begin{align*} \sum_{q \ge -1} 2^{qs} I^2_q &\le \sum_{q \ge -1} 2^{qs} \sum_{\vert j-q \vert \le 4} \Vert \Delta_j g \Vert_{L^\infty_T L^2_{x,v}} \left( \int^T_0 \Vert |\!|\!| S_{j-1}g |\!|\!| \Vert_{L^\infty_x}^2 dt \right)^{1/2}\\ &\lesssim \Vert g \Vert_{\tilde L^\infty_T \tilde L^2_v(B^s_x)}\Vert g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}\lesssim \mathcal{E}_T(g)\mathcal{D}_T(g), \end{align*} \begin{align*} \sum_{q \ge -1} 2^{qs} I^3_q &\le \sum_{q \ge -1} \sum_{j \ge q-3} 2^{qs} \Vert \Delta_j g \Vert_{L^\infty_T L^2_{x,v}} \left( \int^T_0 \Vert |\!|\!| \mathbf{P}g |\!|\!| \Vert_{L^\infty_x}^2 dt \right)^{1/2}\\ & + \sum_{q \ge -1} \sum_{j' \ge q-3} 2^{qs}\Vert g \Vert_{L^\infty_T L^2_v L^\infty_x} \left( \int^T_0 \Vert |\!|\!| \Delta_{j'} (\mathbf{I}-\mathbf{P})g |\!|\!| \Vert_{L^2_x}^2 dt \right)^{1/2} \\ & \lesssim \Vert g \Vert_{\tilde L^\infty_T \tilde L^2_v(B^s_x)} \Vert 
\mathbf{P}g \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}} + \Vert g \Vert_{{L}^\infty_T {L}^2_v (B^{3/2}_x)} \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^s_{T,2,2}} \\ & \lesssim \mathcal{E}_T(g)\mathcal{D}_T(g). \end{align*} When $\gamma+\nu \le 0$, apply Corollary \ref{another-type} instead of Corollary \ref{AMUXYtrilinear} for the estimate of $I_q^1$. Then we have \begin{align*} \sum_{q \ge -1} 2^{qs} I^1_q &\le \sum_{q \ge -1} 2^{qs}\sum_{\vert j-q \vert \le 4} \left( \int^T_0\int \Vert \la v \ra^{(\gamma+ \nu)/2} S_{j-1}g \Vert_{L^2_v}^2 |\!|\!| \Delta_j \mathbf{P}g |\!|\!|^2 dx dt \right)^{1/2}\\ &+\sum_{q \ge -1} 2^{qs} \sum_{\vert j-q \vert \le 4} \left( \int^T_0\int \Vert S_{j-1}g \Vert_{L^2_v}^2 |\!|\!| \Delta_j (\mathbf{I}-\mathbf{P})g |\!|\!|^2 dx dt \right)^{1/2} \\ &\lesssim \Vert \la v \ra^{(\gamma+ \nu)/2}g \Vert_{L^2_T L^2_v(\dot{B}^{3/2}_x)} \Vert \mathbf{P}g \Vert_{\tilde L^\infty_T \tilde L^2_v(B_x^{s})}\\ &+\Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)} \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^s_{T,2,2}} \\ & \lesssim \mathcal{E}_T(g)\mathcal{D}_T(g). \end{align*} Other parts follow in the same manner. \end{proof} As for the upper bound of the linear term $\cL$ we have \begin{lem}\label{4-3} Let $\phi \in \mathcal{S}(\mathbb{R}^3_v) $ and $s>0$. Then we have \begin{align*} \sum_{q \ge -1} 2^{qs}\left( \int^T_0 \Vert\Delta_q (\cL(\mathbf{I}-\mathbf{P})g, \phi) \Vert^2_{L^2_x}dt \right)^{1/2} \lesssim \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^s_{T,2,2}}. 
\end{align*} \end{lem} \begin{proof} Since it follows from Corollaries \ref{AMUXYtrilinear} and \ref{another-type} that \begin{align*} &\left |\Delta_q (\cL(\mathbf{I}-\mathbf{P})g, \phi)_{L^2_v}\right |\\ & \le \left | (\Gamma(\mu^{1/2}, \Delta_q(\mathbf{I}-\mathbf{P})g), \phi)_{L^2_v}\right | +\left | (\Gamma( \Delta_q(\mathbf{I}-\mathbf{P})g, \mu^{1/2}), \phi)_{L^2_v}\right |\\ & \lesssim |\!|\!|\Delta_q(\mathbf{I}-\mathbf{P})g|\!|\!| + \| \Delta_q(\mathbf{I}-\mathbf{P})g\|_{L^2_{(\gamma +\nu)/2}(\RR^3_v)}, \end{align*} we obtain the desired estimate. \end{proof} We next estimate the derivatives of the macroscopic part. \begin{lem}\label{macro-micro} It holds that \begin{align}\label{macro-apriori}\nonumber &\Vert \nabla_x (a,b,c) \Vert_{\tilde{L}^2_T(B^{1/2}_x)} \\ &\quad \quad \lesssim \Vert g_0 \Vert_{\tilde{L}^2_v(B^{3/2}_x)} +\mathcal{E}_T(g) +\Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^{3/2}_{T,2,2}}+\mathcal{E}_T(g)\mathcal{D}_T(g). \end{align} \end{lem} \begin{proof} We start from \eqref{eq: Boltzmann}. Multiplying the equation by $1$, $v$, and $\vert v \vert^2$ and integrating in $v$, we have the following local macroscopic balance laws: \begin{align*} &\partial_t \int_{\mathbb{R}^3_v} fdv + \nabla_x \cdot \int_{\mathbb{R}^3_v} vfdv =0, \\ &\partial_t \int_{\mathbb{R}^3_v} vfdv + \nabla_x\cdot \int_{\mathbb{R}^3_v} v\otimes vfdv =0,\\ &\partial_t \int_{\mathbb{R}^3_v} \vert v\vert^2fdv + \nabla_x \cdot \int_{\mathbb{R}^3_v} \vert v\vert^2 vfdv =0. \end{align*} We decompose $f$ into $f = \mu + \mu^{1/2}g$ and further decompose $g$ into $g=\mathbf{P}g + (\mathbf{I-P})g =g_1+g_2$. In order to express the above balance laws in terms of the macroscopic functions ($a$, $b$, $c$), we need to calculate some moments of the Maxwellian $\mu$.
\begin{align*} &(1, \mu)=1, \quad (\vert v_i \vert^2, \mu)=1, \quad (\vert v \vert^2, \mu)=3,\\ &(\vert v_i \vert^2 \vert v_j \vert^2, \mu)=1\ (i \neq j), \quad (\vert v_i \vert^4, \mu ) =3, \\ &(\vert v \vert^2 \vert v_i \vert^2, \mu) = 5, \quad (\vert v \vert^4, \mu) = 15, \quad ( \vert v\vert^4 \vert v_i \vert^2, \mu )=35. \end{align*} Keeping these values in mind, we compute that \begin{align*} &\int_{\mathbb{R}^3_v} fdv = 1 + (a+ 3c),\\ &\int_{\mathbb{R}^3_v} vfdv = b,\\ &\int_{\mathbb{R}^3_v} \vert v\vert^2fdv = 3 + 3(a+5c),\\ &\int_{\mathbb{R}^3_v} v_i v_j fdv = \delta_{ij} + (a+5c)\delta_{ij} + (v_i v_j \mu^{1/2}, g_2),\\ &\int_{\mathbb{R}^3_v} \vert v\vert^2 vfdv = 5b + (\vert v \vert^2 v \mu^{1/2}, g_2). \end{align*} Here, $\delta_{ij}$ is Kronecker's delta. Inserting these identities into the balance laws, we have \begin{align*} &\partial_t (a+3c) + \nabla_x \cdot b=0,\\ &\partial_t b + \nabla_x (a+5c) + \nabla_x \cdot (v\otimes v \mu^{1/2}, g_2) =0,\\ &3\partial_t(a+5c) +5\nabla_x \cdot b + \nabla_x \cdot (\vert v\vert^2 v \mu^{1/2}, g_2)=0. \end{align*} This is equivalent to the system \begin{align*} &\partial_t a - \frac{1}{2} \nabla_x \cdot(\vert v \vert^2 v\mu^{1/2}, g_2)=0,\\ &\partial_t b + \nabla_x (a+5c) + \nabla_x \cdot (v\otimes v \mu^{1/2}, g_2) =0,\\ &\partial_t c + \frac{1}{3}\nabla_x \cdot b +\frac{1}{6} \nabla_x \cdot (\vert v \vert^2 v \mu^{1/2}, g_2) =0. \end{align*} Next, we rewrite \eqref{eq: perturbation} as \begin{align*} \partial_t g_1 +v\cdot \nabla_x g_1 =-\partial_t g_2 + R_1+ R_2, \end{align*} where $R_1= -v \cdot \nabla_x g_2$ and $R_2 = -\mathcal{L}g_2 + \Gamma(g, g)$. We rewrite this equation once again so that we express the equation in terms of ($a$, $b$, $c$).
\begin{align}\label{eq: macro perturbation}\notag &\partial_t a \mu^{1/2} + (\partial_t b + \nabla_x a)\cdot v \mu^{1/2} + \sum_i (\partial_t c+\partial_i b_i) \vert v_i \vert^2 \mu^{1/2} \\ &\quad+\sum_{i< j} (\partial_j b_i +\partial_i b_j) v_i v_j \mu^{1/2} + \nabla_x c \cdot \vert v \vert^2 v \mu^{1/2} =-\partial_t g_2 + R_1+ R_2. \end{align} Here, $\partial_i = \partial_{x_i}$. We define the high-order moment functions $A(g)=(A_{ij}(g))_{3\times 3}$ and $B(g) = (B_i(g))_{1\times 3}$ by \begin{align*} A_{ij}(g) = ( (v_iv_j -\delta_{ij})\mu^{1/2}, g), \quad B_i(g) = \frac{1}{10}( (\vert v\vert^2-5)v_i \mu^{1/2}, g). \end{align*} Notice that these functions operate only on $v$ and $$\vert A_{ij}(g) \vert, \vert B_i(g) \vert \le C\Vert g \Vert_{L^2_{v,(\gamma+\nu)/2}}$$ since we take inner products of $g$ with rapidly decreasing functions. Applying $A_{ij}$ and $B_i$ to both sides of \eqref{eq: macro perturbation}, we have \begin{align*} &\partial_t (A_{ij}(g_2)+2c\delta_{ij}) + \partial_j b_i + \partial_i b_j = A_{ij} (R_1+R_2),\\ &\partial_t B_i(g_2) + \partial_i c = B_i(R_1+R_2). \end{align*} Thus, we obtain the following system: \begin{align}\label{eq: fluid-type} \begin{cases} \partial_t a - \frac{1}{2} \nabla_x \cdot(\vert v \vert^2 v\mu^{1/2}, g_2)=0,\\ \partial_t b + \nabla_x (a+5c) + \nabla_x \cdot (v\otimes v \mu^{1/2}, g_2) =0,\\ \partial_t c + \frac{1}{3}\nabla_x \cdot b +\frac{1}{6} \nabla_x \cdot (\vert v \vert^2 v \mu^{1/2}, g_2) =0,\\ \partial_t (A_{ij}(g_2)+2c\delta_{ij}) + \partial_j b_i + \partial_i b_j = A_{ij} (R_1+R_2),\\ \partial_t B_i(g_2) + \partial_i c = B_i(R_1+R_2). \end{cases} \end{align} We derive the desired estimate from this system.
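The Maxwellian moments used in this derivation can be double-checked numerically. The sketch below is not part of the original argument; it assumes the standard normalization $\mu(v)=(2\pi)^{-3/2}e^{-\vert v\vert^2/2}$ and reduces the three-dimensional integrals to one-dimensional Gaussian moments, using the independence of the coordinates.

```python
import numpy as np

# Gauss-Hermite nodes/weights for the weight exp(-x^2/2) (probabilists'
# convention); dividing the weights by sqrt(2*pi) turns quadrature sums
# into expectations against the one-dimensional factor of the Maxwellian.
x, w = np.polynomial.hermite_e.hermegauss(20)
w = w / np.sqrt(2.0 * np.pi)

def m(k):
    # k-th one-dimensional moment (v_i^k, mu_1d); m(2)=1, m(4)=3, m(6)=15.
    return float(np.dot(w, x ** k))

# Assemble the 3-D moments of mu by independence of the coordinates and
# compare with the values listed in the text.
vals = {
    "(v_i^2, mu)":       m(2),                                    # = 1
    "(|v|^2, mu)":       3 * m(2),                                # = 3
    "(v_i^2 v_j^2, mu)": m(2) ** 2,                               # = 1 (i != j)
    "(v_i^4, mu)":       m(4),                                    # = 3
    "(|v|^2 v_i^2, mu)": m(4) + 2 * m(2) ** 2,                    # = 5
    "(|v|^4, mu)":       3 * m(4) + 6 * m(2) ** 2,                # = 15
    "(|v|^4 v_i^2, mu)": m(6) + 6 * m(4) * m(2) + 2 * m(2) ** 3,  # = 35
}
expected = [1, 3, 1, 3, 5, 15, 35]
assert all(abs(v - e) < 1e-8 for v, e in zip(vals.values(), expected))
```

With 20 quadrature nodes the rule is exact for polynomials up to degree 39, so all moments above are reproduced to machine precision.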
For later use, we define the energy functional \begin{align*} &E_q(g) = E^1_q(g) + \delta_2 E^2_q(g) + \delta_3 E^3_q(g),\\ &E^1_q(g)=\sum_{i}(B_i (\Delta_q g_2), \partial_i \Delta_q c),\\ &E^2_q(g)=\sum_{i,j} (A_{ij}(\Delta_q g_2 )+2\Delta_q c\delta_{ij}, \partial_j \Delta_qb_i + \partial_i \Delta_qb_j),\\ &E^3_q(g)=(\Delta_q b, \nabla_x \Delta_q a). \end{align*} Here, $\delta_2$ and $\delta_3$ are two small positive numbers, which will be chosen later. First, we apply $\Delta_q$ with $q \ge -1$ to \eqref{eq: fluid-type}. We have \begin{align}\label{eq: fluid-type2} \begin{cases} \partial_t \Delta_qa - \frac{1}{2} \nabla_x \cdot(\vert v \vert^2 v\mu^{1/2}, \Delta_qg_2)=0,\\ \partial_t \Delta_qb + \nabla_x\Delta_q(a+5c) + \nabla_x \cdot (v\otimes v \mu^{1/2}, \Delta_qg_2) =0,\\ \partial_t \Delta_qc + \frac{1}{3}\nabla_x \cdot \Delta_qb +\frac{1}{6} \nabla_x \cdot (\vert v \vert^2 v \mu^{1/2}, \Delta_qg_2) =0,\\ \partial_t (A_{ij}(\Delta_qg_2)+2\Delta_qc\delta_{ij}) + \partial_j \Delta_qb_i + \partial_i \Delta_qb_j = A_{ij} (\Delta_qR_1+\Delta_qR_2),\\ \partial_t B_i(\Delta_qg_2) + \partial_i \Delta_qc = B_i(\Delta_qR_1+\Delta_qR_2). \end{cases} \end{align} In what follows, by $\eqref{eq: fluid-type2}_n$ we denote the $n$-th equation of $\eqref{eq: fluid-type2}$. Multiplying $\eqref{eq: fluid-type2}_5$ by $\partial_i \Delta_qc$, we have \begin{align*} \frac{d}{dt} (B_i (\Delta_q g_2), \partial_i \Delta_qc ) - (B_i (\Delta_q g_2), \partial_i \Delta_q\partial_tc )&+ \Vert \partial_i \Delta_qc \Vert^2_{L^2_x} \\ &= (B_i(\Delta_qR_1+\Delta_qR_2), \partial_i \Delta_qc). \end{align*} By Young's inequality it is obvious that \begin{align*} (B_i(\Delta_q R_1), \partial_i \Delta_q c) \le \varepsilon \Vert \partial_i \Delta_qc \Vert^2_{L^2_x} + \frac{C}{\varepsilon} \Vert \nabla_x \Delta_q g_2 \Vert^2_{L^2_{v,(\gamma+\nu)/2} L^2_x} \end{align*} for small positive $\varepsilon$.
We note that if $\Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)} < \infty$, the Cauchy-Schwarz inequality gives $(g(x, \cdot, t), \phi) \in \tilde{L}^\infty_T (B^{3/2}_x)$ for any $\phi \in \mathcal{S}(\mathbb{R}^3_v)$. Also, $\Vert g \Vert_{\mathcal{T}^{3/2}_{T,2,2}} < \infty$ implies $\sum_q 2^{3q/2} \left( \int^T_0 \int \vert (\Delta_q g(x, \cdot, t), \phi) \vert^2 dx\, dt \right)^{1/2} < \infty$ for any $\phi \in \mathcal{S}(\mathbb{R}^3_v)$. In order to estimate $(B_i (\Delta_q g_2), \partial_i \Delta_q\partial_tc )$, we use $\eqref{eq: fluid-type2}_3$ and integration by parts. As long as $f$, $g \in B^{3/2}_{2,1}$, $(\partial_i \Delta_q f, \Delta_q g) = - (\Delta_q f, \partial_i \Delta_q g)$. Indeed, if $f\in B^{3/2}_{2,1}$, $\Delta_q f$ is also in $B^{3/2}_{2,1} \subset H^1$. Moreover, since $f \in B^{3/2}_{2,1} \subset L^\infty$, we have $\Delta_q f \in C^\infty_b$. So both $(\partial_i \Delta_q f, \Delta_q g)$ and $-(\Delta_q f, \partial_i \Delta_q g)$ exist and coincide. \begin{align*} \vert (B_i (\Delta_q g_2), \partial_i \Delta_q\partial_tc ) \vert &= \left\vert \left(\partial_i B_i (\Delta_q g_2), \frac{1}{3}\nabla_x \cdot \Delta_qb +\frac{1}{6} \nabla_x \cdot (\vert v \vert^2 v \mu^{1/2}, \Delta_qg_2)\right)\right\vert \\ &\le \varepsilon \Vert \nabla_x \Delta_q b \Vert^2_{L^2_x} + \frac{C}{\varepsilon} \Vert \nabla_x \Delta_q g_2 \Vert^2_{L^2_{v,(\gamma+\nu)/2} L^2_x}. \end{align*} Here we set $\Vert \nabla_x \Delta_q b \Vert_{L^2_x} :=\Vert \nabla_x \otimes \Delta_q b \Vert_{L^2_x}$ for brevity. From this inequality, we have for small $\lambda'>0$ and $\varepsilon_1>0$, \begin{align}\label{energy1}\notag \frac{d}{dt}E^1_q(g)+& \lambda' \Vert \nabla_x \Delta_qc \Vert^2_{L^2_x} \le \varepsilon_1 \Vert \nabla_x \Delta_q b \Vert^2_{L^2_x} \\ &+ \frac{C}{\varepsilon_1} \Vert \nabla_x \Delta_q g_2 \Vert^2_{L^2_{v,(\gamma+\nu)/2} L^2_x} + \frac{C}{\varepsilon_1}\sum_i\Vert B_i(\Delta_q R_2) \Vert^2_{L^2_x}.
\end{align} Next, we multiply $\eqref{eq: fluid-type2}_4$ by $\partial_j \Delta_qb_i + \partial_i \Delta_qb_j$ and take the summation over $i$ and $j$. We remark that \begin{align*} \Vert \partial_j \Delta_qb_i + \partial_i \Delta_qb_j \Vert^2_{L^2_x} = \Vert \partial_j \Delta_qb_i \Vert^2_{L^2_x} + \Vert \partial_i \Delta_qb_j \Vert^2_{L^2_x} +2(\partial_j \Delta_qb_i,\partial_i \Delta_qb_j)\\ = \Vert \partial_j \Delta_qb_i \Vert^2_{L^2_x} + \Vert \partial_i \Delta_qb_j \Vert^2_{L^2_x} +2(\partial_i \Delta_qb_i,\partial_j \Delta_qb_j). \end{align*} Thus, \begin{align*} \sum_{i,j}\Vert \partial_j \Delta_qb_i + \partial_i \Delta_qb_j \Vert^2_{L^2_x} = 2\Vert \nabla_x \Delta_q b\Vert^2_{L^2_x} + 2\Vert \nabla_x \cdot \Delta_q b\Vert^2_{L^2_x}, \end{align*} and this implies \begin{align*} &\frac{d}{dt} E^2_q(g) - \sum_{i,j}(A_{ij}(\Delta_q g_2 )+2\Delta_qc\delta_{ij}, \partial_j \Delta_q\partial_tb_i + \partial_i \Delta_q\partial_tb_j) \\ &+ 2\Vert \nabla_x \Delta_q b\Vert^2_{L^2_x} + 2\Vert \nabla_x \cdot \Delta_q b\Vert^2_{L^2_x} = ( A_{ij} (\Delta_qR_1+\Delta_qR_2), \partial_j \Delta_qb_i + \partial_i \Delta_qb_j). \end{align*} Substituting $\eqref{eq: fluid-type2}_2$ to eliminate $\partial_t \Delta_q b$, we have \begin{align*} &\vert (A_{ij}(\Delta_q g_2 )+2\Delta_qc\delta_{ij}, \partial_j \Delta_q\partial_tb_i) \vert \\ &= \left\vert \left( \partial_j A_{ij}(\Delta_q g_2 )+2\partial_j \Delta_q c\delta_{ij}, \partial_i \Delta_q(a+5c) + \sum_l \partial_l (v_iv_l \mu^{1/2}, \Delta_qg_2) \right) \right\vert \\ &\le \varepsilon \Vert \nabla_x \Delta_qa \Vert^2_{L^2_x}+ \frac{C}{\varepsilon} \Vert \nabla_x \Delta_qc \Vert^2_{L^2_x} + \frac{C}{\varepsilon} \Vert \nabla_x \Delta_q g_2 \Vert^2_{L^2_{v,(\gamma+\nu)/2} L^2_x} \end{align*} and the other terms on the left hand side are similarly estimated.
Hence, for small $\varepsilon_2 >0$ we have \begin{align}\label{energy2}\notag \frac{d}{dt} E^2_q(g) +&\lambda' \Vert \nabla_x \Delta_qb \Vert^2_{L^2_x} \le \varepsilon_2 \Vert \nabla_x \Delta_qa \Vert^2_{L^2_x} + \frac{C}{\varepsilon_2} \Vert \nabla_x \Delta_qc \Vert^2_{L^2_x} \\ &+ \frac{C}{\varepsilon_2} \Vert \nabla_x \Delta_q g_2 \Vert^2_{L^2_{v,(\gamma+\nu)/2} L^2_x} + \sum_{i,j} \Vert A_{ij}(\Delta_q R_2) \Vert^2_{L^2_x}. \end{align} Lastly, from $\eqref{eq: fluid-type2}_2$ we have \begin{align*} \frac{d}{dt} E^3_q(g)- \sum_{i}(\Delta_q b_i,\partial_i \Delta_q \partial_t a) +\Vert \nabla_x \Delta_qa \Vert^2_{L^2_x} = -5\sum_{i} (\partial_i \Delta_q c, \partial_i \Delta_q a) \\ - \sum_{i,j} (\partial_j(v_i v_j \mu^{1/2} ,\Delta_q g_2), \partial_i \Delta_q a). \end{align*} Eliminating $\partial_t \Delta_q a$ by $\eqref{eq: fluid-type2}_1$, we have \begin{align}\label{energy3} \frac{d}{dt} E^3_q(g) + \lambda' \Vert \nabla_x \Delta_qa \Vert^2_{L^2_x} \le C \Vert \nabla_x \Delta_q(b,c) \Vert^2_{L^2_x} +C \Vert \nabla_x \Delta_q g_2 \Vert^2_{L^2_{v,(\gamma+\nu)/2} L^2_x}. \end{align} For sufficiently small $\delta_2$ and $\delta_3$ with $0 < \delta_3 \ll \delta_2 \ll 1$, taking the summation $\eqref{energy1}+\delta_2\eqref{energy2}+ \delta_3\eqref{energy3}$ and then choosing small $\varepsilon_1$ and $\varepsilon_2$, we obtain for small $\lambda >0$, \begin{align}\label{energy4}\notag \frac{d}{dt} E_q(g(t))+\lambda &\Vert \nabla_x \Delta_q(a, b, c) \Vert^2_{L^2_x} \lesssim \Vert \nabla_x \Delta_q g_2 \Vert^2_{L^2_{v,(\gamma+\nu)/2} L^2_x} \\ &\quad + \sum_{i,j} \Vert A_{ij}(\Delta_q R_2) \Vert^2_{L^2_x} + \sum_{i} \Vert B_i(\Delta_q R_2) \Vert^2_{L^2_x}.
\end{align} Integrating \eqref{energy4} on $[0,T]$ and taking the square root of the resultant inequality, we have \begin{align}\label{addition}\notag \Vert \nabla_x \Delta_q(a, b, c) \Vert_{L^2_T L^2_x} &\lesssim \sqrt{\left|E_q(g(0))\right|} + \sqrt{\left|E_q(g(T))\right|} \\ &\notag + \Vert \nabla_x \Delta_q g_2 \Vert_{L^2_T L^2_{v,(\gamma+\nu)/2} L^2_x} + \sum_{i,j} \Vert A_{ij}(\Delta_q R_2) \Vert_{L^2_T L^2_x} \\& + \sum_{i} \Vert B_i(\Delta_q R_2) \Vert_{L^2_T L^2_x}. \end{align} Multiplying this inequality by $2^{q/2}$ and taking summation over $q \ge -1$ yield \begin{align}\label{ineq: macro-nexttolast}\notag \Vert \nabla_x (a,b,c) \Vert_{\tilde{L}^2_T(B^{1/2}_x)} &\lesssim \sum_{q\ge -1}2^{q/2}\sqrt{\vert E_q(g(0))\vert} + \sum_{q\ge -1}2^{q/2}\sqrt{\vert E_q(g(T))\vert} \\ &+ \Vert g_2 \Vert_{\tilde{L}^2_T \tilde{L}^2_{v,(\gamma+\nu)/2} (B^{3/2}_x)} + \sum_{i,j}\sum_{q\ge -1}2^{q/2} \Vert A_{ij}(\Delta_q R_2) \Vert_{L^2_T L^2_x} \notag\\ &+ \sum_{i}\sum_{q\ge -1}2^{q/2} \Vert B_i(\Delta_q R_2) \Vert_{L^2_T L^2_x}. \end{align} Here we used Lemma \ref{Besov embedding} to estimate the sum corresponding to the third term of the right hand side of \eqref{addition} by $\Vert g_2 \Vert_{\tilde{L}^2_T \tilde{L}^2_{v,(\gamma+\nu)/2} (B^{3/2}_x)}$. This term is governed by $\Vert g_2\Vert_{\mathcal{T}^{3/2}_{T,2,2}}$ since $\Vert \cdot \Vert_{L^2_{v,(\gamma+\nu)/2}} \lesssim |\!|\!| \cdot |\!|\!| $. The other four terms on the right hand side are estimated as follows. The Cauchy-Schwarz inequality yields \begin{align*} \sum_{q\ge -1}2^{q/2}\sqrt{\vert E_q(g(t))\vert} \lesssim \sum_{q\ge -1}2^{q/2} \left[ \Vert \nabla_x \Delta_q (a,b,c)(t) \Vert_{L^2_x} + \Vert \Delta_q (b,c)(t)\Vert_{L^2_x} \right. \\ \left.+\Vert \nabla_x \Delta_q g_2(t)\Vert_{L^2_vL^2_x}\right]. 
\end{align*} Using Lemma \ref{Besov embedding} once again, we have \begin{align*} \sum_{q\ge -1}2^{q/2}\sqrt{\vert E_q(g(0))\vert} \lesssim \Vert g_0 \Vert_{\tilde{L}^2_v(B^{3/2}_x)}, \quad \sum_{q\ge -1}2^{q/2}\sqrt{\vert E_q(g(t))\vert} \lesssim \mathcal{E}_T(g). \end{align*} Applying Lemmas \ref{4-2} and \ref{4-3} with $\phi(v) =(v_iv_j -\delta_{ij})\mu^{1/2}(v)$ and $\phi(v)=\frac{1}{10}(\vert v \vert^2-5)v_i \mu^{1/2}(v)$ yields \begin{align*} \sum_{i,j}\sum_{q\ge -1}2^{q/2} \Vert A_{ij}(\Delta_q R_2) \Vert_{L^2_T L^2_x} + \sum_{i}\sum_{q\ge -1}2^{q/2} \Vert B_i(\Delta_q R_2) \Vert_{L^2_T L^2_x}\\ \lesssim \Vert g_2 \Vert_{\mathcal{T}^{3/2}_{T,2,2}} + \mathcal{E}_T(g)\mathcal{D}_T(g). \end{align*} Therefore, substituting the above four inequalities into \eqref{ineq: macro-nexttolast}, we finally obtain the desired estimate. \end{proof} At the end of this section, we derive an a priori estimate of the energy and the dissipation terms. \begin{lem}\label{energy estimate} It holds that \begin{align*} \mathcal{E}_T(g) + \mathcal{D}_T(g) \lesssim \Vert g_0 \Vert_{\tilde{L}^2_v(B^{3/2}_x)} +\left( \mathcal{E}_T(g) + \sqrt{\mathcal{E}_T(g)} \right) \mathcal{D}_T(g). \end{align*} \end{lem} \begin{proof} Applying $\Delta_q$ to \eqref{eq: perturbation} and taking the inner product with $\Delta_q g$ over $\mathbb{R}^3_x \times \mathbb{R}^3_v$, we have \begin{align*} (\partial_t \Delta_q g, \Delta_q g) + (v\cdot \Delta_q \nabla_x g, \Delta_q g) +(\mathcal{L}(\Delta_q g), \Delta_q g) = (\Delta_q \Gamma(g,g), \Delta_q g). \end{align*} Since $\Delta_q \nabla_x g= \nabla_x \Delta_q g$, we have $(v\cdot \Delta_q \nabla_x g, \Delta_q g)_{L^2_x}=0$.
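For completeness, the vanishing of the transport term follows by integration by parts in $x$, with $v$ acting as a fixed parameter (a standard observation, recorded here as a brief aside):

```latex
\begin{align*}
(v\cdot \nabla_x \Delta_q g, \Delta_q g)_{L^2_x}
  = \frac{1}{2}\int_{\mathbb{R}^3_x} v\cdot \nabla_x \big( (\Delta_q g)^2 \big)\, dx = 0,
\end{align*}
```

since $\Delta_q g(t,\cdot,v)$ decays at spatial infinity.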
Moreover, \begin{align*} &(\Delta_q \Gamma(g, g), \Delta_q \mathbf{P}g)_{L^2_v}\\ &=\int_{\mathbb{R}^3_v \times \mathbb{R}^3_{v_*}\times \mathbb{S}^2} B \mu^{1/2}_* (\Delta_q (g_*' g') -\Delta_q (g_* g)) \Delta_q \mathbf{P}g dvdv_* d\sigma \\ &=\frac{1}{2} \int_{\mathbb{R}^3_v \times \mathbb{R}^3_{v_*}\times \mathbb{S}^2} B \Delta_q (g_* g) \Delta_q ((\mathbf{Pg})'\mu'^{1/2}_* + (\mathbf{Pg})'_*\mu'^{1/2} \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad-\mathbf{Pg}\mu^{1/2}_* -(\mathbf{Pg})_*\mu^{1/2}) dvdv_* d\sigma \\ &=0. \end{align*} Therefore, we have \begin{align*} \frac{1}{2}\frac{d}{dt} \Vert \Delta_q g \Vert^2_{L^2_v L^2_x} +2\lambda_0 \Vert |\!|\!|\Delta_q (\mathbf{I}-\mathbf{P}) g |\!|\!| \Vert^2_{L^2_x} \le \vert (\Delta_q \Gamma(g,g), \Delta_q (\mathbf{I}-\mathbf{P})g) \vert, \end{align*} where $\lambda_0$ is the constant given in Lemma \ref{linear term}. Integrate this inequality over $[0, t]$ with $0 \le t \le T$, take the square root of the resultant inequality and multiply it by $2^{3q/2}$. Then we have \begin{align*} 2^{3q/2} \Vert \Delta_q g(t) \Vert_{L^2_v L^2_x} + \sqrt{\lambda_0} 2^{3q/2} \left( \int^t_0 \Vert |\!|\!| (\mathbf{I}-\mathbf{P})\Delta_q g(\tau) |\!|\!| \Vert^2_{L^2_x} d\tau \right)^{1/2} \\ \le 2^{3q/2} \Vert \Delta_q g_0 \Vert_{L^2_v L^2_x} + 2^{3q/2} \left( \int^t_0 \vert (\Delta_q \Gamma(g,g), \Delta_q (\mathbf{I}-\mathbf{P})g) \vert d\tau \right)^{1/2}. \end{align*} We take the supremum over $0\le t \le T$ on the left hand side and then the summation over $q \ge -1$. Together with Lemma \ref{4-1}, we have \begin{align}\label{ineq: before-apriori} \Vert g \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)} + \sqrt{\lambda_0} \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^{3/2}_{T,2,2}} \le \Vert g_0 \Vert_{\tilde{L}^2_v (B^{3/2}_x)} + \sqrt{\mathcal{E}_T(g)}\mathcal{D}_T(g). \end{align} We now use Lemma \ref{macro-micro}.
Taking summation $\delta \eqref{macro-apriori} + \eqref{ineq: before-apriori}$ with small positive $\delta$, we have \begin{align*} (1 -\delta) \mathcal{E}_T(g) + (\sqrt{\lambda_0} -\delta) \Vert (\mathbf{I}-\mathbf{P})g \Vert_{\mathcal{T}^{3/2}_{T,2,2}} + \delta \Vert \nabla_x (a, b, c) \Vert_{\tilde{L}^2_T (B^{1/2}_x)}\\ \lesssim \Vert g_0 \Vert_{\tilde{L}^2_v (B^{3/2}_x)} + (\sqrt{\mathcal{E}_T(g)}+\mathcal{E}_T(g))\mathcal{D}_T(g). \end{align*} The proof is completed by choosing a sufficiently small $\delta$. \end{proof} Since \eqref{apriori-estimate-0} follows from Lemma \ref{energy estimate}, the existence of a unique global solution in Theorem \ref{main theorem} is a direct consequence of Theorem \ref{local-existence} in the next section, concerning the existence of a local solution. The non-negativity of solutions to the Cauchy problem \eqref{eq: Boltzmann} is deferred to Section \ref{S6}. \section{Local existence}\label{S5} \setcounter{equation}{0} \subsection{Local existence of linear and nonlinear equations} \begin{lem}[Local existence for a linear equation]\label{lemma2} There exist $C_0 > 1$, $\varepsilon_0>0$, and $T_0 >0$ such that for all $0 < T \le T_0$, $g_0 \in \tilde L^2_v(B_x^{3/2})$, $f \in \tilde L_T^{\infty} \tilde L_v^2(B_x^{3/2})$ satisfying $$ \|f\|_{\tilde L_T^{\infty} \tilde L_v^2(B_x^{3/2})} \leq \varepsilon_0,$$ the Cauchy problem \begin{align}\label{l-e-c}\begin{cases} \partial_tg+v\cdot \nabla_{x}g+\mathcal{L}_1g=\Gamma(f,g) -\mathcal{L}_2f ,\\ g|_{t=0}=g_0, \end{cases}\end{align} admits a weak solution $g \in L^{\infty}([0,T]; L^2(\RR_{x,v}^6))$ satisfying \begin{equation}\label{energy-important} \|g\|_{\tilde L_T^{\infty} \tilde L^2_v (B_x^{3/2})} +\|g\|_{\mathcal{T}_{T,2,2}^{3/2}} \le C_0\Big( \|g_0\|_{\tilde L^2_v(B_x^{3/2})} + \sqrt{ T} \|f\|_{\tilde L_T^{\infty} \tilde L^2_v (B_x^{3/2})}\Big). \end{equation} \end{lem} \begin{proof} Consider $$\mathcal{Q}=-\partial_t+(v\cdot \nabla_{x}+\mathcal{L}_1-\Gamma(f,\cdot))^*,$$ where the adjoint
operator $(\cdot)^*$ is taken with respect to the scalar product in $L^2(\RR_{x,v}^6)$. Then, for all $h \in C^{\infty}([0,T], \mathcal{S}(\mathbb{R}_{x,v}^6))$, with $h(T)=0$ and~$0 \leq t \leq T$, \begin{align*} & \textrm{Re}\big(h(t),\mathcal{Q}h(t)\big)_{x,v} = -\frac{1}{2}\frac{d}{dt}(\|h\|^2_{L^2_{x,v}})\\ \notag & \quad +\textrm{Re}(v\cdot\nabla_{x}h,h)_{{x,v}}+\textrm{Re}(\mathcal{L}_1 h,h)_{{x,v}}-\textrm{Re}(\Gamma(f,h),h)_{{x,v}} \\ \notag \geq& \ -\frac{1}{2}\frac{d}{dt}\big(\|h(t)\|^2_{{L^2_{x,v}}} \big) +\frac{1}{C}\Vert |\!|\!| h(t) |\!|\!| \Vert^2_{L^2_x}- C \|h(t)\|_{{L^2_{x,v}}}^2\\ & \qquad -C\|f\|_{L^{\infty}([0,T] \times \RR_x^3; L^2(\mathbb{R}_{v}^3))} \Vert |\!|\!| h(t) |\!|\!| \Vert^2_{L^2_x}\,, \end{align*} because $\mathcal{L}_1$ is a self-adjoint operator and $\textrm{Re}(v\cdot \nabla_{x}h,h)_{L^2_{x,v}}=0$. Since $\tilde L_T^{\infty} \tilde L_v^2(B_x^{3/2}) \subset L^{\infty}([0,T] \times \RR_x^3; L^2(\mathbb{R}_{v}^3))$, we have \begin{align*} -\frac{d}{dt}\big(e^{2Ct}\|h(t)\|_{{L^2_{x,v}}}^2\big)+&\frac{1}{C}e^{2Ct}\Vert |\!|\!| h(t) |\!|\!| \Vert^2_{L^2_x} \\ &\leq 2e^{2Ct}\|h(t)\|_{{L^2_{x,v}}}\|\mathcal{Q}h(t)\|_{{L^2_{x,v}}}, \end{align*} if $\varepsilon_0$ is sufficiently small. Since $h(T)=0$, for all $t \in [0,T]$ we have \begin{align*} & \ \|h(t)\|_{{L^2_{x,v}}}^2+\frac{1}{C} \Vert |\!|\!| h |\!|\!| \Vert^2_{L^2([t,T]\times \RR^3_x)}\\ &\quad \leq \ 2\int_t^Te^{2C(\tau-t)}\|h(\tau)\|_{{L^2_{x,v}}}\|\mathcal{Q}h(\tau)\|_{{{L^2_{x,v}}}}d\tau \\ &\quad \leq 2e^{2CT}\|h\|_{L^{\infty}([0,T] ;L^2(\mathbb{R}_{x,v}^6))}\|\mathcal{Q}h\|_{L^{1}([0,T],L^2(\mathbb{R}_{x,v}^6))}, \enskip \text{so that} \end{align*} \begin{equation}\label{tr7} \|h\|_{L^{\infty}([0,T] ;L^2(\mathbb{R}_{x,v}^6))} \leq 2 e^{2CT}\|\mathcal{Q}h\|_{L^{1}([0,T],L^2(\mathbb{R}_{x,v}^6))}.
\end{equation} Consider the vector subspace \begin{align*} \mathbb{W}&=\{w=\mathcal{Q}h : h \in C^{\infty}([0,T],\mathcal{S}(\mathbb{R}_{x,v}^6)), \ h(T)=0\} \\ &\subset L^{1}([0,T], L^2(\mathbb{R}_{x,v}^6)). \end{align*} This inclusion holds because it follows from Proposition \ref{upper-recent} that for $g \in L^2_{x,v}$ \begin{align*} |(\Gamma(f,\cdot)^*h,g)_{L^2_{x,v}}|=|(h,\Gamma(f,g))_{L^2_{x,v}}| \lesssim \|f \|_{L^\infty_x(L^2_v)} \|g\|_{L^2_{x,v}} \|h\|_{L^2_x(H^{\nu}_{\gamma +\nu})}\,, \end{align*} and hence, for all $t \in [0,T]$, $$\|\Gamma(f,\cdot)^*h\|_{L^2_{x,v}} \lesssim \|f \|_{L^\infty_x(L^2_v)} \|h\|_{L^2_x(H^{\nu}_{\gamma +\nu})}.$$ Since $g_0 \in L^2(\mathbb{R}_{x,v}^6)$, we define the linear functional \begin{align*} \mathcal{G} \ : \qquad &\mathbb{W} \enskip \longrightarrow \mathbb{C}\\ \notag w=&\mathcal{Q}h \mapsto (g_0,h(0))_{L^2_{x,v}} - (\mathcal{L}_2 f\,, h)_{L^2([0,T]; L^2_{x,v})}\,, \end{align*} where $h \in C^{\infty}([0,T],\mathcal{S}(\mathbb{R}_{x,v}^6))$, with $h(T)=0$. According to (\ref{tr7}), the operator $\mathcal{Q}$ is injective. The linear functional $\mathcal{G}$ is therefore well-defined. It follows from (\ref{tr7}) that $\mathcal{G}$ is a continuous linear form on $(\mathbb{W},\|\cdot\|_{L^{1}([0,T]; L^2(\mathbb{R}_{x,v}^6))})$, \begin{align*} |\mathcal{G}(w)| &\leq \|g_0\|_{L^2_{x,v}}\|h(0)\|_{L^2_{x,v}} + C_T \|f\|_{L^\infty([0,T]; L^2_{x,v})} \|h\|_{L^{1}([0,T]; L^2(\mathbb{R}_{x,v}^6))} \\ &\le C'_T \Big ( \|g_0\|_{L^2_{x,v}}+ \|f\|_{L^\infty([0,T]; L^2_{x,v})} \Big) \|\mathcal{Q}h\|_{L^1([0,T];L^2(\mathbb{R}_{x,v}^6))}\\ & = C'_T\Big ( \|g_0\|_{L^2_{x,v}}+ \|f\|_{L^\infty([0,T]; L^2_{x,v})} \Big)\|w\|_{L^1([0,T];L^2(\mathbb{R}_{x,v}^6))}\,. \end{align*} By using the Hahn-Banach theorem, $\mathcal{G}$ may be extended as a continuous linear form on $$L^{1}([0,T]; L^2(\mathbb{R}_{x,v}^6)),$$ with a norm smaller than $ C'_T \Big ( \|g_0\|_{L^2_{x,v}}+ \|f\|_{L^\infty([0,T]; L^2_{x,v})} \Big)$.
It follows that there exists $g \in L^{\infty}([0,T]; L^2(\mathbb{R}_{x,v}^6))$ satisfying $$\|g\|_{L^{\infty}([0,T],L^2(\mathbb{R}_{x,v}^6))} \leq C'_T \Big ( \|g_0\|_{L^2_{x,v}}+ \|f\|_{L^\infty([0,T]; L^2_{x,v})} \Big),$$ such that $$\forall w \in L^{1}([0,T]; L^2(\mathbb{R}_{x,v}^6)), \quad \mathcal{G}(w)=\int_0^T(g(t),w(t))_{L^2_{x,v}}dt.$$ This implies that for all $h \in C_0^{\infty}((-\infty,T),\mathcal S(\mathbb{R}_{x,v}^6))$, \begin{align*} \mathcal{G}(\mathcal{Q}h)&=\int_0^T(g(t),\mathcal{Q}h(t))_{L^2_{x,v}}dt\\ &=(g_0,h(0))_{L^2_{x,v}} - \int_0^T (\mathcal{L}_2 f(t)\,, h(t))_{ L^2_{x,v}}dt\,. \end{align*} This shows that $g \in L^{\infty}([0,T]; L^2(\mathbb{R}_{x,v}^6))$ is a weak solution of the Cauchy problem \begin{equation}\label{cl1ff1} \begin{cases} \partial_tg+v\cdot \nabla_{x}g+\mathcal{L}_1g=\Gamma(f,g) -\mathcal{L}_2f ,\\ g|_{t=0}=g_0. \end{cases} \end{equation} It remains to prove that $g$ satisfies \eqref{energy-important}. Here we give only a formal proof, temporarily assuming $g \in \tilde L_T^\infty \tilde L^2_v (B^{3/2}) \cap \cT^{3/2}_{T,2,2}$, since the rigorous proof requires several further ingredients; it is given at the end of this section. \noindent {\it Formal proof of \eqref{energy-important}}: Applying $\Delta_q$ $(q \ge -1)$ to \eqref{cl1ff1} and taking the inner product with $2^{3q} \Delta_q g$ over $\RR^6_{x,v}$, we obtain \begin{align*} &\frac{d }{dt}2^{3q} \|\Delta_q g\|^2_{x,v} + \frac{2^{3q}}{C} \| |\!|\!|\Delta_q g|\!|\!| \|_{L^2_x}^2 \\ &\quad \le 2^{3q+1} \left\vert\left(\Delta_q \Gamma(f,g), \Delta_q g\right)_{x,v}\right\vert + 2^{3q} \|\Delta_q f\|^2_{x,v} + C 2^{3q}\|\Delta_q g\|^2_{x,v}. \end{align*} Integrate this with respect to the time variable over $[0,t]$ with $0 \le t \le T$, take the square root of both sides of the resulting inequality and sum up over $q\ge -1$.
Then it follows from \eqref{tri1} of Lemma \ref{trilinear} that \begin{align}\label{local-energy} &\|g\|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }+ \frac{1}{\sqrt C}\|g\|_{\cT^{3/2}_{T,2,2}}\le C'\|f\|^{1/2}_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }\|g\|_{\cT^{3/2}_{T,2,2}} \notag \\ &\quad + 2\Big( \|g_0\|_{\tilde L^2_v (B^{3/2}) } + \sqrt{ T} \|f\|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) } + \sqrt{ CT} \|g\|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }\Big)\,. \end{align} If $\varepsilon_0$ and $T$ are taken so small that $C'\varepsilon_0^{1/2} \le 1/(2\sqrt{C})$ and $2\sqrt{CT} \le 1/2$, then the terms involving $g$ on the right hand side can be absorbed into the left hand side, and \eqref{energy-important} follows with a suitable constant $C_0$. \end{proof} \begin{thm}[Local Existence] \label{local-existence} There exist $\varepsilon_1$, $T>0$ such that if $ g_0 \in \tilde{L}^2_v (B^{3/2}_x)$ and \begin{equation*} \Vert g_0 \Vert_{\tilde{L}^2_v (B^{3/2}_x)}\le \varepsilon_1, \end{equation*} then the Cauchy problem \eqref{eq: perturbation} admits a unique solution $$g(x,v,t) \in \tilde L^\infty_T \tilde L^2_v (B^{3/2}) \, \mathop{\cap} \, \cT_{T,2,2}^{3/2}\,\,. $$ \end{thm} \begin{proof} Consider the sequence of approximate solutions defined by \begin{equation}\label{itere}\begin{cases} \partial_tg^{n+1}+v\cdot \nabla_{x}g^{n+1}+\mathcal{L}_1g^{n+1}=\Gamma(g^n,g^{n+1}) -\mathcal{L}_2g^n ,\\ g^{n+1}|_{t=0}=g_0, \hskip1.5cm ( n = 0,1,2,\cdots, \enskip g^0 = 0)\,. \end{cases} \end{equation} Apply Lemma \ref{lemma2} with $g = g^{n+1}$, $f = g^n$, and $T = \min \{T_0, 1/(4C_0^2)\}$. Then we have \begin{equation}\label{iterate-sol} {\|g^n \|_{\tilde L^\infty_T \tilde L^2_v (B^{3/2})} + \|g^n\|_{\cT_{T,2,2}^{3/2}} \le \varepsilon_0}, \end{equation} inductively, if $\varepsilon_1$ is taken such that $2C_0 \varepsilon_1 \le \varepsilon_0$. It remains to prove the convergence of the sequence $$\{g^n\,, \, n \in \NN\} \subset \tilde L^\infty_T \tilde L^2_v (B^{3/2}) \, \mathop{\cap} \, \cT_{T,2,2}^{3/2}\,. $$ Setting $w^n = g^{n+1} - g^n$, from \eqref{itere} we have \[ \partial_t w^{n}+v\cdot \nabla_{x}w^{n}+\mathcal{L}_1w^{n} =\Gamma(g^n, w^n)+ \Gamma(w^{n-1},g^{n}) -\mathcal{L}_2w^{n-1}\,, \] with $w^{n}|_{t=0}=0$.
Similarly to the computation for \eqref{local-energy}, we obtain \begin{align*} &\|w^n\|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }+ \frac{1}{\sqrt C}\|w^n\|_{\cT^{3/2}_{T,2,2}} \le C\Big( \|g^n\|^{1/2}_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }\|w^n\|_{\cT^{3/2}_{T,2,2}} \\ & \qquad \qquad \qquad + \|w^{n-1}\|^{1/2}_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) } \|g^n\|^{1/2}_{\cT^{3/2}_{T,2,2}} \|w^n\|^{1/2}_{\cT^{3/2}_{T,2,2}}\\ &\qquad \qquad \qquad + \sqrt{ T} \|w^{n-1}\|^{1/2}_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) } \|w^{n}\|^{1/2}_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }\Big)\,. \end{align*} If $\varepsilon_0$ and $T$ are sufficiently small, then we have \begin{align*} \|w^n\|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) } + \frac{1}{\sqrt C}\|w^n\|_{\cT^{3/2}_{T,2,2}} &\le \lambda \,\,\|w^{n-1}\|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }\\ & \le \lambda^{n-1} \,\,\|w^{1}\|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) } \end{align*} for some $0< \lambda <1$, which shows that \[ \mbox{ $\{g^n\}$ is a Cauchy sequence in $\tilde L_T^\infty \tilde L^2_v (B^{3/2})\cap \cT^{3/2}_{T,2,2}$, } \] and that the limit function $g$ is the desired solution to the Cauchy problem \begin{align*} \partial _t g + v\cdot\nabla_x g + \cL g=\Gamma(g,\,g), \,\,\,\, g|_{t=0}=g_0\,. \end{align*} \end{proof} \subsection{Rigorous proof of \eqref{energy-important}} The preceding proof of \eqref{energy-important} is formal since we assumed a priori that the left hand side of \eqref{local-energy} is finite. The rigorous proof requires a more involved procedure. We start with the following lemma. \begin{lem} \label{modify} Let $0<s\le \frac{3}{2}$ and $0<T \le \infty$. For $ M \in \NN$ put $\displaystyle f_M = S_M f = \sum _{q\ge -1}^{M -1} \Delta_q f$.
If $g$ satisfies \begin{align}\label{triple-norm-finite} \left \Vert |\!|\!| g |\!|\!| \right \Vert_{L^2_T L^2_x}^2 = \int_0^T \int_{\RR_x^3} |\!|\!| g|\!|\!| ^2 dx dt < \infty\,, \end{align} then there exists a $C>0$ independent of $M$ such that for any $\kappa >0$ \begin{align}\label{cut-x}\notag \sum_{q \ge -1} &\frac{2^{qs}}{1+ \kappa 2^{2qs}}\left( \int^T_0 \vert ( \Delta_q \Gamma (f_M,g), \Delta_q g)_{x,v} \vert dt \right)^{1/2}\\ &\le C \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{{3/2}}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}_{T,2,2}^{s, \kappa}} \notag \\ &\qquad + C_M \Vert f_M \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{{3/2}}_x)}^{1/2} \left \Vert |\!|\!| g |\!|\!| \right \Vert_{L^2_T L^2_x} \,, \end{align} where $C_M >0$ is a constant depending only on $M$ and \[ \Vert g \Vert_{\cT^{s, \kappa}_{T,2,2}} = \sum_{q \ge -1}^\infty \frac{2^{qs}} {1+\kappa 2^{2qs}} \left \Vert \, |\!|\!| \Delta_q g|\!|\!|\, \right\Vert_{L^{2}_T L^2_x} . \] \end{lem} \begin{proof} We notice that \eqref{triple-norm-finite} implies $\Vert g \Vert_{\cT^{s, \kappa}_{T,2,2}} < \infty$ for each $\kappa >0$. The proof of the lemma follows almost the same procedure as that for \eqref{tri1} of Lemma \ref{trilinear}.
Indeed, recalling the Bony decomposition, for $\Gamma^1 (f_M, g)$ we obtain \begin{align*} &\sum_{q \ge -1} \frac{2^{qs}}{1+\kappa 2^{2qs}}\left( \int^T_0 \vert ( \Delta_q \Gamma^1 (f_M,g), \Delta_q g)_{x,v} \vert dt \right)^{1/2} \\ \notag &\lesssim \sum_{q \ge -1}\frac{2^{qs}}{1+\kappa 2^{2qs}} \left(\int_0^T \sum_{\vert j-q \vert \le 4} \Vert f \Vert_{L^2_v L^\infty_x} \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_x} \Vert |\!|\!| \Delta_q g |\!|\!| \Vert_{L^2_x} dt \right)^{1/2} \\ \notag &\lesssim \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \left( \sum_{q \ge -1} \frac{2^{qs}}{1+\kappa 2^{2qs}}\left( \sum_{\vert j-q \vert \le 4} \int_0^T \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_x}^2 dt \right)^{1/2} \right)^{1/2} \\ \notag & \qquad \qquad \times \left( \sum_{q \ge -1} \frac{2^{qs}}{1+\kappa 2^{2qs}} \left(\int_0^T \Vert |\!|\!| \Delta_q g |\!|\!| \Vert_{L^2_x}^2 dt \right)^{1/2} \right)^{1/2} \\ \notag &\lesssim \Vert f \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \Vert g \Vert_{\mathcal{T}^{s, \kappa}_{T,2,2}}, \end{align*} similarly to \eqref{tri-11}. As for $\Gamma^2(f_M, g)$, we have \begin{align*} &\sum_{q \ge -1} \frac{2^{qs}}{1+\kappa 2^{2qs}}\left( \int^T_0 \vert ( \Delta_q \Gamma^2 (f_M,g), \Delta_q g)_{x,v} \vert dt \right)^{1/2} \\ & \lesssim \sum_{q \ge -1} \frac{2^{qs}}{1+\kappa 2^{2qs}} \left( \sum_{\stackrel{\vert j-q \vert \le 4}{ j \le M}}\int^T_0 \Vert |\!|\!| S_{j-1} g |\!|\!| \Vert_{L^\infty_x} \Vert \Delta_j f_M \Vert_{L^2_{x,v}} \Vert |\!|\!| \Delta_q g |\!|\!| \Vert_{L^2_x} dt \right)^{1/2} \\ & \lesssim \sum_{q \ge -1}^{M+4} \frac{2^{qs}}{1+\kappa 2^{2qs}}\left( \Vert f_M \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2}\Vert |\!|\!| S_{M+4} g |\!|\!| \Vert_{L^2_TL^2_x} \Vert |\!|\!| \Delta_q g |\!|\!| \Vert_{L^2_TL^2_x} \right)^{1/2} \\ & \le C_M \Vert f_M \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v (B^{3/2}_x)}^{1/2} \Vert |\!|\!| g |\!|\!| \Vert_{L^2_TL^2_x} .
\end{align*} Since \begin{align*} \Delta_q\left( \sum_j \sum_{\vert j-j'\vert \le 1} (\Delta_{j'} f_M, \Delta_j g) \right)&= \Delta_q \left(\sum_{\max\{j,j'\} \ge q-2 } \sum_{\vert j-j' \vert \le 1} (\Delta_{j'} f_M, \Delta_j g)\right)\\ &=0 \enskip \mbox{if $q \geq M+3$}, \end{align*} the term corresponding to $\Gamma^3$ is estimated as follows: \begin{align*} &\sum_{q \ge -1} ^{M+2} \frac{2^{qs}} {1+ \kappa 2^{2qs}}\left( \int^T_0 \vert ( \Delta_q \Gamma^3 (f_M,g), \Delta_qg)_{x,v} \vert dt \right)^{1/2} \\ & \lesssim \sum_{q \ge -1}^{M+2} \frac{2^{qs}} {1+ \kappa 2^{2qs}}\left( \sum_{j \le M+1} \int^T_0 \Vert f_M \Vert_{L^2_vL^\infty_x} \Vert |\!|\!| \Delta_j g |\!|\!| \Vert_{L^2_x} \Vert |\!|\!| \Delta_q g |\!|\!| \Vert_{L^2_x} dt \right)^{1/2} \\ & \le C_M \Vert f_M \Vert_{\tilde L^\infty_T \tilde L^2_v (B^{3/2}_x)}^{1/2} \Vert |\!|\!| g |\!|\!| \Vert_{L^2_{T,x}} . \end{align*} Thus the proof is completed. \end{proof} For each $f_M$ $(M \in \NN)$ we consider a weak solution $g_M \in L^{\infty}([0,T]; L^2(\mathbb{R}_{x,v}^6))$ to the Cauchy problem \eqref{l-e-c} with $f$ replaced by $f_M$. If $g_M$ satisfies \eqref{triple-norm-finite}, then by the same procedure as in the formal proof of \eqref{energy-important} we obtain \begin{align*} &\sum_{q \ge -1} \frac{2^{3q/2}}{1+ \kappa 2^{3q}} \sup_{0 \le t \le T } \|\Delta_q g_M(t)\|_{x,v} + \frac{1}{\sqrt C} \Vert g_M \Vert_{\cT^{3/2, \kappa}_{T,2,2}} \\ &\quad \le C'\|f\|^{1/2}_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }\|g_M\|_{\cT^{3/2,\kappa}_{T,2,2}} \notag\\ &\quad + 2\Big( \sum_{q \ge -1} \frac{2^{3q/2}}{1+ \kappa 2^{3q}} \|\Delta_q g_0\|_{x,v} + \sqrt{ T} \|f\|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) } + \sqrt{ CT}\\ & \quad \times \sum_{q \ge -1} \frac{2^{3q/2}}{1+ \kappa 2^{3q}} \sup_{0 \le t \le T } \|\Delta_q g_M(t)\|_{x,v} \Big) + C_M \Vert f_M \Vert_{\tilde{L}^\infty_T \tilde{L}^2_v(B^{{3/2}}_x)}^{1/2} \left \Vert |\!|\!| g_M |\!|\!| \right \Vert_{L^2_T L^2_x}\,. 
\end{align*} Choose a small $T >0$ independent of $M$ and $\kappa$. Then letting $\kappa \rightarrow 0$, we get \[ \|g_M \|_{\tilde L_T^\infty \tilde L^2_v (B^{3/2}) }+ \|g_M \|_{\cT^{3/2}_{T,2,2}} < \infty\,, \] which justifies the preceding formal proof, and we obtain the energy estimate \eqref{local-energy} for each $g_M$. Set $w_{M,M'} = g_M - g_{M'}$, $M, M' \in \NN$. Then it follows from \eqref{l-e-c} that \begin{align*} &\partial_t w_{M,M'} +v\cdot \nabla_{x} w_{M, M'} +\mathcal{L}_1w_{M,M'}\\ & \qquad \qquad =\Gamma(f_M - f_{M'}, g_M) + \Gamma(f_{M'}, w_{M,M'}) -\mathcal{L}_2(f_M - f_{M'})\,. \end{align*} Since $\{ f_M\}$ is a Cauchy sequence in $\tilde L_T^{\infty} \tilde L_v^2(B_x^{3/2})$, it is easy to see that $\{g_M\}$ is a Cauchy sequence in $ \tilde L^\infty_T \tilde L^2_v (B^{3/2}) \, \mathop{\cap} \, \cT_{T,2,2}^{3/2}$, by a manipulation similar to that in the proof of Theorem \ref{local-existence}. Since $ \displaystyle g = \lim_{M\rightarrow \infty} g_M$ belongs to $ \tilde L^\infty_T \tilde L^2_v (B^{3/2}) \, \mathop{\cap} \, \cT_{T,2,2}^{3/2}$, the preceding formal proof is justified. It remains to show \eqref{triple-norm-finite} for a weak solution $g_M \in L^{\infty}([0,T]; L^2(\mathbb{R}_{x,v}^6))$, under the assumption that $\|f_M \|_{L^\infty([0,T]\times \RR_x^3; L^2(\RR^3_v))}$ is sufficiently small, uniformly in $M$, and moreover $\|\nabla_x f_M \|_{L^\infty([0,T]\times \RR_x^3; L^2(\RR^3_v))} < \infty$. For brevity, we write $g$ and $f$ instead of $g_M$ and $f_M$, respectively. Let us take parameters $\delta, \delta' \in (0,1)$. We use a weight function $W_{\delta'}(v) = \la \delta' v \ra^{-N}$ for $N \ge 1$ and mollifiers $M^\delta(D_v), S_\delta(D_x)$ defined in subsections \ref{ap-4}, \ref{ap-6}, respectively. Multiply the equation \eqref{l-e-c} by $ W_{\delta'}(v)S_\delta(D_x)\Big(M^\delta(D_v)\Big)^2S_\delta(D_x)W_{\delta'}(v) g $, and integrate with respect to $t \in [0,T]$ and $(x,v) \in \RR^6$.
Notice that \begin{align*} &\Big(\Gamma(f,g), W_{\delta'}S_\delta\Big(M^\delta\Big)^2S_\delta W_{\delta'} g\Big)_{{x,v}} - \Big(\Gamma(f, M^\delta S_\delta W_{\delta'}g) , M^\delta S_\delta W_{\delta'} g\Big)_{x,v}\\ &=\Big(W_{\delta'}\Gamma(f,g) - \Gamma(f, W_{\delta'}g) , S_\delta\Big(M^\delta\Big)^2S_\delta W_{\delta'} g\Big)_{x,v}\\ &+\Big( S_\delta \Gamma(f, W_{\delta'}g) - \Gamma(f, S_\delta W_{\delta'}g) , \Big(M^\delta\Big)^2S_\delta W_{\delta'} g\Big)_{x,v}\\ &+ \Big(\Gamma(f, S_\delta W_{\delta'}g)- Q(\mu^{1/2}f, S_\delta W_{\delta'}g) , \Big(M^\delta\Big)^2S_\delta W_{\delta'} g\Big)_{x,v}\\ &+ \Big(M^\delta Q(\mu^{1/2}f, S_\delta W_{\delta'}g)-Q(\mu^{1/2}f, M^\delta S_\delta W_{\delta'}g) , M^\delta S_\delta W_{\delta'} g\Big)_{x,v}\\ &+ \Big(Q(\mu^{1/2}f, M^\delta S_\delta W_{\delta'}g)-\Gamma(f, M^\delta S_\delta W_{\delta'}g) , M^\delta S_\delta W_{\delta'} g\Big)_{x,v}\\ &=\mbox{(1)} +\mbox{(2)} +\mbox{(3)}+\mbox{(4)}+\mbox{(5)}\,. \end{align*} It follows from Proposition \ref{weight-commutator} and Lemma \ref{M-bounded-triple} that \begin{align}\label{1}\notag \int_0^T (1) dt &\lesssim {\delta'}^{\nu/2} \|f\|_{L^\infty([0,T]\times \RR^3_x; L^2_v)} \|\la v\ra^{(\gamma+\nu)/2} W_{\delta'} g\|_{L^2([0,T]\times \RR^6_{x,v})}\\ & \qquad \qquad \qquad \times \| |\!|\!| M^\delta S_\delta W_{\delta'}g|\!|\!| \|_{L^2([0,T]\times \RR^3_x)}\,. \end{align} By means of \eqref{1st}, we have \begin{align}\label{2} \notag \int_0^T (2) dt &\lesssim \delta^{1-\nu/2} \|\nabla_x f \|_{L^\infty_{T,x}(L^2_v )} \|\la v \ra^{|\gamma|/2 + \nu} W_{\delta'} g\|_{L^2([0,T]\times \RR^6_{x,v})}\\ &\qquad\times \|M^\delta S_\delta W_{\delta'}g \|_{L^2([0,T]\times \RR^3_x; H^{\nu/2}_{\gamma/2}(\RR^3_v))} \end{align} because \[ \frac{\delta \la \xi \ra^{\nu}}{(1+ \delta\la \xi \ra)^{N_0}} \le \delta^{1-\nu/2} \la \xi \ra^{\nu/2}.
\] It follows from \eqref{diff-G-Q} and Lemma \ref{M-bounded-triple} that \begin{align}\label{3-5}\notag \int_0^T (3) + (5) dt &\lesssim \|f\|_{L^\infty([0,T]\times \RR^3_x; L^2_v)} \|\la v\ra^{(\gamma+\nu)/2} W_{\delta'} g\|_{L^2([0,T]\times \RR^6_{x,v})}\\ & \qquad \qquad \qquad \times \| |\!|\!| M^\delta S_\delta W_{\delta'}g|\!|\!| \|_{L^2([0,T]\times \RR^3_x)}\,. \end{align} Thanks to Propositions \ref{IV-coro-2.15} and \ref{prop2.9_amuxy3}, for some $0< \nu' < \nu$ we have \begin{align*} \int_0^T (4) dt &\lesssim \|f\|_{L^\infty([0,T]\times \RR^3_x; L^2_v)}\Big( \|M^\delta S_\delta W_{\delta'}g \|^2_{L^2([0,T]\times \RR^3_x; H^{\nu'/2}_{(\nu+\gamma)^+}(\RR^3_v))} \\ &\notag \qquad \qquad \qquad + \|\la v\ra^{(\gamma+\nu)^+} W_{\delta'} g\|^2_{L^2([0,T]\times \RR^6_{x,v})}\Big)\,. \end{align*} Note that for any $\varepsilon, \kappa, \ell >0$ there exist $C_{\kappa, \varepsilon,\ell} >0$ and $N_{\ell, \varepsilon} >0$ such that \[ \|h\|_{H^{\nu/2-\varepsilon}_{\ell}} \le \kappa \|h\|_{H^{\nu/2}} + C_{\kappa, \varepsilon,\ell} \|h\|_{L^2_{N_{\ell, \varepsilon} }}, \] (see, for example, Lemma 2.4 of \cite{HMUY}). Therefore we obtain \begin{align}\label{4}\notag \int_0^T (4) dt &\lesssim \|f\|_{L^\infty([0,T]\times \RR^3_x; L^2_v)}\Big( \kappa \|M^\delta S_\delta W_{\delta'}g \|^2_{L^2([0,T]\times \RR^3_x; H^{\nu/2}_{\gamma/2}(\RR^3_v))} \\ &\qquad \qquad \qquad + C_\kappa \|\la v\ra^N W_{\delta'} g\|^2_{L^2([0,T]\times \RR^6_{x,v})}\Big) \end{align} for a sufficiently large $N >0$.
Summing up \eqref{1}-\eqref{4}, for any $\kappa >0$ we obtain \begin{align}\label{non-linear-2}\notag &\int_0^T\Big(\Gamma(f,g), W_{\delta'}S_\delta\Big(M^\delta\Big)^2S_\delta W_{\delta'} g\Big)_{x,v}dt \le ( C_1\|f\|_{L^\infty_{T,x}(L^2_v)} + \kappa )\\ & \quad \qquad \times \| |\!|\!| M^\delta S_\delta W_{\delta'}g|\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)} + C_{\kappa, f} \|\la v\ra^N W_{\delta'} g\|^2_{L^2([0,T]\times \RR^6_{x,v})}, \end{align} by means of \eqref{upper-fundamental1}. Similarly we obtain \begin{align}\label{linear-0}\notag &\int_0^T\Big(\cL_1 g , W_{\delta'}S_\delta\Big(M^\delta\Big)^2S_\delta W_{\delta'} g\Big)_{x,v}dt\\ & \qquad \notag \ge \int_0^T\Big(\cL_1 M^\delta S_\delta W_{\delta'} g, M^\delta S_\delta W_{\delta'} g\Big)_{x,v}dt \\ & \qquad \quad - \frac{\lambda_0}{2} \| |\!|\!| M^\delta S_\delta W_{\delta'}g|\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)} - C \|\la v\ra^N W_{\delta'} g\|^2_{L^2([0,T]\times \RR^6_{x,v})}. \end{align} To handle the $v\cdot \nabla_x$ term, we use a device similar to that in (3.2.4) of \cite{AMUXY2010}. Indeed, \begin{align}\label{nabla}\nonumber &\Big(v \cdot \nabla_x g, W_{\delta'}S_\delta\Big(M^\delta\Big)^2S_\delta W_{\delta'} g\Big)_{x,v}\\ &\quad =\Big([M^\delta, v] \cdot \nabla_x S_\delta W_{\delta'}g, M^\delta S_\delta W_{\delta'} g\Big)_{x,v} \le 2 \|M^\delta S_\delta W_{\delta'} g\|^2_{L^2(\RR_{x,v}^6)}, \end{align} because $|\big(\nabla_\xi M^\delta(\xi)\big) \cdot \big(\eta S_\delta(\eta)\big)| \le N_0 M^\delta(\xi) |\delta \eta| S(\delta \eta) \le 2 M^\delta(\xi) S_\delta(\eta)$. It follows from Lemma \ref{linear term} and \eqref{non-linear-2}-\eqref{nabla} that \[ \| |\!|\!| M^\delta S_\delta W_{\delta'}g|\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)} \le C_T\Big( \|\la v\ra^N W_{\delta'} g\|^2_{L^2([0,T]\times \RR^6_{x,v})} + \|g_0\|^2_{L^2(\RR_{x,v}^6)}\Big), \] if $C_1\|f\|_{L^\infty_{T,x}(L^2_v)} \le \lambda_0 /8$.
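The passage to the limit $\delta \rightarrow 0$ in the next step can be justified by Fatou's lemma. Assuming, as is standard for such mollifiers, that $M^\delta(\xi) \rightarrow 1$ and $S_\delta(\eta) \rightarrow 1$ pointwise and boundedly as $\delta \rightarrow 0$ (their precise definitions are given in subsections \ref{ap-4} and \ref{ap-6}), we have

```latex
\begin{align*}
\| |\!|\!| W_{\delta'} g |\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)}
  \le \liminf_{\delta \rightarrow 0}
  \| |\!|\!| M^\delta S_\delta W_{\delta'} g |\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)},
\end{align*}
```

so the bound above, whose right hand side is independent of $\delta$, transfers to $W_{\delta'} g$.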
For arbitrary but fixed $\delta'>0$, we get \[ \| |\!|\!| W_{\delta'}g|\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)} < C_{\delta'} \] by letting $\delta \rightarrow 0$. We repeat the same procedure without the factor $\Big(M^\delta(D_v)\Big)^2$, that is, multiply the equation by $ \Big(W_{\delta'}(v)S_\delta(D_x)\Big)^2 g $, and integrate with respect to $t \in [0,T]$ and $(x,v) \in \RR^6$. In the same way as \eqref{1}, we get \begin{align*} &\int_0^T \left|\Big(\Gamma(f,g), \Big(W_{\delta'}S_\delta \Big)^2g\Big)_{x,v} - \Big(\Gamma(f, W_{\delta'}g) , \big(S_\delta\big)^2 W_{\delta'} g\Big)_{x,v} \right|dt \\ &\lesssim {\delta'}^{\nu/2} \|f\|_{L^\infty([0,T]\times \RR^3_x; L^2_v)} \| |\!|\!| W_{\delta'}g|\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)}\,. \end{align*} By using Corollary \ref{AMUXYtrilinear} for $\Big(\Gamma(f, W_{\delta'}g) , \big(S_\delta\big)^2 W_{\delta'} g\Big)_{x,v} $, we obtain \begin{align*} &\int_0^T \left|\Big(\Gamma(f,g), \Big(W_{\delta'}S_\delta \Big)^2g\Big)_{x,v} \right|dt \lesssim \|f\|_{L^\infty([0,T]\times \RR^3_x; L^2_v)} \| |\!|\!| W_{\delta'}g|\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)}\,. \end{align*} Since $\big(v\cdot \nabla_x g, \big(W_{\delta'}S_\delta \big)^2g\big)_{x,v} =0$, by letting $\delta \rightarrow 0$ we obtain \begin{align*} &(\lambda_0 - C_1 \|f\|_{L^\infty([0,T]\times \RR^3_x; L^2_v)}) \| |\!|\!| W_{\delta'}g|\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)} \\ &\quad \lesssim {\delta'}^{\nu} \| |\!|\!| W_{\delta'}g|\!|\!| \|^2_{L^2([0,T]\times \RR^3_x)} + \|g\|^2_{L^2([0,T]\times \RR^6_{x,v})} + \|g_0\|^2_{L^2(\RR_{x,v}^6)}\,, \end{align*} because of Lemma \ref{linear term} and \eqref{linear-comm-weight}. Finally we obtain \eqref{triple-norm-finite} by letting $\delta' \rightarrow 0$. \section{Non-negativity of solutions}\label{S6} \setcounter{equation}{0} The method of proof is almost the same as that of Proposition 5.2 in \cite{amuxy4-3}. For self-containedness, we reproduce it here.
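The induction below ultimately rests on an elementary Gronwall-type observation, recorded here for the reader's convenience: if $y \in L^\infty([0,T])$ is non-negative and satisfies

```latex
\[
y(t) \le C \int_0^t y(\tau)\, d\tau \qquad (0 \le t \le T),
\]
```

then iterating the inequality gives $y(t) \le C^k \Vert y \Vert_{L^\infty} t^k/k! \rightarrow 0$ as $k \rightarrow \infty$, so $y \equiv 0$ on $[0,T]$. It is in this form that the final estimate of this section is used.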
If $\{g^n\}$ is the sequence of approximate solutions in the proof of Theorem \ref{local-existence}, and if $f^n = \mu + \mu^{1/2}g^n$, then $\{f^n\}$ is constructed successively by the following linear Cauchy problem \begin{equation}\label{4.4.3} \left\{\begin{array}{l} \partial_t f^{n+1} + v\,\cdot\,\nabla_x f^{n+1} =Q (f^n, f^{n+1}), \\ f^{n+1}|_{t=0} = f_0 =\mu + \mu^{1/2} g_0\geq 0\, , \enskip (n=0,1,2,\cdots, \, f^0 =\mu). \end{array} \right. \end{equation} Hence the non-negativity of the solution to the original Cauchy problem \eqref{eq: Boltzmann} comes from the following induction argument: Suppose that \begin{equation}\label{4.4.1+100} f^n = \mu + \mu^{1/2} g^n \geq 0\,, \end{equation} for some $n\in\NN$. Then (\ref{4.4.1+100}) is true for $n+1$. If we put $\tilde f^n = \mu^{-1/2}f^n = \mu^{1/2} + g^n$ then $\tilde f^n$ satisfies \begin{equation}\label{4.4.3-bis} \left\{\begin{array}{l} \partial_t \tilde f^{n+1} + v\,\cdot\,\nabla_x \tilde f^{n+1} =\Gamma (\tilde f^n, \tilde f^{n+1}), \\ \tilde f^{n+1}|_{t=0} =\tilde f_0 = \mu^{1/2} + g_0\geq 0\, , \enskip (n=0,1,2,\cdots, \, \tilde f^0 =\mu^{1/2}). \end{array} \right. \end{equation} It follows from \eqref{iterate-sol} and Lemma \ref{T-estimate} that $\int_0^T \| |\!|\!|\tilde f^n |\!|\!| \|^2_{L^\infty_x} dt < \infty$, and hence, if $\tilde f^n_{\pm} = \pm \max(\pm \tilde f^n, 0)$ then we have \[ \int_0^T \| |\!|\!|\tilde f^n_+ |\!|\!| \|^2_{L^\infty_x} dt+\int_0^T \| |\!|\!|\tilde f^n_- |\!|\!| \|^2_{L^\infty_x} dt < \infty \] by means of the same argument as in the proof of Theorem 5.2 in \cite{amuxy4-3}. Take the convex function $\beta (s) = \frac 1 2 (s^- )^2= \frac 1 2 s\,(s^- ) $ with $s^-=\min\{s, 0\}$. Let $\varphi(v,x) = (1+|v|^2 +|x|^2)^{\alpha/2}$ with $\alpha >3/2$, and notice that \begin{align*} \beta_s (\tilde f^{n+1}) \varphi(v,x)^{-1}&: =\left(\frac{d}{ds}\,\,\beta \right)(\tilde f^{n+1}) \varphi(v,x)^{-1}\\ &={\tilde f_{ -}^{n+1}} \varphi(v,x)^{-1}\in L^\infty([0,T];L^2(\RR^6_{x,v})).
\end{align*} Multiply the first equation of \eqref{4.4.3-bis} by $\beta_s (\tilde f^{n+1})\varphi(v,x)^{-2}$ $ = \tilde f_{-}^{n+1}\varphi(v,x)^{-2}$ and integrate over $[0,t] \times \RR^6_{x,v}$, ($t \in (0,T]$). Then, in view of $\beta(\tilde f^{n+1}(0)) = \tilde f_{0, -}^2/2 =0$, we have \begin{align*} &\int_{\RR^6} \beta ( \tilde f^{n+1}(t)) \varphi(v,x)^{-2}dxdv \\ &\qquad =\int_0^t \int_{\RR^6} \Gamma(\tilde f^n(\tau),\, \tilde f^{n+1}(\tau) )\,\, \beta_s(\tilde f^{n+1}(\tau)) \varphi(v,x)^{-2} \,\,dxdvd\tau \\ &\qquad\qquad-\int_0^t \int_{\RR^6} { v\,\cdot\, \nabla_x \enskip ( \beta (\tilde f^{n+1}(\tau)) \varphi(v,x)^{-2}) }dxdv d\tau\\ &\qquad\qquad + \int_0^t \int_{\RR^6} {\big (\varphi(v,x)^{2}\,\,v\, \cdot\, \nabla_x \,\varphi(v,x)^{-2} \big) }\enskip \beta (\tilde f^{n+1}(\tau))\varphi(v,x)^{-2}dxdvd\tau, \end{align*} where the first term on the right hand side is well defined because $$\int_0^T \| |\!|\!|\tilde f^{n+1} |\!|\!| \|^2_{L^\infty_x} dt + \int_0^T \| |\!|\!|\tilde f^{n+1}_{-} |\!|\!| \|^2_{L^\infty_x} dt < \infty. $$ Since the second term vanishes and $|v\, \cdot\, \nabla_x\, \varphi(v,x)^{-2} | \leq C \varphi(v,x)^{-2}$, we obtain \begin{align*} &\int_{\RR^6} \beta ( \tilde f^{n+1}(t)) \varphi(v,x)^{-2}dxdv \\ &\qquad \le \int_0^t \Big(\int_{\RR^6} \Gamma(\tilde f^n(\tau),\, \tilde f^{n+1}(\tau) )\,\, \beta_s(\tilde f^{n+1}(\tau)) \varphi(v,x)^{-2} \,\,dxdv\Big)d\tau \\ &\qquad\qquad \qquad \qquad \qquad +C \int_0^t \int_{\RR^6} \beta ( \tilde f^{n+1}(\tau)) \varphi(v,x)^{-2}dxdvd\tau\,. \end{align*} The integrand $(\cdot)$ of the first term on the right hand side is equal to \begin{align*} &\int_{\RR^6} \Gamma(\tilde f^n, \tilde f_{ -}^{n+1} ) \tilde f_{ -}^{n+1} \varphi(v,x)^{-2} dxdv\\ &\qquad\qquad\qquad + \int B \, \mu_{*}^{1/2}( {\tilde f_*}^n)' ( \tilde f_{+}^{n+1})' \tilde f_{-}^{n+1} \varphi(v,x)^{-2}dvdv_* d\sigma dx \\ &=A_1 + A_2\,. \end{align*} From the induction hypothesis $\tilde f^n \geq 0$, the integrand of $A_2$ is a product of the non-negative factors $B$, $\mu_{*}^{1/2}$, $({\tilde f_*}^n)'$, $(\tilde f_{+}^{n+1})'$, $\varphi(v,x)^{-2}$ and the non-positive factor $\tilde f_{-}^{n+1}$, so the second term $A_2$ is non-positive.
On the other hand, we have \begin{align*} &A_1 = \int( \Gamma (\tilde f^n, \varphi(v,x)^{-1}\tilde f_{-}^{n+1}), \varphi(v,x)^{-1} \tilde f_{-}^{n+1})_{L^2(\RR_v^3)} dx + R \\ &= - \int (\cL_1(\varphi(v,x)^{-1}\tilde f_{-}^{n+1}), \varphi(v,x)^{-1}\tilde f_{-}^{n+1})_{L^2(\RR_v^3)} dx\\ & + \int( \Gamma ( g^n, \varphi(v,x)^{-1}\tilde f_{-}^{n+1}), \varphi(v,x)^{-1} \tilde f_{-}^{n+1})_{L^2(\RR_v^3)} dx +R , \end{align*} where $R$ is a remainder term. It follows from Corollary \ref{cor-weight-commutator} with $f = \tilde f^n$ that for any $\kappa >0$ \begin{align*} \int_0^t |R| d\tau \leq& \kappa \int_0^t \int_{\RR_x^3} |\!|\!| \varphi(v,x)^{-1}\tilde f_{-}^{n+1}(\tau) |\!|\!|^2 dx d\tau \\ &+ C_\kappa \int_0^t \int_{\RR^6} \beta ( \tilde f^{n+1}(\tau)) \varphi(v,x)^{-2}dxdvd\tau\,. \end{align*} By means of Lemma \ref{linear term} and Corollary \ref{AMUXYtrilinear} with \eqref{iterate-sol}, we obtain \begin{align*} \int_0^t A_1 d\tau &\le -(\lambda_0 - C(\kappa + \varepsilon_0)) \int_0^t \int_{\RR_x^3} |\!|\!| \varphi(v,x)^{-1}\tilde f_{-}^{n+1}(\tau) |\!|\!|^2 dx d\tau \\ &+ C_\kappa \int_0^t \int_{\RR^6} \beta ( \tilde f^{n+1}(\tau)) \varphi(v,x)^{-2}dxdvd\tau\,. \end{align*} Finally, taking $\kappa>0$ sufficiently small, we get \begin{align*} \int_{\RR^6} \beta ( \tilde f^{n+1}(t)) \varphi(v,x)^{-2}dxdv \lesssim \int_0^t \int_{\RR^6} \beta ( \tilde f^{n+1}(\tau)) \varphi(v,x)^{-2}dxdvd\tau, \end{align*} which implies, by Gronwall's inequality, that $\beta(\tilde f^{n+1}(t)) \equiv 0$, that is, $\tilde f^{n+1}(t,x,v) \ge 0$ for $(t,x,v) \in [0,T]\times \RR^6$. \section{Appendix}\label{AP} \setcounter{equation}{0} \subsection{Fundamental inequalities and Besov embedding theorems}\label{ap-1} First of all, the following lemma plays the central role in this paper. \begin{lem}\label{MS-general} Let $\nu \in (0,2)$ and $\gamma > \max\{-3,-3/2-\nu\}$.
For any $\alpha \ge 0$ and any $\beta \in \RR$ there exists a $C = C_{\alpha, \beta} >0$ such that \begin{align}\label{upper-fundamental123}\notag &\left\vert ( \Gamma(f,g),h )_{L^2_v} \right\vert \le C \Big(\Vert \mu^{1/10}f \Vert_{L^2_v} |\!|\!| g |\!|\!| |\!|\!| h |\!|\!| \\ & \quad \qquad \qquad + \|f\|_{L^2_{-\alpha} } \|g\|_{L^2_{(\gamma+\nu)/2+\alpha - \beta}} \|h \|_{L^2_{(\gamma+\nu)/2+\beta}}\Big). \end{align} \end{lem} This lemma will be proved in the next subsection together with another upper bound estimate. If we put $\alpha = \beta =0$ then we obtain \cite[Theorem 1.2]{AMUXY3}, namely, \begin{cor}\label{AMUXYtrilinear} Let $\nu \in (0,2)$ and $\gamma > \max\{-3,-3/2-\nu\}$. Then we have \begin{align}\label{upper-fundamental1} \left\vert ( \Gamma(f,g),h )_{L^2_v} \right\vert \lesssim \Vert f \Vert_{L^2_v} |\!|\!| g |\!|\!| |\!|\!| h |\!|\!|. \end{align} \end{cor} In the case $\gamma+\nu \le 0$, we employ the following, by setting $\alpha = -(\gamma+ \nu)/2$ and $\beta =(\gamma+ \nu)/2$ or $0$. \begin{cor}\label{another-type} When $\gamma + \nu \le 0$ we have \begin{align}\label{cut-part-123-bis}\notag \Big|\Big(\Gamma ( f, g)\, & , \,h \Big)_{L^2_v }\Big| \lesssim \|\mu^{1/10}f\|_{L^2} |\!|\!|g|\!|\!| |\!|\!|h |\!|\!|\\ & + \|f\|_{L^2_{(\gamma+ \nu)/2} }\min \big\{ \|g\|_{L^2_{(\gamma+ \nu)/2}} \|h \|_{L^2}, \, \|g\|_{L^2} \|h \|_{L^2_{(\gamma+ \nu)/2}}\big \}\,. \end{align} \end{cor} We also need a lower estimate of $(\cL_1 f,f)$ and an upper estimate of $ ({\cL}_2 f,g)$. \begin{lem}\label{linear term} Let $\nu \in (0,2)$ and $\gamma > -3$. Then there exists a constant $\lambda_0>0$ such that \begin{align*} &(\cL_1 f,f)_{L^2_v} \ge \frac{1}{2} (\cL f,f)_{L^2_v } \ge \lambda_0 |\!|\!| (\mathbf{I}-\mathbf{P})f |\!|\!|^2,\\ & \quad \left\vert (\cL_2f, g)_{L^2_v} \right\vert \lesssim \Vert \mu^{10^{-3}}f\Vert_{L^2_v} \Vert \mu^{10^{-3}}g\Vert_{L^2_v} \,.
\end{align*} \end{lem} The first and second inequalities of the above lemma are from \cite[Proposition 2.1]{AMUXY2} and \cite[Lemma 2.15]{AMUXY2}, respectively. We catalogue a couple of lemmas which are frequently used in this paper. \begin{lem}\label{bounded operators} Let $ 1\le p \le \infty$ and $f\in L^p_x$. Then there exists a constant $C>0$, independent of $p$, $q$ and $f$, such that \begin{align*} \Vert \Delta_q f \Vert_{L^p_x} \le C \Vert f \Vert_{L^p_x}, \quad \Vert S_q f \Vert_{L^p_x} \le C\Vert f \Vert_{L^p_x}. \end{align*} In short, $\Delta_q$ and $S_q$ are bounded operators on $L^p_x$. \end{lem} \begin{lem}\label{embedding} Let $1\le p, r \le \infty$. Then \begin{enumerate} \item $B^{s_1}_{pr} \hookrightarrow B^{s_2}_{pr}$ when $s_2 \le s_1$. This inclusion does not hold for the homogeneous Besov space. \item $B^{3/p}_{p,1} \hookrightarrow L^\infty$ and $\dot{B}^{3/p}_{p,1} \hookrightarrow L^\infty$ when $1\le p < \infty$. \end{enumerate} \end{lem} \begin{lem}\label{Besov embedding} Let $ 1 \le p,q,r \le \infty$ and $s>0$. Then we have \begin{align*} \Vert \nabla_x \cdot \Vert_{\tilde{L}^q_T(\dot{B}^s_{pr})} \sim \Vert \cdot \Vert_{\tilde{L}^q_T(\dot{B}^{s+1}_{pr})}, \quad \Vert \cdot \Vert_{\tilde{L}^q_T(\dot{B}^s_{pr})} \lesssim \Vert \cdot \Vert_{\tilde{L}^q_T(B^s_{pr})}. \end{align*} \end{lem} \begin{lem}\label{Besov and Chemin-Lerner} Let $ 1 \le p,\alpha, \beta, r \le \infty$ and $s>0$. If $r\le \min\{\alpha, \beta\} $ then \begin{align*} \Vert f \Vert_{L^{\alpha}_TL^{\beta}_v (B^s_{pr})} \le \Vert f \Vert_{\tilde{L}^{\alpha}_T \tilde{L}^{\beta}_v (B^s_{pr})} \quad \mbox{and} \quad \Vert f \Vert_{L^{\alpha}_TL^{\beta}_v (\dot{B}^s_{pr})} \le \Vert f \Vert_{\tilde{L}^{\alpha}_T \tilde{L}^{\beta}_v (\dot{B}^s_{pr})}. \end{align*} \end{lem} For the proof of Lemma \ref{bounded operators}, Lemma \ref{embedding} -- Lemma \ref{Besov embedding} and Lemma \ref{Besov and Chemin-Lerner}, readers may refer to \cite{BCD}, \cite{XK} and \cite{DLX}, respectively.
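We note in passing that the second item of Lemma \ref{embedding} follows by summing the Bernstein inequality $\Vert \Delta_q f \Vert_{L^\infty_x} \lesssim 2^{3q/p} \Vert \Delta_q f \Vert_{L^p_x}$ over the dyadic blocks:
\begin{align*}
\Vert f \Vert_{L^\infty_x} \le \sum_{q} \Vert \Delta_q f \Vert_{L^\infty_x} \lesssim \sum_{q} 2^{\frac{3q}{p}} \Vert \Delta_q f \Vert_{L^p_x} = \Vert f \Vert_{B^{3/p}_{p,1}}\,,
\end{align*}
and the homogeneous embedding is obtained in the same way with $\dot{\Delta}_q$ in place of $\Delta_q$.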
We prove a useful lemma to deal with terms involving $\Vert |\!|\!| f |\!|\!| \Vert_{L^\infty_x}$. \begin{lem}\label{T-estimate} For each $T>0$, we have \begin{align*} \left( \int^T_0 \Vert |\!|\!| f |\!|\!| \Vert_{L^\infty_x}^2 dt \right)^{1/2} \lesssim \Vert f \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}. \end{align*} \end{lem} \begin{proof} By the definition of $|\!|\!| f |\!|\!|$ and using both the generalized Minkowski inequality and the Besov embedding $ \dot{B}^{3/2}_x \hookrightarrow L^\infty_x$, we calculate that \begin{align*} &\left( \int^T_0 \Vert |\!|\!| f |\!|\!| \Vert_{L^\infty_x}^2 dt \right)^{1/2} \\ &\le \left[ \int^T_0 \int B \left\{ \mu_* \sup_x (f'-f)^2 + (\mu^{1/2} - \mu'^{1/2}) \sup_xf^2_* \right\} dvdv_* d\sigma dt \right]^{1/2} \\ & \lesssim \left[ \int^T_0 \int B \left\{ \mu_* \left( \sum_l 2^{\frac{3}{2}l}\left( \int \vert \dot{\Delta}_l (f'-f)\vert^2 dx\right)^{1/2} \right)^2 \right.\right.\\ & \left.\left. \qquad+ (\mu^{1/2} - \mu'^{1/2}) \left( \sum_l 2^{\frac{3}{2}l}\left( \int \vert \dot{\Delta}_l f_*\vert^2 dx\right)^{1/2} \right)^2 \right\} dvdv_* d\sigma dt \right]^{1/2} \\ &\le \sum_l \left[ \int^T_0 \int B \left\{ \mu_* 2^{3l}\vert \dot{\Delta}_l (f'-f)\vert^2 \right.\right. \\ &\left.\left.\qquad \qquad+ (\mu^{1/2} - \mu'^{1/2}) 2^{3l} \vert \dot{\Delta}_l f_*\vert^2 \right\} dxdvdv_* d\sigma dt \right]^{1/2} = \Vert f \Vert_{\dot{\mathcal{T}}^{3/2}_{T,2,2}}. \end{align*} \end{proof} \subsection{Proof of Lemma \ref{MS-general}} \label{ap-2} In this subsection we also give another upper bound for $\Gamma$, which is a supplementary variant of \cite[Lemma 4.7]{amuxy4-3} concerning the range of the index of the Sobolev space. \begin{prop}\label{upper-recent} Let $0<\nu<2$, $\gamma > \max\{-3, -\nu-3/2\}$. For any $\ell \in \RR$ and $m \in [-\nu/2,\nu/2]$ we have \begin{align}\label{different-g-h} \Big | \Big(\Gamma( f, \, g),\, h \Big ) _{L^2} \Big| \lesssim \|f\|_{L^2} \|g\|_{H^{\nu/2+m}_{(\ell +\gamma+\nu)^+}}\|h\|_{H^{\nu/2-m}_{-\ell}}\,.
\end{align} \end{prop} We decompose the kinetic factor of the cross-section into two parts, \begin{align*} \Phi(|z|)=|z|^\gamma = |z|^\gamma \varphi_0(|z|)+ |z|^\gamma \big(1-\varphi_0(|z|)\big) = \Phi_c(|z|) + \Phi_{\bar c}(|z|), \end{align*} where $\varphi_0\in C^\infty_0(\RR)$, Supp $\varphi_0\subset [-1, 1]$; $\varphi_0(t)=1$ for $|t|\leq 1/2$, and put \[ B_{c} = \Phi_{c} (|v-v_*|) b\Big(\frac{v-v_*}{|v-v_*|} \cdot \sigma\Big)\,,\quad B_{\bar c} = \Phi_{\bar c} (|v-v_*|) b\Big(\frac{v-v_*}{|v-v_*|} \cdot \sigma\Big)\,. \] Accordingly, we write \[ Q(f,g) = Q_c(f,g) + Q_{\bar c}(f,g)\,, \] and \[ \Gamma(f,g) = \Gamma_c(f,g) + \Gamma_{\bar c}(f,g)\,. \] \begin{prop} \label{upper-c} Let $0< \nu<2, \gamma > \max\{-3, -\nu-3/2\}$. If $m \in [-\nu/2,\nu/2]$ then we have \[ | (Q_c (f, g),h )| \lesssim \|f\|_{L^{2} }\Vert g\Vert_{H^{\nu/2+ m}} \|h\|_{H^{\nu/2 -m}}\,. \] \end{prop} \begin{rema} For any $q \in [1,2)$, let $\gamma > \max\{-3, -\nu -3 + 3/q\}$ and $-\nu/2 \le m \le \min\{\nu/2, 3/2 -\nu/2\}$. Then we have \[ | (Q_c (f, g),h )| \lesssim \|f\|_{L^q }\Vert g\Vert_{H^{\nu/2+ m}} \|h\|_{H^{\nu/2 -m}}\,. \] \end{rema} This proposition is an improvement of \cite[Proposition 2.1]{amuxy4-3} concerning the lower bound of $m$, where it was assumed that $m \ge \nu/2 -1$. For self-containedness and the convenience of the reader, we repeat the detailed proof, including the general case pointed out in the above remark. For the proof of Proposition \ref{upper-c}, we shall follow some of the arguments from \cite{amuxy4-3}.
First of all, by using the formula from the Appendix of \cite{advw}, and as in \cite{amuxy4-3}, one has \begin{align*} ( Q_c(f, g), h ) =& \int b \Big({\frac\xi{ | \xi |}} \cdot \sigma \Big) [ \hat\Phi_c (\xi_* - \xi^- ) - \hat \Phi_c (\xi_* ) ] \hat f (\xi_* ) \hat g(\xi - \xi_* ) \overline{{\hat h} (\xi )} d\xi d\xi_*d\sigma \\ = & \int_{ | \xi^- | \leq {\frac 1 2} \la \xi_*\ra } \cdots\,\, d\xi d\xi_*d\sigma + \int_{ | \xi^- | \geq {\frac 1 2} \la \xi_*\ra } \cdots\,\, d\xi d\xi_*d\sigma \,\\ =& A_1(f,g,h) + A_2(f,g,h) \,\,, \end{align*} where $\hat f (\xi )$ is the Fourier transform of $f$ with respect to $v\in\RR^3$ and $ \xi^-=\frac{1}{2}(\xi-|\xi|\sigma)$. Then, we write $A_2(f,g,h)$ as \begin{align*} A_2 &= \int b \Big({\frac \xi{ | \xi |}} \cdot \sigma \Big) {\bf 1}_{ | \xi^- | \ge {\frac 1 2}\la \xi_*\ra } \hat\Phi_c (\xi_* - \xi^- ) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \overline{{\hat h} (\xi )} d\xi d\xi_*d\sigma \\ &- \int b \Big({\frac\xi{ | \xi |}} \cdot \sigma \Big){\bf 1}_{ | \xi^- | \ge {\frac 1 2}\la \xi_*\ra } \hat \Phi_c (\xi_* ) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \overline{{\hat h} (\xi )} d\xi d\xi_*d\sigma \\ &= A_{2,1}(f,g,h) - A_{2,2}(f,g,h)\,. \end{align*} While for $A_1$, we use the Taylor expansion of $\hat \Phi_c$ at order $2$ to have $$ A_1 = A_{1,1} (f,g,h) +A_{1,2} (f,g,h) $$ where $$ A_{1,1} = \int b\,\, \xi^-\cdot (\nabla\hat\Phi_c)( \xi_*) {\bf 1}_{ | \xi^- | \leq {\frac 1 2} \la \xi_*\ra } \hat f (\xi_* ) \hat g(\xi - \xi_* ) \overline{\hat{h}(\xi)} d\xi d\xi_*d\sigma, $$ and $A_{1,2} (f,g,h)$ is the remaining term corresponding to the second order term in the Taylor expansion of $\hat\Phi_c$. The $A_{i,j}$ with $i,j=1,2$ are estimated by the following lemmas. \begin{lem}\label{A-1} For any $q \in [1,2]$, let $\gamma > \max\{-3, -\nu -3 + 3/q\}$. Furthermore, assume that $m \le 3/2 -\nu/2$ if $q <2$. Then we have \[ | A_{1,1} |+| A_{1,2}| \lesssim \|f\|_{L^q} \Vert g\Vert_{H^{\nu/2+ m}} \|h\|_{H^{\nu/2 -m}}\,.
\] \end{lem} \begin{proof} Considering firstly $A_{1,1}$, by writing \[ \xi^- = \frac{|\xi|}{2}\left(\Big(\frac{\xi}{|\xi|}\cdot \sigma\Big)\frac{\xi }{|\xi|}-\sigma\right) + \left(1- \Big(\frac{\xi}{|\xi|}\cdot \sigma\Big)\right)\frac{\xi}{2}, \] we see that the integral corresponding to the first term on the right hand side vanishes because of the symmetry on $\SS^2$. Hence, we have \[ A_{1,1}= \int_{\RR^6} K(\xi, \xi_*) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \overline{\hat h(\xi )} d\xi d\xi_* \,, \] where \[ K(\xi,\xi_*) = \int_{\SS^2} b \Big({\frac \xi{ | \xi |}} \cdot \sigma \Big) \left(1- \Big(\frac{\xi}{|\xi|}\cdot \sigma\Big)\right)\frac{\xi}{2}\cdot (\nabla\hat\Phi_c)( \xi_*) {\bf 1}_{ | \xi^- | \leq {\frac 1 2} \la \xi_*\ra } d \sigma \,. \] Note that $| \nabla \hat \Phi_c (\xi_*) | \lesssim {\frac 1{\la \xi_*\ra^{3+\gamma +1}}}$, from the Appendix of \cite{AMUXY2}. If $\sqrt 2 |\xi| \leq \la \xi_* \ra$, then $|\xi^-| \leq \la \xi_* \ra/2$ and this implies the fact that $0 \leq \theta \leq \pi/2$, and we have \begin{align*} |K(\xi,\xi_*)| &\lesssim \int_0^{\pi/2} \theta^{1-\nu} d \theta\frac{ \la \xi\ra}{\la \xi_*\ra^{3+\gamma +1}} \lesssim \frac{1 }{\la \xi_*\ra^{3+\gamma}}\left( \frac{\la \xi \ra}{\la \xi_*\ra}\right) \,. \end{align*} On the other hand, if $\sqrt 2 |\xi| \geq \la \xi_* \ra$, then \begin{align*}|K(\xi,\xi_*)| &\lesssim \int_0^{\pi\la \xi_*\ra /(2|\xi|)} \theta^{1-\nu} d \theta\frac{ \la \xi\ra}{\la \xi_*\ra^{3+\gamma +1}} \lesssim \frac{1 }{\la \xi_*\ra^{3+\gamma}}\left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^{\nu-1}\,. \end{align*} Hence we obtain \begin{align}\label{later-use1} &|K(\xi,\xi_*)| \lesssim \frac{1}{\la \xi_*\ra^{3+\gamma}}\left\{ \frac{\la \xi \ra}{\la \xi_*\ra}{\bf 1}_{\sqrt 2 |\xi| \leq \la \xi_* \ra}\right. \notag \\ &\qquad \left. +{\bf 1}_{ \sqrt 2 |\xi| \geq \la \xi_* \ra \geq |\xi|/2} + \left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^{\nu} {\bf 1}_{\la \xi_* \ra \leq |\xi|/2}\right\}\,. 
\end{align} Notice that \begin{equation}\label{equivalence-relation} \left \{ \begin{array}{ll} \la \xi \ra \lesssim \la \xi_* \ra \sim \la \xi-\xi_*\ra &\mbox{on supp ${\bf 1}_{\la \xi_* \ra\geq \sqrt 2 |\xi|}$}\\ \la \xi \ra \sim \la \xi-\xi_*\ra &\mbox{on supp ${\bf 1}_{\la \xi_*\ra \leq |\xi |/2 } $} \\ \la \xi \ra \sim \la \xi_* \ra \gtrsim \la \xi-\xi_*\ra & \mbox{on supp ${\bf 1}_{\sqrt 2 |\xi| \geq \la \xi_*\ra \geq | \xi|/2 }$\,.} \end{array} \right. \end{equation} Take an $\varepsilon >0$ such that $3 +\gamma +\nu > 3/q + \varepsilon$. Then we have \begin{align*} \frac{1}{\la \xi_*\ra^{3+\gamma}} \frac{\la \xi \ra}{\la \xi_*\ra}{\bf 1}_{\sqrt 2 |\xi| \leq \la \xi_* \ra} \lesssim \frac{\la \xi -\xi_*\ra^{\nu/2+m} \la \xi \ra^{\nu/2-m} \la \xi \ra^{-3/2-\varepsilon}}{ \la \xi_* \ra^{3/2+\gamma +\nu -\varepsilon}} \end{align*} in view of $\nu/2-m -3/2- \varepsilon < 1$. Replacing the factor $(\la\xi \ra/\la \xi_*\ra)^{\nu}{\bf 1}_{\la \xi_* \ra \leq |\xi|/2}$ on the right hand side of \eqref{later-use1} by \[ \frac{\la \xi \ra^{\nu/2-m} \la \xi-\xi_*\ra^{\nu/2+m}}{\la \xi_*\ra^{\nu}}\,, \] we obtain \begin{align}\label{kernel-estimate} |K(\xi,\xi_*)| \lesssim & \frac{\la \xi -\xi_*\ra^{\nu/2+m} \la \xi \ra^{\nu/2-m} \la \xi \ra^{-3/2-\varepsilon}}{ \la \xi_* \ra^{3/2+\gamma +\nu -\varepsilon}}+ \frac{\la \xi\ra^{\nu/2-m} \la \xi-\xi_*\ra^{\nu/2+m} }{\la \xi_*\ra^{3+\gamma +\nu}} \notag \\ &+ \frac{{\bf 1}_{ \la \xi -\xi *\ra \lesssim \la \xi_{*} \ra}}{\la \xi_*\ra^{3+\gamma +\nu/2-m} \la \xi-\xi_*\ra^{\nu/2+m} } \la \xi\ra^{\nu/2-m} \la \xi-\xi_*\ra^{\nu/2+m} \,. 
\end{align} Putting $\tilde {\hat g}(\xi)= \la \xi \ra^{\nu/2+m} \hat g(\xi), \tilde {\hat h}(\xi)= \la \xi \ra^{\nu/2 -m} \hat h(\xi)$, we have \begin{align*} |A_{1,1}|^2 &\lesssim \left(\int_{\RR^3} \frac{ |\tilde {\hat h}(\xi)|}{\la \xi\ra^{3/2 +\varepsilon}} \left(\int_{\RR^3} \frac{|\hat f(\xi_*)|}{\la \xi_* \ra^{3/2+\gamma +\nu -\varepsilon}}|\tilde {\hat g}(\xi-\xi_*)|d\xi_* \right)d\xi\right)^2 \\ &+\int_{\RR^6} \frac{|\hat f(\xi_*)| }{\la \xi_*\ra^{3+\gamma +\nu}}|\tilde {\hat g}(\xi-\xi_*)|^2 d\xi d\xi_* \int_{\RR^6} \frac{|\hat f(\xi_*)| }{\la \xi_*\ra^{3+\gamma +\nu}}|\tilde {\hat h}(\xi)|^2 d\xi d\xi_*\\ &+ \int_{\RR^6} \frac{|\hat f(\xi_*)|^2 }{\la \xi_*\ra^{6+2\gamma +\nu-2m}} \frac{{\bf 1}_{ \la \xi -\xi *\ra \lesssim \la \xi_{*} \ra}}{ \la \xi-\xi_*\ra^{\nu+2m} }d\xi d\xi_* \int_{\RR^6} |\tilde {\hat g}(\xi-\xi_*)|^2 |\tilde {\hat h}(\xi)|^2 d\xi d\xi_*\\ &= \cK^2 +\cA \cB + \cD \cE\,, \end{align*} by the Cauchy-Schwarz inequality. It follows from H\"older inequality that \begin{align*} \cK &\lesssim \|g\|_{H^{\nu/2+m}}\|h\|_{H^{\nu/2-m}}\left( \int_{\RR^3} \frac{|\hat f(\xi_*)|^2}{\la \xi_* \ra^{3+2(\gamma +\nu -\varepsilon)}}d\xi_*\right)^{1/2}\\ &\lesssim \|f\|_{L^q} \|g\|_{H^{\nu/2+m}}\|h\|_{H^{\nu/2-m}}\,, \end{align*} where we have used the fact that \begin{align}\label{p-estimate} &\int_{\RR^3} \frac{|\hat f(\xi_*)|^2}{\la \xi_* \ra^{\ell}}d\xi_* \lesssim \|f\|^2_{L^q} \enskip \nonumber\\ &\qquad \mbox{if} \enskip \ell > -3 + 6/q \enskip \mbox{for $q \in [1,2)$ and $\ell \ge 0$ for $q=2$} \end{align} by means of $\|\hat f\|_{L^{q'}} \le \|f\|_{L^q}$ with $1/q +1/q'=1$ for $q \in [1,2]$ . Since it follows from $3+\gamma +\nu > 3/q$ that $\la \xi_*\ra^{-(3+\gamma +\nu)} \in L^q$, the H\"older inequality again shows \[ \cA \lesssim \int_{\RR^3}\frac{|\hat f(\xi_*)|}{\la \xi_*\ra^{3+\gamma +\nu}} d\xi_* \|g\|^2_{H^{\nu/2+m}} \lesssim \|f\|_{L^q}\|g\|^2_{H^{\nu/2+m}} \,,\enskip \cB \lesssim \|f\|_{L^q} \|h\|^2_{H^{\nu/2-m}}\,. 
\] Note that \[ \int\frac{{\bf 1}_{ \la \xi -\xi *\ra \lesssim \la \xi_{*} \ra}}{ \la \xi-\xi_*\ra^{\nu+2m} } d\xi \enskip \lesssim \left \{ \begin{array}{lcl} \displaystyle \frac{1}{\la \xi_*\ra^{-3+\nu+2m} }& \mbox{if} & \nu/2+m <3/2\\ \log \la \xi_*\ra & \mbox{if} & \nu/2+m \ge 3/2\,. \end{array} \right. \] Since $3+2(\gamma +\nu) >0$ when $q =2$, together with $6+2 \gamma+ \nu-2m >0$, we get $\cD \le \|f \|_{L^2}^2 $, which concludes the desired bound for $A_{1,1}$ when $q=2$. In the case where $q \in [1,2)$, it follows from \eqref{p-estimate} that $\cD \lesssim \|f\|^2_{L^q}$ because of $\nu/2+m \le 3/2$. Now we consider $A_{1,2} (f, g, h)$, which comes from the second order term of the Taylor expansion. Note that $$ A_{1,2} = \int b \Big({\frac \xi{ | \xi |}} \cdot \sigma \Big)\int^1_0 d\tau (\nabla^2\hat \Phi_c) (\xi_* -\tau\xi^- ) (\xi^-)^2 \hat f (\xi_* ) \hat g(\xi - \xi_* ) \bar{\hat h} (\xi ) d\sigma d\xi d\xi_*\, . $$ Again from the Appendix of \cite{AMUXY2}, we have $$ | (\nabla^2\hat \Phi_c) (\xi_* -\tau\xi^- ) | \lesssim {\frac 1{\la \xi_* -\tau \xi^-\ra^{3+\gamma +2}}} \lesssim {\frac 1{\la \xi_*\ra^{3+\gamma +2}}}, $$ because $|\xi^-| \leq \la \xi_*\ra/2$. 
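For completeness, the bound on $\int {\bf 1}_{ \la \xi -\xi_*\ra \lesssim \la \xi_{*} \ra}\, \la \xi-\xi_*\ra^{-(\nu+2m)} d\xi$ noted above follows, after the change of variables $w = \xi -\xi_*$, from the elementary radial computation
\begin{align*}
\int_{|w| \lesssim R} \frac{dw}{\la w \ra^{\nu+2m}} \sim \int_0^{R} \frac{r^2 \, dr}{(1+r)^{\nu+2m}} \lesssim \left \{ \begin{array}{lcl} R^{3-\nu-2m} & \mbox{if} & \nu+2m <3\\ \log R & \mbox{if} & \nu+2m \ge 3\,, \end{array} \right.
\end{align*}
valid for $R \ge 2$, applied with $R \sim \la \xi_* \ra$.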
Similarly to $A_{1,1}$, we can obtain \[ |A_{1,2}| \lesssim \int_{\RR^6} \tilde K(\xi, \xi_*) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \bar{\hat h} (\xi ) d\xi d\xi_* \,, \] where $\tilde K(\xi,\xi_*)$ has the following upper bound \begin{align}\label{later-use2} \notag \tilde K(\xi,\xi_*) &\lesssim \int_0^{\min(\pi/2, \,\, \pi\la \xi_*\ra /(2|\xi|))} \theta^{1-\nu} d \theta \frac{ \la \xi\ra^2}{\la \xi_*\ra^{3+\gamma +2}}\\ &\lesssim \frac{1 }{\la \xi_*\ra^{3+\gamma}}\left\{ \left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^2{\bf 1}_{\sqrt 2 |\xi| \leq \la \xi_* \ra} +{\bf 1}_{ \sqrt 2 |\xi| \geq \la \xi_* \ra \geq |\xi|/2}\right.\notag\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.+\left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^{\nu} {\bf 1}_{\la \xi_* \ra \leq |\xi|/2}\right\}\,, \end{align} from which we obtain the same inequality as \eqref{kernel-estimate} for $\tilde K(\xi,\xi_*)$. Hence we obtain the desired bound for $A_{1,2}$, which completes the proof of the lemma. \end{proof} \begin{lem}\label{A-2} For any $q \in [1,2]$, let $\gamma > \max\{-3, -\nu -3 + 3/q\}$. Furthermore, assume that $m \le 3/2 -\nu/2$ if $q <2$. Then \[ | A_{2,1} |+| A_{2,2}| \lesssim \|f\|_{L^q} \Vert g\Vert_{H^{\nu/2+ m}} \|h\|_{H^{\nu/2 -m}}\,. \] \end{lem} \begin{proof} In view of the definition of $A_{2,2}$, note that $|\xi| \sin(\theta/2) =|\xi^-| \geq \la \xi_*\ra/2$ and $\theta \in [0,\pi/2]$ imply $\sqrt 2 |\xi| \geq \la \xi_*\ra$.
We can then directly compute the spherical integral appearing inside $A_{2,2}$ together with $\hat \Phi_c$ as follows: \begin{align}\label{A-2-2} &\notag \left|\int b \Big({\frac\xi{ | \xi |}} \cdot \sigma \Big)\hat \Phi_c(\xi_*) {\bf 1}_{ | \xi^- | \ge {\frac 1 2}\la \xi_*\ra } d\sigma \right| \lesssim {\frac 1{\la \xi_* \ra^{3+\gamma }}} \frac{\la \xi\ra^{\nu} }{\la \xi_*\ra^{\nu}}{\bf 1}_{\sqrt 2 |\xi| \geq \la \xi_* \ra} \\ &\quad \lesssim \frac{\la \xi\ra^{\nu/2-m} \la \xi-\xi_*\ra^{\nu/2+m} }{\la \xi_*\ra^{3+\gamma +\nu}}\notag \\ &\quad + \frac{{\bf 1}_{ \la \xi -\xi *\ra \lesssim \la \xi_{*} \ra}}{\la \xi_*\ra^{3+\gamma +\nu/2-m} \la \xi-\xi_*\ra^{\nu/2+m} } \la \xi\ra^{\nu/2-m} \la \xi-\xi_*\ra^{\nu/2+m}\,, \end{align} which yields the desired estimate for $A_{2, 2}$. We now turn to $$ A_{2,1}= \int b\,\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_*\ra }\hat \Phi_c (\xi_* - \xi^-) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \bar{\hat h} (\xi ) d\sigma d\xi d\xi_* . $$ First, note that we can work on the set $| \xi_* \,\cdot\,\xi^-| \ge {\frac 1 2} | \xi^-|^2$. In fact, on the complement of this set, we have $| \xi_* \,\cdot\,\xi^-| \leq {\frac 1 2} | \xi^-|^2$ so that $|\xi_* -\xi^-| \gtrsim | \xi_*|$, and in this case, we can proceed in the same way as for $A_{2,2}$. Therefore, it suffices to estimate \begin{align*} A_{2,1,p}&= \int b\,\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_*\ra }{\bf 1}_{| \xi_* \,\cdot\,\xi^-| \ge {\frac 1 2} | \xi^-|^2}\hat \Phi_c (\xi_* - \xi^-) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \overline{{\hat h} (\xi )} d\sigma d\xi d\xi_* \,. \end{align*} By \[ {\bf 1}= {\bf 1}_{\la \xi_* \ra \geq |\xi|/2} {\bf 1}_{\la\xi-\xi_* \ra \leq \la \xi_* - \xi^- \ra} + {\bf 1}_{\la \xi_* \ra \geq |\xi|/2} {\bf 1}_{\la\xi-\xi_* \ra > \la \xi_* - \xi^-\ra} + {\bf 1}_{\la \xi_* \ra < |\xi|/2} \] we decompose \begin{align*} A_{2,1,p} = A_{2,1,p}^{(1)} + A_{2,1,p}^{(2)} +A_{2,1,p}^{(3)} \,.
\end{align*} On the sets for the above integrals, we have $\la \xi_* -\xi^- \ra \lesssim \, \la \xi_* \ra$, because $| \xi^- | \lesssim | \xi_*|$, which follows from $| \xi^-|^2 \le 2 | \xi_* \cdot\xi ^-| \lesssim |\xi^-|\, | \xi_*|$. Furthermore, on the sets for $A_{2,1,p}^{(1)}$ and $A_{2,1,p}^{(2)}$ we have $\la \xi \ra \sim \la \xi_* \ra$, so that $\sup \Big(b\,\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_*\ra } {\bf 1}_{\la \xi_* \ra \geq |\xi|/2}\Big) \lesssim {\bf 1}_{|\xi^- |\leq |\xi|/\sqrt 2}$ and $\la \xi_* -\xi^- \ra \lesssim \, \la \xi \ra$. Hence we have, in view of $\nu/2-m \geq 0$, \begin{align*} |A_{2,1,p}^{(1)} | ^2 \lesssim& \int \frac{ |\hat \Phi_c (\xi_* - \xi^-) |^2 |\hat f (\xi_* )|^2 } {\la \xi_* -\xi^- \ra^{\nu-2m}}\frac{ {\bf 1}_{\la\xi-\xi_* \ra \leq \la \xi_* - \xi^- \ra}}{\la\xi-\xi_* \ra^{\nu+2m}}d\xi d\xi_* d \sigma\\ & \times \int |\la\xi-\xi_* \ra^{\nu/2+m}\hat g(\xi - \xi_* )|^2 |\la\xi \ra^{\nu/2-m}{\hat h} (\xi ) |^2 d\sigma d\xi d\xi_*\,. \end{align*} Note that $3+2(\gamma +\nu) >0$ when $q =2$, and that $6+2 \gamma+ \nu-2m >0$. Then, with $u = \xi_* -\xi^-$ we have \begin{align*} |A_{2,1,p}^{(1)} | ^2 \lesssim& \int |\hat f(\xi_*)|^2 \left\{ \sup_{u} {\la u \ra^{-( 6 +2\gamma+ \nu-2m)}} \int \frac{ {\bf 1}_{\la \xi^+ -u \ra \leq \la u \ra}}{\la \xi^+ -u \ra^{\nu+2m}}d\xi^+\right\} d\xi_*\\ &\qquad\qquad\qquad\times \|g\|^2_{H^{\nu/2+m}} \|h\|_{H^{\nu/2-m}}^2\\ \lesssim & \|f\|^2_{L^2} \|g\|^2_{H^{\nu/2+m}} \|h\|_{H^{\nu/2-m}}^2 \,, \end{align*} because $d\xi \sim d \xi^+$ on the support of ${\bf 1}_{|\xi^- |\leq |\xi|/\sqrt 2}$\,. In the case where $q <2$, we use the condition $\nu/2+m \le 3/2$.
If $q=1$ then $\gamma +\nu >0$, and by the change of variables $\xi_*-\xi^- \rightarrow u$ we have \begin{align*} |A_{2,1,p}^{(1)} | ^2 \lesssim& \|\hat f\|^2_{L^\infty} \|g\|^2_{H^{\nu/2+m}} \|h\|_{H^{\nu/2-m}}^2 \int {\la u \ra^{-( 6 +2\gamma+ \nu-2m)}} \int \frac{ {\bf 1}_{\la w \ra \leq \la u \ra}}{\la w \ra^{\nu+2m}}dw du \\ \lesssim & \|f\|^2_{L^1} \|g\|^2_{H^{\nu/2+m}} \|h\|_{H^{\nu/2-m}}^2 \,. \end{align*} If $1 <q <2$, then $3+\gamma +\nu >3/q$ and by the H\"older inequality and the change of variables $u = \xi_* -\xi^-$ we have \begin{align*} |A_{2,1,p}^{(1)} | ^2 \lesssim& \|g\|^2_{H^{\nu/2+m}} \|h\|_{H^{\nu/2-m}}^2 \left( \int |\hat f(\xi_*)|^{q/(q-1)} d\xi_*\right)^{2(q-1)/q} \\ \times & \left(\int \left( {\la u \ra^{-( 6 +2\gamma+ \nu-2m)}} \int \frac{ {\bf 1}_{\la \xi^+ -u \ra \leq \la u \ra}}{\la \xi^+ -u \ra^{\nu+2m}}d\xi^+\right)^{q/(2-q)} du\right)^{2/q-1}\\ \lesssim & \|f\|^{2}_{L^{q}} \|g\|^2_{H^{\nu/2+m}} \|h\|_{H^{\nu/2-m}}^2 \,. \end{align*} As for $A_{2,1,p}^{(2)}$ we have by the Cauchy-Schwarz inequality \begin{align*} |A_{2,1,p}^{(2)} | ^2 \lesssim& \int \frac{ |\hat \Phi_c (\xi_* - \xi^-) | |\hat f (\xi_* )| } {\la \xi_* -\xi^- \ra^{\nu}}|{\la\xi-\xi_* \ra^{\nu/2+m}} \hat g(\xi -\xi_*)|^2 d \sigma d\xi d\xi_* \\ & \times \int \frac{ |\hat \Phi_c (\xi_* - \xi^-) | |\hat f (\xi_* )| } {\la \xi_* -\xi^- \ra^{\nu}} |\la\xi \ra^{\nu/2-m}{\hat h} (\xi ) |^2 d\sigma d\xi d\xi_*\,. \end{align*} Since it follows from $3 + \gamma + \nu <3/q$ and H\"older inequality that \[ \int \frac{ |\hat \Phi_c (\xi_* - \xi^-) | |\hat f (\xi_* )| } {\la \xi_* -\xi^- \ra^{\nu}} d \xi_* d\sigma \lesssim \|f\|_{L^q}\,, \] we have the desired estimates for $A_{2,1,p}^{(2)}$. On the set $A_{2,1,p}^{(3)}$ we have $\la \xi \ra \sim \la \xi - \xi_*\ra$. 
Hence \begin{align*} |A_{2,1,p}^{(3)} | ^2 \lesssim& \int b\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_*\ra }\frac{ |\hat \Phi_c (\xi_* - \xi^-) | |\hat f (\xi_* )| } {\la \xi\ra^{\nu}}|{\la\xi-\xi_* \ra^{\nu/2+m}} \hat g(\xi -\xi_*)|^2 d \sigma d\xi d\xi_* \\ & \times \int b\,\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_*\ra }\frac{ |\hat \Phi_c (\xi_* - \xi^-) | |\hat f (\xi_* )| } {\la \xi \ra^{\nu}} |\la\xi \ra^{\nu/2-m}{\hat h} (\xi ) |^2 d\sigma d\xi d\xi_*\,. \end{align*} We use the change of variables in $\xi_*$, $u= \xi_* -\xi^-$. Note that $| \xi ^-| \ge {\frac 1 2} \la u +\xi^-\ra $ implies $|\xi^-| \geq \la u\ra/\sqrt {10}$. If $q=1$ then $\gamma + \nu>0$ and we have \begin{align*} &\int b\,\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_*\ra }\frac{ |\hat \Phi_c (\xi_* - \xi^-) | |\hat f (\xi_* )| } {\la \xi\ra^{\nu}} d \sigma d\xi_*\\ &\lesssim \|\hat f\|_{L^\infty} \int \left (\frac{|\xi|}{\la u\ra}\right)^{\nu} \la u \ra^{-(3+\gamma)} \la \xi\ra^{-\nu} du\lesssim \|f\|_{L^1}\,. 
\end{align*} On the other hand, if $q >1$ then this integral is upper bounded by \begin{align*} &\int b\,\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_*\ra }\frac{ |\hat \Phi_c (\xi_* - \xi^-) |}{\la \xi \ra^{\nu/q} \la \xi_* -\xi^-\ra^{\nu/q'}}\frac{\la \xi_*\ra^{\nu/q'} |\hat f (\xi_* )| } {\la \xi \ra^{\nu/q'}} d \sigma d\xi_* \\ &\leq \left( \int b\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_* \ra }\frac{ |\hat \Phi_c (\xi_* - \xi^-) |^q}{\la \xi \ra^{\nu} \la \xi_* -\xi^-\ra^{\nu q/q'}} d\sigma d \xi_*\right)^{1/q}\\ &\qquad\qquad\times \left(\int b\, {\bf 1}_{ | \xi^- | \ge {\frac 1 2} \la \xi_*\ra }\frac{\la \xi_*\ra^{\nu} |\hat f (\xi_* )|^{q'}} {\la \xi \ra^{\nu}} d \sigma d\xi_* \right)^{1/q'}\\ &\leq \left( \int b\, {\bf 1}_{ | \xi^- | \gtrsim \la u \ra }\frac{ |\hat \Phi_c (u) |^q}{\la \xi \ra^{\nu} \la u \ra^{\nu q/q'}} d\sigma d u\right)^{1/q} \|\hat f\|_{L^{q'}} \\ &\lesssim \int \frac{du}{\la u \ra^{q (3+\gamma+\nu)}} \|f\|_{L^q}\,, \end{align*} where $1/q +1/q' =1$. Hence we also obtain the desired estimates for $A_{2,1,p}^{(3)}$. The proof of the lemma is complete. \end{proof} Proposition \ref{upper-c} is then a direct consequence of Lemmas \ref{A-1} and \ref{A-2}. The following lemma is a variant of \cite[Lemma 4.5]{amuxy4-3}, where the roles of $g$ and $h$ are exchanged. \begin{lem}\label{differ-Gam-Q} Let $0<\nu<2$ and $\gamma > \max \{-3, -\nu -3/2\}$. 
Then for any $\alpha \ge 0$ and any $\beta, \beta' \in \RR$ we have \begin{align}\label{diff-G-Q}\nonumber &\Big|\Big(\Gamma ( f, g)\, , \,h \Big)_{L^2} -\Big( Q(\mu^{1/2} f, g), h\Big)_{L^2}\Big|\\ &\qquad \lesssim \|\mu^{1/10} f\|_{L^2} ^{1/2} \|g\|_{L^2_{(\nu+\gamma)/2-\beta}} \Big(\cD(\mu^{1/4} { \,|f|} , \la v \ra^\beta h) \Big)^{1/2} \notag \\ &\notag \qquad \qquad + \|f\|_{L^2_{-\alpha} } \|g\|_{L^2_{(\nu +\gamma)/2+\alpha - \beta'}} \|h \|_{L^2_{(\nu+\gamma)/2+\beta' }} \\ & \qquad \qquad \qquad + \|\mu^{1/10} f\|_{L^2}\|\mu^{1/10} g\|_{L^2}\|\mu^{1/10}h\|_{H^{\nu/2}}\,, \end{align} where \begin{align*} \mathcal{D}(f,g)=\iiint_{\mathbb{R}^3_v\times\mathbb{R}^3_{v_*}\times\mathbb{S}^2} B(v-v_*,\sigma)f_*(g-g')^2dvdv_*d\sigma. \end{align*} \end{lem} \begin{rema} If $\gamma > -5/2$ then the last term of the right hand side of \eqref{diff-G-Q} disappears. \end{rema} Since it follows from Lemma 2.12 of \cite{amuxy4-3} and its proof that \[ \cD(\mu^{1/4} { \,|f|} , \la v \ra^\beta h) \lesssim \|\mu^{1/10} f\|_{L^2} |\!|\!|\la v\ra^\beta h|\!|\!|^2 \lesssim \|\mu^{1/10} f\|_{L^2} \| h \|^2_{H^{\nu/2}_{(\nu+\gamma)/2 + \beta} }, \] we have the following; \begin{cor}\label{convenient-123} Let $0<\nu<2$ and $\gamma > \max \{-3, -\nu -3/2\}$. Then for any $\alpha \ge 0$ and any $\beta, \beta' \in \RR$ we have \begin{align}\label{diff-G-Q-09}\notag &\Big|\Big(\Gamma ( f, g)\, , \,h \Big)_{L^2} -\Big( Q(\mu^{1/2} f, g), h\Big)_{L^2}\Big|\\ & \quad \quad \notag \lesssim \|\mu^{1/10} f\|_{L^2}\|g\|_{L^2_{(\gamma+\nu)/2 -\beta}}\|h\|_{H^{\nu/2}_{(\gamma+\nu)/2 +\beta}}\\ &\quad \qquad \qquad + \|f\|_{L^2_{-\alpha} } \|g\|_{L^2_{(\nu +\gamma)/2+\alpha - \beta'}} \|h \|_{L^2_{(\nu+\gamma)/2+\beta' }} \,. 
\end{align} \end{cor} \begin{proof}[Proof of Lemma \ref{differ-Gam-Q}] We write \begin{align*} \Big(\Gamma( f, g)\, & , \,h \Big)_{L^2} -\Big( Q(\mu^{1/2} f, g), h\Big)_{L^2} = \int B\, \Big( {\mu'_{*}}^{1/2} -\mu_{*}^{1/2} \Big) \big( f_{*} \big) g h' d\sigma dv_*dv \\ & = 2 \int B\, \Big( (\mu'_{*})^{1/4} -\mu_{*}^{1/4} \Big) \big( \mu_{*}^{1/4} f_{*} \big) gh d\sigma dv_*dv \\ &+ \int B\, \Big( (\mu'_{*})^{1/4} -\mu_{*}^{1/4} \Big) ^2 f_{*} g h' d\sigma dv_*dv \\ &+ 2 \int B\, \Big( (\mu'_{*})^{1/4} -\mu_{*}^{1/4} \Big) \big( \mu_{*}^{1/4} f_{*} \big) g(h'-h) d\sigma dv_*dv \\ &= D_1 + D_2 + D_3\,. \end{align*} Note that \begin{align*} &\Big((\mu'_{*})^{1/4} -\mu_{*}^{1/4} \Big)^2\le 2 \Big ((\mu'_{*})^{1/8} -\mu_{*}^{1/8} \Big)^2\Big( (\mu'_{*})^{1/4} +\mu_{*}^{1/4} \Big)\\ &\lesssim \min(|v-v_*|\theta,1) \min(|v'-v'_*|\theta,1) (\mu'_{*})^{1/4} + \Big(\min(|v'-v_*|\theta,1) \Big)^2\mu_{*}^{1/4} \,. \end{align*} By this decomposition we estimate \[ |D_2| \lesssim D^{(1)}_2 + D^{(2)}_2 \] Since $|v-v_*| \sim |v- v'_*|$ on $\mbox{supp}~b$, we have \begin{align*} \la v_*\ra &\lesssim \la v-v_*\ra + \la v \ra \lesssim \la v-v'_*\ra \left(1 + \frac{\la v \ra}{\la v-v'_*\ra} \right)\\ &\lesssim \la v-v'_*\ra \la v'_*\ra \lesssim \la v-v_*\ra \la v'_*\ra \enskip \mbox{on supp}~b\,, \end{align*} and hence \[ (\mu'_{*})^{1/4}| f_* | \lesssim \la {v'_*} \ra^\alpha (\mu'_{*})^{1/4}| \la v _* \ra^{-\alpha} f_* |\la v- v_*\ra^\alpha \enskip \mbox{for any} \enskip \alpha \ge 0. 
\] Noting $\la v\ra^{\beta'} \lesssim \la v'-v'_* \ra^{\beta'} \la v'_* \ra^{|\beta'|}$ on supp$\,\, b$ for any $\beta' \in \RR$, by the Cauchy-Schwarz inequality we have \begin{align*} (D^{(1)}_2)^2 &\lesssim \int B |v-v_*|^{2\alpha}{\bf 1}_{|v-v_*|\ge 1} \min(|v-v_*|^2 \theta^2 ,1) \Big( \frac{f_* g}{\la v _* \ra^{\alpha} \la v\ra^{\beta'}} \Big)^2d\sigma dv_*dv \\ &\qquad \times \int B |v-v_*|^{2\beta'}{\bf 1}_{|v-v_*|\ge 1} \min(|v-v_*|^2 \theta^2 ,1) ( \mu_{*}^{1/8} h)^2d\sigma dv_*dv \\ &+ \int B |v-v_*|^{2}{\bf 1}_{|v-v_*| < 1}\theta^2 ( \mu_{*}^{1/100} \mu^{1/100}h)^2d\sigma dv_*dv \\ &\qquad \qquad \times \int B |v-v_*|^{2}{\bf 1}_{|v-v_*| <1} \theta^2 ( \mu_*^{1/100} f_* \mu^{1/100}g)^2d\sigma dv_*dv \\ & \lesssim \|f\|_{L^2_{-\alpha}}^2\|g\|_{L^2_{(\nu+\gamma)/2 +\alpha- \beta'}}^2 \|h\|^2_{L^2_{(\nu +\gamma)/2+\beta'}}\,, \end{align*} where we used the fact that $\la v'_* \ra \sim \la v' \ra \sim \la v \ra \sim \la v_* \ra$ on supp$\,\, b \cap {\bf 1}_{|v-v_*| <1}$, and $2 \gamma +4 >-3$. As for $D^{(2)}_2$, we have \[ (D_2^{(2)})^2 \lesssim \|\mu^{1/10} f\|_{L^2}^2\|g\|_{L^2_{(\nu+\gamma)/2 - \beta'}}^2 \|h\|^2_{L^2_{(\nu +\gamma)/2+\beta'}}\,, \] thanks to the factor $\mu_*^{1/4}$ instead of ${\mu'_*}^{1/4}$. By the Cauchy-Schwarz inequality we have for any $\beta \in \RR$ \begin{align*} |D_3| &\lesssim \Big( \int B \Big( (\mu'_{*})^{1/4} -\mu_{*}^{1/4} \Big)^2 { |\mu_*^{1/4} f_{*}|} {\big(\la v \ra^{-\beta} g\big)}^2 d\sigma dv_*dv \Big)^{1/2}\\ &\quad \times \Big( \int B \, {\mu_{*}^{1/4}|f_*| } \la v \ra^{2\beta} \Big ( h'- h \Big)^2 d\sigma dv_*dv \Big)^{1/2}\\ & = \Big( \widetilde D_{3}(f,\la v \ra^{\beta}g) \Big)^{1/2}\Big(\cD_\beta(\mu^{1/4} f, h)\Big)^{1/2}\,. 
\end{align*} We have \begin{align*} \cD_\beta(\mu^{1/4} f, h) &\le 2 \Big( \cD(\, { |\mu^{1/4} f|}, \la v \ra^{\beta} h) + \int B \, { |\mu_*^{1/4}f_*| } \Big(\la v \ra^{\beta} - \la v' \ra^{\beta}\Big)^2h^2 dv dv_*d\sigma \Big)\\ &\lesssim \cD(\,{ | \mu^{1/4}f|}, \la v \ra^\beta h) + \|\mu^{1/8}f \|_{L^2} \|h\|^2_{L^2_{\beta+\gamma/2}}\,, \end{align*} because it follows from the same arguments as in the proof of Lemma 2.12 in \cite{amuxy4-3} that \begin{equation}\label{wet} \Big|\la v \ra^\beta - \la v' \ra^\beta\Big| \lesssim \sin \frac{\theta}{2} \Big(\la v \ra^\beta \la v_*\ra^{ 2|\beta|+1}{\bf 1}_{|v-v_*| > 1}+ \la v \ra^{\beta-1} |v-v_*| {\bf 1}_{|v-v_*| \leq 1}\Big)\,. \end{equation} The same method as for $D_2^{(2)}$ shows \[ \widetilde D_3 \lesssim \|\mu^{1/10} f\|_{L^2}\|g\|_{L^2_{(\nu+\gamma)/2 - \beta}}^2\,. \] To estimate $D_1$ we use the Taylor formula \begin{align*} (\mu'_{*})^{1/4} -\mu_{*}^{1/4} &= \big(\nabla \mu^{1/4}\big) (v_*)\cdot (v'_*-v_*) \\ & + \int_0^1(1-\tau) \big( \nabla^2 \mu^{1/4}\big)(v_*+ \tau(v'_*-v_*) ) (v'_*-v_*)^2 d\tau\,. \end{align*} By writing \[ v'_* - v_* = \frac{|v-v_*|}{2}\{(\sigma\cdot \vk)\vk -\sigma\} + \frac{v-v_*}{2}(1-\vk \cdot \sigma), \enskip \vk = \frac{v-v_*}{|v-v_*|}, \] we see that the integral corresponding to the first term of this decomposition in $D_1$ vanishes because of the symmetry on $\SS^2$.
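This cancellation can be made explicit: decomposing $\sigma = (\sigma\cdot \vk)\vk + \sigma^{\perp}$ with $\sigma^{\perp}$ orthogonal to $\vk$, we have $(\sigma\cdot \vk)\vk -\sigma = -\sigma^{\perp}$, and since all the remaining factors in the first term of $D_1$ depend on $\sigma$ only through $\vk\cdot\sigma$, the average of $\sigma^{\perp}$ over each circle $\{\sigma\in\SS^2 : \vk\cdot\sigma = \cos\theta\}$ vanishes, so that
\[
\int_{\SS^2} b(\vk\cdot \sigma)\, \big\{(\sigma\cdot \vk)\vk -\sigma\big\}\, d\sigma
= -\int_{\SS^2} b(\vk\cdot \sigma)\, \sigma^{\perp}\, d\sigma = 0\,.
\]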
Therefore, we have \begin{align*} |D_1| &\lesssim \int B \min(|v-v_*|\theta^2, 1) \mu_*^{1/4} f _* gh d\sigma dv_*dv \\ &\qquad \qquad + \int B \min(|v-v_*|^2\theta^2, 1) \mu_*^{1/4} f_* gh d\sigma dv_*dv \\ &\lesssim \int B {\bf 1}_{|v-v_*| \ge 1} \min(|v-v_*|^2\theta^2, 1) \mu_*^{1/4} f _* gh d\sigma dv_*dv \\ &\qquad \qquad + \int {\bf 1}_{|v-v_*| < 1} |v-v_*|^{\gamma+1+\nu/2} \mu_*^{1/10} f_* (\mu^{1/10} g)\frac{\mu^{1/10} h}{|v-v_*|^{\nu/2}} dv_*dv \\ &\lesssim \|\mu^{1/10} f\|_{L^2}\|g\|_{L^2_{(\nu+\gamma)/2 - \beta'}} \|h\|_{L^2_{(\nu +\gamma)/2+\beta'}}\\ &\qquad \qquad+ \|\mu^{1/10} f\|_{L^2}\|\mu^{1/10} g\|_{L^2}\|\mu^{1/10}h\|_{H^{\nu/2}} \end{align*} in view of $2\gamma +\nu+2 >-3$. \end{proof} Since we may replace $B {\bf 1}_{|v-v_*| \ge 1}$ by $B_{\bar c}$ in the proof of Lemma \ref{differ-Gam-Q}, under the condition $\gamma >-3$, it follows that for any $\beta, \beta' \in \RR$ and for any $\alpha \ge 0$ we have \begin{align}\label{diff-G-Q-56789}\notag &\Big|\Big(\Gamma_{\bar c} ( f, g)\, , \,h \Big)_{L^2} -\Big( Q_{\bar c}(\mu^{1/2} f, g), h\Big)_{L^2}\Big|\\ &\quad \lesssim \notag \|\mu^{1/10} f\|_{L^2} \|\la v \ra^{-\beta} g\|_{L^2_{(\nu+\gamma)/2}}|\!|\!|\la v \ra^{\beta} h|\!|\!|\\ & \qquad \qquad + \|f\|_{L^2_{-\alpha} } \|g\|_{L^2_{(\nu+\gamma)/2+\alpha - \beta'}} \|h \|_{L^2_{(\nu+\gamma)/2+\beta'}}. \end{align} By means of Lemma 3.2 of \cite{AMUXY3}, we get, for any $\alpha \ge 0$ and any $\beta' \in \RR$, \begin{align}\label{cut-part-123}\notag \Big|\Big(\Gamma_{\bar c} ( f, g)\, , \,h \Big)_{L^2}\Big| &\lesssim \|\mu^{1/10}f\|_{L^2} |\!|\!|g|\!|\!| |\!|\!|h |\!|\!|\\ & \quad + \|f\|_{L^2_{-\alpha} } \|g\|_{L^2_{(\nu +\gamma)/2+\alpha - \beta'}} \|h \|_{L^2_{(\nu+\gamma)/2+\beta' }}. 
\end{align} Now we write \begin{align*} \Big(\Gamma_{c} ( f, g)\, , \,h \Big)_{L^2} &= \int B_c \mu_*^{1/2}(f'_* g' -f_* g) h dv dv_* d\sigma\\ & = \int B_c \Big ( {\mu'}_*^{1/4} - {\mu_*}^{1/4} \Big ) {(\mu^{1/4} f)}_* g h dv dv_* d\sigma\\ &\quad + \int B_c \Big ( \mu_*^{1/4} - {\mu'_*}^{1/4} \Big ) ^2 f_* g h' dv dv_* d\sigma\\ &\quad + \int B_c \Big ( {\mu'}_*^{1/4} - {\mu_*}^{1/4} \Big ) {(\mu^{1/4} f)}_* g (h'-h) dv dv_* d\sigma\\ &\quad + \int B_c \mu_*^{1/4}\Big ( (\mu^{1/4} f)'_* g' -(\mu^{1/4}f)_* g\Big ) h dv dv_* d\sigma\\ & = I_1 + I_2 + I_3 + I_4\,. \end{align*} The estimation for $I_4$ is just the same as in the arguments starting from line 14 on page 1021 of \cite{AMUXY3}, by replacing $f$ by $\mu^{1/4}f$. Then we have $ |I_4| \lesssim \|\mu^{1/10}f\|_{L^2} |\!|\!|g|\!|\!| |\!|\!|h |\!|\!|$. On the other hand, the estimations for $I_1$, $I_2$, $I_3$ are quite the same as those for $D_1$, $D_2$, $D_3$, respectively, in the proof of Lemma \ref{differ-Gam-Q}. Therefore, $\Big(\Gamma_{c} ( f, g)\, , \,h \Big)_{L^2}$ has the same bound as the right hand side of \eqref{cut-part-123}. Thus the proof of Lemma \ref{MS-general} is complete. It remains to prove Proposition \ref{upper-recent}. To this end we state a variant of \cite[Proposition 2.5]{amuxy4-3} where the roles of $g,h$ are exchanged. \begin{prop}\label{IV-prop-2.5} Let $0<\nu<2$ and $\gamma>\max\{-3, -\nu -3/2\}$. For any $ \ell, \beta, \delta \in \RR$ and any small $\varepsilon >0$ we have $$ \Big|\Big(\la v \ra^\ell \,Q_c(f, g)-Q_c (f, \la v \ra^\ell g),\, h\Big)\Big|\lesssim \|f\|_{L^2_{\ell-1-\beta-\delta}} \|g\|_{L^2_\beta}\Vert h\Vert _{H^{(\nu-1+\varepsilon)^+}_\delta}.
$$ \end{prop} \begin{proof} It suffices to write \begin{align*} \Big(\la v \ra^\ell \,Q_c(f, g)- Q_c (f, \la v \ra^\ell g),\, h\Big) &= \int B_c \Big(\la v' \ra^\ell- \la v \ra^\ell \Big) f_* gh dv dv_* d\sigma \\ &+ \int B_c\Big(\la v' \ra^\ell- \la v \ra^\ell \Big) f_* g \Big(h'-h\Big) dv dv_* d\sigma \notag \\ &= J_1 + J_2\,. \notag \end{align*} The estimation for $J_2$ is quite the same as in the proof of \cite[Proposition 2.5]{amuxy4-3}. As for $J_1$, we use the Taylor expansion, with $ v'_\tau = v+ \tau(v'-v)$, \begin{align}\label{one-use} \la v' \ra^\ell - \la v \ra^\ell = \nabla \Big( \la v \ra^\ell\Big)\cdot (v'-v) + \int_0^1 (1-\tau)\nabla^2 \Big( \la v'_\tau \ra^\ell \Big)d\tau(v'-v)^2 \,. \end{align} Since we have \[ v' - v= \frac{|v-v_*|}{2} (\sigma - (\vk \cdot \sigma) \vk ) + \frac{v-v_*}{2}(1 - \vk \cdot \sigma) \] and the integral corresponding to the first term vanishes because of the symmetry on $\SS^2$, it follows that \begin{align*} \Big|J_1\Big|&\lesssim \left|\int b(\cos \theta)\Phi_c \sin^2( \theta/2)|v-v_*|\Big( |\nabla \Big(\la v \ra^\ell \Big)| \right. \\ & \left.\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \int_0^1|\nabla^2 \Big( \la v'_\tau\ra^\ell\Big)|d\tau \Big)| f_* gh |dv dv_* d\sigma \right|\\ &\lesssim \int_{|v-v_*|\lesssim 1} |v-v_*|^{\gamma+1+(\nu -1 +\varepsilon)^+} |\la v_* \ra^{\ell-1-\beta-\delta } f_*|\\ &\qquad \qquad \qquad \times \, |(\la v \ra^{\beta} g)| \frac{|(\la v \ra^{\delta}h)| }{ |v-v_*|^{(\nu -1 +\varepsilon)^+}} dv dv_* \,\\ &\lesssim \|f\|_{L^2_{\ell-1-\beta-\delta} } \|g\| _{L^2_\beta}\|h\| _{H^{(\nu -1 +\varepsilon)^+}_\delta}\,, \end{align*} which, together with the estimate for $J_2$, gives the desired estimate. \end{proof} \begin{proof}[Proof of Proposition \ref{upper-recent}] Here we only prove the case $m \in [-\nu/2, 0]$ because the other case was essentially given in \cite[Lemma 4.7]{amuxy4-3}.
Note that \begin{align*} \Big(Q_c(f,g),h\Big) &= \Big(Q_c(f, \la v \ra^{\ell} g), \la v \ra^{-\ell} h\Big) \\ &+ \Big( \la v \ra^{\ell}Q_c(f, g) - Q_c(f, \la v \ra^{\ell} g) , \la v \ra^{-\ell}h \Big) . \end{align*} It follows from Propositions \ref{upper-c} and \ref{IV-prop-2.5} with $\beta =\ell$, $\delta =0$ that \begin{align*} \Big|\Big(Q_c(f,g),h\Big)\Big| \lesssim \|f\|_{L^2}\|g \|_{H^{\nu/2+m}_\ell} \|h\|_{H^{\nu/2-m}_{-\ell}}\,, \end{align*} where we have used $m \le 0$. By means of \cite[Theorem 2.1]{AMUXY2010} (see also (2.1) of \cite{amuxy4-3}) we have for any $m, \ell\in\RR$, $$ \Big|\Big(Q_{\bar c} (f, g),\, h \Big)\Big| \lesssim \Vert f\Vert_{L^1_{\ell^++(\gamma+\nu)^+}} \Vert g\Vert_{H^{\nu/2 +m}_{(\ell+\gamma+\nu)^+}} \|h\|_{H^{\nu/2-m}_{-\ell}}\,. $$ The above two estimates and Corollary \ref{convenient-123} conclude \eqref{different-g-h} when $-\nu/2 \le m \le 0$. \end{proof} \subsection{Commutator estimates with moments}\label{ap-3} Let $W_\delta(v) = \la \delta v \ra^{-N}$ for $0< \delta <1$ and $N \ge 1$. Then we have \begin{align*} &\Big(\mathcal{L}_1(g), W_\delta^2 g\Big)_{L^2(\RR^3_v) }- \Big(\mathcal{L}_1(W_\delta g), W_\delta g \Big)_{L^2(\RR^3_v)}\\ &= \int B \mu_*^{1/2} {\mu'}_*^{1/2}(W_\delta -W'_\delta)g' W_\delta g dv dv_* d\sigma\\ &=\frac{1}{2} \int B \mu_*^{1/2} {\mu'}_*^{1/2}(W_\delta -W'_\delta)^2g' g dv dv_* d\sigma\\ &\le \frac{1}{2} \int B \mu_* (W'_\delta -W_\delta)^2g^2 dv dv_* d\sigma\,, \end{align*} where we used the change of variables $(v', v'_*, \sigma) \rightarrow (v, v_*, \vk)$ and the Cauchy-Schwarz inequality.
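The second equality above is the standard symmetrization: the involutive substitution $(v, v_*) \leftrightarrow (v', v'_*)$ leaves $B$ and $\mu_*^{1/2}{\mu'}_*^{1/2}$ invariant and exchanges $W_\delta$ with $W'_\delta$ and $g$ with $g'$, so that averaging the integrand with its image gives
\begin{align*}
\frac{1}{2}\Big[(W_\delta -W'_\delta)\, g'\, W_\delta g + (W'_\delta -W_\delta)\, g\, W'_\delta g'\Big] = \frac{1}{2}(W_\delta -W'_\delta)^2 g' g\,.
\end{align*}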
Note that we have \begin{align}\label{W-est}\notag |W_\delta(v') -W_\delta(v) | &\le \int_0^1 |\nabla W_\delta(v'_\tau)|d\tau |v'-v|, \enskip \enskip v'_\tau = v+ \tau(v'-v) \\ \notag &\lesssim \delta \int_0^1 W_\delta(v'_\tau)\la \delta v'_\tau\ra^{-1} d\tau |v-v_*|\sin \frac{\theta}{2}\\ \notag &\lesssim \delta^{\nu/2} |v-v_*|^{\nu/2} \theta W_\delta(v)\la v_* \ra^{N+2-\nu/2}\\ &\lesssim \delta^{\nu/2} \la v\ra^{\nu/2} \theta W_\delta(v)\la v_* \ra^{N+2}\,, \end{align} because $\la \delta v'_\tau\ra^{-1} \lesssim \la \delta v \ra^{-1} \la v_*\ra$ and $\delta |v-v_*| \le \la \delta v \ra \la v_* \ra$. Since it follows from Lemma 2.5 of \cite{AMUXY2} that $$\int |v-v_*|^{\gamma +\nu} \mu_*^{1/2} d v_* \lesssim \la v \ra^{\gamma+\nu}, $$ we have \begin{align}\label{linear-comm-weight} \left|\Big(\mathcal{L}_1(g), W_\delta^2 g\Big)_{L^2(\RR^3_v) }- \Big(\mathcal{L}_1(W_\delta g), W_\delta g \Big)_{L^2(\RR^3_v)}\right| \lesssim \delta^{\nu} \|g\|^2_{L^2_{(\nu+\gamma)/2}}\,. \end{align} \vskip1cm \begin{prop}\label{weight-commutator} Let $\gamma > \max \{-3, -\nu -3/2\}$. Then we have \begin{align*} \left| \Big(\Gamma(f,g), \right. &\left. W_\delta h\Big)_{L^2(\RR^3)}- \Big(\Gamma(f, W_\delta g), h\Big)_{L^2(\RR^3)} \right| \\ &\lesssim \delta^{\nu/2} \|f\|_{L^2} \|W_\delta g\|_{L^2_{(\nu+\gamma)/2}} \Big(\|h\|_{L^2_{(\nu+\gamma)/2}} + \|h\|_{H^{\nu'/2}_{-N_1}}\Big)\\ & \qquad + \delta^{\nu/2} \|f\|_{L^2}^{1/2} \|W_\delta g\|_{L^2_{(\nu+\gamma)/2}} \cD(\mu^{1/2}f, h)^{1/2}\,, \end{align*} for any $N_1 >0$ and any $0 \le \nu' \le \nu$ satisfying $\gamma + \nu' >-3/2$. 
\end{prop} \begin{proof} Note that \begin{align*} \Big(\Gamma(f,g), W_\delta h\Big)_{L^2(\RR^3)}-& \Big(\Gamma(f, W_\delta g), h\Big)_{L^2(\RR^3)}\\ &= \int B {\mu'}_*^{1/2} (W'_\delta -W_\delta){f}_* g h' dv dv_* d\sigma\\ &= \int B \mu_*^{1/2} (W'_\delta -W_\delta){f}_* gh dv dv_* d\sigma\\ &+ \int B \Big({\mu'}_*^{1/2} - \mu_*^{1/2}\Big) f_* (W'_\delta -W_\delta)gh' dv dv_* d\sigma\\ &+\int B \mu_*^{1/2} f_* (W'_\delta -W_\delta)g (h'-h) dv dv_* d\sigma\\ & = A_1 + A_2 + A_3. \end{align*} Note that \begin{align*} &|v-v_*|^\gamma \lesssim \la v\ra^\gamma \la v_* \ra^{|\gamma|} \enskip \mbox{or} \enskip \la v\ra^\gamma \la v'_* \ra^{|\gamma|} \enskip \mbox{if } \enskip |v-v_*| \ge 1\,,\\ & \la v \ra \sim \la v_* \ra \sim \la v' \ra \sim \la v_*'\ra \enskip \mbox{if } \enskip |v-v_*| < 1\,. \end{align*} We divide \begin{align*} A_2 = \int_{|v-v_*| < 1} \cdots dvdv_*d\sigma + \int_{|v-v_*| \ge 1} \cdots dvdv_*d\sigma= A_{2,1} + A_{2,2}. \end{align*} Using that \begin{align*} |{\mu'_*}^{1/2} - \mu_*^{1/2}| \lesssim \min(|v-v_*|\theta, 1)\mu_*^{1/4} + \min(|v-v'_*|\theta, 1){\mu'}_*^{1/4} \end{align*} we estimate $|A_{2,1}| \lesssim |A_{2,1}^{*}| + |A^{*'}_{2,1}|$, where $A_{2,1}^{*}$ and $A^{*'}_{2,1}$ denote the contributions of the terms containing $\mu_*^{1/4}$ and ${\mu'}_*^{1/4}$, respectively. Then for any $N_1 >0$ we have \begin{align*} |A_{2,1}^{*'}| &\lesssim \delta^{\nu/2} \int_{|v-v_*| <1} |v-v_*|^{\gamma + 1 +\nu/2} b \theta^2\left| f_* \frac{W_\delta g} {\la v \ra^{N_1} } \frac{h'}{\la v' \ra^{N_1}}\right| dv dv_* d\sigma\\ &\lesssim \delta^{\nu/2} \left(\int \Big(\int_{|v-v_*| <1} |v-v_*|^{2\gamma + 2 +\nu} dv_* \Big)\left|\frac{W_\delta g} {\la v \ra^{N_1} }\right|^2dv\right)^{1/2} \\ &\qquad \qquad \qquad \times \left( \int b \theta^2 |f_*|^2 \left|\frac{h'}{\la v' \ra^{N_1}}\right|^2 dv dv_* d\sigma\right ) ^{1/2}\\ &\lesssim \delta^{\nu/2} \|f\|_{L^2} \|W_\delta g \|_{L^2_{-N_1}} \|h\|_{L^2_{-N_1}}\,. \end{align*} Similarly we have the same upper bound for $A_{2,1}^{*}$.
As for $A_{2,2}$, it follows from the Cauchy-Schwarz inequality that \begin{align*} A_{2,2}^2 &\le 2\int_{|v-v_*|\ge 1} B f_*^2 (W_\delta - W_\delta')^2g^2 ({\mu'_*}^{1/2} + \mu^{1/2}_*)dv dv_* d\sigma \\ & \qquad \qquad \qquad \times \int B h^2 \Big ({\mu_*}^{1/4} - {\mu'}^{1/4}_*\Big)^2 dv dv_* d\sigma\\ &\lesssim \delta^{\nu}\int \la v\ra^{\gamma+\nu} b \theta^2 |f_* W_\delta g |^2dv dv_*d \sigma \|h\|^2_{L^2_{(\nu+\gamma)/2}}\\ &\lesssim \delta^{\nu}\|f\|^2_{L^2} \|W_\delta g\|^2_{L^2_{(\nu+\gamma)/2}} \|h\|^2_{L^2_{(\nu+\gamma)/2}}\,, \end{align*} where we have used the fact that, if $|v-v_*| \ge 1$, \[ |v-v_*|^{\gamma} (W_\delta - W_\delta')^2 \lesssim \delta^{\nu} \la v\ra^{\gamma+\nu} \theta^2 W_\delta^2 \min \{ \la v_* \ra^{|\gamma|+2N+4}\,,\, \la v'_* \ra^{|\gamma|+2N+4}\}, \] because of \eqref{W-est} and its analogue with $v_*$ replaced by $v_*'$. By means of the Cauchy-Schwarz inequality, we have \begin{align*} A_3^2 & \le \int B \mu_*^{1/2} f_* (W'_\delta -W_\delta)^2g^2 dv dv_* d\sigma\times \cD(\mu^{1/2}| f|, h). \end{align*} The first factor on the right-hand side is bounded above by $C \delta^{\nu}$ times \begin{align*} &\|\mu^{1/4} f \|_{L^1}\|W_\delta g\|^2_{L^2_{(\nu+\gamma)/2}}\\ &+ \Big( \int_{|v-v_*|<1} |v-v_*|^{2(\gamma+\nu)}dv_*\Big)^{1/2} \|\mu^{1/4} f\|_{L^2}\|W_\delta g\|^2_{L^2_{-N_1}} \end{align*} because it follows again from \eqref{W-est} that \[ |v-v_*|^{\gamma} (W_\delta - W_\delta')^2 \mu^{1/4}_* \lesssim \delta^{\nu} \theta^2 W_\delta^2 \Big\{ \la v\ra^{\gamma+\nu} {\bf 1}_{|v-v_*|\ge 1} + \frac{|v-v_*|^{\gamma+\nu}}{\la v \ra^{N_1}} {\bf 1}_{|v-v_*|<1} \Big\}. \] Therefore we also have the desired bound for $A_3$. In order to estimate $A_1$, we use the following second-order Taylor expansion: with $ v'_\tau = v+ \tau(v'-v)$, \begin{align}\label{W-est-sec} W_\delta(v') -W_\delta(v) &= \nabla W_\delta (v)\cdot (v'-v)\notag\\ & \qquad \qquad + \int_0^1 (1-\tau)\nabla^2 W_\delta(v'_\tau) d\tau (v'-v)^2.
\end{align} Similarly to \eqref{one-use}, we can estimate the factor $W'_\delta -W_\delta$ by \begin{align}\label{second-w} &\notag |\nabla W_\delta(v)| |v-v_*| \theta^2 + |\nabla^2 W_\delta(v'_\tau) | |v-v_*|^2\theta^2\\ &\notag \lesssim \delta W_\delta(v) \la \delta v \ra^{-1} |v-v_*| \theta^2 + \delta^2 W_\delta(v'_\tau) \la \delta v'_\tau \ra^{-2} |v-v_*|^2 \theta^2\\ &\lesssim \delta^{\nu/2} W_\delta(v) |v-v_*|^{\nu} \theta^2 \la v_*\ra^{1-\nu/2} + \delta^{\nu}W_\delta(v) |v-v_*|^{\nu}\theta^2 \la v_*\ra^{N+4 -\nu} . \end{align} Consequently, for any $0 \le \nu' \le \nu$ satisfying $\gamma + \nu' >-3/2$, we have \begin{align*} |A_1|&\lesssim \delta^{\nu/2} \iint \la v \ra^{\gamma+\nu} {\bf 1}_{|v-v_*| \ge 1} \mu_*^{1/4}| f_* W_\delta g h |dv dv_*\\ & + \delta^{\nu/2} \iint |v-v_*|^{\gamma+(\nu+\nu')/2} {\bf 1}_{|v-v_*| < 1}\left| f_* \frac{W_\delta g }{\la v \ra^{N_1}} \frac{h}{|v-v_*|^{\nu'/2} \la v \ra^{N_1}} \right |dv dv_*\\ & \lesssim \delta^{\nu/2} \|\mu^{1/4}f\|_{L^1} \|W_\delta g\|_{L^2_{(\nu+\gamma)/2}}\|h\|_{L^2_{(\nu+\gamma)/2}}\\ & + \delta^{\nu/2} \Big(\int |f_*|^2 \Big(\int \left|\frac{h} {|v-v_*|^{\nu'/2} \la v \ra^{N_1}} \right |^2dv\Big) dv_*\Big)^{1/2}\\ &\quad \qquad \times \Big(\int \left|\frac{W_\delta g }{\la v \ra^{N_1}}\right|^2 \Big(\int_{|v-v_*|<1} |v-v_*|^{2\gamma+\nu + \nu'} dv_* \Big) dv\Big)^{1/2}\\ &\lesssim \delta^{\nu/2} \|f\|_{L^2} \Big(\|W_\delta g\|_{L^2_{(\nu+\gamma)/2}}\|h\|_{L^2_{(\nu+\gamma)/2}} + \|W_\delta g\|_{L^2_{-N_1}} \|h\|_{H^{\nu'/2}_{-N_1}}\Big). \end{align*} \end{proof} The following corollary is a variant of Lemma 4.9 in \cite{amuxy4-3}, which is used to prove the non-negativity of solutions. \begin{cor}\label{cor-weight-commutator} Let $\gamma > \max \{-3, -\nu -3/2\}$ and let $\varphi(v,x) = (1+|v|^2+|x|^2)^{\alpha/2}$ for $\alpha >3/2$. If we put $W_{\varphi} = \varphi(v,x)^{-1}$, then we have \begin{align*} \left| \Big(\Gamma(f,g), \right. &\left.
W_{\varphi} h\Big)_{L^2(\RR^3)}- \Big(\Gamma(f, W_\varphi g), h\Big)_{L^2(\RR^3)} \right| \\ &\lesssim \|f\|_{L^2} \|W_\varphi g\|_{L^2_{\gamma/2}} \Big(\|h\|_{L^2_{(\nu+\gamma)/2}} + \|h\|_{H^{\nu'/2}_{-N_1}}\Big)\\ & \qquad + \|f\|_{L^2}^{1/2} \|W_\varphi g\|_{L^2_{\gamma/2}} \cD(\mu^{1/2}f, h)^{1/2}\,, \end{align*} for any $N_1 >0$ and any $0 \le \nu' \le \nu$ satisfying $\gamma + \nu' >-3/2$. \end{cor} \begin{proof} Instead of \eqref{W-est} and \eqref{second-w}, respectively, it suffices to use \begin{align*} |W_\varphi(v') -W_\varphi(v) | &\le \int_0^1 |\nabla W_\varphi(v'_\tau)|d\tau |v'-v|, \enskip \enskip v'_\tau = v+ \tau(v'-v) \\ \notag &\lesssim \int_0^1 W_\varphi(v'_\tau)\la v'_\tau\ra^{-1} d\tau |v-v_*|\sin \frac{\theta}{2}\\ \notag &\lesssim \theta W_\varphi(v)\la v_* \ra^{\alpha+2}\Big( {\bf 1}_{|v-v_*|\ge1} + |v-v_*|{\bf1}_{|v-v_*|<1}\Big) \end{align*} and \begin{align*} &|\nabla W_\varphi(v)| |v-v_*| \theta^2 + |\nabla^2 W_\varphi(v'_\tau) | |v-v_*|^2\theta^2\\ &\notag \lesssim W_\varphi(v) \la v \ra^{-1} |v-v_*| \theta^2 + W_\varphi(v'_\tau) \la v'_\tau \ra^{-2} |v-v_*|^2 \theta^2\\ &\notag \lesssim \theta^2 W_\varphi(v) \la v_*\ra^{\alpha +4}\Big( {\bf 1}_{|v-v_*|\ge1} + |v-v_*|{\bf1}_{|v-v_*|<1}\Big). \end{align*} \end{proof} \subsection{Commutator with $v$ derivative mollifier}\label{ap-4} Since the kinetic factor of the collision cross section is singular, we must confine ourselves to a special order of the mollifier. Let $1 \ge N_0 \ge \nu/2$ and let $M^\delta(\xi) = \displaystyle \frac{1}{(1+ \delta \la \xi \ra)^{N_0} }$ for $0 < \delta \le 1$. \begin{prop}\label{IV-coro-2.15} Assume that $0 < \nu <2$ and $\gamma>\max\{-3, -\frac 32 -\nu\} $. Then for any $\nu' >0$ satisfying $\nu-1 \le \nu' <\nu$ and $\gamma +\nu' >-3/2$ we have \begin{align*} \Big | \Big (M^\delta(D_v)\,& Q_c (f, g) - Q_c ( f, M^\delta(D_v) \, g) ,\, h \Big ) \Big |\\ &\qquad \qquad \lesssim \| f\|_{L^2} \|M^\delta (D_v) g\Vert_{H^{\nu'/2}} \, \Vert h\Vert_{H^{\nu'/2}}\,. 
\end{align*} \end{prop} \begin{proof} We shall follow arguments similar to those of Proposition 3.4 in \cite{AMUXY-Kyoto}. By using the formula {}from the Appendix of \cite{advw}, we have \begin{align*} ( Q_c(f, g), h ) =& \iiint_{ \RR^3\times\RR^3\times\SS^2} b \Big({\frac{\xi}{ | \xi |}} \cdot \sigma \Big) [ \hat\Phi_c (\xi_* - \xi^- ) - \hat \Phi_c (\xi_* ) ] \\ & \qquad \qquad \times \hat f (\xi_* ) \hat g(\xi - \xi_* ) \overline{{\hat h} (\xi )} d\xi d\xi_*d\sigma \,, \end{align*} where $\xi^-=\frac 1 2 (\xi-|\xi|\sigma)$. Therefore \begin{align*} \Big (M^\delta(D) \, Q_c (f, g)& - Q_c (f, M^\delta(D)\, g) , h \Big ) \\ = &\iiint b \Big({\frac{\xi}{ | \xi |}} \cdot \sigma \Big) [ \hat\Phi_c (\xi_* - \xi^- ) - \hat \Phi_c (\xi_* ) ] \\ &\quad \times \Big(M^\delta (\xi) - M^\delta (\xi-\xi_*)\Big) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \overline{{\hat h} (\xi )} d\xi d\xi_*d\sigma \\ = & \iiint_{ | \xi^- | \leq { \frac 1 2} \la \xi_*\ra } \cdots\,\, d\xi d\xi_*d\sigma + \iiint_{ | \xi^- | \geq {\frac 1 2} \la \xi_*\ra } \cdots\,\, d\xi d\xi_*d\sigma \,\\ =& \cA_1(f,g,h) + \cA_2(f,g,h) \,\,. \end{align*} Then, we write $\cA_2(f,g,h)$ as \begin{align*} \cA_2 &= \iiint b \Big(\frac{\xi}{ | \xi |} \cdot \sigma \Big) {\bf 1}_{ | \xi^- | \ge {\frac1 2}\la \xi_*\ra } \hat\Phi_c (\xi_* - \xi^- ) \cdots d\xi d\xi_*d\sigma \\ &\quad - \iiint b \Big(\frac{\xi}{ | \xi |} \cdot \sigma \Big){\bf 1}_{ | \xi^- | \ge {\frac12}\la \xi_*\ra } \hat \Phi_c (\xi_* ) \cdots d\xi d\xi_*d\sigma \\ &= \cA_{2,1}(f,g,h) - \cA_{2,2}(f,g,h)\,.
\end{align*} On the other hand, for $\cA_1$ we use the Taylor expansion of $\hat \Phi_c$ of order $2$ to have $$ \cA_1 = \cA_{1,1} (f,g,h) +\cA_{1,2} (f,g,h), $$ where \begin{align*} \cA_{1,1} &= \iiint b\,\, \xi^-\cdot (\nabla\hat\Phi_c)( \xi_*) {\bf 1}_{ | \xi^- | \leq \frac{1}{2} \la \xi_*\ra } \Big(M^\delta (\xi) - M^\delta (\xi-\xi_*)\Big)\\ & \qquad \times \hat f (\xi_* ) \hat g(\xi - \xi_* ) \bar{\hat h} (\xi ) d\xi d\xi_*d\sigma, \end{align*} and $\cA_{1,2} (f,g,h)$ is the remaining term corresponding to the second order term in the Taylor expansion of $\hat\Phi_c$. We first consider $\cA_{1,1}$. By writing \[ \xi^- = \frac{|\xi|}{2}\left(\Big(\frac{\xi}{|\xi|}\cdot \sigma\Big)\frac{\xi }{|\xi|}-\sigma\right) + \left(1- \Big(\frac{\xi}{|\xi|}\cdot \sigma\Big)\right)\frac{\xi}{2}, \] we see that the integral corresponding to the first term on the right hand side vanishes because of the symmetry on $\SS^2$. Hence, we have \[ \cA_{1,1}= \iint_{\RR^6} K(\xi, \xi_*) \Big(M^\delta (\xi) - M^\delta (\xi-\xi_*)\Big) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \bar{\hat h} (\xi ) d\xi d\xi_* \,, \] where \[ K(\xi,\xi_*) = \int_{\SS^2} b \Big({ \frac{\xi}{ | \xi |}} \cdot \sigma \Big) \left(1- \Big(\frac{\xi}{|\xi|}\cdot \sigma\Big)\right)\frac{\xi}{2}\cdot (\nabla\hat\Phi_c)( \xi_*) {\bf 1}_{ | \xi^- | \leq {\frac1 2} \la \xi_*\ra } d \sigma \,. \] Note that $| \nabla \hat \Phi_c (\xi_*) | \lesssim \frac{1}{\la \xi_*\ra^{3+\gamma +1}}$, {}from the Appendix of \cite{AMUXY2}. If $\sqrt 2 |\xi| \leq \la \xi_* \ra$, then $\sin(\theta/2) $ $| \xi|= |\xi^-| \leq \la \xi_* \ra/2$ because $0 \leq \theta \leq \pi/2$, and we have \begin{align*} |K(\xi,\xi_*)| &\lesssim \int_0^{\pi/2} \theta^{1-\nu} d \theta\frac{ \la \xi\ra}{\la \xi_*\ra^{3+\gamma +1}} \lesssim \frac{1 }{\la \xi_*\ra^{3+\gamma}}\left( \frac{\la \xi \ra}{\la \xi_*\ra}\right) \,. 
\end{align*} On the other hand, if $\sqrt 2 |\xi| \geq \la \xi_* \ra$, then \begin{align*}|K(\xi,\xi_*)| &\lesssim \int_0^{\pi\la \xi_*\ra /(2|\xi|)} \theta^{1-\nu} d \theta\frac{ \la \xi\ra}{\la \xi_*\ra^{3+\gamma +1}} \lesssim \frac{1 }{\la \xi_*\ra^{3+\gamma}}\left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^{\nu-1}\,. \end{align*} Hence we obtain \begin{align}\label{later-use3} |K(\xi,\xi_*)| &\lesssim \frac{1 }{\la \xi_*\ra^{3+\gamma}}\left\{ \left( \frac{\la \xi \ra}{\la \xi_*\ra}\right){\bf 1}_{ \la \xi_* \ra \geq \sqrt 2 |\xi| } \right.\notag\\ &\qquad\left.+{\bf 1}_{ \sqrt 2 |\xi| \geq \la \xi_* \ra \geq |\xi|/2} + \left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^{\nu-1} {\bf 1}_{ |\xi|/2 \ge \la \xi_* \ra }\right\}\,. \end{align} Similarly to $\cA_{1,1}$, we can also write \[ \cA_{1,2}= \iint_{\RR^6} \tilde K(\xi, \xi_*) \Big(M^\delta (\xi) - M^\delta (\xi-\xi_*)\Big) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \bar{\hat h} (\xi ) d\xi d\xi_* \,, \] where \[ \tilde K(\xi,\xi_*) = \int_{\SS^2} b \Big({\frac{\xi}{ | \xi |}} \cdot \sigma \Big) \int^1_0(1-\tau) (\nabla^2\hat \Phi_c) (\xi_* -\tau\xi^- ) \cdot\xi^- \cdot\xi^- {\bf 1}_{ | \xi^- | \leq {\frac 12} \la \xi_*\ra } d\tau d \sigma\,. \] Again {}from the Appendix of \cite{AMUXY2}, we have $$ | (\nabla^2\hat \Phi_c) (\xi_* -\tau\xi^- ) | \lesssim {\frac{ 1}{\la \xi_* -\tau \xi^-\ra^{3+\gamma +2}}} \lesssim {\frac{1}{\la \xi_*\ra^{3+\gamma +2}}}, $$ because $|\xi^-| \leq \la \xi_*\ra/2$, which leads to \begin{align}\label{later-use4}\notag |\tilde K(\xi,\xi_*)| &\lesssim \frac{1 }{\la \xi_*\ra^{3+\gamma}}\left\{ \left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^2 {\bf 1}_{ \la \xi_* \ra \geq \sqrt 2 |\xi| }\right.\\ &\qquad\left. +{\bf 1}_{ \sqrt 2 |\xi| \geq \la \xi_* \ra \geq |\xi|/2} + \left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^{\nu} {\bf 1}_{ |\xi|/2 \ge \la \xi_* \ra }\right\}\,. 
\end{align} We employ (3.4) of Lemma 3.1 in \cite{AMUXY-Kyoto} with $p=1$ and $\lambda =0$, that is, \begin{align}\label{simple-form} &\left|M^\delta (\xi) - M^\delta (\xi-\xi_*)\right|\leq C M^\delta (\xi-\xi_*)\left\{ \Big( \frac{\la \xi_*\ra}{\la \xi \ra }\Big) {\bf 1}_{\la \xi_* \ra \ge \sqrt 2 |\xi|} \right.\notag\\ &+ \left( {M^\delta (\xi_*)\big(1 + \delta \la \xi - \xi_*\ra\big)^{N_0}} +1 \right) {\bf 1}_{ \sqrt 2 |\xi| >\la \xi_* \ra \ge |\xi|/2 } +\left. \frac{\la \xi_*\ra}{\la \xi \ra }{\bf 1}_{|\xi|/2>\la \xi_* \ra }\right\}\,. \end{align} It follows from \eqref{later-use3} and \eqref{later-use4} that we have $$ |\cA_{1}|\lesssim |\cA_{1,1}| + |\cA_{1,2}| \lesssim A_1+ A_2 + A_3, $$ with \begin{equation}\label{A_1} A_1=\iint_{\RR^6} \left |\frac{\hat f(\xi_*)}{\la \xi_*\ra^{3+\gamma}}\right | \left |M^\delta (\xi-\xi_*)\hat g(\xi -\xi_*)\right | |\hat h(\xi)| {\bf 1}_{ \la \xi_* \ra \geq \sqrt 2 |\xi| }d\xi_* d\xi\, , \end{equation} and \begin{align*} A_2=& \iint_{\RR^6} \left |\frac{\hat f(\xi_*)}{\la \xi_*\ra^{3+\gamma}}\right | \left |M^\delta (\xi-\xi_*)\hat g(\xi -\xi_*)\right | |\hat h(\xi)|\\ &\times\left( {M^\delta (\xi_*)\big(1 + (\delta \la \xi - \xi_*\ra)^{N_0}\big)} +1 \right) {\bf 1}_{ \sqrt 2 |\xi| >\la \xi_* \ra \ge |\xi|/2 }d\xi_*d\xi\, ;\\ A_3=&\iint_{\RR^6} \left |\frac{\hat f(\xi_*)}{\la \xi_*\ra^{3+\gamma}}\right | \left |M^\delta (\xi-\xi_*)\hat g(\xi -\xi_*)\right | |\hat h(\xi)| \left( \frac{\la \xi\ra}{\la \xi_* \ra } \right)^{\nu-1} {\bf 1}_{|\xi|/2>\la \xi_* \ra } d\xi_* d\xi\, . 
\end{align*} Setting $\hat G(\xi) = \la \xi \ra^{\nu'/2} M^\delta (\xi) \hat g(\xi)$ and $\hat H(\xi) = \la \xi \ra^{\nu'/2}\hat h(\xi)$, we get \begin{align*} & A_1 \le \int_{\RR^3} \frac{|\hat H(\xi)|}{\la \xi \ra^{3/2 + \varepsilon}} \left(\int_{\RR^3} \left( \frac{\la \xi\ra}{\la \xi_* \ra} \right)^{3/2 + \varepsilon-\nu'/2} {\bf 1}_{ \la \xi_* \ra \geq \sqrt 2 |\xi| } \frac{|\hat f(\xi_*) |\, |\hat G ( \xi -\xi_*)|}{\la \xi_* \ra^{3/2+\gamma+\nu' -\varepsilon} } d\xi_* \right) d\xi \\ & \qquad \lesssim \|h\|_{H^{\nu'/2}} \|f\|_{L^2} \|M^\delta g\|_{H^{\nu'/2}} \,, \notag \end{align*} because $3/2 + \varepsilon-\nu'/2\ge 0$ and $3/2+\gamma+\nu' -\varepsilon \ge 0$ for a sufficiently small $\varepsilon >0$. Here we have used the fact that $\la \xi_* \ra \sim \la \xi-\xi_*\ra$ if $\la \xi_* \ra \geq \sqrt 2 |\xi|$. Noticing the third formula of \eqref{equivalence-relation}, we get \begin{align*} &\left| A_2 \right|^2 \lesssim \left \{\int_{\RR^3}\frac{|\hat f(\xi_*)|^2 d\xi_*}{ \la \xi_*\ra^{6+2\gamma +\nu'}} \int_{\la \xi -\xi_*\ra \lesssim \la \xi_* \ra} \left( \frac{\la \xi_* \ra^{-2N_0}}{\la \xi- \xi_* \ra^{\nu'-2N_0}} + \frac{1}{\la \xi -\xi_* \ra^{\nu'}} \right) d\xi \right\} \notag \\ &\quad \qquad \times \left( \iint_{\RR^6}|\hat G ( \xi -\xi_*)|^2 |\hat H(\xi)|^2 d\xi d\xi_* \right) \,.\notag\\ &\quad \lesssim \int_{\RR^3}\frac{|\hat f(\xi_*)|^2} { \la \xi_*\ra^{3+2(\gamma +\nu')}} d\xi_* \|M^\delta g\|_{H^{\nu'/2}}^2 \|h\|_{H^{\nu'/2}}^2 \lesssim \|f\|_{L^2}^2 \|M^\delta g\|_{H^{\nu'/2}}^2 \|h\|_{H^{\nu'/2}}^2\,, \end{align*} because $3+2(\gamma +\nu')>0$.
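Let us note that the middle factor in the definition of $A_2$ arises from \eqref{simple-form} through the elementary subadditivity bound
\[
(1+x)^{N_0} \le 1 + x^{N_0}\,, \qquad x \ge 0\,,\ 0 < N_0 \le 1\,,
\]
applied with $x = \delta \la \xi - \xi_*\ra$; the resulting term $M^\delta(\xi_*)\,\delta^{N_0}\la \xi - \xi_*\ra^{N_0}$ is then controlled by means of $M^\delta(\xi_*)\,\delta^{N_0} \le \la \xi_*\ra^{-N_0}$, which produces the factor $\la \xi_* \ra^{-2N_0}/\la \xi- \xi_* \ra^{\nu'-2N_0}$ in the estimate of $A_2$ above.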
Since $\nu' \ge \nu-1$ and $6 +2(\gamma + \nu')>3$, we have \begin{align*} &\left| A_3 \right|^2 \lesssim \left(\int_{\RR^3} |\hat f(\xi_*)|^2 d\xi_* \int_{\RR^3} |\hat H(\xi)|^2d\xi\right)\\ & \times \left(\int_{\RR^3} \frac{d\xi_*}{ \la \xi_* \ra^{6+ 2(\gamma +\nu')}} \int_{\RR^3} \left(\frac{\la \xi_*\ra}{\la \xi \ra}\right)^{2\{\nu'-(\nu-1)\}} {\bf 1}_{ |\xi|/2 \ge \la \xi_* \ra } |\hat G ( \xi -\xi_*)|^2 d\xi \right)\notag \\ & \qquad \lesssim \|f\|_{L^2}^2 \|M^\delta g\|_{H^{\nu'/2}}^2 \|h\|_{H^{\nu'/2}}^2\, . \notag \end{align*} The above three estimates yield the desired estimate for $\cA_1(f,g,h)$. Next consider $\cA_2(f,g,h) = \cA_{2,1}(f,g,h) - \cA_{2,2}(f,g,h)$. Since $\theta \in [0,\pi/2]$ and $|\xi^-| = |\xi| \sin(\theta/2) \geq \la \xi_*\ra/2$, we have $\sqrt 2 |\xi| \geq \la \xi_*\ra$. Write \[ \cA_{2,j}= \iint_{\RR^6} K_j(\xi, \xi_*) \Big(M^\delta (\xi) - M^\delta (\xi-\xi_*)\Big) \hat f (\xi_* ) \hat g(\xi - \xi_* ) \bar{\hat h} (\xi ) d\xi d\xi_* \,. \] Then we have \begin{align*} &|K_2(\xi, \xi_*)| = \left|\int b \Big({\frac{\xi}{ | \xi |}} \cdot \sigma \Big)\hat \Phi_c(\xi_*) {\bf 1}_{ | \xi^- | \ge {\frac12}\la \xi_*\ra } d\sigma\right|\\ & \lesssim {\frac{1}{\la \xi_* \ra^{3+\gamma }}} \frac{\la \xi\ra^{\nu} }{\la \xi_*\ra^{\nu}}{\bf 1}_{\sqrt 2 |\xi| \geq \la \xi_* \ra} \notag \\ & \lesssim \frac{1 }{\la \xi_*\ra^{3+\gamma}}\left\{ {\bf 1}_{ \sqrt 2 |\xi| \geq \la \xi_* \ra \geq |\xi|/2} + \left( \frac{\la \xi \ra}{\la \xi_*\ra}\right)^{\nu} {\bf 1}_{ |\xi|/2 \ge \la \xi_* \ra }\right\} \notag \,, \end{align*} which shows the desired estimate for $\cA_{2,2}$, in exactly the same way as the estimation of $A_2$ and $A_3$. As for $\cA_{2,1}$, it suffices to work under the condition $|\xi_* \cdot \xi^-| \ge \frac1 2 |\xi^-|^2$. In fact, on the complement of this set, we have $|\xi_* -\xi^-| > | \xi_*|$, so that $\hat \Phi_c(\xi_*-\xi^-)$ enjoys the same decay bounds as $\hat \Phi_c(\xi_*)$ and this part can be estimated in the same way as $\cA_{2,2}$.
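Indeed, on the complementary set $|\xi_* \cdot \xi^-| < \frac12 |\xi^-|^2$ one has
\[
|\xi_* -\xi^-|^2 = |\xi_*|^2 - 2\,\xi_* \cdot \xi^- + |\xi^-|^2 > |\xi_*|^2\,,
\]
so that $\la \xi_* -\xi^- \ra \ge \la \xi_* \ra$, and the decay of $\hat \Phi_c(\xi_*-\xi^-)$ is at least as strong as that of $\hat \Phi_c(\xi_*)$.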
Therefore, we consider $\cA_{2,1,p}$ which is defined by replacing $K_1(\xi, \xi_*)$ by \[ K_{1,p}(\xi,\xi_*) = \int_{\SS^2} b \Big({\frac{\xi}{ | \xi |}} \cdot \sigma \Big) \hat \Phi_c ( \xi_*-\xi^-) {\bf 1}_{ | \xi^- | \geq {\frac 12} \la \xi_*\ra }{\bf 1}_{| \xi_* \,\cdot\,\xi^-| \ge {\frac1 2} | \xi^-|^2} d \sigma \,. \] By writing \[ {\bf 1}= {\bf 1}_{\la \xi_* \ra \geq |\xi|/2} {\bf 1}_{\la\xi-\xi_* \ra \leq{2}\la \xi_* - \xi^- \ra} + {\bf 1}_{\la \xi_* \ra \geq |\xi|/2} {\bf 1}_{\la\xi-\xi_* \ra > {2}\la \xi_* - \xi^-\ra} + {\bf 1}_{\la \xi_* \ra < |\xi|/2}, \] we decompose accordingly \begin{align*} \cA_{2,1,p} = B_1+ B_2 +B_3\,. \end{align*} On the sets corresponding to the above integrals, we have $\la \xi_* -\xi^- \ra \lesssim \, \la \xi_* \ra$, because $| \xi^- | \lesssim | \xi_*|$, which follows from $| \xi^-|^2 \le 2 | \xi_* \cdot\xi ^-| \lesssim |\xi^-|\, | \xi_*|$. Furthermore, on the sets for $B_1$ and $B_2$ we have $\la \xi \ra \sim \la \xi_* \ra$, so that $\la \xi_* -\xi^- \ra \lesssim \ \la \xi \ra$ and $b\,\, {\bf 1}_{ | \xi^- | \ge {\frac12} \la \xi_*\ra } {\bf 1}_{\la \xi_* \ra \geq |\xi|/2}$ is bounded. Putting again $\hat G(\xi) = \la \xi \ra^{\nu'/2} M^\delta (\xi) \hat g(\xi)$ and $\hat H(\xi) = \la \xi \ra^{\nu'/2}\hat h(\xi)$, by means of \eqref{simple-form} we have \begin{align*} |B_1| ^2 \lesssim& \left[\iiint \left |\frac{\hat \Phi_c (\xi_* - \xi^-)}{\la \xi_* - \xi^-\ra^{\nu'/2}} \right|^2 | \hat f (\xi_* )|^2 b\,\, {\bf 1}_{ | \xi^- | \ge {\frac1 2} \la \xi_*\ra } {\bf 1}_{\la \xi_* \ra \geq |\xi|/2} \right.\\ &\quad \times \left\{M^\delta (\xi_*)^2 \left( \frac{{\bf 1}_{ \la \xi -\xi *\ra \lesssim \la \xi_{ *}-\xi^- \ra}}{ \la \xi-\xi_*\ra^{\nu' } } + \frac{ \delta^{2N_0} {\bf 1}_{ \la \xi -\xi *\ra \lesssim \la \xi_{ *}-\xi^- \ra}}{ \la \xi-\xi_*\ra^{\nu'-2N_0 } } \right) \right.\\ &\quad \left. + \left.
\frac{{\bf 1}_{ \la \xi -\xi *\ra \lesssim \la \xi_{ *} -\xi^- \ra}}{ \la \xi-\xi_*\ra^{\nu'} } \right \} d\xi d\xi_* d \sigma \right] \left(\iiint | \hat G(\xi - \xi_* )|^2 |{\hat H} (\xi ) |^2 d\sigma d\xi d\xi_*\right) \,. \end{align*} Putting $u = \xi_* - \xi^-$, we have $\la u \ra \lesssim \la \xi_*\ra$, and $ M^\delta (\xi_*)^2 \lesssim {(1+ \delta \la u \ra)^{-2N_0}}$. Therefore, \begin{align*} &|B_1 | ^2 \lesssim \int |\hat f(\xi_*)|^2 \left\{ \sup_u {\la u \ra^{-( 6 +2\gamma+\nu')}} \int_{\la \xi^+ -u \ra \leq \la u \ra} \Big ( \frac{ \delta^{2N_0}{(1+ \delta \la u \ra)^{-2N_0}} }{ \la \xi^+-u\ra^{\nu'-2N_0 } } + \right. \\ & \qquad \qquad \left. \frac{1}{ \la \xi^+-u\ra^{\nu'} } \Big) d\xi^+ \right \}d\xi_* \,\, \|M^\delta(D)g\|^2_{H^{\nu'/2}} \|h\|_{H^{\nu'/2}}^2\\ & \qquad \lesssim \|f\|^2_{L^2} \|M^\delta (D)g\|^2_{H^{\nu'/2}} \|h\|_{H^{\nu'/2}}^2 \sup_u \frac{1}{\la u \ra^{ 3 +2(\gamma+ \nu')} }\,. \end{align*} Here we have used the change of variables $\xi \rightarrow \xi^+$ whose Jacobian is \begin{align*} &\Big|\frac{\partial \xi^+}{\partial \xi} \Big|=\frac{ \Big|I+ \frac{\xi}{|\xi|}\otimes \sigma\Big|} {8}\\ & =\frac{|1+ \frac{\xi}{|\xi|}\cdot\sigma|}{8}=\frac{\cos^2 (\theta/2)}{4}\ge \frac{1}{8}, \qquad \theta\in [0,\frac{\pi}{2}].
\notag \end{align*} As for $B_2$, we first note that, on the set of integration, $\xi^+ = \xi-\xi_* +u$ implies \[ \frac{\la \xi -\xi_*\ra}{2} \le \la \xi -\xi_* \ra - |u| \le \la \xi^+ \ra \le \la \xi - \xi_*\ra +|u| \lesssim \la \xi - \xi_*\ra \,,\] so that \[ M^\delta(\xi) \sim M^\delta(\xi^+) \sim M^\delta (\xi-\xi_*)\,, \] and hence we have by the Cauchy-Schwarz inequality \begin{align*} |B_2| ^2 \lesssim& \iiint |\hat f(\xi_*)|^2\, |\hat G(\xi -\xi_*)|^2 d \sigma d\xi d\xi_* \\ & \qquad \times \iiint \frac{ |\hat \Phi_c (\xi_* - \xi^-) |^2 } {\la \xi_* -\xi^- \ra^{2\nu'}} |{ \hat H} (\xi ) |^2 d\sigma d\xi d\xi_*\\ \lesssim & \|f\|^2_{L^2} \|M^\delta(D)g\|^2_{H^{\nu'/2}} \|h\|_{H^{\nu'/2}}^2\,, \end{align*} because $6 +2(\gamma + \nu')>3$. On the set of integration for $B_3$ we recall $\la \xi \ra \sim \la \xi - \xi_*\ra$ and \[ |M^\delta(\xi) -M^\delta(\xi-\xi_*) |\lesssim \frac{\la \xi_*\ra}{\la \xi \ra} M^\delta(\xi -\xi_*)\,, \] so that \begin{align*} |B_3 | ^2 \lesssim& \iiint b\,\, {\bf 1}_{ | \xi^- | \ge {\frac12} \la \xi_*\ra } \left(\frac{\la \xi_* \ra}{\la \xi \ra}\right)^{\nu} |\hat f(\xi_*)|^2 |\hat G(\xi -\xi_*)|^2 d \sigma d\xi d\xi_* \\ & \times \iiint b\,\, {\bf 1}_{ | \xi^- | \ge {\frac1 2} \la \xi_*\ra } \left(\frac{\la \xi_* \ra}{\la \xi \ra}\right)^{2-\nu} \frac{ |\hat \Phi_c (\xi_* - \xi^-) |^2} {\la \xi\ra^{2\nu'} } |{\hat H} (\xi ) |^2 d\sigma d\xi d\xi_*\,. \end{align*} We use the change of variables $\xi_* \rightarrow u= \xi_* -\xi^-$. Note that $| \xi ^-| \ge {\frac1 2} \la u +\xi^-\ra $ implies $|\xi^-| \geq \la u\ra/\sqrt {10}$, and that \[ \la \xi_* \ra \lesssim \la \xi_* - \xi^- \ra + |\xi| \sin \theta/2\,, \] which yields \[ \left(\frac{\la \xi_* \ra}{\la \xi \ra}\right)^{2-\nu} \lesssim \left(\frac{\la u \ra}{\la \xi \ra}\right)^{2-\nu} + \theta^{2-\nu}.
\] Then we have \begin{align*} & \iint b\,\, {\bf 1}_{ | \xi^- | \ge {\frac12} \la \xi_*\ra } \left(\frac{\la \xi_* \ra}{\la \xi \ra}\right)^{2-\nu} \frac{ |\hat \Phi_c (\xi_* - \xi^-) |^2} {\la \xi\ra^{2\nu'}} d\sigma d\xi_* \lesssim \int \frac{{\bf 1}_{\la u\ra \lesssim |\xi|}}{\la u \ra^{6+2(\gamma +\nu')}}\left( \frac {\la u \ra} {\la \xi \ra}\right)^{2\nu'} \\ &\qquad \qquad \times \Big( \int b\, {\bf 1}_{ | \xi^- | \gtrsim \la u \ra } \left(\frac{\la u \ra} {\la \xi \ra}\right)^{2-\nu} d\sigma + \int b \theta^{2-\nu} {\bf 1}_{ | \xi^- | \gtrsim \la u \ra } d\sigma \Big)du\\ &\qquad \qquad \lesssim \int \frac{du} {\la u \ra^{6+2(\gamma +\nu')}} < \infty\,, \end{align*} because \begin{align*} \int b \theta^{2-\nu} {\bf 1}_{ | \xi^- | \gtrsim \la u \ra } d\sigma \left \{ \begin{array}{ll} \lesssim \left( \frac {\la u \ra} {\la \xi \ra}\right)^{2-2\nu} \enskip &\mbox{if $\nu >1$}\\ \lesssim \log \frac {\la \xi\ra} {\la u \ra}&\mbox{if $\nu = 1$}\\ < \infty \enskip &\mbox{if $\nu <1$}. \end{array} \right. \end{align*} Thus we have the same bound for $B_3$. The proof of the proposition is then completed. \end{proof} Let us recall Proposition 2.9 {}from \cite{AMUXY2010}. \begin{prop}\label{prop2.9_amuxy3} Let $M(\xi)$ be a positive symbol in $S^{0}_ {1,0}$ in the form of $M(\xi) = \tilde M(|\xi|^2)$. Assume that there exist constants $c, C>0$ such that for any $s, \tau>0$ $$ c^{-1}\leq \frac{s}{\tau}\leq c \,\,\,\,\,\,\mbox{implies} \,\,\,\,\,\,\,\,C^{-1}\leq \frac{\tilde M(s)}{ \tilde M(\tau)}\leq C, $$ and $M(\xi)$ satisfies $$ |M^{(\alpha)}(\xi)| = |\partial_\xi^\alpha M(\xi)| \leq C_{\alpha} M(\xi) \la \xi \ra^{-|\alpha|}\, , $$ for any $\alpha\in\NN^3$. 
Then, if $0<\nu<1$, for any $N >0$ there exists a $C_N >0$ such that \begin{align}\label{10.8-2} &\left|( M(D_v) Q_{\bar c}(f,\, g)- Q_{\bar c}(f,\, M(D_v) g) ,\,\, h)_{L^2}\right | \hskip4cm \notag \\ &\qquad \qquad \leq C_N \|f\|_{L^1_{\gamma^+}} \Big(\|M(D_v)\, g\|_{L^2_{\gamma^+}} + \| g \|_{H^{-N}_{\gamma^+}} \Big) \|h\|_{L^2}. \end{align} Furthermore, if $1 < \nu <2$, for any $N>0$ and any $\varepsilon >0$ , there exists a $C_{N, \varepsilon}>0 $ such that \begin{align}\label{10.8-3} \left |(M(D_v) Q_{\bar c}(f,\, g)-Q_{\bar c}(f,\, M(D_v) g),\,\, h)_{L^2} \right | \hskip4cm \notag \\ \leq C_{N, \varepsilon} \|f\|_{L^1_{(\nu+ \gamma-1)^+}} \Big( \|M(D_v) g\|_{H^{\nu-1+\varepsilon} _{(\nu+ \gamma-1)^+}} + \|g\|_{H^{-N}_{\gamma^+}}\Big) \|h\|_{L^2}\, . \end{align} When $\nu = 1$ we have the same estimate as (\ref{10.8-3}) with $(\nu+ \gamma-1)$ replaced by $(\gamma+ \kappa)$ for any small $\kappa >0$. \end{prop} \subsection{Boundedness of $M^\delta(D_v)$ on the triple norm}\label{ap-5} Instead of $M^\delta(\xi)$, we consider a little more general symbol $M(\xi) \in S^0_{1,0}$ satisfying conditions in Proposition \ref{prop2.9_amuxy3}. \begin{lem}\label{M-bounded-triple} Let $\gamma >-3$ and $0 < \nu <2$. Then we have \begin{align*} |\!|\!|M(D_v) g|\!|\!|^2 \lesssim |\!|\!|g|\!|\!|^2. \end{align*} \end{lem} \begin{proof} The proof is based on arguments in the subsection 2.3 of \cite{AMUXY2}. Since $J_2^{\Phi_\gamma}(g) \sim \|g\|^2_{L^2_{(\nu+\gamma)/2}}$ it suffices to show \begin{align}\label{bound-M-tri} J_1^{\Phi_\gamma}(M g) \lesssim J_1^{\Phi_\gamma}(g) + \|g\|^2_{L^2_{(\nu+\gamma)/2}} + \|g\|^2_{H^s_{\gamma/2}}\,, \end{align} in view of Proposition 2.2 of \cite{AMUXY2}, that is, there exist two generic constants $C_1, C_2>0$ such that \[ C_1 \left\{\left\| g\right\|^2_{H^s_{\gamma/2}(\RR^3_v)}+\left\| g\right\|^2_{L^2_{(\nu+\gamma)/2}(\RR^3_v)}\right\} \leq |\!|\!| g |\!|\!|^2 \leq C_2 \left\| g\right\|^2_{H^s_{(\nu+\gamma)/2}(\RR^3_v)}\,. 
\] Since $J_1^{\Phi_\gamma} (g) \sim J_1^{\Phi_0} (\la v \ra^{\gamma/2} g) $ modulo $ \|g\|^2_{L^2_{(\nu+\gamma)/2}}$, and since we have \begin{align*} J_1^{\Phi_0} (\la v \ra^{\gamma/2}M g) &\le 2 \Big(J_1^{\Phi_0} (M \la v \ra^{\gamma/2} g) + J_1^{\Phi_0} ([\la v \ra^{\gamma/2}, M] g )\Big)\\ &\lesssim J_1^{\Phi_0} (M \la v \ra^{\gamma/2} g) + \|[\la v \ra^{\gamma/2}, M] g\|^2_{H^s_s}\,, \end{align*} it suffices to show \eqref{bound-M-tri} with $\gamma =0$. We recall (2.21) of \cite{AMUXY2}: \begin{align}\label{part1-intermid} J_1^{\Phi_0}(g)&=\iiint b(\cos\theta) {\mu} _\ast ( g'- g)^2dv_*d\sigma dv \notag \\ &=\frac{1}{(2\pi)^3}\iint b \Big(\frac{\xi}{|\xi|}\cdot \sigma \Big) \Big(\widehat {\mu} (0) | \widehat {g} (\xi ) - \widehat { g} (\xi^+) |^2 \\ & \qquad \qquad \qquad + 2 \textrm{Re}\, \Big(\widehat{ \mu }(0) - \widehat { \mu} (\xi^-) \Big) \widehat {g} (\xi^+ ) \overline{\widehat {g}} (\xi ) \Big)d\xi d\sigma . \notag \end{align} Since $\widehat { \mu}(\xi)$ is real-valued, it follows that \begin{align*} \textrm{Re}\, \Big(\widehat { \mu} (0) - \widehat { \mu} (\xi^-) \Big) \widehat { g} (\xi^+ ) \overline{\widehat { g} } (\xi ) &= \Big (\int \big (1- \cos (v\cdot \xi^-)\big) \mu(v) dv\Big)\, \textrm{Re}\,\widehat { g} (\xi^+ ) \overline{\widehat { g} } (\xi )\\ &\lesssim \min \{ \la \xi \ra^2 \theta^2 \,, \, 1 \} |\widehat { g} (\xi^+ ) \overline{\widehat { g} } (\xi )|. \end{align*} Therefore the second term of the right-hand side of \eqref{part1-intermid} is estimated by $\|g\|^2_{H^{\nu/2}}$ with a constant factor.
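For the reader's convenience, we sketch why this last bound holds. Assuming, as is standard in the non-cutoff setting, the angular singularity $b(\cos\theta) \approx \theta^{-2-\nu}$ near $\theta = 0$, and using $d\sigma \sim \sin\theta\, d\theta d\phi$, we have
\begin{align*}
\int b \Big(\frac{\xi}{|\xi|}\cdot \sigma \Big) \min \{ \la \xi \ra^2 \theta^2 \,, \, 1 \}\, d\sigma
&\lesssim \int_0^{\pi/2} \theta^{-1-\nu} \min \{ \la \xi \ra^2 \theta^2 \,, \, 1 \}\, d\theta \\
&\lesssim \la \xi \ra^2 \int_0^{\la \xi \ra^{-1}} \theta^{1-\nu}\, d\theta
+ \int_{\la \xi \ra^{-1}}^{\pi/2} \theta^{-1-\nu}\, d\theta
\lesssim \la \xi \ra^{\nu}\,,
\end{align*}
and the claimed estimate then follows from $2 |\widehat{g} (\xi^+) \overline{\widehat{g}}(\xi)| \le |\widehat{g} (\xi^+)|^2 + |\widehat{g} (\xi)|^2$, together with $\la \xi^+ \ra \sim \la \xi \ra$ and the boundedness of the Jacobian of $\xi \mapsto \xi^+$ for $\theta \in [0,\frac{\pi}{2}]$.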
Similarly we have \begin{align*} J_1^{\Phi_0}( M g)&=\iiint b(\cos\theta) {\mu} _\ast ( (Mg)'- Mg)^2dv_*d\sigma dv\\ &\lesssim \iint b \Big(\frac{\xi}{|\xi|}\cdot \sigma \Big) \widehat {\mu} (0) | M(\xi) \widehat {g} (\xi ) -M(\xi^+) \widehat { g} (\xi^+) |^2 d\xi d\sigma +\|g\|^2_{H^{\nu/2}} \\ &\lesssim J_1^{\Phi_0}(g) + \iint b \Big(\frac{\xi}{|\xi|}\cdot \sigma \Big) | M(\xi) -M(\xi^+)|^2|\widehat {g} (\xi )|^2d\xi d\sigma +\|g\|^2_{H^{\nu/2}}\\ &\lesssim J_1^{\Phi_0}(g) +\|g\|^2_{H^{\nu/2}}\,, \end{align*} because $ | M(\xi) -M(\xi^+)|\lesssim M(\xi) \theta^2$ ( see (2.3.5) of \cite{AMUXY2010}, for example). \end{proof} \subsection{Mollifier with respect to $x$ variable}\label{ap-6} \begin{lem}\label{x-moll} Let $S \in C_0^\infty(\RR)$ satisfy $0 \le S \le 1$ and \[ S(\tau) = 1, \enskip |\tau| \le 1; \enskip S(\tau) = 0, \enskip |\tau| \ge 2. \] Put $S_\delta(D_x) = S(\delta D_x)$ for $\delta >0$. Then \begin{align}\label{1st} &\left|\int_0^T \right. \Big(S_\delta(D_x)\left. \Gamma(f, g) - \Gamma(f, S_\delta(D_x)g), \, \, h\Big)_{L^2(\RR^6_{x,v})}dt \right| \notag \\ & \lesssim \|\nabla f \|_{L^\infty([0,T]\times \RR_x^3; L^2_v )} \|\la v \ra^{|\gamma|/2 + \nu} g\|_{L^2([0,T]\times \RR^6_{x,v})}\notag\\ &\qquad \qquad \qquad \qquad \times \|\delta h\|_{L^2([0,T]\times \RR^3_x; H^{\nu}_{\gamma/2}(\RR^3_v))}. \end{align} \end{lem} \begin{proof} The proof is similar as the one of Lemma 3.4 in \cite{AMUXY2010}. If $$K_\delta(z) = \displaystyle -\frac{z}{\delta^4} \cF^{-1}(S)\Big(\frac{z}{\delta}\Big)$$ then we have \begin{align*} &\left|\int_0^T \Big(S_\delta(D_x)\Gamma(f, g) - \Gamma(f, S_\delta(D_x)g), \, \, h\Big)_{L^2(\RR^6)}dt \right|\\ &=\left| \int_0^1 \left\{ \int_{\RR_x^3 \times \RR_y^3} K_\delta (x-y) \right.\right. \\ & \left. \left. 
\times \int_0^T \Big(\Gamma \big (\nabla f(t, x + \tau(y-x)), \cdot), \delta g(t, y, \cdot)\big), \, h(t, x, \cdot)\Big)_{L^2(\RR^3_v)} dt dx dy\right \}d\tau\right|\\ &\lesssim \delta \|\nabla f \|_{L^\infty([0,T]\times \RR_x^3; L^2_v)}\int_0^T\int_{\RR^3_x} \Big(| K_\delta | * \|g(t,\cdot) \|_{L^2_{|\gamma|/2 +\nu}} \Big)(x) \, \|h(t,x) \|_{H^\nu_{\gamma/2}} dx dt\, \end{align*} where we have used \eqref{different-g-h}. We get \eqref{1st} since $\|K_\delta\|_{L^1_x} = \|K_1\|_{L^1_x} $. \end{proof} \noindent {\bf Acknowledgements:} The research of the first author was supported in part by Grant-in-Aid for Scientific Research No.25400160, Japan Society for the Promotion of Science. \end{document}
\begin{document} \title{Note on semi-proper orientations of outerplanar graphs\footnote{The first arXiv version of this paper was submitted in April 2020. The only technical differences between the first arXiv version and this one are in terminology and notation. The results are the same and were obtained independently of \cite{DH}.}} \begin{abstract} A \textit{semi-proper orientation} of a given graph $G$, denoted by $(D,w)$, is an orientation $D$ together with a weight function $w: A(D)\rightarrow \mathbb{Z}_+$ such that the in-weights of any two adjacent vertices are distinct, where the \textit{in-weight} of $v$ in $D$, denoted by $w^-_D(v)$, is the sum of the weights of the arcs towards $v$. The \textit{semi-proper orientation number} of a graph $G$, denoted by $\overrightarrow{\chi}_s(G)$, is the minimum, over all semi-proper orientations $(D,w)$ of $G$, of the maximum in-weight of a vertex in $D$. This parameter was first introduced by Dehghan (2019). When the weights of all edges equal one, this parameter coincides with the \textit{proper orientation number} of $G$. An \textit{optimal semi-proper orientation} is a semi-proper orientation $(D,w)$ such that $\max_{v\in V(G)}w_D^-(v)=\overrightarrow{\chi}_s(G)$. Ara\'ujo et al. (2016) showed that $\overrightarrow{\chi}(G)\le 7$ for every cactus $G$ and that the bound is tight. We prove that for every cactus $G$, $\overrightarrow{\chi}_s(G) \le 3$ and the bound is tight. Ara\'{u}jo et al. (2015) asked whether there is a constant $c$ such that $\overrightarrow{\chi}(G)\le c$ for all outerplanar graphs $G.$ While this problem remains open, we consider it in the weighted case. We prove that for every outerplanar graph $G,$ $\overrightarrow{\chi}_s(G)\le 4$ and the bound is tight. \noindent\textbf{Keywords:} proper orientation number; semi-proper orientation number; outerplanar graph \end{abstract} \section{Introduction}\label{intro} \baselineskip 17pt For basic notation in graph theory, the reader is referred to \cite{BM}.
All graphs in this paper are considered to be simple. An \textit{orientation} $D$ of a graph $G$ is a digraph obtained from $G$ by replacing each edge by exactly one of the two possible arcs with the same endvertices. The \textit{in-degree} of $v$ in $D$, denoted by $d_D^-(v)$, is the number of arcs towards $v$ in $D$ for each $v\in V(G)$. We will use the notation without the subscript when the orientation $D$ is clear from the context. For a given undirected graph $G$, an orientation $D$ of $G$ is \textit{proper} if $d^-(u)\ne d^-(v)$ for all $uv\in E(G)$. An orientation with maximum in-degree at most $k$ is called a \textit{$k$-orientation}. The \textit{proper orientation number} of a graph $G$, denoted by $\overrightarrow{\chi}(G)$, is the minimum integer $k$ such that $G$ admits a proper $k$-orientation. The existence of proper orientations was demonstrated by Borowiecki et al. in \cite{BG}, where it was shown that every graph $G$ has a proper $\Delta(G)$-orientation, where $\Delta(G)$ is the maximum degree of $G$. Later, Ahadi and Dehghan \cite{AD} introduced the concept of the proper orientation number. This parameter has been widely investigated recently; for more details, we refer the reader to \cite{AD,ADM,AG,AC,AH,KM}. Note that every proper orientation of a graph $G$ induces a proper vertex coloring of $G$. Hence, we have the following sequence of inequalities: \begin{align}\label{e1} \omega(G)-1\le \chi(G)-1\le \overrightarrow{\chi}(G)\le \Delta(G). \end{align} These inequalities are best possible since, for a complete graph $K_n$, $\omega(K_n)-1=\chi(K_n)-1=\overrightarrow{\chi}(K_n)=\Delta(K_n)=n-1.$ Ahadi and Dehghan \cite{AD} proved that it is NP-complete to compute $\overrightarrow{\chi}(G)$ even for planar graphs. Ara\'ujo et al. \cite{AC} strengthened this result by showing that it holds for bipartite planar graphs of maximum degree 5. The following two problems have received great attention from researchers.
\begin{problem}[\cite{AC}]\label{pro1} Is there a constant $c$ such that $\overrightarrow{\chi}(G)\le c$ for every planar graph $G$? \end{problem} \begin{problem}[\cite{AH}]\label{pro2} Is there a constant $c$ such that $\overrightarrow{\chi}(G)\le c$ for every outerplanar graph $G$? \end{problem} \noindent Knox et al.~\cite{KM} proved that $\overrightarrow{\chi}(G)\le 5$ for every $3$-connected planar bipartite graph $G$, and Noguchi~\cite{N} showed that $\overrightarrow{\chi}(G)\le 3$ for any bipartite planar graph $G$ with $\delta(G)\ge 3$. Ara\'ujo et al. \cite{AH} proved that $\overrightarrow{\chi}(G)\le 7$ for any cactus $G$, i.e., an outerplanar graph in which every 2-connected component is either an edge or a cycle, and that $\overrightarrow{\chi}(T)\le 4$ for any tree $T$ (see also \cite{KM} for a short algorithmic proof). Ai et al.~\cite{AG} proved that $\overrightarrow{\chi}(G)\le 3$ for any triangle-free, $2$-connected outerplanar graph $G$ and $\overrightarrow{\chi}(G)\le 4$ for any triangle-free, bridgeless or tree-free outerplanar graph $G$. Later, Ara\'{u}jo et al. \cite{AS} studied the notion of a weighted proper orientation of graphs. Recently, Dehghan \cite{D} introduced the notion of a semi-proper orientation of graphs. A \textit{semi-proper orientation} of a given graph $G$, denoted by $(D,w)$, is an orientation $D$ together with a weight function $w: A(D)\rightarrow \mathbb{Z}_+$ such that the in-weights of any two adjacent vertices are distinct, where the \textit{in-weight} of $v$ in $D$, denoted by $w^-_D(v)$, is the sum of the weights of the arcs towards $v$. Let $\mu^-(D,w)$ be the maximum of $w^-_D(v)$ over all vertices $v$ of $G.$ We drop the subscript when the orientation and weight function are clear from the context. The \textit{semi-proper orientation number} of a graph $G$, denoted by $\overrightarrow{\chi}_s(G)$, is the minimum of $\mu^-(D,w)$ over all semi-proper orientations $(D,w)$ of $G$.
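Both parameters can be computed by brute force on small graphs: enumerate the $2^{|E(G)|}$ orientations for $\overrightarrow{\chi}(G)$ and, for $\overrightarrow{\chi}_s(G)$, additionally the arc weights, where by a result of Dehghan \cite{D} (Theorem~\ref{D} below) it suffices to try weights $1$ and $2$. A Python sketch (the graph encoding and function names are ours, purely for illustration):

```python
from itertools import product

def proper_orientation_number(edges, n):
    """Brute-force chi->(G): minimum over proper orientations of the
    maximum in-degree; edges are pairs (u, v) with vertices 0..n-1."""
    best = None
    for dirs in product((0, 1), repeat=len(edges)):
        indeg = [0] * n
        for (u, v), d in zip(edges, dirs):
            indeg[v if d == 0 else u] += 1   # d == 0 orients u -> v
        if all(indeg[u] != indeg[v] for u, v in edges):  # proper
            best = max(indeg) if best is None else min(best, max(indeg))
    return best

def semi_proper_orientation_number(edges, n):
    """Brute-force chi_s->(G): as above, but each arc also carries a
    weight 1 or 2, and in-weights of adjacent vertices must differ."""
    best = None
    for choice in product([(d, w) for d in (0, 1) for w in (1, 2)],
                          repeat=len(edges)):
        inw = [0] * n
        for (u, v), (d, w) in zip(edges, choice):
            inw[v if d == 0 else u] += w
        if all(inw[u] != inw[v] for u, v in edges):      # semi-proper
            best = max(inw) if best is None else min(best, max(inw))
    return best

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(proper_orientation_number(K4, 4))       # 3 = Delta(K4)
print(semi_proper_orientation_number(K4, 4))  # 3 = omega(K4) - 1
```

On $K_4$ both parameters equal $3$, in agreement with the chain of inequalities above.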
An \textit{optimal semi-proper orientation} is a semi-proper orientation $(D,w)$ such that $\mu^-(D,w)= \overrightarrow{\chi}_s(G)$. \begin{theorem}[\cite{D}]\label{D} Every graph $G$ has an optimal semi-proper orientation $(D,w)$ such that the weight of each edge is one or two. \end{theorem} It is easy to see that $\overrightarrow{\chi}_s(G)\le \overrightarrow{\chi}(G)$. Moreover, by the definition of a semi-proper orientation, the in-weights of adjacent vertices are different. Consequently, by (\ref{e1}), we have \begin{align} \label{e2} \omega(G)-1\le \chi(G)-1\le \overrightarrow{\chi}_s(G)\le \overrightarrow{\chi}(G)\le \Delta(G). \end{align} Dehghan \cite{D} observed that there exist graphs $G$ such that $\overrightarrow{\chi}_s(G) < \overrightarrow{\chi}(G).$ Indeed, as observed in \cite{D}, we have $\overrightarrow{\chi}_s(T)\le 2$ for every tree $T$, while there are trees $T$ with $\overrightarrow{\chi}(T)=4$ \cite{AC}. Thus, one natural problem is to study the gap between these two parameters. \begin{problem}[\cite{D}]\label{pro3} Is there any constant $c_1$ such that $\overrightarrow{\chi}(G)-\overrightarrow{\chi}_s(G)\le c_1$ for every graph $G$? \end{problem} In this paper, we prove a sharp upper bound for the semi-proper orientation number of cacti in Theorem \ref{cacti}, which implies that $c_1\ge 4$ if $c_1$ exists, due to the sharp upper bound $\overrightarrow{\chi}(G)\leq 7$ for cacti proved in \cite{AH}. In \cite{D}, Dehghan showed that it is NP-complete to determine whether a given planar graph $G$ with $\overrightarrow{\chi}_s(G)=2$ has an optimal semi-proper orientation $(D,w)$ such that the weight of each edge is one. He also proved that the problem of determining the semi-proper orientation number of planar bipartite graphs is NP-hard. We prove the following two results. Theorem \ref{cacti} gives a tight bound for cacti in the weighted case.
Note that this theorem and the tight bound on the proper orientation number of cacti imply that $c_1\ge 4$ in Problem \ref{pro3} (provided that $c_1$ exists). While Problem \ref{pro2} remains open, we consider the problem in the weighted case. Theorem \ref{main} solves this problem. Due to Theorem~\ref{D}, the bounds in these theorems can be achieved by optimal semi-proper orientations in which every edge weight is 1 or 2. \begin{theorem} \label{cacti} For every cactus $G,$ we have $\overrightarrow{\chi}_s(G) \le 3$ and this bound is tight. \end{theorem} \begin{theorem}\label{main} For every outerplanar graph $G,$ we have $\overrightarrow{\chi}_s(G) \le 4$ and this bound is tight. \end{theorem} While the tightness proof of the bound in Theorem \ref{cacti} is quite easy, that of Theorem \ref{main} is more involved, as an optimal semi-proper orientation of a significantly larger graph is considered. The remainder of the paper is organized as follows. We provide some definitions and simple lemmas on orientations of paths in Section~\ref{pre}. Next, we study (weighted) proper orientations of cacti and outerplanar graphs, and prove Theorems~\ref{cacti} and \ref{main} in Sections~\ref{cacti1} and \ref{outer}, respectively. \section{Preliminaries}\label{pre} Let us briefly recall some graph theory terminology and notation used in this paper. For more information on blocks and ear decompositions, see e.g. \cite{BM}. We denote a path and a cycle by $P$ and $C,$ respectively, and the order of $P$ and $C$ by $|P|$ and $|C|,$ respectively. We call an edge $e$ an {\it $a$-$b$ edge} if the endpoints of $e$ have in-weights $a$ and $b$, respectively. A \textit{block} of a graph $G$ is a maximal nonseparable subgraph of $G$, and a block of order $i$ is said to be an \textit{$i$-block}. Note that every $i$-block with $i\ge 3$ is a 2-connected graph, a 2-block is an edge (a bridge) of $G$, and a 1-block is an isolated vertex of $G.$ Thus, if $G$ is connected, it has no 1-blocks.
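The blocks of a graph can be computed in linear time by the classical Hopcroft--Tarjan depth-first search; the following Python sketch (names and encoding are ours) returns the edge sets of the blocks:

```python
def blocks(n, edges):
    """Edge sets of the blocks of a simple graph (vertices 0..n-1),
    via the classical Hopcroft-Tarjan depth-first search."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc = [0] * n          # discovery times (0 = unvisited)
    low = [0] * n           # low-link values
    timer = [1]
    stack, out = [], []     # stack of edge indices, list of blocks

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, i in adj[u]:
            if i == parent_edge:
                continue
            if disc[v] == 0:                   # tree edge
                stack.append(i)
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:          # u separates a block
                    block = set()
                    while True:
                        e = stack.pop()
                        block.add(e)
                        if e == i:
                            break
                    out.append({edges[e] for e in block})
            elif disc[v] < disc[u]:            # back edge, pushed once
                stack.append(i)
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == 0:
            dfs(s, -1)
    return out
```

On the cactus formed by two triangles joined by a bridge, this returns three blocks: the two triangles and the bridge.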
The \textit{block tree} associated to $G$ is the tree $T(G)$ with vertex set $V(T(G))=\{v_i\colon B_i\text{ is a block of }G \}\cup S$, where $S$ is the set of cut vertices of $G$, and edge set $E(T(G))=\{v_is_j\colon s_j\in B_i \}$. Choose a block $B_0$ of $G$ as a root of $T(G)$, and run the depth-first search (DFS) algorithm on $T(G)$ from $B_0$. Then we obtain an ordering of the blocks of $G$ as $B_0,B_1,\dots,B_p$. If a cut vertex $s_i\in B_i\cap B_j$ and $j<i$, then we say that $s_i$ is the \textit{root} of $B_i$. For a subgraph $H$ of $G$, an \textit{ear} of $H$ is a non-trivial path $P$ in $G$ whose end-vertices lie in $H$ but whose internal vertices do not. We say that an ear is \textit{attached} to the corresponding ends in $H$, and we call such a pair of end-vertices \textit{active}. In particular, if the vertices of an active pair are adjacent to each other, we call the pair an \textit{active edge}. It is well known that every 2-connected graph $G$ has an {\em ear decomposition} defined as follows. \begin{itemize} \item Choose a cycle $C_0$ of $G$ and let $G_0=C_0$. \item Add an ear $P_i$ attached to an active pair $(a_i,b_i)$ of $G_i$, where $a_i\ne b_i$, and let $G_{i+1}=G_i\cup P_i$, $0\le i<k$. \item $G_k=G$. \end{itemize} Now we introduce a class of outerplanar graphs, called \textit{universal outerplanar graphs}, which will be used in our proof. A \textit{universal outerplanar graph}, denoted by UOP($n$), is defined as follows: \begin{itemize} \item UOP(1) is a triangle. \item Add ears of length 2 to all edges of UOP(1); this gives UOP(2). The newly added edges are called \textit{outeredges} of UOP(2) and the newly added vertices are called \textit{outervertices} of UOP(2). \item UOP($k+1$) is obtained from UOP($k$) by adding ears of length 2 to all outeredges of UOP($k$). \end{itemize} We give some lemmas for orientations on paths below, which will be used later. \begin{lemma}[\cite{AG}] \label{path1} Let $P=v_1v_2\dots v_n $ be a path of length $n-1$.
\begin{enumerate} \item\label{p1-0} If $n\geq 7$, then there are three semi-proper orientations with weights of all edges one such that $w^{-}(v_1)=0$ and $w^{-}(v_n)=0$ and \begin{enumerate} \item $w^{-}(v_2)=2$, $w^{-}(v_{n-2})=0$, $w^{-}(v_{n-1})=2$, and \item $w^{-}(v_2)=1$, $w^{-}(v_3)=2$, $w^{-}(v_{n-3})=0$, $w^{-}(v_{n-2})=2$, $w^{-}(v_{n-1})=1$, and \item $w^{-}(v_2)=1$, $w^{-}(v_{n-2})=0$, $w^{-}(v_{n-1})=2$, respectively. \end{enumerate} \item\label{p1-1} If $n=6$, then there are two semi-proper orientations with weights of all edges one such that $w^{-}(v_1)=0$ and $w^{-}(v_6)=0$ and \begin{enumerate} \item $w^{-}(v_2)=2$, $w^{-}(v_3)=0$, $w^{-}(v_4)=1$, $w^{-}(v_5)=2$, and \item $w^{-}(v_2)=1$, $w^{-}(v_3)=2$, $w^{-}(v_4)=0$, $w^{-}(v_5)=2$, respectively. \end{enumerate} \item\label{p1-2} If $n=5$, then there are two semi-proper orientations with weights of all edges one such that $w^{-}(v_1)=0$ and $w^{-}(v_5)=0$ and \begin{enumerate} \item $w^{-}(v_2)=1,$ $w^{-}(v_3)=2,$ $w^{-}(v_4)=1,$ and \item $w^{-}(v_2)=2,$ $w^{-}(v_3)=0,$ $w^{-}(v_4)=2,$ respectively. \end{enumerate} \item\label{p1-3} If $n=4$, then there exists a semi-proper orientation with weights of all edges one such that $w^{-}(v_1)=0,$ $w^{-}(v_2)=1,$ $w^{-}(v_3)=2$ and $w^{-}(v_4)=0.$ \end{enumerate} \end{lemma} \begin{lemma} \label{path2} Let $P=v_1v_2v_3$ be a path with length two. Then there exist three semi-proper orientations with weights of all edges at most two such that $w^-(v_1)=w^-(v_3)=0$, \begin{enumerate} \item\label{p2-1} weights of all edges are one and $w^-(v_2)=2$. \item\label{p2-2} $w^-(v_2)=3$, $w(v_1v_2)=2$ and $w(v_3v_2)=1$. \item\label{p2-3} $w^-(v_2)=4$ and $w(v_1v_2)=w(v_3v_2)=2$. 
\end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item \includegraphics[scale=0.2]{p2-1.png} \item \includegraphics[scale=0.2]{p2-2.png} \item \includegraphics[scale=0.2]{p2-3.png} \end{enumerate} \end{proof} \begin{lemma} \label{path3} Let $P=v_1v_2v_3v_4$ be a path of length $3$. Then there exist two semi-proper orientations with weights of all edges at most two such that $w^-(v_1)=w^-(v_4)=0$, \begin{enumerate} \item\label{p3-1} $w^-(v_2)=2$, $w^-(v_3)=3$, $w(v_1v_2)=w(v_3v_4)=2$ and $w(v_2v_3)=1$. \item\label{p3-2} $w^-(v_2)=1$, $w^-(v_3)=3$, $w(v_2v_3)=2$ and $w(v_1v_2)=w(v_3v_4)=1$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item \includegraphics[scale=0.2]{p3-1.png} \item \includegraphics[scale=0.2]{p3-2.png} \end{enumerate} \end{proof} \begin{lemma}\label{path4} Let $P=v_1v_2v_3v_4v_5$ be a path with length $4$. Then there exists a semi-proper orientation with weights of all edges at most two such that $w^-(v_1)=w^-(v_3)=w^-(v_5)=0$, $w^-(v_2)=2$, $w^-(v_4)=3$, $w(v_4v_5)=2$ and other edges with weight one. \end{lemma} \begin{proof} \includegraphics[scale=0.2]{p4.png} \end{proof} \begin{lemma}\label{path5} Let $P=v_1v_2v_3v_4v_5v_6$ be a path with length $5$. Then there exists a semi-proper orientation with weights of all edges at most two such that $w^-(v_1)=w^-(v_6)=0$, $w^-(v_2)=w^-(v_5)=1$, $w^-(v_3)=3$, $w^-(v_4)=2$, $w(v_2v_3)=w(v_4v_5)=2$ and other edges with weight one. \end{lemma} \begin{proof} \includegraphics[scale=0.2]{p5.png} \end{proof} \section{Proof of Theorem~\ref{cacti}}\label{cacti1} Since $G$ is a cactus, we can label blocks of $G$ and construct a sequence of induced subgraphs of $G$ as follows. \begin{itemize} \item Choose a block $B_0$ as the root of block tree $T(G)$, i.e. $G_0=B_0$. \item Run DFS algorithm on $T(G)$ from $B_0$ to get an ordering of blocks in $G$. \item Add block $B_i$ to its root $s_i$, i.e., $G_i=G_{i-1}+B_i$, $1\le i\le k$. \item $G_k=G$. 
\end{itemize} Note that $B_i$ is either a $2$-block or a cycle in $G$. We prove the theorem by induction on $k$. When $k=0$, orient $B_0$ using Lemmas~\ref{path1} and \ref{path2}, or orient it arbitrarily if $B_0$ is a 2-block. By the induction hypothesis, $G_{k-1}$ has a desired orientation $(D_{k-1},w)$. Consider the case when $B_k$ is a cycle $C$. If $|C|=3$, we can apply Lemmas \ref{path3} and \ref{path1} by setting $v_1=v_4=s_k.$ If $w^-(s_k)=1$ in $(D_{k-1},w)$, then we use Lemma~\ref{path3}-\ref{p3-1} to orient $C$ such that $w^-(v_2)=2$ and $w^-(v_3)=3$. If $w^-(s_k)=2$ in $(D_{k-1},w)$, then use Lemma~\ref{path3}-\ref{p3-2} to orient $C$ such that $w^-(v_2)=1$ and $w^-(v_3)=3$. If $w^-(s_k)\in \{0,3\}$ in $(D_{k-1},w)$, then use Lemma~\ref{path1}-\ref{p1-3} to orient $C$ such that $w^-(v_2)=1$ and $w^-(v_3)=2$. If $|C|=4,$ we can apply Lemma \ref{path1}-\ref{p1-2} by setting $v_1=v_5=s_k$ and using (a) if $w^-(s_k)\ne 1$ in $(D_{k-1},w)$ and (b) otherwise. If $|C|=5$, set $v_1=v_6=s_k$. If $w^-(s_k)=2$ in $G_{k-1}$, then use Lemma~\ref{path5}. Otherwise, use Lemma~\ref{path1}-\ref{p1-1}(a). Now we assume $|C|\ge 6$ and set $v_1=v_n=s_k.$ If $w^-(s_k)=1$ in $(D_{k-1},w)$, then use Lemma~\ref{path1}-\ref{p1-0}(a) and otherwise Lemma~\ref{path1}-\ref{p1-0}(b). Now consider the case when $B_k$ is a $2$-block $s_k v$. Then orient it from $s_k$ to $v$. If $w^-(s_k)=1$ in $(D_{k-1},w)$, then let $w(s_k v)=2$ so that $w^-(v)=2$. Otherwise, let $w(s_k v)=1$ so that $w^-(v)=1$. In both cases, we have $w^-_{(D_k,w)}(s_k)=w^-_{(D_{k-1},w)}(s_k)$. This implies that $G_k$ has a desired orientation $(D_k,w)$. \begin{figure} \caption{A tight example of Theorem~\ref{cacti}.} \label{gif} \end{figure} A tight example is given in Figure~\ref{gif}. Let $G$ be a cactus with vertex set $V(G)=\{v_1,v_2,v_3,v_4,v_5,v_6 \}$ and edge set $E(G)=\{v_1v_2,v_1v_3,v_2v_3,v_3v_4,v_4v_5,v_4v_6,v_5v_6 \}$. Suppose $G$ has a semi-proper 2-orientation $D$.
Without loss of generality, we may assume that edge $v_3v_4$ in $G$ is oriented in $D$ from $v_4$ to $v_3.$ Hence, $1\le w_D^-(v_3)\le 2.$ Since $w_D^-(v_3)\le 2,$ without loss of generality, we may assume that edge $v_2v_3$ is oriented from $v_3$ to $v_2.$ Thus, $1\le w_D^-(v_2)\le 2.$ We cannot have $1\le w_D^-(v_1)\le 2$ as well since that would imply that there are vertices $v_i,v_j$ with $1\le i<j\le 3$ such that $w_D^-(v_i)=w_D^-(v_j).$ Hence, $w_D^-(v_1)=0,$ but then both $v_1v_2$ and $v_1v_3$ must be oriented from $v_1$ implying that $w_D^-(v_2)=w_D^-(v_3)=2,$ a contradiction. \qed \section{Proof of Theorem \ref{main}}\label{outer} We start from the following: \begin{lemma} \label{conn1} Let $G$ be a 2-connected outerplanar graph and let $s$ be an arbitrary vertex of $G.$ Then there exists a semi-proper orientation $(D,w)$ such that $\overrightarrow{\chi}_s(G) \le 4$ and $w^-(s)=0$. \end{lemma} \begin{proof} Since $G$ is $2$-connected, recall that we can construct $G$ by the process of ear decomposition as follows. \begin{itemize} \item Choose a cycle $C_0$ containing $s$ and let $G_0=C_0$. \item Add an ear $P_i$ attached to an active pair $(a_i,b_i)$ of $G_i$, where $a_i\ne b_i$ and let $G_{i+1}=G_i\cup P_i$, $0\le i<k$. \item $G_k=G$. \end{itemize} Note that $a_i$ is adjacent to $b_i$ as $G$ is outerplanar. We prove this lemma by induction on $k$. When $k=0$, orient $G_0$ using Lemma~\ref{path1} or orient it in any other proper way such that $w^-(s)=0$. By the induction hypothesis, $G_{k-1}$ has a desired orientation $(D_{k-1},w)$. Assume that $e=\{a_k,b_k\}$ is an active edge of $P_k=v_1v_2 \dots v_n$ and assume without loss of generality that $a_k=v_1$, $b_k=v_n$, $w^{-}(a_k)<w^{-}(b_k)$ in $(D_{k-1},w)$. We consider the following cases. {\bf Case 1} $|P_k|=3$. If $e$ is a 2-3 edge in $(D_{k-1},w)$, then use Lemma~\ref{path2}-\ref{p2-3} to orient $P_k$ such that $w^-(v_2)=4$. 
If neither $a_k$ nor $b_k$ has in-weight 2, then use Lemma~\ref{path2}-\ref{p2-1} to orient $P_k$ such that $w^-(v_2)=2$. If neither $a_k$ nor $b_k$ has in-weight 3, then use Lemma~\ref{path2}-\ref{p2-2} to orient $P_k$ such that $w^-(v_2)=3$. {\bf Case 2} $|P_k|=4$. If $w^-(a_k)=1$ or $w^-(b_k)=2$ in $(D_{k-1},w)$, then use Lemma~\ref{path1}-\ref{p1-3} to orient $P_k$ such that $w^-(v_2)=2$ and $w^-(v_3)=1$. If not, then use Lemma~\ref{path1}-\ref{p1-3} to orient $P_k$ such that $w^-(v_2)=1$ and $w^-(v_3)=2$. {\bf Case 3} $|P_k|=5$. If $e$ is a 1-2 edge in $(D_{k-1},w)$, then use Lemma~\ref{path4} to orient $P_k$ such that $w^-(v_2)=2$ and $w^-(v_4)=3$. If neither $a_k$ nor $b_k$ has in-weight 1, then use Lemma~\ref{path1}-\ref{p1-2}(a) to orient $P_k$ such that $w^-(v_2)=w^-(v_4)=1$. If neither $a_k$ nor $b_k$ has in-weight 2, then use Lemma~\ref{path1}-\ref{p1-2}(b) to orient $P_k$ such that $w^-(v_2)=w^-(v_4)=2$. {\bf Case 4} $|P_k|=6$. If $w^-(a_k)=1$ or $w^-(b_k)=2$ in $(D_{k-1},w)$, then use Lemma~\ref{path1}-\ref{p1-1}(b) to orient $P_k$ such that $w^-(v_2)=2$ and $w^-(v_5)=1$. If not, then use Lemma~\ref{path1}-\ref{p1-1}(b) to orient $P_k$ such that $w^-(v_2)=1$ and $w^-(v_5)=2$. {\bf Case 5} $|P_k|\ge 7$. If $w^-(a_k)=1$ or $w^-(b_k)=2$ in $(D_{k-1},w)$, then use Lemma~\ref{path1}-\ref{p1-0}(c) to orient $P_k$ such that $w^-(v_2)=2$ and $w^-(v_{n-1})=1$. If not, then use Lemma~\ref{path1}-\ref{p1-0}(c) to orient $P_k$ such that $w^-(v_2)=1$ and $w^-(v_{n-1})=2$. For all cases above, the in-weights of $a_k$ and $b_k$ in $(D_k,w)$ are the same as that in $(D_{k-1},w)$. This implies that $G_k$ has a desired orientation $(D_k,w),$ where in particular $w^-(s)=0$. \end{proof} To complete the proof of Theorem \ref{main}, it remains to consider the case when $G$ is connected but not 2-connected. 
Let $B_0,B_1,\dots ,B_k$ be a list of the blocks of $G$ such that for every $i\in \{0,1,2,\dots , k\},$ the subgraph $G_i$ of $G$ induced by the union of the blocks $B_0,B_1,\dots ,B_i$ is connected. Such a list can be obtained, e.g., by using DFS on $T(G)$ as described in the beginning of the previous section. Let $s$ be the root of $B_k.$ We prove the following extension of the theorem by induction on $i\in \{0,1,\dots ,k\}$: {\em For every $i\in \{0,1,\dots ,k\}$, $G_i$ has a semi-proper orientation $(D_i,w)$ such that $\mu^-(D_i,w)\le 4$ and if $s\in V(G_i)$ then $w^-(s)=0$.} If $B_0$ is a 2-connected outerplanar graph, then by Lemma~\ref{conn1}, $G_0$ has a semi-proper orientation $(D_0,w)$ such that $\mu^-(D_0,w)\le 4$ and $w^-(s)=0$ if $s\in V(G_0)$. If $B_0$ is an edge, then we orient the edge from $s$ to ensure that $w^-(s)=0$ if $s\in V(G_0)$, and arbitrarily otherwise. By the induction hypothesis, let $G_{i-1}$ have a desired orientation $(D_{i-1},w)$ such that $w^-(s)=0$ if $s\in V(G_{i-1}).$ First consider the case when $B_i$ is a 2-connected outerplanar graph. By Lemma~\ref{conn1}, $B_i$ has a semi-proper orientation $(D',w)$ such that $\mu^-(D',w)\le 4$ and $w^-(s)=0$ if $s\in V(B_i).$ Thus, $(D',w)$ does not add to the in-weight of $s$, and $w^-(s)=0$ in the resulting semi-proper orientation of $G_i$ provided $s\in V(G_i).$ If $B_i$ is an edge $e$, then orient it from $s$ if $s$ is an end-vertex of $e$, and arbitrarily otherwise. Then we obtain a desired orientation as above. Now we show the tightness of the bound. We use $G=$UOP(4) as a tight example, which is depicted in Figure~\ref{f3}.
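The counting argument below uses the size of UOP($4$); the recursive construction of UOP($n$) gives the following counts (a Python sketch, ours; we take the outeredges of UOP(1) to be its three edges):

```python
def uop_counts(n):
    """Vertices and edges of UOP(n): UOP(1) is a triangle, and UOP(k+1)
    attaches a length-2 ear (1 new vertex, 2 new edges) to every
    outeredge of UOP(k)."""
    vertices, edges, outeredges = 3, 3, 3
    for _ in range(n - 1):
        vertices += outeredges       # one new vertex per ear
        edges += 2 * outeredges      # two new edges per ear
        outeredges *= 2              # the new edges are the next outeredges
    return vertices, edges

print(uop_counts(4))  # (24, 45): 24 vertices and 45 edges
```

In particular, UOP(4) has $24$ vertices (three color classes of size $8$) and $45$ edges.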
Suppose $\overrightarrow{\chi}_s(G) \le 3.$ Since $G$ contains a $K_3$-subgraph, $\overrightarrow{\chi}_s(G) = 3.$ Let $D$ be an optimal semi-proper orientation of $G$ and let $V_i$ be the set of vertices in $D$ with in-weight $i\in \{0,1,2,3\}.$ Note that the vertices of $G$ can be partitioned into three size-8 sets $A,B,C$ such that every $K_3$-subgraph of $G$ has one vertex $a\in A$, $b\in B$ and $c\in C$ as depicted in Figure~\ref{f3}. (In other words, $A,B,C$ is a proper 3-coloring of $G.$) Let $S=\Sigma_{v\in V(G)}w_D^-(v).$ Since every edge has weight at least one and contributes its weight to the in-weight of exactly one vertex, we have $S\ge |E(G)|=45.$ For every $K_3$-subgraph of $G$ with vertices $a,b,c$ we have $$\{w^-(a),w^-(b),w^-(c)\}\in \{\{1,2,3\}, \{0,2,3\}, \{0,1,3\}, \{0,1,2\}\}. $$ Thus, $S\le 8(1+2+3)=48$, implying that the gap between the upper and lower bounds on $S$ is 3. Hence, $G$ has at most three edges of weight 2 in $D.$ By the lower bound, $|V_3|\ge 7.$ Suppose $|V_3|=8.$ By propagation of in-weights from one $K_3$-subgraph to another $K_3$-subgraph sharing an edge with the former, we would get four outervertices of in-weight 3, implying that $G$ has four edges of weight 2 in $D,$ a contradiction. Thus, $|V_3|=7.$ If either $|V_1|<8$ or $|V_2|<8$, then $S<45$, contradicting the lower bound; hence $|V_1|=|V_2|=8$ and $S=45.$ Hence, all edges of $G$ have weight 1 in $D.$ However, by the propagation of in-weights, we conclude that at least three outervertices have in-weight 3, implying that $G$ has three edges of weight 2 in $D,$ a contradiction. \qed \begin{figure} \caption{Optimal semi-proper orientation $D$ of UOP(4).} \label{f3} \end{figure} \paragraph{Acknowledgments.}Shi and Taoqiu are supported by the National Natural Science Foundation of China (No. 11922112). \end{document}
\begin{document} \author{L. Bayón} \address{Departamento de Matemáticas, Universidad de Oviedo\\ Avda. Calvo Sotelo s/n, 33007 Oviedo, Spain} \author{P. Fortuny Ayuso} \address{Departamento de Matemáticas, Universidad de Oviedo\\ Avda. Calvo Sotelo s/n, 33007 Oviedo, Spain} \title{The Best-or-Worst and the Postdoc problems} \author{J.M. Grau} \address{Departamento de Matemáticas, Universidad de Oviedo\\ Avda. Calvo Sotelo s/n, 33007 Oviedo, Spain} \author{A. M. Oller-Marcén} \address{Centro Universitario de la Defensa de Zaragoza\\ Ctra. Huesca s/n, 50090 Zaragoza, Spain} \author{M.M. Ruiz} \address{Departamento de Matemáticas, Universidad de Oviedo\\ Avda. Calvo Sotelo s/n, 33007 Oviedo, Spain} \begin{abstract} We consider two closely related variants of the secretary problem: the \emph{Best-or-Worst} and the \emph{Postdoc} problems. First, we prove that both variants, in their standard form with binary payoff 1 or 0, share the same optimal stopping rule. We also consider additional costs or perquisites depending on the number of interviewed candidates; in these situations the optimal strategies are very different. Finally, we focus on the Best-or-Worst variant with different payments depending on whether the selected candidate is the best or the worst. \end{abstract} \maketitle \keywords{Keywords: Secretary problem, Combinatorial Optimization} \subjclassname{60G40, 62L15} \section{Introduction} The \emph{secretary problem} is one of many names for a famous problem of optimal stopping theory. This problem can be stated as follows: an employer is willing to hire the best secretary out of $n$ rankable candidates. These candidates are interviewed one by one in random order. A decision about each particular candidate is to be made immediately after the interview. Once rejected, a candidate cannot be called back. During the interview, the employer can rank the candidate among all the preceding ones, but he is unaware of the quality of yet unseen candidates.
The goal is then to determine the optimal strategy that maximizes the probability of selecting the best candidate. This problem has a very elegant solution. Dynkin \cite{48} and Lindley \cite{101} independently proved that the best strategy is a so-called threshold strategy: namely, rejecting roughly the first $n/e$ interviewed candidates (the cutoff value) and then choosing the first one that is better than all the preceding ones. Following this strategy, the probability of selecting the best candidate is at least $1/e$, this being its approximate value for large values of $n$. This well-known solution was later refined by Gilbert and Mosteller \cite{gil}, who showed that $\left\lfloor (n-\frac{1}{2})e^{-1}+\frac{1} {2}\right\rfloor$ is a better approximation than $\lfloor n/e\rfloor$, although the difference is never greater than 1. The secretary problem has been addressed by many authors in different fields such as applied probability, statistics and decision theory; extensive bibliographies on the topic can be found in \cite{FER}, \cite{FER2} or \cite{2009}. On the other hand, different generalizations of this classical problem have been recently considered in the framework of partially ordered objects \cite{poset2,garrod,poset1} or matroids \cite{1,soto}. It is also worth mentioning the work of Bearden \cite{KK}, where the author considers a situation in which the employer receives a payoff for selecting a candidate equal to the ``score'' of the candidate (in the classical problem the payoff is 1 if the candidate is really the best and 0 otherwise). In this situation, the optimal cutoff value is roughly the square root of the number of candidates. In this paper we focus on two closely related variants of the secretary problem: the so-called \emph{Best-or-Worst} and \emph{Postdoc} variants.
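As a quick numerical illustration of the threshold rule, the following Monte Carlo sketch (the helper name and parameters are ours, chosen for illustration only) estimates the success probability of the cutoff-$\lfloor n/e\rfloor$ strategy; for $n=100$ the estimate should be close to $1/e\approx 0.368$.

```python
import math
import random

def secretary_success(n, r, trials=50_000, seed=1):
    """Estimate the probability that the threshold rule with cutoff r
    selects the best of n candidates (candidate qualities are a random
    permutation of 0..n-1, higher meaning better)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))
        rng.shuffle(ranks)
        best_seen = max(ranks[:r])      # best among the r rejected candidates
        picked = None
        for k in range(r, n):
            if ranks[k] > best_seen:    # first candidate beating all predecessors
                picked = ranks[k]
                break
        wins += (picked == n - 1)       # success iff the overall best was taken
    return wins / trials

n = 100
r = int(n / math.e)   # classical cutoff, roughly n/e
p = secretary_success(n, r)
# p should be close to 1/e ~ 0.368
```

For moderate $n$ the estimate also agrees with the exact success probability derived in Section 3.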
In the Best-or-Worst variant, the classic secretary problem is modified so that the goal is to select either the best or the worst candidate, being indifferent between the two cases. This variant can only be found in \cite{fergu}, as a multicriteria problem in the perfect negative dependence case; here we present it in greater detail. In the Postdoc variant, instead of selecting the best candidate, the goal is to select the second best candidate. This problem was proposed to Robert J. Vanderbei by Eugene Dynkin in 1980 with the following motivating story that explains the name of the problem: we are trying to hire a postdoc and, since the best candidate will receive (and accept) an offer from Harvard, we are interested in hiring the second best candidate. Vanderbei himself solved the problem in 1983 using dynamic programming \cite{posdoc}. However, he never published his work because he learned that Rose had already published his own solution using different techniques \cite{rose}. Moreover, Szajowski had already solved the problem of picking the $k$-th best candidate for $2\leq k\leq 5$ \cite{aesima}. In the present paper we study, for these two variants, both the standard problem (binary payoff function 1 or 0), showing that the two variants have the same optimal cutoff rule strategy, and the problems with payoff functions that depend on the number of performed interviews, showing that in the latter case they have very different optimal strategies. The paper is organized as follows: in Section 2 we present some technical results; in Section 3 we revisit the classic secretary problem and also solve two new situations with payoff functions that depend on the number of performed interviews. In Section 4 we focus on the Best-or-Worst variant, solving the problem for three different payoff functions and also presenting a variant in which the choice of the best or the worst candidate is no longer indifferent.
In Section 5 we solve the three versions of the Postdoc variant and, finally, we compare the obtained results in Section 6. \section{Two technical results} The following result can be widely applied in different optimal stopping problems and it will be extensively used throughout the paper. For a sequence of continuous real functions $\{F_{n}\}_{n\in\mathbb{N}}$, with $F_n$ defined on $[0,n]$, it determines the asymptotic behavior of the sequence $\{\mathcal{M}(n)\}_{n\in\mathbb{N}}$, where $\mathcal{M}(n)$ is the value for which the function $F_n$ reaches its maximum. \begin{prop}\label{conv} Let $\{F_{n}\}$ be a sequence of real functions with $F_n\in\mathcal{C}[0,n]$ and let $\mathcal{M}(n)$ be the value for which the function $F_n$ reaches its maximum. Assume that the sequence of functions $\{g_{n}\}_{n\in\mathbb{N}}$ given by $g_{n}(x):=F_{n}(nx)$ converges uniformly on $[0,1]$ to a function $g$ and that $\theta$ is the only global maximum of $g$ in $[0,1]$. Then, \begin{itemize} \item[i)] $\displaystyle\lim_{n} \mathcal{M}(n)/n =\theta$. \item[ii)] $\displaystyle\lim_{n} F_{n}(\mathcal{M}(n))= g(\theta)$. \item[iii)] If $\mathfrak{M}(n)\sim\mathcal{M}(n)$ then $\displaystyle\lim_{n}F_{n}(\mathfrak{M}(n))=g(\theta)$. \end{itemize} \end{prop} \begin{proof} \begin{itemize} \item[i)] Let us consider the sequence $\{\mathcal{M}(n)/n\}\subset [0,1]$ and assume that $\{\mathcal{M}(s_{n})/s_{n}\}$ is a subsequence that converges to $\alpha$. Then, $$g_{s_n}(\theta)=F_{s_n}(s_n\theta)\leq F_{s_{n}}(\mathcal{M}(s_{n}))=F_{s_{n}}\left(\frac{\mathcal{M}(s_{n})}{s_n}s_n\right)=g_{s_n}\left(\frac{\mathcal{M}(s_n)}{s_n}\right).$$ Consequently, since $g_n\to g$ uniformly on $[0,1]$, if we take limits we get $$g(\theta)=\lim_n g_{s_n}(\theta)\leq\lim_n g_{s_n}\left(\frac{\mathcal{M}(s_n)}{s_n}\right)=g(\alpha)$$ and since $\theta$ is the only global maximum of $g$, it follows that $\theta=\alpha$.
Thus, we have proved that every convergent subsequence of $\{\mathcal{M}(n)/n\}$ converges to the same limit $\theta$. Since $\{\mathcal{M}(n)/n\}$ is contained in the compact set $[0,1]$, this implies that $\{\mathcal{M}(n)/n\}$ itself must also converge to $\theta$. \item[ii)] It is enough to observe that $$\lim_{n} F_{n}(\mathcal{M}(n))=\lim_{n}F_{n}\left(\frac{\mathcal{M}(n)}{n} n\right)=\lim_n g_n\left(\frac{\mathcal{M}(n)}{n}\right)=g(\theta),$$ where the last equality holds because $g_n\to g$ uniformly on $[0,1]$. \item[iii)] If $\mathfrak{M}(n)\sim\mathcal{M}(n)$, then it also holds that $\displaystyle \lim_n \frac{\mathfrak{M}(n)}{n}=\theta$ and we can reason as in the previous point. \end{itemize} \end{proof} \begin{rem} The condition of uniform convergence is required to ensure, for instance, that $\displaystyle \lim_n g_{s_n}\left(\frac{\mathcal{M}(s_n)}{s_n}\right)=g(\alpha)$. In fact, it is easy to give counterexamples to Proposition \ref{conv} if convergence is not uniform. \end{rem} Observe that Proposition \ref{conv} implies that $\displaystyle \lim_n F_{n}(n \theta)=g(\theta)$. Moreover, it also implies that $\displaystyle \lim_n F_{n}(n \theta+o(n))=g(\theta)$. This means that $n\theta$ is a good estimate for $\mathcal{M}(n)$ and that, for large values of $n$, the maximum value of $F_n$ approaches $g(\theta)$. Proposition \ref{conv} admits the following two-variable version that can be proved in the same way. \begin{prop}\label{conv2} Let $\{G_{n}\}$ be a sequence of two variable real functions with $G_{n}\in\mathcal{C}\big(\{(x,y)\in\lbrack0,n]^{2}:x\leq y\}\big)$ and let $(\mathcal{M}_{1}(n),\mathcal{M}_{2}(n))$ be a point for which $G_{n}$ reaches its maximum. Assume that the sequence $\{h_n\}_{n\in\mathbb{N}}$ given by $h_{n}(x,y):=G_{n}(nx,ny)$ converges uniformly on $T:=\{(x,y)\in\mathbb{R}^2:0\leq x\leq y\leq 1\}$ to a function $h$ and that $(\theta_1,\theta_2)$ is the only global maximum of $h$ in $T$.
Then, \begin{itemize} \item[i)] $\displaystyle\lim_{n} \mathcal{M}_{i}(n)/n =\theta_{i}$ for $i=1,2$. \item[ii)] $\displaystyle\lim_{n} G_{n}(\mathcal{M}_{1}(n),\mathcal{M}_{2}(n))= h(\theta_{1},\theta_{2}).$ \item[iii)] If $\mathfrak{M}_{i}(n)\sim\mathcal{M}_{i}(n)$ for $i=1,2$, then $\displaystyle\lim_{n} G_{n}( \mathfrak{M}_{1}(n),\mathfrak{M}_{2}(n))=h(\theta_{1},\theta_{2})$. \end{itemize} \end{prop} \section{A new look at the classic secretary problem} In the classical secretary problem, let $n$ be the number of candidates and let us consider a cutoff value $r\in(1,n)$. If $k\in (r,n]$ is an integer, the probability of successfully selecting the best candidate in the $k$-th interview is $\displaystyle P_{n,r}(k)=\frac{r}{n}\frac{1}{k-1}$. Thus, the probability of succeeding in the classical secretary problem with $n$ candidates using $r$ as cutoff value is given by $$F_{n}(r):=\sum_{k=r+1}^{n}P_{n,r}(k)=\frac{r}{n}\sum_{k=r+1}^{n}\frac{1}{k-1}.$$ The goal is now to determine the value of $r$ that maximizes this probability (i.e., to determine the optimal cutoff value) and to compute this maximum probability. This can be done using Proposition \ref{conv} in the following way. First, we extend $F_{n}$ to a real variable function by $$F_{n}(r)=\frac{r}{n}(\psi(n)-\psi(r)),$$ where $\psi$ is the so-called digamma function. Then, it can be seen with little effort that the sequence of functions $\{g_n\}$ defined by $g_{n}(x):=F_{n}(nx)$ converges uniformly on $[0,1]$ to the function $g(x):=-x\log(x)$, and the rest is elementary calculus. \begin{rem} In \cite{FER} the following rather informal argument showing that $\mathcal{M}(n)/n$ tends to $1/e$ is given. If we let $n$ tend to infinity and write $x$ for the limit of $r/n$, then using $t$ for $j/n$ and $dt$ for $1/n$, the sum becomes a Riemann approximation to an integral: $$F_{n}(r) \rightarrow x \int_{x}^{1} \frac{dt}{t}= - x \log(x).$$ Proposition \ref{conv} provides a more rigorous approach.
\end{rem} We now consider a more general situation. Let $p:\mathbb{R}\to[0,+\infty)$ be a function (payoff function) and assume that a payoff of $p(k)$ is received if the $k$-th candidate is selected. In this setting, the expected payoff is $$E_n(r):=\sum_{k=r+1}^n p(k)P_{n,r}(k)=\frac{r}{n}\sum_{k=r+1}^{n}\frac{p(k)}{k-1}.$$ Note that in the classical situation \begin{equation}\label{pobin} p_B(k)=\begin{cases} 1, & \textrm{if the $k$-th candidate is the sought candidate};\\ 0, & \textrm{otherwise}.\end{cases} \end{equation} and the expected payoff coincides with the probability of successfully selecting the best candidate. Now, let us modify the classical situation considering that performing each interview has a constant cost of $1/n$. Clearly, in this situation the payoff function is given by \begin{equation}\label{pocost} p_C(k)=\begin{cases} 1-k/n, & \textrm{if the $k$-th candidate is the sought candidate};\\ 0, & \textrm{otherwise}.\end{cases} \end{equation} and the expected payoff is $$E^{C}_{n}(r):=\frac{r}{n}\sum_{k=r+1}^{n}\frac{1-\frac{k}{n}}{k-1}.$$ The following result provides the optimal cutoff value and the maximum expected payoff in this setting. In what follows, we denote by $W$ the main branch of the so-called Lambert-$W$ function, defined by $z=W(ze^z)$. \begin{prop} Given an integer $n>1$, let us consider the function $$E^{C}_{n}(r):=\frac{r}{n}\sum_{k=r+1}^{n}\frac{1-\frac{k}{n}}{k-1}$$ defined for every integer $1\leq r\leq n-1$ and let $\mathcal{M}(n)$ be the value for which the function $E^{C}_{n}$ reaches its maximum. Then, \begin{itemize} \item[i)] $\displaystyle\lim_{n} {\mathcal{M}(n)}/{n}=\rho:= -\frac{1}{2}W(-2 e^{-2}) =0.20318\dots$. \item[ii)] $\displaystyle \lim_{n}E^{C}_{n}(\mathcal{M}(n))=\displaystyle \lim_{n}E^{C}_{n}(\lfloor \rho n\rfloor)=\rho(1-\rho)= 0.16190\dots$.
\end{itemize} \end{prop} \begin{proof} First, we extend $E^{C}_{n}$ to a real variable function by $$E^C_{n}(r)=\frac{r\,\left( -n+r+\left( -1+n\right) \,\psi(n)-\left(-1+n\right) \,\psi(r)\right) }{n^{2}}.$$ Now, it can be seen that $g_{n}(x):=E^{C}_{n}(nx)$ converges uniformly on $[0,1]$ to $g(x):=x\left(-1+x-\log(x)\right)$. To conclude the proof it is enough to apply Proposition \ref{conv} together with some straightforward computations. \end{proof} This result means that the optimal strategy in this setting consists in rejecting roughly the first $\rho n$ interviewed candidates and then accepting the first candidate which is better than all the preceding ones. Following this strategy, the maximum expected payoff is asymptotically equal to $\rho(1-\rho)$. \begin{rem} The constant $\rho=-\frac{1}{2}W(-2e^{-2})=0.20318786\dots$ (A106533 in OEIS) appears in \cite{FER2} (erroneously approximated as 0.20388) in the context of the Best-Choice Duration Problem considering a payoff of $(n-k+1)/n$. Furthermore, as a noteworthy curiosity, it should be pointed out that this constant has also appeared in a completely different context from the one addressed here (the Daley-Kendall model), where it is known as the \emph{rumour's constant} \cite{RUMOR,ru}. \end{rem} Now, let us consider that performing each interview yields a perquisite of $1/n$. Clearly, in this situation the payoff function is given by \begin{equation}\label{popay} p_P(k)=\begin{cases} 1+k/n, & \textrm{if the $k$-th candidate is the sought candidate};\\ 0, & \textrm{otherwise}.\end{cases} \end{equation} and the expected payoff is $$E^{P}_{n}(r):=\frac{r}{n}\sum_{k=r+1}^{n}\frac{1+\frac{k}{n}}{k-1}.$$ The following result provides the optimal cutoff value and the maximum expected payoff in this setting.
\begin{prop} Given an integer $n>1$, let us consider the function $$E^{P}_{n}(r):=\frac{r}{n}\sum_{k=r+1}^{n}\frac{1+\frac{k}{n}}{k-1}$$ defined for every integer $1\leq r\leq n-1$ and let $\mathcal{M}(n)$ be the value for which the function $E^{P}_{n} $ reaches its maximum. Then, \begin{itemize} \item[i)] $\displaystyle\lim_{n} {\mathcal{M}(n)}/{n}=\mu:= \frac{1}{2}W( 2 ) =0.42630\dots$. \item[ii)] $\displaystyle \lim_{n}E^{P}_{n}(\mathcal{M}(n))=\displaystyle \lim_{n}E^{P}_{n}(\lfloor\mu n \rfloor)=\mu(1+\mu)= 0.608037\dots$. \end{itemize} \end{prop} \begin{proof} First, we extend $E^{P}_{n}$ to a real variable function by $$E^{P}_{n}(r)=\frac{r\,\left( n - r + \left( 1 + n \right) \,\psi(n) - \left( 1+ n \right) \,\psi(r) \right) }{n^{2}}.$$ Now it can be seen that $g_{n}(x):=E^{P}_{n}(nx)$ converges uniformly in $[0,1]$ to $g(x):=-x\left(-1+x+\log(x)\right)$. To conclude the proof it is enough to apply Proposition \ref{conv} together with some straightforward computations. \end{proof} This result means that the optimal strategy in this setting consists in rejecting roughly the first $\mu n$ interviewed candidates and then accepting the first candidate which is better than all the preceding ones. Following this strategy, the maximum expected payoff is asymptotically equal to $\mu^{2}+\mu$. \section{The Best-or-Worst variant} In this section we focus on the Best-or-Worst variant, as described in the introduction, in which the goal is to select either the best or the worst candidate, indifferent between the two cases. First of all we prove that, just like in the classic problem, the optimal strategy is a threshold strategy. \begin{teor}\label{BWS} For the Best-or-Worst variant, if $n$ is the number of objects, there exists $r(n)$ such that the following strategy is optimal: \begin{enumerate} \item Reject the $r(n)$ first interviewed candidates. \item After that, accept the first candidate which is either better or worse than all the preceding ones. 
\end{enumerate} \end{teor} \begin{proof} For the sake of brevity, a candidate which is either better or worse than all the preceding ones will be called a \emph{nice candidate}. Since the game under consideration is finite, there must exist an optimal strategy (in the sense that it maximizes the probability of success). Hence, we can define $P_{rej}(k)$ as the probability of success following an optimal strategy when rejecting a candidate in the $k$-th interview (regardless of its being a nice candidate or not). We can also define $P_{acc}(k)$ as the probability of success accepting a nice candidate in the $k$-th interview. Any optimal strategy will reject any non-nice candidate, since the probability that it is a successful choice is $0$. The probability $P_{acc}(k)$ equals $k/n$, which increases with $k$. On the other hand, the function $P_{rej}(k)$ is non-increasing because $$P_{rej}(k)=p\cdot\max\{P_{acc}(k+1),P_{rej}(k+1)\}+(1-p)\,P_{rej}(k+1)\geq P_{rej}(k+1),$$ where $p$ denotes the probability that the $(k+1)$-th candidate is nice. Thus, since $P_{acc}$ is increasing and $P_{rej}$ is non-increasing and given that $P_{{acc}}(n)=1$ and $P_{rej}(n)=0$, there exists a natural number $r(n)$ for which: $$P_{acc}(k)<P_{rej}(k)\ \textrm{if}\ k\leq r(n),$$ $$P_{acc}(k)\geq P_{rej}(k)\ \textrm{if}\ k>r(n).$$ As a consequence of this fact, the following strategy must be optimal: for each $k$-th interview with $k\in\{1,\dots, n\}$ do the following: \begin{itemize} \item Reject the $k$-th candidate if $k\leq r(n)$ or if it is not a nice candidate. \item Accept the $k$-th candidate if $k>r(n)$ and it is a nice candidate. \end{itemize} Note that the optimality of this strategy follows from the fact that, in each interview, we are choosing the action with greatest probability of success. \end{proof} Once we have determined the optimal strategy, we focus on determining the probability of success in the $k$-th interview. To do so, let $n$ be the number of candidates and let us consider a cutoff value $r\in(1,n)$.
If $k\in (r,n]$ is an integer, the probability of successfully selecting the best or the worst candidate in the $k$-th interview is $\displaystyle P^{BW}_{n,r}(k)=\frac{2}{n}\frac{\binom{r}{2}}{\binom{k-1}{2}}$. Thus, the probability function of succeeding in the Best-or-Worst variant with $n$ candidates using $r$ as cutoff value, is given by $$F^{BW}_{n}(r):=\sum_{k=r+1}^{n}P^{BW}_{n,r}(k)=\frac{2r(r-1)}{n}\sum_{k=r+1}^{n}\frac{1}{(k-1)(k-2)}=\frac{2r(n-r)}{n(n-1)},$$ where the last equality follows using telescopic sums. \begin{rem} Note that for $n>r\in\{0,1\}$, it is straightforward to see that the probability of success is $$F^{BW}_{n}(0)=F^{BW}_{n}(1)=\frac{2}{n}.$$ \end{rem} The goal is now to determine the value of $r$ that maximizes the probability $F^{BW}_{n}$ (i.e., to determine the optimal cutoff value) and to compute this maximum probability. We do so in the following result. \begin{teor}\label{BWP} Given a positive integer $n>2$, let us consider the function $$F^{BW}_{n}(r)=\frac{2r(n-r)}{n(n-1)}$$ defined for every integer $2\leq r\leq n-1$ and let $\mathcal{M}(n)$ be the value for which the function $F^{BW}_{n}$ reaches its maximum. Then, \begin{itemize} \item[i)] $\mathcal{M}(n)=\lfloor n/2\rfloor$. \item[ii)] The maximum value of $F^{BW}_{n}$ is: $$F^{BW}_{n}(\mathcal{M}(n))=\frac{\lfloor\frac{1+n}{2}\rfloor }{2\lfloor\frac{1+n}{2}\rfloor-1}= \begin{cases} \frac{n}{2(n-1)}, & \text{if $n$ is even};\\ \frac{n+1}{2n}, & \text{if $n$ is odd}. 
\end{cases} $$ \end{itemize} \end{teor} \begin{proof} \ \begin{itemize} \item[i)] Since $F^{BW}_{n}(r)=-\frac{2}{n(n-1)}r^{2}+\frac{2}{(n-1)}r$ is the equation of a parabola in the variable $r$, it is clear that $$\mathcal{M}(n)=\min\left\{ r\in[2,n-1]:F^{BW}_{n}(r)\geq F^{BW}_{n}(r+1)\right\}.$$ Now, $$F^{BW}_{n}(r+1)-F^{BW}_{n}(r)=\frac{2}{n(n-1)}(n-2r-1)$$ so it follows that $$F^{BW}_{n}(r+1)-F^{BW}_{n}(r)\leq0\Leftrightarrow(n-2r-1)\leq0\Leftrightarrow r\geq \frac{n-1}{2}.$$ Consequently, $$\mathcal{M}(n)=\min\left\{ r\in[2,n-1]:r\geq\frac{n-1}{2}\right\} =\lfloor n/2\rfloor$$ as claimed. \item[ii)] It is enough to apply the previous result. If $n$ is even, then $n=2N$ and $$F^{BW}_{n}(\mathcal{M}(n))=F^{BW}_{n}(N)=\frac{2N(n-N)}{n(n-1)}=\frac{2N^{2}} {2N(2N-1)}=\frac{N}{2N-1}.$$ Moreover, in this case $$\left\lfloor \frac{1+n}{2}\right\rfloor =\left\lfloor \frac{1+2N} {2}\right\rfloor =N$$ so it follows that $$F^{BW}_{n}(\mathcal{M}(n))=\frac{N}{2N-1}=\frac{\left\lfloor \frac{1+n} {2}\right\rfloor }{2\left\lfloor \frac{1+n}{2}\right\rfloor -1}$$ as claimed. Otherwise, if $n$ is odd, then $n=2N+1$ and $$F^{BW}_{n}(\mathcal{M}(n))=F_{n}^{BW}(N)=\frac{2N(n-N)}{n(n-1)}=\frac{2N(2N+1-N)} {(2N+1)2N}=\frac{N+1}{2N+1}.$$ In this case $$\left\lfloor \frac{1+n}{2}\right\rfloor =\left\lfloor \frac{1+2N+1} {2}\right\rfloor =N+1$$ so we also have that $$F^{BW}_{n}(\mathcal{M}(n))=\frac{N+1}{2N+1}=\frac{\left\lfloor \frac{1+n} {2}\right\rfloor }{2\left\lfloor \frac{1+n}{2}\right\rfloor -1}$$ and the proof is complete. \end{itemize} \end{proof} This result means that, for $n>2$, the optimal strategy in this setting consists in rejecting the first $\lfloor\frac{n}{2}\rfloor$ interviewed candidates and then accepting the first candidate which is either better or worse than all the preceding ones. Following this strategy, the maximum probability of success is $\displaystyle\frac{\lfloor\frac{1+n}{2}\rfloor}{2\lfloor\frac{1+n}{2}\rfloor-1}$.
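The closed form for $F^{BW}_{n}$ and the location of its maximum can be double-checked by brute-force summation in exact rational arithmetic; the following snippet is an illustrative sketch (the function name is ours), not part of the proof.

```python
from fractions import Fraction
from math import comb

def F_bw(n, r):
    """Success probability of the Best-or-Worst rule with cutoff r,
    summing P^{BW}_{n,r}(k) = (2/n) * C(r,2)/C(k-1,2) over k = r+1..n."""
    return sum(Fraction(2 * comb(r, 2), n * comb(k - 1, 2))
               for k in range(r + 1, n + 1))

n = 15
# the telescoped closed form 2r(n-r)/(n(n-1)) matches the raw sum ...
assert all(F_bw(n, r) == Fraction(2 * r * (n - r), n * (n - 1))
           for r in range(2, n))
# ... and the maximum is attained at r = floor(n/2); for odd n its value is (n+1)/(2n)
best_r = max(range(2, n), key=lambda r: F_bw(n, r))
assert best_r == n // 2 and F_bw(n, best_r) == Fraction(n + 1, 2 * n)
```

Exact fractions avoid any floating-point doubt when comparing the raw sum with the telescoped expression.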
In the cases $n\in\{1,2\}$, it is evident that an optimal cutoff value is $r=0$, i.e., to accept the first candidate that we consider. The probability of success is 1 in both cases, since $F^{BW}_1(0)=F^{BW}_2(0)=1$. \begin{rem} Unlike in the classic secretary problem, the probability of success in the Best-or-Worst variant is not strictly increasing in $n$. In fact, we have that $F^{BW}_{2n}(\mathcal{M}(2n))=F^{BW}_{2n-1}(\mathcal{M}(2n-1))$ for every $n$. \end{rem} We are now going to consider the Best-or-Worst variant with the payoff function $p_C$ given in (\ref{pocost}); i.e., we assume that performing each interview has a constant cost of $1/n$. Under this assumption it can be proved that the optimal strategy is the same threshold strategy given in Theorem \ref{BWS}. Moreover, in this setting, the expected payoff with $n$ candidates and cutoff value $r$ is given by $$E_n^{BW,C}(r):=\sum_{k=r+1}^n\left(1-\frac{k}{n}\right)P^{BW}_{n,r}(k)=\frac{2r(r-1)}{n^2}\sum_{k=r+1}^{n}\frac{n-k}{(k-1)(k-2)}.$$ As usual, the goal is to determine the optimal cutoff value that maximizes the expected payoff $E^{BW,C}_{n}$ and to compute this maximum expected payoff. We do so in the following result. \begin{teor} Given an integer $n>1$, let us consider the function $E^{BW,C}_n(r)$ defined above for every integer $1<r<n$ and let $\mathcal{M}(n)$ be the value for which the function $E^{BW,C}_n$ reaches its maximum. Also, let $$\theta:=-\frac{1}{2W_{-1}(-\frac{1}{2\sqrt{e}})}=e^{\frac{1}{2} + W_{-1}(\frac{-1}{2\,\sqrt{e}})}$$ be the solution in $(0,1)$ of the equation $2x\log(x)=x-1$. Then, the following hold: \begin{itemize} \item[i)] $\displaystyle\lim_{n} {\mathcal{M}(n)}/{n}=\theta= 0.284668\dots$.
\item[ii)] $\displaystyle \lim_{n}E^{BW,C}_n( \mathcal{M}(n))=\displaystyle \lim_{n}E^{BW,C}_n(\lfloor n \theta \rfloor)=\theta(1-\theta)=0.2036321\dots$ \end{itemize} \end{teor} \begin{proof} First, observe that \begin{align*} E^{BW,C}_n(r)&=\frac{2r(r-1)}{n^{2}}\sum_{k=r+1}^{n}\frac{(n-k)}{(k-1)(k-2)} =\frac{2r(r-1)}{n^{2}}\left[ \frac{n-2}{r-1}-\frac{n-2}{n-1}-\sum_{i=r} ^{n-1}\frac{1}{i}\right]\\ & =2\frac{r}{n}\left( 1-\frac{2}{n}\right) -2\frac{r}{n}\left( \frac {r}{n-1}-\frac{1}{n-1}\right) -2\frac{r}{n}\left( \frac{r}{n}-\frac{1} {n}\right) \sum_{i=r}^{n-1}\frac{1}{i}. \end{align*} Now, we can extend $E^{BW,C}_n$ to a real variable function by $$E^{BW,C}_n(r)=2\frac{r}{n}\left( 1-\frac{2}{n}\right) -2\frac{r}{n}\left( \frac {r}{n-1}-\frac{1}{n-1}\right) -2\frac{r}{n}\left( \frac{r}{n}-\frac{1} {n}\right) (\psi(n)-\psi(r)).$$ Furthermore, it can be seen that the sequence of functions $g_n(x):=E^{BW,C}_n(nx)$ converges uniformly in $[0,1]$ to the function $g(x)=2x\left(1-x+x\log x\right)$. To conclude the proof it is enough to apply Proposition \ref{conv} together with some straightforward computations. \end{proof} \begin{rem} The constant $\theta=-\frac{1}{2W_{-1}(-\frac{1}{2\sqrt{e}})}=0.284668\dots$ also appears related to rumour theory \cite{RUMOR,ru} and to Gabriel's Horn (see A101314 in OEIS). \end{rem} Now, let us consider the Best-or-Worst variant with the payoff function $p_P$ given in (\ref{popay}); i.e., we assume that performing each interview has an additional payoff of $1/n$. Under this assumption, since the payoff increases with the number of interviews, it can be proved that the optimal strategy is again the same threshold strategy given in Theorem \ref{BWS}. 
Moreover, in this setting, the expected payoff with $n$ candidates and cutoff value $r$ is given by $$E_n^{BW,P}(r):=\sum_{k=r+1}^n\left(1+\frac{k}{n}\right)P^{BW}_{n,r}(k)=\frac{2r(r-1)}{n^2}\sum_{k=r+1}^{n}\frac{n+k}{(k-1)(k-2)}.$$ The optimal cutoff value that maximizes the expected payoff $E^{BW,P}_{n}$ and this maximum expected payoff are determined in the following result. \begin{teor} Given an integer $n>1$, let us consider the function $E^{BW,P}_n(r)$ defined above for every integer $1<r<n$ and let $\mathcal{M}(n)$ be the value for which the function $E^{BW,P}_n$ reaches its maximum. Also, let $$\vartheta:=\frac{1}{2\,W(\frac{e^{\frac{3}{2}}}{2})}=0.552001\dots$$ be the solution to the equation $1 - 3\,x - 2\,x\,\log(x)=0$. Then, the following hold: \begin{itemize} \item[i)] $\displaystyle\lim_{n} {\mathcal{M}(n)}/{n}=\vartheta$. \item[ii)] $\displaystyle \lim_{n}E^{BW,P}_n(\mathcal{M}(n))=\lim_{n}E^{BW,P}_n(\lfloor n\vartheta \rfloor)=\vartheta(1+\vartheta)=0.8567\dots$ \end{itemize} \end{teor} \begin{proof} First, observe that \begin{align*} E^{BW,P}_n(r) & =\frac{2r(r-1)}{n^{2}}\sum_{k=r+1}^{n}\frac{(n+k)}{(k-1)(k-2)}\\ & =2\frac{r}{n}\left( 1+\frac{2}{n}\right) -2\frac{r}{n}\frac{r-1} {n}\left( 1+\frac{3}{n-1}\right) -2\frac{r}{n}\frac{r-1}{n}\sum_{i=r} ^{n-1}\frac{1}{i}. \end{align*} Now, we can extend $E^{BW,P}_n$ to a real variable function by $$E^{BW,P}_n(r)=2\frac{r}{n}\left( 1+\frac{2}{n}\right) -2\frac{r}{n}\frac{r-1} {n}\left( 1+\frac{3}{n-1}\right) -2\frac{r}{n}\frac{r-1}{n}(\psi(n)-\psi(r)).$$ Furthermore, it can be seen that the sequence of functions $g_{n}(x):=E^{BW,P}_n(nx)$ converges uniformly on $[0,1]$ to $g(x)=-2x\left(-1+x+x\log x\right)$. To conclude the proof it is enough to apply Proposition \ref{conv} together with some straightforward computations. \end{proof} So far, we have considered the Best-or-Worst variant in which the goal is to select either the best or the worst candidate, being indifferent between the two cases.
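The Lambert-$W$ constants $\theta=0.284668\dots$ and $\vartheta=0.552001\dots$ in the last two theorems can be cross-checked by solving their defining equations directly with plain bisection; this is a minimal sketch using only the standard library (the helper name is ours).

```python
import math

def bisect(f, lo, hi, iters=80):
    """Plain bisection for a continuous f with f(lo), f(hi) of opposite signs."""
    pos = f(lo) > 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(mid) > 0) == pos:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# theta solves 2x log x = x - 1 on (0, 1)   (cost-per-interview variant)
theta = bisect(lambda x: 2 * x * math.log(x) - (x - 1), 0.05, 0.5)
# vartheta solves 1 - 3x - 2x log x = 0     (perquisite-per-interview variant)
vartheta = bisect(lambda x: 1 - 3 * x - 2 * x * math.log(x), 0.3, 0.9)

assert abs(theta - 0.284668) < 1e-5
assert abs(vartheta - 0.552001) < 1e-5
# the corresponding limit payoffs theta(1-theta) and vartheta(1+vartheta):
assert abs(theta * (1 - theta) - 0.203632) < 1e-5
assert abs(vartheta * (1 + vartheta) - 0.856707) < 1e-4
```

Note that the bracket $(0.05,\,0.5)$ for $\theta$ deliberately excludes the trivial root $x=1$ of $2x\log x = x-1$.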
To finish this section we further modify the Best-or-Worst variant by considering different payoffs depending on whether we select the best or the worst candidate. In particular, we consider the following payoff function, with $m<M$: \begin{equation}\label{poun} p_U(k)=\begin{cases} m, & \textrm{if the $k$-th candidate is the worst candidate};\\ M, & \textrm{if the $k$-th candidate is the best candidate};\\ 0, & \textrm{otherwise}. \end{cases} \end{equation} In this new setting the optimal strategy has two thresholds, as stated in the following result, whose proof is analogous to that of Theorem \ref{BWS}. \begin{teor} For the Best-or-Worst variant, if $n$ is the number of candidates and the payments for selecting the worst and the best candidates are, respectively, $m<M$, there exist $r(n)\leq s(n)$ such that the following strategy is optimal: \begin{enumerate} \item Reject the $r(n)$ first interviewed candidates. \item Accept the first candidate which is better than all the preceding ones until reaching the $s(n)$-th candidate. \item After that, accept the first candidate which is either better or worse than all the preceding ones. \end{enumerate} \end{teor} Now, let $n$ be the number of candidates and let us consider cutoff values $1<r<s<n$. Then, if $k\in (r,n]$ is an integer, the probability of successfully selecting the best candidate in the $k$-th interview is given by $$P^{BW,U}_{n,r,s}(k)=\begin{cases} \frac{r}{(k-1) n}, & \textrm{if $r<k<s$};\\ \frac{r}{k-1}\frac{s-1}{k-2}\frac{1}{n}, & \textrm{if $k\geq s$}. \end{cases}$$ On the other hand, if $k\in (r,n]$ is an integer, the probability of successfully selecting the worst candidate in the $k$-th interview is given by $$\overline{P}^{BW,U}_{n,r,s}(k)=\begin{cases} 0, & \textrm{if $r<k<s$};\\ \frac{r}{k-1}\frac{s-1}{k-2}\frac{1}{n}, & \textrm{if $k\geq s$}.
\end{cases}$$ This is because, according to the optimal strategy, we can only select the worst candidate if $k\geq s$. Consequently, the expected payoff with $n$ candidates and cutoff values $r<s$ is given by \begin{align*} E^{BW,U}_{n}(r,s)&:=\sum_{k=r+1}^n \left(M\,P^{BW,U}_{n,r,s}(k)+m\,\overline{P}^{BW,U}_{n,r,s}(k)\right)\\&=\sum_{k=r+1}^{s}\frac{M\,r}{\left(k-1\right) \,n}+\sum_{k=s+1}^{n}\left(M+m\right) \frac{r(s-1)}{(k-1)(k-2)n}. \end{align*} The following result determines the cutoff values as well as the corresponding maximum expected payoff. \begin{teor}\label{TnM} Given a positive integer $n>2$, let us consider the function $E^{BW,U}_{n}(r,s)$ defined above for every pair of integers in the set $\{(r,s)\in\mathbb{Z}^{2}:0\leq r\leq s<n\}$ and let $(\mathcal{M}_{1}(n),\mathcal{M}_{2}(n))$ be the point for which $E_n^{BW,U}$ reaches its maximum. Then, \begin{itemize} \item[i)] $\displaystyle \lim_{n}\frac{\mathcal{M}_1(n)}{n}=\frac{e^{-1+\frac{m}{M}}M}{m+M}.$ \item[ii)] $\displaystyle \lim_{n}\frac{\mathcal{M}_2(n)}{n}=\frac{M}{m+M}.$ \item[iii)] $\displaystyle \lim_{n}E^{BW,U}_{n}(\mathcal{M}_{1}(n),\mathcal{M}_{2}(n))=\frac{e^{-1+\frac{m}{M}}M^2}{m+M}.$ \end{itemize} \end{teor} \begin{proof} Let us define the sequence of functions $\{h_n\}$ by $h_n(x,y)=E^{BW,U}_n(nx,ny)$. Then, $$ \lim_{n}h_n(x,y)=h(x,y)= \begin{cases} (M+m)x-(M+m)xy+Mx\log(y/x), & \textrm{if $x,y\neq0$};\\ 0 & \textrm{otherwise}. \end{cases} $$ and the convergence is uniform on $T:=\{(x,y)\in\mathbb{R}^{2}:0\leq x\leq y\leq1\}$. Hence, we can apply Proposition \ref{conv2}. To do so, observe that $h$ is a concave function on the convex set $T$ with a negative definite Hessian matrix. Since $h$ has only one critical point, namely $$\left(\frac{e^{-1+\frac{m}{M}}M}{M+m},\frac{M}{M+m}\right)$$ and $$h\left(\frac{e^{-1+\frac{m}{M}}M}{M+m},\frac{M}{M+m}\right)=\frac{e^{-1+\frac{m}{M}}M^{2}}{M+m}$$ the result follows.
\end{proof} This result means that the optimal strategy in this setting consists in rejecting roughly the first $n\dfrac{e^{-1+\frac{m}{M}}M}{M+m}$ interviewed candidates, then accepting the first candidate which is better than all the preceding ones until reaching roughly the $n\dfrac{M}{M+m}$-th candidate and, finally, accepting the first candidate which is either better or worse than all the preceding ones. Following this strategy, the maximum expected payoff is asymptotically equal to $\displaystyle\frac{e^{-1+\frac{m}{M}}M^{2}}{M+m}$. \begin{rem} If $m\ll M$, the cutoff values obtained in Theorem \ref{TnM} are, approximately, $ne^{-1}$ and $n$. This means that the optimal strategy ignores the objective of obtaining the worst candidate and we recover the original secretary problem. In addition, if $m=M$, then both cutoff values coincide with $n/2$ and we recover the original Best-or-Worst variant. \end{rem} \section{The Postdoc variant} In this section we focus on the Postdoc variant, as described in the introduction, in which the goal is to select the second best candidate. First of all we prove that, just like in the classic problem, the optimal strategy is a threshold strategy. In this variant it is not obvious that the optimal strategy has only one threshold. This is because the candidate considered in a given interview could be selected either when it is better than all the preceding ones or when it is second best among them, and in both cases it could end up being the second best candidate overall. However, we are going to see that selecting a candidate which is better than all the preceding ones is never preferable to waiting for a candidate which is second best among all candidates seen so far. Assume for a moment that we are following a threshold strategy. Let $n$ be the number of candidates and let us consider a cutoff value $r\in(1,n)$.
If $k\in (r,n]$ is an integer, the probability of successfully selecting the second best candidate in the $k$-th interview is $P^{PD}_{n,r}(k)=\frac{r}{k-1}\frac{1}{k}\frac{\binom{k}{2}}{\binom{n}{2}}$. Thus, provided we are following a threshold strategy for the second best candidate, the probability of succeeding in the Postdoc variant with $n$ candidates using $r$ as cutoff value is given by $$F^{PD}_{n}(r):=\sum_{k=r+1}^{n}P^{PD}_{n,r}(k)=\sum_{k=r+1}^{n}\frac{r\,\binom{k}{2}}{(k-1)\,k\,\binom{n}{2}}.$$ Note that the following holds: \begin{align*} F^{PD}_{n}(r)&=\sum_{k=r+1}^{n}\frac{r\,\binom{k}{2}}{(k-1)\,k\,\binom{n}{2}}=\frac{r\,\binom{r+1}{2}}{r\,(r+1)\,\binom{n}{2}}+\sum_{k=r+2}^{n}\frac{r\,\binom{k}{2}}{(k-1)\,k\,\binom{n}{2}}=\\ &=\frac{\binom{r+1}{2}}{(r+1)\,\binom{n}{2}}+\frac{r}{r+1}\sum_{k=r+2}^{n}\frac{(r+1)\,\binom{k}{2}}{(k-1)\,k\,\binom{n}{2}}=\\ &=\frac{\binom{r+1}{2}}{(r+1)\,\binom{n}{2}}+\frac{r}{r+1}F^{PD}_{n}(r+1). \end{align*} On the other hand, let us denote by $T_n(r)$ the probability of success after the $r$-th interview provided we have already selected a candidate which is better than all the preceding ones. Then, the probability of finding the second best candidate in the $(r+1)$-th interview is $\frac{1}{r+1}$ and, furthermore, the probability of not finding a better candidate among all the remaining interviews is $\frac{\binom{r+1}{2}}{\binom{n}{2}}$. Otherwise, the probability of not obtaining the second best candidate in the $(r+1)$-th interview is $\frac{r}{r+1}$ and the probability of success in this case is $T_n(r+1)$.
Hence, $$T_n (r)=\frac{1}{r+1}\frac{\binom{r+1}{2}}{ \binom{n}{2}}+\frac{r}{r+1}T_n(r+1).$$ Thus, we have seen that $T_n(r)$ and $F^{PD}_{n}(r)$ satisfy the same recurrence relation in $r$. Moreover, it holds that $T_n(n-1)=F^{PD}_{n}(n-1)=1/n$ so, consequently, we obtain that $T_n(r)=F^{PD}_{n}(r)$ for every $r<n$. Note that this means that the optimal strategy can ignore whether a given candidate is better than all the preceding ones and focus only on whether it is the second best among them; hence the optimal strategy has only one threshold. \begin{teor}\label{TEORPDC} For the Postdoc variant, if $n$ is the number of candidates, there exists $r(n)$ such that the following strategy is optimal: \begin{enumerate} \item Reject the first $r(n)$ interviewed candidates. \item After that, accept the first candidate which is the second best until then. \end{enumerate} \end{teor} \begin{proof} Just use the same ideas as in Theorem \ref{BWS}. \end{proof} Thus, the probability of succeeding in the Postdoc variant with $n$ candidates using $r$ as cutoff value is given by $$F^{PD}_{n}(r):=\sum_{k=r+1}^{n}P^{PD}_{n,r}(k)=\frac{r(n-r)}{n(n-1)}.$$ Observe that we have obtained that $F_n^{PD}(r)=\dfrac{1}{2}F_n^{BW}(r)$. Consequently, if we follow the previous strategy, the optimal cutoff value is the same as in the Best-or-Worst variant, i.e., $\lfloor \frac{n}{2}\rfloor$, and the maximum probability of success is one half of the maximum probability of success in the Best-or-Worst variant (see Theorem \ref{BWP}). We are now going to consider the Postdoc variant with the payoff function $p_C$ given in (\ref{pocost}); i.e., we assume that performing each interview has a constant cost of $1/n$. Under this assumption it can be proved that the optimal strategy has two thresholds.
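Let us note, for the reader's convenience, that the closed form for $F^{PD}_{n}(r)$ obtained above is immediate: since $\frac{\binom{k}{2}}{\binom{n}{2}}=\frac{k(k-1)}{n(n-1)}$, every summand satisfies
$$P^{PD}_{n,r}(k)=\frac{r}{k-1}\,\frac{1}{k}\,\frac{\binom{k}{2}}{\binom{n}{2}}=\frac{r}{n(n-1)},$$
so the sum $\sum_{k=r+1}^{n}P^{PD}_{n,r}(k)$ consists of $n-r$ equal terms.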
\begin{teor}\label{PDP} For the Postdoc variant, if $n$ is the number of candidates and if the payoff function is given by (\ref{pocost}), there exist $r(n)\leq s(n)$ such that the following strategy is optimal: \begin{enumerate} \item Reject the first $r(n)$ interviewed candidates. \item Accept the first candidate which is better than all the preceding ones until reaching the $s(n)$-th candidate. \item After that, accept the first candidate which is either better than all the preceding ones or the second best among them. \end{enumerate} \end{teor} \begin{proof} Proceed as in Theorem \ref{BWS} with each threshold separately. \end{proof} Under this strategy, the probability of successfully selecting the second best candidate in the $k$-th interview is given by the function $$P^{PD,C}_{n,r,s}(k)=\begin{cases} \frac{r(n-k)}{n(n-1)(k-1)}, & \textrm{if $r<k<s$};\\ \frac{r(s-1)(n-k)}{n(n-1)(k-1)(k-2)}+\frac{r(s-1)}{n(n-1)(k-2)}, & \textrm{if $k\geq s$}. \end{cases}$$ Consequently, the expected payoff with $n$ candidates and cutoff values $r<s$ is given by $$E^{PD,C}_{n}(r,s)=\sum_{k=r+1}^n \left(1-\frac{k}{n}\right)P^{PD,C}_{n,r,s}(k).$$ In the following result we determine the optimal cutoff values and the maximum expected payoff. \begin{teor} Given a positive integer $n>2$, let us consider the function $E^{PD,C}_{n}(r,s)$ defined above for every $(r,s)\in\{(r,s) \in\mathbb{Z}^{2}:0\leq r\leq s<n\}$ and let $(\mathcal{M}_{1} (n),\mathcal{M}_{2}(n))$ be the point at which $E^{PD,C}_{n}$ reaches its maximum.
Then, \begin{itemize} \item[i)] $\displaystyle \lim_{n} {\mathcal{M}_1(n)}/{n}=0.17248\dots$ \item[ii)] $\displaystyle \lim_{n} {\mathcal{M}_2(n)}/{n}=0.39422\dots$ \item[iii)] $\displaystyle \lim_{n}E^{PD,C}_{n}(\mathcal{M}_{1}(n),\mathcal{M}_{2}(n))=0.11811\dots$ \end{itemize} \end{teor} \begin{proof} First of all, observe that \begin{align*} E^{PD,C}_{n}(r,s)&=\frac{r}{n^{2}}\left( n+\frac{n-1}{s-1}-s+\frac{\left( s-r\right) \,\left( 3-4\,n+r+s\right) }{2\,\left( n-1\right) }\right) +\\ & +\frac{r}{n^{2}}\left( \left( 1-s\right) \,\psi(n-1)-\left( n-1\right) \,\psi(r)+\left( n-2+s\right) \,\psi(s-1)\right), \end{align*} where $\psi$ denotes the digamma function. Thus, if we define the sequence of functions $\{h_n\}$ by $h_n(x,y)=E^{PD,C}_n(nx,ny)$, it follows that $$\lim_{n}h_n(x,y)=h(x,y):=\begin{cases} \frac{x\left(2-6y+y^{2}+4x-x^{2}+2(1+y)\log y-2\,\log x\right)}{2}, & \textrm{if $x,y\neq0$};\\ 0, & \textrm{otherwise} \end{cases}$$ and the convergence is uniform on $\{(x,y)\in\mathbb{R}^{2}:0\leq x\leq y\leq1\}$. Using elementary techniques we get that $h$ reaches its absolute maximum at the point $(\alpha,\beta)$, where $\beta=0.39422\dots$ is the solution of $-2+\frac{1}{\beta}+\beta+\log(\beta)=0$ and $\alpha=0.1724844\dots$ is the solution of $1-\frac{1}{\beta}-2\,\beta-\frac{\beta^{2}}{2}+4\,\alpha-\frac{3\,\alpha^{2}}{2}-\log(\alpha)=0$. The fact that $h(\alpha,\beta)=0.11811\dots$ concludes the proof. \end{proof} Finally, let us consider the Postdoc variant with the payoff function $p_P$ given in (\ref{popay}); i.e., we assume that performing each interview has an additional payoff of $1/n$. Under this assumption, it is clear that no optimal strategy will accept a candidate which is better than all the preceding ones because, if the search continues, the probability of success is the same and the payoff will be greater.
Hence, we must only consider strategies with one threshold for the second best candidate, as in Theorem \ref{TEORPDC}, ignoring whether the interviewed candidate is better than all the preceding ones. In this setting, the expected payoff with $n$ candidates and cutoff value $r$ is given by $$E_n^{PD,P}(r):=\sum_{k=r+1}^n\left(1+\frac{k}{n}\right)P^{PD}_{n,r}(k)=\frac{r(n-r)(3n+1+r)}{2n^2(n-1)}.$$ The optimal cutoff value that maximizes the expected payoff $E^{PD,P}_{n}$ and this maximum expected payoff are determined in the following result. \begin{teor} Given an integer $n>1$, let us consider the function $E_n^{PD,P}(r)$ defined above for every integer $1<r<n$ and let $\mathcal{M}(n)$ be the value at which the function $E_n^{PD,P}$ reaches its maximum. Then, the following hold: \begin{itemize} \item[i)] $\displaystyle \lim_{n}\frac{\mathcal{M}(n)}{n}=\frac{\sqrt{13}-2}{3}=0.53518\dots$ \item[ii)] $\displaystyle\lim_{n}E_n^{PD,P}(\mathcal{M}(n))=\frac{13\sqrt{13}-35}{27}=0.4397\dots$ \end{itemize} \end{teor} \begin{proof} Since $E_n^{PD,P}$ is a degree-3 polynomial in $r$, we can explicitly obtain the exact value of $\mathcal{M}(n)$ by elementary methods. Namely, $$\mathcal{M}(n)=\frac{-1-2\,n+\sqrt{1+7\,n+13\,n^{2}}}{3}.$$ The result follows immediately. \end{proof} \begin{rem} Note that we can further refine the previous result by noting that $\displaystyle \mathcal{M}(n)=\left(\frac{\sqrt{13}-2}{3}\right)n+\frac{7-2\sqrt{13}}{6\sqrt{13}}+o(1)$. In this case, $[\mathcal{M}(n)]$ is the optimal cutoff value for all $n$ up to 10000, without any exception. \end{rem} \section{Conclusions} In this paper, we have analyzed two variants of the secretary problem which happen to be closely related: the Postdoc and the Best-or-Worst variants. Both of them have the same optimal threshold strategy, and the mean payoff in the second one is twice that in the first one.
We now show a comparative table of the asymptotic optimal cutoff value (ACV) given by $\displaystyle \lim_{n} \mathcal{M}(n)/n$ and the asymptotic maximum expected payoff (AMP) in the classical secretary problem, in the Best-or-Worst variant and in the Postdoc variant with payoff functions $p_B$, $p_C$ and $p_P$. In the case of the Postdoc variant with payoff function $p_C$, the cell corresponding to $\mathcal{M}(n)/n$ shows the two thresholds of the optimal strategy in that setting. \[ \begin{tabular} [c]{|c|c|c|c|c|c|c|}\hline {\footnotesize Payoff} & \multicolumn{2}{|c|}{Classic} & \multicolumn{2}{|c|}{Best-or-Worst} & \multicolumn{2}{|c|}{Postdoc} \\\cline{2-7} & $\text{ACV}$ & $\text{AMP}$ & $\text{ACV}$ & $\text{AMP}$ & $\text{ACV}$ & $\text{AMP}$\\\hline $p_B$ & $e^{-1}$ & $e^{-1}$ & $1/2$ & $1/2$ & $1/2$ & $1/4$\\\hline $p_C$ & $ \begin{array} [c]{c} \rho\simeq\\ 0.2031 \end{array} $ & $ \begin{array} [c]{c} \rho-\rho^{2}\simeq\\ 0.1619 \end{array} $ & $ \begin{array} [c]{c} \theta\simeq\\ 0.2846 \end{array} $ & $ \begin{array} [c]{c} \theta-\theta^{2}\simeq\\ 0.2036 \end{array} $ & $ \begin{array} [c]{c} 0.1724,\\ 0.3942 \end{array} $ & $0.1181$\\\hline $p_P$ & $ \begin{array} [c]{c} \eta\simeq\\ 0.4263 \end{array} $ & $ \begin{array} [c]{c} \eta^{2}+\eta\simeq\\ 0.6080 \end{array} $ & $ \begin{array} [c]{c} \vartheta\simeq\\ 0.5520 \end{array} $ & $ \begin{array} [c]{c} \vartheta^{2}+\vartheta\simeq\\ 0.8567 \end{array} $ & $ \begin{array} [c]{c} \frac{\sqrt{13}-2}{3}\simeq\\ 0.5351 \end{array} \,$ & $ \begin{array} [c]{c} \frac{13\sqrt{13}-35}{27}\\ \simeq0.4397 \end{array} $\\\hline \end{tabular} \ \ \] \end{document}
\begin{document} \title[Automorphisms of buildings]{Automorphisms of Non-Spherical Buildings\\ Have Unbounded Displacement} \author[Abramenko]{Peter Abramenko} \address{Department of Mathematics\\ University of Virginia\\ Charlottesville, VA 22904} \email{pa8e@virginia.edu} \author[Brown]{Kenneth S. Brown} \address{Department of Mathematics\\ Cornell University\\ Ithaca, NY 14853} \email{kbrown@cornell.edu} \date{October 5, 2007} \begin{abstract} If $\phi$ is a nontrivial automorphism of a thick building~$\Delta$ of purely infinite type, we prove that there is no bound on the distance that $\phi$ moves a chamber. This has the following group-theoretic consequence: If $G$ is a group of automorphisms of~$\Delta$ with bounded quotient, then the center of~$G$ is trivial. \end{abstract} \maketitle \section*{Introduction} \label{sec:introduction} A well-known folklore result says that a nontrivial automorphism~$\phi$ of a thick Euclidean building~$X$ has unbounded displacement. Here we are thinking of $X$ as a metric space, and the assertion is that there is no bound on the distance that $\phi$ moves a point. [For the proof, consider the action of~$\phi$ on the boundary~$X_\infty$ at infinity. If $\phi$ had bounded displacement, then $\phi$ would act as the identity on~$X_\infty$, and one would easily conclude that $\phi=\id$.] In this note we generalize this result to buildings that are not necessarily Euclidean. We work with buildings~$\Delta$ as combinatorial objects, whose set $\C$ of chambers has a discrete metric (``gallery distance''). We say that $\Delta$ is of \emph{purely infinite type} if every irreducible factor of its Weyl group is infinite. \begin{theorem*} Let $\phi$ be a nontrivial automorphism of a thick building~$\Delta$ of purely infinite type. Then $\phi$, viewed as an isometry of the set \C\ of chambers, has unbounded displacement. 
\end{theorem*} The crux of the proof is a result about Coxeter groups (Lemma \ref{lem:3}) that may be of independent interest. We prove the lemma in Section~\ref{sec:lemma-about-coxeter}, after a review of the Tits cone in Section~\ref{sec:preliminaries}. We then prove the theorem in Section~\ref{sec:proof-theorem}, and we obtain the following (almost immediate) corollary: If $G$ is a subgroup of~$\Aut(\Delta)$ such that there is a bounded set of representatives for the $G$\h-orbits in~\C, then the center of~$G$ is trivial. We conclude the paper with a brief discussion of displacement in the spherical case. We are grateful to Hendrik Van Maldeghem for providing us with some counterexamples in this connection (see Example~\ref{exam:1} and Remark~\ref{rem:5}). \section{Preliminaries on the Tits cone} \label{sec:preliminaries} In this section we review some facts about the Tits cone associated to a Coxeter group \cite{abramenko08:_approac_to_build,bourbaki81:_group_lie,humphreys90:_reflec_coxet,tits61:_group_coxet,vinberg71:_discr}. We will use \cite{abramenko08:_approac_to_build} as our basic reference, but much of what we say can also be found in one or more of the other cited references. Let $(W,S)$ be a Coxeter system with $S$ finite. Then $W$ admits a canonical representation, which turns out to be faithful (see Lemma~\ref{lem:7} below), as a linear reflection group acting on a real vector space $V$ with a basis $\{e_s \mid s\in S\}$. There is an induced action of~$W$ on the dual space~$V^*$. We denote by~$C_0$ the simplicial cone in~$V^*$ defined by \[ C_0 := \{x\in V^* \mid \<x,e_s> >0 \text{ for all } s\in S\}; \] here $\<-,->$ denotes the canonical evaluation pairing between $V^*$ and~$V$. We call~$C_0$ the \emph{fundamental chamber}. For each subset $J\subseteq S$, we set \[ A_J := \{x \in V^* \mid \<x,e_s> = 0 \text{ for } s \in J \text{ and } \<x,e_s> > 0 \text{ for } s \in S\setminus J\}. 
\] The sets $A_J$ are the (relatively open) \emph{faces} of~$C_0$ in the standard terminology of polyhedral geometry. They form a partition of the closure~$\Cbar_0$ of $C_0$ in~$V^*$. For each $s\in S$, we denote by~$H_s$ the hyperplane in~$V^*$ defined by the linear equation $\<-,e_s> =0$. It follows from the explicit definition of the canonical representation of~$W$ (which we have not given) that $H_s$ is the fixed hyperplane of $s$ acting on~$V^*$. The complement of $H_s$ in~$V^*$ is the union of two open halfspaces~$U_\pm(s)$ that are interchanged by~$s$. Here \[ U_+(s) := \{x \in V^* \mid \<x,e_s> > 0\}, \] and \[ U_-(s) := \{x \in V^* \mid \<x,e_s> < 0\}. \] The hyperplanes $H_s$ are called the \emph{walls} of~$C_0$. We denote by $\H_0$ the set of walls of~$C_0$. The \emph{support} of the face $A= A_J$, denoted~$\supp A$, is defined to be the intersection of the walls of~$C_0$ containing~$A$, i.e., $\supp A = \bigcap_{s\in J} H_s$. Note that $A$ is open in~$\supp A$ and that $\supp A$ is the linear span of~$A$. Although our definitions above made use of the basis $\{e_s \mid s \in S\}$ of~$V$, there are also intrinsic geometric characterizations of walls and faces. Namely, the walls of~$C_0$ are the hyperplanes $H$ in~$V^*$ such that $H$ does not meet~$C_0$ and $H\cap\Cbar_0$ has nonempty interior in~$H$. The faces of~$C_0$ correspond to subsets $\H_1\subseteq \H_0$. Given such a subset, let $L := \bigcap_{H\in\H_1} H$; the corresponding face~$A$ is then the relative interior (in~$L$) of the intersection $L\cap\Cbar_0$. We now make everything $W$-equivariant. We call a subset $C$ of~$V^*$ a \emph{chamber} if it is of the form $C = wC_0$ for some $w \in W$, and we call a subset $A$ of~$V^*$ a \emph{cell} if it is of the form $A = wA_J$ for some $w \in W$ and $J \subseteq S$. Each chamber~$C$ is a simplicial cone and hence has well-defined walls and faces, which can be characterized intrinsically as above.
If $C=wC_0$ with $w\in W$, the walls of~$C$ are the transforms $wH_s$ ($s \in S$), and the faces of~$C$ are the cells $wA_J$ ($J\subseteq S$). Finally, we call a hyperplane $H$ in~$V^*$ a \emph{wall} if it is a wall of some chamber, and we denote by~\H\ the set of all walls; thus \[ \H = \{wH_s \mid w \in W,\, s \in S\}. \] The set of all faces of all chambers is equal to the set of all cells. The union of these cells is called the \emph{Tits cone} and will be denoted by~$X$ in the following. Equivalently, \[ X = \bigcup_{w\in W} w\Cbar_0. \] We now record, for ease of reference, some standard facts about the Tits cone. The first fact is Lemma~2.58 in \cite[Section~2.5]{abramenko08:_approac_to_build}. See also the proof of Theorem~1 in \cite[Section~V.4.4]{bourbaki81:_group_lie}. \begin{lemma} \label{lem:6} For any $w \in W$ and $s \in S$, we have \[ wC_0 \subseteq U_+(s) \iff l(sw) > l(w) \] and \[ wC_0 \subseteq U_-(s) \iff l(sw) < l(w). \] Here $l(-)$ is the length function on~$W$ with respect to~$S$. \qed \end{lemma} This immediately implies: \begin{lemma} \label{lem:7} $W$ acts simply transitively on the set of chambers. \qed \end{lemma} The next result allows one to talk about separation of cells by walls. It is part of Theorem~2.80 in \cite[Section~2.6]{abramenko08:_approac_to_build}, and it can also be deduced from Proposition~5 in \cite[Section~V.4.6]{bourbaki81:_group_lie}. \begin{lemma} \label{lem:8} If $H$ is a wall and $A$ is a cell, then either $A$ is contained in~$H$ or $A$ is contained in one of the two open halfspaces determined by~$H$. \qed \end{lemma} We turn now to reflections. The following lemma is an easy consequence of the stabilizer calculation in \cite[Theorem~2.80]{abramenko08:_approac_to_build} or \cite[Section~V.4.6]{bourbaki81:_group_lie}. \begin{lemma} \label{lem:9} For each wall $H\in\H$, there is a unique nontrivial element $s_H\in W$ that fixes $H$ pointwise. \qed \end{lemma} We call $s_H$ the \emph{reflection} with respect to~$H$. 
In view of a fact stated above, we have $s_{H_s} = s$ for all $s\in S$. Thus $S$ is the set of reflections with respect to the walls in~$\H_0$. It follows immediately from Lemma~\ref{lem:9} that \begin{equation} \label{eq:1} s_{wH} = ws_Hw^{-1} \end{equation} for all $H \in \H$ and $w \in W$. Hence $wSw^{-1}$ is the set of reflections with respect to the walls of~$wC_0$. \begin{corollary} \label{cor:2} For $s \in S$ and $w \in W$, $\,H_s$ is a wall of~$wC_0$ if and only if $w^{-1}sw$ is in~$S$. \end{corollary} \begin{proof} $H_s$ is a wall of~$wC_0$ if and only if $s$ is the reflection with respect to a wall of~$wC_0$. In view of the observations above, this is equivalent to saying $s\in wSw^{-1}$, i.e., $w^{-1} s w \in S$. \end{proof} Finally, we record some special features of the infinite case. \begin{lemma} \label{lem:1} Assume that $(W,S)$ is irreducible and $W$ is infinite. \begin{enumerate} \item\label{item:1} If two chambers $C,D$ have the same walls, then $C=D$. \item\label{item:2} The Tits cone $X$ does not contain any pair~$\pm x$ of opposite nonzero vectors. \end{enumerate} \end{lemma} \begin{proof} (\ref{item:1}) We may assume that $C=C_0$ and~$D =wC_0$ for some $w\in W$. Then Corollary~\ref{cor:2} implies that $C$ and~$D$ have the same walls if and only if $w$ normalizes~$S$. So the content of~(\ref{item:1}) is that the normalizer of $S$ in~$W$ is trivial. This is a well known fact. See \cite[Section~V.4, Exercise~3]{bourbaki81:_group_lie}, \cite[Proposition~4.1]{deodhar82:_coxet}, or \cite[Section~2.5.6]{abramenko08:_approac_to_build}. Alternatively, there is a direct geometric proof of~(\ref{item:1}) outlined in the solution to \cite[Exercise~3.118]{abramenko08:_approac_to_build}. (\ref{item:2}) This is a result of Vinberg~\cite[p.~1112, Lemma~15]{vinberg71:_discr}. See also \cite[Section~2.6.3]{abramenko08:_approac_to_build} and \cite[Theorem~2.1.6]{krammer94:_coxet} for alternate proofs. 
\end{proof} \section{A lemma about Coxeter groups} \label{sec:lemma-about-coxeter} We begin with a geometric version of our lemma, and then we translate it into algebraic language. \begin{lemma} \label{lem:2} Let $(W,S)$ be an infinite irreducible Coxeter system with $S$ finite. If $C$ and~$D$ are distinct chambers in the Tits cone, then $C$ has a wall~$H$ with the following two properties: \begin{enumerate}[\rm(a)] \item $H$ is not a wall of~$D$. \item $H$ does not separate $C$ from~$D$. \end{enumerate} \end{lemma} \begin{proof} For convenience (and without loss of generality), we assume that $C$ is the fundamental chamber~$C_0$. Define $J \subseteq S$ by \[ J := \{s \in S \mid H_s \text{ is a wall of } D\}, \] and set $L := \bigcap_{s\in J} H_s$. Thus $L$ is the support of the face $A = A_J$ of~$C$. By Lemma~\ref{lem:1}(\ref{item:1}), $J\neq S$, hence $L \neq \{0\}$. Since $L$ is an intersection of walls of~$D$, it is also the support of a face $B$ of~$D$. Note that $A$ and~$B$ are contained in precisely the same walls, since they have the same span~$L$. In particular, $B$ is not contained in any of the walls~$H_s$ with $s\in S\setminus J$, so, by Lemma~\ref{lem:8}, $B$ is contained in either $U_+(s)$ or~$U_-(s)$ for each such~$s$. Suppose that $B\subseteq U_-(s)$ for each $s \in S\setminus J$. Then, in view of the definition of~$A = A_J$ by linear equalities and inequalities, $B \subseteq -A$. But $B$ contains a nonzero vector~$x$ (since $B$ spans~$L$), so we have contradicted Lemma~\ref{lem:1}(\ref{item:2}). Thus there must exist $s\in S\setminus J$ with $B \subseteq U_+(s)$. This implies that $D\subseteq U_+(s)$, and the wall $H=H_s$ then has the desired properties (a) and~(b). \end{proof} We now prove the algebraic version of the lemma, for which we relax the hypotheses slightly. We do not even have to assume that $S$ is finite. Recall that $(W,S)$ is said to be \emph{purely infinite} if each of its irreducible factors is infinite. 
\begin{lemma} \label{lem:3} Let $(W,S)$ be a purely infinite Coxeter system. If $w \neq 1$ in~$W$, then there exists $s\in S$ such that: \begin{enumerate}[\rm(a)] \item $w^{-1} s w \notin S$. \item $l(sw) > l(w)$. \end{enumerate} \end{lemma} \begin{proof} Let $(W_i,S_i)$ be the irreducible factors of~$(W,S)$, which are all infinite. Suppose the lemma is true for each factor~$(W_i,S_i)$, and consider any $w\neq 1$ in~$W$. Then $w$ has components $w_i\in W_i$, at least one of which (say~$w_1$) is nontrivial. So we can find $s \in S_1$ with $w_1^{-1}sw_1 \notin S_1$ and $l(sw_1) > l(w_1)$. One easily deduces (a) and~(b). We are now reduced to the case where $(W,S)$ is irreducible. If $S$ is finite, we apply Lemma~\ref{lem:2} with $C$ equal to the fundamental chamber~$C_0$ and $D = w C_0$. Then $H = H_s$ for some $s\in S$. Property~(a) of that lemma translates to (a) of the present lemma by Corollary~\ref{cor:2}, and property~(b) of that lemma translates to (b) of the present lemma by Lemma~\ref{lem:6}. If $S$ is infinite, we use a completely different method. The result in this case follows from Lemma~\ref{lem:4} below. \end{proof} Recall that for any Coxeter system $(W,S)$ and any $w\in W$, there is a (finite) subset $S(w) \subseteq S$ such that every reduced decomposition of~$w$ involves precisely the generators in~$S(w)$. This follows, for example, from Tits's solution to the word problem~\cite{tits69:_word_prob}. (See also \cite[Theorem~2.35]{abramenko08:_approac_to_build}). \begin{lemma} \label{lem:4} Let $(W,S)$ be an irreducible Coxeter system, and let $w \in W$ be nontrivial. If $S(w) \neq S$, then there exists $s\in S$ satisfying conditions (a) and~(b) of Lemma~\ref{lem:3}. \end{lemma} \begin{proof} By irreducibility, there exists an element $s\in S \setminus S(w)$ that does not commute with all elements of~$S(w)$. 
Condition~(b) then follows from the fact that $s\notin S(w)$ and standard properties of Coxeter groups; see \cite[Lemma~2.15]{abramenko08:_approac_to_build}. To prove~(a), suppose $sw=wt$ with $t\in S$. We have $s\notin S(w)$ but $s \in S(sw)$ (since $l(sw) > l(w)$), so necessarily $t = s$. Using induction on~$l(w)$, one now deduces from Tits's solution to the word problem that $s$ commutes with every element of~$S(w)$ (see \cite[Lemma~2.39]{abramenko08:_approac_to_build}), contradicting the choice of~$s$. \end{proof} \section{Proof of the theorem} \label{sec:proof-theorem} In this section we assume familiarity with basic concepts from the theory of buildings \cite{abramenko08:_approac_to_build,ronan89:_lectur,scharlau95:_build,tits74:_build_bn,weiss03}. Let $\Delta$ be a building with Weyl group~$(W,S)$, let \C\ be the set of chambers of~$\Delta$, and let $\delta\colon \C\times\C\to W$ be the Weyl distance function. (See \cite[Section 4.8 or~5.1]{abramenko08:_approac_to_build} for the definition and standard properties of~$\delta$.) Recall that \C\ has a natural \emph{gallery metric}~$d(-,-)$ and that \[ d(C,D) = l\bigl(\delta(C,D)\bigr) \] for $C,D\in\C$. Let $\phi \colon \Delta\to\Delta$ be an automorphism of~$\Delta$ that is not necessarily type-preserving. Recall that $\phi$ induces an automorphism $\sigma$ of~$(W,S)$. From the simplicial point of view, we can think of $\sigma$ (restricted to~$S$) as describing the effect of~$\phi$ on types of vertices. From the point of view of Weyl distance, $\sigma$ is characterized by the equation \[ \delta(\phi(C),\phi(D)) = \sigma(\delta(C,D)) \] for $C,D\in\C$. Our main theorem will be obtained from the following technical lemma: \begin{lemma} \label{lem:5} Assume that $\Delta$ is thick. Fix a chamber $C \in \C$, and set $w := \delta(C,\phi(C))$. Suppose there exists $s \in S$ such that $l(sw) > l(w)$ and $w^{-1}sw \neq t := \sigma(s)$. 
Then there is a chamber $D$ $s$\h-adjacent to~$C$ such that $d(D,\phi(D)) > d(C, \phi(C))$. \end{lemma} \begin{proof} We will choose $D$ $s$\h-adjacent to~$C$ so that $u := \delta(C,\phi(D))$ satisfies $l(u) \geq l(w)$ and $l(su) > l(u)$. We will then have $\delta(D,\phi(D)) = su$, as illustrated in the following schematic diagram: \[ \def\textstyle{\textstyle} \xymatrix@+1pc{C \ar[r]^w \ar@{-}[d]_s \ar[dr]_u & \phi(C) \ar@{-}[d]^t\\ D \ar[r]_{su} & \phi(D)} \] Hence \[ d(D,\phi(D)) = l(su) > l(u) \geq l(w) = d(C,\phi(C)). \] Case 1. $l(wt) < l(w)$. Then there is a unique chamber $E_0$ $t$\h-adjacent to~$\phi(C)$ such that $\delta(C,E_0) = wt$. For all other $E$ that are $t$\h-adjacent to~$\phi(C)$, we have $\delta(C,E) = w$. So we need only choose $D$ so that $\phi(D) \neq E_0$; this is possible by thickness. Then $u = w$. Case 2. $l(wt) > l(w)$. Then $l(swt) > l(wt)$ because the conditions $l(swt) < l(wt)$, $\,l(wt) > l(w)$, and $l(sw) > l(w)$ would imply (e.g., by the deletion condition for Coxeter groups) $swt = w$, and the latter is excluded by assumption. Hence in this case we can choose $D$ $s$\h-adjacent to~$C$ arbitrarily, and we then have $u = wt$. \end{proof} Suppose now that $(W,S)$ is purely infinite and $\phi$ is nontrivial. Then we can start with any chamber~$C$ such that $\phi(C)\neq C$, and Lemma~\ref{lem:3} shows that the hypothesis of Lemma~\ref{lem:5} is satisfied. We therefore obtain a chamber~$D$ such that $d(D,\phi(D)) > d(C,\phi(C))$. Our main theorem as stated in the introduction follows at once. We restate it here for ease of reference: \begin{theorem} \label{thr:1} Let $\phi$ be a nontrivial automorphism of a thick building~$\Delta$ of purely infinite type. Then $\phi$, viewed as an isometry of the set \C\ of chambers, has unbounded displacement, i.e., the set $\{d(C,\phi(C)) \mid C\in\C\}$ is unbounded. 
\qed \end{theorem} \begin{remark} \label{rem:1} Note that, in view of the generality under which we proved Lemma \ref{lem:3}, the building~$\Delta$ is allowed to have infinite rank. \end{remark} \begin{remark} \label{rem:2} In view of the existence of translations in Euclidean Coxeter complexes, the thickness assumption in the theorem cannot be dropped. \end{remark} \begin{corollary} \label{cor:1} Let $\Delta$ and~\C\ be as in the theorem, and let $G$ be a group of automorphisms of~$\Delta$. If there is a bounded set of representatives for the $G$\h-orbits in~\C, then $G$ has trivial center. \end{corollary} \begin{proof} Let \M\ be a bounded set of representatives for the $G$\h-orbits in~\C, and let $z\in G$ be central. Then there is an upper bound~$M$ on the distances $d(C,zC)$ for $C\in\M$; we can take $M$ to be the diameter of the bounded set $\M \cup z\M$, for instance. Now every chamber $D\in\C$ has the form $D=gC$ for some $g\in G$ and $C\in\M$, hence \[ d(D,zD) = d(gC,zgC) = d(gC,gzC) = d(C,zC) \leq M. \] Thus $z$ has bounded displacement and therefore $z=1$ by the theorem. \end{proof} \begin{remark} \label{rem:4} Although Corollary~\ref{cor:1} is stated for faithful group actions, we can also apply it to actions that are not necessarily faithful and conclude (under the hypothesis of the corollary) that the center of~$G$ acts trivially. \end{remark} \begin{remark} \label{rem:6} Note that the hypothesis of the corollary is satisfied if the action of $G$ is chamber transitive. In particular, it is satisfied if the action is strongly transitive and hence corresponds to a BN-pair in~$G$. In this case, however, the result is trivial (and does not require the building to be of purely infinite type). Indeed, the stabilizer of every chamber is a parabolic subgroup and hence is self-normalizing, so it automatically contains the center of~$G$. 
To obtain other examples, consider a cocompact action of a group on a locally-finite thick Euclidean building (e.g., a thick tree). The corollary then implies that the center of the group must act trivially. \end{remark} \begin{remark} \label{rem:3} The conclusion of Theorem~\ref{thr:1} is obviously false for spherical buildings, since the metric space~\C\ is bounded in this case. But one can ask instead whether or not \begin{equation} \label{eq:2} \disp \phi = \diam \Delta, \end{equation} where $\diam \Delta$ denotes the diameter of the metric space~\C, and $\disp\phi$ is the \emph{displacement} of~$\phi$; the latter is defined by \[ \disp \phi := \sup \{d(C,\phi(C)) \mid C\in\C\}. \] Note that, in the spherical case, Equation~\eqref{eq:2} holds if and only if there is a chamber~$C$ such that $\phi(C)$ and~$C$ are opposite. This turns out to be false in general. The following counterexample was pointed out to us by Hendrik Van Maldeghem. \end{remark} \begin{example} \label{exam:1} Let $k$ be a field and $n$ an integer $\geq 2$. Let $\Delta$ be the building associated to the vector space $V=k^{2n}$. Thus the vertices of~$\Delta$ are the subspaces $U$ of~$V$ such that $0<U<V$, and the simplices are the chains of such subspaces. A chamber is a chain \[ U_1<U_2<\cdots <U_{2n-1} \] with $\dim U_i = i$ for all~$i$, and two such chambers $(U_i)$ and~$(U'_i)$ are opposite if and only if $U_i + U'_{2n-i} = V$ for all~$i$. Now choose a non-degenerate alternating bilinear form $B$ on~$V$, and let $\phi$ be the (type-reversing) involution of~$\Delta$ that sends each vertex $U$ to its orthogonal subspace~$U^\perp$ with respect to~$B$. For any chamber $(U_i)$ as above, its image under~$\phi$ is the chamber~$(U'_i)$ with $U'_{2n-i} = U_i^\perp$ for all~$i$. Since $U_1\leq U_1^\perp = U'_{2n-1}$, these two chambers are not opposite. \end{example} Even though \eqref{eq:2} is false in general, one can still use Lemma~\ref{lem:5} to obtain lower bounds on~$\disp\phi$. 
Consider the rank~2 case, for example. Then $\Delta$ is a generalized $m$-gon for some~$m$, its diameter is~$m$, and its Weyl group~$W$ is the dihedral group of order~$2m$. Lemma~\ref{lem:5} in this case yields the following result: \begin{corollary} \label{cor:3} Let $\phi$ be a nontrivial automorphism of a thick generalized $m$-gon. Then the following hold: \begin{enumerate}[\rm(a)] \item $\disp\phi \geq m-1$. \item If $\phi$ is type preserving and $m$ is odd, or if $\phi$ is type reversing and $m$ is even, then $\disp\phi=m$. \end{enumerate} \end{corollary} \begin{proof} (a) The hypothesis of Lemma~\ref{lem:5} is always satisfied as long as $w \neq 1$ and $l(w) < m-1$ since then there exists $s \in S$ with $l(sw) > l(w)$, and $sw$ has a unique reduced decomposition (which is in particular not of the form $wt$ with $t \in S$). (b) Suppose $l(w) = m-1$, and let $s \in S$ be the unique element such that $sw = w_0$, where the latter is the longest element of~$W$. Then $s' \in S$ satisfies $ws' = sw$ if and only if $l(ws') > l(w)$. Since $w$ does not start with~$s$, this is equivalent to $s' \neq s$ if $m$ is odd and to $s' = s$ if $m$ is even. So the hypothesis of Lemma~\ref{lem:5} is satisfied for \emph{any} $w$ different from 1 and~$w_0$ if $m$ is odd and $\sigma=\id$ or if $m$ is even and $\sigma \neq \id$. \end{proof} We conclude by mentioning another family of examples, again pointed out to us by Van Maldeghem. \begin{remark} \label{rem:5} For even $m = 2n$, type-preserving automorphisms $\phi$ of generalized $m$-gons with $\disp \phi = m-1$ arise as follows. Assume that there exists a vertex $x$ in the generalized $m$-gon~$\Delta$ such that the ball $B(x, n)$ is fixed pointwise by~$\phi$. Here $B(x,n)$ is the set of vertices with $d(x,y) \leq n$, where $d(-,-)$ now denotes the usual graph metric, obtained by minimizing lengths of paths. 
Recall that there are two types of vertices in~$\Delta$ and that opposite vertices always have the same type since $m$ is even. Let $y$ be any vertex that does not have the same type as~$x$. Then $y$ is at distance at most $n-1$ from some vertex in $B(x,n)$. Since $\phi$ fixes $B(x,n)$ pointwise, $d(y, \phi(y)) \leq 2n-2$. So $C$ and~$\phi(C)$ are not opposite for any chamber~$C$ having $y$ as a vertex. Since this is true for any vertex $y$ that does not have the same type as~$x$, $\disp \phi \neq m$ and hence, by Corollary~\ref{cor:3}(a), $\disp \phi = m-1$ if $\phi \neq \id$. Now it is a well-known fact (see for instance \cite[Corollary 5.4.7]{maldeghem98:_gener}) that every \emph{Moufang} $m$-gon possesses nontrivial type-preserving automorphisms $\phi$ fixing some ball $B(x,n)$ pointwise. (In the language of incidence geometry, these automorphisms are called central or axial collineations, depending on whether $x$ is a point or a line in the corresponding rank~2 geometry.) So for $m = 4$, $6$, or~$8$, all Moufang $m$-gons admit type-preserving automorphisms $\phi$ with $\disp \phi = m-1$. \end{remark} \end{document}
\begin{document} \begin{abstract} We classify up to isomorphism all gradings by an arbitrary group $G$ on the Lie algebras of zero-trace upper block-triangular matrices over an algebraically closed field of characteristic $0$. It turns out that the support of such a grading always generates an abelian subgroup of $G$. Assuming that $G$ is abelian, our technique also works to obtain the classification of $G$-gradings on the upper block-triangular matrices as an associative algebra, over any algebraically closed field. These gradings were originally described by A. Valenti and M. Zaicev in 2012 (assuming characteristic $0$ and $G$ finite abelian) and classified up to isomorphism by A. Borges et al. in 2018. Finally, still assuming that $G$ is abelian, we classify $G$-gradings on the upper block-triangular matrices as a Jordan algebra, over an algebraically closed field of characteristic $0$. It turns out that, under these assumptions, the Jordan case is equivalent to the Lie case. \end{abstract} \maketitle \section{Introduction} The algebras of upper block-triangular matrices are an essential example of non-simple algebras. Moreover, viewed as Lie algebras, they are an example of the so-called parabolic subalgebras of simple Lie algebras. Group gradings on the upper triangular matrices (a Borel subalgebra) were investigated in \cite{pkfy2017}. In this paper, we classify gradings by any abelian group $G$ on the upper block-triangular matrices, viewed as an associative, Lie or Jordan algebra, over an algebra\-ical\-ly closed field $\mathbb{F}$, which is assumed to have characteristic $0$ in the Lie and Jordan cases. The basic idea is to show that every $G$-grading on the upper block-triangular matrices (of trace zero in the Lie case) can be extended uniquely to a grading on the full matrix algebra. 
However, not every $G$-grading on the full matrix algebra restricts to a grading on the upper block-triangular matrices, which leads us to consider an additional $\mathbb{Z}$-grading. In the associative case, this approach to the classification of gradings is different from the one of A. Valenti and M. Zaicev, who investigated upper triangular matrices in \cite{VaZa2007} and upper block-triangular matrices in \cite{VaZa2012} (under more restrictive assumptions than here). The Lie and Jordan cases are new. It turns out that the automorphism group of the upper block-triangular matrices, viewed as a Jordan algebra, is the same as the automorphism group of the upper block-triangular matrices of trace zero, viewed as a Lie algebra. Hence, the classifications of abelian group gradings in both cases are equivalent. The Jordan algebra of upper triangular matrices was investigated in \cite{pkfy2017Jord}. Moreover, we prove that, in the Lie case, there is no loss of generality in assuming $G$ abelian, because the support of any group grading on the zero-trace upper block-triangular matrices generates an abelian subgroup. The paper is structured as follows. After a brief review of terminology and relevant results on gradings in Section \ref{prelim}, we obtain a classification of gradings by abelian groups on the associative algebra of upper block-triangular matrices in Section \ref{assoc_case} (see Theorem \ref{th:main_assoc} and Corollary \ref{cor:main assoc}). In Section \ref{Lie_case}, we classify gradings on the Lie algebra of upper block-triangular matrices (Theorem \ref{th:main_Lie} and Corollary \ref{cor:main_Lie}). The center of this algebra is spanned by the identity matrix, and we actually classify gradings on the quotient modulo the center. 
The effect that this transition has on the classification of gradings is discussed in Section \ref{practical_iso}, the main results of which (Theorem \ref{th:main_practical} and Corollary \ref{cor:main_practical}) are quite general and may be of independent interest. Our approach to classification in Section \ref{Lie_case} follows the same lines as in the associative case. However, the Lie case is substantially more difficult, and some technical aspect is postponed until Section \ref{commut_supp}, where we also prove the commutativity of support (Theorem \ref{supp_commutativity}). Finally, the Jordan case is briefly discussed in Section \ref{Jord_case}. \section{Preliminaries on group gradings}\label{prelim} Let $A$ be an arbitrary algebra over a field $\mathbb{F}$ and let $G$ be a group. We say that $A$ is \emph{$G$-graded} if $A$ is endowed with a fixed vector space decomposition, \[ \Gamma:A=\bigoplus_{g\in G}A_g, \] such that $A_gA_h\subset A_{gh}$, for all $g,h\in G$. The subspace $A_g$ is called the \emph{homogeneous component of degree $g$}, and the non-zero elements $x\in A_g$ are said to be homogeneous of degree $g$. We write $\deg x=g$ for these elements. The \emph{support} of $A$ (or of $\Gamma$) is the set $\mathrm{Supp}\,A=\{g\in G\mid A_g\ne 0\}$. A subspace $I\subset A$ is called \emph{graded} if $I=\bigoplus_{g\in G}I\cap A_g$. If $I$ is a \emph{graded ideal} {(that is, it is simultaneously an ideal and a graded subspace)}, then the quotient algebra $A/I$ inherits a natural $G$-grading. $A$ is said to be \emph{graded-simple} if $A^2\ne 0$ and $A$ does not have nonzero proper graded ideals. If $A$ is an associative or Lie algebra, then a \emph{graded $A$-module} is an $A$-module $V$ with a fixed vector space decomposition $V=\bigoplus_{g\in G}V_g$ such that $A_g\cdot V_h\subset V_{gh}$, for all $g,h\in G$. A nonzero graded $A$-module is said to be \emph{graded-simple} if it does not have nonzero proper graded submodules. 
{(A \emph{graded submodule} is a submodule that is also a graded subspace.)} Let $H$ be any group and let $\alpha:G\to H$ be a homomorphism of groups. Then $\alpha$ induces an $H$-grading, say $A=\bigoplus_{h\in H} A_h'$, on the $G$-graded algebra $A$ if we define \[ A_h'=\bigoplus_{g\in\alpha^{-1}(h)}A_g. \] The $H$-grading is called the \emph{coarsening} of $\Gamma$ induced by the homomorphism $\alpha$. Let $B=\bigoplus_{g\in G}B_g$ be another $G$-graded algebra. A map $f:A\to B$ is called a \emph{homomorphism of $G$-graded algebras} if $f$ is a homomorphism of algebras and $f(A_g)\subset B_g$, for all $g\in G$. If, moreover, $f$ is an isomorphism, we call $f$ a \emph{$G$-graded isomorphism} (or an isomorphism of graded algebras), and we say that $A$ and $B$ are \emph{$G$-graded isomorphic} (or isomorphic as graded algebras). Two $G$-gradings, $\Gamma$ and $\Gamma'$, on the same algebra $A$ are \emph{isomorphic} if $(A,\Gamma)$ and $(A,\Gamma')$ are isomorphic as graded algebras. Let $T$ be a finite abelian group and let $\sigma:T\times T\to\mathbb{F}^\times$ be a map, where $R^\times$ denotes the group of invertible elements in a ring $R$. We say that $\sigma$ is a \emph{2-cocycle} if \[ \sigma(u,v)\sigma(uv,w)=\sigma(u,vw)\sigma(v,w),\quad\forall u,v,w\in T. \] The twisted group algebra $\mathbb{F}^\sigma T$ is constructed as follows: it has $\{X_t\mid t\in T\}$ as an $\mathbb{F}$-vector space basis, and multiplication is given by $X_uX_v=\sigma(u,v)X_{uv}$. It is readily seen that $\mathbb{F}^\sigma T$ is an associative algebra if and only if $\sigma$ is a 2-cocycle, which we will assume from now on. Note that $A=\mathbb{F}^\sigma T$ has a natural $T$-grading, where each homogeneous component has dimension 1, namely $A_t=\mathbb{F}X_t$, for each $t\in T$. This is an example of the so-called \emph{division grading}. A graded algebra $D$ is a \emph{graded division algebra} (or $D$ has a division grading) if every non-zero homogeneous element of $D$ is invertible.
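To make the construction concrete, here is a standard small example (an illustration of ours, not taken from the text; we assume $\mathrm{char}\,\mathbb{F}\neq 2$): for $T=\mathbb{Z}_2\times\mathbb{Z}_2=\{e,a,b,ab\}$, a suitable 2-cocycle $\sigma$ yields $\mathbb{F}^\sigma T\simeq M_2(\mathbb{F})$, realized by Pauli-type matrices.

```latex
% Illustrative Pauli-type realization of F^sigma T inside M_2 (char F not 2):
\[
X_e=\begin{pmatrix}1&0\\0&1\end{pmatrix},\quad
X_a=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\quad
X_b=\begin{pmatrix}0&1\\1&0\end{pmatrix},\quad
X_{ab}:=X_aX_b=\begin{pmatrix}0&1\\-1&0\end{pmatrix}.
\]
% Here X_a X_b = -X_b X_a, so sigma(a,b) = 1 while sigma(b,a) = -1.
```

Each homogeneous component $\mathbb{F}X_t$ is one-dimensional and consists of invertible elements together with $0$, so this $T$-grading on $M_2(\mathbb{F})$ is a division grading.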
Define $\beta:T\times T\to\mathbb{F}^\times$ by $\beta(u,v)=\sigma(u,v)\sigma(v,u)^{-1}$. Then we have \[ X_uX_v=\beta(u,v)X_vX_u,\quad\forall u,v\in T, \] and $\beta$ is an alternating bicharacter of $T$, that is, $\beta$ is multiplicative in each variable and $\beta(u,u)=1$ for all $u\in T$. If $\mathrm{char}\,\mathbb{F}$ does not divide $|T|$, then $\mathbb{F}^\sigma T$ is semisimple as an (ungraded) algebra. It follows that $\mathbb{F}^\sigma T$ is a simple algebra if and only if $\beta$ is non-degenerate. In particular, the non-degeneracy of $\beta$ implies that $|T|=\dim\mathbb{F}^\sigma T$ is a perfect square. It is known that, if $\mathbb{F}$ is algebraically closed, the isomorphism classes of matrix algebras endowed with a division grading by an abelian group $G$ are in bijection with the pairs $(T,\beta)$, where $T$ is a finite subgroup of $G$ (namely, the support of the grading) and $\beta:T\times T\to\mathbb{F}^\times$ is a non-degenerate alternating bicharacter (see e.g. \cite[Theorem 2.15]{EK2013}). For each $n$-tuple $(g_1,\ldots,g_n)$ of elements of $G$, we can define a $G$-grading on $M_n=M_n(\mathbb{F})$ by declaring that the matrix unit $E_{ij}$ is homogeneous of degree $g_ig_j^{-1}$, for all $i$ and $j$. A grading on $M_n$ is called \emph{elementary} if it is isomorphic to one of this form. For any $g\in G$ and any permutation $\sigma\in S_n$, the $n$-tuple $(g_{\sigma(1)}g,\ldots,g_{\sigma(n)}g)$ defines an isomorphic elementary $G$-grading. Hence, an isomorphism class of elementary gradings is described by a function $\kappa:G\to\mathbb{Z}_{\ge0}$, where $g\in G$ appears exactly $\kappa(g)$ times in the $n$-tuple. Moreover, $G$ acts on these functions by translation: given $g\in G$, one defines the function $g\kappa:G\to\mathbb{Z}_{\ge0}$ by $g\kappa(x)=\kappa(g^{-1}x)$. For any $\kappa:G\to\mathbb{Z}_{\ge0}$ with finite support, we denote $|\kappa|:=\sum_{x\in G}\kappa(x)$. 
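As a small illustration of elementary gradings and the functions $\kappa$ (our own example), take $G=\mathbb{Z}_2=\{e,g\}$, $n=3$ and the defining tuple $(g_1,g_2,g_3)=(e,e,g)$.

```latex
% Elementary Z_2-grading on M_3 defined by the tuple (e,e,g);
% recall deg E_{ij} = g_i g_j^{-1}.
\[
A_e=\mathrm{Span}\{E_{11},E_{12},E_{21},E_{22},E_{33}\},\qquad
A_g=\mathrm{Span}\{E_{13},E_{23},E_{31},E_{32}\}.
\]
% The associated function kappa satisfies kappa(e) = 2 and kappa(g) = 1,
% so |kappa| = 3.
```

Translating by $g$ gives the tuple $(g,g,e)$, that is, the function $g\kappa$ with $(g\kappa)(g)=2$ and $(g\kappa)(e)=1$, which defines an isomorphic elementary grading.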
If $\mathbb{F}$ is algebraically closed, then, for a fixed abelian group $G$, the isomorphism classes of $G$-gradings on $M_n$ are parametrized by the triples $(T,\beta,\kappa)$, where $T$ is a finite subgroup of $G$, $\beta:T\times T\to\mathbb{F}^\times$ is a non-degenerate alternating bicharacter, and $\kappa:G/T\to\mathbb{Z}_{\ge0}$ is a function with finite support such that $|\kappa|\sqrt{|T|}=n$. A grading in the isomorphism class corresponding to $(T,\beta,\kappa)$ can be explicitly constructed by making the following two choices: (i) a $k$-tuple $\gamma=(g_1,\ldots,g_k)$ of elements in $G$ such that each element $x\in G/T$ occurs in the $k$-tuple $(g_1T,\ldots,g_kT)$ exactly $\kappa(x)$ times (hence $k=|\kappa|$) and (ii) a division grading on $M_\ell$ with support $T$ and bicharacter $\beta$ (hence $|T|=\ell^2$). Since $n=k\ell$, we identify $M_n$ with $M_k\otimes M_\ell$ via Kronecker product and define a $G$-grading on $M_n$ by declaring the matrix $E_{ij}\otimes d$, with $1\le i,j\le k$, and $d$ a nonzero homogeneous element of $M_\ell$, to be of degree $g_i \deg(d) g_j^{-1}$. Finally, two triples $(T,\beta,\kappa)$ and $(T',\beta',\kappa')$ determine the same isomorphism class if and only if $T'=T$, $\beta'=\beta$, and there exists $g\in G$ such that $\kappa'=g\kappa$ (see e.g. \cite[Theorem 2.27]{EK2013}). \section{Associative case}\label{assoc_case} Let $\mathbb{F}$ be {a field} and let $V$ be a finite-dimensional $\mathbb{F}$-vector space. Denote by $\mathscr{F}$ a flag of subspaces in $V$, that is \begin{align*} 0=V_0\subsetneq V_1\subsetneq\ldots\subsetneq V_s=V. \end{align*} Let $n=\dim V$ and $n_i=\dim V_i/V_{i-1}$, for $i=1,2,\ldots,s$. We denote by $U(\mathscr{F})$ the set of endomorphisms of $V$ preserving the flag $\mathscr{F}$, which coincides with the upper block-triangular matrices $UT(n_1,\ldots,n_s)$ after a choice of basis of $V$ respecting the flag $\mathscr{F}$. 
We fix such a basis and identify $U(\mathscr{F})=UT(n_1,\ldots,n_s)\subset M_n$. For each $m\in\mathbb{Z}$, if $|m|<s$, let $J_m\subset M_n$ denote the $m$-th block-diagonal of matrices. Formally, \begin{align*} J_m=\mathrm{Span}\{&E_{ij}\in M_n\mid\text{there exists $q\in\mathbb{Z}_{\ge0}$ such that}\\ &n_1+\dots+n_q<i\le n_1+\dots+n_{q+1},\text{ and }\\ &n_1+\dots+n_{q+m}<j\le n_1+\dots+n_{q+m+1}\}. \end{align*} Setting $J_m=0$ for $|m|\ge s$, we obtain a $\mathbb{Z}$-grading $M_n=\bigoplus_{m\in\mathbb{Z}}J_m$, which is the elementary grading defined by the $n$-tuple \[ (\underbrace{-1,\ldots,-1}_{n_1 \text{ times}},\underbrace{-2,\ldots,-2}_{n_2\text{ times}},\ldots,\underbrace{-s,\ldots,-s}_{n_s\text{ times}}). \] This grading restricts to $U(\mathscr{F})$, and we will refer to the resulting grading $U(\mathscr{F})=\bigoplus_{m\ge 0}J_m$ as the \emph{natural $\mathbb{Z}$-grading} of $U(\mathscr{F})$. The associated filtration consists of the powers of the Jacobson radical $J$ of $U(\mathscr{F})$, that is, {$\bigoplus_{i\ge m}J_i=J^m$} for all $m\ge 0$. Let $G$ be any abelian group and denote $G^\#=\mathbb{Z}\times G$. We identify $G$ with the subset $\{0\}\times G\subset G^\#$ and $\mathbb{Z}$ with $\mathbb{Z}\times\{1_G\}\subset G^\#$. We want to find a relation between $G^\#$-gradings on $M_n$ and $G$-gradings on $U(\mathscr{F})$. First, we note that, given any $G^\#$-grading on $M_n$, we obtain a $\mathbb{Z}$-grading on $M_n$ if we consider the coarsening induced by the projection onto the first component $G^\#\to\mathbb{Z}$. \begin{Def} A $G^\#$-grading on $M_n$ is said to be \emph{admissible} if $U(\mathscr{F})$ with its natural $\mathbb{Z}$-grading is a graded subalgebra of $M_n$, where $M_n$ is viewed as a $\mathbb{Z}$-graded algebra via the coarsening induced by the projection $G^\#\to\mathbb{Z}$. We call an isomorphism class of $G^\#$-gradings on $M_n$ \emph{admissible} if it contains an admissible grading.
\end{Def} \begin{Lemma}\label{ind_admissible} For any admissible $G^\#$-grading on $M_n$, {the $\mathbb{Z}$-grading induced by the projection $G^\#\to\mathbb{Z}$} has $J_m$ as its homogeneous component of degree $m$. \end{Lemma} \begin{proof} From the definition of admissible grading, we know that, for any $m\ge 0$, $J_m$ is contained in the homogeneous component of degree $m$ in the induced $\mathbb{Z}$-grading on $M_n$. In particular, each $E_{ii}$ is homogeneous of degree $0$. It follows that $E_{ii}M_nE_{jj}=\mathbb{F}E_{ij}$ is a graded subspace. Hence, all $E_{ij}$ are homogeneous. Moreover, if $E_{ij}\in J_{-m}$, then $E_{ji}\in J_m$ has degree $m$, so $E_{ij}$ must have degree $-m$, since $E_{ii}=E_{ij}E_{ji}$. The result follows. \end{proof} Recall from Section \ref{prelim} that, {over an algebraically closed field,} any isomorphism class of $G^\#$-gradings on $M_n$ is given by a finite subgroup $T$ of $G^\#$ (hence, in fact, $T\subset G$), a non-degenerate bicharacter $\beta:T\times T\to\mathbb{F}^\times$ and a function $\kappa:G^\#/T\to\mathbb{Z}_{\ge0}$ with finite support, where $n=k\ell$, $k=|\kappa|$ and $\ell=\sqrt{|T|}$. \begin{Lemma}\label{lem1} Consider a $G^\#$-grading on $M_n$ with parameters $(T,\beta,\kappa)$ and let \[ \gamma=\big((a_1,g_1),(a_2,g_2),\ldots,(a_k,g_k)\big) \] be a $k$-tuple of elements of $G^\#$ associated to $\kappa$. Then the $\mathbb{Z}$-grading on $M_n$ induced by the projection $G^\#\to\mathbb{Z}$ is an elementary grading defined by the $n$-tuple \[ (\underbrace{a_1,\ldots,a_1}_{\ell\text{ times}},\underbrace{a_2,\ldots,a_2}_{\ell\text{ times}},\ldots,\underbrace{a_k,\ldots,a_k}_{\ell\text{ times}}). \] \end{Lemma} \begin{proof} We have a $G^\#$-graded isomorphism $M_n\simeq M_k\otimes M_\ell$, where $M_k$ has an elementary grading defined by $\gamma$ and $M_\ell$ has a division grading with support $T$. 
Since $T$ is contained in the kernel of the projection $G^\#\to\mathbb{Z}$, the factor $M_\ell$ will get the trivial induced $\mathbb{Z}$-grading. The result follows. \end{proof} By the previous two lemmas, the isomorphism class of $G^\#$-gradings on $M_n$ with parameters $(T,\beta,\kappa)$ is admissible if and only if $\gamma$ has the following form, up to permutation and translation by an integer: \[ \gamma=\big((-1,g_{11}),\ldots,(-1,g_{1k_1}),(-2,g_{21}),\ldots,(-2,g_{2k_2}),\ldots,(-s,g_{s1}),\ldots,(-s,g_{sk_s})\big), \] where $n_i=k_i\ell$ for all $i=1,2,\ldots,s$. Equivalently, this condition can be restated directly in terms of $\kappa$, regarded as a function $\mathbb{Z}\times G/T\to\mathbb{Z}_{\ge 0}$, as follows: there exist $a\in\mathbb{Z}$ and $\kappa_1,\ldots,\kappa_s:G/T\to\mathbb{Z}_{\ge0}$ with $|\kappa_i|\sqrt{|T|}=n_i$ such that \[ \kappa(a-i,x)=\kappa_i(x),\quad \forall i\in\{1,2,\ldots,s\},\, x\in G/T, \] and $\kappa(a-i,x)=0$ if $i\notin\{1,2,\ldots,s\}$. By Lemma \ref{ind_admissible}, every admissible $G^\#$-grading $ M_n=\bigoplus_{(m,g)\in G^\#}A_{(m,g)} $ restricts to a $G^\#$-grading on $U(\mathscr{F})$, hence the projection onto the second component $G^\#\to G$ induces a $G$-grading on $U(\mathscr{F})$, namely, $U(\mathscr{F})=\bigoplus_{g\in G}B_g$ where $B_g=\bigoplus_{m\ge 0}A_{(m,g)}$. \begin{Lemma}\label{lem2} If two admissible $G^\#$-gradings on $M_n$ are isomorphic then they induce isomorphic $G$-gradings on $U(\mathscr{F})$. \end{Lemma} \begin{proof} Assume that $\psi$ is an isomorphism between two admissible $G^\#$-gradings on $M_n$. Since $\psi$ preserves degree in $G^\#$ and, by Lemma \ref{ind_admissible}, the homogeneous component of $\mathbb{Z}$-degree $m$ is $J_m$ for both gradings, $\psi$ maps $U(\mathscr{F})=\bigoplus_{m\ge 0}J_m$ onto itself and therefore restricts to an automorphism of $U(\mathscr{F})$. This restriction is an isomorphism between the induced $G$-gradings on $U(\mathscr{F})$. \end{proof} Now we want to go back from $G$-gradings on $U(\mathscr{F})$ to $G^\#$-gradings on $M_n$.
First note that the $G$-gradings on $U(\mathscr{F})$ obtained as above are not arbitrary, but satisfy the following: \begin{Def} We say that a $G$-grading on $U(\mathscr{F})$ is \emph{in canonical form} if, for each $m\in\{0,1,\ldots,s-1\}$, the subspace $J_m$ is $G$-graded. \end{Def} In other words, a $G$-grading $\Gamma:U(\mathscr{F})=\bigoplus_{g\in G}B_g$ is in canonical form if and only if it is compatible with the natural $\mathbb{Z}$-grading on $U(\mathscr{F})$. If this is the case, we obtain a $G^\#$-grading on $U(\mathscr{F})$ by taking $J_m\cap B_g$ as the homogeneous component of degree $(m,g)$. We want to show that this $G^\#$-grading uniquely extends to $M_n$. To this end, let us look more closely at the automorphism group of $U(\mathscr{F})$. We denote by $\mathrm{Int}(x)$ the inner automorphism $y\mapsto xyx^{-1}$ determined by an invertible element $x$. \begin{Lemma}\label{aut} $\mathrm{Aut}(U(\mathscr{F}))\simeq\left\{\psi\in\mathrm{Aut}(M_n)\mid\psi(U(\mathscr{F}))=U(\mathscr{F})\right\}$. \end{Lemma} \begin{proof} It is proved in \cite[Corollary 5.4.10]{Cheung} that \begin{align*} \mathrm{Aut}(U(\mathscr{F}))=\{\mathrm{Int}(x)\mid x\in U(\mathscr{F})^\times\}. \end{align*} On the other hand, every automorphism of the matrix algebra is inner, so let $y\in M_n^\times$ and assume $yU(\mathscr{F})y^{-1}=U(\mathscr{F})$. Then, by the description of $\mathrm{Aut}(U(\mathscr{F}))$ above, we can find $x\in U(\mathscr{F})^\times$ such that \begin{align*} \mathrm{Int}(x)\mid_{U(\mathscr{F})}=\mathrm{Int}(y)\mid_{U(\mathscr{F})}. \end{align*} It follows that $y^{-1}x$ commutes with all elements of $U(\mathscr{F})$. Hence $y^{-1}x=\lambda\,{1}$, for some $\lambda\in\mathbb{F}^\times$, and $y=\lambda^{-1}x\in U(\mathscr{F})^\times$. \end{proof} Assume for a moment that {$\mathbb{F}$ is algebraically closed and $\mathrm{char}\,\mathbb{F}=0$.
Since $G$ is abelian,} it is well known that $G$-gradings on a finite-dimensional algebra $A$ are equivalent to actions of the algebraic group $\widehat{G}:=\mathrm{Hom}_\mathbb{Z}(G,\mathbb{F}^\times)$ by automorphisms of $A$, that is, homomorphisms of algebraic groups $\widehat{G}\to\mathrm{Aut}(A)$ (see, for example, \cite[\S 1.4]{EK2013}). The homomorphism $\eta_\Gamma:\widehat{G}\to\mathrm{Aut}(A)$ corresponding to a grading $\Gamma:A=\bigoplus_{g\in G}A_g$ is defined by $\eta_\Gamma(\chi)(x)=\chi(g)x$ for all $\chi\in\widehat{G}$, $g\in G$ and $x\in A_g$. By Lemma \ref{aut}, we have \begin{align*} \mathrm{Aut}\left(U(\mathscr{F})\right)\simeq\mathrm{Stab}_{\mathrm{Aut}(M_n)}(U(\mathscr{F}))\subset\mathrm{Aut}(M_n), \end{align*} hence, if {$\mathbb{F}$ is algebraically closed and} $\mathrm{char}\,\mathbb{F}=0$, we obtain the desired unique extension of gradings from $U(\mathscr{F})$ to $M_n$. To generalize this result to {arbitrary $\mathbb{F}$}, we can use group schemes instead of groups. Recall that an \emph{affine group scheme} over a field $\mathbb{F}$ is a representable functor from the category $\mathrm{Alg}_\mathbb{F}$ of unital commutative associative $\mathbb{F}$-algebras to the category of groups (see e.g. \cite{Waterhouse} or \cite[Appendix A]{EK2013}). For example, the \emph{automorphism group scheme} of a finite-dimensional algebra $A$ is defined by \[ \mathbf{Aut}(A)(R):=\mathrm{Aut}_R(A\otimes R),\quad\forall R\in\mathrm{Alg}_\mathbb{F}. \] Another example of relevance to us is $\mathbf{GL}_1(A)$, for a finite-dimensional associative algebra $A$, defined by $\mathbf{GL}_1(A)(R):=(A\otimes R)^\times$. (In particular, $\mathbf{GL}_1(M_n)=\mathbf{GL}_n$.) Note that we have a homomorphism $\mathrm{Int}:\mathbf{GL}_1(A)\to\mathbf{Aut}(A)$. If $G$ is an abelian group, then the group algebra $\mathbb{F}G$ is a commutative Hopf algebra, so it represents an affine group scheme, which is the scheme version of the character group $\widehat{G}$. 
It is denoted by $G^D$ and given by $G^D(R)=\mathrm{Hom}_{\mathbb{Z}}(G,R^\times)$. In particular, $G^D(\mathbb{F})=\widehat{G}$. If we have a $G$-grading $\Gamma$ on $A$, then we can define a homomorphism of group schemes $\eta_\Gamma:G^D\to\mathbf{Aut}(A)$ by generalizing the formula in the case of $\widehat{G}$: $(\eta_\Gamma)_R(\chi)(x\otimes r)=x\otimes \chi(g)r$ for all $R\in\mathrm{Alg}_\mathbb{F}$, $\chi\in G^D(R)$, $r\in R$, $g\in G$ and $x\in A_g$. In this way, over an arbitrary field, $G$-gradings on $A$ are equivalent to homomorphisms of group schemes $G^D\to\mathbf{Aut}(A)$. \begin{Lemma}\label{aut_scheme} Over an arbitrary field, $\mathbf{Aut}(U(\mathscr{F}))$ is a quotient of $\mathbf{GL}_1(U(\mathscr{F}))$, and $\mathbf{Aut}\left(U(\mathscr{F})\right)\simeq\mathbf{Stab}_{\mathbf{Aut}(M_n)}(U(\mathscr{F}))$ via the restriction map. \end{Lemma} \begin{proof} We claim that the homomorphism $\mathrm{Int}:\mathbf{GL}_1(U(\mathscr{F}))\to\mathbf{Aut}(U(\mathscr{F}))$ is a quotient map {(in the sense of affine group schemes, see e.g. \cite[Chapter 15]{Waterhouse} or \cite[\S A.2]{EK2013})}. Since $\mathbf{GL}_1(U(\mathscr{F}))$ is smooth, it is sufficient to verify that (i) the group homomorphism $\mathrm{Int}:(U(\mathscr{F})\otimes\overline{\mathbb{F}})^\times\to\mathrm{Aut}_{\overline{\mathbb{F}}}(U(\mathscr{F})\otimes\overline{\mathbb{F}})$ is surjective, where $\overline{\mathbb{F}}$ is the algebraic closure of $\mathbb{F}$, and (ii) the Lie homomorphism $\mathrm{ad}:U(\mathscr{F})\to\mathrm{Der}(U(\mathscr{F}))$ is surjective (see e.g. \cite[Corollary A.49]{EK2013}). But (i) is satisfied by Corollary 5.4.10 in \cite{Cheung}, mentioned above, and (ii) is satisfied by Theorem 2.4.2 in the same work. 
Since the homomorphism $\mathrm{Int}:\mathbf{GL}_1(U(\mathscr{F}))\to\mathbf{Aut}(U(\mathscr{F}))$ factors through the restriction map $\mathbf{Stab}_{\mathbf{Aut}(M_n)}(U(\mathscr{F}))\to\mathbf{Aut}\left(U(\mathscr{F})\right)$, it follows that the latter is also a quotient map. But its kernel is trivial, because the corresponding restriction maps for the group $\mathrm{Stab}_{\mathrm{Aut}_{\overline{\mathbb{F}}}(M_n(\overline{\mathbb{F}}))}(U(\mathscr{F})\otimes\overline{\mathbb{F}})$ and Lie algebra $\mathrm{Stab}_{\mathrm{Der}(M_n)}(U(\mathscr{F}))$ are injective (see e.g. \cite[Theorem A.46]{EK2013}). \end{proof} Coming back to a $G$-grading $\Gamma$ on $U(\mathscr{F})$ in canonical form, we conclude by Lemma \ref{aut_scheme} that the corresponding $G^\#$-grading on $U(\mathscr{F})$ extends to a unique $G^\#$-grading $\Gamma^\#$ on $M_n$. By construction, $\Gamma^\#$ is admissible and induces the original grading $\Gamma$ on $U(\mathscr{F})$. It is also clear that $\Gamma^\#$ is uniquely determined by these properties. Thus, we have a bijection between admissible $G^\#$-gradings on $M_n$ and $G$-gradings on $U(\mathscr{F})$ in canonical form. \begin{Lemma}\label{can_assoc} For any $G$-grading on $U(\mathscr{F})$, there exists an isomorphic $G$-grading in canonical form. \end{Lemma} \begin{proof} It follows from Lemma \ref{aut_scheme} that the Jacobson radical $J=\bigoplus_{m>0}J_m$ of $U(\mathscr{F})$ is stabilized by $\mathbf{Aut}(U(\mathscr{F}))$. Hence, $J$ is a $G$-graded ideal, so the proof of \cite[Lemma 1]{y2018} shows that, in fact, there exists an isomorphic grading such that each $J_m$ is a graded subspace, that is, a grading in canonical form. \end{proof} \begin{Lemma}\label{iso_grad1} If two $G$-gradings, $\Gamma_1$ and $\Gamma_2$, on $U(\mathscr{F})$ are in canonical form and isomorphic to one another, then there exists a block-diagonal matrix $x\in U(\mathscr{F})^\times$ such that $\psi_0=\mathrm{Int}(x)$ is an isomorphism between $\Gamma_1$ and $\Gamma_2$.
\end{Lemma} \begin{proof} Let $\psi=\mathrm{Int}(y)$ be an isomorphism between $\Gamma_1$ and $\Gamma_2$. Write $y=(y_{ij})_{1\le i\le j\le s}$ in blocks and let $x=\mathrm{diag}(y_{11},\ldots,y_{ss})$. Then $x$ is invertible, so let $\psi_0=\mathrm{Int}(x)$. Fix $m\in\{0,1,\ldots,s-1\}$ and let $a\in J_m$ be $G$-homogeneous with respect to $\Gamma_1$. Since $J^m=J_m\oplus J^{m+1}$, we can uniquely write $\psi(a)=b+c$, where $b\in J_m$ and $c\in J^{m+1}$. Since $\Gamma_2$ is in canonical form, $J_m$ and $J^{m+1}$ are $G$-graded subspaces with respect to $\Gamma_2$. Since $\psi$ preserves $G$-degree, it follows that $b$ and $c$ are $G$-homogeneous elements with respect to $\Gamma_2$ of the same $G$-degree as $a$ with respect to $\Gamma_1$. Finally, note that $\psi_0(a)=b$. Since $m$ and $a$ were arbitrary, we have shown that $\psi_0$ is an isomorphism between $\Gamma_1$ and $\Gamma_2$. \end{proof} Now we can prove the converse of Lemma \ref{lem2}. \begin{Lemma}\label{iso_grad2} If two admissible $G^\#$-gradings on $M_n$ induce isomorphic $G$-gradings on $U(\mathscr{F})$, then they are isomorphic. \end{Lemma} \begin{proof} Let $\Gamma_1$ and $\Gamma_2$ be two isomorphic $G$-gradings on $U(\mathscr{F})$ obtained from two $G^\#$-gradings on $M_n$, $\Gamma_1^\#$ and $\Gamma_2^\#$, respectively. For $i=1,2$, let $\eta_i:(G^\#)^D\to\mathbf{Aut}(M_n)$ be the action corresponding to $\Gamma_i^\#$. Consider also the restriction $\Gamma_i'$ of $\Gamma_i^\#$ to $U(\mathscr{F})$ and the corresponding action $\eta_i':(G^\#)^D\to\mathbf{Aut}(U(\mathscr{F}))$. By Lemma \ref{iso_grad1}, we can find an isomorphism $\psi_0=\mathrm{Int}(x)$ between $\Gamma_1$ and $\Gamma_2$, where $x$ is block-diagonal. Such $\psi_0$ preserves the natural $\mathbb{Z}$-grading, so it is actually an isomorphism between the $G^\#$-gradings $\Gamma_1'$ and $\Gamma_2'$. Hence, $\psi_0\eta_1'(\chi)=\eta_2'(\chi)\psi_0$ for all $\chi\in(G^\#)^D(R)$ and all $R\in\mathrm{Alg}_\mathbb{F}$. 
By Lemma \ref{aut_scheme}, this implies $\psi_0\eta_1(\chi)=\eta_2(\chi)\psi_0$ for all $\chi\in(G^\#)^D(R)$, which means $\psi_0$ is an isomorphism between $\Gamma_1^\#$ and $\Gamma_2^\#$. \end{proof} We summarize the results of this section: \begin{Thm}\label{th:main_assoc} {Over an arbitrary field,} the mapping of an admissible $G^\#$-grading on $M_n$ to a $G$-grading on $U(\mathscr{F})$, given by restriction and coarsening, yields a bijection between the admissible isomorphism classes of $G^\#$-gradings on $M_n$ and the isomorphism classes of $G$-gradings on $U(\mathscr{F})$.\qed \end{Thm} {If $\mathbb{F}$ is algebraically closed, then the} admissible isomorphism classes of $G^\#$-gradings on $M_n$ can be parametrized by the triples $(T,\beta,(\kappa_1,\ldots,\kappa_s))$, where $T\subset G$ is a finite subgroup, $\beta:T\times T\to\mathbb{F}^\times$ is a non-degenerate alternating bicharacter and $\kappa_i:G/T\to\mathbb{Z}_{\ge0}$ are functions with finite support such that $|\kappa_i|\sqrt{|T|}=n_i$, for each $i=1,2,\ldots,s$. Hence, isomorphism classes of $G$-gradings on $U(\mathscr{F})$ are parametrized by the same triples. Choosing, for each $\kappa_i$, a $k_i$-tuple $\gamma_i$ of elements of $G$, where $k_i=|\kappa_i|$, we reproduce the description of $G$-gradings on $U(\mathscr{F})$ originally obtained in \cite{VaZa2012}. Note, however, that we do not need to assume that $G$ is finite, nor $\mathrm{char}\,\mathbb{F}=0$. 
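For instance (a toy case of our own, with trivial $T$): let $s=2$, $n_1=n_2=2$ and $G=\mathbb{Z}_2=\{e,g\}$; take $T=\{e\}$ (so $\ell=1$ and $\beta$ is trivial), $\kappa_1(e)=2$ and $\kappa_2(e)=\kappa_2(g)=1$. Choosing $\gamma_1=(e,e)$ and $\gamma_2=(e,g)$ yields the $4$-tuple $(e,e,e,g)$ and hence the following $G$-grading on $UT(2,2)$.

```latex
% Illustrative G-grading on UT(2,2): deg E_{ij} = g_i g_j^{-1} with
% (g_1,g_2,g_3,g_4) = (e,e,e,g).
\[
B_e=\mathrm{Span}\{E_{11},E_{12},E_{21},E_{22},E_{33},E_{44},E_{13},E_{23}\},\qquad
B_g=\mathrm{Span}\{E_{14},E_{24},E_{34},E_{43}\}.
\]
% The corresponding admissible G^#-grading on M_4 is the elementary grading
% defined by the tuple ((-1,e),(-1,e),(-2,e),(-2,g)).
```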
Also note that we have a description not only of $G$-gradings but of their isomorphism classes, which gives an alternative proof of the following result first established in \cite[Corollary 4]{BFD2018}: \begin{Cor}\label{cor:main assoc} Two $G$-gradings on $U(\mathscr{F})$, determined by $(T,\beta,(\kappa_1,\ldots,\kappa_s))$ and $(T',\beta',(\kappa_1',\ldots,\kappa_s'))$, are isomorphic if and only if $T'=T$, $\beta'=\beta$ and there exists $g\in G$ such that $\kappa_i'=g\kappa_i$, for all $i=1,2,\ldots,s$.\qed \end{Cor} \section{Lie case}\label{Lie_case} Now we turn our attention to $U(\mathscr{F})^{(-)}$, that is, $U(\mathscr{F})$ viewed as a Lie algebra with respect to the commutator $[x,y]=xy-yx$. {Since we will be working with Lie and associative products at the same time, we will always indicate the former by brackets and keep using juxtaposition for the latter.} We assume that the grading group $G$ is abelian and the ground field $\mathbb{F}$ is algebraically closed of characteristic $0$, and follow the same approach as in the associative case. Denote by $\tau$ the flip along the secondary diagonal on $M_n${, that is, $\tau(E_{ij})=E_{n-j+1,n-i+1}$, for all matrix units $E_{ij}\in M_n$}. Note that $U(\mathscr{F})^\tau=U(\mathscr{F})$ if and only if $n_i=n_{s-i+1}$ for all $i=1,2,\ldots,\lfloor\frac{s}2\rfloor$. Let \[ U(\mathscr{F})_0=\{x\in U(\mathscr{F})\mid\mathrm{tr}(x)=0\}, \] which is a Lie subalgebra of $U(\mathscr{F})^{(-)}$. Moreover, $U(\mathscr{F})^{(-)}=U(\mathscr{F})_0\oplus\mathbb{F}{1}$, where ${1}\in U(\mathscr{F})$ is the identity matrix. The center $\mathfrak{z}(U(\mathscr{F})^{(-)})=\mathbb{F}{1}$ is always graded, so ${1}$ is a homogeneous element. If we change its degree arbitrarily, we obtain a new well-defined grading, which is not isomorphic to the original one, but will induce the same grading on $U(\mathscr{F})^{(-)}/\mathbb{F}{1}\simeq U(\mathscr{F})_0$ (compare with \cite[Definition 6]{pkfy2017}). 
It turns out that, up to isomorphism, a $G$-grading on $U(\mathscr{F})^{(-)}$ is determined by the induced $G$-grading on $U(\mathscr{F})_0$ and the degree it assigns to the identity matrix (see Corollary \ref{reduction_to_trace_0} in Section \ref{practical_iso}). Conversely, any $G$-grading on $U(\mathscr{F})_0$ extends to $U(\mathscr{F})^{(-)}=U(\mathscr{F})_0\oplus\mathbb{F}{1}$ by defining the degree of ${1}$ arbitrarily. Thus, we have a bijection between the isomorphism classes of $G$-gradings on $U(\mathscr{F})^{(-)}$ and the pairs consisting of an isomorphism class of $G$-gradings on $U(\mathscr{F})_0$ and an element of $G$. We start by computing the automorphism group of $U(\mathscr{F})_0$. To this end, we will use the following description of $\mathrm{Aut}(U(\mathscr{F})^{(-)})$, which was proved in \cite{MaSo1999} for the field of complex numbers. \begin{Thm}[{\cite[Theorem 4.1.1]{Cecil}}]\label{aut_cecil} Let $\phi$ be an automorphism of $U(\mathscr{F})^{(-)}$, and assume $\mathrm{char}\,\mathbb{F}=0$ or $\mathrm{char}\,\mathbb{F}>3$. Then there exist $p,d\in U(\mathscr{F})$, with $p$ invertible and $d$ block-diagonal, such that one of the following holds: \begin{enumerate} \item $\phi(x)=pxp^{-1}+\mathrm{tr}(xd){1}$, for all $x\in U(\mathscr{F})$, or \item $\phi(x)=-px^\tau p^{-1}+\mathrm{tr}(xd){1}$, for all $x\in U(\mathscr{F})$.\qed \end{enumerate} \end{Thm} \begin{remark}\label{antiaut} Case (2) in the previous theorem occurs if and only if $U(\mathscr{F})$ is invariant under $\tau$, that is, $n_i=n_{s-i+1}$ for all $i$. It follows that $U(\mathscr{F})$ admits an anti-automorphism only under this condition. Indeed, if $\psi$ is an anti-automorphism of $U(\mathscr{F})$, then $-\psi$ is a Lie automorphism of $U(\mathscr{F})^{(-)}$. Hence, by Theorem \ref{aut_cecil}, we have $-\psi(x)=pxp^{-1}+\mathrm{tr}(xd){1}$ for all $x\in U(\mathscr{F})$ or $n_i=n_{s-i+1}$ for all $i$.
However, the first possibility cannot occur if $n>2$, since it would imply that the composition $\psi\,\mathrm{Int}(p^{-1})$, which maps $x\mapsto-x+\mathrm{tr}(xd'){1}$ where $d'$ is the block-diagonal part of $-pdp^{-1}$, is an anti-automorphism of $U(\mathscr{F})$, but this is easily seen not to be the case. (Of course, if $n=2$ then we have $n_i=n_{s-i+1}$ for all $i$.) \end{remark} As a consequence, we obtain the following analog of Lemma \ref{aut}. {(As usual, the symbol $\rtimes$ denotes a semidirect product in which the second factor acts on the first.)} \begin{Lemma}\label{aut_Lie} If $n>2$ and $n_i=n_{s-i+1}$ for all $i$, then \[ \mathrm{Aut}(U(\mathscr{F})_0)\simeq\{\mathrm{Int}(x)\mid x\in U(\mathscr{F})^\times\}\rtimes\langle-\tau\rangle; \] otherwise, $\mathrm{Aut}(U(\mathscr{F})_0)\simeq\{\mathrm{Int}(x)\mid x\in U(\mathscr{F})^\times\}$. In both cases, \[ \mathrm{Aut}(U(\mathscr{F})_0)\simeq\mathrm{Stab}_{\mathrm{Aut}(\mathfrak{sl}_n)}(U(\mathscr{F})_0). \] \end{Lemma} \begin{proof} Let $\psi\in\mathrm{Aut}(U(\mathscr{F})_0)$. We extend $\psi$ to an automorphism $\phi$ of $U(\mathscr{F})^{(-)}$ by setting $\phi({1})={1}$. By the previous result, $\phi$ must have one of two possible forms. Assume it is the first one: \[ \phi(x)=pxp^{-1}+\mathrm{tr}(xd){1},\quad\forall x\in U(\mathscr{F}). \] But as $U(\mathscr{F})_0$ is an invariant subspace for $\phi$, we see that, for all $x\in U(\mathscr{F})_0$, \[ 0=\mathrm{tr}(\phi(x))=\mathrm{tr}(pxp^{-1}+\mathrm{tr}(xd){1})=n\,\mathrm{tr}(xd). \] Therefore, $\mathrm{tr}(xd)=0$ and hence $\psi(x)=\phi(x)=pxp^{-1}$, for all $x\in U(\mathscr{F})_0$, so $\psi=\mathrm{Int}(p)$. The same argument applies if $\phi$ has the second form. Note that, for $n=2$, the second form reduces to the first on $UT(1,1)_0$, since $-\tau$ coincides with $\mathrm{Int}(p)$ on $\mathfrak{sl}_2$, where $p=\mathrm{diag}(1,-1)$. 
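Explicitly, one can verify this coincidence: writing a generic element of $\mathfrak{sl}_2$ as $x=\begin{pmatrix}a&b\\c&-a\end{pmatrix}$, we have $\tau(x)=\begin{pmatrix}-a&b\\c&a\end{pmatrix}$, and hence
\[
-\tau(x)=\begin{pmatrix}a&-b\\-c&-a\end{pmatrix}=pxp^{-1},\qquad p=\mathrm{diag}(1,-1).
\]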
On the other hand, for $n>2$, the two forms do not overlap, since the action of $-\tau$ on the set of zero-trace diagonal matrices already differs from the action of any inner automorphism. We conclude the proof in the same way as for Lemma \ref{aut}. \end{proof} Let $G$ be an abelian group and define $G^\#=\mathbb{Z}\times G$. Similarly to the associative case, we want to relate $G$-gradings on $U(\mathscr{F})_0$ and $G^\#$-gradings on $\mathfrak{sl}_n$, since for the latter a classification of group gradings is known \cite{BK2010} (see also \cite[Chapter 3]{EK2013}). Recall that $J_m$ stands for the $m$-th block-diagonal of matrices. We consider again the \emph{natural $\mathbb{Z}$-grading} on $U(\mathscr{F})_0$: its homogeneous component of degree $m\in\mathbb{Z}$ is $J_m\cap U(\mathscr{F})_0$ if $0\le m<s$ and $0$ otherwise. We say that a $G$-grading on $U(\mathscr{F})_0$ is in \emph{canonical form} if, for each $m\in\{0,\ldots,s-1\}$, the subspace $J_m\cap U(\mathscr{F})_0$ is $G$-graded. A $G^\#$-grading on $\mathfrak{sl}_n$ is said to be \emph{admissible} if the coarsening induced by the projection $G^\#\to\mathbb{Z}$ has $U(\mathscr{F})_0$, with its natural $\mathbb{Z}$-grading, as a graded subalgebra. An isomorphism class of $G^\#$-gradings on $\mathfrak{sl}_n$ is called \emph{admissible} if it contains an admissible grading. Since any $\mathbb{Z}$-grading on $\mathfrak{sl}_n$ is the restriction of a unique $\mathbb{Z}$-grading on the associative algebra $M_n$, Lemma \ref{ind_admissible} still holds if we replace $M_n$ by $\mathfrak{sl}_n$. Therefore, every admissible $G^\#$-grading on $\mathfrak{sl}_n$ restricts to $U(\mathscr{F})_0$ and, by means of the projection $G^\#\to G$, yields a $G$-grading on $U(\mathscr{F})_0$, which is clearly in canonical form. Conversely, thanks to Lemma \ref{aut_Lie}, if a $G$-grading on $U(\mathscr{F})_0$ is in canonical form then it comes from a unique admissible $G^\#$-grading on $\mathfrak{sl}_n$ in this way.
Therefore, similarly to the associative case, we obtain a bijection between admissible $G^\#$-gradings on $\mathfrak{sl}_n$ and $G$-gradings on $U(\mathscr{F})_0$ in canonical form. The following result is technical and will be proved in Section \ref{commut_supp}: \begin{Lemma}\label{can_Lie} For any $G$-grading on $U(\mathscr{F})_0$, there exists an isomorphic $G$-grading in canonical form. \end{Lemma} Clearly, as in Lemma \ref{lem2}, if two admissible $G^\#$-gradings on $\mathfrak{sl}_n$ are isomorphic then they induce isomorphic $G$-gradings on $U(\mathscr{F})_0$. The converse is established by the same argument as Lemma \ref{iso_grad2}, using the following analog of Lemma \ref{iso_grad1}: \begin{Lemma} If two $G$-gradings, $\Gamma_1$ and $\Gamma_2$, on $U(\mathscr{F})_0$ are in canonical form and isomorphic to one another, then there exists an isomorphism $\psi_0$ between $\Gamma_1$ and $\Gamma_2$ of the form $\psi_0=\mathrm{Int}(x)$ or $\psi_0=-\mathrm{Int}(x)\tau$, where the matrix $x\in U(\mathscr{F})^\times$ is block-diagonal. \end{Lemma} \begin{proof} Let $\psi$ be an isomorphism between $\Gamma_1$ and $\Gamma_2$. If $\psi=\mathrm{Int}(y)$ then we are in the situation of the proof of Lemma \ref{iso_grad1}. If $\psi=-\mathrm{Int}(y)\tau$ then the same proof still works because all subspaces $J_m$ are invariant under $\tau$. \end{proof} In summary: \begin{Thm}\label{th:main_Lie} The mapping of an admissible $G^\#$-grading on $\mathfrak{sl}_n$ to a $G$-grading on $U(\mathscr{F})_0$, given by restriction and coarsening, yields a bijection between the admissible isomorphism classes of $G^\#$-gradings on $\mathfrak{sl}_n$ and the isomorphism classes of $G$-gradings on $U(\mathscr{F})_0$.\qed \end{Thm} There are two families of gradings on $\mathfrak{sl}_n$, $n>2$, namely, Type I and Type II. (Only Type I exists for $n=2$.)
Their isomorphism classes are stated in Theorem 3.53 of \cite{EK2013}, but we will use Theorem 45 of \cite{BKE2018}, which is equivalent but uses more convenient parameters. By definition, a $G^\#$-grading of Type I is a restriction of a $G^\#$-grading on the associative algebra $M_n$, so it is parametrized by $(T,\beta,\kappa)$, where, as in Section \ref{assoc_case}, $T\subset G$ is a finite group, $\beta:T\times T\to\mathbb{F}^\times$ is a non-degenerate alternating bicharacter and $\kappa:\mathbb{Z}\times G/T\to\mathbb{Z}_{\ge0}$ is a function with finite support satisfying $|\kappa|\sqrt{|T|}=n$. For a Type II grading, there is a unique element $f\in G^\#$ of order $2$ (hence, in fact, $f\in G$), called the \emph{distinguished element}, such that the coarsening induced by the natural homomorphism $G^\#\to G^\#/\langle f\rangle$ is a Type I grading. The parametrization of Type II gradings depends on the choice of a character $\chi$ of $G^\#$ satisfying $\chi(f)=-1$. So, we fix $\chi\in\widehat{G}$ with $\chi(f)=-1$ and extend it trivially to the factor $\mathbb{Z}$. Then, the parameters of a Type II grading are a finite subgroup $T\subset G^\#$ (hence $T\subset G$) containing $f$, an alternating bicharacter $\beta:T\times T\to\mathbb{F}^\times$ with radical $\langle f\rangle$ (so, $\beta$ determines the distinguished element $f$), an element $g^\#_0\in G^\#$, and a function $\kappa:\mathbb{Z}\times G/T\to\mathbb{Z}_{\ge0}$ with finite support satisfying $|\kappa|\sqrt{|T|/2}=n$. These parameters are required to satisfy some additional conditions, as follows. To begin with, for a Type II grading, $T$ must be $2$-elementary.
Its Type I coarsening is a grading by $G^\#/\langle f\rangle\simeq\mathbb{Z}\times\overline{G}$ with parameters $(\overline{T},\bar{\beta},\kappa)$, where $\overline{T}:=T/\langle f\rangle$ is a subgroup of $\overline{G}:=G/\langle f\rangle$, $\bar{\beta}:\overline{T}\times\overline{T}\to\mathbb{F}^\times$ is the non-degenerate bicharacter induced by $\beta$, and $\kappa$ is now regarded as a function on $\mathbb{Z}\times\overline{G}/\overline{T}\simeq\mathbb{Z}\times G/T$. Since $T$ is $2$-elementary, $\beta$ can only take values $\pm 1$ and $\ell:=\sqrt{|T|/2}$ is a power of $2$. If one uses Kronecker products of Pauli matrices (of order $2$) to construct a division grading on $M_\ell$ with support $\overline{T}$ and bicharacter $\bar{\beta}$, then the transposition will preserve degree and thus become an involution on the resulting graded division algebra $D$. The choice of such an involution is arbitrary, and it will be convenient for our purposes to use $\tau$, which also preserves degree. Since all homogeneous components of $D$ are $1$-dimensional, we have \[ (X_{\bar{t}})^\tau=\bar{\eta}(\bar{t})X_{\bar{t}},\quad\forall\bar{t}\in\overline{T},\,X_{\bar{t}}\in D_{\bar{t}}, \] where $\bar{\eta}:\overline{T}\to\{\pm 1\}$ satisfies $\bar{\eta}(\bar{u}\bar{v})=\bar{\beta}(\bar{u},\bar{v})\bar{\eta}(\bar{u})\bar{\eta}(\bar{v})$ for all $\bar{u},\bar{v}\in\overline{T}$. If we regard $\bar{\eta}$ and $\bar{\beta}$ as maps of vector spaces over the field of two elements, this equation means that $\bar{\eta}$ is a quadratic form with polarization $\bar{\beta}$. 
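As a small illustration of this quadratic form (for $\ell=2$, so $\overline{T}\simeq\mathbb{Z}_2^2$), take the Pauli-type basis $I$, $\varepsilon=\mathrm{diag}(1,-1)$, $\sigma=E_{12}+E_{21}$, $\varepsilon\sigma=E_{12}-E_{21}$ of $D=M_2$. The flip $\tau$ fixes $I$, $\sigma$ and $\varepsilon\sigma$ and sends $\varepsilon$ to $-\varepsilon$, so $\bar{\eta}$ takes the value $-1$ only on the degree of $\varepsilon$. For instance, if $\bar{u}$ and $\bar{v}$ are the degrees of $\varepsilon$ and $\sigma$, respectively, then, since $\varepsilon$ and $\sigma$ anticommute,
\[
\bar{\eta}(\bar{u}\bar{v})=1=(-1)\cdot(-1)\cdot 1=\bar{\beta}(\bar{u},\bar{v})\,\bar{\eta}(\bar{u})\,\bar{\eta}(\bar{v}).
\]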
{Define a quadratic form $\eta:T\to\{\pm 1\}$ with polarization $\beta$ by $\eta(t)=\chi(t)\bar{\eta}(\bar{t})$, where $\bar{t}$ denotes the image of $t\in T$ in the quotient group $\overline{T}$.} Recall that a concrete $G^\#/\langle f\rangle$-grading with parameters $(\overline{T},\bar{\beta},\kappa)$ is constructed by selecting a $k$-tuple of elements of $G^\#/\langle f\rangle$, as directed by $\kappa$, to get an elementary grading on $M_k$, where $k=|\kappa|$, and identifying $M_n\simeq M_k\otimes D$ via Kronecker product. The remaining parameter $g^\#_0$ can then be used, together with the chosen involution $\tau$ on $D$, to define an anti-automorphism $\varphi$ on $M_n$ by the formula \[ \varphi(X)=\Phi^{-1}X^\tau\Phi,\quad\forall X\in M_n, \] where the matrix $\Phi\in M_k\otimes D\simeq M_k(D)$ is constructed in such a way that $\varphi^2$ acts on $M_n$ in exactly the same way as $\chi^2$, which acts on $M_n$ because it can be regarded as a character on $G^\#/\langle f\rangle$ (since $\chi^2(f)=1$) and $M_n$ is a $G^\#/\langle f\rangle$-graded algebra. As a result, we can split each homogeneous component of the $G^\#/\langle f\rangle$-grading on $M_n$ into (at most $2$) eigenspaces of $\varphi$ so that the action of $\chi$ on the resulting $G^\#$-graded algebra $M_n^{(-)}$ coincides with the automorphism $-\varphi$. Finally, the restriction of this $G^\#$-grading to $\mathfrak{sl}_n$ is a $G^\#$-grading of Type II with parameters $(T,\beta,g^\#_0,\kappa)$. In order to construct $\Phi$, two conditions must be met: \begin{enumerate} \item[(i)] $\kappa$ is \emph{$g^\#_0$-balanced} in the sense that $\kappa(x)=\kappa((g_0^{\#})^{-1}x^{-1})$ for all $x\in\mathbb{Z}\times G/T$ (where the inverse in $\mathbb{Z}$ is understood with respect to addition); \item[(ii)] $\kappa(g^\#T)$ is even whenever $g_0^\#(g^\#)^2\in T$ and $\eta(g^\#_0(g^\#)^2)=-1$ for some $g^\#\in G^\#$. 
\end{enumerate} Such a matrix $\Phi\in M_k(D)$ is given explicitly by Equations (3.29) and (3.30) in \cite{EK2013}, but in relation to the usual transposition. Since we are using $\tau$, the order of the $k$ rows has to be reversed and the entries in $D$ chosen in accordance with the above quadratic form $\bar{\eta}$ rather than the quadratic form in \cite{EK2013}. It will also be convenient in our situation to order the $k$-tuple associated to $\kappa$ in a different way, as will be described below. We are only interested in admissible isomorphism classes of $G^\#$-gradings on $\mathfrak{sl}_n$. If $n=2$, the isomorphism condition for (Type I) gradings is the same as in the associative case: all translations of $\kappa$ determine isomorphic gradings. If $n>2$, however, one isomorphism class of Type I gradings on $\mathfrak{sl}_n$ can consist of one or two isomorphism classes of gradings on $M_n$, because $(T,\beta,\kappa)$ and $(T,\beta^{-1},\bar{\kappa})$ determine isomorphic gradings on $\mathfrak{sl}_n$, where the function $\bar{\kappa}:\mathbb{Z}\times G/T\to\mathbb{Z}_{\ge0}$ is defined by $\bar{\kappa}(i,x):=\kappa(-i,x^{-1})$. Hence, the isomorphism class of $G^\#$-gradings of Type I with parameters $(T,\beta,\kappa)$ is admissible if and only if at least one of the functions $\kappa$ and $\bar{\kappa}$ has the form described after Lemma \ref{lem1}. Assuming it is $\kappa$, there must exist $a\in\mathbb{Z}$ and functions $\kappa_1,\ldots,\kappa_s:G/T\to\mathbb{Z}_{\ge0}$ with $|\kappa_i|\sqrt{|T|}=n_i$, such that \begin{equation}\label{ref_kappa_seq} \kappa(a-i,x)=\kappa_i(x),\quad\forall i\in\{1,2,\ldots,s\},\,x\in G/T, \end{equation} and $\kappa(a-i,x)=0$ if $i\not\in\{1,2,\ldots,s\}$. Then $\bar{\kappa}$ can be expressed in the same form, but with the function $\bar{\kappa}_i(x):=\kappa_i(x^{-1})$ playing the role of $\kappa_{s-i+1}$ for each $i$. 
Thus, the isomorphism classes of $G$-gradings of Type I on $U(\mathscr{F})_0$ are parametrized by $(T,\beta,(\kappa_1,\ldots,\kappa_s))$, and, if $n_i=n_{s-i+1}$ for all $i$, then $(T,\beta,(\kappa_1,\ldots,\kappa_s))$ and $(T,\beta^{-1},(\bar{\kappa}_s,\ldots,\bar{\kappa}_1))$ determine isomorphic $G$-gradings on $U(\mathscr{F})_0$. Now consider the isomorphism class of Type II gradings on $\mathfrak{sl}_n$ ($n>2$) with parameters $(T,\beta,g^\#_0,\kappa)$. Admissibility is a condition on the $\mathbb{Z}$-grading induced by the projection $G^\#\to\mathbb{Z}$, which factors through the natural homomorphism $G^\#\to G^\#/\langle f\rangle$. So, for this isomorphism class to be admissible, it is necessary and sufficient for $\kappa$ to have the form given by Equation \eqref{ref_kappa_seq}, but with $|\kappa_i|\sqrt{|T|/2}=n_i$. \begin{Lemma}\label{balance_kappa_i} If $g^\#_0=(a_0,g_0)$ and $\kappa$ is given by Equation \eqref{ref_kappa_seq}, then $\kappa$ is $g^\#_0$-balanced if and only if $a_0=s+1-2a$ and $\kappa_i(x)=\kappa_{s-i+1}(g_0^{-1}x^{-1})$ for all $x\in G/T$ and all $i$. \end{Lemma} \begin{proof} Consider the function $\kappa_\mathbb{Z}:\mathbb{Z}\to\mathbb{Z}_{\ge 0}$ given by $\kappa_\mathbb{Z}(m)=\sum_{g\in G/T}\kappa(m,g)$. Then the support of $\kappa_\mathbb{Z}$ is $\{a-s,\ldots,a-1\}$. On the other hand, if $\kappa$ is $g^\#_0$-balanced, then $\kappa_\mathbb{Z}$ is $a_0$-balanced{, that is, $\kappa_\mathbb{Z}(i)=\kappa_\mathbb{Z}(-a_0-i)$, for all $i\in\mathbb{Z}$}, which implies $-a_0-(a-s)=a-1$. The result follows. \end{proof} Therefore, we can replace the parameters $g^\#_0$ and $\kappa$ by $g_0$ and $(\kappa_1,\ldots,\kappa_s)$. Also, since $g_0^\#(g^\#)^2\notin T$ for any $g^\#=(a-i,g)$ with $s+1\ne 2i$, condition (ii) is automatically satisfied if $s$ is even, and affects only $\kappa_{\frac{s+1}{2}}$ if $s$ is odd. 
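To illustrate the lemma just proved on a small case: if $s=3$ and $a=2$, then $\kappa_\mathbb{Z}$ is supported on $\{-1,0,1\}$ and $a_0=s+1-2a=0$, so the balance condition on $\kappa_\mathbb{Z}$ reads $\kappa_\mathbb{Z}(i)=\kappa_\mathbb{Z}(-i)$, while on the level of the $\kappa_i$ it becomes
\[
\kappa_1(x)=\kappa_3(g_0^{-1}x^{-1}),\qquad \kappa_2(x)=\kappa_2(g_0^{-1}x^{-1}),\qquad\forall x\in G/T.
\]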
Hence, we can restate conditions (i) and (ii) in terms of $\kappa_1,\ldots,\kappa_s$ as follows: \begin{enumerate} \item[(i')] $\kappa_i(x)=\kappa_{s-i+1}(g_0^{-1}x^{-1})$ for all $x\in G/T$ and all $i$; \item[(ii')] either $s$ is even or $s$ is odd and $\kappa_{\frac{s+1}{2}}(gT)$ is even whenever $g_0g^2\in T$ and $\eta(g_0g^2)=-1$ for some $g\in G$. \end{enumerate} Note that condition (i') implies that $n_i=|\kappa_i|\ell=|\kappa_{s-i+1}|\ell=n_{s-i+1}$, so Type II gradings on $U(\mathscr{F})_0$ can exist only if $n_i=n_{s-i+1}$ for all $i$, as expected from the structure of the automorphism group (see Lemma \ref{aut_Lie}). Let us describe explicitly a Type II grading on $U(\mathscr{F})_0$ in the isomorphism class parametrized by $(T,\beta,g_0,(\kappa_1,\ldots,\kappa_s))$. For each $1\le i<\frac{s+1}{2}$, we fill two $|\kappa_i|$-tuples, $\gamma_i$ and $\gamma_{s-i+1}$, simultaneously as follows, going from left to right in $\gamma_i$ and from right to left in $\gamma_{s-i+1}$. For each coset $x\in G/T$ that lies in the support of $\kappa_i$, we choose an element $g\in x$ and place $\kappa_i(x)$ copies of $g$ into $\gamma_i$ and as many copies of $g_0^{-1}g^{-1}$ into $\gamma_{s-i+1}$. If $s$ is odd, we fill the middle $|\kappa_i|$-tuple $\gamma_i$, with $i=\frac{s+1}{2}$, in the following manner: $\gamma_i$ will be the concatenation of (possibly empty) tuples $\gamma^\triangleleft$, $\gamma^+$, $\gamma^0$, $\gamma^-$ and $\gamma^\triangleright$ (in this order), where $\gamma^\triangleleft$ and $\gamma^+$ are to be filled from left to right, $\gamma^-$ and $\gamma^\triangleright$ from right to left, and $\gamma^0$ in any order. For each $x$ in the support of $\kappa_i$, we choose an element $g\in x$. If $g_0g^2\notin T$, we place $\kappa_i(x)$ copies of $g$ into $\gamma^\triangleleft$ and as many copies of $g_0^{-1}g^{-1}$ into $\gamma^\triangleright$. 
If $g_0g^2\in T$ and $\eta(g_0g^2)=-1$, we place $\frac12\kappa_i(x)$ copies of $g$ in each of $\gamma^+$ and $\gamma^-$. Finally, if $g_0g^2\in T$ and $\eta(g_0g^2)=1$, we place $\kappa_i(x)$ copies of $g$ into $\gamma^0$. Concatenating these $\gamma_1,\ldots,\gamma_s$ results in a $k$-tuple $\gamma=(g_1,\ldots,g_k)$ of elements of $G$. Taking them modulo $\langle f\rangle$, we define a $\overline{G}$-grading on $M_k$ and, consequently, on $M_n\simeq M_k\otimes D$, so $M_n=\bigoplus_{\bar{g}\in\overline{G}}R_{\bar{g}}$. Then we construct a matrix $\Phi\in M_k(D)\simeq M_k\otimes D$ as follows: \begin{equation}\label{Phi} \begin{split} \Phi&=\mathrm{diag}(\chi(g_1^{-1})I_\ell,\ldots,\chi(g_p^{-1})I_\ell)\oplus\mathrm{diag}(X_{\bar{g}_0\bar{g}_{p+1}^2},\ldots, X_{\bar{g}_0\bar{g}_{p+q}^2})\\ &\oplus\widetilde{\mathrm{diag}}(X_{\bar{g}_0\bar{g}_{p+q+1}^2},\ldots,X_{\bar{g}_0\bar{g}_{k-p-q}^2})\\ &\oplus\mathrm{diag}(-X_{\bar{g}_0\bar{g}_{k-p-q+1}^2},\ldots,-X_{\bar{g}_0\bar{g}_{k-p}^2})\oplus\mathrm{diag}(\chi(g_{k-p+1}^{-1})I_\ell,\ldots,\chi(g_k^{-1})I_\ell), \end{split} \end{equation} where $p$ is the total length of $\gamma_1,\ldots,\gamma_{\lfloor\frac{s}2\rfloor}$ and $\gamma^\triangleleft$, $q$ is the length of $\gamma^+$, and $\widetilde{\mathrm{diag}}$ denotes arrangement of entries along the secondary diagonal (from left to right). Finally, we use $\Phi$ to define a $G$-grading on $M_n^{(-)}$: \begin{equation}\label{Type2Lie} M_n^{(-)}=\bigoplus_{g\in G}R_g\text{ where }R_g=\{X\in R_{\bar{g}}\mid\Phi^{-1}X^\tau\Phi=-\chi(g)X\}, \end{equation} which restricts to the desired grading on $U(\mathscr{F})_0$. Thus we obtain the following classification of $G$-gradings on $U(\mathscr{F})_0$ from our Theorem \ref{th:main_Lie} and the known classification for $\mathfrak{sl}_n$ (as stated in \cite[Theorem 45]{BKE2018} and \cite[Theorem 3.53]{EK2013}).
\begin{Cor}\label{cor:main_Lie} Every grading on $U(\mathscr{F})_0$ by an abelian group $G$ is isomorphic either to a Type I grading with parameters $(T,\beta,(\kappa_1,\ldots,\kappa_s))$, where $|\kappa_i|\sqrt{|T|}=n_i$, or to a Type II grading with parameters $(T,\beta,g_0,(\kappa_1,\ldots,\kappa_s))$, where $|\kappa_i|\sqrt{|T|/2}=n_{i}$ and $T$ is $2$-elementary. Type II gradings can occur only if $n>2$ and $n_i=n_{s-i+1}$ for all $i$, and their parameters are subject to the conditions (i') and (ii') above. Moreover, gradings of Type I are not isomorphic to gradings of Type II, and within each type we have the following: \begin{enumerate} \item[(I)] $(T,\beta,(\kappa_1,\ldots,\kappa_s))$ and $(T',\beta',(\kappa_1',\ldots,\kappa_s'))$ determine the same isomorphism class if and only if $T'=T$ and there exists $g\in G$ such that either $\beta'=\beta$ and $\kappa_i'=g\kappa_i$ for all $i$, or $n>2$, $\beta'=\beta^{-1}$ and $\kappa_i'=g\bar{\kappa}_{s-i+1}$ for all $i$, where $\bar{\kappa}(x):=\kappa(x^{-1})$ for all $x\in G/T$. \item[(II)] $(T,\beta,g_0,(\kappa_1,\ldots,\kappa_s))$ and $(T',\beta',g_0',(\kappa_1',\ldots,\kappa_s'))$ determine the same isomorphism class if and only if $T'=T$, $\beta'=\beta$, and there exists $g\in G$ such that $g'_0=g^{-2}g_0$ and $\kappa'_i=g\kappa_i$ for all $i$.\qed \end{enumerate} \end{Cor} \section{Commutativity of the grading group}\label{commut_supp} Our immediate goal is to prove Lemma \ref{can_Lie}. The arguments will work without assuming a priori that the grading group is abelian, and, in fact, our second goal will be to prove that the elements of the support of any group grading on $U(\mathscr{F})_0$ must commute with each other. It will be more convenient to make computations in $U(\mathscr{F})^{(-)}$. So, suppose $U(\mathscr{F})^{(-)}$ is graded by an arbitrary group $G$. We still assume that $\mathrm{char}\,\mathbb{F}=0$, but $\mathbb{F}$ need not be algebraically closed.
Write $U(\mathscr{F})=\bigoplus_{1\le i\le j\le s}B_{ij}$, where each $B_{ij}$ is the set of matrices with non-zero entries only in the $(i,j)$-th block. Thus, $J_m=B_{1,m+1}\oplus B_{2,m+2}\oplus\cdots\oplus B_{s-m,s}$ for all $m\in\{0,1,\ldots,s-1\}$. It is important to note that $[J_1,J_{m}]=J_{m+1}$ and hence the Lie powers of the Jacobson radical $J=\bigoplus_{m>0}J_m$ coincide with its associative powers. Let $e_i\in B_{ii}$ be the identity matrix of each diagonal block and let \[ \mathfrak{d}=\mathrm{Span}\{e_1,e_2,\ldots,e_s\}. \] We can write $B_{ii}=\mathfrak{s}_i\oplus \mathbb{F}e_i$, where $\mathfrak{s}_i=[B_{ii},B_{ii}]\simeq\mathfrak{sl}_{n_i}$. Let $S=\bigoplus_{i=1}^s\mathfrak{s}_{i}$ and $R=\mathfrak{d}\oplus J$. Then $U(\mathscr{F})^{(-)}=S\oplus R$ is a Levi decomposition. We will need the following graded version of Levi decomposition, which was established in \cite{PRZ2013} and then improved in \cite{Gord2016} by weakening the conditions on the ground field: \begin{Thm}[{\cite[Corollaries 4.2 and 4.3]{Gord2016}}]\label{thm_gord} Let $L$ be a finite-dimensional Lie algebra over a field $\mathbb{F}$ of characteristic $0$, graded by an arbitrary group $G$. Then the radical $R$ of $L$ is graded and there exists a maximal semisimple subalgebra $B$ such that $L=B\oplus R$ (direct sum of graded subspaces).\qed \end{Thm} \begin{Cor}\label{Levi} Consider any $G$-grading on $U(\mathscr{F})^{(-)}$. Then the ideal $R$ is graded. Moreover, there exists an isomorphic $G$-grading on $U(\mathscr{F})^{(-)}$ such that $S$ is also graded. \end{Cor} \begin{proof} By Theorem \ref{thm_gord}, there exists a graded Levi decomposition $U(\mathscr{F})^{(-)}=B\oplus R$. But $U(\mathscr{F})^{(-)}=S\oplus R$ is another Levi decomposition, so, by Malcev's Theorem (see e.g. \cite[Corollary 2 on p.~93]{Jac1979}), there exists an (inner) automorphism $\psi$ of $U(\mathscr{F})^{(-)}$ such that $\psi(B)=S$. 
Applying $\psi$ to the given $G$-grading on $U(\mathscr{F})^{(-)}$, we obtain a new $G$-grading on $U(\mathscr{F})^{(-)}$ with respect to which $S$ is graded. \end{proof} \begin{Lemma}\label{part_diag} For any $G$-grading on $U(\mathscr{F})^{(-)}$, there exists an isomorphic $G$-grading such that the subalgebras $\mathfrak{d}$ and $S$ are graded. \end{Lemma} \begin{proof} We partition $\{1,\ldots,s\}=\{i_1,\ldots,i_r\}\cup\{j_1,\ldots,j_{s-r}\}$ so that $n_{i_k}=1$ and $n_{j_k}>1$. Denote $e_{\triangle}=\sum_{k=1}^r e_{i_k}$; then $e_{\triangle}U(\mathscr{F})e_{\triangle}\simeq UT_r$, the algebra of upper triangular matrices (if $r>0$). By Corollary \ref{Levi}, we may assume that $S$ is graded. Then its centralizer in $R$, $N:=\mathrm{C}_R(S)$, is a graded subalgebra. It coincides with $\mathrm{Span}\{e_{j_1},\ldots,e_{j_{s-r}}\}\oplus e_{\triangle}U(\mathscr{F})e_{\triangle}$, and its center (which is also graded) coincides with $\mathrm{Span}\{e_{j_1},\ldots,e_{j_{s-r}},e_{\triangle}\}$. If $r=0$, then $N=\mathfrak{d}$ and we are done. Assume $r>0$. Then we obtain a $G$-grading on $N/\mathfrak{z}(N)\simeq UT_r^{(-)}/\mathbb{F} {1}\simeq (UT_r)_0$. These gradings were classified in \cite{pkfy2017}, where it was shown that, after applying an automorphism of $UT_r^{(-)}$, the subalgebra of diagonal matrices in $UT_r^{(-)}$ is graded. Since $-\tau$ preserves this subalgebra, we may assume that the automorphism in question is inner. But an inner automorphism of $e_{\triangle}U(\mathscr{F})e_{\triangle}$ can be extended to an inner automorphism of $U(\mathscr{F})$. {Indeed, let $y$ be an invertible element of $e_{\triangle}U(\mathscr{F})e_{\triangle}$.} Then $x=\sum_{k=1}^{s-r} e_{j_k}+y\in U(\mathscr{F})^\times$ and $\mathrm{Int}(x)$ extends $\mathrm{Int}(y)$. Moreover, $\mathrm{Int}(x)$ preserves $S$. Therefore, we may assume that the subalgebra of diagonal matrices in $N/\mathfrak{z}(N)$ is graded.
But the inverse image of this subalgebra in $N$ is precisely $\mathfrak{d}$, so $\mathfrak{d}$ is graded. \end{proof} It will be convenient to use the following technical concept: \begin{Def} Let $L$ be a $G$-graded Lie algebra. We call $x\in L$ \textit{semihomogeneous} if $x=x_h+x_z$, with $x_h$ homogeneous and $x_z\in\mathfrak{z}(L)$. If $x_h\notin\mathfrak{z}(L)$, we define the \emph{degree} of $x$ as $\deg x_h$ and denote it by $\deg x$. \end{Def} An important observation is that if $x$ and $y$ are semihomogeneous and $[x,y]\ne 0$, then $[x,y]$ is homogeneous of degree $\deg x\deg y$ (as $[x,y]$ will coincide with $[x_h,y_h]$). \begin{Prop}\label{diagonal} For any $G$-grading on $U(\mathscr{F})^{(-)}$, there exists an isomorphic $G$-grading with the following properties: \begin{enumerate} \item[(i)] the subalgebras $\mathfrak{s}_{k}+\mathfrak{s}_{s-k+1}$ are graded, \item[(ii)] the elements $e_k-e_{s-k+1}$ ($k\ne\frac{s+1}{2}$) are semihomogeneous of degree $1_G$, and \item[(iii)] the elements $e_k+e_{s-k+1}$ are semihomogeneous of degree $f$ (if $s>2$), where $f\in G$ is an element of order at most $2$. \end{enumerate} \end{Prop} \begin{proof} By Lemma \ref{part_diag}, we may assume that $S$ and $\mathfrak{d}$ are graded subalgebras. Also note that $J=[R,R]$ and all of its powers are graded ideals. We proceed by induction on $s$. If $s=1$, then $\mathfrak{s}_1=S$ is graded and there is nothing more to prove. If $s=2$, then $\mathfrak{s}_{1}\oplus\mathfrak{s}_{2}=S$ is graded. Also, $\mathrm{Span}\{e_1,e_2\}=\mathfrak{d}$ and $e_1+e_2={1}$ is central, so $e_1-e_2$ is a semihomogeneous element. Its degree must be equal to $1_G$, because $[e_1-e_2,x]=2x$ for any $x\in J=B_{12}$. Now assume $s>2$. \textbf{Claim 1}: $N:=B_{11}\oplus B_{ss}\oplus\mathbb{F}{1}\oplus J$ is graded. First suppose $s\ge 4$.
Consider $J^{s-2}=J_{s-2}\oplus J_{s-1}$ (the three blocks in the top right corner) and the graded ideal $C:=\mathrm{C}_R(J^{s-2})=R\cap\mathrm{C}_{U(\mathscr{F})^{(-)}}(J^{s-2})$. It is easy to see that \[ C=\mathrm{Span}\{e_2,\ldots,e_{s-1}\}\oplus\mathbb{F}{1}\oplus B_{23}\oplus\cdots\oplus B_{s-2,s-1}\oplus J^2. \] Now, the adjoint action induces on $C/J^2$ a natural structure of a graded $U(\mathscr{F})^{(-)}$-module, and one checks that $N=\mathrm{Ann}_{U(\mathscr{F})^{(-)}}(C/J^2)+J$, so $N$ is graded. If $s=3$, then consider $J^2=J_2=B_{13}$ and the graded ideal $\tilde{C}:=\mathrm{C}_{U(\mathscr{F})^{(-)}}(J^2)$. One checks that \[ \tilde{C}=B_{22}\oplus\mathbb{F}{1}\oplus J, \] and hence $N=\mathrm{Ann}_{U(\mathscr{F})^{(-)}}(\tilde{C}/J)$. This completes the proof of Claim 1. It follows that $S\cap N=\mathfrak{s}_{1}\oplus\mathfrak{s}_{s}$ is a graded subalgebra, and \[ I_1:=\mathfrak{d}\cap N=\mathrm{Span}\{e_1,e_s,{1}\} \] is graded as well. Hence, $\mathrm{C}_{I_1}(J^{s-1})=\mathrm{Span}\{e_1+e_s,{1}\}$ is graded, so we conclude that $e_1+e_s$ is semihomogeneous. Denote its degree by $f$. \textbf{Claim 2}: $f^2=1_G$ and $e_1-e_s$ is semihomogeneous of degree $1_G$. Since $I_1/\mathbb{F}{1}$ is spanned by the images of $e_1$ and $e_s$, there must exist a semihomo\-ge\-neous linear combination $\tilde{e}$ of $e_1$ and $e_s$ that is not a scalar multiple of $e_1+e_s$. Consider the graded $I_1$-module $J^{s-2}/J^{s-1}$. As a module, it is isomorphic to $B_{1,s-1}\oplus B_{2,s}$, where ${1}$ acts as $0$, $e_1$ as the identity on the first summand and $0$ on the second, and $e_s$ as $0$ on the first and the negative identity on the second. Using this isomorphism, we will write the elements $x\in J^{s-2}/J^{s-1}$ as $x=x_1+x_2$ with $x_1\in B_{1,s-1}$ and $x_2\in B_{2,s}$. Since the situation is symmetric in $e_1$ and $e_s$, we may assume without loss of generality that $\tilde{e}=e_1+\alpha e_s$, $\alpha\ne 1$.
Pick a homogeneous element $x=x_1+x_2$ with $x_1\ne 0$. First, we observe that $(e_1+e_s)\cdot((e_1+e_s)\cdot x)=x$, which implies $f^2=1_G$. If $x_2=0$, then $\tilde{e}\cdot x=(e_1+e_s)\cdot x=x$, and this implies that the semihomogeneous elements $\tilde{e}$ and $e_1+e_s$ both have degree $1_G$, which proves the claim. If $\alpha=0$, then $\tilde{e}\cdot x=x_1-\alpha x_2=x_1$ is homogeneous and we can apply the previous argument. So, we may assume that $\alpha\ne 0$. Suppose for a moment that we have $\deg\tilde{e}=1_G$. If $\alpha=-1$, we are done. Otherwise, we can consider the homogeneous element $0\ne x+\alpha^{-1}\tilde{e}\cdot x\in B_{1,s-1}$ and apply the previous argument again. It remains to prove that $\deg\tilde{e}=1_G$. Denote this degree by $g$ and assume $g\ne 1_G$. Considering \[ D:=\mathrm{Span}\{x,\tilde{e}\cdot x,\tilde{e}\cdot(\tilde{e}\cdot x),\ldots\}, \] we see, on the one hand, that $\dim D\le2$, because $D\subset\mathrm{Span}\{x_1,x_2\}$. On the other hand, non-zero homogeneous elements of distinct degrees are linearly independent, so the order of $g$ does not exceed $2$. By our assumption, it must be equal to $2$. Then $x$ and $\tilde{e}\cdot x$ form a basis of $D$ and $y:=\tilde{e}\cdot(\tilde{e}\cdot x)$ has the same degree as $x$. Therefore, $y=\lambda x$ for some $\lambda\ne0$. On the other hand, $y=x_1+\alpha^2 x_2$, hence $\alpha=\pm1$. The case $\alpha=1$ is excluded, whereas $\alpha=-1$ implies $\tilde{e}\cdot x=x$, which contradicts $g\ne 1_G$. The proof of Claim 2 is complete. We have established all assertions of the proposition for $k=1$. We are going to use the induction hypothesis for $k>1$. To this end, let $e:={1}-(e_1+e_s)$ and consider $eU(\mathscr{F})e\simeq UT(n_2,\ldots,n_{s-1})$.
Observe that the operator $\mathrm{ad}(e_1-e_s)$ on $U(\mathscr{F})^{(-)}$ preserves degree and acts as $0$ on $B_{11}\oplus eU(\mathscr{F})e\oplus B_{ss}$, as the identity on the blocks $B_{12},\ldots,B_{1,s-1}$ and $B_{2s},\ldots,B_{s-1,s}$, and as $2$ times the identity on $B_{1s}$. It follows that \begin{align*} T_1:&=\Big(\mathrm{id}-\frac12\mathrm{ad}(e_1-e_s)\Big)\Big(\mathrm{id}-\mathrm{ad}(e_1-e_s)\Big)U(\mathscr{F})^{(-)}\\ &=B_{11}\oplus eU(\mathscr{F})e\oplus B_{ss} \end{align*} is a graded subspace. Hence, $L_1:=\mathrm{C}_{T_1}(J^{s-1})=\mathbb{F}(e_1+e_s)\oplus eU(\mathscr{F})e$ is graded and we can apply the induction hypothesis to $L_1/\mathbb{F}(e_1+e_s)\simeq UT(n_2,\ldots,n_{s-1})$. Therefore, for $1<k\le\frac{s+1}{2}$, the subalgebras $\mathbb{F}(e_1+e_s)\oplus(\mathfrak{s}_k+\mathfrak{s}_{s-k+1})\subset L_1$ are graded, the elements $e_k+e_{s-k+1}$ are semi\-homo\-ge\-neous of degree $f'$ in $L_1$ (if $s>4$), and the elements $e_k-e_{s-k+1}$ ($k\ne\frac{s+1}{2}$) are semihomogeneous of degree $1_G$ in $L_1$. {For the subalgebras, we can get rid of the unwanted term $\mathbb{F}(e_1+e_s)$ by passing to the derived algebra, so we conclude that $\mathfrak{s}_k+\mathfrak{s}_{s-k+1}$ are graded. For the elements, since $\mathfrak{z}(L_1)=\mathbb{F}(e_1+e_s)\oplus\mathbb{F}1$, we also have to get rid of $\mathbb{F}(e_1+e_s)$ before we can conclude that they are semihomogeneous in $U(\mathscr{F})^{(-)}$.} \textbf{Claim 3}: $e_k+e_{s-k+1}$ are semihomogeneous of degree $f$ in $U(\mathscr{F})^{(-)}$. If $s=3$, then $e_2={1}-(e_1+e_3)$ is semihomogeneous of degree $f$. If $s=4$, then $e_2+e_{s-1}={1}-(e_1+e_s)$ is semihomogeneous of degree $f$. So, assume $s>4$. By the above paragraph, we know there exist $\alpha_k$ such that $\alpha_k(e_1+e_s)+e_k+e_{s-k+1}$ are semihomogeneous of degree $f'$ in $U(\mathscr{F})^{(-)}$. If $\alpha_2=0$, then pick a non-zero homogeneous element $x\in J^{s-2}/J^{s-1}$.
Since $(e_1+e_s)\cdot x=-(e_2+e_{s-1})\cdot x\ne 0$, we conclude that $f=f'$ and the claim follows, because we can subtract the scalar multiples of $e_1+e_s$ from the elements $\alpha_k(e_1+e_s)+e_k+e_{s-k+1}$. If $\alpha_2\ne0$, consider instead the graded $U(\mathscr{F})^{(-)}$-module $([e_1-e_s,J^2]+J^3)/J^3$. As a module, it is isomorphic to $B_{13}\oplus B_{s-2,s}$, so $e_2+e_{s-1}$ annihilates it. Picking a non-zero homogeneous element $x$, we get \[ (\alpha_2(e_1+e_s)+e_2+e_{s-1})\cdot x=\alpha_2(e_1+e_s)\cdot x\ne0, \] so again $f=f'$ and the claim follows. \textbf{Claim 4}: $e_k-e_{s-k+1}$ are semihomogeneous of degree $1_G$ in $U(\mathscr{F})^{(-)}$. We know there exist $\alpha'_k$ such that $\alpha'_k(e_1+e_s)+e_k-e_{s-k+1}$ are semihomogeneous of degree $1_G$ in $U(\mathscr{F})^{(-)}$. If $f=1_G$, then we can subtract the scalar multiples of $e_1+e_s$, so we are done. If $f\ne 1_G$, we want to prove that $\alpha'_k=0$. By way of contradiction, assume $\alpha'_k\ne 0$. If $k<\frac{s}{2}$, then $e_k-e_{s-k+1}$ annihilates the graded module $([e_1-e_s,J^k]+J^{k+1})/J^{k+1}$, so, using the argument in the proof of Claim~3, we conclude that $\deg(e_1+e_s)=1_G$, a contradiction. It remains to consider the case $s=2k$. If $s>4$, then $e_{s/2}-e_{s/2+1}$ annihilates the graded module $([e_1-e_s,J]+J^2)/J^2$, which is isomorphic to $B_{12}\oplus B_{s-1,s}$, so the same argument works. If $s=4$, then $e_2-e_3$ does not annihilate this module, but acts on it as the negative identity. Picking a non-zero homogeneous element $x$, we get \[ x+(\alpha'_2(e_1+e_s)+e_2-e_3)\cdot x=\alpha'_2(e_1+e_s)\cdot x\ne 0, \] so again $\deg(e_1+e_s)=1_G$, a contradiction. The proof of the proposition is complete. \end{proof} \begin{proof}[Proof of Lemma \ref{can_Lie}] We extend a given $G$-grading on $U(\mathscr{F})_0$ to $U(\mathscr{F})^{(-)}$ by defining the degree of ${1}$ arbitrarily. Then $U(\mathscr{F})_0\simeq U(\mathscr{F})^{(-)}/\mathbb{F}{1}$ as a graded algebra.
By Lemma \ref{part_diag}, we may assume that $\mathfrak{d}$ and $S$ are graded, hence the subalgebra $J_0=\mathfrak{d}\oplus S$ and its homomorphic image $J_0/\mathbb{F}{1}\simeq J_0\cap U(\mathscr{F})_0$ in $U(\mathscr{F})_0$ are graded. (In fact, by Proposition \ref{diagonal}, we can say more: every subalgebra $B_{ii}+B_{s-i+1,s-i+1}+\mathbb{F}{1}$ is graded.) To deal with $J_m$ for $m>0$, we will use the semihomogeneous elements $d_i:=e_i-e_{s-i+1}$ of degree $1_G$ ($i\ne \frac{s+1}{2}$). Fix $i<j$. If $i+j\ne s+1$, then \[ B_{ij}\oplus B_{s-j+1,s-i+1}=\mathrm{ad}(d_i-d_j)\mathrm{ad}(d_i)\mathrm{ad}(d_j)U(\mathscr{F})^{(-)}, \] which is a graded subspace. If $i+j=s+1$, then \[ B_{ij}=(\mathrm{id}-\mathrm{ad}(d_i))\mathrm{ad}(d_i)J^{s-i+1} \] is graded. Thus, $B_{ij}+B_{s-j+1,s-i+1}$ is graded for all $i<j$, hence so is $J_m$. \end{proof} Now, we proceed to prove that the support of any $G$-grading on $U(\mathscr{F})_0$ is a \emph{commutative subset} of $G$ in the sense that its elements commute with each other. The key observation is that, if $x$ and $y$ are homogeneous elements in any $G$-graded Lie algebra and $[x,y]\ne 0$, then $\deg x$ must commute with $\deg y$. By induction, one can generalize this as follows: if $x_1,\ldots,x_k$ are homogeneous and $[\ldots[x_1,x_2],\ldots,x_k]\ne 0$ then the degrees of $x_i$ must commute pair-wise. This fact was used to show that the support of any graded-simple Lie algebra is commutative (see e.g. \cite[Proposition 2.3]{PRZ2013} or the proof of Proposition 1.12 in \cite{EK2013}). We will need the following two lemmas. \begin{Lemma}\label{cyclic} Suppose a semidirect product of Lie algebras $V\rtimes L$ is graded by a group $G$ in such a way that both the ideal $V$ and the subalgebra $L$ are graded. Assume that the support of $L$ is commutative and, as an $L$-module, $V$ is faithful and generated by a single homogeneous element. Then the support of $V\rtimes L$ is commutative. 
\end{Lemma} \begin{proof} Let $v$ be a homogeneous generator of $V$ as an $L$-module and let $g=\deg v$. Denote by $H$ the abelian subgroup generated by $\mathrm{Supp}\,L$. Then $\mathrm{Supp}\,V$ is contained in the coset $Hg$. In particular, the subgroup generated by $\mathrm{Supp}\,(V\rtimes L)$ is also generated by $H$ and $g$, so it is sufficient to prove that $g$ commutes with all elements of $\mathrm{Supp}\,L$. Let $a\ne 0$ be a homogeneous element of $L$. Since $V$ is faithful, there exists a homogeneous element $w\in V$ such that $a\cdot w\ne 0$. But, in the semidirect product, $a\cdot w=[a,w]$, hence $\deg a$ and $\deg w$ commute. Since $\deg a\in H$, $\deg w\in Hg$, and $H$ is abelian, we conclude that $\deg a$ commutes with $g$. \end{proof} { \begin{Lemma}\label{cyclic2} Let $L=L_1\times\cdots\times L_k$ and suppose the semidirect product $V\rtimes L$ is graded by a group $G$ in such a way that $V$ and each subalgebra $L_i$ are graded. Assume that $V$ is graded-simple as an $L$-module and, for each $i$, $\mathrm{Supp}\,L_i$ is commutative and $V$ is faithful as an $L_i$-module. Then the support of $V\rtimes L$ is commutative. \end{Lemma} \begin{proof} One checks that, if we redefine the bracket on the ideal $V$ to be zero while keeping the same bracket on the subalgebra $L$ and the same $L$-module structure on $V$, the resulting semidirect product is still $G$-graded, so we may suppose $[V,V]=0$. Let $v$ be any non-zero homogeneous element of $V$ (hence a generator of $V$ as an $L$-module). Let $W_i$ be the $L_i$-submodule generated by $v$. Since the actions of $L_i$ and $L_j$ on $V$ commute with each other for all $j\ne i$, $W_i$ must be a faithful $L_i$-module, so we can apply Lemma \ref{cyclic} to the graded subalgebra $W_i\rtimes L_i$ and conclude that $\deg v$ commutes with the elements of $\mathrm{Supp}\,L_i$ for each $i$. It remains to prove that the elements of $\mathrm{Supp}\,L_i$ commute with the elements of $\mathrm{Supp}\,L_j$ for $j\ne i$. 
Let $a\ne 0$ be a homogeneous element of $L_i$. Pick a homogeneous $v\in V$ such that $v':=a\cdot v\ne0$ and denote $g=\deg v$ and $g'=\deg v'$. By the previous argument, both $g$ and $g'$ commute with every element of $\mathrm{Supp}\,L_j$. But this implies that $\deg a$ commutes with every element of $\mathrm{Supp}\,L_j$. \end{proof} } \begin{Thm}\label{supp_commutativity} The support of any group grading on $U(\mathscr{F})_0$ over a field of charac\-te\-ristic $0$ generates an abelian subgroup. \end{Thm} \begin{proof} The result is known for simple Lie algebras, so we assume $s>1$. We extend the grading to $U(\mathscr{F})^{(-)}$ and bring it to the form described in Proposition \ref{diagonal}. Then, as in the proof of Lemma \ref{can_Lie} just above, we can break $J$ into the direct sum of graded subspaces of the form $B_{ij}\oplus B_{s-j+1,s-i+1}$ ($i+j\ne s+1$) or $B_{ij}$ ($i+j=s+1$), for all $1\le i<j\le s$. Also, $\tilde{\mathfrak{s}}_i:=\mathfrak{s}_i+\mathfrak{s}_{s-i+1}$ are graded subalgebras (possibly zero). Note that any non-zero $\tilde{\mathfrak{s}}_i$ is graded-simple and, therefore, its support is commutative, except in the following situation: $i\ne\frac{s+1}{2}$ and one of the ideals $\mathfrak{s}_i$ and $\mathfrak{s}_{s-i+1}$ is graded. In this case, the other ideal is graded, too, being the centralizer of the first in $\tilde{\mathfrak{s}}_i$, and we can apply Lemma \ref{cyclic2} to the graded algebra $B_{i,s-i+1}\oplus\tilde{\mathfrak{s}}_i\simeq B_{i,s-i+1}\rtimes(\mathfrak{s}_i\times\mathfrak{s}_{s-i+1})$ to conclude that the support of $\tilde{\mathfrak{s}}_i$ is still commutative. Moreover, its elements commute with those of $\mathrm{Supp}\,B_{i,s-i+1}$, so we are done in the case $s=2$. From now on, assume $s>2$. {Let $f$ be the element of $G$ as in Proposition \ref{diagonal}.} \textbf{Case 1}: $f=1_G$. Here each block $B_{ij}$ and each subalgebra $\mathfrak{s}_i$ is graded. Indeed, each element $e_i$ is semihomogeneous of degree $1_G$. 
If $i+j=s+1$, then we already know that $B_{ij}$ is graded, and otherwise $B_{ij}=\mathrm{ad}(e_i)(B_{ij}\oplus B_{s-j+1,s-i+1})$, so it is still graded. For $\tilde{\mathfrak{s}}_i$, it is sufficient to consider $i\le\frac{s+1}{2}$. If $i=\frac{s+1}{2}$, then we already know that $\mathfrak{s}_i$ is graded, and otherwise we can find $j>i$ such that $j\ne s-i+1$, which implies that $\mathfrak{s}_i=\mathrm{C}_{\tilde{\mathfrak{s}}_i}(B_{ij})$ is still graded. Applying Lemma \ref{cyclic2} to $B_{ij}\rtimes (\mathfrak{s}_i\times\mathfrak{s}_j)$, we conclude that the supports of non-zero $\mathfrak{s}_i$ and $\mathfrak{s}_j$ commute element-wise with one another and also with $\mathrm{Supp}\,B_{ij}$. (This works even if one of $\mathfrak{s}_i$ and $\mathfrak{s}_j$ is zero.) It follows that $\mathrm{Supp}\,S$ generates an abelian subgroup $H$ in $G$. It also commutes element-wise with $\mathrm{Supp}\,J$. Indeed, since $\mathrm{Supp}\,B_{ij}$ is contained in a coset of $H$, it is sufficient to prove that the degree of one non-zero homogeneous element of $B_{ij}$ commutes with the elements of $\mathrm{Supp}\,\mathfrak{s}_k$. We already know this if $k=i$ or $k=j$. Otherwise, we will have $k<i<j$, $i<k<j$ or $i<j<k$. In the last case, we have $[B_{ij},B_{jk}]=B_{ik}$, so we can find homogeneous elements $x\in B_{ij}$ and $y\in B_{jk}$ such that $0\ne [x,y]\in B_{ik}$. Since the elements of $\mathrm{Supp}\,\mathfrak{s}_k$ commute with $\deg y$ and with $\deg x\deg y$, they must commute with $\deg x$ as well. The other two cases are treated similarly. It remains to prove that $\mathrm{Supp}\,J$ is commutative. Since $J_1$ generates $J$ as a Lie algebra, it is sufficient to prove that, for any $1\le i\le j\le s-1$, the sets $\mathrm{Supp}\,B_{i,i+1}$ and $\mathrm{Supp}\,B_{j,j+1}$ commute with one another element-wise. 
But we can find homogeneous elements $x_1\in B_{12},\,x_2\in B_{23},\ldots,x_{s-1}\in B_{s-1,s}$ such that $[\ldots[x_1,x_2],\ldots,x_{s-1}]\ne 0$, so the degrees of $x_1,x_2,\ldots,x_{s-1}$ must commute pair-wise. The coset argument completes the proof of Case~1. \textbf{Case 2}: $f\ne 1_G$. Here we work with $\tilde{B}_{ij}:=B_{ij}+B_{s-j+1,s-i+1}$. If $\tilde{\mathfrak{s}}_i$ and $\tilde{\mathfrak{s}}_j$ are distinct (that is, $i+j\ne s+1$) and non-zero, then $\tilde{B}_{ij}$ is a direct sum of two non-isomorphic simple $(\tilde{\mathfrak{s}}_i\times\tilde{\mathfrak{s}}_j)$-submodules. We claim that it is a graded-simple $(\tilde{\mathfrak{s}}_i\times\tilde{\mathfrak{s}}_j)$-module. Indeed, otherwise one of the submodules $B_{ij}$ and $B_{s-j+1,s-i+1}$ would be graded. But there exist scalars $\lambda_i$ such that $\tilde{e}_i:=e_i+e_{s-i+1}+\lambda_i {1}$ are homogeneous of degree $f$, and $\mathrm{ad}(\tilde{e}_i)$ acts as the identity on $B_{ij}$ and the negative identity on $B_{s-j+1,s-i+1}$, which forces $f=1_G$, a contradiction. Therefore, we can apply Lemma \ref{cyclic2} to $\tilde{B}_{ij}\rtimes (\tilde{\mathfrak{s}}_i\times\tilde{\mathfrak{s}}_j)$ and conclude that the supports of non-zero $\tilde{\mathfrak{s}}_i$ and $\tilde{\mathfrak{s}}_j$ commute element-wise with one another, hence $\mathrm{Supp}\,S$ is commutative. Now consider $\tilde{B}_{ij}$, with $i+j\ne s+1$, as an $((\tilde{\mathfrak{s}}_i\times\tilde{\mathfrak{s}}_j)\times\mathbb{F}\tilde{e}_i)$-module, where one of $\tilde{\mathfrak{s}}_i$ and $\tilde{\mathfrak{s}}_j$ is allowed to be zero. The simple submodules $B_{ij}$ and $B_{s-j+1,s-i+1}$ are non-isomorphic, because they are distinguished by the action of $\tilde{e}_i$. 
Hence, our argument in the first paragraph shows that $\tilde{B}_{ij}$ is a graded-simple module, so we can apply Lemma \ref{cyclic2} to $\tilde{B}_{ij}\rtimes((\tilde{\mathfrak{s}}_i\times\tilde{\mathfrak{s}}_j)\times\mathbb{F}\tilde{e}_i)$ and conclude that the supports of $\tilde{\mathfrak{s}}_i$ and $\tilde{\mathfrak{s}}_j$ commute element-wise with $f$ and also with $\mathrm{Supp}\,\tilde{B}_{ij}$. Moreover, $f$ commutes with $\mathrm{Supp}\,\tilde{B}_{ij}$. If $i+j=s+1$, then $\tilde{B}_{ij}=B_{ij}$ and we can apply Lemma \ref{cyclic} to $B_{ij}\rtimes\tilde{\mathfrak{s}}_i$. Therefore, the elements of $\mathrm{Supp}\,S$ commute with $f$ and together generate an abelian subgroup $H$ in $G$. Then, by the same argument as in Case~1 (but using $\tilde{B}_{ij}$ instead of $B_{ij}$), we show that $\mathrm{Supp}\,S$ commutes element-wise with $\mathrm{Supp}\,J$. In order to prove that $f$ commutes with $\mathrm{Supp}\,J$, it is sufficient to consider $J_1$. As we have seen, $f$ commutes with $\mathrm{Supp}\,\tilde{B}_{ij}$ where $i+j\ne s+1$. The only case that is not covered in $J_1$ is $\tilde{B}_{s/2,s/2+1}=B_{s/2,s/2+1}$ for even $s$. Since $s>2$, we have $[\tilde{B}_{s/2-1,s/2},B_{s/2,s/2+1}]=\tilde{B}_{s/2-1,s/2+1}$. Since $f$ commutes with $\mathrm{Supp}\,\tilde{B}_{s/2-1,s/2}$ and with $\mathrm{Supp}\,\tilde{B}_{s/2-1,s/2+1}$, we conclude that $f$ commutes with $\mathrm{Supp}\,B_{s/2,s/2+1}$ as well. The commutativity of $\mathrm{Supp}\,J$ is proved by the same argument as in Case~1. \end{proof} \section{Jordan case}\label{Jord_case} Every Jordan isomorphism from the algebra $U(\mathscr{F})$, $s>1$, to an arbitrary associative algebra $R$ is either an associative isomorphism or anti-isomorphism \cite[Corollary 3.3]{BDW2016}. By the remark after Theorem \ref{aut_cecil}, $U(\mathscr{F})$ admits an anti-automorphism if and only if $n_{i}=n_{s-i+1}$ for all $i$. 
So, taking into account the structure of the automorphism group of $U(\mathscr{F})$ (see Lemma \ref{aut}), we obtain that the automorphism group of $U(\mathscr{F})^{(+)}$, that is, the algebra $U(\mathscr{F})$ viewed as a Jordan algebra with respect to the symmetrized product $x\circ y=xy+yx$, is either $\{\mathrm{Int}(x)\mid x\in U(\mathscr{F})^\times\}$ or $\{\mathrm{Int}(x)\mid x\in U(\mathscr{F})^\times\}\rtimes\langle\tau\rangle$. In both cases, the following holds: \begin{Lemma} {If $n>2$,} $\mathrm{Aut}(U(\mathscr{F})^{(+)})\simeq\mathrm{Aut}(U(\mathscr{F})_0)$.\qed \end{Lemma} Hence, if $\mathbb{F}$ is algebraically closed of characteristic $0$ and the grading group $G$ is abelian, then the classification of $G$-gradings on the Jordan algebra $U(\mathscr{F})^{(+)}$ is equivalent to the classification of $G$-gradings on the Lie algebra $U(\mathscr{F})_0$ (see also \cite[\S 5.6]{EK2013} for the simple case, $s=1$). Thus, we get the same parametrization of the isomorphism classes of gradings as in Corollary \ref{cor:main_Lie}. The only difference is the sign in the construction of Type II gradings on $M_n^{(+)}$ (compare with Equation \eqref{Type2Lie} and recall that $\Phi$ is given by Equation \eqref{Phi}): \begin{equation*} M_n^{(+)}=\bigoplus_{g\in G}R_g\text{ where }R_g=\{X\in R_{\bar{g}}\mid\Phi^{-1}X^\tau\Phi=\chi(g)X\}, \end{equation*} which are then restricted to $U(\mathscr{F})^{(+)}$. {Hence, for $n>2$, an explicit bijection between the $G$-gradings (or their isomorphism classes) on $U(\mathscr{F})^{(+)}$ and those on $U(\mathscr{F})_0$ is the following: restriction for Type I gradings and restriction with shift by the distinguished element $f$ for Type II gradings (which occur on $U(\mathscr{F})^{(+)}$ even for $n=2$, but in this case restrict to Type I gradings on $U(\mathscr{F})_0$).} We note, however, that this result does not exclude the existence of group gradings on $U(\mathscr{F})^{(+)}$ with non-commutative support. 
In view of Theorem \ref{supp_commutativity}, these gradings, if they exist, are not analogous to gradings on $U(\mathscr{F})_0$. \section{Isomorphism and practical isomorphism of graded Lie algebras}\label{practical_iso} We use the main result of this section to obtain a classification of group gradings for $U(\mathscr{F})^{(-)}$ from the classification for $U(\mathscr{F})_0$, but it is completely general and may be of independent interest. Let $G$ be a group and let $L_1$ and $L_2$ be two $G$-graded Lie algebras over an arbitrary field $\mathbb{F}$. \begin{Def}[{\cite[Definition 7]{pkfy2017}}] $L_1$ and $L_2$ are said to be \emph{practically $G$-graded isomorphic} if there exists an isomorphism of (ungraded) algebras $\psi:L_1\to L_2$ that induces a $G$-graded isomorphism $L_1/\mathfrak{z}(L_1)\to L_2/\mathfrak{z}(L_2)$. \end{Def} Note that, in this case, for every homogeneous non-central $x\in L_1$, we can find $z\in\mathfrak{z}(L_1)$ such that $y=\psi(x+z)$ is homogeneous in $L_2$ and $\deg x=\deg y$. Clearly, if $L_1$ and $L_2$ are $G$-graded isomorphic then they are practically $G$-graded isomorphic. The converse does not hold, but if $L_1$ and $L_2$ are practically $G$-graded isomorphic then the derived algebras $L_1'$ and $L_2'$ are $G$-graded isomorphic. More precisely: \begin{Lemma}\label{iso_derived} Assume $\psi:L_1\to L_2$ is an isomorphism of algebras that induces a $G$-graded isomorphism $L_1/\mathfrak{z}(L_1)\to L_2/\mathfrak{z}(L_2)$. Then $\psi$ restricts to a $G$-graded isomorphism $L_1'\to L_2'$. \end{Lemma} \begin{proof} Let $0\ne x\in L_1'$ be homogeneous of degree $g\in G$. Then there exist in $L_1$ nonzero homogeneous $x'_i$ of degree $g'_i$ and $x''_i$ of degree $g''_i$, $i=1,\ldots,m$, such that $x=\sum_{i=1}^m[x'_i,x''_i]$ and $g'_ig''_i=g$ for all $i$. Also, there exist $z'_i,z''_i\in\mathfrak{z}(L_1)$ such that $\psi(x'_i+z'_i)$ is homogeneous of degree $g'_i$ and $\psi(x''_i+z''_i)$ is homogeneous of degree $g''_i$, for all $i$. 
Hence, \[ \psi(x)=\psi\left(\sum_{i=1}^m[x'_i+z'_i,x''_i+z''_i]\right)=\sum_{i=1}^m[\psi(x'_i+z'_i),\psi(x''_i+z''_i)] \] is homogeneous in $L_2$ of degree $g$, as desired. \end{proof} Now we will see what happens if we strengthen the hypothesis on $\psi$ by assuming, in addition, that it restricts to a $G$-graded isomorphism $\mathfrak{z}(L_1)\to\mathfrak{z}(L_2)$. This does not yet imply that $\psi$ itself is a $G$-graded isomorphism, but we have the following: \begin{Thm}\label{th:main_practical} Let $L_1$ and $L_2$ be $G$-graded Lie algebras, and assume that there exists an isomorphism of (ungraded) algebras $\psi:L_1\to L_2$ such that both the induced map $L_1/\mathfrak{z}(L_1)\to L_2/\mathfrak{z}(L_2)$ and the restriction $\mathfrak{z}(L_1)\to\mathfrak{z}(L_2)$ are $G$-graded isomorphisms. Then $L_1$ and $L_2$ are isomorphic as $G$-graded algebras. \end{Thm} \begin{proof} Let $N_1\subset\mathfrak{z}(L_1)$ be a graded subspace such that \[ \mathfrak{z}(L_1)=N_1\oplus(\mathfrak{z}(L_1)\cap L_1'). \] By our hypothesis, $N_2:=\psi(N_1)$ is a graded subspace of $\mathfrak{z}(L_2)$. Since $L_1'\oplus N_1$ is a graded subspace of $L_1$, there exists a linearly independent set $\mathcal{B}_1=\{u_i\}_{i\in\mathscr{I}}$ of homogeneous elements of $L_1$ satisfying \[ L_1=L_1'\oplus N_1\oplus\mathrm{Span}\,\mathcal{B}_1. \] By our hypothesis, we can find $z_i\in\mathfrak{z}(L_1)$ such that $\psi(u_i+z_i)$ is a homogeneous element of $L_2$ that has the same degree as $u_i$. Since $\mathfrak{z}(L_1)\subset L_1'\oplus N_1$, the set $\mathcal{B}_2:=\{\psi(u_i+z_i)\}_{i\in\mathscr{I}}$ is linearly independent and satisfies \[ L_2=L_2'\oplus N_2\oplus\mathrm{Span}\,\mathcal{B}_2. \] Now define a linear map $\theta:L_1\to L_2$ by setting $\theta|_{L_1'\oplus N_1}=0$ and $\theta(u_i)=\psi(z_i)$ for all $i\in\mathscr{I}$. This is a ``trace-like map'' in the sense that its image is contained in $\mathfrak{z}(L_2)$ and its kernel contains $L_1'$. 
It follows that $\tilde\psi:=\psi+\theta$ is an isomorphism of algebras $L_1\to L_2$. Applying Lemma \ref{iso_derived}, we see that $\psi$, and hence $\tilde{\psi}$, restricts to a $G$-graded isomorphism $L_1'\oplus N_1\to L_2'\oplus N_2$. By construction, $\tilde\psi(u_i)=\psi(u_i+z_i)$. It follows that $\tilde\psi$ is an isomorphism of $G$-graded algebras. \end{proof} \begin{Cor}\label{cor:main_practical} Let $\Gamma_1$ and $\Gamma_2$ be two $G$-gradings on a Lie algebra $L$ and consider the $G$-graded algebras $L_1=(L,\Gamma_1)$ and $L_2=(L,\Gamma_2)$. If $L_1/\mathfrak{z}(L_1)=L_2/\mathfrak{z}(L_2)$ and $\mathfrak{z}(L_1)=\mathfrak{z}(L_2)$ as $G$-graded algebras, then $L_1\simeq L_2$ as $G$-graded algebras. \end{Cor} \begin{proof} Apply the previous theorem with $\psi$ being the identity map. \end{proof} \begin{Cor}\label{reduction_to_trace_0} Let $\Gamma_1$ and $\Gamma_2$ be two $G$-gradings on $U(\mathscr{F})^{(-)}$ {and assume $\mathrm{char}\,\mathbb{F}\nmid n$.} Then $\Gamma_1$ and $\Gamma_2$ are isomorphic if and only if they assign the same degree to the identity matrix ${1}$ and induce isomorphic gradings on $U(\mathscr{F})^{(-)}/\mathbb{F} {1}\simeq U(\mathscr{F})_0$. \end{Cor} \begin{proof} The ``only if'' part is clear. For the ``if'' part, take an automorphism $\psi_0$ of $U(\mathscr{F})_0$ that sends the grading induced by $\Gamma_1$ to the one induced by $\Gamma_2$, extend $\psi_0$ to an automorphism $\psi$ of $U(\mathscr{F})^{(-)}=U(\mathscr{F})_0\oplus\mathbb{F} {1}$ by setting $\psi({1})={1}$, and apply the theorem. \end{proof} \end{document}
\begin{document} \title{General model-free weighted envelope estimation} \begin{abstract} Envelope methodology is succinctly pitched as a class of procedures for increasing efficiency in multivariate analyses without altering traditional objectives \citep[first sentence of page 1]{cook2018introduction}. This description is true with the additional caveat that the efficiency gains obtained by envelope methodology are mitigated by model selection volatility to an unknown degree. The bulk of the current envelope methodology literature does not account for this added variance that arises from the uncertainty in model selection. Recent strides to account for model selection volatility have been made on two fronts: 1) development of a weighted envelope estimator, in the context of multivariate regression, to account for this variability directly; 2) development of a model selection criterion that facilitates consistent estimation of the correct envelope model for more general settings. In this paper, we unify these two directions and provide weighted envelope estimators that directly account for the variability associated with model selection and are appropriate for general multivariate estimation settings for vector-valued parameters. Our weighted estimation technique provides practitioners with robust and useful variance reduction in finite samples. Theoretical justification is given for our estimators, and the validity of a nonparametric bootstrap procedure for estimating their asymptotic variance is established. Simulation studies and a real data analysis support our claims and demonstrate the advantage of our weighted envelope estimator when model selection variability is present. 
\\[1em] \textbf{Keywords}: dimension reduction; model averaging; envelope methodology; nonparametric bootstrap; bootstrap smoothing; model selection \end{abstract} \section{Introduction} Let $\mathbf{X}_1$, $\ldots$, $\mathbf{X}_n$ be an independent sample and let $\theta \in \mathbb{R}^p$ be a target parameter that we want to estimate. Suppose that $\widetilde{\theta} = \widetilde{\theta}(\mathbf{X}_1$, $\ldots$, $\mathbf{X}_n)$ is a $\sqrt{n}$-consistent and asymptotically normal estimator of $\theta$ with asymptotic variance $\Sigma > 0$ such that \begin{equation} \sqrt{n}\left(\widetilde{\theta} - \theta\right) \overset{d}{\to} N(0, \Sigma). \label{context} \end{equation} The idea of envelope methodology is to estimate $\theta$ with less asymptotic variability than $\Sigma$ through the exploitation of a parametric link between $\theta$ and $\Sigma$ \citep{cook2010, found, cook2018introduction}. Envelope methodology originated as a method to reduce the variability of a regression coefficient matrix $\beta$ in the multivariate linear regression model \citep{cook2010, su, su-inner, cook-scale, cook2018introduction}. The key insight behind using envelope methodology as a variance reduction tool was the observation that some linear combinations of the response vector may be invariant to changes in the predictors. Such linear combinations represent variability in the response vector that is not directly relevant to the estimation of $\beta$ and should be discarded. \cite{found} extended envelope methodology to the general setting where one only has a target parameter $\theta$, a $\sqrt{n}$-consistent and asymptotically normal estimator of $\theta$ as in \eqref{context}, and a $\sqrt{n}$-consistent estimator $\widehat{\Sigma}$ of $\Sigma$. 
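To fix ideas, here is a minimal numerical sketch of the setup in \eqref{context} (our illustration, not part of the paper's development): the sample mean plays the role of $\widetilde{\theta}$ and the sample covariance the role of $\widehat{\Sigma}$, with NumPy assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 50_000
theta = np.array([1.0, -2.0, 0.5])   # true target parameter
Sigma = np.diag([4.0, 1.0, 0.25])    # true asymptotic variance, Sigma > 0

# X_1, ..., X_n: an i.i.d. sample with mean theta and covariance Sigma
X = rng.multivariate_normal(theta, Sigma, size=n)

theta_tilde = X.mean(axis=0)         # sqrt(n)-consistent estimator of theta
Sigma_hat = np.cov(X, rowvar=False)  # sqrt(n)-consistent estimator of Sigma
```

For the sample mean, $\sqrt{n}(\widetilde{\theta} - \theta) \overset{d}{\to} N(0, \Sigma)$ is just the central limit theorem, so this toy estimator satisfies \eqref{context}.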
In both the multivariate linear regression model and the general estimation framework in \eqref{context}, variance reduction obtained through envelope methodology arises from exploiting a subspace of the spectral structure of the variance matrix with dimension $u < p$ that contains span$(\theta)$. The dimension $u$ of the envelope space is unknown in practice. In many envelope modeling contexts one can estimate $u$ with information criteria, likelihood ratio tests, or cross-validation. Information criteria and likelihood ratio tests can be computed with full Grassmannian optimization \citep{cook2010, algo, found, zhangmai}, the 1D algorithm \citep{algo, found}, or the fast envelope estimator \citep{fast-algo}. Recently, \cite{zhangmai} proposed new model-free information criteria that can estimate $u$ consistently. With $u$ estimated, the variability of the envelope estimator is assessed via the bootstrap. However, most bootstrap implementations are conditional on $u = \hat{u}$ where $\hat{u}$ is the estimated dimension of the envelope space. These procedures ignore the variability associated with model selection. \cite{eck2017weighted} provided a weighted envelope bootstrap to alleviate this problem in the context of the multivariate linear regression model. In this context the variability of the weighted envelope estimator was appreciably lower than that obtained by bootstrapping the multivariate linear regression model parameters as in \citet{eck2018bootstrapping}. \citet{eck2020aster} provided a double bootstrap procedure for envelope estimation of expected Darwinian fitness from an aster model \citep{geyer2007aster, shaw2008aster} which demonstrated useful variance reduction empirically. That being said, the theoretical motivations for each of these bootstrap procedures are not applicable for envelope estimation in general settings. 
The weights in \cite{eck2017weighted} are constructed from the Bayesian Information Criterion values of the multivariate linear regression model evaluated at all envelope estimators fit at dimension $u = 1$, $\ldots$, $p$. Model selection volatility is taken into account in \citet{eck2020aster} by criteria that also require a likelihood. In this paper we provide weighted envelope estimators that are appropriate for envelope estimation in the general setting described by the setup in \eqref{context}. These settings do not require the existence of a likelihood function and avoid having to condition on an estimated envelope dimension. We then provide bootstrap procedures which estimate the variability of our weighted envelope estimators. More importantly, our bootstrap procedures estimate the variability of the envelope estimator at the true, unknown, dimension $u$. This is because as $n\to\infty$, the weight at $u$ converges to $1$ fast enough to not incorporate influences from other envelope dimensions. The envelope estimators developed in this paper are a powerful and practical unification of the methodology in \cite{eck2017weighted} and \cite{zhangmai}. Our approach greatly generalizes the appropriateness of envelope model averaging beyond the context of multivariate linear regression models while simultaneously maintaining robust finite-sample variance reduction comparable to that of the estimators in \cite{zhangmai}. In practical applications, our approach encapsulates the variability associated with model selection volatility. We now motivate envelope methodology and weighted estimation techniques. \section{Envelope properties} We first provide the definition of a reducing subspace and an envelope. 
\begin{defn}[Reducing subspace] A subspace $\mathcal{R} \subset \mathbb{R}^p$ is a reducing subspace of a matrix $M$ if $M\mathcal{R} \subset \mathcal{R}$ and $M\mathcal{R}^{c} \subset \mathcal{R}^{c}$ where $\mathcal{R}^{c}$ is the orthogonal complement of $\mathcal{R}$ relative to the usual inner product. \end{defn} A reducing subspace $\mathcal{R}$ of a matrix $M$ allows one to decompose $M$ as $M = P_{\mathcal{R}}MP_{\mathcal{R}} + Q_{\mathcal{R}}MQ_{\mathcal{R}}$ where $P_{\mathcal{R}}$ is the projection into $\mathcal{R}$ and $Q_{\mathcal{R}} = I - P_{\mathcal{R}}$. When the eigenvalues of $M$ are distinct, a reducing subspace is a direct sum of eigenspaces of $M$. \begin{defn}[Envelope] The $M$ envelope of span($U$) is defined as the intersection of all reducing subspaces $\mathcal{R}$ of $M$ which satisfy span$(U) \subseteq \mathcal{R}$. The envelope subspace is denoted by $\mathcal{E}_M(U)$. \end{defn} The subspace $\mathcal{E}_M(U)$ is a small targeted part of the spectral structure of $M$ which contains the span of $U$. Denote by $u$, $0 \leq u \leq p$, the dimension of $\mathcal{E}_M(U)$. All things being equal, a smaller value of $u$ indicates that stronger inferences can be obtained by taking advantage of the envelope structure when $U$ and $M$ are a parameter of interest and a covariance matrix, respectively. To see the benefit of envelope methodology, first consider the case when $\mathcal{E}_M(U)$ is known. Let $\Gamma \in \mathbb{R}^{p \times u}$, $u < p$, be a known semi-orthogonal basis matrix for $\mathcal{E}_M(U)$. Let $U = \theta\theta^T$ and $M = \Sigma$ and suppose that we have $\sqrt{n}$-consistent $\widetilde{\theta}$ as in the initial setup \eqref{context}. The envelope estimator of $\theta$ is then $P_{\Gamma}\widetilde{\theta}$ where $P_{\Gamma} = \Gamma\Gamma^T$. 
Notice that, \begin{align*} &\sqrt{n}\left(P_{\Gamma}\widetilde{\theta} - \theta\right) = P_{\Gamma}\left\{\sqrt{n}\left(\widetilde{\theta} - \theta\right)\right\} \overset{d}{\to} N\left(0, P_{\Gamma}\Sigma P_{\Gamma}\right), \\ &\sqrt{n}\left(Q_{\Gamma}\widetilde{\theta}\right) =Q_{\Gamma}\left\{\sqrt{n}\left(\widetilde{\theta} - \theta\right)\right\} \overset{d}{\to} N\left(0, Q_{\Gamma}\Sigma Q_{\Gamma}\right), \end{align*} where $Q_{\Gamma} = I - P_{\Gamma}$ and $Q_{\Gamma}\theta = 0$ by definition. In this demonstration we see that $P_{\Gamma}\widetilde{\theta}$ consistently estimates $\theta$ with less asymptotic variability than that of the original estimator. The remaining piece of $\widetilde{\theta}$, given by $Q_{\Gamma}\widetilde{\theta}$, is a $\sqrt{n}$ consistent estimator of $0$ with non-trivial asymptotic variability given by $Q_{\Gamma}\Sigma Q_{\Gamma}$. Therefore $P_{\Gamma}\widetilde{\theta}$ is all that is needed to estimate $\theta$ while $Q_{\Gamma}\widetilde{\theta}$ produces extra variability that is nonessential to the estimation of $\theta$. Let $\|\cdot\|$ denote the spectral matrix norm. Envelope methodology leads to massive efficiency gains in settings where $\|Q_{\Gamma}\Sigma Q_{\Gamma}\| \gg \|P_{\Gamma}\Sigma P_{\Gamma}\|$. When one has the true $u$ but needs to estimate all other quantities then the asymptotic variability of the envelope estimator $P_{\Gamma}\Sigma P_{\Gamma}$ incurs an additional cost resulting from said estimation, see Section 5.2 of \citet{cook2010} and Section 3.3 of \citet{found} for specific examples. In practical settings, all envelope modeling quantities, including $u$, require estimation. 
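The decomposition just described is easy to check numerically. The following sketch (our illustration; the basis `Gamma` is taken as known here, which is exactly the idealization of this paragraph, and NumPy is assumed) forms $P_{\Gamma}$ and $Q_{\Gamma}$ and verifies that only the projected part carries information about $\theta$:

```python
import numpy as np

p, u = 4, 2
Gamma = np.eye(p)[:, :u]  # known semi-orthogonal basis of the envelope
P = Gamma @ Gamma.T       # P_Gamma: projection onto the envelope
Q = np.eye(p) - P         # Q_Gamma: projection onto the complement

eta = np.array([1.5, -0.5])
theta = Gamma @ eta       # theta lies in span(Gamma), so Q_Gamma theta = 0

rng = np.random.default_rng(1)
theta_tilde = theta + 0.01 * rng.standard_normal(p)  # a noisy estimator
theta_env = P @ theta_tilde  # envelope estimator P_Gamma theta_tilde
```

Since $Q_{\Gamma}\theta = 0$, discarding $Q_{\Gamma}\widetilde{\theta}$ loses nothing about $\theta$; it only removes the immaterial variability $Q_{\Gamma}\Sigma Q_{\Gamma}$.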
In such general cases where the likelihood function need not be known, \citet{zhangmai} proposed to estimate a semi-orthogonal basis matrix $\Gamma \in \mathbb{R}^{p \times u}$ of $\mathcal{E}_M(U)$ by minimizing the generic moment-based objective function: \begin{equation} \label{Jn} J_n(\Gamma) = \log\mid \Gamma^T\widehat{M}\Gamma \mid + \log\mid \Gamma^T(\widehat{M} + \widehat{U})^{-1}\Gamma \mid, \end{equation} where $\widehat{M}$ and $\widehat{U}$ are $\sqrt{n}$-consistent estimators of $M$ and $U$. The motivation for \eqref{Jn} comes from its population counterpart, $J(\Gamma) = \log\mid \Gamma^TM\Gamma \mid + \log\mid \Gamma^T(M + U)^{-1}\Gamma \mid$, where \citet{algo} showed that any $\Gamma$ which minimizes $J(\Gamma)$ must satisfy the envelope condition that span$(U) \subseteq \text{span}(\Gamma)$. Assuming that the true envelope dimension $u$ is supplied, the $\sqrt{n}$-consistency of the envelope estimator constructed from a minimizer of \eqref{Jn} is established in \citet{algo}. The true envelope dimension is not known in practice. \section{Weighted envelope methodology} We introduce model-free weighted envelope estimation that offers a balance between variance reduction and model misspecification in finite samples. The idea is to weigh each envelope estimator evaluated at different candidate dimensions with respect to a measure of fit of the candidate envelope dimension. The weighted envelope estimators that we propose are of the form \begin{equation} \label{genwtenv} \hat{\theta}_w = \sum_{k=0}^p w_k\hat{\theta}_k, \qquad w_k = f_k(\mathcal{I}_n(1),\ldots, \mathcal{I}_n(p)), \end{equation} where $\hat{\theta}_k$, $\mathcal{I}_n(k)$, $f_k$ are, respectively, the envelope estimator, an information criterion that assesses the fit of the envelope dimension, and a function of all information criteria at proposed dimension $k$, and $n$ is the sample size. 
As is standard in model averaging, we require that $f_k$ and $\mathcal{I}_n(k)$ be chosen so that $w_k \geq 0$ and $\sum_k w_k = 1$. However, unlike typical model averaging contexts, $\hat{\theta}_w$ is consistent for all weight choices that satisfy $\sum_{k=u}^p w_k \to 1$ where $w_k \geq 0$ for $k \geq u$. Such weight choices induce a consistent estimator since the envelope estimator $\hat{\theta}_k$ is a consistent estimator for $\theta$ for all $k \geq u$. It is of course more desirable to select $\mathcal{I}_n(k)$ and $f_k$ so that $w_u \to 1$. In this paper, we study two choices of $\mathcal{I}_n(\cdot)$ and one choice of $f_k$ so that $w_u \to 1$ at a fast enough rate to facilitate reliable estimation of the variability of $\hat{\theta}_u$, $u$ unknown, via a nonparametric bootstrap. The weights will be of the form \begin{equation} \label{genwtenvw} w_k = \frac{\exp\left\{-n\mathcal{I}_n(k)\right\}}{\sum_{j=0}^p\exp\left\{-n\mathcal{I}_n(j)\right\}}. \end{equation} The choice of $f_k$ that yields the weights \eqref{genwtenvw} is motivated by \cite{eck2017weighted}, where $\mathcal{I}_n(k)$ reflected the BIC value of a multivariate linear regression model with the envelope estimator at dimension $k$ plugged in. The two choices of $\mathcal{I}_n(k)$ that we study here facilitate weighted envelope estimation within the general envelope estimation context \eqref{context}. The first choice of $\mathcal{I}_n(k)$, denoted $\mathcal{I}_n^{\text{FG}}(k)$, allows for consistent estimation of the variability of $\hat{\theta}_u$ using the nonparametric bootstrap. 
This is achieved by setting $\mathcal{I}_n^{\text{FG}}(k) = J_n(\hat{\Gamma}) + \text{pen}(k)$ and showing that 1) $J_n(\hat{\Gamma})$ can be cast as a quasi-likelihood objective function that is optimized via a full Grassmannian envelope optimization routine \citep{zhangmai}; 2) this partially optimized quasi-likelihood objective function can be cast as an M-estimation problem. The second choice of $\mathcal{I}_n(k)$, denoted $\mathcal{I}_n^{\text{1D}}(k)$, corresponds to sequential one-dimensional (1D) optimization \citep{algo, found, zhangmai}. The choice $\mathcal{I}_n^{\text{1D}}(k)$ does not facilitate the same consistent estimation of the variability of $\hat{\theta}_u$. However, we demonstrate that the choice $\mathcal{I}_n^{\text{1D}}(k)$ allows for reliable estimation of the variability of the envelope estimator that is estimated using the 1D algorithm at the true $u$. Both our simulations and the simulations presented in \citet{zhangmai} find that the choice of $\mathcal{I}_n^{\text{1D}}(k)$ exhibits greater empirical variance reduction than the choice of $\mathcal{I}_n^{\text{FG}}(k)$. Moreover, the 1D optimization routine is faster, more stable, and less sensitive to initial values than the full Grassmannian approach \citep{zeng2019TRES}. \subsection{Weighted envelope estimation via quasi-likelihood optimization} \label{section:FG} In this section we construct $\hat{\theta}_w$ in \eqref{genwtenv} where the information criterion $\mathcal{I}_n^{\text{FG}}(k)$ is derived from full Grassmannian optimization of the objective function \eqref{Jn}. The minimizer $\widehat{\Gamma}$ of the objective function \eqref{Jn} is the estimated basis of the envelope subspace at dimension $u$. After obtaining $\widehat{\Gamma}$, the envelope estimator is $\widehat{\theta}^{\text{FG}}_u = \widehat{\Gamma}\widehat{\Gamma}^T\widetilde{\theta}$ where the superscript FG denotes envelope estimation with respect to full Grassmannian optimization.
This envelope estimator is the original estimator projected onto the estimated envelope subspace. When $u$ is known, $\widehat{\theta}^{\text{FG}}_u$ is $\sqrt{n}$-consistent and has been shown to have lower variability than $\widetilde{\theta}$ in finite samples \citep{cook2010, found, cook2018introduction}. The weighted envelope estimator corresponding to $\mathcal{I}_n^{\text{FG}}(k)$ is \begin{equation} \label{envFG} \widehat{\theta}^{\text{FG}}_w = \sum_{k=0}^p w^{\text{FG}}_k\widehat{\theta}^{\text{FG}}_k, \qquad w^{\text{FG}}_k = \frac { \exp\left\{-n\mathcal{I}_n^{\text{FG}}(k)\right\} } { \sum_{j=0}^p\exp\left\{-n\mathcal{I}_n^{\text{FG}}(j)\right\} }, \end{equation} where $\widehat{\theta}^{\text{FG}}_k$ is the envelope estimator of $\theta$ constructed from full Grassmannian optimization of $J_n(\widehat{\Gamma}_k)$ at dimension $k$. In the remainder of this section we make $\mathcal{I}_n^{\text{FG}}(k)$ explicit and demonstrate how our choices, combined with Section~\ref{sec:bootFG}, yield consistent estimation of the variability of $\widehat{\theta}^{\text{FG}}_u$. \citet{zhangmai} showed that optimization of $J_n$ in \eqref{Jn} is the same as optimization of a partially minimized quasi-likelihood function. Define this quasi-likelihood function as, \begin{equation} \label{ln} l_n(M, \theta) = \log\mid M\mid + \text{tr}\left[M^{-1}\left\{\widehat{M} + (\widetilde{\theta} - \theta)(\widetilde{\theta} - \theta)^T\right\}\right], \end{equation} and, for some candidate dimension $k = 1$, $\ldots$, $p$, define the constraint set for the minimization of \eqref{ln} to be, \begin{equation} \label{Ak} \mathcal{A}_k = \left\{(M,\theta) : M = \Gamma\Omega\Gamma^T + \Gamma_o\Omega_o\Gamma_o^T > 0, \; \theta = \Gamma\eta, \; \eta \in \mathbb{R}^k, \; (\Gamma, \Gamma_o)^T(\Gamma, \Gamma_o) = I_p \right\}. \end{equation} Minimization of \eqref{ln} over the constraint set \eqref{Ak} is the same as minimizing $J_n$ in \eqref{Jn}.
\begin{lem} \label{lem1:lemma1} \citep[Lemma 3.1]{zhangmai}. The minimizer of $l_n(M,\theta)$ in \eqref{ln} under the envelope parameterization \eqref{Ak} is $ \widehat{M}_{\text{Env}} = \widehat{\Gamma}\widehat{\Gamma}^T\widehat{\Omega}\widehat{\Gamma}\widehat{\Gamma}^T + \widehat{\Gamma}_o\widehat{\Gamma}_o^T\widehat{\Omega}_o\widehat{\Gamma}_o\widehat{\Gamma}_o^T $ and $\widehat{\theta} = \widehat{\Gamma}\widehat{\Gamma}^T\widetilde{\theta}$ where $\widehat{\Gamma}$ is the minimizer of the partially optimized objective function $ l_n(\Gamma) = \min_{\Omega,\Omega_o,\eta} l_n(\Gamma,\Omega,\Omega_o,\eta) = J_n(\Gamma) + \log\mid \widehat{M} + \widehat{U} \mid + p $ where $\widehat{U} = \widetilde{\theta}\widetilde{\theta}^T$. \end{lem} We show that optimization of the quasi-likelihood $l_n(M, \theta)$ can be cast as an M-estimation problem when $\widehat{M} = n^{-1}\sum_{i=1}^n h(\mathbf{X}_i, \widetilde{\theta})$ provided that $\widetilde{\theta}$ is an optimal solution to some objective function, in the same vein as \citet[pg. 29]{stefanski2002M}. The justification that a nonparametric bootstrap procedure will consistently estimate the variability of $\widehat{\theta}^{\text{FG}}_u$ follows from bootstrap theory for M-estimators in Section 2 of \cite{andrews}. Therefore, recasting \eqref{Jn} and \eqref{ln} as an M-estimation problem is an important theoretical consideration for our proposed methodology. The requirement that $\widehat{M} = n^{-1}\sum_{i=1}^n h(\mathbf{X}_i, \widetilde{\theta})$ is mild, and it holds in linear regression, maximum likelihood estimation, and M-estimation.
Our parameterization of $\widehat{M}$ gives, \begin{align*} l_n(M, \theta) &= \log\mid M \mid + \text{tr}\left[M^{-1}\left\{\widehat{M} + (\widetilde{\theta} - \theta)(\widetilde{\theta} - \theta)^T\right\}\right] \\ &= \log\mid M \mid + \text{tr}\left[n^{-1}\sum_{i=1}^n M^{-1}\left\{ h(\mathbf{X}_i, \widetilde{\theta}) + (\widetilde{\theta} - \theta)(\widetilde{\theta} - \theta)^T\right\} \right] \\ &= \log\mid M \mid + n^{-1}\sum_{i=1}^n\text{tr}\left[M^{-1}\left\{ h(\mathbf{X}_i, \widetilde{\theta}) + (\widetilde{\theta} - \theta)(\widetilde{\theta} - \theta)^T\right\} \right] \\ &= n^{-1}\sum_{i=1}^n\left(\log\mid M \mid + \text{tr}\left[M^{-1}\left\{ h(\mathbf{X}_i, \widetilde{\theta}) + (\widetilde{\theta} - \theta)(\widetilde{\theta} - \theta)^T\right\} \right]\right) \\ &= n^{-1}\sum_{i=1}^n f(\mathbf{X}_i, \theta, M). \end{align*} Lemma~\ref{lem1:lemma1} then gives, \begin{equation} \label{Mest} l_n(\Gamma) = \min_{\Omega,\Omega_o,\eta} n^{-1} \sum_{i=1}^n f(\mathbf{X}_i, \Gamma, \Omega, \Omega_o, \eta) = J_n(\Gamma) + \log\mid \widehat{M} + \widehat{U} \mid + p, \end{equation} where the minimization takes place over $\mathcal{A}_k$. The minimization of $n^{-1}\sum_{i=1}^n f(\mathbf{X}_i, \theta, M)$ over $\mathcal{A}_k$ provides estimates for both $\Gamma$ and $\eta$ and yields $\theta = \Gamma\eta$ and $M = \Gamma\Omega\Gamma^T + \Gamma_o\Omega_o\Gamma_o^T$. The proof of Lemma 3.1 in \citet{zhangmai} reveals that $\widehat{\theta}^{\text{FG}}_u = \widehat{\Gamma}\widehat{\Gamma}^T\widetilde{\theta}$. In Section~\ref{sec:bootFG} we develop a nonparametric bootstrap to consistently estimate the variability of $\widehat{\theta}^{\text{FG}}_u$. Now define \begin{equation} \label{FGcrit} \mathcal{I}_n^{\text{FG}}(k) = J_n(\widehat{\Gamma}_k) + \frac{Ck\log(n)}{n}, \qquad (k = 0,1,\ldots, p), \end{equation} as in \cite{zhangmai} where $C > 0$ is a constant and $\mathcal{I}_n^{\text{FG}}(0) = 0$.
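Evaluating \eqref{Jn} and \eqref{FGcrit} is mechanical once the dimension-$k$ minimizer $\widehat{\Gamma}_k$ is in hand; in practice $\widehat{\Gamma}_k$ comes from a Grassmannian optimization routine (e.g., in the \texttt{TRES} R package), so the numpy sketch below treats it as given and uses hypothetical toy inputs.

```python
import numpy as np

def J_n(Gamma, M_hat, U_hat):
    """Objective (Jn): log|G'MG| + log|G'(M+U)^{-1}G| via slogdet."""
    A = Gamma.T @ M_hat @ Gamma
    B = Gamma.T @ np.linalg.inv(M_hat + U_hat) @ Gamma
    return np.linalg.slogdet(A)[1] + np.linalg.slogdet(B)[1]

def I_FG(k, Gamma_k, M_hat, U_hat, n, C=1.0):
    """Criterion (FGcrit): J_n(Gamma_k) + C*k*log(n)/n, with I_FG(0) = 0."""
    if k == 0:
        return 0.0
    return J_n(Gamma_k, M_hat, U_hat) + C * k * np.log(n) / n
```

Note that $J_n$ depends on $\Gamma$ only through $\text{span}(\Gamma)$: rotating the columns of a semi-orthogonal basis leaves both log-determinants unchanged, which is why the optimization lives on a Grassmannian.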
The envelope dimension is selected as $\hat{u}_{\text{FG}} = \text{arg}\min_{0\leq k\leq p} \mathcal{I}_n^{\text{FG}}(k)$. Theorem 3.1 in \citet{zhangmai} showed that $\mathbb{P}(\hat{u}_{\text{FG}} = u) \to 1$ as $n \to \infty$, provided that $C > 0$ and $\widehat{M}$ and $\widehat{U}$ are $\sqrt{n}$-consistent estimators of $M$ and $U$ respectively. \cite{zhangmai} provided evidence that selecting $\hat{u}_{\text{FG}}$ in this manner leads to envelope estimators with lower variability than traditional estimators and that correct dimension selection improves in $n$. However, correct dimension selection was far from perfect in their simulations, especially for small and moderate sample sizes. Moreover, the variability from dimension selection is ignored in these simulations. We use $\mathcal{I}_n^{\text{FG}}(k)$ in \eqref{FGcrit} to construct $\widehat{\theta}^{\text{FG}}_w$ in \eqref{envFG}. This construction yields consistent estimation of $\theta$ as seen in Section~\ref{consisprop} and consistent estimation of the variability of $\widehat{\theta}^{\text{FG}}_u$ through the combination of Theorem~\ref{thm:TFG} and our formulation of $l_n(M, \theta)$ as an M-estimation problem. \subsection{Weighted envelope estimation via the 1D algorithm} \label{section:1D} In this section we construct $\hat{\theta}_w$ in \eqref{genwtenv} where the information criterion $\mathcal{I}_n^{\text{1D}}(k)$ is derived from the 1D algorithm \citep{found, algo}. The 1D algorithm performs a sequence of optimizations that each return a basis vector of the envelope space (in the population) or a $\sqrt{n}$-consistent estimator of a basis vector for the envelope space (in finite samples). The number of optimizations corresponds to the dimension of the envelope space and is provided by the user. The returned envelope estimator is $\widehat{\theta}^{\text{1D}}_u = \widehat{\Gamma}\widehat{\Gamma}^T\widetilde{\theta}$ where the estimated basis matrix $\widehat{\Gamma}$ is obtained from the 1D algorithm.
The weighted envelope estimator corresponding to $\mathcal{I}_n^{\text{1D}}(k)$ is \begin{equation} \label{env1D} \widehat{\theta}^{\text{1D}}_w = \sum_{k=0}^p w^{\text{1D}}_k\widehat{\theta}^{\text{1D}}_k, \qquad w^{\text{1D}}_k = \frac { \exp\left\{-n\mathcal{I}_n^{\text{1D}}(k)\right\} } { \sum_{j=0}^p\exp\left\{-n\mathcal{I}_n^{\text{1D}}(j)\right\} }. \end{equation} When $C = 1$ in \eqref{1Dcrit}, the terms $n\mathcal{I}_n^{\text{1D}}(k)$ are BIC values corresponding to the asymptotic log likelihood of the envelope model of dimension $k$. The algorithm is as follows: Set $u_o \leq p-1$ to be the user-inputted number of optimizations. For step $k = 0$, $\ldots$, $u_o$, let $g_k \in \mathbb{R}^p$ denote the $k$-th direction to be obtained by the 1D algorithm. Define $G_k = (g_1$, $\ldots$, $g_k)$, and $(G_k$, $G_{0k})$ to be an orthogonal basis for $\mathbb{R}^p$ and set initial value $g_0 = G_{0} = 0$. Define $M_k = G_{0k}^TMG_{0k}$, $U_k = G_{0k}^TU G_{0k}$, and the objective function after $k$ steps $$ \phi_k(v) = \log(v^T M_k v) + \log\{v^T(M_k + U_k)^{-1} v\}. $$ The $(k+1)$-th envelope direction is $g_{k+1} = G_{0k}v_{k+1}$ where $v_{k+1} = \text{argmin}_{v}\phi_k(v)$ subject to $v^Tv = 1$. In the population, the 1D algorithm produces a nested solution path that contains the true envelope: $$ \text{span}\left(G_1\right) \subset \cdots \subset \text{span}\left(G_{u-1}\right) \subset \text{span}\left(G_u\right) = \mathcal{E}_M(U) \subset \text{span}\left(G_{u+1}\right) \subset \cdots \subset \text{span}\left(G_p\right) = \mathbb{R}^p. $$ Replacing $M$ and $U$ with $\sqrt{n}$-consistent estimators $\widehat{M}$ and $\widehat{U}$ yields $\sqrt{n}$-consistent estimates $\widehat{G}_k = \left(\widehat{g}_1, \ldots, \widehat{g}_k\right) \in \mathbb{R}^{p \times k}$, $k = 1$, $\ldots$, $p$ by optimizing $$ \phi_{k,n}(v) = \log(v^T \widehat{M}_k v) + \log\{v^T(\widehat{M}_k + \widehat{U}_k)^{-1} v\}.
$$ The resulting envelope estimator $\widehat{\theta}^{\text{1D}}_u$ is therefore $\sqrt{n}$-consistent \citep{found}. \citet{zhangmai} proposed a model selection criterion to estimate $u$ in practical settings. This criterion is, \begin{equation} \mathcal{I}_n^{\text{1D}}(k) = \sum_{j=1}^k \phi_{j,n}(\hat{v}_j) + \frac{Ck\log(n)}{n}, \qquad (k = 0, \ldots, p), \label{1Dcrit} \end{equation} where $C > 0$ is a constant and $\mathcal{I}_n^{\text{1D}}(0) = 0$. The envelope dimension selected is given by $\hat{u}_{\text{1D}} = \text{arg}\min_{0\leq k\leq p} \mathcal{I}_n^{\text{1D}}(k)$. Theorem 3.2 in \citet{zhangmai} showed that $\mathbb{P}(\hat{u}_{\text{1D}} = u) \to 1$ as $n \to \infty$, provided that $C > 0$ and $\widehat{M}$ and $\widehat{U}$ are $\sqrt{n}$-consistent estimators of $M$ and $U$ respectively. We use $\mathcal{I}_n^{\text{1D}}(k)$ in \eqref{1Dcrit} to construct $\widehat{\theta}^{\text{1D}}_w$ in \eqref{env1D}. This construction yields consistent estimation of $\theta$ as seen in Section~\ref{consisprop} and allows for reliable estimation of the variability of $\widehat{\theta}^{\text{1D}}_u$. \subsection{Consistency properties of weighted envelope estimators} \label{consisprop} Weighted envelope estimators exhibit desirable consistency properties. First of all, the weights in \eqref{envFG} and \eqref{env1D} can be constructed so that they both satisfy $w^{\text{FG}}_u \to 1$ and $w^{\text{1D}}_u \to 1$ as $n \to \infty$. \begin{lem} For any constant $C > 0$ and $\sqrt{n}$-consistent $\widehat{M}$ and $\widehat{U}$ in \eqref{FGcrit}, $w^{\text{FG}}_u \to 1$ as $n \to \infty$. \label{lem:weightsFG} \end{lem} \begin{lem} For any constant $C > 0$ and $\sqrt{n}$-consistent $\widehat{M}$ and $\widehat{U}$ in \eqref{1Dcrit}, $w^{\text{1D}}_u \to 1$ as $n \to \infty$. \label{lem:weights1D} \end{lem} The proofs of both Lemmas are included in the Supplementary Materials.
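The sequential optimization described above can be sketched directly; the following is an illustrative implementation that minimizes $\phi_{k,n}$ over unit vectors with a generic numerical optimizer and multiple starting points, not the production routine in \texttt{TRES}.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import null_space

def envelope_1d(M, U, u):
    """Sketch of the 1D algorithm: extract u directions sequentially,
    each minimizing phi over unit vectors in the orthogonal complement
    of the directions found so far (multi-start local optimization)."""
    p = M.shape[0]
    G = np.zeros((p, 0))
    for _ in range(u):
        G0 = null_space(G.T) if G.shape[1] else np.eye(p)
        Mk = G0.T @ M @ G0
        Sk = np.linalg.inv(Mk + G0.T @ U @ G0)
        def phi(v):
            v = v / np.linalg.norm(v)      # enforce v'v = 1 by scaling
            return np.log(v @ Mk @ v) + np.log(v @ Sk @ v)
        # canonical starting points guard against bad local minima
        best = min((minimize(phi, v0, tol=1e-12)
                    for v0 in np.eye(G0.shape[1])),
                   key=lambda res: res.fun)
        v = best.x / np.linalg.norm(best.x)
        G = np.column_stack([G, G0 @ v])
    return G
```

On a toy example with $M = \Gamma\Omega\Gamma^T + \Gamma_0\Omega_0\Gamma_0^T$ and $U$ supported on $\text{span}(\Gamma)$, the span of the returned directions recovers $\mathcal{E}_M(U)$, matching the nested population path displayed above.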
Lemmas~\ref{lem:weightsFG} and \ref{lem:weights1D} facilitate consistent estimation of $\theta$ using $\widehat{\theta}^{\text{FG}}_w$ and $\widehat{\theta}^{\text{1D}}_w$. \begin{lem} For any constant $C > 0$ and $\sqrt{n}$-consistent $\widehat{M}$ and $\widehat{U}$ in \eqref{FGcrit} and \eqref{1Dcrit}, both $\widehat{\theta}^{\text{FG}}_w \to \theta$ and $\widehat{\theta}^{\text{1D}}_w \to \theta$ as $n \to \infty$. \label{lem:envcon} \end{lem} The proof of Lemma~\ref{lem:envcon} immediately follows from Lemmas~\ref{lem:weightsFG} and \ref{lem:weights1D} and \citet[Proposition 2.1]{zhangmai}. While consistency is desirable, Lemma~\ref{lem:envcon} does not provide information about the asymptotic variability of $\widehat{\theta}^{\text{FG}}_w$ or $\widehat{\theta}^{\text{1D}}_w$. Therefore, consistency alone offers no assurance of variance reduction via model-free weighted envelope estimation. We expect that $\widehat{\theta}^{\text{FG}}_w$ and $\widehat{\theta}^{\text{1D}}_w$ will have lower asymptotic variance than $\widetilde{\theta}$ when $u < p$, but explicit computations of the asymptotic variance for both estimators are cumbersome. We will instead estimate the asymptotic variability of $\widehat{\theta}^{\text{FG}}_w$ and $\widehat{\theta}^{\text{1D}}_w$ with a nonparametric bootstrap. Theoretical justification for these bootstrap procedures is provided in the next section. \section{Bootstrapping for model-free weighted envelope estimators} \label{sec:boot} \subsection{Nonparametric bootstrap} Let $\mathbf{X}_1$, $\ldots$, $\mathbf{X}_n$ be the original data. We will estimate the variability of $\widehat{\theta}^{\text{FG}}_u$ and $\widehat{\theta}^{\text{1D}}_u$ by bootstrapping with respect to the weighted estimators $\widehat{\theta}^{\text{FG}}_w$ and $\widehat{\theta}^{\text{1D}}_w$. We will also show that bootstrapping with respect to $\widehat{\theta}^{\text{FG}}_w$ can consistently estimate the variability of $\widehat{\theta}^{\text{FG}}_u$.
For each iteration of this nonparametric bootstrap procedure we denote the resampled data by $\X^{\textstyle{*}}_1$, $\ldots$, $\X^{\textstyle{*}}_n$ where each $\X^{\textstyle{*}}_i$, $i = 1$, $\ldots$, $n$ is sampled, with replacement, from the original data with equal probability $1/n$. Define the bootstrapped envelope estimators $\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}} = \widehat{\theta}^{\text{FG}}\left(\X^{\textstyle{*}}_1, \ldots, \X^{\textstyle{*}}_n\right)$, $\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}} = \widehat{\theta}^{\text{1D}}\left(\X^{\textstyle{*}}_1, \ldots, \X^{\textstyle{*}}_n\right)$, and the bootstrapped version of the original estimator $\widetilde{\theta}$ as $\widetilde{\theta}^{\textstyle{*}} = \widetilde{\theta}(\X^{\textstyle{*}}_1, \ldots \X^{\textstyle{*}}_n)$. Furthermore, define $\widehat{M}^{\textstyle{*}}$ and $\widehat{U}^{\textstyle{*}}$ in the same manner as $\widehat{M}$ and $\widehat{U}$ with respect to the starred data. \subsection{For quasi-likelihood optimization} \label{sec:bootFG} In this section we provide justification for the nonparametric bootstrap as a method to estimate the variability of $\widehat{\theta}^{\text{FG}}_w$. Define \begin{equation} \label{Jnstar} J^{\textstyle{*}}_n(\Gamma) = \log\mid\Gamma^T\widehat{M}^{\textstyle{*}}\Gamma\mid + \log\mid\Gamma^T\left(\widehat{M}^{\textstyle{*}} + \widehat{U}^{\textstyle{*}}\right)^{-1}\Gamma\mid, \end{equation} as the starred analog to $J_n$ in \eqref{Jn} and define, \begin{equation} \label{lnstar} l^{\textstyle{*}}_n(M, \theta) = \log\mid M\mid + \text{tr}\left[M^{-1}\left\{\widehat{M}^{\textstyle{*}} + (\widetilde{\theta}^{\textstyle{*}} - \theta)(\widetilde{\theta}^{\textstyle{*}} - \theta)^T\right\}\right], \end{equation} as the starred analog to $l_n$ in \eqref{ln}. Define $\widehat{\Gamma}^{\textstyle{*}}$ as the minimizer of \eqref{Jnstar}.
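The resampling scheme just described amounts to drawing $n$ row indices uniformly with replacement and re-applying the estimator; a generic sketch, with the estimator passed in as a function:

```python
import numpy as np

def bootstrap_reps(data, estimator, B=300, seed=0):
    """Nonparametric bootstrap: each replicate resamples the n rows of
    `data` with replacement (probability 1/n each) and re-applies
    `estimator`, which maps a data set to a parameter estimate."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    reps = np.empty((B, np.atleast_1d(estimator(data)).shape[0]))
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        reps[b] = estimator(data[idx])
    return reps
```

In our setting `estimator` would compute $\widehat{\theta}^{\text{FG}}_w$ or $\widehat{\theta}^{\text{1D}}_w$ on the resampled data; a stand-in such as the sample mean can be used to check the machinery, since its bootstrap standard deviation should be close to $\sigma/\sqrt{n}$.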
When $\widehat{M}^{\textstyle{*}} = n^{-1}\sum_{i=1}^nh(\X^{\textstyle{*}}_i, \widetilde{\theta}^{\textstyle{*}})$ both Lemma~\ref{lem1:lemma1} and our likelihood derivation in Section~\ref{section:FG} give, $$ l^{\textstyle{*}}_n(\Gamma) = \min_{\Omega,\Omega_o,\eta} n^{-1} \sum_{i=1}^n f(\X^{\textstyle{*}}_i, \Gamma, \Omega, \Omega_o, \eta) = J^{\textstyle{*}}_n(\Gamma) + \log\mid\widehat{M}^{\textstyle{*}} + \widehat{U}^{\textstyle{*}}\mid + p, $$ which is the starred analog to \eqref{Mest}. Thus $\widehat{\Gamma}^{\textstyle{*}}$ is an M-estimator, being the minimizer of the partially minimized objective function $l^{\textstyle{*}}_n(\Gamma)$. We then let $\widehat{\theta}^{\text{FG}^{\textstyle{*}}}_u = \widehat{\Gamma}^{\textstyle{*}}\widehat{\Gamma}^{\textstyle{*}^T}\widetilde{\theta}^{\textstyle{*}}$ where $\widetilde{\theta}^{\textstyle{*}}$ is obtained in the same minimization of $\sum_{i=1}^n f(\X^{\textstyle{*}}_i, \Gamma, \Omega, \Omega_o, \eta)$. The envelope estimator $\widehat{\theta}^{\text{FG}^{\textstyle{*}}}_u$ is a product of M-estimators obtained from the same objective function. Therefore we can use the nonparametric bootstrap to consistently estimate the variability of $\widehat{\theta}^{\text{FG}}_u$ \citep[Section 2]{andrews}. The problem with this setup is that $u$ is unknown and requires estimation. We show that bootstrapping with respect to our weighted envelope estimator consistently estimates the variability of the envelope estimator $\widehat{\theta}^{\text{FG}}_u$ at the true unknown dimension when $\widehat{M}^{\textstyle{*}} = n^{-1}\sum_{i=1}^nh(\X^{\textstyle{*}}_i, \widetilde{\theta}^{\textstyle{*}})$. \begin{thm} Let $\widetilde{\theta}$ be a $\sqrt{n}$-consistent and asymptotically normal estimator. 
Let $\widehat{\theta}^{\text{FG}}_k$ be the envelope estimator obtained from full Grassmannian optimization at dimension $k = 0$, $\ldots$, $p$ and let $\widehat{\theta}^{\text{FG}}_w$ be the weighted envelope estimator with weights $w^{\text{FG}}$. Let $\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k$ and $\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_w$ denote the corresponding quantities obtained from resampled data. Then as $n$ tends to $\infty$, \begin{equation} \begin{split} \sqrt{n}\left(\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_w - \widehat{\theta}^{\text{FG}}_w\right) = \sqrt{n}\left(\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u - \widehat{\theta}^{\text{FG}}_u\right) + O_P\left\{n^{\left(1/2 - C\right)}\right\} + O_P\left[n^{\left\{Cu + 1/2\right\}}\right] \exp\left\{-n|O_P(1)|\right\}. \end{split} \label{TFGterms} \end{equation} \label{thm:TFG} \end{thm} \noindent {\bf Remarks}: \begin{itemize} \item[1.] Theorem~\ref{thm:TFG} shows that our bootstrap procedure consistently estimates the asymptotic variability of $\widehat{\theta}^{\text{FG}}_u$ when $u$ is unknown. We see that the second $O_P$ term in \eqref{TFGterms} vanishes quickly in $n$. This term is associated with underselecting the true envelope dimension. Therefore it is more likely that our bootstrap procedures will conservatively estimate the variability of $\widehat{\theta}^{\text{FG}}_u$ in finite samples. \item[2.] We advocate for the case with $C = 1$ because of the close connection that $\mathcal{I}_n^{\text{FG}}(k)$ has with the Bayesian Information Criterion; similar reasoning was given in \citet{zhangmai}. The $Ck\log(n)/n$ penalty term in $\mathcal{I}_n^{\text{FG}}(k)$ facilitates the decaying bias in $n$ represented by the $O_P$ terms in \eqref{TFGterms}. Redefining $\mathcal{I}_n^{\text{FG}}(k)$ to have a penalty term that is fixed in $n$, similar to that of the Akaike Information Criterion, fundamentally changes the $O_P$ terms in \eqref{TFGterms}.
Specifically, the $O_P(n^{-1/2})$ term (when $C = 1$) disappears and the weights $w^{\text{FG}}_k$ fail to vanish for $k > u$. Therefore unknown non-zero asymptotic weight is given to candidate models with dimension $k > u$. Weighting in this manner is therefore suboptimal and is not advised. \item[3.] The weights $w^{\text{FG}}_k$ have a similar form to the weights which appear in the model averaging literature \citep{buckland, burnham, hjort, claeskens, tsague}. These weights are of the form \begin{equation} \label{weights-post} w_k = \frac { \exp\left\{-n\mathcal{I}_n^{\text{FG}}(k)/2\right\} } { \sum_{j=0}^p\exp\left\{-n\mathcal{I}_n^{\text{FG}}(j)/2\right\} } \end{equation} and they correspond to a posterior probability approximation for model $k$ under the prior that assigns equal weight to all candidate models, given the observed data. The weights \eqref{weights-post} do not have the same asymptotic properties as our weights. The difference between the two is a rescaling of $C$. Weights \eqref{weights-post} replace the constant $C$ in \eqref{TFGterms} with $C/2$. When $C = 1$, nonzero asymptotic weight would be placed on the envelope model with dimension $k = u + 1$. Therefore, weighting according to \eqref{weights-post} leads to higher estimated variability than is necessary in practice. \end{itemize} \subsection{For the 1D algorithm} In this section we provide justification for the nonparametric bootstrap as a method to estimate the variability of $\widehat{\theta}^{\text{1D}}_u$. It is important to note that the M-estimation argument which justified using $\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u$ as a consistent estimator for the variability of $\widehat{\theta}^{\text{FG}}_u$ does not hold here; we cannot cast the 1D objective function $\phi_k(v)$ as an M-estimation problem.
The best we can do is verify that bootstrapping with respect to $\widehat{\theta}^{\text{1D}}_w$ is asymptotically the same as bootstrapping with respect to $\widehat{\theta}^{\text{1D}}_u$. We first define the quantities of the 1D algorithm applied to the starred data. For step $k = 0$, $\ldots$, $p-1$ of the 1D algorithm applied to starred data, let $\hat{g}^{\textstyle{*}}_k \in \mathbb{R}^p$ denote the $k$-th direction to be obtained. Define $\widehat{G}^{\textstyle{*}}_k = (\hat{g}^{\textstyle{*}}_1$, $\ldots$, $\hat{g}^{\textstyle{*}}_k)$, and $(\widehat{G}^{\textstyle{*}}_k$, $\widehat{G}^{\textstyle{*}}_{0k})$ to be an orthogonal basis for $\mathbb{R}^p$ and set initial value $\hat{g}^{\textstyle{*}}_0 = \widehat{G}^{\textstyle{*}}_{0} = 0$. Define $\widehat{M}^{\textstyle{*}}_k = \widehat{G}^{\textstyle{*}^T}_{0k}\widehat{M}^{\textstyle{*}}\widehat{G}^{\textstyle{*}}_{0k}$, $\widehat{U}^{\textstyle{*}}_k = \widehat{G}^{\textstyle{*}^T}_{0k}\widehat{U}^{\textstyle{*}}\widehat{G}^{\textstyle{*}}_{0k}$, and the objective function after $k$ steps $$ \phi^{\textstyle{*}}_{k,n}(v) = \log(v^T \widehat{M}^{\textstyle{*}}_k v) + \log\{v^T(\widehat{M}^{\textstyle{*}}_k + \widehat{U}^{\textstyle{*}}_k)^{-1} v\}. $$ The $(k+1)$-th envelope direction is $\hat{g}^{\textstyle{*}}_{k+1} = \widehat{G}^{\textstyle{*}}_{0k}v^{\textstyle{*}}_{k+1}$ where $v^{\textstyle{*}}_{k+1} = \text{argmin}_{v}\phi^{\textstyle{*}}_{k,n}(v)$ subject to $v^Tv = 1$. The estimated projection onto the envelope space at dimension $u$ is then $\widehat{P}^{\textstyle{*}}_u = \widehat{G}^{\textstyle{*}}_u\widehat{G}^{\textstyle{*}^T}_u$. We then arrive at the bootstrapped envelope estimator $\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_u = \widehat{P}^{\textstyle{*}}_u\widetilde{\theta}^{\textstyle{*}}$. We show that bootstrapping $\widehat{\theta}^{\text{1D}}_w$ estimates the variability of the envelope estimator $\widehat{\theta}^{\text{1D}}_u$ at the true unknown dimension.
\begin{thm} Let $\widetilde{\theta}$ be a $\sqrt{n}$-consistent and asymptotically normal estimator. Let $\widehat{\theta}^{\text{1D}}_k$ be the envelope estimator obtained from the 1D algorithm at dimension $k = 1$, $\ldots$, $p$ and let $\widehat{\theta}^{\text{1D}}_w$ be the weighted envelope estimator with weights $w^{\text{1D}}$. Let $\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_k$ and $\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_w$ denote the corresponding quantities obtained from resampled data. Then as $n$ tends to $\infty$, \begin{equation} \begin{split} \sqrt{n}\left(\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_w - \widehat{\theta}^{\text{1D}}_w\right) = \sqrt{n}\left(\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_u - \widehat{\theta}^{\text{1D}}_u\right) + O_P\left\{n^{\left(1/2 - C\right)}\right\} + O_P\left[n^{\left\{Cu + 1/2\right\}}\right] \exp\left\{-n|O_P(1)|\right\}. \end{split} \label{TDterms} \end{equation} \label{thm:TD} \end{thm} The remarks to Theorem~\ref{thm:TD} are similar to those for Theorem~\ref{thm:TFG}. We see that the second $O_P$ term in \eqref{TDterms} vanishes quickly in $n$. This term is associated with underselecting the true envelope dimension. Therefore it is more likely that our bootstrap procedures will conservatively estimate the variability of $\widehat{\theta}^{\text{1D}}_u$ in finite samples. As previously mentioned, we advocate for the case with $C = 1$ because of the close connection that $\mathcal{I}_n^{\text{1D}}(k)$ has with the Bayesian Information Criterion, and we note that the weights $w^{\text{1D}}$ have a form similar to the model averaging weights in \eqref{weights-post}. Manuals for available software recommend use of one-directional optimizations, such as the 1D algorithm or the ECD algorithm \citep{cook2018fast}, because they are faster, more stable, and less sensitive to initial values \citep{zeng2019TRES}.
\section{Examples} \subsection{Exponential family generalized linear models (GLMs) simulations} \citet{found} showed that envelope estimation can provide variance reduction for parameter estimation in exponential family GLMs when predictors are normally distributed. We demonstrate that model-free weighted envelope estimation can also achieve variance reduction in this context while accounting for model selection variability. Model-free envelope estimation techniques and maximum likelihood estimation are then used to estimate the canonical parameter vector corresponding to GLMs (the regression coefficient vector with canonical link function). Estimation is performed using functionality in the \texttt{TRES} R package \citep{zeng2019TRES}. Following the recommendations in the \texttt{TRES} R package manuals, we use the 1D algorithm. We will therefore compare the performance of $\widehat{\theta}^{\text{1D}}_w$ to $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$, the envelope estimator of $\theta$ evaluated at estimated dimension $\hat{u}_{\text{1D}}$. A nonparametric bootstrap with sample size $5000$ is then used to estimate the variability of these estimators. Our bootstrap simulation will consider two model selection regimes for obtaining $\hat{u}_{\text{1D}}$. In one regime, we estimate the envelope dimension at every iteration of the bootstrap (variable $u$ regime, estimated dimension denoted as $\hat{u}_{\text{1D}}^{*}$). In the other regime, we estimate the dimension of the envelope space in the original data set and then condition on this estimated dimension as if it were the true dimension (fixed $u$ regime, estimated dimension denoted as $\hat{u}_{\text{1D}}$). The fixed $u$ regime ignores the variability associated with model selection. Theorem~\ref{thm:TD} provides some guidance for the performance of the nonparametric bootstrap for estimating the variability of $\widehat{\theta}^{\text{1D}}_w$.
A formal analog does not exist for the other envelope estimators, although empirical evidence in \citet{zhangmai} and \citet{eck2020aster} suggests that the variable $u$ regime will provide some robustness to variability in dimension selection. We simulate four different exponential family GLM settings where $p = 8$ and the true envelope dimension is $u = 2$. These four settings are divided into two GLM regression models and two settings within these GLM regression models. The GLM models considered are the logistic and Poisson regression models. Within these models, one simulation setting is designed to be favorable to envelope modeling and one is not. Nonignorable model selection variability is present in all of these simulation settings. Predictors are generated $X \sim N(0, \Sigma_X)$, where $\Sigma_X = \Gamma\Omega\Gamma^T + \Gamma_0\Omega_0\Gamma_0^T$. We construct the canonical parameter vector (regression coefficient vector) as $\theta = \Gamma\Gamma^T v$, where $\Gamma$ and $v$ are provided in the Supplementary Materials. In the logistic regression simulations we generate $Y_i \sim \text{Bernoulli}\{\text{logit}^{-1}(\theta^T X_i)\}$, and in the Poisson simulations we generate $Y_i \sim \text{Poisson}(\exp(\theta^T X_i))$. Our logistic regression simulation settings are Setting A: $\Omega$ has diagonal elements 2 and 3, $\Omega_0$ has diagonal elements $\exp(-4), \exp(-3), \ldots, \exp(1)$; Setting B: $\Omega$ has diagonal elements $-4$ and $-5$, $\Omega_0$ has diagonal elements $\exp(-3), \exp(-2), \ldots, \exp(2)$. Our Poisson regression simulation settings are Setting A: $\Omega$ has diagonal elements 1 and 10, $\Omega_0$ has diagonal elements $\exp(-6), \exp(-5), \ldots, \exp(-1)$; Setting B: $\Omega$ has diagonal elements $-3$ and $-2$, $\Omega_0$ has diagonal elements $\exp(-4), \exp(-3), \ldots, \exp(1)$.
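A sketch of the data-generating process for logistic Setting A follows; the paper's exact $\Gamma$ and $v$ are given in the Supplementary Materials, so a random semi-orthogonal $\Gamma$ and random $v$ are used as stand-ins here.

```python
import numpy as np

def simulate_logistic_A(n, seed=0):
    """Data generator mirroring logistic Setting A (p = 8, u = 2).
    Gamma and v are random stand-ins; the paper's values are in the
    Supplementary Materials."""
    rng = np.random.default_rng(seed)
    p, u = 8, 2
    Q, _ = np.linalg.qr(rng.normal(size=(p, p)))
    Gamma, Gamma0 = Q[:, :u], Q[:, u:]
    Sigma_X = (Gamma @ np.diag([2.0, 3.0]) @ Gamma.T
               + Gamma0 @ np.diag(np.exp(np.arange(-4.0, 2.0))) @ Gamma0.T)
    v = rng.normal(size=p)
    theta = Gamma @ Gamma.T @ v                # theta lies in span(Gamma)
    X = rng.multivariate_normal(np.zeros(p), Sigma_X, size=n)
    prob = 1.0 / (1.0 + np.exp(-(X @ theta)))  # logit^{-1}(theta' X_i)
    Y = rng.binomial(1, prob)
    return X, Y, theta, Gamma
```

By construction $\theta$ is the projection of $v$ onto $\text{span}(\Gamma)$, so the envelope structure holds exactly and $\mathcal{E}_{\Sigma_X}(\text{span}(\theta)) \subseteq \text{span}(\Gamma)$.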
In both logistic and Poisson models, the configurations of $\Omega$ and $\Omega_0$ are designed to be favorable (unfavorable) to envelope estimation in setting A (setting B). Ratios of bootstrap standard deviations for estimators of the first component of the canonical parameter vector across all simulation settings are depicted in Table~\ref{Tab:GLMratios}. These ratios are of the form $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_w) = \widehat{\text{sd}}^{*}(\widetilde{\theta})/\widehat{\text{sd}}^{*}(\widehat{\theta}^{\text{1D}}_w)$ where the standard deviations $\widehat{\text{sd}}^{*}(\widetilde{\theta})$ and $\widehat{\text{sd}}^{*}(\widehat{\theta}^{\text{1D}}_w)$ are the elements in the first row and the first column of $$ \left(\frac{1}{B}\sum_{b=1}^B(\widetilde{\theta}^{\textstyle{*}}_b - \widetilde{\theta}) (\widetilde{\theta}^{\textstyle{*}}_b - \widetilde{\theta})^T\right)^{1/2}, \qquad \left(\frac{1}{B}\sum_{b=1}^B(\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_{w_b} - \widehat{\theta}^{\text{1D}}_w) (\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_{w_b} - \widehat{\theta}^{\text{1D}}_w)^T\right)^{1/2}, $$ respectively, and $B$ is the bootstrap sample size. Envelope estimation provides variance reduction in all settings. However, notice that the fixed $u$ regime exhibits erratic performance across $n$. This is due to the large variability in the estimated dimension across $n$, details of which are in the Supplementary Materials. Weighted envelope estimation and envelope estimation under the variable $u$ regime perform similarly with the weighted estimator performing slightly better. These simulations demonstrate the utility of weighted envelope estimation in the presence of model selection variability. Similar results are observed for other components of the canonical parameter vector as seen in the Supplementary Materials.
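The ratio $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_w)$ can be computed from bootstrap replicates as follows; the $(1,1)$ entries are read from the matrix square roots of the two second-moment matrices displayed above.

```python
import numpy as np
from scipy.linalg import sqrtm

def boot_sd_first(reps, point_est):
    """(1,1) entry of the square root of (1/B) sum_b d_b d_b^T,
    where d_b = theta*_b - point_est (B x p replicates)."""
    D = reps - point_est
    S = D.T @ D / reps.shape[0]
    return np.real(sqrtm(S))[0, 0]

def sd_ratio(mle_reps, mle_hat, env_reps, env_hat):
    """r(theta_tilde, theta_hat_w) = sd*(theta_tilde) / sd*(theta_hat_w)."""
    return boot_sd_first(mle_reps, mle_hat) / boot_sd_first(env_reps, env_hat)
```

A ratio above one indicates that the weighted envelope estimator has a smaller bootstrap standard deviation than the MLE for the first component, which is how Table~\ref{Tab:GLMratios} should be read.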
\begin{table} \centering \begin{tabular}{llccc|ccc} & & \multicolumn{3}{c}{Setting A} & \multicolumn{3}{c}{Setting B} \\ model & $n$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_w)$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_{\hat{u}^{*}_{\text{1D}}})$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}})$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_w)$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_{\hat{u}^{*}_{\text{1D}}})$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}})$ \\ & 300 & 1.22 & 1.16 & 3.26 & 0.99 & 0.97 & 1.13 \\ & 500 & 1.88 & 1.76 & 2.88 & 1.04 & 1.02 & 1.14 \\ Logistic & 750 & 1.03 & 0.95 & 3.64 & 1.29 & 1.27 & 1.48 \\ & 1000 & 1.00 & 0.88 & 3.50 & 1.04 & 1.02 & 1.14 \\ \hline & 300 & 1.15 & 1.11 & 2.09 & 1.15 & 1.13 & 1.41 \\ & 500 & 2.58 & 2.29 & 17.29 & 1.35 & 1.21 & 2.38 \\ Poisson & 750 & 3.81 & 3.48 & 61.88 & 1.12 & 1.10 & 1.17 \\ & 1000 & 4.09 & 3.62 & 93.11 & 1.58 & 1.49 & 2.85 \end{tabular} \caption{Performance of envelope estimators of the first component of the regression coefficients for the logistic and Poisson regression models in comparison to the MLE. The first and fourth ratio columns display the ratio of the bootstrap standard deviation of the MLE to that of the weighted envelope estimator. The second and fifth ratio columns display the ratio of the bootstrap standard deviation of the MLE to that of the envelope estimator under the variable $u$ dimension selection regime. The third and sixth ratio columns display the ratio of the bootstrap standard deviation of the MLE to that of the envelope estimator under the fixed $u$ dimension selection regime. } \label{Tab:GLMratios} \end{table} \subsection{Real data illustration} Diabetes is a group of metabolic diseases associated with long-term damage, dysfunction, and failure of different organs, especially the eyes, kidneys, nerves, heart, and blood vessels \citep{american2010diagnosis}. 
In 2017 approximately 5 million adult deaths worldwide were attributable to diabetes; global healthcare expenditures on people with diabetes are estimated at USD 850 billion \citep{cho2018idf}. Diabetes remains undiagnosed for an estimated 30\% of the people who have the disease \citep{heikes2008diabetes}. One way to address the problem of undiagnosed diabetes is to develop simple, inexpensive diagnostic tools that can identify people who are at high risk of pre-diabetes or diabetes using only readily-available clinical or demographic information \citep{heikes2008diabetes}. We examine the influence of several variables on a positive diagnosis of diabetes. We let a positive diagnosis of diabetes correspond to an individual's glycated hemoglobin percentage (also known as HbA1c) exceeding 6.5\% \citep{world2011use}. We consider an individual's height, weight, age, hip size, waist size, and gender, all of which are easy to measure, inexpensive, and do not require any laboratory testing, together with a measure of their stabilized glucose, as predictors for a positive diagnosis of diabetes. The data in this analysis come from a population-based sample of 403 rural African-Americans in Virginia \citep{willems1997prevalence}, and are taken from the \texttt{faraway} R package \citep{faraway2016R}. We consider a logistic regression model with response variable denoting a diagnosis of diabetes (1 when HbA1c $> 6.5\%$ and $0$ otherwise) that includes log transformed values for each continuous covariate and a main effect for gender. The log transformation was used to transform these variables to univariate normality while maintaining a scale that is interpretable. Model free envelope estimation techniques and maximum likelihood estimation are then used to estimate the canonical parameter vector corresponding to this logistic regression model. Estimation is performed using functionality in the \texttt{TRES} R package \citep{zeng2019TRES}. 
We compare the performance of $\widehat{\theta}^{\text{1D}}_w$ to $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$. A nonparametric bootstrap with sample size $5000$ is then used to estimate the variability of these estimators. Our bootstrap simulation considers both the variable $u$ and fixed $u$ model selection regimes. Performance results are displayed in Table~\ref{Tab:diabetesperform}. We see that $\widehat{\theta}^{\text{1D}}_w$ and $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$ are very similar to each other and both are very different from the MLE $\widetilde{\theta}$. Similarity of $\widehat{\theta}^{\text{1D}}_w$ and $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$ follows from empirical weights $w_1 = 0.982$, $w_2 = 0.0176$, and $w_k \approx 0$ for all $3 \leq k \leq 7$. Also observe that the bootstrap standard deviation estimates vary across the model selection procedures. Most notably, the fixed $u$ regime provides massive variance reduction while the weighted estimator and variable $u$ regime provide similar modest but appreciable variance reduction. The variance reduction discrepancy between the fixed $u$ regime and the weighted estimator and variable $u$ regime is due to large model selection variability. Specifically, the selected dimension probabilities across our nonparametric bootstrap procedure are $p(\hat{u}_{\text{1D}} = 1) = 0.568$, $p(\hat{u}_{\text{1D}} = 2) = 0.358$, $p(\hat{u}_{\text{1D}} = 3) = 0.067$, and $p(\hat{u}_{\text{1D}} = 4) = 0.007$. It is clear that unaccounted model selection variability may lead users astray when they use the fixed $u$ regime in estimating standard deviations via bootstrapping. This example shows how difficult it can be to report reliable variance reduction in practice, and how tempting it can be to ignore model selection variability. 
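For concreteness, forming a weighted estimate from candidate estimates and objective values can be sketched as below. The objective values and candidate estimates here are hypothetical placeholders, and the weights $w_k \propto \exp\{-n\mathcal{I}_n(k)\}$ are computed with the usual max-subtraction so that the exponentials do not overflow for large $n$.

```python
import numpy as np

def weighted_estimate(thetas, obj, n):
    """Combine candidate estimates theta_hat_k (rows of `thetas`, k = 0..p)
    with weights w_k proportional to exp(-n * I_n(k))."""
    a = -float(n) * np.asarray(obj, dtype=float)
    a -= a.max()                         # stabilize before exponentiating
    w = np.exp(a) / np.exp(a).sum()
    return w, w @ thetas

# toy objective values: k = 2 minimizes I_n(k), so it receives nearly all weight
obj = np.array([0.9, 0.5, 0.1, 0.4, 0.8])
thetas = np.arange(5.0)[:, None] * np.ones((5, 3))   # placeholder estimates
w, theta_w = weighted_estimate(thetas, obj, n=100)
```

Because the weights decay exponentially in $n$ away from the minimizing dimension, the weighted estimate is dominated by the best-fitting candidate while still reflecting near-ties, as with the empirical weights reported above.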
\begin{table} \footnotesize \centering \begin{tabular}{lcccccccccc} & $\widehat{\theta}^{\text{1D}}_w$ & $\widehat{\text{sd}}^{*}(\widehat{\theta}^{\text{1D}}_w)$ & $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$ & $\widehat{\text{sd}}^{*}(\widehat{\theta}^{\text{1D}}_{\hat{u}^{*}_{\text{1D}}})$ & $\widehat{\text{sd}}^{*}(\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}})$ & $\widetilde{\theta}$ & $\widehat{\text{sd}}^{*}(\widetilde{\theta})$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_w)$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_{\hat{u}^{*}_{\text{1D}}})$ & $r(\widetilde{\theta},\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}})$ \\ $\log(\text{Age})$ & 1.78 & 0.66 & 1.78 & 0.68 & 0.69 & 2.03 & 0.82 & 1.24 & 1.21 & 1.19 \\ $\log(\text{Weight})$ & 0.70 & 1.69 & 0.70 & 1.81 & 0.38 & 1.26 & 2.67 & 1.58 & 1.48 & 7.07 \\ $\log(\text{Height})$ & 0.03 & 4.39 & 0.03 & 4.73 & 0.09 & -4.39 & 5.07 & 1.16 & 1.07 & 54.76 \\ $\log(\text{Waist})$ & 0.68 & 1.93 & 0.68 & 2.06 & 0.26 & 2.65 & 2.98 & 1.54 & 1.45 & 11.60 \\ $\log(\text{Hip})$ & 0.49 & 3.26 & 0.49 & 3.47 & 0.24 & -2.64 & 4.23 & 1.30 & 1.22 & 17.80 \\ $\text{Female}$ & 0.40 & 0.62 & 0.40 & 0.64 & 0.62 & 0.17 & 0.74 & 1.19 & 1.17 & 1.20 \\ $\log(\text{Stab. Gluc.})$ & 5.11 & 0.93 & 5.11 & 0.93 & 0.83 & 5.03 & 1.07 & 1.15 & 1.15 & 1.29 \end{tabular} \caption{Performance of envelope estimates of the regression coefficients for the logistic regression of diabetes diagnosis on seven predictors. The first, third, and sixth columns display the weighted envelope estimator, the envelope estimator with $\hat{u}_{\text{1D}} = 1$, and the MLE respectively. The second column displays the bootstrap standard deviation of the weighted envelope estimator. The fourth and fifth columns display the bootstrap standard deviation for the envelope estimator under the variable $u$ and fixed $u$ regimes respectively. The seventh column displays the bootstrap standard deviation of the MLE. 
The last three columns display the ratios of the bootstrap standard deviation of the MLE to those of the envelope estimators.} \label{Tab:diabetesperform} \end{table} \subsection{Reproducing and extending Monte Carlo simulations of \citet{zhangmai}} \label{sec:repro} Here, we compare the performance of $\widehat{\theta}^{\text{FG}}_w$ and $\widehat{\theta}^{\text{1D}}_w$ to the consistent envelope estimators $\widehat{\theta}^{\text{FG}}_{\hat{u}_{\text{FG}}}$ and $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$ using the simulation settings in \citet{zhangmai}. For our first comparison we reproduce the Monte Carlo simulations in Section 4.2 of \citet{zhangmai} and add both $\widehat{\theta}^{\text{FG}}_w$ and $\widehat{\theta}^{\text{1D}}_w$ to the list of estimators under comparison. Performance of all estimators at a sample size of $n = 75$ is also assessed. The data generating models that are considered are a single predictor linear regression model with 10 responses, a logistic regression model with 10 predictors, and a Cox proportional hazards model with 10 predictors. In all three modeling setups, the true dimension of the envelope space is set at $u = 2$. In-depth details about this simulation setup are presented in \citet{zhangmai}. It is important to note that $\widehat{\theta}^{\text{FG}}_{\hat{u}_{\text{FG}}}$ and $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$ are estimated according to the variable $u$ regime. The Monte Carlo sample size is 200, as in \citet{zhangmai}. Table~\ref{tab:tab3inzhangmai} displays the results. From Table~\ref{tab:tab3inzhangmai} we see that the weighted envelope estimators perform very similarly to the consistent envelope estimators. This suggests that the variability in model selection is captured by all envelope estimators. 
This finding is expected in larger samples when the correct dimension selected percentage approaches 1, and it is a direct consequence of Lemmas~\ref{lem:weightsFG} and \ref{lem:weights1D} and Theorem 3.2 in \citet{zhangmai}. On the other hand, this finding is illuminating for sample sizes where the correct dimension selected percentages are nowhere near 1. Some variability in selection of $u$ which was used to construct both $\widehat{\theta}^{\text{FG}}_{\hat{u}_{\text{FG}}}$ and $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$ is incorporated into these simulations since $u$ is estimated at every iteration. \begin{table} \begin{center} \begin{tabular}{lccc|cccccc} & \multicolumn{3}{c}{Correct Selection \%} & \multicolumn{5}{c}{Estimation Error $\|\hat{\theta} - \theta\|_F$} & \\ & & & & Standard & \multicolumn{5}{c}{Envelope} \\ Model & $n$ & 1D & FG & & true $u$ & 1D & FG & W1D & WFG \\ \hline & 75 & 74 & 63.5 & 0.69 & 0.50 & 0.55 & 0.55 & 0.54 & 0.55 \\ Linear & 150 & 93 & 81 & 0.49 & 0.31 & 0.33 & 0.33 & 0.34 & 0.33 \\ & 300 & 99 & 92 & 0.33 & 0.19 & 0.19 & 0.20 & 0.19 & 0.19 \\ & 600 & 99 & 92.5 & 0.23 & 0.13 & 0.14 & 0.14 & 0.14 & 0.14 \\ \hline & 75 & 22.5 & 42 & 4.04 & 1.04 & 1.06 & 1.00 & 1.09 & 1.08 \\ Logistic & 150 & 72 & 77.5 & 2.16 & 0.56 & 0.67 & 0.60 & 0.67 & 0.64 \\ & 300 & 92 & 89.5 & 1.40 & 0.34 & 0.35 & 0.34 & 0.37 & 0.36 \\ & 600 & 98 & 94 & 0.98 & 0.22 & 0.22 & 0.24 & 0.24 & 0.24 \\ \hline & 75 & 35 & 38 & 2.07 & 1.99 & 1.95 & 1.96 & 2.04 & 2.05 \\ Cox & 150 & 57.5 & 53.5 & 1.33 & 1.24 & 1.21 & 1.22 & 1.27 & 1.28 \\ & 300 & 83 & 75.5 & 0.98 & 0.90 & 0.89 & 0.90 & 0.93 & 0.93 \\ & 600 & 100 & 93 & 0.79 & 0.72 & 0.72 & 0.72 & 0.75 & 0.75 \end{tabular} \end{center} \caption{Table of Monte Carlo simulation results for different envelope estimators with respect to three different envelope models in the spirit of Table 3 from \citet{zhangmai}. Left panel includes percentages of correct selection for these envelope estimators. 
Right panel includes means and standard errors of $\|\hat{\theta} - \theta\|_F$ for the standard estimator and the envelope estimators with either true or estimated dimensions.} \label{tab:tab3inzhangmai} \end{table} We now demonstrate the small sample performance of weighted envelope estimation in the linear and logistic cases of \citet{zhangmai}. As before, this simulation uses the exact specifications in \citet{zhangmai}, which were not designed to showcase weighted envelope estimation techniques. We ignore the Cox proportional hazards model case because appreciable gains from envelope estimation were not observed in the original Monte Carlo simulation. For this bootstrap procedure, we generated one data set corresponding to the linear and logistic regression models in the previous simulation at sample sizes $n = 75, 150, 300$. We then perform a nonparametric bootstrap to estimate the variability of each envelope estimator using a bootstrap sample size of 200 iterations. We repeat this process 25 times, and report the average ratios of standard deviations relative to the standard estimator across these 25 Monte Carlo samples. Note that estimates of $u$ are allowed to (and do) vary across the iterations of the 25 Monte Carlo samples. Table~\ref{tab:bootstrap} displays the results with respect to the first component of the parameter vector (other components behave similarly) in both regression settings. In Table~\ref{tab:bootstrap} we see that weighted envelope estimation provides larger variance reduction than that given by $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}^*}$ and $\widehat{\theta}^{\text{FG}}_{\hat{u}_{\text{FG}}^*}$ and is comparable to oracle estimation in most settings. The estimators $\widehat{\theta}^{\text{FG}}_{\hat{u}_{\text{FG}}}$ and $\widehat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}}$ outperform weighted envelope estimation. However, this variance reduction is due to underestimation of $u$ in many of the original samples. 
Thus, weighted envelope estimation provides a desirable balance between model variance reduction and robustness to model misspecification. \begin{table} \small \begin{center} \begin{tabular}{lc|ccccccc} Model & $n$ & $r(\tilde{\theta},\hat{\theta}_u)$ & $r(\tilde{\theta},\hat{\theta}^{\text{1D}}_{\hat{u}_{\text{1D}}})$ & $r(\tilde{\theta},\hat{\theta}^{\text{FG}}_{\hat{u}_{\text{FG}}})$ & $r(\tilde{\theta},\hat{\theta}^{\text{1D}}_{\hat{u}^{*}_{\text{1D}}})$ & $r(\tilde{\theta},\hat{\theta}^{\text{FG}}_{\hat{u}^{*}_{\text{FG}}})$ & $r(\tilde{\theta},\hat{\theta}^{\text{1D}}_w)$ & $r(\tilde{\theta},\hat{\theta}^{\text{FG}}_w)$ \\ \hline & 75 & 0.992 & 2.024 & 1.768 & 0.991 & 0.947 & 1.094 & 1.024 \\ Linear & 150 & 1.076 & 1.592 & 1.524 & 1.033 & 1.008 & 1.105 & 1.046 \\ & 300 & 1.236 & 2.219 & 2.108 & 1.173 & 1.102 & 1.264 & 1.171 \\ \hline & 75 & 1.013 & 1.054 & 1.022 & 0.978 & 0.966 & 1.079 & 1.033 \\ Logistic & 150 & 1.548 & 2.741 & 2.459 & 1.231 & 1.008 & 1.374 & 1.079 \\ & 300 & 4.525 & 7.338 & 5.738 & 1.331 & 1.003 & 1.450 & 1.042 \\ \hline \end{tabular} \end{center} \caption{Ratios of standard deviations for envelope estimators relative to the MLE.} \label{tab:bootstrap} \end{table} \section{Discussion} We proposed two weighted envelope estimators that properly account for model selection uncertainty in general envelope estimation settings. These estimators are a unification of the weighted envelope estimators proposed in \citet{eck2017weighted} which only account for model selection variability in the context of multivariate linear regression, and the generic algorithms (FG and 1D algorithms) in \citet{zhangmai} which provide consistent envelope dimension selection in general problems but have no finite-sample guarantees. Our weighted envelope estimators are theoretically justified, intuitive, and easy to implement. 
Our numerical examples show that our estimators possess desirable properties, especially when the sample size is prohibitively small for consistent envelope estimation techniques that do not properly account for variability in model selection. \citet{efron} provided a double bootstrap procedure that aims to incorporate variability in model selection. Their formulation is applicable to exponential families, and it has been applied to envelope methodology \citep{eck2020aster, eck2018supporting}. Useful variance reduction was found empirically in this context. Neither \citet{efron} nor \citet{eck2020aster} provided formal asymptotic justification for the bootstrap procedures they implement. In this paper we provide formal justification for the bootstrap procedures developed herein. Moreover, our weighted envelope estimators are appropriate for a more general class of envelope models than either \citet{efron}, \citet{eck2020aster}, or \citet{eck2017weighted} can claim. The idea of a model free weighting of envelope estimators across all candidate dimensions extends to partial envelopes \citep{su}, inner envelopes \citep{su-inner}, scaled envelopes \citep{cook-scale}, predictor envelopes \citep{cook-scale-pls}, sparse response envelopes \citep{su2016sparse}, tensor response regression \citep{li2017parsimonious}, matrix-variate response regression \citep{ding2018matrix}, as explicitly mentioned in \cite{ding2018matrixsupplement}, and envelope regression models with nonlinearity and heteroscedasticity \citep{zhang2020envelopes}. One noted limitation of bootstrapping a weighted envelope estimator is that it can be computationally expensive, especially when $p$ is large \citep{yau2019hypothesis}. 
In such settings, we recommend investigating if the range of candidate dimensions can reasonably be reduced to a less computationally burdensome set of values or using the variable $u$ approach when estimating the envelope dimension at every iteration of the nonparametric bootstrap. Existing envelope software implements the former approach in the context of multivariate linear regression \citep{lee2019Renvlp}. Our simulations provide some empirical justification for the performance of the latter approach. \citet{yau2019hypothesis} developed a novel hypothesis testing procedure with respect to the multivariate linear envelope model. They showed that model averaging as in \citet{eck2017weighted} is very successful and is comparable in performance to their proposed methodology. They dismissed the model averaging technique by saying, ``there is an intuitive justification for why the model average estimator is not that viable. We may recall that the original motivation for applying the envelope model is to achieve dimension reduction. When one obtains $\hat{\theta}_w$, it is true that this estimator accounts for the variability for selecting $u$, however, because all possible envelope models are involved in \eqref{envFG} and \eqref{env1D}, it becomes unclear which subspace is being projected to as a result.'' The motivation for envelope methodology is not to ``achieve dimension reduction,'' rather the motivation for envelope methodology is to increase efficiency in multivariate analyses without altering traditional objectives \citep[first sentence of page 1]{cook2018introduction}. Dimension reduction is at the core of envelope methodology, but it is just a means to an end for achieving useful variance reduction. The reporting of a specific subspace is not of foundational importance to practitioners seeking variance reduction, especially when there is both uncertainty in the subspace selected and its dimension. 
When there is uncertainty about the correct envelope dimension, model averaging with our weighted envelope estimator provides a desirable balance between massive variance reduction and correct model specification. \section*{Acknowledgements} The author would like to thank R. Dennis Cook, Forrest W. Crawford, Karl Oskar Ekvall, Dootika Vats, and Xin Zhang for valuable feedback that improved the presentation of this paper. \section*{Supplementary Materials} Supplementary materials are available with this paper. This supplement includes proofs of all technical results and it doubles as a fully reproducible technical report that makes all R based analyses transparent. The simulations in Section~\ref{sec:repro} are not included in the supplementary materials. These simulations are adapted from MATLAB code that accompanied \citet{zhangmai}. This code is readily available upon request. \begin{center} {\huge Supplementary Materials for ``General model-free weighted envelope estimation''} \end{center} This supplement contains the proofs of all technical results that appear in the main text. It is also a fully reproducible technical report which makes the R based simulations and diabetes data analysis fully transparent. The MATLAB based simulations adapted from \citet{zhangmai} are not fully reproduced here; however, the code to produce those simulations is available upon request. This supplement begins with the proofs of the technical results, followed by the numerical examples and the R functions used to create them. \section*{Proofs of technical results in main text} \begin{proof}[Proof of Lemma 2] Note that $$ w^{\text{FG}}_k = \frac { \exp\left\{-n\mathcal{I}_n^{\text{FG}}(k)\right\} } { \sum_{j=0}^p\exp\left\{-n\mathcal{I}_n^{\text{FG}}(j)\right\} } = \frac { \exp\left[n\left\{\mathcal{I}_n^{\text{FG}}(u) - \mathcal{I}_n^{\text{FG}}(k)\right\}\right] } { \sum_{j=0}^p\exp\left[n\left\{\mathcal{I}_n^{\text{FG}}(u) - \mathcal{I}_n^{\text{FG}}(j)\right\}\right] }. 
$$ By definition of $\mathcal{I}_n^{\text{FG}}(k)$, we have that \begin{equation} n\left\{\mathcal{I}_n^{\text{FG}}(k) - \mathcal{I}_n^{\text{FG}}(u)\right\} = n\left\{J_n(\widehat{\Gamma}_k) - J_n(\widehat{\Gamma}_u)\right\} + C(k-u)\log(n). \label{derivFG} \end{equation} We show that $w^{\text{FG}}_k \to 0$ as $n \to \infty$ for all $k \neq u$ by following an argument similar to the proofs of Theorems 3.1 and 3.2 in \cite{zhangmai}. Lemma 2 in \citet{zhangmai} states that $J(\Gamma_u) < J(\Gamma_k) < 0$ for all $k = 0$, $\ldots$, $u-1$, and $J(\Gamma_k) = J(\Gamma_u)$ for all $k = u+1$, $\ldots$, $p$. First suppose that $k = 0$, $\ldots$, $u-1$. In this setting, we have that \eqref{derivFG} tends to $\infty$ as $n \to \infty$. Now suppose that $k = u$, $\ldots$, $p$. In this setting, we have that $ n\left\{J_n(\widehat{\Gamma}_k) - J_n(\widehat{\Gamma}_u)\right\} = O_P(1) $ in \eqref{derivFG}. Therefore \eqref{derivFG} tends to $\infty$ as $n \to \infty$ when $k = u+1$, $\ldots$, $p$. Putting this together implies that $w^{\text{FG}}_k \to 0$ for all $k \neq u$ and $w^{\text{FG}}_u \to 1$ as $n \to \infty$. \end{proof} \begin{proof}[Proof of Lemma 3] Note that \begin{equation} w^{\text{1D}}_k = \frac { \exp\left\{-n\mathcal{I}_n^{\text{1D}}(k)\right\} } { \sum_{j=0}^p\exp\left\{-n\mathcal{I}_n^{\text{1D}}(j)\right\} } = \frac { \exp\left[n\left\{\mathcal{I}_n^{\text{1D}}(u) - \mathcal{I}_n^{\text{1D}}(k)\right\}\right] } { \sum_{j=0}^p\exp\left[n\left\{\mathcal{I}_n^{\text{1D}}(u) - \mathcal{I}_n^{\text{1D}}(j)\right\}\right] }. \label{int} \end{equation} We show that $w^{\text{1D}}_k \to 0$ as $n \to \infty$ for all $k \neq u$ by following an argument similar to the proofs of Theorems 3.1 and 3.2 in \cite{zhangmai}. First suppose that $k > u$ and observe that $$ n\left\{\mathcal{I}_n^{\text{1D}}(u) - \mathcal{I}_n^{\text{1D}}(k)\right\} = n\left\{-\sum_{j=u+1}^k \phi_{j,n}(\hat{v}_j) + \frac{C(u-k)\log(n)}{n}\right\}. 
$$ We have that $\phi_{j,n}(\hat{v}_j) = O_P\left(n^{-1}\right)$. Therefore $$ n\left\{\mathcal{I}_n^{\text{1D}}(u) - \mathcal{I}_n^{\text{1D}}(k)\right\} \to -\infty $$ as $n \to \infty$. From \eqref{int} we can conclude that $w^{\text{1D}}_k \to 0$ as $n \to \infty$ for all $k > u$. Now suppose that $k < u$. Then $$ n\left\{\mathcal{I}_n^{\text{1D}}(u) - \mathcal{I}_n^{\text{1D}}(k)\right\} = n\left\{\sum_{j=k+1}^u \phi_{j,n}(\hat{v}_j) + \frac{C(u-k)\log(n)}{n}\right\}. $$ The function $\phi_{j,n}(\hat{v}_j) \to \phi_{j}(v_j) < 0$ in probability as shown in the proof of Propositions 5 and 6 in \cite{algo}. Therefore $n\left\{\mathcal{I}_n^{\text{1D}}(u) - \mathcal{I}_n^{\text{1D}}(k)\right\} \to -\infty$ as $n \to \infty$. From \eqref{int} we can conclude that $w^{\text{1D}}_k \to 0$ as $n \to \infty$ for all $k < u$. Therefore $w^{\text{1D}}_k \to 0$ as $n \to \infty$ for all $k \neq u$ which implies that $w^{\text{1D}}_u \to 1$ as $n \to \infty$. \end{proof} \begin{proof}[Proof of Theorem 1] Notice that \begin{align*} &\sqrt{n}\left(\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_w - \widehat{\theta}^{\text{FG}}_w\right) = \sqrt{n}\left(w^{{\text{FG}}^{\textstyle{*}}}_u\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u - w^{\text{FG}}_u\widehat{\theta}^{\text{FG}}_u\right) + \sqrt{n}\left(\sum_{k\neq u}^pw^{{\text{FG}}^{\textstyle{*}}}_k\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \sum_{k \neq u}^p w^{\text{FG}}_k\widehat{\theta}^{\text{FG}}_k\right) \\ &\qquad = \sqrt{n}\left(\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u - \widehat{\theta}^{\text{FG}}_u\right) + \sqrt{n}\left\{\sum_{k\neq u}^pw^{{\text{FG}}^{\textstyle{*}}}_k \left(\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \widehat{\theta}^{\text{FG}}_u\right) - \sum_{k \neq u}^p w^{\text{FG}}_k\left(\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\right)\right\}. \end{align*} We show that $w^{\text{FG}}_k$, $w^{{\text{FG}}^{\textstyle{*}}}_k \to 0$ for all $k \neq u$ such 
that \begin{align*} &\sqrt{n}\|\sum_{k\neq u}^pw^{{\text{FG}}^{\textstyle{*}}}_k\left(\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u\right) - \sum_{k \neq u}^p w^{\text{FG}}_k\left(\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\right)\| \\ &\qquad\leq \sum_{k\neq u}^p\left(\sqrt{n}w^{{\text{FG}}^{\textstyle{*}}}_k \|\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u\| + \sqrt{n} w^{\text{FG}}_k\|\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\|\right) \to 0 \end{align*} as $n\to\infty$ where the rates of the bound are given by \eqref{TFGterms}. We have that \begin{equation} \begin{split} &\sqrt{n} w^{\text{FG}}_k\|\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\| = \frac { \sqrt{n}\exp\left\{-n\mathcal{I}_n^{\text{FG}}(k)\right\} } { \sum_{j=0}^p\exp\left\{-n\mathcal{I}_n^{\text{FG}}(j)\right\} }\|\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\| \\ &\qquad \leq \sqrt{n}\exp\left\{n\mathcal{I}_n^{\text{FG}}(u) - n\mathcal{I}_n^{\text{FG}}(k)\right\} \|\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\| \\ &\qquad= O_P\left(\sqrt{n}\right)\exp \left[n\left\{\mathcal{I}_n^{\text{FG}}(u) - \mathcal{I}_n^{\text{FG}}(k)\right\}\right] \\ &\qquad= O_P\left(\sqrt{n}\right) \exp\left[n\left\{J_n(\widehat{\Gamma}_u) - J_n(\widehat{\Gamma}_k)\right\} + C(u-k)\log(n)\right] \\ &\qquad= O_P\left(n^{C(u-k) + 1/2}\right) \exp\left[n\left\{J_n(\widehat{\Gamma}_u) - J_n(\widehat{\Gamma}_k)\right\}\right]. 
\end{split} \label{intFG} \end{equation} The same steps as \eqref{intFG} yield \begin{equation} \sqrt{n} w^{{\text{FG}}^{\textstyle{*}}}_k\|\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u\| \leq O_P\left(n^{C(u-k) + 1/2}\right) \exp\left[n\left\{J^{\textstyle{*}}_n(\widehat{\Gamma}^{\textstyle{*}}_u) - J^{\textstyle{*}}_n(\widehat{\Gamma}^{\textstyle{*}}_k)\right\}\right]. \label{intFG2} \end{equation} For $0 \leq k < u$, we have that $ J_n(\widehat{\Gamma}_u) - J_n(\widehat{\Gamma}_k) = J(\Gamma_u) - J(\Gamma_k) + o_p(1) $ where $J(\Gamma_u) < J(\Gamma_k)$ as in the proof of \citet[Theorem 3.1]{zhangmai}. Similarly we have that $$ J^{\textstyle{*}}_n(\widehat{\Gamma}^{\textstyle{*}}_u) - J^{\textstyle{*}}_n(\widehat{\Gamma}^{\textstyle{*}}_k) = J(\Gamma_u) - J(\Gamma_k) + o_p(1). $$ Therefore the rates for the exponent in the last line of \eqref{intFG} and the right hand side of \eqref{intFG2} are $-n|O_P(1)|$. Notice that the polynomial rates in the last line of \eqref{intFG} and the right hand side of \eqref{intFG2} are maximized at $k = 0$. Putting this together yields \begin{align*} \sqrt{n} w^{\text{FG}}_k\|\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\| &= O_P\left(n^{Cu + 1/2}\right)\exp\left\{-n|O_P(1)|\right\}, \\ \sqrt{n} w^{{\text{FG}}^{\textstyle{*}}}_k\|\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u\| &= O_P\left(n^{Cu + 1/2}\right)\exp\left\{-n|O_P(1)|\right\}; \end{align*} for all $0 \leq k < u$. Now consider $u < k \leq p$. From the proof of \citet[Theorem 3.1]{zhangmai} we have that $J_n(\widehat{\Gamma}_u) - J_n(\widehat{\Gamma}_k) = O_p(n^{-1})$. Combining this result with the steps in \eqref{intFG} yields \begin{equation} \sqrt{n} w^{\text{FG}}_k\|\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\| \leq O_P\left(n^{C(u-k) + 1/2}\right). 
\label{intFG3} \end{equation} A similar argument applied to the starred data gives \begin{equation} \sqrt{n} w^{{\text{FG}}^{\textstyle{*}}}_k\|\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u\| \leq O_P\left(n^{C(u-k) + 1/2}\right). \label{intFG4} \end{equation} The rates in both \eqref{intFG3} and \eqref{intFG4} are maximized at $k = u + 1$. Putting this together yields \begin{align*} \sqrt{n} w^{\text{FG}}_k\|\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\| &= O_P\left(n^{1/2 - C}\right), \\ \sqrt{n} w^{{\text{FG}}^{\textstyle{*}}}_k\|\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_u\| &= O_P\left(n^{1/2 - C}\right); \end{align*} for all $u < k \leq p$. Therefore \begin{align*} &\sqrt{n}\left\{\sum_{k\neq u}^pw^{{\text{FG}}^{\textstyle{*}}}_k\left(\widehat{\theta}^{{\text{FG}}^{\textstyle{*}}}_k - \widehat{\theta}^{\text{FG}}_u\right) - \sum_{k \neq u}^p w^{\text{FG}}_k\left(\widehat{\theta}^{\text{FG}}_k - \widehat{\theta}^{\text{FG}}_u\right)\right\} \\ &\qquad= O_P\left(n^{1/2 - C}\right) + O_P\left(n^{Cu + 1/2}\right)\exp\left\{-n|O_P(1)|\right\}, \end{align*} as desired and the conclusion follows. 
\end{proof} \begin{proof}[Proof of Theorem 2] Notice that \begin{align*} &\sqrt{n}\left(\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_w - \widehat{\theta}^{\text{1D}}_w\right) = \sqrt{n}\left(w^{\textstyle{*}}_u\widehat{\theta}^{{\text{1D}}^{\textstyle{*}}}_u - w_u\widehat{\theta}^{\text{1D}}_u\right) + \sqrt{n}\left(\sum_{k\neq u}^pw^{\textstyle{*}}_k\widehat{\theta}^{\textstyle{*}}_k - \sum_{k \neq u}^p w_k\widehat{\theta}_k\right) \\ &\qquad = \sqrt{n}\left(\widehat{\theta}^{\textstyle{*}}_u - \widehat{\theta}_u\right) + \sqrt{n}\left\{\sum_{k\neq u}^pw^{\textstyle{*}}_k\left(\widehat{\theta}^{\textstyle{*}}_k - \widehat{\theta}^{\textstyle{*}}_u\right) - \sum_{k \neq u}^p w_k\left(\widehat{\theta}_k - \widehat{\theta}_u\right)\right\}. \end{align*} We show that $w_k$, $w^{\textstyle{*}}_k \to 0$ such that $$ \sqrt{n}\|\sum_{k\neq u}^pw^{\textstyle{*}}_k\left(\widehat{\theta}^{\textstyle{*}}_k - \widehat{\theta}^{\textstyle{*}}_u\right) - \sum_{k \neq u}^p w_k\left(\widehat{\theta}_k - \widehat{\theta}_u\right)\| \leq \sum_{k\neq u}^p\left(\sqrt{n}w^{\textstyle{*}}_k\|\widehat{\theta}^{\textstyle{*}}_k - \widehat{\theta}^{\textstyle{*}}_u\| + \sqrt{n} w_k\|\widehat{\theta}_k - \widehat{\theta}_u\|\right) \to 0 $$ as $n\to\infty$ for all $k \neq u$ and find the rates at which they vanish. 
We have that \begin{equation} \begin{split} &\sqrt{n} w_k\|\widehat{\theta}_k - \widehat{\theta}_u\| = \frac { \sqrt{n}\exp\left\{-n\mathcal{I}_n^{\text{1D}}(k)\right\} } { \sum_{j=0}^p\exp\left\{-n\mathcal{I}_n^{\text{1D}}(j)\right\} }\|\widehat{\theta}_k - \widehat{\theta}_u\| \\ &\qquad \leq \sqrt{n}\exp\left\{n\mathcal{I}_n^{\text{1D}}(u) - n\mathcal{I}_n^{\text{1D}}(k)\right\} \|\widehat{\theta}_k - \widehat{\theta}_u\| \\ &\qquad= \sqrt{n}\exp\left\{ n\sum_{j=1}^u \phi_{j,n}(\hat{v}_j) - n\sum_{j=1}^k\phi_{j,n}(\hat{v}_j) + C(u - k)\log{n} \right\}\|\widehat{\theta}_k - \widehat{\theta}_u\| \\ &\qquad= n^{\left\{C(u - k) + 1/2\right\}}\exp\left\{ n\sum_{j=1}^u \phi_{j,n}(\hat{v}_j) - n\sum_{j=1}^k\phi_{j,n}(\hat{v}_j) \right\}\|\widehat{\theta}_k - \widehat{\theta}_u\| \\ &\qquad= O_P\left[n^{\left\{C(u - k) + 1/2\right\}}\right]\exp\left\{ n\sum_{j=1}^u \phi_{j,n}(\hat{v}_j) - n\sum_{j=1}^k\phi_{j,n}(\hat{v}_j) \right\} \end{split} \label{foo} \end{equation} where the last equality follows from the fact that $\|\widehat{\theta}_k - \widehat{\theta}_u\| = |O_P(1)|$ for all $k = 1$, $\ldots$, $p$. This is because $\widehat{\theta}_k \to \theta$ for all $k = u$, $\ldots$, $p$ and $\|\widehat{\theta}_k\| \to a \leq \|\theta\|$ for all $k = 1$, $\ldots$, $u-1$ since the envelope estimator exhibits shrinkage when $k = 1$, $\ldots$, $u-1$. First suppose that $k = 1$, $\ldots$, $u-1$. In this setting $\phi_{k,n}(\hat{v}_k) \to \phi_k(v_k) < 0$ as $n \to \infty$ \citep[proof of Theorems 5 and 6]{algo}. From \eqref{foo} we have \begin{equation} \begin{split} &\sqrt{n} w_k\|\widehat{\theta}_k - \widehat{\theta}_u\| \leq O_P\left[n^{\left\{C(u - k) + 1/2\right\}}\right] \exp\left\{n\sum_{j=k+1}^u \phi_{j,n}(\hat{v}_j)\right\} \\ &\qquad= O_P\left[n^{\left\{C(u - k) + 1/2\right\}}\right] \exp\left\{-n|O_P(1)|\right\}. \end{split} \label{bar-1} \end{equation} Now suppose that $k = u+1$, $\ldots$, $p$. 
In this setting, $\phi_{k,n}(\hat{v}_k) = O_p\left(n^{-1}\right)$ \citep[proof of Theorem 3.1]{zhangmai}. From \eqref{foo} we have \begin{equation} \begin{split} &\sqrt{n} w_k\|\widehat{\theta}_k - \widehat{\theta}_u\| \leq O_p\left[n^{\left\{C(u - k) + 1/2\right\}}\right] \exp\left\{-n\sum_{j=u+1}^k \phi_{j,n}(\hat{v}_j)\right\} \\ &\qquad= O_p\left[n^{\left\{C(u - k) + 1/2\right\}}\right]. \end{split} \label{bar-2} \end{equation} The same steps as in \eqref{bar-1} and \eqref{bar-2} apply to the starred data, so that \begin{equation} \sqrt{n} w^{\textstyle{*}}_k\|\widehat{\theta}^{\textstyle{*}}_k - \widehat{\theta}^{\textstyle{*}}_u\| \leq O_P\left[n^{\left\{C(u - k) + 1/2\right\}}\right] \exp\left\{-n|O_P(1)|\right\}, \qquad (k = 1,...,u-1), \label{baz-1} \end{equation} and \begin{equation} \sqrt{n} w^{\textstyle{*}}_k\|\widehat{\theta}^{\textstyle{*}}_k - \widehat{\theta}^{\textstyle{*}}_u\| = O_p\left[n^{\left\{C(u - k) + 1/2\right\}}\right], \qquad (k = u+1,...,p). \label{baz-2} \end{equation} Our conclusion follows by noting that \eqref{bar-1}, \eqref{bar-2}, \eqref{baz-1}, and \eqref{baz-2} imply that \begin{align*} \sqrt{n} w_k\|\widehat{\theta}_k - \widehat{\theta}_u\| &\leq O_P\left[n^{\left\{Cu + 1/2\right\}}\right] \exp\left\{-n|O_P(1)|\right\}, \qquad (k = 1,...,u-1); \\ \sqrt{n} w_k\|\widehat{\theta}_k - \widehat{\theta}_u\| &\leq O_p\left\{n^{\left(1/2 - C\right)}\right\}, \qquad (k = u+1,...,p); \\ \sqrt{n} w^{\textstyle{*}}_k\|\widehat{\theta}^{\textstyle{*}}_k - \widehat{\theta}^{\textstyle{*}}_u\| &\leq O_P\left[n^{\left\{Cu + 1/2\right\}}\right] \exp\left\{-n|O_P(1)|\right\}, \qquad (k = 1,...,u-1); \\ \sqrt{n} w^{\textstyle{*}}_k\|\widehat{\theta}^{\textstyle{*}}_k - \widehat{\theta}^{\textstyle{*}}_u\| &\leq O_p\left\{n^{\left(1/2 - C\right)}\right\}, \qquad (k = u+1,...,p); \end{align*} respectively. \end{proof} \section*{Numerical examples} The following software packages are needed to reproduce the analyses in the main text.
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{rm}\hlstd{(}\hlkwc{list} \hlstd{=} \hlkwd{ls}\hlstd{())} \hlkwd{library}\hlstd{(tidyverse)} \hlkwd{library}\hlstd{(TRES)} \hlkwd{library}\hlstd{(MASS)} \hlkwd{library}\hlstd{(foreach)} \hlkwd{library}\hlstd{(doParallel)} \hlkwd{library}\hlstd{(xtable)} \hlkwd{library}\hlstd{(faraway)} \end{alltt} \end{kframe} \end{knitrout} To register doParallel to be used with foreach, we call the registerDoParallel function and specify the number of cores to be used for parallel computing. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{numCores} \hlkwb{<-} \hlkwd{detectCores}\hlstd{()} \hlopt{-} \hlnum{1}\hlstd{; numCores} \end{alltt} \begin{verbatim} ## [1] 15 \end{verbatim} \begin{alltt} \hlkwd{registerDoParallel}\hlstd{(}\hlkwc{cores} \hlstd{= numCores)} \hlkwd{RNGkind}\hlstd{(}\hlstr{"L'Ecuyer-CMRG"}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \section*{Logistic regression simulations} \subsection*{Setting A:} We reproduce the logistic regression simulation in the main text. We first create the basis matrix $\Gamma$ for the true envelope space and the basis matrix for its orthogonal complement $\Gamma_0$. 
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{p} \hlkwb{<-} \hlnum{8}\hlstd{; u} \hlkwb{<-} \hlnum{2} \hlstd{v1} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlkwd{rep}\hlstd{(}\hlnum{1}\hlopt{/}\hlkwd{sqrt}\hlstd{(p),p),} \hlkwc{nrow} \hlstd{= p)} \hlstd{O} \hlkwb{<-}\hlkwd{qr.Q}\hlstd{(}\hlkwd{qr}\hlstd{(v1),} \hlkwc{complete} \hlstd{=} \hlnum{TRUE}\hlstd{)} \hlstd{Gamma} \hlkwb{<-} \hlstd{O[,} \hlnum{1}\hlopt{:}\hlstd{u]} \hlstd{Gamma0} \hlkwb{<-} \hlstd{O[, (u}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{p]} \end{alltt} \end{kframe} \end{knitrout} We next create the core of the material and immaterial variation, denoted as $\Omega$ and $\Omega_0$ respectively. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{Omega} \hlkwb{<-} \hlkwd{diag}\hlstd{(}\hlnum{2}\hlopt{:}\hlnum{3}\hlstd{)} \hlstd{Omega0} \hlkwb{<-} \hlkwd{diag}\hlstd{(}\hlkwd{exp}\hlstd{(}\hlkwd{c}\hlstd{(}\hlopt{-}\hlnum{4}\hlopt{:}\hlnum{1}\hlstd{)))} \end{alltt} \end{kframe} \end{knitrout} We now build the variance matrix of the predictor variables and construct the true canonical parameter vector (regression coefficient vector) as an element contained in $\text{span}(\Gamma)$. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{SigmaX} \hlkwb{<-} \hlstd{Gamma} \hlopt{\%*\%} \hlstd{Omega} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Gamma)} \hlopt{+} \hlstd{Gamma0} \hlopt{\%*\%} \hlstd{Omega0} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Gamma0)} \hlstd{beta} \hlkwb{<-} \hlstd{Gamma} \hlopt{\%*\%} \hlstd{...} \hlstd{eig} \hlkwb{<-} \hlkwd{eigen}\hlstd{(SigmaX)} \hlstd{SigmaX.half} \hlkwb{<-} \hlstd{eig}\hlopt{$}\hlstd{vec} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(}\hlkwd{sqrt}\hlstd{(eig}\hlopt{$}\hlstd{val))} \hlopt{\%*\%} \hlkwd{t}\hlstd{(eig}\hlopt{$}\hlstd{vec)} \end{alltt} \end{kframe} \end{knitrout} We now perform the nonparametric bootstrap procedure for all envelope estimators of the canonical parameter vector mentioned in the main text and the MLE. This nonparametric bootstrap has a bootstrap sample size of $5000$.
Our bootstrap simulation will consider two model selection regimes for determining the envelope dimension $\hat{u}_{1D}$ at every iteration. In one scheme, we estimate the envelope dimension at every iteration of the bootstrap (variable $u$). In the other scheme, we estimate the dimension of the envelope space in the original data set and then treat this estimated dimension as the true dimension when we resample our data and calculate the envelope estimators (fixed $u$), thus ignoring the variability associated with model selection. Theorem 3 in the main text provides some guidance on the performance of the nonparametric bootstrap for estimating the variability of $\widehat{\theta}^{\text{1D}}_w$; no analog exists for the other envelope estimators. The logistic\_sim function below generates a logistic regression model that incorporates the envelope structure constructed above and stored in your global environment. The function then calls model\_boot (code in the Appendix) to perform the nonparametric bootstrap with respect to all considered estimators. This function also returns the estimated envelope dimension at every iteration of the nonparametric bootstrap.
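The two regimes above can be sketched with a minimal resampling loop. This is an illustration only, not the actual model\_boot implementation (which appears in the Appendix); fit\_env and select\_u are hypothetical stand-ins for the envelope fit and the 1D dimension-selection step.

```r
## Sketch of the two bootstrap regimes (illustrative only).
## fit_env() and select_u() are hypothetical placeholders for the
## envelope fit and the 1D dimension-selection procedure.
boot_sketch <- function(data, nboot, u_fixed) {
  n <- nrow(data)
  replicate(nboot, {
    ## resample rows with replacement
    star <- data[sample(n, replace = TRUE), , drop = FALSE]
    ## variable u: re-estimate the dimension on every bootstrap draw,
    ## so model-selection variability enters the bootstrap distribution
    u_var <- select_u(star)
    list(var_u = fit_env(star, u = u_var),
         ## fixed u: treat the dimension estimated once from the
         ## original data as known, ignoring selection variability
         fix_u = fit_env(star, u = u_fixed))
  }, simplify = FALSE)
}
```

Both regimes use the same resamples; they differ only in whether select\_u is re-run inside the loop.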
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{logistic_sim} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{n}\hlstd{,} \hlkwc{p} \hlstd{= p)\{} \hlstd{X} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlkwd{rnorm}\hlstd{(n}\hlopt{*}\hlstd{p),} \hlkwc{nrow} \hlstd{= n)} \hlopt{\%*\%} \hlstd{SigmaX.half} \hlstd{gx} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{(X} \hlopt{\%*\%} \hlstd{beta)} \hlstd{Y} \hlkwb{<-} \hlkwd{rbinom}\hlstd{(n,} \hlkwc{size} \hlstd{=} \hlnum{1}\hlstd{,} \hlkwc{prob} \hlstd{=} \hlnum{1} \hlopt{/} \hlstd{(}\hlnum{1} \hlopt{+} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{gx)))} \hlstd{data_sim} \hlkwb{<-} \hlkwd{as.data.frame}\hlstd{(}\hlkwd{cbind}\hlstd{(Y, X))} \hlstd{m1} \hlkwb{<-} \hlkwd{glm}\hlstd{(Y} \hlopt{~ -}\hlnum{1} \hlopt{+} \hlstd{.,} \hlkwc{family} \hlstd{=} \hlstr{"binomial"}\hlstd{,} \hlkwc{data} \hlstd{= data_sim)} \hlkwd{model_boot}\hlstd{(}\hlkwc{model} \hlstd{= m1,} \hlkwc{nboot} \hlstd{= nboot,} \hlkwc{cores} \hlstd{= numCores)} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} We perform the nonparametric bootstrap at four different sample sizes.
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{set.seed}\hlstd{(}\hlnum{13}\hlstd{)} \hlstd{n} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlnum{300}\hlstd{,} \hlnum{500}\hlstd{,} \hlnum{750}\hlstd{,} \hlnum{1000}\hlstd{)} \hlstd{nboot} \hlkwb{<-} \hlnum{5000} \hlkwd{system.time}\hlstd{(\{} \hlstd{lsims} \hlkwb{<-} \hlkwd{lapply}\hlstd{(n,} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)} \hlkwd{logistic_sim}\hlstd{(}\hlkwc{n} \hlstd{= j,} \hlkwc{p} \hlstd{= p))} \hlstd{\})} \end{alltt} \begin{verbatim} ## user system elapsed ## 11838.292 100.072 830.252 \end{verbatim} \end{kframe} \end{knitrout} The distribution of the estimated dimension across bootstrap iterations and sample sizes is depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{u_boot_l} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(lsims),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{round}\hlstd{(}\hlkwd{table}\hlstd{(lsims[[j]][,} \hlnum{4}\hlopt{*}\hlstd{p}\hlopt{+}\hlnum{1}\hlstd{])} \hlopt{/} \hlstd{nboot,} \hlnum{3}\hlstd{)} \hlstd{\})} \hlcom{## n = 300} \hlstd{u_boot_l[[}\hlnum{1}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 5 6 ## 0.407 0.410 0.158 0.022 0.002 0.000 \end{verbatim} \begin{alltt} \hlcom{## n = 500} \hlstd{u_boot_l[[}\hlnum{2}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 ## 0.659 0.298 0.040 0.003 \end{verbatim} \begin{alltt} \hlcom{## n = 750} \hlstd{u_boot_l[[}\hlnum{3}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 5 ## 0.312 0.436 0.212 0.037 0.003 \end{verbatim} \begin{alltt} \hlcom{## n = 1000} \hlstd{u_boot_l[[}\hlnum{4}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 5 ## 0.456 0.436 0.101 0.007 0.000 \end{verbatim} \end{kframe} \end{knitrout} The Frobenius norm of all bootstrapped covariance matrices for all estimators across sample sizes is depicted below: \begin{kframe}
\begin{alltt} \hlstd{volume_boot_l} \hlkwb{<-} \hlkwd{do.call}\hlstd{(rbind,} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(lsims),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{unlist}\hlstd{(}\hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(lsims[[j]]),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{norm}\hlstd{(x,} \hlkwc{type}\hlstd{=}\hlstr{"F"}\hlstd{)))} \hlstd{\}))} \hlkwd{rownames}\hlstd{(volume_boot_l)} \hlkwb{<-} \hlstd{n} \hlkwd{xtable}\hlstd{(volume_boot_l,} \hlkwc{digits} \hlstd{=} \hlnum{3}\hlstd{)} \end{alltt} \end{kframe} \begin{table}[ht] \centering \begin{tabular}{rrrrr} \hline & se\_wt & se\_env & se\_env\_fixedu & se\_MLE \\ \hline 300 & 3.705 & 4.102 & 0.366 & 4.297 \\ 500 & 0.477 & 0.569 & 0.132 & 2.153 \\ 750 & 1.064 & 1.198 & 0.068 & 1.026 \\ 1000 & 0.784 & 1.033 & 0.053 & 0.792 \\ \hline \end{tabular} \end{table} The estimated efficiency gains for envelope estimators ($\text{se}^*(\hat{\theta}) / \text{se}^*(\hat{\theta}_{\text{env}})$) across sample sizes are depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{SEs_boot_l} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(lsims),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{do.call}\hlstd{(cbind,} \hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(lsims[[j]]),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{sqrt}\hlstd{(}\hlkwd{diag}\hlstd{(x))))} \hlstd{\})} \hlstd{ratios_boot_l} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(lsims),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlcom{# ratio of SE(MLE) to SE(wtEnv)} \hlstd{out} \hlkwb{<-} \hlkwd{cbind}\hlstd{(SEs_boot_l[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_l[[j]][,} \hlnum{1}\hlstd{],} \hlcom{# ratio of SE(MLE) to SE(Env)} \hlstd{SEs_boot_l[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_l[[j]][,} \hlnum{2}\hlstd{],} \hlcom{# ratio of SE(MLE) to
SE(Env_hat(u))} \hlstd{SEs_boot_l[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_l[[j]][,} \hlnum{3}\hlstd{])} \hlkwd{colnames}\hlstd{(out)} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"se(MLE)/se(env_wt)"}\hlstd{,} \hlstr{"se(MLE)/se(env_varu)"}\hlstd{,} \hlstr{"se(MLE)/se(env_fixu)"}\hlstd{)} \hlkwd{round}\hlstd{(out,} \hlnum{3}\hlstd{)} \hlstd{\})} \hlcom{## n = 300} \hlstd{ratios_boot_l[[}\hlnum{1}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.224 1.158 3.263 ## 1.142 1.112 1.646 ## 1.013 0.965 8.304 ## 1.160 1.091 3.225 ## 1.439 1.331 1.816 ## 1.154 1.137 1.319 ## 1.057 1.034 1.142 ## 1.183 1.152 1.596 \end{verbatim} \begin{alltt} \hlcom{## n = 500} \hlstd{ratios_boot_l[[}\hlnum{2}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.879 1.763 2.883 ## 1.291 1.270 1.563 ## 2.141 1.951 9.651 ## 2.177 1.969 5.369 ## 2.205 2.064 2.811 ## 1.460 1.434 1.547 ## 1.107 1.091 1.138 ## 1.209 1.186 1.319 \end{verbatim} \begin{alltt} \hlcom{## n = 750} \hlstd{ratios_boot_l[[}\hlnum{3}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.028 0.951 3.642 ## 0.977 0.921 1.475 ## 1.067 0.999 11.346 ## 0.774 0.732 5.263 ## 0.632 0.592 2.721 ## 1.138 1.117 1.301 ## 0.941 0.919 0.963 ## 1.018 0.972 1.415 \end{verbatim} \begin{alltt} \hlcom{## n = 1000} \hlstd{ratios_boot_l[[}\hlnum{4}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 0.997 0.881 3.498 ## 1.001 0.919 1.134 ## 1.008 0.924 10.073 ## 1.490 1.350 5.495 ## 0.502 0.378 2.191 ## 1.481 1.420 1.740 ## 0.936 0.859 1.211 ## 1.012 0.914 1.305 \end{verbatim} \end{kframe} \end{knitrout} \subsection*{Setting B:} We next create the core of the material and immaterial variation, denoted as $\Omega$ and $\Omega_0$ respectively, build the variance matrix of the predictor variables, and construct the true 
canonical parameter vector (regression coefficient vector) as an element contained in $\text{span}(\Gamma)$. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{Omega} \hlkwb{<-} \hlkwd{diag}\hlstd{(}\hlkwd{exp}\hlstd{(}\hlopt{-}\hlkwd{c}\hlstd{(}\hlnum{4}\hlopt{:}\hlnum{5}\hlstd{)))} \hlstd{Omega0} \hlkwb{<-} \hlkwd{diag}\hlstd{(}\hlkwd{exp}\hlstd{(}\hlkwd{c}\hlstd{(}\hlopt{-}\hlnum{3}\hlopt{:}\hlnum{2}\hlstd{)))} \hlstd{SigmaX} \hlkwb{<-} \hlstd{Gamma} \hlopt{\%*\%} \hlstd{Omega} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Gamma)} \hlopt{+} \hlstd{Gamma0} \hlopt{\%*\%} \hlstd{Omega0} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Gamma0)} \hlstd{beta} \hlkwb{<-} \hlstd{Gamma} \hlopt{\%*\%} \hlstd{...} \hlstd{eig} \hlkwb{<-} \hlkwd{eigen}\hlstd{(SigmaX)} \hlstd{SigmaX.half} \hlkwb{<-} \hlstd{eig}\hlopt{$}\hlstd{vec} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(}\hlkwd{sqrt}\hlstd{(eig}\hlopt{$}\hlstd{val))} \hlopt{\%*\%} \hlkwd{t}\hlstd{(eig}\hlopt{$}\hlstd{vec)} \end{alltt} \end{kframe} \end{knitrout} We perform the nonparametric bootstrap at four different sample sizes. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{set.seed}\hlstd{(}\hlnum{13}\hlstd{)} \hlkwd{RNGkind}\hlstd{(}\hlstr{"L'Ecuyer-CMRG"}\hlstd{)} \hlstd{n} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlnum{300}\hlstd{,} \hlnum{500}\hlstd{,} \hlnum{750}\hlstd{,} \hlnum{1000}\hlstd{)} \hlstd{nboot} \hlkwb{<-} \hlnum{5000} \hlkwd{system.time}\hlstd{(\{} \hlstd{lsimsB} \hlkwb{<-} \hlkwd{lapply}\hlstd{(n,} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)} \hlkwd{logistic_sim}\hlstd{(}\hlkwc{n} \hlstd{= j,} \hlkwc{p} \hlstd{= p))} \hlstd{\})} \end{alltt} \begin{verbatim} ## user system elapsed ## 11804.324 86.218 807.427 \end{verbatim} \end{kframe} \end{knitrout} The distribution of the estimated dimension across bootstrap iterations and sample sizes is depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{u_boot_lB} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(lsimsB),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{round}\hlstd{(}\hlkwd{table}\hlstd{(lsimsB[[j]][,}
\hlnum{4}\hlopt{*}\hlstd{p}\hlopt{+}\hlnum{1}\hlstd{])} \hlopt{/} \hlstd{nboot,} \hlnum{3}\hlstd{)} \hlstd{\})} \hlstd{u_boot_lB} \end{alltt} \begin{verbatim} ## [[1]] ## ## 1 2 3 4 5 6 7 ## 0.019 0.126 0.328 0.332 0.155 0.036 0.003 ## ## [[2]] ## ## 1 2 3 4 5 6 7 ## 0.003 0.083 0.275 0.363 0.212 0.056 0.007 ## ## [[3]] ## ## 1 2 3 4 5 6 7 ## 0.006 0.073 0.243 0.358 0.243 0.070 0.008 ## ## [[4]] ## ## 1 2 3 4 5 6 7 8 ## 0.002 0.041 0.166 0.324 0.300 0.131 0.032 0.003 \end{verbatim} \end{kframe} \end{knitrout} The Frobenius norm of all bootstrapped covariance matrices for all estimators across sample sizes is depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{volume_boot_lB} \hlkwb{<-} \hlkwd{do.call}\hlstd{(rbind,} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(lsimsB),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{unlist}\hlstd{(}\hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(lsimsB[[j]]),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{norm}\hlstd{(x,} \hlkwc{type}\hlstd{=}\hlstr{"F"}\hlstd{)))} \hlstd{\}))} \hlkwd{rownames}\hlstd{(volume_boot_lB)} \hlkwb{<-} \hlstd{n} \hlstd{volume_boot_lB} \end{alltt} \begin{verbatim} ## se_wt se_env se_env_fixedu se_MLE ## 300 2.7301055 2.8562090 1.9606413 2.6950735 ## 500 1.1832098 1.2586726 0.8433347 1.4049298 ## 750 0.4918665 0.5098347 0.4319702 0.9439191 ## 1000 0.6538492 0.6800432 0.5299688 0.7114760 \end{verbatim} \end{kframe} \end{knitrout} The estimated efficiency gains for envelope estimators ($\text{se}^*(\hat{\theta}) / \text{se}^*(\hat{\theta}_{\text{env}})$) across sample sizes are depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{SEs_boot_lB} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(lsimsB),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{do.call}\hlstd{(cbind,}
\hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(lsimsB[[j]]),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{sqrt}\hlstd{(}\hlkwd{diag}\hlstd{(x))))} \hlstd{\})} \hlstd{ratios_boot_lB} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(lsimsB),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlcom{# ratio of SE(MLE) to SE(wtEnv)} \hlstd{out} \hlkwb{<-} \hlkwd{cbind}\hlstd{(SEs_boot_lB[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_lB[[j]][,} \hlnum{1}\hlstd{],} \hlcom{# ratio of SE(MLE) to SE(Env)} \hlstd{SEs_boot_lB[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_lB[[j]][,} \hlnum{2}\hlstd{],} \hlcom{# ratio of SE(MLE) to SE(Env_hat(u))} \hlstd{SEs_boot_lB[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_lB[[j]][,} \hlnum{3}\hlstd{])} \hlkwd{colnames}\hlstd{(out)} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"se(MLE)/se(env_wt)"}\hlstd{,} \hlstr{"se(MLE)/se(env_varu)"}\hlstd{,} \hlstr{"se(MLE)/se(env_fixu)"}\hlstd{)} \hlkwd{round}\hlstd{(out,} \hlnum{3}\hlstd{)} \hlstd{\})} \hlcom{## n = 300} \hlstd{ratios_boot_lB[[}\hlnum{1}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 0.991 0.970 1.129 ## 1.014 0.992 1.356 ## 1.051 1.041 1.120 ## 0.995 0.986 0.992 ## 0.898 0.890 0.864 ## 0.958 0.942 0.856 ## 0.924 0.915 0.851 ## 0.947 0.937 0.854 \end{verbatim} \begin{alltt} \hlcom{## n = 500} \hlstd{ratios_boot_lB[[}\hlnum{2}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.045 1.025 1.135 ## 1.111 1.065 1.559 ## 0.967 0.960 0.927 ## 0.971 0.967 0.916 ## 1.023 1.020 1.034 ## 0.995 0.993 0.994 ## 0.994 0.992 0.991 ## 1.005 1.004 1.000 \end{verbatim} \begin{alltt} \hlcom{## n = 750} \hlstd{ratios_boot_lB[[}\hlnum{3}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.293 1.274 1.475 ## 1.557 1.507 2.798 ## 0.981 0.980 0.816 ## 0.993 0.936 1.060 ## 0.990 0.987 1.069 ## 1.054 
1.044 1.117 ## 1.063 1.060 1.086 ## 1.063 1.062 1.076 \end{verbatim} \begin{alltt} \hlcom{## n = 1000} \hlstd{ratios_boot_lB[[}\hlnum{4}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.044 1.025 1.140 ## 1.042 1.019 1.183 ## 1.028 1.022 1.064 ## 1.022 1.013 1.064 ## 0.990 0.987 0.977 ## 1.002 0.999 0.998 ## 1.009 1.019 1.032 ## 1.009 1.005 1.019 \end{verbatim} \end{kframe} \end{knitrout} \section*{Poisson regression simulations} \subsection*{Setting A:} We reproduce the Poisson regression simulation in the main text. The setup is essentially the same as in the logistic regression simulations. We first create the basis matrix $\Gamma$ for the true envelope space and the basis matrix for its orthogonal complement $\Gamma_0$. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{p} \hlkwb{<-} \hlnum{8}\hlstd{; u} \hlkwb{<-} \hlnum{2} \hlstd{v1} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlkwd{rep}\hlstd{(}\hlnum{1}\hlopt{/}\hlkwd{sqrt}\hlstd{(p),p),} \hlkwc{nrow} \hlstd{= p)} \hlstd{O} \hlkwb{<-}\hlkwd{qr.Q}\hlstd{(}\hlkwd{qr}\hlstd{(v1),} \hlkwc{complete} \hlstd{=} \hlnum{TRUE}\hlstd{)} \hlstd{Gamma} \hlkwb{<-} \hlstd{O[, (p}\hlopt{-}\hlstd{u}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{p]} \hlstd{Gamma0} \hlkwb{<-} \hlstd{O[,} \hlnum{1}\hlopt{:}\hlstd{(p}\hlopt{-}\hlstd{u)]} \end{alltt} \end{kframe} \end{knitrout} We next create the core of the material and immaterial variation, denoted as $\Omega$ and $\Omega_0$ respectively.
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{Omega} \hlkwb{<-} \hlkwd{diag}\hlstd{(}\hlkwd{c}\hlstd{(}\hlnum{1}\hlstd{,}\hlnum{10}\hlstd{))} \hlstd{Omega0} \hlkwb{<-} \hlkwd{diag}\hlstd{(}\hlkwd{exp}\hlstd{(}\hlopt{-}\hlnum{6}\hlopt{:-}\hlnum{1}\hlstd{))} \end{alltt} \end{kframe} \end{knitrout} We now build the variance matrix of the predictor variables and construct the true canonical parameter vector (regression coefficient vector) as an element contained in $\text{span}(\Gamma)$. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{SigmaX} \hlkwb{<-} \hlstd{Gamma} \hlopt{\%*\%} \hlstd{Omega} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Gamma)} \hlopt{+} \hlstd{Gamma0} \hlopt{\%*\%} \hlstd{Omega0} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Gamma0)} \hlstd{beta} \hlkwb{<-} \hlstd{Gamma} \hlopt{\%*\%} \hlstd{...} \hlstd{eig} \hlkwb{<-} \hlkwd{eigen}\hlstd{(SigmaX)} \hlstd{SigmaX.half} \hlkwb{<-} \hlstd{eig}\hlopt{$}\hlstd{vec} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(}\hlkwd{sqrt}\hlstd{(eig}\hlopt{$}\hlstd{val))} \hlopt{\%*\%} \hlkwd{t}\hlstd{(eig}\hlopt{$}\hlstd{vec)} \end{alltt} \end{kframe} \end{knitrout} The poisson\_sim function below generates a Poisson regression model that incorporates the envelope structure constructed above and stored in your global environment. The function then calls model\_boot (code in the Appendix) to perform the nonparametric bootstrap with respect to all considered estimators. This function also returns the estimated envelope dimension at every iteration of the nonparametric bootstrap.
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{poisson_sim} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{n}\hlstd{,} \hlkwc{p} \hlstd{= p)\{} \hlstd{X} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlkwd{rnorm}\hlstd{(n}\hlopt{*}\hlstd{p),} \hlkwc{nrow} \hlstd{= n)} \hlopt{\%*\%} \hlstd{SigmaX.half} \hlstd{gx} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{(X} \hlopt{\%*\%} \hlstd{beta)} \hlstd{Y} \hlkwb{<-} \hlkwd{rpois}\hlstd{(n,} \hlkwc{lambda} \hlstd{=} \hlkwd{exp}\hlstd{(gx))} \hlstd{data_sim} \hlkwb{<-} \hlkwd{as.data.frame}\hlstd{(}\hlkwd{cbind}\hlstd{(Y, X))} \hlstd{m1} \hlkwb{<-} \hlkwd{glm}\hlstd{(Y} \hlopt{~ -}\hlnum{1} \hlopt{+} \hlstd{.,} \hlkwc{family} \hlstd{=} \hlstr{"poisson"}\hlstd{,} \hlkwc{data} \hlstd{= data_sim)} \hlkwd{model_boot}\hlstd{(}\hlkwc{model} \hlstd{= m1,} \hlkwc{nboot} \hlstd{= nboot,} \hlkwc{cores} \hlstd{= numCores)} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} We perform the nonparametric bootstrap at four different sample sizes. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{set.seed}\hlstd{(}\hlnum{13}\hlstd{)} \hlkwd{RNGkind}\hlstd{(}\hlstr{"L'Ecuyer-CMRG"}\hlstd{)} \hlstd{n} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlnum{300}\hlstd{,} \hlnum{500}\hlstd{,} \hlnum{750}\hlstd{,} \hlnum{1000}\hlstd{)} \hlstd{nboot} \hlkwb{<-} \hlnum{5000} \hlkwd{system.time}\hlstd{(\{} \hlstd{psims} \hlkwb{<-} \hlkwd{lapply}\hlstd{(n,} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)} \hlkwd{poisson_sim}\hlstd{(}\hlkwc{n} \hlstd{= j,} \hlkwc{p} \hlstd{= p))} \hlstd{\})} \end{alltt} \begin{verbatim} ## user system elapsed ## 10135.183 81.872 691.811 \end{verbatim} \end{kframe} \end{knitrout} The distribution of the estimated dimension across bootstrap iterations and sample sizes is depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{u_boot_p} \hlkwb{<-}
\hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(psims),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{round}\hlstd{(}\hlkwd{table}\hlstd{(psims[[j]][,} \hlnum{4}\hlopt{*}\hlstd{p}\hlopt{+}\hlnum{1}\hlstd{])} \hlopt{/} \hlstd{nboot,} \hlnum{3}\hlstd{)} \hlstd{\})} \hlcom{## n = 300} \hlstd{u_boot_p[[}\hlnum{1}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 ## 0.446 0.425 0.117 0.012 \end{verbatim} \begin{alltt} \hlcom{## n = 500} \hlstd{u_boot_p[[}\hlnum{2}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 ## 0.911 0.086 0.003 0.000 \end{verbatim} \begin{alltt} \hlcom{## n = 750} \hlstd{u_boot_p[[}\hlnum{3}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 ## 0.522 0.436 0.040 0.002 \end{verbatim} \begin{alltt} \hlcom{## n = 1000} \hlstd{u_boot_p[[}\hlnum{4}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 ## 0.901 0.097 0.002 0.000 \end{verbatim} \end{kframe} \end{knitrout} The Frobenius norm of all bootstrapped covariance matrices for all estimators across sample sizes is depicted below: \begin{kframe} \begin{alltt} \hlstd{volume_boot_p} \hlkwb{<-} \hlkwd{do.call}\hlstd{(rbind,} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(psims),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{unlist}\hlstd{(}\hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(psims[[j]]),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{norm}\hlstd{(x,} \hlkwc{type}\hlstd{=}\hlstr{"F"}\hlstd{)))} \hlstd{\}))} \hlkwd{rownames}\hlstd{(volume_boot_p)} \hlkwb{<-} \hlstd{n} \hlkwd{xtable}\hlstd{(volume_boot_p,} \hlkwc{digits} \hlstd{=} \hlnum{3}\hlstd{)} \end{alltt} \end{kframe} \begin{table}[ht] \centering \begin{tabular}{rrrrr} \hline & se\_wt & se\_env & se\_env\_fixedu & se\_MLE \\ \hline 300 & 0.814 & 0.904 & 0.286 & 1.192 \\ 500 & 0.123 & 0.160 & 0.003 & 0.748 \\ 750 & 0.025 & 0.030 & 0.000 & 0.525 \\ 1000 & 0.025 & 0.031 & 0.000 & 0.522 \\ \hline \end{tabular} \end{table} The estimated efficiency gains for envelope estimators
($\text{se}^*(\hat{\theta}) / \text{se}^*(\hat{\theta}_{\text{env}})$) across sample sizes are depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{SEs_boot_p} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(psims),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{do.call}\hlstd{(cbind,} \hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(psims[[j]]),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{sqrt}\hlstd{(}\hlkwd{diag}\hlstd{(x))))} \hlstd{\})} \hlstd{ratios_boot_p} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(psims),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlcom{# ratio of SE(MLE) to SE(wtEnv)} \hlstd{out} \hlkwb{<-} \hlkwd{cbind}\hlstd{(SEs_boot_p[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_p[[j]][,} \hlnum{1}\hlstd{],} \hlcom{# ratio of SE(MLE) to SE(Env)} \hlstd{SEs_boot_p[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_p[[j]][,} \hlnum{2}\hlstd{],} \hlcom{# ratio of SE(MLE) to SE(Env_hat(u))} \hlstd{SEs_boot_p[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_p[[j]][,} \hlnum{3}\hlstd{])} \hlkwd{colnames}\hlstd{(out)} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"se(MLE)/se(env_wt)"}\hlstd{,} \hlstr{"se(MLE)/se(env_varu)"}\hlstd{,} \hlstr{"se(MLE)/se(env_fixu)"}\hlstd{)} \hlkwd{round}\hlstd{(out,} \hlnum{3}\hlstd{)} \hlstd{\})} \hlcom{## n = 300} \hlstd{ratios_boot_p[[}\hlnum{1}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.152 1.115 2.086 ## 0.893 0.837 1.470 ## 0.999 0.937 1.563 ## 2.480 2.381 4.364 ## 2.279 2.127 3.707 ## 2.344 2.157 3.452 ## 2.785 2.712 4.875 ## 2.783 2.698 5.668 \end{verbatim} \begin{alltt} \hlcom{## n = 500} \hlstd{ratios_boot_p[[}\hlnum{2}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 2.579 2.287 17.290 ## 3.670 3.179 39.262 ## 2.820 2.486 21.889 ## 2.679
2.347 18.994 ## 2.376 2.089 16.124 ## 2.403 2.104 15.983 ## 2.390 2.094 15.526 ## 2.353 2.073 12.627 \end{verbatim} \begin{alltt} \hlcom{## n = 750} \hlstd{ratios_boot_p[[}\hlnum{3}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 3.810 3.479 61.878 ## 5.573 5.277 341.419 ## 2.975 2.711 235.859 ## 4.420 3.991 47.244 ## 3.407 3.059 247.080 ## 4.870 4.487 180.729 ## 3.545 2.984 24.367 ## 4.794 4.439 23.493 \end{verbatim} \begin{alltt} \hlcom{## n = 1000} \hlstd{ratios_boot_p[[}\hlnum{4}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 4.092 3.623 93.113 ## 3.368 2.990 480.452 ## 4.228 3.764 398.744 ## 5.218 4.722 330.380 ## 3.714 3.378 301.965 ## 4.782 4.297 285.126 ## 4.918 4.432 240.507 ## 4.934 4.455 30.992 \end{verbatim} \end{kframe} \end{knitrout} \subsection*{Setting B:} We next create the core of the material and immaterial variation, denoted as $\Omega$ and $\Omega_0$ respectively, build the variance matrix of the predictor variables, and construct the true canonical parameter vector (regression coefficient vector) as an element contained in $\text{span}(\Gamma)$. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{Omega} \hlkwb{<-} \hlkwd{diag}\hlstd{(}\hlkwd{exp}\hlstd{(}\hlkwd{c}\hlstd{(}\hlopt{-}\hlnum{3}\hlstd{,}\hlopt{-}\hlnum{2}\hlstd{)))} \hlstd{Omega0} \hlkwb{<-} \hlkwd{diag}\hlstd{(}\hlkwd{exp}\hlstd{(}\hlopt{-}\hlnum{4}\hlopt{:}\hlnum{1}\hlstd{))} \hlstd{SigmaX} \hlkwb{<-} \hlstd{Gamma} \hlopt{\%*\%} \hlstd{Omega} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Gamma)} \hlopt{+} \hlstd{Gamma0} \hlopt{\%*\%} \hlstd{Omega0} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Gamma0)} \hlstd{beta} \hlkwb{<-} \hlstd{Gamma} \hlopt{\%*\%} \hlstd{...} \hlstd{eig} \hlkwb{<-} \hlkwd{eigen}\hlstd{(SigmaX)} \hlstd{SigmaX.half} \hlkwb{<-} \hlstd{eig}\hlopt{$}\hlstd{vec} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(}\hlkwd{sqrt}\hlstd{(eig}\hlopt{$}\hlstd{val))} \hlopt{\%*\%} \hlkwd{t}\hlstd{(eig}\hlopt{$}\hlstd{vec)} \end{alltt} \end{kframe} \end{knitrout} We perform the nonparametric bootstrap at four different sample sizes.
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{set.seed}\hlstd{(}\hlnum{13}\hlstd{)} \hlkwd{RNGkind}\hlstd{(}\hlstr{"L'Ecuyer-CMRG"}\hlstd{)} \hlstd{n} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlnum{300}\hlstd{,} \hlnum{500}\hlstd{,} \hlnum{750}\hlstd{,} \hlnum{1000}\hlstd{)} \hlstd{nboot} \hlkwb{<-} \hlnum{5000} \hlkwd{system.time}\hlstd{(\{} \hlstd{psimsB} \hlkwb{<-} \hlkwd{lapply}\hlstd{(n,} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)} \hlkwd{poisson_sim}\hlstd{(}\hlkwc{n} \hlstd{= j,} \hlkwc{p} \hlstd{= p))} \hlstd{\})} \end{alltt} \begin{verbatim} ## user system elapsed ## 12486.444 88.840 853.086 \end{verbatim} \end{kframe} \end{knitrout} The distribution of the estimated dimension across bootstrap iterations and sample sizes is depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{u_boot_pB} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(psimsB),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{round}\hlstd{(}\hlkwd{table}\hlstd{(psimsB[[j]][,} \hlnum{4}\hlopt{*}\hlstd{p}\hlopt{+}\hlnum{1}\hlstd{])} \hlopt{/} \hlstd{nboot,} \hlnum{3}\hlstd{)} \hlstd{\})} \hlcom{## n = 300} \hlstd{u_boot_pB[[}\hlnum{1}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 ## 0.521 0.387 0.086 0.005 \end{verbatim} \begin{alltt} \hlcom{## n = 500} \hlstd{u_boot_pB[[}\hlnum{2}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 ## 0.785 0.200 0.015 \end{verbatim} \begin{alltt} \hlcom{## n = 750} \hlstd{u_boot_pB[[}\hlnum{3}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 ## 0.081 0.785 0.129 0.004 \end{verbatim} \begin{alltt} \hlcom{## n = 1000} \hlstd{u_boot_pB[[}\hlnum{4}\hlstd{]]} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 ## 0.730 0.245 0.025 0.000 \end{verbatim} \end{kframe} \end{knitrout} The Frobenius norm of all bootstrapped covariance matrices for all estimators across sample sizes is depicted below:
\begin{kframe} \begin{alltt} \hlstd{volume_boot_pB} \hlkwb{<-} \hlkwd{do.call}\hlstd{(rbind,} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(psimsB),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{unlist}\hlstd{(}\hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(psimsB[[j]]),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{norm}\hlstd{(x,} \hlkwc{type}\hlstd{=}\hlstr{"F"}\hlstd{)))} \hlstd{\}))} \hlkwd{rownames}\hlstd{(volume_boot_pB)} \hlkwb{<-} \hlstd{n} \hlkwd{xtable}\hlstd{(volume_boot_pB,} \hlkwc{digits} \hlstd{=} \hlnum{3}\hlstd{)} \end{alltt} \end{kframe} \begin{table}[ht] \centering \begin{tabular}{rrrrr} \hline & se\_wt & se\_env & se\_env\_fixedu & se\_MLE \\ \hline 300 & 0.199 & 0.214 & 0.195 & 0.167 \\ 500 & 0.058 & 0.073 & 0.025 & 0.106 \\ 750 & 0.062 & 0.064 & 0.058 & 0.076 \\ 1000 & 0.037 & 0.045 & 0.009 & 0.073 \\ \hline \end{tabular} \end{table} The estimated efficiency gains for envelope estimators ($\text{se}^*(\hat{\theta}) / \text{se}^*(\hat{\theta}_{\text{env}})$) across sample sizes are depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{SEs_boot_pB} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(psimsB),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlkwd{do.call}\hlstd{(cbind,} \hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(psimsB[[j]]),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{sqrt}\hlstd{(}\hlkwd{diag}\hlstd{(x))))} \hlstd{\})} \hlstd{ratios_boot_pB} \hlkwb{<-} \hlkwd{lapply}\hlstd{(}\hlnum{1}\hlopt{:}\hlkwd{length}\hlstd{(psimsB),} \hlkwa{function}\hlstd{(}\hlkwc{j}\hlstd{)\{} \hlcom{# ratio of SE(MLE) to SE(wtEnv)} \hlstd{out} \hlkwb{<-} \hlkwd{cbind}\hlstd{(SEs_boot_pB[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_pB[[j]][,} \hlnum{1}\hlstd{],} \hlcom{# ratio of SE(MLE) to SE(Env) } \hlstd{SEs_boot_pB[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_pB[[j]][,} \hlnum{2}\hlstd{],}
\hlcom{# ratio of SE(MLE) to SE(Env_hat(u))} \hlstd{SEs_boot_pB[[j]][,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_boot_pB[[j]][,} \hlnum{3}\hlstd{])} \hlkwd{colnames}\hlstd{(out)} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"se(MLE)/se(env_wt)"}\hlstd{,} \hlstr{"se(MLE)/se(env_varu)"}\hlstd{,} \hlstr{"se(MLE)/se(env_fixu)"}\hlstd{)} \hlkwd{round}\hlstd{(out,} \hlnum{3}\hlstd{)} \hlstd{\})} \hlcom{## n = 300} \hlstd{ratios_boot_pB[[}\hlnum{1}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.153 1.133 1.407 ## 0.704 0.669 0.817 ## 0.950 0.911 0.840 ## 1.492 1.470 1.804 ## 1.556 1.479 1.939 ## 1.603 1.512 1.899 ## 1.120 1.088 1.361 ## 0.810 0.769 0.650 \end{verbatim} \begin{alltt} \hlcom{## n = 500} \hlstd{ratios_boot_pB[[}\hlnum{2}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.346 1.211 2.377 ## 1.780 1.600 4.014 ## 1.115 1.102 1.165 ## 2.162 2.020 3.292 ## 2.156 1.982 4.672 ## 2.387 2.189 5.502 ## 1.007 0.872 3.387 ## 1.234 1.202 1.373 \end{verbatim} \begin{alltt} \hlcom{## n = 750} \hlstd{ratios_boot_pB[[}\hlnum{3}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.124 1.104 1.168 ## 1.089 1.068 1.064 ## 1.204 1.188 1.196 ## 1.941 1.851 1.969 ## 1.973 1.885 1.977 ## 2.274 2.176 2.323 ## 0.815 0.798 0.873 ## 1.226 1.224 1.250 \end{verbatim} \begin{alltt} \hlcom{## n = 1000} \hlstd{ratios_boot_pB[[}\hlnum{4}\hlstd{]]} \end{alltt} \begin{verbatim} ## se(MLE)/se(env_wt) se(MLE)/se(env_varu) se(MLE)/se(env_fixu) ## 1.584 1.486 2.852 ## 1.611 1.463 7.292 ## 1.651 1.611 1.793 ## 2.615 2.446 4.653 ## 2.605 2.377 7.169 ## 2.723 2.495 7.324 ## 0.949 0.849 8.528 ## 1.454 1.427 1.615 \end{verbatim} \end{kframe} \end{knitrout} \subsection*{Build Table in main text} We now build Table 1 in the main text. 
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{tab_sim} \hlkwb{<-} \hlkwd{rbind}\hlstd{(}\hlkwd{cbind}\hlstd{(n,} \hlkwd{do.call}\hlstd{(rbind,} \hlkwd{lapply}\hlstd{(ratios_boot_l,} \hlkwa{function}\hlstd{(}\hlkwc{xx}\hlstd{) xx[}\hlnum{1}\hlstd{, ])),} \hlkwd{do.call}\hlstd{(rbind,} \hlkwd{lapply}\hlstd{(ratios_boot_lB,} \hlkwa{function}\hlstd{(}\hlkwc{xx}\hlstd{) xx[}\hlnum{1}\hlstd{, ]))),} \hlkwd{cbind}\hlstd{(n,} \hlkwd{do.call}\hlstd{(rbind,} \hlkwd{lapply}\hlstd{(ratios_boot_p,} \hlkwa{function}\hlstd{(}\hlkwc{xx}\hlstd{) xx[}\hlnum{1}\hlstd{, ])),} \hlkwd{do.call}\hlstd{(rbind,} \hlkwd{lapply}\hlstd{(ratios_boot_pB,} \hlkwa{function}\hlstd{(}\hlkwc{xx}\hlstd{) xx[}\hlnum{1}\hlstd{, ]))))} \hlkwd{xtable}\hlstd{(tab_sim,} \hlkwc{digits} \hlstd{=} \hlnum{2}\hlstd{)} \end{alltt} \begin{verbatim} ## ## ## \begin{table}[ht] ## \centering ## \begin{tabular}{rrrrrrrr} ## \hline ## & n & se(MLE)/se(env\_wt) & se(MLE)/se(env\_varu) & se(MLE)/se(env\_fixu) & se(MLE)/se(env\_wt) & se(MLE)/se(env\_varu) & se(MLE)/se(env\_fixu) \\ ## \hline ## 1 & 300.00 & 1.22 & 1.16 & 3.26 & 0.99 & 0.97 & 1.13 \\ ## 2 & 500.00 & 1.88 & 1.76 & 2.88 & 1.04 & 1.02 & 1.14 \\ ## 3 & 750.00 & 1.03 & 0.95 & 3.64 & 1.29 & 1.27 & 1.48 \\ ## 4 & 1000.00 & 1.00 & 0.88 & 3.50 & 1.04 & 1.02 & 1.14 \\ ## 5 & 300.00 & 1.15 & 1.11 & 2.09 & 1.15 & 1.13 & 1.41 \\ ## 6 & 500.00 & 2.58 & 2.29 & 17.29 & 1.35 & 1.21 & 2.38 \\ ## 7 & 750.00 & 3.81 & 3.48 & 61.88 & 1.12 & 1.10 & 1.17 \\ ## 8 & 1000.00 & 4.09 & 3.62 & 93.11 & 1.58 & 1.49 & 2.85 \\ ## \hline ## \end{tabular} ## \end{table} \end{verbatim} \end{kframe} \end{knitrout} \section*{Diabetes example} Model free envelope estimation techniques and maximum likelihood estimation are then used to estimate the canonical parameter vector corresponding to a logistic regression model (the regression coefficient vector with inverse logit link function) for diabetes diagnoses. 
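For concreteness, the inverse logit link mentioned above maps a canonical parameter value $\theta = x^\top \beta$ to a success probability. A minimal, language-agnostic sketch (written in Python here; the coefficient and predictor values are hypothetical, not from the diabetes fit):

```python
import math

def inv_logit(theta):
    """Inverse logit (logistic) link: maps a canonical parameter to a probability."""
    return 1.0 / (1.0 + math.exp(-theta))

# Hypothetical coefficient vector and predictor row (first entry is the intercept).
beta = [0.5, -1.2]
x = [1.0, 0.3]

# Canonical parameter theta = x . beta, then the mean response via the link.
theta = sum(b * xi for b, xi in zip(beta, x))
prob = inv_logit(theta)
```

The fitted regression below applies exactly this link, with $\beta$ estimated by maximum likelihood.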
We load in the diabetes data from the faraway package. Log transformations are used for several predictor variables as a means to transform each variable to approximate normality while maintaining a scale that is interpretable. The response variable is a diagnosis of diabetes based on an individual's hemoglobin percentage (HbA1c). A positive diagnosis is when HbA1c $> 6.5\%$. Only complete records are kept for this analysis. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{data}\hlstd{(diabetes)} \hlcom{## add diagnosis and roughly transform predictors to univariate normality} \hlstd{dat} \hlkwb{<-} \hlstd{diabetes} \hlopt{\%>\%} \hlkwd{mutate}\hlstd{(}\hlkwc{l.stab.glu} \hlstd{=} \hlkwd{log}\hlstd{(stab.glu),} \hlkwc{l.weight} \hlstd{=} \hlkwd{log}\hlstd{(weight),} \hlkwc{l.age} \hlstd{=} \hlkwd{log}\hlstd{(age),} \hlkwc{l.hip} \hlstd{=} \hlkwd{log}\hlstd{(hip),} \hlkwc{l.waist} \hlstd{=} \hlkwd{log}\hlstd{(waist),} \hlkwc{l.height} \hlstd{=} \hlkwd{log}\hlstd{(height))} \hlopt{\%>\%} \hlstd{dplyr}\hlopt{::}\hlkwd{select}\hlstd{(diagnose, l.age, l.weight, l.height, l.waist, l.hip,} \hlstd{gender, l.stab.glu)} \hlcom{## exclude missing observations} \hlstd{dat} \hlkwb{<-} \hlkwd{na.omit}\hlstd{(dat)} \hlcom{## turn gender to factor variable} \hlstd{dat}\hlopt{$}\hlstd{gender} \hlkwb{<-} \hlkwd{as.factor}\hlstd{(dat}\hlopt{$}\hlstd{gender)} \end{alltt} \end{kframe} \end{knitrout} Here are density plots of the log-transformed predictor variables.
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{dat} \hlopt{\%>\%} \hlkwd{gather}\hlstd{()} \hlopt{\%>\%} \hlkwd{ggplot}\hlstd{(}\hlkwd{aes}\hlstd{(}\hlkwc{x} \hlstd{= value))} \hlopt{+} \hlkwd{facet_wrap}\hlstd{(}\hlopt{~} \hlstd{key,} \hlkwc{scales} \hlstd{=} \hlstr{"free"}\hlstd{)} \hlopt{+} \hlkwd{theme_minimal}\hlstd{()} \hlopt{+} \hlkwd{geom_density}\hlstd{()} \end{alltt} \end{kframe} \includegraphics[width=\maxwidth]{trimdata-1} \end{knitrout} We now fit the model with diagnosis as the response variable and log-transformed versions of age, weight, height, waist, hip, and stabilized glucose, along with gender, as predictors. \begin{kframe} \begin{alltt} \hlstd{m1} \hlkwb{<-} \hlkwd{glm}\hlstd{(diagnose} \hlopt{~} \hlstd{.,} \hlkwc{family} \hlstd{=} \hlstr{"binomial"}\hlstd{,} \hlkwc{data} \hlstd{= dat,} \hlkwc{x} \hlstd{=} \hlnum{TRUE}\hlstd{,} \hlkwc{y} \hlstd{=} \hlnum{TRUE}\hlstd{)} \hlstd{betahat} \hlkwb{<-} \hlstd{m1}\hlopt{$}\hlstd{coefficients} \hlkwd{xtable}\hlstd{(}\hlkwd{summary}\hlstd{(m1))} \end{alltt} \end{kframe} \begin{table}[ht] \centering \begin{tabular}{rrrrr} \hline & Estimate & Std. Error & z value & Pr($>$$|$z$|$) \\ \hline (Intercept) & -21.3575 & 23.0558 & -0.93 & 0.3543 \\ l.age & 2.0285 & 0.7583 & 2.67 & 0.0075 \\ l.weight & 1.2546 & 2.4216 & 0.52 & 0.6044 \\ l.height & -4.3927 & 5.2798 & -0.83 & 0.4054 \\ l.waist & 2.6525 & 3.0997 & 0.86 & 0.3922 \\ l.hip & -2.6424 & 3.6213 & -0.73 & 0.4656 \\ genderfemale & 0.1744 & 0.6025 & 0.29 & 0.7722 \\ l.stab.glu & 5.0333 & 0.6602 & 7.62 & 0.0000 \\ \hline \end{tabular} \end{table} We now perform the nonparametric bootstrap procedure for all envelope estimators mentioned in the main text and the MLE. This nonparametric bootstrap uses a bootstrap sample size of $5000$.
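The resampling scheme used here is a standard nonparametric (pairs) bootstrap: the $(y, x)$ rows are resampled with replacement and every estimator is recomputed on each resample. A minimal sketch of that loop (Python, with a hypothetical mean estimator standing in for the envelope and MLE fits):

```python
import random

def pairs_bootstrap(rows, estimator, nboot=200, seed=13):
    """Nonparametric (pairs) bootstrap: resample (y, x) rows with
    replacement and re-apply the estimator to each resample."""
    rng = random.Random(seed)
    n = len(rows)
    stats = []
    for _ in range(nboot):
        resample = [rows[rng.randrange(n)] for _ in range(n)]
        stats.append(estimator(resample))
    return stats

# Hypothetical data and estimator: binary responses and their sample mean.
rows = [(y, None) for y in [0, 1, 1, 0, 1, 1, 0, 1]]
boot_means = pairs_bootstrap(rows, lambda rs: sum(r[0] for r in rs) / len(rs))
```

The R routine below follows the same pattern, additionally recording the selected envelope dimension at each iteration.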
\begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{set.seed}\hlstd{(}\hlnum{13}\hlstd{)} \hlstd{numCores} \hlkwb{<-} \hlkwd{detectCores}\hlstd{()} \hlstd{nboot} \hlkwb{<-} \hlnum{5000} \hlkwd{RNGkind}\hlstd{(}\hlstr{"L'Ecuyer-CMRG"}\hlstd{)} \hlstd{boot_sample_diabetes} \hlkwb{<-} \hlkwd{model_boot}\hlstd{(}\hlkwc{model} \hlstd{= m1,} \hlkwc{nboot} \hlstd{= nboot,} \hlkwc{cores} \hlstd{= numCores,} \hlkwc{intercept} \hlstd{=} \hlnum{TRUE}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} The distribution of the estimated dimension across bootstrap iterations is depicted below. A non-trivial amount of model selection volatility exists across iterations of our nonparametric bootstrap. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{round}\hlstd{(}\hlkwd{table}\hlstd{(boot_sample_diabetes[,} \hlkwd{ncol}\hlstd{(boot_sample_diabetes)])} \hlopt{/} \hlstd{nboot,} \hlnum{3}\hlstd{)} \end{alltt} \begin{verbatim} ## ## 1 2 3 4 5 ## 0.568 0.358 0.067 0.007 0.000 \end{verbatim} \end{kframe} \end{knitrout} The Frobenius norms of the bootstrapped covariance matrices for all estimators are depicted below: \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{unlist}\hlstd{(}\hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(boot_sample_diabetes),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{norm}\hlstd{(x,} \hlkwc{type}\hlstd{=}\hlstr{"F"}\hlstd{)))} \end{alltt} \begin{verbatim} ## se_wt se_env se_env_fixedu se_MLE ## 26.762481 30.571294 1.040725 37.316460 \end{verbatim} \end{kframe} \end{knitrout} Ratios of bootstrap standard errors are reported below. These ratios compare the MLE to the envelope estimators under study. The bootstrap standard error for the MLE is displayed in the numerator, so that a value greater than 1 indicates variance reduction via envelope methodology.
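The standard errors entering these ratios are computed, as in the `normvar` function of the appendix, from the matrix of bootstrap deviations: the variance estimate is the diagonal of $D^\top D / n_{\text{boot}}$, where each row of $D$ is one iteration's deviation. A small Python sketch with hypothetical one-dimensional deviations:

```python
import math

def bootstrap_se(devs):
    """Bootstrap standard errors from a matrix of bootstrap deviations
    (rows = bootstrap iterations, columns = coordinates): the square root
    of the diagonal of devs' devs / nboot."""
    nboot = len(devs)
    p = len(devs[0])
    return [math.sqrt(sum(d[j] ** 2 for d in devs) / nboot) for j in range(p)]

# Hypothetical deviations for two estimators; a ratio se_mle / se_env
# greater than 1 indicates variance reduction for the envelope estimator.
dev_mle = [[0.4], [-0.5], [0.3], [-0.2]]
dev_env = [[0.2], [-0.1], [0.1], [-0.2]]
ratio = bootstrap_se(dev_mle)[0] / bootstrap_se(dev_env)[0]
```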
\begin{kframe} \begin{alltt} \hlstd{SEs_diabetes} \hlkwb{<-} \hlkwd{do.call}\hlstd{(cbind,} \hlkwd{lapply}\hlstd{(}\hlkwd{normvar}\hlstd{(boot_sample_diabetes),} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{sqrt}\hlstd{(}\hlkwd{diag}\hlstd{(x))))} \hlstd{ratios_diabetes} \hlkwb{<-} \hlkwd{cbind}\hlstd{(SEs_diabetes[,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_diabetes[,} \hlnum{1}\hlstd{],} \hlcom{# ratio of SE(MLE) to SE(wtEnv)} \hlcom{# ratio of SE(MLE) to SE(Env)} \hlstd{SEs_diabetes[,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_diabetes[,} \hlnum{2}\hlstd{],} \hlcom{# ratio of SE(MLE) to SE(Env_hat(u))} \hlstd{SEs_diabetes[,} \hlnum{4}\hlstd{]} \hlopt{/} \hlstd{SEs_diabetes[,} \hlnum{3}\hlstd{])} \hlkwd{colnames}\hlstd{(ratios_diabetes)} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"se(MLE)/se(env_wt)"}\hlstd{,} \hlstr{"se(MLE)/se(env_varu)"}\hlstd{,} \hlstr{"se(MLE)/se(env_fixu)"}\hlstd{)} \hlkwd{rownames}\hlstd{(ratios_diabetes)} \hlkwb{<-} \hlkwd{names}\hlstd{(m1}\hlopt{$}\hlstd{coefficients)[}\hlopt{-}\hlnum{1}\hlstd{]} \hlkwd{xtable}\hlstd{(ratios_diabetes,} \hlkwc{digits} \hlstd{=} \hlnum{3}\hlstd{)} \end{alltt} \end{kframe} \begin{table}[ht] \centering \begin{tabular}{rrrr} \hline & se(MLE)/se(env\_wt) & se(MLE)/se(env\_varu) & se(MLE)/se(env\_fixu) \\ \hline l.age & 1.241 & 1.208 & 1.191 \\ l.weight & 1.581 & 1.475 & 7.070 \\ l.height & 1.156 & 1.073 & 54.761 \\ l.waist & 1.541 & 1.448 & 11.599 \\ l.hip & 1.296 & 1.220 & 17.800 \\ genderfemale & 1.192 & 1.170 & 1.197 \\ l.stab.glu & 1.147 & 1.146 & 1.287 \\ \hline \end{tabular} \end{table} The estimated envelope dimension and weights vector are displayed in the code below. We can see that the estimated envelope dimension is 1 in the original sample, and the vector of weights is close to a point mass at 1. The weighted envelope estimator and the envelope estimator at the estimated dimension are very similar. 
That being said, the bootstrap distribution for the estimated envelope dimension $u$ is far from a point mass at 1, and the bootstrap standard errors for the weighted envelope estimator are lower than those obtained by using the variable $u$ approach, which selects $u$ anew at every bootstrap iteration. \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{Y} \hlkwb{<-} \hlstd{dat}\hlopt{$}\hlstd{diagnose} \hlstd{X} \hlkwb{<-} \hlkwd{as.matrix}\hlstd{(m1}\hlopt{$}\hlstd{x)[,}\hlopt{-}\hlnum{1}\hlstd{]} \hlstd{n} \hlkwb{<-} \hlkwd{nrow}\hlstd{(X)} \hlstd{a} \hlkwb{<-} \hlstd{betahat[}\hlnum{1}\hlstd{]} \hlstd{b} \hlkwb{<-} \hlstd{betahat[}\hlopt{-}\hlnum{1}\hlstd{]} \hlstd{model_cov} \hlkwb{<-} \hlkwd{Logistic_cov}\hlstd{(}\hlkwc{Y} \hlstd{= Y,} \hlkwc{X} \hlstd{= X,} \hlkwc{a} \hlstd{= a,} \hlkwc{b} \hlstd{= b)} \hlstd{M} \hlkwb{<-} \hlstd{model_cov}\hlopt{$}\hlstd{M; U} \hlkwb{<-} \hlstd{model_cov}\hlopt{$}\hlstd{U} \hlstd{bic_val} \hlkwb{<-} \hlkwd{bic_compute}\hlstd{(}\hlkwc{M} \hlstd{= M,} \hlkwc{U} \hlstd{= U,} \hlkwc{n} \hlstd{= n)} \hlcom{## estimated dimension in the original sample} \hlstd{u} \hlkwb{<-} \hlkwd{which.min}\hlstd{(bic_val)} \hlstd{u} \end{alltt} \begin{verbatim} ## [1] 1 \end{verbatim} \begin{alltt} \hlcom{## estimated weights for the weighted technique in } \hlcom{## the original sample} \hlstd{min_bic_val} \hlkwb{<-} \hlkwd{min}\hlstd{(bic_val)} \hlstd{wt_bic} \hlkwb{<-} \hlkwd{exp}\hlstd{(min_bic_val} \hlopt{-} \hlstd{bic_val)} \hlopt{/} \hlkwd{sum}\hlstd{(}\hlkwd{exp}\hlstd{(min_bic_val} \hlopt{-} \hlstd{bic_val))} \hlkwd{round}\hlstd{(wt_bic,} \hlnum{4}\hlstd{)} \end{alltt} \begin{verbatim} ## [1] 0.9921 0.0078 0.0001 0.0000 0.0000 0.0000 0.0000 \end{verbatim} \end{kframe} \end{knitrout} We now replicate Table 2 in the main text.
This table displays the performance of envelope estimates of the regression coefficients (canonical parameter vector) for the logistic regression of diabetes diagnosis on seven predictors. The first, third, and sixth columns display the weighted envelope estimator, the envelope estimator with $\hat{u} = 1$, and the MLE, respectively. The second column displays the bootstrap standard error of the weighted envelope estimator. The fourth and fifth columns display the bootstrap standard error for the envelope estimator under the variable $u$ and fixed $u$ regimes, respectively. The seventh column displays the bootstrap standard error of the MLE. The last three columns display the ratios of the bootstrap standard error of the MLE to those of each envelope estimator. \begin{kframe} \begin{alltt} \hlstd{Ghat} \hlkwb{<-} \hlkwd{manifold1D}\hlstd{(}\hlkwc{M} \hlstd{= M,} \hlkwc{U} \hlstd{= U,} \hlkwc{u} \hlstd{= u)} \hlstd{Env_diabetes} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{((Ghat} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Ghat))} \hlopt{\%*\%} \hlstd{b)} \hlstd{Env_wt_diabetes} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{(}\hlkwd{wtenv}\hlstd{(M, U,} \hlkwc{wt} \hlstd{= wt_bic, b))} \hlstd{inference_diabetes} \hlkwb{<-} \hlkwd{round}\hlstd{(}\hlkwd{cbind}\hlstd{(Env_wt_diabetes,} \hlstd{SEs_diabetes[,} \hlnum{1}\hlstd{],Env_diabetes, SEs_diabetes[,} \hlnum{2}\hlstd{],} \hlstd{SEs_diabetes[,} \hlnum{3}\hlstd{], b, SEs_diabetes[,} \hlnum{4}\hlstd{]),} \hlnum{4}\hlstd{)} \hlkwd{rownames}\hlstd{(inference_diabetes)} \hlkwb{<-} \hlkwd{names}\hlstd{(betahat)[}\hlopt{-}\hlnum{1}\hlstd{]} \hlstd{tab} \hlkwb{<-} \hlkwd{cbind}\hlstd{(inference_diabetes, ratios_diabetes)} \hlkwd{colnames}\hlstd{(tab)} \hlkwb{<-} \hlkwa{NULL} \hlkwd{xtable}\hlstd{(tab,} \hlkwc{digits} \hlstd{=} \hlnum{3}\hlstd{)} \end{alltt} \end{kframe} \begin{table}[ht] \centering \begin{tabular}{rrrrrrrrrrr} \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline l.age & 1.775 & 0.660 & 1.775 & 0.679 & 0.688 & 2.029 & 0.820 & 1.241 & 1.208 & 1.191 \\ l.weight & 0.702 & 1.685 & 0.702 & 1.806
& 0.377 & 1.255 & 2.665 & 1.581 & 1.475 & 7.070 \\ l.height & 0.033 & 4.388 & 0.033 & 4.728 & 0.093 & -4.393 & 5.074 & 1.156 & 1.073 & 54.761 \\ l.waist & 0.681 & 1.932 & 0.681 & 2.056 & 0.257 & 2.652 & 2.978 & 1.541 & 1.448 & 11.599 \\ l.hip & 0.491 & 3.263 & 0.492 & 3.466 & 0.238 & -2.642 & 4.229 & 1.296 & 1.220 & 17.800 \\ genderfemale & 0.401 & 0.624 & 0.402 & 0.636 & 0.621 & 0.174 & 0.744 & 1.192 & 1.170 & 1.197 \\ l.stab.glu & 5.112 & 0.932 & 5.112 & 0.933 & 0.831 & 5.033 & 1.069 & 1.147 & 1.146 & 1.287 \\ \hline \end{tabular} \end{table} \section*{Appendix: R code} \begin{knitrout} \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe} \begin{alltt} \hlcom{##################################################} \hlcom{# 1D optimization solve for gamma #} \hlcom{##################################################} \hlcom{## stored in global environment} \hlstd{ballGBB1D} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{M}\hlstd{,} \hlkwc{U}\hlstd{,} \hlkwc{opts}\hlstd{=}\hlkwa{NULL}\hlstd{) \{} \hlstd{W0} \hlkwb{<-} \hlkwd{get_ini1D}\hlstd{(M, U)} \hlkwa{if} \hlstd{(}\hlkwd{is.null}\hlstd{(opts}\hlopt{$}\hlstd{xtol))} \hlstd{opts}\hlopt{$}\hlstd{xtol} \hlkwb{=} \hlnum{1e-8} \hlkwa{else if} \hlstd{(opts}\hlopt{$}\hlstd{xtol} \hlopt{<} \hlnum{0} \hlopt{||} \hlstd{opts}\hlopt{$}\hlstd{xtol} \hlopt{>} \hlnum{1}\hlstd{)} \hlstd{opts}\hlopt{$}\hlstd{xtol} \hlkwb{=} \hlnum{1e-8} \hlkwa{if} \hlstd{(}\hlkwd{is.null}\hlstd{(opts}\hlopt{$}\hlstd{gtol))} \hlstd{opts}\hlopt{$}\hlstd{gtol} \hlkwb{=} \hlnum{1e-8} \hlkwa{else if} \hlstd{(opts}\hlopt{$}\hlstd{gtol} \hlopt{<} \hlnum{0} \hlopt{||} \hlstd{opts}\hlopt{$}\hlstd{gtol} \hlopt{>} \hlnum{1}\hlstd{)} \hlstd{opts}\hlopt{$}\hlstd{gtol} \hlkwb{=} \hlnum{1e-8} \hlkwa{if} \hlstd{(}\hlkwd{is.null}\hlstd{(opts}\hlopt{$}\hlstd{ftol))} \hlstd{opts}\hlopt{$}\hlstd{ftol} \hlkwb{=} \hlnum{1e-12} \hlkwa{else if} \hlstd{(opts}\hlopt{$}\hlstd{ftol} \hlopt{<} \hlnum{0} \hlopt{||} \hlstd{opts}\hlopt{$}\hlstd{ftol} 
\hlopt{>} \hlnum{1}\hlstd{)} \hlstd{opts}\hlopt{$}\hlstd{ftol} \hlkwb{=} \hlnum{1e-12} \hlkwa{if} \hlstd{(}\hlkwd{is.null}\hlstd{(opts}\hlopt{$}\hlstd{mxitr))} \hlstd{opts}\hlopt{$}\hlstd{mxitr} \hlkwb{=} \hlnum{800} \hlstd{X} \hlkwb{<-} \hlkwd{OptManiMulitBallGBB}\hlstd{(W0, opts, fun1D, M, U)}\hlopt{$}\hlstd{X} \hlkwd{return}\hlstd{(X)} \hlstd{\}} \hlcom{## compute M and U for the logistic} \hlcom{## regression model with normal predictors} \hlstd{Logistic_cov} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{Y}\hlstd{,}\hlkwc{X}\hlstd{,}\hlkwc{a}\hlstd{,}\hlkwc{b}\hlstd{)\{} \hlstd{n} \hlkwb{<-} \hlkwd{nrow}\hlstd{(X); p} \hlkwb{<-} \hlkwd{ncol}\hlstd{(X)} \hlstd{theta} \hlkwb{<-} \hlstd{a} \hlopt{+} \hlstd{X} \hlopt{\%*\%} \hlstd{b} \hlstd{wts} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{(}\hlkwd{exp}\hlstd{(theta)}\hlopt{/}\hlstd{((}\hlnum{1} \hlopt{+} \hlkwd{exp}\hlstd{(theta))}\hlopt{^}\hlnum{2}\hlstd{))} \hlstd{wts} \hlkwb{<-} \hlstd{wts}\hlopt{/}\hlkwd{mean}\hlstd{(wts)} \hlstd{Exw} \hlkwb{=} \hlkwd{t}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{X} \hlopt{/} \hlstd{n} \hlstd{Sxw} \hlkwb{<-} \hlkwd{t}\hlstd{(X}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Exw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{(X}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Exw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{/} \hlstd{n} \hlstd{Ys} \hlkwb{<-} \hlstd{theta} \hlopt{+} \hlstd{(Y}\hlopt{-}\hlkwd{exp}\hlstd{(theta)}\hlopt{/}\hlstd{(}\hlnum{1}\hlopt{+}\hlkwd{exp}\hlstd{(theta)))}\hlopt{/}\hlstd{wts} \hlstd{Eyw} \hlkwb{<-} \hlkwd{t}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{Ys} \hlopt{/} \hlstd{n} \hlstd{Sxyw} \hlkwb{<-} \hlkwd{t}\hlstd{(X}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Exw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{(Ys}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Eyw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{/} \hlstd{n} \hlstd{Syw} \hlkwb{=} \hlkwd{t}\hlstd{(Ys}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Eyw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{(Ys}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Eyw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{/} \hlstd{n} \hlstd{M} \hlkwb{<-} \hlstd{Sxw} \hlstd{U} \hlkwb{<-} \hlstd{Sxyw} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Sxyw)} \hlopt{/} \hlkwd{c}\hlstd{(Syw)} \hlstd{out} \hlkwb{=}
\hlkwd{list}\hlstd{(}\hlkwc{M} \hlstd{= M,} \hlkwc{U} \hlstd{= U)} \hlkwd{return}\hlstd{(out)} \hlstd{\}} \hlcom{## compute M and U for the poisson} \hlcom{## regression model with normal predictors} \hlcom{## (Needs editing)} \hlstd{Poisson_cov} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{Y}\hlstd{,}\hlkwc{X}\hlstd{,}\hlkwc{a}\hlstd{,}\hlkwc{b}\hlstd{)\{} \hlstd{n} \hlkwb{<-} \hlkwd{nrow}\hlstd{(X); p} \hlkwb{<-} \hlkwd{ncol}\hlstd{(X)} \hlstd{theta} \hlkwb{<-} \hlstd{a} \hlopt{+} \hlstd{X} \hlopt{\%*\%} \hlstd{b} \hlstd{wts} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{(}\hlkwd{exp}\hlstd{(theta))} \hlstd{wts} \hlkwb{<-} \hlstd{wts}\hlopt{/}\hlkwd{mean}\hlstd{(wts)} \hlstd{Exw} \hlkwb{=} \hlkwd{t}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{X} \hlopt{/} \hlstd{n} \hlstd{Sxw} \hlkwb{<-} \hlkwd{t}\hlstd{(X}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Exw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{(X}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Exw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{/} \hlstd{n} \hlstd{Ys} \hlkwb{<-} \hlstd{theta} \hlopt{+} \hlstd{(Y}\hlopt{-}\hlkwd{exp}\hlstd{(theta))}\hlopt{/}\hlstd{wts} \hlstd{Eyw} \hlkwb{<-} \hlkwd{t}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{Ys} \hlopt{/} \hlstd{n} \hlstd{Sxyw} \hlkwb{<-} \hlkwd{t}\hlstd{(X}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Exw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{(Ys}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Eyw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{/} \hlstd{n} \hlstd{Syw} \hlkwb{=} \hlkwd{t}\hlstd{(Ys}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Eyw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{\%*\%} \hlkwd{diag}\hlstd{(wts)} \hlopt{\%*\%} \hlstd{(Ys}\hlopt{-}\hlkwd{do.call}\hlstd{(rbind,} \hlkwd{replicate}\hlstd{(n, Eyw,} \hlkwc{simplify}\hlstd{=}\hlnum{FALSE}\hlstd{)))} \hlopt{/} \hlstd{n} \hlstd{M} \hlkwb{<-} \hlstd{Sxw} \hlstd{U} \hlkwb{<-} \hlstd{Sxyw} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Sxyw)} \hlopt{/} \hlkwd{c}\hlstd{(Syw)} \hlstd{out} \hlkwb{=} \hlkwd{list}\hlstd{(}\hlkwc{M} \hlstd{= M,} \hlkwc{U} \hlstd{= U)} \hlkwd{return}\hlstd{(out)} \hlstd{\}} \hlcom{## compute BIC scores } \hlcom{# functionality is from the TRES package, but is edited to include } \hlcom{# fitting when u = p} \hlstd{bic_compute} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{M}\hlstd{,} \hlkwc{U}\hlstd{,} \hlkwc{n}\hlstd{,} \hlkwc{opts} \hlstd{=}
\hlkwa{NULL}\hlstd{,} \hlkwc{multiD} \hlstd{=} \hlnum{1}\hlstd{)\{} \hlstd{p} \hlkwb{<-} \hlkwd{dim}\hlstd{(M)[}\hlnum{2}\hlstd{]} \hlstd{Mnew} \hlkwb{<-} \hlstd{M} \hlstd{Unew} \hlkwb{<-} \hlstd{U} \hlstd{G} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{0}\hlstd{, p, p)} \hlstd{G0} \hlkwb{<-} \hlkwd{diag}\hlstd{(p)} \hlstd{phi} \hlkwb{<-} \hlkwd{rep}\hlstd{(}\hlnum{0}\hlstd{, p)} \hlkwa{for} \hlstd{(k} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{p) \{} \hlkwa{if} \hlstd{(k} \hlopt{==} \hlstd{p)\{} \hlstd{gk} \hlkwb{<-} \hlkwd{ballGBB1D}\hlstd{(Mnew, Unew, opts)} \hlstd{phi[k]} \hlkwb{<-} \hlstd{n} \hlopt{*} \hlstd{(}\hlkwd{log}\hlstd{(}\hlkwd{t}\hlstd{(gk)} \hlopt{\%*\%} \hlstd{Mnew} \hlopt{\%*\%} \hlstd{gk)} \hlopt{+} \hlkwd{log}\hlstd{(}\hlkwd{t}\hlstd{(gk)} \hlopt{\%*\%} \hlkwd{solve}\hlstd{(Mnew} \hlopt{+} \hlstd{Unew)} \hlopt{\%*\%} \hlstd{gk))} \hlopt{+} \hlkwd{log}\hlstd{(n)} \hlopt{*} \hlstd{multiD} \hlstd{G[, k]} \hlkwb{<-} \hlstd{G0} \hlopt{\%*\%} \hlstd{gk} \hlkwa{break} \hlstd{\}} \hlstd{gk} \hlkwb{<-} \hlkwd{ballGBB1D}\hlstd{(Mnew, Unew, opts)} \hlstd{phi[k]} \hlkwb{<-} \hlstd{n} \hlopt{*} \hlstd{(}\hlkwd{log}\hlstd{(}\hlkwd{t}\hlstd{(gk)} \hlopt{\%*\%} \hlstd{Mnew} \hlopt{\%*\%} \hlstd{gk)} \hlopt{+} \hlkwd{log}\hlstd{(}\hlkwd{t}\hlstd{(gk)} \hlopt{\%*\%} \hlkwd{solve}\hlstd{(Mnew} \hlopt{+} \hlstd{Unew)} \hlopt{\%*\%} \hlstd{gk))} \hlopt{+} \hlkwd{log}\hlstd{(n)} \hlopt{*} \hlstd{multiD} \hlstd{G[, k]} \hlkwb{<-} \hlstd{G0} \hlopt{\%*\%} \hlstd{gk} \hlstd{G0} \hlkwb{<-} \hlkwd{qr.Q}\hlstd{(}\hlkwd{qr}\hlstd{(G[,} \hlnum{1}\hlopt{:}\hlstd{k]),} \hlkwc{complete} \hlstd{=} \hlnum{TRUE}\hlstd{)[, (k} \hlopt{+} \hlnum{1}\hlstd{)}\hlopt{:}\hlstd{p]} \hlstd{Mnew} \hlkwb{<-} \hlkwd{t}\hlstd{(G0)} \hlopt{\%*\%} \hlstd{M} \hlopt{\%*\%} \hlstd{G0} \hlstd{Unew} \hlkwb{<-} \hlkwd{t}\hlstd{(G0)} \hlopt{\%*\%} \hlstd{U} \hlopt{\%*\%} \hlstd{G0} \hlstd{\}} \hlstd{bic_val} \hlkwb{<-} \hlkwd{rep}\hlstd{(}\hlnum{0}\hlstd{, p)} \hlkwa{for} \hlstd{(k} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{p) \{} \hlstd{bic_val[k]} \hlkwb{<-} \hlkwd{sum}\hlstd{(phi[}\hlnum{1}\hlopt{:}\hlstd{k])} \hlstd{\}} \hlstd{u} \hlkwb{=} \hlkwd{which.min}\hlstd{(bic_val)} \hlstd{bic_val} \hlstd{\}} \hlcom{## compute weighted envelope estimator from a weight vector } \hlcom{## and the original estimator} \hlstd{wtenv} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{M}\hlstd{,} \hlkwc{U}\hlstd{,} \hlkwc{wt}\hlstd{,}
\hlkwc{b}\hlstd{)\{} \hlstd{p} \hlkwb{<-} \hlkwd{ncol}\hlstd{(M)} \hlstd{Env_wt} \hlkwb{<-} \hlkwd{rep}\hlstd{(}\hlnum{0}\hlstd{,p)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{p)\{} \hlkwa{if}\hlstd{(i} \hlopt{<} \hlstd{p)\{} \hlstd{G} \hlkwb{<-} \hlkwd{manifold1D}\hlstd{(}\hlkwc{M} \hlstd{= M,} \hlkwc{U} \hlstd{= U,} \hlkwc{u} \hlstd{= i)} \hlstd{Env_wt} \hlkwb{<-} \hlstd{Env_wt} \hlopt{+} \hlstd{wt[i]} \hlopt{*} \hlkwd{as.numeric}\hlstd{((G} \hlopt{\%*\%} \hlkwd{t}\hlstd{(G))} \hlopt{\%*\%} \hlstd{b)} \hlstd{\}} \hlkwa{if}\hlstd{(i} \hlopt{==} \hlstd{p) Env_wt} \hlkwb{<-} \hlstd{Env_wt} \hlopt{+} \hlstd{wt[i]} \hlopt{*} \hlstd{b} \hlstd{\}} \hlkwd{return}\hlstd{(Env_wt)} \hlstd{\}} \hlcom{## compute covariance matrices from bootstrap output} \hlstd{covar} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{reg}\hlstd{)\{} \hlstd{p} \hlkwb{<-} \hlstd{(}\hlkwd{ncol}\hlstd{(reg)} \hlopt{-} \hlnum{1}\hlstd{)} \hlopt{/} \hlnum{4} \hlstd{covwt} \hlkwb{<-} \hlkwd{cov}\hlstd{(reg[,} \hlnum{1}\hlopt{:}\hlstd{p])} \hlstd{covenv} \hlkwb{<-} \hlkwd{cov}\hlstd{(reg[, (p}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{p)])} \hlstd{covenvfixedu} \hlkwb{<-} \hlkwd{cov}\hlstd{(reg[, (}\hlnum{2}\hlopt{*}\hlstd{p}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{(}\hlnum{3}\hlopt{*}\hlstd{p)])} \hlstd{covMLE} \hlkwb{<-} \hlkwd{cov}\hlstd{(reg[, (}\hlnum{3}\hlopt{*}\hlstd{p}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{(}\hlnum{4}\hlopt{*}\hlstd{p)])} \hlstd{out} \hlkwb{=} \hlkwd{list}\hlstd{(}\hlkwc{covwt}\hlstd{=covwt,} \hlkwc{covenv}\hlstd{=covenv,} \hlkwc{covenvfixedu} \hlstd{= covenvfixedu,} \hlkwc{covMLE}\hlstd{=covMLE)} \hlstd{out} \hlstd{\}} \hlcom{## compute normed matrices from bootstrap output} \hlstd{normvar} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{reg}\hlstd{)\{} \hlstd{p} \hlkwb{<-} \hlstd{(}\hlkwd{ncol}\hlstd{(reg)} \hlopt{-} \hlnum{1}\hlstd{)} \hlopt{/} \hlnum{4} \hlstd{n} \hlkwb{<-} \hlkwd{nrow}\hlstd{(reg)} \hlstd{sewt} \hlkwb{<-} \hlkwd{crossprod}\hlstd{(reg[,} \hlnum{1}\hlopt{:}\hlstd{p])}
\hlopt{/} \hlstd{n} \hlstd{seenv} \hlkwb{<-} \hlkwd{crossprod}\hlstd{(reg[, (p}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{p)])} \hlopt{/} \hlstd{n} \hlstd{seenvfixedu} \hlkwb{<-} \hlkwd{crossprod}\hlstd{(reg[, (}\hlnum{2}\hlopt{*}\hlstd{p}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{(}\hlnum{3}\hlopt{*}\hlstd{p)])} \hlopt{/} \hlstd{n} \hlstd{seMLE} \hlkwb{<-} \hlkwd{crossprod}\hlstd{(reg[, (}\hlnum{3}\hlopt{*}\hlstd{p}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{(}\hlnum{4}\hlopt{*}\hlstd{p)])} \hlopt{/} \hlstd{n} \hlstd{out} \hlkwb{=} \hlkwd{list}\hlstd{(}\hlkwc{se_wt}\hlstd{=sewt,} \hlkwc{se_env}\hlstd{=seenv,} \hlkwc{se_env_fixedu} \hlstd{= seenvfixedu,} \hlkwc{se_MLE}\hlstd{=seMLE)} \hlstd{out} \hlstd{\}} \hlcom{## Compute the bootstrap deviations for the weighted envelope } \hlcom{## estimator, the envelope estimator with fixed dimension, the } \hlcom{## envelope estimator with variable dimensions, and the MLE} \hlcom{## Also report the selected envelope dimension at every } \hlcom{## iteration of the nonparametric bootstrap} \hlstd{model_boot} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{model}\hlstd{,} \hlkwc{nboot} \hlstd{=} \hlnum{1000}\hlstd{,} \hlkwc{cores} \hlstd{=} \hlnum{15}\hlstd{,} \hlkwc{intercept} \hlstd{=} \hlnum{FALSE}\hlstd{)\{} \hlcom{## important quantities an MLE of beta} \hlstd{dat} \hlkwb{<-} \hlstd{model}\hlopt{$}\hlstd{data} \hlstd{model} \hlkwb{<-} \hlkwd{update}\hlstd{(model,} \hlkwc{x} \hlstd{=} \hlnum{TRUE}\hlstd{,} \hlkwc{y} \hlstd{=} \hlnum{TRUE}\hlstd{,} \hlkwc{data} \hlstd{= dat)} \hlstd{Y} \hlkwb{<-} \hlstd{model}\hlopt{$}\hlstd{y} \hlstd{X} \hlkwb{<-} \hlstd{model}\hlopt{$}\hlstd{x} \hlstd{n} \hlkwb{<-} \hlkwd{nrow}\hlstd{(X)} \hlstd{b} \hlkwb{<-} \hlstd{betahat} \hlkwb{<-} \hlstd{model}\hlopt{$}\hlstd{coefficients} \hlstd{a} \hlkwb{<-} \hlnum{0} \hlkwa{if}\hlstd{(intercept)\{} \hlstd{a} \hlkwb{<-} \hlstd{betahat[}\hlnum{1}\hlstd{]} \hlstd{b} \hlkwb{<-} \hlstd{betahat[}\hlopt{-}\hlnum{1}\hlstd{]} \hlstd{X} 
\hlkwb{<-} \hlkwd{matrix}\hlstd{(X[,}\hlopt{-}\hlnum{1}\hlstd{],} \hlkwc{nrow} \hlstd{= n)} \hlstd{\}} \hlstd{p} \hlkwb{<-} \hlkwd{ncol}\hlstd{(X)} \hlcom{## intermediate envelope quantities } \hlstd{fn} \hlkwb{<-} \hlkwa{NULL} \hlstd{fam} \hlkwb{<-} \hlstd{model}\hlopt{$}\hlstd{family}\hlopt{$}\hlstd{family} \hlkwa{if}\hlstd{(fam} \hlopt{==} \hlstr{"binomial"}\hlstd{)\{} \hlstd{fn} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{Y}\hlstd{,}\hlkwc{X}\hlstd{,}\hlkwc{a}\hlstd{,}\hlkwc{b}\hlstd{)} \hlkwd{Logistic_cov}\hlstd{(}\hlkwc{Y}\hlstd{=Y,}\hlkwc{X}\hlstd{=X,}\hlkwc{a}\hlstd{=a,}\hlkwc{b}\hlstd{=b)} \hlstd{\}} \hlkwa{if}\hlstd{(fam} \hlopt{==} \hlstr{"poisson"}\hlstd{)\{} \hlstd{fn} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{Y}\hlstd{,}\hlkwc{X}\hlstd{,}\hlkwc{a}\hlstd{,}\hlkwc{b}\hlstd{)} \hlkwd{Poisson_cov}\hlstd{(}\hlkwc{Y}\hlstd{=Y,}\hlkwc{X}\hlstd{=X,}\hlkwc{a}\hlstd{=a,}\hlkwc{b}\hlstd{=b)} \hlstd{\}} \hlstd{model_cov} \hlkwb{<-} \hlkwd{fn}\hlstd{(}\hlkwc{Y} \hlstd{= Y,} \hlkwc{X} \hlstd{= X,} \hlkwc{a} \hlstd{= a,} \hlkwc{b} \hlstd{= b)} \hlstd{M} \hlkwb{<-} \hlstd{model_cov}\hlopt{$}\hlstd{M; U} \hlkwb{<-} \hlstd{model_cov}\hlopt{$}\hlstd{U} \hlstd{bic_val} \hlkwb{<-} \hlkwd{bic_compute}\hlstd{(}\hlkwc{M} \hlstd{= M,} \hlkwc{U} \hlstd{= U,} \hlkwc{n} \hlstd{= n)} \hlstd{u} \hlkwb{<-} \hlkwd{which.min}\hlstd{(bic_val)} \hlstd{min_bic_val} \hlkwb{<-} \hlkwd{min}\hlstd{(bic_val)} \hlstd{wt_bic} \hlkwb{<-} \hlkwd{exp}\hlstd{(min_bic_val} \hlopt{-} \hlstd{bic_val)} \hlopt{/} \hlkwd{sum}\hlstd{(}\hlkwd{exp}\hlstd{(min_bic_val} \hlopt{-} \hlstd{bic_val))} \hlcom{## estimators} \hlstd{Ghat} \hlkwb{<-} \hlkwd{manifold1D}\hlstd{(}\hlkwc{M} \hlstd{= M,} \hlkwc{U} \hlstd{= U,} \hlkwc{u} \hlstd{= u)} \hlstd{Env} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{((Ghat} \hlopt{\%*\%} \hlkwd{t}\hlstd{(Ghat))} \hlopt{\%*\%} \hlstd{b)} \hlstd{Env_wt} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{(}\hlkwd{wtenv}\hlstd{(M, U,} \hlkwc{wt} \hlstd{= wt_bic, b))} \hlcom{## bootstrap routine} \hlkwd{registerDoParallel}\hlstd{(}\hlkwc{cores} \hlstd{= cores)}
\hlstd{output} \hlkwb{<-} \hlkwd{foreach}\hlstd{(}\hlkwc{i}\hlstd{=}\hlnum{1}\hlopt{:}\hlstd{nboot,}\hlkwc{.combine}\hlstd{=rbind,} \hlkwc{.export}\hlstd{=}\hlkwd{c}\hlstd{(}\hlstr{"ballGBB1D"}\hlstd{,}\hlstr{"manifold1D"}\hlstd{,}\hlstr{"wtenv"}\hlstd{,}\hlstr{"fn"}\hlstd{))} \hlopt{\%dopar\%} \hlstd{\{} \hlcom{## refit model using bootstrap pairs } \hlstd{m_boot} \hlkwb{<-} \hlkwd{update}\hlstd{(model,} \hlkwc{data} \hlstd{= dat[}\hlkwd{sample}\hlstd{(}\hlnum{1}\hlopt{:}\hlstd{n,} \hlkwc{replace} \hlstd{=} \hlnum{TRUE}\hlstd{), ],} \hlkwc{x} \hlstd{=} \hlnum{TRUE}\hlstd{,} \hlkwc{y} \hlstd{=} \hlnum{TRUE}\hlstd{)} \hlcom{## important quantities including the MLE of beta } \hlcom{## from the bootstrap sample} \hlstd{Y_boot} \hlkwb{<-} \hlstd{m_boot}\hlopt{$}\hlstd{y} \hlstd{X_boot} \hlkwb{<-} \hlstd{m_boot}\hlopt{$}\hlstd{x} \hlstd{b_boot} \hlkwb{<-} \hlstd{beta_boot} \hlkwb{<-} \hlstd{m_boot}\hlopt{$}\hlstd{coefficients} \hlstd{a_boot} \hlkwb{<-} \hlnum{0} \hlkwa{if}\hlstd{(intercept)\{} \hlstd{a_boot} \hlkwb{<-} \hlstd{beta_boot[}\hlnum{1}\hlstd{]} \hlstd{b_boot} \hlkwb{<-} \hlstd{beta_boot[}\hlopt{-}\hlnum{1}\hlstd{]} \hlstd{X_boot} \hlkwb{<-} \hlkwd{matrix}\hlstd{(X_boot[,}\hlopt{-}\hlnum{1}\hlstd{],} \hlkwc{nrow} \hlstd{= n)} \hlstd{\}} \hlcom{## intermediate envelope estimation quantities} \hlstd{model_cov_boot} \hlkwb{<-} \hlkwd{fn}\hlstd{(}\hlkwc{Y}\hlstd{=Y_boot,}\hlkwc{X}\hlstd{=X_boot,}\hlkwc{a}\hlstd{=a_boot,}\hlkwc{b}\hlstd{=b_boot)} \hlstd{M_boot} \hlkwb{<-} \hlstd{model_cov_boot}\hlopt{$}\hlstd{M; U_boot} \hlkwb{<-} \hlstd{model_cov_boot}\hlopt{$}\hlstd{U} \hlcom{#M <- vcov(m_boot); U <- beta_boot \%o\% beta_boot} \hlstd{bic_val_boot} \hlkwb{=} \hlkwd{bic_compute}\hlstd{(}\hlkwc{M}\hlstd{=M_boot,}\hlkwc{U}\hlstd{=U_boot,} \hlkwc{n}\hlstd{=n)} \hlstd{u_boot} \hlkwb{<-} \hlkwd{which.min}\hlstd{(bic_val_boot)} \hlstd{min_bic_val_boot} \hlkwb{<-} \hlkwd{min}\hlstd{(bic_val_boot)} \hlstd{wt_bic_boot} \hlkwb{<-} \hlkwd{exp}\hlstd{(min_bic_val_boot} \hlopt{-} \hlstd{bic_val_boot)} \hlopt{/}
\hlkwd{sum}\hlstd{(}\hlkwd{exp}\hlstd{(min_bic_val_boot} \hlopt{-} \hlstd{bic_val_boot))} \hlcom{## envelope estimators from bootstrap sample} \hlstd{G_boot} \hlkwb{<-} \hlkwd{manifold1D}\hlstd{(}\hlkwc{M} \hlstd{= M_boot,} \hlkwc{U} \hlstd{= U_boot,} \hlkwc{u} \hlstd{= u_boot)} \hlstd{G_boot_fixedu} \hlkwb{<-} \hlkwd{manifold1D}\hlstd{(}\hlkwc{M} \hlstd{= M_boot,} \hlkwc{U} \hlstd{= U_boot,} \hlkwc{u} \hlstd{= u)} \hlstd{Env_boot} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{((G_boot} \hlopt{ \hlstd{Env_boot_fixedu} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{((G_boot_fixedu} \hlopt{ \hlkwd{t}\hlstd{(G_boot_fixedu))} \hlopt{ \hlstd{Env_wt_boot} \hlkwb{<-} \hlkwd{as.numeric}\hlstd{(}\hlkwd{wtenv}\hlstd{(}\hlkwc{M} \hlstd{= M_boot,} \hlkwc{U} \hlstd{= U_boot,} \hlkwc{wt} \hlstd{= wt_bic_boot,} \hlkwc{b} \hlstd{= b_boot))} \hlcom{## descrepancy} \hlkwd{c}\hlstd{((Env_wt} \hlopt{-} \hlstd{Env_wt_boot),} \hlstd{(Env} \hlopt{-} \hlstd{Env_boot),} \hlstd{(Env} \hlopt{-} \hlstd{Env_boot_fixedu),} \hlstd{(b} \hlopt{-} \hlstd{b_boot), u_boot)} \hlstd{\}} \hlstd{output} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} \end{document}
\begin{document} \title{Entanglement behavior of quantum states of a fermionic system in an accelerated frame} \author{Jinho Chang} \author{Younghun Kwon} \email{yyhkwon@hanyang.ac.kr} \affiliation{Department of Physics, Hanyang University, Ansan, Kyunggi-Do, 425-791, South Korea } \date{\today} \begin{abstract} In this paper, we investigate the behavior of bipartite entanglement of fermionic systems when one of the parties is traveling with a uniform acceleration. For the ordering problem in fermionic systems, we apply the recent result of [Montero and Mart\'{i}n-Mart\'{i}nez, Phys. Rev. A 83, 052306 (2011)]. Based on this approach, we consider both pure and mixed entangled states, and we show that the behavior of the entanglement measure, the negativity, yields physical results, i.e., it converges in the infinite-acceleration limit. This behavior shows that the ordering employed is the relevant one for deriving physical results for fermionic entanglement. It also corrects the previous analysis of [Mart\'{i}n-Mart\'{i}nez and Fuentes]. \end{abstract} \maketitle \section{Introduction} \label{intro} Entanglement is one of the central characteristics of quantum information theory, as it contains correlations that have no classical counterpart. Entangled states exhibit correlations stronger than those achievable by local classical systems assisted with classical communication, yet these correlations are not arbitrarily strong; in this sense quantum theory is consistent with relativistic theories. Along this line, it is a fundamental question of relativistic quantum information theory how the behavior of entanglement can be described in a relativistic setting, particularly when one of the parties is traveling with a uniform acceleration \cite{ref:alsing1} \cite{ref:ball}. This setting involves two parties who share an entangled state, while one remains in an inertial frame and the other is described in an accelerated frame. 
It is highly nontrivial that, even though entanglement in bosonic systems disappears in the limit of infinite acceleration, entanglement in fermionic systems can survive in that limit \cite{ref:fuentes}\cite{ref:alsing2}. This is remarkable in that entanglement can behave differently depending on the field, and can even persist in such a limit. However, the analysis for fermionic systems was based only on the single-mode approximation \cite{ref:fuentes}\cite{ref:alsing2}. To obtain a precise understanding of this limit, one has to go beyond the single-mode approximation. This has been attempted, for instance, in Ref. \cite{ref:bruschi}, which is, however, hard to interpret physically. The question in fact lies with a peculiarity of fermionic systems that gives rise to an ambiguity in the ordering of operators. Note that, as mentioned in Ref. \cite{ref:montero1}, what matters is to find an ordering that has physical relevance, i.e., such that entanglement is characterized by what is observed by detectors. In Ref. \cite{ref:montero1}, such an ordering beyond the single-mode approximation is also proposed. In Ref. \cite{ref:montero2} the ordering is then applied to explain entanglement in the infinite-acceleration limit, so that the physicality of the fermionic structure is established, and the independence of the choice of Unruh mode in that limit is discussed.\\ However, the ordering was tested only on the example considered in Ref. \cite{ref:montero2}. Therefore, in this work, going beyond the single-mode approximation, we investigate the behavior of entanglement of fermionic systems using the recent construction proposed in Ref. \cite{ref:montero1}, for both pure and mixed states. In this way, the construction is extensively tested on a number of entangled states. We show that in all of these cases the ordering leads to convergence of fermionic entanglement in the infinite acceleration, i.e. 
it yields physical results. This also corrects the previously known analysis of the entanglement behavior in Ref. \cite{ref:martin}. Finally, our results provide strong evidence that the recently suggested ordering properly characterizes entanglement, as it gives physical results. The organization of this article is as follows. In Sec. \ref{acc}, we briefly review how fermionic systems are described in a non-inertial frame. In Sec. \ref{ex}, the approach of Ref. \cite{ref:montero1} is applied and the entanglement behavior of fermionic systems in an accelerated frame is shown. In Sec. \ref{con} we conclude and discuss our results. \section{Accelerated Frame} \label{acc} Let us begin with quantum fields in relativistic frames. A party traveling with a uniform acceleration is described by the so-called Rindler coordinates $(\tau,\varsigma,y,z)$, which are related to the Minkowski coordinates $(t,x,y,z)$ by \begin{equation} ct=\varsigma \sinh\Big(\frac{a \tau}{c}\Big),\quad x=\varsigma \cosh\Big(\frac{a \tau}{c}\Big) \label{eq1}, \end{equation} where $a$ is the fixed acceleration of the frame and $c$ is the velocity of light. For fixed $\varsigma$, the coordinates trace hyperbolic trajectories in space-time, as shown in Fig. \ref{fig1}. Equation (\ref{eq1}) covers only region $\mathrm{I}$ in Fig. \ref{fig1}. Region $\mathrm{II}$ is covered by $ ct=-\varsigma \sinh(\frac{a \tau}{c})$, $x=-\varsigma \cosh(\frac{a \tau}{c})$. The other two regions ($F$ and $P$) are described by $ ct=\pm\xi \cosh(\frac{a \sigma}{c})$, $x=\pm\xi \sinh(\frac{a \sigma}{c})$. \begin{figure} \caption{The Rindler diagram is shown, where Alice is in the inertial frame while Bob is in the uniformly accelerated frame. The two regions $\mathrm{I}$ and $\mathrm{II}$ are causally disconnected. Transformations of coordinates among regions $\mathrm{I}$, $\mathrm{II}$, $F$, and $P$ are explained in the text. 
} \label{fig1} \end{figure} Now let us consider a field in Minkowski and Rindler spacetime, which can be written as \begin{eqnarray} \phi &=& N_{M}\sum_{i}(a_{i,M}v^{+}_{i,M} + b^{\dag}_{i,M}v^{-}_{i,M} )\nonumber\\ &=& N_{R}\sum_{j}(a_{j,\mathrm{I}}v^{+}_{j,\mathrm{I}} + b^{\dag}_{j,\mathrm{I}}v^{-}_{j,\mathrm{I}} + a_{j,\mathrm{II}}v^{+}_{j,\mathrm{II}} + b^{\dag}_{j,\mathrm{II}}v^{-}_{j,\mathrm{II}} ), \nonumber \end{eqnarray} where $N_{M}$ and $N_{R}$ are normalization constants. Here $v^{\pm}_{i,M}$ denote the positive- and negative-energy solutions of the Dirac equation in Minkowski spacetime, obtained with respect to the Killing vector field of Minkowski spacetime, while $v^{\pm}_{i,\mathrm{I}}$ and $v^{\pm}_{i,\mathrm{II}}$ are the positive- and negative-energy solutions of the Dirac equation in Rindler spacetime, with respect to the Killing vector fields of regions $\mathrm{I}$ and $\mathrm{II}$. In addition, $a^{\dag}_{i,\Delta}$ ($a_{i,\Delta}$) and $b^{\dag}_{i,\Delta}$ ($b_{i,\Delta}$) are the creation (annihilation) operators for the positive- and negative-energy solutions (particle and antiparticle), where $\Delta$ denotes $M,\mathrm{I},\mathrm{II}$. They satisfy the anticommutation relations $\{a_{i,\Delta},a^{\dag}_{j,\Delta^{'}}\}=\{b_{i,\Delta},b^{\dag}_{j,\Delta^{'}}\}=\delta_{ij}\delta_{\Delta \Delta^{'} }$. It is known that a particular combination of Minkowski modes, called an Unruh mode, transforms into a monochromatic Rindler mode and annihilates the same Minkowski vacuum. The following relation thus holds: \begin{equation} A_{i,R/L}\equiv \cos \gamma_{i}\,a_{i,\mathrm{I}/ \mathrm{II}} - \sin \gamma_{i}\, b^{\dag}_{i,\mathrm{II} / \mathrm{I}}, \nonumber \end{equation} where $\cos \gamma_{i}=(e^{\frac{-2 \pi \Omega c}{a}}+1)^{-1/2}$. A more general relation can also be found, \begin{equation} a^{\dag}_{i,U}=q_{L}(A^{\dag}_{\Omega ,L}\otimes I_{R}) + q_{R}(I_{L} \otimes A^{\dag}_{\Omega ,R}), \label{eq2} \end{equation} by which one can go beyond the single-mode approximation. 
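A remark we add here for later reference, which follows directly from the expression for $\cos \gamma_{i}$ above: in the inertial limit $a\to 0$ one has $e^{-2\pi \Omega c/a}\to 0$, so that $\cos \gamma_{i}\to 1$, i.e., $\gamma_{i}\to 0$, while in the limit of infinite acceleration $a\to\infty$ one has $e^{-2\pi \Omega c/a}\to 1$, so that \begin{equation} \cos \gamma_{i}\to \frac{1}{\sqrt{2}}, \qquad \gamma_{i}\to\frac{\pi}{4}. \nonumber \end{equation} This is why $\gamma=\frac{\pi}{4}$ is referred to below as the infinite-acceleration limit. 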
Using these relations, in the case of the Grassmann scalar, the Unruh vacuum can be given by \begin{eqnarray} |0_{\Omega }\rangle_{U} &=& \cos^{2} \gamma_{\Omega } |0000\rangle_{\Omega } - \sin \gamma_{\Omega } \cos \gamma_{\Omega } |0011\rangle_{\Omega } \nonumber\\ &+& \sin \gamma_{\Omega } \cos \gamma_{\Omega } |1100\rangle_{\Omega } - \sin^{2} \gamma_{\Omega } |1111\rangle_{\Omega }. \label{eq3} \end{eqnarray} Here we use the notation $|pqmn\rangle_{\Omega } \equiv |p_{\Omega }\rangle^{+}_{\mathrm{I}}|q_{\Omega }\rangle^{-}_{\mathrm{II}} | m_{\Omega }\rangle^{-}_{\mathrm{I}} |n_{\Omega }\rangle^{+}_{\mathrm{II}} $. Then, one-particle states can be obtained as \begin{eqnarray} |1_{\Omega }\rangle^{+}_{U} &=& q_{R}(\cos \gamma_{\Omega } |1000\rangle_{\Omega } - \sin \gamma_{\Omega } |1011\rangle_{\Omega}) \nonumber\\ &+& q_{L}(\sin \gamma_{\Omega } |1101\rangle_{\Omega } + \cos \gamma_{\Omega } |0001\rangle_{\Omega }), \label{eq4} \end{eqnarray} and \begin{eqnarray} |1_{\Omega }\rangle^{-}_{U} &=& q_{L}(\cos \gamma_{\Omega } |0100\rangle_{\Omega } - \sin \gamma_{\Omega } |0111\rangle_{\Omega}) \nonumber\\ &+& q_{R}(\sin \gamma_{\Omega } |1110\rangle_{\Omega } + \cos \gamma_{\Omega } |0010\rangle_{\Omega }). \label{eq5} \end{eqnarray} From now on, for both convenience and simplicity, we restrict our consideration to cases where $q_{R}$ and $q_{L}$ are real numbers, and we also omit the index $\Omega$ throughout. In Eqs. (\ref{eq4}) and (\ref{eq5}), the single-mode approximation is recovered by setting $q_{R}=1$, so that the vacuum in the Minkowski frame can be written as $|0\rangle_{M} = \cos \gamma |0\rangle_{\mathrm{I}}|0\rangle_{\mathrm{II}} + \sin \gamma |1\rangle_{\mathrm{I}}|1\rangle_{\mathrm{II}}$. Very recently, in Refs. 
\cite{ref:montero1} \cite{ref:montero2} it has been suggested that the ordering in fermionic systems should be rearranged according to the sequence of particles and antiparticles in the separate regions, so that the entanglement behavior of those states yields physical results. In the following section, we apply this construction and derive the entanglement behavior of fermionic systems. \section{Entanglement behavior} \label{ex} In this section, we consider the construction shown in Refs. \cite{ref:montero1} \cite{ref:montero2}, and we derive the entanglement behavior accordingly. For bipartite pure entanglement, the entanglement properties of quantum states are simplified greatly by the Schmidt decomposition, which allows a given quantum state to be expressed using a single parameter. That is, any bipartite pure state can be written as $|\psi (\alpha) \rangle = \cos\alpha |00\rangle + \sin\alpha|11\rangle$ for some $\alpha$, with respect to a set of orthonormal basis states $|0\rangle$ and $|1\rangle$. To quantify entanglement, we apply the measure called \emph{negativity}, which is based on the partial transpose of a quantum state, i.e., transposing the state of either one of the two parties \cite{ref:peres}. For a given quantum state $\rho$, the negativity $\mathcal{N}$ can be computed as $\mathcal{N}(\rho) = \sum_{i} | \lambda_i |$, where the $\lambda_i$ are the negative eigenvalues of $\rho^{\Gamma}$ and $\Gamma$ denotes the partial transpose \cite{ref:neg}. Note that the negativity is useful as a computable measure in various contexts. With these tools, the entanglement behavior is studied in the following scenario. Suppose that two parties, called Alice and Bob, initially share an entangled state in inertial frames. Afterward, Bob moves with a uniform acceleration. We then show that entanglement in the infinite acceleration allows us to obtain physical results. In particular, results shown in Sec. 
\ref{sub1}, \ref{sub2}, and \ref{sub3} correct the previously known analysis in Ref. \cite{ref:martin}. \subsection{Bipartite pure states I - particle and antiparticle Unruh excitations} \label{sub1} We first consider pure entanglement between Alice and Bob. As mentioned above, suppose that the two parties share a pure state and Bob travels with a uniform acceleration. The shared state is then described as \begin{equation} |\Phi_{+} (\alpha) \rangle = \cos \alpha |0\rangle_{M}|0\rangle_{U} + \sin \alpha |1\rangle_{M}|1^{+}\rangle_{U}. \label{eq6} \end{equation} Suppose that Bob's detector cannot distinguish between the particle and the antiparticle. As depicted in Fig. \ref{fig1}, the two regions $\mathrm{I}$ and $\mathrm{II}$ are causally disconnected, and thus Bob does not have access to both. Hence, the state between Alice and Bob is found by tracing out the inaccessible region. First, the state of Alice and Bob when Bob is in region $\mathrm{I}$ is described as \begin{eqnarray} \rho_{AB_{\mathrm{I}}}^{\Phi_{+}} &=& \cos ^2 \alpha \cos ^4 \gamma |000\rangle \langle 000| \nonumber\\ &+& \frac{q_{R}}{2} \sin 2\alpha \cos ^{3} \gamma (|000\rangle \langle 110| + |110\rangle \langle 000|) \nonumber\\ &+& q_{L}^{2} \sin ^{2} \alpha \cos ^{2} \gamma |100\rangle \langle 100| \nonumber\\ &+& \frac{1}{2}(1+(1-2 q_{L}^{2}) \cos 2\gamma) \sin ^{2}\alpha |110\rangle \langle 110| \nonumber\\ &-&\frac{q_{L}}{2} \sin 2\alpha \cos ^{2}\gamma \sin \gamma(|001\rangle \langle 100| + |100\rangle \langle 001|) \nonumber\\ &-& \frac{q_{R}q_{L}}{2} \sin ^{2}\alpha \sin 2\gamma (|100\rangle \langle 111| + |111\rangle \langle 100| ) \nonumber\\ &+& \frac{1}{4} \cos ^{2}\alpha \sin ^{2} 2\gamma (|001\rangle \langle 001| + |010\rangle \langle 010|) \nonumber\\ &+& \frac{q_{R}}{2} \sin 2\alpha \cos \gamma \sin ^{2} \gamma (|001\rangle \langle 111| + |111\rangle \langle 001|) \nonumber\\ &+& q_{R}^{2} \sin ^{2} \alpha \sin ^{2} \gamma |111\rangle \langle 111| \nonumber\\ &+& \frac{q_{L}}{2} \sin 2\alpha \sin 
^{3} \gamma (|011\rangle \langle 110| + |110\rangle \langle 011|) \nonumber\\ &+& \cos ^{2} \alpha \sin ^{4} \gamma |011\rangle \langle 011|. \nonumber \end{eqnarray} The state of Alice and anti-Bob (i.e., in Bob's region II) is then expressed, after tracing out region $\mathrm{I}$, as follows: \begin{eqnarray} \rho_{AB_{\mathrm{II}}}^{\Phi_{+}} &=& \cos ^2 \alpha \cos ^4 \gamma |000\rangle \langle 000| \nonumber\\ &+& \frac{q_{L}}{2} \sin 2\alpha \cos ^{3} \gamma (|000\rangle \langle 110| + |110\rangle \langle 000|) \nonumber\\ &+& q_{R}^{2} \sin ^{2} \alpha \cos ^{2} \gamma |100\rangle \langle 100| \nonumber\\ &+& \frac{1}{2}(1+(1-2 q_{L}^{2}) \cos 2\gamma) \sin ^{2}\alpha |110\rangle \langle 110| \nonumber\\ &+& \frac{q_{R}}{2} \sin 2\alpha \cos ^{2}\gamma \sin \gamma(|001\rangle \langle 100| + |100\rangle \langle 001|) \nonumber\\ &-& \frac{q_{R}q_{L}}{2} \sin ^{2}\alpha \sin 2\gamma (|100\rangle \langle 111| + |111\rangle \langle 100| ) \nonumber\\ &+& \frac{1}{4} \cos ^{2}\alpha \sin ^{2} 2\gamma (|001\rangle \langle 001| + |010\rangle \langle 010|) \nonumber\\ &-& \frac{q_{L}}{2} \sin 2\alpha \cos \gamma \sin ^{2} \gamma (|001\rangle \langle 111| + |111\rangle \langle 001|) \nonumber\\ &+& q_{L}^{2} \sin ^{2} \alpha \sin ^{2} \gamma |111\rangle \langle 111| \nonumber\\ &+& \frac{q_{R}}{2} \sin 2\alpha \sin ^{3} \gamma (|011\rangle \langle 110| + |110\rangle \langle 011|) \nonumber\\ &+& \cos ^{2} \alpha \sin ^{4} \gamma |011\rangle \langle 011|. \nonumber \end{eqnarray} Note that these are exact expressions, not obtained from the single-mode approximation. The entanglement of these states is estimated using the negativity \cite{ref:neg} and is shown in Fig. 2. As $\gamma$ increases, the entanglement of $ \rho_{AB_{\mathrm{I}}}^{\Phi_{+}}$ decreases while that of $ \rho_{AB_{\mathrm{II}}}^{\Phi_{+}}$ increases. They eventually coincide at $\gamma=\frac{\pi}{4}$. 
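As an illustration of the negativity computation used throughout (a short derivation we add for concreteness, using the Schmidt form $|\psi (\alpha) \rangle$ introduced above): for $\rho = |\psi(\alpha)\rangle \langle \psi(\alpha)|$ with $|\psi (\alpha) \rangle = \cos\alpha |00\rangle + \sin\alpha|11\rangle$, the partial transpose $\rho^{\Gamma}$ has eigenvalues $\cos^{2}\alpha$, $\sin^{2}\alpha$, and $\pm\cos\alpha \sin\alpha$, so that \begin{equation} \mathcal{N}\big(|\psi(\alpha)\rangle \langle \psi(\alpha)|\big) = \cos\alpha \sin\alpha = \tfrac{1}{2}\sin 2\alpha, \nonumber \end{equation} which attains its maximal value $\tfrac{1}{2}$ in the maximally entangled case $\alpha=\pi/4$. 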
It is thus shown that, for the states $ \rho_{AB_{\mathrm{I}}}^{\Phi_{+}}$ and $ \rho_{AB_{\mathrm{II}}}^{\Phi_{+}}$, entanglement at the infinite acceleration, $\gamma=\frac{\pi}{4}$, is determined independently of $q_{R}$. As discussed in Ref. \cite{ref:montero2}, this yields physical results, since at infinite acceleration the entanglement of the states $ \rho_{AB_{\mathrm{I}}}^{\Phi_{+}}$ and $ \rho_{AB_{\mathrm{II}}}^{\Phi_{+}}$ is independent of $q_{R}$. \begin{figure}\label{fig2} \end{figure} Next, let us consider the case in which Bob's and anti-Bob's detectors can distinguish between the particle and the antiparticle. Then, for the state in Eq. (\ref{eq6}), one can find the density matrices corresponding to the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, the Alice-anti-Bob particle in region II, and the Alice-anti-Bob antiparticle in region II. From these density matrices, entanglement can be computed using the negativity; see Fig. 3. \begin{figure} \caption{(Color online) For the state $\Phi_{+}$ in Eq. (\ref{eq6}), the negativity is computed for the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, the Alice-anti-Bob particle in region II, and the Alice-anti-Bob antiparticle in region II. Parts (a), (b), (c) and (d) show the cases of $q_{R}=1$, $q_{R}=0.75$, $q_{R}=0.5$, and $q_{R}=0.25$, respectively. The blue solid line, the red thick dashed one, the green dotted one and the orange dot-dashed one denote the negativity of the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, the Alice-anti-Bob particle in region II, and the Alice-anti-Bob antiparticle in region II, respectively. } \label{fig3} \end{figure} In Fig. 3 (a), the behavior of entanglement at $q_{R}=1$ in terms of the negativity is shown. As $\gamma$ increases, the entanglement of the Alice-Bob particle in region I decreases, whereas that of the Alice-anti-Bob antiparticle in region II increases. 
At $q_{R}=\frac{1}{2}$, we see nonzero negativity only for the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, and the Alice-anti-Bob particle in region II. \subsection{Bipartite pure states II - particle and antiparticle Unruh excitations} \label{sub2} We next consider the entanglement between Alice and Bob when they share the following state: \begin{equation} |\Phi_{-} (\alpha) \rangle = \cos \alpha |0\rangle_{M}|0\rangle_{U} + \sin \alpha |1\rangle_{M}|1^{-}\rangle_{U} \label{eq7}, \end{equation} As explained previously, Bob has an inaccessible region due to his acceleration. The state of Alice and Bob after tracing out region $\mathrm{II}$ is found as follows: \begin{eqnarray} \rho_{AB_{\mathrm{I}}}^{\Phi_{-}} &=& \cos ^2 \alpha \cos ^4 \gamma |000\rangle \langle 000| \nonumber\\ &+& \frac{q_{R}}{2} \sin 2\alpha \cos ^{3} \gamma (|000\rangle \langle 101| + |101\rangle \langle 000|) \nonumber\\ &+& q_{L}^{2} \sin ^{2} \alpha \cos ^{2} \gamma |101\rangle \langle 101| \nonumber\\ &+& \frac{1}{2}(1+(1-2 q_{L}^{2}) \cos 2\gamma) \sin ^{2}\alpha |101\rangle \langle 101| \nonumber\\ &+& \frac{q_{L}}{2} \sin 2\alpha \cos^{2}\gamma \sin \gamma(|010\rangle \langle 100| + |100\rangle \langle 010|) \nonumber\\ &-& \frac{q_{R}q_{L}}{2} \sin ^{2}\alpha \sin 2\gamma (|100\rangle \langle 111| + |111\rangle \langle 100| ) \nonumber\\ &+& \frac{1}{4} \cos ^{2}\alpha \sin ^{2} 2\gamma (|001\rangle \langle 001| + |010\rangle \langle 010|) \nonumber\\ &-& \frac{q_{R}}{2} \sin 2\alpha \cos \gamma \sin ^{2} \gamma (|010\rangle \langle 111| + |111\rangle \langle 010|) \nonumber\\ &+& q_{R}^{2} \sin ^{2} \alpha \sin ^{2} \gamma |111\rangle \langle 111| \nonumber\\ &+& \frac{q_{L}}{2} \sin 2\alpha \sin ^{3} \gamma (|011\rangle \langle 101| + |101\rangle \langle 011|) \nonumber\\ &+& \cos ^{2} \alpha \sin ^{4} \gamma |011\rangle \langle 011|. 
\nonumber \end{eqnarray} The state that Alice and anti-Bob (in Bob's region II) share can be obtained after tracing out region I, \begin{eqnarray} \rho_{AB_{\mathrm{II}}}^{\Phi_{-}} &=& \cos ^2 \alpha \cos ^4 \gamma |000\rangle \langle 000| \nonumber\\ &+& \frac{q_{L}}{2} \sin 2\alpha \cos ^{3} \gamma (|000\rangle \langle 101| + |101\rangle \langle 000|) \nonumber\\ &+& q_{R}^{2} \sin ^{2} \alpha \cos ^{2} \gamma |100\rangle \langle 100| \nonumber\\ &+& \frac{1}{2}(1+(1-2 q_{R}^{2}) \cos 2\gamma) \sin ^{2}\alpha |101\rangle \langle 101| \nonumber\\ &-&\frac{q_{L}}{2} \sin 2\alpha \cos ^{2}\gamma \sin \gamma(|010\rangle \langle 100| + |100\rangle \langle 010|) \nonumber\\ &-& \frac{q_{R}q_{L}}{2} \sin ^{2}\alpha \sin 2\gamma (|100\rangle \langle 111| + |111\rangle \langle 100| ) \nonumber\\ &+& \frac{1}{4} \cos ^{2}\alpha \sin ^{2} 2\gamma (|001\rangle \langle 001| + |010\rangle \langle 010|) \nonumber\\ &+& \frac{q_{R}}{2} \sin 2\alpha \cos \gamma \sin ^{2} \gamma (|010\rangle \langle 111| + |111\rangle \langle 010|) \nonumber\\ &+& q_{L}^{2} \sin ^{2} \alpha \sin ^{2} \gamma |111\rangle \langle 111| \nonumber\\ &+& \frac{q_{L}}{2} \sin 2\alpha \sin ^{3} \gamma (|011\rangle \langle 101| + |101\rangle \langle 011|) \nonumber\\ &+& \cos ^{2} \alpha \sin ^{4} \gamma |011\rangle \langle 011|. \nonumber \end{eqnarray} The entanglement of the states $ \rho_{AB_{\mathrm{I}}}^{\Phi_{-}}$ and $\rho_{ AB_{\mathrm{II}}}^{\Phi_{-}}$ is shown in Fig. 4. Their behavior is very similar to the cases shown in Sec. \ref{sub1}. It is also shown that the entanglement of the states $ \rho_{AB_{\mathrm{I}}}^{\Phi_{-}}$ and $ \rho_{AB_{\mathrm{II}}}^{\Phi_{-}}$ is independent of $q_{R}$ at $\gamma=\frac{\pi}{4}$. This demonstrates proper entanglement behavior, as it yields physical results. \begin{figure}\label{fig4} \end{figure} As discussed in the previous section, if Bob's and anti-Bob's detectors can distinguish between the particle and the antiparticle, then for the state $\Phi_{-}$ in Eq. 
(\ref{eq7}) one can find the density matrices of the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, the Alice-anti-Bob particle in region II, and the Alice-anti-Bob antiparticle in region II. In Fig. \ref{fig5}, the entanglement behavior is shown for different values of $q_{R}$. \begin{figure} \caption{(Color online) For the state $\Phi_{-}$ in Eq. (\ref{eq7}), the negativity is computed for the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, the Alice-anti-Bob particle in region II, and the Alice-anti-Bob antiparticle in region II. Parts (a), (b), (c) and (d) show the cases of $q_{R}=1$, $q_{R}=0.75$, $q_{R}=0.5$, and $q_{R}=0.25$, respectively. The blue solid line, the red thick dashed one, the green dotted one and the orange dot-dashed one denote the negativity of the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, the Alice-anti-Bob particle in region II, and the Alice-anti-Bob antiparticle in region II, respectively.} \label{fig5} \end{figure} We observe that the entanglement behavior shown in Fig. 5 differs from that in Fig. 3. At $q_{R}=1$, nonzero negativity is found for the Alice-Bob antiparticle in region I and the Alice-anti-Bob particle in region II, but not for the others. At $q_{R}=\frac{3}{4}$, non-vanishing entanglement is found for the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, and the Alice-anti-Bob antiparticle in region II. 
\begin{figure}\label{fig6} \end{figure} \subsection{Bipartite pure states III - particle and antiparticle degrees of freedom} \label{sub3} We now consider pure entangled states that describe correlations between particle and anti-particle degrees of freedom, when Bob is traveling with a uniform acceleration, as follows: \begin{equation} |\Phi_{*} (\alpha) \rangle = \cos \alpha |1\rangle^{+}_{M}|1^{+}\rangle_{U} + \sin \alpha |1\rangle^{-}_{M}|1^{-}\rangle_{U}. \label{eq8} \end{equation} As was done before, the state that Alice and Bob share can be obtained beyond the single-mode approximation. The state when Bob is in region $\mathrm{I}$ is obtained by tracing the other region, \begin{eqnarray} \rho_{AB_{\mathrm{I}}}^{\Phi_{*}} &=& q_{L}^{2} \cos ^2 \alpha \cos ^2 \gamma |+00\rangle \langle +00| \nonumber\\ &+& \frac{1}{2}(1+(1-2 q_{L}^{2})\cos 2\gamma) \cos ^{2} \alpha (|+10\rangle \langle +10| ) \nonumber\\ &+& \frac{1}{4}(1+(1-2 q_{L}^{2})\cos 2\gamma) \sin 2 \alpha |+10\rangle \langle -01| \nonumber\\ &+& \frac{1}{4}(1+(1-2 q_{L}^{2})\cos 2\gamma) \sin 2 \alpha |-01\rangle \langle +10| \nonumber\\ &+& \frac{1}{2}(1+(1-2 q_{L}^{2})\cos 2\gamma) \sin ^{2}\alpha |-01\rangle \langle -01| \nonumber\\ &+& q_{L}^{2} \sin ^{2} \alpha \cos ^{2} \gamma |-00\rangle \langle -00| \nonumber\\ &-& \frac{q_{R}q_{L}}{2} \cos ^{2}\alpha \sin 2\gamma (|+00\rangle \langle +11| + |+11\rangle \langle +00| ) \nonumber\\ &-& \frac{q_{R}q_{L}}{2} \sin ^{2}\alpha \sin 2\gamma (|-00\rangle \langle -11| + |-11\rangle \langle -00| ) \nonumber\\ &+& q_{R}^{2} \cos ^{2} \alpha \sin ^{2} \gamma |+11\rangle \langle +11| \nonumber\\ &+& q_{R}^{2} \sin ^{2} \alpha \sin ^{2} \gamma |-11\rangle \langle -11|. 
\nonumber \end{eqnarray} Also, the state that Alice and anti-Bob (in Bob's region II) share is \begin{eqnarray} \rho_{AB_{\mathrm{II}}}^{\Phi_{*}} &=& q_{R}^{2} \cos ^2 \alpha \cos ^2 \gamma |+00\rangle \langle +00| \nonumber\\ &+& \frac{1}{2}(1+(1-2 q_{R}^{2})\cos 2\gamma) \cos ^{2} \alpha (|+10\rangle \langle +10| ) \nonumber\\ &+& \frac{1}{4}(1+(1-2 q_{R}^{2})\cos 2\gamma) \sin 2 \alpha |+10\rangle \langle -01| \nonumber\\ &+& \frac{1}{4}(1+(1-2 q_{R}^{2})\cos 2\gamma) \sin 2 \alpha |-01\rangle \langle +10| \nonumber\\ &+& \frac{1}{2}(1+(1-2 q_{R}^{2})\cos 2\gamma) \sin ^{2}\alpha |-01\rangle \langle -01| \nonumber\\ &+& q_{R}^{2} \sin ^{2} \alpha \cos ^{2} \gamma |-00\rangle \langle -00| \nonumber\\ &-& \frac{q_{R}q_{L}}{2} \cos ^{2}\alpha \sin 2\gamma (|+00\rangle \langle +11| + |+11\rangle \langle +00| ) \nonumber\\ &-& \frac{q_{R}q_{L}}{2} \sin ^{2}\alpha \sin 2\gamma (|-00\rangle \langle -11| + |-11\rangle \langle -00| ) \nonumber\\ &+& q_{L}^{2} \cos ^{2} \alpha \sin ^{2} \gamma |+11\rangle \langle +11| \nonumber\\ &+& q_{L}^{2} \sin ^{2} \alpha \sin ^{2} \gamma |-11\rangle \langle -11|. \nonumber \end{eqnarray} For these two states, entanglement is computed in terms of the negativity and is shown in Fig. \ref{fig6}. Interestingly, compared to the other cases in Secs. \ref{sub1} and \ref{sub2} (see Figs. \ref{fig2} and \ref{fig4}), the curves in Fig. \ref{fig6} are nearly equidistant for different values of $q_{R}$. It is also shown that the entanglement behavior of the two states $ \rho_{ AB_{\mathrm{I}}}^{\Phi_{*}}$ and $ \rho_{ AB_{\mathrm{II}}}^{\Phi_{*}}$ is independent of $q_{R}$ at the infinite acceleration, $\gamma=\frac{\pi}{4}$. Let us also mention the case in which Bob's and anti-Bob's detectors can distinguish between the particle and the antiparticle. 
In this case, it turns out that all density matrices corresponding to the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, the Alice-anti-Bob particle in region II, and the Alice-anti-Bob antiparticle in region II are separable. Thus, the negativity remains zero for all parameters. \subsection{Bipartite mixed states - particle and antiparticle Unruh excitations} \label{sub4} Up to now, we have considered the entanglement of pure states when one of the parties is traveling with a uniform acceleration. We have observed that, at the infinite acceleration, entanglement converges in the same way, which allows us to obtain physical results. In this subsection, we consider a more complicated scenario in which the two parties share a mixed state. Our aim is to discover how the entanglement behavior depends on the mixedness, and also whether its convergence is related to the mixedness. In particular, we consider the case in which white noise is added to a maximally entangled state, the so-called Werner state. For Werner states, the mixedness is parameterized by a single parameter that quantifies how noisy the state is. Suppose that the two parties, Alice and Bob, prepare a Werner state in inertial frames, and then Bob moves with a uniform acceleration. That is, the state can be expressed as follows: \begin{equation} \rho_{W}= F |\Phi_{+} (\alpha=\pi/4) \rangle \langle \Phi_{+} (\alpha=\pi/4) | + \frac{1-F}{4}\mathbb{I}, \end{equation} where the maximally entangled state is taken from Eq. (\ref{eq6}) with $\alpha=\pi/4$. Suppose also that Bob's detector cannot distinguish between the particle and the antiparticle. 
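For orientation, we note the inertial-frame benchmark (a standard computation we add here, treating $|\Phi_{+} (\alpha=\pi/4)\rangle$ as a maximally entangled two-qubit state before any acceleration is involved): the partial transpose of $\rho_{W}$ then has eigenvalues $\frac{1-F}{4}+\frac{F}{2}$ (threefold) and $\frac{1-3F}{4}$, so that \begin{equation} \mathcal{N}(\rho_{W}) = \max\Big(0,\ \frac{3F-1}{4}\Big), \nonumber \end{equation} i.e., $\rho_{W}$ is entangled only for $F>\frac{1}{3}$. 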
Beyond the single-mode approximation, both states that Alice and Bob share in Bob's region I and region II, respectively, are obtained by tracing the other region, as follows: \begin{widetext} \begin{eqnarray} \rho_{AB_{\mathrm{I}}}^{W} &=& \frac{1}{2}F q_{R} \cos ^3 \gamma (|000\rangle \langle 110|+|110\rangle \langle 000|) + \frac{1}{8} \cos ^{2} \gamma (3-2 q_{R}^{2}+F(1-2q_{R}^{2})+(1-F)\cos 2\gamma)|100\rangle \langle 100| \nonumber\\ &+ & \frac{1}{8} \cos ^{2} \gamma (3-2 q_{R}^{2}-F(1-2q_{R}^{2})+(1+F)\cos 2\gamma)|000\rangle \langle 000| - \frac{F q_{L}}{2} \cos ^{2}\gamma \sin \gamma(|001\rangle \langle 100| + |100\rangle \langle 001|) \nonumber\\ &+&\frac{F q_{R}}{2} \cos \gamma \sin ^{2}\gamma(|001\rangle \langle 111| + |111\rangle \langle 001|) +\frac{F q_{L}}{2} \sin ^{3}\gamma(|011\rangle \langle 110| + |110\rangle \langle 011|) \nonumber\\ &+& \frac{1}{4} \sin ^{2} \gamma ((1+F)q_{R}^{2}+(1-F) \sin ^{2}\gamma)|111\rangle \langle 111| + \frac{1}{4} \sin ^{2} \gamma ((1-F)q_{R}^{2}+(1+F) \sin ^{2}\gamma)|011\rangle \langle 011| \nonumber\\ &-& \frac{1}{8} (1-F)q_{L}q_{R} \sin 2 \gamma (|000\rangle \langle 011| +|011\rangle \langle 000|) - \frac{1}{8} (1+F)q_{L}q_{R} \sin 2 \gamma (|100\rangle \langle 111| +|111\rangle \langle 100|)\nonumber\\ &+&\frac{1}{16} (1-F) \sin ^{2} 2\gamma|101\rangle \langle 101|+\frac{1}{16} (1+F) \sin ^{2} 2\gamma|001\rangle \langle 001| + \frac{1}{16} (2(1+F)-2(1+F)(1-2 q_{R}^{2}) \cos 2\gamma \nonumber\\ & &+(1-F)\sin ^{2} 2\gamma)|110\rangle \langle 110| + \frac{1}{16} (2(1-F)-2(1-F)(1-2 q_{R}^{2}) \cos 2\gamma +(1+F)\sin ^{2} 2\gamma) |010\rangle \langle 010|, \nonumber \end{eqnarray} \end{widetext} and \begin{widetext} \begin{eqnarray} \rho_{AB_{\mathrm{II}}}^{W} & = & \frac{1}{2}F q_{L} \cos ^3 \gamma (|000\rangle \langle 110|+|110\rangle \langle 000|) + \frac{1}{8} \sin ^{2} \gamma (3-2 q_{R}^{2}+F(1-2q_{R}^{2}) - (1-F) \cos 2\gamma)|111\rangle \langle 111| \nonumber\\ &+& \frac{1}{8} \sin ^{2} \gamma (3-2 
q_{R}^{2}-F(1-2q_{R}^{2}) -(1+F) \cos 2\gamma)|011\rangle \langle 011| + \frac{F q_{R}}{2} \cos ^{2}\gamma \sin \gamma(|001\rangle \langle 100| + |100\rangle \langle 001|) \nonumber\\ &-&\frac{F q_{L}}{2} \cos \gamma \sin ^{2}\gamma(|001\rangle \langle 111| + |111\rangle \langle 001|) + \frac{F q_{R}}{2} \sin ^{3}\gamma(|011\rangle \langle 110| + |110\rangle \langle 011|) \nonumber\\ &+& \frac{1}{4} \cos ^{2} \gamma ((1+F)q_{R}^{2}+(1-F) \cos ^{2}\gamma)|100\rangle \langle 100| + \frac{1}{4} \cos ^{2} \gamma ((1-F)q_{R}^{2}+(1+F) \cos ^{2}\gamma)|000\rangle \langle 000| \nonumber\\ &-& \frac{1}{8} (1-F)q_{L}q_{R} \sin 2 \gamma (|000\rangle \langle 011| +|011\rangle \langle 000|) - \frac{1}{8} (1+F)q_{L}q_{R} \sin 2 \gamma (|100\rangle \langle 111| +|111\rangle \langle 100|)\nonumber\\ &+&\frac{1}{16} (1-F) \sin ^{2} 2\gamma|101\rangle \langle 101| + \frac{1}{16} (1+F) \sin ^{2} 2\gamma|001\rangle \langle 001| + \frac{1}{16} (2(1+F)+2(1+F)(1-2 q_{R}^{2}) \cos 2\gamma \nonumber\\ &+&(1-F) \sin ^{2} 2\gamma)|110\rangle \langle 110| + \frac{1}{16} (2(1-F)+2(1-F)(1-2 q_{R}^{2}) \cos 2\gamma +(1+F) \sin ^{2} 2\gamma)|010\rangle \langle 010|. \nonumber \end{eqnarray} \end{widetext} \begin{figure}\label{fig7} \end{figure} The entanglement of the states $\rho_{AB_{\mathrm{I}}}^{W}$ and $ \rho_{AB_{\mathrm{II}}}^{W}$ is shown in terms of the negativity in Fig. \ref{fig7}. The entanglement behavior of $\rho_{AB_{\mathrm{I}}}^{W}$ and $\rho_{AB_{\mathrm{II}}}^{W}$ coincides, and is independent of $q_{R}$, at the infinite acceleration $\gamma=\frac{\pi}{4}$. Next, when Bob's and anti-Bob's detectors can distinguish between the particle and the antiparticle, density matrices can be found for the following cases: the Alice-Bob particle in region I, the Alice-Bob antiparticle in region I, the Alice-anti-Bob particle in region II, and the Alice-anti-Bob antiparticle in region II. From these matrices, it is straightforward to compute entanglement using the negativity. In Fig. 
\ref{fig8}, the entanglement behavior is shown. \begin{figure} \caption{(Color online) Entanglement of Alice-Bob particle in region I, Alice-Bob antiparticle in region I, Alice-anti-Bob particle in region II, and Alice-anti-Bob antiparticle in region II, for the state $\rho_{W}$. Parts (a), (b), (c) and (d) show the cases of $q_{R}=1$, $q_{R}=0.75$, $q_{R}=0.5$, and $q_{R}=0.25$, respectively (here we set $F=0.95$). The blue solid line, the red thick dashed one, the green dotted one and the orange dot-dashed one denote the negativity of Alice-Bob particle in region I, Alice-Bob antiparticle in region I, Alice-anti-Bob particle in region II, and Alice-anti-Bob antiparticle in region II, respectively.} \label{fig8} \end{figure} \section{Discussion and Conclusion} \label{con} In this article, we have investigated the entanglement behavior of bipartite quantum states in fermionic systems when one of the parties is traveling with a uniform acceleration. We have employed the recent proposal in Ref. \cite{ref:montero1} for the ordering of operators, because this ordering is chosen such that the field entanglement is relevant to what is observed by detectors. We believe that this is a natural and relevant constraint, as it is designed to yield physical results, in contrast to previous approaches that consider various possibilities depending on the mathematical ordering. Before the present consideration, the construction in Ref. \cite{ref:montero1} was only tested for a particular pure state in Ref. \cite{ref:montero2}. We have applied the construction and considered the entanglement behavior for numerous cases, i.e., bipartite entanglement of pure and mixed states. We have shown that, in all of these cases, the entanglement behavior is physical: in the limit of infinite acceleration, the entanglement converges to a single, finite value. Our considerations include the exemplary states studied in Ref.
\cite{ref:martin}, where a different ordering was applied and, consequently, the convergence property was not achieved in the entanglement behavior. This contrasts with what is shown in the present work, and thus our results provide the correct behavior of entanglement in fermionic systems. \end{document}
\begin{document} \title{Natural Deduction for Assertibility and Deniability\thanks{This paper is an outcome of the project Logical Structure of Information Channels, no. 21-23610M, supported by the Czech Science Foundation and realized at the Institute of Philosophy of the Czech Academy of Sciences.}} \maketitle \begin{abstract} In this paper we split every basic propositional connective into two versions, one called extensional and the other intensional. The extensional connectives are semantically characterized by standard truth conditions that are relative to possible worlds. Assertibility and deniability of sentences built out of atomic sentences by extensional connectives are defined in terms of the notion of truth. The intensional connectives are characterized directly by assertibility and deniability conditions, without the notion of truth. We pay special attention to the deniability condition for intensional implication. We characterize the logic of this mixed language by a system of natural deduction that sheds some light on the inferential behaviour of these two kinds of connectives and on the way they can interact. \end{abstract} \section{Introduction} Christopher Gauker, in his book \cite{gauker05}, put forward an interesting theory of conditionals based on the notions of assertibility and deniability in a context. A simplification of Gauker's theory was proposed in \cite{puncochar16}, where it was shown that the characteristic features of the logic determined by Gauker's semantics are preserved even if we replace Gauker's rather complex notion of context with a much simpler one, according to which contexts are represented by sets of possible worlds.
Besides simplicity, the proposed modification has some further nice properties that Gauker's original theory lacks, concerning a simple form of compositionality, the validity of some plausible argument forms, and a simple treatment of conditionals embedded in antecedents of other conditionals (for more details, see \cite{puncochar16}). A peculiar feature of Gauker's theory is that it reflects one rather surprising phenomenon: logical operators are sensitive to the types of sentences they are applied to. For example, disjunction of two conditional sentences seems to behave differently than disjunction of two elementary sentences. The original semantics from \cite{gauker05} incorporates this ambiguity directly into the formal language. This approach was further elaborated, especially from the philosophical point of view, in \cite{puncochargauker20}. The semantics from \cite{puncochar16} is designed in a different way. It disambiguates the behaviour of logical connectives at the level of the formal language by replacing each connective with two operators, one ``extensional'' and one ``intensional''. It is argued in \cite{puncochar16} that while Gauker's original approach allows us to model some phenomena in a straightforward way, it is technically less elegant than the disambiguation strategy. In this paper, we will focus on the semantics of extensional and intensional connectives proposed in \cite{puncochar16} and we will study the logic determined by this framework. Let us call it \textit{the Logic of Assertibility and Deniability}, or \textsf{LAD}, for short. A syntactic characterization of \textsf{LAD} was provided in \cite{puncochar16}, but only an indirect one, via a translation into a modal logic. The main contribution of this paper is a direct syntactic characterization of \textsf{LAD}. We will show that this logic can be characterized by an elegant system of natural deduction that clarifies the inferential behaviour of both extensional and intensional operators.
\section{Extensional and intensional connectives}\label{extint} Consider the following scenario. A murder has been committed and we have four suspects. The murderer is not known, but it is settled that it must be someone among these suspects. It is also clear that only one person has committed the crime. We have the following description of the suspects: \begin{tabular}{ll} tall man with dark hair and moustache & tall man with blond hair and without moustache \\ short man with blond hair and moustache & short man with dark hair and without moustache \\ \end{tabular} Now it seems that in this context one is entitled to assert the premises but not the conclusion of the following argument that has a seemingly valid form: \begin{tabular}{ll} Premise 1 &\textit{The murderer is either tall or short.} \\ Premise 2 &\textit{If the murderer is tall then he has a moustache if he has dark hair.} \\ Premise 3 &\textit{If the murderer is short then he has a moustache if he has blond hair.} \\ \hline Conclusion &\textit{It either holds that the murderer has a moustache if he has dark hair} \\ & \textit{or that the murderer has a moustache if he has blond hair.} \\ \end{tabular} The assertibility of the premises is clear from the description of the context. But the conclusion does not seem to be assertible. As a reason for this intuition one might say that neither of the two disjuncts in the conclusion is assertible in the context. But the same holds for the first premise: neither of the disjuncts is assertible in the context, and yet the whole disjunction clearly is. This puzzling phenomenon illustrates what we vaguely described in the introduction as the sensitivity of logical operators. The first premise says that in each possibility (``possible world''), one of the disjuncts is true. The disjunction connecting two conditionals in the conclusion says something different, namely that at least one of the two conditionals holds with respect to the whole context.
We can describe these two cases as involving two different logical operators, one operating on the level of possible worlds and the other operating on the level of the whole context. One can easily formulate similar examples that involve negation of conditionals (see \cite{puncochar16}). For a discussion of similar phenomena, see, e.g. \cite{yalcin12} or \cite{bledin14}. Such examples motivate splitting each of the basic logical connectives (implication, conjunction, disjunction, and negation) into two versions, one will be called ``extensional'' and the other one ``intensional'': \begin{tabular}{lcccc} Extensional connectives: & $\supset$ & $\cap$ & $\cup$ & $\sim$ \\ Intensional connectives: & $\rightarrow$ & $\wedge$ & $\vee$ & $\neg$ \\ \end{tabular} Let $L$ be the language containing all atomic formulas and all formulas which can be constructed out of the atomic formulas using the extensional connectives. The language $L^*$ is the smallest set of formulas containing all $L$-formulas and closed under the application of the intensional connectives. The two languages can be concisely introduced as follows: \begin{tabular}{lc} $L$: & $\alpha= p \mid \alpha \supset \alpha \mid \alpha \cap \alpha \mid \alpha \cup \alpha \mid {\sim}\alpha$\\ $L^*$: & $\varphi= \alpha \mid \varphi \rightarrow \varphi \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \neg \varphi$ \end{tabular} The argument above would be formalized in the language $L^*$ in the following way (note that the language forbids us from connecting two intensional implications by the extensional disjunction): \begin{tabular}{l} $p \cup q, p \rightarrow (r \rightarrow t), q \rightarrow (s \rightarrow t) / (r \rightarrow t) \vee (s \rightarrow t)$ \end{tabular} The Greek letters $\alpha, \beta, \gamma$ will range over $L$-formulas and the letters $\varphi, \psi, \chi, \vartheta$ over $L^*$-formulas. Let $A$ be a set of atomic formulas. 
A possible $A$-world is any function that assigns a truth value ($1$~representing \textit{truth} or $0$ representing \textit{falsity}) to every atomic formula from $A$. An $A$-context is a nonempty set of possible $A$-worlds. The specification of the set $A$ will be omitted if no confusion arises. Note that, by definition, there cannot be two different possible $A$-worlds in which exactly the same atomic formulas are true. For the complex formulas of the language $L$, the truth and falsity conditions with respect to individual possible worlds are those of classical propositional logic. $\Vdash^{+}$ and $\Vdash^{-}$ will now respectively stand for the relations of assertibility and deniability between contexts and formulas of the language $L^*$. The assertibility and deniability conditions taken from \cite{puncochar16} (and motivated by \cite{gauker05}) are defined in the following way: \begin{tabular}{l} $C \Vdash^{+} \alpha$ iff for all $w \in C$, $\alpha$ is true in $w$.\\ $C \Vdash^{-} \alpha$ iff for all $w \in C$, $\alpha$ is false in $w$.\\ $C \Vdash^{+} \neg\varphi$ iff $C \Vdash^{-} \varphi$.\\ $C \Vdash^{-} \neg \varphi$ iff $C \Vdash^{+} \varphi$.\\ $C \Vdash^{+} \varphi \vee \psi$ iff $C \Vdash^{+} \varphi$ or $C \Vdash^{+} \psi$.\\ $C \Vdash^{-} \varphi \vee \psi$ iff $C \Vdash^{-} \varphi$ and $C \Vdash^{-} \psi$.\\ $C \Vdash^{+} \varphi \wedge \psi$ iff $C \Vdash^{+} \varphi$ and $C \Vdash^{+} \psi$.\\ $C \Vdash^{-} \varphi \wedge \psi$ iff $C \Vdash^{-} \varphi$ or $C \Vdash^{-} \psi$.\\ $C \Vdash^{+} \varphi \rightarrow \psi$ iff $D \Vdash^{+} \psi$ for all nonempty $D \subseteq C$, such that $D \Vdash^{+} \varphi$.\\ $C \Vdash^{-} \varphi \rightarrow \psi$ iff $D \Vdash^{-} \psi$ for some nonempty $D \subseteq C$, such that $D \Vdash^{+} \varphi$.\\ \end{tabular} The consequence relation is defined as assertibility preservation. 
That is, a set of $L^*$-formulas $\Delta$ entails an $L^*$-formula $\psi$ (symbolically $\Delta \vDash \psi$) if $\psi$ is assertible in every context in which all formulas from $\Delta$ are assertible. This consequence relation represents what we here call \textit{the Logic of Assertibility and Deniability} (\textsf{LAD}). We say that two $L^*$-formulas, $\varphi$ and $\psi$, are (logically) equivalent in this logic (symbolically $\varphi \equiv \psi$) if they are assertible in the same contexts, that is, if $\varphi \vDash \psi$ and $\psi \vDash \varphi$. They are strongly equivalent (symbolically $\varphi \rightleftharpoons \psi$) if they are not only assertible but also deniable in the same contexts, that is, if $\varphi \equiv \psi$ and $\neg \varphi \equiv \neg \psi$. It can be shown by induction that strongly equivalent formulas are universally interchangeable (this does not hold for mere equivalence). Note that Gauker's original semantics from \cite{gauker05} does not have this property. \begin{proposition} Assume that $\varphi \rightleftharpoons \psi$, and $\varphi$ occurs as a subformula in $\chi$. If $\chi'$ is the result of replacing an occurrence of the subformula $\varphi$ in $\chi$ with $\psi$, then $\chi \rightleftharpoons \chi'$. \end{proposition} Note that the assertibility clause for disjunction formally corresponds to the support condition for inquisitive disjunction used in inquisitive semantics (see \cite{ciardelliroelofsen11}). However, in contrast to inquisitive semantics we do not treat intensional disjunction as a question-generating operator but as a statement-generating operator (e.g., the formula $(r \rightarrow t) \vee (s \rightarrow t)$ in the conclusion of our example does not represent a question but a statement). The assertibility clauses for conjunction and implication also correspond to the support clauses for these connectives in inquisitive semantics.
Technically speaking, if negation is omitted, \textsf{LAD} just corresponds to inquisitive logic (which is axiomatized for example in \cite{ciardelliroelofsen11}). What is different from the standard inquisitive logic is our treatment of negation via the deniability conditions. The deniability conditions for (intensional) negation, disjunction and conjunction are the standard ones, typically used in a bilateralist setting (see, e.g., \cite{odintsov13}). The most tricky case is the deniability condition for implication. There are two natural candidates for an alternative deniability clause: \begin{tabular}{ll} (A) & $C \Vdash^{-} \varphi \rightarrow \psi$ iff $C \Vdash^{+} \varphi$ and $C \Vdash^{-} \psi$.\\ (B) & $C \Vdash^{-} \varphi \rightarrow \psi$ iff $D \Vdash^{-} \psi$ for all nonempty $D \subseteq C$, such that $D \Vdash^{+} \varphi$.\\ \end{tabular} The option (A) is known from Nelson logic (see \cite{odintsov13}). However, this option licenses the inference from $\neg (\varphi \rightarrow \psi)$ to $\varphi$ (and also to $\neg \psi$), which we found highly problematic from the natural language point of view (consider the following argument: \textit{It is not the case that if I die today, I will be living tomorrow. Therefore I will die today.}). The option (B) looks much more plausible. This option is known from connexive logic (see \cite{wansing21}). It makes $\neg (\varphi \rightarrow \psi)$ strongly equivalent to $\varphi \rightarrow \neg \psi$. This is indeed a reasonable way to read negations of conditionals. This clause was explored in more detail within the semantics of assertibility and deniability in \cite{puncochar14}. In our current setting the clause (B) would allow us to completely eliminate intensional negations from any $L^*$-formula. 
Since we also have double negation and DeMorgan laws guaranteed by the other semantic clauses, we could push all the intensional negations occurring in a formula to the subformulas that are already in the language $L$. For example, in the formula $\neg (\neg (p \wedge {\sim}s) \rightarrow (p \cup {\sim}q))$ we could push the negations inside the formula in the following way: $(\neg p \vee \neg{\sim}s) \rightarrow \neg (p \cup {\sim}q)$. Moreover, observe that for any $L$-formula $\alpha$ we have $\neg \alpha \rightleftharpoons {\sim}\alpha$, so when pushed so that it applies to an $L$-formula, intensional negation can be replaced with extensional negation (we would obtain $({\sim}p \vee {\sim}{\sim}s) \rightarrow {\sim}(p \cup {\sim}q)$, in our example). Hence, if the deniability clause (B) for implication is employed, every occurrence of intensional negation in any $L^*$-formula can be dissolved and thus intensional negation does not add any extra expressive power. Aside from making intensional negation redundant, the clause (B) has one other questionable consequence. If this clause is employed, there are formulas that are assertible as well as deniable in some contexts. There are even formulas that are assertible as well as deniable in every context and thus there are formulas of the form $\varphi \wedge \neg \varphi$ regarded as logically valid. A concrete example is obtained when we substitute $(p \wedge \neg p) \rightarrow (p \vee \neg p)$ for $\varphi$. Hence, the resulting logic is not only paraconsistent but it is in this sense inconsistent (see \cite{wansing21} for a more detailed discussion of this phenomenon). From a certain point of view, this may not be conceived as a substantial problem. 
However, in this paper, we want to maintain consistency and thus we reject clause (B) in favour of the clause stated above, according to which a conditional is deniable in a context $C$ if and only if there is some subcontext of $C$ in which the antecedent is assertible and the consequent is deniable. This deniability clause corresponds to the one proposed in Gauker's book \cite{gauker05}. It can be viewed as a weakening of the strong clause (A) in the sense that $\neg (\varphi \rightarrow \psi)$ is not equivalent to $\varphi \wedge \neg \psi$ but rather to a mere possibility of $\varphi \wedge \neg \psi$. This way of denying conditionals seems to have strong support from natural language. For example, with respect to the context above, one can deny the claim \textit{if the murderer is tall, he has dark hair} because it is possible (from the perspective of the context) that the murderer is tall and does not have dark hair. In comparison with (B), we can observe that our deniability clause allows us to prove by induction the following fact. \begin{proposition} There is no context $C$ and no $L^*$-formula $\varphi$ such that $C \Vdash^+ \varphi$ and $C \Vdash^- \varphi$. \end{proposition} Another feature that distinguishes our deniability clause for implication from (B) is that it leads to the classical behaviour of intensional connectives in the special case where the context contains only one world. Thus in singleton contexts the assertibility and deniability clauses for intensional connectives coincide with the truth and falsity clauses for the extensional connectives. To express this fact more formally, we will use the following notation. For any $L^*$-formula $\varphi$, $\varphi^{e}$ denotes the $L$-formula which is the result of replacing all intensional connectives in $\varphi$ with their extensional counterparts. Then the following fact can be easily established by induction.
\begin{proposition}\label{p3} $\left\{w\right\} \Vdash^{+} \varphi$ iff $\varphi^{e}$ is true in $w$, and $\left\{w\right\} \Vdash^{-} \varphi$ iff $\varphi^{e}$ is false in $w$. \end{proposition} Since the truth conditions for the extensional connectives are the standard ones, the consequence relation restricted to the language $L$ is classical. \begin{proposition} For the formulas of the language $L$ the consequence relation coincides with the consequence relation of classical logic. \end{proposition} Of course, in general intensional and extensional connectives behave differently. Let us illustrate this with the murder scenario described above. Let $A$  be the set of atomic formulas $\{p, q, r, s, t \}$. Consider the following formalization and an $A$-context consisting of four possible $A$-worlds that differ from each other on who among the suspects committed the murder. \begin{tabular}{llcccc} && $w_1$ & $w_2$ & $w_3$ & $w_4$ \\ \textit{The murderer is tall.} &$p$ & 1 & 1 & 0 & 0 \\ \textit{The murderer is short.} &$q$ & 0 & 0 & 1 & 1 \\ \textit{The murderer has dark hair.} &$r$ & 1 & 0 & 0 & 1\\ \textit{The murderer has blond hair.} &$s$ & 0 & 1 & 1 & 0 \\ \textit{The murderer has a moustache.} &$t$ & 1 & 0 & 1 & 0 \\ \end{tabular} Now one can check that in this context all the premises of the argument, namely the formulas $p \cup q$, $p \rightarrow (r \rightarrow t)$, and $q \rightarrow (s \rightarrow t)$, are assertible but the conclusion $(r \rightarrow t) \vee (s \rightarrow t)$  is not. Our deniability condition for conditionals is an important ingredient of the semantics because it increases the expressive power of the language and introduces a specific kind of non-persistent formulas. We say that a formula is \textit{persistent} if it holds that whenever it is assertible in a context, it is assertible in every subcontext (i.e. in every nonempty subset) of the context. 
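Since the assertibility and deniability clauses are directly executable, the claims about the murder context can be checked mechanically. The following Python sketch is only an illustration: the tuple encoding of formulas and the representation of worlds as sets of true atoms are our own choices, not the paper's notation (lowercase tags stand for extensional connectives, capitalized tags for intensional ones).

```python
from itertools import combinations

L_OPS = {'atom', 'not', 'and', 'or', 'imp'}   # extensional connectives

def truth(w, a):
    """Classical truth of an L-formula in world w (a set of true atoms)."""
    op = a[0]
    if op == 'atom': return a[1] in w
    if op == 'not':  return not truth(w, a[1])
    if op == 'and':  return truth(w, a[1]) and truth(w, a[2])
    if op == 'or':   return truth(w, a[1]) or truth(w, a[2])
    if op == 'imp':  return (not truth(w, a[1])) or truth(w, a[2])

def subcontexts(C):
    """All nonempty subsets of context C."""
    ws = list(C)
    return [frozenset(s) for r in range(1, len(ws) + 1)
            for s in combinations(ws, r)]

def assertible(C, f):
    op = f[0]
    if op in L_OPS:                  # C ||-+ alpha iff alpha is true in every world
        return all(truth(w, f) for w in C)
    if op == 'Neg': return deniable(C, f[1])
    if op == 'Or':  return assertible(C, f[1]) or assertible(C, f[2])
    if op == 'And': return assertible(C, f[1]) and assertible(C, f[2])
    if op == 'Imp': return all(assertible(D, f[2]) for D in subcontexts(C)
                               if assertible(D, f[1]))

def deniable(C, f):
    op = f[0]
    if op in L_OPS:                  # C ||-- alpha iff alpha is false in every world
        return all(not truth(w, f) for w in C)
    if op == 'Neg': return assertible(C, f[1])
    if op == 'Or':  return deniable(C, f[1]) and deniable(C, f[2])
    if op == 'And': return deniable(C, f[1]) or deniable(C, f[2])
    if op == 'Imp': return any(deniable(D, f[2]) for D in subcontexts(C)
                               if assertible(D, f[1]))

# The murder context: worlds w1..w4 as in the table above.
p, q, r, s, t = (('atom', x) for x in 'pqrst')
w1, w2, w3, w4 = map(frozenset, ({'p', 'r', 't'}, {'p', 's'},
                                 {'q', 's', 't'}, {'q', 'r'}))
C = frozenset({w1, w2, w3, w4})

premises = [('or', p, q),                      # p "cup" q (extensional)
            ('Imp', p, ('Imp', r, t)),
            ('Imp', q, ('Imp', s, t))]
conclusion = ('Or', ('Imp', r, t), ('Imp', s, t))
print(all(assertible(C, f) for f in premises))   # True
print(assertible(C, conclusion))                 # False

# Non-persistence of neg(p -> q): assertible in C but not in {w3, w4}.
g = ('Neg', ('Imp', p, q))
print(assertible(C, g), assertible(frozenset({w3, w4}), g))   # True False
```

Running the sketch confirms that all three premises are assertible in $C$ while the conclusion is not, and that $\neg(p \rightarrow q)$ is assertible in $C$ but not in the subcontext $\{w_3, w_4\}$, i.e. it is not persistent.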
It can be proved by induction that every $L^*$-formula that does not contain any intensional implication in the scope of an intensional negation is persistent. This claim does not hold generally. For example, for the above context $C=\{ w_1, w_2, w_3, w_4 \}$ we have $C \Vdash^+ \neg (p \rightarrow q)$ because there is a subcontext $D=\{w_1, w_2 \}$ such that $D \Vdash^+ p$ and $D \Vdash^- q$. However, there is a subcontext $E \subseteq C$, e.g. $E = \{w_3, w_4 \}$, such that $E \nVdash^+ \neg (p \rightarrow q)$. Note that every formula of the form $\varphi \rightarrow \psi$ is also persistent, even if it contains an occurrence of intensional implication in the scope of intensional negation. We would like to say that persistent formulas play a substantially different role in inferences than non-persistent formulas. However, the notion of a persistent formula was defined \textit{semantically} and thus we cannot use it directly in the formulation of a syntactic deductive system. Nevertheless, we just specified \textit{syntactically} a large class of formulas that have this semantic property. We will call them safe. We say that a formula is \textit{safe} if either it does not contain any $\rightarrow$ in the scope of $\neg$, or it is of the form $\varphi \rightarrow \psi$. \begin{proposition}\label{l: persistence of safe formulas} Every safe formula is persistent. \end{proposition} Safe formulas will play an important role in the formulation of our system of natural deduction. The subsequent completeness proof will indicate that this syntactic notion sufficiently approximates the notion of a persistent formula. Now we will show that the extra expressive power gained from the deniability clause for implication gives us functional completeness. To describe this fact more precisely, we will use the following defined symbols that will also be used later in the system of natural deduction.
\begin{tabular}{ll} (a) & $\bot =_{def} p \cap {\sim} p$, for some selected fixed atomic formula $p$,\\ (b) & $\lozenge \varphi =_{def} \neg (\varphi\rightarrow \bot)$,\\ (c) & $\alpha_1 \oplus \ldots \oplus \alpha_n =_{def} (\alpha_1 \cup \ldots \cup \alpha_n) \wedge (\lozenge \alpha_1 \wedge \ldots \wedge \lozenge \alpha_n)$. \end{tabular} The symbol $\bot$ represents a \textit{contradiction}. It can be easily observed that $C \Vdash^- \bot$, for every context $C$. The symbol $\lozenge$ expresses a contextual possibility. It holds that $C \Vdash^+ \lozenge \varphi$ iff there is a nonempty $D \subseteq C$ such that $D \Vdash^+ \varphi$. Finally, $\oplus$ represents a ``pragmatic disjunction'', i.e. an extensional disjunction but such that all its disjuncts are possible (which is usually regarded as a pragmatic feature of disjunction). If a finite set $A$ of atomic formulas is fixed, any $A$-world $w$ can be described by an $L$-formula $\sigma_w$ in the usual way. For example, in the murder scenario above, the formula $\sigma_{w_1}$ would be $p \cap {\sim}q \cap r \cap {\sim}s \cap t$. Now any $A$-context $C=\{w_1, \ldots, w_n \}$ can be described by the formula $\mu_C=\sigma_{w_1} \oplus \ldots \oplus \sigma_{w_n}$. Moreover, any set of $A$-contexts $X=\{ C_1, \ldots, C_k \}$ can be described by the formula $\xi_X=\mu_{C_1} \vee \ldots \vee \mu_{C_k}$. Now we can state the functional completeness result. \begin{proposition} Let $A$ be a finite set of atomic formulas. Then it holds for any $A$-contexts $C, D$ that $C \Vdash^+ \mu_D$ iff $D=C$. Moreover, it holds for any set of $A$-contexts $X$, and any $A$-context $C$ that $C \Vdash^+ \xi_X$ iff $C \in X$. \end{proposition} A similar result was observed in \cite{puncochar15} for inquisitive logic with an operation called ``weak negation''. In fact, the crucial part of the proof of our main result will be a syntactic reconstruction of weak negation within our system of natural deduction.
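The functional completeness claim can be verified by brute force over a small alphabet. The following self-contained Python sketch is only illustrative: the tuple encoding of formulas, the helper names, and the restriction to two atoms are our own assumptions, not the paper's notation. It implements the assertibility and deniability clauses, the defined symbols $\bot$ and $\lozenge$, the world descriptions $\sigma_w$, the context descriptions $\mu_C$, and checks that $C \Vdash^{+} \mu_D$ iff $C = D$.

```python
from itertools import combinations

L_OPS = {'atom', 'not', 'and', 'or', 'imp'}   # extensional connectives
ATOMS = ('p', 'q')                            # a small fixed alphabet A

def truth(w, a):
    """Classical truth of an L-formula in world w (a set of true atoms)."""
    op = a[0]
    if op == 'atom': return a[1] in w
    if op == 'not':  return not truth(w, a[1])
    if op == 'and':  return truth(w, a[1]) and truth(w, a[2])
    if op == 'or':   return truth(w, a[1]) or truth(w, a[2])
    if op == 'imp':  return (not truth(w, a[1])) or truth(w, a[2])

def subcontexts(C):
    """All nonempty subsets of context C."""
    ws = list(C)
    return [frozenset(s) for r in range(1, len(ws) + 1)
            for s in combinations(ws, r)]

def assertible(C, f):
    op = f[0]
    if op in L_OPS: return all(truth(w, f) for w in C)
    if op == 'Neg': return deniable(C, f[1])
    if op == 'Or':  return assertible(C, f[1]) or assertible(C, f[2])
    if op == 'And': return assertible(C, f[1]) and assertible(C, f[2])
    if op == 'Imp': return all(assertible(D, f[2]) for D in subcontexts(C)
                               if assertible(D, f[1]))

def deniable(C, f):
    op = f[0]
    if op in L_OPS: return all(not truth(w, f) for w in C)
    if op == 'Neg': return assertible(C, f[1])
    if op == 'Or':  return deniable(C, f[1]) and deniable(C, f[2])
    if op == 'And': return deniable(C, f[1]) or deniable(C, f[2])
    if op == 'Imp': return any(deniable(D, f[2]) for D in subcontexts(C)
                               if assertible(D, f[1]))

BOT = ('and', ('atom', 'p'), ('not', ('atom', 'p')))   # (a): bottom = p "cap" ~p

def dia(f):
    """(b): contextual possibility, diamond(f) = Neg(f -> bottom)."""
    return ('Neg', ('Imp', f, BOT))

def sigma(w):
    """sigma_w: the L-description of world w as a conjunction of literals."""
    lits = [('atom', a) if a in w else ('not', ('atom', a)) for a in ATOMS]
    f = lits[0]
    for lit in lits[1:]:
        f = ('and', f, lit)
    return f

def mu(C):
    """mu_C: the 'pragmatic disjunction' (+) of the sigma_w for w in C."""
    sigs = [sigma(w) for w in sorted(C, key=sorted)]
    disj, conj = sigs[0], dia(sigs[0])
    for sg in sigs[1:]:
        disj, conj = ('or', disj, sg), ('And', conj, dia(sg))
    return ('And', disj, conj)

all_worlds = [frozenset(s) for r in range(len(ATOMS) + 1)
              for s in combinations(ATOMS, r)]
all_contexts = subcontexts(frozenset(all_worlds))

# C ||-+ mu_D holds exactly when C == D, over all 15 x 15 context pairs:
print(all((assertible(C, mu(D)) == (C == D))
          for C in all_contexts for D in all_contexts))   # True
```

The same loop structure extends to the formulas $\xi_X$ for sets of contexts, although the number of such sets grows quickly (already $2^{15}$ for two atoms).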
\section{Deductive calculus} In this section we define a Fitch style system of natural deduction for the semantics of assertibility and deniability. We will use the following economic notation. We use brackets and the colon to refer to subproofs. For example, $[\varphi: \psi]$ stands for a subproof in which $\varphi$ is the hypothetical assumption and $\psi$ is the last line of the subproof. We use two kinds of brackets, square and round. The difference is in the formulas from the outer proof that can be used in the subproof. By using square brackets in $[\varphi: \psi]$ we indicate that all formulas that are available in the step before the hypothetical assumption $\varphi$ is made are also available under this assumption in the derivation of $\psi$ from $\varphi$. This is just as in the standard natural deduction systems for classical and intuitionistic logic. The round brackets indicate that there is a restriction concerning the formulas from the outer proof that can be used under the assumption. By writing $(\varphi: \psi)$, we indicate that only \textit{safe} formulas from the outer proof can be used in the derivation of $\psi$ from $\varphi$. (Safe formulas are defined in the previous section.) The distinction between square and round brackets reflects the semantic distinction between two kinds of hypothetical assumption. Square brackets, i.e. $[\varphi: \psi]$, indicate that by making the hypothetical assumption $\varphi$ in a context $C$ we do not change the given context, we only assume that the assumption $\varphi$ is assertible in $C$. In contrast, round brackets, i.e. $(\varphi: \psi)$, indicate that by making the hypothetical assumption $\varphi$ in a context $C$ we move from $C$ to an arbitrary subcontext $D$ of $C$ in which $\varphi$ is assertible. If we have already proved that a formula is assertible in $C$, we can use it under the assumption $\varphi$ (in $D$) only if it is guaranteed that it is persistent. 
Proposition \ref{l: persistence of safe formulas} guarantees that all safe formulas are persistent. We split the rules of the calculus into three groups. The first group contains the ``classical'' introduction and elimination rules for the extensional connectives. \begin{tabular}{llll} (i$\cap$) & $\alpha, \beta / \alpha \cap \beta$ & (e$\cap_1$) & $\alpha \cap \beta / \alpha$ \\ && (e$\cap_2$) & $\alpha \cap \beta / \beta$ \\[0,3cm] (i$\cup_1$) & $\alpha / \alpha \cup \beta$ & (e$\cup$) & $\alpha \cup \beta, (\alpha:\gamma), (\beta:\gamma) / \gamma$\\ (i$\cup_2$) & $\beta / \alpha \cup \beta$ && \\[0,3cm] (i$\supset$) & $(\alpha:\beta)/\alpha \supset \beta$ & (e$\supset$) & $\alpha, \alpha \supset \beta / \beta$\\[0,3cm] (i$\sim$) & $(\alpha:\bot)/{\sim}\alpha$ & (e$\sim_1$) & $\alpha, {\sim}\alpha/\bot$\\ && (e$\sim_2$) & ${\sim}{\sim}\alpha/\alpha$\\ \end{tabular} \noindent The second group contains the following ``intuitionistic''  rules concerning the intensional connectives (but notice the restrictions indicated by round brackets and the fact that (i$\neg$) is restricted to $L$-formulas): \begin{tabular}{llll} (i$\wedge$) & $\varphi,\psi/\varphi \wedge \psi$ & (e$\wedge_1$) & $\varphi \wedge \psi/ \varphi$ \\ && (e$\wedge_2$) & $\varphi \wedge \psi/ \psi$ \\[0,3cm] (i$\vee_1$) & $\varphi/ \varphi \vee \psi$ & (e$\vee$) & $\varphi \vee \psi, [\varphi:\chi], [\psi:\chi] / \chi$\\ (i$\vee_2$) & $\psi/ \varphi \vee \psi$ && \\[0,3cm] (i$\rightarrow$) & $(\varphi:\psi)/\varphi \rightarrow \psi$ & (e$\rightarrow$) & $\varphi, \varphi \rightarrow \psi/ \psi$\\[0,3cm] (i$\neg$) & $(\alpha:\bot)/\neg \alpha$ & (e$\neg$) & $\varphi, \neg \varphi / \bot$\\ && (EFQ) & $\bot /\varphi$ \end{tabular} The third group consists of the rules that characterize the interaction of intensional negation with all intensional operators, plus two extra rules, (CEM), i.e. 
``contextual excluded middle'', and ($\lozenge\oplus$): \begin{tabular}{llll} ($\neg\neg_1$) & $\neg \neg \varphi / \varphi$ & ($\neg\neg_2$) & $\varphi/ \neg \neg \varphi$ \\[0,1cm] ($\neg\wedge_1$) & $\neg (\varphi \wedge \psi) / \neg \varphi \vee \neg \psi$ & ($\neg\wedge_2$) & $\neg \varphi \vee \neg \psi / \neg (\varphi \wedge \psi)$ \\[0,1cm] ($\neg\vee_1$) & $\neg (\varphi \vee \psi) / \neg \varphi \wedge \neg \psi$ & ($\neg\vee_2$) & $\neg \varphi \wedge \neg \psi / \neg (\varphi \vee \psi)$ \\[0,1cm] ($\neg{\rightarrow}_1$) & $\neg (\varphi \rightarrow \psi) / \lozenge (\varphi \wedge \neg \psi)$ & ($\neg{\rightarrow}_2$) & $\lozenge (\varphi \wedge \neg \psi)/ \neg (\varphi \rightarrow \psi)$ \\[0,1cm] (CEM) & $/ (\varphi \rightarrow \bot) \vee \lozenge \varphi$ & ($\lozenge\oplus$) & $\lozenge \alpha_{1} \wedge \ldots \wedge \lozenge \alpha_{n}/\lozenge (\alpha_{1} \oplus \ldots \oplus \alpha_{n})$ \\[0,1cm] \end{tabular} We will write $\varphi_1, \ldots, \varphi_n \vdash \psi$ if $\psi$ is derivable in this system from the assumptions $\varphi_1, \ldots, \varphi_n$. As we already mentioned, our semantics has some connection to inquisitive semantics. Note, however, that our deductive system is very different from the standard system of natural deduction for inquisitive logic (see, e.g., \cite{ciardelli16}), though it has some similarities with the system for inquisitive logic with weak negation developed in \cite{puncochar15}. We can illustrate the role of restrictions related to round brackets with the following example. 
If the restriction given by the round brackets were not present we could derive, for example, the contradiction $\bot$ from the premise $\lozenge p \wedge \lozenge {\sim} p$ in the following way: {\footnotesize \begin{itemize} \item[] \begin{fitch} \lozenge p \wedge \lozenge {\sim} p = \neg (p \rightarrow \bot) \wedge \neg ({\sim} p \rightarrow \bot) & premise \\ p \cup {\sim} p & the standard derivation of excluded middle \\ \fh p & hypothetical assumption \\ \vline\hspace*{\fitchindent} \fh {\sim} p & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \bot & 3,4, (e${\sim}_1$) \\ \vline\hspace*{\fitchindent} {\sim} p \rightarrow \bot & 4-5, (i$\rightarrow$) \\ \vline\hspace*{\fitchindent} \neg ({\sim} p \rightarrow \bot) & 1, (e$\wedge_2$) \\ \vline\hspace*{\fitchindent} \bot & 6,7, (e$\neg$) \\ \fh {\sim} p & hypothetical assumption \\ \vline\hspace*{\fitchindent} \fh p & hypothetical assumption \\%10 \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \bot & 9,10, (e${\sim}_1$) \\ \vline\hspace*{\fitchindent} p \rightarrow \bot & 10-11, (i$\rightarrow$) \\ \vline\hspace*{\fitchindent} \neg (p \rightarrow \bot) & 1, (e$\wedge_1$) \\ \vline\hspace*{\fitchindent} \bot & 12,13, (e$\neg$) \\ \bot & 2,3-8,9-14, (e$\cup$) \\ \end{fitch} \end{itemize}} This would be an undesirable result. In our semantics, $\lozenge p \wedge \lozenge {\sim} p$ does not entail $\bot$ which accords with the intuition that $p$ and ${\sim} p$  can consistently be both possible. As it stands, the derivation is incorrect in the system, because in the steps 7 and 13 the restriction on the rule (e$\cup$) was not respected and the unsafe formula $\lozenge p \wedge \lozenge {\sim} p$ occurring in the outer proof was used under the hypothetical assumptions. We can observe that all rules of the system are sound with respect to the semantics. To illustrate how the soundness proof goes let us consider the rules (e$\cup$) and (e$\vee$). 
Let $\Delta^s$ denote the set of safe formulas from $\Delta$. Soundness of the two rules corresponds respectively to the following two semantic facts: \begin{itemize} \item[(a)] If $\Delta^s, \alpha \vDash \gamma$ and $\Delta^s, \beta \vDash \gamma$ then $\Delta, \alpha \cup \beta \vDash \gamma$. \item[(b)] If $\Delta, \varphi \vDash \chi$ and $\Delta, \psi \vDash \chi$ then $\Delta, \varphi \vee \psi \vDash \chi$. \end{itemize} In order to prove (a), assume that $\Delta^s, \alpha \vDash \gamma$ and $\Delta^s, \beta \vDash \gamma$. Let $C$ be a context in which all formulas from $\Delta$ and the formula $\alpha \cup \beta$  are assertible. Take an arbitrary possible world $w \in C$. All formulas of $\Delta^s$ are assertible in $\{w\}$ (due to persistence of safe formulas). Moreover, $\alpha$ or $\beta$ is assertible in $\{w\}$. It follows from our assumption that $\gamma$ is assertible in $\{w\}$. Since this holds for every $w \in C$, and $\gamma$ is an $L$-formula, we obtain $C \Vdash^+ \gamma$, as required. In order to prove (b), assume that $\Delta, \varphi \vDash \chi$ and $\Delta, \psi \vDash \chi$. Let $C$ be a context in which all formulas from $\Delta$ and the formula $\varphi \vee \psi$  are assertible. Then $\varphi$ or $\psi$ is assertible in $C$. It follows from our assumption that $\chi$ is assertible in $C$, as required. \section{Completeness} The proof of completeness proceeds in the following steps. First a contextual weak negation $-\varphi$  is defined recursively. This negation is a denial of assertibility, that is, $-\varphi$ is assertible in a given context iff $\varphi$  is not assertible in that context. It has to be shown that this negation behaves properly also on the syntactic side. That means that the following holds: $\Delta \nvdash \varphi$ if and only if $\Delta \cup \{-\varphi\}$ is consistent, i.e. $\Delta, -\varphi \nvdash \bot$. The proof of this fact will be the main task of this section. 
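As a reading aid, the recursion defining $-\varphi$ (its five clauses are spelled out below) can be sketched as a small formula transformer. This is an illustrative sketch only: the tuple encoding and the names `wneg`, `atom`, `neg`, `imp`, `conj`, `disj`, `diamond` are ours, not the paper's, and $\lozenge\chi$ is kept as a primitive constructor rather than unfolded to $\neg(\chi \rightarrow \bot)$.

```python
def wneg(f):
    """Contextual weak negation -f, following the five recursive clauses.

    Formulas are tuples: ('atom', name), ('neg', g), ('imp', a, b),
    ('conj', a, b), ('disj', a, b), ('diamond', g).
    """
    op = f[0]
    if op == 'atom':                                   # clause 1: -a = <>(neg a)
        return ('diamond', ('neg', f))
    if op == 'imp':                                    # clause 3(a)
        return ('diamond', ('conj', f[1], wneg(f[2])))
    if op == 'conj':                                   # clause 4(a)
        return ('disj', wneg(f[1]), wneg(f[2]))
    if op == 'disj':                                   # clause 5(a)
        return ('conj', wneg(f[1]), wneg(f[2]))
    if op == 'neg':
        g = f[1]
        if g[0] == 'atom':                             # clause 1: -(neg a) = <>a
            return ('diamond', g)
        if g[0] == 'neg':                              # clause 2: -(neg neg f) = -f
            return wneg(g[1])
        if g[0] == 'imp':                              # clause 3(b)
            return ('imp', g[1], wneg(('neg', g[2])))
        if g[0] == 'conj':                             # clause 4(b)
            return ('conj', wneg(('neg', g[1])), wneg(('neg', g[2])))
        if g[0] == 'disj':                             # clause 5(b)
            return ('disj', wneg(('neg', g[1])), wneg(('neg', g[2])))
    raise ValueError('not covered by the recursive definition')

p, q = ('atom', 'p'), ('atom', 'q')
# By clauses 4(a) and 1: -(p and q) = -p or -q = <>(neg p) or <>(neg q).
print(wneg(('conj', p, q)))
```

The recursion terminates because every clause either strips a connective or pushes the minus sign under one intensional negation, exactly as in the clauses of the paper.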
The reconstruction of weak negation allows us to reduce completeness to the claim that every consistent set has a model. We will only sketch the proof of this claim because the technique is basically the same as the one used in the completeness proof for inquisitive logic with weak negation from \cite{puncochar15}. The ``contextual weak negation'' is defined by the following five recursive clauses. The first clause states the definition for $L$-formulas and their intensional negations: \begin{tabular}{lll} 1. & $-\alpha=\lozenge \neg \alpha$, & $-\neg \alpha = \lozenge \alpha$.\\ \end{tabular} In the clauses 2.-5. we assume that $\varphi, \psi$ are arbitrary $L^*$-formulas for which $-\varphi, -\neg \varphi, -\psi, -\neg\psi$ are already defined, and we further define: \begin{tabular}{lll} 2. & $-\neg \neg \varphi = - \varphi$.\\ 3. & (a) $-(\varphi \rightarrow \psi)=\lozenge (\varphi \wedge -\psi)$, & (b) $-\neg(\varphi \rightarrow \psi)=\varphi \rightarrow -\neg \psi$.\\ 4. & (a) $-(\varphi \wedge \psi)=-\varphi \vee -\psi$, & (b) $-\neg (\varphi \wedge \psi)=-\neg \varphi \wedge -\neg \psi$. \\ 5. & (a) $-(\varphi \vee \psi)=-\varphi \wedge -\psi$, & (b) $-\neg (\varphi \vee \psi)=-\neg \varphi \vee -\neg \psi$. \end{tabular} Note that by these clauses $-\varphi$ is indeed defined for every $L^*$-formula $\varphi$. By induction on $\varphi$, we obtain the following observation. \begin{lemma}\label{l: ass of minus is the lack of ass} For any $L^*$-formula $\varphi$ and any context $C$, it holds that \begin{itemize} \item[] $C \Vdash^{+} -\varphi$ iff $C \nVdash^{+} \varphi$. \end{itemize} \end{lemma} We say that a context $C$ is a model of a set of formulas $\Delta$ if and only if every formula from $\Delta$ is assertible in $C$. The following lemma follows directly from Lemma \ref{l: ass of minus is the lack of ass}. \begin{lemma}\label{l: entailment and models} $\Delta \nvDash \varphi$ iff $\Delta \cup \{-\varphi\}$ has a model. 
\end{lemma} The next lemma is a cornerstone of the completeness proof and the main technical issue of this paper. \begin{lemma}\label{l: excluded middle} For any formula $\varphi$, the following holds: \begin{itemize} \item[(a)] $\vdash \varphi \vee - \varphi$, \item[(b)] $\varphi, -\varphi \vdash \bot$. \end{itemize} \end{lemma} \begin{proof} (a) We will proceed by simultaneous induction on $\varphi$ and $\neg \varphi$. In the derivations below, whenever we use a safe formula in a context in which it is required to use only safe formulas, we indicate this in the corresponding annotation. 1. Assume that $\alpha$ is from $L$. We will derive $\alpha \vee -\alpha$, i.e. $\alpha \vee \neg (\neg \alpha \rightarrow \bot)$. The derivation of $\neg \alpha \vee -\neg \alpha$ is similar. {\footnotesize \begin{itemize} \item[] \begin{fitch} (\neg \alpha \rightarrow \bot) \vee \neg (\neg \alpha \rightarrow \bot) & (CEM) \\ \fh \neg \alpha \rightarrow \bot & hypothetical assumption \\ \vline\hspace*{\fitchindent} \fh {\sim}\alpha & hypothetical assumption\\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \fh \alpha & hypothetical assumption\\%4 \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \bot & 3,4, (e$\sim_1$)\\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \neg \alpha & 4-5, (i$\neg$) \\%6 \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \bot & 2-safe,6, (e$\rightarrow$) \\ \vline\hspace*{\fitchindent} {\sim}{\sim}\alpha & 3-7, (i$\sim$) \\ \vline\hspace*{\fitchindent} \alpha & 8, (e$\sim_2$)\\ \vline\hspace*{\fitchindent} \alpha \vee \neg (\neg \alpha \rightarrow \bot) & 9, (i$\vee_1$) \\ \fh \neg (\neg \alpha \rightarrow \bot) & hypothetical assumption \\ \vline\hspace*{\fitchindent} \alpha \vee \neg (\neg \alpha \rightarrow \bot) & 11, (i$\vee_2$)\\ \alpha \vee \neg (\neg \alpha \rightarrow \bot) & 1,2-10,11-12, (e$\vee$) \\ \end{fitch} \end{itemize}} In the cases 2.-5. 
we assume as an inductive hypothesis that our claim holds for $\varphi, \neg \varphi, \psi, \neg \psi$. 2. It is easy to derive $\neg \neg \varphi \vee -\neg \neg \varphi$ from $\varphi \vee -\varphi$, by (e$\vee$), (i$\vee_1$), (i$\vee_2$), and ($\neg\neg_2$). 3. We show that our claim holds for $\varphi \rightarrow \psi$ and $\neg (\varphi \rightarrow \psi)$. First, we prove $\vdash (\varphi \rightarrow \psi) \vee -(\varphi \rightarrow \psi)$, i.e. $\vdash (\varphi \rightarrow \psi) \vee \lozenge (\varphi \wedge -\psi)$, which can be done in the following way: {\footnotesize\begin{itemize} \item[] \begin{fitch} ((\varphi \wedge -\psi) \rightarrow \bot ) \vee \lozenge (\varphi \wedge -\psi) & (CEM) \\ \fh (\varphi \wedge -\psi) \rightarrow \bot & hypothetical assumption \\ \vline\hspace*{\fitchindent} \fh \varphi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \psi \vee - \psi & induction hypothesis \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \fh \psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \fh -\psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \varphi \wedge - \psi & 3,6, (i$\wedge$) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \bot & 2-safe,7, (e$\rightarrow$) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \psi & 8, (EFQ) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \psi & 4,5,6-9, (e$\vee$) \\ \vline\hspace*{\fitchindent} \varphi \rightarrow \psi & 3-10, (i$\rightarrow$) \\ \vline\hspace*{\fitchindent} (\varphi \rightarrow \psi) \vee \lozenge (\varphi \wedge -\psi) & 11, (i$\vee_1$) \\ \fh \lozenge (\varphi \wedge -\psi) & hypothetical assumption \\ \vline\hspace*{\fitchindent} (\varphi \rightarrow \psi) \vee \lozenge (\varphi \wedge -\psi) & 13, (i$\vee_2$) \\ (\varphi \rightarrow
\psi) \vee \lozenge (\varphi \wedge -\psi) & 1,2-12,13-14, (e$\vee$) \\ \end{fitch} \end{itemize}} Now we prove $\vdash \neg (\varphi \rightarrow \psi) \vee - \neg (\varphi \rightarrow \psi)$, i.e. $\neg (\varphi \rightarrow \psi) \vee (\varphi \rightarrow - \neg \psi)$: {\footnotesize \begin{itemize} \item[] \begin{fitch} ((\varphi \wedge \neg \psi) \rightarrow \bot ) \vee \lozenge (\varphi \wedge \neg \psi) & (CEM) \\ \fh (\varphi \wedge \neg \psi) \rightarrow \bot & hypothetical assumption \\ \vline\hspace*{\fitchindent} \fh \varphi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \neg \psi \vee -\neg \psi & induction hypothesis \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \fh \neg \psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \varphi \wedge \neg \psi & 3,5, (i$\wedge$) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \bot & 2-safe,6, (e$\rightarrow$) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} -\neg \psi & 7, (EFQ) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \fh -\neg \psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} -\neg \psi & 4,5-8,9, (e$\vee$) \\ \vline\hspace*{\fitchindent} \varphi \rightarrow -\neg \psi & 3-10, (i$\rightarrow$) \\ \vline\hspace*{\fitchindent} \neg (\varphi \rightarrow \psi) \vee (\varphi \rightarrow - \neg \psi) & 11, (i$\vee_2$) \\ \fh \lozenge (\varphi \wedge \neg \psi) & hypothetical assumption \\ \vline\hspace*{\fitchindent} \neg (\varphi \rightarrow \psi) & 13, ($\neg{\rightarrow}_2$) \\ \vline\hspace*{\fitchindent} \neg (\varphi \rightarrow \psi) \vee (\varphi \rightarrow - \neg \psi) & 14, (i$\vee_1$) \\ \neg (\varphi \rightarrow \psi) \vee (\varphi \rightarrow - \neg \psi) & 1,2-12,13-15, (e$\vee$) \\ \end{fitch} \end{itemize}} 4.
We prove that our claim holds for $\varphi \wedge \psi$ and $\neg (\varphi \wedge \psi)$. First, we prove the former, i.e. $\vdash (\varphi \wedge \psi) \vee (- \varphi \vee - \psi)$, which can be done by the following derivation: {\footnotesize \begin{itemize} \item[] \begin{fitch} \varphi \vee - \varphi & induction hypothesis \\ \fh \varphi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \psi \vee - \psi & induction hypothesis \\ \vline\hspace*{\fitchindent} \fh \psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \varphi \wedge \psi & 2,4, (i$\wedge$) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} (\varphi \wedge \psi) \vee (-\varphi \vee -\psi) & 5, (i$\vee_1$) \\ \vline\hspace*{\fitchindent} \fh -\psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} (\varphi \wedge \psi) \vee (-\varphi \vee -\psi) & 7, (i$\vee_2$) (twice) \\%8 \vline\hspace*{\fitchindent} (\varphi \wedge \psi) \vee (-\varphi \vee -\psi) & 3, 4-6,7-8, (e$\vee$) \\ \fh - \varphi & hypothetical assumption \\%10 \vline\hspace*{\fitchindent} (\varphi \wedge \psi) \vee (-\varphi \vee -\psi) & 10, (i$\vee_1$), (i$\vee_2$) \\ (\varphi \wedge \psi) \vee (-\varphi \vee -\psi) & 1,2-9,10-11, (e$\vee$) \\ \end{fitch} \end{itemize}} Now we prove that $\vdash \neg (\varphi \wedge \psi) \vee (-\neg \varphi \wedge -\neg \psi)$, which can be done by the following derivation: {\footnotesize \begin{itemize} \item[] \begin{fitch} \neg \varphi \vee - \neg \varphi & induction hypothesis \\ \fh \neg \varphi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \neg \varphi \vee \neg \psi & 2, (i$\vee_1$) \\ \vline\hspace*{\fitchindent} \neg (\varphi \wedge \psi) & 3, ($\neg\wedge_2$) \\ \vline\hspace*{\fitchindent} \neg (\varphi \wedge \psi) \vee (-\neg \varphi \wedge -\neg \psi) & 4, (i$\vee_1$) \\ \fh -\neg \varphi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \neg \psi \vee - \neg \psi & induction 
hypothesis \\ \vline\hspace*{\fitchindent} \fh \neg \psi & hypothetical assumption \\%8 \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \neg \varphi \vee \neg \psi & 8, (i$\vee_2$) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \neg (\varphi \wedge \psi) & 9, ($\neg\wedge_2$) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \neg (\varphi \wedge \psi) \vee (-\neg \varphi \wedge -\neg \psi) & 10, (i$\vee_1$) \\ \vline\hspace*{\fitchindent} \fh - \neg \psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} (-\neg \varphi \wedge -\neg \psi) & 6,12, (i$\wedge$) \\ \vline\hspace*{\fitchindent} \vline\hspace*{\fitchindent} \neg (\varphi \wedge \psi) \vee (-\neg \varphi \wedge -\neg \psi) & 13, (i$\vee_2$) \\%14 \vline\hspace*{\fitchindent} \neg (\varphi \wedge \psi) \vee (-\neg \varphi \wedge -\neg \psi) & 7,8-11,12-14, (e$\vee$) \\ \neg (\varphi \wedge \psi) \vee (-\neg \varphi \wedge -\neg \psi) & 1,2-5,6-15, (e$\vee$) \\ \end{fitch} \end{itemize}} 5. The case of $\vee$ is analogous to the case of $\wedge$. This finishes the proof of (a). (b) We will proceed again by simultaneous induction on $\varphi$ and $\neg \varphi$. 1. Assume that $\alpha$ is any $L$-formula. First, we will derive $\bot$ from $\alpha$ and $-\alpha = \neg (\neg \alpha \rightarrow \bot)$. The derivation of $\bot$ from $\neg \alpha$ and $-\neg \alpha$ is similar. {\footnotesize \begin{itemize} \item[] \begin{fitch} \alpha & premise \\ \neg (\neg \alpha \rightarrow \bot) & premise \\ \fh \neg \alpha & hypothetical assumption \\ \vline\hspace*{\fitchindent} \bot & 1-safe,3, (e$\neg$) \\ \neg \alpha \rightarrow \bot & 3-4, (i$\rightarrow$) \\ \bot & 2,5, (e$\neg$) \\ \end{fitch} \end{itemize}} For the steps 3.-5., assume as an induction hypothesis that our claim holds for some arbitrary $\varphi, \neg \varphi, \psi, \neg \psi$. 2.
It is easy to see that if we assume $\varphi, -\varphi \vdash \bot$, then also $\neg \neg \varphi, -\neg \neg \varphi \vdash \bot$, by ($\neg\neg_1$). 3. We prove that the claim holds for $\varphi \rightarrow \psi$, i.e. $\varphi \rightarrow \psi, -(\varphi \rightarrow \psi) \vdash \bot$. {\footnotesize \begin{itemize} \item[] \begin{fitch} \varphi \rightarrow \psi & premise \\ -(\varphi \rightarrow \psi)=\neg ((\varphi \wedge - \psi ) \rightarrow \bot) & premise \\ \fh \varphi \wedge - \psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \varphi & 3, (e$\wedge_1$) \\ \vline\hspace*{\fitchindent} \psi & 1-safe,4, (e$\rightarrow$) \\ \vline\hspace*{\fitchindent} -\psi & 3, (e$\wedge_2$) \\ \vline\hspace*{\fitchindent} \bot & 5,6, induction hypothesis\\ (\varphi \wedge - \psi) \rightarrow \bot & 3-7, (i$\rightarrow$) \\ \bot & 2,8, (e$\neg$) \\ \end{fitch} \end{itemize}} Now we prove that $\neg (\varphi \rightarrow \psi), -\neg (\varphi \rightarrow \psi) \vdash \bot$. {\footnotesize \begin{itemize} \item[] \begin{fitch} \neg (\varphi \rightarrow \psi) & premise \\ -\neg (\varphi \rightarrow \psi)= \varphi \rightarrow - \neg \psi & premise \\ \fh \varphi \wedge \neg \psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \varphi & 3, (e$\wedge_1$) \\ \vline\hspace*{\fitchindent} -\neg \psi & 2-safe,4, (e$\rightarrow$) \\ \vline\hspace*{\fitchindent} \neg \psi & 3, (e$\wedge_2$) \\ \vline\hspace*{\fitchindent} \bot & 5,6, induction hypothesis\\ (\varphi\wedge \neg \psi) \rightarrow \bot & 3-7, (i$\rightarrow$) \\ \neg ((\varphi \wedge \neg \psi) \rightarrow \bot) & 1, ($\neg{\rightarrow}_1$) \\ \bot & 8,9, (e$\neg$) \\ \end{fitch} \end{itemize}} 4. We prove that our claim holds for $\varphi \wedge \psi$ and $\neg (\varphi \wedge \psi)$.
First, we prove that $\varphi \wedge \psi, -(\varphi \wedge \psi) \vdash \bot$, which can be done by the following derivation: {\footnotesize \begin{itemize} \item[] \begin{fitch} \varphi \wedge \psi & premise \\ -\varphi \vee -\psi & premise \\ \fh -\varphi & hypothetical assumption\\ \vline\hspace*{\fitchindent} \varphi & 1, (e$\wedge_1$) \\ \vline\hspace*{\fitchindent} \bot & 3,4, induction hypothesis \\ \fh -\psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} \psi & 1, (e$\wedge_2$) \\ \vline\hspace*{\fitchindent} \bot & 6,7, induction hypothesis \\%8 \bot & 2,3-5,6-8, (e$\vee$) \\ \end{fitch} \end{itemize}} Now we prove that $\neg(\varphi \wedge \psi), -\neg (\varphi \wedge \psi) \vdash \bot$, which can be done by the following derivation: {\footnotesize \begin{itemize} \item[] \begin{fitch} \neg (\varphi \wedge \psi) & premise \\ -\neg \varphi \wedge -\neg \psi & premise \\ \neg \varphi \vee \neg \psi & 1, ($\neg\wedge_1$) \\ \fh \neg \varphi & hypothetical assumption \\ \vline\hspace*{\fitchindent} -\neg \varphi & 2, (e$\wedge_1$) \\ \vline\hspace*{\fitchindent} \bot & 4,5, induction hypothesis \\ \fh \neg \psi & hypothetical assumption \\ \vline\hspace*{\fitchindent} -\neg \psi & 2, (e$\wedge_2$) \\ \vline\hspace*{\fitchindent} \bot & 7,8, induction hypothesis \\%9 \bot & 3,4-6,7-9, (e$\vee$) \\ \end{fitch} \end{itemize}} 5. The case of $\vee$ is analogous to the case of $\wedge$. This finishes the proof of (b). \end{proof} \begin{lemma}\label{l: defined negation and consistency} $\Delta \nvdash \varphi$ iff $\Delta \cup \{-\varphi\}$ is consistent. \end{lemma} \begin{proof} The left-to-right implication is obtained from Lemma \ref{l: excluded middle}-a, using the rules (e$\vee$) and (EFQ), and the right-to-left implication follows from Lemma \ref{l: excluded middle}-b. \end{proof} \begin{lemma}\label{l: consistent sets have model} Every consistent finite set of $L^*$-formulas has a model. \end{lemma} \begin{proof} We can give only a sketch of the proof here.
The strategy is the same as in the analogous proofs in \cite{puncochar15} and \cite{puncochargauker20}. Let $A$ be a finite set of atomic formulas. Recall that for every $A$-context $C$ there is an $L^*$-formula $\mu_C$ characterizing $C$ in the following sense: For every $A$-context $D$, $D \Vdash^{+} \mu_{C}$ iff $D=C$. We can further prove that if $\Delta$ is a consistent set of $L^*$-formulas built out of the atomic formulas from $A$, then there is an $A$-context $C$ such that $\Delta \cup \left\{\mu_{C}\right\}$ is consistent. Moreover, we can prove by induction, crucially using the rule ($\lozenge\oplus$), that for every $L^*$-formula $\varphi$ built out of the atomic formulas from $A$, either $\mu_{C} \vdash \varphi$, or $\mu_{C} \vdash -\varphi$. We can further reason as follows. Let $\Delta$  be a consistent finite set of $L^*$-formulas and let $A$  be the set of atomic formulas occurring in $\Delta$. Then there is an $A$-context $C$ such that $\Delta \cup \left\{\mu_{C}\right\}$ is consistent. It follows that $\mu_{C} \vdash \psi$, for every $\psi \in \Delta$: otherwise $\mu_{C} \vdash -\psi$, and then $\Delta \cup \left\{\mu_{C}\right\}$ would be inconsistent by Lemma \ref{l: excluded middle}-b. Since $C \Vdash^{+} \mu_{C}$, it follows from soundness of the deductive rules that $C \Vdash^{+} \psi$, for every $\psi \in \Delta$. Hence, $\Delta$ has a model. \end{proof} \begin{theorem} $\varphi_1, \ldots, \varphi_n \vdash \psi$ \, iff\, $\varphi_1, \ldots, \varphi_n \vDash \psi$. \end{theorem} \begin{proof} Soundness was already discussed, and completeness is obtained from Lemmas \ref{l: entailment and models}, \ref{l: defined negation and consistency}, and \ref{l: consistent sets have model}. \end{proof} \end{document}
\begin{document} \begin{abstract} For a graph $H$, let $c(H)=\inf\{c\,:\,e(G)\geqslant c|G| \mbox{ implies } G\succ H\,\}$, where $G\succ H$ means that $H$ is a minor of $G$. We show that if $H$ has average degree $d$, then $$ c(H)\le (0.319\ldots+o_d(1))|H|\sqrt{\log d} $$ where $0.319\ldots$ is an explicitly defined constant. This bound matches a corresponding lower bound shown to hold for almost all such $H$ by Norin, Reed, Wood and the first author. \end{abstract} \title{On the extremal function for graph minors} \section{Introduction} A graph $H$ is a \textsl{minor} of $G$, written $G\succ H$, if there exist non-empty disjoint subsets $U_v\subset V(G)$, $v\in V(H)$, such that each $G[U_v]$ is connected, and there is an edge in $G$ between $U_v$ and $U_w$ whenever $vw$ is an edge in~$H$. Thus $H$ can be obtained from $G$ by a sequence of edge contractions and deletions and vertex deletions. It is natural to ask, for a given $H$, what conditions on a graph $G$ guarantee that it contains $H$ as a minor. Mader~\cite{Mader1967} proved the existence of the extremal function~$c(H)$, defined as follows. \begin{dfn} For a graph $H$, let $c(H) = \inf \{\, c\,:\, e(G)\geqslant c|G| \mbox{ implies } G\succ H\,\}$. \end{dfn} Mader~\cite{Mader68} later proved the bound $c(K_t)\le 8t\log_2t$ for the complete graph $H=K_t$. Bollob\'as, Catlin and Erd\H{o}s~\cite{BollCatErd} realised that random graphs $G=G(n,p)$ give a good lower bound for $c(K_t)$. Indeed, by choosing $n$ and $p$ suitably, one obtains $c(K_t)\ge(\alpha+o(1))t\sqrt{\log t}$, where the constant $\alpha$ is described here. \begin{dfn} The constant $\alpha$ is given by $$ \alpha\,=\,\max_{0<p<1} {p/2\over\sqrt{\log(1/(1-p))}}\,=\,0.319\ldots\,, $$ with $p=0.715\ldots$ giving the maximum value. \end{dfn} Kostochka~\cite{Kostochka1,Kostochka2} (see also~\cite{thom84}) proved that $c(K_t)$ is in fact of order $t\sqrt{\log t}$. Finally, it was shown~\cite{thom01} that $c(K_t)=(\alpha+o(1))t\sqrt{\log t}$. 
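As a quick numerical sanity check on the definition of $\alpha$, one can locate the maximum by a grid search. This is a sketch under the assumption that $\log$ here is the natural logarithm, which is what matches the stated values $0.319\ldots$ and $p=0.715\ldots$; the function name `f` is ours.

```python
import math

def f(p):
    # The maximand in the definition of alpha: (p/2) / sqrt(log(1/(1-p))).
    return (p / 2) / math.sqrt(math.log(1 / (1 - p)))

# Grid search over (0, 1); step 1e-5 is plenty for three decimal places.
best_p = max((i / 100000 for i in range(1, 100000)), key=f)
alpha = f(best_p)
print(f"alpha ~ {alpha:.4f} at p ~ {best_p:.4f}")
```

The maximum is attained near $p=0.715$ with value near $0.319$, in agreement with the definition above; the maximizer can also be characterized by the stationarity condition $\log(1/(1-p)) = p/(2(1-p))$.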
Myers and Thomason~\cite{myersthom} considered general graphs $H$ with $t$ vertices and at least $t^{1+\tau}$ edges. They defined a graph parameter $\gamma(H)$ and proved that \\$c(H)=(\alpha\gamma(H)+o(1)) t\sqrt{\log t}$. We say more about $\gamma(H)$ in~\S\ref{secfuture}, but here it is enough to say that $\gamma(H)\leqslant\sqrt{\tau}$, and in fact $c(H)=(\alpha\sqrt{\tau}+o(1)) t\sqrt{\log t}$ for almost all $H$ with $t^{1+\tau}$ edges, and for all regular $H$ of this kind. Myers~\cite{Myers11} further showed that the extremal graphs are all essentially disjoint unions of pseudo-random graphs (in the sense of~\cite{thomasonpseudo}) having the same order and density as the random graphs discussed above. Thus $c(H)$ is determined very precisely when $\tau$ is bounded away from zero, but these results give little useful information when $\tau=o(1)$. Reed and Wood~\cite{reedwood,reedwoodcorrig} realised that it is better to express $c(H)$ in terms of the average degree $d$ of~$H$. For example, the results just mentioned imply that $c(H)\le(\alpha+o(1)) |H|\sqrt{\log d}$ if $\log d \ne o(\log |H|)$, with equality in many cases. Reed and Wood showed that $c(H)\le 1.9475|H|\sqrt{\log d}$ holds for all~$H$, provided $d$ is large. The actual behaviour of $c(H)$ can be qualitatively different, though, when $\log d = o(\log |H|)$, because random graphs themselves cannot serve as extremal graphs. In fact, Alon and F\"uredi~\cite{AlonFuredi} showed that, if the maximum degree of $H$ is at most $\log_2 |H|$, then the random graph $G(|H|,p)$ almost surely contains $H$ as a {\em spanning} subgraph when $p>1/2$. Indeed, $c(H)$ can be much smaller than $|H|\sqrt{\log d}$, even if $H$ is regular: Hendrey, Norin and Wood~\cite{HNW} have shown that $c(H)=O(|H|)$ when $H$ is a hypercube. But this kind of behaviour turns out to be rare. Norin, Reed, Thomason and Wood~\cite{NRTW} recently found a different class of graphs that can serve as extremal graphs. 
These are blowups of small random graphs but are not themselves random (though they are pseudo-random). Their method showed that $c(H)\ge (\alpha +o_d(1))|H|\sqrt{\log d}$ holds for almost all $H$ of average degree~$d$. (More exactly, given $\epsilon>0$ and $d>d_0(\epsilon)$, for each $t>d$, when $H$ is chosen at random with $t$ vertices and average degree~$d$, we have $\Pr[c(H)> (\alpha -\epsilon)t\sqrt{\log d}] > 1-\epsilon$.) The bound on $c(H)$ was conjectured to be tight; that is, $c(H)= (\alpha +o_d(1))|H|\sqrt{\log d}$ almost always. Our main purpose here is to prove the next theorem, which strengthens the result of Reed and Wood~\cite{reedwood} and, in particular, settles the aforementioned conjecture positively. \begin{theorem} \label{T:key} Given $\epsilon>0$, there exists $D(\epsilon)$, such that if $d>D(\epsilon)$ then every graph $H$ of average degree $d$ satisfies $c(H) < (\alpha+\epsilon)|H|\sqrt{\log d}$. \end{theorem} The proof follows very broadly the strategy of~\cite{thom01}, used also in~\cite{myersthom}. It splits into two cases. The first is where $G$ has density bounded away from zero, and is reasonably connected; this is addressed in~\S\ref{secdense}. The second case is where $G$ is itself sparse but still has dense vertex neighbourhoods as well as reasonable connectivity; this is dealt with in~\S\ref{secsparse}. Nevertheless, the methods of~\cite{thom01,myersthom} are not adequate to handle the situation where $|H|$ is much bigger than~$d$, and new ideas are needed. These are described in the appropriate sections. Our notation is more or less standard. For example, given a graph~$H$, $|H|$ denotes (as used above) the number of vertices, $e(H)$ the number of edges, and $\delta(H)$ the minimum degree. If $X$ is a subset of the vertex set $V(H)$ then $\Gamma(X)$ denotes the set of vertices not in $X$ that have a neighbour in~$X$. The subgraph of $H$ induced by $X$ is denoted by~$H[X]$.
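The neighbourhood and induced-subgraph notation can be made concrete in a few lines; this is only an illustrative sketch (the adjacency-dict encoding and the function names `Gamma` and `induced` are ours).

```python
def Gamma(adj, X):
    """The set of vertices not in X that have a neighbour in X.
    adj maps each vertex to the set of its neighbours."""
    X = set(X)
    return {w for v in X for w in adj[v]} - X

def induced(adj, X):
    """The subgraph induced by X, again as an adjacency dict."""
    X = set(X)
    return {v: adj[v] & X for v in X}

# A path on vertices 0-1-2-3 (a made-up example graph).
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(Gamma(path, {1, 2}))   # neighbours of {1, 2} outside the set: {0, 3}
```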
The proof of Theorem~\ref{T:key} begins with the following families of graphs. \begin{dfn} Let $m{}> 1$ and $0\leqslant k\leqslant m{}/2$ be real numbers. Define \begin{align*} \ensuremath{\mathcal{E}_{m{},k}}{} = \{\,G\,:\, \abs{G}\geqslant m{}\,,\, e(G)>m{}\abs{G}-m{}k\,\}\,. \end{align*} \end{dfn} The main usefulness of the class $\ensuremath{\mathcal{E}_{m{},k}}{}$ to the study of $c(H)$, demonstrated by Mader~\cite{Mader68}, is that a minor-minimal element of $\ensuremath{\mathcal{E}_{m{},k}}{}$ (that is, a graph $G\in\ensuremath{\mathcal{E}_{m{},k}}{}$ which has no proper minor in $\ensuremath{\mathcal{E}_{m{},k}}{}$) enjoys the properties set out in the next lemma. \begin{lemma} \label{L:enkprop} Let $G$ be a minor-minimal element of $\mathcal{E}_{m{},k}$. Then $|G|\geqslant m{}+1$, $e(G)\leqslant m{}|G|-m{}k+1$, $m{}<\delta(G)<2m{}$, $\kappa(G)>k$, and every edge of $G$ is in more than $m{}-1$ triangles. \end{lemma} \begin{proof} The proof is standard and elementary (see for example~\cite[Section~2]{thom01}), though usually $m{}$ and $k$ are taken to be integers, so we provide a brief sketch. There are no graphs $G\in\ensuremath{\mathcal{E}_{m{},k}}{}$ with $m{}\leqslant |G| < m{}+1$ because in this range ${|G|\choose 2} < m{}|G|-m{}k$. Hence the removal of a vertex of $G$, or the contraction or removal of an edge, violates the size condition, which yields all the claimed properties except $\kappa(G)>k$. To obtain this, consider a cutset $S$ and a component $W$ of $G-S$. The condition $\delta(G)>m{}$ implies that both the minors $G[W\cup S]$ and $G\setminus W$ have more than $m{}$ vertices, and hence \begin{align*} e(G) \leqslant e(G[W\cup S])+e(G\setminus W)\leqslant m{}(|W|+|S|)-m{}k+m{}(|G|-|W|)-m{}k \end{align*} which, together with $e(G)>m{}|G|-m{}k$, yields $|S|>k$ as claimed. \end{proof} We now state the main theorems of Sections 2 and 3 respectively, and show how they imply Theorem~\ref{T:key}.
\begin{restatable}[]{theorem}{sectwothm} \label{T:sectwoweak} Given $\epsilon >0$, there exists $D_1(\epsilon)$, such that if $H$ is a graph of average degree $d>D_1$, and $G$ is a graph of density at least $p+\epsilon$, with the properties $\epsilon < p <1-\epsilon$, $|G|\geqslant |H|\sqrt{\log_{1/(1-p)}d}$ and $\kappa(G)\geqslant \epsilon |G|$, then $G\succ H$. \end{restatable} \begin{restatable}[]{theorem}{secthreethm} \label{T:secthree} Given $\epsilon >0$, there exists $D_2(\epsilon)$, such that, if $H$ is a graph of average degree $d>D_2$, $m{}$ is a number with $m{}\geqslant \epsilon|H|\sqrt{\log d}$, and $G$ is a graph satisfying $|G|\geqslant D_2m{}$, $\kappa(G)\geqslant D_2|H|$, $e(G)\leqslant m{}|G|$ and every edge of $G$ lies in more than $m{}-1$ triangles, then $G\succ H$. \end{restatable} \begin{proof}[Proof of Theorem \ref{T:key}] We may assume that $0<\epsilon <1/4$. Let $H$ be a graph of average degree~$d$, let $m{} = (\alpha+\epsilon)|H|\sqrt{\log d}$ and let $k = \epsilon m{}/2$. We need to show that $G\succ H$ provided $e(G)\ge m{}|G|$ and $D(\epsilon)$ is large. Now $G\in\ensuremath{\mathcal{E}_{m{},k}}$, and (replacing $G$ by a minor of itself if necessary) from now on we assume that $G$ is minor minimal in~$\ensuremath{\mathcal{E}_{m{},k}}$ (note that we thereby forego the inequality $e(G)\ge m{}|G|$ but we still have \\$e(G)> m{}|G|-m{}k$). Thus $G$ has all the properties stated in Lemma~\ref{L:enkprop}. We may assume that $D(\epsilon)$ is large enough that $\epsilon(\alpha+\epsilon)\sqrt{\log d}/2 > D_2$, where $D_2=D_2(\epsilon)$ is the constant of Theorem~\ref{T:secthree}, and therefore $\kappa(G)> k> D_2|H|$. Thus if $\abs{G}\geqslant D_2m{}$, then $G$ satisfies the conditions of Theorem~\ref{T:secthree} (provided $D(\epsilon)>D_2$) and $G\succ H$. So suppose instead that $\abs{G}< D_2m{}$. Then $e(G)>m{}|G|-m{}k \geqslant m{}|G|(1-\epsilon/2)$, so $G$ has density at least $2m{}(1-\epsilon/2)/(|G|-1)>1/D_2$. 
Let $\epsilon'=\epsilon/2D_2$, and let $p+\epsilon'$ be the density of~$G$, so $\epsilon' < p < 1-\epsilon'$. Observe that $p>1/2D_2$ so $\epsilon p>\epsilon'$. Therefore $p(1+\epsilon)> p+\epsilon' >2m{}(1-\epsilon/2)/(|G|-1)$, so we have $|G|> (2m{}/p)(1-\epsilon/2)/(1+\epsilon) >(2\alpha/p)|H|\sqrt{\log d}$. Now $2\alpha/p \geqslant 1/\sqrt{\log(1/(1-p))}$ holds by the definition of~$\alpha$, so $|G|\geqslant |H|\sqrt{\log_{1/(1-p)}d}$. As for the connectivity of~$G$, we have $\kappa(G)\geqslant k=\epsilon m{}/2 > \epsilon |G|/2D_2=\epsilon'|G|$. We may now apply Theorem~\ref{T:sectwoweak} to $G$ with $\epsilon'$ in place of~$\epsilon$, to see that $G\succ H$, provided $D(\epsilon)>D_1(\epsilon')$. \end{proof} \section{The Dense Case}\label{secdense} In this section, our main aim will be to prove Theorem~\ref{T:sectwoweak}. In fact, it will turn out to be useful to prove a slightly stronger version of the theorem, namely Theorem~\ref{T:sectwostrong}, in which $H$ is a {\em rooted} minor, which is to say we specify, for each $v\in V(H)$, a vertex of $G$ that must lie in the class~$U_v$. This will be needed when we come to the sparse case in the next section. The essence of the proof is to choose the parts $U_v$ at random from~$G$. It is very unlikely that the parts so chosen will be connected; to get round this, we first put aside a few vertices of $G$ and choose the $U_v$ from the remainder, using the put aside vertices afterwards to augment the sets $U_v$ into connected subgraphs. In this way, all that we require of the random sets $U_v$ is that there is an edge in $G$ between $U_v$ and $U_w$ whenever $vw\in E(H)$. This procedure nearly works, but it throws up a few ``bad'' parts $U_v$ that cannot be used, and even among the good parts there will be a few edges $vw\in E(H)$ for which there is no $U_v$--$U_w$ edge. 
In~\cite{thom01}, where $H=K_t$, this was not a big problem: the initial aim is changed to finding instead a $K_{(1+\beta)t}$ minor, at no real extra cost if $\beta$ is small, and the few blemishes in this minor still leave us with a $K_t$ minor. In~\cite{myersthom}, where $H$ has average degree at least $t^\epsilon$, a similar solution is found; an $H+K_{\beta t}$ minor is aimed for, which even with up to $\beta t$ blemishes still leaves an $H$ minor. The method works in~\cite{myersthom} because if $\beta(\epsilon)$ is small then $\gamma(H)$ and $\gamma(H+K_{\beta t})$ are relatively close, as are $c(H)$ and $c(H+K_{\beta t})$. This method fails completely for sparse~$H$, because $c(H)$ and $c(H+K_{\beta t})$ are far from each other, so we need a new approach. We randomly partition $G$ (after setting aside some vertices) into somewhat more than $|H|$ parts but without predetermining which part is assigned to which vertex of $H$. After discarding the few bad parts we still have $|H|$ good parts left, each of which has an edge to most of the other good parts. The good parts are now randomly assigned to the vertices of~$H$; it turns out that this is enough to ensure that not too many (fewer than $|H|$) edges $vw\in E(H)$ are left with no $U_v$--$U_w$ edge. For these few missing edges, we can find $U_v$--$U_w$ paths at the final stage when we make all the $U_v$ connected. In this way we obtain the required $H$ minor. \subsection{Almost-$H$-compatible equipartitions}\label{subalmost} \begin{dfn} An \textit{equipartition} of $G$ is a partition of $V(G)$ into parts $V_i$ whose sizes differ by at most one. A $j$\textit{-almost-}$H$\textit{-compatible equipartition} of $G$ is an equipartition into parts $V_v$, $v\in V(H)$, where there are at most $j$ edges $vw$ of $H$ for which there is no edge between $V_v$ and $V_w$. An $H$\textit{-compatible equipartition} of $G$ is a 0-almost-$H$-compatible equipartition. \end{dfn} We now give the details of the argument sketched above.
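The counting condition in the definition above is easy to operationalise. Here is a minimal sketch of a checker for the `at most $j$ missing edges' condition (the encoding and the name `missing_edges` are ours; the sizes-differ-by-at-most-one condition is assumed to be checked separately).

```python
def missing_edges(G_adj, H_edges, parts):
    """Count edges vw of H with no G-edge between parts[v] and parts[w].

    G_adj: adjacency sets of G; H_edges: list of edges of H;
    parts: maps each vertex v of H to its class V_v of G-vertices.
    The equipartition is j-almost-H-compatible iff the count is at most j.
    """
    return sum(
        1
        for v, w in H_edges
        if not any(G_adj[x] & parts[w] for x in parts[v])
    )

# Made-up example: G is the 4-cycle 0-1-2-3-0, H a single edge ab.
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
parts = {'a': {0, 2}, 'b': {1, 3}}
print(missing_edges(G, [('a', 'b')], parts))  # 0, so this equipartition is H-compatible
```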
Here, in~\S\ref{subalmost}, we find an almost-$H$-compatible equipartition in a dense graph, and then in~\S\ref{subconnproj} we show how to connect up the parts of the equipartition, as well as `adding in' the missing edges. The result of taking a random equipartition is described by the next lemma. We remark that, in the proof, the parts are not chosen entirely randomly, but subject to the constraint that each part gets its fair share of high and low degree vertices --- this helps to control the number of ``bad'' parts. The lemma is more or less identical to~\cite[Theorem~3.1]{thom01}, and we keep its technical form so that we can copy it over with very little comment. \begin{lemma} \label{T:generalpartition} Let $G$ be a graph of density at least $p$, let $l\geqslant 2$ be an integer, and let $s = \lfloor |G|/l\rfloor\geqslant 2$. Let $\omega > 1$ and $0<\eta<p$. Then $G$ has an equipartition into at least $s-\frac{2s}{\omega \eta}$ parts, with at most $2s^2(6\omega)^l(\frac{1-p}{1-\eta})^{(1-\eta)l(l-1)}$ pairs of parts having no edge between them. \end{lemma} \begin{proof} The argument is essentially exactly that of~\cite[Theorem~3.1]{thom01}, though the conclusion stated there is very slightly different. The argument involves removing $|G|-sl$ vertices, and then choosing randomly, from a certain distribution, an equipartition of the remaining vertices into $s$ parts of size~$l$. Some of these parts are of no use (they are called ``unacceptable'' in~\cite{thom01} and ``bad'' above). Amongst the other parts, some pairs will be defective in that they fail to have an edge between them. At the start of the final paragraph of the proof in~\cite{thom01}, it is stated that there is a partition with at most $2s/\omega\eta$ unacceptable parts and at most $2s^2(6\omega)^l(\frac{1-p}{1-\eta})^{(1-\eta)l(l-1)}$ defective pairs amongst the other parts.
For the conclusion of \cite[Theorem~3.1]{thom01}, the unacceptable parts and one part from each defective pair are thrown away. For the conclusion of the present lemma, we keep all the acceptable parts. We then take the vertices from the unacceptable parts, together with the $|G|-sl$ vertices initially removed, and redistribute them amongst the acceptable parts so as to obtain the desired equipartition. \end{proof} \begin{lemma}\label{L:logprop} Let $\epsilon\leq1/2$ and let $0<x<1-\epsilon$. Then $\sqrt{\frac{\log(x+\epsilon)}{\log x}} \leqslant 1-\epsilon$. \end{lemma} \begin{proof} This is \cite[Lemma~3.1]{myersthom}, except that there the condition is $0<\epsilon\leqslant x\leqslant 1-\epsilon$. However, though the implied condition $\epsilon \leq1/2$ is used in the proof, the condition $\epsilon\leqslant x$ is not, and the proof works for $0<x$. \end{proof} Here is the main result of this subsection. \begin{theorem} \label{C:almostcompatible} Given $\epsilon >0$, there exists $D_3(\epsilon)$, such that if $H$ is a graph of average degree $d>D_3$, and $G$ is a graph of density at least $p+\epsilon$, with the properties $\epsilon < p <1-\epsilon$ and $|G|\geqslant |H|\sqrt{\log_{1/(1-p)}d}$, then $G$ contains an $|H|d^{-\epsilon/3}$-almost-$H$-compatible equipartition. \end{theorem} \begin{proof} We shall apply Lemma~\ref{T:generalpartition} to $G$, replacing $p$ in the lemma by $p+\epsilon$ (as we may), and using suitably chosen parameters $l$, $k$, $\eta$, $\omega$ so that $|H|\leqslant s(1-2/\omega\eta)$ and $2/\omega\eta\leqslant 1/10$. Suppose we have done this, obtaining an equipartition into $k\geqslant s(1-2/\omega\eta)$ parts. Note that $\epsilon<1/2$, and that we can make $l$, $|H|$ and $s$ as large as we like by making $D_3(\epsilon)$ large. In particular $2s^2/{k\choose2}\leqslant 5$. 
Thus, writing $P$ for the probability that a randomly chosen pair of parts has no edge between, we have $P\le 5(6\omega)^l(\frac{1-p-\epsilon}{1-\eta})^{(1-\eta)l(l-1)}$. Now randomly label $|H|$ of the parts as $U_v$, $v\in V(H)$. The expected number of edges $vw$ of $H$ without a $U_v$--$U_w$ edge is $Pe(H)=P|H|d/2$. We shall choose the parameters so that $P\le d^{-1-\epsilon/3}$, and therefore there is some labelling with fewer than $|H|d^{-\epsilon/3}$ such edges. Redistributing the vertices of the non-labelled parts amongst the $|H|$ labelled parts then gives the desired almost-$H$-compatible equipartition, so proving the theorem. All that remains is to choose suitable parameters. To start with, let \\$l=\ceil{(1-\epsilon/4) \sqrt{\log_{1/(1-p)}d}}$ and $s=\floor{|G|/l}$. We take $\omega=20/\epsilon^2\eta$, with $\eta$ still to be chosen. Certainly $2/\omega\eta\leqslant 1/10$, as needed, and moreover $s(1-2/\omega\eta)\geqslant |H|$. We turn now to the bound on $P$, which is $P\le 5(6\omega)^l(\frac{1-p-\epsilon}{1-\eta})^{(1-\eta)l(l-1)}$. Choose $\eta$ small so that $\eta<\epsilon/4$ and $(1-2\epsilon)^{\epsilon/2}<(1-\eta)^{1-2\eta}$. Since $p>\epsilon$ this implies $$ \left(\frac{1-p-\epsilon}{1-\eta}\right)^{(1-2\eta)}< (1-p-\epsilon)^{1-\epsilon} \frac{(1-2\epsilon)^{\epsilon-2\eta}}{(1-\eta)^{(1-2\eta)}} < (1-p-\epsilon)^{1-\epsilon}\,.$$ Make $l$ large enough so that $(1-2\eta)l^2<(1-\eta)l(l-1)$. Then \\$P\leqslant 5(6\omega)^l (1-p-\epsilon)^{(1-\epsilon)l^2}$. By Lemma~\ref{L:logprop} we have $\frac{\log(1-p)}{\log(1-p-\epsilon)} \leqslant (1-\epsilon)^2$, meaning that $(1-p-\epsilon)^{(1-\epsilon)^2}\leqslant 1-p$, so $P\leq5(6\omega)^l(1-p)^{l^2/(1-\epsilon)}$. Since\\ $l=\ceil{(1-\epsilon/4) \sqrt{\log_{1/(1-p)}d}}$ and $(1-\epsilon/4)^2/(1-\epsilon) > 1+\epsilon/2$, we obtain\\ $P\leq5(6\omega)^l d^{-1-\epsilon/2}$. Finally, by making $d$ large, we have $P\leqslant d^{-1-\epsilon/3}$, as desired. 
\end{proof} \subsection{The connector and the projector}\label{subconnproj} As mentioned at the start of~\S\ref{secdense}, when proving Theorem~\ref{T:sectwostrong} we first put aside a small set for later use. This set is actually made up of two special sets that we call the {\em connector} and the {\em projector}. The connector will contain many short paths between all pairs of vertices, and the projector will allow us to connect many sets $X$ by connecting only about $\log\abs{X}$ vertices of the projector. We borrow a couple of very straightforward lemmas from~\cite{thom01}. \begin{lemma}[\protect{\cite[Lemma~4.2]{thom01}}] \label{L:connectivity} Let $G$ be a graph of connectivity $\kappa>0$ and let $u,v\in V(G)$. Then $u$ and $v$ are joined in $G$ by at least $\kappa^2/4|G|$ internally disjoint paths of length at most $2|G|/\kappa$. \end{lemma} \begin{lemma}[\protect{\cite[Lemma~4.1]{thom01}}] \label{L:bipartiteprojection} Let $G$ be a bipartite graph on vertex classes $A,B$ with the property that every vertex in $A$ has at least $\gamma \abs{B}$ neighbours in $B$, where $\gamma>0$. Then there is a set $M\subseteq B$ with $|M|\leqslant\floor{\log_{1/(1-\gamma)}\abs{A}}+1$ such that every vertex in $A$ has a neighbour in $M$, that is, $A\subset\Gamma(M)$. \end{lemma} The next theorem provides us with a connector and a projector. The form of the theorem allows us to put aside not just these two sets but also a third set $R$ which will form the roots of our rooted $H$ minor.
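One standard proof of Lemma~\ref{L:bipartiteprojection} is a greedy covering argument: since every vertex of $A$ has at least $\gamma\abs{B}$ neighbours in $B$, some vertex of $B$ dominates at least a $\gamma$ fraction of the still-uncovered part of $A$, so the uncovered set shrinks geometrically. A minimal sketch of that greedy construction (illustrative only; the helper names and the toy graph are ours):

```python
import math

def projector_set(A, B, adj):
    # Greedy covering: repeatedly pick the vertex of B adjacent to the
    # most uncovered vertices of A.  If every a in A has >= gamma*|B|
    # neighbours in B, the uncovered set shrinks by a factor (1-gamma)
    # each round, giving |M| <= floor(log_{1/(1-gamma)} |A|) + 1.
    uncovered = set(A)
    M = []
    while uncovered:
        best = max(B, key=lambda b: sum(1 for a in uncovered if b in adj[a]))
        M.append(best)
        uncovered = {a for a in uncovered if best not in adj[a]}
    return M

# Toy bipartite graph: |A| = 8, |B| = 4, every a has 2 neighbours in B,
# so gamma = 1/2 and the lemma's bound is floor(log_2 8) + 1 = 4.
A = list(range(8))
B = [8, 9, 10, 11]
adj = {a: {8 + (a % 4), 8 + ((a + 1) % 4)} for a in A}
M = projector_set(A, B, adj)
assert all(adj[a] & set(M) for a in A)              # A is dominated by M
assert len(M) <= math.floor(math.log(len(A), 2)) + 1
print("projector:", M)
```

Property~(4) of the theorem below applies exactly this lemma, with $B$ a large subset of the projector $P$.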
\begin{theorem} \label{T:connectorprojector} Given $\eta>0$, there exists $D_4(\eta)$ such that, if $G$ is a graph with $|G|\geqslant D_4$ and $\kappa(G)\geq8\eta|G|$, then for each set $R\subset V(G)$ with $|R|\leqslant\eta |G|$, there exist subsets $C$ (the connector) and $P$ (the projector) in $V(G)$, disjoint from each other and from $R$, with the following properties: \begin{enumerate} \item{}$\abs{C},\abs{P}\leqslant 2\eta |G|$, \item{} each pair of vertices $u,v \notin C$ is joined by at least $2\eta^{2+1/2\eta}|G|$ internally disjoint paths of length at most $1/2\eta$ whose internal vertices lie in~$C$, \item{} each vertex not in $P$ has at least $2\eta^2|G|$ neighbours in $P$, and \item{} for every $Y\subset P$ with $|Y|\leqslant\eta^2|G|$ and for every $X\subset V(G)-P-C$, there is a set $M$ in $P-Y$ with $X\subset \Gamma(M)$ and $|M|\leqslant\floor{\log_{1/(1-\eta/2)}\abs{X}}+1$. \end{enumerate} \end{theorem} \begin{proof} First, we construct $C$. We put vertices of $G-R$ inside $C$ independently at random with probability $\eta$. By Markov's Inequality, $\abs{C}\leqslant 2\eta |G|$ holds with probability at least $1/2$. Define $\delta$ by $\kappa(G)=\delta|G|$, so $\delta\geqslant 8\eta$. Let $u,v\notin C$. Applying Lemma~\ref{L:connectivity} to the graph $G-(R\setminus\{u,v\})$, noting that its connectivity is at least $(\delta-\eta)|G|$, we see that $u$ and $v$ are joined by at least $(\delta-\eta)^2|G|/4\geqslant \delta^2|G|/16$ vertex-disjoint paths of length at most $2/(\delta-\eta)\leqslant 4/\delta$, whose internal vertices are not in~$R$. The probability that a given one of these paths has all internal vertices inside $C$ is at least $\eta^{4/\delta}$. We therefore expect at least $\delta^2\eta^{4/\delta}|G|/16$ vertex disjoint paths of length at most $4/\delta$ with all internal vertices inside~$C$. The paths are disjoint so the probabilities for different paths are independent of each other. 
Hence by a standard Chernoff bound (see for example \cite{AlonSpencer}), except with probability at most $|G|^2\exp(-\delta^2\eta^{4/\delta}|G|/128)$, all pairs of vertices are joined by at least $\delta^2\eta^{4/\delta}|G|/32$ vertex disjoint paths of length at most $4/\delta$ whose internal vertices lie inside~$C$. By making $D_4$ large we can ensure this probability is less than $1/2$. Combining this with the observation in the previous paragraph, we see that with positive probability there is a set $C$ satisfying $\abs{C}\leqslant 2\eta |G|$ and such that every pair $u,v \notin C$ is joined by at least $\delta^2\eta^{4/\delta}|G|/32\geqslant 2\eta^{2+1/2\eta}|G|$ paths inside $C$ of length at most $4/\delta\leqslant 1/2\eta$. Fix now such a choice of $C$; then $C$ satisfies properties~(1) and~(2). We next construct $P$. Consider $G-C-R$. Since $G$ has minimum degree at least $\kappa(G)=\delta |G|$, every vertex has at least $\delta|G|-4\eta|G|\geqslant\delta|G|/2$ neighbours inside $G-C-R$. Place vertices in $P$ independently at random with probability $\eta$. As before, with probability at least $1/2$, $\abs{P}\leqslant 2\eta|G|$. Given a vertex not in $P$, the number of its neighbours in $P$ is binomially distributed with mean at least $\eta\delta|G|/2$. Again, by a Chernoff bound, the probability that any vertex has fewer than $\eta\delta|G|/4$ neighbours in $P$ is at most $|G|\exp(-\eta\delta|G|/16)$, which is less than $1/2$ if $D_4$ is large. Therefore with positive probability there is a set $P$ with $|P|\le 2\eta|G|$ such that every vertex not in $P$ has at least $\eta\delta|G|/4\ge 2\eta^2|G|$ neighbours in $P$. Make such a choice of $P$: it satisfies properties~(1) and~(3). Finally, consider the bipartite graph with $A=X$ and $B=P-Y$. By choice of~$P$, each vertex of $A$ has at least $\eta\delta|G|/4-|Y|\geqslant \eta\delta|G|/8$ neighbours in $B$, which is at least $\delta|B|/16$ by property~(1). 
Lemma~\ref{L:bipartiteprojection} then implies there is a set $M$ in $P-Y$ with $X\subset \Gamma(M)$ and $|M|\leqslant\floor{\log_{1/(1-\delta/16)}\abs{X}}+1 \leqslant\floor{\log_{1/(1-\eta/2)}\abs{X}}+1$. Thus property~(4) holds. \end{proof} The next theorem shows that the connector and projector do the job required of them in the discussion at the start of~\S\ref{secdense}. The theorem accommodates general partitions of a set; in this paper we shall apply it only to equipartitions, but the more general result will be useful elsewhere (see~\S\ref{secfuture}). \begin{theorem} \label{T:connectpartition} Given $\eta>0$ there is a constant $D_5(\eta)$ with the following property. Let $G$ be a graph and $R\subset V(G)$ satisfy the conditions of Theorem~\ref{T:connectorprojector}. Let $C$ and $P$ be sets given by that theorem. Suppose further that $|G|\geqslant D_5|R|$. Then for any partition $(V_r: r\in R)$ of $G-C-P-R$ into $|R|$ parts, together with a set $F\subset\{\,rs\,:\,r,s\in R\}$ of pairs from $R$ with $|F|\le |R|/\eta$, there are disjoint sets $U_r$, $r\in R$ such that \begin{enumerate} \item{}$V_r\cup \{r\}\subset U_r$ for all $r\in R$, \item{}$G[U_r]$ is connected for all $r\in R$, and \item{} there is a $U_r$-$U_s$ edge for every pair $rs\in F$. \end{enumerate} \end{theorem} \begin{proof} Note that the conditions of Theorem~\ref{T:connectorprojector} imply $\eta\leq1/8$. Our first aim is to choose, one by one for each $r\in R$, disjoint sets $M_r\subset P$ with $V_r\cup\{r\}\subset \Gamma(M_r)$: for later parts of the proof, we shall require that $\sum_{r\in R}|M_r|\le \eta^{(3+1/2\eta)}|G|$. To this end, let $C_\eta$ be a constant, depending on~$\eta$, such that $1+\log_{1/(1-\eta/2)}(x+1)\le C_\eta\log(2x)$ holds for all $x\ge 1$. The sets $M_r$ will be derived via Theorem~\ref{T:connectorprojector} and satisfy $|M_r|\le \floor{\log_{1/(1-\eta/2)}\abs{V_r\cup\{r\}}}+1$; note that therefore $|M_r|\le C_\eta\log2|V_r|$. 
Thus $\sum_{r\in R}|M_r|\leqslant C_\eta\sum_{r\in R}\log2|V_r| \leqslant C_\eta|R|\log (2|G|/|R|)$ holds by the concavity of the $\log$ function. Now $\log(2x)/x\to0$ as $x\to\infty$ so $D_5$ can be chosen so that $ C_\eta (|R|/|G|)\log (2|G|/|R|) \leqslant \eta^{(3+1/2\eta)}$, which means that $\sum_{r\in R}|M_r|\le \eta^{(3+1/2\eta)}|G|$. We now see that our first aim can indeed be realised: choose the $M_r$ one by one, each time applying property~(4) of Theorem~\ref{T:connectorprojector} with $X=V_r\cup\{r\}$ and $Y$ being the union of those $M_{r'}$ already chosen, noting that $|Y| \leqslant \sum_{r\in R}|M_r|\leqslant \eta^{(3+1/2\eta)}|G|\leqslant \eta^2|G|$. We now choose disjoint sets $P_r\subset P$, $r\in R$, such that $G[P_r\cup M_r]$ is connected, and therefore so also is $G[P_r\cup M_r \cup V_r\cup\{r\}]$. To do this, we find $|M_r|-1$ paths joining the vertices of $M_r$, whose internal vertices are in~$C$, making use of property~(2) of Theorem~\ref{T:connectorprojector}. We choose the paths one by one. When we come to join a pair $u$, $v$ of vertices, some of the $2\eta^{2+1/2\eta}|G|$ paths given by property~(2) will be unavailable, because they contain a vertex lying in some previously chosen path. But at most $\sum_{r\in R}|M_r|\le \eta^{(3+1/2\eta)}|G|$ paths were previously chosen, so at most $(1/2\eta) \eta^{(3+1/2\eta)}|G| < \eta^{2+1/2\eta}|G|$ $u$--$v$ paths are unavailable. Hence the desired sets $P_r$ can all be found. We now take $U_r=P_r\cup M_r \cup V_r\cup\{r\}$. This gives properties~(1) and~(2) of the theorem. To obtain property~(3), for each $rs\in F$ we find an $r$--$s$ path $Q_{rs}$ whose internal vertices lie in~$C$.
These can be found one by one by an argument similar to that just given; the total number of unavailable $r$--$s$ paths, accounting for the sets $P_r$ as in the previous paragraph and for the paths $Q_{r's'}$ already chosen, is at most $\eta^{2+1/2\eta}|G| + (2/\eta)(|R|/\eta) < 2\eta^{2+1/2\eta}|G|$ if $D_5$ is large, so at least one path is available. Having found $Q_{rs}$, add the vertices of $Q_{rs}$, apart from~$r$, to $U_s$. In this way we arrive at sets satisfying properties~(1)--(3). \end{proof} \begin{dfn} Let $H$ and $G$ be graphs, and let $R\subset V(G)$ be a set of $|H|$ vertices labelled by the vertices of $H$; say $R=\{\,r_v \,: \, v\in V(H)\}$. We say that $G$ has an $H$ minor {\em rooted at} $R$ if there exist non-empty disjoint subsets $U_v: v\in V(H)$ of $V(G)$ with $r_v\in U_v$, such that each $G[U_v]$ is connected and, whenever $vw$ is an edge in $H$, there is an edge in $G$ between $U_v$ and $U_w$. \end{dfn} We are finally ready to state and prove the main theorem of this section. This is a strengthening of Theorem~\ref{T:sectwoweak} that gives a rooted minor, which, as we mentioned earlier, will be useful later on. Note that Theorem~\ref{T:sectwoweak} follows from Theorem~\ref{T:sectwostrong} by picking an arbitrary set $R$ of roots. \begin{theorem} \label{T:sectwostrong} Given $\epsilon >0$, there exists $D_1(\epsilon)$, such that if $H$ is a graph of average degree $d>D_1$, $G$ is a graph of density at least $p+\epsilon$, with the properties $\epsilon < p <1-\epsilon$, $|G|\geqslant |H|\sqrt{\log_{1/(1-p)}d}$ and $\kappa(G)\geqslant \epsilon |G|$, and furthermore $R\subset V(G)$ is a set of vertices labelled by $V(H)$, then $G$ has an $H$ minor rooted at~$R$. \end{theorem} \begin{proof} Note that we can assume throughout that both $|G|$ and $|G|/|H|=|G|/|R|$ are arbitrarily large, since $\log(1/(1-p))$ is bounded above by the constraint on~$p$. 
We begin by applying Theorem~\ref{T:connectorprojector} with $\eta = \epsilon/20$ to obtain sets $C$ and $P$ as in the theorem. Consider the graph $G'=G-C-P-R$. We want to apply Theorem~\ref{C:almostcompatible} to~$G'$, but $|G'|$ might not satisfy the stated lower bound. However, we may assume that $|G'|\geqslant (1-4\eta)|G|-|R|\geqslant(1-\epsilon/4)|G|$, so $e(G')\geqslant e(G)-(\epsilon/4)|G|(|G|-1)$. Thus $e(G')\geqslant (p+\epsilon/2){|G|\choose2}$, and $G'$ has density at least $p+\epsilon/2$. Let $p'=p+\epsilon/4$ and let $\epsilon'=\epsilon/4$. By Lemma~\ref{L:logprop}, $\sqrt{\log_{1/(1-p')}d}\leqslant(1-\epsilon/4)\sqrt{\log_{1/(1-p)}d}$. So we may apply Theorem~\ref{C:almostcompatible} to $G'$ with parameters $p'$ and $\epsilon'$ to obtain an $|H|d^{-\epsilon/12}$-almost-$H$-compatible equipartition $V_v$, $v\in V(H)$ of $G'=G-C-P-R$. Apply Theorem~\ref{T:connectpartition} to this equipartition (formally writing $V_{r_v}$ instead of $V_v$), with $F$ the set of pairs $r_vr_w$ for which there is no $V_v$--$V_w$ edge; note that $|F|\leqslant |H|d^{-\epsilon/12}\leqslant|H|<|R|/\eta$ if $d$ is large. The resulting sets $U_r$ then form an $H$ minor in $G$ rooted at $R$. \end{proof} \section{The Sparse Case}\label{secsparse} Our approach in the sparse case broadly mirrors that of~\cite{thom01}, except that we need to construct an $H$ minor rather than a complete minor, and the graphs $G$ in which we are working have many fewer edges. (Both these difficulties were sidestepped in~\cite{myersthom}, where there were enough edges in the sparse case to find a large complete minor.) We will construct a large number (depending on~$\epsilon$) of disjoint dense and highly connected subgraphs of a minor-minimal graph --- in all but one of these we find different subgraphs of $H$ as rooted minors, using Theorem~\ref{T:sectwostrong}, and then we connect these minors together using the remaining dense subgraph. 
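One concrete ingredient of this plan, used in the proof of Theorem~\ref{T:secthree} below, is that $H$ is the (not edge-disjoint) union of the subgraphs induced by pairs of classes of an equipartition of $V(H)$: every edge of $H$ has both endpoints in some union $V_i\cup V_j$. A minimal sketch of this decomposition (illustrative, with hypothetical helper names):

```python
from itertools import combinations

def pair_pieces(h_vertices, h_edges, l):
    # Equipartition V(H) into l classes V_1,...,V_l and form the induced
    # subgraphs H^{i,j} = H[V_i u V_j].  An edge inside a single class V_i
    # lies in every piece (i,j); an edge between V_i and V_j lies in the
    # piece (i,j).  So H is the union of the binomial(l,2) pieces.
    classes = [h_vertices[i::l] for i in range(l)]
    pieces = {}
    for i, j in combinations(range(l), 2):
        keep = set(classes[i]) | set(classes[j])
        pieces[(i, j)] = [(v, w) for v, w in h_edges if v in keep and w in keep]
    return classes, pieces

# Check the covering on H = K_9 with l = 3 classes.
h_vertices = list(range(9))
h_edges = list(combinations(h_vertices, 2))
classes, pieces = pair_pieces(h_vertices, h_edges, 3)
covered = set().union(*pieces.values())
assert covered == set(h_edges)
print("pieces:", len(pieces), "edges covered:", len(covered))
```

In the proof, each piece $H^{\{i,j\}}$ is found as a rooted minor in its own dense subgraph, and the roots are then identified via disjoint paths through the remaining dense subgraph.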
To find the dense subgraphs, we use the fact that, in a minor minimal graph, a typical vertex has a dense neighbourhood. Either we can find a large number of typical vertices whose neighbourhoods are largely disjoint, in which case we can carry out the programme just described, or most vertices have highly overlapping neighbourhoods. The latter case will be handled by the next lemma. This lemma is much the same as \cite[Lemma~5.1]{thom01}, but we need to reprove it because the degrees in our graphs are much lower and the constants involved are different. \begin{lemma} \label{T:largebipminor} Given $\epsilon, C>0$, there exists $D_6(\epsilon, C)$ such that, if $d>D_6$, the following holds. Let $H$ be a graph of average degree~$d$, and let $n\geqslant \epsilon |H|\sqrt{\log d}$. Let $G$ be a bipartite graph with vertex classes $A$, $B$ such that $\abs{B}\leqslant Cn$, $\abs{A}\geqslant D_6n$, and every vertex of $A$ has at least $n$ neighbours in~$B$. Then $G\succ H$. \end{lemma} \begin{proof} Begin by choosing a number $0<q<1/5$ such that $\epsilon \sqrt{-\log(2q)}\geqslant 1$. This implies that $n\geqslant|H|\sqrt{\log_{1/2q}d}$. Then take $D_6$ so that $D_6q\ge C^2$. Proceed by contracting, one by one, each vertex $a\in A$ to a vertex $b\in B$ of minimal degree in $G^*[N(a)]$, where $G^*$ is the graph we have at the moment we contract $ab$. We imagine that $a$ disappears in this process but~$b$ remains, and after we have done this for all $a\in A$ we are left with a graph on vertex set~$B$. Note that, if $G^*[N(a)]$ has minimum degree $p_a(|N(a)|-1)$, and $q_a=1-p_a$, then we add at least $q_a(n-1)$ edges to~$B$. Suppose that $q_a\ge q$ for every $a\in A$. Then we have added at least $|A|q(n-1)$ edges. But we can add at most ${|B|\choose2}$, so $2|A|q(n-1)\leqslant |B|(|B|-1)$. However, this inequality fails by our choice of $D_6$. It follows that, for some $a\in A$, $q_a\le q$ holds. Let $G'=G^*[N(a)]$. 
Then $G'$ has minimum degree at least $p_a(|G'|-1)$, and since $q_a\leqslant 1/5$ we have $p_a\geqslant 4/5$ so $\kappa(G')\geqslant |G'|/2$. Moreover $|G'|\geqslant n\geqslant |H|\sqrt{\log_{1/2q}d}$. Now $G'$ has density at least $(1-2q)+q$, so we can apply Theorem~\ref{T:sectwoweak} to $G'$ with parameters $\epsilon=q$ and $p=1-2q$ (increasing $D_6$ if necessary so that $D_6\geqslant D_1(q)$), to obtain $G\succ G'\succ H$. \end{proof} We can now prove Theorem \ref{T:secthree}. We restate it first, for convenience. \secthreethm* \begin{proof} Let $\ell=\ceil{8/\epsilon}$ and $L={\ell\choose2}$. We start by finding, one by one, disjoint sets $S_0,\ldots,S_L$ such that $$ \abs{S_i}\leqslant 6m{} \quad \mbox{and}\quad \delta(G[S_i])> \frac{5}{6}m{}-1\,, \quad\mbox{for } 0\le i\le L\,. $$ Suppose we have already found sets $S_0,\ldots,S_{k-1}$, where $k\leqslant L$. Let $B = \bigcup_{0\leqslant i < k}S_i$. Then $\abs{B}\leqslant 6Lm{}$. Let $A$ be the set of vertices in $V(G)-B$ having degree at most~$6m{}$. Then $3m{}\abs{G-B-A}\leqslant e(G)\leqslant m{}\abs{G}$. In particular, $\abs{G-B-A}\leqslant \abs{G}/2$. Assuming, as we may, that $D_2\geqslant 36L$, then $|B|\leqslant |G|/6$, so $|A|\geqslant |G|/3$. Let $n=m{}/6$ and let $C=36L$, so $n\geqslant (\epsilon/6)|H|\sqrt{\log d}$ and $|B|\le Cn$. We may also assume that $D_2\ge D_6(\epsilon/6,C)/2$, where $D_6$ is the constant in Lemma~\ref{T:largebipminor}, and so $|A|>D_6n$. Then, by that lemma, either $G\succ H$, in which case the theorem is proved and we are done, or some vertex $a\in A$ has fewer than $n=m{}/6$ neighbours in~$B$. In this case put $S_k=\Gamma(a)\setminus B$. Then $|S_k|\leqslant 6m{}$, since $a\in A$. Moreover, because each edge incident with $a$ lies in more than $m-1$ triangles, we have $\delta(G[\Gamma(a)])\geqslant m-1$ and so $\delta(G[S_k])> 5m{}/6-1$. We thus find all our sets $S_0,\ldots,S_L$.
Next, inside each $S_i$ we find a subset $T_i$ with $\kappa(G[T_i])\geqslant m{}/40$ and \\$\delta(G[T_i])\geqslant 3m{}/4$. If $\kappa(G[S_i])\geqslant m{}/40$, just put $T_i=S_i$. If not, remove a cutset of size at most $m{}/40$ from $G[S_i]$ and let $S_i'$ be the set of vertices of a smallest component. Then $|S_i'|\leqslant|S_i|/2\le 3m$ and $\delta(G[S_i'])\geqslant 5m{}/6-1-m{}/40$. If $\kappa(G[S_i'])\geqslant m{}/40$, put $T_i=S_i'$. Otherwise, repeat the procedure on $G[S_i']$. After $j$ repetitions of the procedure we have a subgraph with at most $6m/2^j$ vertices and minimum degree at least $5m{}/6-1-jm{}/40$. This is impossible for $j=4$, so we reach the desired $T_i$ in at most 3~steps. Now let $V_1,\ldots,V_\ell$ be an arbitrary equipartition of $H$, and let $H^{\{i,j\}} = H[V_i\cup V_j]$. Each subgraph $H^{\{i,j\}}$ has at most $2|H|/\ell+2\leqslant \epsilon|H|/4+2$ vertices, and average degree at most $\ell d$ (if $D_2$ is large). Notice that $H$ is the (not edge-disjoint) union of the $L = \binom{\ell}{2}$ subgraphs $H^{\{i,j\}}$. We shall find the $H^{\{i,j\}}$ as minors inside the $G[T_i]$, and with this in mind we relabel $T_1,\ldots,T_L$ as $T^{\{i,j\}}:1\leqslant i<j\leqslant \ell$; we shall find $H^{\{i,j\}}$ in $G[T^{\{i,j\}}]$. We now describe how the $H$ minor will be formed, leaving the details of the construction to later. The minor will be rooted at a set of roots $R\subset T_0$, which we pick now and label as $R=\{r_v:v\in V(H)\}$. The $H^{\{i,j\}}$ minors also need to be rooted, so pick now a set of roots $R^{\{i,j\}}=\{r^{\{i,j\}}_v: v\in V_i\cup V_j\}\subset T^{\{i,j\}}$. For each pair $\{i,j\}$ and for each $v\in V_i\cup V_j$, we find an $r_v$--$r^{\{i,j\}}_v$ path $P^{\{i,j\}}_v$, such that all the paths $P^{\{i,j\}}_v$ are internally disjoint from each other and from all the $H^{\{i',j'\}}$ minors.
Then the paths $P^{\{i,j\}}_v$ with the minors $H^{\{i,j\}}$ together give an $H$ minor, because every edge of $H$ lies in one of the $H^{\{i,j\}}$, and by contracting the paths $P^{\{i,j\}}_v$ we identify, for each $v\in V(H)$, all the root vertices $r^{\{i,j\}}_v$ that are labelled by~$v$. Here are the constructional details. In practice, to avoid the paths $P^{\{i,j\}}_v$ intersecting the minors $H^{\{i',j'\}}$, we construct the paths first, then remove them and find the minors in the remaining graph. Each vertex $v$ lies in $\ell-1$ sets $V_i\cup V_j$, so $R^*=\bigcup_{\{i,j\}} R^{\{i,j\}}$ satisfies $|R^*|=(\ell-1)|H|$. Note that, if $D_2$ is large, then $\kappa(G-R)\geqslant D_2|H|-|R^*|\geqslant|R^*|$ and $|T_0-R|\geqslant 3m/4-|H|\geqslant|R^*|$. Thus by Menger's theorem we can find $|R^*|$ vertex disjoint paths in $G-R$ joining the set $T_0-R$ to the set~$R^*$. Let the path which ends at $r^{\{i,j\}}_v$ be $Q^{\{i,j\}}_v$, and let its first vertex be $x^{\{i,j\}}_v\in T_0$. These paths $Q^{\{i,j\}}_v$ might, as they stand, use a lot of vertices from the sets $T^{\{i',j'\}}$, so we now modify them so as to avoid this. Recall that $|T_0|\le 6m$ and $\kappa(G[T_0])\geqslant m{}/40$. Applying Lemma~\ref{L:connectivity} to $G[T_0]$, we see that each pair of vertices is joined by at least $m{}/38400$ paths of length at most~$480$ --- we call these ``short'' paths. Exactly the same remark applies to each $G[T^{\{i,j\}}]$. We now modify the paths $Q^{\{i,j\}}_v$, one by one, in the following way. For each set $T^{\{i',j'\}}$ that the path $Q^{\{i,j\}}_v$ enters, let $x$ and $y$ be the first and last vertices of $Q^{\{i,j\}}_v$ in $T^{\{i',j'\}}$, and replace the section of $Q^{\{i,j\}}_v$ between $x$ and $y$ by a short path inside $G[T^{\{i',j'\}}]$. 
At the moment we don't require these short paths to be disjoint for different $i,j,v$, but after doing this the paths $Q^{\{i,j\}}_v$ have the property that they enter each $T^{\{i',j'\}}$ only once, and use at most $480$ vertices of $T^{\{i',j'\}}$. After this is done, note that the first and last vertices of the $Q^{\{i,j\}}_v$ in the $T^{\{i',j'\}}$ are all still distinct, since they are never internal vertices of our new short paths; they are always vertices of the original path. Let $E$ be the set of first and last vertices of all the new short paths in all the $T^{\{i,j\}}$. We now go through each $T^{\{i,j\}}$ and replace the short paths used by ones that are disjoint from each other and from~$E$. We do this by replacing the short paths one at a time, each time choosing a new short path disjoint from previously chosen new short paths and disjoint from all the endpoints~$E$. We can do this because the number of vertices we need to avoid is at most $480|R^*|+|E|\leqslant 482|R^*| < m{}/40000$. The modified paths $Q^{\{i,j\}}_v$ are now once again vertex disjoint, each using at most $480$ vertices from each $T^{\{i',j'\}}$. Finally, we extend the paths $Q^{\{i,j\}}_v$ to the paths $P^{\{i,j\}}_v$ that we want, by finding $|R^*|$ internally disjoint short paths inside $G[T_0]$ that join $r_v$ to $x^{\{i,j\}}_v$, for all $v\in V(H)$ and all appropriate~$\{i,j\}$. What remains is to find the $H^{\{i,j\}}$ minors. Fix some pair~$\{i,j\}$. Let $X$ be the set of vertices on the paths $P^{\{i',j'\}}_{v'}$ that lie inside~$T^{\{i,j\}}$, other than the roots $R^{\{i,j\}}$. By the choice of these paths, we have $|X|\le 480|R^*|< 480\ell|H|$. Let $G'=G[T^{\{i,j\}}-X]$. Then it is enough to find an $H^{\{i,j\}}$ minor in $G'$ that is rooted at $R^{\{i,j\}}$. Recall that $H^{\{i,j\}}$ has at most $\epsilon|H|/4+2$ vertices, and average degree at most $\ell d$.
Since $H$ itself has average degree~$d$, it is possible to add edges to $H^{\{i,j\}}$, if necessary, so that the resultant graph $H'$ has average degree $d'$ where $\epsilon d/5\leqslant d'\leqslant \ell d$ (if $D_2$ is large). It is now enough to find $H'$ as a minor of $G'$, rooted at $R^{\{i,j\}}$. Recalling the properties of the~$T_i$, we see that $|G'|\leqslant 6m$, $\delta(G')\geqslant 3m{}/4-|X|$ and $\kappa(G')\geqslant m{}/40-|X|$. If $D_2$ is large, this means $\delta(G')> 2m{}/3\geqslant |G'|/9$ and $\kappa(G')\geqslant m{}/50\geqslant |G'|/300$. Let $\epsilon'=1/300$. Then we can pick~$p\ge 1/10$ such that $\epsilon'<p<1-\epsilon'$ and $G'$ has density at least $p+\epsilon'$. Moreover, $\delta(G')\geqslant 2m{}/3$ implies we can choose $p+\epsilon' \geqslant 2m{}/3|G'|$, and since $\epsilon'\leqslant m/50|G'|$ we have $p\geqslant 3m{}/5|G'|$. Recalling that $|H'|\leqslant \epsilon|H|/4+2$, and that $\epsilon d/5\leqslant d'\leqslant \ell d$, we obtain, if $D_2$ is large, $$ |G'| \geqslant \frac{3m{}}{5p}\geqslant \frac{3\epsilon}{5p} |H|\sqrt{\log d}\geqslant \frac{11}{5p}|H'|\sqrt{\log d} \geqslant \frac{2}{p}|H'|\sqrt{\log d'}\,. $$ By the definition of~$\alpha>1/4$, this implies $$ |G'| \geqslant 4\frac{\sqrt{\log(1/(1-p))}}{p/2} |H'|\sqrt{\log_{1/(1-p)} d'} > |H'|\sqrt{\log_{1/(1-p)} d'}\,. $$ Because $d'>\epsilon d/5$ we can ensure that $d'>D_1(\epsilon')$ by making $D_2$ large. We now have all the conditions we need to conclude, from Theorem~\ref{T:sectwostrong}, that $H'$ is a minor of $G'$ rooted at $R^{\{i,j\}}$, as required. \end{proof} \section{Further extensions}\label{secfuture} As mentioned in the introduction, Myers and Thomason~\cite{myersthom} defined a graph parameter $\gamma(H)$ and proved that $c(H)=(\alpha\gamma(H)+o(1)) t\sqrt{\log t}$ for graphs $H$ with $t$ vertices and at least $t^{1+\tau}$ edges. 
The parameter $\gamma(H)$ is found by considering non-negative vertex weightings $w:V(H)\to{\mathbf R}^+$, and is given by $$ \gamma(H)\,=\,\min_w\,{1\over t}\,\sum_{u\in H}w(u) \mbox{\qquad such that\qquad} \sum_{uv\in E(H)}\,t^{-w(u)w(v)}\,\le\,t\,. $$ The constant weighting $w(v)=\sqrt \tau$ satisfies the constraints, so $\gamma(H)\le\sqrt\tau$ always, but in general the relative sizes of the vertex weights are, essentially, the relative sizes of the sets $|U_v|$ that are most likely to give an $H$ minor in a random graph (this is where the parameter comes from). For most $H$, and for regular $H$ when $\tau>0$, the optimal sizes of the $|U_v|$ are the same, so $\gamma(H)\approx\sqrt\tau$ and $c(H)=(\alpha+o(1)) |H|\sqrt{\log d}$. But there are natural examples where taking equal sized sets $|U_v|$ is not optimal. For example, the complete bipartite graph $K_{\beta t, (1-\beta)t}$ satisfies $\gamma(K_{\beta t, (1-\beta)t})\approx 2\sqrt{\beta(1-\beta)}$. When $H$ is sparse, examples such as the hypercube cited in the introduction suggest that $\gamma(H)$ is not enough on its own to determine $c(H)$. Nevertheless we think that Theorem~\ref{T:key} can be extended to incorporate some features of $\gamma(H)$. For example, if $\beta$ is fixed, the proof of Theorem~\ref{T:key} could probably be modified to show that, if $d$ is large and $H$ is a bipartite graph with average degree~$d$, having vertex class sizes $\beta|H|$ and $(1-\beta)|H|$, then $c(H)\le (2\sqrt{\beta(1-\beta)}\alpha+o(1))|H|\sqrt{\log d}$. Moreover, the argument of~\cite{NRTW} indicates that equality holds for almost all such~$H$. Similar remarks could be made regarding multipartite graphs. With this in mind, we have, as mentioned earlier, stated Theorem~\ref{T:connectpartition} in a form more general than what is needed in this paper, but other than that we do not pursue any details of these ideas here. \end{document}
\begin{document} \begin{abstract} This paper provides an overview of selected results and open problems in the theory of hyperplane arrangements, with an emphasis on computations and examples. We give an introduction to many of the essential tools used in the area, such as Koszul and Lie algebra methods, homological techniques, and the Bernstein-Gelfand-Gelfand correspondence, all illustrated with concrete calculations. We also explore connections of arrangements to other areas, such as De Concini-Procesi wonderful models, the Feichtner-Yuzvinsky algebra of an atomic lattice, fat points and blowups of projective space, and plane curve singularities. \end{abstract} \maketitle \tableofcontents \section{Introduction and algebraic preliminaries} \vskip -.05in There are a number of wonderful sources available on hyperplane arrangements, most notably Orlik-Terao's landmark 1992 text \cite{ot}. In the last decade alone several excellent surveys have appeared: Suciu's paper on aspects of the fundamental group \cite{Su}, Yuzvinsky's paper on Orlik-Solomon algebras and local systems cohomology \cite{Yuzsurvey}, and several monographs devoted to connections to areas such as hypergeometric integrals \cite{othyper}, mathematical physics \cite{var}, as well as proceedings from conferences at Sapporo \cite{ft}, Northeastern \cite{csTapp} and Istanbul \cite{ESTUYV}. The aim of this note is to provide an overview of some recent results and open problems, with a special emphasis on connections to computation. The paper also gives a concrete and example-driven introduction for non-specialists, but there is enough breadth here that even experts should find something new. There are few proofs, but rather pointers to original source material. We also explore connections of arrangements to other areas, such as De Concini-Procesi wonderful models, the Feichtner-Yuzvinsky algebra of an atomic lattice, the Orlik-Terao algebra and blowups, and plane curve singularities.
All computations in this survey can be performed using Macaulay2 \cite{GS}, available at: {\tt http://www.math.uiuc.edu/Macaulay2/}, and the arrangements package by Denham and Smith~\cite{DSmith}. Let $V={\mathbb{K}}^{\ell}$, and let $S$ be the symmetric algebra on $V^*$: $ S = \bigoplus_{i \in {\mathbb{Z}}}S_i$ is a ${\mathbb{Z}}$-graded ring, which means that if $s_i \in S_i$ and $s_j \in S_j$, then $s_i \cdot s_j \in S_{i+j}$. A graded $S$-module $M$ is defined in similar fashion. Of special interest is the case where $S_0$ is a field ${\mathbb{K}}$, so that each $M_i$ is a ${\mathbb{K}}$--vector space. The free $S$-module with generator in degree $i$ is written $S(-i)$, and in general $M(i)_j = M_{i+j}$. \begin{defn} The Hilbert function $HF(M,i) = \dim_{{\mathbb{K}}}M_i.$ \end{defn} \begin{defn} The Hilbert series $HS(M,t) = \sum_{i \in {\mathbb{Z}}}\dim_{{\mathbb{K}}}M_i\,t^i.$ \end{defn} \begin{exm}\label{exm:first} $S={\mathbb{K}}[x,y]$, $M=S/\langle x^2,xy \rangle$. Then \begin{center} \begin{supertabular}{|c|c|c|} \hline $i$ & $M_i$ & $M(-2)_i$ \\ \hline $0$ & $1$ & $0$\\ \hline $1$ & $x,y$ &$0$\\ \hline $2$ & $y^2$ &$1$ \\ \hline $3$ & $y^3$ & $x,y$ \\ \hline $4$ & $y^4$ & $y^2$ \\ \hline $n$ & $y^n$ & $y^{n-2}$ \\ \hline \end{supertabular} \end{center} \noindent The respective Hilbert series are \[ HS(M,t)=\frac{1-2t^2+t^3}{(1-t)^2} \mbox{ and } HS(M(-2),t)=\frac{t^2(1-2t^2+t^3)}{(1-t)^2} \] An induction shows that $HS(S(-i),t)=t^i/(1-t)^{\ell}$; this makes it easy to compute the Hilbert series of an arbitrary graded module from a free resolution. For $S/\langle x^2,xy\rangle$, a minimal free resolution is \[ 0 \longrightarrow S(-3) \xrightarrow{\left[ \! \begin{array}{c} y \\ -x \end{array}\! \right]} S(-2)^2 \xrightarrow{\left[ \!\begin{array}{cc} x^2& xy \end{array}\! \right]} S \longrightarrow S/I \longrightarrow 0. 
\] \begin{center} The map $[x^2,xy]$ sends $\begin{array}{c} \mbox{ }e_1 \mapsto x^2\\ \mbox{ }e_2 \mapsto xy, \end{array}$ \end{center} so in order to have a map of graded modules, the basis elements of the source must have degree two, explaining the shifts in the free resolution. Taking the alternating sum of the Hilbert series yields \[ HS(M,t)=\frac{t^3-2t^2+1}{(1-t)^2} \] which agrees with the previous computation. \end{exm} \begin{exm}\label{exm:twistedcubic} The $2\times2$ minors of $\left[ \! \begin{array}{ccc} x&y&z\\ y&z&w \end{array}\! \right]$ define the twisted cubic $I \subseteq S={\mathbb{K}}[x,y,z,w]$. \begin{small} \[ 0 \longrightarrow S(-3)^2 \xrightarrow{\left[ \! \begin{array}{cc} -z & w\\ y & -z \\ -x & y \end{array}\! \right]} S(-2)^3 \xrightarrow{\left[ \!\begin{array}{ccc} y^2\!-\!xz& yz\!-\!xw& z^2\!-\!yw \end{array}\! \right]} S \longrightarrow S/I \longrightarrow 0 \] \end{small} The numerical information in a free resolution may be compactly displayed as a {\em betti table}: \[ b_{ij} = \dim_{{\mathbb{K}}}\mathop{\rm Tor}\nolimits_i^S(M,{\mathbb{K}})_{i+j}. \] $$ \vbox{\offinterlineskip \halign{\strut\hfil# \ \vrule\quad&# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ \cr total&1&3&2\cr \noalign {\hrule} 0&1 &--&--&\cr 1&--&3 &2 &\cr \noalign{ } \noalign{ } }} $$ \vskip -.1in \noindent In particular, the indexing begins at position $(0,0)$ and is read over and down. So for the twisted cubic, $b_{21}(S/I) = \dim_{{\mathbb{K}}}\mathop{\rm Tor}\nolimits_2^S(S/I,{\mathbb{K}})_3 = 2.$ \end{exm} \noindent We now give a quick review of arrangements. Let ${\mathcal{A}}=\{H_1,\dots ,H_n\}$ be an arrangement of complex hyperplanes in ${\mathbb{C}}^{\ell}$. We assume ${\mathcal{A}}$ is {\em central} and {\em essential}: the $l_i$ with $H_i = V(l_i)$ are homogeneous, and the common zero locus $V(l_1,\ldots, l_n) = 0 \in {\mathbb{C}}^{\ell}$. 
The central condition means that ${\mathcal{A}}$ also defines an arrangement in ${\mathbb{P}}^{{\ell}-1}$. The main combinatorial object associated to ${\mathcal{A}}$ is the intersection lattice $L_{\mathcal A}$, which consists of the intersections of elements of ${\mathcal{A}}$, ordered by reverse inclusion. ${\mathbb{C}}^{\ell}$ is the lattice element $\hat{0}$ and the rank one elements of $L_{\mathcal A}$ are the hyperplanes. \begin{defn} The M\"{o}bius function $\mu$ : $L_{\mathcal A} \longrightarrow {\mathbb{Z}}$ is defined by $$\begin{array}{*{3}c} \mu(\hat{0}) & = & 1\\ \mu(t) & = & -\!\!\sum\limits_{s < t}\mu(s) \mbox{, if } \hat{0}< t \end{array}$$ \end{defn} \noindent The Poincar\'e and characteristic polynomials of ${\mathcal{A}}$ are defined as \[ \pi({\mathcal{A}},t) = \!\!\sum\limits_{x \in L({\mathcal A})}\mu(x) \cdot (-t)^{\text{rank}(x)}, \mbox{ and } \chi({\mathcal{A}},t) = t^{\text{rk}({\mathcal{A}})}\pi({\mathcal{A}},\frac{-1}{t}) \] \begin{exm}\label{exm:nonfano} The $A_3$ arrangement is $\bigcup_{1\le i < j \le 4}V(x_i-x_j) \subseteq {\mathbb{C}}^4$. Projecting along $(1,1,1,1)$ gives a central arrangement in ${\mathbb{C}}^3$, hence a configuration of lines in ${\mathbb{P}}^2$. This configuration corresponds to the figure below, but with the line at infinity (which bounds the figure) omitted. \begin{center} \epsfig{file=cmhf3.eps,height=1.3in,width=1.3in} \end{center} For the $7$ rank two elements of $L(A_3)$, the four corresponding to triple points have $\mu = 2$, and the three normal crossings have $\mu =1$. Thus, $\pi(A_3,t) = 1+6t+11t^2+6t^3$. Adding the bounding line gives the non-Fano arrangement NF, with $\pi(NF, t)=1+7t+15t^2+9t^3$. 
\end{exm} \noindent In \cite{OS}, Orlik and Solomon showed that the cohomology ring of the complement ${M_{\mathcal A}} = {\mathbb{C}}^{\ell}\setminus \bigcup_{i=1}^n H_i$ has presentation $H^{*}({M_{\mathcal A}},{\mathbb{Z}}) =\bigwedge ({\mathbb{Z}}^n)/I$, with generators $e_1, \dots , e_n$ in degree $1$ and \[ I = \langle \sum_{q}(-1)^{q-1}e_{i_1} \cdots \widehat{e_{i_q}}\cdots e_{i_r} \mid \mathop{\rm codim}\nolimits H_{i_1} \cap \cdots \cap H_{i_r} < r \rangle. \] \noindent For additional background on arrangements, see \cite{ot}. \section{$D({\mathcal{A}})$ and freeness} Let ${\mathcal{A}} = \bigcup\limits_{i=1}^n H_i \subseteq V={\mathbb{C}}^{\ell}$ be a central arrangement. For each $i$, fix $V(l_i)=H_i \in {\mathcal{A}}$, and define $Q_{{\mathcal{A}}} = \prod_{i=1}^n l_i \in S = {\mathbb{C}}[x_1,\ldots, x_{\ell}]$. \begin{defn}\label{DA} The module of ${\mathcal{A}}$-derivations (or Terao module) is the submodule of $Der_{{\mathbb{C}}}(S)$ consisting of vector fields tangent to ${\mathcal{A}}$: \[ D({\mathcal A}) = \{ \theta \in Der_{{\mathbb{C}}}(S) | \theta(l_i) \in \langle l_i \rangle \mbox{ for all }l_i\mbox{ with }V(l_i) \in {\mathcal{A}} \}. \] \end{defn} An arrangement is free when $D({\mathcal{A}})$ is a free $S$--module. In this case, the degrees of the generators of $D({\mathcal{A}})$ are called the exponents of the arrangement. Note that $D({\mathcal{A}})$ is always nonzero, since the Euler derivation $\theta_E = \sum_{i=1}^{\ell} x_i \partial/\partial x_i \in D({\mathcal{A}})$. It is easy to show that \[ D({\mathcal{A}})\simeq S\cdot\theta_E \oplus syz(J_{{\mathcal{A}}}), \] where $J_{{\mathcal{A}}}$ is the Jacobian ideal of $Q_{{\mathcal{A}}}$, and $syz$ denotes the module of syzygies on $J_{{\mathcal{A}}}$: polynomial relations on the generators of $J_{{\mathcal{A}}}$. 
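The M\"{o}bius recursion above is easy to run by machine. The following sketch (plain Python rather than the Macaulay2 packages cited earlier; the encoding of the lattice is ours) recovers $\pi(A_3,t)=1+6t+11t^2+6t^3$ from the intersection lattice of $A_3$, with each flat recorded by the set of hyperplanes containing it:

```python
hyperplanes = ["12", "13", "14", "23", "24", "34"]  # the 6 hyperplanes x_i = x_j of A_3

# Each flat of L(A_3) is recorded by the set of hyperplanes containing it.
flats = [frozenset()]                                   # 0-hat: all of C^3
flats += [frozenset([h]) for h in hyperplanes]          # rank 1: the hyperplanes
flats += [frozenset(s) for s in (
    {"12", "13", "23"}, {"12", "14", "24"},             # 4 triple points
    {"13", "14", "34"}, {"23", "24", "34"},
    {"12", "34"}, {"13", "24"}, {"14", "23"})]          # 3 normal crossings
flats.append(frozenset(hyperplanes))                    # 1-hat, rank 3

rank = {f: 0 if not f else 1 if len(f) == 1 else 2 if len(f) <= 3 else 3
        for f in flats}

# Moebius recursion: mu(0-hat) = 1, mu(t) = -sum_{s < t} mu(s).
mu = {}
for f in sorted(flats, key=lambda f: rank[f]):
    mu[f] = 1 if rank[f] == 0 else -sum(mu[g] for g in flats if g < f)

# pi(A,t) = sum_x mu(x) (-t)^rank(x); collect coefficients by rank.
pi = [0, 0, 0, 0]
for f in flats:
    pi[rank[f]] += mu[f] * (-1) ** rank[f]
print(pi)  # coefficients of pi(A_3,t): [1, 6, 11, 6]
```

The triple points pick up $\mu = 2$ and the normal crossings $\mu = 1$, exactly as in Example~\ref{exm:nonfano}.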
\begin{thm}[Saito \cite{slog}] $\mathcal{A}$ is free iff there exist ${\ell}$ elements \[ \theta_i = \sum\limits_{j=1}^{\ell} f_{ij}\frac{\partial}{\partial x_j} \in D(\mathcal{A}), \] such that $\det([f_{ij}]) = c \cdot Q_{{\mathcal{A}}}$, for some $c \ne 0$. \end{thm} \begin{exm}\label{exm:nonfanoII} For Example~\ref{exm:nonfano}, a computation shows that \[ \begin{array}{ccc} D(A_3) & \simeq &S(-1)\oplus S(-2)\oplus S(-3)\\ D(NF) & \simeq &S(-1)\oplus S(-3)\oplus S(-3) \end{array} \] Interestingly, the respective Poincar\'e polynomials factor, as \[ \pi(A_3,t)=(1+t)(1+2t)(1+3t),\mbox{ and }\pi(NF, t)=(1+t)(1+3t)^2. \] This suggests the possibility of a connection between the exponents of a free arrangement and the Poincar\'e polynomial. \end{exm} \noindent A landmark result in arrangements is: \begin{thm}[Terao \cite{t}] If $D({\mathcal{A}}) \simeq \bigoplus\limits_{i=1}^{\ell} S(-a_i)$, then \[ \pi({\mathcal{A}},t)= \prod(1+a_it) = \sum \dim_{{\mathbb{C}}}H^i({\mathbb{C}}^{\ell}\setminus {\mathcal{A}})t^i. \] \end{thm} \begin{exm}\label{exm:stanley}[Stanley] For ${\mathcal{A}}$ below, $\pi({\mathcal{A}},t)=(1+t)(1+3t)^2$. \begin{center} \epsfig{file=cmhf2.eps,height=1.3in,width=1.3in} \end{center} A computation shows that ${\mathcal{A}}$ is not free, so factorization of $\pi({\mathcal{A}},t)$ is necessary but not sufficient for freeness of ${\mathcal{A}}$. \end{exm} A famous open conjecture in the field of arrangements is: \begin{conj}[Terao] If $char({\mathbb{K}})=0$, then freeness of $D({\mathcal{A}})$ depends only on $L_{{\mathcal{A}}}$. \end{conj} \begin{exm}\label{exm:zieglerAB}[Ziegler's pair \cite{z2}] Let ${\mathcal{A}}$ be an arrangement of $9$ lines in ${\mathbb{P}}^2$, as below. 
\begin{center} \epsfig{file=yuz1.eps,height=1.5in,width=1.5in} \end{center} Then $D({\mathcal{A}})$ depends on nonlinear geometry: if the six triple points lie on a smooth conic, we compute: \[ \xymatrix{0 \ar[r]& S(-7) \oplus S(-8) \ar[r]& S(-5) \oplus S^3(-6) \ar[r]& syz(J_{{\mathcal{A}}}) \ar[r]& 0}, \] while if the six triple points do not lie on a smooth conic, the resolution is: \vskip .06in \noindent$\xymatrix{0 \ar[r]& S^4(-7) \ar[r]& S^6(-6) \ar[r]& syz(J_{{\mathcal{A}}}) \ar[r]& 0}.$ \end{exm} A version of Terao's theorem applies to any arrangement: \begin{defn}\label{Dp} ${D^p(\A)} \subseteq \Lambda^p(Der_{{\mathbb{K}}}(S))$ consists of $\theta$ such that \[ \theta(l_i,f_2,\ldots, f_p) \in \langle l_i \rangle, \forall \mbox{ }V(l_i) \in {\mathcal{A}}, f_i \in S. \] \end{defn} \begin{thm}[Solomon-Terao, \cite{solt}]\label{SolT} \[ \chi({\mathcal{A}},t) = (-1)^{\ell} \lim_{x \rightarrow 1} \sum_{p \ge 0}HS({D^p(\A)};x)(t(x-1)-1)^p. \] \end{thm} \begin{prob} Relate the modules ${D^p(\A)}$, for $p\ge2$, to $L_{{\mathcal{A}}}$. \end{prob} A closed subarrangement $\hat {\mathcal{A}} \subseteq {\mathcal{A}}$ is a subarrangement such that $\hat {\mathcal{A}} = {\mathcal{A}}_X$ for some flat $X$. The best result relating $D({\mathcal{A}})$ to $L_{{\mathcal{A}}}$ is: \begin{thm}[Terao, \cite{tunpub}] If $\hat {\mathcal{A}} \subset {\mathcal{A}}$ is a closed subarrangement, then $\mathop{\rm pdim}\nolimits D({\mathcal{A}}) \ge \mathop{\rm pdim}\nolimits D(\hat {\mathcal{A}})$. \end{thm} \begin{prob} Find bounds on $\mathop{\rm pdim}\nolimits D({\mathcal{A}})$ depending on $L_{{\mathcal{A}}}$. \end{prob} A particularly interesting class of arrangements is that of graphic arrangements, which are subarrangements of $A_n$. 
Given a simple (no loops or multiple edges) graph $G$, with $\ell$ vertices and edge set $\mathsf{E}$, we define ${\mathcal{A}}_{G}=\{V(z_i-z_j)\mid (i,j)\in \mathsf{E}\} \subseteq {\mathbb{C}}^{\ell}$. \begin{thm}[Stanley \cite{stan}] ${\mathcal{A}}_{G}$ is supersolvable iff $G$ is chordal.\end{thm} \begin{thm}[Kung-Schenck \cite{ks}] If ${\mathcal{A}}_{G}$ has an induced $k$-cycle, then $\mathop{\rm pdim}\nolimits D({\mathcal{A}}_{G})\!\ge\! k\!-\!3$. \end{thm} \begin{exm} The largest induced cycle of $G$ below is a 6-cycle. \begin{center} \epsfig{file=oct.eps,height=1in,width=1in} \end{center} A computation shows $\mathop{\rm pdim}\nolimits(D({\mathcal{A}}))=3$. \end{exm} \begin{exm} The largest induced cycle of $G$ below is a 4-cycle. \begin{center} \epsfig{file=triprism.eps,height=1in,width=1in} \end{center} A computation shows $\mathop{\rm pdim}\nolimits(D({\mathcal{A}}))=2$. \end{exm} \begin{prob} Find a formula for $\mathop{\rm pdim}\nolimits D({\mathcal{A}}_{G})$. \end{prob} \pagebreak \begin{defn}\label{triple} A triple $({\mathcal A}', {\mathcal A}, {\mathcal A}'')$ of arrangements consists of a choice of $H \in {\mathcal{A}}$, with ${\mathcal{A}}'={\mathcal{A}} \setminus H, {\mathcal{A}}''={\mathcal{A}}|_H.$ \end{defn} \noindent A main tool for proving freeness is Terao's addition-deletion theorem. \begin{thm}[Terao \cite{t2}]\label{AD} For a triple, any two of the following imply the third \begin{enumerate} \item{$D({\mathcal A})\simeq \oplus_{i=1}^n S(-b_i)$.} \item{$D({\mathcal A}')\simeq S(-b_n +1)\oplus_{i=1}^{n-1} S(-b_i).$} \item{$D({\mathcal A}'')\simeq \oplus_{i=1}^{n-1} S/L(-b_i).$} \end{enumerate} \end{thm} \begin{exm} In Example~\ref{exm:nonfano}, the $A_3$ arrangement is free with exponents $\{1,2,3\}$. Let $H$ be the line at infinity, which meets $A_3$ in four points. Then $D({\mathcal A}'')$ is free, with exponents $\{1,2\}$, so the non-Fano arrangement is free with exponents $\{1,3,3\}$, which agrees with our earlier computation. 
Example~4.59 of \cite{ot} gives a free arrangement for which the addition-deletion theorem does not apply. \end{exm} As a corollary of Theorem~\ref{AD}, Terao showed that supersolvable arrangements are free. \begin{defn}\label{SSolve} An element $X$ of a lattice is modular if for all $Y \in L$ and all $Z<Y$, $Z \vee (X \wedge Y) = (Z \vee X) \wedge Y$. A central arrangement ${\mathcal{A}}$ is supersolvable if there exists a maximal chain $\hat{0} = X_0 < X_1 < \cdots < X_n = \hat{1}$ of modular elements in $L({\mathcal{A}})$. \end{defn} For line configurations in ${\mathbb{P}}^2$, the supersolvability condition simply means there is a singular point $p \in {\mathcal{A}}$ such that every other singularity of ${\mathcal{A}}$ lies on a line of ${\mathcal{A}}$ which passes through $p$. For example, the $A_3$ arrangement is supersolvable, since any triple point is such a singularity. For arrangements in ${\mathbb{P}}^2$, there is a beautiful characterization of freeness involving multiarrangements. \begin{defn} A multiarrangement $(\mathcal{A},{\bf m})$ consists of an arrangement ${\mathcal{A}}$, along with a multiplicity $m_i \in {\mathbb{N}}$ for each $H_i \in {\mathcal{A}}$. \[ D({\mathcal{A}},{\bf m}) = \{\theta \mid \theta(l_i) \in \langle l_i^{m_i}\rangle \}. \] \end{defn} \begin{thm} ${\mathcal{A}} \subseteq \mathbb{P}^2$ is free if and only if \begin{enumerate} \item{$\pi({\mathcal{A}},t) = (1+t)(1+at)(1+bt)$ and} \item{$D({\mathcal{A}}|_H,{\bf m}) \simeq S/L(-a)\oplus S/L(-b)$,} \end{enumerate} where $(2)$ holds for all $H=V(L) \in {\mathcal{A}}$, with ${\bf m}(H_i)\!=\!\mu_{{\mathcal{A}}}(H\cap H_i).$ \end{thm} The necessity of these conditions was shown by Ziegler in \cite{z}, and sufficiency was proved by Yoshinaga in \cite{y2}. In \cite{y1}, Yoshinaga gives a generalization to higher dimensions. 
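Saito's criterion can be illustrated concretely for the braid arrangement: for $A_3 \subseteq {\mathbb{C}}^4$ the derivations $\theta_k = \sum_{i=1}^4 x_i^k\,\partial/\partial x_i$, $k=0,\ldots,3$, have as coefficient matrix the Vandermonde matrix, whose determinant is $\prod_{i<j}(x_j-x_i)$, a scalar multiple of $Q_{A_3}$. The following Python sketch (ours; it tests the determinant identity at random integer points rather than symbolically) confirms this:

```python
import random
from itertools import combinations, permutations

def vandermonde_det(xs):
    """Determinant of the matrix [xs[i]**k], computed by brute-force
    permutation expansion (independent of the closed-form product)."""
    n = len(xs)
    det = 0
    for perm in permutations(range(n)):
        sign = 1
        for a, b in combinations(range(n), 2):  # count inversions
            if perm[a] > perm[b]:
                sign = -sign
        term = sign
        for i in range(n):
            term *= xs[i] ** perm[i]
        det += term
    return det

def defining_poly(xs):
    """Q_{A_3} evaluated at a point: prod_{i<j} (x_j - x_i)."""
    q = 1
    for i, j in combinations(range(len(xs)), 2):
        q *= xs[j] - xs[i]
    return q

random.seed(0)
for _ in range(20):
    pt = [random.randint(-50, 50) for _ in range(4)]
    assert vandermonde_det(pt) == defining_poly(pt)
print("det[theta_k(x_i)] agrees with Q_{A_3} at 20 random points")
```

Since both sides are polynomials of degree $6$, agreement at sufficiently many generic points certifies the identity; this recovers the exponents $\{0,1,2,3\}$ of $A_3$ in ${\mathbb{C}}^4$.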
\section{Multiarrangements} \noindent The exponents of free multiarrangements are not combinatorial: \begin{exm}[Ziegler, \cite{z}] Consider the two multiarrangements in ${\mathbb{P}}^1$, with underlying arrangements defined by \begin{center} $\begin{array}{cccc} {\mathcal{A}}_1 &= &V(x\cdot y \cdot (x+y)\cdot (x-y)) \\ {\mathcal{A}}_2 &= &V(x\cdot y \cdot (x+y)\cdot (x-ay)), \end{array}$ \end{center} with $a \ne 1$. To compute $D({\mathcal{A}}_1,(1,1,3,3))$, we must find all \[ \theta = f_1(x,y)\partial/\partial x + f_2\partial/\partial y \] such that \[ \begin{array}{c} \theta(x) \in \langle x \rangle, \mbox{ } \theta(x+y) \in \langle x+y \rangle^3\\ \theta(y) \in \langle y \rangle, \mbox{ } \theta(x-y) \in \langle x-y \rangle^3 \end{array} \] Thus, $D({\mathcal{A}}_1,(1,1,3,3))$ is the kernel of the matrix \begin{center} $ \left[ \! \begin{array}{cccccc} 1 &0 & x& 0 & 0 & 0 \\ 0&1& 0& y&0 & 0 \\ 1& 1& 0& 0& (x+y)^3& 0\\ 1&-1&0& 0& 0& (x-y)^3 \end{array}\! \right].$ \end{center} Computations show that $D({\mathcal{A}}_1,(1,1,3,3))$ has exponents $\{3,5\}$, and $D({\mathcal{A}}_2,(1,1,3,3))$ has exponents $\{4,4\}$. \end{exm} \noindent There is an analog of Theorem~\ref{SolT} for multiarrangements. \begin{defn}\label{Dp2} $D^p({\mathcal{A}},{\bf m}) \subseteq \Lambda^p(Der_{{\mathbb{K}}}(S))$ consists of $\theta$ such that \[ \theta(l_i,f_2,\ldots, f_p) \in \langle l_i \rangle^{m(l_i)}, \forall \mbox{ }V(l_i) \in {\mathcal{A}}, f_i \in S. \] \end{defn} \begin{thm}[Abe-Terao-Wakefield \cite{atw2}] Define $\begin{array}{ccc} \Psi({\mathcal{A}},{\bf m},t,x) &=&\sum\limits_{p=0}^{\ell} HS(D^p({\mathcal{A}},{\bf m}),x)(t(x-1)-1)^p \\ \chi(({\mathcal{A}},{\bf m}),t) &=&(-1)^{\ell}\lim_{x \rightarrow 1}\Psi({\mathcal{A}},{\bf m},t,x). 
\end{array}$ If $D^1({\mathcal{A}},{\bf m}) \simeq \oplus S(-d_i),$ then $\chi(({\mathcal{A}},{\bf m}),t)= \prod\limits_{i=1}^{\ell}(1+d_it).$ \end{thm} In \cite{atw}, Abe-Terao-Wakefield prove an addition-deletion theorem for multiarrangements by introducing {\em Euler multiplicity} for the restriction. It follows from the Hilbert-Burch theorem that any $({\mathcal{A}},{\bf m}) \subseteq {\mathbb{P}}^1$ is free, which leads to the question of whether there exist other arrangements which are free for any ${\bf m}$. In \cite{aty}, Abe-Terao-Yoshinaga prove that any such arrangement is a product of one- and two-dimensional arrangements. Nevertheless, several natural questions arise: \begin{prob} Characterize the projective dimension of $D({\mathcal{A}},{\bf m})$. \end{prob} \begin{prob} Define supersolvability for multiarrangements. \end{prob} \section{Arrangements of plane curves} For a collection of hypersurfaces \[ {\mathcal{C}}=\bigcup_i V(f_i) \subseteq {\mathbb{P}}^n, \] the module of derivations $D({\mathcal{C}})$ is obtained by substituting $f_i$ for $l_i$ in Definition~\ref{DA}. It is not hard to prove that Saito's criterion still applies. Are there other freeness theorems? \begin{exm} For the arrangement ${\mathcal{C}} \subseteq {\mathbb{P}}^2$ depicted below \begin{center} \epsfig{file=1c6l.eps,height=1.5in,width=1.5in} \end{center} we compute that $D({\mathcal{C}}) \simeq S(-1) \oplus S(-2) \oplus S(-5)$. \end{exm} This example can be explained by an addition-deletion theorem \cite{ST1}, but there is subtle behavior related to singular points. For the remainder of this section, $C=\cup_i V(f_i)\subseteq {\mathbb{C}}^2$ is a reduced plane curve, and if $p\in C$ is a singular point, translate so $p=(0,0)$. 
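Looking back at Ziegler's kernel computation in the previous section, the exponent $3$ of $D({\mathcal{A}}_1,(1,1,3,3))$ can be certified by an explicit element: setting $f_1 = \frac{1}{2}\big((x+y)^3+(x-y)^3\big) = x^3+3xy^2$ and $f_2 = \frac{1}{2}\big((x+y)^3-(x-y)^3\big) = 3x^2y+y^3$ gives a degree-three derivation $\theta = f_1\,\partial/\partial x + f_2\,\partial/\partial y$ satisfying all four conditions. A quick Python sketch (ours, using polynomial identity testing at random points) verifies this:

```python
import random

def f1(x, y):
    # x^3 + 3xy^2 = ((x+y)^3 + (x-y)^3) / 2
    return x**3 + 3*x*y**2

def f2(x, y):
    # 3x^2 y + y^3 = ((x+y)^3 - (x-y)^3) / 2
    return 3*x**2*y + y**3

random.seed(1)
for _ in range(25):
    x = random.randint(-100, 100)
    y = random.randint(-100, 100)
    # theta(x) = f1 vanishes on x = 0, so x | f1; likewise y | f2:
    assert f1(0, y) == 0 and f2(x, 0) == 0
    # theta(x+y) = f1 + f2 equals (x+y)^3 exactly, hence lies in <x+y>^3:
    assert f1(x, y) + f2(x, y) == (x + y) ** 3
    # theta(x-y) = f1 - f2 equals (x-y)^3 exactly, hence lies in <x-y>^3:
    assert f1(x, y) - f2(x, y) == (x - y) ** 3
print("theta = (x^3+3xy^2) d/dx + (3x^2y+y^3) d/dy lies in D(A_1,(1,1,3,3))")
```

Since the identities hold at more points than the degrees of the polynomials involved, they hold identically; finding the companion generator of degree $5$ is a similar (if longer) linear algebra exercise.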
\begin{defn}\label{qhomogSing} A plane curve singularity is quasihomogeneous if and only if there exists a holomorphic change of variables after which $f(x,y) = \sum c_{ij}x^{i} y^{j}$ is weighted homogeneous: there exist $\alpha, \beta \in {\mathbb{Q}}$ such that $i\alpha + j\beta$ is constant over all $(i,j)$ with $c_{ij} \ne 0$. \end{defn} \begin{defn} The Milnor number at $(0,0)$ is $$ \mu_{(0,0)}(C) = \dim_\mathbb{C}\mathbb{C}\{x,y\}/\langle\frac{\partial f}{\partial x}\mbox{, }\frac{\partial f}{\partial y}\rangle. $$ The Tjurina number at $(0,0)$ is $$ \tau_{(0,0)}(C) = \dim_{{\mathbb{C}}} {\mathbb{C}} \{x,y\}/\langle\frac{\partial f}{\partial x}\mbox{, }\frac{\partial f}{\partial y}\mbox{, }f\rangle. $$ \end{defn} For a projective plane curve $V(Q)\subseteq {\mathbb{P}}^2$, it is easy to see that the degree of $Jac(Q)$ is $\sum_{p \in sing(V(Q))}\tau_p$. \begin{exm} Let ${\mathcal{C}}$ be as below: \begin{center} \epsfig{file=1c4l.eps,height=1.1in,width=1.6in} \end{center} If $p$ is an ordinary singularity with $k$ distinct branches, then $\mu_p(C)=(k-1)^2$, so the sum of the Milnor numbers is $20$. However, a computation shows that $\deg(J_{{\mathcal{C}}}) = 19$. All singularities are ordinary, but the singularity at the origin is not quasihomogeneous. \end{exm} \begin{thm}[Saito \cite{s}] If $C=V(f)$ has an isolated singularity at the origin, then $f \in Jac(f)$ iff $f$ is quasihomogeneous. \end{thm} For arrangements of lines and conics such that every singular point is quasihomogeneous, \cite{ST1} proves an addition/deletion theorem; \cite{STY} generalizes the result to curves of higher genus. \begin{exm} Let ${\mathcal{C}}$ be as below: \begin{center} \epsfig{file=tan2c2l.eps,height=1.5in,width=1.5in} \end{center} $D({\mathcal{C}})$ has exponents $\{1,2,3\}$, which can be shown using the aforementioned addition-deletion theorem. Change ${\mathcal{C}}$ to ${\mathcal{C}}'$ via: \[ y=0 \longrightarrow x-13y=0. 
\] A computation shows that $D({\mathcal{C}}')$ is not free. Thus, for line-conic arrangements, freeness is not combinatorial. \end{exm} \begin{prob} Define supersolvability for hypersurface arrangements. \end{prob} \begin{prob} Give combinatorial bounds on $\mathop{\rm pdim}\nolimits D({\mathcal{C}})$. \end{prob} \begin{prob} Analyze associated primes and $\mathop{\rm Ext}\nolimits$ modules of $D({\mathcal{C}})$. \end{prob} \section{The Orlik--Terao algebra and blowups} The Orlik--Terao algebra is a symmetric analog of the Orlik-Solomon algebra. While the Orlik-Solomon algebra records the existence of dependencies among sets of hyperplanes, the Orlik-Terao algebra records the actual dependencies. If $\mathop{\rm codim}\nolimits \cap_{j=1}^m H_{i_j} < m$, then there exist $c_{i_j}$ with \[ \sum\limits_{j=1}^m c_{i_j}\cdot l_{i_j} = 0 \mbox{ a dependency}. \] \begin{defn} The Orlik-Terao ideal \[ I_{{\mathcal{A}}} = \langle \sum_{j=1}^m c_{i_j} (y_{i_1}\cdots \hat y_{i_{j}} \cdots y_{i_m}) \mid \mbox{ over all dependencies}\rangle \] \end{defn} The Orlik-Terao algebra is $C({\mathcal{A}}) = {\mathbb{K}}[y_1,\ldots,y_n]/I_{{\mathcal{A}}}.$ \begin{exm}\label{otexm1} ${\mathcal{A}} = V(x_1 \cdot x_2\cdot x_3 \cdot (x_1+x_2+x_3))$, the only dependency is $l_1\!+\!l_2\!+\!l_3\!-\!l_4 = 0$, so $I_{{\mathcal{A}}} =\langle y_2y_3y_4+y_1y_3y_4+y_1y_2y_4-y_1y_2y_3 \rangle.$ \end{exm} In \cite{ot1}, Orlik and Terao answer a question of Aomoto by considering the Artinian quotient $AOT$ of $C({\mathcal{A}})$ by $\langle y_1^2,\ldots,y_n^2 \rangle$. They prove: \begin{thm}[Orlik-Terao \cite{ot1}] $HS(AOT, t) = \pi({\mathcal{A}},t)$.\end{thm} \begin{thm}[Terao \cite{ter}]\label{TeraoCA} \[ HS(C({\mathcal{A}}),t) = \pi\Big({\mathcal{A}}, \frac{t}{1-t}\Big). 
\] \end{thm} It is not hard to show that \[0 \rightarrow I_{{\mathcal{A}}} \rightarrow {\mathbb{K}}[y_1,\ldots,y_n]\stackrel{\phi}{\rightarrow} {\mathbb{K}}\Bigg[\frac{1}{l_1},\ldots, \frac{1}{l_n}\Bigg] \rightarrow 0\] is exact, so $V(I_{{\mathcal{A}}})\subseteq {\mathbb{P}}^{n-1}$ is irreducible and rational. In any situation where weights of dependencies play a role, the Orlik-Terao algebra is the natural candidate to study. One such situation involves 2-formality: \begin{defn} ${\mathcal{A}}$ is 2-formal if all dependencies are generated by dependencies among three hyperplanes. \end{defn} \begin{thm}[Falk-Randell \cite{fr}]If ${\mathcal{A}}$ is $K(\pi,1)$, ${\mathcal{A}}$ is 2-formal.\end{thm} \begin{thm}[Yuzvinsky \cite{yu1}] If ${\mathcal{A}}$ is free, ${\mathcal{A}}$ is 2-formal.\end{thm} One reason that formality is interesting is that it is not a combinatorial invariant: in Example~\ref{exm:zieglerAB}, the arrangement for which the six triple points lie on a smooth conic is not 2-formal, and the arrangement for which the points do not lie on a smooth conic is 2-formal. \begin{thm}[\cite{ST1}]\label{ST2formal} ${\mathcal{A}}$ is 2-formal iff $\mathop{\rm codim}\nolimits(I_{{\mathcal{A}}})_2=n-\ell.$ \end{thm} In \cite{bt}, Brandt and Terao generalized the notion of 2-formality to $k$-formality: ${\mathcal{A}}$ is $k$-formal if all dependencies are generated by dependencies among $k+1$ or fewer hyperplanes. Brandt and Terao prove that every free arrangement is $k$-formal. \begin{prob} Find an analog of Theorem~\ref{ST2formal} for $k$-formality. \end{prob} \begin{exm} The configuration of Example~\ref{otexm1} consists of four generic lines. The Orlik-Terao ideal defines a cubic surface in ${\mathbb{P}}^3$, and a computation shows that $V(I_{{\mathcal{A}}})$ has four singular points. \end{exm} This can be interpreted in terms of a rational map. 
Let $\alpha_i = Q_{{\mathcal{A}}}/l_i$, and define $\phi_{{\mathcal{A}}}=[\alpha_1,\ldots,\alpha_n]$: \[ {\mathbb{P}}^{\ell-1}\stackrel{\phi_{{\mathcal{A}}}}{\longrightarrow}{\mathbb{P}}^{n-1}. \] Restrict to the case ${\mathcal{A}} \subseteq {\mathbb{P}}^2$, and let ${X_{\mathcal A}} \stackrel{\pi}{\longrightarrow} {\mathbb{P}}^2$ denote the blowup of ${\mathbb{P}}^2$ at the singular points of ${\mathcal{A}}$, with $E_0$ denoting the pullback to ${X_{\mathcal A}}$ of the class of a line on ${\mathbb{P}}^2$, and $E_i$ the exceptional divisors over singular points of ${\mathcal{A}}$. Let \[ D_{{\mathcal{A}}} = (n-1)E_0 - \!\!\sum\limits_{p_i \in L_2({\mathcal{A}})} \mu(p_i)E_i. \] Using the result of Proudfoot-Speyer \cite{ps} that $C({\mathcal{A}})$ is Cohen-Macaulay, together with the Riemann-Roch theorem, \cite{syzRes} shows that the map $\phi_{{\mathcal{A}}}$ is determined by the global sections of $D_{{\mathcal{A}}}$, and that $\phi_{{\mathcal{A}}}$ \begin{enumerate} \item is an isomorphism on $\pi^{-1}({\mathbb{P}}^2 \setminus {\mathcal{A}})$ \item contracts the lines of ${\mathcal{A}}$ to points \item blows up the singularities of ${\mathcal{A}}$. \end{enumerate} \pagebreak \begin{defn}\label{CMregularity} A graded $S$-module $N$ has Castelnuovo-Mumford regularity $d$ if $\mathop{\rm Ext}\nolimits^j(N,S)_n=0$ for all $j$ and all $n\le -d-j-1$. \end{defn} In terms of the betti table, the regularity of $N$ is the label of the last non-zero row, so in Example~\ref{exm:twistedcubic}, $S/I$ has Castelnuovo-Mumford regularity one. The regularity of $C({\mathcal{A}})$ is determined in \cite{syzRes}: \begin{thm}[\cite{syzRes}]\label{CMregofCA} For ${\mathcal{A}} \subseteq {\mathbb{P}}^{\ell-1}$, $C({\mathcal{A}})$ is $(\ell-1)$-regular. 
\end{thm} To see this, note that since $C({\mathcal{A}})$ is Cohen-Macaulay, quotienting $C({\mathcal{A}})$ by $\ell$ generic linear forms yields an Artinian ring whose Hilbert series is the numerator of the Hilbert series of $C({\mathcal{A}})$. The regularity of an Artinian graded module is the largest degree in which it is nonzero, so the result follows from Theorem~\ref{TeraoCA}. A main motivation for studying $C({\mathcal{A}})$ is a surprising connection to nets and resonance varieties, which are the subject of \S~\ref{resonanceSection}. First, the definition of a net: \begin{defn}~\label{Net} Let $3 \le k \in {\mathbb{Z}}$. A $k$-net in ${\mathbb{P}}^2$ is a partition of the lines of an arrangement ${\mathcal{A}}$ into $k$ subsets ${\mathcal{A}}_i$, together with a choice of points $Z$ contained in the singular locus of ${\mathcal{A}}$, such that: \begin{enumerate} \item{for every $i \ne j$ and every $L \in{\mathcal{A}}_i, \ L'\in{\mathcal{A}}_j$, $L \cap L'\in Z$.} \item{for every $p \in Z$ and every $i \in \{1,\ldots,k\}$, there exists a unique $L\in {\mathcal{A}}_i$ with $p\in L$.} \end{enumerate} \end{defn} In \cite{LY}, Libgober and Yuzvinsky show that nets are related to the first resonance variety $R^1({\mathcal{A}})$. The definition of a net forces each subset ${\mathcal{A}}_i$ to have the same cardinality, and if $m=|{\mathcal{A}}_i|$, the net is called a $(k,m)$-net. Using work of \cite{LY} and \cite{FalkY}, it is shown in \cite{syzRes} that \begin{thm}\label{ENcpx} Existence of a $(k,m)$-net implies that there is a decomposition $D_{{\mathcal{A}}} = A+B$ with $h^0(A)=2$ and $h^0(B)=km - {m+1 \choose 2}$. \end{thm} \begin{defn}\label{1generic} A matrix of linear forms is $1$-generic if it has no zero entry, and cannot be transformed by row and column operations to have a zero entry. 
\end{defn} In \cite{EisenbudFactor}, Eisenbud shows that if a divisor $D$ on a smooth curve $X$ factors as $D \simeq A+B$, with $A$ having $m$ sections and $B$ having $n$ sections, then the ideal of the image of $X$ under the map defined by the global sections of $D$ will contain the $2 \times 2$ minors of a $1$-generic matrix. Using this result and Theorem~\ref{ENcpx}, it can be shown that $I_{{\mathcal{A}}}$ contains the ideal $I_2(M)$ of $2 \times 2$ minors of a $1$-generic \begin{small}$2 \times \Big( km - {m+1 \choose 2} \Big)$\end{small} matrix $M$. So if \begin{small}$G=S(-1)^{km - {m+1 \choose 2}}$\end{small}, the Eagon-Northcott complex \cite{E} \[ \cdots \rightarrow S_2(S^2)^* \otimes \Lambda^4 G \rightarrow (S^2)^* \otimes \Lambda^3 G \rightarrow \Lambda^2 G \rightarrow \Lambda^2 S^2 \rightarrow S/I_2(M) \rightarrow 0 \] is a subcomplex of the resolution of $S/I_{{\mathcal{A}}}$. The geometric content of Theorem~\ref{ENcpx} is that it implies $V(I_{{\mathcal{A}}})$ lies on a {\em scroll} \cite{E}. \begin{exm}\label{ShowMeThm} For the $A_3$ arrangement, the set of triple points $Z$ gives a $(3,2)$-net, where the ${\mathcal{A}}_i$ correspond to normal crossing points: ${\mathcal{A}}_1 = 12\mid 34$, ${\mathcal{A}}_2 = 13\mid 24$, ${\mathcal{A}}_3 = 14\mid 23$. \[ \mbox{Let }A = 2E_0-\!\!\sum\limits_{\{p | \mu(p) = 2\}} \!\!E_p \mbox{ and } B = 3E_0-\!\!\sum\limits_{p \in L_2({\mathcal{A}})} \!\!E_p. \] So $km - {m+1 \choose 2} = 6-3 =3$ and $I$ contains the $2\times 2$ minors of a $2 \times 3$ matrix, whose resolution appears in Example~\ref{exm:twistedcubic}. The graded betti diagram for ${\mathbb{C}}[x_0,\ldots,x_5]/I_{{\mathcal{A}}}$ is $$ \vbox{\offinterlineskip \halign{\strut\hfil# \ \vrule\quad&# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ \cr total&1&4&5&2&\cr \noalign {\hrule} 0&1 &--&--&--&\cr 1&--&4 &2 &--&\cr 2&--&--&3 &2&\cr \noalign{ } \noalign{ } }} $$ From this, it follows that the free resolution of $S/I_{{\mathcal{A}}}$ is a mapping cone resolution \cite{E}. 
The geometric meaning is that $X_{{\mathcal{A}}}$ is the intersection of a generic quadric hypersurface with the scroll. \end{exm} \noindent Since $D_{{\mathcal{A}}}$ contracts proper transforms of lines to points, it is not very ample. However, it follows from \cite{syzRes} that $D_{{\mathcal{A}}}+E_0$ is very ample, and gives a De Concini-Procesi wonderful model (see next section) for the blowup. \begin{prob} Determine the graded betti numbers of $C({\mathcal{A}})$. \end{prob} \begin{prob} Relate $R^k({\mathcal{A}})$ to the graded betti numbers of $C({\mathcal{A}})$. \end{prob} \section{Compactifications} In \cite{FM}, Fulton-MacPherson provide a compactification $F(X,n)$ for the configuration space of $n$ marked points on an algebraic variety $X$. The construction is quite involved, but the combinatorial data is that of $A_n$. In a related vein, in \cite{DP}, De Concini-Procesi construct a wonderful model $X$ for a subspace complement ${M_{\mathcal A}} = {\mathbb{C}}^{\ell} \setminus {\mathcal{A}}$: a smooth, compact $X$ such that $X \setminus {M_{\mathcal A}}$ is a normal crossing divisor. Here it is the combinatorics which are complex. A key object in their construction is \[ {M_{\mathcal A}} \longrightarrow {\mathbb{C}}^{\ell} \times \prod\limits_{D \in G}{\mathbb{P}}({\mathbb{C}}^{\ell}/D),\] where $G$ is a building set. In \cite{FK}, Feichtner-Kozlov generalize the construction of \cite{DP} to a purely lattice-theoretic setting. See \cite{F} for additional background on this section. \begin{defn}\label{definition:BuildingSet} For a lattice $L$, a building set $G$ is a subset of $L$, such that for all $x \in L$, $\max \{G_{\le x}\} = \{x_1,\ldots,x_m\}$ satisfies $[\hat{0},x] \simeq \prod_{j=1}^m [\hat{0},x_j].$ A building set contains all irreducible $x \in L$. 
\end{defn} \begin{defn}\label{definition:NestSet} A subset $N$ of a building set $G$ is nested if for any set of incomparable $\{x_1,\ldots,x_p\} \subseteq N$ with $p\ge 2$, $x_1 \vee x_2 \vee \cdots \vee x_p$ exists in $L$, but is not in $G$. \end{defn} Nested sets form a simplicial complex $N(G)$, with vertices the elements of $G$ (which are vacuously nested). \begin{exm}\label{minBldgSet} The minimal building set for $A_3$ consists of the hyperplanes themselves, the triple intersections in $L_2$, and the element $\hat{1}$. Since $\hat{1}$ is a member of every face of $N(G)$, the nested set complex $N(G)$ is the cone over \begin{center} \epsfig{file=compact.eps,height=1.7in,width=2.0in} \end{center} There is an edge $\overline{(12),(123)}$ because $(12)$ and $(123)$ are comparable, so there are no incomparable subsets with at least two elements, while $\overline{(12),(34)}$ is an edge because $(12)\vee(34)$ exists in $L$ (it is a normal crossing), but is not in $G$. \end{exm} Suppose $L$ is an atomic lattice, and $G$ a building set in $L$. In \cite{FY}, Feichtner and Yuzvinsky study a certain algebra associated to the pair $L,G$: \[ D(L,G) = {\mathbb{Z}}[x_g \mid g \in G]/I, \mbox{ with }x_g \mbox{ of degree }2, \] where $I$ is generated by \[ \prod\limits_{i=1}^{k} x_{g_i} \mbox{ for } \{g_1,\ldots, g_k\} \not\in N(G), \mbox{ and } \sum\limits_{g \ge H}x_{g} \mbox{ for each atom } H \in L_1. \] \begin{thm}[Feichtner-Yuzvinsky \cite{FY}]\label{thm:AtomicLatticealg} If ${\mathcal{A}}$ is a hyperplane arrangement and $G$ a building set containing $\hat{1}$, then \[ D(L,G) \simeq H^*(Y_{{\mathcal{A}},G},{\mathbb{Z}}), \] where $Y_{{\mathcal{A}},G}$ is the wonderful model arising from the building set $G$. \end{thm} The importance of this is the relation to the Knudsen-Mumford compactification $\overline{M_{0,n}}$ of the moduli space of $n$ marked points on ${\mathbb{P}}^1$. \begin{thm}[De Concini-Procesi \cite{DP2}]\label{dpthm} \[ \overline{M_{0,n}} \simeq Y_{A_{n-2}, G}, \] where $G$ is the minimal building set for $A_{n-2}$. 
\end{thm} A presentation for the cohomology ring of $\overline{M_{0,n}}$ was first described by Keel in \cite{keel}; the description which follows from \cite{FY} is very economical. \begin{exm} By Theorem~\ref{thm:AtomicLatticealg} and \cite{DP2}, \[ H^*(\overline{M_{0,5}}, {\mathbb{Z}}) \simeq D(L(A_3), G_{min}). \] The nested set complex for $A_3$ and $G_{min}$ appears in Example~\ref{minBldgSet}, so that $D(L(A_3), G_{min})$ is the quotient of a polynomial ring $S$ with eleven generators by an ideal consisting of 6 linear forms (one form for each hyperplane) and 19 quadrics. To see that there are 19 quadrics, note that the space of quadrics in $11$ variables has dimension $45$, and $N(G_{min})$ has $15+11 = 26$ edges (recall that $\hat{1}$ is not pictured). A computation shows that \[ D(L(A_3), G_{min})\simeq {\mathbb{Z}}[x_1,\ldots, x_5]/I, \] where $I$ consists of all but one quadric of $S$ (and includes all squares of variables). This meshes with the intuitive picture: to obtain a wonderful model, simply blow up the four triple points, so that $\overline{M_{0,5}}$ is the corresponding Del Pezzo surface $X_4$, which has $\sum h^i(X_4,{\mathbb{Z}})t^i = 1+5t^2+t^4$, agreeing with the computation. \end{exm} \begin{prob} Analyze $D(L,G)$ for other lattices. \end{prob} \section{Associated Lie algebra of $\pi_1$ and LCS ranks} Let $G$ be a finitely-generated group, with normal subgroups, \[ G=G_1 \ge G_2 \ge G_3\ge\cdots, \] defined inductively by $G_k = [G_{k-1},G]$; this is the lower central series (LCS) of $G$. We obtain an associated Lie algebra \[ gr(G)\otimes {\mathbb{Q}} := \bigoplus_{k=1}^{\infty} G_k/G_{k+1} \otimes {\mathbb{Q}}, \] with Lie bracket induced by the commutator map. Let $\phi_k=\phi_k(G)$ denote the rank of the $k$-th quotient. Presentations for $\pi_1({M_{\mathcal A}})$ are given by Randell \cite{R}, Salvetti \cite{Sal}, Arvola \cite{Ar}, and Cohen-Suciu \cite{CS}. For computations, the braid monodromy presentation of \cite{CS} is easiest to implement.
For a detailed discussion of $\pi_1({M_{\mathcal A}})$, see Suciu's survey \cite{Su}. The fundamental group is quite delicate, and in this section, we investigate properties of $\pi_1({M_{\mathcal A}})$ via the associated graded Lie algebra \[ \mathfrak{g} = gr(\pi_1({M_{\mathcal A}}))\otimes {\mathbb{Q}}. \] The Lefschetz-type theorem of Hamm-Le \cite{HL} implies that taking a generic two-dimensional slice gives an isomorphism on $\pi_1$. Thus, to study $\pi_1({M_{\mathcal A}})$, we may assume ${\mathcal{A}} \subseteq {\mathbb{C}}^2$ or ${\mathbb{P}}^2$. As shown by Rybnikov \cite{Ry}, $\pi_1({M_{\mathcal A}})$ is not determined by $L_{{\mathcal{A}}}$, whereas the Orlik-Solomon algebra $H^*({M_{\mathcal A}},{\mathbb{Z}})$ is determined by $L_{{\mathcal{A}}}$. \begin{exm}\label{A3HS} In Example~\ref{exm:nonfano}, we saw that the Hilbert series for $A_3$ is $1+6t+11t^2+6t^3$. A computation shows that the LCS ranks begin \begin{center} $\begin{array}{cccccc} 6 &4 & 10 &21 & 54 & \cdots \end{array}$ \end{center} For higher $k$, $\phi_k(\pi_1(A_3)) = w_k(2)+w_k(3)$, where $w_k(q) = \frac{1}{k}\sum_{d \mid k}\mu(d)q^{k/d}$ is a Witt number. In general, we may encode the LCS ranks via \[ \prod_{k=1}^{\infty}\frac{1}{(1-t^k)^{\phi_k}}. \] For $A_3$, this is \[ \frac{1}{(1\!-\!t)^6}\frac{1}{(1\!-\!t^2)^4}\frac{1}{(1\!-\!t^3)^{10}}\frac{1}{(1\!-\!t^4)^{21}}\frac{1}{(1\!-\!t^5)^{54}}\cdots \] Expanding this and writing out the first few terms yields \[ 1+ 6t +25t^2+ 90t^3 +301t^4+966t^5+3025t^6+\cdots \] \vskip .1in If we multiply this with \[ \pi(A_3,-t)= 1- 6t +11t^2-6t^3, \] the result is $1$; this is part of a general pattern. \end{exm} \begin{thm}[Kohno's LCS formula \cite{K85}] For the arrangement $A_{n-1}$ (graphic arrangement of $K_n$), \[ \prod_{k=1}^{\infty}(1-t^k)^{\phi_k}=\prod\limits_{i=1}^{n-1}(1-it). \] \end{thm} This explains the computation of Example~\ref{A3HS}. We now compute the free resolution of the residue field $A/m$ as an $A$-module, where $m = \langle E_1 \rangle$.
Let \[ b_{ij} = \dim_{{\mathbb{Q}}}Tor^{A}_i({\mathbb{Q}},{\mathbb{Q}})_j. \] \begin{exm}\label{A3Tors} For $A_3$, we compute $b_{ij}=0$ if $i \ne j$, and \[ \sum_ib_{ii}t^i = 1+ 6t +25t^2+ 90t^3 +301t^4+966t^5+3025t^6+\cdots \] The $b_{ii}$ are the coefficients of the formal power series in Example~\ref{A3HS}! \end{exm} Kohno's result was the first of a long line of results on LCS formulas for certain special families of arrangements: \begin{enumerate} \item{Braid arrangements: Kohno \cite{K85}} \item{Fiber type arrangements: Falk--Randell \cite{FR85}} \item{Supersolvable arrangements: Terao \cite{T}} \item{Lower bound for $\phi_k$: Falk \cite{Fa89}} \item{Koszul arrangements: Shelton--Yuzvinsky \cite{SY}} \item{Hypersolvable arrangements: Jambu--Papadima \cite{JP}} \item{Rational $K(\pi,1)$ arrangements: Papadima--Yuzvinsky \cite{PY}} \item{MLS arrangements: Papadima--Suciu \cite{PS}} \item{Graphic arrangements: Lima-Filho--Schenck \cite{LS}} \item{No such formula in general: Peeva \cite{P}} \end{enumerate} Let ${\mathbb{L}}(H_1({M_{\mathcal A}},{\mathbb{K}}))$ denote the free Lie algebra on $H_1({M_{\mathcal A}},{\mathbb{K}}).$ Dualizing the cup product gives a map \[ H_2({M_{\mathcal A}},{\mathbb{Q}}) \stackrel{c}{\rightarrow} H_1({M_{\mathcal A}},{\mathbb{Q}})\wedge H_1({M_{\mathcal A}},{\mathbb{Q}}) \longrightarrow {\mathbb{L}}(H_1({M_{\mathcal A}},{\mathbb{Q}})). \] Following Chen \cite{C}, define the holonomy Lie algebra \[ \mathfrak{h}_{\mathcal A} = {\mathbb{L}}(H_1({M_{\mathcal A}},{\mathbb{K}}))/I_{{\mathcal{A}}}, \] where $I_{{\mathcal{A}}}$ is the Lie ideal generated by $\mathop{\rm Im}\nolimits(c)$.
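The numerology of Examples~\ref{A3HS} and~\ref{A3Tors} can be checked mechanically. The following short script is our own sketch (independent of the Macaulay2 computations mentioned in the introduction; all helper names are ours): it computes $\phi_k = w_k(1)+w_k(2)+w_k(3)$ from Witt's formula, verifies that $\prod_k(1-t^k)^{\phi_k}$ agrees with $\pi(A_3,-t)=(1-t)(1-2t)(1-3t)$, and recovers the coefficients $1,6,25,90,\ldots$ as the reciprocal series.

```python
def mu(n):
    # Moebius function, by trial division
    res, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            res = -res
        p += 1
    return -res if m > 1 else res

def witt(k, q):
    # Witt number w_k(q) = (1/k) * sum_{d | k} mu(d) * q^(k/d)
    return sum(mu(d) * q ** (k // d) for d in range(1, k + 1) if k % d == 0) // k

N = 7  # work with power series mod t^N

def mul(a, b):
    # truncated product of coefficient lists
    c = [0] * N
    for i, ai in enumerate(a[:N]):
        for j, bj in enumerate(b[:N - i]):
            c[i + j] += ai * bj
    return c

# phi_k = w_k(1) + w_k(2) + w_k(3), per Kohno's formula for A_3
phi = {k: sum(witt(k, q) for q in (1, 2, 3)) for k in range(1, N)}
assert [phi[k] for k in (1, 2, 3, 4, 5)] == [6, 4, 10, 21, 54]

# prod_k (1 - t^k)^{phi_k} = (1-t)(1-2t)(1-3t) = 1 - 6t + 11t^2 - 6t^3 mod t^N
prod = [1] + [0] * (N - 1)
for k, e in phi.items():
    for _ in range(e):
        prod = mul(prod, [1] + [0] * (k - 1) + [-1] + [0] * (N - k - 1))
assert prod == [1, -6, 11, -6, 0, 0, 0]

# reciprocal series: the coefficients 1, 6, 25, 90, 301, 966, 3025, ...
inv = [1] + [0] * (N - 1)
for n in range(1, N):
    inv[n] = -sum(prod[j] * inv[n - j] for j in range(1, n + 1))
print(inv)  # [1, 6, 25, 90, 301, 966, 3025]
```

The same reciprocal coefficients reappear as the $b_{ii}$ of Example~\ref{A3Tors}, which is exactly the pattern the text goes on to explain.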
As noted by Kohno in \cite{K83}, taking the transpose of the cup product shows that the image of $c$ is generated by \[ [x_j, \sum_{i=1}^k x_i], \] where $x_i$ is a generator of ${\mathbb{L}}(H_1(X,{\mathbb{K}}))$ corresponding to $H_i$, and the set $\{H_1,\ldots, H_k \}$ is a maximal dependent set of codimension two, so corresponds to an element of $L_2({\mathcal{A}})$. The upshot is that \[ \prod_{k=1}^{\infty} \frac{1}{(1-t^k)^{\phi_k}} = \sum_{i=0}^{\infty}\dim_{{\mathbb{Q}}} Tor_i^A({\mathbb{Q}},{\mathbb{Q}})_it^i. \] This was first made explicit by Peeva in \cite{P}; the proof runs as follows. First, Brieskorn \cite{Br} showed that ${M_{\mathcal A}}$ is formal, in the sense of \cite{S}. Using Sullivan's work and an analysis of the bigrading on Hirsch extensions, Kohno proved \begin{thm}[Kohno] $\phi_k(\mathfrak{g}) = \phi_k(\mathfrak{h}_{\mathcal A})$.\end{thm} Thus \begin{enumerate} \item{$\prod_{k=1}^{\infty} \frac{1}{(1-t^k)^{\phi_k}} = HS(U(\mathfrak{h}_{\mathcal A}),t)$, which follows from Kohno's work and Poincar\'e-Birkhoff-Witt.} \item{Shelton-Yuzvinsky show in \cite{SY} that $U(\mathfrak{h}_{\mathcal A})=\overline{A}^{!}$ is the quadratic dual of the quadratic Orlik-Solomon algebra.} \item{Results of Priddy-L\"ofwall show that the quadratic dual is related to the diagonal Yoneda Ext-algebra via \[ \overline{A}^{!} \cong \bigoplus_i \mathop{\rm Ext}\nolimits^i_{\overline{A}}({\mathbb{Q}},{\mathbb{Q}})_i. \]} \end{enumerate} Results of Peeva \cite{P} and Roos \cite{R} show that in general there does not exist a standard graded algebra $R$ such that $\prod_{k=1}^{\infty}(1-t^k)^{\phi_k}=HS(R,-t)$. For arbitrary quotients of free Lie algebras, we can ask: \begin{prob} Find spaces for which there is a simple generating function for $\phi_k$. \end{prob} \begin{prob} Relate $\mathfrak{h}_{\mathcal A}$ to $\bigoplus\limits_{X \in L_2} \!\! {\mathfrak{h}}_{{{\mathcal{A}}}_X}$, as in \cite{PS}.
\end{prob} As Shelton-Yuzvinsky proved in \cite{SY}, the natural class of arrangements for which an LCS formula holds are arrangements for which $A$ is a Koszul algebra, which we tackle next. \section{Koszul algebras} Let $T(V)$ denote the tensor algebra on $V$. \begin{defn} A quadratic algebra is $T(V)/I$, with $I \subseteq V\otimes V$. \end{defn} A quadratic algebra $A$ has a quadratic dual $A^{!}=T(V^{*})/I^{\perp}$: \[ I^{\perp} = \{\, \varphi \in V^{*}\otimes V^{*} \mid \varphi(I)=0 \,\}, \mbox{ where } (\alpha\otimes\beta)(a\otimes b)=\alpha(a)\cdot\beta(b). \] \begin{defn} $A$ is Koszul if $Tor^A_i({\mathbb{K}},{\mathbb{K}})_j = 0$, $i\ne j$. \end{defn} A quadratic algebra $A$ is Koszul iff the minimal free resolution of the residue field over $A$ has matrices with only linear entries. \begin{exm} The Hilbert series of $S = T({\mathbb{K}}^n)/\langle x_i\otimes x_j - x_j\otimes x_i \rangle$ is $1/(1-t)^n$, and a computation shows that the minimal free resolution of ${\mathbb{K}}$ over $S$ is the Koszul complex, so $\dim_{{\mathbb{K}}}Tor^S_i({\mathbb{K}},{\mathbb{K}})_i = {n \choose i}$. Since \[ I^{\perp} = \langle x_i\otimes x_j + x_j\otimes x_i \rangle, \] we see that $S^{!}=E$. The Hilbert series of $E$ is $(1+t)^n = \sum_{i=0}^n {n \choose i}t^i$. A computation shows that $\dim_{{\mathbb{K}}}Tor^E_i({\mathbb{K}},{\mathbb{K}})_i = {n-1+i \choose i}$, which are the coefficients in an expansion of $1/(1-t)^n$. \end{exm} Fr\"oberg \cite{Froberg} proved that if $I$ is a quadratic monomial ideal then $S/I$ is Koszul. By upper semicontinuity \cite{Herzog}, this means $S/I$ is Koszul if $I$ has a quadratic Gr\"obner basis (QGB). See Example~\ref{KnoQGB} below for a Koszul algebra having no QGB.
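The binomial identities in the example above are easy to confirm numerically. Here is a minimal sketch (our own illustration, for one choice of $n$; the variable names are ours): it inverts $(1-t)^n$ as a power series, compares with the stated Tor dimensions, and checks that the Hilbert series of $E$ and of $S$ at $-t$ multiply to $1$.

```python
from math import comb

n, N = 5, 12  # rank of V, and series truncation order

# (1 - t)^n as a coefficient list
pow_poly = [comb(n, i) * (-1) ** i for i in range(n + 1)]

# series inverse mod t^N: coefficients of HS(S, t) = 1/(1-t)^n
inv = [1] + [0] * (N - 1)
for m in range(1, N):
    inv[m] = -sum(pow_poly[j] * inv[m - j] for j in range(1, min(m, n) + 1))

# dim Tor^E_i(K,K)_i = C(n-1+i, i): the coefficients of 1/(1-t)^n
assert all(inv[i] == comb(n - 1 + i, i) for i in range(N))

# HS(E, t) * HS(S, -t) = (1+t)^n / (1+t)^n = 1
hs_E = [comb(n, i) for i in range(n + 1)] + [0] * (N - n - 1)
hs_S_neg = [inv[i] * (-1) ** i for i in range(N)]
conv = [sum(hs_E[j] * hs_S_neg[m - j] for j in range(m + 1)) for m in range(N)]
assert conv == [1] + [0] * (N - 1)
```

The last assertion is an instance, for the pair $S$ and $E=S^{!}$, of the Hilbert series identity for Koszul duals recorded in the theorem that follows.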
Both $S$ and $E$ are Koszul, and the relation between their Hilbert series is explained by: \begin{thm} If $A$ is Koszul, so is $A^{!}$, and \[ HS(A,t)\cdot HS(A^{!}, -t) = 1. \] \end{thm} \begin{thm}[Bj\"{o}rner-Ziegler \cite{BZ}] The Orlik-Solomon algebra has a QGB iff ${\mathcal{A}}$ is supersolvable. \end{thm} \begin{exm} A computation shows that the Orlik-Solomon algebra of $A_3$ has a quadratic Gr\"obner basis, so is Koszul. For the non-Fano arrangement, $\dim_{{\mathbb{K}}}Tor^A_3({\mathbb{K}},{\mathbb{K}})_4=1$, so $A$ is not Koszul. \end{exm} \begin{exm}[Caviglia \cite{Cav}]\label{KnoQGB} Map $R={\mathbb{K}}[a_1,\ldots, a_9] \stackrel{\phi}{\longrightarrow} {\mathbb{K}}[x,y,z]$ using all cubic monomials of ${\mathbb{K}}[x,y,z]$ except $xyz$, and let $I=\ker(\phi)$. Then $R/I$ is Koszul, but has no quadratic Gr\"obner basis. \end{exm} \begin{prob} For Orlik-Solomon algebras, does Koszul imply supersolvable? In the case of graphic arrangements, it does \cite{SS}. \end{prob} \begin{prob} Find a combinatorial description of $Tor^A_i({\mathbb{K}},{\mathbb{K}})_j$. \end{prob} \section{Resonance varieties}\label{resonanceSection} Let $A$ be the Orlik-Solomon algebra of ${M_{\mathcal A}}$, with $|{\mathcal{A}}|=n$. For each $a=\sum a_ie_i \in A_1$, we consider the Aomoto complex $(A,a)$, whose $i^{\text{th}}$ term is $A_i$, and whose differential is $\wedge\, a$: \[ (A,a)\colon \xymatrix{ 0 \ar[r] &A_0 \ar[r]^{a} & A_1 \ar[r]^{a} & A_2 \ar[r]^{a}& \cdots \ar[r]^{a} & A_{\ell}\ar[r] & 0}. \] This complex arose in Aomoto's work \cite{Ao} on hypergeometric functions, as well as in the study of cohomology with local system coefficients \cite{ESV}, \cite{STV}.
In \cite{Yuz}, Yuzvinsky showed that for a generic $a$, the Aomoto complex is exact; the resonance varieties of ${\mathcal{A}}$ are the loci of points $a=\sum_{i=1}^na_ie_i \leftrightarrow (a_1:\dots :a_n) \in {\mathbb{P}}^{n-1}$ for which $(A,a)$ fails to be exact, that is: \vskip -.2in \begin{defn} For each $k\ge 1$, \[ R^{k}({\mathcal{A}})=\{a \in {\mathbb{P}}^{n-1} \mid H^k(A,a)\ne 0\}. \] \end{defn} In \cite{Fa}, Falk gave necessary and sufficient conditions for $a \in R^1({\mathcal{A}})$. \begin{defn} A partition $\Pi$ of ${\mathcal{A}}$ is neighborly if for all $Y \in L_2({\mathcal{A}})$ and $\pi$ a block of $\Pi$, \[ \mu(Y) \le |Y\cap \pi| \Longrightarrow Y\subseteq \pi. \] \end{defn} Falk proved that components of $R^1({\mathcal{A}})$ arise from neighborly partitions; he conjectured that $R^1({\mathcal{A}})$ is a union of linear components. This was established (essentially simultaneously) by Cohen--Suciu \cite{CScv} and Libgober-Yuzvinsky \cite{LY}. Libgober and Yuzvinsky also showed that $R^{1}({\mathcal{A}})$ is a disjoint union of positive-dimensional subspaces in ${\mathbb{P}}(E_1)$, and Cohen-Orlik \cite{CO} showed that $R^{k \ge 2}({\mathcal{A}})$ is also a subspace arrangement. On the other hand, as shown by Falk in \cite{Fa2}, in positive characteristic, the components of $R^{1}({\mathcal{A}})$ can meet, and need not be linear. The approach of Libgober--Yuzvinsky involves connecting $R^1({\mathcal{A}})$ to pencils/nets/webs, and there is much recent work in the area, e.g.\ \cite{FalkY}, \cite{PY}, \cite{yu3}. Of special interest is the following conjecture relating $R^1({\mathcal{A}})$ and the LCS ranks $\phi_k$: \begin{conj}[Suciu \cite{Su}] If $\phi_4 = \theta_4$, then \[ \prod\limits_{k \ge 1} (1-t^k)^{\phi_k} = \prod\limits_{L_i \in R^1({\mathcal{A}})}(1-(\dim L_i)\,t), \] where $\theta_4$ is the fourth Chen rank (Definition~\ref{Cranks}).
\end{conj} \begin{exm}\label{exm:toyRes} Let ${\mathcal{A}} = V(xy(x-y)z) \subseteq {\mathbb{P}}^2$, and $E = \Lambda({\mathbb{K}}^4)$, with generators $e_1,\ldots, e_4$, so that \[ A = E/\langle \partial(e_1e_2e_3), \partial(e_1e_2e_3e_4) \rangle. \] Since $\partial(e_1e_2e_3e_4) = e_4 \wedge \partial(e_1e_2e_3) - e_1 \wedge \partial(e_1e_2e_3)$, the second relation is unnecessary. To compute $R^1({\mathcal{A}})$, we need only the first two differentials in the Aomoto complex. Using $e_{13},e_{14},e_{23},e_{24},e_{34}$ as a basis for $A_2$, we find that $e_1 \mapsto e_1 \wedge (\sum\limits_{i=1}^4 a_ie_i) = a_2 e_{12} + a_3 e_{13} + a_4 e_{14}$. Since \[ \partial(e_1e_2e_3) = e_1 \wedge e_2 -e_1 \wedge e_3+ e_2 \wedge e_3, \] in $A$, $e_{12} = e_{13}-e_{23}$, so that \[a_2 e_{12}=a_2(e_{13}-e_{23}). \] This means $e_1 \mapsto (a_2 + a_3)e_{13} + a_4 e_{14}-a_2e_{23}$; similar computations for the other $e_i$ show that the Aomoto complex is \[ 0 \longrightarrow {\mathbb{K}}^1 \xrightarrow{\left[ \! \begin{array}{c} a_1\\ a_2\\ a_3\\ a_4 \end{array}\! \right]} {\mathbb{K}}^4 \xrightarrow{\left[ \!\begin{array}{cccc} a_2+a_3 & -a_1 & -a_1 & 0\\ a_4 & 0 & 0 & -a_1\\ -a_2 & a_1+a_3 & -a_2 & 0 \\ 0 & a_4 & 0 & -a_2 \\ 0 & 0 & a_4 & -a_3 \end{array}\! \right]} {\mathbb{K}}^5 \] The rank of the first map is always one, so $R^1({\mathcal{A}})\subseteq {\mathbb{P}}^3$ is the locus where the second matrix has a kernel of dimension at least two; hence the $3 \times 3$ minors must vanish. A computation shows this locus is $V(a_4, a_1+a_2+a_3)$. \end{exm} Letting $a=\sum_{i=1}^na_ie_i$, observe that $a \in R^1({\mathcal{A}})$ iff there exists a $b \in E_1$, not proportional to $a$, so that $a\wedge b \in I_2$, so that $R^1({\mathcal{A}})$ is the locus of decomposable 2-tensors in $I_2$. Since $I_2$ is determined by the intersection lattice $L({\mathcal{A}})$ in rank $\le 2$, to study $R^1({\mathcal{A}})$, it can be assumed that ${\mathcal{A}} \subseteq {\mathbb{P}}^2$.
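The minor computation at the end of Example~\ref{exm:toyRes} can be double-checked numerically: since the image of the first map is spanned by $a$ itself, $H^1(A,a)\ne 0$ exactly when the $5\times 4$ matrix above has rank at most two. A small sketch in exact rational arithmetic (the helper names are ours):

```python
from fractions import Fraction

def rank(rows):
    # row reduction over the rationals
    m = [[Fraction(x) for x in r] for r in rows]
    rk = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(len(m)):
            if r != rk and m[r][col] != 0:
                f = m[r][col] / m[rk][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[rk])]
        rk += 1
    return rk

def aomoto_d1(a1, a2, a3, a4):
    # the 5 x 4 matrix of the second Aomoto differential, rows e13,e14,e23,e24,e34
    return [[a2 + a3, -a1, -a1, 0],
            [a4, 0, 0, -a1],
            [-a2, a1 + a3, -a2, 0],
            [0, a4, 0, -a2],
            [0, 0, a4, -a3]]

# generic a: kernel is spanned by a alone, so H^1(A, a) = 0
assert rank(aomoto_d1(1, 2, 5, 7)) == 3

# a on V(a_4, a_1 + a_2 + a_3): kernel is 2-dimensional, so H^1(A, a) != 0
assert rank(aomoto_d1(-3, 1, 2, 0)) == 2
```

Spot-checking sample points this way does not replace the minor computation, but it confirms the two ranks on and off the resonance locus.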
While the first resonance variety is conjecturally connected (under certain conditions) to the LCS ranks, $R^1(A)$ is {\em always} connected to the Chen ranks introduced by Chen in \cite{C}. \begin{defn}\label{Cranks} The Chen ranks of $G$ are the LCS ranks of the maximal metabelian quotient of $G$: \[ \theta_k(G):=\phi_k(G/G''), \] where $G'=[G,G]$. \end{defn} \pagebreak \begin{conj}[Suciu \cite{Su}] \label{conj:chen} Let $G=G({\mathcal{A}})$ be an arrangement group, and let $h_r$ be the number of components of $R^{1}({\mathcal{A}})$ of dimension $r$. Then, for $k \gg 0$: \[ \theta_k(G)= (k-1) \sum_{r\ge 1} h_r \binom{r+k-1}{k}. \] \end{conj} For the previous example, $R^1({\mathcal{A}}) = V(a_4, a_1+a_2+a_3) \simeq {\mathbb{P}}^1$, so \[ \theta_k(G)= (k-1). \] To discuss the Chen ranks, we need some background. The Alexander invariant $G'/G''$ is a module over ${\mathbb{Z}}[G/G']$. For arrangements, ${\mathbb{Z}}[G/G']=$ Laurent polynomials in $n$ variables. In \cite{Massey}, Massey showed that \[ \sum_{k\ge 0} \theta_{k+2}\, t^k = HS(\mbox{gr }G'/G'' \otimes {\mathbb{Q}},t). \] It turns out to be easier to work with the linearized Alexander invariant $B$ introduced by Cohen-Suciu in \cite{CS2}: \[ (A_2 \oplus E_3)\otimes S \stackrel{\Delta}{\longrightarrow} E_2\otimes S \longrightarrow B \longrightarrow 0, \] where $\Delta$ is built from the Koszul differential and $(E_2\rightarrow A_2)^t$. \begin{thm}[Cohen-Suciu \cite{CS2}] \[ V(\mbox{ann }B)=R^1({\mathcal{A}}) \] \end{thm} \begin{thm}[Papadima-Suciu \cite{PSChen}] \[ \sum_{k\ge 2} \theta_{k}\, t^k = HS(B,t). \] \end{thm} This shows that the Chen ranks are combinatorially determined, and depend only on $L({\mathcal{A}})$ in rank $\le 2$. \begin{exm} For the $A_3$ arrangement depicted in Example~\ref{ShowMeThm}, write $e_0=L_{12}, e_1=L_{13},e_2=L_{23},e_3=L_{24}, e_4=L_{14},e_5=L_{34}$.
With this labelling \[ I_2 = \langle \partial(e_1e_4e_5), \partial(e_0e_1e_2),\partial(e_2e_3e_5), \partial(e_0e_3e_4)\rangle, \] from which a presentation for $B$ can be computed: \[ S^{14} \rightarrow S^4 \rightarrow B \rightarrow 0. \] A computation shows that $R^1(A_3)$ is \begin{center} $\begin{array}{c} V(x_1 + x_4 + x_5,x_0,x_2,x_3)\sqcup\\ V(x_2 + x_3 + x_5,x_0,x_1,x_4)\sqcup\\ V(x_0 + x_3 + x_4,x_1,x_2,x_5)\sqcup\\ V(x_0 + x_1 + x_2,x_3,x_4,x_5)\sqcup\\ V(x_0+x_1+x_2,x_0- x_5,x_1-x_3,x_2-x_4), \end{array}$ \end{center} and that the Hilbert series of $B$ is: \[ (4t^2+2t^3-t^4)/(1-t)^2 = 4t^2+10t^3+15t^4+20t^5+\cdots \] On the other hand, the graded betti numbers $Tor^E_i(A_3,{\mathbb{K}})_j$ are $$ \vbox{\offinterlineskip \halign{\strut\hfil# \ \vrule\quad&# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &#\ &# \ &#\ \cr total&1 &4 &10&21&45&91 \cr \noalign {\hrule} 0 &1 &--&--&--&-- &-- \cr 1 &--&4 &10&15&20 &25 \cr 2 &--&--&--&6& 25& 66 \cr }} $$ So the Hilbert series for $B$ encodes the ranks of $Tor^E_i(A_3,{\mathbb{K}})_{i+1}$. This suggests a connection between $R^1(A)$ and $Tor^E_i(A_3,{\mathbb{K}})_{i+1}$, which we tackle in the next section. \end{exm} Besides the connection to resonance varieties, there is a second reason to study $Tor^E_i(A,{\mathbb{K}})$: the numbers $b_{ij} = \dim_{{\mathbb{K}}}Tor^A_i({\mathbb{K}},{\mathbb{K}})_j$ studied in \S 7 grow very fast, while the numbers $b_{ij}' = \dim_{{\mathbb{K}}}Tor^E_i(A,{\mathbb{K}})_j$ grow at a much slower rate.
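The Hilbert series computation for $B$ in the $A_3$ example above, and its match with Conjecture~\ref{conj:chen}, can be verified directly: $R^1(A_3)$ has five components, each of projective dimension one, so with $h_1 = 5$ the conjectured formula predicts $\theta_k = 5(k-1)$ for large $k$. A quick sketch (our own, using only the data displayed above):

```python
N = 12  # series truncation order

# 1/(1-t)^2 = sum_{m >= 0} (m+1) t^m
geom_sq = [m + 1 for m in range(N)]

# numerator 4t^2 + 2t^3 - t^4
num = [0, 0, 4, 2, -1]

# HS(B, t) = (4t^2 + 2t^3 - t^4) / (1-t)^2, by convolution
hs = [sum(num[j] * geom_sq[m - j] for j in range(min(m, 4) + 1)) for m in range(N)]
assert hs[:6] == [0, 0, 4, 10, 15, 20]

# theta_k = hs[k] by the Papadima-Suciu theorem; with h_1 = 5 and h_r = 0
# for r >= 2, the conjectured value (k-1) * C(k, k) * 5 = 5(k-1) holds for k >= 3
assert all(hs[k] == 5 * (k - 1) for k in range(3, N))
```

Only $\theta_2 = 4$ deviates from $5(k-1)$, consistent with the conjecture being an assertion for $k \gg 0$.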
\begin{exm} For the non-Fano arrangement of Example~\ref{exm:nonfano}\newline $$ \vbox{\offinterlineskip \halign{\strut\hfil# \ \vrule\quad&# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &#\ &# \ &#\ \cr total&1 &7 &23&63&165&387 \cr \noalign {\hrule} 0 &1 &--&--&--&-- &-- \cr 1 &--&6 &17&27&36 &45 \cr 2 &--&1 &6 &36&129&342 \cr 3 &-- &--&--&--&--&-- \cr \noalign{ } \noalign{ } }} \vbox{\offinterlineskip \halign{\strut\hfil# \ \vrule\quad&# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &#\ &# \ &# \ &#\ \cr total&1&7 &35& 156& 664& 2773 &*\cr \noalign {\hrule} 0&1 &7 &34 & 143&560 & 2108&* \cr 1&--&-- &1 & 13&103 &646 &* \cr 2&--&-- &-- & --&1 & 19 &* \cr 3&--&-- &-- &--&-- &-- &1 \cr \noalign{ } \noalign{ } }} $$ \vskip -.2in \hskip 1in $b_{ij}'$ \hskip 2in $b_{ij}$. \end{exm} The spaces $Tor^E_i(A,{\mathbb{K}})$ and $Tor^A_i({\mathbb{K}},{\mathbb{K}})$ are related via the change of rings spectral sequence: \[ \mathop{\rm Tor}\nolimits_i^A\left(\mathop{\rm Tor}\nolimits_j^E(A,{\mathbb{K}}),{\mathbb{K}}\right) \Longrightarrow \mathop{\rm Tor}\nolimits_{i+j}^E({\mathbb{K}},{\mathbb{K}}). \] For arrangements, details of this relationship are investigated in \cite{SS}. \begin{prob} Find a combinatorial description of $Tor^E_i(A,{\mathbb{K}})_j$. \end{prob} \begin{prob} If $A$ is Koszul, does this provide data on $Tor^E_i(A,{\mathbb{K}})_j$? \end{prob} \section{Linear syzygies} It is fairly easy to see that there is a connection between $R^1(A)$ and linear syzygies, that is, to the module $Tor^E_2(A,{\mathbb{K}})_3$. Since \[ a\wedge b \in I_2 \longrightarrow a\wedge b = \sum c_if_i, \mbox{ }c_i \in {\mathbb{K}}, f_i \in I_2, \] the relations $a\wedge a\wedge b = 0 = b\wedge a\wedge b$ yield linear syzygies on $I_2$: \[ \sum ac_if_i = 0 =\sum bc_if_i. \] {\bf Example~\ref{exm:toyRes}}, continued. 
Since $\partial(e_1e_2e_3) = (e_1-e_2) \wedge (e_2-e_3)$, both $(e_1-e_2)$ and $(e_2-e_3)$ are in $R^1(A)$, as is the line connecting them: \[ \{\, s(e_1-e_2)+ t(e_2-e_3) \,\} \subseteq R^1({\mathcal{A}}) \subseteq {\mathbb{P}}(E_1). \] Parametrically, this may be written \[ (s : t-s : -t:0) = V(a_4, a_1+a_2+a_3), \] so $(s(e_1-e_2)+ t(e_2-e_3)) \wedge \partial(e_1e_2e_3) = 0$ gives a family of linear syzygies on $I_2$, parameterized by ${\mathbb{P}}^1$. $\Diamond$\newline To make the connection between linear syzygies and the module $B$ precise, we need the following result: \begin{thm}[Eisenbud-Popescu-Yuzvinsky \cite{EPY}] For an arrangement ${\mathcal{A}}$, the Aomoto complex is exact, as a sequence of $S$-modules: \[ \xymatrixcolsep{21pt} \xymatrix{ 0\ar[r] & A_0 \otimes S \ar[r]^{\cdot a} & A_1\otimes S \ar[r]^(.62){\cdot a} & \cdots \ar[r]^(.37){\cdot a} & A_{\ell} \otimes S \ar[r] & F(A) \ar[r] & 0 }. \] \end{thm} \begin{thm}[Schenck-Suciu \cite{SS2}] The linearized Alexander invariant $B$ is determined by $F(A)$: \[ B \cong\mathop{\rm Ext}\nolimits_S^{\ell-1}(F(A), S). \] Furthermore, for $k\ge 2$, $\dim_{{\mathbb{K}}}B_k = \dim_{{\mathbb{K}}}\mathop{\rm Tor}\nolimits^E_{k-1}(A,{\mathbb{K}})_k.$ \end{thm} Using this, it is possible to prove one direction of Conjecture~\ref{conj:chen}: \begin{thm}[Schenck-Suciu \cite{SS2}] For $k \gg 0$, \[ \theta_k(G) \ge (k-1) \sum_{L_i \in R^1({\mathcal{A}})} \binom{\dim L_i+k-1}{k}. \] \end{thm} \begin{prob} Prove the remaining direction of Conjecture~\ref{conj:chen}. \end{prob} What makes all this work is the Bernstein-Gelfand-Gelfand correspondence, which is our final topic. \pagebreak \section{Bernstein-Gelfand-Gelfand correspondence} Let $S=Sym(V^*)$ and $E = \bigwedge(V)$. The Bernstein-Gelfand-Gelfand correspondence is an isomorphism between derived categories of bounded complexes of coherent sheaves on ${\mathbb{P}}(V^*)$ and bounded complexes of finitely generated, graded $E$--modules.
Although this sounds exotic, from it one can extract a pair of functors:\newline \noindent ${\bf R}$: finitely generated, graded $S$-modules $\longrightarrow$ linear free $E$-complexes. \newline \noindent${\bf L}$: finitely generated, graded $E$-modules $\longrightarrow$ linear free $S$-complexes.\newline The point is that problems can be translated to a (possibly) simpler setting. For example, BGG yields a very fast way to compute sheaf cohomology, using Tate resolutions. \begin{defn} Let $P$ be a finitely generated, graded $E$-module. Then ${\bf L}(P)$ is the complex \[ \xymatrixcolsep{21pt} \xymatrix{ \cdots\ar[r] &S \otimes P_{i+1} \ar[r]^(.56){\cdot a} & S \otimes P_{i} \ar[r]^(.44){\cdot a} &S \otimes P_{i-1} \ar[r]^(.66){\cdot a} &\cdots }, \] where $a=\sum\limits_{i=1}^n x_i\otimes e_i$, so that $1 \otimes p \mapsto \sum x_i\otimes e_i \wedge p$. \end{defn} Note that elements of $V^*$ have degree $1$, and elements of $V$ have degree $-1$. \begin{exm} $P = E = \bigwedge {\mathbb{K}}^3$. Then we have \[ 0 \longrightarrow S \otimes E_0 \longrightarrow S \otimes E_1 \longrightarrow S \otimes E_2 \longrightarrow S \otimes E_3 \longrightarrow 0. \] Clearly $1 \mapsto \sum_{1}^3 x_i\otimes e_i$. For $d_1$, \begin{center} $\begin{array}{c} \;\;\;e_1 \mapsto -x_2e_{12}-x_3e_{13}\\ e_2 \mapsto x_1e_{12}-x_3e_{23}\\ e_3 \mapsto x_1e_{13}+x_2e_{23} \end{array}$ \end{center} \[ d_2:\mbox{ }e_{12} \mapsto x_3 e_{123}, \mbox{ }e_{13}\mapsto -x_2 e_{123}, \mbox{ }e_{23}\mapsto x_1 e_{123}. \] Thus, ${\bf L}(E)$ is \begin{small} \[ S^1 \xrightarrow{\left[ \! \begin{array}{c} x_1\\ x_2\\ x_3 \end{array}\! \right]} S^3 \xrightarrow{\left[ \!\begin{array}{ccc} -x_2 & x_1 & 0 \\ -x_3 & 0 & x_1 \\ 0 & -x_3 & x_2 \end{array}\! \right]} S^3 \xrightarrow{\left[ \!\begin{array}{ccc} x_3 & -x_2 & x_1 \end{array}\! \right]} S^1 \] \end{small} This is simply the Koszul complex.
\end{exm} If $M$ is a finitely generated, graded $S$-module, then ${\bf R}(M)$ is the complex \[ \xymatrixcolsep{21pt} \xymatrix{ \cdots\ar[r] &\hat{E} \otimes M_{i-1} \ar[r]^(.56){\cdot a} & \hat{E} \otimes M_{i} \ar[r]^(.44){\cdot a} &\hat{E} \otimes M_{i+1} \ar[r]^(.66){\cdot a} &\cdots }, \] where $a=\sum\limits_{i=1}^n e_i\otimes x_i$, so $1 \otimes m \mapsto \sum e_i\otimes x_i \cdot m$, and $\hat{E}$ is ${\mathbb{K}}$-dual to $E$: \[ \hat{E} \simeq E(n)=Hom_{{\mathbb{K}}}(E,{\mathbb{K}}). \] Just as ${\bf L}(P) = S\otimes_{\mathbb{K}} P$, ${\bf R}(M) = Hom_{{\mathbb{K}}}(E,M)$. \vskip .05in {\bf Example~\ref{exm:first}}, continued. If $M = {\mathbb{K}}[x_0,x_1]/\langle x_0x_1, x_0^2 \rangle$, then \begin{center} $\begin{array}{c} 1 \mapsto e_0\otimes x_0 + e_1\otimes x_1 \\ \;\;x_0 \mapsto e_0\otimes x_0^2 +e_1\otimes x_0x_1 \\ \;\;x_1 \mapsto e_0\otimes x_0x_1 +e_1\otimes x_1^2\\ \;\;\;\;\;\;x_1^n \mapsto e_0\otimes x_0x_1^n +e_1\otimes x_1^{n+1} \end{array}$ \end{center} Thus, ${\bf R}(M)$ is \begin{small} \[ E(2)^1 \xrightarrow{\left[ \! \begin{array}{c} e_0\\ e_1 \end{array}\! \right]} E(3)^2 \xrightarrow{\left[ \!\begin{array}{cc} 0 & e_1 \end{array}\! \right]} E(4)^1 \xrightarrow{\left[ \!\begin{array}{c} e_1 \end{array}\! \right]} E(5)^1 \xrightarrow{\left[ \!\begin{array}{c} e_1 \end{array}\! \right]} \cdots \] \end{small} This complex is exact, except at the second step. The kernel of \[ \left[ \!\begin{array}{cc} 0 & e_1 \end{array}\! 
\right] \] is generated by $\alpha = [1,0]$ and $\beta = [0,e_1]$, with relations $\beta+e_0 \alpha =0$ (coming from $\mathop{\rm im}\nolimits(d_1)$) and $e_1 \beta = 0$, so that \[ H^1({\bf R}(M)) \simeq E(3)/\langle e_0 \wedge e_1 \rangle. \] The betti table for $M$ is: $$ \vbox{\offinterlineskip \halign{\strut\hfil# \ \vrule\quad&# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ &# \ \cr total&1&2&1\cr \noalign {\hrule} 0&1 &--&--&\cr 1&--&2 &1 &\cr \noalign{ } \noalign{ } }} $$ \vskip -.2in Note that in this example, $M$ is $1$-regular. $\Diamond$ \begin{thm}[Eisenbud-Fl{\o}ystad-Schreyer \cite{EFS}] \[ H^j({\bf R}(M))_{i+j} = Tor_i^S(M,{\mathbb{K}})_{i+j}.\] \end{thm} \begin{corollary}\label{EFSCM} The Castelnuovo-Mumford regularity of $M$ is $\le d$ iff $H^i({\bf R}(M))=0$ for all $i > d$. \end{corollary} \pagebreak What can be said about higher resonance varieties? In \cite{CO}, Cohen--Orlik prove that for $k \ge 2$, \[ R^k({\mathcal{A}}) = \bigcup L_i \mbox{ linear.} \] As observed by Suciu, in general the union need not be disjoint. \begin{thm}[Eisenbud-Popescu-Yuzvinsky \cite{EPY}] $R^k({\mathcal{A}}) \subseteq R^{k+1}({\mathcal{A}}).$ \end{thm} The key point is that \[ H^k(A,a) \ne 0 \mbox{ iff } Tor^S_{\ell-k}(F(A), S/I(p)) \ne 0. \] The result follows from interpreting this in terms of Koszul cohomology. \begin{thm}[Denham-Schenck \cite{denS}] Higher resonance may be interpreted via $\mathop{\rm Ext}\nolimits$: \[ R^k({\mathcal{A}}) = \bigcup_{k' \le k} V(\mathop{\rm ann}\nolimits \mathop{\rm Ext}\nolimits^{\ell-k'}(F(A),S)). \] Furthermore, the differentials in the free resolution of $A$ over $E$ can be analyzed using BGG and the Grothendieck spectral sequence. \end{thm} For any coherent sheaf ${\mathcal F}$ on ${\mathbb{P}}^d$, there is a finitely generated, graded {\em saturated} $S$-module $M$ whose sheafification is ${\mathcal F}$.
If ${\mathcal F}$ has Castelnuovo-Mumford regularity $r$, then the Tate resolution of ${\mathcal F}$ is obtained by splicing the complex ${\bf R}(M_{\ge r})$: \[ \xymatrixcolsep{21pt} \xymatrix{ 0\ar[r] &\hat{E} \otimes M_{r} \ar[r]^{d^r} & \hat{E} \otimes M_{r+1} \ar[r] &\hat{E} \otimes M_{r+2} \ar[r] &\cdots }, \] with a free resolution $P_{\bullet}$ for the kernel of $d^r$: \[ \xymatrixrowsep{10pt} \xymatrixcolsep{6pt} \xymatrix{ \cdots \ar[rr] && P_1 \ar[rr] && P_0 \ar[rr] \ar[dr] & & \hat{E} \otimes M_{r} \ar[rr]&& \hat{E} \otimes M_{r+1} \ar[rr] && \cdots \\ && && & \ker(d^r) \ar[ur] \ar[dr] & && && \\ && && 0 \ar[ur] & & 0 && && } \] By Corollary~\ref{EFSCM}, ${\bf R}(M_{\ge r})$ is exact except at the first step, so this yields an exact complex of free $E$-modules. \begin{exm} Since $M=S$ has regularity zero, we obtain Cartan resolutions in both directions, and the splice map $ E \rightarrow \widehat{E}=E(d+1)$ is multiplication by $e_0 \wedge e_1 \wedge \cdots \wedge e_d$, which generates $\ker \left[ \! \begin{array}{ccc} e_0, & \cdots &, e_d \end{array}\! \right]^t$. \end{exm} \pagebreak \begin{thm}[Eisenbud-Fl{\o}ystad-Schreyer \cite{EFS}]\label{thmEFS} The $i^{th}$ free module $T^i$ in a Tate resolution for ${\mathcal F}$ satisfies \begin{center} $T^i = \bigoplus\limits_{j}\widehat{E} \otimes H^j({\mathcal F}(i-j))$.\end{center} \end{thm} {\bf Example~\ref{exm:twistedcubic}}, continued. The betti table for the twisted cubic shows that $S/I$ has regularity one, which provides the information needed to compute the Tate resolution. Plugging the resulting numbers into Theorem~\ref{thmEFS} shows that \begin{center} \begin{supertabular}{|c|c|c|c|c|c|c|} \hline $i$ & $-3$ & $-2$ & $-1$ & $0$ & $1$ & $2$ \\ \hline $h^1({\mathcal F}(i))$ & $8$ & $5$ & $2$ & $0$ & $0$ & $0$\\ \hline $h^0({\mathcal F}(i))$ & $0$ & $0$ & $0$ & $1$ & $4$ & $7$ \\ \hline \end{supertabular} \end{center} \vskip .05in Does this make sense?
Since ${\mathcal F} = {\mathcal O}_X = {\mathcal O}_{{\mathbb{P}}^1}(3)$, \[ h^1({\mathcal F}(i)) = h^1({\mathcal O}_{{\mathbb{P}}^1}(3i))=h^0({\mathcal O}_{{\mathbb{P}}^1}(-3i-2)) \] and \[ h^0({\mathcal F}(i)) = h^0({\mathcal O}_{{\mathbb{P}}^1}(3i))=3i+1, \mbox{ }i\ge 0, \] which agrees with our earlier computation. $\Diamond$ \vskip .05in \begin{prob} Investigate the Tate resolution for $D({\mathcal{A}})$ and $C({\mathcal{A}})$. \end{prob} \noindent{\bf Conclusion} In this note we have surveyed a number of open problems in arrangements. The beauty of the area is that these problems are all interconnected. Perhaps the most central objects are the resonance varieties, which are related to both the LCS ranks studied in \S 7 and \S 8 using Koszul and Lie algebras, and to the Chen ranks. The results of \S 5 tie resonance to the Orlik-Terao algebra, and \cite{syzRes} notes that $J_{{\mathcal{A}}} \subseteq H^0(D_{{\mathcal{A}}})$, so the Orlik-Terao algebra is also linked to $D({\mathcal{A}})$ and freeness. But freeness ties in to multiarrangements, and can be generalized to hypersurface arrangements, the topics of \S 3 and \S 4. To complete the circle, recent work of Cohen-Denham-Falk-Varchenko \cite{CDFV} relates freeness to $R^1({\mathcal{A}})$. In short, everything is connected! \vskip .05in \noindent{\bf Acknowledgements} Many thanks are due to the organizers of the Mathematical Society of Japan Seasonal Institute: Takuro Abe, Hiroaki Terao, Masahiko Yoshinaga and Sergey Yuzvinsky organized a wonderful meeting. I also am grateful to my research collaborators: G.~Denham, M.~Mustata, A.~Suciu, H.~Terao, S.~Tohaneanu, and M.~Yoshinaga. As noted in the introduction, the calculations carried out in this survey may be performed using the hyperplane arrangements package of Denham and Smith~\cite{DSmith} in Macaulay2~\cite{GS}. Code for the individual examples is available at {\tt http://www.math.uiuc.edu/~schenck/}. \end{document}
\begin{document} \title{Acoustic and Filtration Properties of a Thermo-Elastic Porous Medium:\\ Biot's Equations of Thermo-Poroelasticity} \small \noindent \textbf{Abstract.} A linear system of differential equations describing the joint motion of a thermoelastic porous body and an incompressible thermofluid occupying the porous space is considered. Although the problem is linear, it is very hard to tackle because its main differential equations involve non-smooth oscillatory coefficients, both large and small, under the differentiation operators. A rigorous justification is given for the homogenization procedures as the dimensionless size of the pores tends to zero, while the porous body is geometrically periodic. As a result, we derive a Biot-type system of equations of thermo-poroelasticity, a system of equations of thermo-viscoelasticity, or a system of anisotropic Lam\'{e} equations, depending on the ratios between the physical parameters and the geometry of the porous space. The proofs are based on Nguetseng's two-scale convergence method of homogenization in periodic structures.\\ \noindent \textbf{Key words:} Biot's equations, Stokes equations, Lam\'{e}'s equations, two-scale convergence, homogenization of periodic structures, thermo-poroelasticity.\\ \normalsize \addtocounter{section}{0} \setcounter{equation}{0} \begin{center} \textbf{Introduction} \end{center} In the present publication we consider the problem of the joint motion of a thermoelastic deformable solid (a thermoelastic skeleton) perforated by a system of channels (pores), and an incompressible thermofluid occupying the porous space. We refer to this model as \textbf{model (NA)}.
In dimensionless variables (without primes) $$ {\mathbf x}'=L {\mathbf x},\quad t'=\tau t,\quad {\mathbf w}'=L {\mathbf w}, \quad \theta'=\vartheta_*\frac{L }{\tau v_{*}} \theta$$ the differential equations of the model in a domain $\Omega \subset \mathbb R^{3}$ for the dimensionless displacement vector ${\mathbf w}$ of the continuum medium and the dimensionless temperature $\theta$ have the form: \begin{eqnarray} \label{0.1} & \displaystyle \alpha_\tau \bar{\rho} \frac{\partial^2 {\mathbf w}}{\partial t^2}=\mbox{div}_x \mathbb P + \bar{\rho} \mathbf F,\\ \label{0.2} & \displaystyle \alpha_\tau \bar{c}_p \frac{\partial \theta}{\partial t} = \mbox{div}_x ( \bar{\alpha} _{\varkappa} \nabla_x \theta) -\bar{\alpha}_\theta \frac{\partial}{\partial t} \mbox{div}_x {\mathbf w} +\Psi,\\ \label{0.3} & \displaystyle \mathbb P = \bar{\chi}\alpha_\mu \mathbb D\Bigl({\mathbf x},\frac{\partial {\mathbf w}}{\partial t}\Bigr) +(1-\bar{\chi})\alpha_\lambda \mathbb D(x,{\mathbf w})-(q+\pi )\mathbb I ,\\ \label{0.4} & \displaystyle q=p+\frac{\alpha_\nu}{\alpha_p}\frac{\partial p}{\partial t}+\bar{\chi}\alpha _{\theta f}\theta,\\ \label{0.5} & \displaystyle p+\bar{\chi} \alpha_p \mbox{div}_x {\mathbf w}=0,\\ \label{0.6} & \displaystyle \pi +(1-\bar{\chi}) (\alpha_\eta \mbox{div}_x {\mathbf w}-\alpha _{\theta s}\theta)=0. \end{eqnarray} Here and below we use the notation $$ \mathbb D(x,{\mathbf u})=(1/2)\left(\nabla_x {\mathbf u} +(\nabla_x {\mathbf u})^T\right),$$ $$\bar{\rho}=\bar{\chi}\rho_f +(1-\bar{\chi})\rho_s, \quad \bar{c}_p=\bar{\chi} c_{pf} +(1-\bar{\chi})c_{ps},$$ $$ \bar{\alpha}_{\varkappa} =\bar{\chi} \alpha _{\varkappa f} +(1-\bar{\chi})\alpha _{\varkappa s},\quad \bar{\alpha}_\theta =\bar{\chi} \alpha_{\theta f} +(1-\bar{\chi})\alpha_{\theta s}.$$ In this model the characteristic function of the porous space $\bar{\chi}({\mathbf x})$ is a known function. 
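The coefficients $\bar{\rho}$, $\bar{c}_p$, $\bar{\alpha}_{\varkappa}$, $\bar{\alpha}_\theta$ above all share the two-phase mixture form $\bar{a}=\bar{\chi}a_f+(1-\bar{\chi})a_s$. As a purely illustrative sketch of this construction (the numeric values below are hypothetical and not taken from the model):

```python
# Two-phase mixture coefficient a_bar = chi*a_f + (1 - chi)*a_s,
# where chi is the characteristic function of the pore (fluid) space.
# The numeric values are hypothetical, chosen only to illustrate
# the construction.

def mixture(chi, a_f, a_s):
    """Return chi*a_f + (1 - chi)*a_s for chi in {0, 1}."""
    return chi * a_f + (1 - chi) * a_s

rho_f, rho_s = 1.0, 2.65                   # hypothetical phase densities
rho_bar_fluid = mixture(1, rho_f, rho_s)   # inside a pore: equals rho_f
rho_bar_solid = mixture(0, rho_f, rho_s)   # inside the skeleton: equals rho_s
assert (rho_bar_fluid, rho_bar_solid) == (rho_f, rho_s)
```

The same helper describes every barred coefficient in the system, since only the pair $(a_f, a_s)$ changes from one coefficient to the next.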
For the derivation of \eqref{0.1}--\eqref{0.6} and a description of the dimensionless constants (all of these constants are positive) see \cite{MS}. We endow model \textbf{(NA)} with initial and boundary conditions \begin{equation} \label{0.7} {\mathbf w}|_{t=0}={\mathbf w}_0,\quad \frac{\partial {\mathbf w}}{\partial t}|_{t=0}={\mathbf v}_0,\quad \theta|_{t=0} =\theta_0,\quad {\mathbf x}\in \Omega, \end{equation} \begin{equation} \label{0.8} {\mathbf w}=0,\quad \theta=0,\quad {\mathbf x} \in S=\partial \Omega, \quad t\geq 0. \end{equation} From the purely mathematical point of view, the corresponding initial-boundary value problem for model \textbf{(NA)} is well-posed in the sense that it has a unique solution belonging to a suitable functional space on any finite temporal interval (see \cite{MS}). However, in view of possible applications this model is ineffective. Therefore the question arises of finding effective approximate models. If the model involves a small parameter $\varepsilon$, the most natural approach to this problem is to derive models that describe the limiting regimes arising as $\varepsilon$ tends to zero. Such an approximation significantly simplifies the original problem and at the same time preserves all of its main features. In the model under consideration we define $\varepsilon$ as the characteristic size of the pores $l$ divided by the characteristic size $L$ of the entire porous body: $$\varepsilon =\frac{l}{L}.$$ But even this approach is too hard to carry out directly, and some additional simplifying assumptions are necessary. In terms of the geometrical properties of the medium, the most appropriate one is to simplify the problem by postulating that the porous structure is periodic. In what follows, \textbf{model} ${(\mathbf{N}\mathbf B})^\varepsilon$ denotes model \textbf{(NA)} supplemented by this periodicity condition. Thus, our main goal is the derivation of all possible homogenized equations for model ${(\mathbf{N}\mathbf B})^\varepsilon$. 
We adopt the following constraints. \begin{assumption} \label{assumption1} The domain $\Omega =(0,1)^3$ is a periodic repetition of an elementary cell $Y^\varepsilon =\varepsilon Y$, where $Y=(0,1)^3$ and the quantity $1/\varepsilon$ is an integer, so that $\Omega$ always contains an integer number of elementary cells $Y_i^\varepsilon$. Let $Y_s$ be the ``solid part'' of $Y$, and let the ``liquid part'' $Y_f$ be its open complement. We set $\gamma = \partial Y_f \cap \partial Y_s$ and assume that $\gamma $ is a $C^{1}$-surface. The porous space $\Omega ^{\varepsilon}_{f}$ is the periodic repetition of the elementary cell $\varepsilon Y_f$, and the solid skeleton $\Omega ^{\varepsilon}_{s}$ is the periodic repetition of the elementary cell $\varepsilon Y_s$. The boundary $\Gamma^\varepsilon =\partial \Omega_s^\varepsilon \cap \partial \Omega_f^\varepsilon$ is the periodic repetition in $\Omega$ of the boundary $\varepsilon \gamma$. The solid skeleton $\Omega ^{\varepsilon}_{s}$ is a connected domain. \end{assumption} Under these assumptions \begin{equation*} \bar{\chi}({\mathbf x})=\chi^{\varepsilon}({\mathbf x})=\chi \left({\mathbf x} / \varepsilon\right), \end{equation*} $$\bar{c}_{p}=c_{p}^{\varepsilon}({\mathbf x})=\chi^{\varepsilon}({\mathbf x})c _{pf}+ (1-\chi^{\varepsilon}({\mathbf x}))c_{ps},$$ $$\bar{\rho}=\rho^{\varepsilon}({\mathbf x})=\chi^{\varepsilon}({\mathbf x})\rho _{f}+ (1-\chi^{\varepsilon}({\mathbf x}))\rho_{s},$$ $$ \bar{\alpha} _{\varkappa} =\alpha^{\varepsilon} _{\varkappa}({\mathbf x})= \chi ^{\varepsilon}({\mathbf x})\alpha _{\varkappa f} +(1-\chi ^{\varepsilon}({\mathbf x}))\alpha _{\varkappa s}, $$ $$\bar{\alpha}_\theta =\alpha ^{\varepsilon}_\theta({\mathbf x})=\chi ^{\varepsilon}({\mathbf x}) \alpha_{\theta f} +(1-\chi ^{\varepsilon}({\mathbf x}))\alpha_{\theta s},$$ where $\chi ({\mathbf y})$ is the characteristic function of $Y_f$ in $Y$. 
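The scaling $\chi^{\varepsilon}({\mathbf x})=\chi({\mathbf x}/\varepsilon)$ can be illustrated in one space dimension; the particular cell function below (pores occupying the first half of the unit cell) is an arbitrary choice made only for the sketch:

```python
# One-dimensional sketch of the eps-periodic coefficient
# chi_eps(x) = chi(x/eps).  The unit-cell function chi below
# (fluid part Y_f = (0, 1/2)) is a hypothetical choice.

def chi_cell(y):
    """Characteristic function of Y_f = (0, 1/2) in the unit cell Y = (0, 1)."""
    return 1 if (y % 1.0) < 0.5 else 0

def chi_eps(x, eps):
    """eps-periodic coefficient chi^eps(x) = chi(x/eps)."""
    return chi_cell(x / eps)

# chi^eps oscillates with period eps:
eps = 0.1
assert all(chi_eps(x, eps) == chi_eps(x + eps, eps)
           for x in (0.01, 0.37, 0.62))
```

This is exactly the mechanism that makes the coefficients of \eqref{0.1}--\eqref{0.6} rapidly oscillating as $\varepsilon \searrow 0$.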
We say that the \textbf{porous space is disconnected (isolated pores)} if $\gamma \cap \partial Y=\emptyset$.\\ In the present work we suppose that all dimensionless parameters depend on the small parameter $\varepsilon$ and that there exist limits (finite or infinite) $$\lim_{\varepsilon\searrow 0} \alpha_\mu(\varepsilon) =\mu_0, \quad \lim_{\varepsilon\searrow 0} \alpha_\lambda(\varepsilon) =\lambda_0, \quad \lim_{\varepsilon\searrow 0} \alpha_\tau(\varepsilon)=\tau_{0}, \quad \lim_{\varepsilon\searrow 0} \alpha_p(\varepsilon) =p_{*}.$$ Moreover, we restrict ourselves to the case when $\tau_0<\infty$ and $$\mu_0=0, \quad p_{*}=\infty, \quad 0< \lambda_0 <\infty.$$ If $\tau_0=\infty$, then, re-normalizing the displacement vector and the temperature by setting \begin{equation}\nonumber {\mathbf w} \rightarrow \alpha_\tau {\mathbf w},\quad \theta \rightarrow \alpha_\tau \theta \end{equation} we reduce the problem to the previous case. The condition $p_{*}=\infty $ means that the liquid under consideration is incompressible. Using Nguetseng's two-scale convergence method \cite{LNW,NGU} we derive Biot-like systems of thermo-poroelasticity or a system of non-isotropic Lam\'{e} equations, depending on the ratios between the dimensionless parameters and the geometry of the porous space. Different isothermic models have been considered in \cite{S-P}, \cite{B-K}, \cite{GNG}, \cite{G-M2,G-M3,G-M1}, \cite{AM}. \addtocounter{section}{1} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{proposition}{0} \setcounter{corollary}{0} \setcounter{definition}{0} \setcounter{assumption}{0} \begin{center} \textbf{\S1. Model ${(\mathbf{N}\mathbf B})^\varepsilon$. Formulation of the main results.} \end{center} As usual, equations \eqref{0.1}--\eqref{0.6} are understood in the sense of distributions. 
Namely, they involve equations \eqref{0.1}--\eqref{0.6} in the usual sense in the domains $\Omega_f^{\varepsilon}$ and $\Omega_s^{\varepsilon}$, together with the boundary conditions \begin{eqnarray} \label{1.1} & [\theta]=0, \quad [{\mathbf w}]=0,\quad {\mathbf x}_0\in \Gamma ^{\varepsilon},\; t\geq 0,\\ \label{1.2} & [\mathbb P]=0,\quad [\alpha ^{\varepsilon} _{\varkappa} \nabla_x \theta ]=0, \quad {\mathbf x}_0\in \Gamma ^{\varepsilon},\; t\geq 0 \end{eqnarray} on the boundary $\Gamma^\varepsilon $, where \begin{eqnarray} \nonumber & [\varphi]({\mathbf x}_0)=\varphi_{(s)}({\mathbf x}_0) -\varphi_{(f)}({\mathbf x}_0),\\ \nonumber \displaystyle & \varphi_{(s)}({\mathbf x}_0) =\lim\limits_{\tiny \begin{array}{l}{\mathbf x}\to {\mathbf x}_0\\ {\mathbf x}\in \Omega_s^{\varepsilon}\end{array}} \varphi({\mathbf x}),\quad \varphi_{(f)}({\mathbf x}_0) =\lim\limits_{\tiny \begin{array}{l}{\mathbf x}\to {\mathbf x}_0\\ {\mathbf x}\in \Omega_f^{\varepsilon}\end{array}} \varphi({\mathbf x}). \end{eqnarray} There are various representations of equations \eqref{0.1}--\eqref{0.2} and boundary conditions \eqref{1.1}--\eqref{1.2} that are equivalent in the sense of distributions. In what follows, it is convenient to write them in the form of integral equalities. 
\begin{definition} \label{definition1} Five functions $({\mathbf w}^{\varepsilon},\theta^{\varepsilon},p^{\varepsilon},q^{\varepsilon},\pi^{\varepsilon})$ are called a generalized solution of \textbf{model} ${(\mathbf{N}\mathbf B})^\varepsilon$ if they satisfy the regularity conditions \begin{equation} \label{1.3} {\mathbf w}^{\varepsilon},\, \mathbb D(x,{\mathbf w}^{\varepsilon}),\, \mbox{div}_x{\mathbf w}^{\varepsilon},\, q^{\varepsilon},\,p^{\varepsilon},\, \frac{\partial p^{\varepsilon}}{\partial t},\,\pi^{\varepsilon},\,\theta^{\varepsilon}, \nabla_x \theta ^{\varepsilon} \in L^2(\Omega_{T}) \end{equation} in the domain $ \Omega_{T}=\Omega\times (0,T)$, the boundary conditions \eqref{0.8}, the equations \begin{eqnarray} \label{1.4} &\displaystyle q^{\varepsilon}=p^{\varepsilon}+ \frac{\alpha_\nu}{\alpha_p}\frac{\partial p^{\varepsilon}}{\partial t}+ \chi^{\varepsilon}\alpha _{\theta f}\theta ^{\varepsilon},\\ \label{1.5}& \displaystyle p^{\varepsilon}+ \chi^{\varepsilon} \alpha_p \mbox{div}_x {\mathbf w}^{\varepsilon}=0,\\ \label{1.6}& \displaystyle \pi^{\varepsilon} +(1-\chi^{\varepsilon}) (\alpha_\eta \mbox{div}_x {\mathbf w}^{\varepsilon}-\alpha _{\theta s}\theta^{\varepsilon})=0 \end{eqnarray} a.e. 
in $\Omega_{T}$, and the integral identities \begin{eqnarray}\nonumber && \displaystyle \int_{\Omega_{T}} \Bigl(\alpha_\tau \rho ^{\varepsilon} {\mathbf w}^{\varepsilon}\cdot \frac{\partial ^{2}{\mathbf \varphi}}{\partial t^{2}} - \chi ^{\varepsilon}\alpha_\mu \mathbb D({\mathbf x}, {\mathbf w}^{\varepsilon}): \mathbb D(x,\frac{\partial {\mathbf \varphi}}{\partial t})-\rho ^{\varepsilon} \mathbf F\cdot {\mathbf \varphi}+\\ &&\nonumber\{(1-\chi ^{\varepsilon})\alpha_\lambda \mathbb D(x,{\mathbf w}^{\varepsilon})-(q^{\varepsilon}+\pi^{\varepsilon})\mathbb I\} : \mathbb D(x,{\mathbf \varphi})\Bigr) d{\mathbf x} dt +\\ \label{1.7} && \displaystyle \int_\Omega \alpha_\tau \rho ^{\varepsilon}\Bigl({\mathbf w}^{\varepsilon}_{0}\cdot\frac{\partial {\mathbf \varphi}}{\partial t}|_{t=0}- {\mathbf v}^{\varepsilon}_0 \cdot {\mathbf \varphi}|_{t=0} \Bigr)d{\mathbf x} =0 \end{eqnarray} for all smooth vector-functions ${\mathbf \varphi}={\mathbf \varphi}({\mathbf x},t)$ such that ${\mathbf \varphi}|_{\partial \Omega} ={\mathbf \varphi}|_{t=T}=\partial {\mathbf \varphi} / \partial t|_{t=T}=0$ and \begin{eqnarray} \nonumber && \displaystyle \int_{\Omega_{T}} \Bigl((\alpha_\tau c^{\varepsilon}_p \theta ^{\varepsilon}+\alpha^{\varepsilon}_\theta \mbox{div}_x {\mathbf w} ^{\varepsilon}) \frac{\partial \xi}{\partial t} - \alpha _{\varkappa }^{\varepsilon} \nabla_x \theta ^{\varepsilon}\cdot \nabla_x \xi +\Psi \xi \Bigr) d{\mathbf x} dt\\ \label{1.8} && \displaystyle +\int_\Omega (\alpha_\tau c^{\varepsilon}_p \theta ^{\varepsilon}_0 +\alpha^{\varepsilon}_\theta \mbox{div}_x {\mathbf w} _{0}^{\varepsilon})\xi|_{t=0} d{\mathbf x}=0 \end{eqnarray} for all smooth functions $\xi= \xi({\mathbf x},t)$ such that $\xi|_{\partial \Omega} = \xi|_{t=T}=0$. \end{definition} In \eqref{1.7}, by $A:B$ we denote the convolution (or, equivalently, the inner tensor product) of two second-rank tensors over both indices, i.e., $A:B=\mbox{tr\,} (B^*\circ A)=\sum_{i,j=1}^3 A_{ij} B_{ji}$. 
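The identity defining $A:B$ can be checked numerically; a minimal sketch with symmetric $3\times 3$ matrices (as for the symmetric stress and strain-rate tensors appearing in the integral identities), using purely illustrative entries:

```python
# Sanity check of the inner tensor product A:B = tr(B^T o A)
# = sum_{i,j} A_ij B_ji for 3x3 matrices.  With symmetric arguments
# (as for the symmetric tensors in the text) the two expressions
# coincide.

def colon(A, B):
    """A : B = sum over i, j of A[i][j] * B[j][i]."""
    return sum(A[i][j] * B[j][i] for i in range(3) for j in range(3))

def trace_BtA(A, B):
    """tr(B^T o A) = sum over i, k of B[k][i] * A[k][i]."""
    return sum(B[k][i] * A[k][i] for i in range(3) for k in range(3))

A = [[1, 2, 3], [2, 4, 5], [3, 5, 6]]   # symmetric example matrices
B = [[7, 1, 0], [1, 8, 2], [0, 2, 9]]
assert colon(A, B) == trace_BtA(A, B) == 117
```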
Suppose additionally that there exist limits (finite or infinite) \begin{equation} \nonumber \lim_{\varepsilon\searrow 0}\alpha_\nu(\varepsilon) =\nu_0, \quad \lim_{\varepsilon\searrow 0} \alpha_\eta(\varepsilon) =\eta_0,\quad \lim_{\varepsilon\searrow 0} \alpha_{\varkappa s}(\varepsilon) =\varkappa _{0s},\quad \lim_{\varepsilon\searrow 0} \alpha _{ \theta f}(\varepsilon) =\beta_{0f}, \end{equation} \begin{equation*} \lim_{\varepsilon\searrow 0} \alpha _{ \theta s}(\varepsilon) =\beta_{0s}, \quad \lim_{\varepsilon\searrow 0} \frac{\alpha_\mu}{\varepsilon^{2}} =\mu_1,\quad \lim_{\varepsilon\searrow 0} \frac{\alpha _{\varkappa f}}{\alpha_\mu}=\varkappa_{f}. \end{equation*} In what follows we suppose that the following assumption holds. \begin{assumption} \label{assumption2} 1) The dimensionless parameters of model ${(\mathbf{N}\mathbf B})^\varepsilon$ satisfy the restrictions $$ \mu_{0}=0; \qquad \tau _{0}+ \mu_1,\ \varkappa _{0s},\ \varkappa _{f},\ \lambda_{0},\ \eta _{0}>0;$$ \begin{equation*} \tau _{0},\ \varkappa _{f},\ \varkappa _{0s},\ \nu _{0},\ \beta_{0f},\ \beta_{0s},\ \lambda_{0} <\infty. 
\end{equation*} 2) The sequences $\{\sqrt{\alpha_\lambda}(1-\chi ^{\varepsilon})\nabla{\mathbf w}^{\varepsilon}_0\}$, $\{\sqrt{\alpha_\tau}{\mathbf v}^{\varepsilon}_0 \}$, $\{\sqrt{\alpha_\tau}\theta^{\varepsilon}_0 \}$, $\{\sqrt{\alpha_\eta}(1- \chi ^{\varepsilon}) \mbox{div}_x {\mathbf w}_{0}^{\varepsilon}\}$, $\{\sqrt{\alpha _{p}} \chi ^{\varepsilon} \mbox{div}_x {\mathbf w}_{0}^{\varepsilon}\}$, $\{\sqrt{\alpha_\lambda \alpha_\tau}(1-\chi ^{\varepsilon})\nabla{\mathbf v}^{\varepsilon}_0\}$, $\{\sqrt{\alpha_\eta \alpha_\tau}(1- \chi ^{\varepsilon}) \mbox{div}_x {\mathbf v}_{0}^{\varepsilon}\}$, $\{\sqrt{\alpha _{p}\alpha_\tau} \chi ^{\varepsilon} \mbox{div}_x {\mathbf v}_{0}^{\varepsilon}\}$, $\{\textbf{a}^{\varepsilon}_0 \}$, $\{\textbf{b}^{\varepsilon}_0 \}$ are bounded in $L^2(\Omega)$ uniformly in $\varepsilon$, and $|\mathbf F|, |\partial \mathbf F / \partial t|, \Psi, \partial \Psi / \partial t \in L^2(\Omega_{T})$. \end{assumption} Here $$\textbf{a}^{\varepsilon}_0=\mbox{div}_x \mathbb P_{0}^{\varepsilon} + \bar{\rho} \mathbf F({\mathbf x},0),$$ $$ c_{p}^{\varepsilon}\textbf{b}^{\varepsilon}_0 = \mbox{div}_x ( \alpha ^{\varepsilon} _{\varkappa} \nabla_x \theta ^{\varepsilon}_0) -\alpha ^{\varepsilon}_\theta \mbox{div}_x {\mathbf v}^{\varepsilon}_0 +\Psi ({\mathbf x},0),$$ $$\mathbb P_{0}^{\varepsilon}=\chi ^{\varepsilon}\alpha_\mu \mathbb D({\mathbf x}, {\mathbf v}_{0}^{\varepsilon}) +(1-\chi ^{\varepsilon})\alpha_\lambda \mathbb D(x,{\mathbf w}_{0}^{\varepsilon})+$$ $$(\chi ^{\varepsilon}(\alpha_p \mbox{div}_x {\mathbf w}^{\varepsilon}_0+ \alpha_\nu \mbox{div}_x {\mathbf v}^{\varepsilon}_0)+(1-\chi ^{\varepsilon})\alpha_\eta \mbox{div}_x {\mathbf w}^{\varepsilon}_0)\mathbb I.$$ In what follows all parameters may take any permitted values. For example, if $\tau_{0}=0$ or $\eta _{0}^{-1}=0$, then all terms in the final equations containing these parameters disappear. The following Theorems \ref{theorem1}--\ref{theorem2} are the main results of the paper. 
\begin{theorem} \label{theorem1} For every $\varepsilon >0$ and every time interval $[0,T]$ there exists a unique generalized solution of model ${(\mathbf{N}\mathbf B})^\varepsilon$ and \begin{equation} \label{1.9} \displaystyle \max\limits_{0\leq t\leq T}\| |{\mathbf w}^{\varepsilon}(t)|, \sqrt{\alpha_\mu} \chi^\varepsilon |\nabla_x {\mathbf w}^{\varepsilon}(t)|, (1-\chi^\varepsilon) |\nabla_x {\mathbf w}^{\varepsilon}(t)| \|_{2,\Omega} \leq C_{0} , \end{equation} \begin{equation} \label{1.10} \displaystyle\| \theta^{\varepsilon} \|_{2,\Omega_{T}}+\sqrt{\alpha _{\varkappa f}}\| \chi ^{\varepsilon} \nabla_x \theta^{\varepsilon}\|_{2,\Omega _{T}}+ \|(1- \chi ^{\varepsilon}) \nabla_x \theta^{\varepsilon}\|_{2,\Omega _{T}} \leq C_{0} , \end{equation} \begin{equation}\label{1.11} \|q^{\varepsilon}\|_{2,\Omega_{T}} + \|p^{\varepsilon}\|_{2,\Omega_{T}} + \frac{\alpha _{\nu}}{\alpha _{p}}\|\frac{\partial p^{\varepsilon}}{\partial t}\|_{2,\Omega_{T}} + \|\pi ^{\varepsilon}\|_{2,\Omega_{T}} \leq C_{0}, \end{equation} where $C_{0}$ does not depend on the small parameter $\varepsilon $. \end{theorem} \begin{theorem} \label{theorem2} The functions ${\mathbf w}^{\varepsilon}$ and $\theta ^{\varepsilon}$ admit extensions ${\mathbf u}^{\varepsilon}$ and $\vartheta^{\varepsilon}$, respectively, from $\Omega_{s,T}^{\varepsilon}=\Omega_s^\varepsilon \times (0,T)$ into $\Omega_{T}$ such that the sequences $\{{\mathbf u}^{\varepsilon}\}$ and $\{\vartheta^{\varepsilon}\}$ converge strongly in $L^{2}(\Omega_{T})$ and weakly in $L^{2}((0,T);W^1_2(\Omega))$ to functions ${\mathbf u}$ and $\vartheta$, respectively. At the same time, the sequences $\{{\mathbf w}^\varepsilon\}$, $\{\theta ^{\varepsilon}\}$, $\{p^{\varepsilon}\}$, $\{q^{\varepsilon}\}$, and $\{\pi^{\varepsilon}\}$ converge weakly in $L^{2}(\Omega_{T})$ to ${\mathbf w}$, $\theta $, $p$, $q$, and $\pi$, respectively. 
The following assertions hold true for these limiting functions: \textbf{(I)} If $\mu_1 =\infty$ then ${\mathbf w}={\mathbf u}$, $\theta =\vartheta $ and the weak limits ${\mathbf u}$, $\vartheta $, $p$, $q$, and $\pi$ satisfy in $\Omega_{T}$ the initial-boundary value problem \begin{equation}\label{1.12} \left. \begin{array}{lll} \displaystyle \tau _{0}\hat{\rho}\frac{\partial ^2{\mathbf u}}{\partial t^2} +\nabla (q+\pi )-\hat{\rho}\mathbf F=\\[1ex] \mbox{div}_x \{\lambda _{0}\mathbb A^{s}_{0}:\mathbb D(x,{\mathbf u}) + B^{s}_{0}(\mbox{div}_x {\mathbf u}-\frac{\beta_{0s}}{\eta_{0}}\vartheta )+B^{s}_{1}q \}, \end{array} \right\} \end{equation} \begin{equation}\label{1.13} (\tau_{0}\hat{c_{p}}+\frac{\beta_{0s}^{2}}{\eta_{0}}(1-m))\frac{\partial \vartheta}{\partial t} -\frac{\beta_{0s}}{\eta_{0}}\frac{\partial \pi}{\partial t}+(a^{s}_{1}-\frac{1}{\eta_{0}})\langle \frac{\partial q}{\partial t}\rangle_{\Omega}= \mbox{div}_x ( B^{\theta}\cdot \nabla \vartheta )+\Psi , \end{equation} \begin{equation}\label{1.14} \frac{1}{\eta_{0}}(\pi +\langle q\rangle_{\Omega})+C^{s}_{0}:\mathbb D(x,{\mathbf u})+ a^{s}_{0}(\mbox{div}_x {\mathbf u} - \frac{\beta_{0s}}{\eta_{0}}(\vartheta-\langle \vartheta \rangle_{\Omega})) +a^{s}_{1}(q-\langle q\rangle_{\Omega})=0, \end{equation} \begin{equation}\label{1.15} \frac{1}{\eta_{0}}(\pi +\langle q\rangle_{\Omega}) + \mbox{div}_x {\mathbf u}+ \frac{(1-m)\beta_{0s}}{\eta_{0}}(\vartheta-\langle \vartheta \rangle_{\Omega})=0, \end{equation} \begin{equation}\label{1.16} q-\langle q\rangle_{\Omega}=p +\beta_{0f}m(\vartheta-\langle \vartheta \rangle_{\Omega}), \end{equation} where $$\hat{\rho}=m \rho_{f} + (1-m)\rho_{s},\quad \hat{c_{p}}=m c_{pf} + (1-m)c_{ps},\quad m=\int _{Y}\chi ({\mathbf y})d{\mathbf y}.$$ The symmetric, strictly positive definite constant fourth-rank tensor $\mathbb A^{s}_{0}$, the constant matrices $C^{s}_{0}$, $B^{s}_{0}$, $B^{s}_{1}$, the strictly positive definite constant matrix $B^{\theta}$ and the constants $a^{s}_{0}$, 
$a^{s}_{1}$ and $a^{s}_{2}$ are defined below by Eqs. \eqref{4.33}--\eqref{4.35} and \eqref{4.38}. Differential equations \eqref{1.12}--\eqref{1.16} are endowed with initial conditions at $t=0$, $ {\mathbf x}\in \Omega$, \begin{equation}\label{1.17} (\tau _{0}+\beta_{0s})(\vartheta-\vartheta_{0})=0,\quad\tau _{0}({\mathbf u}-{\mathbf u}_{0})= \tau _{0}(\frac{\partial {\mathbf u}}{\partial t}-{\mathbf v}_{0})=0; \end{equation} and boundary conditions \begin{equation}\label{1.18} \vartheta ({\mathbf x},t)=0, \quad {\mathbf u}({\mathbf x},t)=0, \quad {\mathbf x}\in S, \quad t>0. \end{equation} \noindent \textbf{(II)} If the porous space is disconnected, then ${\mathbf w}={\mathbf u}$ and the strong and weak limits ${\mathbf u}$, $\vartheta $, $p$, $q$, $\pi$, together with the weak limit $\theta ^{f}$ of the sequence $\{\chi ^{\varepsilon}\theta ^{\varepsilon}\}$, satisfy in $\Omega_{T}$ equations \eqref{1.12}, \eqref{1.14}--\eqref{1.15}, the state equation \begin{equation}\label{1.19} q-\langle q\rangle_{\Omega}=p +\beta_{0f}(\theta ^{f}-\langle \theta ^{f} \rangle_{\Omega}), \end{equation} and the heat equation \begin{eqnarray} \nonumber &&\tau_{0}c_{pf}\frac{\partial \theta^{f}}{\partial t}+(\tau_{0}c_{ps}+ \frac{\beta_{0s}^{2}}{\eta_{0}})(1-m)\frac{\partial \vartheta}{\partial t}-\frac{\beta_{0s}}{\eta_{0}}\frac{\partial \pi}{\partial t} +(a^{s}_{1}-\frac{1}{\eta_{0}})\langle \frac{\partial q}{\partial t}\rangle_{\Omega}=\\ && \mbox{div}_x ( B^{\theta}\cdot \nabla \vartheta ) +\Psi . \label{1.20} \end{eqnarray} Here $\theta^{f}$ is defined below by formulas \eqref{4.40}--\eqref{4.45}. The problem is endowed with the initial and boundary conditions \eqref{1.17}--\eqref{1.18}. 
\noindent \textbf{(III)} If $\mu_{1}<\infty$ then the strong and weak limits ${\mathbf u}$, $\vartheta $, ${\mathbf w}^{f}$, $\theta ^{f}$, $p$, $q$ and $\pi$ of the sequences $\{{\mathbf u}^\varepsilon\}$, $\{\vartheta ^\varepsilon\}$, $\{\chi^{\varepsilon}{\mathbf w}^\varepsilon\}$, $\{\chi^{\varepsilon}\theta ^\varepsilon\}$, $\{p^\varepsilon\}$, $\{q^\varepsilon\}$ and $\{\pi^\varepsilon\}$ satisfy the initial-boundary value problem in $\Omega_T$, consisting of the balance of momentum equation \begin{equation}\label{1.21} \left. \begin{array}{lll} \displaystyle\tau _{0}(\rho_{f}\frac{\partial ^2{\mathbf w}^{f}}{\partial t^2}+\rho_{s}(1-m)\frac{\partial ^2{\mathbf u}}{\partial t^2}) +\nabla (q+\pi )-\hat{\rho}\mathbf F= \\[1ex] \mbox{div}_x \{\lambda _{0}\mathbb A^{s}_{0}:\mathbb D(x,{\mathbf u}) + B^{s}_{0}\mbox{div}_x {\mathbf u} +B^{s}_{1}q \}, \end{array} \right\} \end{equation} where $\mathbb A^{s}_{0}$, $B^{s}_{0}$ and $B^{s}_{1}$ are the same as in Eq. \eqref{1.12}, the continuity equation \eqref{1.14}, the continuity equation \begin{equation} \label{1.22} \frac{1}{\eta_{0}}(\pi +\langle q\rangle_{\Omega})+\mbox{div}_x {\mathbf w}^{f} + \frac{(1-m)\beta_{0s}}{\eta_{0}}(\vartheta -\langle \vartheta \rangle_{\Omega})= (m-1)\mbox{div}_x {\mathbf u} , \end{equation} the state equation \eqref{1.19}, the heat equation \eqref{1.20} and Darcy's law in the form \begin{equation}\label{1.23} \frac{\partial {\mathbf w}^{f}}{\partial t}=m\frac{\partial {\mathbf u}}{\partial t}+\int_{0}^{t} B_{1}(\mu_1,t-\tau)\cdot (-\nabla_x q+\rho_{f}\mathbf F-\tau_{0}\rho_{f}\frac{\partial ^2 {\mathbf u}}{\partial \tau ^2})({\mathbf x},\tau )d\tau \end{equation} if $\tau_{0}>0$ and $\mu_{1}>0$, Darcy's law in the form \begin{equation}\label{1.24} \frac{\partial {\mathbf w}^{f}}{\partial t}=\frac{\partial {\mathbf u}}{\partial t}+B_{2}(\mu_1)\cdot(-\nabla_x q+\rho_{f}\mathbf F) \end{equation} if $\tau_{0}=0$ and, finally, Darcy's law in the form \begin{equation}\label{1.25} \frac{\partial {\mathbf w}^{f}}{\partial 
t}=B_{3}\cdot \frac{\partial {\mathbf u}}{\partial t}+\frac{1}{\tau _{0}\rho_{f}}(m\mathbb I-B_{3})\cdot\int_{0}^{t}(-\nabla_x q+\rho_{f}\mathbf F)({\mathbf x},\tau )d\tau \end{equation} if $\mu_{1}=0$. The problem is supplemented by the boundary and initial conditions \eqref{1.17}--\eqref{1.18} for the displacement ${\mathbf u}$ and the temperature $\vartheta$ of the rigid component, and by the boundary condition \begin{equation}\label{1.26} {\mathbf w}^{f}({\mathbf x},t)\cdot {\mathbf n}({\mathbf x})=0, \quad ({\mathbf x},t) \in S=\partial \Omega , \quad t>0 \end{equation} for the displacement $ {\mathbf w}^{f}$ of the liquid component. In Eqs. \eqref{1.23}--\eqref{1.26} ${\mathbf n}({\mathbf x})$ is the unit normal vector to $S$ at a point ${\mathbf x} \in S$, and the matrices $B_{1}(\mu_1,t)$, $B_{2}(\mu_1)$, and $B_{3}$ are given below by formulas \eqref{4.51}--\eqref{4.56}. \end{theorem} \addtocounter{section}{1} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{proposition}{0} \setcounter{corollary}{0} \setcounter{definition}{0} \setcounter{assumption}{0} \begin{center} \textbf{\S2. Preliminaries} \end{center} \textbf{2.1. Two-scale convergence.} The justification of Theorems \ref{theorem1}--\ref{theorem2} relies on a systematic use of the method of two-scale convergence, which was proposed by G. Nguetseng \cite{NGU} and has recently been applied to a wide range of homogenization problems (see, for example, the survey \cite{LNW}). 
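The averaging mechanism underlying the two-scale limit can be illustrated numerically in one dimension: for a 1-periodic $g$, the oscillating functions $f(x)g(x/\varepsilon)$ converge weakly to $f$ times the cell average of $g$. A minimal sketch, purely illustrative and not part of the proofs:

```python
import math

# For f(x) = x and g(y) = sin^2(2*pi*y), whose cell average is 1/2,
# the integral of f(x)*g(x/eps) over (0, 1) tends to 1/2 * 1/2 = 1/4
# as eps -> 0.

def midpoint_integral(func, n=200_000):
    """Midpoint-rule approximation of the integral of func over (0, 1)."""
    h = 1.0 / n
    return h * sum(func((k + 0.5) * h) for k in range(n))

def oscillatory(eps):
    return midpoint_integral(
        lambda x: x * math.sin(2 * math.pi * x / eps) ** 2)

for eps in (0.1, 0.01):
    assert abs(oscillatory(eps) - 0.25) < 1e-3
```

Two-scale convergence refines this weak limit by retaining the dependence on the fast variable ${\mathbf y}={\mathbf x}/\varepsilon$, rather than only the cell average.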
\begin{definition} \label{TS} A sequence $\{\varphi^\varepsilon\}\subset L^2(\Omega_{T})$ is said to be \textit{two-scale convergent} to a limit $\varphi\in L^2(\Omega_{T}\times Y)$ if and only if for any 1-periodic in ${\mathbf y}$ function $\sigma=\sigma({\mathbf x},t,{\mathbf y})$ the limiting relation \begin{equation}\label{(2.1)} \lim_{\varepsilon\searrow 0} \int_{\Omega_{T}} \varphi^\varepsilon({\mathbf x},t) \sigma\left({\mathbf x},t,{\mathbf x} / \varepsilon\right)d{\mathbf x} dt = \int _{\Omega_{T}}\int_Y \varphi({\mathbf x},t,{\mathbf y})\sigma({\mathbf x},t,{\mathbf y})d{\mathbf y} d{\mathbf x} dt \end{equation} holds. \end{definition} The existence and main properties of two-scale convergent sequences are established by the following fundamental theorem \cite{NGU,LNW}: \begin{theorem} \label{theorem3}(\textbf{Nguetseng's theorem}) \textbf{1.} Any sequence bounded in $L^2(\Omega_{T})$ contains a subsequence two-scale convergent to some limit $\varphi\in L^2(\Omega_{T}\times Y)$.\\[1ex] \textbf{2.} Let the sequences $\{\varphi^\varepsilon\}$ and $\{\varepsilon \nabla_x \varphi^\varepsilon\}$ be uniformly bounded in $L^2(\Omega_{T})$. Then there exist a 1-periodic in ${\mathbf y}$ function $\varphi=\varphi({\mathbf x},t,{\mathbf y})$ and a subsequence $\{\varphi^\varepsilon\}$ such that $\varphi,\nabla_y \varphi\in L^2(\Omega_{T}\times Y)$, and $\varphi^\varepsilon$ and $\varepsilon \nabla_x \varphi^\varepsilon$ two-scale converge to $\varphi$ and $\nabla_y \varphi$, respectively.\\[1ex] \textbf{3.} Let the sequences $\{\varphi^\varepsilon\}$ and $\{\nabla_x \varphi^\varepsilon\}$ be bounded in $L^2(\Omega_{T})$. 
Then there exist functions $\varphi\in L^2(\Omega_{T})$ and $\psi \in L^2(\Omega_{T}\times Y)$ and a subsequence of $\{\varphi^\varepsilon\}$ such that $\psi$ is 1-periodic in ${\mathbf y}$, $\nabla_y \psi\in L^2(\Omega_{T}\times Y)$, and $\varphi^\varepsilon$ and $\nabla_x \varphi^\varepsilon$ two-scale converge to $\varphi$ and $\nabla_x \varphi({\mathbf x},t)+\nabla_y \psi({\mathbf x},t,{\mathbf y})$, respectively. \end{theorem} \begin{corollary} \label{corollary2.1} Let $\sigma\in L^2(Y)$ and $\sigma^\varepsilon({\mathbf x}):=\sigma({\mathbf x}/\varepsilon)$. Assume that a sequence $\{\varphi^\varepsilon\}\subset L^2(\Omega_{T})$ two-scale converges to $\varphi \in L^2(\Omega_{T}\times Y)$. Then the sequence $\sigma^\varepsilon \varphi^\varepsilon$ two-scale converges to $\sigma \varphi$. \end{corollary} \textbf{2.2. An extension lemma.} The typical difficulty in homogenization problems when passing to the limit in model ${(\mathbf{N}\mathbf B})^\varepsilon$ as $\varepsilon \searrow 0$ arises from the fact that the bounds on the gradient of displacement $\nabla_x {\mathbf w}^\varepsilon$ may be distinct in the liquid and rigid phases. The classical approach to overcoming this difficulty consists in constructing an extension to the whole of $\Omega$ of the displacement field defined merely on $\Omega^\varepsilon_s$. The following lemma is valid due to well-known results from \cite{ACE,JKO}. We formulate it in a form appropriate for our purposes: \begin{lemma} \label{Lemma2.1} Suppose that the assumptions on the geometry of the periodic structure (Assumption \ref{assumption1}) hold, $ \psi^\varepsilon\in W^1_2(\Omega^\varepsilon_s)$ and $\psi^\varepsilon =0$ on $S_{s}^{\varepsilon}=\partial \Omega ^\varepsilon_s \cap \partial \Omega$ in the trace sense. 
Then there exists a function $ \sigma^\varepsilon \in W^1_2(\Omega)$ such that its restriction to the sub-domain $\Omega^\varepsilon_s$ coincides with $\psi^\varepsilon$, i.e., \begin{equation} \label{2.2} (1-\chi^\varepsilon({\mathbf x}))( \sigma^\varepsilon({\mathbf x}) - \psi^\varepsilon ({\mathbf x}))=0,\quad {\mathbf x}\in\Omega, \end{equation} and, moreover, the estimates \begin{equation} \label{2.3} \|\sigma^\varepsilon\|_{2,\Omega}\leq C\| \psi^\varepsilon\|_{2,\Omega ^{\varepsilon}_{s}} , \quad \|\nabla_x \sigma^\varepsilon\|_{2,\Omega} \leq C \|\nabla_x \psi^\varepsilon\|_{2,\Omega ^{\varepsilon}_{s}} \end{equation} hold true, where the constant $C$ depends only on the geometry of $Y$ and does not depend on $\varepsilon$. \end{lemma} \textbf{2.3. The Friedrichs--Poincar\'{e} inequality in a periodic structure.} The following lemma was proved by L. Tartar in \cite[Appendix]{S-P}. It specifies the Friedrichs--Poincar\'{e} inequality for an $\varepsilon$-periodic structure. \begin{lemma} \label{F-P} Suppose that the assumptions on the geometry of $\Omega^\varepsilon_f$ hold true. Then for any function $\varphi\in \stackrel{\!\!\circ}{W^1_2}(\Omega^\varepsilon_f)$ the inequality \begin{equation} \label{(F-P)} \int_{\Omega^\varepsilon_f} |\varphi|^2 d{\mathbf x} \leq C \varepsilon^2 \int_{\Omega^\varepsilon_f} |\nabla_x \varphi|^2 d{\mathbf x} \end{equation} holds true with some constant $C$ independent of $\varepsilon$. \end{lemma} \textbf{2.4. Some notation.} Further we denote 1) $ \langle\Phi \rangle_{Y} =\int_Y \Phi dy, \quad \langle\Phi \rangle_{Y_{f}} =\int_{Y_{f}} \Phi dy,\quad \langle\Phi \rangle_{Y_{s}} =\int_{Y_{s}} \Phi dy.$ 2) If $\textbf{a}$ and $\textbf{b}$ are two vectors, then the matrix $\textbf{a}\otimes \textbf{b}$ is defined by the formula $$(\textbf{a}\otimes \textbf{b})\cdot \textbf{c}=\textbf{a}(\textbf{b}\cdot \textbf{c})$$ for any vector $\textbf{c}$. 
3) If $B$ and $C$ are two matrices, then $B\otimes C$ is the fourth-rank tensor whose convolution with any matrix $A$ is defined by the formula $$(B\otimes C):A=B (C:A).$$ 4) By $\mathbb I^{ij}$ we denote the $3\times 3$-matrix with just one non-vanishing entry, which is equal to one and stands in the $i$-th row and the $j$-th column. 5) We also introduce $$ J^{ij}=\frac{1}{2}(\mathbb I^{ij}+\mathbb I^{ji})=\frac{1}{2} ({\mathbf e}_i \otimes {\mathbf e}_j + {\mathbf e}_j \otimes {\mathbf e}_i), $$ where $({\mathbf e}_1, {\mathbf e}_2, {\mathbf e}_3)$ are the standard Cartesian basis vectors. \addtocounter{section}{1} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{proposition}{0} \setcounter{corollary}{0} \setcounter{definition}{0} \setcounter{assumption}{0} \begin{center} \textbf{\S3. Proof of Theorem \ref{theorem1}} \end{center} Under the restriction $\tau_{0}>0$, estimates \eqref{1.9}--\eqref{1.10} follow from \begin{equation*} \max\limits_{0<t<T}(\sqrt{\alpha_\eta}\| \mbox{div}_x \frac{\partial{\mathbf w}^{\varepsilon}}{\partial t}(t) \|_{2,\Omega _s^{\varepsilon}}+ \sqrt{\alpha_\lambda}\| \nabla_x \frac{\partial{\mathbf w}^{\varepsilon}}{\partial t}(t) \|_{2,\Omega _s^{\varepsilon}} \end{equation*} \begin{equation*} + \sqrt{\alpha_\tau}\| \frac{\partial ^{2}{\mathbf w}^\varepsilon}{\partial t^{2}}(t)\|_{2,\Omega}+\sqrt{\alpha _{p}} \| \mbox{div}_x \frac{\partial{\mathbf w}^{\varepsilon}}{\partial t}(t)\|_{2,\Omega _f^{\varepsilon}}+ \sqrt{\alpha_\tau}\| \frac{\partial\theta^{\varepsilon}}{\partial t}(t)\|_{2,\Omega}) \end{equation*} \begin{equation*} +\sqrt{\alpha _{\varkappa f}}\|\chi ^{\varepsilon} \nabla_x \frac{\partial\theta^{\varepsilon}}{\partial t}\|_{2,\Omega _{T}}+\sqrt{\alpha _{\varkappa s}}\|(1- \chi ^{\varepsilon}) \nabla_x \frac{\partial\theta^{\varepsilon}}{\partial t}\|_{2,\Omega _{T}} \end{equation*} \begin{equation} \label{3.1} +\sqrt{\alpha_\mu}\|\chi ^{\varepsilon} \nabla_x \frac{\partial ^{2}{\mathbf 
w}^\varepsilon}{\partial t^{2}} \|_{2,\Omega_T}+ \sqrt{\alpha _{\nu}}\| \chi ^{\varepsilon} \mbox{div}_x \frac{\partial ^{2}{\mathbf w}^\varepsilon}{\partial t^{2}}\|_{2,\Omega _{T}} \leq \frac{C_{0}}{\sqrt{\alpha_\tau}}, \end{equation} where $C_{0}$ is independent of $\varepsilon$. We obtain the last estimate if we differentiate the equations for ${\mathbf w}^{\varepsilon}$ and $\theta^{\varepsilon}$ with respect to time, multiply the first equation by $\partial ^{2} {\mathbf w}^{\varepsilon} / \partial t^{2}$ and the second one by $\partial\theta^{\varepsilon} / \partial t $, integrate by parts and sum the results. The same estimate guarantees the existence and uniqueness of the generalized solution of model ${(\mathbf{N}\mathbf B})^\varepsilon$. Estimate \eqref{1.11} for the pressures follows from the integral identity \eqref{1.7} and estimates \eqref{3.1} as an estimate of the corresponding functional, if we re-normalize the pressures so that $$\int _{\Omega} (q^\varepsilon({\mathbf x},t)+\pi^\varepsilon({\mathbf x},t)) d{\mathbf x}=0. $$ Indeed, the integral identity \eqref{1.7} and estimates \eqref{3.1} imply $$|\int _{\Omega} (q^\varepsilon+\pi^\varepsilon )\mbox{div}_x {\mathbf{\psi}} d{\mathbf x} |\leq C \|\nabla {\mathbf{\psi}}\|_{2,\Omega}.$$ Choosing now ${\mathbf{\psi}}$ such that $(q^\varepsilon+\pi^\varepsilon )= \mbox{div}_x {\mathbf{\psi}}$ we get the desired estimate for the sum of pressures $(q^\varepsilon+\pi^\varepsilon )$. 
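In one space dimension the required choice of ${\mathbf{\psi}}$ is transparent: the divergence is $d/dx$, so $\psi$ is simply the antiderivative of the data, and the zero-mean re-normalization is exactly what makes $\psi$ vanish at both endpoints. A toy numerical sketch, with a hypothetical zero-mean stand-in $f$ for $q^\varepsilon+\pi^\varepsilon$:

```python
import math

# 1D analogue of choosing psi with div psi = q + pi and psi = 0 on
# the boundary: psi(x) = int_0^x f(s) ds, and the zero-mean condition
# int_0^1 f dx = 0 forces psi(1) = 0 as well as psi(0) = 0.
# f below is a hypothetical zero-mean stand-in for the data.

def f(x):
    return math.cos(2 * math.pi * x)      # zero mean over (0, 1)

def psi(x, n=10_000):
    """psi(x) = int_0^x f(s) ds, approximated by the midpoint rule."""
    if x <= 0.0:
        return 0.0
    h = x / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

assert abs(psi(0.0)) < 1e-12 and abs(psi(1.0)) < 1e-6   # psi = 0 at both ends
assert abs(psi(0.25) - 1.0 / (2.0 * math.pi)) < 1e-6    # psi' = f
```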
Such a choice is always possible (see \cite{LAD}), if we put $${\mathbf{\psi}}=\nabla \varphi + {\mathbf{\psi_{0}}}, \quad \mbox{div}_x {\mathbf{\psi_{0}}}=0, \quad \triangle \varphi=q^\varepsilon+\pi^\varepsilon ,\quad \varphi | _{\partial\Omega}=0, \quad (\nabla \varphi + {\mathbf{\psi_{0}}})| _{\partial\Omega}=0.$$ Note that the re-normalization of the pressures $(q^\varepsilon+\pi^\varepsilon )$ transforms the continuity and state equations \eqref{1.4}--\eqref{1.6} for the pressures into \begin{eqnarray} \label{3.2} &\displaystyle q^{\varepsilon}=p^{\varepsilon}+ \frac{\alpha_\nu}{\alpha_p}\frac{\partial p^{\varepsilon}}{\partial t}+ \chi^{\varepsilon}(\alpha _{\theta f}\theta ^{\varepsilon}+\gamma ^{\varepsilon}_{f}),\\ \label{3.3}& \displaystyle \frac{1}{\alpha_p}p^{\varepsilon}+ \chi^{\varepsilon}\mbox{div}_x {\mathbf w}^{\varepsilon}=-\frac{1}{m}\beta ^{\varepsilon}\chi^\varepsilon ,\\ \label{3.4}& \displaystyle \frac{1}{\alpha_\eta}\pi^{\varepsilon} +(1-\chi^{\varepsilon}) (\mbox{div}_x {\mathbf w}^{\varepsilon}-\frac{\alpha _{\theta s}}{\alpha_\eta}\theta^{\varepsilon}+\gamma ^{\varepsilon}_{s})=0, \end{eqnarray} where $$\beta ^{\varepsilon}=\langle (1-\chi^\varepsilon )\mbox{div}_x {\mathbf w}^\varepsilon \rangle _{\Omega},\quad m\gamma ^{\varepsilon}_{f}=\langle q^\varepsilon \rangle _{\Omega}-\alpha _{\theta f}\langle \chi^{\varepsilon}\theta ^{\varepsilon}\rangle _{\Omega},$$ $$ (1-m)\gamma ^{\varepsilon}_{s}=\frac{1}{\alpha_\eta}\langle q^\varepsilon \rangle _{\Omega}+\frac{\alpha _{\theta s}}{\alpha_\eta}\langle (1-\chi^{\varepsilon})\theta ^{\varepsilon}\rangle _{\Omega}-\beta ^{\varepsilon}.$$ Note that the basic integral identity \eqref{1.7} permits us to bound only the sum $(q^\varepsilon +\pi^{\varepsilon})$. But since the product of these two functions is equal to zero, this is enough to obtain bounds for each of them.
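In more detail, the estimate for the sum of pressures closes as follows. Assuming the standard bound $\|\nabla {\mathbf{\psi}}\|_{2,\Omega}\leq C_{1}\| q^\varepsilon+\pi^\varepsilon\|_{2,\Omega}$ for the above construction (elliptic regularity for $\varphi$ together with the estimate of ${\mathbf{\psi_{0}}}$, see \cite{LAD}), we obtain the schematic chain $$\| q^\varepsilon+\pi^\varepsilon\|_{2,\Omega}^{2}= \int _{\Omega} (q^\varepsilon+\pi^\varepsilon )\mbox{div}_x {\mathbf{\psi}} d{\mathbf x} \leq C \|\nabla {\mathbf{\psi}}\|_{2,\Omega}\leq C C_{1}\| q^\varepsilon+\pi^\varepsilon\|_{2,\Omega},$$ so that $\| q^\varepsilon+\pi^\varepsilon\|_{2,\Omega}\leq C C_{1}$ with constants independent of $\varepsilon$.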
The pressure $p^{\varepsilon}$ is estimated from the state equation \eqref{3.2}, if we substitute the term $(\alpha_{\nu} / \alpha_p)\partial p^\varepsilon / \partial t$ from the continuity equation \eqref{3.3} and use estimate \eqref{3.1}. The estimation of ${\mathbf w}^\varepsilon$ and $\theta^\varepsilon$ in the case $\tau_0=0$ is less straightforward, so we outline it in more detail. As usual, we obtain the basic estimates if we multiply the equations for ${\mathbf w}^\varepsilon$ by $\partial {\mathbf w}^\varepsilon /\partial t$, the equation for $\theta^\varepsilon$ by $\theta^\varepsilon$, sum the results and then integrate by parts all obtained terms. Only the two terms $\mathbf F\cdot \partial {\mathbf w}^\varepsilon / \partial t $ and $\Psi\cdot \theta ^{\varepsilon} $ need additional consideration here. First of all, on the strength of Lemma \ref{Lemma2.1}, we construct an extension ${\mathbf u}^\varepsilon $ of the function ${\mathbf w}^\varepsilon $ from $\Omega_s^\varepsilon$ into $\Omega_f^\varepsilon$ such that ${\mathbf u}^\varepsilon ={\mathbf w}^\varepsilon$ in $\Omega_s^\varepsilon$, ${\mathbf u}^\varepsilon \in W_2^1(\Omega)$ and $$\| {\mathbf u}^\varepsilon\|_{2,\Omega} \leq C \|\nabla_x {\mathbf u}^\varepsilon\|_{2,\Omega} \leq \frac{C}{\sqrt{\alpha_\lambda}} \|(1-\chi^\varepsilon)\sqrt{\alpha_\lambda}\nabla_x {\mathbf w}^\varepsilon\|_{2,\Omega }.$$ After that we estimate $\|{\mathbf w}^\varepsilon\|_{2,\Omega}$ with the help of Friedrichs--Poincar\'{e}'s inequality in a periodic structure (Lemma \ref{F-P}) for the difference $({\mathbf u}^\varepsilon -{\mathbf w}^\varepsilon)$: $$\|{\mathbf w}^\varepsilon\|_{2,\Omega} \leq \|{\mathbf u}^\varepsilon\|_{2,\Omega} + \|{\mathbf u}^\varepsilon -{\mathbf w}^\varepsilon\|_{2,\Omega} \leq \|{\mathbf u}^\varepsilon\|_{2,\Omega} + C\varepsilon \|\chi^\varepsilon \nabla_x ({\mathbf u}^\varepsilon -{\mathbf w}^\varepsilon)\|_{2,\Omega} $$ $$\leq \|{\mathbf u}^\varepsilon\|_{2,\Omega}+C\varepsilon \|\nabla_x {\mathbf
u}^\varepsilon\|_{2,\Omega}+C(\varepsilon \alpha _{\mu }^{-\frac{1}{2}})\|\chi^\varepsilon \sqrt{\alpha_\mu} \nabla_x {\mathbf w}^\varepsilon\|_{2,\Omega}$$ $$\leq \frac{C}{\sqrt{\alpha_\lambda}} \|(1-\chi^\varepsilon)\sqrt{\alpha_\lambda}\nabla_x {\mathbf w}^\varepsilon\|_{2,\Omega }+C(\varepsilon \alpha _{\mu }^{-\frac{1}{2}})\|\chi^\varepsilon \sqrt{\alpha_\mu} \nabla_x {\mathbf w}^\varepsilon\|_{2,\Omega}.$$ We apply the same method to the temperature $\theta^{\varepsilon}$: there is an extension $\vartheta^\varepsilon $ of the function $\theta^\varepsilon $ from $\Omega_s^\varepsilon$ into $\Omega_f^\varepsilon$ such that $\vartheta^\varepsilon =\theta^\varepsilon$ in $\Omega_s^\varepsilon$, $\vartheta^\varepsilon \in W_2^1(\Omega)$ and $$\| \vartheta^\varepsilon\|_{2,\Omega} \leq C \|\nabla_x \vartheta^\varepsilon\|_{2,\Omega} \leq \frac{C}{\sqrt{\alpha_{\varkappa s}}} \|(1-\chi^\varepsilon)\sqrt{\alpha_{\varkappa s}}\nabla_x \theta^\varepsilon\|_{2,\Omega },$$ $$\|\theta^\varepsilon\|_{2,\Omega} \leq \frac{C}{\sqrt{\alpha_{\varkappa s}}} \|(1-\chi^\varepsilon)\sqrt{\alpha_{\varkappa s}}\nabla_x \theta^\varepsilon\|_{2,\Omega } +C(\varepsilon \alpha _{\varkappa s }^{-\frac{1}{2}})\|\chi^\varepsilon \sqrt{\alpha_{\varkappa s}} \nabla_x \theta^\varepsilon\|_{2,\Omega}.$$ Next we pass the time derivative from $\partial {\mathbf w}^{\varepsilon }/ \partial t$ to $\rho^{\varepsilon}\mathbf F$ and bound all newly obtained terms in the usual way with the help of H\"{o}lder's and Gronwall's inequalities.
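Schematically (assuming $\mathbf F$ is smooth in $t$), this transfer of the time derivative is an integration by parts in time: $$\int _{0}^{t}\int _{\Omega}\rho^{\varepsilon}\mathbf F\cdot \frac{\partial {\mathbf w}^{\varepsilon}}{\partial t} d{\mathbf x} d\tau = \int _{\Omega}\rho^{\varepsilon}\mathbf F\cdot {\mathbf w}^{\varepsilon} d{\mathbf x}\Big |_{\tau =0}^{\tau =t} -\int _{0}^{t}\int _{\Omega}\rho^{\varepsilon}\frac{\partial \mathbf F}{\partial t}\cdot {\mathbf w}^{\varepsilon} d{\mathbf x} d\tau ,$$ after which H\"{o}lder's inequality, the above bound on $\|{\mathbf w}^\varepsilon\|_{2,\Omega}$, and Gronwall's inequality let us absorb the right-hand side into the left-hand side of the basic estimate.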
The rest of the proof is the same as for the case $\tau_0>0$, if we use the following consequence of \eqref{3.1}: $$\max\limits_{0<t<T}\alpha_\tau \| \frac{\partial ^2 {\mathbf w}^{\varepsilon}}{\partial t^2}(t)\|_{2,\Omega}\leq C_{0}.$$ \addtocounter{section}{1} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{proposition}{0} \setcounter{corollary}{0} \setcounter{definition}{0} \setcounter{assumption}{0} \begin{center} \textbf{\S4. Proof of Theorem \ref{theorem2}} \end{center} \textbf{4.1. Weak and two-scale limits of sequences of displacement, temperatures and pressures.} On the strength of Theorem \ref{theorem1}, the sequences $\{\theta^\varepsilon\}$, $\{p^\varepsilon\}$, $\{q^\varepsilon\}$, $\{\pi^\varepsilon\}$ and $\{{\mathbf w}^\varepsilon \}$ are bounded in $L^2(\Omega_{T})$ uniformly in $\varepsilon$. Hence there exist a subsequence of small parameters $\{\varepsilon>0\}$ and functions $\theta $, $p$, $q$, $\pi$ and ${\mathbf w}$ such that \begin{equation*} \theta^\varepsilon \rightarrow \theta,\quad p^\varepsilon \rightarrow p,\quad q^\varepsilon \rightarrow q, \quad \pi^\varepsilon \rightarrow \pi, \quad {\mathbf w}^\varepsilon \rightarrow {\mathbf w} \end{equation*} weakly in $L^2(\Omega_T)$ as $\varepsilon\searrow 0$. Due to Lemma \ref{Lemma2.1} there is a function ${\mathbf u}^\varepsilon \in L^\infty ((0,T);W^1_2(\Omega))$ such that ${\mathbf u}^\varepsilon ={\mathbf w}^\varepsilon $ in $\Omega_{s}^{\varepsilon}\times (0,T)$, and the family $\{{\mathbf u}^\varepsilon \}$ is bounded in $L^\infty ((0,T);W^1_2(\Omega))$ uniformly in $\varepsilon$. Therefore it is possible to extract a subsequence of $\{\varepsilon>0\}$ such that \begin{equation*} {\mathbf u}^\varepsilon \rightarrow {\mathbf u} \mbox{ weakly in } L^2 ((0,T);W^1_2(\Omega)) \end{equation*} as $\varepsilon \searrow 0$.
Applying Lemma \ref{Lemma2.1} again, we conclude that there is a function $\vartheta^\varepsilon \in L^{2}((0,T);W^1_2(\Omega))$ such that $\vartheta^\varepsilon =\theta^\varepsilon $ in $\Omega_{s}^{\varepsilon}\times (0,T)$, and the family $\{\vartheta^\varepsilon \}$ is bounded in $L^{2}((0,T);W^1_2(\Omega))$ uniformly in $\varepsilon$. Therefore it is possible to extract a subsequence of $\{\varepsilon>0\}$ such that \begin{equation*} \vartheta^\varepsilon \rightarrow \vartheta \mbox{ weakly in } L^2 ((0,T);W^1_2(\Omega)) \end{equation*} as $\varepsilon \searrow 0$. Moreover, \begin{equation} \label{4.1} \chi^\varepsilon \alpha_\mu \mathbb D({\mathbf x},{\mathbf w}^\varepsilon) \rightarrow 0, \quad \chi^\varepsilon \alpha _{\varkappa f} \nabla \theta^\varepsilon \rightarrow 0 \end{equation} as $\varepsilon \searrow 0$. Relabelling if necessary, we assume that the sequences themselves converge. On the strength of Nguetseng's theorem, there exist 1-periodic in ${\mathbf y}$ functions $\Theta ({\mathbf x},t,{\mathbf y})$, $P({\mathbf x},t,{\mathbf y})$, $\Pi({\mathbf x},t,{\mathbf y})$, $Q({\mathbf x},t,{\mathbf y})$, $\mathbf W({\mathbf x},t,{\mathbf y})$, $\Theta ^{s} ({\mathbf x},t,{\mathbf y})$ and $\mathbf U({\mathbf x},t,{\mathbf y})$ such that the sequences $\{\theta^\varepsilon\}$, $\{p^\varepsilon\}$, $\{\pi^\varepsilon\}$, $\{q^\varepsilon\}$, $\{{\mathbf w}^\varepsilon \}$, $\{\nabla_x \vartheta^\varepsilon \}$ and $\{\nabla_x {\mathbf u}^\varepsilon \}$ two-scale converge to $\Theta ({\mathbf x},t,{\mathbf y})$, $P({\mathbf x},t,{\mathbf y})$, $\Pi({\mathbf x},t,{\mathbf y})$, $Q({\mathbf x},t,{\mathbf y})$, $\mathbf W({\mathbf x},t,{\mathbf y})$, $\nabla _{x}\vartheta +\nabla_{y}\Theta ^{s}({\mathbf x},t,{\mathbf y})$ and $\nabla _{x}{\mathbf u} +\nabla_{y}\mathbf U({\mathbf x},t,{\mathbf y})$, respectively.
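Recall that, by definition, the two-scale convergence of, say, $\{\theta^{\varepsilon}\}$ to $\Theta ({\mathbf x},t,{\mathbf y})$ means that $$\lim\limits _{\varepsilon \searrow 0}\int _{\Omega_{T}}\theta^{\varepsilon}({\mathbf x},t) \psi ({\mathbf x},t,\frac{{\mathbf x}}{\varepsilon}) d{\mathbf x} dt= \int _{\Omega_{T}}\int _{Y}\Theta ({\mathbf x},t,{\mathbf y}) \psi ({\mathbf x},t,{\mathbf y}) d{\mathbf y} d{\mathbf x} dt$$ for every smooth function $\psi ({\mathbf x},t,{\mathbf y})$, 1-periodic in ${\mathbf y}$; in particular, the weak limit is recovered by averaging over the cell: $\theta =\langle \Theta \rangle _{Y}$.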
Note that the sequence $\{\mbox{div}_x {\mathbf w}^\varepsilon \}$ converges weakly to $\mbox{div}_x {\mathbf w}$ and $ \vartheta ,|{\mathbf u}| \in L^2 ((0,T);\stackrel{\!\!\circ}{W^1_2}(\Omega)).$ The last assertion for a disconnected porous space follows from the inclusion $\vartheta ^\varepsilon ,|{\mathbf u} ^\varepsilon |\in L^2 ((0,T);\stackrel{\!\!\circ}{W^1_2}(\Omega))$, and for a connected porous space it follows from Friedrichs--Poincar\'{e}'s inequality for ${\mathbf u}^\varepsilon$ and $ \vartheta ^\varepsilon$ in the $\varepsilon$-layer of the boundary $S$ and from the convergence of the sequences $\{{\mathbf u}^\varepsilon \}$ and $\{\vartheta^\varepsilon \}$ to ${\mathbf u}$ and $ \vartheta $, respectively, strongly in $L^2(\Omega_{T})$ and weakly in $L^2 ((0,T);W^1_2(\Omega))$.\\ \textbf{4.2. Micro- and macroscopic equations I.} \begin{lemma} \label{lemma4.1} For all $ {\mathbf x} \in \Omega$ and ${\mathbf y}\in Y$ the weak and two-scale limits of the sequences $\{\theta^\varepsilon\}$, $\{p^\varepsilon\}$, $\{\pi^\varepsilon\}$, $\{q^\varepsilon\}$, $\{{\mathbf w}^\varepsilon\}$, $\{\nabla_x \vartheta^\varepsilon \}$ and $\{\nabla_x {\mathbf u}^\varepsilon \}$ satisfy the relations \begin{eqnarray} \label{4.2} &Q=\frac{1}{m}\chi q, \quad Q=P+\chi (\beta_{0f} \Theta+\gamma_{f});\\ \label{4.3} & \frac{1}{\eta_{0}}\Pi+(1-\chi ) (\mbox{div}_x{\mathbf u} + \mbox{div}_y \mathbf U-\frac{\beta_{0s}}{\eta_{0}}(\vartheta-\langle \vartheta \rangle_{\Omega}) +\gamma_{s})=0;\\ \label{4.4} & \mbox{div}_y \mathbf W=0;\\ \label{4.5} &\mathbf W=\chi \mathbf W + (1-\chi){\mathbf u} ;\\ \label{4.6} &\Theta=\chi \Theta + (1-\chi)\vartheta ;\\ \label{4.7} & q=p +\beta_{0f}\theta ^{f}+m\gamma_{f};\\ \label{4.8} & \frac{1}{\eta_{0}}\pi+ (1-m)(\mbox{div}_x {\mathbf u} -\frac{\beta_{0s}}{\eta_{0}}(\vartheta-\langle \vartheta \rangle_{\Omega}) +\gamma_{s})+ \langle \mbox{div}_y\mathbf U\rangle_{Y_{s}}=0;\\ \label{4.9} & \frac{1}{\eta_{0}}\pi+\mbox{div}_x {\mathbf w}
-(1-m)(\frac{\beta_{0s}}{\eta_{0}}(\vartheta-\langle \vartheta \rangle_{\Omega})-\gamma_{s})+\beta =0, \end{eqnarray} where $$\beta =\langle \langle \mbox{div}_y\mathbf U\rangle_{Y_{s}}\rangle_{\Omega}, \quad \theta ^{f}=\langle \Theta\rangle_{Y_{f}},$$ $$m\gamma_{f}=\langle q\rangle_{\Omega}-\beta_{0f}\langle \theta ^{f}\rangle_{\Omega}, \quad (1-m)\gamma_{s}=\frac{1}{\eta_{0}}\langle q\rangle_{\Omega}-\beta .$$ \end{lemma} \begin{proof} In order to prove the first equation in \eqref{4.2}, insert into Eq.\eqref{1.7} a test function ${\mathbf \psi}^\varepsilon =\varepsilon {\mathbf \psi}\left({\mathbf x},t,{\mathbf x} / \varepsilon\right)$, where ${\mathbf \psi}({\mathbf x},t,{\mathbf y})$ is an arbitrary function, 1-periodic in ${\mathbf y}$ and finite on $Y_f$. Passing to the limit as $\varepsilon \searrow 0$, we get \begin{equation} \label{4.10} \nabla_y Q({\mathbf x},t,{\mathbf y})=0, \quad {\mathbf y}\in Y_{f}. \end{equation} The weak and two-scale limiting passages in Eq.\eqref{3.2} yield Eq.\eqref{4.7} and the second equation in \eqref{4.2}. Next, fulfilling the two-scale limiting passage in the equalities $$(1-\chi^{\varepsilon})p^{\varepsilon} =0,\quad (1-\chi^{\varepsilon})q^{\varepsilon} =0$$ we get $$(1-\chi )P=0,\quad (1-\chi )Q=0,$$ which, together with \eqref{4.10}, justify the first equation in \eqref{4.2}. Eqs.\eqref{4.3}, \eqref{4.4}, \eqref{4.8}, and \eqref{4.9} appear as the results of two-scale limiting passages in Eqs.\eqref{3.2}--\eqref{3.4} with the proper test functions involved. Thus, for example, Eq.\eqref{4.8} is just a consequence of Eq.\eqref{4.3}, and Eq.\eqref{4.9} is a result of two-scale convergence in the sum of Eq.\eqref{3.3} and Eq.\eqref{3.4} with the test functions independent of the ``fast'' variable ${\mathbf y}={\mathbf x} / \varepsilon$.
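The mechanism behind all these limiting passages with the scaled test function is the chain rule $$\nabla_x \left({\mathbf \psi}({\mathbf x},t,\frac{{\mathbf x}}{\varepsilon})\right)= (\nabla_x {\mathbf \psi})({\mathbf x},t,\frac{{\mathbf x}}{\varepsilon})+ \frac{1}{\varepsilon}(\nabla_y {\mathbf \psi})({\mathbf x},t,\frac{{\mathbf x}}{\varepsilon}),$$ so that for ${\mathbf \psi}^\varepsilon =\varepsilon {\mathbf \psi}$ only the ${\mathbf y}$-derivatives survive in the limit: for instance, $$\int _{\Omega}q^{\varepsilon}\mbox{div}_x {\mathbf \psi}^{\varepsilon} d{\mathbf x} \rightarrow \int _{\Omega}\int _{Y}Q({\mathbf x},t,{\mathbf y})\mbox{div}_y {\mathbf \psi}({\mathbf x},t,{\mathbf y}) d{\mathbf y} d{\mathbf x},$$ while the remaining terms in \eqref{1.7} carry a positive power of $\varepsilon$ or a vanishing coefficient and disappear, which yields \eqref{4.10}.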
Eq.\eqref{4.4} is derived quite similarly if we multiply the same sum of Eq.\eqref{3.3} and Eq.\eqref{3.4} by an arbitrary function ${\mathbf \psi}^\varepsilon =\varepsilon {\mathbf \psi}\left({\mathbf x},t,{\mathbf x} / \varepsilon\right)$ and pass to the limit as $\varepsilon\searrow 0$. In order to prove Eqs.\eqref{4.5} and \eqref{4.6} it is sufficient to consider the two-scale limiting relations in \begin{equation*} (1-\chi ^{\varepsilon})({\mathbf w}^{\varepsilon}-{\mathbf u}^{\varepsilon})=0, \quad (1-\chi ^{\varepsilon})(\theta^{\varepsilon}-\vartheta^{\varepsilon})=0. \end{equation*} \end{proof} \begin{lemma} \label{lemma4.2} For all $({\mathbf x},t) \in \Omega_{T}$ and ${\mathbf y} \in Y$ the relation \begin{equation} \label{4.11} \mbox{div}_y \{\lambda_0(1-\chi ) (\mathbb D(y,\mathbf U)+\mathbb D(x,{\mathbf u}))- (\Pi +\frac{1}{m}q \chi )\cdot \mathbb I \}=0 \end{equation} holds true. \end{lemma} \begin{proof} Substituting a test function of the form ${\mathbf \psi}^\varepsilon =\varepsilon {\mathbf \psi}\left({\mathbf x},t,{\mathbf x} / \varepsilon \right)$, where ${\mathbf \psi}({\mathbf x},t,{\mathbf y})$ is an arbitrary 1-periodic in ${\mathbf y}$ function vanishing on the boundary $\partial \Omega$, into Eq.\eqref{1.7} and passing to the limit as $\varepsilon \searrow 0$, we arrive at the desired microscopic relation on the cell $Y$. \end{proof} In the same way, additionally using continuity equations \eqref{3.3} and \eqref{3.4}, one derives from Eq.\eqref{1.8} the following lemma. \begin{lemma} \label{lemma4.3} For all $({\mathbf x},t) \in \Omega_{T}$ the relations \begin{equation} \label{4.12} \left. \begin{array}{lll} \displaystyle \triangle _{y}\Theta ^{s} = 0, \quad {\mathbf y}\in Y_s,\\[1ex] \frac{\partial\Theta ^{s}}{\partial n}=-\nabla_{x} \vartheta \cdot \mathbf{n}, \quad {\mathbf y}\in \gamma \end{array} \right\} \end{equation} hold true. \end{lemma} Now we pass to the macroscopic equations for the solid displacements.
\begin{lemma} \label{lemma4.4} Let $\hat{\rho}=m \rho_{f} + (1-m)\rho_{s}, \quad {\mathbf w}^{f}=\langle \mathbf W\rangle_{Y_{f}}$. Then the functions ${\mathbf u} , {\mathbf w}^{f}, q, \pi , \theta^{f} , \vartheta $ satisfy in $\Omega_{T}$ the system of macroscopic equations \begin{eqnarray}\label{4.14} && \tau _{0}\rho_{f}\frac{\partial ^2{\mathbf w}^{f}}{\partial t^2}+\tau _{0}\rho_{s}(1-m)\frac{\partial ^2{\mathbf u}}{\partial t^2}-\hat{\rho}\mathbf F=\\ &&\mbox{div}_x \{\lambda _{0}((1-m)\mathbb D(x,{\mathbf u})+ \langle \mathbb D(y,\mathbf U)\rangle _{Y_{s}})-(q+\pi )\cdot \mathbb I \},\nonumber \end{eqnarray} \begin{eqnarray}\label{4.15} &&\tau_{0}c_{pf}\frac{\partial \theta^{f}}{\partial t}+(\tau _{0}c_{ps}+\frac{\beta_{0s}^{2}}{\eta_{0}})(1-m)\frac{\partial \vartheta}{\partial t} -\frac{\beta_{0s}}{\eta_{0}}\frac{\partial \pi}{\partial t} -\beta_{0f}\frac{\partial \beta}{\partial t}-\\ &&(1-m)\beta_{0s}\frac{\partial \gamma_{s}}{\partial t}= \varkappa _{0s}\mbox{div}_x \{(1-m)\nabla_{x}\vartheta + \langle \nabla _{y}\Theta^{s}\rangle _{Y_{s}}\} +\Psi.\nonumber \end{eqnarray} \end{lemma} \begin{proof} Eqs.\eqref{4.14} and \eqref{4.15} arise as the limit of Eqs.\eqref{1.7} and \eqref{1.8} with test functions finite in $\Omega_T$ and independent of $\varepsilon$. In Eq.\eqref{1.8} we have used continuity equations \eqref{3.3} and \eqref{3.4}. \end{proof} \textbf{4.3. Micro- and macroscopic equations II.} \begin{lemma} \label{lemma4.5} If $\mu_{1}=\infty$, then ${\mathbf u}={\mathbf w}$ and $\theta =\vartheta $. \end{lemma} \begin{proof} In order to verify this, it is sufficient to consider the differences $({\mathbf u}^\varepsilon -{\mathbf w}^\varepsilon)$ and $(\theta^\varepsilon -\vartheta^\varepsilon)$ and apply Friedrichs--Poincar\'{e}'s inequality, just like in the proof of Theorem \ref{theorem1}. \end{proof} \begin{lemma} \label{lemma4.6} Let $\mu_1 <\infty$ and $\mathbf V=\chi\partial \mathbf W / \partial t$.
Then \begin{equation}\label{4.16} \tau_{0}\rho_{f}\frac{\partial \mathbf V}{\partial t}-\rho_{f}\mathbf F= \mu_{1}\triangle_y \mathbf V -\nabla_y R -\nabla_x q, \quad {\mathbf y} \in Y_{f}, \end{equation} \begin{equation}\label{4.17} \tau_{0}c_{pf}\frac{\partial \Theta}{\partial t}= \varkappa _{1} \mu_{1}\triangle_y \Theta +\frac{\beta_{0f}}{m}\frac{\partial \beta}{\partial t} + \Psi, \quad {\mathbf y} \in Y_{f}, \end{equation} \begin{equation}\label{4.18} \mathbf V=\frac{\partial {\mathbf u}}{\partial t}, \quad \Theta =\vartheta, \quad {\mathbf y} \in \gamma \end{equation} for $\mu_{1}>0$, and \begin{equation}\label{4.19} \tau_{0}\rho_{f}\frac{\partial \mathbf V}{\partial t}= -\nabla_y R -\nabla _{x} q +\rho_{f}\mathbf F, \quad {\mathbf y} \in Y_{f}, \end{equation} \begin{equation}\label{4.20} \tau_{0}c_{pf}\frac{\partial \Theta}{\partial t}= \frac{\beta_{0f}}{m}\frac{\partial \beta}{\partial t} +\Psi, \quad {\mathbf y} \in Y_{f}, \end{equation} \begin{equation}\label{4.21} (\chi \mathbf W - {\mathbf u})\cdot{\mathbf n}=0, \quad {\mathbf y} \in \gamma \end{equation} for $\mu_{1}=0$. In Eq.\eqref{4.21} ${\mathbf n}$ is the unit normal to $\gamma$. \end{lemma} \begin{proof} Differential equations \eqref{4.16} and \eqref{4.19} follow as $\varepsilon\searrow 0$ from integral equality \eqref{1.7} with the test function ${\mathbf \psi}={\mathbf \varphi}({\mathbf x}\varepsilon^{-1})\cdot h({\mathbf x},t)$, where ${\mathbf \varphi}$ is a solenoidal vector-function, finite in $Y_{f}$. The same arguments apply to Eqs.\eqref{4.17} and \eqref{4.20}. The only difference here is that we use the continuity equation \eqref{3.3} to exclude the term $\chi ^{\varepsilon}\mbox{div}_x (\partial {\mathbf w}^\varepsilon / \partial t)$. The first boundary condition in \eqref{4.18} is a consequence of the two-scale convergence of $\{\alpha_{\mu}^{\frac{1}{2}}\nabla_x {\mathbf w}^{\varepsilon}\}$ to the function $\mu_{1}^{\frac{1}{2}}\nabla_y\mathbf W({\mathbf x},t,{\mathbf y})$.
On the strength of this convergence, the function $\nabla_y \mathbf W({\mathbf x},t,{\mathbf y})$ is $L^2$-integrable in $Y$. As above, we apply the same argument to the second boundary condition in \eqref{4.18}. The boundary conditions \eqref{4.21} follow from Eqs.\eqref{4.4} and \eqref{4.5}. \end{proof} \begin{lemma} \label{lemma4.7} If the porous space is disconnected, which is the case of isolated pores, then ${\mathbf u}={\mathbf w}$. \end{lemma} \begin{proof} Indeed, in the case $0\leq \mu_{1}<\infty$ the systems of equations \eqref{4.4}, \eqref{4.16} and \eqref{4.18}, or \eqref{4.4}, \eqref{4.19} and \eqref{4.21}, have the unique solution $\mathbf V=\partial {\mathbf u} / \partial t$. \end{proof} \textbf{4.4. Homogenized equations I.} \begin{lemma} \label{lemma4.8} If $\mu_1 =\infty$, then ${\mathbf w}={\mathbf u}$, $\theta =\vartheta $ and the weak limits ${\mathbf u}$, $\vartheta $, $p$, $q$, and $\pi$ satisfy in $\Omega_{T}$ the initial-boundary value problem \begin{equation}\label{4.22} \left.
\begin{array}{lll} \displaystyle \tau _{0}\hat{\rho}\frac{\partial ^2{\mathbf u}}{\partial t^2} +\nabla (q+\pi )-\hat{\rho}\mathbf F=\\[1ex] \mbox{div}_x \{\lambda _{0}\mathbb A^{s}_{0}:\mathbb D(x,{\mathbf u}) + B^{s}_{0}(\mbox{div}_x {\mathbf u}-\frac{\beta_{0s}}{\eta_{0}}\vartheta )+B^{s}_{1}q \}, \end{array} \right\} \end{equation} \begin{equation}\label{4.23} (\tau_{0}\hat{c_{p}}+\frac{\beta_{0s}^{2}}{\eta_{0}}(1-m))\frac{\partial \vartheta}{\partial t} -\frac{\beta_{0s}}{\eta_{0}}\frac{\partial \pi}{\partial t}+(a^{s}_{1}-\frac{1}{\eta_{0}})\langle \frac{\partial q}{\partial t}\rangle_{\Omega} = \mbox{div}_x ( B^{\theta}\cdot \nabla \vartheta )+\Psi, \end{equation} \begin{equation}\label{4.24} \frac{1}{\eta_{0}}\pi+C^{s}_{0}:\mathbb D(x,{\mathbf u})+ a^{s}_{0}(\mbox{div}_x {\mathbf u} - \frac{\beta_{0s}}{\eta_{0}}\vartheta) +a^{s}_{1}q=\tilde{\gamma}, \end{equation} \begin{equation}\label{4.25} \frac{1}{\eta_{0}}\pi + \mbox{div}_x {\mathbf u}+ \frac{(1-m)\beta_{0s}}{\eta_{0}} \vartheta=\tilde{\beta}, \end{equation} \begin{equation}\label{4.26} q=p +\beta_{0f}m \vartheta +m\gamma_{f}, \end{equation} where the symmetric, strictly positive definite constant fourth-rank tensor $\mathbb A^{s}_{0}$, the constant matrices $C^{s}_{0}$, $B^{s}_{0}$, $B^{s}_{1}$, the strictly positive definite constant matrix $B^{\theta}$ and the constants $a^{s}_{0}$, $a^{s}_{1}$ and $a^{s}_{2}$ are defined below by formulas \eqref{4.33}--\eqref{4.35} and \eqref{4.38}, and $$\tilde{\gamma}=(a^{s}_{1}-\frac{1}{\eta_{0}})\langle q\rangle_{\Omega}-a^{s}_{0}\frac{\beta_{0s}}{\eta_{0}}\langle \vartheta \rangle_{\Omega}, \quad -\tilde{\beta}=(1-m)\frac{\beta_{0s}}{\eta_{0}}\langle \vartheta \rangle_{\Omega}+\frac{1}{\eta_{0}}\langle q\rangle_{\Omega}.$$ Differential equations \eqref{4.22} and \eqref{4.23} are endowed with initial conditions at $t=0$ and ${\mathbf x}\in \Omega$ \begin{equation}\label{4.27} (\tau _{0}+\beta_{0s})(\vartheta-\vartheta_{0})=0,\quad\tau _{0}({\mathbf u}-{\mathbf u}_{0})=
\tau _{0}(\frac{\partial {\mathbf u}}{\partial t}-{\mathbf v}_{0})=0; \end{equation} and boundary conditions \begin{equation}\label{4.28} \vartheta ({\mathbf x},t)=0, \quad {\mathbf u}({\mathbf x},t)=0, \quad {\mathbf x}\in S, \quad t>0. \end{equation} \end{lemma} \begin{proof} In the first place let us notice that ${\mathbf u} ={\mathbf w}$ and $\theta =\vartheta $ due to Lemma \ref{lemma4.5}. The differential equations \eqref{4.22} follow from the macroscopic equations \eqref{4.14}, after we insert in them the expression $$\langle \mathbb D(y,\mathbf U)\rangle _{Y_{s}}=\mathbb A^{s}_{1}:\mathbb D(x,{\mathbf u}) + B^{s}_{0}(\mbox{div}_x {\mathbf u}-\frac{\beta_{0s}}{\eta_{0}}(\vartheta -\langle \vartheta \rangle_{\Omega})) +B^{s}_{1}(q-\langle q\rangle_{\Omega}).$$ In turn, this expression follows from the solutions of Eqs.\eqref{4.3} and \eqref{4.11} on the pattern cell $Y_{s}$. Indeed, setting \begin{eqnarray}\nonumber \mathbf U=&&\sum_{i,j=1}^{3}\mathbf U^{ij}({\mathbf y})D_{ij}+ \mathbf U_{0}({\mathbf y})(\mbox{div}_x {\mathbf u}-\frac{\beta_{0s}}{\eta_{0}}(\vartheta -\langle \vartheta \rangle_{\Omega}))\\ &&+\mathbf U_{1}({\mathbf y})(q-\langle q\rangle_{\Omega})+\mathbf U_{2}({\mathbf y})\langle q\rangle_{\Omega} \nonumber \end{eqnarray} \begin{eqnarray}\nonumber \Pi=&&\lambda _{0}\sum_{i,j=1}^{3}\Pi^{ij}({\mathbf y})D_{ij} +\Pi_{0}({\mathbf y})(\mbox{div}_x {\mathbf u}-\frac{\beta_{0s}}{\eta_{0}}(\vartheta -\langle \vartheta \rangle_{\Omega}))\\ &&+\Pi_{1}({\mathbf y})(q-\langle q\rangle_{\Omega})+\Pi_{2}({\mathbf y})\langle q\rangle_{\Omega},\nonumber \end{eqnarray} where $$D_{ij}=\frac{1}{2}(\frac{\partial u_{i}}{\partial x_{j}}+ \frac{\partial u_{j}}{\partial x_{i}}),$$ we arrive at the following periodic-boundary value problems in $Y$: \begin{equation}\label{4.29} \left.
\begin{array}{lll} \displaystyle \mbox{div}_y \{(1-\chi ) (\mathbb D(y,\mathbf U^{ij})+J^{ij}) - \Pi ^{ij}\cdot \mathbb I \}=0,\\[1ex] \frac{\lambda _{0}}{\eta_{0}}\Pi ^{ij} +(1-\chi ) \mbox{div}_y \mathbf U^{ij} =0; \end{array} \right\} \end{equation} \begin{equation}\label{4.30} \left. \begin{array}{lll} \displaystyle \mbox{div}_y \{\lambda_{0}(1-\chi ) \mathbb D(y,\mathbf U_{0}) - \Pi_{0}\cdot \mathbb I \}=0,\\[1ex] \frac{1}{\eta_{0}}\Pi _{0} + (1-\chi )(\mbox{div}_y \mathbf U_{0}+1) =0; \end{array} \right\} \end{equation} \begin{equation}\label{4.31} \left. \begin{array}{lll} \displaystyle \mbox{div}_y \{\lambda_{0}(1-\chi ) \mathbb D(y,\mathbf U_{1}) - (\Pi_{1}+\frac{1}{m}\chi )\cdot \mathbb I \}=0,\\[1ex] \frac{1}{\eta_{0}}\Pi _{1} +(1-\chi )\mbox{div}_y \mathbf U_{1} =0. \end{array} \right\} \end{equation} \begin{equation}\label{4.32} \left. \begin{array}{r} \displaystyle \mbox{div}_y \{\lambda_{0}(1-\chi )\mathbb D(y,\mathbf U_{2}) - (\Pi_{2}+\frac{1}{m}\chi )\cdot \mathbb I \}=0, \\[1ex] \displaystyle \frac{1}{\eta_{0}}\Pi _{2} + (1-\chi )\mbox{div}_y \mathbf U_{2} -\frac{(1-\chi )}{(1-m)}(\langle \mbox{div}_y\mathbf U_{2}\rangle_{Y_{s}}+\frac{1}{\eta_{0}})=0. \end{array} \right\} \end{equation} Note that $$\beta=\sum_{i,j=1}^{3}\langle \mbox{div}_y\mathbf U^{ij}\rangle_{Y_{s}} \langle D_{ij}\rangle_{\Omega} +\langle \mbox{div}_y\mathbf U_{0}\rangle_{Y_{s}} \langle \mbox{div}_x {\mathbf u}-\frac{\beta_{0s}}{\eta_{0}}(\vartheta -\langle \vartheta \rangle_{\Omega})\rangle_{\Omega} + $$ $$\langle \mbox{div}_y\mathbf U_{1}\rangle_{Y_{s}} \langle q-\langle q\rangle_{\Omega}\rangle_{\Omega}+ \langle \mbox{div}_y\mathbf U_{2}\rangle_{Y_{s}} \langle q\rangle_{\Omega}=\langle \mbox{div}_y\mathbf U_{2}\rangle_{Y_{s}} \langle q\rangle_{\Omega}$$ due to homogeneous boundary conditions for ${\mathbf u}({\mathbf x},t)$.
On the strength of the assumptions on the geometry of the pattern ``solid'' cell $Y_{s}$, problems \eqref{4.29}--\eqref{4.32} have a unique solution, up to an arbitrary constant vector. In order to discard the arbitrary constant vectors we demand $$\langle\mathbf U^{ij}\rangle_{Y_{s}} =\langle\mathbf U_{0}\rangle_{Y_{s}} =\langle\mathbf U_{1}\rangle_{Y_{s}} =\langle\mathbf U_{2}\rangle_{Y_{s}}=0.$$ Thus \begin{equation}\label{4.33} \mathbb A^{s}_{0}=\sum_{i,j=1}^{3}J^{ij}\otimes J^{ij} + \mathbb A^{s}_{1}, \quad \mathbb A^{s}_{1}=\sum_{i,j=1}^{3}\langle (1-\chi) \mathbb D(y,\mathbf U^{ij})\rangle _{Y}\otimes J^{ij}. \end{equation} The symmetry and strict positive definiteness of the tensor $\mathbb A^{s}_{0}$ have been proved in \cite{AM}. Finally, Eqs.\eqref{4.24}--\eqref{4.26} for the pressures follow from Eqs.\eqref{4.7}--\eqref{4.9}, after we insert in them the expression $$\langle \mbox{div}_y\mathbf U\rangle_{Y_{s}}=C^{s}_{0}:\mathbb D(x,{\mathbf u})+ \tilde{a}^{s}_{0}(\mbox{div}_x {\mathbf u} - \frac{\beta_{0s}}{\eta_{0}}(\vartheta -\langle \vartheta \rangle_{\Omega})) +a^{s}_{1}(q-\langle q\rangle_{\Omega})+a^{s}_{2}\langle q\rangle_{\Omega}$$ where \begin{equation}\label{4.34} B^{s}_{0}=\langle\mathbb D(y,\mathbf U_{0})\rangle _{Y_{s}}, \quad B^{s}_{1}=\langle\mathbb D(y,\mathbf U_{1})\rangle _{Y_{s}}, \quad C^{s}_{0}=\sum_{i,j=1}^{3}\langle\mbox{div}_y\mathbf U^{ij}\rangle _{Y_{s}}J^{ij}, \end{equation} \begin{equation}\label{4.35} \tilde{a}^{s}_{0}= \langle\mbox{div}_y\mathbf U_{0}\rangle _{Y_{s}}=a^{s}_{0}-1+m, \quad a^{s}_{1}= \langle\mbox{div}_y\mathbf U_{1}\rangle _{Y_{s}}, \quad a^{s}_{2}= \langle\mbox{div}_y\mathbf U_{2}\rangle _{Y_{s}}.
\end{equation} Now for $i=1,2,3$ we consider the model problems \begin{eqnarray} \label{4.36} && \displaystyle \triangle _{y}\Theta_{i} ^{s} = 0, \quad {\mathbf y}\in Y_s,\\ \nonumber && \displaystyle \frac{\partial\Theta_{i} ^{s}}{\partial n}=- {\mathbf e}_{i}\cdot \mathbf{n}, \quad {\mathbf y}\in \gamma \end{eqnarray} and put \begin{equation}\label{4.37} \Theta ^{s}=\sum_{i=1}^{3}(\Theta_{i} ^{s}\otimes {\mathbf e}_{i})\cdot \nabla _{x}\vartheta . \end{equation} Then $\Theta ^{s}$ solves the problem \eqref{4.12}--\eqref{4.13} and if we insert the expression $\langle \nabla _{y}\Theta^{s}\rangle _{Y_{s}}$ into \eqref{4.15}, we get \begin{equation}\label{4.38} B^{\theta}=\varkappa_{0s}((1-m)\mathbb I+\sum_{i=1}^{3}\langle\nabla_{y}\Theta_{i} ^{s}\rangle _{Y_{s}}\otimes {\mathbf e}_{i}). \end{equation} All properties of the matrix $B^{\theta}$ are well known (see \cite{S-P}, \cite{JKO}). \end{proof} \begin{lemma} \label{lemma4.9} If the porous space is disconnected, then ${\mathbf w}={\mathbf u}$ and the weak limits $\theta ^{f}$, ${\mathbf u}$, $\vartheta $, $p$, $q$, and $\pi$ satisfy in $\Omega_{T}$ equations \eqref{4.22}, \eqref{4.24}, \eqref{4.25}, \eqref{4.10}, where $\mathbb A^{s}_{0}$, $C^{s}_{0}$, $B^{s}_{0}$, $B^{s}_{1}$, $B^{\theta}$, $a^{s}_{0}$, $a^{s}_{1}$ and $a^{s}_{2}$ are the same as in Lemma \ref{lemma4.8}, the state equation \eqref{4.7}, and heat equation \begin{equation}\label{4.39} \left.
\begin{array}{lll} \displaystyle \tau_{0}c_{pf}\frac{\partial \theta^{f}}{\partial t}+(\tau_{0}c_{ps}+ \frac{\beta_{0s}^{2}}{\eta_{0}})(1-m)\frac{\partial \vartheta}{\partial t}-\frac{\beta_{0s}}{\eta_{0}}\frac{\partial \pi}{\partial t} +(a^{s}_{1}-\frac{1}{\eta_{0}})\langle \frac{\partial q}{\partial t}\rangle_{\Omega}=\\[1ex] \mbox{div}_x ( B^{\theta}\cdot \nabla \vartheta ) +\Psi, \end{array} \right\} \end{equation} where for $\mu_{1}>0$ and $\tau_{0} >0$ \begin{equation}\label{4.40} \theta^{f}({\mathbf x},t)=m\vartheta ({\mathbf x},t) +\int _{0}^{t}b^{\theta}_{f}(t-\tau )(\frac{1}{\tau_{0}c_{pf}}(\frac{\beta_{0f}}{m}\frac{\partial \beta}{\partial t} + \Psi )-\frac{\partial \vartheta }{\partial t})({\mathbf x},\tau )d\tau . \end{equation} If $\mu_{1}>0$ and $\tau_{0} =0$, then \begin{equation}\label{4.41} \theta^{f}({\mathbf x},t)= m\vartheta ({\mathbf x},t) - c^{\theta}_{f}(\frac{\beta_{0f}}{m}\frac{\partial \beta}{\partial t}(t) + \Psi ({\mathbf x},t)). \end{equation} Finally, if $\mu_{1}=0$, then \begin{equation}\label{4.42} \theta^{f}({\mathbf x},t)=m\vartheta_{0}({\mathbf x})+\frac{m}{\tau _{0}c_{pf}} \int _{0}^{t}(\frac{\beta_{0f}}{m}\frac{\partial \beta}{\partial t}(\tau) + \Psi ({\mathbf x},\tau))d\tau . \end{equation} Here $ b^{\theta}_{f}(t)$ and $c^{\theta}_{f}$ are defined below by formulas \eqref{4.43}--\eqref{4.45}. The problem is endowed with initial and boundary conditions \eqref{4.27} and \eqref{4.28}. \end{lemma} \begin{proof} The only difference here from the previous lemma is in the heat equation for $\vartheta$ and the state equation for the pressures, because $\theta\neq \vartheta$. The function $\theta^{f}=\langle \Theta \rangle _{Y_{f}}$ is now defined by the microscopic equations \eqref{4.17} and \eqref{4.18} if $\mu_{1}>0$, and by the microscopic equation \eqref{4.20} if $\mu_{1}=0$.
Indeed, the solutions of the above-mentioned problems are given by the formulas $$\Theta =\vartheta ({\mathbf x},t) +\int _{0}^{t}\Theta _{1}^{f}({\mathbf y},t-\tau )h({\mathbf x},\tau )d\tau ,$$ if $\mu_{1}>0$ and $\tau_{0} >0$, and $$\Theta =\vartheta ({\mathbf x},t) -\Theta _{0}^{f}({\mathbf y})(\frac{\beta_{0f}}{m}\frac{\partial \beta}{\partial t}(t) + \Psi ({\mathbf x},t)),$$ if $\mu_{1}>0$ and $\tau_{0} =0$, where $$h=\frac{1}{\tau_{0}c_{pf}}(\frac{\beta_{0f}}{m}\frac{\partial \beta}{\partial t} + \Psi )-\frac{\partial \vartheta }{\partial t}$$ and the functions $\Theta _{1}^{f}$ and $\Theta ^{f}_{0}$ are 1-periodic in ${\mathbf y}$ solutions of the problems \begin{equation}\label{4.43} \left. \begin{array}{lll} \displaystyle \tau_{0}c_{pf}\frac{\partial\Theta _{1}^{f}}{\partial t}= \varkappa _{1} \mu_{1}\triangle_y \Theta _{1}^{f}, \quad {\mathbf y} \in Y_{f},\\[1ex] \Theta _{1}^{f}({\mathbf y},0)=1, \quad {\mathbf y} \in Y_{f}; \quad \Theta _{1}^{f} =0, \quad {\mathbf y} \in \gamma , \end{array} \right\} \end{equation} and \begin{equation}\label{4.44} \varkappa _{1} \mu_{1}\triangle_y \Theta _{0} ^{f}=1, \quad {\mathbf y} \in Y_{f}; \quad \Theta _{0}^{f} =0, \quad {\mathbf y} \in \gamma . \end{equation} Then, in accordance with the definition, the function $\theta^{f}$ is given by \eqref{4.40} or \eqref{4.41}, where \begin{equation}\label{4.45} b^{\theta}_{f}(t)=\langle \Theta _{1}^{f}\rangle _{Y_{f}}, \quad c^{\theta}_{f}=\langle \Theta _{0}^{f}\rangle _{Y_{f}}. \end{equation} If $\mu_{1}=0$, then $\Theta $ is found by a simple integration in time. \end{proof} \textbf{4.5. Homogenized equations II.} Let $\mu_{1}<\infty$. In the same manner as above, we verify that the weak limit ${\mathbf u}$ of the sequence $\{{\mathbf u}^\varepsilon\}$ satisfies an initial-boundary value problem similar to problem \eqref{4.22}--\eqref{4.28}, because, in general, the weak limit ${\mathbf w}$ of the sequence $\{{\mathbf w}^\varepsilon\}$ differs from ${\mathbf u}$.
More precisely, the following statement is true. \begin{lemma} \label{lemma4.10} If $\mu_{1}<\infty$, then the weak limits ${\mathbf u}$, ${\mathbf w}^{f}$, $\theta ^{f}$, $\vartheta $, $p$, $q$, and $\pi$ of the sequences $\{{\mathbf u}^\varepsilon\}$, $\{\chi^{\varepsilon}{\mathbf w}^\varepsilon\}$, $\{\chi^{\varepsilon}\theta ^\varepsilon\}$, $\{\vartheta ^\varepsilon\}$, $\{p^\varepsilon\}$, $\{q^\varepsilon\}$, and $\{\pi^\varepsilon\}$ satisfy the initial-boundary value problem in $\Omega_T$ consisting of the balance of momentum equation \begin{equation}\label{4.46} \left. \begin{array}{lll} \displaystyle\tau _{0}(\rho_{f}\frac{\partial ^2{\mathbf w}^{f}}{\partial t^2}+\rho_{s}(1-m)\frac{\partial ^2{\mathbf u}}{\partial t^2}) +\nabla (q+\pi )-\hat{\rho}\mathbf F= \\[1ex] \mbox{div}_x \{\lambda _{0}A^{s}_{0}:\mathbb D(x,{\mathbf u}) + B^{s}_{0}\mbox{div}_x {\mathbf u} +B^{s}_{1}q \}, \end{array} \right\} \end{equation} where $A^{s}_{0}$, $B^{s}_{0}$, and $B^{s}_{1}$ are the same as in \eqref{4.22}, the continuity equation \eqref{4.24}, the continuity equation \begin{equation} \label{4.47} \frac{1}{\eta_{0}}(\pi +\langle q\rangle_{\Omega})+\mbox{div}_x {\mathbf w}^{f} + \frac{(1-m)\beta_{0s}}{\eta_{0}}(\vartheta -\langle \vartheta \rangle_{\Omega})= (m-1)\mbox{div}_x {\mathbf u} , \end{equation} the state equation \eqref{4.7}, the heat equation \eqref{4.39}, and Darcy's law in the form \begin{equation}\label{4.48} \frac{\partial {\mathbf w}^{f}}{\partial t}=\frac{\partial {\mathbf u}}{\partial t}+\int_{0}^{t} B_{1}(\mu_1,t-\tau)\cdot (-\nabla_x q+\rho_{f}\mathbf F-\tau_{0}\rho_{f}\frac{\partial ^2 {\mathbf u}}{\partial \tau ^2})({\mathbf x},\tau )d\tau \end{equation} if $\tau_{0}>0$ and $\mu_{1}>0$, Darcy's law in the form \begin{equation}\label{4.49} \frac{\partial {\mathbf w}^{f}}{\partial t}=\frac{\partial {\mathbf u}}{\partial t}+B_{2}(\mu_1)\cdot(-\nabla_x q+\rho_{f}\mathbf F) \end{equation} if $\tau_{0}=0$ and $\mu_{1}>0$, and, finally, Darcy's law in the form
\begin{equation}\label{4.50} \frac{\partial {\mathbf w}^{f}}{\partial t}=B_{3}\cdot \frac{\partial {\mathbf u}}{\partial t}+\frac{1}{\tau _{0}\rho_{f}}(m\mathbb I-B_{3})\cdot\int_{0}^{t}(-\nabla_x q+\rho_{f}\mathbf F)({\mathbf x},\tau )d\tau \end{equation} if $\mu_{1}=0$. The problem is supplemented by the boundary and initial conditions \eqref{4.27}--\eqref{4.28} for the displacement ${\mathbf u}$ and temperature $\vartheta$ of the rigid component and by the boundary condition \begin{equation}\label{4.51} {\mathbf w}^{f}({\mathbf x},t)\cdot {\mathbf n}({\mathbf x})=0, \quad ({\mathbf x},t) \in S=\partial \Omega , \quad t>0, \end{equation} for the displacement $ {\mathbf w}^{f}$ of the liquid component. In Eq.~\eqref{4.51}, ${\mathbf n}({\mathbf x})$ is the unit normal vector to $S$ at a point ${\mathbf x} \in S$, and the matrices $B_{1}(\mu_1,t)$, $B_{2}(\mu_1)$, and $B_{3}$ are defined below by formulas \eqref{4.52}--\eqref{4.57}. \end{lemma} \begin{proof} Eqs.~\eqref{4.46} and \eqref{4.47} are derived in the usual way, like Eqs.~\eqref{4.22} and \eqref{4.25}. For example, to get Eq.~\eqref{4.47} we simply express $\mbox{div}_x {\mathbf w}$ in Eq.~\eqref{4.9} using the homogenization of Eq.~\eqref{4.5}: ${\mathbf w}={\mathbf w}^{f}+(1-m){\mathbf u}$. Therefore we omit the corresponding proofs and focus only on the derivation of the homogenized equations for the velocity ${\mathbf v}$ in the form of Darcy's laws. The derivation of Eq.~\eqref{4.51} is standard \cite{S-P}.
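The relaxation kernels $b^{\theta}_{f}(t)$ and $B_{1}(\mu_{1},t)$ appearing in \eqref{4.40} and \eqref{4.48} are mean values of solutions of heat-type cell problems with unit initial data and homogeneous Dirichlet conditions on $\gamma$, and therefore decay exponentially at the rate of the first Dirichlet eigenvalue. The following one-dimensional caricature is our own illustrative sketch (it replaces the periodic cell $Y_{f}$ by the interval $(0,1)$, absorbs the coefficients into a single diffusivity $\kappa$, and drops the incompressibility constraint): for $\partial_t\Theta = \kappa\,\partial_{yy}\Theta$ with $\Theta(y,0)=1$ and $\Theta=0$ at the endpoints, the mean value is $b(t)=\sum_{k\,\mathrm{odd}} 8(k\pi)^{-2} e^{-\kappa k^{2}\pi^{2}t}$, so $b(0)=1$ and $0<b(t)\le e^{-\kappa\pi^{2}t}$.

```python
import math

def kernel_mean(t, kappa=1.0, terms=100000):
    # Mean value b(t) of the solution of d_t Theta = kappa * d_yy Theta on (0,1)
    # with Theta(y,0) = 1 and Theta = 0 at both endpoints (Fourier series);
    # this mimics the definition b_f^theta(t) = <Theta_1^f>_{Y_f} in (4.45),
    # but only as a 1D toy model, not the actual periodic cell problem.
    return sum(8.0 / (k * math.pi) ** 2 * math.exp(-kappa * (k * math.pi) ** 2 * t)
               for k in range(1, 2 * terms, 2))

b0, b1, b2 = kernel_mean(0.0), kernel_mean(0.1), kernel_mean(0.2)
```

The partial sum at $t=0$ recovers $b(0)=1$ up to the series truncation error, and the computed values decay below the first-eigenvalue bound $e^{-\kappa\pi^{2}t}$, which is the qualitative behaviour the lemma attributes to $B_{1}(\mu_{1},t)$.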
a) If $\mu_{1}>0$ and $\tau_{0}>0$, then the solution of the microscopic equations \eqref{4.4}, \eqref{4.16} and \eqref{4.18} is given by the formula \begin{equation*} \mathbf V=\frac{\partial {\mathbf u}}{\partial t}+\int_{0}^{t} \textbf{B}^{f}_{1}({\mathbf y},t-\tau)\cdot (-\nabla_x q+\rho_{f}\mathbf F-\tau_{0}\rho_{f}\frac{\partial ^2 {\mathbf u}}{\partial \tau ^2})({\mathbf x},\tau )d\tau , \end{equation*} where \begin{equation*} \textbf{B}^{f}_{1}({\mathbf y},t)= \sum_{i=1}^{3}\mathbf V^{i}({\mathbf y},t)\otimes {\mathbf e}_{i}, \end{equation*} and the functions $\mathbf V^{i}({\mathbf y},t)$ are defined by the periodic initial-boundary value problem \begin{equation}\label{4.52} \left. \begin{array}{lll} \displaystyle \tau _{0}\rho_{f}\frac{\partial \mathbf V^{i}}{\partial t}-\mu_{1}\triangle \mathbf V^{i} +\nabla Q^{i} =0, \quad \mbox{div}_y \mathbf V^{i} =0, \quad {\mathbf y} \in Y_{f}, t>0,\\[1ex] \mathbf V^{i}=0, \quad {\mathbf y} \in \gamma , t>0;\quad \tau _{0}\rho_{f}\mathbf V^{i}({\mathbf y},0)={\mathbf e}_{i}, \quad {\mathbf y} \in Y_{f}. \end{array} \right\} \end{equation} In \eqref{4.52}, ${\mathbf e}_{i}$ is the standard Cartesian basis vector along the coordinate axis $x_{i}$. Therefore \begin{equation}\label{4.53} B_{1}(\mu_{1},t)= \langle \textbf{B}^{f}_{1}({\mathbf y},t)\rangle _{Y_{f}}. \end{equation} b) If $\tau_{0}=0$ and $\mu_{1}>0$, then the solution of the stationary microscopic equations \eqref{4.4}, \eqref{4.16} and \eqref{4.18} is given by the formula \begin{equation*} \mathbf V=\frac{\partial {\mathbf u}}{\partial t}+\textbf{B}^{f}_{2}({\mathbf y})\cdot(-\nabla q+\rho_{f}\mathbf F), \end{equation*} where \begin{equation*} \textbf{B}^{f}_{2}({\mathbf y})= \sum_{i=1}^{3}\mathbf U^{i}({\mathbf y})\otimes {\mathbf e}_{i} , \end{equation*} and the functions $\mathbf U^{i}({\mathbf y})$ are defined by the periodic boundary value problem \begin{equation}\label{4.54} \left.
\begin{array}{lll} \displaystyle -\mu_{1}\triangle \mathbf U^{i} +\nabla R^{i} ={\mathbf e}_{i}, \quad \mbox{div}_y \mathbf U^{i} =0, \quad {\mathbf y} \in Y_{f},\\[1ex] \mathbf U^{i}=0, \quad {\mathbf y} \in \gamma . \end{array} \right\} \end{equation} Thus \begin{equation}\label{4.55} B_{2}(\mu_{1})= \langle \textbf{B}^{f}_{2}({\mathbf y})\rangle _{Y_{f}}. \end{equation} The matrices $B_{1}(\mu_1,t)$ and $B_{2}(\mu_1)$ are symmetric and positive definite \cite[Chap. 8]{S-P}. c) Finally, if $\tau_{0}>0$ and $\mu_{1}=0$, then, in the process of solving the system \eqref{4.4}, \eqref{4.19} and \eqref{4.21}, we first find the pressure $R({\mathbf x},t,{\mathbf y})$ by solving a Neumann problem for Laplace's equation in $Y_{f}$. If $${\mathbf h}({\mathbf x},t)=-\tau_{0}\rho_{f}\frac{\partial ^2{\mathbf u}}{\partial t^2}({\mathbf x},t) -\nabla q({\mathbf x},t)+\rho_{f}\mathbf F({\mathbf x},t),$$ then $$R({\mathbf x},t,{\mathbf y})=\sum_{i=1}^{3}R_{i}({\mathbf y})\, {\mathbf e}_{i}\cdot {\mathbf h}({\mathbf x},t),$$ where $R_{i}({\mathbf y})$ is the solution of the problem \begin{equation}\label{4.56} \triangle R_{i}=0,\quad {\mathbf y} \in Y_{f}; \quad \nabla R_{i}\cdot {\mathbf n} ={\mathbf n}\cdot {\mathbf e}_{i}, \quad {\mathbf y} \in \gamma . \end{equation} Formula \eqref{4.50} results from an integration with respect to time in the homogenization of Eq.~\eqref{4.19}, and \begin{equation}\label{4.57} B_{3}=\sum_{i=1}^{3}\langle \nabla R_{i}({\mathbf y})\rangle _{Y_{f}}\otimes {\mathbf e}_{i}, \end{equation} where the matrix $(m\mathbb I - B_3)$ is symmetric and positive definite \cite[Chap. 8]{S-P}. \end{proof} \end{document}
\begin{document} \title[Trend to equilibrium for RDD--Poisson models]{Uniform convergence to equilibrium for a family of drift--diffusion models with trap-assisted recombination and self-consistent potential} \author{Klemens Fellner} \address{Institute of Mathematics and Scientific Computing, University of Graz, Heinrichstra\ss e 36, 8010 Graz, Austria} \email{klemens.fellner@uni-graz.at} \author{Michael Kniely} \address{Faculty of Mathematics, TU Dortmund University, Vogelpothsweg 87, 44227 Dortmund, Germany} \email[Corresponding author]{michael.kniely@tu-dortmund.de} \keywords{PDEs in connection with semiconductor devices, reaction--diffusion equations, self-consistent potential, trapped states, entropy method, exponential convergence to equilibrium} \subjclass[2020]{Primary 35Q81; Secondary 78A35, 35B40, 35K57} \begin{abstract} We investigate a recombination--drift--diffusion model coupled to Poisson's equation modelling the transport of charge within certain types of semiconductors. In more detail, we study a two-level system for electrons and holes endowed with an intermediate energy level for electrons occupying trapped states. As our main result, we establish an explicit functional inequality between relative entropy and entropy production, which leads to exponential convergence to equilibrium. We stress that our approach applies uniformly in the lifetime of electrons on the trap level, provided that this lifetime is sufficiently small.
\end{abstract} \maketitle \section{Introduction and main results} \begin{figure} \caption{An illustration of the allowed transitions of electrons between the three energy levels.} \label{figmodel} \end{figure} We consider the following PDE--ODE recombination--drift--diffusion system coupled to Poisson's equation on a bounded domain $\Omega \subset \mathbb{R}^3$ with boundary $\partial \Omega \in C^2$: \begin{equation} \label{eqsystem} \begin{cases} \begin{aligned} \partial_t n &= \nabla \cdot J_n(n, \psi) + R_n(n,n_{tr}), \\ \partial_t p &= \nabla \cdot J_p(p, \psi) + R_p(p,n_{tr}), \\ \varepsilon \, \partial_t n_{tr} &= R_p(p, n_{tr}) - R_n(n, n_{tr}), \\ - \lambda \, \Delta \psi &= n - p + \varepsilon n_{tr} - D, \end{aligned} \end{cases} \end{equation} where the flux terms $J_n$, $J_p$ and recombination terms $R_n$, $R_p$ are defined as \begin{align*} & & J_n &:= \nabla n + n\nabla (\psi + V_n) = \mu_n \nabla \frac{n}{\mu_n} + n \nabla \psi, & \mu_n &:= e^{-V_n}, & & \\ & & J_p &:= \nabla p + p\nabla (-\psi + V_p) = \mu_p \nabla \frac{p}{\mu_p} - p \nabla \psi, & \mu_p &:= e^{-V_p}, & & \end{align*} \[ R_n := \frac{1}{\tau_n} \left( n_{tr} - \frac{n}{n_0 \mu_n} (1 - n_{tr}) \right), \qquad R_p := \frac{1}{\tau_p} \left( 1 - n_{tr} - \frac{p}{p_0 \mu_p} n_{tr} \right). \] The variables $n$, $p$, and $n_{tr}$ denote the densities of electrons in the conduction band, holes in the valence band, and electrons on the trap level (see Fig.\ \ref{figmodel}). Moreover, $\psi$ represents the electrostatic potential generated by $n$, $p$, $n_{tr}$, and the time-independent doping profile $D \in L^{\infty}(\Omega)$. The constants $n_0, p_0, \tau_n, \tau_p>0$ are positive recombination parameters, while $\varepsilon\in(0,\varepsilon_0]$ (for arbitrary but fixed $\varepsilon_0>0$) is a dimensionless quantity which can be interpreted, on the one hand, as the density of available trapped states and, on the other hand, as the lifetime of electrons on the trap level. 
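The two expressions given for each flux coincide because $\mu_n = e^{-V_n}$ implies $\mu_n \nabla (n/\mu_n) = \nabla n + n \nabla V_n$, and analogously for $J_p$. A one-dimensional symbolic sanity check of this identity (an illustration only; the symbolic setup and variable names are ours, not part of the model):

```python
import sympy as sp

x = sp.symbols('x')
n = sp.Function('n')(x)
psi = sp.Function('psi')(x)
V_n = sp.Function('V_n')(x)
mu_n = sp.exp(-V_n)            # mu_n = e^{-V_n}

# J_n written as in the model ...
J_def = sp.diff(n, x) + n * sp.diff(psi + V_n, x)
# ... and in its equivalent "diffusion against mu_n plus drift" form
J_alt = mu_n * sp.diff(n / mu_n, x) + n * sp.diff(psi, x)

difference = sp.simplify(J_def - J_alt)   # vanishes identically
```

The same cancellation, with $\psi$ replaced by $-\psi$, verifies the two forms of $J_p$.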
Note that system \eqref{eqsystem} reduces in the limit $\varepsilon=0$ to the famous Shockley--Read--Hall model of electron recombination \cite{SR52,H52} in semiconductor drift--diffusion systems, see e.g. \cite{MRS90, GMS07}. Finally, \begin{align} \label{eqdefpot} V_n, \, V_p \in W^{2,\infty}(\Omega) \qquad \mbox{with} \qquad \hat n \cdot \nabla V_n = \hat n \cdot \nabla V_p = 0 \quad \mbox{on} \ \partial \Omega \end{align} represent external time-independent potentials. The system is equipped with no-flux boundary conditions for the flux components and homogeneous Neumann boundary data for the electrostatic potential, \begin{equation} \label{eqboundaryconditions} \hat n \cdot J_n = \hat n \cdot J_p = \hat n \cdot \nabla \psi = 0 \quad \mbox{on} \ \partial \Omega \end{equation} where $\hat n$ represents the outer unit normal vector on $\partial \Omega$. As we will use a result in \cite{GMS07} for proving existence and uniqueness of global solutions, we choose the initial conditions $n(0, \cdot) = n_I$, $p(0, \cdot) = p_I$, and $n_{tr}(0, \cdot) = n_{tr,I}$ in accordance with \cite{GMS07}: \begin{equation} \label{eqinitialconditions} n_I, p_I \in H^1(\Omega) \cap L^\infty(\Omega), \quad n_I, p_I \geq 0, \quad 0 \leq n_{tr,I} \leq 1. \end{equation} For convenience, we assume that the volume of $\Omega$ is normalised, i.e. $|\Omega| = 1$, and we set \[ \overline{f} := \int_{\Omega} f(x) \, dx \] for any function $f \in L^1(\Omega)$. This abbreviation is consistent with the usual definition of the average of $f$ since $|\Omega|=1$. The main goal of the paper is to close the gap between the models investigated in \cite{FK18} and \cite{FK20}. While the pure Shockley--Read--Hall model including the electrostatic potential has been considered in \cite{FK18}, the family of drift--diffusion models with trap-assisted recombination already appeared in \cite{FK20} but without coupling to Poisson's equation. 
Here, we focus on the exponential convergence to equilibrium for a PDE--ODE model including both trapped states dynamics and the self-consistent potential. More precisely, we obtain an explicit bound for the convergence rate by employing the so-called \emph{entropy method} which amounts to deriving a functional inequality between an entropy functional and the associated entropy production. For related works on exponential convergence to equilibrium for reaction--diffusion systems, see e.g.\ \cite{AMT00,DFT17_ComplexBalance,FT17_DetailBalance,FT18_ConvergenceOfRenormalizedSolutions,MHM15}. The framework of the entropy method which we shall use here to obtain explicit bounds on the convergence rate originates from \cite{DF06,DF08,DF14}, where models from reversible chemistry have been studied. An earlier application of the entropy method, but using a non-constructive compactness argument, is presented in \cite{GGH96,GH97}, where the authors prove exponential convergence for a model of electrically charged species taking the coupling to Poisson's equation into account. Throughout the article, we will frequently encounter the following inhomogeneous Poisson equation with right hand side $f \in L^2(\Omega)$ subject to homogeneous Neumann boundary conditions: \begin{equation} \label{eqpsi} -\lambda \, \Delta\psi = f \quad \mbox{in} \ \Omega, \qquad \hat n \cdot \nabla \psi = 0 \quad \mbox{on} \ \partial \Omega. \end{equation} It is well-known that there exists a weak solution $\psi \in H^1(\Omega)$ if and only if $\overline f = 0$ holds true (compatibility condition with homogeneous Neumann boundary data). In this case, $\psi$ is determined only up to an additive constant, which one can fix via the normalisation $\overline \psi = 0$ to obtain a unique solution $\psi$. 
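The solvability discussion for \eqref{eqpsi} can be made concrete on a one-dimensional toy discretisation (a hedged sketch under our own discretisation choices; the paper itself works in three dimensions and never discretises): the cell-centered Neumann Laplacian is singular with the constants as nullspace, the system is solvable exactly when the datum has zero mean, and subtracting the mean of the least-squares solution enforces the normalisation $\overline\psi = 0$.

```python
import numpy as np

def solve_neumann_poisson(f_vals, lam=1.0):
    """Least-squares solution of -lam * psi'' = f on (0,1) with homogeneous
    Neumann data (cell-centered finite differences), normalised to zero mean."""
    N = len(f_vals)
    h = 1.0 / N
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0
        if i < N - 1:
            A[i, i + 1] = -1.0
    # ghost-cell (reflection) closure encodes psi' = 0 at both endpoints
    A[0, 0] = 1.0
    A[-1, -1] = 1.0
    A *= lam / h ** 2
    psi, *_ = np.linalg.lstsq(A, f_vals, rcond=None)
    return psi - psi.mean()   # fix the additive constant: mean-zero solution

N = 200
x = (np.arange(N) + 0.5) / N
f = np.cos(2 * np.pi * x)                 # compatible datum: zero mean
psi = solve_neumann_poisson(f)
psi_exact = np.cos(2 * np.pi * x) / (2 * np.pi) ** 2   # also mean-free
```

For the compatible datum $f(x)=\cos(2\pi x)$ the discrete solution matches the exact mean-free solution $\psi(x) = \cos(2\pi x)/(4\pi^2)$ up to the $O(h^2)$ discretisation error.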
Due to the previous considerations, we additionally have to demand that the initial data satisfy the charge-neutrality condition \begin{equation} \label{eqinitialconservation} \int_\Omega \big( n_I - p_I + \varepsilon n_{tr,I} - D \big) dx = 0. \end{equation} As a consequence of the structure of system \eqref{eqsystem} and the no-flux boundary conditions in \eqref{eqboundaryconditions}, we see that the total charge is preserved for all $t \geq 0$ in the sense that \begin{equation} \label{eqconservationlaw} \int_\Omega \big( n(t, \cdot) - p(t, \cdot) + \varepsilon n_{tr}(t, \cdot) \big) \, dx = \int_\Omega \big( n_I - p_I + \varepsilon n_{tr,I} \big) \, dx = \int_\Omega D \, dx. \end{equation} For the sake of completeness, we subsequently recall all assumptions on our model referring to the introduction above for further details and modelling issues. \begin{assumption} \label{assump} We work on a bounded domain $\Omega \subset \mathbb R^3$ with boundary $\partial \Omega$ of class $C^2$ imposing the following constraints: \begin{itemize} \item $n_0$, $p_0$, $\tau_n$, $\tau_p$ are positive constants, while $\varepsilon \in (0, \varepsilon_0]$ is bounded by $\varepsilon_0 > 0$, \item $D \in L^\infty(\Omega)$ and $V_n, \, V_p \in W^{2,\infty}(\Omega)$ with $\hat n \cdot \nabla V_n = \hat n \cdot \nabla V_p = 0$ on $\partial \Omega$ where $\hat n$ represents the outer unit normal vector on $\partial \Omega$, \item $\hat n \cdot J_n = \hat n \cdot J_p = \hat n \cdot \nabla \psi = 0$ on $\partial \Omega$, \item $n_I, p_I \in H^1(\Omega) \cap L^\infty(\Omega)$, $n_I, p_I \geq 0$, $0 \leq n_{tr,I} \leq 1$, and \eqref{eqinitialconservation} holds. \end{itemize} \end{assumption} Throughout this article, we suppose Assumption \ref{assump} to hold. 
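The conservation law \eqref{eqconservationlaw} is already visible at the level of the reaction terms: for spatially homogeneous states, $\frac{d}{dt}(n - p + \varepsilon n_{tr}) = R_n - R_p + (R_p - R_n) = 0$, while the transport terms integrate to zero by the no-flux boundary conditions. A spatially homogeneous toy integration (all parameters set to illustrative values of our own choosing) confirms that the total charge is an exact invariant of the recombination dynamics:

```python
def rhs(state, eps, n0=1.0, p0=1.0, tau_n=1.0, tau_p=1.0, mu_n=1.0, mu_p=1.0):
    # Reaction part of the system for a spatially homogeneous state (n, p, n_tr);
    # all model parameters are normalised to one purely for illustration.
    n, p, ntr = state
    R_n = (ntr - n / (n0 * mu_n) * (1.0 - ntr)) / tau_n
    R_p = (1.0 - ntr - p / (p0 * mu_p) * ntr) / tau_p
    return (R_n, R_p, (R_p - R_n) / eps)

def rk4(state, eps, dt, steps):
    # classical 4th-order Runge-Kutta; RK methods preserve linear invariants
    for _ in range(steps):
        k1 = rhs(state, eps)
        k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)), eps)
        k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)), eps)
        k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)), eps)
        state = tuple(s + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

eps = 0.1
start = (1.0, 2.0, 0.5)       # illustrative initial state (n, p, n_tr)

def charge(s):
    # total charge n - p + eps * n_tr; the doping D drops out of the difference
    return s[0] - s[1] + eps * s[2]

end = rk4(start, eps, dt=1e-3, steps=1000)
```

Up to floating-point roundoff, `charge(end)` equals `charge(start)`, mirroring \eqref{eqconservationlaw}, and $n_{tr}$ remains in $[0,1]$ along the trajectory.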
\begin{definition} A \emph{global weak solution} to \eqref{eqsystem}--\eqref{eqinitialconditions} and \eqref{eqinitialconservation} is a quadruple $(n, p, n_{tr}, \psi) : [0, \infty) \rightarrow H^1(\Omega)^2 \times L^\infty(\Omega) \times H^1(\Omega)$ such that for all $T > 0$ the following conditions are satisfied: \begin{itemize} \item $n, p \in L^2(0, T; H^1(\Omega))$, and \begin{multline*} - \int_0^T \langle u'(t), n(t) \rangle_{H^1(\Omega)^\ast \times H^1(\Omega)} dt - \int_\Omega n_I u(0) \, dx \\ = - \int_0^T \int_\Omega J_n(n, \psi) \cdot \nabla u \, dx \, dt + \int_0^T \int_\Omega R_n(n, n_{tr}) u \, dx \, dt, \end{multline*} \begin{multline*} - \int_0^T \langle u'(t), p(t) \rangle_{H^1(\Omega)^\ast \times H^1(\Omega)} dt - \int_\Omega p_I u(0) \, dx \\ = - \int_0^T \int_\Omega J_p(p, \psi) \cdot \nabla u \, dx \, dt + \int_0^T \int_\Omega R_p(p, n_{tr}) u \, dx \, dt, \end{multline*} for all $u \in W_2(0, T) \mathrel{\mathop:}= \big\{ f \in L^2(0, T; H^1(\Omega)) \, | \, \partial_t f \in L^2(0, T; H^1(\Omega)^*) \big\}$ subject to $u(T) = 0$, \item $\begin{aligned}n_{tr}(T) = n_{tr,I} + \frac{1}{\varepsilon} \int_0^T \big( R_p(p, n_{tr}) - R_n(n, n_{tr}) \big) dt\end{aligned}$, \item $\begin{aligned}\lambda \int_\Omega \nabla \psi(T) \cdot \nabla w \, dx = \int_\Omega \big( n(T) - p(T) + \varepsilon n_{tr}(T) - D \big) w \, dx\end{aligned}$ for all $w \in H^1(\Omega)$. \end{itemize} \end{definition} We further mention the embedding $W_2(0, T) \hookrightarrow C([0, T], L^2(\Omega))$ known from PDE theory (see e.g.\ \cite{Chi00}). \begin{proposition}[Global Solutions] \label{propglobalsolution} There exists a unique global weak solution $(n, p, n_{tr}, \psi) : [0, \infty) \rightarrow H^1(\Omega)^2 \times L^\infty(\Omega) \times H^1(\Omega)$ of \eqref{eqsystem}--\eqref{eqinitialconditions} and \eqref{eqinitialconservation} with $\overline {\psi(t, \cdot)} = 0$ for all $t \geq 0$. 
This solution satisfies $n, p \in L^2(0,T; H^1(\Omega))$ for all $T>0$ uniformly in $\varepsilon \in (0, \varepsilon_0]$ as well as \eqref{eqconservationlaw}. Moreover, $n, p \geq 0$, $0 \leq n_{tr} \leq 1$, and there exist positive constants $M$, $K(M)$ (again uniformly in $\varepsilon \in (0, \varepsilon_0]$) such that \begin{align} \label{equpperboundnp} \| n(t) \|_{L^\infty(\Omega)} + \| p(t) \|_{L^\infty(\Omega)} \leq M \quad \mbox{and} \quad \| \psi(t) \|_{H^2(\Omega)} + \| \psi(t) \|_{C(\overline \Omega)} \leq K \end{align} for all $t \geq 0$. In addition, there exists a positive constant $\mu(M, K) < \tfrac12$ (uniformly in $\varepsilon \in (0, \varepsilon_0]$) such that \begin{align} \label{eqlowerboundnp} n(t, x), \, p(t,x) \geq \mu \min \big\{ t^2, 1 \big\} \end{align} and \begin{align} \label{eqlowerupperboundntr} n_{tr}(t, x) \in \big[ \min \big\{ \tfrac{1}{2\varepsilon_0\tau_p} t, \mu \big\}, \, 1 - \min \big\{ \tfrac{1}{2\varepsilon_0\tau_n} t, \mu \big\} \big] \end{align} for all $t \geq 0$ and a.e.\ $x \in \Omega$. Finally, $n, p \in W_2(0, T) \hookrightarrow C([0, T], L^2(\Omega))$, $n_{tr} \in C([0, T], L^\infty(\Omega))$, and $\psi \in C([0, T], H^2(\Omega))$, where each inclusion holds true uniformly in $\varepsilon \in (0, \varepsilon_0]$. \end{proposition} \begin{proposition}[Equilibrium States] \label{propequilibrium} The stationary system \begin{equation} \label{eqeqsystem} \begin{cases} \begin{aligned} \nabla \cdot J_n(n) + R_n(n,n_{tr}) = 0,& \\ \nabla \cdot J_p(p) + R_p(p,n_{tr}) = 0,& \\ R_n(n, n_{tr}) = R_p(p, \, n_{tr}),& \\ - \lambda \, \Delta \psi = n - p + \varepsilon n_{tr} - D,& \end{aligned} \end{cases} \end{equation} subject to $\hat n \cdot J_n = \hat n \cdot J_p = \hat n \cdot \nabla \psi = 0$ admits a unique solution $(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \in (H^1(\Omega) \cap L^\infty(\Omega))^4$ satisfying $\overline {\psi_\infty} = 0$. 
The equilibrium potential $\psi_\infty$ is continuous and there exists a positive constant $K_\infty$ such that \[ \| \psi_\infty \|_{H^2(\Omega)} + \| \psi_\infty \|_{C(\overline \Omega)} \leq K_\infty. \] Moreover, there exist positive constants $M_\infty(K_\infty) > 1$ and $\mu_\infty(K_\infty) < \tfrac12$ such that \begin{equation} \label{eqeqbounds} n_\infty(x), \, p_\infty(x) \in (\mu_\infty, M_\infty) \quad \mbox{and} \quad n_{tr,\infty}(x) \in (\mu_\infty, 1 - \mu_\infty) \end{equation} for a.e.\ $x \in \Omega$. The constants $K_\infty$, $M_\infty$, and $\mu_\infty$ are independent of $\varepsilon \in (0, \varepsilon_0]$. In detail, the equilibrium densities $n_\infty$, $p_\infty$, and $n_{tr,\infty}$ read \begin{equation} \label{eqeqstates} n_\infty = n_\ast e^{-\psi_\infty - V_n}, \quad p_\infty = p_\ast e^{\psi_\infty - V_p}, \quad n_{tr,\infty} = \frac{n_\ast}{n_\ast + n_0 e^{\psi_\infty}} = \frac{p_0}{p_0 + p_\ast e^{\psi_\infty}} \end{equation} where the positive constants $n_\ast$ and $p_\ast$ are uniquely determined in terms of $\psi_\infty$ by \begin{equation} \label{eqeqconstants} n_\ast p_\ast = n_0 p_0 \quad \mbox{and} \quad n_\ast \overline{e^{-\psi_\infty - V_n}} - p_\ast \overline{e^{\psi_\infty - V_p}} + \varepsilon \overline{\frac{n_\ast}{n_\ast + n_0 e^{\psi_\infty}}} - \overline D = 0. \end{equation} Furthermore, the following relations hold true: \begin{equation} \label{eqntrrelations} n_{tr,\infty} = \frac{n_\infty (1 - n_{tr,\infty})}{n_0 \mu_n} \quad \mbox{and} \quad 1 - n_{tr,\infty} = \frac{p_\infty n_{tr,\infty}}{p_0 \mu_p}. 
\end{equation} \end{proposition} We introduce the \emph{entropy functional} $E(n, p, n_{tr}, \psi)$ for non-negative functions $n, p, n_{tr} \in L^2(\Omega)$ satisfying $n_{tr} \leq 1$ and $\overline n - \overline p + \varepsilon \overline{n_{tr}} = \overline D$, where $\psi \in H^1(\Omega)$ is the unique solution of \eqref{eqpsi} with right hand side $f = n - p + \varepsilon n_{tr} - D$ and normalisation $\overline \psi = 0$: \begin{multline} E(n, p, n_{tr}, \psi) \mathrel{\mathop:}= \int_{\Omega} \bigg( n \ln \frac{n}{n_0 \mu_n} - (n-n_0\mu_n) \\ + p \ln \frac{p}{p_0 \mu_p} - (p-p_0\mu_p) + \frac{\lambda}{2} \big| \nabla \psi \big|^2 + \varepsilon \int_{1/2}^{n_{tr}} \ln \left( \frac{s}{1-s} \right) ds \bigg) dx. \label{eqentropy} \end{multline} The densities $n$ and $p$ enter via Boltzmann entropy contributions $a \ln a - (a - 1) \geq 0$, whereas $n_{tr}$ appears within the entropy functional via an integral term. We first mention that the integral $\int_{1/2}^{n_{tr}} \ln \big( \frac{s}{1-s} \big) ds$ is non-negative and finite for all $n_{tr}(x) \in [0,1]$. In more detail, we may write \begin{multline*} \int_{1/2}^{n_{tr}} \ln \left( \frac{s}{1-s} \right) ds = \big[ n_{tr} \ln n_{tr} - (n_{tr} - 1) \big] \\ + \big[ (1 - n_{tr}) \ln (1 - n_{tr}) - ((1 - n_{tr}) - 1) \big] + \ln 2 - 1. \end{multline*} Consequently, both the occupied and unoccupied trapped states ($n_{tr}$ and $1 - n_{tr}$) are described via Boltzmann statistics within the entropy functional, and the integral $\int_{1/2}^{n_{tr}} \ln \big( \frac{s}{1-s} \big) ds$ makes it possible to combine the contributions of $n_{tr}$ and $1 - n_{tr}$ in a compact fashion.
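The expansion above follows from the antiderivative $\frac{d}{ds}\big(s\ln s + (1-s)\ln(1-s)\big) = \ln\frac{s}{1-s}$, so the integral equals $n_{tr}\ln n_{tr} + (1-n_{tr})\ln(1-n_{tr}) + \ln 2$. A numerical cross-check of this closed form via Simpson quadrature (the sample values $n_{tr} = 0.3$ and $0.8$ are our own choice):

```python
import math

def log_odds_integral(a, intervals=1000):
    # Simpson quadrature of s -> ln(s/(1-s)) over [1/2, a], for 0 < a < 1
    f = lambda s: math.log(s / (1.0 - s))
    h = (a - 0.5) / intervals
    total = f(0.5) + f(a)
    for i in range(1, intervals):
        total += (4.0 if i % 2 else 2.0) * f(0.5 + i * h)
    return total * h / 3.0

def closed_form(a):
    # n_tr ln n_tr + (1 - n_tr) ln(1 - n_tr) + ln 2: the two Boltzmann brackets
    # of the expansion; their constant parts and (ln 2 - 1) combine to ln 2
    return a * math.log(a) + (1.0 - a) * math.log(1.0 - a) + math.log(2.0)
```

Both evaluations agree to quadrature accuracy, and the resulting values are positive, consistent with the non-negativity of the integral claimed above.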
One can further verify that the entropy functional \eqref{eqentropy} is indeed a Lyapunov functional: By defining the \emph{entropy production functional} \begin{equation} \label{eeplaw} P(n, p, n_{tr}, \psi) := -\frac{d}{dt} E(n, p, n_{tr}, \psi), \end{equation} we first calculate along solutions of \eqref{eqsystem} that formally \begin{multline} P(n, p, n_{tr}, \psi) = \int_{\Omega} \bigg( \frac{|J_n|^2}{n} + \frac{|J_p|^2}{p} \\ - R_n \ln \left( \frac{n(1-n_{tr})}{n_0 \mu_n n_{tr}} \right) - R_p \ln \left( \frac{p n_{tr}}{p_0 \mu_p (1-n_{tr})} \right) \bigg) dx. \label{eqproduction} \end{multline} The entropy production functional involves non-negative flux terms as well as recombination terms of the form $(a-1) \ln a \geq 0$. The entropy production $P$ is, therefore, a non-negative functional, which ensures the monotone decrease in time of the entropy $E$ along trajectories of \eqref{eqsystem}. More precisely, it will be shown in Theorem \ref{theoremexpconvergence} that the global weak solutions to \eqref{eqsystem} obtained in Proposition \ref{propglobalsolution} satisfy a suitable weak version of \eqref{eeplaw}, see \eqref{eqweakeplaw} below. The following theorem constitutes a so-called entropy--entropy production (EEP) estimate. This is a functional inequality between entropy and entropy production for arbitrary, yet admissible non-negative functions $n, p, n_{tr} \in L^\infty(\Omega)$, $n_{tr} \leq 1$; in particular, the electrostatic potential $\psi \in H^1(\Omega)$ in the following theorem must be the unique solution of \eqref{eqpsi} subject to $f = n - p + \varepsilon n_{tr} - D$ and the normalisation $\overline \psi = 0$. 
\begin{theorem}[Entropy--Entropy Production Estimate] \label{theoremeepinequality} Consider all non-negative functions $n, p, n_{tr} \in L^\infty(\Omega)$ subject to $n, p \leq \mathcal{M}$, $n_{tr} \leq 1$, and $\overline n - \overline p + \varepsilon \overline{n_{tr}} = \overline D$ and accordingly determine $\psi \in H^1(\Omega)$ as the unique solution to \eqref{eqpsi} with $f = n - p + \varepsilon n_{tr} - D$ and $\overline \psi = 0$. Then, there exist explicit constants $\varepsilon_0 > 0$ and $C_{EEP} > 0$ depending on $\mathcal{M}$ and on $K_\infty$ (as given in Proposition \ref{propequilibrium}) such that \[ E(n, p, n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \leq C_{EEP} P(n, p, n_{tr}, \psi) \] holds true for all $\varepsilon \in (0, \varepsilon_0]$. \end{theorem} Note that $C_{EEP}$ is independent of $\varepsilon \in (0, \varepsilon_0]$ and that this abstract EEP inequality can be applied to the global solution to \eqref{eqsystem} by using $\mathcal{M}=M$ from Proposition \ref{propglobalsolution}. We are then able to prove the exponential decay of the entropy relative to the equilibrium by using a Gronwall argument. \begin{theorem}[Exponential Decay of the Relative Entropy] \label{theoremexpconvergence} Let $\varepsilon \in (0, \varepsilon_0]$ with $\varepsilon_0 > 0$ from Theorem \ref{theoremeepinequality}, and let $(n, p, n_{tr}, \psi)$ be the unique global weak solution to \eqref{eqsystem} with non-negative initial datum $(n_I, p_I, n_{tr,I}) \in (H^1(\Omega) \cap L^\infty(\Omega))^2 \times L^\infty(\Omega)$, $n_{tr,I} \leq 1$, satisfying $\overline{n_I} - \overline{p_I} + \varepsilon \overline{n_{tr,I}} = \overline D$ according to Proposition \ref{propglobalsolution}. In addition, let $(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty)$ be the unique equilibrium state characterised in Proposition \ref{propequilibrium} as a solution to \eqref{eqeqsystem}. 
Then, $(n, p, n_{tr}, \psi)$ fulfils the weak entropy production law \begin{equation} \label{eqweakeplaw} E(n,p,n_{tr},\psi)(t_1) + \int_{t_0}^{t_1} P(n, p, n_{tr}, \psi)(s) \, ds = E(n,p,n_{tr},\psi)(t_0) \end{equation} for all $0 < t_0 \leq t_1 < \infty$. As a consequence, $E(n, p, n_{tr}, \psi)$ converges exponentially to $E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty)$ with explicit rate and constant as a function of time $t \geq 0$. More precisely, \begin{multline} \label{eqexpconventropy} E(n, p, n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \\ \leq \Big( E(n_I, p_I, n_{tr,I}, \psi_I) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \Big) e^{-C_{EEP}^{-1} t} \end{multline} where $\psi_I \in H^1(\Omega)$ is the unique weak solution to \eqref{eqpsi} with $f = n_I - p_I + \varepsilon n_{tr,I} - D$ and $\overline{\psi_I} = 0$. \end{theorem} \begin{corollary}[Exponential Convergence to the Equilibrium] \label{corconvlinfty} Under the hypotheses of Theorem \ref{theoremexpconvergence}, the following improved convergence properties with constants $0 < c, C < \infty$ both depending on $M$ and $K_\infty$ but not on $\varepsilon \in (0, \varepsilon_0]$ hold true for all $t \geq 0$: \begin{multline} \label{eqexpconvlinfty} \| n - n_\infty \|_{L^\infty(\Omega)} + \| p - p_\infty \|_{L^\infty(\Omega)} \\ + \| n_{tr} - n_{tr,\infty} \|_{L^\infty(\Omega)} + \| \psi - \psi_\infty \|_{H^2(\Omega)} \leq C e^{-c\, t}. \end{multline} In particular, $\psi \rightarrow \psi_\infty$ in $L^\infty(\Omega)$ at an exponential rate. \end{corollary} The remainder of this article is devoted to the proofs of the various statements above. Section \ref{secsolution} collects the proofs of Propositions \ref{propglobalsolution} and \ref{propequilibrium}. 
The proof of Theorem \ref{theoremeepinequality} along with the necessary prerequisites is contained in Section \ref{seceep}, while the results on exponential convergence to equilibrium are proven in Section \ref{secconv}. A brief section providing an outlook to future research concludes the paper. \section{Global solution and equilibrium state} \label{secsolution} \begin{proof}[Proof of Proposition \ref{propglobalsolution}] The existence of such a unique global solution $(n, p, n_{tr}, \psi)$ as well as $(n, p) \in L^2(0,T; H^1(\Omega))$ for all $T>0$ uniformly for $\varepsilon \in (0,\varepsilon_0]$ and $0 \leq n_{tr} \leq 1$ are a consequence of \cite[Lemma 3.1]{GMS07}. The uniform-in-time $L^\infty$ bounds for $n$ and $p$ follow similarly to \cite[Lemma 4.1]{DFM08}, where a Nash--Moser-type iteration for $L^r$ norms, $r \geq 1$, of $n$ and $p$ has been employed. But as the coupling to Poisson's equation is missing in \cite{DFM08}, we have to slightly modify the line of argument. The evolution of the $L^{r+1}$ norm, $r \geq 1$, of $n$ and $p$ can be reformulated as \begin{align*} \frac{d}{dt} &\int_\Omega \big( n^{r+1} + p^{r+1} \big) \, dx = (r+1) \int_\Omega \Big( -r n^{r-1} \nabla n \cdot \big(\nabla n + n \nabla (\psi + V_n) \big) \\ &- r p^{r-1} \nabla p \cdot \big(\nabla p + p \nabla (-\psi + V_p) \big) + n^r R_n + p^r R_p \Big) dx \\ \leq&- \frac{4r}{r+1} \int_\Omega \Big| \nabla n^\frac{r+1}{2} \Big|^2 \, dx -\frac{4r}{r+1} \int_\Omega \Big| \nabla p^\frac{r+1}{2} \Big|^2 \, dx + r \int_\Omega n^{r+1} \Delta (\psi + V_n) \, dx \\ &+ r \int_\Omega p^{r+1} \Delta (-\psi + V_p) \, dx + (r+1) \int_\Omega \big( n^r R_n + p^r R_p \big) \, dx.
\end{align*} To the last term, we apply the estimate $(r+1) n^r \leq \frac1r + 2 r n^{r+1}$ which follows from Young's inequality $ab \leq \frac{1}{q} a^q + \frac{1}{s} b^s$ with $a \mathrel{\mathop:}= (\frac{1}{r+1})^\frac{1}{r+1}$, $b \mathrel{\mathop:}= (\frac{1}{r+1})^{-\frac{1}{r+1}} n^r$, $q \mathrel{\mathop:}= r+1$, and $s \mathrel{\mathop:}= \frac{r+1}{r}$. As a consequence of $-\lambda \Delta \psi = n - p + \varepsilon n_{tr} - D$, $(n^{r+1} - p^{r+1})(n-p) \geq 0$, $|R_n| \leq C (1 + n)$, and $|R_p| \leq C (1 + p)$, we then deduce \begin{align} \label{eqnashmoser1} \frac{d}{dt} \int_\Omega \big( n^{r+1} + p^{r+1} \big) \, dx \leq &-\frac{4r}{r+1} \int_\Omega \Big| \nabla n^\frac{r+1}{2} \Big|^2 \, dx -\frac{4r}{r+1} \int_\Omega \Big| \nabla p^\frac{r+1}{2} \Big|^2 \, dx \\ &+ \widehat C r \int_\Omega \big( n^{r+1} + p^{r+1} \big) \, dx + \frac{\widehat C}{r} \nonumber \end{align} with a constant $\widehat C > 0$ depending on $\varepsilon_0$ but not on $r$. One can now proceed as in \cite[Lemma 4.1]{DFM08}. For completeness, we briefly collect the main arguments below and refer to \cite{DFM08} for the details. By utilizing the Gagliardo--Nirenberg-type inequality $\| f \|_{L^2(\Omega)} \leq C_{GN} \| f \|_{L^1(\Omega)}^\frac{2}{5} \| f \|_{H^1(\Omega)}^\frac{3}{5}$ for $f \mathrel{\mathop:}= n^\frac{r+1}{2}$ and $f \mathrel{\mathop:}= p^\frac{r+1}{2}$, one derives \begin{multline} \label{eqnashmoser2} \int_\Omega \big( n^{r+1} + p^{r+1} \big) \, dx \\ \leq \delta \int_\Omega \bigg( \Big| \nabla n^\frac{r+1}{2} \Big|^2 + \Big| \nabla p^\frac{r+1}{2} \Big|^2 \bigg) dx + \frac{\widetilde C}{\delta} \Bigg( \int_\Omega \Big( n^\frac{r+1}{2} + p^\frac{r+1}{2} \Big) \, dx \Bigg)^2 \end{multline} where $\widetilde C > 0$ is a constant independent of $r$ and $\delta > 0$. We now introduce $\lambda_k \mathrel{\mathop:}= 2^k - 1$ for $k \geq 1$ and set $r \mathrel{\mathop:}= \lambda_k$. 
Choosing a sufficiently small constant $A > 0$ and defining $\delta_k \mathrel{\mathop:}= \frac{A}{\lambda_k}$ results in \[ \delta_k \big(\widehat C \lambda_k + \delta_k \big) \leq \frac{4 \lambda_k}{\lambda_k + 1} \] for all $k \geq 1$. By multiplying \eqref{eqnashmoser2} with $\widehat C \lambda_k + \delta_k$ and by combining the result with \eqref{eqnashmoser1}, we arrive at \begin{multline*} \frac{d}{dt} \int_\Omega \big( n^{\lambda_k + 1} + p^{\lambda_k + 1} \big) \, dx \leq - \delta_k \int_\Omega \big( n^{\lambda_k + 1} + p^{\lambda_k + 1} \big) \, dx \\ + B \lambda_k (\lambda_k + \delta_k) \sup_{0 \leq \tau \leq t} \Bigg( \int_\Omega \Big( n^\frac{r+1}{2} + p^\frac{r+1}{2} \Big) \, dx \Bigg)^2 + \frac{\widehat C}{\lambda_k} \end{multline*} with the constant $B \mathrel{\mathop:}= \frac{\widehat C \widetilde C}{A}$. The uniform $L^\infty$ bounds on $n$ and $p$ now follow from \cite[Lemma 4.2]{DFM08}. As, in particular, $\| n(t) - p(t) + \varepsilon n_{tr}(t) - D \|_{L^2(\Omega)}$ is uniformly bounded in $t \geq 0$, we conclude that $\| \psi(t) \|_{H^2(\Omega)}$ is uniformly bounded in time by applying standard elliptic regularity theory (see e.g.\ \cite[Chap. IV. \textsection 2. Theorem 4]{Mik80}). The announced bound on $\| \psi(t) \|_{C(\overline \Omega)}$ follows from the embedding $H^2(\Omega) \hookrightarrow C(\overline \Omega)$ valid in $\mathbb R^3$. The regularity $\partial_t n, \partial_t p \in L^2(0, T; H^{1}(\Omega)^\ast)$ and, hence, $n, p \in W_2(0, T) \hookrightarrow C([0, T], L^2(\Omega))$ uniformly for $\varepsilon \in (0,\varepsilon_0]$ is easily inferred from the corresponding bounds on $J_n, J_p \in L^2((0, T) \times \Omega)$ and $R_n, R_p \in L^\infty((0, T) \times \Omega)$. Likewise, $n_{tr} \in C([0, T], L^\infty(\Omega))$ and $\psi \in C([0, T], H^2(\Omega))$, where both inclusions hold true uniformly for $\varepsilon \in (0,\varepsilon_0]$. 
To show the upper and lower bounds on $n_{tr}$ in \eqref{eqlowerupperboundntr}, we multiply the third equation in \eqref{eqsystem} by $\tau_p$ and observe that \[ \varepsilon \partial_t (\tau_p n_{tr}) \geq 1 - \rho n_{tr} \] holds true with a constant $\rho(M) > 1$ due to $\| p \|_{L^\infty(\Omega)} \leq M$. We now distinguish the following three cases for all $t \geq 0$ and a.e.\ $x \in \Omega$: $n_{tr}(t, x) \geq \tfrac{1}{\rho}$, $n_{tr}(t, x) \in [\frac{1}{2 \rho}, \tfrac{1}{\rho})$, and $n_{tr}(t, x) < \tfrac{1}{2 \rho}$. In the first case, the desired lower bound $n_{tr}(t,x) \geq \tfrac{1}{2 \rho}$ trivially holds, while in the second case $\partial_t (\tau_p n_{tr}(t,x)) > 0$, so that $n_{tr}$ cannot fall below the threshold $\tfrac{1}{2 \rho}$ once it has been reached. And in the third case, $\partial_t (\tau_p n_{tr}(t,x)) > \tfrac{1}{2 \varepsilon_0}$. Defining $t_0 \mathrel{\mathop:}= \tfrac{\varepsilon_0 \tau_p}{\rho}$, this ensures \begin{equation} \label{eqlowerboundntr} \tau_p n_{tr}(t,x) \geq \frac{t}{2 \varepsilon_0}, \quad t \in [0, t_0], \qquad \mbox{and} \qquad n_{tr}(t,x) \geq \frac{1}{2 \rho}, \quad t \geq t_0. \end{equation} The upper bound on $n_{tr}$ follows by applying the same arguments to $\tau_n(1 - n_{tr})$. Concerning the bounds on $n$ and $p$ in \eqref{eqlowerboundnp}, we follow the lines of \cite{FK20} and concentrate on the arguments for $n$ as the result for $p$ can be derived analogously. For simplicity, we set w.l.o.g.\ $\tau_n = \tau_p = 1$ in the following calculations. The temporal derivative of $n$ is then bounded from below by \begin{align} \label{eqdtw} \partial_t n \geq \nabla \cdot \big( \nabla n + n \nabla (\psi + V_n) \big) + n_{tr} - \alpha n \end{align} with a constant $\alpha > 0$. Employing the no-flux boundary conditions from \eqref{eqboundaryconditions}, we first test \eqref{eqdtw} with $\bigl( n - \mu_1 t^2 \bigr)_-$ for $t \in [0, t_0]$ where $\mu_1 > 0$ is a constant specified below and where we abbreviate $(\cdot)_{-} \mathrel{\mathop:}= \min\{\cdot,0\}$.
This entails \begin{align*} \frac{d}{dt} \frac{1}{2} &\int_{\Omega} \Bigl(n - \mu_1 t^2\Bigr)_{-}^2 \, dx = \int_{\Omega} \Bigl(n - \mu_1 t^2\Bigr)_{-} \big( \partial_t n - 2\mu_1 t \big) \, dx \\ \leq &\int_{\Omega} \Bigl(n-\mu_1 t^2\Bigr)_{-} \Big( \nabla \cdot \big(\nabla n + n \nabla (\psi + V_n) \big) + n_{tr} - \alpha n - 2 \mu_1 t \Big) \, dx \\ = &- \int_{\Omega} \mathbb{1}_{n \leq \mu_1 t^2} \nabla n \cdot \big( \nabla n + (n - \mu_1 t^2) \nabla (\psi + V_n) + \mu_1 t^2 \nabla (\psi + V_n) \big) \, dx \\ &+ \int_{\Omega} \Big( n-\mu_1 t^2 \Big)_{-} \big(n_{tr} - \alpha n - 2\mu_1 t \big) \, dx. \end{align*} Omitting the first term on the right hand side, we further derive \begin{align*} \frac{d}{dt} \frac{1}{2} \int_{\Omega} \Bigl(n - \mu_1 t^2\Bigr)_{-}^2 \, dx \leq &-\frac12 \int_\Omega \nabla \bigg[ \Bigl(n-\mu_1 t^2\Bigr)_{-}^2 \bigg] \cdot \nabla (\psi + V_n) \, dx \\ &- \int_\Omega \nabla \bigg[ \Bigl(n-\mu_1 t^2\Bigr)_{-} \bigg] \cdot \mu_1 t^2 \nabla (\psi + V_n) \, dx \\ &+ \int_{\Omega} \Big( n-\mu_1 t^2 \Big)_{-} \big(n_{tr} - \alpha n - 2\mu_1 t \big) \, dx. \end{align*} We now integrate by parts utilizing $\hat n \cdot \nabla \psi = \hat n \cdot \nabla V_n = 0$ on $\partial \Omega$. Due to the bound $n_{tr}(t,x) \geq \tfrac{t}{2 \varepsilon_0}$ on the considered interval $t \in [0, t_0]$, we obtain \begin{multline*} \frac{d}{dt} \frac{1}{2} \int_{\Omega} \Bigl(n - \mu_1 t^2\Bigr)_{-}^2 \, dx \leq \frac12 \int_\Omega \Big( n-\mu_1 t^2 \Big)_{-}^2 \| \Delta (\psi + V_n) \|_{L^\infty(\Omega)} \, dx \\ + \int_{\Omega} \Big( n-\mu_1 t^2 \Big)_{-} \Big(\frac{1}{2 \varepsilon_0} - \alpha \mu_1 t_0 - 2 \mu_1 - \mu_1 t_0 \| \Delta (\psi + V_n) \|_{L^\infty(\Omega)} \Big) t \, dx. 
\end{multline*} Choosing $\mu_1 > 0$ according to $\mu_1 \bigl( 2 + \alpha t_0 + t_0 \| \Delta (\psi + V_n) \|_{L^\infty(\Omega)} \bigr) \leq \tfrac{1}{2 \varepsilon_0}$, we deduce \[ \frac{d}{dt} \int_{\Omega} \Bigl(n-\mu_1 t^2\Bigr)_{-}^2 \, dx \leq \|\Delta (\psi + V_n) \|_{L^\infty(\Omega)} \int_\Omega \Bigl(n-\mu_1 t^2\Bigr)_{-}^2 \, dx. \] Because of $\int_{\Omega} \bigl(n(0,x)\bigr)_{-}^2 \, dx = 0$, we derive $\int_{\Omega} \big(n - \mu_1 t^2 \big)_{-}^2 \, dx = 0$ for all $t \in [0, t_0]$ by applying a Gronwall argument. We thus arrive at $n(t,x) \geq \mu_1 t^2$ for all $t \in [0, t_0]$ and a.e.\ $x\in\Omega$. In the situation $t \geq t_0$, we test \eqref{eqdtw} with $( n - \mu_2 )_-$ where $\mu_2 > 0$ is another constant to be specified. As above, we calculate \begin{align*} \frac{d}{dt} \frac{1}{2} &\int_{\Omega} \bigl(n - \mu_2\bigr)_{-}^2 \, dx = \int_{\Omega} \bigl(n - \mu_2\bigr)_{-} \partial_t n \, dx \\ \leq &\int_{\Omega} \bigl(n-\mu_2\bigr)_{-} \Big( \nabla \cdot \big(\nabla n + n \nabla (\psi + V_n) \big) + n_{tr} - \alpha n \Big) \, dx \\ = &- \int_{\Omega} \mathbb{1}_{n \leq \mu_2} \nabla n \cdot \big( \nabla n + (n - \mu_2) \nabla (\psi + V_n) + \mu_2 \nabla (\psi + V_n) \big) \, dx \\ &+ \int_{\Omega} \big( n-\mu_2 \big)_{-} \big(n_{tr} - \alpha n \big) \, dx. \end{align*} The same reasoning as above gives rise to \begin{multline*} \frac{d}{dt} \frac{1}{2} \int_{\Omega} \bigl(n - \mu_2\bigr)_{-}^2 \, dx \leq -\frac12 \int_\Omega \nabla \Big[ \bigl(n-\mu_2\bigr)_{-}^2 \Big] \cdot \nabla (\psi + V_n) \, dx \\ - \int_\Omega \nabla \Big[ \bigl(n-\mu_2\bigr)_{-} \Big] \cdot \mu_2 \nabla (\psi + V_n) \, dx + \int_{\Omega} \big( n-\mu_2 \big)_{-} \big(n_{tr} - \alpha n \big) \, dx. 
\end{multline*} For $t \geq t_0$, we have the lower bound $n_{tr}(t,x) \geq \frac{1}{2 \rho}$, which yields \begin{multline*} \frac{d}{dt} \frac{1}{2} \int_{\Omega} \big( n - \mu_2 \big)_{-}^2 \, dx \leq \frac12 \int_\Omega \big( n - \mu_2 \big)_{-}^2 \| \Delta (\psi + V_n) \|_{L^\infty(\Omega)} \, dx \\ + \int_{\Omega} \big( n - \mu_2 \big)_{-} \Big( \frac{1}{2\rho} - \alpha \mu_2 - \mu_2 \| \Delta (\psi + V_n) \|_{L^\infty(\Omega)} \Big) \, dx. \end{multline*} If we impose the conditions $\mu_2 (\alpha + \| \Delta (\psi + V_n) \|_{L^\infty(\Omega)}) \leq \frac{1}{2 \rho}$ and $\mu_2 \leq \mu_1 t_0^2$ on $\mu_2 > 0$, we infer \[ \frac{d}{dt} \int_{\Omega} \big( n - \mu_2 \big)_{-}^2 \, dx \leq \|\Delta (\psi + V_n)\|_{L^\infty(\Omega)} \int_{\Omega} \big( n - \mu_2 \big)_{-}^2 \, dx \] as well as $\int_{\Omega} \big( n(t_0,x) - \mu_2 \big)_{-}^2 \, dx = 0$. Finally, Gronwall's lemma guarantees that $\int_{\Omega} \big( n - \mu_2 \big)_{-}^2 \, dx = 0$ and, hence, $n(t,x) \geq \mu_2$ for all $t \geq t_0$ and a.e.\ $x \in \Omega$. \end{proof} \begin{proof}[Proof of Proposition \ref{propequilibrium}] As the entropy production vanishes at the stationary state $(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty)$, straightforward calculations show that $J_n = J_p = R_n = R_p = 0$ yields the representations for $n_\infty$ and $p_\infty$ as well as the two expressions for $n_{tr,\infty}$ in \eqref{eqeqstates}. The relation $n_\ast p_\ast = n_0 p_0$ results from a combination of the formulas arising from $R_n = 0$ and $R_p = 0$, while the fact that the conservation law is also fulfilled at equilibrium follows from integrating Poisson's equation in \eqref{eqeqsystem}. Note that $n_\ast$ (and hence $p_\ast$) is uniquely determined from the second relation in \eqref{eqeqconstants} as its left-hand side is strictly monotonically increasing and surjective from $(0, \infty)$ to $(-\infty,\infty)$ as a function of $n_\ast$.
The identities in \eqref{eqntrrelations} are equivalent versions of $R_n(n_\infty, n_{tr,\infty}) = 0$ and $R_p(p_\infty, n_{tr,\infty}) = 0$. Next, we establish the existence of the limiting potential $\psi_\infty$. A technical difficulty stems from the fact that the constant $n_\ast$ (or equally $p_\ast$) depends non-locally on $\psi_\infty$, see \eqref{eqeqconstants}. However, this can be avoided by substituting $\widetilde{\psi_\infty} \mathrel{\mathop:}= \psi_\infty - \ln n_\ast$ and rewriting $- \lambda \, \Delta \psi_\infty = n_\infty - p_\infty + \varepsilon n_{tr,\infty} - D$ as \begin{align} \label{eqeqpotential} - \lambda \, \Delta \widetilde{\psi_\infty} - e^{-\widetilde{\psi_\infty} - V_n} + n_0 p_0 e^{\widetilde{\psi_\infty} - V_p} - \frac{\varepsilon}{1 + n_0 e^{\widetilde{\psi_\infty}}} = - D. \end{align} We now aim to apply \cite[Theorem 4.8]{T10} to \eqref{eqeqpotential}, which we further reformulate as \begin{align} \label{eqeqpotentialabstract} - \lambda \, \Delta \widetilde{\psi_\infty} + \min\{ e^{-V_n}, n_0 p_0 e^{-V_p} \} \widetilde{\psi_\infty} + d(\cdot, \widetilde{\psi_\infty}) = - D \end{align} where \begin{align*} d(x,y) \mathrel{\mathop:}= - e^{-y-V_n} + n_0 p_0 e^{y-V_p} - \frac{\varepsilon}{1 + n_0 e^{y}} - \min\{ e^{-V_n}, n_0 p_0 e^{-V_p} \} y. \end{align*} The structure of \eqref{eqeqpotentialabstract} is suitable to apply \cite[Theorem 4.8]{T10} for the existence of a unique continuous solution provided that $d$ is monotone increasing w.r.t.\ $y$: Indeed, direct computations show \begin{equation*} e^{-y-V_n} + n_0 p_0 e^{y-V_p} \ge 2 \sqrt{n_0 p_0} e^{-\frac{V_p + V_n}{2}}, \qquad \forall y\in\mathbb{R}, \end{equation*} (where the lower bound is attained at the unique minimum $e^{y}=e^{\frac{V_p-V_n}{2}}/\sqrt{n_0 p_0}$). 
Hence, we estimate independently of $\varepsilon$ \begin{align*} \partial_y d(x,y) &\ge e^{-y-V_n} + n_0 p_0 e^{y-V_p} - \min\{ e^{-V_n}, n_0 p_0 e^{-V_p} \} \\ &\ge 2 e^{-\frac{V_n}{2}} \sqrt{n_0 p_0} e^{-\frac{V_p}{2}} - \min\{ e^{-V_n}, n_0 p_0 e^{-V_p} \} > 0, \end{align*} and therefore strict monotonicity of $d$ w.r.t.\ $y$ follows. As a consequence, \eqref{eqeqpotentialabstract} admits a unique solution $\widetilde{\psi_\infty} \in H^1(\Omega) \cap L^\infty(\Omega)$, which is continuous on $\overline \Omega$ and bounded via \[ \| \widetilde{\psi_\infty} \|_{H^1(\Omega)} + \| \widetilde{\psi_\infty} \|_{C(\overline \Omega)} \leq \widetilde{K_\infty} \] where the constant $\widetilde{K_\infty}$ is independent of $\varepsilon$. Going back to $\psi_\infty$, the constraint $\overline{\psi_\infty} = 0$ implies \[ n_\ast = e^{-\int_\Omega \widetilde{\psi_\infty} \, dx}, \] which in turn uniquely determines $\psi_\infty = \widetilde{\psi_\infty} - \int_\Omega \widetilde{\psi_\infty} \, dx$. The bounds on $\widetilde{\psi_\infty}$ directly transfer to $\psi_\infty$. As in \cite{FK18}, we verify the bounds \eqref{eqeqbounds} by solving the two equations in \eqref{eqeqconstants} for $n_\ast > 0$ abbreviating $V_\infty \mathrel{\mathop:}= \max \{ \| V_n \|_{L^\infty(\Omega)}, \| V_p \|_{L^\infty(\Omega)} \}$: \begin{align*} n_\ast &= \frac{\overline{D - \varepsilon n_{tr,\infty}}}{2\overline{e^{-\psi_\infty-V_n}}} + \sqrt{\frac{\overline{D - \varepsilon n_{tr,\infty}}^2}{4\overline{e^{-\psi_\infty-V_n}}^2} + n_0 p_0 \frac{\overline{e^{\psi_\infty-V_p}}}{\overline{e^{-\psi_\infty-V_n}}}} \\ &\leq e^{K_\infty + V_\infty} (\sqrt{n_0p_0} + \varepsilon_0 + | \overline D |). \end{align*} We stress that the same bound is valid also for $p_\ast > 0$, and that the upper and lower bounds on $n_\infty$, $p_\infty$ and $n_{tr,\infty}$ are a consequence of the bounds on $n_\ast$ and $p_\ast$ as well as $n_\ast p_\ast = n_0 p_0$. 
Finally, the estimate \[ \| \psi_\infty \|_{H^2(\Omega)} \leq C \| n_\infty - p_\infty + \varepsilon n_{tr,\infty} - D \|_{L^2(\Omega)} \leq C(K_\infty) \] ensures the higher regularity of $\psi_\infty$. \end{proof} \section{Derivation of an EEP inequality} \label{seceep} As an auxiliary result, we first derive a convenient expression for the entropy relative to the equilibrium. \begin{lemma} \label{lemmarelativeentropy} The entropy relative to the equilibrium equals \begin{align*} &E(n,p,n_{tr},\psi) - E(n_\infty,p_\infty,n_{tr,\infty},\psi_\infty) \\ &\qquad = \int_{\Omega} \bigg( n \ln \frac{n}{n_\infty} - (n-n_\infty) + p \ln \frac{p}{p_\infty} - (p-p_\infty) \\ &\qquad\quad + \frac{\lambda}{2} \big| \nabla (\psi - \psi_\infty) \big|^2 + \varepsilon \int_{n_{tr,\infty}}^{n_{tr}} \left( \ln \frac{s}{1-s} - \ln \frac{n_{tr,\infty}}{1-n_{tr,\infty}} \right) ds \bigg) dx. \end{align*} \end{lemma} \begin{proof} According to the definition of $E(n,p,n_{tr},\psi)$, one has \begin{align*} &E(n,p,n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \\ &= \int_\Omega \bigg( n \ln \frac{n}{n_0 \mu_n}\!-\!n_\infty \ln \frac{n_\infty}{n_0 \mu_n}\!-\!(n\!-\!n_\infty) + p \ln \frac{p}{p_0 \mu_p}\!-\!p_\infty \ln \frac{p_\infty}{p_0 \mu_p}\!-\!(p\!-\!p_\infty) \\ &\quad + \frac{\lambda}{2} \big( |\nabla \psi|^2 - |\nabla \psi_\infty|^2 \big) + \varepsilon \int_{n_{tr,\infty}}^{n_{tr}} \ln \frac{s}{1-s} ds \bigg) dx. \end{align*} We rewrite the first integrand as $ n \ln \frac{n}{n_0 \mu_n} = n \ln \frac{n}{n_\infty} + n \ln \frac{n_\infty}{n_0 \mu_n} $ and use $\frac{n_\infty}{n_0 \mu_n} = \frac{n_\ast}{n_0} e^{-\psi_\infty}$ to find \begin{multline*} \int_\Omega \bigg( n \ln \frac{n}{n_0 \mu_n} - n_\infty \ln \frac{n_\infty}{n_0 \mu_n} - (n - n_\infty) \bigg) dx \\ = \int_\Omega \bigg( n \ln \frac{n}{n_\infty} - (n - n_\infty) + (n - n_\infty) \Big(\ln \frac{n_\ast}{n_0} - \psi_\infty \Big) \bigg) dx. 
\end{multline*} Together with an analogous calculation for the $p$-terms, we obtain \begin{align*} &E(n,p,n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \\ &\quad = \int_\Omega \bigg( n \ln \frac{n}{n_\infty} - (n - n_\infty) + p \ln \frac{p}{p_\infty} - (p - p_\infty) \\ &\qquad + (n - n_\infty) \Big(\ln \frac{n_\ast}{n_0} - \psi_\infty \Big) + (p - p_\infty) \Big(\ln \frac{p_\ast}{p_0} + \psi_\infty \Big) \\ &\qquad + \frac{\lambda}{2} |\nabla \psi|^2 - \frac{\lambda}{2} |\nabla \psi_\infty|^2 + \varepsilon \int_{n_{tr,\infty}}^{n_{tr}} \ln \frac{s}{1-s} \, ds \bigg) dx. \end{align*} We now employ the conservation law $\overline p - \overline{p_\infty} = \overline n - \overline{n_\infty} + \varepsilon (\overline{n_{tr}} - \overline{n_{tr,\infty}})$, the formula $n_\ast p_\ast = n_0 p_0$ and the representation $\frac{p_\ast}{p_0} = \frac{1 - n_{tr,\infty}}{n_{tr,\infty}} e^{-\psi_\infty}$ to derive \begin{align*} &(\overline n - \overline{n_\infty}) \ln \frac{n_\ast}{n_0} + (\overline p - \overline{p_\infty}) \ln \frac{p_\ast}{p_0} \\ &\quad = (\overline n - \overline{n_\infty}) \ln \frac{n_\ast p_\ast}{n_0 p_0} + \varepsilon \int_\Omega (n_{tr} - n_{tr,\infty}) \ln \frac{p_\ast}{p_0} \, dx \\ &\quad = \varepsilon \int_\Omega (n_{tr} - n_{tr,\infty}) \bigg(\ln \frac{1 - n_{tr,\infty}}{n_{tr,\infty}} - \psi_\infty \bigg) dx \\ &\quad = -\varepsilon \int_\Omega \bigg( (n_{tr} - n_{tr,\infty}) \psi_\infty + \int_{n_{tr,\infty}}^{n_{tr}} \ln \frac{n_{tr,\infty}}{1 - n_{tr,\infty}} \, ds \bigg) dx. 
\end{align*} The relative entropy now reads \begin{align*} &E(n,p,n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \\ &\quad = \int_\Omega \bigg( n \ln \frac{n}{n_\infty} - (n - n_\infty) + p \ln \frac{p}{p_\infty} - (p - p_\infty) \\ &\qquad + \frac{\lambda}{2} |\nabla \psi|^2 - \frac{\lambda}{2} |\nabla \psi_\infty|^2 - \big(n - n_\infty - p + p_\infty + \varepsilon (n_{tr} - n_{tr,\infty})\big) \psi_\infty \\ &\qquad + \varepsilon \int_{n_{tr,\infty}}^{n_{tr}} \bigg( \ln \frac{s}{1-s} - \ln \frac{n_{tr,\infty}}{1-n_{tr,\infty}} \bigg) ds \bigg) dx. \end{align*} Poisson's equation $n - n_\infty - p + p_\infty + \varepsilon (n_{tr} - n_{tr,\infty}) = -\lambda \Delta (\psi - \psi_\infty)$ and an integration by parts entail \begin{align*} &E(n,p,n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \\ &\quad = \int_\Omega \bigg( n \ln \frac{n}{n_\infty} - (n - n_\infty) + p \ln \frac{p}{p_\infty} - (p - p_\infty) + \frac{\lambda}{2} |\nabla \psi|^2- \frac{\lambda}{2} |\nabla \psi_\infty|^2 \\ &\qquad - \lambda \nabla (\psi - \psi_\infty) \cdot \nabla \psi_\infty + \varepsilon \int_{n_{tr,\infty}}^{n_{tr}} \bigg( \ln \frac{s}{1-s} - \ln \frac{n_{tr,\infty}}{1-n_{tr,\infty}} \bigg) ds \bigg) dx. \end{align*} The claim now obviously follows from collecting the terms involving $\psi$ and $\psi_\infty$. \end{proof} Following ideas in \cite{GG96,FK18} and \cite{FK20}, we are able to bound the relative entropy essentially in terms of the squared $L^2$ distance between $(n, p, \sqrt{n_{tr}})$ and $(n_\infty, p_\infty, \sqrt{n_{tr,\infty}})$. 
\begin{proposition} \label{propeg} There exists an explicit constant $c_1(K_\infty ) > 0$ satisfying \begin{multline*} E(n, p, n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \\ \leq c_1 \int_\Omega \bigg( \frac{(n-n_\infty)^2}{n_\infty} + \frac{(p-p_\infty)^2}{p_\infty} + \varepsilon \big(\sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \big)^2 \bigg) \, dx \end{multline*} for all $\varepsilon > 0$ and all non-negative $n, p, n_{tr} \in L^2(\Omega)$, $n_{tr} \leq 1$ where $\psi \in H^1(\Omega)$ is the unique solution of \eqref{eqpsi} with $f = n - p + \varepsilon n_{tr} - D$ and $\overline \psi = 0$. \end{proposition} \begin{proof} Applying the elementary inequality $\ln x \leq x - 1$ for $x > 0$, we derive \[ n \ln \frac{n}{n_\infty} - (n-n_\infty) \leq n \Big( \frac{n}{n_\infty} - 1 \Big) - n + n_\infty = \frac{(n-n_\infty)^2}{n_\infty} \] and an analogous estimate involving $p$ and $p_\infty$. Integration by parts with homogeneous Neumann conditions for $\psi$ and $\psi_\infty$ as well as $-\lambda \Delta (\psi - \psi_\infty) = (n - n_\infty) - (p - p_\infty) + \varepsilon (n_{tr} - n_{tr,\infty})$ yield \begin{align*} &\lambda \int_{\Omega} \left| \nabla (\psi-\psi_\infty)\right|^2 dx \\ &\quad = \int_\Omega \big( (n - n_\infty) - (p - p_\infty) + \varepsilon (n_{tr} - n_{tr,\infty}) \big) (\psi - \psi_\infty) \, dx \\ &\quad \leq \frac{1}{2} \bigg( \frac1\delta \| (n - n_\infty) - (p - p_\infty) + \varepsilon (n_{tr} - n_{tr,\infty}) \|^2 + \delta \| \psi - \psi_\infty \|^2 \bigg) \\ &\quad \leq \frac{3L(\Omega)}{2\lambda} \Big( \| n - n_\infty \|^2 + \| p - p_\infty \|^2 + \varepsilon \| n_{tr} - n_{tr,\infty} \|^2 \Big) + \frac{\lambda}{2} \| \nabla (\psi - \psi_\infty) \|^2. 
\end{align*} Here and below, we abbreviate $\| \cdot \| \mathrel{\mathop:}= \| \cdot \|_{L^2(\Omega)}$ and denote by $L(\Omega) > 0$ a constant such that Poincar\'e's estimate $\| f \|^2 \leq L(\Omega) \| \nabla f \|^2$ holds true for all $f \in H^1(\Omega)$ subject to $\overline f = 0$. The estimate in the last line is then a result of $\overline{\psi - \psi_\infty} = 0$ and Poincar\'e's inequality together with the choice $\delta \mathrel{\mathop:}= \lambda/L(\Omega)$, whereas the previous bound follows from H\"older's inequality and Young's inequality with some constant $\delta > 0$. We thus find \begin{multline*} \frac{\lambda}{2} \int_{\Omega} \left| \nabla (\psi-\psi_\infty) \right|^2 dx \\ \leq \frac{3L(\Omega) \max \{ M_\infty, 4\}}{2\lambda} \int_\Omega \bigg( \frac{(n-n_\infty)^2}{n_\infty} + \frac{(p-p_\infty)^2}{p_\infty} + \varepsilon \big(\sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \big)^2 \bigg) \, dx, \end{multline*} where we employed the bounds from \eqref{eqeqbounds} and $\big(\sqrt{n_{tr}} + \sqrt{n_{tr,\infty}} \big)^2 \leq 4$. The last term within the relative entropy including $n_{tr}$ can be controlled as in \cite{FK20}. For convenience, we briefly recall the main arguments. First, there exists for all $x \in \Omega$ some mean value \[ \theta(x) \in (\min\{n_{tr}(x), n_{tr,\infty}(x)\}, \max\{n_{tr}(x), n_{tr,\infty}(x)\}) \] such that \begin{equation} \label{eqlnmeanvalue} \int_{n_{tr,\infty}(x)}^{n_{tr}(x)} \ln \frac{s}{1-s} ds = (n_{tr}(x) - n_{tr,\infty}(x)) \ln \frac{\theta(x)}{1-\theta(x)}. \end{equation} To enhance readability, we shall suppress the $x$-dependence of $n_{tr}$ and $n_{tr,\infty}$ subsequently. We further use the bound $n_{tr,\infty} \in (\mu_\infty, 1-\mu_\infty)$ from \eqref{eqeqbounds} and observe that \[ \left| \int_{n_{tr,\infty}}^{n_{tr}} \ln \frac{s}{1-s} ds \right| \leq \int_{0}^1 \left| \ln \frac{s}{1-s} \right| ds = 2\ln 2 \] for all $x \in \Omega$. 
In combination with \eqref{eqlnmeanvalue}, this estimate entails \[ \left| \ln \frac{\theta(x)}{1-\theta(x)} \right| \big| n_{tr} - n_{tr,\infty} \big| \leq 2 \ln 2. \] By an elementary argument, one can now conclude that $\theta(x) \in (\xi, 1 - \xi)$ where $\xi \in \big( 0,\frac12 \big)$ only depends on $\mu_\infty$. Therefore, we obtain \begin{align*} &\varepsilon \int_\Omega \int_{n_{tr,\infty}}^{n_{tr}} \left( \ln \frac{s}{1-s} - \ln \frac{n_{tr,\infty}}{1-n_{tr,\infty}} \right) ds \, dx \\ &\qquad = \varepsilon \int_\Omega \left( \ln \frac{\theta(x)}{1-\theta(x)} - \ln \frac{n_{tr,\infty}}{1-n_{tr,\infty}} \right) (n_{tr} - n_{tr,\infty}) \, dx \\ &\qquad = \varepsilon \int_\Omega \frac{1}{\sigma(x)(1-\sigma(x))} (\theta(x) - n_{tr,\infty}) (n_{tr} - n_{tr,\infty}) \, dx \end{align*} with some $\sigma(x) \in (\min\{\theta(x), n_{tr,\infty}(x)\}, \max\{\theta(x), n_{tr,\infty}(x)\}) \subset [\xi, 1-\xi]$ employing the mean-value theorem and taking into account that \[ \frac{d}{ds} \ln \frac{s}{1-s} = \frac{1}{s(1-s)}. \] As $(\sigma(x)(1-\sigma(x)))^{-1}$ is uniformly bounded in $\Omega$ in terms of $\xi(\mu_\infty)$, there exists some $c > 0$ only depending on $\mu_\infty$ such that \begin{align*} &\varepsilon \int_\Omega \int_{n_{tr,\infty}}^{n_{tr}} \left( \ln \frac{s}{1-s} - \ln \frac{n_{tr,\infty}}{1-n_{tr,\infty}} \right) ds \, dx \\ &\quad \leq c \varepsilon \int_\Omega |\theta(x) - n_{tr,\infty}| |n_{tr} - n_{tr,\infty}| \, dx \leq c \varepsilon \int_\Omega (n_{tr} - n_{tr,\infty})^2 \, dx \\ &\quad \leq 4 c \varepsilon \int_\Omega \big(\sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \big)^2 \, dx \end{align*} where we employed $|\theta(x) - n_{tr,\infty}| \leq |n_{tr} - n_{tr,\infty}|$ (recall that $\theta(x)$ lies between $n_{tr}$ and $n_{tr,\infty}$) and where the last line results from estimating $\big(\sqrt{n_{tr}} + \sqrt{n_{tr,\infty}} \big)^2 \leq 4$. This proves the claim. \end{proof} The subsequent lemma contains rather non-intuitive estimates for bilinear terms like $(n - n_\infty)(p - p_\infty)$. These expressions will appear in the proof of Proposition \ref{propgd} below.
Throughout the following, admissible functions are assumed to belong to the set \begin{align} \label{eqsetn} \mathcal N \mathrel{\mathop:}= \big\{ (n, p, n_{tr}) \in L^2_+(\Omega)^3 \; : \; n, p \leq M, \ n_{tr} \leq 1 \text{\ a.e.\ in\ } \Omega \big\}. \end{align} \begin{lemma} \label{lemmareactionterms} The following estimates hold true for all $(n, p, n_{tr}) \in \mathcal N$ with explicit constants $\Gamma_1(M) > 0$ and $\Gamma_2 > 0$: \begin{align*} (n - n_\infty) (p - p_\infty) &\leq \Gamma_1 \bigg(\!- R_n \ln \frac{n (1 - n_{tr})}{n_0 \mu_n n_{tr}} - R_p \ln \frac{p n_{tr}}{p_0 \mu_p (1 - n_{tr})} \, \bigg), \\ (n - n_\infty) (-n_{tr} + n_{tr,\infty}) &\leq \Gamma_2 \bigg(\!- R_n \ln \frac{n (1 - n_{tr})}{n_0 \mu_n n_{tr}} + \big( \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \big)^2 \, \bigg), \\ (p - p_\infty) (n_{tr} - n_{tr,\infty}) \\ &\mkern-36mu\mkern-36mu\mkern-18mu \leq \Gamma_2 \bigg(\!- R_p \ln \frac{p n_{tr}}{p_0 \mu_p (1 - n_{tr})} + \big( \sqrt{1 - n_{tr}} - \sqrt{1 - n_{tr,\infty}} \big)^2 \, \bigg). \end{align*} \end{lemma} \begin{proof} As in \cite{GG96} and \cite{FK18}, we first recall the elementary inequalities $(a - a_0)(b - b_0) \leq (\sqrt{ab} - \sqrt{a_0 b_0})^2$ for all $a,a_0,b,b_0 \geq 0$ and $4 (\sqrt x - \sqrt y)^2 \leq (x - y) \ln \frac{x}{y}$ for all $x \geq 0$ and $y > 0$. Concerning the first inequality, we write \[ (n - n_\infty)(p - p_\infty) \leq \big( \sqrt{np} - \sqrt{n_\infty p_\infty} \big)^2 = n_\infty p_\infty \bigg(\sqrt{\frac{np}{n_0 \mu_n p_0 \mu_p}} - 1 \bigg)^2 \] and distinguish the two cases $n_{tr} > \frac12$ and $n_{tr} \leq \frac12$.
In the case $n_{tr} > \frac12$, we infer \begin{align*} (n - n_\infty) (p - p_\infty) &\leq n_0 p_0 \mu_n \mu_p \bigg( \sqrt{ \frac{n}{n_0 \mu_n n_{tr}} } \bigg( \sqrt{ \frac{p}{p_0 \mu_p} n_{tr} } - \sqrt{1 - n_{tr}} \bigg) \\ &\quad + \sqrt{ \frac{1}{n_{tr}} } \bigg( \sqrt{ \frac{n}{n_0 \mu_n} (1 - n_{tr}) } - \sqrt{n_{tr}} \bigg) \bigg)^2 \\ &\leq \Gamma_1(M ) \bigg( \Big( \frac{n}{n_0 \mu_n} \, (1 - n_{tr}) - n_{tr} \Big) \ln \frac{n (1 - n_{tr})}{n_0 \mu_n n_{tr}} \\ &\quad + \Big( \frac{p}{p_0 \mu_p} \, n_{tr} - (1 - n_{tr}) \Big) \ln \frac{p n_{tr}}{p_0 \mu_p (1-n_{tr})} \bigg) \end{align*} employing the $L^\infty$ bound on $n$. Using analogous arguments we derive the same result also in the case $n_{tr} \leq \frac12$. The second inequality arises from \begin{align*} (n - n_\infty)(-n_{tr} + n_{tr,\infty}) &\leq \Big( \sqrt{n(1 - n_{tr})} - \sqrt{n_\infty (1 - n_{tr,\infty})} \Big)^2 \\ &= n_0 \mu_n \bigg( \sqrt{ \frac{n}{n_0 \mu_n} (1 - n_{tr}) } - \sqrt{n_{tr,\infty}} \bigg)^2 \end{align*} where we used the relation $n_\infty (1 - n_{tr,\infty}) = n_0 \mu_n n_{tr,\infty}$ from Proposition \ref{propequilibrium}. The claim is now a consequence of \begin{multline*} n_0 \mu_n \bigg( \bigg( \sqrt{ \frac{n}{n_0 \mu_n} (1 - n_{tr}) } - \sqrt{n_{tr}} \bigg) + \big( \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \big) \bigg)^2 \\ \leq \Gamma_2 \bigg( \Big( \frac{n (1 - n_{tr})}{n_0 \mu_n} - n_{tr} \Big) \ln \frac{n (1 - n_{tr})}{n_0 \mu_n n_{tr}} + \big( \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \big)^2 \bigg). \end{multline*} Similarly, one can also verify the third inequality stated above. \end{proof} The next result establishes an upper bound for the $L^2$ distance between $(n,p)$ and $(n_\infty,p_\infty)$ basically in terms of the entropy production $P$ and $\| \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \|_{L^2(\Omega)}^2$. Similar arguments already appeared in \cite{GG96} and \cite{FK18}. 
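Before turning to the next result, we briefly verify the elementary inequality $4 (\sqrt x - \sqrt y)^2 \leq (x - y) \ln \frac{x}{y}$ recalled above, since it is used again in Step 1 of the proof of Theorem \ref{theoremeepinequality}. As both sides are symmetric under exchanging $x$ and $y$, we may assume $x \geq y > 0$ and set $t \mathrel{\mathop:}= \sqrt{x/y} \geq 1$. The bound $\ln t \geq \frac{2(t-1)}{t+1}$ for $t \geq 1$ (both sides vanish at $t = 1$, and the derivatives satisfy $\frac{1}{t} \geq \frac{4}{(t+1)^2}$ due to $(t-1)^2 \geq 0$) then yields \[ (x - y) \ln \frac{x}{y} = 2 y (t^2 - 1) \ln t \geq 4 y (t - 1)^2 = 4 \big( \sqrt x - \sqrt y \big)^2. \]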
\begin{proposition} \label{propgd} There exists an explicit constant $c_2(M, K_\infty) > 0$ satisfying \begin{multline*} \int_\Omega \bigg( \frac{(n-n_\infty)^2}{n_\infty} + \frac{(p-p_\infty)^2}{p_\infty} \bigg) \, dx \leq c_2 P(n, p, n_{tr}, \psi) \\ + c_2 \, \varepsilon \int_\Omega \bigg( \big( \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \big)^2 + \big( \sqrt{1 - n_{tr}} - \sqrt{1 - n_{tr,\infty}} \big)^2 \bigg) dx \end{multline*} for all $\varepsilon > 0$ and all $(n, p, n_{tr}) \in \mathcal N$ where additionally $n, p \in H^1(\Omega)$ and $\psi \in H^1(\Omega)$ is the unique solution of \eqref{eqpsi} with $f = n - p + \varepsilon n_{tr} - D$ and $\overline \psi = 0$. \end{proposition} \begin{proof} We start by defining intermediate equilibria $N \mathrel{\mathop:}= n_\ast e^{-\psi - V_n}$ and $P \mathrel{\mathop:}= p_\ast e^{\psi - V_p}$ (where $P$ is only used within this proof and not to be confused with the entropy production) which fulfil $J_n(N, \psi) = J_p(P, \psi) = 0$. Due to $J_n(n, \psi) = N \nabla (\frac{n}{N})$ and $J_p(p, \psi) = P \nabla (\frac{p}{P})$, we derive the following lower bounds for the flux terms involving $J_n$ and $J_p$: \begin{align*} \frac{|J_n|^2}{n_\infty n} &= \frac{N^2}{n_\infty n} \bigg| \nabla \Big( \frac{n}{N} \Big) \bigg|^2 = \frac{N^2}{n_\infty n} \bigg| \frac{n_\infty}{N} \nabla \Big( \frac{n}{n_\infty} \Big) + \frac{n}{n_\infty} \nabla \Big( \frac{n_\infty}{N} \Big) \bigg|^2 \\ &= \frac{N^2}{n_\infty n} \bigg| e^{\psi - \psi_\infty} \nabla \Big( \frac{n}{n_\infty} \Big) + \frac{n}{n_\infty} e^{\psi - \psi_\infty} \nabla (\psi - \psi_\infty) \bigg|^2 \\ &\geq 2 \frac{N^2}{n_\infty^2} e^{2(\psi - \psi_\infty)} \nabla \Big( \frac{n}{n_\infty} \Big) \cdot \nabla (\psi - \psi_\infty) = 2 \nabla (\psi - \psi_\infty) \cdot \nabla \Big( \frac{n - n_\infty}{n_\infty} \Big).
\end{align*} In the same way, we obtain $\frac{|J_p|^2}{p_\infty p} \geq -2 \nabla (\psi - \psi_\infty) \cdot \nabla ( \frac{p - p_\infty}{p_\infty} )$ and, therefore, \begin{multline*} \frac{\lambda}{2} \int_{\Omega} \bigg( \frac{|J_n|^2}{n_\infty n} + \frac{|J_p|^2}{p_\infty p} \bigg) dx \geq \lambda \int_\Omega \nabla (\psi - \psi_\infty) \cdot \nabla \bigg( \frac{n - n_\infty}{n_\infty} - \frac{p - p_\infty}{p_\infty} \bigg) dx \\ = \int_\Omega \Big( (n - n_\infty) - (p - p_\infty) + \varepsilon (n_{tr} - n_{tr,\infty}) \Big) \bigg( \frac{n - n_\infty}{n_\infty} - \frac{p - p_\infty}{p_\infty} \bigg) dx \end{multline*} via integration by parts and Poisson's equation. Rearranging this inequality now yields \begin{align*} &\int_\Omega \bigg( \frac{(n-n_\infty)^2}{n_\infty} + \frac{(p-p_\infty)^2}{p_\infty} \bigg) dx \\ &\quad \leq \frac{\lambda}{2} \int_{\Omega} \bigg( \frac{|J_n|^2}{n_\infty n} + \frac{|J_p|^2}{p_\infty p} \bigg) dx + \int_{\Omega} \bigg( \Big( \frac{1}{n_\infty} + \frac{1}{p_\infty} \Big) (n - n_\infty)(p - p_\infty) \\ &\qquad + \varepsilon (-n_{tr} + n_{tr,\infty}) \frac{n - n_\infty}{n_\infty} + \varepsilon (n_{tr} - n_{tr,\infty}) \frac{p - p_\infty}{p_\infty} \bigg) dx. \end{align*} Together with the bounds from \eqref{eqeqbounds} and Lemma \ref{lemmareactionterms}, we arrive at the desired result. \end{proof} We are now in a position to prove the EEP inequality from Theorem \ref{theoremeepinequality}, where the main task is to provide an appropriate bound on $(\sqrt{n_{tr}} - \sqrt{n_{tr,\infty}})^2$. 
\begin{proof}[Proof of Theorem \ref{theoremeepinequality}] \noindent \textbf{Step 1.} Due to \eqref{eqntrrelations} we easily calculate \begin{align*} \sqrt{n_{tr}} &- \sqrt{\frac{n}{n_0 \mu_n} (1 - n_{tr})} = \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} - \sqrt{\frac{n}{n_0 \mu_n} (1 - n_{tr})} \\ &\qquad + \sqrt{\frac{n_\infty}{n_0 \mu_n} (1 - n_{tr})} - \sqrt{\frac{n_\infty}{n_0 \mu_n} (1 - n_{tr})} + \sqrt{\frac{n_\infty}{n_0 \mu_n} (1 - n_{tr,\infty})} \\ &= \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} - \sqrt{\frac{1 - n_{tr}}{n_0 \mu_n}} \Big( \sqrt{n} - \sqrt{n_\infty} \Big) \\ &\qquad + \sqrt{\frac{n_\infty}{n_0 \mu_n}} \Big( \sqrt{1 - n_{tr,\infty}} - \sqrt{1 - n_{tr}} \Big). \end{align*} Observing that $\sqrt{n_{tr}} - \sqrt{n_{tr,\infty}}$ and $\sqrt{1 - n_{tr,\infty}} - \sqrt{1 - n_{tr}}$ have the same sign and using the inequality $4 (\sqrt x - \sqrt y)^2 \leq (x - y) \ln \frac{x}{y}$ for $x \geq 0$ and $y > 0$, we reformulate the previous identity to find \begin{align*} \big( \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} \big)^2 &\leq \bigg( \sqrt{n_{tr}} - \sqrt{n_{tr,\infty}} + \sqrt{\frac{n_\infty}{n_0 \mu_n}} \Big( \sqrt{1 - n_{tr,\infty}} - \sqrt{1 - n_{tr}} \Big) \bigg)^2 \\ &\leq 2 \bigg( \sqrt{n_{tr}} - \sqrt{\frac{n}{n_0 \mu_n} (1 - n_{tr})} \bigg)^2 + \frac{2}{n_0 \mu_n} \big( \sqrt{n} - \sqrt{n_\infty} \big)^2 \\ &\leq \frac12 \Big( \frac{n (1 - n_{tr})}{n_0 \mu_n} - n_{tr} \Big) \ln \frac{n (1 - n_{tr})}{n_0 \mu_n n_{tr}} + \frac{2}{n_0 \mu_n} \frac{(n - n_\infty)^2}{n_\infty}. \end{align*} Along the same lines, we also deduce \begin{multline*} \big( \sqrt{1 - n_{tr}} - \sqrt{1 - n_{tr,\infty}} \big)^2 \\ \leq \frac12 \Big( \frac{p n_{tr}}{p_0 \mu_p} - (1 - n_{tr}) \Big) \ln \frac{p n_{tr}}{p_0 \mu_p (1 - n_{tr})} + \frac{2}{p_0 \mu_p} \frac{(p - p_\infty)^2}{p_\infty}. 
\end{multline*} \noindent \textbf{Step 2.} We can now improve the claim of Proposition \ref{propeg} in the sense that there exists a constant $c_1(K_\infty ) > 0$ satisfying \begin{multline*} E(n, p, n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \\ \leq c_1 P(n, p, n_{tr}, \psi) + c_1 \int_\Omega \bigg( \frac{(n-n_\infty)^2}{n_\infty} + \frac{(p-p_\infty)^2}{p_\infty} \bigg) \, dx \end{multline*} for all $(n, p, n_{tr}) \in \mathcal N$ where $\psi \in H^1(\Omega)$ is the unique solution of \eqref{eqpsi} with $f = n - p + \varepsilon n_{tr} - D$ and $\overline \psi = 0$ for any $\varepsilon > 0$. Furthermore, we notice that Proposition \ref{propgd} now gives rise to a constant $c_2(M, K_\infty ) > 0$ such that \begin{multline*} \int_\Omega \bigg( \frac{(n-n_\infty)^2}{n_\infty} + \frac{(p-p_\infty)^2}{p_\infty} \bigg) \, dx \\ \leq c_2 P(n, p, n_{tr}, \psi) + c_2 \, \varepsilon \int_\Omega \bigg( \frac{(n - n_\infty)^2}{n_\infty} + \frac{(p - p_\infty)^2}{p_\infty} \bigg) dx \end{multline*} holds true for all $\varepsilon > 0$ and all $(n, p, n_{tr}) \in \mathcal N$ with $n, p \in H^1(\Omega)$ and $\psi \in H^1(\Omega)$ being the unique solution of \eqref{eqpsi} with $f = n - p + \varepsilon n_{tr} - D$ and $\overline \psi = 0$. \noindent \textbf{Step 3.} If we restrict $\varepsilon$ to the interval $\big( 0, \tfrac{1}{2c_2} \big)$, we finally arrive at \[ E(n, p, n_{tr}, \psi) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \leq (c_1 + 2 c_1 c_2) P(n, p, n_{tr}, \psi) \] employing the notation from Step 2. \end{proof} \section{Proof of the exponential convergence} \label{secconv} As soon as the weak entropy production law \eqref{eqweakeplaw} is settled, the exponential decay of the relative entropy arises from a Gronwall argument as carried out in \cite{FK20} (see also \cite{Wil65,Bee75}). 
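In schematic form (and assuming that the weak entropy production law \eqref{eqweakeplaw} takes the usual integral form), the Gronwall argument reads as follows. Abbreviating the relative entropy by $\mathcal{E}(t) \mathrel{\mathop:}= E(n,p,n_{tr},\psi)(t) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty)$, the law \eqref{eqweakeplaw} combined with the EEP inequality of Theorem \ref{theoremeepinequality} yields, for $0 < t_0 \leq t_1$,
\[ \mathcal{E}(t_1) \leq \mathcal{E}(t_0) - \int_{t_0}^{t_1} P \, dt \leq \mathcal{E}(t_0) - C_{EEP}^{-1} \int_{t_0}^{t_1} \mathcal{E}(t) \, dt, \]
and the integral form of Gronwall's lemma then gives $\mathcal{E}(t_1) \leq \mathcal{E}(t_0) \, e^{-C_{EEP}^{-1} (t_1 - t_0)}$.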
\begin{proof}[Proof of Theorem \ref{theoremexpconvergence}] The weak entropy production law \eqref{eqweakeplaw} readily follows from \eqref{eqentropy} and \eqref{eqproduction} for $0 < t_0 \leq t_1 < \infty$ utilizing the regularity and bounds on $n$, $p$, $n_{tr}$, and $\psi$ from Proposition \ref{propglobalsolution}. The statement of the theorem is then a consequence of Theorem \ref{theoremeepinequality} applied to the global solution $(n, p, n_{tr}, \psi)$. Note, however, that the weak entropy production law \eqref{eqweakeplaw} only allows us to derive \begin{multline*} E(n,p,n_{tr},\psi)(t) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty) \\ \leq (E(n,p,n_{tr},\psi)(t_0) - E(n_\infty, p_\infty, n_{tr,\infty}, \psi_\infty)) e^{-C_{EEP}^{-1} (t-t_0)} \end{multline*} for all $t_0 \in (0, t]$. The assertion in \eqref{eqexpconventropy} is then a consequence of the fact that the entropy $E(n,p,n_{tr},\psi)(t_0)$ extends continuously as $t_0 \rightarrow 0$ since $n,p \in C([0,T), L^2(\Omega))$, $n_{tr} \in C([0,T), L^\infty(\Omega))$, and $\psi \in C([0,T), H^2(\Omega))$ for all $T>0$ by Proposition \ref{propglobalsolution}. \end{proof} Exponential convergence in $L^\infty$ and $H^2$ for $(n,p,n_{tr})$ and $\psi$, respectively, is a consequence of standard regularity techniques, which have been partially employed already in \cite{FK20}. As a prerequisite, we formulate a Csisz\'ar--Kullback--Pinsker-type inequality, which we believe to be well known; as we were not able to find a precise reference, however, we provide a proof in the subsequent lemma. \begin{lemma}[A Csisz\'ar--Kullback--Pinsker-type inequality] \label{lemmackp} Let $f, g : \Omega \rightarrow \mathbb R$ be non-negative measurable functions with $g$ strictly positive. Then, \[ \int_\Omega \bigg( f \ln \frac{f}{g} - f + g \bigg) dx \geq \frac{3}{2 \overline f + 4 \overline g} \| f - g \|_{L^1(\Omega)}^2. 
\] \end{lemma} \begin{proof} Going back to an idea of Pinsker, we first prove the elementary inequality $h(u) \mathrel{\mathop:}= (2u+4)(u \ln u - u + 1) - 3(u-1)^2 \geq 0$ for scalar $u \geq 0$. The claim follows from the identities $h(1) = h'(1) = h''(1) = 0$ and the sign of $h'''(u) = \tfrac{4}{u} - \tfrac{4}{u^2}$ for $u > 1$ and $u < 1$, respectively. As a consequence, we obtain \begin{align*} \| f - g \|_{L^1(\Omega)} &= \int_\Omega \bigg| \frac{f}{g} - 1 \bigg| g \, dx \leq \int_\Omega \frac{g}{\sqrt{3}} \sqrt{\frac{2f}{g} + 4} \sqrt{\frac{f}{g} \ln \frac{f}{g} - \frac{f}{g} + 1} \, dx \\ &\leq \frac{1}{\sqrt{3}} \sqrt{\int_\Omega (2f + 4g) \, dx} \sqrt{\int_\Omega \Big( f \ln \frac{f}{g} - f + g \Big) \, dx}, \end{align*} where we employed H\"older's inequality in the last step. \end{proof} \begin{proof}[Proof of Corollary \ref{corconvlinfty}] An immediate consequence of the exponential decay of the relative entropy as stated in \eqref{eqexpconventropy} is the exponential convergence to the equilibrium of $n(t)$ and $p(t)$ in $L^1(\Omega)$, and of $n_{tr}(t)$ in $L^2(\Omega)$. To see this, we first recall the explicit representation of the relative entropy from Lemma \ref{lemmarelativeentropy}. Lemma \ref{lemmackp} allows us to control \[ \int_\Omega \left( n \ln \frac{n}{n_\infty} - (n - n_\infty) \right) dx \geq \frac{3}{2 \overline n + 4 \overline{n_\infty}} \| n - n_\infty \|_{L^1(\Omega)}^2 \geq c \| n - n_\infty \|_{L^1(\Omega)}^2 \] and analogously $\| p - p_\infty \|_{L^1(\Omega)}^2$ in terms of a (rough) constant $c(M, K_\infty) > 0$. 
Next, we notice that \[ \frac{d}{ds} \ln \left( \frac{s}{1-s} \right) = \frac{1}{s(1-s)} \geq 4 \] is valid for all $s \in (0,1)$, which enables us to estimate \begin{multline*} \varepsilon \int_\Omega \int_{n_{tr,\infty}}^{n_{tr}} \left( \ln \left( \frac{s}{1-s} \right) - \ln \left( \frac{n_{tr,\infty}}{1 - n_{tr,\infty}} \right) \right) ds \, dx \\ = \varepsilon \int_\Omega \int_{n_{tr,\infty}}^{n_{tr}} \frac{1}{\sigma(s)(1 - \sigma(s))} (s - n_{tr,\infty}) \, ds \, dx \geq 2 \varepsilon \| n_{tr} - n_{tr,\infty} \|_{L^2(\Omega)}^2 \end{multline*} where $\sigma(s)$ serves as an intermediate point between $n_{tr,\infty}$ and $s$. As a preparation for the exponential convergence of $n$ and $p$ in $L^\infty(\Omega)$, we adapt an argument from \cite{FK20} to establish a polynomially growing $W^{1,q}(\Omega)$ bound on $n$ for $q \geq 4$. (In fact, we shall only require such a bound for $q = 6$ below.) The same technique is also applicable to $p$. As in the proof of Proposition \ref{propglobalsolution}, we set w.l.o.g.\ $\tau_n = \tau_p = 1$ leading to \begin{align*} \partial_t n = \Delta n + \nabla n \cdot \nabla (\psi + V_n) + n \Delta (\psi + V_n) + n_{tr} - \frac{n}{n_0 \mu_n} (1-n_{tr}). \end{align*} Using $-|\nabla n|^{q-2} \Delta n$ as a test function and recalling $\hat n \cdot \nabla n = 0$ on $\partial \Omega$ entails \begin{multline*} \frac1{q(q-1)} \frac{d}{dt} \int_\Omega |\nabla n|^q \, dx = \frac1{q-1} \int_\Omega |\nabla n|^{q-2} \nabla n \cdot \nabla \partial_t n \, dx = - \int_\Omega |\nabla n|^{q-2} \Delta n \, \partial_t n \, dx \\ = - \int_\Omega |\nabla n|^{q-2} |\Delta n|^2 \, dx - \int_\Omega |\nabla n|^{q-2} \Delta n \nabla n \cdot \nabla (\psi + V_n) \, dx \\ - \int_\Omega |\nabla n|^{q-2} \Delta n \, n \Delta (\psi + V_n) \, dx - \int_\Omega |\nabla n|^{q-2} \Delta n \Big(n_{tr} - \frac{n}{n_0 \mu_n} (1 - n_{tr}) \Big) \, dx. 
\end{multline*} By estimating the third line with Young's inequality via \[ \Big| \Delta n \, n \Delta (\psi + V_n) + \Delta n \Big(n_{tr} - \frac{n}{n_0 \mu_n} (1 - n_{tr}) \Big) \Big| \leq \frac12 |\Delta n|^2 + \frac12 C_2^2 \] with a constant $C_2(M) > 0$, and by observing that \begin{multline*} \Big| \int_\Omega |\nabla n|^{q-2} \Delta n \nabla n \cdot \nabla (\psi + V_n) \, dx \Big| = \Big| \int_\Omega \frac{1}{q} \nabla \big( |\nabla n|^q \big) \cdot \nabla (\psi + V_n) \, dx \Big| \\ \leq \frac{1}{q} \int_\Omega |\nabla n|^q \big| \Delta (\psi + V_n) \big| \, dx \leq \frac1q \int_\Omega |\nabla n|^q \, C_1 \, dx \end{multline*} with another constant $C_1(M)>0$, we calculate \begin{multline*} \frac1{q(q-1)} \frac{d}{dt} \int_\Omega |\nabla n|^q \, dx \leq - \int_\Omega |\nabla n|^{q-2} |\Delta n|^2 \, dx \\ + \frac1q \int_\Omega |\nabla n|^q \, C_1 \, dx + \int_\Omega |\nabla n|^{q-2} \Big( \frac12 |\Delta n|^2 + \frac12 C_2^2 \Big) \, dx. \end{multline*} We rewrite the first term in the second line by another integration by parts and Young's inequality, which leads us to \begin{align*} \frac1q \int_\Omega |\nabla n|^q \, dx &= \frac1q \int_\Omega |\nabla n|^{q-2} \nabla n \cdot \nabla n \, dx = - \frac{q-1}{q} \int_\Omega |\nabla n|^{q-2} \Delta n \, n \, dx \\ &\leq \frac{1}{2C_1} \int_\Omega |\nabla n|^{q-2} |\Delta n|^2 \, dx + \frac{C_1M^2}{2} \int_\Omega |\nabla n|^{q-2} \, dx. \end{align*} The previous estimates now guarantee that \[ \frac{d}{dt} \int_\Omega |\nabla n|^q \, dx \leq C_3 \int_\Omega |\nabla n|^{q-2} \, dx, \] with a constant $C_3(M, q) > 0$. Choosing $t_0 > 0$ and $t \geq t_0$ arbitrarily and utilizing $|\Omega| = 1$, one has \[ \| \nabla n(t) \|_{L^q(\Omega)}^q \leq \| \nabla n(t_0) \|_{L^q(\Omega)}^q + C_3 \int_{t_0}^t \| \nabla n(s) \|_{L^q(\Omega)}^{q-2} \, ds. \] The polynomial growth of $\| \nabla n \|_{L^q(\Omega)}$ is then obtained by an elementary Gronwall lemma (see e.g. \cite{Bee75}). 
In detail, \begin{align} \label{eqboundgradn} \| \nabla n(t) \|_{L^q(\Omega)} \leq \Big( \| \nabla n(t_0) \|_{L^q(\Omega)}^2 + C_3 (t-t_0) \Big)^{\frac12}. \end{align} Exponential convergence to the equilibrium for $n$ in $L^q(\Omega)$, $1 < q < \infty$, is easily deduced from the exponential convergence of $n$ in $L^1(\Omega)$ as settled above and the $L^\infty(\Omega)$ bounds on $n$ and $n_\infty$ in \eqref{equpperboundnp} and \eqref{eqeqbounds}, respectively, by writing \[ \| n - n_\infty \|_{L^q(\Omega)}^q \leq \| n - n_\infty \|_{L^\infty(\Omega)}^{q-1} \| n - n_\infty \|_{L^1(\Omega)} \leq C e^{-c t} \] where the constants $c(M, K_\infty, q), C(M, K_\infty, q) > 0$ are independent of $\varepsilon$ for $\varepsilon \in (0, \varepsilon_0]$. For $q = 2$ and together with the same bound on $p$ and the estimate \[ \| \psi - \psi_\infty \|_{H^2(\Omega)} \leq C \big( \| n - n_\infty \|_{L^2(\Omega)} + \| p - p_\infty \|_{L^2(\Omega)} + \varepsilon \| n_{tr} - n_{tr,\infty} \|_{L^2(\Omega)} \big), \] this directly implies the exponential convergence of $\psi$ in $H^2(\Omega)$. The Gagliardo--Nirenberg--Moser interpolation inequality now allows us to infer exponential convergence of $n$ and $p$ in $L^\infty(\Omega)$. In fact, the bound on $\| \nabla n \|_{L^{6}(\Omega)}$ in \eqref{eqboundgradn} entails \begin{equation} \label{eqconvlinfty} \|n-n_\infty\|_{L^{\infty}(\Omega)} \le C \| n-n_\infty \|_{W^{1,6}(\Omega)}^{\frac12} \| n-n_\infty \|_{L^{6}(\Omega)}^{\frac12} \leq C e^{-c t}, \end{equation} with constants $c(M, K_\infty), C(M, K_\infty) > 0$. The exponential convergence of $n_{tr}$ in $L^\infty(\Omega)$ can be verified essentially along the same lines as in \cite{FK20}. We, therefore, omit some technical details and set w.l.o.g.\ $\tau_n = \tau_p = 1$. 
By defining $u \mathrel{\mathop:}= n_{tr} - n_{tr,\infty}$, one derives the following pointwise relation by inserting $\pm n_{tr,\infty}$ several times and by applying the identities from \eqref{eqntrrelations}: \begin{align*} \varepsilon \, \partial_t u &= R_p - R_n = \bigg( 1 - n_{tr} - \frac{p}{p_0 \mu_p} n_{tr} \bigg) - \bigg( n_{tr} - \frac{n}{n_0 \mu_n} (1 - n_{tr}) \bigg) \\ &= - u \left(2 + \frac{p_\infty}{p_0 \mu_p} + \frac{n_\infty}{n_0 \mu_n}\right) - \frac{n_{tr}}{p_0 \mu_p} \left(p-p_\infty\right) + \frac{(1 - n_{tr})}{n_0 \mu_n} \left(n-n_\infty\right). \end{align*} By recalling $0 \leq n_{tr} \leq 1$, $\mu_n = e^{-V_n}$, and $\mu_p = e^{-V_p}$ with $V_n, V_p \in L^\infty(\Omega)$, and by employing \eqref{eqconvlinfty}, we find \begin{align*} \frac{d}{dt} \|u(t, \cdot)\|_{L^\infty(\Omega)} \leq - \frac{2}{\varepsilon} \|u(t, \cdot)\|_{L^\infty(\Omega)} + \frac{C}{\varepsilon} e^{-c t} \end{align*} where $c, C > 0$ depend on $M$ and $K_\infty$ but not on $\varepsilon$ for $\varepsilon \in (0, \varepsilon_0]$. If we choose $c > 0$ sufficiently small satisfying $\varepsilon_0 c \leq 1$, we arrive at \begin{multline*} \|n_{tr}(t, \cdot) - n_{tr,\infty} \|_{L^\infty(\Omega)} \le e^{-2 t/\varepsilon} + \frac{C}{\varepsilon} \int_0^t e^{ -2 (t-s) /\varepsilon - c s}\,ds \\ \le e^{-2 t/\varepsilon} + e^{-2 t /\varepsilon} \frac{C}{2 - \varepsilon c} \left( e^{ (2/\varepsilon - c)t} - 1\right) \le e^{-2 t/\varepsilon_0} + C e^{ - c t}. \end{multline*} Finally, estimate \eqref{eqexpconvlinfty} is proven. \end{proof} \section{Conclusion and Outlook} We have derived a so-called entropy--entropy production (EEP) inequality for a recombination--drift--diffusion--Poisson system modelling the dynamics of electrons and holes on separate energy levels in semiconductor materials. 
We have then employed this EEP inequality to establish exponential convergence to the equilibrium for the densities of the involved charge carriers and the self-consistent electrostatic potential. However, several simplifying hypotheses have been imposed (cf. Assumption \ref{assump}), which allowed for a transparent presentation of the main ideas of the proof but which prevent one from directly applying the results to real-world semiconductor devices. We conclude the article with a couple of comments on possible future research. \begin{itemize} \item Instead of considering trapped states allowing for a limited number of electrons, one can---at least from a mathematical perspective---also consider trapped states attracting holes. This could be achieved by replacing $n_{tr}$ by the density of \emph{trapped holes} $p_{tr}$, and by appropriately reformulating model \eqref{eqsystem}. As the structure of the resulting system remains essentially unchanged, we believe that our results also transfer to this situation. \item Concerning the presence of multiple trap levels or even a continuous distribution of trap levels within the bandgap of the semiconductor as in \cite{GMS07}, we stress that this requires a different definition of the entropy functional involving the density $n_{tr}^\eta$ of occupied trapped states with energy $\eta \in [E_\mathrm{min}, E_\mathrm{max}]$. Assuming all constants to be independent of $\eta$ (which also enforces the equilibria $n_{tr,\infty}^\eta$ to coincide), one can redefine the entropy functional as \begin{align} \label{eqentropynew} E(n, p, \{n_{tr}^\eta\}_\eta, \psi) &\mathrel{\mathop:}= \int_{\Omega} \bigg( n \ln \frac{n}{n_0 \mu_n} - (n-n_0\mu_n) + p \ln \frac{p}{p_0 \mu_p} - (p-p_0\mu_p) \\ &\qquad \quad + \frac{\lambda}{2} \big| \nabla \psi \big|^2 + \varepsilon \int_{E_\mathrm{min}}^{E_\mathrm{max}} \int_{1/2}^{n_{tr}^\eta} \ln \left( \frac{s}{1-s} \right) ds \, d\eta \bigg) dx. 
\nonumber \end{align} In this context, our strategy leads to the same results by following the same line of argument with some minor adaptations. But as soon as $n_0$ and $p_0$ depend on $\eta$, it already appears infeasible to derive an expression for the entropy production $P$ similar to \eqref{eqproduction} (at least if the integral over $\eta$ is placed in front of the right-hand side of \eqref{eqentropynew}). Further studies in this direction shall be carried out in a subsequent project. \item In order to generalise our framework to the setting of semiconductor devices subject to non-trivial boundary conditions and space-dependent material parameters, the following questions have to be addressed: How to prove the existence of global solutions (observing that heterostructures are not covered by \cite{GMS07})? How to incorporate boundary conditions into the entropy functional while preserving the non-negativity of $E$ and $P$ along with $P = -\tfrac{d}{dt} E$? Moreover, the fluxes $J_n$, $J_p$ and the reactions $R_n$, $R_p$ are now typically non-zero at equilibrium. Is it, therefore, reasonable to expect relative fluxes and relative reactions to appear in the entropy production? To our knowledge, questions of equilibration of such models are largely open. \end{itemize} \end{document}
\begin{document} \title{Bosonic data hiding: power of linear vs non-linear optics} \author{Krishna Kumar Sabapathy} \email{krishnakumar.sabapathy@gmail.com} \affiliation{Xanadu, 777 Bay Street, Toronto ON, M5G 2C8, Canada} \affiliation{Departament de F\'{\i}sica: Grup d'Informaci\'{o} Qu\`{a}ntica, Universitat Aut\`{o}noma de Barcelona, 08193 Bellaterra (Barcelona), Spain} \author{Andreas Winter} \email{andreas.winter@uab.cat} \affiliation{Departament de F\'{\i}sica: Grup d'Informaci\'{o} Qu\`{a}ntica, Universitat Aut\`{o}noma de Barcelona, 08193 Bellaterra (Barcelona), Spain} \affiliation{ICREA---Instituci\'o Catalana de Recerca i Estudis Avan\c{c}ats, Pg.~Lluis Companys, 23, 08010 Barcelona, Spain} \date{2 February 2021} \begin{abstract} We show that the positivity of the Wigner function of Gaussian states and measurements provides an elegant way to bound the discriminating power of ``linear optics'', which we formalise as Gaussian measurement operations augmented by classical (feed-forward) communication (GOCC). This allows us to reproduce and generalise the result of Takeoka and Sasaki [PRA 78:022320, 2008], which tightly characterises the GOCC norm distance of coherent states, separating it from the optimal distinguishability according to Helstrom's theorem. Furthermore, invoking ideas from classical and quantum Shannon theory we show that there are states, each a probabilistic mixture of multi-mode coherent states, which are exponentially reliably discriminated in principle, but appear exponentially close judging from the output of GOCC measurements. In analogy to LOCC data hiding, which shows an irreversibility in the preparation and discrimination of states by the restricted class of local operations and classical communication (LOCC), we call the present effect \emph{GOCC data hiding}. 
We also present general bounds in the opposite direction, guaranteeing a minimum of distinguishability under measurements with positive Wigner function, for any bounded-energy states that are Helstrom distinguishable. We conjecture that a similar bound holds for GOCC measurements. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} One of the most basic problems of quantum information theory is the discrimination of two alternatives (``hypotheses''), each of which represents a possible state of a system, $\rho_0$ or $\rho_1$. Under the formalism of quantum mechanics, this calls for the design of a measurement and a decision rule to choose between the two options based on the measurement outcome. The measurement is a binary resolution of unity, also called a positive operator valued measure (POVM), $(M_0=M,M_1=\1-M)$, consisting of two positive semidefinite operators $M_0, M_1 \geq 0$ summing to $M_0+M_1=\1$. The outcome $M_{\hat{i}}$ of the measurement is intended to correspond to the estimate $\hat{i}$ of the true state $\rho_i$. For simplicity, we will assume that the two hypotheses come with equal (uniform) prior probabilities, so the error probability is \begin{equation}\begin{split} \label{eq:HelstromHolevo} P_e &= \frac12 \tr\rho_0 (\1-M) + \frac12 \tr\rho_1 M \\ &= \frac12 \bigl( 1-\tr (\rho_0-\rho_1)M \bigr). \end{split}\end{equation} The minimum error over all quantum mechanically allowed POVMs gives rise to the trace norm, \begin{equation} \label{eq:tracenorm} \min_{0\leq M\leq\1} P_e = \frac12 \left( 1-\frac12\|\rho_0-\rho_1\|_1 \right), \end{equation} a formula nowadays known as the Helstrom bound \cite{Helstrom,Helstrom:book} or the Holevo-Helstrom bound \cite{Holevo:dist}, since it was initially proved only for projective measurements and only subsequently for generalised measurements. 
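For the reader's convenience, we recall the standard argument behind Eq.~(\ref{eq:tracenorm}). Write $\rho_0-\rho_1 = \Delta_+ - \Delta_-$ with positive semidefinite $\Delta_\pm$ of orthogonal supports (the Jordan decomposition); since $\tr(\rho_0-\rho_1)=0$, we have $\tr\Delta_+ = \tr\Delta_- = \frac12\|\rho_0-\rho_1\|_1$. Then, for any $0\leq M\leq\1$,
\begin{equation*} \tr (\rho_0-\rho_1) M = \tr \Delta_+ M - \tr \Delta_- M \leq \tr \Delta_+ = \frac12 \|\rho_0-\rho_1\|_1, \end{equation*}
with equality for the projector $M$ onto the support of $\Delta_+$; inserting this into Eq.~(\ref{eq:HelstromHolevo}) yields Eq.~(\ref{eq:tracenorm}).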
However, from the beginning of quantum detection theory, it was understood that -- depending on the physical system -- the Helstrom optimal measurement may not be easily implemented. Indeed, the very example of the discrimination of two coherent states of an optical mode was already considered by Helstrom \cite{Helstrom:coherent}, who contrasted the absolute minimum error probability with the performance of reasonable practical measurements. Mathematically, this means that the minimisation on the l.h.s. of Eq.~(\ref{eq:tracenorm}) is performed over a smaller set of POVMs, the restriction being an expression of what is deemed physically feasible. Consequently, the error probability becomes larger, in some interesting cases close to $\frac12$ even for orthogonal, i.e.~ideally perfectly distinguishable, states. This phenomenon was first observed in bipartite systems under the restriction of local operations and classical communication (LOCC), and dubbed \emph{data hiding} \cite{Terhal-datahiding,DiVincenzo-datahiding}; it has been generalised to multi-party settings \cite{EggelingWerner}, and analysed extensively \cite{MWW,LW,W:eff}. In the present paper we will look at a different kind of restriction, in Bosonic quantum systems, motivated by the distinction between phase-space \emph{linear} (aka \emph{Gaussian}) and \emph{non-linear} (i.e.~non-Gaussian) operations, see also \cite{KKVV}. It is well-known that a process that starts from a Gaussian state and proceeds only via Gaussian operations, including Gaussian measurements and classical feed-forward (GOCC, see below), is in a certain sense very far away from the full complexity of quantum mechanics: indeed, such a process can be simulated efficiently on a classical computer \cite{bartlett,mari} and hence, unless BQP=BPP, is not quantum computationally universal. 
In other words, non-Gaussianity is a resource for computation, which it becomes quite explicitly in proposals of optical quantum computing such as the Knill-Laflamme-Milburn scheme \cite{KLM} that relies on photon detection and otherwise passive linear optics. Here we show that non-Gaussianity is a resource for the basic task of binary hypothesis testing. In particular we show how to leverage simple properties of the Wigner function not only to prove a limitation of the power of Gaussian operations, but also to construct data hiding with respect to GOCC. To conclude the introduction, a word on terminology: we refer to Gaussian states and channels as ``linear'', because the latter are described by linear transformations in the phase space of the canonical variables $x$ and $p$. Conversely, ``non-linear'' is anything outside the Gaussian set. Note, however, that in parts of quantum optics a narrower concept is used, whereby only those channels are considered linear that are built from passive Gaussian unitaries, perhaps admitting displacement operators. The rest of the paper is structured as follows: In the next section (\ref{sec:gaussian}) we recall the necessary formalism and notation of quantum harmonic oscillators and Gaussian Bosonic states and operations; particularly useful for our purposes will be the phase-space methods based on Wigner functions. Then, in Section~\ref{sec:GOCC} we specialise the general framework of restricted measurements to Gaussian quantum operations and arbitrary classical computations (GOCC), and the important relaxation of this class to measurements with non-negative Wigner functions (W+). We use these in Section~\ref{sec:GOCC-vs-ALL} to analyse the optimal GOCC measurement to distinguish two coherent states, reproducing (with a conceptually much simpler proof) a result of Takeoka and Sasaki~\cite{TakeokaSasaki}. 
The GOCC distinguishability of any two distinct coherent states is only a little, but always strictly, worse than the optimal distinguishability according to Helstrom \cite{Helstrom,Holevo:dist}. Motivated by this, in Section~\ref{sec:hiding} we exhibit examples of multimode states, each a mixture of coherent states (hence ``classical'' in the quantum-optical sense \cite{Glauber,Sudarshan} and in particular preparable by GOCC), whose GOCC distinguishability is exponentially small while they are almost perfectly distinguishable under the optimal Holevo-Helstrom measurement. From the other side, there are lower limits to how indistinguishable two orthogonal states on $n$ quantum harmonic modes and with bounded energy can be, which we show for W+ measurements and conjecture for GOCC measurements (Section~\ref{sec:lower-bounds}). We conclude in Section~\ref{sec:conclusions}. \section{Bosonic Gaussian formalism} \label{sec:gaussian} We briefly review the formalism of Bosonic systems and Gaussian states, which has been laid out in many review articles and textbooks, such as \cite{Weedbrook-et-al} and \cite{Barnett,KokLovett}, the latter two of which emphasise the quantum information aspect. For our particular choice of normalisations, see \cite{cahill}. Each elementary system, called a (harmonic) mode, is characterised by a pair of canonical variables $x$ and $p$, satisfying the canonical commutation relation (CCR) $[x,p]=i$ (customarily choosing units where $\hbar=1$) and generating the CCR algebra of Heisenberg and Weyl. By the Stone-von-Neumann theorem, each irreducible representation of this algebra on a separable Hilbert space $\cH$ is isomorphic to the one generated by the usual position and momentum operators $x$ and $p$. It is convenient to introduce the annihilation and creation operators \begin{equation} \label{eq:anni+crea} a = \frac{x+ip}{\sqrt{2}}, \quad a^\dagger = \frac{x-ip}{\sqrt{2}}, \end{equation} respectively. 
They can be used to define the number operator, \begin{equation} \label{eq:N} N = a^\dagger a = \frac12 (x^2+p^2) - \frac12 = \sum_{n=0}^\infty n\,\proj{n}, \end{equation} which, up to the energy shift of $-\frac12$ bringing the ground state energy to zero, is equivalent to the quantum harmonic Hamiltonian (at fixed frequency), and has precisely the non-negative integers as eigenvalues; the eigenstates are known as Fock states or number states, $\ket{n}$. In the number basis, \begin{equation} \label{eq:anni+crea:N} a = \sum_{n=1}^\infty \sqrt{n}\,\ketbra{n\!-\!1}{n}, \quad a^\dagger = \sum_{n=1}^\infty \sqrt{n}\,\ketbra{n}{n\!-\!1}. \end{equation} All these operators are unbounded, and one might have justified hesitations about the algebraic operations performed above. The established solution to all of the potential problems associated with the unboundedness and the corresponding domain restrictions is to pass to the displacement operators, \begin{equation} \label{eq:D-alpha} D(\alpha) = e^{\alpha a^\dagger-\overline{\alpha} a}, \text{ for } \alpha\in \CC, \end{equation} which are \emph{bona fide} unitaries, hence bounded operators. So far, we have discussed the quantum system at hand as if it were a single mode, but we can of course consider multi-mode systems, which again by the Stone-von-Neumann theorem are characterised uniquely as irreducible representations of the CCR algebra generated by $x_1,\ldots,x_m$ and $p_1,\ldots,p_m$ such that $[x_j,p_k] = i\delta_{jk}$. This means that the Hilbert space of such a system can be identified with $\cH_1\ox\cdots\ox\cH_m$, where $\cH_j$ is the Hilbert space of the $j$-th mode, carrying the representation of $x_j$ and $p_j$. In particular, each mode has its own annihilation operator $a_j$ and displacement operator $D(\alpha_j)$; for an $m$-tuple $\underline{\alpha} = (\alpha_1,\ldots,\alpha_m)$ of displacements, we write $D(\underline{\alpha}) = D(\alpha_1)\ox\cdots\ox D(\alpha_m)$ for the $m$-mode displacement operator. 
The subspace spanned by these operators is dense in the bounded operators $\cB(\cH)$ on the Hilbert space. For a general density operator $\rho \in \cS(\cH) = \{\rho\geq 0,\ \tr\rho=1\}$, or more generally for a trace class operator, the characteristic function is defined as \begin{equation} \chi_\rho(\underline{\alpha}) := \tr \rho D(\underline{\alpha}). \end{equation} This is a bounded complex function, uniquely specifying $\rho$. A state $\rho$ is called Gaussian if its characteristic function $\chi_\rho$ is of Gaussian form. For our purposes, we need another, particularly useful representation of the state as a quasi-probability function, the so-called Wigner function $W_\rho$ \cite{Wigner,HOCSW}, see also \cite{Barnett,cahill} for many fundamental and useful relations such as the following two. It is defined as the (multidimensional complex) Fourier transform of the characteristic function $\chi_\rho$, \begin{equation} \label{eq:Wigner-Fourier} W_\rho(x,p) = \left(\frac{1}{2\pi^2}\right)^m \int {\rm d}^{2m}\underline{\xi}\, e^{\underline{\alpha}\cdot\underline{\xi}^\dagger -\underline{\xi}\cdot\underline{\alpha}^\dagger} \chi_\rho(\underline{\xi}), \end{equation} where we reparametrise the argument in phase space coordinates, $\alpha_j = \frac{1}{\sqrt{2}}(x_j+ip_j)$, and $\underline{\alpha}\cdot\underline{\xi}^\dagger = \sum_j \alpha_j\overline{\xi}_j$ is the Hermitian inner product of the complex coordinate tuples. This is a real-valued function, and its normalisation is chosen in such a way that \begin{equation} \int {\rm d}^m x{\rm d}^m p\, W_\rho(x,p) = \tr\rho, \end{equation} hence for a state it can be regarded as a quasi-probability function, as it integrates to $1$, and if the Wigner function is positive it is a genuine probability density. In general, it can be expressed as an expectation value, cf. 
\cite{Barnett,cahill}, \begin{equation} \label{eq:Wigner-expectation} W_\rho(x,p) = \pi^{-m} \tr \rho D(\underline{\alpha})(-1)^{N_1+\ldots+N_m}D(\underline{\alpha})^\dagger, \end{equation} where as before $\alpha_j = \frac{1}{\sqrt{2}}(x_j+ip_j)$. This shows that $W_\rho$ is well-defined and indeed a continuous bounded function for all trace class operators: indeed, $|W_\rho(x,p)| \leq \pi^{-m} \|\rho\|_1$. The above formula can be used to extend the Wigner function to more general operators (such as POVM elements); for instance the Wigner function of the identity operator is a constant, $W_\1 = (2\pi)^{-m}$. The Wigner transformation preserves the Frobenius (Hilbert-Schmidt) inner product, \begin{equation} \label{eq:Frobenius} \tr\rho\sigma = (2\pi)^m \int {\rm d}^m x {\rm d}^m p\, W_\rho(x,p)W_\sigma(x,p). \end{equation} Unitary transformations of the Hilbert space preserve the canonical commutation relations; but those unitaries that map the Lie algebra of the canonical variables, which is $\operatorname{span}\{\1,x_j,p_k\}$, to itself are called \emph{Gaussian} unitaries. We address them also as ``linear'' transformations, since they correspond to an affine linear map of phase space, and are described by a displacement vector and a symplectic matrix. Gaussian channels are precisely the completely positive and trace preserving (cptp) maps taking Gaussian states to Gaussian states. It is a fundamental fact that a quantum channel $\cN:\cS(\cH)\rightarrow\cS(\cH')$ is Gaussian if and only if it has a Gaussian unitary dilation $U$, with the environment initialised in the vacuum state: \begin{equation} \cN(\rho) = \tr_E U\left(\rho\ox\proj{0}^{\ox\ell}\right)U^\dagger, \end{equation} where the environment has $\ell$ modes. For the following, we need the (Glauber-Sudarshan) coherent states, also known as minimal dispersion states $\ket{\alpha}$, which are eigenstates of the annihilation operator: $a\ket{\alpha} = \alpha\,\ket{\alpha}$, for $\alpha\in\CC$. 
This defines the states uniquely, and one can show that they are related by displacements: $\ket{\alpha} = D(\alpha)\ket{0}$, where $\ket{0}$ is both the coherent state corresponding to $\alpha=0$ and the vacuum, i.e. the ground state of the Hamiltonian, in other words the zeroth Fock state. In the Fock basis, \begin{equation} \label{eq:coherentstate} \ket{\alpha} = e^{-\frac12 |\alpha|^2} \sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}\ket{n}, \end{equation} a relation that reassuringly shows that the coherent states are well-defined unit vectors, written in a genuine orthonormal basis. More relevant for us, however, are the following expressions for the first and second moments. For $\alpha = \alpha_R + i\alpha_I$ written in terms of real and imaginary parts, \begin{align} \bra{\alpha} x \ket{\alpha} &= \alpha_R\sqrt{2},\ \bra{\alpha} p \ket{\alpha} = \alpha_I\sqrt{2},\\ \bra{0} x^2 \ket{0} &= \bra{0} p^2 \ket{0} = \frac12, \end{align} the latter ``vacuum fluctuations'' consistent with the Heisenberg-Robertson uncertainty relation. Furthermore, the overlap of two coherent states, easily computed from the expansion in the Fock basis, satisfies \begin{equation} |\bra{\alpha}\beta\rangle|^2 = e^{-|\alpha-\beta|^2}. \end{equation} And finally, we record \begin{equation} \label{eq:hetero} \frac{1}{\pi} \int {\rm d}^2\alpha\, \proj{\alpha} = \1, \end{equation} showing that the family of operators $\frac{{\rm d}^2\alpha }{\pi}\proj{\alpha}$ forms a POVM, known as the heterodyne measurement. \section{State discrimination by \protect\\ Gaussian measurements} \label{sec:GOCC} Now that we have the Bosonic formalism in place, we can discuss the problem of binary hypothesis testing under Gaussian restrictions on the measurement. 
Indeed, going back to Eqs.~(\ref{eq:HelstromHolevo}) and (\ref{eq:tracenorm}) in the introduction, almost any restriction $\mathbb{M}$ on the set of possible measurements, be it physically motivated or purely mathematical, results in a larger error probability than the Helstrom expression, which is most conveniently expressed in terms of a distinguishability norm on states: \begin{equation} \label{eq:M-norm} \min_{(M,\1-M)\in\mathbb{M}} P_e =: \frac12 \left( 1-\frac12\|\rho_0-\rho_1\|_{\mathbb{M}} \right). \end{equation} How to define the set $\mathbb{M}$ appropriately and what exactly is necessary for it to define a norm is explained in detail in \cite{MWW}. An example exceedingly well-studied in quantum information theory is the set of measurements implemented by local operations and classical communication (LOCC) in a bi- or multi-partite system, as well as its relaxations to separable POVM elements (SEP) and to POVM elements with positive partial transpose (PPT), cf.~\cite{MWW,LOCC:always}. Here, we consider restrictions motivated by the fact that Gaussian operations are distinguished among the ones allowed by quantum mechanics generally, following Takeoka and Sasaki \cite{TakeokaSasaki}. Concretely, we are interested in the measurements implemented by any sequence of partial Gaussian POVMs and classical feed-forward (Gaussian operations and classical computation, GOCC). Very much like LOCC, there is no concise way of writing down a general GOCC transformation, but for a binary measurement the prescription is as follows. \begin{definition} \label{defi:GOCC} A \emph{GOCC measurement protocol} on $m$ modes consists of the repetition of the following steps, for $r=1,\ldots,R$ (``rounds''), after initially setting $\xi_0=\emptyset$ and $m_{\emptyset}=m$. Here, $\xi_{r-1}$ is the collection of all measurement outcomes prior to round $r$.
\begin{itemize} \item[{(r.1)}] create a number $k_{\xi_{r-1}}$ of Bosonic modes in the vacuum state; \item[{(r.2)}] perform a Gaussian unitary $U_{\xi_{r-1}}$ on the $m_{\xi_{r-1}}+k_{\xi_{r-1}}$ modes; \item[{(r.3)}] perform homodyne detection on the last $\ell_{\xi_{r-1}}$ modes, keeping the first $m_{\xi_{r}} := m_{\xi_{r-1}}+k_{\xi_{r-1}}-\ell_{\xi_{r-1}}$; call the outcome $\underline{x}^{(r)} = x_1^{(r)}\ldots x_{\ell_{\xi_{r-1}}}^{(r)}$ and set $\xi_{r} := \{\xi_{r-1},\underline{x}^{(r)}\}$. \end{itemize} Each $r$ is called a ``round'', and in the $R$-th round all remaining modes are measured, i.e. $\ell_{\xi_{R-1}} = m_{\xi_{R-1}}+k_{\xi_{R-1}}$. The final measurement outcome is a measurable function $f(\xi_R) \in \Omega$, taking values in a prescribed set $\Omega$, which for simplicity we assume to be discrete. This defines a POVM $(M_\omega:\omega\in\Omega)$, and every POVM that arises in the above way, or as a limit of such POVMs in the strong topology, is called a \emph{GOCC POVM}. \end{definition} Now, returning to equiprobable hypotheses $\rho_0$ and $\rho_1$, and enforcing the POVMs to be implemented by GOCC protocols, we arrive at the GOCC norm: \begin{equation} \label{eq:GOCC-norm} \inf_{(M,\1-M)\atop \text{GOCC POVM}} P_e =: \frac12 \left( 1-\frac12\|\rho_0-\rho_1\|_\GOCC \right). \end{equation} Note that $\|\cdot\|_\GOCC$ is indeed a norm, since the set of GOCC measurements is tomographically complete. Indeed, heterodyne detection on every available mode is a tomographically complete measurement, meaning that for every pair of distinct quantum states, there exists a binary coarse graining of the heterodyne detection outcomes that discriminates the states with some non-zero bias.
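To make the definition concrete, the following sketch (in Python; the function name is ours, not from any library) simulates the simplest one-round GOCC protocol: discriminating the coherent states $\ket{\pm\alpha}$, $\alpha>0$ real, by the sign of a single homodyne outcome. It uses only the standard fact, recorded in the previous section, that homodyning $x$ on $\ket{a}$ with $a$ real yields a Gaussian outcome of mean $\sqrt{2}\,a$ and variance $\frac12$.

```python
import math
import random

def homodyne_error_prob(alpha, trials=200_000, seed=1):
    """Monte Carlo estimate of the error probability when discriminating
    the equiprobable coherent states |+alpha> and |-alpha> (alpha real)
    by a single homodyne measurement of x and a sign decision.
    The x-outcome for |a> is Gaussian: mean sqrt(2)*a, variance 1/2."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        a = alpha if rng.random() < 0.5 else -alpha   # pick a hypothesis
        x = rng.gauss(math.sqrt(2) * a, math.sqrt(0.5))
        guess = alpha if x >= 0 else -alpha           # decide by the sign
        errors += (guess != a)
    return errors / trials

alpha = 0.7
p_err = homodyne_error_prob(alpha)
# closed form for this strategy: P_e = (1 - erf(sqrt(2)*alpha)) / 2
p_theory = 0.5 * (1 - math.erf(math.sqrt(2) * alpha))
```

The Monte Carlo estimate matches the closed form $P_e = \frac12\bigl(1-\operatorname{erf}(\sqrt{2}\alpha)\bigr)$, which reappears (as the optimal GOCC value) in Example \ref{example:takeoka-sasaki} below.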
\begin{remark} In the definition of a GOCC protocol, we could have allowed the $k_{\xi_{r-1}}$ ancillary modes to be prepared in any Gaussian state in step (r.1), but that does not add any more generality, since every Gaussian state can be prepared from the vacuum by a suitable Gaussian unitary. We could also have allowed an arbitrary Gaussian quantum channel in step (r.2), but again that does not add any more generality since every Gaussian channel has a dilation to a Gaussian unitary with an environment prepared in the vacuum state. Finally, in step (r.3) we could have allowed any Gaussian measurement, but every Gaussian measurement can be implemented by adjoining suitable ancilla modes in the vacuum, performing a Gaussian unitary and a homodyne measurement. From the point of view of the discussion of classes of operations, of which measurements are a special case, it is interesting to distinguish certain subclasses of GOCC: what we actually have defined are the measurements implemented by a GOCC protocol with finitely many rounds, as well as the closure of this set. One could also define the POVMs implemented by a GOCC protocol with unboundedly many rounds (but stopping with probability $1$), which would sit between the former two, cf.~\cite{LOCC:always} for the case of LOCC. While it is interesting to study these three classes, in particular whether they coincide or are separated (as they are in the analogous case of LOCC \cite{LOCC:always}), this is beyond the scope of the present work. Indeed, for the case of hypothesis testing, thanks to the infimum in the error probability, all three classes will give rise to the same GOCC norm. \end{remark} An elementary observation about GOCC is that the fine-grained measurement (i.e. before coarse-graining to a discrete POVM) consists of operators each of which is a positive scalar multiple of a Gaussian pure state.
In particular, they have non-negative Wigner function, and because the coarse-graining amounts to summing POVM elements, also the final POVM has non-negative Wigner functions. We thus call a binary POVM $(M,\1-M)$ with non-negative Wigner functions $W_{M}$ and $W_{\1-M}$ a \emph{W+ POVM}, and denote their set $\mathbb{W}_+$. Just as the restriction to GOCC leads to the distinguishability norm $\|\cdot\|_\GOCC$ [Eq.~(\ref{eq:GOCC-norm})], the restriction to W+ POVMs gives rise to the distinguishability norm $\|\cdot\|_\Wplus$: \begin{equation} \label{eq:Wplus-norm} \inf_{(M,\1-M)\atop \text{W+ POVM}} P_e =: \frac12 \left( 1-\frac12\|\rho_0-\rho_1\|_\Wplus \right). \end{equation} Since every GOCC measurement is automatically W+, we have by definition \begin{equation} \label{eq:tower} \|\rho_0-\rho_1\|_\GOCC \leq \|\rho_0-\rho_1\|_\Wplus \leq \|\rho_0-\rho_1\|_1. \end{equation} The rest of the paper is concerned with the comparison of these norms. The questions guiding us are: are they different, and how large are the gaps? \section{Separation between GOCC and unrestricted measurements} \label{sec:GOCC-vs-ALL} Our first result shows a simple upper bound on the GOCC and W+ distinguishability norms in terms of the Wigner functions of the two states. \begin{lemma} \label{lemma:W+vs-L1} For any two states $\rho_0$ and $\rho_1$ of an $m$-mode system, with associated Wigner functions $W_0$ and $W_1$, respectively, \[\begin{split} \| \rho_0-\rho_1 \|_\GOCC &\leq \| \rho_0-\rho_1 \|_\Wplus \\ &\leq \| W_0-W_1 \|_{L^1} \\ &= \int {\rm d}^m x{\rm d}^m p\, |W_0(x,p)-W_1(x,p)|. \end{split}\] \end{lemma} Note that, unlike the inequalities (\ref{eq:tower}), the third term in the chain is not a trace norm of density matrices, but an $L^1$ norm of real functions, which we may interpret as generalised densities. \begin{proof} Only the second inequality remains to be proved. 
Consider any W+ POVM $(M,\1-M)$, meaning that the stochastic response functions $F = (2\pi)^m W_M$ and $1-F = (2\pi)^m W_{\1-M}$ are bounded between $0$ and $1$. By the Frobenius inner product formula for the Wigner function, Eq.~(\ref{eq:Frobenius}), we have \begin{equation} \label{eq:W-expectation} \tr(\rho_0-\rho_1)M = \int {\rm d}^m x{\rm d}^m p\, \bigl(W_0(x,p)-W_1(x,p)\bigr)F(x,p), \end{equation} where the left hand side appears in Eq.~(\ref{eq:HelstromHolevo}), its supremum over W+ POVMs being $\frac12\|\rho_0-\rho_1\|_\Wplus$; while the right hand side is upper bounded by $\frac12 \| W_0-W_1 \|_{L^1}$, where we made use of the fact that $\int {\rm d}^m x{\rm d}^m p\, (W_0(x,p)-W_1(x,p)) = \tr(\rho_0-\rho_1) = 0$. \end{proof} \begin{remark} The lemma assumes measurements with W+ POVMs, but it gives interesting information also in cases where the POVM has some limited Wigner negativity. Namely, after Eq.~(\ref{eq:W-expectation}), we only use that $0 \leq F(x,p) \leq 1$, which is the W+ property of the measurement. If we do not have ``too much'' Wigner negativity in the measurement operators, this could be expressed by a bound $|2F(x,p)-1| \leq B$, and then we would get \begin{equation} \bigl| \tr(\rho_0-\rho_1)M \bigr| \leq \frac{B}{2} \| W_0-W_1 \|_{L^1}. \end{equation} The right hand side can still be small when the $L^1$-distance is really small, and at the same time $B$ not too large. In the next section we shall see an example of this. \end{remark} As one might expect, the inequality in Lemma \ref{lemma:W+vs-L1} is often crude, or even trivial, since one can find states where the right hand side exceeds $2$. However, if $\rho_0$ and $\rho_1$ are both states with non-negative Wigner function, for instance probabilistic mixtures of Gaussian states, then $W_0$ and $W_1$ are \emph{bona fide} probability densities, and the right hand side is $\leq 2$.
In that case, we have the following corollary for the quantum Chernoff coefficient when measurements are restricted to GOCC or W+ POVMs. Recall that the Chernoff coefficient is the exponential rate of the minimum error probability in distinguishing two i.i.d. hypotheses. That is, in the case of two quantum states \begin{equation} \xi(\rho_0,\rho_1) := \lim_{n\rightarrow\infty} - \frac1n \ln \left(1-\frac12\left\|\rho_0^{\ox n}-\rho_1^{\ox n}\right\|_1\right), \end{equation} which generalises the analogous question for probability distributions \cite{Chernoff}. Amazingly, there is a formula for this exponent \cite{q-Chernoff}, generalising in its turn the classical answer \cite{Chernoff}: \begin{equation} \xi(\rho_0,\rho_1) = -\ln \inf_{0<s<1} \tr\rho_0^s\rho_1^{1-s}. \end{equation} Just as for the distinguishability norm under a restriction, we can define the constrained Chernoff coefficient, if the restriction $\mathbb{M}$ describes a subset of POVMs for each number of elementary systems: \begin{equation} \xi_{\mathbb{M}}(\rho_0,\rho_1) := \lim_{n\rightarrow\infty} - \frac1n \ln \left(1-\frac12\left\|\rho_0^{\ox n}-\rho_1^{\ox n}\right\|_{\mathbb{M}}\right). \end{equation} \begin{corollary} \label{cor:Chernoff} For two states $\rho_0$ and $\rho_1$ with non-negative Wigner functions $W_0,\, W_1 \geq 0$ (meaning that they are probability density functions), \[ \xi_\GOCC(\rho_0,\rho_1) \leq \xi_\Wplus(\rho_0,\rho_1) \leq \xi(W_0,W_1), \] where according to Chernoff's theorem \cite{Chernoff}, \[ \xi(W_0,W_1) = -\ln \inf_{0<s<1} \int {\rm d}^m x{\rm d}^m p\, W_0(x,p)^s W_1(x,p)^{1-s} \] is the classical Chernoff coefficient of the probability distributions $W_0$ and $W_1$. \qed \end{corollary} As in Lemma \ref{lemma:W+vs-L1}, the third term in the chain is not a quantity of density matrices, but of classical probability densities.
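For concreteness, the classical Chernoff coefficient appearing in the corollary can be evaluated by brute force. The sketch below (our own helper, assuming isotropic Gaussian densities with a common covariance, as for Wigner functions of coherent states) scans $s$ and integrates on a grid; for the pair $W_{\pm\alpha}$ of the next example it reproduces the closed-form exponent $s(1-s)|\mu_0-\mu_1|^2/(2\sigma^2)$ at $s=\frac12$, i.e. the value $2|\alpha|^2$.

```python
import math

def chernoff_gaussians(mu0, mu1, var, n=121, s_steps=19):
    """Classical Chernoff coefficient -ln inf_s int p0^s p1^(1-s) for two
    isotropic 2D Gaussian densities with common per-coordinate variance
    `var`, by grid integration and a scan over s in (0,1)."""
    sd = math.sqrt(var)
    lo = min(mu0 + mu1) - 6 * sd        # grid covering both peaks
    hi = max(mu0 + mu1) + 6 * sd
    h = (hi - lo) / (n - 1)

    def pdf(mu, x, p):
        return math.exp(-((x - mu[0])**2 + (p - mu[1])**2) / (2 * var)) \
               / (2 * math.pi * var)

    best = float('inf')
    for k in range(1, s_steps + 1):
        s = k / (s_steps + 1)           # includes s = 1/2
        total = 0.0
        for i in range(n):
            x = lo + i * h
            for j in range(n):
                p = lo + j * h
                total += pdf(mu0, x, p)**s * pdf(mu1, x, p)**(1 - s) * h * h
        best = min(best, total)
    return -math.log(best)

# Wigner functions of |+alpha>, |-alpha>: means (+-sqrt(2)*alpha, 0), variance 1/2
alpha = 0.8
xi = chernoff_gaussians((math.sqrt(2) * alpha, 0.0),
                        (-math.sqrt(2) * alpha, 0.0), 0.5)
```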
It is not difficult to find examples where the bounds of the Lemma and its Corollary are exactly tight, among them the case of two coherent states studied originally by Takeoka and Sasaki \cite{TakeokaSasaki}. \begin{example} \label{example:takeoka-sasaki} Consider $\rho_0$ and $\rho_1$ as two coherent states of a single mode, say $\rho_0=\proj{+\alpha}$, $\rho_1=\proj{-\alpha}$ for $\alpha > 0$. Then, \begin{equation} \frac12 \|\rho_0-\rho_1\|_1 = \sqrt{1-e^{-4\alpha^2}}, \end{equation} while by Lemma \ref{lemma:W+vs-L1}, \begin{equation} \frac12 \|\rho_0-\rho_1\|_\GOCC = \frac12 \|\rho_0-\rho_1\|_\Wplus = \operatorname{erf}(\alpha\sqrt{2}), \end{equation} with the error function $\operatorname{erf}(x) = \frac{2}{\sqrt\pi} \int_0^x {\rm d}t\,e^{-t^2}$. The equality follows from homodyning the $x$-coordinate and deciding depending on the sign of the measurement outcome. The norms are compared in Fig.~\ref{fig:GOCC-vs-tr}. Furthermore, in the asymptotic i.i.d. setting of the Chernoff bound, \begin{equation} \xi(\rho_0,\rho_1) = -\ln F(\rho_0,\rho_1)^2 = 4 |\alpha|^2, \end{equation} while by Corollary \ref{cor:Chernoff}, \begin{equation} \xi_\GOCC(\rho_0,\rho_1) = \xi_\Wplus(\rho_0,\rho_1) = 2 |\alpha|^2. \end{equation} The equality follows from homodyning each mode separately in the $x$ direction, and classical post-processing. \end{example} \begin{figure} \caption{Plot of the trace distance (red) versus the GOCC distance (green) against $\alpha$ on the horizontal axis. While there is a nonzero gap for all $\alpha > 0$, it vanishes for asymptotically large and small displacements, as expected.
The largest difference between $\frac12 \|\rho_0-\rho_1\|_1$ and $\frac12 \|\rho_0-\rho_1\|_\GOCC$ is $\approx 0.11$, occurring at $\alpha \approx 0.45$.} \label{fig:GOCC-vs-tr} \end{figure} \begin{example} More generally, for any one-mode Gaussian state and its displacement along one of the principal axes of the covariance matrix, \begin{equation} \|\rho_0-\rho_1\|_\GOCC = \|\rho_0-\rho_1\|_\Wplus = \|W_0-W_1\|_{L^1}, \end{equation} and the latter can be expressed in terms of the error function and the shared variance of the two states in the direction of the displacement connecting them. The equality follows from homodyning in the direction of the line connecting the two first moment vectors in phase space, and deciding depending on which of the two points is closer to the outcome. \end{example} \section{Data hiding secure against Gaussian attacker} \label{sec:hiding} As soon as we realize that it is possible to get gaps between $\|\cdot\|_1$ and $\|\cdot\|_\GOCC$, we have to ask ourselves just how large the gap can be. In particular, is it possible to find state pairs which are almost maximally distant in the trace norm, yet almost indistinguishable in the GOCC norm? In other words, can we protect information against an adversary who attempts hypothesis testing on the two states but has access only to Gaussian operations and classical communication? This is the definition of data hiding, first explored in the context of the LOCC restriction, and later abstractly for an arbitrary restriction on the possible measurements. Next we shall show that data hiding is possible also under GOCC, at least when going to multiple modes. \begin{theorem} \label{thm:GOCC-hiding} Let $\overline{E} > 0$.
Then, there is a constant $c>0$ such that for all sufficiently large integers $m$ there exist $m$-mode states $\rho_0$ and $\rho_1$, each a mixture of a finite set of coherent states and with average energy (photon number) per mode bounded by $\overline{E}+o(1)$, such that \begin{align} \label{eq:random-tracenorm} \frac12 \| \rho_0\!-\!\rho_1 \|_1 &\geq 1-e^{-cm}, \\ \label{eq:random-Wplusnorm} e^{-cm} &\geq \frac12 \| \rho_0\!-\!\rho_1 \|_\Wplus \geq \frac12 \| \rho_0\!-\!\rho_1 \|_\GOCC. \end{align} \end{theorem} \begin{proof} Consider the $m$-mode coherent states $\ket{\underline{\alpha}^{(\lambda)}} = \ket{\alpha^{(\lambda)}_1}\ket{\alpha^{(\lambda)}_2}\cdots\ket{\alpha^{(\lambda)}_m}$ ($\lambda=1,\ldots,2L$), where the parameters $\alpha^{(\lambda)}_j\in\CC$ are chosen i.i.d.~according to a normal distribution with mean $0$ and variance $\EE |\alpha^{(\lambda)}_j|^2 = \overline{E}$. Then define \begin{equation}\begin{split} \rho_0 &= \frac{1}{L} \sum_{\lambda=1}^L \proj{\underline{\alpha}^{(2\lambda)}}, \\ \rho_1 &= \frac{1}{L} \sum_{\lambda=1}^L \proj{\underline{\alpha}^{(2\lambda-1)}}, \end{split}\end{equation} so these are random states. Note that with high probability, indeed with probability converging to $1$ as $m\to\infty$, both have their photon number per mode bounded by $\overline{E}+o(1)$. Also, \begin{equation} \EE \rho_0 = \EE \rho_1 = \gamma(\overline{E})^{\ox m}, \end{equation} where $\gamma(\overline{E}) = (1-e^{-\beta}) e^{-\beta N}$ is the thermal state of a single Bosonic mode of mean photon number $\overline{E}$, i.e. with $\beta = \ln\left(1+\frac{1}{\overline{E}}\right)$.
The rest of the proof will consist in showing that we can fix $L$ in such a way that with probability close to $1$, $\rho_0$ and $\rho_1$ are distinguishable except with exponentially small error probability, and that with probability close to $1$, the Wigner functions $W_0$ and $W_1$ are exponentially close to $W_{\gamma(\overline{E})}^{\ox m}$, the Wigner function of $\gamma(\overline{E})^{\ox m}$, in total variation distance. \emph{Eq.~(\ref{eq:random-tracenorm}):} The ensemble of coherent states $\ket{\underline{\alpha}^{(\lambda)}}$ is the well-studied random coherent state modulation of the noiseless Bosonic channel with input power (photon number) $\overline{E}$, whose classical capacity is well-known \cite{pure-loss-C}, with the strong converse proved in \cite{WiWi:pure-loss}: \begin{equation}\begin{split} C(\id,\overline{E}) &= g(\overline{E}) \\ &= (\overline{E}+1)\ln(\overline{E}+1) - \overline{E}\ln\overline{E} \\ &= \ln(1+\overline{E}) + \overline{E}\ln\left(1+\frac{1}{\overline{E}}\right). \end{split}\end{equation} Thus, when $2L\leq e^{m\bigl(C(\id,\overline{E})-\delta\bigr)}$, it follows from the Holevo-Schumacher-Westmoreland theorem \cite{Holevo:C,SchumacherWestmoreland:C,Holevo:C-E} that with probability close to $1$, there exists a POVM $(D_\lambda)_{\lambda=1}^{2L}$ that decodes $\lambda$ reliably from the state $\proj{\underline{\alpha}^{(\lambda)}}$: \begin{equation} \frac{1}{2L}\sum_{\lambda=1}^{2L} \tr \proj{\underline{\alpha}^{(\lambda)}} D_\lambda \geq 1-e^{-c'm}, \end{equation} with a suitable constant $c'>0$ and for all sufficiently large $m$. Thus, with $\rho_i$ ($i=0,1$) as defined above and \begin{equation} M_i = \sum_{\lambda=1}^L D_{2\lambda-i} \quad (i=0,1), \end{equation} it follows \begin{equation} \frac12 \tr \rho_0 M_0 + \frac12 \tr \rho_1 M_1 \geq 1-e^{-c'm}, \end{equation} which implies Eq.~(\ref{eq:random-tracenorm}).
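As a side check (an illustrative numerical sketch, not part of the proof), the algebraic identity between the two forms of $g(\overline{E})$ above, and the comparison with $2C(\widetilde{W},\overline{E}) = \ln(1+2\overline{E})$ invoked at the end of the proof, are easily verified:

```python
import math

def g(E):
    # capacity of the noiseless Bosonic channel at mean photon number E (nats)
    return (E + 1) * math.log(E + 1) - E * math.log(E)

def g_alt(E):
    # the equivalent rewritten form used in the text
    return math.log(1 + E) + E * math.log(1 + 1 / E)

for E in [0.1, 0.5, 1.0, 5.0, 50.0]:
    assert abs(g(E) - g_alt(E)) < 1e-9     # the two forms agree
    assert math.log(1 + 2 * E) < g(E)      # 2 C(W~, E) < C(id, E)
```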
\emph{Eq.~(\ref{eq:random-Wplusnorm}):} The Wigner functions $W_{\underline{\alpha}^{(\lambda)}}$ of the coherent states $\proj{\underline{\alpha}^{(\lambda)}}$ are $2m$-dimensional real Gaussian probability densities centered at $\underline{z}^{(\lambda)}$, where $z^{(\lambda)}_{2j-1} = \Re \alpha^{(\lambda)}_j\sqrt{2}$ and $z^{(\lambda)}_{2j} = \Im \alpha^{(\lambda)}_j\sqrt{2}$ are the rescaled real and imaginary parts of $\alpha^{(\lambda)}_j$, respectively; they have variance $\frac12$ in each direction. We read them as output distributions of an i.i.d.~additive white Gaussian noise (AWGN) channel on $2m$ inputs $\underline{z}^{(\lambda)}$, and with noise power $\frac12$. Note that all $z^{(\lambda)}_j$ are themselves Gaussian distributed random variables with $\EE z^{(\lambda)}_j = 0$ and $\EE |z^{(\lambda)}_j|^2 = \overline{E}$. This channel, which we denote $\widetilde{W}$ since its output distributions come from the Wigner functions of the coherent states $\underline{\alpha}^{(\lambda)}$, has, thanks to Shannon's famous formula in terms of the signal-to-noise ratio, the capacity \begin{equation} C(\widetilde{W},\overline{E}) = \frac12 \ln(1+2\overline{E}). \end{equation} Thus, by the theory of approximation of output statistics \cite{HanVerdu:AOS}, adapted to the AWGN channel \cite{HanVerdu-AWGN}, it follows that when $2L\geq e^{2m\bigl(C(\widetilde{W},\overline{E})+\delta\bigr)}$, then with probability close to $1$ \begin{equation} \left\| W_i - W_{\gamma(\overline{E})}^{\ox m} \right\|_{L^1} \leq \frac12 e^{-c''m}, \end{equation} for $i=0,1$, with a suitable constant $c''>0$ and for all sufficiently large $m$. See~\cite[Thm.~6.7.3]{Han:InfoSpec} for the concrete statement. Hence, by the triangle inequality and Lemma \ref{lemma:W+vs-L1}, we get Eq.~(\ref{eq:random-Wplusnorm}). It remains to put the two parts together: We observe that $2C(\widetilde{W},\overline{E}) < C(\id,\overline{E})$ for all $\overline{E} > 0$.
Indeed, a well-known elementary inequality states \begin{equation} \ln(1+t) \geq \frac{t}{1+t}, \end{equation} which we apply to $t=\frac{1}{\overline{E}}$, yielding \begin{equation} \overline{E}\ln\left(1+\frac{1}{\overline{E}}\right) \geq \overline{E}\frac{\frac{1}{\overline{E}}}{1+\frac{1}{\overline{E}}} = \frac{\overline{E}}{1+\overline{E}} > \ln\left(1+\frac{\overline{E}}{1+\overline{E}}\right), \end{equation} which is equivalent to the claim. This means that we can choose $\delta>0$ such that \( 2C(\widetilde{W},\overline{E})+2\delta < C(\id,\overline{E})-\delta, \) meaning we can satisfy \begin{equation} e^{2m\bigl(C(\widetilde{W},\overline{E})+\delta\bigr)} \leq 2L \leq e^{m\bigl(C(\id,\overline{E})-\delta\bigr)} \end{equation} simultaneously for all sufficiently large $m$. Finally, setting $c=\min\{c',c''\}$ concludes the proof. \end{proof} \begin{remark} While we didn't make any attempt to give a numerical value for $c$ (which is a function of $\overline{E}$), in principle it can be extracted from the HSW coding theorem for the noiseless Bosonic channel and the resolvability coding theorem for the AWGN channel. Likewise, we presented the theorem as an asymptotic result, but the proofs of the two coding theorems will yield finite values of $m$ for which the constructions work with probability $>\frac34$, and so we get the existence of the data hiding states for that number of modes. \end{remark} \begin{corollary} \label{cor:big-Chernoff} For the two $m$-mode states $\rho_0$ and $\rho_1$ from Theorem \ref{thm:GOCC-hiding}, \[ \xi(\rho_0,\rho_1) \geq \frac{c}{2}m - \ln\sqrt{2}, \] whereas \[\begin{split} \xi_\GOCC(\rho_0,\rho_1) &\leq \xi_\Wplus(\rho_0,\rho_1) \\ &\leq \xi(W_0,W_1) \\ &\leq -2\ln\left(1-e^{-cm}\right) \sim 2 e^{-cm}.
\end{split}\] \end{corollary} \begin{proof} With $\epsilon = e^{-cm}$ as in Theorem \ref{thm:GOCC-hiding}, we use the Fuchs-van de Graaf relation between trace distance and fidelity \cite{FvdG}: \begin{equation} 1-F(\rho_0,\rho_1) \leq \frac12 \|\rho_0-\rho_1\|_1 \leq \sqrt{1-F(\rho_0,\rho_1)^2}, \end{equation} where the mixed-state fidelity is given by \begin{equation} F(\rho_0,\rho_1) := \|\sqrt{\rho_0}\sqrt{\rho_1}\|_1. \end{equation} First, we get $1-F(\rho_0,\rho_1)^2 \geq (1-\epsilon)^2$. And then we can estimate: \begin{equation}\begin{split} e^{-\xi(\rho_0,\rho_1)} &= \inf_{0<s<1} \tr\rho_0^s\rho_1^{1-s} \\ &\leq \tr\sqrt{\rho_0}\sqrt{\rho_1} \\ &\leq \|\sqrt{\rho_0}\sqrt{\rho_1}\|_1 \\ &= F(\rho_0,\rho_1) \leq \sqrt{2\epsilon}. \end{split}\end{equation} Secondly, we get $1-\epsilon \leq F(W_0,W_1) = F(\omega_0,\omega_1)$, with suitable purifications $\omega_i$ of $W_i$, according to Uhlmann's theorem. By Corollary \ref{cor:Chernoff}, \begin{equation}\begin{split} \xi_\GOCC(\rho_0,\rho_1) &\leq \xi_\Wplus(\rho_0,\rho_1) \\ &\leq \xi(W_0,W_1) \\ &\leq \xi(\omega_0,\omega_1) \\ &= -\ln F(\omega_0,\omega_1)^2 \\ &= -\ln F(W_0,W_1)^2 \leq -2\ln(1-\epsilon), \end{split}\end{equation} where in the third line we have used the monotonicity of the Chernoff coefficient under partial traces, and in the fourth line the formula for the Chernoff coefficient for pure states. \end{proof} \section{Lower bounds on distinguishability under \protect\\ GOCC and W+ measurements} \label{sec:lower-bounds} So far, we have seen examples of separations, including large ones, between the trace norm and GOCC and W+ norms. Especially about the construction in the previous section we can ask whether and in which sense it uses the available resources optimally: these would be the number of modes and the energy. Here we show lower bounds on the distinguishability of general states when restricted to W+, compared to the trace norm.
They are motivated by similar studies under the LOCC, SEP or PPT constraint, or an abstract constraint on the allowed measurements \cite{MWW,LW}, see also \cite{ultimate}. \begin{proposition} \label{prop:W+lowerbound} For any two $m$-mode states $\rho_0$ and $\rho_1$, \[ \| \rho_0-\rho_1 \|_\Wplus \geq 2^{-m-1}\| \rho_0-\rho_1 \|_2^2 = 2^{-m-1}\tr(\rho_0-\rho_1)^2. \] \end{proposition} \begin{proof} We will write down a specific W+ POVM that achieves the r.h.s. as its statistical distance. In fact, with $\Delta = \rho_0-\rho_1$, for our POVM $(M,\1-M)$ we make the ansatz \begin{equation}\begin{split} M &= \frac12(\1 + \eta\Delta), \\ \1-M &= \frac12(\1 - \eta\Delta), \end{split}\end{equation} with a suitable constant $\eta>0$ to ensure that not only is this a POVM (for which it is enough that $\eta\leq 1$), but a W+ POVM. For that purpose, recall that $W_\1 = (2\pi)^{-m}$. Recall furthermore that the Wigner functions of states are bounded, $|W_\rho(x,p)| \leq \pi^{-m}$, see Eq.~(\ref{eq:Wigner-expectation}). This means that $|W_\Delta(x,p)| \leq 2 \pi^{-m}$, and so $W_{M/\1-M} = \frac12 W_\1 \pm \frac12 \eta W_\Delta \geq 0$ is guaranteed by letting $\eta = 2^{-m-1}$. Thus we have $2M-\1=\eta\Delta$, and can calculate \begin{equation} \|\rho_0-\rho_1\|_\Wplus \geq \tr \Delta(2M-\1) = 2^{-m-1}\|\rho_0-\rho_1\|_2^2, \end{equation} concluding the proof. \end{proof} \begin{corollary} \label{cor:W+lowerbound} Consider two $m$-mode states $\rho_0$ and $\rho_1$, with average energy (photon number) per mode bounded by $\overline{E}$ and $\|\rho_0-\rho_1\|_1 \geq t > 0$. Then, with $t=4c+r$, \[ \| \rho_0-\rho_1 \|_\Wplus \geq r^2 2^{-m-1}\left(1+\frac{\overline{E}}{c^2}\right)^{-m} \!\!\!\!. \] Thus, to achieve the kind of separation as in Theorem~\ref{thm:GOCC-hiding}, between a ``large'' trace norm and ``small'' W+ norm, with bounded energy per mode, their number necessarily has to grow; or else, the energy per mode has to grow very strongly. 
\end{corollary} \begin{proof} Construct the projector $P$ onto the space of all $m$-mode number states with photon number $\leq m \frac{\overline{E}}{c^2}$. Denote $D = \operatorname{rank} P$. By the assumption of the energy bound, and Markov's inequality, \begin{equation} \tr\rho_0 P,\ \tr\rho_1 P \geq 1-c^2. \end{equation} Hence, by the gentle measurement lemma \cite{Winter:qstrong}, \begin{equation} \|\rho_0 - P\rho_0 P\|_1,\ \|\rho_1 - P\rho_1 P\|_1 \leq 2c, \end{equation} and so by the triangle inequality $\|P\rho_0 P-P\rho_1 P\|_1 \geq t-4c = r$. Now, \begin{equation}\begin{split} \|\rho_0-\rho_1\|_2 &\geq \|P\rho_0 P-P\rho_1 P\|_2 \\ &\geq \frac{1}{\sqrt{D}}\|P\rho_0 P-P\rho_1 P\|_1 \geq \frac{r}{\sqrt{D}}, \end{split}\end{equation} where the first inequality follows from the fact that the Frobenius norm squared is the sum of the modulus-squared of all the matrix entries, and the projector $P$ simply gets rid of some of those; the second is the well-known comparison between (Schatten) $1$- and $2$-norms on a $D$-dimensional space. Thus, by Proposition \ref{prop:W+lowerbound} we have \begin{equation} \|\rho_0-\rho_1\|_\Wplus \geq 2^{-m-1}\|\rho_0-\rho_1\|_2^2 \geq r^2 2^{-m-1}\frac{1}{D}, \end{equation} and it remains to control $D$. Note that by its definition, it has an exact expression as a binomial coefficient, \begin{equation} D = {\left\lfloor \frac{\overline{E}}{c^2} \right\rfloor + m \choose m} \leq {\frac{\overline{E}}{c^2} + m \choose m} \leq \left(1+\frac{\overline{E}}{c^2}\right)^{m}, \end{equation} concluding the proof. \end{proof} We believe that a lower bound like the one of Corollary \ref{cor:W+lowerbound} should hold for the GOCC norm, too. To get such a bound, we need to find a ``pretty good'' Gaussian measurement to distinguish two given states.
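Before turning to possible strategies, a quick numerical illustration of Proposition \ref{prop:W+lowerbound} (a sketch of ours; the exact W+ value is taken from Example \ref{example:takeoka-sasaki}): for the single-mode coherent-state pair $\ket{\pm\alpha}$, the bound can be checked against the exact W+ norm using the truncated Fock expansion (\ref{eq:coherentstate}).

```python
import math

def coherent_fock(alpha, dim):
    """Fock-basis amplitudes of the coherent state |alpha> (alpha real),
    truncated to `dim` levels, following the Fock expansion in the text."""
    return [math.exp(-alpha**2 / 2) * alpha**n / math.sqrt(math.factorial(n))
            for n in range(dim)]

alpha, dim = 0.9, 40
v0 = coherent_fock(+alpha, dim)
v1 = coherent_fock(-alpha, dim)

overlap = sum(a * b for a, b in zip(v0, v1))   # <alpha|-alpha> = exp(-2 alpha^2)
hs_sq = 2 - 2 * overlap**2                     # ||rho0 - rho1||_2^2 for pure states
lower = 2**(-1 - 1) * hs_sq                    # Proposition's bound, m = 1
wplus = 2 * math.erf(math.sqrt(2) * alpha)     # exact W+ norm from the Example
```

As expected, the general bound is loose here: for $\alpha=0.9$ it gives $\approx 0.48$, against the exact value $\approx 1.86$.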
A possible strategy might be provided by \cite[Thms.~13 and 14]{MWW}, where it is shown that in dimension $D<\infty$, a fixed rank-one POVM $\cM$ whose elements form a (weighted) $2$-design provides a bound \begin{equation} \|\cdot\|_{\cM} \geq \frac{1}{2D+2}\|\cdot\|_1. \end{equation} This should hold with corrections for approximate designs, too, cf.~\cite{AmbainisEmerson}. Obviously, since with Bosonic systems we are in infinite dimension, the dimension bound is \emph{a priori} not going to be useful. However, we can take inspiration from Corollary \ref{cor:W+lowerbound} and its proof, where we assume energy-bounded states, which we cut off at a finite photon number, restricting them thus to a finite-dimensional subspace. The more serious obstacle is that we would have to construct a Gaussian measurement, or a probabilistic mixture of Gaussian measurements, that approximates a $2$-design. But while the set of all Gaussian states has a locally compact symmetry group (symplectic and displacement transformations in phase space) that is consistent with a $2$-design, notorious normalisation and convergence issues prevent us from treating it as such \cite{Blume-KohoutTurner}. A different approach would be to analyse an even simpler measurement, which however must be tomographically complete. A nice candidate would be heterodyne detection on each mode, Eq.~(\ref{eq:hetero}). \section{Discussion} \label{sec:conclusions} By analysing the Wigner functions of Bosonic quantum states, we showed that there can be arbitrarily large gaps between the GOCC norm distance and the trace distance. In terms of the norm based on POVMs with positive Wigner functions, we could show that the separation necessarily requires many modes, if we are in the regime of states with bounded energy per mode.
Our results beg several questions, among them the following: first, is it possible to derandomise the construction of Theorem \ref{thm:GOCC-hiding}, in the sense that we would like to have concrete (not random) states with guaranteed separation of GOCC vs trace norm? Secondly, while our construction requires multiple modes, is it possible to have GOCC data hiding in a fixed number of modes, or even a single mode, at the expense of larger energy (cf.~Corollary \ref{cor:W+lowerbound})? Fortuitously, the recent work by Lami \cite{LL:new} goes some way towards addressing these questions: Indeed, \cite[Ex.~5]{LL:new} shows two orthogonal Fock-diagonal states, called the \emph{even and odd thermal states}, which while being at maximum possible trace distance, have arbitrarily small GOCC (and indeed W+) distance for sufficiently large energy (temperature). The resulting upper bound \cite[Eq.~(23)]{LL:new} even compares well with our lower bound from Corollary \ref{cor:W+lowerbound}, when $m=1$. The main difference to our scheme in Theorem \ref{thm:GOCC-hiding} is that those even and odd thermal states are not Gaussian, or even mixtures of Gaussian states; in fact they have negative Wigner function, indicating the difficulty in creating them. Instead, our states, while undoubtedly complex (being multi-mode and requiring subtle arrangements of points in phase space), are simply uniform mixtures of coherent states, so in a certain sense they are easy to prepare (an experimental implementation would however still be challenging). As a matter of fact, this is best expressed in resource theoretic terms, noticing that GOCC actually can be defined as a class of quantum maps (to be precise: instruments), beyond our Definition \ref{defi:GOCC} of only GOCC measurements. This point of view is clearly evident in earlier references \cite{TakeokaSasaki}, even if it is not formalised.
But recently, several attempts have been made to create fully-fledged resource theories of non-Gaussianity and of Wigner-negativity \cite{ZSS,TZ,AGPF}. While these works specifically focus on state transformations, and in particular the distillation of some form of ``pure'' non-Gaussian resource, our problem of the creation and discrimination of data hiding states is naturally phrased in the general resource theory. Indeed, in the framework of \cite{TZ,AGPF}, our GOCC measurements are free operations, and so are the state preparations of $\rho_0$ and $\rho_1$ from Theorem \ref{thm:GOCC-hiding}. Thus, our results can be interpreted as contributions towards assessing the non-Gaussianity (Wigner negativity) of a measurement that distinguishes two states optimally. Herein lies the largest difference to the cited recent papers, which formalise the resource character of states, whereas our focus is on quantum operations. In that sense, Theorem \ref{thm:GOCC-hiding} (and equally \cite[Ex.~5]{LL:new}) provides a benchmark for the realisation of non-Gaussian quantum information processing, simply because optimal, or even decent, discrimination of the states requires considerable abilities beyond the Gaussian (``linear'') realm. \acknowledgments The authors are grateful to Toni Ac\'{\i}n and Gael Sent\'{\i}s for prompting the first formulation of the question treated in the present paper, during and after the doctoral defence of Gael, and in particular for sharing Ref.~\cite{TakeokaSasaki}. Thanks to John Calsamiglia and Ludovico Lami for asking many further questions which directed the present research, in particular about the GOCC Chernoff coefficient. After the present work had been suspended for many years, we especially thank Ludovico Lami, whose keen interest in data hiding in general, and recent work \cite{LL:new} in particular, have eventually provided the motivation to finish and publish the present manuscript. Finally, we thank Prof.
Luitpold Blumenduft for elucidating an optical phenomenon that bears a certain analogy to the phenomenon of Gaussian data hiding. The authors' work was supported by the European Commission (STREP ``RAQUEL''), the ERC (Advanced Grant ``IRQUAT''), the Spanish MINECO (grants FIS2008-01236, FIS2013-40627-P, FIS2016-86681-P and PID2019-107609GB-I00), with the support of FEDER funds, and by the Generalitat de Catalunya, CIRIT projects 2014-SGR-966 and 2017-SGR-1127. \end{document}
\begin{document} \title{On the existence of optimal stationary policies for average Markov decision processes with countable states} \begin{abstract} For a Markov decision process with countably infinite states, the optimal value may not be achievable in the set of stationary policies. In this paper, we study the existence conditions of an optimal stationary policy in a countable-state Markov decision process under the long-run average criterion. With a properly defined metric on the policy space of ergodic MDPs, the existence of an optimal stationary policy can be guaranteed by the compactness of the space and the continuity of the long-run average cost with respect to the metric. We further extend this condition with some assumptions which can be easily verified in control problems of specific systems, such as queueing systems. Our results make a complementary contribution to the literature in the sense that our method is capable of handling cost functions unbounded from both below and above, under only the conditions of continuity and ergodicity. Several examples are provided to illustrate the application of our main results. \end{abstract} \textbf{Keywords}: Markov decision process, countable states, optimal stationary policy, metric space \section{Introduction}\label{section_intro} For finite Markov decision processes (MDPs), the optimality of various types of policies is well studied. For example, it is well known that the optimal value of finite MDPs with discounted or average criteria can be achieved by \emph{Markovian} and \emph{deterministic} policies; thus \emph{history-dependent} and \emph{randomized} policies need not be considered. More details can be found in books on MDPs \citep{Bertsekas12,Puterman94}. Countable-state MDPs are a widely used class of models, particularly useful for many problems such as queueing systems, inventory management, etc. 
When the state space of an MDP is changed from finite to infinite (countable), the relevant analysis becomes more complicated and the algorithms require more sophisticated treatment \citep{Golubin2003,Meyn1997}. Compared with the complete theoretical results for finite MDPs, there is no comprehensive theory for infinite MDPs with countable states and the long-run average criterion. The existence of an optimal stationary policy for countable-state MDPs needs specific discussion, and has attracted research attention in recent decades. Although we can restrict our attention to stationary policies in finite MDPs, this is no longer true when the state space is countable. In general, the optimal value of a countable-state MDP may not be achievable by stationary policies, or even by history-dependent policies. Interesting counterexamples can be found in the excellent books on MDPs (see Examples 5.6.1\&5.6.5\&5.6.6 of \cite{Bertsekas12}, Examples 8.10.1\&8.10.2 of \cite{Puterman94}, and Subsection~7.1 of \cite{Sennott1999}). Since a stationary policy is not necessarily optimal for countably infinite MDPs, a number of works study specific existence conditions of optimal stationary policies. Sennott studies the existence conditions for average cost optimality of stationary policies for discrete-time MDPs when the state space is countable and the action space is finite \citep{Sennott1986,Sennott1989}. In Sennott's studies, a distinguished state is introduced and the vanishing discount optimality approach is adopted to study the optimality inequality. \cite{Borkar1989} also studies the condition of optimal stationary policies for discrete-time average cost MDPs with countable states, but via the characterization through the dynamic programming equations. 
For constrained MDPs with countable states and long-run average cost, \cite{Borkar1994} further establishes the existence of stationary randomized policies for the general case of nonnegative cost functions (or unbounded from below), using the method of occupation measures. \cite{Lasserre1988} studies the stationary policies of denumerable state MDPs for not only the average cost optimality, but also the Blackwell optimality. \cite{Meyn1999} studies a similar problem based on the stabilization of controlled Markov chains, with algorithmic analysis. \cite{Cao2015} study the existence condition of optimal stationary policies for a class of queueing systems, also from the analysis of system stability. \cite{Cavazos1991,Cavazos1992} give a fairly complete summary and comparison of different results on existence conditions for discrete-time average cost MDPs with countable state space and finite action sets. For more general cases beyond countable state spaces, \cite{Hernandez1991} studies the existence condition of average cost optimal stationary policies in a class of discrete-time Markov control processes with Borel spaces and unbounded costs, where setwise continuity is assumed instead of compactness of the action space. \cite{Feinberg2007} present sufficient conditions for the existence of an optimal stationary policy of MDPs based on the average cost optimality inequalities, where the state and action spaces are Borel subsets of Polish spaces. The derived result is also applied to a cash balance problem with an inventory model. For continuous-time MDPs with infinite state spaces (Polish spaces), \cite{Guo2006} study the existence of optimal deterministic stationary policies by using the Dynkin formula and two optimality inequalities for the average cost criterion. 
Some other systematic discussion on this issue can also be found in the excellent books on MDPs; see \cite{Bertsekas12,Hernandez1996,Puterman94,Sennott1999} for discrete-time MDPs and \cite{Bertsekas12,Guo2009} for continuous-time MDPs. In summary, most of the existing results concern sufficient conditions, which usually require constructing a set of functions satisfying several sophisticated assumptions. Although these conditions are quite general, they may not be easy to verify, and constructing the required functions can be difficult when applying them to practical problems. In this paper, we study the optimality condition of stationary policies for average cost MDPs with countable states and finite actions available at each state. By defining a proper metric on the policy space, we study the continuity of the system's average cost and the compactness of the policy space, and we show that such continuity and compactness imply the existence of an optimal stationary policy. We further extend the continuity requirement by assuming some reasonable conditions on transition rates and uniform convergence of un-normalized probabilities in MDPs. Compared with the existing literature, our result holds under the weak conditions of continuity and ergodicity, and it can handle cost functions unbounded from both below and above. In contrast, some general results in the literature require the cost function to be unbounded only from below (e.g., see \citep{Borkar1994}) or require $\omega$-geometric ergodicity (e.g., see \citep{Hernandez1999}), which partly demonstrates the advantages of our method. Moreover, our result may be easier to verify for some MDPs, especially for queueing systems. The main results of the paper are illustrated by several examples, for one of which the cost function is unbounded from above and from below, as discussed in Remark~2 at the end of Section~\ref{section_queue}. The remainder of the paper is organized as follows. 
In Section~\ref{section_result}, we derive the existence condition by studying the continuity of the average cost in a defined compact metric space of policies. In Section~\ref{section_queue}, an example of a scheduling problem in queueing systems is provided to demonstrate the validation process of our existence condition of an optimal stationary policy. In Section~\ref{section_extension}, we further extend the existence condition to several reasonable assumptions which may be easy to satisfy in practical problems. Finally, we conclude the paper in Section~\ref{section_conclusion}. \section{The Basic Idea}\label{section_result} In an MDP, the state space is denoted as $\mathcal S$, which is assumed to be countably infinite. Without loss of generality, we denote it as $\mathcal S =\{0,1,\dots\}$. Associated with every state $i \in \mathcal S$, there is a finite action set $\mathcal A(i)$. At state $i \in \mathcal S$, if action $a \in \mathcal A(i)$ is adopted, an instant cost $f(i,a)$ will be incurred. Meanwhile, the system will transit to state $j \in \mathcal S$ with transition probability $p^a(i,j)$ for discrete-time MDPs and with transition rate $q^a(i,j)$ for continuous-time MDPs, respectively. Let $u$ denote a (deterministic) stationary policy, which is a mapping on $\mathcal S$ such that $u(i) \in \mathcal A(i)$ for all $i \in \mathcal S$. Let $\mbox{$\cal U$}$ denote the stationary policy space, $\mbox{$\cal U$}:= \times_{i \in \mathcal S}\mathcal A(i) := \mathcal A(0) \times \mathcal A(1) \times \dots $, with ``$\times$'' being the Cartesian product. Let $X (t)$ be the system state at time $t$. 
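To make this setup concrete, the sketch below encodes a stationary policy as a plain function $i \mapsto a$ and estimates a long-run average cost by simulating a toy discrete-time chain. All numbers here (the up-probability $0.3$ and the cost $f(i,a)=i+2a$) are hypothetical illustrations of ours, not taken from the paper.

```python
import random

def simulate_average_cost(u, steps=200_000, seed=0):
    """Estimate the long-run average cost eta(u) by simulation for a toy
    discrete-time MDP (hypothetical example): from state i the chain moves
    up with probability 0.3; otherwise it moves down one step if u(i) = 1,
    or stays put if u(i) = 0.  Instant cost f(i, a) = i + 2*a."""
    rng = random.Random(seed)
    i, total = 0, 0.0
    for _ in range(steps):
        a = u(i)
        total += i + 2 * a          # accumulate the instant cost f(i, a)
        if rng.random() < 0.3:
            i += 1                  # move up (an "arrival")
        elif a == 1:
            i = max(i - 1, 0)       # move down (a "service completion")
    return total / steps

always_serve = lambda i: 1          # the stationary policy u(i) = 1
eta_hat = simulate_average_cost(always_serve)
# under u(i) = 1 the chain is a stable birth-death walk whose stationary
# law is geometric with ratio 3/7, so eta_hat should be near 0.75 + 2
```

The policy here is a member of the product space $\mbox{$\cal U$}$: one action per state, with the simulation standing in for the expectation defining the average cost.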
Under suitable conditions, the long-run average performance measure for MDPs, which does not depend on the initial state $x\in \mbox{$\cal S$}$ but depends on $u \in \mbox{$\cal U$}$, is defined as $\eta (u)$: \begin{equation} \eta (u) := \lim\limits_{T\rightarrow \infty}\frac{1}{T} \mathbb E\left\{\sum_{t=0}^{T-1} f(X(t),u(X(t))) \Big | X(0)=x \right\}, \end{equation} or \begin{equation} \eta (u) := \lim\limits_{T\rightarrow \infty}\frac{1}{T} \mathbb E\left\{\int_{t=0}^{T} f(X(t),u(X(t)))dt \Big | X(0)=x \right\}, \end{equation} for discrete-time and continuous-time ergodic MDPs, respectively, where the expectation operator $\mathbb E$ depends on $u \in \mbox{$\cal U$}$. However, such dependence is omitted below for notational simplicity. The goal of optimization is to find a policy $u^*$ such that \begin{equation} \eta (u^* ) = \inf_{u \in \cal U} [\eta (u) ],~~~~~(\mbox{or }~ \eta (u^* ) = \sup_{u \in \cal U} [\eta (u) ] ). \end{equation} Assume that $\eta (u)$ is bounded in $u \in \mbox{$\cal U$}$, so $\inf_{u \in \cal U} [\eta (u) ]$ is finite. We aim to find conditions under which such an optimal stationary policy $u^*$ exists. \begin{theorem} \label{thmopex} If $\mbox{$\cal U$}$ is a compact metric space and the function $\eta (u)$ is continuous on $\mbox{$\cal U$}$ with respect to the metric, then an optimal policy $u^*$ exists. \end{theorem} {\it Proof:} Let $\eta^* := \inf_{u \in \cal U} [\eta (u) ]$. By definition, there exists a sequence of policies, denoted as $u_0$, $u_1$, $\dots$, such that \begin{equation} \label{etatos} \lim_{n \to \infty} \eta (u_n ) = \eta^* . \end{equation} Because $\mbox{$\cal U$}$ is compact, there is a subsequence of $\{ u_n , n=0,1, \dots \}$ that converges to a limit (accumulation) point. Denote this subsequence as $\{ u_{n_k}, k=0,1, \dots \}$ and the limit point as $u^* \in \mbox{$\cal U$}$. Then \[ \lim_{k \to \infty } u_{n_k} = u^* \in \mbox{$\cal U$} . 
\] By continuity of $\eta (u)$, we have \[ \lim_{k \to \infty } \eta (u_{n_k} ) = \eta (u^* ) . \] By (\ref{etatos}), we obtain \[ \eta (u^*) = \eta ^* = \inf_{u \in \cal U} [\eta (u) ] ; \] i.e., $u^* \in \mbox{$\cal U$}$ is an optimal policy. $\Box$ Theorem~\ref{thmopex} requires a compact metric space defined for $\mathcal U$. Below, we introduce such a metric in the policy space. Note that a policy can be denoted as \[ u = ( u(0) , u(1) , \dots ). \] Choosing a real number $0<r<0.5$ (e.g., $r=0.1$), we define the distance between two policies $u_1 = ( u_1 (0) , u_1 (1) , \dots )$ and $u_2 = ( u_2 (0) , u_2 (1) , \dots ) $ as \begin{equation} \label{defdis} d(u_1 , u_2 ) := \sum_{i=0}^\infty ||u_1 (i) - u_2 (i) || r^i , \end{equation} in which \[ ||u_1 (i) - u_2 (i) || := \left \{ \begin{array}{ll} 1 & ~ \mbox{if} ~ u_1 (i) \neq u_2 (i) , \\ 0 & ~ \mbox{if} ~ u_1 (i) = u_2 (i) . \end{array} \right . \] It is easy to verify that \[ d(u,u)=0,~~ d(u_1, u_2 ) = d(u_2 , u_1 ), \] and for any three policies $u_1$, $u_2$, and $u_3$, the following triangle inequality holds \[ d(u_1, u_3 ) \leq d(u_1 , u_2 ) + d(u_2 , u_3 ). \] Thus, $d(u_1 , u_2 )$, $u_1, u_2 \in \mbox{$\cal U$}$, indeed defines a metric on $\mbox{$\cal U$}$. Suppose for two policies $u_1$ and $u_2$, $u_1 (i) = u_2 (i)$ for all $i=0,1, \dots, k$. Then \begin{eqnarray}\label{drskm} d(u_1 , u_2 ) &=& \sum_{i=k+1}^\infty ||u_1 (i) - u_2 (i) || r^i \nonumber\\ &\leq& \sum_{i=k+1}^\infty r^i = r^{k+1} \sum_{i=0}^\infty r^i = \frac {r^{k+1}}{1-r} < r^{k}, \end{eqnarray} where the last inequality holds because we choose $r<0.5$, so $\frac {r}{1-r} <1$. By (\ref{drskm}), we have \begin{lemma} \label{lemduout} $d(u_1, u_2) < r^k$ if and only if $u_1 (i) = u_2 (i)$ for all $i \leq k$. \end{lemma} {\it Proof:} The ``If'' part follows directly from (\ref{drskm}). Now we prove the ``Only if'' part by contradiction. Assume that there is an integer $n$ such that $u_1 (n) \neq u_2 (n)$ and $n \leq k$. 
By (\ref{defdis}), we have $d(u_1, u_2) \geq r^n \geq r^k$, which contradicts the condition $d(u_1, u_2) < r^k$. Thus, the assumption is not true and the ``Only if'' part is proved. $\Box$ The metric defined by the distance function $d(u_1, u_2 )$ induces a topology on $\mbox{$\cal U$}$. First, we define an open ball around a point $u \in \mbox{$\cal U$}$ as \begin{equation} \label{eq_disc} O_\epsilon (u) := \{ v \in \mbox{$\cal U$}: d(u,v) < \epsilon \} ,~~~\epsilon >0 . \end{equation} We have $u \in O_\epsilon (u)$ for any $\epsilon >0$. A set $N(u)$ is called a neighborhood of a point $u \in \mbox{$\cal U$}$ if there is an open ball $O_\epsilon (u)$ for some $\epsilon >0$ such that $O_\epsilon (u) \subseteq N(u)$. By Lemma~\ref{lemduout}, we have the following fact: $u'(i)=u(i)$ for all $i \leq k$ if and only if $u' \in O_{r^k}(u)$. \noindent\textbf{Remark~1.} Lemma~\ref{lemduout} reveals the advantage of the metric (\ref{defdis}): It shows that all the policies in a small neighborhood $O_{r^k}(u)$ of policy $u$ take the same actions at all states $i \leq k$. This property is very useful in proving the continuity of $\eta (u)$ in many optimization problems, in which the steady-state probability of state $i$, $\pi (i)$, goes to zero when $i$ goes to infinity; in other words, states $i>k$ are less important. $\Box$ In a metric space $\mbox{$\cal U$}$, a limit point can be defined by the metric, i.e., $\lim_{n \to \infty} u_n =u$ for some sequence $\{u_n\} \subseteq \mathcal U$ if and only if $\lim_{n \to \infty} d(u_n , u) =0$. In this sense, a continuous function is defined in the same way as a continuous function on a real space. Since $\mathcal A(i)$ is finite and $\mathcal S$ is countable, it is well known that with the metric (\ref{defdis}) the policy space $\mbox{$\cal U$}=\times_{i \in \mathcal S}\mathcal A(i)$ is compact. 
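The metric (\ref{defdis}) and Lemma~\ref{lemduout} are easy to check numerically. The following sketch (illustrative only; policies and the cutoff $k$ are chosen by us) evaluates a truncation of $d(u_1,u_2)$ and confirms that agreement on all states $i \le k$ forces the distance below $r^k$.

```python
R = 0.1  # any 0 < r < 0.5 works; the text suggests r = 0.1

def policy_distance(u1, u2, r=R, n_terms=200):
    """Truncated evaluation of d(u1, u2) = sum_i 1[u1(i) != u2(i)] * r^i.
    u1, u2 are callables mapping a state i >= 0 to an action; the
    neglected tail is at most r**n_terms / (1 - r)."""
    return sum((u1(i) != u2(i)) * r**i for i in range(n_terms))

k = 5
u1 = lambda i: 1                     # take action 1 at every state
u2 = lambda i: 1 if i <= k else 0    # agrees with u1 exactly on i <= k

d = policy_distance(u1, u2)
# by the lemma, agreement on states i <= k implies d < r**k; here the
# disagreement set is {k+1, k+2, ...}, so d = r**(k+1) / (1 - r)
```

The geometric weighting is the design point: the first disagreement state alone determines the order of magnitude of the distance, which is exactly what Lemma~\ref{lemduout} exploits.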
In fact, every sequence of policies in $\mbox{$\cal U$}$ has a convergent subsequence, and its limit (accumulation) point is again a policy in $\mbox{$\cal U$}$. In order to apply Theorem~\ref{thmopex}, we have to prove the continuity of $\eta (u)$ in $\mbox{$\cal U$}$ for the specific problems. Below, we use some examples to illustrate the applicability of Theorem~\ref{thmopex} in MDPs. \begin{example} \label{exmp} (A modification of Example 8.10.2 in Puterman's book \citep{Puterman94}) Consider an MDP with $\mbox{$\cal S$} = \{1,2, \dots\}$. At each state $i \in \mbox{$\cal S$}$, there are two actions $1$ and $0$. If action $1$ is taken, then the state transits from $i$ to $i+1$ with probability $1$ and the cost is $f(i,1)=0$; if action $0$ is taken, then the state stays at $i$ with probability $1$ and the cost is $f(i,0)=\frac {1}{i}$. The Markov chain (under any given policy) is denoted as $X(t)$, $t=0,1, \dots$. A stationary policy is denoted as a mapping $u: \mbox{$\cal S$} \rightarrow \{0,1\} $. The performance measure for policy $u = ( u(1), u(2), \dots )$ with initial state $i$ is the long-run average \begin{equation} \label{defperf} \eta (u,i) = \lim_{T \to \infty} \frac {1}{T} \mathbb E \left \{\sum_{t=0}^{T-1} f(X(t),u(X(t))) \Big | X (0) = i \right \} . \end{equation} Note that the performance may depend on the initial state, i.e., it is a function of both the initial state and the policy. To prove the existence of an optimal policy, we need to fix the initial state. In (\ref{defperf}), we choose $X(0)=1$. We wish to find a policy $u^*$ such that \[ \eta (u^*, 1) = \inf_{u \in \cal U} \{ \eta (u,1) \} . \] We need to prove that such an optimal stationary policy exists. Now, we prove that $\eta (u,1)$ is continuous in $u \in \mbox{$\cal U$}$ with metric (\ref{defdis}). Given a policy $u_0$, for any small positive $\epsilon$, we find the maximum integer $k$ satisfying $r^k > \epsilon$. 
By Lemma \ref{lemduout}, if we choose a policy $u$ satisfying $d(u,u_0 )<\epsilon$, then all the actions of the policies $u$ and $u_0$ at states $i \leq k$ are the same. By the structure of $\eta (u,i)$ defined in (\ref{defperf}), we can conclude that \[ |\eta (u,1) - \eta (u_0,1)| < \frac {1}{k} . \] More precisely, since $u(i)=u_0(i)$ for all $i\leq k$, we discuss two cases. Case 1: If $u (i) = {u_0} (i) = 1$ for all $i \leq k$, we have $0 \leq \eta(u,1), \eta(u_0,1) < \frac {1}{k}$, thus $|\eta (u,1) - \eta (u_0,1)| < \frac {1}{k}$. Case 2: If there exists some state $i \leq k$ such that $u (i) = {u_0} (i) = 0$, we denote the smallest such state as $i^*$ and we have $\eta (u,1) = \eta (u_0,1) = \frac{1}{i^*}$, thus $|\eta (u,1) - \eta (u_0,1)| = 0$. In summary, for any $\epsilon>0$, take $\hat k>1$ such that $\frac {1}{\hat k}<\epsilon$, thus $|\eta (u,1) - \eta (u_0,1)| <\epsilon$ for all $u\in O_{r^{\hat k}}(u_0)$. Therefore, $\eta (u,1)$ is continuous at $u_0$. Finally, by Theorem \ref{thmopex}, the optimal stationary policy exists. Actually, it is easy to verify that the optimal policy is $u^* = (1,1,\dots,1,\dots)$ and the corresponding optimal cost is $\eta^* = 0$. $\Box$ \end{example} \begin{example} (Example 8.10.2 in Puterman's book \citep{Puterman94}) Consider an MDP with $\mbox{$\cal S$}= \{1,2, \dots\}$. At state $i \in \mbox{$\cal S$}$, there are two actions $1$ and $0$. If action $1$ is taken, then the state transits from $i$ to $i+1$ with probability $1$ and the reward is $f(i,1)=0$; if action $0$ is taken, then the state stays at $i$ with probability $1$ and the reward is $f(i,0)= 1 - \frac {1}{i}$. The Markov chain (under any policy) is denoted as $X(t)$, $t=0,1, \dots$. A stationary policy is denoted as a mapping $u: \mbox{$\cal S$} \rightarrow \{0,1\} $. The performance measure for policy $u = (u(1), u(2), \dots ) $ with initial state $i$ is the long-run average reward as follows. 
\begin{equation} \label{defperf2} \eta (u,i) = \lim_{T \to \infty} \frac {1}{T} \mathbb E \left \{ \sum_{t=0}^{T-1} f(X(t),u(X(t))) \Big | X(0) = i \right \} . \end{equation} We set the initial state always as $X(0)=1$ and we wish to find a policy $u^*$ such that \[ \eta (u^*,1) = \sup_{u \in \cal U} \{ \eta (u,1) \} . \] The discussion is the same as in Example \ref{exmp}, except that $\eta (u,1)$ is NOT continuous at $u_0 = (1,1, \dots, 1, \dots )$: we have $\eta (u_0,1) = 0$, while $\eta (u,1) \geq 1-\frac{1}{k}$ for any neighboring policy $u \neq u_0$ with $d(u,u_0)<r^k$. Therefore, Theorem~\ref{thmopex} does not apply, and an optimal stationary policy may not exist for this example. Actually, it is easy to verify that the optimal reward of this problem is $\eta^* = 1$. A history-dependent policy $u^*$ which uses action 0 $i$ times in state $i$, and then uses action 1 once, will yield a reward stream of $(0,0,\frac{1}{2},\frac{1}{2},0,\frac{2}{3},\frac{2}{3},\frac{2}{3},0,\frac{3}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4},\dots)$. Thus, the history-dependent policy $u^*$ can reach the optimal reward $\eta^*=1$. However, any stationary deterministic policy yields an average reward of either $0$ or $1-\frac{1}{i}$ for some $i$, and hence cannot reach the optimal reward $\eta^*=1$. $\Box$ \end{example} \section{The $c/\mu$-Rule in Queueing Systems}\label{section_queue} In this section, we show that, with the metric space defined by (\ref{defdis}), the basic idea presented in Section \ref{section_result} can be applied to a class of optimal scheduling problems in queueing systems, called the $c/\mu$-rule problem, to establish the existence of an optimal stationary policy. \begin{figure} \caption{The illustration of the on/off control of group-server queues.} \label{fig_groupQ} \end{figure} The problem is about the on/off scheduling control of parallel servers in a group-server queue. More details of the problem setting can be found in \citep{Xia2018}; we give a brief introduction as follows. 
Consider a group-server queue with a single infinite-size buffer and $K$ groups of parallel servers, as illustrated by Fig.~\ref{fig_groupQ}. Customers are homogeneous and arrive according to a Poisson process with rate $\lambda$. An arriving customer goes to an idle server that is turned on. If all the servers that are turned on are busy, the arriving customer waits in the buffer. Servers provide service in parallel and are categorized into $K$ groups. Servers in the same group are homogeneous in service rates and cost rates, while those in different groups are heterogeneous. Group $k$ has $M_k$ servers with service rate $\mu_k$ and cost rate $c_k$ per unit of time, $k=1,2,\cdots,K$. Without loss of generality, we assume $\mu_1 \geq \mu_2 \geq \dots \geq \mu_K$. The system cost includes two parts, the operating cost of servers and the holding cost of customers. The system state $n$ is the number of customers in the system. The state space is denoted as $\mbox{$\cal S$} = \{0,1,2,\dots\}$, which is countably infinite. We can turn servers on or off dynamically to reduce the system average cost. The action is the number of working servers in each group, which is denoted as $a=(a_1,a_2,\cdots,a_K)$, where $a_k$ is the number of working servers in group $k$ and $a_k \in \{0,1, \cdots, M_k\}$. For any state $n \geq 1$, the action space $\mathcal A(n)$ is a subset of $\{1,\ldots,M_1\}\times \{0,\ldots,M_2\}\times\cdots\times \{0,\ldots,M_K\}$, where the requirement $a_1 \geq 1$ is reasonable and guarantees that the system is ergodic. Define a stationary policy as $u := (u(0),u(1),u(2),\dots)$, where $u(n):=(u(n,1),u(n,2),\dots,u(n,K)) \in \mathcal A(n)$ is the action at state $n$ and $u(n,k)$ is the number of working servers in group $k$ at state $n$. The cost function at state $n$ under policy $u$ is \begin{equation} \label{rewardf} f(n,u) := h(n) + \sum_{k=1}^{K}c_k u(n,k), \end{equation} where $h(n)$ is the holding cost rate at state $n$. 
The system long-run average cost under policy $u$ is defined as \begin{equation} \label{etau} \eta (u) = \lim\limits_{T \rightarrow \infty} \frac{1}{T} \mathbb E \left\{ \int_{t=0}^{T} f(n(t),u)dt \right\}, \end{equation} where $n(t)$ is the system state at time $t$. The optimal average cost is $\eta^* = \inf\limits_{u} [\eta(u)]$. We aim at finding the optimal stationary policy $u^*$ which achieves the optimal average cost, i.e., $\eta(u^*) = \eta^*$, where $u^* \in \mathcal U$ and $\mathcal U$ is the stationary policy space. In \citep{Xia2018}, it is shown that the optimal policies (if they exist) follow the so-called $c/\mu$-rule: Servers in the group with smaller values of $c/\mu$ should be turned on with higher priority. Here, we want to verify that an optimal stationary policy does exist for this problem with countable states. It is natural to assume that the holding cost $h(n)$ is increasing in $n$; thus, under optimal policies the queue should be \emph{ergodic}. So we assume that $\{n(t)\}$ is ergodic (under each policy in $\mbox{$\cal U$}$) with a unique steady-state distribution $\pi (n,u)$, $n=0,1, \cdots$, $u \in \mbox{$\cal U$}$, and the long-run average (\ref{etau}) does not depend on the initial state. Since our queue is a birth-death process, we can derive the steady-state distribution as follows. \begin{eqnarray}\label{eq_pi} \pi(n,u) = \frac{1}{1+G(u)}\prod_{l=1}^{n}\frac{\lambda}{u(l)\mu}, \qquad n \geq 1, \end{eqnarray} where $\mu = (\mu_1, \cdots, \mu_K )^T$, and \begin{equation} u(l)\mu := \sum_{k=1}^{K} u(l,k) \mu_k, \end{equation} and \begin{equation} \label{gun} G(u) := \sum_{n=1}^{\infty} \prod_{l=1}^{n}\frac{\lambda}{u(l)\mu}. \end{equation} The queue is stable if and only if $G(u) < \infty$, which also indicates \begin{equation}\label{eq7} \lim\limits_{n \rightarrow \infty} \sum_{m=n}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu} =0. 
\end{equation} The ergodicity of the system under a policy $u$ implies a necessary condition: $u(n)\mu \neq 0$ for all $n \geq 1$. The stability of the system can be guaranteed by a sufficient condition: there exists an $\bar n$ such that $u(n)\mu > \lambda$ for all $n > \bar n$. For an ergodic policy $u \in \mbox{$\cal U$}$, under suitable conditions, the long-run average (\ref{etau}) equals \begin{equation} \label{etast} \eta (u) = \sum_{n=0}^\infty \pi (n,u)f(n,u) . \end{equation} For the analysis here, we need to make the following assumption: \begin{assumption} \label{ass0} The normalizing factor $G(u)$ in (\ref{gun}) (equivalently, the limit in (\ref{eq7})) and the performance limit (\ref{etast}) converge uniformly in $\mbox{$\cal U$}$. \end{assumption} We use the metric definition (\ref{defdis}) to quantify the distance between any two policies. In what follows, we will prove that when two policies $u$ and $u'$ are sufficiently close, their performance measures $\eta (u)$ and $\eta (u')$ are also close to each other. Denote the two policies by $u = (u(0), u(1), \cdots, u(n), \cdots )$ and $u' = (u'(0), u'(1), \cdots, u'(n), \cdots )$. By Lemma \ref{lemduout}, we assume that \begin{equation} \label{uln} u(l)=u'(l), \qquad \mbox{for } l=0,1, \cdots, n, \end{equation} which means that $u'\in O_{r^n}(u)$. First, we compare the difference of the normalization factors $1+G(u)$ and $1+G(u')$ of these two policies. 
We have \begin{eqnarray} \label{onepd} \frac{1+ G(u)}{1+G(u')} &=& \frac{1+\sum_{m=1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}{1+\sum_{m=1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu}} \nonumber\\ &=& \frac{ \Big ( 1+\sum_{m=1}^{n} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu} \Big ) +\sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}{ \Big ( 1+\sum_{m=1}^{n} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu} \Big ) +\sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu}} \nonumber\\ &=& \frac{1+\frac{\sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}{1+\sum_{m=1}^{n} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}} {1+\frac{\sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu}}{1+\sum_{m=1}^{n} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}} < 1+\frac{\sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}{1+\sum_{m=1}^{n} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}} \nonumber\\ &<& 1 + \sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu} = 1 + \delta(n,u), \end{eqnarray} where \begin{equation} \delta(n,u) := \sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}. 
\end{equation} Similarly, we can also have \begin{eqnarray} \label{onemdp} \frac{1+ G(u)}{1+G(u')} &=& \frac{1+\sum_{m=1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}{1+\sum_{m=1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu}} = \frac{1+\frac{\sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}{1+\sum_{m=1}^{n} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}} {1+\frac{\sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu}}{1+\sum_{m=1}^{n} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}} \nonumber\\ &>& \frac{1} {1+\frac{\sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu}}{1+\sum_{m=1}^{n} \prod_{l=1}^{m}\frac{\lambda}{u(l)\mu}}} > \frac{1} {1+ \sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu} }\nonumber\\ &>& 1 - \sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu} = 1-\delta(n,u'), \end{eqnarray} where \begin{equation} \delta(n,u' ) := \sum_{m=n+1}^{\infty} \prod_{l=1}^{m}\frac{\lambda}{u'(l)\mu}. \end{equation} Therefore, we have \begin{eqnarray} 1-\delta(n,u' ) < \frac{1+ G(u)}{1+G(u')} < 1+\delta(n,u). \end{eqnarray} Let $\sigma(n,u,u')$ be determined by \begin{equation}\label{eq_G2} \frac{1+ G(u)}{1+G(u')} = 1+\sigma(n,u,u'). \end{equation} Then, \begin{equation}\label{eq_sigma} -\delta(n,u')<\sigma(n,u,u')<\delta(n,u) . \end{equation} With (\ref{eq_pi}), (\ref{uln}), and (\ref{eq_G2}), the steady-state distributions under these two policies $u$ and $u'$ have the following relation. \begin{equation}\label{eq13} \pi(m,u') = (1+\sigma(n,u,u'))\pi(m,u), \qquad m=0,1,\dots,n. \end{equation} Next, we study the difference between the associated long-run average costs $\eta$ under policies $u$ and $u'$. The cost functions are denoted by $f(m,u)$ and $f(m,u')$, respectively. By (\ref{rewardf}) and (\ref{uln}), we have $f(m,u) = f(m,u')$ for $m=0,1,\dots,n$. 
Therefore, we have \begin{eqnarray} &&\eta(u' ) - \eta(u) \nonumber\\ &=& \sum_{m=0}^{\infty}[\pi(m,u')f(m,u' ) - \pi(m,u)f(m,u)] \nonumber\\ &=& \sum_{m=0}^{n}[\pi(m,u')f(m,u' ) - \pi(m,u)f(m,u)] + \sum_{m=n+1}^{\infty}[\pi(m,u')f(m,u') - \pi(m,u)f(m,u)] \nonumber\\ &=& \sum_{m=0}^{n}[\pi(m,u') - \pi(m,u)]f(m,u) + \sum_{m=n+1}^{\infty}[\pi(m,u')f(m,u') - \pi(m,u)f(m,u)] . \nonumber \end{eqnarray} Applying (\ref{eq13}), we have \begin{equation}\label{diffeta} \eta(u') - \eta(u) = \sigma(n,u,u')\sum_{m=0}^{n}\pi(m,u)f(m,u) + \sum_{m=n+1}^{\infty}[\pi(m,u')f(m,u') - \pi(m,u)f(m,u)]. \end{equation} Now we are ready to prove the continuity of $\eta (u)$ in the metric space $\mbox{$\cal U$}$ with metric (\ref{defdis}). With (\ref{eq7}), we have \begin{equation*} \lim\limits_{n \rightarrow \infty} \delta(n,u) = 0, \qquad \lim\limits_{n \rightarrow \infty} \delta(n,u') = 0. \end{equation*} Let $\epsilon>0$ be any small number. Under Assumption~\ref{ass0}, by the uniformity of $G(u)$ in (\ref{gun}) and (\ref{eq7}), there exists a large integer $N_1$ such that if $n>N_1$, we have $\delta (n,u) < \epsilon$ for any $u \in \mbox{$\cal U$}$. By (\ref{eq_sigma}), we have \[ |\sigma (n,u,u')|< \epsilon, \quad \forall n>N_1 \ {\rm and} \ u,u' \in \mbox{$\cal U$}. \] Next, because (\ref{etast}) converges, there is a large integer $N_2 $ such that \begin{equation*} \Big|\sum_{m=0}^{n}\pi(m,u)f(m,u) \Big|< |\eta(u)| + 1, \quad \forall n>N_2. \end{equation*} Furthermore, under Assumption \ref{ass0}, by the uniformity of the convergence of (\ref{etast}), there is a large integer $N_3$ such that \begin{equation*} \Big|\sum_{m=n+1}^{\infty}[\pi(m,u')f(m,u') - \pi(m,u)f(m,u)] \Big| < 2 \epsilon, \quad \forall n>N_3 \ {\rm and} \ u,u'\in \mbox{$\cal U$}. \end{equation*} Finally, let $N^*:=\max \{N_1, N_2, N_3 \}$. 
Then, by (\ref{diffeta}) and Lemma~\ref{lemduout}, we have \begin{eqnarray}\label{diffop1} |\eta (u) - \eta (u')| &\leq& |\sigma(n,u,u')|\Big|\sum_{m=0}^{n}\pi(m,u)f(m,u)\Big| + \Big|\sum_{m=n+1}^{\infty}[\pi(m,u')f(m,u') - \pi(m,u)f(m,u)] \Big| \nonumber\\ &<& [|\eta(u)|+ 3] \epsilon, \ \ \ \ {\rm for \ all \ } u'\in O_{r^{N^*}}(u). \end{eqnarray} Since $\eta(u)$ is bounded, we conclude that $\eta (u)$ is continuous at $u$ in the metric space. Therefore, the existence of an optimal stationary policy $u^*$ for this $c/\mu$-rule problem follows directly from Theorem~\ref{thmopex}. $\Box$ \noindent\textbf{Remark~2.} The condition of uniform convergence in Assumption~\ref{ass0} is easy to validate in queueing systems. For example, we can set the condition for the control of our group-server queues as follows: \textcircled{\oldstylenums{1}} there exists a constant $\tilde{n}$ such that for any $n > \tilde{n}$, every feasible action $u(n) \in \mathcal A(n)$ always satisfies $u(n) \mu > \lambda$. Therefore, we define $\rho_0 := \max_{u(n)\in \mathcal A(n),n>\tilde{n}}\{ \frac{\lambda}{u(n)\mu} \} <1$. We directly have $G(u)\leq \sum_{n=1}^{\tilde{n}}\prod_{l=1}^{n}\frac{\lambda}{u(l)\mu}+\sum_{n=\tilde{n}+1}^{\infty}\rho_0^n < \infty$, which indicates that the queueing system is stable and the normalizing factor $G(u)$ in (\ref{gun}) converges uniformly in $u\in \mbox{$\cal U$}$. Compared with (\ref{eq_pi}), we further define a pseudo probability $\tilde{\pi}(n,u) := \frac{1}{1+G(u)}\rho_0^n$. Obviously, we always have $\tilde{\pi}(n,u) \geq \pi(n,u)$ for any policy $u$ and $n > \tilde{n}$. Thus, for the performance limit (\ref{etast}), we have $|\eta (u)| \leq \sum_{n=0}^{\tilde n} \pi(n,u)|f(n,u)| + \sum_{n=\tilde{n}+1}^{\infty} \tilde{\pi}(n,u)|f(n,u)| = \sum_{n=0}^{\tilde n} \pi(n,u)|f(n,u)| + \frac{1}{1+G(u)} \sum_{n=\tilde{n}+1}^{\infty} \rho_0^n |f(n,u)|$, where the first part is always finite, so we only need to guarantee that the second part is bounded. 
Thus, \textcircled{\oldstylenums{2}} any cost function $|f(n,u)|$ that increases at most polynomially in $n$ is controlled by the exponential factor $\rho_0^n$. Therefore, with \textcircled{\oldstylenums{1}} and \textcircled{\oldstylenums{2}}, we can easily validate Assumption~\ref{ass0} that $G(u)$ and $\eta(u)$ converge uniformly, and thus an optimal stationary policy exists. More specifically, for the cost function (\ref{rewardf}), we have $f(n,u)=h(n)+\sum_{k=1}^{K}c_k u(n,k)$, where the operating cost $\sum_{k=1}^{K}c_k u(n,k)$ is obviously bounded and the holding cost $h(n)$ can be unbounded. From the above analysis, we can see that $f(n,u)$ can be unbounded both from below and from above. For example, we can set $h(n)=(-1)^n \cdot n$, which is unbounded both below and above while still satisfying our condition \textcircled{\oldstylenums{2}}. However, this kind of cost function may not be handled by other methods in the literature \citep{Borkar1994} because the cost function there is required to be bounded from below. This is also one of the advantages of our method in this paper. We have demonstrated the applicability of Theorem~\ref{thmopex} for proving the existence of optimal stationary policies in a scheduling problem of queueing systems. In the next section, we further show that this approach also applies to more general cases. \section{More General Cases}\label{section_extension} In general, we consider a continuous-time MDP with a countable state space denoted as $\mbox{$\cal S$} = \{ 0,1, \dots \}$. Let $\pi (i,u)$ be the steady-state probability of state $i\in \mbox{$\cal S$}$ under a given policy $u\in \mathcal U$, and $q^a(i,j)$ be the transition rate from state $i$ to $j$ under action $a\in \mathcal A(i)$, $i,j \in \mbox{$\cal S$}$.
Obviously, we have $q^a(i,j)\geq 0$ for $i \neq j$ and $q^a(i,i)=-\sum_{j\in\mathcal S, j \neq i} q^a(i,j) \leq 0$, where $|q^a(i,i)|$ can be understood as the total rate of transitions out of state $i$ when action $a$ is adopted. Then we know that the steady-state probabilities $\pi(i,u)$, $i\in \mbox{$\cal S$}$, must satisfy the following equations. \begin{align} & \sum_{j=0}^\infty \pi (j,u) q^{u(j)}(j,i) = 0, \quad i \in \mbox{$\cal S$}, \label{pii} \\ & \sum_{i=0}^\infty \pi (i,u) = 1 , \label{sumpi} \end{align} where (\ref{sumpi}) is called a normalization equation. Given a policy $u\in \mathcal U$, any sequence $\nu(i,u)\geq 0$ (depending on $u$), $i \in \mbox{$\cal S$}$, that satisfies \begin{equation} \label{eqnu} \sum_{j=0}^\infty \nu (j,u) q^{u(j)}(j,i) = 0, \ \forall i \in \mbox{$\cal S$} \ \mbox{ and} \quad \sum_{i=0}^\infty \nu (i,u) < \infty , \end{equation} is called an {\it un-normalized steady-state vector}. From (\ref{eqnu}), \[ \pi (i,u) = \frac {\nu (i,u) } {\sum_{j=0}^\infty \nu (j,u) }, ~~i \in \mathcal S , \] gives the steady-state probabilities. In the rest of the paper, it is more convenient to deal with the un-normalized vector because it does not involve the denominator. Moreover, it is convenient to set $\nu (0,u)=1$ to obtain an un-normalized probability. First, we make the following assumptions to simplify the problem setting. \begin{assumption} \label{ass1} \begin{enumerate} \item [(a)] $q^a(i,j)$ is bounded, i.e., $|q^a(i,j)| < \Lambda $, for all $i,j\in \mathcal S, a\in \mathcal A(i)$. \item [(b)] There is an integer $M>0$ such that $q^a(j,i)=0$, for all $j >i+M$, $i \in \mathcal S$ and $a\in \mathcal A(j)$. \end{enumerate} \end{assumption} Assumption~\ref{ass1}(a) indicates that the transition rate from any state $i$ has an upper bound $\Lambda$, which is reasonable for most cases in practice. Assumption~\ref{ass1}(b) means that the transition rate from state $j$ back to $i$ is 0 if state $j$ is far away from state $i$.
This assumption is also reasonable in many practical systems; in particular, it usually holds for queueing systems, where state $j$ transits backward only to state $j-1$ upon a service completion event. At a state $i \in \mbox{$\cal S$}$, we may take an action denoted by $u(i)$, which determines the values of $q^{u(i)}(i,j)$, $j \in \mbox{$\cal S$}$. Then $u:= (u(0), u(1), \cdots )$ denotes a policy, and we let $\mbox{$\cal U$}$ be the space of all policies. The steady-state probability at state $i$ is denoted by $\pi(i,u)$, which depends on the policy $u$. The reward or cost function at state $i$ with action $u(i)$ is denoted by $f(i,u(i))$. We assume that the Markov processes under all policies in $\mbox{$\cal U$}$ are ergodic and the long-run average performance under policy $u$ is \begin{equation} \label{etaepf} \eta (u): = \sum_{i=0}^\infty \pi (i,u) f(i,u(i)). \end{equation} Denoting by $\nu(i,u)$ the un-normalized steady-state vector satisfying (\ref{eqnu}) under policy $u$, we give one more assumption as follows (cf. Assumption \ref{ass0}). \begin{assumption} \label{ass2} $\sum_{i=0}^N \nu(i,u) $, with $\nu(0,u)=1$, converges uniformly in $\mbox{$\cal U$}$ as $N \to \infty$, and $\sum_{i=0}^N \pi(i,u) f(i,u(i))$ converges uniformly in $\mbox{$\cal U$}$, as $N \to \infty$. \end{assumption} Assumption~\ref{ass2} holds for many Markov systems, especially when the system is stable uniformly over the policy space. In fact, it holds if there is a sequence, denoted by $\overline \nu (i)$, $i=0,1, \dots$, such that $\nu(i,u) \leq \overline \nu (i)$ for all $u \in \mbox{$\cal U$}$ and $\sum_{i=0}^\infty \overline\nu (i)< \infty$. \begin{example} Consider a controlled $M/M/1$ queue with arrival rate $\lambda (i,u)$ and service rate $\mu (i,u)$ (under a given control policy $u$) when the number of customers is $i$, $i\in \mbox{$\cal S$} = \{0, 1, \dots \}$. Let $X(t) \in \mbox{$\cal S$}$ be the Markov process of the queue.
The un-normalized steady-state vector is $\nu (i,u) = \prod_{l=1}^i \frac {\lambda(l-1,u)}{\mu(l,u)}$, with $\nu(0,u)=1$. The process is stable if \[ \sum_{i=0}^\infty \nu (i,u) = \sum_{i=0}^\infty \prod_{l=1}^i \frac {\lambda(l-1,u)}{\mu(l,u)} < \infty . \] Therefore, Assumption \ref{ass2} is the same as Assumption \ref{ass0}, and if there is a bound $\overline \gamma <1$ and a state $i^*$ such that $\frac {\lambda(i-1,u)}{\mu(i,u)} < \overline \gamma $ for all policies $u$ and states $i\geq i^*$, then Assumption \ref{ass2} holds. $\Box$ \end{example} Now, let us understand the role of Assumptions~\ref{ass1} and \ref{ass2}. For any integer $N>0$, we consider the first $K+1$ equations in (\ref{eqnu}), where $K>N$. Given any $u\in \mbox{$\cal U$}$, by Assumption~\ref{ass1}(b), the summation in (\ref{eqnu}) is over only finitely many states, resulting in \begin{equation}\label{eqshort} \sum_{j=0}^{i+M} \nu (j,u)q^{u(j)}(j,i) = 0, ~~~i=0,1, \dots, K, \end{equation} which can be further rewritten as \begin{equation}\label{twoterm0} \sum_{j=0}^{K} \nu (j,u) q^{u(j)}(j,i) + \sum_{j=K+1}^{i+M} \nu (j,u) q^{u(j)}(j,i) = 0, ~~~i=0,1, \dots, K. \end{equation} For (\ref{twoterm0}), the last summation is nonzero only if $i+M >K$. Thus, only the last $M$ equations in (\ref{twoterm0}) contain nonzero terms of the last summation, whose values are small enough to be ignored, as shown by the following analysis. For any $\epsilon >0$ and $N>0$, by Assumption \ref{ass2}, there is a large enough $K$ such that \begin{equation}\label{sumnuk} \sum_{i=K+1}^\infty \nu (i,u) < \frac {\epsilon}{N}, \ \ \ {\rm for \ all} \ u\in \mbox{$\cal U$}. \end{equation} By Assumption~\ref{ass1}, the last summation of (\ref{twoterm0}) can be written as \begin{equation}\label{sumnuk2} \sum_{j=K+1}^{i+M} \nu (j,u) q^{u(j)}(j,i) < \sum_{j=K+1}^{\infty} \nu (j,u) q^{u(j)}(j,i) < \frac {\epsilon}{N} \Lambda = O(\frac {\epsilon}{N}), \ \ \ {\rm for \ all} \ u\in \mbox{$\cal U$}.
\end{equation} Substituting the above result into (\ref{twoterm0}), we see that solving (\ref{twoterm0}) becomes solving the following equations \begin{eqnarray}\label{twoterm3} 0 &=& \sum_{j=0}^{K} \nu (j,u) q^{u(j)}(j,i), ~~~~~~~~~~~~~~~ i=0,1, \dots, K-M, \nonumber \\ 0 &=& \sum_{j=0}^{K} \nu (j,u) q^{u(j)}(j,i) + O(\frac {\epsilon}{N}), ~~~i=K-M+1, \dots, K, \end{eqnarray} where we have $K+1$ variables and $K+1$ linear equations. Thus, the variables $\nu(i,u)$ can be solved for, and we state the result as (\ref{muinep}) in the following lemma, where $F_i (q^{u(j)}(j,k); \ j,k=0,1, \dots, K )$ denotes a function $F_i(\cdot)$ with variables $q^{u(j)}(j,k)$, $i=0,1, \dots, K$. \begin{lemma} \label{lemnu} Under Assumptions~\ref{ass1} and \ref{ass2}, for any policy $u\in\mbox{$\cal U$}$, integer $N>0$, and small number $\epsilon >0$, there exists an integer $K>0$ such that \begin{equation} \label{muinep} \nu (i,u) = F_i (q^{u(j)}(j,k); \ j,k=0,1, \dots, K ) + \kappa_i (N) ,~~~~i=0,1, \dots, N,\footnote{In fact, this equation holds for $i=0,1, \dots, K$, $K>N$, but to prove Theorem \ref{thm2}, we only need it for $\nu (0,u), \dots, \nu (N,u)$.} \end{equation} and $|\kappa_i (N)| < \frac {\epsilon}{N}$. In words: roughly, for any finite $N$, $\nu (0,u) , \dots, \nu (N,u)$ depend only on the transition rates among finitely many states. The functions $F_i$, $i=0,1,\cdots, N$, are the same for any policy $u' \in O_{r^K}(u)$. \end{lemma} Note that we can set $\nu(0,u)=1$ for solving (\ref{twoterm3}) since $c \nu$ is also a solution to (\ref{twoterm3}) for any feasible solution $\nu$, where $c$ is a constant. Moreover, ignoring the term of $O(\frac {\epsilon}{N})$, (\ref{twoterm3}) is a set of linear equations determined by the values of $\{q^{u(j)}(j,k); \ j,k=0,1, \dots, K \}$. Therefore, for any two policies $u'$ and $u$ such that $u'(i)=u(i)$ for all $0\leq i\leq K$, the functions $F_i$, $i=0,1, \cdots, K$, take the same form.
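As a numerical sanity check on the truncation idea behind Lemma~\ref{lemnu}, the sketch below solves the truncated balance equations (with $\nu(0)=1$ replacing the last equation) for a hypothetical birth-death chain, and confirms that the solution matches the closed-form product and that the first few components are insensitive to the truncation level $K$. All names and numerical values here are illustrative, not part of the paper's model.

```python
import numpy as np

# Minimal sketch of the truncation behind the lemma for a hypothetical
# birth-death chain: arrival rate lam, service rate u[i] in state i.
lam = 1.0

def solve_truncated(u, K):
    # Balance equations sum_j nu(j) q(j,i) = 0 for i = 0..K-1, plus nu(0) = 1.
    A = np.zeros((K + 1, K + 1))
    b = np.zeros(K + 1)
    for i in range(K):
        if i > 0:
            A[i, i - 1] = lam                        # q(i-1, i): an arrival
        A[i, i] = -(lam + (u[i] if i > 0 else 0.0))  # q(i, i): leave state i
        A[i, i + 1] = u[i + 1]                       # q(i+1, i): a service
    A[K, 0] = 1.0                                    # normalization nu(0) = 1
    b[K] = 1.0
    return np.linalg.solve(A, b)

u = [0.0] + [2.0 + 0.1 * (i % 3) for i in range(1, 100)]  # an arbitrary policy
N = 5
nu_40 = solve_truncated(u, 40)
nu_60 = solve_truncated(u, 60)
closed = [np.prod([lam / u[l] for l in range(1, i + 1)]) for i in range(N + 1)]
# the first N+1 components match the product form and are insensitive to K
print(np.allclose(nu_40[:N + 1], closed), np.allclose(nu_40[:N + 1], nu_60[:N + 1]))
```

For a birth-death chain the truncated system is exact for the leading components, which is why the two truncation levels agree; in the general bounded-backward-jump case of the lemma the agreement holds up to the $O(\epsilon/N)$ terms.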
With Assumptions~\ref{ass1} and \ref{ass2}, we can further extend the existence condition of optimal stationary policies in Theorem~\ref{thmopex} and derive the following theorem. \begin{theorem}\label{thm2} Under Assumptions~\ref{ass1} and \ref{ass2}, there exists an optimal stationary policy for the average cost MDP with a countable state space. \end{theorem} {\it Proof:} Let $N>0$ be any integer and $\epsilon>0$ be any small number. Consider any two policies $u$ and $u'$, which determine the corresponding transition rates $q(i,j):=q^{u(i)}(i,j)$ and $q'(i,j):=q^{u'(i)}(i,j)$, as well as the steady-state vectors $\nu(i)$ and $\nu'(i)$, respectively. By Lemma~\ref{lemnu} and Assumption~\ref{ass2}, if $K$ is large enough, then we have \[ \nu ' (i) = F_i (q'(j,k);\ j,k=0,1, \dots, K ) + \kappa'_i (N) ,~~~~i=0,1, \dots, N, \] and \[ \nu (i) = F_i (q(j,k);\ j,k=0,1, \dots, K ) + \kappa_i (N) ,~~~~i=0,1, \dots, N, \] where $ |\kappa '_i (N)| < \frac {\epsilon}{N}$ and $ |\kappa _i (N)| < \frac {\epsilon}{N}$. By Lemma~\ref{lemduout}, if $u$ and $u'$ are close enough such that $d(u, u' )<r^K$, then $u (i) = u' (i)$ for all $i<K$. This means $q(i,j) = q'(i,j)$ for all $i<K$ and $j=0,1, \dots$. Therefore, we have \begin{equation} \label{nudiff} \nu ' (i) = \nu (i) + \kappa '_i (N) - \kappa_i (N) , ~~i=0,1, \dots, N. \end{equation} The rest of the analysis is similar to (\ref{onepd})--(\ref{eq13}). First, we have \[ \pi (i) = \frac {\nu (i)}{\sum_{j=0}^\infty \nu (j) } , \] and \begin{eqnarray} \label{pippi} \pi' (i) &=& \frac {\nu ' (i)}{\sum_{j=0}^\infty \nu ' (j) } \nonumber \\ &=& \frac {\sum_{j=0}^\infty \nu (j) }{\sum_{j=0}^\infty \nu ' (j) } \Big \{ \frac {\nu (i) + \kappa'_i (N) - \kappa_i (N) } {\sum_{j=0}^\infty \nu (j) } \Big \}, \quad i=0,1,\dots,N.
\end{eqnarray} With (\ref{nudiff}), we have \begin{eqnarray} \frac {\sum_{j=0}^\infty \nu (j) }{\sum_{j=0}^\infty \nu ' (j) } &=& \frac {\sum_{j=0}^N \nu (j) + \sum_{j=N+1}^\infty \nu (j) } {\sum_{j=0}^N \nu ' (j) + \sum_{j=N+1}^\infty \nu ' (j) } \nonumber \\ &=& \frac {\sum_{j=0}^N \nu (j) + \sum_{j=N+1}^\infty \nu (j) } {\sum_{j=0}^N \nu (j) + \sum_{j=N+1}^\infty \nu ' (j) + \sum_{j=0}^N [\kappa'_j (N) - \kappa_j (N) ]} . \nonumber \end{eqnarray} If $K$ is large enough (i.e., $d(u,u')$ is small enough), it holds \[ \Big | \sum_{j=0}^N [\kappa'_j (N) - \kappa_j (N) ] \Big | < 2 \epsilon . \] Therefore, \begin{align} \frac {\sum_{j=0}^\infty \nu (j) }{\sum_{j=0}^\infty \nu ' (j) } & = \frac {1 + \frac {\sum_{j=N+1}^\infty \nu (j)} {\sum_{j=0}^N \nu (j) } } {1 + \frac {\sum_{j=N+1}^\infty \nu ' (j)} {\sum_{j=0}^N \nu (j) } + \frac {\sum_{j=0}^N [\kappa'_j (N) - \kappa_j (N) ]} {\sum_{j=0}^N \nu(j)}} \nonumber \\ & = \frac {1 + \frac {\sum_{j=N+1}^\infty \nu (j)} {\sum_{j=0}^N \nu (j) } } {1 + \frac {\sum_{j=N+1}^\infty \nu ' (j)} {\sum_{j=0}^N \nu (j) } + \epsilon (N, u,u') } , \nonumber \end{align} with $|\epsilon (N,u,u')| := \Big|\frac {\sum_{j=0}^N [\kappa'_j(N) - \kappa_j(N) ]} {\sum_{j=0}^N \nu(j)}\Big| \le \Big|\sum_{j=0}^N [\kappa'_j(N) - \kappa_j(N) ] \Big| < 2\epsilon$, where we use the preset condition $\nu(0)=1$. The rest of the proof follows the same procedure as (\ref{onemdp})--(\ref{eq13}). First, as in (\ref{eq13}), we can derive \begin{equation} \pi'(i) = (1+\sigma(N,u,u'))\pi (i), \quad i=0,1,\dots,N, \end{equation} where $|\sigma (N,u,u')|< 3 \epsilon$ (with a large $N$ such that $\sum_{i=N+1}^\infty \nu (i) < \epsilon$), when $K$ is large enough. Then, similar to (\ref{diffeta}), we have \begin{eqnarray} \label{diffeta2} &&\eta(u') - \eta(u) \nonumber\\ &=& \sigma(N,u,u')\sum_{i=0}^{N}\pi(i,u)f(i,u(i) ) + \sum_{i=N+1}^{\infty}[\pi(i,u' )f(i,u'(i) ) - \pi(i,u)f(i,u (i) )] .
\end{eqnarray} Now we are ready to prove the continuity of $\eta (u)$ in the metric space $\mbox{$\cal U$}$ with metric (\ref{defdis}). Let $\epsilon>0$ be any small number. First, as discussed above, under Assumptions \ref{ass1} and \ref{ass2}, by the uniform convergence of $\sum_{i=0}^\infty \nu(i,u)$, there is a large integer $N_1$ such that if $n>N_1$, we have $|\sigma (n,u,u')|< 3\epsilon$ for any $u$ and $u'$ (with $K$ large enough). Next, because $\sum_{i=0}^\infty \pi (i,u) f(i, u(i)) $ converges, there is an $N_2 $ such that $|\sum_{i=0}^{N}\pi(i,u)f(i,u(i)) | < |\eta (u)|+1$, for all $N>N_2$. Furthermore, under Assumption \ref{ass2}, by the uniformity of the convergence of (\ref{etaepf}), there is a large $N_3$ such that for all $n>N_3$, it holds \[ \Big|\sum_{i=n+1}^{\infty}[\pi(i,u')f(i,u'(i)) - \pi(i,u)f(i,u(i))] \Big| < 2 \epsilon . \] Therefore, by (\ref{diffeta2}) and Lemma~\ref{lemduout}, for $\hat N:=\max \{N_1, N_2, N_3 \}$, we have \begin{equation}\label{diffop} |\eta (u) - \eta (u')| < [3|\eta(u)|+ 5] \epsilon, \ \ \ \forall \ u'\in O_{r^{\hat N}}(u). \end{equation} Thus, $\eta (u)$ is continuous at $u$ in the metric space, and the existence of an optimal stationary policy $u^*$ follows from Theorem~\ref{thmopex}. $\Box$ In summary, we have extended the existence condition of optimal stationary policies for average MDPs with countable state space from Theorem~\ref{thmopex} for the $c/\mu$-rule problem to Theorem~\ref{thm2} for the more general case. As stated in Assumptions~\ref{ass1} and \ref{ass2}, if the system has bounded transition rates with bounded-range backward transitions, and the un-normalized probabilities and the performance sums converge uniformly, then the existence of an optimal stationary policy is guaranteed by Theorem~\ref{thm2}. The conditions of the theorem are easy to verify in practice, especially for queueing systems, as demonstrated in the aforementioned examples.
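The continuity established above can also be observed numerically. The sketch below evaluates the long-run average cost of a hypothetical controlled $M/M/1$ queue for pairs of policies that agree on all states below $K$ (i.e., are within distance $r^K$ of each other in the metric of Lemma~\ref{lemduout}) and shows that the performance gap shrinks as $K$ grows; the arrival rate, service rates, cost function, and truncation level are all illustrative choices, not the paper's data.

```python
import numpy as np

# Hypothetical controlled M/M/1: arrival rate 1, service rates u(i) in {2, 3},
# cost f(i, u(i)) = i + 0.5 * u(i).  CAP truncates the infinite sums.
lam, CAP = 1.0, 400

def eta(u):
    nu = np.cumprod([1.0] + [lam / u[i] for i in range(1, CAP)])
    pi = nu / nu.sum()                        # steady-state probabilities
    f = np.array([i + 0.5 * u[i] for i in range(CAP)])
    return float(pi @ f)

u = [2.0 + (i % 2) for i in range(CAP)]       # base policy
gaps = []
for K in (5, 10, 20, 40):
    up = list(u)
    for i in range(K, CAP):                   # perturb only states >= K,
        up[i] = 5.0 - u[i]                    # i.e. keep d(u, u') < r**K
    gaps.append(abs(eta(u) - eta(up)))
print(gaps)   # |eta(u) - eta(u')| shrinks as the policies agree further out
```

The gap decays geometrically because the steady-state mass beyond state $K$ decays geometrically here, which is exactly the mechanism behind the $[3|\eta(u)|+5]\epsilon$ bound in (\ref{diffop}).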
\section{Conclusion}\label{section_conclusion} In this paper, we derive existence conditions of optimal stationary policies for countable-state MDPs with the long-run average criterion. By defining a suitable metric under which the policy space becomes a compact metric space, the existence of optimal policies is guaranteed by proving the continuity of the long-run average cost as a function on the policy space. With some assumptions on the transition rates and the uniform convergence of the un-normalized probabilities of the processes, the existence of optimal policies can be proved for MDPs with countable state spaces in a general form. Compared with other conditions studied in the literature, the conditions in this paper may be easier to verify when applied to practical MDP problems, especially in queueing systems. Several examples are studied to illustrate the applicability of our results. Future research topics include extensions to MDPs with other criteria, such as the discounted one. \end{document}
\begin{document} \begin{doublespace} \newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{defn}[thm]{Definition} \newtheorem{prop}[thm]{Proposition} \newtheorem{corollary}[thm]{Corollary} \newtheorem{remark}[thm]{Remark} \newtheorem{example}[thm]{Example} \numberwithin{equation}{section} \title{\Large \bf Potential Theory of Subordinate Brownian Motions Revisited } \author{{\bf Panki Kim}\thanks{This work was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) grant funded by the Korea government(MEST)(2010-0001984).} \quad {\bf Renming Song} \quad and \quad {\bf Zoran Vondra{\v{c}}ek}\thanks{Supported in part by the MZOS grant 037-0372790-2801.} } \date{ } \maketitle \begin{abstract} The paper discusses and surveys some aspects of the potential theory of subordinate Brownian motion under the assumption that the Laplace exponent of the corresponding subordinator is comparable to a regularly varying function at infinity. This extends some results previously obtained under stronger conditions.
\end{abstract} \noindent {\bf AMS 2010 Mathematics Subject Classification}: Primary 60J45 Secondary 60G51, 60J35, 60J75 \noindent {\bf Keywords and phrases}: Subordinator, subordinate Brownian motion, potential theory, Green function, L\'evy density, Harnack inequality, boundary Harnack principle \section{Introduction}\label{ksv-sec-intro} An ${\mathbb R}^d$-valued process $X=(X_t:\, t\ge 0)$ is called a L\'evy process in ${\mathbb R}^d$ if it is a right continuous process with left limits and if, for every $s, t\ge 0$, $X_{t+s}-X_s$ is independent of $\{X_r, r\in [0, s]\}$ and has the same distribution as $X_t-X_0$. A L\'evy process is completely characterized by its L\'evy exponent $\Phi$ via $$ {\mathbb E}[\exp\{i\langle \xi, X_t-X_0 \rangle\}]=\exp\{-t\Phi(\xi)\}, \quad t\ge 0, \xi\in {\mathbb R}^d. $$ The L\'evy exponent $\Phi$ of a L\'evy process is given by the L\'evy-Khintchine formula $$ \Phi(\xi)=i\langle l, \xi \rangle + \frac12 \langle \xi, A\xi^T \rangle + \int_{{\mathbb R}^d}\left(1-e^{i\langle \xi, x \rangle}+i\langle \xi, x \rangle{\bf 1}_{\{|x|<1\}}\right)\Pi(dx), \quad \xi\in {\mathbb R}^d\, , $$ where $ l\in {\mathbb R}^d$, $A$ is a nonnegative definite $d\times d$ matrix, and $\Pi$ is a measure on ${\mathbb R}^d\setminus\{0\}$ satisfying $\int (1\wedge |x|^2)\, \Pi(dx) <\infty$. $A$ is called the diffusion matrix, $\Pi$ the L\'evy measure, and $(l, A, \Pi)$ the generating triplet of the process. Nowadays L\'evy processes are widely used in various fields, such as mathematical finance, actuarial mathematics and mathematical physics. However, general L\'evy processes are not very easy to deal with. A subordinate Brownian motion in ${\mathbb R}^d$ is a L\'evy process which can be obtained by replacing the time of Brownian motion in ${\mathbb R}^d$ by an independent subordinator (i.e., an increasing L\'evy process starting from 0).
More precisely, let $B=(B_t:\, t\ge 0)$ be a Brownian motion in ${\mathbb R}^d$ and $S=(S_t:\, t\ge 0)$ be a subordinator independent of $B$. The process $X=(X_t:\, t\ge 0)$ defined by $X_t=B_{S_t}$ is a rotationally invariant L\'evy process in ${\mathbb R}^d$ and is called a subordinate Brownian motion. The subordinator $S$ used to define the subordinate Brownian motion $X$ can be interpreted as ``operational'' time or ``intrinsic'' time. For this reason, subordinate Brownian motions have been used in mathematical finance and other applied fields. Subordinate Brownian motions form a very large class of L\'evy processes. Nonetheless, compared with general L\'evy processes, subordinate Brownian motions are much more tractable. If we take the Brownian motion $B$ as given, then $X$ is completely determined by the subordinator $S$. Hence, one can deduce properties of $X$ from properties of the subordinator $S$. On the analytic level this translates to the following: Let $\phi$ denote the Laplace exponent of the subordinator $S$, that is, ${\mathbb E}[\exp\{-\lambda S_t\}] =\exp\{-t\phi(\lambda)\}$, $\lambda >0$. Then the characteristic exponent $\Phi$ of the subordinate Brownian motion $X$ takes on the very simple form $\Phi(x)=\phi(|x|^2)$ (our Brownian motion $B$ runs at twice the usual speed). Therefore, properties of $X$ should follow from properties of the Laplace exponent $\phi$. The Laplace exponent $\phi$ of a subordinator $S$ is a Bernstein function, hence it has a representation of the form $$ \phi(\lambda)=b\lambda +\int_{(0,\infty)}(1-e^{-\lambda t})\, \mu(dt) $$ where $b\ge 0$ and $\mu$ is a measure on $(0,\infty)$ satisfying $\int_{(0,\infty)}(1\wedge t)\, \mu(dt)<\infty$. If $\mu$ has a completely monotone density, the function $\phi$ is called a complete Bernstein function. 
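The Bernstein representation above can be checked numerically in a simple special case. With $b=0$ and the standard L\'evy density $\mu(t)=t^{-3/2}/(2\sqrt{\pi})$ (a classical formula for the $1/2$-stable subordinator, not taken from this paper), the integral $\int_0^\infty (1-e^{-\lambda t})\mu(t)\,dt$ recovers $\phi(\lambda)=\sqrt{\lambda}$:

```python
import math
from scipy.integrate import quad

# Numerically evaluate phi(lam) = int_0^oo (1 - exp(-lam*t)) * mu(t) dt
# with mu(t) = t**(-3/2) / (2*sqrt(pi)); the result should be sqrt(lam).
def phi(lam):
    f = lambda t: (1.0 - math.exp(-lam * t)) * t ** (-1.5) / (2.0 * math.sqrt(math.pi))
    # split at t = 1: integrable t**(-1/2)-type singularity at 0,
    # plain t**(-3/2) decay beyond
    return quad(f, 0.0, 1.0)[0] + quad(f, 1.0, math.inf)[0]

print(abs(phi(2.0) - math.sqrt(2.0)) < 1e-6)
```

The same computation with $\mu(dt)$ replaced by other completely monotone densities gives the corresponding complete Bernstein functions discussed below.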
The purpose of this work is to study the potential theory of subordinate Brownian motion under the assumption that the Laplace exponent $\phi$ of the subordinator is a complete Bernstein function comparable to a regularly varying function at infinity. More precisely, we will assume that there exist $\alpha\in (0,2)$ and a function $\ell$ slowly varying at infinity such that $$ \phi(\lambda)\asymp \lambda^{\alpha/2}\ell(\lambda)\, , \quad \lambda \to \infty\, . $$ Here and later, for two functions $f$ and $g$ we write $f(\lambda)\asymp g(\lambda)$ as $\lambda \to \infty$ if the quotient $f(\lambda)/g(\lambda)$ stays bounded between two positive constants as $\lambda \to \infty$. A lot of progress has been made in recent years in the study of the potential theory of subordinate Brownian motions, see, for instance, \cite{CKSV, CKSV2, KSV1, KSV2, KSV3, RSV, SiSV} and \cite[Chapter 5]{BBKRSV}. In particular, an extensive survey of results obtained before 2007 is given in \cite[Chapter 5]{BBKRSV}. At that time, the focus was on the potential theory of the process $X$ in the whole ${\mathbb R}^d$, the results for (killed) subordinate Brownian motions in an open subset still being out of reach. In the last few years significant progress has been made in studying the potential theory of subordinate Brownian motions killed upon exiting an open subset of ${\mathbb R}^d$. The main results include the boundary Harnack principle and sharp Green function estimates. For processes having a continuous component see \cite{KSV2} (for the one-dimensional case) and \cite{CKS5, CKSV, CKSV2} (for the multi-dimensional case). For purely discontinuous processes, the boundary Harnack principle was obtained in \cite{KSV1} and sharp Green function estimates were discussed in the recent preprint \cite{KSV3}. The main assumption in \cite{CKSV, CKSV2, KSV1} and \cite[Chapter 5]{BBKRSV} is that the Laplace exponent of the subordinator is regularly varying at infinity.
The results were established under different assumptions, some of which turned out to be too strong and some even redundant. The time is now ripe to put some of the recent progress under one unified setup and to give a survey of some of these results. The survey builds upon the work done in \cite[Chapter 5]{BBKRSV} and \cite{KSV1}. The setup we are going to assume is more general than those of the previous papers, so in this sense, most of the results contained in this paper are extensions of the existing ones. In Section \ref{ksv-sec-subordinators} we first recall some basic facts about subordinators, Bernstein functions and complete Bernstein functions. Then we establish the asymptotic behaviors, near the origin, of the potential density and L\'evy density of subordinators. In Section \ref{ksv-sec-sbm} we establish the asymptotic behaviors, near the origin, of the Green function and the L\'evy density of our subordinate Brownian motion. These results follow from the asymptotic behaviors, near the origin, of the potential density and L\'evy density of the subordinator. In Section \ref{ksv-sec-hibhp} we prove that the Harnack inequality and the boundary Harnack principle hold for our subordinate Brownian motions. The material covered in this paper by no means includes all that can be said about the potential theory of subordinate Brownian motions. One of the omissions is the sharp Green function estimates of (killed) subordinate Brownian motions in bounded $C^{1, 1}$ open sets obtained in the recent preprint \cite{KSV3}. The present paper builds up the framework for \cite{KSV3} and can, in this sense, be regarded as a preparation for \cite{KSV3}. Another omission is the Dirichlet heat kernel estimates of subordinate Brownian motions in smooth open sets recently established in \cite{CKS1, CKS2, CKS3, CKS4}.
One of the reasons we do not include these recent results in this paper is that all these heat kernel estimates are for particular subordinate Brownian motions only and are not yet established in the general case. A third notable omission is the spectral theory for killed subordinate Brownian motions developed in \cite{CS05, CS06a, CS06c}. Some of these results have been summarized in \cite[Section 12.3]{SSV}. A fourth notable omission is the potential theory of subordinate killed Brownian motions developed in \cite{GPRSSV, GRSS, Son04, SV03, SV04b, SV06}. Some of these results have been summarized in \cite[Section 5.5]{BBKRSV} and \cite[Chapter 13]{SSV}. In this paper we concentrate on subordinate Brownian motions without diffusion components and therefore this paper does not include results from \cite{CKSV, CKSV2, KSV2}. One of the reasons for this is that subordinate Brownian motions with diffusion components require a different treatment. We end this introduction with a few words on notation. For functions $f$ and $g$ we write $f(t)\sim g(t)$ as $t\to 0+$ (resp. $t\to \infty$) if the quotient $f(t)/g(t)$ converges to 1 as $t\to 0+$ (resp. $t\to \infty$), and $f(t) \asymp g(t)$ as $t\to 0+$ (resp. $t\to \infty$) if the quotient $f(t)/g(t)$ stays bounded between two positive constants as $t\to 0+$ (resp. $t\to \infty$). \section{Subordinators}\label{ksv-sec-subordinators}\label{ksv-sec-sub} \subsection{Subordinators and Bernstein functions}\label{ksv-ss:bf} Let $S=(S_t:\, t\ge 0)$ be a subordinator, that is, an increasing L\'evy process taking values in $[0,\infty)$ with $S_0=0$. A subordinator $S$ is completely characterized by its Laplace exponent $\phi$ via $$ {\mathbb E}[\exp(-\lambda S_t)]=\exp(-t \phi(\lambda))\, ,\quad \lambda > 0. $$ The Laplace exponent $\phi$ can be written in the form (cf. \cite[p. 72]{Ber}) $$ \phi(\lambda)=b\lambda +\int_0^{\infty}(1-e^{-\lambda t})\, \mu(dt)\, .
$$ Here $b \ge 0$, and $\mu$ is a $\sigma$-finite measure on $(0,\infty)$ satisfying $$ \int_0^{\infty} (t\wedge 1)\, \mu(dt)< \infty\, . $$ The constant $b$ is called the drift, and $\mu$ the L\'evy measure of the subordinator $S$. A $C^{\infty}$ function $\phi:(0,\infty)\to [0,\infty)$ is called a Bernstein function if $(-1)^n D^n \phi\le 0$ for every positive integer $n$. Every Bernstein function has a representation (cf. \cite[Theorem 3.2]{SSV}) $$ \phi(\lambda)=a+b\lambda +\int_{(0,\infty)}(1-e^{-\lambda t})\, \mu(dt) $$ where $a,b\ge 0$ and $\mu$ is a measure on $(0,\infty)$ satisfying $\int_{(0,\infty)}(1\wedge t)\, \mu(dt)<\infty$. $a$ is called the killing coefficient, $b$ the drift and $\mu$ the L\'evy measure of the Bernstein function. Thus a nonnegative function $\phi$ on $(0, \infty)$ is the Laplace exponent of a subordinator if and only if it is a Bernstein function with $\phi(0+)=0$. Sometimes we need to deal with killed subordinators, that is, subordinators killed at independent exponential times. Let $e_a$ be an exponential random variable with parameter $a\ge0$, i.e., ${\mathbb P}( e_a >t)=e^{-at}, \, t>0$. We allow $a=0$ in which case $e_a=\infty$. Assume that $S$ is a subordinator with Laplace exponent $\phi$ and $e_a$ is independent of $S$. We define a process $\widehat S$ by $$ \widehat S_t= \begin{cases} S_t , & t < e_a\\ \infty & t \ge e_a \end{cases}. $$ The process $\widehat S$ is the subordinator $S$ killed at an independent exponential time. We call $\widehat S$ a killed subordinator. The corresponding Laplace exponent $\widehat \phi$ is related to $\phi$ as $$ \widehat \phi(\lambda)=a+\phi(\lambda), \qquad \lambda>0. 
$$ In fact, $$ {\mathbb E}[e^{-\lambda \widehat S_t}]= {\mathbb E}[e^{-\lambda S_t} {\bf 1}_{\{ t < e_a \}}]={\mathbb E}[e^{-\lambda S_t}]{\mathbb P}( t < e_a )=e^{-t \phi(\lambda)} e^{-at}=e^{-t(a+\phi(\lambda))},$$ where we use the convention $e^{-\lambda \cdot \infty}=0$. A function $\phi: (0, \infty)\to [0, \infty)$ is the Laplace exponent of a killed subordinator if and only if $\phi$ is a Bernstein function. For this reason, we will sometimes use killed subordinators. A Bernstein function $\phi$ is called a complete Bernstein function if the L\'evy measure $\mu$ has a completely monotone density $\mu(t)$, i.e., $(-1)^n D^n \mu\ge 0$ for every non-negative integer $n$. Here and below, by abuse of notation we will denote the L\'evy density by $\mu(t)$. Complete Bernstein functions form a large subclass of Bernstein functions. Most of the familiar Bernstein functions are complete Bernstein functions. See \cite[Chapter 15]{SSV} for an extensive table of complete Bernstein functions. Here are some examples of complete Bernstein functions: \begin{description} \item{(i)} $\phi(\lambda)=\lambda^{\alpha/2}$, $\alpha\in (0, 2]$; \item{(ii)} $\phi(\lambda)=(\lambda+m^{2/\alpha})^{\alpha/2}-m$, $\alpha\in (0, 2), m\ge 0$; \item{(iii)} $\phi(\lambda)=\lambda^{\alpha/2}+ \lambda^{\beta/2}$, $0\le \beta<\alpha\in (0, 2]$; \item{(iv)} $\phi(\lambda)=\lambda^{\alpha/2}(\log(1+\lambda))^{\gamma/2}$, $\alpha\in (0, 2), \gamma\in (0, 2-\alpha)$; \item{(v)} $\phi(\lambda)=\lambda^{\alpha/2}(\log(1+\lambda))^{-\beta/2}$, $0\le \beta<\alpha\in (0, 2]$. \end{description} An example of a Bernstein function which is not a complete Bernstein function is $1-e^{-\lambda}$. It is known (cf. \cite[Proposition 7.1]{SSV}) that $\phi$ is a complete Bernstein function if and only if the function $\lambda/\phi(\lambda)$ is a complete Bernstein function. For other properties of complete Bernstein functions we refer the readers to \cite{SSV}.
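The alternating-derivative condition defining Bernstein functions is easy to spot-check symbolically. The sketch below verifies $(-1)^n D^n \phi \le 0$, $n\ge 1$, for examples (i) and (ii) at a few arbitrary evaluation points (the choices $\alpha=1$, $m=1$ and the sample points are ours, purely for illustration):

```python
import sympy as sp

# Spot-check the Bernstein property (-1)^n D^n phi <= 0 for n = 1..6
# on examples (i) and (ii) with alpha = 1 and m = 1.
lam = sp.symbols('lam', positive=True)
examples = [sp.sqrt(lam),              # (i) with alpha = 1
            sp.sqrt(lam + 1) - 1]      # (ii) with alpha = 1, m = 1
ok = all(
    sp.N((-1) ** n * sp.diff(phi, lam, n).subs(lam, x)) <= 0
    for phi in examples for n in range(1, 7) for x in (sp.Rational(1, 2), 2, 10)
)
print(ok)
```

Of course, a finite check of this kind only illustrates the property; complete monotonicity of the derivatives is what the symbolic differentiation confirms pattern-wise here.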
The following result, which will play an important role later, says that the L\'evy density of a complete Bernstein function cannot decrease too fast in the following sense. \begin{lemma}[{\cite[Lemma 2.1]{KSV3}}]\label{ksv-l:H2-valid} Suppose that $\phi$ is a complete Bernstein function with L\'evy density $\mu$. Then there exists $C_1>0$ such that $\mu(t)\le C_1 \mu(t+1)$ for every $t>1$. \end{lemma} \noindent{\bf Proof.} Since $\mu$ is a completely monotone function, by Bernstein's theorem (cf. \cite[Theorem 1.4]{SSV}), there exists a measure $m$ on $[0,\infty)$ such that $\mu(t)=\int_{[0,\infty)}e^{-tx} m(dx).$ Choose $r>0$ such that $\int_{[0, r]}e^{-x}\, m(dx)\ge \int_{(r, \infty)}e^{-x}\, m(dx).$ Then, for any $t>1$, we have \begin{eqnarray*} \int_{[0, r]}e^{-t x}\, m(dx)&\ge&e^{-(t -1)r}\int_{[0, r]}e^{-x}\, m(dx)\\ &\ge &e^{-(t -1)r}\int_{(r, \infty)}e^{-x}\, m(dx)\,\ge \, \int_{(r, \infty)}e^{-t x}\, m(dx). \end{eqnarray*} Therefore, for any $t>1$, \begin{eqnarray*} &&\mu(t+1)\ge \int_{[0, r]}e^{-(t+1) x}\, m(dx)\ge e^{-r}\int_{[0, r]}e^{- t x}\, m(dx) \\ &&\ge \frac12\, e^{-r}\int_{[0, \infty)}e^{-t x}\, m(dx)=\frac12\, e^{-r}\mu(t). \end{eqnarray*} { $\Box$ } The potential measure of the (possibly killed) subordinator $S$ is defined by \begin{equation}\label{ksv-potential measure} U(A)={\mathbb E} \int_0^{\infty} {\bf 1}_{\{S_t\in A\}} \, dt, \quad A\subset [0, \infty). \end{equation} Note that $U(A)$ is the expected time the subordinator $S$ spends in the set $A$. The Laplace transform of the measure $U$ is given by \begin{equation}\label{ksv-lt potential measure} {\mathcal L} U(\lambda)=\int_0^{\infty}e^{-\lambda t}\, dU(t)= {\mathbb E}\int_0^{\infty} \exp(-\lambda S_t)\, dt =\frac{1}{\phi(\lambda)}\, . \end{equation} We call a subordinator $S$ a complete subordinator if its Laplace exponent $\phi$ is a complete Bernstein function. 
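The Laplace-transform identity (\ref{ksv-lt potential measure}) can also be illustrated numerically. For the $1/2$-stable subordinator, $\phi(\lambda)=\sqrt{\lambda}$ and the potential density is $u(t)=t^{-1/2}/\Gamma(1/2)$ (a standard closed form, not derived in this paper); its Laplace transform should equal $1/\phi(\lambda)$:

```python
import math
from scipy.integrate import quad

# Check that int_0^oo exp(-lam*t) * u(t) dt = 1 / phi(lam) = lam**(-1/2)
# for u(t) = t**(-1/2) / Gamma(1/2).
def laplace_u(lam):
    f = lambda t: math.exp(-lam * t) * t ** (-0.5) / math.gamma(0.5)
    # split at 1 to help quad with the integrable singularity at 0
    return quad(f, 0.0, 1.0)[0] + quad(f, 1.0, math.inf)[0]

for lam in (0.5, 2.0):
    print(abs(laplace_u(lam) - 1.0 / math.sqrt(lam)) < 1e-6)
```

Here $u$ is indeed completely monotone, consistent with the characterization of complete subordinators given next.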
The following characterization of complete subordinators is due to \cite[Remark 2.2]{SV06} (see also \cite[Corollary 5.3]{BBKRSV}). \begin{prop}\label{ksv-p:2.2} Let $S$ be a subordinator with Laplace exponent $\phi$ and potential measure $U$. Then $\phi$ is a complete Bernstein function if and only if $$ U(dt)=c\delta_0(dt)+u(t)dt $$ for some $c\ge 0$ and completely monotone function $u$. \end{prop} In case the constant $c$ in the proposition above is equal to zero, we will call $u$ the potential density of the subordinator $S$. An inspection of the argument, given in \cite[Chapter 5]{BBKRSV} or \cite{SV06}, leading to the proposition above yields the following two results (cf. \cite[Corollary 5.4 and Corollary 5.5]{BBKRSV} or \cite[Corollary 2.3 and Corollary 2.4]{SV06}). \begin{corollary}\label{ksv-c2.3} Suppose that $S=(S_t:\, t\ge 0)$ is a subordinator whose Laplace exponent $$ \phi(\lambda)=b\lambda +\int_0^{\infty} (1-e^{-\lambda t})\, \mu(dt) $$ is a complete Bernstein function with $b>0$ or $\mu(0, \infty)=\infty$. Then the potential measure $U$ of $S$ has a completely monotone density $u$. \end{corollary} \noindent{\bf Proof.} By \cite[Corollary 5.4]{BBKRSV} or \cite[Corollary 2.3]{SV06}, if the drift of the complete subordinator $S$ is positive or the L\'evy measure $\mu$ has infinite mass, then the constant $c$ in Proposition \ref{ksv-p:2.2} is equal to zero, so the potential measure $U$ of $S$ has a density $u$. The complete monotonicity of the density follows directly from Proposition \ref{ksv-p:2.2}. { $\Box$ } \begin{corollary}\label{ksv-c2.4} Let $S$ be a complete subordinator with Laplace exponent $\phi(\lambda)=\int_0^{\infty} (1-e^{-\lambda t})\mu(dt)$. Suppose that the L\'evy measure $\mu$ has infinite mass. Then the potential measure of a (killed) subordinator with Laplace exponent $\psi(\lambda):=\lambda/\phi(\lambda)$ has a completely monotone density $v$ given by $$ v(t)=\mu(t, \infty).
$$ \end{corollary} \noindent{\bf Proof.} Since the drift of $S$ is zero and the L\'evy measure $\mu$ has infinite mass, by \cite[Corollary 5.5]{BBKRSV} or \cite[Corollary 2.4]{SV06}, we have that $$ \psi(\lambda)=a + \int_0^{\infty} (1-e^{-\lambda t})\, \nu(dt) $$ where $a=\left(\int_0^{\infty} t\mu(t)dt\right)^{-1}$, the L\'evy measure $\nu$ of $\psi$ has infinite mass and the potential measure of a possibly killed (i.e., $a>0$) subordinator with Laplace exponent $\psi$ has a density $v$ given by $ v(t)=\mu(t, \infty). $ The complete monotonicity of the density follows from \cite[Corollary 5.3]{BBKRSV}, which works for killed subordinators as well. { $\Box$ } \subsection{Asymptotic behavior of the potential and L\'evy densities}\label{ksv-ss:asympumu} From now on we will always assume that $S$ is a complete subordinator without drift and that the Laplace exponent $\phi$ of $S$ satisfies $\lim_{\lambda\to\infty}\phi(\lambda)=\infty$ (or equivalently, the L\'evy measure of $S$ has infinite mass). Under this assumption, the potential measure $U$ of $S$ has a completely monotone density $u$ (cf. Corollary \ref{ksv-c2.3}). The main purpose of this subsection is to determine the asymptotic behaviors of $u$ and $\mu$ near the origin. For this purpose, we will need the following result due to Z\"ahle (cf. \cite[Theorem 7]{Z}). \begin{prop}\label{ksv-p:zahle} Suppose that $w$ is a completely monotone function given by $$ w(t)=\int^\infty_0e^{-st} f(s)\, ds, $$ where $f$ is a nonnegative decreasing function. Then $$ f(s)\le \left(1-e^{-1}\right)^{-1} s^{-1}w(s^{-1}), \quad s>0. $$ If, furthermore, there exist $\delta\in (0, 1)$ and $a, s_0>0$ such that \begin{equation}\label{ksv-e:zahle} w(\lambda t)\le a \lambda^{-\delta} w(t), \quad \lambda\ge 1, t\ge 1/s_0, \end{equation} then there exists $C_2=C_2(w,f,a,s_0, \delta)>0$ such that $$ f(s)\ge C_2 s^{-1}w(s^{-1}), \quad s\le s_0.
$$ \end{prop} \noindent{\bf Proof.} Since $f$ is a nonnegative decreasing function, for any $r>0$ we have \begin{eqnarray*} w(t)&=&\frac1t\int^\infty_0e^{-s}f\left(\frac{s}{t}\right)ds\\ &\ge&\frac1t\int^r_0e^{-s}f\left(\frac{s}{t}\right)ds \,\ge\,\frac1tf\left(\frac{r}{t}\right)\left(1-e^{-r}\right). \end{eqnarray*} Thus $$ f\left(\frac{r}{t}\right)\le\frac{tw(t)}{1-e^{-r}}, \quad t>0, r>0. $$ In particular, we have $$ f(s)\le \left(1-e^{-1}\right)^{-1}s^{-1}w(s^{-1}), \quad s>0, $$ and \begin{equation}\label{ksv-e:zahle1} f\left(\frac{s}{t}\right)\le \left(1-e^{-1}\right)^{-1}\frac{t}{s}w\left(\frac{t}{s}\right), \quad s>0, t>0. \end{equation} On the other hand, for $r\in (0, 1]$, we have \begin{eqnarray*} tw(t)&=&\int^r_0e^{-s}f\left(\frac{s}{t}\right)ds+\int_r^\infty e^{-s}f\left(\frac{s}{t}\right)ds\\ &\le&\int^r_0e^{-s}f\left(\frac{s}{t}\right)ds + f\left(\frac{r}{t}\right)e^{-r}\\ &\le&\left(1-e^{-1}\right)^{-1}t\int^r_0e^{-s}\frac1s\, w\left(\frac{t}{s}\right)ds + f\left(\frac{r}{t}\right)e^{-r}, \end{eqnarray*} where in the last line we used \eqref{ksv-e:zahle1}. Now assume \eqref{ksv-e:zahle}; then $$ w\left(\frac{t}{s}\right)\le a s^{\delta}w(t), \quad t\ge 1/s_0, s<r. $$ Thus, for $r\in (0, 1]$, we have $$ tw(t)\le a\left(1-e^{-1}\right)^{-1}tw(t)\int^r_0e^{-s}s^{\delta-1}ds + f\left(\frac{r}{t}\right)e^{-r}. $$ Choosing $r\in(0, 1]$ small enough so that $$ a\left(1-e^{-1}\right)^{-1}\int^r_0e^{-s}s^{\delta-1}ds\le \frac12, $$ we conclude that for this choice of $r$, we have $$ f\left(\frac{r}{t}\right)\ge c_1 tw(t), \quad t\ge 1/s_0 $$ for some constant $c_1>0$. Since $w$ is decreasing, we have $$ f(s)\ge c_1\frac{r}{s}w\left( \frac{r}{s}\right) \ge c_2 s^{-1}w(s^{-1}), \quad s\le rs_0, $$ where $c_2=c_1r$. From this we immediately get that there exists $c_3>0$ such that $$ f(s)\ge c_3 s^{-1}w(s^{-1}), \quad s\le s_0.
$$ { $\Box$ } \begin{corollary} The potential density $u$ of $S$ satisfies \begin{equation}\label{ksv-e:u-upper-bound} u(t)\le C_3 t^{-1}\phi(t^{-1})^{-1}\, ,\quad t>0\, . \end{equation} \end{corollary} \noindent{\bf Proof.} Apply the first part of Proposition \ref{ksv-p:zahle} to the function $$ w(t):=\int_0^{\infty}e^{-s t}u(s)\, ds =\frac{1}{\phi(t)}. $$ { $\Box$ } We introduce now the main assumption on our Laplace exponent $\phi$ of the complete subordinator $S$ that we will use throughout the rest of the paper. Recall that a function $\ell:(0,\infty)\to (0,\infty)$ is slowly varying at infinity if $$ \lim_{t\to \infty}\frac{\ell(\lambda t)}{\ell(t)}=1\, ,\quad \textrm{for every }\lambda >0\, . $$ \noindent {\bf Assumption (H):} There exist $\alpha\in (0,2)$ and a function $\ell:(0,\infty)\to (0,\infty)$ which is measurable, locally bounded above and below by positive constants, and slowly varying at infinity such that \begin{equation}\label{ksv-e:reg-var} \phi(\lambda) \asymp \lambda^{\alpha/2}\ell(\lambda)\, ,\quad \lambda \to \infty\, . \end{equation} \begin{remark}\label{ksv-r-interpretation-H}{\rm The precise interpretation of \eqref{ksv-e:reg-var} will be as follows: There exists a positive constant $c>1$ such that $$ c^{-1}\le \frac{\phi(\lambda)}{\lambda^{\alpha/2}\ell(\lambda)} \le c \qquad \textrm{for all }\lambda \in [1,\infty)\, . $$ The choice of the interval $[1,\infty)$ is, of course, arbitrary. Any interval $[a,\infty)$ would do, but with a different constant. This follows from the continuity of $\phi$ and the assumption that $\ell$ is locally bounded above and below by positive constants. Moreover, by choosing $a>0$ large enough, we could dispense with the local boundedness assumption. Indeed, by \cite[Lemma 1.3.2]{BGT}, every slowly varying function at infinity is locally bounded on $[a,\infty)$ for $a$ large enough. 
Although the choice of the interval $[1,\infty)$ is arbitrary, it has the consequence that all relations of the type $f(t)\asymp g(t)$ as $t \to \infty$ (respectively $t\to 0+$) following from \eqref{ksv-e:reg-var} will be interpreted as $\tilde{c}^{-1} \le f(t)/g(t) \le \tilde{c}$ for $t\ge 1$ (respectively $0<t\le 1$). } \end{remark} The assumption \eqref{ksv-e:reg-var} is a very weak assumption on the asymptotic behavior of $\phi$ at infinity. All the examples listed before Lemma \ref{ksv-l:H2-valid} satisfy this assumption (in (i), (iii) and (v) one needs to take $\alpha<2$). In fact, they satisfy the following stronger assumption \begin{equation}\label{ksv-e:reg-var2} \phi(\lambda)= \lambda^{\alpha/2}\ell(\lambda)\, , \end{equation} where $\ell$ is a function slowly varying at infinity. By inspecting the table in \cite[Chapter 15]{SSV}, one can come up with many more examples of complete Bernstein functions satisfying this stronger assumption. In the next example we construct a complete Bernstein function satisfying \eqref{ksv-e:reg-var}, but not the stronger condition \eqref{ksv-e:reg-var2}. \begin{example}{\rm Suppose that $\alpha\in (0, 2)$. Let $F$ be the function on $[0, \infty)$ defined by $F(x)=0$ for $0 \le x<1$ and $$ F(x)=2^n\, , \quad 2^{2(n-1)/\alpha} \le x < 2^{2n/\alpha},\ n=1, 2, \dots. $$ Then clearly $F$ is non-decreasing and $x^{\alpha/2} \le F(x) \le 2 x^{\alpha/2}$ for all $x \ge 1$. This implies that for all $t>0$, $$ \frac{t^{\alpha/2}}{2} \le \liminf_{x\to \infty} \frac{F(tx)}{F(x)} \le \limsup_{x\to \infty} \frac{F(tx)}{F(x)} \le 2 t^{\alpha/2}. $$ If $F$ were regularly varying, the above inequality would imply that the index was $\alpha/2$, hence the limit of $F(tx)/F(x)$ as $x\to \infty$ would be equal to $c t^{\alpha/2}$ for some positive constant $c$. We now show that this is impossible. Take $t=2^{2/\alpha}$ and the subsequence $x_n=2^{2n/\alpha}$.
Then $t x_n= 2^{2(n+1)/\alpha}$ and therefore $$ F(t x_n)/F(x_n)=2^{n+2} / 2^{n+1}=2 $$ which should be equal to $c t^{\alpha/2}=c (2^{2/\alpha})^{\alpha/2}=2c$, implying $c=1$. On the other hand, take any $t\in (1, 2^{2/\alpha})$ and the same subsequence $x_n=2^{2n/\alpha}$. Then $t x_n\in [2^{2n/\alpha}, 2^{2(n+1)/\alpha} )$, implying $F(t x_n)=F(x_n)$. Thus the quotient $F(t x_n)/F(x_n)=1$, which should be equal to $c t^{\alpha/2}=t^{\alpha/2}$ for all $t\in (1, 2^{2/\alpha})$. Clearly this is impossible, so $F$ is not regularly varying. This also shows that $F(x)$ is not asymptotically equivalent to $c x^{\alpha/2}$ for any constant $c$, as $x\to \infty$. Let $\sigma$ be the measure corresponding to the nondecreasing function $F$ (in the sense that $\sigma(dt)=F(dt)$): $$ \sigma:=\sum^{\infty}_{n=1} 2^n\delta_{2^{2n/\alpha}}\, . $$ Since $\int_{(0,\infty)}(1+t)^{-1}\, \sigma(dt)<\infty$, $\sigma$ is a Stieltjes measure. Let $$ g(\lambda):=\int_{(0,\infty)}\frac{1}{\lambda+t}\, \sigma(dt)=\sum_{n=1}^{\infty} \frac{2^n}{\lambda +2^{2n/\alpha}} $$ be the corresponding Stieltjes function. It follows from \cite[Theorem 1.7.4]{BGT} or \cite[Lemma 6.2]{WYY} that $g$ is not regularly varying at infinity. Moreover, since $F(x)\asymp x^{\alpha/2}$, $x \to \infty$, it follows from \cite[Lemma 6.3]{WYY} that $g(\lambda)\asymp \lambda^{\alpha/2-1}$, $\lambda \to \infty$. Therefore, the function $f(\lambda):=1/ g(\lambda)$ is a complete Bernstein function which is not regularly varying at infinity, but satisfies $f(\lambda)\asymp \lambda^{1-\alpha/2}$, $\lambda \to \infty$. } \end{example} Now we are going to establish the asymptotic behaviors of $u$ and $\mu$ under the assumption {\bf (H)}. First we claim that under the assumption \eqref{ksv-e:reg-var}, there exist $\delta\in (0, 1)$ and $a, s_0>0$ such that \begin{equation}\label{ksv-e:zahle3} \phi(\lambda t)\ge a\lambda^{\delta}\phi(t), \quad \lambda\ge 1, t\ge 1/s_0. \end{equation} Indeed, by Potter's theorem (cf.
\cite[Theorem 1.5.6]{BGT}), for $0<\epsilon<\alpha/2$ there exists $t_1$ such that $$ \frac{\ell( t)}{\ell(\lambda t)}\le 2 \max\left(\left(\frac{ t}{\lambda t}\right)^{\epsilon}, \left(\frac{\lambda t}{t}\right)^{\epsilon}\right)=2\lambda^{\epsilon}\, ,\quad \lambda \ge 1, t\ge t_1\, . $$ Hence, $$ \phi(\lambda t)\ge c_2 (\lambda t)^{\alpha/2}\ell(\lambda t) = c_2 t^{\alpha/2}\ell(t) \lambda^{\alpha/2} \frac{\ell(\lambda t)}{\ell(t)}\ge c_3 \phi(t)\lambda^{\alpha/2-\epsilon}\, ,\quad \lambda\ge 1, t\ge t_1. $$ By taking $\delta:=\alpha/2 -\epsilon\in (0,1)$, $a=c_3$, and $s_0=1/t_1$ we arrive at \eqref{ksv-e:zahle3}. \begin{thm}\label{ksv-t:behofu} Let $S$ be a complete (possibly killed) subordinator with Laplace exponent $\phi$ satisfying {\bf (H)}. Then the potential density $u$ of $S$ satisfies \begin{equation}\label{ksv-e:behofu} u(t)\asymp t^{-1}\phi(t^{-1})^{-1}\asymp\frac{t^{\alpha/2-1}}{\ell(t^{-1})}\, , \quad t \to 0+\,. \end{equation} \end{thm} \noindent{\bf Proof.} Put $$ w(t):=\int_0^{\infty}e^{-s t}u(s)\, ds =\frac{1}{\phi(t)}, $$ then by \eqref{ksv-e:zahle3} we have $$ w(\lambda t)\le a^{-1}\lambda^{-\delta}w(t), \quad \lambda\ge 1, t\ge 1/s_0. $$ Applying the second part of Proposition \ref{ksv-p:zahle} we see that there is a constant $c>0$ such that $$ u(t)\ge c t^{-1}w(t^{-1}), $$ for small $t>0$. Combining this inequality with \eqref{ksv-e:u-upper-bound} we arrive at \eqref{ksv-e:behofu}. { $\Box$ } \begin{thm}\label{ksv-t:behofmu} Let $S$ be a complete subordinator with Laplace exponent $\phi$ with zero killing coefficient satisfying {\bf (H)}. Then the L\'evy density $\mu$ of $S$ satisfies \begin{equation}\label{ksv-e:behofmu} \mu(t)\asymp t^{-1}\phi(t^{-1})\asymp t^{-\alpha/2-1}\ell(t^{-1})\, , \quad t\to 0+\,. 
\end{equation} \end{thm} \noindent{\bf Proof.} Since $\phi$ is a complete Bernstein function, the function $\psi(\lambda):=\lambda/\phi(\lambda)$ is also a complete Bernstein function and satisfies $$ \psi(\lambda)\asymp \frac{\lambda^{1-\alpha/2}}{\ell(\lambda)}\, ,\quad \lambda \to \infty, $$ where $\alpha\in (0,2)$ and $\ell$ are the same as in \eqref{ksv-e:reg-var}. It follows from Corollary \ref{ksv-c2.4} that the potential measure of a killed subordinator with Laplace exponent $\psi$ has a completely monotone density $v$ given by $$ v(t)=\mu(t, \infty)=\int^\infty_t\mu(s)ds. $$ Applying Theorem \ref{ksv-t:behofu} to $\psi$ and $v$ we get \begin{equation}\label{ksv-e:v-psi} \mu(t,\infty)=v(t)\asymp t^{-1}\psi(t^{-1})^{-1}=\phi(t^{-1})\, ,\quad t \to 0\, . \end{equation} By using the elementary inequality $1-e^{-c y}\le c(1-e^{-y})$, valid for all $c \ge 1$ and all $y>0$, we get that $\phi(c\lambda)\le c\phi(\lambda)$ for all $c\ge 1$ and all $\lambda >0$. Hence $\phi(s^{-1})=\phi(2 (2s)^{-1})\le 2\phi((2s)^{-1})$ for all $s>0$. Therefore, by \eqref{ksv-e:v-psi}, for all $s \in (0, 1/2)$ $$ v(s)\le c_1 \phi(s^{-1})\le 2c_1 \phi((2s)^{-1}) \le c_2 v(2s) $$ for some constants $c_1, c_2>0$. Since $$ v(t/2)\ge v(t/2)-v(t)=\int_{t/2}^{t}\mu(s)\, ds \ge (t/2)\mu(t)\, , $$ we have for all $t \in (0,1)$, $$ \mu(t)\le 2 t^{-1}v(t/2)\le c_2 t^{-1} v(t)\le c_3 t^{-1}\phi(t^{-1})\, , $$ for some constant $c_3>0$. Using \eqref{ksv-e:zahle3} we get that for every $\lambda \ge 1$ $$ \phi(s^{-1})=\phi(\lambda(\lambda s)^{-1})\ge a\lambda^{\delta}\phi((\lambda s)^{-1})\, ,\quad s\le \frac{s_0}{\lambda}\, . $$ It follows from \eqref{ksv-e:v-psi} that there exists a constant $c_4>0$ such that $$ c_4^{-1}\phi(s^{-1})\le v(s) \le c_4 \phi(s^{-1})\, , \quad s< 1\,. $$ Fix $\lambda:=2^{1/\delta}((c_4^2 a^{-1})\vee 1)^{1/\delta}\ge1$.
Then for $s\le (s_0\wedge 1)/\lambda$, $$ v(\lambda s)\le c_4\phi((\lambda s)^{-1})\le c_4 a^{-1} \lambda^{-\delta}\phi(s^{-1})\le c_4^2a^{-1}\lambda^{-\delta} v(s) \le \frac12 v(s) $$ by our choice of $\lambda$. Further, $$ (\lambda-1)s\mu(s)\ge \int_s^{\lambda s}\mu(t)\, dt=v(s)-v(\lambda s)\ge v(s)-\frac12 v(s)=\frac12 v(s)\, . $$ This implies that for all small $t$ $$ \mu(t)\ge \frac{1}{2(\lambda-1)}t^{-1} v(t)= c_5 t^{-1}v(t)\ge c_6 t^{-1}\phi(t^{-1}) $$ for some constants $c_5, c_6>0$. The proof is now complete. { $\Box$ } \section{Subordinate Brownian motion}\label{ksv-sec-sbm} \subsection{Definitions and technical lemma}\label{ksv-ss:sbm} Let $B=(B_t, {\mathbb P}_x)$ be a Brownian motion in ${\mathbb R}^d$ with transition density $p(t,x,y)=p(t,y-x)$ given by $$ p(t,x)=(4\pi t)^{-d/2}\exp\left(-\frac{|x|^2}{4t}\right), \quad t>0,\, x\in {\mathbb R}^d \, . $$ The semigroup $(P_t:\, t\ge 0)$ of $B$ is defined by $P_tf(x)= {\mathbb E}_x[f(B_t)]=\int_{{\mathbb R}^d}p(t,x,y)f(y)\, dy$, where $f$ is a nonnegative Borel function on ${\mathbb R}^d$. Recall that if $d\ge 3$, the Green function $G^{(2)}(x,y)=G^{(2)}(x-y)$, $x,y\in {\mathbb R}^d$, of $B$ is well defined and is equal to $$ G^{(2)}(x)=\int_0^{\infty}p(t,x)\, dt = \frac{\Gamma(d/2-1)}{4\pi^{d/2}}\, |x|^{-d+2}\, . $$ Let $S=(S_t:\, t\ge 0)$ be a complete subordinator independent of $B$, with Laplace exponent $\phi(\lambda)$, L\'evy measure $\mu$ and potential measure $U$. In the rest of the paper, we will always assume that $S$ is a complete subordinator whose killing coefficient is zero, which is independent of $B$ and satisfies ({\bf H}). Hence $\lim_{\lambda\to \infty}\phi(\lambda)=\infty$, and thus $S$ has a completely monotone potential density $u$. We define a new process $X=(X_t:\, t\ge 0)$ by $X_t:=B_{S_t}$. Then $X$ is a L\'evy process with characteristic exponent $\Phi(x)=\phi(|x|^2)$ (see, e.g., \cite[pp.~197--198]{Sat}), called a subordinate Brownian motion.
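For the 1/2-stable subordinator everything is explicit, and the subordination can be tested numerically (an illustrative standard-library Python sketch, not part of the text): with $\phi(\lambda)=\lambda^{1/2}$ and $d=1$, $S_t$ has density $\frac{t}{2\sqrt{\pi}}s^{-3/2}e^{-t^2/(4s)}$, and $X_t=B_{S_t}$ has characteristic exponent $\phi(\theta^2)=|\theta|$, so $X$ is a standard Cauchy process with transition density $t/(\pi(t^2+x^2))$.

```python
import math

def gauss_kernel(t, x):
    # Transition density of B in d = 1, normalized so that
    # E[exp(i theta B_t)] = exp(-t theta^2): p(t,x) = e^{-x^2/4t}/sqrt(4 pi t).
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def stable_half_density(t, s):
    # Density of S_t for the 1/2-stable subordinator, E[e^{-lam S_t}] = e^{-t sqrt(lam)}.
    return t * s ** (-1.5) * math.exp(-t * t / (4.0 * s)) / (2.0 * math.sqrt(math.pi))

def subordinate_density(t, x, u_lo=-30.0, u_hi=30.0, n=6000):
    # q(t,x) = int_0^inf p(s,x) P(S_t in ds), trapezoid rule in log scale.
    h = (u_hi - u_lo) / n
    total = 0.0
    for k in range(n + 1):
        s = math.exp(u_lo + k * h)
        val = gauss_kernel(s, x) * stable_half_density(t, s) * s  # ds = s du
        total += val if 0 < k < n else 0.5 * val
    return total * h

# The subordinated density should match the Cauchy kernel t / (pi (t^2 + x^2)).
for t, x in ((1.0, 0.0), (1.0, 2.0), (0.5, 1.0)):
    cauchy = t / (math.pi * (t * t + x * x))
    assert abs(subordinate_density(t, x) - cauchy) < 1e-3
```

The same closed-form comparison can be done by hand: the integral $\frac{t}{4\pi}\int_0^\infty s^{-2}e^{-(x^2+t^2)/(4s)}\,ds$ evaluates exactly to $t/(\pi(t^2+x^2))$.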
The semigroup $(Q_t:\, t\ge 0)$ of the process $X$ is given by $$ Q_t f(x)= {\mathbb E}_x[f(X_t)]={\mathbb E}_x[f( B_{S_t})]=\int_0^{\infty} P_s f(x)\, {\mathbb P}(S_t\in ds)\, . $$ The semigroup $Q_t$ has a density $q(t,x,y)=q(t,x-y)$ given by $q(t,x)=\int_0^{\infty}p(s,x)\, {\mathbb P}(S_t\in ds)$. Recall that, according to the criterion of Chung-Fuchs type (see \cite{PS71} or \cite[pp. 252--253]{Sat}), $X$ is transient if and only if for some small $r>0$, $\int_{|x|<r} \frac{1}{\Phi(x)}\, dx <\infty$. Since $\Phi(x)=\phi(|x|^2)$, it follows that $X$ is transient if and only if \begin{equation}\label{ksv-transience} \int_{0+}\frac{\lambda^{d/2-1}}{\phi(\lambda)}\, d\lambda <\infty\, . \end{equation} This is always true if $d\ge 3$, and, depending on the subordinator, may be true for $d=1$ or $d=2$. In the case $d\le 2$, if there exists $\gamma\in [0, d/2)$ such that \begin{equation}\label{ksv-e:ass4trans} \liminf_{\lambda \to 0}\frac{\phi(\lambda)}{\lambda^{\gamma}}>0, \end{equation} then \eqref{ksv-transience} holds. For $x\in {\mathbb R}^d$ and a Borel subset $A$ of ${\mathbb R}^d$, the potential measure of $X$ is given by \begin{eqnarray*} G(x,A)&=& {\mathbb E}_x\int_0^{\infty} {\bf 1}_{\{X_t\in A\}} dt= \int_0^{\infty}Q_t{\bf 1}_A(x)\, dt =\int_0^{\infty}\int_0^{\infty}P_s {\bf 1}_A(x){\mathbb P}(S_t\in ds)\, dt\\ &=&\int_0^{\infty}P_s {\bf 1}_A\, u(s)\,ds =\int_A \int_0^{\infty}p(s,x,y)\, u(s)\, ds\, dy \, , \end{eqnarray*} where the second line follows from (\ref{ksv-potential measure}). If $X$ is transient and $A$ is bounded, then $G(x,A)<\infty$ for every $x\in {\mathbb R}^d$. In this case we denote by $G(x,y)$ the density of the potential measure $G(x,\cdot)$. Clearly, $G(x,y)=G(y-x)$ where \begin{equation}\label{ksv-green function} G(x)=\int_0^{\infty} p(t,x)\, U(dt)=\int_0^{\infty} p(t,x) u(t)\, dt\, . \end{equation} The L\'evy measure $\Pi$ of $X$ is given by (see e.g.~\cite[pp. 
197--198]{Sat}) $$ \Pi(A)=\int_A \int_0^{\infty}p(t,x)\, \mu(dt)\, dx =\int_A J(x)\, dx\, ,\quad A\subset {\mathbb R}^d\, , $$ where \begin{equation}\label{ksv-jumping function} J(x):= \int_0^{\infty}p(t,x)\, \mu(dt)=\int_0^{\infty}p(t,x)\mu(t)dt \end{equation} is the L\'evy density of $X$. Define the function $j:(0,\infty)\to (0,\infty)$ by \begin{equation}\label{ksv-function j measure} j(r):= \int_0^{\infty} (4\pi)^{-d/2} t^{-d/2} \exp\left(-\frac{r^2}{4t}\right)\, \mu(dt)\, , \quad r>0\, , \end{equation} and note that by (\ref{ksv-jumping function}), $J(x)=j(|x|)$, $x\in {\mathbb R}^d\setminus \{0\}$. Since $x\mapsto p(t,x)$ is continuous and radially decreasing, we conclude that both $G$ and $J$ are continuous on ${\mathbb R}^d\setminus \{0\}$ and radially decreasing. The following technical lemma will play a key role in establishing the asymptotic behaviors of the Green function $G$ and the L\'evy density $J$ of the subordinate Brownian motion $X$ in the next subsection. \begin{lemma}\label{ksv-key technical} Suppose that $w:(0,\infty)\to (0,\infty)$ is a decreasing function, $\ell:(0,\infty)\to (0,\infty)$ a measurable function which is locally bounded above and below by positive constants and is slowly varying at $\infty$, and $\beta\in [0,2]$, $\beta>1-d/2$. If $d=1$ or $d=2$, we additionally assume that there exist constants $c>0$ and $\gamma <d/2$ such that \begin{equation}\label{ksv-asymp v2} w(t)\le ct^{\gamma-1}\, , \quad \forall \, t \ge 1\, . \end{equation} Let $$ I(x):=\int_0^{\infty}(4\pi t)^{-d/2}e^{-\frac{|x|^2}{4t}}w(t)\, dt\, . $$ \begin{itemize} \item[(a)] If \begin{equation}\label{ksv-asymp v} w(t)\asymp \frac{1}{t^{\beta}\ell(1/t)}\, , \quad t\to 0\, , \end{equation} then $$ I(x)\asymp \frac{1}{|x|^{d+2\beta-2}\, \ell \big(\frac{1}{|x|^2}\big)} \asymp\frac{w(|x|^2)}{|x|^{d-2}}\, , \quad |x|\to 0 \, . 
$$ \item[(b)] If \begin{equation}\label{ksv-asymp v-sim} w(t)\sim \frac{1}{t^{\beta}\ell(1/t)}\, , \quad t\to 0\, , \end{equation} then $$ I(x)\sim \frac{\Gamma(d/2+\beta-1)}{4^{1-\beta}\pi^{d/2}}\, \frac{1}{|x|^{d+2\beta-2}\ell\big(\frac{1}{|x|^2}\big)}\, , \quad |x|\to 0\, . $$ \end{itemize} \end{lemma} \noindent{\bf Proof.} (a) Let us first note that the assumptions of the lemma guarantee that $I(x)<\infty $ for every $x\neq 0$. Now, let $\xi\ge 1/4$ to be chosen later. By a change of variable we get \begin{eqnarray} \int_0^{\infty}(4\pi t)^{-d/2}e^{-\frac{|x|^2}{4t}}w(t)\, dt &=& \frac{1}{4\pi^{d/2}}\left(|x|^{-d+2}\int_0^{\xi |x|^2} t^{d/2-2} e^{-t} w\left(\frac{|x|^2}{4t}\right)\, dt \right.\nonumber\\ &&\qquad \left. + |x|^{-d+2}\int_{\xi|x|^2}^{\infty} t^{d/2-2} e^{-t} w\left(\frac{|x|^2}{4t}\right)\, dt\right)\nonumber\\ &=:& \frac{1}{4\pi^{d/2}}\left(|x|^{-d+2}I_1(x)+|x|^{-d+2}I_2(x)\right)\, .\label{ksv-e:Ione+Itwo} \end{eqnarray} We first consider $I_1(x)$ for the case $d=1$ or $d=2$. It follows from the assumptions that there exists a positive constant $c_1$ such that $w(s)\le c_1 s^{\gamma-1}$ for all $s\ge 1/(4\xi)$. Thus \begin{eqnarray*} I_1(x)\le \int_0^{\xi |x|^2} t^{d/2-2}e^{-t}c_1\left(\frac{|x|^2}{4t}\right)^{\gamma-1}\, dt \le c_2 |x|^{2\gamma-2}\int_0^{\xi|x|^2}t^{d/2-\gamma-1}\, dt = c_3 |x|^{d-2}\, . \end{eqnarray*} It follows that \begin{equation}\label{ksv-Ione} \lim_{|x|\to 0} |x|^{-d+2}I_1(x) \left(|x|^{d-2+2\beta}\, \ell\left(\frac{1}{|x|^2}\right) \right) =0\, . \end{equation} In the case $d\ge 3$, we proceed similarly, using the bound $w(s)\le w(1/(4\xi))$ for $s\ge 1/(4\xi)$. 
Now we consider $I_2(x)$: \begin{eqnarray*} &&|x|^{-d+2}I_2(x)\,=\,\frac{1}{|x|^{d-2}}\int_{\xi |x|^2}^{\infty} t^{d/2-2}e^{-t}w\left(\frac{|x|^2}{4t}\right)\, dt\\ &&=\frac{4^\beta}{|x|^{d+2\beta-2}\, \ell(\frac{1}{|x|^2})}\int_{\xi |x|^2}^{\infty} t^{d/2-2+\beta}e^{-t} w\left(\frac{|x|^2}{4t}\right) \left(\frac{|x|^2}{4t}\right)^{\beta}\, \ell\, \left(\frac{4t}{|x|^2}\right)\, \frac{\ell(\frac{1}{|x|^2})}{\ell (\frac{4t}{|x|^2})}\, dt\, . \end{eqnarray*} Using the assumption (\ref{ksv-asymp v}), we can see that there is a constant $c_1>1$ such that $$ c_1^{-1} \le w\left(\frac{|x|^2}{4t}\right) \left(\frac{|x|^2}{4t}\right)^{\beta}\, \ell\, \left(\frac{4t}{|x|^2}\right) <c_1\, , $$ for all $t$ and $x$ satisfying $|x|^2/(4t)\le 1/(4\xi)$. Now choose a $\delta\in (0,d/2-1+\beta)$ (note that by assumption, $d/2-1+\beta>0$). By Potter's theorem (cf. \cite[Theorem 1.5.6 (i)]{BGT}), there exists $\rho=\rho(\delta)\ge1$ such that \begin{equation}\label{ksv-e:potter1} \frac{\ell(\frac{1}{|x|^2})}{\ell(\frac{4t}{|x|^2})}\le 2\left(\left(\frac{1/|x|^2}{4t/|x|^2}\right)^{\delta}\vee \left(\frac{1/|x|^2}{4t/|x|^2}\right)^{-\delta}\right)=2\left((4t)^{\delta}\vee (4t)^{-\delta}\right)\le c_2(t^{\delta}\vee t^{-\delta}) \end{equation} whenever $\frac{1}{|x|^2}>\rho$ and $\frac{4t}{|x|^2} >\rho$. By reversing the roles of $1/|x|^2$ and $4t/|x|^2$ we also get that \begin{equation}\label{ksv-e:potter2} \frac{\ell(\frac{1}{|x|^2})}{\ell(\frac{4t}{|x|^2})}\ge c_2^{-1}(t^{\delta}\wedge t^{-\delta})\ \end{equation} for $\frac{1}{|x|^2}>\rho$ and $\frac{4t}{|x|^2} >\rho$. 
Now we define $\xi:=\frac{\rho}{4}$ so that for all $x\neq 0$ with $|x|^2 \le \frac{1}{4\xi}$ and $t>\xi|x|^2$ we have that \begin{eqnarray} c_1^{-1} c_2^{-1} \, t^{d/2-2+\beta}e^{-t} (t^{\delta}\wedge t^{-\delta}) &\le & t^{d/2-2+\beta}e^{-t} w\left(\frac{|x|^2}{4t}\right) \left(\frac{|x|^2}{4t}\right)^{\beta}\, \ell\, \left(\frac{4t}{|x|^2}\right) \frac{\ell(\frac{1}{|x|^2})}{\ell(\frac{4t}{|x|^2})} \, \nonumber \\ & \le & c_1 c_2 \, t^{d/2-2+\beta}e^{-t} (t^{\delta}\vee t^{-\delta})\, .\label{ksv-e:key-lemma-1} \end{eqnarray} Let \begin{eqnarray*} c_3&:=&c_1^{-1} c_2^{-1}\int_0^{\infty}t^{d/2-2+\beta}e^{-t} (t^{\delta}\wedge t^{-\delta}) dt <\infty\, ,\\ c_4&:=& c_1 c_2 \int_0^{\infty} t^{d/2-2+\beta}e^{-t} (t^{\delta}\vee t^{-\delta}) dt <\infty\, . \end{eqnarray*} The integrals are finite because, by the choice of $\delta$, we have $d/2-2+\beta-\delta>-1$. It follows from \eqref{ksv-e:key-lemma-1} that \begin{eqnarray*} c_3&\le & \liminf_{|x|\to 0}\int_0^{\infty} t^{d/2-2+\beta}e^{-t} w\left(\frac{|x|^2}{4t}\right) \left(\frac{|x|^2}{4t}\right)^{\beta}\, \ell\, \left(\frac{4t}{|x|^2}\right) \frac{\ell(\frac{1}{|x|^2})}{\ell(\frac{4t}{|x|^2})} \, {\bf 1}_{(\xi|x|^2,\infty)}(t) dt\\ &\le & \limsup_{|x|\to 0}\int_0^{\infty} t^{d/2-2+\beta}e^{-t} w\left(\frac{|x|^2}{4t}\right) \left(\frac{|x|^2}{4t}\right)^{\beta}\, \ell\, \left(\frac{4t}{|x|^2}\right) \frac{\ell(\frac{1}{|x|^2})}{\ell(\frac{4t}{|x|^2})} \, {\bf 1}_{(\xi|x|^2,\infty)}(t) dt \le c_4\, . \end{eqnarray*} This means that \begin{eqnarray} \lefteqn{|x|^{-d+2}I_2(x) \left(|x|^{d-2+2\beta}\, \ell(\frac{1}{|x|^2}) \right)}\nonumber \\ &=&4^{\beta}\int_{\xi |x|^2}^{\infty} t^{d/2-2+\beta}e^{-t} w\left(\frac{|x|^2}{4t}\right) \left(\frac{|x|^2}{4t}\right)^{\beta}\, \ell\, \left(\frac{4t}{|x|^2}\right)\, \frac{\ell(\frac{1}{|x|^2})}{\ell (\frac{4t}{|x|^2})}\, dt \asymp 1\, .\label{ksv-Itwo} \end{eqnarray} Combining \eqref{ksv-Ione} and \eqref{ksv-Itwo} we have proved the first part of the lemma.
\noindent (b) The proof is almost the same, with a small difference at the very end. Since $\ell$ is slowly varying at $\infty$, we have that $$ \lim_{|x|\to 0}\frac{\ell(\frac{1}{|x|^2})}{\ell(\frac{4t}{|x|^2})}=1\, . $$ This implies that \begin{eqnarray*} & &\lim_{|x|\to 0} t^{d/2-2+\beta}e^{-t} w\left(\frac{|x|^2}{4t}\right) \left(\frac{|x|^2}{4t}\right)^{\beta}\, \ell\, \left(\frac{4t}{|x|^2}\right) \frac{\ell(\frac{1}{|x|^2})}{\ell(\frac{4t}{|x|^2})} \, {\bf 1}_{(\xi|x|^2,\infty)}(t)\\ & &\quad = t^{d/2-2+\beta}e^{-t} {\bf 1}_{(0,\infty)}(t) \, . \end{eqnarray*} By the right-hand side inequality in \eqref{ksv-e:key-lemma-1}, we can apply the dominated convergence theorem to conclude that \begin{eqnarray*} \lefteqn{\lim_{|x|\to 0}|x|^{-d+2}I_2(x) \left(|x|^{d-2+2\beta}\, \ell(\frac{1}{|x|^2}) \right)}\\ &=&\lim_{|x|\to 0} 4^{\beta}\int_{0}^{\infty} t^{d/2-2+\beta}e^{-t} w\left(\frac{|x|^2}{4t}\right) \left(\frac{|x|^2}{4t}\right)^{\beta}\, \ell\, \left(\frac{4t}{|x|^2}\right)\, \frac{\ell(\frac{1}{|x|^2})}{\ell (\frac{4t}{|x|^2})}{\bf 1}_{(\xi|x|^2,\infty)}(t)\, dt \\ &=&4^{\beta}\Gamma(d/2-1+\beta)\, . \end{eqnarray*} Together with \eqref{ksv-e:Ione+Itwo} and \eqref{ksv-Ione} this proves the second part of the lemma. { $\Box$ } \subsection{Asymptotic behavior of the Green function and L\'evy density}\label{ksv-ss:asympgj} The goal of this subsection is to establish the asymptotic behaviors of the Green function $G(x)$ and the L\'evy density $J(x)$ of the subordinate process $X$ under certain assumptions on the Laplace exponent $\phi$ of the subordinator $S$. We start with the Green function. \begin{thm}\label{ksv-t:Gorigin} Suppose that the Laplace exponent $\phi$ is a complete Bernstein function satisfying the assumption {\bf (H)} and that $\alpha\in (0,2\wedge d)$. In the case $d\le 2$, we further assume \eqref{ksv-e:ass4trans}. Then $$ G(x)\asymp \frac1{|x|^{d}\phi(|x|^{-2})}\asymp \frac1{|x|^{d-\alpha}\ell(|x|^{-2})}, \qquad |x|\to 0.
$$ \end{thm} \noindent{\bf Proof.} It follows from Theorem \ref{ksv-t:behofu} that the potential density $u$ of $S$ satisfies $$ u(t)\asymp t^{-1}\phi(t^{-1})^{-1}\asymp\frac{t^{\alpha/2-1}}{\ell(t^{-1})}\, , \quad t \to 0+\,. $$ Using \eqref{ksv-e:u-upper-bound} and \eqref{ksv-e:ass4trans} we conclude that in case $d\le 2$ there exists $c>0$ such that $$ u(t)\le ct^{\gamma-1}, \quad t\ge 1\, . $$ We can now apply Lemma \ref{ksv-key technical} with $w(t)=u(t)$, $\beta=1-\alpha/2$ to obtain the required asymptotic behavior. { $\Box$ } \begin{remark}\label{ksv-cond&zahle}{\rm (i) Since $\alpha$ is always assumed to be in $(0, 2)$, the assumption $\alpha\in (0, 2\wedge d)$ in the theorem above makes a difference only in the case $d=1$. \noindent (ii) In case $d\ge 3$, the conclusion of the theorem above is proved in \cite[Theorem 1 (ii)--(iii)]{Z} under weaker assumptions. The statement of \cite[Theorem 1 (ii)]{Z} in case $d\le 2$ is incorrect and the proof has an error. } \end{remark} The asymptotic behavior near the origin of $J(x)$ is contained in the following result. \begin{thm}\label{ksv-t:Jorigin} Suppose that the Laplace exponent $\phi$ is a complete Bernstein function satisfying the assumption {\bf (H)}. Then $$ J(x)\asymp \frac{\phi(|x|^{-2})}{|x|^{d}}\asymp \frac{\ell(|x|^{-2})}{|x|^{d+\alpha}}, \qquad |x|\to 0. $$ \end{thm} \noindent{\bf Proof.} It follows from Theorem \ref{ksv-t:behofmu} that the L\'evy density $\mu$ of $S$ satisfies $$ \mu(t)\asymp t^{-1}\phi(t^{-1})\asymp t^{-\alpha/2-1}\ell(t^{-1})\, , \quad t \to 0+\,. $$ Since $\mu(t)$ is decreasing and integrable at infinity, one can easily show that there exists $c>0$ such that $$ \mu(t)\le ct^{-1}, \quad t\ge 1. $$ In fact, if the claim above were not valid, we could find an increasing sequence $\{t_n\}$ such that $t_1>1, t_n\uparrow\infty, t_n-t_{n-1}\ge t_n/2$ and that $\mu(t_n)\ge n t_n^{-1}$. 
Then we would have $$ \int^\infty_1\mu(t)dt=\int^{t_1}_1\mu(t)dt+\sum^\infty_{n=2}\int^{t_n}_{t_{n-1}}\mu(t)dt \ge \frac{t_1-1}{t_1}+\sum^\infty_{n=2}\frac{n}2=\infty, $$ contradicting the integrability of $\mu$ at infinity. Therefore the claim above is valid. We can now apply Lemma \ref{ksv-key technical} with $w(t)=\mu(t)$, $\beta=1+\alpha/2$ and $\gamma=0$ to obtain the required asymptotic behavior. { $\Box$ } \begin{prop}\label{ksv-properties of j} Suppose that the Laplace exponent $\phi$ is a complete Bernstein function satisfying the assumption {\bf (H)}. Then the following assertions hold. \begin{description} \item{(a)} For any $K>0$, there exists $C_4=C_4(K) >1$ such that \begin{equation}\label{ksv-H:1} j(r)\le C_4\, j(2r), \qquad \forall r\in (0, K). \end{equation} \item{(b)} There exists $C_5 >1$ such that \begin{equation}\label{ksv-H:2} j(r)\le C_5\, j(r+1), \qquad \forall r>1. \end{equation} \end{description} \end{prop} \noindent{\bf Proof.} \eqref{ksv-H:1} follows immediately from Theorem \ref{ksv-t:Jorigin}. However, we give below a proof of both \eqref{ksv-H:1} and \eqref{ksv-H:2} using only \eqref{ksv-nu 0}--\eqref{ksv-nu infty}. For simplicity we redefine in this proof the function $j$ by dropping the factor $(4\pi)^{-d/2}$ from its definition. This does not affect \eqref{ksv-H:1} and \eqref{ksv-H:2}. It follows from Lemma \ref{ksv-l:H2-valid} and Theorem \ref{ksv-t:behofmu} that \begin{description} \item{(a)} For any $K>0$, there exists $c_1=c_1(K) >1$ such that \begin{equation}\label{ksv-nu 0} \mu(r)\le c_1\, \mu(2r), \qquad \forall r\in (0, K). \end{equation} \item{(b)} There exists $c_2 >1$ such that \begin{equation}\label{ksv-nu infty} \mu(r)\le c_2\, \mu(r+1), \qquad \forall r>1. \end{equation} \end{description} Let $0< r < K$.
We have \begin{eqnarray*} \lefteqn{j(2r) = \int_0^{\infty} t^{-d/2} \exp(- r^2/t)\mu(t)\, dt} \\ & \ge & \frac12 \left(\int_{K/2}^{\infty}t^{-d/2} \exp(- r^2/t)\mu(t)\, dt +\int_0^{2K} t^{-d/2} \exp(- r^2/t)\mu(t)\, dt\right)\\ &=& \frac12 (I_1 + I_2). \end{eqnarray*} Now, \begin{eqnarray*} I_1 &=& \int_{K/2}^{\infty}t^{-d/2} \exp(- \frac{r^2}{t})\mu(t)\, dt = \int_{K/2}^{\infty}t^{-d/2} \exp(-\frac{ r^2}{4t}) \exp(-\frac{3 r^2}{4t})\mu(t)\, dt \\ &\ge &\int_{K/2}^{\infty}t^{-d/2} \exp(-\frac{ r^2}{4t}) \exp(-\frac{3 r^2}{2K})\mu(t)\, dt \ge e^{-3K/2}\int_{K/2}^{\infty}t^{-d/2} \exp(-\frac{ r^2}{4t}) \mu(t)\, dt\, ,\\ & & \\ I_2 &=& \int_0^{2K} t^{-d/2}\exp(- \frac{r^2}{t})\mu(t)\, dt = 4^{-d/2+1} \int_0^{K/2} s^{-d/2}\exp(-\frac{ r^2}{4s})\mu(4s)\, ds \\ &\ge & c_1^{-2} 4^{-d/2+1}\int_0^{K/2} s^{-d/2} \exp(-\frac{ r^2}{4s})\mu(s)\,ds. \\ \end{eqnarray*} Combining the three displays above we get that $j(2r)\ge c_3\, j(r)$ for all $ r\in (0, K)$. To prove \eqref{ksv-H:2} we first note that for all $t\ge 2$ and all $r\ge 1$ it holds that $$ \frac{(r+1)^2}{t}-\frac{r^2}{t-1}\le 1\, . $$ This implies that \begin{equation}\label{ksv-a} \exp\left(-\frac{(r+1)^2}{4t}\right) \ge e^{-1/4} \exp\left(-\frac{ r^2}{4(t-1)}\right), \quad \mbox{ for all }r>1, t>2\, . \end{equation} Now we have \begin{eqnarray*} \lefteqn{j(r+1)= \int_0^{\infty} t^{-d/2}\exp(-\frac{(r+1)^2}{4t}) \mu(t)\, dt }\\ &\ge & \frac12 \left( \int_0^8 t^{-d/2}\exp(-\frac{(r+1)^2}{4t}) \mu(t)\, dt +\int_3^{\infty}t^{-d/2}\exp(-\frac{(r+1)^2}{4t}) \mu(t)\, dt \right)\\ &=&\frac12(I_3+I_4). \end{eqnarray*} For $I_3$ note that $(r+1)^2\le 4r^2$ for all $r>1$. 
Thus \begin{eqnarray*} I_3 &=& \int_0^8 t^{-d/2}\exp(-\frac{(r+1)^2}{4t}) \mu(t)\, dt \ge \int_0^8 t^{-d/2}\exp(- r^2/t) \mu(t)\, dt \\ &=& 4^{-d/2+1}\int_0^2 s^{-d/2}\exp(-\frac{ r^2}{4s}) \mu(4s)\, ds \\ & \ge & c_1^{-2} 4^{-d/2+1}\int_0^2 s^{-d/2}\exp(-\frac{ r^2}{4s}) \mu(s)\, ds\, ,\\ & &\\ I_4&=& \int_3^{\infty}t^{-d/2}\exp(-\frac{(r+1)^2}{4t}) \mu(t)\, dt\\ & \ge & \int_3^{\infty}t^{-d/2} e^{-1/4} \exp(-\frac{ r^2}{4(t-1)})\, \mu(t)\, dt\\ &=&e^{-1/4} \int_2^{\infty} (s+1)^{-d/2}\exp(-\frac{ r^2}{4s}) \, \mu(s+1)\, ds\\ &\ge & (3/2)^{-d/2}\, c_2^{-1}\, e^{-1/4} \int_2^{\infty} s^{-d/2}\exp(-\frac{ r^2}{4s}) \mu(s)\, ds\, , \end{eqnarray*} where in the last line we used that $(s+1)^{-d/2}\ge (3/2)^{-d/2}s^{-d/2}$ for $s\ge 2$, together with \eqref{ksv-nu infty}. Combining the three displays above we get that $j(r+1)\ge c_4\, j(r)$ for all $ r>1$. { $\Box$ } \subsection{Some results on subordinate Brownian motion in ${\mathbb R}$}\label{ksv-ss:1dsbm} In this subsection we assume $d=1$. We will consider subordinate Brownian motions in ${\mathbb R}$. Let $B=(B_t:\, t\ge 0)$ be a Brownian motion in ${\mathbb R}$, independent of $S$, with $$ {\mathbb E}\left[e^{i\theta(B_t-B_0)}\right] =e^{-t\theta^2}, \qquad\, \forall \theta\in {\mathbb R}, t>0. $$ The subordinate Brownian motion $X=(X_t:t\ge 0)$ in ${\mathbb R}$ defined by $X_t=B_{S_t}$ is a symmetric L\'evy process with the characteristic exponent $\Phi(\theta)=\phi(\theta^2)$ for all $\theta\in {\mathbb R}.$ In the first part of this subsection, up to Corollary \ref{ksv-c:vsm}, we do not need to assume that $\phi$ satisfies the assumption ({\bf H}). Let $\overline{X}_t:=\sup\{0\vee X_s:0\le s\le t\}$ and let $L=(L_t:\, t\ge 0)$ be a local time of $\overline{X}-X$ at $0$. $L$ is also called a local time of the process $X$ reflected at the supremum. Then the right continuous inverse $L^{-1}_t$ of $L$ is a subordinator and is called the ladder time process of $X$. The process $\overline{X}_{L^{-1}_t}$ is also a subordinator and is called the ladder height process of $X$. 
(For the basic properties of the ladder time and ladder height processes, we refer our readers to \cite[Chapter 6]{Ber}.) Let $\chi$ be the Laplace exponent of the ladder height process of $X$. It follows from \cite[Corollary 9.7]{Fris} that \begin{equation}\label{ksv-e:formula4leoflh} \chi(\lambda)= \exp\left(\frac1\pi\int^{\infty}_0\frac{\log(\Phi(\lambda\theta))} {1+\theta^2}d\theta \right) =\exp\left(\frac1\pi\int^{\infty}_0\frac{\log( \phi(\lambda^2\theta^2))}{1+\theta^2}d\theta \right), \quad \forall \lambda>0. \end{equation} The next result, first proved independently in \cite{KSV3} and \cite{Kw}, tells us that $\chi$ is also a complete Bernstein function. The proof presented below is taken from \cite{KSV3}. \begin{prop}\label{ksv-p:chi is cbf} Suppose $\phi$, the Laplace exponent of the subordinator $S$, is a complete Bernstein function. Then the Laplace exponent $\chi$ of the ladder height process of the subordinate Brownian motion $X_t=B_{S_t}$ is also a complete Bernstein function. \end{prop} \noindent{\bf Proof.} It follows from \cite[Theorem 6.10]{SSV} that $\phi$ has the following representation: \begin{equation}\label{ksv-e:exp-repr} \log \phi(\lambda)=\gamma +\int^\infty_0\left(\frac{t}{1+t^2}-\frac1{\lambda+t}\right)\eta(t)dt, \end{equation} where $\eta$ is a function such that $0\le \eta(t)\le 1$ for all $t>0$. By \eqref{ksv-e:exp-repr} and \eqref{ksv-e:formula4leoflh}, we have \begin{eqnarray*} \log \chi(\lambda) =\frac{\gamma}2+\frac1{\pi}\int^\infty_0\int^\infty_0 \left(\frac{t}{1+t^2}-\frac1{\lambda^2\theta^2+t}\right)\eta(t)dt \frac{d\theta}{1+\theta^2}. 
\end{eqnarray*} By using $0\le \eta(t)\le 1$, we have \begin{eqnarray*} \eta(t)\left|\frac{t}{1+t^2}-\frac1{\lambda^2\theta^2+t}\right| \frac{1}{1+\theta^2}&\le& \frac{1}{1+t^2} \frac{1}{1+\theta^2}\left(\frac1{\lambda^2 \theta^2+t }+ \frac{\lambda^2 \theta^2t}{\lambda^2 \theta^2+t }\right)\\ & \le& \frac{1}{1+t^2} \left(\frac1{\lambda^2 \theta^2+t }+ \frac{\lambda^2 t}{\lambda^2 \theta^2+t }\right). \end{eqnarray*} Since \begin{eqnarray*} \int^\infty_0\frac1{\lambda^2 \theta^2+t }\, d\theta =\frac1t\int^\infty_0\frac1{\frac{\lambda^2\theta^2}t+1}\, d\theta=\frac1t\frac{\sqrt{t}}{\lambda}\int^\infty_0\frac1{u^2+1}\, du = \frac{\pi}{2\lambda\sqrt{t}}, \end{eqnarray*} we can use Fubini's theorem to get \begin{eqnarray} \log \chi(\lambda) &=&\frac{\gamma}2+\int^\infty_0\left(\frac{t}{2(1+t^2)}- \frac1{2\sqrt{t}(\lambda+\sqrt{t})}\right)\eta(t)dt\label{ksv-e:logexp4chi}\\ &=&\frac{\gamma}2+\int^\infty_0\left(\frac{t}{2(1+t^2)}- \frac1{2(1+t)}\right)\eta(t)dt \nonumber\\ & & +\int^\infty_0 \left(\frac1{2(1+t)}-\frac1{2\sqrt{t}(\lambda+\sqrt{t})}\right)\eta(t)dt\nonumber\\ &=&\gamma_1+\int^\infty_0\left(\frac{s}{1+s^2}-\frac1{\lambda+s} \right)\eta(s^2)ds.\nonumber \end{eqnarray} Applying \cite[Theorem 6.10]{SSV} we get that $\chi$ is a complete Bernstein function. { $\Box$ } The potential measure of the ladder height process of $X$ is denoted by $V$ and its density by $v$. We will also use $V$ to denote the renewal function of $X$: $V(t):=V((0,t))=\int_0^t v(s)\, ds$. The following result was first proved in \cite{KSV3}. \begin{prop}\label{ksv-p:chiphi} $\chi$ is related to $\phi$ by the following relation $$ e^{-\pi/2}\sqrt{\phi(\lambda^2)}\le \chi(\lambda)\le e^{\pi/2}\sqrt{\phi(\lambda^2)}\, ,\qquad \textrm{for all }\lambda>0\, .$$ \end{prop} \noindent{\bf Proof.} According to \eqref{ksv-e:logexp4chi}, we have $$ \log \chi(\lambda) =\frac{\gamma}2+\frac12\int^\infty_0\left(\frac{t}{1+t^2}- \frac1{\sqrt{t}(\lambda+\sqrt{t})}\right)\eta(t)dt\, . 
$$ Together with representation \eqref{ksv-e:exp-repr} we get that for all $\lambda >0$ \begin{eqnarray*} \lefteqn{\left|\, \log \chi(\lambda)-\frac12 \log \phi(\lambda^2)\, \right| }\\ &=& \frac12\left|\int_0^{\infty}\left(\Big(\frac{t}{1+t^2}- \frac1{\sqrt{t} (\lambda+\sqrt{t})}\Big) -\Big(\frac{t}{1+t^2}-\frac1{\lambda^2+t} \Big)\right)\eta(t)\, dt\right|\\ &\le &\frac12\int_0^{\infty}\frac{\lambda(\sqrt{t}+\lambda)}{(\lambda^2+t) \sqrt{t}(\lambda+\sqrt{t})}\, dt=\frac12\int_0^{\infty}\frac{\lambda}{(\lambda^2+t)\sqrt{t}}\, dt =\frac{\pi}{2}\, . \end{eqnarray*} This implies that $$ -\pi/2 \le \log \chi(\lambda)-\frac12 \log \phi(\lambda^2) \le \pi/2\, ,\qquad \textrm{for all }\lambda>0\, , $$ i.e., $$ e^{-\pi/2}\le \chi(\lambda)\phi(\lambda^2)^{-1/2}\le e^{\pi/2}\, ,\qquad \textrm{for all }\lambda>0\, . $$ { $\Box$ } Combining the above two propositions with Corollary \ref{ksv-c2.3}, we obtain \begin{corollary}\label{ksv-c:vsm} Suppose $\phi$, the Laplace exponent of the subordinator $S$, is a complete Bernstein function satisfying $\lim_{\lambda\to\infty}\phi(\lambda)=\infty$. Then the potential measure of the ladder height process of the subordinate Brownian motion $X_t=B_{S_t}$ has a completely monotone density $v$. In particular, $v$ and the renewal function $V$ are $C^{\infty}$ functions. \end{corollary} In the remainder of this paper we will always assume that $\phi$ satisfies the assumption {\bf(H)}. We will not explicitly mention this assumption anymore. Since $\phi(\lambda)\asymp\lambda^{\alpha/2}\ell(\lambda)$ as $\lambda \to \infty$, Proposition \ref{ksv-p:chiphi} implies that \begin{equation}\label{ksv-e:abofkappaatinfty} \chi(\lambda)\asymp\lambda^{\alpha/2}(\ell(\lambda^2))^{1/2}, \qquad \lambda\to \infty. \end{equation} It follows from \eqref{ksv-e:abofkappaatinfty} that $\lim_{\lambda\to \infty} \chi(\lambda)/\lambda=0$, hence the ladder height process does not have a drift. 
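To illustrate the estimates above in the simplest (classical) special case: if $\phi(\lambda)=\lambda^{\alpha/2}$, so that $X$ is a rotationally symmetric $\alpha$-stable process and one may take $\ell\equiv 1$, then Proposition \ref{ksv-p:chiphi} reads
$$
e^{-\pi/2}\,\lambda^{\alpha/2}\,\le\, \chi(\lambda)\,\le\, e^{\pi/2}\,\lambda^{\alpha/2}\, ,\qquad \lambda>0\, ,
$$
consistent with the well-known fact that the ladder height process of a symmetric $\alpha$-stable process is, up to a multiplicative constant in its Laplace exponent, an $(\alpha/2)$-stable subordinator.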
Recall that $V(t)=V((0,t))=\int_0^t v(s)ds$ is the renewal function of the ladder height process of $X$. In light of \eqref{ksv-e:abofkappaatinfty}, we have, as a consequence of Theorem \ref{ksv-t:behofu}, the following result. \begin{prop}\label{ksv-p:abofgf4lhpat0} As $t\to 0$, we have $$ V(t)\,\asymp \phi(t^{-2})^{-1/2}\asymp\,\frac{t^{\alpha/2}}{(\ell(t^{-2}))^{1/2}} $$ and $$ v(t)\,\asymp t^{-1}\phi(t^{-2})^{-1/2}\asymp\,\frac{t^{\alpha/2-1}}{ (\ell(t^{-2}))^{1/2}}\, . $$ \end{prop} \begin{remark}\label{ksv-r:abofgf4lhpat0}{\rm It follows immediately from the proposition above that there exists a positive constant $c>0$ such that $V(2t)\le c V(t)$ for all $t\in (0,2)$. } \end{remark} It follows from \eqref{ksv-e:abofkappaatinfty} above and \cite[Lemma 7.10]{Ky} that the process $X$ does not creep upwards. Since $X$ is symmetric, we know that $X$ also does not creep downwards. Thus if, for any $a\in {\mathbb R}$, we define $$ \tau_a=\inf\{t>0: X_t<a\}, \quad \sigma_a=\inf\{t>0: X_t\le a\}, $$ then we have \begin{equation}\label{ksv-e:firstexittime} {\mathbb P}_x(\tau_a=\sigma_a)=1, \quad x>a. \end{equation} Let $G_{(0, \infty)}(x, y)$ be the Green function of $X$ in $(0, \infty)$. Then we have the following result. \begin{prop}\label{ksv-p:Greenf4kpXonhalfline} For any $x, y>0$ we have $$ G_{(0, \infty)}(x, y)=\left\{\begin{array}{ll} \int^x_0v(z)v(y+z-x)dz, & x\le y,\\ \int^x_{x-y}v(z)v(y+z-x)dz, & x>y. \end{array}\right. $$ \end{prop} \noindent{\bf Proof.} Let $X^{(0,\infty)}$ be the process obtained by killing $X$ upon exiting from $(0, \infty)$. 
By using (\ref{ksv-e:firstexittime}) above and \cite[Theorem 20, p.~176]{Ber} we get that for any nonnegative function $f$ on $(0, \infty)$, \begin{equation}\label{ksv-e:e1inpfoformforgf} {\mathbb E}_x\left[ \int_0^{\infty} f(X^{(0, \infty)}_t)\, dt\right]=k \int^{\infty}_0 \int^x_0v(z)f(x+y-z)v(y) dz dy\, , \end{equation} where $k$ is a constant depending on the normalization of the local time of the process $X$ reflected at its supremum. We choose $k=1$. Then \begin{eqnarray}\label{ksv-e:e2inpfoformforgf} &&{\mathbb E}_x\left[ \int_0^{\infty} f(X^{(0, \infty)}_t)\, dt\right] \,=\,\int_0^{\infty} \, v(y)\int_0^x \, v(z)f(x+y-z) dz dy\nonumber\\ &&=\int_0^x \, v(z)\int_0^{\infty} v(y)f(x+y-z) dy dz \,=\,\int_0^x \, v(z)\int_{x-z}^{\infty}\, v(w+z-x)f(w) dw dz\nonumber\\ &&=\int_0^x f(w) \int_{x-w}^x \, v(z)v(w+z-x) dzdw+ \int_x^{\infty} f(w) \int_0^x \, v(z)v(w+z-x) dzdw\, .\nonumber \\ && \end{eqnarray} On the other hand, \begin{equation}\label{ksv-e:e3inpfoformforgf} {\mathbb E}_x\left[\int_0^{\infty} f(X^{(0, \infty)}_t)\, dt\right] =\int_0^{\infty}G_{(0, \infty)}(x,w)f(w)\, dw. \end{equation} By comparing (\ref{ksv-e:e2inpfoformforgf}) and (\ref{ksv-e:e3inpfoformforgf}) we arrive at our desired conclusion. { $\Box$ } For any $r>0$, let $G_{(0, r)}$ be the Green function of $X$ in $(0, r)$. Then we have the following result. \begin{prop}\label{ksv-p:upbdongfofkpinfiniteinterval} For all $r>0$ and all $x\in (0,r)$ $$ \int_0^r G_{(0,r)}(x,y)\, dy \le 2 V(x) V(r)\, . $$ In particular, for any $R>0$, there exists $C_6=C_6(R)>0$ such that for all $r\in (0,R)$ and all $x\in (0,r)$, $$ \int^r_0 G_{(0, r)}(x, y)dy \le C_6 (\phi(r^{-2})\phi(x^{-2}))^{-1/2} \asymp\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}} \frac{x^{\alpha/2}}{(\ell(x^{-2}))^{1/2}}\, . 
$$ \end{prop} \noindent{\bf Proof.} For any $x\in (0, r)$, we have \begin{eqnarray*} &&\int^r_0G_{(0, r)}(x, y)dy \le \int^r_0G_{(0, \infty)}(x, y)dy\\ &&=\int^x_0\int^x_{x-y}v(z)v(y+z-x)dzdy+ \int^r_x\int^x_0v(z)v(y+z-x)dzdy\\ &&=\int^x_0v(z)\int^x_{x-z}v(y+z-x)dydz +\int^x_0v(z)\int^r_xv(y+z-x)dydz\, \le\, 2\,V(r)\,V(x). \end{eqnarray*} Now the desired conclusion follows easily from Proposition \ref{ksv-p:abofgf4lhpat0}. { $\Box$ } As a consequence of the result above, we immediately get the following. \begin{corollary}\label{ksv-p:upbdongfofkpinfiniteinterval2} For all $r>0$ and all $x\in (0,r)$ $$ \int_0^r G_{(0,r)}(x,y)\, dy \le 2 V(r)\big(V(x) \wedge V(r-x)\big)\, . $$ In particular, for any $R>0$, there exists $C_7=C_7(R)>0$ such that for all $x\in (0, r)$, and $r\in (0, R)$, \begin{eqnarray*} \int^r_0G_{(0, r)}(x, y)dy&\le& C_7 (\phi(r^{-2}))^{-1/2}\left((\phi(x^{-2}))^{-1/2}\wedge (\phi((r-x)^{-2}))^{-1/2}\right) \\ &\asymp&\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}} \left(\frac{x^{\alpha/2}}{(\ell(x^{-2}))^{1/2}}\wedge \frac{(r-x)^{\alpha/2}}{(\ell((r-x)^{-2}))^{1/2}}\right)\, . \end{eqnarray*} \end{corollary} \noindent{\bf Proof.} The first inequality is a consequence of the identity $\int^r_0 G_{(0, r)}(x, y)dy=\int^r_0 G_{(0, r)}(r-x, y)dy$ which is true by symmetry of the process $X$. The second one now follows exactly as in the proof of Proposition \ref{ksv-p:upbdongfofkpinfiniteinterval}. { $\Box$ } \begin{remark}\label{ksv-r:upbdongfofkpinfiniteinterval2}{\rm With self-explanatory notation, an immediate consequence of the above corollary is the following estimate \begin{equation}\label{ksv-e:upbdongfofkpinfiniteinterval2} \int_{-r}^r G_{(-r,r)}(x,y)\, dy \le 2 V(2r)\big(V(r+x) \wedge V(r-x)\big)\, . \end{equation} } \end{remark} \section{Harnack inequality and Boundary Harnack principle}\label{ksv-sec-hibhp} From now on we will always assume that $X$ is a subordinate Brownian motion in ${\mathbb R}^d$. 
Recall that {\bf (H)} is the standing assumption on the Laplace exponent $\phi$. The goal of this section is to show that the Harnack inequality and the boundary Harnack principle hold for $X$. The infinitesimal generator ${\bf L}$ of the corresponding semigroup is given by \begin{equation}\label{ksv-3.1} {\bf L} f(x)=\int_{{\mathbb R}^d}\left( f(x+y)-f(x)-y\cdot \nabla f(x) {\bf 1}_{\{|y|\le1\}} \right)\, J(y)dy \end{equation} for $f\in C_b^2({\mathbb R}^d)$. Moreover, for every $f\in C_b^2({\mathbb R}^d)$ $$ f(X_t)-f(X_0)-\int_0^t {\bf L} f(X_s)\, ds $$ is a ${\mathbb P}_x$-martingale for every $x\in {\mathbb R}^d$. We recall the L\'evy system formula for $X$ which describes the jumps of the process $X$: for any non-negative measurable function $f$ on ${\mathbb R}_+ \times {\mathbb R}^d\times {\mathbb R}^d$ with $f(s, y, y)=0$ for all $y\in {\mathbb R}^d$, any stopping time $T$ (with respect to the filtration of $X$) and any $x\in {\mathbb R}^d$, \begin{equation}\label{ksv-e:levy} {\mathbb E}_x \left[\sum_{s\le T} f(s,X_{s-}, X_s) \right]= {\mathbb E}_x \left[ \int_0^T \left( \int_{{\mathbb R}^d} f(s,X_s, y) J(X_s,y) dy \right) ds \right]. \end{equation} (See, for example, \cite[Proof of Lemma 4.7]{CK1} and \cite[Appendix A]{CK2}.) \subsection{Harnack inequality}\label{ksv-ss:hi} It follows from Theorem \ref{ksv-t:Jorigin} and the 0-version of \cite[Propositions 1.5.8 and 1.5.10]{BGT} that \begin{equation}\label{ksv-e:svphi1} r^{-2}\int_0^r s^{d+1} j(s) ds\asymp\frac{\ell(r^{-2})}{r^\alpha}\asymp \phi(r^{-2}), \qquad r\to 0 \end{equation} and \begin{equation}\label{ksv-e:svphi2} \int_r^{\infty} s^{d-1} j(s) ds \asymp\frac{\ell(r^{-2})}{r^\alpha}\asymp \phi(r^{-2}), \qquad r\to 0. \end{equation} For any open set $D$, we use $\tau_D$ to denote the first exit time from $D$, i.e., $\tau_D=\inf\{t>0: \, X_t\notin D\}$. 
\begin{lemma}\label{ksv-L3.1} There exists a constant $C_8>0$ such that for every $r\in (0,1)$ and every $t>0$, $$ {\mathbb P}_x\left(\sup_{s\le t} |X_s-X_0|>r\right) \le C_8 \phi(r^{-2}) t\, . $$ \end{lemma} \noindent{\bf Proof.} It suffices to prove the lemma for $x=0$. Let $f\in C^2_b({\mathbb R}^d)$, $0\leq f \leq 1$, $f(0)=0$, and $f(y)=1$ for all $|y|\ge 1$. Let $c_1=\sup_{y}\sum_{j,k} |(\partial^2/\partial y_j\partial y_k) f(y)|$. Then $|f(z+y)-f(z) -y\cdot \nabla f(z)|\le \frac{c_1}{2} |y|^2$. For $r\in (0,1)$, let $f_r(y)=f(y/r)$. Then the following estimate is valid: \begin{eqnarray*} |f_r(z+y)-f_r(z) -y\cdot \nabla f_r(z){\bf 1}_{\{|y|\le r\}}| &\le& \frac{c_1}{2} \frac{|y|^2}{r^2}{\bf 1}_{\{|y|\le r\}} + {\bf 1}_{\{|y|\ge r\}}\\ &\le& c_2({\bf 1}_{\{|y|\le r\}} \frac{|y|^2}{r^2}+{\bf 1}_{\{|y|\ge r\}})\, . \end{eqnarray*} By using \eqref{ksv-e:svphi1} and \eqref{ksv-e:svphi2}, we get the following estimate: \begin{eqnarray}\label{ksv-referee0} |{\bf L} f_r(z)| &\le & \int_{{\mathbb R}^d} |f_r(z+y)-f_r(z) -y\cdot \nabla f_r(z){\bf 1}_{\{|y|\le r\}}| \, J(y)dy \nonumber \\ & \le & c_2\int_{{\mathbb R}^d} \left({\bf 1}_{\{|y|\le r\}} \frac{|y|^2}{r^2}+{\bf 1}_{\{|y|\ge r\}}\right)\, J(y)dy\nonumber \\ & \le & C_8\phi(r^{-2}) \, , \end{eqnarray} where the constant $C_8$ is independent of $r$. Further, by the martingale property, \begin{equation}\label{ksv-referee} {\mathbb E}_0 f_r( X_{\tau_{B(0,r)}\wedge t} ) - f_r(0)= {\mathbb E}_0 \int_0^{\tau_{B(0,r)}\wedge t} {\bf L} f_r(X_s)\, ds \end{equation} implying the estimate $$ {\mathbb E}_0 f_{r}( X_{\tau_{B(0,r)}\wedge t}) \leq C_8\phi(r^{-2}) t\, . $$ If $X$ exits $B(0,r)$ before time $t$, then $f_{r}( X_{\tau_{B(0,r)}\wedge t})=1$, so the left hand side is larger than ${\mathbb P}_0(\tau_{B(0,r)} \le t)$. Since $\{\sup_{s\le t}|X_s|>r\}\subset \{\tau_{B(0,r)}\le t\}$, the claim follows. 
{ $\Box$ } \begin{lemma}\label{ksv-L3.2} For every $r\in (0,1)$, and every $x\in {\mathbb R}^d$, $$ \inf_{z\in B(x,r/2)} {\mathbb E}_z \left[\tau_{B(x,r)} \right] \geq \frac{1}{C_8 \phi((r/2)^{-2})}\, , $$ where $C_8$ is the constant from Lemma \ref{ksv-L3.1}. \end{lemma} \noindent{\bf Proof.} Using \eqref{ksv-referee0} and \eqref{ksv-referee} we get that for any $t>0$ and $z\in B(x,r/2)$, \begin{eqnarray*} {\mathbb P}_0(\tau_{B(0,r/2)} \le t)&\le& C_8\phi((r/2)^{-2}){\mathbb E}_0 \left[\tau_{B(0,r/2)}\wedge t\right]\\ &=& C_8\phi((r/2)^{-2}){\mathbb E}_{z} \left[\tau_{B(z,r/2)}\wedge t\right]\\ &\le & C_8\phi((r/2)^{-2}){\mathbb E}_{z} \left[\tau_{B(x,r)}\wedge t\right]. \end{eqnarray*} Letting $t\to\infty$, we immediately get the desired conclusion. { $\Box$ } \begin{lemma}\label{ksv-L3.3} There exists a constant $C_9>0$ such that for every $r\in (0,1)$ and every $x\in {\mathbb R}^d$, $$ \sup_{z\in B(x,r)} {\mathbb E}_z \left[{\tau}_{B(x,r)}\right] \leq \frac{C_9}{\phi(r^{-2})}\, . $$ \end{lemma} \noindent{\bf Proof.} Let $r\in (0,1)$, and let $x\in {\mathbb R}^d$. Using the L\'evy system formula \eqref{ksv-e:levy}, we get \begin{eqnarray*} 1 & \geq & {\mathbb P}_z (| X_{{\tau}_{B(x,r)}}-x|>r ) \\ & = & \int_{B(x,r)}G_{B(x,r)}(z,y) \int_{ \overline{B(x,r)}^c } j(|u-y|)\, du \, dy \, , \end{eqnarray*} where $G_{B(x,r)}$ denotes the Green function of the process $X$ in $B(x,r)$. Now we estimate the inner integral. Let $y\in B(x,r)$, $u\in \overline{B(x,r)}^c$. If $u\in B(x,2)$, then $|u-y|\le 2|u-x|$, while for $u\notin B(x,2)$ we use $|u-y|\le |u-x|+1$. 
Then \begin{eqnarray*} \lefteqn{\int_{ \overline{B(x,r)}^c} j(|u-y|)\, du}\\ & = & \int_{ \overline{B(x,r)}^c\cap B(x,2)} j(|u-y|)\, du+ \int_{ \overline{B(x,r)}^c\cap B(x,2)^c} j(|u-y|)\, du \\ & \ge &\int_{ \overline{B(x,r)}^c\cap B(x,2)} j(2|u-x|)\, du+ \int_{ \overline{B(x,r)}^c\cap B(x,2)^c} j(|u-x|+1)\, du \\ & \ge & \int_{ \overline{B(x,r)}^c\cap B(x,2)}c^{-1} j(|u-x|)\, du+ \int_{ \overline{B(x,r)}^c\cap B(x,2)^c} c^{-1}j(|u-x|)\, du \\ &=& \int_{ \overline{B(x,r)}^c}c^{-1}j(|u-x|)\, du\, , \end{eqnarray*} where in the next to last line we used \eqref{ksv-H:1} and \eqref{ksv-H:2}. Now, it follows from \eqref{ksv-e:svphi2} that \begin{eqnarray*} 1 &\ge & \int_{B(x,r)}G_{B(x,r)}(z,y)\, dy \int_{ \overline{B(x,r)}^c}c^{-1}j(|u-x|)\, du \\ & = & {\mathbb E}_z \left[\tau_{B(x,r)} \right] c^{-1}\, c_1\int_r^{\infty} s^{d-1} j(s)\, ds \\ & = & c_2\phi(r^{-2}) {\mathbb E}_z \left[\tau_{B(x,r)} \right] \end{eqnarray*} which implies the lemma. { $\Box$ } An improved version of the above lemma will be given in Proposition \ref{ksv-l:tau} later on. \begin{lemma}\label{ksv-L3.4} There exists a constant $C_{10}>0$ such that for every $r\in (0,1)$, every $x\in {\mathbb R}^d$, and any $A\subset B(x,r)$ $$ {\mathbb P}_y\left(T_A < {\tau}_{B(x,3r)}\right) \geq C_{10} \frac{|A|}{|B(x, r)|}, \qquad \textrm{for all }y\in B(x,2r)\, . $$ \end{lemma} \noindent{\bf Proof.} Without loss of generality assume that ${\mathbb P}_y(T_A < {\tau}_{B(x,3r)})<1/4$. Set $\tau={\tau}_{B(x,3r)}$. By Lemma \ref{ksv-L3.1}, ${\mathbb P}_y(\tau\leq t) \leq {\mathbb P}_y(\tau_{B(y,r)}\leq t) \leq c_1 \phi(r^{-2}) t$. Choose $t_0= 1/(4c_1 \phi(r^{-2}))$, so that ${\mathbb P}_y(\tau\leq t_0) \leq 1/4$. Further, if $z\in B(x, 3r)$ and $u\in A \subset B(x,r)$, then $|u-z| \leq 4r$. Since $j$ is decreasing, $j(|u-z|) \geq j(4r)$. 
Thus, \begin{eqnarray*} {\mathbb P}_y (T_A < \tau) & \geq & {\mathbb E}_y \sum_{s\leq T_A \wedge \tau \wedge t_0} {\bf 1}_{\{X_{s-}\neq X_s, X_s\in A\}} \\ & = & {\mathbb E}_y \int_0^{T_A \wedge \tau \wedge t_0} \int_A j(|u-X_s|)\, du \, ds \\ & \geq & {\mathbb E}_y \int_0^{T_A \wedge \tau \wedge t_0} \int_A j(4r)\, du \, ds \\ & = & j(4r) |A| {\mathbb E}_y[T_A \wedge \tau \wedge t_0] \, , \end{eqnarray*} where in the second line we used properties of the L\'evy system. Next, \begin{eqnarray*} {\mathbb E}_y[T_A\wedge \tau \wedge t_0] & \ge & {\mathbb E}_y[t_0; \, T_A\geq \tau \geq t_0] \\ & = & t_0 {\mathbb P}_y(T_A \ge \tau \ge t_0) \\ & \ge & t_0[1-{\mathbb P}_y(T_A < \tau)-{\mathbb P}_y(\tau <t_0)] \\ & \ge & \frac{t_0}{2} = \frac{1}{8 c_1 \phi(r^{-2})}\, . \end{eqnarray*} The last two displays give that $$ {\mathbb P}_y (T_A < \tau) \geq j(4r) |A| \frac{1}{8 c_1 \phi(r^{-2})} = \frac{1}{8 c_1} |A| \frac{j(4r)}{\phi(r^{-2})}. $$ The claim now follows immediately from \eqref{ksv-e:reg-var} and Theorem \ref{ksv-t:Jorigin}. { $\Box$ } \begin{lemma}\label{ksv-L3.5} There exist positive constants $C_{11}$ and $C_{12}$, such that if $r\in (0,1)$, $x\in {\mathbb R}^d$, $z\in B(x,r)$, and $H$ is a bounded nonnegative function with support in $B(x,2r)^c$, then $$ {\mathbb E}_z H( X_{{\tau}_{B(x,r)}}) \leq C_{11} {\mathbb E}_z [{\tau}_{B(x,r)}] \int H(y) j(|y-x|) \, dy \, , $$ and $$ {\mathbb E}_z H( X_{{\tau}_{B(x,r)}}) \geq C_{12} {\mathbb E}_z [{\tau}_{B(x,r)}] \int H(y) j(|y-x|) \, dy \, . $$ \end{lemma} \noindent{\bf Proof.} Let $y\in B(x,r)$ and $u\in B(x,2r)^c$. If $u\in B(x,2)$ we use the estimates \begin{equation}\label{ksv-3.7} 2^{-1}|u-x|\le |u-y| \le 2|u-x|, \end{equation} while if $u\notin B(x,2)$ we use \begin{equation}\label{ksv-3.8} |u-x|-1\le |u-y|\le |u-x|+1. \end{equation} Let $B\subset B(x,2r)^c$. 
Then using the L\'evy system we get $$ {\mathbb E}_z \left[ {\bf 1}_B(X_{\tau_{B(x,r)}}) \right] = {\mathbb E}_z \int_0^{\tau_{B(x,r)}} \int_B j(|u-X_s|)\, du\, ds\, . $$ By use of \eqref{ksv-H:1}, \eqref{ksv-H:2}, \eqref{ksv-3.7}, and \eqref{ksv-3.8}, the inner integral is estimated as follows: \begin{eqnarray*} \int_B j(|u-X_s|)\, du &=& \int_{B\cap B(x,2)} j(|u-X_s|)\, du + \int_{B\cap B(x,2)^c} j(|u-X_s|)\, du \\ & \le & \int_{B\cap B(x,2)} j(2^{-1}|u-x|)\, du + \int_{B\cap B(x,2)^c} j(|u-x|-1)\, du \\ & \le &\int_{B\cap B(x,2)} c j(|u-x|)\, du + \int_{B\cap B(x,2)^c} c j(|u-x|)\, du \\ & = & c \int_B j(|u-x|)\, du. \end{eqnarray*} Therefore \begin{eqnarray*} {\mathbb E}_z \left[ {\bf 1}_B(X_{\tau_{B(x,r)}}) \right] & \le & {\mathbb E}_z \int_0^{\tau_{B(x,r)}} c \int_B j(|u-x|)\, du \\ & = & c\, {\mathbb E}_z (\tau_{B(x,r)}) \int {\bf 1}_B(u) j(|u-x|)\, du\, . \end{eqnarray*} Using linearity we get the above inequality when ${\bf 1}_B$ is replaced by a simple function. Approximating $H$ by simple functions and taking limits we have the first inequality in the statement of the lemma. The second inequality is proved in the same way. { $\Box$ } \begin{defn}\label{ksv-def:har1} Let $D$ be an open subset of ${\mathbb R}^d$. A function $u$ defined on ${\mathbb R}^d$ is said to be \begin{description} \item{(1)} harmonic in $D$ with respect to $X$ if $$ {\mathbb E}_x\left[|u(X_{\tau_{B}})|\right] <\infty \quad \hbox{ and } \quad u(x)= {\mathbb E}_x\left[u(X_{\tau_{B}})\right], \qquad x\in B, $$ for every open set $B$ whose closure is a compact subset of $D$; \item{(2)} regular harmonic in $D$ with respect to $X$ if it is harmonic in $D$ with respect to $X$ and for each $x \in D$, $u(x)= {\mathbb E}_x\left[u(X_{\tau_{D}})\right].$ \end{description} \end{defn} Now we give the proof of the Harnack inequality. The proof below is basically the proof given in \cite{SV04} which is an adaptation of the proof given in \cite{BL02a}. 
However, the proof below corrects some typos in the proof given in \cite{SV04}. \begin{thm}\label{ksv-T:Har} There exists $C_{13}>0$ such that, for any $r\in (0, 1/4)$, $x_0\in{\mathbb R}^d$, and any function $u$ which is nonnegative on ${\mathbb R}^d$ and harmonic with respect to $X$ in $B(x_0, 17r)$, we have $$ u(x)\le C_{13} u(y), \quad \textrm{for all }x, y\in B(x_0, r). $$ \end{thm} \noindent{\bf Proof.} Without loss of generality we may assume that $u$ is strictly positive in $B(x_0, 16r)$. Indeed, if $u(x)=0$ for some $x\in B(x_0, 16r)$, then by harmonicity $ 0=u(x)={\mathbb E}_x [u(X_{\tau_B})] $ where $B=B(x,\epsilon) \subset B(x_0, 16r)$. This and the fact that the L\'evy measure of $X$ is supported on all of ${\mathbb R}^d$ and has a density imply that $u=0$ a.e. with respect to Lebesgue measure. Moreover, by the harmonicity, for every $y \in B(x_0, 16r)$, $u(y)={\mathbb E}_y[u(X_{\tau_B} )]=0$ where $B=B(y,\delta)\subset B(x_0, 16r)$. Therefore, if $u(x)=0$ for some $x$, then $u$ is identically zero in $B(x_0, 16 r)$ and there is nothing to prove. We first assume $u$ is bounded on ${\mathbb R}^d$. Using the harmonicity of $u$ and Lemma \ref{ksv-L3.4}, one can show that $u$ is bounded from below on $B(x_0, r)$ by a positive number. To see this, let $\epsilon>0$ be such that $F=\{x\in B(x_0, 3r)\setminus B(x_0, 2r): u(x)>\epsilon\}$ has positive Lebesgue measure. Take a compact subset $K$ of $F$ so that it has positive Lebesgue measure. Then by Lemma \ref{ksv-L3.4}, for $x\in B(x_0, r)$, we have $$ u(x)\,=\,{\mathbb E}_x\left[u( X_{T_K\wedge \tau_{B(x_0,9r)}} ) \right] \,>\,c\,\epsilon\,\frac{|K|}{|B(x_0, 3r)|}, $$ for some $c>0$. By taking a constant multiple of $u$ we may assume that $\inf_{B(x_0, r)}u =1/2$. Choose $z_0\in B(x_0, r)$ such that $u(z_0)\le 1$. We want to show that $u$ is bounded above in $B(x_0, r)$ by a positive constant independent of $u$ and $r\in (0, 1/4)$. 
We will establish this by contradiction: If there exists a point $x\in B(x_0, r)$ with $ u(x)=K$ where $K$ is too large, we can obtain a sequence of points in $B(x_0, 2r)$ along which $u$ is unbounded. Using Lemmas \ref{ksv-L3.2}, \ref{ksv-L3.3} and \ref{ksv-L3.5}, one can see that there exists $c_1>0$ such that if $x\in {\mathbb R}^d$, $s\in (0, 1)$ and $H$ is a nonnegative bounded function with support in $B(x, 2s)^c$, then for any $y, z\in B(x, s/2)$, \begin{equation}\label{ksv-e:2.1} {\mathbb E}_z H( X_{\tau_{B(x, s)}} )\,\le \,c_1\,{\mathbb E}_y H( X_{\tau_{B(x, s)}} ). \end{equation} By Lemma \ref{ksv-L3.4}, there exists $c_2>0$ such that if $A\subset B(x_0, 4r)$ then \begin{equation}\label{ksv-e:2.2} {\mathbb P}_y\left(T_A<\tau_{B(x_0, 16r)}\right)\,\ge\, c_2\,\frac{|A|}{|B(x_0, 4r)|}, \quad \forall y\in B(x_0, 8r). \end{equation} Again by Lemma \ref{ksv-L3.4}, there exists $c_3>0$ such that if $x\in{\mathbb R}^d$, $s\in (0, 1)$ and $F\subset B(x, s/3)$ with $|F|/|B(x, s/3)|\ge 1/3$, then \begin{equation}\label{ksv-e:2.3} {\mathbb P}_x\left(T_F<\tau_{B(x, s)}\right)\,\ge\, c_3. \end{equation} Let \begin{equation}\label{ksv-e:2.4} \eta=\frac{c_3}3,\,\,\,\,\,\,\,\,\,\,\, \zeta=(\frac13\wedge\frac1{c_1})\eta. \end{equation} Now suppose there exists $x\in B(x_0, r)$ with $u(x)=K$ for $K>K_0:=\frac{2|B(x_0, 1)|}{c_2\zeta}\vee\frac{2(12)^d}{c_2\zeta}$. Let $s$ be chosen so that \begin{equation}\label{ksv-e:2.5} |B(x, \frac{s}3)|=\frac{2|B(x_0, 4r)|}{c_2\zeta K}<1. \end{equation} Note that this implies \begin{equation}\label{ksv-e:2.6} s=12\left(\frac2{c_2\zeta}\right)^{1/d}rK^{-1/d}<r. \end{equation} Let us write $B_s$ for $B(x, s)$, $\tau_s$ for $\tau_{B(x, s)}$, and similarly for $B_{2s}$ and $\tau_{2s}$. Let $A$ be a compact subset of $$ A'=\{y\in B(x, \frac{s}3): u(y)\ge \zeta K\}. $$ It is well known that $u(X_t)$ is right continuous in $[0,\tau_{B(x_0, 16r)})$. 
Since $z_0\in B(x_0, r)$ and $A'\subset B(x, \frac{s}3)\subset B(x_0, 2r)$, we can apply (\ref{ksv-e:2.2}) to get \begin{eqnarray*} 1&\ge&u(z_0)\ge {\mathbb E}_{z_0}[u( X_{T_A\wedge\tau_{B(x_0, 16r)}} ){\bf 1}_{\{T_A< \tau_{B(x_0, 16r)}\}}]\\ &\ge&\zeta K{\mathbb P}_{z_0}(T_A<\tau_{B(x_0, 16r)})\\ &\ge&c_2\zeta K\frac{|A|}{|B(x_0, 4r)|}. \end{eqnarray*} Hence $$ \frac{|A|}{|B(x, \frac{s}3)|}\le\frac{|B(x_0, 4r)|} {c_2\zeta K |B(x, \frac{s}3)|}=\frac12. $$ This implies that $|A'|/|B(x, s/3)|\le 1/2$. Let $F$ be a compact subset of $B(x, s/3)\setminus A'$ such that \begin{equation}\label{ksv-e:2.7} \frac{|F|}{|B(x, \frac{s}3)|}\ge \frac13. \end{equation} Let $H=u\cdot{\bf 1}_{B_{2s}^c}$. We claim that $$ {\mathbb E}_x[u( X_{\tau_s} ); X_{\tau_s} \notin B_{2s}]\le\eta K. $$ If not, ${\mathbb E}_x H( X_{\tau_s})>\eta K$, and by (\ref{ksv-e:2.1}), for all $y\in B(x, s/3)$, we have \begin{eqnarray*} u(y)&=&{\mathbb E}_y u( X_{\tau_s})\ge {\mathbb E}_y[u( X_{\tau_s}); X_{\tau_s}\notin B_{2s}]\\ &\ge& c_1^{-1}{\mathbb E}_x H( X_{\tau_s})\ge c_1^{-1}\eta K\ge \zeta K, \end{eqnarray*} contradicting (\ref{ksv-e:2.7}) and the definition of $A'$. Let $M=\sup_{B_{2s}}u$. We then have \begin{eqnarray*} K&=&u(x)= {\mathbb E}_x [u( X_{\tau_s \wedge T_F} )]\\ &=&{\mathbb E}_x[u( X_{T_F} ); T_F<\tau_s]+ {\mathbb E}_x[u( X_{\tau_s}); \tau_s<T_F, X_{\tau_s}\in B_{2s}]\\ &&\,\,\,+ {\mathbb E}_x[u( X_{\tau_s}); \tau_s<T_F, X_{\tau_s}\notin B_{2s}]\\ &\le& \zeta K{\mathbb P}_x(T_F<\tau_s)+M{\mathbb P}_x(\tau_s<T_F)+\eta K\\ &=&\zeta K{\mathbb P}_x(T_F<\tau_s)+M(1-{\mathbb P}_x(T_F<\tau_s))+\eta K, \end{eqnarray*} or equivalently $$ \frac{M}{K}\ge\frac{1-\eta-\zeta}{1-{\mathbb P}_x(T_F<\tau_s)} +\zeta . $$ Using (\ref{ksv-e:2.3}) and (\ref{ksv-e:2.4}) we see that there exists $\beta>0$ such that $M\ge K(1+2\beta)$. Therefore there exists $x'\in B(x, 2s)$ with $u(x')\ge K(1+\beta)$. Now suppose there exists $x_1\in B(x_0, r)$ with $u(x_1)=K_1>K_0$. 
Define $s_1$ in terms of $K_1$ analogously to (\ref{ksv-e:2.5}). Using the above argument (with $x_1$ replacing $x$ and $x_2$ replacing $x'$), there exists $x_2\in B(x_1, 2s_1)$ with $u(x_2)=K_2\ge (1+\beta)K_1$. We continue and obtain $s_2$ and then $x_3$, $K_3$, $s_3$, etc. Note that $x_{i+1}\in B(x_i, 2s_i)$ and $K_i\ge (1+\beta)^{i-1}K_1$. In view of (\ref{ksv-e:2.6}), \begin{eqnarray*}\sum_{i=0}^{\infty} |x_{i+1}-x_i|&\le& r+ 24 \left(\frac2{c_2\zeta}\right)^{1/d}r \sum_{i=1}^{\infty}K_i^{-1/d}\\ & \le & r + 24 \left(\frac2{c_2\zeta}\right)^{1/d} K_1^{-1/d}r \sum_{i=1}^{\infty} (1+\beta)^{-(i-1)/d}\\ &=& r + 24 \left(\frac2{c_2\zeta}\right)^{1/d} K_1^{-1/d}r\sum^\infty_{i=0}(1+\beta)^{-i/d}\\ &=& r+ c_4rK^{-1/d}_1 \end{eqnarray*} where $c_4:=24 (\frac2{c_2\zeta})^{1/d}\sum^\infty_{i=0}(1+\beta)^{-i/d}$. So if $K_1>c^d_4 \vee K_0$ then we have a sequence $x_1, x_2, \dots$ contained in $B(x_0, 2r)$ with $u(x_i)\ge (1+\beta)^{i-1}K_1\rightarrow\infty$, a contradiction to $u$ being bounded. Therefore we cannot take $K_1$ larger than $c^d_4\vee K_0$, and thus $\sup_{y\in B(x_0, r)}u(y)\le c^d_4\vee K_0$, which is what we set out to prove. In the case that $u$ is unbounded, one can follow the simple limit argument in the proof of \cite[Theorem 2.4]{SV04} to finish the proof. { $\Box$ } By using the standard chain argument one can derive the following form of the Harnack inequality. \begin{corollary}\label{ksv-c:hi} For every $a \in (0,1)$, there exists $C_{14}=C_{14}(a)>0$ such that for every $r \in (0, 1/4)$, $x_0 \in {\mathbb R}^d$, and any function $u$ which is nonnegative on ${\mathbb R}^d$ and harmonic with respect to $X$ in $B(x_0, r)$, we have $$ u(x)\le C_{14} u(y), \quad \textrm{for all }x, y\in B(x_0, ar)\, . $$ \end{corollary} \subsection{Some estimates for the Poisson kernel} Recall that for any open set $D$ in ${\mathbb R}^d$, $\tau_D$ is the first exit time of $X$ from $D$. 
We recall from Subsection \ref{ksv-ss:sbm} that $X$ has a transition density $q(t, x, y)$, which is jointly continuous. Using this and the strong Markov property, one can easily check that $$ q_D(t, x, y):=q(t, x, y)-{\mathbb E}_x[q(t-\tau_D, X_{\tau_D}, y);\, t>\tau_D], \quad x, y \in D $$ is continuous and the transition density of $X^D$. For any bounded open set $D\subset {\mathbb R}^d$, we will use $G_D$ to denote the Green function of $X^D$, i.e., $$ G_D(x, y):=\int^\infty_0 q_D(t, x, y)dt, \quad x, y\in D. $$ Note that $G_D(x,y)$ is continuous in $(D\times D)\setminus\{(x, x): x\in D\}$. We will frequently use the well-known fact that $G_D(\cdot, y)$ is harmonic in $D\setminus\{y\}$, and regular harmonic in $D\setminus \overline{B(y,\varepsilon)}$ for every $\varepsilon >0$. Using the L\'{e}vy system for $X$, we know that for every bounded open subset $D$, every $f \ge 0$ and all $x \in D$, \begin{equation}\label{ksv-newls} {\mathbb E}_x\left[f(X_{\tau_D});\,X_{\tau_D-} \not= X_{\tau_D} \right] = \int_{\overline{D}^c} \int_{D} G_D(x,z) J(z-y) dz f(y)dy. \end{equation} For notational convenience, we define \begin{equation}\label{ksv-PK} K_D(x,y)\,:= \int_{D} G_D(x,z) J(z-y) dz, \qquad (x,y) \in D \times \overline{D}^c. \end{equation} Thus \eqref{ksv-newls} can be simply written as \begin{equation}\label{ksv-newls-2} {\mathbb E}_x\left[f(X_{\tau_D});\,X_{\tau_D-} \not= X_{\tau_D} \right] =\int_{\overline{D}^c} K_D(x,y)f(y)dy\, , \end{equation} revealing $K_D(x,y)$ as a density of the exit distribution of $X$ from $D$. The function $K_D(x,y)$ is called the Poisson kernel of $X$. Using the continuity of $G_D$ and $J$, one can easily check that $K_D$ is continuous on $D \times \overline{D}^c$. The following proposition is an improvement of Lemma \ref{ksv-L3.3}. The idea of the proof comes from \cite{Sz2}. \begin{prop}\label{ksv-l:tau} For all $r>0$ and all $x_0\in {\mathbb R}^d$, $$ {\mathbb E}_x[\tau_{B(x_0,r)}]\,\le\, 2V(2r) V(r-|x-x_0|)\, ,\qquad x\in B(x_0, r)\, . 
$$ In particular, for any $R>0$, $r\in (0, R)$ and $x_0 \in {\mathbb R}^d$, \begin{eqnarray*} {\mathbb E}_x[\tau_{B(x_0,r)}]&\le & C_{7}\, (\phi(r^{-2})\phi((r-|x-x_0|)^{-2}))^{-1/2}\\ &\asymp &\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}}\frac{(r-|x-x_0|)^{\alpha/2}}{(\ell((r-|x-x_0|)^{-2}))^{1/2}}, \qquad x\in B(x_0, r)\, , \end{eqnarray*} where $C_7=C_7(R)$ is the constant from Proposition \ref{ksv-p:upbdongfofkpinfiniteinterval2}. \end{prop} \noindent{\bf Proof.} Without loss of generality, we may assume that $x_0=0$. For $x\neq 0$, put $Z_t=\frac{X_t\cdot x}{|x|}$. Then $Z_t$ is a L\'evy process on ${\mathbb R}$ with $$ {\mathbb E}(e^{i\theta Z_t})={\mathbb E}(e^{i\theta\frac{x}{|x|}\cdot X_t}) =e^{-t \phi(|\theta\frac{x}{|x|}|^2)}=e^{-t \phi(\theta^2)} \qquad \theta\in {\mathbb R}. $$ Thus $Z_t$ is of the type of one-dimensional subordinate Brownian motion studied in Section \ref{ksv-ss:1dsbm}. It is easy to see that, if $X_t\in B(0, r)$, then $|Z_t|<r$, hence $$ {\mathbb E}_x[\tau_{B(0, r)}]\le {\mathbb E}_{|x|}[\tilde \tau], $$ where $\tilde \tau=\inf\{t>0: |Z_t|\ge r\}$. Now the desired conclusion follows easily from Proposition \ref{ksv-p:upbdongfofkpinfiniteinterval2} (more precisely, from \eqref{ksv-e:upbdongfofkpinfiniteinterval2}). { $\Box$ } As a consequence of Lemma \ref{ksv-L3.2}, Proposition \ref{ksv-l:tau} and \eqref{ksv-PK}, we get the following result.
\begin{prop}\label{ksv-p:Poisson1} There exist $C_{15}, C_{16}>0$ such that for every $r \in (0, 1)$ and $x_0 \in {\mathbb R}^d$, \begin{eqnarray}\label{ksv-P1} K_{B(x_0,r)}(x,y) &\le & C_{15} \, j(|y-x_0|-r) \big(\phi(r^{-2})\phi((r-|x-x_0|)^{-2})\big)^{-1/2}\\ &\asymp & j(|y-x_0|-r) \frac{r^{\alpha/2}} {(\ell(r^{-2}))^{1/2}}\frac{(r-|x-x_0|)^{\alpha/2}} {(\ell((r-|x-x_0|)^{-2}))^{1/2}}\, ,\nonumber \end{eqnarray} for all $(x,y) \in B(x_0,r)\times \overline{B(x_0,r)}^c$ and \begin{equation}\label{ksv-P2} K_{B(x_0,r)}(x_0,y) \,\ge\, C_{16}\,\frac{j(|y-x_0|)}{\phi((r/2)^{-2})}\asymp j(|y-x_0|)\frac{r^\alpha}{\ell(r^{-2})} \end{equation} for all $y \in\overline{B(x_0,r)}^c$. \end{prop} \noindent{\bf Proof.} Without loss of generality, we assume $x_0=0$. For $z \in B(0, r)$ and $r<|y|<2$ $$ |y|-r \le |y|-|z| \le |z-y| \le |z|+|y| \le r +|y| \le 2|y| , $$ and for $z \in B(0, r)$ and $y \in B(0, 2)^c$, $$ |y|-r \le |y|-|z| \le |z-y| \le |z|+|y| \le r +|y|\le |y|+1. $$ Thus by the monotonicity of $j$, \eqref{ksv-H:1} and \eqref{ksv-H:2}, there exists a constant $c>0$ such that $$ c j(|y|) \,\le\, j(|z-y|) \, \le \, j(|y|-r)\, , \qquad (z,y) \in B(0,r) \times \overline{B(0,r)}^c. $$ Applying the above inequality, Lemma \ref{ksv-L3.2} and Proposition \ref{ksv-l:tau} to \eqref{ksv-PK}, we have proved the proposition. { $\Box$ } \begin{prop}\label{ksv-p:Poisson2} For every $a \in (0,1)$, $r \in (0, 1/4)$, $x_0 \in {\mathbb R}^d$ and $x_1, x_2 \in B(x_0, ar)$, $$ K_{B(x_0,r)}(x_1,y) \,\le\, C_{14} K_{B(x_0,r)}(x_2,y), \qquad y \in \overline{B(x_0,r)}^{\, c}\, , $$ where $C_{14}=C_{14}(a)$ is the constant from Corollary \ref{ksv-c:hi}. \end{prop} \noindent{\bf Proof.} Let $a\in (0,1)$, $r\in (0,1/4)$ and $x_0\in {\mathbb R}^d$ be fixed. For every Borel set $A\subset \overline{B(x_0,r)}^{\, c}$, the function $x\mapsto {\mathbb P}_x(X_{\tau_{B(x_0,r)}}\in A)$ is harmonic in $B(x_0,r)$. 
By Corollary \ref{ksv-c:hi} and \eqref{ksv-newls-2}, we have for all $x_1, x_2 \in B(x_0, ar)$, \begin{eqnarray*} \int_A K_{B(x_0,r)}(x_1,y)\, dy&=& {\mathbb P}_{x_1}(X_{\tau_{B(x_0,r)}}\in A)\\ &\le & C_{14} {\mathbb P}_{x_2}(X_{\tau_{B(x_0,r)}}\in A)=\int_A K_{B(x_0,r)}(x_2,y)\, dy\, . \end{eqnarray*} This implies that $K_{B(x_0,r)}(x_1,y)\le C_{14} K_{B(x_0,r)}(x_2,y)$ for a.e.~$y\in \overline{B(x_0,r)}^{\, c}$, and hence by the continuity of $ K_{B(x_0,r)}(x,\cdot)$ for every $y\in \overline{B(x_0,r)}^{\, c}$.{ $\Box$ } The next inequalities will be used several times in the remainder of this paper. \begin{lemma}\label{ksv-l:l} There exists $C>0$ such that \begin{equation}\label{ksv-el1} \frac{s^{\alpha/2}}{\left(\ell(s^{-2})\right)^{1/2}} \,\le \, C \, \frac{r^{\alpha/2}}{\left(\ell(r^{-2})\right)^{1/2}}, \qquad 0<s<r\le 4, \end{equation} \begin{equation}\label{ksv-el2} \frac{s^{1-\alpha/2}}{\left(\ell(s^{-2})\right)^{1/2}} \,\le \, C \, \frac{r^{1-\alpha/2}}{\left(\ell(r^{-2})\right)^{1/2}}, \qquad 0<s<r\le 4, \end{equation} \begin{equation}\label{ksv-el7} s^{1-\alpha/2} \,{\left(\ell(s^{-2})\right)^{1/2}} \,\le \, C \, r^{1-\alpha/2}\,{\left(\ell(r^{-2})\right)^{1/2}}, \qquad 0<s<r\le 4, \end{equation} \begin{equation}\label{ksv-el3} \int^{\infty}_r \frac{\left(\ell(s^{-2})\right)^{1/2}}{s^{1+\alpha/2}}ds \,\le \, C \, \frac{\left(\ell(r^{-2})\right)^{1/2}}{r^{\alpha/2}}, \qquad 0<r\le 4, \end{equation} \begin{equation}\label{ksv-el6} \int^{r}_0 \frac{\left(\ell(s^{-2})\right)^{1/2}}{s^{\alpha/2}}ds \,\le \, C \, \frac{\left(\ell(r^{-2})\right)^{1/2}}{r^{\alpha/2-1}}, \qquad 0<r\le 4, \end{equation} \begin{equation}\label{ksv-el4} \int^{\infty}_r \frac{\ell(s^{-2})}{s^{1+\alpha}}ds \,\le \, C \, \frac{\ell(r^{-2})}{r^{\alpha}}, \qquad 0<r\le 4, \end{equation} \begin{equation}\label{ksv-el5} \int_{0}^r \frac{\ell(s^{-2})}{s^{\alpha-1}}ds \,\le \, C \, \frac{\ell(r^{-2})}{r^{\alpha-2}}, \qquad 0<r\le 4, \end{equation} and \begin{equation}\label{ksv-el8} 
\int_{0}^r \frac{s^{\alpha-1}}{\ell(s^{-2})}ds \,\le \, C \, \frac{r^{\alpha}}{\ell(r^{-2})}, \qquad \ 0<r\le 4. \end{equation} \end{lemma} \noindent{\bf Proof.} The first three inequalities follow easily from \cite[Theorem 1.5.3]{BGT}, while the last five from the 0-version of \cite[1.5.11]{BGT}. { $\Box$ } \begin{prop}\label{ksv-p:Poisson3} For every $a \in (0,1)$, there exists $C_{17}=C_{17}(a)>0$ such that for every $r \in (0, 1)$ and $x_0 \in {\mathbb R}^d$, \begin{eqnarray*} K_{B(x_0,r)}(x,y) \,&\le &\, C_{17}\,\frac{r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \frac{(\ell((|y-x_0|-r)^{-2}))^{1/2}} {( |y-x_0|-r)^{\alpha/2}}\, ,\\ & & \qquad \qquad \qquad \forall x\in B(x_0, ar),\, y \in \{r<|x_0-y| \le 2r\}\, . \end{eqnarray*} \end{prop} \noindent{\bf Proof.} By Proposition \ref{ksv-p:Poisson2}, $$ K_{B(x_0,r)}(x,y) \le \frac{c_1}{r^d} \int_{B(x_0, a r)} K_{B(x_0,r)}(w,y) dw $$ for some constant $c_1=c_1(a)>0$. Thus from Proposition \ref{ksv-l:tau}, (\ref{ksv-P1}) and Remark \ref{ksv-r:abofgf4lhpat0} we have that \begin{eqnarray*} K_{B(x_0,r)}(x,y)&\le& \frac{ c_1}{r^d}\int_{B(x_0, r)}\int_{B(x_0,r)} G_{B(x_0,r)}(w,z)J(z-y) dz dw \\ &=& \frac{c_1}{r^d}\int_{B(x_0, r)} {\mathbb E}_z[\tau_{B(x_0,r)}]J(z-y) dz\\ & \le& \frac{c_2}{r^d} \frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}} \int_{B(x_0, r)}\frac{(r-|z-x_0|)^{\alpha/2}}{(\ell((r-|z-x_0|)^{-2}))^{1/2}} J(z-y)dz \end{eqnarray*} for some constant $c_2=c_2(a)>0$. Now applying Theorem \ref{ksv-t:Jorigin}, we get $$ K_{B(x_0,r)}(x,y) \, \le\, \frac{c_3 r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \int_{B(x_0, r)}\frac{(r-|z-x_0|)^{\alpha/2}} {(\ell((r-|z-x_0|)^{-2}))^{1/2}} \frac{\ell(|z-y|^{-2})} {|z-y|^{d+\alpha}}dz $$ for some constant $c_3=c_3(a)>0$. Since $r-|z-x_0| \le |y-z| \le 3r \le 3$, from \eqref{ksv-el1} we see that $$ \frac{(r-|z-x_0|)^{\alpha/2}} {(\ell((r-|z-x_0|)^{-2}))^{1/2}} \, \le\, c_4 \frac{(|y-z|)^{\alpha/2}} {(\ell(|y-z|^{-2}))^{1/2}} $$ for some constant $c_4>0$. 
Thus we have \begin{eqnarray*} K_{B(x_0,r)}(x,y) & \le& \frac{c_5 r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \int_{B(x_0, r)} \frac{(\ell(|z-y|^{-2}))^{1/2}}{|z-y|^{d+\alpha/2}}dz\\ & \le& \frac{c_5 r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \int_{B(y, |y-x_0|-r)^c} \frac{(\ell(|z-y|^{-2}))^{1/2}}{|z-y|^{d+\alpha/2}}dz\\ &\le & \frac{c_6 r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \int_{|y-x_0|-r}^{\infty} \frac{\left(\ell(s^{-2})\right)^{1/2}} {s^{1+\alpha/2}}ds \end{eqnarray*} for some constants $c_5=c_5(a)>0$ and $c_6=c_6(a)>0$. Using \eqref{ksv-el3} in the above equation, we conclude that $$ K_{B(x_0,r)}(x,y) \,\le \, \frac{c_7 r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \frac{(\ell((|y-x_0|-r)^{-2}))^{1/2}} {( |y-x_0|-r)^{\alpha/2}} $$ for some constant $c_7=c_7(a)>0$. { $\Box$ } \begin{remark}\label{ksv-r:Poisson3}{\rm Note that the right-hand side of the estimate can be replaced by $\frac{V(r)}{r^d V(|y-x_0|-r)}$. } \end{remark} \subsection{Boundary Harnack principle}\label{ksv-ss:bhp} In this subsection, we additionally assume that $\alpha\in (0, 2\wedge d)$ and in the case $d\le 2$, we further assume \eqref{ksv-e:ass4trans}. The proof of the boundary Harnack principle is basically the proof given in \cite{KSV1}, which is adapted from \cite{Bog97, SW99}. The following result is a generalization of \cite[Lemma 3.3]{SW99}. \begin{lemma}\label{ksv-l2.1} For every $a \in (0, 1)$, there exists a positive constant $C_{19}=C_{19}(a)>0$ such that for any $r\in (0, 1)$ and any open set $D$ with $D\subset B(0, r)$ we have $$ {\mathbb P}_x\left(X_{\tau_D} \in B(0, r)^c\right) \,\le\, C_{19}\,r^{-\alpha}\, \ell(r^{-2})\int_D G_D(x,y)dy, \qquad x \in D\cap B(0, ar). $$ \end{lemma} \noindent{\bf Proof.} We will use $C^{\infty}_c({\mathbb R}^d)$ to denote the space of infinitely differentiable functions with compact supports. 
Recall that ${\bf L}$ is the $L_2$-generator of $X$ in \eqref{ksv-3.1} and that $G(x,y)$ and $G_D(x,y)$ are the Green functions of $X$ in ${\mathbb R}^d$ and $D$ respectively. We have ${\bf L} \, G(x,y)=-\delta_x(y)$ in the weak sense. Since $ G_D(x,y)=G(x,y) -{\mathbb E}_x[G(X_{\tau_D},y)] $, we have, by the symmetry of ${\bf L}$, for any $x\in D$ and any nonnegative $\phi \in C^{\infty}_c({\mathbb R}^d)$, \begin{eqnarray*} &&\int_D G_D(x,y) {\bf L} \phi(y)dy =\int_{{\mathbb R}^d} G_D(x,y) {\bf L} \phi(y)dy\\ &&= \int_{{\mathbb R}^d} G(x,y) {\bf L} \phi(y)dy- \int_{{\mathbb R}^d} {\mathbb E}_x[G(X_{\tau_D},y)] {\bf L} \phi(y)dy\\ &&= \int_{{\mathbb R}^d} G(x,y) {\bf L} \phi(y)dy- \int_{D^c} \int_{{\mathbb R}^d} G(z,y) {\bf L} \phi(y)dy {\mathbb P}_x(X_{\tau_D} \in dz)\\ &&=-\phi(x)+ \int_{D^c} \phi(z){\mathbb P}_x(X_{\tau_D} \in dz) \,=\,-\phi(x)+{\mathbb E}_x[\phi(X_{\tau_D})]. \end{eqnarray*} In particular, if $\phi(x)=0$ for $x\in D$, we have \begin{equation}\label{ksv-har_gen} {\mathbb E}_x\left[ \phi(X_{\tau_D})\right] = \int_D G_D(x,y) {\bf L} \phi(y)dy. \end{equation} For fixed $a \in (0,1)$, take a sequence of radial functions $\phi_m$ in $C^{\infty}_c({\mathbb R}^d)$ such that $0\le \phi_m\le 1$, \[ \phi_m(y)=\left\{ \begin{array}{lll} 0, & |y|< a\\ 1, & 1\le |y|\le m+1\\ 0, & |y|>m+2, \end{array} \right. \] and that $\sum_{i, j}|\frac{\partial^2}{\partial y_i\partial y_j} \phi_m|$ is uniformly bounded. Define $\phi_{m, r}(y)=\phi_m(\frac{y}{r})$ so that $0\le \phi_{m, r}\le 1$, \begin{equation}\label{ksv-e:2.11}\phi_{m, r}(y)= \begin{cases} 0, & |y|<ar\\ 1, & r\le |y|\le r(m+1)\\ 0, & |y|>r(m+2), \end{cases} \quad \text{and} \quad \sup_{y\in {\mathbb R}^d} \sum_{i, j}\left|\frac{\partial^2}{\partial y_i\partial y_j} \phi_{m, r}(y)\right| \,<\, c_1\, r^{-2}. 
\end{equation} We claim that there exists a constant $c_1=c_1(a)>0$ such that for all $r\in (0, 1)$, \begin{equation}\label{ksv-e2.1} \sup_{m \ge 1} \sup_{y\in {\mathbb R}^d} |{\bf L}\phi_{m,r}(y)|\,\le\, c_1 r^{-\alpha} \, \ell(r^{-2}). \end{equation} In fact, by Theorem \ref{ksv-t:Jorigin} we have \begin{eqnarray*} && \left|\int_{{\mathbb R}^d} (\phi_{m,r}(x+y)-\phi_{m,r}(x)-(\nabla \phi_{m,r}(x) \cdot y)1_{B(0, r)}(y))J(y) dy \right|\\ &&\le \left|\int_{\{|y|\le r\}} (\phi_{m,r}(x+y)-\phi_{m,r}(x)-(\nabla \phi_{m,r}(x) \cdot y)1_{B(0, r)}(y))J(y) dy\right|\\ && \quad +2\int_{\{r<|y|\}}J(y) dy \\ &&\le \frac{c_2}{r^2}\int_{\{|y|\le r \}} |y|^2 J(y)dy +2\int_{\{r<|y|\}}J(y) dy \\ &&\le \frac{c_3}{r^2}\int_{\{|y|\le r \}} \frac1{|y|^{d+\alpha-2}} \ell(|y|^{-2})dy \,+\,c_3\int_{\{r<|y|\}} \frac1{|y|^{d+\alpha}} \ell(|y|^{-2}) dy \\ && \le \frac{c_4}{r^2} \int_{0}^r \frac{\ell(s^{-2})}{s^{\alpha-1}}ds\,+\, c_4\int^{\infty}_r \frac{\ell(s^{-2})}{s^{1+\alpha}}ds. \end{eqnarray*} Applying \eqref{ksv-el4}-\eqref{ksv-el5} to the above equation, we get $$ \left|\int_{{\mathbb R}^d} (\phi_{m,r}(x+y)-\phi_{m,r}(x)-(\nabla \phi_{m,r}(x) \cdot y)1_{B(0, r)}(y))J(y) dy \right| \,\le \, c_5\, r^{-\alpha}\, \ell(r^{-2}),$$ for some constant $c_5=c_5(d, \alpha, \ell)>0$. So the claim follows. Let $A(x, a,b):=\{ y \in {\mathbb R}^d: a \le |y-x| <b \}.$ When $D \subset B(0,r)$ for some $r\in (0, 1)$, we get, by combining (\ref{ksv-har_gen}) and (\ref{ksv-e2.1}), that for any $x\in D\cap B(0, ar)$, \begin{eqnarray*} {\mathbb P}_x\left(X_{\tau_D} \in B(0, r)^c\right)\,&=&\, \lim_{m\to \infty}{\mathbb P}_x\left(X_{\tau_D} \in A(0, r, (m+1)r)\right)\\ \,&\le &\, c_1\,r^{-\alpha} \, \ell(r^{-2})\int_D G_D(x,y)dy.
\end{eqnarray*} { $\Box$ } \begin{lemma}\label{ksv-l2.1_1} There exists $C_{20}>0$ such that for any open set $D$ with $B(A, \kappa r)\subset D\subset B(0, r)$ for some $r\in (0, 1)$ and $\kappa\in (0, 1)$, we have that for every $x \in D \setminus B(A, \kappa r)$, \begin{eqnarray*} \lefteqn{\int_{D} G_D(x,y) dy }\\ & \le & C_{20}\, r^{\alpha} \,\kappa^{-d-\alpha/2}\, \frac1{\ell((4r)^{-2})}\left(1+ \frac{\ell((\frac{\kappa r}{2})^{-2})}{\ell((4r)^{-2})}\right) {\mathbb P}_x\left(X_{\tau_{D\setminus B(A, \kappa r)}} \in B(A, \kappa r)\right). \end{eqnarray*} \end{lemma} \noindent{\bf Proof.} Fix a point $x\in D\setminus B(A, \kappa r)$ and let $B:=B(A, \frac{\kappa r}2)$. Note that, by the harmonicity of $G_D(x,\,\cdot\,)$ in $D\setminus \{x\}$ with respect to $X$, we have \[ G_D(x,A)\,\ge\,\int_{D\cap \overline{B}^c}K_B(A, y)G_D(x,y)dy \,\ge\,\int_{D\cap B(A, \frac{3\kappa r}4)^c}K_B(A, y)G_D(x,y)dy. \] Since $\frac{3\kappa r}4\le |y-A|\le 2r$ for $y\in B(A, \frac{3\kappa r}4)^c\cap D$ and $j$ is a decreasing function, it follows from \eqref{ksv-P2} in Proposition \ref{ksv-p:Poisson1} and Theorem \ref{ksv-t:Jorigin} that \begin{eqnarray*} G_D(x,A) &\ge& c_1\, \frac{(\frac{\kappa r}{2})^\alpha}{\ell\left(( \frac{\kappa r}{2})^{-2}\right)}\int_{D \cap B(A, \frac{3\kappa r}4)^c}G_D(x,y)J(y-A) dy\\ &\ge& c_1\, j(2r)\, \frac{(\frac{\kappa r}{2})^\alpha}{\ell\left(( \frac{\kappa r}{2})^{-2}\right)}\int_{D \cap B(A, \frac{3\kappa r}4)^c}G_D(x,y) dy\\ &\ge& c_2\, \kappa^\alpha \,r^{-d}\, \frac{\ell((2r)^{-2})}{\ell(( \frac{\kappa r}{2})^{-2})}\int_{D \cap B(A, \frac{3\kappa r}4)^c}G_D(x,y) dy, \end{eqnarray*} for some positive constants $c_1$ and $c_2$. On the other hand, applying Theorem \ref{ksv-T:Har} we get \[ \int_{B(A, \frac{3\kappa r}4)} G_D(x,y) dy\le c_3 \int_{B(A, \frac{3\kappa r}4)} G_D(x,A)dy \,\le\,c_4\,r^{d}\,\kappa^d G_D(x,A), \] for some positive constants $c_3$ and $c_4$. 
Combining these two estimates we get that \begin{equation}\label{ksv-efe1} \int_{D} G_D(x,y) dy \,\le\, c_5\,\left(r^{d}\kappa^d+r^{d} \kappa^{-\alpha}\frac{\ell((\frac{\kappa r}{2})^{-2})} {\ell((2r)^{-2})}\right)\, G_D(x,A) \end{equation} for some constant $c_5>0$. Let $\Omega=D\setminus \overline{B(A, \frac{\kappa r}2)}$. Note that for any $z\in B(A, \frac{\kappa r}4)$ and $y\in \Omega$, $ 2^{-1}|y-z|\le|y-A|\le 2|y-z|$. Thus we get from (\ref{ksv-PK}) and \eqref{ksv-H:1} that for $z\in B(A,\frac{\kappa r}4)$, \begin{equation}\label{ksv-e:KK1} c_6^{-1}K_{\Omega}(x, A) \,\le \,K_{\Omega}(x, z) \,\le\, c_6K_{\Omega}(x, A) \end{equation} for some $c_6>1$. Using the harmonicity of $G_D(\cdot, A)$ in $D\setminus\{A\}$ with respect to $X$, we can split $G_D(\cdot, A)$ into two parts: \begin{eqnarray*} \lefteqn{G_D(x, A) ={\mathbb E}_x \left[G_D(X_{\tau_{\Omega}},A)\right]}\\ &=&{\mathbb E}_x \left[G_D(X_{\tau_{\Omega}},A):\,X_{\tau_{\Omega}} \in B(A, \frac{\kappa r}4) \right]\\ & & + {\mathbb E}_x\left[G_D(X_{\tau_{\Omega}},A):\,X_{\tau_{\Omega}} \in \{\frac{\kappa r}4\le |y-A|\le \frac{\kappa r}2\}\right]\\ & := &I_1+I_2. \end{eqnarray*} Since $G_D(y,A)\le G(y,A)$, by using (\ref{ksv-e:KK1}) and Theorem \ref{ksv-t:Gorigin}, we have \begin{eqnarray*} I_1 &\le & c_6\,K_{\Omega}(x,A) \int_{B(A, \frac{\kappa r}4)}G_D(y, A)dy \\ & \le & c_7 \,K_{\Omega}(x,A) \int_{B(A, \frac{\kappa r}4)} \frac{1}{|y-A|^{d-\alpha}\ell(|y-A|^{-2})}\, dy\, , \end{eqnarray*} for some constant $c_7>0$. Since $|y-A|\le 4r \le 4 $, by \eqref{ksv-el1}, \begin{equation}\label{ksv-efe} \frac{|y-A|^{\alpha/2}}{\ell(|y-A|^{-2})} \,\le\, c_8 \, \frac{(4r)^{\alpha/2}}{\ell((4r)^{-2})} \end{equation} for some constant $c_8>0$. 
Thus \begin{eqnarray*} I_1 &\le & c_7\, c_8\,K_{\Omega}(x,A) \int_{B(A, \frac{\kappa r}4)}\frac{1}{|y-A|^{d-\alpha/2}} \frac{(4r)^{\alpha/2}}{\ell((4r)^{-2})}dy\\ & \le & c_9\kappa^{\alpha/2}r^{\alpha}\frac1{\ell((4r)^{-2})}K_{\Omega}(x, A) \end{eqnarray*} for some constant $c_9>0$. Now using (\ref{ksv-e:KK1}) again, we get \begin{eqnarray*} I_1 &\le & c_{10}\kappa^{\alpha/2-d}r^{\alpha-d}\frac1{\ell((4r)^{-2})}\int_{B(A, \frac{\kappa r}4)} K_{\Omega}(x, z)dz\\ &=&c_{10}\kappa^{\alpha/2-d}r^{\alpha-d}\frac1{\ell((4r)^{-2})}{\mathbb P}_x\left(X_{\tau_{\Omega}}\in B(A, \frac{\kappa r}{4})\right) \end{eqnarray*} for some constant $c_{10}>0$. On the other hand, again by Theorem \ref{ksv-t:Gorigin} and \eqref{ksv-efe}, \begin{eqnarray*} I_2 &=& \int_{\{\frac{\kappa r}4\le |y-A|\le \frac{\kappa r}2\}} G_{D}(y,A) {\mathbb P}_x(X_{\tau_{\Omega}} \in dy) \\ &\le & c_{11}\int_{\{\frac{\kappa r}4\le |y-A|\le \frac{\kappa r}2\}} \frac1{|y-A|^{d-\alpha}} \,\frac{1}{\ell(|y-A|^{-2})} {\mathbb P}_x(X_{\tau_{\Omega}} \in dy)\\ &\le & c_{12} \kappa^{\alpha/2-d}\,r^{\alpha-d} \, \frac1{\ell((4r)^{-2})}{\mathbb P}_x \left(X_{\tau_{\Omega}} \in \{\frac{\kappa r}4\le |y-A|\le \frac{\kappa r}2\}\right), \end{eqnarray*} for some constants $c_{11}>0$ and $c_{12}>0$. Therefore $$ G_D(x, A) \,\le\, c_{13}\, \kappa^{\alpha/2-d}\,r^{\alpha-d}\frac1{\ell((4r)^{-2})}\, {\mathbb P}_x\left(X_{\tau_{\Omega}} \in B(A, \frac{\kappa r}2)\right) $$ for some constant $c_{13}>0$. Combining the above with \eqref{ksv-efe1}, we get \begin{eqnarray*} \lefteqn{\int_{D} G_D(x,y) dy}\\ &\le & c_{14}\, r^{\alpha} \,\kappa^{-d-\alpha/2}\, \frac1{\ell((4r)^{-2})}\left(1+ \frac{\ell((\frac{\kappa r}{2})^{-2})}{\ell((2r)^{-2})}\right){\mathbb P}_x \left(X_{\tau_{D\setminus B(A, \frac{\kappa r}2)}} \in B(A, \frac{\kappa r}2)\right), \end{eqnarray*} for some constant $c_{14}>0$.
It follows immediately that \begin{eqnarray*} \lefteqn{\int_{D} G_D(x,y) dy }\\ & \le & c_{14}\, r^{\alpha} \, \kappa^{-d-\alpha/2}\,\frac1{\ell((4r)^{-2})}\left(1+ \frac{\ell((\frac{\kappa r}{2})^{-2})} {\ell((2r)^{-2})} \right) {\mathbb P}_x\left(X_{\tau_{D\setminus B(A, \kappa r)}} \in B(A, \kappa r)\right). \end{eqnarray*} { $\Box$ } Combining Lemmas \ref{ksv-l2.1}-\ref{ksv-l2.1_1} and using the translation invariant property, we have the following \begin{lemma}\label{ksv-l2.3} There exists $C_{21}>0$ such that for any open set $D$ with $B(A, \kappa r)\subset D\subset B(Q, r)$ for some $r\in(0, 1)$ and $\kappa\in (0, 1)$, we have that for every $ x\in D\cap B(Q, \frac{r}2)$, \begin{eqnarray*} \lefteqn{{\mathbb P}_x\left(X_{\tau_{D}} \in B(Q, r)^c\right)}\\ & \le & C_{21}\,\kappa^{-d-\alpha/2 }\, \frac{\ell(r^{-2})}{ \ell((4r)^{-2})}\, \left(1+\frac{\ell((\frac{\kappa r}{2})^{-2})}{\ell((2r)^{-2})} \right) {\mathbb P}_x\left(X_{\tau_{D\setminus B(A, \kappa r) }} \in B(A, \kappa r) \right). \end{eqnarray*} \end{lemma} Let $A(x, a,b):=\{ y \in {\mathbb R}^d: a \le |y-x| <b \}.$ \begin{lemma}\label{ksv-l2.U} Let $D$ be an open set and $r\in (0,1/2)$. For every $Q \in {\mathbb R}^d$ and any positive function $u$ vanishing on $D^c \cap B(Q, \frac{11}6r)$, there is a $\sigma\in (\frac{10}{6}r, \frac{11}{6}r)$ such that for any $ x \in D \cap B(Q, \frac{3}{2} r)$, \begin{equation}\label{ksv-e:l2.U} {\mathbb E}_x\left[u(X_{\tau_{D \cap B(Q, \sigma)}}); X_{\tau_{D \cap B(Q, \sigma)}} \in B(Q, \sigma)^c\right] \le C_{22}\,\frac{r^{\alpha}}{\ell((2r)^{-2})} \int_{B(Q, \frac{10r}6)^c} J(y-Q)u(y)dy \end{equation} for some constant $C_{22}>0$ independent of $Q$ and $u$. \end{lemma} \noindent{\bf Proof.} Without loss of generality, we may assume that $Q=0$. 
Note that by \eqref{ksv-el6} \begin{eqnarray*} &&\int^{\frac{11}{6}r}_{\frac{10}{6}r}\int_{A(0, \sigma, 2r)} \ell((|y|- \sigma)^{-2})^{1/2} (|y|-\sigma)^{-{\alpha}/2} u(y)\, dy \, d\sigma\\ &&=\int_{A(0, \frac{10}{6}r , 2r)} \int^{ |y| \wedge \frac{11}{6}r}_{\frac{10}{6}r}\ell((|y|- \sigma)^{-2})^{1/2} (|y|-\sigma)^{-{\alpha}/2}\, d\sigma\, u(y )\,dy \\ && \le \int_{A(0, \frac{10}{6}r , 2r)} \left(\int^{ |y| - \frac{10}{6}r}_{0}\ell(s^{-2})^{1/2} s^{-{\alpha}/2}ds \right)u(y)dy \\ &&\le c_1 \int_{A(0, \frac{10r}6, 2r)} \ell\left(\left(|y|- \frac{10r}6 \right)^{-2}\right)^{1/2} \left(|y|- \frac{10r}6\right)^{1-{\alpha}/2} u(y)dy \end{eqnarray*} for some positive constant $c_1$. Using \eqref{ksv-el2} and \eqref{ksv-el7}, we get that there are constants $c_2>0$ and $c_3>0$ such that \begin{eqnarray*} \lefteqn{\int_{A(0, \frac{10r}6, 2r)}\ell\left(\left(|y|- \frac{10r}6 \right)^{-2}\right)^{1/2} \left(|y|- \frac{10r}6\right)^{1-{\alpha}/2} u(y)dy }\\ &\le & c_2 \int_{A(0, \frac{10r}6, 2r)} \ell(|y|^{-2})^{1/2} |y|^{1-{\alpha}/2} u(y)dy\\ &\le & c_3 \frac{r^{1-\alpha/2}}{\ell((2r)^{-2})^{1/2}} \int_{A(0, \frac{10r}6, 2r)} \ell(|y|^{-2}) u(y)dy\, . \end{eqnarray*} Thus, by taking $c_4>6 c_1 c_3$, we can conclude that there is a $\sigma\in (\frac{10}{6}r, \frac{11}{6}r)$ such that \begin{eqnarray}\label{ksv-e:int} \lefteqn{\int_{A(0, \sigma, 2r)}\ell((|y|- \sigma)^{-2})^{1/2}\, (|y|-\sigma)^{-{\alpha}/2}u(y)dy}\nonumber\\ &\le & c_4\,\frac{r^{-\alpha/2}}{\ell((2r)^{-2})^{1/2}} \int_{A(0, \frac{10r}6, 2r)} \ell(|y|^{-2}) u(y)dy. \end{eqnarray} Let $x \in D \cap B(0, \frac{3}{2} r)$.
Note that, since $X$ satisfies the hypothesis ${\bf H}$ in \cite{Sz1}, by Theorem 1 in \cite{Sz1} we have \begin{eqnarray*} && {\mathbb E}_x\left[u(X_{\tau_{D \cap B(0, \sigma)}}); X_{\tau_{D \cap B(0, \sigma)}} \in B(0, \sigma)^c \right]\\ &&= {\mathbb E}_x\left[u(X_{\tau_{D \cap B(0, \sigma)}}); X_{\tau_{D \cap B(0, \sigma)}} \in B(0, \sigma)^c, \, \tau_{D \cap B(0, \sigma)} =\tau_{B(0, \sigma)} \right]\\ &&= {\mathbb E}_x\left[u(X_{\tau_{ B(0, \sigma)}}); X_{ \tau_{B(0, \sigma)}} \in B(0, \sigma)^c, \, \tau_{D \cap B(0, \sigma)} = \tau_{B(0, \sigma)} \right]\\ &&\le {\mathbb E}_x\left[u(X_{\tau_{ B(0, \sigma)}}); X_{\tau_{B(0, \sigma)}} \in B(0, \sigma)^c \right] \,=\,\int_{B(0, \sigma)^c}K_{B(0, \sigma)}(x, y)u(y)dy. \end{eqnarray*} Since $ \sigma <2r < 1$, from \eqref{ksv-P1} in Proposition \ref{ksv-p:Poisson1} and Proposition \ref{ksv-p:Poisson3}, we have \begin{eqnarray*} \lefteqn{ {\mathbb E}_x\left[u(X_{\tau_{D \cap B(0, \sigma)}}); X_{\tau_{D \cap B(0, \sigma)}} \in B(0, \sigma)^c \right] \,\le\, \int_{ B(0, \sigma)^c } K_{B(0, \sigma)}(x, y)u(y)dy}\\ &\le &\, c_5 \int_{A(0, \sigma, 2r)} \frac{\sigma^{\alpha/2-d}}{\left(\ell(\sigma^{-2})\right)^{1/2}} \frac{(\ell((|y|-\sigma)^{-2}))^{1/2}} {( |y|-\sigma)^{\alpha/2}} u(y)dy\\ && + c_5 \int_{B(0, 2r)^c}j(|y|-\sigma) \frac{\sigma^{\alpha/2}} {(\ell(\sigma^{-2}))^{1/2}}\frac{(\sigma-|x|)^{\alpha/2}} {(\ell((\sigma-|x|)^{-2}))^{1/2}} u(y)dy \end{eqnarray*} for some constant $c_5>0$. When $y \in A(0, 2r , 4)$ we have $\frac1{12}|y|\le |y|-\sigma $, while when $|y|\ge 4$ we have $|y|-\sigma\ge |y|-1$.
Since $ \sigma-|x|\le\sigma \le {2r}$, we have by \eqref{ksv-el1} and the monotonicity of $j$, $$ j(|y|-\sigma) \frac{\sigma^{\alpha/2}}{(\ell(\sigma^{-2}))^{1/2}} \frac{(\sigma-|x|)^{\alpha/2}} {(\ell((\sigma-|x|)^{-2}))^{1/2}} \,\le\, c_6 j\left(\frac{|y|}{12}\right) \frac{r^{\alpha}}{\ell((2r)^{-2})} , \quad y \in A(0, 2r , 4) $$ and $$ j(|y|-\sigma) \frac{\sigma^{\alpha/2}}{(\ell(\sigma^{-2}))^{1/2}} \frac{(\sigma-|x|)^{\alpha/2}} {(\ell((\sigma-|x|)^{-2}))^{1/2}} \,\le\, c_6 j(|y|-1) \frac{r^{\alpha}}{\ell((2r)^{-2})} , \quad |y|\ge 4 $$ for some constant $c_6>0$. Thus by applying \eqref{ksv-H:1} and \eqref{ksv-H:2}, we get $$ j(|y|-\sigma) \frac{\sigma^{\alpha/2}}{(\ell(\sigma^{-2}))^{1/2}} \frac{(\sigma-|x|)^{\alpha/2}} {(\ell((\sigma-|x|)^{-2}))^{1/2}} \,\le\, c_7 j(|y|) \frac{r^{\alpha}}{\ell((2r)^{-2})} $$ for some constant $c_7>0$. Therefore, \begin{eqnarray*} \lefteqn{\int_{B(0, 2r)^c}j(|y|-\sigma)\frac{\sigma^{\alpha/2}} {(\ell(\sigma^{-2}))^{1/2}}\frac{(\sigma-|x|)^{\alpha/2}} {(\ell((\sigma-|x|)^{-2}))^{1/2}} u(y)dy}\\ & \le & c_5 c_7 \frac{r^{\alpha}}{\ell((2r)^{-2})} \int_{B(0, 2r)^c} J(y) u(y)\, dy\, . 
\end{eqnarray*} On the other hand, by \eqref{ksv-el1}, \eqref{ksv-e:int} and Theorem \ref{ksv-t:Jorigin}, we have that \begin{eqnarray*} \lefteqn{\int_{A(0, \sigma, 2r)} \frac{\sigma^{\alpha/2-d}}{\left(\ell(\sigma^{-2})\right)^{1/2}} \frac{(\ell((|y|-\sigma)^{-2}))^{1/2}} {( |y|-\sigma)^{\alpha/2}} u(y)dy}\\ &\le& \left(\frac{10r}{6}\right)^{-d}\frac{\sigma^{\alpha/2}} {\left(\ell(\sigma^{-2})\right)^{1/2}} \int_{A(0, \sigma, 2r)} \frac{(\ell((|y|-\sigma)^{-2}))^{1/2}} {( |y|-\sigma)^{\alpha/2}} u(y)dy\\ &\le & c_8 r^{-d}\frac{(2r)^{\alpha/2}}{\left(\ell((2r)^{-2})\right)^{1/2}} \,\frac{r^{-\alpha/2}}{\left(\ell((2r)^{-2})\right)^{1/2}} \int_{A(0, \frac{10r}6, 2r)} \ell(|y|^{-2}) u(y)dy \\ &\le & c_{9} \frac{r^{\alpha}}{\ell((2r)^{-2})} \int_{A(0, \frac{10r}6, 2r)}\ell(|y|^{-2}) |y|^{-d-\alpha} u(y) dy\\ &\le & c_{10} \frac{r^{\alpha}}{\ell((2r)^{-2})} \int_{A(0, \frac{10r}6, 2r)}J(y) u(y) dy \end{eqnarray*} for some positive constants $c_8$, $c_9$ and $c_{10}$. Hence, by combining the last two displays we arrive at $$ {\mathbb E}_x\left[u(X_{\tau_{D \cap B(0, \sigma)}}); X_{\tau_{D \cap B(0, \sigma)}} \in B(0, \sigma)^c \right] \,\le\, c_{11}\,\frac{r^{\alpha}}{\ell((2r)^{-2})} \int_{B(0, \frac{10r}6)^c}J(y)u(y)dy $$ for some constant $c_{11}>0$. { $\Box$ } \begin{lemma}\label{ksv-l2.2} Let $D$ be an open set and $r\in (0,1/2)$. Assume that $B(A, \kappa r)\subset D\cap B(Q, r)$ for $\kappa\in (0, 1/2 ]$. Suppose that $u\ge0$ is regular harmonic in $D\cap B(Q, 2r)$ with respect to $X$ and $u=0$ in $D^c\cap B(Q, 2r)$. If $w$ is a regular harmonic function with respect to $X$ in $D\cap B(Q, r)$ such that $$ w(x)=\left\{ \begin{array}{ll} u(x), & x\in B(Q, \frac{3r}2)^c\cup (D^c\cap B(Q, r)),\\ 0, & x \in A(Q, r, \frac{3r}2), \end{array}\right. $$ then $$ u(A) \ge w(A) \ge C_{23}\,\kappa^{\alpha} \frac{\ell((2r)^{-2})} {\ell((\kappa r)^{-2})} \,u(x), \quad x \in D \cap B(Q,\frac32 r) $$ for some constant $C_{23}>0$.
\end{lemma} \noindent{\bf Proof.} Without loss of generality we may assume $Q=0$. Let $x \in D \cap B(0,\frac32 r)$. The left-hand side inequality in the conclusion of the lemma is clear from the fact that $u$ dominates $w$ on $(D\cap B(0,r))^c$ and both functions are regular harmonic in $D\cap B(0,r)$. Thus we only need to prove the right-hand side inequality. By Lemma \ref{ksv-l2.U} there exists $\sigma\in (\frac{10r}6, \frac{11r}6)$ such that \eqref{ksv-e:l2.U} holds. Since $u$ is regular harmonic in $D\cap B(0, 2r)$ with respect to $X$ and equal to zero on $D^c\cap B(0,2r)$, it follows that \begin{equation}\label{ksv-e:l2.2} u(x)= {\mathbb E}_x\left[u(X_{\tau_{D \cap B(0, \sigma)}}); \,X_{\tau_{D \cap B(0, \sigma)}} \in B(0, \sigma)^c \right] \le c_1\frac{r^{\alpha}}{\ell((2r)^{-2})} \int_{B(0, \frac{10r}6)^c} J(y)u(y)dy \end{equation} for some constant $c_1>0$. On the other hand, by \eqref{ksv-P2} in Proposition \ref{ksv-p:Poisson1}, we have that \begin{eqnarray*} w(A)&=& \int_{B(0, \frac{3r}2)^c} K_{D\cap B(0, r)}(A, y)u(y)dy \ge \int_{B(0, \frac{3r}2)^c} K_{B(A, \kappa r)}(A, y)u(y)dy\\ &\ge & c_2 \int_{B(0, \frac{3r}2)^c} J(A-y) \frac{(\kappa r)^\alpha}{\ell((\kappa r)^{-2})} u(y)dy \end{eqnarray*} for some constant $c_2>0$. Note that $|y-A|\le 2|y|$ in $ A(0,\frac{3r}2, 4)$ and that $|y-A|\le |y|+1$ for $|y|\ge 4$. Hence by the monotonicity of $j$, \eqref{ksv-H:1} and \eqref{ksv-H:2}, $$ w(A)\,\ge\, c_3\,\frac{(\kappa r)^\alpha}{\ell((\kappa r)^{-2})} \int_{B(0, \frac{3r}2)^c} J( y) u(y) dy $$ for some constant $c_3>0$. Therefore, by \eqref{ksv-e:l2.2} $$ w(A)\ge c_3 c_1^{-1} \,\kappa^{\alpha} \frac{\ell((2r)^{-2})} {\ell((\kappa r)^{-2})} \,u(x)\, . $$ { $\Box$ } \begin{defn}\label{ksv-fat} Let $\kappa \in (0,1/2]$. We say that an open set $D$ in ${\mathbb R}^d$ is $\kappa$-fat if there exists $R>0$ such that for each $Q \in \partial D$ and $r \in (0, R)$, $D \cap B(Q,r)$ contains a ball $B(A_r(Q),\kappa r)$.
The pair $(R, \kappa)$ is called the characteristics of the $\kappa$-fat open set $D$. \end{defn} Note that all Lipschitz domains and all non-tangentially accessible domains (see \cite{JK} for the definition) are $\kappa$-fat. The boundary of a $\kappa$-fat open set can be highly nonrectifiable and, in general, no regularity of its boundary can be inferred. A bounded $\kappa$-fat open set may be disconnected. Since $\ell$ is slowly varying at $\infty$, we get the following Carleson estimate from Lemma \ref{ksv-l2.2}. \begin{corollary}\label{ksv-c:Carl} Suppose that $D$ is a $\kappa$-fat open set with the characteristics $(R, \kappa)$. There exists a constant $C_{24}$ depending on the characteristics $(R,\kappa)$ such that if $r \le R\wedge \frac12$, $Q\in \partial D$, $u\ge0$ is regular harmonic in $D\cap B(Q, 2r)$ with respect to $X$ and $u=0$ in $D^c\cap B(Q, 2r)$, then $$ u\left(A_r(Q)\right) \,\ge C_{24}\, u(x)\, , \quad \forall x \in D \cap B(Q,\frac32 r)\, . $$ \end{corollary} The next theorem is a boundary Harnack principle for (possibly unbounded) $\kappa$-fat open sets, and it is the main result of this subsection. \begin{thm}\label{ksv-BHP} Suppose that $D$ is a $\kappa$-fat open set with the characteristics $(R, \kappa)$. There exists a constant $C_{25}>1$ depending on the characteristics $(R,\kappa)$ such that if $r \le R\wedge \frac14$ and $Q\in\partial D$, then for any nonnegative functions $u, v$ in ${\mathbb R}^d$ which are regular harmonic in $D\cap B(Q, 2r)$ with respect to $X$ and vanish in $D^c \cap B(Q, 2r)$, we have $$ C_{25}^{-1}\,\frac{u(A_r(Q))}{v(A_r(Q))}\,\le\, \frac{u(x)}{v(x)}\,\le C_{25}\,\frac{u(A_r(Q))}{v(A_r(Q))}, \qquad x\in D\cap B(Q, \frac{r}2)\, .
$$ \end{thm} \noindent{\bf Proof.} Since $\ell$ is slowly varying at $\infty$ and locally bounded above and below by positive constants, there exists a constant $c>0$ such that for every $r\in (0,1/4)$, \begin{equation}\label{ksv-lll} \max \left(\frac{\ell(r^{-2})}{ \ell((\kappa r)^{-2}) },\, \frac{\ell((2r)^{-2})}{ \ell((4r)^{-2})},\, \frac{\ell((\frac{\kappa r}{2})^{-2})}{\ell((4r)^{-2})},\, \frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})} \right) \,\le\, c\, . \end{equation} Fix $r\in (0, R\wedge \frac14)$ throughout this proof. Without loss of generality we may assume that $Q=0$ and $u(A_r(0))=v(A_r(0))$. For simplicity, we will write $A_r(0)$ as $A$ in the remainder of this proof. Define $u_1$ and $u_2$ to be regular harmonic functions in $D\cap B(0, r)$ with respect to $X$ such that $$ u_1(x)=\left\{ \begin{array}{ll} u(x), & x\in A(0,r, \frac{3r}{2}),\\ 0, & x\in B(0, \frac{3r}2)^c\cup(D^c\cap B(0, r)) \end{array} \right. $$ and $$ u_2(x)=\left\{ \begin{array}{ll} 0, & x\in A(0,r, \frac{3r}{2}), \\ u(x), & x\in B(0, \frac{3r}2)^c\cup(D^c\cap B(0, r)). \end{array} \right. $$ and note that $u=u_1+u_2$. If $D\cap A(0,r,\frac{3r}{2})=\emptyset$, then $u_1=0$ and the inequality (\ref{ksv-e2.6}) below holds trivially. So we assume that $D\cap A(0,r,\frac{3r}{2})$ is not empty. Then by Lemma \ref{ksv-l2.2}, $$ u(y)\le c_1 \kappa^{-\alpha} \frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\, u(A), \qquad y\in D\cap B(0, \frac{3r}2), $$ for some constant $c_1>0$. 
For $x\in D\cap B(0, \frac{r}2)$, we have \begin{eqnarray*} u_1(x)&=& {\mathbb E}_x\left[u(X_{\tau_{D\cap B(0, r)}}): X_{\tau_{D\cap B(0, r)}}\in D\cap A(0,r,\frac{3r}{2})\right]\\ &\le&\left(\sup_{D\cap A(0,r,\frac{3r}{2})}u(y)\right) {\mathbb P}_x\left( X_{\tau_{D\cap B(0, r)}}\in D\cap A(0,r,\frac{3r}{2})\right) \\ &\le&\left(\sup_{D\cap A(0,r,\frac{3r}{2})}u(y)\right) {\mathbb P}_x\left( X_{\tau_{D\cap B(0, r)}}\in B(0,r)^c \right) \\ &\le&c_1\,\kappa^{-\alpha} \frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})} \,u(A) \,{\mathbb P}_x\left( X_{\tau_{D\cap B(0, r)}}\in B(0,r)^c \right). \end{eqnarray*} Now using Lemma \ref{ksv-l2.3} (with $D$ replaced by $D\cap B(0,r)$) and \eqref{ksv-lll}, we have that for $ x\in D\cap B(0, \frac{r}2)$, \begin{eqnarray} &&u_1(x)\nonumber\\ &&\le\, c_2\,\kappa^{-d-\frac32\alpha }\, \frac {\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\frac{\ell(r^{-2})}{\ell((4r)^{-2})}\, \left(1+\frac{\ell((\frac{\kappa r}{2})^{-2})}{\ell((4r)^{-2})} \right)\,u(A)\times \nonumber\\ &&\quad \times\ {\mathbb P}_x\left( X_{\tau_{(D\cap B(0,r))\setminus B(A, \frac{\kappa r}2)}} \in B(A, \frac{\kappa r}2)\right) \nonumber\\ &&\le\,c_3 \,u(A)\,{\mathbb P}_x\left( X_{\tau_{(D\cap B(0,r))\setminus B(A, \frac{\kappa r}2)}} \in B(A, \frac{\kappa r}2)\right) \label{ksv-e2.3} \end{eqnarray} for some positive constants $c_2$ and $c_3=c_3(\kappa)$. Since $r <1/4$, Theorem \ref{ksv-T:Har} implies that $$ u(y)\,\ge\, c_4\,u(A), \qquad y\in B(A, \frac{\kappa r}2) $$ for some constant $c_4>0$. Therefore for $x\in D\cap B(0, \frac{r}2)$ \begin{equation}\label{ksv-e2.4} u(x) \,=\, {\mathbb E}_x\left[u(X_{\tau_{(D\cap B(0, r))\setminus B(A, \frac{\kappa r}2)}}) \right] \,\ge\, c_4\,u(A)\, {\mathbb P}_x\left(X_{\tau_{(D\cap B(0,r))\setminus B(A, \frac{\kappa r}2)}} \in B(A, \frac{\kappa r}2)\right).
\end{equation} Using (\ref{ksv-e2.3}), the analogue of (\ref{ksv-e2.4}) for $v$, and the assumption that $u(A)=v(A)$, we get that for $x\in D\cap B(0, \frac{r}2)$, \begin{equation}\label{ksv-e2.6} u_1(x)\,\le \,c_3\,v(A)\, {\mathbb P}_x\left(X_{\tau_{(D\cap B(0, r)) \setminus B(A, \frac{\kappa r}2)}} \in B(A, \frac{\kappa r}2)\right)\,\le \,c_5\,v(x) \end{equation} for some constant $c_5=c_5(\kappa)>0.$ For $x\in D\cap B(0, r)$, we have \begin{eqnarray*} u_2(x)&=& \int_{B(0, \frac{3r}2)^c}K_{D\cap B(0, r)} (x, z)u(z)dz\\ &=& \int_{B(0, \frac{3r}2)^c} \int_{D\cap B(0, r)} G_{D\cap B(0, r)}(x, y) J(y-z)dy\, u(z)\, dz. \end{eqnarray*} Let $$ s(x):=\int_{D\cap B(0, r)}G_{D\cap B(0, r)}(x, y)dy. $$ Note that for every $y \in B(0,r)$ and $z \in B(0, \frac{3r}2)^c$, $$ \frac13|z| \,\le\, |z|-r \,\le\, |z|-|y| \, \le\, |y-z|\, \le \, |y|+|z|\, \le \, r+|z| \le 2 |z|\, , $$ and that for every $y \in B(0,r)$ and $z \in B(0, 12)^c$, $$ |z|-1\, \le \, |y-z|\, \le \,|z|+1. $$ So by the monotonicity of $j$, for every $y \in B(0,r)$ and $z \in A(0, \frac{3r}2, 12)$, $$ j(12|z|) \, \le \,j(2|z|) \, \le \,J(y-z) \, \le \, j\left(\frac{|z|}{3}\right) \, \le \,j\left(\frac{|z|}{12}\right)\, , $$ and for every $y \in B(0,r)$ and every $z \in B(0,12)^c$, $$ j(|z|-1)\, \le \,J(y-z) \, \le \, j(|z|+1). $$ Using \eqref{ksv-H:1} and \eqref{ksv-H:2}, we have that, for every $y \in B(0,r)$ and $z \in B(0, \frac{3r}2)^c$, $$ c_6^{-1} j(|z|) \, \le \,J(y-z) \, \le \,c_6\,j(|z|) $$ for some constant $c_6>0$. Thus we have \begin{equation}\label{ksv-e2.5} c_7^{-1}\le \left(\frac{u_2(x)}{u_2(A)}\right)\left(\frac{s(x)}{s(A)}\right)^{-1}\le c_7, \end{equation} for some constant $c_7>1$. 
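Indeed, \eqref{ksv-e2.5} follows by integrating the two-sided bound on $J$: for $x\in D\cap B(0,r)$,
$$
c_6^{-1}\, s(x) \int_{B(0, \frac{3r}2)^c} j(|z|)u(z)\,dz \,\le\, u_2(x) \,\le\, c_6\, s(x) \int_{B(0, \frac{3r}2)^c} j(|z|)u(z)\,dz,
$$
and dividing the bounds at $x$ by those at $A$ gives \eqref{ksv-e2.5} with $c_7=c_6^2$.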
Applying (\ref{ksv-e2.5}) to $u$ and $v$, and Lemma \ref{ksv-l2.2} to $v$ and $v_2$, we obtain for $x\in D\cap B(0, \frac{r}2)$, \begin{eqnarray} u_2(x) &\le & c_7\,u_2(A)\,\frac{s(x)}{s(A)}\,\le\, c_{7}^2\, \frac{u_2(A)}{v_2(A)}\,v_2(x)\, \le\, c_{8}\, \kappa^{-\alpha} \frac{\ell((\kappa r)^{-2})} {\ell((2r)^{-2})}\frac{u(A)}{v(A)}\,v_2(x)\nonumber \\ & = & c_{8}\,\kappa^{-\alpha} \frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\,v_2(x), \label{ksv-e2.7} \end{eqnarray} for some constant $c_8>0.$ Combining (\ref{ksv-e2.6}) and (\ref{ksv-e2.7}) and applying \eqref{ksv-lll}, we have $$ u(x)\,\le\, c_{9} \,v(x), \qquad x\in D\cap B(0, \frac{r}2), $$ for some constant $c_{9}=c_{9}(\kappa)>0.$ { $\Box$ }

\noindent {\bf Acknowledgment:} We thank Qiang Zeng for his comments on the first version of this paper. We also thank the referee for helpful comments.

\end{doublespace}

{\bf Panki Kim}

Department of Mathematical Sciences and Research Institute of Mathematics, Seoul National University, San56-1 Shinrim-dong Kwanak-gu, Seoul 151-747, Republic of Korea

E-mail: \texttt{pkim@snu.ac.kr}

{\bf Renming Song}

Department of Mathematics, University of Illinois, Urbana, IL 61801, USA

E-mail: \texttt{rsong@math.uiuc.edu}

{\bf Zoran Vondra{\v{c}}ek}

Department of Mathematics, University of Zagreb, Zagreb, Croatia

E-mail: \texttt{vondra@math.hr}

\end{document}
\begin{document} \title[Stability of the Slow Manifold] {Stability of the Slow Manifold\\ in the Primitive Equations} \author[Temam]{R.~Temam} \email{temam@indiana.edu} \urladdr{http://mypage.iu.edu/\~{}temam} \address[RT]{The Institute for Scientific Computing and Applied Mathematics\\ Indiana University, Rawles Hall\\ Bloomington, IN~47405--7106, United States} \author[Wirosoetisno]{D.~Wirosoetisno} \email{djoko.wirosoetisno@durham.ac.uk} \urladdr{http://www.maths.dur.ac.uk/\~{}dma0dw} \address[DW]{Department of Mathematical Sciences\\ University of Durham\\ Durham\ \ DH1~3LE, United Kingdom} \thanks{This research was partially supported by the National Science Foundation under grant NSF-DMS-0604235, by the Research Fund of Indiana University, and by a grant from the Nuffield Foundation} \keywords{Slow manifold, exponential asymptotics, primitive equations} \subjclass[2000]{Primary: 35B40, 37L25, 76U05} \begin{abstract} We show that, under reasonably mild hypotheses, the solution of the forced--dissipative rotating primitive equations of the ocean loses most of its fast, inertia--gravity, component in the small Rossby number limit as $t\to\infty$. At leading order, the solution approaches what is known as ``geostrophic balance'' even under ageostrophic, slowly time-dependent forcing. Higher-order results can be obtained if one further assumes that the forcing is time-independent and sufficiently smooth. If the forcing lies in some Gevrey space, the solution will be exponentially close to a finite-dimensional ``slow manifold'' after some time. \end{abstract} \maketitle \section{Introduction}\label{s:intro} One of the most basic models in geophysical fluid dynamics is the primitive equations, understood here to be the hydrostatic approximation to the rotating compressible Navier--Stokes equations, which is believed to describe the large-scale dynamics of the atmosphere and the ocean to a very good accuracy. 
An important feature of such large-scale dynamics is that it largely consists of slow motions in which the pressure gradient is nearly balanced by the Coriolis force, a state known as {\em geostrophic balance\/}. Various physical explanations have been given, some supported by numerical simulations, to describe how this comes about, but to our knowledge no rigorous mathematical proof has been proposed. (For a review of the geophysical background, see, e.g., \cite{daley:ada}.) One aim of this article is to prove that, in the limit of strong rotation and stratification, the solution of the primitive equations will approach geostrophic balance as $t\to\infty$, in the sense that the ageostrophic energy will be of the order of the Rossby number. As illustrated by the simple one-dimensional model \eqref{q:1dm}, the basic mechanism for balance here is the viscous damping of rapid oscillations, leaving the slow dynamics mostly unchanged. Separation of timescales, characterised by a small parameter $\varepsilon$, is therefore crucial for our result; this is obtained by considering the limit of strong rotation {\em and\/} stratification, or in other words, small Rossby number with Burger number of order one. We note that there are other physical mechanisms through which a balanced state may be reached. In an unbounded domain, an important example is the radiation of inertia--gravity waves to infinity in what is known as the classical geostrophic adjustment problem (see \cite[\S7.3]{gill:aod} and further developments in \cite{reznik-al:01}). Attempts to extend geostrophic balance to higher orders, and the closely related problem of eliminating rapid oscillations in numerical solutions (e.g., \cite{baer-tribbia:77,machenhauer:77,leith:80,vautard-legras:86}), led naturally to the concept of {\em slow manifold\/} \cite{lorenz:86}, which has since become important in the study of rotating fluids (and more generally of systems with multiple timescales).
We refer the reader to \cite{mackay:04} for a thorough review, but for our purposes here, a slow manifold means a manifold in phase space on which the normal velocity is small; if the normal velocity is zero, we have an exact slow manifold. In the geophysical literature, there have been many papers proposing various formal asymptotic methods to construct slow manifolds (e.g., \cite{warn-menard:86,wbsv:95}). A number of numerical studies closely related to the stability of slow manifolds have also been done (e.g., \cite{ford-mem-norton:00,polvani-al:94}). It was realised early on \cite{lorenz:86,warn:97} that in general no exact slow manifold exists and that any construction is asymptotic in nature. For finite-dimensional systems, this can often be proved using considerations of exponential asymptotics (see, e.g., \cite{kruskal-segur:91}). More recently, it has been shown explicitly \cite{jv-yavneh:04} in an infinite-dimensional rotating fluid model that exponentially weak fast oscillations are generated spontaneously by vortical motion, implying that slow manifolds could at best be exponentially accurate (meaning that the normal velocity on them is exponentially small). Theorem~\ref{t:ho} shows, given the hypotheses, that exponential accuracy can indeed be achieved for the primitive equations, albeit with a weaker dependence on $\varepsilon$. From a more mathematical perspective, our exponentially accurate slow manifold (see Lemma~\ref{t:suba}), which is also presented in \cite{temam-dw:ebal} in a slightly different form, is obtained using a technique adapted from that first proposed in \cite{matthies:01}. It involves truncating the PDE to a finite-dimensional system whose size depends on $\varepsilon$ and applying a classical estimate from perturbation theory to the finite system. By carefully balancing the truncation size and the estimates on the finite system, one obtains a finite-dimensional exponentially accurate slow manifold.
This estimate is local in time and only requires that the (instantaneous) variables and the forcing be in some Sobolev space $H^s$; it (although not the long-time asymptotic result below) can thus be obtained for the inviscid equations as well. If our solution is also Gevrey (which is true for the primitive equations given Gevrey forcing), the ignored high modes are exponentially small, so the ``total error'' (i.e.\ normal velocity on the slow manifold) is also exponentially small. Gevrey regularity of the solution is therefore crucial in obtaining exponential estimates. As with the Navier--Stokes equations \cite{foias-temam:89}, in the absence of boundaries and with Gevrey forcing, one can prove that the strong solution of the primitive equations also has Gevrey regularity \cite{petcu-dw:gev3}. For the present article, we need uniform bounds on the norms, which have been proved recently \cite{petcu:3dpe} following the global regularity results of \cite{cao-titi:07,kobelkov:06,kobelkov:07}. Since our result also assumes strong rotation, however, one could have used an earlier work \cite{babin-al:00} which proved global regularity under a sufficiently strong rotation and then used \cite{petcu-dw:gev3} to obtain Gevrey regularity. While our earlier paper \cite{temam-dw:ebal} is concerned with a finite-time estimate on pointwise accuracy (``predictability''), in this article our aim is to obtain long-time asymptotic estimates (on ``balance''). In this regard, the main problem for both the leading-order (Theorem~\ref{t:o1}) and higher-order (Theorem~\ref{t:ho}) estimates is the same: to bound the energy transfer, through the nonlinear term, from the slow to the fast modes at the same order as the fast modes themselves.
For this, one needs to handle not only {\em exact\/} fast--fast--slow resonances, whose absence has long been known in the geophysical literature (cf.\ e.g., \cite{bartello:95,embid-majda:96,lelong-riley:91,warn:86} for discussions of related models), but also {\em near\/} resonances. A key part of our approach is an estimate involving near resonances in the primitive equations (cf.\ Lemma~\ref{t:nores}). Another method based on algebraic geometry to handle related near resonances can be found in \cite{babin-al:99}. Taken together with \cite{temam-dw:ebal}, the results here may be regarded as an extension of the single-frequency exponential estimates obtained in \cite{matthies:01} to the ocean primitive equations, which have an infinite number of frequencies. Alternately, one may view Theorem~\ref{t:ho} as an extension to exponential order of the leading-order results of \cite{babin-al:00} for a closely related model. Finally, our results here put a strong constraint on the nature of the global attractor \cite{ju:07} in the strong rotation limit: the attractor will have to lie within an exponentially thin neighbourhood of the slow manifold. The rest of this article is arranged as follows. We begin in the next section by describing the ocean primitive equations (henceforth OPE) and recalling the known regularity results. In Section~\ref{s:nm}, we write the OPE in terms of fast--slow variables and in Fourier modes, followed by computing explicitly the operator corresponding to the nonlinear terms and describing its properties. In Section~\ref{s:o1}, we state and prove our leading-order estimate, that the solution of the OPE will be close to geostrophic balance as $t\to\infty$. In the last section, we state and prove our exponential-order estimate.
\section{The Primitive Equations}\label{s:ope} We start by recalling the basic settings of the ocean primitive equations \cite{lions-temam-wang:92a}, and then recast the system in a form suitable for our aim in this article. \subsection{Setup} We consider the primitive equations for the ocean, scaled as in \cite{petcu-temam-dw:pe} \begin{equation}\label{q:uvr}\begin{aligned} &\partial_t\boldsymbol{v} + \frac1\varepsilon \bigl[ \boldsymbol{v}^\perp + \nabla_2 p \bigr] + \boldsymbol{u}\cdot\nabla \boldsymbol{v} = \mu\Delta \boldsymbol{v} + f_{\vb}^{},\\ &\partial_t\rho - \frac1\varepsilon u^3 + \boldsymbol{u}\cdot\nabla \rho = \mu\Delta \rho + f_\rho^{},\\ &\nabla\cdot\boldsymbol{u} = \gb\!\cdot\!\vb + \partial_z u^3 = 0,\\ &\rho = -\partial_zp. \end{aligned}\end{equation} Here $\boldsymbol{u}=(u^1,u^2,u^3)$ and $\boldsymbol{v}=(u^1,u^2,0)$ are the three- and two-dimensional fluid velocity, with $\boldsymbol{v}^\perp:=(-u^2,u^1,0)$. The variable $\rho$ can be interpreted in two ways: One can take it to be the departure from a stably-stratified profile (with the usual Boussinesq approximation), with the full density of the fluid given by \begin{equation} \rho_\textrm{full}(x,y,z,t) = \rho_0 - \varepsilon^{-1} z\rho_1 + \rho(x,y,z,t), \end{equation} for some positive constants $\rho_0$ and $\rho_1$. Alternately, one can think of it to be, e.g., salinity or temperature that contributes linearly to the density. The pressure $p$ is determined by the hydrostatic relation $\partial_zp=-\rho$ and the incompressibility condition $\nabla\cdot\boldsymbol{u}=0$, and is not (directly) a function of $\rho$. We write $\nabla:=(\partial_x,\partial_y,\partial_z)$, $\nabla_2:=(\partial_x,\partial_y,0)$, $\Delta:=\partial_x^2+\partial_y^2+\partial_z^2$ and $\lapl2:=\partial_x^2+\partial_y^2$. The parameter $\varepsilon$ is related to the Rossby and Froude numbers; in this paper we shall be concerned with the limit $\varepsilon\to0$. 
In general the viscosity coefficients for $\boldsymbol{v}$ and $\rho$ are different; we have set them both to $\mu$ for clarity of presentation (the general case does not introduce any more essential difficulty). The variables $(\boldsymbol{v},\rho)$ evidently depend on the parameters $\varepsilon$ and $\mu$ as well as on $(\boldsymbol{x},t)$, but we shall not write this dependence explicitly. We work in three spatial dimensions, $\boldsymbol{x} := (x,y,z) = (x^1,x^2,x^3) \in [0,L_1]\times[0,L_2]\times[-L_3/2,L_3/2]$\penalty0$=: \mathscr{M}$, with periodic boundary conditions assumed; we write $|\mathscr{M}|:=L_1L_2L_3$. Moreover, following the practice in numerical simulations of stratified turbulence (see, e.g., \cite{bartello:95}), we impose the following symmetry on the dependent variables: \begin{equation}\label{q:sym}\begin{aligned} &\boldsymbol{v}(x,y,-z) = \boldsymbol{v}(x,y,z), &\qquad &p(x,y,-z) = p(x,y,z),\\ &u^3(x,y,-z) = -u^3(x,y,z), &\qquad &\rho(x,y,-z) = -\rho(x,y,z); \end{aligned}\end{equation} we say that $\boldsymbol{v}$ and $p$ are {\em even} in $z$, while $u^3$ and $\rho$ are {\em odd} in $z$. For this symmetry to persist, $f_{\vb}^{}$ must be even and $f_\rho^{}$ odd in $z$. Since $u^3$ and $\rho$ are also periodic in $z$, we have $u^3(x,y,-L_3/2)=u^3(x,y,L_3/2)=0$ and $\rho(x,y,-L_3/2)=\rho(x,y,L_3/2)=0$; similarly, $\partial_z u^1=0$, $\partial_z u^2=0$ and $\partial_z p=0$ on $z=0,\pm L_3/2$ if they are sufficiently smooth (as will be assumed below). One may consider the symmetry conditions \eqref{q:sym} as a way to impose the boundary conditions $u^3=0$, $\rho=0$, $\partial_z u^1=0$, $\partial_z u^2=0$ and $\partial_z p=0$ on $z=0$ and $z=L_3/2$ in the {\em effective domain} $[0,L_1]\times[0,L_2]\times[0,L_3/2]$. All variables and the forcing are assumed to have zero mean in $\mathscr{M}$; the symmetry conditions above ensure that this also holds for their products that appear below. 
It can be verified that the symmetry \eqref{q:sym} is preserved by the OPE \eqref{q:uvr}; that is, if it holds at $t=0$, it continues to hold for $t>0$. \subsection{Determining the pressure and vertical velocity} Since $u^3=0$ at $z=0$, we can use (\ref{q:uvr}c) to write \begin{equation}\label{q:u3} u^3(x,y,z) = -\int_0^z \gb\!\cdot\!\vb(x,y,z') \>\mathrm{d}z'. \end{equation} Similarly, the pressure $p$ can be written in terms of the density $\rho$ as follows (cf.\ \cite{samelson-temam-wang2:03}). Let $p(x,y,z)=\langle p(x,y)\rangle+\delta p(x,y,z)$ where $\langle\cdot\rangle$ denotes $z$-average and where \begin{equation}\label{q:ptil} \delta p(x,y,z) = -\int_{z_0}^z \rho(x,y,z') \>\mathrm{d}z' \end{equation} with $z_0(x,y)$ chosen such that $\langle{\delta p}\rangle=0$; this is most conveniently done using Fourier series (see below). Using the fact that \begin{equation} \int_{-L_3/2}^{L_3/2} \gb\!\cdot\!\vb \>\mathrm{d}z = -\int_{-L_3/2}^{L_3/2} \partial_z u^3 \>\mathrm{d}z = u^3(\cdot,-L_3/2) - u^3(\cdot,L_3/2) = 0, \end{equation} and taking 2d divergence of the momentum equation (\ref{q:uvr}a), we find \begin{equation} \frac1\varepsilon \bigl[ \nabla\cdot\langle\boldsymbol{v}^\perp\rangle + \lapl2 \langle p\rangle \bigr] + \nabla\cdot\langle{\boldsymbol{u}\cdot\nabla\boldsymbol{v}}\rangle = \mu\Delta \nabla\cdot\langle\boldsymbol{v}\rangle + \nabla\cdot\langle f_{\vb}^{}\rangle. \end{equation} Here we have used the fact that $z$-integration commutes with horizontal differential operators. We can now solve for the average pressure $\langle p\rangle$, \begin{equation} \langle p\rangle = \ilapl2\bigl[ -\nabla\cdot\langle\boldsymbol{v}^\perp\rangle + \varepsilon \bigl( - \nabla\cdot\langle\boldsymbol{u}\cdot\nabla\boldsymbol{v}\rangle + \mu\Delta\nabla\cdot\langle\boldsymbol{v}\rangle + \nabla\cdot\langle f_{\vb}^{}\rangle \bigr) \bigr] \end{equation} where $\ilapl2$ is uniquely defined to have zero $xy$-average.
With this, the momentum equation now reads \begin{equation}\label{q:vb}\begin{aligned} \partial_t\boldsymbol{v} + \frac1\varepsilon\bigl[ \boldsymbol{v}^\perp - \nabla\ilapl2\nabla\cdot\langle\boldsymbol{v}^\perp\rangle &+ \nabla_2\delta p \bigr] + \boldsymbol{u}\cdot\nabla\boldsymbol{v} - \nabla\ilapl2\nabla\cdot\langle\boldsymbol{u}\cdot\nabla\boldsymbol{v}\rangle\\ &\hskip-10pt= \mu \Delta \bigl(\boldsymbol{v} - \nabla\ilapl2\nabla\cdot\langle\boldsymbol{v}\rangle\bigr) + f_{\vb}^{} - \nabla\ilapl2\nabla\cdot\langle f_{\vb}^{}\rangle. \end{aligned}\end{equation} \subsection{Canonical form and regularity results} Besides the usual $L^p(\mathscr{M})$ and $H^s(\mathscr{M})$, with $p\in[1,\infty]$ and $s\ge0$, we shall also need the Gevrey space $G^\sigma(\mathscr{M})$, defined as follows. For $\sigma\ge0$, we say that $u\in G^\sigma(\mathscr{M})$ if \begin{equation}\label{q:Gevdef} |\mathrm{e}^{\sigma(-\Delta)^{1/2}}u|_{L^2}^{} =: |u|_{G^\sigma}^{} < \infty. \end{equation} Let us denote our state variable $W=(\boldsymbol{v},\rho)^\mathrm{T}$. We write $W\in L^p(\mathscr{M})$ if $\boldsymbol{v}\in L^p(\mathscr{M})^2$, $\rho\in L^p(\mathscr{M})$, $(\boldsymbol{v},\rho)$ has zero average over $\mathscr{M}$ and $(\boldsymbol{v},\rho)$ satisfies the symmetry \eqref{q:sym}, in the distribution sense as appropriate; analogous notations are used for $W\in H^s(\mathscr{M})$ and $W\in G^\sigma(\mathscr{M})$, and for the forcing $f$ (which has to preserve the symmetries of $W$). With $u^3$ given by \eqref{q:u3} and $\delta p$ by \eqref{q:ptil}, we can write the OPE (\ref{q:uvr}b) and \eqref{q:vb} in the compact form \begin{equation}\label{q:dW} \partial_t{W} + \frac1\varepsilon LW + B(W,W) + AW = f.
\end{equation} The operators $L$, $B$ and $A$ are defined by \begin{equation}\begin{aligned} &LW = \bigl(\boldsymbol{v}^\perp-\nabla\ilapl2\nabla\cdot\langle\boldsymbol{v}^\perp\rangle+\nabla_2\delta p,-u^3\bigr)^\textrm{T}\\ &B(W,\hat W) = \bigl(\boldsymbol{u}\cdot\nabla\hat{\boldsymbol{v}} - \nabla\ilapl2\nabla\cdot\langle\boldsymbol{u}\cdot\nabla\hat{\boldsymbol{v}}\rangle, \boldsymbol{u}\cdot\nabla\hat\rho\bigr)^\textrm{T}\\ &AW = -\bigl(\mu\Delta(\boldsymbol{v}-\nabla\ilapl2\nabla\cdot\langle\boldsymbol{v}\rangle),\mu\Delta\rho\bigr)^\textrm{T}, \end{aligned}\end{equation} and the force $f$ is given by \begin{equation} f = (f_{\vb}^{}-\nabla\ilapl2\nabla\cdot\langle f_{\vb}^{}\rangle,f_\rho^{})^\textrm{T}. \end{equation} The following properties are known (see, e.g., \cite{petcu-dw:gev3}). The operator $L$ is antisymmetric: for any $W\in L^2(\mathscr{M})$ \begin{equation}\label{q:Lasym} (LW,W)_{L^2} = 0; \end{equation} $B$ conserves energy: for any $W\in H^1(\mathscr{M})$ and $\hat W\in H^1(\mathscr{M})$, \begin{equation}\label{q:Basym} (W,B(\hat W,W))_{L^2} = 0; \end{equation} and $A$ is coercive: for any $W\in H^2(\mathscr{M})$, \begin{equation}\label{q:Acoer} (AW,W)_{L^2}= \mu\,|\nabla W|_{L^2}^2. \end{equation} We shall need the following regularity results for the OPE (here $K_s$ and $M_\sigma$ are continuous increasing functions of their arguments): \newtheorem*{thmz}{Theorem 0} \begin{thmz}\label{t:reg} Let $W_0\in H^1$ and $f\in L^\infty(\mathbb{R}_+;L^2)$. Then for all $t\ge0$ there exists a solution $W(t)\in H^1$ of \eqref{q:dW} with $W(0)=W_0$ and \begin{equation}\label{q:WH1} |W(t)|_{H^1} \le K_0(|W_0|_{H^1},\|f\|_0^{}) \end{equation} where, here and henceforth, $\|f\|_s^{}:=\esssup_{t\ge0}|f(t)|_{H^s}^{}$ for $s\ge0$. Moreover, there exists a time $T_1(|W_0|_{H^1},\|f\|_0^{})$ such that for $t\ge T_1$, \begin{equation}\label{q:WH1u} |W(t)|_{H^1}^{} \le K_1(\|f\|_0^{}).
\end{equation} Similarly, if $f\in L^\infty(\mathbb{R}_+;H^{s-1})$, there exists a time $T_s(|W_0|_{H^1},\|f\|_{s-1}^{})$ such that \begin{equation}\label{q:WHsu} |W(t)|_{H^s}^{} \le K_s(\|f\|_{s-1}^{}) \end{equation} for $t\ge T_s$. Finally, fixing $\sigma>0$, if also $\nabla f\in L^\infty(\mathbb{R}_+;G^\sigma)$, there exists a time $T_\sigma(|W_0|_{H^1}^{},|\nabla f|_{G^\sigma}^{})$ such that, for $t\ge T_\sigma$ \begin{equation}\label{q:WGsig} |\nabla^2 W(t)|_{G^\sigma}^{} \le M_\sigma^{}(|\nabla f|_{G^\sigma}). \end{equation} \end{thmz} \noindent The proof of \eqref{q:WH1}--\eqref{q:WH1u} can be found in \cite{ju:07}; the higher-order results \eqref{q:WHsu} can be found in \cite{petcu:3dpe}. Both these works followed \cite{cao-titi:07} and \cite{kobelkov:06}. The result \eqref{q:WGsig} follows from \cite{petcu-dw:gev3} and using \eqref{q:WHsu} for $s=2$. Since we are concerned with the limit of small $\varepsilon$, however, one might also be able to obtain \eqref{q:WH1} and \eqref{q:WHsu} following the method used in \cite{babin-al:00} for the Boussinesq (non-hydrostatic) model. One could then proceed to obtain \eqref{q:WGsig} as above. \section{Normal Modes}\label{s:nm} In this section, we decompose the solution $W$ into its slow and fast components, expand them in Fourier modes, and state a lemma that will be used in sections \ref{s:o1} and~\ref{s:ho} below. \subsection{Fast and slow variables} The Ertel potential vorticity \begin{equation} q_E^{} = \sgb\!\cdot\!\vb - \partial_z\rho + \varepsilon \bigl[(\partial_z\boldsymbol{v})\cdot\nabla^\perp\rho - \partial_z\rho\,(\sgb\!\cdot\!\vb)\bigr], \end{equation} where $\nabla^\perp:=(-\partial_y,\partial_x,0)$, plays a central role in geophysical fluid dynamics since it is a material invariant in the absence of forcing and viscosity. 
In this paper, however, it is easier to work with the {\em linearised\/} potential vorticity (henceforth simply called {\em potential vorticity\/}) \begin{equation} q := \sgb\!\cdot\!\vb - \partial_z\rho. \end{equation} From \eqref{q:uvr}, its evolution equation is \begin{equation} \partial_t q + \nabla^\perp\cdot(\boldsymbol{u}\cdot\nabla\boldsymbol{v}) - \partial_z(\boldsymbol{u}\cdot\nabla\rho) = \mu\Delta q + f_q \end{equation} where $f_q:=\nabla^\perp\!\cdot\!f_{\vb}^{}-\partial_zf_\rho^{}$. Let $\psi^0:=\Delta^{-1} q$, uniquely defined by requiring that $\psi^0$ has zero integral over $\mathscr{M}$, and let \begin{equation}\label{q:W0def} W^0 := \left(\begin{matrix} \boldsymbol{v}^0\\ \rho^0 \end{matrix}\right) := \left( \begin{matrix}\nabla^\perp\psi^0\\ -\partial_z\psi^0 \end{matrix}\right). \end{equation} We note a mild abuse of notation on $\boldsymbol{v}^0$ and $\nabla^\perp$: $W^0=(-\partial_y\psi^0,\partial_x\psi^0,-\partial_z\psi^0)^\mathrm{T}$. A little computation shows that $W^0$ lies in the kernel of the antisymmetric operator $L$, that is, $LW^0=0$. Conversely, if $LW=0$, then $W=(\nabla^\perp\Psi,-\partial_z\Psi)^\mathrm{T}$ for some $\Psi$: since $u^3=0$, we have $\gb\!\cdot\!\vb=0$, so $\boldsymbol{v}=\nabla^\perp\Psi+{\boldsymbol{V}}$ for some $\Psi(x,y,z)$ and ${\boldsymbol{V}}(z)$. Now \begin{equation}\label{q:kerL1}\begin{aligned} 0 &= \boldsymbol{v}^\perp - \nabla_2\ilapl2\nabla\!\cdot\!\langle\boldsymbol{v}^\perp\rangle + \nabla_2\delta p\\ &= -\nabla_2\Psi + {\boldsymbol{V}}^\perp + \nabla_2\ilapl2\lapl2\langle\Psi\rangle + \nabla_2\delta p. \end{aligned}\end{equation} Since all other terms are horizontal gradients and ${\boldsymbol{V}}$ does not depend on $(x,y)$, we must have ${\boldsymbol{V}}=0$. Writing $\Psi(x,y,z)=\tilde\Psi(x,y,z)+\langle\Psi(x,y)\rangle$ where $\tilde\Psi(x,y,z)$ has zero $z$-average, the terms that do not depend on $z$ cancel and we are left with \begin{equation}\label{q:kerL2} -\nabla_2\tilde\Psi + \nabla_2\delta p = 0.
\end{equation} So $\delta p(x,y,z) = \tilde\Psi(x,y,z) + \Phi(z)$; but since $\langle\delta p\rangle=0$, $\Phi=0$ and thus $\rho=-\partial_z\Psi$ by \eqref{q:ptil}. Therefore the null space of $L$ is completely characterised by \eqref{q:W0def}, \begin{equation}\label{q:kerL} \mathrm{ker}\,L = \{W^0:W^0=(\nabla^\perp\psi^0,-\partial_z\psi^0)^\mathrm{T}\}. \end{equation} With $\psi^0=\ilapl{}(\nabla^\perp\!\cdot\!\boldsymbol{v}-\partial_z\rho)$ as above, this also defines a projection $W\mapsto W^0$. We call $W^0$ our {\em slow variable}. Letting $B^0$ be the projection of $B$ to $\mathrm{ker}\,L$, \begin{equation} B^0(W,\hat W) := \left( \begin{matrix}\nabla^\perp\Delta^{-1}\bigl[ \nabla^\perp\cdot(\boldsymbol{u}\cdot\nabla\hat\boldsymbol{v}) - \partial_z(\boldsymbol{u}\cdot\nabla\hat\rho)\bigr]\\ -\partial_z\Delta^{-1}\vphantom{\Big|}\bigl[ \nabla^\perp\cdot(\boldsymbol{u}\cdot\nabla\hat\boldsymbol{v}) - \partial_z(\boldsymbol{u}\cdot\nabla\hat\rho)\bigr] \end{matrix}\right), \end{equation} we find that $W^0$ satisfies \begin{equation}\label{q:dtW0} \partial_t{W^0} + B^0(W,W) + AW^0 = f^0 \end{equation} where $f^0=(\nabla^\perp\Delta^{-1}f_q^{}, -\partial_z\Delta^{-1}f_q^{})^\mathrm{T}$ is the slow forcing. Now let \begin{equation} W^\varepsilon = \left(\begin{matrix} \boldsymbol{v}^\varepsilon\\ \rho^\varepsilon \end{matrix}\right) := W - W^0 = \left(\begin{matrix} \boldsymbol{v}-\boldsymbol{v}^0\\ \rho-\rho^0\end{matrix}\right). \end{equation} It will be seen below in Fourier representation that $W^\varepsilon$ is a linear combination of eigenfunctions of $L$ with imaginary eigenvalues whose moduli are bounded from below; we thus call $W^\varepsilon$ our {\em fast variable}. Since $\gb\!\cdot\!\vb^0=0$, the vertical velocity $u^3$ is a purely fast variable. 
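The absence of a singular term in the evolution equation for $q$ can be seen directly from \eqref{q:uvr}: applying $\nabla^\perp\cdot$ to the momentum equation and $-\partial_z$ to the density equation, the $\mathcal{O}(1/\varepsilon)$ contributions combine as
\begin{equation*}
\frac1\varepsilon\bigl[\nabla^\perp\cdot(\boldsymbol{v}^\perp+\nabla_2 p) + \partial_z u^3\bigr] = \frac1\varepsilon\bigl[\nabla_2\cdot\boldsymbol{v} + \partial_z u^3\bigr] = \frac1\varepsilon\,\nabla\cdot\boldsymbol{u} = 0,
\end{equation*}
since $\nabla^\perp\cdot\boldsymbol{v}^\perp=\nabla_2\cdot\boldsymbol{v}$ and $\nabla^\perp\cdot\nabla_2 p=0$; this is why $q$, and hence $W^0$, is a slow variable.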
In analogy with \eqref{q:dtW0}, we have \begin{equation}\label{q:dtWeps} \partial_t{W^\varepsilon} + \frac1\varepsilon LW^\varepsilon + B^\varepsilon(W,W) + AW^\varepsilon = f^\varepsilon \end{equation} where $B^\varepsilon(W,\hat W):=B(W,\hat W)-B^0(W,\hat W)$ and $f^\varepsilon:=f-f^0$. The fast variable has no potential vorticity, as can be seen by computing $\nabla^\perp\cdot\boldsymbol{v}^\varepsilon-\partial_z\rho^\varepsilon=q-\nabla^\perp\!\cdot\!\nabla^\perp\psi^0-\partial_{zz}\psi^0=0$. Since the slow variable is completely determined by the potential vorticity, this implies that the fast and slow variables are orthogonal in $L^2(\mathscr{M})$, \begin{equation}\label{q:Worth}\begin{aligned} (W^0,W^\varepsilon)_{L^2} &= (\boldsymbol{v}^0,\boldsymbol{v}^\varepsilon)_{L^2} + (\rho^0,\rho^\varepsilon)_{L^2}\\ &\hskip-3pt= (\nabla^\perp\psi^0,\boldsymbol{v}^\varepsilon)_{L^2} - (\partial_z\psi^0,\rho^\varepsilon)_{L^2} = (\psi_0,-\nabla^\perp\cdot\boldsymbol{v}^\varepsilon+\partial_z\rho^\varepsilon)_{L^2} = 0. \end{aligned}\end{equation} Of central interest in this paper is the ``fast energy'' \begin{equation} \sfrac12 |W^\varepsilon|_{L^2}^2 = \sfrac12\bigl(|\boldsymbol{v}^\varepsilon|_{L^2}^2+|\rho^\varepsilon|_{L^2}^2\bigr). \end{equation} Its time derivative can be computed as follows. Using \eqref{q:Worth}, we have after integrating by parts \begin{equation} (W^\varepsilon,\partial_tW)_{L^2} = (W^\varepsilon,\partial_tW^0)_{L^2} + (W^\varepsilon,\partial_tW^\varepsilon)_{L^2} = \frac12\ddt{\:}|W^\varepsilon|_{L^2}^2. \end{equation} Now \eqref{q:Basym} implies that \begin{equation} (W^\varepsilon,B(W,W))_{L^2} = (W^\varepsilon,B(W,W^0+W^\varepsilon))_{L^2} = (W^\varepsilon,B(W,W^0))_{L^2}. \end{equation} Putting these together with \eqref{q:Lasym} and \eqref{q:Acoer}, we find \begin{equation}\label{q:ddtweps} \frac12\ddt{}|W^\varepsilon|_{L^2}^2 + \mu|\nabla W^\varepsilon|_{L^2}^2 = -(W^\varepsilon,B(W,W^0))_{L^2} + (W^\varepsilon,f^\varepsilon)_{L^2}. 
\end{equation} \subsection{Fourier expansion} Thanks to the regularity results in Theorem~0, our solution $W(t)$ is smooth and we can thus expand it in Fourier series, \begin{equation} \boldsymbol{v}(\boldsymbol{x},t) = {\textstyle\sum}_{\boldsymbol{k}}^{}\, \boldsymbol{v}_{\boldsymbol{k}}(t)\, \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}} \qquad\textrm{and}\qquad \rho(\boldsymbol{x},t) = {\textstyle\sum}_{\boldsymbol{k}}^{}\, \rho_{\boldsymbol{k}}(t)\, \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}. \end{equation} Here ${\boldsymbol{k}}=(k_1,k_2,k_3)\in\Zahl_L$ where $\Zahl_L=\mathbb{R}^3/\mathscr{M}=\{(2\pi l_1/L_1,2\pi l_2/L_2, 2\pi l_3/L_3):(l_1,l_2,l_3)\in\mathbb{Z}^3\}$; any wavevector ${\boldsymbol{k}}$ is henceforth understood to live in $\Zahl_L$. We also denote ${\boldsymbol{k}}':=(k_1,k_2,0)$ and write ${\boldsymbol{k}}'\wedge{\boldsymbol{j}}':=k_1j_2-k_2j_1$. Since our variables have zero average over $\mathscr{M}$, $\boldsymbol{v}_{\boldsymbol{k}}=0$ when ${\boldsymbol{k}}=0$; moreover, since $\rho$ is odd in $z$, $\rho_{\boldsymbol{k}}=0$ whenever $k_3=0$. Thus $W_{\boldsymbol{k}}:=(\boldsymbol{v}_{\boldsymbol{k}},\rho_{\boldsymbol{k}})=0$ when ${\boldsymbol{k}}=0$, which allows us to write the $H^s$ norm simply as \begin{equation}\label{q:Hsnorm} |W|_{H^s}^2 = {\textstyle\sum}_{\boldsymbol{k}}^{}\,|{\boldsymbol{k}}|^{2s}|W_{\boldsymbol{k}}|^2 \end{equation} and (see \eqref{q:Gevdef} for the definition of $G^\sigma$) \begin{equation}\label{q:Gsignorm} |W|_{G^\sigma}^2 = {\textstyle\sum}_{\boldsymbol{k}}^{}\,\mathrm{e}^{2\sigma|{\boldsymbol{k}}|}|W_{\boldsymbol{k}}|^2\,. \end{equation} The antisymmetric operator $L$ is diagonal in Fourier space, meaning that $L_{{\boldsymbol{k}}{\boldsymbol{l}}}=0$ when ${\boldsymbol{k}}\ne{\boldsymbol{l}}$; we shall thus write $L_{{\boldsymbol{k}}}:=L_{{\boldsymbol{k}}{\boldsymbol{k}}}$. 
When $k_3\ne0$, we have \begin{equation} L_{\boldsymbol{k}} = \left( \begin{matrix} 0 &-1 &-k_1/k_3\\ 1 &0 &-k_2/k_3\\ k_1/k_3 &k_2/k_3 &0 \end{matrix} \right). \end{equation} For ${\boldsymbol{k}}'\ne0$, its eigenvalues are $\omega^0_{\boldsymbol{k}}=0$ and $\mathrm{i}\omega^\pm_{\boldsymbol{k}}=\pm\mathrm{i}|{\boldsymbol{k}}|/k_3$, where $|{\boldsymbol{k}}|:=\bigl(k_1^2+k_2^2+k_3^2)^{1/2}$, with eigenvectors \begin{equation}\label{q:evectg} X^0_{\boldsymbol{k}} = \frac1{|{\boldsymbol{k}}|}\left(\begin{matrix} \phantom{-}k_2\\ -k_1\\ \phantom{-}k_3 \end{matrix}\right) \qquad\textrm{and}\qquad X^\pm_{\boldsymbol{k}} = \frac1{\sqrt2|{\boldsymbol{k}}'|\,|{\boldsymbol{k}}|}\left(\begin{matrix} -k_2 k_3\pm\mathrm{i} k_1|{\boldsymbol{k}}|\\ \phantom{-}k_1 k_3 \pm\mathrm{i} k_2|{\boldsymbol{k}}|\\ |{\boldsymbol{k}}'|^2 \end{matrix}\right). \end{equation} When ${\boldsymbol{k}}'=0$, we have $\omega_{\boldsymbol{k}}^0=0$ and $\mathrm{i}\omega_{\boldsymbol{k}}^\pm=\pm\mathrm{i}$ as eigenvalues with eigenvectors \begin{equation}\label{q:evect0} X^0_{\boldsymbol{k}} = \Biggl(\begin{matrix} \>0\>\\ 0\\ \sgn k_3 \end{matrix}\Biggr) \qquad\textrm{and}\qquad X^\pm_{\boldsymbol{k}} = \frac{1}{\sqrt2}\Biggl(\begin{matrix} \>1\>\\ \mp\mathrm{i}\\ 0\end{matrix}\Biggr). \end{equation} For ${\boldsymbol{k}}$ fixed, these eigenvectors are orthonormal under the inner product $\cdot\;$ in $\mathbb{C}^3$. When $k_3=0$, the fact that $\rho_{\boldsymbol{k}}=0$ and ${\boldsymbol{k}}\cdot\boldsymbol{v}_{\boldsymbol{k}}=0$ implies that the space is one-dimensional for each ${\boldsymbol{k}}$ (in fact, it is known that the vertically-averaged dynamics is that of the rotating 2d Navier--Stokes equations). 
Since projecting to the $k_3=0$ subspace is equivalent to taking vertical average, we compute \begin{equation} \langle LW\rangle = (\langle\boldsymbol{v}^\perp\rangle-\nabla_2\ilapl2\nabla\!\cdot\!\langle\boldsymbol{v}^\perp\rangle,0)^\mathrm{T} \end{equation} where we have used $\langle u^3\rangle=0$ (since $u^3$ is odd) and $\langle\delta p\rangle=0$ (by definition). Reasoning as in \eqref{q:kerL1}--\eqref{q:kerL2} above, we find that $\langle LW\rangle=0$, that is, the vertically-averaged ($k_3=0$) component is completely slow. In this case we can thus write \begin{equation}\label{q:evect3} \omega_{\boldsymbol{k}}^0 = 0 \qquad\textrm{and}\qquad X^0_{\boldsymbol{k}} = \frac1{|{\boldsymbol{k}}'|} \left(\begin{matrix} \phantom{-}k_2\\ -k_1\\ \phantom{-}0 \end{matrix}\right), \end{equation} which can be included in the generic case ${\boldsymbol{k}}'\ne0$ in computations. Since the $k_3=0$ component is completely slow, we have $\langle W^\varepsilon\rangle=0$ and there is no need to fix $X^\pm_{\boldsymbol{k}}$. We note that, since $k_3\ne0$ for the fast modes, $|\omega_{\boldsymbol{k}}^\pm|\ge1$, viz., \begin{equation}\label{q:infw} \inf\, |\omega_{\boldsymbol{k}}^\pm|^2 = \inf_{k_3\ne0}\, \biggl\{\frac{k_1^2+k_2^2+k_3^2}{k_3^2},\; 1\biggr\} = 1. \end{equation} In what follows, it is convenient to use $\{X^0_{\boldsymbol{k}},X^\pm_{\boldsymbol{k}}\}$ as basis. We can now write \begin{equation}\label{q:W0Weps}\begin{aligned} &W^0(\boldsymbol{x},t) := {\textstyle\sum}_{\boldsymbol{k}}\; w^0_{\boldsymbol{k}}(t) X^0_{\boldsymbol{k}} \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}\\ &W^\varepsilon(\boldsymbol{x},t) := {\textstyle\sum}_{\boldsymbol{k}}^s\; w^s_{\boldsymbol{k}}(t) X^s_{\boldsymbol{k}} \mathrm{e}^{-\mathrm{i}\omega^s_{\boldsymbol{k}} t/\varepsilon}\ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}, \end{aligned}\end{equation} where $s\in\{-1,+1\}$, which we write as $\{-,+\}$ when it appears as a label.
The Fourier coefficients $w^0_{\boldsymbol{k}}$ and $w^\pm_{\boldsymbol{k}}$ are complex numbers that depend on $t$ only, with $w^0_0=0$ and $w^\pm_{(k_1,k_2,0)}=0$. With $\alpha\in\{-1,0,+1\}$, they can be computed using \begin{equation} w^\alpha_{\boldsymbol{k}}(t) = \frac1{|\mathscr{M}|} \int_\mathscr{M} W(\boldsymbol{x},t)\cdot X^\alpha_{\boldsymbol{k}} \,\mathrm{e}^{\mathrm{i}\omega^\alpha_{\boldsymbol{k}} t/\varepsilon-\mathrm{i}{\boldsymbol{k}}\cdot\boldsymbol{x}} \>\mathrm{d}\boldsymbol{x}. \end{equation} The following relations hold: \begin{equation} |W^0|_{L^2}^2 = {\textstyle\sum}_{\boldsymbol{k}}\; |w^0_{\boldsymbol{k}}|^2 \qquad\textrm{and}\qquad |W^\varepsilon|_{L^2}^2 = {\textstyle\sum}_{\boldsymbol{k}}^s\; |w^s_{\boldsymbol{k}}|^2. \end{equation} In addition, the fact that $(\boldsymbol{v}^0,\rho^0)$ is real implies \begin{equation}\label{q:w0} w^0_{-{\boldsymbol{k}}} = -\overline{w^0_{\boldsymbol{k}}} \qquad\textrm{and}\qquad w^0_{(k_1,k_2,-k_3)} = w^0_{(k_1,k_2,k_3)} \end{equation} where overbars denote complex conjugation. Similarly, since $(\boldsymbol{v}^\varepsilon,\rho^\varepsilon)$ is real, \begin{equation}\label{q:weps1} w^\pm_{-{\boldsymbol{k}}} = \overline{w^\pm_{\boldsymbol{k}}} \qquad\textrm{and}\qquad w^\pm_{(k_1,k_2,-k_3)} = -w^\pm_{(k_1,k_2,k_3)} \end{equation} when ${\boldsymbol{k}}'\ne0$ and, when ${\boldsymbol{k}}'=0$, \begin{equation}\label{q:weps2} w^\pm_{(0,0,-k_3)} = \overline{w^\mp_{(0,0,k_3)}}. \end{equation} We shall see below that, the linear oscillations having been factored out, the variable $w^s_{\boldsymbol{k}}$ is slow at leading order. 
Similarly to $W$, we write the forcing $f$ as \begin{equation}\begin{aligned} &f^0(\boldsymbol{x},t) := {\textstyle\sum}_{\boldsymbol{k}}\; f^0_{\boldsymbol{k}}(t) X^0_{\boldsymbol{k}} \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}\\ &f^\varepsilon(\boldsymbol{x},t) := {\textstyle\sum}_{\boldsymbol{k}}^s\; f^s_{\boldsymbol{k}}(t) X^s_{\boldsymbol{k}} \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}, \end{aligned}\end{equation} where, unlike in \eqref{q:W0Weps}, there is no factor of $\mathrm{e}^{-\mathrm{i}\omega_{\boldsymbol{k}}^st/\varepsilon}$ in the definition of $f^\varepsilon$. As noted above, $f$ must satisfy the same symmetries as $W$, so the above properties of $w_{\boldsymbol{k}}^\alpha$ also hold for $f_{\boldsymbol{k}}^\alpha$; we note in particular that $f_{\boldsymbol{k}}^\pm=0$ when $k_3=0$. For later convenience, we define the operator $\partial_t^*$ by \begin{equation} \partial_t^* W := \mathrm{e}^{-tL/\varepsilon}\partial_t\,\mathrm{e}^{tL/\varepsilon}W. \end{equation} From \eqref{q:dW}, we find \begin{equation}\label{q:dtSWeps} \dy_t^* W + B(W,W) + AW = f, \end{equation} which is $\partial_t W$ with the large antisymmetric term removed. Now the nonlinear term on the rhs of \eqref{q:ddtweps} can be written as \begin{equation}\begin{aligned} (W^\varepsilon,B(W^0+W^\varepsilon,W^0))_{L^2} &= (W^\varepsilon,B(W^0,W^0))_{L^2} + (W^\varepsilon,B(W^\varepsilon,W^0))_{L^2}\\ &= (W^\varepsilon,B(W^0,W^0))_{L^2} - (W^0,B(W^\varepsilon,W^\varepsilon))_{L^2}, \end{aligned}\end{equation} where the identity $(W^0,B(W^\varepsilon,W^\varepsilon))_{L^2}=-(W^\varepsilon,B(W^\varepsilon,W^0))_{L^2}$ is obtained from \eqref{q:Basym}.
First, we compute \vskip-15pt \begin{equation}\begin{aligned} (W^\varepsilon,B(W^0,W^0))_{L^2} &= |\mathscr{M}| \sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^s\, w^0_{\boldsymbol{j}} w^0_{\boldsymbol{k}} \overline{w^s_{\boldsymbol{l}}}\, \mathrm{i} (X^0_{\boldsymbol{j}}\cdot{\boldsymbol{k}}')(X^0_{\boldsymbol{k}}\cdot X^s_{\boldsymbol{l}})\, \delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,\mathrm{e}^{\mathrm{i}\omega^s_{\boldsymbol{l}} t/\varepsilon}\\ &= \sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^s\, w^0_{\boldsymbol{j}} w^0_{\boldsymbol{k}} \overline{w^s_{\boldsymbol{l}}}\, B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{00s} \mathrm{e}^{\mathrm{i}\omega^s_{\boldsymbol{l}} t/\varepsilon} \end{aligned}\end{equation} where $\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}=1$ when ${\boldsymbol{j}}+{\boldsymbol{k}}={\boldsymbol{l}}$ and $0$ otherwise, and where \begin{equation}\label{q:B00s} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{00s} := \mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}} (X^0_{\boldsymbol{j}}\cdot{\boldsymbol{k}}')(X^0_{\boldsymbol{k}}\cdot X^s_{\boldsymbol{l}}). \end{equation} It is easy to verify from \eqref{q:B00s} that $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{00s}=0$ when $|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\,l_3=0$, so we consider the other cases. For the first factor, we have \begin{equation}\label{q:B00sa} X^0_{\boldsymbol{j}}\cdot{\boldsymbol{k}}' = \frac{{\boldsymbol{k}}'\wedge{\boldsymbol{j}}'}{|{\boldsymbol{j}}|}.
\end{equation} For the second factor, we have \begin{equation}\label{q:B00sb}\begin{aligned} &X^0_{\boldsymbol{k}}\cdot X^s_{\boldsymbol{l}} = \frac{k_2-\mathrm{i} s k_1}{\sqrt2\,|{\boldsymbol{k}}|} &&\textrm{when } {\boldsymbol{l}}'=0, \textrm{ and}\\ &X^0_{\boldsymbol{k}}\cdot X^s_{\boldsymbol{l}} = \frac{k_3|{\boldsymbol{l}}'|^2-({\boldsymbol{k}}'\cdot{\boldsymbol{l}}')l_3-\mathrm{i} s({\boldsymbol{l}}'\wedge{\boldsymbol{k}}')|{\boldsymbol{l}}|}{\sqrt2\,|{\boldsymbol{k}}|\,|{\boldsymbol{l}}|\,|{\boldsymbol{l}}'|} &&\textrm{when }{\boldsymbol{l}}'\ne0. \end{aligned}\end{equation} From these, we have the bound \begin{equation}\label{q:bdB00s} |B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{00s}| \le \frac{3\,|\mathscr{M}|}{\sqrt2}\frac{|{\boldsymbol{k}}'|\,|{\boldsymbol{j}}'|}{|{\boldsymbol{j}}|}. \end{equation} Next, we consider \begin{equation}\label{q:WBee0}\begin{aligned} (W^0,B(&W^\varepsilon,W^\varepsilon))_{L^2}\\ &= |\mathscr{M}| \sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} w_{\boldsymbol{j}}^r w_{\boldsymbol{k}}^s \overline{w_{\boldsymbol{l}}^0} \,\mathrm{i}\,({\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}})(X^s_{\boldsymbol{k}}\cdot X^0_{\boldsymbol{l}}) \,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,\mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon}\\ &= \sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs}\, w_{\boldsymbol{j}}^r w_{\boldsymbol{k}}^s \overline{w_{\boldsymbol{l}}^0} \, B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} \,\mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon} \end{aligned}\end{equation} where \begin{equation}\label{q:Bee0} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} := \mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,({\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}})(X^s_{\boldsymbol{k}}\cdot X^0_{\boldsymbol{l}}) \end{equation} and where 
the operator ${\sf V}$, which produces an incompressible velocity vector out of $X^r_{\boldsymbol{j}}$, is defined by \begin{equation}\begin{aligned} &{\sf V}X_{\boldsymbol{j}}^r = X_{\boldsymbol{j}}^r \hbox to120pt{} &&\textrm{when } j_3|{\boldsymbol{j}}'|=0, \textrm{ and}\\ &{\sf V}X_{\boldsymbol{j}}^r = \frac1{\sqrt2\,|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|}\left(\begin{matrix} -j_2 j_3 +\mathrm{i} r j_1|{\boldsymbol{j}}|\\ \phantom{-}j_1 j_3 +\mathrm{i} r j_2|{\boldsymbol{j}}|\\ -\mathrm{i} r |{\boldsymbol{j}}'|^2|{\boldsymbol{j}}|/j_3 \end{matrix}\right) &&\textrm{when } j_3|{\boldsymbol{j}}'|\ne 0. \end{aligned}\end{equation} Thus, we have ${\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}} = 0$ when $j_3=0$, \begin{equation} {\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}} = \bigl(k_1-\mathrm{i} r k_2\bigr)/\sqrt2 \end{equation} when ${\boldsymbol{j}}'=0$, and \begin{equation} {\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}} = \frac{j_3({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') + \mathrm{i} r|{\boldsymbol{j}}|({\boldsymbol{j}}'\cdot{\boldsymbol{k}}') - \mathrm{i} r|{\boldsymbol{j}}'|^2|{\boldsymbol{j}}|\,k_3/j_3}{\sqrt2\,|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|} \end{equation} in the generic case $j_3|{\boldsymbol{j}}'|\ne0$. In all cases, we have the bound \begin{equation}\label{q:bdvwr0} |{\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}}| \le \bigl(\sqrt2\,|{\boldsymbol{k}}'|+|{\boldsymbol{j}}'|\,|k_3|/|j_3|\bigr).
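A small numerical check (an illustration added here, not from the paper; it assumes NumPy and an arbitrary integer wavevector in the generic case $j_3|{\boldsymbol{j}}'|\ne0$) confirms that ${\sf V}$ indeed returns an incompressible vector, i.e.\ one orthogonal to ${\boldsymbol{j}}$:

```python
import numpy as np

# Arbitrary wavevector with j3 != 0 and j' != 0.
j1, j2, j3 = 2.0, 5.0, -3.0
nj = np.sqrt(j1**2 + j2**2 + j3**2)   # |j|
njp = np.sqrt(j1**2 + j2**2)          # |j'|
j = np.array([j1, j2, j3])

for r in (+1, -1):
    # V X^r_j in the generic case, as displayed above.
    VX = np.array([-j2*j3 + 1j*r*j1*nj,
                   j1*j3 + 1j*r*j2*nj,
                   -1j*r*njp**2*nj/j3]) / (np.sqrt(2)*nj*njp)
    assert abs(j @ VX) < 1e-12        # incompressibility: j . (V X^r_j) = 0
```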
\end{equation} Next, $X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0=0$ when $k_3=0$ or ${\boldsymbol{k}}'={\boldsymbol{l}}'=0$, and \begin{equation}\begin{aligned} &X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0 = \frac{l_2+\mathrm{i} s l_1}{\sqrt2\,|{\boldsymbol{l}}|} &&\textrm{when } {\boldsymbol{l}}'\ne0 \textrm{ and } {\boldsymbol{k}}'=0,\\ &X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0 = \sgn l_3\frac{|{\boldsymbol{k}}'|}{\sqrt2\,|{\boldsymbol{k}}|} &&\textrm{when } {\boldsymbol{l}}'=0 \textrm{ and } {\boldsymbol{k}}'\ne0,\\ &X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0 = \frac{-({\boldsymbol{k}}'\cdot{\boldsymbol{l}}')k_3+\mathrm{i} s({\boldsymbol{k}}'\wedge{\boldsymbol{l}}')|{\boldsymbol{k}}|+|{\boldsymbol{k}}'|^2l_3}{\sqrt2\,|{\boldsymbol{k}}|\,|{\boldsymbol{k}}'|\,|{\boldsymbol{l}}|} &&\textrm{when } |{\boldsymbol{k}}'|\,|{\boldsymbol{l}}'|\,k_3\ne0. \end{aligned}\end{equation} These give us the bound \begin{equation} |X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0| \le \sqrt{5/2} \end{equation} in all cases and, together with \eqref{q:bdvwr0}, when $j_3\ne0$, \begin{equation}\label{q:bdBrs0} |B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}| \le \sqrt5\,|\mathscr{M}|\,\bigl(|{\boldsymbol{k}}'| + |{\boldsymbol{j}}'|\,|k_3|/|j_3|\bigr). \end{equation} When $j_3k_3=0$ or ${\boldsymbol{l}}=0$, we have $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=0$. \subsection{Fast--Fast--Slow Resonances} We first write \eqref{q:WBee0} as \begin{equation} (W^0,B(W^\varepsilon,W^\varepsilon))_{L^2} = \frac12\sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs}\, w_{\boldsymbol{j}}^r w_{\boldsymbol{k}}^s \overline{w_{\boldsymbol{l}}^0} \,\bigl(B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr) \,\mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon}. 
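The uniform bound on $X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0$ can also be probed numerically; the script below (an illustration added here, not from the paper; it assumes NumPy, and the sampling range is arbitrary) evaluates the generic-case formula on random integer wavevectors and checks it never exceeds $\sqrt{5/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
bound = np.sqrt(5/2)
for _ in range(1000):
    k1, k2, k3, l1, l2, l3 = rng.integers(-9, 10, size=6)
    kp2, lp2 = k1**2 + k2**2, l1**2 + l2**2
    if kp2 == 0 or lp2 == 0 or k3 == 0:
        continue                      # generic case |k'| |l'| k3 != 0 only
    nk = np.sqrt(kp2 + k3**2)         # |k|
    nl = np.sqrt(lp2 + l3**2)         # |l|
    for s in (+1, -1):
        # X_k^s . X_l^0 = (-(k'.l')k3 + i s (k'^l')|k| + |k'|^2 l3)/(sqrt2 |k||k'||l|)
        num = -(k1*l1 + k2*l2)*k3 + 1j*s*(k1*l2 - k2*l1)*nk + kp2*l3
        val = num / (np.sqrt(2)*nk*np.sqrt(kp2)*nl)
        assert abs(val) <= bound + 1e-9
```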
\end{equation} It has long been known in the geophysical community that many rotating fluid models ``have no fast--fast--slow resonances'' (see, e.g., \cite{warn:86} for the shallow-water equations and \cite{bartello:95} for the Boussinesq equations). In our notation, the absence of {\em exact\/} fast--fast--slow resonances means that $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$ whenever $\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s=0$; the significance of this will be apparent below [see the development following \eqref{q:Bst}]. For our purpose, however, we also need to consider {\em near\/} resonances, i.e.\ those cases when $|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|$ is small but nonzero. The following ``no-resonance'' lemma contains the estimate we need: \begin{lemma}\label{t:nores} For any ${\boldsymbol{j}}$, ${\boldsymbol{k}}$, ${\boldsymbol{l}}\in\Zahl_L$ with ${\boldsymbol{l}}\ne0$, \begin{equation}\label{q:nores} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \cnst{\textrm{nr}}\,|\mathscr{M}|\, \Bigl(\frac{|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}{|{\boldsymbol{l}}|} + |j_3| + |k_3|\Bigr)\, |\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s| \end{equation} where $\cnst{\textrm{nr}}$ is an absolute constant. \end{lemma} \noindent We note that $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$ when ${\boldsymbol{l}}=0$ by \eqref{q:WBee0}, so this case is trivial. We defer the proof to Appendix~\ref{s:nores}. \section{Leading-Order Estimates}\label{s:o1} In this section, we discuss the leading-order case of our general problem. This is done separately due to its geophysical interest and since it requires qualitatively weaker hypotheses. 
As before, $W(t)=W^0(t)+W^\varepsilon(t)$ is the solution of the OPE \eqref{q:dW} with initial conditions $W(0)=W_0$, and $K_{\rm g}(\cdot)$ is a continuous and increasing function of its argument. \begin{theorem}\label{t:o1} Suppose that the initial data $W_0\in H^1(\mathscr{M})$ and that the forcing $f\in L^\infty(\mathbb{R}_+;H^2)\cap W^{1,\infty}(\mathbb{R}_+;L^2)$, with \begin{equation} \|f\|_{\rm g}^{} := \esssup_{t>0}\,\bigl(|f(t)|_{H^2}^{} + |\partial_tf(t)|_{L^2}^{}\bigr). \end{equation} Then there exist $T_{\rm g}=T_{\rm g}(|W_0|_{H^1}^{},\|f\|_{\rm g}^{},\varepsilon)$ and $K_{\rm g}=K_{\rm g}(\|f\|_{\rm g}^{})$ such that, for $t\ge T_{\rm g}$, \begin{equation} |W^\varepsilon(t)|_{L^2}^{} \le \sqrt\varepsilon\, K_{\rm g}(\|f\|_{\rm g}^{}). \end{equation} \end{theorem} In geophysical parlance, our result states that, for given initial data and forcing, the solution of the OPE will become geostrophically balanced (in the sense that the ageostrophic component $W^\varepsilon$ is of order $\sqrt\varepsilon$) after some time. We note that the forcing may be time-dependent (although $\|f\|_{\rm g}^{}$ cannot depend on $\varepsilon$) and need not be geostrophic; this will not be the case when we consider higher-order balance later. Also, in contrast to the higher-order result in the next section, no restriction on $\varepsilon$ is necessary in this case. The linear mechanism of this ``geostrophic decay'' may be appreciated by modelling \eqref{q:dtWeps}, without the non\-linear term, by the following ODE \begin{equation}\label{q:1dm} \ddt{x} + \frac{\mathrm{i}}{\varepsilon}\, x + \mu x = f \end{equation} where $\mu>0$ is a constant and $f=f(t)$ is given independently of $\varepsilon$. The skew-Hermitian term $\mathrm{i} x/\varepsilon$ causes oscillations of $x$ whose frequency grows as $\varepsilon\to0$.
In this limit, the forcing becomes less effective since $f$ varies slowly by hypothesis while the damping remains unchanged, so $x$ will eventually decay to the order of the ``net forcing'' $\sqrt\varepsilon f$. More concretely, let $z(t)=\mathrm{e}^{\mathrm{i} t/\varepsilon}x(t)$ and write \eqref{q:1dm} as \begin{equation}\label{q:dzdt} \ddt{\;}\bigl(\mathrm{e}^{\mu t/2}z\bigr) + \frac{\mu}{2}\mathrm{e}^{\mu t/2}z = \mathrm{e}^{\mu t/2+\mathrm{i} t/\varepsilon}f, \end{equation} from which it follows that \begin{equation} \ddt{\;}\bigl(\mathrm{e}^{\mu t/2}|z|^2\bigr) + \mu\mathrm{e}^{\mu t/2}|z|^2 \le 2\mathrm{e}^{\mu t/2}\mathrm{Re}\,\bigl(\mathrm{e}^{\mathrm{i} t/\varepsilon}\bar z f\bigr). \end{equation} Integrating, we find \begin{equation}\begin{aligned} \mathrm{e}^{\mu t/2}|z(t)|^2 &- |z(0)|^2 + \mu\int_0^t \mathrm{e}^{\mu\tau/2} |z(\tau)|^2\>\mathrm{d}\tau \le 2 \int_0^t \mathrm{e}^{\mu\tau/2} \mathrm{Re}\bigl(\mathrm{e}^{\mathrm{i}\tau/\varepsilon}\bar z f\bigr)\>\mathrm{d}\tau\\ &= -2\varepsilon \bigl[\mathrm{e}^{\mu\tau/2}\mathrm{Re}\bigl(\mathrm{i}\,\mathrm{e}^{\mathrm{i}\tau/\varepsilon}\bar z f\bigr)\bigr]_0^t + 2\varepsilon \int_0^t \mathrm{Re}\bigl[\mathrm{i}\,\mathrm{e}^{\mathrm{i}\tau/\varepsilon}\partial_\tau(\mathrm{e}^{\mu\tau/2}\bar z f)\bigr] \>\mathrm{d}\tau, \end{aligned}\end{equation} where the last equality is obtained by integration by parts. Since $\partial_tf$ is bounded independently of $\varepsilon$, the integral can be bounded using \eqref{q:dzdt} and the integral on the left-hand side. This leaves us with \begin{equation} |z(t)|^2 \le \mathrm{e}^{-\mu t/2}\,\cnst1(|f|)\,|z(0)|^2 + \frac{\varepsilon}{\mu}\,(1-\mathrm{e}^{-\mu t/2})\,K(|f|,|\partial_tf|,\mu). \end{equation} Most of the work in the proof below is devoted to handling the nonlinear term, where particular properties of the OPE come into play. A PDE application of this principle can be found in \cite{schochet:94}.
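The decay mechanism in the model ODE can be watched directly; the script below (an illustration added here, not part of the paper; it assumes NumPy, and the parameter values are arbitrary) advances $\dot x + (\mathrm{i}/\varepsilon)x + \mu x = f$ with constant $f$ using the exact one-step formula, and checks that the $O(1)$ initial condition decays well below the $\sqrt\varepsilon$ level:

```python
import numpy as np

eps, mu, f = 1e-3, 1.0, 1.0
lam = 1j/eps + mu                     # dx/dt = -lam*x + f
x = 1.0 + 0j                          # |x(0)| = 1
h = 1e-2
for _ in range(2000):                 # integrate to t = 20
    # Exact step for constant f (variation of constants), valid for any h.
    x = np.exp(-lam*h)*x + f*(1 - np.exp(-lam*h))/lam
assert abs(x) < np.sqrt(eps)          # settled near |f|/|lam| ~ eps*|f|
```

With constant forcing the residual amplitude is in fact $O(\varepsilon)$; the $\sqrt\varepsilon$ in the theorem accommodates general time-dependent forcing and the nonlinearity.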
\subsection{Proof of Theorem~\ref{t:o1}} In this proof, we omit the subscript in the inner product $(\cdot,\cdot)_{L^2}$ when the meaning is unambiguous; similarly, $|\cdot|\equiv|\cdot|_{L^2}^{}$. We start by writing \eqref{q:ddtweps} as \begin{equation}\label{q:dt0Weps}\begin{aligned} \ddt{} |W^\varepsilon|^2 &+ 2\mu |\nabla W^\varepsilon|^2\\ &= -2(W^\varepsilon,B(W^0,W^0)) - 2(W^\varepsilon,B(W^\varepsilon,W^0)) + 2(W^\varepsilon,f^\varepsilon)\\ &= -2(W^\varepsilon,B(W^0,W^0)) + 2(W^0,B(W^\varepsilon,W^\varepsilon)) + 2(W^\varepsilon,f^\varepsilon). \end{aligned}\end{equation} Using the Poincar{\'e} inequality, $|W^\varepsilon|^2\le\cnst{\textrm{p}}|\nabla W^\varepsilon|^2$, and multiplying the left-hand side by $\mathrm{e}^{\nu t}$ where $\nu:=\mu/\cnst{\textrm{p}}$, we have \begin{equation} \ddt{}\bigl( \mathrm{e}^{\nu t} |W^\varepsilon|^2 \bigr) + \mu\mathrm{e}^{\nu t} |\nabla W^\varepsilon|^2 \le \mathrm{e}^{\nu t}\Bigl(\ddt{} |W^\varepsilon|^2 + \mu |\nabla W^\varepsilon|^2 + \mu |\nabla W^\varepsilon|^2\Bigr). \end{equation} With this, \eqref{q:dt0Weps} becomes \begin{equation}\label{q:ddtwfour}\begin{aligned} &\ddt{\;} \bigl(\mathrm{e}^{\nu t}\, |W^\varepsilon|^2\bigr) + \mu \mathrm{e}^{\nu t}|\nabla W^\varepsilon|^2\\ &\hbox to24pt{}\le 2\,\mathrm{e}^{\nu t}\, (W^\varepsilon,f^\varepsilon) - 2\,\mathrm{e}^{\nu t}\, (W^\varepsilon,B(W^0,W^0)) + 2\,\mathrm{e}^{\nu t}\, (W^0,B(W^\varepsilon,W^\varepsilon)). \end{aligned}\end{equation} We now integrate this inequality from $0$ to $t$. On the left-hand side we have \begin{equation}\label{q:lhs1}\begin{aligned} \int_0^t \Bigl\{ \ddtau{\;} \bigl(\mathrm{e}^{\nu\tau} |W^\varepsilon|^2\bigr) &+ \mu \mathrm{e}^{\nu\tau}|\nabla W^\varepsilon|^2 \Bigr\} \>\mathrm{d}\tau\\ &= \mathrm{e}^{\nu t}|W^\varepsilon(t)|^2 - |W^\varepsilon(0)|^2 + \mu \int_0^t \mathrm{e}^{\nu\tau}|\nabla W^\varepsilon|^2 \>\mathrm{d}\tau.
\end{aligned}\end{equation} Using the expansion \eqref{q:W0Weps} of $W^\varepsilon$, we integrate the right-hand side by parts to bring out a factor of $\varepsilon$; that is, we integrate the rapidly oscillating exponential $\mathrm{e}^{\mathrm{i}\omega_{\boldsymbol{k}}^st/\varepsilon}$ and differentiate everything else. For the force term, we have \begin{equation}\begin{aligned} \int_0^t \mathrm{e}^{\nu\tau} (W^\varepsilon,f^\varepsilon) \>\mathrm{d}\tau &= |\mathscr{M}| \sum_{\boldsymbol{k}}^s\,\int_0^t \mathrm{e}^{\nu\tau+\mathrm{i}\omega_{\boldsymbol{k}}^s\tau/\varepsilon} \overline{w_{\boldsymbol{k}}^s} f_{\boldsymbol{k}}^s \>\mathrm{d}\tau\\ &= \varepsilon\, |\mathscr{M}|\sump_{\boldsymbol{k}}^s\, \frac{1}{\mathrm{i}\omega_{\boldsymbol{k}}^s}\bigl[\overline{w_{\boldsymbol{k}}^s(t)} f_{\boldsymbol{k}}^s(t) \mathrm{e}^{\nu t+\mathrm{i}\omega_{\boldsymbol{k}}^st/\varepsilon} - \overline{w_{\boldsymbol{k}}^s(0)} f_{\boldsymbol{k}}^s(0)\bigr]\\ &\qquad- \varepsilon\, |\mathscr{M}|\int_0^t \sump_{\boldsymbol{k}}^s\, \frac{\mathrm{e}^{\mathrm{i}\omega_{\boldsymbol{k}}^s\tau/\varepsilon}}{\mathrm{i}\omega_{\boldsymbol{k}}^s} \ddtau{\;}\bigl( \overline{w_{\boldsymbol{k}}^s} f_{\boldsymbol{k}}^s \mathrm{e}^{\nu\tau}\bigr) \>\mathrm{d}\tau. \end{aligned}\end{equation} Here the prime on $\sump$ indicates that terms for which $\omega_{\boldsymbol{k}}^s=0$ are omitted since then $w_{\boldsymbol{k}}^s=0$.
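The gain of a factor $\varepsilon$ from this integration by parts can be seen numerically; the script below (an illustration added here, not from the paper; it assumes NumPy, and the smooth integrand $g$ and the grid size are arbitrary choices) checks that $\int_0^1 \mathrm{e}^{\mathrm{i}\tau/\varepsilon}g(\tau)\,\mathrm{d}\tau = O(\varepsilon)$:

```python
import numpy as np

def osc_integral(eps, n=400001):
    # Trapezoidal rule for int_0^1 exp(i*tau/eps) g(tau) dtau
    # with the smooth test function g(tau) = cos(3 tau) exp(-tau).
    tau = np.linspace(0.0, 1.0, n)
    y = np.exp(1j*tau/eps) * np.cos(3*tau) * np.exp(-tau)
    h = tau[1] - tau[0]
    return h*(np.sum(y) - 0.5*y[0] - 0.5*y[-1])

for eps in (1e-1, 1e-2, 1e-3):
    # Integration by parts gives |I| <= eps*(|g(0)| + |g(1)| + int |g'|) < 5*eps.
    assert abs(osc_integral(eps)) < 5*eps
```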
Introducing the integration operator ${\sf I}_\omega$ defined by \begin{equation}\label{q:Iwdef} {\sf I}_\omega W^\varepsilon(\boldsymbol{x},t) := \sump_{\boldsymbol{k}}^s\; \frac\mathrm{i}{\omega^s_{\boldsymbol{k}}}\, w^s_{\boldsymbol{k}}(t) X^s_{\boldsymbol{k}} \mathrm{e}^{-\mathrm{i}\omega^s_{\boldsymbol{k}} t/\varepsilon}\ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}\,, \end{equation} which is well-defined since $|\omega_{\boldsymbol{k}}^s|\ge1$, we can write this as \begin{equation}\label{q:Wef}\begin{aligned} \int_0^t &\mathrm{e}^{\nu\tau} (W^\varepsilon,f^\varepsilon) \>\mathrm{d}\tau = \varepsilon\, \mathrm{e}^{\nu t}({\sf I}_\omega W^\varepsilon(t),f^\varepsilon(t)) - \varepsilon\,({\sf I}_\omega W^\varepsilon(0),f^\varepsilon(0))\\ &- \varepsilon \int_0^t \mathrm{e}^{\nu\tau}\bigl\{ \nu({\sf I}_\omega W^\varepsilon,f^\varepsilon) + ({\sf I}_\omega\dy_\tau^* W^\varepsilon,f^\varepsilon) + ({\sf I}_\omega W^\varepsilon,\partial_\tau f^\varepsilon) \bigr\} \>\mathrm{d}\tau. \end{aligned}\end{equation} Similarly, integrating the next term by parts we find \begin{equation}\label{q:WzBze}\begin{aligned} &\int_0^t \mathrm{e}^{\nu\tau} (W^\varepsilon,B(W^0,W^0)) \>\mathrm{d}\tau\\ &\quad= \varepsilon\, \mathrm{e}^{\nu t}({\sf I}_\omega W^\varepsilon,B(W^0,W^0))(t) - \varepsilon\, ({\sf I}_\omega W^\varepsilon,B(W^0,W^0))(0)\\ &\qquad- \varepsilon \int_0^t \mathrm{e}^{\nu\tau} \bigl\{ \nu\, ({\sf I}_\omega W^\varepsilon,B(W^0,W^0)) + ({\sf I}_\omega\dy_\tau^* W^\varepsilon,B(W^0,W^0))\\ &\hbox to75pt{}+ ({\sf I}_\omega W^\varepsilon,B(\partial_\tau W^0,W^0)) + ({\sf I}_\omega W^\varepsilon,B(W^0,\partial_\tau W^0)) \bigr\}\>\mathrm{d}\tau.
\end{aligned}\end{equation} Next, we consider \begin{equation}\label{q:Bst}\begin{aligned} \int_0^t &\mathrm{e}^{\nu\tau}\,(W^0,B(W^\varepsilon,W^\varepsilon))\,\>\mathrm{d}\tau\\ &= \int_0^t \frac12\sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)\tau/\varepsilon} \bigl(B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr) w_{\boldsymbol{j}}^r w_{\boldsymbol{k}}^s \overline{w_{\boldsymbol{l}}^0}\, \mathrm{e}^{\nu\tau} \>\mathrm{d}\tau\\ &= \frac{\varepsilon\mathrm{i}}2\,\sump_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \frac{B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}}{\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s}\, [w_{\boldsymbol{j}}^r(t)w_{\boldsymbol{k}}^s(t)\overline{w_{\boldsymbol{l}}^0(t)}\mathrm{e}^{\nu t-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon}\\ &\hbox to210pt{}- w_{\boldsymbol{j}}^r(0)w_{\boldsymbol{k}}^s(0)\overline{w_{\boldsymbol{l}}^0(0)}]\\ &\qquad {}- \frac{\varepsilon\mathrm{i}}2\int_0^t \sump_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \frac{B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}}{\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s}\, \mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)\tau/\varepsilon} \ddtau{}\bigl[ w^r_{\boldsymbol{j}} w^s_{\boldsymbol{k}} \overline{w^0_{\boldsymbol{l}}} \mathrm{e}^{\nu \tau}\bigr]\;\mathrm{d}\tau. \end{aligned}\end{equation} Here the prime on $\sum'$ indicates that exactly resonant terms, for which $\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s=0$ and $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$, are excluded. 
Using the bilinear operator $B_\omega$, defined for any $W^\varepsilon$, $\hat W^\varepsilon$ and $\tilde W^0$ by \begin{equation}\label{q:Bwsdef} (\tilde W^0,B_\omega(W^\varepsilon,\hat W^\varepsilon)) := \frac{\mathrm{i}}2 \sump_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \frac{B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}}{\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s} w_{\boldsymbol{j}}^r \hat w_{\boldsymbol{k}}^s \overline{\tilde w\vphantom{w}_{\boldsymbol{l}}^0} \mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon}, \end{equation} we can write \eqref{q:Bst} in the more compact form \begin{equation}\label{q:Bws}\begin{aligned} \!\int_0^t &\mathrm{e}^{\nu\tau}\, (W^0,B(W^\varepsilon,W^\varepsilon)) \>\mathrm{d}\tau\\ &= \varepsilon\,\mathrm{e}^{\nu t}\,(W^0,B_\omega(W^\varepsilon,W^\varepsilon))(t) - \varepsilon\,(W^0,B_\omega(W^\varepsilon,W^\varepsilon))(0)\\ &\quad {}- \varepsilon\int_0^t \mathrm{e}^{\nu\tau}\,\bigl\{ \nu\,(W^0,B_\omega(W^\varepsilon,W^\varepsilon)) + (\partial_\tau W^0,B_\omega(W^\varepsilon,W^\varepsilon))\\ &\hskip156pt {}+ (W^0,\dstau B_\omega(W^\varepsilon,W^\varepsilon)) \bigr\}\>\mathrm{d}\tau.
\end{aligned}\end{equation} Putting these together, \eqref{q:ddtwfour} integrates to \begin{equation}\label{q:Wt1}\begin{aligned} \mathrm{e}^{\nu t}|W^\varepsilon(t)|^2 &- |W^\varepsilon(0)|^2 + \mu \int_0^t \mathrm{e}^{\nu\tau}|\nabla W^\varepsilon|^2 \>\mathrm{d}\tau\\ &\le 2\varepsilon\,\mathrm{e}^{\nu t}\, \bigl({\sf I}_\omega W^\varepsilon,f^\varepsilon\bigr)(t) - 2\varepsilon\, \bigl({\sf I}_\omega W^\varepsilon,f^\varepsilon\bigr)(0)\\ &\quad- 2\varepsilon\, \mathrm{e}^{\nu t}\, \bigl({\sf I}_\omega W^\varepsilon,B(W^0,W^0)\bigr)(t) + 2\varepsilon\, \bigl({\sf I}_\omega W^\varepsilon,B(W^0,W^0)\bigr)(0)\\ &\quad{}+ 2\varepsilon\,\mathrm{e}^{\nu t}\bigl(W^0,B_\omega(W^\varepsilon,W^\varepsilon)\bigr)(t) - 2\varepsilon\,\bigl(W^0,B_\omega(W^\varepsilon,W^\varepsilon)\bigr)(0)\\ &\quad+ 2\varepsilon \int_0^t \mathrm{e}^{\nu\tau} \bigl\{ I_0(\tau) - I_1(\tau) + I_2(\tau)\bigr\} \>\mathrm{d}\tau. \end{aligned}\end{equation} Here the integrands are \begin{equation} I_0 := \nu ({\sf I}_\omega W^\varepsilon,f^\varepsilon) + ({\sf I}_\omega W^\varepsilon,\partial_\tau f^\varepsilon) + ({\sf I}_\omega\dy_\tau^* W^\varepsilon,f^\varepsilon), \end{equation} \begin{equation}\begin{aligned} I_1 &:= \nu\, ({\sf I}_\omega W^\varepsilon, B(W^0,W^0)) + ({\sf I}_\omega\dy_\tau^* W^\varepsilon,B(W^0,W^0))\\ &\qquad {}+ ({\sf I}_\omega W^\varepsilon, B(\partial_\tau W^0,W^0)) + ({\sf I}_\omega W^\varepsilon,B(W^0,\partial_\tau W^0)), \end{aligned}\end{equation} and \begin{equation}\begin{aligned} I_2 := \nu\,(W^0,B_\omega(W^\varepsilon,W^\varepsilon)) &+ (\partial_\tau W^0,B_\omega(W^\varepsilon,W^\varepsilon))\\ &+ (W^0,\dstau B_\omega(W^\varepsilon,W^\varepsilon)). \end{aligned}\end{equation} We now bound the right-hand side of \eqref{q:Wt1}.
On the second line, we have \begin{equation}\begin{aligned} \bigl|\mathrm{e}^{\nu t}\,({\sf I}_\omega W^\varepsilon(t),f^\varepsilon(t))&-({\sf I}_\omega W^\varepsilon(0),f^\varepsilon(0))\bigr|\\ &\le \mathrm{e}^{\nu t}\,|W^\varepsilon(t)|\,|f^\varepsilon(t)| + |W^\varepsilon(0)|\,|f^\varepsilon(0)|, \end{aligned}\end{equation} where we have used the fact that, thanks to \eqref{q:infw}, \begin{equation} |\nabla^\alpha {\sf I}_\omega W^\varepsilon| \le |\nabla^\alpha W^\varepsilon|, \qquad\textrm{for }\alpha=0,1,2,\cdots. \end{equation} To bound the next line, we use the estimate \begin{equation}\label{q:B0ee}\begin{aligned} |(\tilde W,B(W^0,\hat W))| &\le C\, |\tilde W|_{L^6}\,|W^0|_{L^3}\,|\nabla\hat W|_{L^2}\\ &\le C\, |\nabla\tilde W|\,|W^0|^{1/2}\,|\nabla W^0|^{1/2}\,|\nabla\hat W| \end{aligned}\end{equation} (note that the first argument of $B$ is $W^0$) to obtain \begin{equation}\begin{aligned} \bigl|\mathrm{e}^{\nu t}\,({\sf I}_\omega W^\varepsilon,B(W^0,\,&W^0))(t)-({\sf I}_\omega W^\varepsilon,B(W^0,W^0))(0)\bigr|\\ &\le C\,\mathrm{e}^{\nu t}\,|\nabla W^\varepsilon(t)|\,|W^0(t)|^{1/2}|\nabla W^0(t)|^{3/2}\\ &\qquad+ C\,|\nabla W^\varepsilon(0)|\,|W^0(0)|^{1/2}|\nabla W^0(0)|^{3/2}. \end{aligned}\end{equation} In \eqref{q:B0ee} and in the rest of this proof, $C$ and $c$ denote generic constants which may not be the same each time the symbol is used; such constants may depend on $\mathscr{M}$ but not on any other parameter. Numbered constants may also depend on $\mu$. We now derive a bound involving $B_\omega$. Since $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$ in the case of exact resonance, we assume that $\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s\ne0$.
Then \eqref{q:nores} implies \begin{equation} \frac{\bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr|}{|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|} \le C\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|. \end{equation} With this, we have for any $W^\varepsilon$, $\hat W^\varepsilon$ and $\tilde W^0$, \begin{equation}\label{q:bd3Bws}\begin{aligned} \!\bigl|(\tilde W^0,B_\omega(W^\varepsilon,\hat W^\varepsilon))\bigr| &\le \frac12\sump_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \biggl|\frac{B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}}{\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s}\biggr|\, |w^r_{\boldsymbol{j}}|\,|\hat w^s_{\boldsymbol{k}}|\,|\tilde w_{\boldsymbol{l}}^0|\\ &\le C\,\sum_{{\boldsymbol{j}}+{\boldsymbol{k}}={\boldsymbol{l}}}^{rs}\, |{\boldsymbol{j}}|\,|{\boldsymbol{k}}|\,|w^r_{\boldsymbol{j}}|\,|\hat w^s_{\boldsymbol{k}}|\,|\tilde w_{\boldsymbol{l}}^0|\\ &\le C\int_\mathscr{M} \theta(\boldsymbol{x})\,\xi(\boldsymbol{x})\,\zeta(\boldsymbol{x})\;\mathrm{d}\boldsymbol{x}\\ &\le C\,|\nabla W^\varepsilon|_{L^p}|\nabla\hat W^\varepsilon|_{L^q}|\tilde W^0|_{L^m}\,, \end{aligned}\end{equation} with $1/p+1/q+1/m=1$ and where on the penultimate line \begin{equation} \theta(\boldsymbol{x}) := {\textstyle\sum}_{\boldsymbol{j}}^r\,|{\boldsymbol{j}}|\,|w_{\boldsymbol{j}}^r|\,\mathrm{e}^{\mathrm{i}{\boldsymbol{j}}\cdot\boldsymbol{x}}, \> \xi(\boldsymbol{x}) := {\textstyle\sum}_{\boldsymbol{k}}^s\,|{\boldsymbol{k}}|\,|\hat w_{\boldsymbol{k}}^s|\,\mathrm{e}^{\mathrm{i}{\boldsymbol{k}}\cdot\boldsymbol{x}} \textrm{ and } \zeta(\boldsymbol{x}) := {\textstyle\sum}_{\boldsymbol{l}}\,|\tilde w_{\boldsymbol{l}}^0|\,\mathrm{e}^{\mathrm{i}{\boldsymbol{l}}\cdot\boldsymbol{x}}.
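The passage from the convolution sum to the integral of $\theta$, $\xi$, $\zeta$ rests on the Fourier identity $\sum_{{\boldsymbol{j}}+{\boldsymbol{k}}={\boldsymbol{l}}}A_{\boldsymbol{j}}B_{\boldsymbol{k}}C_{\boldsymbol{l}} = |\mathscr{M}|^{-1}\int\theta\xi\zeta$, valid when $C_{-{\boldsymbol{l}}}=C_{\boldsymbol{l}}\ge0$ (as holds for the moduli $|\tilde w^0_{\boldsymbol{l}}|$). The script below (an illustration added here, not from the paper; it assumes NumPy and uses a one-dimensional discrete analogue with an arbitrary mode range) checks this identity:

```python
import numpy as np

modes = np.arange(-5, 6)                    # modes -5..5; sums stay below N/2
rng = np.random.default_rng(1)
A, B = rng.random(len(modes)), rng.random(len(modes))
C = rng.random(len(modes)); C = C + C[::-1]  # enforce C_{-l} = C_l
idx = {int(m): i for i, m in enumerate(modes)}

# Left-hand side: direct convolution sum over j + k = l with l in range.
lhs = sum(A[idx[m]]*B[idx[n]]*C[idx[m+n]]
          for m in idx for n in idx if m+n in idx)

# Right-hand side: (1/N) sum_x theta*xi*zeta on an alias-free grid.
N = 64
x = 2*np.pi*np.arange(N)/N
theta = sum(a*np.exp(1j*m*x) for a, m in zip(A, modes))
xi = sum(b*np.exp(1j*m*x) for b, m in zip(B, modes))
zeta = sum(c*np.exp(1j*m*x) for c, m in zip(C, modes))
rhs = np.mean(theta*xi*zeta).real

assert np.isclose(lhs, rhs)
```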
\end{equation} Using \eqref{q:bd3Bws} with $p=q=2$ and $m=\infty$, plus the embedding $H^2\subset\subset L^\infty$, we have the bound \begin{equation}\begin{aligned} \bigl|\mathrm{e}^{\nu t}\bigl(W^0,\,&B_\omega(W^\varepsilon,W^\varepsilon)\bigr)(t) - \bigl(W^0,B_\omega(W^\varepsilon,W^\varepsilon)\bigr)(0)\bigr|\\ &\le C\,\bigl(\mathrm{e}^{\nu t}|\nabla W^\varepsilon(t)|^2|\nabla^2W^0(t)| + |\nabla W^\varepsilon(0)|^2|\nabla^2W^0(0)|\bigr). \end{aligned}\end{equation} To bound the integrand in \eqref{q:Wt1}, we need estimates on $\partial_tW^0$ and $\dy_t^* W^\varepsilon$ in addition to those already obtained. Using the bound \begin{equation} |B^0(W,W)|_{L^2}^{} \le C\,|\nabla W|_{L^4}^2 \le C\,|\nabla W|_{H^{3/4}}^2 \le C\,|\nabla^2 W|^{3/2}|\nabla W|^{1/2}, \end{equation} we find from \eqref{q:dtW0} \begin{equation}\label{q:bd0dtW0}\begin{aligned} |\partial_t W^0|_{L^2}^{} &\le C\,|\nabla W|_{H^{3/4}}^2 + \mu\,|\nabla^2 W^0| + |f|\\ &\le C\,|\nabla^2 W|^{3/2}|\nabla W|^{1/2} + \mu\, |\nabla^2 W^0| + |f|. \end{aligned}\end{equation} Similarly, we find from \eqref{q:dtSWeps} \begin{equation}\label{q:bd0dtWe}\begin{aligned} |\dy_t^* W^\varepsilon|_{L^2}^{} &\le C\,|\nabla W|_{H^{3/4}}^2 + \mu\,|\nabla^2 W^\varepsilon| + |f|\\ &\le C\,|\nabla^2 W|^{3/2}|\nabla W|^{1/2} + \mu\, |\nabla^2 W^\varepsilon| + |f|. 
\end{aligned}\end{equation} Now using the bound \begin{equation} |\nabla B(W,W)|_{L^2}^{} \le C\,|\nabla^2 W|_{L^{12/5}}^{}|\nabla W|_{L^{12}}^{} \le C\,|\nabla^2 W|_{H^{1/4}}^2 \end{equation} we find \begin{equation}\label{q:bd1dtW}\begin{aligned} |\nabla\partial_t W^0|_{L^2}^{} &\le C\,|\nabla^2 W|_{H^{1/4}}^2 + \mu\,|\nabla^3 W^0| + |\nabla f^0|\\ &\le C\,|\nabla^3 W|^{1/2}|\nabla^2 W|^{3/2} + \mu\,|\nabla^3 W^0| + |\nabla f^0|,\\ |\nabla\dy_t^* W^\varepsilon|_{L^2}^{} &\le C\,|\nabla^2 W|_{H^{1/4}}^2 + \mu\,|\nabla^3 W^\varepsilon| + |\nabla f^\varepsilon|\\ &\le C\,|\nabla^3 W|^{1/2}|\nabla^2 W|^{3/2} + \mu\,|\nabla^3 W^\varepsilon| + |\nabla f^\varepsilon|.\\ \end{aligned}\end{equation} The bound for $I_0$ follows by using \eqref{q:bd0dtWe}, \begin{equation}\label{q:bdI0}\begin{aligned} |I_0|_{L^2}^{} &\le C\,\bigl(|\nabla W|_{H^{3/4}}^2 + (\mu+c)\,|\nabla^2W^\varepsilon| + |f^\varepsilon| \bigr)\,(|f^\varepsilon|+|\partial_t f^\varepsilon|)\\ &\le C\,\bigl(|\nabla W|_{H^{3/4}}^2 + (\mu+c)\,|\nabla^2W| + \|f\|_{\rm g}^{} \bigr)\,\|f\|_{\rm g}^{}, \end{aligned}\end{equation} where we have used the fact that $|\nabla^\alpha W^\varepsilon|^2 \le |\nabla^\alpha W^\varepsilon|^2 + |\nabla^\alpha W^0|^2 = |\nabla^\alpha W|^2$.
Next, using \eqref{q:bd3Bws} we bound $I_2$ as \begin{equation}\begin{aligned} |I_2|_{L^2}^{} &\le \mu c\,|W^0|_{L^\infty}^{}|\nabla W^\varepsilon|^2 + c\,|\partial_\tau W^0|\,|\nabla W^\varepsilon|_{L^4}^2\\ &\hskip93pt {}+ c\,|W^0|_{L^\infty}^{}|\nabla W^\varepsilon|\,|\nabla\dy_t^* W^\varepsilon|\\ &\le \mu c\,|\nabla^2W^0|\,|\nabla W^\varepsilon|^2 + c\,\bigl(|\nabla W|_{H^{3/4}}^2 + \mu\,|\nabla^2 W^0| + |f^0|\bigr) |\nabla W^\varepsilon|_{H^{3/4}}^2\\ &\hskip20pt {}+ c\,|\nabla^2 W^0|\,|\nabla W^\varepsilon|\, \bigl(|\nabla^2W|_{H^{1/4}}^2 + \mu\,|\nabla^3W^\varepsilon| + |\nabla f^\varepsilon|\bigr)\\ &\le c\,|\nabla W|\,|\nabla^2W|\,|\nabla^2W|_{H^{1/4}}^2 + \mu c\,|\nabla^3W|\,|\nabla^2W|\,|\nabla W|\\ &\hskip20pt {}+ |\nabla^2W|^{3/2}|\nabla W|^{1/2}\,\|f\|_{\rm g}^{} \end{aligned}\end{equation} where interpolation inequalities have been used for the last step. The bound for $I_1$ is majorised by that for $I_2$. Putting everything together, we have from \eqref{q:Wt1} \begin{equation}\label{q:Wt2}\begin{aligned} \mathrm{e}^{\nu t}\,&|W^\varepsilon(t)|^2 - |W^\varepsilon(0)|^2\\ &\le \varepsilon\,c_2\,\mathrm{e}^{\nu t}\,|\nabla^2 W(t)|\bigl(|\nabla W(t)|^2 + \|f\|_{\rm g}^{}\bigr) + \varepsilon\,c_2\,|\nabla^2 W_0|\bigl(|\nabla W_0|^2 + \|f\|_{\rm g}^{}\bigr)\\ &\quad {}+ \varepsilon\,c_3 \int_0^t \mathrm{e}^{\nu\tau} \bigl\{ |W|_{H^1}^{}|W|_{H^2}^{}|W|_{H^{9/4}}^2 + \mu\,|W|_{H^3}^{}|W|_{H^2}^{}|W|_{H^1}^{}\\ &\hskip80pt {}+ \bigl(|W|_{H^2}^{3/2}|W|_{H^1}^{1/2} + (\mu+c)\,|W|_{H^2}^{} + \|f\|_{\rm g}^{}\bigr)\,\|f\|_{\rm g}^{} \bigr\} \>\mathrm{d}\tau. \end{aligned}\end{equation} Now by \eqref{q:WH1u} and \eqref{q:WHsu}, we can find $K_*(\|f\|_{\rm g}^{})$ and $T_*(|\nabla W_0|,\|f\|_{\rm g}^{})$ such that, for $t\ge T_*$, \begin{equation} c\,|\nabla^sW(t)|^2 + (\mu+c')\, |\nabla^s W(t)| + \|f\|_{\rm g}^{} \le K_* \end{equation} for $s\in\{0,1,2,3\}$. Let $t':=t-T_*$ and relabel $t$ in \eqref{q:Wt2} as $t'$. 
We can then bound the integral in \eqref{q:Wt2} as \begin{equation} \int_0^{t'} \mathrm{e}^{\nu\tau}\bigl\{\cdots\}\>\mathrm{d}\tau \le \frac{\mathrm{e}^{\nu t'}-1}{\nu}\,c_4\,K_*(\|f\|_{\rm g}^{})^2. \end{equation} Bounding the remaining terms in \eqref{q:Wt2} similarly, we find \begin{equation}\begin{aligned} |W^\varepsilon(t)|^2 &\le \mathrm{e}^{-\nu(t-T_*)}\,|W^\varepsilon(T_*)|^2 + \varepsilon\,c_5\,\bigl(K_*^2 + K_*^{3/2}\bigr)\\ &\le \mathrm{e}^{-\nu(t-T_*)}\,|W(T_*)|^2 + \varepsilon\,c_5\,\bigl(K_*^2 + K_*^{3/2}\bigr). \end{aligned}\end{equation} This proves the theorem, with $K_{\rm g}(\|f\|_{\rm g}^{})^2 = 2\,c_5\,\bigl(K_*^2 + K_*^{3/2}\bigr)$ and $T_{\rm g}(|\nabla W_0|,\|f\|_{\rm g}^{},\varepsilon)=T_*-\log\bigl[\varepsilon\,c_5\,\bigl(K_*+K_*^{1/2}\bigr)\bigr]/\nu$. \section{Higher-Order Estimates}\label{s:ho} When $\partial_tf=0$ in the very simple model \eqref{q:1dm}, we can obtain a better estimate on $x'=x-U$ where $U=\varepsilon f/(\varepsilon\mu+\mathrm{i})$ than on $x$, namely that $x'(t)\to0$ as $t\to\infty$; here $U$ is the (exact, higher-order) {\em slow manifold\/}. The situation is more complicated when $f$ is time-dependent, or when $x$ is coupled to a slow variable $y$ with the evolution equations having nonlinear terms. In this case, it is not generally possible to find $U$ (explicit examples are known where no such $U$ exists), and thus $x'(t)\not\to0$ as $t\to\infty$ for any $U(y,f;\varepsilon)$. Nevertheless, it is often possible to find a $U^*$ that gives an exponentially small bound on $x'(t)$ for large $t$. We shall do this for the primitive equations. More concretely, in this section we show that, with reasonable regularity assumptions on the forcing $f$, the leading-order estimate on the fast variable $W^\varepsilon$ in the previous section can be sharpened to an exponential-order estimate on $W^\varepsilon-U^*(W^0,f;\varepsilon)$, where $U^*$ is computed below. 
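For the very simple model \eqref{q:1dm} with $\partial_tf=0$, the claim that $x'(t)\to0$ can be verified in two lines; we assume here that \eqref{q:1dm} takes the form $\dot x + (\mu+\mathrm{i}/\varepsilon)\,x = f$, which is consistent with the stated $U=\varepsilon f/(\varepsilon\mu+\mathrm{i})$ but should be checked against the original definition. Since $(\mu+\mathrm{i}/\varepsilon)\,U = f$, the variable $x'=x-U$ satisfies
\begin{equation*}
  \dot x' = f - \Bigl(\mu+\frac{\mathrm{i}}{\varepsilon}\Bigr)(x'+U)
          = -\Bigl(\mu+\frac{\mathrm{i}}{\varepsilon}\Bigr)\,x'\,,
  \qquad\textrm{so}\qquad
  |x'(t)| = \mathrm{e}^{-\mu t}\,|x'(0)| \to 0
\end{equation*}
as $t\to\infty$, while $x(t)$ itself only tends to the nonzero limit $U$.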
As in \cite{temam-dw:ebal}, we make use of the Gevrey regularity of the solution and work with a finite-dimensional truncation of the system, whose description now follows. Given a fixed $\kappa>0$, we define the low-mode truncation of $W$ by \begin{equation}\label{q:Pldef} W^<(\boldsymbol{x},t) = ({\sf P}^{\!{}^<} W)(\boldsymbol{x},t) := \sum_{|{\boldsymbol{k}}|<\kappa}^\alpha\, w_{\boldsymbol{k}}^\alpha X_{\boldsymbol{k}}^\alpha \mathrm{e}^{-\mathrm{i}\omega_{\boldsymbol{k}}^\alpha t/\varepsilon}\mathrm{e}^{\mathrm{i}{\boldsymbol{k}}\cdot\boldsymbol{x}} \end{equation} where the sum is taken over $\alpha\in\{0,\pm1\}$ and ${\boldsymbol{k}}\in\mathbb{Z}_L$ with $|{\boldsymbol{k}}|<\kappa$. We also define the high-mode part of $W$ by $W^>:=W-W^<$. The low- and high-mode parts of the slow and fast variables, $W^{0<}$, $W^{0>}$, $W^{\varepsilon<}$ and $W^{\varepsilon>}$, are defined in the obvious manner, i.e.\ $W^{0<}$ with $\alpha=0$ in \eqref{q:Pldef} and $W^{\varepsilon<}$ with $\alpha\in\{\pm1\}$. It is clear from \eqref{q:Pldef} and \eqref{q:Hsnorm} that the projection ${\sf P}^{\!{}^<}$ is orthogonal in $H^s$, so ${\sf P}^{\!{}^<}$ commutes with both $A$ and $L$ in \eqref{q:dW}. We denote ${\sf P}^{\!{}^<} B$ by $B^<$. It follows from the definition that the low-mode part $W^<$ satisfies a ``reverse Poincar{\'e}'' inequality, i.e.\ for any $s\ge0$, \begin{equation}\label{q:ipoi} |\nabla W^<|_{H^s}^{} \le \kappa\,|W^<|_{H^s}^{}\,. \end{equation} If $W\in G^\sigma(\mathscr{M})$, the exponential decay of its Fourier coefficients implies that $W^>$ is exponentially small, that is, for any $s\ge0$, \begin{equation}\label{q:Wgg} |W^>|_{H^s}^{} \le C_s\,\kappa^s\,\mathrm{e}^{-\sigma\kappa}|W|_{G^\sigma}^{}\,. \end{equation} The first inequality evidently also applies to the slow and fast parts separately, i.e.\ with $W^<$ replaced by $W^{0<}$ or $W^{\varepsilon<}$; as for \eqref{q:Wgg}, it also holds when $W^>$ on the lhs is replaced by $W^{0>}$ or $W^{\varepsilon>}$. 
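Both \eqref{q:ipoi} and \eqref{q:Wgg} follow directly from the Fourier representation. Writing schematically $|W|_{H^s}^2=\sum_{\boldsymbol{k}}^\alpha|{\boldsymbol{k}}|^{2s}|w_{\boldsymbol{k}}^\alpha|^2$ and $|W|_{G^\sigma}^2=\sum_{\boldsymbol{k}}^\alpha\mathrm{e}^{2\sigma|{\boldsymbol{k}}|}|w_{\boldsymbol{k}}^\alpha|^2$ (the precise normalisations in \eqref{q:Hsnorm} do not affect the argument), we have
\begin{equation*}
  |\nabla W^<|_{H^s}^2 = \sum_{|{\boldsymbol{k}}|<\kappa}^\alpha |{\boldsymbol{k}}|^{2(s+1)}\,|w_{\boldsymbol{k}}^\alpha|^2 \le \kappa^2 \sum_{|{\boldsymbol{k}}|<\kappa}^\alpha |{\boldsymbol{k}}|^{2s}\,|w_{\boldsymbol{k}}^\alpha|^2 = \kappa^2\,|W^<|_{H^s}^2\,,
\end{equation*}
while bounding $|{\boldsymbol{k}}|^{2s}\mathrm{e}^{-2\sigma|{\boldsymbol{k}}|}$ over $|{\boldsymbol{k}}|\ge\kappa$ by its value at $|{\boldsymbol{k}}|=\kappa$ (valid once $\kappa\ge s/\sigma$) gives
\begin{equation*}
  |W^>|_{H^s}^2 = \sum_{|{\boldsymbol{k}}|\ge\kappa}^\alpha |{\boldsymbol{k}}|^{2s}\,\mathrm{e}^{-2\sigma|{\boldsymbol{k}}|}\, \mathrm{e}^{2\sigma|{\boldsymbol{k}}|}\,|w_{\boldsymbol{k}}^\alpha|^2 \le \kappa^{2s}\,\mathrm{e}^{-2\sigma\kappa}\,|W|_{G^\sigma}^2\,,
\end{equation*}
which is \eqref{q:Wgg} up to the constant $C_s$ absorbing the regime $\kappa<s/\sigma$.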
We recall that the global regularity results of Theorem~0 imply that, with Gevrey forcing, any solution $W\in H^1(\mathscr{M})$ will be in $G^\sigma(\mathscr{M})$ after a short time. As in \cite{temam-dw:ebal} and following \cite{matthies:01}, the central idea here is to split $W^\varepsilon$ into its low- and high-mode parts. The high-mode part $W^{\varepsilon>}$ is exponentially small by \eqref{q:Wgg}. We then compute $U^*(W^{0<},f^<;\varepsilon)$ such that $W^{\varepsilon<}-U^*$ becomes exponentially small after some time. Following historical precedent in the geophysical literature, it is natural to present our results in two parts, first locally in time and second globally. (Here ``local in time'' is used in a sense similar to ``local truncation error'' in numerical analysis, giving a bound on the time derivative of some ``error''.) The following lemma states that, in a suitable finite-dimensional space, we can find a ``slow manifold'' $W^{\varepsilon<}=U^*(W^{0<},f^<;\varepsilon)$ on which the normal velocity of $W^{\varepsilon<}$ is at most exponentially small: \begin{lemma}\label{t:suba} Let $s>3/2$ and $\eta>0$ be fixed. 
Given $W^0\in H^s(\mathscr{M})$ and $f\in H^s(\mathscr{M})$ with $\partial_tf=0$, there exists $\varepsilon_{**}(|W^0|_{H^s}^{},|f|_{H^s}^{},\eta)$ such that for $\varepsilon\le\varepsilon_{**}$ one can find $\kappa(\varepsilon)$ and $U^*(W^{0<},f^<;\varepsilon)$ that make the remainder function \begin{equation}\label{q:Rsdef}\begin{aligned} \mathcal{R}^*(W^{0<},f^<;\varepsilon) &:= {\sf P}^{\!{}^<}[(\mathsf{D} U^*)\,\mathcal{G}^*] + \frac1\varepsilon LU^*\\ &\qquad {}+ B^{\varepsilon<}(W^{0<}+U^*,W^{0<}+U^*) + AU^* - f^{\varepsilon<} \end{aligned}\end{equation} exponentially small in $\varepsilon$, \begin{equation} |\mathcal{R}^*(W^{0<},f^<;\varepsilon)|_{H^s}^{} \le \cnst{r} \bigl[(|W^{0<}|_{H^s}^{} + \eta)^2 + |f|_{H^s}^{}\bigr]\,\exp(-\eta/\varepsilon^{1/4}); \end{equation} here $\mathsf{D} U^*$ is the derivative of $U^*$ with respect to $W^{0<}$ and \begin{equation} \mathcal{G}^* := -B^{0<}(W^{0<}+U^*,W^{0<}+U^*) - AW^{0<} + f^{0<}. \end{equation} \end{lemma} \noindent{\bf Remarks.} \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } The bounds may depend on $s$, $\mu$ and $\mathscr{M}$ as well as on $\eta$, but only the latter is indicated explicitly here and in the proof below. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } Given $\kappa$ fixed, $U^*$ lives in the same space as $W^{\varepsilon<}$, that is, $(W^0,U^*)_{L^2}^{}=0$ and ${\sf P}^{\!{}^<} U^*=U^*$. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } In the leading-order case of \S\ref{s:o1}, the slow manifold is $U^0=0$ and the local error estimate is incorporated directly into the proof of Theorem~\ref{t:o1}; we therefore did not put these into a separate lemma. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. }\label{r:sm} Unlike formal constructions in the geophysical literature (see, e.g., \cite{ob:97,wbsv:95}), our slow manifold is not defined for all possible $W^0$ and $\varepsilon$.
Instead, given that $|W^0|_{G^\sigma}^{}\le R$, we can define $U^*$ for all $\varepsilon\le\varepsilon_{**}(R,\sigma)$; generally, the larger the set of $W^0$ over which $U^*$ is to be defined, the smaller $\varepsilon$ will have to be. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } In what follows, we will often write $U^*(W^0,f;\varepsilon)$ for $U^*({\sf P}^{\!{}^<} W^0,{\sf P}^{\!{}^<} f;\varepsilon)$; this should not cause any confusion. Using the Lemma and a technique similar to that used to prove Theorem~\ref{t:o1}, we can bound the ``net forcing'' on $W'=W^{\varepsilon<}-U^*$ by $\mathcal{R}^*$. The dissipation term $AW'$ then ensures that $W'$ eventually decays to an exponentially small size. This gives us our global result: \begin{theorem}\label{t:ho} Let $W_0\in H^1(\mathscr{M})$ and $\nabla f\in G^\sigma(\mathscr{M})$ be given with $\partial_tf=0$. Then there exist $\varepsilon_*(f;\sigma)$ and $T_*(|\nabla W_0|,|\nabla f|_{G^\sigma}^{})$ such that for $\varepsilon\le\varepsilon_*$ and $t\ge T_*$, we can approximate the fast variable $W^\varepsilon(t)$ by a function $U^*(W^0(t),f;\varepsilon)$ of the slow variable $W^0(t)$ up to an exponential accuracy, \begin{equation} |W^\varepsilon(t)-U^*(W^0(t),f;\varepsilon)|_{L^2}^{} \le K_*(|\nabla f|_{G^\sigma}^{},\sigma)\,\exp(-\sigma/\varepsilon^{1/4}). \end{equation} \end{theorem} \noindent As in Theorem~\ref{t:o1}, here $K_*$ is a continuous increasing function of its arguments; $W(t)=W^0(t)+W^\varepsilon(t)$ is the solution of \eqref{q:dW} with initial condition $W(0)=W_0$. As before, the bounds depend on $\mu$ and $\mathscr{M}$, but these are not indicated explicitly. \noindent{\bf Remarks.} \refstepcounter{rmkctr} \noindent{\bf\thermkctr. 
} With very minor changes in the proof of Theorem~\ref{t:ho} below, one could also show that, if $f\in H^{n+1}$ and $\partial_tf=0$, then $|W^\varepsilon(t)-U^n(W^0(t),f;\varepsilon)|_{L^2}^{}$ is bounded as $\varepsilon^{n/4}$ for sufficiently large $n$ and possibly something better for smaller $n$. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } Recalling remark~\ref{r:sm} above, our slow manifold is only defined for $\varepsilon$ sufficiently small for a given $|W^{0<}|$ (or equivalently, for $|W^{0<}|$ sufficiently small for a given $\varepsilon$). The results of Theorem~0 tell us that $W(t)$ will be inside a ball in $G^\sigma(\mathscr{M})$ after a sufficiently large $t$; we use (twice) the radius of this absorbing ball to fix the restriction on $\varepsilon$. Thus our approach sheds no light on the analogous problem in the inviscid case, which has no absorbing set. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } As proved in \cite{ju:07,kobelkov:06,petcu:3dpe}, assuming sufficiently smooth forcing, the primitive equations admit a finite-dimensional global attractor. Theorem~\ref{t:ho} states that, for $\varepsilon\le\varepsilon_*(|f|_{G^\sigma}^{})$, the solution will enter, and remain in, an exponentially thin neighbourhood of $U^*(W^{0<},f^<;\varepsilon)$ in $L^2(\mathscr{M})$ after some time. It follows that the global attractor must then be contained in this exponentially thin neighbourhood as well. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } The dynamics on this attractor is generally thought to be chaotic \cite{temam:iddsmp}. Thus our present results do not qualitatively affect the finite-time predictability estimate of \cite{temam-dw:ebal}. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } When $\partial_tf\ne0$, the slaving relation $U^*$ would have a non-local dependence on $t$.
Quasi-periodic forcing, however, can be handled by introducing an auxiliary variable $\boldsymbol{\theta}=(\theta_1,\cdots,\theta_n)$, where $n$ is the number of independent frequencies of $f$. The slaving relation $U^*$ would then depend on $\boldsymbol{\theta}$ as well as on $W^{0<}$. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } Bounds of this type are only available for the fast variable $W^\varepsilon$; no special bounds exist for the slow variable $W^0$ except in special cases, such as when the forcing $f$ is completely fast, $(W^0,f)_{L^2}^{}=0$. We next present the proofs of Lemma~\ref{t:suba} and Theorem~\ref{t:ho}. The first one follows closely that in \cite{temam-dw:ebal} which used a slightly different notation; we redo it here for notational coherence and since some estimates in it are needed in the proof of Theorem~\ref{t:ho}. As before, we write $(\cdot,\cdot)\equiv(\cdot,\cdot)_{L^2}$ and $|\cdot|\equiv|\cdot|_{L^2}^{}$ when there is no ambiguity. \subsection{Proof of Lemma~\ref{t:suba}} As usual, we use $c$ to denote a generic constant which may not be the same each time it appears. Constants may depend on $s$ and the domain $\mathscr{M}$ (and also on $\mu$ for non-generic ones), but dependence on $\eta$ is indicated explicitly. Since $s>3/2$, $H^s(\mathscr{M})$ is a Banach algebra, so if $u$ and $v\in H^s$, \begin{equation} |uv|_s^{} \le c\,|u|_s^{}|v|_s^{} \end{equation} where here and henceforth $|\cdot|_s^{} := |\cdot|_{H^s}^{}\,$. Let us take $\varepsilon\le1$ and $\kappa$ as given for now; restrictions on $\varepsilon$ will be stated as we go along and $\kappa$ will be fixed in \eqref{q:deltakappa} below. We construct the function $U^*$ iteratively as follows. First, let \begin{equation}\label{q:U1} \frac1\varepsilon LU^1 = - B^{\varepsilon<}(W^{0<},W^{0<}) + f^{\varepsilon<}\,, \end{equation} where $U^1\in\textrm{range}\,L$ for uniqueness; similarly, $U^n\in\textrm{range}\,L$ in what follows. 
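Concretely, \eqref{q:U1} can be solved mode by mode in Fourier space. Assuming, as suggested by \eqref{q:Pldef}, that $L$ acts diagonally on the fast modes with eigenvalues $\mathrm{i}\omega_{\boldsymbol{k}}^\alpha$ (the sign convention does not affect the point being made), the Fourier coefficients of the first iterate are
\begin{equation*}
  (U^1)_{\boldsymbol{k}}^\alpha = \frac{\varepsilon}{\mathrm{i}\omega_{\boldsymbol{k}}^\alpha}\, \Bigl( f_{\boldsymbol{k}}^\alpha - \bigl[B^{\varepsilon<}(W^{0<},W^{0<})\bigr]_{\boldsymbol{k}}^\alpha \Bigr) \qquad\textrm{for } 0<|{\boldsymbol{k}}|<\kappa,\ \alpha=\pm1,
\end{equation*}
which is well defined since $\omega_{\boldsymbol{k}}^\alpha\ne0$ on $\textrm{range}\,L$; this is the PDE analogue of the leading term $\varepsilon f/\mathrm{i}$ of $U=\varepsilon f/(\varepsilon\mu+\mathrm{i})$ in the simple model of \S\ref{s:ho}.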
For $n=1,2,\cdots$, let \begin{equation}\label{q:Unp1} \frac1\varepsilon LU^{n+1} = -{\sf P}^{\!{}^<}\bigl[(\mathsf{D} U^n)\mathcal{G}^n\bigr] - B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n) - AU^n + f^{\varepsilon<}, \end{equation} where $\mathsf{D} U^n$ is the Fr{\'e}chet derivative of $U^n$ with respect to $W^{0<}$ (regarded as living in an appropriate Hilbert space) and \begin{equation} \mathcal{G}^n := -B^{0<}(W^{0<}+U^n,W^{0<}+U^n) - AW^{0<} + f^{0<}. \end{equation} We note that the right-hand sides of \eqref{q:U1} and \eqref{q:Unp1} have no component in $\mathrm{ker}\,L$ (equivalently, they lie in $\textrm{range}\,L$), so $U^1$ and $U^{n+1}$ are well defined. Moreover, $U^n$ lives in the same space as $W^{\varepsilon<}$, that is, $U^n\in{\sf P}^{\!{}^<}\textrm{range}\,L$; in other words, $(W^0,U^n)=0$ and ${\sf P}^{\!{}^<} U^n=U^n$. For $\eta>0$, let $D_\eta(W^{0<})$ be the complex $\eta$-neighbourhood of $W^{0<}$ in ${\sf P}^{\!{}^<} H^s(\mathscr{M})$. With $W^{0<}$ defined by \eqref{q:Pldef}, this is \begin{equation}\label{q:Deta}\begin{aligned} D_\eta(W^0) = \biggl\{ &\hat W^0 : \hat W^0(\boldsymbol{x},t) = \sum_{|{\boldsymbol{k}}|<\kappa}\,\hat w_{\boldsymbol{k}}^0 X_{\boldsymbol{k}}^0\mathrm{e}^{\mathrm{i}{\boldsymbol{k}}\cdot\boldsymbol{x}} \quad\textrm{with }\\ &\quad\hat w_{(k_1,k_2,k_3)}^0 = \hat w_{(k_1,k_2,-k_3)}^0 \textrm{ and } \sum_{|{\boldsymbol{k}}|<\kappa}\,|{\boldsymbol{k}}|^{2s}\,|\hat w_{\boldsymbol{k}}^0-w_{\boldsymbol{k}}^0|^2 < \eta^2 \biggr\}. \end{aligned}\end{equation} Since $W^0(\boldsymbol{x},t)$ and $X_{\boldsymbol{k}}^0$ are real, $w_{\boldsymbol{k}}^0$ must satisfy (\ref{q:w0}a), but $\hat w_{\boldsymbol{k}}^0$ in \eqref{q:Deta} need not satisfy this condition although it must satisfy (\ref{q:w0}b). We can thus regard $D_\eta(W^{0<}) \subset \{ (w_{\boldsymbol{k}}^{}) : 0 < |{\boldsymbol{k}}| < \kappa \textrm{ and } w_{(k_1,k_2,-k_3)}=w_{(k_1,k_2,k_3)} \} \cong \mathbb{C}^m$ for some $m$. Let $\delta>0$ be given; it will be fixed below in \eqref{q:deltakappa}.
For any function $g$ of $W^{0<}$, let \begin{equation} |g(W^{0<})|_{s;n}^{} := \sup_{W\in D_{\eta-n\delta}(W^{0<})}\,|g(W)|_s^{}\,; \end{equation} this expression is meaningful when $D_{\eta-n\delta}(W^{0<})$ is non-empty, that is, for $n\in\{0,\cdots,\lfloor\eta/\delta\rfloor =: n_*\}$. For future reference, we note that \begin{equation} |W^{0<}|_{s;0}^{} \le |W^{0<}|_s^{} + \eta. \end{equation} Our first step is to obtain by induction a couple of uniform bounds \eqref{q:bdUn}--\eqref{q:bdUW}, valid for $n\in\{1,\cdots,n_*\}$, which will be useful later. First, for $U^1$, we have \begin{equation} \frac1\varepsilon|LU^1|_{s;1}^{} \le |B^{\varepsilon<}(W^{0<},W^{0<})|_{s;1}^{} + |f^{\varepsilon<}|_s^{} \end{equation} which, using the estimate $|B(W,W)|_s^{}\le c\,|\nabla W|_s^2$ and \eqref{q:ipoi}, implies \begin{equation}\label{q:bdU1} |U^1|_{s;1}^{} \le \varepsilon\,\cnst0\,\bigl(\kappa^2|W^{0<}|_{s;1}^2 + |f^{\varepsilon<}|_{s}^{}\bigr). \end{equation} Next, we derive an iterative estimate for $|U^n|_{s;n}^{}$. Using the fact that $|\cdot|_{s;m}^{} \le |\cdot|_{s;n}^{}$ whenever $m\ge n$, we have for $n=1,2,\cdots$, \begin{equation}\begin{aligned} \frac1\varepsilon\,|U^{n+1}|_{s;n+1}^{} \le |(\mathsf{D} U^n)\mathcal{G}^n|_{s;n+1}^{} &+ |B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n)|_{s;n}^{}\\ &+ \mu\kappa^2\,|W^{0<}|_{s;n}^{} + |f^{\varepsilon<}|_s^{}\,. \end{aligned}\end{equation} The first term on the right-hand side can be bounded by a technique based on Cauchy's integral formula: Let $D_\eta(z_0^{})\subset\mathbb{C}$ be the complex $\eta$-neighbourhood of $z_0^{}$. For $\varphi: D_\eta(z_0^{})\to\mathbb{C}$ analytic and $\delta\in(0,\eta)$, we can bound $|\varphi'|$ in $D_{\eta-\delta}(z_0^{})$ by $|\varphi|$ in $D_\eta(z_0^{})$ as \begin{equation}\label{q:cauchy} |\varphi'\cdot z|_{D_{\eta-\delta}(z_0^{})} \le \frac1\delta |\varphi|_{D_\eta(z_0^{})}^{}|z|_{\mathbb{C}}^{}\,. 
\end{equation} Now by \eqref{q:U1} $U^1$ is an analytic function of the finite-dimensional variable $W^{0<}$, so assuming that $U^n$ is analytic in $W^{0<}$ we can regard the Fr{\'e}chet derivative $\mathsf{D} U^n$ as an ordinary derivative. Taking for $\varphi'$ in \eqref{q:cauchy} the derivative of $U^n$ in the direction $\mathcal{G}^n$ (i.e.\ working on the complex plane containing $0$ and $\mathcal{G}^n$), we have \begin{equation} |(\mathsf{D} U^n)\mathcal{G}^n|_{s;n+1}^{} \le \frac1\delta\, |U^n|_{s;n}|\mathcal{G}^n|_{s;n}^{}\,. \end{equation} Using the estimate \begin{equation} |B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n)|_{s;n}^{} \le c\,|\nabla(W^{0<}+U^n)|_{s;n}^2 \le c\,\kappa^2|W^{0<}+U^n|_{s;n}^2 \end{equation} we have \begin{equation}\label{q:bdUn1}\begin{aligned} |U^{n+1}|_{s;n+1}^{} &\le \frac{\varepsilon c}\delta\,|U^n|_{s;n}^{} \bigl( c\,\kappa^2\,|W^{0<}+U^n|_{s;n}^2 + \mu\kappa^2\,|W^{0<}|_{s;n}^{} + |f^{0<}|_s^{} \bigr)\\ &\qquad+ \varepsilon\kappa^2\,c\,|W^{0<}+U^n|_{s;n}^2 + \mu\varepsilon\kappa^2\,|U^n|_{s;n} + \varepsilon\,|f^{\varepsilon<}|_s^{}\,. \end{aligned}\end{equation} To complete the inductive step, let us now set \begin{equation}\label{q:deltakappa} \delta = \varepsilon^{1/4} \qquad\textrm{and}\qquad \kappa = \varepsilon^{-1/4}. \end{equation} With this, we have from \eqref{q:bdUn1} \begin{equation}\label{q:bdUn2}\begin{aligned} |U^{n+1}|_{s;n+1}^{} &\le \varepsilon^{1/4}\,\cnst1\,|U^n|_{s;n}^{} \bigl( |W^{0<}+U^n|_{s;n}^2 + \mu\,|W^{0<}|_{s;n}^{} + \varepsilon^{1/2}\,|f^{0<}|_s^{} \bigr)\\ &\qquad+ \varepsilon^{1/2}\,\cnst2\,\bigl(|W^{0<}+U^n|_{s;n}^2 + \mu\,|U^n|_{s;n} + \varepsilon^{1/2}\,|f^{\varepsilon<}|_s^{}\bigr). 
\end{aligned}\end{equation} We require $\varepsilon$ to be such that \begin{equation}\label{q:eps1} \varepsilon^{1/4}\,(\cnst0+\cnst1+\cnst2)\, \bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f|_s^{} \bigr) \le \sfrac14\min\{ 1, |W^{0<}|_s^{} \} \end{equation} and claim that with this we have \begin{equation}\label{q:bdUn} |U^n|_{s;n}^{} \le \varepsilon^{1/4}\,\cnst{U}\, \bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{}\bigr) \end{equation} with $\cnst{U}=4\,(\cnst0+\cnst1+\cnst2)$. Now since $\varepsilon\le1$, \eqref{q:bdU1} implies that \eqref{q:bdUn} holds for $n=1$, so let us suppose that it holds for $m=0,\cdots,n$ for some $n<n_*$. Now \eqref{q:eps1} and \eqref{q:bdUn} imply that \begin{equation}\label{q:bdUW} |U^m|_{s;m}^{} \le |W^{0<}|_s^{} \le |W^{0<}|_{s;0}^{} \qquad\textrm{and}\qquad |U^m|_{s;m}^{} \le 1 \end{equation} for $m=0,\cdots,n$. Using these in \eqref{q:bdUn2}, we have \begin{equation}\begin{aligned} |U^{n+1}|_{s;n+1}^{} &\le 4\,\varepsilon^{1/4}\,\cnst1\,\bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{}\bigr)\,|U^n|_{s;n}^{}\\ &\hskip30pt {}+ 4\,\varepsilon^{1/2}\,\cnst2\,\bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{}\bigr)\\ &\le \varepsilon^{1/4}\cnst{U}\,\bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{}\bigr). \end{aligned}\end{equation} This proves \eqref{q:bdUn} and \eqref{q:bdUW} for $n=0,\cdots,n_*$. We now turn to the remainder \begin{equation} \mathcal{R}^0 := B^{\varepsilon<}(W^{0<},W^{0<}) - f^{\varepsilon<} \end{equation} and, for $n=1,\cdots$, \begin{equation}\label{q:Rndef} \mathcal{R}^n := {\sf P}^{\!{}^<}[(\mathsf{D} U^n)\,\mathcal{G}^n] + \frac1\varepsilon LU^n + B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n) + AU^n - f^{\varepsilon<}. \end{equation} We seek to show that, for $n=0,\cdots,n_*$, $\mathcal{R}^n$ scales as $\mathrm{e}^{-n}$. We first note that by construction $\mathcal{R}^n$ has no component in $\textrm{ker}\,L$, so $L^{-1}\mathcal{R}^n$ is well-defined.
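To see how this scaling produces the exponential factor in the lemma, note that once the contraction $|\mathcal{R}^{n+1}|_{s;n+2}^{}\le\mathrm{e}^{-1}|\mathcal{R}^n|_{s;n+1}^{}$ is established, iterating it $n_*-1$ times gives
\begin{equation*}
  |\mathcal{R}^{n_*-1}|_{s;n_*}^{} \le \mathrm{e}^{-(n_*-1)}\,|\mathcal{R}^0|_{s;1}^{}, \qquad\textrm{with}\quad n_* = \lfloor\eta/\delta\rfloor \ge \eta\,\varepsilon^{-1/4}-1
\end{equation*}
by \eqref{q:deltakappa}, so that $\mathrm{e}^{-(n_*-1)}\le\mathrm{e}^2\exp(-\eta/\varepsilon^{1/4})$; the harmless factor $\mathrm{e}^2$ is absorbed into the constant $\cnst{r}$.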
Taking $U^0=0$, we have \begin{equation}\label{q:RUU} \mathcal{R}^n = \frac1\varepsilon L\,(U^n - U^{n+1}). \end{equation} We then compute \begin{equation}\begin{aligned} \mathcal{R}^{n+1} &= {\sf P}^{\!{}^<}[(\mathsf{D} U^{n+1})\,\mathcal{G}^{n+1}] + \frac1\varepsilon LU^{n+1}\\ &\qquad {}+ B^{\varepsilon<}(W^{0<}+U^{n+1},W^{0<}+U^{n+1}) + AU^{n+1} - f^{\varepsilon<}\\ &= {\sf P}^{\!{}^<}[(\mathsf{D} U^{n+1})(\mathcal{G}^n+\delta\mathcal{G}^n)] + \frac1\varepsilon LU^n - \mathcal{R}^n\\ &\qquad {}+ B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n) - \varepsilon\,B^{\varepsilon<}(W^{0<}+U^n,L^{-1}\mathcal{R}^n)\\ &\qquad {}- \varepsilon\,B^{\varepsilon<}(L^{-1}\mathcal{R}^n,W^{0<}+U^{n+1}) + AU^n - \varepsilon\, AL^{-1}\mathcal{R}^n - f^{\varepsilon<}\\ &= {\sf P}^{\!{}^<}[(\mathsf{D} U^n)\,\delta\mathcal{G}^n] - \varepsilon\,L^{-1}{\sf P}^{\!{}^<}[(\mathsf{D}\mathcal{R}^n)\,\mathcal{G}^{n+1}] - \varepsilon\,AL^{-1}\mathcal{R}^n\\ &\qquad {}- \varepsilon\,B^{\varepsilon<}(L^{-1}\mathcal{R}^n,W^{0<}+U^{n+1}) - \varepsilon\,B^{\varepsilon<}(W^{0<}+U^n,L^{-1}\mathcal{R}^n), \end{aligned}\end{equation} where we have used \eqref{q:RUU} and where \begin{equation}\begin{aligned} \delta\mathcal{G}^n &:= \mathcal{G}^{n+1} - \mathcal{G}^n\\ &= \varepsilon\,B^{0<}(W^{0<}+U^{n+1},L^{-1}\mathcal{R}^n) + \varepsilon\,B^{0<}(L^{-1}\mathcal{R}^n,W^{0<}+U^n). 
\end{aligned}\end{equation} To obtain a bound on $\mathcal{R}^n$, we compute using \eqref{q:bdUW} \begin{equation}\label{q:Gyn}\begin{aligned} |\mathcal{G}^n|_{s;n}^{} &\le c\,\bigl( |\nabla(W^{0<}+U^n)|_{s;n}^2 + \mu\,|\Delta W^{0<}|_{s;n}^{} + |f^{0<}|_s^{}\bigr)\\ &\le c\,\kappa^2\,\bigl( |W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f|_s^{}\bigr), \end{aligned}\end{equation} as well as \begin{equation}\begin{aligned} |\delta\mathcal{G}^n|_{s;n+1}^{} &\le \varepsilon\,c\,|\nabla(W^{0<}+U^{n+1})|_{s;n+1}^{}|\nabla L^{-1}\mathcal{R}^n|_{s;n+1}^{}\\ &\qquad {}+ \varepsilon\,c\,|\nabla L^{-1}\mathcal{R}^n|_{s;n+1}^{}|\nabla(W^{0<}+U^n)|_{s;n}^{}\\ &\le \varepsilon\kappa^2\,c\,|\mathcal{R}^n|_{s;n+1}^{}|W^{0<}|_{s;0}^{}\,. \end{aligned}\end{equation} (Note that we can only estimate $\delta\mathcal{G}^n$ in $D_{\eta-(n+1)\delta}^{}$ and not in $D_{\eta-n\delta}^{}$; similarly, since the definition of $\mathcal{R}^n$ involves $\mathsf{D} U^n$, it can only be estimated in $D_{\eta-(n+1)\delta}^{}$.) 
We then have \begin{equation}\begin{aligned} \!\!\!\!\!|\mathcal{R}^{n+1}|_{s;n+2}^{} &\le |\mathsf{D} U^n|_{s;n+1}^{}|\delta\mathcal{G}^n|_{s;n+1}^{} + \varepsilon\,|L^{-1}\mathsf{D}\mathcal{R}^n|_{s;n+2}^{}|\mathcal{G}^{n+1}|_{s;n+1}\\ &\qquad {}+ \varepsilon\mu\kappa^2\,|\mathcal{R}^n|_{s;n+1}^{} + \varepsilon\,|\nabla L^{-1}\mathcal{R}^n|_{s;n+1}^{}|\nabla(W^{0<}+U^{n+1})|_{s;n+1}^{}\\ &\qquad+ \varepsilon\,|\nabla(W^{0<}+U^n)|_{s;n}^{}|\nabla L^{-1}\mathcal{R}^n|_{s;n+1}^{}\\ &\le \frac1\delta|U^n|_{s;n}\,\varepsilon\,\kappa^2\,|\mathcal{R}^n|_{s;n+1}|W^{0<}|_{s;0} + c\,\frac\varepsilon\delta\,|\mathcal{R}^n|_{s;n+1}^{}|\mathcal{G}^{n+1}|_{s;n+1}^{}\\ &\qquad {}+ 4\,\varepsilon\kappa^2\,|\mathcal{R}^n|_{s;n+1}^{}|W^{0<}|_{s;0}^{} + \varepsilon\mu\kappa^2\,|\mathcal{R}^n|_{s;n+1}^{}\\ &\le \varepsilon^{1/4}\,|\mathcal{R}^n|_{s;n+1}^{}\,\cnst{e}(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{} + \mu) \end{aligned}\end{equation} where for the last inequality we have assumed that \begin{equation}\label{q:eps1a} \varepsilon^{1/4} \le \min\{ \mu/|W^{0<}|_{s;0}^{}, \mu\,\cnst{U}/4 \}. \end{equation} If we require $\varepsilon$ to satisfy, in addition to $\varepsilon\le1$, \eqref{q:eps1} and \eqref{q:eps1a}, \begin{equation}\label{q:eps2} \varepsilon^{1/4}\,\cnst{e}(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{} + \mu) \le \frac1\mathrm{e}\,, \end{equation} we have, for $n=0,1,\cdots,n_*-1$, \begin{equation} |\mathcal{R}^{n+1}|_{s;n+2}^{} \le \frac1\mathrm{e}\,|\mathcal{R}^n|_{s;n+1}^{}\,. \end{equation} Along with the estimate \begin{equation} |\mathcal{R}^0|_{s;1}^{} \le \cnst{r}\, (|W^{0<}|_{s;0}^2 + |f^<|_s^{}), \end{equation} taking $n=n_*-1$ leads us to \begin{equation}\label{q:bdRs}\begin{aligned} |\mathcal{R}^{n_*-1}|_{H^s}^{} \le |\mathcal{R}^{n_*-1}|_{s;n_*}^{} &\le \cnst{r}\, (|W^{0<}|_{s;0}^2 + |f^<|_s^{})\,\exp(-\eta/\varepsilon^{1/4})\\ &\le \cnst{r}\, [(|W^{0<}|_s^{} + \eta)^2 + |f^<|_s^{}]\,\exp(-\eta/\varepsilon^{1/4}). 
\end{aligned}\end{equation} The lemma follows by setting $U^*=U^{n_*-1}$ and taking as $\varepsilon_{**}$ the largest value that satisfies $\varepsilon\le1$, \eqref{q:eps1}, \eqref{q:eps1a} and \eqref{q:eps2}. For use later in the proof of Theorem~\ref{t:ho}, we also bound \begin{equation}\label{q:bdRR}\begin{aligned} \bigl|\nabla(1-{\sf P}^{\!{}^<})&[(\mathsf{D} U^*)\mathcal{G}^*]\bigr|_{L^2}^{} \le c\,\mathrm{e}^{-\sigma\kappa}|(\mathsf{D} U^*)\mathcal{G}^*|_{2;n_*}\\ &\le c\,\mathrm{e}^{-\sigma\kappa}\,\frac1\delta\,|U^*|_{2;n_*-1}^{}|\mathcal{G}^*|_{2;n_*-1}\\ &\le c\,\mathrm{e}^{-\sigma\kappa}\,\kappa^2\,(|W^{0<}|_{2;0}^2 + \mu\,|W^{0<}|_{2;0}^{} + |f|_2^{})^2 \end{aligned}\end{equation} where for the last inequality we have used \eqref{q:bdUn} and \eqref{q:Gyn} with $n=n_*-1$. \subsection{Proof of Theorem~\ref{t:ho}} We follow the conventions of the proofs of Theorem~\ref{t:o1} and Lemma~\ref{t:suba} on constants. We will be rather terse in parts of this proof which mirror a development in the proof of Theorem~\ref{t:o1}. First, we recall Theorem~0 and consider $t\ge T:=\max\{T_2,T_\sigma\}$ so that $|\nabla^2W(t)|\le K_2$ and $|\nabla^2W(t)|_{G^\sigma}^{}\le M_\sigma$. We use Lemma~\ref{t:suba} with $s=2$ and, collecting the constraints on $\varepsilon$ there, require that \begin{equation}\label{q:eps1*}\begin{aligned} &\varepsilon^{1/4}\,\cnst{U}\,\bigl( (K_2 + \eta)^2 + \mu\,(K_2 + \eta) + |f|_2^{}\bigr) \le \sfrac14 \min\{ 1, K_2 \},\\ &\varepsilon^{1/4} \le \min\{ \mu/(K_2 + \eta), \mu\,\cnst{U}/4, 1 \},\\ &\varepsilon^{1/4}\, \cnst{e}\,\bigl( (K_2 + \eta)^2 + \mu\,(K_2 + \eta) + \mu + |f|_2^{}\bigr) \le \frac1\mathrm{e}, \end{aligned}\end{equation} where $\cnst{e}$ is that in \eqref{q:eps2}. (We note that the left-hand sides of all these constraints are monotone increasing in $K_2$, so they do not cause problems when $|W^{0<}|<K_2$.) Further constraints on $\varepsilon$ will be imposed below.
We note the bound \eqref{q:bdRs} and \begin{equation}\label{q:bdUs} |U^*|_{H^2}^{} \le |U^*|_{2;n_*}^{} \le \varepsilon^{1/4}\,\cnst{U}\,\bigl( (K_2 + \eta)^2 + \mu\,(K_2 + \eta) + |f|_2^{}\bigr) \end{equation} which follows from \eqref{q:bdUW}. We fix $\kappa=\varepsilon^{-1/4}$ as in \eqref{q:deltakappa} and consider the equation of motion for the low modes $W^<$, \begin{equation}\label{q:ddtWl}\begin{aligned} \partial_tW^< + \frac1\varepsilon LW^< + B^<(W^<,W^<) &+ AW^< - f^<\\ &= - B^<(W^>,W) - B^<(W^<,W^>)\\ &=: \hat{\mathcal{H}}\,. \end{aligned}\end{equation} Writing \begin{equation} W^{\varepsilon<} = U^*(W^{0<},f^<;\varepsilon) + W'\,, \end{equation} the equation governing the finite-dimensional variable $W'(t)$ is \begin{equation}\begin{aligned} \partial_tW' + \frac1\varepsilon LW' + B^{\varepsilon<}(W^<,W^<) &+ AW'\\ &= -\partial_t U^* - \frac1\varepsilon LU^* - AU^* + f^{\varepsilon<} + \hat{\mathcal{H}}^\eps. \end{aligned}\end{equation} Using \eqref{q:Rsdef}, this can be written as \begin{equation}\label{q:ddtWp}\begin{aligned} \partial_tW' &+ \frac1\varepsilon LW' + B^{\varepsilon<}(W^<,W') + B^{\varepsilon<}(W',W^{0<}+U^*) + AW'\\ &= -\mathcal{R}^* - (1-{\sf P}^{\!{}^<})[(\mathsf{D} U^*)\,\mathcal{G}^*] + \hat{\mathcal{H}}^\eps\\ &=: -\mathcal{R}^* + \mathcal{H}^\eps. \end{aligned}\end{equation} Multiplying by $W'$ in $L^2(\mathscr{M})$, we find \begin{equation}\label{q:ddt0Wp} \frac12\ddt{\;}|W'|^2 + (W',B^{\varepsilon<}(W',W^{0<}+U^*)) + \mu\,|\nabla W'|^2 = -(W',\mathcal{R}^*) + (W',\mathcal{H}^\eps). \end{equation} We now write the nonlinear term as \begin{equation}\begin{aligned} (W',B^{\varepsilon<}(W',W^{0<}+U^*)) &= (W',B(W',W^{0<}+U^*))\\ &= (W',B(W',U^*)) + (W',B(W',W^{0<}))\\ &= (W',B(W',U^*)) - (W^{0<},B(W',W')).
\end{aligned}\end{equation} Following the proof of Theorem~\ref{t:o1} [cf.~\eqref{q:ddtwfour}], we rewrite \eqref{q:ddt0Wp} as \begin{equation}\label{q:ddt1Wp}\begin{aligned} \ddt{\;}\bigl(\mathrm{e}^{\nu t}|W'|^2\bigr) &+ \mu\,\mathrm{e}^{\nu t}\,|\nabla W'|^2 \le -2\,\mathrm{e}^{\nu t}\,(W',\mathcal{R}^*) + 2\,\mathrm{e}^{\nu t}\,(W',\mathcal{H}^\eps)\\ &{}- 2\,\mathrm{e}^{\nu t}\,(W',B(W',U^*)) + 2\,\mathrm{e}^{\nu t}\,(W^{0<},B(W',W')). \end{aligned}\end{equation} We bound the first two terms on the right-hand side as \begin{equation}\begin{aligned} &2\,|(W',\mathcal{R}^*)| \le \frac\mu6\,|\nabla W'|^2 + \frac{c}\mu\,|\mathcal{R}^*|^2,\\ &2\,|(W',\mathcal{H}^\eps)| \le \frac\mu6\,|\nabla W'|^2 + \frac{c}\mu\,|\mathcal{H}^\eps|^2. \end{aligned}\end{equation} As for the third term in \eqref{q:ddt1Wp}, we bound it as \begin{equation}\begin{aligned} 2\,|(W'&,B(W',U^*))| \le c\,|W'|_{L^6}|\nabla W'|_{L^2}|\nabla U^*|_{L^3}\\ &\le |\nabla W'|^2 \> \cnst1\,\varepsilon^{1/4}\bigl( (|W^{0<}|_2^{} + \eta)^2 + \mu\,(|W^{0<}|_2^{} + \eta) + |f^<|_2^{}\bigr)\\ \end{aligned}\end{equation} where we have used \eqref{q:bdUs} in the last step. We now require $\varepsilon$ to be small enough so that \begin{equation}\label{q:eps10} \varepsilon^{1/4}\cnst1\,\bigl( (K_2 + \eta)^2 + \mu\,K_2^{} + \mu\,\eta + |f|_2^{}\bigr) \le \frac\mu6, \end{equation} which implies that, since $|W^{0<}|_2^{}\le K_2$ by hypothesis, \begin{equation} 2\,|(W',B(W',U^*))| \le \frac\mu6\,|\nabla W'|^2. \end{equation} With these estimates, \eqref{q:ddt1Wp} becomes \begin{equation}\label{q:ddt2Wp}\begin{aligned} \ddt{\;}\bigl(\mathrm{e}^{\nu t}|W'|^2\bigr) + \frac\mu2\,\mathrm{e}^{\nu t}\,|\nabla W'|^2 &\le \frac{c}\mu\,\mathrm{e}^{\nu t}\,\bigl( |\mathcal{R}^*|^2 + |\mathcal{H}^\eps|^2 \bigr)\\ &\qquad+ 2\,\mathrm{e}^{\nu t}\,(W^{0<},B(W',W')). 
\end{aligned}\end{equation} Integrating this inequality and multiplying by $\mathrm{e}^{-\nu T}$, we find \begin{equation}\label{q:WpTt}\begin{aligned} \!\!\mathrm{e}^{\nu t}\,|W'(T&+t)|^2 - |W'(T)|^2 + \frac\mu2 \int_T^{T+t} \mathrm{e}^{\nu(\tau-T)}|\nabla W'|^2 \>\mathrm{d}\tau\\ &\le \int_T^{T+t} \mathrm{e}^{\nu(\tau-T)}\,\Bigl\{ \frac{c}\mu \bigl(|\mathcal{R}^*|^2 + |\mathcal{H}^\eps|^2\bigr) + 2\,(W^{0<},B(W',W')) \Bigr\} \>\mathrm{d}\tau. \end{aligned}\end{equation} We then integrate the last term by parts as in \eqref{q:Bws}, \begin{equation}\label{q:IBws}\begin{aligned} \!\!\!\!\!\int_T^{T+t} &\mathrm{e}^{\nu(\tau-T)}\,(W^{0<},B(W',W')) \>\mathrm{d}\tau\\ &= \varepsilon\,\mathrm{e}^{\nu t}\,(W^{0<},B_\omega(W',W'))(T+t) - \varepsilon\,(W^{0<},B_\omega(W',W'))(T)\\ &\quad {}- \varepsilon\int_T^{T+t} \mathrm{e}^{\nu(\tau-T)}\,\bigl\{ \nu\,(W^{0<},B_\omega(W',W')) + (\partial_\tau W^{0<},B_\omega(W',W'))\\ &\hskip169pt {}+ 2\,(W^{0<},B_\omega(\dy_\tau^* W',W')) \bigr\} \>\mathrm{d}\tau. \end{aligned}\end{equation} To bound the terms in the integral, we first need to estimate \begin{equation}\begin{aligned} |\nabla B^{\varepsilon<}(W',W^{0<}+U^*)|_{L^2}^{} &\le \kappa\,|B^{\varepsilon<}(W',W^{0<}+U^*)|_{L^2}^{}\\ &\le c\,\kappa\,|\nabla W'|_{L^2}^{}|\nabla(W^{0<}+U^*)|_{L^\infty}^{}\\ &\le c\,\kappa^2\,|\nabla W'|\,|W^{0<}+U^*|_{H^2}\\ &\le c\,\kappa^2\,|\nabla W'|\,|\nabla^2W^0| \end{aligned}\end{equation} where for the last inequality we have used \eqref{q:bdUW}. Using this and the bound \begin{equation} |\nabla B^{\varepsilon<}(W^<,W')|_{L^2}^{} \le c\,\kappa\,|\nabla W^<|_{L^\infty}^{}|\nabla W'|_{L^2} \le c\,\kappa^2\,|\nabla^2 W|\,|\nabla W'| \end{equation} for the term $B^{\varepsilon<}(W^<,W')$ in \eqref{q:ddtWp} gives us \begin{equation} |\nabla\dy_t^* W'|_{L^2}^{} \le c\,\kappa^2\,|\nabla W'|\,|\nabla^2W| + \mu\,\kappa^2\,|\nabla W'| + |\nabla\mathcal{R}^*| + |\nabla\mathcal{H}^\eps|. 
\end{equation} The worst term in \eqref{q:IBws} can now be bounded as \begin{equation}\begin{aligned} \varepsilon\,|(W^{0<},B_\omega(\dy_t^* W',W'))| &\le \varepsilon\,c\,|W^{0<}|_{L^\infty}^{}|\nabla W'|_{L^2}^{}|\nabla\dy_t^* W'|_{L^2}^{}\\ &\le \cnst2\,\varepsilon\,\kappa^2\,K_2^2\,|\nabla W'|^2 + \cnst3\,\varepsilon\,\kappa^2\,\mu\,K_2\,|\nabla W'|^2\\ &\qquad {}+ \frac\mu{48}\,|\nabla W'|^2 + \frac{\varepsilon^2c\,K_2^2}\mu\,(|\nabla\mathcal{R}^*|^2 + |\nabla\mathcal{H}^\eps|^2). \end{aligned}\end{equation} If we now require that $\varepsilon$ satisfy \begin{equation}\label{q:eps11} \varepsilon^{1/2}\,\cnst2\,K_2^2 \le \frac\mu{48} \qquad\textrm{and}\qquad \varepsilon^{1/2}\,\cnst3\,K_2 \le \frac1{48}\,, \end{equation} we have \begin{equation} \varepsilon\,|(W^{0<},B_\omega(\dy_t^* W',W'))| \le \frac\mu{16}\,|\nabla W'|^2 + \frac{\varepsilon^2c\,K_2^2}\mu\,(|\nabla\mathcal{R}^*|^2 + |\nabla\mathcal{H}^\eps|^2). \end{equation} Bounding another term in \eqref{q:IBws} as \begin{equation}\begin{aligned} \varepsilon\,|(\partial_tW^{0<},B_\omega(W',W'))| &\le \varepsilon\,c\,|\partial_tW^0|_{L^\infty}^{}|\nabla W'|_{L^2}^2\\ &\le \varepsilon\,c\,|\partial_tW^0|_{H^2}^{}|\nabla W'|_{L^2}^2\\ &\le \varepsilon\,\cnst4\,(\kappa K_2^2 + \mu\kappa^2 K_2 + |f|_2^{})\,|\nabla W'|^2 \end{aligned}\end{equation} and requiring that $\varepsilon$ also satisfy \begin{equation}\label{q:eps12} \varepsilon^{1/2}\,\cnst4\,(K_2^2 + \mu K_2 + |f|_2^{}) \le \frac{\mu}{12}, \end{equation} plus a similar estimate for the first (easiest) term in \eqref{q:IBws}, we can bound the integral on the r.h.s.\ as \begin{equation}\begin{aligned} \!\int_T^{T+t} &\mathrm{e}^{\nu(\tau-T)}\,\bigl| (W^{0<},B(W',W')) \bigr| \>\mathrm{d}\tau\\ &\le \frac\mu2 \int_T^{T+t} \mathrm{e}^{\nu(\tau-T)}\,|\nabla W'|^2 \>\mathrm{d}\tau + \frac{\varepsilon\,c}{\mu^2}\,K_2^2\,(\|\nabla\mathcal{R}^*\|^2 + \|\nabla\mathcal{H}^\eps\|^2)\,(\mathrm{e}^{\nu t}-1) \end{aligned}\end{equation} where 
$\|\nabla\mathcal{R}^*\|:=\sup_{|W^0|\le K_2}|\nabla\mathcal{R}^*(W^0,f;\varepsilon)|$ and similarly for $\|\nabla\mathcal{H}^\eps\|$. Bounding the limit term in \eqref{q:IBws} as \begin{equation} |(W^{0<},B_\omega(W',W'))| \le C\,|W^{0<}|_{L^\infty}^{}|\nabla W'|_{L^2}^2 \le c\,K_2\,\kappa^2\,|W'|^2, \end{equation} \eqref{q:WpTt} becomes \begin{equation}\begin{aligned} (1-\varepsilon^{1/2}\cnst5\,&K_2)\,|W'(T+t)|^2\\ &\le \mathrm{e}^{-\nu t}\,(1 + \varepsilon^{1/2}\cnst5\,K_2)\,|W'(T)|^2 + \frac{\varepsilon\,c}{\mu^2}\,K_2^2\,(\|\nabla\mathcal{R}^*\|^2 + \|\nabla\mathcal{H}^\eps\|^2). \end{aligned}\end{equation} To estimate $\|\nabla\mathcal{H}^\eps\|$, we use \eqref{q:ddtWl}, \eqref{q:Wgg} and \eqref{q:WGsig}, to obtain \begin{equation}\begin{aligned} |\nabla B^{\varepsilon<}(W^>,W)|_{L^2}^{} + |\nabla B^{\varepsilon<}(W^<,W^>)|_{L^2}^{} &\le c\,\kappa\,|\nabla W^>|_{L^4}^{}|\nabla W|_{L^4}^{}\\ &\le c\,\kappa\,\mathrm{e}^{-\sigma\kappa}\,M_\sigma\,K_2\,. \end{aligned}\end{equation} Now \eqref{q:bdRR} implies that \begin{equation} \bigl|\nabla(1-{\sf P}^{\!{}^<})[(\mathsf{D} U^*)\mathcal{G}^*]\bigr|_{L^2}^{} \le c\,\mathrm{e}^{-\sigma\kappa}\,\kappa^2\,\bigl((K_2 + \eta)^2 + \mu\,(K_2 + \eta) + |f|_2^{}\bigr)^2; \end{equation} this and the previous estimate give us \begin{equation} \|\nabla\mathcal{H}^\eps\|_{L^2}^{} \le c\,\mathrm{e}^{-\sigma\kappa}\,\kappa^2\, \bigl[ M_\sigma K_2 + \bigl( (K_2 + \eta)^2 + \mu\,(K_2 + \eta) + |f|_2^{}\bigr)^2 \bigr]. \end{equation} Meanwhile, using \eqref{q:bdRs} we have \begin{equation} \|\nabla\mathcal{R}^*\|_{L^2}^{} \le c\,\bigl( (K_2 + \eta)^2 + |f|_2^{}\bigr)\,\exp(-\eta/\varepsilon^{1/4}). 
\end{equation} Setting $\eta=\sigma$ and requiring $\varepsilon$ to satisfy, in addition to \eqref{q:eps1*}, \eqref{q:eps11} and \eqref{q:eps12}, \begin{equation}\label{q:eps13} \varepsilon^{1/2}\,\cnst5\,K_2 \le \frac12\,, \end{equation} we have \begin{equation}\begin{aligned} |&W'(T+t)|^2 \le 4\,\mathrm{e}^{-\nu t}\,|W'(T)|^2\\ &\quad{}+ \frac{c}{\mu^2}\,\bigl[ (K_2 + \sigma)^4 + \mu^2(K_2+\sigma)^2 + |f|_2^2 + |f|_2^4 + M_\sigma^2 K_2^2 \bigr]\,\exp(-2\sigma/\varepsilon^{1/4}). \end{aligned}\end{equation} Since $|W'(T)|\le c\,K_2$ by Theorem~0, by taking $t$ sufficiently large we have \begin{equation}\begin{aligned} |W'(T+t)| \le \frac{c}{\mu} \bigl[ (K_2+\sigma)^4 + \mu^2(K_2+\sigma)^2 + |f|_2^2 +{} &|f|_2^4 + M_\sigma^2 K_2^2\bigr] \times\\ &\varepsilon^{1/2}\,\exp(-\sigma/\varepsilon^{1/4}). \end{aligned}\end{equation} And since \begin{equation} |W^\varepsilon-U^*|^2 \le |W^{\varepsilon>}|^2 + |W'|^2 \le c\,M_\sigma^2\exp(-2\sigma/\varepsilon^{1/4}) + |W'|^2, \end{equation} the theorem follows by the same argument used to obtain Theorem~\ref{t:o1}. \appendix \section{}\label{s:nores} \noindent{\bf Proof of Lemma~\ref{t:nores}.} Since $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=0$ when $j_3k_3|{\boldsymbol{l}}|=0$, we assume that $j_3k_3|{\boldsymbol{l}}|\ne0$ in the rest of this proof. As before, all wavevectors are understood to live in $\Zahl_L-\{0\}$ and their third component take values in $\{0,\pm2\pi/L_3,\pm4\pi/L_3,\cdots\}$. We start by noting that an {\em exact\/} resonance is only possible when ${\boldsymbol{j}}$ and ${\boldsymbol{k}}$ lie on the same ``resonance cone'', that is, when $|{\boldsymbol{j}}|/|j_3|=|{\boldsymbol{k}}|/|k_3|$, or equivalently, when $|{\boldsymbol{j}}'|/|j_3|=|{\boldsymbol{k}}'|/|k_3|$. 
There are only two cases to consider: ($\textbf{a}'$)~When ${\boldsymbol{j}}'={\boldsymbol{k}}'=0$, we have $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$. ($\textbf{b}'$)~In the generic case $j_3k_3|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\ne0$, direct computation using the resonance relation $r|{\boldsymbol{j}}|/j_3+s|{\boldsymbol{k}}|/k_3=0$ gives $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$. This result also follows as the special case $\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s=0$ in \eqref{q:res0a} below. Now we turn to {\em near\/} resonances. There are several cases to consider, and we start with the generic (and hardest) one. ($\textbf{a}$)~Suppose that $|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\ne0$ with ${\boldsymbol{l}}'\ne0$. We define $\Omega$ and $\theta$ by \begin{equation} 2\Omega := \omega_{\boldsymbol{j}}^r - \omega_{\boldsymbol{k}}^s \qquad\textrm{and}\qquad 2\theta\Omega := \omega_{\boldsymbol{j}}^r + \omega_{\boldsymbol{k}}^s. \end{equation} (We note that $\Omega$ and $\theta$ could take either sign. Our concern is obviously with small $|\theta|$, when $\omega_{\boldsymbol{j}}^r$ and $\omega_{\boldsymbol{k}}^s$ are nearly resonant, so we will restrict $\theta$ below.) Now this implies that \begin{equation} \omega_{\boldsymbol{j}}^r = (1+\theta)\Omega \qquad\textrm{and}\qquad \omega_{\boldsymbol{k}}^s = (\theta-1)\Omega. \end{equation} We first note that \begin{equation} |{\boldsymbol{j}}'|^2/|j_3|^2 = (1+\theta)^2\Omega^2 - 1 \qquad\textrm{and}\qquad |{\boldsymbol{k}}'|^2/|k_3|^2 = (1-\theta)^2\Omega^2 - 1 \end{equation} and compute \begin{equation} |{\boldsymbol{j}}'|^2\frac{k_3}{j_3} - |{\boldsymbol{k}}'|^2\frac{j_3}{k_3} = 4\theta\Omega^2j_3k_3 = -\frac{4\theta}{1-\theta^2}\,rs\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|. 
\end{equation} Direct computation gives us \begin{equation}\begin{aligned} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} &+ B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\\ &= \frac{\mathrm{i}|\mathscr{M}|\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,j_3k_3}{2\,|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}|\,|{\boldsymbol{k}}'|\,|{\boldsymbol{l}}|} \bigl[(P+P')(Q+Q') + (-P+P'')(Q+Q'')\bigr]\\ &= \frac{\mathrm{i}|\mathscr{M}|\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,j_3k_3}{2\,|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}|\,|{\boldsymbol{k}}'|\,|{\boldsymbol{l}}|} \bigl[P(Q'-Q'') + (P'+P'')Q + P'Q' + P''Q''\bigr] \end{aligned}\end{equation} where \begin{equation}\begin{aligned} &P := {\boldsymbol{j}}'\wedge{\boldsymbol{k}}' &&Q := -{\boldsymbol{j}}'\cdot{\boldsymbol{k}}'\\ &P' := \mathrm{i}\frac{r|{\boldsymbol{j}}|}{j_3}({\boldsymbol{j}}'\cdot{\boldsymbol{k}}') - \mathrm{i}\frac{r|{\boldsymbol{j}}|}{j_3}\frac{k_3}{j_3}|{\boldsymbol{j}}'|^2 \qquad &&Q' := -\mathrm{i}\frac{s|{\boldsymbol{k}}|}{k_3}({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') + \frac{j_3}{k_3}|{\boldsymbol{k}}'|^2\\ &P'' := \mathrm{i}\frac{s|{\boldsymbol{k}}|}{k_3}({\boldsymbol{j}}'\cdot{\boldsymbol{k}}') - \mathrm{i}\frac{s|{\boldsymbol{k}}|}{k_3}\frac{j_3}{k_3}|{\boldsymbol{k}}'|^2 && Q'' := \mathrm{i}\frac{r|{\boldsymbol{j}}|}{j_3}({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') + \frac{k_3}{j_3}|{\boldsymbol{j}}'|^2. 
\end{aligned}\end{equation} After some computation, we find \begin{equation}\begin{aligned} &P' + P'' = 2\theta\Omega\,\mathrm{i}\,\Bigl( {\boldsymbol{j}}'\cdot{\boldsymbol{k}}' + \frac{2rs}{1-\theta^2}|{\boldsymbol{j}}|\,|{\boldsymbol{k}}| - \frac{|{\boldsymbol{j}}'|^2}2\frac{k_3}{j_3} - \frac{|{\boldsymbol{k}}'|^2}2\frac{j_3}{k_3} \Bigr),\\ &Q'-Q'' = 2\theta\Omega\,\Bigl( \frac{2rs/\Omega}{1-\theta^2}\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}| - \mathrm{i}({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') \Bigr),\\ &P'Q' + P''Q'' = 2\theta\Omega\,\Bigl\{ \frac{2rs\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}{1-\theta^2}\frac{r|{\boldsymbol{j}}|}{j_3}\frac{s|{\boldsymbol{k}}|}{k_3} ({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') + 2\mathrm{i} rs|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|({\boldsymbol{j}}'\cdot{\boldsymbol{k}}')\\ &\hbox to110pt{}+ \frac{\mathrm{i}}2({\boldsymbol{j}}'\cdot{\boldsymbol{k}}')\,\Bigl(|{\boldsymbol{k}}'|^2\frac{j_3}{k_3} + |{\boldsymbol{j}}'|^2\frac{k_3}{j_3}\Bigr) - \mathrm{i}\,|{\boldsymbol{j}}'|^2\,|{\boldsymbol{k}}'|^2\Bigr\}, \end{aligned}\end{equation} from which we obtain \begin{equation}\begin{aligned} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} &+ B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0} = 2\theta\Omega\,\frac{\mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}}{2\,|{\boldsymbol{l}}|} \Bigl\{ \mathrm{i}\,({\boldsymbol{j}}'\cdot{\boldsymbol{k}}')\,\frac{|{\boldsymbol{j}}'|^2k_3^2+|{\boldsymbol{k}}'|^2j_3^2}{|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}|\,|{\boldsymbol{k}}'|} - 2\mathrm{i}\frac{|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\,j_3k_3}{|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}\\ &\!\!- \frac{2\mathrm{i}\,rs\,\theta^2}{1-\theta^2}\frac{({\boldsymbol{j}}'\cdot{\boldsymbol{k}}')j_3k_3}{|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|} + \frac{2\,rs/\Omega}{1-\theta^2}\frac{{\boldsymbol{j}}'\wedge{\boldsymbol{k}}'}{|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|}\,j_3k_3 + 
\frac{2/\Omega}{1-\theta^2}\frac{|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}{|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|}({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') \Bigr\}. \end{aligned}\end{equation} Now if we require that $|\theta|\le\theta_0<1$, we have the bound \begin{equation}\label{q:res0a} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \frac{|\mathscr{M}|}{2}\Bigl(4 + \frac{6}{1-\theta_0^2}\Bigr) \frac{|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}{|{\boldsymbol{l}}|}\,|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|. \end{equation} To take care of the case $|\theta|>\theta_0$, we note that in this case \begin{equation} |\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s| \ge \theta_0\,\Bigl(\frac{|{\boldsymbol{j}}|}{|j_3|} + \frac{|{\boldsymbol{k}}|}{|k_3|}\Bigr). \end{equation} We note that since $\theta_0<1$ by hypothesis, this inequality holds both when $\omega_{\boldsymbol{j}}^r\omega_{\boldsymbol{k}}^s<0$ and $\omega_{\boldsymbol{j}}^r\omega_{\boldsymbol{k}}^s>0$. Using \eqref{q:bdBrs0}, we then find \begin{equation} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}\bigr| + \bigl| B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \sqrt5\,|\mathscr{M}|\,\Bigl(|{\boldsymbol{k}}'| + |{\boldsymbol{j}}'| + |{\boldsymbol{k}}'|\frac{|j_3|}{|k_3|} + |{\boldsymbol{j}}'|\frac{|k_3|}{|j_3|}\Bigr). \end{equation} Putting these together, we find after a short computation, \begin{equation}\label{q:res0b} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \frac{2\sqrt5\,|\mathscr{M}|}{\theta_0}\bigl(|j_3|+|k_3|\bigr) |\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|. 
\end{equation} ($\textbf{b}$)~Suppose now that $|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\ne0$ but ${\boldsymbol{l}}'=0$. We find using ${\boldsymbol{j}}'+{\boldsymbol{k}}'=0$, \begin{equation} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0} = \mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\, \frac{-\mathrm{i}\sgn l_3\,|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|}{2\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|} (j_3+k_3)(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s), \end{equation} and thus the bound \begin{equation}\label{q:res1} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \frac{|\mathscr{M}|}2\,\bigl(|j_3|+|k_3|\bigr)\,|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|. \end{equation} ($\textbf{c}$)~Finally, we consider the case ${\boldsymbol{j}}'=0$ and ${\boldsymbol{k}}'\ne0$ (which obviously implies the case ${\boldsymbol{k}}'=0$ and ${\boldsymbol{j}}'\ne0$). After some computation using ${\boldsymbol{l}}'={\boldsymbol{k}}'$, we find \begin{equation} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0} = \frac{\mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}}{2\,|{\boldsymbol{l}}|\,|{\boldsymbol{k}}|} j_3(k_1-\mathrm{i} rk_2)|{\boldsymbol{k}}'|\,\Bigl(sr-\frac{|{\boldsymbol{k}}|}{k_3}\Bigr). 
\end{equation} But since in this case \begin{equation} |\omega_{\boldsymbol{j}}^r - \omega_{\boldsymbol{k}}^s| = \bigl|r\sgn j_3 - s|{\boldsymbol{k}}|/k_3\bigr| = \bigl|rs - |{\boldsymbol{k}}|/k_3\bigr|, \end{equation} we have the bound \begin{equation}\label{q:res2} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \frac{|\mathscr{M}|\,|j_3|\,|{\boldsymbol{k}}'|^2}{\sqrt2\,|{\boldsymbol{k}}|\,|{\boldsymbol{l}}|} |\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s| \le \frac{|\mathscr{M}|\,|j_3|}{\sqrt2}\,|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|, \end{equation} which holds whether or not $l_3=0$. We recall that there is nothing to do when ${\boldsymbol{j}}'={\boldsymbol{k}}'=0$ since then $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$. The lemma follows upon fixing $\theta_0$ and collecting \eqref{q:res0a}, \eqref{q:res0b}, \eqref{q:res1} and \eqref{q:res2}. \nocite{temam-ziane:03} \end{document}
\begin{document} \title{Approximating Optimal Transport via Low-rank and Sparse Factorization} \begin{abstract} Optimal transport (OT) naturally arises in a wide range of machine learning applications but may often become the computational bottleneck. Recently, one line of work proposes to solve OT approximately by searching for the \emph{transport plan} in a low-rank subspace. However, the optimal transport plan is often not low-rank, which tends to yield large approximation errors. For example, when Monge's \emph{transport map} exists, the induced transport plan is full rank. This paper concerns the computation of the OT distance with adequate accuracy and efficiency. A novel approximation for OT is proposed, in which the transport plan can be decomposed into the sum of a low-rank matrix and a sparse one. We theoretically analyze the approximation error. An augmented Lagrangian method is then designed to efficiently calculate the transport plan. \end{abstract} \section{Introduction} \subfile{sections/introduction.tex} \section{Preliminaries} \subfile{sections/preliminaries.tex} \section{Methodology} \subfile{sections/methodology.tex} \section*{Conclusion} In this paper, we propose a novel approximation of the OT distance. The optimal transport plan is approximated by the sum of a low-rank matrix and a sparse one. An augmented Lagrangian method is designed to efficiently calculate the transport plan. \appendix \section{Omitted Proofs} \subfile{sections/proof.tex} \end{document}
\begin{document} \title{Lecture Notes on the Lambda Calculus} \begin{abstract} This is a set of lecture notes that developed out of courses on the lambda calculus that I taught at the University of Ottawa in 2001 and at Dalhousie University in 2007 and 2013. Topics covered in these notes include the untyped lambda calculus, the Church-Rosser theorem, combinatory algebras, the simply-typed lambda calculus, the Curry-Howard isomorphism, weak and strong normalization, polymorphism, type inference, denotational semantics, complete partial orders, and the language PCF. \end{abstract} \tableofcontents \section{Introduction}\label{sec-intro} \subsection{Extensional vs. intensional view of functions} \label{subsec-intro1} What is a function? In modern mathematics, the prevalent notion is that of ``functions as graphs'': each function $f$ has a fixed domain $X$ and codomain $Y$, and a function $f:X\to Y$ is a set of pairs $f\seq X\times Y$ such that for each $x\in X$, there exists exactly one $y\in Y$ such that $(x,y)\in f$. Two functions $f,g:X\to Y$ are considered equal if they yield the same output on each input, i.e., $f(x)=g(x)$ for all $x\in X$. This is called the {\em extensional} view of functions, because it specifies that the only thing observable about a function is how it maps inputs to outputs. However, before the 20th century, functions were rarely looked at in this way. An older notion of functions is that of ``functions as rules''. In this view, to give a function means to give a rule for how the function is to be calculated. Often, such a rule can be given by a formula, for instance, the familiar $f(x)=x^2$ or $g(x)=\sin(e^x)$ from calculus. 
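To make the distinction concrete, here is a small illustration in Python (an illustration of mine, not part of the notes): the two definitions below are given by different formulas, yet they have the same input-output behavior on the integers, so they are extensionally equal but intensionally distinct.

```python
# Two different "rules" (formulas) for the same function on the integers.

def double1(x):
    return 2 * x          # the rule "multiply by two"

def double2(x):
    return x + x          # the rule "add the argument to itself"

# Extensional equality can only be checked pointwise, on the graph:
assert all(double1(n) == double2(n) for n in range(-1000, 1000))

# Intensionally, they remain two distinct definitions:
assert double1 is not double2
```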
As before, two functions are {\em extensionally} equal if they have the same input-output behavior; but now we can also speak of another notion of equality: two functions are {\em intensionally}\footnote{Note that this word is intentionally spelled ``intensionally''.} equal if they are given by (essentially) the same formula. When we think of functions as given by formulas, it is not always necessary to know the domain and codomain of a function. Consider for instance the function $f(x)=x$. This is, of course, the identity function. We may regard it as a function $f:X\to X$ for {\em any} set $X$. In most of mathematics, the ``functions as graphs'' paradigm is the most elegant and appropriate way of dealing with functions. Graphs define a more general class of functions, because this class includes functions that are not necessarily given by a rule. Thus, when we prove a mathematical statement such as ``any differentiable function is continuous'', we really mean this is true for {\em all} functions (in the mathematical sense), not just those functions for which a rule can be given. On the other hand, in computer science, the ``functions as rules'' paradigm is often more appropriate. Think of a computer program as defining a function that maps input to output. Most computer programmers (and users) care not only about the extensional behavior of a program (which inputs are mapped to which outputs), but also about {\em how} the output is calculated: How much time does it take? How much memory and disk space is used in the process? How much communication bandwidth is used? These are intensional questions having to do with the particular way in which a function was defined. \subsection{The lambda calculus} The lambda calculus is a theory of {\em functions as formulas}. It is a system for manipulating functions as {\em expressions}. Let us begin by looking at another well-known language of expressions, namely arithmetic. 
Arithmetic expressions are made up from variables ($x,y,z\ldots$), numbers ($1,2,3,\ldots$), and operators (``$+$'', ``$-$'', ``$\times$'' etc.). An expression such as $x+y$ stands for the {\em result} of an addition (as opposed to an {\em instruction} to add, or the {\em statement} that something is being added). The great advantage of this language is that expressions can be nested without any need to mention the intermediate results explicitly. So for instance, we write \[ A = (x+y)\times z^2, \] and not \[ \mbox{let $w=x+y$, then let $u=z^2$, then let $A=w\times u$.} \] The latter notation would be tiring and cumbersome to manipulate. The lambda calculus extends the idea of an expression language to include functions. Where we normally write \[ \mbox{Let $f$ be the function $x\mapsto x^2$. Then consider $A=f(5)$,} \] in the lambda calculus we just write \[ A = (\lam x.x^2)(5) . \] The expression $\lam x.x^2$ stands for the function that maps $x$ to $x^2$ (as opposed to the {\em statement} that $x$ is being mapped to $x^2$). As in arithmetic, we use parentheses to group terms. It is understood that the variable $x$ is a {\em local} variable in the term $\lam x.x^2$. Thus, it does not make any difference if we write $\lam y.y^2$ instead. A local variable is also called a {\em bound} variable. One advantage of the lambda notation is that it allows us to easily talk about {\em higher-order} functions, i.e., functions whose inputs and/or outputs are themselves functions. An example is the operation $f\mapsto f\cp f$ in mathematics, which takes a function $f$ and maps it to $f\cp f$, the composition of $f$ with itself. In the lambda calculus, $f\cp f$ is written as \[ \lam x.f(f(x)), \] and the operation that maps $f$ to $f\cp f$ is written as \[ \lam f.\lam x.f(f(x)). 
\] The evaluation of higher-order functions can get somewhat complex; as an example, consider the following expression: \[ \left((\lam f.\lam x.f(f(x)))(\lam y.y^2)\right)(5) \] Convince yourself that this evaluates to 625. Another example is given in the following exercise: \begin{exercise} Evaluate the lambda-expression \[ \Big( \big(\left(\lam f.\lam x.f(f(f(x)))\right) \left(\lam g.\lam y.g(g(y))\right)\big) (\lam z.z+1)\Big)(0). \] \end{exercise} We will soon introduce some conventions for reducing the number of parentheses in such expressions. \subsection{Untyped vs.\ typed lambda-calculi} We have already mentioned that, when considering ``functions as rules'', it is not always necessary to know the domain and codomain of a function ahead of time. The simplest example is the identity function $f = \lam x.x$, which can have any set $X$ as its domain and codomain, as long as domain and codomain are equal. We say that $f$ has the {\em type} $X\to X$. Another example is the function $g = \lam f.\lam x.f(f(x))$ that we encountered above. One can check that $g$ maps any function $f:X\to X$ to a function $g(f):X\to X$. In this case, we say that the type of $g$ is \[ (X\to X)\to(X\to X). \] By being flexible about domains and codomains, we are able to manipulate functions in ways that would not be possible in ordinary mathematics. For instance, if $f=\lam x.x$ is the identity function, then we have $f(x) = x$ for {\em any} $x$. In particular, we can take $x=f$, and we get \[ f(f) = (\lam x.x)(f) = f. \] Note that the equation $f(f)=f$ never makes sense in ordinary mathematics, since it is not possible (for set-theoretic reasons) for a function to be included in its own domain. As another example, let $\omega = \lam x.x(x)$. \begin{exercise} What is $\omega(\omega)$? \end{exercise} We have several options regarding types in the lambda calculus. 
\begin{itemize} \item {\em Untyped lambda calculus.} In the untyped lambda calculus, we never specify the type of any expression. Thus we never specify the domain or codomain of any function. This gives us maximal flexibility. It is also very unsafe, because we might run into situations where we try to apply a function to an argument that it does not understand. \item {\em Simply-typed lambda calculus.} In the simply-typed lambda calculus, we always completely specify the type of every expression. This is very similar to the situation in set theory. We never allow the application of a function to an argument unless the type of the argument is the same as the domain of the function. Thus, terms such as $f(f)$ are ruled out, even if $f$ is the identity function. \item {\em Polymorphically typed lambda calculus.} This is an intermediate situation, where we may specify, for instance, that a term has a type of the form $X\to X$ for all $X$, without actually specifying $X$. \end{itemize} As we will see, each of these alternatives has dramatically different properties from the others. \subsection{Lambda calculus and computability} In the 1930's, several people were interested in the question: what does it mean for a function $f:\N\to\N$ to be {\em computable}? An informal definition of computability is that there should be a pencil-and-paper method allowing a trained person to calculate $f(n)$, for any given $n$. The concept of a pencil-and-paper method is not so easy to formalize. Three different researchers attempted to do so, resulting in the following definitions of computability: \begin{enumerate} \item {\bf Turing} defined an idealized computer we now call a {\em Turing machine}, and postulated that a function is computable (in the intuitive sense) if and only if it can be computed by such a machine. 
\item {\bf G\"odel} defined the class of {\em general recursive functions} as the smallest set of functions containing all the constant functions, the successor function, and closed under certain operations (such as compositions and recursion). He postulated that a function is computable (in the intuitive sense) if and only if it is general recursive. \item {\bf Church} defined an idealized programming language called the {\em lambda calculus}, and postulated that a function is computable (in the intuitive sense) if and only if it can be written as a lambda term. \end{enumerate} It was proved by Church, Kleene, Rosser, and Turing that all three computational models were equivalent to each other, i.e., each model defines the same class of computable functions. Whether or not they are equivalent to the ``intuitive'' notion of computability is a question that cannot be answered, because there is no formal definition of ``intuitive computability''. The assertion that they are in fact equivalent to intuitive computability is known as the {\em Church-Turing thesis}. \subsection{Connections to computer science} The lambda calculus is a very idealized programming language; arguably, it is the simplest possible programming language that is Turing complete. Because of its simplicity, it is a useful tool for defining and proving properties of programs. Many real-world programming languages can be regarded as extensions of the lambda calculus. This is true for all {\em functional programming languages}, a class that includes Lisp, Scheme, Haskell, and ML. Such languages combine the lambda calculus with additional features, such as data types, input/output, side effects, updatable memory, object orientated features, etc. The lambda calculus provides a vehicle for studying such extensions, in isolation and jointly, to see how they will affect each other, and to prove properties of programming language (such as: a well-formed program will not crash). 
The lambda calculus is also a tool used in compiler construction, see e.g. \cite{Pey87,App92}. \subsection{Connections to logic} In the 19th and early 20th centuries, there was a philosophical dispute among mathematicians about what a proof is. The so-called {\em constructivists}, such as Brouwer and Heyting, believed that to prove that a mathematical object exists, one must be able to construct it explicitly. {\em Classical logicians}, such as Hilbert, held that it is sufficient to derive a contradiction from the assumption that it doesn't exist. Ironically, one of the better-known examples of a proof that isn't constructive is Brouwer's proof of his own fixed point theorem, which states that every continuous function on the unit disk has a fixed point. The proof is by contradiction and does not give any information on the location of the fixed point. The connection between lambda calculus and constructive logics is via the ``proofs-as-programs'' paradigm. To a constructivist, a proof (of an existence statement) must be a ``construction'', i.e., a program. The lambda calculus is a notation for such programs, and it can also be used as a notation for (constructive) proofs. For the most part, constructivism has not prevailed as a philosophy in mainstream mathematics. However, there has been renewed interest in constructivism in the second half of the 20th century. The reason is that constructive proofs give more information than classical ones, and in particular, they allow one to compute solutions to problems (as opposed to merely knowing the existence of a solution). The resulting algorithms can be useful in computational mathematics, for instance in computer algebra systems. \subsection{Connections to mathematics} One way to study the lambda calculus is to give mathematical models of it, i.e., to provide spaces in which lambda terms can be given meaning. 
Such models are constructed using methods from algebra, partially ordered sets, topology, category theory, and other areas of mathematics. \section{The untyped lambda calculus} \subsection{Syntax} The lambda calculus is a {\em formal language}. The expressions of the language are called {\em lambda terms}, and we will give rules for manipulating them. \begin{definition} Assume given an infinite set $\Vars$ of {\em variables}, denoted by $x,y,z$ etc. The set of lambda terms is given by the following Backus-Naur Form: \[ \mbox{Lambda terms:}\ssep M,N \bnf x \bor (MN) \bor (\lam x.M) \] \end{definition} The above Backus-Naur Form (BNF) is a convenient abbreviation for the following equivalent, more traditionally mathematical definition: \begin{definition} Assume given an infinite set $\Vars$ of variables. Let $A$ be an alphabet consisting of the elements of $\Vars$, and the special symbols ``('', ``)'', ``$\lam$'', and ``.''. Let $A^*$ be the set of strings (finite sequences) over the alphabet $A$. The set of lambda terms is the smallest subset $\Lambda\seq A^*$ such that: \begin{itemize} \item Whenever $x\in\Vars$ then $x\in\Lambda$. \item Whenever $M,N\in\Lambda$ then $(MN)\in\Lambda$. \item Whenever $x\in\Vars$ and $M\in\Lambda$ then $(\lam x.M)\in\Lambda$. \end{itemize} \end{definition} Comparing the two equivalent definitions, we see that the Backus-Naur Form is a convenient notation because: (1) the definition of the alphabet can be left implicit, (2) the use of distinct meta-symbols for different syntactic classes ($x,y,z$ for variables and $M,N$ for terms) eliminates the need to explicitly quantify over the sets $\Vars$ and $\Lambda$. In the future, we will always present syntactic definitions in the BNF style. 
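The three-clause grammar translates directly into a recursive data type, with one constructor per production. The following Python sketch (my illustration; the class names are not from the notes) makes this concrete:

```python
from dataclasses import dataclass

# One class per BNF production:  M, N ::= x | (M N) | (lambda x. M)

@dataclass(frozen=True)
class Var:
    name: str        # a variable x

@dataclass(frozen=True)
class App:
    fun: object      # the term M in the application (M N)
    arg: object      # the term N in the application (M N)

@dataclass(frozen=True)
class Abs:
    var: str         # the bound variable x in (lambda x. M)
    body: object     # the body M

# The term (lambda x.(x x)) built from the constructors:
omega = Abs("x", App(Var("x"), Var("x")))
assert omega.body.fun == Var("x")
```

Because the grammar has enough mandatory parentheses, every term decomposes uniquely, which is exactly what makes such a tree representation well defined.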
The following are some examples of lambda terms: \[ (\lam x.x) \sep ((\lam x.(xx))(\lam y.(yy))) \sep (\lam f.(\lam x.(f(fx)))) \] Note that in the definition of lambda terms, we have built in enough mandatory parentheses to ensure that every term $M\in\Lambda$ can be uniquely decomposed into subterms. This means that each term $M\in\Lambda$ is of precisely one of the forms $x$, $(MN)$, $(\lam x.M)$. Terms of these three forms are called {\em variables}, {\em applications}, and {\em lambda abstractions}, respectively. We use the notation $(MN)$, rather than $M(N)$, to denote the application of a function $M$ to an argument $N$. Thus, in the lambda calculus, we write $(fx)$ instead of the more traditional $f(x)$. This convention allows us to economize on the use of parentheses. To avoid having to write an excessive number of parentheses, we establish the following conventions for writing lambda terms: \begin{convention} \begin{itemize} \item We omit outermost parentheses. For instance, we write $MN$ instead of $(MN)$. \item Applications associate to the left; thus, $MNP$ means $(MN)P$. This is convenient when applying a function to a number of arguments, as in $fxyz$, which means $((fx)y)z$. \item The body of a lambda abstraction (the part after the dot) extends as far to the right as possible. In particular, $\lam x.MN$ means $\lam x.(MN)$, and not $(\lam x.M)N$. \item Multiple lambda abstractions can be contracted; thus $\lam xyz.M$ will abbreviate $\lam x.\lam y.\lam z.M$. \end{itemize} \end{convention} It is important to note that this convention is only for notational convenience; it does not affect the ``official'' definition of lambda terms. 
\begin{exercise} \begin{enumerate}\alphalabels \item Write the following terms with as few parentheses as possible, without changing the meaning or structure of the terms: \begin{enumerate} \item[(i)] $(\lam x.(\lam y.(\lam z.((xz)(yz)))))$, \item[(ii)] $(((ab)(cd))((ef)(gh)))$, \item[(iii)] $(\lam x.((\lam y.(yx))(\lam v.v)z)u)(\lam w.w)$. \end{enumerate} \item Restore all the dropped parentheses in the following terms, without changing the meaning or structure of the terms: \begin{enumerate} \item[(i)] $xxxx$, \item[(ii)] $\lam x.x\lam y.y$, \item[(iii)] $\lam x.(x\lam y.yxx)x$. \end{enumerate} \end{enumerate} \end{exercise} \subsection{Free and bound variables, $\alpha$-equivalence} In our informal discussion of lambda terms, we have already pointed out that the terms $\lam x.x$ and $\lam y.y$, which differ only in the name of their bound variable, are essentially the same. We will say that such terms are $\alpha$-equivalent, and we write $M\eqa N$. In the rare event that we want to say that two terms are precisely equal, symbol for symbol, we say that $M$ and $N$ are {\em identical} and we write $M\equiv N$. We reserve ``$=$'' as a generic symbol used for different purposes. An occurrence of a variable $x$ inside a term of the form $\lam x.N$ is said to be {\em bound}. The corresponding $\lam x$ is called a {\em binder}, and we say that the subterm $N$ is the {\em scope} of the binder. A variable occurrence that is not bound is {\em free}. Thus, for example, in the term \[ M \equiv (\lam x.xy)(\lam y.yz), \] $x$ is bound, but $z$ is free. The variable $y$ has both a free and a bound occurrence. The set of free variables of $M$ is $\s{y,z}$. More generally, the set of free variables of a term $M$ is denoted $\FV{M}$, and it is defined formally as follows: \[ \begin{array}{lll} \FV{x} &=& \s{x}, \\ \FV{MN} &=& \FV{M}\cup \FV{N}, \\ \FV{\lam x.M} &=& \FV{M} \setminus \s{x}. \end{array} \] This definition is an example of a definition by recursion on terms. 
In other words, in defining $\FV{M}$, we assume that we have already defined $\FV{N}$ for all subterms of $M$. We will often encounter such recursive definitions, as well as inductive proofs. Before we can formally define $\alpha$-equivalence, we need to define what it means to {\em rename} a variable in a term. If $x,y$ are variables, and $M$ is a term, we write $\ren{M}{y}{x}$ for the result of renaming $x$ as $y$ in $M$. Renaming is formally defined as follows: \[ \begin{array}{llll} \ren{x}{y}{x} &\equiv& y, \\ \ren{z}{y}{x} &\equiv& z, & \mbox{if $x\neq z$,} \\ \ren{(MN)}{y}{x} &\equiv& (\ren{M}{y}{x})(\ren{N}{y}{x}), \\ \ren{(\lam x.M)}{y}{x} &\equiv& \lam y.(\ren{M}{y}{x}), \\ \ren{(\lam z.M)}{y}{x} &\equiv& \lam z.(\ren{M}{y}{x}), & \mbox{if $x\neq z$.} \end{array} \] Note that this kind of renaming replaces all occurrences of $x$ by $y$, whether free, bound, or binding. We will only apply it in cases where $y$ does not already occur in $M$. Finally, we are in a position to formally define what it means for two terms to be ``the same up to renaming of bound variables'': \begin{definition} We define {\em $\alpha$-equivalence} to be the smallest congruence relation $\eqa$ on lambda terms, such that for all terms $M$ and all variables $y$ that do not occur in $M$, \[ \lam x.M \eqa \lam y.(\ren{M}{y}{x}). \] \end{definition} Recall that a relation on lambda terms is an equivalence relation if it satisfies rules $\trule{refl}$, $\trule{symm}$, and $\trule{trans}$. It is a congruence if it also satisfies rules $\trule{cong}$ and $\nrule{\xi}$. Thus, by definition, $\alpha$-equivalence is the smallest relation on lambda terms satisfying the six rules in Table~\ref{tab-alpha}. 
\begin{table*}[tbp] \[ \begin{array}{lc} \trule{refl} & \deriv{}{M=M} \\[1.8ex] \trule{symm} & \deriv{M=N}{N=M} \\[1.8ex] \trule{trans} & \deriv{M=N\sep N=P}{M=P} \end{array} \sep \begin{array}{lc} \trule{cong} & \deriv{M=M'\sep N=N'}{MN=M'N'} \\[1.8ex] \nrule{\xi} & \deriv{M=M'}{\lam x.M=\lam x.M'} \\[1.8ex] \nrule{\alpha} & \deriv{\mbox{$y\not\in M$}}{\lam x.M = \lam y.(\ren{M}{y}{x})} \end{array} \] \caption{The rules for alpha-equivalence} \label{tab-alpha} \end{table*} It is easy to prove by induction that any lambda term is $\alpha$-equivalent to another term in which the names of all bound variables are distinct from each other and from any free variables. Thus, when we manipulate lambda terms in theory and in practice, we can (and will) always assume {\wloss} that bound variables have been renamed to be distinct. This convention is called {\em Barendregt's variable convention}. As a remark, the notions of free and bound variables and $\alpha$-equivalence are of course not particular to the lambda calculus; they appear in many standard mathematical notations, as well as in computer science. Here are four examples where the variable $x$ is bound. \[ \begin{array}{l} \int_0^1 x^2\,dx \\[1.8ex] \sum_{x=1}^{10}\frac{1}{x} \\[1.8ex] \lim_{x\to\infty} e^{-x} \\[1.8ex] \verb!int succ(int x) { return x+1; }! \end{array} \] \subsection{Substitution}\label{ssec-substitution} In the previous section, we defined a renaming operation, which allowed us to replace a variable by another variable in a lambda term. Now we turn to a less trivial operation, called {\em substitution}, which allows us to replace a variable by a lambda term. We will write $\subst{M}{N}{x}$ for the result of replacing $x$ by $N$ in $M$. The definition of substitution is complicated by two circumstances: \begin{enumerate} \item[1.] We should only replace {\em free} variables. This is because the names of bound variables are considered immaterial, and should not affect the result of a substitution. 
Thus, $\subst{x(\lam xy.x)}{N}{x}$ is $N(\lam xy.x)$, and not $N(\lam xy.N)$. \item[2.] We need to avoid unintended ``capture'' of free variables. Consider for example the term $M\equiv\lam x.yx$, and let $N\equiv\lam z.xz$. Note that $x$ is free in $N$ and bound in $M$. What should be the result of substituting $N$ for $y$ in $M$? If we do this naively, we get \[ \subst{M}{N}{y}=\subst{(\lam x.yx)}{N}{y}= \lam x.Nx=\lam x.(\lam z.xz)x. \] However, this is not what we intended, since the variable $x$ was free in $N$, and during the substitution, it got bound. We need to account for the fact that the $x$ that was bound in $M$ was not the ``same'' $x$ as the one that was free in $N$. The proper thing to do is to rename the bound variable {\em before} the substitution: \[ \subst{M}{N}{y}=\subst{(\lam x'.yx')}{N}{y}= \lam x'.Nx'=\lam x'.(\lam z.xz)x'. \] \end{enumerate} Thus, the operation of substitution forces us to sometimes rename a bound variable. In this case, it is best to pick a variable from $\Vars$ that has not been used yet as the new name of the bound variable. A variable that is currently unused is called {\em fresh}. The reason we stipulated that the set $\Vars$ is infinite was to make sure a fresh variable is always available when we need one. 
\begin{definition} The (capture-avoiding) {\em substitution} of $N$ for free occurrences of $x$ in $M$, in symbols $\subst{M}{N}{x}$, is defined as follows: \[ \begin{array}{l@{~}l@{~}ll} \subst{x}{N}{x} &\equiv & N, \\ \subst{y}{N}{x} &\equiv & y, &\mbox{if $x\neq y$,} \\ \subst{(MP)}{N}{x} &\equiv & (\subst{M}{N}{x})(\subst{P}{N}{x}), \\ \subst{(\lam x.M)}{N}{x} &\equiv & \lam x.M, \\ \subst{(\lam y.M)}{N}{x} &\equiv & \lam y.(\subst{M}{N}{x}), &\mbox{if $x\neq y$ and $y\not\in\FV{N}$,} \\ \subst{(\lam y.M)}{N}{x} &\equiv & \lam y'.(\subst{\ren{M}{y'}{y}}{N}{x}), &\mbox{if $x\neq y$, $y\in\FV{N}$, and $y'$ fresh.} \\ \end{array} \] \end{definition} This definition has one technical flaw: in the last clause, we did not specify which fresh variable to pick, and thus, technically, substitution is not well-defined. One way to solve this problem is to declare all lambda terms to be identified up to $\alpha$-equivalence, and to prove that substitution is in fact well-defined modulo $\alpha$-equivalence. Another way would be to specify which variable $y'$ to choose: for instance, assume that there is a well-ordering on the set $\Vars$ of variables, and stipulate that $y'$ should be chosen to be the least variable that does not occur in either $M$ or $N$. \subsection{Introduction to $\beta$-reduction} \begin{convention} From now on, unless stated otherwise, we identify lambda terms up to $\alpha$-equivalence. This means, when we speak of lambda terms being ``equal'', we mean that they are $\alpha$-equivalent. Formally, we regard lambda terms as equivalence classes modulo $\alpha$-equivalence. We will often use the ordinary equality symbol $M=N$ to denote $\alpha$-equivalence. \end{convention} The process of evaluating lambda terms by ``plugging arguments into functions'' is called {\em $\beta$-reduction}. A term of the form $(\lam x.M)N$, which consists of a lambda abstraction applied to another term, is called a {\em $\beta$-redex}. 
We say that it {\em reduces} to $\subst{M}{N}{x}$, and we call the latter term the {\em reduct}. We reduce lambda terms by finding a subterm that is a redex, and then replacing that redex by its reduct. We repeat this as many times as we like, or until there are no more redexes left to reduce. A lambda term without any $\beta$-redexes is said to be in {\em $\beta$-normal form}. For example, the lambda term $(\lam x.y)((\lam z.zz)(\lam w.w))$ can be reduced as follows. Here, we underline each redex just before reducing it: \[ \begin{array}{lll} (\lam x.y)(\ul{(\lam z.zz)(\lam w.w)}) &\redb& (\lam x.y)(\ul{(\lam w.w)(\lam w.w)}) \\ &\redb& \ul{(\lam x.y)(\lam w.w)} \\ &\redb& y. \end{array} \] The last term, $y$, has no redexes and is thus in normal form. We could reduce the same term differently, by choosing the redexes in a different order: \[ \begin{array}{lll} \ul{(\lam x.y)((\lam z.zz)(\lam w.w))} &\redb& y. \end{array} \] As we can see from this example: \begin{itemize} \item[-] reducing a redex can create new redexes, \item[-] reducing a redex can delete some other redexes, \item[-] the number of steps that it takes to reach a normal form can vary, depending on the order in which the redexes are reduced. \end{itemize} We can also see that the final result, $y$, does not seem to depend on the order in which the redexes are reduced. In fact, this is true in general, as we will prove later. If $M$ and $M'$ are terms such that $M\redbs M'$, and if $M'$ is in normal form, then we say that $M$ {\em evaluates} to $M'$. Not every term evaluates to something; some terms can be reduced forever without reaching a normal form. The following is an example: \[ \begin{array}{lll} (\lam x.xx)(\lam y.yyy) &\redb& (\lam y.yyy)(\lam y.yyy) \\ &\redb& (\lam y.yyy)(\lam y.yyy)(\lam y.yyy) \\ &\redb& \ldots \end{array} \] This example also shows that the size of a lambda term need not decrease during reduction; it can increase, or remain the same. 
The term $(\lam x.xx)(\lam x.xx)$, which we encountered in Section~\ref{sec-intro}, is another example of a lambda term that does not reach a normal form. \subsection{Formal definitions of $\beta$-reduction and $\beta$-equivalence} The concept of $\beta$-reduction can be defined formally as follows: \begin{definition}\label{page-def-beta} We define {\em single-step $\beta$-reduction} to be the smallest relation $\redb$ on terms satisfying: \[ \begin{array}{lc} \nrule{\beta} & \deriv{}{(\lam x.M)N\redb \subst{M}{N}{x}} \\[1.8ex] \trule{cong$_1$} & \deriv{M\redb M'}{MN\redb M'N} \\[1.8ex] \trule{cong$_2$} & \deriv{N\redb N'}{MN\redb MN'} \\[1.8ex] \nrule{\xi} & \deriv{M\redb M'}{\lam x.M\redb \lam x.M'} \end{array} \] \end{definition} Thus, $M\redb M'$ iff $M'$ is obtained from $M$ by reducing a {\em single} $\beta$-redex of $M$. \begin{definition} We write $M\redbs M'$ if $M$ reduces to $M'$ in zero or more steps. Formally, $\redbs$ is defined to be the reflexive transitive closure of $\redb$, i.e., the smallest reflexive transitive relation containing $\redb$. \end{definition} Finally, $\beta$-equivalence is obtained by allowing reduction steps as well as inverse reduction steps, i.e., by making $\redb$ symmetric: \begin{definition} We write $M\eqb M'$ if $M$ can be transformed into $M'$ by zero or more reduction steps and/or inverse reduction steps. Formally, $\eqb$ is defined to be the reflexive symmetric transitive closure of $\redb$, i.e., the smallest equivalence relation containing $\redb$. \end{definition} \begin{exercise} This definition of $\beta$-equivalence is slightly different from the one given in class. Prove that they are in fact the same. \end{exercise} \section{Programming in the untyped lambda calculus} One of the amazing facts about the untyped lambda calculus is that we can use it to encode data, such as booleans and natural numbers, as well as programs that operate on the data. 
This can be done purely within the lambda calculus, without adding any additional syntax or axioms. We will often have occasion to give names to particular lambda terms; we will usually use boldface letters for such names. \subsection{Booleans}\label{ssec-booleans} We begin by defining two lambda terms to encode the truth values ``true'' and ``false'': \[ \begin{array}{rcl} \truet &=& \lam xy.x \\ \falset &=& \lam xy.y \end{array} \] Let $\andt$ be the term $\lam ab.ab\falset$. Verify the following: \[ \begin{array}{rcl} \andt \truet \truet &\redbs& \truet \\ \andt \truet \falset &\redbs& \falset \\ \andt \falset \truet &\redbs& \falset \\ \andt \falset \falset &\redbs& \falset \end{array} \] Note that $\truet$ and $\falset$ are normal forms, so we can really say that a term such as $\andt \truet \truet$ {\em evaluates} to $\truet$. We say that $\andt$ {\em encodes} the boolean function ``and''. It is understood that this coding is with respect to the particular coding of ``true'' and ``false''. We don't claim that $\andt MN$ evaluates to anything meaningful if $M$ or $N$ are terms other than $\truet$ and $\falset$. Incidentally, there is nothing unique about the term $\lam ab.ab\falset$. It is one of many possible ways of encoding the ``and'' function. Another possibility is $\lam ab.bab$. \begin{exercise} Find lambda terms $\ort$ and $\nott$ that encode the boolean functions ``or'' and ``not''. Can you find more than one term? \end{exercise} Moreover, we define the term $\ifthenelset=\lam x.x$. This term behaves like an ``if-then-else'' function --- specifically, we have \[ \begin{array}{rcl} \ifthenelset \truet M N &\redbs& M \\ \ifthenelset \falset M N &\redbs& N \\ \end{array} \] for all lambda terms $M$, $N$. \subsection{Natural numbers}\label{ssec-natural-numbers} If $f$ and $x$ are lambda terms, and $n\geq 0$ a natural number, write $f^nx$ for the term $f(f(\ldots(fx)\ldots))$, where $f$ occurs $n$ times. 
For each natural number $n$, we define a lambda term $\chnum{n}$, called the {\em $n$th Church numeral}, as $\chnum{n}=\lam fx.f^nx$. Here are the first few Church numerals: \[ \begin{array}{rcl} \chnum{0} &=& \lam fx.x \\ \chnum{1} &=& \lam fx.fx \\ \chnum{2} &=& \lam fx.f(fx) \\ \chnum{3} &=& \lam fx.f(f(fx)) \\ \ldots \end{array} \] This particular way of encoding the natural numbers is due to Alonzo Church, who was also the inventor of the lambda calculus. Note that $\chnum{0}$ is in fact the same term as $\falset$; thus, when interpreting a lambda term, we should know ahead of time whether to interpret the result as a boolean or a numeral. The successor function can be defined as follows: $\succt = \lam nfx.f(nfx)$. What does this term compute when applied to a numeral? \[ \begin{array}{rcl} \succt \chnum{n} &=& (\lam nfx.f(nfx))(\lam fx.f^nx) \\ &\redb& \lam fx.f((\lam fx.f^nx)fx) \\ &\redbs& \lam fx.f(f^nx) \\ &=& \lam fx.f^{n+1}x \\ &=& \chnum{n+1} \end{array} \] Thus, we have proved that the term $\succt$ does indeed encode the successor function, when applied to a numeral. Here are possible definitions of addition and multiplication: \[ \begin{array}{rcl} \addt &=& \lam nmfx.nf(mfx) \\ \multt &=& \lam nmf.n(mf). \end{array} \] \begin{exercise} \begin{enumerate} \item[(a)] Manually evaluate the lambda terms $\addt\chnum{2}\chnum{3}$ and $\multt\chnum{2}\chnum{3}$. \item[(b)] Prove that $\addt\chnum{n}\chnum{m}\redbs\chnum{n+m}$, for all natural numbers $n$, $m$. \item[(c)] Prove that $\multt\chnum{n}\chnum{m}\redbs\chnum{n\cdot m}$, for all natural numbers $n$, $m$. \end{enumerate} \end{exercise} \begin{definition} Suppose $f:\N^k\to \N$ is a $k$-ary function on the natural numbers, and that $M$ is a lambda term. We say that $M$ {\em (numeralwise) represents} $f$ if for all $n_1,\ldots,n_k\in\N$, \[ M\chnum{n_1}\ldots\chnum{n_k} \redbs \chnum{f(n_1,\ldots,n_k)}. \] \end{definition} This definition makes explicit what it means to be an ``encoding''. 
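These encodings can be tried out directly in any language with first-class functions. Here is a Python sketch of the Church booleans and numerals defined above; `church` and `to_int` are our own encoding/decoding helpers for experimentation, not part of the calculus.

```python
# Church numerals as native Python functions:  n̄ = λf.λx. fⁿx
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))         # λnfx. f(nfx)
add  = lambda n: lambda m: lambda f: lambda x: n(f)(m(f)(x))  # λnmfx. nf(mfx)
mult = lambda n: lambda m: lambda f: n(m(f))            # λnmf. n(mf)

def church(n):
    """Encode a machine integer as a Church numeral (helper, not in the calculus)."""
    return zero if n == 0 else succ(church(n - 1))

def to_int(n):
    """Decode a numeral by applying it to the successor on machine integers."""
    return n(lambda k: k + 1)(0)

# The booleans from the previous subsection:
true  = lambda x: lambda y: x                            # λxy.x
false = lambda x: lambda y: y                            # λxy.y
and_  = lambda a: lambda b: a(b)(false)                  # λab. a b false
```

For instance, `to_int(add(church(2))(church(3)))` evaluates to `5`, mirroring the hand reduction of $\addt\chnum{2}\chnum{3}$.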
We can say, for instance, that the term $\addt = \lam nmfx.nf(mfx)$ represents the addition function. The definition generalizes easily to boolean functions, or functions of other data types. Often handy is the function $\iszerot$ from natural numbers to booleans, which is defined by \[ \begin{array}{rcll} \iszerot(0) &=& \mbox{true} \\ \iszerot(n) &=& \mbox{false,} & \mbox{if $n\neq 0$.} \end{array} \] Convince yourself that the following term is a representation of this function: \[ \iszerot = \lam nxy.n(\lam z.y)x. \] \begin{exercise} Find lambda terms that represent each of the following functions: \begin{enumerate} \item[(a)] $ f(n)=(n+3)^2, $ \item[(b)] $ f(n)=\left\{\begin{array}{ll}\mbox{true}&\mbox{if $n$ is even,}\\\mbox{false}&\mbox{if $n$ is odd,} \end{array}\right. $ \item[(c)] $ \expt(n,m)=n^m, $ \item[(d)] $ \predt(n)=n-1. $ \end{enumerate} Note: part (d) is not easy. In fact, Church believed for a while that it was impossible, until his student Kleene found a solution. (In fact, Kleene said he found the solution while having his wisdom teeth pulled, so his trick for defining the predecessor function is sometimes referred to as the ``wisdom teeth trick''.) \end{exercise} We have seen how to encode some simple boolean and arithmetic functions. However, we do not yet have a systematic method of constructing such functions. What we need is a mechanism for defining more complicated functions from simple ones. Consider for example the factorial function, defined by: \[ \begin{array}{rcll} 0! &=& 1 \\ n! &=& n\cdot (n-1)!,& \mbox{if $n\neq 0$}. \end{array} \] The encoding of such functions in the lambda calculus is the subject of the next section. It is related to the concept of a fixed point. \subsection{Fixed points and recursive functions}\label{subsec-fixed-points} Suppose $f$ is a function. We say that $x$ is a {\em fixed point} of $f$ if $f(x)=x$. In arithmetic and calculus, some functions have fixed points, while others don't. 
For instance, $f(x)=x^2$ has two fixed points $0$ and $1$, whereas $f(x)=x+1$ has no fixed points. Some functions have infinitely many fixed points, notably $f(x)=x$. We apply the notion of fixed points to the lambda calculus. If $F$ and $N$ are lambda terms, we say that $N$ is a fixed point of $F$ if $FN\eqb N$. The lambda calculus contrasts with arithmetic in that {\em every} lambda term has a fixed point. This is perhaps the first surprising fact about the lambda calculus we learn in this course. \begin{theorem}\label{thm-fix} In the untyped lambda calculus, every term $F$ has a fixed point. \end{theorem} \begin{proof} Let $A=\lam xy.y(xxy)$, and define $\Thetat=AA$. Now suppose $F$ is any lambda term, and let $N=\Thetat F$. We claim that $N$ is a fixed point of $F$. This is shown by the following calculation: \[ \begin{array}{rcl} N &=& \Thetat F \\ &=& AA F \\ &=& (\lam xy.y(xxy))AF \\ &\redbs& F(AAF) \\ &=& F(\Thetat F) \\ &=& FN. \end{array} \] \eottwo \end{proof} The term $\Thetat$ used in the proof is called {\em Turing's fixed point combinator}. The importance of fixed points lies in the fact that they allow us to solve {\em equations}. After all, finding a fixed point for $f$ is the same thing as solving the equation $x = f(x)$. This covers equations with an arbitrary right-hand side, whose left-hand side is $x$. From the above theorem, we know that we can always solve such equations in the lambda calculus. To see how to apply this idea, consider the question from the last section, namely, how to define the factorial function. The most natural definition of the factorial function is recursive, and we can write it in the lambda calculus as follows: \[ \begin{array}{rcl} \factt n &=& \ifthenelset (\iszerot n) (\chnum{1}) (\multt n (\factt (\predt n))) \end{array} \] Here we have used various abbreviations for lambda terms that were introduced in the previous section. 
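As an aside on eager languages: evaluating $\Thetat F$ in Python loops forever, because the argument $xxy$ is computed before $y$ is applied. An eta-expanded variant, usually called the Z combinator, delays that evaluation and ties the same recursive knot. The following sketch uses machine integers rather than Church numerals, purely for readability.

```python
# Z = λf.(λx. f (λv. x x v)) (λx. f (λv. x x v))
# The inner λv delays evaluation of x x, which would diverge under
# Python's eager (call-by-value) evaluation strategy.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# F = λf.λn. if n = 0 then 1 else n * f(n - 1)
F = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

fact = Z(F)   # a fixed point of F, so fact "=" F fact up to evaluation
```

Here `fact(5)` computes `120`, which is the machine-integer counterpart of solving the fixed point equation discussed next.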
The evident problem with a recursive definition such as this one is that the term to be defined, $\factt$, appears both on the left- and the right-hand side. In other words, to find $\factt$ requires solving an equation! We now apply our newfound knowledge of how to solve fixed point equations in the lambda calculus. We start by rewriting the problem slightly: \[ \begin{array}{rcl} \factt &=& \lam n.\ifthenelset (\iszerot n) (\chnum{1}) (\multt n (\factt (\predt n))) \\ \factt &=& (\lam f.\lam n.\ifthenelset (\iszerot n) (\chnum{1}) (\multt n (f (\predt n)))) \factt \end{array} \] Let us temporarily write $F$ for the term \[ \lam f.\lam n.\ifthenelset (\iszerot n) (\chnum{1}) (\multt n (f (\predt n))). \] Then the last equation becomes $\factt = F \factt$, which is a fixed point equation. We can solve it up to $\beta$-equivalence, by letting \[ \begin{array}{rcl} \factt &=& \Thetat F \\ &=& \Thetat (\lam f.\lam n.\ifthenelset (\iszerot n) (\chnum{1}) (\multt n (f (\predt n)))) \end{array} \] Note that $\factt$ has disappeared from the right-hand side. The right-hand side is a closed lambda term that represents the factorial function. (A lambda term is called {\em closed} if it contains no free variables). To see how this definition works in practice, let us evaluate $\factt \chnum{2}$. Recall from the proof of Theorem~\ref{thm-fix} that $\Thetat F\redbs F(\Thetat F)$, therefore $\factt \redbs F\factt$. 
\[ \begin{array}{@{}r@{~}c@{~}l@{}} \factt\chnum{2} &\redbs& F\factt\chnum{2} \\ &\redbs& \ifthenelset(\iszerot \chnum{2}) (\chnum{1}) (\multt \chnum{2} (\factt (\predt \chnum{2}))) \\ &\redbs& \ifthenelset (\falset) (\chnum{1}) (\multt \chnum{2} (\factt (\predt \chnum{2}))) \\ &\redbs& \multt \chnum{2} (\factt (\predt \chnum{2})) \\ &\redbs& \multt \chnum{2} (\factt \chnum{1}) \\ &\redbs& \multt \chnum{2} (F\factt \chnum{1}) \\ &\redbs& \ldots \\ &\redbs& \multt \chnum{2} (\multt \chnum{1} (\factt\chnum{0})) \\ &\redbs& \multt \chnum{2} (\multt \chnum{1} (F\factt\chnum{0})) \\ &\redbs& \multt \chnum{2} (\multt \chnum{1} (\ifthenelset(\iszerot \chnum{0}) (\chnum{1}) (\multt \chnum{0} (\factt (\predt \chnum{0}))))) \\ &\redbs& \multt \chnum{2} (\multt \chnum{1} (\ifthenelset(\truet) (\chnum{1}) (\multt \chnum{0} (\factt (\predt \chnum{0}))))) \\ \end{array}\]\[\begin{array}{rcl} &\redbs& \multt \chnum{2} (\multt \chnum{1} \chnum{1}) \\ &\redbs& \chnum{2} \end{array} \] Note that this calculation, while messy, is completely mechanical. You can easily convince yourself that $\factt \chnum{3}$ reduces to $\multt \chnum{3} (\factt \chnum{2})$, and therefore, by the above calculation, to $\multt \chnum{3} \chnum{2}$, and finally to $\chnum{6}$. It is now a matter of a simple induction to prove that $\factt \chnum{n}\redbs \chnum{n!}$, for any $n$. \begin{exercise} Write a lambda term that represents the Fibonacci function, defined by \[ f(0) = 1,\sep f(1) = 1,\sep f(n+2)=f(n+1)+f(n), \mbox{for $n\geq 2$} \] \end{exercise} \begin{exercise} Write a lambda term that represents the characteristic function of the prime numbers, i.e., $f(n)=\mbox{true}$ if $n$ is prime, and $\mbox{false}$ otherwise. \end{exercise} \begin{exercise} We have remarked at the beginning of this section that the number-theoretic function $f(x)=x+1$ does not have a fixed point. 
On the other hand, the lambda term $F=\lam x.\succt x$, which represents the same function, does have a fixed point by Theorem~\ref{thm-fix}. How can you reconcile the two statements? \end{exercise} \begin{exercise} The first fixed point combinator for the lambda calculus was discovered by Curry. Curry's fixed point combinator, which is also called the {\em paradoxical fixed point combinator}, is the term $\Y=\lam f.(\lam x.f(xx))(\lam x.f(xx))$. \begin{enumerate} \item[(a)] Prove that this is indeed a fixed point combinator, i.e., that $\Y F$ is a fixed point of $F$, for any term $F$. \item[(b)] Turing's fixed point combinator not only satisfies $\Thetat F\eqb F(\Thetat F)$, but also $\Thetat F\redbs F(\Thetat F)$. We used this fact in evaluating $\factt\chnum{2}$. Does an analogous property hold for $\Y$? Does this affect the outcome of the evaluation of $\factt\chnum{2}$? \item[(c)] Can you find another fixed point combinator, besides Curry's and Turing's? \end{enumerate} \end{exercise} \subsection{Other data types: pairs, tuples, lists, trees, etc.} So far, we have discussed lambda terms that represented functions on booleans and natural numbers. However, it is easily possible to encode more general data structures in the untyped lambda calculus. Pairs and tuples are of interest to everybody. The examples of lists and trees are primarily interesting to people with experience in a list-processing language such as LISP or PROLOG; you can safely ignore these examples if you want to. {\bf Pairs.} If $M$ and $N$ are lambda terms, we define the pair $\pair{M,N}$ to be the lambda term $\lam z.zMN$. We also define two terms $\leftt=\lam p.p(\lam xy.x)$ and $\rightt=\lam p.p(\lam xy.y)$. We observe the following: \[ \begin{array}{rcl} \leftt \pair{M,N} &\redbs& M \\ \rightt \pair{M,N} &\redbs& N \end{array} \] The terms $\leftt$ and $\rightt$ are called the left and right {\em projections}. {\bf Tuples.} The encoding of pairs easily extends to arbitrary $n$-tuples. 
If $M_1,\ldots,M_n$ are terms, we define the $n$-tuple $\tuple{M_1,\ldots,M_n}$ as the lambda term $\lam z.zM_1\ldots M_n$, and we define the $i$th projection $\pi^n_i=\lam p.p(\lam x_1\ldots x_n.x_i)$. Then \[ \begin{array}{rcl} \pi^n_i\tuple{M_1,\ldots,M_n} &\redbs& M_i, \mbox{for all $1\leq i\leq n$.} \end{array} \] {\bf Lists.} A list is different from a tuple, because its length is not necessarily fixed. A list is either empty (``nil''), or else it consists of a first element (the ``head'') followed by another list (the ``tail''). We write $\nilt$ for the empty list, and $H::T$ for the list whose head is $H$ and whose tail is $T$. So, for instance, the list of the first three numbers can be written as $1::(2::(3::\nilt))$. We usually omit the parentheses; it is understood that ``$::$'' associates to the right. Note that every list ends in $\nilt$. In the lambda calculus, we can define $\nilt=\lam xy.y$ and $H::T = \lam xy.xHT$. Here is a lambda term that adds a list of numbers: \[ \nm{addlist} l = l(\lam h\,t.\addt h(\nm{addlist} t))(\chnum{0}). \] Of course, this is a recursive definition, and must be translated into an actual lambda term by the method of Section~\ref{subsec-fixed-points}. In the definition of $\nm{addlist}$, $l$ and $t$ are lists of numbers, and $h$ is a number. If you are very diligent, you can calculate the sum of last weekend's Canadian lottery results by evaluating the term \[ \nm{addlist} (\chnum{4}::\chnum{22}::\chnum{24}::\chnum{32}::\chnum{42}::\chnum{43}::\nilt). \] Note that lists enable us to give an alternative encoding of the natural numbers: We can encode a natural number as a list of booleans, which we interpret as the binary digits 0 and 1. Of course, with this encoding, we would have to carefully redesign our basic functions, such as successor, addition, and multiplication. 
However, if done properly, such an encoding would be a lot more efficient (in terms of number of $\beta$-reductions to be performed) than the encoding by Church numerals. {\bf Trees.} A binary tree is a data structure that can be one of two things: either a {\em leaf}, labeled by a natural number, or a {\em node}, which has a left and a right subtree. We write $\leaft(N)$ for a leaf labeled $N$, and $\nodet(L,R)$ for a node with left subtree $L$ and right subtree $R$. We can encode trees as lambda terms, for instance as follows: \[ \leaft(n) = \lam xy.xn,\sep \nodet(L,R) = \lam xy.yLR \] As an illustration, here is a program (i.e., a lambda term) that adds all the numbers at the leaves of a given tree. \[ \nm{addtree} t = t(\lam n.n)(\lam l\, r.\addt(\nm{addtree}l)(\nm{addtree}r)). \] \begin{exercise} This is a voluntary programming exercise. \begin{enumerate} \item[(a)] Write a lambda term that calculates the length of a list. \item[(b)] Write a lambda term that calculates the depth (i.e., the nesting level) of a tree. You may need to define a function $\nm{max}$ that calculates the maximum of two numbers. \item[(c)] Write a lambda term that sorts a list of numbers. You may assume given a term $\nm{less}$ that compares two numbers. \end{enumerate} \end{exercise} \section{The Church-Rosser Theorem} \subsection{Extensionality, $\eta$-equivalence, and $\eta$-reduction} In the untyped lambda calculus, any term can be applied to another term. Therefore, any term can be regarded as a function. Consider a term $M$, not containing the variable $x$, and consider the term $M'=\lam x.Mx$. Then for any argument $A$, we have $MA\eqb M'A$. So in this sense, $M$ and $M'$ define ``the same function''. Should $M$ and $M'$ be considered equivalent as terms? The answer depends on whether we want to accept the principle that ``if $M$ and $M'$ define the same function, then $M$ and $M'$ are equal''. 
This is called the principle of {\em extensionality}, and we have already encountered it in Section~\ref{subsec-intro1}. Formally, the extensionality rule is the following: \[ \begin{array}{lc} \trule{ext$_\forall$} & \deriv{\forall A.MA=M'A}{M=M'}. \end{array} \] In the presence of the axioms $\nrule{\xi}$, $\trule{cong}$, and $\nrule{\beta}$, it can be easily seen that $MA=M'A$ is true for {\em all} terms $A$ if and only if $Mx=M'x$, where $x$ is a fresh variable. Therefore, we can replace the extensionality rule by the following equivalent, but simpler rule: \[ \begin{array}{lc} \trule{ext} & \deriv{Mx=M'x, \mbox{ where $x\not\in\FV{M,M'}$}}{M=M'}. \end{array} \] Note that we can apply the extensionality rule in particular to the case where $M'=\lam x.Mx$, where $x$ is not free in $M$. As we have remarked above, $Mx\eqb M'x$, and thus extensionality implies that $M=\lam x.Mx$. This last equation is called the $\eta$-law (eta-law): \[ \begin{array}{lc} \nrule{\eta} & M=\lam x.Mx, \mbox{ where $x\not\in\FV{M}$}. \end{array} \] In fact, $\nrule{\eta}$ and $\trule{ext}$ are equivalent in the presence of the other axioms of the lambda calculus. We have already seen that $\trule{ext}$ and $\nrule{\beta}$ imply $\nrule{\eta}$. Conversely, assume $\nrule{\eta}$, and assume that $Mx=M'x$, for some terms $M$ and $M'$ not containing $x$ freely. Then by $\nrule{\xi}$, we have $\lam x.Mx=\lam x.M'x$, hence by $\nrule{\eta}$ and transitivity, $M=M'$. Thus $\trule{ext}$ holds. We note that the $\eta$-law does not follow from the axioms and rules of the lambda calculus that we have considered so far. In particular, the terms $x$ and $\lam y.xy$ are not $\beta$-equivalent, although they are clearly $\eta$-equivalent. We will prove that $x\not\eqb \lam y.xy$ in Corollary~\ref{cor-beta-not-eta} below. 
Single-step $\eta$-reduction is the smallest relation $\rede$ satisfying $\trule{cong$_1$}$, $\trule{cong$_2$}$, $\nrule{\xi}$, and the following axiom (which is the same as the $\eta$-law, directed right to left): \[ \begin{array}{lc}\label{page-def-eta} \nrule{\eta} & \lam x.Mx\rede M, \mbox{ where $x\not\in\FV{M}$}. \end{array} \] Single-step $\beta\eta$-reduction $\redbe$ is defined as the union of the single-step $\beta$- and $\eta$-reductions, i.e., $M\redbe M'$ iff $M\redb M'$ or $M\rede M'$. Multi-step $\eta$-reduction $\redes$, multi-step $\beta\eta$-reduction $\redbes$, as well as $\eta$-equivalence $\eqe$ and $\beta\eta$-equivalence $\eqbe$ are defined in the obvious way as we did for $\beta$-reduction and equivalence. We also get the evident notions of $\eta$-normal form, $\beta\eta$-normal form, etc. \subsection{Statement of the Church-Rosser Theorem, and some consequences} \begin{un-theorem}[Church and Rosser, 1936]\label{thm-church-rosser} Let $\reds$ denote either $\redbs$ or $\redbes$. Suppose $M$, $N$, and $P$ are lambda terms such that $M\reds N$ and $M\reds P$. Then there exists a lambda term $Z$ such that $N\reds Z$ and $P\reds Z$. \end{un-theorem} In pictures, the theorem states that the following diagram can always be completed: \[ \xymatrix@dr{M\ar@{->>}[r]\ar@{->>}[d] & P\ar@{.>>}[d]\\ N\ar@{.>>}[r] & Z} \] This property is called the {\em Church-Rosser property}, or {\em confluence}. Before we prove the Church-Rosser Theorem, let us highlight some of its consequences. \begin{corollary}\label{cor-cr-2} If $M\eqb N$ then there exists some $Z$ with $M,N\redbs Z$. Similarly for $\beta\eta$. \end{corollary} \begin{proof} Please refer to Figure~\ref{fig-cor-cr-2} for an illustration of this proof. Recall that $\eqb$ is the reflexive symmetric transitive closure of $\redb$. Suppose that $M\eqb N$. 
Then there exist $n\geq 0$ and terms $M_0,\ldots,M_n$ such that $M=M_0$, $N=M_n$, and for all $i=1\ldots n$, either $M_{i-1}\redb M_{i}$ or $M_{i}\redb M_{i-1}$. We prove the claim by induction on $n$. For $n=0$, we have $M=N$ and there is nothing to show. Suppose the claim has been proven for $n-1$. Then by induction hypothesis, there exists a term $Z'$ such that $M\redbs Z'$ and $M_{n-1}\redbs Z'$. Further, we know that either $N\redb M_{n-1}$ or $M_{n-1}\redb N$. In case $N\redb M_{n-1}$, then $N\redbs Z'$, and we are done. In case $M_{n-1}\redb N$, we apply the Church-Rosser Theorem to $M_{n-1}$, $Z'$, and $N$ to obtain a term $Z$ such that $Z'\redbs Z$ and $N\redbs Z$. Since $M\redbs Z'\redbs Z$, we are done. The proof in the case of $\beta\eta$-reduction is identical.\eot \end{proof} \begin{figure} \caption{The proof of Corollary~\ref{cor-cr-2}} \label{fig-cor-cr-2} \end{figure} \begin{corollary}\label{cor-cr-3} If $N$ is a $\beta$-normal form and $N\eqb M$, then $M\redbs N$, and similarly for $\beta\eta$. \end{corollary} \begin{proof} By Corollary~\ref{cor-cr-2}, there exists some $Z$ with $M,N\redbs Z$. But $N$ is a normal form, thus $N\eqa Z$. \eot \end{proof} \begin{corollary}\label{cor-cr-4} If $M$ and $N$ are $\beta$-normal forms such that $M\eqb N$, then $M\eqa N$, and similarly for $\beta\eta$. \end{corollary} \begin{proof} By Corollary~\ref{cor-cr-3}, we have $M\redbs N$, but since $M$ is a normal form, we have $M\eqa N$. \eot \end{proof} \begin{corollary} If $M\eqb N$, then neither or both have a $\beta$-normal form. Similarly for $\beta\eta$. \end{corollary} \begin{proof} Suppose that $M\eqb N$, and that one of them has a $\beta$-normal form. Say, for instance, that $M$ has a normal form $Z$. Then $N\eqb Z$, hence $N\redbs Z$ by Corollary~\ref{cor-cr-3}. \eot \end{proof} \begin{corollary}\label{cor-beta-not-eta} The terms $x$ and $\lam y.xy$ are not $\beta$-equivalent. In particular, the $\eta$-rule does not follow from the $\beta$-rule. 
\end{corollary} \begin{proof} The terms $x$ and $\lam y.xy$ are both $\beta$-normal forms, and they are not $\alpha$-equivalent. It follows by Corollary~\ref{cor-cr-4} that $x\not\eqb \lam y.xy$. \eot \end{proof} \subsection{Preliminary remarks on the proof of the Church-Rosser Theorem} \label{subsec-prelim-cr} Consider any binary relation $\red$ on a set, and let $\reds$ be its reflexive transitive closure. Consider the following three properties of such relations: \[ \mbox{(a)} \ssep \xymatrix@dr{M\ar@{->>}[r]\ar@{->>}[d] & P\ar@{.>>}[d]\\ N\ar@{.>>}[r] & Z} \sep \mbox{(b)} \ssep \xymatrix@dr{M\ar@{->}[r]\ar@{->}[d] & P\ar@{.>>}[d]\\ N\ar@{.>>}[r] & Z} \sep \mbox{(c)} \ssep \xymatrix@dr{M\ar@{->}[r]\ar@{->}[d] & P\ar@{.>}[d]\\ N\ar@{.>}[r] & Z} \] Each of these properties states that for all $M,N,P$, if the solid arrows exist, then there exists $Z$ such that the dotted arrows exist. The only difference between (a), (b), and (c) is the difference between where $\red$ and $\reds$ are used. Property (a) is the Church-Rosser property. Property (c) is called the diamond property (because the diagram is shaped like a diamond). A naive attempt to prove the Church-Rosser Theorem might proceed as follows: First, prove that the relation $\redb$ satisfies property (b) (this is relatively easy to prove); then use an inductive argument to conclude that it also satisfies property (a). Unfortunately, this does not work: the reason is that in general, property (b) does not imply property (a)! An example of a relation that satisfies property (b) but not property (a) is shown in Figure~\ref{fig-b-not-a}. In other words, a proof of property (b) is not sufficient in order to prove property (a). 
\begin{figure} \caption{An example of a relation that satisfies property (b), but not property (a)} \label{fig-b-not-a} \end{figure} \begin{figure} \caption{Proof that property (c) implies property (a)} \label{fig-diamond-a} \end{figure} On the other hand, property (c), the diamond property, {\em does} imply property (a). This is very easy to prove by induction, and the proof is illustrated in Figure~\ref{fig-diamond-a}. But unfortunately, $\beta$-reduction does not satisfy property (c), so again we are stuck. To summarize, we are faced with the following dilemma: \begin{itemize} \item $\beta$-reduction satisfies property (b), but property (b) does not imply property (a). \item Property (c) implies property (a), but $\beta$-reduction does not satisfy property (c). \end{itemize} On the other hand, it seems hopeless to prove property (a) directly. In the next section, we will solve this dilemma by defining yet another reduction relation $\tri$, with the following properties: \begin{itemize} \item $\tri$ satisfies property (c), and \item the transitive closure of $\tri$ is the same as that of $\redb$ (or $\redbe$). \end{itemize} \subsection{Proof of the Church-Rosser Theorem} \label{subsec-proof-cr} In this section, we will prove the Church-Rosser Theorem for $\beta\eta$-reduction. The proof for $\beta$-reduction (without $\eta$) is very similar, and in fact slightly simpler, so we omit it here. The proof presented here is due to Tait and Martin-L\"of. We begin by defining a new relation $M\tri M'$ on terms, called {\em parallel one-step reduction}. We define $\tri$ to be the smallest relation satisfying \[ \begin{array}{lc} (1) & \deriv{}{x\tri x} \\[1.8ex] (2) & \deriv{P \tri P' \sep N \tri N'}{PN \tri P'N'} \\[1.8ex] (3) & \deriv{N \tri N'}{\lam x.N \tri \lam x.N'} \\[1.8ex] (4) & \deriv{Q \tri Q' \sep N \tri N'}{(\lam x.Q)N \tri \subst{Q'}{N'}{x}} \\[1.8ex] (5) & \deriv{P \tri P',\mbox{ where $x\not\in\FV{P}$}}{\lam x.Px \tri P'}. 
\end{array} \] \begin{lemma}\label{lem-tri-redbes} \begin{enumerate} \item[(a)] For all $M,M'$, if $M\redbe M'$ then $M\tri M'$. \item[(b)] For all $M,M'$, if $M\tri M'$ then $M\redbes M'$. \item[(c)] $\redbes$ is the reflexive, transitive closure of $\tri$. \end{enumerate} \end{lemma} \begin{proof} (a) First note that we have $P\tri P$, for any term $P$. This is easily shown by induction on $P$. We now prove the claim by induction on a derivation of $M\redbe M'$. Please refer to pages~\pageref{page-def-beta} and {\pageref{page-def-eta}} for the rules that define $\redbe$. We make a case distinction based on the last rule used in the derivation of $M\redbe M'$. \begin{itemize} \item If the last rule was $\nrule{\beta}$, then $M=(\lam x.Q)N$ and $M'=\subst{Q}{N}{x}$, for some $Q$ and $N$. But then $M\tri M'$ by (4), using the facts $Q\tri Q$ and $N\tri N$. \item If the last rule was $\nrule{\eta}$, then $M=\lam x.Px$ and $M'=P$, for some $P$ such that $x\not\in\FV{P}$. Then $M\tri M'$ follows from (5), using $P\tri P$. \item If the last rule was $\trule{cong$_1$}$, then $M=PN$ and $M'=P'N$, for some $P$, $P'$, and $N$ where $P\redbe P'$. By induction hypothesis, $P\tri P'$. From this and $N\tri N$, it follows immediately that $M\tri M'$ by (2). \item If the last rule was $\trule{cong$_2$}$, we proceed similarly to the last case. \item If the last rule was $\nrule{\xi}$, then $M=\lam x.N$ and $M'=\lam x.N'$ for some $N$ and $N'$ such that $N\redbe N'$. By induction hypothesis, $N\tri N'$, which implies $M\tri M'$ by (3). \end{itemize} (b) We prove this by induction on a derivation of $M\tri M'$. We distinguish several cases, depending on the last rule used in the derivation. \begin{itemize} \item If the last rule was (1), then $M=M'=x$, and we are done because $x\redbes x$. \item If the last rule was (2), then $M=PN$ and $M'=P'N'$, for some $P$, $P'$, $N$, $N'$ with $P\tri P'$ and $N\tri N'$. By induction hypothesis, $P\redbes P'$ and $N\redbes N'$. 
Since $\redbes$ satisfies $\trule{cong}$, it follows that $PN\redbes P'N'$, hence $M\redbes M'$ as desired. \item If the last rule was (3), then $M=\lam x.N$ and $M'=\lam x.N'$, for some $N,N'$ with $N\tri N'$. By induction hypothesis, $N\redbes N'$, hence $M=\lam x.N\redbes \lam x.N'=M'$ by $\nrule{\xi}$. \item If the last rule was (4), then $M=(\lam x.Q)N$ and $M'=\subst{Q'}{N'}{x}$, for some $Q,Q',N,N'$ with $Q\tri Q'$ and $N\tri N'$. By induction hypothesis, $Q\redbes Q'$ and $N\redbes N'$. Therefore $M=(\lam x.Q)N\redbes(\lam x.Q')N'\redbe \subst{Q'}{N'}{x}=M'$, as desired. \item If the last rule was (5), then $M=\lam x.Px$ and $M'=P'$, for some $P,P'$ with $P\tri P'$, and $x\not\in\FV{P}$. By induction hypothesis, $P\redbes P'$, hence $M=\lam x.Px\redbe P\redbes P'=M'$, as desired. \end{itemize} (c) This follows directly from (a) and (b). Let us write $R^*$ for the reflexive transitive closure of a relation $R$. By (a), we have ${\redbe}\seq{\tri}$, hence ${\redbes}={\redbe}^*\seq{\tri}^*$. By (b), we have ${\tri}\seq{\redbes}$, hence ${\tri}^*\seq{\redbes}^*={\redbes}$. It follows that ${\tri}^*={\redbes}$.\eot \end{proof} We will soon prove that $\tri$ satisfies the diamond property. Note that together with Lemma~\ref{lem-tri-redbes}(c), this will immediately imply that $\redbes$ satisfies the Church-Rosser property. \begin{lemma}[Substitution] If $M \tri M'$ and $U \tri U'$, then $\subst{M}{U}{y} \tri \subst{M'}{U'}{y}$. \end{lemma} \begin{proof} We assume without loss of generality that any bound variables of $M$ are different from $y$ and from the free variables of $U$. The claim is now proved by induction on derivations of $M\tri M'$. We distinguish several cases, depending on the last rule used in the derivation: \begin{itemize} \item If the last rule was (1), then $M=M'=x$, for some variable $x$. If $x=y$, then $\subst{M}{U}{y}=U\tri U'=\subst{M'}{U'}{y}$. If $x\neq y$, then by (1), $\subst{M}{U}{y}=x\tri x=\subst{M'}{U'}{y}$. 
\item If the last rule was (2), then $M=PN$ and $M'=P'N'$, for some $P$, $P'$, $N$, $N'$ with $P\tri P'$ and $N\tri N'$. By induction hypothesis, $\subst{P}{U}{y}\tri \subst{P'}{U'}{y}$ and $\subst{N}{U}{y}\tri \subst{N'}{U'}{y}$, hence by (2), $\subst{M}{U}{y}=\subst{P}{U}{y}\subst{N}{U}{y} \tri \subst{P'}{U'}{y}\subst{N'}{U'}{y} = \subst{M'}{U'}{y}$. \item If the last rule was (3), then $M=\lam x.N$ and $M'=\lam x.N'$, for some $N,N'$ with $N\tri N'$. By induction hypothesis, $\subst{N}{U}{y}\tri \subst{N'}{U'}{y}$, hence by (3) $\subst{M}{U}{y}=\lam x.\subst{N}{U}{y} \tri \lam x.\subst{N'}{U'}{y} = \subst{M'}{U'}{y}$. \item If the last rule was (4), then $M=(\lam x.Q)N$ and $M'=\subst{Q'}{N'}{x}$, for some $Q,Q',N,N'$ with $Q\tri Q'$ and $N\tri N'$. By induction hypothesis, $\subst{Q}{U}{y}\tri \subst{Q'}{U'}{y}$ and $\subst{N}{U}{y}\tri \subst{N'}{U'}{y}$, hence by (4), $(\lam x.\subst{Q}{U}{y})\subst{N}{U}{y} \tri \subst{\subst{Q'}{U'}{y}}{\subst{N'}{U'}{y}}{x} = \subst{\subst{Q'}{N'}{x}}{U'}{y}$. Thus $\subst{M}{U}{y} \tri \subst{M'}{U'}{y}$. \item If the last rule was (5), then $M=\lam x.Px$ and $M'=P'$, for some $P,P'$ with $P\tri P'$, and $x\not\in\FV{P}$. By induction hypothesis, $\subst{P}{U}{y}\tri \subst{P'}{U'}{y}$, hence by (5), $\subst{M}{U}{y}=\lam x.\subst{P}{U}{y}x \tri \subst{P'}{U'}{y}=\subst{M'}{U'}{y}$.\eot \end{itemize} \end{proof} A more conceptual way of looking at this proof is the following: consider any derivation of $M\tri M'$ from axioms (1)--(5). In this derivation, replace any axiom $y\tri y$ by $U\tri U'$, and propagate the changes (i.e., replace $y$ by $U$ on the left-hand-side, and by $U'$ on the right-hand-side of any $\tri$). The result is a derivation of $\subst{M}{U}{y}\tri\subst{M'}{U'}{y}$. (The formal proof that the result of this replacement is indeed a valid derivation requires an induction, and this is the reason why the proof of the substitution lemma is so long). 
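Under the same bound-variable convention as in the proof, the substitution operation itself is easy to implement. Here is a Python sketch (the tuple representation of terms is ours); note that it performs no renaming, so it is correct only under that convention:

```python
def subst(term, u, y):
    """Compute term[u/y]. Terms are ('var', x), ('app', t, s), or ('lam', x, t).
    Assumes the bound variables of term are distinct from y and from the
    free variables of u (the convention adopted in the proof), so no
    renaming is needed."""
    kind = term[0]
    if kind == 'var':
        return u if term[1] == y else term
    if kind == 'app':
        return ('app', subst(term[1], u, y), subst(term[2], u, y))
    # kind == 'lam': by the convention, the bound variable differs from y,
    # so we simply substitute under the binder.
    return ('lam', term[1], subst(term[2], u, y))

# (\x. y x)[z/y]  =  \x. z x
t = ('lam', 'x', ('app', ('var', 'y'), ('var', 'x')))
print(subst(t, ('var', 'z'), 'y'))
```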
Our next goal is to prove that $\tri$ satisfies the diamond property. Before proving this, we first define the {\em maximal parallel one-step reduct} $M^*$ of a term $M$ as follows: \begin{enumerate} \item $x^* = x$, for a variable $x$. \item $(PN)^* = P^*N^*$, if $PN$ is not a $\beta$-redex. \item $((\lam x.Q)N)^* = \subst{Q^*}{N^*}{x}$. \item $(\lam x.N)^* = \lam x.N^*$, if $\lam x.N$ is not an $\eta$-redex. \item $(\lam x.Px)^* = P^*$, if $x\not\in\FV{P}$. \end{enumerate} Note that $M^*$ depends only on $M$. The following lemma implies the diamond property for $\tri$. \begin{lemma}[Maximal parallel one-step reductions] \label{lem-max-par} Whenever $M \tri M'$, then $M' \tri M^*$. \end{lemma} \begin{proof} By induction on the size of $M$. We distinguish five cases, depending on the last rule used in the derivation of $M \tri M'$. As usual, we assume that all bound variables have been renamed to avoid clashes. \begin{itemize} \item If the last rule was (1), then $M=M'=x$, also $M^*=x$, and we are done. \item If the last rule was (2), then $M=PN$ and $M'=P'N'$, where $P\tri P'$ and $N\tri N'$. By induction hypothesis $P'\tri P^*$ and $N'\tri N^*$. Two cases: \begin{itemize} \item If $PN$ is not a $\beta$-redex, then $M^*=P^*N^*$. Thus $M'=P'N'\tri P^*N^*=M^*$ by (2), and we are done. \item If $PN$ is a $\beta$-redex, say $P=\lam x.Q$, then $M^*=\subst{Q^*}{N^*}{x}$. We distinguish two subcases, depending on the last rule used in the derivation of $P\tri P'$: \begin{itemize} \item If the last rule was (3), then $P'=\lam x.Q'$, where $Q\tri Q'$. By induction hypothesis $Q' \tri Q^*$, and with $N' \tri N^*$, it follows that $M' = (\lam x.Q')N' \tri \subst{Q^*}{N^*}{x} = M^*$ by (4). \item If the last rule was (5), then $P=\lam x.Rx$ and $P'=R'$, where $x\not\in\FV{R}$ and $R\tri R'$. Consider the term $Q=Rx$. Since $Rx \tri R'x$, and $Rx$ is a subterm of $M$, by induction hypothesis $R'x \tri (Rx)^*$.
By the substitution lemma, $M' = R'N' = \subst{(R'x)}{N'}{x} \tri \subst{(Rx)^*}{N^*}{x} = M^*$. \end{itemize} \end{itemize} \item If the last rule was (3), then $M = \lam x.N$ and $M' = \lam x.N'$, where $N\tri N'$. Two cases: \begin{itemize} \item If $M$ is not an $\eta$-redex, then $M^*=\lam x.N^*$. By induction hypothesis, $N'\tri N^*$, hence $M'\tri M^*$ by (3). \item If $M$ is an $\eta$-redex, then $N=Px$, where $x\not\in\FV{P}$. In this case, $M^*=P^*$. We distinguish two subcases, depending on the last rule used in the derivation of $N\tri N'$: \begin{itemize} \item If the last rule was (2), then $N' = P'x$, where $P\tri P'$. By induction hypothesis $P' \tri P^*$. Hence $M' = \lam x.P'x \tri P^* = M^*$ by (5). \item If the last rule was (4), then $P = \lam y.Q$ and $N' = \subst{Q'}{x}{y}$, where $Q\tri Q'$. Then $M' = \lam x.\subst{Q'}{x}{y} = \lam y.Q'$ (note $x \not\in \FV{Q'}$). But $P \tri \lam y.Q'$, hence by induction hypothesis, $\lam y.Q' \tri P^* = M^*$. \end{itemize} \end{itemize} \item If the last rule was (4), then $M = (\lam x.Q)N$ and $M' = \subst{Q'}{N'}{x}$, where $Q\tri Q'$ and $N\tri N'$. Then $M^* = \subst{Q^*}{N^*}{x}$, and $M' \tri M^*$ by the substitution lemma. \item If the last rule was (5), then $M = \lam x.Px$ and $M' = P'$, where $P\tri P'$ and $x\not\in\FV{P}$. Then $M^*=P^*$. By induction hypothesis, $P'\tri P^*$, hence $M'\tri M^*$. \eot \end{itemize} \end{proof} The previous lemma immediately implies the diamond property for $\tri$: \begin{lemma}[Diamond property for $\tri$] If $M\tri N$ and $M\tri P$, then there exists $Z$ such that $N\tri Z$ and $P\tri Z$. \end{lemma} \begin{proof} Take $Z=M^*$.\eot \end{proof} Finally, we have a proof of the Church-Rosser Theorem: \begin{proofof}{Theorem~\ref{thm-church-rosser}} Since $\tri$ satisfies the diamond property, it follows that its reflexive transitive closure $\tri^*$ also satisfies the diamond property, as shown in Figure~\ref{fig-diamond-a}. 
But $\tri^*$ is the same as $\redbes$ by Lemma~\ref{lem-tri-redbes}(c), and the diamond property for $\redbes$ is just the Church-Rosser property for $\redbe$.\eot \end{proofof} \subsection{Exercises} \begin{exercise} Give a detailed proof that property (c) from Section~\ref{subsec-prelim-cr} implies property (a). \end{exercise} \begin{exercise} Prove that $M\tri M$, for all terms $M$. \end{exercise} \begin{exercise} Without using Lemma~\ref{lem-max-par}, prove that $M\tri M^*$ for all terms $M$. \end{exercise} \begin{exercise} Let $\Omega=(\lam x.xx)(\lam x.xx)$. Prove that $\Omega\not\eqbe\Omega\Omega$. \end{exercise} \begin{exercise} What changes have to be made to Section~\ref{subsec-proof-cr} to get a proof of the Church-Rosser Theorem for $\redb$, instead of $\redbe$? \end{exercise} \begin{exercise} Recall the properties (a)--(c) of binary relations $\red$ that were discussed in Section~\ref{subsec-prelim-cr}. Consider the following similar property, which is sometimes called the ``strip property'': \[ \mbox{(d)} \ssep \xymatrix@dr{M\ar@{->}[r]\ar@{->>}[d] & P\ar@{.>>}[d]\\ N\ar@{.>>}[r] & Z.} \] Does (d) imply (a)? Does (b) imply (d)? In each case, give either a proof or a counterexample. \end{exercise} \begin{exercise} To every lambda term $M$, we may associate a directed graph (with possibly multiple edges and loops) $\sG(M)$ as follows: (i) the vertices are terms $N$ such that $M\redbs N$, i.e., all the terms that $M$ can $\beta$-reduce to; (ii) the edges are given by a single-step $\beta$-reduction. Note that the same term may have two (or more) reductions coming from different redexes; each such reduction is a separate edge. For example, let $I=\lam x.x$. Let $M=I(Ix)$. Then \[ \sG(M)=\xymatrix{I(Ix)\ar@/^/[r]\ar@/_/[r]&Ix\ar[r]&x}. \] Note that there are two separate edges from $I(Ix)$ to $Ix$. We also sometimes write bullets instead of terms, to get $ \xymatrix{\bullet\ar@/^/[r]\ar@/_/[r]&\bullet\ar[r]&\bullet}$. 
As another example, let $\Omega=(\lam x.xx)(\lam x.xx)$. Then \[ \sG(\Omega) = \xymatrix{\bullet\ar@(ur,dr)[]&}. \] \begin{enumerate} \item[(a)] Let $M=(\lam x.I(xx))(\lam x.xx)$. Find $\sG(M)$. \item[(b)] For each of the following graphs, find a term $M$ such that $\sG(M)$ is the given graph, or explain why no such term exists. (Note: the ``starting'' vertex need not always be the leftmost vertex in the picture). Warning: some of these terms are tricky to find! \begin{enumerate} \item[(i)] \[ \xymatrix{\bullet\ar[r]&\bullet\ar@(ur,dr)[]} \] \item[(ii)] \[ \xymatrix{\bullet&\bullet\ar@(ur,dr)[]\ar[l]} \] \item[(iii)] \[ \xymatrix{\bullet&\bullet\ar[r]\ar[l]&\bullet} \] \item[(iv)] \[ \xymatrix{\bullet&\bullet\ar[l]\ar@/^/[r]&\bullet\ar@/^/[l]} \] \item[(v)] \[ \xymatrix{\bullet\ar@(ul,dl)[]&\bullet\ar@/^/[r]\ar[l]&\bullet\ar[r]\ar@/^/[l]&\bullet\ar@(ur,dr)[]} \] \item[(vi)] \[ \xymatrix@C-1em{\bullet\ar[rr]&&\bullet\ar[dl]\\&\bullet\ar[ul]} \] \item[(vii)] \[ \xymatrix@C-1em{\bullet\ar@(ul,dl)[]\ar[rr]&&\bullet\ar@(ur,dr)[]\ar[dl]\\&\bullet\ar@(dl,dr)[]\ar[ul]} \] \end{enumerate} \end{enumerate} \end{exercise} \section{Combinatory algebras} To give a model of the lambda calculus means to provide a mathematical space in which the axioms of lambda calculus are satisfied. This usually means that the elements of the space can be understood as functions, and that certain functions can be understood as elements. Na\"ively, one might try to construct a model of lambda calculus by finding a set $X$ such that $X$ is in bijective correspondence with the set $X^X$ of {\em all} functions from $X$ to $X$. This, however, is impossible: for cardinality reasons, the equation $X\cong X^X$ has no solutions except for a one-element set $X=1$. To see this, first note that the empty set $\emptyset$ is not a solution. Also, suppose $X$ is a solution with $|X|\geq 2$.
Then $|X^X|\geq |2^X|$, but by Cantor's argument, $|2^X|>|X|$, hence $X^X$ is of greater cardinality than $X$, contradicting $X\cong X^X$. There are two main strategies for constructing models of the lambda calculus, and both involve a restriction on the class of functions to make it smaller. The first approach, which will be discussed in this section, uses {\em algebra}, and the essential idea is to replace the set $X^X$ of all functions by a smaller, suitably defined set of {\em polynomials}. The second approach is to equip the set $X$ with additional structure (such as topology, ordered structure, etc), and to replace $X^X$ by a set of structure-preserving functions (for example, continuous functions, monotone functions, etc). \subsection{Applicative structures} \begin{definition} An {\em applicative structure} $(\Aa,\app)$ is a set $\Aa$ together with a binary operation ``$\app$''. \end{definition} Note that there are no further assumptions; in particular, we do {\em not} assume that application is an associative operation. We write $ab$ for $a\app b$, and as in the lambda calculus, we follow the convention of left associativity, i.e., we write $abc$ for $(ab)c$. \begin{definition} Let $(\Aa,\app)$ be an applicative structure. A {\em polynomial} in a set of variables $x_1,\ldots,x_n$ and with coefficients in $\Aa$ is a formal expression built from variables and elements of $\Aa$ by means of the application operation. In other words, the set of polynomials is given by the following grammar: \[ \begin{array}{lll} t,s&\bnf& x\bor a\bor ts, \end{array} \] where $x$ ranges over variables and $a$ ranges over the elements of $\Aa$. We write $\Aa\s{x_1,\ldots,x_n}$ for the set of polynomials in variables $x_1,\ldots,x_n$ with coefficients in $\Aa$. \end{definition} Here are some examples of polynomials in the variables $x,y,z$, where $a,b\in\Aa$: \[ x,\sep xy, \sep axx, \sep (x(y(zb)))(ax).
\] If $t(x_1,\ldots,x_n)$ is a polynomial in the indicated variables, and $b_1,\ldots,b_n$ are elements of $\Aa$, then we can evaluate the polynomial at the given elements: the evaluation $t(b_1,\ldots,b_n)$ is the element of $\Aa$ obtained by ``plugging'' $x_i=b_i$ into the polynomial, for $i=1,\ldots,n$, and evaluating the resulting expression in $\Aa$. Note that in this way, every polynomial $t$ in $n$ variables can be understood as a {\em function} $\Aa^n\to \Aa$. This is very similar to the usual polynomials in algebra, which can also be understood either as formal expressions or as functions. If $t(x_1,\ldots,x_n)$ and $s(x_1,\ldots,x_n)$ are two polynomials with coefficients in $\Aa$, we say that the equation $t(x_1,\ldots,x_n) = s(x_1,\ldots,x_n)$ {\em holds} in $\Aa$ if for all $b_1,\ldots,b_n\in\Aa$, $t(b_1,\ldots,b_n) = s(b_1,\ldots,b_n)$. \subsection{Combinatory completeness} \begin{definition}[Combinatory completeness] An applicative structure $(\Aa,\app)$ is {\em combinatorially complete} if for every polynomial $t(x_1,\ldots,x_n)$ of $n\geq 0$ variables, there exists some element $a\in\Aa$ such that \[ ax_1\ldots x_n = t(x_1,\ldots,x_n) \] holds in $\Aa$. \end{definition} In other words, combinatory completeness means that every polynomial {\em function} $t(x_1,\ldots,x_n)$ can be represented (in curried form) by some {\em element} of $\Aa$. We are therefore setting up a correspondence between functions and elements as discussed in the introduction of this section. Note that we do not require the element $a$ to be unique in the definition of combinatory completeness. This means that we are dealing with an intensional view of functions, where a given function might in general have several different names (but see the discussion of extensionality in Section~\ref{subsec-extensional-combinatory}). The following theorem characterizes combinatory completeness in terms of a much simpler algebraic condition.
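Before turning to that characterization, the definitions so far can be tried out concretely. The following Python sketch builds a small (entirely arbitrary) applicative structure on the integers and evaluates a polynomial in it; the representation of polynomials as tagged tuples is our choice:

```python
# A toy applicative structure on the integers: a . b = a + 2*b.
# This operation is deliberately non-associative, since no associativity
# is assumed for applicative structures.
def app(a, b):
    return a + 2 * b

# Polynomials as formal expressions: ('var', name), ('const', a), ('app', t, s).
def evaluate(t, env):
    """Evaluate a polynomial at the values given by env (variable -> element)."""
    kind = t[0]
    if kind == 'var':
        return env[t[1]]
    if kind == 'const':
        return t[1]
    return app(evaluate(t[1], env), evaluate(t[2], env))

# The polynomial (x y) x, evaluated at x = 3, y = 5:
p = ('app', ('app', ('var', 'x'), ('var', 'y')), ('var', 'x'))
print(evaluate(p, {'x': 3, 'y': 5}))  # (3 + 2*5) + 2*3 = 19
```

In this way each polynomial is both a formal expression (the tuple) and, via `evaluate`, a function on the carrier set, mirroring the two readings described above.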
\begin{theorem}\label{thm-combinatory-completeness} An applicative structure $(\Aa,\app)$ is combinatorially complete if and only if there exist two elements $s,k\in \Aa$, such that the following equations are satisfied for all $x,y,z\in\Aa$: \[ \begin{array}{ll} (1) & sxyz = (xz)(yz) \\ (2) & kxy = x \\ \end{array} \] \end{theorem} \begin{example}\label{exa-combinatory-completeness} Before we prove this theorem, let us look at a few examples. \begin{enumerate}\alphalabels \item The identity function. Can we find an element $i\in\Aa$ such that $ix=x$ for all $x$? Yes, indeed, we can let $i=skk$. We check that for all $x$, $skkx = (kx)(kx) = x$. \item The boolean ``true''. Can we find an element $\truet$ such that for all $x,y$, $\truet xy = x$? Yes, this is easy: $\truet = k$. \item The boolean ``false''. Can we find $\falset$ such that $\falset xy = y$? Yes, what we need is $\falset x = i$. Therefore a solution is $\falset = ki$. And indeed, for all $y$, we have $kixy = iy = y$. \item Find a function $f$ such that $fx = xx$ for all $x$. Solution: let $f=sii$. Then $siix = (ix)(ix) = xx$. \end{enumerate} \end{example} \begin{proofof}{Theorem~\ref{thm-combinatory-completeness}} The ``only if'' direction is trivial. If $\Aa$ is combinatorially complete, then consider the polynomial $t(x,y,z)=(xz)(yz)$. By combinatory completeness, there exists some $s\in\Aa$ with $sxyz=t(x,y,z)$, and similarly for $k$. We thus have to prove the ``if'' direction. Recall that $\Aa\s{x_1,\ldots,x_n}$ is the set of polynomials with variables $x_1,\ldots,x_n$. For each polynomial $t\in\Aa\s{x,y_1,\ldots,y_n}$ in $n+1$ variables, we will define a new polynomial $\lam^*x.t \in \Aa\s{y_1,\ldots,y_n}$ in $n$ variables, as follows by recursion on $t$: \[ \begin{array}{llll} \lam^*x.x &:=& i,\\ \lam^*x.y_i &:=& ky_i & \mbox{where $y_i\neq x$ is a variable,} \\ \lam^*x.a &:=& ka & \mbox{where $a\in \Aa$,}\\ \lam^*x.pq &:=& s(\lam^*x.p)(\lam^*x.q). 
\\ \end{array} \] We claim that for all $t$, the equation $(\lam^* x.t)x = t$ holds in $\Aa$. Indeed, this is easily proved by induction on $t$, using the definition of $\lam^*$: \[ \begin{array}{llll} (\lam^*x.x)x &=& ix = x,\\ (\lam^*x.y_i)x &=& ky_ix = y_i,\\ (\lam^*x.a)x &=& kax = a,\\ (\lam^*x.pq)x &=& s(\lam^*x.p)(\lam^*x.q)x = ((\lam^*x.p)x)((\lam^*x.q)x) = pq. \\ \end{array} \] Note that the last case uses the induction hypothesis for $p$ and $q$. Finally, to prove the theorem, assume that $\Aa$ has elements $s,k$ satisfying equations (1) and (2), and consider a polynomial $t\in\Aa\s{x_1,\ldots,x_n}$. We must show that there exists $a\in\Aa$ such that $ax_1\ldots x_n = t$ holds in $\Aa$. We let \[ a = \lam^*x_1.\ldots.\lam^*x_n.t. \] Note that $a$ is a polynomial in $0$ variables, which we may consider as an element of $\Aa$. Then from the previous claim, it follows that \[ \begin{array}{lll} ax_1\ldots x_n &=& (\lam^*x_1.\lam^*x_2.\ldots.\lam^*x_n.t)x_1x_2\ldots x_n\\ &=& (\lam^*x_2.\ldots.\lam^*x_n.t)x_2\ldots x_n\\ &=& \ldots\\ &=& (\lam^*x_n.t)x_n\\ &=& t\\ \end{array} \] holds in $\Aa$. \eot \end{proofof} \subsection{Combinatory algebras}\label{ssec-comb-alg} By Theorem~\ref{thm-combinatory-completeness}, combinatory completeness is equivalent to the existence of the $s$ and $k$ operators. We enshrine this in the following definition: \begin{definition}[Combinatory algebra] A {\em combinatory algebra} $(\Aa,\app,s,k)$ is an applicative structure $(\Aa,\app)$ together with elements $s,k\in\Aa$, satisfying the following two axioms: \[ \begin{array}{ll} (1) & sxyz = (xz)(yz) \\ (2) & kxy = x \\ \end{array} \] \end{definition} \begin{remark}\label{rem-derived-lambda} The operation $\lam^*$, defined in the proof of Theorem~\ref{thm-combinatory-completeness}, is defined on the polynomials of any combinatory algebra. 
It is called the {\em derived lambda abstractor}, and it satisfies the law of $\beta$-equivalence, i.e., $(\lam^*x.t)b = t[b/x]$, for all $b\in\Aa$. \end{remark} Finding actual examples of combinatory algebras is not so easy. Here are some examples: \begin{example} The one-element set $\Aa=\s{*}$, with $*\app *=*$, $s=*$, and $k=*$, is a combinatory algebra. It is called the {\em trivial} combinatory algebra. \end{example} \begin{example} Recall that $\Lambda$ is the set of lambda terms. Let $\Aa=\Lambda/{\eqb}$, the set of lambda terms modulo $\beta$-equivalence. Define $M\app N=MN$, $S=\lam xyz.(xz)(yz)$, and $K=\lam xy.x$. Then $(\Lambda,\app,S,K)$ is a combinatory algebra. Also note that, by Corollary~\ref{cor-beta-not-eta}, this algebra is non-trivial, i.e., it has more than one element. Similar examples are obtained by replacing $\eqb$ by $\eqbe$, and/or replacing $\Lambda$ by the set $\Lambda_0$ of closed terms. \end{example} \begin{example}\label{exa-sk-term-alg} We construct a combinatory algebra of $SK$-terms as follows. Let $V$ be a given set of variables. The set $\CTerm$ of {\em terms} of combinatory logic is given by the grammar: \[ \begin{array}{lll} A,B &\bnf& x\bor \S\bor \K\bor AB, \end{array} \] where $x$ ranges over the elements of $V$. On $\CTerm$, we define combinatory equivalence $\eqc$ as the smallest equivalence relation satisfying $\S ABC \eqc (AC)(BC)$, $\K AB \eqc A$, and the rules $\trule{cong$_1$}$ and $\trule{cong$_2$}$ (see page~\pageref{page-def-beta}). Then the set $\CTerm/{\eqc}$ is a combinatory algebra (called the {\em free} combinatory algebra generated by $V$, or the {\em term algebra}). You will prove in Exercise~\ref{exe-combinatory-cr} that it is non-trivial. \end{example} \void{ Note that all of the above examples of combinatory algebras are either trivial or syntactic. It is not easy to find a true ``mathematical'' example of a combinatory algebra. We will see how to find such models later. 
} \begin{exercise}\label{exe-combinatory-cr} On the set $\CTerm$ of combinatory terms, define a notion of {\em single-step reduction} by the following laws: \[ \begin{array}{l} \S ABC \redc (AC)(BC), \\ \K AB \redc A,\\ \end{array} \] together with the usual rules $\trule{cong$_1$}$ and $\trule{cong$_2$}$ (see page~\pageref{page-def-beta}). As in lambda calculus, we call a term a {\em normal form} if it cannot be reduced. Prove that the reduction $\redc$ satisfies the Church-Rosser property. (Hint: similarly to the lambda calculus, first define a suitable parallel one-step reduction $\tri$ whose reflexive transitive closure is that of $\redc$. Then show that it satisfies the diamond property.) \end{exercise} \begin{corollary}\label{cor-sk-nf} It immediately follows from the Church-Rosser Theorem for combinatory logic (Exercise~\ref{exe-combinatory-cr}) that two normal forms are $\eqc$-equivalent if and only if they are equal. \end{corollary} \subsection{The failure of soundness for combinatory algebras} A combinatory algebra is almost a model of the lambda calculus. Indeed, given a combinatory algebra $\Aa$, we can interpret any lambda term as follows. To each term $M$ with free variables among $x_1,\ldots,x_n$, we recursively associate a polynomial $\semm{M}\in\Aa\s{x_1,\ldots,x_n}$: \[ \begin{array}{lll} \semm{x} := x, \\ \semm{NP} := \semm{N}\semm{P}, \\ \semm{\lam x.M} := \lam^*x.\semm{M}.\\ \end{array} \] Notice that this definition is almost the identity function, except that we have replaced the ordinary lambda abstractor of lambda calculus by the derived lambda abstractor of combinatory logic. The result is a polynomial in $\Aa\s{x_1,\ldots,x_n}$. In the particular case where $M$ is a closed term, we can regard $\semm{M}$ as an element of $\Aa$. To be able to say that $\Aa$ is a ``model'' of the lambda calculus, we would like the following property to be true: \[ M\eqb N \imp \semm{M}=\semm{N}\mbox{ holds in $\Aa$}. 
\] This property is called {\em soundness} of the interpretation. Unfortunately, it is in general false for combinatory algebras, as the following example shows. \begin{example}\label{exa-failure-sound} Let $M=\lam x.x$ and $N=\lam x.(\lam y.y)x$. Then clearly $M\eqb N$. On the other hand, \[ \begin{array}{l} \semm{M} = \lam^*x.x=i, \\ \semm{N} = \lam^*x.(\lam^*y.y)x = \lam^*x.ix = s(ki)i. \end{array} \] It follows from Exercise~\ref{exe-combinatory-cr} and Corollary~\ref{cor-sk-nf} that the equation $i=s(ki)i$ does not hold in the combinatory algebra $\CTerm/{\eqc}$. In other words, the interpretation is not sound. \end{example} Let us analyze the failure of the soundness property further. Recall that $\beta$-equi\-va\-lence is the smallest equivalence relation on lambda terms satisfying the six rules in Table~\ref{tab-beta}. \begin{table*}[tbp] \[ \begin{array}{lc} \trule{refl} & \deriv{}{M=M} \\[1.8ex] \trule{symm} & \deriv{M=N}{N=M} \\[1.8ex] \trule{trans} & \deriv{M=N\sep N=P}{M=P} \end{array} \sep \begin{array}{lc} \trule{cong} & \deriv{M=M'\sep N=N'}{MN=M'N'} \\[1.8ex] \nrule{\xi} & \deriv{M=M'}{\lam x.M=\lam x.M'} \\[1.8ex] \nrule{\beta} & \deriv{}{(\lam x.M)N= \subst{M}{N}{x}} \\[1.8ex] \end{array} \] \caption{The rules for $\beta$-equivalence} \label{tab-beta} \end{table*} If we define a relation $\sim$ on lambda terms by \[ M\sim N\sep\iff\sep \semm{M}=\semm{N}\mbox{ holds in $\Aa$}, \] then we may ask which of the six rules of Table~\ref{tab-beta} the relation $\sim$ satisfies. Clearly, not all six rules can be satisfied, or else we would have $M\eqb N\imp M\sim N\imp \semm{M}=\semm{N}$, i.e., the model would be sound. Clearly, $\sim$ is an equivalence relation, and therefore satisfies $\trule{refl}$, $\trule{symm}$, and $\trule{trans}$. Also, $\trule{cong}$ is satisfied, because whenever $p,q,p',q'$ are polynomials such that $p=p'$ and $q=q'$ holds in $\Aa$, then clearly $pq=p'q'$ holds in $\Aa$ as well. 
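The failure of soundness in Example~\ref{exa-failure-sound} can also be checked mechanically. The following Python sketch (my own illustration; the term encoding is ad hoc and not part of the notes) implements the derived lambda abstractor $\lam^*$ and single-step SK-reduction, and recomputes the example:

```python
# Illustration (not from the notes): the derived lambda abstractor lam* and
# single-step SK-reduction, recomputing Example exa-failure-sound.
# Encoding: "S", "K", and variable names are strings; application is a pair.

I = (("S", "K"), "K")  # i = skk

def occurs(x, t):
    """Does variable x occur in term t?"""
    if isinstance(t, tuple):
        return occurs(x, t[0]) or occurs(x, t[1])
    return t == x

def lam_star(x, t):
    """The derived abstractor lam*x.t."""
    if t == x:
        return I                    # lam*x.x = i
    if not occurs(x, t):
        return ("K", t)             # lam*x.t = kt   when x is not in t
    f, a = t
    return (("S", lam_star(x, f)), lam_star(x, a))  # lam*x.(fa) = s(..)(..)

def step(t):
    """One step of ->_c, leftmost-outermost; None if t is a normal form."""
    if not isinstance(t, tuple):
        return None
    f, a = t
    if isinstance(f, tuple) and f[0] == "K":              # K A B -> A
        return f[1]
    if (isinstance(f, tuple) and isinstance(f[0], tuple)
            and f[0][0] == "S"):                          # S A B C -> (AC)(BC)
        A, B, C = f[0][1], f[1], a
        return ((A, C), (B, C))
    r = step(f)
    if r is not None:
        return (r, a)
    r = step(a)
    return None if r is None else (f, r)

def normalize(t):
    while (r := step(t)) is not None:
        t = r
    return t

M = lam_star("x", "x")        # [[lam x.x]]          = i = skk
N = lam_star("x", (I, "x"))   # [[lam x.(lam y.y)x]] = s(ki)i
```

Here \texttt{M} is $skk=i$ and \texttt{N} is $s(ki)i$: two distinct SK normal forms, hence distinct in the term algebra $\CTerm/{\eqc}$, even though applying either one to any argument reduces to the same result.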
Finally, we know from Remark~\ref{rem-derived-lambda} that the rule $\nrule{\beta}$ is satisfied. So the rule that fails is the $\nrule{\xi}$ rule. Indeed, Example~\ref{exa-failure-sound} illustrates this. Note that $x\sim (\lam y.y)x$ (from the proof of Theorem~\ref{thm-combinatory-completeness}), but $\lam x.x \not\sim \lam x.(\lam y.y)x$, and therefore the $\nrule{\xi}$ rule is violated. \subsection{Lambda algebras} A lambda algebra is, by definition, a combinatory algebra that is a sound model of lambda calculus, and in which $s$ and $k$ have their expected meanings. \begin{definition}[Lambda algebra] A {\em lambda algebra} is a combinatory algebra $\Aa$ satisfying the following properties: \[\begin{array}{l@{~~~}l} (\forall M,N\in\Lambda)\sssep M\eqb N \sssep\imp\sssep \semm{M}=\semm{N}& \trule{soundness},\\ s = \lam^*x.\lam^*y.\lam^*z.(xz)(yz) &\trule{s-derived},\\ k = \lam^*x.\lam^*y.x &\trule{k-derived}.\\ \end{array} \] \end{definition} The purpose of the remainder of this section is to give an axiomatic description of lambda algebras. \begin{lemma} Recall that $\Lambda_0$ is the set of closed lambda terms, i.e., lambda terms without free variables. Soundness is equivalent to the following: \[ (\forall M,N\in\Lambda_0)\sssep M\eqb N \sssep\imp\sssep \semm{M}=\semm{N} \ssep\trule{closed soundness} \] \end{lemma} \begin{proof} Clearly soundness implies closed soundness. For the converse, assume closed soundness and let $M,N\in\Lambda$ with $M\eqb N$. Let $\FV{M}\cup\FV{N}=\s{x_1,\ldots,x_n}$. Then \[ \begin{array}{llll} \multicolumn{3}{l}{M\eqb N} \\ &\imp& \lam x_1\ldots x_n.M\eqb \lam x_1\ldots x_n.N & \mbox{by $\nrule{\xi}$} \\ &\imp& \semm{\lam x_1\ldots x_n.M}=\semm{\lam x_1\ldots x_n.N} & \mbox{by closed soundness} \\ &\imp& \lam^* x_1\ldots x_n.\semm{M} = \lam^* x_1\ldots x_n.\semm{N} & \mbox{by def. 
of $\semm{-}$} \\ &\imp& (\lam^* x_1\ldots x_n.\semm{M})x_1\ldots x_n \\ && \ssep= (\lam^* x_1\ldots x_n.\semm{N})x_1\ldots x_n \\ &\imp& \semm{M}=\semm{N} &\mbox{by proof of Thm~\ref{thm-combinatory-completeness}} \end{array} \] This proves soundness.\eot \end{proof} \begin{definition}[Translations between combinatory logic and lambda calculus] Let $A\in\CTerm$ be a combinatory term (see Example~\ref{exa-sk-term-alg}). We define its translation to lambda calculus in the obvious way: the translation $A_\lam$ is given recursively by: \[\begin{array}{lll} \S_\lam &=& \lam xyz.(xz)(yz), \\ \K_\lam &=& \lam xy.x, \\ x_\lam &=& x, \\ (AB)_\lam &=& A_\lam B_\lam. \end{array} \] Conversely, given a lambda term $M\in\Lambda$, we recursively define its translation $M_c$ to combinatory logic like this: \[\begin{array}{lll} x_c &=& x,\\ (MN)_c &=& M_c N_c,\\ (\lam x.M)_c &=& \lam^*x.(M_c).\\ \end{array} \] \end{definition} \begin{lemma}\label{lem-c-lam} For all lambda terms $M$, $(M_c)_\lam \eqb M$. \end{lemma} \begin{lemma}\label{lem-lam-c} Let $\Aa$ be a combinatory algebra satisfying $k = \lam^*x.\lam^*y.x$ and $s = \lam^*x.\lam^*y.\lam^*z.(xz)(yz)$. Then for all combinatory terms $A$, $(A_\lam)_c=A$ holds in $\Aa$. \end{lemma} \begin{exercise} Prove Lemmas~\ref{lem-c-lam} and {\ref{lem-lam-c}}. \end{exercise} Let $\CTerm_0$ be the set of {\em closed} combinatory terms. The following is our first useful characterization of lambda algebras. \begin{lemma}\label{lem-alt-soundness} Let $\Aa$ be a combinatory algebra. Then $\Aa$ is a lambda algebra if and only if it satisfies the following property: \[ (\forall A,B\in\CTerm_0)\sssep A_\lam \eqb B_\lam \sssep\imp\sssep A=B\mbox{ holds in $\Aa$}. \ssep \trule{alt-soundness} \] \end{lemma} \begin{proof} First, assume that $\Aa$ satisfies $\trule{alt-soundness}$. To prove $\trule{closed soundness}$, let $M,N$ be lambda terms with $M\eqb N$.
Then $(M_c)_\lam \eqb M\eqb N\eqb (N_c)_\lam$, hence by $\trule{alt-soundness}$, $M_c=N_c$ holds in $\Aa$. But this is the definition of $\semm{M}=\semm{N}$. To prove $\trule{k-derived}$, note that \[\begin{array}{llll} k_\lam &=& (\lam x.\lam y.x) &\mbox{by definition of $(-)_\lam$} \\ &=& ((\lam x.\lam y.x)_c)_\lam &\mbox{by Lemma~\ref{lem-c-lam}}\\ &=& (\lam^* x.\lam^* y.x)_\lam &\mbox{by definition of $(-)_c$}. \end{array} \] Hence, by $\trule{alt-soundness}$, it follows that $k=(\lam^* x.\lam^* y.x)$ holds in $\Aa$. Similarly for $\trule{s-derived}$. Conversely, assume that $\Aa$ is a lambda algebra. Let $A,B\in\CTerm_0$ and assume $A_\lam\eqb B_\lam$. By soundness, $\semm{A_\lam}=\semm{B_\lam}$. By definition of the interpretation, $(A_\lam)_c=(B_\lam)_c$ holds in $\Aa$. But by $\trule{s-derived}$, $\trule{k-derived}$, and Lemma~\ref{lem-lam-c}, $A=(A_\lam)_c=(B_\lam)_c=B$ holds in $\Aa$, proving $\trule{alt-soundness}$.\eot \end{proof} \begin{definition}[Homomorphism] Let $(\Aa,\app_\Aa,s_\Aa,k_\Aa)$, $(\Bb,\app_\Bb,s_\Bb,k_\Bb)$ be combinatory algebras. A {\em homomorphism} of combinatory algebras is a function $\phi:\Aa\to\Bb$ such that $\phi(s_\Aa)=s_\Bb$, $\phi(k_\Aa)=k_\Bb$, and for all $a,b\in\Aa$, $\phi(a\app_\Aa b)=\phi(a)\app_\Bb \phi(b)$. \end{definition} Any given homomorphism $\phi:\Aa\to\Bb$ can be extended to polynomials in the obvious way: we define $\phihat:\Aa\s{x_1,\ldots,x_n}\to\Bb\s{x_1,\ldots,x_n}$ by \[\begin{array}{ll} \phihat(a)=\phi(a) & \mbox{for $a\in\Aa$,}\\ \phihat(x)=x & \mbox{if $x\in\s{x_1,\ldots,x_n}$,}\\ \phihat(pq)=\phihat(p)\phihat(q). \\ \end{array} \] \begin{example} If $\phi(a)=a'$ and $\phi(b)=b'$, then $\phihat((ax)(by)) = (a'x)(b'y)$. \end{example} The following is the main technical concept needed in the characterization of lambda algebras. We say that an equation {\em holds absolutely} if it holds in $\Aa$ and in any homomorphic image of $\Aa$. 
If an equation merely holds in $\Aa$, but not necessarily in its homomorphic images, then we sometimes say it holds {\em locally}. \begin{definition}[Absolute equation] Let $p,q\in\Aa\s{x_1,\ldots,x_n}$ be two polynomials with coefficients in $\Aa$. We say that the equation $p=q$ {\em holds absolutely} in $\Aa$ if for all combinatory algebras $\Bb$ and all homomorphisms $\phi:\Aa\to\Bb$, $\phihat(p)=\phihat(q)$ holds in $\Bb$. If an equation holds absolutely, we write $p\eqabs q$. \end{definition} We can now state the main theorem characterizing lambda algebras. Let $\one=s(ki)$. \begin{theorem}\label{thm-tfae-lambda-algebra} Let $\Aa$ be a combinatory algebra. Then the following are equivalent: \begin{enumerate} \item $\Aa$ is a lambda algebra, \item $\Aa$ satisfies $\trule{alt-soundness}$, \item for all $A,B\in\CTerm$ such that $ A_\lam \eqb B_\lam$, the equation $A=B$ holds absolutely in $\Aa$, \item $\Aa$ absolutely satisfies the nine axioms in Table~\ref{tab-lambda-algebra-axioms}, \item $\Aa$ satisfies $\trule{s-derived}$ and $\trule{k-derived}$, and for all $p,q\in\Aa\s{y_1,\ldots,y_n}$, if $px\eqabs qx$ then $\one p\eqabs \one q$, \item\label{thm-tfae-lambda-algebra6} $\Aa$ satisfies $\trule{s-derived}$ and $\trule{k-derived}$, and for all $p,q\in\Aa\s{x,y_1,\ldots,y_n}$, if $p\eqabs q$ then $\lam^*x.p\eqabs \lam^*x.q$. \end{enumerate} \end{theorem} \begin{table} \[ \begin{array}{crcl} \eqna & \one k &\eqabs& k, \\ \eqnb & \one s &\eqabs& s, \\ \eqnc & \one(kx) &\eqabs& kx, \\ \eqnd & \one(sx) &\eqabs& sx, \\ \eqne & \one(sxy) &\eqabs& sxy, \\ \eqnf & s(s(kk)x)y &\eqabs&\one x, \\ \eqng & s(s(s(ks)x)y)z &\eqabs& s(sxz)(syz), \\ \eqnh & k(xy) &\eqabs& s(kx)(ky), \\ \eqni & s(kx)i &\eqabs&\one x. \end{array} \] \caption{An axiomatization of lambda algebras. Here $\one=s(ki)$.}\label{tab-lambda-algebra-axioms} \end{table} The proof proceeds via $1\imp 2\imp 3\imp 4\imp 5\imp 6\imp 1$. We have already proven $1\imp 2$ in Lemma~\ref{lem-alt-soundness}.
To prove $2\imp 3$, let $\FV{A}\cup\FV{B}\seq\s{x_1,\ldots,x_n}$, and assume $A_\lam \eqb B_\lam$. Then $\lam x_1\ldots x_n.(A_\lam)\eqb \lam x_1\ldots x_n.(B_\lam)$, hence $(\lam^* x_1\ldots x_n.A)_\lam \eqb (\lam^* x_1\ldots x_n.B)_\lam$ (why?). Since the latter terms are closed, it follows by the rule $\trule{alt-soundness}$ that $\lam^* x_1\ldots x_n.A=\lam^* x_1\ldots x_n.B$ holds in $\Aa$. Since closed equations are preserved by homomorphisms, the latter also holds in $\Bb$ for any homomorphism $\phi:\Aa\to\Bb$. Finally, this implies that $A=B$ holds for any such $\Bb$, proving that $A=B$ holds absolutely in $\Aa$. \begin{exercise} Prove the implication $3\imp 4$. \end{exercise} The implication $4\imp 5$ is the most difficult part of the theorem. We first dispense with the easier part: \begin{exercise} Prove that the axioms from Table~\ref{tab-lambda-algebra-axioms} imply $\trule{s-derived}$ and $\trule{k-derived}$. \end{exercise} The last part of $4\imp 5$ needs the following lemma: \begin{lemma}\label{lem-a-b} Suppose $\Aa$ satisfies the nine axioms from Table~\ref{tab-lambda-algebra-axioms}. Define a structure $(\Bb,\bullet,S,K)$ by: \[ \begin{array}{lll} \Bb = \s{a\in\Aa\such a=\one a},\\ a\bullet b = sab,\\ S=ks,\\ K=kk. \end{array} \] Then $\Bb$ is a well-defined combinatory algebra. Moreover, the function $\phi:\Aa\to\Bb$ defined by $\phi(a)=ka$ defines a homomorphism. \end{lemma} \begin{exercise} Prove Lemma~\ref{lem-a-b}. \end{exercise} To prove the implication $4\imp 5$, assume $ax=bx$ holds absolutely in $\Aa$. Then $\phihat(ax)=\phihat(bx)$ holds in $\Bb$ by definition of ``absolute''. But $\phihat(ax) = (\phi a)x = s(ka)x$ and $\phihat(bx) = (\phi b)x = s(kb)x$. Therefore $s(ka)x=s(kb)x$ holds in $\Aa$. We plug in $x=i$ to get $s(ka)i=s(kb)i$. By axiom $\eqni$, $\one a=\one b$. To prove $5\imp 6$, assume $p\eqabs q$. Then $(\lam^*x.p)x \eqabs p\eqabs q \eqabs (\lam^*x.q)x$ by the proof of Theorem~\ref{thm-combinatory-completeness}. 
Then by 5., $(\lam^*x.p)\eqabs (\lam^*x.q)$. Finally, to prove $6\imp 1$, note that if $6$ holds, then the absolute interpretation satisfies the $\xi$-rule, and therefore satisfies all the axioms of lambda calculus. \begin{exercise} Prove $6\imp 1$. \end{exercise} \begin{remark} The axioms in Table~\ref{tab-lambda-algebra-axioms} are required to hold {\em absolutely}. They can be replaced by local axioms by prefacing each axiom with $\lam^*xyz$. Note that this makes the axioms much longer. \end{remark} \subsection{Extensional combinatory algebras} \label{subsec-extensional-combinatory} \begin{definition} An applicative structure $(\Aa,\cdot)$ is {\em extensional} if for all $a,b\in \Aa$, if $ac=bc$ holds for all $c\in \Aa$, then $a=b$. \end{definition} \begin{proposition} In an extensional combinatory algebra, the $\nrule{\eta}$ axiom is valid. \end{proposition} \begin{proof} By $\nrule{\beta}$, $(\lam^*x.Mx)c = Mc$ for all $c\in\Aa$. Therefore, by extensionality, $(\lam^*x.Mx)=M$.\eot \end{proof} \begin{proposition} In an extensional combinatory algebra, an equation holds locally if and only if it holds absolutely. \end{proposition} \begin{proof} Clearly, if an equation holds absolutely, then it holds locally. Conversely, assume the equation $p=q$ holds locally in $\Aa$. Let $x_1,\ldots,x_n$ be the variables occurring in the equation. By $\nrule{\beta}$, \[(\lam^*x_1\ldots x_n.p)x_1\ldots x_n=(\lam^*x_1\ldots x_n.q)x_1\ldots x_n\] holds locally. By extensionality, \[\lam^*x_1\ldots x_n.p=\lam^*x_1\ldots x_n.q\] holds. Since this is a closed equation (no free variables), it automatically holds absolutely. This implies that $(\lam^*x_1\ldots x_n.p)x_1\ldots x_n=(\lam^*x_1\ldots x_n.q)x_1\ldots x_n$ holds absolutely, and finally, by $\nrule{\beta}$ again, that $p=q$ holds absolutely. \eot \end{proof} \begin{proposition} Every extensional combinatory algebra is a lambda algebra.
\end{proposition} \begin{proof} By Theorem~\ref{thm-tfae-lambda-algebra}(\ref{thm-tfae-lambda-algebra6}), it suffices to prove $\trule{s-derived}$, $\trule{k-derived}$ and the $\nrule{\xi}$-rule. Let $a,b,c\in\Aa$ be arbitrary. Then \[ (\lam^*x.\lam^*y.\lam^*z.(xz)(yz))abc = (ac)(bc) = sabc \] by $\nrule{\beta}$ and definition of $s$. Applying extensionality three times (with respect to $c$, $b$, and $a$), we get \[ \lam^*x.\lam^*y.\lam^*z.(xz)(yz) = s. \] This proves $\trule{s-derived}$. The proof of $\trule{k-derived}$ is similar. Finally, to prove $\nrule{\xi}$, assume that $p\eqabs q$. Then by $\nrule{\beta}$, $(\lam^*x.p)c=(\lam^*x.q)c$ for all $c\in\Aa$. By extensionality, $\lam^*x.p=\lam^*x.q$ holds.\eot \end{proof} \section{Simply-typed lambda calculus, propositional logic, and the Curry-Howard isomorphism}\label{sec-simply-typed-lc} In the untyped lambda calculus, we spoke about functions without speaking about their domains and codomains. The domain and codomain of any function was the set of all lambda terms. We now introduce types into the lambda calculus, and thus a notion of domain and codomain for functions. The difference between types and sets is that types are {\em syntactic} objects, i.e., we can speak of types without having to speak of their elements. We can think of types as {\em names} for sets. \subsection{Simple types and simply-typed terms} We assume a set of basic types. We usually use the Greek letter $\iota$ (``iota'') to denote a basic type. The set of simple types is given by the following BNF: \[ \mbox{Simple types:}\ssep A,B \bnf \iota \bor A\to B\bor A\times B\bor 1 \] The intended meaning of these types is as follows: base types are things like the type of integers or the type of booleans. The type $A\to B$ is the type of functions from $A$ to $B$. The type $A\times B$ is the type of pairs $\pair{x,y}$, where $x$ has type $A$ and $y$ has type $B$. The type $1$ is a one-element type. 
You can think of $1$ as an abridged version of the booleans, in which there is only one boolean instead of two. Or you can think of $1$ as the ``void'' or ``unit'' type in many programming languages: the result type of a function that has no real result. When we write types, we adopt the convention that $\times$ binds more strongly than $\to$, and $\to$ associates to the right. Thus, $A\times B\to C$ is $(A\times B)\to C$, and $A\to B\to C$ is $A\to (B\to C)$. The set of {\em raw typed lambda terms} is given by the following BNF: \[ \mbox{Raw terms:}\ssep M,N \bnf x \bor MN \bor \lamabs{x}{A}.M \bor \pair{M,N} \bor \proj1 M \bor \proj2 M \bor \unit \] Unlike what we did in the untyped lambda calculus, we have added special syntax here for pairs. Specifically, $\pair{M,N}$ is a pair of terms, and $\proj{i} M$ is a projection, with the intention that $\proj{i}\pair{M_1,M_2}=M_i$. Also, we have added a term $\unit$, which is the unique element of the type $1$. One other change from the untyped lambda calculus is that we now write $\lamabs{x}{A}.M$ for a lambda abstraction to indicate that $x$ has type $A$. However, we will sometimes omit the superscripts and write $\lam x.M$ as before. The notions of free and bound variables and $\alpha$-conversion are defined as for the untyped lambda calculus; again we identify $\alpha$-equivalent terms. We call the above terms the {\em raw} terms, because we have not yet imposed any typing discipline on these terms. To avoid meaningless terms such as $\pair{M,N}(P)$ or $\proj1(\lam x.M)$, we introduce {\em typing rules}. We use the colon notation $M:A$ to mean ``$M$ is of type $A$''. (This is similar to the element notation in set theory.) The typing rules are expressed in terms of {\em typing judgments}. A typing judgment is an expression of the form \[ \typ{x_1}{A_1},\typ{x_2}{A_2},\ldots,\typ{x_n}{A_n} \tj M:A.
\] Its meaning is: ``under the assumption that $x_i$ is of type $A_i$, for $i=1\ldots n$, the term $M$ is a well-typed term of type $A$.'' The free variables of $M$ must be contained in $x_1,\ldots,x_n$. The idea is that in order to determine the type of $M$, we must make some assumptions about the type of its free variables. For instance, the term $xy$ will have type $B$ if $\typ{x}{A\to B}$ and $\typ{y}{A}$. Clearly, the type of $xy$ depends on the type of its free variables. A sequence of assumptions of the form $\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n}$, as in the left-hand-side of a typing judgment, is called a {\em typing context}. We always assume that no variable appears more than once in a typing context, and we allow typing contexts to be re-ordered implicitly. We often use the Greek letter $\Gamma$ to stand for an arbitrary typing context, and we use the notations $\Gamma,\Gamma'$ and $\Gamma,\typ{x}{A}$ to denote the concatenation of typing contexts, where it is always assumed that the sets of variables are disjoint. The symbol $\tj$, which appears in a typing judgment, is called the {\em turnstile} symbol. Its purpose is to separate the left-hand side from the right-hand side. The typing rules for the simply-typed lambda calculus are shown in Table~\ref{tab-simple-typing-rules}. 
\begin{table*}[tbp] \[ \begin{array}{rc} \tjvar & \deriv{}{\Gamma,\typ{x}{ A}\tj x: A} \\[1.8ex] \tjapp & \deriv{\Gamma\tj M: A\to B\sep\Gamma\tj N: A} {\Gamma\tj MN: B} \\[1.8ex] \tjlam & \deriv{\Gamma,\typ{x}{ A}\tj M: B} {\Gamma\tj\lamabs{x}{A}.M: A\to B} \\[1.8ex] \tjpair & \deriv{\Gamma\tj M: A\sep\Gamma\tj N: B} {\Gamma\tj\pair{M,N}: A\times B} \end{array} \begin{array}{rc} \\[1.8ex] \tjproja & \deriv{\Gamma\tj M: A\times B}{\Gamma\tj\proj1 M: A} \\[1.8ex] \tjprojb & \deriv{\Gamma\tj M: A\times B}{\Gamma\tj\proj2 M: B} \\[1.8ex] \tjunit & \deriv{}{\Gamma\tj\unit:1} \end{array} \] \caption{Typing rules for the simply-typed lambda calculus} \label{tab-simple-typing-rules} \end{table*} The rule $\tjvar$ is a tautology: under the assumption that $x$ has type $A$, $x$ has type $A$. The rule $\tjapp$ states that a function of type $A\to B$ can be applied to an argument of type $A$ to produce a result of type $B$. The rule $\tjlam$ states that if $M$ is a term of type $B$ with a free variable $x$ of type $A$, then $\lamabs{x}{A}.M$ is a function of type $A\to B$. The other rules have similar interpretations. Here is an example of a valid typing derivation: { \footnotesize \[ \deriv{ \deriv{ }{ \typ{x}{A\to A},\typ{y}{A}\tj x:A\to A } \sep \deriv{ \deriv{ }{ \typ{x}{A\to A},\typ{y}{A}\tj x:A\to A } \sep \deriv{ }{ \typ{x}{A\to A},\typ{y}{A}\tj y:A } }{ \typ{x}{A\to A},\typ{y}{A}\tj xy:A } }{ \deriv{ \typ{x}{A\to A},\typ{y}{A}\tj x(xy):A }{ \deriv{ \typ{x}{A\to A}\tj\lamabs{y}{A}.x(xy):A\to A }{ \tj\lamabs{x}{A\to A}.\lamabs{y}{A}.x(xy):(A\to A)\to A\to A } } } \] } One important property of these typing rules is that there is precisely one rule for each kind of lambda term. Thus, when we construct typing derivations in a bottom-up fashion, there is always a unique choice of which rule to apply next. The only real choice we have is about which types to assign to variables. 
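Because there is exactly one rule per term constructor, type checking is a straightforward recursion on the term. Here is a small Python sketch of a checker for the rules of Table~\ref{tab-simple-typing-rules} (my own illustration; the term and type encodings are ad hoc and not part of the notes):

```python
# A checker for the typing rules of Table tab-simple-typing-rules (a sketch,
# not from the notes).  Types: base types are strings, ("->",A,B) is A -> B,
# ("*",A,B) is A x B, and "1" is the unit type.  Terms: ("var",x),
# ("app",M,N), ("lam",x,A,M), ("pair",M,N), ("p1",M), ("p2",M), "unit".

def typecheck(ctx, t):
    """Return the type A such that ctx |- t : A, or raise TypeError."""
    if t == "unit":                        # (unit): Gamma |- * : 1
        return "1"
    tag = t[0]
    if tag == "var":                       # (var): look up the context
        return ctx[t[1]]
    if tag == "lam":                       # (abs)
        _, x, a, m = t
        return ("->", a, typecheck({**ctx, x: a}, m))
    if tag == "app":                       # (app)
        f, arg = typecheck(ctx, t[1]), typecheck(ctx, t[2])
        if isinstance(f, tuple) and f[0] == "->" and f[1] == arg:
            return f[2]
        raise TypeError("ill-typed application")
    if tag == "pair":                      # (pair)
        return ("*", typecheck(ctx, t[1]), typecheck(ctx, t[2]))
    if tag in ("p1", "p2"):                # (pi_1) and (pi_2)
        p = typecheck(ctx, t[1])
        if not (isinstance(p, tuple) and p[0] == "*"):
            raise TypeError("projection from a non-pair")
        return p[1] if tag == "p1" else p[2]
    raise TypeError("not a term")

# The example derivation above: |- lam x:A->A. lam y:A. x(xy) : (A->A)->A->A
example = ("lam", "x", ("->", "A", "A"),
           ("lam", "y", "A",
            ("app", ("var", "x"), ("app", ("var", "x"), ("var", "y")))))
```

On \texttt{example}, the checker returns the type $(A\to A)\to A\to A$, mirroring the derivation above; an untypable raw term such as $\proj1(\lamabs{x}{A}.x)$ makes it raise a \texttt{TypeError}.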
\begin{exercise} Give a typing derivation of each of the following typing judgments: \begin{enumerate} \item[(a)] $\tj\lamabs{x}{(A\to A)\to B}.x(\lamabs{y}{A}.y):((A\to A)\to B)\to B$ \item[(b)] $\tj\lamabs{x}{A\times B}.\pair{\proj2 x,\proj1 x}:(A\times B)\to (B\times A)$ \end{enumerate} \end{exercise} Not all terms are typable. For instance, the terms $\proj1(\lam x.M)$ and $\pair{M,N}(P)$ cannot be assigned a type, and neither can the term $\lam x.xx$. Here, by ``assigning a type'' we mean assigning types to the free and bound variables such that the corresponding typing judgment is derivable. We say that a term is typable if it can be assigned a type. \begin{exercise} Show that none of the three terms mentioned in the previous paragraph is typable. \end{exercise} \begin{exercise} We said that we will identify $\alpha$-equivalent terms. Show that this is actually necessary. In particular, show that if we didn't identify $\alpha$-equivalent terms, there would be no valid derivation of the typing judgment \[ \tj\lamabs{x}{A}.\lamabs{x}{B}.x:A\to B\to B. \] Give a derivation of this typing judgment using the bound variable convention. \end{exercise} \subsection{Connections to propositional logic} \label{subsec-connprop} Consider the following types: \[ \begin{array}{ll} (1)& (A\times B)\to A \\ (2)& A\to B\to (A\times B) \\ (3)& (A\to B)\to(B\to C)\to(A\to C) \\ (4)& A\to A\to A \\ (5)& ((A\to A)\to B)\to B \\ (6)& A\to (A\times B) \\ (7)& (A\to C)\to C \end{array} \] Let us ask, in each case, whether it is possible to find a closed term of the given type.
We find the following terms: \[ \begin{array}{ll} (1)& \lamabs{x}{A\times B}.\proj1 x \\ (2)& \lamabs{x}{A}.\lamabs{y}{B}.\pair{x,y} \\ (3)& \lamabs{x}{A\to B}.\lamabs{y}{B\to C}.\lamabs{z}{A}.y(xz) \\ (4)& \lamabs{x}{A}.\lamabs{y}{A}.x \sep\mbox{and}\sep \lamabs{x}{A}.\lamabs{y}{A}.y \\ (5)& \lamabs{x}{(A\to A)\to B}.x(\lamabs{y}{A}.y) \\ (6)& \mbox{can't find a closed term} \\ (7)& \mbox{can't find a closed term} \end{array} \] Can we answer the general question: given a type, does there exist a closed term of that type? For a new way to look at the problem, take the types (1)--(7) and make the following replacement of symbols: replace ``$\to$'' by ``$\imp$'' and replace ``$\times$'' by ``$\cand$''. We obtain the following formulas: \[ \begin{array}{ll} (1)& (A\cand B)\imp A \\ (2)& A\imp B\imp (A\cand B) \\ (3)& (A\imp B)\imp(B\imp C)\imp(A\imp C) \\ (4)& A\imp A\imp A \\ (5)& ((A\imp A)\imp B)\imp B \\ (6)& A\imp (A\cand B) \\ (7)& (A\imp C)\imp C \end{array} \] Note that these are formulas of propositional logic, where ``$\imp$'' is implication, and ``$\cand$'' is conjunction (``and''). What can we say about the validity of these formulas? It turns out that (1)--(5) are tautologies, whereas (6)--(7) are not. Thus, the types for which we could find a lambda term turn out to be the ones that are valid when considered as formulas in propositional logic! This is not entirely coincidental. Let us consider, for example, how to prove $(A\cand B)\imp A$. The proof is very short. It goes as follows: ``Assume $A\cand B$. Then, by the first part of that assumption, $A$ holds. Thus $(A\cand B)\imp A$.'' On the other hand, the lambda term of the corresponding type is $\lamabs{x}{A\times B}.\proj1 x$. You can see that there is a close connection between the proof and the lambda term.
Namely, if one reads $\lamabs{x}{A\times B}$ as ``assume $A\cand B$ (call the assumption `$x$')'', and if one reads $\proj1 x$ as ``by the first part of assumption $x$'', then this lambda term can be read as a proof of the proposition $(A\cand B)\imp A$. This connection between simply-typed lambda calculus and propositional logic is known as the ``Curry-Howard isomorphism''. Since types of the lambda calculus correspond to formulas in propositional logic, and terms correspond to proofs, the concept is also known as the ``proofs-as-programs'' paradigm, or the ``formulas-as-types'' correspondence. We will make the actual correspondence more precise in the next two sections. Before we go any further, we must make one important point. When we make the connection between simply-typed lambda calculus and propositional logic precise, we will see that the appropriate logic is {\em intuitionistic logic}, and not the ordinary {\em classical logic} that we are used to from mathematical practice. The main difference between intuitionistic and classical logic is that the former lacks the principles of ``proof by contradiction'' and ``excluded middle''. The principle of proof by contradiction states that if the assumption ``not $A$'' leads to a contradiction, then we have proved $A$. The principle of excluded middle states that either ``$A$'' or ``not $A$'' must be true. Intuitionistic logic is also known as {\em constructive logic}, because all proofs in it are constructive. Thus, in intuitionistic logic, the only way to prove the existence of some object is by actually constructing the object. This is in contrast with classical logic, where we may prove the existence of an object simply by deriving a contradiction from the assumption that the object doesn't exist. The disadvantage of constructive logic is that it is generally more difficult to prove things. The advantage is that once one has a proof, the proof can be transformed into an algorithm.
\subsection{Propositional intuitionistic logic} \label{subsec-proplogic} We start by introducing a system for intuitionistic logic that uses only three connectives: ``$\cand$'', ``$\to$'', and ``$\ctrue$''. {\em Formulas} $A,B\ldots$ are built from atomic formulas $\alpha,\beta,\ldots$ via the BNF \[ \mbox{Formulas:}\ssep A,B \bnf \alpha \bor A\to B\bor A\cand B\bor\ctrue. \] We now need to formalize proofs. The formalized proofs will be called ``derivations''. The system we introduce here is known as {\em natural deduction}, and is due to Gentzen (1935). In natural deduction, derivations are certain kinds of trees. In general, we will deal with derivations of a formula $A$ from a set of assumptions $\Gamma=\s{A_1,\ldots,A_n}$. Such a derivation will be written schematically as \[ \ideriv{\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n}}{A}. \] We simplify the bookkeeping by giving a name to each assumption, and we will use lower-case letters such as $x,y,z$ for such names. In using the above notation for schematically writing a derivation of $A$ from assumptions $\Gamma$, it is understood that the derivation may in fact use a given assumption more than once, or zero times. The rules for constructing derivations are as follows: \begin{enumerate} \item (Axiom) \[ \trule{ax}\ \deriv{\typ{x}{A}}{A} x \] is a derivation of $A$ from assumption $A$ (and possibly other assumptions that were used zero times). We have written the letter ``$x$'' next to the rule, to indicate precisely which assumption we have used here. \item ($\cand$-introduction) If \[ \ideriv{\Gamma}{A} \sep\mbox{and}\sep \ideriv{\Gamma}{B} \] are derivations of $A$ and $B$, respectively, then \[ \trule{$\cand$-I}\ \deriv{\ideriv{\Gamma}{A}\sep\ideriv{\Gamma}{B}}{A\cand B} \] is a derivation of $A\cand B$. In other words, a proof of $A\cand B$ is a proof of $A$ and a proof of $B$. 
\item ($\cand$-elimination) If \[ \ideriv{\Gamma}{A\cand B} \] is a derivation of $A\cand B$, then \[ \trule{$\cand$-E$_1$}\ \deriv{\ideriv{\Gamma}{A\cand B}}{A} \sep\mbox{and}\sep \trule{$\cand$-E$_2$}\ \deriv{\ideriv{\Gamma}{A\cand B}}{B} \] are derivations of $A$ and $B$, respectively. In other words, from $A\cand B$, we are allowed to conclude both $A$ and $B$. \item ($\ctrue$-introduction) \[ \trule{$\ctrue$-I}\ \deriv{\hspace{.5in}}{\ctrue} \] is a derivation of $\ctrue$ (possibly from some assumptions, which were not used). In other words, $\ctrue$ is always true. \item ($\to$-introduction) If \[ \ideriv{\Gamma,\typ{x}{A}}{B} \] is a derivation of $B$ from assumptions $\Gamma$ and $A$, then \[ \trule{$\to$-I}\ \deriv{\ideriv{\Gamma,[\typ{x}{A}]}{B}}{A\to B} x \] is a derivation of $A\to B$ from $\Gamma$ alone. Here, the assumption $\typ{x}{A}$ is no longer an assumption of the new derivation --- we say that it has been ``canceled''. We indicate canceled assumptions by enclosing them in brackets $[\,]$, and we indicate the place where the assumption was canceled by writing the letter $x$ next to the rule where it was canceled. \item ($\to$-elimination) If \[ \ideriv{\Gamma}{A\to B} \sep\mbox{and}\sep \ideriv{\Gamma}{A} \] are derivations of $A\to B$ and $A$, respectively, then \[ \trule{$\to$-E}\ \deriv{\ideriv{\Gamma}{A\to B} \sep \ideriv{\Gamma}{A}}{B} \] is a derivation of $B$. In other words, from $A\to B$ and $A$, we are allowed to conclude $B$. This rule is sometimes called by its Latin name, ``modus ponens''. \suspendenumerate \end{enumerate} This finishes the definition of derivations in natural deduction. Note that, with the exception of the axiom, each rule belongs to some specific logical connective, and there are introduction and elimination rules. ``$\cand$'' and ``$\to$'' have both introduction and elimination rules, whereas ``$\ctrue$'' only has an introduction rule. 
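As an illustration, the informal proof of $(A\cand B)\to A$ from Section~\ref{subsec-connprop} corresponds to the following derivation (a sketch in the tree notation just introduced):
\[
\trule{$\to$-I}\ \deriv{
\trule{$\cand$-E$_1$}\ \deriv{
\deriv{[\typ{x}{A\cand B}]}{A\cand B}\ x
}{A}
}{(A\cand B)\to A}\ x
\]
The assumption $x$ is used at the leaf, its first component is extracted by {\trule{$\cand$-E$_1$}}, and the assumption is canceled at the final {\trule{$\to$-I}} step.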
In natural deduction, as in real mathematical life, assumptions can be made at any time. The challenge is to get rid of assumptions once they are made. In the end, we would like to have a derivation of a given formula that depends on as few assumptions as possible --- in fact, we don't regard the formula as proven unless we can derive it from {\em no} assumptions. The rule {\trule{$\to$-I}} allows us to discard temporary assumptions that we might have made during the proof. \begin{exercise} Give a derivation, in natural deduction, for each of the formulas (1)--(5) from Section~\ref{subsec-connprop}. \end{exercise} \subsection{An alternative presentation of natural deduction} The above notation for natural deduction derivations suffers from a problem of presentation: since assumptions are first written down and only later canceled, it is not easy to see when each assumption in a finished derivation was canceled. The following alternate presentation of natural deduction works by deriving entire {\em judgments}, rather than {\em formulas}. Rather than keeping track of assumptions as the leaves of a proof tree, we annotate each formula in a derivation with the entire set of assumptions that were used in deriving it. In practice, this makes derivations more verbose, by repeating most assumptions on each line. In theory, however, such derivations are easier to reason about. A {\em judgment} is a statement of the form $\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n} \tj B$. It states that the formula $B$ is a consequence of the (labeled) assumptions $A_1,\ldots,A_n$.
The rules of natural deduction can now be reformulated as rules for deriving judgments: \begin{enumerate} \item (Axiom) \[ \trule{ax$_x$}\ \deriv{}{\Gamma,\typ{x}{A}\tj A} \] \item ($\cand$-introduction) \[ \trule{$\cand$-I}\ \deriv{\Gamma\tj A\sep\Gamma\tj B}{\Gamma\tj A\cand B} \] \item ($\cand$-elimination) \[ \trule{$\cand$-E$_1$}\ \deriv{\Gamma\tj A\cand B}{\Gamma\tj A} \sep \trule{$\cand$-E$_2$}\ \deriv{\Gamma\tj A\cand B}{\Gamma\tj B} \] \item ($\ctrue$-introduction) \[ \trule{$\ctrue$-I}\ \deriv{}{\Gamma\tj\ctrue} \] \item ($\to$-introduction) \[ \trule{$\to$-I$_x$}\ \deriv{\Gamma,\typ{x}{A}\tj B}{\Gamma\tj A\to B} \] \item ($\to$-elimination) \[ \trule{$\to$-E}\ \deriv{\Gamma\tj A\to B\sep \Gamma\tj A}{\Gamma\tj B} \] \suspendenumerate \end{enumerate} \subsection{The Curry-Howard Isomorphism} There is an obvious one-to-one correspondence between types of the simply-typed lambda calculus and the formulas of propositional intuitionistic logic introduced in Section~\ref{subsec-proplogic} (provided that the set of basic types can be identified with the set of atomic formulas). We will identify formulas and types from now on, where it is convenient to do so. Perhaps less obvious is the fact that derivations are in one-to-one correspondence with simply-typed lambda terms. To be precise, we will give a translation from derivations to lambda terms, and a translation from lambda terms to derivations, which are mutually inverse up to $\alpha$-equivalence. To any derivation of $\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n}\tj B$, we will associate a lambda term $M$ such that $\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n}\tj M:B$ is a valid typing judgment. We define $M$ by recursion on the definition of derivations. We prove simultaneously, by induction, that $\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n}\tj M:B$ is indeed a valid typing judgment. \begin{enumerate} \item (Axiom) If the derivation is \[ \trule{ax$_x$}\ \deriv{}{\Gamma,\typ{x}{A}\tj A}, \] then the lambda term is $M=x$. 
Clearly, $\Gamma,\typ{x}{A}\tj x:A$ is a valid typing judgment by $\tjvar$. \item ($\cand$-introduction) If the derivation is \[ \trule{$\cand$-I}\ \deriv{\ideriv{}{\Gamma\tj A}\sep\ideriv{}{\Gamma\tj B}}{\Gamma\tj A\cand B}, \] then the lambda term is $M=\pair{P,Q}$, where $P$ and $Q$ are the terms associated to the two respective subderivations. By induction hypothesis, $\Gamma\tj P:A$ and $\Gamma\tj Q:B$, thus $\Gamma\tj\pair{P,Q}:A\times B$ by $\tjpair$. \item ($\cand$-elimination) If the derivation is \[ \trule{$\cand$-E$_1$}\ \deriv{\ideriv{}{\Gamma\tj A\cand B}}{\Gamma\tj A}, \] then we let $M=\proj1 P$, where $P$ is the term associated to the subderivation. By induction hypothesis, $\Gamma\tj P:A\times B$, thus $\Gamma\tj\proj1 P:A$ by $\tjproja$. The case of {\trule{$\cand$-E$_2$}} is entirely symmetric. \item ($\ctrue$-introduction) If the derivation is \[ \trule{$\ctrue$-I}\ \deriv{}{\Gamma\tj\ctrue}, \] then we let $M=\unit$. We have $\Gamma\tj\unit:1$ by $\tjunit$. \item ($\to$-introduction) If the derivation is \[ \trule{$\to$-I$_x$}\ \deriv{\ideriv{}{\Gamma,\typ{x}{A}\tj B}}{\Gamma\tj A\to B}, \] then we let $M=\lamabs{x}{A}.P$, where $P$ is the term associated to the subderivation. By induction hypothesis, $\Gamma,\typ{x}{A}\tj P:B$, hence $\Gamma\tj \lamabs{x}{A}.P:A\to B$ by $\tjlam$. \item ($\to$-elimination) Finally, if the derivation is \[ \trule{$\to$-E}\ \deriv{\ideriv{}{\Gamma\tj A\to B}\sep \ideriv{}{\Gamma\tj A}}{\Gamma\tj B}, \] then we let $M=PQ$, where $P$ and $Q$ are the terms associated to the two respective subderivations. By induction hypothesis, $\Gamma\tj P:A\to B$ and $\Gamma\tj Q:A$, thus $\Gamma\tj PQ: B$ by $\tjapp$. \end{enumerate} Conversely, given a well-typed lambda term $M$, with associated typing judgment $\Gamma\tj M:A$, we can construct a derivation of $A$ from assumptions $\Gamma$. We define this derivation by recursion on the type derivation of $\Gamma\tj M:A$. 
The details are too tedious to spell them out here; we simply go through each of the rules {\tjvar}, {\tjlam}, {\tjapp}, {\tjpair}, {\tjproja}, {\tjprojb}, {\tjunit} and apply the corresponding rule {\trule{ax}}, {\trule{$\to$-I}}, {\trule{$\to$-E}}, {\trule{$\cand$-I}}, {\trule{$\cand$-E$_1$}}, {\trule{$\cand$-E$_2$}}, {\trule{$\ctrue$-I}}, respectively. \subsection{Reductions in the simply-typed lambda calculus} \label{subsec-simply-reductions} $\beta$- and $\eta$-reductions in the simply-typed lambda calculus are defined much in the same way as for the untyped lambda calculus, except that we have introduced some additional terms (such as pairs and projections), which calls for some additional reduction rules. We define the following reductions: \[ \begin{array}{lllll}\label{page-typed-reductions} (\beta_{\to}) & (\lamabs{x}{A}.M)N & \to & \subst{M}{N}{x}, \\ (\eta_{\to}) & \lamabs{x}{A}.Mx & \to & M, & \mbox{where $x\not\in\FV{M}$},\\ (\beta_{\times,1}) & \proj1\pair{M,N} & \to & M,\\ (\beta_{\times,2}) & \proj2\pair{M,N} & \to & N,\\ (\eta_{\times}) & \pair{\proj1 M,\proj2 M} & \to & M,\\ (\eta_{1}) & M & \to & \unit, & \mbox{if $M:1$}. \end{array} \] Then single- and multi-step $\beta$- and $\eta$-reduction are defined as the usual contextual closure of the above rules, and the definitions of $\beta$- and $\eta$-equivalence also follow the usual pattern. In addition to the usual {\trule{cong}} and {\nrule{\xi}} rules, we now also have congruence rules that apply to pairs and projections. We remark that, to be perfectly precise, we should have defined reductions between typing judgments, and not between terms. This is necessary because some of the reduction rules, notably $(\eta_{1})$, depend on the type of the terms involved. However, this would be notationally very cumbersome, and we will blur the distinction, pretending at times that terms appear in some implicit typing context that we do not write. 
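To make these rules concrete, here is a small Python sketch of one-step redex contraction for $\lam^{\to,\times,1}$-terms, using a tuple-based term representation of our own devising (not part of the notes); the rule $(\eta_{1})$ is omitted precisely because, as just noted, it would require type information.

```python
def fv(m):
    """Free variables of a term. Terms are tuples:
    ("var", x), ("lam", x, body), ("app", f, a),
    ("pair", m, n), ("proj", i, m) with i in {1, 2}, ("unit",)."""
    tag = m[0]
    if tag == "var":
        return {m[1]}
    if tag == "lam":
        return fv(m[2]) - {m[1]}
    if tag in ("app", "pair"):
        return fv(m[1]) | fv(m[2])
    if tag == "proj":
        return fv(m[2])
    return set()  # unit

def subst(m, n, x):
    """m[n/x]; capture-avoiding renaming is elided for brevity."""
    tag = m[0]
    if tag == "var":
        return n if m[1] == x else m
    if tag == "lam":
        return m if m[1] == x else ("lam", m[1], subst(m[2], n, x))
    if tag in ("app", "pair"):
        return (tag, subst(m[1], n, x), subst(m[2], n, x))
    if tag == "proj":
        return ("proj", m[1], subst(m[2], n, x))
    return m  # unit

def contract(m):
    """Apply one top-level reduction rule, or return None if m is no redex."""
    if m[0] == "app" and m[1][0] == "lam":                     # (beta_->)
        return subst(m[1][2], m[2], m[1][1])
    if (m[0] == "lam" and m[2][0] == "app"
            and m[2][2] == ("var", m[1])
            and m[1] not in fv(m[2][1])):                      # (eta_->)
        return m[2][1]
    if m[0] == "proj" and m[2][0] == "pair":                   # (beta_x,i)
        return m[2][m[1]]
    if (m[0] == "pair"
            and m[1][0] == "proj" and m[1][1] == 1
            and m[2][0] == "proj" and m[2][1] == 2
            and m[1][2] == m[2][2]):                           # (eta_x)
        return m[1][2]
    return None
```

For example, `contract(("app", ("lam", "x", ("var", "x")), ("var", "y")))` yields `("var", "y")`; the contextual closure (reducing inside subterms) would be a routine extra traversal. Note how the side condition of $(\eta_{\times})$, that the two subterms must be equal, shows up as a structural equality test.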
An important property of the reduction is the ``subject reduction'' property, which states that well-typed terms reduce only to well-typed terms of the same type. This has an immediate application to programming: subject reduction guarantees that if we write a program of type ``integer'', then the final result of evaluating the program, if any, will indeed be an integer, and not, say, a boolean. \begin{theorem}[Subject Reduction] If $\Gamma\tj M:A$ and $M\redbe M'$, then $\Gamma\tj M':A$. \end{theorem} \begin{proof} By induction on the derivation of $M\redbe M'$, and by case distinction on the last rule used in the derivation of $\Gamma\tj M:A$. For instance, if $M\redbe M'$ by $(\beta_{\to})$, then $M=(\lamabs{x}{B}.P)Q$ and $M'=\subst{P}{Q}{x}$. If $\Gamma\tj M:A$, then we must have $\Gamma,\typ{x}{B}\tj P:A$ and $\Gamma\tj Q:B$. It follows that $\Gamma\tj\subst{P}{Q}{x}:A$; the latter statement can be proved separately (as a ``substitution lemma'') by induction on $P$ and makes crucial use of the fact that $x$ and $Q$ have the same type. The other cases are similar, and we leave them as an exercise. Note that, in particular, one needs to consider the {\trule{cong}}, {\nrule{\xi}}, and other congruence rules as well. \eot \end{proof} \subsection{A word on Church-Rosser} One important theorem that does {\em not} hold for $\beta\eta$-reduction in the simply-typed $\lam^{\to,\times,1}$-calculus is the Church-Rosser theorem. The culprit is the rule $(\eta_{1})$. For instance, if $x$ is a variable of type $A\times 1$, then the term $M=\pair{\proj1 x,\proj2 x}$ reduces to $x$ by $(\eta_{\times})$, but also to $\pair{\proj1 x,\unit}$ by $(\eta_{1})$. Both these terms are normal forms. Thus, the Church-Rosser property fails. \[ \xymatrix{&\pair{\proj1 x,\proj2 x}\ar[dl]_{\eta_{\times}}\ar[dr]^{\eta_{1}} \\x&&\pair{\proj1 x,\unit}} \] There are several ways around this problem. 
For instance, if we omit all the $\eta$-reductions and consider only $\beta$-reductions, then the Church-Rosser property does hold. Eliminating $\eta$-reductions does not have much of an effect on the lambda calculus from a computational point of view; already in the untyped lambda calculus, we noticed that all interesting calculations could in fact be carried out with $\beta$-reductions alone. We can say that $\beta$-reductions are the engine for computation, whereas $\eta$-reductions only serve to clean up the result. In particular, it can never happen that some $\eta$-reduction inhibits another $\beta$-reduction: if $M\rede M'$, and if $M'$ has a $\beta$-redex, then it must be the case that $M$ already has a corresponding $\beta$-redex. Also, $\eta$-reductions always reduce the size of a term. It follows that if $M$ is a $\beta$-normal form, then $M$ can always be reduced to a $\beta\eta$-normal form (not necessarily unique) in a finite sequence of $\eta$-reductions. \begin{exercise} Prove the Church-Rosser theorem for $\beta$-reductions in the $\lam^{\to,\times,1}$-calculus. Hint: use the same method that we used in the untyped case. \end{exercise} Another solution is to omit the type $1$ and the term $\unit$ from the language. In this case, the Church-Rosser property holds even for $\beta\eta$-reduction. \begin{exercise} Prove the Church-Rosser theorem for $\beta\eta$-reduction in the $\lam^{\to,\times}$-calculus, i.e., the simply-typed lambda calculus without $1$ and $\unit$. \end{exercise} \subsection{Reduction as proof simplification} Having made a one-to-one correspondence between simply-typed lambda terms and derivations in intuitionistic natural deduction, we may now ask what $\beta$- and $\eta$-reductions correspond to under this correspondence. It turns out that these reductions can be thought of as ``proof simplification steps''. Consider for example the $\beta$-reduction $\proj1\pair{M,N} \to M$. 
If we translate the left-hand side and the right-hand side via the Curry-Howard isomorphism (here we use the first notation for natural deduction), we get \[ \trule{$\cand$-E$_1$}\deriv{\trule{$\cand$-I}\deriv{\ideriv{\Gamma}{A}\sep\ideriv{\Gamma}{B}}{A\cand B}}{A} \ssep\to\ssep \ideriv{\Gamma}{A}. \] We can see that the left derivation contains an introduction rule immediately followed by an elimination rule. This leads to an obvious simplification if we replace the left derivation by the right one. In general, $\beta$-redexes correspond to situations where an introduction rule is immediately followed by an elimination rule, and $\eta$-redexes correspond to situations where an elimination rule is immediately followed by an introduction rule. For example, consider the $\eta$-reduction $\pair{\proj1 M,\proj2 M} \to M$. This translates to: \[ \trule{$\cand$-I}\deriv{\trule{$\cand$-E$_1$}\deriv{\ideriv{\Gamma}{A\cand B}}{A}\sep\trule{$\cand$-E$_2$}\deriv{\ideriv{\Gamma}{A\cand B}}{B}}{A\cand B} \ssep\to\ssep \ideriv{\Gamma}{A\cand B} \] Again, this is an obvious simplification step, but it has a side condition: the left and right subderivation must be the same! This side condition corresponds to the fact that in the redex $\pair{\proj1 M,\proj2 M}$, the two subterms called $M$ must be equal. It is another characteristic of $\eta$-reductions that they often carry such side conditions. The reduction $M\to\unit$ translates as follows: \[ \ideriv{\Gamma}{\ctrue} \ssep\to\ssep \trule{$\ctrue$-I}\deriv{\hspace{.5in}}{\ctrue} \] In other words, any derivation of $\ctrue$ can be replaced by the canonical such derivation. More interesting is the case of the $(\beta_{\to})$ rule. Here, we have $(\lamabs{x}{A}.M)N \to \subst{M}{N}{x}$, which can be translated via the Curry-Howard Isomorphism as follows: \[ \trule{$\to$-E}\ \deriv{\trule{$\to$-I}\ \deriv{\ideriv{\Gamma,[\typ{x}{A}]}{B}}{A\to B} x \sep\ideriv{\Gamma}{A}}{B} \ssep\to \ssep \ideriv{\Gamma,\ideriv{\Gamma}{A}}{B}. 
\] What is going on here is that we have a derivation $M$ of $B$ from assumptions $\Gamma$ and $A$, and we have another derivation $N$ of $A$ from $\Gamma$. We can directly obtain a derivation of $B$ from $\Gamma$ by stacking the second derivation on top of the first! Notice that this last proof ``simplification'' step may not actually be a simplification. Namely, if the hypothesis labeled $x$ is used many times in the derivation $M$, then $N$ will have to be copied many times in the right-hand side term. This corresponds to the fact that if $x$ occurs several times in $M$, then $\subst{M}{N}{x}$ might be a longer and more complicated term than $(\lam x.M)N$. Finally, consider the $(\eta_{\to})$ rule $\lamabs{x}{A}.Mx \to M$, where $x\not\in\FV{M}$. This translates to derivations as follows: \[ \trule{$\to$-I}\ \deriv{\trule{$\to$-E}\ \deriv{\ideriv{\Gamma}{A\to B}\sep\trule{ax}\ \deriv{[\typ{x}{A}]}{A} x}{B}}{A\to B} x \ssep\to\ssep \ideriv{\Gamma}{A\to B} \] \subsection{Getting mileage out of the Curry-Howard isomorphism} The Curry-Howard isomorphism makes a connection between the lambda calculus and logic. We can think of it as a connection between ``programs'' and ``proofs''. What is such a connection good for? Like any isomorphism, it allows us to switch back and forth and think in whichever system suits our intuition in a given situation. Moreover, we can save a lot of work by transferring theorems that were proved about the lambda calculus to logic, and vice versa. As an example, we will see in the next section how to add disjunctions to propositional intuitionistic logic, and then we will explore what we can learn about the lambda calculus from that. \subsection{Disjunction and sum types} To the BNF for formulas of propositional intuitionistic logic from Section~\ref{subsec-proplogic}, we add the following clauses: \[ \mbox{Formulas:}\ssep A,B \bnf \ldots \bor A\cor B \bor \cfalse. 
\] Here, $A\cor B$ stands for disjunction, or ``or'', and $\cfalse$ stands for falsity, which we can also think of as zero-ary disjunction. The symbol $\cfalse$ is also known by the names of ``bottom'', ``absurdity'', or ``contradiction''. The rules for constructing derivations are extended by the following cases: \begin{enumerate} \resumeenumerate \item ($\cor$-introduction) \[ \trule{$\cor$-I$_1$}\ \deriv{\Gamma\tj A}{\Gamma\tj A\cor B} \sep \trule{$\cor$-I$_2$}\ \deriv{\Gamma\tj B}{\Gamma\tj A\cor B} \] In other words, if we have proven $A$ or we have proven $B$, then we may conclude $A\cor B$. \item ($\cor$-elimination) \[ \trule{$\cor$-E$_{x,y}$}\ \deriv{\Gamma\tj A\cor B \sep \Gamma,\typ{x}{A}\tj C \sep \Gamma,\typ{y}{B}\tj C}{\Gamma\tj C} \] This is known as the ``principle of case distinction''. If we know $A\cor B$, and we wish to prove some formula $C$, then we may proceed by cases. In the first case, we assume $A$ holds and prove $C$. In the second case, we assume $B$ holds and prove $C$. In either case, we prove $C$, which therefore holds independently. Note that the $\cor$-elimination rule differs from all other rules we have considered so far, because it involves some arbitrary formula $C$ that is not directly related to the principal formula $A\cor B$ being eliminated. \item ($\cfalse$-elimination) \[ \trule{$\cfalse$-E}\ \deriv{\Gamma\tj\cfalse}{\Gamma\tj C}, \] for an arbitrary formula $C$. This rule formalizes the familiar principle ``ex falso quodlibet'', which means that falsity implies anything. \end{enumerate} There is no $\cfalse$-introduction rule. This is symmetric to the fact that there is no $\ctrue$-elimination rule. Having extended our logic with disjunctions, we can now ask what these disjunctions correspond to under the Curry-Howard isomorphism. Naturally, we need to extend the lambda calculus by as many new terms as we have new rules in the logic. 
It turns out that disjunctions correspond to a concept that is quite natural in programming: ``sum'' or ``union'' types. To the lambda calculus, add type constructors $A+B$ and $0$. \[ \mbox{Simple types:}\ssep A,B \bnf \ldots\bor A+B\bor 0. \] Intuitively, $A+B$ is the disjoint union of $A$ and $B$, as in set theory: an element of $A+B$ is either an element of $A$ or an element of $B$, together with an indication of which one is the case. In particular, if we consider an element of $A+A$, we can still tell whether it is in the left or right component, even though the two types are the same. In programming languages, this is sometimes known as a ``union'' or ``variant'' type. We call it a ``sum'' type here. The type $0$ is simply the empty type, corresponding to the empty set in set theory. What should the lambda terms be that go with these new types? We know from our experience with the Curry-Howard isomorphism that we have to have precisely one term constructor for each introduction or elimination rule of natural deduction. Moreover, we know that if such a rule has $n$ subderivations, then our term constructor has to have $n$ immediate subterms. We also know something about bound variables: Each time a hypothesis is canceled in a natural deduction rule, there must be a binder of the corresponding variable in the lambda calculus. This information more or less uniquely determines what the lambda terms should be; the only choice that is left is what to call them! We add four terms to the lambda calculus: \[ \begin{array}{lll} \mbox{Raw terms:}\ssep M,N,P &\bnf&\ldots\bor \inj1 M\bor \inj2 M \\ && \bor \caseof{M}{x}{A}{N}{y}{B}{P} \bor \Box_A M \end{array} \] The typing rules for these new terms are shown in Table~\ref{tab-sum-typing-rules}. 
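Under type erasure, these four terms behave exactly like the tagged unions familiar from programming. The following Python sketch (the function names are ours, not part of the calculus) shows the injections and case distinction, together with booleans encoded as $1+1$:

```python
def inj1(m):
    return ("inl", m)          # left injection into A + B

def inj2(m):
    return ("inr", m)          # right injection into A + B

def case(m, branch1, branch2):
    """case m of inj1(x) => branch1(x) | inj2(y) => branch2(y)."""
    tag, value = m
    return branch1(value) if tag == "inl" else branch2(value)

# Booleans as 1 + 1, with the unit value * represented by ():
true = inj1(())
false = inj2(())

def if_then_else(m, n, p):
    # n and p are thunks, so that only the chosen branch is evaluated
    return case(m, lambda _: n(), lambda _: p())
```

For instance, `if_then_else(true, lambda: "yes", lambda: "no")` evaluates to `"yes"`.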
\begin{table*}[tbp] \[ \begin{array}{rc} \tjinja & \deriv{\Gamma\tj M:A}{\Gamma\tj \inj1 M:A+B} \\[1.8ex] \tjinjb & \deriv{\Gamma\tj M:B}{\Gamma\tj \inj2 M:A+B} \\[1.8ex] \tjcase & \deriv{\Gamma\tj M: A+B\sep\Gamma,\typ{x}{A}\tj N: C \sep\Gamma,\typ{y}{B}\tj P: C} {\Gamma\tj (\caseof{M}{x}{A}{N}{y}{B}{P}): C} \\[1.8ex] \tjbox & \deriv{\Gamma\tj M: 0} {\Gamma\tj\Box_A M:A} \end{array} \] \caption{Typing rules for sums} \label{tab-sum-typing-rules} \end{table*} By comparing these rules to \trule{$\cor$-I$_1$}, \trule{$\cor$-I$_2$}, \trule{$\cor$-E}, and \trule{$\cfalse$-E}, you can see that they are precisely analogous. But what is the meaning of these new terms? The term $\inj1 M$ is simply an element of the left component of $A+B$. We can think of $\inj1$ as the injection function $A\to A+B$. Similar for $\inj2$. The term $(\caseof{M}{x}{A}{N}{y}{B}{P})$ is a case distinction: evaluate $M$ of type $A+B$. The answer is either an element of the left component $A$ or of the right component $B$. In the first case, assign the answer to the variable $x$ and evaluate $N$. In the second case, assign the answer to the variable $y$ and evaluate $P$. Since both $N$ and $P$ are of type $C$, we get a final result of type $C$. Note that the case statement is very similar to an if-then-else; the only difference is that the two alternatives also carry a value. Indeed, the booleans can be defined as $1+1$, in which case $\truet=\inj1\unit$, $\falset=\inj2\unit$, and $\ifthenelset MNP=\caseof{M}{x}{1}{N}{y}{1}{P}$, where $x$ and $y$ don't occur in $N$ and $P$, respectively. Finally, the term $\Box_A M$ is a simple type cast, corresponding to the unique function $\Box_A:0\to A$ from the empty set to any set $A$. \subsection{Classical logic vs.\ intuitionistic logic} We have mentioned before that the natural deduction calculus we have presented corresponds to intuitionistic logic, and not classical logic. But what exactly is the difference? 
Well, the difference is that in intuitionistic logic, we have no rule for proof by contradiction, and we do not have $A\cor\neg A$ as an axiom. Let us adopt the following convention for negation: the formula $\neg A$ (``not $A$'') is regarded as an abbreviation for $A\to\cfalse$. This way, we do not have to introduce special formulas and rules for negation; we simply use the existing rules for $\to$ and $\cfalse$. In intuitionistic logic, there is no derivation of $A\cor\neg A$, for general $A$. Or equivalently, in the simply-typed lambda calculus, there is no closed term of type $A+(A\to 0)$. We are not yet in a position to prove this formally, but informally, the argument goes as follows: If the type $A$ is empty, then there can be no closed term of type $A$ (otherwise $A$ would have that term as an element). On the other hand, if the type $A$ is non-empty, then there can be no closed term of type $A\to 0$ (or otherwise, if we applied that term to some element of $A$, we would obtain an element of $0$). But if we were to write a {\em generic} term of type $A+(A\to 0)$, then this term would have to work no matter what $A$ is. Thus, the term would have to decide whether to use the left or right component independently of $A$. But for any such term, we can get a contradiction by choosing $A$ either empty or non-empty. Closely related is the fact that in intuitionistic logic, we do not have a principle of proof by contradiction. The ``proof by contradiction'' rule is the following: \[ \trule{contra$_x$}\ \deriv{\Gamma,\typ{x}{\neg A}\tj\cfalse}{\Gamma\tj A}. \] This is {\em not} a rule of intuitionistic propositional logic, but we can explore what would happen if we were to add such a rule. First, we observe that the contradiction rule is very similar to the following: \[ \deriv{\Gamma,\typ{x}{A}\tj\cfalse}{\Gamma\tj\neg A}. \] However, since we defined $\neg A$ to be the same as $A\to\cfalse$, the latter rule is an instance of \trule{$\to$-I}. 
The contradiction rule, on the other hand, is not an instance of \trule{$\to$-I}. If we admit the rule \trule{contra}, then $A\cor\neg A$ can be derived. The following is such a derivation: {\footnotesize \[ \trule{$\to$-E}\ \deriv{\raisebox{-6ex}{$\squishr{\trule{ax$_y$}~}\deriv{}{\typ{y}{\neg(A\cor\neg A)}\tj\neg(A\cor\neg A)}$}\hspace{-24ex} \squishr{\trule{$\to$-E}~}\deriv{\squishr{\trule{ax$_y$}~}\deriv{}{\typ{y}{\neg(A\cor\neg A)},\typ{x}{A}\tj\neg(A\cor\neg A)} \sep \trule{$\cor$-I$_1$~}\deriv{\squishr{\trule{ax$_x$}~}\deriv{}{\typ{y}{\neg(A\cor\neg A)},\typ{x}{A}\tj A}}{\typ{y}{\neg(A\cor\neg A)},\typ{x}{A}\tj A\cor\neg A}}{ \trule{$\cor$-I$_2$~}\deriv{ \squishr{\trule{$\to$-I$_x$}~}\deriv{\typ{y}{\neg(A\cor\neg A)},\typ{x}{A}\tj \cfalse}{\typ{y}{\neg(A\cor\neg A)}\tj \neg A} }{\typ{y}{\neg(A\cor\neg A)}\tj A\cor \neg A}\hspace{-18ex} }}{\trule{contra$_y$~}\deriv{\typ{y}{\neg(A\cor\neg A)}\tj\cfalse}{\tj A\cor\neg A}} \]} Conversely, if we added $A\cor\neg A$ as an axiom to intuitionistic logic, then this already implies the \trule{contra} rule. Namely, from any derivation of $\Gamma,\typ{x}{\neg A}\tj\cfalse$, we can obtain a derivation of $\Gamma\tj A$ by using $A\cor\neg A$ as an axiom. Thus, we can {\em simulate} the \trule{contra} rule, in the presence of $A\cor\neg A$. \[ \trule{$\cor$-E$_{x,y}$}\ \deriv{\deriv{\trule{excluded middle}}{\Gamma\tj A\cor \neg A} \sep \trule{$\cfalse$-E}\ \deriv{ \Gamma,\typ{x}{\neg A}\tj\cfalse}{\Gamma,\typ{x}{\neg A}\tj A} \sep \trule{ax$_y$}\ \deriv{}{\Gamma,\typ{y}{A}\tj A}}{\Gamma\tj A} \] In this sense, we can say that the rule $\trule{contra}$ and the axiom $A\cor\neg A$ are equivalent, in the presence of the other axioms and rules of intuitionistic logic. It turns out that the system of intuitionistic logic plus \trule{contra} is equivalent to classical logic as we know it. It is in this sense that we can say that intuitionistic logic is ``classical logic without proofs by contradiction''. 
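Read through the Curry-Howard isomorphism, this simulation of \trule{contra} is a small program transformation. In the following Python sketch (names ours), $\cfalse$ is modelled as an empty type, so `absurd` can never actually be invoked on a well-typed run; `em` stands for a hypothetical oracle inhabiting $A\cor\neg A$, represented as a tagged value, and `d` plays the role of the derivation of $\Gamma,\typ{x}{\neg A}\tj\cfalse$:

```python
def absurd(bottom):
    # cfalse-elimination: the empty type has no values, so this
    # branch is unreachable in any well-typed run.
    raise AssertionError("ex falso: unreachable")

def contra(em, d):
    """From em : A + (A -> 0) and d : (A -> 0) -> 0, produce an A,
    mirroring the or-elimination derivation above."""
    tag, value = em
    if tag == "inr":           # case x : not A; d(value) : 0, then ex falso
        return absurd(d(value))
    return value               # case y : A; just return it
```

The point of the informal argument earlier in this section is that no *generic* program can supply the oracle `em` for all types $A$; the sketch only shows that, given such an oracle, proof by contradiction becomes derivable.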
\begin{exercise} The formula $((A\to B)\to A)\to A$ is called ``Peirce's law''. It is valid in classical logic, but not in intuitionistic logic. Give a proof of Peirce's law in natural deduction, using the rule \trule{contra}. \end{exercise} Conversely, Peirce's law, when added to intuitionistic logic for all $A$ and $B$, implies \trule{contra}. Here is the proof. Recall that $\neg A$ is an abbreviation for $A\to\cfalse$. \[ \trule{$\to$-E}~ \deriv{ \deriv{\trule{Peirce's law for $B=\cfalse$}} {\Gamma\tj((A\to\cfalse)\to A)\to A} \sep \trule{$\to$-I$_x$}~ \deriv{ \squishr{\trule{$\cfalse$-E}~} \deriv{\Gamma,\typ{x}{A\to\cfalse}\tj\cfalse} {\Gamma,\typ{x}{A\to\cfalse}\tj A} } {\Gamma\tj(A\to\cfalse)\to A} } {\Gamma\tj A} \] We summarize the results of this section in terms of a slogan: \[ \begin{array}{ll} & \mbox{intuitionistic logic + \trule{contra}} \\ =& \mbox{intuitionistic logic + ``$A\cor\neg A$''} \\ =& \mbox{intuitionistic logic + Peirce's law} \\ =& \mbox{classical logic.} \end{array} \] The proof theory of intuitionistic logic is a very interesting subject in its own right, and an entire course could be taught just on that subject. \subsection{Classical logic and the Curry-Howard isomorphism} To extend the Curry-Howard isomorphism to classical logic, according to the observations of the previous section, it is sufficient to add to the lambda calculus a term representing Peirce's law. All we have to do is to add a term $\fC:((A\to B)\to A)\to A$, for all types $A$ and $B$. Such a term is known as {\em Felleisen's $\fC$}, and it has a specific interpretation in terms of programming languages. It can be understood as a control operator (similar to ``goto'', ``break'', or exception handling in some procedural programming languages). Specifically, Felleisen's interpretation requires a term of the form \[ M = \fC(\lam k^{A\to B}.N) : A \] to be evaluated as follows. To evaluate $M$, first evaluate $N$. Note that both $M$ and $N$ have type $A$. 
If $N$ returns a result, then this immediately becomes the result of $M$ as well. On the other hand, if during the evaluation of $N$, the function $k$ is ever called with some argument $x:A$, then the further evaluation of $N$ is aborted, and $x$ immediately becomes the result of $M$. In other words, the final result of $M$ can be calculated anywhere inside $N$, no matter how deeply nested, by passing it to $k$ as an argument. The function $k$ is known as a {\em continuation}. There is a lot more to programming with continuations than can be explained in these lecture notes. For an interesting application of continuations to compiling, see e.g. {\cite{App92}} from the bibliography (Section~\ref{ssec-bibliography}). The above explanation of what it means to ``evaluate'' the term $M$ glosses over several details. In particular, we have not given a reduction rule for $\fC$ in the style of $\beta$-reduction. To do so is rather complicated and is beyond the scope of these notes. \section{Weak and strong normalization} \subsection{Definitions} As we have seen, computing with lambda terms means reducing lambda terms to normal form. By the Church-Rosser theorem, such a normal form is guaranteed to be unique if it exists. But so far, we have paid little attention to the question whether normal forms exist for a given term, and if so, how we need to reduce the term to find a normal form. \begin{definition} Given a notion of term and a reduction relation, we say that a term $M$ is {\em weakly normalizing} if there exists a finite sequence of reductions $M\red M_1\red\ldots\red M_n$ such that $M_n$ is a normal form. We say that $M$ is {\em strongly normalizing} if there does not exist an infinite sequence of reductions starting from $M$, or in other words, if {\em every} sequence of reductions starting from $M$ is finite. 
\end{definition} Recall the following consequence of the Church-Rosser theorem, which we stated as Corollary~\ref{cor-cr-3}: If $M$ has a normal form $N$, then $M\reds N$. It follows that a term $M$ is weakly normalizing if and only if it has a normal form. This does not imply that every possible way of reducing $M$ leads to a normal form. A term is strongly normalizing if and only if every way of reducing it leads to a normal form in finitely many steps. Consider for example the following terms in the untyped lambda calculus: \begin{enumerate} \item The term $\Omega=(\lam x.xx)(\lam x.xx)$ is neither weakly nor strongly normalizing. It does not have a normal form. \item The term $(\lam x.y)\Omega$ is weakly normalizing, but not strongly normalizing. It reduces to the normal form $y$, but it also has an infinite reduction sequence. \item The term $(\lam x.y)((\lam x.x)(\lam x.x))$ is strongly normalizing. While there are several different ways to reduce this term, they all lead to a normal form in finitely many steps. \item The term $\lam x.x$ is strongly normalizing, since it has no reductions, much less an infinite reduction sequence. More generally, every normal form is strongly normalizing. \end{enumerate} We see immediately that strongly normalizing implies weakly normalizing. However, as the above examples show, the converse is not true. \subsection{Weak and strong normalization in typed lambda calculus} We found that the term $\Omega=(\lam x.xx)(\lam x.xx)$ is not weakly or strongly normalizing. On the other hand, we also know that this term is not typable in the simply-typed lambda calculus. This is not a coincidence, as the following theorem shows. \begin{theorem}[Weak normalization theorem]\label{thm-weak-norm} In the simply-typed lambda calculus, all terms are weakly normalizing. \end{theorem} \begin{theorem}[Strong normalization theorem]\label{thm-strong-norm} In the simply-typed lambda calculus, all terms are strongly normalizing. 
\end{theorem} Clearly, the strong normalization theorem implies the weak normalization theorem. However, the weak normalization theorem is much easier to prove, which is the reason we proved both these theorems in class. In particular, the proof of the weak normalization theorem gives an explicit measure of the complexity of a term, in terms of the number of redexes of a certain degree in the term. There is no corresponding complexity measure in the proof of the strong normalization theorem. Please refer to Chapters 4 and 6 of ``Proofs and Types'' by Girard, Lafont, and Taylor {\cite{GLT89}} for the proofs of Theorems~\ref{thm-weak-norm} and {\ref{thm-strong-norm}}, respectively. \section{Polymorphism} The polymorphic lambda calculus, also known as ``System F'', is obtained by extending the Curry-Howard isomorphism to the quantifier $\forall$. For example, consider the identity function $\lambda x^A.x$. This function has type $A\to A$. Another identity function is $\lambda x^B.x$ of type $B\to B$, and so forth for every type. We can thus think of the identity function as a family of functions, one for each type. In the polymorphic lambda calculus, there is a dedicated syntax for such families, and we write $\Lambda\alpha.\lambda x^\alpha.x$ of type $\forall\alpha.\alpha\to\alpha$. System F was independently discovered by Jean-Yves Girard and John Reynolds in the early 1970s. \subsection{Syntax of System F} The primary difference between System F and simply-typed lambda calculus is that System F has a new kind of function that takes a {\em type}, rather than a {\em term}, as its argument. We can also think of such a function as a family of terms that is indexed by a type. Let $\alpha,\beta,\gamma$ range over a countable set of {\em type variables}. 
The types of System F are given by the grammar \[ \mbox{Types:}\ssep A,B \bnf \alpha \bor A\to B\bor \forall\alpha.A \] A type of the form $A\to B$ is called a {\em function type}, and a type of the form $\forall\alpha.A$ is called a {\em universal type}. The type variable $\alpha$ is bound in $\forall\alpha.A$, and we identify types up to renaming of bound variables; thus, $\forall\alpha.\alpha\to\alpha$ and $\forall\beta.\beta\to\beta$ are the same type. We write $\FTV{A}$ for the set of free type variables of a type $A$, defined inductively by: \begin{itemize} \item $\FTV{\alpha} = \s{\alpha}$, \item $\FTV{A\to B} = \FTV{A}\cup\FTV{B}$, \item $\FTV{\forall\alpha.A} = \FTV{A}\setminus\s{\alpha}$. \end{itemize} We also write $A[B/\alpha]$ for the result of replacing all free occurrences of $\alpha$ by $B$ in $A$. Just like the substitution of terms (see Section~\ref{ssec-substitution}), this type substitution must be {\em capture-free}, i.e., special care must be taken to rename any bound variables of $A$ so that their names are different from the free variables of $B$. The terms of System F are: \[ \mbox{Terms:}\ssep M,N \bnf x \bor MN \bor \lamabs{x}{A}.M \bor MA \bor \Lamabs{\alpha}.M \] Of these, variables $x$, applications $MN$, and lambda abstractions $\lamabs{x}{A}.M$ are exactly as for the simply-typed lambda calculus. The new terms are {\em type application} $MA$, which is the application of a type function $M$ to a type $A$, and {\em type abstraction} $\Lamabs{\alpha}.M$, which denotes the type function that maps a type $\alpha$ to a term $M$. The typing rules for System F are shown in Table~\ref{tab-system-f-typing-rules}. 
\begin{table*}[tbp] \[ \begin{array}{rc} \tjvar & \deriv{}{\Gamma,\typ{x}{ A}\tj x: A} \\[1.8ex] \tjapp & \deriv{\Gamma\tj M: A\to B\sep\Gamma\tj N: A} {\Gamma\tj MN: B} \\[1.8ex] \tjlam & \deriv{\Gamma,\typ{x}{ A}\tj M: B} {\Gamma\tj\lamabs{x}{A}.M: A\to B} \\[1.8ex] \tjApp & \deriv{\Gamma\tj M: \forall\alpha.A} {\Gamma\tj MB: A[B/\alpha]} \\[1.8ex] \tjLam & \deriv{\Gamma \tj M: A\sep \alpha\not\in\FTV{\Gamma}} {\Gamma\tj\Lamabs{\alpha}.M: \forall\alpha.A} \end{array} \] \caption{Typing rules for System F} \label{tab-system-f-typing-rules} \end{table*} We also write $\FTV{M}$ for the set of free type variables in the term $M$. We need a final notion of substitution: if $M$ is a term, $B$ a type, and $\alpha$ a type variable, we write $M[B/\alpha]$ for the capture-free substitution of $B$ for $\alpha$ in $M$. \subsection{Reduction rules} In System F, there are two rules for $\beta$-reduction. The first one is the familiar rule for the application of a function to a term. The second one is an analogous rule for the application of a type function to a type. \[ \begin{array}{lllll} (\beta_{\to}) & (\lamabs{x}{A}.M)N & \to & \subst{M}{N}{x}, \\ (\beta_{\forall}) & (\Lamabs{\alpha}.M)A & \to & \subst{M}{A}{\alpha}. \end{array} \] Similarly, there are two rules for $\eta$-reduction. \[ \begin{array}{lllll} (\eta_{\to}) & \lamabs{x}{A}.Mx & \to & M, & \mbox{if $x\not\in\FV{M}$,} \\ (\eta_{\forall}) & \Lamabs{\alpha}.M\alpha & \to & M, & \mbox{if $\alpha\not\in\FTV{M}$.} \end{array} \] The congruence and $\xi$-rules are as expected: \[ \deriv{M\to M'}{MN\to M'N} \sep \deriv{N\to N'}{MN\to MN'} \sep \deriv{M\to M'}{\lamabs{x}{A}.M\to\lamabs{x}{A}.M'} \] \[ \deriv{M\to M'}{MA\to M'A} \sep \deriv{M\to M'}{\Lamabs{\alpha}.M\to\Lamabs{\alpha}.M'} \] \subsection{Examples} Just as in the untyped lambda calculus, many interesting data types and operations can be encoded in System F. 
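As a warm-up before the encodings, note how the two $\beta$-rules interact: applying the polymorphic identity function to a type $B$ and then to a term $N:B$ takes one $(\beta_{\forall})$-step followed by one $(\beta_{\to})$-step:
\[
(\Lamabs{\alpha}.\lamabs{x}{\alpha}.x)\,B\,N
\;\to\; (\lamabs{x}{B}.x)\,N
\;\to\; N.
\]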
\subsubsection{Booleans} Define the System F type $\bool$, and terms $\truet,\falset:\bool$, as follows: \[ \begin{array}{ccl} \bool &=& \forall\alpha.\alpha\to\alpha\to\alpha,\\ \truet &=& \Lamabs{\alpha}.\lamabs{x}{\alpha}.\lamabs{y}{\alpha}.x,\\ \falset &=& \Lamabs{\alpha}.\lamabs{x}{\alpha}.\lamabs{y}{\alpha}.y.\\ \end{array} \] It is easy to see from the typing rules that $\tj\truet:\bool$ and $\tj\falset:\bool$ are valid typing judgments. We can define an if-then-else operation \[ \begin{array}{l} \ifthenelset : \forall\beta.\bool\to\beta\to\beta\to\beta, \\ \ifthenelset = \Lamabs{\beta}.\lamabs{z}{\bool}.z\beta. \end{array} \] It is then easy to see that, for any type $B$ and any pair of terms $M,N:B$, we have \[ \begin{array}{rcl} \ifthenelset B\, \truet\, MN &\redbs& M,\\ \ifthenelset B\, \falset\, MN &\redbs& N. \end{array} \] Once we have if-then-else, it is easy to define other boolean operations, for example \[ \begin{array}{ccl} \andt &=& \lamabs{a}{\bool}.\lamabs{b}{\bool}.\ifthenelset\bool a\,b\,\falset,\\ \ort &=& \lamabs{a}{\bool}.\lamabs{b}{\bool}.\ifthenelset\bool a\,\truet\,b,\\ \nott &=& \lamabs{a}{\bool}.\ifthenelset\bool a\,\falset\,\truet. \end{array} \] Later, in Proposition~\ref{prop-unique-bool}, we will show that up to $\beta\eta$-equality, $\truet$ and $\falset$ are the {\em only} closed terms of type $\bool$. This, together with the if-then-else operation, justifies calling this the type of booleans. Note that the above encodings of the booleans and their if-then-else operation in System F are exactly the same as the corresponding encodings in the untyped lambda calculus from Section~\ref{ssec-booleans}, provided that one erases all the types and type abstractions. However, there is an important difference: in the untyped lambda calculus, the booleans were just two terms among many, and there was no guarantee that the argument of a boolean function (such as $\andt$ and $\ort$) was actually a boolean.
In System F, the typing guarantees that all closed boolean terms eventually reduce to either $\truet$ or $\falset$. \subsubsection{Natural numbers} We can also define a type of Church numerals in System F. We define: \[ \begin{array}{ccl} \nat &=& \forall\alpha.(\alpha\to\alpha)\to\alpha\to\alpha,\\ \chnum{0} &=& \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.x,\\ \chnum{1} &=& \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.fx,\\ \chnum{2} &=& \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.f(fx),\\ \chnum{3} &=& \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.f(f(fx)),\\ \ldots \end{array} \] It is then easy to define simple functions, such as successor, addition, and multiplication: \[ \begin{array}{ccl} \succt &=& \lamabs{n}{\nat}.\Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.f(n\alpha f x),\\ \addt &=& \lamabs{n}{\nat}.\lamabs{m}{\nat}.\Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.n\alpha f(m\alpha fx), \\ \multt &=& \lamabs{n}{\nat}.\lamabs{m}{\nat}.\Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.n\alpha (m\alpha f). \end{array} \] Just as for the booleans, these encodings of the Church numerals and functions are exactly the same as those of the untyped lambda calculus from Section~\ref{ssec-natural-numbers}, if one erases all the types and type abstractions. We will show in Proposition~\ref{prop-unique-nat} below that the Church numerals are, up to $\beta\eta$-equivalence, the only closed terms of type $\nat$. \subsubsection{Pairs}\label{ssec-pairs} You will have noticed that we didn't include a cartesian product type $A\times B$ in the definition of System F. This is because such a type is definable. Specifically, let \[ \begin{array}{ccl} A\times B &=& \forall\alpha.(A\to B\to\alpha)\to\alpha,\\ \pair{M,N} &=& \Lamabs{\alpha}.\lamabs{f}{A\to B\to\alpha}.fMN. \end{array} \] Note that when $M:A$ and $N:B$, then $\pair{M,N}:A\times B$. 
Moreover, for any pair of types $A,B$, we have projection functions $\leftt AB : A\times B\to A$ and $\rightt AB : A\times B\to B$, defined by \[ \begin{array}{ccl} \leftt &=& \Lamabs{\alpha}.\Lamabs{\beta}.\lamabs{p}{\alpha\times \beta}.p\alpha(\lamabs{x}{\alpha}.\lamabs{y}{\beta}.x),\\ \rightt &=& \Lamabs{\alpha}.\Lamabs{\beta}.\lamabs{p}{\alpha\times \beta}.p\beta(\lamabs{x}{\alpha}.\lamabs{y}{\beta}.y). \end{array} \] This satisfies the usual laws \[ \begin{array}{ccl} \leftt AB\pair{M,N} &\redbs& M,\\ \rightt AB\pair{M,N} &\redbs& N. \end{array} \] Once again, these encodings of pairs and projections are exactly the same as those we used in the untyped lambda calculus, when one erases all the type-related parts of the terms. You will show in Exercise~\ref{ex-unique-pair} that every closed term of type $A\times B$ is $\beta\eta$-equivalent to a term of the form $\pair{M,N}$. \begin{remark} It is also worth noting that the corresponding $\eta$-laws, such as \[ \pair{\leftt M,\rightt M}=M, \] are {\em not} derivable in System F. These laws hold whenever $M$ is a closed term, but not necessarily when $M$ contains free variables. \end{remark} \begin{exercise}\label{ex-encodings} Find suitable encodings in System F of the types $1$, $A+B$, and $0$, along with the corresponding terms $\unit$, $\inj1$, $\inj2$, $\caseof{M}{x}{A}{N}{y}{B}{P}$, and $\Box_A{M}$. \end{exercise} \subsection{Church-Rosser property and strong normalization} \begin{theorem}[Church-Rosser] System F satisfies the Church-Rosser property, both for $\beta$-reduction and for $\beta\eta$-reduction. \end{theorem} \begin{theorem}[Strong normalization]\label{thm-strong-norm-system-f} In System F, all terms are strongly normalizing. \end{theorem} The proof of the Church-Rosser property is similar to that of the simply-typed lambda calculus, and is left as an exercise. The proof of strong normalization is much more complex; it can be found in Chapter 14 of ``Proofs and Types'' {\cite{GLT89}}. 
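As observed repeatedly above, erasing all types and type abstractions from the encodings of booleans, Church numerals, and pairs yields exactly the corresponding untyped terms. The erased terms can therefore be run directly in any language with first-class functions; the following Python transcription is a sketch (the names, and the decoding function \texttt{to\_int}, are ours):

```python
# Type-erased Church encodings: each System F term, with its type
# abstractions and type applications erased, is a plain lambda term.

true = lambda x: lambda y: x                  # erasure of true
false = lambda x: lambda y: y                 # erasure of false
if_then_else = lambda z: z                    # erasure of if-then-else

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda n: lambda m: lambda f: lambda x: n(f)(m(f)(x))
mult = lambda n: lambda m: lambda f: n(m(f))

pair = lambda m: lambda n: lambda f: f(m)(n)  # erasure of <M, N>
left = lambda p: p(lambda x: lambda y: x)     # first projection
right = lambda p: p(lambda x: lambda y: y)    # second projection

# Decode a Church numeral to a Python int (for experimenting only).
to_int = lambda n: n(lambda k: k + 1)(0)
```

For example, \texttt{to\_int(add(succ(zero))(succ(zero)))} evaluates to $2$, mirroring the $\beta$-reductions of the typed terms.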
\subsection{The Curry-Howard isomorphism} From the point of view of the Curry-Howard isomorphism, $\forall\alpha.A$ is the universally quantified logical statement ``for all $\alpha$, $A$ is true''. Here $\alpha$ ranges over atomic propositions. For example, the formula $\forall\alpha.\forall\beta.\alpha\to(\beta\to\alpha)$ expresses the valid fact that the implication $\alpha\to(\beta\to\alpha)$ is true for all propositions $\alpha$ and $\beta$. Since this quantifier ranges over {\em propositions}, it is called a {\em second-order quantifier}, and the corresponding logic is {\em second-order propositional logic}. Under the Curry-Howard isomorphism, the typing rules for System F become the following logical rules: \begin{itemize} \item (Axiom) \[ \trule{ax$_x$}\ \deriv{}{\Gamma,\typ{x}{A}\tj A} \] \item ($\to$-introduction) \[ \trule{$\to$-I$_x$}\ \deriv{\Gamma,\typ{x}{A}\tj B}{\Gamma\tj A\to B} \] \item ($\to$-elimination) \[ \trule{$\to$-E}\ \deriv{\Gamma\tj A\to B\sep \Gamma\tj A}{\Gamma\tj B} \] \item ($\forall$-introduction) \[ \trule{$\forall$-I}\ \deriv{\Gamma\tj A\sep \alpha\not\in\FTV{\Gamma}}{\Gamma\tj \forall\alpha.A} \] \item ($\forall$-elimination) \[ \trule{$\forall$-E}\ \deriv{\Gamma\tj \forall\alpha.A}{\Gamma\tj A[B/\alpha]} \] \end{itemize} The first three of these rules are familiar from propositional logic. The $\forall$-introduction rule is also known as {\em universal generalization}. It corresponds to a well-known logical reasoning principle: If a statement $A$ has been proven for some {\em arbitrary} $\alpha$, then it follows that it holds for {\em all} $\alpha$. The requirement that $\alpha$ is ``arbitrary'' has been formalized in the logic by requiring that $\alpha$ does not appear in any of the hypotheses that were used to derive $A$, or in other words, that $\alpha$ is not among the free type variables of $\Gamma$. The $\forall$-elimination rule is also known as {\em universal specialization}. 
It is the simple principle that if some statement is true for all propositions $\alpha$, then the same statement is true for any particular proposition $B$. Note that, unlike the $\forall$-introduction rule, this rule does not require a side condition. Finally, we note that the side condition in the $\forall$-introduction rule is of course the same as that of the typing rule $\tjLam$ of Table~\ref{tab-system-f-typing-rules}. From the point of view of logic, the side condition is justified because it asserts that $\alpha$ is ``arbitrary'', i.e., no assumptions have been made about it. From a lambda calculus view, the side condition also makes sense: otherwise, the term $\lamabs{x}{\alpha}.\Lamabs{\alpha}.x$ would be well-typed of type $\alpha\to\forall\alpha.\alpha$, which clearly does not make any sense: there is no way that an element $x$ of some fixed type $\alpha$ could suddenly become an element of an arbitrary type. \subsection{Supplying the missing logical connectives} It turns out that a logic with only implication $\to$ and a second-order universal quantifier $\forall$ is sufficient for expressing all the other usual logical connectives, for example: \begin{eqnarray} A\cand B &\iff& \forall\alpha.(A\to B\to\alpha)\to\alpha,\label{eqn-A-cand-B}\\ A\cor B &\iff& \forall\alpha.(A\to\alpha)\to(B\to\alpha)\to\alpha,\label{eqn-A-cor-B}\\ \cnot A &\iff& \forall\alpha.A\to \alpha,\\ \ctrue &\iff& \forall\alpha.\alpha\to\alpha,\\ \cfalse &\iff& \forall\alpha.\alpha,\\ \exists\beta.A &\iff& \forall\alpha.(\forall\beta.(A\to\alpha))\to\alpha.\label{eqn-exists-beta-A} \end{eqnarray} \begin{exercise} Using informal intuitionistic reasoning, prove that the left-hand side is logically equivalent to the right-hand side for each of {\eqref{eqn-A-cand-B}}--{\eqref{eqn-exists-beta-A}}. \end{exercise} \begin{remark} The definitions {\eqref{eqn-A-cand-B}}--{\eqref{eqn-exists-beta-A}} are somewhat reminiscent of De Morgan's laws and double negations. 
Indeed, if we replace the type variable $\alpha$ by the constant $\cfalse$ in {\eqref{eqn-A-cand-B}}, the right-hand side becomes $(A\to B\to\cfalse)\to\cfalse$, which is intuitionistically equivalent to $\cnot\cnot(A\cand B)$. Similarly, the right-hand side of {\eqref{eqn-A-cor-B}} becomes $(A\to\cfalse)\to(B\to\cfalse)\to\cfalse$, which is intuitionistically equivalent to $\cnot(\cnot A\cand\cnot B)$, and similarly for the remaining connectives. However, the versions of {\eqref{eqn-A-cand-B}}, {\eqref{eqn-A-cor-B}}, and {\eqref{eqn-exists-beta-A}} using $\cfalse$ are only {\em classically}, but not {\em intuitionistically} equivalent to their respective left-hand sides. On the other hand, it is remarkable that by the use of $\forall\alpha$, each right-hand side is {\em intuitionistically} equivalent to its left-hand side. \end{remark} \begin{remark} Note the resemblance between {\eqref{eqn-A-cand-B}} and the definition of $A\times B$ given in Section~\ref{ssec-pairs}. Naturally, this is not a coincidence, as logical conjunction $A\cand B$ should correspond to cartesian product $A\times B$ under the Curry-Howard correspondence. Indeed, by applying the same principle to the other logical connectives, one arrives at a good hint for Exercise~\ref{ex-encodings}. \end{remark} \begin{exercise} Extend System F with an existential quantifier $\exists\beta.A$, not by using {\eqref{eqn-exists-beta-A}}, but by adding a new type with explicit introduction and elimination rules to the language. Justify the resulting rules by comparing them with the usual rules of mathematical reasoning for ``there exists''. Can you explain the meaning of the type $\exists\beta.A$ from a programming language or lambda calculus point of view? \end{exercise} \subsection{Normal forms and long normal forms} Recall that a $\beta$-normal form of System F is, by definition, a term that contains no $\beta$-redex, i.e., no subterm of the form $(\lamabs{x}{A}.M)N$ or $(\Lamabs{\alpha}.M)A$.
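Phrased as an algorithm over the syntax tree, a term is a $\beta$-normal form precisely when a recursive search finds no redex. A minimal Python sketch, using our own tuple encoding of System F terms, might look as follows:

```python
# Terms, in our own encoding:
#   ("var", x) | ("app", M, N) | ("lam", x, A, M)
# | ("tapp", M, A) | ("tlam", a, M)

def is_normal(t):
    """A term is a beta-normal form iff it contains no subterm
    (lam x:A. M) N (beta_->) or (tlam a. M) A (beta_forall)."""
    tag = t[0]
    if tag == "var":
        return True
    if tag == "app":                  # M N: M must not be a lambda
        _, m, n = t
        return m[0] != "lam" and is_normal(m) and is_normal(n)
    if tag == "tapp":                 # M A: M must not be a type lambda
        _, m, _a = t
        return m[0] != "tlam" and is_normal(m)
    if tag == "lam":
        return is_normal(t[3])
    if tag == "tlam":
        return is_normal(t[2])
    raise ValueError("not a term")
```

For instance, the polymorphic identity $\Lamabs{\alpha}.\lamabs{x}{\alpha}.x$ is normal, while its application to any type is a redex.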
The following proposition gives another useful way to characterize the $\beta$-normal forms. \begin{proposition}[Normal forms] A term of System F is a $\beta$-normal form if and only if it is of the form \begin{equation}\label{eqn-nf} \Lamabs{a_1}.\Lamabs{a_2}\ldots\Lamabs{a_n}.zQ_1Q_2\ldots Q_k, \end{equation} where:\rm \begin{itemize} \item $n\geq 0$ and $k\geq 0$; \item Each $\Lamabs{a_i}$ is either a lambda abstraction $\lamabs{x_i}{A_i}$ or a type abstraction $\Lamabs{\alpha_i}$; \item Each $Q_j$ is either a term $M_j$ or a type $B_j$; and \item Each $Q_j$, when it is a term, is recursively in normal form. \end{itemize} \end{proposition} \begin{proof} First, it is clear that every term of the form {\eqref{eqn-nf}} is in normal form: the term cannot itself be a redex, and the only place where a redex could occur is inside one of the $Q_j$, but these are assumed to be normal. For the converse, consider a term $M$ in $\beta$-normal form. We show that $M$ is of the form {\eqref{eqn-nf}} by induction on $M$. \begin{itemize} \item If $M=z$ is a variable, then it is of the form {\eqref{eqn-nf}} with $n=0$ and $k=0$. \item If $M=NP$ is normal, then $N$ is normal, so by induction hypothesis, $N$ is of the form {\eqref{eqn-nf}}. But since $NP$ is normal, $N$ cannot be a lambda abstraction, so we must have $n=0$. It follows that $NP=zQ_1Q_2\ldots Q_kP$ is itself of the form {\eqref{eqn-nf}}. \item If $M=\lamabs{x}{A}.N$ is normal, then $N$ is normal, so by induction hypothesis, $N$ is of the form {\eqref{eqn-nf}}. It follows immediately that $\lamabs{x}{A}.N$ is also of the form {\eqref{eqn-nf}}. \item The case for $M=NA$ is like the case for $M=NP$. \item The case for $M=\Lamabs{\alpha}.N$ is like the case for $M=\lamabs{x}{A}.N$.\eot \end{itemize} \end{proof} \begin{definition} In a term of the form {\eqref{eqn-nf}}, the variable $z$ is called the {\em head variable} of the term. 
\end{definition} Of course, by the Church-Rosser property together with strong normalization, it follows that every term of System F is $\beta$-equivalent to a unique $\beta$-normal form, which must then be of the form {\eqref{eqn-nf}}. On the other hand, the normal forms {\eqref{eqn-nf}} are not unique up to $\eta$-conversion; for example, $\lamabs{x}{A\to B}.x$ and $\lamabs{x}{A\to B}.\lamabs{y}{A}.xy$ are $\eta$-equivalent terms and are both of the form {\eqref{eqn-nf}}. In order to achieve uniqueness up to $\beta\eta$-conversion, we introduce the notion of a {\em long normal form}. \begin{definition} A term of System F is a {\em long normal form} if \begin{itemize} \item it is of the form {\eqref{eqn-nf}}; \item the body $zQ_1\ldots Q_k$ is of atomic type (i.e., its type is a type variable); and \item each $Q_j$, when it is a term, is recursively in long normal form. \end{itemize} \end{definition} \begin{proposition}\label{prop-unique-lnf} Every term of System F is $\beta\eta$-equivalent to a unique long normal form. \end{proposition} \begin{proof} By strong normalization and the Church-Rosser property of $\beta$-reduction, we already know that every term is $\beta$-equivalent to a unique $\beta$-normal form. It therefore suffices to show that every $\beta$-normal form is $\eta$-equivalent to a unique long normal form. We first show that every $\beta$-normal form is $\eta$-equivalent to some long normal form. We prove this by induction. Indeed, consider a $\beta$-normal form of the form {\eqref{eqn-nf}}. By induction hypothesis, each of $Q_1,\ldots,Q_k$ can be $\eta$-converted to long normal form. Now we proceed by induction on the type $A$ of $zQ_1\ldots Q_k$. If $A=\alpha$ is atomic, then the normal form is already long, and there is nothing to show. If $A=B\to C$, then we can $\eta$-expand {\eqref{eqn-nf}} to \[ \Lamabs{a_1}.\Lamabs{a_2}\ldots\Lamabs{a_n}.\lamabs{w}{B}.zQ_1Q_2\ldots Q_kw \] and proceed by the inner induction hypothesis. 
If $A=\forall\alpha.B$, then we can $\eta$-expand {\eqref{eqn-nf}} to \[ \Lamabs{a_1}.\Lamabs{a_2}\ldots\Lamabs{a_n}.\Lamabs{\alpha}.zQ_1Q_2\ldots Q_k\alpha \] and proceed by the inner induction hypothesis. For uniqueness, we must show that no two different long normal forms can be $\beta\eta$-equivalent to each other. We leave this as an exercise. \eot \end{proof} \subsection{The structure of closed normal forms} It is a remarkable fact that if $M$ is in long normal form, then a lot of the structure of $M$ is completely determined by its type. Specifically: if the type of $M$ is atomic, then $M$ must start with a head variable. If the type of $M$ is of the form $B\to C$, then $M$ must be, up to $\alpha$-equivalence, of the form $\lamabs{x}{B}.N$, where $N$ is a long normal form of type $C$. And if the type of $M$ is of the form $\forall\alpha.C$, then $M$ must be, up to $\alpha$-equivalence, of the form $\Lamabs{\alpha}.N$, where $N$ is a long normal form of type $C$. So for example, consider the type \[ A=B_1\to B_2\to\forall \alpha_3. B_4\to\forall\alpha_5.\beta. \] We say that this type has five {\em prefixes}, where each prefix is of the form ``$B_i\to$'' or ``$\forall\alpha_i.$''. Therefore, every long normal form of type $A$ must also start with five prefixes; specifically, it must start with \[ \lamabs{x_1}{B_1}.\lamabs{x_2}{B_2}.\Lamabs{\alpha_3}.\lamabs{x_4}{B_4}.\Lamabs{\alpha_5}.\ldots \] The next part of the long normal form is a choice of head variable. If the term is closed, the head variable must be one of $x_1$, $x_2$, or $x_4$. Once the head variable has been chosen, then {\em its} type determines how many arguments $Q_1,\ldots,Q_k$ the head variable must be applied to, and the types of these arguments. The structure of each of $Q_1,\ldots,Q_k$ is then recursively determined by its type, with its own choice of head variable, which then recursively determines its subterms, and so on.
In other words, the degree of freedom in a long normal form is a choice of head variable at each level. This choice of head variables completely determines the long normal form. Perhaps the preceding discussion can be made more comprehensible by means of some concrete examples. The examples take the form of the following propositions and their proofs. \begin{proposition}\label{prop-unique-bool} Every closed term of type $\bool$ is $\beta\eta$-equivalent to either $\truet$ or $\falset$. \end{proposition} \begin{proof} Let $M$ be a closed term of type $\bool$. By Proposition~\ref{prop-unique-lnf}, we may assume that $M$ is a long normal form. Since $\bool = \forall\alpha.\alpha\to\alpha\to\alpha$, every long normal form of this type must start, up to $\alpha$-equivalence, with \[ \Lamabs{\alpha}.\lamabs{x}{\alpha}.\lamabs{y}{\alpha}.\ldots \] This must be followed by a head variable, which, since $M$ is closed, can only be $x$ or $y$. Since both $x$ and $y$ have atomic type, neither of them can be applied to further arguments, and therefore, the only two possible long normal forms are: \[ \begin{array}{l} \Lamabs{\alpha}.\lamabs{x}{\alpha}.\lamabs{y}{\alpha}.x \\ \Lamabs{\alpha}.\lamabs{x}{\alpha}.\lamabs{y}{\alpha}.y, \end{array} \] which are $\truet$ and $\falset$, respectively.\eot \end{proof} \begin{proposition}\label{prop-unique-nat} Every closed term of type $\nat$ is $\beta\eta$-equivalent to a Church numeral $\chnum{n}$, for some $n\in\N$. \end{proposition} \begin{proof} Let $M$ be a closed term of type $\nat$. By Proposition~\ref{prop-unique-lnf}, we may assume that $M$ is a long normal form. Since $\nat=\forall\alpha.(\alpha\to\alpha)\to\alpha\to\alpha$, every long normal form of this type must start, up to $\alpha$-equivalence, with \[ \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.\ldots \] This must be followed by a head variable, which, since $M$ is closed, can only be $x$ or $f$. 
If the head variable is $x$, then it takes no argument, and we have \[ M = \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.x \] If the head variable is $f$, then it takes exactly one argument, so $M$ is of the form \[ M = \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.fQ_1. \] Because $Q_1$ has type $\alpha$, its own long normal form has no prefix; therefore $Q_1$ must start with a head variable, which must again be $x$ or $f$. If $Q_1=x$, we have \[ M = \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.fx. \] If $Q_1$ has head variable $f$, then we have $Q_1=fQ_2$, and proceeding in this manner, we find that $M$ has to be of the form \[ M = \Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.f(f(\ldots(fx)\ldots)), \] i.e., a Church numeral.\eot \end{proof} \begin{exercise}\label{ex-unique-pair} Prove that every closed term of type $A\times B$ is $\beta\eta$-equivalent to a term of the form $\pair{M,N}$, where $M:A$ and $N:B$. \end{exercise} \subsection{Application: representation of arbitrary data in System F} Let us consider the definition of a long normal form one more time. By definition, every long normal form is of the form \begin{equation}\label{eqn-nf2} \Lamabs{a_1}.\Lamabs{a_2}\ldots\Lamabs{a_n}.zQ_1Q_2\ldots Q_k, \end{equation} where $zQ_1Q_2\ldots Q_k$ has atomic type and $Q_1,\ldots,Q_k$ are, recursively, long normal forms. Instead of writing the long normal form on a single line as in {\eqref{eqn-nf2}}, let us write it in tree form instead: \[ \xymatrix@C-3ex{ &\makebox[0in][r]{$\Lamabs{a_1}.\Lamabs{a_2}\ldots\Lamabs{a_n}.$}z \ar@{-}[dl] \ar@{-}[d] \ar@{-}[dr] \ar@{-}[drr] \\ Q_1 & Q_2 & \cdots & Q_k, } \] where the long normal forms $Q_1,\ldots,Q_k$ are recursively also written as trees. 
For example, with this notation, the Church numeral $\chnum{2}$ becomes \begin{equation}\label{eqn-chnum} \m{\xymatrix@R-1.5em@C-2ex{ \makebox[0in][r]{$\Lamabs{\alpha}.\lamabs{f}{\alpha\to\alpha}.\lamabs{x}{\alpha}.$}f \ar@{-}[d] \\ f\ar@{-}[d] \\ x, }} \end{equation} and the pair $\pair{M,N}$ becomes \[ \xymatrix@R-1.5em@C-5ex{ &\makebox[0in][r]{$\Lamabs{\alpha}.\lamabs{f}{A\to B\to\alpha}.$}f \ar@{-}[dl]\ar@{-}[dr] \\ M&&N. } \] We can use this very idea to encode (almost) arbitrary data structures. For example, suppose that the data structure we wish to encode is a binary tree whose leaves are labelled by natural numbers. Let's call such a thing a {\em leaf-labelled binary tree}. Here is an example: \begin{equation}\label{eqn-tree} \m{\xymatrix@R-1.5em@C-5ex{ &&\bullet\ar@{-}[dl]\ar@{-}[dr] \\ &5 &&\bullet\ar@{-}[dl]\ar@{-}[dr] \\ && 8 && 7. }} \end{equation} In general, every leaf-labelled binary tree is either a {\em leaf}, which is labelled by a natural number, or else a {\em branch} that has exactly two {\em children} (a left one and a right one), each of which is a leaf-labelled binary tree. Written as a BNF, we have the following grammar for leaf-labelled binary trees: \[ \mbox{Tree:}\ssep T,S \bnf \leaft(n) \bor \brancht(T,S). \] When translating this as a System F type, we think along the lines of long normal forms. We need a type variable $\alpha$ to represent leaf-labelled binary trees. We need two head variables whose type ends in $\alpha$: The first head variable, let's call it $\ell$, represents a leaf, and takes a single argument that is a natural number. Thus $\ell:\nat\to\alpha$. The second head variable, let's call it $b$, represents a branch, and takes two arguments that are leaf-labelled binary trees. Thus $b:\alpha\to\alpha\to\alpha$. We end up with the following System F type: \[ \treet = \forall\alpha.(\nat\to\alpha)\to(\alpha\to\alpha\to\alpha)\to\alpha. 
\] A typical long normal form of this type is: \[ \xymatrix@R-1.5em@C-3ex{ &&\makebox[0in][r]{$\Lamabs{\alpha}.\lamabs{\ell}{\,\nat\to\alpha}.\lamabs{b}{\alpha\to\alpha\to\alpha}.$}b\ar@{-}[dl]\ar@{-}[dr] \\ &\ell\ar@{-}[d] &&b\ar@{-}[dl]\ar@{-}[dr] \\ &\chnum{5}& \ell\ar@{-}[d] && \ell\ar@{-}[d] \\ && \chnum{8} && \chnum{7}\makebox[0in][l]{,} } \] where $\chnum{5}$, $\chnum{7}$, and $\chnum{8}$ denote Church numerals as in {\eqref{eqn-chnum}}, here not expanded into long normal form for brevity. Notice how closely this long normal form follows {\eqref{eqn-tree}}. Here is the same term written on a single line: \[ \Lamabs{\alpha}.\lamabs{\ell}{\,\nat\to\alpha}.\lamabs{b}{\alpha\to\alpha\to\alpha}.b(\ell\chnum{5})(b(\ell\chnum{8})(\ell\chnum{7})) \] \begin{exercise} Prove that the closed long normal forms of type $\treet$ are in one-to-one correspondence with leaf-labelled binary trees. \end{exercise} \section{Type inference} In Section~\ref{sec-simply-typed-lc}, we introduced the simply-typed lambda calculus, and we discussed what it means for a term to be well-typed. We have also asked the question, for a given term, whether it is typable or not. In this section, we will discuss an algorithm that decides, given a term, whether it is typable or not, and if the answer is yes, it also outputs a type for the term. Such an algorithm is known as a {\em type inference algorithm}. A weaker kind of algorithm is a {\em type checking algorithm}. A type checking algorithm takes as its input a term with full type annotations, as well as the types of any free variables, and it decides whether the term is well-typed or not. Thus, a type checking algorithm does not infer any types; the type must be given to it as an input and the algorithm merely checks whether the type is legal. Many compilers of programming languages include a type checker, and programs that are not well-typed are typically refused. 
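As an illustration of what a type checking algorithm does, here is a minimal Python sketch for the function fragment of the simply-typed lambda calculus (the representation of types and terms is our own; product and unit types are omitted for brevity):

```python
# Types, in our own encoding: "i" (a basic type) or ("->", A, B).
# Fully annotated terms: ("var", x) | ("app", M, N) | ("lam", x, A, M).

def typecheck(ctx, t):
    """Return the type of term t in context ctx (a dict mapping
    variables to types), or raise TypeError if t is not typable."""
    tag = t[0]
    if tag == "var":
        if t[1] not in ctx:
            raise TypeError("unbound variable " + t[1])
        return ctx[t[1]]
    if tag == "app":                    # rule (app): M : A -> B, N : A
        _, m, n = t
        a = typecheck(ctx, m)
        if a[0] != "->" or typecheck(ctx, n) != a[1]:
            raise TypeError("ill-typed application")
        return a[2]
    if tag == "lam":                    # rule (abs): extend the context
        _, x, a, m = t
        b = typecheck({**ctx, x: a}, m)
        return ("->", a, b)
    raise ValueError("not a term")
```

Since every subterm carries its annotations, the checker either computes the unique type of the term or reports a failure; no types are ever guessed.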
The compilers of some programming languages, such as ML or Haskell, go one step further and include a type inference algorithm. This allows programmers to write programs with no or very few type annotations, and the compiler will figure out the types automatically. This makes the programmer's life much easier, especially in the case of higher-order languages, where types such as $((A\to B)\to C)\to D$ are not uncommon and would be very cumbersome to write down. However, in the event that type inference {\em fails}, it is not always easy for the compiler to issue a meaningful error message that can help the human programmer fix the problem. Often, at least a basic understanding of how the type inference algorithm works is necessary for programmers to understand these error messages. \subsection{Principal types} A simply-typed lambda term can have more than one possible type. Suppose that we have three basic types $\iota_1,\iota_2,\iota_3$ in our type system. Then the following are all valid typing judgments for the term $\lam x.\lam y.yx$: \[ \begin{array}{l} \tj \lamabs{x}{\iota_1}.\lamabs{y}{\iota_1\to\iota_1}.yx : \iota_1\to(\iota_1\to\iota_1)\to\iota_1, \\ \tj \lamabs{x}{\iota_2\to\iota_3}.\lamabs{y}{(\iota_2\to\iota_3)\to\iota_3}.yx : (\iota_2\to\iota_3)\to((\iota_2\to\iota_3)\to\iota_3)\to\iota_3, \\ \tj \lamabs{x}{\iota_1}.\lamabs{y}{\iota_1\to\iota_3}.yx : \iota_1\to(\iota_1\to\iota_3)\to\iota_3, \\ \tj \lamabs{x}{\iota_1}.\lamabs{y}{\iota_1\to\iota_3\to\iota_2}.yx : \iota_1\to(\iota_1\to\iota_3\to\iota_2)\to\iota_3\to\iota_2, \\ \tj \lamabs{x}{\iota_1}.\lamabs{y}{\iota_1\to\iota_1\to\iota_1}.yx : \iota_1\to(\iota_1\to\iota_1\to\iota_1)\to\iota_1\to\iota_1. \end{array} \] What all these typing judgments have in common is that they are of the form \[ \begin{array}{l} \tj \lamabs{x}{A}.\lamabs{y}{A\to B}.yx : A\to(A\to B)\to B, \end{array} \] for certain types $A$ and $B$. 
In fact, as we will see, {\em every} possible type of the term $\lam x.\lam y.yx$ is of this form. We also say that $A\to(A\to B)\to B$ is the {\em most general type} or the {\em principal type} of this term, where $A$ and $B$ are placeholders for arbitrary types. The existence of a most general type is not a peculiarity of the term $\lam xy.yx$, but it is true of the simply-typed lambda calculus in general: every typable term has a most general type. This statement is known as the {\em principal type property}. We will see that our type inference algorithm not only calculates a possible type for a term, but in fact it calculates the most general type, if any type exists at all. In fact, we will prove the principal type property by closely examining the type inference algorithm. \subsection{Type templates and type substitutions} In order to formalize the notion of a most general type, we need to be able to speak of types with placeholders. \begin{definition} Suppose we are given an infinite set of {\em type variables}, which we denote by upper case letters $X,Y,Z$ etc. A {\em type template} is a simple type, built from type variables and possibly basic types. Formally, type templates are given by the BNF \[ \mbox{Type templates:}\ssep A,B \bnf X \bor \iota \bor A\to B\bor A\times B\bor 1 \] \end{definition} Note that we use the same letters $A,B$ to denote type templates that we previously used to denote types. In fact, from now on, we will simply regard types as special type templates that happen to contain no type variables. The point of type variables is that they are placeholders (just like any other kind of variables). This means that we can replace type variables by arbitrary types, or even by type templates. A type substitution is just such a replacement. \begin{definition} A {\em type substitution} $\sigma$ is a function from type variables to type templates.
We often write $\tsubst{X_1\mapsto A_1,\ldots,X_n\mapsto A_n}$ for the substitution defined by $\sigma(X_i)=A_i$ for $i=1\ldots n$, and $\sigma(Y)=Y$ if $Y\not\in\s{X_1,\ldots,X_n}$. If $\sigma$ is a type substitution, and $A$ is a type template, then we define $\sigmabar A$, the {\em application} of $\sigma$ to $A$, as follows by recursion on $A$: \[ \begin{array}{rcl} \sigmabar X &=& \sigma X, \\ \sigmabar \iota &=& \iota, \\ \sigmabar (A\to B) &=& \sigmabar A \to \sigmabar B, \\ \sigmabar (A\times B) &=& \sigmabar A \times \sigmabar B, \\ \sigmabar 1 &=& 1. \end{array} \] \end{definition} In words, $\sigmabar A$ is simply the same as $A$, except that all the type variables have been replaced according to $\sigma$. We are now in a position to formalize what it means for one type template to be more general than another. \begin{definition} Suppose $A$ and $B$ are type templates. We say that $A$ is {\em more general} than $B$ if there exists a type substitution $\sigma$ such that $\sigmabar A=B$. \end{definition} In other words, we consider $A$ to be more general than $B$ if $B$ can be obtained from $A$ by a substitution. We also say that $B$ is an {\em instance} of $A$. Examples: \begin{itemize} \item $X\to Y$ is more general than $X\to X$. \item $X\to X$ is more general than $\iota\to\iota$. \item $X\to X$ is more general than $(\iota\to\iota)\to(\iota\to\iota)$. \item Neither of $\iota\to\iota$ and $(\iota\to\iota)\to(\iota\to\iota)$ is more general than the other. We say that these types are {\em incomparable}. \item $X\to Y$ is more general than $W\to Z$, and vice versa. We say that $X\to Y$ and $W\to Z$ are {\em equally general}. \end{itemize} We can also speak of one substitution being more general than another: \begin{definition} If $\tau$ and $\rho$ are type substitutions, we say that $\tau$ is more general than $\rho$ if there exists a type substitution $\sigma$ such that $\sigmabar\cp\tau=\rho$. 
\end{definition} \subsection{Unifiers} We will be concerned with solving equations between type templates. The basic question is not very different from solving equations in arithmetic: given an equation between expressions, for instance $x+y=x^2$, is it possible to find values for $x$ and $y$ that make the equation true? The answer is yes in this case, for instance $x=2,y=2$ is one solution, and $x=1,y=0$ is another possible solution. We can even give the most general solution, which is $x=\mbox{arbitrary}, y=x^2-x$. Similarly, for type templates, we might ask whether an equation such as \[ X\to(X\to Y)=(Y\to Z)\to W \] has any solutions. The answer is yes, and one solution, for instance, is $X=\iota\to\iota$, $Y=\iota$, $Z=\iota$, $W=(\iota\to\iota)\to\iota$. But this is not the most general solution; the most general solution, in this case, is $Y=\mbox{arbitrary}$, $Z=\mbox{arbitrary}$, $X=Y\to Z$, $W=(Y\to Z)\to Y$. We use substitutions to represent the solutions to such equations. For instance, the most general solution to the sample equation from the last paragraph is represented by the substitution \[ \sigma=\tsubst{X\mapsto Y\to Z, W\mapsto (Y\to Z)\to Y}. \] If a substitution $\sigma$ solves the equation $A=B$ in this way, then we also say that $\sigma$ is a {\em unifier} of $A$ and $B$. To give another example, consider the equation \[ X\times(X\to Z) = (Z\to Y)\times Y. \] This equation does not have any solution, because we would have to have both $X=Z\to Y$ and $Y=X\to Z$, which implies $X=Z\to(X\to Z)$, which is impossible to solve in simple types. We also say that $X\times(X\to Z)$ and $(Z\to Y)\times Y$ cannot be unified. In general, we will be concerned with solving not just single equations, but systems of several equations. 
The formal definition of unifiers and most general unifiers is as follows: \begin{definition} Given two sequences of type templates $\bar{A}=A_1,\ldots,A_n$ and $\bar{B}=B_1,\ldots,B_n$, we say that a type substitution $\sigma$ is a {\em unifier} of $\bar{A}$ and $\bar{B}$ if $\sigmabar A_i=\sigmabar B_i$, for all $i=1\ldots n$. Moreover, we say that $\sigma$ is a {\em most general unifier} of $\bar{A}$ and $\bar{B}$ if it is a unifier, and if it is more general than any other unifier of $\bar{A}$ and $\bar{B}$. \end{definition} \subsection{The unification algorithm} Unification is the process of determining a most general unifier. More specifically, unification is an algorithm whose inputs are two sequences of type templates $\bar{A}=A_1,\ldots,A_n$ and $\bar{B}=B_1,\ldots,B_n$, and whose output is either ``failure'', if no unifier exists, or else a most general unifier $\sigma$. We call this algorithm $\mgu$ for ``most general unifier'', and we write $\mgu(\bar{A};\bar{B})$ for the result of applying the algorithm to $\bar{A}$ and $\bar{B}$. Before we state the algorithm, let us note that we only use finitely many type variables, namely, the ones that occur in $\bar{A}$ and $\bar{B}$. In particular, the substitutions generated by this algorithm are finite objects that can be represented and manipulated by a computer. The algorithm for calculating $\mgu(\bar{A};\bar{B})$ is as follows. By convention, the algorithm chooses the first applicable clause in the following list. Note that the algorithm is recursive. \begin{enumerate} \item $\mgu(X;X)=\id$, the identity substitution. \item\label{item-mgu-2} $\mgu(X;B)=\tsubst{X\mapsto B}$, if $X$ does not occur in $B$. \item $\mgu(X;B)$ fails, if $X$ occurs in $B$ and $B\neq X$. \item $\mgu(A;Y)=\tsubst{Y\mapsto A}$, if $Y$ does not occur in $A$. \item $\mgu(A;Y)$ fails, if $Y$ occurs in $A$ and $A\neq Y$. \item $\mgu(\iota;\iota)=\id$. \item $\mgu(A_1\to A_2; B_1\to B_2) = \mgu(A_1,A_2; B_1,B_2)$.
\item $\mgu(A_1\times A_2; B_1\times B_2) = \mgu(A_1,A_2; B_1,B_2)$. \item $\mgu(1; 1) = \id$. \item\label{item-mgu-10} $\mgu(A; B)$ fails, in all other cases. \item\label{item-mgu-11} $\mgu(A,\bar{A};B,\bar{B}) = \taubar \cp \rho$, where $\rho=\mgu(\bar{A};\bar{B})$ and $\tau=\mgu(\rhobar A;\rhobar B)$. \end{enumerate} Note that clauses 1--\ref{item-mgu-10} calculate the most general unifier of two type templates, whereas clause {\ref{item-mgu-11}} deals with lists of type templates. Clause {\ref{item-mgu-10}} is a catch-all clause that fails if none of the earlier clauses apply. In particular, this clause causes the following to fail: $\mgu(A_1\to A_2; B_1\times B_2)$, $\mgu(A_1\to A_2; \iota)$, etc. \begin{proposition}\label{pro-unification} If $\mgu(\bar{A};\bar{B})=\sigma$, then $\sigma$ is a most general unifier of $\bar{A}$ and $\bar{B}$. If $\mgu(\bar{A};\bar{B})$ fails, then $\bar{A}$ and $\bar{B}$ have no unifier. \end{proposition} \begin{proof} First, it is easy to prove by induction on the definition of $\mgu$ that if $\mgu(\bar{A};\bar{B})=\sigma$, then $\sigma$ is a unifier of $\bar{A}$ and $\bar{B}$. This is evident in all cases except perhaps clause \ref{item-mgu-11}: but here, by induction hypothesis, $\rhobar\bar{A}=\rhobar\bar{B}$ and $\taubar(\rhobar A)=\taubar(\rhobar B)$, hence also $\taubar(\rhobar(A,\bar{A}))=\taubar(\rhobar(B,\bar{B}))$. Here we have used the evident notation of applying a substitution to a list of type templates. Second, we prove that if $\bar{A}$ and $\bar{B}$ can be unified, then $\mgu(\bar{A};\bar{B})$ returns a most general unifier. This is again proved by induction. For example, in clause \ref{item-mgu-2}, we have $\sigma=\tsubst{X\mapsto B}$. Suppose $\tau$ is another unifier of $X$ and $B$. Then $\taubar X=\taubar B$. We claim that $\taubar\cp\sigma=\tau$. But $\taubar(\sigma(X)) = \taubar(B) = \taubar(X) = \tau(X)$, whereas if $Y\neq X$, then $\taubar(\sigma(Y))=\taubar(Y)=\tau(Y)$. 
Hence $\taubar\cp\sigma=\tau$, and it follows that $\sigma$ is more general than $\tau$. The clauses 1--\ref{item-mgu-10} all follow by similar arguments. For clause {\ref{item-mgu-11}}, suppose that $A,\bar{A}$ and $B,\bar{B}$ have some unifier $\sigma'$. Then $\sigma'$ is also a unifier for $\bar{A}$ and $\bar{B}$, and thus the recursive call returns a most general unifier $\rho$ of $\bar{A}$ and $\bar{B}$. Since $\rho$ is more general than $\sigma'$, we have $\kappabar\cp\rho=\sigma'$ for some substitution $\kappa$. But then $\kappabar(\rhobar A)=\sigmabar' A=\sigmabar' B=\kappabar(\rhobar B)$, hence $\kappabar$ is a unifier for $\rhobar A$ and $\rhobar B$. By induction hypothesis, $\tau=\mgu(\rhobar A;\rhobar B)$ exists and is a most general unifier for $\rhobar A$ and $\rhobar B$. It follows that $\tau$ is more general than $\kappabar$, thus $\kappabar'\cp\tau = \kappabar$, for some substitution $\kappa'$. Finally, we need to show that $\sigma=\taubar \cp \rho$ is more general than $\sigma'$. But this follows because $\kappabar'\cp\sigma = \kappabar'\cp\taubar\cp\rho = \kappabar\cp\rho=\sigma'$. \eot \end{proof} \begin{remark} Proving that the algorithm $\mgu$ terminates is tricky. In particular, termination can't be proved by induction on the size of the arguments, because in the second recursive call in clause \ref{item-mgu-11}, the application of $\rhobar$ may well increase the size of the arguments. To prove termination, note that each substitution $\sigma$ generated by the algorithm is either the identity, or else it eliminates at least one variable. We can use this to prove termination by nested induction on the number of variables and on the size of the arguments. We leave the details for another time. \end{remark} \subsection{The type inference algorithm} Given the unification algorithm, type inference is now relatively easy.
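It may help to see the unification algorithm of the previous subsection in executable form first. The following Python sketch uses a hypothetical encoding of templates as nested tuples and of substitutions as dictionaries (identifying a substitution $\sigma$ with its application $\sigmabar$); failure is signalled by returning {\tt None}. The encoding and names are ours, not part of the formal development:

```python
# Templates: ('var', x), ('iota',), ('unit',), ('fun', A, B), ('prod', A, B).

def apply_subst(sigma, a):
    """The map sigma-bar applied to a template."""
    if a[0] == 'var':
        return sigma.get(a[1], a)
    if a[0] in ('iota', 'unit'):
        return a
    return (a[0], apply_subst(sigma, a[1]), apply_subst(sigma, a[2]))

def occurs(x, a):
    """Occurs check: does the variable x occur in the template a?"""
    if a[0] == 'var':
        return a[1] == x
    return len(a) == 3 and (occurs(x, a[1]) or occurs(x, a[2]))

def compose(tau, rho):
    """The composite tau-bar . rho, again as a dict."""
    out = {x: apply_subst(tau, t) for x, t in rho.items()}
    for x, t in tau.items():
        out.setdefault(x, t)
    return out

def mgu(a, b):
    """Clauses 1-10: most general unifier of two templates, or None."""
    if a == b:
        return {}                        # covers clauses 1, 6, and 9
    if a[0] == 'var':                    # clauses 2 and 3
        return None if occurs(a[1], b) else {a[1]: b}
    if b[0] == 'var':                    # clauses 4 and 5
        return None if occurs(b[1], a) else {b[1]: a}
    if a[0] == b[0] and len(a) == 3:     # clauses 7 and 8
        return mgu_list([a[1], a[2]], [b[1], b[2]])
    return None                          # clause 10: catch-all failure

def mgu_list(as_, bs):
    """Clause 11: unify two equal-length lists of templates."""
    if not as_:
        return {}
    rho = mgu_list(as_[1:], bs[1:])
    if rho is None:
        return None
    tau = mgu(apply_subst(rho, as_[0]), apply_subst(rho, bs[0]))
    return None if tau is None else compose(tau, rho)

# The worked example from the text: X -> (X -> Y)  =  (Y -> Z) -> W.
V = lambda x: ('var', x)
F = lambda a, b: ('fun', a, b)
lhs = F(V('X'), F(V('X'), V('Y')))
rhs = F(F(V('Y'), V('Z')), V('W'))
s = mgu(lhs, rhs)        # yields X |-> Y -> Z, W |-> (Y -> Z) -> Y
```

On the unsolvable example $X\times(X\to Z) = (Z\to Y)\times Y$ from above, the occurs check makes the sketch return {\tt None}, mirroring the failure of the algorithm.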
We formulate another algorithm, $\typeinfer$, which takes a typing judgment $\Gamma\tj M:B$ as its input (using templates instead of types, and not necessarily a {\em valid} typing judgment). The algorithm either outputs a most general substitution $\sigma$ such that $\sigmabar\Gamma\tj M:\sigmabar B$ is a valid typing judgment, or if no such $\sigma$ exists, the algorithm fails. In other words, the algorithm calculates the most general substitution that makes the given typing judgment valid. It is defined as follows: \begin{enumerate} \item $\typeinfer(\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n}\tj x_i:B) = \mgu(A_i;B)$. \item $\typeinfer(\Gamma\tj MN:B)= \taubar\cp\sigma$, where $\sigma=\typeinfer(\Gamma\tj M:X\to B)$, $\tau=\typeinfer(\sigmabar\Gamma\tj N:\sigmabar X)$, for a fresh type variable $X$. \item $\typeinfer(\Gamma\tj\lamabs{x}{A}.M:B)=\taubar\cp\sigma$, where $\sigma=\mgu(B;A\to X)$ and $\tau=\typeinfer(\sigmabar\Gamma,\typ{x}{\sigmabar A}\tj M:\sigmabar X)$, for a fresh type variable $X$. \item $\typeinfer(\Gamma\tj\pair{M,N}:A) = \rhobar\cp\taubar\cp\sigma$, where $\sigma=\mgu(A;X\times Y)$, $\tau=\typeinfer(\sigmabar\Gamma\tj M:\sigmabar X)$, and $\rho=\typeinfer(\taubar\sigmabar\Gamma\tj N:\taubar\sigmabar Y)$, for fresh type variables $X$ and $Y$. \item $\typeinfer(\Gamma\tj\proj1 M:A) = \typeinfer(\Gamma\tj M:A\times Y)$, for a fresh type variable $Y$. \item $\typeinfer(\Gamma\tj\proj2 M:B) = \typeinfer(\Gamma\tj M:X\times B)$, for a fresh type variable $X$. \item $\typeinfer(\Gamma\tj\unit:A) = \mgu(A;1)$. \end{enumerate} Strictly speaking, the algorithm is non-deterministic, because some of the clauses involve choosing one or more fresh type variables, and the choice is arbitrary. However, the choice is not essential, since we may regard all fresh type variables as equivalent. Here, a type variable is called ``fresh'' if it has never been used.
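As with unification, the algorithm can be transcribed into Python for the fragment with variables, applications, and abstractions (the clauses for pairs, projections, and $\unit$ are analogous). The tuple encodings of types and terms are our own illustrative choices, and a compact version of the unification algorithm is repeated so that the sketch is self-contained:

```python
# Templates: ('var', x), ('iota',), ('fun', A, B).
# Terms:     ('var', x), ('app', M, N), ('abs', x, A, M)  for lambda x:A. M.
# Substitutions are dicts; None signals failure.

def apply_subst(s, a):
    if a[0] == 'var':
        return s.get(a[1], a)
    if a[0] == 'fun':
        return ('fun', apply_subst(s, a[1]), apply_subst(s, a[2]))
    return a

def occurs(x, a):
    if a[0] == 'var':
        return a[1] == x
    return a[0] == 'fun' and (occurs(x, a[1]) or occurs(x, a[2]))

def compose(t, r):
    """tau-bar . rho: apply rho first, then tau."""
    out = {x: apply_subst(t, a) for x, a in r.items()}
    for x, a in t.items():
        out.setdefault(x, a)
    return out

def mgu(a, b):
    """Compact most general unifier of two templates, or None."""
    if a == b:
        return {}
    if a[0] == 'var':
        return None if occurs(a[1], b) else {a[1]: b}
    if b[0] == 'var':
        return None if occurs(b[1], a) else {b[1]: a}
    if a[0] == b[0] == 'fun':
        r = mgu(a[2], b[2])
        if r is None:
            return None
        t = mgu(apply_subst(r, a[1]), apply_subst(r, b[1]))
        return None if t is None else compose(t, r)
    return None

_n = [0]
def fresh():
    """A fresh type variable, assumed never used before."""
    _n[0] += 1
    return ('var', '_T%d' % _n[0])

def typeinfer(gamma, m, b):
    """Most general sigma making sigma(gamma) |- m : sigma(b) valid, or None.
    gamma maps term variables to templates; m is assumed well-scoped."""
    if m[0] == 'var':                                  # clause 1
        return mgu(gamma[m[1]], b)
    if m[0] == 'app':                                  # clause 2: m = M N
        x = fresh()
        s = typeinfer(gamma, m[1], ('fun', x, b))
        if s is None:
            return None
        g = {v: apply_subst(s, a) for v, a in gamma.items()}
        t = typeinfer(g, m[2], apply_subst(s, x))
        return None if t is None else compose(t, s)
    _, v, a, body = m                                  # clause 3: lambda v:a. body
    x = fresh()
    s = mgu(b, ('fun', a, x))
    if s is None:
        return None
    g = {w: apply_subst(s, c) for w, c in gamma.items()}
    g[v] = apply_subst(s, a)
    t = typeinfer(g, body, apply_subst(s, x))
    return None if t is None else compose(t, s)

# lambda x:X. x  gets the most general type T -> T for a fresh T:
r = typeinfer({}, ('abs', 'x', ('var', 'X'), ('var', 'x')), ('var', 'B'))
# Self-application lambda x:X. x x is untypable (the occurs check fires):
bad = typeinfer({}, ('abs', 'x', ('var', 'X'),
                     ('app', ('var', 'x'), ('var', 'x'))), ('var', 'B'))
```

In the first example, the resulting substitution maps the goal variable to a template of the form $T\to T$ for a fresh variable $T$, which is indeed the most general type of the identity.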
Note that the algorithm $\typeinfer$ can fail; this happens if and only if the call to $\mgu$ fails in steps 1, 3, 4, or 7. Also note that the algorithm obviously always terminates; this follows by induction on $M$, since each recursive call only uses a smaller term $M$. \begin{proposition} If there exists a substitution $\sigma$ such that $\sigmabar\Gamma\tj M:\sigmabar B$ is a valid typing judgment, then $\typeinfer (\Gamma\tj M: B)$ will return a most general such substitution. Otherwise, the algorithm will fail. \end{proposition} \begin{proof} The proof is similar to that of Proposition~\ref{pro-unification}. \eot \end{proof} Finally, the question ``is $M$ typable'' can be answered by choosing distinct type variables $X_1,\ldots,X_n,Y$ and applying the algorithm $\typeinfer$ to the typing judgment $\typ{x_1}{X_1},\ldots,\typ{x_n}{X_n}\tj M:Y$. Note that if the algorithm succeeds and returns a substitution $\sigma$, then $\sigma Y$ is the most general type of $M$, and the free variables have types $\typ{x_1}{\sigma X_1},\ldots,\typ{x_n}{\sigma X_n}$. \section{Denotational semantics}\label{sec-set-semantics} We introduced the lambda calculus as the ``theory of functions''. But so far, we have only spoken of functions in abstract terms. Do lambda terms correspond to any {\em actual} functions, such as functions in set theory? And what about the notions of $\beta$- and $\eta$-equivalence? We intuitively accepted these concepts as expressing truths about the equality of functions. But do these properties really hold of real functions? Are there other properties that functions have that are not captured by $\beta\eta$-equivalence? The word ``semantics'' comes from the Greek word for ``meaning''. {\em Denotational semantics} means to give meaning to a language by interpreting its terms as mathematical objects. This is done by describing a function that maps syntactic objects (e.g., types, terms) to semantic objects (e.g., sets, elements).
This function is called an {\em interpretation} or {\em meaning function}, and we usually denote it by $\semm{-}$. Thus, if $M$ is a term, we will usually write $\semm{M}$ for the meaning of $M$ under a given interpretation. Any good denotational semantics should be {\em compositional}, which means, the interpretation of a term should be given in terms of the interpretations of its subterms. Thus, for example, $\semm{MN}$ should be a function of $\semm{M}$ and $\semm{N}$. Suppose that we have an axiomatic notion of equality $\simeq$ on terms (for instance, $\beta\eta$-equivalence in the case of the lambda calculus). With respect to a particular class of interpretations, {\em soundness} is the property \[ M\simeq N \sep\imp\sep \semm{M}=\semm{N} \mbox{ for all interpretations in the class}. \] {\em Completeness} is the property \[ \semm{M}=\semm{N} \mbox{ for all interpretations in the class} \sep\imp\sep M\simeq N. \] Depending on our viewpoint, we will either say the axioms are sound (with respect to a given interpretation), or the interpretation is sound (with respect to a given set of axioms). Similarly for completeness. Soundness expresses the fact that our axioms (e.g., $\beta$ or $\eta$) are true with respect to the given interpretation. Completeness expresses the fact that our axioms are sufficient. \subsection{Set-theoretic interpretation} The simply-typed lambda calculus can be given a straightforward set-theoretic interpretation as follows. We map types to sets and typing judgments to functions. For each basic type $\iota$, assume that we have chosen a non-empty set $S_{\iota}$. We can then associate a set $\semm{A}$ to each type $A$ recursively: \[ \begin{array}{lll} \semm{\iota} &=& S_{\iota} \\ \semm{A\to B} &=& \semm{B}^{\semm{A}} \\ \semm{A\times B} &=& \semm{A}\times\semm{B} \\ \semm{1} &=& \s{*} \end{array} \] Here, for two sets $X,Y$, we write $Y^X$ for the set of all functions from $X$ to $Y$, i.e., $Y^X=\s{f\such f:X\to Y}$. 
Of course, $X\times Y$ denotes the usual cartesian product of sets, and $\s{*}$ is some singleton set. We can now interpret lambda terms, or more precisely, typing judgments, as certain functions. Intuitively, we already know which function a typing judgment corresponds to. For instance, the typing judgment $\typ{x}{A},\typ{f}{A\to B}\tj fx:B$ corresponds to the function that takes an element $x\in\semm{A}$ and an element $f\in\semm{B}^{\semm{A}}$, and that returns $f(x)\in\semm{B}$. In general, the interpretation of a typing judgment \[ \typ{x_1}{A_1},\ldots,\typ{x_n}{A_n} \tj M:B \] will be a function \[ \semm{A_1}\times\ldots\times\semm{A_n} \to \semm{B}. \] Which particular function it is depends of course on the term $M$. For convenience, if $\Gamma=\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n}$ is a context, let us write $\semm{\Gamma}=\semm{A_1}\times\ldots\times\semm{A_n}$. We now define $\semm{\Gamma\tj M:B}$ by recursion on $M$. \begin{itemize} \item If $M$ is a variable, we define \[ \semm{\typ{x_1}{A_1},\ldots,\typ{x_n}{A_n}\tj x_i:A_i} = \proj{i} : \semm{A_1}\times\ldots\times\semm{A_n}\to\semm{A_i}, \] where $\proj{i}(a_1,\ldots,a_n) = a_i$. \item If $M=NP$ is an application, we recursively calculate \[ \begin{array}{lll} f &=& \semm{\Gamma\tj N:A\to B} : \semm{\Gamma}\to\semm{B}^{\semm{A}}, \\ g &=& \semm{\Gamma\tj P:A} : \semm{\Gamma}\to\semm{A}. \end{array} \] We then define \[ \semm{\Gamma\tj NP:B} = h : \semm{\Gamma}\to\semm{B} \] by $h(\abar)=f(\abar)(g(\abar))$, for all $\abar\in\semm{\Gamma}$. \item If $M=\lamabs{x}{A}.N$ is an abstraction, we recursively calculate \[ \begin{array}{lll} f &=& \semm{\Gamma,\typ{x}{A}\tj N:B} : \semm{\Gamma}\times\semm{A}\to\semm{B}. \end{array} \] We then define \[ \semm{\Gamma\tj\lamabs{x}{A}.N:A\to B} = h : \semm{\Gamma}\to\semm{B}^{\semm{A}} \] by $h(\abar)(a) = f(\abar,a)$, for all $\abar\in\semm{\Gamma}$ and $a\in\semm{A}$. 
\item If $M=\pair{N,P}$ is a pair, we recursively calculate \[ \begin{array}{lll} f &=& \semm{\Gamma\tj N:A} : \semm{\Gamma}\to\semm{A}, \\ g &=& \semm{\Gamma\tj P:B} : \semm{\Gamma}\to\semm{B}. \end{array} \] We then define \[ \semm{\Gamma\tj \pair{N,P}:A\times B} = h : \semm{\Gamma}\to\semm{A}\times\semm{B} \] by $h(\abar)=\rpair{f(\abar),g(\abar)}$, for all $\abar\in\semm{\Gamma}$. \item If $M=\proj{i}N$ is a projection (for $i=1,2$), we recursively calculate \[ \begin{array}{lll} f &=& \semm{\Gamma\tj N:B_1\times B_2} : \semm{\Gamma}\to\semm{B_1}\times\semm{B_2}. \end{array} \] We then define \[ \semm{\Gamma\tj \proj{i}N:B_i} = h : \semm{\Gamma}\to\semm{B_i} \] by $h(\abar)=\proj{i}(f(\abar))$, for all $\abar\in\semm{\Gamma}$. Here $\proj{i}$ in the meta-language denotes the set-theoretic function $\proj{i}:\semm{B_1}\times\semm{B_2}\to\semm{B_i}$ given by $\proj{i}\rpair{b_1,b_2} = b_i$. \item If $M=\unit$, we define \[ \semm{\Gamma\tj\unit:1} = h : \semm{\Gamma}\to\s{*} \] by $h(\abar)=*$, for all $\abar\in\semm{\Gamma}$. \end{itemize} To minimize notational inconvenience, we will occasionally abuse the notation and write $\semm{M}$ instead of $\semm{\Gamma\tj M:B}$, thus pretending that terms are typing judgments. However, this is only an abbreviation, and it will be understood that the interpretation really depends on the typing judgment, and not just the term, even if we use the abbreviated notation. We also refer to an interpretation as a {\em model}. \subsection{Soundness} \begin{lemma}[Context change]\label{lem-context-set} The interpretation behaves as expected under reordering of contexts and under the addition of dummy variables to contexts.
More precisely, if $\sigma:\s{1,\ldots,n}\to\s{1,\ldots,m}$ is an injective map, and if the free variables of $M$ are among $x_{\sigma 1},\ldots,x_{\sigma n}$, then the interpretations of the two typing judgments, \[ \begin{array}{l} f = \semm{\typ{x_1}{A_1},\ldots,\typ{x_m}{A_m}\tj M:B} : \semm{A_1}\times\ldots\times\semm{A_m} \to \semm{B}, \\ g = \semm{\typ{x_{\sigma 1}}{A_{\sigma 1}},\ldots, \typ{x_{\sigma n}}{A_{\sigma n}} \tj M:B} : \semm{A_{\sigma 1}}\times\ldots\times\semm{A_{\sigma n}} \to \semm{B} \end{array} \] are related as follows: \[ f(a_1,\ldots,a_m) = g(a_{\sigma 1},\ldots,a_{\sigma n}), \] for all $a_1\in\semm{A_1},\ldots,a_m\in\semm{A_m}$. \end{lemma} \begin{proof} Easy, but tedious, induction on $M$.\eot \end{proof} The significance of this lemma is that, to a certain extent, the context does not matter. Thus, if the free variables of $M$ and $N$ are contained in $\Gamma$ as well as $\Gamma'$, then we have \[ \semm{\Gamma\tj M:B}=\semm{\Gamma\tj N:B} \sep\mbox{iff}\sep \semm{\Gamma'\tj M:B}=\semm{\Gamma'\tj N:B}. \] Thus, whether $M$ and $N$ have equal denotations only depends on $M$ and $N$, and not on $\Gamma$. \begin{lemma}[Substitution Lemma] If \[ \begin{array}{l} \semm{\Gamma,\typ{x}{A}\tj M:B} = f : \semm{\Gamma}\times\semm{A}\to\semm{B}\sep\mbox{and}\\ \semm{\Gamma\tj N:A} = g : \semm{\Gamma}\to\semm{A}, \end{array} \] then \[ \semm{\Gamma\tj \subst{M}{N}{x}:B} = h : \semm{\Gamma}\to\semm{B}, \] where $h(\abar) = f(\abar,g(\abar))$, for all $\abar\in\semm{\Gamma}$. \end{lemma} \begin{proof} Very easy, but very tedious, induction on $M$.\eot \end{proof} \begin{proposition}[Soundness] The set-theoretic interpretation is sound for $\beta\eta$-reasoning. In other words, \[ M\eqbe N \sep\imp\sep \semm{\Gamma\tj M:B}=\semm{\Gamma\tj N:B}. \] \end{proposition} \begin{proof} Let us write $M\sim N$ if $\semm{\Gamma\tj M:B}=\semm{\Gamma\tj N:B}$.
By the remark after Lemma~\ref{lem-context-set}, this notion is independent of $\Gamma$, and thus a well-defined relation on terms (as opposed to typing judgments). To prove soundness, we must show that $M\eqbe N$ implies $M\sim N$, for all $M$ and $N$. It suffices to show that $\sim$ satisfies all the axioms of $\beta\eta$-equivalence. The axioms $\trule{refl}$, $\trule{symm}$, and $\trule{trans}$ hold trivially. Similarly, all the $\trule{cong}$ and $\nrule{\xi}$ rules hold, due to the fact that the meaning of composite terms was defined solely in terms of the meaning of their subterms. It remains to prove that each of the various $\nrule{\beta}$ and $\nrule{\eta}$ laws is satisfied (see page~\pageref{page-typed-reductions}). We prove the rule $\nrule{\beta_{\to}}$ as an example; the remaining rules are left as an exercise. Assume $\Gamma$ is a context such that $\Gamma,\typ{x}{A}\tj M:B$ and $\Gamma\tj N:A$. Let \[ \begin{array}{l} f = \semm{\Gamma,\typ{x}{A}\tj M:B} : \semm{\Gamma}\times\semm{A}\to\semm{B},\\ g = \semm{\Gamma\tj N:A} : \semm{\Gamma}\to\semm{A}, \\ h = \semm{\Gamma\tj (\lamabs{x}{A}.M):A\to B} : \semm{\Gamma}\to\semm{B}^{\semm{A}}, \\ k = \semm{\Gamma\tj (\lamabs{x}{A}.M)N:B} : \semm{\Gamma}\to\semm{B},\\ l = \semm{\Gamma\tj \subst{M}{N}{x} : B} : \semm{\Gamma}\to\semm{B}. \end{array} \] We must show $k=l$. By definition, we have $k(\abar)=h(\abar)(g(\abar))= f(\abar,g(\abar))$. On the other hand, $l(\abar)=f(\abar,g(\abar))$ by the substitution lemma. \eot \end{proof} Note that the proof of soundness amounts to a simple calculation; while there are many details to attend to, no particularly interesting new idea is required. This is typical of soundness proofs in general. Completeness, on the other hand, is usually much more difficult to prove and often requires clever ideas. \subsection{Completeness} We cite two completeness theorems for the set-theoretic interpretation. The first one is for the class of all models with finite base types.
The second one is for the single model with one countably infinite base type. \begin{theorem}[Completeness, Plotkin, 1973] The class of set-theoretic models with finite base types is complete for the lambda-$\beta\eta$ calculus. \end{theorem} Recall that completeness for a class of models means that if $\semm{M}=\semm{N}$ holds in {\em all} models of the given class, then $M\eqbe N$. This is not the same as completeness for each individual model in the class. Note that, for each {\em fixed} choice of finite sets as the interpretations of the base types, there are some lambda terms such that $\semm{M}=\semm{N}$ but $M\not\eqbe N$. For instance, consider terms of type $(\iota\to\iota)\to\iota\to\iota$. There are infinitely many $\beta\eta$-distinct terms of this type, namely, the Church numerals. On the other hand, if $S_{\iota}$ is a finite set, then $\semm{(\iota\to\iota)\to\iota\to\iota}$ is also a finite set. Since a finite set cannot have infinitely many distinct elements, there must necessarily be two distinct Church numerals $M,N$ such that $\semm{M}=\semm{N}$. Plotkin's completeness theorem, on the other hand, shows that whenever $M$ and $N$ are lambda terms such that $M\not\eqbe N$, then there exists {\em some} set-theoretic model with finite base types in which $M$ and $N$ are different. The second completeness theorem is for a {\em single} model, namely the one where $S_{\iota}$ is a countably infinite set. \begin{theorem}[Completeness, Friedman, 1975] The set-theoretic model with base type equal to $\N$, the set of natural numbers, is complete for the lambda-$\beta\eta$ calculus. \end{theorem} We omit the proofs. \section{The language PCF} PCF stands for ``programming with computable functions''. The language PCF is an extension of the simply-typed lambda calculus with booleans, natural numbers, and recursion. It was first introduced by Dana Scott as a simple programming language on which to try out techniques for reasoning about programs.
Although PCF is not intended as a ``real world'' programming language, many real programming languages can be regarded as (syntactic variants of) extensions of PCF, and many of the reasoning techniques developed for PCF also apply to more complicated languages. PCF is a ``programming language'', not just a ``calculus''. By this we mean, PCF is equipped with a specific evaluation order, or rules that determine precisely how terms are to be evaluated. We follow the slogan: \begin{center} Programming language = syntax + evaluation rules. \end{center} After introducing the syntax of PCF, we will look at three different equivalence relations on terms. \begin{itemize} \item {\em Axiomatic equivalence} $\eqax$ will be given by axioms in the spirit of $\beta\eta$-equivalence. \item {\em Operational equivalence} $\eqop$ will be defined in terms of the operational behavior of terms. Two terms are operationally equivalent if one can be substituted for the other in any context without changing the behavior of a program. \item {\em Denotational equivalence} $\eqden$ is defined via a denotational semantics. \end{itemize} We will develop methods for reasoning about these equivalences, and thus for reasoning about programs. We will also investigate how the three equivalences are related to each other. \subsection{Syntax and typing rules} PCF types are simple types over two base types $\boolt$ and $\natt$. \[ A,B \bnf \boolt \bor \natt \bor A\to B\bor A\times B\bor 1 \] The raw terms of PCF are those of the simply-typed lambda calculus, together with some additional constructs that deal with booleans, natural numbers, and recursion. 
\[ \begin{array}{@{}lll} M,N,P &\bnf& x \bor MN \bor \lamabs{x}{A}.M \bor \pair{M,N} \bor \proj1 M \bor \proj2 M \bor \unit \\ &&\bor \truet \bor \falset \bor \zerot \bor \succt(M) \bor \predt(M) \\ &&\bor \iszerot(M) \bor \ite{M}{N}{P} \bor \Y(M) \end{array} \] The intended meaning of these terms is the same as that of the corresponding terms we used to program in the untyped lambda calculus: $\truet$ and $\falset$ are the boolean constants, $\zerot$ is the constant zero, $\succt$ and $\predt$ are the successor and predecessor functions, $\iszerot$ tests whether a given number is equal to zero, $\ite{M}{N}{P}$ is a conditional, and $\Y(M)$ is a fixed point of $M$. The typing rules for PCF are the same as the typing rules for the simply-typed lambda calculus, shown in Table~\ref{tab-simple-typing-rules}, plus the additional typing rules shown in Table~\ref{tab-pcf-typing-rules}. \begin{table*}[tbp] \[ \begin{array}{rc} \trule{true} & \deriv{}{\Gamma\tj\truet: \boolt} \\[1.8ex] \trule{false} & \deriv{}{\Gamma\tj\falset: \boolt} \\[1.8ex] \trule{zero} & \deriv{}{\Gamma\tj\zerot: \natt} \\[1.8ex] \trule{succ} & \deriv{\Gamma\tj M: \natt}{\Gamma\tj\succt(M): \natt} \end{array} \sep \begin{array}{rc} \trule{pred} & \deriv{\Gamma\tj M: \natt}{\Gamma\tj\predt(M): \natt} \\[1.8ex] \trule{iszero} &\deriv{\Gamma\tj M: \natt}{\Gamma\tj\iszerot(M): \boolt} \\[1.8ex] \trule{fix} & \deriv{\Gamma\tj M:A\to A}{\Gamma\tj \Y(M):A} \end{array} \] \[ \begin{array}{rc} \trule{if} & \deriv{\Gamma\tj M: \boolt\sep\Gamma\tj N:A\sep\Gamma\tj P:A} {\Gamma\tj \ite{M}{N}{P}:A} \end{array} \] \caption{Typing rules for PCF} \label{tab-pcf-typing-rules} \end{table*} \subsection{Axiomatic equivalence} The axiomatic equivalence of PCF is based on the $\beta\eta$-equivalence of the simply-typed lambda calculus. 
The relation $\eqax$ is the least relation given by the following: \begin{itemize} \item All the $\beta$- and $\eta$-axioms of the simply-typed lambda calculus, as shown on page~\pageref{page-typed-reductions}. \item One congruence or $\xi$-rule for each term constructor. This means, for instance \[ \deriv {M\eqax M' \sep N\eqax N' \sep P\eqax P'} {\ite{M}{N}{P} \eqax \ite{M'}{N'}{P'}}, \] and similarly for all the other term constructors. \item The additional axioms shown in Table~\ref{tab-pcf-axioms}. Here, $\numn$ stands for a {\em numeral}, i.e., a term of the form $\succt(\ldots(\succt(\zerot))\ldots)$. \end{itemize} \begin{table*}[tbp] \[ \begin{array}{rcl} \predt(\zerot) &=& \zerot \\ \predt(\succt(\numn)) &=& \numn \\ \iszerot(\zerot) &=& \truet \\ \iszerot(\succt(\numn)) &=& \falset \\ \ite{\truet}{N}{P} &=& N \\ \ite{\falset}{N}{P} &=& P \\ \Y(M) &=& M(\Y(M)) \end{array} \] \caption{Axiomatic equivalence for PCF} \label{tab-pcf-axioms} \end{table*} \subsection{Operational semantics} The operational semantics of PCF is commonly given in two different styles: the {\em small-step} or {\em shallow} style, and the {\em big-step} or {\em deep} style. We give the small-step semantics first, because it is closer to the notion of $\beta$-reduction that we considered for the simply-typed lambda calculus. There are some important differences between an operational semantics, as we are going to give it here, and the notion of $\beta$-reduction in the simply-typed lambda calculus. Most importantly, the operational semantics is going to be {\em deterministic}, which means, each term can be reduced in at most one way. Thus, there will never be a choice between more than one redex. Or in other words, it will always be uniquely specified which redex to reduce next. As a consequence of the previous paragraph, we will abandon many of the congruence rules, as well as the {\nrule{\xi}}-rule.
We adopt the following informal conventions: \begin{itemize} \item never reduce the body of a lambda abstraction, \item never reduce the argument of a function (except for primitive functions such as $\succt$ and $\predt$), \item never reduce the ``then'' or ``else'' part of an if-then-else statement, \item never reduce a term inside a pair. \end{itemize} Of course, the terms that these rules prevent from being reduced can nevertheless become subject to reduction later: the body of a lambda abstraction and the argument of a function can be reduced after a $\beta$-reduction causes the $\lam$ to disappear and the argument to be substituted in the body. The ``then'' or ``else'' parts of an if-then-else term can be reduced after the ``if'' part evaluates to true or false. And the terms inside a pair can be reduced after the pair has been broken up by a projection. An important technical notion is that of a {\em value}, which is a term that represents the result of a computation and cannot be reduced further. Values are given as follows: \[ \mbox{Values:}\ssep V,W \bnf \truet \bor \falset \bor \zerot \bor \succt(V) \bor \unit \bor \pair{M,N} \bor \lamabs{x}{A}.M \] The transition rules for the small-step operational semantics of PCF are shown in Table~\ref{tab-pcf-small}. 
\begin{table*}[t] \[ \begin{array}{c} \deriv{M\to N}{\predt(M)\to\predt(N)} \\[1.8ex] \deriv{}{\predt(\zerot)\to\zerot} \\[1.8ex] \deriv{}{\predt(\succt(V))\to V} \\[1.8ex] \deriv{M\to N}{\iszerot(M)\to\iszerot(N)} \\[1.8ex] \deriv{}{\iszerot(\zerot)\to\truet} \\[1.8ex] \deriv{}{\iszerot(\succt(V))\to\falset} \\[1.8ex] \deriv{M\to N}{\succt(M)\to\succt(N)} \\[1.8ex] \deriv{M\to N}{MP\to NP} \\[1.8ex] \deriv{}{(\lamabs{x}{A}.M)N\to \subst{M}{N}{x}} \end{array} \sep \begin{array}{c} \deriv{M\to M'}{\proj{i} M\to \proj{i}M'} \\[1.8ex] \deriv{}{\proj1\pair{M,N}\to M} \\[1.8ex] \deriv{}{\proj2\pair{M,N}\to N} \\[1.8ex] \deriv{M:1,\ssep M\neq\unit}{M\to\unit} \\[1.8ex] \deriv{M\to M'}{\ite{M}{N}{P}\to\ite{M'}{N}{P}} \\[1.8ex] \deriv{}{\ite{\truet}{N}{P}\to N} \\[1.8ex] \deriv{}{\ite{\falset}{N}{P}\to P} \\[1.8ex] \deriv{}{\Y(M)\to M(\Y(M))} \end{array} \] \caption{Small-step operational semantics of PCF} \label{tab-pcf-small} \end{table*} We write $M\to N$ if $M$ reduces to $N$ by these rules. We write $M\not\to$ if there does not exist $N$ such that $M\to N$. The first two important technical properties of small-step reduction are summarized in the following lemma. \begin{lemma}\label{lem-pcf-lemma1} \begin{enumerate} \item {\em Values are normal forms.} If $V$ is a value, then $V\not\to$. \item {\em Evaluation is deterministic.} If $M\to N$ and $M\to N'$, then $N\syntaxeq N'$. \end{enumerate} \end{lemma} Another important property is subject reduction: a well-typed term reduces only to another well-typed term of the same type. \begin{lemma}[Subject Reduction] If $\Gamma\tj M:A$ and $M\to N$, then $\Gamma\tj N:A$. \end{lemma} Next, we want to prove that the evaluation of a well-typed term does not get ``stuck''. If $M$ is some term such that $M\not\to$, but $M$ is not a value, then we regard this as an error, and we also write $M\to\errort$. Examples of such terms are $\proj1(\lam x.M)$ and $\pair{M,N}P$.
The following lemma shows that well-typed closed terms cannot lead to such errors. \begin{lemma}[Progress]\label{lem-pcf-progress} If $M$ is a closed, well-typed term, then either $M$ is a value, or else there exists $N$ such that $M\to N$. \end{lemma} The Progress Lemma is very important, because it implies that a well-typed term cannot ``go wrong''. It guarantees that a well-typed term will either evaluate to a value in finitely many steps, or else it will reduce infinitely and thus not terminate. But a well-typed term can never generate an error. In programming language terms, a term that type-checks at {\em compile-time} cannot generate an error at {\em run-time}. To express this idea formally, let us write $M\to^* N$ in the usual way if $M$ reduces to $N$ in zero or more steps, and let us write $M\to^*\errort$ if $M$ reduces in zero or more steps to an error. \begin{proposition}[Safety]\label{prop-pcf-safety} If $M$ is a closed, well-typed term, then $M\not\to^*\errort$. \end{proposition} \begin{exercise} Prove Lemmas~\ref{lem-pcf-lemma1}--\ref{lem-pcf-progress} and Proposition~\ref{prop-pcf-safety}. \end{exercise} \subsection{Big-step semantics} In the small-step semantics, if $M\to^* V$, we say that $M$ {\em evaluates to} $V$. Note that by determinacy, for every $M$, there exists at most one $V$ such that $M\to^* V$. It is also possible to axiomatize the relation ``$M$ evaluates to $V$'' directly. This is known as the big-step semantics. Here, we write $M\evto V$ if $M$ evaluates to $V$. The axioms for the big-step semantics are shown in Table~\ref{tab-pcf-big}. 
\begin{table*}[t] \[ \begin{array}{c} \deriv{}{\truet\evto\truet} \\[1.8ex] \deriv{}{\falset\evto\falset} \\[1.8ex] \deriv{}{\zerot\evto\zerot} \\[1.8ex] \deriv{}{\pair{M,N}\evto\pair{M,N}} \\[1.8ex] \deriv{}{\lamabs{x}{A}.M\evto\lamabs{x}{A}.M} \\[1.8ex] \deriv{M\evto\zerot}{\predt(M)\evto\zerot} \\[1.8ex] \deriv{M\evto\succt(V)}{\predt(M)\evto V} \\[1.8ex] \deriv{M\evto\zerot}{\iszerot(M)\evto\truet} \\[1.8ex] \deriv{M\evto\succt(V)}{\iszerot(M)\evto\falset} \\[1.8ex] \end{array} \sep \begin{array}{c} \deriv{M\evto V}{\succt(M)\evto\succt(V)} \\[1.8ex] \deriv{M\evto\lamabs{x}{A}.M'\sep\subst{M'}{N}{x}\evto V}{MN\evto V} \\[1.8ex] \deriv{M\evto\pair{M_1,M_2}\sep M_1\evto V}{\proj1 M\evto V} \\[1.8ex] \deriv{M\evto\pair{M_1,M_2}\sep M_2\evto V}{\proj2 M\evto V} \\[1.8ex] \deriv{M:1}{M\evto\unit} \\[1.8ex] \deriv{M\evto\truet\sep N\evto V}{\ite{M}{N}{P}\evto V} \\[1.8ex] \deriv{M\evto\falset\sep P\evto V}{\ite{M}{N}{P}\evto V} \\[1.8ex] \deriv{M(\Y(M))\evto V}{\Y(M)\evto V} \end{array} \] \caption{Big-step operational semantics of PCF} \label{tab-pcf-big} \end{table*} The big-step semantics satisfies properties similar to those of the small-step semantics. \begin{lemma} \begin{enumerate} \item {\em Values.} For all values $V$, we have $V\evto V$. \item {\em Determinacy.} If $M\evto V$ and $M\evto V'$, then $V\syntaxeq V'$. \item {\em Subject Reduction.} If $\Gamma\tj M:A$ and $M\evto V$, then $\Gamma\tj V:A$. \end{enumerate} \end{lemma} The analogues of the Progress and Safety properties cannot be as easily stated for big-step reduction, because we cannot easily talk about a single reduction step or about infinite reduction sequences. However, some comfort can be taken in the fact that the big-step semantics and small-step semantics coincide: \begin{proposition}\label{prop-big-small} $M\to^* V$ iff $M\evto V$. 
\end{proposition} \subsection{Operational equivalence} Informally, two terms $M$ and $N$ will be called operationally equivalent if $M$ and $N$ are interchangeable as part of any larger program, without changing the observable behavior of the program. This notion of equivalence is also often called observational equivalence, to emphasize the fact that it concentrates on observable properties of terms. What is an observable behavior of a program? Normally, what we observe about a program is its output, such as the characters it prints to a terminal. Since any such characters can be converted in principle to natural numbers, we take the point of view that the observable behavior of a program is a natural number that it evaluates to. Similarly, if a program computes a boolean, we regard the boolean value as observable. However, we do not regard abstract values, such as functions, as being directly observable, on the grounds that a function cannot be observed until we supply it some arguments and observe the result. \begin{definition} An {\em observable type} is either $\boolt$ or $\natt$. A {\em result} is a closed value of observable type. Thus, a result is either $\truet$, $\falset$, or $\numn$. A {\em program} is a closed term of observable type. A {\em context} is a term with a hole, written $C[-]$. Formally, the class of contexts is defined by a BNF: \[ \begin{array}{@{}lll} C[-] &\bnf& [-] \bor x \bor C[-]N \bor MC[-] \bor \lamabs{x}{A}.C[-] \bor \ldots \end{array} \] and so on, extending through all the cases in the definition of a PCF term. \end{definition} Well-typed contexts are defined in the same way as well-typed terms, where it is understood that the hole also has a type. The free variables of a context are defined in the same way as for terms. Moreover, we define the {\em captured variables} of a context to be those bound variables whose scope includes the hole. 
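Plugging a term into a context, with capture, is easy to sketch concretely. In the following Python fragment the term representation (tagged tuples with a distinguished hole marker) is hypothetical, chosen only to illustrate that the hole is replaced literally, without $\alpha$-renaming.

```python
# Contexts as terms containing a distinguished hole marker; plugging
# replaces the hole literally, with no alpha-renaming, so free
# variables of the plugged term may be captured.

HOLE = ('hole',)

def plug(context, term):
    """Replace every hole in context by term, allowing capture."""
    if context == HOLE:
        return term
    return tuple(plug(c, term) if isinstance(c, tuple) else c
                 for c in context)

# C[-] = (lambda x. [-]) (lambda y. z); plugging in the variable x
# yields (lambda x. x)(lambda y. z), in which x has been captured.
C = ('app', ('lam', 'x', HOLE), ('lam', 'y', ('var', 'z')))
```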
So for instance, in the context $(\lam x.[-])(\lam y.z)$, the variable $x$ is captured, the variable $z$ is free, and $y$ is neither free nor captured. If $C[-]$ is a context and $M$ is a term of the appropriate type, we write $C[M]$ for the result of replacing the hole in the context $C[-]$ by $M$. Here, we do not $\alpha$-rename any bound variables, so that we allow free variables of $M$ to be captured by $C[-]$. We are now ready to state the definition of operational equivalence. \begin{definition} Two terms $M,N$ are {\em operationally equivalent}, in symbols $M\eqop N$, if for all closed and closing contexts $C[-]$ of observable type and all values $V$, \[ C[M]\evto V \iff C[N]\evto V. \] \end{definition} Here, by a {\em closing} context we mean that $C[-]$ should capture all the free variables of $M$ and $N$. This is equivalent to requiring that $C[M]$ and $C[N]$ are closed terms of observable type, i.e., programs. Thus, two terms are equivalent if they can be used interchangeably in any program. \subsection{Operational approximation} As a refinement of operational equivalence, we can also define a notion of operational approximation: We say that $M$ {\em operationally approximates} $N$, in symbols $M\sqleqop N$, if for all closed and closing contexts $C[-]$ of observable type and all values $V$, \[ C[M]\evto V \imp C[N]\evto V. \] Note that this definition includes the case where $C[M]$ diverges but $C[N]$ converges, for some contexts $C[-]$. This formalizes the notion that $N$ is ``more defined'' than $M$. Clearly, we have $M\eqop N$ iff $M\sqleqop N$ and $N\sqleqop M$. Thus, we get a partial order $\sqleqop$ on the set of all terms of a given type, modulo operational equivalence. Also, this partial order has a least element: if we let $\Omega=\Y(\lam x.x)$, then $\Omega\sqleqop N$ for any term $N$ of the appropriate type. Note that, in general, $\sqleqop$ is not a complete partial order, due to missing limits of $\omega$-chains.
\subsection{Discussion of operational equivalence} Operational equivalence is a very useful concept for reasoning about programs, and particularly for reasoning about program fragments. If $M$ and $N$ are operationally equivalent, then we know that we can replace $M$ by $N$ in any program without affecting its behavior. For example, $M$ could be a slow, but simple subroutine for sorting a list. The term $N$ could be a replacement that runs much faster. If we can prove $M$ and $N$ to be operationally equivalent, then this means we can safely use the faster routine instead of the slower one. Another example is compiler optimizations. Many compilers will try to optimize the code that they produce, to eliminate useless instructions, to avoid duplicate calculations, etc. Such an optimization often means replacing a piece of code $M$ by another piece of code $N$, without necessarily knowing much about the context in which $M$ is used. Such a replacement is safe if $M$ and $N$ are operationally equivalent. On the other hand, operational equivalence is a somewhat problematic notion. The problem is that the concept is not stable under adding new language features. It can happen that two terms, $M$ and $N$, are operationally equivalent, but when a new feature is added to the language, they become nonequivalent, {\em even if $M$ and $N$ do not use the new feature}. The reason is that operational equivalence is defined in terms of contexts. Adding new features to a language also means that there will be new contexts, and these new contexts might be able to distinguish $M$ and $N$. This can be a problem in practice. Certain compiler optimizations might be sound for a sequential language, but might become unsound if new language features are added. Code that used to be correct might suddenly become incorrect if used in a richer environment. For example, many programs and library functions in C assume that they are executed in a single-threaded environment.
If this code is ported to a multi-threaded environment, it often turns out to be no longer correct, and in many cases it must be re-written from scratch. \subsection{Operational equivalence and parallel or} Let us now look at a concrete example in PCF. We say that a term $\POR$ implements the {\em parallel or} function if it has the following behavior: \[ \begin{array} {llll} \POR\truet P &\to& \truet, &\mbox{for all $P$} \\ \POR N\truet &\to& \truet, &\mbox{for all $N$} \\ \POR\falset\falset &\to& \falset. \end{array} \] Note that this in particular implies $\POR\truet\Omega=\truet$ and $\POR\Omega\truet=\truet$, where $\Omega$ is some divergent term. It should be clear why $\POR$ is called the ``parallel'' or: the only way to achieve such behavior is to evaluate both its arguments in parallel, and to stop as soon as one argument evaluates to $\truet$ or both evaluate to $\falset$. \begin{proposition} $\POR$ is not definable in PCF. \end{proposition} We do not give the proof of this fact, but the idea is relatively simple: one proves by induction that every PCF context $C[-,-]$ with two holes has the following property: either, there exists a term $N$ such that $C[M,M']=N$ for all $M,M'$ (i.e., the context does not look at $M,M'$ at all), or else, either $C[\Omega,M]$ diverges for all $M$, or $C[M,\Omega]$ diverges for all $M$. Here, again, $\Omega$ is some divergent term such as $\Y(\lam x.x)$. Although $\POR$ is not definable in PCF, we can define the following term, called the {\em POR-tester}: \[ \begin{array}{l@{}l} \nm{POR-test} = \lam x. & \nm{if} x \truet\Omega \nm{then} \\ &\sep \nm{if} x\Omega\truet\nm{then}\\ &\sep\sep \nm{if} x\falset\falset\nm{then}\Omega\\ &\sep\sep \nm{else}\truet\\ &\sep \nm{else}\Omega \\ & \nm{else}\Omega \end{array} \] The POR-tester has the property that $\nm{POR-test} M=\truet$ if $M$ implements the parallel or function, and in all other cases $\nm{POR-test} M$ diverges. 
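Denotationally, the behavior demanded of $\POR$ is a monotone function on the lifted booleans. The following Python sketch encodes $\bot$ as `None` (an illustrative encoding, not from the text) and contrasts $\POR$ with the sequential left-to-right ``or'' definable in PCF via if-then-else, which diverges whenever its first argument does.

```python
# Parallel or as a function on the lifted booleans, encoding the
# bottom element (divergence) as None. This encoding is illustrative.

def por(x, y):
    """Parallel or on {True, False, None}, None meaning divergence."""
    if x is True or y is True:
        return True            # POR true Omega = POR Omega true = true
    if x is False and y is False:
        return False           # POR false false = false
    return None                # otherwise the result is undefined

# By contrast, the sequential left-to-right "or" definable in PCF
# (if x then true else y) diverges as soon as its first argument does:
def seq_or(x, y):
    if x is None:
        return None            # must evaluate x first
    return True if x is True else y
```

Note that `por` is monotone: making an input more defined (replacing `None` by a boolean) can only make the output more defined.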
In particular, since parallel or is not definable in PCF, we have that $\nm{POR-test}M$ diverges, for all PCF terms $M$. Thus, when applied to any PCF term, $\nm{POR-test}$ behaves precisely as the function $\lam x.\Omega$ does. One can make this into a rigorous argument that shows that $\nm{POR-test}$ and $\lam x.\Omega$ are operationally equivalent: \[ \nm{POR-test} \eqop \lam x.\Omega \sep\mbox{(in PCF)}. \] Now, suppose we want to define an extension of PCF called {\em parallel PCF}. It is defined in exactly the same way as PCF, except that we add a new primitive function $\POR$, and small-step reduction rules \[ \begin{array}{c} \deriv{M\to M'\sep N\to N'}{\POR MN\to\POR M'N'} \\[1.8ex] \deriv{}{\POR\truet N\to\truet} \\[1.8ex] \deriv{}{\POR M\truet\to\truet} \\[1.8ex] \deriv{}{\POR\falset\falset\to\falset} \end{array} \] Parallel PCF enjoys many of the same properties as PCF, for instance, Lemmas~\ref{lem-pcf-lemma1}--\ref{lem-pcf-progress} and Proposition~\ref{prop-pcf-safety} continue to hold for it. But notice that \[ \nm{POR-test} \not\eqop \lam x.\Omega \sep\mbox{(in parallel PCF)}. \] This is because the context $C[-]=[-]\POR$ distinguishes the two terms: clearly, $C[\nm{POR-test}]\evto\truet$, whereas $C[\lam x.\Omega]$ diverges. \section{Complete partial orders} \subsection{Why are sets not enough, in general?} As we have seen in Section~\ref{sec-set-semantics}, the interpretation of types as plain sets is quite sufficient for the simply-typed lambda calculus. However, it is insufficient for a language such as PCF. Specifically, the problem is the fixed point operator $\Y:(A\to A)\to A$. It is clear that there are many functions $f:A\to A$ from a set $A$ to itself that do not have a fixed point; thus, there is no chance we are going to find an interpretation for a fixed point operator in the simple set-theoretic model. 
On the other hand, if $A$ and $B$ are types, there are generally many functions $f:\semm{A}\to\semm{B}$ in the set-theoretic model that are not definable by lambda terms. For instance, if $\semm{A}$ and $\semm{B}$ are infinite sets, then there are uncountably many functions $f:\semm{A}\to\semm{B}$; however, there are only countably many lambda terms, and thus there are necessarily going to be functions that are not the denotation of any lambda term. The idea is to put additional structure on the sets that interpret types, and to require functions to preserve that structure. This is going to cut down the size of the function spaces, decreasing the ``slack'' between the functions definable in the lambda calculus and the functions that exist in the model, and simultaneously increasing the chances that additional structure, such as fixed point operators, might exist in the model. Complete partial orders are one such structure that is commonly used for this purpose. The method is originally due to Dana Scott. \subsection{Complete partial orders} \begin{definition} A {\em partially ordered set} or {\em poset} is a set $X$ together with a binary relation $\sqleq$ satisfying \begin{itemize} \item {\em reflexivity:} for all $x\in X$, $x\sqleq x$, \item {\em antisymmetry:} for all $x,y\in X$, $x\sqleq y$ and $y\sqleq x$ implies $x=y$, \item {\em transitivity:} for all $x,y,z\in X$, $x\sqleq y$ and $y\sqleq z$ implies $x\sqleq z$. \end{itemize} \end{definition} The concept of a partial order differs from a total order in that we do not require that for any $x$ and $y$, either $x\sqleq y$ or $y\sqleq x$. Thus, in a partially ordered set it is permissible to have incomparable elements. We can often visualize posets, particularly finite ones, by drawing their line diagrams as in Figure~\ref{fig-posets}. In these diagrams, we put one circle for each element of $X$, and we draw an edge from $x$ upward to $y$ if $x\sqleq y$ and there is no $z$ with $x\sqleq z\sqleq y$. 
Such line diagrams are also known as {\em Hasse diagrams}. \begin{figure} \caption{Some posets} \label{fig-posets} \end{figure} The idea behind using a partial order to denote computational values is that $x\sqleq y$ means that $x$ is {\em less defined than} $y$. For instance, if a certain term diverges, then its denotation will be less defined than, or below, that of a term that has a definite value. Similarly, a function is more defined than another if it converges on more inputs. Another important idea in using posets for modeling computational values is that of {\em approximation}. We can think of an infinite computational object (such as an infinite stream) as a limit of successive finite approximations (such as longer and longer finite streams). Thus we also read $x\sqleq y$ as $x$ {\em approximates} $y$. A complete partial order is a poset in which every countable chain of increasing elements approximates something. \begin{definition} Let $X$ be a poset and let $A\seq X$ be a subset. We say that $x\in X$ is an {\em upper bound} for $A$ if $a\sqleq x$ for all $a\in A$. We say that $x$ is a {\em least upper bound} for $A$ if $x$ is an upper bound, and whenever $y$ is also an upper bound, then $x\sqleq y$. \end{definition} \begin{definition} An {\em $\omega$-chain} in a poset $X$ is a sequence of elements $x_0,x_1,x_2,\ldots$ such that \[ x_0 \sqleq x_1 \sqleq x_2 \sqleq \ldots \] \end{definition} \begin{definition} A {\em complete partial order (cpo)} is a poset such that every $\omega$-chain of elements has a least upper bound. \end{definition} If $x_0,x_1,x_2,\ldots$ is an $\omega$-chain of elements in a cpo, we write $\dirsup_{i\in\N}x_i$ for the least upper bound. We also call the least upper bound the {\em limit} of the $\omega$-chain. Not every poset is a cpo. In Figure~\ref{fig-posets}, the poset labeled $\omega$ is not a cpo, because the evident $\omega$-chain does not have a least upper bound (in fact, it has no upper bound at all).
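In a finite poset, upper bounds and least upper bounds can be checked mechanically. The Python sketch below does this for the lifted booleans $\Bb$ from Figure~\ref{fig-posets}; the encoding of the order as a set of pairs is illustrative.

```python
# Least upper bounds in a small finite poset, with the order given
# explicitly as a reflexive set of related pairs. The poset is the
# lifted booleans: bot below T, bot below F, and T, F incomparable.

X = {'bot', 'T', 'F'}
LEQ = {('bot', 'bot'), ('T', 'T'), ('F', 'F'), ('bot', 'T'), ('bot', 'F')}

def upper_bounds(subset):
    return {x for x in X if all((a, x) in LEQ for a in subset)}

def least_upper_bound(subset):
    """Return the least upper bound of subset, or None if it has none."""
    ubs = upper_bounds(subset)
    least = [x for x in ubs if all((x, y) in LEQ for y in ubs)]
    return least[0] if least else None
```

Note that $\{T,F\}$ has no upper bound at all, yet $\Bb$ is still a cpo: $\{T,F\}$ is not an $\omega$-chain, and every $\omega$-chain in a finite poset is eventually constant.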
The other posets shown in Figure~\ref{fig-posets} are cpo's. \subsection{Properties of limits} \begin{proposition}\label{prop-limit-properties} \begin{enumerate} \item {\em Monotonicity.} Suppose $\s{x_i}_i$ and $\s{y_i}_i$ are $\omega$-chains in a cpo $C$, such that $x_i\sqleq y_i$ for all $i$. Then \[ \dirsup_i x_i \sqleq\dirsup_i y_i. \] \item {\em Exchange.} Suppose $\s{x_{ij}}_{i,j\in\N}$ is a doubly monotone double sequence of elements of a cpo $C$, i.e., whenever $i\leq i'$ and $j\leq j'$, then $x_{ij}\sqleq x_{i'j'}$. Then \[ \dirsup_{i\in\N}\dirsup_{j\in\N} x_{ij} = \dirsup_{j\in\N}\dirsup_{i\in\N} x_{ij} = \dirsup_{k\in\N} x_{kk}. \] In particular, all limits shown are well-defined. \end{enumerate} \end{proposition} \begin{exercise} Prove Proposition~\ref{prop-limit-properties}. \end{exercise} \subsection{Continuous functions} If we model data types as cpo's, it is natural to model algorithms as functions from cpo's to cpo's. These functions are subject to two constraints: they have to be monotone and continuous. \begin{definition} A function $f:C\to D$ between posets $C$ and $D$ is said to be {\em monotone} if for all $x,y\in C$, \[ x\sqleq y \sep\imp\sep f(x)\sqleq f(y). \] A function $f:C\to D$ between cpo's $C$ and $D$ is said to be {\em continuous} if it is monotone and it preserves least upper bounds of $\omega$-chains, i.e., for all $\omega$-chains $\s{x_i}_{i\in\N}$ in $C$, \[ f(\dirsup_{i\in\N}x_i) = \dirsup_{i\in\N}f(x_i). \] \end{definition} The intuitive explanation for the monotonicity requirement is that information is ``positive'': more information in the input cannot lead to less information in the output of an algorithm. The intuitive explanation for the continuity requirement is that any particular output of an algorithm can only depend on a finite amount of input. \subsection{Pointed cpo's and strict functions} \begin{definition} A cpo is said to be {\em pointed} if it has a least element. 
The least element is usually denoted $\bot$ and pronounced ``bottom''. All cpo's shown in Figure~\ref{fig-posets} are pointed. A continuous function between pointed cpo's is said to be {\em strict} if it preserves the bottom element. \end{definition} \subsection{Products and function spaces} If $C$ and $D$ are cpo's, then their {\em cartesian product} $C\times D$ is also a cpo, with the pointwise order given by $\rpair{x,y}\sqleq\rpair{x',y'}$ iff $x\sqleq x'$ and $y\sqleq y'$. Least upper bounds are also given pointwise, thus \[ \dirsup_i\rpair{x_i,y_i} = \rpair{\dirsup_i x_i,\dirsup_i y_i}. \] \begin{proposition}\label{prop-cpo-products} The first and second projections, $\proj1:C\times D\to C$ and $\proj2:C\times D\to D$, are continuous functions. Moreover, if $f:E\to C$ and $g:E\to D$ are continuous functions, then so is the function $h:E\to C\times D$ given by $h(z)=\rpair{f(z),g(z)}$. \end{proposition} If $C$ and $D$ are cpo's, then the set of continuous functions $f:C\to D$ forms a cpo, denoted $D^C$. The order is given pointwise: given two functions $f,g:C\to D$, we say that \[ f\sqleq g \sep\mbox{iff}\sep \mbox{for all $x\in C$, $f(x)\sqleq g(x)$}. \] \begin{proposition} The set $D^C$ of continuous functions from $C$ to $D$, together with the order just defined, is a complete partial order. \end{proposition} \begin{proof} Clearly the set $D^C$ is partially ordered. What we must show is that least upper bounds of $\omega$-chains exist. Given an $\omega$-chain $f_0,f_1,\ldots$ in $D^C$, we define $g\in D^C$ to be the pointwise limit, i.e., \[ g(x) = \dirsup_{i\in\N}f_i(x), \] for all $x\in C$. Note that $\s{f_i(x)}_i$ does indeed form an $\omega$-chain in $D$, so that $g$ is a well-defined function. We claim that $g$ is the least upper bound of $\s{f_i}_i$. First we need to show that $g$ is indeed an element of $D^C$.
To see that $g$ is monotone, we use Proposition~\ref{prop-limit-properties}(1) and calculate, for any $x\sqleq y\in C$, \[ g(x) = \dirsup_{i\in\N}f_i(x) \sqleq \dirsup_{i\in\N}f_i(y) = g(y). \] To see that $g$ is continuous, we use Proposition~\ref{prop-limit-properties}(2) and calculate, for any $\omega$-chain $x_0,x_1,\ldots$ in $C$, \[ g(\dirsup_j x_j) = \dirsup_i \dirsup_j f_i(x_j) = \dirsup_j \dirsup_i f_i(x_j) = \dirsup_j g(x_j). \] Finally, we must show that $g$ is the least upper bound of the $\s{f_i}_i$. Clearly, $f_i\sqleq g$ for all $i$, so that $g$ is an upper bound. Now suppose $h\in D^C$ is any other upper bound of $\s{f_i}$. Then for all $x$, $f_i(x)\sqleq h(x)$. Since $g(x)$ was defined to be the least upper bound of $\s{f_i(x)}_i$, we then have $g(x)\sqleq h(x)$. Since this holds for all $x$, we have $g\sqleq h$. Thus $g$ is indeed the least upper bound.\eot \end{proof} \begin{exercise} Recall the cpo $\Bb$ from Figure~\ref{fig-posets}. The cpo $\Bb^{\Bb}$ is also shown in Figure~\ref{fig-posets}. Its 11 elements correspond to the 11 continuous functions from $\Bb$ to $\Bb$. Label the elements of $\Bb^{\Bb}$ with the functions they correspond to. \end{exercise} \begin{proposition}\label{prop-cpo-app} The application function $D^C\times C\to D$, which maps $\rpair{f,x}$ to $f(x)$, is continuous. \end{proposition} \begin{proposition}\label{prop-cpo-curry} Continuous functions can be continuously curried and uncurried. In other words, if $f:C\times D\to E$ is a continuous function, then $f^*:C\to E^D$, defined by $f^*(x)(y) = f(x,y)$, is well-defined and continuous. Conversely, if $g:C\to E^D$ is a continuous function, then $g_*:C\times D\to E$, defined by $g_*(x,y) = g(x)(y)$, is well-defined and continuous. Moreover, $(f^*)_*=f$ and $(g_*)^*=g$. 
\end{proposition} \subsection{The interpretation of the simply-typed lambda calculus in complete partial orders}\label{subsec-cpo-interp} The interpretation of the simply-typed lambda calculus in cpo's resembles the set-theoretic interpretation, except that types are interpreted by cpo's instead of sets, and typing judgments are interpreted as continuous functions. For each basic type $\iota$, assume that we have chosen a pointed cpo $S_{\iota}$. We can then associate a pointed cpo $\semm{A}$ to each type $A$ recursively: \[ \begin{array}{lll} \semm{\iota} &=& S_{\iota} \\ \semm{A\to B} &=& \semm{B}^{\semm{A}} \\ \semm{A\times B} &=& \semm{A}\times\semm{B} \\ \semm{1} &=& {\bf 1} \end{array} \] Typing judgments are now interpreted as continuous functions \[ \semm{A_1}\times\ldots\times\semm{A_n} \to \semm{B} \] in precisely the same way as they were defined for the set-theoretic interpretation. The only thing we need to check, at every step, is that the function defined is indeed continuous. For variables, this follows from the fact that projections of cartesian products are continuous (Proposition~\ref{prop-cpo-products}). For applications, we use the fact that the application function of cpo's is continuous (Proposition~\ref{prop-cpo-app}), and for lambda-abstractions, we use the fact that currying is a well-defined, continuous operation (Proposition~\ref{prop-cpo-curry}). Finally, the continuity of the maps associated with products and projections follows from Proposition~\ref{prop-cpo-products}. \begin{proposition}[Soundness and Completeness] The interpretation of the simply-typed lambda calculus in pointed cpo's is sound and complete with respect to the lambda-$\beta\eta$ calculus. 
\end{proposition} \subsection{Cpo's and fixed points} One of the reasons, mentioned in the introduction to this section, for using cpo's instead of sets for the interpretation of the simply-typed lambda calculus is that cpo's admit fixed points, and thus they can be used to interpret an extension of the lambda calculus with a fixed point operator. \begin{proposition} Let $C$ be a pointed cpo and let $f:C\to C$ be a continuous function. Then $f$ has a least fixed point. \end{proposition} \begin{proof} Define $x_0=\bot$ and $x_{i+1} = f(x_i)$, for all $i\in\N$. The resulting sequence $\s{x_i}_i$ is an $\omega$-chain, because clearly $x_0\sqleq x_1$ (since $x_0$ is the least element), and if $x_{i}\sqleq x_{i+1}$, then $f(x_{i})\sqleq f(x_{i+1})$ by monotonicity, hence $x_{i+1}\sqleq x_{i+2}$. It follows by induction that $x_{i}\sqleq x_{i+1}$ for all $i$. Let $x=\dirsup_i x_i$ be the limit of this $\omega$-chain. Then using continuity of $f$, we have \[ f(x) = f(\dirsup_i x_i) = \dirsup_i f(x_i) = \dirsup_i x_{i+1} = x. \] To prove that $x$ is the least fixed point, let $y$ be any other fixed point, i.e., let $f(y)=y$. We prove by induction that for all $i$, $x_i\sqleq y$. For $i=0$ this is trivial because $x_0=\bot$. Assume $x_i\sqleq y$; then $x_{i+1}=f(x_{i})\sqleq f(y)=y$. It follows that $y$ is an upper bound for $\s{x_i}_i$. Since $x$ is, by definition, the least upper bound, we have $x\sqleq y$. Since $y$ was arbitrary, $x$ is below any fixed point, hence $x$ is the least fixed point of $f$.\eot \end{proof} If $f:C\to C$ is any continuous function, let us write $\fix{f}$ for its least fixed point. We claim that $\fix{f}$ depends continuously on $f$, i.e., that $\dagger:C^C\to C$ defines a continuous function. \begin{proposition}\label{prop-dagger} The function $\dagger:C^C\to C$, which assigns to each continuous function $f\in C^C$ its least fixed point $\fix{f}\in C$, is continuous. \end{proposition} \begin{exercise} Prove Proposition~\ref{prop-dagger}.
\end{exercise} Thus, if we add to the simply-typed lambda calculus a family of fixed point operators $Y_A:(A\to A)\to A$, the resulting extended lambda calculus can then be interpreted in cpo's by letting \[ \semm{Y_A} = \dagger:\semm{A}^{\semm{A}} \to \semm{A}. \] \subsection{Example: Streams} Consider streams of characters from some alphabet $A$. Let $A^{\leq\omega}$ be the set of finite or infinite sequences of characters. We order $A^{\leq\omega}$ by the {\em prefix ordering}: if $s$ and $t$ are (finite or infinite) sequences, we say $s\sqleq t$ if $s$ is a prefix of $t$, i.e., if there exists a sequence $s'$ such that $t=ss'$. Note that if $s\sqleq t$ and $s$ is an infinite sequence, then necessarily $s=t$, i.e., the infinite sequences are the maximal elements with respect to this order. \begin{exercise} Prove that the set $A^{\leq\omega}$ forms a cpo under the prefix ordering. \end{exercise} \begin{exercise} Consider an automaton that reads characters from an input stream and writes characters to an output stream. For each input character read, it can write zero, one, or more output characters. Discuss how such an automaton gives rise to a continuous function from $A^{\leq\omega}$ to $A^{\leq\omega}$. In particular, explain the meaning of monotonicity and continuity in this context. Give some examples. \end{exercise} \section{Denotational semantics of PCF} The denotational semantics of PCF is defined in terms of cpo's. It extends the cpo semantics of the simply-typed lambda calculus. Again, we assign a cpo $\semm{A}$ to each PCF type $A$, and a continuous function \[ \semm{\Gamma\tj M:B} : \semm{\Gamma} \to \semm{B} \] to every PCF typing judgment. The interpretation is defined in precisely the same way as for the simply-typed lambda calculus. The interpretation for the PCF-specific terms is shown in Table~\ref{tab-pcf-denot}. Recall that $\Bb$ and $\Nn$ are the cpo's of lifted booleans and lifted natural numbers, respectively, as shown in Figure~\ref{fig-posets}.
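The least fixed point underlying the interpretation of $\Y$ is computed by the iteration $\bot, f(\bot), f(f(\bot)),\ldots$ from the fixed-point proof. On a finite pointed cpo this chain stabilizes, so the construction can be run directly; the Python sketch below uses the finite chain $0\sqleq 1\sqleq\cdots\sqleq 4$ as an illustrative example.

```python
# Kleene iteration: starting from the least element, apply f until the
# chain bottom, f(bottom), f(f(bottom)), ... stabilizes. On a finite
# pointed cpo every monotone function is continuous, so the loop
# terminates at the least fixed point.

def least_fixed_point(f, bottom):
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# Example: the chain 0 below 1 below ... below 4 with least element 0,
# and the monotone function f(x) = min(x + 1, 3), whose only fixed
# point is 3.
f = lambda x: min(x + 1, 3)
```

For the identity function, every element is a fixed point, and the iteration returns the least one, namely the bottom element.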
\begin{table} \[ \begin{array}{llll} \mbox{Types:} & \semm{\boolt} &=& \Bb \\ & \semm{\natt} &=& \Nn \\[1.8ex] \mbox{Terms:} & \semm{\truet} &=& T\in\Bb \\ & \semm{\falset} &=& F\in\Bb \\ & \semm{\zerot} &=& 0\in\Nn \\[1.8ex] & \semm{\succt(M)} &=& \left\{\begin{array}{ll} \bot & \mbox{if $\semm{M} = \bot$,} \\ n+1 & \mbox{if $\semm{M} = n$} \end{array}\right. \\\\ & \semm{\predt(M)} &=& \left\{\begin{array}{ll} \bot & \mbox{if $\semm{M} = \bot$,} \\ 0 & \mbox{if $\semm{M} = 0$,} \\ n & \mbox{if $\semm{M} = n+1$} \end{array}\right. \\\\ & \semm{\iszerot(M)} &=& \left\{\begin{array}{ll} \bot & \mbox{if $\semm{M} = \bot$,} \\ T & \mbox{if $\semm{M} = 0$,} \\ F & \mbox{if $\semm{M} = n+1$} \end{array}\right. \\\\ & \semm{\ite{M}{N}{P}} &=& \left\{\begin{array}{ll} \bot & \mbox{if $\semm{M} = \bot$,} \\ \semm{N} & \mbox{if $\semm{M} = T$,} \\ \semm{P} & \mbox{if $\semm{M} = F$} \end{array}\right. \\\\ & \semm{\Y(M)} &=& \semm{M}^{\dagger} \end{array} \] \caption{Cpo semantics of PCF}\label{tab-pcf-denot} \end{table} \begin{definition} Two PCF terms $M$ and $N$ of equal types are denotationally equivalent, in symbols $M\eqden N$, if $\semm{M}=\semm{N}$. We also write $M\sqleqden N$ if $\semm{M}\sqleq\semm{N}$. \end{definition} \subsection{Soundness and adequacy} We have now defined three notions of equivalence on terms: $\eqax$, $\eqop$, and $\eqden$. In general, one does not expect the three equivalences to coincide. For example, any two divergent terms are operationally equivalent, but there is no reason why they should be axiomatically equivalent. Also, the POR-tester and the term $\lam x.\Omega$ are operationally equivalent in PCF, but they are not denotationally equivalent (since a function representing POR clearly exists in the cpo semantics).
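Returning to the first-order clauses of Table~\ref{tab-pcf-denot}: they transcribe directly into functions on the lifted cpo's. In the Python sketch below, `None` is an illustrative encoding of $\bot$, not part of the formal semantics.

```python
# The clauses of the cpo semantics for succ, pred, iszero, and
# if-then-else, as functions on the lifted naturals and booleans.
# None encodes the bottom element; this encoding is illustrative.

def succ(d):
    return None if d is None else d + 1

def pred(d):
    if d is None:
        return None
    return 0 if d == 0 else d - 1          # pred(0) = 0 in PCF

def iszero(d):
    if d is None:
        return None
    return d == 0

def ite(b, n, p):
    # strict in the guard, but not in the branches
    if b is None:
        return None
    return n if b else p
```

Note that `ite` acts on the (already computed) denotations of the branches, so a divergent branch simply shows up as `None` and does not force the result to be undefined.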
For general terms $M$ and $N$, one has the following property: \begin{theorem}[Soundness] For PCF terms $M$ and $N$, the following implications hold: \[ M\eqax N \sep\imp\sep M\eqden N \sep\imp\sep M\eqop N. \] \end{theorem} Soundness is a very useful property, because $M\eqax N$ is in general easier to prove than $M\eqden N$, and $M\eqden N$ is in turn easier to prove than $M\eqop N$. Thus, soundness gives us a powerful proof method: to prove that two terms are operationally equivalent, it suffices to show that they are equivalent in the cpo semantics (if they are), or even that they are axiomatically equivalent. As the above examples show, the converse implications are not in general true. However, the converse implications hold if the terms $M$ and $N$ are closed and of observable type, and if $N$ is a value. This property is called computational adequacy. Recall that a program is a closed term of observable type, and a result is a closed value of observable type. \begin{theorem}[Computational Adequacy] If $M$ is a program and $V$ is a result, then \[ M\eqax V \sep\iff\sep M\eqden V \sep\iff\sep M\eqop V. \] \end{theorem} \begin{proof} First note that the small-step semantics is contained in the axiomatic semantics, i.e., if $M\to N$, then $M\eqax N$. This is easily shown by induction on derivations of $M\to N$. To prove the theorem, by soundness, it suffices to show that $M\eqop V$ implies $M\eqax V$. So assume $M\eqop V$. Since $V\evto V$ and $V$ is of observable type, it follows that $M\evto V$. Therefore $M\to^* V$ by Proposition~\ref{prop-big-small}. But this already implies $M\eqax V$, and we are done.\eot \end{proof} \subsection{Full abstraction} We have already seen that the operational and denotational semantics do not coincide for PCF, i.e., there are terms $M$ and $N$ such that $M\eqop N$ but $M\not\eqden N$. Examples of such terms are $\nm{POR-test}$ and $\lam x.\Omega$.
But of course, the particular denotational semantics that we gave to PCF is not the only possible denotational semantics. One can ask whether there is a better one. For instance, instead of cpo's, we could have used some other kind of mathematical space, such as a cpo with additional structure or properties, or some other kind of object altogether. The search for good denotational semantics is a subject of much research. The following terminology helps in defining precisely what is a ``good'' denotational semantics. \begin{definition} A denotational semantics is called {\em fully abstract} if for all terms $M$ and $N$, \[ M\eqden N \sep\iff\sep M\eqop N. \] If the denotational semantics involves a partial order (such as a cpo semantics), it is also called {\em order fully abstract} if \[ M\sqleqden N \sep\iff\sep M\sqleqop N. \] \end{definition} The search for a fully abstract denotational semantics for PCF was an open problem for a very long time. Milner proved that, in a certain sense, there can be at most one such fully abstract model. This model has a syntactic description (essentially the elements of the model are PCF terms), but for a long time, no satisfactory semantic description was known. The problem has to do with sequentiality: a fully abstract model for PCF must be able to account for the fact that certain parallel constructs, such as parallel or, are not definable in PCF. Thus, the model should consist only of ``sequential'' functions. Berry and others developed a theory of ``stable domains'', which is based on cpo's with additional properties intended to capture sequentiality. This research led to many interesting results, but the resulting models still failed to be fully abstract. Finally, in 1992, two competing teams of researchers, Abramsky, Jagadeesan and Malacaria, and Hyland and Ong, succeeded in giving a fully abstract semantics for PCF in terms of games and strategies.
Games capture the interaction between a player and an opponent, or between a program and its environment. By considering certain kinds of ``history-free'' strategies, it is possible to capture the notion of sequentiality in just the right way to match PCF. In the last decade, game semantics has been extended to give fully abstract semantics to a variety of other programming languages, including, for instance, Algol-like languages. Finally, it is interesting to note that the problem with ``parallel or'' is essentially the {\em only} obstacle to full abstraction for the cpo semantics. As soon as one adds ``parallel or'' to the language, the semantics becomes fully abstract. \begin{theorem} The cpo semantics is fully abstract for parallel PCF. \end{theorem} \section{Bibliography}\label{ssec-bibliography} Here are some textbooks and other books on the lambda calculus.\void{None of them are required reading for the course, but you may nevertheless find it interesting or helpful to browse them. I will try to put them on reserve in the library, to the extent that they are available.} {\cite{Bar84}} is a standard reference handbook on the lambda calculus. {\cite{GLT89}}--{\cite{Rev88}} are textbooks on the lambda calculus. {\cite{Win93}}--{\cite{Hen90}} are textbooks on the semantics of programming languages. Finally, {\cite{Pey87}}--{\cite{App92}} are textbooks on writing compilers for functional programming languages. They show how the lambda calculus can be useful in a more practical context. \end{document}
\begin{document} \begin{abstract} Semibiproducts of monoids are introduced here as a common generalization of biproducts (of abelian groups) and of semidirect products (of groups) for exploring a wide class of monoid extensions. More generally, abstract semi\-bi\-products exist in any concrete category over sets in which map addition is meaningful, thus reinterpreting Mac Lane's relative biproducts. In the pointed case they give rise to a special class of extensions called semibiproduct extensions. Not all monoid extensions are semibiproduct extensions but all group extensions are. A categorical equivalence is established between the category of pointed semibiproducts of monoids and the category of pointed monoid action systems, a new category of actions that emerges from the equivalence. The main difference from classical extension theory is that semibiproduct extensions are treated in the same way as split extensions, even though the section map may fail to be a homomorphism. A list of all 14 semibiproduct extensions of 2-element monoids is provided. \end{abstract} \keywords{Semibiproduct, biproduct, semidirect product of groups and monoids, pointed semibiproduct, semibiproduct extension, pointed monoid action system, Schreier extension} \maketitle \section{Introduction} Biproducts were introduced by Mac Lane in his book \emph{Homology} to study split extensions in the context of abelian categories. Semidirect products are appropriate to study group split extensions. Although these concepts have been thoroughly developed over the last decades in the context of protomodular and semi-abelian categories \cite{BB,Bourn,BournJanelidze,semiabelian}, the notion of \emph{relative biproduct} introduced by Mac Lane to study relative split extensions seems to have been forgotten (see \cite{Homology}, p.~263).
On the other hand, much work has been done in extending the tools and techniques from groups \cite{MacLane2} to monoids \cite{DB.NMF.AM.MS.13, NMF14,DB.NMF.AM.MS.16,Faul,Fleischer,Ganci,Leech,NMF et all,Wells} and even more general settings \cite{GranJanelidzeSobral}. However, as has been observed several times, it is not a straightforward task to take a well-known result in the category of groups (or any other semi-abelian category) and materialize it in the category of monoids, not to mention more general situations. We will argue that a convenient reformulation of relative biproduct (called \emph{semibiproduct}) can be used to study group and monoid extensions in a unified framework. Even though semidirect products are suitable to describe all group split extensions, they fail to capture those group extensions that do not split. The key observation behind semibiproducts, obtained by reinterpreting relative biproducts (see \cite{Homology}, diagram (5.2), p.~263 and compare with diagram $(\ref{diag: biproduct in a Mag-category})$ in Definition \ref{def: semibiproduct}), is that although an extension may fail to split as a monoid extension or as a group extension, it necessarily splits as an extension of pointed sets. The main result (Theorem \ref{thm: equivalence}) establishes an equivalence of categories between pointed semibiproducts of monoids (Definition \ref{def: pointed semibiproduct of monoids}) and pointed monoid action systems (Definition \ref{def: pseudo-action}). The 14 classes of non-isomorphic pointed semibiproducts of 2-element monoids are listed in Section \ref{sec: eg}. We start with some motivation in Section \ref{sec: motivation}, introduce $\mathbf{Mag}$-categories and semibiproducts in Section \ref{Sec: Mag-categories}, and restrict to the pointed case in Section~\ref{sec: stability} while studying some stability properties and pointing out some differences and similarities between groups, monoids and unitary magmas.
From Section \ref{Sec: Sbp} on we work towards the main result and restrict our attention to monoids. \section{Motivation}\label{sec: motivation} It is well known that a split extension of groups \[\xymatrix{X\ar[r]^k & A \ar[r]^{p} & B,} \] with a specified section $s\colon{B\to A}$, can be completed into a diagram of the form \[\xymatrix{X\ar@<-.5ex>[r]_{k} & A\ar@<-.5ex>@{..>}[l]_{q}\ar@<.5ex>[r]^{p} & B \ar@{->}@<.5ex>[l]^{s},}\] in which $q\colon{A\to X}$ is the map uniquely determined by the formula $kq(a)=a-sp(a)$, $a\in A$. Furthermore, the information needed to reconstruct the split extension as a semidirect product is encoded in the map $\varphi\colon{B\times X\to X}$, uniquely determined as $\varphi(b,x)=q(s(b)+k(x))$. Writing the element $\varphi(b,x)\in X$ as $b\cdot x$, we see that $k(b\cdot x)$ is equal to $s(b)+k(x)-s(b)$ and that the group $A$ is recovered as the semidirect product $X\rtimes_{\varphi}B$. When the section $s$, while still a zero-preserving map, fails to be a group homomorphism, the classical treatment of group extensions prescribes a different procedure (see e.g.\ \cite{Northcott}, p.~238). However, the results obtained here suggest that non-split extensions may be treated similarly to split extensions, and moreover the same approach carries over straightforwardly to the context of monoids. Indeed, when $s$ is not a homomorphism, in addition to the map $\varphi$, we get a map $\gamma\colon{B\times B\to X}$, determined as $\gamma(b,b')=q(s(b)+s(b'))$, and the group $A$ is recovered as the set $X\times B$ with group operation \[(x,b)+(x',b')=(x+\varphi(b,x')+\gamma(b,b'),b+b')\] defined for every $x,x'\in X$ and $b,b'\in B$. Note that $X$ need not be commutative.
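As a concrete sanity check of this operation, the following sketch (a hypothetical illustration, not taken from the paper) rebuilds the cyclic group $\mathbb{Z}_4$ as an extension of $X=\mathbb{Z}_2$ by $B=\mathbb{Z}_2$, using the trivial action $\varphi$ together with a non-trivial factor set $\gamma$:

```python
# Illustrative example (not from the paper): the operation
# (x,b) + (x',b') = (x + phi(b,x') + gamma(b,b'), b + b') on X x B,
# with X = B = Z/2, trivial action phi and factor set gamma(1,1) = 1.

def phi(b, x):
    return x                          # trivial action: b . x = x

def gamma(b, b1):
    return 1 if b == b1 == 1 else 0   # non-trivial factor set

def add(p, p1):
    (x, b), (x1, b1) = p, p1
    return ((x + phi(b, x1) + gamma(b, b1)) % 2, (b + b1) % 2)

# The element (0,1) generates a cyclic group of order 4, so the extension
# recovered here is Z/4 rather than the split extension Z/2 x Z/2.
powers = [(0, 0)]
for _ in range(4):
    powers.append(add(powers[-1], (0, 1)))
# powers == [(0,0), (0,1), (1,0), (1,1), (0,0)]
```

Associativity of `add` can be checked exhaustively over the four elements; it holds precisely because this $\gamma$ satisfies the usual cocycle identity.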
However, instead of simply saying that $\varphi$ is an action and that $\gamma$ is a factor system, we have to consider two maps $\varphi$ and $\gamma$ which in conjunction turn the set $X\times B$ into a group with a prescribed operation (Section \ref{Sec: Act}). This is precisely what we call a semibiproduct of groups. Observe that when $s$ is a homomorphism it reduces to the usual notion of semidirect product. Almost every step in the treatment of groups is carried over into the context of monoids. However, while in groups all extensions are obtained as semibiproducts, in monoids we have to restrict our attention to those for which there exists a section $s$ and a retraction $q$ satisfying the condition $a=kq(a)+sp(a)$ for all $a\in A$ (Section \ref{Sec: Sbp}). Consequently, in addition to the maps $\varphi$ and $\gamma$ obtained as in groups, a new map $\rho\colon{X\times B\to X}$, determined as $\rho(x,b)=q(k(x)+s(b))$ needs to be taken into consideration. Hence, the monoid $A$ is recovered as the set $\{(x,b)\in X\times B\mid\rho(x,b)=x\}$ with operation \begin{equation}\label{eq: operation} (x,b)+(x',b')=(\rho(x+\varphi(b,x')+\gamma(b,b'),b+b'),b+b') \end{equation} which is defined for every $x,x'\in X$ and $b,b'\in B$. \section{$\mathbf{Mag}$-categories and semibiproducts}\label{Sec: Mag-categories} Let $\mathbf{Mag}$ denote the category of magmas and magma homomorphisms and let $U\colon{\mathbf{Mag}\to \mathbf{Set}}$ be the forgetful functor into the category of sets and maps. By a $\mathbf{Mag}$-category we mean a category $\mathbf{C}$ together with a bifunctor $\mathrm{map}\colon{\mathbf{C}^{\text{op}}\times \mathbf{C}\to \mathbf{Mag}}$ and a natural inclusion $\varepsilon\colon{\hom_{\mathbf{C}}\to U\circ \mathrm{map}}$. If $\mathbf{C}$ is a concrete category over sets in which a meaningful map addition is available then a bifunctor $\mathrm{map}$ is obtained as follows. 
For every pair of objects $(A,B)$ in $\mathbf{C}$, $\mathrm{map}(A,B)$ is the magma of underlying maps from object $A$ to object $B$ equipped with component-wise addition. In particular $\mathrm{map}(A,B)$ contains $\hom_{\mathbf{C}}(A,B)$ as a subset since $$\varepsilon_{A,B}\colon{\hom_{\mathbf{C}}(A,B)\to U(\mathrm{map}(A,B))}$$ is required to be a natural inclusion. As expected the category $\mathbf{Mag}$ is a $\mathbf{Mag}$-category with $\mathrm{map}(A,B)$ the magma of all maps from $U(A)$ to $U(B)$. If $f$ is a magma homomorphism from $A$ to $B$ then $\varepsilon_{A,B}(f)$ is nothing but $f$ considered as a map between the underlying sets of $A$ and $B$. In the same way the categories of groups, abelian groups, monoids and commutative monoids are $\mathbf{Mag}$-categories. However, there is a significant distinction between the Ab-category of abelian groups, the linear category of commutative monoids and the Mag-categories of groups or monoids. If $A$ is an object in an Ab-category then $\hom(A,A)$ is a ring. If $A$ is an object in a linear category then $\hom(A,A)$ is a semiring. In contrast, if $A$ is a group (or a monoid) then $\hom(A,A)$ is a subset of the near-ring $\mathrm{map}(A,A)$. \begin{definition}\label{def: semibiproduct} Let $(\mathbf{C},\mathrm{map},\varepsilon)$ be a $\mathbf{Mag}$-category. A \emph{semibiproduct} is a tuple $(X,A,B,p,k,q,s)$ represented as a diagram of the shape \begin{equation} \label{diag: biproduct in a Mag-category} \xymatrix{X\ar@<-.5ex>[r]_{k} & A\ar@<-.5ex>@{..>}[l]_{q}\ar@<.5ex>[r]^{p} & B \ar@{..>}@<.5ex>[l]^{s}} \end{equation} in which $p\colon{A\to B}$ and $k\colon{X\to A}$ are morphisms in $\mathbf{C}$, whereas $q\in \mathrm{map}(A,X)$ and $s\in \mathrm{map}(B,A)$. 
Furthermore, the following conditions are satisfied: \begin{eqnarray} ps={1_B}\label{eq: biproduct1}\\ qk={1_{X}},\label{eq: biproduct2}\\ kq+sp={1_A}.\label{eq: biproduct3} \end{eqnarray} \end{definition} There is an obvious abuse of notation in the previous conditions. This is justified because we will be mostly concerned with the case in which $\mathbf{C}$ is the category of monoids and $\mathrm{map}(A,B)$ is the set of zero-preserving maps. In rigorous terms, condition $ps=1_B$ should have been written as $\mathrm{map}(1_B,p)(s)=\varepsilon_{B,B}(1_B)$ whereas condition $qk=1_X$ should have been written as $\mathrm{map}(k,1_X)(q)=\varepsilon_{X,X}(1_X)$. In the same way the condition $\mathrm{map}(1_A,k)(q)+\mathrm{map}(p,1_A)(s)=\varepsilon_{A,A}(1_A)$ should have been written in the place of $kq+sp=1_A$. For the moment we will not develop this concept further but rather concentrate our attention on the concrete cases of groups and monoids. \section{Stability properties of pointed semibiproducts}\label{sec: stability} From now on the category $\mathbf{C}$ is assumed to be either the category of groups or the category of monoids (occasionally we will refer to the category of unitary magmas) and $\mathrm{map}(A,B)$ is the magma of zero-preserving maps with component-wise addition. In each case the category $\mathbf{C}$ is pointed and the composition of maps is well defined. It will be convenient to consider \emph{pointed semibiproducts} by requiring two further conditions, namely \begin{eqnarray} pk=0_{X,B},\quad qs=0_{B,X}. \end{eqnarray} However, as it is well known, in the case of groups this distinction is irrelevant. \begin{proposition} Every semibiproduct of groups is pointed. \end{proposition} \begin{proof} We have $pk=p1_Ak=p(kq+sp)k=pkqk+pspk=pk+pk$. And $s=1_As=(kq+sp)s=kqs+sps=kqs+s$. Hence we may conclude $pk=0$ and $kqs=0$. Since $k$ is a monomorphism $qs=0$. 
\end{proof} The previous proof also shows that a semibiproduct of monoids is pointed as soon as the monoid $A$ admits right cancellation. Clearly, this is not a general fact. \begin{proposition} Let $A$ be a monoid. The tuple $(A,A,A,1_A,1_A,1_A,1_A)$ is a semibiproduct of monoids if and only if $A$ is an idempotent monoid. \end{proposition} \begin{proof} Condition $(\ref{eq: biproduct3})$ in this case becomes $a=a+a$ for all $a\in A$. \end{proof} Every pointed semibiproduct of monoids has an underlying exact sequence. \begin{proposition}\label{thm:kernel of p}\label{thm:cokernel of k} Let $(X,A,B,p,k,q,s)$ be a pointed semibiproduct of monoids. The sequence \[\xymatrix{X\ar[r]^k & A \ar[r]^{p} & B} \] is an exact sequence. \end{proposition} \begin{proof} Let $f\colon{Z\to A}$ be a morphism such that $pf=0$. Then the map $\bar{f}=qf$ is a homomorphism \begin{eqnarray*} qf(z+z')&=&q(fz+fz')=q(kqf(z)+spf(z)+kqf(z')+spf(z'))\\ &=& q(kqf(z)+0+kqf(z')+0)\\ &=& qk(qf(z)+qf(z'))=qf(z)+qf(z') \end{eqnarray*} and it is unique with the property $k\bar{f}=f$. Indeed, if $k\bar{f}=f$ then $qk\bar{f}=qf$ and hence $\bar{f}=qf$. This means that $k$ is the kernel of $p$. Let $g\colon{A\to Y}$ be a morphism and suppose that $gk=0$. It follows that $g=gsp$, \begin{equation*} g=g1_A=g(kq+sp)=gkq+gsp=0+gsp=gsp, \end{equation*} and consequently the map $\bar{g}=gs$ is a homomorphism, indeed \begin{eqnarray*} gs(b)+gs(b')=g(sb+sb')=gsp(sb+sb')=gs(b+b'). \end{eqnarray*} The fact that $\bar{g}=gs$ is the unique morphism with the property $\bar{g}p=g$ follows from $\bar{g}ps=gs$ which is the same as $\bar{g}=gs$. Hence $p$ is the cokernel of $k$ and the sequence is exact. \end{proof} The following results show that pointed semibiproducts are stable under pullback and in particular split semibiproducts of monoids are stable under composition. \begin{proposition}\label{thm:stable under pullback} Pointed semibiproducts of monoids are stable under pullback. 
\end{proposition} \begin{proof} Let $(X,A,B,p,k,q,s)$ be a pointed semibiproduct of monoids displayed as the bottom row in the following diagram which is obtained by taking the pullback of $p$ along an arbitrary morphism $h\colon{C\to B}$, with induced morphism $\langle k,0 \rangle$ and map $\langle sh,1 \rangle$, \begin{eqnarray} \vcenter{\xymatrix{X\ar@<-.5ex>[r]_(.35){\langle k,0 \rangle}\ar@{=}[d]_{} & A\times_B C\ar@{->}@<0ex>[d]^{\pi_1}\ar@<-.5ex>@{..>}[l]_(.6){q\pi_1}\ar@<.5ex>[r]^(.6){\pi_2} & C\ar@{->}[d]^{h} \ar@{->}@<.5ex>@{..>}[l]^(.35){\langle sh,1\rangle}\\ X\ar@<-.5ex>[r]_{k} & A \ar@<-.5ex>@{..>}[l]_{q}\ar@<.5ex>[r]^{p} & B \ar@{->}@<.5ex>@{..>}[l]^{s}.}} \end{eqnarray} We have to show that the top row is a pointed semibiproduct of monoids. By construction we have $\pi_2\langle sh,1\rangle=1_C$, $\pi_2\langle k,0\rangle=0$, $q\pi_1\langle sh,1\rangle=qsh=0$, $q\pi_1\langle k,0\rangle=qk=1_X$. It remains to prove the identity \[(a,c)=(kq(a),0)+(sh(c),c)=(kq(a)+sh(c),c)\] for every $a\in A$ and $c\in C$ with $p(a)=h(c)$, which follows from $a=kq(a)+sp(a)=kq(a)+sh(c)$. \end{proof} The previous results are stated at the level of monoids but are easily extended to unitary magmas. The particular case of semidirect products has been considered in \cite{GranJanelidzeSobral} and the notion of composable pair of pointed semibiproducts is borrowed from there. We say that a pointed semibiproduct $(X,A,B,p,k,q,s)$ can be composed with a pointed semibiproduct $(C,B,D,p',k',q',s')$ if the tuple $$(A\times_B C,A,D,p'p,\pi_1,q'',ss'),$$ in which $q''$ is such that $\pi_1q''=kq+sk'q'p$ and $\pi_2q''=q'p$, is a pointed semibiproduct. Note that in the case of groups the map $q$ is uniquely determined as $q(a)=a-sp(a)$ for all $a\in A$. However this is not the case for monoids nor for unitary magmas. 
\begin{proposition} A pointed semibiproduct of monoids $(X,A,B,p,k,q,s)$ can be composed with $(C,B,D,p',k',q',s')$, another pointed semibiproduct of monoids, if and only if the map $s$ is equal to the map $sk'q'+ss'p'$. \end{proposition} \begin{proof} Let us observe that the tuple $(A\times_B C,A,D,p'p,\pi_1,q'',ss')$ is a pointed semibiproduct if and only if $\pi_1q''+ss'p'p=1_A$. Indeed, the kernel of the composite $p'p$ is obtained by taking the pullback of $p$ along $k'$, the kernel of $p'$, as illustrated \begin{eqnarray} \vcenter{\xymatrix{ & A\times_B C\ar@{->}@<0ex>[d]_{\pi_1}\ar@<.5ex>[r]^(.6){\pi_2} & C\ar@{->}[d]_{k'} \ar@{->}@<.5ex>@{..>}[l]^(.35){\langle sk',1\rangle}\\ X\ar@<-.5ex>[r]_{k} & A \ar@<-.5ex>@{..>}[l]_{q}\ar@<.5ex>[r]^{p} \ar[d]_{p'p} & B \ar@{->}@<.5ex>@{..>}[l]^{s} \ar[d]_{p'}\\ & D\ar@{=}[r] & D.}} \end{eqnarray} In order to obtain a pointed semibiproduct we complete the diagram with a map $q''$ such that $\pi_1q''=kq+sk'q'p$ and $\pi_2q''=q'p$ as illustrated \begin{eqnarray} \vcenter{\xymatrix{ & A\times_B C\ar@{->}@<-0.5ex>[d]_{\pi_1}\ar@<.5ex>[r]^(.6){\pi_2} & C\ar@{->}[d]_{k'} \ar@{->}@<.5ex>@{..>}[l]^(.35){\langle sk',1\rangle}\\ X\ar@<-.5ex>[r]_{k} & A \ar@<-.5ex>@{..>}[l]_{q} \ar@<-.5ex>@{..>}[u]_{q''} \ar@<.5ex>[r]^{p} \ar@<-.5ex>[d]_{p'p} & B \ar@{->}@<.5ex>@{..>}[l]^{s} \ar[d]_{p'}\\ & D \ar@<-.5ex>@{..>}[u]_{ss'} \ar@{=}[r] & D.}} \end{eqnarray} The map $q''$ is well defined since $p(kq+sk'q'p)=pkq+psk'q'p=k'q'p$. Moreover, $p'pss'=1_D$, $p'p\pi_1=p'k'\pi_2=0$, $q''ss'=0$ and we observe \begin{eqnarray*} q''\pi_1 &=&\langle kq+sk'q'p,q'p \rangle \pi_1\\ &=&\langle kq\pi_1+sk'q'p\pi_1,q'p\pi_1 \rangle \\ &=&\langle kq\pi_1+sk'q'k'\pi_2,q'k'\pi_2 \rangle \\ &=&\langle kq\pi_1+sp\pi_1,\pi_2 \rangle \\ &=&\langle \pi_1,\pi_2 \rangle =1_{A\times_B C}. \end{eqnarray*} It remains to analyse the condition $\pi_1q''+ss'p'p=1_A$. If $s=sk'q'+ss'p'$ then we have $\pi_1q''+ss'p'p=kq+sk'q'p+ss'p'p=kq+(sk'q'+ss'p')p=kq+sp=1_A$.
Conversely, having $\pi_1q''+ss'p'p=1_A$ we get $kq+sk'q'p+ss'p'p=1_A$ and $kqs+sk'q'ps+ss'p'ps=s$ so $sk'q'+ss'p'=s$. \end{proof} Note that associativity is used to convert $(kq+sk'q'p)+ss'p'p$ into $kq+(sk'q'p+ss'p'p)$. Moreover, if the map $s$ is a homomorphism then condition $s=sk'q'+ss'p'$ is trivial. A pointed semibiproduct $(X,A,B,p,k,q,s)$ in which the map $s$ is a homomorphism is called a pointed split semibiproduct. This means that pointed split semibiproducts of monoids are stable under composition. \section{The category of pointed semibiproducts of monoids}\label{Sec: Sbp} The purpose of this section is to introduce the category of pointed semibiproducts of monoids, denoted $\mathbf{Psb}$. \begin{definition}\label{def: pointed semibiproduct of monoids} A \emph{pointed semibiproduct of monoids} consists of a tuple $(X,A,B,p,k,q,s)$ that can also be represented as a diagram of the shape \begin{equation} \label{diag: biproduct} \xymatrix{X\ar@<-.5ex>[r]_{k} & A\ar@<-.5ex>@{..>}[l]_{q}\ar@<.5ex>[r]^{p} & B \ar@{..>}@<.5ex>[l]^{s}} \end{equation} in which $X$, $A$ and $B$ are monoids (not necessarily commutative), $p$, $k$, are monoid homomorphisms, while $q$ and $s$ are zero-preserving maps. Moreover, the following conditions are satisfied: \begin{eqnarray} ps&=&1_B\\ qk&=&1_X\\ kq+sp&=&1_A\\ pk&=&0_{X,B}\\ qs&=&0_{B,X}. 
\end{eqnarray} \end{definition} A morphism in $\mathbf{Psb}$, from the object $(X,A,B,p,k,q,s)$ to the object $(X',A',B',p',k',q',s')$, is a triple $(f_1,f_2,f_3)$, displayed as \begin{equation}\label{diag:morphism of semi-biproduct} \vcenter{\xymatrix{X\ar@<-.5ex>[r]_{k}\ar@{->}[d]_{f_1} & A\ar@{->}@<0ex>[d]^{f_2}\ar@<-.5ex>@{..>}[l]_{q}\ar@<.5ex>[r]^{p} & B\ar@{->}[d]^{f_3} \ar@{->}@<.5ex>@{..>}[l]^{s}\\ X'\ar@<-.5ex>[r]_{k'} & A' \ar@<-.5ex>@{..>}[l]_{q'}\ar@<.5ex>[r]^{p'} & B' \ar@{->}@<.5ex>@{..>}[l]^{s'}}} \end{equation} in which $f_1$, $f_2$ and $f_3$ are monoid homomorphisms and moreover the following conditions are satisfied: $f_2k=k'f_1$, $p'f_2=f_3p$, $f_2s=s'f_3$, $q'f_2=f_1q$. \begin{theorem}\label{thm: a+a' and associativity} Let $(X,A,B,p,k,q,s)$ be a pointed semibiproduct of monoids. For every $a,a'\in A$ the element $a+a'\in A$ can be written in terms of $q(a)$, $q(a')$, $p(a)$ and $p(a')$ as \begin{equation}\label{eq: a+a'} k(q(a)+q(sp(a)+ kq(a'))+q(sp(a)+ sp(a')))+s(p(a)+p(a')). \end{equation} \end{theorem} \begin{proof} We observe: \begin{eqnarray*} a+a'&=& kqa+(spa+kqa')+spa' \qquad(kq+sp=1)\\ &=& kqa+kq(spa+kqa')+sp(spa+kqa')+spa' \\ &=& kqa+kq(spa+kqa')+spa+spa' \qquad( ps=1,pk=0)\\ &=& kqa+kq(spa+kqa')+kq(spa+spa')+sp(spa+spa')\\ &=& kqa+kq(spa+kqa')+kq(spa+spa')+s(pa+pa')\\ &=& k(qa+q(spa+kqa')+q(spa+spa'))+sp(a+a'). \end{eqnarray*} \end{proof} The previous result suggests a transport of structure from the monoid $A$ into the set $X\times B$ as motivated with formula $(\ref{eq: operation})$ in Section \ref{sec: motivation}. However, as we will see, in order to keep an isomorphism with $A$ we need to restrict the set $X\times B$ to those pairs $(x,b)$ for which there exists $a\in A$ such that $x=q(a)$ and $b=p(a)$. \section{The category of pointed monoid action systems}\label{Sec: Act} The purpose of this section is to introduce the category of pointed monoid action systems, which will be denoted as $\mathbf{Act}$. 
This category is obtained by requiring the existence of a categorical equivalence between $\mathbf{Act}$ and $\mathbf{Psb}$ (see Theorem \ref{thm: equivalence}). \begin{definition}\label{def: pseudo-action} A \emph{pointed monoid action system} is a five-tuple $$(X,B,\rho,\varphi,\gamma)$$ in which $X$ and $B$ are monoids, $\rho\colon{X\times B\to X}$, $\varphi\colon{B\times X\to X}$, $\gamma\colon{B\times B\to X}$ are maps such that the following conditions are satisfied for every $x\in X$ and $b,b'\in B$: \begin{eqnarray} \rho(x,0)=x,\quad \rho(0,b)=0\label{eq:act01}\\ \varphi(0,x)=x,\quad \varphi(b,0)=0\label{eq:act02}\\ \gamma(b,0)=0=\gamma(0,b)\label{eq:act03}\\ \rho(x,b)=\rho(\rho(x,b),b)\label{eq:act04}\\ \varphi(b,x)=\rho(\varphi(b,x),b)\label{eq:act05}\\ \gamma(b,b')=\rho(\gamma(b,b'),b+b')\label{eq:act06} \end{eqnarray} and moreover the following condition holds for every $x,x',x''\in X$ and $b,b',b''\in B$, \begin{eqnarray}\label{eq:act07} \rho(\rho(x+\varphi(b,x')+\gamma(b,b'),b+b')+\varphi(b+b',x'')+\gamma(b+b',b''),b''')=\nonumber\\ =\rho(x+\varphi(b,\rho(x'+\varphi(b', x'') + \gamma(b', b''),{b'+b''}))+\gamma(b, b'+b''),{b'''})\quad \end{eqnarray} where $b'''=b+b'+b''$. 
\end{definition} A morphism of pointed monoid action systems, say from $(X,B,\rho,\varphi,\gamma)$ to $(X',B',\rho',\varphi',\gamma')$ is a pair $(f,g)$ of monoid homomorphisms, with $f\colon{X\to X'}$ and $g\colon{B\to B'}$ such that for every $x\in X$ and $b,b'\in B$ \begin{eqnarray} f(\rho(x,b))&=&\rho'(f(x),{g(b)}),\label{eq:act08}\\ f(\varphi(b, x))&=&\varphi'(g(b), {f(x)}),\label{eq:act09}\\ f(\gamma(b, b'))&=&\gamma'(g(b), g(b')).\label{eq:act10} \end{eqnarray} \begin{theorem}\label{thm: functor R syntehic construction} There exists a functor $R\colon{\mathbf{Act}\to \mathbf{Mon}}$ such that for every morphism in $\mathbf{Act}$, say $(f,g)\colon{(X,B,\rho,\varphi,\gamma)\to (X',B',\rho',\varphi',\gamma')}$, the diagram \begin{equation}\label{diag:semi-biproduct with R} \xymatrix{X\ar@<-.5ex>[r]_(.3){\langle 1,0\rangle}\ar@{->}[d]_{f} & R(X,B,\rho,\varphi,\gamma)\ar@{->}@<.0ex>[d]^{R(f,g)}\ar@<-.5ex>@{..>}[l]_(.7){\pi_X}\ar@<.5ex>[r]^(.7){\pi_B} & B\ar@{->}[d]^{g} \ar@{->}@<.5ex>@{..>}[l]^(.3){\langle 0,1\rangle}\\ X'\ar@<-.5ex>[r]_(.3){\langle 1,0 \rangle} & R{(X',B',{\rho',\varphi',\gamma'})} \ar@<-.5ex>@{..>}[l]_(.7){\pi_{X}}\ar@<.5ex>[r]^(.7){\pi_B} & B \ar@{->}@<.5ex>@{..>}[l]^(.3){{\langle 0 ,1 \rangle}}} \end{equation} is a morphism in $\mathbf{Psb}$. 
\end{theorem} The functor $R$ realizes a pointed monoid action system $(X,B,\rho,\varphi,\gamma)$ as a synthetic semibiproduct diagram \begin{equation}\label{diag:synthetic semi-biproduct} \xymatrix{X\ar@<-.5ex>[r]_(.5){\langle 1,0\rangle} & R\ar@<-.5ex>@{..>}[l]_(.5){\pi_X}\ar@<.5ex>[r]^(.5){\pi_B} & B \ar@{..>}@<.5ex>[l]^(.5){\langle 0,1\rangle}} \end{equation} in which $R=R(X,B,{\rho,\varphi,\gamma})=\{(x,b)\in X\times B\mid \rho(x,b)=x\}$ is equipped with the binary synthetic operation \begin{equation}\label{eq: semibiproduct sunthetic operation} (x,b)+(x',b')=(\rho(x+\varphi(b,x')+\gamma(b,b'),b+b'),b+b') \end{equation} which is well defined for every $x,x'\in X$ and $b,b'\in B$ due to condition $(\ref{eq:act04})$ and is associative due to condition $(\ref{eq:act07})$. It is clear that $\pi_B$ is a monoid homomorphism and due to conditions $(\ref{eq:act01})$--$(\ref{eq:act03})$ we see that the maps $\langle 1,0 \rangle $ and $\langle 0 ,1\rangle $ are well defined and moreover $\langle 1,0\rangle $ is a monoid homomorphism. Finally, we observe that a pair $(x,b)\in X\times B$ is in $R$ if and only if $(x,b)=(x,0)+(0,b)$. Further details can be found in the preprint \cite{NMF.20a-of}. \section{The equivalence} In order to establish a categorical equivalence between $\mathbf{Act}$ and $\mathbf{Psb}$ we need a procedure to associate a pointed monoid action system to every pointed semibiproduct of monoids in a functorial manner. \begin{theorem}\label{thm: pseudo-actions} Let $(X,A,B,p,k,q,s)$ be an object in $\mathbf{Psb}$. The system $(X,B,\rho,\varphi,\gamma)$ with \begin{eqnarray} \rho(x,b)=q(k(x)+s(b))\\ \varphi(b,x)=q(s(b)+k(x))\\ \gamma(b,b')=q(s(b)+s(b')) \end{eqnarray} is an object in $\mathbf{Act}$. Moreover, if $(f_1,f_2,f_3)$ is a morphism in $\mathbf{Psb}$ then $(f_1,f_3)$ is a morphism in $\mathbf{Act}$. 
\end{theorem} \begin{proof} To see that the system $(X,B,\rho,\varphi,\gamma)$ is a well defined object in $\mathbf{Act}$ we recall that $q$ and $s$ are zero-preserving maps and hence conditions $(\ref{eq:act01})$--$(\ref{eq:act03})$ are satisfied. Conditions $(\ref{eq:act04})$--$(\ref{eq:act06})$ are obtained by applying the map $q$ to both sides of equations \begin{eqnarray*} k(x)+s(b)=kq(k(x)+s(b))+s(b)\\ s(b)+k(x)=kq(s(b)+k(x))+s(b)\\ s(b)+s(b')=kq(s(b)+s(b'))+s(b+b') \end{eqnarray*} which hold because $(X,A,B,p,k,q,s)$ is a pointed semibiproduct of monoids. Condition $(\ref{eq:act07})$ follows from Theorem \ref{thm: a+a' and associativity} with $a=k(x)+s(b)$, $a'=k(x')+s(b')+k(x'')+s(b'')$ on the one hand whereas on the other hand $a=k(x)+s(b)+k(x')+s(b')$, $a'=k(x'')+s(b'')$. Moreover, the pair $(f_1,f_3)$ is a morphism of actions as soon as the triple $(f_1,f_2,f_3)$ is a morphism of semibiproducts, indeed we have \begin{eqnarray*} f_1(\rho(x,b))&=&f_1q(k(x)+s(b))=q'f_2(k(x)+s(b))\\ &=&q'(k'f_1(x)+s'f_3(b)))=\rho'(f_1(x),f_3(b)) \end{eqnarray*} and similarly for $\varphi$ and $\gamma$ thus proving conditions $(\ref{eq:act08})$--$(\ref{eq:act10})$. \end{proof} The previous result describes a functor from the category of pointed semibipro\-ducts of monoids into the category of pointed monoid action systems, let us denote it by $P\colon{\mathbf{Psb}\to\mathbf{Act}}$. The synthetic construction of Theorem \ref{thm: functor R syntehic construction} produces a functor in the other direction, let us denote it $Q\colon{\mathbf{Act}\to\mathbf{Psb}}$. We will see that $PQ=1$ whereas $QP\cong 1$. \begin{theorem}\label{thm: equivalence} The categories $\mathbf{Psb}$ and $\mathbf{Act}$ are equivalent. 
\end{theorem} \begin{proof} Theorem \ref{thm: pseudo-actions} tells us that $P(X,A,B,p,k,q,s)=(X,B,\rho,\varphi,\gamma)$ and $P(f_1,f_2,f_3)=(f_1,f_3)$ is a functor from $\mathbf{Psb}$ to $\mathbf{Act}$ whereas Theorem \ref{thm: functor R syntehic construction} gives a functor $Q$ in the other direction. It is clear that $Q(X,B,\rho,\varphi,\gamma)=(X,R,B,\pi_B,\langle 1,0\rangle,\pi_X,\langle 0,1\rangle)$ is the synthetic realization displayed in $(\ref{diag:synthetic semi-biproduct})$ and hence it is a pointed semibiproduct. Moreover $Q(f,g)=(f,R(f,g),g)$ with $R(f,g)$ illustrated as in $(\ref{diag:semi-biproduct with R})$ and defined as $R(f,g)(x,b)=(f(x),g(b))$ is clearly a morphism of semibiproducts. We observe that $PQ(X,B,\rho,\varphi,\gamma)=(X,B,\rho,\varphi,\gamma)$ due to conditions $(\ref{eq:act05})$ and $(\ref{eq:act06})$. This proves $PQ=1$; in order to prove $QP\cong 1$ we need to specify natural isomorphisms $\alpha$ and $\beta$ as illustrated \begin{equation} \vcenter{\xymatrix{A\ar@<.5ex>[r]^(.3){\alpha_A}\ar[d]_{f_2} & RP(X,A,B,p,k,q,s)\ar@<.5ex>[l]^(.7){\beta_A}\ar[d]^{R(f_1,f_3)}\\ A' \ar@<.5ex>[r]^(.3){\alpha_{A'}} & RP(X',A',B',p',k',q',s')\ar@<.5ex>[l]^(.7){\beta_{A'}} }} \end{equation} and show that they are compatible with diagrams $(\ref{diag:morphism of semi-biproduct})$ and $(\ref{diag:semi-biproduct with R})$. Indeed it is a routine calculation to check that $\alpha(a)=(q(a),p(a))$ and $\beta(x,b)=k(x)+s(b)$ are well defined natural isomorphisms compatible with semibiproducts. Further details can be found in the preprint \cite{NMF.20a-of}. \end{proof} \section{Examples}\label{sec: eg} Several examples can be found in the preprint \cite{NMF.20a-of}. Here we list all the possible pointed semibiproducts of monoids $(X,A,B,p,k,q,s)$ in which $X$ and $B$ are monoids with two elements. This particular case is interesting because it gives a simple list with all the possible components of an action system $(X,B,\rho,\varphi,\gamma)$.
The equivalence of Theorem \ref{thm: equivalence} then gives us an easy way of checking all the possibilities. Let us denote by $M$ and $G$ the two monoids with two elements, with $M$ the idempotent monoid and $G$ the group, both expressed in terms of multiplication tables as \[M=\begin{pmatrix} 1 & 2 \\ 2 & 2 \end{pmatrix}, \quad G=\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}.\] Note that we are using multiplicative notation so that $2\cdot 2=2$ in $M$, whereas in $G$ we have $2\cdot 2=1$. Due to restrictions $(\ref{eq:act01})$--$(\ref{eq:act03})$ we have the following two possibilities for each component $\rho$, $\varphi$ and $\gamma$: \[\rho_0=\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix} , \quad \rho_1=\begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix},\] \[\varphi_0=\begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix}, \quad \varphi_1=\begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix},\] \[\gamma_0=\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad \gamma_1=\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}.\] The following list shows all the possible 14 cases of pointed semibiproducts of monoids $(X,A,B,p,k,q,s)$ in which $X$ and $B$ are either $M$ or $G$ via the equivalence of Theorem \ref{thm: equivalence}. \begin{multicols}{2} \begin{enumerate} \item $(G,G,\rho_0,\varphi_0,\gamma_0)$ \item $(G,G,\rho_0,\varphi_0,\gamma_1)$ \item $(G,M,\rho_0,\varphi_0,\gamma_0)$ \item $(G,M,\rho_0,\varphi_0,\gamma_1)$ \item $(G,M,\rho_0,\varphi_1,\gamma_0)$ \item $(G,M,\rho_1,\varphi_1,\gamma_0)$ \item $(M,G,\rho_0,\varphi_0,\gamma_0)$ \item $(M,G,\rho_0,\varphi_0,\gamma_1)$ \item $(M,G,\rho_1,\varphi_1,\gamma_1)$ \item $(M,M,\rho_0,\varphi_0,\gamma_0)$ \item $(M,M,\rho_0,\varphi_0,\gamma_1)$ \item $(M,M,\rho_0,\varphi_1,\gamma_0)$ \item $(M,M,\rho_0,\varphi_1,\gamma_1)$ \item $(M,M,\rho_1,\varphi_1,\gamma_0)$ \end{enumerate} \end{multicols} Note that the cases with $\gamma_0$ correspond to split extensions while the cases with $\rho_0$ correspond to Schreier extensions.
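The list above can also be double-checked by brute force. The following sketch (an illustration, not part of the paper) encodes the two monoids and the candidate maps in 0-based notation (table entry $1$, the unit, becomes $0$; entry $2$ becomes $1$) and tests conditions $(\ref{eq:act04})$--$(\ref{eq:act07})$ exhaustively; conditions $(\ref{eq:act01})$--$(\ref{eq:act03})$ hold by construction of the candidates:

```python
from itertools import product

# Brute-force check of the classification (illustrative, 0-based encoding).
M = lambda a, b: a | b      # idempotent monoid: 2.2 = 2 becomes 1+1 = 1
G = lambda a, b: a ^ b      # two-element group:  2.2 = 1 becomes 1+1 = 0

rhos = [lambda x, b: x, lambda x, b: 0 if x == b == 1 else x]   # rho0, rho1
phis = [lambda b, x: x, lambda b, x: x if b == 0 else 0]        # phi0, phi1
gams = [lambda b, b1: 0, lambda b, b1: int(b == b1 == 1)]       # gamma0, gamma1

def is_action_system(aX, aB, r, f, g):
    E = (0, 1)
    # conditions (act04) and (act05)
    if not all(r(x, b) == r(r(x, b), b) and f(b, x) == r(f(b, x), b)
               for x in E for b in E):
        return False
    # condition (act06)
    if not all(g(b, b1) == r(g(b, b1), aB(b, b1)) for b in E for b1 in E):
        return False
    # condition (act07): associativity of the synthetic operation
    for x, x1, x2, b, b1, b2 in product(E, repeat=6):
        b3 = aB(aB(b, b1), b2)
        lhs = r(aX(r(aX(aX(x, f(b, x1)), g(b, b1)), aB(b, b1)),
                   aX(f(aB(b, b1), x2), g(aB(b, b1), b2))), b3)
        rhs = r(aX(aX(x, f(b, r(aX(aX(x1, f(b1, x2)), g(b1, b2)),
                                aB(b1, b2)))), g(b, aB(b1, b2))), b3)
        if lhs != rhs:
            return False
    return True

count = sum(is_action_system(aX, aB, r, f, g)
            for aX in (M, G) for aB in (M, G)
            for r in rhos for f in phis for g in gams)
# count == 14
```

Of the $2^5=32$ candidate tuples, exactly the 14 listed above pass all the conditions.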
The cases with $\rho_1$ correspond to $R=\{(1,1),(1,2),(2,1)\}$ since $(2,2)$ fails to be in $R$ because $\rho_1(2,2)=1\neq 2$. Interpreting $\varphi$ as an action, the map $\varphi_0$ is the trivial action whereas $\varphi_1$ is a non-trivial one. \section{Conclusion} A new tool has been introduced for the study of monoid extensions from which a new notion of action has emerged in order to establish the categorical equivalence of Theorem \ref{thm: equivalence}. A clear drawback to this approach is the necessity of handling morphisms and maps at the same level. We have solved the problem by extending the hom-functor through an appropriate profunctor (Definition \ref{def: semibiproduct}). Other possible solutions would consider maps as an extra structure in higher dimensions \cite{Brown,NMF.15,NMF.20a-in} or as imaginary morphisms \cite{BZ,MontoliRodeloLinden}. Developing further a categorical framework in which to study semibiproducts seems desirable due to several important cases occurring in different settings. For example, semibiproduct extensions can be studied in the context of preordered monoids \cite{NMF.20b} and preordered groups \cite{Preord}, where the maps $q$ and $s$ are required to be monotone maps rather than zero-preserving maps. The context of topological monoids \cite{Ganci}, with $q$ and $s$ required to be continuous maps, should also be worth studying. \end{document}
\begin{document} \title{Online detection of failures generated by storage simulator} \author{Kenenbek Arzymatov, Mikhail Hushchyn, Andrey Sapronov, Vladislav Belavin, Leonid Gremyachikh, Maksim Karpov and Andrey Ustyuzhanin} \address{National Research University Higher School of Economics} \address{11 Pokrovsky Boulevard, Moscow, Russia, 109028} \ead{karzymatov@hse.ru} \begin{abstract} Modern large-scale data-farms consist of hundreds of thousands of storage devices that span distributed infrastructure. Devices used in modern data centers (such as controllers, links, SSD- and HDD-disks) can fail due to hardware as well as software problems. Such failures or anomalies can be detected by monitoring the activity of components using machine learning techniques. In order to use these techniques, researchers need a large amount of historical data on devices in normal and failure modes for training the algorithms. In this work, we address two problems: 1) the lack of storage data for the methods above, by creating a simulator, and 2) the application of existing online algorithms that can detect a failure in one of the components faster. We created a Go-based (golang) package for simulating the behavior of modern storage infrastructure. The software is based on the discrete-event modeling paradigm and captures the structure and dynamics of high-level storage system building blocks. The package's flexible structure allows us to create a model of a real-world storage system with a configurable number of components. The primary area of interest is exploring the storage machine's behavior under stress testing or medium- to long-term exploitation, in order to observe failures of its components. To discover failures in the time series distribution generated by the simulator, we modified a change point detection algorithm that works in online mode. The goal of the change-point detection is to discover differences in time series distribution.
This work describes an approach for failure detection in time series data based on direct density ratio estimation via binary classifiers. \end{abstract} \section*{Introduction} The disk drive is one of the crucial elements of any computer and IT infrastructure. Disk failures are a major contributing factor to outages of the overall computing system. Over the last decades, storage system reliability and its modeling have been an active area of research in both industry and academia~\cite{744151, 4292318, 816306}. Nowadays, the total number of hard disk drives (HDD) and solid-state drives (SSD) deployed in data-farms and cloud systems has passed tens of millions of units~\cite{7474362}. Consequently, identifying early the defects that can lead to future failures can result in significant benefits. Such failures or anomalies can be detected by monitoring components' activity using machine learning techniques known as change point detection~\cite{aminikhanghahi2017survey, doi:10.1137/1.9781611972795.34, LIU201372}. To use these techniques, especially for anomaly detection, historical data from devices in normal and failure modes is necessary for training the algorithms. In this paper, for the reasons mentioned above, we address two problems: 1) the lack of storage data required by such methods, by creating a simulator, and 2) the application of new online algorithms that can detect a failure in one of the components more quickly~\cite{hushchyn2020online}. A Go-based (golang) package for simulating the behavior of modern storage infrastructure was created. The primary area of interest is exploring the storage machine's behavior under stress testing or exploitation over the medium or long term, in order to observe failures of its components. The software is based on the discrete-event modeling paradigm and captures the structure and dynamics of high-level storage system building blocks. 
It implements a hybrid approach to modeling a storage area network~\cite{karpov, 10.7717/peerj-cs.271}. This method uses additional blocks with a neural network that tunes the internal model parameters while a simulation is running, as described in~\cite{belavin}. The critical advantage of this approach is a reduced need for detailed simulation and for a large number of modeled parameters of real-world system components and, as a result, a significant reduction in the intellectual cost of development. The package's modular structure allows us to create a model of a real-world storage system with a configurable number of components. Compared to other techniques, parameter tuning does not require heavy-lifting changes within the service being developed~\cite{Mousavi_2017}. To discover failures in the time series distribution generated by the simulator, we modified a change-point detection algorithm that works in online mode. The goal of change-point detection is to discover differences in time series distribution. This work uses an approach for failure detection in time series data based on direct density ratio estimation via binary classifiers~\cite{hushchyn2020online}. \section*{Simulator} \subsection*{Internals} The simulator uses the Discrete Event Simulation (DES)~\cite{fishman} paradigm for modeling storage infrastructure. In a broad sense, DES is used to simulate a system as a discrete sequence of events in time. Each event happens at a specific moment in time and marks a change of state in the system. Between two consecutive events, no change in the system is presumed to happen; thus, the simulation time can jump directly to the occurrence time of the next event. The scheme of the process is shown in Figure~\ref{fig:loop}. \begin{figure} \caption{ The event handling loop is the central part responsible for time movement in the simulator. The Master process creates the necessary logical processes (Client1, IOBalancer, HDD\_Write, etc.) 
and populates a Priority Queue by collecting events from the modeled processes. The last part of the implementation runs the event handling loop: it removes successive elements from the queue, which is correct because the queue is already sorted by time, and performs the associated actions. } \label{fig:loop} \end{figure} The simulator's programming environment provides the functionality to set up a model for specific computing environments, especially storage area networks. The key area of interest is exploring the storage infrastructure's behavior under various stress testing or utilization over the medium or long term, in order to monitor failures of its components. In the simulator, the load on the storage system can be represented by two action types: read a file from disk and write a file to disk. Each file has corresponding attributes, such as name, block size, and total size. Together with the current load, these attributes determine the amount of time required to perform the corresponding action. Three basic types of resources are provided: CPU, network interface, and storage. Their representation is shown in Figure~\ref{fig:basicblocks} and an informative description is given in Table~\ref{tab:res}. By using these basic blocks, real-world systems can be constructed, as shown in Figure~\ref{fig:realsystem}. 
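The event handling loop described above can be sketched in a few lines. The following is a minimal illustration of the idea only, written in Python rather than taken from the Go package; the process names are the hypothetical examples from Figure~\ref{fig:loop}.

```python
import heapq

class Simulator:
    """Minimal discrete-event loop: events are processed in timestamp
    order, and simulated time jumps directly from one event to the next."""

    def __init__(self):
        self.queue = []   # priority queue ordered by event time
        self.now = 0.0
        self.counter = 0  # tie-breaker keeps equal-time events FIFO

    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.now + delay, self.counter, action))
        self.counter += 1

    def run(self):
        log = []
        while self.queue:
            time, _, action = heapq.heappop(self.queue)
            self.now = time          # jump straight to the next event
            log.append((time, action()))
        return log

# Hypothetical logical processes posting events out of order:
sim = Simulator()
sim.schedule(1.0, lambda: "Client1: issue write")
sim.schedule(3.5, lambda: "HDD_Write: complete")
sim.schedule(2.0, lambda: "IOBalancer: route request")
trace = sim.run()
```

Because the queue is ordered by timestamp, the trace comes out time-sorted regardless of the order in which processes scheduled their events, which is exactly the property the event handling loop relies on.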
\begin{figure} \caption{An example of a real storage system that can be modeled using basic blocks} \label{fig:realsystem} \caption{Basic resource entities in the simulator} \label{fig:basicblocks} \end{figure} \begin{table} \caption{\label{tab:res} Resource description} \begin{center} \begin{tabular}{lllll} \br Resource &Real-world entity & Parameters & Units & Anomaly type \\ \mr CPU & Controller, server & Number of cores & Amount & Each component \\ & & Core speed & Flops & can suffer from \\ Link & Networking cables &Bandwidth & Megabyte/sec & performance \textbf{degradation} \\ & & Latency & Sec & or total \textbf{breakup} \\ Storage & Cache, SSD, HDD & Size & Gigabyte\\ & & Write speed & Megabyte/sec \\ & & Read speed & Megabyte/sec \\ \br \end{tabular} \end{center} \end{table} \subsection*{Comparison with the real data} Data from a real-world storage system were used to validate the behavior of the simulator. A similar writing load scenario was generated on the model prototype, together with an intentional controller failure (turn-off). The comparison is shown in Figure~\ref{fig:cmp}. As we can see, the simulator's data can qualitatively reflect the components' breakup. \begin{figure} \caption{Comparison of the CPU load metrics between simulated (A) and real data (B). The periods marked ‘Failure’ correspond to a storage processor being offline} \label{fig:cmp} \end{figure} \section*{Change point detection} Consider a $d$-dimensional time series described by a vector of observations $x(t) \in R^d$ at time $t$. A sequence of observations at time $t$ with length $k$ is defined as: $$ X(t) = [x(t)^T, x(t-1)^T, \dots, x(t-k+1)^T]^T \in R^{kd} $$ A sample of $n$ such sequences is defined as: $$ \mathcal{X}(t) = \{X(t), X(t-1), \dots, X(t - n + 1)\} $$ It is assumed that the observation distribution changes at time $t^{*}$. The goal is to detect this change. 
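The window construction above can be made concrete with a short sketch (our own illustration in Python with NumPy, not the paper's code), which builds the combined vectors $X(t)$ of length $k$ and a sample $\mathcal{X}(t)$ of $n$ such vectors from a series with $d=1$:

```python
import numpy as np

def combined_vector(x, t, k):
    """X(t) = [x(t)^T, x(t-1)^T, ..., x(t-k+1)^T]^T, a vector in R^{k*d};
    x is an array of shape (T, d)."""
    return np.concatenate([x[t - i] for i in range(k)])

def sample(x, t, k, n):
    """The sample {X(t), X(t-1), ..., X(t-n+1)}, stacked row-wise."""
    return np.stack([combined_vector(x, t - j, k) for j in range(n)])

# Toy 1-dimensional series x(t) = t, so X(t) lives in R^k.
x = np.arange(10.0).reshape(-1, 1)
X = combined_vector(x, t=5, k=3)   # [5, 4, 3]
S = sample(x, t=9, k=3, n=4)       # 4 overlapping windows, shape (4, 3)
```

Note that consecutive windows overlap: each $X(t)$ shares $k-1$ observations with $X(t-1)$, which is what lets the online algorithm advance in small steps of $n$.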
The idea is to estimate a dissimilarity score between a reference set $X_{rf}(t-n)$ and a test set $X_{te}(t)$. The larger the dissimilarity, the more likely it is that a change point occurs at time $t-n$. In this work, we apply a CPD algorithm based on direct density ratio estimation developed in~\cite{hushchyn2020online}. The main idea is to estimate the density ratio $w(X)$ between two probability distributions $P_{te}(X)$ and $P_{rf}(X)$, which correspond to the test and reference sets respectively. For estimating $w(X)$, different binary classifiers can be used, such as decision trees, random forests, SVM, etc. We use neural networks for this purpose. The network $f(X, \theta)$ is trained on mini-batches with the cross-entropy loss function $L(\mathcal{X}(t-l), \mathcal{X}(t), \theta)$: $$ L(\mathcal{X}(t-l), \mathcal{X}(t), \theta) = - \frac{1}{n} \sum_{X \in \mathcal{X}(t-l)} \log (1 - f(X, \theta)) - \frac{1}{n} \sum_{X \in \mathcal{X}(t)} \log f(X, \theta). $$ We use a dissimilarity score based on the Kullback-Leibler divergence, $D(\mathcal{X}(t-l), \mathcal{X}(t))$. Following~\cite{hushchyn2020generalization}, we define this score as: $$ D(\mathcal{X}(t-l), \mathcal{X}(t), \theta) = \frac{1}{n} \sum_{X \in \mathcal{X}(t-l)} \log \frac{1 - f(X, \theta)}{f(X, \theta)} + \frac{1}{n} \sum_{X \in \mathcal{X}(t)} \log \frac{f(X, \theta)}{1 - f(X, \theta)}. $$ Following~\cite{hushchyn2020online}, the training algorithm is shown in Alg.~\ref{alg:clf}. It consists of the following steps performed in a loop: 1) initializing hyper-parameters; 2) preparing the datasets $\mathcal{X'}_{rf}$ and $\mathcal{X'}_{te}$; 3) calculating the loss function; 4) applying gradients to the weights of the neural network. 
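The dissimilarity score $D$ above depends only on the classifier outputs $f(X,\theta)\in(0,1)$ on the two mini-batches. A minimal sketch of its computation (our illustration, assuming a trained classifier; the example probabilities are made up and stand in for a real network):

```python
import numpy as np

def kl_dissimilarity(f_ref, f_test):
    """D = mean over reference of log((1-f)/f)
         + mean over test of log(f/(1-f)),
    where f_* are classifier outputs f(X, theta) in (0, 1)."""
    f_ref, f_test = np.asarray(f_ref), np.asarray(f_test)
    return (np.mean(np.log((1 - f_ref) / f_ref))
            + np.mean(np.log(f_test / (1 - f_test))))

# An uninformative classifier (f = 1/2 on both batches) gives score 0,
# i.e. no evidence of a distribution change:
same = kl_dissimilarity([0.5, 0.5], [0.5, 0.5])

# A classifier that separates the two batches gives a large positive score:
diff = kl_dissimilarity([0.1, 0.2], [0.9, 0.8])
```

Intuitively, $\log\frac{f}{1-f}$ is the classifier's estimated log density ratio, so the score is large exactly when the classifier can distinguish the test batch from the reference batch.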
\begin{algorithm} \SetAlgoLined \textbf{Inputs:} time series $\{X(t)\}_{t=k}^{T}$; $k$ -- size of a combined vector $X(t)$; $n$ -- size of a mini-batch $\mathcal{X}(t)$; $l$ -- lag size and $n \ll l$; $f(X, \theta)$ -- a neural network with weights $\theta$; \\ \textbf{Initialization:} $t \leftarrow k + n + l$; \\ \While{$t \le T$}{ take mini-batches $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$; \\ $d(t) \leftarrow D(\mathcal{X}(t-l), \mathcal{X}(t), \theta)$; \\ $\bar{d}(t) \leftarrow \bar{d}(t-n) + \frac{1}{l} (d(t) - d(t-l-n))$; \\ $loss(t, \theta) \leftarrow L(\mathcal{X}(t-l), \mathcal{X}(t), \theta)$; \\ $\theta \leftarrow \mathrm{Optimizer}(\mathrm{loss}(t, \theta))$; \\ $t \leftarrow t + n$; \\ } \Return{$\{\bar{d}(t)\}_{t=1}^{T}$} -- change-point detection score \caption{Change-point detection algorithm.} \label{alg:clf} \end{algorithm} \section*{Results} To check the change-point algorithm against the simulation data, four time-series datasets were prepared: 1) the controller's CPU load metric; 2) load balancer request time; 3) data traffic to storage devices; and 4) the change in used space. Their time series are shown in the upper halves of Figures~\ref{fig:cpd1},~\ref{fig:cpd2},~\ref{fig:cpd3} and~\ref{fig:cpd4}. As shown in the bottom halves of these figures, the algorithm can identify data points where the distribution changes. The red line on each plot is the CPD score line. The higher its values, the more confident the algorithm is that a change point occurred at this timestamp. \begin{figure} \caption{Controller failure} \label{fig:cpd1} \caption{ IO balancer time series} \label{fig:cpd2} \end{figure} \begin{figure} \caption{Storage traffic} \label{fig:cpd3} \caption{Storage used space} \label{fig:cpd4} \end{figure} \section*{Conclusion} A simulator for modeling storage infrastructure based on the event-driven paradigm was presented. 
It allows researchers to try different I/O load scenarios to test disk performance and to model failures of hardware components. By providing large amounts of synthetic data on anomalies and time series of a machine in various modes, the simulator can also be used as a benchmark for comparing different change-point detection algorithms. In this work, the density ratio estimation CPD algorithm was successfully applied to the simulator data. \ack{This research was supported in part through computational resources of HPC facilities at NRU HSE.} \section*{References} \end{document}
\begin{document} \title{Blowup behaviour for the nonlinear Klein--Gordon equation} \author{Rowan Killip} \address{University of California, Los Angeles} \author{Betsy Stovall} \address{University of California, Los Angeles} \author{Monica Visan} \address{University of California, Los Angeles} \begin{abstract} We analyze the blowup behaviour of solutions to the focusing nonlinear Klein--Gordon equation in spatial dimensions $d\geq 2$. We obtain upper bounds on the blowup rate, both globally in space and in light cones. The results are sharp in the conformal and sub-conformal cases. The argument relies on Lyapunov functionals derived from the dilation identity. We also prove that the critical Sobolev norm diverges near the blowup time. \end{abstract} \maketitle \tableofcontents \section{Introduction.} We consider the initial-value problem for the nonlinear Klein--Gordon equation \begin{equation} \label{E:eqn} \begin{cases} u_{tt} - \Delta u + m^2 u = |u|^p u\\ u(0) = u_0,\ u_t(0) = u_1 \end{cases} \end{equation} in spatial dimensions $d\geq 2$ with $0<p<\frac4{d-2}$ and $m\in[0,1]$. Note that when $m=0$, this reduces to the nonlinear wave equation. We will only consider real-valued solutions to \eqref{E:eqn}; the methods adapt easily to the complex-valued case. This equation is the natural Hamiltonian flow associated with the energy \begin{equation} \label{E:energy} E_m(u) = \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x}u(t,x)|^2 + \tfrac{m^2}2 |u(t,x)|^2 - \tfrac1{p+2}|u(t,x)|^{p+2}\, dx. \end{equation} Both the linear and nonlinear Klein--Gordon equations enjoy finite speed of propagation (indeed, they are fully Poincar\'e invariant). For this reason, many statements (including the definition of a solution) are most naturally formulated in light cones. 
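Before turning to light cones, we record the formal computation showing that the energy \eqref{E:energy} is conserved by the flow: for sufficiently regular, decaying solutions, differentiating under the integral, integrating by parts in the middle term, and using \eqref{E:eqn} gives
\begin{align*}
\frac{d}{dt}E_m(u(t)) &= \int_{{\mathbb{R}}^d} u_t u_{tt} + \nabla u \cdot \nabla u_t + m^2 u\, u_t - |u|^{p} u\, u_t \, dx\\
&= \int_{{\mathbb{R}}^d} u_t \bigl( u_{tt} - \Delta u + m^2 u - |u|^{p} u \bigr)\, dx = 0.
\end{align*}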
\begin{definition}[Light cones] \label{D:light cones} Given $(t_0,x_0) \in {\mathbb{R}}_+ \times {\mathbb{R}}^d$, we write $$ \Gamma(t_0,x_0) := \{(t,x) \in [0,t_0) \times {\mathbb{R}}^d : |x - x_0| \leq t_0-t\} $$ to denote the backwards light cone emanating from this point. Given $u : \Gamma(t_0,x_0) \to {\mathbb{R}}$, we write $ u \in C_{t,\textrm{loc}} L^2_x(\Gamma(t_0,x_0))$ if, when we extend $u$ to be zero outside of $\Gamma(t_0,x_0)$, the function $t \mapsto u(t)$ is continuous in $L^2_x({\mathbb{R}}^d)$ on compact subintervals of $[0,t_0)$. We define $$ C_{t,\textrm{loc}}H^1_x(\Gamma(t_0,x_0)):=\{ u\in C_{t,\rm{loc}} L^2_x(\Gamma(t_0,x_0)) \text{ and } \nabla u \in C_{t,\rm{loc}} L^2_x(\Gamma(t_0,x_0))\}. $$ Finally, we write $u \in L^{\frac{p(d+1)}2}_{\rm{loc}}(\Gamma(t_0,x_0))$ if $u \in L^{\frac{p(d+1)}2}_{t,x}[(K \times {\mathbb{R}}^d)\cap \Gamma(t_0,x_0)]$ for every compact time interval $K \subset [0,t_0)$. \end{definition} The dispersion relation for the linear Klein--Gordon equation is $\omega^2 = m^2+|\xi|^2$. In view of this, we adopt the notation \begin{equation}\label{E:jp def} \jp{\xi}_m := \sqrt{m^2+|\xi|^2}, \end{equation} by analogy with the widely-used $\jp{\xi} := \sqrt{1+|\xi|^2}$. With this notation, the solution of the linear Klein--Gordon equation with $u(0)=u_0$ and $u_t(0)=u_1$ is given by $$ {\mathcal{S}}_m(t)(u_0,u_1) = \cos(t\langle\nabla\rangle_m)u_0 + \langle\nabla\rangle_m^{-1}\sin(t\langle\nabla\rangle_m)u_1. $$ \begin{definition}[Solution] \label{D:solution} Let $(t_0,x_0) \in {\mathbb{R}}_+ \times {\mathbb{R}}^d$. 
A function $u:\Gamma(t_0,x_0) \to {\mathbb{R}}$ is a \emph{(strong) solution} to \eqref{E:eqn} if $(u,u_t) \in C_{t,\textrm{loc}}[H^1_x \times L^2_x](\Gamma(t_0,x_0))$, $u \in L^{\frac{p(d+1)}2}_\textrm{loc}(\Gamma(t_0,x_0))$, and $u$ satisfies the Duhamel formula \begin{equation} \label{E:Duhamel} u(t) = {\mathcal{S}}_m(t)(u_0,u_1) + \int_0^t \langle\nabla\rangle_m^{-1}\sin(\langle\nabla\rangle_m(t-s))|u(s)|^pu(s)\, ds \end{equation} on $\Gamma(t_0,x_0)$. \end{definition} Strong solutions are known to be unique (cf. Proposition~\ref{P:uniqueness}) and so any initial data in $(u_0,u_1)\in H^1_x\times L^2_x$ leads to a unique \emph{maximally extended} solution defined on the union of all light cones upon which a strong solution exists. This region of spacetime is called the \emph{domain of maximal (forward) extension}. When this is not $[0, \infty)\times{\mathbb{R}}^d$, it must take the form $$ \{ (t,x) : x\in {\mathbb{R}}^d \text{ and } 0 \leq t < \sigma(x) \} $$ where $\sigma:{\mathbb{R}}^d\to(0,\infty)$ is a 1-Lipschitz function. The surface $$ \Sigma = \{(\sigma(x),x): x \in {\mathbb{R}}^d\} $$ is called the \emph{(forward) blowup surface}. A point $t_0=\sigma(x_0)$ on the blowup surface is called \emph{non-characteristic} if $\sigma(x)\geq \sigma(x_0) - (1-\varepsilon)|x-x_0|$ for all $x$ and some $\varepsilon>0$. Otherwise the point is called \emph{characteristic}. With these preliminaries out of the way, let us now describe both the principal results and the structure of the paper. In Section~\ref{S:local theory}, we review the local well-posedness theory for our equation. Almost nothing in this section is new. However we include full proofs, both for the sake of completeness and because in several places the exact formulation we require is not that which appears in the literature. Additionally, local well-posedness results are intrinsically lower bounds on the rate of blowup. 
In this way, the results of Section~\ref{S:local theory} provide a counterpart to the upper bounds proved elsewhere in the paper. Sections~\ref{S:non+blow},~\ref{S:CC}, and~\ref{S:critical blowup} culminate in a proof that the critical Sobolev norm diverges as the blowup time is approached, at least along a subsequence of times. Here criticality is defined with respect to scaling. The nonlinear Klein--Gordon equation does not have a scaling symmetry, except when $m=0$. In the massless case the scaling symmetry takes the form $u(t,x)\mapsto u^\lambda(t,x) := \lambda^{2/p} u(\lambda t,\lambda x)$ and the corresponding scale-invariant spaces are $u\in \dot H^{s_c}({\mathbb{R}}^d)$ and $u_t \in \dot H^{s_c-1}({\mathbb{R}}^d)$ with $s_c := \frac{d}2 - \frac2p$. As blowup is naturally associated with short length scales (i.e., $\lambda\to\infty$) and the coefficient of the mass term shrinks to zero under this scaling, it is natural to regard $s_c$ as the critical regularity for \eqref{E:eqn} even when $m > 0$. \begin{theorem}\label{T:I:sc} Consider initial data $u_0\in H^1({\mathbb{R}}^d)$ and $u_1\in L^2({\mathbb{R}}^d)$ with $d \geq 2$ and suppose $p = \frac4{d-2s_c}$ with $\frac1{2d} < s_c < 1$. If the maximal-lifespan solution $u$ to \eqref{E:eqn} blows up forward in time at $0<T_* < \infty$, then $$ \smash{\limsup_{t \uparrow T_*}} \bigl\{\|u(t)\|_{\dot H^{s_c}_x} + \|u_t(t)\|_{H^{s_c-1}_x}\bigr\} = \infty. $$ When $s_c < \frac12$ we additionally assume that $u_0$ and $u_1$ are spherically symmetric. \end{theorem} By virtue of scale invariance, the blowup time can be adjusted arbitrarily without altering the size of the critical norm. As this indicates, the link between blowup and the critical norm is subtle. We note also the example of the mass-critical nonlinear Schr\"odinger equation for which blowup does occur despite the fact that the $L^2_x$-norm is a constant of motion! 
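For the reader's convenience, here is the computation behind this choice of $s_c$: from $\widehat{u^\lambda}(0,\xi) = \lambda^{\frac2p - d}\, \hat u_0(\xi/\lambda)$ and the change of variables $\eta = \xi/\lambda$,
\begin{align*}
\bigl\| u^\lambda(0) \bigr\|_{\dot H^s_x}^2 = \lambda^{\frac4p - 2d} \int_{{\mathbb{R}}^d} |\xi|^{2s} |\hat u_0(\xi/\lambda)|^2 \, d\xi = \lambda^{\frac4p + 2s - d} \bigl\| u_0 \bigr\|_{\dot H^s_x}^2,
\end{align*}
which is $\lambda$-independent precisely when $s = \frac d2 - \frac2p = s_c$, that is, $p = \frac4{d-2s_c}$; the companion computation for $u_t$ singles out $\dot H^{s_c-1}$ in the same way.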
We were prompted to investigate the behaviour of the critical norm by a recent paper of Merle and Raphael, \cite{MerleRaphaelAJM}, who considered the nonlinear Schr\"odinger equation with radial data and $0<s_c<1$. They showed that the critical norm must blow up as a power of $|\log(T^*-t)|$. In \cite{MerleRaphaelAJM} a rescaling argument is used to show that if a blowup solution were to exist for which the critical norm did not diverge, then one could produce a second solution that is global in at least one time direction and has energy $E(u)\leq 0$. The impossibility of this second type of solution is then deduced via the virial argument. Because the second solution has poor spatial decay, the virial argument needs to be space localized and the resulting error terms controlled; this relies heavily on the radial hypothesis. Note that the roles of the symmetry assumption in \cite{MerleRaphaelAJM} and here are of a completely different character. As discussed in Section~\ref{S:local theory}, our equation is \emph{ill-posed} in $H_x^{s_c}\times H_x^{s_c-1}$ when $s_c<\frac12$, unless one imposes the restriction to radial data. The additional restriction to $s_c>\frac1{2d}$ in Theorem~\ref{T:I:sc} stems from the fact that we do not know if the equation is well-posed in $ H_x^{s_c}\times H_x^{s_c-1}$ for spherically symmetric data; see Section~\ref{S:local theory} for more details. To prove Theorem~\ref{T:I:sc} we argue in a broadly similar manner, showing that failure of the theorem would result in a semi-global solution to the limiting (massless) equation with energy $E(u)\leq 0$ and then arguing that such a solution cannot exist. In our setting we are able to handle arbitrary (nonradial) solutions when $1/2\leq s_c<1$ by employing a concentration-compactness principle for an inequality of Gagliardo--Nirenberg type. The requisite concentration-compactness result is obtained in Section~\ref{S:CC}. 
The impossibility of semi-global solutions with energy $E(u)\leq 0$ to the massless equation is proved in Section~\ref{S:non+blow}; this relies on a space-truncated virial argument. In Section~\ref{S:other} we examine how the $L^2({\mathbb{R}}^d)$ norms of $u$ and $\nabla_{t,x} u$ behave near the blowup time. The arguments are comparatively straightforward applications of the virial argument; no spatial truncation is required. In the remaining three sections of the paper, we study the behaviour of $u$ and $\nabla_{t,x} u$ near individual points on the blowup surface, rather than integrated over all space as in Section~\ref{S:other}. This is a much more delicate matter. For the case of the nonlinear wave equation, this has been treated in a series of papers by Antonini, Merle, and Zaag; see \cite{AntoniniMerle,MerleZaagAJM,MerleZaagMA,MerleZaagIMRN}. All of these papers restrict attention only to cases where $s_c\leq \frac12$. In this paper we will extend their results to the Klein--Gordon setting, considering also the regime $\frac12 < s_c < 1$. The analysis of the nonlinear wave equation relies centrally on certain monotonicity formulae. In the papers mentioned above, these appear via rather \emph{ad hoc} manipulations mimicking earlier work of Giga and Kohn on the nonlinear heat equation \cite{GigaKohn:Indiana,GigaKohn:CPAM}. In Section~\ref{S:Lyapunov} we uncover the physical origins of these identities, finding that they are in fact close cousins of the dilation identity. This in turn indicates the proper analogues in the Klein--Gordon setting. The identities are then used in Sections~\ref{S:superconf blow} and~\ref{S:conf blow} to control the behaviour of solutions inside light cones. Section~\ref{S:superconf blow} treats the case $s_c>\frac12$ while Section~\ref{S:conf blow} covers $s_c\leq \frac12$. 
The following theorem captures the flavour of our results in the two cases: \begin{theorem}\label{T:I:cone} Let $d \geq 2$, $m \in [0,1]$, and $p = \frac4{d-2s_c}$ with $0< s_c < 1$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{(t,x):0 < t \leq T, |x|<t\}$, then $u$ satisfies \begin{equation}\label{E:I:u} \int_{|x|< t/2} |u(t,x)|^2\, dx \lesssim \begin{cases} t^{\frac{pd}{p+4}} & :\text{ if } s_c > \tfrac 12 \\ t^{2s_c} &:\text{ if } s_c \leq \tfrac 12 \end{cases} \end{equation} and \begin{equation}\label{E:I:grad u} \int_{t_0}^{2t_0} \!\!\! \int_{|x|<t/2} |\nabla_{\!t,x}u(t,x)|^2 \, dx\, dt \lesssim \begin{cases} 1 & :\text{ if } s_c > \tfrac 12 \\ t_0^{2s_c-1} &:\text{ if } s_c \leq \tfrac 12. \end{cases} \end{equation} \end{theorem} Note that the powers appearing in the two cases in RHS\eqref{E:I:u} and RHS\eqref{E:I:grad u} agree when $s_c = \frac12$. Note also that this theorem is best understood by considering the time-reversed evolution, that is, for initial data given at time $T$ and with $(0,0)$ being a point on the backwards blowup surface. The local well-posedness results in Section~\ref{S:local theory} (cf. Corollary~\ref{C:lwp loc}) show that these upper bounds on the blowup rate are sharp when $0<s_c\leq\frac12$. One peculiarity of the case $s_c=\frac12$ is that the massless equation is invariant under the full conformal group of Minkowski spacetime. For this reason we term this the \emph{conformal} case. Correspondingly, $s_c<\frac12$ and $s_c>\frac12$ will be referred to as the \emph{sub-} and \emph{super-conformal} cases, respectively. In the conformal case, the Lagrangian action is invariant under scaling and so the dilation identity takes the form of a true conservation law (cf. \eqref{E:m dilation}), while at other regularities it does not. 
The key dichotomy between $s_c \leq \frac12$ and $s_c>\frac12$ in the context of Theorem~\ref{T:I:cone} is not dictated directly by conformality, but rather by the scaling of the basic monotonicity formulae we use. The dilation identity scales as $s_c=\frac12$. As a consequence, we are able to obtain stronger results in the conformal and sub-conformal cases than in the super-conformal regime. Indeed, in these cases \eqref{E:I:grad u} can be upgraded to a \emph{pointwise in time} statement; see Theorem~\ref{T:cone bound subc}. Systematic consideration of all conformal conservation laws (cf. Section~\ref{S:Lyapunov}) does not lead to any monotonicity formulae scaling at a higher regularity, thereby suggesting that the dilation is still the best tool for the job when $s_c>\frac12$. The simplified version of our estimates given in Theorem~\ref{T:I:cone} only controls the size of the solution in the middle portion of the light cone, $\{|x| < t/2\}$. In truth, the estimates we prove give weighted bounds in the whole light cone; however, the weight decays rather quickly near the boundary of the light cone. If the point $(0,0)$ is not a characteristic point of the blowup surface, then simple covering arguments using nearby light cones show that the same estimates hold for the whole region $\{|x| <t\}$. In fact, when $s_c\leq \frac12$ our results precisely coincide with those proved by Merle and Zaag for the corresponding nonlinear wave equation in \cite{MerleZaagAJM,MerleZaagMA,MerleZaagIMRN}. (As mentioned previously, their works do not consider the case $s_c >\frac12$.) It turns out that it is possible to repeat the Merle--Zaag arguments virtually verbatim in the Klein--Gordon setting (with $s_c\leq \frac12$); however, this is not what we have done. While we do follow their strategy rather closely, the implementation is quite different. 
We use usual spacetime coordinates, as opposed to the similarity coordinates used by Giga and Kohn and again by Antonini, Merle, and Zaag. This makes the geometry of light cones much more transparent, which we exploit to obtain stronger averaged Lyapunov functionals (cf. \eqref{E:L flux ineq 2}), as well as to simplify the key covering argument (cf. our passage from \eqref{almost} to \eqref{tada} with subsection~3.2 in \cite{MerleZaagIMRN}). \noindent{\bf Acknowledgements} The first author was supported by NSF grant DMS-1001531. The second author was supported by an NSF Postdoctoral Fellowship. The third author was supported by NSF grant DMS-0901166 and a Sloan Foundation Fellowship. \subsection{Preliminaries} We will be regularly referring to the spacetime norms \begin{equation}\label{E:qr def} \bigl\| u \bigr\|_{L^q_t L^{\vphantom{q}r}_{\vphantom{t}x}({\mathbb{R}}\times{\mathbb{R}}^d)} := \biggl(\int_{{\mathbb{R}}} \bigg[ \int_{{\mathbb{R}}^d} |u(t,x)|^r\,dx \biggr]^{\frac qr} \;dt\biggr)^\frac1q, \end{equation} with obvious changes if $q$ or $r$ is infinity. We write $X\lesssim Y$ to indicate that $X\leq C Y$ for some implicit constant $C$, which varies from place to place. Let $\varphi(\xi)$ be a radial bump function supported in the ball $\{ \xi \in {\mathbb{R}}^d: |\xi| \leq \tfrac {11}{10} \}$ and equal to $1$ on the ball $\{ \xi \in {\mathbb{R}}^d: |\xi| \leq 1 \}$. For each number $N \in 2^{{\mathbb{Z}}}$, we define the Littlewood--Paley projections \begin{align*} \widehat{P_{\leq N} f}(\xi) &:= \varphi(\xi/N) \hat f(\xi)\\ \widehat{P_{> N} f}(\xi) &:= (1 - \varphi(\xi/N)) \hat f(\xi)\\ \widehat{P_N f}(\xi) &:= (\varphi(\xi/N) - \varphi(2\xi/N)) \hat f(\xi) \end{align*} and similarly $P_{<N}$ and $P_{\geq N}$. 
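We note that these projections telescope: since $\varphi(\xi/N) = \varphi(2\xi/N) + \bigl(\varphi(\xi/N) - \varphi(2\xi/N)\bigr)$, we have
$$
P_{\leq N} = P_{\leq N/2} + P_N, \qquad\text{and hence}\qquad P_{\leq N} f = \sum_{M \leq N} P_M f \quad\text{for } f \in L^2_x({\mathbb{R}}^d),
$$
where the sum runs over dyadic $M \in 2^{{\mathbb{Z}}}$ and converges in $L^2_x$ (as $P_{\leq M} f \to 0$ in $L^2_x$ when $M \to 0$, by dominated convergence on the Fourier side).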
We will use basic properties of these operators, including \begin{lemma}[Bernstein estimates]\label{Bernstein} For $1 \leq p \leq q \leq \infty$, \begin{align*} \bigl\| |\nabla|^{\pm s} P_N f\bigr\|_{L^p({\mathbb{R}}^d)} &\sim N^{\pm s} \| P_N f \|_{L^p({\mathbb{R}}^d)},\\ \|P_{\leq N} f\|_{L^q({\mathbb{R}}^d)} &\lesssim N^{\frac{d}{p}-\frac{d}{q}} \|P_{\leq N} f\|_{L^p({\mathbb{R}}^d)},\\ \|P_N f\|_{L^q({\mathbb{R}}^d)} &\lesssim N^{\frac{d}{p}-\frac{d}{q}} \| P_N f\|_{L^p({\mathbb{R}}^d)}. \end{align*} \end{lemma} Next, we recall some well-known elliptic estimates; see, for example, \cite[Ch.~7]{GilTru} or \cite[Ch.~8]{LiebLoss}. \begin{lemma}(Sobolev inequality for domains) \label{L:Sob domain} Let $d\geq 2$ and let $\Omega\subset {\mathbb{R}}^d$ be a domain with the cone property. Then $$ \|f\|_{L^q(\Omega)}\lesssim \|f\|_{H^1(\Omega)} $$ provided that $2\leq q<\infty$ if $d=2$ and $2\leq q\leq \frac{2d}{d-2}$ if $d\geq 3$. The implicit constant depends only on $d, q,$ and $\Omega$. \end{lemma} We will only be applying this lemma to balls, exteriors of balls, and in the whole of ${\mathbb{R}}^d$; thus, the cone property automatically holds. \begin{lemma}(Poincar\'e inequality on bounded domains)\label{L:Poincare} Let $d\geq 2$ and let $\Omega\subset {\mathbb{R}}^d$ be a bounded domain. Then for any $f\in H^1_0(\Omega)$, $$ \|f\|_{L^2(\Omega)}\lesssim |\Omega|^{1/d} \|\nabla f\|_{L^2(\Omega)}. $$ \end{lemma} \begin{lemma}(Gagliardo--Nirenberg)\label{L:GN} Let $d\geq 2$ and let $0<p<\infty$ if $d=2$ and let $0<p\leq\frac4{d-2}$ if $d\geq 3$. Then $$ \|f\|_{L^{p+2}}^{p+2} \lesssim \|f\|_{L^{\frac{pd}2}}^p \|\nabla f\|_{L^2}^2 \quad\text{and}\quad \|f\|_{L^{\frac{pd}2}} \lesssim \|f\|_{L^2}^{1-s_c} \|\nabla f\|_{L^2}^{s_c}. $$ Moreover, for any $R>0$, $$ \|f\|_{L^{p+2}(|x|\geq R)}^{p+2} \lesssim \|f\|_{L^{\frac{pd}2}(|x|\geq R)}^p \|\nabla f\|_{L^2(|x|\geq R)}^2. 
$$ \end{lemma} \section{Local theory.} \label{S:local theory} \subsection{Strichartz inequalities} \begin{lemma}[Strichartz inequality] \label{L:Strichartz} Fix a value of $m \in [0,1]$. Let $u$ be a solution to the inhomogeneous equation \begin{equation} \label{E:ivp} u_{tt} - \Delta u + m^2 u = F \quad \text{with} \quad u(0) = u_0 \quad \text{and}\quad u_t(0) = u_1 \end{equation} on the time interval $[0,T]$. Let $0 \leq \gamma \leq 1$, $2 < q,\tilde{q} \leq \infty$, and $2 \leq r, \tilde r < \infty$ be exponents satisfying the scaling and admissibility conditions: $$ \frac1q + \frac{d}r = \frac{d}2 - \gamma = \frac1{\tilde q'} + \frac{d}{\tilde r'} - 2 \qquad \text{and} \qquad \frac1q + \frac{d-1}{2r},\ \frac1{\tilde q} + \frac{d-1}{2\tilde r} \leq \frac{d-1}4. $$ Then \begin{equation} \label{E:wave strichartz} \begin{aligned} &\|\langle\nabla\rangle_m^{\gamma} u\|_{C_t L^2_x([0,T] \times {\mathbb{R}}^d)} + \|\langle\nabla\rangle_m^{\gamma-1}u_t\|_{C_t L^2_x([0,T] \times {\mathbb{R}}^d)} + \|u\|_{L^q_tL^r_x([0,T] \times {\mathbb{R}}^d)} \\ & \qquad \qquad \lesssim \|\langle\nabla\rangle_m^{\gamma}u_0\|_{L^2_x} + \|\langle\nabla\rangle_m^{\gamma-1} u_1\|_{L^2_x} + \|F\|_{L^{\tilde q'}_tL^{\tilde r'}_x([0,T] \times {\mathbb{R}}^d)}. \end{aligned} \end{equation} Here the implicit constant is independent of $m$ and $T$, but may depend on $d$, $\gamma$, $q$, $\tilde q$, $r$, $\tilde r$. \end{lemma} \begin{remark} We will make particularly heavy use of the following special case: \begin{equation} \label{E:H12 strichartz} \begin{aligned} &\|\langle\nabla\rangle_m^{\frac12}u\|_{C_tL^2_x([0,T] \times {\mathbb{R}}^d)} + \|\langle\nabla\rangle_m^{-\frac12}u_t\|_{C_tL^2_x([0,T] \times {\mathbb{R}}^d)} + \|u\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}([0,T] \times {\mathbb{R}}^d)} \\ &\qquad\qquad \lesssim \|\langle\nabla\rangle_m^{\frac12} u_0\|_{L^2_x} + \|\langle\nabla\rangle_m^{-\frac12}u_1\|_{L^2_x} + \|F\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}([0,T] \times {\mathbb{R}}^d)}. 
\end{aligned} \end{equation} \end{remark} \begin{proof} The principle of stationary phase may be used to show that the linear operator $e^{it\langle\nabla\rangle_m}$ satisfies the dispersive estimate \begin{equation} \label{E:dispersive} \begin{aligned} \|e^{it\langle\nabla\rangle_m}P_N f\|_{L^{\infty}_x} &\lesssim N^d\bigl(1+\tfrac{|t|N^2}{\jp{N}_m}\bigr)^{-\frac{d-1}2}\|P_Nf\|_{L^1_x} \\ & \lesssim |t|^{-\frac{d-1}2}\bigl\langle N \bigr\rangle_m^{\frac{d+1}2}\|P_Nf\|_{L^1_x} , \end{aligned} \end{equation} where the implicit constants are independent of $m$. Combining this with the fact that $e^{it\langle\nabla\rangle_m}$ is an isometry on $L^2_x$, standard arguments (cf.\ \cite{tao:keel} and the references therein) give the Strichartz estimates \eqref{E:wave strichartz}. \end{proof} Using the Strichartz estimate, one can easily derive the following standard result: \begin{prop}[Uniqueness in light cones, \cite{Kapitanski94}] \label{P:uniqueness} Let $u$ and $\tilde u$ be two strong solutions to \eqref{E:eqn} on the backwards light cone $\Gamma(T,x_0)$. If $(u(0),u_t(0)) = (\tilde u(0),\tilde u_t(0))$ on $\{x:|x-x_0| \leq T\}$, then $u=\tilde u$ throughout $\Gamma(T,x_0)$. \end{prop} \begin{proof} To keep formulae within margins, we introduce the following notation: If $I \subset [0,\infty)$ is an interval, then we set $\Gamma_{I} := (I \times {\mathbb{R}}^d) \cap \Gamma(T,x_0)$. Let $\eta>0$ be a small constant to be chosen shortly. By Definition~\ref{D:solution}, we may write $[0,T] = \bigcup_{j=1}^{\infty} I_j$ with $$ \|u\|_{L^{\frac{p(d+1)}2}_{t,x}(\Gamma_{I_j})} + \|\tilde u\|_{L^{\frac{p(d+1)}2}_{t,x}(\Gamma_{I_j})} \leq \eta. $$ Next, by Lemma~\ref{L:Sob domain}, for each $t\in I_j$ we have \begin{align*} \|u(t)\|_{L_x^{\frac{2(d+1)}{d-1}}(|x-x_0|<T-t)}& + \|\tilde u(t)\|_{L_x^{\frac{2(d+1)}{d-1}}(|x-x_0|<T-t)}\\ &\lesssim \|u(t)\|_{H^1_x(|x-x_0|<T-t)}+ \|\tilde u(t)\|_{H^1_x(|x-x_0|<T-t)}. 
\end{align*} Thus, by the definition of strong solution, $$ \|u\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma_{I_j})} + \|\tilde u\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma_{I_j})}<\infty. $$ We now consider the difference $w = u - \tilde u$. By \eqref{E:wave strichartz}, finite speed of propagation, and H\"older's inequality, \begin{align*} \|w\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma_{I_1})} &\lesssim \||u|^pu - |\tilde u|^p \tilde u\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}(\Gamma_{I_1}) } \\ &\lesssim \|(|u|^p+ |\tilde u|^p)(u-\tilde u)\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}(\Gamma_{I_1}) } \\ &\lesssim \eta^p \|w\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma_{I_1})} . \end{align*} Choosing $\eta$ sufficiently small, we deduce that $w \equiv 0$ on $\Gamma_{I_1}$. An inductive argument yields $w \equiv 0$ on $\Gamma(T,x_0)$. \end{proof} If $m=0$ and $u$ has zero initial data, then it was proved by Harmse in \cite{Harmse} that better estimates than those in Lemma~\ref{L:Strichartz} are possible. \begin{lemma}[Strichartz estimates for inhomogeneous wave] \label{L:inhomog st} Let $u$ be a solution to the initial-value problem $$ u_{tt} - \Delta u = F \qquad \text{with} \qquad u(0) = u_t(0) = 0 $$ on the interval $[0,T]$. Then \begin{equation} \label{E:inhomog st} \|u\|_{L^r_{t,x}([0,T]\times {\mathbb{R}}^d)} \lesssim \|F\|_{L^{\tilde r'}_{t,x}([0,T] \times {\mathbb{R}}^d)}, \end{equation} whenever $r$ and $\tilde r$ satisfy the scaling and acceptability conditions $ \frac1r + \frac1{\tilde r} = \frac{d-1}{d+1}$ and $\frac1r, \frac1{\tilde r} < \frac{d-1}{2d}$. In particular, \eqref{E:inhomog st} holds with $r = \frac{p(d+1)}2$ and $\tilde r' = \frac{p(d+1)}{2(p+1)}$, provided that $\frac1{2d} < s_c < \frac12$. \end{lemma} Finally, in the radial case, lower regularity Strichartz estimates than those given in Lemma~\ref{L:Strichartz} are possible. \begin{lemma}[Radial Strichartz estimates for homogeneous wave] \label{L:radial wave} Let $\frac1{2d} < s_c < \frac12$ and let $p = \frac4{d-2s_c}$. 
If $u_0 \in \dot{H}^{s_c}_x({\mathbb{R}}^d)$ and $u_1 \in \dot{H}^{s_c-1}_x({\mathbb{R}}^d)$ are radial, then the solution to the linear wave equation satisfies \begin{equation} \label{E:radial wave} \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}} \lesssim \|u_0\|_{\dot{H}^{s_c}_x} + \|u_1\|_{\dot{H}^{s_c-1}_x}. \end{equation} \end{lemma} This estimate is implicit in \cite{KlainermanMachedon93} as discussed in \cite{LindbladSogge95}. We note that the bound \eqref{E:radial wave} is true for larger values of $s_c$ without the assumption of radiality (cf.~\eqref{E:wave strichartz}). \subsection{Well-posedness results} Global well-posedness and scattering for NLW with small initial data in critical Sobolev spaces is due to Lindblad and Sogge, \cite{LindbladSogge95}, in the super-conformal case ($\frac12 < s_c < 1$) and to Strauss, \cite{Strauss81}, in the conformal case ($s_c=\frac12$). In the sub-conformal case, the Lorentz symmetry may be used to construct examples which show that such a small data theory is impossible (cf.\ \cite{LindbladSogge95}). This motivates the consideration of \emph{radial} initial data when $s_c<\frac12$. However, even in order to construct global solutions in $L^{\frac{p(d+1)}2}_{t,x}$ from small radial data, we need to impose the additional condition $s_c > \frac1{2d}$. This originates in the fact that $\frac{p(d+1)}2$ with $p=\frac{4d}{d^2-1}$ (i.e. $s_c=\frac1{2d}$) corresponds to an endpoint in the cone restriction conjecture. Global well-posedness and scattering for NLW for $\frac1{2d}<s_c<\frac12$ with small radial data in critical Sobolev spaces may again be found in \cite{LindbladSogge95}. We summarize below the small data theory that we will use. \begin{prop}[Critical small data theory for wave] \label{P:Hsc sdt} Fix $d \geq 2$ and $\frac1{2d} < s_c < 1$ and let $p = \frac4{d-2s_c}$. Let $(u_0, u_1)\in \dot H^{s_c}_x\times \dot H^{s_c-1}_x$ with $u_0$ and $u_1$ radial when $\frac1{2d}<s_c<\frac12$. 
There exists $\eta_0$ depending on $d$ and $p$ so that if $\eta \leq \eta_0$ and $$ \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d)} \leq \eta $$ and additionally $$ \||\nabla|^{s_c-\frac12} {\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}([0,T] \times {\mathbb{R}}^d)} \leq \eta \qquad \text{if} \qquad \tfrac12\leq s_c<1, $$ then there is a unique solution $u$ to \eqref{E:eqn} with $m=0$ on $[0,T] \times {\mathbb{R}}^d$. Moreover, $u$ satisfies \begin{align} \|u\|_{L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d)} &\lesssim \eta \label{E:small data Lp}\\ \|\nabla_{t,x} u\|_{L^{\infty}_t \dot H^{s_c-1}_x([0,T] \times {\mathbb{R}}^d)} &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x}.\label{E:small data Hsc} \end{align} If in addition $(u_0,u_1) \in \dot H^1_x \times L^2_x$, then \begin{equation} \label{E:small data H1} \|\nabla_{t,x} u\|_{L^\infty_tL^2_x([0,T] \times {\mathbb{R}}^d)} \lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x}. \end{equation} In particular, if $\|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} \leq \eta_1$ for some constant $\eta_1 = \eta_1(d,p) > 0$, then $u$ is global and obeys the estimates above on ${\mathbb{R}} \times {\mathbb{R}}^d$. \end{prop} \begin{proof} We begin by reviewing the proof of \eqref{E:small data Lp} and \eqref{E:small data Hsc} from \cite{LindbladSogge95} and then give the additional arguments needed to establish \eqref{E:small data H1}. Throughout the proof, all spacetime norms will be on $[0,T]\times{\mathbb{R}}^d$, unless we specify otherwise. Using Lemma~\ref{L:Strichartz} or Lemma~\ref{L:radial wave} (depending on $s_c$), we have \begin{align*} &\|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}} + \||\nabla|^{s_c-\frac12}{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x}.
\end{align*} Thus, without loss of generality we may assume that \begin{equation} \label{E:eta lesssim Hsc} \eta_0 \lesssim \min\{1, \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x}\}. \end{equation} Let $v \mapsto \Phi_0(v)$ be the mapping given by $$ \Phi_0(v)(t): = {\mathcal{S}}_0(t)(u_0,u_1) + \int_0^t |\nabla|^{-1}\sin(|\nabla|(t-s))(|v|^p v)(s)\, ds. $$ We will use a contraction mapping argument to prove that $\Phi_0$ has a fixed point. We start with the case when $\frac1{2d} < s_c < \frac12$. We define $$ B := \Bigl\{v \in L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d) :\, \|v\|_{L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d)} \leq 2 \eta\Bigr\}. $$ By Lemma~\ref{L:inhomog st} and our hypotheses, for $v\in B$ we have \begin{align*} \|\Phi_0(v)\|_{L^{\frac{p(d+1)}2}_{t,x}} &\leq \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}} + C_{d,p}\||v|^pv\|_{L^{\frac{p(d+1)}{2(p+1)}}_{t,x}} \\ &\leq \eta + C_{d,p}\|v\|_{L^{\frac{p(d+1)}2}_{t,x}}^{p+1}\\ &\leq \eta + C_{d,p}\eta^{p+1}. \end{align*} Thus for $\eta$ sufficiently small, $\Phi_0$ maps $B$ into itself. To see that $\Phi_0$ is a contraction on $B$ with respect to the metric given by $d(u,v) = \|u-v\|_{L^{\frac{p(d+1)}2}_{t,x}}$, we apply Lemma~\ref{L:inhomog st} and use H\"older's inequality: \begin{align*} \|\Phi_0(u) - \Phi_0(v)\|_{L^{\frac{p(d+1)}2}_{t,x}} &\leq C_{d,p} \||u|^pu - |v|^p v\|_{L^{\frac{p(d+1)}{2(p+1)}}_{t,x}} \\ &\leq C_{d,p} \||u|+|v|\|^p_{L^{\frac{p(d+1)}2}_{t,x}}\|u-v\|_{L^{\frac{p(d+1)}2}_{t,x}}\\ &\leq C_{d,p} \eta^p \|u-v\|_{L^{\frac{p(d+1)}2}_{t,x}}. \end{align*} Thus for $\eta$ sufficiently small, $\Phi_0$ is a contraction on $B$ and so it has a fixed point $u$ in $B$. 
Moreover, by Lemma~\ref{L:Strichartz}, the fixed point satisfies \begin{align*} \|\nabla_{t,x} u\|_{C_t \dot H^{s_c-1}_x} &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} + \||u|^pu\|_{L^{\frac{p(d+1)}{2(p+1)}}_{t,x}} \\ &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} + \eta^{p+1}. \end{align*} The bound \eqref{E:small data Hsc} then follows from \eqref{E:eta lesssim Hsc}. We now turn to the case when $\frac12 \leq s_c < 1$. We define \begin{align*} B := \Bigl\{v \in L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d) :\, \|v\|_{L^{\frac{p(d+1)}2}_{t,x}} + \||\nabla|^{s_c-\frac12} v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\leq 3\eta\Bigr\}. \end{align*} By Lemma~\ref{L:Strichartz} and the fractional chain rule, for $v\in B$ we obtain \begin{align*} \|\Phi_0(v)\|_{L^{\frac{p(d+1)}2}_{t,x}} &+\||\nabla|^{s_c-\frac12}\Phi_0(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \\ &\leq \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}} +\||\nabla|^{s_c-\frac12}{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \\ &\quad+ C_{d,p}\||\nabla|^{s_c-\frac12}(|v|^pv)\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}} \\ &\leq 2\eta + C_{d,p}\||\nabla|^{s_c-\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\|v\|_{L^{\frac{p(d+1)}2}_{t,x}}^p \\ &\leq 2\eta + C_{d,p}\eta^{p+1}. \end{align*} Thus if $\eta$ is sufficiently small, $\Phi_0$ maps $B$ into itself. Since in this case we are considering $\frac4{d-1}\leq p<\frac4{d-2}$, by H\"older's inequality we have $$ \sup_{x_0 \in {\mathbb{R}}^d} \|v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))} \lesssim T^{s_c-\frac12}\sup_{x_0 \in {\mathbb{R}}^d}\|v\|_{L^{\frac{p(d+1)}2}_{t,x}(\Gamma(T,x_0))} \lesssim T^{s_c-\frac12}\eta $$ for any $v \in B$. Thus we may consider the metric on $B$ given by $$ d(u,v) = \sup_{x_0 \in {\mathbb{R}}^d} \|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))}. 
$$ By \eqref{E:H12 strichartz}, finite speed of propagation, and H\"older's inequality, \begin{align*} \|\Phi_0(u)-\Phi_0(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))} &\lesssim \||u|^pu -|v|^pv\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}(\Gamma(T,x_0))}\\ &\lesssim \|(|u|+|v|)^p\|_{L^{\frac{d+1}2}_{t,x}(\Gamma(T,x_0))} \|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))} \\ &\lesssim \eta^p\|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))}, \end{align*} for each $x_0 \in {\mathbb{R}}^d$. Therefore if $\eta$ is sufficiently small, $\Phi_0$ is a contraction on $B$ with respect to the metric $d$ and so $\Phi_0$ has a fixed point $u$ in $B$. By Lemma~\ref{L:Strichartz} and the fractional chain rule, $u$ satisfies \begin{align*} \|\nabla_{t,x}u\|_{C_t\dot H^{s_c-1}_x} &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} + \||\nabla|^{s_c-\frac12}(|u|^pu)\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}} \\ &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} + \eta^{p+1}. \end{align*} The bound \eqref{E:small data Hsc} then follows from \eqref{E:eta lesssim Hsc}. Finally, we turn to the persistence of regularity statement \eqref{E:small data H1}. For $\frac1{2d} < s_c < 1$, by \eqref{E:H12 strichartz}, the fractional chain rule, and \eqref{E:small data Lp}, \begin{align*} \|\nabla_{t,x} u\|_{C_tL^2_x} + \| |\nabla|^{\frac12}u \|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} &\lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x} + \| |\nabla|^{\frac12} (|u|^pu) \|_{L^{\frac{2(d+1)}{d+3}}_{t,x}} \\ &\lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x} + \| |\nabla|^{\frac12}u \|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\|u\|_{L^{\frac{p(d+1)}2}_{t,x}}^p \\ &\lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x} + \eta^p \| |\nabla|^{\frac12}u \|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}. \end{align*} Therefore \eqref{E:small data H1} holds if $\eta$ is chosen sufficiently small. This completes the proof. 
\end{proof} The local existence theory of \eqref{E:eqn} in $H^1_x\times L_x^2$ is well-known (cf.\ \cite{GV:IHP89}, \cite{GV:MathZ85}); however, we need a result which is uniform in $m$. This is the topic of the following proposition: \begin{prop}[$H^1_x\times L_x^2$ local well-posedness for \eqref{E:eqn}] \label{P:lwp} Let $d\geq 2$, $m \in [0,1]$, $0 < s_c < 1$, and take $p = \frac4{d-2s_c}$. Let $u_0,u_1$ be initial data satisfying $$ \|u_0\|_{H^1_x} + \|u_1\|_{L^2_x} \leq M < \infty. $$ Then there exist $T\gtrsim_{p,d} \min\{ M^{-1/(1-s_c)}, M^{-p(d+1)/2}\}$, independent of $m$, and a unique solution $u$ to \eqref{E:eqn} on $[0,T]$. Furthermore, this solution satisfies \begin{align} \|\nabla_{t,x}u\|_{C_tL_x^2([0,T] \times {\mathbb{R}}^d)} &\lesssim M \label{E:H1lwp u in H1}\\ \|u\|_{C_tL^2_x([0,T] \times {\mathbb{R}}^d)} &\lesssim (1+T)M \label{E:H1lwp mass}\\ \|u\|_{L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d)} &\lesssim \max\{T^{1-s_c},T^{\frac2{p(d+1)}}\}M, \label{E:H1lwp u in Lq} \end{align} with the implicit constants depending only on $d,p$. \end{prop} \begin{remark} Well-posedness in $H^1_x\times L_x^2$ ensures that conservation of energy, which follows from an elementary computation for smooth, decaying solutions, continues to hold for solutions in $C_t(H^1_x\times L_x^2)$. \end{remark} \begin{proof}[Proof of Proposition~\ref{P:lwp}] Throughout the proof, all spacetime norms will be over the set $[0,T]\times{\mathbb{R}}^d$. We use the contraction mapping argument for the map $v \mapsto \Phi_m(v)$ given by $$ \Phi_m(v)(t): = {\mathcal{S}}_m(t)(u_0,u_1) + \int_0^t \langle\nabla\rangle_m^{-1}\sin\bigl((t-s)\langle\nabla\rangle_m\bigr)(|v|^pv)(s)\, ds. $$ Our analysis breaks into two cases. 
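Before treating the two cases, we record the exponent identity that underlies the time-H\"older step in the first case; it follows directly from $s_c = \frac d2 - \frac2p$:
$$
\frac{2}{p(d+1)} - \frac{p(d+1)(d-2)-4d}{2p(d+1)} = \frac{4-p(d-2)}{2p} = 1-s_c.
$$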
If $\frac{d+2}{2d} < s_c < 1$ (that is, $\frac{4d}{(d-2)(d+1)}<p<\frac4{d-2}$), we define \begin{align*} B := \Bigl\{v \in C_tH^1_x([0,T] \times {\mathbb{R}}^d) : \, &\|\nabla_{t,x} v\|_{C_tL^2_x} + \|\langle\nabla\rangle_m^{\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \\ &+ \|v\|_{L^{\frac{2p(d+1)}{p(d+1)(d-2) - 4d}}_tL^{\frac{p(d+1)}2}_x} \leq C_{d,p}M \Bigr\}. \end{align*} By H\"older's inequality, for $v \in B$ we have \begin{equation} \label{E:B subset Lq} \|v\|_{L^{\frac{p(d+1)}2}_{t,x}} \leq T^{1-s_c}\|v\|_{L^{\frac{2p(d+1)}{p(d+1)(d-2) - 4d}}_tL^{\frac{p(d+1)}2}_x}\lesssim_{d,p} T^{1-s_c} M. \end{equation} Using this together with Lemma~\ref{L:Strichartz} and the fractional chain rule, we obtain \begin{align*} &\|\nabla_{t,x}\Phi_m(v)\|_{C_tL^2_x} + \|\langle\nabla\rangle_m^{\frac12} \Phi_m(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} + \|\Phi_m(v)\|_{L^{\frac{2p(d+1)}{p(d+1)(d-2) - 4d}}_tL^{\frac{p(d+1)}2}_x}\\ &\qquad \lesssim_{d,p} \|(u_0,u_1)\|_{H^1_x \times L^2_x} + \|\langle\nabla\rangle_m^{\frac12}(|v|^pv)\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}} \\ &\qquad \lesssim_{d,p} M +\|\langle\nabla\rangle_m^{\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\|v\|_{L^{\frac{p(d+1)}2}_{t,x}}^p\\ &\qquad \lesssim_{d,p} M + T^{(1-s_c)p}M^{p+1}. \end{align*} Thus for $T$ sufficiently small depending on $d$, $p$, and $M$, $\Phi_m$ maps $B$ into itself. Next, we will show that $\Phi_m$ is a contraction with respect to the metric given by \begin{equation}\label{ssdist} d(u,v) =\|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}. \end{equation} We start by noting that, by the fundamental theorem of calculus, \begin{equation} \label{E:B subset L2} \|v\|_{C_tL^2_x} \leq \|u_0\|_{L^2_x} + T\|v_t\|_{C_tL^2_x} \lesssim_{d,p} (1+T)M \end{equation} for any $v\in B$. Thus, by H\"older and Sobolev embedding, \begin{align*} \|v\|_{ L^{\frac{2(d+1)}{d-1}}_{t,x}} \lesssim T^{\frac{d-1}{2(d+1)}} \|v\|_{C_tH^1_x} \lesssim_{d,p} T^{\frac{d-1}{2(d+1)}} (1+T) M. 
\end{align*} To continue, we use \eqref{E:H12 strichartz}, \eqref{E:B subset Lq}, and H\"older's inequality to estimate \begin{align*} \|\Phi_m(u) - \Phi_m(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} &\lesssim \||u|^p u - |v|^p v\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}}\\ &\lesssim \|(|u|+|v|)^p\|_{L^{\frac{d+1}2}_{t,x}} \|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\\ &\lesssim_{d,p} T^{(1-s_c)p} M^p\|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \end{align*} for any $u,v \in B$. Therefore for $T$ sufficiently small, $\Phi_m$ is a contraction and consequently has a fixed point $u \in B$. Claims \eqref{E:H1lwp mass} and \eqref{E:H1lwp u in Lq} follow from \eqref{E:B subset Lq} and \eqref{E:B subset L2}. It remains to treat the case $0 <s_c \leq\frac{d+2}{2d}$. This time, we define $$ B := \Bigl\{v \in C_t H^1_x([0,T] \times {\mathbb{R}}^d) : \|\nabla_{t,x} v\|_{C_tL^2_x} +\|\langle\nabla\rangle_m^{\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \leq C_{d,p}M\Bigr\}. $$ In this case $2 < \frac{p(d+1)}2 \leq \frac{2d}{d-2}$ and so, using H\"older, Sobolev embedding, and \eqref{E:B subset L2}, we obtain \begin{align} \label{E:B subset Lq small p} \|v\|_{L^{\frac{p(d+1)}2}_{t,x}} &\lesssim T^{\frac2{p(d+1)}}\|v\|_{C_t H^1_x} \lesssim_{d,p} T^{\frac2{p(d+1)}} (1+T) M \end{align} for any $v\in B$. Arguing as in the previous case, and substituting \eqref{E:B subset Lq small p} for \eqref{E:B subset Lq}, we derive \begin{align*} \|\nabla_{t,x}\Phi_m(v)\|_{C_tL^2_x} &+ \|\langle\nabla\rangle_m^{\frac12}\Phi_m(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \\ &\lesssim \|(u_0,u_1)\|_{H^1_x \times L^2_x} + \|\langle\nabla\rangle_m^{\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\|v\|^p_{L^{\frac{p(d+1)}2}_{t,x}} \\ & \lesssim _{d,p} M + T^{\frac2{d+1}}(1+T)^{p}M^{p+1}. \end{align*} Thus if $T$ is sufficiently small, we again obtain that $\Phi_m$ maps $B$ into itself. The proof that $\Phi_m$ is a contraction on $B$ with respect to the metric \eqref{ssdist} is exactly the same as in the previous case. This completes the proof. 
\end{proof} Because \eqref{E:eqn} obeys finite speed of propagation, we may localize in space in Proposition~\ref{P:lwp}. In this way we obtain the following scale-invariant lower bound on the blowup rate. \begin{corollary}[$H^1_\textrm{loc}\times L^2_\textrm{loc}$ local well-posedness and blowup criterion] \label{C:lwp loc} Let $d\geq 2$, $m \in [0,1]$, $0 < s_c < 1$, and take $p=\frac{4}{d-2s_c}$. Let $(u_0,u_1)$ be initial data satisfying $\|u_0\|_{H^1_\textrm{loc}} + \|u_1\|_{L^2_\textrm{loc}} \leq M < \infty$, where we define $$ \|f\|_{L^2_\textrm{loc}}^2 := \sup_{x_0 \in {\mathbb{R}}^d} \int_{|x-x_0|\leq 1} |f(x)|^2\, dx \qtq{and} \|f\|_{H^1_\textrm{loc}}^2 := \|f\|_{L^2_\textrm{loc}}^2 + \|\nabla f\|_{L^2_\textrm{loc}}^2. $$ Then there exist $T_0 > 0$, depending only on $d,p,M$ and a unique strong solution $u:[0,T_0] \times {\mathbb{R}}^d \to {\mathbb{R}}$ to \eqref{E:eqn} satisfying $$ \|u\|_{C_tH^1_\textrm{loc}([0,T_0] \times {\mathbb{R}}^d)} + \|u_t\|_{C_tL^2_\textrm{loc}([0,T_0] \times {\mathbb{R}}^d)} \lesssim M. $$ Furthermore, if $u$ blows up at time $0<T_* < \infty$ and $(T_*,x_0)$ lies on the forward-in-time blowup surface of $u$, then \begin{align} \label{E:H1loc lb} 1 \lesssim (T_*-t)^{-2s_c} \int_{|x-x_0| \leq T_*-t} |u(t,x)|^2 + (T_*-t)^2|\nabla_{t,x} u(t,x)|^2 \, dx \end{align} for all $t>0$ such that $T_*-1 \leq t < T_*$. The implicit constant depends only on $d,p$. \end{corollary} We note that in Theorem~$1'$ in \cite{MerleZaagAJM} and Theorem~1(ii) of \cite{MerleZaagMA}, Merle and Zaag claim that an alternative blowup criterion holds, namely, \begin{equation} \label{E:MerleZaag H1 loc blowup} 1 \lesssim \sup_{x_0 \in {\mathbb{R}}^d} (T_*-t)^{d-2s_c}\int_{|x-x_0|\leq 1}|u(t,x)|^2 + (T_*-t)^2|\nabla_{t,x}u(t,x)|^2\, dx. \end{equation} This lower bound is also repeated as equation (1.8) in \cite{MerleZaagIMRN}. 
It seems that in all three instances this is essentially a typo, since \eqref{E:H1loc lb} is equivalent to the lower bound in self-similar variables given in Theorem~1 of \cite{MerleZaagAJM} and Theorem~1(i) of \cite{MerleZaagMA}, while \eqref{E:MerleZaag H1 loc blowup} is not. Moreover, the scaling argument that Merle and Zaag suggest to prove \eqref{E:MerleZaag H1 loc blowup} seems only to establish \eqref{E:H1loc lb}. It is not difficult to construct a counterexample to \eqref{E:MerleZaag H1 loc blowup}. For a general subluminal blowup surface $t=\sigma(x)$, Kichenassamy \cite{Kich} (see also \cite{KVblowup}) has constructed solutions with $u(t,x)\sim (\sigma(x)-t)^{-2/p}$. Whenever the blowup surface is smooth with non-zero curvature at the first blowup point, this is inconsistent with \eqref{E:MerleZaag H1 loc blowup}. We turn now to the proof of Corollary~\ref{C:lwp loc}. \begin{proof} Both conclusions may be proved by applying Proposition~\ref{P:lwp} to spatially truncated initial data and then invoking finite speed of propagation. Since the proof of the first conclusion is a little simpler, we give the details only for the second. We argue by contradiction. To this end, let $T_*-1 < t_0 < T_*$ and let $x_0\in {\mathbb{R}}^d$ be such that $(T_*,x_0)$ lies on the forward-in-time blowup surface of $u$. By space-translation invariance, we may assume $x_0=0$. Suppose that $u$ satisfies \begin{align}\label{uT*} (T_*-t_0)^{-2s_c} \int_{|x| \leq T_*-t_0} |u(t_0,x)|^2 + (T_*-t_0)^2|\nabla_{t,x} u(t_0,x)|^2 \, dx < \eta, \end{align} for some small constant $\eta$ to be determined in a moment. Now set $$ \tilde u(t,x):=(T_*-t_0)^{\frac2p}u(t_0+ (T_*-t_0) t, (T_*-t_0) x). $$ A simple computation shows that $\tilde u$ satisfies \eqref{E:eqn} with $m$ replaced by $\tilde m:=(T_*-t_0)m$. Moreover, as $(T_*,0)$ belongs to the forward-in-time blowup surface of $u$, we see that $(1,0)$ lies on the blowup surface of $\tilde u$. 
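For completeness, we record the computation behind the equation satisfied by $\tilde u$: writing $\lambda := T_*-t_0$ and noting that $\lambda^{\frac2p+2}\,(|u|^pu)(t_0+\lambda t,\lambda x) = (|\tilde u|^p\tilde u)(t,x)$, we find
\begin{align*}
\tilde u_{tt} - \Delta \tilde u &= \lambda^{\frac2p+2}\bigl(u_{tt} - \Delta u\bigr)(t_0+\lambda t,\lambda x)\\
&= \lambda^{\frac2p+2}\bigl(|u|^pu - m^2u\bigr)(t_0+\lambda t,\lambda x) = |\tilde u|^p\tilde u - (\lambda m)^2\tilde u,
\end{align*}
which is \eqref{E:eqn} with mass $\tilde m = \lambda m$.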
Changing variables, \eqref{uT*} becomes $$ \int_{|x| \leq 1} |\tilde u(0)|^2 + |\nabla_{t,x} \tilde u(0)|^2 \, dx < \eta. $$ Thus, by the dominated convergence theorem, there exists $0<\delta<\frac12$ such that \begin{equation} \label{E:u small 1} \int_{|x| \leq 1+\delta} |\tilde u(0,x)|^2 + |\nabla_{t,x} \tilde u(0,x)|^2 \, dx < 2\eta. \end{equation} To continue, we define $v_0$ and $v_1$ such that $v_0 = \tilde u(0)$ and $v_1 = \tilde u_t(0)$ on $|x| \leq 1+\delta$, $v_0 = v_1 = 0$ on $|x| \geq 2$, and \begin{equation}\label{E:tilde u small} \|v_0\|_{H^1_x}^2 + \|v_1\|_{L^2_x}^2 \lesssim \eta. \end{equation} (For example, one can take $v_0$ to be the harmonic function on the annulus $1+\delta<|x|<2$ that matches these boundary values.) For $\eta$ sufficiently small depending on $d,p, \delta$ (but not on $\tilde m\in[0,1]$), Proposition~\ref{P:lwp} yields a solution to the initial-value problem $$ v_{tt} - \Delta v + \tilde m^2 v = |v|^p v \qtq{with} v(0) = v_0 \qtq{and} v_t(0) = v_1 $$ on $[0,1+\delta]\times {\mathbb{R}}^d$. Thus by finite speed of propagation, $\tilde u$ may be extended to a strong solution on the backward light cone $\Gamma(1+\delta,0)$, which contradicts the fact that $(1,0)$ lies on the blowup surface of $\tilde u$. \end{proof} \begin{corollary}[$\dot H^1_x\times L_x^2$ blowup criterion] \label{C:lwp dot} Let $d\geq 3$, $m \in [0,1]$, $0 < s_c < 1$, and take $p=\frac4{d-2s_c}$. Given initial data $(u_0,u_1)\in \dot H^1_x\times L_x^2$, if the solution $u$ to \eqref{E:eqn} blows up at time $0<T_* < \infty$, then \begin{align} \label{E:H1dot lb} 1 \lesssim (T_*-t)^{2-2s_c}\int_{{\mathbb{R}}^d}|\nabla_{t,x}u(t,x)|^2\, dx \end{align} for all $t>0$ such that $T_*-1 \leq t < T_*$. The implicit constant depends only on $d,p$. \end{corollary} \begin{proof} Note that by H\"older's inequality and Sobolev embedding, $\dot H^1_x \subset H^1_\textrm{loc}$. The claim now follows from Corollary~\ref{C:lwp loc}. 
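Concretely, for $d\geq 3$ and any $x_0\in{\mathbb{R}}^d$, H\"older's inequality on the unit ball followed by Sobolev embedding gives
$$
\|f\|_{L^2(|x-x_0|\leq 1)} \lesssim_d \|f\|_{L^{\frac{2d}{d-2}}_x} \lesssim_d \|\nabla f\|_{L^2_x},
$$
so that $\|f\|_{H^1_\textrm{loc}} \lesssim_d \|f\|_{\dot H^1_x}$ and Corollary~\ref{C:lwp loc} applies with $M \lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x}$.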
\end{proof} \begin{remark} Note that in two dimensions, $\dot H^1_x$ cannot be realized as a space of distributions. Moreover, it is not difficult to construct concrete initial data that show that \eqref{E:H1dot lb} does not hold: Given $R>1$, let $$ u_1:= 0 \qtq{and} u_0(x) := \begin{cases} 0 & : |x| > R \\[1mm] -\frac{\log(|x|/R)}{\sqrt{\log(R)}} & : 1 \leq |x| \leq R \\[2mm] \sqrt{\log(R)} & : |x| < 1.\end{cases} $$ Note that $\int |u_1|^2 + |\nabla u_0|^2\,dx \sim 1$. However, as $R\to\infty$ the corresponding solution blows up more and more quickly; indeed, by solving the ODE and using finite speed of propagation, we see that the lifespan cannot exceed $$ \int_{A}^\infty \bigl[ \tfrac{2}{p+2} \bigl(u^{p+2} - A^{p+2}\bigr)\bigr]^{-1/2} du \sim A^{-\frac{p}2} \quad\text{where $A:=\sqrt{\log(R)}$.} $$ \end{remark} The following result shows that blowup must be accompanied by the blowup of the $\dot H^1_x$ norm of $u$. In this sense, while non-quantitative, it is a strengthening of Corollary~\ref{C:lwp dot}, which provides a lower bound on the full spacetime gradient of $u$. \begin{corollary}\label{C:ee blowup} Let $d \geq 2$, $m \in [0,1]$, and $0 < s_c < 1$. Set $p = \frac4{d-2s_c}$. Let $(u_0,u_1) \in H^1_x \times L^2_x$ and assume that the maximal-lifespan solution $u$ to \eqref{E:eqn} cannot be extended past time $0<T_* < \infty$. Then $$ \lim_{t \uparrow T_*} \|\nabla u(t)\|_{L^2_x} = \infty. $$ The same conclusion holds if the initial displacement $u_0$ merely belongs to $\dot H^1_x\cap \dot H^{s_c}_x$. \end{corollary} \begin{proof} By Proposition~\ref{P:lwp}, the solution $u$ can be extended as long as $(u,u_t)$ remains bounded in $H^1_x\times L^2_x$. Thus, as $u$ cannot be extended past time $0<T_*<\infty$, we must have \begin{align}\label{E:all to infty} \lim_{t \uparrow T_*} \bigl\{\|u(t)\|_{H^1_x}+ \|u_t(t)\|_{L_x^2} \bigr\}= \infty. \end{align} Let $\chi_R:=\phi(x/R)$ be a smooth cutoff to the ball of radius $R$.
Combining dominated convergence with Proposition~\ref{P:lwp}, we can find $R>10 T_*$ large enough so that initial data $\tilde u_0:=(1-\chi_R)u_0$ and $\tilde u_1:=(1-\chi_R)u_1$ lead to a solution $\tilde u$ up to time $2T_*$. Moreover, $\tilde u$ remains uniformly bounded in $H^1_x\times L_x^2$ on $[0,T_*]$ and so, by conservation of energy, the potential energy of $\tilde u$ is also bounded on $[0,T_*]$. By finite speed of propagation, the original solution $u$ agrees with $\tilde u$ on $[0, T_*]\times\{|x|\geq 3R\}$, and so inherits these bounds; in particular, \begin{align}\label{inherited bounds} \|u\|_{L_t^\infty L_x^2([0, T_*]\times\{|x|\geq 3R\})} + \|u\|_{L_t^\infty L_x^{p+2}([0, T_*]\times\{|x|\geq 3R\})}<\infty. \end{align} When $m>0$, conservation of energy and \eqref{E:all to infty} dictate \begin{align}\label{pe blowup} \lim_{t \uparrow T_*} \|u(t)\|_{L^{p+2}_x}^{p+2} =\infty. \end{align} Combining this with \eqref{E:all to infty}, we conclude \begin{align}\label{pe blowup'} \lim_{t \uparrow T_*} \|\chi_{6R}u(t)\|_{L^{p+2}_x}^{p+2} =\infty. \end{align} This conclusion also holds when $m=0$. Indeed, the argument above is applicable to all sequences $t_n \uparrow T_*$ for which $\|\nabla_{t,x} u(t_n)\|_{L^2_x}\to \infty$. On sequences where $\|\nabla_{t,x} u(t_n)\|_{L^2_x}$ is bounded, \eqref{E:all to infty} guarantees $\| u(t_n)\|_{L^2_x} \to \infty$. However, in this case \eqref{pe blowup'} follows by using the $L_t^\infty L_x^2$ estimate in \eqref{inherited bounds} and H\"older's inequality. Using the Gagliardo--Nirenberg inequality followed by Lemma~\ref{L:Poincare} (on the ball $\{|x|\leq 24R\}$), we obtain \begin{align*} \|\chi_{6R}u(t)\|_{L^{p+2}_x}^{p+2}&\lesssim \|\chi_{6R}u(t)\|_{L^2_x}^{p(1-s_c)} \|\nabla [\chi_{6R}u(t)]\|_{L^2_x}^{\frac{pd}2}\\ &\lesssim R^{p(1-s_c)}\|\nabla [\chi_{6R}u(t)]\|_{L_x^2}^{p+2}\\ &\lesssim R^{p(1-s_c)} \bigl[\|\nabla u(t)\|_{L_x^2}+ R^{-1}\|u(t)\|_{L^2(|x|\geq 3R)} \bigr]^{p+2}. 
\end{align*} Combining this with \eqref{inherited bounds} and \eqref{pe blowup'}, we derive the claim. This completes the proof of the corollary for data $(u_0, u_1)\in H^1_x\times L^2_x$. For initial data $u_0\in \dot H^1_x\cap \dot H^{s_c}_x$ we observe that for $R>10T_*$ sufficiently large, the restriction of $u_0$ to the region $|x|\geq R$ is small in $H^1_{\textrm{loc}}$. Thus by Proposition~\ref{P:lwp}, the solution extends to the region $[0, 2T_*]\times\{|x|\geq 3R\}$ in the class $H^1_{\textrm{loc}}\times L^2_{\textrm{loc}}$. Now consider the solution $v$ with initial data $\chi_{10R} u_0$ and $\chi_{10R} u_1$. By applying the first version of this corollary, we see that $\|\nabla v(t)\|_{L^2_x}$ diverges as $t\to T_*$. Moreover, by the bounds on $v$ where $|x|\geq 3R$, this divergence must occur in the region $|x|\leq 6R$ where finite speed of propagation guarantees $v\equiv u$. \end{proof} We will also need a stability result for the nonlinear wave equation in the weak topology. \begin{lemma} \label{L:weak stability} Let $d\geq2$, $0 < s_c < 1$, and set $p = \frac4{d-2s_c}$. 
Let $\{m_n\}_{n\geq 1},\{\lambda_n\}_{n\geq 1} \subset [0,1]$ be sequences with $\lim m_n = \lim \lambda_n = 0$ and let $\{(u_0^{(n)}, u_1^{(n)})\}_{n\geq 1}$ be a sequence of initial data such that \begin{equation} \label{E:un to u at 0} \nabla u^{(n)}_0 \rightharpoonup \nabla u_0 \qtq{and} u^{(n)}_1 \rightharpoonup u_1 \qtq{weakly in $L^2_x$.} \end{equation} Assume also that the sequence $\{u^{(n)}\}_{n\geq 1}$ of solutions to $$ \partial_{tt} u^{(n)} - \Delta u^{(n)} + m_n^2 u^{(n)} = |u^{(n)}|^p u^{(n)} \qtq{on} [0,T) \times {\mathbb{R}}^d $$ with initial data $(u_0^{(n)}, u_1^{(n)})$ at time $t=0$ satisfy \begin{equation} \label{E:bounded sequences} \begin{aligned} \|\nabla_{t,x}u^{(n)}\|_{C_tL^2_x([0,T) \times {\mathbb{R}}^d)} &+ \||\nabla|^{s_c} u^{(n)}\|_{C_tL^2_x([0,T) \times {\mathbb{R}}^d)} \\ &+ \|\langle\nabla\rangle_{\lambda_n}^{s_c-1}u^{(n)}_t\|_{C_tL^2_x([0,T) \times {\mathbb{R}}^d)} \leq M<\infty. \end{aligned} \end{equation} Then the initial-value problem \begin{equation} \label{E:nlw} u_{tt} - \Delta u = |u|^p u \qtq{with} u(0) = u_0 \qtq{and} \partial_t u(0) = u_1 \end{equation} has a strong solution on $[0,T) \times {\mathbb{R}}^d$ with $(u,u_t) \in C_t[\dot H^1_x \times L^2_x] \cap C_t[\dot H^{s_c}_x \times \dot H^{s_c-1}_x]$. Furthermore, for each $t \in [0,T)$, we have \begin{equation} \label{E:un to u at t} (u^{(n)}(t),\partial_t u^{(n)}(t)) \rightharpoonup (u(t),\partial_t u(t)) \qtq{weakly in $\dot H^1_x \times L^2_x$.} \end{equation} Consequently, the limiting solution $u$ obeys the bounds \begin{equation} \label{E:stability bounds} \|\nabla_{t,x} u\|_{C_tL^2_x([0,T)\times {\mathbb{R}}^d)} + \|\nabla_{t,x} u\|_{C_t \dot H^{s_c-1}_x([0,T) \times {\mathbb{R}}^d)} \leq M. \end{equation} \end{lemma} \begin{proof} We will prove that there exists a time $0<t_0 < \min\{1,T\}$, depending only on $M$, such that $u$ exists up to time $t_0$ and satisfies \eqref{E:un to u at t} for each $t \in [0,t_0]$. 
The lemma follows from this and a simple iterative argument. We will construct the solution $u$ on $[0, t_0]\times{\mathbb{R}}^d$ by gluing together solutions defined in light cones. To this end, let $x_0 \in {\mathbb{R}}^d$ and let $\phi$ be a smooth cutoff such that $\phi(x) = 1$ for $|x| \leq 1$ and $\phi(x) = 0$ for $|x| \geq 2$. For $j=0,1$ we define the initial data \begin{gather*} u_{x_0,j}(x) := \phi(x-x_0)u_j(x) \qtq{and} u^{(n)}_{x_0,j}(x) := \phi(x-x_0)u^{(n)}_j(x). \end{gather*} By \eqref{E:un to u at 0}, \begin{equation} \label{E:wk conv after loc} (u^{(n)}_{x_0,0},u^{(n)}_{x_0,1}) \rightharpoonup (u_{x_0,0},u_{x_0,1}) \qtq{weakly in $\dot H^1_x \times L^2_x$} \end{equation} and so, by Rellich--Kondrashov, \begin{equation} \label{E:H12 conv after loc} (u^{(n)}_{x_0,0},u^{(n)}_{x_0,1}) \to (u_{x_0,0},u_{x_0,1}) \qtq{in $\dot H^{\frac12}_x \times \dot H^{-\frac12}_x$.} \end{equation} Furthermore, by \eqref{E:bounded sequences}, \begin{equation} \label{E:H1 after loc} \|u_{x_0,0}\|_{H^1_x} + \|u_{x_0,1}\|_{L^2_x} + \|u^{(n)}_{x_0,0}\|_{H^1_x} + \|u^{(n)}_{x_0,1}\|_{L^2_x} \lesssim M. 
\end{equation} Thus, by Proposition~\ref{P:lwp} there exists a time $0<t_0< 1$, depending only on $M$, such that the solutions $u_{x_0}$ and $u_{x_0}^{(n)}$ to $$ \begin{cases} \partial_{tt}u_{x_0} - \Delta u_{x_0} = |u_{x_0}|^pu_{x_0} \\ u_{x_0}(0) = u_{x_0,0}, \quad \partial_t u_{x_0}(0) = u_{x_0,1} \end{cases} \quad \begin{cases} \partial_{tt}u^{(n)}_{x_0} - \Delta u^{(n)}_{x_0} + m_n^2 u^{(n)}_{x_0} = |u^{(n)}_{x_0}|^pu_{x_0}^{(n)} \\ u^{(n)}_{x_0}(0) = u^{(n)}_{x_0,0}, \quad \partial_t u^{(n)}_{x_0}(0) = u^{(n)}_{x_0,1} \end{cases} $$ exist on $[0,t_0]\times {\mathbb{R}}^d$ and satisfy the bounds \begin{align} \|\nabla_{t,x} u_{x_0}\|_{C_tL^2_x([0,t_0] \times {\mathbb{R}}^d)} + \|\nabla_{t,x} u_{x_0}^{(n)}\|_{C_tL^2_x([0,t_0] \times {\mathbb{R}}^d)} &\lesssim M \label{E:ux0 H1}\\ \|u_{x_0}\|_{L^{\frac{p(d+1)}2}_{t,x}([0,t_0] \times {\mathbb{R}}^d)} + \|u^{(n)}_{x_0}\|_{L^{\frac{p(d+1)}2}_{t,x}([0,t_0] \times {\mathbb{R}}^d)} &< \eta, \label{E:uunx0 Lq} \end{align} for a small constant $\eta>0$ to be determined in a moment. Throughout the remainder of the proof, all spacetime norms will be on $[0,t_0]\times {\mathbb{R}}^d$. By H\"older and Sobolev embedding, \begin{align*} \|u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} + \|u^{(n)}_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} &\lesssim t_0^{\frac{d-1}{2(d+1)}}\Bigl(\|u_{x_0}\|_{C_tH^1_x} + \|u^{(n)}_{x_0}\|_{C_tH^1_x}\Bigr)\lesssim M.
\end{align*} By Lemma~\ref{L:Strichartz} (applied with $m = 0$), \eqref{E:H12 conv after loc}, \eqref{E:ux0 H1}, \eqref{E:uunx0 Lq}, and H\"older's inequality, \begin{align*} \|\nabla_{t,x}(u^{(n)}_{x_0} - & u_{x_0})\|_{C_t\dot H^{-\frac12}_x} + \|u^{(n)}_{x_0} - u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\\ &\lesssim \|(u^{(n)}_{x_0,0}-u_{x_0,0}, u^{(n)}_{x_0,1} - u_{x_0,1})\|_{\dot H^{\frac12}_x \times \dot H^{-\frac12}_x} + \|m_n^2u^{(n)}_{x_0}\|_{L^1_t\dot H^{-\frac12}_x} \\ &\qquad \qquad + \||u^{(n)}_{x_0}|^pu^{(n)}_{x_0} - |u_{x_0}|^pu_{x_0}\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}}\\ & \lesssim \varepsilon_n + m_n^2 t_0 \|u^{(n)}_{x_0}\|_{C_t\dot H^1_x} + \eta^p\|u^{(n)}_{x_0} - u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\\ &\lesssim \varepsilon_n + m_n^2M + \eta^p\|u^{(n)}_{x_0} - u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}, \end{align*} for some sequence $\varepsilon_n \to 0$. Thus, for $\eta$ sufficiently small, \begin{equation} \label{E:un to u locally} \|\nabla_{t,x}(u^{(n)}_{x_0} - u_{x_0})\|_{C_t\dot H^{-\frac12}_x} + \|u^{(n)}_{x_0} - u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \to 0 \qtq{as} n\to \infty. \end{equation} To conclude, by finite speed of propagation, the solution $u$ to \eqref{E:eqn} with $m=0$ exists and equals $u_{x_0}$ on $\Gamma(1,x_0)\cap([0,t_0] \times {\mathbb{R}}^d)$ for each $x_0 \in {\mathbb{R}}^d$. In particular, $u$ is a strong solution on $[0,t_0] \times {\mathbb{R}}^d$. Additionally, $u^{(n)} = u^{(n)}_{x_0}$ on $\Gamma(1,x_0) \cap ([0,t_0] \times {\mathbb{R}}^d)$ for each $x_0 \in {\mathbb{R}}^d$. Thus by \eqref{E:bounded sequences} and \eqref{E:un to u locally}, we obtain \eqref{E:un to u at t} for all $0\leq t\leq t_0$. This completes the proof. \end{proof} \section{Blowup of non-positive energy solutions of NLW}\label{S:non+blow} In this section we prove that non-positive energy solutions to the nonlinear wave equation blow up in finite time.
More precisely, we have \begin{theorem}[Non-positive energy implies blowup] \label{T:nonpos blowup} Let $\frac1{2d} < s_c < 1$ and set $p = \frac4{d-2s_c}$. Let $(u_0,u_1) \in (\dot{H}^1_x \times L^2_x) \cap (\dot{H}^{s_c}_x \times \dot{H}^{s_c-1}_x)$ be initial data, with $u_0$ and $u_1$ radial if $s_c < \frac12$. Assume that $(u_0,u_1)$ is not identically zero and satisfies $$ E(u_0,u_1) = \int_{{\mathbb{R}}^d} \tfrac12|\nabla u_0(x)|^2 + \tfrac12|u_1(x)|^2 - \tfrac1{p+2}|u_0(x)|^{p+2}\, dx \leq 0. $$ Then the maximal-lifespan solution to the initial-value problem $$ u_{tt} - \Delta u = |u|^p u \qtq{with} u(0) = u_0 \qtq{and} u_t(0) = u_1 $$ blows up both forward and backward in finite time. \end{theorem} We note that for solutions to \eqref{E:eqn} with $m>0$, finiteness of the energy dictates that $\|u_0\|_{L^2_x}$ also be finite. Indeed, because of the estimate (cf. Lemma~\ref{L:GN}) \begin{align}\label{E:GN} \|f\|_{p+2}^{p+2}\leq C_{\opt} \|f\|_{\frac{pd}2}^p \|\nabla f\|_2^2\lesssim \|f\|_{\dot H^{s_c}_x}^p \|f\|_{\dot H^1_x}^2, \end{align} the natural energy space for initial data is $(u_0,u_1) \in H^1_x \times L^2_x$. The constant $C_{\opt}$ depends only on $d,p$ and denotes the optimal constant in the first inequality in \eqref{E:GN}. In this case (that is, $u_0\in H^1_x$), the theorem is well-known and may be obtained by taking two time derivatives of $\|u(t)\|_{L_x^2}$ (cf. \cite{Glassey73}, \cite{PayneSattinger}, and the proof of Proposition~\ref{P:finite mass}). To handle data for which $\|u_0\|_{L_x^2}$ need not be finite, we adapt this argument by introducing a spatial truncation and then dealing with the resulting error terms. The larger class of initial data considered here is dictated by the needs of Section~\ref{S:critical blowup}. 
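As a quick consistency check on the exponents in \eqref{E:GN}, we note that both sides transform identically under the rescaling $f(x) \mapsto f(\lambda x)$:
\begin{align*}
\|f(\lambda\cdot)\|_{L^{p+2}_x}^{p+2} = \lambda^{-d}\|f\|_{L^{p+2}_x}^{p+2}
\qtq{and}
\|f(\lambda\cdot)\|_{L^{\frac{pd}2}_x}^{p}\,\|\nabla[f(\lambda\cdot)]\|_{L^2_x}^{2}
= \lambda^{-2}\cdot\lambda^{2-d}\,\|f\|_{L^{\frac{pd}2}_x}^{p}\,\|\nabla f\|_{L^2_x}^{2}
= \lambda^{-d}\,\|f\|_{L^{\frac{pd}2}_x}^{p}\,\|\nabla f\|_{L^2_x}^{2},
\end{align*}
while $\|f(\lambda\cdot)\|_{\dot H^{s_c}_x}^{p}\|f(\lambda\cdot)\|_{\dot H^1_x}^{2} = \lambda^{p(s_c-\frac d2)+2-d}\|f\|_{\dot H^{s_c}_x}^{p}\|f\|_{\dot H^1_x}^{2} = \lambda^{-d}\|f\|_{\dot H^{s_c}_x}^{p}\|f\|_{\dot H^1_x}^{2}$, since $ps_c = \frac{pd}2 - 2$. Thus \eqref{E:GN} is scale-invariant, as befits the critical regularity $s_c$.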
\begin{proof} We define $$ \phi(x) := \begin{cases} 1, &\qtq{if} |x| \leq 1;\\ 1-2(|x|-1)^2, &\qtq{if} 1 \leq |x| \leq \tfrac32; \\ 2(2-|x|)^2, &\qtq{if} \tfrac32 \leq |x| \leq 2; \\ 0,&\qtq{if} |x| \geq 2, \end{cases} $$ and $\phi^c := 1-\phi$. As $(u_0,u_1) \in (\dot{H}^1_x \times L^2_x) \cap (\dot H^{s_c}_x \times \dot H^{s_c-1}_x)$, there exists a radius $R > 0$ such that $$ \|\phi^c\bigl(\tfrac{\cdot}{R/4}\bigr) u_0\|_{\dot H^{s_c}_x} + \|\phi^c\bigl(\tfrac{\cdot}{R/4}\bigr)u_1\|_{\dot H^{s_c-1}_x} \leq \eta_1, $$ where $\eta_1$ is the small data threshold from Proposition~\ref{P:Hsc sdt}. Let $v$ denote the global solution to \begin{equation*} v_{tt} - \Delta v = |v|^p v \qtq{with} v(0,x) = \phi^c\bigl(\tfrac{x}{R/4}\bigr)u_0(x) \qtq{and} v_t(0,x) = \phi^c\bigl(\tfrac{x}{R/4}\bigr) u_1(x). \end{equation*} By Proposition~\ref{P:Hsc sdt}, we may take $R$ sufficiently large that \begin{equation} \label{E:v small} \|\nabla_{t,x}v\|_{C_tL^2_x({\mathbb{R}} \times {\mathbb{R}}^d)} + \|v\|_{C_t L^{\frac{pd}2}_x({\mathbb{R}} \times {\mathbb{R}}^d)} < \eta \end{equation} for a small constant $\eta>0$ to be determined later. By finite speed of propagation, $u = v$ where $|x| \geq R/2+|t|$, and so \begin{equation} \label{E:u small} \|\nabla_{t,x}u\|_{C_tL^2_x(\{|x| \geq R/2+|t|\})} + \|u\|_{C_t L^{\frac{pd}2}_x(\{|x| \geq R/2+|t|\})} < \eta. \end{equation} Next, since $E(u) \leq 0$ and $u$ is not identically zero, by \eqref{E:GN} we must have \begin{align*} 0\geq E(u) &= \int_{{\mathbb{R}}^d} \tfrac12|u_t(t,x)|^2 + \tfrac12|\nabla u(t,x)|^2 - \tfrac1{p+2}|u(t,x)|^{p+2}\, dx \\ &\geq \int_{{\mathbb{R}}^d} \tfrac12|u_t(t,x)|^2 + \tfrac12\bigl(1-\tfrac2{p+2}C_{\opt}\|u(t)\|_{L^{\frac{pd}2}_x}^p\bigr)|\nabla u(t,x)|^2\, dx \end{align*} and so, $$ \|u(t)\|_{L^{\frac{pd}2}_x}^p \geq \tfrac{p+2}2 C_{\opt}^{-1}. 
$$ Thus, using \eqref{E:u small} and taking $\eta$ small enough so that $\eta^p<\frac{p+2}4 C_{\opt}^{-1}$, we obtain \begin{equation} \label{E:u pd/2 big} \|u(t)\|_{L^{\frac{pd}2}_x(|x| \leq R/2+|t|)}^p \geq \tfrac{p+2}4 C_{\opt}^{-1} \end{equation} for each $t$ in the lifespan of $u$. Finally, letting $\chi$ denote a smooth cutoff that is equal to one on the ball $\{|x|\leq R/2+|t|\}$ and vanishes when $|x|\geq R+2|t|$, and using Gagliardo--Nirenberg followed by Lemma~\ref{L:Poincare}, H\"older, and \eqref{E:u small}, we obtain \begin{align*} \|u(t)\|_{L^{\frac{pd}2}_x(|x|\leq \frac R2 + |t|)}& \lesssim \|\chi u(t)\|_{L_x^2}^{1-s_c} \|\nabla [\chi u(t)]\|_{L_x^2}^{s_c} \\ &\lesssim (R+|t|)^{1-s_c}\|\nabla [\chi u(t)]\|_{L_x^2}\\ &\lesssim (R+|t|)^{1-s_c} \Bigl[ \|\nabla u(t)\|_{L^2_x} + \|u(t)\|_{L^{\frac{pd}2}_x(|x|\geq \frac R2 + |t|)}\|\nabla \chi\|_{L_x^{\frac{2pd}{pd-4}}} \Bigr]\\ &\lesssim (R+|t|)^{1-s_c}\|\nabla u(t)\|_{L^2_x} +\eta. \end{align*} Thus, invoking \eqref{E:u pd/2 big} and taking $\eta$ sufficiently small, we obtain \begin{equation} \label{E:u H1 big} \|\nabla u(t)\|_{L^2_x}\gtrsim (R+|t|)^{-1+s_c}, \end{equation} throughout the lifetime of $u$. Now we are ready to define our truncated `mass'. We set $$ M(t) := \int_{{\mathbb{R}}^d} \phi\bigl(\tfrac{x}{R+|t|}\bigr) |u(t,x)|^2\, dx. $$ It is easy to see that this quantity is finite throughout the lifetime of $u$. By \eqref{E:u pd/2 big}, it never vanishes, that is, $M(t)>0$ for all $t$ in the lifespan of $u$. We differentiate. 
Routine computations reveal that for $t \geq 0$, we have \begin{align} \notag &M'(t) = \int_{{\mathbb{R}}^d} -\tfrac{x}{(R+|t|)^2}\cdot \nabla \phi\bigl(\tfrac{x}{R+|t|} \bigr) |u(t)|^2\, dx + \int_{{\mathbb{R}}^d} 2\phi\bigl(\tfrac{x}{R+|t|}\bigr)u(t)u_t(t)\, dx, \\ \label{E:M''} &\begin{aligned} M''(t) &= -2(p+2)E(u) + \int_{{\mathbb{R}}^d} 4 \phi\bigl(\tfrac{x}{R+|t|} \bigr) |u_t(t)|^2 + p|\nabla_{t,x}u(t)|^2\, dx \\ &\qquad + \int_{{\mathbb{R}}^d} 2\phi^c\bigl(\tfrac{x}{R+|t|} \bigr)\Bigl[|\nabla_{t,x}u(t)|^2- |u(t)|^{p+2}\Bigr]\, dx \\ &\qquad + \int_{{\mathbb{R}}^d} \left[\tfrac{2x}{(R+|t|)^3} \cdot \nabla \phi\bigl(\tfrac{x}{R+|t|} \bigr) + \tfrac{x_ix_j}{(R+|t|)^4}\partial_i \partial_j \phi\bigl(\tfrac{x}{R+|t|} \bigr)\right] |u(t)|^2\, dx \\ &\qquad - \int_{{\mathbb{R}}^d} \tfrac2{R+|t|} \nabla\phi\bigl(\tfrac{x}{R+|t|} \bigr) \cdot \bigl[\tfrac{2x}{R+|t|} u_t(t) + \nabla u(t)\bigr]u(t) \, dx. \end{aligned} \end{align} We will seek an upper bound for $|M'(t)|$ and a lower bound for $M''(t)$. We will make repeated use of the following bound, which is a simple consequence of H\"older's inequality followed by \eqref{E:u small} and \eqref{E:u H1 big}: \begin{align}\label{E:mass annulus} \int_{R+|t| \leq |x| \leq 2(R+|t|)}\frac{|u(t,x)|^2}{(R+|t|)^2}\, dx &\lesssim (R+|t|)^{-2(1-s_c)}\|u\|_{L^{\frac{pd}2}_x(|x|\geq R+|t|)}^2\notag\\ &\lesssim\eta^2 \|\nabla u(t)\|_{L^2_x}^2. 
\end{align} Using Cauchy--Schwarz and the inequality $|ab|^{\frac12} + |cd|^{\frac12} \leq (|a|+|c|)^{\frac12} (|b|+|d|)^{\frac12}$, we estimate \begin{align*} |M'(t)| &\leq \biggl(\int_{{\mathbb{R}}^d} \tfrac{\varepsilon}8 |\nabla \phi\bigl(\tfrac{x}{R+|t|}\bigr)|^2|u(t)|^2\, dx \biggr)^{\!\frac12} \biggl(\int_{R+|t| \leq |x| \leq 2(R+|t|)} \tfrac{8|x|^2}{\varepsilon (R+|t|)^4} |u(t)|^2\, dx \biggr)^{\!\frac12} \\ &\quad + \biggl(\int_{{\mathbb{R}}^d}(1-\varepsilon)\phi\bigl(\tfrac{x}{R+|t|}\bigr) |u(t)|^2\, dx\biggr)^{\!\frac12} \biggl( \int_{{\mathbb{R}}^d} \tfrac4{1-\varepsilon}\phi\bigl(\tfrac{x}{R+|t|}\bigr)|u_t(t)|^2\, dx \biggr)^{\!\frac12}\\ &\leq \biggl(\int_{{\mathbb{R}}^d}\left[\tfrac{\varepsilon}8 |\nabla \phi\bigl(\tfrac{x}{R+|t|}\bigr)|^2 + (1-\varepsilon) \phi\bigl(\tfrac{x}{R+|t|}\bigr) \right] |u(t)|^2\, dx \biggr)^{\!\frac12} \\ &\quad \times \biggl( \int_{R+|t| \leq |x| \leq 2(R+|t|)} \tfrac{32}{\varepsilon(R+|t|)^2} |u(t)|^2\, dx + \int_{{\mathbb{R}}^d} \tfrac4{1-\varepsilon}\phi\bigl(\tfrac{x}{R+|t|}\bigr)|u_t(t)|^2\, dx \biggr)^{\!\frac12} \end{align*} for any $0 < \varepsilon < 1$. Since $\tfrac18 |\nabla \phi|^2 \leq \phi$, the first factor in the product above is bounded by $M(t)^{\frac12}$. Using \eqref{E:mass annulus} to estimate the first term in the second factor gives \begin{equation} \label{E:bound M'} |M'(t)|^2 \leq M(t) \left( C_{\varepsilon} \eta^2 \|\nabla u(t)\|_{L^2_x}^2 + \int_{{\mathbb{R}}^d} \tfrac4{1-\varepsilon}\phi\bigl(\tfrac{x}{R+|t|}\bigr)|u_t(t,x)|^2\, dx \right). \end{equation} We now turn to $M''(t)$. By \eqref{E:mass annulus}, \begin{align} \label{E:mass annulus'} \Biggl| \int_{{\mathbb{R}}^d}\Bigl[ & \tfrac{2x}{(R+|t|)^3}\cdot\nabla \phi\bigl(\tfrac{x}{R+|t|}\bigr)+\tfrac{x_ix_j}{(R+|t|)^4}\partial_i\partial_j\phi\bigl(\tfrac{x}{R+|t|} \bigr)\Bigr] |u(t,x)|^2\, dx \Biggr|\notag\\ &\qquad\lesssim \int_{R+|t| \leq |x| \leq 2(R+|t|)}\tfrac1{(R+|t|)^2}|u(t,x)|^2\, dx \lesssim \eta^2 \|\nabla u(t)\|_{L^2_x}^2.
\end{align} Next, by Lemma~\ref{L:GN} and \eqref{E:u small}, \begin{align} \label{E:p+2 annulus} \int_{{\mathbb{R}}^d} \phi^c\bigl(\tfrac{x}{R+|t|}\bigr)|u(t,x)|^{p+2}\, dx &\leq \|u(t)\|_{L_x^{p+2}(|x|\geq R+|t|)}^{p+2}\notag\\ &\lesssim\|u(t)\|_{L^{\frac{pd}2}_x(|x|\geq R+|t|)}^p\|\nabla u(t)\|_{L^2_x(|x|\geq R+|t|)}^2\notag\\ &\lesssim \eta^p \|\nabla u(t)\|_{L^2_x}^2. \end{align} Finally, by Young's inequality and \eqref{E:mass annulus}, \begin{align} \label{E:u nabla u annulus} \Biggl|\int_{{\mathbb{R}}^d} \tfrac2{R+|t|} & \nabla\phi\bigl(\tfrac{x}{R+|t|}\bigr) \cdot \Bigl[\tfrac{2x}{R+|t|} u_t(t,x) + \nabla u(t,x)\Bigr]u(t,x) \, dx\Biggr|\notag\\ &\leq \int_{R+|t| \leq |x| \leq 2(R+|t|)} \tfrac{C_{\varepsilon}}{(R+|t|)^2} |u(t,x)|^2\, dx + \int_{{\mathbb{R}}^d} \varepsilon |\nabla_{t,x} u(t,x)|^2\, dx \notag\\ & \leq C_{\varepsilon} \eta^2 \|\nabla u(t)\|_{L^2_x}^2 + \varepsilon \|\nabla_{t,x} u(t)\|_{L^2_x}^2, \end{align} for any $\varepsilon > 0$. Now let $\delta > 0$. Combining $E(u)\leq 0$, \eqref{E:mass annulus'}, \eqref{E:p+2 annulus}, and \eqref{E:u nabla u annulus} with the identity \eqref{E:M''}, and choosing $\varepsilon=\varepsilon(\delta)$ and then $\eta=\eta(\varepsilon)$ sufficiently small, we obtain \begin{equation}\label{E:M'' lb 1} \begin{aligned} M''(t) &\geq \int_{{\mathbb{R}}^d} 4 \phi\bigl(\tfrac{x}{R+|t|}\bigr) |u_t(t,x)|^2 \, dx + (p-\delta) \int_{{\mathbb{R}}^d} |\nabla_{t,x} u(t,x)|^2\, dx \\ & \geq \int_{{\mathbb{R}}^d} \tfrac4{1-2\varepsilon} \phi\bigl(\tfrac{x}{R+|t|}\bigr) |u_t(t,x)|^2\, dx + (p-\delta) \int_{{\mathbb{R}}^d} |\nabla u(t,x)|^2\, dx. \end{aligned} \end{equation} Combining \eqref{E:bound M'} and \eqref{E:M'' lb 1}, we get \begin{equation} \label{E:M'MM''} |M'(t)|^2 \leq cM(t) M''(t), \end{equation} for some constant $0 < c < 1$. Using this we will prove that $u$ blows up in finite time, forward in time; finite-time blowup backward in time follows from time-reversal symmetry. We argue by contradiction. 
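Before doing so, we record the heuristic content of \eqref{E:M'MM''}: it asserts that a negative power of $M$ is concave. Indeed, setting $\alpha := 1 - \frac1c < 0$ and using $M > 0$, a direct computation gives
\begin{align*}
\frac{d^2}{dt^2}\, M(t)^{\alpha} = \alpha M(t)^{\alpha-2}\Bigl[M(t)M''(t) - \tfrac1c M'(t)^2\Bigr] \leq 0,
\end{align*}
since the bracket is non-negative by \eqref{E:M'MM''} and $\alpha < 0$. The integrations performed below exploit exactly this concavity.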
Suppose that the solution $u$ may be continued forward in time indefinitely. First, we consider the case when $M'(0) > 0$. By \eqref{E:u H1 big} and \eqref{E:M'' lb 1}, $M''(t) > 0$ for all $t$ in the lifespan of $u$, and so $M'(t)>0$ for all $t>0$. Thus by \eqref{E:bound M'}, $$ \frac{M'(t)}{M(t)} \leq c\frac{M''(t)}{M'(t)}. $$ Integrating both sides, we see that for $t \geq 0$ we have $$ \log \left( \frac{M(t)}{M(0)} \right) \leq c \log \left(\frac{M'(t)}{M'(0)} \right), $$ that is, $$ M'(0)M(0)^{-1/c} \leq M'(t)M(t)^{-1/c}. $$ Integrating a second time and recalling that $M(t)> 0$, we obtain $$ t M'(0) M(0)^{-1/c} \leq \frac{c}{1-c}(M(0)^{1-1/c} - M(t)^{1-1/c}) \leq \frac{c}{1-c}M(0)^{1-1/c}. $$ But this is impossible since the left-hand side grows linearly as $t \to \infty$, while the right-hand side is bounded. Thus we must have $M'(0) \leq 0$. More generally, if $M'(t_0) \geq 0$ for some $t_0$ in the lifespan of $u$, then $M'(t) > 0$ for all $t > t_0$, since $M''(t)>0$ for all $t$ in the lifespan of $u$ (by \eqref{E:u H1 big} and \eqref{E:M'' lb 1}). Arguing as above, we again obtain a contradiction to the indefinite forward-in-time existence of $u$. Thus we may assume that $M'(t) < 0$ for as long as $u$ exists, and therefore \begin{equation} \label{E:mass bounded} 0 < M(t) < M(0) \qtq{for all} t > 0. \end{equation} Furthermore, as $M'(t)$ stays negative, we must have by \eqref{E:M'' lb 1} that $$ |M'(0)| \geq \int_0^{\infty} M''(t) \,dt \gtrsim \int_0^{\infty} \|\nabla_{t,x} u(t)\|_{L^2_x}^2\, dt. $$ From this, we see that along some sequence $t_n \to \infty$, we have \begin{equation} \label{E:H1 to 0} \|\nabla u(t_n)\|_{L^2_x}^2 \to 0.
\end{equation} Next, using Gagliardo--Nirenberg followed by H\"older, \eqref{E:u small}, \eqref{E:u H1 big}, and \eqref{E:mass bounded}, we obtain \begin{align*} \|&u(t_n)\|_{L^{\frac{pd}2}_x(|x| \leq R+t_n)}\\ &\lesssim \bigl\|\phi(\tfrac{\cdot}{R+t_n}) u(t_n)\bigr\|_{L^2_x}^{1-s_c}\bigl\|\nabla\bigl[\phi(\tfrac{\cdot}{R+t_n}) u(t_n)\bigr]\bigr\|_{L^2_x}^{s_c}\\ &\lesssim M(t_n)^{\frac{1-s_c}2} \Bigl[\|\nabla u(t_n)\|_{L^2_x} + \|u(t_n)\|_{L^{\frac{pd}2}_x(|x| \geq R+t_n)} \|\nabla \phi(\tfrac{\cdot}{R+t_n})\|_{L_x^{\frac{2pd}{pd-4}}}\Bigr]^{s_c}\\ &\lesssim M(0)^{\frac{1-s_c}2} \Bigl[\|\nabla u(t_n)\|_{L^2_x} + \eta (R+t_n)^{-1+s_c}\Bigr]^{s_c}\\ &\lesssim M(0)^{\frac{1-s_c}2} (1+\eta)^{s_c}\|\nabla u(t_n)\|_{L^2_x}^{s_c} \to 0 \quad \text{as}\quad n\to \infty, \end{align*} which contradicts \eqref{E:u pd/2 big}. This completes the proof of the theorem. \end{proof} \section{Concentration compactness for a Gagliardo--Nirenberg inequality}\label{S:CC} In this section we develop a concentration compactness principle associated to the following Gagliardo--Nirenberg inequality (cf. Lemma~\ref{L:GN}) \begin{align}\label{E:GN1} \|f\|_{L_x^{p+2}}^{p+2} \lesssim \|f\|_{\dot H^{s_c}_x}^p \|f\|_{\dot H^1_x}^2. \end{align} More precisely, we prove \begin{theorem}[Bubble decomposition for \eqref{E:GN1}] \label{T:bubble gn} Fix a dimension $d \geq 2$ and an exponent $\frac4d \leq p < \frac4{d-2}$. Let $s_c = \frac{d}2 - \frac2p$. Let $\{f_n\}_{n\geq 1}$ be a bounded sequence in $\dot H^1_x \cap \dot H^{s_c}_x$. 
Then there exist $J^* \in \{0,1,\ldots\} \cup \{\infty\}$, nonzero functions $\{\phi^j\}_{j=1}^{J^*} \subset \dot H^1_x \cap \dot H^{s_c}_x$, $\{x_n^j\}_{j=1}^{J^*} \subset {\mathbb{R}}^d$, and a subsequence of $\{f_n\}_{n\geq 1}$ such that along this subsequence \begin{equation} \label{E:bubble gn} f_n(x) = \sum_{j=1}^J \phi^j(x-x_n^j) + r_n^J(x) \qtq{for each} 0 \leq J < J^*+1 , \end{equation} with \begin{equation}\label{E:rnJ wkly to 0} r_n^J(\cdot + x_n^J) \rightharpoonup 0 \qtq{weakly in} \dot H^1_x \cap \dot H^{s_c}_x \qtq{for each} 1 \leq J < J^*+1. \end{equation} Furthermore, along this subsequence, the $r_n^J$ satisfy \begin{align} \label{E:rnJ to 0} \lim_{J \to J^*} \limsup_{n \to \infty} \|r_n^J\|_{L^{p+2}_x} = 0, \end{align} and for each $0 \leq J < J^*+1$, we have the following: \begin{align} \label{E:H1 decoup} &\lim_{n \to \infty} \Bigl\{\|f_n\|_{\dot{H}_x^1}^2 - \Bigl[\sum_{j=1}^J \|\phi^j\|_{\dot{H}_x^1}^2 + \|r_n^J\|_{\dot{H}_x^1}^2 \Bigr]\Bigr\} = 0 \\ \label{E:Hsc decoup} &\lim_{n \to \infty} \Bigl\{\|f_n\|_{\dot{H}_x^{s_c}}^2 - \Bigl[\sum_{j=1}^J \|\phi^j\|_{\dot{H}_x^{s_c}}^2 + \|r_n^J\|_{\dot{H}_x^{s_c}}^2 \Bigr]\Bigr\} = 0 \\ \label{E:p+2 decoup} &\lim_{n \to \infty} \Bigl\{\|f_n\|_{L^{p+2}_x}^{p+2} - \Bigl[\sum_{j=1}^J \|\phi^j\|_{L^{p+2}_x}^{p+2} + \|r_n^J\|_{L^{p+2}_x}^{p+2}\Bigr]\Bigr\} = 0. \end{align} Finally, for each $j' \neq j$, we have \begin{equation} \label{E:GN orthog} \lim_{n \to \infty} |x_n^j - x_n^{j'}| = \infty. \end{equation} \end{theorem} There are many results of this type, beginning with the work \cite{Solimini} of Solimini on Sobolev embedding. The argument below is modeled on the treatment in \cite{ClayNotes}, with the main step being the following inverse inequality. \begin{prop}[Inverse Gagliardo--Nirenberg inequality] \label{P:inverse gn} Fix a dimension $d \geq 2$ and an exponent $\frac4d \leq p < \frac4{d-2}$. Let $s_c = \frac{d}2 - \frac2p$. 
Let $\{f_n\}_{n\geq 1} \subset \dot{H}^1_x({\mathbb{R}}^d) \cap \dot{H}_x^{s_c}({\mathbb{R}}^d)$ and assume that $$ \limsup_{n \to \infty} \|f_n\|_{\dot H_x^{s_c}}^2 + \|f_n\|_{\dot H_x^1}^2 = M^2 \qtq{and} \liminf_{n \to \infty} \|f_n\|_{L^{p+2}_x} = \varepsilon > 0. $$ Then there exist $\phi \in \dot H_x^1({\mathbb{R}}^d) \cap \dot H_x^{s_c}({\mathbb{R}}^d)$ and $\{x_n\}_{n\geq 1} \subset {\mathbb{R}}^d$ such that after passing to a subsequence, we have the following: \begin{align} \label{E:fn wkly to phi} f_n(\cdot + x_n) \rightharpoonup \phi \qtq{weakly in} &\dot H^1_x \cap \dot H^{s_c}_x\\ \label{E:inv st H1} \lim_{n \to \infty} \Bigl\{ \|f_n\|_{\dot H^1_x}^2 - \|f_n - \phi(\cdot - x_n)\|_{\dot H^1_x}^2 \Bigr\} &= \|\phi\|_{\dot H^1_x}^2 \gtrsim \varepsilon^2\bigl(\frac{\varepsilon}{M}\bigr)^{\alpha_1} \\ \label{E:inv st Hsc} \lim_{n \to \infty} \Bigl\{ \|f_n\|_{\dot H^{s_c}_x}^2 - \|f_n - \phi(\cdot - x_n)\|_{\dot H^{s_c}_x}^2 \Bigr\} &= \|\phi\|_{\dot H^{s_c}_x}^2 \gtrsim \varepsilon^2\bigl(\frac{\varepsilon}{M}\bigr)^{\alpha_2} \\ \label{E:inv st p+2} \lim_{n \to \infty} \Bigl\{\|f_n\|_{L^{p+2}_x}^{p+2} - \|f_n - \phi(\cdot-x_n)\|_{L^{p+2}_x}^{p+2} \Bigr\} &= \|\phi\|_{L^{p+2}_x}^{p+2} \gtrsim \varepsilon^{p+2}\bigl(\frac{\varepsilon}{M}\bigr)^{\alpha_3}, \end{align} for certain positive constants $\alpha_1,\alpha_2,\alpha_3$ depending on $d,p$. \end{prop} \begin{proof}[Proof of Proposition \ref{P:inverse gn}] By passing to a subsequence, we may assume that \begin{equation} \label{E:control norms fn} \|f_n\|_{\dot H^1_x}^2 + \|f_n\|_{\dot H^{s_c}_x}^2 \leq 2M^2 \qtq{and} \|f_n\|_{L^{p+2}_x} \geq \tfrac\eps2 \qtq{for all $n$.} \end{equation} Note that by \eqref{E:GN1} we must have $\varepsilon \lesssim M$. 
Now by \eqref{E:GN1}, \eqref{E:control norms fn}, and Bernstein's inequality, for all dyadic frequencies $N$ we have \begin{align*} \|P_N f_n\|_{L^{p+2}_x}^{p+2} &\lesssim \|P_Nf_n\|_{\dot H^{s_c}_x}^p \|P_N f_n\|_{\dot H^1_x}^2 \lesssim \min\{N^{-p(1-s_c)}, N^{2(1-s_c)}\}M^{p+2}. \end{align*} Thus, if we define $$ K = C\bigl(M\varepsilon^{-1}\bigr)^{\frac{p+2}{2p(1-s_c)}} $$ for a suitably large constant $C$, we obtain $$ \|P_{\leq K^{-p}} f_n\|_{L^{p+2}_x}^{p+2} + \|P_{\geq K^2} f_n\|_{L^{p+2}_x}^{p+2} \ll \varepsilon^{p+2}. $$ Hence, by the pigeonhole principle there exist dyadic frequencies $N_n$ satisfying $K^{-p} \leq N_n \leq K^2$ such that \begin{equation} \label{E:lb fN p+2} (\log K)^{-1} \varepsilon \lesssim \|P_{N_n} f_n\|_{L^{p+2}_x}. \end{equation} By passing to a subsequence, we may assume that $N_n = N$ for all $n$. By H\"older's inequality, the Sobolev embedding $\dot H^{s_c}_x \hookrightarrow L^{\frac{pd}2}_x$, and \eqref{E:control norms fn}, $$ \|P_N f_n\|_{L^{p+2}_x} \leq \|P_N f_n\|_{L^{\infty}_x}^{1-\frac{pd}{2(p+2)}}\|P_Nf_n\|_{L^{\frac{pd}2}_x}^{\frac{pd}{2(p+2)}} \lesssim M^{\frac{pd}{2(p+2)}} \|P_N f_n\|_{L^{\infty}_x}^{1-\frac{pd}{2(p+2)}}, $$ and so by \eqref{E:lb fN p+2}, there exists a sequence $\{x_n\} \subset {\mathbb{R}}^d$ such that $$ \bigl(\varepsilon^2 M^{-1-\frac{pd}{2(p+2)}}\bigr)^{\frac{p+2}{p(1-s_c)}} \lesssim \left(\frac{\varepsilon M^{-\frac{pd}{2(p+2)}}}{\log K}\right)^{\frac{2(p+2)}{4-p(d-2)}} \lesssim |P_N f_n(x_n)|. $$ We consider the sequence $f_n(\cdot+x_n)$. This sequence is bounded in $\dot H^1_x({\mathbb{R}}^d) \cap \dot H^{s_c}_x({\mathbb{R}}^d)$ by \eqref{E:control norms fn}, and so, after passing to a subsequence, there exists a weak limit $\phi \in \dot H^1_x({\mathbb{R}}^d) \cap \dot H^{s_c}_x({\mathbb{R}}^d)$ as in \eqref{E:fn wkly to phi}. The equalities in \eqref{E:inv st H1} and \eqref{E:inv st Hsc} are immediate. We now turn to the $L^{p+2}_x$ decoupling, \eqref{E:inv st p+2}. 
By \eqref{E:fn wkly to phi} and the Rellich--Kondrashov theorem, $f_n(\cdot + x_n) \to \phi$ in $L^2_\textrm{loc}$ and hence, after passing to a subsequence, almost everywhere. The equality in \eqref{E:inv st p+2} is then an immediate consequence of the Fatou lemma of Br\'ezis and Lieb; see \cite{BrezisLieb} or \cite{LiebLoss}. Finally, to obtain the lower bounds in \eqref{E:inv st H1}, \eqref{E:inv st Hsc}, and \eqref{E:inv st p+2}, we test $\phi$ against the function $k = P_N \delta_0$ ($\delta_0$ being the Dirac delta). We have $$ |\langle \phi , k \rangle| = \lim_{n \to \infty} \Bigl|\int_{{\mathbb{R}}^d} f_n(x +x_n) \overline{k(x)}\, dx\Bigr| = \lim_{n \to \infty} | P_N f_n(x_n)| \gtrsim \bigl(\varepsilon^2 M^{-1-\frac{pd}{2(p+2)}}\bigr)^{\frac{p+2}{p(1-s_c)}}. $$ Routine computations reveal that $$ \|k\|_{\dot H^{-s_c}_x} \sim N^{\frac d2-s_c}, \qquad \|k\|_{\dot H^{-1}_x} \sim N^{\frac d2-1}, \qquad \|k\|_{L^{\frac{p+2}{p+1}}_x} \sim N^{\frac d{p+2}}, $$ and since $K^{-p} \leq N \leq K^2$, the lower bounds follow. This completes the proof. \end{proof} We are now ready to prove Theorem~\ref{T:bubble gn}. \begin{proof}[Proof of Theorem~\ref{T:bubble gn}] To begin, we set $r_n^0 := f_n$. The identities \eqref{E:H1 decoup}, \eqref{E:Hsc decoup}, and \eqref{E:p+2 decoup} with $J=0$ are thus trivial. After passing to a subsequence, we may assume that $$ \lim_{n \to \infty} \|r_n^0\|_{\dot H^1_x}^2 + \|r_n^0\|_{\dot H^{s_c}_x}^2 = M_0^2 \qtq{and} \lim_{n \to \infty} \|r_n^0\|_{L^{p+2}_x} = \varepsilon_0. $$ By hypothesis $M_0 < \infty$, while by \eqref{E:GN} we have $\varepsilon_0 \lesssim M_0$. 
We now proceed inductively, assuming that a decomposition satisfying \eqref{E:bubble gn}, \eqref{E:rnJ wkly to 0}, \eqref{E:H1 decoup}, \eqref{E:Hsc decoup}, and \eqref{E:p+2 decoup} has been carried out up to some integer $J \geq 0$ and that the remainder satisfies $$ \lim_{n \to \infty} \|r_n^J\|_{\dot H^1_x}^2 + \|r_n^J\|_{\dot H^{s_c}_x}^2 = M_J^2 \qtq{and} \lim_{n \to \infty}\|r_n^J\|_{L^{p+2}_x} = \varepsilon_J, $$ with $\varepsilon_J \lesssim M_J$ (which follows from \eqref{E:GN}) and $M_J \leq M_0$ (which will be established below). If $\varepsilon_J = 0$, we stop, setting $J^* = J$. The relations \eqref{E:bubble gn} through \eqref{E:p+2 decoup} have thus been established; we will come to \eqref{E:GN orthog} in a moment. If $\varepsilon_J > 0$, we apply Proposition~\ref{P:inverse gn} to $\{r_n^J\}$, producing a sequence $\{x_n^{J+1}\}$ of points and a (subsequential) weak limit $$ r_n^J(\cdot + x_n^{J+1}) \rightharpoonup \phi^{J+1} \qtq{in} \dot H^1_x \cap \dot H^{s_c}_x. $$ Setting $r_n^{J+1} := r_n^J - \phi^{J+1}(\cdot - x_n^{J+1})$, we obtain \eqref{E:bubble gn} and \eqref{E:rnJ wkly to 0} with $J$ replaced by $J+1$. The identity in \eqref{E:inv st H1} is just $$ \lim_{n \to \infty} \Bigl\{\|r_n^J\|_{\dot H^1_x}^2 - \Bigl[\|\phi^{J+1}\|_{\dot H^1_x}^2 + \|r_n^{J+1}\|_{\dot H^1_x}^2\Bigr]\Bigr\} = 0. $$ Adding this to \eqref{E:H1 decoup} shows that \eqref{E:H1 decoup} continues to hold with $J$ replaced by $J+1$: $$ \lim_{n \to \infty} \Bigl\{\|f_n\|_{\dot H^1_x}^2 - \Bigl[\sum_{j=1}^{J+1} \|\phi^j\|_{\dot H^1_x}^2 + \|r_n^{J+1}\|_{\dot H^1_x}^2\Bigr]\Bigr\} = 0. $$ To derive \eqref{E:Hsc decoup} and \eqref{E:p+2 decoup} with $J$ replaced by $J+1$, one argues similarly.
Passing to a subsequence and applying \eqref{E:inv st H1}, \eqref{E:inv st Hsc}, and \eqref{E:inv st p+2}, we obtain \begin{align}\label{E:Sobolev decrement} M_{J+1}^2 &= \lim_{n \to \infty} \Bigl[\|r_n^{J+1}\|_{\dot H^1_x}^2 + \|r_n^{J+1}\|_{\dot H^{s_c}_x}^2\Bigr] \notag\\ &= \lim_{n \to \infty} \Bigl[\|r_n^J\|_{\dot H^1_x}^2 - \|\phi^{J+1}\|_{\dot H^1_x}^2 + \|r_n^J\|_{\dot H^{s_c}_x}^2 - \|\phi^{J+1}\|_{\dot H^{s_c}_x}^2\Bigr] \notag\\ &\leq M_J^2 - C\varepsilon_J^2\Bigl[\bigl(\tfrac{\varepsilon_J}{M_J}\bigr)^{\alpha_1} + \bigl(\tfrac{\varepsilon_J}{M_J}\bigr)^{\alpha_2}\Bigr] \end{align} and \begin{align}\label{E:p+2 decrement} \varepsilon_{J+1}^{p+2} = \lim_{n \to \infty} \|r_n^{J+1}\|_{L^{p+2}_x}^{p+2} = \lim_{n \to \infty} \Bigl[ \|r_n^J\|_{L^{p+2}_x}^{p+2} - \|\phi^{J+1}\|_{L^{p+2}_x}^{p+2} \Bigr] \leq \varepsilon_J^{p+2} - C\varepsilon_J^{p+2}\bigl(\tfrac{\varepsilon_J}{M_J}\bigr)^{\alpha_3}. \end{align} Either this process eventually stops and we obtain some finite $J^*$ or we set $J^* = \infty$. If we do have $J^* = \infty$, then \eqref{E:rnJ to 0} follows from \eqref{E:Sobolev decrement} and \eqref{E:p+2 decrement}. Finally, we prove the asymptotic orthogonality \eqref{E:GN orthog}. Let us suppose that this fails for some $j \neq j'$. We may assume that $j'>j$ and that $\lim_{n \to\infty}|x_n^j - x_n^{k}| = \infty$ for $j < k < j'$. Passing to a subsequence, we may assume that $\lim_{n \to \infty} (x_n^{j'} - x_n^j) = y$. We recall that $$ \phi^{j'} = \wklim_{n \to \infty} \, r_n^{j'-1}(\cdot + x_n^{j'}), $$ while $$ r_n^{j'-1} = r_n^j - \sum_{k=j+1}^{j'-1}\phi^{k}(\cdot - x_n^{k}).
$$ Therefore, \begin{align*} \phi^{j'} = \wklim_{n \to \infty} \Bigl\{ r_n^j(\cdot + x_n^{j'}) - \sum_{k=j+1}^{j'-1} \phi^{k}(\cdot + x_n^{j'} - x_n^{k}) \Bigr\} &= \wklim_{n \to \infty} \, r_n^j(\cdot + x_n^j + y) = 0, \end{align*} where we used $\lim_{n \to \infty} |x_n^{j'} - x_n^{k}| = \infty$ for all $j < k < j'$ in order to derive the second equality and \eqref{E:rnJ wkly to 0} to derive the third equality. But $\phi^{j'}$ cannot be $0$ in view of \eqref{E:inv st H1} and the fact that our inductive procedure stops once we obtain $\varepsilon_J = 0$. This completes the proof of Theorem~\ref{T:bubble gn}. \end{proof} \section{Blowup of the critical norm}\label{S:critical blowup} The goal of this section is to show that finite-time blowup of solutions to \eqref{E:eqn} is accompanied by subsequential blowup of their critical Sobolev norm. More precisely, we prove the following result which is slightly more general than Theorem~\ref{T:I:sc} given in the Introduction. \begin{theorem} \label{T:liminf v2} Let $d \geq 2$, $m \in [0,1]$, and $\frac1{2d} < s_c < 1$. Set $p = \frac4{d-2s_c}$. Let $(u_0,u_1)$ be initial data for \eqref{E:eqn} satisfying $$ \|\langle\nabla\rangle_m u_0\|_{L^2_x} + \|u_1\|_{L^2_x} + \||\nabla|^{s_c}u_0\|_{L^2_x} \leq M<\infty, $$ with $u_0$ and $u_1$ radial if $s_c < \frac12$. Assume that the maximal-lifespan solution $u$ to \eqref{E:eqn} blows up forward in time at $0<T_* < \infty$. Then $$ \limsup_{t \uparrow T_*} \bigl\{\|u(t)\|_{\dot H^{s_c}_x} + \|u_t(t)\|_{H^{s_c-1}_x}\bigr\} = \infty. $$ \end{theorem} The remainder of the section is dedicated to the proof of the theorem. We assume by way of contradiction that \begin{equation} \label{E:lie} \|u\|_{L^{\infty}_t \dot H^{s_c}_x([0,T_*)\times {\mathbb{R}}^d)} + \|u_t\|_{L^{\infty}_t H^{s_c-1}_x([0,T_*) \times {\mathbb{R}}^d)} = K < \infty. \end{equation} By Corollary~\ref{C:ee blowup}, $$ \|\nabla_{t,x}u(t)\|_{L^2_x} \to \infty \qtq{as} t\to T_*. 
$$ Thus we may choose a sequence of times $\{t_n\}_{n\geq 1}$ increasing to $T_*$ and satisfying $$ \|\nabla_{t,x}u(t_n)\|_{L^2_x} = \|\nabla_{t,x} u\|_{C_tL^2_x([0,t_n] \times {\mathbb{R}}^d)} \to \infty \qtq{as} n \to \infty. $$ Let $$ \lambda_n := (\|\nabla_{t,x} u(t_n)\|_{L^2_x})^{-\frac1{1-s_c}} \to 0 $$ and define $$ u^{(n)}(t,x) := \lambda_n^{\frac2p}u(t_n - \lambda_n t, \lambda_n x) \qtq{for all} (t,x) \in [0,T_n] \times {\mathbb{R}}^d, $$ where $T_n := \frac{t_n}{\lambda_n} \to \infty$. Then $u^{(n)}$ solves \begin{equation} \label{E:un soln} u^{(n)}_{tt} - \Delta u^{(n)} + m_n^2 u^{(n)} = |u^{(n)}|^p u^{(n)} \end{equation} on $[0,T_n] \times {\mathbb{R}}^d$ with $m_n := \lambda_n m \to 0$. Furthermore, by our choice of $t_n$ and $\lambda_n$, $u^{(n)}$ satisfies \begin{align} \label{E:un H1} \|\nabla_{t,x} u^{(n)}\|_{C_tL^2_x([0,T_n] \times {\mathbb{R}}^d)} = \|\nabla_{t,x} u^{(n)}(0)\|_{L^2_x} = 1 \end{align} and \begin{equation} \label{E:un Hsc} \begin{aligned} &\|u^{(n)}\|_{C_t \dot H^{s_c}_x([0,T_n] \times {\mathbb{R}}^d)} + \|\langle \nabla\rangle_{\lambda_n}^{s_c-1}\partial_t u^{(n)}\|_{C_t L_x^2([0,T_n] \times {\mathbb{R}}^d)}\\ &\qquad \qquad = \|u\|_{C_t\dot H^{s_c}_x([0,t_n] \times {\mathbb{R}}^d)} + \|u_t\|_{C_t H^{s_c-1}_x([0,t_n] \times {\mathbb{R}}^d)} \leq K. \end{aligned} \end{equation} (We note that the subscript $\lambda_n$ is essential in \eqref{E:un Hsc} because we need the weak limit of $\partial_t u^{(n)}(0)$ to belong to $\dot H^{s_c-1}_x$; cf. Lemma~\ref{L:weak stability}.) Finally, by conservation of energy, \begin{align} \label{E:un nrg} \int_{{\mathbb{R}}^d} \tfrac12|&\nabla_{t,x} u^{(n)}(0,x)|^2 - \tfrac1{p+2} |u^{(n)}(0,x)|^{p+2}\, dx \notag\\ &= \lambda_n^{2-2s_c} \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x}u(t_n,x)|^2 - \tfrac1{p+2}|u(t_n,x)|^{p+2}\, dx \notag\\ &\leq \lambda_n^{2-2s_c} \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x} u(0,x)|^2 + \tfrac{m^2}2 |u(0,x)|^2 - \tfrac1{p+2} |u(0,x)|^{p+2}\, dx \to 0. 
\end{align} Using this and \eqref{E:un H1}, for sufficiently large $n$ we obtain \begin{equation} \label{E:p+2 lb} \int_{{\mathbb{R}}^d} |u^{(n)}(0,x)|^{p+2}\, dx \geq 1. \end{equation} To continue, we will use Lemma~\ref{L:weak stability} and Theorem~\ref{T:bubble gn} to prove that under the assumption \eqref{E:lie}, we have the following: \begin{lemma} \label{L:bad limit} The sequence $\{u^{(n)}\}_{n\geq 1}$ gives rise to a nonzero solution $w$ to the nonlinear wave equation \eqref{E:eqn} with $m=0$ which is global forward in time, satisfies $(w,w_t) \in C_t ([0, \infty);\dot H^1_x \times L^2_x\cap \dot H^{s_c}_x \times \dot H^{s_c-1}_x)$, and has $E(w) \leq 0$. \end{lemma} By Theorem~\ref{T:nonpos blowup}, a solution $w$ as described in Lemma~\ref{L:bad limit} cannot exist. The restriction to radial data for $s_c<\frac12$ arises only from the use of this theorem. Thus, in order to conclude the proof of Theorem~\ref{T:liminf v2}, it remains to prove Lemma~\ref{L:bad limit}. To prove the lemma, we treat the sub- and super-conformal cases separately. We start with the sub-conformal case, where the radial assumption on the initial data allows for a simpler treatment. \begin{proof}[Proof of Lemma~\ref{L:bad limit} when $s_c < \frac12$] By hypothesis, in this case we have that $u_0$ and $u_1$ are radial, and so the $u^{(n)}$ are radial also. Using \eqref{E:un H1} and \eqref{E:un Hsc} and passing to a subsequence, we obtain a weak limit \begin{equation} \label{E:un wkly to w} (u^{(n)}(0),u^{(n)}_t(0)) \rightharpoonup (w_0,w_1) \qtq{in $[\dot H^1_x \cap \dot H^{s_c}_x] \times L^2_x$.} \end{equation} Additionally, by \eqref{E:un Hsc}, $w_1 \in \dot H^{s_c-1}_x$. As the embedding $\dot H^1_{\rm{rad}} \cap \dot H^{s_c}_{\rm{rad}} \hookrightarrow L^{p+2}_{\rm{rad}}$ is compact, \eqref{E:un wkly to w} dictates \begin{equation} \label{E:un to w p+2} u^{(n)}(0) \to w_0 \qtq{strongly in} L^{p+2}_x. \end{equation} By \eqref{E:p+2 lb}, $w_0$ is not identically 0. 
Finally, by \eqref{E:un nrg}, \eqref{E:un wkly to w}, and \eqref{E:un to w p+2}, we have \begin{align*} E(w_0,w_1) &= \int_{{\mathbb{R}}^d} \tfrac12 |\nabla w_0(x)|^2 + \tfrac12|w_1(x)|^2 - \tfrac1{p+2}|w_0(x)|^{p+2}\, dx \\ &\leq \lim_{n \to \infty} \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x}u^{(n)}(0,x)|^2 - \tfrac1{p+2}|u^{(n)}(0,x)|^{p+2}\, dx \leq 0. \end{align*} Now let $w$ be the solution to \eqref{E:eqn} with $m=0$ and initial data $(w_0,w_1)$ at time $t=0$. By Lemma~\ref{L:weak stability} and the fact that $T_n \to \infty$, we obtain that $w$ is global forward in time. Moreover it satisfies $(w,w_t) \in C_t ([0, \infty);\dot H^1_x \times L^2_x\cap \dot H^{s_c}_x \times \dot H^{s_c-1}_x)$ and $E(w) \leq 0$. This completes the proof of the lemma in the sub-conformal case. \end{proof} It remains to prove Lemma~\ref{L:bad limit} in the conformal and super-conformal cases; in this setting, we will substitute Theorem~\ref{T:bubble gn} for the compact radial embedding used in the sub-conformal case. \begin{proof}[Proof of Lemma~\ref{L:bad limit} when $s_c \geq \frac12$] Applying Theorem~\ref{T:bubble gn} to $\{u^{(n)}(0)\}_{n\geq 1}$ and passing to a subsequence, we obtain the decomposition $$ u^{(n)}(0) = \sum_{j=1}^J \phi^j_0(\cdot - x_n^j) + r_n^J \qtq{for all} 0 \leq J < J^*+1, $$ satisfying the conclusions of that theorem. By \eqref{E:p+2 lb} and \eqref{E:rnJ to 0} we must have $J^* \geq 1$. Using \eqref{E:rnJ wkly to 0} followed by \eqref{E:GN orthog}, for each $j$ we have \begin{equation} \label{E:phiJ0} \phi^j_0 = \wklim_{n \to \infty} \Bigl\{u^{(n)}(0,\cdot + x_n^j) - \sum_{k=1}^{j-1} \phi_0^k(\cdot - x_n^k + x_n^j)\Bigr\} = \wklim_{n \to \infty} u^{(n)}(0,\cdot + x_n^j) , \end{equation} where the weak limit is taken in $\dot H^1_x \cap \dot H^{s_c}_x$. 
Using \eqref{E:un H1} and passing to a subsequence, we may define \begin{equation} \label{E:phiJ1} \begin{aligned} \phi^j_1 &= \wklim_{n \to \infty} u^{(n)}_t(0,\cdot + x_n^j) = \wklim_{n \to \infty} \Bigl\{u^{(n)}_t(0,\cdot + x_n^j) - \sum_{k=1}^{j-1} \phi^k_1(\cdot - x_n^k + x_n^j)\Bigr\}, \end{aligned} \end{equation} where now the weak limits are taken in $L^2_x$. By \eqref{E:un Hsc}, we have $\phi^j_1 \in L^2_x \cap \dot H^{s_c-1}_x$ for all $1 \leq j < J^*+1$. By Lemma~\ref{L:weak stability} and the fact that $T_n \to \infty$, the solutions $w^j$ to $$ w_{tt}^j - \Delta w^j = |w^j|^pw^j \qtq{with} w^j(0) = \phi^j_0 \qtq{and} w_t^j(0) = \phi^j_1 $$ are global forward in time and satisfy $(w^j,w^j_t) \in C_t ([0, \infty);\dot H^1_x \times L^2_x\cap \dot H^{s_c}_x \times \dot H^{s_c-1}_x)$. Since the $\phi^j_0$ are all nonzero, the lemma will follow if we can prove that there exists $j_0$ such that $$ E(w^{j_0})=E(\phi^{j_0}_0,\phi^{j_0}_1) = \int_{{\mathbb{R}}^d} \tfrac12|\nabla\phi^{j_0}_0(x) |^2 +\tfrac12 |\phi^{j_0}_1(x)|^2 - \tfrac{1}{p+2}|\phi^{j_0}_0(x)|^{p+2}\, dx \leq 0. $$ Indeed, $w^{j_0}$ would then be the solution described in Lemma~\ref{L:bad limit}. Now, by \eqref{E:H1 decoup}, $$ \sum_{j=1}^{J^*} \|\nabla \phi^j_0\|_{L^2_x}^2 \leq \lim_{n \to \infty} \|\nabla u^{(n)}(0)\|_{L^2_x}^2, $$ and by the definition of $\phi^j_1$ (cf.\ the proof of \eqref{E:H1 decoup}), we have $$ \sum_{j=1}^{J^*} \|\phi^j_1\|_{L^2_x}^2 \leq \lim_{n \to \infty} \|u^{(n)}_t(0)\|_{L^2_x}^2. $$ Moreover, by \eqref{E:p+2 decoup} and \eqref{E:rnJ to 0}, $$ \sum_{j=1}^{J^*} \|\phi^j_0\|_{L^{p+2}_x}^{p+2} = \lim_{n \to \infty} \|u^{(n)}(0)\|_{L^{p+2}_x}^{p+2}. $$ Therefore, using \eqref{E:un nrg}, $$ \sum_{j=1}^{J^*} E(w^j) \leq \lim_{n \to \infty} \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x}u^{(n)}(0,x)|^2 - \tfrac1{p+2}|u^{(n)}(0,x)|^{p+2}\, dx \leq 0, $$ and so at least one $w^j$ must have non-positive energy. This completes the proof of the lemma. 
\end{proof} \section{Growth of other global norms}\label{S:other} \begin{proposition} \label{P:finite mass} Let $d \geq 2$, $m \in [0,1]$, and $0 < s_c < 1$. Set $p = \frac4{d-2s_c}$. Let $(u_0,u_1) \in H^1_x \times L^2_x$ and assume that the maximal-lifespan solution $u$ to \eqref{E:eqn} blows up forward in time at $0<T_* < \infty$. Then we have the pointwise in time bound \begin{equation} \label{E:ptwise L2 bound} \int_{{\mathbb{R}}^d} |u(t,x)|^2\, dx \lesssim (T_*-t)^{-\frac4p} \qtq{for all} 0 \leq t < T_*. \end{equation} Furthermore, if $I \subset [0,T_*)$ is an interval with $|I| \sim \dist(I,T_*)$, then we have the time-averaged bound \begin{equation} \label{E:avg H1 bound} \frac1{|I|} \int_I \int_{{\mathbb{R}}^d} |\nabla_{t,x} u(t,x)|^2\, dx\, dt \lesssim \dist(I,T_*)^{-\frac4p - 2}. \end{equation} The implicit constants in \eqref{E:ptwise L2 bound} and \eqref{E:avg H1 bound} may depend on $u$ but are independent of~$t$. \end{proposition} \begin{remark} The blowup rate of solutions to the ODE $v''+m^2v -|v|^pv=0$, namely, $v(t)\sim (T_*-t)^{-2/p}$, shows that the blowup rate in the proposition is sharp. On the other hand, solutions such as those constructed by Kichenassamy, whose blowup surface $t=\sigma(x)$ has a non-degenerate minimum at $T_*$, show that one cannot expect lower bounds of comparable size to the upper bounds given above. \end{remark} \begin{proof} We let $$ M(t) := \int_{{\mathbb{R}}^d} |u(t,x)|^2\, dx $$ denote the `mass'. We differentiate twice with respect to time and use the equation and integration by parts to see that \begin{align*} M'(t) &= \int_{{\mathbb{R}}^d} 2 u(t) u_t(t)\, dx\\ M''(t) &= \int_{{\mathbb{R}}^d} 2 |u_t(t)|^2 - 2|\nabla u(t)|^2 - 2m^2 |u(t)|^2 + 2|u(t)|^{p+2}\, dx\\ &= -2(p+2)E(u) + \int_{{\mathbb{R}}^d} (p+4) |u_t(t)|^2 + p|\nabla u(t)|^2 + p m^2 |u(t)|^2\, dx. \end{align*} By Proposition~\ref{P:lwp}, $M(t),M'(t),M''(t)$ are all finite for $0 \leq t < T_*$.
As the solution $u$ blows up at $T_*$, by Corollary~\ref{C:ee blowup} we must have $$ \lim_{t \uparrow T_*} \int_{{\mathbb{R}}^d} |\nabla u(t,x)|^2 \, dx = \infty. $$ Using this and the conservation of energy, we deduce that there exists $0<t_0<T_*$ such that $$ 2(p+2)E(u) \leq \tfrac{p}2\|\nabla u(t)\|_{L^2_x}^2 \qtq{for all} t_0<t<T_*. $$ Thus, \begin{equation} \label{E:M'' lb} M''(t) \geq \int_{{\mathbb{R}}^d} (p+4) |u_t(t,x)|^2 + \tfrac{p}2|\nabla u(t,x)|^2\, dx \qtq{for all} t_0 < t < T_* \end{equation} and so, by Cauchy--Schwarz, \begin{equation} \label{E:M'<MM''} |M'(t)|^2 \leq \frac4{p+4} M(t)M''(t) \qtq{for all} t_0 < t < T_*. \end{equation} From \eqref{E:M'' lb} we see that $M(t)\geq 0$ is strictly convex on $(t_0, T_*)$ and so vanishes at most once on this interval. Altering $t_0$ if necessary, we may thus assume $M(t)>0$ on $(t_0, T_*)$. This and \eqref{E:M'<MM''} show that $M(t)^{-\frac{p}4}$ is concave on $(t_0,T_*)$; indeed, $$ \partial_t^2 M(t)^{-\frac{p}4} = -\tfrac p4\bigl[M''(t)M(t) - \tfrac{p+4}4(M'(t))^2 \bigr]M(t)^{-\frac{p+8}4} \leq 0. $$ Therefore for all $t,T$ satisfying $t_0 < t \leq T < T_*$, we have $$ M(t)^{-\frac{p}4} \geq \frac{t-t_0}{T-t_0}M(T)^{-\frac{p}4} + \frac{T-t}{T-t_0}M(t_0)^{-\frac{p}4}\geq \frac{T-t}{T-t_0}M(t_0)^{-\frac{p}4}. $$ Letting $T\uparrow T_*$ and rearranging yields $$ M(t) \leq M(t_0)(T_*-t_0)^{\frac4p}(T_*-t)^{-\frac4p} \qtq{for all} t_0 < t < T_*. $$ This proves \eqref{E:ptwise L2 bound}, at least for $t_0 < t < T_*$. For $0\leq t\leq t_0$ this is trivial by the local-in-time continuity of $M$. We now turn to \eqref{E:avg H1 bound}. It suffices to consider intervals $I \subset (t_0,T_*)$ with $|I| \sim \dist(I,T_*)$. Let $\phi_I$ be a smooth cutoff with $\phi_I \equiv 1$ on $I$, $\supp \phi_I \subset [0,T_*)$, $|\supp \phi_I| \sim\dist(\supp \phi_I, T_*) \sim |I|$, and $|\phi_I''| \lesssim |I|^{-2}$.
Then by \eqref{E:M'' lb}, integration by parts, and \eqref{E:ptwise L2 bound}, we have \begin{align*} \int_I \int_{{\mathbb{R}}^d} |\nabla_{t,x} u(t,x)|^2\, dx\, dt &\lesssim \int_0^{T_*} \phi_I(t) M''(t)\, dt = \int_0^{T_*} \phi_I''(t)M(t)\, dt \\ &\lesssim |I||I|^{-2}\dist(I,T_*)^{-\frac4{p}} \sim |I|^{-\frac4{p} -1}. \end{align*} This completes the proof of the proposition. \end{proof} When $0 < s_c \leq \frac12$, the estimate \eqref{E:avg H1 bound} can be improved to a pointwise in time estimate, namely, \begin{equation} \label{E:temp cone bound} \int_{|x-x_0|<T_*-t} (T_*-t)^{2(1-s_c)}|\nabla_{t,x}u(t,x)|^2\, dx \lesssim 1. \end{equation} Inequality \eqref{E:temp cone bound} is proved for $m=0$ in \cite{MerleZaagAJM, MerleZaagMA}. For $0<m\leq1$, this is the content of Theorem~\ref{T:cone bound subc}. By the local well-posedness in Proposition~\ref{P:lwp} and finite speed of propagation, there exists $R>0$ such that \begin{equation} \label{E:global H1 outer space} \int_{|x|\geq R+T_*} |\nabla_{t,x} u(t,x)|^2\, dx \lesssim 1. \end{equation} Since $\{|x|\leq R+T_*\}$ is contained in the union of $C_{d,T_*,R}(T_*-t)^{-d}$ balls of radius $(T_*-t)$, \eqref{E:temp cone bound} implies that $$ \int_{|x|\leq R+T_*}|\nabla_{t,x}u(t,x)|^2\, dx \lesssim (T_*-t)^{-2(1-s_c)-d} = (T_*-t)^{-\frac4p-2}. $$ The estimate \eqref{E:avg H1 bound} follows by combining the inequality above with \eqref{E:global H1 outer space}. In Section~\ref{S:superconf blow} we prove averaged-in-time estimates inside light cones in the super-conformal case. However, when these are used to derive global in space bounds (in the manner just shown), the result is weaker than that given in \eqref{E:avg H1 bound}. \section{Lyapunov functionals}\label{S:Lyapunov} The most flexible way to describe conservation laws is in their microscopic form, that is, as the fact that a certain vector field is divergence-free in spacetime. 
Myriad consequences can then be derived by applying the divergence theorem, or, more generally, by pairing the vector field with the gradient of a function and integrating by parts. One of our goals in this section is to identify the underlying microscopic identities that yield the key monotonicity formulae in the analyses of Merle and Zaag. This points the way to the appropriate analogues for the results in the remaining sections. To simplify various expressions, for the remainder of the article, we will work in light cones $$ \{(t,x):0 < t \leq T, |x-x_0| < t\} \qtq{with} (T,x_0) \in {\mathbb{R}}_+\times{\mathbb{R}}^d, $$ rather than the backwards light cones discussed earlier. It is clear how to adapt Definition~\ref{D:solution} to this case. All the local theory results from Section~\ref{S:local theory} carry over by applying the time translation/reversal symmetry $u(t,x) \mapsto u(T-t,x)$. We begin with energy conservation: If $u$ is a solution to \eqref{E:eqn} and \begin{equation}\label{E:m E defn} \mathfrak{e}^0 := \tfrac12 u_t^2 + \tfrac12 |\nabla u|^2 + \tfrac{m^2}2 u^2 - \tfrac1{p+2} |u|^{p+2} \qtq{and} \vec\mathfrak{e} := - u_t \nabla u, \end{equation} then \begin{equation}\label{E:m E cons} \partial_t \mathfrak{e}^0 + \nabla \cdot \vec\mathfrak{e} = 0. \end{equation} The closest thing to a general procedure for discovering conservation laws is via Noether's theorem which makes the connection to (continuous) symmetries. The general nonlinear Klein--Gordon equation \eqref{E:eqn} has only the $\binom{d+2}{2}$-dimensional Poincar\'e group as symmetries; however, in the special case of $m=0$ and $p=4/(d-1)$ the symmetry group becomes the full $\binom{d+3}{2}$-dimensional conformal group (of $(d+1)$-dimensional \emph{spacetime}). Note that $p=4/(d-1)$ corresponds to $s_c=\frac12$, which explains the sub-/super-conformal nomenclature used in this paper. 
While some elements of the conformal group fail to be true symmetries of the equation, the vestigial `conservation laws' that arise have proven to be very useful. The requisite computations are rather lengthy; nevertheless, the results are very neatly catalogued in the paper \cite{Strauss77} by Strauss. This paper also contains a proof that the only continuous symmetries are those described above, which is to say, they generate the full Lie algebra of Killing fields. Let us quickly review the list. \emph{Translations:} Time translation symmetry is responsible for the energy conservation \eqref{E:m E cons}, above. Spatial translation symmetry implies the conservation of momentum. Note that momentum conservation is of limited use, since it is not coercive. While energy is not coercive in the strictest sense in the focusing case, it is at least a scalar. \emph{Rotations:} Here we include the full group of spacetime rotations $SO(1,d)$, which includes both spatial rotations and Lorentz boosts. This produces a tensor of conserved quantities, of which the usual angular momentum is a part. Again their utility is limited because they are not coercive. \emph{Dilation:} By dilation, we mean rescaling both space and time. This gives rise to a very important conservation law: if \begin{equation}\label{E:frak d} \begin{aligned} \mathfrak{d}^0 &{}:= \hbox to 0.5em{\hss$t$\hss} \bigl[ \tfrac12 |\nabla u|^2 - \tfrac12 u_t^2 + \tfrac{m^2}2 u^2 - \tfrac1{p+2} |u|^{p+2} \bigr] + \bigl[ x\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u \bigr] u_t \\ \vec\mathfrak{d} &{}:= \hbox to 0.5em{\hss$x$\hss} \bigl[ \tfrac12 |\nabla u|^2 - \tfrac12 u_t^2 + \tfrac{m^2}2 u^2 - \tfrac1{p+2} |u|^{p+2} \bigr] - \bigl[ x\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u \bigr]\nabla u \end{aligned} \end{equation} then \begin{align}\label{E:m dilation} \partial_t \mathfrak{d}^0 + \nabla\! \cdot\!\;\! \vec\mathfrak{d} = \tfrac{p(d-1)-4}{2(p+2)} |u|^{p+2} + m^2 |u|^2. 
\end{align} This is an honest conservation law only in the conformally invariant case ($m=0$ and $p=\frac4{d-1}$). However, in the super-conformal case (i.e., $p > \frac4{d-1}$), both terms have the same sign; thus we obtain a monotonicity formula --- a Lyapunov functional! \emph{Conformal translations:} Recall that inversion in a cone, that is, $$ (t,x) \mapsto \bigl(\tfrac{t}{t^2 - |x|^2},\tfrac{x}{t^2 - |x|^2}\bigr), $$ is a conformal map of spacetime. This involution does not commute with translations; by forming commutators, we obtain a $(d+1)$-dimensional family of continuous symmetries (at least in the conformally invariant case). The resulting conservation laws are called conformal energy (relating to time translation) and conformal momentum (resulting from spatial translations). The conformal momentum lacks coercivity. The conservation of conformal energy reads as follows: if \begin{align}\label{E:frak q0} \mathfrak{k}^0 &:= (t^2+|x|^2) \mathfrak{e}^0 + 2tu_t(x\cdot\!\nabla u) + (d-1)tuu_t - \tfrac{d-1}2 u^2 \\ \vec\mathfrak{k} &:= -\bigl[ (t^2+|x|^2) u_t + 2t(x\cdot\!\nabla u)+(d-1)tu\bigr] \nabla u \notag \\ & \qquad\qquad\qquad - 2xt\bigl[ \tfrac12 u_t^2 - \tfrac12 |\nabla u|^2 - \tfrac{m^2}2 u^2 + \tfrac1{p+2} |u|^{p+2} \bigr] \label{E:frak qj} \end{align} then \begin{align}\label{E:m conf E} \partial_t \mathfrak{k}^0 + \nabla\! \cdot\!\;\! \vec\mathfrak{k} = t \tfrac{p(d-1)-4}{(p+2)} |u|^{p+2} + 2tm^2 |u|^2 . \end{align} This completes the list. We found $d+1$ translations, $\tbinom{d+1}{2}$ rotations, $1$ dilation, and $d+1$ conformal translations. These generate the promised $\binom{d+3}{2}$-dimensional group of conformal symmetries. We now turn to converting these microscopic conservation laws into integrated form. The key identities we need originate from energy conservation \eqref{E:m E cons} and the dilation identity \eqref{E:m dilation}.
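For the reader's convenience, we note that verifying \eqref{E:m E cons} is a one-line computation: for sufficiently regular solutions to \eqref{E:eqn},
\begin{align*}
\partial_t \mathfrak{e}^0 + \nabla \cdot \vec\mathfrak{e} &= u_t u_{tt} + \nabla u \cdot \nabla u_t + m^2 u u_t - |u|^p u\, u_t - \nabla u_t \cdot \nabla u - u_t \Delta u \\
&= u_t \bigl[ u_{tt} - \Delta u + m^2 u - |u|^p u \bigr] = 0.
\end{align*}
The identities \eqref{E:m dilation} and \eqref{E:m conf E} may be verified in the same manner, albeit with lengthier algebra.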
The conformal energy identity \eqref{E:m conf E} has a very similar structure to the dilation identity \eqref{E:m dilation}; however, the extra factor of $2t$ on the right-hand side of \eqref{E:m conf E} makes it inferior for our purposes. Integrating the energy identity \eqref{E:m E cons} yields the family of well-known energy flux identities. The particular cases we need are the following: \begin{lemma}[Energy flux identity] \label{L:E flux} Let $u$ be a strong solution to \eqref{E:eqn} in the light cone \begin{equation}\label{E:basic cone} \Gamma := \bigl\{ (t,x): 0<t\leq T \text{ and } |x|<t\bigr\}. \end{equation} Then for all $0<t_0<t_1<T$, \begin{align} & \int_{|x| < t_1} \tfrac{t_1^2-|x|^2}{t_1} \mathfrak{e}^0(t_1,x) \,dx - \int_{|x| < t_0} \tfrac{t_0^2-|x|^2}{t_0} \mathfrak{e}^0(t_0,x) \,dx \notag \\ & \qquad = \int_{t_0}^{t_1} \!\!\! \int_{|x|<t} \tfrac14(1+\tfrac{|x|}t)^2\bigl[u_t(t,x)+u_r(t,x)\bigr]^2 + \tfrac14(1-\tfrac{|x|}t)^2\bigl[u_t(t,x)-u_r(t,x)\bigr]^2\notag \\ &\qquad \qquad + (1+\tfrac{|x|^2}{t^2})\bigl[ \tfrac12|\nabslash u(t,x)|^2 + \tfrac{m^2}2 |u(t,x)|^2 - \tfrac1{p+2}|u(t,x)|^{p+2} \bigr]\, dx\, dt. \label{E:E flux} \end{align} Here $u_r := \frac{x}{|x|}\cdot\nabla u$ and $\nabslash u := \nabla u - \frac{x}{|x|} u_r$ denote the radial and angular derivatives, respectively, and $$ \mathfrak{e}^0(t,x) = \tfrac12 |u_t(t,x)|^2 + \tfrac12 |\nabla u(t,x)|^2 + \tfrac{m^2}2 |u(t,x)|^2 - \tfrac1{p+2} |u(t,x)|^{p+2}, $$ as in \eqref{E:m E defn}. \end{lemma} \begin{proof} The identity follows easily from \eqref{E:m E cons} and integration by parts. First we define a function $\phi$ and a frustum $F$ as follows: $$ F := \bigl\{ (t,x): t_0<t<t_1 \text{ and } |x|<t\bigr\} \quad\text{and}\quad \phi(t,x) = \begin{cases} \tfrac{t^2-|x|^2}{t} &:|x|<t \\ 0 &:|x|\geq t.
\end{cases} $$ Then integration by parts and \eqref{E:m E cons} show that \begin{align*} \iint_F \mathfrak{e}^0(t,x)\partial_t\phi(t,x) & + \vec\mathfrak{e}(t,x)\cdot \nabla\phi(t,x) \,dx\,dt \\ &=\int_{{\mathbb{R}}^d}\mathfrak{e}^0(t_1,x)\phi(t_1,x) -\mathfrak{e}^0(t_0,x)\phi(t_0,x)\, dx, \end{align*} which is equivalent to \eqref{E:E flux}. An alternate proof of \eqref{E:E flux} can be based on applying the divergence theorem to a family of concentric frusta inside $F$ with varying opening angle and then averaging over this family. This proof is more intuitive: on each frustum we obtain the usual energy flux identity, namely, the energy at the top is equal to the energy at the bottom plus the energy flux out through the side of the frustum. However, at the low regularity we are considering, this intermediate step is ill-defined: $\mathfrak{e}^0$ and $\vec\mathfrak{e}$ are merely $L^1_\textrm{loc}$. \end{proof} It is tempting (and not difficult) to run the same argument using the dilation identity; however, the result takes a more satisfactorily coercive form if we make a trivial modification. Here we mean trivial in a cohomological sense: observe that for any vector-valued function $\vec f:{\mathbb{R}}\times{\mathbb{R}}^d\to{\mathbb{R}}^d$ on spacetime, $( \nabla\cdot \vec f ,\ - \partial_t \vec f)$ is divergence free, by equality of mixed partial derivatives. This is quite different from \eqref{E:m E cons} or \eqref{E:m dilation}, which rely on the fact that $u(t,x)$ solves a PDE, namely, \eqref{E:eqn}. Specifically, defining \begin{align}\label{E:frak l} \mathfrak{l}^0 := \mathfrak{d}^0 + \tfrac{d-1}{4} \nabla\!\cdot\bigl(\tfrac{x}t u^2\bigr) \qtq{and} \vec\mathfrak{l} := \vec\mathfrak{d} - \tfrac{d-1}{4} \tfrac{\partial\ }{\partial t} \bigl(\tfrac{x}t u^2\bigr) \end{align} we deduce that \begin{align}\label{E:m l} \partial_t \mathfrak{l}^0 + \nabla\! \cdot\!\;\!
\vec\mathfrak{d} = \tfrac{p(d-1)-4}{2(p+2)} |u|^{p+2} + m^2 |u|^2. \end{align} To see the improvement of coercivity over the original dilation identity \eqref{E:m dilation}, we need to expand out the definition of $\mathfrak{l}^0$ and collect terms. This yields \begin{equation}\label{E:frak l0} \begin{aligned} \mathfrak{l}^0 &= \tfrac1{2t}\bigl|x\!\;\!\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u\bigr|^2 + \tfrac{t}2\bigl(|\nabla u|^2 - |\tfrac{x}t \cdot \nabla u|^2\bigr) - \tfrac{t}{p+2}|u|^{p+2} \\ & \qquad {} + \tfrac{d^2-1}{8t} u^2 + t \tfrac{m^2}{2} u^2; \end{aligned} \end{equation} indeed, the modification \eqref{E:frak l} was chosen precisely to complete the squares here and in \eqref{E:L bndry}. \begin{lemma}[Two dilation inequalities]\label{L:dilat id} Let $u$ be a strong solution to \eqref{E:eqn} in the light cone \eqref{E:basic cone}. Then \begin{equation}\label{E:L flux ineq} \begin{aligned} \int_{t_0}^{t_1} \!\!\! \int_{|x|<t} \! \tfrac{p(d-1)-4}{2(p+2)} & |u(t,x)|^{p+2} + m^2 |u(t,x)|^2 \, dx\,dt + \int_{|x| < t_0} \mathfrak{l}^0(t_0,x) \,dx \\ &\leq \int_{|x| < t_1} \mathfrak{l}^0(t_1,x) \,dx \end{aligned} \end{equation} for all $0<t_0<t_1\leq T$. Moreover, in the conformal case $p(d-1)=4$ we have \begin{equation}\label{E:L flux ineq 2} \begin{aligned} \!\!\! \int_{t_0}^{(1+\alpha)t_0} \!\!\! \int_{|x|<\alpha t} \! (t-|x|)^{d+1}|\nabla_{\!t,x} u(t,x)|^2 + (t-|x|)^{d-1} |u(t,x)|^2 dx\!\:dt \lesssim \alpha t_0^{d+1}\!, \end{aligned} \end{equation} uniformly for $0<\alpha\leq 1$ and $[t_0, (1+\alpha)t_0]\subset (0, T]$. The implicit constant depends on $d$, $T$, and the $H^1_x\times L^2_x$ norm of $(u(T),u_t(T))$ on the ball $\{|x|<T\}$. \end{lemma} \begin{proof} We begin with \eqref{E:L flux ineq}. If $u$ were $C^2$, then we could apply the divergence theorem on the frustum $F=\{ (t,x): t_0<t<t_1 \text{ and } |x|<t\}$ to obtain \begin{align} &\int_{t_0}^{t_1} \!\!\! \int_{|x|<t} \!
\tfrac{p(d-1)-4}{2(p+2)} |u(t,x)|^{p+2} + m^2 |u(t,x)|^2 \, dx\,dt \label{E:L div thm} \\ {}={}& \int_{|x| < t_1} \mathfrak{l}^0(t_1,x) \,dx - \int_{|x| < t_0} \mathfrak{l}^0(t_0,x) \,dx + \int_{t_0}^{t_1} \!\!\! \int_{|x|=t} \vec\mathfrak{l}\cdot\tfrac{x}{|x|} - \mathfrak{l}^0 \ dS(x)\,dt. \notag \end{align} Here, $dS$ denotes surface measure on the sphere $\{|x|=t\}$, or, equivalently, $(d-1)$-dimensional Hausdorff measure. Note that although $(-1,x/|x|)$ is not a unit vector, this is compensated for by the fact that $dS(x)\,dt$ is $2^{-1/2}$ times $d$-dimensional surface measure on the cone. Thus, for $u\in C^2$ the inequality \eqref{E:L flux ineq} follows directly from \eqref{E:L div thm} by neglecting the manifestly sign-definite term \begin{equation}\label{E:L bndry} \int_{t_0}^{t_1} \!\!\! \int_{|x|=t} \vec\mathfrak{l}\;\!\cdot\tfrac{x}{|x|} - \mathfrak{l}^0 \ dS(x)\,dt = - \int_{t_0}^{t_1} \!\!\! \int_{|x|=t} \tfrac1t\bigl[x\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u\bigr]^2 \, dS(x)\,dt. \end{equation} To make this argument rigorous when $(u,u_t)$ is merely in $H^1_x\times L^2_x$, one can use the integration by parts technique of the previous lemma. This time, one chooses $\phi$ to be a mollified version of the characteristic function of the frustum $F$. We turn now to \eqref{E:L flux ineq 2}. Again we simplify the presentation by assuming that the solution is $C^2$. In the conformal case, the coefficient of the potential energy term in \eqref{E:L div thm} is zero, so we rely instead on the positivity of \eqref{E:L bndry}. Applying the dilation identity \eqref{E:L div thm} to $u(t+s,x+y)$ with fixed $0<s<T$ and $|y|<s$, and using the fact that \begin{align*} \lim_{t\searrow s} \int_{|x-y|<t-s} \tfrac{t-s}{p+2} |u(t,x)|^{p+2}\,dx = 0 \quad \text{for all $0<s<T$}, \end{align*} by the definition of a strong solution, we obtain \begin{align*} \int_s^T \!\!\!
\int_{|x-y|=t-s} \bigl[(x-y)\cdot\!\nabla u(t,x) + (t-s) u_t(t,x) + \tfrac{d-1}{2} u(t,x)\bigr]^2 \frac{dS(x)\,dt}{t-s} \lesssim 1, \end{align*} where the implicit constant depends on $T$ and the $H^1_x\times L^2_x$ norm of $(u(T),u_t(T))$ on the ball $\{|x|<T\}$. We will deduce \eqref{E:L flux ineq 2} by integrating this over all choices of $(s,y)$ in the region \begin{equation}\label{sy restr} R(\alpha):= \bigl\{ (s,y) : (1-\alpha)t_0 < s+|y| < (1+\alpha)^2 t_0 \ \text{and} \ |y| < s \bigr\}. \end{equation} As $R(\alpha)$ has volume $O(\alpha t_0^{d+1})$, we deduce \begin{align*} \iint_{R(\alpha)}\! \int_s^T \!\!\! \int_{|x-y|=t-s} \bigl[(x-y)\cdot\!\nabla u + (t-s) u_t + \tfrac{d-1}{2} u\bigr]^2 \frac{dS(x)\,dt}{t-s} \,dy\,ds \lesssim \alpha t_0^{d+1}. \end{align*} Next we replace the variable $x$ by $\omega\in S^{d-1}$ via $x=y+(t-s)\omega$ and then change variables a second time from $y$ to $x$ via $y=x-(t-s)\omega$. This yields \begin{align*} \int\!\!\!\int\!\!\!\int\!\!\!\int_{\Omega(\alpha)} \bigl[(t-s) \bigl(\omega\cdot\!\nabla u(t,x) &+ u_t(t,x)\bigr) + \tfrac{d-1}{2} u(t,x)\bigr]^2 (t-s)^{d-2} \,ds\,dS(\omega)\,dx\,dt\\ &\lesssim \alpha t_0^{d+1}, \end{align*} where the region of integration is $$ \Omega(\alpha) := \bigl\{ (s,\omega,x,t) : \omega\in S^{d-1}, \ 0< s < t < T, \ \text{and} \ \bigl(s,x-(t-s)\omega\bigr)\in R(\alpha) \bigr\}. $$ To find a lower bound for this integral, we replace $\Omega(\alpha)$ by a smaller region, namely, $$ \tilde\Omega(\alpha):= \{ (s,\omega,x,t) : \omega\in S^{d-1}, \ |x|<\alpha t, \ t_0<t<(1+\alpha)t_0, \ 0 < t-s < \tfrac{t-|x|}2 \}. $$ Verifying $\tilde\Omega(\alpha)\subseteq \Omega(\alpha)$ rests on two simple observations. First, $$ |x-(t-s)\omega| < s \iff 2\bigl(t-x\cdot\omega\bigr)(t-s) < t^2 -|x|^2 $$ and so $0<t-s< \tfrac{t-|x|}{2}$ implies $|x-(t-s)\omega| < s$ whenever $|x|< t$. 
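For completeness, we record the algebra behind the first observation: expanding the square and using $(t-s)^2 - s^2 = t^2 - 2ts$,
$$
|x-(t-s)\omega|^2 - s^2 = |x|^2 - 2(t-s)\, x\cdot\omega + t^2 - 2ts = 2(t-s)\bigl(t - x\cdot\omega\bigr) - \bigl(t^2 - |x|^2\bigr),
$$
so the left-hand side is negative exactly when $2\bigl(t - x\cdot\omega\bigr)(t-s) < t^2 - |x|^2$.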
The second observation is that for $|x|< \alpha t$ and $t_0<t<(1+\alpha)t_0$, $$ s + |x - (t-s)\omega| \in \bigl[t-|x|, t+|x|\bigr] \subseteq \bigl((1-\alpha)t_0, (1+\alpha)^2t_0\bigr). $$ The decoupling of variables that is built into the structure of $\tilde\Omega(\alpha)$ makes the integral very easy to evaluate. Indeed, freezing $t$ and $x$ for a moment we have \begin{align*} \frac1{|S^{d-1}|} \int_{S^{d-1}} &\int_{(t+|x|)/2}^t \bigl[(t-s) \bigl(\omega\cdot\!\nabla u + u_t\bigr) + \tfrac{d-1}{2} u\bigr]^2 (t-s)^{d-2} \,ds\,dS(\omega) \\ &= \tfrac{1}{d+1} \bigl(\tfrac{t-|x|}{2}\bigr)^{d+1} \bigl[ u_t^2 + \tfrac1d |\nabla u|^2\bigr] + \tfrac{d-1}{d} \bigl(\tfrac{t-|x|}{2}\bigr)^{d} u u_t + \tfrac{d-1}{4} \bigl(\tfrac{t-|x|}{2}\bigr)^{d-1} u^2. \end{align*} Thus the desired estimate \eqref{E:L flux ineq 2} follows easily by restoring the integrals over $t$ and $x$ and by noting that the quadratic form $$ \tfrac{1}{d+1} X^2 + \tfrac{d-1}{d} XY + \tfrac{d-1}{4} Y^2 $$ has negative discriminant (and so is positive definite). \end{proof} Notice that the LHS\eqref{E:L div thm} is sign-definite for any $p\geq \frac{4}{d-1}$, thus providing a Lyapunov functional in that case. More precisely, Lemma~\ref{L:dilat id} shows that \begin{equation} \label{E:L(t)} L(t) := \int_{|x| < t} \mathfrak{l}^0(t,x) \,dx \end{equation} is an increasing function of time. Moreover, the expansion \eqref{E:frak l0} shows that this Lyapunov functional has good coercivity properties. Specializing to the conformally invariant case $m=0$ and $p=\frac{4}{d-1}$ we find precisely the Lyapunov functional used by Merle and Zaag in \cite{MerleZaagMA} in their treatment of this case (cf. Lemma~21 in \cite{MerleZaagMA}). 
This is not immediately apparent because Merle and Zaag use similarity variables \begin{equation}\label{E:ss vars} w(s,y) := t^{2/p} u(t,x) \qtq{with} y := x/t \qtq{and} s := \log(t), \end{equation} as advocated in earlier work \cite{GigaKohn:Indiana} of Giga and Kohn on the semilinear heat equation. In particular, the connection to the dilation identity does not seem to have been noted before. In Section~\ref{S:conf blow}, we will revisit the work of Merle and Zaag on the conformally invariant wave equation in the course of discussing analogous results for Klein--Gordon. The principal deviation from their argument is the use of the estimate \eqref{E:L flux ineq 2} appearing in Lemma~\ref{L:dilat id}, which should be compared with Proposition~2.4 in \cite{MerleZaagMA} and Proposition~4.2 in \cite{MerleZaagIMRN}. Like their estimates, \eqref{E:L flux ineq 2} was proved by averaging the identity \eqref{E:L div thm}. Here we see one advantage to working in the usual coordinates, namely, it makes it clear how to perform this averaging so as to obtain control over all directional derivatives. More specifically, the region $R(\alpha)$ appearing in \eqref{sy restr} was chosen to contain all spacetime points whose future light cones intersect the region of $(t,x)$ integration appearing in LHS\eqref{E:L flux ineq 2}. \begin{lemma}\label{L:L>0} Let $d \geq 2$, $m \in [0,1]$, and $p = \frac4{d-2s_c}$ with $\tfrac12 \leq s_c < 1$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{(t,x):0 < t \leq T, |x|<t\}$, then $L(t)\geq 0$ for all $0 < t \leq T$. \end{lemma} \begin{proof} The following argument appears also in \cite{AntoniniMerle}. Suppose by way of contradiction that $L(t_0)<0$ for some $t_0 \in (0,T]$.
By the dominated convergence theorem, \eqref{E:frak l0}, and our assumption that $u$ is a strong solution, there exists $0<\delta< t_0$ such that \begin{equation} \label{E:L delta} \begin{aligned} L_\delta(t) := \int_{|x| < t-\delta}& \tfrac1{2(t-\delta)}\bigl|x\!\;\!\cdot\!\nabla u + (t-\delta) u_t + \tfrac{d-1}{2} u\bigr|^2 + \tfrac{t-\delta}2\bigl(|\nabla u|^2 - |\tfrac{x}{t-\delta} \cdot \nabla u|^2\bigr)\\ &\qquad {} - \tfrac{t-\delta}{p+2}|u|^{p+2} + \tfrac{d^2-1}{8(t-\delta)} u^2 + (t-\delta) \tfrac{m^2}{2} u^2\, dx \end{aligned} \end{equation} is negative at time $t=t_0$. Letting $\mathfrak{l}_u^0$ denote the quantity in \eqref{E:frak l0} and $u^\delta(t) := u(t+\delta)$, we may write $$ L_\delta(t) = \int_{|x| < t-\delta} \mathfrak{l}^0_{u^\delta}(t-\delta,x)\, dx. $$ As $u^\delta$ is a strong solution in the light cone $\{(t,x):-\delta < t \leq T-\delta, |x|<t+\delta\}$, Lemma~\ref{L:dilat id} implies that $L_\delta$ is an increasing function of time. As $L_\delta(t_0)<0$ and $0<\delta <t_0$, we deduce that \begin{equation} \label{E:lim Ldelta neg} \lim_{t \searrow \delta} L_\delta(t) < 0. \end{equation} On the other hand, \eqref{E:L delta} gives \begin{equation} \label{E:Ldelta lb} L_\delta(t) \geq -\int_{|x| < t} \tfrac{t-\delta}{p+2} |u(t,x)|^{p+2}\, dx. \end{equation} As $u$ is a strong solution, Lemma~\ref{L:Sob domain} implies that $\|u(t)\|_{L^{p+2}(|x|<t)}$ is uniformly bounded for $t \in [\delta,T]$; the bound may depend on $\delta$, but this is irrelevant. Thus, by \eqref{E:Ldelta lb}, $$ \lim_{t \searrow \delta} L_\delta(t) \geq 0, $$ contradicting \eqref{E:lim Ldelta neg}. This completes the proof of the lemma. \end{proof} In the next section, we will see that our results in the super-conformal case (i.e., when $p > \frac{4}{d-1}$) are less complete than for the conformal and sub-conformal cases. Without going into any details of the analysis, we can already explain why this happens: scaling. 
Recall from the introduction that when $m=0$, the set of solutions to \eqref{E:eqn} is invariant under the scaling $u(t,x)\mapsto u^\lambda(t,x) := \lambda^{2/p} u(\lambda t,\lambda x)$. Recall also that the critical regularity $s_c$ is determined by invariance under this scaling, which leads to $s_c = \frac d2 - \frac2p$. Note that it is reasonable to neglect the mass term when computing the scaling of the equation, on account of it being subcritical when compared with the other terms. We can apply the same reasoning to the conservation of energy to see that it has $\dot H^1_x$ scaling, at least in the cases when $p\leq4/(d-2)$, which includes those discussed in this paper. When $p>4/(d-2)$ the derivative terms in the energy are subcritical relative to the potential energy, which means that it is more reasonable to assert that the energy has the scaling of $\dot H^s_x$ with $s=pd/[2(p+2)]$. (For such $p$, this $s$ is less than $s_c$, so the equation is supercritical relative to both terms in the energy.) The dilation identity has $\dot H_x^{1/2}$ scaling; notice, for example, that the components of $\mathfrak{d}$ resemble those of $\mathfrak{e}$ multiplied by length (or time, which has the same dimensionality). By comparison, the conformal energy scales as $L^2_x$ and this is why it is inferior for our purposes. Indeed, experience has shown that after coercivity, the utility of a conservation/monotonicity law is dictated by its proximity to critical scaling. Thus we obtain optimal results when $s_c=1/2$, but only weaker results at higher critical regularity. When $p<4/(d-1)$, that is, when $s_c < 1/2$, the equation is subcritical relative to the dilation identity. As we are working on a finite time-interval, this is a favourable situation. However, in this case the identity is no longer coercive, which is very bad news. As the dilation identity is local in space, we can produce a whole family of identities (with lower scaling regularity) by averaging translates. 
While the previously neglected coercivity \eqref{E:L bndry} will now produce an additional positive volume integral term, it is far from obvious that coercivity can be restored. Nevertheless, Antonini and Merle \cite{AntoniniMerle} demonstrated that there is a Lyapunov functional when $p<4/(d-1)$. In the limit $p\nearrow 4/(d-1)$, one recovers the functional used in the subsequent paper \cite{MerleZaagMA}. Given the connection of this limiting case to the dilation identity (the topic of Lemma~\ref{L:dilat id}), one would expect to find a connection for all $p<4/(d-1)$. Our next task is to explicate this connection. To do so, we need to begin with a slight detour. Let us consider \emph{complex}-valued solutions to \eqref{E:eqn} for a moment. In this case we pick up an additional symmetry, namely, phase rotation invariance; the class of solutions is invariant under $u(t,x)\mapsto e^{i\theta}u(t,x)$. This begets the law of charge conservation: If $\mathfrak{q}^0 = \bar u u_t$ and $\vec\mathfrak{q} = - \bar u \nabla u$, then \begin{gather}\label{E:m charge id} \partial_t \mathfrak{q}^0 + \nabla\! \cdot\!\;\! \vec\mathfrak{q} = |u_t|^2 - |\nabla u|^2 - m^2 |u|^2 + |u|^{p+2}. \end{gather} Strictly speaking, charge conservation corresponds to the imaginary part of this identity, for which the right-hand side vanishes. (For real-valued solutions, this is just $0=0$.) The real part of \eqref{E:m charge id} is a non-trivial identity, even in the case of real-valued solutions to \eqref{E:eqn}, although it is no longer a true conservation law. Like the dilation identity, this law has $\dot H_x^{1/2}$ scaling. Although it is incidental to the main themes of this section, let us pause to observe that the charge identity underpins the virial theorem for this system. Recall that the virial theorem (cf. 
\cite{Clausius} or \cite[\S 10]{LandauLif1}) shows the following: For a mechanical system whose potential energy is a homogeneous function of the coordinates (of $u$ in our case), the time-averaged potential and kinetic energies are in the proportion dictated by the homogeneity. The proof extends immediately to the case of potential energies that are a sum of terms with different homogeneities, as is the case for our equation: \begin{lemma}[Virial identity]\label{L:EnEqPart} Let $u:[0,\infty)\times{\mathbb{R}}^d\to{\mathbb{R}}$ be a global strong solution to \eqref{E:eqn} with $u \in L^\infty_t H^1_x$ and $u_t\in L^\infty_t L^2_x$. Then \begin{align*} \lim_{T\to\infty}\frac1T \int_0^T \!\!\! \int_{{\mathbb{R}}^d} \! & |\nabla u(t,x)|^2 + m^2 |u(t,x)|^2 - |u(t,x)|^{p+2} \,dx\,dt \\ &= \lim_{T\to\infty} \frac1T \int_0^T \!\!\! \int_{{\mathbb{R}}^d} \! |u_t(t,x)|^2 \,dx\,dt. \end{align*} \end{lemma} \begin{remark} In the lemma we assume that $\|(u(t),u_t(t))\|_{H^1 \times L^2}$ is bounded; it suffices that this norm is merely $o(t)$, as will become immediately apparent in the proof. \end{remark} \begin{proof} Integrating \eqref{E:m charge id} over the space-time slab $[0,T]\times{\mathbb{R}}^d$ gives \begin{align*} \int_0^T \!\!\! \int_{{\mathbb{R}}^d} \! |u_t(t,x)|^2 - |\nabla u(t,x)|^2 - m^2 |u(t,x)|^2 &+ |u(t,x)|^{p+2} \,dx\,dt \\ &= \int_{{\mathbb{R}}^d} \! \mathfrak{q}^0(T,x) - \mathfrak{q}^0(0,x) \,dx. \end{align*} As the right-hand side is $O(1)$ by hypothesis, the result follows by dividing by $T$ and rearranging a little. \end{proof} We now return to the question of determining the link between the Lyapunov functional introduced in \cite{AntoniniMerle} and the dilation identity. It is natural to try averaging the dilation identity against a Lorentz-invariant function.
For a generic tensor $\mathfrak{z}$, integration by parts formally yields the following: \begin{equation}\label{E:general z id} \begin{aligned} \int_{{\mathbb{R}}^d} & \mathfrak{z}^0(t_1,x)\psi(t_1^2-|x|^2)\,dx - \int_{{\mathbb{R}}^d} \mathfrak{z}^0(t_0,x) \psi(t_0^2-|x|^2)\,dx \\ ={}& \int_{t_0}^{t_1} \! \int_{{\mathbb{R}}^d} [\partial_t \;\! \mathfrak{z}^0 + \nabla \! \cdot \!\;\! \vec \mathfrak{z} \,] \psi(t^2-|x|^2) + 2 [ t \;\! \mathfrak{z}^0 - x \cdot \vec \mathfrak{z} \,] \psi'(t^2-|x|^2) \,dx\,dt. \end{aligned} \end{equation} We consider for a moment the case when $\mathfrak{z}=\mathfrak{d}$. Starting with \eqref{E:frak d}, a few elementary manipulations reveal \begin{equation}\label{E:td0-xd} \begin{aligned} t \;\! \mathfrak{d}^0 - x \cdot \vec \mathfrak{d} &= (t^2 - |x|^2) \bigl[ \tfrac12 |\nabla u|^2 - \tfrac12 {u_t}^2 + \tfrac{m^2}2 u^2 - \tfrac1{p+2} |u|^{p+2} \bigr] \\ &\qquad {} + \bigl[ x\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u \bigr] \bigl[ t u_t + x\cdot\!\nabla u\bigr]. \end{aligned} \end{equation} Notice the similarity of the first term in square brackets to the right-hand side of the charge identity \eqref{E:m charge id}. More generally, if $\mathfrak{z}$ has $\dot H^{1/2}_x$ scaling (like $\mathfrak{d}$), then to obtain a formula with critical scaling, the function $\psi(t^2-|x|^2)$ should have dimensions of length to the power $2\alpha$ with $\alpha=\frac12-s_c$. That is, $\psi$ should be homogeneous of degree $\alpha$. By Euler's formula, this implies \begin{equation}\label{E:psi alpha-homo} \psi'(t^2-|x|^2) = \alpha (t^2-|x|^2)^{-1} \psi(t^2-|x|^2). \end{equation} \begin{lemma}[Combined dilation + charge identity]\label{L:dilat id2} Assume $p< 4/(d-1)$ so that $\alpha:=\frac12-s_c > 0$ and let $u$ be a strong solution to \eqref{E:eqn} in the light cone \eqref{E:basic cone}. Let \begin{equation*} \mathfrak{z}^0 := \tfrac1{2t}\bigl|x\!\;\!\cdot\!\nabla u + t u_t + \tfrac2p u\bigr|^2 + \tfrac{t}2\bigl(|\nabla u|^2 - |\tfrac{x}t \!\cdot\!
\nabla u|^2\bigr) - \tfrac{t}{p+2}|u|^{p+2} + \bigl(\tfrac{m^2t}{2} + \tfrac{p+2}{p^2 t}\bigr) u^2. \end{equation*} Then \begin{align} \int_{|x|<t_1} & \mathfrak{z}^0(t_1,x) (t_1^2-|x|^2)^\alpha \,dx - \int_{|x|<t_0} \mathfrak{z}^0(t_0,x) (t_0^2-|x|^2)^\alpha\,dx \label{E:AM mono} \\ = & \int_{t_0}^{t_1} \!\!\!\int_{|x|<t} \!\! 2 \bigl| x\cdot\!\nabla u + t u_t + \tfrac2p u \bigr|^2 \alpha (t^2-|x|^2)^{\alpha-1} + m^2 u^2 (t^2-|x|^2)^\alpha dx\,dt \notag \end{align} for all $0<t_0<t_1\leq T$. \end{lemma} \begin{proof} We will prove the identity in slightly greater generality, by employing \eqref{E:general z id} with a general $\psi(t^2-|x|^2)$ with $\psi$ homogeneous of degree $\alpha$ and with \begin{equation} \label{E:mcz defn} \begin{aligned} \mathfrak{z}^0 = \mathfrak{d}^0 + \alpha \mathfrak{q}^0 + \tfrac1p\nabla \! \cdot \! \vec f + \tfrac{2\alpha}{p} g \qtq{and} \vec\mathfrak{z} = \vec\mathfrak{d} + \alpha \vec\mathfrak{q} - \tfrac1p\partial_t \vec f . \end{aligned} \end{equation} Here $\mathfrak{d}$ and $\mathfrak{q}$ represent the tensors associated with the dilation and charge identities (as in \eqref{E:m dilation} and \eqref{E:m charge id}, respectively), while $$ \vec f(t,x) = \frac{x}t |u(t,x)|^2 \qtq{and} g(t,x) = t^{-1} |u(t,x)|^2. $$ A little patience is all that is required to show that this definition of $\mathfrak{z}^0$ agrees with that stated in the lemma. Notice that the $\vec f$ terms in \eqref{E:mcz defn} differ only in the prefactor from those appearing in \eqref{E:frak l}; equality of mixed partial derivatives shows that this term does not affect the divergence. It was chosen to complete squares in a formula below. The additional summand $g$ appearing in \eqref{E:mcz defn} has no analogue in our previous computations. It is not clear how one might intuit the introduction of this term; however, if one proceeds without it, then the left-over terms can be recognized as a complete derivative and so explain \emph{a posteriori} its inclusion. 
With these preliminaries out of the way, applying \eqref{E:general z id} we obtain \begin{equation*} \begin{aligned} \int_{{\mathbb{R}}^d} \mathfrak{z}^0(t_1,x) & \psi(t_1^2-|x|^2)\,dx - \int_{{\mathbb{R}}^d} \mathfrak{z}^0(t_0,x) \psi(t_0^2-|x|^2)\,dx \\ &= \int_{t_0}^{t_1} \!\!\!\int_{{\mathbb{R}}^d} 2 \bigl| x\cdot\!\nabla u(t,x) + t u_t(t,x) + \tfrac2p u(t,x) \bigr|^2 \psi'(t^2-|x|^2) \,dx\,dt \\ & \qquad\qquad + \int_{t_0}^{t_1} \!\!\! \int_{{\mathbb{R}}^d} m^2 |u(t,x)|^2 \psi(t^2-|x|^2) \,dx\,dt. \end{aligned} \end{equation*} Specializing to \begin{equation*} \psi(t^2- |x|^2) = \begin{cases} (t^2 - |x|^2)^\alpha &: |x| < t \\ 0 &: |x|\geq t \end{cases} \end{equation*} we obtain the identity stated in the lemma. \end{proof} Since the integrand on the right-hand side of \eqref{E:AM mono} is positive, this identity shows the monotonicity in time of the function \begin{equation} \label{E:Z(t)} Z(t) := \int_{|x|<t} \mathfrak{z}^0(t,x) (t^2-|x|^2)^\alpha \,dx. \end{equation} After switching to self-similar variables (cf. \eqref{E:ss vars}), this agrees with the Lyapunov functional introduced by Antonini and Merle in \cite{AntoniniMerle} and used subsequently in \cite{MerleZaagAJM}. As we have seen, this functional is less directly deducible from the dilation identity than that discussed in Lemma~\ref{L:dilat id}, which appeared in the later paper \cite{MerleZaagMA} of Merle and Zaag. Note that formally taking the limit $p\nearrow 4/(d-1)$ in Lemma~\ref{L:dilat id2} yields Lemma~\ref{L:dilat id}. While we find the physical meaning of these Lyapunov functionals difficult to understand properly when written in self-similar variables, the example discussed in Lemma~\ref{L:dilat id2} makes a good case for the utility of such variables as a technique for discovering monotonicity laws.
In the two key examples discussed in this section, the correct multiplier appears after switching to similarity variables and converting the resulting equation to divergence form; compare \cite[Eqn (4)]{MerleZaagAJM} and \cite[Eqn (1.22)]{GigaKohn:CPAM}. Next we give analogues of Lemma~\ref{L:L>0} and \eqref{E:L flux ineq 2} for the functional $Z$. \begin{corollary}\label{C:Z id} Let $d \geq 2$, $m \in [0,1]$, and $p = \frac4{d-2s_c}$ with $0< s_c < \tfrac12$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{(t,x):0 < t \leq T, |x|<t\}$, then $Z(t)\geq 0$ for all $0 < t \leq T$. Moreover, \begin{align}\label{E:smudge Z} \!\!\int_{t_0}^{2t_0} \!\!\!\!\int_{|x|<t} (t-|x|)^{d+2-2s_c} |\nabla_{t,x} u(t,x)|^2 + (t-|x|)^{d-2s_c} |u(t,x)|^2 \,dx\,dt \lesssim t_0^{d+1}, \! \end{align} uniformly for $0<t_0\leq\frac12T$. \end{corollary} \begin{proof} The proof that $Z(t)\geq 0$ is identical to that of Lemma~\ref{L:L>0}. To prove \eqref{E:smudge Z} we argue much as we did for \eqref{E:L flux ineq 2}. First we observe that applying Lemma~\ref{L:dilat id2} to the light cone with apex $(s,y)$ yields $$ \int_s^T \int_{|x-y|<t-s} \bigl| (x-y)\cdot\!\nabla u + (t-s) u_t + \tfrac2p u \bigr|^2 \bigl\{(t-s)^2-|x-y|^2\bigr\}^{\alpha-1} dx\,dt \lesssim 1, $$ where the implicit constant depends on $T$ and the $H^1_x\times L^2_x$ norm of $(u(T),u_t(T))$ on the ball $\{|x|<T\}$. Next we integrate this inequality over the region $|y|<s<2t_0$ and so deduce $$ \int\!\!\int\!\!\int\!\!\int \bigl| (x-y)\cdot\!\nabla u + (t-s) u_t + \tfrac2p u \bigr|^2 \bigl\{(t-s)^2-|x-y|^2\bigr\}^{\alpha-1} dx\,dt\,dy\,ds \lesssim t_0^{d+1}, $$ where the integral is over a region which contains $$ \Omega := \{ (s,y,t,x) : t_0<t<2t_0,\ \tfrac{t+|x|}{2} < s < t,\ |x-y| < t-s,\text{ and } |x|<t\}. $$ Freezing $(t,x)$ and integrating out $y$ and then $s$ produces the estimate \eqref{E:smudge Z}. 
\end{proof} \section{Bounds in light cones: The super-conformal case}\label{S:superconf blow} In this section, we consider the super-conformal case, $\frac12 < s_c < 1$. Little seems to be known about the behaviour of local norms for blowup solutions in this case. In particular, the work of Merle and Zaag \cite{MerleZaagAJM, MerleZaagMA, MerleZaagIMRN} only considers the conformal and sub-conformal cases $(0 < s_c \leq \frac12$). The majority of this section will be devoted to a proof of the following theorem: \begin{theorem}[Mass/Energy bounds] \label{T:cone bound superc} Let $d \geq 2$, $m \in [0,1]$, and $p = \frac4{d-2s_c}$ with $\frac12 < s_c < 1$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{(t,x):0 < t \leq T, |x|<t\}$, then $u$ satisfies \begin{equation} \label{E:cone bound superc} \begin{aligned} \int_0^{T} \int_{|x|<t} \bigl(1-\tfrac{|x|}{t}\bigr)^2 & |\nabla_{t,x}u(t,x)|^2 + |u(t,x)|^{p+2}\, dx\, dt \lesssim 1, \end{aligned} \end{equation} as well as the pointwise in time bounds \begin{equation} \label{E:mass bound superc} \int_{|x|< t} |u(t,x)|^2\, dx \lesssim t^{\frac{pd}{p+4}} \qtq{and} \int_{|x|< t} |u(t,x)|^{\frac{p+4}2}\, dx \lesssim 1 \end{equation} for all $t \in (0,T]$. The implicit constants in \eqref{E:cone bound superc} and \eqref{E:mass bound superc} depend on $d$, $p$, $T$, $\|u(T)\|_{H^1_x(|x|<T)}$, and $\|u_t(T)\|_{L^2_x (|x|<T)}$. \end{theorem} \begin{proof} We rely on the information provided by the Lyapunov functional $L$ introduced in the previous section. Specifically, we will make use of Lemmas~\ref{L:dilat id}~and~\ref{L:L>0}, which assert that \begin{equation} \label{E:L superc} L(t) := \int_{|x| < t} \mathfrak{l}^0(t,x)\, dx \end{equation} is a nonnegative increasing function of time. To keep formulae within margins, we will not keep track of the specific dependence on $T$ or $(u(T),u_t(T))$ in the estimates that follow. We first consider \eqref{E:cone bound superc}. 
Combining Lemmas~\ref{L:dilat id}~and~\ref{L:L>0}, we immediately obtain \begin{align} \label{E:p+2 superc} \int_0^{T} \int_{|x|<t} |u(t,x)|^{p+2}\, dx\, dt &\lesssim L(T)\lesssim 1. \end{align} Thus there exists a sequence $t_n \searrow 0$ such that $$ \lim_{n \to \infty} t_n\|u(t_n)\|_{L_x^{p+2}(|x|<t_n)}^{p+2} = 0. $$ Therefore, if we define $$ g(t) := \int_{|x|<t} \tfrac{t^2-|x|^2}t \mathfrak{e}^0(t,x)\, dx, $$ then $$ \liminf_{n \to \infty} g(t_n) \geq 0. $$ Thus, combining Lemma~\ref{L:E flux} and \eqref{E:p+2 superc}, we obtain \begin{align*} \int_0^{T} \int_{|x| < t} &\tfrac14\bigl(1+\tfrac{|x|}t\bigr)^2\bigl[u_t(t,x)+u_r(t,x)\bigr]^2 + \tfrac14\bigl(1-\tfrac{|x|}t\bigr)^2\bigl[u_t(t,x)-u_r(t,x)\bigr]^2\\ &\qquad +\tfrac12\bigl(1+\tfrac{|x|^2}{t^2}\bigr)|\nabslash u(t,x)|^2 \, dx\, dt\\ & \leq g(T) + \int_0^{T} \int_{|x| < t} \tfrac1{p+2}\bigl(1+\tfrac{|x|^2}{t^2}\bigr)|u(t,x)|^{p+2}\, dx\, dt \lesssim 1. \end{align*} This bounds each of the derivatives of $u$ with respect to the frame $\nabslash$, $\partial_t + \partial_r$, and $\partial_t - \partial_r$, which spans all possible spacetime directions. The estimate for the last of these is the weakest, since it deteriorates near the edge of the cone, and so dictates the form of~\eqref{E:cone bound superc}. We turn now to \eqref{E:mass bound superc}. Observe that the first inequality follows from the second and H\"older's inequality. Thus, it remains to establish the second bound in \eqref{E:mass bound superc}. With the estimates at hand, there are several ways to proceed. The method we present below is informed by the needs of Section~\ref{S:conf blow}; in particular, it uses only the estimates available in that case. We first notice that, as a consequence of \eqref{E:frak l0}, \eqref{E:cone bound superc}, and the monotonicity of $L(t)$, \begin{align}\label{t0 bound} \int_{t_0}^{2t_0} \!\!\int_{|x|<t} \! 
\bigl|x\cdot \nabla u + tu_t +\tfrac{d-1}2u \bigr|^2\, dx\, dt &\leq \int_{t_0}^{2t_0} \!2tL(t)\, dt + \int_{t_0}^{2t_0} \!\!\int_{|x|<t} \!\tfrac{2t^2}{p+2}|u|^{p+2}\, dx\, dt\notag\\ &\lesssim t_0^2 \end{align} for all $0<t_0\leq \frac12 T$. Moreover, by H\"older and \eqref{E:cone bound superc}, we can control the desired quantity but only on average in time: \begin{align}\label{t0 bound q} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t} |u(t,x)|^{\frac{p+4}2}\, dx\, dt &\lesssim t_0^{\frac{p(d+1)}{2(p+2)}}\biggl[\int_{t_0}^{2t_0}\!\!\! \int_{|x|<t}|u(t,x)|^{p+2}\, dx\, dt\biggr]^{\frac{p+4}{2(p+2)}}\notag\\ &\lesssim t_0^{\frac{p(d+1)}{2(p+2)}}. \end{align} Next we will prove that the estimate \eqref{t0 bound q} together with the $L^2_{t,x}$ control \eqref{t0 bound} over the directional derivatives of $u$ inside the light cone imply the desired pointwise in time estimates. Observe that for a $C^1$ function $f:[1,2]\to {\mathbb{R}}$, \begin{align*} \sup_{1\leq \lambda\leq 2} |f(\lambda)| &\leq \int_1^2 |f(\lambda)| \,d\lambda + \int _1^2 |f'(\lambda)|\,d\lambda. \end{align*} The second part of \eqref{E:mass bound superc} follows by applying this to $f(\lambda):= \int_{|x|<t_0} |\lambda^{\frac{d-1}2} u(\lambda t_0, \lambda x)|^{\frac{p+4}2} \, dx$ and using Cauchy--Schwarz, \eqref{E:cone bound superc}, \eqref{t0 bound}, and \eqref{t0 bound q}. Indeed, \begin{align*} &\sup_{t_0\leq t\leq 2t_0} \int_{|x|<t} |u(t,x)|^{\frac{p+4}2}\, dx\\ &\quad \lesssim \sup_{1\leq \lambda\leq 2} |f(\lambda)|\\ &\quad\lesssim \frac1{t_0}\int_{t_0}^{2t_0} \!\!\!\int_{|x|<t} |u|^{\frac{p+4}2}\, dx\, dt\\ &\qquad +\frac1{t_0} \biggl(\int_{t_0}^{2t_0} \!\!\!\int_{|x|<t}\! |u|^{p+2}\, dx\, dt\biggr)^{\!1/2} \biggl(\int_{t_0}^{2t_0} \!\!\!\int_{|x|<t}\bigl|x\cdot \nabla u + tu_t +\tfrac{d-1}2u \bigr|^2dx\, dt\biggr)^{\!1/2}\\ &\quad \lesssim t_0^{\frac{p(d-1)-4}{2(p+2)}} + 1\\ &\quad \lesssim_T 1. \end{align*} The last inequality relies on the super-conformality hypothesis, namely, $p(d-1)>4$. 
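For completeness, let us record the differentiation underlying the Cauchy--Schwarz step above. Writing $f(\lambda) = \int_{|x|<t_0} |\lambda^{\frac{d-1}2} u(\lambda t_0,\lambda x)|^{\frac{p+4}2}\,dx$ and using that $u$ is real-valued, the chain rule gives
\begin{equation*}
f'(\lambda) = \tfrac{p+4}2 \int_{|x|<t_0} \bigl|\lambda^{\frac{d-1}2} u(t,y)\bigr|^{\frac p2+1} \operatorname{sgn}\bigl(u(t,y)\bigr)\, \lambda^{\frac{d-3}2} \bigl[\, y\cdot\nabla u + t u_t + \tfrac{d-1}2 u\,\bigr](t,y)\, dx
\end{equation*}
with $(t,y) := (\lambda t_0,\lambda x)$. The bracket is exactly the directional derivative controlled by \eqref{t0 bound}, and applying Cauchy--Schwarz together with the change of variables $(\lambda,x)\mapsto(t,y)$ produces the product of integrals displayed above.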
This completes the proof of Theorem~\ref{T:cone bound superc}. \end{proof} We conclude this section with a corollary of Theorem~\ref{T:cone bound superc}. \begin{corollary} \label{C:cone bound superc} Let $d \geq 2$, $m \in [0,1]$, and $\frac12 < s_c < 1$. Set $p=\frac4{d-2s_c}$. Assume that there exists $0 < \varepsilon \leq 1$ such that $u$ is a strong solution to \eqref{E:eqn} in the cone $\{(t,x):0 < t \leq T, |x|<(1+\varepsilon)t\}$. Then for each $0 < t_0 \leq \frac T2$, \begin{equation} \label{E:dyadic bound superc} \begin{aligned} &\int_{t_0}^{2t_0} \int_{|x|<t} |\nabla_{t,x} u(t,x)|^2\, dx\, dt \lesssim 1, \end{aligned} \end{equation} with the implicit constant depending on $d$, $p$, $\varepsilon$, $T$, and the $H^1_x\times L_x^2$ norm of $(u(T), u_t(T))$ on the ball $\{|x|<T\}$. \end{corollary} We note that the assumption that $u$ is defined in the cone $\{(t,x):0 < t \leq T, |x|<(1+\varepsilon)t\}$ is equivalent to the assumption that $(0,0)$ is not a characteristic point of the (backwards in time) blowup surface of $u$. \begin{proof} We begin with a simple covering argument. There exist $N$, depending on $\varepsilon$ and $d$, and a set $\{x_j\}_{j=1}^N$ with $|x_j|<(1+\varepsilon)\frac{t_0}2$ such that $$ \{x :|x|<t_0\} \subset \bigcup_{j=1}^N \{x:|x-x_j|<(1-\tfrac\eps2)\tfrac{t_0}2\}. $$ Therefore, \begin{align*} &\{(t,x):t_0 \leq t \leq 2t_0, \, |x|<t\}\\ &\qquad \subset \bigcup_{j=1}^N \{(t,x) :t_0 \leq t \leq 2t_0, \, |x-x_j|<(1-\tfrac\eps2)\tfrac{t_0}2 + (t-t_0)\}\\ &\qquad \subset \bigcup_{j=1}^N \{(t,x):t_0 \leq t \leq 2t_0, \, |x-x_j| < (1-\tfrac\varepsilon{8})(t-\tfrac{t_0}2)\}. 
\end{align*} By assumption, $u$ is defined on each light cone $$ \{(t,x):\tfrac12t_0<t \leq T, \, |x-x_j| < t-\tfrac{t_0}2 \}, $$ so, by \eqref{E:cone bound superc}, \begin{align*} &\int_{t_0}^{2t_0} \int_{|x-x_j|<(1-\frac{\varepsilon}8)(t-\frac{t_0}2)}|\nabla_{t,x}u(t,x)|^2\, dx\, dt \\ &\qquad \lesssim_\varepsilon \int_{\frac{t_0}2}^{T} \int_{|x-x_j|<t-\frac{t_0}2} \bigl(1-\tfrac{|x-x_j|}{t-\tfrac{t_0}2}\bigr)^2|\nabla_{t,x}u(t,x)|^2\, dx\, dt \lesssim_\varepsilon 1. \end{align*} Summing this inequality over $1 \leq j \leq N$, we derive the claim. \end{proof} \section{Bounds in light cones: The conformal and sub-conformal cases}\label{S:conf blow} The goal of this section is to give pointwise in time upper bounds on the blowup rate of solutions to \eqref{E:eqn} in the conformal and sub-conformal cases, that is, when $0<s_c\leq \frac12$. \begin{theorem} \label{T:cone bound subc} Let $d \geq 2$, $m \in [0,1]$, $0 < s_c \leq \tfrac12$, and $p=\frac4{d-2s_c}$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{ (t,x) : 0<t\leq T \text{ and } |x|<t \}$, then \begin{equation} \label{E:cone bound subc} \int_{|x|<t/2} t^{-2s_c}|u(t,x)|^2 + t^{2(1-s_c)}|\nabla_{t,x}u(t,x)|^2\, dx \lesssim 1. \end{equation} The implicit constant depends on $d, s_c, T,$ and the $H^1_x\times L_x^2$ norm of $(u(T),u_t(T))$ on the ball $\{|x|<T\}$. \end{theorem} For the nonlinear wave equation, that is, \eqref{E:eqn} with $m=0$, this theorem was proved by Merle and Zaag in \cite{MerleZaagIMRN}, building on earlier work \cite{MerleZaagAJM,MerleZaagMA} that considered solutions defined in a spacetime slab. This result describes the behaviour of solutions near a general blowup surface $t=\sigma(x)$, as defined in the Introduction. In particular, in the case of a non-characteristic point, a simple covering argument yields \eqref{E:cone bound subc} with integration over the larger region $|x|<t$. 
The arguments of Merle and Zaag adapt \emph{mutatis mutandis} to the Klein--Gordon equation \eqref{E:eqn}, since the mass term always appears with the helpful sign. However, our Lemma~\ref{L:dilat id} and Corollary~\ref{C:Z id} allow us to streamline the arguments of \cite{MerleZaagAJM}, \cite{MerleZaagMA}, and \cite{MerleZaagIMRN}. We focus first on the conformal case; the discussion of the sub-conformal case can be found at the end of this section. In the conformal case, our argument relies on \eqref{E:L flux ineq 2}, which gives control over all directional derivatives of the solution; this should be compared with Proposition~2.4 in \cite{MerleZaagMA} and Proposition~4.2 in \cite{MerleZaagIMRN}, which only provide control over a subset of directional derivatives. An immediate consequence of \eqref{E:L flux ineq 2} is \begin{align}\label{E:deriv on ball} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t-\frac1{10}t_0} |\nabla_{t,x} u|^2 + t_0^{-2} |u|^2 + t_0^{-2}\bigl|x\cdot\nabla u +tu_t +\tfrac{d-1}2 u\bigr|^2\, dx\, dt \lesssim 1, \end{align} uniformly in $0<t_0\leq \frac12 T$. Applying \eqref{E:L flux ineq 2} to a spacetime translate of our solution yields \begin{align}\label{E:deriv on alpha ball} \int_{t_0}^{(1+\alpha) t_0}\!\!\! \int_{|x-x_0|<\alpha t} \bigl|(x-x_0)\cdot\nabla u +tu_t +\tfrac{d-1}2 u\bigr|^2\, dx\, dt \lesssim \alpha t_0^2, \end{align} uniformly for $0<\alpha\leq \frac1{10}$, $|x_0|<\frac45 t_0$, and $[t_0,(1+\alpha)t_0]\subseteq (0,T]$. For comparison, see the proof of Proposition~3.1 in \cite{MerleZaagMA} and Proposition~4.2 in \cite{MerleZaagIMRN}. Next we transfer the estimate \eqref{E:deriv on ball} to a bound on the potential energy.
To do this, we will employ a translated version of the functional $$ L(t) = \int_{|x|<t} \mathfrak{l}^0(t,x)\, dx, $$ introduced in Section~\ref{S:Lyapunov}; recall that $\mathfrak{l}^0$ is defined as $$ \mathfrak{l}^0 = \tfrac1{2t}|x\cdot \nabla u + tu_t + \tfrac{d-1}2u|^2 + \tfrac{t}2(|\nabla u|^2 - |\tfrac{x}t \cdot \nabla u|^2) - \tfrac{(d-1)t}{2(d+1)}|u|^{\frac{2(d+1)}{d-1}} + \tfrac{d^2-1}{8t}u^2 + t\tfrac{m^2}2u^2. $$ By Lemmas~\ref{L:dilat id}~and~\ref{L:L>0}, $L$ is a nonnegative increasing function of time. Using this functional adapted to the cone $\{|x|<t-\frac1{10}t_0\}$, specifically the fact that $L\geq 0$, we deduce \begin{align}\label{E:pot on ball} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t-\frac1{10}t_0} |u(t,x)|^{\frac{2(d+1)}{d-1}}\, dx\, dt \lesssim \text{LHS\eqref{E:deriv on ball}} \lesssim 1, \end{align} uniformly in $0<t_0\leq \frac12 T$. To prove Theorem~\ref{T:cone bound subc}, we need to upgrade the averaged in time estimates obtained above to pointwise in time estimates. The first step is the following result: \begin{lemma}[Pointwise in time estimates on the mass and a critical norm] \label{L:ptwise mass subc}\leavevmode\\ Let $0<t_0\leq \frac12 T$. For $t \in [t_0, 2t_0]$ we have \begin{equation} \label{E:ptwise mass subc} \int_{|x|<t-\frac1{10}t_0} |u(t,x)|^2\, dx \lesssim t_0 \end{equation} and \begin{equation} \label{E:ptwise q subc} \int_{|x-x_0|<\alpha t} |u(t,x)|^{\frac{2d}{d-1}}\, dx \lesssim \alpha^{1/2}, \end{equation} whenever $0<\alpha\leq \frac1{10}$ and $|x_0|<\frac45 t_0$. \end{lemma} \begin{proof} The proof follows the argument used to establish \eqref{E:mass bound superc}. To derive \eqref{E:ptwise mass subc}, one uses the function $f(\lambda):= \int_{|x|<t_0} |\lambda^{\frac{d-1}2} u(\lambda t_0,\lambda x)|^2\, dx$, while for \eqref{E:ptwise q subc} one uses the original version of $f$ with $p=\frac{4}{d-1}$. 
We need two ingredients. The first is an integral bound on the directional derivatives of $u$ over the appropriate cone; in the current setting this role is played by \eqref{E:deriv on ball} and \eqref{E:deriv on alpha ball}. The second is a pair of averaged in time estimates for the left-hand sides of \eqref{E:ptwise mass subc} and \eqref{E:ptwise q subc}; this role is played by \eqref{E:deriv on ball} and \begin{align*} \int_{t_0}^{(1+\alpha)t_0} \!\!\! &\int_{|x-x_0|<\alpha t} |u(t,x)|^{\frac{2d}{d-1}}\, dx\, dt\\ &\lesssim t_0\alpha^{\frac{d}{d+1}}\biggl[\int_{t_0}^{(1+\alpha)t_0}\!\!\! \int_{|x-x_0|<\alpha t}|u(t,x)|^{\frac{2(d+1)}{d-1}}\, dx\, dt\biggr]^{\frac{d}{d+1}} \lesssim t_0\alpha^{\frac{d}{d+1}}, \end{align*} which follows from H\"older and \eqref{E:pot on ball}. \end{proof} The simple argument just used does not allow us to upgrade our integrated gradient or potential energy estimates to versions that are pointwise in time. We will instead employ a bootstrap argument close to that in the work of Merle and Zaag. The requisite smallness is provided by \eqref{E:ptwise q subc} by choosing $\alpha$ small enough. Combining this estimate with the Gagliardo--Nirenberg inequality gives \begin{align} \int_{|x-x_0|<r} & |u(t,x)|^{\frac{2(d+1)}{d-1}}\, dx \notag\\ &\lesssim\biggl[\int_{|x-x_0|<r} |u(t,x)|^{\frac{2d}{d-1}}\, dx\biggr]^{\frac2d} \int_{|x-x_0|<r} |\nabla u(t,x)|^2 + \tfrac1{r^2} |u(t,x)|^2\, dx \notag\\ &\lesssim \alpha^{1/d}\int_{|x-x_0|<r} |\nabla u(t,x)|^2 + \tfrac1{r^2} |u(t,x)|^2\, dx, \label{E:nonconc} \end{align} uniformly for $0<\alpha\leq \frac1{10}$, $r<\alpha t$, and $|x_0|<\frac45 t$.
To obtain an inequality in the opposite direction, we use boundedness of the functional $L$ adapted to the cone $\{(s,y) : |y-x_0|<r+s-t\}$ together with the observation \begin{align*} \bigl(1-\tfrac{|x|^2}{t^2}\bigr) \bigl|\nabla_{t,x} u|^2 &\lesssim t^{-2}|x\cdot \nabla u + tu_t + \tfrac{d-1}2u|^2 + (|\nabla u|^2 - |\tfrac{x}t \cdot \nabla u|^2) + t^{-2}u^2\\ &\lesssim t^{-1} \mathfrak{l}^0 + |u|^{\frac{2(d+1)}{d-1}}. \end{align*} This gives \begin{align}\label{E:grad from pot} \int_{|x-x_0|<r}\bigl( 1-\tfrac{|x-x_0|^2}{r^2} \bigr) |\nabla_{t,x} u(t,x)|^2\, dx \lesssim \tfrac1r + \int_{|x-x_0|<r} |u(t,x)|^{\frac{2(d+1)}{d-1}}\,dx, \end{align} where the coefficient of $1/r$ depends on the norm of $(u(T),u_t(T))$ via the monotonicity of $L$. Combining \eqref{E:nonconc} and \eqref{E:grad from pot} yields \begin{align}\label{almost} \int_{|x-x_0|<\frac12 r} |\nabla u(t,x)|^2\, dx &\lesssim \tfrac1r + \alpha^{1/d}\int_{|x-x_0|<r} |\nabla u(t,x)|^2 + \tfrac1{r^2} |u(t,x)|^2\, dx, \end{align} which is not immediately amenable to bootstrap because the two regions of integration are different. To remedy this, we set $R=\frac35t$ and $r= \frac\alpha3(R-|x_0|)$ and apply the following averaging operator to both sides: $$ f(x_0) \mapsto \frac1{R^{d+2}} \int_{|x_0|<R} (R-|x_0|)^2 f(x_0) \,dx_0. $$ We note that $$ \bigl\{ (x_0,x) : |x_0|\!<\!R,\ |x-x_0| \!<\! \tfrac\alpha3(R-|x_0|) \bigr\} \subseteq \bigl\{ (x_0,x) : |x|\!<\!R,\ |x_0-x| \!<\! \tfrac\alpha2(R-|x|) \bigr\} $$ and $$ \bigl\{ (x_0,x) : |x_0|\!<\!R,\ |x_0-x|\! <\! \tfrac\alpha6(R-|x_0|) \bigr\} \supseteq \bigl\{ (x_0,x) : |x|\!<\!R,\ |x-x_0| \!<\! \tfrac\alpha7(R-|x|) \bigr\}, $$ and that $R-|x|\sim R-|x_0|$ on any of these sets. 
Using Fubini, we deduce \begin{align*} \int_{|x|<R} \bigl( 1 - \tfrac{|x|}R\bigr)^{d+2} &|\nabla_{t,x} u(t,x)|^2 \, dx\\ &\lesssim (\alpha R)^{-1} +\alpha^{1/d}\int_{|x|<R}\bigl( 1 - \tfrac{|x|}R\bigr)^{d+2} |\nabla u(t,x)|^2 \, dx \\ &\quad + (\alpha R)^{-2}\alpha^{1/d}\int_{|x|<R} \bigl( 1 - \tfrac{|x|}R\bigr)^{d} |u(t,x)|^2\, dx. \end{align*} Choosing $\alpha$ sufficiently small and recalling $R=\frac35t$ and \eqref{E:ptwise mass subc}, we obtain \begin{align}\label{tada} \int_{|x|<\frac35t} \bigl( 1 - \tfrac{5|x|}{3t}\bigr)^{d+2} |\nabla_{t,x} u(t,x)|^2 \, dx \lesssim t^{-1}, \end{align} which yields the requisite bound on the spacetime gradient of $u$. To finish the proof of \eqref{E:cone bound subc}, we merely note that the bound on the $L^2_x$ norm was obtained already in Lemma~\ref{L:ptwise mass subc}. This completes the proof of Theorem~\ref{T:cone bound subc} in the conformal (i.e., $s_c=1/2$) case. Our argument for the sub-conformal case is similar, but slightly simpler, with \eqref{E:smudge Z} taking over the role played above by \eqref{E:L flux ineq 2}. In particular we have the following analogue of \eqref{E:deriv on ball} \begin{align}\label{E:D ball} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t-\frac1{10}t_0} |\nabla_{t,x} u|^2 + t_0^{-2} |u|^2 + t_0^{-2}\bigl|x\cdot\nabla u +tu_t +\tfrac2p u\bigr|^2\, dx\, dt \lesssim t_0^{2s_c -1}. \end{align} From this and the fact that $Z\geq 0$, we deduce \begin{align}\label{E:P ball} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t-\frac1{10}t_0} |u(t,x)|^{p+2}\, dx\, dt \lesssim t_0^{2s_c -1}. \end{align} Using the same argument as in Lemma~\ref{L:ptwise mass subc} and~\eqref{E:mass bound superc}, modifying $f(\lambda)$ as needed, we obtain \begin{equation}\label{E:wk pt} \int_{|x|<\frac9{10} t} |u(t,x)|^2\, dx \lesssim t^{2 s_c} \qtq{and} \int_{|x|<\frac9{10} t} |u(t,x)|^{\frac{p+4}2}\, dx \lesssim t^{2s_c -1}. \end{equation} Let $\gamma:=\frac 12-s_c$. 
Using the boundedness of the functional $Z$ associated to the cone with apex $(t-r,x_0)$, we obtain \begin{align}\label{p4g} \int_{|x-x_0|<r} \!\bigl(1 - \tfrac{|x-x_0|^2}{r^2}\bigr)^{\gamma + 1}|\nabla_{t,x} u|^2 \, dx \lesssim r^{2s_c-2} + \int_{|x-x_0|<r} \!\bigl(1 - \tfrac{|x-x_0|^2}{r^2}\bigr)^{\gamma}|u|^{p+2} \, dx, \end{align} for all $|x_0|<\frac45 t$ and $0<r < \tfrac1{10} t$. This plays the role of \eqref{E:grad from pot}. In order to obtain an upper bound on the potential energy we use the Gagliardo--Nirenberg inequality: \begin{align*} \int_{|x-x_0|<r} & \bigl(1 - \tfrac{|x-x_0|^2}{r^2}\bigr)^{\gamma}|u|^{p+2} \, dx\\ &\lesssim \biggl[ \int_{|x-x_0|<r} |u|^{\frac{p+4}2} \, dx\biggr]^{\frac{8-2p(d-2)}{8-p(d-2)}} \biggl[ \int_{|x-x_0|<r} |\nabla u|^{2} + r^{-2} |u|^2 \, dx\biggr]^{\frac{pd}{8-p(d-2)}}. \end{align*} Incorporating \eqref{E:wk pt} we deduce \begin{align*} \int_{|x-x_0|<r} \bigl(1 - \tfrac{|x-x_0|^2}{r^2}\bigr)^{\gamma} &|u|^{p+2} \, dx \\ &\lesssim t^{(2s_c-1)\frac{8-2p(d-2)}{8-p(d-2)}}\biggl[ r^{-2} t^{2s_c} + \int_{|x-x_0|<r} |\nabla u|^{2} \, dx\biggr]^{\frac{pd}{8-p(d-2)}}. \end{align*} Combining this estimate with \eqref{p4g} yields \begin{align*} \int_{|x-x_0|<r/2} |\nabla_{t,x} u|^2 \, dx \lesssim r^{2s_c-2} + t^{2s_c-2} \biggl[ r^{-2} t^{2} + t^{2-2s_c} \!\!\int_{|x-x_0|<r} |\nabla u|^{2} \, dx\biggr]^{\frac{pd}{8-p(d-2)}}. \end{align*} The basic bootstrap relation follows by setting $r=\tfrac13(\frac45 t-|x_0|)$ and applying the averaging operator $$ f(x_0) \mapsto \frac1{t^{d+2}} \int_{|x_0|< \frac45 t} f(x_0) (\tfrac{4t}5 -|x_0| )^2 \,dx_0 $$ to both sides. 
A little patience and Jensen's inequality then yield \begin{align*} \int_{|x|<\frac45 t} \bigl(1 - \tfrac{5|x|^2}{4t^2}\bigr)^{d + 2}& |\nabla_{t,x} u|^2 \, dx\\ &\lesssim t^{2s_c-2} \biggl[ 1+ t^{2-2s_c}\int_{|x|<\frac45 t} \bigl(1 - \tfrac{5|x|^2}{4t^2}\bigr)^{d + 2} |\nabla u|^{2} \, dx\biggr]^{\frac{pd}{8-p(d-2)}}, \end{align*} in much the same manner as in the conformal case. As the last power here is smaller than one (indeed, $\frac{pd}{8-p(d-2)}<1$ precisely because $p(d-1)<4$ in the sub-conformal case), this inequality yields $$ \int_{|x|<\frac45 t} \bigl(1 - \tfrac{5|x|^2}{4t^2}\bigr)^{d + 2} |\nabla_{t,x} u(t,x)|^2 \, dx \lesssim t^{2s_c-2}. $$ This immediately implies the estimate on the spacetime gradient stated in Theorem~\ref{T:cone bound subc} in the sub-conformal case. The stated estimate on the $L^2_x$-norm was given in \eqref{E:wk pt}. This completes the proof of Theorem~\ref{T:cone bound subc}. \end{document}
\begin{document} \title{\textbf{On Non-Inclusion of Certain Functions in Reproducing Kernel Hilbert Spaces}} \maketitle \begin{abstract} \noindent We use a classical characterisation to prove that functions which are bounded away from zero cannot be elements of reproducing kernel Hilbert spaces whose reproducing kernels decay to zero in a suitable way. The result is used to study Hilbert spaces on subsets of the real line induced by analytic translation-invariant kernels which decay to zero at infinity. \end{abstract} \section{Introduction} The inclusion or non-inclusion of certain functions, often constants or polynomials, in reproducing kernel Hilbert spaces (RKHSs) has numerous implications in the theory of statistical and machine learning algorithms. See \citet[p.\@~142]{Steinwart2008}; \citet[Assumption~2]{LeeLiZhao2016}; and \citet[Proposition~6]{KarvonenKanagawa2019} for a few specific examples. Non-inclusion of polynomials in an RKHS also explains the phenomena observed in \citet{XuStein2017}. Furthermore, error estimates for kernel-based approximation methods typically require that the target function be an element of the RKHS~\citep[Chapter~11]{Wendland2005}. The RKHSs of a number of finitely smooth kernels, such as Matérn and Wendland kernels, are well understood, being norm-equivalent to Sobolev spaces~\citep[e.g.,][Corollary~10.13]{Wendland2005}. With the exception of power series kernels~\citep{ZwicknaglSchaback2013}, less is known about infinitely smooth kernels. Since the work of \citet{Steinwart2006} and \citet{Minh2010}, which is based on explicit computations involving an orthonormal basis of the RKHS, it has been known that the RKHS of the Gaussian kernel does not contain non-trivial polynomials.
Recently, \citet{DetteZhigljavsky2021} have proved that RKHSs of analytic translation-invariant kernels do not contain polynomials via a connection to the classical Hamburger moment problem.\footnote{They do not state explicitly that their results apply to all analytic translation-invariant kernels, but this can be seen by inserting the standard bound $\abs[0]{f^{(n)}(x)} \leq C R^n n!$ for analytic functions in their Equation~(1.6) and using Stirling's approximation.} In this note we use a classical RKHS characterisation to furnish a simple proof of the fact that, roughly speaking, functions which are bounded away from zero (e.g., constant functions) cannot be elements of an RKHS whose kernel decays to zero in a certain manner. An analyticity assumption is used to effectively localise this result for domains $\Omega \subset \mathbb{R}$ which contain an accumulation point. We then consider analytic translation-invariant kernels which decay to zero. Although quite simple, these results do not appear to have been stated in the literature before. Analyticity of functions in an RKHS has been previously studied by \citet[pp.\@~41--43]{Saitoh1997} and \citet{SunZhou2008}. General results concerning the existence of RKHSs containing given classes of functions can be found in \citet[Section~I.13]{Aronszajn1950}. \section{Results} Let $\Omega$ be a set. Recall that a function $K \colon \Omega \times \Omega \to \mathbb{R}$ is a positive-semidefinite kernel if \begin{equation*} \sum_{n=1}^N \sum_{m=1}^N a_n a_m K(x_n, x_m) \geq 0 \end{equation*} for any $N \geq 1$, $a_1, \ldots, a_N \in \mathbb{R}$, and $x_1, \ldots, x_N \in \Omega$. By the Moore--Aronszajn theorem a positive-semidefinite kernel induces a unique reproducing kernel Hilbert space, $H_K(\Omega)$, which consists of functions $f \colon \Omega \to \mathbb{R}$. The inner product and norm of this space are denoted $\inprod{\cdot}{\cdot}_K$ and $\norm[0]{\cdot}_K$.
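The defining inequality can be checked numerically for a concrete kernel. The following sketch (not part of the paper; the Gaussian kernel and the sampled points are illustrative choices) evaluates the quadratic form from the definition at random points and coefficients and confirms it never goes negative:

```python
import math
import random

def gauss_kernel(x, y):
    # Gaussian kernel K(x, y) = exp(-(x - y)^2), positive-semidefinite on R
    return math.exp(-(x - y) ** 2)

def quadratic_form(kernel, xs, a):
    # the form sum_{n,m} a_n a_m K(x_n, x_m) appearing in the definition
    N = len(xs)
    return sum(a[n] * a[m] * kernel(xs[n], xs[m])
               for n in range(N) for m in range(N))

random.seed(0)
forms = []
for _ in range(200):
    N = random.randint(1, 8)
    xs = [random.uniform(-5.0, 5.0) for _ in range(N)]
    a = [random.uniform(-1.0, 1.0) for _ in range(N)]
    forms.append(quadratic_form(gauss_kernel, xs, a))

min_form = min(forms)  # non-negative up to floating-point rounding
```

Of course this is a spot check, not a proof; positive-semidefiniteness of the Gaussian kernel is classical.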
The kernel is reproducing in $H_K(\Omega)$, which is to say that $f(x) = \inprod{f}{K(\cdot, x)}_K$ for every $f \in H_K(\Omega)$ and $x \in \Omega$. The following theorem characterises the elements of an RKHS; see, for example, Section~3.4 in \citet{Paulsen2016} for a proof. \begin{theorem}[Aronszajn] \label{thm:rkhs-inclusion-general} Let $K$ be a positive-semidefinite kernel on $\Omega$. A function $f \colon \Omega \to \mathbb{R}$ is contained in $H_K(\Omega)$ if and only if \begin{equation*} R(x, y) = K(x, y) - c^2 f(x) f(y) \end{equation*} defines a positive-semidefinite kernel on $\Omega$ for some $c > 0$. \end{theorem} If $\Theta$ is a subset of $\Omega$, the RKHS $H_K(\Theta)$ contains those functions $f \colon \Theta \to \mathbb{R}$ for which there exists an extension $f_e \in H_K(\Omega)$ (i.e., $f = f_e|_\Theta$). \subsection{General Result} We begin with a result for general bounded kernels. \begin{theorem} \label{thm:general} Let $K$ be a bounded positive-semidefinite kernel on $\Omega$ and $(x_n)_{n=1}^\infty$ a sequence in $\Omega$ such that \begin{equation} \label{eq:decay-assumption} \lim_{\ell \to \infty} \abs[0]{ K(x_{\ell+n}, x_{\ell+m}) } = 0 \quad \text{ for any } \quad n \neq m. \end{equation} If $f \colon \Omega \to \mathbb{R}$ satisfies either $f(x_n) \geq \alpha$ or $f(x_n) \leq -\alpha$ for some $\alpha > 0$ and all sufficiently large $n$, then $f \notin H_K(\Omega)$. \end{theorem} \begin{proof} Assume to the contrary that $f \in H_K(\Omega)$. By Theorem~\ref{thm:rkhs-inclusion-general} there exists $c > 0$ such that $R(x, y) = K(x, y) - c^2 f(x) f(y)$ defines a positive-semidefinite kernel on $\Omega$. 
Therefore the quadratic form \begin{equation*} \begin{split} r_{N,\ell} &= \sum_{n=1}^N \sum_{m=1}^N a_n a_m R(x_{\ell+n}, x_{\ell+m}) \\ &= \sum_{n=1}^N \sum_{m=1}^N a_n a_m \big( K(x_{\ell+n}, x_{\ell+m}) - c^2 f(x_{\ell+n}) f(x_{\ell+m}) \big) \end{split} \end{equation*} is non-negative for every $N \geq 1$ and $\ell \geq 0$ and any $a_1, \ldots, a_N \in \mathbb{R}$. By~\eqref{eq:decay-assumption} it holds for all sufficiently large $\ell$ that \begin{equation*} \max_{\substack{ n, m \leq N \\ n \neq m}} \abs[0]{ K(x_{\ell+n}, x_{\ell+m}) } \leq \frac{1}{2} c^2 \alpha^2. \end{equation*} Let $C_K = \sup_{ x \in \Omega } K(x,x)$ and set $a_1 = \cdots = a_N = 1$. Then, for sufficiently large $\ell$, \begin{equation*} \begin{split} r_{N,\ell} &= \sum_{n=1}^N K(x_{\ell+n}, x_{\ell+n}) + \sum_{n \neq m} K(x_{\ell+n}, x_{\ell+m}) - c^2 \sum_{n=1}^N \sum_{m=1}^N f(x_{\ell+n}) f(x_{\ell+m}) \\ &\leq C_K N + \frac{1}{2} c^2 \alpha^2 N^2 - c^2 \alpha^2 N^2 \\ &= \bigg( C_K - \frac{1}{2} c^2 \alpha^2 N \bigg) N, \end{split} \end{equation*} which is negative if $N > 2C_K/(c^2 \alpha^2)$. It follows that $r_{N,\ell}$ is negative for sufficiently large $N$ and $\ell$, which contradicts the assumption that $f \in H_K(\Omega)$. \end{proof} An alternative way to prove a similar result in some settings is by appealing to integrability. For example, elements of the RKHS of an integrable translation-invariant kernel on $\mathbb{R}^d$ are square-integrable~\citep[Theorem~10.12]{Wendland2005}. Other integrability results can be found in \citet{Sun2005} and \citet{CarmeliDeVitoToigo2006}. \subsection{Analytic Functions} Next we use the fact that RKHSs which consist of analytic functions do not depend on the domain to prove localised versions of the above results for certain subsets of $\mathbb{R}$. The classical results on real analytic functions that we use are collected in Section~1.2 of \citet{KrantzParks2002}.
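The counting argument in the proof of Theorem~\ref{thm:general} can be made concrete with a small numerical sketch (illustrative only, not from the paper): take the Gaussian kernel, the constant function $f \equiv 1$, $c = \alpha = 1$, and the widely spaced points $x_k = k(k+1)/2$. The quadratic form of $R$ with $a_1 = \cdots = a_N = 1$ is then already negative for moderate $N$ and $\ell$, witnessing that $R$ is not positive-semidefinite:

```python
import math

def K(x, y):
    # Gaussian kernel; bounded with C_K = sup_x K(x, x) = 1
    return math.exp(-(x - y) ** 2)

def r(N, ell, c=1.0):
    # quadratic form of R(x, y) = K(x, y) - c^2 f(x) f(y) for f == 1,
    # with a_1 = ... = a_N = 1, on the points x_k = k (k + 1) / 2
    x = lambda k: k * (k + 1) / 2.0
    return sum(K(x(ell + n), x(ell + m)) - c ** 2
               for n in range(1, N + 1) for m in range(1, N + 1))

# diagonal terms contribute 0, while each off-diagonal term is close to -1
# because the points are far apart, so the form is negative
val = r(N=5, ell=10)
```

By Theorem~\ref{thm:rkhs-inclusion-general} this is exactly the obstruction to constants belonging to the Gaussian RKHS.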
\begin{lemma} \label{lemma:general-analytic} Let $K$ be a positive-semidefinite kernel on $\mathbb{R}$ and $\Omega$ a subset of $\mathbb{R}$ which has an accumulation point. If $H_K(\mathbb{R})$ consists of analytic functions and $f \colon \mathbb{R} \to \mathbb{R}$ is analytic, then $f \in H_K(\mathbb{R})$ if and only if $f|_\Omega \in H_K(\Omega)$. \end{lemma} \begin{proof} If $f \in H_K(\mathbb{R})$, then $f|_\Omega \in H_K(\Omega)$ by definition. Suppose then that $f|_\Omega \in H_K(\Omega)$. Hence there is an analytic function $g \in H_K(\mathbb{R})$ such that $g|_\Omega = f|_\Omega$. The function $f - g$ is analytic and vanishes on $\Omega$. Because an analytic function which vanishes on a set with an accumulation point is identically zero, we conclude that $g = f$ and therefore $f \in H_K(\mathbb{R})$. \end{proof} \begin{theorem} \label{thm:analytic} Let $K$ be a bounded positive-semidefinite kernel on $\mathbb{R}$ such that $H_K(\mathbb{R})$ consists of analytic functions, $\Omega$ a subset of $\mathbb{R}$ which has an accumulation point, and $(x_n)_{n=1}^\infty$ a sequence in $\Omega$ such that \begin{equation*} \lim_{\ell \to \infty} \abs[0]{ K(x_{\ell+n}, x_{\ell+m}) } = 0 \quad \text{ for any } \quad n \neq m. \end{equation*} Then a function $f \colon \Omega \to \mathbb{R}$ is not an element of $H_K(\Omega)$ if there exist an analytic function $f_e \colon \mathbb{R} \to \mathbb{R}$ and $\alpha > 0$ such that $f_e|_\Omega = f$ and either $f_e(x_n) \geq \alpha$ or $f_e(x_n) \leq -\alpha$ for all sufficiently large $n$. \end{theorem} \begin{proof} By Lemma~\ref{lemma:general-analytic} $f \in H_K(\Omega)$ if and only if $f_e \in H_K(\mathbb{R})$. But by Theorem~\ref{thm:general} $f_e$ cannot be an element of $H_K(\mathbb{R})$. This proves the claim. \end{proof} Note that the requirement that $H_K(\mathbb{R})$ consist of analytic functions cannot simply be removed.
For example, by Proposition~\ref{prop:radial-general} the RKHS of the non-analytic kernel $K(x, y) = \exp(-\abs[0]{x-y})$ on $\mathbb{R}$ does not contain non-trivial polynomials. However, if $\Omega$ is a bounded interval, then $H_K(\Omega)$ is norm-equivalent to the first-order standard Sobolev space and therefore contains all polynomials. \subsection{Translation-Invariant Kernels} A kernel $K$ on $\mathbb{R}$ is translation-invariant if there is a function $\varphi \colon [0, \infty) \to \mathbb{R}$ such that \begin{equation*} K(x, y) = \varphi( (x - y)^2 ) \quad \text{ for all } \quad x, y \in \mathbb{R}. \end{equation*} For translation-invariant kernels the decay assumption~\eqref{eq:decay-assumption} can be cast into a less abstract form. \begin{proposition} \label{prop:radial-general} Let $K$ be a translation-invariant positive-semidefinite kernel on $\mathbb{R}$ for $\varphi \geq 0$ such that $\lim_{r \to \infty} \varphi(r) = 0$. Then a function $f \colon \mathbb{R} \to \mathbb{R}$ is not an element of $H_K(\mathbb{R})$ if there is $R \in \mathbb{R}$ such that (a) $f$ does not change sign on $[R, \infty)$ and $\liminf_{ x \to \infty} \abs[0]{f(x)} > 0$ or (b) $f$ does not change sign on $(-\infty, R]$ and $\liminf_{ x \to -\infty} \abs[0]{f(x)} > 0$. \end{proposition} \begin{proof} Translation-invariant kernels are bounded because $K(x,x) = \varphi(0)$ for every $x \in \mathbb{R}$. The claim follows from Theorem~\ref{thm:general} by selecting a sequence $(x_n)_{n=1}^\infty$ such that $\abs[0]{x_{\ell+n} - x_{\ell+m}} \to \infty$ as $\ell \to \infty$ for any $n \neq m$ and $x_n \to \infty$ (or $x_n \to -\infty$). For example, $x_n = 1 + \cdots + n$ (or $x_n = -(1+ \cdots + n)$) suffices since then \begin{equation*} \abs[0]{ x_{\ell+n} - x_{\ell+m} } = \frac{\abs[0]{n-m}(2\ell+n+m+1)}{2} \geq \ell. \end{equation*} \end{proof} Note that this proposition could be slightly generalised by requiring only that $f(x_n)$ be bounded away from zero for large $n$. 
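The spacing claim for the triangular-number sequence used in the proof can be verified directly (a small check, not from the paper):

```python
def x(n):
    # x_n = 1 + 2 + ... + n
    return n * (n + 1) // 2

ok = True
for ell in range(50):
    for n in range(1, 6):
        for m in range(1, 6):
            if n == m:
                continue
            gap = abs(x(ell + n) - x(ell + m))
            # closed form |n - m| (2 ell + n + m + 1) / 2 and the bound gap >= ell
            ok = ok and gap == abs(n - m) * (2 * ell + n + m + 1) // 2
            ok = ok and gap >= ell
```

The integer division is exact because $n - m$ and $n + m + 1$ always have opposite parity, so the product is even.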
For example, the function $f(x) = \sin(\pi (x+\frac{1}{2}))^2$, which is not covered by Proposition~\ref{prop:radial-general}, satisfies $f(x_n) = 1$ for all $n$ if $x_n = \pm (1+\cdots+n)$. Let $\varphi_+^{(n)}(0)$ denote the $n$th derivative from the right of $\varphi$ at the origin and define \begin{equation*} \mathrm{D}^n K_x(y) = \frac{\partial^n}{\partial v^n} K(v, y) \biggr|_{v = x} \quad \text{ and } \quad \mathrm{D}^{n,n} K(x, y) = \frac{\partial^{2n}}{\partial v^n \partial w^n } K(v, w) \biggr|_{\substack{v = x \\ w = y}}. \end{equation*} The following lemma has been essentially proved by \citet{SunZhou2008}. For completeness we supply a simple proof. \begin{lemma} \label{lemma:radial-analytic} If $K$ is a translation-invariant positive-semidefinite kernel on $\mathbb{R}$ for $\varphi$ which is analytic on $\mathbb{R}$, then all elements of $H_K(\mathbb{R})$ are analytic. \end{lemma} \begin{proof} Because $K$ is infinitely differentiable on $\mathbb{R}$, every $f \in H_K(\mathbb{R})$ is infinitely differentiable and satisfies \begin{equation*} \abs[0]{ f^{(n)}(x) } = \abs[0]{ \inprod{f}{ \mathrm{D}^n K_x}_K } \leq \norm[0]{f}_K \norm[0]{\mathrm{D}^n K_x}_K = \norm[0]{f}_K \sqrt{ \mathrm{D}^{n,n} K(x, x) } \end{equation*} for every $n \geq 0$ and $x \in \mathbb{R}$~\citep[Corollary~4.36]{Steinwart2008}. From the Taylor expansion \begin{equation*} K(x, y) = \sum_{n=0}^\infty \frac{\varphi^{(n)}(0)}{n!} (x-y)^{2n} \end{equation*} it is straightforward to compute that, for any $x \in \mathbb{R}$, \begin{equation*} \mathrm{D}^{n,n} K(x, x) = (-1)^n \frac{(2n)!}{n!} \varphi_+^{(n)}(0). \end{equation*} Since $\varphi$ is analytic, there are positive constants $C$ and $R$ such that $\abs[0]{ \varphi_+^{(n)}(0) } \leq C R^n n!$ for every $n \geq 0$.
It follows that \begin{equation*} \abs[0]{ f^{(n)}(x) } \leq \norm[0]{f}_K \sqrt{ \frac{(2n)!}{n!} \abs[0]{\varphi_+^{(n)}(0)}} \leq \norm[0]{f}_K \sqrt{CR^n (2n)!} \leq \sqrt{C} \norm[0]{f}_K (2 \sqrt{R} \,)^n n!, \end{equation*} which implies that $f$ is analytic on $\mathbb{R}$. \end{proof} \begin{theorem} \label{thm:radial} Let $K$ be a translation-invariant positive-semidefinite kernel on $\mathbb{R}$ for $\varphi \geq 0$ which is analytic on $[0, \infty)$ and satisfies $\lim_{r \to \infty} \varphi(r) = 0$ and $\Omega$ a subset of $\mathbb{R}$ which has an accumulation point. Then a function $f \colon \Omega \to \mathbb{R}$ is not an element of $H_K(\Omega)$ if there exists an analytic function $f_e \colon \mathbb{R} \to \mathbb{R}$ such that $f_e|_\Omega = f$ and \begin{equation} \label{eq:liminf-radial-analytic} \liminf_{ x \to -\infty} \abs[0]{f_e(x)} > 0 \quad \text{ or } \quad \liminf_{ x \to \infty} \abs[0]{f_e(x)} > 0. \end{equation} \end{theorem} \begin{proof} The claim follows from Lemmas~\ref{lemma:general-analytic} and~\ref{lemma:radial-analytic} and Proposition~\ref{prop:radial-general}. The requirement in Proposition~\ref{prop:radial-general} that the function should not change sign follows from continuity and~\eqref{eq:liminf-radial-analytic}. \end{proof} \section{Examples} Standard examples of analytic translation-invariant kernels are the Gaussian kernel \begin{equation*} K(x, y) = \varphi \big( (x-y)^2 \big) \quad \text{ for } \quad \varphi(r) = \exp(-r) \end{equation*} and the inverse quadratic \begin{equation*} K(x, y) = \varphi \big( (x-y)^2 \big) \quad \text{ for } \quad \varphi(r) = \frac{1}{1+r}. \end{equation*} It is known that the RKHSs of these kernels do not contain non-trivial polynomials~\citep{Minh2010,DetteZhigljavsky2021} on bounded intervals. These results are special cases of Theorem~\ref{thm:radial}, which can be applied to any analytic function whose analytic continuation is bounded away from zero at infinity.
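As a sanity check on the derivative formula in the proof of Lemma~\ref{lemma:radial-analytic} (an illustration, not part of the paper), take the Gaussian case $\varphi(r) = e^{-r}$ and $n = 1$: the formula predicts $\mathrm{D}^{1,1} K(x, x) = (-1) \cdot \frac{2!}{1!} \cdot \varphi'(0) = 2$, which a central finite difference reproduces:

```python
import math

def K(v, w):
    # Gaussian kernel with phi(r) = exp(-r), so phi'(0) = -1
    return math.exp(-(v - w) ** 2)

def mixed_partial(x, h=1e-4):
    # central finite-difference approximation of d^2 K / dv dw at v = w = x
    return (K(x + h, x + h) - K(x + h, x - h)
            - K(x - h, x + h) + K(x - h, x - h)) / (4.0 * h * h)

approx = mixed_partial(0.7)  # close to the predicted value 2
```

The base point $0.7$ is arbitrary; translation invariance makes $\mathrm{D}^{n,n} K(x, x)$ independent of $x$.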
For example, the function \begin{equation*} f(x) = \exp\bigg( \! -\sin(x)^2 + \frac{1}{\sqrt{1+x^2}} \bigg) \end{equation*} is in the RKHS of no translation-invariant kernel for which $\varphi \geq 0$ decays to zero at infinity. The exponential kernel \begin{equation*} K(x, y) = \exp( x y) \end{equation*} serves as a good example that $\lim_{ x \to \infty} K(x, y) = 0$ for infinitely many $y$ is not sufficient for Theorem~\ref{thm:general} to apply. The RKHS on $\mathbb{R}$ of the exponential kernel consists of analytic functions and contains all polynomials. For any $y < 0$ it holds that $\lim_{x \to \infty} K(x, y) = 0$. However, it is not possible to select a sequence $(x_n)_{n=1}^\infty$ for which $K$ satisfies~\eqref{eq:decay-assumption}. Indeed, for \eqref{eq:decay-assumption} to hold, $x_{\ell+n}$ and $x_{\ell+m}$ would have to have opposite signs for all sufficiently large $\ell$ whenever $n \neq m$. But this would in particular imply that $\mathrm{sgn}(x_{\ell+1}) \neq \mathrm{sgn}(x_{\ell+2})$, $\mathrm{sgn}(x_{\ell+1}) \neq \mathrm{sgn}(x_{\ell+3})$, and $\mathrm{sgn}(x_{\ell+2}) \neq \mathrm{sgn}(x_{\ell+3})$ for sufficiently large $\ell$, which is impossible since three real numbers cannot have pairwise different signs. \end{document}
\begin{document} \author{Fabio Cavalletti}\thanks{Universit\`a degli Studi di Pavia, Dipartimento di Matematica, email: fabio.cavalletti@unipv.it} \title[An Overview of $L^{1}$ optimal transportation]{An Overview of $L^{1}$ optimal transportation \\ on metric measure spaces} \keywords{optimal transport; Monge problem; Ricci curvature; curvature dimension condition} \begin{abstract} The scope of this note is to make a self-contained survey of the recent developments and achievements of the theory of $L^{1}$-Optimal Transportation on metric measure spaces. Among the results proved in the recent papers \cite{CM1,CM2}, where the author, together with A. Mondino, proved a series of sharp (and in some cases rigid) geometric and functional inequalities in the setting of metric measure spaces enjoying a weak form of Ricci curvature lower bound, we review the proof of the L\'evy-Gromov isoperimetric inequality. \end{abstract} \maketitle \section{Introduction} The scope of this note is to make a self-contained survey of the recent developments and achievements of the theory of $L^{1}$-Optimal Transportation on metric measure spaces. We will focus on the general scheme adopted in the recent papers \cite{CM1,CM2} where the author, together with A. Mondino, proved a series of sharp (and in some cases even rigid and stable) geometric and functional inequalities in the setting of metric measure spaces enjoying a weak form of Ricci curvature lower bound. Roughly speaking, the general scheme consists in reducing the initial problem to a family of easier one-dimensional problems; as it is probably the most relevant result obtained with this technique, we will review in detail how to proceed to obtain the L\'evy-Gromov isoperimetric inequality for metric measure spaces verifying the Riemannian Curvature Dimension condition (or, more generally, essentially non-branching metric measure spaces verifying the Curvature Dimension condition).
In \cite{biacava:streconv, cava:MongeRCD} a fine analysis of the Monge problem in the metric setting was carried out, treating, from a different perspective, similar questions whose answers were later also used in \cite{CM1,CM2}. We therefore believe the Monge problem and V.N. Sudakov's approach to it (see \cite{sudakov}) are a good starting point for our review, and a good way to see how $L^{1}$-Optimal Transportation naturally yields a reduction of the problem to a family of one-dimensional problems. It is worth stressing that the dimensional reduction proposed by V.N. Sudakov to solve the Monge problem is only one of the strategies for attacking the problem. The Monge problem has a long history, and many different authors contributed to obtaining solutions in different frameworks with different approaches; here we only mention that the first existence result for the Monge problem was independently obtained in \cite{caffa:Monge} and in \cite{trudi:Monge}. We also mention the subsequent generalizations obtained in \cite{Ambrosio:Monge,AKP,feldMccann} and we refer to the monograph \cite{Vil:topics} for a more complete list of results. \subsection{Monge problem} The original problem posed by Monge in 1781 can be restated in modern language as follows: given two Borel probability measures $\mu_{0}$ and $\mu_{1}$ over $\mathbb{R}^{d}$, called marginal measures, find the optimal manner of transporting $\mu_{0}$ to $\mu_{1}$; the transportation of $\mu_{0}$ to $\mu_{1}$ is understood as a map $T : \mathbb{R}^{d} \to \mathbb{R}^{d}$ assigning to each particle $x$ a final position $T(x)$ fulfilling the following compatibility condition \begin{equation}\label{E:transportmap} \qquad T_{\sharp} \, \mu_{0} = \mu_{1}, \qquad \textrm{i.e. }\quad \mu_{0}(T^{-1}(A)) = \mu_{1}(A), \quad \forall \, A \ \textrm{Borel set}; \end{equation} any map $T$ verifying the previous condition will be called a transport map.
The optimality requirement is stated as follows: \begin{equation}\label{E:MongeEuclid} \int_{\mathbb{R}^{d}} |T(x) - x| \, \mu_{0}(dx) \leq \int_{\mathbb{R}^{d}} |\hat T(x) -x | \, \mu_{0}(dx), \end{equation} for any other $\hat T$ transport map. In proving the existence of a minimizer, the first difficulty appears when studying the domain of the minimization, that is, the set of maps $T$ verifying \eqref{E:transportmap}. Suppose $\mu_{0} = f_{0} \mathcal{L}^{d}$ and $\mu_{1} = f_{1} \mathcal{L}^{d}$ where $\mathcal{L}^{d}$ denotes the $d$-dimensional Lebesgue measure; a smooth injective map $T$ is then a transport map if and only if $$ f_{1} (T(x)) |\det (DT)(x) | = f_{0}(x), \qquad \mu_{0}\textrm{-a.e.} \ x \in \mathbb{R}^{d}, $$ showing a strong non-linearity of the constraint. The first big leap in optimal transportation theory was achieved by Kantorovich, who considered a suitable relaxation of the problem: associate to each transport map the probability measure $(Id,T)_{\sharp} \mu_{0}$ over $\mathbb{R}^{d}\times \mathbb{R}^{d}$ and introduce the set of \emph{transport plans} $$ \Pi(\mu_{0},\mu_{1}) : = \left\{ \pi \in \mathcal{P}(\mathbb{R}^{d}\times \mathbb{R}^{d}) \colon P_{1\,\sharp} \pi = \mu_{0},\ P_{2\,\sharp} \pi = \mu_{1} \right\}; $$ where $P_{i} : \mathbb{R}^{d} \times \mathbb{R}^{d} \to \mathbb{R}^{d}$ is the projection on the $i$-th component, with $i =1,2$. By definition $(Id,T)_{\sharp} \mu_{0} \in \Pi(\mu_{0},\mu_{1})$ and $$ \int_{\mathbb{R}^{d}} |T(x) - x| \, \mu_{0}(dx) = \int_{\mathbb{R}^{d}\times \mathbb{R}^{d}} |x-y| \, \left( (Id, T)_{\sharp} \mu_{0}\right)(dxdy); $$ then it is natural to consider the minimization of the following functional (called Monge-Kantorovich minimization problem) \begin{equation}\label{E:MK} \Pi(\mu_{0},\mu_{1}) \ni \pi \longmapsto \mathcal{I}(\pi) : = \int_{\mathbb{R}^{d}\times\mathbb{R}^{d}} |x-y| \, \pi(dxdy).
\end{equation} The big advantage is that $\Pi(\mu_{0},\mu_{1})$ is now a convex subset of $\mathcal{P}(\mathbb{R}^{d}\times \mathbb{R}^{d})$ and is compact with respect to the weak topology. Since the functional $\mathcal{I}$ is linear, the existence of a minimizer follows straightforwardly. Then a strategy to obtain a solution of the original Monge problem is to start from an optimal transport plan $\pi$ and prove that it is indeed concentrated on the graph of a Borel map $T$; the latter is equivalent to $\pi = (Id,T)_{\sharp} \mu_{0}$. To run this program one needs to deduce from optimality some condition on the geometry of the support of the transport plan. This was again obtained by Kantorovich, who introduced a dual formulation of \eqref{E:MK} and showed that for any probability measures $\mu_{0}$ and $\mu_{1}$ with finite first moment, there exists a $1$-Lipschitz function $\varphi : \mathbb{R}^{d} \to \mathbb{R}$ such that $$ \Pi(\mu_{0},\mu_{1}) \ni \pi \ \textrm{is optimal} \quad \iff \quad \pi \big(\{ (x,y) \in \mathbb{R}^{2d} \colon \varphi(x) - \varphi(y) = |x-y| \} \big) = 1. $$ At this point one needs to focus on the structure of the set \begin{equation}\label{E:transportproduct} \Gamma : = \big\{ (x,y) \in \mathbb{R}^{2d} \colon \varphi(x) - \varphi(y) = |x-y| \big\}. \end{equation} \begin{definition} A set $\Lambda \subset \mathbb{R}^{2d}$ is $|\cdot|$-cyclically monotone if and only if for any finite subset of $\Lambda$, $\{ (x_{1},y_{1}), \dots, (x_{N},y_{N})\} \subset \Lambda$ it holds $$ \sum_{1\leq i\leq N} |x_{i} - y_{i}| \leq \sum_{1 \leq i\leq N} |x_{i} - y_{i+1}|, $$ where $y_{N+1} : = y_{1}$. \end{definition} Almost by definition, the set $\Gamma$ is $|\cdot|$-cyclically monotone; moreover, whenever $(x,y) \in \Gamma$, considering $z_{t} : = (1-t) x + t y$ with $t \in [0,1]$, it holds that $(z_{s},z_{t}) \in \Gamma$ for any $s \leq t$.
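The minimization \eqref{E:MK} can be explored by brute force in the discrete case (a small illustrative sketch, not from the text; the atoms are arbitrary): for two uniform discrete measures with the same number of atoms on the line, plans induced by maps correspond to permutations, and the monotone (sorted-to-sorted) pairing attains the minimal cost for $|x-y|$:

```python
from itertools import permutations

# atoms of two uniform discrete marginals on the line, both listed in
# increasing order, so the identity permutation is the monotone pairing
xs = [0.0, 1.3, 2.1, 4.0, 5.5]
ys = [0.4, 0.9, 3.2, 4.1, 6.0]

def cost(sigma):
    # Monge-Kantorovich cost of the plan pairing x_i with y_sigma(i)
    return sum(abs(x - ys[s]) for x, s in zip(xs, sigma))

best = min(cost(s) for s in permutations(range(len(xs))))
monotone = cost(tuple(range(len(xs))))  # sorted-to-sorted pairing
```

Brute force is of course exponential in the number of atoms; it is used here only to certify optimality on a toy instance.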
In particular this suggests that $\Gamma$ produces a family of disjoint lines of $\mathbb{R}^{d}$ along which the optimal transportation should move. This can be made rigorous considering the following ``relation'' between points: a point $x$ is in relation with $y$ if, using optimal geodesics selected by the above optimal transport problem, one can travel from $x$ to $y$ or vice versa. That is, consider $R : = \Gamma \cup \Gamma^{-1}$ and define $x \sim y$ if and only if $(x,y) \in R$. Then $\mathbb{R}^{d}$ will be decomposed (up to a set of Lebesgue-measure zero) as $\mathcal{T} \cup Z$ where $\mathcal{T}$ will be called the \emph{transport set} and $Z$ the set of points not moved by the optimal transportation problem. The important property of $\mathcal{T}$ is that $$ \mathcal{T} = \bigcup_{q\in Q} X_{q}, \qquad X_{q} \textrm{ straight line}, \qquad X_{q} \cap X_{q'} = \emptyset, \quad \textrm{if } q \neq q'. $$ Here $Q$ is a set of indices; a convenient way to index a straight line $X_{q}$ is to select an element of $X_{q}$ and call it, with an abuse of notation, $q$. With this choice the set $Q$ can be understood as a subset of $\mathbb{R}^{d}$. Once a partition of the space is given, one obtains via the Disintegration Theorem a corresponding decomposition of marginal measures: $$ \mu_{0} = \int_{Q} \mu_{0\,q} \, \mathfrak q(dq), \qquad \mu_{1} = \int_{Q} \mu_{1\, q} \, \mathfrak q(dq); $$ where $\mathfrak q$ is a Borel probability measure over the set of indices $Q \subset \mathbb{R}^{d}$. If $Q$ enjoys a measurability condition (see Theorem \ref{T:disintr} for details), the conditional measures $\mu_{0\,q}$ and $\mu_{1\,q}$ are concentrated on the straight line with index $q$, i.e. $\mu_{0\,q} (X_{q}) = \mu_{1\,q} (X_{q})= 1$, for $\mathfrak q$-a.e. $q \in Q$.
Then a classic way to construct an optimal transport map is to \begin{itemize} \item[-] consider $T_{q}$ the monotone rearrangement along $X_{q}$ of $\mu_{0\, q}$ to $\mu_{1\,q}$; \item[-] define the transport map $T$ as $T_{q}$ on each $X_{q}$. \end{itemize} The map $T$ will be then an optimal transport map moving $\mu_{0}$ to $\mu_{1}$; it is indeed easy to check that $(Id,T)_{\sharp}\mu_{0} \in \Pi(\mu_{0},\mu_{1})$ and $(x,T(x)) \in \Gamma$ for $\mu_{0}$-a.e. $x$. So the original Monge problem has been reduced to the following family of one-dimensional problems: for each $q \in Q$ find a minimizer of the following functional $$ \Pi(\mu_{0\,q},\mu_{1\,q}) \ni \pi \longmapsto \mathcal{I}(\pi) : = \int_{X_{q}\times X_{q}} |x-y| \,\pi(dxdy), $$ that is concentrated on the graph of a Borel function. As $X_{q}$ is isometric to the real line, whenever $\mu_{0\,q}$ does not contain any atom (i.e.\ $\mu_{0\,q}(\{x\}) = 0$ for all $x \in X_{q}$), the monotone rearrangement $T_{q}$ exists and the existence of an optimal transport map $T$ constructed as before follows. The existence of a solution has been reduced therefore to a regularity property of the disintegration of $\mu_{0}$. As already stressed before, this approach to the Monge problem, mainly due to V.N. Sudakov, was proposed in \cite{sudakov} and was later completed in the subsequent papers \cite{caffa:Monge} and \cite{trudi:Monge}. See also \cite{caravenna} for a complete Sudakov approach to the Monge problem when the Euclidean distance is replaced by any strictly convex norm and \cite{biadaneri} where any norm is considered. In all these papers, assuming $\mu_{0}$ to be absolutely continuous with respect to $\mathcal{L}^{d}$ gives sufficient regularity to solve the problem. The Monge problem can actually be stated, and solved, in a much more general framework.
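The monotone rearrangement step can be illustrated with a fully explicit one-dimensional example (an illustration under simple assumptions, not from the text): for atomless marginals the rearrangement is the quantile coupling $T = F_{1}^{-1} \circ F_{0}$. Below this is checked numerically for $\mu_{0} = \mathrm{Unif}[0,1]$ and $\mu_{1} = \mathrm{Unif}[2,4]$, where $T(x) = 2 + 2x$:

```python
def T(x):
    # monotone rearrangement T = F1^{-1} o F0 for mu0 = Unif[0,1], mu1 = Unif[2,4]:
    # F0(x) = x and F1^{-1}(u) = 2 + 2u, hence T(x) = 2 + 2x
    return 2.0 + 2.0 * x

def F1(t):
    # cdf of the target measure mu1 = Unif[2,4]
    return min(max((t - 2.0) / 2.0, 0.0), 1.0)

n = 2000
grid = [(i + 0.5) / n for i in range(n)]   # midpoint grid approximating mu0
images = [T(x) for x in grid]

# empirical cdf of the pushed-forward points against the target cdf:
# small error certifies T_sharp mu0 = mu1 up to discretization
err = max(abs(sum(1 for y in images if y <= t) / n - F1(t))
          for t in [2.1, 2.5, 3.0, 3.7, 4.0])
```

The same quantile construction is what is applied along each ray $X_{q}$ in the disintegration, one line at a time.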
Given indeed two Borel probability measures $\mu_{0}$ and $\mu_{1}$ over a complete and separable metric space $(X,\mathsf d)$, the notion of transport map makes perfect sense and the optimality condition \eqref{E:MongeEuclid} can be naturally formulated using the distance $\mathsf d$ as a cost function instead of the Euclidean norm: \begin{equation}\label{E:Mongemetric} \int_{X} \mathsf d(T(x), x) \, \mu_{0}(dx) \leq \int_{X} \mathsf d(\hat T(x),x) \, \mu_{0}(dx). \end{equation} The problem can be relaxed to obtain a transport plan $\pi$ solution of the corresponding Monge-Kantorovich minimization problem. Also the Kantorovich duality applies, yielding the existence of a $1$-Lipschitz function $\varphi : X \to \mathbb{R}$ such that $$ \Pi(\mu_{0},\mu_{1}) \ni \pi \ \textrm{is optimal} \quad \iff \quad \pi \big( \Gamma \big) = 1, $$ where $\Gamma := \{ (x,y) \in X\times X \colon \varphi(x) - \varphi(y) = \mathsf d(x,y) \}$ is $\mathsf d$-cyclically monotone. \\ The whole strategy proposed for the Euclidean problem can be adopted: produce a decomposition of $X$ as $\mathcal{T} \cup Z$ where $Z$ is the set of points not moved by the optimal transportation problem and $\mathcal{T}$ is the transport set and it is partitioned, up to a set of measure zero, by a family of geodesics $\{ X_{q} \}_{q \in Q}$; via the Disintegration Theorem one obtains as before a reduction of the Monge problem to a family of one-dimensional problems $$ \Pi(\mu_{0\,q},\mu_{1\,q}) \ni \pi \longmapsto \mathcal{I}(\pi) : = \int_{X_{q}\times X_{q}} \mathsf d(x,y) \,\pi(dxdy). $$ Therefore, since $X_{q}$ with distance $\mathsf d$ is isometric to an interval of the real line with Euclidean distance, the problem is reduced to proving that for $\mathfrak q$-a.e. $q \in Q$ the conditional measure $\mu_{0\,q}$ does not have any atoms. Clearly, in proving such a result, besides the regularity of $\mu_{0}$ itself, the regularity of the ambient space $X$ plays a crucial role.
In particular, together with the localization of the Monge problem to $X_{q}$, there should come a localization of the regularity of the space. This is the case when the metric space $(X,\mathsf d)$ is endowed with a reference probability measure $\mathfrak m$ and the resulting metric measure space $(X,\mathsf d,\mathfrak m)$ verifies a weak Ricci curvature lower bound. In \cite{biacava:streconv} we in fact observed that if $(X,\mathsf d,\mathfrak m)$ verifies the so-called measure contraction property $\mathsf{MCP}$, then for $\mathfrak q$-a.e. $q \in Q$ the one-dimensional metric measure space $(X_{q},\mathsf d,\mathfrak m_{q})$ verifies $\mathsf{MCP}$ as well, where $\mathfrak m_{q}$ is the conditional measure of $\mathfrak m$ with respect to the family of geodesics $\{ X_{q}\}_{q \in Q}$. Now the assumption $\mu_{0}\ll \mathfrak m$ is sufficient to solve the Monge problem. It is worth mentioning that \cite{biacava:streconv} was the first contribution where regularity of conditional measures was obtained in a purely non-smooth framework. The techniques introduced in \cite{biacava:streconv} also made it possible to treat such regularity issues in the infinite-dimensional setting of the Wiener space; see \cite{cava:Wiener}. This short introduction should suggest that $L^{1}$-Optimal Transportation permits an efficient dimensional reduction together with a localization of the ``smoothness'' of the space for very general metric measure spaces. We now also give a short introduction to the L\'evy-Gromov isoperimetric inequality. \subsection{L\'evy-Gromov isoperimetric inequality} The L\'evy-Gromov isoperimetric inequality \cite[Appendix C]{Gro} can be stated as follows: if $E$ is a (sufficiently regular) subset of a Riemannian manifold $(M^N,g)$ with dimension $N$ and Ricci bounded below by $K>0$, then \begin{equation}\label{eq:LevyGromov} \frac{|\partial E|}{|M|}\geq \frac{|\partial B|}{|S|}, \end{equation} where $B$ is a spherical cap in the model sphere $S$, i.e.
the $N$-dimensional sphere with constant Ricci curvature equal to $K$, and $|M|,|S|,|\partial E|, |\partial B|$ denote the appropriate $N$ or $N-1$ dimensional volume, and where $B$ is chosen so that $|E|/|M|=|B|/|S|$. As $K >0$ both $M$ and $S$ are compact and their volumes are finite; hence the previous equality and \eqref{eq:LevyGromov} make sense. In other words, the L\'evy-Gromov isoperimetric inequality states that isoperimetry in $(M,g)$ is at least as strong as in the model space $S$. A general introduction to the isoperimetric problem goes beyond the scope of this note; here it is worth mentioning that a complete description of the isoperimetric inequality in spaces admitting singularities is quite a hard task and the bibliography reduces to \cite{MilRot, MR, MorPol}. See also \cite[Appendix H]{EiMe} for more details. We also include the following references, corresponding to different approaches to the isoperimetric problem: for a geometric measure theory approach see \cite{Mor}; for the point of view of optimal transport see \cite{FiMP, Vil}; for the connections with convex and integral geometry see \cite{BurZal}; for the recent quantitative forms see \cite{CL, FuMP} and finally for an overview of the more geometric aspects see \cite{Oss, Rit, Ros}. Coming back to the L\'evy-Gromov isoperimetric inequality, it naturally makes sense also in the broader class of metric measure spaces, i.e. triples $(X,\mathsf d,\mathfrak m)$ where $(X,\mathsf d)$ is complete and separable and $\mathfrak m$ is a Radon measure over $X$.
Indeed the volume of a Borel set is replaced by its $\mathfrak m$-measure, $\mathfrak m(E)$; the boundary area of the smooth framework can instead be replaced by the Minkowski content: \begin{equation}\label{def:MinkCont} \mathfrak m^+(E):=\liminf_{\varepsilon\downarrow 0} \frac{\mathfrak m(E^\varepsilon)- \mathfrak m(E)}{\varepsilon}, \end{equation} where $E^{\varepsilon}:=\{x \in X \,:\, \exists y \in E \, \text{ such that } \, \mathsf d(x,y)< \varepsilon \}$ is the $\varepsilon$-neighborhood of $E$ with respect to the metric $\mathsf d$; the natural analogue of ``dimension $N$ and Ricci bounded below by $K>0$'' is encoded in the so-called Riemannian Curvature Dimension condition, $\mathsf{RCD}^{*}(K,N)$ for short. As normalization factors appear in \eqref{eq:LevyGromov}, it is also more convenient to directly consider the case $\mathfrak m(X) = 1$. So the L\'evy-Gromov isoperimetric problem for a m.m.s. $(X,\mathsf d,\mathfrak m)$ with $\mathfrak m(X) = 1$ can be formulated as follows: \\ \noindent \emph{ Find the largest function $\mathcal{I}_{K,N}:[0,1]\to \mathbb{R}^+$ such that for every Borel subset $E\subset X$ it holds $$ \mathfrak m^{+}(E) \geq \mathcal{I}_{K,N}(\mathfrak m(E)), $$ with $\mathcal{I}_{K,N}$ depending on $N, K \in \mathbb{R}$ with $K>0$ and $N>1$. } Then in \cite{CM1} (Theorem 1.2) the author with A. Mondino proved the following non-smooth version of the L\'evy-Gromov isoperimetric inequality \eqref{eq:LevyGromov}. \begin{theorem}[L\'evy-Gromov in $\mathsf{RCD}^*(K,N)$-spaces, Theorem 1.2 of \cite{CM1}] \label{thm:LG} Let $(X,\mathsf d,\mathfrak m)$ be an $\mathsf{RCD}^*(K,N)$ space for some $N\in \mathbb{N}$ and $K>0$, with $\mathfrak m(X)=1$. Then for every Borel subset $E\subset X$ it holds $$ \mathfrak m^+(E)\geq \frac{|\partial B|}{|S|}, $$ where $B$ is a spherical cap in the model sphere $S$ (the $N$-dimensional sphere with constant Ricci curvature equal to $K$) chosen so that $|B|/|S|=\mathfrak m(E)$.
\end{theorem} We refer to Theorem 1.2 of \cite{CM1} (or Theorem 6.6) for the more general statement. The link between Theorem \ref{thm:LG} and the first part of the Introduction, where the Monge problem was discussed, lies in the techniques used to prove Theorem \ref{thm:LG}. The main obstacle to L\'evy-Gromov type inequalities in the non-smooth metric measure space setting is that the previously known proofs rely on regularity properties of isoperimetric regions and on powerful results of geometric measure theory (see for instance \cite{Gro,Mor}) that are not available in the framework of metric measure spaces. The recent paper of B. Klartag \cite{klartag} made it possible to obtain a proof of the L\'evy-Gromov isoperimetric inequality, still in the framework of smooth Riemannian manifolds, avoiding regularity of optimal shapes and using instead an optimal transportation argument involving $L^1$-Optimal Transportation and ideas of convex geometry. This approach goes back to Payne-Weinberger \cite{PW} and was later developed by Gromov-Milman \cite{GrMi}, Lov\'asz-Simonovits \cite{LoSi} and Kannan-Lov\'asz-Simonovits \cite{KaLoSi}; it consists in reducing a multi-dimensional problem to easier one-dimensional problems. B. Klartag's contribution was to observe that a suitable $L^{1}$-Optimal Transportation problem produces what he calls a \emph{needle decomposition} (in our terminology it will be called a disintegration) that localizes (or reduces) the proof of the isoperimetric inequality to the proof of a family of one-dimensional isoperimetric inequalities; also the regularity of the space is localized.
The approach of \cite{klartag} does not rely on the regularity of the isoperimetric region; nevertheless it still heavily makes use of the smoothness of the ambient space to obtain the localization; in particular it makes use of sharp properties of the geodesics in terms of Jacobi fields and estimates on the second fundamental forms of suitable level sets, all objects that are not yet sufficiently understood in general metric measure spaces to repeat the same arguments. Hence to apply the localization technique to the L\'evy-Gromov isoperimetric inequality in singular spaces, structural properties of geodesics and of $L^1$-optimal transportation have to be understood also in the general framework of metric measure spaces. Such a program already started in the previous work of the author with S. Bianchini \cite{biacava:streconv} and of the author \cite{cava:MongeRCD, cava:decomposition}. Finally, with A. Mondino in \cite{CM1} we obtained the general result yielding the L\'evy-Gromov isoperimetric inequality. \subsection{Outline} The outline of the paper goes as follows: Section \ref{S:preliminaries} contains all the basic material on Optimal Transportation and the theory of Lott-Sturm-Villani spaces, that is metric measure spaces verifying the Curvature Dimension condition, $\mathsf{CD}(K,N)$ for short. It also covers some basics on the isoperimetric inequality, the Disintegration Theorem and the selection theorems we will use throughout the paper. In Section \ref{S:transportset} we prove all the structure results on the building block of $L^{1}$-Optimal Transportation, the $\mathsf d$-cyclically monotone sets. Here no curvature assumption enters. In Section \ref{S:cyclically} we show that the aforementioned sets induce a partition of almost all of the transport set, provided the space enjoys a stronger form of the essentially non-branching condition; we also show that each element of the partition is a geodesic (and therefore a one-dimensional set).
Section \ref{S:ConditionalMeasures} contains all the regularity results for the conditional measures of the disintegration induced by the $L^{1}$-Optimal Transportation problem. In particular we will present three assumptions, each one implying the previous one, yielding three increasing levels of regularity of the conditional measures. Finally in Section \ref{S:application} we collect the consequences of the regularity results of Section \ref{S:ConditionalMeasures}; in particular we first show the existence of a solution of the Monge problem under very general regularity assumptions (Theorem \ref{T:mongeff}) and finally we go back to the L\'evy-Gromov isoperimetric inequality (Theorem \ref{T:iso}). \section{Preliminaries}\label{S:preliminaries} In what follows we say that a triple $(X,\mathsf d, \mathfrak m)$ is a metric measure space, m.m.s. for short, if $(X, \mathsf d)$ is a complete and separable metric space and $\mathfrak m$ is a positive Radon measure over $X$. In this paper we will only be concerned with m.m.s. such that $\mathfrak m$ is a probability measure, that is $\mathfrak m(X) =1$. The space of all Borel probability measures over $X$ will be denoted by $\mathcal{P}(X)$. A metric space is a geodesic space if and only if for each $x,y \in X$ there exists $\gamma \in {\rm Geo}(X)$ so that $\gamma_{0} =x, \gamma_{1} = y$, with $$ {\rm Geo}(X) : = \{ \gamma \in C([0,1], X): \mathsf d(\gamma_{s},\gamma_{t}) = |s-t| \mathsf d(\gamma_{0},\gamma_{1}), \text{ for every } s,t \in [0,1] \}. $$ It follows from the metric version of the Hopf-Rinow Theorem (see Theorem 2.5.28 of \cite{BBI}) that for complete geodesic spaces, local compactness is equivalent to properness (a metric space is proper if every closed ball is compact). So we assume the ambient space $(X,\mathsf d)$ to be proper and geodesic, hence also complete and separable. Moreover we assume $\mathfrak m$ to be a probability measure, i.e. $\mathfrak m(X)=1$.
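To fix ideas with an elementary example: in the Euclidean space $(\mathbb{R}^{n},|\cdot|)$ the set ${\rm Geo}(\mathbb{R}^{n})$ consists exactly of the affine segments. Indeed, if $\gamma_{t}:=(1-t)x+ty$ for some $x,y \in \mathbb{R}^{n}$, then for every $s,t \in [0,1]$ $$ |\gamma_{s}-\gamma_{t}| = |(t-s)(x-y)| = |s-t|\,|\gamma_{0}-\gamma_{1}|, $$ so $\gamma \in {\rm Geo}(\mathbb{R}^{n})$; conversely, the strict convexity of the Euclidean norm forces any curve verifying the defining identity of ${\rm Geo}$ to be of this affine form.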
We denote by $\mathcal{P}_{2}(X)$ the space of probability measures with finite second moment endowed with the $L^{2}$-Wasserstein distance $W_{2}$ defined as follows: for $\mu_0,\mu_1 \in \mathcal{P}_{2}(X)$ we set \begin{equation}\label{eq:Wdef} W_2^2(\mu_0,\mu_1) = \inf_{ \pi} \int_{X\times X} \mathsf d^2(x,y) \, \pi(dxdy), \end{equation} where the infimum is taken over all $\pi \in \mathcal{P}(X \times X)$ with $\mu_0$ and $\mu_1$ as the first and the second marginal; such measures $\pi$ form the set of transference plans. The transference plans realizing the minimum in \eqref{eq:Wdef} will be called optimal transference plans. Since the space $(X,\mathsf d)$ is geodesic, the space $(\mathcal{P}_2(X), W_2)$ is geodesic as well. Any geodesic $(\mu_t)_{t \in [0,1]}$ in $(\mathcal{P}_2(X), W_2)$ can be lifted to a measure $\nu \in {\mathcal {P}}({\rm Geo}(X))$, so that $({\rm e}_t)_\sharp \, \nu = \mu_t$ for all $t \in [0,1]$. Here for any $t\in [0,1]$, ${\rm e}_{t}$ denotes the evaluation map: $$ {\rm e}_{t} : {\rm Geo}(X) \to X, \qquad {\rm e}_{t}(\gamma) : = \gamma_{t}. $$ Given $\mu_{0},\mu_{1} \in \mathcal{P}_{2}(X)$, we denote by $\mathrm{OptGeo}(\mu_{0},\mu_{1})$ the space of all $\nu \in \mathcal{P}({\rm Geo}(X))$ for which $({\rm e}_0,{\rm e}_1)_\sharp\, \nu$ realizes the minimum in \eqref{eq:Wdef}. If $(X,\mathsf d)$ is geodesic, then the set $\mathrm{OptGeo}(\mu_{0},\mu_{1})$ is non-empty for any $\mu_0,\mu_1\in \mathcal{P}_2(X)$. It is also worth introducing the subspace of $\mathcal{P}_{2}(X)$ formed by all those measures absolutely continuous with respect to $\mathfrak m$: it is denoted by $\mathcal{P}_{2}(X,\mathsf d,\mathfrak m)$. \subsection{Geometry of metric measure spaces}\label{Ss:geom} Here we briefly recall the synthetic notions of lower Ricci curvature bounds; for more details we refer to \cite{BS10,lottvillani:metric,sturm:I, sturm:II, Vil}.
In order to formulate the curvature properties of $(X,\mathsf d,\mathfrak m)$ we introduce the following distortion coefficients: given two numbers $K,N\in \mathbb{R}$ with $N\geq0$, we set for $(t,\theta) \in[0,1] \times \mathbb{R}_{+}$, \begin{equation}\label{E:sigma} \sigma_{K,N}^{(t)}(\theta):= \begin{cases} \infty, & \textrm{if}\ K\theta^{2} \geq N\pi^{2}, \crcr \displaystyle \frac{\sin(t\theta\sqrt{K/N})}{\sin(\theta\sqrt{K/N})} & \textrm{if}\ 0< K\theta^{2} < N\pi^{2}, \crcr t & \textrm{if}\ K \theta^{2}<0 \ \textrm{and}\ N=0, \ \textrm{or if}\ K \theta^{2}=0, \crcr \displaystyle \frac{\sinh(t\theta\sqrt{-K/N})}{\sinh(\theta\sqrt{-K/N})} & \textrm{if}\ K\theta^{2} < 0 \ \textrm{and}\ N>0. \end{cases} \end{equation} We also set, for $N\geq 1, K \in \mathbb{R}$ and $(t,\theta) \in[0,1] \times \mathbb{R}_{+}$, \begin{equation} \label{E:tau} \tau_{K,N}^{(t)}(\theta): = t^{1/N} \sigma_{K,N-1}^{(t)}(\theta)^{(N-1)/N}. \end{equation} As we will consider only the case of essentially non-branching spaces, we recall the following definition. \begin{definition}\label{D:essnonbranch} A metric measure space $(X,\mathsf d, \mathfrak m)$ is \emph{essentially non-branching} if and only if for any $\mu_{0},\mu_{1} \in \mathcal{P}_{2}(X)$, with $\mu_{0}$ absolutely continuous with respect to $\mathfrak m$, any element of $\mathrm{OptGeo}(\mu_{0},\mu_{1})$ is concentrated on a set of non-branching geodesics. \end{definition} A set $F \subset {\rm Geo}(X)$ is a set of non-branching geodesics if and only if for any $\gamma^{1},\gamma^{2} \in F$ it holds: $$ \exists \; \bar t\in (0,1) \text{ such that } \ \forall t \in [0, \bar t\,] \quad \gamma_{ t}^{1} = \gamma_{t}^{2} \quad \Longrightarrow \quad \gamma^{1}_{s} = \gamma^{2}_{s}, \quad \forall s \in [0,1]. $$ \begin{definition}[$\mathsf{CD}$ condition]\label{D:CD} An essentially non-branching m.m.s.
$(X,\mathsf d,\mathfrak m)$ verifies $\mathsf{CD}(K,N)$ if and only if for each pair $\mu_{0}, \mu_{1} \in \mathcal{P}_{2}(X,\mathsf d,\mathfrak m)$ there exists $\nu \in \mathrm{OptGeo}(\mu_{0},\mu_{1})$ such that \begin{equation}\label{E:CD} \varrho_{t}^{-1/N} (\gamma_{t}) \geq \tau_{K,N}^{(1-t)}(\mathsf d( \gamma_{0}, \gamma_{1}))\varrho_{0}^{-1/N}(\gamma_{0}) + \tau_{K,N}^{(t)}(\mathsf d(\gamma_{0},\gamma_{1}))\varrho_{1}^{-1/N}(\gamma_{1}), \qquad \nu\text{-a.e.} \, \gamma \in {\rm Geo}(X), \end{equation} for all $t \in [0,1]$, where $({\rm e}_{t})_\sharp \, \nu = \varrho_{t} \mathfrak m$. \end{definition} For the general definition of $\mathsf{CD}(K,N)$ see \cite{lottvillani:metric, sturm:I, sturm:II}. \begin{remark}\label{R:CDN-1} It is worth recalling that if $(M,g)$ is a Riemannian manifold of dimension $n$ and $h \in C^{2}(M)$ with $h > 0$, then the m.m.s. $(M,g,h \, vol)$ verifies $\mathsf{CD}(K,N)$ with $N\geq n$ if and only if (see Theorem 1.7 of \cite{sturm:II}) $$ Ric_{g,h,N} \geq K g, \qquad Ric_{g,h,N} : = Ric_{g} - (N-n) \frac{\nabla_{g}^{2} h^{\frac{1}{N-n}}}{h^{\frac{1}{N-n}}}. $$ In particular if $N = n$ the generalized Ricci tensor $Ric_{g,h,N}= Ric_{g}$ makes sense only if $h$ is constant. Another important case is when $I \subset \mathbb{R}$ is any interval, $h \in C^{2}(I)$ and $\mathcal{L}^{1}$ is the one-dimensional Lebesgue measure; then the m.m.s. $(I ,|\cdot|, h \mathcal{L}^{1})$ verifies $\mathsf{CD}(K,N)$ if and only if \begin{equation}\label{E:CD-N-1} \left(h^{\frac{1}{N-1}}\right)'' + \frac{K}{N-1}h^{\frac{1}{N-1}} \leq 0, \end{equation} and verifies $\mathsf{CD}(K,1)$ if and only if $h$ is constant. Inequality \eqref{E:CD-N-1} has also a non-smooth counterpart; if we drop the smoothness assumption on $h$ it can be proven that the m.m.s. 
$(I ,|\cdot|, h \mathcal{L}^{1})$ verifies $\mathsf{CD}(K,N)$ if and only if \begin{equation}\label{E:curvdensmmR} h( (1-s) t_{0} + s t_{1} )^{1/(N-1)} \geq \sigma^{(1-s)}_{K,N-1}(t_{1} - t_{0}) h (t_{0})^{1/(N-1)} + \sigma^{(s)}_{K,N-1}(t_{1} - t_{0}) h (t_{1})^{1/(N-1)} \end{equation} for all $s \in [0,1]$ and $t_{0},t_{1} \in I$; this is the formulation in the sense of distributions of the differential inequality $$ \left(h^{\frac{1}{N-1}}\right)'' + \frac{K}{N-1}h^{\frac{1}{N-1}} \leq 0. $$ Recall indeed that $s \mapsto \sigma^{(s)}_{K,N-1}(t_{1}-t_{0})$ solves in the classical sense $f'' + (t_{1}-t_{0})^{2} \frac{K}{N-1}f = 0$. \end{remark} We also mention the more recent Riemannian curvature dimension condition $\mathsf{RCD}^{*}(K,N)$. In the infinite dimensional case, i.e. $N = \infty$, it was introduced in \cite{AGS11b}. The class $\mathsf{RCD}^{*}(K,N)$ with $N<\infty$ has been proposed in \cite{G15} and deeply investigated in \cite{AGS, EKS} and \cite{AMS}. We refer to these papers and references therein for a general account on the synthetic formulation of Ricci curvature lower bounds for metric measure spaces. Here we only mention that the $\mathsf{RCD}^{*}(K,N)$ condition is a strengthening of the so-called reduced curvature dimension condition, denoted by $\mathsf{CD}^{*}(K,N)$, that has been introduced in \cite{BS10}: in particular the additional condition is that the Sobolev space $W^{1,2}(X,\mathfrak m)$ is a Hilbert space, see \cite{G15, AGS11a, AGS11b}. The reduced $\mathsf{CD}^{*}(K,N)$ condition asks for the same inequality \eqref{E:CD} as $\mathsf{CD}(K,N)$, but the coefficients $\tau_{K,N}^{(t)}(\mathsf d(\gamma_{0},\gamma_{1}))$ and $\tau_{K,N}^{(1-t)}(\mathsf d(\gamma_{0},\gamma_{1}))$ are replaced by $\sigma_{K,N}^{(t)}(\mathsf d(\gamma_{0},\gamma_{1}))$ and $\sigma_{K,N}^{(1-t)}(\mathsf d(\gamma_{0},\gamma_{1}))$, respectively.
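As a classical illustration of the one-dimensional condition, equality in \eqref{E:CD-N-1} is attained by the model density of the round sphere: for $K>0$ and $N>1$, on the interval $I=[0, \pi\sqrt{(N-1)/K}\,]$ consider $$ h(t) := c_{N,K}\, \sin^{N-1}\Big( t\sqrt{\tfrac{K}{N-1}}\, \Big), $$ with $c_{N,K}>0$ the constant making $h\,\mathcal{L}^{1}$ a probability measure. Then $h^{1/(N-1)}$ solves $f'' + \frac{K}{N-1} f = 0$ in the classical sense, so the m.m.s. $(I,|\cdot|,h\,\mathcal{L}^{1})$ verifies $\mathsf{CD}(K,N)$ with equality in \eqref{E:CD-N-1}.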
Hence while the distortion coefficients of the $\mathsf{CD}(K,N)$ condition are formally obtained by imposing one direction with linear distortion and $N-1$ directions affected by curvature, the $\mathsf{CD}^{*}(K,N)$ condition imposes the same volume distortion in all the $N$ directions. For both definitions there is a local version that is of some relevance for our analysis. Here we state only the local formulation of $\mathsf{CD}(K,N)$, the one for $\mathsf{CD}^{*}(K,N)$ being analogous. \begin{definition}[$\mathsf{CD}_{loc}$ condition]\label{D:loc} An essentially non-branching m.m.s. $(X,\mathsf d,\mathfrak m)$ satisfies $\mathsf{CD}_{loc}(K,N)$ if for any point $x \in X$ there exists a neighborhood $X(x)$ of $x$ such that for each pair $\mu_{0}, \mu_{1} \in \mathcal{P}_{2}(X,\mathsf d,\mathfrak m)$ supported in $X(x)$ there exists $\nu \in \mathrm{OptGeo}(\mu_{0},\mu_{1})$ such that \eqref{E:CD} holds true for all $t \in [0,1]$. The support of $({\rm e}_{t})_\sharp \, \nu$ is not necessarily contained in the neighborhood $X(x)$. \end{definition} One of the main properties of the reduced curvature dimension condition is the globalization property: under the essentially non-branching assumption, $\mathsf{CD}^{*}_{loc}(K,N)$ and $\mathsf{CD}^{*}(K,N)$ are equivalent (see \cite[Corollary 5.4]{BS10}), i.e. the $\mathsf{CD}^{*}$-condition verifies the local-to-global property. We also recall a few relations between $\mathsf{CD}$ and $\mathsf{CD}^{*}$. It is known by \cite[Theorem 2.7]{GigliMap} that, if $(X,\mathsf d,\mathfrak m)$ is a non-branching metric measure space verifying $\mathsf{CD}(K,N)$ and $\mu_{0}, \mu_{1} \in \mathcal{P}(X)$ with $\mu_{0}$ absolutely continuous with respect to $\mathfrak m$, then there exists a unique optimal map $T : X \to X$ such that $(id, T)_\sharp\, \mu_{0}$ realizes the minimum in \eqref{eq:Wdef} and the set $\mathrm{OptGeo}(\mu_{0},\mu_{1})$ contains only one element.
The same proof holds if one replaces the non-branching assumption with the more general essentially non-branching one, see for instance \cite{GRS2013}. \subsection{Isoperimetric profile function} Given a m.m.s. $(X,\mathsf d,\mathfrak m)$ as above and a Borel subset $A\subset X$, let $A^{\varepsilon}$ denote the $\varepsilon$-tubular neighborhood $$ A^{\varepsilon}:=\{x \in X \,:\, \exists y \in A \text{ such that } \mathsf d(x,y) < \varepsilon \}. $$ The Minkowski (exterior) boundary measure $\mathfrak m^+(A)$ is defined by \begin{equation}\label{eq:MinkCont} \mathfrak m^+(A):=\liminf_{\varepsilon\downarrow 0} \frac{\mathfrak m(A^\varepsilon)-\mathfrak m(A)}{\varepsilon}. \end{equation} The \emph{isoperimetric profile}, denoted by ${\mathcal{I}}_{(X,\mathsf d,\mathfrak m)}$, is defined as the pointwise maximal function such that $\mathfrak m^+(A)\geq \mathcal{I}_{(X,\mathsf d,\mathfrak m)}(\mathfrak m(A))$ for every Borel set $A \subset X$, that is \begin{equation}\label{E:profile} \mathcal{I}_{(X,\mathsf d,\mathfrak m)}(v) : = \inf \big\{ \mathfrak m^{+}(A) \colon A \subset X \, \textrm{ Borel}, \, \mathfrak m(A) = v \big\}. \end{equation} If $K>0$ and $N\in \mathbb{N}$, by the L\'evy-Gromov isoperimetric inequality \eqref{eq:LevyGromov} we know that, for $N$-dimensional smooth manifolds having Ricci $\geq K$, the isoperimetric profile function is bounded below by the one of the $N$-dimensional round sphere of the suitable radius. In other words the \emph{model} isoperimetric profile function is the one of ${\mathbb S}^N$. For arbitrary real numbers $N\geq 1$ and $K\in \mathbb{R}$ the situation is more complicated, and only recently E. Milman \cite{Mil} identified the model isoperimetric profile. We refer to \cite{Mil} for all the details. Here we just recall the relevance of isoperimetric profile functions for m.m.s.
over $(\mathbb{R}, |\cdot|)$: given $K\in \mathbb{R}, N\in[1,+\infty)$ and $D\in (0,+\infty]$, consider the function \begin{equation}\label{defcI} \mathcal{I}_{K,N,D}(v) : = \inf \left\{ \mu^{+}(A) \colon A\subset \mathbb{R}, \,\mu(A) = v, \, \mu \in \mathcal{F}_{K,N,D} \right\}, \end{equation} where $\mathcal{F}_{K,N,D}$ denotes the set of $\mu \in \mathcal{P}(\mathbb{R})$ such that $\text{\rm supp}(\mu) \subset [0,D]$ and $\mu = h \cdot \mathcal{L}^{1}$ with $h \in C^{2}((0,D))$ satisfying \begin{equation}\label{eq:DiffIne} \left( h^{\frac{1}{N-1}} \right)'' + \frac{K}{N-1} h^{\frac{1}{N-1}} \leq 0 \quad \text{if }N \in (1,\infty), \quad h\equiv \textrm{const} \quad \text{if }N=1. \end{equation} Then from \cite[Theorem 1.2, Corollary 3.2]{Mil} it follows that for $N$-dimensional smooth manifolds having Ricci $\geq K$, with $K\in \mathbb{R}$ an arbitrary real number, and diameter $D$, the isoperimetric profile function is bounded below by $\mathcal{I}_{K,N,D}$ and the bound is sharp. This also justifies the notation. Going back to non-smooth metric measure spaces (what follows is taken from \cite{CM1}), it is necessary to consider the following broader family of measures: \begin{eqnarray} \mathcal{F}^{s}_{K,N,D} : = \{ \mu \in \mathcal{P}(\mathbb{R}) : &\text{\rm supp}(\mu) \subset [0,D], \, \mu = h_{\mu} \mathcal{L}^{1},\, h_{\mu}\, \textrm{verifies} \, \eqref{E:curvdensmmR} \ \textrm{and is continuous if } N\in (1,\infty), \nonumber \\ & \quad h_{\mu}\equiv \textrm{const} \text{ if }N=1 \}, \end{eqnarray} and the corresponding comparison \emph{synthetic} isoperimetric profile: $$ \mathcal{I}^{s}_{K,N,D}(v) : = \inf \left\{ \mu^{+}(A) \colon A\subset \mathbb{R}, \,\mu(A) = v, \, \mu \in \mathcal{F}^{s}_{K,N,D} \right\}, $$ where $\mu^{+}(A)$ denotes the Minkowski content defined in \eqref{eq:MinkCont}.
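As a simple sanity check of \eqref{defcI}, consider the case $N=1$: the constraint $h \equiv \textrm{const}$ restricts the competitors to normalized Lebesgue measures on subintervals, $\mu = \ell^{-1}\, \mathcal{L}^{1}\llcorner_{[a,a+\ell]}$ with $[a,a+\ell] \subset [0,D]$. For such $\mu$, the set $A = [a, a+v\ell]$ satisfies $\mu(A) = v$ and $\mu^{+}(A) = \ell^{-1}$, and one can check that no Borel set of measure $v \in (0,1)$ does better; minimizing over $\ell \leq D$ then gives $\mathcal{I}_{K,1,D}(v) = 1/D$ for every $v \in (0,1)$, independently of $K$ (with the convention $1/\infty = 0$).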
The term synthetic refers to the fact that for $\mu \in \mathcal{F}^{s}_{K,N,D}$ the Ricci curvature bound is satisfied in its synthetic formulation: if $\mu = h \cdot \mathcal{L}^{1}$, then $h$ verifies \eqref{E:curvdensmmR}. We have already seen that $\mathcal{F}_{K,N,D} \subset \mathcal{F}^{s}_{K,N,D}$; actually one can prove via a smoothing argument that $\mathcal{I}^{s}_{K,N,D}$ coincides with its smooth counterpart $\mathcal{I}_{K,N,D}$ for every volume $v \in [0,1]$. We therefore need the following approximation result. In order to state it let us recall that a standard mollifier in $\mathbb{R}$ is a non-negative $C^\infty(\mathbb{R})$ function $\psi$ with compact support in $[0,1]$ such that $\int_{\mathbb{R}} \psi = 1$. \begin{lemma}[Lemma 6.2, \cite{CM1}]\label{lem:approxh} Let $D \in (0,\infty)$ and let $h:[0,D] \to [0,\infty)$ be a continuous function. Fix $N\in (1,\infty)$ and for $\varepsilon>0$ define \begin{equation} h_{\varepsilon}(t):=[h^{\frac{1}{N-1}}\ast \psi_{\varepsilon} (t)]^{N-1} := \left[ \int_{\mathbb{R}} h(t-s)^{\frac{1}{N-1}} \psi_{\varepsilon} (s) \, d s\right]^{N-1} = \left[ \int_{\mathbb{R}} h(s)^{\frac{1}{N-1}} \psi_{\varepsilon} (t-s) \, d s\right]^{N-1}, \end{equation} where $\psi_\varepsilon(x)=\frac{1}{\varepsilon} \psi(x/\varepsilon)$ and $\psi$ is a standard mollifier function. The following properties hold: \begin{enumerate} \item $h_{\varepsilon}$ is a non-negative $C^\infty$ function with support in $[-\varepsilon, D+\varepsilon]$; \item $h_{\varepsilon}\to h$ uniformly as $\varepsilon \downarrow 0$, in particular $h_{\varepsilon} \to h$ in $L^{1}$. \item If $h$ satisfies the convexity condition \eqref{E:curvdensmmR} corresponding to the above fixed $N>1$ and some $K \in \mathbb{R}$ then also $h_{\varepsilon}$ does. In particular $h_{\varepsilon}$ satisfies the differential inequality \eqref{eq:DiffIne}.
\end{enumerate} \end{lemma} Using this approximation one can prove the following \begin{theorem}[Theorem 6.3, \cite{CM1}]\label{thm:I=Is} For every $v\in [0,1]$, $K \in \mathbb{R}$, $N\in [1,\infty)$, $D\in (0,\infty]$ it holds $\mathcal{I}^{s}_{K,N,D}(v)=\mathcal{I}_{K,N,D}(v)$. \end{theorem} \subsection{Disintegration of measures} We include here a version of the Disintegration Theorem that we will use. We will follow Appendix A of \cite{biacara:extreme}, where a self-contained approach (and a proof) of the Disintegration Theorem in countably generated measure spaces can be found. An even more general version of the Disintegration Theorem can be found in Section 452 of \cite{Fre:measuretheory4}. Recall that a $\sigma$-algebra is \emph{countably generated} if there exists a countable family of sets such that the $\sigma$-algebra coincides with the smallest $\sigma$-algebra containing them. Given a measurable space $(X, \mathscr{X})$, i.e. $\mathscr{X}$ is a $\sigma$-algebra of subsets of $X$, and a function $\mathfrak Q : X \to Q$, with $Q$ a general set, we can endow $Q$ with the \emph{push forward $\sigma$-algebra} $\mathscr{Q}$ of $\mathscr{X}$: $$ C \in \mathscr{Q} \quad \Longleftrightarrow \quad \mathfrak Q^{-1}(C) \in \mathscr{X}, $$ which can also be defined as the largest $\sigma$-algebra on $Q$ such that $\mathfrak Q$ is measurable. Moreover, given a probability measure $\mathfrak m$ on $(X,\mathscr{X})$, define a probability measure $\mathfrak q$ on $(Q,\mathscr{Q})$ by push forward via $\mathfrak Q$, i.e. $\mathfrak q := \mathfrak Q_\sharp \, \mathfrak m$. This general scheme fits the following situation: given a measure space $(X,\mathscr{X},\mathfrak m)$, suppose a partition of $X$ is given in the form $\{ X_{q}\}_{q\in Q}$, where $Q$ is the set of indices and $\mathfrak Q : X \to Q$ is the quotient map, i.e. $$ q = \mathfrak Q(x) \iff x \in X_{q}.
$$ Following the previous scheme, we can consider also the quotient $\sigma$-algebra $\mathscr{Q}$ and the quotient measure $\mathfrak q$, obtaining the quotient measure space $(Q, \mathscr{Q}, \mathfrak q)$. \begin{definition} \label{defi:dis} A \emph{disintegration} of $\mathfrak m$ \emph{consistent with} $\mathfrak Q$ is a map $$ Q \ni q \longmapsto \mathfrak m_{q} \in \mathcal{P}(X,\mathscr{X}) $$ such that the following hold: \begin{enumerate} \item for all $B \in \mathscr{X}$, the map $\mathfrak m_{\cdot}(B)$ is $\mathfrak q$-measurable; \item for all $B \in \mathscr{X}$ and $C \in \mathscr{Q}$, the following consistency condition holds: $$ \mathfrak m \left(B \cap \mathfrak Q^{-1}(C) \right) = \int_{C} \mathfrak m_{q}(B)\, \mathfrak q(dq). $$ \end{enumerate} A disintegration is \emph{strongly consistent with respect to $\mathfrak Q$} if for $\mathfrak q$-a.e. $q \in Q$ we have $\mathfrak m_{q}(\mathfrak Q^{-1}(q))=1$. The measures $\mathfrak m_q$ are called \emph{conditional probabilities}. \end{definition} When the map $\mathfrak Q$ is induced by a partition of $X$ as before, we will directly say that the disintegration is consistent with the partition, meaning that the disintegration is consistent with the quotient map $\mathfrak Q$ associated to the partition $X = \cup_{q\in Q} X_{q}$. We now state the Disintegration Theorem. \begin{theorem}[Theorem A.7, Proposition A.9 of \cite{biacara:extreme}]\label{T:disintegrationgeneral} \label{T:disintr} Assume that $(X,\mathscr{X},\mathfrak m)$ is a countably generated probability space and $X = \cup_{q \in Q}X_{q}$ is a partition of $X$. Then the quotient probability space $(Q, \mathscr{Q},\mathfrak q)$ is essentially countably generated and there exists a unique disintegration $q \mapsto \mathfrak m_{q}$ consistent with the partition $X = \cup_{q\in Q} X_{q}$.
The disintegration is strongly consistent if and only if there exists an $\mathfrak m$-section $S \in \mathscr{X}$ such that the $\sigma$-algebra $\mathscr{S}$ contains $\mathcal{B}(S)$. \end{theorem} We expand the statement of Theorem \ref{T:disintegrationgeneral}.\\ In the measure space $(Q, \mathscr{Q},\mathfrak q)$, the $\sigma$-algebra $\mathscr{Q}$ is \emph{essentially countably generated} if, by definition, there exists a countable family of sets $Q_{n} \subset Q$ such that for any $C \in \mathscr{Q}$ there exists $\hat C \in \hat{\mathscr{Q}}$, where $\hat{\mathscr{Q}}$ is the $\sigma$-algebra generated by $\{ Q_{n} \}_{n \in \mathbb{N}}$, such that $\mathfrak q(C\, \Delta \, \hat C) = 0$. Uniqueness is understood in the following sense: if $q\mapsto \mathfrak m^{1}_{q}$ and $q\mapsto \mathfrak m^{2}_{q}$ are two consistent disintegrations then $\mathfrak m^{1}_{q}=\mathfrak m^{2}_{q}$ for $\mathfrak q$-a.e. $q \in Q$. Finally, a set $S$ is a section for the partition $X = \cup_{q}X_{q}$ if for any $q \in Q$ there exists a unique $x_{q} \in S \cap X_{q}$. A set $S_{\mathfrak m}$ is an $\mathfrak m$-section if there exists $Y \subset X$ with $\mathfrak m(X \setminus Y) = 0$ such that the partition $Y = \cup_{q} (X_{q} \cap Y)$ has section $S_{\mathfrak m}$. Once a section (or an $\mathfrak m$-section) is given, one can obtain the measurable space $(S,\mathscr{S})$ by pushing forward the $\sigma$-algebra $\mathscr{X}$ on $S$ via the map that associates to any $x \in X_{q}$ the point $x_{q}$, the unique element of $S \cap X_{q}$. \section{Transport set}\label{S:transportset} The following setting is fixed once and for all: \begin{center} $(X,\mathsf d,\mathfrak m)$ is a fixed metric measure space with $\mathfrak m(X)=1$ such that \\ the ambient metric space $(X, \mathsf d)$ is geodesic and proper (hence complete and separable). \end{center} Let $\varphi : X \to \mathbb{R}$ be any $1$-Lipschitz function.
Here we present some useful results (all of them already presented in \cite{biacava:streconv}) concerning the $\mathsf d$-cyclically monotone set associated with $\varphi$: \begin{equation}\label{E:Gamma} \Gamma : = \{ (x,y) \in X\times X : \varphi(x) - \varphi(y) = \mathsf d(x,y) \}, \end{equation} which can be seen as the set of couples moved by $\varphi$ with maximal slope. Recall that a set $\Lambda \subset X \times X$ is said to be $\mathsf d$-cyclically monotone if for any finite set of points $(x_{1},y_{1}),\dots,(x_{N},y_{N}) \in \Lambda$ it holds $$ \sum_{i = 1}^{N} \mathsf d(x_{i},y_{i}) \leq \sum_{i = 1}^{N} \mathsf d(x_{i},y_{i+1}), $$ with the convention that $y_{N+1} = y_{1}$. The following lemma is a consequence of the $\mathsf d$-cyclically monotone structure of $\Gamma$. \begin{lemma}\label{L:cicli} Let $(x,y) \in X\times X$ be an element of $\Gamma$. Let $\gamma \in {\rm Geo}(X)$ be such that $\gamma_{0} = x$ and $\gamma_{1}=y$. Then $$ (\gamma_{s},\gamma_{t}) \in \Gamma, $$ for all $0\leq s \leq t \leq 1$. \end{lemma} \begin{proof} Take $0\leq s \leq t \leq 1$ and note that \begin{align*} \varphi(\gamma_{s}) - \varphi(\gamma_{t}) = &~ \varphi(\gamma_{s}) - \varphi(\gamma_{t}) + \varphi(\gamma_{0}) - \varphi(\gamma_{0}) + \varphi(\gamma_{1}) - \varphi(\gamma_{1})\crcr \geq &~ \mathsf d(\gamma_{0},\gamma_{1}) - \mathsf d(\gamma_{0},\gamma_{s}) - \mathsf d(\gamma_{t},\gamma_{1}) \crcr = &~ \mathsf d(\gamma_{s},\gamma_{t}). \end{align*} Since $\varphi$ is $1$-Lipschitz, also the converse inequality $\varphi(\gamma_{s}) - \varphi(\gamma_{t}) \leq \mathsf d(\gamma_{s},\gamma_{t})$ holds, and the claim follows. \end{proof} It is natural then to consider the set of geodesics $G \subset {\rm Geo}(X)$ such that $$ \gamma \in G \iff \{ (\gamma_{s},\gamma_{t}) : 0\leq s \leq t \leq 1 \} \subset \Gamma, $$ that is $G : = \{ \gamma \in {\rm Geo}(X) : (\gamma_{0},\gamma_{1}) \in \Gamma \}$. We now recall some basic definitions of the $L^{1}$-optimal transportation theory that will be needed to describe the structure of $\Gamma$.
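Before proceeding, the following elementary example may help to visualize $\Gamma$ and $G$. Take $(X,\mathsf d)=(\mathbb{R},|\cdot|)$ and the $1$-Lipschitz function $\varphi(x)=|x|$. Then $$ \Gamma = \{ (x,y) : |x|-|y| = |x-y| \} = \{ (x,y) : 0 \leq y \leq x \} \cup \{ (x,y) : x \leq y \leq 0 \}, $$ i.e. $\Gamma$ consists of the pairs of points lying on a common half-line issuing from the origin, the second point closer to $0$ than the first; accordingly, $G$ is formed by the geodesics moving towards the origin along one of the two half-lines.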
\begin{definition}\label{D:transport} We define the set of \emph{transport rays} by $$ R := \Gamma \cup \Gamma^{-1}, $$ where $\Gamma^{-1}:= \{ (x,y) \in X \times X : (y,x) \in \Gamma \}$. The sets of \emph{initial points} and \emph{final points} are defined respectively by \begin{align*} {\mathfrak a} :=& \{ z \in X: \nexists \, x \in X, (x,z) \in \Gamma, \mathsf d(x,z) > 0 \}, \crcr {\mathfrak b} :=& \{ z \in X: \nexists \, x \in X, (z,x) \in \Gamma, \mathsf d(x,z) > 0 \}. \end{align*} The set of \emph{end points} is ${\mathfrak a} \cup {\mathfrak b}$. We define the subset of $X$, the \emph{transport set with end points}: $$ \mathcal{T}_{e} = P_{1}(\Gamma \setminus \{ x = y \}) \cup P_{1}(\Gamma^{-1}\setminus \{ x=y \}), $$ where $\{ x = y\}$ stands for $\{ (x,y) \in X^{2} : \mathsf d(x,y) = 0 \}$. \end{definition} A few comments are in order. Notice that $R$ coincides with $\{(x,y) \in X \times X \colon |\varphi(x) -\varphi(y)| = \mathsf d(x,y) \}$; the name transport set with end points for $\mathcal{T}_{e}$ is motivated by the fact that later on we will consider a more regular subset of $\mathcal{T}_{e}$ that will be called transport set; moreover if for instance $x \in X$ is moved forward but not backward by $\varphi$, this translates into $x \in P_{1}(\Gamma \setminus \{x=y\})$ and $x \notin P_{1}(\Gamma^{-1} \setminus \{x=y\})$; in any case $x$ belongs to $\mathcal{T}_{e}$. We also introduce the following notation that will be used throughout the paper; we set $\Gamma(x):=P_2(\Gamma\cap(\{x\}\times X))$ and $\Gamma^{-1}(x):=P_2(\Gamma^{-1}\cap(\{x\} \times X))$. More generally, if $F \subset X \times X$, we set $F(x) = P_2(F \cap (\{x\}\times X))$. \begin{remark}\label{R:regularity} Here we discuss the measurability of the sets introduced in Definition \ref{D:transport}. Since $\varphi$ is $1$-Lipschitz, $\Gamma$ is closed and therefore $\Gamma^{-1}$ and $R$ are closed as well. Moreover by assumption the space is proper, hence the sets $\Gamma, \Gamma^{-1}, R$ are $\sigma$-compact (countable unions of compact sets).
Then we look at the sets of initial and final points: $$ {\mathfrak a} = P_{2} \left( \Gamma \cap \{ (x,z) \in X\times X : \mathsf d(x,z) > 0 \} \right)^{c}, \qquad {\mathfrak b} = P_{1} \left( \Gamma \cap \{ (x,z) \in X\times X : \mathsf d(x,z) > 0 \} \right)^{c}. $$ Since $\{ (x,z) \in X\times X : \mathsf d(x,z) > 0 \} = \cup_{n} \{ (x,z) \in X\times X : \mathsf d(x,z) \geq 1/n \}$, it follows that both ${\mathfrak a}$ and ${\mathfrak b}$ are the complements of $\sigma$-compact sets. Hence ${\mathfrak a}$ and ${\mathfrak b}$ are Borel sets. Reasoning as before, it follows that $\mathcal{T}_{e}$ is a $\sigma$-compact set. \end{remark} \begin{lemma} \label{L:mapoutside} Let $\pi \in \Pi(\mu_{0},\mu_{1})$ be such that $\pi(\Gamma) = 1$. Then $$ \pi\big( (\mathcal{T}_e \times \mathcal{T}_e) \cup \{x = y\} \big) = 1. $$ \end{lemma} \begin{proof} It is enough to observe that if $(z,w) \in \Gamma$ with $z \neq w$, then $w \in \Gamma(z)$ and $z \in \Gamma^{-1}(w)$ and therefore $$ (z,w) \in \mathcal{T}_{e}\times \mathcal{T}_{e}. $$ Hence $\Gamma \setminus \{x = y\} \subset \mathcal{T}_e \times \mathcal{T}_e$. Since $\pi(\Gamma) =1$, the claim follows. \end{proof} As a consequence, $\mu_{0}(\mathcal{T}_e) = \mu_{1}(\mathcal{T}_e)$ and any optimal map $T$ such that $T_\sharp (\mu_{0} \llcorner_{\mathcal{T}_e}) = \mu_{1} \llcorner_{\mathcal{T}_e}$ can be extended to an optimal map $T'$ with $T'_\sharp\, \mu_{0} = \mu_{1}$ with the same cost by setting \begin{equation} \label{E:extere} T'(x) = \begin{cases} T(x), & \textrm{if } x \in \mathcal{T}_e \crcr x, & \textrm{if } x \notin \mathcal{T}_e. \end{cases} \end{equation} It can be proved that the set of transport rays $R$ induces an equivalence relation on a subset of $\mathcal{T}_{e}$. It is sufficient to remove from $\mathcal{T}_{e}$ the branching points of geodesics. Then, using curvature properties of the space, one can prove that such branching points all have $\mathfrak m$-measure zero.
\subsection{Branching structures in the Transport set} What follows was first presented in \cite{cava:MongeRCD}. Consider the sets of respectively forward and backward branching points \begin{align}\label{E:branchingpoints} A_{+}: = &~\{ x \in \mathcal{T}_{e} : \exists z,w \in \Gamma(x), (z,w) \notin R \}, \nonumber \\ A_{-} : = &~\{ x \in \mathcal{T}_{e} : \exists z,w \in \Gamma^{-1}(x), (z,w) \notin R \}. \end{align} The sets $A_{\pm}$ are $\sigma$-compact. Indeed, since $R$ is closed, its complement is open and hence, $(X,\mathsf d)$ being proper, $\sigma$-compact; $A_{\pm}$ are then obtained through intersections and projections of $\sigma$-compact sets. The main motivation for the definition of $A_{+}$ and $A_{-}$ is contained in the next theorem. \begin{theorem}\label{T:equivalence} The set of transport rays $R\subset X \times X$ is an equivalence relation on the set $$ \mathcal{T}_{e} \setminus \left( A_{+} \cup A_{-} \right). $$ \end{theorem} \begin{proof} First, for all $x \in P_{1}(\Gamma)$, $(x,x) \in R$. If $x,y \in \mathcal{T}_{e}$ with $(x,y) \in R$, then by definition of $R$, it follows straightforwardly that $(y,x) \in R$. So the only property needing a proof is transitivity. Let $x,z,w \in \mathcal{T}_{e} \setminus \left( A_{+} \cup A_{-} \right)$ be such that $(x,z), (z,w) \in R$ with $x,z$ and $w$ distinct points. The claim is $(x,w) \in R$. So we have 4 different possibilities: the first one is \[ z\in \Gamma(x), \quad w \in \Gamma(z). \] This immediately implies $w \in \Gamma(x)$ and therefore $(x,w) \in R$. The second possibility is \[ z\in \Gamma(x), \quad z \in \Gamma(w), \] that can be rewritten as $(z,x), (z,w) \in \Gamma^{-1}$. Since $z \notin A_{-}$, necessarily $(x,w) \in R$. Third possibility: \[ x\in \Gamma(z), \quad w \in \Gamma(z), \] and since $z \notin A_{+}$ it follows that $(x,w) \in R$. The last case is \[ x\in \Gamma(z), \quad z \in \Gamma(w), \] and therefore $x \in \Gamma(w)$, hence $(x,w) \in R$ and the claim follows. \end{proof} Next, we show that each equivalence class of $R$ is formed by a single geodesic.
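Before doing so, let us point out with a purely illustrative example of ours that, in the absence of any assumption on the space, the sets $A_{\pm}$ can even have full measure. In $(\mathbb{R}^{2}, \|\cdot\|_{1}, \mathcal{L}^{2})$ with $\varphi(x) = -x_{1}-x_{2}$ one has $$ \Gamma = \{ (x,y) \in \mathbb{R}^{2} \times \mathbb{R}^{2} : y_{1} \geq x_{1}, \, y_{2} \geq x_{2} \}, $$ and for every $x$ the points $z = x + (1,0)$ and $w = x + (0,1)$ belong to $\Gamma(x)$, while $|\varphi(z) - \varphi(w)| = 0 < 2 = \mathsf d(z,w)$, i.e. $(z,w) \notin R$. Hence $A_{+} = \mathbb{R}^{2}$ and, symmetrically, $A_{-} = \mathbb{R}^{2}$; accordingly, the assumption of Proposition \ref{P:nobranch} below fails for the $\ell^{1}$ plane.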
\begin{lemma}\label{L:singlegeo} For any $x \in \mathcal{T}_{e} \setminus (A_{+} \cup A_{-})$ and $z,w \in R(x)$ there exists $\gamma \in G \subset {\rm Geo}(X)$ such that $$ \{ x, z,w \} \subset \{ \gamma_{s} : s\in [0,1] \}. $$ If $\hat \gamma \in G$ enjoys the same property, then \[ \big( \{ \hat \gamma_{s} : s \in [0,1] \} \cup \{ \gamma_{s} : s \in [0,1] \} \big) \subset \{ \tilde \gamma_{s} : s \in [0,1] \} \] for some $\tilde \gamma \in G$. \end{lemma} Since $G = \{ \gamma \in {\rm Geo}(X) : (\gamma_{0},\gamma_{1}) \in \Gamma \}$, Lemma \ref{L:singlegeo} states that as soon as we fix an element $x$ in $\mathcal{T}_{e} \setminus (A_{+} \cup A_{-})$ and we pick two elements $z,w$ in the same equivalence class of $x$, then these three points are aligned on a geodesic $\gamma$ whose image is again all contained in the same equivalence class $R(x)$. \begin{proof} Assume that $x, z$ and $w$ are all distinct points, otherwise the claim follows trivially. We consider different cases. \noindent \emph{First case:} $z \in \Gamma(x)$ and $w \in \Gamma^{-1}(x)$. \\ By $\mathsf d$-cyclical monotonicity $$ \mathsf d(z,w) \leq \mathsf d(z,x) + \mathsf d(x,w) = \varphi(w) - \varphi(z) \leq \mathsf d(z,w). $$ Hence $z,x$ and $w$ lie on a geodesic. \noindent \emph{Second case:} $z,w \in \Gamma(x)$. \\ Without loss of generality $\varphi(x) \geq \varphi(w) \geq \varphi(z)$. Since in the proof of Lemma \ref{L:geoingamma} we have already excluded the case $\varphi(w) = \varphi(z)$, we assume $\varphi(x) > \varphi(w) > \varphi(z)$. If there existed no geodesic $\gamma \in G$ with $\gamma_{0} = x$, $\gamma_{1} = z$ and $\gamma_{s} = w$ for some $s \in (0,1)$, then there would be $\gamma \in G$ with $(\gamma_{0},\gamma_{1}) = (x,z)$ and $s \in (0,1)$ such that $$ \varphi(\gamma_{s}) = \varphi(w), \qquad \gamma_{s} \in \Gamma(x), \qquad \gamma_{s} \neq w. $$ As observed in the proof of Lemma \ref{L:geoingamma}, this would imply that $(\gamma_{s},w) \notin R$ and since $x \notin A_{+}$ this would be a contradiction.
Hence the second case follows. The remaining two cases follow by the same reasoning, exchanging the role of $\Gamma(x)$ with that of $\Gamma^{-1}(x)$. The second part of the statement now follows easily. \end{proof} \section{Cyclically monotone sets}\label{S:cyclically} Following Theorem \ref{T:equivalence} and Lemma \ref{L:singlegeo}, the next step is to prove that both $A_{+}$ and $A_{-}$ have $\mathfrak m$-measure zero, that is, branching occurs only on a set of rays of zero $\mathfrak m$-measure. Already from the statement of this property, it is clear that some regularity assumption on $(X,\mathsf d,\mathfrak m)$ should play a role. We will indeed assume the space to enjoy a stronger form of the essentially non-branching condition. Recall that the latter is formulated in terms of geodesics of $(\mathcal{P}_{2}(X),W_{2})$, hence of $\mathsf d^{2}$-cyclically monotone sets, while we need regularity for the $\mathsf d$-cyclically monotone set $\Gamma$. Hence it is necessary to obtain $\mathsf d^{2}$-cyclically monotone sets as subsets of $\mathsf d$-cyclically monotone sets. We present here a strategy introduced by the author in \cite{cava:decomposition, cava:MongeRCD}, from where all the material presented in this section is taken. Section \ref{Ss:structure} contains results from \cite{biacava:streconv} while Section \ref{Ss:balanced} is taken from \cite{CM1}. \begin{lemma}[Lemma 4.6 of \cite{cava:decomposition}]\label{L:12monotone} Let $\Delta \subset \Gamma$ be any set such that: $$ (x_{0},y_{0}), (x_{1},y_{1}) \in \Delta \quad \Rightarrow \quad (\varphi(y_{1}) - \varphi(y_{0}) )\cdot (\varphi(x_{1}) - \varphi(x_{0}) ) \geq 0. $$ Then $\Delta$ is $\mathsf d^{2}$-cyclically monotone. \end{lemma} \begin{proof} It follows directly from the hypothesis of the lemma that the set $$ \Lambda: = \{ (\varphi(x), \varphi(y) ) : (x,y) \in \Delta \} \subset \mathbb{R}^{2}, $$ is monotone in the Euclidean sense.
Since $\Lambda \subset \mathbb{R}^{2}$, it is then a standard fact that $\Lambda$ is also $|\cdot|^{2}$-cyclically monotone, where $|\cdot|$ denotes the modulus. We anyway include a short proof: there exists a maximal monotone multivalued function $F$ such that $\Lambda \subset \textrm{graph} (F)$ and its domain is an interval, say $(a,b)$ with $a$ and $b$ possibly infinite; moreover, apart from countably many $x \in \mathbb{R}$, the set $F(x)$ is a singleton. Then the following function is well defined: $$ \Psi(x) : = \int_{c}^{x} F(s) ds, $$ where $c$ is any fixed element of $(a,b)$. Then observe that $$ \Psi(z) - \Psi(x) \geq y(z-x), \qquad \forall \ z,x \in (a,b), $$ where $y$ is any element of $F(x)$. In particular this implies that $\Psi$ is convex and $F(x)$ is a subset of its sub-differential. In particular $\Lambda$ is $|\cdot |^{2}$-cyclically monotone. \\ Then for $\{(x_{i},y_{i})\}_{ i \leq N} \subset \Delta$, with the cyclic convention $y_{N+1} = y_{1}$, since $\Delta \subset \Gamma$ it holds \begin{align*} \sum_{i=1}^{N} \mathsf d^{2}(x_{i},y_{i}) = &~ \sum_{i =1}^{N}|\varphi(x_{i}) - \varphi(y_{i})|^{2} \crcr \leq&~ \sum_{i =1}^{N}|\varphi(x_{i}) - \varphi(y_{i+1})|^{2} \crcr \leq &~ \sum_{i=1}^{N} \mathsf d^{2}(x_{i},y_{i+1}), \end{align*} where the first inequality follows from the $|\cdot|^{2}$-cyclical monotonicity of $\Lambda$ and the last inequality is given by the 1-Lipschitz regularity of $\varphi$. The claim follows. \end{proof} To study the set of branching points, it is necessary to relate branching points to geodesics. In the next lemma, using Lemma \ref{L:cicli}, we observe that once branching happens there exist two distinct geodesics, both contained in $\Gamma(x)$, that are not in relation in the sense of $R$. \begin{lemma}\label{L:geoingamma} Let $x \in A_{+}$.
Then there exist two distinct geodesics $\gamma^{1},\gamma^{2} \in G$ such that \begin{itemize} \item[-] $(x,\gamma_{s}^{1}), (x,\gamma_{s}^{2}) \in \Gamma$ for all $s \in [0,1]$; \item[-] $(\gamma_{s}^{1},\gamma^{2}_{s}) \notin R$ for all $s \in [0,1]$; \item[-] $\varphi(\gamma^{1}_{s}) = \varphi(\gamma^{2}_{s})$ for all $s \in [0,1]$. \end{itemize} Moreover both geodesics are non-constant. \end{lemma} \begin{proof} From the definition of $A_{+}$ there exist $z,w \in \mathcal{T}_{e}$ such that $z,w \in \Gamma(x)$ and $(z,w) \notin R$. Since $z,w \in \Gamma(x)$, from Lemma \ref{L:cicli} there exist two geodesics $\gamma^{1},\gamma^{2} \in G$ such that $$ \gamma^{1}_{0} = \gamma^{2}_{0} = x, \quad \gamma^{1}_{1} = z, \quad \gamma^{2}_{1} = w. $$ Since $(z,w) \notin R$, necessarily both $z$ and $w$ are different from $x$ and $x$ is not a final point, that is $x \notin {\mathfrak b}$. So the previous geodesics are not constant. Since $z$ and $w$ can be exchanged, we can also assume that $\varphi(z) \geq \varphi(w)$. Since $z \in \Gamma(x)$, $\varphi(x) \geq \varphi(z)$ and by continuity there exists $s_{2} \in (0,1]$ such that $$ \varphi(z) = \varphi(\gamma^{2}_{s_{2}}). $$ Note that $z \neq \gamma^{2}_{s_{2}}$, otherwise $w \in \Gamma(z)$ and therefore $(z,w) \in R$. Moreover still $(z,\gamma^{2}_{s_{2}}) \notin R$. Indeed, if the contrary were true, then $$ 0= |\varphi(z) - \varphi(\gamma^{2}_{s_{2}}) | = \mathsf d(z,\gamma^{2}_{s_{2}}), $$ contradicting $z \neq \gamma^{2}_{s_{2}}$. So by continuity there exists $\delta > 0$ such that $$ \varphi (\gamma^{1}_{1-s} ) = \varphi (\gamma^{2}_{s_{2}(1-s)} ), \qquad \mathsf d(\gamma^{1}_{1-s}, \gamma^{2}_{s_{2}(1-s)}) > 0, $$ for all $0 \leq s \leq \delta$. Hence, reapplying the previous argument, $(\gamma^{1}_{1-s}, \gamma^{2}_{s_{2}(1-s)}) \notin R$.
The curves $\gamma^{1}$ and $\gamma^{2}$ of the claim are then obtained by suitably restricting and rescaling the geodesics $\gamma^{1}$ and $\gamma^{2}$ considered so far. \end{proof} The previous correspondence between branching points and couples of branching geodesics can be proved to be measurable. We will make use of the following selection result, Theorem 5.5.2 of \cite{Sri:courseborel}. We again refer to \cite{Sri:courseborel} for some preliminaries on analytic sets. \begin{theorem} \label{T:vanneuma} Let $X$ and $Y$ be Polish spaces, $F \subset X \times Y$ analytic, and $\mathcal{A}$ be the $\sigma$-algebra generated by the analytic subsets of $X$. Then there is an $\mathcal{A}$-measurable section $u : P_1(F) \to Y$ of $F$. \end{theorem} Recall that given $F \subset X \times Y$, a \emph{section $u$ of $F$} is a function from $P_1(F)$ to $Y$ such that $\textrm{graph}(u) \subset F$. \begin{lemma}\label{L:selectiongeo} There exists an $\A$-measurable map $u : A_{+} \to G \times G$ such that if $u(x) = (\gamma^{1},\gamma^{2})$ then \begin{itemize} \item[-] $(x,\gamma_{s}^{1}), (x,\gamma_{s}^{2}) \in \Gamma$ for all $s \in [0,1]$; \item[-] $(\gamma_{s}^{1},\gamma^{2}_{s}) \notin R$ for all $s \in [0,1]$; \item[-] $\varphi(\gamma^{1}_{s}) = \varphi(\gamma^{2}_{s})$ for all $s \in [0,1]$. \end{itemize} Moreover both geodesics are non-constant. \end{lemma} \begin{proof} Since $G = \{ \gamma \in {\rm Geo}(X) : (\gamma_{0},\gamma_{1}) \in \Gamma \}$ and $\Gamma \subset X \times X$ is closed, the set $G$ is a complete and separable metric space.
Consider now the set \begin{align*} F:= &~ \{ (x,\gamma^{1},\gamma^{2}) \in \mathcal{T}_{e}\times G \times G : (x,\gamma^{1}_{0}), (x,\gamma^{2}_{0}) \in \Gamma \} \crcr &~ \cap\left( X\times \{ (\gamma^{1},\gamma^{2}) \in G\times G : \mathsf d(\gamma^{1}_{1},\gamma^{2}_{1})>0 \} \right) \crcr &~ \cap\left( X\times \{ (\gamma^{1},\gamma^{2}) \in G\times G : \mathsf d(\gamma^{1}_{0},\gamma^{2}_{0})>0 \} \right) \crcr &~ \cap\left( X\times \{ (\gamma^{1},\gamma^{2}) \in G\times G : \mathsf d(\gamma^{1}_{0},\gamma^{1}_{1})>0 \} \right) \crcr &~ \cap\left( X\times \{ (\gamma^{1},\gamma^{2}) \in G\times G : \varphi(\gamma^{1}_{i}) = \varphi(\gamma^{2}_{i}), \, i =0,1 \} \right). \end{align*} It follows from Remark \ref{R:regularity} that $F$ is $\sigma$-compact. To avoid possible intersections of $\gamma^{1}$ with $\gamma^{2}$ at interior points we consider the following map: \begin{align*} h : G \times G &~ \to ~ [0,\infty) \crcr (\gamma^{1},\gamma^{2}) & ~ \mapsto ~ h(\gamma^{1},\gamma^{2}) : = \min_{s\in [0,1]} \, \mathsf d ( \gamma^{1}_{s},\gamma^{2}_{s}). \end{align*} From compactness of $[0,1]$, we deduce the continuity of $h$. Therefore $$ \hat F : = F \cap \{ (x,\gamma^{1},\gamma^{2} )\in X \times G\times G : h(\gamma^{1},\gamma^{2}) > 0\} $$ is a Borel set and from Lemma \ref{L:geoingamma}, $$ \hat F \cap \left( \{x\} \times G\times G \right) \neq \emptyset $$ for all $x \in A_{+}$. By Theorem \ref{T:vanneuma} we infer the existence of an $\mathcal{A}$-measurable selection $u$ of $\hat F$. Moreover $A_{+} = P_{1}(\hat F)$ and, if $u(x) = (\gamma^{1},\gamma^{2})$, then $$ \mathsf d( \gamma^{1}_{s},\gamma^{2}_{s}) > 0, \qquad \varphi(\gamma^{1}_{s}) = \varphi(\gamma^{2}_{s}), $$ for all $s \in [0,1]$, and therefore $(\gamma^{1}_{s},\gamma^{2}_{s}) \notin R$ for all $s \in [0,1]$. The claim follows. \end{proof} We are ready to prove the following \begin{proposition}\label{P:nobranch} Let $(X,\mathsf d,\mathfrak m)$ be a m.m.s.
such that for any $\mu_{0},\mu_{1} \in \mathcal{P}(X)$ with $\mu_{0} \ll \mathfrak m$ any optimal transference plan for $W_{2}$ is concentrated on the graph of a function. Then $$ \mathfrak m(A_{+}) = \mathfrak m(A_{-}) = 0. $$ \end{proposition} \begin{proof} {\bf Step 1.} \\ Suppose by contradiction that $\mathfrak m(A_{+})>0$. By definition of $A_{+}$, thanks to Lemma \ref{L:geoingamma} and Lemma \ref{L:selectiongeo}, for every $x \in A_{+}$ there exist two non-constant geodesics $\gamma^{1},\gamma^{2} \in G$ such that \begin{itemize} \item[-] $(x,\gamma_{s}^{1}), (x,\gamma_{s}^{2}) \in \Gamma$ for all $s \in [0,1]$; \item[-] $(\gamma_{s}^{1},\gamma^{2}_{s}) \notin R$ for all $s \in [0,1]$; \item[-] $\varphi(\gamma^{1}_{s}) = \varphi(\gamma^{2}_{s})$ for all $s \in [0,1]$. \end{itemize} Moreover the map $A_{+} \ni x \mapsto u(x) : = (\gamma^{1},\gamma^{2}) \in G^{2}$ is $\A$-measurable. By inner regularity of compact sets (or by Lusin's Theorem), possibly selecting a subset of $A_{+}$ still with strictly positive $\mathfrak m$-measure, we can assume that the previous map is continuous and in particular the functions $$ A_{+} \ni x \mapsto \varphi(\gamma^{i}_{j}) \in \mathbb{R}, \qquad i =1,2, \ j = 0,1 $$ are all continuous. Put $$ \alpha_{x} : = \varphi(\gamma^{1}_{0}) = \varphi(\gamma^{2}_{0}), \qquad \beta_{x} : = \varphi(\gamma^{1}_{1}) = \varphi(\gamma^{2}_{1}) $$ and note that $\alpha_{x} > \beta_{x}$. Now we want to show the existence of a subset $B \subset A_{+}$, still with $\mathfrak m(B) > 0$, such that $$ \sup_{x \in B} \beta_{x} < \inf_{x\in B} \alpha_{x}. $$ By continuity of $\alpha$ and $\beta$, a set $B$ verifying the previous inequality can be obtained considering the set $A_{+} \cap B_{r}(x)$, for $x \in A_{+}$ with $r$ sufficiently small. Since $\mathfrak m(A_{+})>0$, for $\mathfrak m$-a.e. $x \in A_{+}$ the set $A_{+}\cap B_{r}(x)$ has positive $\mathfrak m$-measure. 
So the existence of $B \subset A_{+}$ enjoying the aforementioned properties follows. {\bf Step 2.} \\ Let $I = [c,d]$ be a non-trivial interval such that $$ \sup_{x \in B} \beta_{x} < c < d <\inf_{x\in B} \alpha_{x}. $$ Then by construction for all $x \in B$ the image of the composition of the geodesics $\gamma^{1}$ and $\gamma^{2}$ with $\varphi$ contains the interval $I$: $$ I \subset \{ \varphi(\gamma^{i}_{s}) : s \in [0,1] \}, \qquad i = 1,2. $$ Then fix any point inside $I$, say $c$, and consider for any $x \in B$ the value $s(x)$ such that $\varphi(\gamma^{1}_{s(x)}) = \varphi(\gamma^{2}_{s(x)}) = c$. We can now define on $B$ two transport maps $T^{1}$ and $T^{2}$ by $$ B \ni x \mapsto T^{i}(x) : = \gamma^{i}_{s(x)}, \qquad i =1,2. $$ Accordingly we define the transport plan $$ \eta : = \frac{1}{2} \left( (Id, T^{1})_{\sharp} \mathfrak m_{B} + (Id, T^{2})_{\sharp} \mathfrak m_{B} \right), $$ where $\mathfrak m_{B} : = \mathfrak m(B)^{-1} \mathfrak m \llcorner_{B}$. {\bf Step 3.} \\ The support of $\eta$ is $\mathsf d^{2}$-cyclically monotone. To prove it we will use Lemma \ref{L:12monotone}. The measure $\eta$ is concentrated on the set \[ \Delta : = \{ (x,\gamma^{1}_{s(x)}) : x \in B \} \cup \{ (x,\gamma^{2}_{s(x)}) : x \in B \} \subset \Gamma. \] Take any two couples $(x_{0},y_{0}), (x_{1},y_{1}) \in \Delta$ and notice that by definition: \[ \varphi(y_{1}) - \varphi(y_{0}) = 0, \] and therefore trivially $\left( \varphi(y_{1}) - \varphi(y_{0}) \right) \left( \varphi(x_{1}) - \varphi(x_{0}) \right) = 0$, and Lemma \ref{L:12monotone} can be applied to $\Delta$. Hence $\eta$ is optimal with $(P_{1})_{\sharp}\eta \ll \mathfrak m$, and it is not induced by a map: indeed for every $x \in B$ we have $T^{1}(x) \neq T^{2}(x)$, since $(\gamma^{1}_{s(x)}, \gamma^{2}_{s(x)}) \notin R$. This contradicts the assumption. It follows that $\mathfrak m(A_{+})=0$. The claim for $A_{-}$ follows in the same manner.
\end{proof} \begin{remark}\label{R:nobranch} If the space is itself non-branching, then Proposition \ref{P:nobranch} can be proved more directly under the assumption \ref{A:1}, which will be introduced at the beginning of Section \ref{S:ConditionalMeasures}. Recall that $(X,\mathsf d,\mathfrak m)$ is non-branching if for any $\gamma^{1},\gamma^{2} \in {\rm Geo}$ the conditions $$ \gamma^{1}_{0} = \gamma^{2}_{0}, \qquad \gamma^{1}_{t} = \gamma^{2}_{t}, $$ for some $t \in (0,1)$, imply that $\gamma^{1}_{1} = \gamma^{2}_{1}$. In particular the following statement holds: \noindent \emph{ Let $(X,\mathsf d,\mathfrak m)$ be non-branching and assume moreover \ref{A:1} to hold. Then $$ \mathfrak m(A_{+}) = \mathfrak m(A_{-}) = 0. $$ } For the proof of this statement (that goes beyond the scope of this note) we refer to \cite{biacava:streconv}, Lemma 5.3. The same comment will also apply to the next Theorem \ref{T:RCD}. \end{remark} To summarize what has been proved so far, we also introduce the following notation: the set \begin{equation}\label{E:transportset} \mathcal{T} : = \mathcal{T}_{e} \setminus (A_{+} \cup A_{-}) \end{equation} will be called the \emph{transport set}. Since $\mathcal{T}_{e}, A_{+}$ and $A_{-}$ are $\sigma$-compact sets, notice that $\mathcal{T}$ is a countable intersection of $\sigma$-compact sets and in particular Borel. \begin{theorem}[Theorem 5.5, \cite{cava:MongeRCD}]\label{T:RCD} Let $(X,\mathsf d,\mathfrak m)$ be such that for any $\mu_{0},\mu_{1} \in \mathcal{P}(X)$ with $\mu_{0} \ll \mathfrak m$ any optimal transference plan for $W_{2}$ is concentrated on the graph of a function. Then the set of transport rays $R\subset X \times X$ is an equivalence relation on the transport set $\mathcal{T}$ and $$ \mathfrak m(\mathcal{T}_{e} \setminus \mathcal{T} ) = 0.
$$ \end{theorem} To recap, we have shown that given a $\mathsf d$-cyclically monotone set $\Gamma$, the set of all those points moved by $\Gamma$, denoted by $\mathcal{T}_{e}$, can be written, neglecting a set of $\mathfrak m$-measure zero, as the union of a family of disjoint geodesics. The next step is to decompose the reference measure $\mathfrak m$ restricted to $\mathcal{T}$ with respect to the partition given by $R$, where each equivalence class is given by $$ [x] = \{ y \in \mathcal{T}: (x,y) \in R \}. $$ Denoting the set of equivalence classes by $Q$, we can apply the Disintegration Theorem (see Theorem \ref{T:disintegrationgeneral}) to the measure space $(\mathcal{T}, \mathcal{B}(\mathcal{T}), \mathfrak m)$ and obtain the disintegration of $\mathfrak m$ consistent with the partition of $\mathcal{T}$ into rays: $$ \mathfrak m\llcorner_{\mathcal{T}} = \int_{Q} \mathfrak m_{q} \, \mathfrak q(dq), $$ where $\mathfrak q$ is the quotient measure. \subsection{Structure of the quotient set}\label{Ss:structure} In order to use the strength of the Disintegration Theorem to localize the measure, one needs to obtain a \emph{strongly consistent} disintegration. Following the last part of Theorem \ref{T:disintegrationgeneral}, it is necessary to build a section $S$ of $\mathcal{T}$ together with a measurable quotient map with image $S$. \begin{proposition}[$Q$ is locally contained in level sets of $\varphi$]\label{P:Qlevelset} It is possible to construct a Borel quotient map $\mathfrak Q: \mathcal{T} \to Q$ such that the quotient set $Q \subset X$ can be written locally as a level set of $\varphi$ in the following sense: $$ Q = \bigcup_{i\in \mathbb{N}} Q_{i}, \qquad Q_{i} \subset \varphi^{-1}(\alpha_{i}), $$ where $\alpha_i \in \mathbb{Q}$, $Q_{i}$ is analytic and $Q_{i} \cap Q_{j} = \emptyset$, for $i\neq j$. \end{proposition} \begin{proof} {\bf Step 1.}\\ For each $n \in \mathbb{N}$, consider the set $\mathcal{T}_{n}$ of those points $x$ whose ray $R(x)$ is longer than $1/n$, i.e.
$$ \mathcal{T}_{n} : = P_{1} \{ (x,z,w) \in \mathcal{T}_{e} \times \mathcal{T}_{e} \times \mathcal{T}_{e} \colon z,w \in R(x), \, \mathsf d(z,w) \geq 1/n \} \cap \mathcal{T}. $$ It is easily seen that $\mathcal{T}=\bigcup_{n \in \mathbb{N}} \mathcal{T}_n$ and that $\mathcal{T}_{n}$ is Borel: the set $\{ (x,z,w) \in \mathcal{T}_{e}^{3} \colon z,w \in R(x), \, \mathsf d(z,w) \geq 1/n \}$ is $\sigma$-compact and therefore its projection is again $\sigma$-compact. Moreover if $x \in \mathcal{T}_{n}, y \in \mathcal{T}$ and $(x,y) \in R$ then also $y \in \mathcal{T}_{n}$: for $x \in \mathcal{T}_{n}$ there exist $z,w \in \mathcal{T}_{e}$ with $z,w\in R(x)$ and $\mathsf d(z,w)\geq 1/n$. Since $x\in \mathcal{T}$ necessarily $z,w \in \mathcal{T}$. Since $R$ is an equivalence relation on $\mathcal{T}$ and $y \in \mathcal{T}$, it follows that $z,w \in R(y)$. Hence $y \in \mathcal{T}_{n}$. In particular, $\mathcal{T}_{n}$ is the union of all those maximal rays of $\mathcal{T}$ with length at least $1/n$. Hence, replacing each $\mathcal{T}_{n}$ with $\mathcal{T}_{n} \setminus \bigcup_{m < n} \mathcal{T}_{m}$ and keeping the same notation, we have $\mathcal{T} = \cup_{n\in \mathbb{N}} \mathcal{T}_{n}$ with $\mathcal{T}_{n}$ Borel, saturated with respect to $R$, each ray of $\mathcal{T}_{n}$ longer than $1/n$ and $\mathcal{T}_{n} \cap \mathcal{T}_{n'} = \emptyset$ as soon as $n \neq n'$. Now we consider the following saturated subsets of $\mathcal{T}_{n}$: for $\alpha \in \mathbb{Q}$ \begin{equation}\label{eq:defTnalpha} \mathcal{T}_{n,\alpha}:= P_{1} \Big( R \cap \Big \{ (x,y) \in \mathcal{T}_{n} \times \mathcal{T}_{n} \colon \varphi(y) = \alpha - \frac{1}{3n}\Big \} \Big) \cap P_{1} \Big( R \cap \Big \{ (x,y) \in \mathcal{T}_{n} \times \mathcal{T}_{n} \colon \varphi(y) = \alpha+ \frac{1}{3n} \Big\} \Big), \end{equation} and we claim that \begin{equation} \label{eq:Tnalpha} \mathcal{T}_{n} = \bigcup_{\alpha \in \mathbb{Q}} \mathcal{T}_{n,\alpha} . \end{equation} We show the above identity by double inclusion. First note that $(\supset)$ holds trivially.
For the converse inclusion $(\subset)$ observe that for each $\alpha \in \mathbb{Q}$, the set $ \mathcal{T}_{n,\alpha}$ coincides with the family of those rays $R(x) \cap \mathcal{T}_{n}$ for which there exist $y^{+},y^{-} \in R(x)$ with \begin{equation}\label{eq:ypm} \varphi(y^{+}) = \alpha - \frac{1}{3n}, \qquad \varphi(y^{-}) = \alpha + \frac{1}{3n}. \end{equation} Then we need to show that any $x \in \mathcal{T}_{n}$ also verifies $x \in \mathcal{T}_{n,\alpha}$ for a suitable $\alpha \in \mathbb{Q}$. So fix $x \in \mathcal{T}_{n}$; since $R(x)$ is longer than $1/n$, there exist $z,y^{+},y^{-} \in R(x) \cap \mathcal{T}_{n}$ such that $$ \varphi(y^{-}) -\varphi(z) = \frac{1}{2n}, \qquad \varphi(z) -\varphi(y^{+})= \frac{1}{2n}. $$ Consider now the geodesic $\gamma \in G$ such that $\gamma_{0} = y^{-}$ and $\gamma_{1} = y^{+}$. By continuity of $[0,1] \ni t \mapsto \varphi(\gamma_{t})$ it follows the existence of $0 < s_{1}< s_{2} < s_{3} <1$ such that $$ \varphi(\gamma_{s_{3}}) = \varphi(\gamma_{s_{2}})- \frac{1}{3n}, \qquad \varphi(\gamma_{s_{1}}) = \varphi(\gamma_{s_{2}}) + \frac{1}{3n}, \qquad \varphi(\gamma_{s_{2}}) \in \mathbb{Q}. $$ Hence, setting $\alpha : = \varphi(\gamma_{s_{2}})$, the points $\gamma_{s_{3}}$ and $\gamma_{s_{1}}$ verify \eqref{eq:ypm} and therefore $x \in \mathcal{T}_{n,\alpha}$. This concludes the proof of the identity \eqref{eq:Tnalpha}. {\bf Step 2.}\\ By the above construction, one can check that for each $\alpha \in \mathbb{Q}$, the level set $\varphi^{-1}(\alpha)$ is a quotient set for $\mathcal{T}_{n,\alpha}$, i.e. $\mathcal{T}_{n,\alpha}$ is formed by disjoint geodesics each one intersecting $\varphi^{-1}(\alpha)$ in exactly one point. Equivalently, $\varphi^{-1}(\alpha)$ is a section for the partition of $\mathcal{T}_{n,\alpha}$ induced by $R$. Moreover $\mathcal{T}_{n,\alpha}$ is obtained as the projection of a Borel set and it is therefore analytic. \\ Since $\mathcal{T}_{n,\alpha}$ is saturated with respect to $R$, either $\mathcal{T}_{n,\alpha} \cap \mathcal{T}_{n,\alpha'} = \emptyset$ or $\mathcal{T}_{n,\alpha} = \mathcal{T}_{n,\alpha'}$.
Hence, removing the unnecessary $\alpha$'s, we can assume that $\mathcal{T} = \bigcup_{n \in \mathbb{N}, \alpha\in \mathbb{Q}} \mathcal{T}_{n,\alpha}$ is a partition. Then we define $\mathfrak Q : \mathcal{T} \to \mathcal{T}$ by characterizing its graph as follows: $$ \textrm{graph}(\mathfrak Q) := \bigcup_{n \in \mathbb{N}, \alpha\in \mathbb{Q}} \mathcal{T}_{n,\alpha} \times \left( \varphi^{-1}(\alpha) \cap\mathcal{T}_{n,\alpha} \right). $$ Notice that $\textrm{graph}(\mathfrak Q)$ is analytic and therefore $\mathfrak Q: \mathcal{T} \to Q$ is Borel (see Theorem 4.5.2 of \cite{Sri:courseborel}). The claim follows. \end{proof} \begin{corollary} The following strongly consistent disintegration formula holds true: \begin{equation}\label{E:disint} \mathfrak m \llcorner_{\mathcal{T}} = \int_{Q} \mathfrak m_{q} \, \mathfrak q(dq), \qquad \mathfrak m_{q}(\mathfrak Q^{-1}(q)) = 1, \ \mathfrak q\text{-a.e.}\ q \in Q. \end{equation} \end{corollary} \begin{proof} From Proposition \ref{P:Qlevelset} there exists an analytic quotient set $Q$ with Borel quotient map $\mathfrak Q : \mathcal{T} \to Q$. In particular $Q$ is a section and the push-forward $\sigma$-algebra of $\mathcal{B}(\mathcal{T})$ on $Q$ contains $\mathcal{B}(Q)$. Then \eqref{E:disint} follows from Theorem \ref{T:disintr}. \end{proof} \begin{remark}\label{R:regulardisint} One can improve the regularity of the disintegration formula \eqref{E:disint} as follows. From inner regularity of Borel measures there exists $S \subset Q$ $\sigma$-compact, such that $\mathfrak q(Q \setminus S) = 0$. The subset $R^{-1}(S) \subset \mathcal{T}$ is again $\sigma$-compact, indeed \begin{align*} R^{-1}(S) = &~ \{ x\in \mathcal{T} \colon (x,q) \in R, \, q \in S \} = P_{1} (\{ (x,q) \in \mathcal{T} \times S \colon (x,q) \in R \} ) \crcr = &~ P_{1} ((\mathcal{T} \times S) \cap R) = P_{1} ((\mathcal{T}_{e} \times S) \cap R), \end{align*} and the regularity follows.
Notice that $R^{-1}(S)$ is formed by non-branching rays and $\mathfrak m(\mathcal{T} \setminus R^{-1}(S)) = \mathfrak q(Q \setminus S) = 0$. Hence we have proved that the transport set with end points $\mathcal{T}_{e}$ admits a $\sigma$-compact subset of full measure, saturated with respect to $R$ and partitioned into disjoint rays, with a $\sigma$-compact quotient set. Since in what follows we will not use the definition \eqref{E:transportset}, we will denote this set by $\mathcal{T}$ and its quotient set by $Q$. \end{remark} For ease of notation $X_{q} : = \mathfrak Q^{-1}(q)$. The next goal will be to deduce regularity properties for the conditional measures $\mathfrak m_{q}$. The following map will be of some help throughout the note. \begin{definition}[Ray map; Definition 4.5 of \cite{biacava:streconv}] \label{D:mongemap} Define the \emph{ray map} $$ g: \textrm{Dom}(g) \subset Q \times \mathbb{R} \to \mathcal{T} $$ via the formula: \begin{align*} \textrm{graph} (g) : = &~ \Big\{ (q,t,x) \in Q \times [0,+\infty) \times \mathcal{T}: (q,x) \in \Gamma, \, \mathsf d(q,x) = t \Big\} \crcr &~ \cup \Big\{ (q,t,x) \in Q \times (-\infty,0] \times \mathcal{T} : (x,q) \in \Gamma, \, \mathsf d(x,q) = -t \Big\} \crcr = &~ \textrm{graph}(g^+) \cup \textrm{graph}(g^-). \end{align*} \end{definition} Hence the ray map associates to each $q \in Q$ and $t\in \textrm{Dom\,}(g(q, \cdot))\subset \mathbb{R}$ the unique element $x \in \mathcal{T}$ such that $(q,x) \in \Gamma$ at distance $t$ from $q$ if $t$ is positive or the unique element $x \in \mathcal{T}$ such that $(x,q) \in \Gamma$ at distance $-t$ from $q$ if $t$ is negative. By definition $\textrm{Dom}(g) : = g^{-1}(\mathcal{T})$. Notice that from Remark \ref{R:regulardisint} it is not restrictive to assume $\textrm{graph} (g)$ to be $\sigma$-compact. In particular the map $g$ is Borel. Next we list a few (trivial) regularity properties enjoyed by $g$. \begin{proposition} \label{P:gammaclass} The following holds. \begin{itemize} \item[-] $g$ is a Borel map.
\item[-] $t \mapsto g(q,t)$ is an isometry and if $s,t \in \textrm{Dom\,}(g(q,\cdot))$ with $s \leq t$ then $( g(q,s), g(q,t) ) \in \Gamma$; \item[-] $\textrm{Dom}(g) \ni (q,t) \mapsto g(q,t)$ is bijective on $\mathfrak Q^{-1}(Q) = \mathcal{T}$, and its inverse is $$ x \mapsto g^{-1}(x) = \big( \mathfrak Q(x),\pm \mathsf d(x,\mathfrak Q(x)) \big) $$ where $\mathfrak Q$ is the quotient map previously introduced and the sign is positive if $(\mathfrak Q(x),x) \in \Gamma$ and negative if $(x,\mathfrak Q(x)) \in \Gamma$. \end{itemize} \end{proposition} Observe that from Lemma \ref{L:cicli}, $\textrm{Dom\,} (g(q,\cdot))$ is a convex subset of $\mathbb{R}$ (i.e. an interval), for any $q \in Q$. Using the ray map $g$, we will review in Section \ref{S:ConditionalMeasures} how to prove that $\mathfrak q$-a.e. conditional measure $\mathfrak m_{q}$ is absolutely continuous with respect to the $1$-dimensional Hausdorff measure on $X_{q}$, provided $(X,\mathsf d,\mathfrak m)$ enjoys weak curvature properties. The other main use of the ray map $g$ was presented in Section 7 of \cite{biacava:streconv} where it was used to build a 1-dimensional metric current in the sense of Ambrosio-Kirchheim (see \cite{AK}) associated to $\mathcal{T}$. It is also worth noticing that so far, besides the assumption of Proposition \ref{P:nobranch}, no extra assumption on the geometry of the space was used. In particular, given two probability measures $\mu_{0}$ and $\mu_{1}$ with finite first moment, the associated transport set allows one to decompose the reference measure $\mathfrak m$ into one-dimensional conditional measures $\mathfrak m_{q}$, i.e. formula \eqref{E:disint} holds. \subsection{Balanced transportation}\label{Ss:balanced} Here we want to underline that the disintegration (or one-dimensional localization) of $\mathfrak m$ induced by the $L^{1}$-Optimal Transportation problem between $\mu_{0}$ and $\mu_{1}$ is actually a localization of the Monge problem.
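The following model situation, which is ours and only meant as an illustration, shows what one should expect. In $(\mathbb{R}^{2}, |\cdot|, \mathcal{L}^{2})$ take $\mu_{0}$ uniform on $[0,1]^{2}$ and $\mu_{1}$ uniform on $[1,2]\times[0,1]$. Then $\varphi(x) = -x_{1}$ is a Kantorovich potential, the transport rays are the horizontal lines and, restricting for simplicity to the strip $[0,2]\times[0,1]$, the quotient set can be identified with the segment $\{0\}\times[0,1]$, with $\mathfrak m_{q}$ proportional to $\mathcal{H}^{1}$ restricted to the horizontal line through $q$. The Monge problem then decomposes into the family of one-dimensional problems between the slices of $\mu_{0}$ and $\mu_{1}$, all of them solved by the translation $T(x_{1},x_{2}) = (x_{1}+1, x_{2})$.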
We will present this fact considering a function $f : X \to \mathbb{R}$ such that $$ \int_{X} f(x)\,\mathfrak m(dx) = 0, \qquad \int_{X}|f(x)|\mathsf d(x,x_{0}) \, \mathfrak m(dx) < \infty, $$ and considering $\mu_{0} : = f_{+}\,\mathfrak m$ and $\mu_{1} : = f_{-}\,\mathfrak m$, where $f_{\pm}$ denote the positive and the negative parts of $f$. Possibly after a renormalization, we can also assume $\mu_{0}, \mu_{1} \in \mathcal{P}(X)$ and study the Monge minimization problem between $\mu_{0}$ and $\mu_{1}$. This setting is equivalent to studying the general Monge problem assuming both $\mu_{0},\mu_{1} \ll \mathfrak m$; note indeed that $\mu_{0}$ and $\mu_{1}$ can always be assumed to be concentrated on disjoint sets (see \cite{biacava:streconv} for details). If $\varphi$ is an associated Kantorovich potential producing as before the transport set $\mathcal{T}$, we have a disintegration of $\mathfrak m$ as follows: $$ \mathfrak m\llcorner_{\mathcal{T}} = \int_{Q} \mathfrak m_{q} \, \mathfrak q(dq), \qquad \mathfrak m_{q}(X_{q}) =1,\ \mathfrak q\textrm{-a.e.} \, q \in Q. $$ Then the natural localization of the Monge problem would be to consider for every $q \in Q$ the Monge minimization problem between $$ \mu_{0\, q} : = f_{+} \,\mathfrak m_{q},\quad \mu_{1\, q} : = f_{-} \,\mathfrak m_{q}, $$ in the metric space $(X_{q},\mathsf d)$ (that is isometric via the ray map $g$ to an interval of $\mathbb{R}$ with the Euclidean distance). To check that this family of problems makes sense we need to prove the following \begin{lemma}\label{L:balancing} It holds that for $\mathfrak q$-a.e. $q \in Q$ one has $\int_{X} f(x) \, \mathfrak m_{q}(dx) = 0$.
\end{lemma} \begin{proof} Since for both $\mu_{0}$ and $\mu_{1}$ the set $\mathcal{T}_{e} \setminus \mathcal{T}$ is negligible ($\mu_{0},\mu_{1} \ll \mathfrak m$), for any Borel set $C \subset Q$ \begin{eqnarray} \mu_{0}(\mathfrak Q^{-1}(C)) &= & \pi \Big( (\mathfrak Q^{-1}(C) \times X) \cap \Gamma \setminus \{ x = y\} \Big) \nonumber \\ &= & \pi \Big( ( X \times \mathfrak Q^{-1}(C)) \cap \Gamma \setminus \{ x = y\} \Big) \nonumber \\ &= & \mu_{1}(\mathfrak Q^{-1}(C)), \label{E:mu0=mu1} \end{eqnarray} where the second equality follows from the fact that $\mathcal{T}$ does not branch: indeed since $\mu_{0}(\mathcal{T}) = \mu_{1}(\mathcal{T}) = 1$, then $\pi \big( (\Gamma \setminus \{ x= y\}) \cap \mathcal{T} \times \mathcal{T} \big) =1$ and therefore if $x,y \in \mathcal{T}$ and $(x,y)\in \Gamma$, then necessarily $\mathfrak Q(x) = \mathfrak Q(y)$, that is they belong to the same ray. It follows that $$ (\mathfrak Q^{-1}(C) \times X) \cap (\Gamma \setminus \{ x = y\}) \cap (\mathcal{T} \times \mathcal{T}) = ( X\times \mathfrak Q^{-1}(C) ) \cap (\Gamma \setminus \{ x = y\}) \cap (\mathcal{T} \times \mathcal{T}), $$ and \eqref{E:mu0=mu1} follows. Since $f$ has null mean value, it holds $\int_X f_{+}(x) \mathfrak m(dx)= \int_X f_{-}(x) \mathfrak m(dx)$, which combined with \eqref{E:mu0=mu1} implies that for each Borel $C \subset Q$ \begin{align*} \int_{C} \int_{X_{q}} f(x) \mathfrak m_{q}(dx) \mathfrak q(dq) = &~ \int_{C} \int_{X_{q}} f_{+}(x) \mathfrak m_{q}(dx) \mathfrak q(dq) - \int_{C} \int_{X_{q}} f_{-}(x) \mathfrak m_{q}(dx) \mathfrak q(dq) \crcr = &~ \left( \int_{X} f_{+}(x) \mathfrak m(dx) \right)^{-1} \left( \mu_{0}(\mathfrak Q^{-1}(C)) - \mu_{1}(\mathfrak Q^{-1}(C)) \right) \crcr = &~ 0. \end{align*} Therefore for $\mathfrak q$-a.e. $q \in Q$ the integral $\int f \, \mathfrak m_{q}$ vanishes and the claim follows.
\end{proof} It can be proven, in greater generality and without assuming $\mu_{1}\ll \mathfrak m$, that the Monge problem is localized once a strongly consistent disintegration of $\mathfrak m$ restricted to the transport set is obtained. See \cite{biacava:streconv} for details. \section{Regularity of conditional measures} \label{S:ConditionalMeasures} We now review regularity and curvature properties of $\mathfrak m_{q}$. The content of this section is a collection of results spread across \cite{biacava:streconv, cava:decomposition, cava:MongeRCD} and \cite{CM1}; we try here to give a unified presentation. We will inspect three increasing levels of regularity: for $\mathfrak q$-a.e. $q \in Q$ \begin{enumerate}[label=(\textbf{R.\arabic*})] \item \label{R:1} $\mathfrak m_{q}$ has no atomic part, i.e. $\mathfrak m_{q}(\{x\}) = 0$, for any $x \in X_{q}$; \item \label{R:2} $\mathfrak m_{q}$ is absolutely continuous with respect to $\mathcal{H}^{1}\llcorner_{X_{q}} = g(q,\cdot)_{\sharp} \mathcal{L}^{1}$; \item \label{R:3} $\mathfrak m_{q} = g(q,\cdot)_{\sharp} (h_{q}\,\mathcal{L}^{1})$ verifies $\mathsf{CD}(K,N)$, i.e. the m.m.s. $(\mathbb{R}, |\cdot|, h_{q} \, \mathcal{L}^{1})$ verifies $\mathsf{CD}(K,N)$. \end{enumerate} We will review how to obtain \ref{R:1}, \ref{R:2}, \ref{R:3} starting from the following three \emph{increasing} regularity assumptions on the space: \begin{enumerate}[label=(\textbf{A.\arabic*})] \item \label{A:1} if $C \subset \mathcal{T}$ is compact with $\mathfrak m(C)> 0$, then $\mathfrak m(C_{t}) > 0$ for uncountably many $t \in \mathbb{R}$; \item \label{A:2} if $C \subset \mathcal{T}$ is compact with $\mathfrak m(C)> 0$, then $\mathfrak m(C_{t}) > 0$ for a set of $t \in \mathbb{R}$ with $\mathcal{L}^{1}$-positive measure; \item \label{A:3} the m.m.s. $(X,\mathsf d,\mathfrak m)$ verifies $\mathsf{CD}(K,N)$.
\end{enumerate} Given a compact set $C \subset X$, we denote by $C_{t}$ its translation along the transport set at signed distance $t$; see Definition \ref{D:evolution} below. We will see that: \ref{A:1} implies \ref{R:1}, \ref{A:2} implies \ref{R:2} and \ref{A:3} implies \ref{R:3}. Actually we will also show that a variant of \ref{A:3} (assuming $\mathsf{MCP}$ instead of $\mathsf{CD}$) implies a variant of \ref{R:3} ($\mathsf{MCP}$ instead of $\mathsf{CD}$). Even if we do not state it each single time, assumptions \ref{A:1} and \ref{A:2} are not hypotheses on the smoothness of the space but on the regularity of the set $\Gamma$ and therefore on the Monge problem itself; they should both be read as: \emph{for $\mu_{0}$ and $\mu_{1}$ probability measures over $X$, assume the existence of a 1-Lipschitz Kantorovich potential $\varphi$ such that the associated transport set $\mathcal{T}$ verifies} \ref{A:1} \emph{(or }\ref{A:2}\emph{)}. \subsection{Atomless conditional probabilities} The results presented here are taken from \cite{biacava:streconv}. \begin{definition}\label{D:evolution} Let $C \subset \mathcal{T}$ be a compact set. For $t \in \mathbb{R}$ define the \emph{$t$-translation $C_{t}$ of $C$} by $$ C_t := g \big( \{ (q,s +t) \colon (q,s) \in g^{-1}(C) \} \big). $$ \end{definition} \noindent Since $C \subset \mathcal{T}$ is compact, $g^{-1}(C)\subset Q \times \mathbb{R}$ is $\sigma$-compact ($\textrm{graph} (g)$ is $\sigma$-compact) and the same holds true for $$ \{ (q,s +t) \colon (q,s) \in g^{-1}(C) \}. $$ Since $$ C_{t} = P_{3}\big( \textrm{graph}(g) \cap (\{ (q,s +t) \colon (q,s) \in g^{-1}(C) \} \times \mathcal{T}) \big), $$ it follows that $C_{t}$ is $\sigma$-compact (the projection of a $\sigma$-compact set is again $\sigma$-compact). \\ Moreover the set $B : = \{ (t,x) \in \mathbb{R} \times \mathcal{T} \colon x \in C_{t} \}$ is Borel and therefore by Fubini's Theorem the map $t \mapsto \mathfrak m(C_t)$ is Borel. It follows that \ref{A:1} makes sense.
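To fix the ideas, the following elementary Euclidean example may help (a toy illustration added here, not taken from the cited references): in $X = \mathbb{R}^{2}$ with the Euclidean distance, $\mathfrak m = \mathcal{L}^{2}$ and $\varphi(x) = x_{1}$, the transport rays are the horizontal lines and, up to the identification $Q \simeq \mathbb{R}$, the ray map is $g(q,s) = (s,q)$. Hence $C_{t} = C + t e_{1}$ and, by translation invariance of the Lebesgue measure, $$ \mathfrak m(C_{t}) = \mathfrak m(C), \qquad \forall \, t \in \mathbb{R}, $$ so that any compact $C \subset \mathcal{T}$ with $\mathfrak m(C) > 0$ verifies $\mathfrak m(C_{t}) > 0$ for \emph{all} $t \in \mathbb{R}$; in particular both \ref{A:1} and \ref{A:2} hold in this model case.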
\begin{proposition}[Proposition 5.4, \cite{biacava:streconv}]\label{P:nonatoms} Assume \emph{\ref{A:1}} to hold and the space to be non-branching. Then \emph{\ref{R:1}} holds true, that is for $\mathfrak q$-a.e. $q \in Q$ the conditional measure $\mathfrak m_{q}$ has no atoms. \end{proposition} \begin{proof} The partition into transport rays and the associated disintegration are well defined, see Remark \ref{R:nobranch}. From the regularity of the disintegration and the fact that $\mathfrak q(Q) = 1$, we can assume that the map $q \mapsto \mathfrak m_q$ is weakly continuous on a compact set $K \subset Q$ with $\mathfrak q(Q \setminus K) < \varepsilon$ such that the length of the ray $X_{q}$, denoted by $L(X_{q})$, is strictly larger than $\varepsilon$ for all $q \in K$. It is enough to prove the proposition on $K$. {\bf Step 1.}\\ From the continuity of $K \ni q \mapsto \mathfrak m_q \in \mathcal{P}(X)$ w.r.t. the weak topology, it follows that the map $$ q \mapsto C(q) := \big\{ x \in X_{q}: \mathfrak m_q(\{x\}) > 0 \big\} = \cup_n \big\{ x \in X_{q}: \mathfrak m_q(\{x\}) \geq 2^{-n} \big\} $$ is $\sigma$-closed, i.e. its graph is a countable union of closed sets: in fact, if $(q_m,x_m) \to (q,x)$ and $\mathfrak m_{q_m}(\{x_m\}) \geq 2^{-n}$, then $\mathfrak m_q(\{x\}) \geq 2^{-n}$ by upper semi-continuity on compact sets. Hence it is Borel, and by the Lusin Theorem (Theorem 5.8.11 of \cite{Sri:courseborel}) it is the countable union of Borel graphs: setting $c_i(q) = 0$ in case the corresponding atom is not defined, we can consider them as Borel functions on $K$ and order them w.r.t. $\Gamma$ in the following sense: $$ \mathfrak m_{q,\textrm{atomic}} = \sum_{i \in \mathbb{Z}} c_i(q) \delta_{x_i(q)}, \quad (x_i(q), x_{i+1}(q)) \in \Gamma, \ i \in \mathbb{Z}, $$ with $K \ni q \mapsto x_{i}(q)$ Borel.
{\bf Step 2.} \\ Define the sets $$ S_{ij}(t) := \Big\{ q \in K: x_i(q) = g \big( g^{-1}(x_j(q)) + t \big) \Big\}. $$ Since $K \subset Q$, to define $S_{ij}(t)$ we are using the set $\textrm{graph} (g) \cap (Q \times \mathbb{R} \times \mathcal{T})$, which is $\sigma$-compact; hence $\textrm{graph}(S_{ij})$ is analytic. For $A_j := \{x_j(q), q \in K\}$ and $t \in \mathbb{R}^+$ we have that \begin{align*} \mathfrak m((A_j)_t) =&~ \int_K \mathfrak m_q((A_j)_t)\, \mathfrak q(dq) = \int_K \mathfrak m_{q,\textrm{atomic}}((A_j)_t) \, \mathfrak q(dq) \crcr =&~ \sum_{i \in \mathbb{Z}} \int_K c_i(q) \delta_{x_i(q)} \big( g(g^{-1}(x_j(q)) + t) \big)\, \mathfrak q(dq) = \sum_{i \in \mathbb{Z}} \int_{S_{ij}(t)} c_i(q) \, \mathfrak q(dq), \end{align*} and we have used that $A_j \cap X_{q}$ is a singleton. Then for fixed $i,j \in \mathbb{Z}$, again from the fact that $A_j \cap X_{q}$ is a singleton $$ S_{ij}(t) \cap S_{ij}(t') = \begin{cases} S_{ij}(t) & t = t', \crcr \emptyset & t \not= t', \end{cases} $$ and therefore the set $\big\{ t : \mathfrak q(S_{ij}(t)) > 0 \big\}$ is at most countable. On the other hand, $$ \mathfrak m((A_j)_t) > 0 \quad \Longrightarrow \quad t \in \bigcup_i \big\{ t : \mathfrak q(S_{ij}(t)) > 0 \big\}, $$ so the set of $t$ with $\mathfrak m((A_j)_t) > 0$ is at most countable; since $\mathfrak m(A_{j}) > 0$ for some $j$ whenever the atomic parts do not vanish $\mathfrak q$-a.e., this contradicts \ref{A:1}. \end{proof} \subsection{Absolute continuity} \label{Ss:regolarita'} The results presented here are taken from \cite{biacava:streconv}. The condition \ref{A:2} can also be stated in the following way: for every compact set $C \subset \mathcal{T}$ $$ \mathfrak m(C) > 0 \quad \Longrightarrow \quad \int_{\mathbb{R}} \mathfrak m(C_t) dt > 0. $$ \begin{lemma} \label{Lem:dec} Let $\mathfrak m$ be a Radon measure and $$ \mathfrak m_q = r_{q}\, g(q,\cdot)_\sharp \mathcal{L}^1 + \omega_q, \quad \omega_q \perp g(q,\cdot)_\sharp \mathcal{L}^1 $$ be the Radon-Nikodym decomposition of $\mathfrak m_q$ w.r.t. $g(q,\cdot)_\sharp \mathcal{L}^1$.
Then there exists a Borel set $C\subset X$ such that $$ \mathcal{L}^{1} \Big(P_{2}\big( g^{-1} (C) \cap (\{q\} \times \mathbb{R}) \big) \Big)= 0, $$ and $\omega_q = \mathfrak m_q \llcorner_C$ for $\mathfrak q$-a.e. $q \in Q$. \end{lemma} \begin{proof} Consider the measure $\lambda = g_\sharp (\mathfrak q \otimes \mathcal L^1)$, and compute the Radon-Nikodym decomposition \[ \mathfrak m = \frac{D \mathfrak m}{D \lambda} \lambda + \omega. \] Then there exists a Borel set $C$ such that $\omega = \mathfrak m \llcorner_C$ and $\lambda(C)=0$. The set $C$ proves the Lemma. Indeed $C = \cup_{q \in Q} C_{q}$ where $C_{q} = C \cap R(q)$ is such that $\mathfrak m_q \llcorner_{C_{q}} = \omega_{q} $ and $g(q,\cdot)_{\sharp}\mathcal{L}^{1}(C_{q})=0$ for $\mathfrak q$-a.e. $q \in Q$. \end{proof} \begin{theorem}[Theorem 5.7, \cite{biacava:streconv}]\label{teo:a.c.} Assume \emph{\ref{A:2}} to hold and the space to be non-branching. Then \emph{\ref{R:2}} holds true, that is for $\mathfrak q$-a.e. $q \in Q$ the conditional measure $\mathfrak m_{q}$ is absolutely continuous with respect to $g(q,\cdot)_{\sharp}\mathcal{L}^{1}$. \end{theorem} The proof is based on the following simple observation. \noindent Let $\eta$ be a Radon measure on $\mathbb{R}$. Suppose that for all $A \subset \mathbb{R}$ Borel with $\eta(A)>0$ it holds \[ \int_{\mathbb{R}^+} \eta(A+t) dt = \eta \otimes\mathcal{L}^1 \big( \{ (x,t): t \geq 0, x - t \in A \} \big) > 0. \] Then $\eta \ll \mathcal{L}^1$. \begin{proof} The proof will use Lemma \ref{Lem:dec}: take $C$ to be the set constructed in Lemma \ref{Lem:dec} and suppose by contradiction that $\mathfrak m(C) > 0$; note that by construction $$ \mathfrak q \otimes \mathcal{L}^1 (g^{-1}(C)) = 0. $$ In particular, for all $t \in \mathbb{R}$ it follows that $$ \mathfrak q \otimes \mathcal{L}^1 (g^{-1}(C_t)) = 0.
$$ By the Fubini-Tonelli Theorem \begin{align*} 0< &~ \int_{\mathbb{R}^+} \mathfrak m(C_t) \,dt = \int_{\mathbb{R}^+} \bigg( \int_{g^{-1}(C_t)} (g^{-1})_\sharp \mathfrak m(dq\,d\tau) \bigg) dt \crcr = &~ \big( (g^{-1})_\sharp \mathfrak m \otimes \mathcal{L}^1 \big) \Big( \Big\{ (q,\tau,t): (q,\tau) \in g^{-1}(\mathcal{T}), (q,\tau-t) \in g^{-1}(C) \Big\} \Big) \crcr \leq &~ \int_{Q \times \mathbb{R}} \mathcal{L}^1 \big( \big\{\tau - g^{-1}(C \cap \mathfrak Q^{-1}(q)) \big\} \big) \, (g^{-1})_{\sharp} \mathfrak m (dq\, d\tau) \crcr = &~ \int_{Q \times \mathbb{R}} \mathcal{L}^1 \big( g^{-1}(C \cap \mathfrak Q^{-1}(q)) \big) \,(g^{-1})_{\sharp} \mathfrak m (dq\,d\tau) \crcr = &~ \int_{Q} \mathcal{L}^1 \big( g^{-1}(C \cap \mathfrak Q^{-1}(q)) \big)\, \mathfrak q(dq) = 0. \end{align*} That gives a contradiction. \end{proof} The proof of Theorem \ref{teo:a.c.} inspired the definition of \emph{inversion points} and of \emph{inversion plan} as presented in \cite{CM0}, in particular see {\bf Step 2.} of the proof of Theorem 5.3 of \cite{CM0}. \subsection{Weak Ricci curvature bounds: $\mathsf{MCP}(K,N)$} The presentation of the following results is taken from \cite{cava:MongeRCD}. The same results were already proved in \cite{biacava:streconv} using more involved arguments and different notation. In this section we additionally assume the metric measure space to satisfy the measure contraction property $\mathsf{MCP}(K,N)$. Recall that the space is also assumed to be non-branching. \begin{lemma}\label{L:evo1} For each Borel $C \subset \mathcal{T}$ and $\delta \in \mathbb{R}$ the set $$ \left( C \times \{ \varphi= \delta \} \right) \cap \Gamma, $$ is $\mathsf d^{2}$-cyclically monotone. \end{lemma} \begin{proof} The proof follows straightforwardly from Lemma \ref{L:12monotone}.
The set $\left( C \times \{ \varphi = \delta \} \right) \cap \Gamma$ is trivially a subset of $\Gamma$ and whenever $$ (x_{0},y_{0}), (x_{1},y_{1}) \in \left( C \times \{ \varphi = \delta \} \right) \cap \Gamma, $$ then $(\varphi (y_{1}) - \varphi(y_{0}) ) \cdot (\varphi(x_{1}) - \varphi(x_{0}) ) = 0$. \end{proof} We can deduce the following \begin{corollary}\label{C:evo1} For each Borel $C \subset \mathcal{T}$ and $\delta \in \mathbb{R}$ define $$ C_{\delta}: =P_{1}(\left( C \times \{ \varphi = \delta \} \right) \cap \Gamma). $$ If $\mathfrak m(C_{\delta}) > 0$, there exists a unique $\nu \in \mathrm{OptGeo}$ such that \begin{equation}\label{E:12mappa} \left( e_{0} \right)_{\sharp} \nu = \mathfrak m( C_{\delta} )^{-1} \mathfrak m\llcorner_{C_{\delta}}, \qquad (e_{0},e_{1})_{\sharp}( \nu ) \Big( \left( C \times \{ \varphi = \delta \} \right) \cap \Gamma \Big) = 1. \end{equation} \end{corollary} \noindent From Corollary \ref{C:evo1}, we infer the existence of a map $T_{C,\delta}$ depending on $C$ and $\delta$ such that $$ \left( Id,T_{C,\delta} \right)_{\sharp} \left( \mathfrak m( C_{\delta} )^{-1} \mathfrak m\llcorner_{C_{\delta}} \right) = (e_{0},e_{1})_{\sharp} \nu. $$ Taking advantage of the ray map $g$, we define a convex combination between the identity map and $T_{C,\delta}$ as follows: $$ C_{\delta} \ni x \mapsto \left(T_{C,\delta}\right)_{t}(x) \in \{ z \in \Gamma(x) : \mathsf d(x,z) = t \cdot \mathsf d(x,T_{C,\delta}(x)) \}. $$ Since $C \subset \mathcal{T}$, the map $\left(T_{C,\delta}\right)_{t}$ is well defined for all $t\in [0,1]$. We then define the evolution of any subset $A$ of $C_{\delta}$ in the following way: $$ [0,1] \ni t \mapsto \left(T_{C,\delta}\right)_{t}(A). $$ In particular from now on we will adopt the following notation: $$ A_{t} : = \left(T_{C,\delta} \right)_{t}(A), \qquad \forall A \subset C_{\delta}, \ A \ \textrm{ compact}.
$$ So for any compact $C \subset \mathcal{T}$ and $\delta \in \mathbb{R}$ we have defined an evolution for compact subsets of $C_{\delta}$. The definition of the evolution depends both on $C$ and $\delta$. \begin{remark}\label{R:regularity2} Here we spend a few lines on the measurability of the maps involved in the definition of the evolution of sets, assuming for simplicity $C$ to be compact. First note that since $\Gamma$ is closed and $C$ is compact, we can prove that also $C_{\delta}$ is compact. Indeed from compactness of $C$ we obtain that $\varphi$ is bounded on $C$ and then, since $C$ is bounded, it follows that also $\left( C \times \{\varphi = \delta \} \right) \cap \Gamma$ is bounded. Since $X$ is proper, compactness follows. Moreover $$ \textrm{graph} (T_{C,\delta}) = \left( C \times \{ \varphi= \delta \} \right) \cap \Gamma, $$ hence $T_{C,\delta}$ is continuous. Moreover $$ \left(T_{C,\delta}\right)_{t}(A) = P_{2} \left( \{(x,z) \in \Gamma \cap (A \times X) : \mathsf d(x,z) = t\cdot \mathsf d(x,T_{C,\delta}(x)) \}\right), $$ hence if $A$ is compact, the same holds for $\left(T_{C,\delta}\right)_{t}(A)$ and $$ [0,1] \ni t \mapsto \mathfrak m(\left(T_{C,\delta}\right)_{t}(A)) $$ is $\mathfrak m$-measurable. \end{remark} The next result gives quantitative information on the behavior of the map $t \mapsto \mathfrak m(A_{t})$. The statement will be given assuming the lower bound $K$ on the generalized Ricci curvature to be positive. Analogous estimates hold for any $K \in \mathbb{R}$. \begin{proposition}\label{P:mcp} For each compact $C \subset \mathcal{T}$ and $\delta \in \mathbb{R}$ such that $\mathfrak m(C_{\delta}) >0$, it holds \begin{equation}\label{E:mcp} \mathfrak m( A_{t} ) \geq (1-t) \cdot \inf_{x\in A} \left(\frac{\sin \left( (1-t) \mathsf d(x,T_{C,\delta}(x) )\sqrt{K/(N-1)} \right) } {\sin \left( \mathsf d(x,T_{C,\delta}(x))\sqrt{K/(N-1)} \right)} \right)^{N-1} \mathfrak m(A), \end{equation} for all $t \in [0,1]$ and every compact set $A \subset C_{\delta}$.
\end{proposition} \begin{proof} The proof of \eqref{E:mcp} is obtained by the standard method of approximation with Dirac deltas of the second marginal. Even though similar arguments already appeared many times in the literature, in order to be self-contained, we include all the details. For ease of notation $T = T_{C,\delta}$ and $C = C_{\delta}$. {\bf Step 1.}\\ Consider a sequence $\{ y_{i} \}_{i \in \mathbb{N}} \subset \{\varphi = \delta\}$ dense in $T(C)$. For each $I \in \mathbb{N}$, define the family of sets $$ E_{i,I} : = \{ x \in C : \mathsf d(x,y_{i}) \leq \mathsf d(x,y_{j}) , j =1,\dots, I \}, $$ for $i =1, \dots, I$. Then for all $I \in \mathbb{N}$, by the same argument as in Lemma \ref{L:evo1}, the set $$ \Lambda_{I}: = \bigcup_{i =1}^{I} E_{i,I}\times \{ y_{i} \} \subset X \times X, $$ is $\mathsf d^{2}$-cyclically monotone. Consider then $A_{i,I} : = A \cap E_{i,I}$ and the approximate evolution $$ A_{i,I,t} : = \{ z \in X \colon \mathsf d(z,y_{i}) = (1-t)\mathsf d(x,y_{i}), \ x \in A_{i,I}\}; $$ and notice that $A_{i,I,0} = A_{i,I}$. Then by $\mathsf{MCP}(K,N)$ it holds $$ \mathfrak m( A_{i,I,t} ) \geq (1-t) \cdot \inf_{x\in A_{i,I}} \left(\frac{\sin \left( (1-t) \mathsf d(x,y_{i} )\sqrt{K/(N-1)} \right) } {\sin \left( \mathsf d(x,y_{i})\sqrt{K/(N-1)} \right)} \right)^{N-1} \mathfrak m(A_{i,I}). $$ Taking the sum over $i \leq I$ in the previous inequality implies $$ \sum_{i \leq I} \mathfrak m( A_{i,I,t} ) \geq (1-t) \cdot \inf_{x\in A} \left(\frac{\sin \left( (1-t) \mathsf d(x,T_{I}(x) )\sqrt{K/(N-1)} \right) } {\sin \left( \mathsf d(x,T_{I}(x))\sqrt{K/(N-1)} \right)} \right)^{N-1} \mathfrak m(A), $$ where $T_{I}(x) : = y_{i}$ for $x \in E_{i,I}$. From the $\mathsf d^{2}$-cyclical monotonicity and the non-branching of the space, up to a set of measure zero, the map $T_{I}$ is well defined, i.e. $\mathfrak m(E_{i,I} \cap E_{j,I}) = 0$ for $i \neq j$.
It follows that for each $I \in \mathbb{N}$ we can remove a set of measure zero from $A$ and obtain $$ A_{i,I,t} \cap A_{j,I,t} = \emptyset, \quad i\neq j. $$ As before consider also the interpolated map $T_{I,t}$ and observe that $A_{I,t} := \bigcup_{i \leq I} A_{i,I,t} = T_{I,t} (A)$. Since also $A$ is compact we obtain $$ \mathfrak m( A_{I,t} ) \geq (1-t) \cdot \min_{x\in A} \left(\frac{\sin \left( (1-t) \mathsf d(x,T_{I}(x) )\sqrt{K/(N-1)} \right) } {\sin \left( \mathsf d(x,T_{I}(x))\sqrt{K/(N-1)} \right)} \right)^{N-1} \mathfrak m(A). $$ {\bf Step 2.}\\ Since $C$ is a compact set, for every $I \in \mathbb{N}$ the set $\Lambda_{I}$ is compact as well and is a subset of $C \times \{ \varphi = \delta \}$, which can also be assumed to be compact. By compactness, there exists a subsequence $I_{n}$ and a compact set $\Theta \subset C \times \{ \varphi = \delta \}$ such that $$ \lim_{n \to \infty} \mathsf d_{\mathcal{H}}(\Lambda_{I_{n}}, \Theta) = 0, $$ where $\mathsf d_{\mathcal{H}}$ is the Hausdorff distance. Since the sequence $\{y_{i}\}_{i\in \mathbb{N}}$ is dense in $T(C)$ and $C \subset \mathcal{T}$ is compact, by definition of $E_{i,I}$, necessarily for every $(x,y) \in \Theta$ it holds $$ \varphi(x) - \varphi(y) = \mathsf d(x,y), \quad \varphi(y) = \delta. $$ Hence $\Theta \subset \Gamma \cap (C\times \{\varphi = \delta\})$ and this in particular implies, by upper semicontinuity of $\mathfrak m$ along converging sequences of closed sets, that $$ \mathfrak m(A_{t}) \geq \limsup_{n \to \infty} \mathfrak m(A_{I_{n},t})\, . $$ The claim follows. \end{proof} As the goal is to localize curvature conditions, we first need to prove that almost every conditional probability is absolutely continuous with respect to the one dimensional Hausdorff measure restricted to the correct geodesic. One way is to prove that Proposition \ref{P:mcp} implies \ref{A:2} and then apply Theorem \ref{teo:a.c.} to obtain \ref{R:2} (approach used in \cite{biacava:streconv}).
Another option is to repeat verbatim the proof of Theorem \ref{teo:a.c.} substituting the translation with the evolution considered in Proposition \ref{P:mcp} and to observe that the claim follows (approach used in \cite{cava:MongeRCD}). So we take for granted the following \begin{proposition}\label{P:MCP-1} Assume the non-branching m.m.s. $(X,\mathsf d,\mathfrak m)$ to satisfy $\mathsf{MCP}(K,N)$. Then \emph{\ref{R:2}} holds true, that is for $\mathfrak q$-a.e. $q \in Q$ the conditional measure $\mathfrak m_{q}$ is absolutely continuous with respect to $g(q,\cdot)_{\sharp}\mathcal{L}^{1}$. \end{proposition} To fix the notation, we have now proved the existence of a Borel function $h : \textrm{Dom\,}(g) \to \mathbb{R}_{+}$ such that \begin{equation}\label{E:definitionh} \mathfrak m\llcorner_{\mathcal{T}} = g_{\sharp} \left( h \, \mathfrak q \otimes \mathcal{L}^{1} \right). \end{equation} Using standard arguments, estimate \eqref{E:mcp} can be localized at the level of the density $h$: for each compact set $A \subset \mathcal{T}$ \begin{align*} \int_{P_{2}(g^{-1}(A_{t}) )} & h(q,s) \mathcal{L}^{1}(ds) \crcr \geq (1-t) & \left( \inf_{\tau \in P_{2}(g^{-1}(A))} \frac{\sin( (1-t) |\tau - \sigma| \sqrt{K/(N-1)} ) }{\sin( |\tau - \sigma| \sqrt{K/(N-1)} )} \right)^{N-1} \int_{P_{2}(g^{-1}(A))} h(q,s) \mathcal{L}^{1}(ds), \end{align*} for $\mathfrak q$-a.e. $q \in Q$ such that $g(q,\sigma) \in \mathcal{T}$. Then, using a change of variables, one obtains that for $\mathfrak q$-a.e. $q \in Q$: $$ h(q,s+|s-\sigma| t ) \geq \left( \frac{\sin( (1-t) |s - \sigma| \sqrt{K/(N-1)} ) }{\sin( |s - \sigma| \sqrt{K/(N-1)} )} \right)^{N-1} h(q,s), $$ for $\mathcal{L}^{1}$-a.e. $s \in P_{2}(g^{-1}(R(q)))$ and $\sigma \in \mathbb{R}$ such that $s + |\sigma -s| \in P_{2}(g^{-1}(R(q)))$. We can rewrite the estimate in the following way: $$ h(q, \tau ) \geq \left( \frac{\sin( ( \sigma - \tau ) \sqrt{K/(N-1)} ) }{\sin( ( \sigma - s ) \sqrt{K/(N-1)} )} \right)^{N-1} h(q,s), $$ for $\mathcal{L}^{1}$-a.e.
$s \leq \tau \leq \sigma$ such that $g(q,s), g(q,\tau), g(q,\sigma) \in \mathcal{T}$. Since the evolution can also be considered backwards, we have proved the following \begin{theorem}[Localization of $\mathsf{MCP}$, Theorem 9.5 of \cite{biacava:streconv}]\label{T:densityestimates} Assume the non-branching m.m.s. $(X,\mathsf d,\mathfrak m)$ to satisfy $\mathsf{MCP}(K,N)$. For $\mathfrak q$-a.e. $q \in Q$ it holds: $$ \left( \frac{\sin( ( \sigma_{+} - \tau ) \sqrt{K/(N-1)} ) }{\sin( ( \sigma_{+} - s ) \sqrt{K/(N-1)} )} \right)^{N-1} \leq \frac{h(q, \tau )} {h(q,s)} \leq \left( \frac{\sin( ( \tau - \sigma_{-} ) \sqrt{K/(N-1)} ) }{\sin( (s - \sigma_{-} ) \sqrt{K/(N-1)} )} \right)^{N-1}, $$ for $\sigma_{-} < s \leq \tau < \sigma _{+}$ such that their image via $g(q,\cdot)$ is contained in $R(q)$. \end{theorem} \noindent In particular from Theorem \ref{T:densityestimates} we deduce that \begin{equation}\label{E:regularityh} \{ t \in \textrm{Dom\,}(g(q,\cdot)) \colon h(q,t) > 0 \} = \textrm{Dom\,}(g(q,\cdot)), \end{equation} in particular such a set is convex and $t \mapsto h(q,t)$ is locally Lipschitz continuous. \subsection{Weak Ricci curvature bounds: $\mathsf{CD}(K,N)$} The results presented here are taken from \cite{CM1}. We now turn to proving that the conditional probabilities inherit the synthetic Ricci curvature lower bounds, that is, \ref{A:3} implies \ref{R:3}. Actually it is enough to assume the space to verify such a lower bound only locally in order to obtain the synthetic Ricci curvature lower bound globally on almost every one-dimensional metric measure space. Since under the essentially non-branching condition $\mathsf{CD}_{loc}(K,N)$ implies $\mathsf{MCP}(K,N)$ and the existence and uniqueness of optimal transport maps, see \cite{cavasturm:MCP}, we can already assume \eqref{E:definitionh} and \eqref{E:regularityh} to hold. In particular $t \mapsto h_{q}(t)$ is locally Lipschitz continuous, where, for ease of notation, $h_{q} = h(q,\cdot)$.
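Before stating the main result, the following elementary model case may help fix the ideas (a toy illustration added here, not taken from \cite{CM1}): take $X = [0,1]^{N}$ with the Euclidean distance and $\mathfrak m = \mathcal{L}^{N}$, a space verifying $\mathsf{CD}(0,N)$, and $\varphi(x) = x_{1}$. Then $X_{q} = [0,1] \times \{q\}$ with $q \in Q \simeq [0,1]^{N-1}$, and the disintegration reads $$ \mathfrak m = \int_{[0,1]^{N-1}} \mathcal{H}^{1}\llcorner_{X_{q}} \, \mathcal{L}^{N-1}(dq), $$ i.e. $h_{q} \equiv 1$. In particular $h_{q}^{1/(N-1)}$ is concave along each ray; since $\sigma^{(s)}_{0,N-1}(\theta) = s$, this is precisely the case $K = 0$ of the curvature inequality \eqref{E:curvdensmm} below.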
\begin{theorem}[Theorem 4.2 of \cite{CM1}]\label{T:CDKN-1} Let $(X,\mathsf d,\mathfrak m)$ be an essentially non-branching m.m.s. verifying the $\mathsf{CD}_{loc}(K,N)$ condition for some $K\in \mathbb{R}$ and $N\in [1,\infty)$. Then for any 1-Lipschitz function $\varphi:X\to \mathbb{R}$, the associated transport set $\Gamma$ induces a disintegration of $\mathfrak m$ restricted to the transport set verifying the following inequality: if $N> 1$ for $\mathfrak q$-a.e. $q \in Q$ the following curvature inequality holds: \begin{equation}\label{E:curvdensmm} h_{q}( (1-s) t_{0} + s t_{1} )^{1/(N-1)} \geq \sigma^{(1-s)}_{K,N-1}(t_{1} - t_{0}) h_{q} (t_{0})^{1/(N-1)} + \sigma^{(s)}_{K,N-1}(t_{1} - t_{0}) h_{q} (t_{1})^{1/(N-1)}, \end{equation} for all $s\in [0,1]$ and for all $t_{0}, t_{1} \in \textrm{Dom\,}(g(q,\cdot))$ with $t_{0} < t_{1}$. If $N =1$, for $\mathfrak q$-a.e. $q \in Q$ the density $h_{q}$ is constant. \end{theorem} \begin{proof} We first consider the case $N>1$. {\bf Step 1.} \\ Thanks to Proposition \ref{P:Qlevelset}, without any loss of generality we can assume that the quotient set $Q$ (identified with the set $\{g(q,0) : q \in Q\}$) is locally a subset of a level set of the map $\varphi$ inducing the transport set, i.e. there exists a countable partition $\{ Q_{i}\}_{i\in \mathbb{N}}$ with $Q_{i} \subset Q$ Borel set such that $$ \{ g(q,0) : q \in Q_{i} \} \subset \{ x \in X : \varphi(x) = \alpha_{i} \}. $$ It is clearly sufficient to prove \eqref{E:curvdensmm} on each $Q_{i}$; so fix $\bar i \in \mathbb{N}$ and for ease of notation assume $\alpha_{\bar i} = 0$ and $Q = Q_{\bar i}$. As $\textrm{Dom\,}(g(q,\cdot))$ is a convex subset of $\mathbb{R}$, we can also restrict to a uniform subinterval $$ (a_0,a_1) \subset \textrm{Dom\,}(g(q,\cdot)), \qquad \forall \ q \ \in Q_{i}, $$ for some $a_0,a_1 \in \mathbb{R}$. Again without any loss of generality we also assume $a_0 < 0 < a_1$. 
Consider any $a_{0} <A_{0} < A_{1} < a_{1}$ and $L_{0}, L_{1} >0$ such that $A_{0} + L_{0} < A_{1}$ and $A_{1} + L_{1} < a_{1}$. Then define the following two probability measures $$ \mu_{0} : = \int_{Q} g(q,\cdot)_\sharp \left( \frac{1}{L_{0}} \mathcal{L}^{1}\llcorner_{ [A_{0},A_{0}+L_{0}] } \right) \, \mathfrak q(dq), \qquad \mu_{1} : = \int_{Q} g(q,\cdot)_\sharp \left( \frac{1}{L_{1}} \mathcal{L}^{1}\llcorner_{ [A_{1},A_{1}+L_{1}] } \right) \, \mathfrak q(dq). $$ Since $g(q,\cdot)$ is an isometry one can also represent $\mu_{0}$ and $\mu_{1}$ in the following way: $$ \mu_{i} : = \int_{Q} \frac{1}{L_{i}} \mathcal{H}^{1}\llcorner_{ \left\{g(q,t) \colon t \in [A_{i},A_{i}+L_{i}] \right\} } \, \mathfrak q(dq) $$ for $i =0,1$. Both $\mu_{i}$ are absolutely continuous with respect to $\mathfrak m$ and $\mu_{i} = \varrho_{i} \mathfrak m$ with $$ \varrho_{i} (g(q,t)) = \frac{1}{L_{i}} h_{q}(t)^{-1}, \qquad \forall \, t \in [A_{i},A_{i}+L_{i}]. $$ Moreover from Lemma \ref{L:12monotone} it follows that the curve $[0,1] \ni s \mapsto \mu_{s} \in \mathcal{P}(X)$ defined by $$ \mu_{s} : = \int_{Q} \frac{1}{L_{s}} \mathcal{H}^{1}\llcorner_{ \left\{g(q,t) \colon t \in [A_{s},A_{s}+L_{s}] \right\} } \, \mathfrak q(dq) $$ where $$ L_{s} : = (1 - s)L_{0} + sL_{1}, \qquad A_{s} : = (1-s ) A_{0} + s A_{1} $$ is the unique $L^{2}$-Wasserstein geodesic connecting $\mu_{0}$ to $\mu_{1}$. Again one has $\mu_{s} = \varrho_{s} \mathfrak m$ and can also write its density in the following way: $$ \varrho_{s} (g(q,t)) = \frac{1}{L_{s}} h_{q}(t)^{-1}, \qquad \forall \, t \in [A_{s},A_{s}+L_{s}]. $$ {\bf Step 2.}\\ By $\mathsf{CD}_{loc}(K,N)$ and the essentially non-branching property one has: for $\mathfrak q$-a.e. 
$q \in Q_{i}$ $$ (L_{s})^{\frac{1}{N}} h_{q}( (1-s) t_{0} + s t_{1} )^{\frac{1}{N}} \geq \tau_{K,N}^{(1-s)}(t_{1}-t_{0}) (L_{0})^{\frac{1}{N}} h_{q}( t_{0} )^{\frac{1}{N}}+ \tau_{K,N}^{(s)}(t_{1}-t_{0}) (L_{1})^{\frac{1}{N}} h_{q}( t_{1} )^{\frac{1}{N}}, $$ for $\mathcal{L}^{1}$-a.e. $t_{0} \in [A_{0},A_{0} + L_{0}]$ and $t_{1}$ obtained as the image of $t_{0}$ through the monotone rearrangement of $[A_{0},A_{0}+L_{0}]$ to $[A_{1},A_{1}+L_{1}]$ and every $s \in [0,1]$. If $t_{0} = A_{0} + \tau L_{0}$, then $t_{1} = A_{1} + \tau L_{1}$. Also $A_{0}$ and $A_{1} +L_{1}$ should be taken close enough to verify the local curvature condition. Then we can consider the previous inequality only for $s = 1/2$ and include the explicit formula for $t_{1}$ and obtain: \begin{align*} (L_{0} + L_{1})^{\frac{1}{N}} &h_{q}(A_{1/2} + \tau L_{1/2})^{\frac{1}{N}} \\ & \geq \sigma^{(1/2)}_{K,N-1}( A_{1} - A_{0} + \tau |L_{1} - L_{0}| )^{\frac{N-1}{N}} \left\{ (L_{0})^{\frac{1}{N}} h_{q}(A_{0} + \tau L_{0})^{\frac{1}{N}} + (L_{1})^{\frac{1}{N}} h_{q}(A_{1} + \tau L_{1})^{\frac{1}{N}} \right\}, \end{align*} for $\mathcal{L}^{1}$-a.e. $\tau \in [0,1]$, where we used the notation $A_{1/2}:=\frac{A_0+A_1}{2}, L_{1/2}:=\frac{L_0+L_1}{2}$. Now observing that the map $s \mapsto h_{q}(s)$ is continuous, the previous inequality also holds for $\tau =0$: \begin{equation}\label{E:beforeoptimize} (L_{0} + L_{1})^{\frac{1}{N}} h_{q}(A_{1/2} )^{\frac{1}{N}} \geq \sigma^{(1/2)}_{K,N-1}( A_{1} - A_{0})^{\frac{N-1}{N}} \left\{ (L_{0})^{\frac{1}{N}} h_{q}(A_{0})^{\frac{1}{N}} + (L_{1})^{\frac{1}{N}} h_{q}(A_{1})^{\frac{1}{N}} \right\}, \end{equation} for all $A_{0} < A_{1}$ with $A_{0},A_{1}\in (a_0, a_1)$, all sufficiently small $L_{0}, L_{1}$ and $\mathfrak q$-a.e. $q\in Q$, with exceptional set depending on $A_{0},A_{1},L_{0}$ and $L_{1}$. 
Noticing that \eqref{E:beforeoptimize} depends in a continuous way on $A_{0},A_{1},L_{0}$ and $L_{1}$, it follows that there exists a common exceptional set $N \subset Q$ such that $\mathfrak q(N) = 0$ and for each $q \in Q\setminus N$ for all $A_{0},A_{1},L_{0}$ and $L_{1}$ the inequality \eqref{E:beforeoptimize} holds true. Then one can make the following (optimal) choice $$ L_{0} : = L \frac{h_{q}(A_{0})^{\frac{1}{N-1}} }{h_{q}(A_{0})^{\frac{1}{N-1}} + h_{q}(A_{1})^{\frac{1}{N-1}} }, \qquad L_{1} : = L \frac{h_{q}(A_{1})^{\frac{1}{N-1}} }{h_{q}(A_{0})^{\frac{1}{N-1}} + h_{q}(A_{1})^{\frac{1}{N-1}} }, $$ for any $L > 0$ sufficiently small, and obtain that \begin{equation}\label{E:CDKN-1} h_{q}(A_{1/2} )^{\frac{1}{N-1}} \geq \sigma^{(1/2)}_{K,N-1}( A_{1} - A_{0}) \left\{ h_{q}(A_{0})^{\frac{1}{N-1}} + h_{q}(A_{1})^{\frac{1}{N-1}} \right\}. \end{equation} Now one can observe that \eqref{E:CDKN-1} is precisely the inequality requested for $\mathsf{CD}^{*}_{loc}(K,N-1)$ to hold. As stated in Section \ref{Ss:geom}, the reduced curvature-dimension condition verifies the local-to-global property. In particular, see \cite[Lemma 5.1, Theorem 5.2]{cavasturm:MCP}, if a function verifies \eqref{E:CDKN-1} locally, then it also satisfies it globally. Hence $h_{q}$ also verifies the inequality requested for $\mathsf{CD}^{*}(K,N-1)$ to hold, i.e. for $\mathfrak q$-a.e. $q \in Q$, the density $h_{q}$ verifies \eqref{E:curvdensmm}. \\ {\bf Step 3.}\\ For the case $N =1$, repeat the same construction of {\bf Step 1.} and obtain for $\mathfrak q$-a.e. $q \in Q$ $$ (L_{s}) h_{q}( (1-s) t_{0} + s t_{1} ) \geq (1-s) L_{0} h_{q}( t_{0} )+ s L_{1} h_{q}( t_{1} ), $$ for any $s \in [0,1]$ and $L_{0}$ and $L_{1}$ sufficiently small. As before, we deduce for $s = 1/2$ that $$ \frac{L_{0} + L_{1}}{2} h_{q}( A_{1/2} ) \geq \frac{1}{2} \left(L_{0} h_{q}( A_{0} )+ L_{1}h_{q}( A_{1} ) \right). $$ Now taking $L_{0} = 0$ or $L_{1} = 0$, it follows that necessarily $h_{q}$ has to be constant. 
\end{proof} According to Remark \ref{R:CDN-1}, Theorem \ref{T:CDKN-1} can be alternatively stated as follows. \\ \noindent \emph{ If $(X,\mathsf d,\mathfrak m)$ is an essentially non-branching m.m.s. verifying $\mathsf{CD}_{loc}(K,N)$ and $\varphi : X \to \mathbb{R}$ is a 1-Lipschitz function, then the corresponding decomposition of the space in maximal rays $\{ X_{q}\}_{q\in Q}$ produces a disintegration $\{\mathfrak m_{q} \}_{q\in Q}$ of $\mathfrak m$ so that for $\mathfrak q$-a.e. $q\in Q$, $$ \textrm{the m.m.s. }( \textrm{Dom\,}(g(q,\cdot)), |\cdot|, h_{q} \mathcal{L}^{1}) \quad \textrm{verifies} \quad \mathsf{CD}(K,N). $$ } Accordingly, one says that the disintegration $q \mapsto \mathfrak m_{q}$ is a $\mathsf{CD}(K,N)$ disintegration. The disintegration obtained with $L^{1}$-Optimal Transportation is also balanced in the sense of Section \ref{Ss:balanced}. This additional information, together with what has been proved so far, is collected in the following \begin{theorem}[Theorem 5.1 of \cite{CM1}]\label{T:localize} Let $(X,\mathsf d, \mathfrak m)$ be an essentially non-branching metric measure space verifying the $\mathsf{CD}_{loc}(K,N)$ condition for some $K\in \mathbb{R}$ and $N\in [1,\infty)$. Let $f : X \to \mathbb{R}$ be $\mathfrak m$-integrable such that $\int_{X} f\, \mathfrak m = 0$ and assume the existence of $x_{0} \in X$ such that $\int_{X} | f(x) |\, \mathsf d(x,x_{0})\, \mathfrak m(dx)< \infty$. Then the space $X$ can be written as the disjoint union of two sets $Z$ and $\mathcal{T}$ with $\mathcal{T}$ admitting a partition $\{ X_{q} \}_{q \in Q}$ and a corresponding disintegration of $\mathfrak m\llcorner_{\mathcal{T}}$, $\{\mathfrak m_{q} \}_{q \in Q}$ such that: \begin{itemize} \item For any $\mathfrak m$-measurable set $B \subset \mathcal{T}$ it holds $$ \mathfrak m(B) = \int_{Q} \mathfrak m_{q}(B) \, \mathfrak q(dq), $$ where $\mathfrak q$ is a probability measure over $Q$ defined on the quotient $\sigma$-algebra $\mathcal{Q}$.
\item For $\mathfrak q$-almost every $q \in Q$, the set $X_{q}$ is a geodesic and $\mathfrak m_{q}$ is supported on it. Moreover $q \mapsto \mathfrak m_{q}$ is a $\mathsf{CD}(K,N)$ disintegration. \item For $\mathfrak q$-almost every $q \in Q$, it holds $\int_{X_{q}} f \, \mathfrak m_{q} = 0$ and $f = 0$ $\mathfrak m$-a.e. in $Z$. \end{itemize} \end{theorem} The proof is just a collection of already proven statements. We include it for the reader's convenience. \begin{proof} Consider $$ \mu_{0} : = f_{+} \mathfrak m \frac{1}{\int f_{+}\mathfrak m}, \qquad \mu_{1} : = f_{-}\mathfrak m \frac{1}{\int f_{-}\mathfrak m}, $$ where $f_{\pm}$ stands for the positive and negative part of $f$, respectively. The summability assumption on $f$ yields the existence of a $1$-Lipschitz Kantorovich potential $\varphi : X \to \mathbb{R}$ for the pair of marginals $\mu_{0}, \mu_{1}$. Since the m.m.s. $(X,\mathsf d,\mathfrak m)$ is essentially non-branching, the transport set $\mathcal{T}$ is partitioned by the rays: $$ \mathfrak m\llcorner_{\mathcal{T}} = \int_{Q} \mathfrak m_{q}\, \mathfrak q(dq),\qquad \mathfrak m_{q}(X_{q}) = 1, \quad \mathfrak q-\textrm{a.e. } q \in Q; $$ moreover $(X,\mathsf d,\mathfrak m)$ verifies $\mathsf{CD}_{loc}$ and therefore Theorem \ref{T:CDKN-1} implies that $q \mapsto \mathfrak m_{q}$ is a $\mathsf{CD}(K,N)$ disintegration. Lemma \ref{L:balancing} implies that $$ \int_{X_{q}} f(x) \, \mathfrak m_{q}(dx) = 0. $$ To conclude, note that in $X \setminus \mathcal{T}$ necessarily $f$ has to be zero. Indeed, take any compact $B \subset X\setminus \mathcal{T}$ with $\mathfrak m(B) > 0$ and assume $f \neq 0$ over $B$. Then, possibly passing to a subset, we can assume $f > 0$ over $B$ and therefore $\mu_{0}(B) > 0$. Since $$ \mu_{0} = \int_{Q} \mu_{0\,q} \mathfrak q(dq), \qquad \mu_{0\,q} (X_{q}) = 1, $$ necessarily $B$ cannot be a subset of $X\setminus \mathcal{T}$, yielding a contradiction. All the claims are proved.
\end{proof} \section{Applications}\label{S:application} Here we collect some applications of the results proved so far, in particular of Proposition \ref{P:nonatoms} and Theorem \ref{T:CDKN-1}. \subsection{Solution of the Monge problem} Here we review how the regularity of the conditional probabilities of the one-dimensional disintegration studied so far permits the construction of a solution to the Monge problem. In particular we will see how Proposition \ref{P:nonatoms} allows one to construct an optimal map $T$. As the plan is to use the one-dimensional reduction, we first recall the one-dimensional result for the Monge problem \cite{Vil}. \begin{theorem} \label{T:oneDmonge} Let $\mu_{0}, \mu_{1}$ be probability measures on $\mathbb{R}$, $\mu_{0}$ with no atoms, and let $$ H(s) := \mu_{0}((-\infty,s)), \quad F(t) := \mu_{1}((-\infty,t)), $$ be the left-continuous distribution functions of $\mu_{0}$ and $\mu_{1}$ respectively. Then the following holds. \begin{enumerate} \item The nondecreasing function $T : \mathbb{R} \to \mathbb{R} \cup \{-\infty,+\infty\}$ defined by $$ T(s) := \sup \big\{ t \in \mathbb{R} : F(t) \leq H(s) \big\} $$ maps $\mu_{0}$ to $\mu_{1}$. Moreover any other nondecreasing map $T'$ such that $T'_\sharp \mu_{0} = \mu_{1}$ coincides with $T$ on the support of $\mu_{0}$ up to a countable set. \item If $\phi : [0,+\infty) \to \mathbb{R}$ is nondecreasing and convex, then $T$ is an optimal transport relative to the cost $c(s,t) = \phi(|s-t|)$. Moreover $T$ is the unique optimal transference map if $\phi$ is strictly convex. \end{enumerate} \end{theorem} \begin{theorem}[Theorem 6.2 of \cite{biacava:streconv}]\label{T:mongeff} Let $(X,\mathsf d,\mathfrak m)$ be a non-branching metric measure space and consider $\mu_{0}, \mu_{1} \in \mathcal{P}(X)$ with finite first moment. Assume the existence of a Kantorovich potential $\varphi$ such that the associated transport set $\mathcal{T}$ verifies \emph{\ref{A:1}}. Assume $\mu_{0} \ll \mathfrak m$.
Then there exists a Borel map $T: X \to X$ such that $$ \int_{X} \mathsf d(x,T(x)) \, \mu_{0} (dx) = \min_{\pi \in \Pi(\mu_{0},\mu_{1})} \int_{X\times X} \mathsf d(x,y) \, \pi(dxdy). $$ \end{theorem} Theorem \ref{T:mongeff} was presented in \cite{biacava:streconv} assuming the space to be non-branching, while here we assume it to be essentially non-branching. \begin{proof} {\bf Step 1.} One-dimensional reduction of $\mu_{0}$. \\ Let $\varphi : X \to \mathbb{R}$ be the Kantorovich potential from the assumptions and $\mathcal{T}$ the corresponding transport set. Accordingly $$ \mathfrak m\llcorner_{\mathcal{T}} = \int_{Q} \mathfrak m_{q}\,\mathfrak q(dq), $$ with $\mathfrak m_{q}(X_{q}) = 1$ for $\mathfrak q$-a.e. $q \in Q$. Moreover from \ref{A:1} for $\mathfrak q$-a.e. $q \in Q$ the conditional $\mathfrak m_{q}$ has no atoms, i.e. $\mathfrak m_{q}(\{z \}) = 0$ for all $z \in X$. From Lemma \ref{L:mapoutside}, we can assume that $\mu_{0}(\mathcal{T}_{e}) = \mu_{1}(\mathcal{T}_{e}) = 1$. Since $\mu_{0} = \varrho_{0} \mathfrak m$, with $\varrho_{0} : X \to [0,\infty)$, from Theorem \ref{T:equivalence} we have $\mu_{0}(\mathcal{T}) = 1$. Hence $$ \mu_{0} = \int_{Q} \varrho_{0} \mathfrak m_{q} \, \mathfrak q(dq) = \int_{Q} \mu_{0 \, q} \, \mathfrak q_{0}(dq), \qquad \mu_{0\, q} : = \varrho_{0} \mathfrak m_{q} \left(\int_{X} \varrho_{0}(x) \mathfrak m_{q}(dx) \right)^{-1}, $$ and $\mathfrak q_{0} = \mathfrak Q_{\sharp} \mu_{0}$. In particular $\mu_{0\, q}$ has no atoms and $\mu_{0\,q}(X_{q}) = 1$. {\bf Step 2.} One-dimensional reduction of $\mu_{1}$. \\ As we are not making any assumption on $\mu_{1}$, we cannot exclude that $\mu_{1}(\mathcal{T}_{e} \setminus \mathcal{T}) >0$, and therefore to localize $\mu_{1}$ one cannot proceed as for $\mu_{0}$. Consider therefore an optimal transport plan $\pi$ with $\pi(\Gamma) = 1$.
Since $\pi (\mathcal{T} \times \mathcal{T}_{e}) = 1$ and a partition of $\mathcal{T}$ is given, we can consider the family of sets $\{ X_{q} \times \mathcal{T}_{e}\}_{q\in Q}$ as a partition of $\mathcal{T} \times \mathcal{T}_{e}$; note indeed that $(X_{q} \times \mathcal{T}_{e}) \cap (X_{q'} \times \mathcal{T}_{e}) = \emptyset$ as soon as $q \neq q'$. The domain of the quotient map $\mathfrak Q : \mathcal{T} \to Q$ can be trivially extended to $\mathcal{T} \times \mathcal{T}_{e}$ by setting $\mathfrak Q(x,z) = \mathfrak Q(x)$; observe that $$ \mathfrak Q_{\sharp} \,\pi(I) = \pi \left( \mathfrak Q^{-1} (I) \right) =\pi \left( \mathfrak Q^{-1}(I) \times \mathcal{T}_{e} \right) =\mu_{0}(\mathfrak Q^{-1}(I)) =\mathfrak q_{0}(I). $$ In particular this implies that $$ \pi = \int_{Q} \pi_{q} \, \mathfrak q_{0}(dq), \qquad \pi_{q} (X_{q} \times \mathcal{T}_{e}) = 1, \quad \textrm{for } \mathfrak q_{0}\textrm{-a.e. } q \in Q. $$ Then, applying the projection, $$ \mu_{0} = P_{1\,\sharp} \pi = \int_{Q} P_{1\,\sharp}(\pi_{q}) \, \mathfrak q_{0}(dq), $$ and by uniqueness of disintegration $P_{1\,\sharp}(\pi_{q}) = \mu_{0\,q}$ for $\mathfrak q_{0}$-a.e. $q\in Q$. Then we can find a localization of $\mu_{1}$ as follows: $$ \mu_{1} = P_{2\,\sharp} \pi = \int_{Q} P_{2\,\sharp}(\pi_{q}) \, \mathfrak q_{0}(dq) = \int_{Q} \mu_{1\,q} \, \mathfrak q_{0}(dq), $$ where by definition we set $\mu_{1\, q} : = P_{2\,\sharp}(\pi_{q})$ and by construction $\mu_{1\, q} (X_{q}) = \mu_{0\,q}(X_{q}) = 1$. {\bf Step 3.} Solution to the Monge problem.\\ For each $q \in Q$ consider the distribution functions $$ H(q,t) := \mu_{0\,q}((-\infty,t)), \quad F(q, t) := \mu_{1\,q }((-\infty,t)), $$ where, with a slight abuse of notation, we write $\mu_{i\,q}$ for $g(q,\cdot)^{-1}_{\sharp}\, \mu_{i\,q}$, $i = 0,1$. Then define $\hat T$, as Theorem \ref{T:oneDmonge} suggests, by $$ \hat T(q,s) := \Big(q, \sup \big\{ t : F(q,t) \leq H(q,s) \big\} \Big).
$$ Note that since $H$ is continuous ($\mu_{0\,q}$ has no atoms), the map $s \mapsto \hat T(q,s)$ is well-defined. Then define the transport map $T: \mathcal{T} \to X$ as $g\circ \hat T \circ g^{-1}$. It is fairly easy to observe that $$ T_{\sharp} \,\mu_{0} = \int_{Q} \left(g\circ \hat T \circ g^{-1}\right)_{\sharp} \mu_{0\, q}\, \mathfrak q_{0}(dq) = \int_{Q} \mu_{1\, q} \,\mathfrak q_{0}(dq) = \mu_{1}; $$ moreover $(x,T(x)) \in \Gamma$, hence the graph of $T$ is $\mathsf d$-cyclically monotone and the map $T$ is optimal. Extend $T$ to $X$ as the identity. It remains to show that it is Borel. First observe that, possibly passing to a compact subset of $Q$, the map $q \mapsto (\mu_{0\,q},\mu_{1\,q})$ can be assumed to be weakly continuous; it follows that the maps $$ \textrm{Dom\,}(g) \ni (q,t) \mapsto H(q,t) := \mu_{0\,q}((-\infty,t)), \ \ (q,t) \mapsto F(q,t) := \mu_{1\,q}((-\infty,t)) $$ are lower semicontinuous. Then for $A$ Borel, $$ \hat T^{-1}(A \times [t,+\infty)) = \big\{ (q,s) : q \in A, H(q,s) \geq F(q,t) \big\} \in \mathcal{B}(Q \times \mathbb{R}), $$ and therefore the same holds for $T$. \end{proof} If $(X,\mathsf d,\mathfrak m)$ verifies $\mathsf{MCP}$ then it also verifies \ref{A:1}, see Proposition \ref{P:MCP-1}. Hence we obtain the following corollary. \begin{corollary}[Corollary 9.6 of \cite{biacava:streconv}]\label{C:MCP-Monge} Let $(X,\mathsf d,\mathfrak m)$ be a non-branching metric measure space verifying $\mathsf{MCP}(K,N)$. Let $\mu_{0}$ and $\mu_{1}$ be probability measures with finite first moment and $\mu_{0}\ll \mathfrak m$. Then there exists a Borel optimal transport map $T: X \to X$ solution to the Monge problem. \end{corollary} Corollary \ref{C:MCP-Monge} in particular implies the existence of solutions to the Monge problem in the Heisenberg group when $\mu_{0}$ is assumed to be absolutely continuous with respect to the left-invariant Haar measure.
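The monotone rearrangement formula $T(s) = \sup\{t : F(t) \leq H(s)\}$ of Theorem \ref{T:oneDmonge} can be illustrated numerically. The sketch below (entirely ours, not part of the original argument) evaluates it for two finitely supported measures, with the supremum restricted to the atoms of $\mu_1$; the theorem itself of course requires $\mu_0$ to be atomless, so this is only a finite caricature of the formula.

```python
import numpy as np

def monotone_map(supp0, w0, supp1, w1):
    # Discrete illustration of T(s) = sup{ t : F(t) <= H(s) }, with the
    # supremum restricted to the atoms of mu_1; all names and the
    # normalization of the weights are ours, for illustration only.
    i0, i1 = np.argsort(supp0), np.argsort(supp1)
    s0, p0 = np.asarray(supp0, float)[i0], np.asarray(w0, float)[i0]
    s1, p1 = np.asarray(supp1, float)[i1], np.asarray(w1, float)[i1]
    p0, p1 = p0 / p0.sum(), p1 / p1.sum()
    # left-continuous CDFs: mass strictly below each atom
    H = np.concatenate(([0.0], np.cumsum(p0)[:-1]))
    F = np.concatenate(([0.0], np.cumsum(p1)[:-1]))
    # largest atom t of mu_1 with F(t) <= H(s), for each atom s of mu_0
    return np.array([s1[F <= h].max() for h in H])
```

For instance, with $\mu_0$ uniform on $\{0,1,2,3\}$ and $\mu_1$ uniform on $\{5,6\}$, the map sends the two left atoms to $5$ and the two right atoms to $6$, which is the monotone rearrangement one expects.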
\begin{theorem}[Monge problem in the Heisenberg group] Consider $(\mathbb{H}^n, \mathsf d_c,\mathcal{L}^{2n+1} )$, the $n$-dimensional Heisenberg group endowed with the Carnot-Carath\'eodory distance $\mathsf d_{c}$ and the $(2n+1)$-dimensional Lebesgue measure, which coincides with the Haar measure on $(\mathbb{H}^n, \mathsf d_c)$ under the identification $\mathbb{H}^n\simeq \mathbb{R}^{2n+1}$. Let $\mu_{0}$ and $\mu_{1}$ be two probability measures with finite first moment and $\mu_{0}\ll \mathcal{L}^{2n+1}$. Then there exists a Borel optimal transport map $T: \mathbb{H}^{n} \to \mathbb{H}^{n}$ solution to the Monge problem. \end{theorem} \begin{remark} The techniques presented so far were also successfully used to treat the more general case of infinite-dimensional spaces with curvature bounds, see \cite{cava:Wiener} where the existence of solutions for the Monge minimization problem in the Wiener space is proved. Note that the material presented in the previous sections can be obtained also without assuming the existence of a $1$-Lipschitz Kantorovich potential (e.g. the Wiener space); the decomposition of the space into geodesics and the associated disintegration of the reference measure can be obtained starting from a generic $\mathsf d$-cyclically monotone set. For all the details see \cite{biacava:streconv}. \end{remark} \subsection{Isoperimetric inequality} We now turn to the second main application of the techniques reviewed so far, the L\'evy-Gromov isoperimetric inequality in singular spaces. The results of this section are taken from \cite{CM1,CM2}. \begin{theorem}[Theorem 1.2 of \cite{CM1}]\label{T:iso} Let $(X,\mathsf d,\mathfrak m)$ be a metric measure space with $\mathfrak m(X)=1$, verifying the essentially non-branching property and $\mathsf{CD}_{loc}(K,N)$ for some $K\in \mathbb{R},N \in [1,\infty)$. Let $D$ be the diameter of $X$, possibly assuming the value $\infty$.
Then for every $v\in [0,1]$, $$ \mathcal{I}_{(X,\mathsf d,\mathfrak m)}(v) \ \geq \ \mathcal{I}_{K,N,D}(v), $$ where $\mathcal{I}_{K,N,D}$ is the model isoperimetric profile defined in \eqref{defcI}. \end{theorem} \begin{proof} First of all we can assume $D<\infty$ and therefore $\mathfrak m \in \mathcal{P}_{2}(X)$: indeed from the Bonnet-Myers Theorem if $K>0$ then $D<\infty$, and if $K\leq 0$ and $D=\infty$ then the model isoperimetric profile \eqref{defcI} trivializes, i.e. $\mathcal{I}_{K,N,\infty}\equiv 0$ for $K\leq 0$. For $v=0,1$ one can take as competitor the empty set and the whole space respectively, so it trivially holds $$ \mathcal{I}_{(X,\mathsf d,\mathfrak m)}(0)=\mathcal{I}_{(X,\mathsf d,\mathfrak m)}(1)= \mathcal{I}_{K,N,D}(0)=\mathcal{I}_{K,N,D}(1)=0. $$ Fix then $v\in(0,1)$ and let $A\subset X$ be an arbitrary Borel subset of $X$ such that $\mathfrak m(A)=v$. Consider the $\mathfrak m$-measurable function $f(x) : = \chi_{A}(x) - v$ and notice that $\int_{X} f \, \mathfrak m = 0$. Thus $f$ verifies the hypothesis of Theorem \ref{T:localize} and noticing that $f$ is never null, we can decompose $X = Y \cup \mathcal{T}$ with $$ \mathfrak m(Y)=0, \qquad \mathfrak m\llcorner_{\mathcal{T}} = \int_{Q} \mathfrak m_{q}\, \mathfrak q(dq), $$ with $\mathfrak m_{q} = g(q,\cdot)_\sharp \left( h_{q} \cdot \mathcal{L}^{1}\right)$; moreover, for $\mathfrak q$-a.e. $q \in Q$, the density $h_{q}$ verifies \eqref{E:curvdensmm} and $$ \int_{X} f(z) \, \mathfrak m_{q}(dz) = \int_{\textrm{Dom\,}(g(q,\cdot))} f(g(q,t)) \cdot h_{q}(t) \, \mathcal{L}^{1}(dt) = 0. $$ Therefore \begin{equation}\label{eq:volhq} v=\mathfrak m_{q} ( A \cap \{ g(q,t) : t\in \mathbb{R} \} ) = (h_{q}\mathcal{L}^1) (g(q,\cdot)^{-1}(A)), \quad \text{ for $\mathfrak q$-a.e. $q \in Q$}. 
\end{equation} For every $\varepsilon>0$ we then have \begin{align*} \frac{\mathfrak m(A^\varepsilon)-\mathfrak m(A)}{\varepsilon} &~ = \frac{1}{\varepsilon} \int_{\mathcal{T}} \chi_{A^\varepsilon\setminus A} \,\mathfrak m(dx) = \frac{1}{\varepsilon} \int_{Q} \left( \int_{X} \chi_{A^\varepsilon\setminus A} \, \mathfrak m_{q} (dx) \right)\, \mathfrak q(dq) \crcr &~ = \int_{Q} \frac{1}{\varepsilon} \left( \int_{\textrm{Dom\,}(g(q,\cdot))} \chi_{A^\varepsilon\setminus A} \,h_{q}(t) \, \mathcal{L}^{1}(dt) \right)\, \mathfrak q(dq) \crcr &~ = \int_{Q} \left( \frac{(h_{q}\mathcal{L}^1)(g(q,\cdot)^{-1}(A^\varepsilon))- (h_{q}\mathcal{L}^1)(g(q,\cdot)^{-1}(A))}{\varepsilon} \right)\, \mathfrak q(dq) \crcr &~ \geq \int_{Q} \left( \frac{(h_{q}\mathcal{L}^1)((g(q,\cdot)^{-1}(A))^\varepsilon)- (h_{q}\mathcal{L}^1)(g(q,\cdot)^{-1}(A))}{\varepsilon} \right)\, \mathfrak q(dq), \crcr \end{align*} where the last inequality is given by the inclusion $ (g(q,\cdot)^{-1}(A))^\varepsilon \cap \text{\rm supp}(h_q) \subset g(q,\cdot)^{-1}(A^\varepsilon)$. \\ Recalling \eqref{eq:volhq} together with $h_{q}\mathcal{L}^1\in \mathcal{F}^{s}_{K,N,D}$, by Fatou's Lemma we get \begin{align*} \mathfrak m^+(A) &~ = \liminf_{\varepsilon\downarrow 0} \frac{\mathfrak m(A^\varepsilon)-\mathfrak m(A)}{\varepsilon} \crcr &~\geq \int_{Q} \left( \liminf_{\varepsilon\downarrow 0} \frac{(h_{q}\mathcal{L}^1)((g(q,\cdot)^{-1}(A))^\varepsilon) - (h_{q}\mathcal{L}^1)(g(q,\cdot)^{-1}(A))}{\varepsilon} \right)\, \mathfrak q(dq) \crcr &~ = \int_{Q} \left( (h_{q}\mathcal{L}^1)^+(g(q,\cdot)^{-1}(A)) \right)\, \mathfrak q(dq) \crcr &~ \geq \int_{Q} \mathcal{I}^s_{K,N,D} (v) \, \mathfrak q(dq) \crcr &~ = \mathcal{I}_{K,N,D} (v), \end{align*} where in the last equality we used Theorem \ref{thm:I=Is}. \end{proof} From the definition of $\mathcal{I}_{K,N,D}$, see \eqref{defcI}, and the smooth results of E. Milman in \cite{Mil}, the estimates proved in Theorem \ref{T:iso} are sharp. 
Furthermore, the one-dimensional localization technique permits one to obtain rigidity in the following sense: if for some $v \in (0,1)$ it holds $\mathcal{I}_{(X,\mathsf d,\mathfrak m)}(v)= \mathcal{I}_{K,N,\pi}(v)$, then $(X,\mathsf d,\mathfrak m)$ is a spherical suspension. It is worth underlining that to obtain such a result $(X,\mathsf d,\mathfrak m)$ is assumed to be in the more regular class of $\mathsf{RCD}$-spaces. Even more, one can prove an almost rigidity statement: if $(X,\mathsf d,\mathfrak m)$ is an $\mathsf{RCD}^*(K,N)$ space such that $\mathcal{I}_{(X,\mathsf d,\mathfrak m)}(v)$ is close to $\mathcal{I}_{K,N,\pi}(v)$ for some $v \in (0,1)$, then this forces $X$ to be close, in the measure-Gromov-Hausdorff distance, to a spherical suspension. What follows is Corollary 1.6 of \cite{CM1}. \begin{theorem}[Almost equality in L\'evy-Gromov implies mGH-closeness to a spherical suspension] \label{cor:AlmRig} For every $N\in [2, \infty) $, $v \in (0,1)$, $\varepsilon>0$ there exists $\bar{\delta}=\bar{\delta}(N,v,\varepsilon)>0$ such that the following holds. For every $\delta \in [0, \bar{\delta}]$, if $(X,\mathsf d,\mathfrak m)$ is an $\mathsf{RCD}^*(N-1-\delta,N+\delta)$ space satisfying $$ \mathcal{I}_{(X,\mathsf d,\mathfrak m)}(v)\leq \mathcal{I}_{N-1,N,\pi}(v)+\delta, $$ then there exists an $\mathsf{RCD}^*(N-2,N-1)$ space $(Y, \mathsf d_Y, \mathfrak m_Y)$ with $\mathfrak m_Y(Y)=1$ such that $$ \mathsf d_{mGH}(X, [0,\pi] \times_{\sin}^{N-1} Y) \leq \varepsilon. $$ \end{theorem} We refer to \cite{CM1} for the precise rigidity statement (Theorem 1.4, \cite{CM1}) and for the proof of Theorem 1.4 and Corollary 1.6 of \cite{CM1}. See also \cite{CM1} for the precise definition of spherical suspension. We conclude by recalling that one-dimensional localization was also used in \cite{CM2} to obtain sharp versions of several functional inequalities (e.g.\ Brunn-Minkowski, spectral gap, Log-Sobolev, etc.) in the class of $\mathsf{CD}(K,N)$-spaces. See \cite{CM2} for details.
\end{document}
\begin{document} \title{Projection onto quadratic hypersurfaces} \abstract{We address the problem of projecting a point onto a quadratic hypersurface, more specifically a central quadric. We show how this problem reduces to finding a given root of a scalar-valued nonlinear function. We completely characterize one of the optimal solutions of the projection as either the unique root of this nonlinear function on a given interval, or as a point that belongs to a finite set of computable solutions. We then leverage this projection and the recent advancements in splitting methods to compute the projection onto the intersection of a box and a quadratic hypersurface with alternating projections and Douglas-Rachford splitting methods. We test these methods on a practical problem from the power systems literature, and show that they outperform IPOPT and Gurobi in terms of objective, execution time and feasibility of the solution. } \textbf{Keywords}: quadric, quadratic surfaces, nonconvex projection, Douglas-Rachford splitting, alternating projections. \section{Introduction} \label{sec:Introduction} This paper discusses the projection of a given point onto a nonsingular quadratic hypersurface, or nonsingular \emph{quadric}. Quadrics are a natural generalization of hyperplanes. The projection onto a quadric appears, e.g., in splitting algorithms for the projection between a quadric---or the Cartesian product of quadrics---and a polytope. This problem has direct applications, \emph{e.g.}, in power systems~\cite{vh22}, where the power losses can be approximated as a quadratic hypersurface. Numerical experiments on such problems are developed in this paper. Other applications of quadratic projections emerge in the context of the security region of gas networks~\cite{song_security_2021}, or in local learning methods~\cite{scott_thesis_2020}. It is therefore intriguing that few studies of this problem can be found in the literature. 
Indeed, projections onto quadratic surfaces have been studied for the 2D and 3D cases, see \emph{e.g.},~\cite{morera_distance_2013,lott_iii_direct_2014,huang_shape_2020}. However, to the best of our knowledge, the extension to an arbitrary dimension has not been pursued, with the exception of the short discussion at the end of~\cite{lott_iii_direct_2014} and in~\cite{sosa_algorithm_2020}. Although the method proposed in~\cite{sosa_algorithm_2020} can handle the singular case, \emph{i.e.}, the case where the matrix that defines the quadratic surface is singular, it does not always return the exact projection. Moreover, the two-level iterative scheme that is proposed in~\cite{sosa_algorithm_2020} can be computationally expensive. The structure of this paper is twofold. Firstly, we tackle the problem of projecting onto an $n$-dimensional nonsingular quadric. Secondly, we leverage this projection in the context of splitting methods. The projection considered in the first part of this paper (\cref{sec:Projection_onto_a_quadric}) is not unique in general, due to the nonconvexity of the feasible set. This implies that we cannot rely on first or second-order methods, since such methods may converge to a local minimum. This projection can also be handled by black-box (commercial) solvers, \emph{e.g.}, Gurobi or IPOPT~\cite{gurobi,wachter06}; however, these methods suffer from two main problems: i) the execution time soars when the dimension of the problem grows to medium or large-scale sizes---this phenomenon is present in our numerical results---and ii) Gurobi is not a local method, and does not exploit the local structure of the problem, even though, in certain applications, the starting point is close to the feasible set. The first problem is highlighted by the power system application that is considered here, where the projection step is only a small part of the overall procedure, which renders execution time an important factor in our analysis.
Using the Lagrange multiplier technique, we reduce this quadratically constrained quadratic program (QCQP) to the problem of finding the roots of a nonlinear real function. Then, we completely characterize the solutions of the nonconvex projection, and compute one of the solutions of this projection as either the (unique) root of a scalar function on a computable interval, or as an element of a finite set of closed-form solutions. We also show how to find this root using Newton's method, which guarantees quadratic convergence. Thus, the proposed method provides an efficient way to obtain the exact projection onto a nonsingular quadric. Finally, to further reduce execution time, we also introduce a heuristic based on a geometric construction. This allows us to quickly map a point to the quadric. We detail two variants of this heuristic. We note that our proposed approach for projecting onto a quadric is not unusual. For example, \cite[\S 6.2.1]{golub_matrix_2013} uses a similar construction for the problem of least squares minimization over a sphere. However, this problem is easier to tackle than ours, since the (unique) solution of this convex problem is the (unique) root of the \emph{secular equation} defined by the KKT conditions. In~\cite[\S 7.3]{conn_trust_2000}, the authors also use a similar procedure, and a taxonomy of secular equations, for finding the $\ell_2$-norm model minimizer. However, while the discussion is analogous to what is proposed in this paper, \emph{i.e.}, searching for a specific root of a given scalar-valued nonlinear function on a specific domain, the domain and the function are different in our work. Moreover, our discussion on degenerate cases is not present in~\cite{conn_trust_2000}, since such cases do not appear in the problem that the authors tackle, which is linked to the trust-region subproblem.
In the second part of this paper (\cref{sec:Splitting_methods}) we test our method of projecting onto a quadric in order to solve the problem of projecting onto the intersection between a polytope and a quadric. Both projections can be easily computed: the first one is even trivial if we consider a box, and the second one is efficiently obtained with the method proposed in the first part of the paper. We then leverage the rich literature on splitting methods for nonconvex programming. We consider, in this work, two splitting methods: the alternating projections and Douglas-Rachford splitting. References for these schemes can be found in~\cite{attouch_proximal_2010,bauschke_phase_2002,drusvyatskiy_transversality_2015,lewis_alternating_2008,lewis_local_2009}, and in~\cite{bauschke_phase_2002,li_douglasrachford_2016}, respectively. Depending on the splitting method considered, and on whether we use the exact projection onto the quadric or a heuristic, we detail five different methods for projecting a point onto the intersection of a box and a quadric. We analyse these methods in~\cref{sec:Numerical_experiments}. The five methods are benchmarked against IPOPT in the ellipsoidal and hyperboloidal cases, both for small and large-scale problems. In these experiments, we observe that one of the proposed methods, namely the alternating projections with exact projections, attains the best objective. We also observe that the alternating projections used with one of the heuristics (the gradient-based heuristic) reaches competitive objectives in a reduced amount of run time. All the methods considered outperform IPOPT in terms of execution time, with a difference of several orders of magnitude. Finally, we benchmark one of the proposed methods against Gurobi. We use Gurobi in order to find the optimal solution for a problem inspired by the power systems literature.
Since, in this context, the starting point is close to the feasible set, our proposed method clearly outperforms Gurobi, both in terms of execution time and objective. Using the lower bound computed by Gurobi, we can also conclude that, in the context of this specific problem, the proposed method finds the optimal solution, even though there is no guarantee of finding it in general. \section{Projection onto a quadric} \label{sec:Projection_onto_a_quadric} \subsection{Problem formulation} \label{sub:Problem_formulation} In this section, the problem of interest is introduced. This problem consists in the projection of a given point $\tilde{\bm x}^0 \in \mathbb{R}^{n}$ onto a feasible set $\mathcal{Q}$: \begin{align*} \min_{\bm x \in \mathbb{R}^{n}} & \norm{\bm x - \tilde{\bm x}^0}_2^2 \numberthis \label{eq:proj_quadric} \\ \text{s.t. } & \bm x \in \mathcal{Q} , \end{align*} where $\mathcal{Q}$ is a nonempty and non-cylindrical central quadric~\cite[Theorem 3.1.1]{odehnal_universe_2020}. In other words, $\mathcal{Q}$ is nonempty and there exists a quadratic function \[\Psi: \mathbb{R}^n \to \mathbb{R}: \bm x \mapsto \Psi(\bm x) = \Tr{\bm x} \bm{B} \bm x + \Tr{\bm{b}} \bm x + c ,\] with $\bm{B}$ nonsingular and $c \neq \frac{\Tr{\bm{b}} \bm{B}^{-1} \bm{b}}{4}$, such that \begin{equation}\label{eq:quadric} \mathcal{Q} = \left \{ \bm x \in \mathbb{R}^{n} \left | \right .{\Psi(\bm x) = 0} \right \}= \Psi^{-1}(0). \end{equation} See~\cite[\S 3.1]{odehnal_universe_2020} and ~\cite[Chapter 21]{2010mathematik} for a complete classification of quadrics. This quadratic surface, or \emph{quadric}, is referred to as a quadric with middle point, in the sense of~\cite{2010mathematik}. The middle point or \emph{centre}, $\bm d$, corresponding to the centre of symmetry, is computed as $- \frac{\bm{B}^{-1} \bm{b}}{2}$, and the condition $c \neq \frac{\Tr{\bm{b}} \bm{B}^{-1} \bm{b}}{4}$ is equivalent to $\bm d \notin \mathcal{Q}$.
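As a quick numerical illustration of the centre (the data below are ours, chosen for illustration), note that substituting $\bm d = -\bm{B}^{-1}\bm{b}/2$ into $\Psi$ gives $\Psi(\bm d) = c - \Tr{\bm{b}} \bm{B}^{-1} \bm{b}/4$, so the standing assumption is exactly $\bm d \notin \mathcal{Q}$:

```python
import numpy as np

def centre_and_psi(B, b, c):
    # Centre of symmetry d = -B^{-1} b / 2 of the quadric Psi^{-1}(0),
    # where Psi(x) = x^T B x + b^T x + c with B (symmetric) nonsingular.
    d = -0.5 * np.linalg.solve(B, b)
    psi_d = d @ B @ d + b @ d + c   # equals c - b^T B^{-1} b / 4
    return d, psi_d

# Illustrative data (ours): an ellipse-type quadric in the plane.
B = np.diag([2.0, 1.0])
b = np.array([4.0, -2.0])
c = -1.0
d, psi_d = centre_and_psi(B, b, c)
# Here d = (-1, 1) and Psi(d) = -1 - 12/4 = -4 != 0, so d is not on Q,
# consistently with c != b^T B^{-1} b / 4 (= 3 for these data).
```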
Note that, under these assumptions, we can prove that the feasible set defined by~\cref{eq:quadric} is a manifold, see~\cite[\S 3.2.1]{vh22} for more details. This centre, and the characterization of the surface as a manifold, will be used in~\cref{sub:Quasi-projection_on_the_quadric} to build a fast but inexact projection mapping, referred to as a \emph{quasi-projection}. The problem defined by~\cref{eq:proj_quadric} is invariant to translations and rotations. Hence, without loss of generality we can consider the following problem in \emph{normal form}~\cite[Theorem 3.1.1]{odehnal_universe_2020}: \begin{align*} \min_{\bm x \in \mathbb{R}^{n}} & \norm{\bm x - \bm x^0}_2^2 \numberthis \label{eq:proj_quadric_std} \\ \text{s.t. } & \sum_{i=1}^n \lambda_i x_i^2 -1 = 0 , \end{align*} where $\bm \lambda = \spec{\bm{B}}$ contains the eigenvalues of $\bm{B}$ sorted in descending order, and $\bm x^0$ is the appropriate transformation of $\tilde{\bm x}^0$. Note that, since the feasible set is nonempty, we have $\lambda_1 >0$. We will refer to the solution(s) of~\cref{eq:proj_quadric_std} as the \emph{(true) projection(s)} of $\bm x^0$ onto the quadric. Note that the centre $\bm d$ is now the origin $\bm 0$. Since this problem is symmetric with respect to the axes, we may assume that $\bm x^0 \geq \bm 0$, \emph{i.e.}, $\bm x^0$ is located inside the first orthant $\mathbb{R}^{n}_+:=\left \{ \bm x \in \mathbb{R}^{n} \left | \right . \bm x \geq \bm 0 \right \}$. Remark that if $\lambda_n > 0$, \emph{i.e.}, the quadric is an ellipsoid, and if we have $\sum_{i=1}^n \lambda_i (x^0_i)^2 - 1 > 0$, then the solution of~\cref{eq:proj_quadric_std} is identical to the solution of \begin{align*} \min_{\bm x \in \mathbb{R}^{n}} & \norm{\bm x^0 - \bm x}_2^2 \\ \text{s.t.
} & \sum_{i=1}^n \lambda_i x_i^2 -1 \leq 0 , \end{align*} which is a \emph{convex} optimization problem that is easy to solve, \emph{e.g.}, using an interior-point method (IPM), see~\cite{boyd_convex_2004,nesterov_lectures_2018} for more details, or a black-box commercial solver such as Gurobi~\cite{gurobi}. On the other hand, if $\bm{B}$ is indefinite or if $\sum_{i=1}^n \lambda_i (x^0_i)^2 - 1 < 0$, then we are confronted with a nonconvex optimization problem. First, let us show that the problem is well-posed, \emph{i.e.}, that there exists a global optimum of~\cref{eq:proj_quadric_std}. \begin{proposition} \label{prop:existence_projection_quadric} There exists a global optimum $\bm x^*$ of~\cref{eq:proj_quadric_std}. \end{proposition} \begin{proof} The objective of~\cref{eq:proj_quadric_std} is a real-valued, continuous and coercive function defined on a nonempty closed set, therefore there exists a global optimum~\cite[Theorem 2.32]{beck_2014}. \end{proof} \subsection{KKT conditions} \label{sub:KKT_conditions} Since the feasible set is nonconvex, the projection operator does not always return a singleton, see \cite[Theorem 3.8]{fletcher_chebyshev_2015}. The set of solutions may be a singleton (\cref{fig:x_trajectories_mu}), a finite set (\cref{fig:x_trajectories_mu_degenerate_2}), or an infinite set (suppose that $\bm{x}^0$ is the centre of a sphere, \emph{i.e.}, $\bm \lambda = \mathbb{1}$ and $\bm x^0 = \bm{0}$). Using the KKT conditions, we can characterize the solutions of~\cref{eq:proj_quadric_std}.
The Lagrangian of~\cref{eq:proj_quadric_std}, with Lagrange multiplier $\mu$ and with $\bm D = \textrm{diag}(\bm \lambda) \in \mathbb{R}^{n\times n}$, reads \begin{equation} \label{eq:lagrangian_not_sphere} \mathcal{L}(\bm x, \mu) = \Tr{(\bm x - \bm x^0)} (\bm x - \bm x^0) + \mu(\Tr{\bm x} \bm D \bm x - 1), \end{equation} and its gradient is \begin{align*} \bm \nabla \mathcal{L} (\bm x, \mu) &= \begin{pmatrix} 2 (\bm x - \bm x^0) + 2 \mu \bm D \bm x \\ \Tr{\bm x} \bm D \bm x - 1 \end{pmatrix}. \numberthis \label{eq:grad_lagrangian_not_sphere_real} \end{align*} Note that we can write the $i$-th equation of~\cref{eq:grad_lagrangian_not_sphere_real} as \begin{equation} \label{eq:x_i} x_i (1+\mu \lambda_i) = x^0_i . \end{equation} Any point $(\bm x, \mu)$ that satisfies \begin{equation} \label{eq:grad_lagrangian_not_sphere} \bm \nabla \mathcal{L} (\bm x, \mu) = \bm 0 \end{equation} is referred to as a \emph{KKT point}. An optimal solution, $\bm x^*$, of~\cref{eq:proj_quadric_std} must either meet the KKT conditions~\cref{eq:grad_lagrangian_not_sphere}, or fail to satisfy the linear independence constraint qualification (LICQ) criterion. In~\cref{eq:proj_quadric_std}, the latter occurs if $\nabla \Psi (\bm x^*) = \bm 0$. This corresponds to the case where the centre belongs to the quadric, and is ruled out by the condition $c \neq \frac{\Tr{\bm{b}} \bm{B}^{-1} \bm{b}}{4}$. Isolating $\bm x$ in the first $n$ equations of~\cref{eq:grad_lagrangian_not_sphere} yields, for $\mu \notin \pi(\bm{B}) := \left \{ -\frac{1}{\lambda} \left | \right .{\lambda \text{ is an eigenvalue of }\bm{B}} \right \}$, \begin{equation} \label{eq:x_not_sphere} \bm x (\mu) =(\bm I + \mu \bm D)^{-1} \bm x^0 , \end{equation} and the $i$-th component can be rewritten as \begin{equation} \label{eq:x_not_sphere_j} x_i (\mu) = \frac{x_i^0}{1+\mu \lambda_i}. \end{equation} Note that the set $\pi(\bm{B})$ corresponds to the \emph{poles} of the rational function~\cref{eq:f_mu}.
We distinguish two cases: \begin{itemize} \item Case 1: $\mu \notin \pi( \bm{B} )$. The matrix $(\bm I + \mu \bm D)$ is nonsingular and we have~\cref{eq:x_not_sphere}. \item Case 2: $\mu = -\frac{1}{\lambda_i}$ for some $i =1, \hdots, n$. The $i$-th equation of~\cref{eq:grad_lagrangian_not_sphere} reads $-2 x_i^0 = 0$, therefore $\mu$ is a solution only if $x_i^0 = 0$. We first treat the case $\bm x^0 >0$, referred to as the nondegenerate case, in~\cref{sub:Ellipsoid_case,sub:Hyperboloid_case}. Then, the degenerate case $\bm x^0 \geq 0$ is tackled in~\cref{sub:degenerate_case}. \end{itemize} In the first case, if we insert~\cref{eq:x_not_sphere} in the quadric equation, $\Psi(\bm x) = 0$, we obtain a \emph{univariate}, \emph{extended-real-valued} function \begin{align*} f : \mathbb{R} \to \overline{\mathbb{R}} : \mu \mapsto f(\mu) &= \Psi(\bm x(\mu)) = \Tr{\bm x(\mu)} \bm D \bm x(\mu) -1 \\ &= \sum_{i=1, x^0_i \neq 0}^n \lambda_i \left(\frac{x_i^0}{1+\mu \lambda_i}\right )^2 -1 \numberthis \label{eq:f_mu} \end{align*} whose roots we want to compute. Notice that, as the roots correspond to the values $\mu^*$ for which $\Psi(\bm x(\mu^*)) = 0$, they can be geometrically understood as the intersections between $\left \{\bm x(\mu): \mu \in \mathbb{R} \right \}$ and the quadric $\mathcal{Q} = \Psi^{-1}(0)$. This is illustrated in~\cref{ssub:2D_example_of_nondegenerate_projection_onto_an_ellipse,ssub:2D_example_of_nondegenerate_projection_onto_an_hyperbola}. In the following, we show how to efficiently solve~\cref{eq:proj_quadric_std} by computing a specific root of~\cref{eq:f_mu}. We first consider the case $\bm x^0 >0$: the ellipsoid case in~\cref{sub:Ellipsoid_case} and the hyperboloid case in~\cref{sub:Hyperboloid_case}. Then we discuss the case $\bm x^0 \geq \bm 0$ in~\cref{sub:degenerate_case}. Finally, we bring everything together into a single algorithm, \cref{alg:exact_projection_quadric}, in~\cref{sub:Bringing_everything_together}.
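To make the construction concrete, the maps $\mu \mapsto \bm x(\mu)$ of~\cref{eq:x_not_sphere_j} and $\mu \mapsto f(\mu)$ of~\cref{eq:f_mu} can be evaluated directly; below is a minimal numerical sketch (the helper names are ours, and the problem is assumed to be already in the diagonalized standard form, with \texttt{lam} holding the eigenvalues $\lambda_i$ and \texttt{x0} the coordinates of $\bm x^0$).

```python
# Sketch of x(mu) (componentwise form of (I + mu*D)^{-1} x^0) and of the
# rational function f(mu) = Psi(x(mu)). Helper names are ours.

def x_of_mu(mu, lam, x0):
    """Candidate point x(mu), componentwise: x_i = x^0_i / (1 + mu*lambda_i)."""
    return [x / (1.0 + mu * l) for l, x in zip(lam, x0)]

def f(mu, lam, x0):
    """f(mu) = Psi(x(mu)): zero exactly when x(mu) lies on the quadric.
    Components with x^0_i = 0 are skipped, as in the definition of f."""
    return sum(l * (x / (1.0 + mu * l)) ** 2
               for l, x in zip(lam, x0) if x != 0.0) - 1.0

# Unit circle (lam = [1, 1]) and x0 = (3, 4): the projection is x0/|x0|,
# reached for mu* = |x0| - 1 = 4.
print(x_of_mu(4.0, [1.0, 1.0], [3.0, 4.0]))  # [0.6, 0.8]
print(f(4.0, [1.0, 1.0], [3.0, 4.0]))        # ~0, up to rounding
```

A root-finding scheme applied to \texttt{f} then recovers the multiplier $\mu^*$, which is the topic of the following subsections.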
We also propose in~\cref{sub:Quasi-projection_on_the_quadric} a simpler procedure that allows us to map a point to the quadric without having to diagonalize the matrix $\bm{B}$. As this mapping does not return the true projection, we refer to it as \emph{quasi-projection}. \subsection{Ellipsoid case, \texorpdfstring{$\mathbf{x}^0 >\mathbf{0}$}{x0>0}} \label{sub:Ellipsoid_case} Here, we assume that the quadric is an ellipsoid, \emph{i.e.}, $\bm \lambda >0$ and that the initial point lies (strictly) in the first orthant, \emph{i.e.}, $\bm x^0 > \bm{0}$. The goal of this section is twofold. First, we derive several successive results~(\cref{prop:same_orthant_ellipsoid,prop:f_decreasing,prop:uniqueness_root,prop:uniqueness_of_solution_orthant}) that characterize the roots of $f$ and the solutions of~\cref{eq:proj_quadric_std}. The combination of these results yields~\cref{prop:optimal_is_root} which states that \emph{\cref{eq:proj_quadric_std} can be solved by finding the root of $f$ on a given interval $\mathcal{I}$}. Second, we provide a starting point for the Newton root-finding algorithm for efficiently computing this root. \begin{proposition}\label{prop:same_orthant_ellipsoid} Under the standing assumptions, every solution $\bm x^*$ of~\cref{eq:proj_quadric_std} satisfies $\bm x^*>0$. \end{proposition} \begin{proof} Recall that any solution of~\cref{eq:proj_quadric_std} is a KKT point. Using~\cref{eq:x_i} we see that if $(\bm x^*, \mu^*)$ is a KKT point then the positivity of $x_i^0$ for all $i = 1, \hdots, n$ implies that $x^*_i \neq 0$ for all $i$. Let us suppose, for the sake of contradiction, that $\bm x^*$ is a minimizer of~\cref{eq:proj_quadric_std} and that there exists a nonempty set of indices $J \subseteq \left \{ 1, 2, \hdots, n \right \}$ with $x^*_j < 0$ for all $j \in J$. 
By symmetry, we can construct $\bm x^{**}$ defined as \[ \bm x^{**} := \begin{cases} x^*_i & \text{if } i \notin J, \\ -x^*_i & \text{if } i \in J, \end{cases} \] and we have $\Psi(\bm x^{**}) = 0$, \emph{i.e.}, the point belongs to the quadric. The (squared) objective can be computed: \begin{align*} \norm{\bm x^{**} - \bm x^0}_2^2 &= \sum_{i=1}^n (x^0_i - x^{**}_i)^2 \\ &= \sum_{i=1, i\notin J}^n (x^0_i - x^{*}_i)^2 + \sum_{j \in J} \underbrace{(x_j^0 + x_j^*)^2}_{<(x^0_j - x^*_j)^2} \\ &< \norm{\bm x^{*} - \bm x^0}_2^2 \end{align*} This contradicts the optimality of $\bm x^*$. \end{proof} \begin{proposition}\label{prop:f_decreasing} $f$, defined as in~\cref{eq:f_mu}, is strictly decreasing on $\mathcal{I}:= \left ] -\frac{1}{\lambda_1}, +\infty \right [$. \end{proposition} \begin{proof} Since $f \in \mathcal{C}^1$ on $\mathcal{I}$, we compute \begin{equation} \label{eq:df_mu} f'(\mu) = -2 \sum_{i=1}^n \frac{(\lambda_i x^0_i)^2}{{\underbrace{(1+\mu \lambda_i)}_{>0\text{ for }\mu \in \mathcal{I}}}^3}, \end{equation} and this function is negative on $\mathcal{I}$. \end{proof} \begin{proposition} \label{prop:uniqueness_root} Function $f$ restricted to $\mathcal{I}$ has one and only one zero. \end{proposition} \begin{proof} By~\cref{prop:f_decreasing}, $f$ is strictly decreasing, and hence has \emph{at most} one zero. Moreover, $\lim\limits_{\mu \to +\infty}f(\mu) = \Psi(\bm d) = -1 < 0$ and $\lim\limits_{\mu \to -\frac{1}{\lambda_1}} f(\mu) = + \infty$; the continuity of $f$ on $\mathcal{I}$ implies the existence of the zero on $]-\frac{1}{\lambda_1}, +\infty[$. \end{proof} \begin{proposition}\label{prop:uniqueness_of_solution_orthant} If $\mu^*$ is a root of $f$, and $\mu^* \notin \mathcal{I}$, then $\bm x(\mu^*) \notin \mathbb{R}^{n}_{+}$. 
\end{proposition} \begin{proof} If $\mu^* \notin \mathcal{I}$, then $\mu^* < -\frac{1}{\lambda_1}$ and therefore the first component of $\bm x(\mu^*)$ reads \begin{equation} x_1(\mu^*) = \frac{x^0_1}{1+\mu^* \lambda_1} \, . \end{equation} As the denominator is negative, $\bm x(\mu^*)$ belongs to a different orthant than $\bm x^0$. \end{proof} \begin{proposition}\label{prop:optimal_is_root} If $x^0_i \neq 0$ for all $i = 1, \hdots, n$ and $\bm \lambda >0$, then the optimal solution of~\cref{eq:proj_quadric_std} is given by the unique root $\mu^*$ of $f$ restricted to $\mathcal{I}$. \end{proposition} \begin{proof} As shown in~\cref{sub:KKT_conditions}, the optimal solution $\bm x^*$ is a KKT point, meaning that it satisfies~\cref{eq:grad_lagrangian_not_sphere}. By~\cref{prop:same_orthant_ellipsoid}, $\bm x^*$ belongs to the same orthant as $\bm x^0$; we are therefore interested in the KKT points lying in the first orthant. Moreover,~\cref{prop:uniqueness_of_solution_orthant} shows that the multipliers $\mu^*$ of the KKT points belonging to the same orthant as $\bm x^0$ are located in $\mathcal{I}$, and~\cref{prop:uniqueness_root} proves the existence and uniqueness of a root on $\mathcal{I}$, which therefore corresponds to the optimal solution of~\cref{eq:proj_quadric_std}. \end{proof} \begin{proposition}\label{prop:f_convex} $f$ is strictly convex on $\mathcal{I}$. \end{proposition} \begin{proof} $f \in \mathcal{C}^2(\mathcal{I})$ and we compute \[ f''(\mu) = 6 \sum_{i=1}^n \frac{(\lambda_i x^0_i)^2 \lambda_i}{(1+\mu \lambda_i)^4}, \] which is positive on $\mathcal{I}$. \end{proof} \begin{proposition}\label{prop:convergence_newton} Let $\mu^0 \in \mathcal{I} =] -\frac{1}{\lambda_1}, +\infty[$ with $f(\mu^0)>0$. The Newton-Raphson algorithm with starting point $\mu^0$ converges to $\mu^*$, the unique root of $f$ on $\mathcal{I}$ (as in~\cref{prop:optimal_is_root}).
\end{proposition} \begin{proof} Let us prove by induction on $k$ that the sequence $(\mu^k)_{k \in \mathbb{N}}$ provided by Newton's method is an increasing sequence upper bounded by $\mu^*$. The Newton-Raphson iterate for $k = 0,1,\hdots$ is given by \begin{equation}\label{eq:newton_iterate} \mu^{k+1} = \mu^{k} - \frac{f(\mu^k)}{f'(\mu^k)} \, . \end{equation} By the induction hypothesis, $f(\mu^k) > 0$ for $\mu^k \neq \mu^*$, while $f'(\mu^k) < 0$ by~\cref{prop:f_decreasing}; the Newton step $-f(\mu^k)/f'(\mu^k)$ is therefore positive and \[ \mu^k < \mu^{k+1}.\] Since $f$ is strictly convex on $\mathcal{I}$ (\cref{prop:f_convex}), the tangent of $f$ at a given point is below any chord starting from this point. In particular we have \[ f'(\mu^k) < \frac{f(\mu^k) - f(\mu^*)}{\mu^k - \mu^*} \, . \] Using $f(\mu^*) = 0$ and rearranging (both the multiplication by $\mu^k - \mu^* < 0$ and the division by $f'(\mu^k) < 0$ reverse the inequality), we obtain \begin{align*} \mu^k - \frac{f(\mu^k)}{f'(\mu^k)} &< \mu^*, \\ \mu^{k+1} &< \mu^*. \end{align*} Since the sequence $(\mu^k)_{k\in\mathbb{N}}$ is strictly increasing (for $\mu^k \neq \mu^*$) and bounded, it must converge to a fixed point of~\cref{eq:newton_iterate}, which corresponds to a root of $f$. This concludes the proof as, by~\cref{prop:optimal_is_root}, there is a unique root of $f$ on $\mathcal{I}$, corresponding to the optimal solution of~\cref{eq:proj_quadric_std}. \end{proof} \subsubsection{2D example of a nondegenerate projection onto an ellipse} \label{ssub:2D_example_of_nondegenerate_projection_onto_an_ellipse} Figure~\ref{fig:x_trajectories_mu} presents an example of a nondegenerate projection, that is, with $\bm x^0 >0$, onto an ellipse. We plot $\bm x(\mu)$, $f(\mu)$, $f'(\mu)$ and $\norm{\bm x(\mu)- \bm x^0}_2$ for $\mu$ ranging on $]-\infty, \infty[$. Let us describe how $\bm x(\mu)$, in the top left subfigure, varies when $\mu$ decreases from $+\infty$ to $-\infty$. For $\mu \to +\infty$, we have $\bm x(\mu) \to \bm d$, where $\bm d = \bm{0}$ is the quadric centre depicted as a blue dot. Then, while decreasing $\mu$ to $0$, we reach $\bm x(0) = \bm x^0$.
For $\mu \to -\frac{1}{\lambda_1}$, $\bm x(\mu)$ follows an asymptote \emph{and crosses the quadric at $\bm x(\mu^*)$}, the optimal solution of~\cref{eq:proj_quadric_std}, depicted as a purple triangle. Further decreasing $\mu$, $\bm x(\mu)$ reappears on the left part of the asymptote ($x_1 \to -\infty$) and tends to the asymptote ($x_2 \to +\infty$) defined by the other eigenvalue. Finally, $\bm x(\mu)$ converges to the quadric centre when $\mu \to -\infty$, passing again through the quadric at $\bm x(\mu^{**})$, the maximizer of the distance in~\cref{eq:proj_quadric_std}, depicted as a purple square. The function $f$ is also depicted with its two roots. Note that, depending on the parameters of the problem, it may have one or two additional roots corresponding to local minima or maxima. We also observe in the bottom right figure that $f'(\mu)$ is negative on $\mathcal{I}$, and show the distance to $\bm x^0$ for the different values of $\bm x(\mu)$ in the bottom left figure. \begin{figure} \caption{Image of $\bm x(\mu)$ and graphs of $f(\mu)$, $f^\prime(\mu)$, $\norm{\bm x(\mu) - \bm x^0}_2$ for $\mu$ ranging on $]-\infty, +\infty[$ in the nondegenerate elliptic case, here $\mathcal{Q} (\Psi) := \mathcal{Q}$.} \label{fig:x_trajectories_mu} \end{figure} \subsection{Hyperboloid case, \texorpdfstring{$\mathbf{x}^0 > \mathbf{0}$}{x0 > 0}} \label{sub:Hyperboloid_case} In the hyperboloid case, there is at least one positive and one negative eigenvalue of $\bm{B}$. Let $1 \leq p \leq n-1$ be the number of positive eigenvalues. Let us consider $e_1:= -\frac{1}{\lambda_1}$ and $e_2 := -\frac{1}{\lambda_{n}}$. We have $0 \in ]e_1, e_2[$. We will work analogously to the ellipsoid case, with $\mathcal{I} := ]e_1, e_2[$. \cref{prop:same_orthant_ellipsoid} makes no assumption on the positivity of $\bm \lambda$, and therefore remains valid.
The other propositions can be successively adapted: \cref{prop:f_decreasing_hyperboloid} adapts~\cref{prop:f_decreasing}, \cref{prop:uniqueness_root_hyperboloid} adapts~\cref{prop:uniqueness_root}, \cref{prop:uniqueness_of_solution_orthant_hyperboloid} adapts~\cref{prop:uniqueness_of_solution_orthant}, and finally the main result remains valid, \emph{i.e.}, \cref{prop:optimal_is_root_hyperboloid} adapts~\cref{prop:optimal_is_root}. We then propose \cref{alg:double_newton}, also based on Newton-Raphson, to efficiently compute the root of $f$ in $\mathcal{I}$, and hence one of the optimal solutions of~\cref{eq:proj_quadric_std}. An example of the hyperbolic case is provided in~\cref{fig:x_trajectories_mu_hyperbola}. \begin{proposition}\label{prop:f_decreasing_hyperboloid} $f$, defined as in~\cref{eq:f_mu}, is strictly decreasing on $\mathcal{I}:= \left ] e_1, e_2 \right [$. \end{proposition} \begin{proof} Since $f \in \mathcal{C}^1(\mathcal{I})$, we compute \begin{equation} \label{eq:df_mu_hyperboloid} f'(\mu) = -2 \sum_{i=1}^p \frac{(\lambda_i x^0_i)^2}{{\underbrace{(1+\mu \lambda_i)}_{>0 \text{ if }\mu > e_1}}^3} -2 \sum_{i=p+1}^n \frac{(\lambda_i x^0_i)^2}{{\underbrace{(1+\mu \lambda_i)}_{>0 \text{ if }\mu < e_2}}^3} , \end{equation} and this function is negative on $\mathcal{I}$. \end{proof} \begin{proposition}\label{prop:uniqueness_root_hyperboloid} Function $f$ restricted to $\mathcal{I}$ with $\bm x^0 >0$ has one and only one zero. \end{proposition} \begin{proof} By~\cref{prop:f_decreasing_hyperboloid}, $f$ is strictly decreasing, and hence has \emph{at most} one zero. Moreover, $\lim\limits_{\mu \to e_1}f(\mu) = +\infty$ and $\lim\limits_{\mu \to e_2} f(\mu) = - \infty$; the continuity of $f$ on $\mathcal{I}$ implies the existence of the zero on $\mathcal{I}$. \end{proof} \begin{proposition}\label{prop:uniqueness_of_solution_orthant_hyperboloid} If $\mu^*$ is a root of $f$, and $\mu^* \notin \mathcal{I}$, then $\bm x(\mu^*) \notin \mathbb{R}^{n}_+$. 
\end{proposition} \begin{proof} If $\mu^* \notin \mathcal{I}$, then either $\mu^* < -\frac{1}{\lambda_1}$ or $\mu^* > -\frac{1}{\lambda_{n}}$. The first case is already treated in the proof of~\cref{prop:uniqueness_of_solution_orthant}. For the second case, we note that \begin{equation} x_{n}(\mu^*) = \frac{x^0_{n}}{1+\mu^* \lambda_{n}}. \end{equation} As $\lambda_{n}<0$, the denominator is negative, and thus $\bm x(\mu^*)$ belongs to a different orthant than $\bm x^0$. \end{proof} \begin{proposition}\label{prop:optimal_is_root_hyperboloid} If $x^0_i \neq 0$ for all $i = 1, \hdots, n$, then the optimal solution of~\cref{eq:proj_quadric_std} is given by the unique root $\mu^*$ of $f$ restricted to $\mathcal{I}:=]e_1, e_2[$. \end{proposition} \begin{proof} Since~\cref{prop:same_orthant_ellipsoid,prop:uniqueness_of_solution_orthant,prop:uniqueness_root} are also valid in the hyperboloid case with $\mathcal{I}:=]e_1, e_2[$, the proof is identical to~\cref{prop:optimal_is_root}. \end{proof} \begin{proposition}\label{prop:uniqueness_inflexion} There exists a unique inflexion point, $\mu^\mathrm{I}$, of $f$ on $\mathcal{I}$. \end{proposition} \begin{proof} This follows from the monotonicity of $f^{\prime\prime} \in \mathcal{C}^{\infty}(\mathcal{I})$, \emph{i.e.}, $f^{\prime\prime\prime}(\mu) < 0$ for all $\mu \in \mathcal{I}$, and because $\lim\limits_{\mu \to e_1^+} f^{\prime\prime}(\mu) = - \lim\limits_{\mu \to e_2^-} f^{\prime\prime}(\mu) = +\infty $. \end{proof} Since there is a single inflexion point $\mu^{\mathrm{I}}$, we can launch in parallel two Newton's algorithms and guarantee that at least one will converge. \begin{algorithm} \caption{Double Newton} \label{alg:double_newton} \begin{algorithmic} \IF{$f(0) <0$} \STATE Use bisection method (see~\cite[Chapter 2.1]{Burden01}) to find $\mu_\mathrm{s} \in ]e_1, 0[$ s.t. $f(\mu_\mathrm{s}) >0$ \ELSIF{$f(0) > 0$} \STATE Use bisection method to find $\mu_\mathrm{s} \in ]0, e_2[$ s.t. 
$f(\mu_\mathrm{s}) <0$ \ELSE \RETURN 0 \ENDIF \STATE $\mu_0 \gets \textrm{Newton}(0)$ \COMMENT{This and the next line are run in parallel} \STATE $\mu_1 \gets \textrm{Newton}(\mu_{\textrm{s}})$ \RETURN $\mu_0, \mu_1$ \COMMENT{Return the output of whichever of the two parallel Newton runs finishes first} \end{algorithmic} \end{algorithm} \begin{proposition} One of the two Newton's methods of~\cref{alg:double_newton} converges to $\mu^*$, with $\bm x(\mu^*)$ (defined in~\cref{eq:x_not_sphere}) the optimal solution of~\cref{eq:proj_quadric_std}. \end{proposition} \begin{proof} This proof relies on the double initialization of Newton's method in~\cref{alg:double_newton}: one run starts at a point where $f$ is positive, and the other at a point where $f$ is negative. We comment on the sign of $f(\mu^\mathrm{I})$: \begin{itemize} \item If $f(\mu^{\mathrm{I}}) < 0$, then $\mu^* \in ]e_1, \mu^{\mathrm{I}}[$ and the function is convex on this interval. The situation is similar to~\cref{prop:convergence_newton}, and any starting point $\mu_\mathrm{s}$ with $f(\mu_\mathrm{s}) >0$ is a valid starting point, in the sense that the sequence of iterates converges to $\mu^*$. \item If $f(\mu^{\mathrm{I}}) > 0$, then $\mu^* \in ]\mu^{\mathrm{I}}, e_2[$ and the function is concave on this interval. Using a similar argument as in~\cref{prop:convergence_newton}, any starting point $\mu_\mathrm{s}$ with $f(\mu_\mathrm{s}) <0$ is a valid starting point. \item If $f(\mu^{\mathrm{I}}) = 0$, any starting point in $\mathcal{I}$ is a valid starting point. \end{itemize} \end{proof} Remark that with the knowledge of the value of $\mu^{\mathrm{I}}$, we could launch a single Newton scheme with the appropriate starting point. Unfortunately, computing $\mu^{\mathrm{I}}$ amounts to computing the root of $f''$, which is at least as costly as finding the root of $f$.
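The root-finding step can be sketched numerically as follows. For brevity, this illustration replaces the two parallel Newton runs of the algorithm above by a single bisection-safeguarded Newton iteration on a bracket $[a,b] \subset \mathcal{I}$ with $f(a) > 0 > f(b)$; the safeguard plays the role of the second, bisection-initialized run, and convergence rests on the same monotonicity of $f$ (\cref{prop:f_decreasing_hyperboloid}). Helper names are ours.

```python
# Bisection-safeguarded Newton for the root of f on a bracket [a, b] with
# f(a) > 0 > f(b) (f is strictly decreasing there). A simplified stand-in
# for the double-Newton scheme, not the paper's exact algorithm.

def f(mu, lam, x0):
    return sum(l * (x / (1.0 + mu * l)) ** 2
               for l, x in zip(lam, x0) if x != 0.0) - 1.0

def df(mu, lam, x0):
    # f'(mu) = -2 * sum (lambda_i x^0_i)^2 / (1 + mu*lambda_i)^3
    return -2.0 * sum((l * x) ** 2 / (1.0 + mu * l) ** 3
                      for l, x in zip(lam, x0) if x != 0.0)

def safeguarded_newton(lam, x0, a, b, tol=1e-12, itmax=200):
    mu = 0.5 * (a + b)
    for _ in range(itmax):
        fm = f(mu, lam, x0)
        if abs(fm) < tol:
            break
        # maintain the bracket: f > 0 left of the root, f < 0 right of it
        if fm > 0.0:
            a = mu
        else:
            b = mu
        step = mu - fm / df(mu, lam, x0)
        mu = step if a < step < b else 0.5 * (a + b)  # bisection fallback
    return mu

# Hyperbola lam = [1, -1], x0 = (2, 1): the root lies in I = ]-1, 1[.
lam, x0 = [1.0, -1.0], [2.0, 1.0]
mu_star = safeguarded_newton(lam, x0, -1.0 + 1e-6, 1.0 - 1e-6)
```

Because the bracket shrinks at every step, the fallback guarantees convergence even when a raw Newton step would overshoot past the inflexion point.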
\subsubsection{2D example of a nondegenerate projection onto a hyperbola} \label{ssub:2D_example_of_nondegenerate_projection_onto_an_hyperbola} Figure~\ref{fig:x_trajectories_mu_hyperbola} shows an example of a nondegenerate projection onto a hyperbola. We observe a similar image of $\bm x(\mu)$, with two asymptotes. We see that $f(\mu)$ has a unique inflexion point on $\mathcal{I}$. In this example, the inflexion point is to the right of the root, and thus we know that starting a Newton-Raphson scheme at some $\mu_{\mathrm{s}}$ with $f(\mu_\mathrm{s}) >0$ yields a sequence that converges to $\mu^*$. \begin{figure} \caption{Image of $\bm x(\mu)$ and graphs of $f(\mu)$, $f^\prime(\mu)$, $\norm{\bm x(\mu) - \bm x^0}_2$ for $\mu$ ranging on $]-\infty, +\infty[$ in the nondegenerate hyperbolic case, here $\mathcal{Q} (\Psi) := \mathcal{Q}$.} \label{fig:x_trajectories_mu_hyperbola} \end{figure} \subsection{Degenerate case, \texorpdfstring{$\mathbf{x}^0 \geq \mathbf{0}$}{x0 >= 0}} \label{sub:degenerate_case} Let us first assume that all eigenvalues of $\bm{B}$ are distinct; the case with repeated eigenvalues is treated at the end of the current section. \subsubsection{All eigenvalues are distinct} \label{ssub:All_eigenvalues_are_distinct} The discussion is largely similar to~\cref{sub:Hyperboloid_case}, with the following differences: i) $f$ is continuous at $-\frac{1}{\lambda_i}$ if the associated component of $\bm x^0$ is equal to zero and ii) at most two additional KKT points can be obtained for each component of $\bm x^0$ that is equal to zero. We tackle these issues as follows. First, we arbitrarily decide to single out a solution in the first orthant. Second, we change the definition of $e_1$, $e_2$, to account for the continuity of $f$ at $-\frac{1}{\lambda_i}$: $\lim\limits_{\mu \to -\frac{1}{\lambda_i}}f(\mu) \neq \infty$ if $x^0_i = 0$. Finally, we show how to analytically compute these additional solutions.
Let $I := \left \{1, 2, \hdots, n \right \}$ and let $K := \left \{ i \in I \left | \right . {x^0_i = 0} \right \}$ be the set of indices of the zero entries of $\bm x^0$. It is clear that~\cref{prop:same_orthant_ellipsoid} is not valid any more as soon as $K$ is nonempty. Indeed, if $\bm x^*$ is an optimal solution that belongs to the same orthant as $\bm x^0$, then $\bm x^{**}$ defined as follows \begin{equation} x^{**}_i = \begin{cases} x^*_i & \forall i\notin K,\\ -x^*_i & \forall i\in K,\\ \end{cases} \end{equation} is also an optimal solution. In fact, up to ${2^{\size{K}}} -1$ solutions outside the first orthant can be obtained by mirroring $\bm x^*$ along a selected set of components in $K$. As we are interested in finding \emph{one} of the optimal solutions, we note that we can restrict our search to the first orthant. \begin{proposition}\label{prop:same_orthant_degenerate} Given $\bm x^0 \geq \bm{0}$, there exists an optimal solution $\bm x^*$ of~\cref{eq:proj_quadric_std} such that $\bm x^* \geq \bm{0}$. \end{proposition} \begin{proof} Let $\bm x^*$ be an optimal solution. The existence of $\bm x^*$ follows from~\cref{prop:existence_projection_quadric}. Using a similar argument as in the proof of~\cref{prop:same_orthant_ellipsoid}, we have $\mathrm{sign}(x^0_j) = \mathrm{sign}(x^*_j)$ for all $j \in I \setminus K$. Let \[x^{**}_i = \begin{cases} -x^*_i & \forall i\in K \text{ with } \textrm{sign}(x^*_i) \neq \textrm{sign}(x^0_i),\\ x^*_i & \text{elsewhere.} \end{cases} \] This feasible point has the same objective as $\bm x^*$ and is located in the same orthant as $\bm x^0$. \end{proof} For $\mu \notin -\frac{1}{\spec{\bm{B}}}$, we change the definition of $e_1, e_2$ as \begin{align*}\label{eq:e_1_e_2_degenerate} e_1 &= \max_{\{i\in I \left | \right .{\lambda_i>0, x^0_i \neq 0}\}} -\frac{1}{\lambda_i} \numberthis \\ e_2 &= \min_{\{i\in I \left | \right .{\lambda_i < 0, x^0_i \neq 0}\}} -\frac{1}{\lambda_i} \end{align*} and we set $e_1 := -\infty$ (resp.\ $e_2 := +\infty$) whenever the corresponding index set is empty.
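The redefined bounds of~\cref{eq:e_1_e_2_degenerate} translate directly into code; here is a minimal sketch (helper name ours), where the empty-set convention yields an infinite bound.

```python
import math

def degenerate_bracket(lam, x0):
    """e1, e2 of the degenerate definition: only the poles -1/lambda_i whose
    component x^0_i is nonzero remain poles of f; the others disappear by
    continuity. Empty index sets give -inf / +inf."""
    pos = [-1.0 / l for l, x in zip(lam, x0) if l > 0 and x != 0.0]
    neg = [-1.0 / l for l, x in zip(lam, x0) if l < 0 and x != 0.0]
    e1 = max(pos) if pos else -math.inf
    e2 = min(neg) if neg else math.inf
    return e1, e2

# Two positive and one negative eigenvalue; the pole of lambda = 2 vanishes
# because the matching component of x0 is zero:
print(degenerate_bracket([2.0, 1.0, -1.0], [0.0, 1.0, 1.0]))  # (-1.0, 1.0)
```

The interval $\mathcal{I} = ]e_1, e_2[$ returned here is the one on which the adapted propositions below locate the root of $f$.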
This takes into account the continuity of $f$ at $\mu = -\frac{1}{\lambda_i}$ if $x_i^0 = 0$ for some index $i$. Let us adapt~\cref{prop:f_decreasing_hyperboloid,prop:uniqueness_root_hyperboloid,prop:uniqueness_of_solution_orthant_hyperboloid} to the degenerate case. \begin{proposition}\label{prop:f_decreasing_hyperboloid_degenerate} $f$, defined as in~\cref{eq:f_mu}, is strictly decreasing on $\mathcal{I}:= \left ] e_1, e_2 \right [$. \end{proposition} \begin{proof} Since $f \in \mathcal{C}^1(\mathcal{I})$, we compute \begin{equation} \label{eq:df_mu_hyperboloid_degenerate} f'(\mu) = -2 \sum_{i=1,i \notin K}^p \frac{(\lambda_i x^0_i)^2}{{\underbrace{(1+\mu \lambda_i)}_{>0 \text{ if }\mu > e_1}}^3} -2 \sum_{i=p+1, i\notin K}^n \frac{(\lambda_i x^0_i)^2}{{\underbrace{(1+\mu \lambda_i)}_{>0 \text{ if }\mu < e_2}}^3} , \end{equation} and this function is negative on $\mathcal{I}$. \end{proof} \begin{proposition}\label{prop:uniqueness_root_hyperboloid_degenerate} Function $f$ restricted to $\mathcal{I}$ has one and only one zero if $\exists i \in I^+:= \left \{ i\in I \left | \right .{\lambda_i >0} \right \}$ with $x^0_i \neq 0$. \end{proposition} \begin{proof} We note that the technical assumption on $x^0_i$ ensures that $\lim\limits_{\mu \to e_1} f(\mu) >0$. For $\lim\limits_{\mu \to e_2} f(\mu)$ we distinguish two cases: \begin{itemize} \item either $e_2 = +\infty$ and $\lim\limits_{\mu \to e_2} f(\mu) = -1$; \item or $e_2 = \min\limits_{i \in I \left | \right .{\lambda_i <0, x^0_i \neq 0}} -\frac{1}{\lambda_i}$ and $\lim\limits_{\mu \to e_2} f(\mu) = -\infty$. \end{itemize} Since in both cases the limit is negative, and $f$ is continuous and strictly decreasing on $\mathcal{I}$, there exists a unique zero on this interval. \end{proof} \begin{proposition}\label{prop:uniqueness_of_solution_orthant_hyperboloid_degenerate} If $\mu^*$ is a root of $f$, and $\mu^* \notin \mathcal{I}$, then $\bm x(\mu^*) \notin \mathbb{R}^{n}_+$.
\end{proposition} \begin{proof} If $\mu^* \notin \mathcal{I}$, then either $e_1 \neq -\infty$ and $\mu^* < e_1$, or $e_2 \neq +\infty$ and $\mu^* > e_2$. The proof follows from the definition of $e_i$, \emph{e.g.}, in the first case we note that \[ x_{i_1}(\mu^*) = \frac{x^0_{i_1}}{1+\mu^* \lambda_{i_1}}, \] where $i_1 = \argmax_{\{i\in I \left | \right .{\lambda_i>0, x^0_i \neq 0}\}} -\frac{1}{\lambda_i} $. This implies that $\bm x(\mu^*)$ belongs to a different orthant than $\bm x^0$ since the numerator is nonzero and the denominator is negative. \end{proof} Remark that if $x^0_i = 0$ for all $i \in I^+$, meaning that the assumption on $\bm x^0$ of~\cref{prop:uniqueness_root_hyperboloid_degenerate} does not hold, then $f(\mu)$ reads \[ f(\mu) = \sum_{i \in I^-} \lambda_i \left( \frac{x^0_i}{1+\mu \lambda_i} \right)^2 -1, \] where $I^- := I \setminus I^+ = \left \{p+1, p+2, \hdots, n \right \}$. This function is negative on $\mathbb{R}$. In this specific case, \emph{$f$ does not provide any KKT point}; such a situation is depicted in~\cref{fig:x_trajectories_mu_hyperbola_degenerate_no_root}. However, the problem is solvable, due to additional KKT points that appear when $\bm x^0$ is located on the axes. Indeed, if $\mu = -\frac{1}{\lambda_k}$ for $k \in K$ then the $k$-th entry of~\cref{eq:grad_lagrangian_not_sphere} reads \begin{equation} 2(x_k - x_k^0) + 2 \mu \lambda_k x_k = 0 \label{eq:lagrangian_degenerate} \end{equation} which holds regardless of the value of $x_k$. Therefore, we obtain at most \emph{two additional} solutions of the Lagrangian system~\cref{eq:grad_lagrangian_not_sphere}. Geometrically, this corresponds to looking at the intersection between i) a line perpendicular to the axis corresponding to the component $k$ where $x^0_k = 0$ and ii) the quadric.
These solutions, if they exist, can be computed as \begin{equation} \label{eq:x_d_degenerate} (\bm x^{\mathrm{d}}_k)_i = \begin{cases} \frac{x^0_i}{1- \frac{\lambda_i}{\lambda_k}} & \text{if } i \neq k ,\\ \pm \sqrt{\frac{1}{\lambda_k} \left ( 1-\sum\limits_{j\in I,j \neq k} \lambda_j \left( \frac{x^0_j}{1-\frac{\lambda_j}{\lambda_k}} \right )^2 \right ) } & \text{if } i=k , \end{cases} \end{equation} where $(\cdot)_i$ selects the $i-$th component, and we choose the ``$+$'' solution that lies in the first orthant. Such a situation is depicted in~\cref{fig:x_trajectories_mu_degenerate}. We observe that $\bm x(\mu)$ moves around the axis corresponding to the component of $\bm x^0$ which is equal to zero. Moreover, the additional solution is depicted in green in~\cref{fig:x_trajectories_mu_degenerate,fig:x_trajectories_mu_degenerate_2}. Note that in~\cref{fig:x_trajectories_mu_degenerate}, the optimal solution is a root of $f$ and in~\cref{fig:x_trajectories_mu_degenerate_2}, it is the additional solution. Remark that there is no intersection, and therefore no additional solution to~\cref{eq:grad_lagrangian_not_sphere}, if $\frac{1}{\lambda_k} \left ( 1-\sum\limits_{j \in I, j \neq k} \lambda_j \left( \frac{x^0_j}{1-\frac{\lambda_j}{\lambda_k}}\right)^2 \right ) < 0$, see, \emph{e.g.},~\cref{fig:x_trajectories_mu_hyperbola_degenerate_no_green}. \subsubsection{2D examples of degenerate projections} \label{ssub:2D_example_of_degenerate_projections} \cref{fig:x_trajectories_mu_degenerate,fig:x_trajectories_mu_degenerate_2} show two examples of \emph{degenerate} projections onto an ellipse. \cref{fig:x_trajectories_mu_degenerate} depicts an example where the optimal solution is given by the KKT point corresponding to the root of $f$. \cref{fig:x_trajectories_mu_degenerate_2} depicts an example where the optimal solution is given by the KKT point corresponding to $\mu = -\frac{1}{\lambda_2}$. 
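The additional candidates of~\cref{eq:x_d_degenerate} are available in closed form; a small sketch follows (helper name ours; it assumes distinct eigenvalues and $x^0_k = 0$, keeps the ``$+$'' branch lying in the first orthant, and returns \texttt{None} when the square-root argument is negative, i.e., when the line misses the quadric).

```python
import math

def x_d(k, lam, x0):
    """Additional KKT candidate for mu = -1/lambda_k.
    Requires x0[k] == 0 and lam[i] != lam[k] for i != k."""
    n = len(lam)
    xs = [0.0] * n
    s = 0.0
    for i in range(n):
        if i != k:
            xs[i] = x0[i] / (1.0 - lam[i] / lam[k])
            s += lam[i] * xs[i] ** 2
    arg = (1.0 - s) / lam[k]
    if arg < 0.0:
        return None  # the line does not intersect the quadric
    xs[k] = math.sqrt(arg)  # '+' branch: first-orthant solution
    return xs

# Ellipse 4*x1^2 + x2^2 = 1 with x0 = (0, 0.1): the candidate for k = 0
# exists and lies on the quadric.
cand = x_d(0, [4.0, 1.0], [0.0, 0.1])
```

For the ellipse $x_1^2 + 4 x_2^2 = 1$ with $\bm x^0 = (0, 2)$, the same call returns \texttt{None}: the horizontal line through the mirrored component misses the quadric, matching the no-intersection situation described above.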
Notice that in these (degenerate) cases, one of the asymptotes of the $\bm x(\mu)$ image disappears, and the image hence lies along one of the axes. Moreover, one of the discontinuities of $f$, $f^\prime$ and $\norm{\bm x(\mu) - \bm x^0}_2$ disappears as, \emph{e.g.}, $\lim\limits_{\mu \to -\frac{1}{\lambda_k}} f(\mu) = \sum\limits_{i=1, i\neq k}^n \lambda_i \left ( \frac{x^0_i}{1 - \frac{\lambda_i}{\lambda_k}} \right )^2 -1 \neq \infty$. \begin{figure} \caption{Image of $\bm x(\mu)$ and graphs of $f(\mu)$, $f^\prime(\mu)$, $\norm{\bm x(\mu) - \bm x^0}_2$ for $\mu$ ranging on $]-\infty, +\infty[$ in the degenerate elliptic case. The optimal solution is $\bm x(\mu^*)$, the root of $f$.} \label{fig:x_trajectories_mu_degenerate} \end{figure} \begin{figure} \caption{Image of $\bm x(\mu)$ and graphs of $f(\mu)$, $f^\prime(\mu)$, $\norm{\bm x(\mu) - \bm x^0}_2$ for $\mu$ ranging on $]-\infty, +\infty[$ in the degenerate elliptic case. The optimal solution is the additional KKT point corresponding to $\mu = -\frac{1}{\lambda_2}$.} \label{fig:x_trajectories_mu_degenerate_2} \end{figure} \cref{fig:x_trajectories_mu_hyperbola_degenerate_no_root,fig:x_trajectories_mu_hyperbola_degenerate_no_green} show two examples of \emph{degenerate} projections onto a hyperbola. \cref{fig:x_trajectories_mu_hyperbola_degenerate_no_root} depicts an example where $f$ has no root. This is not an issue because the optimal solution is given, in this case, by one of the $\bm x^\mathrm{d}_k$, depicted in green, which are derived in~\cref{eq:x_d_degenerate}. \cref{fig:x_trajectories_mu_hyperbola_degenerate_no_green} shows an example where there is no intersection between the grey line and the quadric, and therefore no $\bm x^\mathrm{d}_k$. This is not an issue because then there must exist a root $\mu^*$ of $f$ on $\mathcal{I}$, which is the optimal solution (purple triangle). Remark that, if $\sum_{i=1}^n \lambda_i (x^0_i)^2-1 > 0$, then both $\bm x(\mu^*)$ and $\bm x^\mathrm{d}_k$ are KKT points, and one of them is the optimal solution. \begin{figure} \caption{Image of $\bm x(\mu)$ and graphs of $f(\mu)$, $f^\prime(\mu)$, $\norm{\bm x(\mu) - \bm x^0}_2$ for $\mu$ ranging on $]-\infty, +\infty[$ in the degenerate hyperbolic case.
The optimal solution is $\bm x^\mathrm{d}_1$, as $f$ has no root.} \label{fig:x_trajectories_mu_hyperbola_degenerate_no_root} \end{figure} \begin{figure} \caption{Image of $\bm x(\mu)$ and graphs of $f(\mu)$, $f^\prime(\mu)$, $\norm{\bm x(\mu) - \bm x^0}_2$ for $\mu$ ranging on $]-\infty, +\infty[$ in the degenerate hyperbolic case. The optimal solution is $\bm x(\mu^*)$, the root of $f$. There is no additional KKT point $\bm x^\mathrm{d}_k$; the grey line in the upper left panel has no intersection with the quadric.} \label{fig:x_trajectories_mu_hyperbola_degenerate_no_green} \end{figure} \subsubsection{Some eigenvalues are repeated} \label{ssub:Some_eigenvalues_are_repeated} Let $\overline{\bm \lambda}$ be the vector of the distinct eigenvalues of $\bm{B}$, sorted in descending order, let $k \in \left \{ 1,\hdots,\size{\overline{\bm \lambda}} \right \}$ be a given component of $\overline{\bm{\lambda}}$, let $L_k$ be the subset of $I$ of indices sharing this eigenvalue, \emph{i.e.}, $L_k := \left \{l \in I \left | \right .{\lambda_l=\overline{\lambda}_k} \right \}$, and let $K_{k}$ be the subset of $L_k$ where the associated component of $\bm x^0$ is equal to zero, \emph{i.e.}, \[ K_{k} := \left \{ i \in I \left | \right .{\lambda_i = \overline{\lambda}_k, x^0_i = 0} \right \}. \] \begin{proposition}\label{prop:L_K} Let $L_k$ and $K_{k}$ be defined as above. There exists a solution of~\cref{eq:grad_lagrangian_not_sphere} with $\mu^* = -\frac{1}{\overline{\lambda}_k}$ only if $L_k = K_k$. \end{proposition} \begin{proof} Let $(\bm x^*, \mu^*)$ be a solution of~\cref{eq:grad_lagrangian_not_sphere} with $\mu^*=-\frac{1}{\overline{\lambda}_k}$. Let us assume, for the sake of contradiction, that $L_k \neq K_{k}$, or equivalently, that there exists $i \in I$ with $\lambda_i = \overline{\lambda}_k$ but $x^0_i \neq 0$. The $i$-th component of~\cref{eq:grad_lagrangian_not_sphere} reads \[2(x_i - x_i^0) - 2 x_i = 0 , \] \emph{i.e.}, $x^0_i = 0$, which contradicts $x^0_i \neq 0$.
\end{proof} Remark that~\cref{prop:L_K} only states a necessary condition: it is possible that no solution of~\cref{eq:grad_lagrangian_not_sphere} with $\mu^* = -\frac{1}{\overline{\lambda}_k}$ exists even though $L_k = K_k$, see, \emph{e.g.},~\cref{fig:x_trajectories_mu_hyperbola_degenerate_no_root}. If $\size{K_{k}}=1$, the discussion is analogous to the previous paragraph: at most two KKT solutions are obtained as the intersection between a line and the quadric. For $\size{K_{k}}>1$, we have to take the intersection between the plane $\pi := \left \{ \bm x \in \mathbb{R}^{n} \left | \right .{x_{k'} = 0 \, \forall k' \in K_{k}} \right \}$ and the quadric. Geometrically, the intersection---if there is one, \emph{i.e.}, if the argument of the square root below is positive---is a $(\size{K_{k}}-1)$-dimensional hypersphere in the corresponding subspace of $\mathbb{R}^{n}$: \begin{align*} \pi \bigcap \mathcal{Q} = & \left \{ \vphantom{\frac{1}{\overline{\lambda}_k} \left ( 1- \sum\limits_{j\in I \setminus K_{k}} \lambda_j \left( \frac{x^0_j}{1-\frac{\lambda_j}{\overline{\lambda}_k}} \right)^2 \right )} \bm x \in \mathbb{R}^{n} \text{ s.t.\ } x_i = \frac{x^0_i}{1- \frac{\lambda_i}{\overline{\lambda}_k}} ~\text{if } i \notin K_{k}, \right . \\ \label{eq:hyper_sphere} \numberthis & \left . \sum_{i\in K_{k}} x_i^2 = \frac{1}{\overline{\lambda}_k} \left ( 1- \sum\limits_{j\in I \setminus K_{k}} \lambda_j \left( \frac{x^0_j}{1-\frac{\lambda_j}{\overline{\lambda}_k}} \right)^2 \right ) \right \} \end{align*} and every point belonging to this hypersphere is a KKT point. Moreover, all the points in this hypersphere achieve the same value for the objective function of~\cref{eq:proj_quadric_std}. Hence, for the purpose of finding one of the optimal solutions of~\cref{eq:proj_quadric_std}, we can keep in our list of candidates just one element of~\cref{eq:hyper_sphere}.
In particular, we can arbitrarily select \emph{one} solution that lies in the first orthant by setting to zero all components indexed by $K_{k}$ except one ($k'$): \begin{equation} \label{eq:x_d_degenerate_full} (\bm x^{\mathrm{d}}_k)_i = \begin{cases} \frac{x^0_i}{1- \frac{\lambda_i}{\overline{\lambda}_k}} & \text{if } i \notin K_{k} ,\\ \sqrt{ \frac{1}{\overline{\lambda}_k} \left ( 1- \sum\limits_{j\in I \setminus K_{k}} \lambda_j \left( \frac{x^0_j}{1-\frac{\lambda_j}{\overline{\lambda}_k}} \right)^2 \right )} & \text{if } i=k' , \\ 0 & \text{if } i \in K_{k}, i\neq k' . \\ \end{cases} \end{equation} Any $k' \in K_k$ works; let us choose, without loss of generality, $k' := \min\limits_{i \in K_k} i$. \index{subspace} As a matter of fact, this is equivalent to restricting the search to the subspace $\{ \bm x \in \mathbb{R}^{n} \left | \right .{x_i = 0 \quad \forall i\in K_{k} \setminus \{k'\} } \}$, because all solutions on the hypersphere have the same objective. In this subspace, the problem is analogous to the case $\size{K_{k}}=1$, \emph{i.e.}, the intersection between a line and a quadric. \subsection{Bringing everything together} \label{sub:Bringing_everything_together} Let us give a full characterization of an optimal solution to~\cref{eq:proj_quadric_std}. \begin{proposition}\label{prop:optimal_full} There is an optimal solution of~\cref{eq:proj_quadric_std} in the set $ \{\bm x(\mu^*)\} \bigcup \bm X^d $ where \begin{itemize} \item $\bm x(\mu^*)$ is defined by~\cref{eq:x_not_sphere}, where $\mu^*$ is the unique root of $f$ on $\mathcal{I} = ]e_1, e_2[$, if such a root exists, and $e_1$, $e_2$ are given by~\cref{eq:e_1_e_2_degenerate}; \item $\bm X^{\mathrm{d}} := \left \{ \bm x^{\mathrm{d}}_{k} \left | \right .{k = 1,\hdots,\size{\overline{\bm \lambda}}, \text{and } \size{K_k}>0} \right \}$ as defined in~\cref{sub:degenerate_case}. \end{itemize} \end{proposition} \begin{proof} Since the quadric is central and non-cylindrical, the gradient of the constraint, whose $i$-th component is $2 \lambda_i x_i$, vanishes only at the centre $\bm 0$, which does not lie on the quadric; hence every feasible point fulfils the LICQ condition.
Hence, the solution of~\cref{eq:proj_quadric_std}---which exists by~\cref{prop:existence_projection_quadric}---must be a KKT point. The KKT points are the solutions of~\cref{eq:grad_lagrangian_not_sphere}, \emph{i.e.}, \[ x_i (1+\mu\lambda_i) = x^0_i \quad \text{ for all } i \in I \quad \text{ and } \quad \sum_{i\in I} \lambda_i x_i^2 =1 . \] Hence $(\bm x^*, \mu^*)$ is a solution of the KKT conditions~\cref{eq:grad_lagrangian_not_sphere} if and only if one of the following holds: \begin{itemize} \item[(i)] $\mu^* \neq -\frac{1}{\lambda_i}$ for all $i$, $\mu^*$ is a root of $f$ defined in \cref{eq:f_mu}, and $\bm x^*$ satisfies~\cref{eq:x_not_sphere}. \item[(ii)] $\mu^* = -\frac{1}{\lambda_k}$ for some $k$, $x^0_i=0$ for all $i$ such that $\lambda_i = \lambda_k$, and, letting $K_k$ be the set of those $i$'s, $\bm x^* \in \pi \bigcap \mathcal{Q}$ defined in~\cref{eq:hyper_sphere}. \end{itemize} In case (i), we have seen that the smallest objective of~\cref{eq:proj_quadric_std} is attained at $\bm x(\mu^*)$, where $\mu^*$ is the---possibly nonexistent---unique root of $f$. In case (ii), all the points in~\cref{eq:hyper_sphere}---which may be empty---achieve the same value for the objective of~\cref{eq:proj_quadric_std}, and~\cref{eq:x_d_degenerate_full}---defined if and only if~\cref{eq:hyper_sphere} is nonempty---is one of those points. Finally, recall that~\cref{prop:existence_projection_quadric} proves the existence of one optimal solution to~\cref{eq:proj_quadric_std}: either case (i) or case (ii) will provide a solution. \end{proof} The full procedure to compute the projection of any point $\bm x^0$ onto a nonempty and non-cylindrical central quadric is given in~\cref{alg:exact_projection_quadric}.
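To make the root-finding step concrete, the following Python sketch (our own illustration, not the paper's implementation) recovers the projection onto an ellipsoidal quadric in normal form under the simplifying assumptions that all $\lambda_i > 0$ and all $x^0_i \neq 0$, using plain bisection instead of the Newton-based schemes referred to below:

```python
import numpy as np

def secular(mu, lam, x0):
    # f(mu) from the text: sum_i lam_i * (x0_i / (1 + mu * lam_i))^2 - 1
    return np.sum(lam * (x0 / (1.0 + mu * lam))**2) - 1.0

def project_ellipsoid(lam, x0, tol=1e-12):
    # Sketch for lam > 0 and x0_i != 0: the unique root of the secular
    # function lies in (e1, +inf); bracket it, then bisect.
    e1 = np.max(-1.0 / lam)               # left end of the interval I
    a, b = e1 + 1e-9, e1 + 1.0
    while secular(b, lam, x0) > 0:        # f decreases towards -1, so a sign change exists
        b += 2.0 * (b - e1)
    while b - a > tol:
        m = 0.5 * (a + b)
        if secular(m, lam, x0) > 0:
            a = m
        else:
            b = m
    mu = 0.5 * (a + b)
    return x0 / (1.0 + mu * lam), mu      # componentwise x(mu)

x, mu = project_ellipsoid(np.array([1.0, 1.0]), np.array([3.0, 4.0]))
# unit circle: the projection of (3, 4) is (0.6, 0.8), with mu* = 4
```

On the unit circle ($\lambda_1 = \lambda_2 = 1$), the secular equation reduces to $\norm{\bm x^0}_2^2/(1+\mu)^2 = 1$, whose root on $\mathcal{I}$ is $\mu^* = \norm{\bm x^0}_2 - 1$.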
\begin{algorithm} \caption{Exact projection onto a non-cylindrical central quadric in normal form} \label{alg:exact_projection_quadric} \begin{algorithmic} \REQUIRE $\bm \lambda$, the eigenvalues corresponding to~\cref{eq:proj_quadric_std}, and $\bm x^0 \in \mathbb{R}^{n}_+$ \STATE $e_1 \gets \max \left \{ \max_{\{i\in I \left | \right .{\lambda_i>0, x^0_i \neq 0}\}} -\frac{1}{\lambda_i}, -\infty \right \}$ \COMMENT{If the inner $\max$ is empty, $e_1 = -\infty$} \STATE $e_2 \gets \min \left \{ \min_{\{i\in I \left | \right .{\lambda_i < 0, x^0_i \neq 0}\}} -\frac{1}{\lambda_i}, +\infty \right \}$ \COMMENT{If the inner $\min$ is empty, $e_2 = +\infty$} \STATE $\bm D \gets \textrm{diag}(\bm \lambda)$ \STATE $\bm x(\mu) \gets (\bm I + \mu \bm D)^{-1} \bm x^0 $ \STATE $f(\mu) \gets \sum\limits_{i=1, x^0_i \neq 0}^n \lambda_i \left(\frac{x_i^0}{1+\mu \lambda_i}\right )^2 -1 $ \IF{$e_1 \neq -\infty$} \IF{$e_2 = +\infty$} \STATE $\mu^0 \gets \textrm{bisection}(f, e_1)$ \COMMENT{Use bisection search (see~\cite[Chapter 2.1]{Burden01}) to find $\mu^0 > e_1$ with $f(\mu^0) > 0$} \STATE $\mu^* \gets \textrm{root}(f, \mu^0)$ \COMMENT{Using Newton with starting point $\mu^0$} \ELSE \STATE $\mu^* \gets \textrm{root}(f, e_1, e_2)$ \COMMENT{Using~\cref{alg:double_newton}} \ENDIF \ENDIF \COMMENT{See contrapositive of~\cref{prop:uniqueness_root_hyperboloid_degenerate}: $e_1 = -\infty \Rightarrow f$ has no root on $\mathcal{I}$} \STATE $\overline{\bm \lambda} \gets \textrm{unique}(\bm\lambda)$ \STATE $\bm X^\mathrm{d} \gets [\,]$ \FOR{$k = 1, \hdots, \size{\overline{\bm \lambda}}$} \STATE $K_{k} \gets \left \{ i \in I \left | \right .{\lambda_i = \overline{\lambda}_k, x^0_i = 0} \right \} $ \STATE $L_{k} \gets \left \{ i \in I \left | \right .{\lambda_i = \overline{\lambda}_k} \right \} $ \IF{$K_k = L_k \textbf{ and } \frac{1}{\overline{\lambda}_k} \left ( 1- \sum\limits_{j\in I \setminus K_{k}} \lambda_j \left( \frac{x^0_j}{1-\frac{\lambda_j}{\overline{\lambda}_k}} \right)^2 \right )
> 0$ } \STATE $\bm x^{\mathrm{d}}_k \gets$ \cref{eq:x_d_degenerate_full} \STATE $\bm X^\mathrm{d} \textrm{.append(}\bm x^{\mathrm{d}}_k $) \ENDIF \ENDFOR \RETURN $\operatornamewithlimits{\arg\,\min}\limits_{\bm x \in \left \{ \bm x(\mu^*) \right \} \bigcup \bm X^{\mathrm{d}}} \norm{\bm x^0 - \bm x}_2 $ \COMMENT{The $\min$ is taken over at most $n+1$ values} \end{algorithmic} \end{algorithm} \begin{figure} \caption{Illustration of the exact and quasi-projections for an ellipse.} \label{fig:exact_vs_quasi_proj_ellipse} \caption{Illustration of the exact and quasi-projections for a hyperbola.} \label{fig:exact_vs_quasi_proj_hyperbole} \caption{Comparison between the exact and quasi-projections onto a 2D quadric.} \label{fig:comparison_quasi_exact_2D} \end{figure} \subsection{Quasi-projection onto the quadric} \label{sub:Quasi-projection_on_the_quadric} The procedure detailed in~\cref{alg:exact_projection_quadric} is an exact projection, but it requires computing the full eigenvalue decomposition of $\bm{B}$, including the eigenvectors, which may be expensive for problems of large dimension. In this subsection, we detail a geometric procedure which allows us, under some conditions (see~\cref{ssub:Failure_of_the_quasi-projection}), to map a given point to the feasible set $\mathcal{Q}$ of~\cref{eq:proj_quadric}. We refer to such a mapping as a \emph{quasi-projection}. \begin{definition}[Quasi-projection] A \emph{quasi-projection} onto the quadric $\mathcal{Q}$ is a mapping \[ \mathrm{P} : \mathcal{D} \to \mathcal{Q} : \bm x \mapsto \mathrm{P}(\bm x) \, , \] where $\mathcal{D}$ is a nonempty subset of $\mathbb{R}^{n}$. \end{definition} Note that this definition is broad, and includes the projection operator. Ideally, $\mathcal{D}$ should be $\mathbb{R}^{n}$, but we allow the quasi-projection to fail to map some points.
This quasi-projection is inspired from the retraction in~\cite{BorSelBouAbs2014,vh22}, and from the following observation: the projection of a given point $\bm x$ onto a sphere can be computed analytically as the intersection between the sphere and the half line defined by the sphere centre and $\bm x$. As the quadric $\mathcal{Q}$ is by assumption a \emph{central quadric}, it is tempting to approximate the projection by the same mechanism described above for the sphere. This yields the first variant of the \emph{quasi-projection}; the second variant is obtained by searching for the intersection between the quadric and the line passing through $\bm x^0$ along the direction $\nabla \Psi(\bm x^0)$. In both cases, we look at the intersection between the quadric and the line starting from $\bm x^0$ along some direction $\bm \xi$. The intersections are parametrized as $\bm x^0 + \beta \bm \xi$, where $\beta$ satisfies $\Psi(\bm x^0 + \beta \bm{\xi})=0$, or equivalently, $b_1 \beta^2 + b_2 \beta + b_3 = 0$, for appropriate $b_i$'s. The $b_i$'s are given in~\cref{alg:quasi-projection-quadric}; see~\cite[\S 3.2]{BorSelBouAbs2014} for more details. In order to select the intersection closest to $\bm x^0$, the $\beta$ closest to zero is chosen. To summarize, the two variants of our \emph{quasi-projection} are: \begin{itemize} \item $\bm{\xi} = \bm d - \bm x^0$: this is analogous to the retraction used in~\cite{BorSelBouAbs2014}; it is referred to as the \emph{centre-based} quasi-projection and denoted by $\mathrm{P}_1$; \item $\bm{\xi} = \nabla \Psi(\bm x^0) = 2 \bm{B} \bm x^0 + \bm{b}$: the direction is given by the gradient of the level curve of $\Psi$ at $\bm x^0$; we refer to it as the \emph{gradient-based} quasi-projection and denote it by $\mathrm{P}_2$. \end{itemize} The quasi-projection procedure is given in~\cref{alg:quasi-projection-quadric} and depicted in~\cref{fig:comparison_quasi_exact_2D} for both strategies.
\begin{algorithm} \caption{Quasi-projection onto the quadric} \label{alg:quasi-projection-quadric} \begin{algorithmic} \REQUIRE{$\bm x^0 \in \mathbb{R}^{n}$, a non-cylindrical central quadric $\mathcal{Q}$ with centre $\bm d$, a direction $\bm \xi$} \STATE{$b_1 \gets \Tr{\bm \xi} \bm{B} \bm \xi$} \STATE{$b_2 \gets 2 \Tr{\bm x^0} \bm{B} \bm \xi + \Tr{\bm{b}} \bm \xi $} \STATE{$b_3 \gets \Tr{\bm x^0} \bm{B} \bm x^0 + \Tr{\bm{b}} \bm x^0 + c$} \IF{$b_2^2 - 4 b_1 b_3 < 0$} \RETURN \texttt{None} \COMMENT{it may happen that no intersection is available; see, \emph{e.g.},~\cref{fig:hyperbole_KO}} \ELSE \STATE{$\Delta \gets \sqrt{b_2^2 - 4 b_1 b_3}$} \STATE $\beta^+ \gets \frac{-b_2 + \Delta}{2 b_1} $ \STATE $\beta^- \gets \frac{-b_2 - \Delta}{2 b_1} $ \IF{$b_2 > 0$} \STATE $\beta \gets \beta^+ $ \ELSE \STATE $\beta \gets \beta^-$ \ENDIF \COMMENT{we select the intersection closest to $\bm x^0$} \RETURN $\bm x^0 + \beta \bm \xi$ \ENDIF \end{algorithmic} \end{algorithm} \subsubsection{Failure of the quasi-projection} \label{ssub:Failure_of_the_quasi-projection} Remark that $\mathrm{P}_1$ is not defined for $\bm x^0 = \bm{0}$, and that, in the hyperboloid case, the set where $\mathrm{P}_1$ is not defined, $\mathbb{R}^{n} \setminus \mathcal{D}$, is a closed set containing $\bm{0}$. Examples of nontrivial points that cannot be mapped using $\mathrm{P}_1$ are provided in~\cref{fig:hyperbole_KO}. Indeed, in these cases, there is no intersection between the quadric and the line starting from $\bm x^0$. We tackle this issue by resorting to the exact projection from~\cref{alg:exact_projection_quadric} whenever this situation occurs. There are also points $\bm x^0$ for which $\mathrm{P}_2$ returns \texttt{None}, but it does return a point whenever $\bm x^0$ is close enough to $\mathcal{Q}$.
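As an illustration of~\cref{alg:quasi-projection-quadric}, consider the following minimal Python sketch (our own illustration, assuming NumPy; the degenerate case $b_1 = 0$, where the direction is asymptotic, is deliberately not handled):

```python
import numpy as np

def quasi_projection(x0, B, b, c, xi):
    # Intersect the line x0 + beta*xi with the quadric x'Bx + b'x + c = 0
    # and keep the intersection closest to x0 (beta closest to zero).
    b1 = xi @ B @ xi
    b2 = 2.0 * (x0 @ B @ xi) + b @ xi
    b3 = x0 @ B @ x0 + b @ x0 + c
    disc = b2**2 - 4.0 * b1 * b3
    if disc < 0:
        return None                       # the line misses the quadric
    delta = np.sqrt(disc)
    beta = (-b2 + delta) / (2.0 * b1) if b2 > 0 else (-b2 - delta) / (2.0 * b1)
    return x0 + beta * xi

# unit circle: B = I, b = 0, c = -1; centre-based direction xi = d - x0 = -x0
x0 = np.array([3.0, 4.0])
p = quasi_projection(x0, np.eye(2), np.zeros(2), -1.0, -x0)
# for a sphere the centre-based quasi-projection is exact: p = (0.6, 0.8)
```

On this toy sphere instance, the selected root is the intersection on the same side as $\bm x^0$, which is indeed the exact projection.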
\begin{figure} \caption{2D (hyperbola) case.} \label{subfig:projection_KO_2D} \caption{3D (two and one-sheet hyperboloid) cases.} \label{subfig:projection_KO_3D} \caption{Illustration of a failure of the quasi-projection ($\mathrm{P}_1$): the point $\bm x^0$ cannot be mapped to the quadric using~\cref{alg:quasi-projection-quadric}. Indeed, the line defined by $\bm x^0$ and $\bm d$ does not intersect the quadric.} \label{fig:hyperbole_KO} \end{figure} \subsubsection{Features of the quasi-projection} \label{ssub:Features_of_the_quasi-projection} In general, the quasi-projection is not exact, in the sense that the resulting point is \emph{not} the optimal solution of~\cref{eq:proj_quadric}. However, we expect the quasi-projection to be close to optimality when the point is close enough to the quadric. Such a behaviour is observed in our simulations in~\cref{sub:Alternate_projection_versus_Gurobi}. Also, in the specific case where the quadric is a sphere, both $\mathrm{P}_1$ and $\mathrm{P}_2$ solve~\cref{eq:proj_quadric} exactly. \section{Splitting methods for the projection onto the intersection of a box and a quadric} \label{sec:Splitting_methods} This section is devoted to the analysis of the projection onto a feasible set, $\Omega$, which is the intersection between a box and a non-cylindrical central quadric. Let $\mathcal{B}$ be a nonempty $n$-dimensional hyper-rectangle or \emph{box}, aligned with the axes: \begin{equation} \label{eq:box} \mathcal{B} = \left \{ \bm x \in \mathbb{R}^{n} \left | \right .{ \underline{x}_i \leq x_i \leq \overline{x}_i, \, \forall i=1\hdots n } \right \}, \end{equation} for given lower and upper bounds $\bm \underline{x}$ and $\bm \overline{x}$, and let $\mathcal{Q}$ be a quadric. The optimization problem at hand reads \begin{align*} \label{eq:P} \min_{\bm x \in \mathbb{R}^{n}} & \norm{\bm x - \bm x^0}_2^2 \numberthis \\ \text{s.t. } & \bm x \in \mathcal{B} , \\ & \bm x \in \mathcal{Q} .
\end{align*} Note that what is developed in this article can be easily extended to a polytope $\mathcal{P}$ and a Cartesian product of $\size{T}$ quadrics $\times_{t = 1 \hdots \size{T}} \mathcal{Q}(\Psi_t)$; this is discussed in~\cref{sub:Extensions_and_further_researches}. In particular, we study two splitting algorithms: the Douglas-Rachford (DR) scheme and the alternating projection method (AP). We consider three variants of the latter: one based on the exact projection and two based on the quasi-projections from~\cref{sub:Quasi-projection_on_the_quadric}, which approximate the projection via a geometric construction. Splitting algorithms exploit the separable structure of the problem, since the projection onto each of the sets that define the intersection, $\Omega:= \mathcal{B} \bigcap \mathcal{Q}$, is easy to compute. They have recently been widely studied, and perform particularly well on certain classes of nonconvex problems. A first convergence result for the (local) solution of the alternating projections in the nonconvex setting is presented in~\cite[Theorem 3.2.3]{drusvyatskiy_slope_2013} for sets that intersect transversally. This result is particularized to (nonempty and closed) semi-algebraic intersections in~\cite[Theorem 7.3]{drusvyatskiy_transversality_2015}, which we use in this work. A second result that is used in this paper is~\cite[Corollary 1]{li_douglasrachford_2016}, which is a convergence result for a (modified) Douglas-Rachford splitting.
These two important propositions exploit the Kurdyka-Łojasiewicz properties of the indicator function of semi-algebraic sets, and are unfortunately local results: the theorem from~\cite{drusvyatskiy_transversality_2015} assumes that the starting point is \emph{near} the intersection of the two considered sets, and the theorem from~\cite{li_douglasrachford_2016} proves the convergence to a \emph{stationary point} of the problem of minimizing the distance to one of the sets, subject to being in the second set. We present an example where both methods fail to converge to a feasible point (\cref{fig:pathological_AP,fig:pathological_DR}), and propose a restart heuristic in~\cref{sub:Implementation_details}. The structure of this section is as follows: \S~\ref{sub:Projection_on_the_box} briefly recalls how to project onto a box, \S~\ref{sub:Alternate_projection_method_AP} details the AP methods, and \S~\ref{sub:Douglas-Rachford_method} covers the DR splitting. A comparison table of all methods is presented in \S~\ref{sub:comparison}, a power systems application is discussed in \S~\ref{sub:Extensions_and_further_researches}, and a restart mechanism is given in \S~\ref{sub:Implementation_details}. \subsection{Projection onto a box} \label{sub:Projection_on_the_box} This projection is straightforward. Indeed, given a point $\bm x^0$, it suffices to check, for each dimension $i$, whether the point violates the lower (respectively upper) bound and to replace the component accordingly. This gives~\cref{alg:projection-box}.
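In code, this componentwise clipping is a one-liner; a minimal NumPy sketch (our own illustration, not part of the paper's implementation):

```python
import numpy as np

def project_box(x0, lo, hi):
    # componentwise clipping onto the box [lo, hi]
    return np.minimum(np.maximum(x0, lo), hi)

p = project_box(np.array([-2.0, 0.5, 3.0]), np.zeros(3), np.ones(3))
# each coordinate is clipped independently: p = [0.0, 0.5, 1.0]
```

Because the box is a Cartesian product of intervals, the projection decouples across coordinates, which is why it can be computed exactly in $\mathcal{O}(n)$ operations.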
\begin{algorithm} \caption{Projection onto the box~\cref{eq:box}} \label{alg:projection-box} \begin{algorithmic} \REQUIRE{$\bm x^0 \in \mathbb{R}^{n}$} \STATE{$\bm x \gets \bm x^0$} \FOR{$i = 1 \hdots n$} \IF{$x_i^0 < \underline{x}_i$} \STATE{$x_i \gets \underline{x}_i$} \ELSIF{$x_i^0 > \overline{x}_i$} \STATE{$x_i \gets \overline{x}_i$} \ENDIF \ENDFOR \RETURN $\bm x$ \end{algorithmic} \end{algorithm} Note that for a more general polytope $\mathcal{P}$, the projection cannot be computed analytically. However, it can be efficiently computed by solving the convex QP \begin{align} \label{eq:proj_polytope} \min_{\bm x \in \mathcal{P}} \norm{\bm x^0 - \bm x}_2^2 . \end{align} \subsection{Alternating projection method} \label{sub:Alternate_projection_method_AP} The alternating projection method can be easily built by alternately projecting onto the quadric and onto the box. This gives~\cref{alg:AP}. Depending on whether we use the exact projection or one of the two quasi-projections detailed in \S~\ref{sub:Quasi-projection_on_the_quadric}, we refer to the methods as follows: alternating projections with exact projection onto the quadric (APE), alternating projections with the centre-based quasi-projection (APC), or alternating projections with the gradient-based quasi-projection (APG). $\textrm{Pr}_\mathcal{X}$ stands for (one solution of) the projection onto a (non)convex set $\mathcal{X}$, and $\mathrm{P}_{\mathcal{Y}}$ for the (quasi-)projection onto a set $\mathcal{Y}$.
\begin{algorithm} \caption{Alternating projections} \label{alg:AP} \begin{algorithmic} \REQUIRE $\bm x^0 \in \mathbb{R}^{n}$ \STATE $k \gets 0$ \WHILE{$k < n_{\text{iter}}$ \textrm{\textbf{and not}} $\bm x^k \in \Omega$} \STATE $\bm y^{k+1} \gets \textrm{Pr}_{\mathcal{B}}(\bm x^k)$ \COMMENT{Using~\cref{alg:projection-box}} \STATE $\bm x^{k+1} \gets \mathrm{P}_{\mathcal{Q}}(\bm y^{k+1})$ \COMMENT{Using~\cref{alg:exact_projection_quadric} or \cref{alg:quasi-projection-quadric}} \STATE $k \gets k+1$ \ENDWHILE \RETURN $\bm x^k$ \end{algorithmic} \end{algorithm} Assuming that the initial iterate is close enough to the intersection,~\cite{drusvyatskiy_transversality_2015} provides a convergence result for APE, which is particularized to our case in~\cref{prop:convergence_APE}. Note that~\cref{prop:convergence_APE} guarantees convergence to a point $\bm x^*$ in $\Omega$, but provides no guarantee about the optimality of this point, \emph{i.e.}, it is not true in general that $\bm x^* \in \operatornamewithlimits{\arg\,\min}\limits_{\bm x \in \Omega} \norm{\bm x - \bm x^0}^2_2 $. \begin{proposition}\label{prop:convergence_APE} If~\cref{alg:AP} with the exact projection (APE) is initialized from $\bm x^0 \in \mathcal{Q}$ and near $\mathcal{B}$, then the distance of the iterates to the intersection $\mathcal{Q} \bigcap \mathcal{B}$ converges to zero, and hence every limit point, $\bm x^*$, lies in $\mathcal{Q} \bigcap \mathcal{B}$. \end{proposition} \begin{proof} This follows from~\cite[Theorem 7.3]{drusvyatskiy_transversality_2015}, since $\mathcal{Q}$ and $\mathcal{B}$ are semi-algebraic and $\mathcal{B}$ is bounded. \end{proof} Remark that if $\bm{B} \succ \bm 0$, then $\mathcal{Q}$ is also bounded and we can as well choose $\bm x^0 \in \mathcal{B}$ and near $\mathcal{Q}$. \cref{fig:oneshot,fig:hyperboloid_OK} present examples where the alternating methods converge in a single iteration or in multiple iterations. Only APC is depicted.
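The AP loop can be sketched in a few lines of Python (a toy illustration of ours, with exact projections onto a unit circle and a box; the stopping test on $\norm{\bm x - \bm y}$ is our own simplification of the membership test $\bm x^k \in \Omega$):

```python
import numpy as np

def alternating_projections(x0, proj_box, proj_quadric, n_iter=100, tol=1e-10):
    # alternate the two projections until the quadric iterate is (almost) in the box
    x = x0
    for _ in range(n_iter):
        y = proj_box(x)
        x = proj_quadric(y)
        if np.linalg.norm(x - y) < tol:
            break
    return x

# toy instance: unit circle and box [0.5, 2]^2
box = lambda v: np.clip(v, 0.5, 2.0)
circle = lambda v: v / np.linalg.norm(v)   # exact projection onto the unit circle
x = alternating_projections(np.array([2.0, 2.0]), box, circle)
```

On this toy instance the iterates settle on the unit circle inside the box after two passes; no optimality of the limit point is claimed, in line with the remark above.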
Notice that if APE converges in a single iteration, then the obtained solution, $\bm x^*$, is an optimal solution of~\cref{eq:P}, that is, $\bm x^* \in \operatornamewithlimits{\arg\,\min}\limits_{\bm x \in \Omega} \norm{\bm x - \bm x^0}_2^2$. Figure~\ref{fig:pathological_AP} shows a pathological example where none of the alternating projection methods converges to a feasible point of~\cref{eq:P}. We propose in~\cref{sub:Implementation_details} heuristics to overcome such pathological cases. \begin{figure} \caption{Alternating projections with exact projection (APE).} \label{subfig:pathological_APE} \caption{Gradient-based alternating projections (APG).} \label{subfig:pathological_APG} \caption{Centre-based alternating projections (APC).} \label{subfig:pathological_APC} \caption{Illustration of a (2D) pathological case where none of the proposed alternating methods converges to a feasible point of~\cref{eq:P}. We represent both $\bm x^k$ and $\bm y^k$ as orange crosses.} \label{fig:pathological_AP} \end{figure} \begin{figure} \caption{2D (ellipse) case.} \label{subfig:ellipse_oneshot} \caption{3D (ellipsoid) case.} \label{subfig:ellipsoid_oneshot} \caption{Illustration of the centre-based alternating projection method (APC). In these cases, the method converges in a single iteration as the quasi-projection from~\cref{alg:quasi-projection-quadric} yields a feasible point, \emph{i.e.}, a point inside the box. We represent both $\bm x^k$ and $\bm y^k$ as orange crosses.} \label{fig:oneshot} \end{figure} \begin{figure} \caption{2D (hyperbola) case: the algorithm converges in three iterations.} \label{subfig:hyperbole_OK} \caption{3D (two-sheet hyperboloid) case: the algorithm converges in three iterations.} \label{subfig:hyperboloid_OK} \caption{Illustration of successes of the centre-based alternating projection method (APC) on 2D and 3D hyperbolic cases.
We represent both $\bm x^k$ and $\bm y^k$ as orange crosses.} \label{fig:hyperboloid_OK} \end{figure} \subsection{Douglas-Rachford method} \label{sub:Douglas-Rachford_method} Following~\cite{li_douglasrachford_2016}, the Douglas-Rachford splitting algorithm aims at solving \begin{align*} \min \, f(\bm x) + g(\bm x) \end{align*} where $f$ has a Lipschitz continuous gradient and $g$ is a proper closed function. The DR iteration starts at any $\bm y^0$ and repeats, for $k = 0,1,\hdots$, \begin{align*} \bm x^{k+1} &= \prox_f (\bm y^k),\\ \bm y^{k+1} &= \bm y^k + \prox_g (2 \bm x^{k+1} - \bm y^k) - \bm x^{k+1}, \end{align*} where the $\prox$ operator (with step size 1) is defined as \begin{equation} \label{eq:prox} \prox_f(\bm v) = \operatornamewithlimits{\arg\,\min}_{\bm x\in\mathbb{R}^{n}} \left(f(\bm x) + \frac{1}{2} \norm{\bm x - \bm v}_2^2\right) \, . \end{equation} Let $\mathcal{I}_{\mathcal{X}}: \mathbb{R}^{n} \to \{0, +\infty\}$ be the indicator function of a set $\mathcal{X} \subseteq \mathbb{R}^{n}$, defined as \[ \mathcal{I}_{\mathcal{X}}(\bm x) = \begin{cases} 0 & \text{if } \bm x\in \mathcal{X},\\ +\infty & \text{otherwise.} \end{cases} \] If we identify $f:= \mathcal{I}_{\mathcal{B}}$ and $g := \mathcal{I}_{\mathcal{Q}}$, \emph{i.e.}, the indicator functions of the sets that define $\Omega$, then the DR algorithm reads \begin{align*} \bm x^{k+1} &= \textrm{Pr}_{\mathcal{B}}(\bm y^k),\\ \bm y^{k+1} &= \bm y^k + \textrm{Pr}_{\mathcal{Q}}(2 \bm x^{k+1} - \bm y^k) - \bm x^{k+1}, \end{align*} which can be rewritten in a compact way~\cite{bauschke_phase_2002} as \begin{equation} \bm x^{k+1} = \left(\textrm{Pr}_{\mathcal{Q}} (2 \textrm{Pr}_{\mathcal{B}} - \bm I) + (\bm I - \textrm{Pr}_{\mathcal{B}}) \right ) (\bm x^k) \end{equation} since the proximal operator of the indicator function of a given set $\mathcal{X}$ is the projection onto this set, $\textrm{Pr}_{\mathcal{X}}$. We denote this method as DR, and explicitly state it in~\cref{alg:douglasrachford}.
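In code, the DR iteration above can be sketched as follows (a toy Python illustration of ours, on a circle-and-box instance with exact projections; not the paper's implementation):

```python
import numpy as np

def douglas_rachford(x0, proj_box, proj_quadric, n_iter=200):
    # x_{t+1} = x_t + Pr_Q(2 Pr_B(x_t) - x_t) - Pr_B(x_t); return the quadric iterate
    x = x0
    for _ in range(n_iter):
        y = proj_box(x)
        z = proj_quadric(2.0 * y - x)
        x = x + (z - y)
    return z

box = lambda v: np.clip(v, 0.5, 2.0)
circle = lambda v: v / np.linalg.norm(v)   # exact projection onto the unit circle
z = douglas_rachford(np.array([2.0, 2.0]), box, circle)
```

On this toy instance the reflected step lands on the circle inside the box, so the iteration reaches a fixed point in a few passes.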
\begin{algorithm} \caption{Douglas-Rachford splitting method (DR)} \label{alg:douglasrachford} \begin{algorithmic} \REQUIRE An initial point $\bm x^0$ \WHILE{a termination criterion is not met} \STATE $\bm y^{t+1} \gets \textrm{Pr}_{\mathcal{B}}(\bm x^t)$ \STATE $\bm z^{t+1} \gets \operatornamewithlimits{\arg\,\min}_{\bm z\in \mathcal{Q}} \norm{2 \bm y^{t+1} - \bm x^t - \bm z}^2$ \STATE $\bm x^{t+1} \gets \bm x^t + (\bm z^{t+1} - \bm y^{t+1})$ \ENDWHILE \RETURN $\bm z^{t+1}$ \end{algorithmic} \end{algorithm} \paragraph{Modified Douglas-Rachford} \label{par:Modified_Douglas-Rachford} We now present the modification of the DR splitting for the feasibility problem of~\cite{li_douglasrachford_2016}. Instead of using the indicator function of the convex set $\mathcal{B}$, the splitting is performed with the squared distance function $d^2_{\mathcal{B}}(\bm x) = \min_{\bm y \in \mathcal{B}} \norm{\bm x - \bm y}_2^2$, \emph{i.e.}, we solve \begin{equation} \label{eq:obj_dr-feasibility} \min_{\bm x \in \mathcal{Q}} d_{\mathcal{B}}^2(\bm x) \, , \end{equation} which can be equivalently seen as \begin{equation} \label{eq:obj_dr-feasibility_2} \min_{\bm x \in \mathbb{R}^{n}} d_{\mathcal{B}}^2(\bm x) + \mathcal{I}_{\mathcal{Q}} (\bm x) \, . \end{equation} DR applied to~\cref{eq:obj_dr-feasibility_2} gives~\cref{alg:douglasrachford_feasibility}, denoted as DR-F.
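A Python sketch of one DR-F pass (a toy illustration of ours; the value $\gamma = 0.2$ satisfies the step-size bound $0 < \gamma < \sqrt{3/2}-1$ used below):

```python
import numpy as np

def dr_f_step(x, proj_box, proj_quadric, gamma):
    # one DR-F pass: the box projection is replaced by the prox of the
    # squared distance, y = (x + gamma * Pr_B(x)) / (gamma + 1)
    y = (x + gamma * proj_box(x)) / (gamma + 1.0)
    z = proj_quadric(2.0 * y - x)
    return x + (z - y), z

box = lambda v: np.clip(v, 0.5, 2.0)
circle = lambda v: v / np.linalg.norm(v)   # exact projection onto the unit circle
x = np.array([2.0, 2.0])
for _ in range(200):
    x, z = dr_f_step(x, box, circle, gamma=0.2)  # 0.2 < sqrt(3/2) - 1
```

The only change with respect to plain DR is the damped $\bm y$-update, which replaces the hard box projection by a convex combination of the current iterate and its box projection.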
\begin{algorithm} \caption{Douglas-Rachford splitting method for feasibility problems (DR-F)} \label{alg:douglasrachford_feasibility} \begin{algorithmic} \REQUIRE An initial point $\bm x^0$ and a step size parameter $\gamma >0$ \WHILE{a termination criterion is not met} \STATE $\bm y^{t+1} \gets \frac{1}{\gamma +1} (\bm x^t + \gamma \textrm{Pr}_{\mathcal{B}}(\bm x^t))$ \STATE $\bm z^{t+1} \gets \operatornamewithlimits{\arg\,\min}_{\bm z\in \mathcal{Q}} \norm{2 \bm y^{t+1} - \bm x^t - \bm z}^2$ \STATE $\bm x^{t+1} \gets \bm x^t + (\bm z^{t+1} - \bm y^{t+1})$ \ENDWHILE \RETURN $\bm z^{t+1}$ \end{algorithmic} \end{algorithm} We can use~\cite[Corollary 1]{li_douglasrachford_2016} to obtain a convergence result for the DR-F method. \begin{proposition}\label{prop:convergence_DR-F} If $0<\gamma<\sqrt{\frac{3}{2}}-1$, then the sequence $\left \{ (\bm y^t, \bm z^t, \bm x^t) \right \} $ provided by~\cref{alg:douglasrachford_feasibility} converges to a point $(\bm y^*, \bm z^*, \bm x^*)$ which satisfies $\bm z^* = \bm y^*$, and $\bm z^*$ is a stationary point of~\cref{eq:obj_dr-feasibility}. \end{proposition} \begin{proof} Since $\mathcal{Q}$ and $\mathcal{B}$ are nonempty closed semi-algebraic sets, with $\mathcal{B}$ being convex and compact, the hypotheses of~\cite[Corollary 1]{li_douglasrachford_2016} are satisfied for $0 < \gamma < \sqrt{\frac{3}{2}}-1$. \end{proof} \begin{figure} \caption{Douglas-Rachford (DR).} \label{subfig:DR_OK} \caption{Modified Douglas-Rachford for the feasibility problem (DR-F).} \label{subfig:DR-F_OK} \caption{Illustration of the behaviour of the DR splitting algorithms on the same (pathological) case of~\cref{fig:pathological_AP}. On this problem, DR does converge to a feasible point while DR-F does not.
However, DR-F converges to a stationary point (a minimum) of~\cref{eq:obj_dr-feasibility}.} \label{fig:pathological_DR} \end{figure} \subsection{Comparison} \label{sub:comparison} Table~\ref{tab:comparison} compares the complexities and convergence guarantees of all the methods. Methods using the exact projection onto the quadric require the diagonalization of $\bm{B}$ as a precomputation step, which typically costs $\mathcal{O}(n^3)$ flops. \begin{table} \centering \caption{Comparison of the considered splitting methods.} \label{tab:comparison} {\footnotesize \hspace*{-3cm} \begin{tabular}{lrrrrr} \toprule & APE & APC & APG & DR & DR-F \\ \midrule Complexity & $\mathcal{O}(n^3+kn^2)$ & $\mathcal{O}(kn^2)$ & $\mathcal{O}(kn^2)$ & $\mathcal{O}(n^3+kn^2)$ & $\mathcal{O}(n^3+kn^2)$ \\ Convergence guarantees & Locally to a feasible & None & None & None\footnote{There are, however, proofs of the convergence of DR in some nonconvex applications, see, \emph{e.g.}, the discussion in~\cite[Section~4]{aragon_douglasrachford_2020}.} & Locally to a stationary\\ & point of~\cref{eq:P}: \cref{prop:convergence_APE} & & & & point of~\cref{eq:obj_dr-feasibility}: \cref{prop:convergence_DR-F} \\ \bottomrule \end{tabular} } \end{table} \subsection{Extensions and applications} \label{sub:Extensions_and_further_researches} We can extend the splitting methods to a polytope, $\mathcal{P}$, and a Cartesian product of $m$ quadrics $\mathcal{Q}^{\text{tot}} :=\bigtimes_{i=1}^m \mathcal{Q}_i$, and solve \begin{align*} \label{eq:P_extension} \min_{\bm x \in \mathbb{R}^{n}} & \norm{\bm x - \bm x^0}_2^2 \numberthis \\ \text{s.t. } & \bm x \in \mathcal{P} , \\ & \bm x \in \mathcal{Q}^{\text{tot}} . \end{align*} The extension of all methods described in \cref{tab:comparison} is direct: instead of computing the projection onto $\mathcal{B}$---now $\mathcal{P}$---analytically, we have to resort to a QP solver.
Similarly to the retraction from~\cite{vh22}, the (quasi-)projection is obtained by working independently on each quadric $\mathcal{Q}_i$: \begin{equation} \bm x^* \in \operatornamewithlimits{\arg\,\min}_{\bm x \in \mathcal{Q}^{\text{tot}}} \norm{\bm x - \bm x^0}_2 \Leftrightarrow \bm x^*_i \in \operatornamewithlimits{\arg\,\min}_{\bm x_i \in \mathcal{Q}_i} \norm{\bm x_i - \bm x^0_i}_2 \quad \forall i = 1 \hdots m . \end{equation} As a practical example, \cite{vh22} focuses on the dynamic economic dispatch problem, which aims at the optimal allocation of power production among generating units at each time step, \emph{e.g.}, each hour of a day. The modelling of the power losses makes the feasible set of each independent (static) economic dispatch a quadric, and certain operational constraints, namely the ramping constraints, couple consecutive time steps. Hence, the full feasible set $\Omega$ is the intersection of a polytope $\mathcal{P} \subseteq \mathbb{R}^{nT} $ that accounts for the power ranges (box) and the ramping constraints, and of a Cartesian product of $T$ different quadrics $\mathcal{Q}_i \subseteq \mathbb{R}^{n}$ that model the balance constraint, \emph{i.e.}, that power production matches demand. The projection of a point onto $\Omega = \mathcal{P} \bigcap \mathcal{Q}^{\text{tot}}$ can then be obtained using the methods described in the present paper. Moreover, the point that has to be projected in~\cite{vh22} is obtained as the solution of a surrogate problem defined on a relaxed set; see~\cite{vh19,vh20,vh22} for more details. Because this relaxation is close to the feasible set, the point that has to be projected is inside the box and near the quadric. This explains the favorable performance of APC reported in~\cref{sub:Alternate_projection_versus_Gurobi}.
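The blockwise separability above can be sketched in a few lines of Python (our own toy illustration; each per-block projection is assumed to be given):

```python
import numpy as np

def project_product(x0_blocks, block_projections):
    # project each block of x0 onto its own quadric Q_i, independently
    return [P(xb) for P, xb in zip(block_projections, x0_blocks)]

# toy product of two unit circles in R^2 (exact per-block projections assumed)
circle = lambda v: v / np.linalg.norm(v)
proj = project_product([np.array([3.0, 4.0]), np.array([0.0, 2.0])],
                       [circle, circle])
```

Since the squared distance decomposes as a sum over blocks, the minimization over the Cartesian product decouples, so the blocks can even be processed in parallel.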
\subsection{Implementation details} \label{sub:Implementation_details} To address the convergence issues identified in~\cref{fig:pathological_AP,fig:pathological_DR}, we add a restart mechanism whenever such a situation arises. These situations are easily detected: the alternating methods loop between two points, while DR or DR-F simply converge to an infeasible point. These problems mostly appear in the hyperboloid case, and typically occur when the method is trapped on the wrong sheet of the hyperboloid. To mitigate this issue, when it is detected, we use the geometric construction from the centre-based quasi-projection (\cref{alg:quasi-projection-quadric} with $\bm \xi = \bm x^0 - \bm d$), and select the \emph{largest} $\beta$. This is equivalent to transforming $\bm x^k \in \mathcal{Q}, \bm x^k\notin \mathcal{B}$ into $-\bm x^k =:\bm x^{k+1} \in \mathcal{Q}$, and continuing the method from $\bm x^{k+1}$. Alternatively, it is also possible to consider $\bm x^{k+1}$ such that at least one---instead of all---of its components is the opposite of the corresponding component of $\bm x^k$. If, on the other hand, $\bm x^k \in \mathcal{B}$, then we work analogously with respect to the centre of the box. Such a restart mechanism does not guarantee convergence: the method can then be trapped in another region, or even come back to the exact same region. But in the few instances ($\approx$ once every 10000 trials) where the presented algorithms experience convergence issues, the restart results in successful convergence. \section{Numerical experiments} \label{sec:Numerical_experiments} This section is devoted to the benchmarking of the methods developed in~\cref{sec:Splitting_methods}. \cref{sub:Douglas-Rachford_Alternate_projections_and_IPOPT} tests the five presented methods (APE, APC, APG, DR, DR-F) as well as IPOPT. IPOPT is an interesting method to benchmark against, as it is a natural candidate for solving~\cref{eq:P}.
Note that IPOPT is an open-source solver that uses an embedded linear solver. The performance of IPOPT can be enhanced through the use of a dedicated commercial linear solver. In this work, we use Pardiso~\cite{pardiso-7.2a}. We solve for small-scale (\cref{fig:ellipsoid_100,fig:hyperboloid_100}) and larger-scale (\cref{fig:ellipsoid_1000,fig:hyperboloid_1000}) instances of~\cref{eq:P}. For each considered dimension, $n$, we run 100 independent random trials in order to smooth the effect of the random selection of the problem parameters. In particular, each independent trial consists of a unique (randomly generated) set of parameters $\bm{B}, \bm{b}, c$ and $\bm x^0 \in \mathbb{R}^{n}$. The ellipsoid case is tested in~\cref{ssub:Ellipsoid_experiments} and the hyperboloid case in~\cref{ssub:Hyperboloid_experiments}. The problems are generated as follows: we first create a quadric with $\bm A \sim \mathcal{N}(\bm \mu = \mathbb{1}, \Sigma = I)$, $\bm{B} = \frac{\bm A + \Tr{\bm A}}{2}$, $\bm{b}$ defined with entries $b_i \sim \mathcal{N}(\mu = 0, \sigma = 1)$ and $c \sim \mathcal{N}(\mu = -1, \sigma = 1)$. For the ellipsoidal case, we shift $\bm{B}$ in order to ensure that $\bm{B} \succ \bm 0$. Then, we find one feasible point and construct the box $\mathcal{B}$ around it; this allows us to ensure that the intersection of $\mathcal{B}$ and $\mathcal{Q}$ is nonempty. Note that this feasible point is not necessarily the centre of the box. Then, in~\cref{sub:Alternate_projection_versus_Gurobi}, we perform two experiments comparing APC with Gurobi on a very specific problem structure, namely a problem stemming from~\cite{vh22}, which was the initial motivation of the present research. The same remarks concerning the 100 randomly generated instances also apply here.
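The problem generation described above, together with the deviation measure used to assess feasibility in the sequel, can be sketched as follows; the positive-definiteness shift margin of $1$ and the random seed are our own choices, not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_quadric(n, ellipsoid=True):
    """Generate (B, b, c) as in the experiments: symmetrize a random
    matrix and, in the ellipsoidal case, shift B so that B > 0."""
    A = rng.normal(loc=1.0, scale=1.0, size=(n, n))
    B = 0.5 * (A + A.T)
    if ellipsoid:
        lam_min = np.linalg.eigvalsh(B)[0]
        if lam_min <= 0.0:
            B += (1.0 - lam_min) * np.eye(n)   # shift margin of 1 (our choice)
    b = rng.normal(loc=0.0, scale=1.0, size=n)
    c = rng.normal(loc=-1.0, scale=1.0)
    return B, b, c

def deviation(x, B, b, c):
    """|x^T B x + b^T x + c|: how far x is from satisfying the quadric."""
    return abs(x @ B @ x + b @ x + c)

B, b, c = random_quadric(20)
```

By construction $\bm{B}$ is symmetric, and in the ellipsoidal branch its smallest eigenvalue is positive after the shift.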
For this second experiment, we use Gurobi as a benchmark because i) it may also be a natural method for solving~\cref{eq:P}\footnote{Since version 9.0, Gurobi supports nonconvex QCQP optimization, and Gurobi is employed widely in the power systems optimization community.} and ii) it provides lower bounds, which allow us to assess how close the returned solutions are to the global optimum. We benchmark it against APC because, even if it is the worst-performing method among the five that are presented in~\cref{sub:Douglas-Rachford_Alternate_projections_and_IPOPT}, it still outperforms Gurobi. This behaviour is explained by the relative position of the starting point with respect to the feasible set in the problem of~\cref{sub:Alternate_projection_versus_Gurobi}: this point is inside the box and close to the quadric. Note that, in all experiments, whenever an algorithm terminates with a timeout and returns an infeasible point, the associated objective is meaningless. In order to avoid distorting our reported results, we omit these instances from the recorded objectives; but we count the number of timeouts and record the deviation. The deviation is computed as \begin{equation} \label{eq:deviation} \textrm{deviation} = \abs{\Tr{\bm x} \bm{B} \bm x + \Tr{\bm{b}} \bm x + c}, \end{equation} and is an intuitive measure of how far an infeasible point is from the feasible set. The prescribed tolerance for the deviation is $10^{-6}$. This deviation does not account for the box. This is not an issue here, because none of the tests considered in the numerical experiments terminates outside of the box. \subsection{Douglas-Rachford, Alternating Projections and IPOPT} \label{sub:Douglas-Rachford_Alternate_projections_and_IPOPT} Two different settings are considered here. In~\cref{ssub:Ellipsoid_experiments}, the matrix $\bm{B}$ is chosen such that $\bm{B} \succ \bm 0$, \emph{i.e.}, the quadric is an \emph{ellipsoid}.
This means that the quasi-projection with $\bm \xi = \bm x^0 - \bm d$ is well-defined: the situations depicted in~\cref{fig:hyperbole_KO} cannot occur. In~\cref{ssub:Hyperboloid_experiments}, we consider the case of \emph{hyperboloids}, \emph{i.e.}, $\bm{B}$ is nonsingular but indefinite. From these two experiments, it appears that DR-F and APE are the methods that find the best solutions in terms of objective. However, if the execution time is taken into account, APG reaches an objective close to that of DR-F and APE in a significantly lower run time. APG should therefore be considered, \emph{e.g.}, if the eigenvector decomposition is too expensive to compute. APC works particularly well in the ellipsoidal case, but performs worse in the hyperboloidal case. IPOPT is clearly the slowest method. It achieves good solution objectives in the ellipsoidal case, but gives poorer results in the hyperboloidal case. \subsubsection{Ellipsoid experiments} \label{ssub:Ellipsoid_experiments} In these two experiments, we run small and large-scale ellipsoidal problems. The box $\mathcal{B}$ is small with respect to the quadric and the starting points $\bm x^0$ are uniformly distributed inside the box. For small-scale ellipsoidal problems ($n\leq 100$,~\cref{fig:ellipsoid_100}), we observe that all methods except DR reach essentially the same objective: APE, DR-F and IPOPT coincide, APG is within 1\% and DR within several percent. We also observe that none of the methods exceeds the prescribed deviation accuracy of $10^{-6}$, and that IPOPT provides the points with the smallest deviation. The number of iterations required for each method remains roughly constant as the dimension increases. Considering the running time, APC is the fastest, about ten times faster than APG, which in turn is about twice as fast as DR. DR-F and APE require approximately the same amount of time, about twice that of DR. Finally, IPOPT requires much more time than all the other methods.
For large-scale ellipsoidal problems ($n\geq 100$,~\cref{fig:ellipsoid_1000}), the behaviour of the methods remains similar to the small-scale case. The increase of the distance with $n$ is simply due to the growth of $\norm{\bm x^* - \bm x^0}_2$ with $n$. Remark that the execution time of IPOPT is remarkably stable: this is because creating the model already requires approximately 10 seconds, and this creation time does not increase much with the dimension. However, it should be noted that i) for much larger dimensions $n \gg 1000$ the solving time of IPOPT increases significantly and ii) for such large dimensions, it becomes crucial to use advanced linear algebra tools for, \emph{e.g.}, the eigenvalue decompositions and matrix products used in the methods developed here. Hence, the comparison against IPOPT when the latter relies on dedicated linear algebra software (Pardiso) becomes less meaningful for very large $n$. \begin{figure} \caption{Distance.} \label{subfig:distance_ellipsoid_100} \caption{Deviation.} \label{subfig:deviation_ellipsoid_100} \caption{Execution time in seconds.} \label{subfig:total-time_ellipsoid_100} \caption{Number of iterations.} \label{subfig:iteration_ellipsoid_100} \caption{Comparison of the different methods developed in~\cref{sec:Splitting_methods}: Douglas-Rachford splitting (DR) and its modified counterpart (DR-F), alternating projections using the exact projection (APE) and the alternating projections using the quasi-projections (centre-based APC and gradient-based APG). IPOPT is used as a benchmark with standard settings and with the underlying linear solver Pardiso. Ten dimensions $n$ are considered and, for each $n$, 100 independent trials with $\bm{B} \succ \bm{0}$ are run. The top (bottom) dashed lines represent the max (min) value of the 100 trials, and the continuous line is the sample mean.
The frame in the upper left of the upper left panel is a magnification around $n=10$.} \label{fig:ellipsoid_100} \end{figure} \begin{figure} \caption{Distance.} \label{subfig:distance_ellipsoid_1000} \caption{Deviation.} \label{subfig:deviation_ellipsoid_1000} \caption{Execution time in seconds.} \label{subfig:total-time_ellipsoid_1000} \caption{Number of iterations.} \label{subfig:iteration_ellipsoid_1000} \caption{Same as~\cref{fig:ellipsoid_100} for larger dimensions.} \label{fig:ellipsoid_1000} \end{figure} \subsubsection{Hyperboloid experiments} \label{ssub:Hyperboloid_experiments} In these two experiments, we run small and large-scale hyperboloidal problems. The box $\mathcal{B}$ is large with respect to the quadric and the starting points $\bm x^0$ are uniformly distributed inside $\mathcal{B}$. For small-scale hyperboloidal problems ($n\leq 100$,~\cref{fig:hyperboloid_100}), we observe that the best objectives are obtained by APE, followed, within several percent, by DR-F, APG, DR and IPOPT. These four methods reach the same solution as APE most of the time; however, they sometimes reach solutions that are far from the best one, see, \emph{e.g.}, the maximum curves (top dashed lines in~\cref{subfig:hyperboloid_100_distance}), which lie significantly above the maximum curve of APE. Finally, APC performs poorly in terms of objective values. APG is now the fastest method, despite requiring more iterations: the reason is that APC must resort to an exact projection whenever the situation depicted in~\cref{fig:hyperbole_KO} occurs. For the large-scale hyperboloidal problems ($n\geq 100$,~\cref{fig:hyperboloid_1000}), the best objectives are attained by APE and DR-F. The APG algorithm comes within one percent of their performance.
The unmodified Douglas-Rachford finds objectives within several percent, and IPOPT within 10 percent, \emph{e.g.}, the mean objective for $n=1000$ (solid lines in~\cref{subfig:hyperboloid_1000_distance}) is around 0.51 for APE, DR-F and APG, around 0.53 for DR and around 0.63 for IPOPT. We note that the number of iterations increases with $n$, and that APG is again the fastest method. We also observe a significant increase in the execution time of IPOPT, which implies that the solving time is now larger than the 10 seconds required for creating the problem. Finally, we observe that both IPOPT and APC sometimes finish with a timeout, and return points whose deviation exceeds the prescribed $10^{-6}$. \begin{figure} \caption{Distance.} \label{subfig:hyperboloid_100_distance} \caption{Deviation.} \label{subfig:hyperboloid_100_deviation} \caption{Execution time in seconds.} \label{subfig:hyperboloid_100_time} \caption{Number of iterations.} \label{subfig:hyperboloid_100_timeout} \caption{Same as~\cref{fig:ellipsoid_100} with $\bm{B} \nsucc \bm 0$.} \label{fig:hyperboloid_100} \end{figure} \begin{figure} \caption{Distance.} \label{subfig:hyperboloid_1000_distance} \caption{Deviation.} \label{subfig:hyperboloid_1000_deviation} \caption{Execution time in seconds.} \label{subfig:hyperboloid_1000_time} \caption{Number of iterations.} \label{subfig:hyperboloid_1000_timeout} \caption{Same as~\cref{fig:hyperboloid_100} for larger dimensions.} \label{fig:hyperboloid_1000} \end{figure} \subsection{Alternating projections versus Gurobi} \label{sub:Alternate_projection_versus_Gurobi} In this section, we benchmark the alternating projections with centre-based quasi-projection (APC) against an \emph{exact method}, which aims at finding the optimal solution of~\cref{eq:P}. Several methods can be used to tackle~\cref{eq:P}; here, we choose the commercial software Gurobi~\cite{gurobi}.
This tool tackles the nonconvex quadratic equality via piecewise linearization, and solves the resulting mixed-integer quadratic programming (MIQP) problem. In this way, a solution and a lower bound are obtained, and the optimal solution---up to a given tolerance---can be reached, assuming enough time is afforded to the solver. The problem parameters are chosen so as to resemble problems from the power systems literature: the feasible set of the economic dispatch problem with power losses~\cite{vh22} typically exhibits a similar structure as the feasible set of~\cref{eq:P}. The entries of $\bm{B}$ are of the order of $10^{-5}$, except for the diagonal entries ($10^{-4}$); the entries of $\bm{b}$ are close to $1$, and $c$ is around $-100$. $\bm{B}$ represents the quadratic power losses---expected to be small---and $\bm{b}$ encodes the constraint stating that the sum of the power production must be equal to the demand ($\approx-c$). The following quantities are compared: \begin{align*} \textrm{Relative time} &= \log{\left (1 +\frac{t_{\textrm{APC}} - t_{\textrm{Gur}}}{t_{\textrm{Gur}}}\right)} = \log \left ( \frac{t_{\textrm{APC}}}{t_{\textrm{Gur}}} \right ) \, \\ \textrm{Relative distance} &= \log \left ( \frac{d_{\textrm{APC}}}{d_{\textrm{Gur}}} \right ) \, \end{align*} where $t_{\textrm{APC}}$, $t_{\textrm{Gur}}$ are the execution times of APC and Gurobi, respectively, and $d$ is the distance between the final iterate and $\bm x^0$, \emph{i.e.}, the objective. Note that, in order to smooth out random effects, we run $m = 100$ slightly different instances for each dimension and report the mean, such that, \emph{e.g.}, \[ t_{\textrm{APC}} = \frac{\sum_{i=1}^m t_{\textrm{APC}}^i}{m}, \] where $t_{\textrm{APC}}^i$ stands for the execution time of the $i$-th instance of the alternating projection method. In the following experiments, it may be the case that no feasible point is found.
For the alternating projection method, this can occur when the method reaches the maximum number of iterations, \emph{e.g.}, when the method is trapped in a cycle (see~\cref{fig:pathological_AP}). On the other hand, Gurobi may also fail to yield a feasible point, if the time limit is reached. We do not include such points in the relative distance. In this way, we do not pollute the reported distance mean by a small number of instances that terminate due to a timeout. However, we also record the number of timeouts---either due to maximum iterations or time limit. \subsubsection{One-shot experiment} \label{ssub:One-shot_experiment} In this experiment, we aim to compare the speed of both methods. We thus terminate the algorithm as soon as it finds a feasible point, hence the reference to ``one shot''. For APC, this does not affect the algorithm. On the other hand, Gurobi relies on lower and upper bounds, and terminates whenever a targeted tolerance is achieved. Here, we modify the stopping criterion such that the algorithm stops as soon as a feasible solution is obtained, no matter the objective. Hence, the measured time is a lower bound on the execution time that would be obtained with the tolerance-based criterion. \cref{fig:comparison_one_solution} presents the relative execution time and distance. We observe that APC clearly outperforms Gurobi in this experiment: for low dimensions, APC executes at least twice as fast and reaches a better solution; for larger dimensions, the difference becomes even more pronounced, \emph{e.g.}, when the dimension exceeds 40, APC is faster by a factor of $100\,000$ and reaches an objective which is 10 times lower than that of Gurobi.
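As a reference implementation of the relative metrics defined above, the following minimal sketch can be used; the base-10 logarithm is our assumption (the text does not fix the base of the logarithm), and the function names are illustrative.

```python
import numpy as np

def relative_time(t_apc, t_gur):
    # log(1 + (t_apc - t_gur)/t_gur) simplifies to log(t_apc / t_gur)
    return np.log10(t_apc / t_gur)

def relative_distance(d_apc, d_gur):
    return np.log10(d_apc / d_gur)

def sample_mean(values):
    # mean over the m instances, as for t_APC in the text
    return sum(values) / len(values)
```

A negative relative time thus means that APC is faster than Gurobi, and a negative relative distance that APC returns a better objective.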
Moreover, it should be noted that the number of timeout terminations recorded for Gurobi starts to increase for dimensions greater than 40 (see the bar plot in \cref{fig:comparison_one_solution}): the relative time is thus capped by the time limit, which explains its saturation for large dimensions. The relative distance does not account for the infeasibility of the points returned after a timeout, although such points should arguably be assigned an infinite objective value. The time limit is set to 600 seconds. We note that a significant number of instances terminate without a solution for problems of large dimension. \begin{figure} \caption{Comparison between the alternating projection method with the centre-based quasi-projection (APC) and Gurobi. In this experiment, described in~\cref{ssub:One-shot_experiment}, the method terminates whenever it finds a feasible solution, no matter the objective. The timeout termination of Gurobi is set to 600 seconds, and the number of timeouts for the 100 instances is depicted as a bar plot.} \label{fig:comparison_one_solution} \end{figure} \subsubsection{Multiple-shot experiment} \label{ssub:Multiple-shot_experiment} In this experiment, we allow Gurobi to execute until it reaches the best solution, up to a given tolerance of one percent, or until timeout (600 seconds). \cref{fig:Multiple-shot} depicts the (mean) relative distance and execution time, for 100 runs, as well as the number of timeout terminations of Gurobi. We observe that, for small problem instances, \emph{i.e.}, when the dimension is below 13, Gurobi reaches the best solution, \emph{which is also very close to the one obtained via APC}. Indeed, a relative distance around zero, \emph{i.e.}, a distance ratio around one, means that the solutions returned by the two algorithms are comparable. Since Gurobi does not terminate with a timeout, this implies that the solution returned by APC is, as a matter of fact, also optimal. Note that the theory does not guarantee that this should occur.
We also note that the execution time of Gurobi is 10 to 1000 times larger than that of APC. For higher dimensions, the relative distance slightly decreases and the relative time converges to a value corresponding to a time ratio of $\num{1e-6}$: this is due to the increasing number of timeout terminations. In other words, Gurobi fails to find the best solution despite an increasing execution time. \begin{figure} \caption{Comparison between the alternating projection method with the centre-based quasi-projection (APC) and Gurobi. In this experiment, described in~\cref{ssub:Multiple-shot_experiment}, Gurobi terminates when the objective is proven to be optimal within a 1\% tolerance. The timeout termination of Gurobi is set to 600 seconds.} \label{fig:Multiple-shot} \end{figure} \section{Conclusion} \label{sec:Conclusion} In this paper, the projection onto quadratic hypersurfaces, or quadrics, is investigated. We assume that the quadratic hypersurface is a non-cylindrical central quadric; however, the non-cylindrical assumption can be easily lifted by focusing on the variables that appear in the normal form. Using the method of Lagrange multipliers, we reduce this nonconvex optimization problem to the problem of finding the solutions of a system of nonlinear equations. We then show how one of the optimal solutions of the projection either lies in a finite set of computable solutions, or is a root of a scalar-valued nonlinear function. This unique root is located in a known interval, and is therefore easily computed with a Newton-Raphson scheme for which we provide suitable starting points. The projection is thus cheap to compute, and the bottleneck is the eigenvector decomposition. This decomposition is needed for diagonalizing the matrix that defines the quadric. We also propose a heuristic, referred to as quasi-projection, based on a geometric construction. This construction consists of finding the closest intersection between the quadric and a line passing through the point that we want to project.
We detail two variants of the quasi-projection, depending on whether the direction of the line is computed as the level-curve gradient of the quadric, or as the vector joining the centre and the point. This quasi-projection does not require an eigenvector decomposition, thereby saving computational time. This heuristic is then leveraged in the context of splitting algorithms, namely alternating projections and Douglas-Rachford splitting. This allows us to project a point onto a feasible set that is the intersection between a quadric and a box. The extension to the more general case of a Cartesian product of quadrics and a polytope is also discussed. Five methods are proposed depending on whether we use standard Douglas-Rachford splitting (DR), modified Douglas-Rachford splitting (DR-F), or one of the alternating projection methods. We detail the alternating projections with the exact projection on the quadric (APE) or one of the two quasi-projections (the centre-based APC or the gradient-based APG). All methods are tested on problems of several dimensions, from 10 to 1000, and 100 independent trials are executed for each dimension. Using IPOPT as a benchmark, we find that APE and DR-F reach the best objectives and APG is within one percent, while APC and IPOPT lag behind. However, APG is much faster than the other methods and appears to achieve a good trade-off between the attained objective and the execution time. We also test APC on a case similar to the economic dispatch problem from the power systems literature, and compare it to Gurobi. We show that, in this specific case where the initial point is close to the feasible set, APC quickly reaches a solution close to, or better than, that of Gurobi, even though the execution time of Gurobi is several orders of magnitude greater. For small dimensions, Gurobi can guarantee the optimality of its solution, which shows that APC obtains the optimal solution in our examples.
For higher dimensions, Gurobi terminates with a timeout and with a higher objective than APC. Hence, even APC, the worst-performing of the methods proposed in this paper, outperforms Gurobi in these experiments. For the first part of the paper, namely the projection onto nonsingular quadrics in~\cref{sec:Projection_onto_a_quadric}, the extension to singular quadrics could be contemplated. In this context, the linear independence constraint qualification (LICQ) is no longer fulfilled, and the points where it fails should also be included as projection candidates. A numerical comparison with the method from~\cite{sosa_algorithm_2020} is another natural direction for further work. For the second part of the paper,~\cref{sec:Splitting_methods}, further research may include the comparison with alternating projection methods using inexact projections, \emph{e.g.}, projecting onto the tangent space at the previous feasible point. Such methods are discussed in~\cite{drusvyatskiy_local_2019}. \end{document}
\begin{document} \title{Computation of Stability Radii for Large-Scale Dissipative Hamiltonian Systems} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext[1]{Istanbul Sabahattin Zaim University, Department of Industrial Engineering, Halkal{\i} mahallesi, Halkal{\i} Caddesi 34303, K\"{u}\c{c}\"{u}k\c{c}ekmece, Istanbul, Turkey. (naliyev@ku.edu.tr).} \footnotetext[2]{Technische Universit\"{a}t Berlin, Sekretariat MA 4-5, Strasse des 17. Juni 136, D-10623 Berlin, Germany (mehrmann@math.tu-berlin.de). Supported by Deutsche Forschungsgemeinschaft via Project A02 of Sonderforschungsbereich 910.} \footnotetext[3]{Ko\c{c} University, Department of Mathematics, Rumelifeneri Yolu, 34450 Sar\i yer, Istanbul, Turkey (emengi@ku.edu.tr). Supported by Deutsche Forschungsgemeinschaft via Project A02 of Sonderforschungsbereich 910.} \begin{abstract} \noindent A linear time-invariant dissipative Hamiltonian (DH) system $\dot x = (J-R)Q x$, with a skew-Hermitian $J$, an Hermitian positive semi-definite $R$, and an Hermitian positive definite $Q$, is always Lyapunov stable and under weak further conditions even asymptotically stable. In various applications there is uncertainty on the system matrices $J, R, Q$, and it is desirable to know whether the system remains asymptotically stable uniformly against all possible uncertainties within a given perturbation set. Such robust stability considerations motivate the concept of stability radius for DH systems, i.e., the maximal perturbation permissible to the coefficients $J, R, Q$ while preserving asymptotic stability. We consider two stability radii, the unstructured one where $J, R, Q$ are subject to unstructured perturbation, and the structured one where the perturbations preserve the DH structure. We employ characterizations for these radii that have been derived recently in [\emph{SIAM J. Matrix Anal. Appl., 37, pp.
1625--1654, 2016}] and propose new algorithms to compute these stability radii for large-scale problems by tailoring subspace frameworks that are interpolatory and guaranteed to converge at a super-linear rate in theory. At every iteration, they first solve a reduced problem and then expand the subspaces in order to attain certain Hermite interpolation properties between the full and reduced problems. The reduced problems are solved by means of adaptations of existing level-set algorithms for ${\mathcal H}_\infty$-norm computation in the unstructured case, while, for the structured radii, we benefit from algorithms that approximate the objective eigenvalue function with a piecewise quadratic global underestimator. The performance of the new approaches is illustrated with several examples, including a system that arises from a finite-element modeling of an industrial disk brake. \\[10pt] \noindent \textbf{Key words.} Linear Time-Invariant Dissipative Hamiltonian System, Port-Hamiltonian System, Robust Stability, Stability Radius, Eigenvalue Optimization, Subspace Projection, Structure Preserving Subspace Framework, Hermite Interpolation. \\[5pt] \noindent \textbf{AMS subject classifications.} 65F15, 93D09, 93A15, 90C26 \end{abstract} \section{Introduction} Linear time-invariant \emph{Dissipative Hamiltonian (DH) systems} are dynamical systems of the form \begin{equation}\label{DH} \dot x \;\; = \;\; (J-R)Qx. \end{equation} They arise as the homogeneous part of \emph{port-Hamiltonian (PH) systems} of the form \begin{equation}\label{ph} \begin{split} \dot x(t) & \;\; = \;\; (J-R)Qx(t) + (B-P)u(t), \\ y(t) & \;\; = \;\; (B+P)^H Qx(t) + D u(t), \end{split} \end{equation} when the input $u$ is $0$ and the output $y$ is not considered.
Here $Q=Q^H\in\mathbb{C}^{n \times n}$ is an Hermitian positive definite matrix (denoted as $Q>0$), $J\in\mathbb{C}^{n \times n}$ is a skew-Hermitian matrix associated with the energy flux of the system, $R\in\mathbb{C}^{n \times n}$ is the Hermitian positive semi-definite (denoted by $R\geq 0$) \emph{dissipation matrix} of the system, $B\pm P\in\mathbb{C}^{n \times m}$ are the \emph{port matrices}, and $D$ describes the \emph{direct feed-through} from input to output. The function $\mathcal H(x)= \frac 12 x^HQx$ (called \emph{Hamiltonian function}) describes the total internal energy of the system. Here and elsewhere $A^H$ denotes the conjugate transpose of a complex matrix $A$. PH and DH systems play an essential role in most areas of science and engineering, see e.g. \cite{JacZ12,SchJ14}, due to their very important structural properties; e.g., they allow modularized modeling and easy model reduction via Galerkin projection. An important structural property is that DH systems are automatically \emph{Lyapunov stable}, i.e., all eigenvalues of $A=(J-R)Q$ are in the closed left half of the complex plane, and those on the imaginary axis are semisimple, see \cite{MehMS16a}. However, DH systems are not necessarily \emph{asymptotically stable}, since $A$ may have purely imaginary eigenvalues, e.g., when the dissipation matrix $R$ vanishes, then all eigenvalues are purely imaginary. If a DH system is Lyapunov stable but not asymptotically stable, then arbitrarily small unstructured perturbations (such as rounding errors) may cause the system to become unstable. These issues are our motivation to analyse whether a DH system is \textit{robustly asymptotically stable}, i.e., whether small (structured or unstructured) perturbations keep it asymptotically stable. \begin{example}\label{ex:ex1} {\rm Disk brake squeal is a well-known problem in mechanical engineering. It occurs due to self-excited vibrations caused by instability at the pad-rotor interface \cite{Akay_2002}. 
The transition from stability to instability of the brake system is generally examined by finite element (FE) analysis of the system. In \cite{GraMQSW16}, FE models for disk brakes are derived in the form of second-order differential equations \begin{equation}\label{brake1} M \ddot x+D(\Omega) \dot x+ K(\Omega) x=f, \end{equation} with large and sparse coefficient matrices $M$, $D(\Omega)$, and $K(\Omega) \in \mathbb{R}^{n \times n}$, where $D(\Omega)$ and $K(\Omega)$ depend on the rotational speed $\Omega>0$ of the disk, and have the form \[ D(\Omega) := D_M + \frac{1}{\Omega} D_R,\ K(\Omega) := K_E + \Omega^2 K_g, \] where $D_M$ and $D_R$ represent the material and friction-induced damping matrices, and $K_E$ and $K_g$ the elastic and geometric stiffness matrices, respectively. Here, $M>0$ and $K(\Omega)>0$, whereas $D(\Omega)\geq 0$. (The function $f$ represents a forcing term or control, but for the stability analysis one may assume that $f=0$, which we assume in the following.) The incorporation of gyroscopic effects, modeled by the term $G(\Omega) \dot x$, with $G(\Omega) := \Omega D_G=-\Omega D_G^H$, and circulatory effects, modeled by an unsymmetric term $N x$, gives rise to a system \begin{equation} \label{eq:full_brake} M \ddot x + (D(\Omega) +G(\Omega)) \dot x + (K(\Omega) + N)x=0, \end{equation} or in first order representation $ \widetilde{M} \dot z + \widetilde{K} z=0$, where \begin{equation} \label{eq:first_order} \widetilde{M}=\begin{bmatrix} M & 0 \\ 0 & K(\Omega) \end{bmatrix}, \;\: \widetilde{K}=\begin{bmatrix} D(\Omega) + G(\Omega) & K(\Omega) + N \\ -K(\Omega) & 0 \end{bmatrix}.
\end{equation} Straightforward manipulations yield a system \begin{equation} \label{insta} \dot {\widetilde{z}} = (J - R)Q\widetilde{z} \end{equation} with $\widetilde{z} = Q^{-1}z$, where \begin{equation}\label{eq:insta_matrices} \begin{split} J =\begin{bmatrix} -G(\Omega) & -(K(\Omega) + \frac12N) \\ K(\Omega) + \frac12N^H & 0 \end{bmatrix}, \hskip 22ex \\ R =\begin{bmatrix} D(\Omega) & \frac12N \\ \frac12N^H & 0 \end{bmatrix}, \quad Q =\begin{bmatrix} M & 0 \\ 0 & K(\Omega) \end{bmatrix}^{-1}. \end{split} \end{equation} In the absence of circulatory effects, i.e., when $N=0$, the system in (\ref{insta}) is a DH system and as a result it is Lyapunov stable and typically even asymptotically stable. However, small circulatory effects, i.e., perturbations by a non-symmetric $N$ of small norm, may result in instability. } \end{example} Asymptotic stability of a general linear dynamical system in the presence of uncertainty can only be guaranteed when the system has a reasonable \textit{distance to instability}, i.e., to systems with purely imaginary eigenvalues. Hence, an estimation of the distance to instability, which is an optimization problem over admissible perturbations, is an important ingredient of a proper stability analysis. In this paper we focus on the stability analysis of large-scale (and typically sparse) DH systems of the form~\eqref{DH} in the presence of uncertainties in the coefficients. Considering perturbations in one of the coefficient matrices $J$, $R$, $Q$ of \eqref{DH}, in \cite{MehMS16a} characterizations for several structured distances to instability were derived under restricted perturbations of the form $B\Delta C$, with restriction matrices $B \in \mathbb{C}^{n\times m}$ and $C \in \mathbb{C}^{p\times n}$ of full column rank and full row rank, respectively, allowing selected parts of the matrices $J, R, Q$ to remain unperturbed.
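To make the block construction of Example~\ref{ex:ex1} concrete, the following minimal numerical sketch (with small random real blocks of our own choosing, not the actual FE data) verifies that $J$ is skew-symmetric, $R$ is symmetric, that $J - R = -\widetilde{K}$, and that for $N=0$ the eigenvalues of $(J-R)Q$ indeed lie in the closed left half-plane.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def spd(n):
    # random symmetric positive definite block
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

M, K = spd(n), spd(n)                              # M > 0, K(Omega) > 0
Dm = spd(n)                                        # D(Omega) >= 0 (here > 0)
Gs = rng.standard_normal((n, n)); G = Gs - Gs.T    # gyroscopic term, skew
N = rng.standard_normal((n, n))                    # circulatory term
Z = np.zeros((n, n))

def dh_blocks(N):
    """J, R, Q as in the example's block construction (real case)."""
    J = np.block([[-G,            -(K + 0.5 * N)],
                  [K + 0.5 * N.T,  Z            ]])
    R = np.block([[Dm,         0.5 * N],
                  [0.5 * N.T,  Z      ]])
    Q = np.linalg.inv(np.block([[M, Z], [Z, K]]))
    return J, R, Q

J, R, Q = dh_blocks(N)
Ktilde = np.block([[Dm + G, K + N], [-K, Z]])      # first-order coefficient

# with N = 0 the system is a DH system, hence Lyapunov stable
# (here even asymptotically stable since D(Omega) > 0)
J0, R0, Q0 = dh_blocks(np.zeros((n, n)))
max_real = np.linalg.eigvals((J0 - R0) @ Q0).real.max()
```

With $N \neq 0$, the block $R$ above is in general no longer positive semi-definite, which is exactly why circulatory effects can destroy the DH structure and the stability guarantee.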
We will use an adaptation of the subspace framework introduced in \cite{Aliyev2017}, based on model order reduction techniques, to compute the stability radii via the characterizations in \cite{MehMS16a}. The paper is organized as follows. Section \ref{defns} provides formal definitions of the structured and unstructured stability radii, and in Section \ref{char} we briefly recall the characterizations of these stability radii derived in \cite{MehMS16a}. Section \ref{sec:DH_unstructured_dist} proposes subspace frameworks for computing the unstructured stability radii by exploiting these characterizations. The performance of the proposed frameworks for the unstructured stability radii is illustrated via the disk brake example and several synthetic examples in Section \ref{sec:DH_numexps}. Finally, Section \ref{sec:structured} focuses on the structured stability radius when only $R$ is subject to Hermitian perturbations. We first discuss how small-scale problems can be solved in Section~\ref{sec:small_scale_st}. A new structured subspace framework is discussed in Section~\ref{sec:large_scale_st}, followed by several numerical examples in Section \ref{sec:str_num_exp}. \section{Unstructured and Structured Stability Radii}\label{defns} In \cite{MehMS16a}, computable formulas for several notions of unstructured and structured stability radii of DH systems of the form \eqref{DH} are derived. In this section we briefly recall the main definitions and results from \cite{MehMS16a} for restricted perturbations in one of the following forms: \begin{equation}\label{pert_sys} \left( \left(J+B\Delta C\right) - R\right)Q , \; \left(J - \left(R+B\Delta C\right)\right)Q , \; \text{or} \; \left(J - R \right)\left(Q+B\Delta C\right). \end{equation} In the following, ${\rm i} {\mathbb R}$ denotes the imaginary axis in the complex plane, $\Lambda(A)$ the spectrum of a matrix $A$, and $\|A\|_2$ the spectral norm.
\begin{definition}\label{defn:unst_stab_radii} Consider a DH system of the form \eqref{DH} and suppose that $B \in \mathbb{C}^{n \times m}$ and $C \in \mathbb{C}^{p \times n}$ are given full rank restriction matrices. \begin{itemize} \item[(i)] The \emph{unstructured restricted stability radius} $r(J; B,C)$ with respect to perturbations of $J$ under the restriction matrices $B$, $C$ is defined by \[ r(J; B,C) := \inf \{\|\Delta\|_2 : \Delta\in \mathbb{C}^{m\times p}, \Lambda (\left( \left(J+B\Delta C\right) - R\right)Q)\cap {\rm i}\mathbb{R} \neq \emptyset\}. \] \item[(ii)] The \emph{unstructured restricted stability radius} $r(R; B,C)$ with respect to perturbations of $R$ under the restriction matrices $B$, $C$ is defined by \[ r(R; B,C) := \inf \{\|\Delta\|_2 : \Delta\in \mathbb{C}^{m\times p}, \Lambda \left(\left(J - \left(R+B\Delta C\right)\right)Q\right)\cap {\rm i}\mathbb{R} \neq \emptyset\}. \] \item[(iii)] The \emph{unstructured restricted stability radius} $r(Q; B,C)$ with respect to perturbations of $Q$ under the restriction matrices $B$, $C$ is defined by \[ r(Q; B,C) := \inf \{\|\Delta\|_2 : \Delta\in \mathbb{C}^{m\times p}, \Lambda \left(\left(J - R \right)\left(Q+B\Delta C\right)\right)\cap {\rm i}\mathbb{R} \neq \emptyset\}. \] \end{itemize} \end{definition} \begin{example} \label{ex:ex2}{\rm Consider again Example~\ref{ex:ex1}. Here it is of interest to know whether (for given $\Omega$) the norm of the non-symmetric matrix $N$ is small enough to preserve the asymptotic stability that the DH system in (\ref{insta}) possesses without the circulatory effects.
The relevant stability radius for a specified $\Omega$ is given by \begin{equation}\label{eq:BH_stabradii} \inf \left\{ \| N \|_2 \; \bigg| \; \Lambda \left( {\mathcal A}(N) \right) \cap \ri {\mathbb R} \neq \emptyset \right\}, \end{equation} where \begin{equation*} \begin{split} {\mathcal A}(N) & := \left( \begin{bmatrix} -G(\Omega) & -(K(\Omega) + \frac12N) \\ K(\Omega) + \frac12N^H & 0 \end{bmatrix} - \begin{bmatrix} D(\Omega) & \frac12N \\ \frac12N^H & 0 \end{bmatrix} \right) \begin{bmatrix} M & 0 \\ 0 & K(\Omega) \end{bmatrix}^{-1} \\ & = \left\{ \begin{bmatrix} -G(\Omega) & -K(\Omega) \\ K(\Omega) & 0 \end{bmatrix} - \left( \begin{bmatrix} D(\Omega) & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & N \\ 0 & 0 \end{bmatrix} \right) \right\} \begin{bmatrix} M & 0 \\ 0 & K(\Omega) \end{bmatrix}^{-1}. \end{split} \end{equation*} Hence, the stability radius in (\ref{eq:BH_stabradii}) corresponds to the unstructured stability radius $r(R; B, C)$ with the restriction matrices $ B = \begin{bmatrix} I & 0 \end{bmatrix}^T $ and $ C = \begin{bmatrix} 0 & I \end{bmatrix} $ with $n\times n$ blocks. Furthermore, in the definition of ${\mathcal A}(N)$ the skew-Hermitian perturbations are more influential on the imaginary parts of its eigenvalues, whereas the Hermitian perturbations are more effective in moving its eigenvalues towards the imaginary axis. This leads us to the consideration of the stability radius \begin{equation}\label{eq:BH_stabradii2} \inf \left\{ \| N \|_2 \; \bigg| \; \Lambda \left( {\mathcal A}_0(N) \right) \cap \ri {\mathbb R} \neq \emptyset \right\} \end{equation} with \[ {\mathcal A}_0(N) := \left( \begin{bmatrix} -G(\Omega) & -K(\Omega) \\ K(\Omega) & 0 \end{bmatrix} - \begin{bmatrix} D(\Omega) & \frac12N \\ \frac12N^H & 0 \end{bmatrix} \right) \begin{bmatrix} M & 0 \\ 0 & K(\Omega) \end{bmatrix}^{-1}. \] } \end{example} Examples such as Example~\ref{ex:ex2} motivate the following definition of the structured stability radius in \cite{MehMS16a}. 
\begin{definition}\label{defn:st_stab_radii} Consider a DH system of the form \eqref{DH} and suppose that $B \in \mathbb{C}^{n \times m}$ is a given restriction matrix. The \emph{structured restricted stability radius} with respect to Hermitian perturbations of $R$ under the restriction $B$ is defined by \begin{equation}\label{eq:structured_radii_defn} \begin{split} r^{\rm Herm}(R; B) := \inf\{ \|\Delta \|_2 \; | \; \Delta = \Delta^H, \;\;\; \hskip 25ex \\ \hskip 23ex \Lambda \big( (J - R)Q - (B\Delta B^H)Q \big) \cap {\rm i}\mathbb R\neq \emptyset\}. \end{split} \end{equation} \end{definition} \section{Characterizations for Stability Radii}\label{char} The numerical techniques that we will derive for the computation of the unstructured and structured stability radii exploit eigenvalue or singular value optimization characterizations derived in \cite{MehMS16a}. \begin{theorem}\label{thm_unstr} For an asymptotically stable DH system of the form \eqref{DH} and restriction matrices $B\in\mathbb{C}^{n \times m}$, $C \in\mathbb C^{p \times n}$ the following assertions hold: \begin{itemize} \item[(i)] The unstructured stability radius $r(R; B,C)$ is finite if and only if $G_R(\omega) := CQ({\rm i}\omega I_n-(J - R)Q)^{-1}B$ is not identically zero if and only if $r(J; B,C)$ is finite. If $r(R; B,C)$ is finite, then we have \begin{equation}\label{th2} r(R; B,C) = r(J;B,C) = \inf_{\omega\in\mathbb R}\frac{1}{\|G_{R}(\omega)\|_2}. \end{equation} \item[(ii)] The unstructured stability radius $r(Q; B,C)$ is finite if and only if $G_Q(\omega) : = C({\rm i}\omega I_n-(J - R)Q)^{-1}(J-R)B$ is not identically zero. If\, $r(Q; B,C)$ is finite, then we have \begin{equation}\label{th1} r(Q; B,C) = \inf_{\omega\in\mathbb R}\frac{1}{\|G_{Q}(\omega)\|_2}. \end{equation} \end{itemize} \end{theorem} For the structured stability radius and Hermitian perturbations of $R$ the following result is obtained in \cite{MehMS16a}.
\begin{theorem}\label{thm_str_Herm} For an asymptotically stable DH system of the form \eqref{DH}, and a restriction matrix $B\in\mathbb{C}^{n\times m}$ of full column rank, let \begin{enumerate} \item $W(\lambda) := (J-R)Q - \lambda I$ for a given $\lambda \in {\mathbb C}$ such that $W(\lambda)$ is invertible, \item $L(\lambda)$ be a lower triangular Cholesky factor of \begin{center} $\widetilde{H}_0(\lambda) := B^H W(\lambda)^{-H} Q B B^H Q W(\lambda)^{-1} B$, \end{center} i.e., $L(\lambda)$ is a lower triangular matrix satisfying $\widetilde{H}_0(\lambda) = L(\lambda) L(\lambda)^H$, \item $H_0(\lambda) := L(\lambda)^{-1} L(\lambda)^{-H}$, \item $H_1(\lambda) := \ri (\widetilde{H}_1(\lambda) - \widetilde{H}_1(\lambda)^H)$, where $\widetilde{H}_1(\lambda) := L(\lambda)^{-1} B^H W(\lambda)^{-H} Q B L(\lambda)^{-H}$. \end{enumerate} Then $r^{\rm Herm}(R; B)$ is finite, and given by \begin{equation*} r^{\rm Herm}(R; B) \; = \; \left\{ \inf_{\omega \in {\mathbb R}} \: \sup_{t \in {\mathbb R}} \: \lambda_{\min} (H_0(\ri \omega) + t H_1 (\ri \omega)) \right\}^{1/2}, \end{equation*} where $\lambda_{\min}( \cdot )$ denotes the smallest eigenvalue of its Hermitian matrix argument, and the inner supremum is attained if and only if $H_1({\rm i} \omega)$ is indefinite. \end{theorem} The characterization in \cite{MehMS16a} is presented in a slightly different form. In particular, it is stated in terms of an orthonormal basis $U(\lambda)$ for the kernel of $\left((I - BB^+) W(\lambda)\right)$. It turns out that $U(\lambda)$ does not have to be orthonormal, rather the theorem can be stated in terms of any basis for the kernel of $\left((I - BB^+) W(\lambda)\right)$. In Theorem~\ref{thm_str_Herm}, we have employed a particular basis that simplifies the formulas and facilitates the computation. 
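To make the construction in Theorem~\ref{thm_str_Herm} concrete, the following NumPy sketch (random real data, a fixed $\lambda = {\rm i}\omega$; an illustration under generic invertibility assumptions, not the computational method proposed later) forms $\widetilde H_0(\lambda)$, its Cholesky factor, and the matrices $H_0(\lambda)$, $H_1(\lambda)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
S = rng.standard_normal((n, n)); J = (S - S.T) / 2
F = rng.standard_normal((n, n)); R = F @ F.T
P = rng.standard_normal((n, n)); Q = P @ P.T + n * np.eye(n)
B = rng.standard_normal((n, m))               # full column rank (generically)

lam = 0.7j                                    # a point i*omega with W(lam) invertible
W = (J - R) @ Q - lam * np.eye(n)             # W(lambda)
Y = B.conj().T @ Q @ np.linalg.solve(W, B)    # Y = B^H Q W(lam)^{-1} B
H0t = Y.conj().T @ Y                          # H~_0 = B^H W^{-H} Q B B^H Q W^{-1} B = Y^H Y
L = np.linalg.cholesky(H0t)                   # lower triangular, H~_0 = L L^H
Linv = np.linalg.inv(L)
H0 = Linv @ Linv.conj().T                     # H_0 = L^{-1} L^{-H}
H1t = Linv @ Y.conj().T @ Linv.conj().T       # H~_1 = L^{-1} B^H W^{-H} Q B L^{-H}
H1 = 1j * (H1t - H1t.conj().T)                # H_1, Hermitian by construction

herm_ok = np.allclose(H0, H0.conj().T) and np.allclose(H1, H1.conj().T)
pd_ok = np.linalg.eigvalsh(H0).min() > 0
print(herm_ok, pd_ok)                         # expected: True True
```

Note that $\widetilde H_0(\lambda) = Y^H Y$ with $Y = B^H Q W(\lambda)^{-1} B$, so $\widetilde H_0(\lambda)$ is Hermitian positive semidefinite and the Cholesky factorization exists whenever $Y$ is invertible.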
\section{Computation of the Unstructured Stability Radii for Large-Scale Problems}\label{sec:DH_unstructured_dist} In this section we study the computation of unstructured stability radii for large-scale DH systems using the characterizations of $r(R; B,C)$, $r(Q; B,C)$, $r(J;B,C)$ given in Theorem~\ref{thm_unstr}. One easily observes that \begin{equation*} \begin{split} G_R({\rm i} \omega) & \; := \; CQ({\rm i}\omega I_n-(J - R)Q)^{-1}B, \\ G_Q({\rm i} \omega) & \; := \; C({\rm i}\omega I_n-(J - R)Q)^{-1}(J-R)B \end{split} \end{equation*} can be viewed as restrictions of transfer functions of control systems to the imaginary axis. To be precise, setting $\widetilde{A} = (J - R ) Q$, $\widetilde{B} = B$ and $\widetilde{C} = C Q$, the matrix-valued function $G_R({\rm i} \omega) := CQ({\rm i}\omega I_n-(J - R)Q)^{-1}B \;$ becomes \begin{equation}\label{def_GR} \widetilde{G}_R({\rm i} \omega) := \widetilde{C}({\rm i}\omega I_n - \widetilde{A})^{-1} \widetilde{B} \end{equation} which can be considered as the transfer function of the system \begin{equation}\label{eq:Disc} \dot x = \widetilde{A} x + \widetilde{B} u, \quad y = \widetilde{C} x \end{equation} on the imaginary axis. 
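Before turning to subspace methods, note that on small problems the characterization \eqref{th2} can be evaluated directly by a frequency sweep. A crude NumPy sketch (random data; the grid range and resolution are ad hoc, which is precisely what the subspace frameworks below avoid):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 8, 2, 2
S = rng.standard_normal((n, n)); J = (S - S.T) / 2
F = rng.standard_normal((n, n)); R = F @ F.T
P = rng.standard_normal((n, n)); Q = P @ P.T + n * np.eye(n)
B = rng.standard_normal((n, m)); C = rng.standard_normal((p, n))
A = (J - R) @ Q

def GR_norm(w):
    """Spectral norm of G_R(i w) = C Q (i w I - (J-R)Q)^{-1} B."""
    return np.linalg.norm(C @ Q @ np.linalg.solve(1j * w * np.eye(n) - A, B), 2)

grid = np.linspace(-100.0, 100.0, 2001)       # ad hoc frequency grid
peak = max(GR_norm(w) for w in grid)
r_RBC = 1.0 / peak                            # grid estimate of r(R;B,C) = r(J;B,C)
print(r_RBC > 0)
```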
Theorem \ref{thm_unstr} suggests that if $\widetilde{G}_R({\rm i} \omega) := \widetilde{C}({\rm i}\omega I_n - \widetilde{A})^{-1} \widetilde{B}$ is not identically zero, then $r(R; B,C)$ and $r(J; B,C)$ are finite, and characterized by \begin{equation}\label{char1} \begin{split} r(R; B,C) \; = \; r(J; B,C) & \; = \; \inf_{\omega\in\mathbb{R}}\frac{1}{ \| \widetilde{G}_R({\rm i} \omega) \|_2} \\ & \; = \; \frac{1}{\sup_{\omega\in\mathbb{R}} \| \widetilde{G}_R({\rm i} \omega) \|_2} \; = \; \frac{1}{\;\; \| \widetilde{G}_R \|_{\mathcal H_\infty}}, \end{split} \end{equation} where $\| \widetilde{G}_R \|_{{\mathcal H}_\infty} := \sup_{\omega \in {\mathbb R}} \: \sigma_{\max} (\widetilde{G}_R({\rm i} \omega))$ denotes the ${\mathcal H}_\infty$-norm of $\widetilde{G}_R$, and $\sigma_{\max}(\cdot)$ denotes the maximal singular value. For the stability radius $r(Q; B,C)$, consideration of $G_Q({\rm i} \omega) : = C({\rm i}\omega I_n-(J - R)Q)^{-1}(J-R)B$, by setting $\widetilde{A} = (J - R)Q$, $\widetilde{B} = (J - R )B$ and $\widetilde{C} = C$, leads us to a similar characterization. \subsection{A Subspace Framework}\label{sec:sf_descriptor} Recently, in \cite{Aliyev2017}, a subspace framework for the computation of the ${\mathcal H}_\infty$-norm of a large-scale system has been proposed, which is inspired by model order reduction techniques and has made the computation of ${\mathcal H}_\infty$-norms feasible for very large control systems. We will now discuss how to use these techniques for the computation of the unstructured stability radii $r(R; B,C)$, $r(J; B,C)$, $r(Q; B,C)$ in the large-scale setting. To briefly summarize the iterative procedure in the subspace framework of~\cite{Aliyev2017}, let us assume that in iteration $k$, two subspaces ${\mathcal V}_k$ and ${\mathcal W}_k$ of equal dimension have been determined, as well as matrices $V_k$ and $W_k$ whose columns form orthonormal bases for these subspaces.
Applying a Petrov-Galerkin projection to system (\ref{eq:Disc}) restricts the state $x$ to ${\mathcal V}_k$, i.e., in (\ref{eq:Disc}) we replace $x$ by $V_k x_k$, and imposes that the residual after this restriction is orthogonal to ${\mathcal W}_k$. This projection gives rise to a reduced order system \begin{equation}\label{eq:red_sys1} \dot x_k = \widetilde{A}_k x_k+ \widetilde{B}_k u, \quad y_k = \widetilde{C}_k x_k, \end{equation} with \begin{equation}\label{eq:red_sys2} \widetilde{A}_k := W_k^H \widetilde{A} V_k, \;\; \widetilde{B}_k := W_k^H \widetilde{B}, \;\; \widetilde{C}_k = \widetilde{C} V_k. \end{equation} Then the ${\mathcal H}_\infty$-norm of a transfer function $G(s) := \widetilde{C}( s I_n - \widetilde{A})^{-1} \widetilde{B}$ in (\ref{eq:Disc}) can be approximated by computing the ${\mathcal H}_\infty$-norm of \[ G_k(s) := \widetilde{C}_k( s I_k - \widetilde{A}_k)^{-1} \widetilde{B}_k \] for instance by employing the method in \cite{Boyd1990} or \cite{Bruinsma1990}, in particular, by computing $ \omega_{k+1} := \argmax_{\omega \in {\mathbb R}} \: \sigma_{\max} ( G_k( \ri \omega) )$. This is computationally cheap if the dimensions of ${\mathcal V}_k, {\mathcal W}_k$ are small. Once $\omega_{k+1}$ has been computed, the subspaces ${\mathcal V}_k$ and ${\mathcal W}_k$ are expanded into larger subspaces ${\mathcal V}_{k+1}$ and ${\mathcal W}_{k+1}$ in such a way that the corresponding reduced transfer function $G_{k+1}(s)$ satisfies the \emph{Hermite interpolation conditions} \begin{equation}\label{eq:Hermite_interpolate} \begin{split} \sigma_{\max} (G(\ri \omega_{k+1})) & = \sigma_{\max} (G_{k+1}(\ri \omega_{k+1})), \\ \sigma_{\max}' (G(\ri \omega_{k+1})) & = \sigma_{\max}' (G_{k+1}(\ri \omega_{k+1})), \end{split} \end{equation} where $\sigma_{\max}'(G({\rm i}\omega))$ denotes the derivative of $\sigma_{\max} (G({\rm i} \omega))$ with respect to $\omega$.
Denoting the image space of a matrix $A$ by $ {\rm Im} (A)$, it is shown in \cite{Aliyev2017} that \[ \begin{split} {\mathcal V}_{k+1} & := {\mathcal V}_k \oplus {\rm Im} (( \ri \omega_{k+1} I_n - \widetilde{A})^{-1} \widetilde{B}), \\ {\mathcal W}_{k+1} & := {\mathcal W}_k \oplus {\rm Im} ( (\widetilde{C} ( \ri \omega_{k+1} I_n - \widetilde{A})^{-1})^H), \end{split} \] more specifically the inclusions \[ {\rm Im} (( \ri \omega_{k+1} I_n - \widetilde{A})^{-1} \widetilde{B}) \subseteq {\mathcal V}_{k+1},\ {\rm Im} ( (\widetilde{C} ( \ri \omega_{k+1} I_n - \widetilde{A})^{-1})^H) \subseteq {\mathcal W}_{k+1}, \] ensure that the Hermite interpolation conditions (\ref{eq:Hermite_interpolate}) are satisfied. The procedure is then repeated with the expanded subspaces ${\mathcal V}_{k+1}$, ${\mathcal W}_{k+1}$. In \cite{Aliyev2017}, it is shown that the sequence $\{ \omega_k \}$ converges at a super-linear rate and satisfies \[ \begin{split} \sigma_{\max} (G(\ri \omega_{j})) & = \sigma_{\max} (G_{k}(\ri \omega_{j})), \\ \sigma_{\max}' (G(\ri \omega_{j})) & = \sigma_{\max}' (G_{k}(\ri \omega_{j})) \end{split} \] for $j = 1,\dots,k$. A disadvantage of this general approach is that even if $\widetilde{A} = (J - R)Q$ has DH structure, this is not necessarily true for $\widetilde{A}_k$, so it cannot be guaranteed from the structure that the reduced system is stable. In the next section we modify the procedure of~\cite{Aliyev2017} to preserve the DH structure. \subsection{A Structure Preserving Subspace Framework}\label{sec:sf_PH} In this subsection we derive an interpolating, DH structure preserving version of the robust subspace projection framework. Structure preserving subspace projection methods in the context of model order reduction of large-scale PH and DH systems have been proposed in \cite{Gugercin_2012,Gugercin_2009,Polyuga_2009,Polyuga_2011,Schaft_2009,Wu_2014}.
Our approach is inspired by \cite{Gugercin_2012} and uses a general interpolation result from \cite{Gallivan2005}. \begin{theorem}\label{thm:Gal_interpolate} Let $G(s)$ be the transfer function for a full order system as in (\ref{eq:Disc}), and let $G_k(s)$ be the transfer function for the reduced system defined by (\ref{eq:red_sys1}), (\ref{eq:red_sys2}). \begin{enumerate} \item[\bf (i)] \emph{(Right Tangential Interpolation)} For given $\widehat{s} \in {\mathbb C}$ and $\widehat{b} \in {\mathbb C}^m$, if \begin{equation}\label{eq:subspace_incl} \left[ (\widehat{s}I - \widetilde{A})^{-1} \right]^\ell \widetilde{B} \widehat{b} \; \in \; {\mathcal V}_k \quad \quad {\rm for} \;\; \ell = 1, \dots, N, \end{equation} and $\: {\mathcal W}_k$ is such that $W_k^H V_k = I$, then we have \begin{equation}\label{eq:tangent_interpolate} G^{(\ell)} (\widehat{s}) \widehat{b} = G^{(\ell)}_k (\widehat{s}) \widehat{b} \quad\quad {\rm for} \;\; \ell = 0, \dots, N-1 \end{equation} provided that both \, $\widehat{s} I - \widetilde{A}$ and\, $\widehat{s} I - \widetilde{A}_k$ are invertible. \item[\bf (ii)] \emph{(Left Tangential Interpolation)} For a given $\widehat{s} \in {\mathbb C}$ and $\widehat{c} \in {\mathbb C}^p$, if \begin{equation}\label{eq:subspace_inclusion_left} \left( \widehat{c}^{\: H} \widetilde{C} \left[ (\widehat{s}I - \widetilde{A})^{-1} \right]^\ell \right)^H \; \in \; {\mathcal W}_k \quad \quad {\rm for} \;\; \ell = 1, \dots, N, \end{equation} and $\: {\mathcal V}_k$ is such that $W_k^H V_k = I$, then we have \begin{equation}\label{eq:tangent_interpolate_left} \widehat{c}^{\: H} G^{(\ell)} (\widehat{s}) = \widehat{c}^{\: H} G^{(\ell)}_k (\widehat{s}) \quad\quad {\rm for} \;\; \ell = 0, \dots, N-1 \end{equation} provided that both \, $\widehat{s} I - \widetilde{A}$ and \, $\widehat{s} I - \widetilde{A}_k$ are invertible. 
\end{enumerate} \end{theorem} \subsubsection{Computation of $r(R; B,C)$ and $r(J; B,C)$}\label{sec:unstructured_subspace_rR_rJ} The computation of $r(R; B, C) = r(J; B,C)$ involves the maximization of the largest singular value of the transfer function $G(s) = CQ (sI - (J-R)Q)^{-1} B$ associated with the system \begin{equation}\label{eq:PHS} \begin{split} \dot x &= (J-R) Qx + B u, \\ y &= CQx \end{split} \end{equation} on the imaginary axis. We make use of Theorem~\ref{thm:Gal_interpolate} to obtain a reduced order system satisfying the interpolation conditions (\ref{eq:tangent_interpolate}) while retaining the structure in (\ref{eq:PHS}). We, in particular, employ right tangential interpolation for a given $\widehat{s} \in {\mathbb C}$ and $\widehat{b} \in {\mathbb C}^m$, and choose ${\mathcal V}_k$ as any subspace satisfying (\ref{eq:subspace_incl}). Let us also define $ W_k \;\; := \;\; QV_k (V_k^H Q V_k)^{-1}$, ${\mathcal W}_k := {\rm Im} (W_k)$, so that \[ W_k^H V_k \;\; = \;\; I_k \quad {\rm and} \quad ( W_k V_k^H )^2 \;\; = \;\; W_k V_k^H, \] i.e., $W_k V_k^H$ is an oblique projector onto ${\rm Im}( Q V_k)$. The matrices $\widetilde{A}_k, \widetilde{B}_k, \widetilde{C}_k$ of the reduced system (\ref{eq:red_sys1}), (\ref{eq:red_sys2}) for these choices of $V_k$ and $W_k$ then satisfy \begin{equation}\label{eq:reduced_A} \begin{split} \widetilde{A}_k & \;\; = \;\; W_k^H \widetilde{A} V_k = W_k^H (J-R) Q V_k = W_k^H (J-R) W_k V_k^H Q V_k \\ & \;\; = \;\; (J_k - R_k) Q_k, \end{split} \end{equation} where $J_k := W_k^H J W_k=-J_k^H$, $R_k := W_k^H R W_k=R_k^H\geq 0$, and $Q_k := V_k^H Q V_k=Q_k^H>0$. Additionally, $ \widetilde{C}_k \;\; = \;\; \widetilde{C} V_k = CQ V_k = C W_k V_k^H Q V_k = C_k Q_k$, where $C_k : = C W_k$, and $\widetilde{B}_k \;\; = \;\; W_k^H B \;\; =: \;\; B_k$. This construction leads to the following result of \cite{Gugercin_2012}.
\begin{theorem} \label{interpol2} Consider a linear system of the form \eqref{eq:PHS} with transfer function $G(s) := CQ(s I_n - (J - R)Q)^{-1}B$. Furthermore, for a given point $\widehat{s}\in\C$ and a given tangent direction $\widehat{b}\in\C^m$, suppose that $V_k$ is a matrix with orthonormal columns such that \begin{equation*} ( \widehat{s}I_n - (J - R)Q )^{-(\ell-1)} (\widehat{s}I_n - (J - R)Q )^{-1}B\widehat{b} \in{\rm Im}(V_k) \quad \text{for} \;\; \ell = 1,\ldots,N. \end{equation*} Define $\: W_k := QV_k(V_k^HQV_k)^{-1} \:$ and set \begin{equation}\label{eq:DH_subspaces} \begin{split} J_k: = & W_k^H J W_k, \;\; Q_k: = V_k^H Q V_k, \;\; R_k := W_k^H R W_k \\ B_k: =& W_k^H B, \;\; C_k: = CW_k. \end{split} \end{equation} Then the resulting reduced order model \begin{equation}\label{eq:RPHS} \begin{split} \dot x_k& \;\; = \;\; (J_k-R_k) Q_k x_k + B_k u, \\ y_k& \;\; = \;\; C_k Q_kx_k \end{split} \end{equation} is a DH system with transfer function \begin{equation} \label{eq:rph} G_k(s) \;\; := \;\; C_kQ_k(s I_k - (J_k - R_k)Q_k)^{-1}B_k \end{equation} that satisfies \begin{equation} G^{(j)}(\hat{s})\hat{b} \;\; = \;\; G^{(j)}_k(\hat{s})\hat{b} \quad \text{for} \quad j = 0,\ldots, N-1, \end{equation} where $G^{(j)}(\hat{s})$ denotes the $j$-th derivative of $\; G(s)$ at the point $\hat{s}$. \end{theorem} Based on Theorem~\ref{interpol2} we obtain Algorithm \ref{alg:dti} for the computation of $r(R; B,C) = r(J; B,C)$.
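The interpolation property of Theorem~\ref{interpol2} is easy to verify numerically. The following NumPy/SciPy sketch (random real data, $N = 2$; an illustration, not the implementation used for the experiments) builds $V_k$, $W_k$ for a single point $\widehat s$ and tangent direction $\widehat b$, and also illustrates that a single LU factorization of $D(\widehat s)$ serves both solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(3)
n, m, p = 10, 2, 2
S = rng.standard_normal((n, n)); J = (S - S.T) / 2
F = rng.standard_normal((n, n)); R = F @ F.T
P = rng.standard_normal((n, n)); Q = P @ P.T + n * np.eye(n)
B = rng.standard_normal((n, m)); C = rng.standard_normal((p, n))
A = (J - R) @ Q
s_hat = 2.5j                                 # interpolation point on the imaginary axis
b_hat = rng.standard_normal(m)               # tangent direction

lu = lu_factor(s_hat * np.eye(n) - A)        # one LU factorization of D(s_hat) ...
v1 = lu_solve(lu, B @ b_hat)                 # ... serves D(s_hat)^{-1} B b_hat
v2 = lu_solve(lu, v1)                        # ... and D(s_hat)^{-2} B b_hat
V, _ = np.linalg.qr(np.column_stack([v1, v2]))
W = Q @ V @ np.linalg.inv(V.conj().T @ Q @ V)

Jk = W.conj().T @ J @ W; Rk = W.conj().T @ R @ W
Qk = V.conj().T @ Q @ V; Bk = W.conj().T @ B; Ck = C @ W
k = V.shape[1]
Gk = Ck @ Qk @ np.linalg.solve(s_hat * np.eye(k) - (Jk - Rk) @ Qk, Bk)
G_full = C @ Q @ lu_solve(lu, B)
err = np.linalg.norm((G_full - Gk) @ b_hat) / np.linalg.norm(G_full @ b_hat)
print(err < 1e-8)                            # tangential interpolation G(s)b = G_k(s)b
```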
\begin{algorithm}[ht] \label{algor1} \begin{algorithmic}[1] \REQUIRE{Matrices $B \in \mathbb{C}^{n\times m}$, $C \in \mathbb{C}^{p\times n}$, $ J, R, Q \in \mathbb{C}^{n\times n}$.} \ENSURE{The sequence of frequencies $\{ \omega_k \}$.} \STATE Choose initial interpolation points $\omega_{1}, \dots , \omega_{q} \in {\mathbb R}.$ \STATE $V_q \gets {\rm orth} \begin{bmatrix} D(\ri\omega_1)^{-1}B & D(\ri\omega_1)^{-2}B & \dots & D(\ri\omega_q)^{-1}B & D(\ri\omega_q)^{-2}B \end{bmatrix}$ \label{defn_init_subspaces0} \\ \hskip 19ex $\quad \text{and}\quad \quad W_q \gets QV_q(V_q^HQV_q)^{-1} $. \label{defn_init_subspaces} \FOR{$k = q,\,q+1,\,\dots$} \STATE Form $G_{k}$ as in \eqref{eq:rph} for the choices of $J_k, R_k, Q_k, B_k, C_k$ in (\ref{eq:DH_subspaces}). \label{reduced_transfer} \STATE $\; \displaystyle \omega_{k+1} \gets \: \argmax_{\omega \in {\mathbb R}} \sigma_{\max}(G_{k} (\ri \omega))$. \label{solve_reduced} \STATE $\widehat{V}_{k+1} \gets \begin{bmatrix} D(\ri\omega_{k+1})^{-1}B & D(\ri\omega_{k+1})^{-2}B \end{bmatrix}$. \label{defn_later_subspaces} \STATE $V_{k+1} \gets \operatorname{orth}\left(\begin{bmatrix} V_{k} & \widehat{V}_{k+1} \end{bmatrix}\right) \quad \text{and}\quad W_{k+1} \gets Q V_{k+1}(V_{k+1}^H Q V_{k+1})^{-1}$.
\ENDFOR \end{algorithmic} \caption{$\;$ DH structure preserving subspace method for the computation of the stability radii $r(R; B,C)$ and $r(J; B,C)$ for large-scale systems.} \label{alg:dti} \end{algorithm} According to Theorem~\ref{interpol2}, for a given $\widehat{s} \in {\mathbb C}$, setting \[ V_k := \begin{bmatrix} D(\widehat{s})^{-1}B & D(\widehat{s})^{-2}B \end{bmatrix},\ W_k := QV_k(V_k^HQV_k)^{-1} \] where \begin{equation}\label{eq:defn_Ds} D(\widehat{s}): = (\widehat{s} I_n - (J - R)Q), \end{equation} we obtain $G(\widehat{s}) = G_k(\widehat{s})$ and $G'(\widehat{s}) = G'_k(\widehat{s})$ and thus the Hermite interpolation conditions \begin{equation}\label{eq:Hermite_inter_smax} \sigma_{\max}(G(\widehat{s})) = \sigma_{\max}(G_k(\widehat{s})),\ \sigma'_{\max}(G(\widehat{s})) = \sigma'_{\max}(G_k(\widehat{s})) \end{equation} are satisfied, which suggests the use of the reduced system in the greedy subspace framework outlined in Algorithm~\ref{alg:dti}. In line \ref{solve_reduced} of every iteration, the subspace framework computes the ${\mathcal H}_\infty$-norm of a reduced system, in particular, it computes the point ${\rm i} \omega_\ast$ on the imaginary axis where this ${\mathcal H}_\infty$-norm is attained. Then the current left and right subspaces are expanded so that the resulting reduced system still has DH structure and its transfer function Hermite interpolates the original transfer function at ${\rm i} \omega_\ast$. Since the Hermite interpolation conditions~(\ref{eq:Hermite_inter_smax}) are satisfied at $\widehat{s} = {\rm i} \omega_1, \dots, {\rm i} \omega_{k}$ at the end of iteration $k$, the rate-of-convergence analysis in \cite{Aliyev2017} applies and yields a superlinear rate of convergence for the sequence $\{ \omega_k \}$. The computationally most expensive part of Algorithm \ref{alg:dti} is in lines \ref{defn_init_subspaces} and \ref{defn_later_subspaces}, where many linear systems with possibly many right hand sides have to be solved.
If this is done with a direct solver, then for each value $\widehat{\omega} \in {\mathbb R}$ one $LU$ factorization of the matrix $D(\ri \widehat{\omega})$ has to be performed. For large values of $n$, the computation time is usually dominated by these $LU$ factorizations. In contrast to this, the solution of the reduced problem in line \ref{solve_reduced} can be achieved (for small systems) by means of the efficient algorithms in \cite{Boyd1990, Bruinsma1990}. \subsubsection{Computation of $r(Q; B,C)$} To compute the stability radius $r(Q; B,C)$ in the large-scale setting, we employ left tangential interpolations (i.e., part (ii) of Theorem \ref{thm:Gal_interpolate}). In this case $r(Q; B,C)$ is the reciprocal of the ${\mathcal H}_\infty$-norm of the transfer function $G(s) := C (sI - (J-R)Q)^{-1} (J-R)B$ corresponding to the system \begin{equation}\label{eq:PHS2} \dot x = (J-R) Qx + (J-R)B u, \quad y = Cx. \end{equation} To obtain a reduced system which has the same structure as (\ref{eq:PHS2}) and has a transfer function $G_k(s)$ that satisfies $\widehat{c}^{\: H} G(\widehat{s}) = \widehat{c}^{\: H}G_k(\widehat{s})$ for a given point $\widehat{s} \in {\mathbb C}$ and a direction $\widehat{c} \in {\mathbb C}^p$, let us choose $W_k$ so as to satisfy the condition in (\ref{eq:subspace_inclusion_left}) for $\widetilde{A} := (J-R)Q$, and $\widetilde{C} := C$. Furthermore, we set \[ V_k := (J - R)^H W_k (W_k^H (J - R)^H W_k)^{-1}, \] as well as ${\mathcal W}_k := {\rm Im}(W_k)$, ${\mathcal V}_k := {\rm Im}(V_k)$. The matrix $V_k$ is chosen to satisfy \[ V_k^H W_k \;\; = \;\; I_k \quad {\rm and} \quad (V_k W_k^H)^2 \;\; = \;\; V_k W_k^H, \] so that $V_k W_k^H$ is an oblique projector onto ${\rm Im} ( (J-R)^H W_k )$.
In (\ref{eq:PHS2}), setting $\widetilde{A} := (J-R)Q$, $\widetilde{B} := (J-R)B$, $\widetilde{C} = C$, let us investigate the matrices $\widetilde{A}_k, \widetilde{B}_k, \widetilde{C}_k$ of the corresponding reduced system defined by (\ref{eq:red_sys1}), (\ref{eq:red_sys2}). Specifically, we have that \[ \widetilde{A}_k \;\; = \;\; W_k^H \widetilde{A} V_k \; = \; W_k^H (J-R) Q V_k \; = \; W_k^H (J-R) W_k V_k^H Q V_k, \] where, in the third equality, we employ $V_k W_k^H (J - R)^H W_k = (J- R)^H W_k$, or equivalently $W_k^H (J- R) W_k V_k^H = W_k^H (J-R)$. Hence, defining $J_k := W_k^H J W_k=-J_k^H$, $R_k := W_k^H R W_k=R_k^H\geq 0$, $Q_k := V_k^H Q V_k=Q_k^H>0$, we obtain a DH system with $\widetilde{A}_k \;\; = \;\; (J_k - R_k) Q_k$. We also have \[ \widetilde{B}_k \;\; = \;\; W_k^H \widetilde{B} = W_k^H (J-R) B = W_k^H (J - R) W_k V_k^H B = (J_k - R_k) B_k \] with $B_k := V_k^H B$, and $\widetilde{C}_k \;\; = \;\; C V_k \;\; =: \;\; C_k$. These constructions lead to the following analogue of Theorem \ref{interpol2}. \begin{theorem} \label{interpol3} Consider a linear system of the form \eqref{eq:PHS2} with the transfer function $G(s) := C(s I_n - (J - R)Q)^{-1}(J-R)B$. For a given point $\widehat{s}\in\C$ and a direction $\widehat{c}\in\C^p$, suppose that $W_k$ is a matrix with orthonormal columns such that \begin{equation*} \left( \widehat{c}^{\: H} C ( \widehat{s}I_n - (J - R)Q )^{-1} (\widehat{s}I_n - (J - R)Q )^{-(\ell-1)} \right)^H \in{\rm Im}(W_k) \end{equation*} for $\ell= 1, \dots, N$.
Letting $\: V_k := (J - R)^H W_k (W_k^H (J - R)^H W_k)^{-1} \:$, and \begin{equation}\label{eq:DH_subspaces2} \begin{split} J_k: = & W_k^H J W_k, \;\; Q_k: = V_k^H Q V_k, \;\; R_k := W_k^H R W_k \\ B_k: =& V_k^H B, \;\; C_k: = CV_k, \end{split} \end{equation} the resulting reduced order system \begin{equation}\label{eq:RPHS2} \dot x_k \;\; = \;\; (J_k-R_k) Q_k x_k + (J_k - R_k) B_k u, \quad y_k \;\; = \;\; C_k x_k \end{equation} is such that $\dot x_k = (J_k-R_k) Q_k x_k$ is dissipative Hamiltonian. Furthermore, the transfer function \begin{equation} \label{eq:rph2} G_k(s) \;\; := \;\; C_k (s I_k - (J_k - R_k)Q_k)^{-1} (J_k - R_k) B_k \end{equation} of (\ref{eq:RPHS2}) satisfies \begin{equation} \widehat{c}^{\: H} G^{(\ell)}(\hat{s}) \;\; = \;\; \widehat{c}^{\: H} G^{(\ell)}_k(\hat{s}) \quad \text{for} \quad \ell = 0,\ldots, N-1. \end{equation} \end{theorem} Theorem \ref{interpol3} shows that at a given $\widehat{s} \in {\mathbb C}$, the Hermite interpolation properties $G(\widehat{s}) = G_k(\widehat{s})$ and $G'(\widehat{s}) = G_k'(\widehat{s})$ (and, in particular, $\sigma_{\max} (G(\widehat{s}))$ $=$ $\sigma_{\max}(G_k(\widehat{s}))$ and $\sigma_{\max}'(G(\widehat{s})) = \sigma_{\max}'(G_k(\widehat{s}))$) can be achieved, while preserving the structure, with the choices \begin{eqnarray*} W_k \; &=& \; \left[ \begin{array}{cc} (C D(\widehat{s})^{-1})^H & (C D(\widehat{s})^{-2})^H \end{array} \right],\\ V_k \; &=& \; (J - R)^H W_k (W_k^H (J - R)^H W_k)^{-1}, \end{eqnarray*} where $D(\widehat{s})$ is as in (\ref{eq:defn_Ds}). This in turn gives rise to Algorithm \ref{alg:dti2}. \begin{algorithm}[ht] \begin{algorithmic}[1] \REQUIRE{Matrices $B \in \mathbb{C}^{n\times m}$, $C \in \mathbb{C}^{p\times n}$, $ J, R, Q \in \mathbb{C}^{n\times n}$.} \ENSURE{The sequence $\{ \omega_k \}$.} \STATE Choose initial interpolation points $\omega_{1}, \dots, \omega_{j} \in {\mathbb R}$.
\STATE $W_j \gets {\rm orth} \left[ \left( C D(\ri\omega_1)^{-1} \right)^H \;\; \left( C D(\ri\omega_1)^{-2} \right)^H \;\; \dots \right.$ \\ \hskip 35ex $ \left. \;\; \left( C D(\ri\omega_j)^{-1} \right)^H \;\; \left( C D(\ri\omega_j)^{-2} \right)^H \right]$, \\ $\quad\quad\quad\quad V_j\gets (J - R)^H W_j (W_j^H (J - R)^H W_j)^{-1} $. \label{defn_init_subspaces2} \FOR{$k = j,\, j+1,\,\dots$} \STATE Form $G_{k}$ as in \eqref{eq:rph2} for the choices of $J_k$, $R_k$, $Q_k$, $B_k$, $C_k$ in (\ref{eq:DH_subspaces2}). \label{reduced_transfer2} \STATE $\; \displaystyle \omega_{k+1} \gets \argmax_{\omega \in {\mathbb R}} \sigma_{\max}(G_{k} (\ri \omega))$. \label{solve_reduced2} \STATE $\widehat{W}_{k+1} \gets \begin{bmatrix} \left( C D(\ri\omega_{k+1})^{-1} \right)^H & \left( C D(\ri\omega_{k+1})^{-2} \right)^H \end{bmatrix}$. \label{defn_later_subspaces2} \STATE $W_{k+1} \gets \operatorname{orth}\left(\begin{bmatrix} W_{k} & \widehat{W}_{k+1} \end{bmatrix}\right) \quad \text{and} $ \\ \hskip 15ex $V_{ k+1} \gets (J - R)^H W_{k+1} (W_{k+1}^H (J - R)^H W_{k+1})^{-1}$. \ENDFOR \end{algorithmic} \caption{$\;$ DH structure preserving subspace method for the computation of the stability radius $r(Q; B,C)$ of a large-scale DH system.} \label{alg:dti2} \end{algorithm} At every iteration of this algorithm, the ${\mathcal H}_\infty$-norm is computed for a reduced problem of the form $(\ref{eq:RPHS2})$, in particular, the optimal frequency where this ${\mathcal H}_\infty$-norm is attained is retrieved. Then the subspaces are updated so that the Hermite interpolation properties between the largest singular values of the full and reduced transfer functions hold also at this optimal frequency. Once again the sequence $\{ \omega_k \}$ generated by Algorithm \ref{alg:dti2} is guaranteed to converge at a super-linear rate, which can be attributed to the Hermite interpolation properties holding between the largest singular values of the full and reduced transfer functions.
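The construction underlying Theorem~\ref{interpol3} and Algorithm~\ref{alg:dti2} can be verified in the same spirit. A NumPy sketch (random real data; since $W_k$ here carries all rows of $C$, the full and reduced transfer functions agree at $\widehat s$ in every left direction $\widehat c$, not just one):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, p = 10, 2, 2
S = rng.standard_normal((n, n)); J = (S - S.T) / 2
F = rng.standard_normal((n, n)); R = F @ F.T
P = rng.standard_normal((n, n)); Q = P @ P.T + n * np.eye(n)
B = rng.standard_normal((n, m)); C = rng.standard_normal((p, n))
A = (J - R) @ Q
s_hat = 1.3j
D = s_hat * np.eye(n) - A

w1 = np.linalg.solve(D.conj().T, C.conj().T)      # columns of (C D^{-1})^H
w2 = np.linalg.solve(D.conj().T, w1)              # columns of (C D^{-2})^H
W, _ = np.linalg.qr(np.column_stack([w1, w2]))
M = J - R
V = M.conj().T @ W @ np.linalg.inv(W.conj().T @ M.conj().T @ W)

Jk = W.conj().T @ J @ W; Rk = W.conj().T @ R @ W
Qk = V.conj().T @ Q @ V; Bk = V.conj().T @ B; Ck = C @ V
k = W.shape[1]
Gk = Ck @ np.linalg.solve(s_hat * np.eye(k) - (Jk - Rk) @ Qk, (Jk - Rk) @ Bk)
G_full = C @ np.linalg.solve(D, M @ B)
err = np.linalg.norm(G_full - Gk) / np.linalg.norm(G_full)
print(err < 1e-8)                                 # interpolation G(s_hat) = G_k(s_hat)
```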
\subsection{Numerical Experiments}\label{sec:DH_numexps} In this subsection we illustrate the performance of MATLAB implementations of Algorithms \ref{alg:dti} and \ref{alg:dti2} via some numerical examples. We first discuss some implementation details and then present numerical results on two sets of random synthetic examples in Section \ref{sec:numexp_syn}, and data from an FE model of a brake disk in Section~\ref{sec:numexp_brake}. \subsubsection{Implementation Details and Test Setup}\label{sec:test_setup} Algorithms \ref{alg:dti} and \ref{alg:dti2} are terminated when at least one of the following three conditions is fulfilled: \begin{enumerate} \item The relative distance between $\omega_{k}$ and $\omega_{k-1}$ is less than a prescribed tolerance for some $k> j$, i.e., \[ \left| \omega_k- \omega_{k-1} \right| < \varepsilon \cdot \frac{1}{2} \left( |\omega_k| + |\omega_{k-1}| \right). \] \item Letting $f_k := \max_{\omega \in {\mathbb R} \cup \{\infty\}} \sigma_{\max}(G_k({\rm i} \omega))$, two consecutive iterates $f_k, f_{k-1}$ are close enough in a relative sense, i.e., \[ \left| f_k - f_{k-1} \right| < \varepsilon \cdot \frac{1}{2} (f_k + f_{k-1}). \] \item The number of iterations exceeds a specified integer, i.e., $k> k_{\max}$. \end{enumerate} In all numerical examples that we present, we set $\varepsilon = 10^{-6}$ and $k_{\max} = 100$. In general, Algorithms \ref{alg:dti} and \ref{alg:dti2} converge only locally. The choice of the initial interpolation points affects the maximizers that the subspace frameworks converge to, in particular, whether convergence to a global maximizer occurs. The initial interpolation points are chosen based on the following procedure.
First, we discretize the interval $[ \lambda_{\min}^{\Im} , \lambda_{\max}^{\Im} ]$ into $\rho$ equally spaced points, say $\omega_{0,1}, \dots, \omega_{0,\rho}$, including the end-points $\lambda_{\min}^{\Im}, \lambda_{\max}^{\Im}$, where $\rho$ is specified by the user and $\lambda_{\min}^{\Im}, \lambda_{\max}^{\Im}$ denote the imaginary parts of the eigenvalues of $(J-R)Q$ with the smallest and largest imaginary part, respectively. Then we approximate the eigenvalues $z_1, \dots, z_\rho$ of $(J-R)Q$ closest to ${\rm i} \omega_{0,1}, \dots, {\rm i} \omega_{0,\rho}$, and permute them into $z_{j_1}, \dots, z_{j_\rho}$ where $\{ j_1, \dots, j_\rho \} = \{ 1, \dots, \rho \}$ so that $\sigma_{\max}(G({\rm i} \Im z_{j_1} )) \geq \dots \geq \sigma_{\max}(G({\rm i} \Im z_{j_\rho} ))$. The interpolation points $\omega_1, \dots, \omega_\ell$ employed initially are then chosen as the imaginary parts of $z_{j_1}, \dots, z_{j_\ell}$, where again $\ell\leq \rho$ is specified by the user. \subsubsection{Results on Synthetic Examples}\label{sec:numexp_syn} We now present results for two families of linear DH systems with random coefficient matrices; the first family consists of dense systems of order $800$, whereas the second family consists of sparse systems of order $5000$. 
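The initial-point selection of Section~\ref{sec:test_setup} can be sketched as follows (NumPy, on a small dense random DH system; a large-scale implementation would use sparse eigensolvers instead of \texttt{eigvals}, and the function \texttt{smax\_G} is an illustrative stand-in for the transfer function evaluation):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 12, 2, 2
rho, ell = 6, 3                                  # user-specified grid size and #points
S = rng.standard_normal((n, n)); J = (S - S.T) / 2
F = rng.standard_normal((n, n)); R = F @ F.T
P = rng.standard_normal((n, n)); Q = P @ P.T + n * np.eye(n)
B = rng.standard_normal((n, m)); C = rng.standard_normal((p, n))
A = (J - R) @ Q

def smax_G(w):
    # sigma_max(G(i w)) for the transfer function associated with r(R;B,C)
    return np.linalg.norm(C @ Q @ np.linalg.solve(1j * w * np.eye(n) - A, B), 2)

eigs = np.linalg.eigvals(A)                      # dense solve; sparse solvers at scale
grid = np.linspace(eigs.imag.min(), eigs.imag.max(), rho)
# eigenvalue closest to each grid point i*w_{0,l}, sorted by sigma_max(G(i Im z))
z = np.array([eigs[np.argmin(np.abs(eigs - 1j * w))] for w in grid])
order = np.argsort([-smax_G(zi.imag) for zi in z])
omega_init = z[order[:ell]].imag                 # the ell initial interpolation points
print(len(omega_init) == ell)
```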
\noindent \textbf{Dense Random Examples.} In the dense family the coefficient matrices $J$, $Q$, $R$ are formed by the MATLAB commands
\begin{verbatim}
>> J = randn(800); J = (J - J')/2;
>> Q = randn(800); Q = (Q + Q')/2; mineig = min(eig(Q));
>> if (mineig < 10^-4) Q = Q + (-mineig + 5*rand)*eye(800); end
>> p = round(80*rand);
>> Rp = randn(p); Rp = (Rp + Rp')/2; mineig = min(eig(Rp));
>> if (mineig < 10^-4) Rp = Rp + (-mineig + 5*rand)*eye(p); end
>> R = [Rp zeros(p,800-p); zeros(800-p,p) zeros(800-p,800-p)];
>> X = randn(800); [U,~] = qr(X); R = U'*R*U;
\end{verbatim}
The restriction matrices $B$ and $C$ are chosen as $800\times 2$ and $2\times 800$ random matrices created by the MATLAB command \texttt{randn}. To compute $r(R; B,C) = r(J; B,C)$, as well as $r(Q;B,C)$, we ran \begin{enumerate} \item[(1)] the Boyd-Balakrishnan (BB) algorithm \cite{Boyd1990}, \item[(2)] the subspace framework that does not preserve the DH structure \cite[Algorithm 1]{Aliyev2017} described in Subsection \ref{sec:sf_descriptor}, and \item[(3)] the subspace frameworks that preserve structure, i.e., Algorithms \ref{alg:dti} and \ref{alg:dti2}, introduced in Subsection \ref{sec:sf_PH} \end{enumerate} on $100$ such random examples. The spectrum of a typical $(J-R)Q$ of size $800$ generated in this way is depicted in Figure \ref{fig:spectra_randomPH} on the left. \begin{figure} \caption{The spectra of $A = (J-R)Q$ for a dense random $J, R, Q \in {\mathbb R}^{800\times 800}$ (left), and a sparse random $J, R, Q \in {\mathbb R}^{5000\times 5000}$ (right).
The MATLAB commands yielding these $J, R, Q$ are specified in Section \ref{sec:numexp_syn}.} \label{fig:spectra_randomPH} \end{figure} The progress of Algorithm \ref{alg:dti2}, as well as Algorithm 1 in \cite{Aliyev2017}, to compute $r(Q; B,C)$ for this example is presented in Figure~\ref{fig:progress_alg2}, which includes on the top left a plot of $f(\omega) := \sigma_{\max}(C ({\rm i} \omega I - (J-R) Q)^{-1} (J-R)B)$ for $\omega \in [-2000,0]$ along with the maximizers to which the respective algorithms converged. Algorithm \ref{alg:dti2} converges to the global maximizer $\omega_{\ast,1} = -731.9774$ with $f(\omega_{\ast,1}) = 32.321399$, while Algorithm 1 in \cite{Aliyev2017} converges to the local maximizer $\omega_{\ast,2} = -1602.1187$ with $f(\omega_{\ast,2}) = 29.028197$. The globally optimal peak $(\omega_{\ast,1}, f(\omega_{\ast,1}))$ and the locally optimal peak $(\omega_{\ast,2}, f(\omega_{\ast,2}))$ are marked in the plot with a square and a circle, respectively. The remaining five plots in Figure \ref{fig:progress_alg2} illustrate the progress of Algorithm \ref{alg:dti2}. In each one of these plots, the black curve is a plot of the reduced function $f_k(\omega) := \sigma_{\max}(C_k ({\rm i} \omega I - (J_k-R_k) Q_k)^{-1} (J_k-R_k)B_k)$ with respect to $\omega$, and the circle marks the global maximizer of this reduced function. The top right shows the initial reduced function in black interpolating the full function at ten points, and the other four show the reduced function after iterations $1$-$4$ from middle-left to bottom-right. Observe that, at every iteration, the refined reduced function interpolates the full function at the maximizer of the previous reduced function in addition to the earlier interpolation points. We also list the iterates of Algorithm \ref{alg:dti2} in Table \ref{table:dense_iterates_alg2}, indicating quick convergence. The algorithm terminates after performing six subspace iterations.
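For reference, evaluating $f$ at a given $\omega$ amounts to one linear solve and one SVD. The following is a minimal dense Python/NumPy sketch (the function name \texttt{sigma\_max\_tf} is ours; in the large-scale setting the solve would instead use sparse factorizations):

```python
import numpy as np

def sigma_max_tf(omega, J, R, Q, B, C):
    """Evaluate sigma_max(C (i*omega*I - (J-R)Q)^{-1} (J-R) B) densely."""
    n = Q.shape[0]
    JmR = J - R
    # one linear solve with the resolvent, then the largest singular value
    X = np.linalg.solve(1j * omega * np.eye(n) - JmR @ Q, JmR @ B)
    return np.linalg.svd(C @ X, compute_uv=False)[0]
```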
The results of Algorithms \ref{alg:dti} and \ref{alg:dti2} for the first $10$ random examples are presented in Tables~\ref{table:dense_comparison} and~\ref{table:dense_comparisonQ}, respectively. Results from \cite[Algorithm 1]{Aliyev2017} and the BB Algorithm \cite{Boyd1990} are also included in these tables for comparison purposes. For the computation of $r(J; B,C) = r(R; B,C)$, the new structure-preserving Algorithm \ref{alg:dti} and \cite[Algorithm 1]{Aliyev2017} perform equally well on these first $10$ examples. They both return the globally optimal solutions in $9$ out of $10$ examples, perform a similar number of subspace iterations, and require a similar amount of CPU time. \begin{figure} \caption{\textbf{(Top Left)} The plot of $\sigma_{\max}(C({\rm i} \omega I - (J-R)Q)^{-1} (J-R)B)$ as a function of $\omega \in [-2000,0]$ along with the maxima computed by Algorithm \ref{alg:dti2} and \cite[Algorithm 1]{Aliyev2017} marked with the square and circle, respectively, for a dense random example of order $800$. \textbf{(Top Right)} The black curve is the initial reduced function for Algorithm \ref{alg:dti2} interpolating the full function at $10$ points, whereas the circle is the global maximum of this reduced function. \textbf{(Middle Left - Bottom Right)} Plots of the reduced functions after iterations $1$-$4$ of Algorithm \ref{alg:dti2} displayed with black curves along with the maximizers of the reduced functions marked with circles.} \label{fig:progress_alg2} \end{figure} A more decisive conclusion can be drawn when we consider all of the $100$ random examples.
The left-hand columns in Figure \ref{fig:dense_functions} depict the ratios $(f_{\rm BB} - f_{\rm SF}) / ((f_{\rm BB} + f_{\rm SF})/2)$, where $f_{\rm BB}$ are the globally maximal values of $\sigma_{\max}(CQ ({\rm i} \omega I - (J-R)Q)^{-1} B)$ over $\omega$ returned by the BB algorithm and $f_{\rm SF}$ are the values returned by the subspace frameworks, specifically by Algorithm \ref{alg:dti} on the top and by \cite[Algorithm 1]{Aliyev2017} at the bottom. The results by Algorithm \ref{alg:dti} match with the ones by the BB algorithm $81$ times out of $100$, while the results by \cite[Algorithm 1]{Aliyev2017} match with the ones by the BB algorithm $67$ times out of $100$. (In the examples where the results by the subspace frameworks differ from those by the BB algorithm, the subspace frameworks converge to local maximizers that are not global maximizers.) On these $100$ random examples Algorithm \ref{alg:dti} performs slightly fewer iterations, on average $17.6$, whereas \cite[Algorithm 1]{Aliyev2017} on average performs $20.3$ iterations. On the other hand, the total run-time on average is better for \cite[Algorithm 1]{Aliyev2017} compared with Algorithm \ref{alg:dti} here: $21.3\, s$ vs $30.9\, s$. We observe this behavior on various other DH systems as well: Algorithm~\ref{alg:dti} seems to be more robust than \cite[Algorithm 1]{Aliyev2017} for the computation of $r(J; B,C) = r(R; B,C)$, in the sense that it converges to the globally maximal value of $\sigma_{\max}(CQ ({\rm i} \omega I - (J-R)Q)^{-1} B)$ more often; however, this comes at the expense of slightly more computation time. On the other hand, for the computation of $r(Q; B, C)$, Table \ref{table:dense_comparisonQ} indicates that Algorithm \ref{alg:dti2} returns exactly the same globally maximal values (up to tolerances) as the BB algorithm for all of the first $10$ examples except one, whereas the application of \cite[Algorithm 1]{Aliyev2017} results in locally maximal solutions that are not globally maximal $4$ times.
That Algorithm \ref{alg:dti2} requires fewer subspace iterations is also apparent from the table. Once again, the plots of the ratios $(f_{\rm BB} - f_{\rm SF}) / ((f_{\rm BB} + f_{\rm SF})/2)$ are shown in Figure \ref{fig:dense_functions} on the right-hand column for all $100$ examples with $f_{\rm SF}$ now representing the values returned by Algorithm \ref{alg:dti2} on the top and by \cite[Algorithm 1]{Aliyev2017} at the bottom. Algorithm \ref{alg:dti2} and \cite[Algorithm 1]{Aliyev2017} return locally optimal solutions that are not globally optimal $21$ and $27$ times, respectively. In this case the difference between the numbers of subspace iterations for these $100$ examples is more pronounced in favor of Algorithm \ref{alg:dti2}; indeed the number of subspace iterations is on average $7.2$ for Algorithm \ref{alg:dti2} and $17.0$ for \cite[Algorithm 1]{Aliyev2017}. This difference in the number of iterations is also reflected in the average run-times, which are $13.2\, s$ and $19\, s$ for Algorithm \ref{alg:dti2} and \cite[Algorithm 1]{Aliyev2017}, respectively. \begin{table} \begin{center} \begin{tabular}{|c||c|c|} \hline $k$ & $\omega_{k+1}$ & $\sigma_{\max}(G_k({\rm i} \omega_{k+1}))$ \\ [0.5ex] \hline \hline 10 & -600.705819 & 26.182525 \\ 11 & -674.769938 & 28.262865 \\ 12 & -697.139310 & 34.834307 \\ 13 & -731.573363 & 35.133647 \\ 14 & -731.942586 & 32.309246 \\ 15 & -731.977386 & 32.321399 \\ 16 & -731.977385 & 32.321399 \\ \hline \end{tabular} \end{center} \caption{ Iterates of Algorithm~\ref{alg:dti2} to compute $r(Q; B, C)$ on a DH system with dense random $J, R, Q \in {\mathbb R}^{800\times 800}$ and random restriction matrices $B \in {\mathbb R}^{800\times 2}, C \in {\mathbb R}^{2\times 800}$. The algorithm is initiated with $10$ interpolation points and terminates after $6$ iterations with $32$-dimensional subspaces.
} \label{table:dense_iterates_alg2} \end{table} \begin{table} \begin{center} \begin{tabular}{|c||ccc|cc|cc|} \hline & \multicolumn{3}{c}{$\max_{\omega} \sigma_{\max}(CQ ({\rm i} \omega I - (J-R)Q)^{-1} B)$} & \multicolumn{2}{|c|}{$\#$ iterations} & \multicolumn{2}{|c|}{run-time} \\ [0.5ex] Ex. & Alg.~\ref{alg:dti} & \cite[Alg. 1]{Aliyev2017} & BB Alg. \cite{Boyd1990} & Alg.~\ref{alg:dti} & \cite{Aliyev2017} & Alg.~\ref{alg:dti} & \cite{Aliyev2017} \\ [0.5ex] \hline \hline 1 & 32.559659 & 32.559659 & 32.559659 & 9 & 9 & 14 & 13 \\ 2 & 46.703932 & 46.703932 & 46.703932 & 15 & 12 & 16 & 12.4 \\ 3 & 26.227029 & \textbf{24.023572} & 26.227029 & 7 & 12 & 12.9 & 15.1 \\ 4 & \textbf{62.748090} & 108.030409 & 108.030409 & 17 & 41 & 17.8 & 27.2 \\ 5 & 35.974956 & 35.974957 & 35.974956 & 9 & 14 & 13.4 & 13.9 \\ 6 & 53.522033 & 53.522033 & 53.522033 & 6 & 3 & 11.4 & 10.2 \\ 7 & 31.739000 & 31.739000 & 31.739000 & 4 & 12 & 11.6 & 13.8 \\ 8 & 76.958658 & 76.958658 & 76.958658 & 35 & 8 & 43.2 & 11 \\ 9 & 37.007241 & 37.007241 & 37.007241 & 6 & 2 & 13 & 11.5 \\ 10 & 155.642871 & 155.642871 & 155.642871 & 2 & 2 & 8.6 & 8.5 \\ \hline \end{tabular} \end{center} \caption{Run-time (in $s$) comparison of Algorithm \ref{alg:dti} and \cite[Algorithm 1]{Aliyev2017} to compute $r(R; B, C) = r(J; B, C)$ for $10$ dense random examples of order $800$. The columns under ``$\#$ iterations'' list the numbers of subspace iterations performed.} \label{table:dense_comparison} \end{table} \begin{table} \hskip -1.3ex \begin{tabular}{|c||ccc|cc|cc|} \hline & \multicolumn{3}{c}{$\max_{\omega} \: \sigma_{\max}(C({\rm i} \omega I - (J-R)Q)^{-1} (J-R)B)$} & \multicolumn{2}{|c|}{$\#$ iterations} & \multicolumn{2}{|c|}{run-time} \\ [0.5ex] $\#$ & Alg.~\ref{alg:dti2} & \cite[Alg. 1]{Aliyev2017} & BB Alg.
\cite{Boyd1990} & Alg.~\ref{alg:dti2} & \cite{Aliyev2017} & Alg.~\ref{alg:dti2} & \cite{Aliyev2017} \\ [0.5ex] \hline \hline 1 & 9.809182 & 9.809182 & 9.809182 & 3 & 26 & 11.1 & 19.5 \\ 2 & 22.386670 & 22.386670 & 22.386670 & 5 & 26 & 10.2 & 18.1 \\ 3 & 8.364927 & 8.364927 & 8.364927 & 3 & 8 & 11 & 13.3 \\ 4 & 32.321399 & \textbf{29.028197} & 32.321399 & 6 & 37 & 10.5 & 25.2 \\ 5 & 15.071678 & 15.071678 & 15.071678 & 7 & 15 & 12.7 & 14 \\ 6 & 21.641484 & 21.641484 & 21.641484 & 4 & 8 & 10.2 & 11.6 \\ 7 & 12.858494 & \textbf{12.763161} & 12.858494 & 6 & 4 & 11.8 & 11.2 \\ 8 & 31.901305 & \textbf{27.996873} & 31.901305 & 8 & 12 & 11.5 & 12.5 \\ 9 & 9.228945 & 9.228945 & 9.228945 & 3 & 8 & 11.3 & 13 \\ 10 & \textbf{47.697528} & \textbf{47.697528} & 71.534252 & 10 & 8 & 11.8 & 10.2 \\ \hline \end{tabular} \caption{Run-time (in $s$) comparison of Algorithm \ref{alg:dti2} and \cite[Algorithm 1]{Aliyev2017} for the computation of $r(Q; B, C)$ on $10$ dense random examples of order $800$.} \label{table:dense_comparisonQ} \end{table} \begin{figure} \caption{\textbf{(Left Column)} Ratios $(f_{\rm BB} - f_{\rm SF}) / ((f_{\rm BB} + f_{\rm SF})/2)$ for $100$ dense random DH examples of order $800$, where $f_{\rm BB}$ and $f_{\rm SF}$ denote the maximal values of $\sigma_{\max}(CQ ({\rm i} \omega I - (J-R)Q)^{-1}B)$ over $\omega$ computed by the BB algorithm and the subspace framework (i.e., Algorithm \ref{alg:dti} for the plot on the top, \cite[Algorithm 1]{Aliyev2017} for the plot at the bottom). \textbf{(Right Column)} Same as the left column on the same $100$ dense random examples of order $800$ except now this concerns a comparison of the maximization of $\sigma_{\max}(C ({\rm i} \omega I - (J-R)Q)^{-1}(J-R)B)$ using Algorithm \ref{alg:dti2} in the top plot and \cite[Algorithm 1]{Aliyev2017} in the bottom plot.} \label{fig:dense_functions} \end{figure} \noindent \textbf{Sparse Random Examples.} The $5000\times 5000$ sparse matrices $J, Q, R$ are constrained to be banded with bandwidth $10$.
The matrix $J$ is generated as in the dense family randomly using the \texttt{randn} command, but the entries that fall outside of the bandwidth $10$ are set equal to zero. The matrix $Q>0$ is created using the commands
\begin{verbatim}
>> n = 5000;
>> A = sprandn(n,n,1/n);
>> Q = (A + A')/2;
\end{verbatim}
followed by setting the entries outside the bandwidth $10$ again to zero. Finally, the following commands ensure that $Q>0$.
\begin{verbatim}
>> mineig = eigs(Q,1,'smallestreal');
>> if (mineig<10^-4) Q=Q+(-mineig+5*rand)*speye(n); end
\end{verbatim}
To form $R\geq 0$, first a diagonal matrix $D$ of random rank not exceeding $500$ is generated by the commands
\begin{verbatim}
>> p = round(500*rand); D = sparse(n,n); h = n/p;
>> for j=1:p, k = floor(j*h); D(k,k) = 5*rand; end
\end{verbatim}
Then we set \texttt{R = sparse(X'*D*X)} for a square random matrix $X$ with bandwidth $5$. The matrices $B$, $C$ are random, and of size $5000 \times 2$, $2 \times 5000$, respectively. The spectrum of a typical such sparse matrix $(J-R)Q$ is displayed in Figure \ref{fig:spectra_randomPH} on the right-hand side. We again apply Algorithms \ref{alg:dti} and \ref{alg:dti2} to $100$ such random sparse examples. Since the matrices are too large to apply the BB algorithm, we compare the structure-preserving algorithms directly with the unstructured algorithm \cite[Algorithm 1]{Aliyev2017}. The retrieved estimates of $\max_{\omega \in {\mathbb R} \cup \{\infty\}} \sigma_{\max}(G({\rm i} \omega))$ for $G({\rm i}\omega) = CQ (\ri \omega I - (J-R)Q)^{-1}B$ and $G({\rm i}\omega) = C (\ri \omega I - (J-R)Q)^{-1}(J-R)B$ are compared on the top and at the bottom, respectively, in Figure \ref{fig:sparse_functions}.
Specifically, the ratio $ \frac{2 (f_{\rm ST} - f_{\rm UN})}{f_{\rm ST} + f_{\rm UN}} $ is plotted for each random example with $f_{\rm UN}$ denoting the estimate by the unstructured algorithm \cite[Algorithm 1]{Aliyev2017}, and $f_{\rm ST}$ denoting the estimate by the structured algorithm, i.e., Algorithm \ref{alg:dti} for the top plot, Algorithm \ref{alg:dti2} for the bottom plot. According to the top plot, which concerns the computation of $r(J; B, C) = r(R; B, C)$, the two algorithms return exactly the same results (up to tolerances) for all but $6$ examples; the structured algorithm returns better estimates for $4$ of these $6$ examples, while the unstructured algorithm returns better estimates for the other two. The structured algorithm appears to be even more robust for the computation of $r(Q; B, C)$ in terms of avoiding locally optimal solutions away from global solutions; as displayed at the bottom, the structured algorithm returns a better estimate for $40$ of the $100$ examples, the unstructured algorithm returns the better estimate for $6$ examples, and the results match exactly up to the tolerances for the remaining $54$ examples. The structured algorithms typically perform fewer subspace iterations than the unstructured algorithm. Indeed the average number of subspace iterations performed on these $100$ examples is $6.3$ for the structured and $9.8$ for the unstructured algorithm for the computation of $r(J; B,C) = r(R; B, C)$, while these average values are $9.7$ and $12.9$ for the computation of $r(Q; B, C)$. On the other hand, the unstructured algorithm is slightly superior when run-times are taken into account. The average run-times are $14.3\, s$ for the structured and $12.7\, s$ for the unstructured algorithm for the computation of $r(J; B, C) = r(R; B, C)$, whereas these figures are $16.5\, s$ and $14\, s$ for the computation of $r(Q; B, C)$.
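The relative deviations plotted in these figures, and the counting of wins, losses, and matches up to tolerances, amount to a few elementwise operations. A Python/NumPy sketch (the function names and the tolerance value are ours):

```python
import numpy as np

def rel_dev(f_st, f_un):
    """Elementwise relative deviation (f_ST - f_UN) / ((f_ST + f_UN)/2)."""
    f_st, f_un = np.asarray(f_st, float), np.asarray(f_un, float)
    return (f_st - f_un) / ((f_st + f_un) / 2.0)

def count_outcomes(f_st, f_un, tol=1e-6):
    """Count how often the structured estimate is better, worse, or a match."""
    d = rel_dev(f_st, f_un)
    better = int(np.sum(d > tol))
    worse = int(np.sum(d < -tol))
    match = int(np.sum(np.abs(d) <= tol))
    return better, worse, match
```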
We also list the computed maximal values of $\sigma_{\max}(CQ({\rm i} \omega I - (J-R)Q)^{-1}B)$ and $\sigma_{\max}(C({\rm i} \omega I - (J-R)Q)^{-1}(J-R)B)$ over $\omega$ for the first $10$ of these sparse examples in Tables~\ref{table:sparse_iterates_alg} and \ref{table:sparse_iterates_algQ}. Included in these tables are also the number of subspace iterations, as well as the run-time required by the structured and the unstructured algorithm. \begin{figure} \caption{ \textbf{(Top)} Plot of the ratios $(f_{\rm ST} - f_{\rm UN})/((f_{\rm ST} + f_{\rm UN})/2)$ on $100$ sparse random examples of order $5000$, where $f_{\rm ST}, f_{\rm UN}$ represent the computed maximal value of $\sigma_{\max}(CQ({\rm i} \omega I - (J-R)Q)^{-1} B)$ over $\omega$ by Algorithm \ref{alg:dti} and \cite[Algorithm 1]{Aliyev2017}, respectively. \textbf{(Bottom)} Similar to the top plot, only now for the computed maximal values $f_{\rm ST}$, $f_{\rm UN}$ of $\sigma_{\max}(C({\rm i} \omega I - (J-R)Q)^{-1} (J-R)B)$ over $\omega$ by Algorithm \ref{alg:dti2} and \cite[Algorithm 1]{Aliyev2017}, respectively. } \label{fig:sparse_functions} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c||cc|cc|cc|} \hline & \multicolumn{2}{c}{$\max_{\omega} \sigma_{\max}(CQ({\rm i} \omega I - (J-R)Q)^{-1} B)$} & \multicolumn{2}{|c|}{$\#$ iterations} & \multicolumn{2}{|c|}{run-time} \\ [0.5ex] $\#$ & Alg.~\ref{alg:dti} & \cite[Alg. 
1]{Aliyev2017} & Alg.~\ref{alg:dti} & \cite{Aliyev2017} & Alg.~\ref{alg:dti} & \cite{Aliyev2017} \\ [0.5ex] \hline \hline 1 & $1.393086\times 10^3$ & $1.393086\times 10^3$ & 2 & 3 & 10.7 & 10.1 \\ 2 & $8.323309\times 10^2$ & $8.323309\times 10^2$ & 6 & 11 & 12.3 & 11.9 \\ 3 & $1.289416 \times 10^3$ & $1.289416 \times 10^3$ & 4 & 3 & 11.2 & 10.6 \\ 4 & $8.850355 \times 10^2$ & $8.850355 \times 10^2$ & 5 & 16 & 13.2 & 14 \\ 5 & $6.891467 \times 10^2$ & $6.891467 \times 10^2$ & 26 & 46 & 30.3 & 31.4 \\ 6 & $5.652337\times 10^6$ & $5.652337\times 10^6$ & 1 & 1 & 6.6 & 6.1 \\ 7 & $8.834190\times 10^2$ & $8.834190\times 10^2$ & 1 & 1 & 9.7 & 9.6 \\ 8 & $3.402375\times 10^3$ & $3.402375\times 10^3$ & 1 & 2 & 9.1 & 9 \\ 9 & $8.240097\times 10^2$ & $8.240097\times 10^2$ & 2 & 3 & 12.3 & 12.1 \\ 10 & $1.256781\times 10^3$ & $1.256781\times 10^3$ & 2 & 10 & 10.9 & 11.3 \\ \hline \end{tabular} \end{center} \caption{Comparison of Algorithms \ref{alg:dti} and \cite[Algorithm 1]{Aliyev2017} to compute $r(R; B,C) = r(J; B, C)$ on sparse random DH systems of order $5000$ of bandwidth $10$. The MATLAB commands to generate these random sparse examples are explained in Section \ref{sec:numexp_syn}. The third column lists the number of subspace iterations, while the run-times (in $s$) are listed in the last column.} \label{table:sparse_iterates_alg} \end{table} \begin{table} \hskip -2ex \begin{tabular}{|c||cc|cc|cc|} \hline & \multicolumn{2}{c}{$\max_{\omega} \sigma_{\max}(C({\rm i} \omega I - (J-R)Q)^{-1} (J-R)B)$} & \multicolumn{2}{|c|}{$\#$ iterations} & \multicolumn{2}{|c|}{run-time} \\ [0.5ex] $\#$ & Alg.~\ref{alg:dti2} & \cite[Alg. 
1]{Aliyev2017} & Alg.~\ref{alg:dti2} & \cite{Aliyev2017} & Alg.~\ref{alg:dti2} & \cite{Aliyev2017} \\ [0.5ex] \hline \hline 1 & $1.320177\times 10^3$ & $\mathbf{9.360406\times 10^2}$ & 15 & 39 & 18.3 & 26.7 \\ 2 & $8.284036\times 10^2$ & $8.284036\times 10^2$ & 2 & 13 & 11.1 & 12.7 \\ 3 & $9.288427\times 10^2$ & $\mathbf{4.863413 \times 10^2}$ & 31 & 11 & 46.1 & 11.8 \\ 4 & $6.583171\times 10^2$ & $6.583171 \times 10^2$ & 5 & 17 & 13.1 & 14.6 \\ 5 & $7.722736\times 10^2$ & $7.722736\times 10^2$ & 3 & 9 & 11.3 & 11.4 \\ 6 & $2.260647\times 10^6$ & $2.260647\times 10^6$ & 1 & 1 & 6.4 & 6.1 \\ 7 & $1.660164\times 10^3$ & $1.660164\times 10^3$ & 4 & 12 & 10.6 & 11.1 \\ 8 & $1.515824\times 10^3$ & $1.515824\times 10^3$ & 8 & 9 & 11.8 & 10 \\ 9 & $4.610935\times 10^2$ & $\mathbf{2.535771\times 10^2}$ & 13 & 4 & 17.9 & 12.1 \\ 10 & $1.011299\times 10^3$ & $1.011299\times 10^3$ & 3 & 7 & 10.8 & 10.6 \\ \hline \end{tabular} \caption{Comparison of Algorithms \ref{alg:dti2} and \cite[Algorithm 1]{Aliyev2017} to compute $r(Q; B, C)$. The display is analogous to Table \ref{table:sparse_iterates_alg}, in particular the numerical experiments are carried out exactly on the same $10$ sparse random examples of order $5000$ employed for Table \ref{table:sparse_iterates_alg}. } \label{table:sparse_iterates_algQ} \end{table} \subsubsection{The FE model of a Disk Brake}\label{sec:numexp_brake} The only large-scale computation required by Algorithm \ref{alg:dti} is the solution of the linear systems \begin{equation}\label{eq:large_lin_sys} D({\rm i} \omega) \: X = B \quad {\rm and} \quad D({\rm i} \omega)^2 \: Y = B \end{equation} at a given $\omega \in {\mathbb R}$ in lines \ref{defn_init_subspaces0} and \ref{defn_later_subspaces}, where $D({\rm i} \omega) = {\rm i} \omega I - (J - R)Q$. For the DH system resulting from a FE model of a disk brake in (\ref{insta}) and (\ref{eq:insta_matrices}), the mass matrix $M$ and the stiffness matrix $K(\Omega)$ are available from the FE modeling. 
In other words, we have the sparse matrix $Q^{-1}$, but not $Q$, which turns out to be dense. Trying to invert $Q^{-1}$ and/or solve a linear system with the coefficient matrix $Q$ is computationally very expensive and would require full matrix storage. This difficulty can be avoided by exploiting that \[ ( {\rm i} \omega I - (J - R)Q )^{-1} B \;\;\; = \;\;\; Q^{-1} ( {\rm i} \omega Q^{-1} - (J - R) )^{-1} B. \] Hence, to compute $X, Y$ as in (\ref{eq:large_lin_sys}), we proceed as follows. \begin{enumerate} \item[\bf (1)] We first solve $( {\rm i} \omega Q^{-1} - (J - R) ) \widehat{X} = B$ for $\widehat{X}$, and set $X = Q^{-1} \widehat{X}$. \item[\bf (2)] Then we solve $( {\rm i} \omega Q^{-1} - (J - R) ) \widehat{Y} = X$ for $\widehat{Y}$, and set $Y = Q^{-1} \widehat{Y}$. \end{enumerate} A second observation that further speeds up the computation is the particular structure of the coefficient matrix ${\rm i} \omega Q^{-1} - (J - R)$ with $N = 0$. Setting $\widetilde{M}({\rm i} \omega; \Omega) := {\rm i} \omega M + D(\Omega) + G(\Omega)$, we have \[ {\rm i} \omega Q^{-1} - (J - R) \;\; = \;\; \left[ \begin{array}{cc} \widetilde{M}({\rm i} \omega; \Omega) & K(\Omega) \\ -K(\Omega) & {\rm i} \omega K(\Omega) \end{array} \right].
\] Hence, to solve $( {\rm i} \omega Q^{-1} - (J - R) ) Z = W$, for a given $W = \left[ \begin{array}{cc} W_1^T & W_2^T \end{array} \right]^T$ and the unknown $Z = \left[ \begin{array}{cc} Z_1^T & Z_2^T \end{array} \right]^T$ with $W_1, W_2, Z_1, Z_2$ all having the same number of rows, we perform a column block permutation and then eliminate the lower left block to obtain \[ \left[ \begin{array}{cc} K(\Omega) & \widetilde{M}({\rm i} \omega; \Omega) \\ 0 & -K(\Omega) - {\rm i} \omega \widetilde{M}({\rm i} \omega; \Omega) \end{array} \right] \left[ \begin{array}{c} Z_2 \\ Z_1 \end{array} \right] \;\; = \;\; \left[ \begin{array}{c} W_1 \\ W_2 - {\rm i} \omega W_1 \end{array} \right], \] which in turn yields \[ (-K(\Omega) - {\rm i} \omega \widetilde{M}({\rm i} \omega; \Omega) ) Z_1 = W_2 - {\rm i} \omega W_1, \quad K(\Omega) Z_2 = W_1 - \widetilde{M}({\rm i} \omega; \Omega) Z_1. \] At every subspace iteration, the highest costs arise from the computation of the $LU$ factorizations of the sparse matrices $K(\Omega)$ and $K(\Omega) + {\rm i} \omega \widetilde{M}({\rm i} \omega; \Omega)$. The main cost for Algorithm \ref{alg:dti2} is the solution of the linear systems \[ D({\rm i} \omega)^H X \;\; = \;\; C^H \quad {\rm and} \quad \left[ D({\rm i} \omega)^H \right]^2 Y \;\; = \;\; C^H \] at a given $\omega \in {\mathbb R}$. This can be treated similarly by exploiting that \[ D({\rm i} \omega)^{-H} C^H \;\; = \;\; ( {\rm i} \omega Q^{-1} - (J-R))^{-H} (CQ^{-1})^H. \] We have applied Algorithm \ref{alg:dti} to compute the unstructured stability radius $r(R; B, B^T)$ for the DH system of the form (\ref{insta}), (\ref{eq:insta_matrices}) resulting from the FE brake model with $N = 0$, where $G(\Omega), K(\Omega), D(\Omega), M \in {\mathbb R}^{4669 \times 4669}$ so that $J, R, Q \in {\mathbb R}^{9338\times 9338}$.
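The block elimination above reduces one solve with the $2n \times 2n$ coefficient matrix to two solves with $n \times n$ blocks. A dense Python/NumPy sketch for illustration (the function name \texttt{block\_solve} is ours; in the actual computation $K(\Omega)$ and $K(\Omega) + {\rm i}\omega\widetilde{M}$ are sparse and their $LU$ factorizations would be reused):

```python
import numpy as np

def block_solve(Mt, K, omega, W1, W2):
    """Solve [[Mt, K], [-K, 1j*omega*K]] [Z1; Z2] = [W1; W2] via the
    block elimination described above."""
    # (-K - i*omega*Mt) Z1 = W2 - i*omega*W1
    Z1 = np.linalg.solve(-K - 1j * omega * Mt, W2 - 1j * omega * W1)
    # K Z2 = W1 - Mt Z1
    Z2 = np.linalg.solve(K, W1 - Mt @ Z1)
    return Z1, Z2
```

Comparing against a direct solve with the assembled $2n \times 2n$ matrix confirms the elimination step by step.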
The plot of the computed $r(R; B, B^T)$ vs the rotation speed $\Omega$ is presented in Figure \ref{fig:brake_squeal_unstructured} at lower rotation speeds (i.e., $\Omega \in [2.5, 100]$) on the top, and at higher rotation speeds (i.e., $\Omega \in [900, 1700]$) at the bottom. For smaller rotation speeds, the stability radius initially decreases with respect to $\Omega$, but around $\Omega = 1100$ the stability radius suddenly increases. The non-smooth nature of the stability radius with respect to $\Omega$ is apparent from the figure. One should note, in particular, the sharp turns near $\Omega = 1120$ and $\Omega = 1590$; this non-smoothness is due to the fact that $\sigma_{\max}(B^TQ ({\rm i} \omega I - (J-R)Q)^{-1} B)$ has multiple global maximizers. This means that two distinct points on the imaginary axis can be attained with perturbations of minimal norm. The computed values of $r(R; B, B^T)$ are listed in Table~\ref{tab:brake_squeal_unstructured} for some values of $\Omega$. In this table, for each $\Omega$, we display the value $\omega_\ast$ at which the singular value function $\sigma_{\max}(B^TQ ({\rm i} \omega I - (J-R)Q)^{-1} B)$ is globally maximized, together with the number of subspace iterations, the run-time (in $s$), and the subspace dimension at termination. In all cases, $2$ or $3$ subspace iterations are sufficient to achieve the prescribed accuracy tolerance. This leads to considerably smaller reduced systems of size $72 \times 72$ or $78\times 78$ compared with the original problem of size $9338\times 9338$. The $\omega$ value maximizing $\sigma_{\max}(B^TQ ({\rm i} \omega I - (J-R)Q)^{-1} B)$ differs substantially, depending on whether the rotation speed $\Omega$ is small or large. The resulting reduced problems at termination capture the full problem remarkably well around the global maximizer.
This is depicted in Figure~\ref{fig:brake_squeal_unstructured_sval}, where, for $\Omega = 1000$, the singular value function $f(\omega) = \sigma_{\max}(B^TQ ({\rm i} \omega I - (J-R)Q)^{-1} B)$ for the full problem (solid curve) and $f_k(\omega) = \sigma_{\max}(B^T_kQ_k ({\rm i} \omega I - (J_k-R_k)Q_k)^{-1} B_k)$ for the reduced problem at termination (dashed curve) are plotted near the global maximizer $\omega_\ast = -178880.9$. It is fairly difficult to distinguish these two curves from each other. \begin{table} \begin{tabular}{|c||ccccc|} \hline $\Omega$ & $r(R; B, B^T)$ & $\omega_\ast$ & iterations & run-time & dimension \\ \hline \hline 2.5 & 0.01066 & $-1.938\times 10^5$ & 2 & 22.0 & 72 \\ 5 & 0.01038 & $-1.938\times 10^5$ & 2 & 21.7 & 72 \\ 10 & 0.01026 & $-1.938\times 10^5$ & 3 & 24.9 & 78 \\ 50 & 0.00999 & $-1.938\times 10^5$ & 2 & 22.0 & 72 \\ 100 & 0.00988 & $-1.938\times 10^5$ & 2 & 22.1 & 72 \\ 1000 & 0.00809 & $-1.789\times 10^5$ & 2 & 21.4 & 72 \\ 1050 & 0.00789 & $-1.789\times 10^5$ & 2 & 21.8 & 72 \\ 1100 & 0.00834 & $-1.789\times 10^5$ & 3 & 24.8 & 78 \\ 1116 & 0.01091 & $-1.789\times 10^5$ & 3 & 25.6 & 78 \\ 1150 & 0.00344 & $-1.742\times 10^5$ & 2 & 21.5 & 72 \\ 1200 & 0.00407 & $-1.742\times 10^5$ & 2 & 21.8 & 72 \\ 1250 & 0.00471 & $-1.742\times 10^5$ & 2 & 21.8 & 72 \\ 1300 & 0.00516 & $-1.742\times 10^5$ & 2 & 21.2 & 72 \\ \hline \end{tabular} \caption{Computed stability radii $r(R; B, B^T)$ by Algorithm \ref{alg:dti} for several $\Omega$ values for the DH system of order $9338$ originating from the FE brake model. The other columns display $\omega_\ast$ corresponding to $\: \argmax_{\omega} \sigma_{\max}(B^TQ ({\rm i} \omega I - (J-R)Q)^{-1} B) \:$, the number of subspace iterations, the total run-time (in $s$) and the subspace dimension at termination. 
} \label{tab:brake_squeal_unstructured} \end{table} \begin{figure} \caption{ Plot of the stability radius $r(R; B, B^T)$ for the DH system (\ref{insta}), (\ref{eq:insta_matrices}) resulting from the FE model of a disk-brake as a function of the rotation speed $\Omega$ for $\Omega \in [2.5, 100]$ (top plot), and $\Omega \in [900, 1700]$ (bottom plot). The order of the DH system under consideration in this plot is $9338$. } \label{fig:brake_squeal_unstructured} \end{figure} \begin{figure} \caption{ Plot of the singular value functions $f(\omega) = \sigma_{\max}(B^TQ ({\rm i} \omega I - (J-R)Q)^{-1} B)$ (solid curve) and $f_k(\omega) = \sigma_{\max}(B^T_kQ_k ({\rm i} \omega I - (J_k-R_k)Q_k)^{-1} B_k)$ (dashed curve) at termination of Algorithm \ref{alg:dti} near the global maximizer $\omega_\ast = -178880.9$ for the DH system of order $9338$ arising from the FE disk-brake model with $\Omega = 1000$. The circle marks $(\omega_\ast, f(\omega_\ast))$.} \label{fig:brake_squeal_unstructured_sval} \end{figure} \section{Computation of the Structured Stability Radius}\label{sec:structured} In the previous section we studied stability radii for dissipative Hamiltonian systems in which the restriction matrices allowed for unstructured perturbations of the system coefficients. In this section we put additional constraints on the perturbations; in particular, we require the perturbations themselves to be structured. We discuss only perturbations in the dissipation matrix, since this is usually the most uncertain part of the system: modeling damping or friction accurately is extremely difficult. We deal with the computation of $r^{\rm Herm}(R; B)$ defined as in (\ref{eq:structured_radii_defn}). We first describe a numerical technique for small-scale problems in Section \ref{sec:small_scale_st} and then develop a subspace framework that converges superlinearly with respect to the subspace dimension in Section \ref{sec:large_scale_st}.
Both techniques use the eigenvalue optimization characterization of $r^{\rm Herm}(R; B)$ in Theorem \ref{thm_str_Herm}. \subsection{Small-Scale Problems}\label{sec:small_scale_st} \subsubsection{Inner Maximization Problems}\label{sec:small_scale_inner} The eigenvalue optimization characterization of $r^{\rm Herm}(R; B)$ is a min-max problem, where the inner maximization problem is concave; indeed, it can alternatively be expressed as a semi-definite program (SDP). Formally, for a given $\omega \in {\mathbb R}$, and $H_0({\rm i} \omega), H_1({\rm i} \omega)$ representing the Hermitian matrices defined in Theorem \ref{thm_str_Herm}, we have \begin{equation}\label{eq:inner_max} \begin{split} \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega) & \; = \; \sup_{t \in {\mathbb R}} \: \lambda_{\min} (H_0({\rm i} \omega) + t H_1 ({\rm i} \omega)) \\ & \; = \; \sup\{ z \; | \; z,t \in {\mathbb R} \;\; {\rm s.t.} \;\; H_0({\rm i} \omega) + t H_1 ({\rm i} \omega) - zI \geq 0 \}, \end{split} \end{equation} where the characterization in the second line is a linear convex SDP. Here $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ is related to a structured backward error for the eigenvalue ${\rm i} \omega$, specifically it corresponds to the square of the distance (see \cite[Definition 3.2 and Theorem 4.9]{MehMS16a}) \begin{equation}\label{eq:backward_error} \eta^{\rm Herm}(R; B, {\rm i}\omega) \; := \; \inf\{ \|\Delta \|_2 \; | \; \Delta = \Delta^H, \;\; {\rm i} \omega \in \Lambda \big( (J - R)Q - (B\Delta B^H)Q \big) \}. \end{equation} We have that $\eta^{\rm Herm}(R; B, {\rm i}\omega)$ is finite if and only if the suprema in (\ref{eq:inner_max}) are attained, which happens if and only if $H_1({\rm i}\omega)$ is indefinite, i.e., $H_1({\rm i}\omega)$ has both negative and positive eigenvalues. The most widely used techniques to solve a linear convex SDP are different forms of interior-point methods.
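Since the inner maximization in (\ref{eq:inner_max}) is a one-dimensional concave problem, it can also be attacked, for illustration, with a generic bounded scalar optimizer. A Python/SciPy sketch (the function name \texttt{inner\_max} and the search bracket are ours; the sketch presumes the supremum is attained, i.e., that $H_1({\rm i}\omega)$ is indefinite):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def inner_max(H0, H1, bounds=(-1e3, 1e3)):
    """Maximize the concave map t -> lambda_min(H0 + t*H1) over t."""
    def neg_lmin(t):
        # eigvalsh returns the eigenvalues in ascending order
        return -np.linalg.eigvalsh(H0 + t * H1)[0]
    res = minimize_scalar(neg_lmin, bounds=bounds, method='bounded')
    return -res.fun, res.x   # (optimal value, maximizing t)
```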
Implementations of some of these interior-point methods are made available through the package \texttt{cvx} \cite{Grant2008,cvx}. Hence, one option is to use \texttt{cvx} to compute $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ directly. An alternative approach, which is also theoretically well understood, is to employ the software package \texttt{eigopt} \cite{Mengi2014} for the eigenvalue optimization problem in the first characterization in (\ref{eq:inner_max}). This second approach forms piece-wise quadratic functions that lie globally above the eigenvalue function, and maximizes these piece-wise quadratic functions instead of the eigenvalue function. Each piece-wise quadratic function is defined as the minimum of several quadratic functions, all of which have the same curvature $\gamma$ (which must be a global upper bound on the second derivative of the eigenvalue function at all points where the eigenvalue function is differentiable). Any small positive real number for the curvature $\gamma$ serves the purpose (e.g., $\gamma = 10^{-6}$), since the smallest eigenvalue function in (\ref{eq:inner_max}) is a concave function of $t$. In our experience, \texttt{eigopt} performs the computation of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ significantly faster than \texttt{cvx}. The only downside is that an interval containing the optimal $t$ for (\ref{eq:inner_max}) must be supplied to \texttt{eigopt}, whereas such an interval is not needed by \texttt{cvx}. \subsubsection{Outer Minimization Problems}\label{sec:small_scale_outer} The minimum of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ with respect to $\omega \in {\mathbb R}$ yields the distance $r^{\rm Herm}(R; B)$, and the minimizing $\omega \in {\mathbb R}$ yields the point ${\rm i} \omega$ that first becomes an eigenvalue on the imaginary axis under the smallest perturbation possible.
This is a non-convex optimization problem; indeed, the objective $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ may even blow up at some $\omega$. We again resort to \texttt{eigopt} for the minimization of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$. For the sake of completeness, a formal description is provided in Algorithm \ref{eigopt} below, where we use the abbreviations \[ \widetilde{\eta}_j \; := \; \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega_j) \quad {\rm and} \quad \widetilde{\eta}^{\;'}_j \; := \; \frac{d\, \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega_j)}{d \omega}. \] Introducing $t(\omega) := \argmax_{t \in {\mathbb R}} \: \lambda_{\min}(H_0({\rm i} \omega) + t H_1({\rm i} \omega))$, the algorithm approximates the smallest eigenvalue function \[ \lambda_{\min}(H_0({\rm i} \omega) + t(\omega) H_1({\rm i} \omega)) \; = \; \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega) \] with the piece-wise quadratic function \begin{eqnarray*} Q_k(\omega) &:=& \max \{ q_j(\omega) \; | \; j = 0, \dots, k \}, \\ q_j(\omega) \; &:=& \; \widetilde{\eta}_j + \widetilde{\eta}_j^{\;'} ( \omega - \omega_j ) + (\gamma/2) ( \omega - \omega_j )^2 \end{eqnarray*} at iteration $k$. It computes the global minimizer $\omega_{k+1}$ of $Q_k(\omega)$, and refines the piece-wise quadratic function $Q_k(\omega)$ with the addition of one more quadratic piece, namely $q_{k+1}(\omega) \; := \; \widetilde{\eta}_{k+1} + \widetilde{\eta}_{k+1}^{\;'} ( \omega - \omega_{k+1} ) + (\gamma/2) ( \omega - \omega_{k+1} )^2$. Here, $\gamma$ is supposed to be a lower bound for the second derivative $\lambda_{\min}''(H_0({\rm i} \omega) + t(\omega) H_1({\rm i} \omega))$ for all $\omega$ sufficiently close to the global minimizer of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$. In theory, it can be shown that for all $\gamma$ small enough (i.e., sufficiently negative), every convergent subsequence of the sequence $\{ \omega_k\}$ converges to a global minimizer of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$.
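To make the mechanics of this scheme concrete, the following toy sketch iterates exactly these support-function updates. A simple scalar function stands in for $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$, and a grid search replaces the exact global minimization of the piece-wise quadratic model; neither simplification is part of \texttt{eigopt} itself:

```python
import numpy as np

# Toy stand-in for the objective and its derivative (not a DH quantity);
# gamma is a global lower bound on f'' here, since f''(w) = 2 - 9 sin(3w) >= -7.
f  = lambda w: w**2 + np.sin(3.0 * w)
df = lambda w: 2.0 * w + 3.0 * np.cos(3.0 * w)
gamma = -8.0
grid = np.linspace(-2.0, 2.0, 4001)              # grid search replaces exact argmin

pts = [0.0]                                      # omega_0
for _ in range(60):
    # Q_k(w) = max_j q_j(w): quadratic supports lying below f because gamma <= f''
    Qk = np.max([f(p) + df(p) * (grid - p) + 0.5 * gamma * (grid - p) ** 2
                 for p in pts], axis=0)
    pts.append(float(grid[np.argmin(Qk)]))       # omega_{k+1} = argmin of the model

w_star = grid[np.argmin(f(grid))]                # reference global minimizer
best = min(f(p) for p in pts)
assert best - f(w_star) < 1e-2                   # iterates locate the global minimum
```

The same update with a positive $\gamma$ and the roles of $\min$ and $\max$ exchanged gives the overestimating variant used for the concave inner maximization problem.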
At step $k$ of the algorithm $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ and its derivative need to be computed at $\omega_{k+1}$. We rely on one of the two approaches (\texttt{cvx} or \texttt{eigopt}) described in Section \ref{sec:small_scale_inner} for the computation of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega_{k+1})$, and employ a finite difference formula to approximate its derivative. \begin{algorithm} \begin{algorithmic}[1] \REQUIRE{Matrices $B \in {\mathbb C}^{n\times m}$, $J, R, Q \in {\mathbb C}^{n\times n}$, a negative real number $\gamma$, and a closed interval $\widetilde \Omega \subseteq {\mathbb R}$ that contains the global minimizer of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ over $\omega \in {\mathbb R}$.} \ENSURE{The sequence $\{ \omega_k \}$.} \STATE{$\omega_0 \gets$ an initial point in $\widetilde \Omega$} \STATE Compute $\widetilde{\eta}_0 := \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega_0)$ and $\widetilde{\eta}_0^{\;'} := d\, \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega_0) / d\omega$ \FOR{$k=0,1,\dots$} \STATE{$Q_k(\omega)\gets\max\left\{q_j(\omega)\;|\; j=0,\dots,k\right\}$, where \\ \hskip 22ex $q_j(\omega) \; := \; \widetilde{\eta}_j + \widetilde{\eta}_j^{\;'} ( \omega - \omega_j ) + (\gamma/2) ( \omega - \omega_j )^2$} \STATE{$\omega_{k+1} \gets \argmin_{\omega\in\widetilde\Omega} Q_k(\omega)$} \STATE Compute \begin{eqnarray*}\widetilde{\eta}_{k+1} &:=& \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega_{k+1}),\\ \widetilde{\eta}_{k+1}^{\;'} &:=& d\, \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega_{k+1}) / d\omega \end{eqnarray*} \ENDFOR \end{algorithmic} \caption{Small-Scale Computation of $r^{\rm Herm}(R; B)$.} \label{eigopt} \end{algorithm} \subsection{Large-Scale Problems}\label{sec:large_scale_st} The characterization via eigenvalue optimization in Theorem \ref{thm_str_Herm} is in terms of the matrix-valued functions $H_0({\rm i} \omega), H_1({\rm i} \omega)$, which are of small size provided that $B$ has 
few columns. The large-scale nature of this characterization is hidden in the matrix-valued function $W({\rm i} \omega) := (J-R)Q - {\rm i} \omega I$ defined in Theorem~\ref{thm_str_Herm}. Note that, in particular, both $H_0({\rm i} \omega)$ and $H_1({\rm i} \omega)$ are defined in terms of $W({\rm i} \omega)^{-1} B$. This is also reflected in Algorithm \ref{eigopt} when $J, R, Q$ are large; at iteration $k$ of the algorithm, the matrices $H_0({\rm i} \omega_{k+1})$, $H_1({\rm i} \omega_{k+1})$ need to be formed for the computation of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega_{k+1})$, which in turn requires the solution of the linear system $W({\rm i} \omega_{k+1}) Z = B$. To cope with the large-scale setting, we benefit from structure-preserving two-sided projections similar to those described in Section \ref{sec:sf_PH}. In particular, for a given subspace ${\mathcal V}_k$ and a matrix $V_k$ whose columns form an orthonormal basis for ${\mathcal V}_k$, we set \begin{equation}\label{eq:defn_Wr} W_k \;\; := \;\; QV_k (V^H_k Q V_k)^{-1}. \end{equation} Furthermore, we define the projected matrices \begin{equation}\label{eq:projected_matrices} \begin{split} J_k \;\; & := \;\; W_k^H J W_k, \;\; R_k \;\; := \;\; W_k^H R W_k, \\ Q_k \;\; & := \;\; V_k^H Q V_k, \;\;\;\: B_k \;\; := \;\; W_k^H B. \end{split} \end{equation} Recall also the identities \begin{equation}\label{eq:identities} W_k^H V_k = I \quad {\rm and} \quad (W_k V_k^H)^2 = W_k V_k^H, \end{equation} the latter of which means that $W_k V_k^H$ is an oblique projector onto ${\rm Im}(Q V_k)$. Although these identities are still available, we no longer have a tool such as Theorem~\ref{thm:Gal_interpolate} on which we could depend to establish interpolation results. This is because there is no apparent transfer function, as there is indeed no apparent linear port-Hamiltonian system that can be tied to the eigenvalue optimization characterization.
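As a quick numerical sanity check of (\ref{eq:defn_Wr}) and (\ref{eq:identities}), the following sketch builds $W_k$ from a random Hermitian positive definite $Q$ and a random orthonormal $V_k$ (illustrative sizes, not an actual FE model):

```python
import numpy as np

# Random test data: Q Hermitian positive definite, V_k with orthonormal columns.
rng = np.random.default_rng(1)
n, r = 8, 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = M @ M.conj().T + n * np.eye(n)               # Hermitian positive definite
Vk, _ = np.linalg.qr(rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))
Wk = Q @ Vk @ np.linalg.inv(Vk.conj().T @ Q @ Vk)   # as in (eq:defn_Wr)

P = Wk @ Vk.conj().T                             # oblique projector onto Im(Q Vk)
assert np.allclose(Wk.conj().T @ Vk, np.eye(r))  # W_k^H V_k = I
assert np.allclose(P @ P, P)                     # (W_k V_k^H)^2 = W_k V_k^H
```

Both identities follow directly from the Hermitian symmetry of $Q$, so they hold for any orthonormal $V_k$, not only the random one used here.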
But the following simple observation turns out to be very useful. \begin{lemma}\label{thm:reduced_Wlambda} Consider a DH model~(\ref{DH}) and a reduced model $\dot x_k= (J_k-R_k)Q_k x_k$ with coefficients as in (\ref{eq:projected_matrices}). With $W(\lambda) = (J - R) Q - \lambda I$ and $W_k(\lambda) := (J_k - R_k) Q_k - \lambda I$, we then have \[ W_k(\lambda) \;\; = \;\; W_k^H \: W(\lambda) \: V_k \quad\quad \mbox{\rm for all}\ \lambda \in {\mathbb C}. \] \end{lemma} \begin{proof} From the definition of $J_k, R_k, Q_k$ in (\ref{eq:projected_matrices}) we obtain \[ \begin{split} W_k(\lambda) & \;\; = \;\; (W_k^H (J - R) W_k) V_k^H Q V_k - \lambda I \\ & \;\; = \;\; W_k^H (J - R) Q V_k - \lambda W_k^H V_k \\ & \;\; = \;\; W_k^H \left\{ (J-R) Q - \lambda I \right\} V_k \;\; = \;\; W_k^H W(\lambda) V_k, \end{split} \] where we have employed the identities in (\ref{eq:identities}). \end{proof} Our reduced problems are expressed in terms of the reduced versions of $H_0(\lambda)$, $L(\lambda)$, and $H_1(\lambda)$ defined via \[ H_{k,0}(\lambda) := L_k(\lambda)^{-1} L_k(\lambda)^{-H}, \] with $L_k(\lambda)$ denoting a lower triangular Cholesky factor of \[ \widetilde{H}_{k,0}(\lambda) := B^H_k W_k(\lambda)^{-H} Q_k B_k B^H_k Q_k W_k(\lambda)^{-1} B_k, \] and $H_{k,1}(\lambda) := {\rm i} (\widetilde{H}_{k,1}(\lambda) - \widetilde{H}_{k,1}(\lambda)^H)$ with \[ \widetilde{H}_{k,1}(\lambda) := L_k(\lambda)^{-1} B^H_k W_k(\lambda)^{-H} Q_k B_k L_k(\lambda)^{-H}. \] Note that to ensure the uniqueness of $L(\lambda)$ and $L_k(\lambda)$, we define them as the Cholesky factors of $\widetilde{H}_{0}(\lambda)$ and $\widetilde{H}_{k,0}(\lambda)$ with real and positive entries along the diagonal. Our goal is to come up with reduced counterparts of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$, $\eta^{\rm Herm}(R; B, {\rm i}\omega)$ that Hermite-interpolate the full functions at prescribed points.
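The identity of Lemma \ref{thm:reduced_Wlambda} is also easy to confirm numerically. The sketch below uses random matrices with the DH structure ($J$ skew-Hermitian, $R$ Hermitian positive semi-definite, $Q$ Hermitian positive definite) and an arbitrary evaluation point $\lambda$; sizes are illustrative only:

```python
import numpy as np

# Random DH-structured test matrices and a projection subspace.
rng = np.random.default_rng(2)
n, r = 8, 3
Z = lambda: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = Z(); J = (A - A.conj().T) / 2                # skew-Hermitian
A = Z(); R = A @ A.conj().T                      # Hermitian positive semi-definite
A = Z(); Q = A @ A.conj().T + n * np.eye(n)      # Hermitian positive definite
Vk, _ = np.linalg.qr(rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))
Wk = Q @ Vk @ np.linalg.inv(Vk.conj().T @ Q @ Vk)

# Projected coefficients as in (eq:projected_matrices).
Jk = Wk.conj().T @ J @ Wk
Rk = Wk.conj().T @ R @ Wk
Qk = Vk.conj().T @ Q @ Vk
lam = 0.3 + 1.7j                                 # arbitrary evaluation point
Wlam  = (J - R) @ Q - lam * np.eye(n)
Wklam = (Jk - Rk) @ Qk - lam * np.eye(r)
assert np.allclose(Wklam, Wk.conj().T @ Wlam @ Vk)   # Lemma (thm:reduced_Wlambda)
```

The check passes for any $\lambda$, reflecting that the lemma relies only on $W_k (V_k^H Q V_k) = Q V_k$ and $W_k^H V_k = I$.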
For a given subspace ${\mathcal V}_k$, a matrix $V_k$ whose columns form an orthonormal basis for ${\mathcal V_k}$, and for $W_k$ as in (\ref{eq:defn_Wr}), we introduce \[ \begin{split} \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega) \; & := \; \sup_{t \in {\mathbb R}} \: \lambda_{\min} (H_{k,0}({\rm i} \omega) + t H_{k,1} ({\rm i} \omega)) \\ \eta^{\rm Herm}_k(R; B, {\rm i}\omega) \; & := \; \inf\{ \|\Delta \|_2 \; | \; \Delta = \Delta^H, \;\; {\rm i} \omega \in \Lambda \big( (J_k - R_k)Q_k - (B_k\Delta B^{H}_k)Q_k \big) \}. \end{split} \] Recall that $\eta^{\rm Herm}(R; B, {\rm i}\omega) = \widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)^{1/2}$, and a similar relation holds for the reduced problems, i.e., $\eta^{\rm Herm}_k(R; B, {\rm i}\omega) = \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)^{1/2}$. We start our analysis by establishing that the quantities $\eta^{\rm Herm}_k(R; B, {\rm i}\omega)$, $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$ are independent of the choice of basis for the subspace ${\mathcal V}_k$. For this proof we introduce the notation $\eta^{\rm Herm}_{V_k,W_k}(R; B, {\rm i}\omega)$, $\widetilde{\eta}^{\rm Herm}_{V_k,W_k}(R; B, {\rm i}\omega)$ to emphasize the particular choices of bases $V_k, W_k$ for the subspaces ${\mathcal V_k}, {\mathcal W}_k$ used in the definitions of $\eta^{\rm Herm}_k(R; B, {\rm i}\omega), \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$. Similarly, we indicate the spaces and the bases with the help of the notations $J_{W_k}$, $R_{W_k}$, $Q_{V_k}$, and $ B_{W_k}$. \begin{lemma}\label{thm:eta_basis_independent} Let the columns of $V_k$ and $\widetilde{V}_k$ form orthonormal bases for the subspace ${\mathcal V}_k$, and let $W_k := Q V_k (V_k^H Q V_k)^{-1}$, $\widetilde{W}_k := Q \widetilde{V}_k (\widetilde{V}_k^H Q \widetilde{V}_k)^{-1}$.
Then \[ \begin{split} \eta^{\rm Herm}_{V_k, W_k}(R; B, {\rm i}\omega) & = \eta^{\rm Herm}_{\widetilde{V}_k, \widetilde{W}_k}(R; B, {\rm i}\omega), \\ \widetilde{\eta}^{\rm Herm}_{V_k, W_k}(R; B, {\rm i}\omega) & = \widetilde{\eta}^{\rm Herm}_{\widetilde{V}_k, \widetilde{W}_k}(R; B, {\rm i}\omega) \end{split} \] for all $\omega \in {\mathbb R}$. \end{lemma} \begin{proof} It suffices to prove $\eta^{\rm Herm}_{V_k, W_k}(R; B, {\rm i}\omega) = \eta^{\rm Herm}_{\widetilde{V}_k, \widetilde{W}_k}(R; B, {\rm i}\omega)$; the other equality then follows immediately. Since the columns of $V_k$ and $\widetilde V_k$ form bases for the same space, there exists a unitary matrix $P$ such that $\widetilde{V}_k = V_k P$. Furthermore, by definition, \[ \widetilde{W}_k = Q (V_k P) (P^H V_k^H Q V_k P)^{-1} = Q V_k P P^H (V_k^H Q V_k)^{-1} P = W_k P. \] The assertion then follows from the following set of equivalences: \[ \begin{split} {\rm i} \omega \in \Lambda \big( (J_{W_k} - R_{W_k}) Q_{V_k} - (B_{W_k} \Delta B^H_{W_k})Q_{V_k} \big) \;\; \Longleftrightarrow \;\; \\ {\rm det}(W_k^H (J - R)Q V_k - W_k^H B \Delta B^H Q V_k - {\rm i} \omega W_k^H V_k ) = 0 \;\; \Longleftrightarrow \;\; \\{\rm det}(W_k^H \left( (J - R)Q - B \Delta B^H Q - {\rm i} \omega I \right) V_k ) = 0 \;\; \Longleftrightarrow \;\; \\ {\rm det}(P^H W_k^H \left( (J - R)Q - B \Delta B^H Q - {\rm i} \omega I \right) V_k P ) = 0 \;\; \Longleftrightarrow \;\; \\ {\rm det}(\widetilde{W}_k^H (J - R)Q \widetilde{V}_k - \widetilde{W}_k^H B \Delta B^H Q \widetilde{V}_k - {\rm i} \omega \widetilde{W}_k^H \widetilde{V}_k ) = 0 \;\; \Longleftrightarrow \;\; \\ {\rm i} \omega \in \Lambda \big( (J_{\widetilde{W}_k} - R_{\widetilde{W}_k}) Q_{\widetilde{V}_k} - (B_{\widetilde{W}_k} \Delta B^H_{\widetilde{W}_k})Q_{\widetilde{V}_k} \big). \quad\quad\quad \end{split} \] \end{proof} In the following we will develop a subspace framework including a Hermite interpolation property for DH systems.
For this we first show an auxiliary interpolation result for $B^{H} Q W(\lambda)^{-1} B$, the matrix through which $H_0(\lambda), H_1(\lambda)$ are defined. \begin{theorem}\label{thm:interpolate_matrixfuns} Consider a DH model~(\ref{DH}) and a reduced model $\dot x_k= (J_k-R_k)Q_k x_k$ with coefficients as in (\ref{eq:projected_matrices}), and let $W(\lambda) = (J - R) Q - \lambda I$ and $W_k(\lambda) := (J_k - R_k) Q_k - \lambda I$. For a given $\widehat{\lambda} \in {\mathbb C}$ such that $W(\widehat{\lambda})$ and $W_k(\widehat{\lambda})$ are invertible, the following assertions hold: \begin{enumerate} \item[\bf (i)] If $\: {\rm Im}(W(\widehat{\lambda})^{-1} B) \subseteq {\mathcal V}_k$, then $B^H Q W(\widehat{\lambda})^{-1} B \; = \; B^H_k Q_k W_k(\widehat{\lambda})^{-1} B_k$. \item[\bf (ii)] Additionally, if $\: {\rm Im}(W(\widehat{\lambda})^{-2} B) \subseteq {\mathcal V}_k$ and the orthonormal basis $V_k$ for ${\mathcal V}_k$ is such that $\: V_k = \left[ \begin{array}{cc} \widetilde{V}_k & \widehat{V}_k \end{array} \right] $ where the columns of $\:\: \widetilde{V}_k$ form an orthonormal basis for ${\rm Im}(W(\widehat{\lambda})^{-1} B)$, then $\: B^H Q W(\widehat{\lambda})^{-2} B \; = \;$ $B^H_k Q_k W_k(\widehat{\lambda})^{-2} B_k$. \end{enumerate} \end{theorem} \begin{proof} \textbf{(i)} If ${\rm Im}(W(\widehat{\lambda})^{-1} B) \subseteq {\mathcal V}_k$ then, since $W_k V_k^H$ is a projector onto ${\rm Im}(Q V_k)$, we obtain \[ \begin{split} B^H Q W(\widehat{\lambda})^{-1} B \; = \; B^H Q V_k V_k^H W(\widehat{\lambda})^{-1} B & \; = \; B^H W_k V_k^H Q V_k V_k^H W(\widehat{\lambda})^{-1} B \\ & \; = \; B^H_k Q_k V_k^H W(\widehat{\lambda})^{-1} B. \end{split} \] To show that $V_k^H W(\widehat{\lambda})^{-1} B = W_k(\widehat{\lambda})^{-1} B_k$, let $Z := W(\widehat{\lambda})^{-1} B$, and $Z_k$ be such that $V_k Z_k = Z$. (There exists a unique $Z_k$ with this property, because ${\rm Im}(Z) \subseteq {\mathcal V}_k$.) 
Then $W(\widehat{\lambda}) Z = B$ implies that $ W(\widehat{\lambda}) V_k Z_k = B$, and thus $ W^H_k W(\widehat{\lambda}) V_k Z_k = W^H_k B$. Hence, by Lemma~\ref{thm:reduced_Wlambda} we see that $Z_k = W_k(\widehat{\lambda})^{-1} B_k$, implying that \[ V_k^{H} W(\widehat{\lambda})^{-1} B \; = \; V_k^{H} Z \; = \; V_k^{H} (V_k Z_k) \; = \; W_k(\widehat{\lambda})^{-1} B_k, \] as asserted. \textbf{(ii)} Following the steps at the beginning of the proof of part (i), we have \[ B^H Q W(\widehat{\lambda})^{-2} B \;\; = \;\; B^H_k Q_k V_k^H W(\widehat{\lambda})^{-2} B. \] To show that $V_k^H W(\widehat{\lambda})^{-2} B = W_k(\widehat{\lambda})^{-2} B_k$, we exploit that \begin{equation}\label{eq:intermed_secpow_interpolate} V_k^H W(\widehat{\lambda})^{-2} B \;\; = \;\; (V_k^H W(\widehat{\lambda})^{-1} \widetilde{V}_k) (\widetilde{V}_k^H W(\widehat{\lambda})^{-1}B). \end{equation} Now define $Z := W(\widehat{\lambda})^{-1} \widetilde{V}_k$ and $Z_k$ such that $V_k Z_k = Z$ (once again such a $Z_k$ exists uniquely, because ${\rm Im}(Z) \subseteq {\mathcal V}_k$) so that $W(\widehat{\lambda}) Z = \widetilde{V}_k$. Then $ W(\widehat{\lambda}) V_k Z_k = \widetilde{V}_k$ and hence $ W^H_k W(\widehat{\lambda}) V_k Z_k = I_{\widetilde{k},m}$, where $I_{\widetilde{k},m}$ is the matrix consisting of the first $m$ columns of the $\widetilde{k}\times \widetilde{k}$ identity matrix with $\widetilde{k} := {\rm dim} \: {\mathcal V}_k > m$. This implies that $Z_k = W_k(\widehat{\lambda})^{-1} I_{\widetilde{k},m}$, so the following can be deduced about the term inside the first parenthesis on the right-hand side of (\ref{eq:intermed_secpow_interpolate}): \[ V_k^H W(\widehat{\lambda})^{-1} \widetilde{V}_k \; = \; V_k^H Z \; = \; V_k^H (V_k Z_k) \; = \; W_k(\widehat{\lambda})^{-1} I_{\widetilde{k},m}. 
\] As for the term inside the second parenthesis on the right-hand side of (\ref{eq:intermed_secpow_interpolate}), we make use of the following observation: \begin{equation}\label{eq:intermed2_secpow_interpolate} V_k^H W(\widehat{\lambda})^{-1}B \; = \; \left[ \begin{array}{c} \widetilde{V}_k^H \\ \widehat{V}_k^H \end{array} \right] W(\widehat{\lambda})^{-1}B \; = \; \left[ \begin{array}{c} \widetilde{V}_k^H W(\widehat{\lambda})^{-1}B \\ 0 \end{array} \right], \end{equation} where the last equality follows, since the columns of $\widetilde{V}_k$ form an orthonormal basis for ${\rm Im}(W(\widehat{\lambda})^{-1}B)$. Putting these observations together in (\ref{eq:intermed_secpow_interpolate}), we obtain \[ \begin{split} V_k^H W (\widehat{\lambda})^{-2}B \; & = \; (V_k^H W(\widehat{\lambda})^{-1} \widetilde{V}_k) (\widetilde{V}_k^H W(\widehat{\lambda})^{-1}B) \\ \; & = \; W_k(\widehat{\lambda})^{-1} I_{\widetilde{k},m} (\widetilde{V}_k^H W(\widehat{\lambda})^{-1}B) \\ \; & = \; W_k(\widehat{\lambda})^{-1} I_{\widetilde{k}} (V_k^H W(\widehat{\lambda})^{-1}B) \\ \; & = \; W_k(\widehat{\lambda})^{-1} ( W_k (\widehat{\lambda})^{-1}B_k ) \; = \; W_k (\widehat{\lambda})^{-2}B_k, \end{split} \] where in the third equality we exploit (\ref{eq:intermed2_secpow_interpolate}), and in the fourth equality we employ that $V_k^H W(\widehat{\lambda})^{-1}B = W_k (\widehat{\lambda})^{-1}B_k$ which is proven in part (i). \end{proof} After these preparations we can prove our main interpolation result. \begin{theorem}\label{thm:main_Hermite_interpolate} Consider a DH model~(\ref{DH}) and a reduced model $\dot x_k= (J_k-R_k)Q_k x_k$ with coefficients as in (\ref{eq:projected_matrices}), and let $W(\lambda) = (J - R) Q - \lambda I$ and $W_k(\lambda) := (J_k - R_k) Q_k - \lambda I$.
Suppose that the subspace ${\mathcal V}_k$ is such that \begin{equation}\label{eq:subspace_inclusion} {\rm Im}( W({\rm i} \widehat{\omega})^{-1} B ), {\rm Im}( W({\rm i} \widehat{\omega})^{-2} B ) \subseteq {\mathcal V}_k, \end{equation} and $W_k$ is defined as in (\ref{eq:defn_Wr}) in terms of a matrix $V_k$ whose columns form an orthonormal basis for ${\mathcal V}_k$. \begin{enumerate} \item[\bf (i)] The quantity $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega})$ is finite if and only if $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \widehat{\omega})$ is finite. If $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega})$ is finite, then \begin{equation}\label{eq:main_interpolation_kes} \widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega}) \; = \; \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \widehat{\omega}). \end{equation} \item[\bf (ii)] Moreover, if $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \omega)$ and $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \omega)$ are differentiable at $\widehat{\omega}$, then we have \begin{equation}\label{eq:main_interpolation_kes2} \frac{d\, \widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega})}{ d \omega} \; = \; \frac{d\, \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \widehat{\omega})}{d \omega}. \end{equation} \end{enumerate} \end{theorem} \begin{proof} Without loss of generality, we may assume that the matrix $V_k$ is such that $ V_k = \left[ \begin{array}{cc} \widetilde{V}_k & \widehat{V}_k \end{array} \right], $ with the columns of $\widetilde{V}_k$ forming an orthonormal basis for ${\rm Im}( W({\rm i} \widehat{\omega})^{-1} B )$. It suffices to prove the claims for this particular choice of orthonormal basis, because it is established in Lemma \ref{thm:eta_basis_independent} that the function $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \omega)$, and hence also its derivative, is independent of the choice of $V_k$ as long as its columns form an orthonormal basis for ${\mathcal V}_k$.
\textbf{(i)} By the definitions of $\widetilde{H}_0 (\lambda), \widetilde{H}_{k,0} (\lambda)$, and part (i) of Theorem \ref{thm:interpolate_matrixfuns}, we have \[ \begin{split} \widetilde{H}_0 ({\rm i} \widehat{\omega}) & \; = \; (B^{H} Q W({\rm i} \widehat{\omega})^{-1} B)^H (B^{H} Q W({\rm i} \widehat{\omega})^{-1} B) \\ & \; = \; (B^{H}_k Q_k W_k({\rm i} \widehat{\omega})^{-1} B_k)^H (B^{H}_k Q_k W_k({\rm i} \widehat{\omega})^{-1} B_k) \; = \; \widetilde{H}_{k,0} ({\rm i} \widehat{\omega}). \end{split} \] This also implies that $L({\rm i} \widehat{\omega}) = L_k({\rm i} \widehat{\omega})$ due to the uniqueness of the Cholesky factors of $\widetilde{H}_0 ({\rm i} \widehat{\omega})$, $\widetilde{H}_{k,0} ({\rm i} \widehat{\omega})$. Therefore, \[ H_0({\rm i} \widehat{\omega}) = L({\rm i} \widehat{\omega})^{-1} L({\rm i} \widehat{\omega})^{-H} = L_k({\rm i} \widehat{\omega})^{-1} L_k({\rm i} \widehat{\omega})^{-H} = H_{k,0} ({\rm i} \widehat{\omega}). \] Furthermore, \[ \begin{split} \widetilde{H}_1 ({\rm i} \widehat{\omega}) & \; = \; L({\rm i} \widehat{\omega})^{-1} (B^{H} Q W({\rm i} \widehat{\omega})^{-1} B)^H L({\rm i} \widehat{\omega})^{-H} \\ & \; = \; L_k({\rm i} \widehat{\omega})^{-1} (B^{H}_k Q_k W_k({\rm i} \widehat{\omega})^{-1} B_k)^H L_k({\rm i} \widehat{\omega})^{-H} \; = \; \widetilde{H}_{k,1} ({\rm i} \widehat{\omega}) \end{split} \] and \[ H_1 ({\rm i} \widehat{\omega}) = {\rm i} \left( \widetilde{H}_1({\rm i} \widehat{\omega}) - \widetilde{H}_1({\rm i} \widehat{\omega})^H \right) = {\rm i} \left( \widetilde{H}_{k,1}({\rm i} \widehat{\omega}) - \widetilde{H}_{k,1}({\rm i} \widehat{\omega})^H \right) = H_{k,1} ({\rm i} \widehat{\omega}). \] Since $H_1 ({\rm i} \widehat{\omega}) = H_{k,1} ({\rm i} \widehat{\omega})$, it follows that $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega})$ is finite if and only if $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \widehat{\omega})$ is finite. 
Additionally, if $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega})$ is finite, then \[ \begin{split} \widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega}) & \; = \; \max_{t \in {\mathbb R}} \: \lambda_{\min} (H_0({\rm i} \widehat{\omega}) + t H_1 ({\rm i} \widehat{\omega})) \\ & \; = \; \max_{t \in {\mathbb R}} \: \lambda_{\min} (H_{k,0}({\rm i} \widehat{\omega}) + t H_{k,1} ({\rm i} \widehat{\omega})) \; = \; \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \widehat{\omega}), \end{split} \] completing the proof of (\ref{eq:main_interpolation_kes}). \textbf{(ii)} Now let us suppose that $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \omega)$ and $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \omega)$ are differentiable at $\widehat{\omega}$. To prove the interpolation property in the derivatives, we benefit from the analytical expressions \cite{Lancaster1964} \begin{equation}\label{eq:eta_derivative} \begin{split} & \frac{d\, \widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega})}{ d \omega} \; = \; v^H \left( \frac{d H_0({\rm i} \widehat{\omega}) }{d \omega} + \widehat{t} \: \frac{d H_1 ({\rm i} \widehat{\omega})}{d \omega} \right) v, \\ & \frac{d\, \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \widehat{\omega})}{ d \omega} \; = \; v^H \left( \frac{d H_{k,0} ({\rm i} \widehat{\omega}) }{d \omega} + \widehat{t} \: \frac{d H_{k,1} ({\rm i} \widehat{\omega})}{ d \omega} \right) v, \end{split} \end{equation} where \[ \widehat{t} := \argmax_t \lambda_{\min} (H_0({\rm i} \widehat{\omega}) + t H_1 ({\rm i} \widehat{\omega})) = \argmax_t \lambda_{\min} (H_{k,0}({\rm i} \widehat{\omega}) + t H_{k,1} ({\rm i} \widehat{\omega})) \] and $v$ is a unit eigenvector corresponding to $\lambda_{\min} (H_0({\rm i} \widehat{\omega}) + \widehat{t} H_1 ({\rm i} \widehat{\omega})) = \lambda_{\min} (H_{k,0}({\rm i} \widehat{\omega}) + \widehat{t} H_{k,1} ({\rm i} \widehat{\omega}))$.
Thus, it suffices to prove that $d H_0({\rm i} \widehat{\omega}) / d \omega = d H_{k,0}({\rm i} \widehat{\omega}) / d\omega$ and $d H_1({\rm i} \widehat{\omega}) / d\omega = d H_{k,1}({\rm i} \widehat{\omega}) / d\omega$ in order to show (\ref{eq:main_interpolation_kes2}). It follows from parts (i) and (ii) of Theorem \ref{thm:interpolate_matrixfuns} that \[ \begin{split} d \widetilde{H}_0({\rm i} \widehat{\omega}) / d\omega & = {\rm i} (B^{H} Q W({\rm i} \widehat{\omega})^{-1} B)^H (B^{H} Q W({\rm i} \widehat{\omega})^{-2} B) \hskip 10ex \\ & \hskip 3ex - \; {\rm i} (B^{H} Q W({\rm i} \widehat{\omega})^{-2} B)^H (B^{H} Q W({\rm i} \widehat{\omega})^{-1} B) \\ & = {\rm i} (B^{H}_k Q_k W_k({\rm i} \widehat{\omega})^{-1} B_k)^H (B^{H}_k Q_k W_k({\rm i} \widehat{\omega})^{-2} B_k) \\ &\hskip 3ex - \; {\rm i} (B^{H}_k Q_k W_k({\rm i} \widehat{\omega})^{-2} B_k)^H (B^{H}_k Q_k W_k({\rm i} \widehat{\omega})^{-1} B_k) = d \widetilde{H}_{k,0}({\rm i} \widehat{\omega}) / d \omega. \end{split} \] Now let us determine the derivatives of the Cholesky factors, which at a given $\omega$ satisfy \[ \widetilde{H}_0({\rm i} \omega) = L({\rm i} \omega) L({\rm i} \omega)^H \quad \text{and} \quad \widetilde{H}_{k,0}({\rm i} \omega) = L_k({\rm i} \omega) L_k({\rm i} \omega)^H .
\] Differentiating these two equations, and setting the derivatives equal to each other at $\widehat{\omega}$, yields \[ \begin{split} L({\rm i} \widehat{\omega}) \left( \frac{d L({\rm i} \widehat{\omega})}{d \omega} \right)^H + \frac{d L({\rm i} \widehat{\omega})}{d \omega} L({\rm i} \widehat{\omega})^H & \; = \; L_k({\rm i} \widehat{\omega}) \left( \frac{d L_k({\rm i} \widehat{\omega})}{d \omega} \right)^H + \frac{d L_k({\rm i} \widehat{\omega})}{d \omega} L_k({\rm i} \widehat{\omega})^H \\ & \; = \; L({\rm i} \widehat{\omega}) \left( \frac{d L_k({\rm i} \widehat{\omega})}{d \omega} \right)^H + \frac{d L_k({\rm i} \widehat{\omega})}{d \omega} L({\rm i} \widehat{\omega})^H, \end{split} \] where we have used that $L({\rm i} \widehat{\omega}) = L_k({\rm i} \widehat{\omega})$ as established in part (i). Thus, both $d L({\rm i} \widehat{\omega}) / d\omega$ and $d L_k({\rm i} \widehat{\omega}) / d\omega$ are lower triangular solutions of the matrix equation \[ d \widetilde{H}_0({\rm i} \widehat{\omega}) / d\omega = L({\rm i} \widehat{\omega}) X^H + X L({\rm i} \widehat{\omega})^H. \] This linear matrix equation has a unique lower triangular solution, so $dL({\rm i} \widehat{\omega}) / d\omega = dL_k({\rm i} \widehat{\omega}) / d\omega$. Now, by the definitions of $H_0 ({\rm i} \omega), H_{k,0} ({\rm i} \omega)$, we have \[ L({\rm i} \omega) H_0 ({\rm i} \omega) L({\rm i} \omega)^H \;\; = \;\; I \;\; = \;\; L_k({\rm i} \omega) H_{k,0} ({\rm i} \omega) L_k({\rm i}\omega)^H .
\] Differentiating this equation at $\omega = \widehat{\omega}$ yields \[ \begin{split} \frac{d L({\rm i} \widehat{\omega})}{d \omega} H_0({\rm i} \widehat{\omega}) L({\rm i} \widehat{\omega})^H \; + \; L({\rm i} \widehat{\omega}) \frac{d H_0({\rm i} \widehat{\omega})}{d \omega} L({\rm i} \widehat{\omega})^H \hskip 30ex \\ \; + \;\; L({\rm i} \widehat{\omega}) H_0({\rm i} \widehat{\omega}) \left( \frac{d L({\rm i} \widehat{\omega})}{d \omega} \right)^H \;\; = \;\; \frac{d L_k({\rm i} \widehat{\omega})}{d \omega} H_{k,0}({\rm i} \widehat{\omega}) L_k({\rm i} \widehat{\omega})^H \;\; + \; \hskip 5ex \\ \hskip 8ex L_k({\rm i} \widehat{\omega}) \frac{d H_{k,0}({\rm i} \widehat{\omega})}{d \omega} L_k({\rm i} \widehat{\omega})^H \; + \; L_k({\rm i} \widehat{\omega}) H_{k,0}({\rm i} \widehat{\omega}) \left( \frac{d L_k({\rm i} \widehat{\omega})}{d \omega} \right)^H. \end{split} \] Using that $L({\rm i} \widehat{\omega}) = L_k({\rm i} \widehat{\omega})$, $dL({\rm i} \widehat{\omega}) / d\omega = dL_k({\rm i} \widehat{\omega}) / d\omega$, and $H_0({\rm i} \widehat{\omega}) = H_{k,0}({\rm i} \widehat{\omega})$, we deduce that $dH_0({\rm i} \widehat{\omega}) / d\omega = dH_{k,0}({\rm i} \widehat{\omega}) / d\omega$. Next we focus on the derivatives $d \widetilde{H}_1({\rm i} \omega) / d\omega, d \widetilde{H}_{k,1}({\rm i} \omega) / d\omega$. In particular, we use that \[ L({\rm i} \omega) \widetilde{H}_1({\rm i} \omega) L({\rm i} \omega)^H \;\; = \;\; (B^H Q W({\rm i}\omega)^{-1} B)^{H}. 
\] Differentiating both sides of the last equation at $\omega = \widehat{\omega}$ gives rise to \[ \begin{split} \frac{d \left\{ L({\rm i} \omega) \widetilde{H}_1({\rm i}\omega) L({\rm i} \omega)^H \right\}}{d\omega} \bigg|_{\omega = \widehat{\omega}} \;\; & = \;\; -{\rm i} (B^H Q W({\rm i} \widehat{\omega})^{-2} B)^{H} \\ \;\; & = \;\; -{\rm i} (B^H_k Q_k W_k({\rm i} \widehat{\omega})^{-2} B_k)^{H} \\ \;\; & = \;\; \frac{d \left\{ L_k({\rm i} \omega) \widetilde{H}_{k,1}({\rm i} \omega) L_k({\rm i} \omega)^H \right\}}{d\omega} \bigg|_{\omega = \widehat{\omega}} \;\;\; , \end{split} \] which in turn implies that \[ \begin{split} \frac{d L({\rm i} \widehat{\omega})}{d \omega} \widetilde{H}_1({\rm i} \widehat{\omega}) L({\rm i} \widehat{\omega})^H \; + \; L({\rm i} \widehat{\omega}) \frac{d \widetilde{H}_1({\rm i} \widehat{\omega})}{d \omega} L({\rm i} \widehat{\omega})^H \hskip 28ex \\ \; + \;\; L({\rm i} \widehat{\omega}) \widetilde{H}_1({\rm i} \widehat{\omega}) \left( \frac{d L({\rm i} \widehat{\omega})}{d \omega} \right)^H \;\; = \;\; \frac{d L_k({\rm i} \widehat{\omega})}{d \omega} \widetilde{H}_{k,1}({\rm i} \widehat{\omega}) L_k({\rm i} \widehat{\omega})^H \;\; + \; \hskip 5ex \\ L_k({\rm i} \widehat{\omega}) \frac{d \widetilde{H}_{k,1}({\rm i} \widehat{\omega})}{d \omega} L_k({\rm i} \widehat{\omega})^H \; + \; L_k({\rm i} \widehat{\omega}) \widetilde{H}_{k,1}({\rm i} \widehat{\omega}) \left( \frac{d L_k({\rm i} \widehat{\omega})}{d \omega} \right)^H .
\end{split} \] Once again exploiting that $L({\rm i} \widehat{\omega}) = L_k({\rm i} \widehat{\omega})$, $dL({\rm i} \widehat{\omega}) / d\omega = dL_k({\rm i} \widehat{\omega}) / d\omega$, as well as $\widetilde{H}_1({\rm i} \widehat{\omega}) = \widetilde{H}_{k,1}({\rm i} \widehat{\omega})$ in the last equation, we obtain $d \widetilde{H}_1({\rm i} \widehat{\omega}) / d\omega = d \widetilde{H}_{k,1}({\rm i} \widehat{\omega}) / d\omega$, which implies that \begin{equation*} \begin{split} \frac{d H_1 ({\rm i} \widehat{\omega})}{d\omega} & \;\; = \;\; {\rm i} \left\{ \frac{d \widetilde{H}_1({\rm i} \widehat{\omega})}{d\omega} - \left( \frac{d \widetilde{H}_1({\rm i} \widehat{\omega})}{d \omega} \right)^H \right\} \\ & \;\; = \;\; {\rm i} \left\{ \frac{d \widetilde{H}_{k,1}({\rm i} \widehat{\omega})}{d \omega} - \left( \frac{d \widetilde{H}_{k,1}({\rm i} \widehat{\omega})}{d \omega} \right)^H \right\} \;\; = \;\; \frac{d H_{k,1} ({\rm i} \widehat{\omega})}{d \omega}, \end{split} \end{equation*} and the proof of (\ref{eq:main_interpolation_kes2}) is complete. \end{proof} \begin{remark}\label{rem:rem2}{\rm The function $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \omega)$ is differentiable at $\widehat{\omega}$ whenever $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega})$ is finite (equivalently $H_1({\rm i} \widehat{\omega})$ is indefinite), $\lambda_{\min}(H_0({\rm i} \widehat{\omega}) + \widehat{t} H_1({\rm i} \widehat{\omega}))$ is simple, where $\widehat{t} := \argmax_{t \in {\mathbb R}} \lambda_{\min}(H_0({\rm i} \widehat{\omega}) + t H_1({\rm i} \widehat{\omega}))$, and the global maximum of $\lambda_{\min}(H_0({\rm i} \widehat{\omega}) + t H_1({\rm i} \widehat{\omega}))$ over $t$ is attained at a unique $t$. These conditions also guarantee the differentiability of $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \omega)$ at $\widehat{\omega}$ provided the subspace inclusion in (\ref{eq:subspace_inclusion}) holds.
This latter differentiability property is due to $H_0({\rm i} \widehat{\omega}) = H_{k,0}({\rm i} \widehat{\omega})$ and $H_1({\rm i} \widehat{\omega}) = H_{k,1}({\rm i} \widehat{\omega})$ from part (i) of Theorem \ref{thm:main_Hermite_interpolate}. Additionally, when the concave function $g(t) := \lambda_{\min}(H_0({\rm i} \widehat{\omega}) + t H_1({\rm i} \widehat{\omega}))$ attains its maximum, the maximizer is nearly always unique. The function $g(t)$ is the minimum of $m$ real analytic functions \cite{Kato1995,Rellich1969} each corresponding to an eigenvalue of $H_0({\rm i} \widehat{\omega}) + t H_1({\rm i} \widehat{\omega})$. If $g(t)$ does not have a unique maximizer, then at least one of these real analytic functions must be constant and equal to $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \widehat{\omega}) = \max_t g(t)$ everywhere. Thus, a simple sufficient condition that ensures the uniqueness of the maximizer is that $H_1({\rm i} \widehat{\omega})$ has full rank, in which case all eigenvalues of $H_0({\rm i} \widehat{\omega}) + t H_1({\rm i} \widehat{\omega})$ blow up (either to $\infty$ or $-\infty$) as $t \rightarrow \infty$ implying that each of the real analytic functions is non-constant. } \end{remark} \subsubsection{The Subspace Framework for $r^{\rm Herm}(R; B)$} The Hermite interpolation result of Theorem~\ref{thm:main_Hermite_interpolate} immediately suggests the subspace framework in Algorithm \ref{alg:structured_kadii_sf} for the computation of $r^{\rm Herm}(R; B)$. This resembles the structure preserving subspace framework to compute the unstructured stability radii $r(R; B, C)$ $\; = \;$ $r(J; B, C)$, in particular, in the way the subspaces ${\mathcal V}_k, {\mathcal W}_k$ are built. At every iteration, a reduced problem is solved using the ideas in Section~\ref{sec:small_scale_st} and employing Algorithm \ref{eigopt}. 
Letting $\widehat{\omega}$ be the global minimizer of the reduced problem, the subspaces are expanded so that the original function $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \omega)$ is Hermite interpolated by its reduced counterpart at $\omega = \widehat{\omega}$. Assuming that the sequence $\{ \omega_k \}$ converges to a minimizer $\omega_\ast$ of the function $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ such that $H_1({\rm i} \omega_\ast)$ is indefinite with full rank and $\lambda_{\min} (H_0({\rm i} \omega_\ast) + t_\ast H_1({\rm i} \omega_\ast))$ is simple, where $t_\ast := \argmax_{t \in {\mathbb R}} \lambda_{\min} (H_0({\rm i} \omega_\ast) + t H_1({\rm i} \omega_\ast))$, it can be shown by the analysis in \cite{Aliyev2018} that the sequence $\{ \omega_k \}$ converges at a super-linear rate. The conditions that $H_1({\rm i} \omega_\ast)$ is indefinite with full rank, $\lambda_{\min} (H_0({\rm i} \omega_\ast) + t_\ast H_1({\rm i} \omega_\ast))$ is simple, and the interpolation properties $H_0({\rm i} \omega_k) = H_{k,0}({\rm i} \omega_k)$, $H_1({\rm i} \omega_k) = H_{k,1}({\rm i} \omega_k)$ ensure that the full function $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i} \omega)$, as well as the reduced function $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i} \omega)$ for all large $k$, are differentiable at all $\omega$ in a ball ${\mathcal B}(\omega_\ast,\delta)$ centered at $\omega_\ast$ and of radius $\delta$. This differentiability property is essential for the applicability of the rate-of-convergence analysis in \cite{Aliyev2018}.
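For concreteness, one expansion step of this scheme (a solve with $W({\rm i}\omega)$ to obtain $W({\rm i}\omega)^{-1}B$ and $W({\rm i}\omega)^{-2}B$, re-orthonormalization, and the rebuild of the left basis $W_{k+1} = Q V_{k+1} (V_{k+1}^H Q V_{k+1})^{-1}$) can be sketched in Python/NumPy. The callable \texttt{W\_of}, which returns the matrix $W({\rm i}\omega)$, is an assumption of this sketch; it stands in for however $W$ is assembled in an actual implementation.

```python
import numpy as np

def expand_subspaces(V, Q, W_of, omega, B):
    """One (sketched) expansion step: append W(i*omega)^{-1} B and
    W(i*omega)^{-2} B to the right basis V, re-orthonormalize via QR,
    and rebuild the left basis W = Q V (V^H Q V)^{-1}.
    `W_of` is an assumed callable returning the matrix W(i*omega)."""
    Wm = W_of(omega)
    X1 = np.linalg.solve(Wm, B)        # W(i*omega)^{-1} B
    X2 = np.linalg.solve(Wm, X1)       # W(i*omega)^{-2} B
    Vnew, _ = np.linalg.qr(np.hstack([V, X1, X2]))
    M = Vnew.conj().T @ Q @ Vnew       # V^H Q V
    Wnew = Q @ Vnew @ np.linalg.inv(M)
    return Vnew, Wnew
```

With $Q$ Hermitian, this choice of left basis yields the biorthogonality $W_{k+1}^H V_{k+1} = I$ exploited by the Petrov-Galerkin projections.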
\begin{algorithm}[ht] \begin{algorithmic}[1] \REQUIRE{Matrices $B \in \mathbb{C}^{n\times m}$, $ J, R, Q \in \mathbb{C}^{n\times n}$.} \ENSURE{The sequence $\{ \omega_k \}$.} \STATE Choose the initial interpolation points $\omega_{1}, \dots , \omega_{j} \in {\mathbb R}.$ \STATE $V_j \gets {\rm orth} \begin{bmatrix} W(\ri\omega_1)^{-1}B & W(\ri\omega_1)^{-2}B & \dots & W(\ri\omega_j)^{-1}B & W(\ri\omega_j)^{-2}B \end{bmatrix}$, \\ $\;\;\;\; W_j \gets QV_j(V_j^HQV_j)^{-1} $. \label{defn_init_subspaces_structured} \FOR{$k = j,\,j+1,\,\dots$} \STATE $\; \displaystyle \omega_{k+1} \gets \: {\rm argmin}_{\omega \in {\mathbb R}} \; \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$. \label{solve_keduced_structured} \STATE $\widehat{V}_{k+1} \gets \begin{bmatrix} W(\ri\omega_{k+1})^{-1}B & W(\ri\omega_{k+1})^{-2}B \end{bmatrix}$. \label{defn_later_subspaces_structured} \STATE $V_{k+1} \gets \operatorname{orth}\left(\begin{bmatrix} V_{k} & \widehat{V}_{k+1} \end{bmatrix}\right) \quad \text{and}\quad W_{k+1} \gets Q V_{k+1}(V_{k+1}^H Q V_{k+1})^{-1}$. \ENDFOR \end{algorithmic} \caption{$\;$ Subspace method for large-scale computation of the structured stability radius $r^{\rm Herm}(R; B)$.} \label{alg:structured_kadii_sf} \end{algorithm} \subsection{Numerical Experiments}\label{sec:str_num_exp} In this section we present several numerical tests for Algorithms~\ref{eigopt} and~\ref{alg:structured_kadii_sf}, on synthetic examples and the FE model of the disk brake. The test set-up is similar to the one for the unstructured case in Section~\ref{sec:DH_numexps}. In particular, for Algorithm \ref{alg:structured_kadii_sf}, we use the same stopping criteria with the same parameters, but now in terms of $f_k := \min_{\omega} \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$, so we terminate when $| f_k - f_{k-1} | \leq \varepsilon | f_k + f_{k-1} |/2$ holds, or when one of the other two conditions stated in Section \ref{sec:test_setup} holds.
The initial subspaces are also chosen as described in that section. \subsubsection{Synthetic Examples} \textbf{Small-Scale Examples.} We first present numerical results for a small dense random example, where $J, R, Q \in {\mathbb R}^{20\times 20}$ and $B \in {\mathbb R}^{20\times 2}$. These matrices are generated by means of the MATLAB commands employed for the generation of the dense family in Section \ref{sec:numexp_syn}; only here the matrices are $20\times 20$ instead of $800 \times 800$ and $R$ is of rank $5$. The spectrum of $(J-R)Q$ is depicted on the top in Figure \ref{fig:structured20by20_spec}. Application of Algorithm \ref{eigopt} to this example yields $r^{\rm Herm}(R;B) = 0.0501$, and the point that is first reached on the imaginary axis under the smallest perturbation is $1.9794{\rm i}$, i.e., $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ is minimized at $\omega = 1.9794$. At the bottom of Figure \ref{fig:structured20by20_spec}, the spectra of matrices of the form $(J - (R + B \Delta B^H))Q$ are plotted for $100000$ randomly chosen Hermitian $\Delta$ such that $\| \Delta \|_2 = 0.0501$. One can notice that some of the eigenvalues (nearly) touch the imaginary axis at $1.9794{\rm i}$, marked with a circle. \begin{figure} \caption{A DH system of order $20$ with random system matrices. \textbf{(Top)} A plot of the spectrum for $(J-R)Q$. \textbf{(Bottom)} All eigenvalues of all matrices of the form $(J - (R + B \Delta B^H ))Q$ for $100000$ randomly chosen Hermitian $\Delta$ with $\| \Delta \|_2 = r^{\rm Herm}(R; B)$ are displayed.
The circle marks $1.9794{\rm i}$, the global minimizer of $\widetilde{\eta}^{\rm Herm}(R; B, \lambda)$ over $\lambda \in {\rm i} {\mathbb R}$.} \label{fig:structured20by20_spec} \end{figure} The subspace framework for this example is illustrated in Figure \ref{fig:structured20by20_sf}, where the solid and dashed curves are the plots of the full function $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ and the reduced function $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$, respectively, and the circle marks the minimizer of $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$. On the top row, the framework is initialized with two interpolation points, at $\omega = 0$ and near $\omega = -20$; the dashed curve interpolates the solid curve at these points. Then the subspaces are expanded so that the Hermite interpolation property is also satisfied at the minimizer of the dashed curve on the top, leading to the dashed curve at the bottom, which has nearly the same global minimizer as the solid curve. Note that for $\omega$ near $-14$ and for all smaller $\omega$, the matrix $H_1({\rm i} \omega)$ turns out to be definite for this example, meaning that for such $\omega$ the point ${\rm i} \omega$ is not attainable as an eigenvalue under Hermitian perturbations. In practice, at such $\omega$ we assign the objective a value considerably larger than its minimal value; in this example this assigned value is $0.1$. \noindent \textbf{Large Examples.} The remaining synthetic examples involve larger random matrices. We created three sets of matrices $J, R, Q, B$ using the commands at the beginning of Section \ref{sec:numexp_syn} for the generation of the dense family. Each of the three sets consists of four quadruples $J, R, Q \in {\mathbb R}^{n\times n}$, $B \in {\mathbb R}^{n\times 2}$ with the same $n$, specifically $n = 1000, 2000, 4000$ for the first, second and third set, respectively.
The results obtained by applications of Algorithm \ref{alg:structured_kadii_sf} to compute $r^{\rm Herm}(R; B)$ are reported in Tables \ref{table:structured_1000exs}, \ref{table:structured_2000exs}, \ref{table:structured_4000exs} for these sets, respectively. For the first family with $n = 1000$, the subspace framework, i.e., Algorithm \ref{alg:structured_kadii_sf}, already needs less computing time than Algorithm \ref{eigopt}, except for the last example, where quite a few additional subspace iterations have been performed. On this family, the direct application of Algorithm \ref{eigopt} and the subspace framework return exactly the same values for $r^{\rm Herm}(R; B)$. For the larger systems, Algorithm \ref{eigopt} becomes too expensive computationally, so we do not report its results for those dimensions. A remarkable fact we have observed is that the number of subspace iterations to reach the prescribed accuracy is usually small and appears to be independent of $n$. By the definitions of the structured and unstructured radii, we must have $r^{\rm Herm}(R; B) \geq r(R; B, B^H)$, and the radii presented in the tables are consistent with this bound. All of these examples involve the optimization of highly non-convex and non-smooth functions. Figure \ref{fig:structured1000by10000_sf} depicts $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ as a function of $\omega$ with the solid curve for the first example in the first family with $n = 1000$. The same figure also depicts the reduced function $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$ with the dashed curve for the same example at termination after 7 subspace iterations. Even though the reduced function $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$ represented by the dashed curve in this plot is defined via matrices projected onto $48$-dimensional subspaces, it captures the original function $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ remarkably well near the global minimizer $\omega_\ast = -70.9623$.
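The Monte Carlo spectral experiment reported for the small-scale example above (plotting the eigenvalues of $(J - (R + B \Delta B^H))Q$ for many random Hermitian $\Delta$ of prescribed spectral norm) admits a short sketch in Python/NumPy. The function names below are ours; the normalization of $\Delta$ to a given spectral norm is the only step requiring care.

```python
import numpy as np

def random_hermitian(m, radius, rng):
    """Random Hermitian Delta scaled so that ||Delta||_2 = radius."""
    Z = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    Delta = (Z + Z.conj().T) / 2              # Hermitian part
    return radius * Delta / np.linalg.norm(Delta, 2)

def perturbed_spectrum(J, R, Q, B, radius, rng):
    """Eigenvalues of (J - (R + B Delta B^H)) Q for one random Hermitian Delta."""
    Delta = random_hermitian(B.shape[1], radius, rng)
    return np.linalg.eigvals((J - (R + B @ Delta @ B.conj().T)) @ Q)
```

Repeating the second call many times with \texttt{radius} set to the computed $r^{\rm Herm}(R;B)$ and scatter-plotting the returned eigenvalues reproduces the kind of cloud shown at the bottom of Figure \ref{fig:structured20by20_spec}.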
\begin{figure} \caption{ Progress of Algorithm \ref{alg:structured_kadii_sf} on a random DH system of order $20$. The solid, dashed curves display $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$, $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$ as functions of $\omega$, respectively. The square, circle represent the global minima of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$, $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$. \textbf{(Top Plot)} The reduced function interpolates the full function at two points, at $\omega = 0$ and near $\omega=-20$. \textbf{(Bottom Plot)} Plot of the refined reduced function after applying one subspace iteration. } \label{fig:structured20by20_sf} \end{figure} \begin{figure} \caption{Application of Algorithm~\ref{alg:structured_kadii_sf} to compute $r^{\rm Herm}(R;B)$ on a dense random DH system of order $1000$. The full and reduced functions $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$ and $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$ at termination after $7$ subspace iterations are plotted with the solid and dashed curves, respectively. The point $(\omega_\ast, \widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega_\ast))$ with $\omega_\ast$ denoting the global minimizer of $\widetilde{\eta}^{\rm Herm}_k(R; B, {\rm i}\omega)$ is marked with a circle. 
} \label{fig:structured1000by10000_sf} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c||cc|c|c|cc|} \hline & \multicolumn{2}{c|}{$r^{\rm Herm}(R; B)$} & \multicolumn{1}{c|}{$r(R; B, B^H)$} & \multicolumn{1}{c}{iterations} & \multicolumn{2}{|c|}{run-time} \\ [0.5ex] $\#$ & Alg.~\ref{eigopt} & Alg.~\ref{alg:structured_kadii_sf} & Alg.~\ref{alg:dti} & Alg.~\ref{alg:structured_kadii_sf} & Alg.~\ref{eigopt} & Alg.~\ref{alg:structured_kadii_sf} \\ [0.5ex] \hline \hline 1 & 0.0129 & 0.0129 & 0.0113 & 7 & 99.2 & 66.2 \\ 2 & 0.0101 & 0.0101 & 0.0101 & 1 & 66.5 & 7.1 \\ 3 & 0.0141 & 0.0141 & 0.0128 & 5 & 128.5 & 72.5 \\ 4 & 0.0096 & 0.0096 & 0.0072 & 20 & 121.2 & 247.2 \\ \hline \end{tabular} \end{center} \caption{ Performance of Algorithm \ref{alg:structured_kadii_sf} to compute $r^{\rm Herm}(R;B)$ on dense random DH systems of order $1000$. For comparison, the results from Algorithm \ref{eigopt}, as well as the unstructured radii $r(R;B, B^H)$ by Algorithm \ref{alg:dti} are also included. The fifth column contains the number of subspace iterations, and the last two columns the run-times (in $s$). } \label{table:structured_1000exs} \end{table} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline & \multicolumn{1}{c|}{$r^{\rm Herm}(R; B)$} & \multicolumn{1}{c|}{$r(R; B, B^H)$} & \multicolumn{1}{c}{iterations} & \multicolumn{1}{|c|}{run-time} \\ [0.5ex] $\#$ & Alg.~\ref{alg:structured_kadii_sf} & Alg.~\ref{alg:dti} & Alg.~\ref{alg:structured_kadii_sf} & Alg.~\ref{alg:structured_kadii_sf} \\ [0.5ex] \hline \hline 1 & 0.0102 & 0.0061 & 6 & 149.1 \\ 2 & 0.0040 & 0.0012 & 1 & 17.1 \\ 3 & 0.0125 & 0.0109 & 6 & 175.9 \\ 4 & 0.0101 & 0.0100 & 3 & 126.7 \\ \hline \end{tabular} \end{center} \caption{Performance of Algorithm \ref{alg:structured_kadii_sf} to compute $r^{\rm Herm}(R;B)$ on dense random DH systems of order $n = 2000$. The fourth column contains the number of subspace iterations, and the fifth the run-times (in $s$).
} \label{table:structured_2000exs} \end{table} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline & \multicolumn{1}{c|}{$r^{\rm Herm}(R; B)$} & \multicolumn{1}{|c|}{$r(R; B, B^H)$} & \multicolumn{1}{|c|}{iterations} & \multicolumn{1}{|c|}{run-time} \\ [0.5ex] $\#$ & Alg.~\ref{alg:structured_kadii_sf} & Alg.~\ref{alg:dti} & Alg.~\ref{alg:structured_kadii_sf} & Alg.~\ref{alg:structured_kadii_sf} \\ [0.5ex] \hline \hline 1 & 0.0090 & 0.0087 & 2 & 282.6 \\ 2 & 0.0084 & 0.0068 & 3 & 319.3 \\ 3 & 0.0104 & 0.0086 & 2 & 333.1 \\ 4 & 0.0040 & 0.0007 & 46 & 786.5 \\ \hline \end{tabular} \end{center} \caption{Performance of Algorithm \ref{alg:structured_kadii_sf} to compute $r^{\rm Herm}(R;B)$ on dense random DH systems of order $n = 4000$. The fourth column contains the number of subspace iterations, and the fifth the run-times (in $s$). } \label{table:structured_4000exs} \end{table} \subsubsection{FE Model of a Disk Brake} We also applied our implementation of Algorithm~\ref{alg:structured_kadii_sf} to the FE model of the disk brake described in Section \ref{sec:numexp_brake} which is of the form (\ref{insta}), (\ref{eq:insta_matrices}) with $G(\Omega), D(\Omega), K(\Omega), M \in {\mathbb R}^{4669\times 4669}$, and $J, R, Q \in {\mathbb R}^{9338\times 9338}$. The computed value of the structured radius $r^{\rm Herm}(R; B)$ along with the computed global minimizer $\omega_\ast$ of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$, as well as the number of subspace iterations, run-time (in $s$) and the subspace dimension at termination are listed in Table \ref{tab:brake_squeal_structured} for various values of $\Omega$. It is worth comparing the computed values of $r^{\rm Herm}(R; B)$ in this table with those for the unstructured stability radius $r(R; B, B^T)$ listed in Table \ref{tab:brake_squeal_unstructured}. 
The computed structured and the unstructured stability radii are close, though the structured stability radii are slightly larger as expected in theory. \begin{table} \hskip -0.2ex \begin{tabular}{|c||ccccc|} \hline $\Omega$ & $r^{\rm Herm}(R; B)$ & $\omega_\ast$ & iterations & run-time & dimension \\ \hline \hline 2.5 & 0.01067 & $-1.938\times 10^5$ & 2 & 145.8 & 72 \\ 5 & 0.01038 & $-1.938\times 10^5$ & 2 & 133.5 & 72 \\ 10 & 0.01026 & $-1.938\times 10^5$ & 2 & 132.0 & 72 \\ 50 & 0.01000 & $-1.938\times 10^5$ & 2 & 132.5 & 72 \\ 100 & 0.00988 & $-1.938\times 10^5$ & 1 & 94.4 & 66 \\ 1000 & 0.00810 & $-1.789\times 10^5$ & 2 & 127.7 & 72 \\ 1050 & 0.00794 & $-1.789\times 10^5$ & 2 & 126.4 & 72 \\ 1100 & 0.00835 & $-1.789\times 10^5$ & 3 & 171.2 & 78 \\ 1116 & 0.01092 & $-1.789\times 10^5$ & 2 & 127.4 & 72 \\ 1150 & 0.00346 & $-1.742\times 10^5$ & 2 & 124.3 & 72 \\ 1200 & 0.00408 & $-1.742\times 10^5$ & 2 & 121.2 & 72 \\ 1250 & 0.00472 & $-1.742\times 10^5$ & 2 & 119.1 & 72 \\ 1300 & 0.00517 & $-1.742\times 10^5$ & 2 & 117.1 & 72 \\ \hline \end{tabular} \caption{ Structured stability radii $r^{\rm Herm}(R; B)$ computed by Algorithm \ref{alg:structured_kadii_sf} for the FE model of a disk brake of order $9338$ for several values of $\Omega$. The column $\omega_\ast$ depicts the computed global minimizer of $\widetilde{\eta}^{\rm Herm}(R; B, {\rm i}\omega)$, whereas the last three columns depict the number of subspace iterations, the total run-time (in $s$), and the subspace dimension at termination. } \label{tab:brake_squeal_structured} \end{table} \section{Concluding Remarks} We have proposed subspace frameworks to compute the stability radii for large scale dissipative Hamiltonian systems. The frameworks operate on the eigenvalue optimization characterizations of the stability radii derived in \cite{MehMS16a}. At every iteration, we apply DH structure preserving Petrov-Galerkin projections to small subspaces. 
This leads to the computation of the corresponding stability radii for the reduced system. We expand the subspaces used in the Petrov-Galerkin projections so that Hermite interpolation properties between the objective eigenvalue functions of the full and the reduced problems are attained at the optimizer of the reduced problem. This strategy results in super-linear convergence with respect to the subspace dimensions. We have illustrated that the frameworks work well in practice on several synthetic examples and on an FE model of a disk brake. MATLAB implementations of the proposed algorithms and subspace frameworks are made publicly available on the web\footnote{\url{http://home.ku.edu.tr/~emengi/software/DH-stabradii.html}}. Some of the data (including the data associated with the disk brake example) used in the numerical experiments are also available on the same website. One difficulty is that the proposed frameworks converge only locally. As a remedy, we have initialized the subspaces so that Hermite interpolation between the full and initial reduced problems is attained at several points on the imaginary axis. One potential strategy currently under investigation is to employ equally spaced interpolation points. Another potential strategy finds the poles closest to these equally spaced points, then employs the imaginary parts of the poles as the initial interpolation points. A further research direction currently under investigation is the maximization of the stability radii when $J, R, Q$ depend on parameters in a given parameter set. As an example, for the dissipative Hamiltonian system arising from the FE model of a disk brake, even in the simple setting considered here, $J, R, Q$ depend on the rotation speed $\Omega$. \end{document}
\begin{document} \title{Positive-homogeneous operators, heat kernel estimates and the Legendre-Fenchel transform} \begin{center} \textit{Dedicated to Professor Rodrigo Ba\~{n}uelos on the occasion of his 60th birthday.} \end{center} \begin{abstract} We consider a class of homogeneous partial differential operators on a finite-dimensional vector space and study their associated heat kernels. The heat kernels for this general class of operators are seen to arise naturally as the limiting objects of the convolution powers of complex-valued functions on the square lattice in the way that the classical heat kernel arises in the (local) central limit theorem. These so-called positive-homogeneous operators generalize the class of semi-elliptic operators in the sense that the definition is coordinate-free. More generally, we introduce a class of variable-coefficient operators, each of which is uniformly comparable to a positive-homogeneous operator, and we study the corresponding Cauchy problem for the heat equation. Under the assumption that such an operator has H\"{o}lder continuous coefficients, we construct a fundamental solution to its heat equation by the method of E. E. Levi, adapted to parabolic systems by A. Friedman and S. D. Eidelman. Though our results in this direction are implied by the long-known results of S. D. Eidelman for $2\vec{b}$-parabolic systems, our focus is to highlight the role played by the Legendre-Fenchel transform in heat kernel estimates. Specifically, we show that the fundamental solution satisfies an off-diagonal estimate, i.e., a heat kernel estimate, written in terms of the Legendre-Fenchel transform of the operator's principal symbol--an estimate which is seen to be sharp in many cases. 
\end{abstract} \noindent{\small\bf Keywords:} Semi-elliptic operators, quasi-elliptic operators, $2\vec{b}$-parabolic operators, heat kernel estimates, Legendre-Fenchel transform.\\ \noindent{\small\bf Mathematics Subject Classification:} Primary 35H30; Secondary 35K25. \section{Introduction} In this article, we consider a class of homogeneous partial differential operators on a finite-dimensional vector space and study their associated heat kernels. These operators, which we call nondegenerate-homogeneous operators, are seen to generalize the well-studied classes of semi-elliptic operators introduced by F. Browder \cite{Browder1957}, also known as quasi-elliptic operators \cite{Volevich1962}, and a special ``positive'' subclass of semi-elliptic operators which appear as the spatial part of S. D. Eidelman's $2\vec{b}$-parabolic operators \cite{Eidelman1960}. In particular, this class of operators contains all integer powers of the Laplacian. \subsection{Semi-Elliptic Operators} \noindent To motivate the definition of nondegenerate-homogeneous operators, given in the next section, we first introduce the class of semi-elliptic operators. Semi-elliptic operators are seen to be prototypical examples of nondegenerate-homogeneous operators; in fact, the definition of nondegenerate-homogeneous operators is given to formulate the following construction in a basis-independent way. Given a $d$-tuple of positive integers $\mathbf{n}=(n_1,n_2,\dots,n_d)\in\mathbb{N}_+^d$ and a multi-index $\beta=(\beta_1,\beta_2,\dots,\beta_d)\in\mathbb{N}^d$, set $|\beta:\mathbf{n}|=\sum_{k=1}^d\beta_k/n_k$.
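As a small computational aside (the helper names below are ours, not part of the development), the weight $|\beta:\mathbf{n}|$ and the multi-indices it singles out can be enumerated with exact rational arithmetic; here for $\mathbf{n}=(4,2)$:

```python
from fractions import Fraction
from itertools import product

def weight(beta, n):
    """|beta : n| = sum_k beta_k / n_k, computed exactly as a Fraction."""
    return sum(Fraction(b, m) for b, m in zip(beta, n))

n = (4, 2)
# Multi-indices with |beta : n| <= 1 (the terms an operator below may contain),
# and those of weight exactly one (the principal part).
admissible = [b for b in product(range(5), range(3)) if weight(b, n) <= 1]
principal = [b for b in admissible if weight(b, n) == 1]
```

For $\mathbf{n}=(4,2)$ the principal multi-indices are $(4,0)$, $(2,1)$ and $(0,2)$, matching the weight-one terms of the construction that follows.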
Consider the constant coefficient partial differential operator \begin{equation*} \Lambda=\sum_{|\beta:\mathbf{n}|\leq1}a_{\beta}D^{\beta} \end{equation*} with principal part (relative to $\mathbf{n}$) \begin{equation*} \Lambda_p=\sum_{|\beta:\mathbf{n}|=1}a_{\beta}D^{\beta}, \end{equation*} where $a_{\beta}\in\mathbb{C}$ and $D^{\beta}=(i\partial_{x_1})^{\beta_1}(i\partial_{x_2})^{\beta_2}\cdots(i\partial_{x_d})^{\beta_d}$ for each multi-index $\beta\in\mathbb{N}^d$. Such an operator $\Lambda$ is said to be \textit{semi-elliptic} if the symbol of $\Lambda_p$, defined by $P_p(\xi)=\sum_{|\beta:\mathbf{n}|=1}a_{\beta}\xi^{\beta}$ for $\xi\in\mathbb{R}^d$, is non-vanishing away from the origin. If $\Lambda$ satisfies the stronger condition that $\Re P_p(\xi)$ is strictly positive away from the origin, we say that it is \textit{positive-semi-elliptic}. Perhaps the most important property of semi-elliptic operators is that their principal part $\Lambda_p$ is homogeneous in the following sense: if, given any smooth function $f$, we put $\delta_t(f)(x)=f(t^{1/n_1}x_1,t^{1/n_2}x_2,\dots,t^{1/n_d}x_d)$ for all $t>0$ and $x=(x_1,x_2,\dots,x_d)\in\mathbb{R}^d$, then \begin{equation*} t\Lambda_p=\delta_{1/t}\circ \Lambda_p\circ \delta_t \end{equation*} for all $t>0$. This homogeneous structure was used explicitly in the work of F. Browder and L. H\"{o}rmander and, in this article, we generalize this notion. We note that our definition of the differential operators $D^{\beta}$ is given to ensure a straightforward relationship between operators and symbols under our convention for the Fourier transform (defined in Subsection \ref{subsec:Preliminaries}); this definition differs only slightly from the standard references \cite{Hormander1963,Hormander1983,Rudin1991, Taylor1981} in which $i$ is replaced by $1/i$. In both conventions, the symbol of the operator $\Lambda=-\Delta=-\sum_{k=1}^d\partial^2_{x_k}$ is the positive polynomial $\xi\mapsto|\xi|^2=\sum_{k=1}^d\xi_k^2$.
In fact, the principal symbols of all positive-semi-elliptic operators agree in both conventions.\\ \noindent As mentioned above, the class of semi-elliptic operators was introduced by F. Browder in \cite{Browder1957} who studied spectral asymptotics for a related class of variable-coefficient operators (operators of constant strength). Semi-elliptic operators appeared later in L. H\"{o}rmander's text \cite{Hormander1963} as model examples of hypoelliptic operators on $\mathbb{R}^d$ beyond the class of elliptic operators. Around the same time, L. R. Volevich \cite{Volevich1962} independently introduced the same class of operators but instead called them ``quasi-elliptic''. Since then, the theory of semi-elliptic operators, and hence quasi-elliptic operators, has reached a high level of sophistication and we refer the reader to the articles \cite{Apel2014, Artino1973, Artino1993, Artino1977, Artino1995, Browder1957, Hile2006, Hile2001, Hormander1963, Hormander1983, Kannai1969, Triebel1983, Tsutsumi1975}, which use the term semi-elliptic, and the articles \cite{Bondar2012, Bondar2012a, Bondar2008, Cavallucci1965, Demidenko1989, Demidenko1993, Demidenko1994, Demidenko1998, Demidenko2001, Demidenko2007, Demidenko2008, Demidenko2009, Giusti1967, Matsuzawa1968, Pini1962, Troisi1971, Volevich1960, Volevich1962}, which use the term quasi-elliptic, for an account of this theory. We would also like to point to the 1971 paper of M. Troisi \cite{Troisi1971} which gives a more complete list of references (pertaining to quasi-elliptic operators).\\ \noindent Shortly after F. Browder's paper \cite{Browder1957} appeared, S. D. 
Eidelman considered a subclass of semi-elliptic operators on $\mathbb{R}^{d+1}=\mathbb{R}\oplus\mathbb{R}^d$ (and systems thereof) of the form \begin{equation}\label{eq:2b-Parabolic} \partial_t+\sum_{|\beta:2\mathbf{m}|\leq 1}a_{\beta}D^{\beta}=\partial_t+\sum_{|\beta:\mathbf{m}|\leq 2}a_{\beta}D^{\beta}, \end{equation} where $\mathbf{m}\in\mathbb{N}_+^d$ and the coefficients $a_{\beta}$ are functions of $x$ and $t$. Such an operator is said to be \textit{$2\mathbf{m}$-parabolic} if its spatial part, $\sum_{|\beta:2\mathbf{m}|\leq 1}a_{\beta}D^{\beta}$, is (uniformly) positive-semi-elliptic. We note however that Eidelman's work and the existing literature refer exclusively to $2\vec{b}$-parabolic operators, i.e., where $\mathbf{m}=\vec{b}$, and for consistency we write $2\vec{b}$-parabolic henceforth \cite{Eidelman1960,Eidelman2004}. The relationship between positive-semi-elliptic operators and $2\vec{b}$-parabolic operators is analogous to the relationship between the Laplacian and the heat operator and, in the context of this article, the relationship between nondegenerate-homogeneous and positive-homogeneous operators described by Proposition \ref{prop:Dichotomy}. The theory of $2\vec{b}$-parabolic operators, which generalizes the theory of parabolic partial differential equations (and systems), has seen significant advancement by a number of mathematicians since Eidelman's original work. We encourage the reader to see the recent text \cite{Eidelman2004} which provides an account of this theory and an exhaustive list of references. It should be noted however that the literature encompassing semi-elliptic operators and quasi-elliptic operators, as far as we can tell, has very few cross-references to the literature on $2\vec{b}$-parabolic operators beyond the 1960s. We suspect that the absence of cross-references is due to the distinctness of vocabulary. 
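As a quick, concrete check of the dilation homogeneity discussed above (an illustration of ours, not drawn from the cited literature), take $d=2$, $\mathbf{n}=(4,2)$ and the positive-semi-elliptic operator $\Lambda_p = D^{(4,0)} + D^{(0,2)} = \partial_{x_1}^4 - \partial_{x_2}^2$, whose principal symbol $\xi_1^4+\xi_2^2$ is strictly positive away from the origin. In this case $\delta_t(f)(x)=f(t^{1/4}x_1,t^{1/2}x_2)$, and the chain rule gives
\begin{equation*}
\partial_{x_1}^4\,\delta_t(f) = t^{4/4}\,\delta_t\big(\partial_{x_1}^4 f\big)
\qquad\mbox{and}\qquad
\partial_{x_2}^2\,\delta_t(f) = t^{2/2}\,\delta_t\big(\partial_{x_2}^2 f\big),
\end{equation*}
so each term of weight $|\beta:\mathbf{n}|=1$ contributes exactly one factor of $t$. Since $\delta_{1/t}\circ\delta_t$ is the identity, it follows that $\delta_{1/t}\circ\Lambda_p\circ\delta_t = t\,\Lambda_p$, as claimed.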
\\ \subsection{Motivation: Convolution powers of complex-valued functions on $\mathbb{Z}^d$} We motivate the study of homogeneous operators by first demonstrating the natural appearance of their heat kernels in the study of convolution powers of complex-valued functions. To this end, consider a finitely supported function $\phi:\mathbb{Z}^d\rightarrow\mathbb{C}$ and define its convolution powers iteratively by \begin{equation*} \phi^{(n)}(x)=\sum_{y\in\mathbb{Z}^d}\phi^{(n-1)}(x-y)\phi(y) \end{equation*} for $x\in\mathbb{Z}^d$ where $\phi^{(1)}=\phi$. In the special case that $\phi$ is a probability distribution, i.e., $\phi$ is non-negative and has unit mass, $\phi$ drives a random walk on $\mathbb{Z}^d$ whose $n$th-step transition kernels are given by $k_n(x,y)=\phi^{(n)}(y-x)$. Under certain mild conditions on the random walk, $\phi^{(n)}$ is well-approximated by a single Gaussian density; this is the classical local limit theorem. Specifically, for a symmetric, aperiodic and irreducible random walk, the theorem states that \begin{equation}\label{eq:LLT} \phi^{(n)}(x)=n^{-d/2}G_\phi(x/\sqrt{n})+o(n^{-d/2}) \end{equation} uniformly for $x\in\mathbb{Z}^d$, where $G_\phi$ is the generalized Gaussian density \begin{equation}\label{eq:Gaussian} G_\phi(x)=\frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}\exp\Big(-\frac{\xi\cdot C_\phi\xi}{2}\Big)e^{-ix\cdot\xi}\,d\xi=\frac{1}{(2\pi)^{d/2}\sqrt{\det C_\phi}}\exp\left(-\frac{x\cdot {C_\phi}^{-1}x}{2}\right); \end{equation} here, $C_\phi$ is the positive definite covariance matrix associated to $\phi$ and $\cdot$ denotes the dot product \cite{Spitzer1964,Lawler2010, Randles2015a}. The canonical example is that in which $C_{\phi}=I$ (e.g., simple random walk), and in this case $\phi^{(n)}$ is approximated by the so-called heat kernel $K_{(-\Delta)}:(0,\infty)\times\mathbb{R}^d\rightarrow (0,\infty)$ defined by \begin{equation*} K_{(-\Delta)}^t(x)=(2\pi t)^{-d/2}\exp\left(-\frac{|x|^2}{2t}\right) \end{equation*} for $t>0$ and $x\in\mathbb{R}^d$.
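The iterative definition of $\phi^{(n)}$ and its Gaussian approximation are easy to reproduce numerically. The following sketch (the one-dimensional lazy walk is an illustrative choice of ours, not one of the examples below) compares $\phi^{(n)}(0)$ with the local limit prediction for $\phi(0)=1/2$, $\phi(\pm 1)=1/4$, whose covariance is $C_\phi = 1/2$, so that $n^{-1/2}G_\phi(0) = 1/\sqrt{\pi n}$:

```python
import numpy as np

def convolution_power(phi, n):
    """phi^(n) on Z, computed by iterating phi^(k) = phi^(k-1) * phi."""
    result = phi
    for _ in range(n - 1):
        result = np.convolve(result, phi)
    return result

# Lazy simple random walk: phi(0) = 1/2, phi(+-1) = 1/4 (symmetric, aperiodic).
phi = np.array([0.25, 0.5, 0.25])     # support {-1, 0, 1}
n = 100
phi_n = convolution_power(phi, n)     # supported on {-n, ..., n}
center = len(phi_n) // 2              # array index of the lattice point x = 0
# Local limit theorem: phi_n[center] should be close to 1/sqrt(pi*n).
```

For this $\phi$ one has $\phi^{(n)}(0)=\binom{2n}{n}/4^n$, and at $n=100$ the Gaussian prediction $1/\sqrt{\pi n}$ is already within roughly $0.13\%$ of it.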
Indeed, we observe that $n^{-d/2}G_{\phi}(x/\sqrt{n})=K_{(-\Delta)}^n(x)$ for each positive integer $n$ and $x\in\mathbb{Z}^d$ and so the local limit theorem \eqref{eq:LLT} is written equivalently as \begin{equation*} \phi^{(n)}(x)=K_{(-\Delta)}^n(x)+o(n^{-d/2}) \end{equation*} uniformly for $x\in\mathbb{Z}^d$. In addition to its natural appearance as the \textit{attractor} in the local limit theorem above, $K^t_{(-\Delta)}(x)$ is a fundamental solution to the heat equation \begin{equation*} \partial_t+(-\Delta)=0 \end{equation*} on $(0,\infty)\times\mathbb{R}^d$. In fact, this connection to random walk underlies the heat equation's probabilistic/diffusive interpretation. Beyond the probabilistic setting, this link between convolution powers and fundamental solutions to partial differential equations persists, as can be seen in the examples below. In what follows, the heat kernels $(t,x)\mapsto K_{\Lambda}^t(x)$ are fundamental solutions to the corresponding heat-type equations of the form \begin{equation*} \partial_t+\Lambda=0. \end{equation*} The appearance of $K_{\Lambda}$ in local limit theorems (for $\phi^{(n)}$) is then found by evaluating $K_{\Lambda}^t(x)$ at integer time $t=n$ and lattice point $x\in\mathbb{Z}^d$. \begin{example}\label{ex:intro1} Consider $\phi:\mathbb{Z}^2\rightarrow\mathbb{C}$ defined by \begin{equation*} \phi(x_1,x_2)=\frac{1}{22+2\sqrt{3}}\times\begin{cases} 8 & (x_1,x_2)=(0,0)\\ 5+\sqrt{3} & (x_1,x_2)=(\pm 1,0)\\ -2 & (x_1,x_2)=(\pm 2,0)\\ i(\sqrt{3}-1)& (x_1,x_2)=(\pm 1,-1)\\ -i(\sqrt{3}-1)& (x_1,x_2)=(\pm 1,1)\\ 2\mp 2i & (x_1,x_2)=(0,\pm 1)\\ 0 & \mbox{otherwise}.
\end{cases} \end{equation*} \begin{figure} \caption{$\Re(\phi^{(n)})$ for $n=100$} \label{fig:ex_intro_100_1} \caption{$\Re(e^{-i\pi x_2/3}K_{\Lambda}^n)$ for $n=100$} \label{fig:ex_intro_100_attractor_1} \caption{The graphs of $\Re(\phi^{(n)})$ and $\Re(e^{-i\pi x_2/3}K_{\Lambda}^n)$ for $n=100$.} \label{fig:ex1_intro} \end{figure} \noindent Analogous to the probabilistic setting, the large $n$ behavior of $\phi^{(n)}$ is described by a generalized local limit theorem in which the attractor is a fundamental solution to a heat-type equation. Specifically, the following local limit theorem holds (see \cite{Randles2015a} for details): \begin{equation*} \phi^{(n)}(x_1,x_2)=e^{-i\pi x_2/3}K_{\Lambda}^n(x_1,x_2)+o(n^{-3/4}) \end{equation*} uniformly for $(x_1,x_2)\in\mathbb{Z}^2$ where $(t,x)\mapsto K_{\Lambda}^t(x)$ is the ``heat'' kernel for the heat-type equation $\partial_t+\Lambda=0$ where \begin{equation*} \Lambda=\frac{1}{22+2\sqrt{3}}\left(2\partial_{x_1}^4-i(\sqrt{3}-1)\partial_{x_1}^2\partial_{x_2}-4\partial_{x_2}^2\right). \end{equation*} This local limit theorem is illustrated in Figure \ref{fig:ex1_intro} which shows $\Re(\phi^{(n)})$ and the approximation $\Re(e^{-i\pi x_2/3}K_{\Lambda}^n)$ when $n=100$. \end{example} \begin{example}\label{ex:intro2} Consider $\phi:\mathbb{Z}^2\rightarrow \mathbb{R}$ defined by $\phi=(\phi_1+\phi_2)/512$, where \begin{equation*} \phi_1(x_1,x_2)= \begin{cases} 326 & (x_1,x_2)=(0,0)\\ 20 & (x_1,x_2)=(\pm 2,0)\\ 1 & (x_1,x_2)=(\pm 4,0)\\ 64 & (x_1,x_2)=(0,\pm 1)\\ -16 & (x_1,x_2)=(0,\pm 2)\\ 0 & \mbox{otherwise} \end{cases} \hspace{.5cm}\mbox{ and }\hspace{.5cm} \phi_2(x_1,x_2)= \begin{cases} 76 & (x_1,x_2)=(1,0)\\ 52 & (x_1,x_2)=(-1,0)\\ \mp 4 & (x_1,x_2)=(\pm 3,0)\\ \mp 6 & (x_1,x_2)=(\pm 1,1)\\ \mp 6 & (x_1,x_2)=(\pm 1,-1)\\ \pm 2 & (x_1,x_2)=(\pm 3,1)\\ \pm 2 & (x_1,x_2)=(\pm 3,-1)\\ 0 & \mbox{otherwise}. 
\end{cases} \end{equation*} \begin{figure} \caption{$\phi^{(n)}$ for $n=10,000$} \label{fig:ex5phi100_1} \caption{$K_{\Lambda}^n$ for $n=10,000$} \label{fig:ex5HP100_1} \caption{The graphs of $\phi^{(n)}$ and $K_{\Lambda}^n$ for $n=10,000$.} \label{fig:ex2_intro} \end{figure} \noindent In this example, the following local limit theorem, which is illustrated by Figure \ref{fig:ex2_intro}, describes the limiting behavior of $\phi^{(n)}$. We have \begin{equation*} \phi^{(n)}(x_1,x_2)=K^n_{\Lambda}(x_1,x_2)+o(n^{-5/12}) \end{equation*} uniformly for $(x_1,x_2)\in\mathbb{Z}^2$ where $K_{\Lambda}$ is again a fundamental solution to $\partial_t+\Lambda=0$ where, in this case, \begin{equation*} \Lambda=\frac{1}{64}\left(-\partial_{x_1}^6+2\partial_{x_2}^4+2\partial_{x_1}^3\partial_{x_2}^2\right). \end{equation*} \end{example} \begin{example}\label{ex:intro3} Consider $\phi:\mathbb{Z}^2\rightarrow\mathbb{R}$ defined by \begin{equation*} \phi(x_1,x_2)=\begin{cases} 3/8 & (x_1,x_2)=(0,0)\\ 1/8 & (x_1,x_2)=\pm(1,1)\\ 1/4 & (x_1,x_2)=\pm(1,-1)\\ -1/16& (x_1,x_2)=\pm(2,-2)\\ 0 & \mbox{otherwise}. \end{cases} \end{equation*} Here, the following local limit theorem is valid: \begin{equation*}\label{eq:ex_3} \phi^{(n)}(x_1,x_2)=\left(1+e^{i\pi(x_1+x_2)}\right)K_{\Lambda}^n(x_1,x_2)+o(n^{-3/4}) \end{equation*} uniformly for $(x_1,x_2)\in\mathbb{Z}^2$. Here again, the attractor $K_{\Lambda}$ is the fundamental solution to $\partial_t+\Lambda=0$ where \begin{equation*} \Lambda=-\frac{1}{8}\partial_{x_1}^2+\frac{23}{384}\partial_{x_1}^4-\frac{1}{4}\partial_{x_1}\partial_{x_2}-\frac{23}{96}\partial_{x_1}^3\partial_{x_2}-\frac{1}{8}\partial_{x_2}^2+\frac{23}{64}\partial_{x_1}^2\partial_{x_2}^2-\frac{23}{96}\partial_{x_1}\partial_{x_2}^3+\frac{23}{384}\partial_{x_2}^4.
\end{equation*} \end{example} \noindent Looking back at the preceding examples, we note that the operators appearing in Examples \ref{ex:intro1} and \ref{ex:intro2} are both positive-semi-elliptic and consist only of their principal parts. This is easily verified with $\mathbf{n}=(4,2)=2(2,1)$ in Example \ref{ex:intro1} and $\mathbf{n}=(6,4)=2(3,2)$ in Example \ref{ex:intro2}. In contrast to Examples \ref{ex:intro1} and \ref{ex:intro2}, the operator $\Lambda$ which appears in Example \ref{ex:intro3} is not semi-elliptic in the given coordinate system. After a careful study, one finds that the $\Lambda$ appearing in Example \ref{ex:intro3} can be written equivalently as \begin{equation}\label{eq:DiagonalizedExample} \Lambda=-\frac{1}{8}\partial_{v_1}^2+\frac{23}{384}\partial_{v_2}^4 \end{equation} where $\partial_{v_1}$ is the directional derivative in the $v_1=(1,1)$ direction and $\partial_{v_2}$ is the directional derivative in the $v_2=(1,-1)$ direction. In this way, $\Lambda$ is seen to be semi-elliptic with respect to some basis $\{v_1,v_2\}$ of $\mathbb{R}^2$ and, with respect to this basis, we have $\mathbf{n}=(2,4)=2(1,2)$. For this reason, our formulation of nondegenerate-homogeneous operators (and positive-homogeneous operators), given in the next section, is made in a basis-independent way.\\ \noindent All of the operators appearing in Examples \ref{ex:intro1}, \ref{ex:intro2} and \ref{ex:intro3} share two important properties: homogeneity and positivity (in the sense of symbols). While we make these notions precise in the next section, loosely speaking, homogeneity is the property that $\Lambda$ ``plays well'' with some dilation structure on $\mathbb{R}^d$, though this structure is different in each example.
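As a quick mechanical check (ours, not the paper's), one can verify both the weighted-order bookkeeping for Example \ref{ex:intro1} and the binomial expansion behind the diagonalized form \eqref{eq:DiagonalizedExample}; below, a multi-index $(\beta_1,\beta_2)$ stands for the term $\partial_{x_1}^{\beta_1}\partial_{x_2}^{\beta_2}$:

```python
from fractions import Fraction as F
from itertools import product
from math import comb

def principal(n):
    """All multi-indices beta with |beta : n| = sum_k beta_k / n_k equal to 1."""
    return sorted(b for b in product(*(range(k + 1) for k in n))
                  if sum(F(bk, nk) for bk, nk in zip(b, n)) == 1)

# Example 1 is semi-elliptic with n = (4, 2): exactly the terms
# d1^4, d1^2 d2, d2^2 appear in its operator.
assert principal((4, 2)) == [(0, 2), (2, 1), (4, 0)]

# Expand -1/8 (d1 + d2)^2 + 23/384 (d1 - d2)^4 into coordinate derivatives,
# where d1, d2 abbreviate the partial derivatives in x1 and x2.
coeffs = {}
for k in range(3):
    b = (2 - k, k)
    coeffs[b] = coeffs.get(b, F(0)) - F(1, 8) * comb(2, k)
for k in range(5):
    b = (4 - k, k)
    coeffs[b] = coeffs.get(b, F(0)) + F(23, 384) * comb(4, k) * (-1) ** k

# Spot-check against the coordinate expression of Example 3's operator.
assert coeffs[(2, 0)] == F(-1, 8) and coeffs[(1, 1)] == F(-1, 4)
assert coeffs[(4, 0)] == F(23, 384) and coeffs[(2, 2)] == F(23, 64)
```

Note that every multi-index produced by the expansion has weighted order $1$ with respect to $\mathbf{n}=(2,4)$ in the basis $\{v_1,v_2\}$, in accordance with the discussion above.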
Further, homogeneity for $\Lambda$ is reflected by an analogous one for the corresponding heat kernel $K_{\Lambda}$; in fact, the specific dilation structure is, in some sense, selected by $\phi^{(n)}$ as $n\rightarrow\infty$ and leads to the corresponding local limit theorem. In further discussion of these examples, a very natural question arises: Given $\phi:\mathbb{Z}^d\rightarrow\mathbb{C}$, how does one compute the operator $\Lambda$ whose heat kernel $K_{\Lambda}$ appears as the attractor in the local limit theorem for $\phi^{(n)}$? In the examples we have looked at, one studies the Taylor expansion of the Fourier transform $\hat{\phi}$ of $\phi$ near its local extrema and, here, the symbol of the relevant operator $\Lambda$ appears as a certain scaled limit of this Taylor expansion. In general, however, this is a very delicate business and, at present, there is no known algorithm to determine these operators. In fact, it is possible that multiple (distinct) operators can appear by looking at the Taylor expansions about distinct local extrema of $\hat{\phi}$ (when they exist) and, in such cases, the corresponding local limit theorems involve sums of heat kernels, each corresponding to a distinct $\Lambda$. This study is carried out in the article \cite{Randles2015a} wherein local limit theorems involve the heat kernels of the positive-homogeneous operators studied in the present article. We note that the theory presented in \cite{Randles2015a} is not complete, for there are cases in which the associated Taylor approximations yield symbols corresponding to operators $\Lambda$ which fail to be positive-homogeneous (and hence fail to be positive-semi-elliptic) and further, the heat kernels of these (degenerate) operators appear as limits of oscillatory integrals which correspond to the presence of ``odd'' terms in $\Lambda$, e.g., the Airy function.
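The ``scaled limit of the Taylor expansion'' can be seen in the simplest setting. The sketch below is our own illustration (the one-dimensional lazy walk $\phi(0)=1/2$, $\phi(\pm 1)=1/4$, which is not one of the paper's examples); it recovers the quadratic symbol $P(\theta)=\theta^2/4$, matching the covariance $C_\phi=1/2$, from $\hat{\phi}$ near its extremum at the origin:

```python
import numpy as np

def phi_hat(theta):
    # Fourier transform of the lazy walk phi(0) = 1/2, phi(+-1) = 1/4 on Z
    return 0.5 + 0.5 * np.cos(theta)

theta = 1.2
# scaled limit: t * (1 - phi_hat(theta / sqrt(t))) -> P(theta) = theta^2 / 4
vals = [t * (1 - phi_hat(theta / np.sqrt(t))) for t in (1e2, 1e4, 1e6)]
```

As $t$ grows, the scaled quantity increases toward $\theta^2/4$; the higher-order Taylor terms of $\hat{\phi}$ are suppressed by the scaling, which is the mechanism alluded to above.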
In one dimension, a complete theory of local limit theorems is known for the class of finitely supported functions $\phi:\mathbb{Z}\rightarrow\mathbb{C}$. Beyond one dimension, a theory for local limit theorems of complex-valued functions, in which the results of \cite{Randles2015a} will fit, remains open.\\ \noindent The subject of this paper is an account of positive-homogeneous operators and their corresponding heat equations. In Section \ref{sec:HomogeneousOperators}, we introduce positive-homogeneous operators and study their basic properties; therein, we show that each positive-homogeneous operator is semi-elliptic in some coordinate system. Section \ref{sec:Holder} develops the necessary background to introduce the class of variable-coefficient operators studied in this article; this is the class of $(2\mathbf{m,v})$-positive-semi-elliptic operators introduced in Section \ref{sec:UniformlySemiElliptic}--each of which is comparable to a constant-coefficient positive-homogeneous operator. In Section \ref{sec:FundamentalSolution}, we study the heat equations corresponding to uniformly $(2\mathbf{m,v})$-positive-semi-elliptic operators with H\"{o}lder continuous coefficients. Specifically, we use the famous method of E. E. Levi, adapted to parabolic systems by A. Friedman and S. D. Eidelman, to construct a fundamental solution to the corresponding heat equation. Our results in this direction are captured by those of S. D. Eidelman \cite{Eidelman1960} and the works of his collaborators, notably S. D. Ivashyshen and A. N. Kochubei \cite{Eidelman2004}, concerning $2\vec{b}$-parabolic systems. Our focus in this presentation is to highlight the essential role played by the Legendre-Fenchel transform in heat kernel estimates which, to our knowledge, has not been pointed out in the context of semi-elliptic operators. 
In a forthcoming work, we study an analogous class of operators, written in divergence form, with measurable coefficients and their corresponding heat kernels. This class of measurable-coefficient operators does not appear to have been previously studied. The results presented here, using the Legendre-Fenchel transform, provide the background and context for our work there. \subsection{Preliminaries}\label{subsec:Preliminaries} \textbf{Fourier Analysis:} Our setting is a real $d$-dimensional vector space $\mathbb{V}$ equipped with Haar (Lebesgue) measure $dx$ and the standard smooth structure; we do not affix $\mathbb{V}$ with a norm or basis. The dual space of $\mathbb{V}$ is denoted by $\mathbb{V}^*$ and the dual pairing is denoted by $\xi(x)$ for $x\in\mathbb{V}$ and $\xi\in\mathbb{V}^*$. Let $d\xi$ be the Haar measure on $\mathbb{V}^*$ which we take to be normalized so that our convention for the Fourier transform and inverse Fourier transform, given below, makes each unitary. Throughout this article, all functions on $\mathbb{V}$ and $\mathbb{V}^*$ are understood to be complex-valued. The usual Lebesgue spaces are denoted by $L^p(\mathbb{V})=L^p(\mathbb{V},dx)$ and equipped with their usual norms $\|\cdot\|_p$ for $1\leq p\leq \infty$. In the case that $p=2$, the corresponding inner product on $L^2(\mathbb{V})$ is denoted by $\langle\cdot,\cdot\rangle$. Of course, we will also work with $L^2(\mathbb{V}^*) :=L^2(\mathbb{V}^*,d\xi)$; here the $L^2$-norm and inner product will be denoted by $\|\cdot\|_{2^*}$ and $\langle\cdot,\cdot\rangle_*$ respectively.
The Fourier transform $\mathcal{F}:L^2(\mathbb{V})\rightarrow L^2(\mathbb{V}^*)$ and inverse Fourier transform $\mathcal{F}^{-1}:L^2(\mathbb{V}^*)\rightarrow L^2(\mathbb{V})$ are initially defined for Schwartz functions $f\in \mathcal{S}(\mathbb{V})$ and $g\in\mathcal{S}(\mathbb{V}^*)$ by \begin{equation*} \mathcal{F}(f)(\xi)=\hat{f}(\xi)=\int_{\mathbb{V}}e^{i\xi(x)}f(x)\,dx\hspace{1cm}\mbox{and}\hspace{1cm}\mathcal{F}^{-1}(g)(x)=\check{g}(x)=\int_{\mathbb{V}^*}e^{-i\xi(x)}g(\xi)\,d\xi \end{equation*} for $\xi\in\mathbb{V}^*$ and $x\in\mathbb{V}$ respectively. For the remainder of this article (mainly when duality isn't of interest), $W$ stands for any real $d$-dimensional vector space (and so is interchangeable with $\mathbb{V}$ or $\mathbb{V}^*$). For a non-empty open set $\Omega\subseteq W$, we denote by $C(\Omega)$ and $C_b(\Omega)$ the set of continuous functions on $\Omega$ and bounded continuous functions on $\Omega$, respectively. The set of smooth functions on $\Omega$ is denoted by $C^{\infty}(\Omega)$ and the set of compactly supported smooth functions on $\Omega$ is denoted by $C^{\infty}_0(\Omega)$. We denote by $\mathcal{D}'(\Omega)$ the space of distributions on $\Omega$; this is dual to the space $C^{\infty}_0(\Omega)$ equipped with its usual topology given by seminorms. A partial differential operator $H$ on $W$ is said to be \textit{hypoelliptic} if it satisfies the following property: Given any open set $\Omega\subseteq W$ and any distribution $u\in \mathcal{D}'(\Omega)$ which satisfies $Hu=0$ in $\Omega$, then necessarily $u\in C^{\infty}(\Omega)$.\\ \noindent \textbf{Dilation Structure:} Denote by $\mbox{End}(W)$ and $\mbox{Gl}(W)$ the set of endomorphisms and isomorphisms of $W$ respectively. Given $E\in\mbox{End}(W)$, we consider the one-parameter group $\{t^E\}_{t>0}\subseteq \mbox{Gl}(W)$ defined by \begin{equation*} t^E=\exp((\log t)E)=\sum_{k=0}^{\infty}\frac{(\log t)^k}{k!}E^k \end{equation*} for $t>0$. 
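The series defining $t^E$ is directly computable. The following minimal numerical sketch (ours; the endomorphism is chosen arbitrarily for illustration) truncates the exponential series and checks both the expected action of a diagonal $E$ and the one-parameter group law $t^E s^E=(ts)^E$:

```python
import numpy as np

def t_pow_E(t, E, terms=60):
    """t^E = exp((log t) E), computed via a truncated power series."""
    A = np.log(t) * E
    out = np.eye(E.shape[0])
    term = np.eye(E.shape[0])
    for k in range(1, terms):
        term = term @ A / k   # accumulate A^k / k!
        out = out + term
    return out

E = np.diag([0.25, 0.5])   # a diagonal endomorphism, e.g. with reciprocal weights (4, 2)
t = 3.0
M = t_pow_E(t, E)
# for this diagonal E, t^E scales the k-th basis vector by t^(1/n_k)
assert np.allclose(np.diag(M), [t**0.25, t**0.5])
# one-parameter group law: t^E s^E = (ts)^E
assert np.allclose(t_pow_E(2.0, E) @ t_pow_E(5.0, E), t_pow_E(10.0, E))
```

The same computation applies verbatim to non-diagonal $E$, since the series converges for every endomorphism.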
These one-parameter subgroups of $\mbox{Gl}(W)$ allow us to define continuous one-parameter groups of operators on the space of distributions as follows: Given $E\in\mbox{End}(W)$ and $t>0$, first define $\delta_t^E(f)$ for $f\in C_0^{\infty}(W)$ by $\delta_t^E(f)(x)=f(t^Ex)$ for $x\in W$. Extending this to the space of distributions on $W$ in the usual way, the collection $\{\delta_t^E\}_{t>0}$ is a continuous one-parameter group of operators on $\mathcal{D}'(W)$; it will allow us to define homogeneity for partial differential operators in the next section.\\ \noindent \textbf{Linear Algebra, Polynomials And The Rest:} Given a basis $\mathbf{w}=\{w_1,w_2,\dots,w_d\}$ of $W$, we define the map $\phi_{\mathbf{w}}:W\rightarrow\mathbb{R}^d$ by setting $\phi_{\mathbf{w}}(w)=(x_1,x_2,\dots,x_d)$ whenever $w=\sum_{l=1}^d x_l w_l$. This map defines a global coordinate system on $W$; any such coordinate system is said to be a linear coordinate system on $W$. By definition, a polynomial on $W$ is a function $P:W\rightarrow\mathbb{C}$ that is a polynomial function in any (and hence every) linear coordinate system on $W$. A polynomial $P$ on $W$ is called a nondegenerate polynomial if $P(w)\neq 0$ for all $w\neq 0$. Further, $P$ is called a positive-definite polynomial if its real part, $R=\Re P$, is non-negative and has $R(w)=0$ only when $w=0$. The symbols $\mathbb{R,C,Z}$ mean what they usually do, $\mathbb{N}$ denotes the set of non-negative integers and $\mathbb{I}=[0,1]\subseteq\mathbb{R}$. The symbols $\mathbb{R}_+$, $\mathbb{N}_+$ and $\mathbb{I}_+$ denote the set of strictly positive elements of $\mathbb{R}$, $\mathbb{N}$ and $\mathbb{I}$ respectively. Likewise, $\mathbb{R}_+^d$, $\mathbb{N}_+^d$ and $\mathbb{I}_+^d$ respectively denote the set of $d$-tuples of these aforementioned sets.
Given $\alpha=(\alpha_1,\alpha_2,\dots,\alpha_d)\in\mathbb{R}_+^d$ and a basis $\mathbf{w}=\{w_1,w_2,\dots,w_d\}$ of $W$, we denote by $E_{\mathbf{w}}^\alpha$ the isomorphism of $W$ defined by \begin{equation}\label{eq:DefofE} E_{\mathbf{w}}^\alpha w_k=\frac{1}{\alpha_k}w_k \end{equation} for $k=1,2,\dots, d$. We say that two real-valued functions $f$ and $g$ on a set $X$ are comparable if, for some positive constant $C$, $C^{-1}f(x)\leq g(x)\leq C f(x)$ for all $x\in X$; in this case we write $f\asymp g$. Adopting the summation notation for semi-elliptic operators of L. H\"{o}rmander's treatise \cite{Hormander1983}, for a fixed $\mathbf{n}=(n_1,n_2,\dots,n_d)\in\mathbb{N}_+^d$, we write \begin{equation*} |\beta:\mathbf{n}|=\sum_{k=1}^d\frac{\beta_k}{n_k} \end{equation*} for all multi-indices $\beta=(\beta_1,\beta_2,\dots,\beta_d)\in\mathbb{N}^d$. Finally, throughout the estimates made in this article, constants denoted by $C$ will change from line to line without explicit mention. \section{Homogeneous operators}\label{sec:HomogeneousOperators} In this section we introduce two important classes of homogeneous constant-coefficient operators on $\mathbb{V}$. These operators will serve as ``model'' operators in our theory in the way that integer powers of the Laplacian serve as model operators in the elliptic theory of partial differential equations. To this end, let $\Lambda$ be a constant-coefficient partial differential operator on $\mathbb{V}$ and let $P:\mathbb{V}^*\rightarrow\mathbb{C}$ be its symbol. Specifically, $P$ is the polynomial on $\mathbb{V}^*$ defined by $P(\xi)=e^{-i\xi(x)}\Lambda(e^{i\xi(x)})$ for $\xi\in\mathbb{V}^*$ (this is independent of $x\in\mathbb{V}$ precisely because $\Lambda$ is a constant-coefficient operator). We first introduce the following notion of homogeneity of operators; it is mirrored by an analogous notion for symbols which we define shortly.
\begin{definition} Given $E\in\mbox{End}(\mathbb{V})$, we say that a constant-coefficient partial differential operator $\Lambda$ is homogeneous with respect to the one-parameter group $\{\delta_t^E\}$ if \begin{equation*} \delta_{1/t}^E\circ \Lambda\circ \delta_t^E=t\Lambda \end{equation*} for all $t>0$; in this case we say that $E$ is a member of the exponent set of $\Lambda$ and write $E\in\Exp(\Lambda)$. \end{definition} \noindent A constant-coefficient partial differential operator $\Lambda$ need not be homogeneous with respect to a unique one-parameter group $\{\delta_t^E\}$, i.e., $\Exp(\Lambda)$ is not necessarily a singleton. For instance, it is easily verified that, for the Laplacian $-\Delta$ on $\mathbb{R}^d$, \begin{equation*} \Exp(-\Delta)=2^{-1}I+\mathfrak{o}_d \end{equation*} where $I$ is the identity and $\mathfrak{o}_d$ is the Lie algebra of the orthogonal group, i.e., is given by the set of skew-symmetric matrices. Despite this lack of uniqueness, when $\Lambda$ satisfies a nondegenerateness condition (see Definition \ref{def:HomogeneousOperators}), we will find that the trace is the same for each member of $\Exp(\Lambda)$ and this allows us to uniquely define an ``order'' for $\Lambda$; this is Lemma \ref{lem:Trace}.\\ \noindent Given a constant coefficient operator $\Lambda$ with symbol $P$, one can quickly verify that $E\in\Exp(\Lambda)$ if and only if \begin{equation}\label{eq:homofsymbol} tP(\xi)=P(t^F\xi) \end{equation} for all $t>0$ and $\xi\in\mathbb{V}^*$ where $F=E^*$ is the adjoint of $E$. More generally, if $P$ is any continuous function on $W$ and \eqref{eq:homofsymbol} is satisfied for some $F\in\mbox{End}(\mathbb{V}^*)$, we say that $P$ \textit{is homogeneous with respect to} $\{t^F \}$ and write $F\in\Exp(P)$. This admitted slight abuse of notation should not cause confusion.
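The membership $2^{-1}I+\mathfrak{o}_d\subseteq\Exp(-\Delta)$ can be checked numerically through \eqref{eq:homofsymbol}: for $F=2^{-1}I+S$ with $S$ skew-symmetric, $t^F=\sqrt{t}\,t^S$ with $t^S$ orthogonal, so the symbol $P(\xi)=|\xi|^2$ satisfies $P(t^F\xi)=tP(\xi)$. A small sketch (ours, with an arbitrarily chosen skew matrix):

```python
import numpy as np

def expm(A, terms=60):
    # truncated power series for the matrix exponential
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

S = np.array([[0.0, 1.3], [-1.3, 0.0]])   # skew-symmetric, an element of o_2
F = 0.5 * np.eye(2) + S                   # an element of 2^{-1} I + o_2
t = 7.0
tF = expm(np.log(t) * F)                  # t^F = exp((log t) F)

rng = np.random.default_rng(0)
for _ in range(5):
    xi = rng.normal(size=2)
    y = tF @ xi
    # homogeneity of the symbol P(xi) = |xi|^2:  P(t^F xi) = t P(xi)
    assert np.isclose(y @ y, t * (xi @ xi))
```

The same check fails for a generic non-skew perturbation of $2^{-1}I$, illustrating that $\Exp(-\Delta)$ is exactly the affine set above.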
In this language, we see that $E\in \Exp(\Lambda)$ if and only if $E^*\in\Exp(P)$.\\ \noindent We remark that the notion of homogeneity defined above is similar to that put forth for homogeneous operators on homogeneous (Lie) groups, e.g., Rockland operators \cite{Folland1982}. The difference is mostly a matter of perspective: A homogeneous group $G$ is equipped with a fixed dilation structure, i.e., it comes with a one-parameter group $\{\delta_t\}$, and homogeneity of operators is defined with respect to this fixed dilation structure. By contrast, we fix no dilation structure on $\mathbb{V}$ and formulate homogeneity in terms of an operator $\Lambda$ and the existence of a one-parameter group $\{\delta_t^E\}$ that ``plays well'' with $\Lambda$ in the sense defined above. As seen in the study of convolution powers on the square lattice (see \cite{Randles2015a}), it is useful to have this freedom. \begin{definition}\label{def:HomogeneousOperators} Let $\Lambda$ be a constant-coefficient partial differential operator on $\mathbb{V}$ with symbol $P$. We say that $\Lambda$ is a nondegenerate-homogeneous operator if $P$ is a nondegenerate polynomial and $\Exp(\Lambda)$ contains a diagonalizable endomorphism. We say that $\Lambda$ is a positive-homogeneous operator if $P$ is a positive-definite polynomial and $\Exp(\Lambda)$ contains a diagonalizable endomorphism. \end{definition} \noindent For any polynomial $P$ on a finite-dimensional vector space $W$, $P$ is said to be \textit{nondegenerate-homogeneous} if $P$ is nondegenerate and $\Exp(P)$, defined as the set of $F\in\mbox{End}(W)$ for which \eqref{eq:homofsymbol} holds, contains a diagonalizable endomorphism. We say that $P$ is \textit{positive-homogeneous} if it is a positive-definite polynomial and $\Exp(P)$ contains a diagonalizable endomorphism. In this language, we have the following proposition.
\begin{proposition}\label{prop:OperatorSymbolEquivalence} Let $\Lambda$ be a constant-coefficient partial differential operator on $\mathbb{V}$ with symbol $P$. Then $\Lambda$ is a nondegenerate-homogeneous operator if and only if $P$ is a nondegenerate-homogeneous polynomial. Further, $\Lambda$ is a positive-homogeneous operator if and only if $P$ is a positive-homogeneous polynomial. \end{proposition} \begin{proof} Since the adjectives ``nondegenerate'' and ``positive'', in the sense of both operators and polynomials, are defined in terms of the symbol $P$, all that needs to be verified is that $\Exp(\Lambda)$ contains a diagonalizable endomorphism if and only if $\Exp(P)$ contains a diagonalizable endomorphism. Upon recalling that $E\in\Exp(\Lambda)$ if and only if $E^*\in\Exp(P)$, this equivalence is verified by simply noting that diagonalizability is preserved under taking adjoints. \end{proof} \begin{remark} To capture the class of nondegenerate-homogeneous operators (or positive-homogeneous operators), in addition to requiring that the symbol $P$ of an operator $\Lambda$ be nondegenerate (or positive-definite), one can demand only that $\Exp(\Lambda)$ contains an endomorphism whose characteristic polynomial factors over $\mathbb{R}$ or, equivalently, whose spectrum is real. This a priori weaker condition is seen to be sufficient by an argument which makes use of the Jordan-Chevalley decomposition. In the positive-homogeneous case, this argument is carried out in \cite{Randles2015a} (specifically Proposition 2.2) wherein positive-homogeneous operators are first defined by this (a priori weaker) condition. For the nondegenerate case, the same argument pushes through with very little modification. \end{remark} \noindent We observe easily that all positive-homogeneous operators are nondegenerate-homogeneous.
It is the ``heat'' kernels corresponding to positive-homogeneous operators that naturally appear in \cite{Randles2015a} as the attractors of convolution powers of complex-valued functions. The following proposition highlights the interplay between positive-homogeneity and nondegenerate-homogeneity for an operator $\Lambda$ on $\mathbb{V}$ and its corresponding ``heat'' operator $\partial_t+\Lambda$ on $\mathbb{R}\oplus\mathbb{V}$. \begin{proposition}\label{prop:Dichotomy} Let $\Lambda$ be a constant-coefficient partial differential operator on $\mathbb{V}$ whose exponent set $\Exp(\Lambda)$ contains a diagonalizable endomorphism. Let $P$ be the symbol of $\Lambda$, set $R=\Re P$, and assume that there exists $\xi\in\mathbb{V}^*$ for which $R(\xi)>0$. We have the following dichotomy: $\Lambda$ is a positive-homogeneous operator on $\mathbb{V}$ if and only if $\partial_t+\Lambda$ is a nondegenerate-homogeneous operator on $\mathbb{R}\oplus\mathbb{V}$. \end{proposition} \begin{proof} Given a diagonalizable endomorphism $E\in\Exp(\Lambda)$, set $E_1=I\oplus E$ where $I$ is the identity on $\mathbb{R}$. Obviously, $E_1$ is diagonalizable. Further, for any $f\in C_0^{\infty}(\mathbb{R}\oplus \mathbb{V})$, \begin{eqnarray*} \left( (\partial_t+\Lambda)\circ \delta_{s}^{E_1}\right)(f)(t,x)&=& \left (\partial_t\left(f\left(st,s^Ex\right)\right)+\Lambda \left(f\left(st,s^Ex\right)\right)\right)\\ &=&s(\partial_t+\Lambda)(f)(st,s^Ex)=s\left(\delta_s^{E_1}\circ\left(\partial_t+\Lambda\right)\right)(f)(t,x) \end{eqnarray*} for all $s>0$ and $(t,x)\in\mathbb{R}\oplus\mathbb{V}$. Hence \begin{equation*} \delta_{1/s}^{E_1}\circ (\partial_t+\Lambda)\circ\delta_s^{E_1}=s(\partial_t+\Lambda) \end{equation*} for all $s>0$ and therefore $E_1\in\Exp(\partial_t+\Lambda)$. It remains to show that $P$ is positive-definite if and only if the symbol of $\partial_t+\Lambda$ is nondegenerate. To this end, we first compute the symbol of $\partial_t+\Lambda$ which we denote by $Q$.
Since the dual space of $\mathbb{R}\oplus\mathbb{V}$ is isomorphic to $\mathbb{R}\oplus\mathbb{V}^*$, the characters of $\mathbb{R}\oplus\mathbb{V}$ are represented by the collection of maps $\left(\mathbb{R}\oplus\mathbb{V}\right)\ni(t,x)\mapsto \exp(-i(\tau t+\xi(x)))$ where $(\tau,\xi)\in\mathbb{R}\oplus\mathbb{V}^*$. Consequently, \begin{equation*} Q(\tau,\xi)=e^{-i(\tau t+\xi(x))}\left(\partial_t+\Lambda\right)(e^{i(\tau t+\xi(x))})=i\tau+P(\xi) \end{equation*} for $(\tau,\xi)\in\mathbb{R}\oplus\mathbb{V}^*$. We note that $P(0)=0$ because $E^*\in\Exp(P)$; in fact, this happens whenever $\Exp(P)$ is non-empty. Now if $P$ is a positive-definite polynomial, $\Re Q(\tau,\xi)=\Re P(\xi)=R(\xi)>0$ whenever $\xi\neq 0$. Thus to verify that $Q$ is a nondegenerate polynomial, we simply must verify that $Q(\tau,0)\neq 0$ for all non-zero $\tau\in\mathbb{R}$. This is easy to see because, in light of the above fact, $Q(\tau,0)=i\tau+P(0)=i\tau\neq 0$ whenever $\tau\neq 0$ and hence $Q$ is nondegenerate. For the other direction, we demonstrate the validity of the contrapositive statement. Assuming that $P$ is not positive-definite, an application of the intermediate value theorem, using the condition that $R(\xi)>0$ for some $\xi\in\mathbb{V}^*$, guarantees that $R(\eta)=0$ for some non-zero $\eta\in\mathbb{V}^*$. Here, we observe that $Q(\tau,\eta)=i(\tau+\Im P(\eta))=0$ when $(\tau,\eta)=(-\Im P(\eta),\eta)$ and hence $Q$ is not nondegenerate. \end{proof} \noindent We will soon return to the discussion surrounding a positive-homogeneous operator $\Lambda$ and its heat operator $\partial_t+\Lambda$. It is useful to first provide representation formulas for nondegenerate-homogeneous and positive-homogeneous operators. Such representations connect our homogeneous operators to the class of semi-elliptic operators discussed in the introduction. To this end, we define the ``base'' operators on $\mathbb{V}$.
First, for any element $u\in\mathbb{V}$, we consider the differential operator $D_u:\mathcal{D}'(\mathbb{V})\rightarrow\mathcal{D}'(\mathbb{V})$ defined originally for $f\in C_0^{\infty}(\mathbb{V})$ by \begin{equation*} (D_uf)(x)=i\frac{\partial f}{\partial u}(x)=i\left(\lim_{t\rightarrow 0}\frac{f(x+tu)-f(x)}{t}\right) \end{equation*} for $x\in\mathbb{V}$. Fixing a basis $\mathbf{v}=\{v_1,v_2,\dots,v_d\}$ of $\mathbb{V}$, we introduce, for each multi-index $\beta\in\mathbb{N}^d$, $D_{\mathbf{v}}^{\beta}=\left(D_{v_1}\right)^{\beta_1}\left(D_{v_2}\right)^{\beta_2}\cdots\left( D_{v_d}\right)^{\beta_d}$. \begin{proposition}\label{prop:OperatorRepresentation} Let $\Lambda$ be a nondegenerate-homogeneous operator on $\mathbb{V}$. Then there exist a basis $\mathbf{v}=\{v_1,v_2,\dots,v_d\}$ of $\mathbb{V}$ and $\mathbf{n}=(n_1,n_2,\dots,n_d)\in\mathbb{N}_+^d$ for which \begin{equation}\label{eq:OperatorRepresentation1} \Lambda=\sum_{|\beta:\mathbf{n}|=1}a_{\beta}D_{\mathbf{v}}^\beta, \end{equation} where $\{a_{\beta}\}\subseteq\mathbb{C}$. The isomorphism $E_{\mathbf{v}}^{\mathbf{n}}\in\mbox{Gl}(\mathbb{V})$, defined by \eqref{eq:DefofE}, is a member of $\Exp(\Lambda)$. Further, if $\Lambda$ is positive-homogeneous, then $\mathbf{n}=2\mathbf{m}$ for $\mathbf{m}=(m_1,m_2,\dots,m_d)\in\mathbb{N}_+^d$ and hence \begin{equation*} \Lambda=\sum_{|\beta:\mathbf{m}|=2}a_{\beta}D_{\mathbf{v}}^\beta. \end{equation*} \end{proposition} \noindent We will sometimes refer to the $\mathbf{n}$ and $\mathbf{m}$ of the proposition as \textit{weights}. Before addressing the proposition, we first prove the following mirrored result for symbols.
\begin{lemma}\label{lem:PolynomialRepresentation} Let $P$ be a nondegenerate-homogeneous polynomial on a $d$-dimensional real vector space $W$. Then there exist a basis $\mathbf{w}=\{w_1,w_2,\dots,w_d\}$ of $W$ and $\mathbf{n}=(n_1,n_2,\dots,n_d)\in\mathbb{N}_+^d$ for which \begin{equation*} P(\xi)=\sum_{|\beta:\mathbf{n}|=1}a_{\beta}\xi^{\beta} \end{equation*} for all $\xi=\xi_1 w_1+\xi_2 w_2+\cdots+\xi_dw_d\in W$ where $\xi^{\beta}:=\left(\xi_1\right)^{\beta_1}\left(\xi_2\right)^{\beta_2}\cdots\left(\xi_d\right)^{\beta_d}$ and $\{a_\beta\}\subseteq\mathbb{C}$. The isomorphism $E_{\mathbf{w}}^{\mathbf{n}}\in\mbox{Gl}(W)$, defined by \eqref{eq:DefofE}, is a member of $\Exp(P)$. Further, if $P$ is a positive-definite polynomial, i.e., it is positive-homogeneous, then $\mathbf{n}=2\mathbf{m}$ for $\mathbf{m}=(m_1,m_2,\dots,m_d)\in\mathbb{N}_+^d$ and hence \begin{equation*} P(\xi)=\sum_{|\beta:\mathbf{m}|=2}a_{\beta}\xi^{\beta} \end{equation*} for $\xi\in W$. \end{lemma} \begin{proof} Let $E\in\Exp(P)$ be diagonalizable and select a basis $\mathbf{w}=\{w_1,w_2,\dots,w_d\}$ which diagonalizes $E$, i.e., $Ew_k=\delta_k w_k$ where $\delta_k\in\mathbb{R}$ for $k=1,2,\dots,d$. Because $P$ is a polynomial, there exists a finite collection $\{a_{\beta}\}\subseteq\mathbb{C}$ for which \begin{equation*} P(\xi)=\sum_{\beta}a_{\beta}\xi^{\beta} \end{equation*} for $\xi\in W$. By invoking the homogeneity of $P$ with respect to $E$ and using the fact that $t^Ew_k=t^{\delta_k}w_k$ for $k=1,2,\dots, d$, we have \begin{equation*} t\sum_{\beta}a_{\beta}\xi^{\beta}=\sum_{\beta}a_{\beta}(t^E\xi)^{\beta}=\sum_{\beta}a_{\beta}t^{\delta\cdot\beta}\xi^{\beta} \end{equation*} for all $\xi\in W$ and $t>0$ where $\delta\cdot\beta=\delta_1\beta_1+\delta_2\beta_2+\cdots+\delta_d\beta_d$.
In view of the nondegenerateness of $P$, the linear independence of distinct powers of $t$ and the polynomial functions $\xi\mapsto\xi^{\beta}$, for distinct multi-indices $\beta$, as $C^{\infty}$ functions ensures that $a_{\beta}=0$ unless $\beta\cdot\delta=1$. We can therefore write \begin{equation}\label{eq:1} P(\xi)=\sum_{\beta\cdot\delta=1}a_{\beta}\xi^{\beta} \end{equation} for $\xi\in W$. We now determine $\delta=(\delta_1,\delta_2,\dots,\delta_d)$ by evaluating this polynomial along the coordinate axes. To this end, by fixing $k=1,2,\dots, d$ and setting $\xi=xw_k$ for $x\in\mathbb{R}$, it is easy to see that the summation above collapses into a single term $a_{\beta}x^{|\beta |}$ where $\beta=|\beta | e_k=(1/\delta_k)e_k$ (here $e_k$ denotes the usual $k$th-Euclidean basis vector in $\mathbb{R}^d$). Consequently, $n_k:=1/\delta_k\in\mathbb{N}_+$ for $k=1,2,\dots,d$ and thus, upon setting $\mathbf{n}=(n_1,n_2,\dots,n_d)$, \eqref{eq:1} yields \begin{equation*} P(\xi)=\sum_{|\beta:\mathbf{n}|=1}a_{\beta}\xi^{\beta} \end{equation*} for all $\xi\in W$ as was asserted. In this notation, it is also evident that $E_{\mathbf{w}}^{\mathbf{n}}=E\in\Exp(P)$. Under the additional assumption that $P$ is positive-definite, we again evaluate $P$ at the coordinate axes to see that $\Re P(xw_k)=\Re( a_{n_ke_k})x^{n_k}$ for $x\in\mathbb{R}$. In this case, the positive-definiteness of $P$ requires $\Re (a_{n_ke_k})>0$ and $n_k\in 2\mathbb{N}_+$ for each $k=1,2,\dots,d$. Consequently, $\mathbf{n}=2\mathbf{m}$ for $\mathbf{m}=(m_1,m_2,\dots,m_d)\in\mathbb{N}_+^d$ as desired. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:OperatorRepresentation}] Given a nondegenerate-homogeneous $\Lambda$ on $\mathbb{V}$ with symbol $P$, $P$ is necessarily a nondegenerate-homogeneous polynomial on $\mathbb{V}^*$ in view of Proposition \ref{prop:OperatorSymbolEquivalence}. 
We can therefore apply Lemma \ref{lem:PolynomialRepresentation} to select a basis $\mathbf{v}^*=\{v_1^*,v_2^*,\dots,v_d^*\}$ of $\mathbb{V}^*$ and $\mathbf{n}=(n_1,n_2,\dots,n_d)\in\mathbb{N}_+^d$ for which \begin{equation*} P(\xi)=\sum_{|\beta:\mathbf{n}|=1}a_{\beta}\xi^{\beta} \end{equation*} for all $\xi=\xi_1 v_1^*+\xi_2 v_2^*+\cdots+\xi_d v_d^*$ where $\{a_{\beta}\}\subseteq\mathbb{C}$. We denote by $\mathbf{v}$ the dual basis to $\mathbf{v}^*$, i.e., $\mathbf{v}=\{v_1,v_2,\dots,v_d\}$ is the unique basis of $\mathbb{V}$ for which $v_k^*(v_l)=1$ when $k=l$ and $0$ otherwise. In view of the duality of the bases $\mathbf{v}$ and $\mathbf{v}^*$, it is straightforward to verify that, for each multi-index $\beta$, the symbol of $D_{\mathbf{v}}^{\beta}$ is $\xi^{\beta}$ in the notation of Lemma \ref{lem:PolynomialRepresentation}. Consequently, the constant-coefficient partial differential operator defined by the right hand side of \eqref{eq:OperatorRepresentation1} also has symbol $P$ and so it must be equal to $\Lambda$ because operators and symbols are in one-to-one correspondence. Using \eqref{eq:OperatorRepresentation1}, it is now straightforward to verify that $E_{\mathbf{v}}^{\mathbf{n}}\in\Exp(\Lambda)$. The assertion that $\mathbf{n}=2\mathbf{m}$ when $\Lambda$ is positive-homogeneous follows from the analogous conclusion of Lemma \ref{lem:PolynomialRepresentation} by the same line of reasoning. \end{proof} \noindent In view of Proposition \ref{prop:OperatorRepresentation}, we see that all nondegenerate-homogeneous operators are semi-elliptic in some linear coordinate system (that which is defined by $\mathbf{v}$). An appeal to Theorem 11.1.11 of \cite{Hormander1983} immediately yields the following corollary. \begin{corollary} Every nondegenerate-homogeneous operator $\Lambda$ on $\mathbb{V}$ is hypoelliptic. \end{corollary} \noindent Our next goal is to associate an ``order'' to each nondegenerate-homogeneous operator.
For a positive-homogeneous operator $\Lambda$, this order will be seen to govern the on-diagonal decay of its heat kernel $K_{\Lambda}$ and so, equivalently, the ultracontractivity of the semigroup $e^{-t\Lambda}$. With the help of Lemma \ref{lem:PolynomialRepresentation}, the next few lemmas in this direction come easily.
\begin{lemma}\label{lem:PtoInfty}
Let $P$ be a nondegenerate-homogeneous polynomial on a $d$-dimensional real vector space $W$. Then $\lim_{\xi \rightarrow \infty}|P(\xi)|=\infty$; here $\xi\rightarrow \infty$ means that $|\xi|\rightarrow \infty$ in any (and hence every) norm on $W$.
\end{lemma}
\begin{proof}
The idea of the proof is to construct a function which bounds $|P|$ from below and obviously blows up at infinity. To this end, let $\mathbf{w}$ be a basis for $W$ and take $\mathbf{n}\in\mathbb{N}_+^d$ as guaranteed by Lemma \ref{lem:PolynomialRepresentation}; we have $E_{\mathbf{w}}^{\mathbf{n}}\in\Exp(P)$ where $E_{\mathbf{w}}^{\mathbf{n}}w_k=(1/n_k)w_k$ for $k=1,2,\dots, d$. Define $|\cdot|_{\mathbf{w}}^{\mathbf{n}}:W\rightarrow [0,\infty)$ by
\begin{equation*}
|\xi |_{\mathbf{w}}^{\mathbf{n}}=\sum_{k=1}^d |\xi_k|^{n_k}
\end{equation*}
where $\xi=\xi_1 w_1+\xi_2 w_2+\cdots+\xi_d w_d\in W$. We observe immediately that $E_\mathbf{w}^{\mathbf{n}}\in\Exp(|\cdot|_{\mathbf{w}}^{\mathbf{n}})$ because $t^{E_\mathbf{w}^{\mathbf{n}}}w_k=t^{1/n_k}w_k$ for $k=1,2,\dots, d$. An application of Proposition \ref{prop:ComparePoly} (a basic result appearing in our background section, Section \ref{sec:Holder}), which uses the nondegenerateness of $P$, gives a positive constant $C$ for which $|\xi|_{\mathbf{w}}^{\mathbf{n}}\leq C|P(\xi)|$ for all $\xi\in W$. The lemma now follows by simply noting that $|\xi |_{\mathbf{w}}^{\mathbf{n}}\rightarrow\infty$ as $\xi\rightarrow\infty$.
\end{proof}
\begin{lemma}
Let $P$ be a polynomial on $W$ and denote by $\Sym(P)$ the set of $O\in\mbox{End}(W)$ for which $P(O\xi)=P(\xi)$ for all $\xi\in W$.
If $P$ is a nondegenerate-homogeneous polynomial, then $\Sym(P)$, called the symmetry group of $P$, is a compact subgroup of $\mbox{Gl}(W)$.
\end{lemma}
\begin{proof}
Our supposition that $P$ is a nondegenerate polynomial ensures that, for each $O\in\Sym(P)$, $\operatorname{Ker} (O)$ is trivial and hence $O\in\mbox{Gl}(W)$. Consequently, given $O_1$ and $O_2\in\Sym(P)$, we observe that $P(O_1^{-1}\xi)=P(O_1O_1^{-1}\xi)=P(\xi)$ and $P(O_1O_2\xi)=P(O_2\xi)=P(\xi)$ for all $\xi\in W$; therefore $\Sym(P)$ is a subgroup of $\mbox{Gl}(W)$. To see that $\Sym(P)$ is compact, in view of the finite-dimensionality of $\mbox{Gl}(W)$ and the Heine-Borel theorem, it suffices to show that $\Sym(P)$ is closed and bounded. First, for any sequence $\{O_n\}\subseteq\Sym(P)$ for which $O_n\rightarrow O$ as $n\rightarrow\infty$, the continuity of $P$ ensures that $P(O\xi)=\lim_{n\rightarrow \infty}P(O_n\xi)=\lim_{n\rightarrow\infty}P(\xi)=P(\xi)$ for each $\xi\in W$ and therefore $\Sym(P)$ is closed. It remains to show that $\Sym(P)$ is bounded; this is the only piece of the proof that makes use of the fact that $P$ is nondegenerate-homogeneous and not simply homogeneous. To reach a contradiction, assume that there exists an unbounded sequence $\{O_n\}\subseteq\Sym(P)$. Choosing a norm $|\cdot|$ on $W$, let $S$ be the corresponding unit sphere in $W$. Then, passing to a subsequence if necessary, there exists a sequence $\{\xi_n\}\subseteq W$ for which $|\xi_n|=1$ for all $n\in\mathbb{N}_+$ but $\lim_{n\rightarrow\infty}|O_n\xi_n|=\infty$. In view of Lemma \ref{lem:PtoInfty},
\begin{equation*}
\infty=\lim_{n\rightarrow\infty}|P(O_n\xi_n)|=\lim_{n\rightarrow\infty}|P(\xi_n)|\leq \sup_{\xi\in S}|P(\xi)|,
\end{equation*}
which cannot be true, as $P$ is necessarily bounded on $S$ because it is continuous.
\end{proof}
\begin{lemma}\label{lem:Trace}
Let $\Lambda$ be a nondegenerate-homogeneous operator. For any $E_1,E_2\in\Exp(\Lambda)$,
\begin{equation*}
\tr E_1=\tr E_2.
\end{equation*}
\end{lemma}
\begin{proof}
Let $P$ be the symbol of $\Lambda$ and take $E_1,E_2\in\Exp(\Lambda)$. Since $E_1^*,E_2^*\in \Exp(P)$, $t^{E_1^*}t^{-E_2^*}\in\Sym(P)$ for all $t>0$. As $\Sym(P)$ is a compact group in view of the previous lemma, the determinant map $\det:\mbox{Gl}(\mathbb{V}^*)\rightarrow\mathbb{C}^*$, a Lie group homomorphism, necessarily maps $\Sym(P)$ into the unit circle. Consequently,
\begin{equation*}
1=|\det(t^{E_1^*}t^{-E_2^*})|=|\det(t^{E_1^*})\det(t^{-E_2^*})|=|t^{\tr {E_1^*}}t^{-\tr {E_2^*}}|=t^{\tr {E_1^*}}t^{-\tr {E_2^*}}
\end{equation*}
for all $t>0$. Therefore, $\tr E_1=\tr E_1^*=\tr E_2^*=\tr E_2$ as desired.
\end{proof}
\noindent By the above lemma, for each nondegenerate-homogeneous operator $\Lambda$, we may define the \textit{homogeneous order} of $\Lambda$ to be the number
\begin{equation*}
\mu_{\Lambda}=\tr E
\end{equation*}
for any $E\in\Exp(\Lambda)$. By an appeal to Proposition \ref{prop:OperatorRepresentation}, $E_{\mathbf{v}}^{\mathbf{n}}\in\Exp(\Lambda)$ for some $\mathbf{n}\in\mathbb{N}_+^d$ and so we observe that
\begin{equation}\label{eq:HomogeneousOrderExplicit}
\mu_{\Lambda}=\frac{1}{n_1}+\frac{1}{n_2}+\cdots+\frac{1}{n_d}.
\end{equation}
In particular, $\mu_{\Lambda}$ is a positive rational number. We note that the term ``homogeneous order'' does not coincide with the usual ``order'' for a partial differential operator. For instance, the Laplacian $-\Delta$ on $\mathbb{R}^d$ is a second-order operator; however, because $2^{-1}I\in\Exp(-\Delta)$, its homogeneous order is $\mu_{(-\Delta)}=\tr 2^{-1}I=d/2$.
\subsection{Positive-homogeneous operators and their heat kernels}
\noindent We now restrict our attention to the study of positive-homogeneous operators and their associated heat kernels. To this end, let $\Lambda$ be a positive-homogeneous operator on $\mathbb{V}$ with symbol $P$ and homogeneous order $\mu_{\Lambda}$.
The heat kernel for $\Lambda$ arises naturally from the study of the following Cauchy problem for the corresponding heat equation $\partial_t+\Lambda=0$: Given initial data $f:\mathbb{V}\rightarrow\mathbb{C}$ which is, say, bounded and continuous, find $u(t,x)$ satisfying \begin{equation}\label{eq:CauchyProblem} \begin{cases} \left(\partial_t+\Lambda\right) u=0 & \mbox{in } (0,\infty)\times\mathbb{V}\\ u(0,x)=f(x)& \mbox{for } x\in\mathbb{V}. \end{cases} \end{equation} The initial value problem \eqref{eq:CauchyProblem} is solved by putting \begin{equation*} u(t,x)=\int_{\mathbb{V}}K_{\Lambda}^t(x-y)f(y)\,dy \end{equation*} where $K_{\Lambda}^{(\cdot)}(\cdot):(0,\infty)\times \mathbb{V}\rightarrow\mathbb{C}$ is defined by \begin{equation*} K_{\Lambda}^t(x)=\mathcal{F}^{-1}\left(e^{-tP}\right)(x)=\int_{\mathbb{V}^*}e^{-i\xi(x)}e^{-tP(\xi)}\,d\xi \end{equation*} for $t>0$ and $x\in\mathbb{V}$; we call $K_{\Lambda}$ \textit{the heat kernel} associated to $\Lambda$. Equivalently, $K_{\Lambda}$ is the integral (convolution) kernel of the continuous semigroup $\{e^{-t\Lambda}\}_{t>0}$ of bounded operators on $L^2(\mathbb{V})$ with infinitesimal generator $-\Lambda$. That is, for each $f\in L^2(\mathbb{V})$, \begin{equation}\label{eq:ConvolutionSemigroupDefinition} \left(e^{-t\Lambda}f\right)(x)=\int_{\mathbb{V}}K_{\Lambda}^t(x-y)f(y)\,dy \end{equation} for $t>0$ and $x\in\mathbb{V}$. \noindent Let us make some simple observations about $K_{\Lambda}$. First, by virtue of Lemma \ref{lem:PtoInfty}, it follows that $K_{\Lambda}^t\in \mathcal{S}(\mathbb{V})$ for each $t>0$. Further, for any $E\in\Exp(\Lambda)$, \begin{eqnarray*} \lefteqn{K_{\Lambda}^t(x)=\int_{\mathbb{V}^*}e^{-i\xi(x)}e^{-P(t^{E^*}\xi)}\,d\xi}\\ &&\hspace{2cm}=\int_{\mathbb{V}^*}e^{-i(t^{-E^*})\xi(x)}e^{-P(\xi)}\det (t^{-E^*})\,d\xi =\frac{1}{t^{\tr E}}\int_{\mathbb{V}^*}e^{-i\xi(t^{-E}x)}e^{-P(\xi)}\,d\xi=\frac{1}{t^{\mu_{\Lambda}}}K_{\Lambda}^1(t^{-E}x) \end{eqnarray*} for $t>0$ and $x\in\mathbb{V}$. 
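\noindent As a sanity check, the scaling identity just derived can be specialized to the classical case $\Lambda=-\Delta$, where $E=2^{-1}I\in\Exp(-\Delta)$ and $\mu_{(-\Delta)}=d/2$:

```latex
% Specializing K_\Lambda^t(x) = t^{-\mu_\Lambda} K_\Lambda^1(t^{-E}x)
% to \Lambda = -\Delta with E = 2^{-1}I:
K_{-\Delta}^{t}(x)=\frac{1}{t^{d/2}}\,K_{-\Delta}^{1}\!\left(\frac{x}{\sqrt{t}}\right),
\qquad t>0,\ x\in\mathbb{R}^{d},
```

which is the familiar parabolic self-similarity of the Gaussian heat kernel.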
This computation immediately yields the so-called on-diagonal estimate for $K_{\Lambda}$,
\begin{equation*}
\|e^{-t\Lambda}\|_{1\to\infty}=\|K_{\Lambda}^t\|_{\infty}=\frac{1}{t^{\mu_{\Lambda}}}\|K_{\Lambda}^1\|_{\infty}\leq \frac{C}{t^{\mu_{\Lambda}}}
\end{equation*}
for $t>0$; this is equivalently a statement of ultracontractivity for the semigroup $e^{-t\Lambda}$. As it turns out, we can say something much stronger.
\begin{proposition}\label{prop:CCEstimates}
Let $\Lambda$ be a positive-homogeneous operator with symbol $P$ and homogeneous order $\mu_{\Lambda}$. Let $R^{\#}:\mathbb{V}\rightarrow\mathbb{R}$ be the Legendre-Fenchel transform of $R=\Re P$ defined by
\begin{equation*}
R^{\#}(x)=\sup_{\xi\in\mathbb{V}^*}\{\xi(x)-R(\xi)\}
\end{equation*}
for $x\in\mathbb{V}$. Also, let $\mathbf{v}$ and $\mathbf{m}\in\mathbb{N}_+^d$ be as guaranteed by Proposition \ref{prop:OperatorRepresentation}. Then, there exist positive constants $C_0$ and $M$ and, for each multi-index $\beta$, a positive constant $C_{\beta}$ such that, for all $k\in\mathbb{N}$,
\begin{equation}\label{eq:CCDerivativeEstimate}
\left|\partial_t^kD_{\mathbf{v}}^{\beta}K_{\Lambda}^t(x-y)\right|\leq \frac{C_{\beta}C_0^k k!}{t^{\mu_{\Lambda}+k+|\beta:2\mathbf{m}|}}\exp\left(-tMR^{\#}\left(\frac{x-y}{t}\right)\right)
\end{equation}
for all $x,y\in\mathbb{V}$ and $t>0$. In particular,
\begin{equation}\label{eq:CCEstimate}
\left|K_{\Lambda}^t(x-y)\right|\leq \frac{C_0}{t^{\mu_{\Lambda}}}\exp\left(-tMR^{\#}\left(\frac{x-y}{t}\right)\right)
\end{equation}
for all $x,y\in\mathbb{V}$ and $t>0$.
\end{proposition}
\begin{remark}
In view of \eqref{eq:HomogeneousOrderExplicit}, the exponent on the prefactor in \eqref{eq:CCDerivativeEstimate} can be equivalently written, for any multi-index $\beta$ and $k\in\mathbb{N}$, as $\mu_{\Lambda}+k+|\beta:2\mathbf{m}|=k+|\mathbf{1}+\beta:2\mathbf{m}|=|\mathbf{1}+2k\mathbf{m}+\beta:2\mathbf{m}|$ where $\mathbf{1}=(1,1,\dots,1)$.
\end{remark}
\begin{remark}
We note that the estimates of Proposition \ref{prop:CCEstimates} are written in terms of the difference $x-y$ and can (trivially) be expressed in terms of a single spatial variable $x$. The estimates are written in this way to emphasize the role that $K_{\Lambda}$ plays as an integral kernel. We will later replace $\Lambda$ in \eqref{eq:HeatEquation} by a comparable variable-coefficient operator $H$ and, in that setting, the associated heat kernel is not a convolution kernel and so we seek estimates involving two spatial variables $x$ and $y$. To that end, the estimates here form a template for estimates in the variable-coefficient setting.
\end{remark}
\noindent We prove the proposition above in Section \ref{sec:FundamentalSolution}; the remainder of this section is dedicated to discussing the result and connecting it to the existing theory. Let us first note that the estimate \eqref{eq:CCDerivativeEstimate} is mirrored by an analogous space-time estimate, Theorem 5.3 of \cite{Randles2015a}, for the convolution powers of complex-valued functions on $\mathbb{Z}^d$ satisfying certain conditions (see Section 5 of \cite{Randles2015a}). The relationship between these two results, Theorem 5.3 of \cite{Randles2015a} and Proposition \ref{prop:CCEstimates}, parallels the relationship between Gaussian off-diagonal estimates for random walks and the analogous off-diagonal estimates enjoyed by the classical heat kernel \cite{Hebisch1993}.\\
\noindent Let us now show that the estimates \eqref{eq:CCDerivativeEstimate} and \eqref{eq:CCEstimate} recapture the well-known estimates of the theory of parabolic equations and systems in $\mathbb{R}^d$ -- a theory in which the Laplacian operator $\Delta=\sum_{l=1}^d\partial_{x_l}^2$ and its integer powers play a central role.
To place things into the context of this article, let us observe that, for each positive integer $m$, the partial differential operator $(-\Delta)^m$ is a positive-homogeneous operator on $\mathbb{R}^d$ with symbol $P(\xi)=|\xi|^{2m}$; here, we identify $\mathbb{R}^d$ as its own dual equipped with the dot product and Euclidean norm $|\cdot|$. Indeed, one easily observes that $P=|\cdot|^{2m}$ is a positive-definite polynomial and $E=(2m)^{-1}I\in\Exp((-\Delta)^m)$ where $I\in\mbox{Gl}(\mathbb{R}^d)$ is the identity. Consequently, the homogeneous order of $(-\Delta)^m$ is $d/2m=(2m)^{-1}\tr(I)$ and the Legendre-Fenchel transform of $R=\Re P=|\cdot|^{2m}$ is easily computed to be $R^{\#}(x)=C_m|x|^{2m/(2m-1)}$ where $C_m=(2m)^{-1/(2m-1)}-(2m)^{-2m/(2m-1)}>0$. Hence, \eqref{eq:CCEstimate} is the well-known estimate
\begin{equation*}
\left|K_{(-\Delta)^m}^t(x-y)\right|\leq \frac{C_0}{t^{d/2m}}\exp\left(-M\frac{|x-y|^{2m/(2m-1)}}{t^{1/(2m-1)}}\right)
\end{equation*}
for $x,y\in\mathbb{R}^d$ and $t>0$; this so-called off-diagonal estimate is ubiquitous in the theory of ``higher-order" elliptic and parabolic equations \cite{Friedman1964, Eidelman1969, Robinson1991a, Davies1997}. To write the derivative estimate \eqref{eq:CCDerivativeEstimate} in this context, we first observe that the basis given by Proposition \ref{prop:OperatorRepresentation} can be taken to be the standard Euclidean basis, $\mathbf{e}=\{e_1,e_2,\dots,e_d\}$ and further, $\mathbf{m}=(m,m,\dots,m)$ is the (isotropic) weight given by the proposition.
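\noindent The constant $C_m$ above can be checked numerically in one dimension; the sketch below (our own illustration in plain Python, with hypothetical helper names, using a simple grid search) compares a discretized supremum defining $R^{\#}$ against the closed form.

```python
# Numerical sanity check (our own illustration) of the 1-dimensional
# Legendre-Fenchel transform of R(xi) = xi^(2m):
#   R#(x) = sup_xi (x*xi - xi^(2m)) = C_m * |x|^(2m/(2m-1)),
#   C_m = (2m)^(-1/(2m-1)) - (2m)^(-2m/(2m-1)).

def legendre_numeric(x, m, xi_max=3.0, steps=200000):
    """Discretized sup over xi in [-xi_max, xi_max] of x*xi - xi^(2m)."""
    best = float("-inf")
    for i in range(steps + 1):
        xi = -xi_max + 2.0 * xi_max * i / steps
        best = max(best, x * xi - xi ** (2 * m))
    return best

def legendre_closed_form(x, m):
    """Closed form C_m * |x|^(2m/(2m-1)) with the constant given above."""
    n = 2 * m
    C = n ** (-1.0 / (n - 1)) - n ** (-n / (n - 1.0))
    return C * abs(x) ** (n / (n - 1.0))

# Compare the two at m = 2, i.e., the 1-d biharmonic case (-Delta)^2:
print(legendre_numeric(1.5, 2), legendre_closed_form(1.5, 2))
```

For $m=1$ this recovers the classical transform of $\xi^2$, namely $x^2/4$.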
Writing $D^{\beta}=D^{\beta}_{\mathbf{e}}=(i\partial_{x_1})^{\beta_1}(i\partial_{x_2})^{\beta_2}\cdots(i\partial_{x_d})^{\beta_d}$ and $|\beta|=\beta_1+\beta_2+ \cdots + \beta_d$ for each multi-index $\beta$, \eqref{eq:CCDerivativeEstimate} takes the form
\begin{equation*}
\left|\partial_t^kD^{\beta}K_{(-\Delta)^m}^t(x-y)\right|\leq \frac{C_0}{t^{(d+|\beta|)/2m+k}}\exp\left(-M\frac{|x-y|^{2m/(2m-1)}}{t^{1/(2m-1)}}\right)
\end{equation*}
for $x,y\in\mathbb{R}^d$ and $t>0$, cf. \cite[Property 4, p. 93]{Eidelman1969}.\\
\noindent The appearance of the $1$-dimensional Legendre-Fenchel transform in heat kernel estimates was previously recognized and exploited in \cite{Barbatis1996} and \cite{Blunck2005} in the context of elliptic operators. Due to the isotropic nature of elliptic operators, the $1$-dimensional transform is sufficient to capture the inherent isotropic decay of corresponding heat kernels. Beyond the elliptic theory, the appearance of the full $d$-dimensional Legendre-Fenchel transform is remarkable because it sharply captures the general anisotropic decay of $K_{\Lambda}$. Consider, for instance, the particularly simple positive-homogeneous operator $\Lambda=-\partial_{x_1}^6+\partial_{x_2}^8$ on $\mathbb{R}^2$ with symbol $P(\xi_1,\xi_2)=\xi_1^6+\xi_2^8$. It is easily checked that the operator $E$ with matrix representation $\diag(1/6,1/8)$, in the standard Euclidean basis, is a member of $\Exp(\Lambda)$ and so the homogeneous order of $\Lambda$ is $\mu_{\Lambda}=\tr(\diag(1/6,1/8))=7/24$. Here we can compute the Legendre-Fenchel transform of $R=\Re P=P$ directly to obtain $R^{\#}(x_1,x_2)=c_1|x_1|^{6/5}+c_2|x_2|^{8/7}$ for $(x_1,x_2)\in\mathbb{R}^2$ where $c_1$ and $c_2$ are positive constants.
In this case, Proposition \ref{prop:CCEstimates} gives positive constants $C_0$ and $M$ for which
\begin{equation}\label{eq:SeparableExample}
|K_{\Lambda}^t(x_1-y_1,x_2-y_2)|\leq\frac{C_0}{t^{7/24}}\exp\left(- \left(M_1\frac{|x_1-y_1|^{6/5}}{t^{1/5}}+M_2\frac{|x_2-y_2|^{8/7}}{t^{1/7}}\right)\right)
\end{equation}
for $(x_1,x_2),(y_1,y_2)\in\mathbb{R}^2$ and $t>0$ where $M_1=c_1M$ and $M_2=c_2M$. We note however that $\Lambda$ is ``separable'' and so we can write $K_{\Lambda}^t(x_1,x_2)=K_{(-\Delta)^3}^t(x_1)K_{(-\Delta)^4}^t(x_2)$ where $\Delta$ is the $1$-dimensional Laplacian operator. In view of Theorem 8 of \cite{Barbatis1996} and its subsequent remark, the estimate \eqref{eq:SeparableExample} is seen to be sharp (modulo the values of $M_1$, $M_2$ and $C_0$). To further illustrate the proposition for a less simple positive-homogeneous operator, we consider the operator $\Lambda$ appearing in Example \ref{ex:intro3}. In this case,
\begin{equation*}
R(\xi_1,\xi_2)=P(\xi_1,\xi_2)=\frac{1}{8}(\xi_1+\xi_2)^2+\frac{23}{384}(\xi_1-\xi_2)^4
\end{equation*}
and one can verify directly that $E\in\mbox{End}(\mathbb{R}^2)$, with matrix representation
\begin{equation*}
E_{\mathbf{e}}=
\begin{pmatrix}
3/8 & 1/8\\
1/8 & 3/8
\end{pmatrix}
\end{equation*}
in the standard Euclidean basis, is a member of $\Exp(\Lambda)$. From this, we immediately obtain $\mu_{\Lambda}=\tr(E)=3/4$ and one can directly compute
\begin{equation*}
R^{\#}(x_1,x_2)=c_1|x_1+x_2|^2+c_2|x_1-x_2|^{4/3}
\end{equation*}
for $(x_1,x_2)\in\mathbb{R}^2$ where $c_1$ and $c_2$ are positive constants. An appeal to Proposition \ref{prop:CCEstimates} gives positive constants $C_0$ and $M$ for which
\begin{equation*}
|K^t_{\Lambda}(x_1-y_1,x_2-y_2)|\leq \frac{C_0}{t^{3/4}}\exp\left(-\left(M_1\frac{|(x_1-y_1)+(x_2-y_2)|^2}{t}+M_2\frac{|(x_1-y_1)-(x_2-y_2)|^{4/3}}{t^{1/3}}\right)\right)
\end{equation*}
for $(x_1,x_2),(y_1,y_2)\in\mathbb{R}^2$ and $t>0$ where $M_1=c_1M$ and $M_2=c_2M$.
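\noindent The membership $E\in\Exp(\Lambda)$ claimed above amounts to the symbol identity $P(t^{E}\xi)=tP(\xi)$ (here $E^{*}=E$ since $E$ is symmetric), which is easy to confirm numerically; the sketch below, our own illustration with hypothetical helper names, uses the eigen-decomposition of $E$.

```python
# Numerical check (our own illustration) that E = [[3/8,1/8],[1/8,3/8]]
# satisfies P(t^E xi) = t * P(xi) for
# P(xi) = (1/8)(xi1+xi2)^2 + (23/384)(xi1-xi2)^4.
# E is symmetric with eigenpairs (1/2, (1,1)) and (1/4, (1,-1)), so t^E
# scales the (1,1)-component by t^(1/2) and the (1,-1)-component by t^(1/4).

def P(xi1, xi2):
    return (xi1 + xi2) ** 2 / 8.0 + 23.0 * (xi1 - xi2) ** 4 / 384.0

def t_pow_E(t, xi1, xi2):
    s, d = xi1 + xi2, xi1 - xi2              # eigen-coordinates of E
    s, d = t ** 0.5 * s, t ** 0.25 * d       # scale by t^(1/2) and t^(1/4)
    return (s + d) / 2.0, (s - d) / 2.0      # back to standard coordinates

trace_E = 3.0 / 8.0 + 3.0 / 8.0              # = 3/4, the homogeneous order

t, xi = 5.0, (0.7, -1.3)
print(P(*t_pow_E(t, *xi)), t * P(*xi), trace_E)
```

The same two-line check applies to any candidate exponent once its eigenstructure is known.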
Furthermore, $\mathbf{m}=(1,2)\in\mathbb{N}_+^2$ and the basis $\mathbf{v}=\{v_1,v_2\}$ of $\mathbb{R}^2$ given in the discussion surrounding \eqref{eq:DiagonalizedExample} are precisely those guaranteed by Proposition \ref{prop:OperatorRepresentation}. Appealing to the full strength of Proposition \ref{prop:CCEstimates}, we obtain positive constants $C_0$, $M$ and, for each multi-index $\beta$, a positive constant $C_\beta$ such that, for each $k\in\mathbb{N}$,
\begin{eqnarray*}
\lefteqn{\left|\partial_t^kD_{\mathbf{v}}^\beta K_{\Lambda}^t(x_1-y_1,x_2-y_2)\right|}\\
&\leq&\frac{C_\beta C_0^k k!}{t^{3/4+k+|\beta:2\mathbf{m}|}}\exp\left(-\left(M_1\frac{|(x_1-y_1)+(x_2-y_2)|^2}{t}+M_2\frac{|(x_1-y_1)-(x_2-y_2)|^{4/3}}{t^{1/3}}\right)\right)
\end{eqnarray*}
for $(x_1,x_2),(y_1,y_2)\in\mathbb{R}^2$ and $t>0$ where $M_1=c_1M$ and $M_2=c_2M$.\\
\noindent In the context of homogeneous groups, the off-diagonal behavior for the heat kernel of a positive Rockland operator (a positive self-adjoint operator which is homogeneous with respect to the fixed dilation structure) has been studied in \cite{Hebisch1989,Dziubanski1989,Auscher1994} (see also \cite{Artino1995}). Given a positive Rockland operator $\Lambda$ on a homogeneous group $G$, the best known estimate for the heat kernel $K_{\Lambda}$, due to Auscher, ter Elst and Robinson, is of the form
\begin{equation}\label{eq:HebischEst}
|K_{\Lambda}^t(h^{-1}g)|\leq\frac{C_0}{t^{\mu_{\Lambda}}}\exp\left(-M\left(\frac{\|h^{-1}g\|^{2m}}{t}\right)^{1/(2m-1)}\right)
\end{equation}
where $\|\cdot\|$ is a homogeneous norm on $G$ (consistent with $\Lambda$) and $2m$ is the highest order derivative appearing in $\Lambda$. In the context of $\mathbb{R}^d$, given a symmetric and positive-homogeneous operator $\Lambda$ with symbol $P$, the structure $G_D=(\mathbb{R}^d,\{\delta_t^D\})$ for $D=2mE$ where $E\in\Exp(\Lambda)$ is a homogeneous group on which $\Lambda$ becomes a positive Rockland operator.
On $G_D$, it is quickly verified that $\|\cdot\|=R(\cdot)^{1/2m}$ is a homogeneous norm (consistent with $\Lambda$) and so the above estimate is given in terms of $R(\cdot)^{1/(2m-1)}$ which is, in general, dominated by the Legendre-Fenchel transform of $R$. To see this, we need not look further than our previous and simple example in which $\Lambda= -\partial_{x_1}^6+ \partial_{x_2}^8$. Here $2m=8$ and so $R(x_1,x_2)^{1/(2m-1)}=(|x_1|^6+|x_2|^8)^{1/7}$. In view of \eqref{eq:SeparableExample}, the estimate \eqref{eq:HebischEst} gives the correct decay along the $x_2$-coordinate axis; however, the bounds decay at markedly different rates along the $x_1$-coordinate axis. This illustrates that the estimate \eqref{eq:HebischEst} is suboptimal, at least in the context of $\mathbb{R}^d$, and thus leads to the natural question: For positive-homogeneous operators on a general homogeneous group $G$, what is to replace the Legendre-Fenchel transform in heat kernel estimates?\\ \noindent Returning to the general picture, let $\Lambda$ be a positive-homogeneous operator on $\mathbb{V}$ with symbol $P$ and homogeneous order $\mu_{\Lambda}$. To highlight some remarkable properties about the estimates \eqref{eq:CCDerivativeEstimate} and \eqref{eq:CCEstimate} in this general setting, the following proposition concerning $R^{\#}$ is useful; for a proof, see Section 8.3 of \cite{Randles2015a}. \begin{proposition}\label{prop:LegendreTransformProperties} Let $\Lambda$ be a positive-homogeneous operator with symbol $P$ and let $R^{\#}$ be the Legendre-Fenchel transform of $R=\Re P$. Then, for any $E\in\Exp(\Lambda)$, $I-E\in\Exp(R^{\#})$. Moreover $R^{\#}$ is continuous, positive-definite in the sense that $R^{\#}(x)\geq 0$ and $R^{\#}(x)=0$ only when $x=0$. 
Further, $R^{\#}$ grows superlinearly in the sense that, for any norm $|\cdot |$ on $\mathbb{V}$, \begin{equation*} \lim_{x\to\infty}\frac{|x|}{R^{\#}(x)}=0; \end{equation*} in particular, $R^{\#}(x)\rightarrow\infty$ as $x\rightarrow\infty$. \end{proposition} \noindent Let us first note that, in view of the proposition, we can easily rewrite \eqref{eq:CCEstimate}, for any $E\in \Exp(\Lambda)$, as \begin{equation*} \left| K_{\Lambda}^t(x-y)\right|\leq \frac{C_0}{t^{\mu_{\Lambda}}}\exp\left(-MR^{\#}\left(t^{-E}(x-y)\right)\right) \end{equation*} for $x,y\in\mathbb{V}$ and $t>0$; the analogous rewriting is true for \eqref{eq:CCDerivativeEstimate}. The fact that $R^{\#}$ is positive-definite and grows superlinearly ensures that the convolution operator $e^{-t\Lambda}$ defined by \eqref{eq:ConvolutionSemigroupDefinition} for $t>0$ is a bounded operator from $L^p$ to $L^q$ for any $1\leq p,q\leq \infty$. Of course, we already knew this because $K_{\Lambda}^t$ is a Schwartz function; however, when replacing $\Lambda$ with a variable-coefficient operator $H$, as we will do in the sections to follow, the validity of the estimate \eqref{eq:CCEstimate} for the kernel of the semigroup $\{e^{-tH}\}$ initially defined on $L^2$, guarantees that the semigroup extends to a strongly continuous semigroup $\{e^{-tH_p}\}$ on $L^p(\mathbb{R}^d)$ for all $1\leq p\leq \infty$ and, what's more, the respective infinitesimal generators $-H_p$ have spectra independent of $p$ \cite{Davies1995}. Further, the estimate \eqref{eq:CCEstimate} is key to establishing the boundedness of the Riesz transform, it is connected to the resolution of Kato's square root problem and it provides the appropriate starting point for uniqueness classes of solutions to $\partial_t+H=0$ \cite{Auscher2001,Ouhabaz2009}. 
With this motivation in mind, following some background in Section \ref{sec:Holder}, we introduce a class of variable-coefficient operators in Section \ref{sec:UniformlySemiElliptic} called $(2\mathbf{m,v})$-positive-semi-elliptic operators, each such operator $H$ comparable to a fixed positive-homogeneous operator. In Section \ref{sec:FundamentalSolution}, under the assumption that $H$ has H\"{o}lder continuous coefficients and this notion of comparability is uniform, we construct a fundamental solution to the heat equation $\partial_t+H=0$ and show the essential role played by the Legendre-Fenchel transform in this construction. As mentioned previously, in a forthcoming work we will study the semigroup $\{e^{-tH}\}$ where $H$ is a divergence-form operator, which is comparable to a fixed positive-homogeneous operator, whose coefficients are at worst measurable. As the Legendre-Fenchel transform appears here by a complex change of variables followed by a minimization argument, in the measurable coefficient setting it appears quite naturally by an application of the so-called Davies' method, suitably adapted to the positive-homogeneous setting. \section{Contracting groups, H\"{o}lder continuity and the Legendre-Fenchel transform}\label{sec:Holder} \noindent In this section, we provide the necessary background on one-parameter contracting groups, anisotropic H\"{o}lder continuity, and the Legendre-Fenchel transform and its interplay with the two previous notions. \subsection{One-parameter contracting groups} \noindent In what follows, $W$ is a $d$-dimensional real vector space with a norm $|\cdot|$; the corresponding operator norm on $\mbox{Gl}(W)$ is denoted by $\|\cdot\|$. Of course, since everything is finite-dimensional, the usual topologies on $W$ and $\mbox{Gl}(W)$ are insensitive to the specific choice of norms. \begin{definition} Let $\{T_t\}_{t>0}\subseteq \mbox{Gl}(W)$ be a continuous one-parameter group. 
$\{T_t\}$ is said to be contracting if \begin{equation*} \lim_{t\rightarrow 0}\|T_t\|=0. \end{equation*} \end{definition} \noindent We easily observe that, for any diagonalizable $E\in\mbox{End}(W)$ with strictly positive spectrum, the corresponding one-parameter group $\{t^E\}_{t>0}$ is contracting. Indeed, if there exists a basis $\mathbf{w}=\{w_1,w_2,\dots,w_d\}$ of $W$ and a collection of positive numbers $\lambda_1,\lambda_2,\dots,\lambda_d$ for which $Ew_k=\lambda_kw_k$ for $k=1,2,\dots,d$, then the one parameter group $\{t^E\}_{t>0}$ has $t^{E}w_k=t^{\lambda_k}w_k$ for $k=1,2,\dots,d$ and $t>0$. It then follows immediately that $\{t^E\}$ is contracting. \begin{proposition}\label{prop:ComparePoly} Let $Q$ and $R$ be continuous real-valued functions on $W$. If $R(w)>0$ for all $w\neq 0$ and there exists $E\in\Exp(Q)\cap\Exp(R)$ for which $\{t^E\}$ is contracting, then, for some positive constant $C$, $Q(w)\leq C R(w)$ for all $w\in W$. If additionally $Q(w)>0$ for all $w\neq 0$, then $Q\asymp R$. \end{proposition} \begin{proof} Let $S$ denote the unit sphere in $W$ and observe that \begin{equation*} \sup_{w\in S}\frac{Q(w)}{R(w)}=:C<\infty \end{equation*} because $Q$ and $R$ are continuous and $R$ is non-zero on $S$. Now, for any non-zero $w\in W$, the fact that $t^E$ is contracting implies that $t^Ew\in S$ for some $t>0$ by virtue of the intermediate value theorem. Therefore, $Q(w)=Q(t^Ew)/t\leq CR(t^E w)/t=CR(w)$. In view of the continuity of $Q$ and $R$, this inequality must hold for all $w\in W$. When additionally $Q(w)>0$ for all non-zero $w$, the conclusion that $Q\asymp R$ is obtained by reversing the roles of $Q$ and $R$ in the preceding argument. \end{proof} \begin{corollary}\label{cor:MovingConstants} Let $\Lambda$ be a positive-homogeneous operator on $\mathbb{V}$ with symbol $P$ and let $R^{\#}$ be the Legendre-Fenchel transform of $R=\Re P$. Then, for any positive constant $M$, $R^{\#}\asymp (MR)^{\#}$. 
\end{corollary}
\begin{proof}
By virtue of Proposition \ref{prop:OperatorRepresentation}, let $\mathbf{m}\in\mathbb{N}_+^d$ and let $\mathbf{v}$ be a basis for $\mathbb{V}$ for which $E_{\mathbf{v}}^{2\mathbf{m}}\in \Exp(\Lambda)$. In view of Proposition \ref{prop:LegendreTransformProperties}, $R^{\#}$ and $(MR)^{\#}$ are both continuous, positive-definite and have $I-E_{\mathbf{v}}^{2\mathbf{m}}\in \Exp(R^{\#})\cap \Exp((MR)^{\#})$. In view of \eqref{eq:DefofE}, it is easily verified that $I-E_{\mathbf{v}}^{2\mathbf{m}}=E_\mathbf{v}^\omega$ where
\begin{equation}\label{eq:DefOfOmega}
\omega:=\left(\frac{2m_1}{2m_1-1},\frac{2m_2}{2m_2-1},\dots,\frac{2m_d}{2m_d-1}\right)\in \mathbb{R}_+^d
\end{equation}
and so it follows that $\{t^{E_{\mathbf{v}}^{\omega}}\}$ is contracting. The corollary now follows directly from Proposition \ref{prop:ComparePoly}.
\end{proof}
\begin{lemma}\label{lem:Scaling}
Let $P$ be a positive-homogeneous polynomial on $W$ and let $\mathbf{n}=2\mathbf{m}\in\mathbb{N}_+^d$ and $\mathbf{w}$ be a basis for $W$ for which the conclusion of Lemma \ref{lem:PolynomialRepresentation} holds. Let $R=\Re P$ and let $\beta$ and $\gamma$ be multi-indices such that $\gamma\leq\beta$ (in the standard partial ordering of multi-indices); we shall assume the notation of Lemma \ref{lem:PolynomialRepresentation}.
\begin{enumerate}
\item\label{item:Scaling1} For any $n\in\mathbb{N}_+$ such that $|\beta:\mathbf{m}|\leq 2n$, there exist positive constants $M$ and $M'$ for which
\begin{equation*}
|\xi^{\gamma}\nu^{\beta-\gamma}|\leq M(R(\xi)+R(\nu))^n+M'
\end{equation*}
for all $\xi,\nu\in W$.
\item\label{item:Scaling2} If $|\beta:\mathbf{m}|=2$, there exist positive constants $M$ and $M'$ for which
\begin{equation*}
|\xi^{\gamma}\nu^{\beta-\gamma}|\leq M R(\xi)+M'R(\nu)
\end{equation*}
for all $\nu,\xi\in W$.
\item\label{item:Scaling3} If $|\beta:\mathbf{m}|=2$ and $\beta>\gamma$, then for every $\epsilon>0$ there exists a positive constant $M$ for which
\begin{equation*}
|\xi^{\gamma}\nu^{\beta-\gamma}|\leq \epsilon R(\xi)+MR(\nu)
\end{equation*}
for all $\nu,\xi\in W$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assuming the notation of Lemma \ref{lem:PolynomialRepresentation}, let $E=E_{\mathbf{w}}^{2\mathbf{m}}\in\mbox{End}(W)$ and consider the contracting group $\{t^{E\oplus E}\}=\{t^{E}\oplus t^{E}\}$ on $W\oplus W$. Because $R$ is a positive-definite polynomial, it immediately follows that $W\oplus W\ni (\xi,\nu)\mapsto R(\xi)+R(\nu)$ is positive-definite. Let $|\cdot|$ be a norm on $W\oplus W$ and respectively denote by $B$ and $S$ the corresponding unit ball and unit sphere in this norm. To see Item \ref{item:Scaling1}, first observe that
\begin{equation*}
\sup_{(\xi,\nu)\in S}\frac{|\xi^{\gamma}\nu^{\beta-\gamma}|}{(R(\xi)+R(\nu))^n}=:M<\infty.
\end{equation*}
Now, for any $(\xi,\nu)\in W\oplus W\setminus B$, because $\{t^{E\oplus E}\}$ is contracting, it follows from the intermediate value theorem that, for some $t\geq 1$, $t^{-(E\oplus E)}(\xi,\nu)=(t^{-E}\xi,t^{-E}\nu)\in S$. Correspondingly,
\begin{eqnarray*}
|\xi^{\gamma}\nu^{\beta-\gamma}|&=&t^{|\beta:2\mathbf{m}|}|(t^{-E}\xi)^{\gamma}(t^{-E}\nu)^{\beta-\gamma}|\\
&\leq & t^{|\beta:2\mathbf{m}|}M(R(t^{-E}\xi)+R(t^{-E}\nu))^n\\
&\leq &t^{|\beta:\mathbf{m}|/2-n}M(R(\xi)+R(\nu))^n\\
&\leq &M(R(\xi)+R(\nu))^n
\end{eqnarray*}
because $|\beta:\mathbf{m}|/2\leq n$. One obtains the constant $M'$ and hence the desired inequality by simply noting that $|\xi^{\gamma}\nu^{\beta-\gamma}|$ is bounded for all $(\xi,\nu)\in B$. For Item \ref{item:Scaling2}, we use analogous reasoning to obtain a positive constant $M$ for which $|\xi^{\gamma}\nu^{\beta-\gamma}|\leq M (R(\xi)+R(\nu))$ for all $(\xi,\nu)\in S$.
Now, for any non-zero $(\xi,\nu)\in W\oplus W $, the intermediate value theorem gives $t>0$ for which $t^{E\oplus E}(\xi,\nu)=(t^{E}\xi,t^{E}\nu)\in S$ and hence \begin{equation*} |\xi^{\gamma}\nu^{\beta-\gamma}|\leq t^{-|\beta:2\mathbf{m}|}M (R(t^{E}\xi)+R(t^E\nu))=M(R(\xi)+R(\nu)) \end{equation*} where we have used the fact that $|\beta:2\mathbf{m}|=|\beta:\mathbf{m}|/2=1$ and that $E\in \Exp(R)$. As this inequality must also trivially hold at the origin, we can conclude that it holds for all $\xi,\nu\in W$, as desired. Finally, we prove Item \ref{item:Scaling3}. By virtue of Item \ref{item:Scaling2}, for any $\xi,\nu\in W$ and $t>0$, \begin{eqnarray*} \lefteqn{\hspace{-1.5cm}|\xi^{\gamma}\nu^{\beta-\gamma}|=|(t^Et^{-E}\xi)^{\gamma}\nu^{\beta-\gamma}|=t^{|\gamma:2\mathbf{m}|}|(t^{-E}\xi)^{\gamma}\nu^{\beta-\gamma}|}\\ &\leq& t^{|\gamma:2\mathbf{m}|}\left(MR(t^{-E}\xi)+M'R(\nu)\right)=Mt^{|\gamma:2\mathbf{m}|-1}R(\xi)+M't^{|\gamma:2\mathbf{m}|}R(\nu). \end{eqnarray*} Noting that $|\gamma:2\mathbf{m}|-1<0$ because $\gamma<\beta$, we can make the coefficient of $R(\xi)$ arbitrarily small by choosing $t$ sufficiently large and thereby obtaining the desired result. \end{proof} \subsection{Notions of regularity and H\"{o}lder continuity} Throughout the remainder of this article, $\mathbf{v}$ will denote a fixed basis for $\mathbb{V}$ and correspondingly we henceforth assume the notational conventions appearing in Proposition \ref{prop:OperatorRepresentation} and $\mathbf{n}=2\mathbf{m}$ is fixed. For $\alpha\in\mathbb{R}_+^d$, consider the homogeneous norm $|\cdot|_{\mathbf{v}}^{\alpha}$ defined by \begin{equation*} |x|_{\mathbf{v}}^{\alpha}=\sum_{i=1}^{d}|x_i|^{\alpha_i} \end{equation*} for $x\in\mathbb{V}$ where $\phi_{\mathbf{v}}(x)=(x_1,x_2,\dots,x_d)$. 
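\noindent The scaling behavior of this homogeneous norm under the group $\{t^{E_{\mathbf{v}}^{\alpha}}\}$, recorded next, is easy to check numerically; the sketch below is our own illustration, assuming the convention $E_{\mathbf{v}}^{\alpha}v_i=(1/\alpha_i)v_i$ used earlier for $E_{\mathbf{w}}^{\mathbf{n}}$.

```python
# Numerical sanity check (our own illustration) of the scaling identity
# |t^{E_v^alpha} x|_v^alpha = t * |x|_v^alpha, reading eq:DefofE as
# E_v^alpha v_i = (1/alpha_i) v_i, so that t^{E_v^alpha} scales the
# i-th v-coordinate by t^(1/alpha_i).

def hnorm(x, alpha):
    """The homogeneous norm |x|_v^alpha in v-coordinates."""
    return sum(abs(xi) ** ai for xi, ai in zip(x, alpha))

def dilate(t, x, alpha):
    """Apply t^{E_v^alpha}: scale the i-th coordinate by t^(1/alpha_i)."""
    return [t ** (1.0 / ai) * xi for xi, ai in zip(x, alpha)]

alpha = [2.0, 4.0 / 3.0]    # e.g., the weight omega for m = (1, 2)
x = [0.3, -1.7]
t = 9.0
print(hnorm(dilate(t, x, alpha), alpha), t * hnorm(x, alpha))
```

The identity holds exactly (up to floating-point rounding) since $(t^{1/\alpha_i}|x_i|)^{\alpha_i}=t|x_i|^{\alpha_i}$ term by term.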
As one can easily check, \begin{equation*} |t^{E_{\mathbf{v}}^\alpha} x|_{\mathbf{v}}^{\alpha}=t|x|_{\mathbf{v}}^{\alpha} \end{equation*} for all $t>0$ and $x\in\mathbb{V}$ where $E_{\mathbf{v}}^\alpha\in \mbox{Gl}(\mathbb{V})$ is defined by \eqref{eq:DefofE}. \begin{definition}\label{def:Consistent} Let $\mathbf{m}\in\mathbb{N}_+^d$. We say that $\alpha\in\mathbb{R}_+^d$ is consistent with $\mathbf{m}$ if \begin{equation}\label{eq:Consistent} E_{\mathbf{v}}^{\alpha}= a(I-E_{\mathbf{v}}^{2\mathbf{m}}) \end{equation} for some $a>0$. \end{definition} \noindent As one can check, $\alpha$ is consistent with $\mathbf{m}$ if and only if $\alpha=a^{-1}\omega$ where $\omega$ is defined by \eqref{eq:DefOfOmega}. \begin{definition} Let $\Omega\subseteq\Omega'\subseteq\mathbb{V}$ and let $f:\Omega'\rightarrow \mathbb{C}$. We say that $f$ is $\mathbf{v}$-H\"{o}lder continuous on $\Omega$ if for some $\alpha\in\mathbb{I}_+^d$ and positive constant $M$, \begin{equation}\label{eq:HolderCondition} |f(x)-f(y)|\leq M|x-y|_{\mathbf{v}}^{\alpha} \end{equation} for all $x,y\in\Omega$. In this case we will say that $\alpha$ is the $\mathbf{v}$-H\"{o}lder exponent of $f$. If $\Omega=\Omega'$ we will simply say that $f$ is $\mathbf{v}$-H\"{o}lder continuous with exponent $\alpha$. \end{definition} \noindent The following proposition essentially states that, for bounded functions, H\"{o}lder continuity is a local property; its proof is straightforward and is omitted. \begin{proposition} Let $\Omega\subseteq\mathbb{V}$ be open and non-empty. If $f$ is bounded and $\mathbf{v}$-H\"{o}lder continuous of order $\alpha\in\mathbb{I}_+^d$, then, for any $\beta<\alpha$, $f$ is also $\mathbf{v}$-H\"{o}lder continuous of order $\beta$. \end{proposition} \noindent In view of the proposition, we immediately obtain the following corollary. \begin{corollary}\label{cor:HolderConsistent} Let $\Omega\subseteq \mathbb{V}$ be open and non-empty and $\mathbf{m}\in\mathbb{N}_+^d$. 
If $f$ is bounded and $\mathbf{v}$-H\"{o}lder continuous on $\Omega$ of order $\beta\in\mathbb{I}_+^d$, there exists $\alpha\in\mathbb{I}_+^d$ which is consistent with $\mathbf{m}$ for which $f$ is also $\mathbf{v}$-H\"{o}lder continuous of order $\alpha$. \end{corollary} \begin{proof} The statement follows from the proposition by choosing any $\alpha$, consistent with $\mathbf{m}$, such that $\alpha\leq \beta$. \end{proof} \noindent The following definition captures the minimal regularity we will require of fundamental solutions to the heat equation. \begin{definition} Let $\mathbf{n}\in\mathbb{N}_+^d$, $\mathbf{v}$ be a basis of $\mathbb{V}$ and let $\mathcal{O}$ be a non-empty open subset of $[0,T]\times\mathbb{V}$. A function $u(t,x)$ is said to be $(\mathbf{n,v})$-regular on $\mathcal{O}$ if on $\mathcal{O}$ it is continuously differentiable in $t$ and has continuous (spatial) partial derivatives $D_{\mathbf{v}}^{\beta}u(t,x)$ for all multi-indices $\beta$ for which $|\beta:\mathbf{n}|\leq 1$. \end{definition} \subsection{The Legendre-Fenchel transform and its interplay with $\mathbf{v}$-H\"{o}lder continuity} Throughout this section, $R$ is the real part of the symbol $P$ of a positive-homogeneous operator $\Lambda$ on $\mathbb{V}$. We assume the notation of Proposition \ref{prop:LegendreTransformProperties} (and hence Proposition \ref{prop:OperatorRepresentation}) and write $E=E_{\mathbf{v}}^{2\mathbf{m}}$. Let us first record two important results which follow essentially from Proposition \ref{prop:LegendreTransformProperties}. \begin{corollary}\label{cor:LegendreCompareHomogeneousNorm} \begin{equation*} R^{\#}\asymp |\cdot|_{\mathbf{v}}^\omega. \end{equation*} where $\omega$ was defined in \eqref{eq:DefOfOmega}. \end{corollary} \begin{proof} In view of Propositions \ref{prop:OperatorRepresentation} and \ref{prop:LegendreTransformProperties}, $E_{\mathbf{v}}^\omega=I-E_{\mathbf{v}}^{2\mathbf{m}}\in \Exp(R^{\#})\cap\Exp(|\cdot|_{\mathbf{v}}^{\omega})$. 
After recalling that $\{t^{E_{\mathbf{v}}^{\omega}}\}$ is contracting, Proposition \ref{prop:ComparePoly} yields the desired result immediately. \end{proof} \noindent By virtue of Proposition \ref{prop:LegendreTransformProperties}, standard arguments immediately yield the following corollary. \begin{corollary}\label{cor:ExponentialofLegendreTransformIntegrable} For any $\epsilon>0$ and any polynomial $Q:\mathbb{V}\rightarrow\mathbb{C}$ (i.e., $Q$ is a polynomial in any coordinate system), we have \begin{equation*} Q(\cdot)e^{-\epsilon R^{\#}(\cdot)}\in L^{\infty}(\mathbb{V})\cap L^1(\mathbb{V}). \end{equation*} \end{corollary} \begin{lemma}\label{lem:PreLegendreSubscale} Let $\gamma=(2m_{\max}-1)^{-1}$. Then for any $T>0$, there exists $M>0$ such that \begin{equation*} R^{\#}(x)\leq Mt^{\gamma}R^{\#}(t^{-E}x) \end{equation*} for all $x\in\mathbb{V}$ and $0<t\leq T$. \end{lemma} \begin{proof} In view of Corollary \ref{cor:LegendreCompareHomogeneousNorm}, it suffices to prove the statement \begin{equation*} |t^Ex|_{\mathbf{v}}^{\omega}\leq Mt^{\gamma}|x|_{\mathbf{v}}^{\omega} \end{equation*} for all $x\in\mathbb{V}$ and $0<t\leq T$ where $M>0$ and $\omega$ is given by \eqref{eq:DefOfOmega}. But for any $0<t\leq T$ and $x\in\mathbb{V}$, \begin{equation*} |t^{E}x|_{\mathbf{v}}^{\omega}=\sum_{j=1}^d t^{1/(2m_j-1)}|x_j|^{\omega_j}\leq t^{\gamma}\sum_{j=1}^d T^{(1/(2m_j-1)-\gamma)}|x_j|^{\omega_j} \end{equation*} from which the result follows. \end{proof} \begin{lemma}\label{lem:LegendreSubscale} Let $\alpha\in\mathbb{I}_+^d$ be consistent with $\mathbf{m}$. Then there exist positive constants $\sigma$ and $\theta$ such that $0<\sigma<1$ and for any $T>0$ there exists $M>0$ such that \begin{equation*} |x|_{\mathbf{v}}^{\alpha}\leq Mt^{\sigma}(R^{\#}(t^{-E}x))^{\theta} \end{equation*} for all $x\in\mathbb{V}$ and $0<t\leq T$.
\end{lemma} \begin{proof} By an appeal to Corollary \ref{cor:LegendreCompareHomogeneousNorm} and Lemma \ref{lem:PreLegendreSubscale}, \begin{equation*} |x|_{\mathbf{v}}^{\omega}\leq Mt^{\gamma}R^{\#}(t^{-E}x) \end{equation*} for all $x\in\mathbb{V}$ and $0<t\leq T$. Since $\alpha$ is consistent with $\mathbf{m}$, we have $\alpha=a^{-1}\omega$ where $a$ is that of Definition \ref{def:Consistent}, and so the desired inequality follows by setting $\sigma=\gamma/a$ and $\theta=1/a$. Because $\alpha\in \mathbb{I}_+^d$, it is necessary that $a \geq 2m_{\min}/(2m_{\min}-1)$ whence $0<\sigma\leq (2m_{\min}-1)/(2m_{\min}(2m_{\max}-1))<1.$ \end{proof} \noindent The following corollary is an immediate application of Lemma \ref{lem:LegendreSubscale}. \begin{corollary}\label{cor:HolderLegendreEstimate} Let $f:\mathbb{V}\rightarrow\mathbb{C}$ be $\mathbf{v}$-H\"{o}lder continuous with exponent $\alpha\in\mathbb{I}_+^d$ and suppose that $\alpha$ is consistent with $\mathbf{m}$. Then there exist positive constants $\sigma$ and $\theta$ such that $0<\sigma<1$ and, for any $T>0$, there exists $M>0$ such that \begin{equation*} |f(x)-f(y)|\leq M t^{\sigma}(R^{\#}(t^{-E}(x-y)))^{\theta} \end{equation*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. \end{corollary} \section{On $(2\mathbf{m,v})$-positive-semi-elliptic operators}\label{sec:UniformlySemiElliptic} In this section, we introduce a class of variable-coefficient operators on $\mathbb{V}$ whose heat equations are studied in the next section. These operators, in view of Proposition \ref{prop:OperatorRepresentation}, generalize the class of positive-homogeneous operators.
Fix a basis $\mathbf{v}$ of $\mathbb{V}$, $\mathbf{m}\in\mathbb{N}_+^d$ and, in the notation of the previous section, consider a differential operator $H$ of the form \begin{eqnarray*} H=\sum_{|\beta:\mathbf{m}|\leq 2}a_{\beta}(x)D_{\mathbf{v}}^{\beta}&=&\sum_{|\beta:\mathbf{m}|=2}a_{\beta}(x)D_{\mathbf{v}}^{\beta}+\sum_{|\beta:\mathbf{m}|< 2}a_{\beta}(x)D_{\mathbf{v}}^{\beta}\\ &:=&H_p+H_l \end{eqnarray*} where the coefficients $a_{\beta}:\mathbb{V}\rightarrow \mathbb{C}$ are bounded functions. The symbol of $H$, $P:\mathbb{V}\times\mathbb{V}^*\rightarrow \mathbb{C}$, is defined by \begin{eqnarray*} P(y,\xi)=\sum_{|\beta:\mathbf{m}|\leq 2}a_{\beta}(y)\xi^{\beta} &=&\sum_{|\beta:\mathbf{m}|=2}a_{\beta}(y)\xi^{\beta}+\sum_{|\beta:\mathbf{m}|<2}a_{\beta}(y)\xi^{\beta}\\ &:=&P_p(y,\xi)+P_l(y,\xi). \end{eqnarray*} for $y\in\mathbb{V}$ and $\xi\in\mathbb{V}^*$. We shall call $H_p$ the principal part of $H$ and correspondingly, $P_p$ is its principal symbol. Let's also define $R:\mathbb{V}^*\rightarrow \mathbb{R}$ by \begin{equation}\label{eq:HSymbol} R(\xi)=\Re P_p(0,\xi) \end{equation} for $\xi\in\mathbb{V}^*$. At times, we will freeze the coefficients of $H$ and $H_p$ at a point $y\in\mathbb{V}$ and consider the constant-coefficient operators they define, namely $H(y)$ and $H_p(y)$ (defined in the obvious way). We note that, for each $y\in\mathbb{V}$, $H_p(y)$ is homogeneous with respect to the one-parameter group $\{\delta_t^E\}_{t>0}$ where $E=E_{\mathbf{v}}^{2\mathbf{m}}\in\mbox{Gl}(\mathbb{V})$ is defined by \eqref{eq:DefofE}. That is, $H_p$ is homogeneous with respect to the same one-parameter group of dilations at each point in space. This also allows us to uniquely define the \textit{homogeneous order of} $H$ by \begin{equation}\label{eq:HHomogeneousOrder} \mu_{H}=\tr E=(2m_1)^{-1}+(2m_2)^{-1}+\cdots+(2m_d)^{-1}. 
\end{equation} We remark that this is consistent with our definition of homogeneous-order for constant-coefficient operators and we remind the reader that this notion differs from the usual order of a partial differential operator (see the discussion surrounding \eqref{eq:HomogeneousOrderExplicit}). As in the constant-coefficient setting, $H_p(y)$ is not necessarily homogeneous with respect to a unique group of dilations, i.e., it is possible that $\Exp(H_p(y))$ contains members of $\mbox{Gl}(\mathbb{V})$ distinct from $E$. However, we shall henceforth only work with the endomorphism $E$, defined above, for worrying about this non-uniqueness of dilations does not aid our understanding nor will it sharpen our results. Let us further observe that, for each $y\in\mathbb{V}$, $P_p(y,\cdot)$ and $R$ are homogeneous with respect to $\{t^{E^*}\}_{t>0}$ where $E^{*}\in \mbox{Gl}(\mathbb{V}^*)$ is the adjoint of $E$. \begin{definition} The operator $H$ is called $(2\mathbf{m,v})$-positive-semi-elliptic if for all $y\in\mathbb{V}$, $\Re P_p(y,\cdot)$ is a positive-definite polynomial. $H$ is called uniformly $(2\mathbf{m,v})$-positive-semi-elliptic if it is $(2\mathbf{m,v})$-positive-semi-elliptic and there exists $\delta>0$ for which \begin{equation*} \Re P_p(y,\xi)\geq\delta R(\xi) \end{equation*} for all $y\in\mathbb{V}$ and $\xi\in\mathbb{V}^*$. When the context is clear, we will simply say that $H$ is positive-semi-elliptic and uniformly positive-semi-elliptic respectively. \end{definition} \noindent In light of the above definition, a positive-semi-elliptic operator $H$ is one whose frozen-coefficient principal part $H_p(y)$ is, at every point $y\in\mathbb{V}$, a constant-coefficient positive-homogeneous operator; moreover, these operators are all homogeneous with respect to the same one-parameter group of dilations on $\mathbb{V}$. A uniformly positive-semi-elliptic operator is one that is positive-semi-elliptic and uniformly comparable to a constant-coefficient positive-homogeneous operator, namely $H_p(0)$.
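\noindent To illustrate the definition, consider the following simple example (in which the coefficient $a$ is hypothetical): take $\mathbb{V}=\mathbb{R}^2$ with its standard basis $\mathbf{v}$, let $\mathbf{m}=(1,2)$ and consider \begin{equation*} H=D_{\mathbf{v}}^{(2,0)}+a(x)D_{\mathbf{v}}^{(0,4)} \end{equation*} where $a:\mathbb{R}^2\rightarrow\mathbb{R}$ is bounded with $\inf_y a(y)=\delta'>0$; note that $|(2,0):\mathbf{m}|=|(0,4):\mathbf{m}|=2$ so that $H=H_p$. Here $P_p(y,\xi)=\xi_1^2+a(y)\xi_2^4$ and $R(\xi)=\xi_1^2+a(0)\xi_2^4$, so $\Re P_p(y,\cdot)$ is positive-definite for each $y$ and \begin{equation*} \Re P_p(y,\xi)\geq \min\{1,\delta'/a(0)\}R(\xi) \end{equation*} for all $y\in\mathbb{R}^2$ and $\xi\in(\mathbb{R}^2)^*$; thus $H$ is uniformly $(2\mathbf{m,v})$-positive-semi-elliptic. Each frozen-coefficient operator $H_p(y)$ is a constant-coefficient positive-homogeneous operator, and all of them are homogeneous with respect to the same dilations $\{t^{E}\}_{t>0}$ where $E=E_{\mathbf{v}}^{2\mathbf{m}}$.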
In this way, positive-homogeneous operators take a central role in this theory. \begin{remark} In view of Proposition \ref{prop:OperatorRepresentation}, the definition of $R$ via \eqref{eq:HSymbol} agrees with that we have given for constant-coefficient positive-homogeneous operators. \end{remark} \begin{remark} For an $(2\mathbf{m,v})$-positive-semi-elliptic operator $H$, uniform semi-ellipticity can be formulated in terms of $\Re P_p(y_0,\cdot)$ for any $y_0\in\mathbb{V}$; such a notion is equivalent in view of Proposition \ref{prop:ComparePoly}. \end{remark} \section{The heat equation}\label{sec:FundamentalSolution} \noindent For a uniformly positive-semi-elliptic operator $H$, we are interested in constructing a fundamental solution to the heat equation, \begin{equation}\label{eq:HeatEquation} (\partial_t+H)u=0 \end{equation} on the cylinder $[0,T]\times \mathbb{V}$; here and throughout $T>0$ is arbitrary but fixed. By definition, a fundamental solution to \eqref{eq:HeatEquation} on $[0,T]\times\mathbb{V}$ is a function $Z:(0,T]\times\mathbb{V}\times\mathbb{V}\rightarrow \mathbb{C}$ satisfying the following two properties: \begin{enumerate} \item For each $y\in\mathbb{V}$, $Z(\cdot,\cdot,y)$ is $(2\mathbf{m,v})$-regular on $(0,T)\times\mathbb{V}$ and satisfies \eqref{eq:HeatEquation}. \item For each $f\in C_b(\mathbb{V})$, \begin{equation*} \lim_{t\downarrow 0}\int_{\mathbb{V}}Z(t,x,y)f(y)dy=f(x) \end{equation*} for all $x\in\mathbb{V}$. \end{enumerate} \noindent Given a fundamental solution $Z$ to \eqref{eq:HeatEquation}, one can easily solve the Cauchy problem: Given $f\in C_b(\mathbb{V})$, find $u(t,x)$ satisfying \begin{equation*} \begin{cases} (\partial_t+H)u=0 & \mbox{on}\hspace{.25cm} (0,T)\times\mathbb{V}\\ u(0,x)=f(x) & \mbox{for}\hspace{.25cm} x\in\mathbb{V}. 
\end{cases} \end{equation*} This is, of course, solved by putting \begin{equation*} u(t,x)=\int_{\mathbb{V}}Z(t,x,y)f(y)\,dy \end{equation*} for $x\in\mathbb{V}$ and $0<t\leq T$ and interpreting $u(0,x)$ as that defined by the limit of $u(t,x)$ as $t\downarrow 0$. \noindent The remainder of this paper is essentially dedicated to establishing the following result: \begin{theorem}\label{thm:FundamentalSolution} Let $H$ be uniformly $(2\mathbf{m,v})$-positive-semi-elliptic with bounded $\mathbf{v}$-H\"{o}lder continuous coefficients. Let $R$ and $\mu_H$ be defined by \eqref{eq:HSymbol} and \eqref{eq:HHomogeneousOrder} respectively and denote by $R^{\#}$ the Legendre-Fenchel transform of $R$. Then, for any $T>0$, there exists a fundamental solution $Z:(0,T]\times\mathbb{V}\times\mathbb{V}\rightarrow \mathbb{C}$ to \eqref{eq:HeatEquation} on $[0,T]\times\mathbb{V}$ such that, for some positive constants $C$ and $M$, \begin{equation}\label{eq:FundamentalSolution} |Z(t,x,y)|\leq \frac{C}{t^{\mu_{H}}}\exp\left(-tMR^{\#}\left(\frac{x-y}{t}\right)\right) \end{equation} for $x,y\in\mathbb{V}$ and $0<t\leq T$. \end{theorem} \noindent We remark that, by definition, the fundamental solution $Z$ given by Theorem \ref{thm:FundamentalSolution} is $(2\mathbf{m,v})$-regular. Thus $Z$ is necessarily continuously differentiable in $t$ and has continuous spatial derivatives of all orders $\beta$ such that $|\beta:\mathbf{m}|\leq 2$.\\ \noindent As we previously mentioned, the result above is implied by the work of S. D. Eidelman for $2\vec{b}$-parabolic systems on $\mathbb{R}^d$ (where $\vec{b}=\mathbf{m}$) \cite{Eidelman1960,Eidelman2004}. Eidelman's systems, of the form \eqref{eq:2b-Parabolic}, are slightly more general than we have considered here, for their coefficients are also allowed to depend on $t$ (but in a uniformly H\"{o}lder continuous way). 
Admitting this $t$-dependence is a relatively straightforward matter and, for simplicity of presentation, we have not included it (see Remark \ref{rmk:Gp}). In this slightly more general situation, stated in $\mathbb{R}^d$ and in which $\mathbf{v}=\mathbf{e}$ is the standard Euclidean basis, Theorem 2.2 (p. 79) of \cite{Eidelman2004} guarantees the existence of a fundamental solution $Z(t,x,y)$ to \eqref{eq:2b-Parabolic}, which has the same regularity as that appearing in Theorem \ref{thm:FundamentalSolution} and satisfies \begin{equation}\label{eq:EidelmansEstimate} |Z(t,x,y)|\leq \frac{C}{t^{1/(2m_1)+1/(2m_2)+\cdots+1/(2m_d)}}\exp\left(-M\sum_{k=1}^d\frac{|x_k-y_k|^{2m_k/(2m_k-1)}}{t^{1/(2m_k-1)}}\right) \end{equation} for $x,y\in\mathbb{R}^d$ and $0<t\leq T$ where $C$ and $M$ are positive constants. By an appeal to Corollary \ref{cor:LegendreCompareHomogeneousNorm}, we have $R^{\#}\asymp |\cdot|_{\mathbf{v}}^{\omega}$ and from this we see that the estimates \eqref{eq:FundamentalSolution} and \eqref{eq:EidelmansEstimate} are comparable.\\ \noindent In view of Corollary \ref{cor:HolderConsistent}, the hypothesis of Theorem \ref{thm:FundamentalSolution} concerning the coefficients of $H$ immediately implies the following a priori stronger condition: \begin{hypothesis}\label{hypoth:HolderCoefficients} There exists $\alpha\in\mathbb{I}_+^d$ which is consistent with $\mathbf{m}$ and for which the coefficients of $H$ are bounded and $\mathbf{v}$-H\"{o}lder continuous on $\mathbb{V}$ of order $\alpha$. \end{hypothesis} \subsection{Levi's Method} \noindent In this subsection, we construct a fundamental solution to \eqref{eq:HeatEquation} under only the assumption that $H$, a uniformly $(2\mathbf{m,v})$-positive-semi-elliptic operator, satisfies Hypothesis \ref{hypoth:HolderCoefficients}. Henceforth, all statements include Hypothesis \ref{hypoth:HolderCoefficients} without explicit mention. We follow the famous method of E. E.
Levi, cf.\ \cite{Levi1907}, as it was adapted for parabolic systems in \cite{Eidelman1969} and \cite{Friedman1964}. Although well-known, Levi's method is lengthy and tedious and we will break it into three steps. Let's motivate these steps by first discussing the heuristics of the method.\\ \noindent We start by considering the auxiliary equation \begin{equation}\label{eq:FrozenCoeffHeat} \big(\partial_t+\sum_{|\beta:\mathbf{m}|=2}a_{\beta}(y)D_{\mathbf{v}}^{\beta}\big)u=(\partial_t+H_p(y))u=0 \end{equation} where $y\in\mathbb{V}$ is treated as a parameter. This is the so-called frozen-coefficient heat equation. As one easily checks, for each $y\in\mathbb{V}$, \begin{equation*} G_p(t,x;y):=\int_{\mathbb{V}^*}e^{-i\xi(x)}e^{-tP_p(y,\xi)}d\xi\hspace{.5cm}(x\in\mathbb{V},t>0) \end{equation*} solves \eqref{eq:FrozenCoeffHeat}. By the uniform semi-ellipticity of $H$, it is clear that $G_p(t,\cdot;y)\in \mathcal{S}(\mathbb{V})$ for $t>0$ and $y\in\mathbb{V}$. As we shall see, more is true: $G_p$ is an approximate identity in the sense that \begin{equation*} \lim_{t\downarrow 0}\int_{\mathbb{V}}G_p(t,x-y;y)f(y)\,dy=f(x) \end{equation*} for all $f\in C_b(\mathbb{V})$. Thus, it is reasonable to seek a fundamental solution to \eqref{eq:HeatEquation} of the form \begin{eqnarray}\label{eq:FundSolution}\nonumber Z(t,x,y)&=&G_p(t,x-y;y)+\int_0^t\int_{\mathbb{V}}G_p(t-s,x-z;z)\phi(s,z,y)dzds\\ &=&G_p(t,x-y;y)+W(t,x,y) \end{eqnarray} where $\phi$ is to be chosen to ensure that the correction term $W$ is $(2\mathbf{m,v})$-regular, accounts for the fact that $G_p$ solves \eqref{eq:FrozenCoeffHeat} but not \eqref{eq:HeatEquation}, and is ``small enough'' as $t\rightarrow 0$ so that the approximate identity aspect of $Z$ is inherited directly from $G_p$.\\ \noindent Assuming for the moment that $W$ is sufficiently regular, let's apply the heat operator to \eqref{eq:FundSolution} with the goal of finding an appropriate $\phi$ to ensure that $Z$ is a solution to \eqref{eq:HeatEquation}.
Putting \begin{equation*} K(t,x,y)=-(\partial_t+H)G_p(t,x-y;y), \end{equation*} we have formally, \begin{eqnarray}\label{formaldifferentiation}\nonumber (\partial_t+H)Z(t,x,y)&=&-K(t,x,y)+(\partial_t+H)\int_{0}^t\int_{\mathbb{V}}G_p(t-s,x-z;z)\phi(s,z,y)\,dz\,ds\\ \nonumber &=&-K(t,x,y)+\lim_{s\uparrow t}\int_{\mathbb{V}}G_p(t-s,x-z;z)\phi(s,z,y)\,dz\\\nonumber &&\hspace{3cm}-\int_0^t\int_{\mathbb{V}}-(\partial_t+H)G_p(t-s,x-z;z)\phi(s,z,y)\,dz\,ds\\ &=&-K(t,x,y)+\phi(t,x,y)-\int_{0}^t\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,dz\,ds \end{eqnarray} where we have made use of Leibniz' rule and our assertion that $G_p$ is an approximate identity. Thus, for $Z$ to satisfy \eqref{eq:HeatEquation}, $\phi$ must satisfy the integral equation \begin{eqnarray}\label{eq:IntegralEquation}\nonumber K(t,x,y)&=&\phi(t,x,y)-\int_0^{t}\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,dz\,ds\\ &=&\phi(t,x,y)-L(\phi)(t,x,y). \end{eqnarray} Viewing $L$ as a linear integral operator, \eqref{eq:IntegralEquation} is the equation $K=(I-L)\phi$ which has the solution \begin{equation}\label{eq:PhiHeuristic} \phi=\sum_{n=0}^{\infty} L^nK \end{equation} provided the series converges in an appropriate sense.\\ \noindent Taking the above as purely formal, our construction will proceed as follows: We first establish estimates for $G_p$ and show that $G_p$ is an approximate identity; this is Step 1. In Step 2, we will define $\phi$ by \eqref{eq:PhiHeuristic} and, after deducing some subtle estimates, show that $\phi$'s defining series converges whence \eqref{eq:IntegralEquation} is satisfied. Finally in Step $3$, we will make use of the estimates from Steps 1 and 2 to validate the formal calculation made in \eqref{formaldifferentiation}. Everything will be then pieced together to show that $Z$, defined by \eqref{eq:FundSolution}, is a fundamental solution to \eqref{eq:HeatEquation}. 
Our entire construction depends on obtaining precise estimates for $G_p$ and for this we will rely heavily on the homogeneity of $P_p$ and the Legendre-Fenchel transform of $R$.\\ \begin{remark}\label{rmk:Gp} One can allow the coefficients of $H$ to also depend on $t$ in a uniformly continuous way, and Levi's method pushes through by instead taking $G_p$ as the solution to a frozen-coefficient initial value problem \cite{Eidelman1960,Eidelman2004}.\\ \end{remark} \noindent\textbf{Step 1. Estimates for $G_p$ and its derivatives} \noindent The lemma below is a basic building block used in our construction of a fundamental solution to \eqref{eq:HeatEquation} via Levi's method and it makes essential use of the uniform semi-ellipticity of $H$. We note however that the precise form of the constants obtained, as they depend on $k$ and $\beta$, is more detailed than needed for the method to work. Also, the partial differential operators $D_{\mathbf{v}}^{\beta}$ of the lemma are understood to act on the $x$ variable of $G_p(t,x;y)$. \begin{lemma}\label{lem:LegendreEstimate} There exist positive constants $M$ and $C_0$ and, for each multi-index $\beta$, a positive constant $C_{\beta}$ such that, for any $k\in\mathbb{N}$, \begin{equation}\label{eq:LegendreEstimate1} |\partial_t^kD^{\beta}_{\mathbf{v}}G_p(t,x;y)|\leq\frac{C_{\beta}C_0^k k!}{t^{\mu_H+k+|\beta:2\mathbf{m}|}}\exp\left(-tMR^{\#}\left(x/t\right)\right) \end{equation} for all $x,y\in\mathbb{V}$ and $t>0$. \end{lemma} \noindent Before proving the lemma, let us note that $tR^{\#}(x/t)=R^{\#}(t^{-E}x)$ for all $t>0$ and $x\in\mathbb{V}$ in view of Proposition \ref{prop:LegendreTransformProperties}. Thus the estimate \eqref{eq:LegendreEstimate1} can be written equivalently as \begin{equation}\label{eq:LegendreEstimate11} |\partial_t^kD^{\beta}_{\mathbf{v}}G_p(t,x;y)|\leq\frac{C_{\beta}C_0^k k!}{t^{\mu_H+k+|\beta:2\mathbf{m}|}}\exp(-MR^{\#}(t^{-E}x)) \end{equation} for $x,y\in\mathbb{V}$ and $t>0$.
We will henceforth use these forms interchangeably and without explicit mention. \begin{proof} Let us first observe that, for each $x,y\in\mathbb{V}$ and $t>0$, \begin{eqnarray*} \partial_t^kD_{\mathbf{v}}^{\beta}G_p(t,x;y)&=&\int_{\mathbb{V}^*}(P_p(y,\xi))^k\xi^{\beta}e^{-i\xi(x)}e^{-tP_p(y,\xi)}\,d\xi\\ &=&\int_{\mathbb{V}^*}(P_p(y,t^{-E^*}\xi))^k(t^{-E^*}\xi)^{\beta}e^{-i\xi(t^{-E}x)}e^{-P_p(y,\xi)}t^{-\tr E}\,d\xi\\ &=&t^{-\mu_H-k-|\beta:2\mathbf{m}|}\int_{\mathbb{V}^*}(P_p(y,\xi))^k\xi^{\beta}e^{-i\xi(t^{-E}x)}e^{-P_p(y,\xi)}d\,\xi \end{eqnarray*} where we have used the homogeneity of $P_p$ with respect to $\{t^{E^*}\}$ and the fact that $\mu_H=\tr E$. Therefore \begin{equation}\label{eq:LegendreEstimate2} t^{\mu_H+k+|\beta:2\mathbf{m}|}(\partial_t^kD_{\mathbf{v}}^{\beta}G_p(t,\cdot\,;y))(t^{E}x)=\int_{\mathbb{V}^*}(P_p(y,\xi))^k\xi^{\beta}e^{-i\xi(x)}e^{-P_p(y,\xi)}d\xi \end{equation} for all $x,y\in\mathbb{V}$ and $t>0$. Thus, to establish \eqref{eq:LegendreEstimate1} (equivalently \eqref{eq:LegendreEstimate11}) it suffices to estimate the right hand side of \eqref{eq:LegendreEstimate2} which is independent of $t$. The proof of the desired estimate requires making a complex change of variables and for this reason we will work with the complexification of $\mathbb{V}^*$, whose members are denoted by $z=\xi-i\nu$ for $\xi,\nu\in\mathbb{V}^*$; this space is isomorphic to $\mathbb{C}^d$. We claim that there are positive constants $C_0, M_1,M_2$ and, for each multi-index $\beta$, a positive constant $C_{\beta}$ such that, for each $k\in\mathbb{N}$, \begin{equation}\label{eq:LegendreEstimate3} |(P_p(y,\xi-i\nu))^k(\xi-i\nu)^{\beta}e^{-P_p(y,\xi-i\nu)}|\leq C_{\beta}C_0^k k!e^{-M_1 R(\xi)}e^{M_2R(\nu)} \end{equation} for all $\xi,\nu\in\mathbb{V}^*$ and $y\in\mathbb{V}$. 
Let us first observe that \begin{equation*} P_p(y,\xi-i\nu)=P_p(y,\xi)+\sum_{|\beta:\mathbf{m}|=2}\sum_{\gamma<\beta}a_{\beta,\gamma}\xi^{\gamma}(-i\nu)^{\beta-\gamma} \end{equation*} for all $\xi,\nu\in\mathbb{V}^*$ and $y\in\mathbb{V}$, where $a_{\beta,\gamma}$ are bounded functions of $y$ arising from the coefficients of $H$ and the coefficients of the multinomial expansion. By virtue of the uniform semi-ellipticity of $H$ and the boundedness of the coefficients, we have \begin{equation*} -\Re P_p(y,\xi-i\nu)\leq -\delta R(\xi)+C\sum_{|\beta:\mathbf{m}|=2}\sum_{\gamma<\beta}|\xi^{\gamma}\nu^{\beta-\gamma}| \end{equation*} for all $\xi,\nu\in\mathbb{V}^*$ and $y\in\mathbb{V}$ where $C$ is a positive constant. By applying Lemma \ref{lem:Scaling} to each term $|\xi^{\gamma}\nu^{\beta-\gamma}|$ in the summation, we can find a positive constant $M$ for which the entire summation is bounded above by $(\delta/2) R(\xi)+MR(\nu)$ for all $\xi,\nu\in\mathbb{V}^*$. By setting $M_1=\delta/6$, we have \begin{equation}\label{eq:LegendreEstimate4} -\Re P_p(y,\xi-i\nu)\leq -3M_1 R(\xi)+M R(\nu) \end{equation} for all $\xi,\nu\in\mathbb{V}^*$ and $y\in\mathbb{V}$. By analogous reasoning (making use of item 1 of Lemma \ref{lem:Scaling}), there exists a positive constant $C$ for which \begin{equation*} |P_p(y,\xi-i\nu)|\leq C (R(\xi)+ R(\nu)) \end{equation*} for all $\xi,\nu\in\mathbb{V}^*$ and $y\in\mathbb{V}$. Thus, for any $k\in\mathbb{N}$, \begin{equation}\label{eq:LegendreEstimate5} |P_p(y,\xi-i\nu)|^k\leq\frac{C^k k!}{M_1^k}\frac{(M_1 (R(\xi)+R(\nu)))^k}{k!}\leq C_0^k k!e^{M_1 (R(\xi)+R(\nu))} \end{equation} for all $\xi,\nu\in\mathbb{V}^*$ and $y\in\mathbb{V}$ where $C_0=C/M_1$.
Finally, for each multi-index $\beta$, another application of Lemma \ref{lem:Scaling} gives $C'>0$ for which \begin{equation*} |(\xi-i\nu)^{\beta}|\leq|\xi^{\beta}|+|\nu^{\beta}|+\sum_{0<\gamma<\beta}c_{\gamma,\beta}|\xi^{\gamma}\nu^{\beta-\gamma}|\leq C'\left((R(\xi)+R(\nu))^n+1\right) \end{equation*} for all $\xi,\nu\in\mathbb{V}^*$ where $n\in\mathbb{N}$ has been chosen to satisfy $|\beta:2n\mathbf{m}|<1$. Consequently, there is a positive constant $C_{\beta}$ for which \begin{equation}\label{eq:LegendreEstimate6} |(\xi-i\nu)^{\beta}|\leq C_\beta e^{M_1(R(\xi)+R(\nu))} \end{equation} for all $\xi,\nu\in\mathbb{V}^*$. Upon combining \eqref{eq:LegendreEstimate4}, \eqref{eq:LegendreEstimate5} and \eqref{eq:LegendreEstimate6}, we obtain the inequality \begin{equation*} \left|(P_p(y,\xi-i\nu))^k(\xi-i\nu)^{\beta}e^{-P_p(y,\xi-i\nu)}\right|\leq C_{\beta}C_0^k k! e^{-M_1 R(\xi)+(M+2M_1)R(\nu)} \end{equation*} which holds for all $\xi,\nu\in\mathbb{V}^*$ and $y\in\mathbb{V}$. Upon paying careful attention to the way in which our constants were chosen, we observe that the claim is established by setting $M_2=M+2M_1$. From the claim above, it follows that, for any $\nu\in\mathbb{V}^*$ and $y\in\mathbb{V}$, the following change of coordinates by means of a $\mathbb{C}^d$ contour integral is justified: \begin{eqnarray*} \int_{\mathbb{V}^*}(P_p(y,\xi))^k\xi^{\beta}e^{-i\xi(x)}e^{-P_p(y,\xi)}\,d\xi &=&\int_{\xi\in\mathbb{V}^*}(P_p(y,\xi-i\nu))^k(\xi-i\nu)^{\beta}e^{-i(\xi-i\nu)(x)}e^{-P_p(y,\xi-i\nu)}\,d\xi\\ &=&e^{-\nu(x)}\int_{\xi\in\mathbb{V}^*}(P_p(y,\xi-i\nu))^k(\xi-i\nu)^{\beta}e^{-i\xi(x)}e^{-P_p(y,\xi-i\nu)}\,d\xi. \end{eqnarray*} Thus, by virtue of the estimate \eqref{eq:LegendreEstimate3}, \begin{eqnarray*} \left|\int_{\mathbb{V}^*}(P_p(y,\xi))^k\xi^{\beta}e^{-i\xi(x)}e^{-P_p(y,\xi)}\,d\xi\right |&\leq& C_\beta C_0^k k! e^{-\nu(x)}e^{M_2R(\nu)}\int_{\mathbb{V}^*}e^{-M_1 R(\xi)}\,d\xi\\ &\leq & C_{\beta}C_0^k k!
e^{-(\nu(x)-M_2 R(\nu))} \end{eqnarray*} for all $x,y\in\mathbb{V}$ and $\nu\in\mathbb{V}^*$ where we have absorbed the integral of $\exp(-M_1R(\xi))$ into $C_{\beta}$. Upon minimizing with respect to $\nu\in\mathbb{V}^*$, we have \begin{equation}\label{eq:LegendreEstimate7} \left|\int_{\mathbb{V}^*}(P_p(y,\xi))^k\xi^{\beta}e^{-i\xi(x)}e^{-P_p(y,\xi)}d\xi\right |\leq C_{\beta}C_0^k k!e^{-(M_2R)^{\#}(x)}\leq C_{\beta}C_0^k k!e^{-MR^{\#}(x)} \end{equation} for all $x$ and $y\in\mathbb{V}$ because \begin{equation*} -(M_2R)^{\#}(x)=-\sup_{\nu}\{\nu(x)-M_2 R(\nu)\}=\inf_{\nu}\{-(\nu(x)-M_2R(\nu))\}; \end{equation*} in this we see the natural appearance of the Legendre-Fenchel transform. The replacement of $(M_2 R)^{\#}(x)$ by $MR^{\#}(x)$ is done using Corollary \ref{cor:MovingConstants} and, as required, the constant $M$ is independent of $k$ and $\beta$. Upon combining \eqref{eq:LegendreEstimate2} and \eqref{eq:LegendreEstimate7}, we obtain the desired estimate \eqref{eq:LegendreEstimate1}. \end{proof} \noindent As a simple corollary to the lemma, we obtain Proposition \ref{prop:CCEstimates}. \begin{proof}[Proof of Proposition \ref{prop:CCEstimates}] Given a positive-homogeneous operator $\Lambda$, we invoke Proposition \ref{prop:OperatorRepresentation} to obtain $\mathbf{v}$ and $\mathbf{m}$ for which $\Lambda=\sum_{|\beta:\mathbf{m}|=2}a_{\beta}D_\mathbf{v}^{\beta}$. In other words, $\Lambda$ is an $(2\mathbf{m,v})$-positive-semi-elliptic operator which consists only of its principal part. Consequently, the heat kernel $K_{\Lambda}$ satisfies $K_{\Lambda}^t(x)=G_p(t,x;0)$ for all $x\in\mathbb{V}$ and $t>0$ and so we immediately obtain the estimate \eqref{eq:CCDerivativeEstimate} from the lemma. \end{proof} \noindent Making use of Hypothesis \ref{hypoth:HolderCoefficients}, a similar argument to that given in the proof of Lemma \ref{lem:LegendreEstimate} yields the following lemma. 
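\noindent As a simple consistency check (not needed in the sequel), take $\mathbf{m}=(1,1,\dots,1)$ and $\Lambda=-\Delta$ on $\mathbb{R}^d$ with its standard basis, so that $R(\xi)=|\xi|^2$. In this case \begin{equation*} R^{\#}(x)=\sup_{\xi}\left\{\xi(x)-|\xi|^2\right\}=\frac{|x|^2}{4}\hspace{.5cm}\mbox{and}\hspace{.5cm}tR^{\#}(x/t)=\frac{|x|^2}{4t}, \end{equation*} the homogeneous order is $(2m_1)^{-1}+(2m_2)^{-1}+\cdots+(2m_d)^{-1}=d/2$, and the estimate \eqref{eq:CCDerivativeEstimate} reduces, for the kernel itself, to the familiar Gaussian bound $|K_{\Lambda}^t(x)|\leq Ct^{-d/2}\exp(-M|x|^2/t)$, consistent with the classical heat kernel $(4\pi t)^{-d/2}e^{-|x|^2/(4t)}$.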
\begin{lemma}\label{lem:LegendreHolderEstimate} There is a positive constant $M$ and, to each multi-index $\beta$, a positive constant $C_{\beta}$ such that \begin{equation*} |D^{\beta}_\mathbf{v}[G_p(t,x;y+h)-G_p(t,x;y)]|\leq C_{\beta} t^{-(\mu_H+|\beta:2\mathbf{m}|)}|h|_{\mathbf{v}}^{\alpha}\exp(-tMR^{\#}(x/t)) \end{equation*} for all $t>0$, $x,y,h\in\mathbb{V}$. Here, in view of Hypothesis \ref{hypoth:HolderCoefficients}, $\alpha$ is the $\mathbf{v}$-H\"{o}lder continuity exponent for the coefficients of $H$. \end{lemma} \begin{lemma}\label{lem:ApproximateIdentity} Suppose that $g\in C_b((t_0,T]\times\mathbb{V})$ where $0\leq t_0<T<\infty$. Then, on any compact set $Q\subseteq (t_0,T]\times\mathbb{V}$, \begin{equation*} \int_{\mathbb{V}}G_p(t,x-y;y)g(s-t,y)\,dy\rightarrow g(s,x) \end{equation*} uniformly on $Q$ as $t\rightarrow 0$. In particular, for any $f\in C_b(\mathbb{V})$, \begin{equation*} \int_{\mathbb{V}}G_p(t,x-y;y)f(y)\,dy\rightarrow f(x) \end{equation*} uniformly on all compact subsets of $\mathbb{V}$ as $t\rightarrow 0$. \end{lemma} \begin{proof} Let $Q$ be a compact subset of $(t_0,T]\times\mathbb{V}$ and write \begin{eqnarray*} \lefteqn{\int_{\mathbb{V}} G_p(t,x-y;y)g(s-t,y)\,dy}\\ &=&\int_{\mathbb{V}}G_p(t,x-y;x)g(s-t,y)\,dy +\int_{\mathbb{V}}[G_p(t,x-y;y)-G_p(t,x-y;x)]g(s-t,y)\,dy\\ &:=&I_t^{(1)}(s,x)+I_t^{(2)}(s,x). \end{eqnarray*} Let $\epsilon>0$ and, in view of Corollary \ref{cor:ExponentialofLegendreTransformIntegrable}, let $K$ be a compact subset of $\mathbb{V}$ for which \begin{equation*} \int_{\mathbb{V}\setminus K}\exp(-MR^{\#}(z))\,dz<\epsilon \end{equation*} where the constant $M$ is that given in \eqref{eq:LegendreEstimate1} of Lemma \ref{lem:LegendreEstimate}. Using the continuity of $g$, we have for sufficiently small $t>0$, \begin{equation*} \sup_{\substack{(s,x)\in Q\\z\in K}}|g(s-t,x-t^{E}z)-g(s,x)|<\epsilon. 
\end{equation*} We note that, for any $t>0$ and $x\in\mathbb{V}$, \begin{equation*} \int_{\mathbb{V}}G_p(t,x-y;x)\,dy=e^{-tP_p(x,\xi)}\Big\vert_{\xi=0}=1. \end{equation*} Appealing to Lemma \ref{lem:LegendreEstimate} we have, for any $(s,x)\in Q$, \begin{eqnarray*} |I^{(1)}_t(s,x)-g(s,x)|&\leq&\Big|\int_{\mathbb{V}}G_p(t,x-y;x)(g(s-t,y)-g(s,x))\,dy\Big|\\ &\leq&\int_{\mathbb{V}}|G_p(1,z;x)(g(s-t,x-t^{E}z)-g(s,x))|\,dz\\ &\leq& 2\|g\|_{\infty}C\int_{\mathbb{V}\setminus K}\exp(-MR^{\#}(z))\,dz\\ &&\hspace{3cm}+C\int_{K}\exp(-MR^{\#}(z))|(g(s-t,x-t^{E}z)-g(s,x))|\,dz\\ &\leq& \epsilon C\left(2\|g\|_{\infty}+\|e^{-MR^{\#}}\|_1\right) ; \end{eqnarray*} here we have made the change of variables $z=t^{-E}(x-y)$ and used the homogeneity of $P_p$ to see that $t^{\mu_H}G_p(t,t^{E}z;x)=G_p(1,z;x)$. Therefore $I^{(1)}_t(s,x)\rightarrow g(s,x)$ uniformly on $Q$ as $t\rightarrow 0$. Let us now consider $I^{(2)}$. With the help of Lemmas \ref{lem:LegendreSubscale} and \ref{lem:LegendreHolderEstimate} and by making similar arguments to those above we have \begin{eqnarray*} |I_t^{(2)}(s,x)|&\leq& C\|g\|_{\infty}\int_{\mathbb{V}}t^{-\mu_H}|x-y|_{\mathbf{v}}^{\alpha}\exp(-MR^{\#}(t^{-E}(x-y)))\,dy\\ &\leq &\|g\|_{\infty} C t^{\sigma}\int_{\mathbb{V}} t^{-\tr E}(R^{\#}(t^{-E}(x-y)))^{\theta}\exp(-MR^{\#}(t^{-E}(x-y)))\,dy\\ &\leq&\|g\|_{\infty}Ct^{\sigma}\int_{\mathbb{V}}(R^{\#}(z))^{\theta}\exp(-MR^{\#}(z))\,dz\leq \|g\|_{\infty}C't^{\sigma} \end{eqnarray*} for all $s\in(t_0, T]$, $0<t<s-t_0$ and $x\in\mathbb{V}$; here $0<\sigma<1$. Consequently, $I_t^{(2)}(s,x)\rightarrow 0$ uniformly on $Q$ as $t\rightarrow 0$ and the lemma is proved. \end{proof} \noindent Combining the results of Lemmas \ref{lem:LegendreEstimate} and \ref{lem:ApproximateIdentity} yields at once: \begin{corollary}\label{frozenfundsolcor} For each $y\in\mathbb{V}$, $G_p(\cdot,\cdot-y;y)$ is a fundamental solution to \eqref{eq:FrozenCoeffHeat}. \end{corollary} \noindent\textbf{Step 2.
Construction of $\phi$ and the integral equation}\\ \noindent For $t>0$ and $x,y\in\mathbb{V}$, put \begin{eqnarray*} K(t,x,y)&=&-(\partial_t+H)G_p(t,x-y;y)\\ &=&\big (H_p(y)-H\big)G_p(t,x-y;y)\\ &=&\int_{\mathbb{V}^*}e^{-i\xi(x-y)}\big(P_p(y,\xi)-P(x,\xi)\big)e^{-tP_p(y,\xi)}\,d\xi \end{eqnarray*} and iteratively define \begin{equation*} K_{n+1}(t,x,y)=\int_0^t\int_{\mathbb{V}}K_1(t-s,x,z)K_{n}(s,z,y)\,dzds \end{equation*} where $K_1=K$. In the sense of \eqref{eq:PhiHeuristic}, note that $K_{n+1}=L^n K$.\\ \noindent We claim that for some $0<\rho<1$ and positive constants $C$ and $M$, \begin{equation}\label{eq:K1Bound} |K(t,x,y)|\leq C t^{-(\mu_H+1-\rho)}\exp(-M R^{\#}(t^{-E}(x-y))) \end{equation} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. Indeed, observe that \begin{equation*} |K(t,x,y)|\leq \sum_{|\beta:\mathbf{m}|=2}|a_{\beta}(y)-a_{\beta}(x)||D^{\beta}_\mathbf{v} G_p(t,x-y;y)|\\ +C\sum_{|\beta:\mathbf{m}|<2}|D^{\beta}_\mathbf{v}G_p(t,x-y;y)| \end{equation*} for all $x,y\in\mathbb{V}$ and $t>0$ where we have used the fact that the coefficients of $H$ are bounded. In view of Lemma \ref{lem:LegendreEstimate}, we have \begin{eqnarray*} |K(t,x,y)|\leq \sum_{|\beta:\mathbf{m}|=2}|a_{\beta}(y)-a_{\beta}(x)|Ct^{-(\mu_H+1)}\exp(-M R^{\#}(t^{-E}(x-y)))\\ +Ct^{-(\mu_H+\eta)}\exp(-M R^{\#}(t^{-E}(x-y))) \end{eqnarray*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$ where \begin{equation*} \eta=\max\{|\beta:2\mathbf{m}|:|\beta:\mathbf{m}|\neq 2\mbox{ and }a_{\beta}\neq 0\}<1. \end{equation*} Using Hypothesis \ref{hypoth:HolderCoefficients}, an appeal to Corollary \ref{cor:HolderLegendreEstimate} gives $0<\sigma<1$ and $\theta>0$ for which \begin{eqnarray*} |K(t,x,y)|&\leq& C t^{\sigma-(\mu_H+1)}(R^{\#}(t^{-E}(x-y)))^{\theta}\exp(-M R^{\#}(t^{-E}(x-y)))\\ &&\hspace{4cm}+Ct^{-(\mu_H+\eta)}\exp(-M R^{\#}(t^{-E}(x-y))) \end{eqnarray*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. 
Our claim is then justified by setting \begin{equation}\label{eq:defofrho} \rho=\max\{\sigma,1-\eta\} \end{equation} and adjusting the constants $C$ and $M$ appropriately to absorb the prefactor $(R^{\#}(t^{-E}(x-y)))^{\theta}$ into the exponent. It should be noted that the constant $\rho$ is inherently dependent on $H$. For it is clear that $\eta$ depends on $H$. The constants $\sigma$ and $\theta$ are specified in Lemma \ref{lem:LegendreSubscale} and are defined in terms of the H\"{o}lder exponent of the coefficients of $H$ and the weight $\mathbf{m}$.\\ \noindent Taking cues from our heuristic discussion, we will soon form a series whose summands are the functions $K_n$ for $n\geq 1.$ In order to talk about the convergence of this series, our next task is to estimate these functions and in doing this we will observe two separate behaviors: a finite number of terms will exhibit singularities in $t$ at the origin; the remainder of the terms will be absent of such singularities and will be estimated with the help of the Gamma function. We first address the terms with the singularities. \begin{lemma}\label{lem:KFiniteTerms} Let $0<\rho<1$ be given by \eqref{eq:defofrho} and $M>0$ be any constant for which \eqref{eq:K1Bound} is satisfied. For any positive natural number $n$ such that $\rho(n-1)\leq \mu_H+1$ and $\epsilon>0$ for which $\epsilon n<1$, there is a constant $C_n(\epsilon)\geq 1$ such that \begin{equation*} |K_n(t,x,y)|\leq C_n(\epsilon)t^{-(\mu_H+1-n\rho)}\exp(-M(1-\epsilon n)R^{\#}(t^{-E}(x-y))) \end{equation*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. \end{lemma} \begin{proof} In view of \eqref{eq:K1Bound}, it is clear that the estimate holds when $n=1$. Let us assume the estimate holds for $n\geq 1$ such that $\rho n<1+\mu_H$ and $\epsilon>0$ for which $\epsilon n<\epsilon (n+1)<1$. 
Then \begin{eqnarray}\label{eq:KFiniteTerms1}\nonumber |K_{n+1}(t,x,y)|&\leq &\int_0^t\int_{\mathbb{V}}C_1(\epsilon)(t-s)^{-(\mu_H+1-\rho)}C_n(\epsilon)s^{-(\mu_H+1-n\rho)}\\ &&\hspace{1cm}\times\exp(-MR^{\#}((t-s)^{-E}(x-z)))\exp(-M_{\epsilon,n}R^{\#}(s^{-E}(z-y)))\,dz\,ds \end{eqnarray} for $x,y\in\mathbb{V}$ and $0<t\leq T$ where we have set $M_{\epsilon,n}=M(1-\epsilon n)$. Observe that \begin{eqnarray}\label{eq:KFiniteTerms2}\nonumber R^{\#}(t^{-E}(x-y))&=&\sup\{\xi(x-y)-tR(\xi)\}\\\nonumber &=&\sup\{\xi(x-z)-(t-s)R(\xi)+\xi(z-y)-sR(\xi)\}\\ &\leq& R^{\#}((t-s)^{-E}(x-z))+R^{\#}(s^{-E}(z-y)) \end{eqnarray} for all $x,y,z\in\mathbb{V}$ and $0< s\leq t$. Using the fact that $0<\epsilon n<\epsilon (n+1)<1$, \eqref{eq:KFiniteTerms2} guarantees that \begin{eqnarray*} \lefteqn{(1-\epsilon(n+1))R^{\#}(t^{-E}(x-y))+\epsilon\left(R^{\#}((t-s)^{-E}(x-z))+R^{\#}(s^{-E}(z-y))\right)}\\ &\leq&(1-\epsilon(n+1))\left(R^{\#}((t-s)^{-E}(x-z))+R^{\#}(s^{-E}(z-y))\right)\\ & &\hspace{4cm}+\epsilon\left(R^{\#}((t-s)^{-E}(x-z))+R^{\#}(s^{-E}(z-y))\right)\\ &\leq & (1-\epsilon n)R^{\#}((t-s)^{-E}(x-z))+(1-\epsilon n)R^{\#}(s^{-E}(z-y))\\ &\leq &R^{\#}((t-s)^{-E}(x-z))+(1-\epsilon n)R^{\#}(s^{-E}(z-y)) \end{eqnarray*} or equivalently \begin{eqnarray}\label{eq:KFiniteTerms3}\nonumber \lefteqn{-MR^{\#}((t-s)^{-E}(x-z))-M_{\epsilon,n}R^{\#}(s^{-E}(z-y))}\\ &\leq& -M_{\epsilon,n+1}R^{\#}(t^{-E}(x-y))-\epsilon M \left(R^{\#}((t-s)^{-E}(x-z))+R^{\#}(s^{-E}(z-y))\right) \end{eqnarray} for all $x,y,z\in\mathbb{V}$ and $0<s\leq t$.
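As an illustration of \eqref{eq:KFiniteTerms2} in the simplest setting (this sanity check is ours and is not needed for the argument): for the constant-coefficient heat operator $H=-\partial_x^2$ on $\mathbb{R}$ one has $R(\xi)=\xi^2$, $R^{\#}(x)=x^2/4$ and $E=1/2$, so that $R^{\#}(t^{-E}(x-y))=(x-y)^2/4t$ and \eqref{eq:KFiniteTerms2} reads
\begin{equation*}
\frac{(x-y)^2}{t}\leq\frac{(x-z)^2}{t-s}+\frac{(z-y)^2}{s},
\end{equation*}
which is the elementary inequality $(a+b)^2/(p+q)\leq a^2/p+b^2/q$ (valid for $p,q>0$) applied with $a=x-z$, $b=z-y$, $p=t-s$ and $q=s$.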
Combining \eqref{eq:KFiniteTerms1} and \eqref{eq:KFiniteTerms3} yields \begin{eqnarray}\label{eq:KFiniteTerms4}\nonumber \lefteqn{|K_{n+1}(t,x,y)|}\\\nonumber &\leq& C_1(\epsilon)C_n(\epsilon)\exp(-M_{\epsilon,n+1}R^{\#}(t^{-E}(x-y)))\int_0^t\int_{\mathbb{V}}(t-s)^{-(\mu_H+1-\rho)}s^{-(\mu_H+1-n\rho)}\\\nonumber &&\hspace{4cm}\times\exp(-\epsilon M(R^{\#}((t-s)^{-E}(x-z))+R^{\#}(s^{-E}(z-y))))\,dz\,ds\\\nonumber &\leq&C_1(\epsilon)C_n(\epsilon)\exp(-M_{\epsilon,n+1}R^{\#}(t^{-E}(x-y)))\\\nonumber &&\hspace{2cm}\times\Big[\int_0^{t/2}\int_{\mathbb{V}}(t-s)^{-(\mu_H+1-\rho)}s^{-(\mu_H+1-n\rho)}\exp(-\epsilon MR^{\#}(s^{-E}(z-y)))\,dz\,ds\\ &&\hspace{2.1cm}+\int_{t/2}^{t}\int_{\mathbb{V}}(t-s)^{-(\mu_H+1-\rho)}s^{-(\mu_H+1-n\rho)}\exp(-\epsilon MR^{\#}((t-s)^{-E}(x-z)))\,dz\,ds\Big] \end{eqnarray} for all $x,y\in\mathbb{V}$ and $0<t\leq T$, where we have used the fact that $R^{\#}$ is non-negative. Let us focus our attention on the first term above. For $0\leq s\leq t/2$, \begin{equation*} (t-s)^{-(\mu_H+1-\rho)}s^{-(\mu_H+1-n\rho)}\leq (t/2)^{-(\mu_H+1-\rho)}s^{-(\mu_H+1-n\rho)} \end{equation*} because $\mu_H+1-\rho>0$. Consequently, \begin{eqnarray}\label{eq:KFiniteTerms5}\nonumber \lefteqn{\int_{0}^{t/2}\int_{\mathbb{V}}(t-s)^{-(\mu_H+1-\rho)}s^{-(\mu_H+1-n\rho)}\exp(-\epsilon MR^{\#}(s^{-E}(z-y)))\,dz\,ds}\\\nonumber &\leq &(t/2)^{-(\mu_H+1-\rho)}\int_0^{t/2}s^{-(\mu_H+1-n\rho)}\int_{\mathbb{V}}\exp(-\epsilon MR^{\#}(s^{-E}(z-y)))\,dz\,ds\\\nonumber &\leq& (t/2)^{-(\mu_H+1-\rho)}\int_0^{t/2}s^{n\rho-1}\int_{\mathbb{V}}\exp(-\epsilon MR^{\#}(z))\,dz\,ds\\ &\leq&t^{-(\mu_H+1-(n+1)\rho)}\frac{2^{(\mu_H+1-(n+1)\rho)}}{n\rho}\int_{\mathbb{V}}\exp(-\epsilon MR^{\#}(z))\,dz \end{eqnarray} for all $y\in\mathbb{V}$ and $t>0$.
We note that the second inequality is justified by making the change of variables $z\mapsto s^{-E}(z-y)$ (thus canceling the term $s^{-\tr E}=s^{-\mu_H}$ in the integral over $s$) and the final inequality is valid because $n\rho-1>\rho-1>-1$. By similar reasoning, we obtain \begin{eqnarray}\label{eq:KFiniteTerms6}\nonumber \lefteqn{\int_{t/2}^t\int_{\mathbb{V}}(t-s)^{-(\mu_H+1-\rho)}s^{-(\mu_H+1-n\rho)}\exp(-\epsilon MR^{\#}((t-s)^{-E}(x-z)))\,dz\,ds}\\ &&\leq t^{-(\mu_H+1-(n+1)\rho)}\frac{2^{(\mu_H+1-(n+1)\rho)}}{\rho}\int_{\mathbb{V}}\exp(-\epsilon MR^{\#}(z))\,dz \end{eqnarray} for all $x\in\mathbb{V}$ and $t>0$. Upon combining the estimates \eqref{eq:KFiniteTerms4}, \eqref{eq:KFiniteTerms5} and \eqref{eq:KFiniteTerms6}, we have \begin{equation*} |K_{n+1}(t,x,y)|\leq C_{n+1}(\epsilon)t^{-(\mu_H+1-(n+1)\rho)}\exp(-M_{\epsilon,n+1}R^{\#}(t^{-E}(x-y))) \end{equation*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$ where we have put \begin{equation*} C_{n+1}(\epsilon)=C_1(\epsilon)C_{n}(\epsilon)\frac{n+1}{n\rho}2^{\mu_H+(1-(n+1)\rho)}\int_{\mathbb{V}}\exp(-\epsilon M R^{\#}(z))\,dz \end{equation*} and made use of Corollary \ref{cor:ExponentialofLegendreTransformIntegrable}. \end{proof} \begin{remark} The estimate \eqref{eq:KFiniteTerms2} is an important one and will be used again. In the context of elliptic operators, i.e., where $R^{\#}(x)=C_m|x|^{2m/(2m-1)}$, the analogous result is captured in Lemma 5.1 of \cite{Eidelman1969}. It is interesting to note that S. D. Eidelman worked somewhat harder to prove it. Perhaps this is because the appearance of the Legendre-Fenchel transform was not noticed. \end{remark} \noindent It is clear from the previous lemma that for sufficiently large $n$, $K_n$ is bounded by a positive power of $t$. The first such $n$ is $\bar n:=\lceil\rho^{-1}(\tr E+1)\rceil$.
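To make this count concrete (the numerical values here are hypothetical and chosen only for illustration): if, say, $\tr E=2$ and \eqref{eq:defofrho} yields $\rho=2/5$, then
\begin{equation*}
\bar n=\lceil\rho^{-1}(\tr E+1)\rceil=\lceil 15/2\rceil=8,
\end{equation*}
so that $K_1,\dots,K_7$ carry singularities in $t$ at the origin, while $K_8$ is bounded by a constant multiple of $t^{1/5}$ since $-(\mu_H+1-8\rho)=16/5-3=1/5$.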
In view of the previous lemma, \begin{equation*} |K_{\bar n}(t,x,y)|\leq C_{\bar n}(\epsilon)\exp(-M(1-\epsilon\bar n)R^{\#}(t^{-E}(x-y))) \end{equation*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$ where we have adjusted $C_{\bar n}(\epsilon)$ to account for this positive power of $t$. Let $0<\delta<1/2$ and set \begin{equation*} \epsilon=\frac{\delta}{\bar n},\hspace{.5cm}M_1=M(1-\delta)\hspace{.25cm}\mbox{ and }\hspace{.25cm}C_0=\max_{1\leq n\leq \bar n}C_n(\epsilon). \end{equation*} Upon combining the preceding estimate with the estimates \eqref{eq:K1Bound} and \eqref{eq:KFiniteTerms2}, we have \begin{eqnarray*} \lefteqn{|K_{\bar n+1}(t,x,y)|}\\ &\leq& C_0^2\int_0^t\int_{\mathbb{V}}(t-s)^{-(\mu_H+(1-\rho))}\\ &&\hspace{2cm}\times\exp(-MR^{\#}((t-s)^{-E}(x-z)))\exp(-M(1-\epsilon\bar n)R^{\#}(s^{-E}(z-y)))\,ds\,dz\\ &\leq& C_0^2\exp(-M_1R^{\#}(t^{-E}(x-y)))\int_0^t\int_{\mathbb{V}}(t-s)^{-(\mu_H+(1-\rho))}\exp(-M\delta R^{\#}((t-s)^{-E}z))\,dz\,ds\\ &\leq& C_0(C_0 F)\frac{t^{\rho}}{\rho}\exp(-M_1R^{\#}(t^{-E}(x-y))) \end{eqnarray*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$ where \begin{equation*} F=\int_{\mathbb{V}}\exp(-M\delta R^{\#}(z))\,dz<\infty. \end{equation*} Let us take this a little further. \begin{lemma}\label{lem:KSeries} For every $k\in \mathbb{N}_+$, \begin{equation}\label{eq:KSeriesBound} |K_{\bar n+k}(t,x,y)|\leq \frac{C_0}{\Gamma(\rho)} \frac{(C_0 F\Gamma(\rho))^k}{k!}t^{\rho k}\exp(-M_1 R^{\#}(t^{-E}(x-y))) \end{equation} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. Here $\Gamma(\cdot)$ denotes the Gamma function. \end{lemma} \begin{proof} The Euler-Beta function $B(\cdot,\cdot)$ satisfies the well-known identity $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$. Using this identity, one quickly obtains the estimate \begin{equation*} \prod_{j=1}^{k-1}B(\rho,1+j\rho)=\frac{\Gamma(\rho)^{k-1}}{\Gamma(1+k\rho)}\leq\frac{\Gamma(\rho)^{k-1}}{k!}.
\end{equation*} It therefore suffices to prove that \begin{equation}\label{eq:KSeriesBoundEulerBeta} |K_{\bar n+k}(t,x,y)|\leq C_0 (C_0 F)^k\prod_{j=0}^{k-1} B(\rho,1+j\rho) t^{k\rho}\exp(-M_1 R^{\#}(t^{-E}(x-y))) \end{equation} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. We first note that $B(\rho,1)=\rho^{-1}$ and so, for $k=1$, \eqref{eq:KSeriesBoundEulerBeta} follows directly from the calculation preceding the lemma. We shall induct on $k$. By another application of \eqref{eq:K1Bound} and \eqref{eq:KFiniteTerms2}, we have \begin{eqnarray*} J_{k+1}(t,x,y)&:=&\Big[C_0^2(C_0 F)^k\prod_{j=0}^{k-1}B(\rho,1+j\rho)\Big]^{-1}|K_{\bar n+k+1}(t,x,y)|\\ &\leq& \int_{0}^t\int_{\mathbb{V}}(t-s)^{-(\mu_H+(1-\rho))}s^{k\rho}\exp(-MR^{\#}((t-s)^{-E}(x-z)))\\ &&\hspace{6cm}\times\exp(-M_1R^{\#}(s^{-E}(z-y)))\,dz\,ds\\ &\leq& \exp(-M_1R^{\#}(t^{-E}(x-y)))\\ &&\hspace{1cm}\times\int_{0}^t\int_{\mathbb{V}}(t-s)^{-(\mu_H+(1-\rho))}s^{k\rho}\exp(-M\delta R^{\#}((t-s)^{-E}(x-z)))\,dz\,ds \end{eqnarray*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. Upon making the changes of variables $z\rightarrow (t-s)^{-E}(x-z)$ followed by $s\rightarrow s/t$, we have \begin{eqnarray*} J_{k+1}(t,x,y)&\leq &\exp(-M_1R^{\#}(t^{-E}(x-y)))F\int_0^1 (t-st)^{\rho-1}(st)^{k\rho}t\,ds \\ &\leq&\exp(-M_1R^{\#}(t^{-E}(x-y)))F t^{(k+1)\rho} B(\rho,1+k\rho) \end{eqnarray*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. Therefore \eqref{eq:KSeriesBoundEulerBeta} holds for $k+1$ as required. \end{proof} \begin{proposition}\label{prop:IntegralIdentity} Let $\phi:(0,T]\times\mathbb{V}\times\mathbb{V}\rightarrow\mathbb{C}$ be defined by \begin{equation*} \phi=\sum_{n=1}^{\infty}K_n. \end{equation*} This series converges uniformly for $x,y\in\mathbb{V}$ and $t_0\leq t\leq T$ where $t_0$ is any positive constant.
There exists $C\geq 1$ for which \begin{equation}\label{eq:PhiEstimate} |\phi(t,x,y)|\leq \frac{C}{t^{\mu_H+(1-\rho)}}\exp(-M_1R^{\#}(t^{-E}(x-y))) \end{equation} for all $x,y\in\mathbb{V}$ and $0<t\leq T$ where $M_1$ and $\rho$ are as in the previous lemmas. Moreover, the identity \begin{equation}\label{eq:IntegralIdentity} \phi(t,x,y)=K(t,x,y)+\int_0^t\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,dz\,ds \end{equation} holds for all $x,y\in\mathbb{V}$ and $0<t\leq T$. \end{proposition} \begin{proof} Using Lemmas \ref{lem:KFiniteTerms} and \ref{lem:KSeries} we see that \begin{equation*} \sum_{n=1}^{\infty}|K_n(t,x,y)|\leq C_0\Big[\sum_{n=1}^{\bar n}t^{-(\mu_H+(1-n\rho))} +\frac{1}{\Gamma(\rho)}\sum_{k=1}^{\infty} \frac{(C_0 F\Gamma(\rho))^{k}}{k!} t^{k\rho}\Big]\exp(-M_1R^{\#}(t^{-E}(x-y))) \end{equation*} for all $x,y\in\mathbb{V}$ and $0<t\leq T$ from which \eqref{eq:PhiEstimate} and our assertion concerning uniform convergence follow. A similar calculation and an application of Tonelli's theorem justify the following use of Fubini's theorem: For $x,y\in\mathbb{V}$ and $0<t\leq T$, \begin{eqnarray*} \int_0^t\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,ds\,dz&=&\sum_{n=1}^{\infty}\int_0^t\int_{\mathbb{V}}K(t-s,x,z)K_n(s,z,y)\,dz\,ds\\ &=&\sum_{n=1}^{\infty}K_{n+1}(t,x,y)=\phi(t,x,y)-K(t,x,y) \end{eqnarray*} as desired. \end{proof} \noindent The following H\"{o}lder continuity estimate for $\phi$ is obtained by first showing the analogous estimate for $K$ and then deducing the desired result from the integral formula \eqref{eq:IntegralIdentity}. As the proof is similar in character to those of the preceding two lemmas, we omit it. A full proof can be found in \cite[p.80]{Eidelman2004}. We also note here that the result is stronger than is required for our purposes (see its use in the proof of Lemma \ref{lem:WProperties}).
All that is really required is that $\phi(\cdot,\cdot,y)$ satisfies the hypotheses (for $f$) in Lemma \ref{lem:IntegralDifferentiation} for each $y\in\mathbb{V}$. \begin{lemma}\label{holderphilemma} There exists $\alpha\in\mathbb{I}_+^d$ which is consistent with $\mathbf{m}$, $0<\eta<1$ and $C\geq 1$ such that \begin{equation*} |\phi(t,x+h,y)-\phi(t,x,y)|\leq \frac{C}{t^{\mu_H+(1-\eta)}}|h|_{\mathbf{v}}^\alpha\exp(-M_1R^{\#}(t^{-E}(x-y))) \end{equation*} for all $x,y,h\in\mathbb{V}$ and $0<t\leq T$. \end{lemma} \noindent\textbf{Step 3. Verifying that $Z$ is a fundamental solution to \eqref{eq:HeatEquation}} \begin{lemma}\label{lem:IntegralDifferentiation} Let $\alpha\in\mathbb{I}_+^d$ be consistent with $\mathbf{m}$ and, for $t_0> 0$, let $f:[t_0,T]\times\mathbb{V}\rightarrow\mathbb{C}$ be bounded and continuous. Moreover, suppose that $f$ is uniformly $\mathbf{v}$-H\"{o}lder continuous in $x$ on $[t_0,T]\times\mathbb{V}$ of order $\alpha$, by which we mean that there is a constant $C>0$ such that \begin{equation*} \sup_{t\in[t_0,T]}|f(t,x)-f(t,y)|\leq C|x-y|_{\mathbf{v}}^{\alpha} \end{equation*} for all $x,y\in\mathbb{V}$. Then $u:[t_0,T]\times\mathbb{V}\rightarrow \mathbb{C}$ defined by \begin{equation*} u(t,x)=\int_{t_0}^t\int_{\mathbb{V}}G_p(t-s,x-z;z)f(s,z)\,ds\,dz \end{equation*} is $(2\mathbf{m,v})$-regular on $(t_0,T)\times\mathbb{V}$. Moreover, \begin{equation}\label{eq:IntegralDifferentiation1} \partial_t u(t,x)=f(t,x)+\lim_{h\downarrow 0}\int_{t_0}^{t-h}\int_{\mathbb{V}}\partial_tG_p(t-s,x-z;z)f(s,z)\,dz\,ds \end{equation} and for any $\beta$ such that $|\beta:\mathbf{m}|\leq 2$, we have \begin{equation}\label{eq:IntegralDifferentiation2} D_{\mathbf{v}}^{\beta}u(t,x)=\lim_{h\downarrow 0}\int_{t_0}^{t-h}\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)f(s,z)\,dz\,ds \end{equation} for $x\in\mathbb{V}$ and $t_0< t< T$.
\end{lemma} \noindent Before starting the proof, let us observe that, for each multi-index $\beta$, \begin{equation*} |D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)f(s,z)|\leq C(t-s)^{-(\mu_H+ |\beta:2\mathbf{m}|)}\exp(-MR^{\#}((t-s)^{-E}(x-z)))|f(s,z)|. \end{equation*} Using the assumption that $f$ is bounded, we observe that \begin{eqnarray*} \lefteqn{\hspace{-2cm}\int_{t_0}^t\int_{\mathbb{V}}|D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)f(s,z)|\,dz\,ds}\\ &\leq& C\int_{t_0}^t\int_{\mathbb{V}}(t-s)^{-(\mu_H+|\beta:2\mathbf{m}|)}\exp(-MR^{\#}((t-s)^{-E}(x-z)))\,dz\,ds\\ &\leq& C\int_{t_0}^t\int_{\mathbb{V}}(t-s)^{-|\beta:2\mathbf{m}|}\exp(-MR^{\#}(z))\,dz\,ds\\ &\leq& C\int_{t_0}^t(t-s)^{-|\beta:2\mathbf{m}|}\,ds \end{eqnarray*} for all $t_0\leq t\leq T$ and $x\in\mathbb{V}$. When $|\beta:\mathbf{m}|<2$, \begin{equation}\label{eq:Singularity} \int_{t_0}^{t}(t-s)^{-|\beta:2\mathbf{m}|}\,ds \end{equation} converges and consequently \begin{equation*} D_{\mathbf{v}}^{\beta}u(t,x)=\int_{t_0}^t\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)f(s,z)\,dz\,ds \end{equation*} for all $t_0\leq t\leq T$ and $x\in\mathbb{V}$. From this it follows that $D_{\mathbf{v}}^{\beta}u(t,x)$ is continuous on $(t_0,T)\times\mathbb{V}$ and moreover \eqref{eq:IntegralDifferentiation2} holds for such a $\beta$ in view of Lebesgue's dominated convergence theorem. When $|\beta:\mathbf{m}|=2$, \eqref{eq:Singularity} does not converge and hence the above argument fails. The main issue in the proof below centers around using $\mathbf{v}$-H\"{o}lder continuity to remove this obstacle. \begin{proof} Our argument proceeds in two steps. The first step deals with the spatial derivatives of $u$. Therein, we prove the asserted $x$-regularity and show that the formula \eqref{eq:IntegralDifferentiation2} holds. In fact, we only need to consider the case where $|\beta:\mathbf{m}|=2$; the case where $|\beta:\mathbf{m}|<2$ was already treated in the paragraph preceding the proof.
In the second step, we address the time derivative of $u$. As we will see, \eqref{eq:IntegralDifferentiation1} and the asserted $t$-regularity are partial consequences of the results proved in Step 1; this is, in part, due to the fact that the time derivative of $G_p$ can be exchanged for spatial derivatives. The regularity shown in the two steps together will automatically ensure that $u$ is $(2\mathbf{m,v})$-regular on $(t_0,T)\times\mathbb{V}$.\\ \noindent\emph{Step 1. }Let $\beta$ be such that $|\beta:\mathbf{m}|=2$. For $h>0$ write \begin{equation*} u_h(t,x)=\int_{t_0}^{t-h}\int_{\mathbb{V}}G_p(t-s,x-z;z)f(s,z)\,dz\,ds \end{equation*} and observe that \begin{equation*} D_{\mathbf{v}}^{\beta}u_h(t,x)=\int_{t_0}^{t-h}\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)f(s,z)\,dz\,ds \end{equation*} for all $t_0\leq t-h<t\leq T$ and $x\in\mathbb{V}$; it is clear that $D_{\mathbf{v}}^{\beta}u_h(t,x)$ is continuous in $t$ and $x$. The fact that we can differentiate under the integral sign is justified because $t$ has been replaced by $t-h$ and hence the singularity in \eqref{eq:Singularity} is avoided in the upper limit. We will show that $D_{\mathbf{v}}^{\beta}u_h(t,x)$ converges uniformly on all compact subsets of $(t_0,T)\times\mathbb{V}$ as $h\rightarrow 0$. This, of course, guarantees that $D_{\mathbf{v}}^{\beta}u(t,x)$ exists, satisfies \eqref{eq:IntegralDifferentiation2} and is continuous on $(t_0,T)\times\mathbb{V}$. To this end, let us write \begin{eqnarray*} D_{\mathbf{v}}^{\beta}u_h(t,x)&=&\int_{t_0}^{t-h}\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)(f(s,z)-f(s,x))\,dz\,ds\\ &&\hspace{3cm}+\int_{t_0}^{t-h}\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)f(s,x)\,dz\,ds\\ &=:&I^{(1)}_h(t,x)+I^{(2)}_h(t,x).
\end{eqnarray*} Using our hypotheses, Corollary \ref{cor:HolderConsistent} and Lemma \ref{lem:LegendreSubscale}, for some $0<\sigma<1$ and $\theta>0$, there is $C>0$ such that \begin{equation*} |f(s,z)-f(s,x)|\leq C(t-s)^{\sigma}(R^{\#}((t-s)^{-E}(x-z)))^{\theta} \end{equation*} for all $x,z\in\mathbb{V}$, $t\in[t_0,T]$ and $s\in[t_0,t]$. In view of the preceding estimate and Lemma \ref{lem:LegendreEstimate}, we have \begin{eqnarray*} \lefteqn{\hspace{-1cm}|D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)(f(s,z)-f(s,x))|}\\ &\leq& C(t-s)^{-(\mu_H+1)}(t-s)^{\sigma}(R^{\#}((t-s)^{-E}(x-z)))^{\theta}\exp(-MR^{\#}((t-s)^{-E}(x-z)))\\ &\leq& C(t-s)^{-(\mu_H+(1-\sigma))}\exp(-MR^{\#}((t-s)^{-E}(x-z))) \end{eqnarray*} for all $x,z\in\mathbb{V}$, $t\in[t_0,T]$ and $s\in[t_0,t]$, where $C$ and $M$ are positive constants. We then observe that \begin{eqnarray*} \lefteqn{\hspace{-2cm}\int_{t_0}^t\int_{\mathbb{V}}|D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)(f(s,z)-f(s,x))|\,dz\,ds}\\ &\leq &C\int_{t_0}^t(t-s)^{-(\mu_H+(1-\sigma))}\int_{\mathbb{V}}\exp(-MR^{\#}((t-s)^{-E}(x-z)))\,dz\,ds\\ &\leq& C\int_{t_0}^t(t-s)^{\sigma-1}\int_{\mathbb{V}}\exp(-MR^{\#}(z))\,dz\,ds\\ &\leq &\frac{C(t-t_0)^{\sigma}}{\sigma}\int_{\mathbb{V}}\exp(-MR^{\#}(z))\,dz\\ &\leq & \frac{C(T-t_0)^{\sigma}}{\sigma}\int_{\mathbb{V}}\exp(-MR^{\#}(z))\,dz<\infty \end{eqnarray*} for all $t\in[t_0,T]$ and $x\in \mathbb{V}$, where the validity of the second inequality is seen by making the change of variables $z\mapsto (t-s)^{-E}(x-z)$ and canceling the term $(t-s)^{-\mu_H}=(t-s)^{-\tr E}$. Consequently, \begin{equation*} I^{(1)}(t,x):=\int_{t_0}^t\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)(f(s,z)-f(s,x))\,dz\,ds \end{equation*} exists for each $t\in[t_0,T]$ and $x\in\mathbb{V}$.
Moreover, for all $t_0\leq t-h<t\leq T$ and $x\in\mathbb{V}$, \begin{eqnarray*} |I^{(1)}(t,x)-I_h^{(1)}(t,x)|&\leq&\int_{t-h}^t\int_{\mathbb{V}}|D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)(f(s,z)-f(s,x))|\,dz\,ds\\ &\leq& C\int_{t-h}^t\int_{\mathbb{V}}(t-s)^{\sigma-1}\exp(-MR^{\#}(z))\,dz\,ds\leq C h^{\sigma}. \end{eqnarray*} From this we see that $\lim_{h\downarrow 0}I_h^{(1)}(t,x)$ converges uniformly on all compact subsets of $(t_0,T)\times\mathbb{V}$. We claim that for some $0<\rho<1$, there exists $C>0$ such that \begin{equation}\label{eq:IntegralDifferentiation3} \Big|\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)\,dz\Big |\leq C(t-s)^{-(1-\rho)} \end{equation} for all $x\in\mathbb{V}$ and $s\in[t_0,t]$. Indeed, \begin{eqnarray*} \lefteqn{\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)\,dz}\\ &=& \int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}[G_p(t-s,x-z;z)-G_p(t-s,x-z;y)]\big\vert_{y=x}\,dz+\big [D_{\mathbf{v}}^{\beta}\int_{\mathbb{V}}G_p(t-s,x-z;y)\,dz\big]\big\vert_{y=x}. \end{eqnarray*} The first term above is estimated with the help of Lemma \ref{lem:LegendreHolderEstimate} and by making arguments analogous to those in the previous paragraph; the appearance of $\rho$ follows from an obvious application of Lemma \ref{lem:LegendreSubscale}. This term is bounded by $C(t-s)^{-(1-\rho)}$. The second term is clearly zero and so our claim is justified. By essentially repeating the arguments made for $I_h^{(1)}$ and making use of \eqref{eq:IntegralDifferentiation3}, we see that \begin{equation*} \lim_{h\downarrow 0}I_h^{(2)}(t,x)=I^{(2)}(t,x):=\int_{t_0}^{t}\int_{\mathbb{V}}D^{\beta}_{\mathbf{v}}G_p(t-s,x-z;z)f(s,x)\,dz\,ds \end{equation*} where this limit converges uniformly on all compact subsets of $(t_0,T)\times\mathbb{V}$.\\ \noindent\emph{Step 2.
} It follows from Leibniz's rule that \begin{eqnarray*} \partial_tu_h(t,x)&=&\int_{\mathbb{V}}G_p(h,x-z;z)f(t-h,z)\,dz+\int_{t_0}^{t-h}\int_{\mathbb{V}}\partial_tG_p(t-s,x-z;z)f(s,z)\,dz\,ds\\ &=:&J_h^{(1)}(t,x)+J_h^{(2)}(t,x) \end{eqnarray*} for all $t_0<t-h<t< T$ and $x\in\mathbb{V}$. Now, in view of Lemma \ref{lem:ApproximateIdentity} and our hypotheses concerning $f$, \begin{equation*} \lim_{h\downarrow 0}J_h^{(1)}(t,x)=f(t,x) \end{equation*} where this limit converges uniformly on all compact subsets of $(t_0,T)\times\mathbb{V}$. Using the fact that $\partial_tG_p(t-s,x-z;z)=-H_p(z)G_p(t-s,x-z;z)$, we see that \begin{eqnarray*} \lim_{h\downarrow 0}J_h^{(2)}(t,x)&=&\lim_{h\downarrow 0}\int_{t_0}^{t-h}\int_{\mathbb{V}}\Big(-\sum_{|\beta:\mathbf{m}|=2}a_{\beta}(z)D_{\mathbf{v}}^{\beta}\Big )G_p(t-s,x-z;z)f(s,z)\,dz\,ds\\ &=&-\sum_{|\beta:\mathbf{m}|=2}\lim_{h\downarrow 0}\int_{t_0}^{t-h}\int_{\mathbb{V}}D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)(a_{\beta}(z)f(s,z))\,dz\,ds \end{eqnarray*} for all $t\in(t_0,T)$ and $x\in\mathbb{V}$. Because the coefficients of $H$ are $\mathbf{v}$-H\"{o}lder continuous and bounded, for each $\beta$, $a_{\beta}(z)f(s,z)$ satisfies the same condition we have required for $f$ and so, in view of Step 1, it follows that $J_h^{(2)}(t,x)$ converges uniformly on all compact subsets of $(t_0,T)\times\mathbb{V}$ as $h\rightarrow 0$. We thus conclude that $\partial_tu(t,x)$ exists, is continuous on $(t_0,T)\times\mathbb{V}$ and satisfies \eqref{eq:IntegralDifferentiation1}. \end{proof} \begin{lemma}\label{lem:WProperties} Let $W:(0,T]\times\mathbb{V}\times\mathbb{V}\rightarrow \mathbb{C}$ be defined by \begin{equation*} W(t,x,y)=\int_0^t\int_{\mathbb{V}}G_p(t-s,x-z;z)\phi(s,z,y)\,dz\,ds, \end{equation*} for $x,y\in\mathbb{V}$ and $0<t\leq T$. Then, for each $y\in\mathbb{V}$, $W(\cdot,\cdot,y)$ is $(2\mathbf{m,v})$-regular on $(0,T)\times\mathbb{V}$ and satisfies \begin{equation}\label{eq:WProperties1} (\partial_t+H)W(t,x,y)=K(t,x,y).
\end{equation} for all $x,y\in\mathbb{V}$ and $t\in(0,T)$. Moreover, there are positive constants $C$ and $M$ for which \begin{equation}\label{eq:WProperties2} |W(t,x,y)|\leq Ct^{-\mu_H+\rho}\exp(-MR^{\#}(t^{-E}(x-y))) \end{equation} for all $x,y\in\mathbb{V}$ and $0<t\leq T$ where $\rho$ is that which appears in Lemma \ref{lem:KFiniteTerms}. \end{lemma} \begin{proof} The estimate \eqref{eq:WProperties2} follows from \eqref{eq:LegendreEstimate1} and \eqref{eq:PhiEstimate} by an analogous computation to that done in the proof of Lemma \ref{lem:KFiniteTerms}. It remains to show that, for each $y\in\mathbb{V}$, $W(\cdot,\cdot,y)$ is $(2\mathbf{m,v})$-regular and satisfies \eqref{eq:WProperties1} on $(0,T)\times\mathbb{V}$. These are both local properties and, as such, it suffices to examine them on $(t_0,T)\times\mathbb{V}$ for an arbitrary but fixed $t_0>0$. Let us write \begin{eqnarray*} W(t,x,y)&=&\int_{t_0}^t\int_{\mathbb{V}}G_p(t-s,x-z;z)\phi(s,z,y)\,dz\,ds+\int_0^{t_0}\int_{\mathbb{V}}G_p(t-s,x-z;z)\phi(s,z,y)\,dz\,ds\\ &=:&W_1(t,x,y)+W_2(t,x,y) \end{eqnarray*} for $x,y\in\mathbb{V}$ and $t_0<t<T$. 
In view of Lemmas \ref{holderphilemma} and \ref{lem:IntegralDifferentiation}, for each $y\in\mathbb{V}$, $W_1(\cdot,\cdot,y)$ is $(2\mathbf{m,v})$-regular on $(t_0,T)\times\mathbb{V}$ and \begin{eqnarray}\label{eq:WProperties3}\nonumber (\partial_t+H)W_1(t,x,y)&=&\partial_t W_1(t,x,y)+\sum_{|\beta:\mathbf{m}|\leq 2}a_{\beta}(x)D_{\mathbf{v}}^{\beta}W_1(t,x,y)\\\nonumber &=&\phi(t,x,y)+\lim_{h\downarrow 0}\int_{t_0}^{t-h}\int_{\mathbb{V}}\partial_tG_p(t-s,x-z;z)\phi(s,z,y)\,dz\,ds\\\nonumber &&\hspace{1cm}+\lim_{h\downarrow 0}\int_{t_0}^{t-h}\int_{\mathbb{V}}\sum_{|\beta:\mathbf{m}|\leq 2}a_{\beta}(x)D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)\phi(s,z,y)\,dz\,ds\\\nonumber &=&\phi(t,x,y)+\lim_{h\downarrow 0}\int_{t_0}^{t-h}\int_{\mathbb{V}}(\partial_t+H)G_p(t-s,x-z;z)\phi(s,z,y)\,dz\,ds\\ &=&\phi(t,x,y)-\lim_{h\downarrow 0}\int_{t_0}^{t-h}\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,dz\,ds \end{eqnarray} for all $x\in\mathbb{V}$ and $t_0<t<T$; here we have used the fact that \begin{equation*} (\partial_t+H)G_p(t-s,x-z;z)=-K(t-s,x,z). \end{equation*} Treating $W_2$ is easier because $\partial_t G_p(t-s,x-z;z)$ and, for each multi-index $\beta$, $D_{\mathbf{v}}^{\beta}G_p(t-s,x-z;z)$ are, as functions of $s$ and $z$, absolutely integrable on $(0,t_0]\times \mathbb{V}$ for every $t\in (t_0,T]$ and $x\in\mathbb{V}$ by virtue of Lemma \ref{lem:LegendreEstimate}. Consequently, derivatives may be taken under the integral sign and so it follows that, for each $y\in\mathbb{V}$, $W_2(\cdot,\cdot,y)$ is $(2\mathbf{m,v})$-regular on $(t_0,T)\times\mathbb{V}$ and \begin{equation}\label{eq:WProperties4} (\partial_t+H)W_2(t,x,y)=-\int_0^{t_0}\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,dz\,ds \end{equation} for $x\in\mathbb{V}$ and $t_0<t< T$.
We can thus conclude that, for each $y\in\mathbb{V}$, $W(\cdot,\cdot,y)$ is $(2\mathbf{m,v})$-regular on $(t_0,T)\times\mathbb{V}$ and, by combining \eqref{eq:WProperties3} and \eqref{eq:WProperties4}, \begin{equation*} (\partial_t+H)W(t,x,y)=\phi(t,x,y)-\lim_{h\downarrow 0}\int_0^{t-h}\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,dz\,ds \end{equation*} for $x\in\mathbb{V}$ and $t_0<t< T$. By \eqref{eq:K1Bound}, Proposition \ref{prop:IntegralIdentity} and the Dominated Convergence Theorem, \begin{eqnarray*} \lim_{h\downarrow 0}\int_0^{t-h}\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,dz\,ds&=&\int_0^{t}\int_{\mathbb{V}}K(t-s,x,z)\phi(s,z,y)\,dz\,ds\\ &=&\phi(t,x,y)-K(t,x,y) \end{eqnarray*} and therefore \begin{equation*} (\partial_t+H)W(t,x,y)=K(t,x,y) \end{equation*} for all $x,y\in\mathbb{V}$ and $t_0<t<T$. \end{proof} \noindent The theorem below is our main result. It is a more refined version of Theorem \ref{thm:FundamentalSolution} because it gives an explicit formula for the fundamental solution $Z$; in particular Theorem \ref{thm:FundamentalSolution} is an immediate consequence of the result below. \begin{theorem} Let $H$ be a uniformly $(2\mathbf{m,v})$-positive-semi-elliptic operator. If $H$ satisfies Hypothesis \ref{hypoth:HolderCoefficients} then $Z:(0,T]\times\mathbb{V}\times\mathbb{V}\rightarrow \mathbb{C}$, defined by \begin{equation}\label{eq:Main1} Z(t,x,y)=G_p(t,x-y;y)+W(t,x,y) \end{equation} for $x,y\in\mathbb{V}$ and $0<t\leq T$, is a fundamental solution to \eqref{eq:HeatEquation}. Moreover, there are positive constants $C$ and $M$ for which \begin{equation}\label{eq:Main2} |Z(t,x,y)|\leq \frac{C}{t^{\mu_H}}\exp\left(-tM R^{\#}\left(\frac{x-y}{t}\right)\right) \end{equation} for all $x,y\in\mathbb{V}$ and $0<t\leq T$. \end{theorem} \begin{proof} As $0<\rho<1$, \eqref{eq:WProperties2} and Lemma \ref{lem:LegendreEstimate} imply the estimate \eqref{eq:Main2}. 
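To spell out this first assertion (a short elaboration of ours): by \eqref{eq:WProperties2} together with the identity $R^{\#}(t^{-E}(x-y))=\sup\{\xi(x-y)-tR(\xi)\}=tR^{\#}((x-y)/t)$, which was noted in the derivation of \eqref{eq:KFiniteTerms2}, we have
\begin{equation*}
|W(t,x,y)|\leq Ct^{\rho}\,t^{-\mu_H}\exp\left(-tMR^{\#}\left(\frac{x-y}{t}\right)\right)\leq CT^{\rho}\,t^{-\mu_H}\exp\left(-tMR^{\#}\left(\frac{x-y}{t}\right)\right)
\end{equation*}
for all $x,y\in\mathbb{V}$ and $0<t\leq T$, which, combined with the bound for $G_p$ supplied by Lemma \ref{lem:LegendreEstimate}, gives \eqref{eq:Main2} after enlarging $C$.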
In view of Lemma \ref{lem:WProperties} and Corollary \ref{frozenfundsolcor}, for each $y\in\mathbb{V}$, $Z(\cdot,\cdot,y)$ is $(2\mathbf{m,v})$-regular on $(0,T)\times\mathbb{V}$ and \begin{eqnarray*} (\partial_t+H)Z(t,x,y)&=&(\partial_t+H)G_p(t,x-y;y)+(\partial_t+H)W(t,x,y)\\ &=&-K(t,x,y)+K(t,x,y)=0 \end{eqnarray*} for all $x\in\mathbb{V}$ and $0<t< T$. It remains to show that for any $f\in C_b(\mathbb{V})$, \begin{equation*} \lim_{t\rightarrow 0}\int_{\mathbb{V}}Z(t,x,y)f(y)\,dy=f(x) \end{equation*} for all $x\in\mathbb{V}$. Indeed, let $f\in C_b(\mathbb{V})$ and, in view of \eqref{eq:WProperties2}, observe that \begin{eqnarray*} \left|\int_{\mathbb{V}}W(t,x,y)f(y)\,dy\right|&\leq& Ct^{\rho}\|f\|_{\infty}\int_{\mathbb{V}}t^{-\mu_H}\exp(-MR^{\#}(t^{-E}(x-y)))\,dy\\ &\leq& Ct^{\rho}\|f\|_{\infty}\int_{\mathbb{V}}\exp(-MR^{\#}(y))\,dy\leq Ct^{\rho}\|f\|_{\infty} \end{eqnarray*} for all $x\in\mathbb{V}$ and $0<t\leq T$. An appeal to Lemma \ref{lem:ApproximateIdentity} gives, for each $x\in\mathbb{V}$, \begin{eqnarray*} \lim_{t\rightarrow 0}\int_{\mathbb{V}}Z(t,x,y)f(y)\,dy&=&\lim_{t\rightarrow 0}\int_{\mathbb{V}}G_p(t,x-y;y)f(y)\,dy+\lim_{t\rightarrow 0}\int_{\mathbb{V}}W(t,x,y)f(y)\,dy\\ &=&f(x)+0=f(x) \end{eqnarray*} as required. In fact, the above argument guarantees that this convergence happens uniformly on all compact subsets of $\mathbb{V}$. \end{proof} \noindent We remind the reader that implicit in the definition of fundamental solution to \eqref{eq:HeatEquation} is the condition that $Z$ is $(2\mathbf{m,v})$-regular. In fact, one can further deduce estimates for the spatial derivatives of $Z$, $D_{\mathbf{v}}^{\beta}Z$, of the form \eqref{eq:CCDerivativeEstimate} for all $\beta$ such that $|\beta:2\mathbf{m}|\leq 1$ (see \cite[p. 92]{Eidelman2004}). Using the fact that $Z$ satisfies \eqref{eq:HeatEquation} and $H$'s coefficients are bounded, an analogous estimate is obtained for a single $t$ derivative of $Z$.
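As a closing sanity check on the form of \eqref{eq:Main2} (this specialization is ours): for the classical heat operator $H=-\Delta$ on $\mathbb{R}^d$ one has $R(\xi)=|\xi|^2$, $R^{\#}(x)=|x|^2/4$ and $\mu_H=\tr E=d/2$, so the estimate \eqref{eq:Main2} reduces to the familiar Gaussian off-diagonal bound
\begin{equation*}
|Z(t,x,y)|\leq \frac{C}{t^{d/2}}\exp\left(-M\frac{|x-y|^2}{4t}\right)
\end{equation*}
for all $x,y\in\mathbb{R}^d$ and $0<t\leq T$.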
\noindent Evan Randles\footnote{This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153} : Department of Mathematics, University of California, Los Angeles, Los Angeles CA 90025. \newline E-mail: randles@math.ucla.edu\\ \noindent Laurent Saloff-Coste\footnote{This material is based upon work supported by the National Science Foundation under Grant No. DMS-1404435}: Department of Mathematics, Cornell University, Ithaca NY 14853. \newline E-mail: lsc@math.cornell.edu \end{document}
\begin{document} \title[QD and LQU in a vertical quantum dot]{Comparison of quantum discord and local quantum uncertainty in a vertical quantum dot} \author{E. Faizi, H. Eftekhari} \address{Physics Department, Azarbaijan Shahid Madani University} \ead{efaizi@azaruniv.edu} \ead{h.eftekhari@azaruniv.edu} \begin{abstract} In this paper, we consider quantum correlations (quantum discord and local quantum uncertainty) in a vertical quantum dot. Their dependence on magnetic field and temperature is presented in detail. It is noticeable that quantum discord and local quantum uncertainty behave similarly to a large extent. In addition, the time evolution of quantum discord and local quantum uncertainty under dephasing and amplitude damping channels is investigated. It has been found that, for some Bell-diagonal states, quantum discord is invariant under certain decoherence channels in a finite time interval [Phys. Rev. Lett. 104, 200401 (2010)]. Our results show that quantum discord in a vertical quantum dot is likewise invariant under the dephasing channel for a finite time interval, while this phenomenon does not occur for local quantum uncertainty. \end{abstract} \section{Introduction} The quantum correlations of a quantum state comprise entanglement and other types of nonclassical correlations. It is known that quantum correlations are more comprehensive than entanglement \cite{C. H. Bennett, W. H. Zurek}. A prominent and widely accepted measure of quantum correlation is the quantum discord (QD) \cite{H. Ollivier, K. Modi}, which indicates the quantumness of correlations. Considerable progress has been made concerning the significance and applications of quantum discord. In particular, exact expressions for quantum discord are known for some two-qubit states, such as the X states \cite{M. Ali, B. Li, Q. Chen, M. Shi}. Besides quantum discord, many other measures of quantum correlation have been proposed, such as the geometric measure of quantum discord (GMQD) \cite{S. Luo, B. Dakic}, the measurement-induced disturbance (MID) \cite{S .
Luo} and quantum deficit \cite{J. Oppenheim, M. Horodecki}. Lately Girolami et. al. \cite{D. Girolami} proposed the concept of local quantum uncertainty which determines the uncertainty in a quantum state as a result of measurement of a local observable. However, such quantifier is strong criterion to be considered as a accurate measure of quantumness in quantum states. Although because of inherent optimization, finding explicit expression is a difficult problem for most of the quantum correlations measures. For instance, the value of quantum discord is not known even for general bipartite qubit system. In bipartite systems with higher dimension, the results are known for only some certain states. Nevertheless, local quantum uncertainty (LQU) has closed form only for any qubit-qudit system.\\ Quantum dot (as the artificial atoms) devices are a well- controlled object for studying quantum many- body physics. Also, ground state single exciton qubits in quantum dots have been introduced for quantum computation tasks \cite{P. Solinas}. So it is worthwhile, investigation of the characteristics and properties of the quantum dot.\\ Decoherence of the quantum system due to interacting with its surrounding is the important difficulty to perform quantum computation tasks. Therefore, it is inevitable to specify the dynamical properties of quantum correlations for preserve the protocol to against decoherence. Many investigation have been paid to dynamics of quantum correlations both theoretically and experimentally in the Markovian \cite{M. Piani, J. Maziero} and non- Markovian \cite{Fanchini} environment. For instance, there is an many investigation on decoherence due to spin environment \cite{A. Hutton, F. M. Cucchietti, D. Rossini, H. T. Quan}, like single qubit coupled to the environment and two qubits coupled to the environment. In this paper, our goal is to study QD and LQU in a vertical quantum dot. 
Their dependence on magnetic field and temperature is also investigated.\\ The paper is organized as follows. In Sec. 2, we briefly recall QD and LQU. In Sec. 3 we investigate these quantities in a vertical quantum dot and give a detailed comparison; the effects of magnetic field and temperature are illustrated. Sec. 4 is devoted to the dynamics of QD in the dephasing and amplitude damping models, and the dynamics of LQU is compared with that of QD. In the last section, the conclusions are given. \section{Quantum discord, Local quantum uncertainty} \subsection{QD} For a bipartite quantum system, the quantum mutual information between the two subsystems A and B is \begin{eqnarray} I({\rho}_{AB})=S({\rho}_A)+S({\rho}_B)-S({\rho}_{AB}), \end{eqnarray} where $S({\rho})=-Tr(\rho\log_2\rho)$ is the von Neumann entropy of the density matrix $\rho$. The quantum mutual information has fundamental physical importance and is generally applied as a measure of the total correlations, which contain both quantum and classical ones. The classical correlation may be defined via projective measurements. Assume one carries out a set of projective measurements $\{\Pi_k^B\}$ on subsystem B; the probability of the outcome k is $P_k =Tr_{AB}[({I^A}\otimes{\Pi_k^B}){\rho}_{AB}({I^A}\otimes{\Pi_k^B})]$, where $I^A$ is the identity operator on subsystem A. After this measurement, the state of subsystem A is described by the conditional density operator ${\rho}_{A\mid{k}}=Tr_B[({I^A}\otimes{\Pi_k^B}){\rho}_{AB}({I^A}\otimes{\Pi_k^B})]/P_k$. The supremum of the difference between the von Neumann entropy $S(\rho_A)$ and the measurement-based quantum conditional entropy $\sum_kP_kS({\rho}_{A\mid{k}})$ of subsystem A, i.e. \cite{H. Ollivier, S . Luo, J. Maziero, V. Vedral}, \begin{eqnarray} C({\rho}_{AB})=\sup_{\{\Pi_k^B\}}[S({\rho}_A)-\sum_kP_kS({\rho}_{A\mid{k}})], \end{eqnarray} defines the classical correlation of the two subsystems.
The supremum is taken over all possible projective measurements. Finally, the quantum discord is defined as the difference between the total and classical correlations \cite{H. Ollivier, J. Maziero, V. Vedral}: \begin{eqnarray} D({\rho}_{AB})=I({\rho}_{AB})-C({\rho}_{AB}). \end{eqnarray} \subsection{LQU} Recently, a measure of quantum correlations for bipartite quantum systems, the local quantum uncertainty (LQU), was introduced by Girolami et al. \cite{D. Girolami}. The LQU is defined as \begin{eqnarray} U_A=\min_{K^A}I(\rho_{AB}, K^A), \end{eqnarray} where the two parts are denoted by A and B, the minimum is taken over all nondegenerate local observables on part A, $K^A=\Lambda{^A}\otimes{{I}_B}$, and \begin{eqnarray} I(\rho,K)=-\frac{1}{2}Tr\{[\sqrt{\rho},K^A]^2\}, \end{eqnarray} is the skew information introduced in Ref. \cite{E. P. Wigner}. The closed form of the LQU for $2\times{d}$ quantum systems is \cite{D. Girolami}: \begin{eqnarray} U_A=1-\lambda_{max}(W), \end{eqnarray} where $\lambda_{max}$ is the maximum eigenvalue of the $3\times{3}$ matrix W with elements $W_{ij}=Tr\{\sqrt{\rho}(\sigma_{i}\otimes{I})\sqrt{\rho}(\sigma_{j}\otimes{I})\}$, and $\sigma_i$ ($i=1, 2, 3$) are the Pauli matrices. \section{Quantum discord and Local quantum uncertainty in a vertical quantum dot} In this section, we investigate QD and LQU in a vertical quantum dot. The effects of magnetic field and temperature on these quantities are demonstrated. Moreover, we compare these quantities and illustrate their different properties.\\ The reduced Hamiltonian of the quantum dot is written as \cite{L. G. Qin}: \begin{eqnarray} \hat{H}=\frac{k_0}{4}\hat{S_1}\cdot\hat{S_2}-\gamma{B_0}\hat{S^3}, \end{eqnarray} where $\gamma$ is the gyromagnetic ratio, $k_0=\delta-2E_s>0$ is the bare value at $B=0$, and $\hat{S}^3$ is the third component of the total spin.
$B_0$ is the magnetic field at the degeneracy point, $\delta$ is the level spacing and $E_s$ is the exchange energy. In the standard basis $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$, the thermal density matrix $\rho(T)$ of the system reads \cite{L. G. Qin} \begin{eqnarray} \rho(T)={\frac{1}{Z}} \left( \begin{array}{cccc} u & 0 & 0 & 0\\ 0 & w & y & 0\\ 0 & y & w & 0\\ 0 & 0 & 0 & v\\ \end{array} \right), \end{eqnarray} in which the nonzero matrix elements are given by \begin{eqnarray} u=\exp(-\frac{k_0-16\gamma{B_0}}{16T}),\nonumber\\ v=\exp(-\frac{k_0+16\gamma{B_0}}{16T}),\nonumber\\ w=\frac{1}{2}[\exp(\frac{-k_0}{16T})+\exp(\frac{3k_0}{16T})],\nonumber\\ y=\frac{1}{2}[\exp(\frac{-k_0}{16T})-\exp(\frac{3k_0}{16T})], \end{eqnarray} and $Z=u+2w+v$. Here $\gamma$ and $B_0$ always appear in the combination $\gamma{B_0}$, so we abbreviate $\gamma{B_0}=r$.\\ One finds that the ground state of the Hamiltonian in eq. (7) becomes separable when the magnetic field is strong enough, whereas it is entangled when $k_0$ is large enough (for more detail see the eigenvalues and eigenvectors of the reduced Hamiltonian in \cite{L. G. Qin}). From this point of view, a strong magnetic field shrinks the quantum correlation measured by the QD, while a large $k_0$ can produce a large amount of quantum correlation.\\ The matrix (8) is an X matrix, whose discord has been studied in \cite{M. Ali}. Although this reference contains a mistake concerning the number of optimization parameters in the calculation of the classical part of the mutual correlations \cite{M. Ali, Y.Huang}, this mistake is irrelevant in our case because the element $\rho_{14}$ vanishes in the density matrix (8); as a consequence, we have only one optimization parameter. Thus, we use the algorithm developed in the above reference for the calculation of the discord. From the density matrix given in eq.
(8), we can find the analytical expressions of QD as given in refs. \cite{M. Ali, F. F. Fanchini}. The QD of the density matrix in eq. (8) can be computed directly and takes the following form: \begin{eqnarray} D=\min\{D_1,D_2\}, \end{eqnarray} where $D_1$ and $D_2$ read respectively \begin{eqnarray} &D_1=S(\rho_A)-S(\rho)-\frac{1}{Z}[v\log_2{\frac{v}{w+v}}+w\log_2{\frac{w}{w+v}}]\\\nonumber &-\frac{1}{Z}[u\log_2{\frac{u}{w+u}}+w\log_2{\frac{w}{w+u}}],\\\nonumber &D_2=S(\rho_A)-S(\rho)-\frac{1-\Gamma}{2}\log_2\frac{1-\Gamma}{2}-\frac{1+\Gamma}{2}\log_2\frac{1+\Gamma}{2}, \end{eqnarray} where $\rho_A$ is the reduced density matrix of $\rho$ in eq. (8), obtained by tracing out the second party, and $\Gamma$ satisfies $\Gamma=\sqrt{{(u-v)^2+4|y|^2}}/{Z}$. The analytical expression shows that the QD depends on the temperature, the magnetic field and $k_0$.\\ The LQU of the thermal density matrix in eq. (8) takes the form \begin{eqnarray} U_A=1-\max\{\lambda_1,\lambda_2\}, \end{eqnarray} where $\lambda_1=2(\sqrt{u}+\sqrt{v})(\frac{\sqrt{w-y}}{2}+\frac{\sqrt{w+y}}{2})$ and $\lambda_2=(u+v)+2(\frac{\sqrt{w-y}}{2}+\frac{\sqrt{w+y}}{2})^2-2(\frac{\sqrt{w+y}}{2}-\frac{\sqrt{w-y}}{2})^2$ are the relevant eigenvalues of the $3\times{3}$ matrix W; here $u$, $v$, $w$, $y$ denote the matrix elements normalized by $Z$.\\ After some calculation, we find that QD and LQU are symmetric under the change $r\rightarrow -r$, so we will consider only $r>0$ in our calculations.
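The quantities above can be cross-checked numerically. The sketch below (assuming numpy; the function names \texttt{thermal\_state}, \texttt{lqu} and \texttt{discord} are ours) builds the thermal X state of eq. (8), evaluates the LQU through the W matrix of Sec. 2.2, and computes the QD of eq. (3) by direct minimization of the conditional entropy over projective measurements on qubit B:

```python
import numpy as np

SX = np.array([[0., 1.], [1., 0.]])
SY = np.array([[0., -1j], [1j, 0.]])
SZ = np.array([[1., 0.], [0., -1.]])

def thermal_state(k0, r, T):
    """X-form thermal density matrix of eq. (8), elements from eq. (9); r = gamma*B0."""
    u = np.exp(-(k0 - 16 * r) / (16 * T))
    v = np.exp(-(k0 + 16 * r) / (16 * T))
    w = 0.5 * (np.exp(-k0 / (16 * T)) + np.exp(3 * k0 / (16 * T)))
    y = 0.5 * (np.exp(-k0 / (16 * T)) - np.exp(3 * k0 / (16 * T)))
    Z = u + 2 * w + v
    return np.array([[u, 0, 0, 0],
                     [0, w, y, 0],
                     [0, y, w, 0],
                     [0, 0, 0, v]]) / Z

def _entropy(p):
    p = p[p > 1e-12]                       # drop numerically zero eigenvalues
    return float(-(p * np.log2(p)).sum())

def lqu(rho):
    """LQU via eq. (6): U_A = 1 - lambda_max(W),
    W_ij = Tr[sqrt(rho) (sigma_i x I) sqrt(rho) (sigma_j x I)]."""
    vals, vecs = np.linalg.eigh(rho)
    sq = vecs @ np.diag(np.sqrt(vals.clip(0))) @ vecs.conj().T
    ops = [np.kron(s, np.eye(2)) for s in (SX, SY, SZ)]
    W = np.array([[np.trace(sq @ a @ sq @ b).real for b in ops] for a in ops])
    return 1.0 - np.linalg.eigvalsh(W).max()

def discord(rho, n=400):
    """QD via eq. (3), minimizing the conditional entropy over projective
    measurements on qubit B.  A real one-angle family suffices for the state
    (8), since the only coherence y enters conditional states through its modulus."""
    S_ab = _entropy(np.linalg.eigvalsh(rho).clip(0))
    rho_b = rho[:2, :2] + rho[2:, 2:]       # trace out qubit A (basis index 2a+b)
    S_b = _entropy(np.linalg.eigvalsh(rho_b).clip(0))
    best = np.inf
    for th in np.linspace(0.0, np.pi / 2, n):
        basis = (np.array([np.cos(th), np.sin(th)]),
                 np.array([-np.sin(th), np.cos(th)]))
        cond = 0.0
        for k in basis:
            P = np.kron(np.eye(2), np.outer(k, k))   # I_A x |k><k|
            m = P @ rho @ P
            pk = np.trace(m).real
            if pk > 1e-12:
                rho_ak = (m[0::2, 0::2] + m[1::2, 1::2]) / pk   # trace out qubit B
                cond += pk * _entropy(np.linalg.eigvalsh(rho_ak).clip(0))
        best = min(best, cond)
    return S_b - S_ab + best               # D = I - C, eqs. (1)-(3)
```

Scanning $r$, $k_0$ or $T$ with these functions reproduces the qualitative trends discussed below; in particular, `lqu` agrees with the closed form $1-\max\{\lambda_1,\lambda_2\}$ built from the normalized matrix elements.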
The influence of the parameters on QD and LQU in the quantum dot is discussed in detail as follows.\\ \begin{figure} \caption{Comparison of QD and LQU vs $r$ for fixed $k_0$ and T.} \label{fig1} \end{figure} \begin{figure} \caption{Comparison of QD and LQU vs $k_0$ for fixed $r$ and T.} \label{fig2} \end{figure} \begin{figure} \caption{QD and LQU vs temperature (T) and r in the case of $k_0=10$.} \label{fig3} \end{figure} \begin{figure} \caption{QD and LQU vs r for different temperatures (T) and $k_0=10$.} \label{fig4} \end{figure} First, we analyze the sensitivity of QD and LQU to the parameters $k_0$ and r; the results are given in Figures 1 and 2. From these figures we see that the behaviors of QD and LQU are similar to a large extent. When the temperature is nonzero, QD and LQU change with r for fixed $k_0$: the larger r is, the smaller both QD and LQU are. In this sense, a high r shrinks the QD and LQU, and both decrease asymptotically to a very small value. From Figure 2 we see that $k_0>0$ shows more quantum correlation than $k_0<0$, and a large $k_0$ produces a large quantum correlation. Although for $k_0<0$ there is no entanglement (see ref. \cite{L. G. Qin}), QD and LQU persist. Moreover, when r is zero the quantum correlations take higher values, because the ground state becomes maximally entangled. \\ Secondly, we examine the effect of the temperature on the QD and the LQU; the results are given in Figure 3. For $T\neq0$, the QD and LQU decrease with increasing temperature, though they decrease more slowly when the temperature is higher. From Figure 3 we also find that the QD and the LQU are not sensitive to the magnetic field when the temperature exceeds a value of about 2.
This indicates that the quantum correlations measured by the QD and LQU may not be affected efficiently by the magnetic field when the system temperature is high. To better illustrate the sensitivity of the QD and the LQU to the temperature, we plot Figure 4. From Figure 4, we find that the QD and LQU are sensitive to the magnetic field when the temperature is low, while for the high-temperature case T=4 they are no longer sensitive and remain stable. \section{Evolution of QD and LQU in the vertical quantum dot under noisy channels} In order to calculate the quantum discord between two qubits subject to dissipative channels, we adopt the following approach. The dynamics of two qubits interacting independently with distinct environments is described by the solutions of the Born-Markov-Lindblad equations \cite{H. Carmichael}, which can be obtained conveniently by the Kraus operator method \cite{M. A. Nielsen}. Given an initial two-qubit state $\rho(0)$, its time evolution can be written as \begin{eqnarray} \rho(t)=\Sigma_{\mu,\nu}E_{\mu,\nu}\rho(0){E_{\mu,\nu}^\dag}, \end{eqnarray} where the Kraus operators $E_{\mu,\nu}=E_\mu\otimes{E_\nu}$ \cite{M. A. Nielsen} satisfy $\Sigma_{\mu,\nu}{E_{\mu,\nu}^\dag}E_{\mu,\nu}=I$ for all t. The operators $E_{\mu}$ characterize the one-qubit quantum channel effects. We present below what happens to the QD and LQU of the two qubits under the dephasing and amplitude damping channels. \subsection{Dephasing channel} Here we examine the time evolution of the vertical quantum dot, first under the phase damping channel and then under the amplitude damping channel, beginning with the time dependence of QD and LQU for the vertical quantum dot. Recently, it has been shown that for some Bell-diagonal states (BDS) the quantum discord is invariant under certain decoherence for a finite time interval \cite{L. Mazzola}.
An interesting question is whether this phenomenon occurs in other systems.\\ In the remainder of this section we consider the state of the density matrix $\rho$ in Eq. (8) undergoing the dephasing channel. The Kraus operators for a dephasing channel are given by $E_0=diag(1,\sqrt{1-\gamma})$ and $E_1=diag(0,\sqrt{\gamma})$, where $\gamma=1-e^{-\Gamma_{ph}{t}}$, with $\Gamma_{ph}$ denoting the decay rate \cite{M. A. Nielsen}. Under the effect of phase noise the only time-dependent element is y; the other elements of the density matrix remain unchanged: \begin{eqnarray} y(t)=y(0)(1-\gamma). \end{eqnarray} In fact, the time-dependent parameter $\gamma$ may differ for qubits A and B, but we take it to be identical.\\ The results are shown in figure 5. We can see that the behaviors of QD and LQU under this channel are different. In particular, the evolution of QD is noticeable: it behaves smoothly and remains stable over a finite time interval, whereas LQU decreases monotonically with increasing t. \begin{figure} \caption{QD and LQU vs $\Gamma_{ph}.t$ and $k_0$ in the case of r=1 and T=0.4.} \label{fig5} \end{figure} \subsection{Amplitude damping channel} Next we consider the time evolution under amplitude noise. The Kraus operators for an amplitude damping channel are given by $F_1=diag(\sqrt{1-\gamma},1)$ and $F_2=\frac{\sqrt{\gamma}}{2}(\sigma_1-i\sigma_2)$. From the appropriate Kraus operators given in \cite{T. Yu} we find the following time dependence of $\rho(t)$: \begin{eqnarray} &u(t)=u(0)(1-\gamma)^2,\\\nonumber &y(t)=y(0)(1-\gamma),\\\nonumber &w(t)=w(0)(1-\gamma)+u(0)(1-\gamma)\gamma,\\\nonumber &v(t)=v(0)+u(0)\gamma^2+2w(0)\gamma, \end{eqnarray} where $\gamma=1-\exp(-\Gamma_{am}t)$, and $\Gamma_{am}$ indicates the decay rate of the qubits. The results are shown in figure 6. We can see from figure 6 that, in contrast to the dephasing channel case, here the behaviors of QD and LQU are approximately similar.
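The channel maps of this section can be implemented directly from eq. (13) with the single-qubit Kraus operators quoted above. The following sketch (numpy; the helper names are ours) does so, and can be used to verify trace preservation and the element-wise time dependence of an X-form state, e.g. that dephasing damps only the coherence y:

```python
import numpy as np

def kraus_evolve(rho0, ops):
    """Two-qubit channel of eq. (13): rho(t) = sum_{mu,nu} E_{mu,nu} rho(0) E_{mu,nu}^dag,
    with E_{mu,nu} = E_mu (x) E_nu acting independently on the two qubits."""
    out = np.zeros_like(rho0)
    for Em in ops:
        for En in ops:
            K = np.kron(Em, En)
            out += K @ rho0 @ K.conj().T
    return out

def dephasing_ops(g):
    """Dephasing channel: E_0 = diag(1, sqrt(1-g)), E_1 = diag(0, sqrt(g)),
    with g = 1 - exp(-Gamma_ph * t)."""
    return [np.diag([1.0, np.sqrt(1.0 - g)]), np.diag([0.0, np.sqrt(g)])]

def amplitude_ops(g):
    """Amplitude damping in the convention used here: F_1 = diag(sqrt(1-g), 1),
    F_2 = sqrt(g) |1><0|, with g = 1 - exp(-Gamma_am * t)."""
    return [np.diag([np.sqrt(1.0 - g), 1.0]),
            np.sqrt(g) * np.array([[0.0, 0.0], [1.0, 0.0]])]
```

Applied to an X-form state with elements (u, w, y, v), `dephasing_ops` leaves the populations fixed and scales y by $(1-\gamma)$, while `amplitude_ops` redistributes the populations toward $|11\rangle$ at constant trace.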
Both of them decrease asymptotically with growing time, while they increase with the absolute value of $k_0$. \begin{figure} \caption{QD and LQU vs $\Gamma_{am}.t$ and $k_0$ in the case of r=1 and T=0.4.} \label{fig6} \end{figure} \section{Conclusion} In summary, we have investigated QD and LQU in a vertical quantum dot. Our results imply that the QD and LQU depend on the magnetic field, $k_0$ and the equilibrium temperature of the system. The behavior of the LQU is similar to that of the QD to a large extent; both change smoothly, without any sudden transitions. The larger $k_0$ is, the larger the QD is, and the stronger the magnetic field is, the smaller the QD is. With regard to the effect of temperature, we find that the higher the temperature is, the smaller the QD is; specifically, the QD is not sensitive to the change of temperature when the temperature is higher than a value of about 2. Moreover, we have studied the dynamics of QD in the dephasing and amplitude damping models, and compared the dynamics of LQU with that of QD. In particular, the evolution of QD in the dephasing channel is noticeable: it behaves smoothly and remains stable over a finite time interval. \end{document}
{\mbox{\tt $\star$}} \begin {\mbox{\tt $\star$}} {document {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \thispagestyle {\mbox{\tt $\star$}} {empty {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \title{ space{-15mm} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \SBIMSMark {\mbox{\tt $\star$}} {1997 {\mbox{\tt $\star$}} /13 {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {November {\mbox{\tt $\star$}} 1997 {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \thispagestyle {\mbox{\tt $\star$}} {empty {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \begin {\mbox{\tt $\star$}} {center {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \begin {\mbox{\tt $\star$}} {minipage {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {80mm {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \def {\mbox{\tt $\star$}} \toc {\mbox{\tt $\star$}} _vspace {\mbox{\tt $\star$}} {1 {\mbox{\tt $\star$}} .5em {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \def {\mbox{\tt $\star$}} Contents {\mbox{\tt $\star$}} {Contents {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \tableofcontents {\mbox{\tt $\star$}} {0 {\mbox{\tt $\star$}} .3em {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \end {\mbox{\tt $\star$}} {minipage {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \end {\mbox{\tt $\star$}} {center {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \vspace {\mbox{\tt $\star$}} {5mm {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \begin {\mbox{\tt $\star$}} {abstract {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} We {\mbox{\tt $\star$}} give {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} new {\mbox{\tt $\star$}} proof {\mbox{\tt 
$\star$}} that {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} external {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} rational {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} land {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} relation {\mbox{\tt $\star$}} between {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} external {\mbox{\tt $\star$}} angle {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} such {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamics {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Our {\mbox{\tt $\star$}} proof {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} different {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} original {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} given {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} Douady {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} Hubbard {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} refined {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} Lavaurs {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} several {\mbox{\tt $\star$}} ways {\mbox{\tt $\star$}} : {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} replaces {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} arguments {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} combinatorial {\mbox{\tt $\star$}} ones {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} does {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} use {\mbox{\tt $\star$}} complex {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} dependence {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} polynomials {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} 
respect {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} parameters {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} thus {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} made {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} apply {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} non {\mbox{\tt $\star$}} -complex {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} spaces {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} proof {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} also {\mbox{\tt $\star$}} technically {\mbox{\tt $\star$}} simpler {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Finally {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} we {\mbox{\tt $\star$}} derive {\mbox{\tt $\star$}} several {\mbox{\tt $\star$}} corollaries {\mbox{\tt $\star$}} about {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} components {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} Along {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} way {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} we {\mbox{\tt $\star$}} introduce {\mbox{\tt $\star$}} partitions {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} dynamical {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} planes {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} independent {\mbox{\tt $\star$}} interest {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} we {\mbox{\tt $\star$}} interpret {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} as {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} symbolic {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} space {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} kneading {\mbox{\tt $\star$}} sequences {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} internal {\mbox{\tt $\star$}} addresses {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} Nous {\mbox{\tt $\star$}} donnons {\mbox{\tt $\star$}} une {\mbox{\tt $\star$}} nouvelle {\mbox{\tt $\star$}} d {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'emonstration {\mbox{\tt $\star$}} que {\mbox{\tt $\star$}} tous {\mbox{\tt $\star$}} les {\mbox{\tt $\star$}} rayons {\mbox{\tt $\star$}} externes {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} `a {\mbox{\tt $\star$}} arguments {\mbox{\tt $\star$}} rationels {\mbox{\tt $\star$}} de {\mbox{\tt $\star$}} l {\mbox{\tt $\star$}} 'ensemble {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} aboutissent {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} et {\mbox{\tt $\star$}} nous {\mbox{\tt $\star$}} montrons {\mbox{\tt $\star$}} la {\mbox{\tt $\star$}} relation {\mbox{\tt $\star$}} entre {\mbox{\tt $\star$}} l {\mbox{\tt $\star$}} 'argument {\mbox{\tt $\star$}} externe {\mbox{\tt $\star$}} d {\mbox{\tt $\star$}} 'un 
{\mbox{\tt $\star$}} tel {\mbox{\tt $\star$}} rayon {\mbox{\tt $\star$}} et {\mbox{\tt $\star$}} la {\mbox{\tt $\star$}} dynamique {\mbox{\tt $\star$}} au {\mbox{\tt $\star$}} param {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} `etre {\mbox{\tt $\star$}} o {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} `u {\mbox{\tt $\star$}} le {\mbox{\tt $\star$}} rayon {\mbox{\tt $\star$}} aboutit {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Notre {\mbox{\tt $\star$}} d {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'emonstration {\mbox{\tt $\star$}} est {\mbox{\tt $\star$}} diff {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'erente {\mbox{\tt $\star$}} de {\mbox{\tt $\star$}} l {\mbox{\tt $\star$}} 'originale {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} donn {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'ee {\mbox{\tt $\star$}} par {\mbox{\tt $\star$}} Douady {\mbox{\tt $\star$}} et {\mbox{\tt $\star$}} Hubbard {\mbox{\tt $\star$}} et {\mbox{\tt $\star$}} elabor {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'ee {\mbox{\tt $\star$}} par {\mbox{\tt $\star$}} Lavaurs {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} `a {\mbox{\tt $\star$}} plusieurs {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'egards {\mbox{\tt $\star$}} : {\mbox{\tt $\star$}} elle {\mbox{\tt $\star$}} remplace {\mbox{\tt $\star$}} des {\mbox{\tt $\star$}} arguments {\mbox{\tt $\star$}} analytiques {\mbox{\tt $\star$}} par {\mbox{\tt $\star$}} des {\mbox{\tt $\star$}} arguments {\mbox{\tt $\star$}} combinatoires {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} elle {\mbox{\tt $\star$}} n {\mbox{\tt $\star$}} 'utilise {\mbox{\tt $\star$}} pas {\mbox{\tt $\star$}} la {\mbox{\tt $\star$}} d {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'ependence {\mbox{\tt $\star$}} analytique {\mbox{\tt $\star$}} des {\mbox{\tt $\star$}} polyn {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} ^omes {\mbox{\tt $\star$}} par {\mbox{\tt $\star$}} rapport {\mbox{\tt $\star$}} au {\mbox{\tt $\star$}} param {\mbox{\tt $\star$}} 
{\mbox{\tt $\star$}} `etre {\mbox{\tt $\star$}} et {\mbox{\tt $\star$}} peut {\mbox{\tt $\star$}} donc {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} ^etre {\mbox{\tt $\star$}} appliqu {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'ee {\mbox{\tt $\star$}} aux {\mbox{\tt $\star$}} espaces {\mbox{\tt $\star$}} de {\mbox{\tt $\star$}} param {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} `etres {\mbox{\tt $\star$}} qui {\mbox{\tt $\star$}} ne {\mbox{\tt $\star$}} sont {\mbox{\tt $\star$}} pas {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} analytiques {\mbox{\tt $\star$}} complexes {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} la {\mbox{\tt $\star$}} d {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'emonstration {\mbox{\tt $\star$}} est {\mbox{\tt $\star$}} aussi {\mbox{\tt $\star$}} techniquement {\mbox{\tt $\star$}} plus {\mbox{\tt $\star$}} facile {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Finalement {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} nous {\mbox{\tt $\star$}} d {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'emontrons {\mbox{\tt $\star$}} quelques {\mbox{\tt $\star$}} corollaires {\mbox{\tt $\star$}} sur {\mbox{\tt $\star$}} les {\mbox{\tt $\star$}} composantes {\mbox{\tt $\star$}} hyperboliques {\mbox{\tt $\star$}} de {\mbox{\tt $\star$}} l {\mbox{\tt $\star$}} 'ensemble {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} En {\mbox{\tt $\star$}} route {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} nous {\mbox{\tt $\star$}} introduisons {\mbox{\tt $\star$}} des {\mbox{\tt $\star$}} partitions {\mbox{\tt $\star$}} du {\mbox{\tt $\star$}} plan {\mbox{\tt $\star$}} dynamique {\mbox{\tt $\star$}} et {\mbox{\tt $\star$}} de {\mbox{\tt $\star$}} l {\mbox{\tt $\star$}} 'espace {\mbox{\tt $\star$}} des {\mbox{\tt $\star$}} param {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} `etres {\mbox{\tt $\star$}} qui {\mbox{\tt $\star$}} sont {\mbox{\tt $\star$}} int {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'eressantes {\mbox{\tt $\star$}} en {\mbox{\tt $\star$}} elles {\mbox{\tt $\star$}} -m {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} ^emes {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} et {\mbox{\tt $\star$}} nous {\mbox{\tt $\star$}} interpr {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} 'etons {\mbox{\tt $\star$}} l {\mbox{\tt $\star$}} 'en {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} -sem {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} -ble {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} comme {\mbox{\tt $\star$}} un {\mbox{\tt $\star$}} espace {\mbox{\tt $\star$}} de {\mbox{\tt $\star$}} param {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} `etres {\mbox{\tt $\star$}} symboliques {\mbox{\tt $\star$}} contenant {\mbox{\tt $\star$}} des {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} \sl {\mbox{\tt $\star$}} kneading {\mbox{\tt $\star$}} sequences {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} et {\mbox{\tt $\star$}} des {\mbox{\tt $\star$}} adresses {\mbox{\tt $\star$}} internes {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \end {\mbox{\tt $\star$}} {abstract {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \pagebreak {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \newsection {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {Introduction {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \label {\mbox{\tt $\star$}} {SecIntro {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} Quadratic {\mbox{\tt $\star$}} polynomials {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} when {\mbox{\tt $\star$}} iterated {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} exhibit {\mbox{\tt $\star$}} amazingly {\mbox{\tt $\star$}} rich {\mbox{\tt $\star$}} dynamics {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Up {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} affine {\mbox{\tt $\star$}} conjugation {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} these {\mbox{\tt $\star$}} polynomials {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} parametrized {\mbox{\tt $\star$}} uniquely {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} single {\mbox{\tt $\star$}} complex {\mbox{\tt $\star$}} variable {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} serves {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} organize {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} space {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (conjugacy {\mbox{\tt $\star$}} classes {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} quadratic {\mbox{\tt $\star$}} polynomials {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} It {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} understood {\mbox{\tt $\star$}} as {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} ` {\mbox{\tt $\star$}} `table {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} contents {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamical {\mbox{\tt $\star$}} possibilities {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} most {\mbox{\tt $\star$}} beautiful {\mbox{\tt $\star$}} structure {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Much {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} structure {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} been {\mbox{\tt $\star$}} discovered {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} explained {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} groundbreaking {\mbox{\tt $\star$}} work {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} Douady {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} Hubbard {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \cite {\mbox{\tt $\star$}} {Orsay {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} deeper {\mbox{\tt $\star$}} understanding {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} fine {\mbox{\tt $\star$}} structure {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} very {\mbox{\tt $\star$}} active {\mbox{\tt $\star$}} area {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} research {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} importance {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} due {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} fact {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} simplest {\mbox{\tt $\star$}} non {\mbox{\tt $\star$}} -trivial {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} space {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} families {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} iterated {\mbox{\tt $\star$}} holomorphic {\mbox{\tt $\star$}} maps {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} because {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} its {\mbox{\tt $\star$}} universality {\mbox{\tt $\star$}} as {\mbox{\tt $\star$}} explained {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} Douady {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} Hubbard {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \cite {\mbox{\tt $\star$}} {Polylike {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} : {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} typical {\mbox{\tt $\star$}} local {\mbox{\tt $\star$}} configuration {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} -dimensional {\mbox{\tt $\star$}} complex {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} spaces {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (see {\mbox{\tt $\star$}} also {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \cite {\mbox{\tt $\star$}} {CurtUniv {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} . 
\begin{figure}[htbp]
\centerline{\psfig{figure=P_MandelRays.eps,height=75mm}}
\centerline{\parbox{\captionwidth}{
\caption{\sl The Mandelbrot set and several of its parameter rays which are mentioned in the text.
(Picture courtesy of Jack Milnor)}
\label{FigMandelRays}
}}
\end{figure}

Unfortunately, most of the beautiful results of Douady and Hubbard on the structure of the Mandelbrot set are written only in preliminary form in the preprints \cite{Orsay}. The purpose of this article is to provide concise proofs of several of their theorems.
Our proofs are quite different from the original ones in several respects: while Douady and Hubbard used elaborate perturbation arguments for many basic results, we introduce partitions of dynamical and parameter planes, describe them by symbolic dynamics, and reduce many of the questions to a combinatorial level. We feel that our proofs are technically significantly simpler than those of Douady and Hubbard.
An important difference for certain applications is that our proof does not use complex analytic dependence of the maps with respect to the parameter and is therefore applicable in certain wider circumstances: the initial motivation for this research was a project with Nakane (see \cite{Nakane} and the references therein) to understand the parameter space of antiholomorphic quadratic polynomials, which depends only real-analytically on the parameter. Of course, the ``standard proof'' using Fatou coordinates and Ecalle cylinders, as developed by Douady and Hubbard and elaborated by Lavaurs~\cite{LaEc}, is a most powerful tool giving interesting insights; it has had many important applications. Our goal is to present an alternative approach in order to enlarge the toolbox for applications in different situations.
The fundamental result we want to describe in this article is the following theorem about landing properties of external rays of the Mandelbrot set, a theorem due to Douady and Hubbard; for background and terminology, see the next section.
\begin{theorem}[The Structure Theorem of the Mandelbrot Set]
\label{ThmRatRays}
\rule{0pt}{0pt}\par\noindent
\vspace{-24pt}
\begin{enumerate}
\item Every parameter ray at a periodic angle $\theta$ lands at a parabolic parameter such that, in its dynamic plane, the dynamic ray at angle $\theta$ lands at the parabolic orbit and is one of its two characteristic rays.
\item Every parabolic parameter $c$ is the landing point of exactly two parameter rays at periodic angles. These angles are the characteristic angles of the parabolic orbit in the dynamic plane of $c$.
\item Every parameter ray at a preperiodic angle $\theta$ lands at a Misiurewicz point such that, in its dynamic plane, the dynamic ray at angle $\theta$ lands at the critical value.
\item Every Misiurewicz point $c$ is the landing point of a finite non-zero number of parameter rays at preperiodic angles. These angles are exactly the external angles of the dynamic rays which land at the critical value in the dynamic plane of $c$.
\end{enumerate}
\end{theorem}

(The parameter $c=1/4$ is the landing point of a single parameter ray, but this ray corresponds to external angles $0$ and $1$; we count this ray twice in order to avoid having to state exceptions.)

The organization of this article is as follows: in Section~\ref{SecDynamics}, we describe necessary terminology from complex dynamics and give a few fundamental lemmas. Section~\ref{SecPeriodic} contains a proof of the periodic part of the theorem, and along the way it shows how to interpret the Mandelbrot set as a parameter space of kneading sequences.
The preperiodic part of the theorem is then proved in Section~\ref{SecPreperiodic}, using properties of kneading sequences. In the final Section~\ref{SecHyperbolic}, we show that the methods of Section~\ref{SecPeriodic} can be used to prove an Orbit Separation Lemma, which has interesting consequences about hyperbolic components of the Mandelbrot set.
Most of the results and proofs in this paper work also for ``Multibrot sets'': these are the connectedness loci of the polynomials $z^d+c$ for $d\geq 2$.

This article is an elaborated version of Chapter~2 of my Ph.D.\ thesis~\cite{DSThesis} at Cornell University, written under the supervision of John Hubbard and submitted in the summer of 1994. It is part of a mathematical ping-pong with John Milnor: it builds at important places on the paper \cite{GM}; recently Milnor has written a most beautiful new paper \cite{MiOrbits} investigating external rays of the Mandelbrot set from the point of view of ``orbit portraits'', i.e., landing patterns of periodic dynamic rays. I have not tried to hide how much both I and this paper have profited from many discussions with him, as will become apparent at many places. This paper, as well as Milnor's new one, uses certain global counting arguments to provide estimates, but in different directions.
It is a current project to combine both approaches to provide a new, more conceptual proof without global counting. Proofs in a similar spirit of further fundamental properties of the Mandelbrot set can be found in \cite{DS_Fibers}.

{\sc Acknowledgements.} It is a pleasure to thank many people who have contributed to this paper in a variety of ways.
I am most grateful to John Hubbard for so much help and friendship over the years. To John Milnor, I am deeply indebted for the ping-pong mentioned above, as well as for many discussions and a lot of encouragement along the way.
Many more people have shared their ideas and understanding with me and will recognize their contributions; they include Bodil Branner, Adrien Douady, Karsten Keller, Eike Lau, Jiaqi Luo, Misha Lyubich, Shizuo Nakane, Chris Penrose, Carsten Petersen, Johannes Riedl, Mitsu Shishikura and others.
I am also grateful to the Institute for Mathematical Sciences in Stony Brook for its inspiring environment and support. Finally, special thanks go to Katrin Wehrheim for a most helpful suggestion.

\newsection{Complex Dynamics}
\label{SecDynamics}

In this section, we briefly recall some results and notation from complex dynamics which will be needed in the sequel.
For details, the notes \cite{MiIntro} by Milnor are recommended and, of course, the work \cite{Orsay} by Douady and Hubbard, which is the source of most of the results mentioned below.

By affine conjugation, quadratic polynomials can be written uniquely in the normal form $p_c: z\mapsto z^2+c$ for some complex parameter $c$.
For any such polynomial, the filled-in Julia set is defined as the set of points $z$ with bounded orbits under iteration. The Julia set is the boundary of the filled-in Julia set. It is also the set of points which do not have a neighborhood in which the sequence of iterates is normal (in the sense of Montel).
The Julia set and the filled-in Julia set are connected if and only if the only critical point $0$ has bounded orbit; otherwise, these sets coincide and are a Cantor set. The Mandelbrot set ${\bf M}$ is the {\em quadratic connectedness locus:} the set of parameters $c$ for which the Julia set is connected.
Julia sets and filled-in Julia sets, as well as the Mandelbrot set, are compact subsets of the complex plane. The Mandelbrot set is known to be connected and full (i.e.\ its complement is connected).
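The definitions above are directly computable: since ${\bf M}$ is contained in the disk of radius $2$, a critical orbit that ever leaves that disk is unbounded. The following is a minimal numerical sketch (not part of the original development; the function name and parameters are ours) testing membership of a parameter $c$ in ${\bf M}$ by iterating the critical orbit:

```python
def in_mandelbrot(c, max_iter=200, escape_radius=2.0):
    """Heuristic membership test for the Mandelbrot set: iterate the
    critical orbit z -> z^2 + c starting at the critical point 0.
    If |z| ever exceeds 2 the orbit escapes to infinity (so c is not
    in M); staying bounded for max_iter steps is taken as evidence
    of membership."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > escape_radius:
            return False  # orbit escapes: the Julia set of p_c is a Cantor set
    return True

# c = 0: the critical orbit is fixed at 0
assert in_mandelbrot(0)
# c = -1: period-2 critical orbit 0 -> -1 -> 0
assert in_mandelbrot(-1)
# c = 1: orbit 0, 1, 2, 5, 26, ... escapes
assert not in_mandelbrot(1)
```

Of course, an orbit that stays bounded for finitely many steps only suggests membership; parameters near the boundary of ${\bf M}$ may require arbitrarily many iterations before escaping.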

Douady and Hubbard have shown that Julia sets and the Mandelbrot set can profitably be studied using external rays: for a compact connected and full set $K\subset\mbox{\bbf C}$, the Riemann mapping theorem supplies a unique conformal isomorphism $\Phi_K$ from the exterior of $K$ to the exterior of a unique disk $\overline{D}_R=\{z\in\mbox{\bbf C}: |z|\leq R\}$, subject to the normalization condition $\lim_{z\rightarrow\infty}\Phi_K(z)/z=1$. The inverse of the Riemann map makes it possible to transport polar coordinates to the exterior of $K$; images of radial lines and centered circles are called {\em external rays} and {\em equipotentials}, respectively.
For a point $z\in\mbox{\bbf C}-K$ with $\Phi_K(z)=re^{2\pi i\theta}$, the number $\theta$ is called the {\em external angle} and $\log r$ is called the {\em potential} of $z$.
External angles live in $\mbox{\bbf S}^1$; we will always measure them in full turns, i.e., interpreting $\mbox{\bbf S}^1=\mbox{\bbf R}/\mbox{\bbf Z}$.
Sometimes, it will be convenient to count the two angles $0$ and $1$ differently and have external angles live in $[0,1]$. Potentials are parametrized by the open interval $(\log R,\infty)$.
An external ray at angle $\theta$ is said to {\em land} at a point $z$ if $\lim_{r\searrow R}\Phi_K^{-1}(re^{2\pi i\theta})$ exists and equals $z$. For general compact connected full sets $K$, not all external rays need to land.
By Carath\'eodory's theorem, local connectivity of $K$ is equivalent to landing of all the rays, with the landing points depending continuously on the external angle.

For all the sets we consider here, it turns out that the conformal radius $R$ is necessarily equal to $1$.
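The external angle and potential of a point can be read off directly from its image under $\Phi_K$. A small sketch (the function name is ours) recovering both from a complex value $\Phi_K(z)=re^{2\pi i\theta}$, with angles measured in full turns:

```python
import cmath
import math

def angle_and_potential(phi_z):
    """Given Phi_K(z) = r * exp(2*pi*i*theta), recover the external
    angle theta (in full turns, normalized to [0, 1)) and the
    potential log r."""
    r = abs(phi_z)
    theta = (cmath.phase(phi_z) / (2 * math.pi)) % 1  # phase lies in (-pi, pi]
    return theta, math.log(r)
```

For instance, for $\Phi_K(z)=2e^{2\pi i/4}$ this returns the angle $1/4$ and the potential $\log 2$.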
In order to avoid confusion, we will replace the term ``external ray'' by {\em dynamic ray} or {\em parameter ray} according to whether it is an external ray of a filled-in Julia set or of the Mandelbrot set.

For $c\in{\bf M}$, the filled-in Julia set $K_c$ is connected.
For brevity, we will denote the preferred Riemann map by $\varphi_c$, rather than $\Phi_{K_c}$. A classical theorem of B\"ottcher asserts that this map conjugates the dynamics outside of $K_c$ to the squaring map outside the closed unit disk: $\varphi_c\circ p_c=(\varphi_c)^2$.
A dynamic ray is periodic or preperiodic whenever its external angle is periodic or preperiodic under the doubling map on $\mbox{\bbf S}^1$. The periodic and preperiodic angles are exactly the rational numbers. More precisely, a rational angle is periodic iff, when written in lowest terms, the denominator is odd; if the denominator is even, then the angle is preperiodic.
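This characterization is easy to verify experimentally. The following sketch (assuming exact rational arithmetic via Python's `fractions`; the function names are ours) iterates the doubling map on angles measured in full turns:

```python
from fractions import Fraction

def doubling_orbit(theta, steps=8):
    """First `steps` iterates of theta under the doubling map
    theta -> 2*theta (mod 1), angles measured in full turns."""
    orbit = [theta % 1]
    for _ in range(steps):
        orbit.append((2 * orbit[-1]) % 1)
    return orbit

def is_periodic(theta):
    """A rational angle is periodic under doubling iff its orbit
    returns to the angle itself; in lowest terms this happens iff
    the denominator is odd (even denominators give preperiodic
    angles)."""
    theta = theta % 1
    current = (2 * theta) % 1
    for _ in range(theta.denominator):  # any period is below the denominator
        if current == theta:
            return True
        current = (2 * current) % 1
    return False
```

For instance, $1/3$ has odd denominator and orbit $1/3 \to 2/3 \to 1/3$, hence is periodic, while $1/2$ maps to the fixed angle $0$ and is therefore only preperiodic.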
It is well known \cite[Section~18]{MiIntro} that dynamic rays of connected filled-in Julia sets always land whenever their external angles are rational. The landing points of periodic (resp.\ preperiodic) rays are periodic (resp.\ preperiodic) points on repelling or parabolic orbits.
Conversely, every repelling or parabolic periodic or preperiodic point of a connected Julia set is the landing point of one or more rational dynamic rays; preperiods and periods of all the rays landing at the same point are equal.

If a quadratic Julia set is a Cantor set, then there still is a B\"ottcher map $\varphi_c$ near infinity conjugating the dynamics to the squaring map. One can try to extend the domain of definition of the B\"ottcher map by pulling it back using the conjugation relation.
However, there are problems in choosing the right branch of the square root needed in the conjugation relation. The absolute value of the B\"ottcher map is independent of these choices and makes it possible to define potentials outside of the filled-in Julia set.
The set of points at potentials exceeding the potential of the critical point is simply connected, and the map $\varphi_c$ can be defined there uniquely. This domain includes the critical value. In particular, the external angle of the critical value is defined uniquely.
Douady and Hubbard have shown that the preferred Riemann map $\Phi_{\bf M}$ of the exterior of the Mandelbrot set is given by $\Phi_{\bf M}(c)=\varphi_c(c)$.

For disconnected Julia sets, the map $\varphi_c$ defines dynamic rays at sufficiently large potentials.
If a dynamic ray at angle $\theta$ is defined for potentials greater than $t>0$, then one can pull back by the dynamics and obtain the dynamic rays at angles $\theta/2$ and $(\theta+1)/2$ down to potential $t/2$, except if the ray at angle $\theta$ contains the critical value.
In the latter case, the two pull-back rays will bounce into the critical point and the pull-back is no longer unique. This phenomenon has been studied by Goldberg and Milnor in the appendix of \cite{GM}.
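The pull-back operation above is just inversion of the angle-doubling map $t\mapsto 2t \bmod 1$ on the circle of external angles. The following is a minimal illustrative sketch, not part of the original argument; the helper name is ours, and exact rational arithmetic is done with Python's \texttt{fractions} module:

```python
from fractions import Fraction

def pullback_angles(theta):
    """The two preimages of an external angle theta under the doubling
    map t -> 2t (mod 1): the angles theta/2 and (theta+1)/2."""
    theta = theta % 1
    return theta / 2, (theta + 1) / 2

# The ray at angle 1/3 pulls back to the rays at angles 1/6 and 2/3;
# doubling either preimage recovers the angle 1/3 (mod 1).
a, b = pullback_angles(Fraction(1, 3))
assert (2 * a) % 1 == (2 * b) % 1 == Fraction(1, 3)
```

Each pull-back step also halves the potential, matching the statement that the two preimage rays are obtained down to potential $t/2$.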
Conversely, a dynamic ray at angle $\theta$ can be extended down to the potential $t>0$ provided its image ray at angle $2\theta$ can be extended down to the potential $2t$ and does not contain the critical value, or if the ray at angle $4\theta$ can be extended down to the potential $4t$ without containing the critical value or its image, and so on. The ray can be defined for all potentials in $(0,\infty)$ if the external angle of the critical value is different from $2^k\theta$ for all $k=1,2,3,\ldots$.
This is the general situation, and in this case, the dynamic ray is known to land at a unique point of the Julia set, whether or not the angle $\theta$ is rational.

We rephrase these facts in a form which we will have many opportunities to use: if a parameter $c\notin{\bf M}$ has external angle $\theta$, then the dynamic ray at angle $\theta$ for the parameter $c$ will contain the critical value. If the angle $\theta$ is periodic, then this ray cannot possibly land: the ray must bounce into an inverse image of the critical point at a finite positive potential.
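Whether a rational angle is periodic under doubling can be checked by exact arithmetic: a reduced fraction $p/q$ is periodic precisely when $q$ is odd, and strictly preperiodic when $q$ is even. The following sketch (our own illustration, with hypothetical helper names, valid for rational angles only) makes this concrete:

```python
from fractions import Fraction

def doubling_orbit(theta):
    """Iterate t -> 2t (mod 1) on a rational angle until it repeats;
    the orbit of a rational angle is always eventually periodic."""
    orbit, t = [], theta % 1
    while t not in orbit:
        orbit.append(t)
        t = (2 * t) % 1
    return orbit, t  # t is the first revisited angle

def is_periodic(theta):
    """theta is periodic under doubling iff its orbit returns to theta itself."""
    orbit, revisited = doubling_orbit(theta)
    return revisited == orbit[0]

# 1/3 -> 2/3 -> 1/3 is periodic (odd denominator); 1/6 -> 1/3 -> 2/3 -> 1/3
# is strictly preperiodic (even denominator): it never returns to 1/6.
assert is_periodic(Fraction(1, 3)) and not is_periodic(Fraction(1, 6))
```

In the terminology above, a parameter $c\notin{\bf M}$ whose external angle is periodic in this sense yields a dynamic ray at that angle which bounces rather than lands.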
The main focus of Sections~\ref{SecPeriodic} and \ref{SecPreperiodic} will be to transfer the landing properties of dynamic rays at rational angles into landing properties of parameter rays at rational angles: as so often in complex dynamics, the general strategy is ``to plow in the dynamical plane and then to harvest in parameter space'', as Douady phrased it.
When a periodic ray lands at a periodic point, the periods need not be equal: it is possible that the period of the ray is a proper multiple of the period of the point it is landing at. We will therefore distinguish {\em ray periods} and {\em orbit periods}. If only one ray lands at every periodic point on the orbit, then both periods are equal; in general, there is a relation between these periods and the number of rays landing at each point on the orbit; see Lemma~\ref{LemRayPermutation}. For our purposes, periodic orbits will be most interesting if at least two rays land at each of their points.
Such periodic orbits have a distinguished point and two distinguished dynamic rays landing at this point; these play a prominent role in all the symbolic descriptions of the Mandelbrot set. Following the terminology of Milnor~\cite{MiOrbits}, we will call the distinguished point and rays the {\em characteristic periodic point of the orbit} and the {\em characteristic rays} (see below), and the corresponding external angles will be the two {\em characteristic angles} of the orbit. In Thurston's fundamental preprint \cite{Th}, the two characteristic rays and their common landing point are the ``minor leaf'' of a ``lamination''. We will not use or describe his notation here, but we note that it is very close in spirit to this article.
For our purposes, it will be sufficient to define characteristic points and rays only for parabolic periodic orbits.

\begin{definition}[Characteristic Components, Points and Rays]
\label{DefCharacteristic}
\rule{0pt}{0pt}\par\noindent
For a quadratic polynomial with a parabolic orbit, the unique Fatou component containing the critical value will be called the {\em characteristic Fatou component}; the only parabolic periodic point on its boundary will be the {\em characteristic periodic point} of the parabolic orbit. It is the landing point of at least two dynamic rays, and the two of them closest to the critical value on either side will be the {\em characteristic rays}.
\end{definition}

The fact that every parabolic periodic point is the landing point of at least two dynamic rays will be shown after Lemma~\ref{LemHubbardBranch}. Lemma~\ref{LemRayPermutation} will describe the characteristic rays dynamically.
With hesitation, we use the term ``Misiurewicz point'' for a parameter $c$ for which the critical point or, equivalently, the critical value, is (strictly) preperiodic. This terminology was introduced long ago, but it covers only a very special case of what Misiurewicz was investigating. In real dynamics, the term is used in a wider sense.
We have not been successful in finding an adequate substitute term and invite suggestions from the reader.

In this section, we provide two lemmas which are the engine of our proof: the first one is of an analytical nature; it is a slight generalization of Lemma~B.1 in Goldberg and Milnor~\cite{GM}, guaranteeing stability in the Julia set at repelling (pre)periodic points. The second lemma will make counting possible by estimating the number of parabolic parameters with given ray periods.

\begin{lemma}[Stability of Repelling Orbits]
\label{LemStable}
\rule{0pt}{0pt}\par\noindent
Suppose that, for some parameter $c_0\in\mbox{\bbf C}$ (not necessarily in the Mandelbrot set), there is a repelling periodic point $z_0$ at which some periodic dynamic ray at angle $\theta$ lands. Then, for parameters $c$ sufficiently close to $c_0$, the periodic point $z_0$ can be continued analytically as a function $z(c)$, and the dynamic ray at angle $\theta$ in the dynamic plane of $c$ lands at $z(c)$. Moreover, the dynamic ray and its landing point form a closed set which is canonically homeomorphic to $[0,\infty]$ via potentials, and this parametrized ray depends continuously on the parameter.

If $z_0$ is repelling and preperiodic, the analogous statement holds provided that neither the point $z_0$ nor any point on its forward orbit is the critical point.
\end{lemma}

\par \noindent {\sc Proof. } We first assume $z_0$ to be a periodic point. By the implicit function theorem, $z_0$ can be continued analytically as a function $z(c)$ in a neighborhood of $c_0$; the multiplier $\lambda(c)$ will also depend analytically on $c$, so that the cycle is repelling for all $c$ sufficiently close to $c_0$. Let $V$ be such a neighborhood of $c_0$ and denote the period of $z_0$ by $n$. Then for every $c\in V$ there exists a local branch $g_c$ of the inverse map of $p_c^{\circ n}$ fixing $z(c)$.
{\mbox{\tt $\star$}} There {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} neighborhood {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $U {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} such {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $g {\mbox{\tt $\star$}} _ {\mbox{\tt $\star$}} {c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} maps {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \overline {\mbox{\tt $\star$}} U {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} into {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $U {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} possibly {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} shrinking {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $V {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} we {\mbox{\tt $\star$}} may {\mbox{\tt $\star$}} assume {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $g {\mbox{\tt $\star$}} _c {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} same {\mbox{\tt $\star$}} property {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} \in {\mbox{\tt $\star$}} V {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} Under {\mbox{\tt $\star$}} iteration {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $g {\mbox{\tt $\star$}} _c {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} any {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $U {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} then {\mbox{\tt $\star$}} converges {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} (c {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Let {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $t {\mbox{\tt $\star$}} >0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} potential {\mbox{\tt $\star$}} such {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $U {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} contains {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} -ray {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} potentials {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $t {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} below {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} including {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} . 
Now we distinguish two cases, according to whether or not $c_0\in{\bf M}$. If $c_0\notin{\bf M}$, then the external angle of the parameter $c_0$ is well-defined and different from the finitely many angles $2^k\theta$ for $k=1,2,3,\ldots$ because the dynamic ray at angle $\theta$ lands.
If $V$ is small enough so that all points in $V$ are outside ${\bf M}$ and have their external angles different from all the $2^k\theta$, then for every $c\in V$, the dynamic ray at angle $\theta$ lands, and the point at potential $t$ depends analytically on the parameter.
It will therefore be contained in $U$ for sufficiently small perturbations and thus converge to $z(c)$ under iteration of $g_c$, so the landing point of the ray is $z(c)$.
However, if $c_0\in{\bf M}$, then we may assume $V$ small enough so that all its points have potentials less than $t/2$ (with respect to the potential function of the Mandelbrot set).
In the corresponding dynamic planes, the critical values then have potentials less than $t/2$, so every dynamic ray exists and depends analytically on the parameter for potentials greater than $t/2$.
By shrinking $V$, we may then assume that for all $c\in V$, the segment between potentials $t/2$ and $t$ in the dynamic ray at angle $\theta$ is contained in $U$. Iterating the map $g_c$, it follows that the dynamic ray at angle $\theta$ lands at $z(c)$.
In both cases, rays and landing points depend continuously on the parameter, including the parametrization by potentials.

The statement about preperiodic points follows by taking inverse images and is straightforward, except if $z_0$ or any point on its forward orbit is the critical point.
However, if some preperiodic dynamic ray lands at the critical value, then a small perturbation may bring the critical value onto this dynamic ray, and the inverse rays will bounce into the critical point (after that, both branches will land, and the landing points are two branches of an analytic function).
$\Box$ \par

\begin{lemma}[Counting Parabolic Orbits]
\label{LemParaCount}
\rule{0pt}{0pt}\par\noindent
For every positive integer $n$, the number of parabolic parameters in $\mbox{\bbf C}$ having a parabolic orbit of exact ray period $n$ is at most half the number of periodic angles in $[0,1]$ having exact period $n$ under doubling modulo $1$.
\end{lemma}

\par \noindent {\sc Proof. }
We can calculate the exact number of periodic angles. If an angle $\theta\in[0,1]$ satisfies $2^n\theta\equiv\theta$ modulo $1$, then we can write $\theta=a/(2^n-1)$ for some integer $a$, and there are $2^n$ such angles in $[0,1]$. Only a subset of these angles has exact period $n$: denoting the number of such angles by $s'_n$, we have $\sum_{k|n} s'_k = 2^n$, which allows us to determine the $s'_n$ recursively or via the M\"obius inversion formula.
We have $s'_1=2$, and all the $s'_n$ are easily seen to be even. In the sequel, we will work with the integers $s_n:=s'_n/2$.
The first few terms of the sequence $(s_n)$, starting with $s_1$, are $1, 1, 3, 6, 15, 27, 63, \ldots$. The specified number of periodic angles in $[0,1]$ is then exactly $2s_n$.
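As a purely illustrative sanity check (not part of the proof), the recursion $\sum_{k|n} s'_k = 2^n$ and its M\"obius inversion $s'_n = \sum_{k|n}\mu(n/k)\,2^k$ can be evaluated numerically; both reproduce the sequence $s_n = s'_n/2$ listed above.

```python
# Illustrative check of the counting of periodic angles under doubling:
# s'_n = number of angles in [0,1] of exact period n, determined by the
# recursion  sum_{k|n} s'_k = 2^n,  or by Moebius inversion
#   s'_n = sum_{k|n} mu(n/k) * 2^k.

def divisors(n):
    return [k for k in range(1, n + 1) if n % k == 0]

def mu(n):
    # Moebius function via trial factorization
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def s_prime_recursive(N):
    # Solve the recursion sum_{k|n} s'_k = 2^n for n = 1..N
    s = {}
    for n in range(1, N + 1):
        s[n] = 2 ** n - sum(s[k] for k in divisors(n) if k < n)
    return s

def s_prime_moebius(n):
    return sum(mu(n // k) * 2 ** k for k in divisors(n))

s = s_prime_recursive(7)
assert all(s[n] == s_prime_moebius(n) for n in range(1, 8))
# s_n = s'_n / 2 gives the sequence stated in the text
print([s[n] // 2 for n in range(1, 8)])  # [1, 1, 3, 6, 15, 27, 63]
```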
We consider the curve $\{(c,z)\in\mbox{\bbf C}^2 : p_c^{\circ n}(z)=z\}$ consisting of points $z$ which are periodic under $p_c$ with period dividing $n$. It factors as a product $\prod_{k|n} Q_k(c,z)$ according to exact periods.
(The curves $Q_k$ have been shown to be irreducible by Bousch~\cite{Bousch} and by Lau and Schleicher~\cite{IntAddr}, a fact we will not use.) For $|c|>2$, the filled-in Julia set of $p_c$ is a Cantor set containing all the periodic points.
For $|c|>4$, it is easy to verify that points $z$ with $|z|>|c|^{1/2}+1$ escape to $\infty$, and so do points with $|z|<|c|^{1/2}-1$.
Periodic points therefore satisfy $|z|=|c|^{1/2}(1+o(1))$ as $c\rightarrow\infty$. The multiplier of a periodic orbit of exact period $n$ is the product of the periodic points on the orbit multiplied by $2^n$, so it grows like $|4c|^{n/2}(1+o(1))$.
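Spelling out this growth estimate: since $p_c'(z)=2z$, the multiplier of an orbit $\{z_1,\ldots,z_n\}$ of exact period $n$ is
\[
\prod_{i=1}^{n} 2z_i = 2^n \prod_{i=1}^{n} z_i,
\qquad\text{with}\qquad
|z_i| = |c|^{1/2}\bigl(1+o(1)\bigr),
\]
so its modulus is $2^n\,|c|^{n/2}(1+o(1)) = |4c|^{n/2}(1+o(1))$ as $c\rightarrow\infty$.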
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} For {\mbox{\tt $\star$}} any {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} number {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} fixed {\mbox{\tt $\star$}} under {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} -th {\mbox{\tt $\star$}} iterate {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} obviously {\mbox{\tt $\star$}} equal {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $2 {\mbox{\tt $\star$}} ^n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} counting {\mbox{\tt $\star$}} multiplicities {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} These {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} exact {\mbox{\tt $\star$}} periods {\mbox{\tt $\star$}} dividing {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} number {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} exact {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} equals {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $2s {\mbox{\tt $\star$}} _n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} same {\mbox{\tt $\star$}} recursion {\mbox{\tt $\star$}} formula {\mbox{\tt $\star$}} as {\mbox{\tt $\star$}} above {\mbox{\tt $\star$}} . 
These periodic points are grouped in orbits, so the number of orbits is $2s_n/n$ (which implies that $2s_n$ is divisible by $n$). For bounded parameters $c$, the periodic points and thus the multipliers are bounded; since there are $2s_n/n$ orbits, the multipliers of which are analytic and behave like $|c|^{n/2}$ near infinity, and since every orbit contains $n$ points, it follows that sufficiently large multipliers are assumed exactly $(2s_n/n)(n/2)n = ns_n$ times on $Q_n$ (we do not have to count multiplicities here because multiple orbits always have multiplier $+1$).
Consider the multiplier map on $Q_n$ which assigns to every point $(c,z)$ the multiplier $(\partial/\partial z)p_c^{\circ n}(z)$. It is a proper map and thus has a mapping degree, so (counting multiplicities) every multiplier in $\mbox{\bbf C}$ is assumed equally often, including the value $+1$. The number of points $(c,z)$ having multiplier $+1$ therefore equals $ns_n$, counting multiplicities. Projecting onto the $c$-coordinate and ignoring multiplicities, a factor $n$ is lost because points on the same orbit project onto the same parameter, and we obtain an upper bound of $s_n$ for the number of parameters.
(In fact, it is not too hard to show at this point that $s_n$ provides an exact count \cite{MiOrbits}. We will show this in Corollary~\ref{CorParaCountExact} by a global counting argument.) Consider a parabolic orbit of exact period $k$ and multiplier $\mu=e^{2\pi ip/q}$ with $(p,q)=1$.
Then the exact ray period is $qk=:n$, and $qk$ is also the smallest period such that, when interpreting the orbit as an orbit of this period, the multiplier becomes $+1$.
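The last claim is the chain rule once more: interpreting the orbit as one of period $mk$ raises the multiplier to the $m$-th power, and since $(p,q)=1$,

```latex
\left(p_c^{\circ mk}\right)'(z) \;=\; \mu^{m} \;=\; e^{2\pi i p m/q}
\;=\; 1
\quad\Longleftrightarrow\quad q \mid m ,
```

so $m=q$ is the smallest exponent for which the multiplier is $+1$.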
Therefore, the periodic points on this orbit are on $Q_{n}$, and the number of parabolic parameters having exact ray period $n$ is at most $s_n$. $\Box$ \par A more detailed account of such counting arguments can be found in Section~5 of Milnor~\cite{MiOrbits}.
The following standard lemma is folklore and at the base of every description of quadratic iteration theory. Our proof follows Milnor~\cite{MiOrbits}; compare also Thurston~\cite[Theorem~II.5.3 case b) i) a)]{Th}. We do not assume the Julia set to have any particular property; it need not even be connected.
\goodbreak
\begin{lemma}[Permutation of Rays] \nobreak \label{LemRayPermutation} \rule{0pt}{0pt}\par\noindent
If more than two periodic rays land at a periodic point, or if the orbit period is different from the ray period, then the first return map of the point permutes the rays transitively.
\end{lemma}
\par \noindent {\sc Proof. }
Denote the orbit period by $k$ and the ray period by $n$. Since a periodic orbit has periodic rays landing only if the orbit is repelling or parabolic, the first return map of any of its periodic points is a local homeomorphism and permutes the rays landing there in such a way that their periods are all equal, and the number of rays landing at each point of the orbit
is a constant $s$, say. If $s=1$, then orbit period and ray period are equal. If $s=2$, then either ray and orbit periods are equal, or the first return map of any point has no choice but to transitively permute the two rays landing at this point. We may hence assume $s\geq 3$.
Then the $s$ rays landing at any one of these periodic points separate the dynamic plane into $s$ sectors. Every sector is bounded by two dynamic rays, so it has an associated width: the external angles of the two rays cut $\mbox{\bbf S}^1$ into two open intervals, exactly one of which does not contain external angles of rays landing at the same point. The {\em width} of the sector will be the length of this interval (normalized so that the total length of $\mbox{\bbf S}^1$ is $1$). Since the dynamics of the first return map is a local homeomorphism near the periodic point, every sector is periodic, and so is the sequence of the corresponding widths.
More precisely, we will justify the following observations below: if a sector does not contain the critical point, then it maps homeomorphically onto its image sector (based at the image of its landing point), and the width of the sector doubles.
However, if the sector does contain the critical point, then the sector maps in a two-to-one fashion onto the image sector, and it covers the remaining dynamic plane once. In this case, the width of the sector will decrease under this mapping, and the image sector contains the critical value.
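In terms of external angles, the forward map is angle doubling $\theta\mapsto 2\theta \bmod 1$, so a sector's width $w$ transforms as $2w$ (no critical point, $w<1/2$) or $2w-1$ (critical sector, $w>1/2$). A toy sketch of this bookkeeping (the function name is ours, not from the text):

```python
def image_width(w):
    """Width of the image sector under angle doubling.

    A sector of width w < 1/2 (not containing the critical point)
    doubles to 2w; the critical sector has w > 1/2 and its image has
    width 2w - 1, which is strictly smaller than w.
    """
    return 2 * w if w < 0.5 else 2 * w - 1

# The widths of all sectors at one point sum to 1, with exactly one
# critical sector of width > 1/2; the image widths sum to 1 again:
#   2 * (sum of non-critical widths) + (2 * w_crit - 1) = 2 - 1 = 1.
widths = [0.15, 0.25, 0.6]            # 0.6 plays the critical sector
images = [image_width(w) for w in widths]
```

This illustrates why the critical sector's width always decreases ($2w-1 < w$ for $w<1$) while every other width doubles.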
To justify these statements, first note that the rays bounding any sector are mapped to the rays bounding the image sector. Looking at external angles within the sector, it follows that either the sector maps forward homeomorphically, or it covers the entire complex plane once and the image sector twice. The latter must happen for the sector containing the critical point.
Since all the sectors at any periodic point combined exactly cover the complex plane twice when mapped forward, all the other sectors must map homeomorphically onto the image sectors. We also see that among all the sectors based at any point, the sector containing the critical point must have width greater than $1/2$, and all the other sectors then have widths less than $1/2$ (the critical point cannot be on a sector boundary: if it is on a periodic dynamic ray, then this ray cannot land, and if it is on a periodic point, then this point is superattracting).
The width of any sector doubles under the map if it does not contain the critical point; since the sum of the widths of all the sectors based at any point is $1$, the width of the critical sector must decrease. For each orbit of sectors, there must be at least one sector with minimal width.
{\mbox{\tt $\star$}} It {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} contain {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} value {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (or {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} would {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} image {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} sector {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} half {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} width {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} cannot {\mbox{\tt $\star$}} contain {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (or {\mbox{\tt $\star$}} its {\mbox{\tt $\star$}} image {\mbox{\tt $\star$}} sector {\mbox{\tt $\star$}} would {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} smaller {\mbox{\tt $\star$}} width {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Therefore {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} shortest {\mbox{\tt $\star$}} sectors {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} various {\mbox{\tt $\star$}} cycles {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} sectors {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} bounded {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} pairs {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} separating {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} value {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} these {\mbox{\tt $\star$}} sectors {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} nested {\mbox{\tt $\star$}} . 
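As a quick sanity check on the width-doubling step (an illustration added here, not part of the original proof), recall that external angles of dynamic rays live on $\mathbb{R}/\mathbb{Z}$ and double under the map $z\mapsto z^2+c$; any arc shorter than $1/2$ doubles in length, which is why a sector omitting the critical point doubles in width. A minimal sketch in exact rational arithmetic:

```python
from fractions import Fraction

def double(theta):
    # angle-doubling map on the circle R/Z, modelling z -> z^2 + c on external angles
    return (2 * theta) % 1

def arc_width(a, b):
    # width of the positively oriented arc from a to b on R/Z
    return (b - a) % 1

# A sector is bounded by two dynamic rays; its "width" is the arc between
# their external angles.  An arc shorter than 1/2 contains no pair
# {theta, theta + 1/2}, so doubling acts injectively on it and its width doubles.
a, b = Fraction(1, 7), Fraction(2, 7)
w = arc_width(a, b)
assert w < Fraction(1, 2)
assert arc_width(double(a), double(b)) == 2 * w
```

The specific bounding angles $1/7$ and $2/7$ are chosen only for illustration; the same check works for any arc of width less than $1/2$.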
Among them, there is one innermost sector $S_1$ based at some point $z_1$ of the periodic orbit. This sector $S_1$ cannot contain another point from the orbit of $z_1$: if there were such a point $z'$, there would have to be a sector based at $z'$ which was shorter than all the shortest sectors at points on the orbit of $z_1$, and this is obviously absurd.
\hide{ Now we show that $S_1$ does not contain another point from the orbit of $z_1$. If it does, let $z'$ be such a point and let $S'$ be a sector based at $z'$ which does not contain $z_1$; it cannot contain the critical point, either. By minimality of $S_1$, the sector $S'$ cannot contain the critical value. There is a shortest sector on the orbit of $S'$. It does contain the critical value and thus the sector $S_1$ (it might be the sector $S_1$). But then it also contains $S'$ and must then have larger width than $S'$. This contradiction shows that $S_1$ cannot contain any point in the orbit of $z_1$. }
If there is an orbit of sectors not involving $S_1$, then any shortest sector on this orbit must contain the critical value and thus $S_1$, but it cannot contain the critical point. This sector must then contain all sectors at $z_1$ except the one containing the critical point. The representative of this orbit of sectors at $z_1$ must then be the unique sector containing the critical point. Any cycle of sectors then has only two choices for its representative at $z_1$: the sector containing the critical point or the one containing the critical value. If there is more than one cycle, then it follows that there are just two cycles, and each of them has exactly one representative at each point of the periodic orbit, so $s=2$ in contradiction to our assumption.
$\Box$ \par

\par \noindent {\sc Remark. } This lemma is at the heart of the general definition of characteristic rays: the main part of the proof works when at least two rays land at each of the periodic points, and it shows that there is a unique sector of minimal width containing the critical value. The rays bounding this sector are called the characteristic rays. For the special case of a parabolic orbit, this definition agrees with the one we have given above.

\newsection{Periodic Rays}
\label{SecPeriodic}

In this section, we will be concerned with parameter rays at periodic angles.
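Before turning to the propositions, it may help to recall computationally what a periodic angle is (an added illustration, not part of the original text): $\theta\in\mathbb{R}/\mathbb{Z}$ has period $n$ under doubling exactly when $\theta$ can be written as $k/(2^n-1)$. A small sketch:

```python
from fractions import Fraction

def double(theta):
    # angle-doubling map on R/Z
    return (2 * theta) % 1

def exact_period(theta, max_n=64):
    # exact period of theta under doubling, or None if theta is not periodic
    # (e.g. preperiodic angles such as 1/6 never return to themselves)
    current = theta
    for n in range(1, max_n + 1):
        current = double(current)
        if current == theta:
            return n
    return None

# The period-3 orbit of 1/7: denominators of the form 2^n - 1 give period-n angles.
theta = Fraction(1, 7)
orbit = [theta, double(theta), double(double(theta))]
assert orbit == [Fraction(1, 7), Fraction(2, 7), Fraction(4, 7)]
assert exact_period(Fraction(1, 7)) == 3
assert exact_period(Fraction(1, 3)) == 2
```

The cutoff `max_n` is an arbitrary illustrative bound; any angle whose reduced denominator is odd is genuinely periodic.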
The proof of the following weak form of the theorem is due to Goldberg, Milnor, Douady, and Hubbard; see \cite[Theorem~C.7]{GM}.

\begin{proposition}[Periodic Parameter Rays Land]
\label{PropPeriodicRaysLand}
\rule{0pt}{0pt}\par\noindent
Every parameter ray at a periodic angle $\theta$ lands at a parabolic parameter $c_0$. In the dynamic plane of $c_0$, the dynamic ray at angle $\theta$ lands at the parabolic orbit.
\end{proposition}

\par \noindent {\sc Proof. } Let $c_0$ be a parameter in the limit set of the parameter ray at angle $\theta$ and let $n$ be the exact period of $\theta$. In the dynamic plane of $c_0$, the dynamic ray at angle $\theta$ must land at a repelling or parabolic periodic point $z$ of ray period $n$; see \cite[Theorem~18.1]{MiIntro}. If $z$ were repelling, Lemma~\ref{LemStable} would imply that for parameters $c$ sufficiently close to $c_0$, the dynamic ray at angle $\theta$ in the dynamic plane of $c$ would land at a repelling periodic point $z(c)$, so it could not bounce off any precritical point. However, when $c$ is on the parameter ray at angle $\theta$, then the dynamic ray at angle $\theta$ must bounce off some precritical point, even infinitely often. Therefore, $c_0$ is parabolic, and within its dynamics, the dynamic ray at angle $\theta$ lands at the parabolic orbit. Since limit sets are connected but parabolic parameters of given ray period form a finite set by Lemma~\ref{LemParaCount}, the parameter ray at angle $\theta$ lands and the statements follow.
$\Box$ \par

This proves half of the first assertion in Theorem~\ref{ThmRatRays}. The remainder of the first and the second assertion will be shown in several steps.
We want to show that at a parabolic parameter $c_0$, exactly those two parameter rays land whose external angles are the angles of the two characteristic rays of the critical value Fatou component, and that no other rational ray lands there. The first statement is usually shown using Ecalle cylinders. It turns out that it is much easier to show that some ray does {\em not\/} land at a given point than to show where it does land. The idea in this paper will be to exclude all the wrong rays from landing at given parabolic parameters, using partitions in the dynamic and parameter planes. Since the rays must land somewhere, a global counting argument will then prove the theorem.

Let $c$ be a parabolic parameter and let $\Theta_c$ be the set of periodic angles $\theta$ such that the parameter ray at angle $\theta$ lands at $c$. A priori, it might be empty.
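As a concrete standard example (added for illustration; the landing statement itself is the classical fact, not proved here): for the parabolic parameter $c=-3/4$, the root of the period-$2$ component of the Mandelbrot set, one has $\Theta_c=\{1/3,\,2/3\}$. The purely combinatorial part, that these two angles form a single period-$2$ orbit under doubling, can be checked directly:

```python
from fractions import Fraction

def double(theta):
    # angle-doubling map on R/Z
    return (2 * theta) % 1

# Candidate angle set for the parabolic parameter c = -3/4 (classical example):
theta_c = {Fraction(1, 3), Fraction(2, 3)}

# The set is closed under doubling, and each angle has period 2,
# matching the period of the parabolic orbit of c = -3/4.
assert {double(t) for t in theta_c} == theta_c
assert all(double(double(t)) == t for t in theta_c)
```

Note that the code only verifies the doubling combinatorics of the angles; that the corresponding parameter rays actually land at $-3/4$ is exactly the kind of statement the two propositions below address.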
We will prove the following two results later in this section.

\begin{proposition}[Necessary Condition]
\label{PropNecessary}
\rule{0pt}{0pt}\par\noindent
If an angle $\theta$ is in $\Theta_c$, then the dynamic ray at angle $\theta$ lands at the characteristic point of the parabolic orbit in the dynamic plane of $c$.
\end{proposition}

\begin{proposition}[At Most Two Rays]
\label{PropAtMostTwoRays}
\rule{0pt}{0pt}\par\noindent
If $\Theta_c$ contains more than one angle, then it consists of exactly the two angles which are the characteristic angles of the parabolic orbit in the dynamic plane of $c$.
\end{proposition}

These two propositions allow us to prove the half of the theorem dealing with periodic rays; we will deal with the preperiodic half in the next section.

\proofof{Theorem~\ref{ThmRatRays} (periodic case)}
By Lemma~\ref{LemParaCount}, the number of parabolic parameters of any given ray period is at most half the number of parameter rays at periodic angles of the same period. Since every ray lands at such a parabolic parameter by Proposition~\ref{PropPeriodicRaysLand}, and at most two rays may land at any such point by Proposition~\ref{PropAtMostTwoRays}, it follows that exactly two rays land at every parabolic point, and Proposition~\ref{PropAtMostTwoRays} says which ones these are. It also follows that the number of parabolic parameters of any given period is the largest allowed by Lemma~\ref{LemParaCount}.
$\Box$
\par
\par\noindent{\sc Remark. }
Since we will complete the proof of Proposition~\ref{PropAtMostTwoRays} by induction on the period, using Theorem~\ref{ThmRatRays} for lower periods, it is important to note that in order to prove the Theorem for any period, it suffices to know Proposition~\ref{PropAtMostTwoRays} for the same period.

\begin{corollary}[Counting Parabolic Orbits Exactly]
\label{CorParaCountExact}
\rule{0pt}{0pt}\par\noindent
Let $s_k$ be the number of parameters having a parabolic orbit of exact ray period $k$.
These numbers satisfy the recursive relation $\sum_{k|n} s_k = 2^{n-1}$, which determines them uniquely.
$\Box$
\end{corollary}

\begin{figure}[htbp]
\centerline{\psfig{figure=P_Periodic1.eps,height=55mm}\hfil
\psfig{figure=P_Periodic2.eps,height=55mm}}
\centerline{\parbox{\captionwidth}{
\caption{\sl Illustration of the theorem in the periodic case. The polynomials at the landing points of the parameter rays at angles $3/15$ and $4/15$ (left) and at angles $22/63$ and $25/63$ (right) are shown. In both pictures, the rays landing at the characteristic points are drawn. For the corresponding parameter rays, see Figure~\protect\ref{FigMandelRays}.}
\label{FigPeriodic}}}
\end{figure}

It remains to prove the two propositions. In both of them, we have to exclude the possibility that certain rays land at given parabolic parameters. We do that using appropriate partitions: first in the dynamic plane, then in parameter space.
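The recursion in Corollary~\ref{CorParaCountExact} can be unwound numerically: since $\sum_{k|n} s_k = 2^{n-1}$, each $s_n$ equals $2^{n-1}$ minus the contributions of the proper divisors of $n$. A minimal sketch (the function name is ours, not from the text):

```python
def parabolic_counts(n_max):
    """Solve sum_{k | n} s_k = 2**(n - 1) for s_n, the number of
    parameters with a parabolic orbit of exact ray period n, by
    subtracting the contributions of all proper divisors of n."""
    s = {}
    for n in range(1, n_max + 1):
        s[n] = 2 ** (n - 1) - sum(s[k] for k in range(1, n) if n % k == 0)
    return s

# s_1 = 1, s_2 = 1, s_3 = 3, s_4 = 6, s_5 = 15, s_6 = 27, ...
```

For $n=3$, for instance, $s_1 + s_3 = 2^2 = 4$ gives $s_3 = 3$; by the theorem, each of these three parabolic parameters receives exactly two of the six parameter rays at periodic angles of period $3$ (the angles $k/7$, $k=1,\ldots,6$).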
We start by discussing the topology of parabolic quadratic Julia sets and define a variant of the Hubbard tree on them. Hubbard trees were introduced by Douady and Hubbard in \cite{Orsay} for postcritically finite polynomials. We will be interested in combinatorial statements about combinatorially described Julia sets, so these results could be derived in purely combinatorial terms. However, it will be more convenient to use topological properties of the Julia sets in the parabolic case, in particular that they are pathwise connected (which follows from local connectivity). This was originally proved by Douady and Hubbard~\cite{Orsay}; proofs can also be found in Carleson and Gamelin~\cite{CG} and in Tan and Yin~\cite{TY}.
In a quadratic polynomial with a parabolic orbit, let $z$ be any point within the filled-in Julia set and let $U$ be a bounded Fatou component. We then define a {\em projection} of $z$ onto $\overline U$ as follows: if $z \in \overline U$, then the projection of $z$ onto $\overline U$ is $z$ itself. Otherwise, consider any path within the filled-in Julia set connecting $z$ to an interior point of $U$ (such a path exists because the filled-in Julia set is pathwise connected); then the projection of $z$ onto $\overline U$ is the first point where this path intersects $\partial U$.

There may be many such paths, but the projection is still well-defined: take any two paths from $z$ to the interior of $U$ and connect their endpoints within $U$. If the paths are different, they will bound some subset of $\mbox{\bbf C}$, which must be contained in $K$ because $K$ is full. If the paths reached $\partial U$ at different points, they would enclose part of the boundary of $U$ in the interior of $K$; but the boundary of any Fatou component is always contained in the boundary of the filled-in Julia set. This contradiction shows that the projection is well-defined.
Every parabolic periodic point is on the boundary of at least one periodic Fatou component, so the projection in this case is just the identity.

\begin{lemma}[Projection Onto Periodic Fatou Components]
\label{LemProject}
\rule{0pt}{0pt}\par\noindent
In a quadratic polynomial with a parabolic orbit, the projections of all the parabolic periodic points onto the Fatou component containing the critical value have the same image, which is the characteristic point of the parabolic periodic orbit. Projections of the parabolic periodic points onto any other bounded Fatou component have images in at most two boundary points, which are periodic or preperiodic points on the parabolic orbit.
\end{lemma}

\par\noindent{\sc Proof. }
Let $n$ be the period of the periodic Fatou components and number them $U_0, U_1, \ldots, \brkOK U_{n-1}, \brkOK U_n = U_0$ in the order of the dynamics, so that $U_0 = U_n$ contains the critical point.
Let $a_k$ be the number of different images that the projections of all the parabolic periodic points onto $\overline U_k$ have, for $k = 0, 1, \ldots, n$ (with $a_0 = a_n$). We first show that $a_{k+1} \geq a_k$ for $k = 1, 2, \ldots, n-1$.
Let $z$ be a parabolic periodic point and let $\pi(z)$ be its projection onto $\overline U_k$. We claim that $p(\pi(z))$ is the projection onto $\overline U_{k+1}$ of either $p(z)$ or the parabolic point on the boundary of the Fatou component containing the critical value (i.e.\ the characteristic point on the parabolic orbit). Indeed, if the path between $z$ and $\pi(z)$ maps forward homeomorphically under $p$, then $\pi(p(z))=p(\pi(z))$. If it does not, then the path must intersect the component containing the critical point, and $\pi(z)=\pi(0)$. But then $p(\pi(z))$ is the projection of the characteristic point on the parabolic orbit. Therefore, for $k\in\{0,1,2,\ldots,n-1\}$, all the $a_k$ image points of the projections of parabolic periodic points onto $\overline U_k$ are mapped under $p$ to image points of the projection onto $\overline U_{k+1}$. Since for $k\neq 0$ the polynomial $p$ maps $\overline U_k$ homeomorphically onto $\overline U_{k+1}$, we get $a_n\geq a_{n-1}\geq\ldots\geq a_2\geq a_1$.
Similarly, since $p$ maps $\overline U_0$ in a two-to-one fashion onto $\overline U_1$, we have $a_1\geq a_0/2$.

Now we connect the parabolic periodic points by a tree: first, there is a path between the critical point and the critical value, and all the other parabolic periodic points which are not on this path can be connected, one by one, to the subtree which has been constructed thus far. We can require that every path which we are adding intersects the boundary of any bounded Fatou component in the least number of points (at most two).
After finitely many steps, all the parabolic periodic points are connected by a finite tree, and all the endpoints of this tree are parabolic periodic points. It is not hard to check that this tree intersects the boundary of any bounded Fatou component exactly in the image points of the projections of the parabolic periodic points.

We now claim that there is a periodic Fatou component whose closure does not disconnect the tree. Indeed, any component whose closure does disconnect the tree has at least one parabolic periodic point, and thus at least one periodic Fatou component, in each connected component of the complement.
Pick one connected component, and within it pick a periodic Fatou component that is ``closest'' to the removed one (in the sense that the path between these two components does not contain further periodic Fatou components). Remove this component and continue; this process can be continued until we arrive at a component which does not disconnect the tree.
Let $U_{\tilde k}$ be such a component. It follows that $a_{\tilde k}\leq 2$: all the parabolic periodic points which are not on the boundary of $U_{\tilde k}$ must project to the same boundary point, say $b$, which may or may not be the parabolic periodic point on the boundary of $U_{\tilde k}$. We will now show that it is, so that $a_{\tilde k}=1$.

Since $b$ is the image of a projection onto $\overline U_{\tilde k}$ of some parabolic periodic point which is not in $\overline U_{\tilde k}$, it follows from the argument above that $p(b)$ is the image of a projection onto $\overline U_{\tilde k+1}$ of some parabolic periodic point which is not in $\overline U_{\tilde k+1}$. But since $b$ is the only boundary point of $\overline U_{\tilde k}$ with this property, it follows that $b$ is fixed under the first return map of this Fatou component. The point $b$ must then be the unique parabolic periodic point on the boundary of $\overline U_{\tilde k}$, and we have $a_{\tilde k}=1$.
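For reference, the counting inequalities obtained so far can be collected into a single chain; the following display restates (but does not add to) the argument above.

```latex
% a_{k+1} >= a_k for k >= 1 (p maps \overline U_k homeomorphically onto \overline U_{k+1}),
% a_1 >= a_0/2 (p is two-to-one over \overline U_0), and a_0 = a_n by definition:
\[
  \frac{a_0}{2} \;\leq\; a_1 \;\leq\; a_2 \;\leq\; \cdots \;\leq\; a_n \;=\; a_0,
  \qquad\text{hence}\qquad a_n \;\leq\; 2a_1 .
\]
% Combined with a_{\tilde k} = 1, this chain forces a_1 = 1 and a_k <= 2 for all k.
```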

Since $a_1\leq a_2\leq\ldots\leq a_n\leq 2a_1$, it follows in particular that $a_1=1$, that all $a_k\leq 2$, and that all projections onto the Fatou component containing the critical value take values in the same point, which is the characteristic point of the parabolic orbit. The remaining claims follow. $\Box$
\par
\par \noindent {\sc Remark.} The tree just constructed is similar to the {\em Hubbard tree} introduced in \cite{Orsay} for postcritically finite polynomials. An important difference is that our tree does not connect the critical orbit. Moreover, Hubbard trees in \cite{Orsay} are specified uniquely, while our trees still involve the choice of how to traverse bounded Fatou components. We will suggest a preferred tree below.

However, some properties are independent of the choice of the tree. Assume that two simple curves $\gamma_1$ and $\gamma_2$ within the filled-in Julia set connect the same two points $z_1$ and $z_2$, such that a point $w$ is on one of the curves but not on the other.
Then $w$ is on the closure of a bounded Fatou component, because the region which is enclosed by the two curves must be in the filled-in Julia set. The tree intersects the boundary of any bounded Fatou component in at most two points, which are projection images and thus well-defined.
Therefore, the choice for the curves, and thus for the tree, lies only in the interiors of bounded Fatou components.

For any point $w$ in the Julia set (not in a bounded Fatou component), it follows that the number of branches of the tree at $w$ (i.e.\ the number of components into which the point disconnects the tree) is independent of the choice of the tree.
Similarly, the number of branches is ``almost'' non-decreasing under the dynamics, so that $p(w)$ has at least as many branches as $w$: all the different branches at $w$ will yield different branches at $p(w)$, except if $w$ is on the boundary of the Fatou component containing the critical point.
At such boundary points, only the branch leading into the critical Fatou component can get lost. (However, it does happen that $p(w)$ has extra branches in the tree.) It follows that the characteristic point on the parabolic orbit has at most one branch on any tree, and all the other parabolic periodic points can have up to two branches.
A branch point of a tree is a point $w$ which disconnects the tree into at least three complementary components.

\begin{lemma}[Branch Points of Tree]
\label{LemHubbardBranch}
\rule{0pt}{0pt}\par\noindent
Branch points of the tree between parabolic periodic points are periodic or preperiodic points on repelling orbits.
\end{lemma}
\par \noindent {\sc Proof.
} Branch points are never on the parabolic orbit, as we have just seen. Therefore, the image of a branch point is always a branch point with at least as many branches. Since there are only finitely many branch points, every branch point is periodic or preperiodic and hence on a repelling orbit.
$\Box$ \par
We can now proceed to select a preferred tree, which we will call a {\em (parabolic) Hubbard tree}. We only have to specify how it will traverse bounded Fatou components.
In fact, since every bounded Fatou component will eventually map homeomorphically onto the critical Fatou component, we only have to specify how the tree has to traverse this component; for the remaining components, we can pull back. Let $U$ be the critical Fatou component and let $w$ be the parabolic periodic point on its boundary.
First we want to connect the critical point in $U$ to $w$ by a simple curve which is forward invariant under the dynamics. We will use {\em Fatou coordinates} for the attracting petal of the dynamics \cite[Section~7]{MiIntro}.
In these coordinates, the dynamics is simply addition of $+1$, and our curve will just be a horizontal straight line connecting the critical orbit. This curve can be extended up to the critical point. The other point on the boundary of the critical Fatou component which we have to connect is $-w$, and we use the symmetric curve.
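The Fatou-coordinate picture can be checked numerically on the standard model parabolic map $f(z)=z+z^2$ (this model map and the approximate Fatou coordinate $\varphi(z)=-1/z$ are textbook conventions, not objects defined in this paper): on the attracting petal, $\varphi$ conjugates $f$ to a map that is the translation $w\mapsto w+1$ up to an error that vanishes near the parabolic point.

```python
# Hedged numerical sketch (model map, not the paper's polynomial):
# for f(z) = z + z^2 the approximate Fatou coordinate phi(z) = -1/z
# satisfies phi(f(z)) - phi(z) = 1/(1+z), which tends to 1 as z -> 0,
# so the dynamics in Fatou coordinates is approximately w -> w + 1.

def f(z):
    return z + z * z

def phi(z):
    return -1.0 / z

z = -0.001  # a point on the attracting axis, close to the parabolic point 0
for _ in range(50):
    step = phi(f(z)) - phi(z)
    # each application of f advances the Fatou coordinate by almost exactly 1
    assert abs(step - 1.0) < 0.01
    z = f(z)
```

Under this conjugacy, the horizontal straight line in the $w$-coordinate pulls back to the forward-invariant curve through the critical orbit described above.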
With this choice, we have specified a preferred tree which is invariant under the dynamics, except that the image of the tree connects the characteristic periodic point on the parabolic orbit to the critical value. Removing this curve segment from the image tree, we obtain the same tree as before.
It is well known that, if a repelling or parabolic periodic point disconnects the Julia set into several parts, then this point is the landing point of as many dynamic rays as it disconnects the Julia set into. This is not hard to see once it is known that at least one ray lands.
It follows that any branch point of the Hubbard tree has dynamic rays landing between any two branches; any periodic point in the interior of the tree is the landing point of at least two dynamic rays separating the tree. It now follows that the characteristic point on the parabolic orbit, and thus every parabolic periodic point, is the landing point of at least two dynamic rays.
The two characteristic rays of the parabolic orbit are the two rays landing at the characteristic point of the orbit and closest possible to the critical value on either side. A different description of the characteristic rays has been given in Lemma~\ref{LemRayPermutation}.
\begin{lemma}[Orbit Separation Lemma]
\label{LemOrbitSeparation}
\rule{0pt}{0pt}\par\noindent
Any two different parabolic periodic points of a quadratic polynomial can be separated by two (pre)periodic dynamic rays landing at a common repelling (pre)periodic point.
\end{lemma}
\par \noindent {\sc Proof.
} It suffices to prove the lemma when one of the two parabolic periodic points is the characteristic point of the orbit; this is also the only case we will need in this section. Let $z$ be this characteristic point and let $z'$ be a different parabolic periodic point. Consider the tree of the polynomial as constructed above.
It contains a unique path connecting $z$ and $z'$. We may assume that this path does not traverse a periodic Fatou component except at its ends; if it does, we replace $z'$ by the parabolic periodic point on that Fatou component. Similarly, we may assume that the path does not traverse another parabolic periodic point.
If the path from $z$ to $z'$ contains a branch point of the tree, then by Lemma~\ref{LemHubbardBranch}, this branch point is periodic or preperiodic and repelling, and it is therefore the landing point of rational dynamic rays separating the parabolic orbit as claimed.

If the Hubbard tree does not have a branch point between $z$ and $z'$, then it takes a finite number $k$ of iterations to map $z'$ for the first time onto $z$.
Denote the path from $z$ to $z'$ by $\gamma$; then the $k$-th iterate of $\gamma$ must traverse itself and possibly more in an orientation reversing way: denoting the $k$-th image of $z$ by $z''$, the image curve connects $z$ and $z''$; it must start along the end
of $\gamma$ because $z$ is an endpoint of the Hubbard tree, and it cannot branch off because we had assumed no branch point of the tree to be on $\gamma$. There must be a unique point $z_f$ in the interior of $\gamma$ which is fixed under the $k$-th iterate.
This point is a repelling periodic point, and it is the landing point of two dynamic rays with the desired separation properties.
$\Box$ \par
Now we can prove Proposition~\ref{PropNecessary}.
\proofof{Proposition~\ref{PropNecessary}} In Proposition~\ref{PropPeriodicRaysLand}, we have shown that $\Theta_c$ can contain only angles $\theta$ of dynamic rays landing at the parabolic cycle in the dynamic plane of $c$.
By the Orbit Separation Lemma~\ref{LemOrbitSeparation}, all the rays not landing at the characteristic point of the parabolic orbit are separated from the critical value by a partition formed by two dynamic rays landing at a common repelling (pre)periodic point. This partition is stable in a neighborhood in parameter space by Lemma~\ref{LemStable}.
But the parameter $c$ being a limit point of the parameter ray at angle $\theta$ means that, for parameters arbitrarily close to $c$, the critical value is on the dynamic ray at angle $\theta$.
$\Box$ \par
The set $\Theta_c$ of external angles of the parabolic parameter $c$ can thus contain only such periodic angles which are external angles of the characteristic periodic point of the parabolic orbit in the dynamical plane of $c$. If there are more than two such angles, we want to exclude all those which are not characteristic. This is evidently impossible by a partition argument in the dynamic plane.

In order to prove Proposition~\ref{PropAtMostTwoRays}, we will use a partition of parameter space; for that, we have to look more closely at parameter space and incorporate some symbolic dynamics using kneading sequences. The partition of parameter space according to kneading sequences and, more geometrically, into internal addresses is of interest in its own right (see below) and has been investigated by Lau and Schleicher~\cite{IntAddr}; related ideas can be found in Thurston~\cite{Th}, Penrose~\cite{Pe} and \cite{Pe2}, and in a series of papers by Bandt and Keller (see~\cite{Ke1}, \cite{KeHabil} and the references therein). This partition will also be helpful in the next section, establishing landing properties of preperiodic parameter rays.

\begin{definition}[Kneading Sequence]
\label{DefKneadSeq}
\rule{0pt}{0pt}\par\noindent
To an angle $\theta \in \mbox{\bbf S}^1$, we associate its {\em kneading sequence} as follows: divide $\mbox{\bbf S}^1$ into two parts at $\theta/2$ and $(\theta+1)/2$ (the two inverse images of $\theta$ under angle doubling); the open part containing the angle $0$ is labeled ${\mbox{\tt 0}}$, the other open part is labeled ${\mbox{\tt 1}}$, and the boundary gets the label $*$. The kneading sequence of the angle $\theta$ is the sequence of labels corresponding to the angles $\theta$, $2\theta$, $4\theta$, $8\theta$, \ldots
\end{definition}

\begin{figure}[htbp]
\centerline{\psfig{figure=P_Kneading.eps,height=45mm}}
\centerline{\parbox{\captionwidth}{
\vspace{0mm}
\caption{\sl Left: the partition used in the definition of the kneading sequence. Right: a corresponding partition of the dynamic plane by dynamic rays, shown here for the example of a Misiurewicz polynomial.}
\label{FigKneading}
}}
\end{figure}

It is easy to check that, for $\theta \neq 0$, the first position always equals ${\mbox{\tt 1}}$.
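The labeling procedure of the definition is purely combinatorial and easy to experiment with. The following Python sketch (an illustration only, not part of the mathematical development; exact rational arithmetic avoids rounding at the partition boundary) computes the first entries of the kneading sequence of a rational angle:

```python
from fractions import Fraction

def kneading_sequence(theta, length):
    """First `length` entries of the kneading sequence of theta in S^1 = R/Z.

    The circle is cut at theta/2 and (theta+1)/2; for theta in (0, 1) the
    open arc between the two cut points misses 0 and is labeled '1', the
    other open arc contains 0 and is labeled '0', and the two boundary
    points get '*'.  Entry k is the label of 2^k * theta (mod 1).
    """
    theta = Fraction(theta) % 1
    lo, hi = theta / 2, (theta + 1) / 2   # boundary points, lo < hi
    labels, x = [], theta
    for _ in range(length):
        if x == lo or x == hi:
            labels.append('*')            # on the partition boundary
        elif lo < x < hi:
            labels.append('1')            # the arc (lo, hi) misses 0
        else:
            labels.append('0')            # the arc containing the angle 0
        x = 2 * x % 1                     # angle doubling
    return ''.join(labels)

print(kneading_sequence(Fraction(1, 7), 6))   # 11*11*  (period 3, * at last position)
print(kneading_sequence(Fraction(1, 3), 4))   # 1*1*    (period 2)
```

For $\theta = 1/7$ (period $3$ under doubling) this reproduces the sequence $\overline{{\mbox{\tt 1}}{\mbox{\tt 1}}*}$, with the symbol $*$ exactly at the last position of the period, in line with the observations below.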
If $\theta$ is periodic of period $n$, then its kneading sequence obviously has the same property, and the symbol $*$ appears exactly once within this period (at the last position). The symbol $*$ occurs only for periodic angles. However, it may happen that an irrational angle has a periodic kneading sequence (see e.g.\ \cite{IntAddr}). As the angle $\theta$ varies, the entry of the kneading sequence at any position $n$ changes exactly at those values of $\theta$ for which $2^{n-1}\theta$ is on the boundary of the partition, i.e.\ where the kneading sequence has the entry $*$.
This happens if and only if the angle $\theta$ is periodic, and its exact period is $n$ or divides $n$.

Another useful property which will be needed in Section~\ref{SecPreperiodic} is that the pointwise limits ${\mbox{\ttt K}}_-(\theta) := \lim_{\theta' \nearrow \theta} {\mbox{\ttt K}}(\theta')$ and ${\mbox{\ttt K}}_+(\theta) := \lim_{\theta' \searrow \theta} {\mbox{\ttt K}}(\theta')$ exist for every $\theta$.
If $\theta$ is periodic, then ${\mbox{\ttt K}}_\pm(\theta)$ is also periodic with the same period (but its exact period may be smaller).
Both limiting kneading sequences coincide with ${\mbox{\ttt K}}(\theta)$ everywhere, except that all the $*$-symbols are replaced by ${\mbox{\tt 0}}$ in one of the two sequences and by ${\mbox{\tt 1}}$ in the other.
The reason is simple: if $\theta'$ is very close to $\theta$, then the orbits under doubling, as well as the partitions in the kneading sequences, are close to each other, and any symbol ${\mbox{\tt 0}}$ or ${\mbox{\tt 1}}$ at any finite position will be unchanged provided $\theta'$ is close enough to $\theta$.
However, if the period of $\theta$ is $n$ so that $2^{n-1}\theta$ is on the boundary of the partition in the kneading sequence, then $2^{n-1}\theta'$ will barely miss the boundary in its own partition, and the $*$ will turn into a ${\mbox{\tt 0}}$ or a ${\mbox{\tt 1}}$.
As long as the orbit of $\theta'$ is close to the orbit of $\theta$, all the symbols $*$ will be replaced by the same symbol.

\begin{figure}[tbh]
\centerline{\psfig{figure=P_ParaPart1.eps,height=70mm}\hfil
\psfig{figure=P_ParaPart2.eps,height=70mm}}
\centerline{\parbox{\captionwidth}{
\vspace{3mm}
\caption[The partition $\Part{n}$ used in the proof of Theorem~\protect\ref{ThmRatRays}, for $n=4$]{\sl Left: the partition $\Part{n}$ used in the proof of Proposition~\protect\ref{PropAtMostTwoRays}, for $n=4$. The corresponding hyperbolic components are drawn in for clarity and do not form part of the partition. Right: a corresponding symbolic picture, showing how the partition yields a parameter space of initial segments of kneading sequences. The same pairs of rays are drawn in as on the left hand side, but the angles are unlabeled for lack of space.}
\label{FigParaPart}
}}
\end{figure}

Proposition~\ref{PropPeriodicRaysLand} asserts in particular that all the periodic parameter rays landing at the same parameter have equal period. All the rays of period at most $n-1$ divide the plane into finitely many pieces. We denote this partition by $\Part{n-1}$; it is illustrated in Figure~\ref{FigParaPart}. Parabolic parameters of ray period $n$ and parameter rays of period $n$ have no point in common with the boundary of this partition.
\begin{lemma}[Kneading Sequences in the Partition]
\label{LemKneadingPartition}\rule{0pt}{0pt}\par\noindent
Fix any period $n\geq 1$ and suppose that all the parameter rays of
periods at most $n-1$ land in pairs. Then all parameter rays in any
connected component of $\Part{n-1}$ have the property that the first
$n-1$ entries in their kneading sequences coincide and do not contain
the symbol $*$. In particular, rays of period $n$ with different
kneading sequences do not land at the same parameter.
\end{lemma}
\par \noindent {\sc Proof.
}
The first statement is trivial for two periodic rays which do not have
rays of lower periods between them, i.e., for rays from the same
``access to infinity'' of the connected component in $\Part{n-1}$: the
first $n-1$ entries in the kneading sequences are stable for angles
within every such access. The claim is interesting only for a
connected component with several ``accesses to infinity''. The
hypothesis of the lemma asserts that parameter rays of periods up to
$n-1$ land in pairs. Therefore, whenever two rays at angles
$\theta_1,\theta_2$ are in the same connected component of
$\Part{n-1}$, the parameter rays of any period $k\leq n-1$ on either
side (in $\mbox{\bbf S}^1$) between these two angles must land in
pairs. The number of such rays is thus even, and the $k$-th entry in
the kneading sequence changes an even number of times between
${\mbox{\tt 0}}$ and ${\mbox{\tt 1}}$, so it agrees for $\theta_1$ and
$\theta_2$. $\Box$\par

\par \noindent {\sc Remark. }
This lemma allows us to interpret $\Part{n}$ as a parameter space of
initial segments of kneading sequences.
In Figure~\ref{FigParaPart}, the partition is indicated for $n=4$,
together with the initial four symbols of the kneading sequence. The
entire parameter space may thus be described by kneading sequences, as
noted above. To any parameter $c\in\mbox{\bbf C}$, we may associate a
kneading sequence as follows: it is a one-sided infinite sequence of
symbols, and the $k$-th entry is ${\mbox{\tt 1}}$ if and only if the
parameter is separated from the origin by an even number of parameter
ray pairs of periods $k$ or dividing $k$; if the number of such ray
pairs is odd, then the entry is ${\mbox{\tt 0}}$, and if the parameter
is exactly on such a ray pair, then the entry is $*$. Calculating the
kneading sequence of any point is substantially simplified by the
observation that, in order to know the entire kneading sequence at a
parameter ray pair of some period $n$, it suffices to know the first
$n-1$ entries in the kneading sequence, so we only have to look at ray
pairs of periods up to $n-1$. This leads to the following algorithm:
for any point $c\in\mbox{\bbf C}$, find consecutively the parameter
ray pairs of lowest periods between the previously used ray pair and
the point $c$. The periods of these ray pairs will form a strictly
increasing sequence of integers and allow us to reconstruct the
kneading sequence, encoding it very efficiently. If we extend this
sequence by a single entry $1$ in the beginning, we obtain the {\em
internal address} of $c$. For details, see \cite{IntAddr}.
In the context of real quadratic polynomials, this internal address is
known as the sequence of {\em cutting times} in the Hofbauer tower.

The figure shows that certain initial segments of kneading sequences
appear several times. This can be described and explained precisely
and gives rise to certain symmetries of the Mandelbrot set; see
\cite{IntAddr}.
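The reconstruction of a kneading sequence from its internal address can be made concrete. The following is a minimal sketch, under the assumption (standard for internal addresses, though the precise rule is not spelled out above) that each successive address entry $S$ extends the current periodic sequence to length $S$ and flips its $S$-th symbol; the function name and string representation are ours, not from the text.

```python
def kneading_from_internal_address(address):
    """Rebuild one period of a kneading sequence from an internal address.

    `address` is the strictly increasing sequence of ray-pair periods
    described above, prefixed by the entry 1, e.g. [1, 2, 3].
    The extend-and-flip rule used here is an assumed reconstruction
    consistent with the observation that the first n-1 entries
    determine the sequence at a ray pair of period n.
    """
    assert address and address[0] == 1, "internal addresses start with 1"
    nu = ['1']  # the internal address "1" alone gives the sequence 111...
    for s in address[1:]:
        # repeat the current sequence periodically up to length s ...
        ext = [nu[i % len(nu)] for i in range(s)]
        # ... and flip the s-th entry
        ext[-1] = '0' if ext[-1] == '1' else '1'
        nu = ext
    return ''.join(nu)

# e.g. the internal address 1 -> 2 yields the period-2 sequence '10'
```

Under this rule the periods in the address suffice to encode the whole sequence, which is the efficiency gain mentioned in the remark.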
\begin{lemma}[Different Kneading Sequences]
\label{LemKneadingDifferent}\rule{0pt}{0pt}\par\noindent
Let $c$ be a parabolic parameter and let $z_1$ be the characteristic
periodic point on the parabolic orbit. Among the dynamic rays landing
at $z_1$, only the two characteristic rays can have angles with
identical kneading sequences.
\end{lemma}
\par \noindent {\sc Proof.
}
Let $\theta_1,\theta_2,\ldots,\theta_s$ be the angles of the dynamic
rays landing at $z_1$. If their number $s$ is $2$, then both angles
are characteristic, and there is nothing to show. We may hence assume
$s\geq 3$. All the rays $\theta_i$ are periodic of period $n$, say. By
Lemma~\ref{LemRayPermutation}, the orbit period of the parabolic orbit
is exactly $n/s=:k$. Let $z_0$ and $z_0'$ be the two (different)
immediate inverse images of $z_1$ such that $z_0$ is periodic. If any
one of the rays $R(\theta_i)$ is chosen, its two inverse images,
together with any simple path in the critical Fatou component
connecting $z_0$ and $z_0'$, form a partition of the complex plane
into two parts.
We label these parts again by ${\mbox{\tt 0}}$ and ${\mbox{\tt 1}}$ so
that the dynamic ray at angle $0$ is in part ${\mbox{\tt 0}}$, and we
label the boundary by $*$. Now the labels of the parts containing the
rays $R(\theta_i)$, $R(2\theta_i)$, $R(4\theta_i)$, $\ldots$ again
reflect the kneading sequence of $\theta_i$ because the partitions are
bounded at the same angles.

Above, we have constructed a tree connecting the parabolic orbit. By
Lemma~\ref{LemHubbardBranch}, branch points of this tree are on
repelling orbits, so $z_0$ and $z_0'$ have at most two branches of the
tree. One branch always goes into the critical Fatou component to the
critical point. For $z_0$ or $z_0'$, the other branch goes to the
critical value, which is always in the region labeled ${\mbox{\tt 1}}$.
The second branch at the other point ($z_0'$ or $z_0$) must leave in
the symmetric direction, so it will always lead into the region
labeled ${\mbox{\tt 0}}$ (if the second branch at $z {\mbox{\tt
$\star$}} ' {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} were {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} lead {\mbox{\tt $\star$}} into {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} direction {\mbox{\tt $\star$}} other {\mbox{\tt $\star$}} than {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} symmetric {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} then {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} common {\mbox{\tt $\star$}} image {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} could {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} an {\mbox{\tt $\star$}} endpoint {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} tree {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} Lemma {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {LemHubbardBranch {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} It {\mbox{\tt $\star$}} follows {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} entire {\mbox{\tt $\star$}} tree {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} except {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} part {\mbox{\tt $\star$}} between {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} subset {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} filled {\mbox{\tt $\star$}} -in {\mbox{\tt $\star$}} Julia {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} whose {\mbox{\tt $\star$}} label {\mbox{\tt $\star$}} does {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} depend {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _i {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} been {\mbox{\tt $\star$}} used {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} define {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} partition {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} kneading {\mbox{\tt $\star$}} sequence {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} For {\mbox{\tt $\star$}} positive {\mbox{\tt $\star$}} integers {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $l {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} let {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _l {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $l {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} -th {\mbox{\tt $\star$}} forward {\mbox{\tt $\star$}} image {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} If {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $l {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} divisible {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} follows {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} label {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _l {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} independent {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _i {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} since {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _l {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $2 {\mbox{\tt $\star$}} ^ {\mbox{\tt $\star$}} {l {\mbox{\tt $\star$}} -1 {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _i {\mbox{\tt 
$\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} follows {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $l {\mbox{\tt $\star$}} -1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} -st {\mbox{\tt $\star$}} entries {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} kneading {\mbox{\tt $\star$}} sequences {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _i {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} same {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Therefore {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} we {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} restrict {\mbox{\tt $\star$}} our {\mbox{\tt $\star$}} attention {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _1 {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _2 {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \ldots {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _s {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} where {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _i {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} an {\mbox{\tt $\star$}} immediate {\mbox{\tt $\star$}} inverse {\mbox{\tt $\star$}} image {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \theta {\mbox{\tt $\star$}} _i {\mbox{\tt 
$\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} first {\mbox{\tt $\star$}} return {\mbox{\tt $\star$}} dynamics {\mbox{\tt $\star$}} among {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} multiplication {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $2 {\mbox{\tt $\star$}} ^k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} cyclic {\mbox{\tt $\star$}} permutation {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} combinatorial {\mbox{\tt $\star$}} rotation {\mbox{\tt $\star$}} number {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $r {\mbox{\tt $\star$}} /s {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} some {\mbox{\tt $\star$}} integer {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $r {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (Lemma {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {LemRayPermutation {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} compare {\mbox{\tt $\star$}} Figure {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {FigKnSeqDiff {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} . 
\begin{figure}[tb]
\centerline{\psfig{figure=P_KnSeqDiff.eps,height=82mm}}
\centerline{\parbox{\captionwidth}{
\vspace{0mm}
\caption[Illustration of the proof of Lemma~\protect\ref{LemKneadingDifferent}]{\sl Illustration of the proof of Lemma~\protect\ref{LemKneadingDifferent}.
Left: coarse sketch of the entire Julia set; solid numbers describe the parabolic orbit (in this case, of period $5$), and outlined numbers specify the corresponding entries in the kneading sequences of external angles of rays landing at the characteristic periodic point.
Right: blow-up near the critical point (center, marked by $*$) with the periodic point $z_0$, considered as a fixed point of the first return map. In this case, seven rays land at $z_0$ with combinatorial rotation number $3/7$. The rays are labeled by the corresponding kneading sequences.
The symbols ${\mbox{\tt 0}}$ and ${\mbox{\tt 1}}$ indicate regions of the Julia set which are always on the same side of the partition, independently of which ray is chosen.}
\label{FigKnSeqDiff}
}}
\end{figure}

Depending on which of the rays $\theta_i$ is used for the kneading sequence, i.e.\ which of the rays $\theta'_i$ defines the partition, a given ray $\theta'_{j}$ may have label ${\mbox{\tt 0}}$ or label ${\mbox{\tt 1}}$. In particular, the total number among these rays which are in region ${\mbox{\tt 0}}$ may be different.
But then the number of symbols ${\mbox{\tt 0}}$ within any period of the kneading sequence will be different, and the kneading sequences cannot coincide. Two angles among the $\theta_i$ can thus have the same kneading sequence only if the corresponding partition has equally many rays in the region labeled ${\mbox{\tt 0}}$.
This leaves only various pairs of angles at symmetric positions around the critical value as candidates to have identical kneading sequences. But it is not hard to verify, looking at the cyclic permutation of the rays $\theta'_i$, that if two such angles define a partition in which at least one of the $\theta'_i$ is in region ${\mbox{\tt 0}}$, then the two corresponding kneading sequences are different at some position which is a multiple of $k$. The only two angles with identical kneading sequences are therefore those for which all the $\theta'_i$ are in region ${\mbox{\tt 1}}$ (or on its boundary), so the partition boundary is adjacent to the Fatou component containing the critical point.
The angles $\theta_i$ are hence those for which the dynamic rays land at $z_1$ adjacent to the critical value, so the corresponding rays are the characteristic rays of the parabolic orbit. They do in fact have identical kneading sequences.
$\Box$ \par

\proofof{Proposition~\ref{PropAtMostTwoRays}}
We will do the proof by induction on the period $n$.
{\mbox{\tt $\star$}} For {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} =1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} there {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} only {\mbox{\tt $\star$}} two {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} both {\mbox{\tt $\star$}} describe {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} same {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} lands {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} =1 {\mbox{\tt $\star$}} /4 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} To {\mbox{\tt $\star$}} show {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} statement {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} proposition {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} we {\mbox{\tt $\star$}} may {\mbox{\tt $\star$}} suppose {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} periods {\mbox{\tt $\star$}} up {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} -1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} land {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} pairs {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} We {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \Theta {\mbox{\tt $\star$}} _c {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} contains {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} All {\mbox{\tt $\star$}} these {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} same {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} Lemma {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {LemKneadingPartition {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} identical {\mbox{\tt $\star$}} kneading {\mbox{\tt $\star$}} sequences {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} Since {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} corresponding {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} land {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} characteristic {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} Proposition {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {PropNecessary {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} Lemma {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {LemKneadingDifferent {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} says {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} if {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \Theta {\mbox{\tt $\star$}} _c {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} contains {\mbox{\tt $\star$}} more {\mbox{\tt $\star$}} than {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} single {\mbox{\tt $\star$}} element {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} contains {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} two {\mbox{\tt $\star$}} characteristic {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} This {\mbox{\tt $\star$}} proves {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} proposition {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} implies {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Structure {\mbox{\tt $\star$}} Theorem {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} inductive {\mbox{\tt $\star$}} step {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} completed {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $\Box$ \par {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} We {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} now {\mbox{\tt $\star$}} finished {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} proof {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} part {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} theorem {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} describing {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} land {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} common {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} This {\mbox{\tt $\star$}} prepares {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} ground {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} combinatorial {\mbox{\tt $\star$}} descriptions {\mbox{\tt $\star$}} such {\mbox{\tt $\star$}} as {\mbox{\tt $\star$}} Lavaurs {\mbox{\tt $\star$}} ' {\mbox{\tt $\star$}} algorithm {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \cite {\mbox{\tt $\star$}} {La {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} or {\mbox{\tt $\star$}} internal {\mbox{\tt $\star$}} addresses {\mbox{\tt $\star$}} . 
\newsection{Preperiodic Rays}
\label{SecPreperiodic}

In this section, we turn to parameter rays at preperiodic angles and show at which Misiurewicz points they land. We will again use kneading sequences.
Recall that if the angle $\theta$ is periodic of period $n$, then its kneading sequence ${\mbox{\ttt K}}(\theta)$ will be periodic of the same period; it will have the symbol $*$ exactly at positions $n, 2n, 3n, \ldots$.
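As a concrete illustration, here is a sample computation (not part of the original text); it assumes the usual itinerary definition of the kneading sequence, in which the $k$-th symbol records whether $2^{k-1}\theta$ lies in the open arc $(\theta/2,(\theta+1)/2)$ containing $\theta$ (symbol ${\mbox{\tt 1}}$), in the complementary open arc (symbol ${\mbox{\tt 0}}$), or on the boundary (symbol $*$). Take $\theta=1/7$, which has period $3$ under angle doubling; the boundary points are $1/14$ and $4/7=8/14$:
\begin{align*}
2^0\cdot\tfrac17 &= \tfrac{2}{14} \in \left(\tfrac1{14},\tfrac8{14}\right) &&\leadsto {\mbox{\tt 1}},\\
2^1\cdot\tfrac17 &= \tfrac{4}{14} \in \left(\tfrac1{14},\tfrac8{14}\right) &&\leadsto {\mbox{\tt 1}},\\
2^2\cdot\tfrac17 &= \tfrac{8}{14} = \tfrac{\theta+1}{2} &&\leadsto *,
\end{align*}
after which the orbit repeats, so ${\mbox{\ttt K}}(1/7)=\overline{{\mbox{\tt 1}}{\mbox{\tt 1}}*}$, with the symbol $*$ exactly at positions $3,6,9,\ldots$ as stated.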
Moreover, the pointwise limits ${\mbox{\ttt K}}_-(\theta) := \lim_{\theta'\nearrow\theta} {\mbox{\ttt K}}(\theta')$ and ${\mbox{\ttt K}}_+(\theta) := \lim_{\theta'\searrow\theta} {\mbox{\ttt K}}(\theta')$ both exist; in one of them, all the symbols $*$ are replaced by ${\mbox{\tt 1}}$ and in the other by ${\mbox{\tt 0}}$ throughout. Both are still periodic; in fact, their period is $n$ (or a divisor thereof).
More precisely, if the parameter rays at periodic angles $\theta_1$ and $\theta_2$ both land at the same parameter value, then ${\mbox{\ttt K}}_\pm(\theta_1) = {\mbox{\ttt K}}_\mp(\theta_2)$: it suffices to verify this statement for a single period within the kneading sequences, and this follows from Lemma~\ref{LemKneadingPartition}. We can thus imagine every pair of periodic parameter rays being replaced by two pairs, infinitesimally close on either side of the given pair and having periodic kneading sequences without the symbol $*$.
Kneading sequences of preperiodic rays are themselves preperiodic; the lengths of the preperiods of external angle and kneading sequence are equal (this is easy to verify; or see the proof of Lemma~\ref{LemNumberRaysMisiu}).
However, the lengths of the periods need not be equal: the ray $9/56=0.001\overline{010}$ has kneading sequence ${\mbox{\tt 1}}{\mbox{\tt 1}}{\mbox{\tt 0}}\,\overline{{\mbox{\tt 1}}{\mbox{\tt 1}}{\mbox{\tt 1}}}={\mbox{\tt 1}}{\mbox{\tt 1}}{\mbox{\tt 0}}\,\overline{{\mbox{\tt 1}}}$. This fact is directly related to the number of parameter rays landing at the same Misiurewicz point; see below.
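This kneading sequence can be checked directly under the same itinerary definition as before (a routine verification, added here for illustration): the boundary points for $\theta=9/56$ are $\theta/2=9/112$ and $(\theta+1)/2=65/112$, and the doubling orbit of $\theta$, written in units of $1/112$, gives
\begin{align*}
18,\;36 &\in (9,65) &&\leadsto {\mbox{\tt 1}}\,{\mbox{\tt 1}},\\
72 &\notin (9,65) &&\leadsto {\mbox{\tt 0}},\\
32,\;64,\;16,\;32,\;64,\;16,\;\ldots &\in (9,65) &&\leadsto {\mbox{\tt 1}}\,{\mbox{\tt 1}}\,{\mbox{\tt 1}}\ldots,
\end{align*}
so ${\mbox{\ttt K}}(9/56)={\mbox{\tt 1}}{\mbox{\tt 1}}{\mbox{\tt 0}}\,\overline{{\mbox{\tt 1}}}$: the external angle has preperiod $3$ and period $3$, while its kneading sequence has preperiod $3$ and period $1$.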
\begin{figure}[htbp]
\centerline{\psfig{figure=P_Preperiodic1.eps,height=50mm}\hfil
            \psfig{figure=P_Preperiodic2.eps,height=50mm}}
\centerline{\parbox{\captionwidth}{
\caption{\sl Illustration of the theorem in the preperiodic case.
Shown are the Julia sets of the polynomials at the landing points of
the parameter rays at angles $9/56$, $11/56$ and $15/56$ (left) and at
angle $1/6$ (right). In both pictures, the dynamic rays landing at the
critical values are drawn.}
\label{FigPreperiodic}}}
\end{figure}

\proofof{Theorem~\ref{ThmRatRays} (preperiodic case)}
Consider any preperiodic parameter ray at angle $\theta$ and let $c$ be one of its limit points. First suppose that $c$ is a parabolic parameter.
We know that there are two parameter rays at periodic angles $\theta_1, \theta_2$ which land at $c$. We can imagine two parameter ray pairs infinitesimally close to the ray pair $(\theta_1,\theta_2)$ on both sides, and these two parameter ray pairs have periodic kneading sequences without symbols $*$.
Each of these two periodic kneading sequences must differ at some finite position from the preperiodic kneading sequence of $\theta$. But we saw in the previous section that the regions of constant initial segments of kneading sequences are bounded by pairs of parameter rays at periodic angles (see Lemma~\ref{LemKneadingPartition}), so there is a pair of periodic parameter rays landing at the same point which separates the parameter ray at angle $\theta$ from the two rays at angles $\theta_{1,2}$ and from the parabolic point $c$. Therefore, $c$ cannot be a limit point of the parameter ray at angle $\theta$. This contradiction shows that no limit point of a preperiodic parameter ray is parabolic.
Now we argue as in the proof of Proposition~\ref{PropPeriodicRaysLand}. For the parameter $c$, there is no parabolic orbit, so the dynamic ray at angle $\theta$ lands at a repelling preperiodic point. We want to show that the landing point is the critical value. Since $c$ is a limit point of the parameter ray at angle $\theta$, there are arbitrarily close parameters for which the critical value is on the dynamic ray at angle $\theta$.
If, for the parameter $c$, the dynamic ray at angle $\theta$ does not land at the critical point or at a point on the backwards orbit of the critical point, then the dynamic ray at angle $\theta$ and its landing point depend continuously on the parameter by Lemma~\ref{LemStable}, so the critical value must be the landing point of the dynamic ray at angle $\theta$ for the parameter $c$. If, however, the landing point of the dynamic $\theta$-ray is on the backwards orbit of the critical value, then some finite forward image of this ray will depend continuously on the parameter, and pulling back may yield a dynamic ray bouncing once into the critical value or a point on its backward orbit, but after that the two continuations will land at well-defined points. The ray with both continuations and both landing points will still depend continuously on the parameter, so again the dynamic $\theta$-ray must land at the critical value for the parameter $c$.
(However, this contradicts the assumption that the landing point is on the backwards orbit of the critical value, because that would force the critical value to be periodic.) We see that, for any limit point $c$ of the parameter ray at angle $\theta$, the number $c$ is preperiodic under $z\mapsto z^2+c$ with fixed period and preperiod, and $c$ is a Misiurewicz point. Since any such point $c$ satisfies a certain polynomial equation, there are only finitely many such points. The limit set of any ray is connected, so the parameter ray at angle $\theta$ lands, and the landing point is a Misiurewicz point with the required properties. This shows the third part of Theorem~\ref{ThmRatRays}.
For the last part, we have already shown that a Misiurewicz point cannot be the landing point of a periodic parameter ray, or of a preperiodic ray with external angle different from the angles of the dynamic rays landing at the critical value.
It remains to show that if $c_0$ is a Misiurewicz point such that the critical value is the landing point of the dynamic ray at angle $\theta$, then the parameter ray at angle $\theta$ lands at $c_0$. We will use ideas from Douady and Hubbard~\cite{Orsay}.
By Lemma~\ref{LemStable}, there is a simply connected neighborhood $V$ of $c_0$ in parameter space such that $c_0$ can be continued analytically as a repelling preperiodic point, yielding an analytic function $z(c)$ with $z(c_0)=c_0$ such that the dynamic ray at angle $\theta$ for the parameter $c\in V$ lands at $z(c)$. The relation $z(c)=c$ is certainly not satisfied identically on all of $V$, so the solutions are discrete and we may assume that $c_0$ is the only one within $V$.
Now we consider the winding number of the dynamic ray at angle $\theta$ around the critical value, which is defined as follows: denoting the point on the dynamic $\theta$-ray at potential $t\geq 0$ by $z_t$ and decreasing $t$ from $+\infty$ to $0$, the winding number is the total change of $\arg(z_t-c)$ (divided by $2\pi$ so as to count in full turns). Provided that the critical value is not on the dynamic ray or at its landing point, the winding number is well-defined and finite and depends continuously on the parameter.
If the parameter $c$ moves in a small circle around $c_0$ and the winding number is defined all the time, then it must change by an integer corresponding to the multiplicity of $c_0$ as a root of $z(c)-c$. However, when the parameter returns to where it started, the winding number must be restored to what it was before.
This requires a discontinuity of the winding number, so there are parameters arbitrarily close to $c_0$ for which the critical value is on the dynamic ray at angle $\theta$, and $c_0$ is a limit point of the parameter ray at angle $\theta$. Since this parameter ray lands, it lands at $c_0$.
This finishes the proof of Theorem~\ref{ThmRatRays}. $\Box$ \par

\par \noindent {\sc Remark. } There is no partition in the dynamic plane showing that preperiodic parameter rays cannot land at parabolic parameters: there are countably many preperiodic dynamic rays landing at the boundary of the characteristic Fatou component, for example at preperiodic points on the parabolic orbit, and they cannot be separated by a stable partition.
For the final part of the theorem, we used that a repelling preperiodic point $z(c)$ depends analytically on the parameter. As mentioned before, this proof originated in the need to describe parameter spaces of antiholomorphic polynomials like the Tricorn and Multicorns, and there we do not have analytic dependence on parameters.
Here is another way to prove that every Misiurewicz point is the landing point of all the parameter rays whose angles are the external angles of the critical value in the dynamic plane. We start with any Misiurewicz point $c_0$ and an external angle $\theta$ of its critical value. Let $c_1$ be the landing point of the parameter ray at angle $\theta$.
Then both parameters $c_0$ and $c_1$ have the property that, in the dynamic plane, the ray at angle $\theta$ lands at the critical value. It suffices to prove that this property determines the parameter uniquely. This is exactly the content of the Spider Theorem, which provides an iterative procedure to find postcritically finite polynomials with assigned external angles of the critical value.
In Hubbard and Schleicher~\cite{HS}, there is an easy proof for polynomials with a single critical point. While the existence part of that proof works only if the critical point is periodic, all we need here is the uniqueness part, and that works in the preperiodic case just as well, both for holomorphic and for antiholomorphic polynomials.
The last part could probably also be done in a more combinatorial but rather tedious way, using counting arguments like in the periodic case. This would, however, be quite delicate, as the number of Misiurewicz points and the number of parameter rays landing at them require more bookkeeping: the number of parameter rays landing at Misiurewicz points varies and can be any positive integer.
The following lemma makes this more precise.

\begin{lemma}[Number of Rays at Misiurewicz Points]
\label{LemNumberRaysMisiu}
\rule{0pt}{0pt}\par\noindent
Suppose that a preperiodic angle $\theta$ has preperiod $l$ and period $n$. Then the kneading sequence ${\mbox{\ttt K}}(\theta)$ has the same preperiod $l$, and its period $k$ divides $n$. If $n/k>1$, then the total number of parameter rays at preperiodic angles landing at the same point as the ray at angle $\theta$ is $n/k$; if $n/k=1$, then the number of parameter rays is $1$ or $2$.
\end{lemma}

In the example above, we have seen that the angle $9/56$ has period $3$, while its kneading sequence has period $1$.
{\mbox{\tt $\star$}} Therefore {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} total {\mbox{\tt $\star$}} number {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} corresponding {\mbox{\tt $\star$}} Misiurewicz {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} three {\mbox{\tt $\star$}} : {\mbox{\tt $\star$}} their {\mbox{\tt $\star$}} external {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $9 {\mbox{\tt $\star$}} /56 {\mbox{\tt $\star$}} ,11 {\mbox{\tt $\star$}} /56 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $15 {\mbox{\tt $\star$}} /56 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} If {\mbox{\tt $\star$}} more {\mbox{\tt $\star$}} than {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} lands {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} given {\mbox{\tt $\star$}} Misiurewicz {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} hard {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} determine {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} knowing {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} them {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} using {\mbox{\tt $\star$}} ideas {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} proof {\mbox{\tt $\star$}} below {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \par \noindent {\sc Proof. 
} In the dynamic plane of $\theta$, the dynamic ray at angle $\theta$ lands at the critical value, so the two inverse image rays at angles $\theta/2$ and $(\theta+1)/2$ land at the critical point and separate the dynamic plane into two parts; this partition cuts the external angles of dynamic rays in the same way as in the partition defining the kneading sequence, see Definition~\ref{DefKneadSeq} and Figure~\ref{FigKneading}. We label the two parts by $\mbox{\tt 0}$ and $\mbox{\tt 1}$ in the analogous way, assigning the symbol $*$ to the boundary. The partition boundary intersects the Julia set only at the critical point.

The critical value jumps after exactly $l$ steps onto a periodic orbit of ray period $n$. Denote the critical orbit by $c_0,c_1,c_2,\ldots$ with $c_0=0$ and $c_1=c$, so that $c_{l+1}=c_{l+n+1}$, while $c_l=-c_{l+n}$.
The points $c_l$ and $c_{l+n}$ are on different sides of the partition. The periodic part of the kneading sequence starts exactly where the periodic part of the external angles starts, so the preperiods are equal.

We know that $n$ is the ray period of the orbit the critical value falls onto.
The orbit period is exactly $k$: periodic rays which have their entire forward orbits on the same sides of the partition land at the same point, for the following reason: we can connect the landing points of two such rays by a curve which avoids all the preperiodic rays landing at the critical point, all the finitely many rays on their forward orbits, and their landing points (if we have to cross some of these ray pairs, they must also visit the same sides of the partition, and we can reduce the problem). Now inverse images of the rays are connected by inverse images of the curve, which avoid the same rays. Continuing to take inverse images in this way, the periodic landing points must converge to each other, so they cannot be different.

The number of dynamic rays on the orbit of $\theta$ landing at every point of the periodic orbit is therefore $n/k$, and the critical value jumps onto this orbit as a local homeomorphism, so it is the landing point of equally many preperiodic rays. But these rays reappear in parameter space as the rays landing at the Misiurewicz point.

It remains to show that there are no extra rays at the periodic orbit. By Lemma~\ref{LemRayPermutation}, more than two dynamic rays can land at the same periodic point only if these rays are on the same orbit, i.e., the dynamics permutes the rays transitively.
The number of dynamic rays can therefore be greater than $n/k$ only if $n/k=1$, and in that case, there can be at most two rays. $\Box$ \par

\par \noindent {\sc Remark. } It does indeed happen that $n/k=1$ while the number of rays is two.
An example is given by the two parameter rays at angles $25/56=0.011\overline{100}$ and $31/56=0.100\overline{011}$; their common kneading sequence is $\mbox{\tt 100}\,\overline{\mbox{\tt 101}}$, so $n=k=3$, but these two rays land together at a point on the real axis.
On the other hand, for the angle $1/2=0.0\overline{1}=0.1\overline{0}$, the kneading sequence is $\mbox{\tt 1}\overline{\mbox{\tt 0}}$, so $n=k=1$; the parameter ray at angle $1/2$ is the only ray landing at the leftmost antenna tip $c=-2$ of the Mandelbrot set.
These rays are indicated in Figure~\ref{FigMandelRays}. For a related discussion of rays landing at common points, from the point of view of ``Thurston obstructions'', see \cite{HS}.
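The kneading sequences appearing in this remark and in the example above can be checked mechanically: iterate angle doubling with exact rational arithmetic and record on which side of the partition boundary $\{\theta/2,(\theta+1)/2\}$ each orbit point falls, as in Definition~\ref{DefKneadSeq}. The following sketch (the function names are ours, introduced only for this illustration) verifies the claims for the angles $9/56$, $25/56$ and $31/56$.

```python
from fractions import Fraction

def double(x):
    """One step of angle doubling on the circle R/Z."""
    return (2 * x) % 1

def orbit_data(theta):
    """(preperiod, period) of theta under angle doubling mod 1."""
    x, seen = theta, {}
    while x not in seen:
        seen[x] = len(seen)
        x = double(x)
    return seen[x], len(seen) - seen[x]

def symbol(x, theta):
    """Itinerary symbol of x: 1 inside the arc (theta/2, (theta+1)/2),
    0 in the complementary arc, '*' on the boundary."""
    a, b = theta / 2, (theta + 1) / 2
    if x == a or x == b:
        return '*'
    return '1' if a < x < b else '0'

def kneading(theta, length):
    """First `length` symbols of the kneading sequence of theta."""
    x, out = theta, []
    for _ in range(length):
        out.append(symbol(x, theta))
        x = double(x)
    return ''.join(out)

def kneading_period(theta):
    """Exact period of the periodic part of the kneading sequence."""
    l, n = orbit_data(theta)
    x = theta
    for _ in range(l):          # skip the preperiodic part
        x = double(x)
    cycle = []
    for _ in range(n):          # symbols along the periodic orbit
        cycle.append(symbol(x, theta))
        x = double(x)
    for k in range(1, n + 1):   # minimal period of the cyclic word
        if n % k == 0 and all(cycle[i] == cycle[i % k] for i in range(n)):
            return k

# 9/56 has preperiod 3 and period 3, but kneading period 1;
# 25/56 and 31/56 share the kneading sequence 100 101 101 ...
print(orbit_data(Fraction(9, 56)))        # (3, 3)
print(kneading_period(Fraction(9, 56)))   # 1
print(kneading(Fraction(25, 56), 12))     # 100101101101
print(kneading(Fraction(31, 56), 12))     # 100101101101
```

Exact `Fraction` arithmetic matters here: floating-point doubling would lose the eventual periodicity of the orbit and the `*` boundary test.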
\hide{At this point, the Farey algorithm for parameter rays, for kneading sequences and for the (sub-)limbs which ``hide'' certain cyclic permutations of dynamic rays might be introduced and used.
}

\newsection{Hyperbolic Components}
\label{SecHyperbolic}

The Orbit Separation Lemma~\ref{LemOrbitSeparation} has an important consequence: it helps to control the dynamics when a parabolic Julia set is perturbed. Perturbations of parabolics are a subtle issue because both the Julia set and the filled-in Julia set behave drastically discontinuously.
We show that nonetheless the landing points of all the dynamic rays at rational angles behave continuously wherever the rays land. In a way, the following proposition is the parabolic analogue to Lemma~\ref{LemStable}, which dealt with repelling periodic points. However, we will explain below that the rays themselves do not depend continuously on the parameter.

\begin{proposition}[Continuous Dependence of Landing Points]
\label{PropContinuousLanding}
\rule{0pt}{0pt}\par\noindent
For any rational angle $\theta$, the landing point of the dynamic ray at angle $\theta$ depends continuously on the parameter on the entire subset of parameter space for which the ray lands.
\end{proposition}

The dynamic ray at angle $\theta$ for the polynomial $p_c$ fails to land if and only if it bounces into the critical point or into a point on the inverse orbit of the critical point, which happens if and only if the parameter $c$ is outside the Mandelbrot set on a parameter ray at one of the finitely many angles $\{2\theta,4\theta,8\theta,\ldots\}$. All these parameter rays land, and at the landing parameters, the dynamic ray at angle $\theta$ lands as well. These landing points are the interesting cases of the proposition.

\par \noindent {\sc Proof. } First we discuss the case of a periodic angle $\theta$.
If the landing point is repelling, then the proposition reduces to Lemma~\ref{LemStable}. We may thus assume the landing point to be parabolic. Under perturbation, any parabolic periodic point splits up into several periodic points which may be attracting, repelling, or indifferent, and these periodic points depend continuously on the parameter.
{\mbox{\tt $\star$}} We {\mbox{\tt $\star$}} need {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} show {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} after {\mbox{\tt $\star$}} perturbation {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} continuations {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} was {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} Denote {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} before {\mbox{\tt $\star$}} perturbation {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} let {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} its {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} let {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $V {\mbox{\tt $\star$}} \subset {\mbox{\tt $\star$}} \mbox {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} \bbf {\mbox{\tt $\star$}} C {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} simply {\mbox{\tt $\star$}} connected {\mbox{\tt $\star$}} open {\mbox{\tt $\star$}} neighborhood {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} does {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} contain {\mbox{\tt $\star$}} further {\mbox{\tt $\star$}} parabolics {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} equal {\mbox{\tt $\star$}} or {\mbox{\tt $\star$}} lower {\mbox{\tt $\star$}} ray 
{\mbox{\tt $\star$}} periods {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Then {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} continuation {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} periods {\mbox{\tt $\star$}} up {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} possible {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $V {\mbox{\tt $\star$}} - {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Let {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} repelling {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} then {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} continued {\mbox{\tt $\star$}} analytically {\mbox{\tt $\star$}} as {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} function {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} (c {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} neighborhood {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} its {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} remains {\mbox{\tt $\star$}} repelling {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} We {\mbox{\tt $\star$}} may {\mbox{\tt $\star$}} assume {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} case {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $V {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} possibly {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} shrinking {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $V {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Then {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} Lemma {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {LemStable {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $z {\mbox{\tt $\star$}} (c {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} will {\mbox{\tt $\star$}} keep {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} throughout {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $V {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} since {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} cannot {\mbox{\tt $\star$}} lose {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $V {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} cannot {\mbox{\tt $\star$}} gain {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} either {\mbox{\tt $\star$}} . 

Since for the parameter $c_0$, the dynamic ray at angle $\theta$ lands at the parabolic orbit, the landing point of this ray after perturbation will be on a periodic orbit coming out of the parabolic orbit, and it remains to show that the landing point does not jump between continuations of different periodic points of the parabolic orbit.
This is where the Orbit Separation Lemma~\ref{LemOrbitSeparation} comes in: the parabolic periodic points are separated by pairs of rays landing at repelling periodic or preperiodic points, and this separation is stable under perturbations. The dynamic rays cannot cross this partition, so their landing points depend continuously on the angle, provided the rays land at all.
For preperiodic rays, the statement follows by taking inverse images because the pull-back is continuous. If the orbit visits the critical point along its preperiodic orbit, which happens at Misiurewicz points, then several preperiodic points may merge and split up with different rays, but this happens in a continuous way. $\Box$ \par
\par \noindent {\sc Remark.
} Unlike their landing points, the dynamic rays themselves may depend discontinuously on the perturbation. The simplest possible example occurs near the parabolic parameter $c_0=1/4$: for this parameter, the dynamic ray at angle $0=1$ lands at the parabolic fixed point $z=1/2$, and the ray is the real line to the right of $1/2$.
The critical point $0$ is in the interior of the filled-in Julia set. Perturbing the parameter to the right on the real axis, i.e., on the parameter ray at angle $0=1$, the dynamic ray will bounce into the critical point and thus fail to land.
But for arbitrarily small perturbations near this parameter ray, the dynamic ray at angle $0=1$ will get very close to the critical point before it turns back and lands near $1/2$.
The closer the parameter is to $c_0=1/4$, the lower the potential of the critical point becomes, and while the dynamic ray keeps reaching out near the critical point, it does so at lower and lower potentials; in the limit, the part of the ray at real parts less than $1/2$ will be squeezed off.
Points at any potential $t>0$ will depend continuously on the parameter, and so does the landing point at potential $t=0$; however, this continuity is not uniform in $t$, and the dynamic ray as a whole can and does change discontinuously with respect to the Hausdorff metric.
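The parabolicity at $c_0=1/4$ used in this remark can be verified by a one-line computation: writing $f_c(z)=z^2+c$,
\[
f_{1/4}\!\left(\tfrac12\right)=\tfrac14+\tfrac14=\tfrac12,
\qquad
f_{1/4}'\!\left(\tfrac12\right)=2\cdot\tfrac12=1,
\]
so $z=1/2$ is a fixed point with multiplier $1$, hence parabolic.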

Continuous dependence of landing points of rays requires a single critical point (of possibly higher multiplicity). It is false already for cubic polynomials; for an example, see the appendix in Goldberg and Milnor~\cite{GM}.

Most, if not all, of the interior of the Mandelbrot set consists of what is known as hyperbolic components. Proposition~\ref{PropContinuousLanding} is one possible key to understanding many of their properties. First we discuss some necessary background.

A hyperbolic rational map is one where all the critical points are attracted by attracting or superattracting periodic orbits. The dynamical significance is that this is equivalent to the existence of an expanding metric in a neighborhood of the Julia set, which has many important consequences such as local connectivity of the Julia set (see Milnor~\cite{MiIntro}).
For a polynomial, the critical point at $\infty$ is always superattracting, and in the quadratic case, the polynomial is hyperbolic if the unique finite critical point either converges to $\infty$ or to a finite (super)attracting orbit. Hyperbolicity is obviously an open condition.
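For instance, the two simplest hyperbolic parameters are $c=0$ and $c=-1$: with $f_c(z)=z^2+c$,
\[
c=0:\quad 0\mapsto 0
\qquad\mbox{and}\qquad
c=-1:\quad 0\mapsto -1\mapsto 0,
\]
so for $c=0$ the critical point is a superattracting fixed point, while for $c=-1$ it lies on a superattracting $2$-cycle with multiplier $(f_{-1}^{\circ 2})'(0)=f_{-1}'(-1)\,f_{-1}'(0)=(-2)\cdot 0=0$.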
A {\em hyperbolic component of the Mandelbrot set\/} is a connected component of the hyperbolic interior. The period of the attracting orbit is constant throughout the component and defines the {\em period of the hyperbolic component\/}.
We will see below that every boundary point of a hyperbolic component is a boundary point of the Mandelbrot set, so a hyperbolic component is also a connected component of the interior of ${\bf M}$. There is no example known of a non-hyperbolic component; it is conjectured that there are none.
A {\em center\/} of a hyperbolic component is a polynomial for which there is a superattracting orbit; a {\em root\/} of such a component of period $n$ is a parabolic boundary point where the parabolic orbit has ray period $n$. We will show below that every hyperbolic component has a unique center and a unique root.
It is easy to verify that the multiplier of the attracting orbit on a hyperbolic component is a proper map from the component to the open unit disk, so it has a finite mapping degree; we will see that this map is in fact a conformal isomorphism.
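As an illustration (not needed for the argument below), for the period~$1$ component this multiplier map can be written down explicitly: the fixed points of $z^2+c$ satisfy $z^2-z+c=0$, and the multiplier of the attracting fixed point is
\[
\mu(c)=1-\sqrt{1-4c},
\qquad\mbox{with inverse}\qquad
c=\frac{\mu}{2}-\frac{\mu^2}{4},
\]
so $\mu$ maps this component conformally onto the unit disk; letting $|\mu|=1$ in the inverse parametrizes the boundary of the main cardioid.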
{\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} relation {\mbox{\tt $\star$}} between {\mbox{\tt $\star$}} centers {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} roots {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} components {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} important {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} difficulty {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} establishing {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} lies {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} discontinuity {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} Julia {\mbox{\tt $\star$}} sets {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameters {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Proposition {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {PropContinuousLanding {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} helps {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} overcome {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} difficulty {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \begin {\mbox{\tt $\star$}} {lemma {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} [Roots {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} Hyperbolic {\mbox{\tt $\star$}} Components {\mbox{\tt $\star$}} ] {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \label {\mbox{\tt $\star$}} {LemRoots {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \rule{0pt}{0pt}\par\noindent {\mbox{\tt $\star$}} Every {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} least {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} If {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} smaller {\mbox{\tt $\star$}} than {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} then {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} also {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . 
In no case is such a parabolic parameter on the boundary of a hyperbolic component of different period.
\end{lemma}
\par \noindent {\sc Proof. }
First suppose that orbit period and ray period are equal. Then the first return map of any parabolic periodic point $z$ leaves all the dynamic rays landing at $z$ fixed, so its multiplier is $+1$.
In local coordinates, the map has the form $\zeta \mapsto \zeta + \zeta^{q+1} + \ldots$ for some integer $q \geq 1$. The point $z$ then has $q$ attracting and $q$ repelling petals, and every attracting petal must absorb a critical orbit. Since there is a unique critical point, we have $q=1$.
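As an illustrative aside (this is the standard Leau--Fatou computation, not part of the original argument), the count of $q$ attracting and $q$ repelling petals can be read off directly from the local normal form:
\[
f(\zeta) = \zeta + \zeta^{q+1} + O(\zeta^{q+2}),
\qquad
|f(\zeta)|^2 = |\zeta|^2\,\bigl(1 + 2\,\mathrm{Re}\,\zeta^q + O(\zeta^{q+1})\bigr),
\]
so, to leading order, $|f(\zeta)| < |\zeta|$ exactly when $\mathrm{Re}\,\zeta^q < 0$. The attracting directions are thus the $q$ rays where $\zeta^q$ is negative real, alternating with the $q$ repelling directions where $\zeta^q$ is positive real.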
Under perturbation, the parabolic orbit then breaks up into exactly two orbits of exact period $n$, and no further orbit is involved. Denote the parabolic parameter by $c_0$ and let $V$ be a simply connected neighborhood of $c_0$ not containing further parabolics of equal ray period.
In $V - \{c_0\}$, all periodic points of exact period $n$ can be continued analytically because their multipliers are different from $+1$. Among these periodic points, those which are repelling at $c_0$ can be continued analytically throughout all of $V$, while the two colliding orbits might be interchanged by a simple loop in $V - \{c_0\}$ (in fact, they will be: see Corollary~\ref{CorHypBdy}). Their multipliers are therefore defined on a two-sheeted covering of $V - \{c_0\}$ and are analytic, even when the point $c_0$ is put back in.
By the open mapping principle, the parameter $c_0$ is on the boundary of at least one hyperbolic component of period $n$. Since, for the parameter $c_0$, all the orbits of periods not divisible by $n$ are repelling, the parameter can only be on the boundary of hyperbolic components with periods divisible by $n$.
If it were on the boundary of a hyperbolic component with period $rn$ for some integer $r>1$, then the $rn$-periodic orbit would have to be indifferent at $c_0$; since there can be only one indifferent orbit, it would have to merge with the indifferent orbit of period $n$, and this orbit would get multiplicity higher than $2$, a contradiction. This contradiction shows that $c_0$ is not on the boundary of any hyperbolic component of period other than $n$.

If the orbit period strictly divides the ray period, so that $s := n/k \geq 2$, then the first return map of the orbit must permute the rays transitively by Lemma~\ref{LemRayPermutation}.
The least iterate which fixes the rays must also be the least iterate for which the multiplier is $+1$: the landing point of a periodic ray is either repelling or has multiplier $+1$ (this is the Snail Lemma, see \cite{MiIntro}); conversely, whenever the multiplier is $+1$, then all the finitely many rays must be fixed.
It follows that the multiplier of the first return map of any of the parabolic periodic points is an exact $s$-th root of unity. The periodic orbit can then be continued analytically in a neighborhood of the root. Since the multiplier map is analytic, the parabolic parameter is on the boundary of a hyperbolic component of period $k$.
The $s$-th iterate of the first return map has multiplier $+1$ and hence again the form $\zeta \mapsto \zeta + \zeta^{q+1} + \ldots$ in local coordinates, for an integer $q \geq 1$. The number of coalescing fixed points of this iterate is then exactly $q+1$.
Since there is only one critical orbit, the first return map of the parabolic orbit must permute the $q$ attracting petals transitively, and we have $q=s$. For the first, second, \ldots, $(s-1)$-st iterate of the first return map, the multiplier is different from $+1$, so the respective iterate has a single fixed point.
The $s$-th iterate, however, corresponding to the $sk=n$-th iterate of the original polynomial, has a fixed point of multiplicity $q+1=s+1$: exactly one of these points has exact period $k$; all the other points can have no period lower than $n$, so they lie on a single orbit of period $n$, of which $s$ points each are coalesced. There is no further orbit involved (or some iterate would have to have a parabolic fixed point of higher multiplicity with more attracting petals attached, as above). Since there is a single indifferent orbit of period $n$, its multiplier is well defined and analytic in a neighborhood of the parabolic parameter, which is hence on the boundary of a hyperbolic component of period $n$ as well.
$\Box$ \par

A root of a hyperbolic component is called {\em primitive\/} if its parabolic orbit has equal orbit and ray periods, so it is the merger of two orbits of equal period.
If orbit and ray periods are different, then the root is called {\em non-primitive\/} or a {\em bifurcation point}: at this parameter, an attracting orbit bifurcates into another attracting orbit of higher period (the terminology {\em bi\/}furcation comes from the dynamics on the real line, where the ratio of the periods is always two).
We will see below that every hyperbolic component has a unique root. It therefore makes sense to call a hyperbolic component primitive or non-primitive according to whether or not its root is primitive.

We can now draw a couple of useful conclusions.
\goodbreak
\begin{corollary}[Stability at Roots of Hyperbolic Components] \nobreak
\label{CorRootStability}
\rule{0pt}{0pt}\par\noindent
For any hyperbolic component, the landing pattern of periodic and preperiodic dynamic rays is the same for all polynomials from the component and at any of its roots.
\end{corollary}
\par \noindent {\sc Proof. }
Again, it suffices to discuss periodic rays; the statement about preperiodic rays follows simply by taking inverse images because, for the considered parameters, all the preperiodic dynamic rays land, and they never land at the critical value.
{\mbox{\tt $\star$}} Throughout {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} land {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} repelling {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} no {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} lose {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} under {\mbox{\tt $\star$}} perturbations {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} consequently {\mbox{\tt $\star$}} no {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} gain {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} either {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} same {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} rational {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} land {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} common {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} throughout {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} When {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamics {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} primitive {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} perturbed {\mbox{\tt $\star$}} into {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} then {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} breaks {\mbox{\tt $\star$}} up {\mbox{\tt $\star$}} into {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} attracting {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} repelling {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} equal {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} continuous {\mbox{\tt $\star$}} dependence {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (Proposition {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {PropContinuousLanding {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} repelling {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} inherit {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} It {\mbox{\tt $\star$}} cannot {\mbox{\tt $\star$}} get {\mbox{\tt $\star$}} any {\mbox{\tt $\star$}} further {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} because {\mbox{\tt $\star$}} they {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} been {\mbox{\tt $\star$}} attached {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} repelling {\mbox{\tt $\star$}} orbits {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} For {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} non {\mbox{\tt $\star$}} -primitive {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} denote {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} periods {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} respectively {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Perturbing {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} into {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} becomes {\mbox{\tt $\star$}} attracting {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} -periodic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} take {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} without {\mbox{\tt $\star$}} getting {\mbox{\tt $\star$}} any {\mbox{\tt $\star$}} further {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $\Box$ \par {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \par \noindent {\sc Remark. 
} {\mbox{\tt $\star$}} Perturbing {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} >k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} into {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} changes {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} pattern {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} rational {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} : {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} creates {\mbox{\tt $\star$}} an {\mbox{\tt $\star$}} attracting {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} repelling {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} remains {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} land {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} different {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} Phrased {\mbox{\tt $\star$}} differently {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} when {\mbox{\tt $\star$}} moving {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} into {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} then {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} /k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} each {\mbox{\tt $\star$}} start {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} common {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} course {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} forces {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} obvious {\mbox{\tt $\star$}} relations {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} preperiodic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} pattern {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} other {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} remains {\mbox{\tt $\star$}} stable {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} This {\mbox{\tt $\star$}} discussion {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} only {\mbox{\tt $\star$}} describes {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} patterns {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} within {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} components {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} but {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} entire {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} space {\mbox{\tt $\star$}} except {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} : {\mbox{\tt $\star$}} since {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} depend {\mbox{\tt $\star$}} continuously {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} orbits {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} simple {\mbox{\tt $\star$}} except {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} parabolics {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} pattern {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} change {\mbox{\tt $\star$}} only {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameters {\mbox{\tt $\star$}} or {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} where {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} fail {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} land {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} relation {\mbox{\tt $\star$}} between {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} patterns {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} structure {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} space {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} been {\mbox{\tt $\star$}} investigated {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} described {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} Milnor {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \cite {\mbox{\tt $\star$}} {MiOrbits {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} pattern {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} preperiodic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} also {\mbox{\tt $\star$}} changes {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} Misiurewicz {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} preperiodic {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} . 
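Before turning to the multiplier map in general, it may help to see it in the one case where everything is explicit: the period-$1$ component. The formulas below are the standard ones for the quadratic family (the fixed points of $z\mapsto z^2+c$ are $z=(1\pm\sqrt{1-4c})/2$ with derivative $2z$, giving the multiplier $\mu(c)=1-\sqrt{1-4c}$ and its inverse $c(\mu)=\mu/2-\mu^2/4$, the main cardioid); they are included only as an illustration, and the function names are ours, not the paper's.

```python
import cmath

def attracting_multiplier(c):
    # Multiplier of the attracting (or indifferent) fixed point of
    # z -> z^2 + c: fixed points are z = (1 ± sqrt(1-4c))/2 with
    # derivative 2z; the principal branch below selects |mu| <= 1
    # on the period-1 component.
    return 1 - cmath.sqrt(1 - 4 * c)

def cardioid_parameter(mu):
    # Explicit inverse of the multiplier map on the period-1 component:
    # solving mu = 1 - sqrt(1-4c) for c gives c = mu/2 - mu^2/4.
    # For mu on the unit circle this traces the main cardioid.
    return mu / 2 - mu ** 2 / 4

# Round trip: the multiplier map and its inverse agree on the open disk.
mu = 0.3 + 0.4j
assert abs(attracting_multiplier(cardioid_parameter(mu)) - mu) < 1e-12

# mu = 0 gives the center c = 0; mu = +1 gives the root c = 1/4.
assert cardioid_parameter(0) == 0
assert abs(cardioid_parameter(1) - 0.25) < 1e-15
```

The round trip works because for $\mu$ in the open unit disk one has $\mathrm{Re}(1-\mu)>0$, so the principal square root of $(1-\mu)^2$ is $1-\mu$ itself.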
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \begin {\mbox{\tt $\star$}} {corollary {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} [The {\mbox{\tt $\star$}} Multiplier {\mbox{\tt $\star$}} Map {\mbox{\tt $\star$}} ] {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \label {\mbox{\tt $\star$}} {CorMultiplier {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \rule{0pt}{0pt}\par\noindent {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} any {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} conformal {\mbox{\tt $\star$}} isomorphism {\mbox{\tt $\star$}} onto {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} open {\mbox{\tt $\star$}} unit {\mbox{\tt $\star$}} disk {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} extends {\mbox{\tt $\star$}} as {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} homeomorphism {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} closures {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} In {\mbox{\tt $\star$}} particular {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} every {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} unique {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} unique {\mbox{\tt $\star$}} center {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} contained {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \end {\mbox{\tt $\star$}} {corollary {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \par \noindent {\sc Proof. } {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} proper {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} open {\mbox{\tt $\star$}} unit {\mbox{\tt $\star$}} disk {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} extends {\mbox{\tt $\star$}} continuously {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Any {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} unique {\mbox{\tt $\star$}} indifferent {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} If {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} such {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} different {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} +1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} then {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} extend {\mbox{\tt $\star$}} analytically {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} neighborhood {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} obviously {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} constant {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} This {\mbox{\tt $\star$}} shows {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} particular {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameters {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} dense {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} ; {\mbox{\tt $\star$}} since {\mbox{\tt $\star$}} parabolics {\mbox{\tt $\star$}} are {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} thus {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} {\bf M} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} every {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} contained {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} number {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameters {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} fixed {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} +1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} finite {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} any {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} consists {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} finite {\mbox{\tt $\star$}} number {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} arcs {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (which {\mbox{\tt $\star$}} might {\mbox{\tt $\star$}} contain {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} limiting {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} finitely {\mbox{\tt $\star$}} many {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameters {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} multipliers {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} +1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} Since {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} proper {\mbox{\tt $\star$}} onto {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \mbox {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \bbf {\mbox{\tt $\star$}} D {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} least {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} By {\mbox{\tt $\star$}} Corollary {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {CorRootStability {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} pattern {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} same {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} roots {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} given {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} It {\mbox{\tt $\star$}} follows {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} roots {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} characteristic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} orbits {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} coincide {\mbox{\tt $\star$}} : {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} every {\mbox{\tt $\star$}} case {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} characteristic {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} pair {\mbox{\tt $\star$}} lands {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Fatou {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} containing {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} value {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} separates {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} value {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} rest {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} Among {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} pairs {\mbox{\tt $\star$}} separating {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} value {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} characteristic {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} pair {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} one {\mbox{\tt $\star$}} closest {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} value {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Hence {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} pattern {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} periodic {\mbox{\tt $\star$}} dynamic {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} determines {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} characteristic {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Since {\mbox{\tt $\star$}} any {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} landing {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} rays {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} characteristic {\mbox{\tt $\star$}} angles {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} every {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} unique {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} We {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} now {\mbox{\tt $\star$}} determine {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} mapping {\mbox{\tt $\star$}} degree {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $d {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} say {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} simply {\mbox{\tt $\star$}} connected {\mbox{\tt $\star$}} because {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} full {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Then {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \mu {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} exactly {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $d {\mbox{\tt $\star$}} -1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} counting {\mbox{\tt $\star$}} multiplicities {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} If {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $d {\mbox{\tt $\star$}} >1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} let {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $v {\mbox{\tt $\star$}} \in {\mbox{\tt $\star$}} \mbox {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \bbf {\mbox{\tt $\star$}} D {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} value {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \mu {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} connect {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $v {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} +1 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} simple {\mbox{\tt $\star$}} smooth {\mbox{\tt $\star$}} curve {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \gamma {\mbox{\tt $\star$}} \subset {\mbox{\tt $\star$}} \mbox {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \bbf {\mbox{\tt $\star$}} D {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} avoiding {\mbox{\tt $\star$}} further {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} values {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} Then {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \mu {\mbox{\tt $\star$}} ^ {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} -1 {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} ( {\mbox{\tt $\star$}} \gamma {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} together {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} unique {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} contains {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} simple {\mbox{\tt $\star$}} closed {\mbox{\tt $\star$}} curve {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \Gamma {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} enclosing {\mbox{\tt $\star$}} an {\mbox{\tt $\star$}} open {\mbox{\tt $\star$}} subset {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} This {\mbox{\tt $\star$}} subset {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} least {\mbox{\tt $\star$}} onto {\mbox{\tt $\star$}} all {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \mbox {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \bbf {\mbox{\tt $\star$}} D {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} - {\mbox{\tt $\star$}} \gamma {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \Gamma {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} surrounds {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} thus {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} {\bf M} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} But {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} contradicts {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} fact {\mbox{\tt $\star$}} that {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} Mandelbrot {\mbox{\tt $\star$}} set {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} full {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} conformal {\mbox{\tt $\star$}} isomorphism {\mbox{\tt $\star$}} onto {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \mbox {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \bbf {\mbox{\tt $\star$}} D {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} has {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} unique {\mbox{\tt $\star$}} center {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} It {\mbox{\tt $\star$}} extends {\mbox{\tt $\star$}} continuously {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} closure {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} surjective {\mbox{\tt $\star$}} onto {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} \partial {\mbox{\tt $\star$}} \mbox {\mbox{\tt $\star$}} { {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} \bbf {\mbox{\tt $\star$}} D {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} because {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} surjective {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} Near {\mbox{\tt $\star$}} every {\mbox{\tt $\star$}} non {\mbox{\tt $\star$}} -root {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} an {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} arc {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (possibly {\mbox{\tt $\star$}} with {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} points {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} not {\mbox{\tt $\star$}} locally {\mbox{\tt $\star$}} constant {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} arc {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} it {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} locally {\mbox{\tt $\star$}} injective {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Global {\mbox{\tt $\star$}} injectivity {\mbox{\tt $\star$}} now {\mbox{\tt $\star$}} follows {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} injectivity {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} thus {\mbox{\tt $\star$}} invertible {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} continuity {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} inverse {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} generality {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $\Box$ \par {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \begin {\mbox{\tt $\star$}} {proposition {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} [No {\mbox{\tt $\star$}} Shared {\mbox{\tt $\star$}} Roots {\mbox{\tt $\star$}} ] {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \label {\mbox{\tt $\star$}} {PropNoSharedRoots {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \rule{0pt}{0pt}\par\noindent {\mbox{\tt $\star$}} Every {\mbox{\tt $\star$}} parabolic {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} single {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \end {\mbox{\tt $\star$}} {proposition {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} \par \noindent {\sc Proof. } {\mbox{\tt $\star$}} Since {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} equals {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} its {\mbox{\tt $\star$}} root {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} we {\mbox{\tt $\star$}} can {\mbox{\tt $\star$}} restrict {\mbox{\tt $\star$}} attention {\mbox{\tt $\star$}} to {\mbox{\tt $\star$}} any {\mbox{\tt $\star$}} fixed {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . 
We have well-defined maps from centers to hyperbolic components (which we have just seen is a bijection) and from hyperbolic components to their roots. This gives a surjective map from centers of period $n$ to parabolic parameters of ray period $n$. Denote the number of centers of period $n$ by $s_n$.

A center of a hyperbolic component of period $n$ is a point $c$ such that $0$ is periodic of exact period $n$ under $z\mapsto z^2+c$; therefore, $c$ must satisfy a polynomial equation $(\ldots((c^2+c)^2+c)\ldots)^2+c=0$ of degree $2^{n-1}$. Since this polynomial is also solved by centers of components of periods $k$ dividing $n$, we get the recursive relation $\sum_{k|n} s_k=2^{n-1}$. By Lemma~\ref{LemParaCount}, $s_k$ is exactly the number of parabolic parameters of ray period $k$. Since a surjective map between finite sets of equal cardinality is a bijection, every parabolic parameter is the root of a single hyperbolic component. $\Box$ \par

\par \noindent {\sc Remark. }
This proposition shows even without resorting to Corollary~\ref{CorMultiplier} that every hyperbolic component has a unique center, so that the only critical point of the multiplier map (if it had mapping degree greater than one) could be the center. This is indeed what happens for the ``Multibrot sets'': the connectedness loci for the maps $z\mapsto z^d+c$ with $d\geq 2$.

Before continuing the study of hyperbolic components, we note an algebraic observation following from the proof we have just given.
\begin{corollary}[Centers of Components as Algebraic Numbers]
\label{CorCentersAlgebraic} \rule{0pt}{0pt}\par\noindent
Every center of a hyperbolic component of period $n$ is an algebraic integer of degree at most $s_n$. It is a simple root of its minimal polynomial. $\Box$
\end{corollary}

A neat algebraic proof for this fact has been given by Gleason; see \cite{Orsay}. As far as I know, the algebraic structure of the minimal polynomials of the centers of hyperbolic components is not known: when factored according to exact periods, are they irreducible? What are their Galois groups? Manning (unpublished) has verified irreducibility for $n\leq 10$, and he has determined that the Galois groups for the first few periods are the full symmetric groups. Giarrusso (unpublished) has observed that this induces a Galois action between the Riemann maps of hyperbolic components of equal periods, provided that their centers are algebraically conjugate.
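As an aside (an illustration, not part of the argument above), the recursion $\sum_{k|n} s_k = 2^{n-1}$ from the proof of Proposition~\ref{PropNoSharedRoots} determines the numbers $s_n$ completely; equivalently, M\"obius inversion gives $s_n = \sum_{k|n}\mu(n/k)\,2^{k-1}$. A minimal Python sketch computing $s_n$ directly from the recursion:

```python
# Sketch (not from the text): compute s_n, the number of centers of
# hyperbolic components of exact period n, from the recursion
#   sum over k dividing n of s_k  =  2^(n-1)
# by subtracting the contributions of all proper divisors of n.
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n: int) -> int:
    """Number of period-n centers (= hyperbolic components of period n)."""
    return 2 ** (n - 1) - sum(s(k) for k in range(1, n) if n % k == 0)

print([s(n) for n in range(1, 9)])
# -> [1, 1, 3, 6, 15, 27, 63, 120]
```

For instance, $s_3 = 2^2 - s_1 = 3$: the two period-$3$ satellite components and one primitive component.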
Now we can describe the boundary of hyperbolic components much more completely.

\begin{corollary}[Boundary of Hyperbolic Components]
\label{CorHypBdy} \rule{0pt}{0pt}\par\noindent
No non-parabolic parameter can be on the boundary of more than one hyperbolic component. Every parabolic parameter is either a primitive root of a hyperbolic component and on the boundary of no further component, or it is a ``point of bifurcation'': a non-primitive root of a hyperbolic component and on the boundary of a unique further hyperbolic component. In particular, if two hyperbolic components have a boundary point in common, then this point is the root of exactly one of them.

The boundary of a hyperbolic component is a smooth analytic curve, except at the root of a primitive component.
At a primitive root, the component has a cusp, and analytic continuation of periodic points along a small loop around this cusp interchanges the two orbits which merge at this cusp.
\end{corollary}

\par \noindent {\sc Proof. }
If two hyperbolic components have a non-parabolic parameter $c_0$ in their common boundary, then the landing patterns of periodic rays must be the same within both components. This must then also be true at their respective roots, which yields a contradiction: on the one hand, the roots must be different by Proposition~\ref{PropNoSharedRoots}; on the other hand, the parabolic orbits at the roots must have the same characteristic angles (compare the proof of Corollary~\ref{CorMultiplier}), so they must be the landing points of the same parameter rays. It follows that the multiplier map of the indifferent orbit cannot have a critical point at $c_0$: if it had a critical point there, then $c_0$ would connect locally two regions of hyperbolic parameters which cannot belong to different hyperbolic components; however, if they belonged to the same component, then the closure of the component would separate part of its boundary from the exterior of the Mandelbrot set, a contradiction. Therefore, the boundary of every hyperbolic component is a smooth analytic curve near every non-parabolic boundary point.

Now let $c_0$ be a parabolic parameter of ray period $n$ and orbit period $k$. We know that it is the root of a unique hyperbolic component of period $n$.
{\mbox{\tt $\star$}} {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} In {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} non {\mbox{\tt $\star$}} -primitive {\mbox{\tt $\star$}} case {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (when {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} strictly {\mbox{\tt $\star$}} divides {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} cannot {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} different {\mbox{\tt $\star$}} from {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} by {\mbox{\tt $\star$}} Lemma {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {LemRoots {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} It {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} single {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (Proposition {\mbox{\tt $\star$}} ~ {\mbox{\tt $\star$}} \ref {\mbox{\tt $\star$}} {PropNoSharedRoots {\mbox{\tt $\star$}} } {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} In {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} small {\mbox{\tt $\star$}} punctured {\mbox{\tt $\star$}} neighborhood {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} avoiding {\mbox{\tt $\star$}} further {\mbox{\tt $\star$}} parabolics {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} ray {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} -periodic {\mbox{\tt $\star$}} orbit {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} cannot {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} at {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} for {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} same {\mbox{\tt $\star$}} reason {\mbox{\tt $\star$}} as {\mbox{\tt $\star$}} above {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $n {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} occupies {\mbox{\tt $\star$}} asymptotically {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} (on {\mbox{\tt $\star$}} small {\mbox{\tt $\star$}} scales {\mbox{\tt $\star$}} ) {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} half {\mbox{\tt $\star$}} plane {\mbox{\tt $\star$}} near {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . 
{\mbox{\tt $\star$}} The {\mbox{\tt $\star$}} parameter {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} also {\mbox{\tt $\star$}} on {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} boundary {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} period {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $k {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} of {\mbox{\tt $\star$}} which {\mbox{\tt $\star$}} is {\mbox{\tt $\star$}} analytic {\mbox{\tt $\star$}} near {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . {\mbox{\tt $\star$}} Since {\mbox{\tt $\star$}} hyperbolic {\mbox{\tt $\star$}} components {\mbox{\tt $\star$}} cannot {\mbox{\tt $\star$}} overlap {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} this {\mbox{\tt $\star$}} component {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} asymptotically {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} contained {\mbox{\tt $\star$}} in {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} half {\mbox{\tt $\star$}} plane {\mbox{\tt $\star$}} , {\mbox{\tt $\star$}} so {\mbox{\tt $\star$}} the {\mbox{\tt $\star$}} multiplier {\mbox{\tt $\star$}} map {\mbox{\tt $\star$}} cannot {\mbox{\tt $\star$}} have {\mbox{\tt $\star$}} a {\mbox{\tt $\star$}} critical {\mbox{\tt $\star$}} point {\mbox{\tt $\star$}} and {\mbox{\tt $\star$}} must {\mbox{\tt $\star$}} then {\mbox{\tt $\star$}} be {\mbox{\tt $\star$}} locally {\mbox{\tt $\star$}} injective {\mbox{\tt $\star$}} near {\mbox{\tt $\star$}} {\mbox{\tt $\star$}} $c {\mbox{\tt $\star$}} _0 {\mbox{\tt $\star$}} $ {\mbox{\tt $\star$}} . 
The boundaries of both components must then be smooth analytic curves near $c_0$. In the primitive case $k=n$, the parameter $c_0$ cannot be on the boundary of a hyperbolic component of period different from $n$ by Lemma~\ref{LemRoots}, and it cannot be on the boundary of two hyperbolic components of period $n$, because otherwise it would have to be their simultaneous root, contradicting Proposition~\ref{PropNoSharedRoots}. In a small simply connected neighborhood $V$ of $c_0$, analytic continuation of the two orbits colliding at $c_0$ is possible in $V-\{c_0\}$ (compare the proof of Lemma~\ref{LemRoots}), so their multipliers can be defined on a two-sheeted cover of $V$
ramified at $c_0$. If analytic continuation of these two orbits along a simple loop in $V$ around $c_0$ did not interchange the two orbits, then both multipliers could be defined in $V$, and both would define different hyperbolic components intersecting $V$ in disjoint regions, yielding the same contradiction as in the non-primitive case above.
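For orientation, the two-sheeted cover can be described in an explicit coordinate (this computation is an added illustration; the symbols $w$ and $A$ are introduced only here). Writing
\[
c \;=\; c_0 + w^2 ,
\]
the cover of $V$ ramified at $c_0$ becomes a disk in the $w$-plane, and a simple loop around $c_0$ lifts to $w \mapsto -w$. When the monodromy interchanges the two orbits, their multipliers are the two branches $A(w)$ and $A(-w)$ of a single function $A$ analytic near $0$; if $A$ is locally injective, the region $\{\, w : |A(w)| < 1 \,\}$ is asymptotically a half plane in the $w$-coordinate, and its image under $w \mapsto c_0 + w^2$ opens up to a full angle $2\pi$ at $c_0$, that is, a cusp.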
Therefore, small simple loops around $c_0$ do interchange the two orbits. In order to avoid the same contradiction again, the multiplier must be locally injective on the two-sheeted covering of $V$. Projecting down onto $V$, the component must asymptotically occupy a full set of directions, so the component has a cusp. $\Box$
\par
\par\noindent {\sc Remark.} The basic driving force behind many of these proofs about hyperbolic components was the uniqueness of parabolic parameters with given combinatorics, via the landing properties of parameter rays at periodic angles. A consequence was the uniqueness of centers of hyperbolic components with given combinatorics.
One can also turn this discussion around and start with centers of hyperbolic components: the fact that hyperbolic components must have different combinatorics, and that they have unique centers, is a consequence of Thurston's topological characterization of rational maps, in this case most easily used in the form of the Spider Theorem \cite{HS}.
\begin{thebibliography}{McM}

\bibitem[Bo]{Bousch} T.~Bousch: {\em Sur quelques probl\`emes de la dynamique holomorphe}. Th\`ese, Universit\'e de Paris-Sud (1992).

\bibitem[CG]{CG} L.~Carleson, T.~Gamelin: {\em Complex dynamics}. Universitext, Springer Verlag (1993).

\bibitem[DH1]{Orsay} A.~Douady, J.~Hubbard: {\em Etude dynamique des polyn\^omes complexes}. Publications math\'ematiques d'Orsay 84-02 (1984) (premi\`ere partie) and 85-04 (1985) (deuxi\`eme partie).

\bibitem[DH2]{Polylike} A.~Douady, J.~Hubbard: {\em On the dynamics of polynomial-like mappings}. Ann.\ Scient.\ Ec.\ Norm.\ Sup.\ 18 (1985), 287--343.

\bibitem[GM]{GM} L.~Goldberg, J.~Milnor: {\em Fixed points of polynomial maps I/II}. Ann.\ Scient.\ Ec.\ Norm.\ Sup.\ 25/26 (1992/93).

\bibitem[HS]{HS} J.~Hubbard, D.~Schleicher: {\em The spider algorithm}. In: {\em Complex dynamics: the mathematics behind the Mandelbrot and Julia sets}. AMS Lecture Notes (1994).

\bibitem[Ke1]{Ke1} K.~Keller: {\em Symbolic dynamics for angle-doubling on the circle III. Sturmian sequences and the quadratic map}. Ergod.\ Th.\ Dyn.\ Sys.\ 14 (1994).

\bibitem[Ke2]{KeHabil} K.~Keller: {\em Invariante Faktoren, Julia\"aquivalenzen und die abstrakte Mandelbrotmenge}. Habilitationsschrift, Universit\"at Greifswald (1996).

\bibitem[La1]{La} P.~Lavaurs: {\em Une description combinatoire de l'involution d\'efinie par $M$ sur les rationnels \`a d\'enominateur impair}. Comptes Rendus Acad.\ Sci.\ Paris (4) 303 (1986), 143--146.

\bibitem[La2]{LaEc} P.~Lavaurs: {\em Syst\`emes dynamiques holomorphes: explosion de points p\'eriodiques paraboliques}. Th\`ese, Universit\'e de Paris-Sud (1989).

\bibitem[LS]{IntAddr} E.~Lau, D.~Schleicher: {\em Internal addresses in the Mandelbrot set and irreducibility of polynomials}. Preprint, Institute for Mathematical Sciences, Stony Brook, \#19 (1994).

\bibitem[M1]{MiIntro} J.~Milnor: {\em Dynamics in one complex variable: introductory lectures}. Preprint, Institute for Mathematical Sciences, Stony Brook, \#5 (1990).

\bibitem[M2]{MiOrbits} J.~Milnor: {\em Periodic orbits, external rays and the Mandelbrot set; an expository account}. In this volume (1997).

\bibitem[McM]{CurtUniv} C.~McMullen: {\em The Mandelbrot set is universal}. Preprint (1997).

\bibitem[NS]{Nakane} S.~Nakane, D.~Schleicher: {\em On multicorns and unicorns I: antiholomorphic dynamics and hyperbolic components}. Preprint, in preparation.

\bibitem[Pr1]{Pe} C.~Penrose: {\em On quotients of the shift associated with dendrite Julia sets of quadratic polynomials}. Thesis, University of Coventry (1990).

\bibitem[Pr2]{Pe2} C.~Penrose: {\em Quotients of the shift associated with dendrite Julia sets}. Manuscript (1994).

\bibitem[S1]{DSThesis} D.~Schleicher: {\em Internal addresses in the Mandelbrot set and irreducibility of polynomials}. Thesis, Cornell University (1994).

\bibitem[S2]{DS_Fibers} D.~Schleicher: {\em On Fibers and Local Connectivity of the Mandelbrot and Multibrot Sets}. Preprint (1997).

\bibitem[T]{Th} W.~Thurston: {\em On the geometry and dynamics of iterated rational maps}. Preprint, Princeton University (1985).

\bibitem[TY]{TY} Tan Lei and Yin Yongcheng: {\em Local connectivity of the Julia set for geometrically finite rational maps}. Preprint, Ecole Normale Superieure de Lyon No.~121 (1994).

\end{thebibliography}

\noindent
\parbox[t]{65mm}{Dierk Schleicher\\
Fakult\"at f\"ur Mathematik\\
Technische Universit\"at\\
Barer Stra{\ss}e 23\\
D-80290 M\"unchen, Germany\\
{\sl dierk$@$mathematik.tu-muenchen.de}}

\end{document}
\begin{document} \title{The reflexive closure of the adjointable operators} \author{E. G. Katsoulis} \address{Department of Mathematics, East Carolina University, Greenville, NC 27858, USA} \email{katsoulise@ecu.edu} \thanks{2010 {\it Mathematics Subject Classification.} 46L08, 47L10} \thanks{{\it Key words and phrases:} Hilbert $C^*$-module, adjointable operator, reflexive operator algebra, reflexive closure, invariant subspace, left centralizer, left multiplier.} \begin{abstract} Given a Hilbert $C^*$-module $E$ over a C*-algebra $\cl A$, we give an explicit description for the invariant subspace lattice $\Lat \cl L (E)$ of all adjointable operators on $E$. We then show that the collection $\End_{\cl A}(E)$ of all bounded $\cl A$-module operators acting on $E$ forms the reflexive closure for $ \cl L (E) $, i.e., $\End _{\cl A} (E) = \Alg \Lat \cl L (E) $. Finally we make an observation regarding the representation theory of the left centralizer algebra of a $C^*$-algebra and use it to give an intuitive proof of a related result of H.~Lin. \end{abstract} \maketitle \section{Introduction} In this note, $\cl A$ denotes a C*-algebra and $E$ a Hilbert C*-module over $\cl A$, i.e., a right $\cl A$-module equipped with an $\cl A$-valued inner product $\sca{\, , \, }$ so that the norm $\| \xi \|\equiv \| \sca{\xi , \xi }^{1/2} \|$ makes $E$ into a Banach space. The collection of all bounded $\cl A$-module operators acting on $E$ is denoted as $\End_{\cl A} (E)$. A linear operator $S$ acting on $E$ is said to be adjointable iff for each $y \in E$ there exists $ y' \in E$ so that $\sca{S x , y} = \sca{x , y'}$, for all $x \in E$. Elementary examples of adjointable operators are the ``rank one" operators $\theta_{\eta, \xi}$, defined by $\theta_{\eta, \xi}(x)\equiv \eta \sca{\xi, x}$, where $\eta, \xi , x \in E$.
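As a quick check of the definition (under the standard conventions that the inner product is conjugate-linear in the first variable, so that $\sca{x a , y} = a^* \sca{x , y}$ and $\sca{x , y}^* = \sca{y , x}$), every rank one operator is adjointable, with adjoint $\theta_{\eta, \xi}^* = \theta_{\xi, \eta}$: for all $x , y \in E$,
\begin{align*}
\sca{\theta_{\eta, \xi}(x) , y} &= \sca{\eta \sca{\xi , x} , y} = \sca{\xi , x}^* \sca{\eta , y} = \sca{x , \xi} \sca{\eta , y}\\
&= \sca{x , \xi \sca{\eta , y}} = \sca{x , \theta_{\xi, \eta}(y)},
\end{align*}
so $y' = \theta_{\xi, \eta}(y)$ witnesses adjointability.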
The collection of all adjointable operators acting on $E$ will be denoted as $\cl L (E)$ while the norm closed subalgebra generated by the rank one operators will be denoted as $\cl K (E)$. It is a well known fact that $\cl L (E) \subseteq \End_{\cl A} (E)$. However, the reverse inclusion is known to fail in general; this is perhaps the first obstacle one encounters when extending the theory of operators on a Hilbert space to that of operators on a Hilbert $C^*$-module. This problem has been addressed since the beginning of the theory \cite[page 447]{Pasc} and has influenced its subsequent development. The first few chapters of the monograph of Manuilov and Troitsky \cite{ManT} and the references therein provide the basics of the theory and give a good account of what is known regarding that issue. (See also \cite{Ble, lance1995hilbert}.) The purpose of this note is to demonstrate that the inequality between $\cl L (E) $ and $\End_{\cl A} (E)$ is intimately related to another area of continuing mathematical interest, the reflexivity of operator algebras. If ${\mathfrak{A}}$ is a unital operator algebra acting on a Banach space ${\mathfrak{X}}$, then $\Lat {\mathfrak{A}}$ will denote the collection of all closed subspaces $M\subseteq {\mathfrak{X}}$ which are left invariant by ${\mathfrak{A}}$, i.e., $A(m)\in M$, for all $A \in {\mathfrak{A}}$ and $m \in M$. Dually, for a collection ${\mathfrak{L}}$ of closed subspaces of ${\mathfrak{X}}$, we write $\Alg {\mathfrak{L}}$ to denote the collection of all bounded operators on ${\mathfrak{X}}$ that leave invariant each element of ${\mathfrak{L}}$. The reflexive cover of an algebra ${\mathfrak{A}}$ of operators acting on ${\mathfrak{X}}$ is the algebra $\Alg \Lat {\mathfrak{A}}$; we say that ${\mathfrak{A}}$ is \textit{reflexive} iff \[ {\mathfrak{A}} = \Alg \Lat {\mathfrak{A}}. 
\] Similarly, the reflexive cover of a subspace lattice ${\mathfrak{L}}$ is the lattice $\Lat \Alg {\mathfrak{L}}$ and ${\mathfrak{L}}$ is said to be reflexive if ${\mathfrak{L}} = \Lat \Alg {\mathfrak{L}}$. A formal study of reflexivity for operator algebras and subspace lattices began with the work of Halmos \cite{Hal}, after Ringrose's proof \cite{Rin} that all nests on Hilbert space are reflexive. Since then, the concept of reflexivity for operator algebras and subspace lattices has been addressed by various authors on both Hilbert space \cite{An, ArP, a, DKP, Had2, Kak, KatP, Ol, Sar, ShuT} and Banach space \cite{BMZ, Erd, Had}, including in particular investigations on a Hilbert $C^*$-module. The main results of this short note provide a link between the two areas of inquiry discussed above. In Theorem \ref{main} we show that the presence of bounded but not adjointable module operators on a $C^*$-module $E$ is equivalent to the failure of reflexivity for $\cl L (E)$. (Here we think of $\cl L (E)$ simply as an operator algebra acting on $E$.) Actually, we do more: we explicitly describe $\Lat \cl L (E)$ and we show that as a complete lattice, $\Lat \cl L (E)$ is isomorphic to the lattice of closed left ideals of $\overline{\sca{E,E}} $ (Theorem~\ref{lat}). A key step in the proof of Theorem \ref{main} is a classical result of Barry Johnson \cite[Theorem 1]{Jo}. Actually, our Theorem \ref{main} can also be thought of as a generalization of Johnson's result, since its statement reduces to the statement of \cite[Theorem 1]{Jo}, when applied to the case of the trivial (unital) Hilbert $C^*$-module. Another interpretation for the inequality between $\cl L (E)$ and $\End_{\cl A} (E)$ comes from the work of H. Lin. Lin shows in \cite[Theorem 1.5]{Lin} that $\End_{\cl A} (E)$ is isometrically isomorphic as a Banach algebra to the left centralizer algebra of $\cl K (E)$. 
Furthermore, the isomorphism Lin constructs extends the familiar $*$-isomorphism between $\cl L (E)$ and the double centralizer algebra of $\cl K (E)$. This shows that the gap between $\cl L (E)$ and $\End_{\cl A} (E)$ is solely due to the presence of left centralizers for $\cl K (E)$ which fail to be double centralizers. In Proposition~\ref{repn} we observe that the representation theory of the left centralizer algebra of a $C^*$-algebra is flexible enough to allow the use of representations on a Banach space. This leads to yet another short proof of Lin's Theorem, which we present in Theorem~\ref{Linthm}. Our proof makes no reference to Cohen's Factorization Theorem and its only prerequisite is the existence of a contractive approximate identity for a $C^*$-algebra. (Compare also with \cite[Proposition 8.1.16 (ii)]{Ble}.) A final remark. Johnson's Theorem \cite[Theorem 1]{Jo}, which plays a central role in this paper, may no longer be true for Banach algebras which are not semisimple. Nevertheless there are specific classes of (non-semisimple) operator algebras for which this theorem is actually valid. This is being explored in a subsequent work \cite{Katsnew}. \section{The main result} We begin by identifying a useful class of subspaces of $E$. \begin{definition} \label{defn:E} Let $E$ be a Hilbert C*-module over a C*-algebra $\cl A$. If ${\mathcal{J}} \subseteq \cl A$, then we define $$E({\mathcal{J}}):=\overline{\sspp}\{\xi a \mid \xi\in E, a\in {\mathcal{J}}\}.$$ \end{definition} The correspondence ${\mathcal{J}} \mapsto E({\mathcal{J}})$ of Definition~\ref{defn:E} is not bijective. Indeed, if $l({\mathcal{J}})$ is the closed left ideal generated by ${\mathcal{J}}\subseteq \cl A$, then it is easy to see that $E(l({\mathcal{J}}))=E({\mathcal{J}})$. Therefore we restrict our attention to closed left ideals of $\cl A$. (For instance, if $E = \cl A$, viewed as a Hilbert module over itself, then $E({\mathcal{J}}) = \overline{\cl A {\mathcal{J}}} = {\mathcal{J}}$ for every closed left ideal ${\mathcal{J}} \subseteq \cl A$.) It turns out that an extra step is still required to ensure bijectivity. First we need the following.
\begin{lemma}\label{descr} Let $E$ be a Hilbert C*-module over a C*-algebra $\cl A$ and let ${\mathcal{J}}\subseteq \cl A$ be a closed left ideal. Then \[ E({\mathcal{J}})=\{ \xi \in E \mid \sca{\eta , \xi} \in {\mathcal{J}} \mbox{ for all } \eta \in E\}. \] \end{lemma} \begin{proof} The inclusion \[ E({\mathcal{J}}) \subseteq \{ \xi \in E \mid \sca{\eta , \xi} \in {\mathcal{J}} \mbox{ for all } \eta \in E\} \] is obvious. The reverse inclusion follows from the well known fact \cite[Lemma 1.3.9]{ManT} that \[ \xi = \lim_{\epsilon \rightarrow 0} \xi \sca{\xi , \xi}[ \sca{\xi , \xi } + \epsilon ]^{-1} \] for any $\xi \in E$. \end{proof} The following now gives a complete description for the lattice of invariant subspaces of the adjointable operators. \begin{theorem} \label{lat} Let $E$ be a Hilbert C*-module over a C*-algebra $\cl A$. Then \[ \Lat \cl L (E)= \{ E({\mathcal{J}}) \mid {\mathcal{J}} \subseteq \overline{\sca{E,E}} \mbox{ closed left ideal } \} \] and the association ${\mathcal{J}} \mapsto E({\mathcal{J}})$ establishes a complete lattice isomorphism between the closed left ideals of $\overline{\sca{E,E}} $ and $\Lat \cl L (E)$. In addition, \[ \Lat \cl K (E) = \Lat \cl L (E) = \Lat \End _{\cl A} (E). \] \end{theorem} \begin{proof} First observe that if $ {\mathcal{J}} \subseteq \cl A$ is a closed left ideal, then the subspace $E({\mathcal{J}})$ is invariant under $\cl L (E)$, because $\cl L (E)$ consists of $\cl A$-module operators. Conversely assume that $M \in \Lat \cl L (E)$ and let \[ J(M) \equiv \overline{\sspp}\{ \sca{ \eta , m } \mid \eta \in E \mbox{ and }m \in M \}. \] Clearly, $J(M)\subseteq \overline{\sca{E,E}}$ and the identity \[ a\sca{ \eta , m} = \sca{ \eta a^* , m}, \, a \in \cl A, \eta \in E, m \in M, \] implies that $J(M)$ is a left ideal. We claim that $M=E(J(M))$. Indeed, if $m \in M$, then by the definition of $J(M)$ we have $\sca{ \eta , m } \in J(M)$, for all $\eta \in E$, and so Lemma \ref{descr} implies that $m \in E(J(M))$.
On the other hand, any $\xi a$, with $\xi \in E$ and $a \in J(M)$, is the limit of finite sums of elements of the form $\xi \sca{ \eta , m}$, where $\eta \in E$ and $m \in M$. However \[ \xi \sca{ \eta , m}= \theta_{\xi , \eta}(m) \in M \] and so $M=E(J(M))$. This shows that ${\mathcal{J}} \mapsto E({\mathcal{J}})$ is surjective. In order to prove that ${\mathcal{J}} \mapsto E({\mathcal{J}})$ is also injective we need to verify that ${\mathcal{J}} = J(E({\mathcal{J}}))$, for any closed left ideal ${\mathcal{J}}\subseteq \overline{\sca{E,E}}$. Since ${\mathcal{J}} \subseteq \overline{\sca{E,E}} $ is a left ideal, $J(E({\mathcal{J}})) \subseteq {\mathcal{J}}$. On the other hand, if $(e_i)_i$ is a right approximate identity for ${\mathcal{J}}$, then any element of ${\mathcal{J}}\subseteq \overline{\sca{E,E}}$ can be approximated by elements of the form \[ \sum_{k} \, \sca{\eta_k , \xi_k} e_i = \sum_{k} \, \sca{\eta_k , \xi_k e_i},\quad \eta_k , \xi_k \in E. \] However, $\xi_k e_i \in E({\mathcal{J}})$, by Definition \ref{defn:E}, and so sums of the above form belong to $J(E({\mathcal{J}})) $. Hence ${\mathcal{J}} \subseteq J(E({\mathcal{J}}))$ and so ${\mathcal{J}} \mapsto E({\mathcal{J}})$ is also injective with inverse $M \mapsto J(M)$. The proof that ${\mathcal{J}} \mapsto E({\mathcal{J}})$ respects the lattice operations follows from two successive applications of Lemma \ref{descr}. Indeed, if $({\mathcal{J}}_i)_i$ is a collection of closed left ideals of $\overline{\sca{E,E}}$, then $\xi \in \cap_i E({\mathcal{J}}_i)$ is equivalent by Lemma~\ref{descr} to $\sca{\eta , \xi} \in \cap_i {\mathcal{J}}_i$ which, once again by Lemma~\ref{descr}, is equivalent to $\xi \in E(\cap_i {\mathcal{J}}_i)$. Therefore $\cap_i E({\mathcal{J}}_i) = E(\cap_i {\mathcal{J}}_i)$. The proof of $\vee_i E({\mathcal{J}}_i) = E(\vee_i {\mathcal{J}}_i)$ is immediate. For the final assertion of the theorem, first note that \[ \Lat \cl K (E) \supseteq \Lat \cl L (E) \supseteq \Lat \End _{\cl A} (E).
\] On the other hand, if $M \in \Lat \cl K (E)$, then an argument identical to that of the second paragraph of the proof shows that $M=E(J(M))$. Hence $M \in \Lat \End _{\cl A} (E)$ and the conclusion follows. \end{proof} The following result was proved by B. Johnson \cite[Theorem 1]{Jo} for arbitrary semisimple Banach algebras by making essential use of their representation theory. One can adapt Johnson's original proof to the C*-algebraic context by using the GNS construction and Kadison's Transitivity Theorem wherever representation theory is required in the original proof. \begin{theorem} \label{Johnson} Let $\cl A$ be a $C^*$-algebra and let $\Phi$ be a linear operator acting on $\cl A$ that leaves invariant all closed left ideals of $\cl A$. Then $\Phi (ba)=\Phi(b)a$, $\forall\, a,b \in \cl A$. In particular, if $1 \in \cl A$ is a unit then $\Phi$ is the left multiplication operator by $\Phi (1)$. \end{theorem} Note that the proof of Theorem~\ref{lat} shows that any bounded $\cl A$-module map leaves invariant $\Lat \cl L (E)$. This establishes one direction in the following, which is the main result of the paper. \begin{theorem} \label{main} Let $E$ be a Hilbert module over a C*-algebra $\cl A$. Then \[ \Alg \Lat \cl L (E) = \End _{\cl A} (E). \] In particular, $\End_{\cl A} (E)$ is a reflexive algebra of operators acting on $E$. \end{theorem} \begin{proof} Let $S \in \Alg \Lat \cl L (E)$ and $\xi , \eta \in E$. Consider the linear operator \[ \Phi_{\eta, \xi}: \cl A \ni a \longmapsto \sca{\eta, S(\xi a) } \in \cl A. \] We claim that $\Phi_{\eta, \xi}$ leaves invariant any of the closed left ideals of $\cl A$. Indeed, if ${\mathcal{J}} \subseteq \cl A$ is such an ideal and $j \in {\mathcal{J}}$, then $\xi j \in E({\mathcal{J}})$ and since $S \in \Alg \Lat \cl L (E)$, $S(\xi j) \in E({\mathcal{J}})$.
By Lemma~\ref{descr}, we have $$ \Phi_{\eta, \xi}(j)=\sca{\eta, S(\xi j) } \in {\mathcal{J}}$$ and so $ \Phi_{\eta, \xi}$ leaves ${\mathcal{J}}$ invariant, which proves the claim. Hence Theorem~\ref{Johnson} now implies that $\Phi_{\eta, \xi}(ba)= \Phi_{\eta, \xi}(b)a$, $\forall\, a,b \in \cl A$. Let $(e_i)$ be an approximate unit for $\cl A$. By the above $\Phi_{\eta, \xi}(e_ia) =\Phi_{\eta, \xi }(e_i)a$, $\forall i$, and so \begin{align*} \sca{\eta, S(\xi a) } &=\lim_i \sca{\eta, S(\xi e_i a) } = \lim_i \Phi_{\eta, \xi}(e_ia) \\ &=\lim_i \Phi_{\eta, \xi }(e_i)a =\lim_i \sca{\eta, S(\xi e_i) } a \\ &= \sca{\eta, S(\xi ) } a. \end{align*} Hence \[ \sca{\eta, S(\xi a) } = \sca{\eta, S(\xi)a}, \quad \forall a \in \cl A, \] for all $\eta \in E$; since the inner product separates the points of $E$, this establishes that $S(\xi a)=S(\xi)a$, i.e., that $S$ is an $\cl A$-module map. \end{proof} The above Theorem can also be thought of as a generalization of Theorem~\ref{Johnson} (Johnson's Theorem) since its statement reduces to the statement of Theorem \ref{Johnson} when applied to the case of the trivial unital Hilbert $C^*$-module. \begin{corollary} If $E$ is a selfdual Hilbert $C^*$-module, then $\cl L (E)$ is reflexive as an algebra of operators acting on $E$. \end{corollary} In particular, the above Corollary shows that if $\cl A$ is a unital $C^*$-algebra, then $\cl L ( \cl A^{(n)})$, $1\leq n<\infty$, is a reflexive operator algebra. This is not necessarily true for $\cl L ( \cl A^{(\infty)})$. Indeed in \cite[Example 2.1.2]{ManT} the authors give an example of a unital commutative $C^*$-algebra $\cl A$ for which $\cl L (\cl A^{(\infty)}) \neq \End_{\cl A}(\cl A^{(\infty)})$. By Theorem \ref{main}, $\cl L ( \cl A^{(\infty)})$ is not reflexive. \section{Left Centralizers and a theorem of H. Lin} An alternative description for the inclusion $\cl L (E) \subseteq \End_{\cl A}(E)$ has been given by H. Lin in \cite{Lin}.
\begin{definition} \label{centraldefn} If ${\mathfrak{A}}$ is a Banach algebra then a linear and bounded map $\Phi: {\mathfrak{A}} \rightarrow {\mathfrak{A}}$ is called a left centralizer if $\Phi(ab)=\Phi(a)b$, for all $a,b \in {\mathfrak{A}}$. If in addition there exists a map $\Psi: {\mathfrak{A}} \rightarrow {\mathfrak{A}}$ so that $\Psi(a)b=a\Phi(b)$, for all $a,b \in {\mathfrak{A}}$, then $\Phi$ is called a double centralizer. \end{definition} The collection of all left (resp. double) centralizers equipped with the supremum norm will be denoted as $\LC ({\mathfrak{A}})$ (resp. $\DC({\mathfrak{A}})$). Note that in the case where ${\mathfrak{A}}$ has an approximate unit, the linearity and boundedness of centralizers do not have to be assumed \textit{a priori} but instead follow from the condition $\Phi(ab)=\Phi(a)b$, for all $a,b \in {\mathfrak{A}}$. (See \cite{Jo2} for a proof; the unital case is of course trivial.) In \cite[Theorem 1.5]{Lin} Lin shows that $\End_{\cl A}(E)$ is isometrically isomorphic as a Banach algebra to $\LC\left(\cl K (E) \right)$. Furthermore, the isomorphism Lin constructs extends the familiar $*$-isomorphism of Kasparov \cite{Kas} between $\cl L (E)$ and $\DC(\cl K(E))$. Lin's proof is similar in nature to that of Kasparov \cite{Kas} for the double centralizers of $\cl K (E)$. However it is more elaborate and also requires some additional results of Paschke \cite{Pasc}. In what follows we give an elementary proof of Lin's Theorem. Our argument depends on the observation that the representation theory for the left centralizers of a $C^*$-algebra $\cl A$ is flexible enough to allow the use of representations on a Banach space. \begin{definition} Let ${\mathfrak{X}}$ be a Banach space and let ${\mathfrak{A}}$ be a norm closed subalgebra of $B({\mathfrak{X}})$, the bounded operators on ${\mathfrak{X}}$.
The left multiplier algebra of ${\mathfrak{A}}$ is the collection \[ \LM_{{\mathfrak{X}}}({\mathfrak{A}}) \equiv \{ b \in B({\mathfrak{X}})\mid ba \in {\mathfrak{A}}, \mbox{ for all } a \in {\mathfrak{A}} \}. \] If $b \in \LM_{{\mathfrak{X}}}({\mathfrak{A}})$, then $L_b \in B({\mathfrak{A}})$ denotes the left multiplication operator by~$b$. \end{definition} The following also has a companion statement for double centralizers, which we plan to state and explore elsewhere. \begin{proposition} \label{repn} Let $\cl A$ be a $C^*$-algebra and assume that $\cl A$ is acting isometrically and non-degenerately on a Banach space ${\mathfrak{X}}$. Then the mapping \begin{equation} \label{Linmap} \LM_{{\mathfrak{X}}}(\cl A) \longrightarrow \LC (\cl A)\colon b \longmapsto L_b \end{equation} establishes an isometric Banach algebra isomorphism between $\LM_{{\mathfrak{X}}}(\cl A)$ and $\LC (\cl A)$. \end{proposition} \begin{proof} The statement of this Proposition is a well-known fact, provided that ${\mathfrak{X}}$ is a Hilbert space. In that case, in order to establish the surjectivity of (\ref{Linmap}) one starts with a contractive approximate unit $(e_i)_i$ for $\cl A$. If $B \in \LC\left(\cl A \right)$, then the net $( B(e_i))_i$ is bounded and therefore has at least one weak limit point $b \in B({\mathfrak{X}})$. The conclusion then follows by showing that $b \in \LM_{{\mathfrak{X}}}(\cl A)$. (See \cite[Proposition 3.12.3]{Ped} for a detailed argument.) Bounded nets of operators on a Banach space need not have weak limits. However, the non-degeneracy of the action and the identity \[ B(e_i)ax=B(e_ia)x, \,\, a \in \cl A , x \in {\mathfrak{X}}, \] guarantee that the net $(B(e_i)x)_i $ is convergent when $x$ ranges over a dense subset of ${\mathfrak{X}}$. Since $( B(e_i))_i$ is bounded, we obtain that $(B(e_i)x)_i $ is Cauchy (and thus convergent) for any $x \in {\mathfrak{X}}$.
This establishes that $(B(e_i))_i $ converges pointwise to some bounded operator $b \in B({\mathfrak{X}})$, even when ${\mathfrak{X}}$ is assumed to be a Banach space. With this observation at hand, the rest of the proof now goes as in the Hilbert space case. \end{proof} We are now in a position to give the promised proof for Lin's Theorem. \begin{theorem} \label{Linthm} Let $E$ be a Hilbert $C^*$-module over a $C^*$-algebra $\cl A$. Then there exists an isometric isomorphism of Banach algebras \[ \phi : \End_{\cl A}(E) \longrightarrow \LC\left(\cl K (E) \right), \] whose restriction $\phi_{\mid \cl L(E)}$ establishes a $*$-isomorphism between $\cl L(E)$ and \break $\DC(\cl K (E))$. \end{theorem} \begin{proof} In light of Proposition \ref{repn}, it suffices to verify that $$\LM_{E}(\cl K (E)) = \End_{\cl A}(E).$$ Clearly $\End_{\cl A}(E)\subseteq \LM_{E}(\cl K (E))$. Conversely, let $S \in \LM_{E}(\cl K (E))$. If $a \in \cl A$ and $\eta , \xi, \zeta \in E$, then \begin{align*} S(\eta \sca{ \xi, \zeta} a)&=S\theta_{\eta , \xi}(\zeta a) = S\theta_{\eta , \xi}(\zeta ) a \\ &=S(\eta \sca{ \xi, \zeta}) a. \end{align*} However vectors of the form $\eta \sca{ \xi, \zeta}$, $\eta , \xi, \zeta \in E$, are dense in $E$ by \cite[Lemma 1.3.9]{ManT} and so $S$ is an $\cl A$-module map, as desired. Specializing now the mapping of~(\ref{Linmap}) to our setting, we obtain an isometric isomorphism \begin{equation} \label{trueLinmap} \phi \colon \End_{\cl A}(E) \longrightarrow \LC (\cl K (E))\colon S \longmapsto L_S. \end{equation} Furthermore, the restriction $\phi_{\mid \cl L(E)}$ coincides with Kasparov's map and the conclusion follows. \end{proof} {\noindent}{\it Acknowledgements.} The present paper grew out of discussions between the author and Aristides Katavolos during the International Conference on Operator Algebras, which was held at Nanjing University, China, June 20-23, 2013.
The author would like to thank Aristides for the stimulating conversations and is grateful to the organizers of the conference for the invitation to participate and their hospitality. \end{document}
\begin{document} \title{ The Cobordism Hypothesis in Dimension $1$} \begin{abstract} In~\cite{lur1} Lurie published an expository article outlining a proof for a higher version of the cobordism hypothesis conjectured by Baez and Dolan in~\cite{bd}. In this note we give a proof for the 1-dimensional case of this conjecture. The proof follows most of the outline given in~\cite{lur1}, but differs in a few crucial details. In particular, the proof makes use of the theory of quasi-unital $\infty$-categories as developed by the author in~\cite{har}. \end{abstract} \tableofcontents \section{ Introduction } Let $\mathcal{B}^{\ori}_1$ denote the $1$-dimensional oriented cobordism $\infty$-category, i.e. the symmetric monoidal $\infty$-category whose objects are oriented $0$-dimensional closed manifolds and whose morphisms are oriented $1$-dimensional cobordisms between them. Let $\mathcal{D}$ be a symmetric monoidal $\infty$-category with duals. The $1$-dimensional cobordism hypothesis concerns the $\infty$-category $$ \Fun^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D}) $$ of symmetric monoidal functors ${\varphi}: \mathcal{B}^{\ori}_1 \longrightarrow \mathcal{D}$. If $X_+ \in \mathcal{B}^{\ori}_1$ is the object corresponding to a point with positive orientation then the evaluation map $Z \mapsto Z(X_+)$ induces a functor $$ \Fun^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D}) \longrightarrow \mathcal{D} $$ It is not hard to show that since $\mathcal{B}^{\ori}_1$ has duals the $\infty$-category $\Fun^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D})$ is in fact an $\infty$-groupoid, i.e. every natural transformation between two functors $F,G: \mathcal{B}^{\ori}_1 \longrightarrow \mathcal{D}$ is a natural equivalence. This means that the evaluation map $Z \mapsto Z(X_+)$ actually factors through a map $$ \Fun^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D}) \longrightarrow \wtl{\mathcal{D}} $$ where $\wtl{\mathcal{D}}$ is the maximal $\infty$-groupoid of $\mathcal{D}$. 
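The claim that $\Fun^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D})$ is an $\infty$-groupoid can be unpacked as follows (a sketch, carried out at the level of homotopy categories): given a symmetric monoidal natural transformation $T: F \Rightarrow G$ and an object $X$ with duality data $\ev_X: X \otimes \check{X} \longrightarrow 1$ and coevaluation $\mathrm{coev}_X: 1 \longrightarrow \check{X} \otimes X$, the composite
$$ G(X) \xrightarrow{\mathrm{id} \otimes F(\mathrm{coev}_X)} G(X) \otimes F(\check{X}) \otimes F(X) \xrightarrow{\mathrm{id} \otimes T_{\check{X}} \otimes \mathrm{id}} G(X) \otimes G(\check{X}) \otimes F(X) \xrightarrow{G(\ev_X) \otimes \mathrm{id}} F(X) $$
is inverse to $T_X$ up to homotopy, by the zig-zag identities together with the naturality and monoidality of $T$. Since $\mathcal{B}^{\ori}_1$ has duals, every component of $T$ is therefore an equivalence.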
The cobordism hypothesis then states \begin{thm}\label{cobordism-hypothesis} The evaluation map $$ \Fun^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D}) \longrightarrow \wtl{\mathcal{D}} $$ is an equivalence of $\infty$-categories. \end{thm} \begin{rem} From the consideration above we see that we could have written the cobordism hypothesis as an equivalence $$ \wtl{\Fun}^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D}) \stackrel{\simeq}{\longrightarrow} \wtl{\mathcal{D}} $$ where $\wtl{\Fun}^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D})$ is the maximal $\infty$-groupoid of $\Fun^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D})$ (which in this case happens to coincide with $\Fun^{\otimes}(\mathcal{B}^{\ori}_1,\mathcal{D})$). This $\infty$-groupoid is the fundamental groupoid of the space of maps from $\mathcal{B}^{\ori}_1$ to $\mathcal{D}$ in the $\infty$-category $\Cat^{\otimes}$ of symmetric monoidal $\infty$-categories. \end{rem} In his paper~\cite{lur1} Lurie gives an elaborate sketch of proof for a higher dimensional generalization of the $1$-dimensional cobordism hypothesis. For this one needs to generalize the notion of $\infty$-categories to $(\infty,n)$-categories. The strategy of proof described in~\cite{lur1} is inductive in nature. In particular in order to understand the $n=1$ case, one should start by considering the $n=0$ case. Let $\mathcal{B}^{\un}_0$ be the $0$-dimensional unoriented cobordism category, i.e. the objects of $\mathcal{B}^{\un}_0$ are $0$-dimensional closed manifolds (or equivalently, finite sets) and the morphisms are diffeomorphisms (or equivalently, isomorphisms of finite sets). Note that $\mathcal{B}^{\un}_0$ is a (discrete) $\infty$-groupoid. Let $X \in \mathcal{B}^{\un}_0$ be the object corresponding to one point. Then the $0$-dimensional cobordism hypothesis states that $\mathcal{B}^{\un}_0$ is in fact the free $\infty$-groupoid (or $(\infty,0)$-category) on one object, i.e. 
if $\mathcal{G}$ is any other $\infty$-groupoid then the evaluation map $Z \mapsto Z(X)$ induces an equivalence of $\infty$-groupoids $$ \Fun^{\otimes}(\mathcal{B}^{\un}_0,\mathcal{G}) \stackrel{\simeq}{\longrightarrow} \mathcal{G} $$ \begin{rem} At this point one can wonder what is the justification for considering non-oriented manifolds in the $n=0$ case and oriented ones in the $n=1$ case. As is explained in~\cite{lur1} the desired notion when working in the $n$-dimensional cobordism $(\infty,n)$-category is that of \textbf{$n$-framed} manifolds. One then observes that $0$-framed $0$-manifolds are unoriented manifolds, while taking $1$-framed $1$-manifolds (and $1$-framed $0$-manifolds) is equivalent to taking the respective manifolds with orientation. \end{rem} Now the $0$-dimensional cobordism hypothesis is not hard to verify. In fact, it holds in a slightly more general context: we do not have to assume that $\mathcal{G}$ is an $\infty$-groupoid. Indeed, if $\mathcal{G}$ is \textbf{any symmetric monoidal $\infty$-category} then the evaluation map induces an equivalence of $\infty$-categories $$ \Fun^{\otimes}(\mathcal{B}^{\un}_0,\mathcal{G}) \stackrel{\simeq}{\longrightarrow} \mathcal{G} $$ and hence also an equivalence of $\infty$-groupoids $$ \wtl{\Fun}^{\otimes}(\mathcal{B}^{\un}_0,\mathcal{G}) \stackrel{\simeq}{\longrightarrow} \wtl{\mathcal{G}} $$ Now consider the under-category $\Cat^{\otimes}_{\mathcal{B}^{\un}_0/}$ of symmetric monoidal $\infty$-categories $\mathcal{D}$ equipped with a functor $\mathcal{B}^{\un}_0 \longrightarrow \mathcal{D}$. Since $\mathcal{B}^{\un}_0$ is free on one generator this category can be identified with the $\infty$-category of \textbf{pointed} symmetric monoidal $\infty$-categories, i.e. symmetric monoidal $\infty$-categories with a chosen object. We will often not distinguish between these two notions.
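The $0$-dimensional cobordism hypothesis can also be made quite concrete. Under the identification of $0$-dimensional closed manifolds with finite sets, $\mathcal{B}^{\un}_0$ is the symmetric monoidal groupoid of finite sets and bijections, so that
$$ \mathcal{B}^{\un}_0 \;\simeq\; \coprod_{n \geq 0} B\Sigma_n, $$
the free symmetric monoidal $\infty$-groupoid on one object. A symmetric monoidal functor $Z: \mathcal{B}^{\un}_0 \longrightarrow \mathcal{G}$ is then determined, up to a contractible space of choices, by the object $D = Z(X)$: an $n$-point manifold must be sent to $D^{\otimes n}$, and a bijection of finite sets to the corresponding symmetry equivalence of $\mathcal{G}$. This is exactly why evaluation at $X$ is an equivalence.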
Now the point of positive orientation $X_+ \in \mathcal{B}^{\ori}_1$ determines a functor $\mathcal{B}^{\un}_0 \longrightarrow \mathcal{B}^{\ori}_1$, i.e. an object in $\Cat^{\otimes}_{\mathcal{B}^{\un}_0/}$, which we shall denote by $\mathcal{B}^+_1$. The $1$-dimensional cobordism hypothesis is then equivalent to the following statement: \begin{thm}[Cobordism Hypothesis $0$-to-$1$]\label{0-to-1} Let $\mathcal{D} \in \Cat^{\otimes}_{\mathcal{B}^{\un}_0 /}$ be a pointed symmetric monoidal $\infty$-category with duals. Then the $\infty$-groupoid $$ \wtl{\Fun}^{\otimes}_{\mathcal{B}^{\un}_0 /}(\mathcal{B}^+_1,\mathcal{D}) $$ is \textbf{contractible}. \end{thm} Theorem~\ref{0-to-1} can be considered as the inductive step from the $0$-dimensional cobordism hypothesis to the $1$-dimensional one. Now the strategy outlined in~\cite{lur1} proceeds to bridge the gap between $\mathcal{B}^{\un}_0$ and $\mathcal{B}^{\ori}_1$ by considering an intermediate $\infty$-category $$ \mathcal{B}^{\un}_0 \hookrightarrow \mathcal{B}^{\ev}_1 \hookrightarrow \mathcal{B}^{\ori}_1 $$ This intermediate $\infty$-category is defined in~\cite{lur1} in terms of framed functions and index restriction. However, in the $1$-dimensional case one can describe it without going into the theory of framed functions. In particular we will use the following definition: \begin{define} Let $\iota: \mathcal{B}^{\ev}_1 \hookrightarrow \mathcal{B}^{\ori}_1$ be the subcategory containing all objects and only the cobordisms $M$ in which every connected component $M_0 \subseteq M$ is either an identity segment or an evaluation segment. \end{define} Let us now describe how to bridge the gap between $\mathcal{B}^{\un}_0$ and $\mathcal{B}^{\ev}_1$. Let $\mathcal{D}$ be an $\infty$-category with duals and let $$ {\varphi}:\mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D} $$ be a symmetric monoidal functor.
We will say that ${\varphi}$ is \textbf{non-degenerate} if for each $X \in \mathcal{B}^{\ev}_1$ the map $$ {\varphi}(\ev_X): {\varphi}(X) \otimes {\varphi}(\check{X}) \simeq {\varphi}(X \otimes \check{X}) \longrightarrow {\varphi}(1) \simeq 1 $$ is \textbf{non-degenerate}, i.e. identifies ${\varphi}(\check{X})$ with a dual of ${\varphi}(X)$. We will denote by $$ \Cat^{\nd}_{\mathcal{B}^{\ev}_1 /} \subseteq \Cat^{\otimes}_{\mathcal{B}^{\ev}_1 /} $$ the full subcategory spanned by objects ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ such that $\mathcal{D}$ has duals and ${\varphi}$ is non-degenerate. Let $X_+ \in \mathcal{B}^{\ev}_1$ be the point with positive orientation. Then $X_+$ determines a functor $$ \mathcal{B}^{\un}_0 \longrightarrow \mathcal{B}^{\ev}_1 $$ The restriction map ${\varphi} \mapsto {\varphi}|_{\mathcal{B}^{\un}_0}$ then induces a functor $$ \Cat^{\nd}_{\mathcal{B}^{\ev}_1 /} \longrightarrow \Cat^{\otimes}_{\mathcal{B}^{\un}_0 /} $$ Now the gap between $\mathcal{B}^{\ev}_1$ and $\mathcal{B}^{\un}_0$ can be bridged using the following lemma (see~\cite{lur1}): \begin{lem}\label{0-to-1-ev} The functor $$ \Cat^{\nd}_{\mathcal{B}^{\ev}_1 /} \longrightarrow \Cat^{\otimes}_{\mathcal{B}^{\un}_0 /} $$ is fully faithful. \end{lem} \begin{proof} First note that if $F:\mathcal{D} \longrightarrow \mathcal{D}'$ is a symmetric monoidal functor where $\mathcal{D},\mathcal{D}'$ have duals and ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ is non-degenerate then $F \circ {\varphi}$ will be non-degenerate as well.
Hence it will be enough to show that if $\mathcal{D}$ has duals then the restriction map induces an equivalence between the $\infty$-groupoid of non-degenerate symmetric monoidal functors $$ \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D} $$ and the $\infty$-groupoid of symmetric monoidal functors $$ \mathcal{B}^{\un}_0 \longrightarrow \mathcal{D} $$ Now specifying a non-degenerate functor $$ \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D} $$ is equivalent to specifying a pair of objects $D_+,D_- \in \mathcal{D}$ (the images of $X_+,X_-$ respectively) and a non-degenerate morphism $$ e: D_+ \otimes D_- \longrightarrow 1 $$ which is the image of $\ev_{X_+}$. Since $\mathcal{D}$ has duals the $\infty$-groupoid of triples $(D_+,D_-,e)$ in which $e$ is non-degenerate is equivalent to the $\infty$-groupoid of triples $(D_+,\check{D}_-,f)$ where $f: D_+ \longrightarrow \check{D}_-$ is an equivalence. Hence the forgetful map $(D_+,D_-,e) \mapsto D_+$ is an equivalence. \end{proof} Now consider the natural inclusion $\iota: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{B}^{\ori}_1$ as an object in $\Cat^{\nd}_{\mathcal{B}^{\ev}_1 /}$. Then by Lemma~\ref{0-to-1-ev} we see that the $1$-dimensional cobordism hypothesis will be established once we make the following last step: \begin{thm}[Cobordism Hypothesis - Last Step]\label{cobordism-last-step} Let $\mathcal{D}$ be a symmetric monoidal $\infty$-category with duals and let ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ be a \textbf{non-degenerate} functor. Then the $\infty$-groupoid $$ \wtl{\Fun}^{\otimes}_{\mathcal{B}^{\ev}_1 /}(\mathcal{B}^{\ori}_1,\mathcal{D}) $$ is contractible. \end{thm} Note that since $\mathcal{B}^{\ev}_1 \longrightarrow \mathcal{B}^{\ori}_1$ is essentially surjective all the functors in $$ \wtl{\Fun}^{\otimes}_{\mathcal{B}^{\ev}_1 /}(\mathcal{B}^{\ori}_1,\mathcal{D}) $$ will have the same essential image as ${\varphi}$.
Hence it will be enough to prove the claim for the case where ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ is \textbf{essentially surjective}. We will denote by $$ \Cat^{\sur}_{\mathcal{B}^{\ev}_1 /} \subseteq \Cat^{\nd}_{\mathcal{B}^{\ev}_1 /} $$ the full subcategory spanned by essentially surjective functors ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$. Hence we can phrase Theorem~\ref{cobordism-last-step} as follows: \begin{thm}[Cobordism Hypothesis - Last Step 2]\label{cobordism-last-step-2} Let $\mathcal{D}$ be a symmetric monoidal $\infty$-category with duals and let ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ be an \textbf{essentially surjective non-degenerate} functor. Then the space of maps $$ \Map_{\Cat^{\sur}_{\mathcal{B}^{\ev}_1 /}}(\iota,{\varphi}) $$ is contractible. \end{thm} The purpose of this paper is to provide a formal proof for this last step. This paper is organized as follows. In \S~\ref{s-qu-cobordism} we prove a variant of Theorem~\ref{cobordism-last-step-2} which we call the quasi-unital cobordism hypothesis (Theorem~\ref{qu-cobordism}). Then in \S~\ref{s-from-qu-to-regular} we explain how to deduce Theorem~\ref{cobordism-last-step-2} from Theorem~\ref{qu-cobordism}. Section~\ref{s-from-qu-to-regular} relies on the notion of \textbf{quasi-unital $\infty$-categories} which is developed rigorously in~\cite{har} (however \S~\ref{s-qu-cobordism} is completely independent of~\cite{har}). \section{ The Quasi-Unital Cobordism Hypothesis }\label{s-qu-cobordism} Let ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ be a non-degenerate functor and let $\Grp_\infty$ denote the $\infty$-category of $\infty$-groupoids. We can define a lax symmetric monoidal functor $M_{\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \Grp_{\infty}$ by setting $$ M_{\varphi}(X) = \Map_{\mathcal{D}}(1,{\varphi}(X)) $$ We will refer to $M_{\varphi}$ as the \textbf{fiber functor} of ${\varphi}$.
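To fix ideas, consider the fiber functor of the inclusion $\iota: \mathcal{B}^{\ev}_1 \hookrightarrow \mathcal{B}^{\ori}_1$ itself. Unwinding the definition gives $$ M_{\iota}(X) = \Map_{\mathcal{B}^{\ori}_1}(\emptyset,X) $$ the $\infty$-groupoid of oriented cobordisms from $\emptyset$ to $X$, i.e. of compact oriented $1$-manifolds $M$ equipped with an identification $\partial M \simeq X$. This geometric description is the one that gets unwound below when we analyze the Grothendieck construction of $M_\iota$.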
Now if $\mathcal{D}$ has duals and ${\varphi}$ is non-degenerate, then one can expect this to be reflected in $M_{\varphi}$ somehow. More precisely, we have the following notion: \begin{define} Let $M: \mathcal{B}^{\ev}_1 \longrightarrow \Grp_{\infty}$ be a lax symmetric monoidal functor. An object $Z \in M(X \otimes \check{X})$ is called \textbf{non-degenerate} if for each object $Y \in \mathcal{B}^{\ev}_1$ the natural map $$ M(Y \otimes \check{X}) \stackrel{Id \times Z}{\longrightarrow} M(Y \otimes \check{X}) \times M(X \otimes \check{X}) \longrightarrow M(Y \otimes \check{X} \otimes X \otimes \check{X}) \stackrel{M(Id \otimes \ev \otimes Id)}{\longrightarrow} M(Y \otimes \check{X}) $$ is an equivalence of $\infty$-groupoids. \end{define} \begin{rem}\label{uniqueness} If a non-degenerate element $Z \in M(X \otimes \check{X})$ exists then it is unique up to a (non-canonical) equivalence. \end{rem} \begin{example}\label{unit} Let $M: \mathcal{B}^{\ev}_1 \longrightarrow \Grp_{\infty}$ be a lax symmetric monoidal functor. The lax symmetric structure of $M$ includes a structure map $1_{\Grp_{\infty}} \longrightarrow M(1)$ which can be described by choosing an object $Z_1 \in M(1)$. The axioms of lax monoidality then ensure that $Z_1$ is non-degenerate. \end{example} \begin{define} A lax symmetric monoidal functor $M: \mathcal{B}^{\ev}_1 \longrightarrow \Grp_{\infty}$ will be called \textbf{non-degenerate} if for each object $X \in \mathcal{B}^{\ev}_1$ there exists a non-degenerate object $Z \in M(X \otimes \check{X})$. \end{define} \begin{define} Let $M_1,M_2: \mathcal{B}^{\ev}_1 \longrightarrow \Grp_{\infty}$ be two non-degenerate lax symmetric monoidal functors. A lax symmetric natural transformation $T: M_1 \longrightarrow M_2$ will be called \textbf{non-degenerate} if for each object $X \in \mathcal{B}^{\ev}_1$ and each non-degenerate object $Z \in M_1(X \otimes \check{X})$ the object $T(Z) \in M_2(X \otimes \check{X})$ is non-degenerate.
\end{define} \begin{rem} From Remark~\ref{uniqueness} we see that if $T(Z) \in M_2(X \otimes \check{X})$ is non-degenerate for \textbf{at least one} non-degenerate $Z \in M_1(X \otimes \check{X})$ then it will be true for all non-degenerate $Z \in M_1(X \otimes \check{X})$. \end{rem} Now we claim that if $\mathcal{D}$ has duals and ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ is non-degenerate then the fiber functor $M_{\varphi}$ will be non-degenerate: for each object $X \in \mathcal{B}^{\ev}_1$ there exists a coevaluation morphism $$ \coev_{{\varphi}(X)}: 1 \longrightarrow {\varphi}(X) \otimes {\varphi}(\check{X}) \simeq {\varphi}(X \otimes \check{X}) $$ which determines an element $Z_X \in M_{\varphi}(X \otimes \check{X})$. It is not hard to see that this element is non-degenerate. Let $\Fun^{\lax}(\mathcal{B}^{\ev}_1,\Grp_{\infty})$ denote the $\infty$-category of lax symmetric monoidal functors $\mathcal{B}^{\ev}_1 \longrightarrow \Grp_{\infty}$ and denote by $$ \Fun_{\nd}^{\lax}(\mathcal{B}^{\ev}_1,\Grp_{\infty}) \subseteq \Fun^{\lax}(\mathcal{B}^{\ev}_1,\Grp_{\infty}) $$ the subcategory spanned by non-degenerate functors and non-degenerate natural transformations.
Now the construction ${\varphi} \mapsto M_{\varphi}$ determines a functor $$ \Cat^{\nd}_{\mathcal{B}^{\ev}_1 /} \longrightarrow \Fun_{\nd}^{\lax}(\mathcal{B}^{\ev}_1,\Grp_{\infty}) $$ In particular if ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{C}$ and $\psi: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ are non-degenerate then any functor $T:\mathcal{C} \longrightarrow \mathcal{D}$ under $\mathcal{B}^{\ev}_1$ will induce a non-degenerate natural transformation $$ T_*: M_{{\varphi}} \longrightarrow M_{\psi} $$ The rest of this section is devoted to proving the following result, which we call the ``quasi-unital cobordism hypothesis'': \begin{thm}[Cobordism Hypothesis - Quasi-Unital] \label{qu-cobordism} Let $\mathcal{D}$ be a symmetric monoidal $\infty$-category with duals, let ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ be a non-degenerate functor and let $\iota: \mathcal{B}^{\ev}_1 \hookrightarrow \mathcal{B}^{\ori}_1$ be the natural inclusion. Let $M_\iota,M_{\varphi} \in \Fun^{\lax}_{\nd}(\mathcal{B}^{\ev}_1,\Grp_{\infty})$ be the corresponding fiber functors. Then the space of maps $$ \Map_{\Fun^{\lax}_{\nd}}(M_\iota, M_{\varphi}) $$ is contractible. \end{thm} \begin{proof} We start by transforming the lax symmetric monoidal functors $M_\iota,M_{\varphi}$ to \textbf{left fibrations} over $\mathcal{B}^{\ev}_1$ using the symmetric monoidal analogue of Grothendieck's construction, as described in~\cite{lur1}, pages $67$--$68$. Let $M: \mathcal{B} \longrightarrow \Grp_\infty$ be a lax symmetric monoidal functor. We can construct a symmetric monoidal $\infty$-category $\Groth(\mathcal{B},M)$ as follows: \begin{enumerate} \item The objects of $\Groth(\mathcal{B},M)$ are pairs $(X, \eta)$ where $X \in \mathcal{B}$ is an object and $\eta$ is an object of $M(X)$.
\item The space of maps from $(X,\eta)$ to $(X',\eta')$ in $\Groth(\mathcal{B},M)$ is defined to be the classifying space of the $\infty$-groupoid of pairs $(f,{\alpha})$ where $f: X \longrightarrow X'$ is a morphism in $\mathcal{B}$ and ${\alpha}: f_*\eta \longrightarrow \eta'$ is a morphism in $M(X')$. Composition is defined in a straightforward way. \item The symmetric monoidal structure on $\Groth(\mathcal{B},M)$ is obtained by defining $$ (X,\eta) \otimes (X',\eta') = (X \otimes X',\beta_{X,X'}(\eta \otimes \eta')) $$ where $\beta_{X,X'}: M(X) \times M(X') \longrightarrow M(X \otimes X')$ is given by the lax symmetric structure of $M$. \end{enumerate} The forgetful functor $(X,\eta) \mapsto X$ induces a \textbf{left fibration} $$ \Groth(\mathcal{B},M) \longrightarrow \mathcal{B} $$ \begin{thm}\label{unstraightening} The association $M \mapsto \Groth(\mathcal{B},M)$ induces an equivalence between the $\infty$-category of lax-symmetric monoidal functors $\mathcal{B} \longrightarrow \Grp_\infty$ and the full subcategory of the over $\infty$-category $ \Cat^{\otimes}_{/\mathcal{B}} $ spanned by left fibrations. \end{thm} \begin{proof} This follows from the more general statement given in~\cite{lur1} Proposition $3.3.26$. Note that any map of left fibrations over $\mathcal{B}$ is in particular a map of coCartesian fibrations because if $p: \mathcal{C} \longrightarrow \mathcal{B}$ is a left fibration then any edge in $\mathcal{C}$ is $p$-coCartesian.
\end{proof} \begin{rem} Note that if $\mathcal{C} \longrightarrow \mathcal{B}$ is a left fibration of symmetric monoidal $\infty$-categories and $\mathcal{A} \longrightarrow \mathcal{B}$ is a symmetric monoidal functor then the $\infty$-category $$ \Fun^{\otimes}_{/ \mathcal{B}}(\mathcal{A},\mathcal{C}) $$ is actually an \textbf{$\infty$-groupoid}, and by Theorem~\ref{unstraightening} is equivalent to the $\infty$-groupoid of lax-monoidal natural transformations between the corresponding lax monoidal functors from $\mathcal{B}$ to $\Grp_\infty$. \end{rem} Now set $$ \mathcal{F}_\iota \stackrel{\df}{=} \Groth(\mathcal{B}^{\ev}_1,M_{\iota}) $$ $$ \mathcal{F}_{\varphi} \stackrel{\df}{=} \Groth(\mathcal{B}^{\ev}_1,M_{{\varphi}}) $$ Let $$ \Fun^{\nd}_{/\mathcal{B}^{\ev}_1}(\mathcal{F}_{\iota},\mathcal{F}_{{\varphi}}) \subseteq \Fun^{\otimes}_{/\mathcal{B}^{\ev}_1}(\mathcal{F}_{\iota},\mathcal{F}_{{\varphi}}) $$ denote the full sub $\infty$-groupoid of functors which correspond to \textbf{non-degenerate} natural transformations $$ M_\iota \longrightarrow M_{\varphi} $$ under the Grothendieck construction. Note that $\Fun^{\nd}_{/\mathcal{B}^{\ev}_1}(\mathcal{F}_{\iota},\mathcal{F}_{{\varphi}})$ is a union of connected components of the $\infty$-groupoid $\Fun^{\otimes}_{/\mathcal{B}^{\ev}_1}(\mathcal{F}_{\iota},\mathcal{F}_{{\varphi}})$. We now need to show that the $\infty$-groupoid $$ \Fun^{\nd}_{/\mathcal{B}^{\ev}_1}(\mathcal{F}_{\iota},\mathcal{F}_{{\varphi}}) $$ is contractible. Unwinding the definitions we see that the objects of $\mathcal{F}_{\iota}$ are pairs $(X,M)$ where $X \in \mathcal{B}^{\ev}_1$ is a $0$-manifold and $M \in \Map_{\mathcal{B}^{\ori}_1}(\emptyset,X)$ is a cobordism from $\emptyset$ to $X$. A morphism in $\mathcal{F}_{\iota}$ from $(X,M)$ to $(X',M')$ consists of a morphism in $\mathcal{B}^{\ev}_1$ $$ N:X \longrightarrow X' $$ and a diffeomorphism $$ T:M \coprod_{X} N \cong M' $$ respecting $X'$.
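For example (assuming, as is standard, that the dual of $X_+$ is $X_-$ and that the oriented arc realizes the coevaluation in $\mathcal{B}^{\ori}_1$): writing $\cap$ for the oriented arc regarded as a cobordism $$ \cap: \emptyset \longrightarrow X_+ \otimes X_- $$ the pair $(X_+ \otimes X_-, \cap)$ is an object of $\mathcal{F}_{\iota}$, and under the Grothendieck construction it corresponds to the non-degenerate element $Z_{X_+} \in M_{\iota}(X_+ \otimes X_-)$ obtained above from the coevaluation morphism.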
Note that for each $(X,M) \in \mathcal{F}_{\iota}$ we have an identification $X \simeq \partial M$. Furthermore the space of morphisms from $(\partial M,M)$ to $(\partial M',M')$ is \textbf{homotopy equivalent to the space of orientation-preserving $\pi_0$-surjective embeddings of $M$ in $M'$} (which are not required to respect the boundaries in any way). Now in order to analyze the symmetric monoidal $\infty$-category $\mathcal{F}_\iota$ we are going to use the theory of \textbf{$\infty$-operads}, as developed in~\cite{lur2}. Recall that the category $\Cat^{\otimes}$ of symmetric monoidal $\infty$-categories admits a forgetful functor $$ \Cat^{\otimes} \longrightarrow \Op^{\infty} $$ to the $\infty$-category of \textbf{$\infty$-operads}. This functor has a left adjoint $$ \Env: \Op^{\infty} \longrightarrow \Cat^{\otimes} $$ called the \textbf{monoidal envelope} functor (see~\cite{lur2} \S $2.2.4$). In particular, if $\mcal{C}^{\otimes}$ is an $\infty$-operad and $\mathcal{D}$ is a symmetric monoidal $\infty$-category with corresponding $\infty$-operad $\mathcal{D}^{\otimes} \longrightarrow \N({\Gamma}_*)$ then there is an \textbf{equivalence of $\infty$-categories} $$ \Fun^{\otimes}(\Env(\mathcal{C}^{\otimes}),\mathcal{D}) \simeq \Alg_{\mcal{C}}(\mathcal{D}^{\otimes}) $$ where $\Alg_{\mcal{C}}\left(\mathcal{D}^{\otimes}\right) \subseteq \Fun_{/\N({\Gamma}_*)}(\mcal{C}^{\otimes},\mathcal{D}^{\otimes})$ denotes the full subcategory spanned by $\infty$-operad maps (see Proposition $2.2.4.9$ of~\cite{lur2}). Now observing the definition of the monoidal envelope (see Remark $2.2.4.3$ in~\cite{lur2}) we see that $\mathcal{F}_{\iota}$ is equivalent to the monoidal envelope of a certain simple $\infty$-operad $$ \mathcal{F}_\iota \simeq \Env\left(\mathcal{OF}^{\otimes}\right) $$ which can be described as follows: the underlying $\infty$-category $\mathcal{OF}$ of $\mathcal{OF}^{\otimes}$ is the $\infty$-category of \textbf{connected} $1$-manifolds (i.e.
either the segment or the circle) and the morphisms are \textbf{orientation-preserving embeddings} between them. The (active) $n$-to-$1$ operations of $\mathcal{OF}$ (for $n\geq 1$) from $(M_1,...,M_n)$ to $M$ are the orientation-preserving embeddings $$ M_1 \coprod ... \coprod M_n \longrightarrow M $$ and there are no $0$-to-$1$ operations. Now observe that the induced map $\mathcal{OF}^{\otimes} \longrightarrow (\mathcal{B}^{\ev}_1)^{\otimes}$ is a fibration of $\infty$-operads. We claim that $\mathcal{F}_{\iota}$ is not only the enveloping symmetric monoidal $\infty$-category of $\mathcal{OF}^{\otimes}$, but that $\mathcal{F}_{\iota} \longrightarrow \mathcal{B}^{\ev}_1$ is the enveloping \textbf{left fibration} of $\mathcal{OF} \longrightarrow \mathcal{B}^{\ev}_1$. More precisely we claim that for any left fibration $\mathcal{D} \longrightarrow \mathcal{B}^{\ev}_1$ of symmetric monoidal $\infty$-categories the natural map $$ \Fun^{\otimes}_{/\mathcal{B}^{\ev}_1}\left(\mathcal{F}_{\iota},\mathcal{D}\right) \longrightarrow \Alg_{\mathcal{OF} / \mathcal{B}^{\ev}_1}(\mathcal{D}^{\otimes}) $$ is an equivalence of $\infty$-groupoids (where both terms denote mapping objects in the respective \textbf{over-categories}). This is in fact not a special property of $\mathcal{F}_{\iota}$: \begin{lem}\label{left-envelope} Let $\mathcal{O}$ be a symmetric monoidal $\infty$-category with corresponding $\infty$-operad $\mathcal{O}^{\otimes} \longrightarrow \N({\Gamma}_*)$ and let $p:\mathcal{C}^{\otimes} \longrightarrow \mathcal{O}^{\otimes}$ be a fibration of $\infty$-operads such that the induced map $$ \ovl{p}:\Env\left(\mathcal{C}^{\otimes}\right) \longrightarrow \mathcal{O} $$ is a left fibration. Let $\mathcal{D} \longrightarrow \mathcal{O}$ be some other left fibration of symmetric monoidal $\infty$-categories.
Then the natural map $$ \Fun^{\otimes}_{/\mathcal{O}}\left(\Env\left(\mathcal{C}^{\otimes}\right),\mathcal{D}\right) \longrightarrow \Alg_{\mathcal{C} / \mathcal{O}}(\mathcal{D}^{\otimes}) $$ is an equivalence of $\infty$-categories. Furthermore both sides are in fact $\infty$-groupoids. \end{lem} \begin{proof} Consider the diagram $$ \xymatrix{ \Fun^{\otimes}(\Env\left(\mathcal{C}^{\otimes}\right),\mathcal{D}) \ar^{\simeq}[r]\ar[d] & \Alg_{\mathcal{C}}\left(\mathcal{D}^{\otimes}\right) \ar[d] \\ \Fun^{\otimes}(\Env\left(\mathcal{C}^{\otimes}\right),\mathcal{O}) \ar^{\simeq}[r] & \Alg_{\mathcal{C}}\left(\mathcal{O}^{\otimes}\right) \\ }$$ Now the vertical maps are left fibrations and by adjunction the horizontal maps are equivalences. By~\cite{lur3} Proposition $3.3.1.5$ we get that the induced map on the fibers of $p$ and $\ovl{p}$ respectively $$ \Fun^{\otimes}_{/\mathcal{O}}\left(\Env\left(\mathcal{C}^{\otimes}\right),\mathcal{D}\right) \longrightarrow \Alg_{\mathcal{C} / \mathcal{O}}(\mathcal{D}^{\otimes}) $$ is a weak equivalence of $\infty$-groupoids. \end{proof} \begin{rem} In~\cite{lur2} a relative variant $\Env_{\mathcal{B}^{\ev}_1}$ of $\Env$ is introduced which sends a fibration of $\infty$-operads $\mathcal{C}^{\otimes} \longrightarrow (\mathcal{B}^{\ev}_1)^{\otimes}$ to its enveloping coCartesian fibration $\Env_{\mathcal{B}^{\ev}_1}\left(\mathcal{C}^{\otimes}\right) \longrightarrow \mathcal{B}^{\ev}_1$. Note that in our case the map $$ \mathcal{F}_{\iota} \longrightarrow \mathcal{B}^{\ev}_1 $$ is \textbf{not} the enveloping coCartesian fibration of $\mathcal{OF}^{\otimes} \longrightarrow (\mathcal{B}^{\ev}_1)^{\otimes}$. However from Lemma~\ref{left-envelope} it follows that the map $$ \xymatrix{ \mathcal{F}_{\iota} \ar[rr]\ar[dr] && \Env_{\mathcal{B}^{\ev}_1}\left(\mathcal{OF}^{\otimes}\right) \ar[dl] \\ & \mathcal{B}^{\ev}_1 & \\ }$$ is a \textbf{covariant equivalence} over $\mathcal{B}^{\ev}_1$, i.e.
induces a weak equivalence of simplicial sets on the fibers (where the fibers on the left are $\infty$-groupoids and the fibers on the right are $\infty$-categories). This claim can also be verified directly by unwinding the definition of $\Env_{\mathcal{B}^{\ev}_1}\left(\mathcal{OF}^{\otimes}\right)$. \end{rem} Summing up the discussion so far we observe that we have a weak equivalence of $\infty$-groupoids $$ \Fun^{\otimes}_{/\mathcal{B}^{\ev}_1}\left(\mathcal{F}_{\iota},\mathcal{F}_{{\varphi}}\right) \stackrel{\simeq}{\longrightarrow} \Alg_{\mathcal{OF} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) $$ Let $$ \Alg^{\nd}_{\mathcal{OF} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) \subseteq \Alg_{\mathcal{OF} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) $$ denote the full sub $\infty$-groupoid corresponding to $$ \Fun^{\nd}_{/\mathcal{B}^{\ev}_1}(\mathcal{F}_{\iota},\mathcal{F}_{{\varphi}}) \subseteq \Fun^{\otimes}_{/\mathcal{B}^{\ev}_1}(\mathcal{F}_{\iota},\mathcal{F}_{{\varphi}}) $$ under the adjunction. We are now reduced to proving that the $\infty$-groupoid $$ \Alg^{\nd}_{\mathcal{OF} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) $$ is contractible. Let $\mathcal{OI}^{\otimes} \subseteq \mathcal{OF}^{\otimes}$ be the full sub $\infty$-operad of $\mathcal{OF}^{\otimes}$ spanned by connected $1$-manifolds which are diffeomorphic to the segment (and all $n$-to-$1$ operations between them). In particular we see that $\mathcal{OI}^{\otimes}$ is equivalent to the \textbf{non-unital associative $\infty$-operad}. We begin with the following theorem which reduces the handling of $\mathcal{OF}^{\otimes}$ to $\mathcal{OI}^{\otimes}$. \begin{thm}\label{removing-circles} Let $q:\mathcal{C}^{\otimes} \longrightarrow \mathcal{O}^{\otimes}$ be a left fibration of $\infty$-operads.
Then the restriction map $$ \Alg_{\mathcal{OF} / \mathcal{O}}(\mathcal{C}^{\otimes}) \longrightarrow \Alg_{\mathcal{OI} / \mathcal{O}}(\mathcal{C}^{\otimes}) $$ is a weak equivalence. \end{thm} \begin{proof} We will base our claim on the following general lemma: \begin{lem}\label{free-algebra} Let $\mathcal{A}^{\otimes} \longrightarrow \mathcal{B}^{\otimes}$ be a map of $\infty$-operads over $\mathcal{O}^{\otimes}$ and let $q:\mathcal{C}^{\otimes} \longrightarrow \mathcal{O}^{\otimes}$ be a \textbf{left fibration} of $\infty$-operads. Suppose that for every object $B \in \mathcal{B}$, the category $$ \mathcal{F}_B = \mathcal{A}^{\otimes}_{\act} \times_{\mathcal{B}^{\otimes}_{\act}} \left(\mathcal{B}^{\otimes}_{\act}\right)_{/B} $$ is weakly contractible (see~\cite{lur2} for the terminology). Then the natural restriction map $$ \Alg_{\mathcal{A} / \mathcal{O}}(\mathcal{C}^{\otimes}) \longrightarrow \Alg_{\mathcal{B} / \mathcal{O}}(\mathcal{C}^{\otimes}) $$ is a weak equivalence. \end{lem} \begin{proof} In~\cite{lur2} \S $3.1.3$ it is explained how under certain conditions the forgetful functor (i.e. restriction map) $$ \Alg_{\mathcal{A} / \mathcal{O}}(\mathcal{C}^{\otimes}) \longrightarrow \Alg_{\mathcal{B} / \mathcal{O}}(\mathcal{C}^{\otimes}) $$ admits a left adjoint, called the \textbf{free algebra functor}. Since $\mathcal{C}^{\otimes} \longrightarrow \mathcal{O}^{\otimes}$ is a left fibration both these $\infty$-categories are $\infty$-groupoids, and so any adjunction between them will be an equivalence. Hence it will suffice to show that the conditions for the existence of a left adjoint are satisfied in this case. Since $q: \mathcal{C}^{\otimes} \longrightarrow \mathcal{O}^{\otimes}$ is a left fibration $q$ is \textbf{compatible with colimits indexed by weakly contractible diagrams} in the sense of~\cite{lur2} Definition $3.1.1.18$ (because weakly contractible colimits exist in every $\infty$-groupoid and are preserved by any functor between $\infty$-groupoids).
Combining Corollary $3.1.3.4$ and Proposition $3.1.1.20$ of~\cite{lur2} we see that the desired free algebra functor exists. \end{proof} In view of Lemma~\ref{free-algebra} it will be enough to check that for every object $M \in \mathcal{OF}$ (i.e. every connected $1$-manifold) the $\infty$-category $$ \mathcal{F}_M \stackrel{\df}{=} \mathcal{OI}^{\otimes}_{\act} \times_{\mathcal{OF}^{\otimes}_{\act}} \left(\mathcal{OF}^{\otimes}_{\act}\right)_{/M} $$ is weakly contractible. Unwinding the definitions we see that the objects of $\mathcal{F}_M$ are tuples of $1$-manifolds $(M_1,...,M_n)$ ($n \geq 1$), such that each $M_i$ is diffeomorphic to a segment, together with an orientation-preserving embedding $$ f: M_1 \coprod ... \coprod M_n \hookrightarrow M $$ A morphism in $\mathcal{F}_M$ from $$ f: M_1 \coprod ... \coprod M_n \hookrightarrow M $$ to $$ g: M_1' \coprod ... \coprod M_m' \hookrightarrow M $$ is a $\pi_0$-surjective orientation-preserving embedding $$ T:M_1 \coprod ... \coprod M_n \longrightarrow M_1' \coprod ... \coprod M_m' $$ together with an \textbf{isotopy} $g \circ T \sim f$. Now when $M$ is the segment then $\mathcal{F}_M$ has a terminal object and so is weakly contractible. Hence we only need to take care of the case of the circle $M=S^1$. It is not hard to verify that the category $\mathcal{F}_{S^1}$ is in fact discrete: the space of self-isotopies of any embedding $f:M_1 \coprod ... \coprod M_n \hookrightarrow M$ is equivalent to the loop space of $S^1$ and hence homotopy discrete. In fact one can even describe $\mathcal{F}_{S^1}$ in completely combinatorial terms. In order to do that we will need some terminology. \begin{define} Let ${\Lambda}_{\infty}$ be the category whose objects correspond to the natural numbers $1,2,3,...$ and the morphisms from $n$ to $m$ are (weakly) order-preserving maps $f: \mathbb{Z} \longrightarrow \mathbb{Z}$ such that $f(x+n) = f(x)+m$.
\end{define} The category ${\Lambda}_{\infty}$ is a model for the universal fibration over the cyclic category, i.e., there is a left fibration ${\Lambda}_\infty \longrightarrow {\Lambda}$ (where ${\Lambda}$ is Connes' cyclic category) such that the fibers are connected groupoids with a single object having automorphism group $\mathbb{Z}$ (or in other words circles). In particular the category ${\Lambda}_{\infty}$ is known to be weakly contractible. See~\cite{kal} for a detailed introduction and proof (Lemma $4.8$). Let ${\Lambda}^{\sur}_{\infty}$ be the subcategory of ${\Lambda}_\infty$ which contains all the objects and only the \textbf{surjective} maps between them. (For instance, $f(x) = \lceil x/2 \rceil$ defines a surjective morphism from $2$ to $1$ in ${\Lambda}^{\sur}_{\infty}$.) It is not hard to verify explicitly that the map ${\Lambda}^{\sur}_\infty \longrightarrow {\Lambda}_\infty$ is cofinal and so ${\Lambda}^{\sur}_{\infty}$ is contractible as well. Now we claim that $\mathcal{F}_{S^1}$ is in fact equivalent to ${\Lambda}^{\sur}_{\infty}$. Let ${\Lambda}^{\sur}_{\bg}$ be the category whose objects are linearly ordered sets $S$ with an order-preserving automorphism ${\sigma}: S \longrightarrow S$ and whose morphisms are surjective order-preserving maps which commute with the respective automorphisms. Then ${\Lambda}^{\sur}_{\infty}$ can be considered as a full subcategory of ${\Lambda}^{\sur}_{\bg}$ such that $n$ corresponds to the object $(\mathbb{Z},{\sigma}_n)$ where ${\sigma}_n: \mathbb{Z} \longrightarrow \mathbb{Z}$ is the automorphism $x \mapsto x+n$. Now let $p:\mathbb{R} \longrightarrow S^1$ be the universal covering. We construct a functor $\mathcal{F}_{S^1} \longrightarrow {\Lambda}^{\sur}_{\bg}$ as follows: given an object $$ f: M_1 \coprod ... \coprod M_n \hookrightarrow S^1 $$ of $\mathcal{F}_{S^1}$ consider the fiber product $$ P = \left[M_1 \coprod ...
\coprod M_n\right] \times_{S^1} \mathbb{R} $$ Note that $P$ is homeomorphic to an infinite union of segments and the projection $$ P \longrightarrow \mathbb{R} $$ is injective (because $f$ is injective), giving us a well-defined linear order on $P$. The automorphism ${\sigma}: \mathbb{R} \longrightarrow \mathbb{R}$ of $\mathbb{R}$ over $S^1$ given by $x \mapsto x + 1$ gives an order-preserving automorphism $\wtl{{\sigma}}: P \longrightarrow P$. Now suppose that $((M_1,...,M_n),f)$ and $((M_1',...,M_m'),g)$ are two objects and we have a morphism between them, i.e. an embedding $$ T:M_1 \coprod ... \coprod M_n \longrightarrow M_1' \coprod ... \coprod M_m' $$ and an isotopy $\psi: g \circ T \sim f$. Then we see that the pair $(T,\psi)$ determines a well-defined order-preserving map $$ \left[M_1 \coprod ... \coprod M_n\right] \times_{S^1} \mathbb{R} \longrightarrow \left[M_1' \coprod ... \coprod M_m'\right] \times_{S^1} \mathbb{R} $$ which commutes with the respective automorphisms. Clearly we obtain in this way a functor $u:\mathcal{F}_{S^1} \longrightarrow {\Lambda}^{\sur}_{\bg}$ whose essential image is the same as the essential image of ${\Lambda}^{\sur}_\infty$. It is also not hard to see that $u$ is fully faithful. Hence $\mathcal{F}_{S^1}$ is equivalent to ${\Lambda}^{\sur}_\infty$ which is weakly contractible. This finishes the proof of the theorem. \end{proof} Let $$ \Alg^{\nd}_{\mathcal{OI} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) \subseteq \Alg_{\mathcal{OI} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) $$ denote the full sub $\infty$-groupoid corresponding to the full sub $\infty$-groupoid $$ \Alg^{\nd}_{\mathcal{OF} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) \subseteq \Alg_{\mathcal{OF} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) $$ under the equivalence of Theorem~\ref{removing-circles}.
Now the last step of the cobordism hypothesis will be complete once we show the following: \begin{lem}\label{final-lemma} The $\infty$-groupoid $$ \Alg^{\nd}_{\mathcal{OI} / \mathcal{B}^{\ev}_1}\left(\mathcal{F}_{{\varphi}}^{\otimes}\right) $$ is contractible. \end{lem} \begin{proof} Let $$ q: p^*\mathcal{F}_{{\varphi}} \longrightarrow \mathcal{OI}^{\otimes} $$ be the pullback of the left fibration $\mathcal{F}_{\varphi} \longrightarrow \mathcal{B}^{\ev}_1$ via the map $p: \mathcal{OI}^{\otimes} \longrightarrow \mathcal{B}^{\ev}_1$, so that $q$ is a left fibration as well. In particular, since $\mathcal{OI}^{\otimes}$ is the non-unital associative $\infty$-operad, we see that $q$ classifies an $\infty$-groupoid $q^{-1}(\mathcal{OI})$ with a non-unital monoidal structure. Unwinding the definitions one sees that this $\infty$-groupoid is the fundamental $\infty$-groupoid of the space $$ \Map_{\mathcal{D}}(1,{\varphi}(X_+) \otimes {\varphi}(X_-)) $$ where $X_+,X_- \in \mathcal{B}^{\ev}_1$ are the points with positive and negative orientations respectively. The monoidal structure sends a pair of maps $$ f,f': 1 \longrightarrow {\varphi}(X_+) \otimes {\varphi}(X_-) $$ to the composition $$ 1 \stackrel{f \otimes f'}{\longrightarrow} \left[{\varphi}(X_+) \otimes {\varphi}(X_-)\right] \otimes \left[{\varphi}(X_+) \otimes {\varphi}(X_-)\right] \stackrel{\simeq}{\longrightarrow} $$ $$ {\varphi}(X_+) \otimes \left[{\varphi}(X_-) \otimes {\varphi}(X_+)\right] \otimes {\varphi}(X_-) \stackrel{Id \otimes {\varphi}(\ev) \otimes Id}{\longrightarrow} {\varphi}(X_+) \otimes {\varphi}(X_-) $$ Since $\mathcal{D}$ has duals we see that this monoidal $\infty$-groupoid is equivalent to the fundamental $\infty$-groupoid of the space $$ \Map_{\mathcal{D}}({\varphi}(X_+),{\varphi}(X_+)) $$ with the monoidal product coming from \textbf{composition}.
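The equivalence in the last sentence is the usual duality adjunction (writing $\mathcal{D}$ for the target of ${\varphi}$): since ${\varphi}$ is non-degenerate, ${\varphi}(X_-) \simeq {\varphi}(\check{X}_+)$ is a dual of ${\varphi}(X_+)$, so that $$ \Map_{\mathcal{D}}(1,{\varphi}(X_+) \otimes {\varphi}(X_-)) \simeq \Map_{\mathcal{D}}({\varphi}(X_+),{\varphi}(X_+)) $$ and one checks directly that under this identification the product described above is carried to composition of endomorphisms.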
Now $$ \Alg_{\mathcal{OI} / \mathcal{B}^{\ev}_1}(\mathcal{F}_{{\varphi}}) \simeq \Alg_{\mathcal{OI} / \mathcal{OI}}(p^*\mathcal{F}_{{\varphi}}) $$ classifies $\mathcal{OI}^{\otimes}$-algebra objects in $p^*\mathcal{F}_{{\varphi}}$, i.e. non-unital algebra objects in $$ \Map_{\mathcal{D}}({\varphi}(X_+),{\varphi}(X_+)) $$ with respect to composition. The full sub $\infty$-groupoid $$ \Alg^{\nd}_{\mathcal{OI} / \mathcal{B}^{\ev}_1}(\mathcal{F}_{{\varphi}}) \subseteq \Alg_{\mathcal{OI} / \mathcal{B}^{\ev}_1}(\mathcal{F}_{{\varphi}}) $$ will then classify non-unital algebra objects $A$ which correspond to \textbf{self-equivalences} $$ {\varphi}(X_+) \longrightarrow {\varphi}(X_+) $$ It remains to prove the following lemma: \begin{lem} Let $\mathcal{C}$ be an $\infty$-category. Let $X \in \mathcal{C}$ be an object and let $\mathcal{E}_X$ denote the $\infty$-groupoid of self-equivalences $u: X \longrightarrow X$ with the monoidal product induced from composition. Then the $\infty$-groupoid of non-unital algebra objects in $\mathcal{E}_X$ is contractible. \end{lem} \begin{proof} Let $\mathcal{A}ss_{\nun}$ denote the non-unital associative $\infty$-operad. The identity map $\mathcal{A}ss_{\nun} \longrightarrow \mathcal{A}ss_{\nun}$, which is in particular a left fibration of $\infty$-operads, classifies the terminal non-unital monoidal $\infty$-groupoid $\mathcal{A}$, which consists of a single automorphismless idempotent object $a \in \mathcal{A}$. The non-unital algebra objects in $\mathcal{E}_X$ are then classified by non-unital lax monoidal functors $$ \mathcal{A} \longrightarrow \mathcal{E}_X $$ Since $\mathcal{E}_X$ is an $\infty$-groupoid this is the same as non-unital monoidal functors (without the lax) $$ \mathcal{A} \longrightarrow \mathcal{E}_X $$ Now the forgetful functor from unital to non-unital monoidal $\infty$-groupoids has a left adjoint.
Applying this left adjoint to $\mathcal{A}$ we obtain the $\infty$-groupoid $\mathcal{UA}$ with two automorphismless objects $$ \mathcal{UA} = \{1,a\} $$ such that $1$ is the unit of the monoidal structure and $a$ is an idempotent object. Hence we need to show that the $\infty$-groupoid of monoidal functors $$ \mathcal{UA} \longrightarrow \mathcal{E}_X $$ is contractible. Now given a monoidal $\infty$-groupoid $\mathcal{G}$ we can form the $\infty$-category $\mathcal{B}(\mathcal{G})$ having a single object with endomorphism space $\mathcal{G}$ (the monoidal structure on $\mathcal{G}$ will then give the composition structure). This construction determines a fully faithful functor from the $\infty$-category of monoidal $\infty$-groupoids to the $\infty$-category of pointed $\infty$-categories (see~\cite{lur1} Remark $4.4.6$ for a much more general statement). In particular it will be enough to show that the $\infty$-groupoid of \textbf{pointed functors} $$ \mathcal{B}(\mathcal{UA}) \longrightarrow \mathcal{B}(\mathcal{E}_X) $$ is contractible. Since $\mathcal{B}(\mathcal{E}_X)$ is an $\infty$-groupoid it will be enough to show that $\mathcal{B}(\mathcal{UA})$ is weakly contractible. Now the nerve $\N\mathcal{B}(\mathcal{UA})$ of $\mathcal{B}(\mathcal{UA})$ is the simplicial set in which for each $n$ there exists a single \textbf{non-degenerate} $n$-simplex ${\sigma}_n \in \N\mathcal{B}(\mathcal{UA})_n$ such that $d_i({\sigma}_n) = {\sigma}_{n-1}$ for all $i=0,...,n$. By van Kampen's theorem it follows that $\N\mathcal{B}(\mathcal{UA})$ is simply connected, and a direct computation shows that all the reduced homology groups vanish: the cellular chain complex has a single generator ${\sigma}_n$ in each degree, with boundary $\partial {\sigma}_n = \sum_{i=0}^{n} (-1)^i {\sigma}_{n-1}$, so the boundary maps are alternately zero and the identity. Hence $\N\mathcal{B}(\mathcal{UA})$ is weakly contractible. \end{proof} This finishes the proof of Lemma~\ref{final-lemma}. \end{proof} This finishes the proof of Theorem~\ref{qu-cobordism}.
\end{proof} \section{ From Quasi-Unital to Unital Cobordism Hypothesis }\label{s-from-qu-to-regular} In this section we will show how the quasi-unital cobordism hypothesis (Theorem~\ref{qu-cobordism}) implies the last step in the proof of the $1$-dimensional cobordism hypothesis (Theorem~\ref{cobordism-last-step-2}). Let $M: \mathcal{B}^{\ev}_1 \longrightarrow \Grp_{\infty}$ be a non-degenerate lax symmetric monoidal functor. We can construct a pointed \textbf{non-unital} symmetric monoidal $\infty$-category $\mathcal{C}_M$ as follows: \begin{enumerate} \item The objects of $\mathcal{C}_M$ are the objects of $\mathcal{B}^{\ev}_1$. The marked point is the object $X_+$. \item Given a pair of objects $X, Y \in \mathcal{C}_M$ we define $$ \Map_{\mathcal{C}_M}(X, Y) = M(\check{X} \otimes Y) $$ Given a triple of objects $X, Y, Z \in \mathcal{C}_M$ the composition law $$ \Map_{\mathcal{C}_M}(X, Y) \times \Map_{\mathcal{C}_M}(Y,Z) \longrightarrow \Map_{\mathcal{C}_M}(X,Z) $$ is given by the composition $$ M(\check{X} \otimes Y) \times M(\check{Y} \otimes Z) \longrightarrow M(\check{X} \otimes Y \otimes \check{Y} \otimes Z) \longrightarrow M(\check{X} \otimes Z) $$ where the first map is given by the lax symmetric monoidal structure on the functor $M$ and the second is induced by the evaluation map $$ \ev_Y : Y \otimes \check{Y} \longrightarrow 1 $$ in $\mathcal{B}^{\ev}_1 $. \item The symmetric monoidal structure is defined in a straightforward way using the lax monoidal structure of $M$. \end{enumerate} It is not hard to see that if $M$ is non-degenerate then $\mathcal{C}_M$ is \textbf{quasi-unital}, i.e. each object admits a morphism which \textbf{behaves} like an identity map (see~\cite{har}).
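Let us sketch where the quasi-units come from (see~\cite{har} for the precise formulation). Given $X \in \mathcal{C}_M$, we have $\check{\check{X}} \simeq X$, so the non-degeneracy of $M$ supplies a non-degenerate element $$ u_X \in M(\check{X} \otimes X) = \Map_{\mathcal{C}_M}(X,X) $$ The non-degeneracy condition says precisely that composition with $u_X$ induces an equivalence on the relevant mapping spaces, i.e. $u_X$ behaves like an identity morphism of $X$; by Remark~\ref{uniqueness} it is unique up to equivalence.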
This construction determines a functor $$ G: \Fun_{\nd}^{\lax}(\mathcal{B}^{\ev}_1,\Grp_{\infty}) \longrightarrow \Cat^{\qu,\otimes}_{\mathcal{B}^{\un}_0 /} $$ where $\Cat^{\qu,\otimes}$ is the $\infty$-category of symmetric monoidal quasi-unital $\infty$-categories (i.e. commutative algebra objects in the $\infty$-category $\Cat^{\qu}$ of quasi-unital $\infty$-categories). In~\cite{har} it is proved that the forgetful functor $$ S:\Cat \longrightarrow\Cat^{\qu} $$ from $\infty$-categories to quasi-unital $\infty$-categories is an \textbf{equivalence}, and so the forgetful functor $$ S^{\otimes}:\Cat^{\otimes} \longrightarrow \Cat^{\qu,\otimes} $$ is an equivalence as well. Now recall that $$ \Cat^{\sur}_{\mathcal{B}^{\ev}_1 /} \subseteq \Cat^{\nd}_{\mathcal{B}^{\ev}_1 /} $$ is the full subcategory spanned by essentially surjective functors ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{C}$. The fiber functor construction ${\varphi} \mapsto M_{\varphi}$ induces a functor $$ F: \Cat^{\sur}_{\mathcal{B}^{\ev}_1 /} \longrightarrow \Fun_{\nd}^{\lax}(\mathcal{B}^{\ev}_1,\Grp_{\infty}) $$ The composition $G \circ F$ gives a functor $$ \Cat^{\sur}_{\mathcal{B}^{\ev}_1 / } \longrightarrow \Cat^{\qu,\otimes}_{\mathcal{B}^{\un}_0 /} $$ We claim that $G \circ F$ is in fact \textbf{equivalent} to the composition $$ \Cat^{\sur}_{\mathcal{B}^{\ev}_1 / } \stackrel{T}{\longrightarrow} \Cat^{\otimes}_{\mathcal{B}^{\un}_0 / } \stackrel{S}{\longrightarrow} \Cat^{\qu,\otimes}_{\mathcal{B}^{\un}_0 / } $$ where $T$ is given by the restriction along $X_+:\mathcal{B}^{\un}_0 \hookrightarrow \mathcal{B}^{\ev}_1$ and $S$ is the forgetful functor.
Explicitly, we will construct a natural transformation $$ N:G \circ F \stackrel{\simeq}{\longrightarrow} S \circ T $$ In order to construct $N$ we need to construct for each non-degenerate functor ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ a natural pointed functor $$ N_{\varphi}: \mathcal{C}_{M_{\varphi}} \longrightarrow \mathcal{D} $$ The functor $N_{\varphi}$ will map the objects of $\mathcal{C}_{M_{\varphi}}$ (which are the objects of $\mathcal{B}^{\ev}_1$) to $\mathcal{D}$ via ${\varphi}$. Then for each $X,Y \in \mathcal{B}^{\ev}_1$ we can map the morphisms $$ \Map_{\mathcal{C}_{M_{{\varphi}}}}(X,Y) = \Map_{\mathcal{D}}(1,\check{X} \otimes Y) \longrightarrow \Map_{\mathcal{D}}(X,Y) $$ via the duality structure: to a morphism $f: 1 \longrightarrow \check{X} \otimes Y$ one associates the morphism $\what{f}: X \longrightarrow Y$ given as the composition $$ X \stackrel{Id \otimes f}{\longrightarrow} X \otimes \check{X} \otimes Y \stackrel{{\varphi}(\ev_X) \otimes Y}{\longrightarrow} Y $$ Since $\mathcal{D}$ has duals we get that $N_{\varphi}$ is fully faithful, and since we have restricted to essentially surjective ${\varphi}$ we get that $N_{\varphi}$ is essentially surjective. Hence $N_{\varphi}$ is an equivalence of quasi-unital symmetric monoidal $\infty$-categories and $N$ is a natural equivalence of functors. In particular we have a homotopy commutative diagram: $$ \xymatrix{ & \Cat^{\sur}_{\mathcal{B}^{\ev}_1 / } \ar_{F}[dl] \ar^{T}[dr] & \\ \Fun_{\nd}^{\lax}(\mathcal{B}^{\ev}_1,\Grp_{\infty}) \ar_{G}[dr] & & \Cat^{\otimes}_{\mathcal{B}^{\un}_0 /} \ar^{S}[dl] \\ & \Cat^{\qu,\otimes}_{\mathcal{B}^{\un}_0 /} & \\ }$$ Now from Lemma~\ref{0-to-1-ev} we see that $T$ is fully faithful. Since $S$ is an equivalence of $\infty$-categories we get \begin{cor}\label{retract} The functor $G \circ F$ is fully faithful. \end{cor} We are now ready to complete the proof of Theorem~\ref{cobordism-last-step-2}.
Let $\mathcal{D}$ be a symmetric monoidal $\infty$-category with duals and let ${\varphi}: \mathcal{B}^{\ev}_1 \longrightarrow \mathcal{D}$ be a non-degenerate functor. We wish to show that the space of maps $$ \Map_{\Cat^{\sur}_{\mathcal{B}^{\ev}_1 /}}(\iota,{\varphi}) $$ is contractible. Consider the sequence $$ \Map_{\Cat^{\sur}_{\mathcal{B}^{\ev}_1 /}}(\iota,{\varphi}) \longrightarrow \Map_{\Fun_{\nd}^{\lax}(\mathcal{B}^{\ev}_1,\Grp_{\infty})}(M_\iota,M_{\varphi}) \longrightarrow \Map_{\Cat^{\qu,\otimes}_{\mathcal{B}^{\un}_0 /}}(\mathcal{B}^{\ori}_1,\mathcal{D}) $$ By Theorem~\ref{qu-cobordism} the middle space is contractible, and by Corollary~\ref{retract} the composition $$ \Map_{\Cat^{\sur}_{\mathcal{B}^{\ev}_1 /}}(\iota,{\varphi}) \longrightarrow \Map_{\Cat^{\qu,\otimes}_{\mathcal{B}^{\un}_0 /}}(\mathcal{B}^{\ori}_1,\mathcal{D}) $$ is a weak equivalence. Hence we get that $$ \Map_{\Cat^{\sur}_{\mathcal{B}^{\ev}_1 /}}(\iota,{\varphi}) $$ is contractible. This completes the proof of Theorem~\ref{cobordism-last-step-2}. \end{document}
\begin{document} \title{Second-order superposition operations via Hong-Ou-Mandel interference} \author{Su-Yong Lee} \affiliation{Department of Physics, Texas A\&M University at Qatar, POBox 23874, Doha, Qatar} \author{Hyunchul Nha} \affiliation{Department of Physics, Texas A\&M University at Qatar, POBox 23874, Doha, Qatar} \affiliation{Institute f{\"u}r Quantenphysik, Universit{\"a}t Ulm, D-89069 Ulm, Germany} \date{\today} \begin{abstract} We propose an experimental scheme to implement a second-order nonlocal superposition operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$ and its variants by way of Hong-Ou-Mandel interference. The second-order coherent operations enable us to generate a NOON state with high particle number in a heralded fashion and also can be used to enhance the entanglement properties of continuous variable states. We discuss the feasibility of our proposed scheme considering realistic experimental conditions such as on-off photodetectors with nonideal efficiency and imperfect single-photon sources. \end{abstract} \pacs{42.50.Dv, 03.65.Ud, 03.67.-a } \maketitle \section{Introduction} The superposition principle in quantum mechanics plays a crucial role in manifesting physical effects that go beyond the reach of classical descriptions. A coherent superposition, when realized at the level of quantum operation rather than quantum state, can provide a useful tool for a number of applications, e.g., in quantum information processing. In the regime of continuous-variables (CVs), the superposition operation $\hat{a}\hat{a}^{\dag}\pm \hat{a}^{\dag}\hat{a}$ was proposed to test the commutation relation $[\hat{a},\hat{a}^{\dag}]=1$ \cite{Kim}, which was experimentally realized in a single-photon interferometric setting using thermal lights \cite{Zavatta}. ($\hat{a}^{\dag},\hat{a}$: bosonic creation and annihilation operators, respectively). 
This idea was extended to a proposal for implementing an arbitrary polynomial of photon-number operators using multiple photon subtractions and additions \cite{Fiurasek0}. Furthermore, the superposition operation at the more elementary level of first-order field operators, i.e., $t\hat{a}+r\hat{a}^{\dag}$, was studied with possible applications to quantum-state engineering \cite{Lee} and entanglement (nonlocality) concentration \cite{Lee1}. In addition to these {\it local} coherent operations, some {\it nonlocal} coherent operations were also investigated, such as $\hat{a}+\hat{b}$ \cite{Grangier}, $\hat{a}^2+\hat{b}^2$ \cite{Kok}, and $\hat{a}^{\dag}+\hat{b}^{\dag}$ \cite{Fiurasek}. In general, the working principle of realizing a coherent superposition operation is to erase the which-path information relevant to the operation under study. For example, to implement the superposition operation $t\hat{a}+r\hat{a}^{\dag}$, one may place a beam splitter before detecting a photon in order to erase the information on whether the photon emerges from the path of photon subtraction $\hat{a}$ or photon addition $\hat{a}^{\dag}$ \cite{Lee}. In this paper, we propose a second-order nonlocal operation $\hat{a}^{\dag2}+e^{i\phi}\hat{b}^{\dag2}$, and its variants, via the Hong-Ou-Mandel (HOM) interference effect \cite{Hong}. The HOM interference arises when two indistinguishable photons are each injected into the input modes of a 50:50 beam splitter: the output state from the beam splitter shows a bunching effect such that both photons appear together at one of the two output modes, which may be attributed to the bosonic nature of photons. The HOM interference can thus be used for the deterministic creation of a NOON state for $N=2$, or for the demonstration of the purity of a single photon \cite{Santori, Beugnon}.
Conversely, since beam splitting is a reversible unitary operation, if a single photon is detected at each output mode, the input state is inferred to be $\frac{1}{\sqrt{2}}\left(|2,0\rangle-|0,2\rangle\right)$. When this HOM effect is employed in conjunction with two non-degenerate parametric amplifiers (NDPAs), the operation $\hat{a}^{\dag2}-\hat{b}^{\dag2}$ can be implemented on two signal modes by projecting the idler modes to $\frac{1}{\sqrt{2}}\left(|2,0\rangle-|0,2\rangle\right)=\frac{1}{2}\left(\hat{c}^{\dag2}-\hat{d}^{\dag2}\right)|0,0\rangle$, as will be shown later. The superposition operation $\hat{a}^{\dag2}+e^{i\phi}\hat{b}^{\dag2}$ can be useful for a number of applications, and in this paper, we particularly discuss the generation of a NOON state with high particle number and the enhancement of entanglement properties. A NOON state is known to be a valuable resource for quantum lithography and quantum metrology, e.g., beating the shot-noise limit in an optical phase measurement \cite{Dowling}, and also for linear-optical quantum computing \cite{Knill}. In Ref. \cite{Fiurasek}, the multiple use of the operation $\hat{a}^{\dag}+e^{i\phi}\hat{b}^{\dag}$ was proposed to produce a NOON state, which is, however, conditioned on non-detection events. Conditioning on non-detection may suffer significantly from nonideal detector efficiency, as one cannot know whether the non-detection is attributed to no photons being present or to a failure of the detector \cite{Lee3}. To overcome this difficulty, a heralded scheme implementing the superposition operation $\hat{a}^2+\hat{b}^2$ was also proposed, which, however, requires a Fock-state input $|N,N\rangle$ of large $N$ that is rather demanding \cite{Kok,Hwang}. On the other hand, our scheme makes use of vacuum-state inputs $|0,0\rangle$, to which the superposition operation $\hat{a}^{\dag2}+e^{i\phi}\hat{b}^{\dag2}$ is consecutively applied.
Alternatively, one may first prepare a $|2,0\rangle-|0,2\rangle$ state by injecting single photons into a 50:50 beam splitter and then apply $\hat{a}^{\dag}+e^{i\phi}\hat{b}^{\dag}$ or $\hat{a}^{\dag2}+e^{i\phi}\hat{b}^{\dag2}$ in a heralded fashion. This alternative scheme is investigated here in some detail by including experimental imperfections such as on-off photodetectors with nonideal efficiency and imperfect single-photon sources. This paper is organized as follows. In Sec. II, we show how the second-order local and nonlocal superposition operations are implemented via the Hong-Ou-Mandel interference. In Sec. III, we investigate their applications to the generation of a NOON state with high particle number and the enhancement of entanglement properties. In Sec. IV, we study in more detail the experimental feasibility of generating a $4004$ state (the $N=4$ NOON state) in terms of the fidelity and the phase sensitivity of a phase measurement, and in Sec. V, our main results are summarized. \section{Second-order coherent superposition operation via HOM interference} In this section, we propose an optical method to implement a coherent superposition operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$ in a heralded fashion via the HOM interference. Our scheme is depicted in Fig. 1, where the success of the operation is heralded by the detection of a single photon at both photodetectors $\rm SPD_1$ and $\rm SPD_2$. When a two-mode Fock-state input $|n\rangle_c|m\rangle_d$ is injected into a 50:50 beam splitter, the output state is given by \begin{eqnarray} \hat{B}_{cd}|n\rangle_c|m\rangle_d=\frac{(\hat{c}^{\dag}+\hat{d}^{\dag})^n(-\hat{c}^{\dag}+\hat{d}^{\dag})^m|0\rangle_{cd}}{\sqrt{2^{n+m}n!m!}}. \end{eqnarray} For the input $\{n,m\}=\{1,0\}$ or $\{0,1\}$, the output is the single-photon entangled state $\frac{1}{\sqrt{2}}(|1,0\rangle\pm|0,1\rangle)$.
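The combinatorial content of Eq. (1) can be checked with a short numerical sketch (ours, not part of the original scheme; the function name is hypothetical): expanding $(\hat{c}^{\dag}+\hat{d}^{\dag})^n(-\hat{c}^{\dag}+\hat{d}^{\dag})^m$ term by term reproduces the HOM bunching for the $\{1,1\}$ input.

```python
from math import comb, factorial, sqrt

def beam_splitter_output(n, m):
    """Output amplitudes of B|n,m> per Eq. (1): expand the operator
    product (c^dag + d^dag)^n (-c^dag + d^dag)^m acting on vacuum."""
    amps = {}
    norm = sqrt(2 ** (n + m) * factorial(n) * factorial(m))
    for i in range(n + 1):        # i factors of c^dag from the first binomial
        for j in range(m + 1):    # j factors of (-c^dag) from the second
            coeff = comb(n, i) * comb(m, j) * (-1) ** j
            nc, nd = i + j, (n - i) + (m - j)
            amps[(nc, nd)] = amps.get((nc, nd), 0) + coeff
    # |j,k> = c^dag^j d^dag^k |0> / sqrt(j! k!)
    return {s: c * sqrt(factorial(s[0]) * factorial(s[1])) / norm
            for s, c in amps.items() if c != 0}

out = beam_splitter_output(1, 1)  # HOM input: one photon per port
```

For the $\{1,1\}$ input the $(1,1)$ amplitude cancels exactly, and all the population sits in $|2,0\rangle$ and $|0,2\rangle$ with probability $1/2$ each, as stated in the text.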
On the other hand, for the input $\{n,m\}=\{1,1\}$, the output is a NOON state $\frac{1}{\sqrt{2}}(|2,0\rangle-|0,2\rangle)$. Conversely, as beam splitting is a reversible unitary operation, these results imply that the detection of a single photon at $\rm SPD_1$ and no photons at $\rm SPD_2$, or vice versa, projects the input state to $\frac{1}{\sqrt{2}}(|1,0\rangle\pm|0,1\rangle)=\frac{1}{\sqrt{2}}(\hat{c}^{\dag}\pm\hat{d}^{\dag})|0,0\rangle$. On the other hand, the detection of a single photon at each of the detectors $\rm SPD_1$ and $\rm SPD_2$ projects the input state to $\frac{1}{\sqrt{2}}(|2,0\rangle-|0,2\rangle)=\frac{1}{2}(\hat{c}^{\dag2}-\hat{d}^{\dag2})|0,0\rangle$. When the above two-mode projective measurement is made on the idler modes from two independent NDPAs, the superposition operations $\hat{a}^{\dag}\pm\hat{b}^{\dag}$ and $\hat{a}^{\dag2}-\hat{b}^{\dag2}$ can be implemented on the signal modes. This is because the interaction between the signal mode $a$ ($b$) and the idler mode $c$ ($d$) within the NDPA creates and annihilates photons in a pair-wise fashion. For example, the detection of two photons at the idler mode immediately implies the two-photon creation at the signal mode. Therefore, the projection on the idler modes is identically mapped to a projection on the signal modes. It is in a sense an entanglement swapping by a projective measurement \cite{Zukowski,Pan}: the two signal modes, initially uncorrelated, become entangled by the projection of the idler modes to an entangled state. More rigorously, when an arbitrary two-mode state $|\psi\rangle_{ab}$ is injected into the signal modes of two NDPAs with both idler modes in a vacuum state, the output state is given by \begin{eqnarray} &&\hat{S}_{ac}(\xi_1)\hat{S}_{bd}(\xi_2)|\psi\rangle_{ab}|0\rangle_{cd},\nonumber\\ &&=\exp(-\xi_1\hat{a}^{\dag}\hat{c}^{\dag}+\xi^*_1\hat{a}\hat{c})\exp(-\xi_2\hat{b}^{\dag}\hat{d}^{\dag}+\xi^*_2\hat{b}\hat{d})|\psi\rangle_{ab}|0\rangle_{cd}.
\nonumber\\ \end{eqnarray} Next, the 50:50 beam splitter with the transformations $\hat{c}\rightarrow \frac{1}{\sqrt{2}}(\hat{c}+\hat{d})$ and $\hat{d}\rightarrow \frac{1}{\sqrt{2}}(-\hat{c}+\hat{d})$ yields \begin{eqnarray} &&\hat{B}_{cd}\hat{S}_{ac}(\xi_1)\hat{S}_{bd}(\xi_2)|\psi\rangle_{ab}|0\rangle_{cd},\nonumber\\ &&=e^{-\frac{\hat{a}^{\dag}}{\sqrt{2}}(\hat{c}^{\dag}+\hat{d}^{\dag})e^{i\phi_1}\tanh{s_1}} e^{-\frac{\hat{b}^{\dag}}{\sqrt{2}}(-\hat{c}^{\dag}+\hat{d}^{\dag})e^{i\phi_2}\tanh{s_2}}\nonumber\\ &&e^{-\hat{a}\hat{a}^{\dag}\ln(\cosh{s_1})}e^{-\hat{b}\hat{b}^{\dag}\ln(\cosh{s_2})}|\psi\rangle_{ab}|0\rangle_{cd}, \end{eqnarray} where $\xi_1=s_1e^{i\phi_1}$ and $\xi_2=s_2e^{i\phi_2}$. With the coincident detection of single photons at SPD1 and SPD2, the state is projected to \begin{eqnarray} |\Psi\rangle_{ab}&=&_{cd}\langle 11|\hat{B}_{cd}\hat{S}_{ac}(\xi_1)\hat{S}_{bd}(\xi_2)|\psi\rangle_{ab}|0\rangle_{cd} \nonumber\\ &=& \frac{1}{2}(\hat{a}^{\dag 2}e^{2i\phi_1}\tanh^2{s_1}-\hat{b}^{\dag 2}e^{2i\phi_2}\tanh^2{s_2})\nonumber\\ &&e^{-\hat{a}\hat{a}^{\dag}\ln(\cosh{s_1})}e^{-\hat{b}\hat{b}^{\dag}\ln(\cosh{s_2})}|\psi\rangle_{ab}. \end{eqnarray} Under the weak-coupling condition $s_1\approx s_2 \ll1$, the output field is approximated to $|\Psi\rangle_{ab}\approx (\hat{a}^{\dag 2}+\hat{b}^{\dag 2}e^{i\phi})|\psi\rangle_{ab}$, where $\phi=2(\phi_2-\phi_1)+\pi$. The phase $\phi$ can be controlled by adjusting the phases of the pumping fields to the NDPAs, which determine the coupling constants $\xi_1$ and $\xi_2$ proportional to the second-order susceptibility of the nonlinear medium \cite{Walls}. Obviously, the interferometric effect described above can be realized only if the optical paths from the two idler modes to the beam splitter have the same length in Fig. 1. In a pulsed-mode implementation of our scheme, the two down converters may be pumped with external fields repeatedly at the same predetermined times.
Then, the photodetections at SPD1 and SPD2 do not reveal any information on the origin of the photons, thus realizing the coherent operation $\hat{a}^{\dag 2}+\hat{b}^{\dag 2}e^{i\phi}$ on the input two-mode signal. \begin{figure} \caption{Optical scheme to implement a coherent superposition operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$ on an arbitrary two-mode state $|\psi\rangle_{ab}$. BS is a 50:50 beam splitter; SPD1 and SPD2 are single-photon detectors. The successful implementation of the superposition operation is heralded by the coincident detection of single photons at SPD1 and SPD2.} \label{FIG_1} \end{figure} If desired, one can also change the ratio of the operations $\hat{a}^{\dag 2}$ and $\hat{b}^{\dag 2}$ in the superposition by inserting additional beam splitters between the down converters and the 50:50 beam splitter, as shown in Fig. 2 (a). By the coincident detection of single photons at SPD1 and SPD2, together with the non-detection at the additional detectors PDs, the output state is now reduced to \begin{eqnarray} &&_{cd}\langle 11|_{ef}\langle 00|\hat{B}_{cd}\hat{B}_{ce}\hat{B}_{df}\hat{S}_{ac}(\xi_1)\hat{S}_{bd}(\xi_2)|\psi\rangle_{ab}|0\rangle_{cdef}\nonumber\\ &=& \frac{1}{2}(\hat{a}^{\dag 2}t^{*2}_1e^{2i\phi_1}\tanh^2{s_1}-\hat{b}^{\dag 2}t^{*2}_2e^{2i\phi_2}\tanh^2{s_2})\nonumber\\ &&e^{-\hat{a}\hat{a}^{\dag}\ln(\cosh{s_1})}e^{-\hat{b}\hat{b}^{\dag}\ln(\cosh{s_2})}|\psi\rangle_{ab}, \end{eqnarray} where the beam splitter $\hat{B}_{ce}$ ($\hat{B}_{df}$) transforms the initial modes as $\hat{c}\rightarrow t_1\hat{c}+r_1\hat{e}$ and $\hat{e}\rightarrow t_1\hat{e}-r_1\hat{c}$ ($\hat{d}\rightarrow t_2\hat{d}+r_2\hat{f}$ and $\hat{f}\rightarrow t_2\hat{f}-r_2\hat{d}$). $t_{1(2)}$ and $r_{1(2)}$ denote the transmissivity and reflectivity of the beam splitters, respectively.
Under the weak-coupling condition $s_1\approx s_2 \ll1$, the output field is approximated to $(\hat{a}^{\dag 2}+e^{i\phi}\gamma\hat{b}^{\dag 2})|\psi\rangle_{ab}$, with the ratio given by $\gamma=\frac{t^{*2}_2}{t^{*2}_1}$. The nonlocal superposition operation $\hat{a}^{\dag 2}+e^{i\phi}\gamma\hat{b}^{\dag 2}$ can also be transformed to a local superposition operation $\hat{a}^{2}+\gamma\hat{a}^{\dag 2}$, as shown in Fig. 2 (b), by replacing one down converter with a beam splitter of high transmissivity. The output state now reads \begin{eqnarray} &&_{bd}\langle 11|_{ce}\langle 00|\hat{B}_{bd}\hat{B}_{bc}\hat{B}_{de}\hat{S}_{ad}\hat{B}_{ab}|\psi\rangle_a|0\rangle_{bcde}\nonumber\\ &&\approx -\frac{1}{2} [\hat{a}^2(\frac{r^*}{t})^2t^{*2}_1+\hat{a}^{\dag 2}\xi^2t^{* 2}_2]|\psi\rangle_a, \end{eqnarray} where $t$ ($r$) is the transmissivity (reflectivity) of $\hat{B}_{ab}$, and $t_{1(2)}$ is the transmissivity of $\hat{B}_{bc(de)}$. At $(\frac{r^*}{t})^2 \approx \xi^2 \ll 1$, the output field is approximated to $(\hat{a}^{2}+\gamma\hat{a}^{\dag 2})|\psi\rangle_a$. For the case of $\gamma=1$, one can simplify the implementation of the local operation $\hat{a}^{2}+\hat{a}^{\dag 2}$ by removing modes $c$ and $e$, the two beam splitters at the center, and the additional detectors PDs in Fig. 2 (b), which recovers a fully heralded scheme. \begin{figure}\label{FIG_2} \end{figure} \section{Applications} The second-order coherent superposition operations can be employed for a number of applications, and we particularly investigate the generation of NOON states and the enhancement of entanglement properties for CV states. First, as shown in Fig.
3, the coherent operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$, when consecutively applied, can produce a NOON state with high particle number for even $N$ as \begin{eqnarray} (\hat{a}^{\dag N}+\hat{b}^{\dag N})|0,0\rangle =\prod_{k=1}^{N/2}(\hat{a}^{\dag 2}+e^{i\phi_k}\hat{b}^{\dag 2})|0,0\rangle, \end{eqnarray} with the choice of $\phi_k=\frac{4\pi k}{N}$. When an even-number NOON state is prepared, an odd-number NOON state can also be obtained by applying a coherent photon subtraction $\hat{a}+\hat{b}$, that is, $(\hat{a}+\hat{b})\left(|N,0\rangle+|0,N\rangle\right)\sim|N-1,0\rangle+|0,N-1\rangle$. We have previously seen in Ref. \cite{Lee3} that a high fidelity for an odd-$N$ state can be achieved from the coherent photon subtraction, even with a very low detector efficiency used for the heralding, if the initial even-$N$ NOON state can be generated with high fidelity. In the next section, we investigate in more detail the generation of NOON states considering experimental imperfections for a realistic application. \begin{figure} \caption{Generation of NOON states via a successive application of the coherent operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$.} \label{FIG_3} \end{figure} Second, the local coherent superposition operation $\hat{a}^2+\gamma\hat{a}^{\dag 2}$ can be useful to enhance the entanglement properties of a CV entangled state, e.g., the two-mode squeezed vacuum state (TMSS), $|S_{\rm TMSS}\rangle_{AB}=\sqrt{1-\lambda^2}\sum^{\infty}_{n=0}\lambda^n|n\rangle_A|n\rangle_B~(\lambda=\tanh{s})$. In Fig. 4 (a), the degree of entanglement, which can be quantified by the von Neumann entropy of the reduced density operator for a pure state \cite{Bennett}, is compared between the states obtained with first-order and second-order superposition operations on the TMSS.
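For the baseline curve, the entropy of the unmodified TMSS follows directly from its Schmidt spectrum $p_n=(1-\lambda^2)\lambda^{2n}$; a minimal numerical sketch (the function name is ours, not from the original):

```python
import numpy as np

def tmss_entropy(s, nmax=500):
    """Von Neumann entropy (in bits) of one mode of the two-mode squeezed
    vacuum sqrt(1 - lam^2) * sum_n lam^n |n,n>, with lam = tanh(s)."""
    lam = np.tanh(s)
    n = np.arange(nmax)
    p = (1 - lam ** 2) * lam ** (2 * n)   # Schmidt (thermal) spectrum
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

The result agrees with the closed form $\cosh^2\!s\,\log_2\cosh^2\!s-\sinh^2\!s\,\log_2\sinh^2\!s$ and grows monotonically with $s$.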
It is clearly seen that the entanglement is improved more by the second-order coherent operation than by the first-order coherent operation $ t\hat{a}+r\hat{a}^{\dag}$ with $|t|^2+|r|^2=1$. Moreover, the second-order operation can significantly improve the Einstein-Podolsky-Rosen (EPR) correlation characterized by the condition $\Delta^2(\hat{x}_A-\hat{x}_B)+\Delta^2(\hat{p}_A+\hat{p}_B)<2$, where $\hat{x}_j=\frac{1}{\sqrt{2}}(\hat{a}_j+\hat{a}^{\dag}_j)$ and $\hat{p}_j=\frac{1}{i\sqrt{2}}(\hat{a}_j-\hat{a}^{\dag}_j)$ ($j=A,B$) \cite{Duan}. In particular, the improvement of the EPR correlation by the second-order operation is more pronounced in the small-squeezing region, $0.08 < s < 0.47$, as shown in Fig. 4 (b), which may thus provide a practical advantage. \begin{figure} \caption{(a) Entanglement quantified by the von Neumann entropy and (b) EPR correlation as functions of the squeezing parameter $s$ for the states: $(t\hat{a}^2+r\hat{a}^{\dag 2})(t\hat{b}^2+r\hat{b}^{\dag 2})|S_{\rm TMSS}\rangle$ (blue solid line), $(t\hat{a}+r\hat{a}^{\dag})(t\hat{b}+r\hat{b}^{\dag})|S_{\rm TMSS}\rangle$ (red dotted), and $|S_{\rm TMSS}\rangle$ (black dashed). The value $r$ in each coherent operation ($t^2+r^2=1$) is optimized at each point of $s$. } \label{FIG_4} \end{figure} \section{Experimental feasibility} \begin{figure} \caption{Fidelity between the ideal NOON state $|4,0\rangle-|0,4\rangle$ and the state $\rho_{\rm out}$ [Eq. (8)] obtained by applying $\hat{a}^{\dag 2}+\hat{b}^{\dag 2}$, using on-off detectors with efficiency $\eta$, to a two-photon state $(|20\rangle-|02\rangle)/\sqrt{2}$, (a) as a function of the coupling strength $s$ of the NDPAs and the detector efficiency $\eta$ and (b) as a function of $s$ for $\eta=0.66$. } \label{FIG_5} \end{figure} In this section, we consider realistic experimental conditions in implementing our proposed scheme for the superposition operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$ and also the generation of a NOON state.
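Before turning to imperfections, the product identity of Eq. (7) underlying the NOON-state generation can be verified numerically (our own sketch; the function name is hypothetical). Since $\hat{a}^{\dag}$ and $\hat{b}^{\dag}$ commute, the product expands as a homogeneous polynomial in $x=\hat{a}^{\dag2}$ and $y=\hat{b}^{\dag2}$ whose interior coefficients all cancel, leaving only the $\hat{a}^{\dag N}$ and $\hat{b}^{\dag N}$ terms (up to a relative phase set by the convention for $\phi_k$).

```python
import numpy as np

def noon_product_coeffs(N):
    """Coefficients of prod_{k=1}^{N/2} (x + e^{i phi_k} y), phi_k = 4 pi k / N,
    with x = a^dag^2 and y = b^dag^2 treated as commuting variables.
    Entry j is the coefficient of x^(N/2 - j) y^j."""
    assert N % 2 == 0
    poly = np.array([1.0 + 0j])
    for k in range(1, N // 2 + 1):
        factor = np.array([1.0, np.exp(4j * np.pi * k / N)])
        poly = np.convolve(poly, factor)  # multiply in one linear factor
    return poly

coeffs = noon_product_coeffs(8)  # only the extreme entries survive
```

Applied to the vacuum, the surviving extreme terms give $|N,0\rangle$ and $|0,N\rangle$ components of equal weight, i.e. a NOON state.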
Ideally, the second-order superposition operation is heralded by the detection of exactly one photon at each of the detectors SPD1 and SPD2 in Fig. 1. Here we consider each SPD as an on-off detector with efficiency $\eta$ that can only distinguish two events, detection and non-detection, without photon-number resolution. It can be characterized by a two-component positive-operator-valued measure (POVM) \cite{Mog,Rossi}, $\hat{\Pi}_0=\sum_n (1-\eta)^n|n\rangle\langle n|$ (no click), and $\hat{\Pi}_1=\hat{I}-\hat{\Pi}_0$ (click). Suppose that one first prepares a NOON state $|\psi_2\rangle\equiv\frac{1}{\sqrt{2}}\left(|2\rangle_a|0\rangle_b-|0\rangle_a|2\rangle_b\right)$ deterministically, to which the superposition operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$ heralded by nonideal on-off detectors is applied. This yields an output state \begin{eqnarray} \rho_{\rm out}=\frac{{\rm Tr}_{cd}\{\hat{\Pi}_1^c\hat{\Pi}_1^d\hspace{0.2cm}{\hat U}_1\rho_{\rm in}{\hat U}_1^\dag\}}{{\rm Tr}_{abcd}\{\hat{\Pi}_1^c\hat{\Pi}_1^d\hspace{0.2cm}{\hat U}_1\rho_{\rm in}{\hat U}_1^\dag\}}, \end{eqnarray} where $\rho_{\rm in}\equiv|\psi_2\rangle\langle\psi_2|_{ab}\otimes|0\rangle\langle0|_{cd}$ and ${\hat U}_1\equiv \hat{B}_{cd}\hat{S}_{ac}\hat{S}_{bd}$. We evaluate the performance of our scheme by investigating the fidelity $F$ between $\rho_{\rm out}$ and the ideal NOON state $|\psi_4\rangle= \frac{1}{\sqrt{2}}\left(|4\rangle_a|0\rangle_b-|0\rangle_a|4\rangle_b\right)$, i.e., $F=\langle \psi_4|\rho_{\rm out}|\psi_4\rangle$. In Fig. 5 (a), we plot the fidelity $F$ as a function of the coupling strength $s$ of the two NDPAs and the on-off detector efficiency $\eta$. We see that a high fidelity is achievable even with a very low detector efficiency $\eta$, a practical advantage typical of heralded schemes.
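The on-off POVM above is straightforward to realize in a truncated Fock basis; the short sketch below (ours, with hypothetical function names) shows that a single photon clicks with probability exactly $\eta$, while multi-photon components click with higher probability, which is why the detectors cannot certify exactly one photon.

```python
import numpy as np

def povm_no_click(eta, dim):
    """On-off detector POVM element Pi_0 = sum_n (1-eta)^n |n><n|
    in a Fock basis truncated at dim photons."""
    return np.diag([(1.0 - eta) ** n for n in range(dim)])

def click_prob(amps, eta):
    """P(click) = 1 - <psi|Pi_0|psi> for a pure state with Fock amplitudes."""
    a = np.asarray(amps, dtype=complex)
    return float(1 - np.real(a.conj() @ povm_no_click(eta, len(a)) @ a))
```

For the Fock state $|1\rangle$ this gives $P(\mathrm{click})=\eta$, and for $|2\rangle$ it gives $1-(1-\eta)^2>\eta$.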
The fidelity decreases with the coupling strength $s$ because a larger $s$ makes multi-photon generation within the NDPAs more substantial, and the on-off detectors cannot distinguish multi-photon events from exactly one photon. Nevertheless, the fidelity remains considerably high up to $s\sim0.2$ \cite{Alexei}. In Fig. 5 (b), we plot the fidelity for the case of $\eta=0.66$, which is the detection efficiency currently available \cite{Achilles,Fitch,Brida}. The fidelity remains above $0.86$ over the whole range $s<0.2$. In the above analysis, the initial state was assumed to be a pure two-photon state $|2\rangle_a|0\rangle_b-|0\rangle_a|2\rangle_b$, which can be generated by injecting a perfect single-photon state into each input mode of a 50:50 beam splitter. Now, let us address the case of imperfect single-photon sources, denoted by $\rho_s=(1-p)|0\rangle\langle 0|+p|1\rangle\langle 1|$, which are a mixture of a vacuum state and a single-photon state. When this imperfect state $\rho_s$ is injected into each input of a 50:50 beam splitter and the coherent operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$ is subsequently applied to the output from the beam splitter, we obtain a non-ideal NOON state $\rho_{\rm out,r}$ for $N=4$, \begin{eqnarray} \rho_{\rm out,r}=\frac{{\rm Tr}_{cd}\{\hat{\Pi}_1^c\hat{\Pi}_1^d\hspace{0.2cm}{\hat U}_2\rho_{\rm in,r}{\hat U}_2^\dag\}}{{\rm Tr}_{abcd}\{\hat{\Pi}_1^c\hat{\Pi}_1^d\hspace{0.2cm}{\hat U}_2\rho_{\rm in,r}{\hat U}_2^\dag\}}, \end{eqnarray} where $\rho_{\rm in,r}\equiv\rho^a_{s}\otimes\rho^b_{s}\otimes|0\rangle\langle0|_{cd}$ and ${\hat U}_2\equiv \hat{B}_{cd}\hat{S}_{ac}\hat{S}_{bd}\hat{B}_{ab}$. We compare $\rho_{\rm out,r}$ with the ideal state $|\psi_4\rangle= \frac{1}{\sqrt{2}}\left(|4\rangle_a|0\rangle_b-|0\rangle_a|4\rangle_b\right)$ in terms of fidelity.
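For orientation, the parity-based phase estimation considered next saturates the Heisenberg limit for an ideal NOON state: with signal $\langle\hat{\Pi}_b\rangle=\cos(N\varphi)$, error propagation gives $\Delta\varphi=1/N$. A minimal sketch of that error-propagation formula (our own function name, for the ideal state only):

```python
import numpy as np

def parity_phase_sensitivity(N, phi):
    """Delta(phi) = Delta(Pi) / |d<Pi>/dphi| with <Pi> = cos(N*phi),
    valid for an ideal NOON state probed by photon-number parity."""
    mean = np.cos(N * phi)
    variance = 1.0 - mean ** 2       # parity squared is the identity
    slope = -N * np.sin(N * phi)
    return float(np.sqrt(variance) / abs(slope))
```

The $\varphi$-dependence cancels (wherever the slope is nonzero), leaving the constant $1/N$, below the shot-noise limit $1/\sqrt{N}$.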
Furthermore, we also investigate the phase sensitivity $\Delta\varphi$ of a phase measurement performed with the output state $\rho_{\rm out,r}$ in a Mach-Zehnder interferometer. The interferometer can be designed to estimate the phase difference $\varphi$ between two optical paths by measuring the photon-number parity of an output field \cite{Dowling}. The sensitivity $\Delta\varphi$ in this case is given by $\Delta\varphi=\Delta\Pi_b/|\frac{\partial \langle \hat{\Pi}_b\rangle}{\partial \varphi}|$, where $\hat{\Pi}_b=(-1)^{\hat{b}^{\dag}\hat{b}}$ corresponds to the parity of an output field from the interferometer. In Fig. 6 (a), we show the phase sensitivity as a function of the single-photon source efficiency $p$ and the on-off detector efficiency $\eta$ at the coupling strength $s=0.05$ of the NDPAs. The colored region represents the phase sensitivity below the shot-noise limit using $\rho_{\rm out,r}$, for which the single-photon efficiency $p>0.68$ is required. We note that the value of $p=0.69$ was previously reported in the experiment of Ref. \cite{Lvovsky}. In Fig. 6 (b), we also show the fidelity as a function of $p$ and $\eta$, which can reach the value of, e.g., 0.595 with $p=0.69$. Both the phase sensitivity $\Delta \varphi$ and the fidelity $F$ are significantly affected by the single-photon efficiency $p$, but are rather insensitive to the efficiency $\eta$ of the on-off photodetectors. \begin{figure} \caption{(a) Phase sensitivity $\Delta \varphi$ of the phase measurement using the nonideal state $\rho_{\rm out,r}$ [Eq.(9)] and (b) fidelity $F$ between the ideal state $(|40\rangle-|04\rangle)/\sqrt{2}$ and $\rho_{\rm out,r}$ as functions of the single-photon source efficiency $p$ and the on-off detector efficiency $\eta$. (See main text.) The coupling strength of the NDPAs is $s=0.05$. The colored region represents the condition for (a) $\Delta\varphi <0.5$ (below shot-noise level) and (b) $F\ge0.5$.
} \label{FIG_6} \end{figure} \section{Summary} We have proposed an optical scheme to implement a second-order nonlocal superposition operation $\hat{a}^{\dag 2}+e^{i\phi}\hat{b}^{\dag 2}$ and its variants in a heralded fashion via the HOM effect. We also investigated the application of these superposition operations to the generation of NOON states with high particle number and the entanglement concentration of a CV entangled state. Furthermore, we considered experimental imperfections such as on-off photodetectors with nonideal efficiency and imperfect single-photon sources to demonstrate the feasibility of our proposed scheme. In view of the experimental capabilities reported, e.g., in Refs. \cite{Zavatta,Grangier,Alexei,Lvovsky}, our proposal can provide a potentially useful tool for various quantum information tasks. \begin{acknowledgments} S.Y.L. thanks J. Bae for a helpful discussion. This work is supported by NPRP Grant 08-043-1-011 from the Qatar National Research Fund. H.N. also acknowledges support from a research fellowship from the Alexander von Humboldt Foundation. \end{acknowledgments} \end{document}
\begin{document} \begin{titlepage} \begin{center} {\huge \textbf{Pulse Design in Solid-State}}\\[4pt] {\huge \textbf{Nuclear Magnetic Resonance}}\\[4pt] {\textsc{Study and Design of Dipolar Recoupling Experiments}}\\ {\textsc{in Spin-$1/2$ Nuclei}}\\[4cm] \includegraphics[width=0.3\textwidth]{Figs/AUlogo_bw}\\[1.0cm] \textsc{Ravi Shankar Palani}\\[1cm] {Ph.D. Dissertation,} \\ {under the guidance of Prof. Niels Christian Nielsen,}\\ {Interdisciplinary Nanoscience Center (iNANO),}\\ {Faculty of Science, Aarhus University.}\\[1cm] {\textsc{October 2016}}\\ \begin{center} \includegraphics[width=0.3\linewidth]{Figs/au_inano_2} \end{center} \end{center} \end{titlepage} \pagestyle{fancy} \frontmatter {\fancyhead[LO]{} \fancyhead[RE]{} \fancyhead[LE,RO]{} \fancyfoot[C]{\footnotesize \thepage} \chapter*{Abstract} \addcontentsline{toc}{chapter}{\numberline{}Abstract} The work presented in this dissertation is centred on the theory of experimental methods in solid-state Nuclear Magnetic Resonance (NMR) spectroscopy, which deals with the interaction of electromagnetic radiation with nuclei that are placed in a magnetic field and possess a fundamental quantum mechanical property called spin. Nuclei with spin number $1/2$ are the focus here. Unlike in the liquid state, the spin interactions that depend on the orientation of the sample with respect to the external magnetic field are not averaged out in the solid state and therefore lead to broad, indistinct signals. To avoid this, the solid sample is spun rapidly about a particular axis, thereby averaging out the orientation-dependent interactions. However, these interactions are also typically a major source of structural information and a means to improve spectral resolution and sensitivity. Therefore it is crucial to be able to reintroduce them as and when needed in the course of an experiment. This is achieved using radio-frequency pulses, a controlled oscillating magnetic field applied transverse to the external magnetic field.
The class of pulse sequences that specifically reintroduces space-mediated spin-spin (termed dipolar coupling) interactions is referred to as dipolar recoupling pulse sequences and forms the subject of this dissertation. NMR experiments that involve repeating (periodic) pulse sequences are generally understood by finding an average or effective Hamiltonian (interaction, in the language of physics), which approximates the spin dynamics to a great extent and often offers insights into the workings of the experiment that a full numerical simulation of the spin dynamics does not. The average Hamiltonian is found using either the intuitively named average Hamiltonian theory (AHT) or the Floquet theory, where the former is the most widely used approach in NMR. AHT and Floquet theory, which use the mathematical tools of the Magnus expansion and Van Vleck transformations, respectively, yield the same effective Hamiltonian, valid for stroboscopic observation. The equivalence between the theories has been discussed in the literature in the past. Generalised expressions for the effective Hamiltonian using AHT are derived in the frequency domain in this dissertation, to allow for appreciation of the equivalence with the Floquet-theory-derived effective Hamiltonian. The derivation relies on the ability to express the time dependency of the interactions with a finite set of fundamental frequencies, which has long been sought and the lack of which has, at times, been misunderstood as a limitation of AHT. A formalism to represent any periodic time-dependent interaction in the Fourier space with no more than two fundamental frequencies for every involved spin is developed and presented here, which allows for the computation of the effective Hamiltonian for any pulsed experiment. 
The formalism has been applied to understand established dipolar recoupling pulse sequences, namely Radio-Frequency-Driven Recoupling (RFDR), Rotor Echo Short Pulse IRrAdiaTION mediated Cross Polarisation ($^{\tr{RESPIRATION}}$CP) and sevenfold symmetric C7 pulse schemes. Limitations of the pulse sequences, in particular their sensitivity to isotropic chemical shift which is a measure of the electron cloud surrounding the nuclei of interest, are addressed by designing novel variants of the pulse sequences, aided by insights gained through the effective Hamiltonian description. \chapter*{Resum\'e} \addcontentsline{toc}{chapter}{\numberline{}Resum\'e} Arbejdet, der pr{\ae}senteres i denne afhandling, omhandler den teori, der ligger til grund for eksperimentelle metoder anvendt indenfor faststof kernemagnetisk resonans (NMR) spektroskopi. Denne gren af spektroskopi besk{\ae}ftiger sig med interaktioner imellem elektromagnetisk str{\aa}ling og atomare kerne, der besidder den kvantemekaniske egenskab spin og er placeret i et magnetfelt. Det prim{\ae}re fokus er p{\aa} atomare kerne med spintal 1/2. I mods{\ae}tning til i v{\ae}skefasen findes der i fast stof spininteraktioner, hvis st{\o}rrelse er afh{\ae}ngig af pr{\o}vens orientering i forhold til det eksterne magnetfelt, og disse giver anledning til brede og ofte overlappende signaler. For at undg{\aa} dette, roteres den faste pr{\o}ve med h{\o}j hastighed omkring en specifik akse s{\aa}ledes, at de orienteringsafh{\ae}ngige interaktioner midles. Interaktionerne er dog vigtige kilder til strukturel information, og det er derfor af afg{\o}rende vigtighed at kunne aktivere dem igen efter behov i l{\o}bet af et eksperiment. Dette opn{\aa}s ved brug af radiofrekvens pulse -- et kontrolleret, oscillerende magnetfelt -- som p{\aa}f{\o}res vinkelret p{\aa} det eksterne magnetfelt. 
Typen af pulssekvenser, der specifikt genintroducerer rumligt medierede spin-spin interaktioner (dipol koblinger), kaldes dipol{\ae}re rekoblings pulssekvenser og er emnet for denne afhandling. NMR eksperimenter, der benytter periodiske pulssekvenser, forst{\aa}s generelt ved at bestemme en gennemsnitlig eller effektiv Hamilton operator (eller interaktions operator), som i vid udstr{\ae}kning approksimerer spin dynamikken og ofte giver indsigt i, hvordan eksperimentet virker, hvilket end ikke en komplet numerisk simulering af dynamikken kan levere. Den gennemsnitlige Hamilton operator findes ved hj{\ae}lp af enten den intuitivt navngivne Average Hamiltonian Theory (AHT) eller Floquet teori, som benytter sig af henholdsvis Magnus ekspansionen og Van Vleck transformationer. Begge producerer den samme effektive Hamilton operator, som beskriver systemet n{\o}jagtigt under foruds{\ae}tning af stroboskopisk observation. Ækvivalensen imellem de to teorier er tidligere blevet diskuteret i litteraturen, men den bliver typisk glemt indenfor NMR feltet. I denne afhandling bliver generaliserede udtryk for effektive Hamilton operatorer udledt ved hj{\ae}lp af AHT i frekvensdom{\ae}net s{\aa}ledes, at {\ae}kvivalensen med de Floquet udledte effektive Hamilton operatorer bliver tydeliggjort. Udledningen afh{\ae}nger af en evne til at udtrykke tidsafh{\ae}ngigheden af interaktionerne med et endeligt s{\ae}t af grundl{\ae}ggende frekvenser, hvilket l{\ae}nge har v{\ae}ret eftertragtet, og manglen p{\aa} denne evne er til tider blevet fejltolket som en begr{\ae}nsning i AHT. En formalisme, der kan repr{\ae}sentere en vilk{\aa}rlig, periodisk, tidsafh{\ae}ngig interaktion i Fourierrummet med h{\o}jest to grundl{\ae}ggende frekvenser per involveret spin, udvikles og pr{\ae}senteres her. Formalismen er blevet anvendt til at forst{\aa} eksisterende dipol{\ae}re rekoblings pulssekvenser. 
Specifikt Radio-Frequency-Driven Recoupling (RFDR), Rotor Echo Short Pulse IRrAdiaTION mediated Cross polarisation ($^{\tr{RESPIRATION}}$CP) og syvfoldigt symmetriske C7 pulssekvenser. Pulssekvensernes begr{\ae}nsninger -- specielt deres f{\o}lsomhed overfor isotropt kemisk skift, som er et m{\aa}l for elektront{\ae}thedsskyen, der omgiver den unders{\o}gte kerne -- bliver adresseret ved at designe nye varianter med forbedret tolerance ved hj{\ae}lp af indsigt leveret af effektiv Hamilton operator beskrivelsen. \section*{Publications} \begin{enumerate} \item Anders B. Nielsen, Kong Ooi Tan, \underline{Ravi Shankar}, Susanne Penzel, Riccardo Cadalbert, Ago Samoson, Beat H. Meier, and Matthias Ernst: "Theoretical description of $^{\tr{RESPIRATION}}$CP", \textit{Chemical Physics Letters, 645:150-156, 2016} \item Lasse A Straas\o, \underline{Ravi Shankar}, Kong Ooi Tan, Johannes Hellwagner, Beat H. Meier, Michael Ryan Hansen, Niels Chr. Nielsen, Thomas Vosegaard, Matthias Ernst, and Anders B. Nielsen: "Improved transfer efficiencies in radio-frequency driven recoupling solid-state NMR by adiabatic sweep through the dipolar recoupling condition", \textit{The Journal of Chemical Physics, 145(3):034201, 2016} \item Kristoffer Basse$^*$, \underline{Ravi Shankar}$^*$, Morten Bjerring, Thomas Vosegaard, Niels Chr. Nielsen, and Anders B. Nielsen: "Handling the influence of chemical shift in amplitude-modulated heteronuclear dipolar recoupling solid-state NMR", \textit{The Journal of Chemical Physics, 145(9):094202, 2016.}\\ $^{*}$ contributed equally to the work. \item Asif Equbal, \underline{Ravi Shankar}, Michal Leskes, Shimon Vega, Niels Chr. Nielsen, P. K. Madhu: "Significance of symmetry in the nuclear spin Hamiltonian for efficient heteronuclear dipolar decoupling in solid-state NMR: A Floquet description of supercycled rCW schemes". \textit{Manuscript submitted to The Journal of Chemical Physics.} \item \underline{Ravi Shankar}, Matthias Ernst, Beat H. Meier, P. K. 
Madhu, Thomas Vosegaard, Niels Chr. Nielsen and Anders B. Nielsen: "A General Theoretical Description of the Influence of Isotropic Chemical Shift under Dipolar Recoupling Experiments for Solid-State NMR". \textit{Manuscript submitted to The Journal of Chemical Physics.} \end{enumerate} \section*{Overview} \noindent Chapter 1: A brief introduction to the presented work.\\ \noindent Chapter 2: Basic concepts of quantum mechanics relevant to understanding NMR, and descriptions of the interactions encountered in NMR, are detailed.\\ \noindent Chapter 3: Average Hamiltonian theory, employed to obtain the effective Hamiltonian for a pulsed experiment, is explained, along with a discussion on finding propagators for multiples of a time shorter than the period of the interaction frame Hamiltonian. The author recommends a careful reading of this dense chapter for easier understanding of the following chapters.\\ \noindent Chapter 4: The theoretical description used to find the effective Hamiltonian for amplitude-modulated pulse sequences is elaborated, together with applications to explain and design variants of a homonuclear and a heteronuclear dipolar recoupling experiment.\\ \noindent Chapter 5: The theoretical description used to find the effective Hamiltonian for a general (amplitude- and phase-modulated) pulse sequence is detailed, along with an application to describe C-symmetry homonuclear dipolar recoupling experiments. } \tableofcontents \mainmatter \renewcommand{\headrulewidth}{0.4pt} \renewcommand{\footrulewidth}{0pt} \fancyhead[LE,RO]{\footnotesize \thepage} \fancyhead[LO]{\slshape\footnotesize\nouppercase \rightmark} \fancyhead[RE]{\slshape\footnotesize\nouppercase \leftmark} \fancyfoot[C]{} \chapter{Introduction} Spectroscopy is the study of absorption and emission of electromagnetic radiation by matter. 
Nuclear magnetic resonance (NMR)\cite{bloch1946nuclear,purcell1946resonance} is a phenomenon that exploits a fundamental property of certain atomic nuclei, called \textit{spin}, and occurs when the nuclei experience a magnetic field. NMR spectroscopy is a tool utilised to study physical, chemical and biological properties of matter, and finds numerous applications in natural sciences and medicine\cite{lauterbur1973image}, including but not limited to the study of polymers, inorganic materials and biological macromolecules\cite{duer2008solid} like proteins\cite{castellani2002structure} and deoxyribonucleic acid (DNA)\cite{wuthrich1986nmr}. It is the preferred tool for studying membrane proteins\cite{bertelsen2007membrane}, amyloid fibrils\cite{nielsen2009unique} and the like, which do not form high-quality crystals -- a necessary requirement for X-ray crystallography\cite{drenth2007principles}. Investigations of samples using NMR are non-destructive and offer atomic-resolution structural information, advantages that are fundamental to its versatility.\\ The sensitivity of NMR experiments is intrinsically low due to the relatively small energy difference between consecutive spin states, which sets the fundamental scale of the signal and lies in the radio-frequency (rf) range. For certain nuclei, this is worsened by the low natural abundance of NMR-active isotopes. The issue is tackled by transferring polarisation from nuclei of high gyromagnetic ratio to those of low gyromagnetic ratio, and by enhancing the abundance of NMR-active isotopes through isotopic labelling techniques in sample preparation. The significant advantage, and challenge, of solid-state NMR as compared to liquid-state NMR is the direct spectral presence of orientation-dependent (anisotropic) interactions, resulting in low spectral resolution. Such interactions are averaged out in liquid-state samples by fast isotropic molecular motion, and as a result liquid-state NMR spectra are highly resolved. 
The solid-state NMR spectrum is improved by rapidly spinning the sample about a specific axis, known as magic-angle spinning (MAS)\cite{andrew1958nuclear,lowe1959free}, to average out the anisotropic interactions. Rf pulse sequences apply an additional oscillating magnetic field transverse to the dominant external magnetic field. Sequences developed specifically to suppress the effects of interactions not completely averaged out by MAS, called decoupling pulse sequences, also help in this regard\cite{bloom1955effects}. However, anisotropic interactions contain information about the structure, dynamics, and orientation of the spin system under study\cite{andronesi2005determination}, and it is therefore crucial to reintroduce, or recouple, the interactions as and when needed. In the case of the dipole-dipole interaction, the recoupled interaction can be used to transfer magnetisation from one nucleus to another, which is instrumental in distance measurements between nuclei\cite{griffin1998dipolar,ladizhansky2009homonuclear}, together with sensitivity enhancement through polarisation transfers between nuclei and improved resolution in multi-dimensional experiments. Pulse sequences that are specifically designed to address this are known as recoupling pulse sequences, and they form the subject of this dissertation. Such techniques are used to establish correlations among spins, yielding essential information about the structure of the system, as the rate of magnetisation transfer depends on the spatial proximities and the relative orientations of the spins.\\ NMR spectroscopy is a theoretically complex tool, and the interactions are often time-dependent, with the time dependence brought in by MAS and rf irradiation. The spin dynamics of small spin systems can be numerically simulated; however, such simulations rarely offer any understanding of, or insight into, the workings of an experiment. 
An NMR experiment is typically understood by finding an effective or average Hamiltonian that approximates the active interactions present in the spin system under study. The effective Hamiltonian offers better comprehension and aids in the development of pulse sequences to enable or suppress desired interactions. The two most established ways of treating such time-dependent Hamiltonians, in order to obtain a propagator for a spin system, are average Hamiltonian theory (AHT)\cite{waugh1968js,haeberlen1968coherent,burum1981magnus,klarsfeld1989recursive} and Floquet theory\cite{floquet,shirley1965solution,maricq1982application,shimon1996}. AHT is the most widely adopted method in NMR owing to its intuitive approach of arriving at a time-independent Hamiltonian, valid for use at multiples of a certain defined cycle time. Floquet theory transforms the Hamiltonian to a so-called Floquet space, which is infinite in dimension and is a combined Fourier and Hilbert space, where the time dependencies are implicitly represented through frequencies. To obtain a time-independent effective Hamiltonian, the infinite-dimensional matrix is diagonalised, often using a Van Vleck\cite{van1948dipolar} transformation. The effective Hamiltonian so obtained is identical to the effective Hamiltonian obtained using AHT, as shown by Llor\cite{llor1992equivalence}. However, to show the equivalence for any general pulse sequence, the time-dependent Hamiltonian has to be written as a Fourier series\cite{fourier1822theorie} with a finite number of fundamental frequencies before the averaging using AHT can be applied. The lack of such a representation has often limited AHT to cases where the interaction-frame time-dependent Hamiltonian is simple and can be expressed using a single fundamental frequency defined by the MAS rate. 
A description enabling the expression of the time-dependent Hamiltonian modulated by any general pulse sequence is developed in this work, and its usefulness in obtaining a time-independent effective Hamiltonian using AHT is detailed. The average Hamiltonian helps describe an experiment with an approximate overall Hamiltonian present in the spin system, in relation to the control rf field parameters. This enables the study of different features of the experiment and helps design better pulse sequences.\\ The above description is put to use in studying the established homonuclear dipolar recoupling experiments Radio-Frequency-Driven Recoupling (RFDR) and sevenfold symmetric C7 pulse schemes, and the heteronuclear dipolar recoupling experiment Rotor Echo Short Pulse IRrAdiaTION mediated Cross Polarisation ($^{\tr{RESPIRATION}}$CP). The analysis is used to explain the effect of chemical shift, in particular the isotropic component, on polarisation transfer, and to design novel variants that minimise its adverse effects. Additionally, a simpler calculation of the effective flip angles about an axis imparted by the combined rf field and isotropic chemical shift is shown to qualitatively explain the effects, without the need for the full calculation of the effective Hamiltonian. \newcommand{\epc}{$\frac{1}{2}(1+c_\beta)$} \newcommand{\emc}{$\frac{1}{2}(1-c_\beta)$} \newcommand{\sts}{$\frac{1}{\sqrt{2}}s_\beta$} \newcommand{$c_\beta$}{$c_\beta$} \newcommand{$s_\beta$}{$s_\beta$} \newcommand{\sbs}{$\sqrt{\frac{3}{8}} s_\beta^2$} \newcommand{\sbt}{$\sqrt{\frac{3}{8}} s_{2 \beta}$} \newcommand{\cop}{$\frac{1}{4}(1+c_\beta)^2$} \newcommand{\com}{$\frac{1}{4}(1-c_\beta)^2$} \newcommand{$c_\beta^2$}{$c_\beta^2$} \newcommand{\slp}{$\frac{1}{2}(3c_\beta^2$-1)} \chapter{Theory of Nuclear Magnetic Resonance} This chapter will cover the fundamentals of NMR by introducing quantum mechanical concepts pertaining to NMR, followed by the mathematical representation of the interactions present in a spin system. 
The concepts are readily found in numerous introductory textbooks\cite{sakurai2011modern,levitt2001spin,mehring2012principles} on quantum mechanics and NMR spectroscopy, and this chapter is intended only as a compendium of relevant concepts. \section{Quantum Mechanical description} \subsection{Angular momentum and Spin} \label{ssec:spin} A rotating object has \textit{angular momentum}. Unlike in classical mechanics, angular momentum in quantum mechanics is \textit{quantized}\cite{pauli1933einige,dirac1930principles,gerlach1922_1,gerlach1922_2,gerlach1922_3}, resulting in a set of allowed discrete stable rotational states (say, for a rotating molecule) specified by a quantum number $\textbf{J}$, which can take any value from the set $\{0,1,2\dots\}$. The total angular momentum $\lvert \vec{L} \rvert$ and the quantum number $\textbf{J}$ are related by $\lvert \vec{L} \rvert^2 = \textbf{J}(\textbf{J}+1)\hbar^2$, where $\hbar = 1.0545718 \times 10^{-34}$J$\cdot$s is a fundamental constant known as the reduced Planck constant. The z-component of the angular momentum is given by $L_z = m \hbar$, where $m$ takes one of the $2\textbf{J}+1$ values, $m \in \{-\textbf{J}, -\textbf{J}+1 \dots, \textbf{J}-1, \textbf{J}\}$.\\ \textit{Spin} is similar to angular momentum in the sense that the mathematics of spin is reminiscent of that of angular momentum; the physics, however, is not. Spin is an \textit{intrinsic} property of a particle and not a consequence of any rotational motion. It is present even at a temperature of absolute zero, unlike rotational angular momentum, which is a function of temperature. Every elementary particle has a particular spin quantum number $\textbf{S}$, which takes an integer or a half-integer value. The angular momentum $\lvert \vec{L} \rvert$ of a particle due to its spin is related to the spin quantum number $\textbf{S}$ by $ \lvert \vec{L} \rvert^2 = \textbf{S}(\textbf{S}+1)\hbar^2$. 
Spin also possesses a magnetic moment $\vec{\mu}$ that is related to the spin angular momentum by $\vec{\mu} = \gamma \vec{L}$, where $\gamma$, defined as the ratio of magnetic moment to angular momentum of the particle of interest, is called the gyromagnetic ratio. Again, there are $2\textbf{S}+1$ possible states with the same $\textbf{S}$ but different $m$ values that describe the z-component of the spin angular momentum, $L_z = m\hbar$. These states are all degenerate unless an external magnetic field is applied. In the presence of an external magnetic field (along the arbitrary z-direction), the associated potential energy of the state is given by \begin{eqnarray} U_{pot} = \int |\vec{\mu} \times \vec{B}_0| \, d\theta = - |\vec{\mu}| |\vec{B}_0| \cos\theta = -m \hbar\gamma |\vec{B}_0|. \label{eq:zeeman_E} \end{eqnarray} That is, states with the same $\vec{L}$ but different $m$ values are separated by an energy gap $|\Delta E| = \Delta m \hbar\gamma |\vec{B}_0|$. This splitting of degenerate energy levels on application of a magnetic field is generally known as Zeeman splitting, and the energy difference falls in the radio-frequency range for nuclear spins.\\ For an object with only a magnetic moment, say a compass needle, the external magnetic field produces a torque that aligns the magnetic moment along the magnetic field. However, for spins, which possess both a magnetic moment and a proportional angular momentum, the case is different. The torque $\vec{\tau}$ resulting from the action of the external magnetic field $\vec{B}_0$ on the magnetic moment $\vec{\mu}$ changes the direction of the angular momentum $\vec{L}$, fulfilling $\vec{\tau} = \frac{d\vec{L}}{dt}$. 
This motion is referred to as \textit{precession}, and the rate of precession, also called the \textit{Larmor frequency}\cite{levitt2001spin}, is then given by \begin{eqnarray} \omega = \frac{d\phi}{dt} = \frac{1}{|\vec{L}|\sin\theta }\frac{d\vec{L}}{dt} = \frac{-\gamma|\vec{\mu}||\vec{B}_0|\sin\theta}{|\vec{\mu}|\sin\theta} = -\gamma|\vec{B}_0|. \label{eq:larmor} \end{eqnarray} In the \textit{Standard Model}\cite{oerter2006theory}, all matter in the universe is made of elementary particles, the two basic types of which are quarks and leptons -- six of each kind, all of which are spin-$\frac{1}{2}$. The most stable lepton, the \textit{electron}, has an electric charge $-e$ along with a spin of $\frac{1}{2}$. The proton and the neutron, each of which is composed of three quarks with only two of them aligned in parallel\footnote{At high energies, it is possible that all three quarks are aligned parallel, resulting in a spin-$\frac{3}{2}$ state of the proton/neutron. But these are beyond the case-space of magnetic resonance.}, have a net spin of $\frac{1}{2}$ and net electric charges of $e$ and $0$, respectively.\\ \subsection{Spin states} In quantum mechanics, the state of a quantum system is described by a time-dependent wave function $\psi(t)$. This can be represented in Dirac notation as a linear combination, or quantum superposition, of an orthonormal basis set $\{\ket{n}\}$, \begin{equation} \begin{aligned} \ket{\psi(t)} = \sum_n c_n(t) \ket{n}, \end{aligned} \end{equation} where the $c_n(t)$ are complex amplitudes as functions of time. The amplitude squared, $|c_n(t)|^2$, gives the probability of measuring the spin to be in the state $\ket{n}$ at time $t$. This is a fundamental law of quantum mechanics -- the Born rule\cite{born1926} -- that has not yet been derived from the other postulates of quantum mechanics\cite{LaFlamme2012}. A state that can be described by a single ket, like above, is called a \textit{pure state}. 
For a system that is a statistical ensemble of numerous pure states, it is not possible to represent the system with a single ket. Such a state is termed a \textit{mixed state}. It is worth illustrating the difference between a quantum superposition of pure states and a mixed state: a detector set to measure $\frac{1}{\sqrt{2}}(\ket{\alpha} + \ket{\beta})$, applied to a spin-$\frac{1}{2}$ nucleus in the pure state given by the equal-weight linear combination of $\ket{\alpha}$ and $\ket{\beta}$ (i.e., $\frac{1}{\sqrt{2}}(\ket{\alpha} + \ket{\beta})$), returns 1, whereas the same detector, when used on a mixture of two spin-$\frac{1}{2}$ nuclei where one spin is in state $\ket{\alpha}$ and the other is in $\ket{\beta}$, returns $\frac{1}{2}$. To accommodate mixed states in the formalism, \textit{density operators} are used.\\ The density operator, which describes a quantum state as a function of time, is defined as \begin{equation} \hat{\rho}(t) = \sum_{n} p_n \ket{\psi_n(t)}\bra{\psi_n(t)}, \end{equation} where $p_n$ is the statistical probability of the state $\psi_n(t)$ in the ensemble/mixture. The density operator of a pure state would have $p_{n'} = 1$ for one particular $n'$ in the set $\{n\}$ and 0 for the rest of the set. For an NMR system in thermal equilibrium, the population distribution of spin states follows the Boltzmann distribution. 
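The pure-versus-mixed distinction above is easy to check numerically. The following sketch is illustrative only (not part of the dissertation); it represents $\ket{\alpha}$ and $\ket{\beta}$ as unit vectors and evaluates the detector expectation values using the standard trace rule for density operators:

```python
import numpy as np

# Spin-1/2 basis states |alpha>, |beta> as column vectors.
alpha = np.array([[1.0], [0.0]], dtype=complex)
beta = np.array([[0.0], [1.0]], dtype=complex)

# Normalised superposition (1/sqrt(2))(|alpha> + |beta>); the "detector"
# observable is the projector onto this state.
plus = (alpha + beta) / np.sqrt(2)
detector = plus @ plus.conj().T

# Pure superposition state vs. 50/50 statistical mixture.
rho_pure = plus @ plus.conj().T
rho_mixed = 0.5 * alpha @ alpha.conj().T + 0.5 * beta @ beta.conj().T

# Expectation values via <O> = Tr{rho O}.
p_pure = np.trace(rho_pure @ detector).real
p_mixed = np.trace(rho_mixed @ detector).real
print(p_pure, p_mixed)  # approx. 1.0 and 0.5, as stated above
```

The pure superposition is an eigenstate of the detector projector, so the measurement returns 1 with certainty; the mixture only matches the detector half the time.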
The density operator for such an N-spin system is \begin{eqnarray} \begin{aligned} \hat{\rho}_{eq} = \frac{1}{2^N}\Big(\mathds{1} + \frac{\hbar \gamma B_0}{kT} \sum_{j=1}^{N}I_{jz}\Big) \end{aligned} \end{eqnarray} where $k = 1.38064852 \times 10^{-23}$J$\cdot$K$^{-1}$ is the Boltzmann constant and $T$ is the temperature of the system.\\ The expectation value of any operator $\hat{\mathcal{O}}$, which is defined for a state $\ket{\psi}$ as \begin{equation} \begin{aligned} \epv{\hat{\mathcal{O}}} = \bra{\psi}\hat{\mathcal{O}}\ket{\psi}, \end{aligned} \end{equation} can be reformulated to suit the density operator formalism by making use of the identity $\sum\limits_i \ket{x_i}\bra{x_i} = 1$: \begin{eqnarray} \epv{\hat{\mathcal{O}}} & = & \sum_{i}\sum_{j}\epv{\psi|x_i}\epv{x_i|\hat{\mathcal{O}}|x_j}\epv{x_j|\psi} \nonumber \\ & = & \sum_{i}\sum_{j}\epv{x_j|\psi}\epv{\psi|x_i}\epv{x_i|\hat{\mathcal{O}}|x_j} \nonumber \\ & = & \sum_{i}\sum_{j}\bra{x_j}\hat{\rho}\ket{x_i}\epv{x_i|\hat{\mathcal{O}}|x_j} \nonumber \\ & = & \mathrm{Tr}\{\hat{\rho}\hat{\mathcal{O}}\}, \label{eq:exp_trace} \end{eqnarray} where Tr\{\dots\} refers to the sum of the diagonal elements of the matrix. \subsection{Temporal dynamics} The time evolution of a state $\ket{\psi(t)}$ of a quantum system is described by the Schr{\"o}dinger equation\cite{schrodinger_eqn}, \begin{eqnarray} i\hbar \frac{\partial \ket{\psi(t)}}{\partial t} & = & \hat{\mathcal{H}}(t)\ket{\psi(t)}. \label{eq:sch} \end{eqnarray} This can be extended to describe the time evolution of density operators, which is what is needed to describe NMR systems. 
\begin{eqnarray} i\hbar \frac{\partial \hat{\rho}(t)}{\partial t} & = & i\hbar \frac{\partial \ket{\psi(t)}}{\partial t} \bra{\psi(t)} + i\hbar \ket{\psi(t)} \frac{\partial \bra{\psi(t)}}{\partial t} \nonumber \\ & = & \hat{\mathcal{H}}(t) \ket{\psi(t)}\bra{\psi(t)} - \ket{\psi(t)}\bra{\psi(t)}\hat{\mathcal{H}}(t) \label{eq:vonNeu_Herm}\\ i\hbar\frac{\partial\hat{\rho}}{\partial t}(t) & = & \lbrack \hat{\mathcal{H}}(t),\hat{\rho}(t) \rbrack, \label{eq:vonNeu} \end{eqnarray} where $\hat{\mathcal{H}}(t)$ is the time-dependent Hamiltonian operator, whose Hermitian property is used in Eq. \ref{eq:vonNeu_Herm}, and $\lbrack\dots\rbrack$ denotes the commutator. Eq. \ref{eq:vonNeu} is known as the Liouville-von Neumann equation and governs spin evolution\footnote{ignoring the phenomenon of relaxation.}. The solution to Eq. \ref{eq:vonNeu}, for a \textit{time-independent} Hamiltonian, is \begin{eqnarray} \hat{\rho}(t) = e^{-i\hat{\mathcal{H}}t}\hat{\rho}(0)e^{i\hat{\mathcal{H}}t}. \label{eq:rhot_time_indep} \end{eqnarray} The solution for a time-dependent Hamiltonian can be found by assuming piece-wise time-independence of $\hat{\mathcal{H}}(t)$, resulting in the solution \begin{eqnarray} \hat{\rho}(t) & = & \hat{U}(t)\hat{\rho}(0)\hat{U}^{\dagger}(t). \label{eq:neuSol} \end{eqnarray} The operator $\hat{U}(t)$ in Eq. \ref{eq:neuSol} is referred to as the \textit{propagator} and is described by \begin{eqnarray} \hat{U}(t) & = & e^{-i\hat{\mathcal{H}}_n\int_{t_{n-1}}^{t_n}dt'}\dots e^{-i\hat{\mathcal{H}}_2\int_{t_1}^{t_2}dt'} e^{-i\hat{\mathcal{H}}_1\int_{0}^{t_1}dt'} \nonumber \\ & = & \hat{\mathcal{T}}e^{-i\int_{0}^{t}\hat{\mathcal{H}}(t')dt'}, \label{eq:dyson} \end{eqnarray} where $\hat{\mathcal{T}}$ is the Dyson time-ordering operator\cite{dyson1952divergence}. It is worth noting here that $\ket{\psi(t)} = \hat{U}(t)\ket{\psi(0)}$, which upon differentiation and using Eq. 
\ref{eq:sch} gives \begin{eqnarray} \hat{\mathcal{H}}\ket{\psi(t)} & = & i\hbar\frac{\partial \hat{U}}{\partial t}(t)\ket{\psi(0)} \nonumber \\ \hat{\mathcal{H}}\hat{U}(t)\ket{\psi(0)} & = & i\hbar\frac{\partial \hat{U}}{\partial t}(t)\ket{\psi(0)} \nonumber\\ \implies \frac{\partial \hat{U}}{\partial t} & = & -\frac{i}{\hbar}\hat{\mathcal{H}}\hat{U} \label{eq:dUdt} \end{eqnarray} \subsection{Change of reference frame} \label{sec:int_frame} In the analysis of NMR experiments, it often proves useful to choose a rotating frame for the description. In general, say the spin system is under the effect of a Hamiltonian $\hat{\mathcal{H}}(t)$ given by \begin{eqnarray} \hat{\mathcal{H}}(t) & = & \hat{\mathcal{H}}_{\textrm{int}}(t) + \hat{\mathcal{H}}_{\textrm{ext}}(t), \end{eqnarray} where $\hat{\mathcal{H}}_{\textrm{ext}}(t)$ is the dominant Hamiltonian and $\hat{\mathcal{H}}_{\textrm{int}}(t)$ is the internal Hamiltonian, whose effect under $\hat{\mathcal{H}}_{\textrm{ext}}(t)$ on the spin system is of interest. The propagator $\hat{U}(t)$ corresponding to $\hat{\mathcal{H}}(t)$ can be decomposed as \begin{eqnarray} \hat{U}(t) & = & \hat{U}_{\textrm{ext}}(t)\hat{\tilde{U}}(t). 
\end{eqnarray} Taking the time-derivative on both sides\footnote{Time dependency (t) is dropped for better readability}, \begin{eqnarray} \frac{\partial \hat{U}}{\partial t} & = & \frac{\partial \hat{U}_{\textrm{ext}}}{\partial t}\hat{\tilde{U}} + \hat{U}_{\textrm{ext}}\frac{\partial \hat{\tilde{U}}}{\partial t} \nonumber\\ -\frac{i}{\hbar}\hat{\mathcal{H}}\hat{U} & = & -\frac{i}{\hbar}\hat{\mathcal{H}}_{\textrm{ext}}\hat{U}_{\textrm{ext}}\hat{\tilde{U}} + \hat{U}_{\textrm{ext}}\frac{\partial \hat{\tilde{U}}}{\partial t} \label{eq:intFr1}\\ -\frac{i}{\hbar}\hat{U}_{\textrm{ext}}^{-1}(\hat{\mathcal{H}}_{\textrm{ext}}+\hat{\mathcal{H}}_{\textrm{int}})\hat{U}_{\textrm{ext}}\hat{\tilde{U}} & = & -\frac{i}{\hbar}\hat{U}_{\textrm{ext}}^{-1}\hat{\mathcal{H}}_{\textrm{ext}}\hat{U}_{\textrm{ext}}\hat{\tilde{U}} + \frac{\partial \hat{\tilde{U}}}{\partial t} \label{eq:intFr2}\\ \frac{\partial \hat{\tilde{U}}}{\partial t} & = & -\frac{i}{\hbar}\hat{U}_{\textrm{ext}}^{-1}\hat{\mathcal{H}}_{\textrm{int}}\hat{U}_{\textrm{ext}}\hat{\tilde{U}}, \label{eq:intFr3} \end{eqnarray} where Eq. \ref{eq:intFr1} follows from Eq. \ref{eq:dUdt}. Drawing parallels with Eq. \ref{eq:dUdt}, Eq. \ref{eq:intFr3} can be rewritten as $\frac{\partial \hat{\tilde{U}}}{\partial t} = -\frac{i}{\hbar}\hat{\tilde{\mathcal{H}}}_{\textrm{int}}\hat{\tilde{U}}$, where $\hat{\tilde{\mathcal{H}}}_{\textrm{int}} = \hat{U}_{\textrm{ext}}^{-1}\hat{\mathcal{H}}_{\textrm{int}}\hat{U}_{\textrm{ext}}$. Any operator $\hat{\mathcal{O}}$ can be written in the interaction frame of $\hat{\mathcal{H}}_{\textrm{ext}}$ as $\hat{\tilde{\mathcal{O}}} = \hat{U}_{\textrm{ext}}^{-1}\hat{\mathcal{O}}\hat{U}_{\textrm{ext}}$. The Liouville-von Neumann equation (Eq. \ref{eq:vonNeu}), expressed in this frame as $i\hbar\frac{\partial \hat{\tilde{\rho}}}{\partial t}(t) = \lbrack \hat{\tilde{\mathcal{H}}},\hat{\tilde{\rho}}(t) \rbrack$, is still valid. 
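The decomposition $\hat{U} = \hat{U}_{\textrm{ext}}\hat{\tilde{U}}$ and the interaction-frame equation of motion can be checked numerically. The sketch below is illustrative only (not from the dissertation): it takes a single spin with $\hat{\mathcal{H}}_{\textrm{ext}} = \omega_0\hat{I}_z$ and $\hat{\mathcal{H}}_{\textrm{int}} = \omega_1\hat{I}_x$ (arbitrary frequencies, $\hbar = 1$), propagates $\hat{\tilde{U}}$ piecewise with the interaction-frame Hamiltonian, and compares $\hat{U}_{\textrm{ext}}\hat{\tilde{U}}$ against the directly computed propagator:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators (hbar = 1, Hamiltonians in angular-frequency units).
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

w0 = 2 * np.pi * 5.0   # dominant (external) frequency, arbitrary units
w1 = 2 * np.pi * 0.3   # weak internal interaction
H_ext, H_int = w0 * Iz, w1 * Ix

T, n = 1.0, 4000
dt = T / n

# Direct propagator for the full, time-independent Hamiltonian.
U = expm(-1j * (H_ext + H_int) * T)

# Interaction-frame propagator, built piecewise from
# H_tilde(t) = U_ext(t)^-1 H_int U_ext(t).
U_tilde = np.eye(2, dtype=complex)
for k in range(n):
    t = (k + 0.5) * dt  # midpoint of the k-th time step
    U_ext_t = expm(-1j * H_ext * t)
    H_tilde = U_ext_t.conj().T @ H_int @ U_ext_t
    U_tilde = expm(-1j * H_tilde * dt) @ U_tilde

# The two routes agree: U(T) = U_ext(T) U_tilde(T), up to discretisation error.
error = np.max(np.abs(U - expm(-1j * H_ext * T) @ U_tilde))
print(error)  # small
```

The piecewise product is exactly the construction behind Eq. \ref{eq:dyson}, applied here to the time-dependent interaction-frame Hamiltonian.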
\section{System Hamiltonians} The Hamiltonian in NMR takes the form \begin{equation} \hat{\mathcal{H}}(t) = \hat{\mathcal{H}}_{\textrm{int}}(t) + \hat{\mathcal{H}}_{\textrm{ext}}(t) \end{equation} where $\hat{\mathcal{H}}_{\textrm{ext}}$ represents the Zeeman interaction $\hat{\mathcal{H}}_z$ -- the largest of the NMR interactions, between the static external magnetic field and the nuclear spins. $\hat{\mathcal{H}}_{\textrm{ext}}$ also includes the user-controlled rf interaction $\hat{\mathcal{H}}_{\textrm{rf}}$, a time-dependent magnetic field applied in the plane transverse to the static magnetic field to manipulate nuclear spin polarisations. $\hat{\mathcal{H}}_{\textrm{int}}$ represents a host of internal interactions. This chapter details the mathematical representation of these interactions.\\ \subsection{Tensor representation} \label{ssect:2_tensor_Hamil} A spin-spin interaction Hamiltonian in NMR can generally be represented in the Cartesian basis as \begin{equation} \begin{alignedat}{3} \hat{\mathcal{H}}_{\lambda} &= \vec{\hat{I}}_k\cdot\lambda\cdot\vec{\hat{I}}_n &= \begin{bmatrix} \hat{I}_{kx} & \hat{I}_{ky} & \hat{I}_{kz} \end{bmatrix} \begin{bmatrix} \lambda_{xx} & \lambda_{xy} & \lambda_{xz} \\ \lambda_{yx} & \lambda_{yy} & \lambda_{yz} \\ \lambda_{zx} & \lambda_{zy} & \lambda_{zz} \end{bmatrix} \begin{bmatrix} \hat{I}_{nx} \\ \hat{I}_{ny} \\ \hat{I}_{nz} \end{bmatrix} \end{alignedat} \label{eq:genHamil} \end{equation} where $\vec{\hat{I}}_k$ and $\vec{\hat{I}}_n$ represent the spins and the matrix $\lambda$ represents the strength and spatial dependency of the interaction as a reducible rank-2 tensor. 
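Eq. \ref{eq:genHamil} is straightforward to realise numerically for two spin-$1/2$ nuclei by building the spin operators as Kronecker products. The following sketch is illustrative only (not from the dissertation); as a sanity check, for $\lambda$ equal to the identity the Hamiltonian reduces to $\vec{\hat{I}}_k\cdot\vec{\hat{I}}_n$, with the familiar singlet/triplet eigenvalues $-3/4$ and $+1/4$:

```python
import numpy as np

# Cartesian spin-1/2 operators (hbar = 1).
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
ops = [Ix, Iy, Iz]
E2 = np.eye(2)

def coupling_hamiltonian(lam):
    """H = I_k . lam . I_n on the 4-dimensional two-spin space."""
    H = np.zeros((4, 4), dtype=complex)
    for a in range(3):      # Cartesian component of spin k
        for b in range(3):  # Cartesian component of spin n
            # Spin k lives on the first tensor factor, spin n on the second.
            H += lam[a, b] * np.kron(ops[a], E2) @ np.kron(E2, ops[b])
    return H

# Sanity check: lam = identity gives I_k . I_n, with eigenvalues
# -3/4 (singlet) and +1/4 (threefold-degenerate triplet).
H = coupling_hamiltonian(np.eye(3))
eigs = np.sort(np.linalg.eigvalsh(H))
print(np.round(eigs, 6))  # [-0.75  0.25  0.25  0.25]
```

The same constructor accepts any $3\times 3$ coupling matrix $\lambda$, so the dipolar, $J$-coupling, and other bilinear interactions differ only in the $\lambda$ supplied.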
A spin-field interaction Hamiltonian can be represented as \begin{equation} \begin{alignedat}{3} \hat{\mathcal{H}}_{\lambda} &= \vec{\hat{I}}_k\cdot\lambda\cdot\vec{B}_0 &= \begin{bmatrix} \hat{I}_{kx} & \hat{I}_{ky} & \hat{I}_{kz} \end{bmatrix} \begin{bmatrix} \lambda_{xx} & \lambda_{xy} & \lambda_{xz} \\ \lambda_{yx} & \lambda_{yy} & \lambda_{yz} \\ \lambda_{zx} & \lambda_{zy} & \lambda_{zz} \end{bmatrix} \begin{bmatrix} B_{0x} \\ B_{0y} \\ B_{0z} \end{bmatrix} \end{alignedat} \label{eq:genHamil2} \end{equation} The reducible rank-2 tensor $\lambda$ can be expressed as a sum of three components -- a diagonal $\lambda_0$, an antisymmetric $\lambda_1$, and a traceless symmetric $\lambda_2$ -- i.e., three \textit{irreducible} tensors\cite{fano1959} $\lambda_k$ of rank $k \in \{0,1,2\}$, as \begin{eqnarray} \lambda & = & \lambda_{0} + \lambda_{1} + \lambda_{2} \label{eq:Red_sp_int_tensor} \end{eqnarray} with \begin{equation} \begin{alignedat}{2} \lambda_{0} &= \begin{bmatrix} \lambda_{0,0} & 0 & 0 \\ 0 & \lambda_{0,0} & 0 \\ 0 & 0 & \lambda_{0,0} \end{bmatrix}\\ \lambda_{1} &= \begin{bmatrix} 0 & \lambda_{xy}^a & \lambda_{xz}^a \\ -\lambda_{xy}^a & 0 & \lambda_{yz}^a \\ -\lambda_{xz}^a & -\lambda_{yz}^a & 0 \end{bmatrix}\\ \lambda_{2} &= \begin{bmatrix} \lambda_{xx}^s-\lambda_0 & \lambda_{xy}^s & \lambda_{xz}^s \\ \lambda_{xy}^s & \lambda_{yy}^s-\lambda_0 & \lambda_{yz}^s \\ \lambda_{xz}^s & \lambda_{yz}^s & \lambda_{zz}^s-\lambda_0 \end{bmatrix} \end{alignedat} \label{eq:cart_decomp} \end{equation} where $\lambda_{0,0} = \frac{1}{3}\sum_i \lambda_{ii}^s$, $\lambda_{ij}^a = \frac{1}{2}(\lambda_{ij}-\lambda_{ji})$ and $\lambda_{ij}^s = \frac{1}{2}(\lambda_{ij}+\lambda_{ji})$. It is evident that the number of independent components in an irreducible tensor of rank $k$ is $2k+1$. The three irreducible tensor components in Eq. 
\ref{eq:Red_sp_int_tensor} transform \textit{differently} and \textit{independently} under rotations.\\ \noindent A general rank-2 tensor, like $\lambda$, transforms under rotation as $\lambda' = R\lambda R^{-1}$, where $R$ is a $3\times3$ rotation matrix. The spatial tensor $\lambda$ can also be represented as a nine-dimensional vector \begin{equation} \vec{\lambda} = (\lambda_{xx},\lambda_{xy},\lambda_{xz},\lambda_{yx},\lambda_{yy},\lambda_{yz},\lambda_{zx},\lambda_{zy},\lambda_{zz}) \end{equation} in which case Eqs. \ref{eq:genHamil} and \ref{eq:genHamil2} become a scalar product of two vectors \begin{equation} \begin{alignedat}{2} \hat{\mathcal{H}}_{\lambda} &= \vec{\lambda}\cdot\vec{\hat{I}} \end{alignedat} \label{eq:H_vector_rep} \end{equation} with $\vec{\hat{I}} = \vec{\hat{I}}_k \otimes \vec{\hat{I}}_n$ for spin-spin interactions and $\vec{\hat{I}} = \vec{\hat{I}}_k \otimes \vec{B}_0$ for spin-field interactions. The nine-dimensional vector form of the spatial tensor transforms under a rotation as $\vec{\lambda}' = R\vec{\lambda}$, where $R$ is now a full $9\times9$ matrix. However, it is known from Eq. \ref{eq:Red_sp_int_tensor} that $\vec{\lambda}$ can be written in a rank-separated basis as \begin{equation} \begin{alignedat}{2} \vec{\lambda} &= (\lambda_{0,0},\lambda^a_{xy},\lambda^a_{xz},\lambda^a_{yz},\lambda^s_{xx}-\lambda_{0,0},\lambda^s_{yy}-\lambda_{0,0},\lambda^s_{xy},\lambda^s_{xz},\lambda^s_{yz})\\ &= (\lambda_{0,0},\lambda_{1,-1},\lambda_{1,0},\lambda_{1,1},\lambda_{2,-2},\lambda_{2,-1},\lambda_{2,0},\lambda_{2,1},\lambda_{2,2}). \end{alignedat} \label{eq:ranksep} \end{equation} The rotation matrix $\mathfrak{R}$ is block diagonal in the rank-separated basis, as the components of tensors of different rank do not mix.
The transformation can therefore be written as {\setlength{\mathindent}{0cm} \begin{equation} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{0,0}$ & & $\mathfrak{R}^0_{0,0}$ & & & & & & & & & & $\lambda_{0,0}$\\ \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{1,-1}$& & & $\mathfrak{R}^1_{-1,-1}$ & $\mathfrak{R}^1_{0,-1}$ & $\mathfrak{R}^1_{1,-1}$ & & & & & & & $\lambda_{1,-1}$\\ \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{1,0}$ & & & $\mathfrak{R}^1_{-1,0}$ & $\mathfrak{R}^1_{0,0}$ & $\mathfrak{R}^1_{1,0}$ & & & & & & & $\lambda_{1,0}$\\ \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{1,1}$ & & & $\mathfrak{R}^1_{-1,1}$ & $\mathfrak{R}^1_{0,1}$ & $\mathfrak{R}^1_{1,1}$ & & & & & & & $\lambda_{1,1}$\\ \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{2,-2}$ & $=$ & & & & & $\mathfrak{R}^2_{-2,-2}$ & $\mathfrak{R}^2_{-2,-1}$ & $\mathfrak{R}^2_{-2,0}$ & $\mathfrak{R}^2_{-2,1}$ & $\mathfrak{R}^2_{-2,2}$ & $\cdot$ & $\lambda_{2,-2}$\\ \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{2,-1}$ & & & & & & $\mathfrak{R}^2_{-1,-2}$ & $\mathfrak{R}^2_{-1,-1}$ & $\mathfrak{R}^2_{-1,0}$ & $\mathfrak{R}^2_{-1,1}$ & $\mathfrak{R}^2_{-1,2}$ & & $\lambda_{2,-1}$\\ \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{2,0}$ & & & & & & $\mathfrak{R}^2_{0,-2}$ & $\mathfrak{R}^2_{0,-1}$ & $\mathfrak{R}^2_{0,0}$ & $\mathfrak{R}^2_{0,1}$ & $\mathfrak{R}^2_{0,2}$ & & $\lambda_{2,0}$\\ \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{2,1}$ & & & & & & $\mathfrak{R}^2_{1,-2}$ & $\mathfrak{R}^2_{1,-1}$ & $\mathfrak{R}^2_{1,0}$ & $\mathfrak{R}^2_{1,1}$ & $\mathfrak{R}^2_{1,2}$ & & $\lambda_{2,1}$\\ \cline{1-1} \cline{3-11} \cline{13-13} $\lambda'_{2,2}$ & & & & & & $\mathfrak{R}^2_{2,-2}$ & $\mathfrak{R}^2_{2,-1}$ & $\mathfrak{R}^2_{2,0}$ & $\mathfrak{R}^2_{2,1}$ & $\mathfrak{R}^2_{2,2}$ & & $\lambda_{2,2}$\\ \cline{1-1} \cline{3-11} \cline{13-13} \end{tabular} \label{eq:tensorrot} \end{equation}}\\ As the predominant transformation in 
NMR is a rotation, it is convenient to express the interaction tensors in the spherical tensor basis rather than in the Cartesian basis. The interaction tensor represented in the spherical tensor basis, $\mathcal{R}_l$ of rank $l$, can be expressed as a vector with $2l+1$ elements, $\{\mathcal{R}_{l,-l},\mathcal{R}_{l,-l+1},...,\mathcal{R}_{l,l}\}$. In general, the elements transform under a rotation as \begin{equation} \mathcal{R}_{lm}^{(new)} = \sum_{m'=-l}^{l}\mathfrak{D}^{l}_{m'm}(\Omega)\mathcal{R}_{lm'}^{(old)}, \label{eq:spTensorTransf} \end{equation} where $\mathfrak{D}_{m'm}^{l}(\Omega) = e^{-i\alpha m'}d_{m'm}^{(l)}(\beta) e^{-i\gamma m}$ represents the Wigner rotation matrix elements for a rotation represented by the three Euler angles ($\Omega$ = \{$\alpha$, $\beta$, $\gamma$\}). This is consistent with Eq. \ref{eq:tensorrot}, where the corresponding rotation matrix in the rank-separated basis is denoted by $\mathfrak{R}$. The reduced Wigner matrix elements\cite{wigner1939unitary} $d_{m'm}^{(l)}$ are given in Table \ref{tab:wig}.\\ The irreducible spatial spherical tensors $\mathcal{R}_k$ of rank $k$ and their components $\mathcal{R}_{k,l}$ are related to the reducible Cartesian spatial tensor components $\lambda_{ij}$ as \begin{equation} \begin{alignedat}{2} \mathcal{R}_{0,0} &= \frac{-1}{\sqrt{3}} (\lambda_{xx} + \lambda_{yy} + \lambda_{zz})\\ \mathcal{R}_{1,0} &= \frac{i}{\sqrt{2}}(\lambda_{xy}-\lambda_{yx})\\ \mathcal{R}_{1,\pm1} &= \frac{1}{2}(\lambda_{zx}-\lambda_{xz} \pm i(\lambda_{zy}-\lambda_{yz}))\\ \mathcal{R}_{2,0} &= \frac{1}{\sqrt{6}}(2\lambda_{zz}-\lambda_{xx}-\lambda_{yy})\\ \mathcal{R}_{2,\pm1} &= \mp\frac{1}{2}(\lambda_{xz}+\lambda_{zx} \pm i(\lambda_{yz}+\lambda_{zy}))\\ \mathcal{R}_{2,\pm2} &= \frac{1}{2}(\lambda_{xx}-\lambda_{yy} \pm i(\lambda_{xy}+\lambda_{yx}))\\ \end{alignedat} \label{eq:cart2sph} \end{equation} \begin{table}[!h] \footnotesize \caption{\footnotesize{Reduced Wigner matrix elements $d^{(j)}_{m',m}(\beta)$ for $j$ = 1 and 2.$^a$}}
\begin{tabular*}{\hsize}{ccccccc} \hline \hline $j$ & $m' \setminus m$ & -2 & -1 & 0 & 1 & 2 \\ \hline 1 & -1 & & \epc & \sts & \emc & \\ & 0 & & -\sts & $c_\beta$ & \sts & \\ & 1 & & \emc & -\sts & \epc & \\ \hline 2 & -2 & \cop & \epc $s_\beta$ & \sbs & \emc $s_\beta$ & \com \\ & -1 & -\epc $s_\beta$ & $c_\beta^2$ - \emc & \sbt & \epc - $c_\beta^2$ & \emc $s_\beta$\\ & 0 & \sbs & -\sbt & \slp & \sbt & \sbs \\ & 1 & -\emc $s_\beta$ & \epc - $c_\beta^2$ & -\sbt & $c_\beta^2$ - \emc & \epc $s_\beta$ \\ & 2 & \com & -\emc $s_\beta$ & \sbs & -\epc $s_\beta$ & \cop \\ \hline \hline \end{tabular*} \footnotesize{$^a$ Abbreviations: $c_\beta$ = $\cos \beta$, $s_\beta$ = $\sin \beta$.} \label{tab:wig} \normalsize \end{table} For every interaction in a spin system, it is possible to find a reference frame, known as the principal axis frame (PAS), denoted with a superscript $P$, in which the symmetric component ($\lambda_2$) of the interaction tensor is a diagonal matrix and is fully described by $\lambda^P_{xx}, \lambda^P_{yy}$ and $\lambda^P_{zz}$. The anti-symmetric component is, however, not diagonal and is described by $\lambda^P_{xy}$, $\lambda^P_{xz}$ and $\lambda^P_{yz}$\cite{saito2010chemical}. It is common in NMR to represent the components of the Cartesian tensor with the isotropic average $\delta_{iso}$, the anisotropy $\delta_{aniso}$ and the asymmetry parameter $\eta$ of the tensor as \begin{equation} \begin{alignedat}{2} \delta_{iso} &= \frac{1}{3}(\lambda^P_{xx} + \lambda^P_{yy} + \lambda^P_{zz})\\ \delta_{aniso} &= \lambda^P_{zz}-\delta_{iso}\\ \eta &= \frac{\lambda^P_{yy}-\lambda^P_{xx}}{\delta_{aniso}}, \end{alignedat} \label{eq:2-35} \end{equation} such that $\delta_{iso}$ describes the zeroth-rank tensor component, while $\delta_{aniso}$ and $\eta$ describe the second-rank tensor component. Eq.
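The Cartesian-to-spherical mapping of Eq. \ref{eq:cart2sph} and the parameters of Eq. \ref{eq:2-35} can be sketched in a few lines; the principal values below are hypothetical and chosen to respect the usual ordering convention.

```python
import numpy as np

def cart_to_spherical(lam):
    """Irreducible spherical components R_{k,m} (Eq. cart2sph) of a 3x3 tensor."""
    R = {}
    R[(0, 0)] = -(lam[0, 0] + lam[1, 1] + lam[2, 2]) / np.sqrt(3)
    R[(1, 0)] = 1j / np.sqrt(2) * (lam[0, 1] - lam[1, 0])
    R[(2, 0)] = (2 * lam[2, 2] - lam[0, 0] - lam[1, 1]) / np.sqrt(6)
    for s in (+1, -1):
        R[(1, s)] = 0.5 * (lam[2, 0] - lam[0, 2] + 1j * s * (lam[2, 1] - lam[1, 2]))
        R[(2, s)] = -s * 0.5 * (lam[0, 2] + lam[2, 0] + 1j * s * (lam[1, 2] + lam[2, 1]))
        R[(2, 2 * s)] = 0.5 * (lam[0, 0] - lam[1, 1] + 1j * s * (lam[0, 1] + lam[1, 0]))
    return R

# A hypothetical symmetric tensor already in its PAS (diagonal principal values)
lamP = np.diag([5.0, 15.0, 40.0])
iso = np.trace(lamP) / 3.0                   # delta_iso
aniso = lamP[2, 2] - iso                     # delta_aniso
eta = (lamP[1, 1] - lamP[0, 0]) / aniso      # eta

R = cart_to_spherical(lamP)
assert np.isclose(R[(0, 0)], -np.sqrt(3) * iso)
assert np.isclose(R[(2, 0)], np.sqrt(1.5) * aniso)
assert np.isclose(R[(2, 2)], -0.5 * aniso * eta)
assert np.isclose(R[(2, 1)], 0) and np.isclose(R[(1, 0)], 0)
```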
\ref{eq:cart2sph}, in the PAS frame, then simplifies to \begin{equation} \begin{alignedat}{2} \mathcal{R}_{0,0}^P &= -\sqrt{3} \delta_{iso},\\ \mathcal{R}_{1,0}^P &= i\sqrt{2}\lambda^P_{xy},\\ \mathcal{R}_{1,\pm1}^P &= -\lambda^P_{xz} \mp i\lambda^P_{yz},\\ \mathcal{R}_{2,0}^P &= \sqrt{\frac{3}{2}}\delta_{aniso},\\ \mathcal{R}_{2,\pm1}^P &= 0,\\ \mathcal{R}_{2,\pm2}^P &= \frac{-1}{2}\delta_{aniso}\eta. \end{alignedat} \label{eq:cart2sph_PAS} \end{equation} A set of transformations is needed to relate the spatial dependency tensor in the PAS frame to the lab frame. The transformations are governed by Eq. \ref{eq:spTensorTransf}. The PAS frame interaction tensor is first transformed to the molecular frame (also called a crystal frame). As the molecules in an NMR sample take a multitude of possible orientations, the crystal frames are then transformed to the frame of the rotor that contains the sample. The rotor is typically aligned at a specific angle with respect to the external magnetic field (lab frame), and the last transformation accounts for this angle. The entire sequence of transformations is shown in Fig. \ref{fig:pcr}. \begin{figure} \caption{Frame transformations for the spatial interaction tensor. The principal axis frame of an interaction is transformed to a molecular/crystal frame, followed by a transformation to the rotor frame and finally a transformation to account for the angle the rotor axis makes with the external magnetic field.} \label{fig:pcr} \end{figure} The spin part in Eq. \ref{eq:H_vector_rep} can also be represented by spherical spin tensor operators, in a similar fashion. The one-spin spherical spin-tensor operators for a spin-$\frac{1}{2}$ nucleus are \begin{equation} \begin{alignedat}{3} \mathcal{T}_{0,0} &= \mathds{1} &\\ \mathcal{T}_{1,0} &= \hat{I}_z\\ \mathcal{T}_{1,1} &= \frac{-1}{\sqrt{2}}\hat{I}^+ &= \frac{-1}{\sqrt{2}}(\hat{I}_x + i\hat{I}_y)\\ \mathcal{T}_{1,-1} &= \frac{1}{\sqrt{2}}\hat{I}^- &= \frac{1}{\sqrt{2}}(\hat{I}_x - i\hat{I}_y). \end{alignedat} \end{equation} This allows Eq.
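The rotations of Eq. \ref{eq:spTensorTransf} can be sketched by generating the reduced Wigner matrix $d^{(l)}(\beta)$ as the matrix exponential $e^{-i\beta J_y}$ in the $(2l+1)$-dimensional angular momentum representation, rather than typing in Table \ref{tab:wig}; the rank-2 components below are hypothetical.

```python
import numpy as np

def jz_jy(l):
    """Angular momentum matrices J_z, J_y in the basis m = -l ... l."""
    m = np.arange(-l, l + 1)
    Jp = np.zeros((2 * l + 1, 2 * l + 1), dtype=complex)
    for i, mm in enumerate(m[:-1]):      # <m+1|J+|m> = sqrt(l(l+1) - m(m+1))
        Jp[i + 1, i] = np.sqrt(l * (l + 1) - mm * (mm + 1))
    Jy = (Jp - Jp.conj().T) / 2j
    return np.diag(m).astype(complex), Jy

def wigner_D(l, alpha, beta, gamma):
    """D^l_{m'm}(Omega) = exp(-i alpha m') d^l_{m'm}(beta) exp(-i gamma m)."""
    _, Jy = jz_jy(l)
    ev, V = np.linalg.eigh(Jy)                         # Jy is Hermitian
    d = (V * np.exp(-1j * beta * ev)) @ V.conj().T     # reduced matrix d^l(beta)
    mz = np.arange(-l, l + 1)
    return np.exp(-1j * alpha * mz)[:, None] * d * np.exp(-1j * gamma * mz)[None, :]

# Rotate hypothetical rank-2 components, R'_m = sum_m' D_{m'm} R_m'
R2 = np.array([0.3, 0.0, np.sqrt(1.5), 0.0, -0.25], dtype=complex)
D = wigner_D(2, 0.4, 1.1, 2.0)
R2_rot = D.T @ R2

assert np.allclose(D.conj().T @ D, np.eye(5))                # rotations are unitary
assert np.isclose(np.vdot(R2, R2), np.vdot(R2_rot, R2_rot))  # norm is preserved
```

Successive PAS to crystal, rotor and lab transformations then amount to repeated applications of \texttt{wigner\_D} with the corresponding Euler angles.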
\ref{eq:H_vector_rep} to be rewritten in the lab frame as \begin{eqnarray} \hat{\mathcal{H}}_{\lambda} & = & C^{\lambda}\sum_{j=0}^{2} \sum_{m=-j}^{j}(-1)^m\mathcal{R}_{j,-m}^{\lambda}\mathcal{T}_{j,m}^{\lambda} \end{eqnarray} in the spherical tensor basis.\\ The components $\mathcal{R}^{P}_{j,m}$ and $\mathcal{T}_{j,m}$ for the NMR interactions are summarized in Table \ref{tab:sphTensorComp}, where the spatial components are given in the PAS frame and the spin components in the lab frame. The set of transformations to obtain the spatial components in the lab frame is performed as described above, and can also incorporate sample spinning. \begin{table}[h] \footnotesize \caption{\footnotesize{Spherical tensor elements of spatial and spin parts of the interaction Hamiltonian. The spatial components are given in the PAS frame while the spin part components are given in the lab frame.}} \begin{tabular*}{\hsize}{c||ccc} \hline \hline $\lambda$ & CS & J & D\\ \hline $C^{\lambda}$ & $\gamma_i$ & 1 & $-2\hbar\gamma_i\gamma_j$\\ $\mathcal{R}_{0,0}^{\lambda,P}$ & $\delta_{iso}$ & $J_{iso}$ & 0\\ $\mathcal{R}_{2,0}^{\lambda,P}$ & $\sqrt{\frac{3}{2}}\delta_{aniso}$ & $\sqrt{\frac{3}{2}}J_{aniso}$ & $\sqrt{\frac{3}{2}}\frac{\mu_0}{4\pi} |\vec{r}_{ij}|^{-3}$\\ $\mathcal{R}_{2,\pm1}^{\lambda,P}$ & 0 & 0 & 0 \\ $\mathcal{R}_{2,\pm2}^{\lambda,P}$ & $-\delta_{aniso}\frac{\eta_{\textrm{CS}}}{2}$ & $-\textrm{J}_{aniso}\frac{\eta_{\textrm{J}}}{2}$ & 0\\ $\mathcal{T}_{0,0}^{\lambda}$ & $\hat{I}_{iz} B_0$ & $\vec{\hat{I}}_i\cdot\vec{\hat{I}}_j$ & 0 \\ $\mathcal{T}_{2,0}^{\lambda}$ & $\sqrt{\frac{2}{3}}\hat{I}_{iz}B_0$ & $\frac{1}{\sqrt{6}}(3\hat{I}_{iz}\hat{I}_{jz}-\vec{\hat{I}}_i\cdot\vec{\hat{I}}_j)$ & $\frac{1}{\sqrt{6}}(3\hat{I}_{iz}\hat{I}_{jz}-\vec{\hat{I}}_i\cdot\vec{\hat{I}}_j)$\\ $\mathcal{T}_{2,\pm1}^{\lambda}$ & $\mp\hat{I}_i^{\pm}B_0$ & $\mp\frac{1}{2}(\hat{I}_i^{\pm}\hat{I}_{jz} + \hat{I}_{iz}\hat{I}_j^{\pm})$ & $\mp\frac{1}{2}(\hat{I}_i^{\pm}\hat{I}_{jz} + \hat{I}_{iz}\hat{I}_j^{\pm})$\\
$\mathcal{T}_{2,\pm2}^{\lambda}$ & 0 & $\frac{1}{2}\hat{I}_i^{\pm}\hat{I}_j^{\pm}$ & $\frac{1}{2}\hat{I}_i^{\pm}\hat{I}_j^{\pm}$\\ \hline \hline \end{tabular*} \label{tab:sphTensorComp} \normalsize \end{table} \subsection{Spin-Field interactions} \subsubsection{The Zeeman Interaction} The Zeeman interaction is generally the largest of the NMR interactions and describes the interaction of the spin with the static magnetic field $\vec{B}_0$, which is, for convenience, taken to point along the $z$-axis in the lab frame. The expression in Eq. \ref{eq:zeeman_E} can be rewritten as a scalar product of the form of Eq. \ref{eq:genHamil2} \begin{equation} \hat{\mathcal{H}}_{\textrm{Z}} = \vec{\hat{I}} \cdot (-\gamma \mathds{1}) \cdot \vec{B}_0. \end{equation} $\omega_0 = -\gamma |\vec{B}_0|$ is the Larmor frequency of the nuclear spin of interest and $\gamma$ is the nucleus-specific gyromagnetic ratio, the observed ratio of magnetic moment to angular momentum, whose sign determines the sense of precession of the spin about the magnetic field. \subsubsection{The Chemical Shift Interaction} Each nucleus experiences an effective magnetic field slightly different from the external magnetic field $\vec{B}_0$, because of the magnetic field induced by the surrounding electrons at the location of each nucleus. This makes it possible to distinguish nuclei of the same species in different chemical environments. The induced magnetic field depends on the electronic charge distribution and is proportional to the external magnetic field $\vec{B}_0$. The effective magnetic field experienced by a nucleus is, therefore, given by \begin{equation} \vec{B}_{\tr{eff}} = \vec{B}_0 + \sigma \vec{B}_0, \end{equation} where $\sigma$ is the chemical shift tensor of the nucleus, which quantifies the proportionality of the induced magnetic field to the external magnetic field.
The chemical shift Hamiltonian in the lab frame is then given by, \begin{equation} \begin{alignedat}{2} \hat{\mathcal{H}}_{\textrm{CS}} &= \vec{\hat{I}} \cdot (-\gamma\sigma) \cdot \vec{B}_0\\ &= \omega_0(\sigma_{xz}\hat{I}_x + \sigma_{yz}\hat{I}_y + \sigma_{zz}\hat{I}_z) \end{alignedat} \label{eq:2-40} \end{equation} In high field, the precession of the spin is considered fast enough to approximately average out the $x$ and $y$ components. This simplifies Eq. \ref{eq:2-40} to $\hat{\mathcal{H}}_{\tr{CS}} = \omega_0\sigma_{zz}\hat{I}_z$. The chemical shift tensor can also be expressed in the PAS frame, where, as in Eq. \ref{eq:2-35}, \begin{equation} \begin{alignedat}{2} \sigma^{\tr{CS}}_{iso} &= \frac{1}{3}(\sigma^P_{xx} + \sigma^P_{yy} + \sigma^P_{zz})\\ \delta^{\tr{CS}} &= \sigma^P_{zz}-\sigma^{\tr{CS}}_{iso}\\ \eta^{\tr{CS}} &= \frac{\sigma^P_{yy}-\sigma^P_{xx}}{\delta^{\tr{CS}}}, \end{alignedat} \label{eq:2-42} \end{equation} where the convention is $|\sigma^{P}_{zz} - \sigma^{\tr{CS}}_{iso}| \geq |\sigma^{P}_{xx} - \sigma^{\tr{CS}}_{iso}| \geq |\sigma^{P}_{yy} - \sigma^{\tr{CS}}_{iso}|$, so that $\eta^{\tr{CS}}$ is positive and smaller than 1, while $\delta^{\tr{CS}}$ can be either positive or negative. The parameters defined in Eq. \ref{eq:2-42} are helpful as they determine the position, width and shape of the peak in the spectrum.\\ \subsubsection{The Radio-frequency Interaction} \label{ssec:rot_frame} The rf Hamiltonian takes the form \begin{equation} \hat{\mathcal{H}}_{\textrm{rf}}(t) = 2\omega_{\textrm{rf}}(t)\cos(\omega_c t+\phi(t)) \hat{I}_x, \end{equation} where $\omega_{\tr{rf}}(t)$ is the time-dependent amplitude of the rf field, $\omega_c$ is the frequency of the rf field, called the \textit{carrier frequency}, and $\phi(t)$ is the time-dependent phase of the rf field. It is often simpler to transform the entire description to the rotating frame defined by the carrier frequency.
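The transformation to the rotating frame rests on the operator identity $e^{i\theta\hat{I}_z}\hat{I}_x e^{-i\theta\hat{I}_z} = \hat{I}_x\cos\theta - \hat{I}_y\sin\theta$, which is easy to confirm numerically with spin-$\frac{1}{2}$ matrices (a minimal sketch):

```python
import numpy as np

Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def rot_z(theta):
    """exp(+i theta Iz): Iz is diagonal, so the exponential is elementwise."""
    return np.diag(np.exp(1j * theta * np.diag(Iz)))

# exp(i w t Iz) Ix exp(-i w t Iz) = Ix cos(w t) - Iy sin(w t)
for wt in np.linspace(0.0, 7.0, 29):
    lhs = rot_z(wt) @ Ix @ rot_z(-wt)
    rhs = Ix * np.cos(wt) - Iy * np.sin(wt)
    assert np.allclose(lhs, rhs)
```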
The external Hamiltonian, which also includes the Zeeman interaction, in such a frame is given by, \begin{equation} \begin{alignedat}{2} \hat{\tilde{\mathcal{H}}}_{\textrm{ext}} & = \omega_0e^{i\omega_ct\hat{I}_z}\hat{I}_ze^{-i\omega_ct\hat{I}_z} +\\ & \hspace*{30pt} 2\omega_{\textrm{rf}}\cos(\omega_ct+\phi(t))e^{i\omega_ct\hat{I}_z}\hat{I}_xe^{-i\omega_ct\hat{I}_z} - ie^{i\omega_ct\hat{I}_z}\frac{d(e^{-i\omega_ct\hat{I}_z})}{dt}\\ & = \omega_0 \hat{I}_z + 2\omega_{\textrm{rf}}\cos(\omega_ct+\phi(t))(\hat{I}_x\cos(\omega_ct) - \hat{I}_y\sin(\omega_ct)) - \omega_c \hat{I}_z\\ & = (\omega_0-\omega_c)\hat{I}_z + \omega_{\textrm{rf}}\Big(\hat{I}_x \cos\phi(t) + \hat{I}_y\sin\phi(t) \Big) + \\ & \hspace*{30pt}\omega_{\textrm{rf}}\Big(\hat{I}_x\cos\big(2\omega_ct + \phi(t)\big) - \hat{I}_y\sin\big(2\omega_ct+\phi(t)\big)\Big) \label{eq:rf_zeeman} \end{alignedat} \end{equation} On averaging over the Larmor period ($\frac{2\pi}{\omega_0}$), which is orders of magnitude shorter than the cycle time of the rf amplitude and phase, the average first-order Hamiltonian is \begin{equation} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H}}}}_{\tr{ext}} &= (\omega_0-\omega_c)\hat{I}_z + \omega_{\textrm{rf}}(\cos\phi(t) \hat{I}_x + \sin\phi(t) \hat{I}_y) + \\ &\hspace{30pt}\frac{\omega_0}{2\pi}\int_{0}^{\frac{2\pi}{\omega_0}}\omega_{\textrm{rf}}\Big(\hat{I}_x\cos\big(2\omega_ct + \phi(t)\big) - \hat{I}_y\sin\big(2\omega_ct+\phi(t)\big)\Big)dt\\ &= (\omega_0-\omega_c)\hat{I}_z + \omega_{\textrm{rf}}(\cos\phi(t) \hat{I}_x + \sin\phi(t) \hat{I}_y), \end{alignedat} \label{eq:rotframe_Hext} \end{equation} where $\omega_c \approx \omega_0$, i.e., the rf carrier frequency is set very close to the Larmor frequency of the nucleus, so that the terms oscillating at $2\omega_c$ average to zero over a Larmor period, and $\phi(t)$ is considered piece-wise constant in the intervals \big((n-1)$\frac{2\pi}{\omega_0}$,n$\frac{2\pi}{\omega_0}$\big), with integer n $\geq1$.\\ \subsection{Spin-Spin interactions} \subsubsection{Dipolar-coupling interaction} The
influence of the magnetic dipole moment of one spin on another through space is termed the \textit{dipolar coupling} interaction and the Hamiltonian is given by \begin{equation} \hat{\mathcal{H}}_{\textrm{D}} = -\frac{\mu_0}{4\pi}\frac{\gamma_1\gamma_2}{|\vec{r}|^3} \Bigg(\frac{3}{|\vec{r}|^2}\Big(\vec{\hat{I}}_1\cdot\vec{r}\Big)\Big(\vec{\hat{I}}_2\cdot\vec{r}\Big) - \Big(\vec{\hat{I}}_1\cdot\vec{\hat{I}}_2\Big)\Bigg) \label{eq:2-43} \end{equation} where $\vec{r}$ is the distance vector connecting the two spins. This can be written in the form of Eq. \ref{eq:genHamil} as \begin{equation} \hat{\mathcal{H}}_{\textrm{D}} = \vec{\hat{I}}_1 \cdot D \cdot \vec{\hat{I}}_2 \end{equation} where the elements of the traceless (and therefore purely anisotropic) symmetric matrix $D$ are given by \begin{equation} \begin{alignedat}{2} D_{\alpha\beta} &= -\frac{\mu_0}{4\pi}\frac{\gamma_1\gamma_2}{|\vec{r}|^3} (3e_\alpha e_\beta - \delta_{\alpha\beta}) \end{alignedat} \end{equation} with $\alpha,\beta = x, y, z$ and $e_\alpha$, $e_\beta$ the components of a unit vector parallel to $\vec{r}$. In the PAS frame, since $D$ is symmetric and traceless, \begin{equation} D^{P} = \frac{-2\mu_0}{4\pi} \frac{\gamma_1\gamma_2}{|\vec{r}|^3} \begin{bmatrix} \frac{-1}{2} & 0 & 0\\ 0 & \frac{-1}{2} & 0\\ 0 & 0 & 1 \end{bmatrix}. \end{equation} Representing the spin part of Eq. \ref{eq:2-43} in the rotating frame, as described in Sec.
\ref{ssec:rot_frame}, and assuming high field, the dipolar Hamiltonian is given by \begin{equation} \hat{\mathcal{H}}_{\tr{D}} = -\frac{\mu_0}{4\pi} \frac{\gamma_1\gamma_2}{|\vec{r}|^3} \frac{3\cos^2\theta - 1}{2} (2\hat{I}_{1z}\hat{I}_{2z} - \hat{I}_{1x}\hat{I}_{2x} - \hat{I}_{1y}\hat{I}_{2y}), \end{equation} when the two nuclei are of the same kind (homonuclear), and \begin{equation} \hat{\mathcal{H}}_{\tr{D}} = -\frac{\mu_0}{4\pi} \frac{\gamma_1\gamma_2}{|\vec{r}|^3} \frac{3\cos^2\theta - 1}{2} 2\hat{I}_{1z}\hat{I}_{2z}, \end{equation} when the two nuclei are of different kinds (heteronuclear), with $\theta$ being the angle the vector $\vec{r}$ makes with the external magnetic field. As can be seen, at the magic angle, where $3\cos^2\theta - 1 = 0$, the dipolar coupling Hamiltonian vanishes.\\ \subsubsection{J-coupling interaction} The influence of the magnetic dipole moment of one spin on another, mediated by the bond electrons, is termed the J-coupling or the scalar coupling. The Hamiltonian for this interaction is given by, \begin{equation} \hat{\mathcal{H}}_{\tr{J}} = 2\pi \vec{\hat{I}}_1 \cdot J \cdot \vec{\hat{I}}_2, \end{equation} where the interaction matrix $J$, in principle, has both isotropic and anisotropic components. However, only the isotropic component (one third of the trace of $J$, denoted by $J_{\tr{iso}}$) is usually considered, as the anisotropic component is experimentally indistinguishable from the dipolar coupling interaction and is included in the dipolar interaction tensor $D$. For light nuclei, the anisotropic component is known to be small and is therefore neglected.
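The geometry of the dipolar coupling can be sketched numerically. The block below builds the Cartesian dipolar tensor from an internuclear vector with the coupling constant scaled to one (the numbers are illustrative, not physical), and checks that it is traceless, that its PAS form matches $D^P$, and that the secular angular factor vanishes at the magic angle.

```python
import numpy as np

def dipolar_tensor(r_vec, b=1.0):
    """Cartesian dipolar tensor with coupling constant b = (mu0/4pi) g1 g2 / r^3:
    D_ab = -b (3 e_a e_b - delta_ab), with e the unit vector along r_vec."""
    e = r_vec / np.linalg.norm(r_vec)
    return -b * (3.0 * np.outer(e, e) - np.eye(3))

# Traceless (purely anisotropic) for any orientation
D = dipolar_tensor(np.array([1.0, 2.0, 2.0]))
assert np.isclose(np.trace(D), 0.0)

# With r along z (the PAS), the principal values are (b, b, -2b),
# i.e. -2b * diag(-1/2, -1/2, 1), as in the expression for D^P
DP = dipolar_tensor(np.array([0.0, 0.0, 1.0]))
assert np.allclose(DP, np.diag([1.0, 1.0, -2.0]))

# The secular angular factor vanishes at the magic angle
theta_m = np.arccos(1.0 / np.sqrt(3.0))
assert np.isclose((3.0 * np.cos(theta_m) ** 2 - 1.0) / 2.0, 0.0)
```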
Again with the high-field approximation, the Hamiltonian is given by, \begin{equation} \hat{\mathcal{H}}_{\tr{J}} = 2\pi J_{\tr{iso}} \vec{\hat{I}}_1 \cdot \vec{\hat{I}}_2 \end{equation} for the homonuclear case, and \begin{equation} \hat{\mathcal{H}}_{\tr{J}} = 2\pi J_{\tr{iso}} \hat{I}_{1z} \hat{I}_{2z} \end{equation} for the heteronuclear case.\\ The following chapters will use the concepts and mathematical descriptions established in this chapter, with focus on the chemical shift and dipolar coupling Hamiltonians. The average Hamiltonian theory used to find the effective Hamiltonian for any interaction frame Hamiltonian represented as a product of complex exponentials with a limited set of frequencies will be discussed in the next chapter, followed by its application to dipolar recoupling pulse sequences in the subsequent chapters. \chapter{Design Principles} \label{chap:DesignPrinciples} Knowledge of the effective Hamiltonian is crucial in describing and understanding a solid-state NMR experiment. Effective Hamiltonians are used to evaluate the performance of existing experiments and to guide the design of novel ones. Time dependence is brought into the system Hamiltonian by MAS and rf irradiation. There are two major ways to deal with time-dependent Hamiltonians: the average Hamiltonian theory\cite{waugh1968js,haeberlen1968coherent,burum1981magnus,klarsfeld1989recursive} (AHT) and the Floquet theory\cite{floquet,shirley1965solution,maricq1982application,shimon1996}. AHT has been the most widely adopted approach in NMR, whereas Floquet theory is the preferred approach in most other fields\cite{chu2004beyond,kulander1991time}.\\ In Floquet theory, the time-dependent finite-dimensional Hilbert space Hamiltonian is transformed into a time-independent infinite-dimensional Hamiltonian in the so-called Floquet space, which is the Kronecker product of the Fourier space and the Hilbert space\cite{augustine1995theoretical,boender1996stacking}.
Strictly speaking, it is inappropriate to call the Floquet Hamiltonian time-independent, as the problem has been transformed out of the time domain into the frequency domain. Density operator evolution to arbitrary times can, however, be calculated directly in the Floquet space to obtain a spectrum\cite{schmidt1992floquet,kubo1990one}. This, for example, has aided in the analytical calculation of both the centrebands and the sidebands in MAS spectra\cite{schmidt1992floquet,nakai1992application,bain2001introduction}. In general this approach is of interest for numerical simulations\cite{levante1995formalized,baldus1995structure}. However, in practice, numerical simulations of the time-dependent Hilbert space Hamiltonian, sliced into piece-wise constant Hamiltonians, are more efficient. To understand and gain insight into an experiment, an approximate analytical time-independent Hamiltonian in Hilbert space is sought. This can be achieved by diagonalising the Floquet Hamiltonian and projecting it back to the Hilbert space\cite{shimon1996}. More often than not, perturbation theory on the Floquet Hamiltonian, rather than full diagonalisation, is applied\cite{primas1963generalized}. The van Vleck transformation\cite{van1948dipolar}, which is a block diagonalisation method, is used to achieve this. The effective Hamiltonian so obtained has been shown to be identical to that obtained with AHT\cite{maricq1986application,llor1992equivalence} using the Magnus expansion\cite{magnus1954exponential,salzman1986convergence,burum1981magnus}, and the Hamiltonians are valid only at multiples of the periodicity (stroboscopic observation) defined by the time-dependent Hamiltonian. The details of the Floquet theory and the van Vleck perturbation treatment are beyond the scope of this dissertation and can be found in the literature; in particular, the reviews by Leskes et al.\cite{leskes2010floquet} and by M.
Ernst\cite{scholz2010operator} are recommended.\\ In this chapter, AHT is discussed and used to derive the effective Hamiltonian for a time-dependent Hamiltonian. This is followed by a sequential discussion of situations with increasing complexity in the time dependence of the Hamiltonian. It will be shown that the interaction frame time modulations of single-spin operators can be described using at most two frequencies per spin: one reflects the periodicity of the pulse sequence acting on the spin in question, and the other reflects the overall rotation caused by a unit of the pulse sequence on the single-spin operators. This allows the total Hamiltonian for a solid-state MAS NMR experiment involving $N$ spin-$\frac{1}{2}$ nuclei to be described using at most $2N + 1$ fundamental frequencies, where one of the frequencies results from the time modulation of the spatial part of the Hamiltonian. The overall rotation caused by a pulse sequence on a spin can be found numerically using quaternions, an alternative description of rotations\cite{hamilton1853lectures,hamilton1969,klein1965}, as described in the last section. \section{Average Hamiltonian Theory} Consider a spin system under a time-dependent Hamiltonian, $\hat{\mathcal{H}}(t)$, that can be written as a sum of two parts, \begin{equation} \hat{\mathcal{H}}(t) = \hat{\mathcal{H}}_{\tr{small}}(t) + \hat{\mathcal{H}}_{\tr{big}}(t). \end{equation} The effect of the big Hamiltonian on the small Hamiltonian of interest can easily be seen by transforming the problem to an interaction frame defined by $\hat{\mathcal{H}}_{\tr{big}}(t)$. This also helps the series expansion of the effective Hamiltonian converge faster. As seen in Sec.
\ref{sec:int_frame}, the interaction frame Hamiltonian $\hat{\tilde{\mathcal{H}}}(t)$ is given by, \begin{equation} \hat{\tilde{\mathcal{H}}}(t) = \hat{U}_{\tr{big}}^{\dagger}(t,0) \hat{\mathcal{H}}_{\tr{small}}(t)\hat{U}_{\tr{big}}(t,0), \label{eq:intFrHam} \end{equation} where \begin{equation} \hat{U}_{\tr{big}}(t,0) = \hat{\mathcal{T}}e^{-i\int_{0}^{t}\hat{\mathcal{H}}_{\tr{big}}(t')dt'}. \label{eq:3-3} \end{equation} In the interaction frame, the propagator for a time interval (0,$t$) is given by $\hat{\tilde{U}}(t,0) = \hat{\mathcal{T}}e^{-i\int_{0}^{t}\hat{\tilde{\mathcal{H}}}(t')dt'}$, as also given in Eq. \ref{eq:dyson}. As a product of unitary operators is again a unitary operator, $\hat{\tilde{U}}(\tau,0)$ for the interval 0 to $\tau$ can in principle be represented as a single operator, \begin{equation} \hat{\tilde{U}}(\tau,0) = e^{-i\hat{\overline{\tilde{\mathcal{H}}}}(\tau,0)\tau}, \label{eq:prop_int_frame} \end{equation} where $\hat{\overline{\tilde{\mathcal{H}}}}$($\tau$,0) is the \textit{average} or \textit{effective} Hamiltonian, with the averaging done over the interval (0,$\tau$). The average Hamiltonian is obtained as a solution to the linear differential equation, \begin{equation} \frac{d\hat{\tilde{U}}(t)}{dt} = -i \hat{\tilde{\mathcal{H}}}(t)\hat{\tilde{U}}(t), \label{eq:dudt} \end{equation} through the Magnus expansion\cite{magnus1954exponential,salzman1986convergence,burum1981magnus}. The solution is given by \begin{equation} \hat{\tilde{U}}(\tau,0) = e^{-i\sum\limits_{k = 1}^{\infty} \hat{\overline{\tilde{\mathcal{H}}}}^{(k)}(\tau,0)\tau}\hat{\tilde{U}}(0) \label{eq:ahtMagnus} \end{equation} where the initial value of the propagator $\hat{\tilde{U}}(0)$ is known to be $\mathds{1}$ and the average Hamiltonian $\hat{\overline{\tilde{\mathcal{H}}}}(\tau,0)$ is given as the series $\sum_{k = 1}^{\infty} \hat{\overline{\tilde{\mathcal{H}}}}^{(k)}(\tau,0)$.
The first two terms in the series are given by \begin{equation} \begin{aligned} \hat{\overline{\tilde{\mathcal{H}}}}^{(1)}(\tau,0) &= \frac{1}{\tau}\int_{0}^{\tau}\hat{\tilde{\mathcal{H}}}(t_1)dt_1 \\ \hat{\overline{\tilde{\mathcal{H}}}}^{(2)}(\tau,0) &= \frac{1}{2i\tau}\int_{0}^{\tau}dt_1\int_{0}^{t_1}[\hat{\tilde{\mathcal{H}}}(t_1),\hat{\tilde{\mathcal{H}}}(t_2)]dt_2. \end{aligned} \label{eq:MagExpnsn} \end{equation} Note that from here on, the time interval $(\tau,0)$ will be dropped from the effective Hamiltonians and $(t,0)$ will be replaced with just $(t)$ in the propagators, for better readability. The Hamiltonian $\hat{\overline{\tilde{\mathcal{H}}}}$ is valid for any chosen fixed time $\tau$, with the averaging done from 0 to $\tau$. However, for the propagator given in Eq. \ref{eq:prop_int_frame} to be valid at multiples of $\tau$, it is required that $\tau$ also represent the periodicity of the Hamiltonian $\hat{\tilde{\mathcal{H}}}$(t), i.e., $\hat{\tilde{\mathcal{H}}}(\tau+t) = \hat{\tilde{\mathcal{H}}}(t)$. The evolution at multiples of $\tau$ (i.e., $n\tau$, where $n$ is an integer) is then described by \begin{equation} \hat{\tilde{U}}(n\tau) = \Big(\hat{\tilde{U}}(\tau)\Big)^n = e^{-i\hat{\overline{\tilde{\mathcal{H}}}}(\tau)n\tau}. \label{eq:ahtU} \end{equation} In the following sections, the cases corresponding to the modulation of $\hat{\tilde{\mathcal{H}}}$(t) with single and multiple frequencies will be discussed. \subsection{Single frequency} \label{sect:3_SingleFrequency} Consider $\hat{\mathcal{H}}_{\tr{small}}(t)$ and $\hat{\mathcal{H}}_{\tr{big}}(t)$ that are periodic in time with $\tau_c$, with $\hat{U}_{\tr{big}}(t)$ cyclic over the same time $\tau_c$, i.e., $\hat{U}_{\tr{big}}(\tau_c) = \mathds{1}$. The total Hamiltonian in the interaction frame of $\hat{\mathcal{H}}_{\tr{big}}(t)$ can then be shown to also be periodic with $\tau_c$,
i.e., \begin{empheq}[right={\empheqrbrace \Rightarrow \hat{\tilde{\mathcal{H}}}(\tau_c + t) = \hat{\tilde{\mathcal{H}}}(t). }]{alignat=2} \begin{aligned} \hat{\mathcal{H}}_{\tr{small}}(\tau_c + t) &= \hat{\mathcal{H}}_{\tr{small}}(t)\\ \hat{\mathcal{H}}_{\tr{big}}(\tau_c + t) &= \hat{\mathcal{H}}_{\text{big}}(t)\\ \hat{U}_{\tr{big}}(\tau_c) &= \mathds{1} \end{aligned} \label{eq:aht_assumptions} \end{empheq} In a MAS solid-state NMR experiment, this corresponds to a scenario where the internal Hamiltonians are periodic with $\tau_r$, the rf pulse sequence is rotor synchronised with period $\tau_r$, and $\hat{U}_{\tr{rf}}(\tau_r) = \mathds{1}$. The rf field interaction frame Hamiltonian is then periodic with $\tau_r$. In such a case, $\hat{\tilde{\mathcal{H}}}(t)$ can be expanded in a Fourier series and written as, \begin{equation} \begin{alignedat}{2} \hat{\tilde{\mathcal{H}}}(t) = \sum_{n=-\infty}^{\infty}\hat{\tilde{\mathcal{H}}}^{(n)}e^{in\omega_rt}, \end{alignedat} \end{equation} where $\hat{\tilde{\mathcal{H}}}^{(n)}$ are the Fourier components, whose elements also contain spin operators. The calculation of these Fourier components is shown in Sec. \ref{sect:4_ff} for the case of amplitude-modulated pulse sequences and in Sec. \ref{sect:5_ff} for the case of amplitude- and phase-modulated pulse sequences.\\ The effective Hamiltonian to first order, as given in Eq. \ref{eq:MagExpnsn}, is then simply, \begin{equation} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H}}}}^{(1)}(\tau_r,0) &= \frac{1}{\tau_r}\int_{0}^{\tau_r}\hat{\tilde{\mathcal{H}}}(t)dt\\ &= \frac{1}{\tau_r}\sum_{n=-\infty}^{\infty}\hat{\tilde{\mathcal{H}}}^{(n)} \int_{0}^{\tau_r}e^{in\omega_rt} dt\\ &= \hat{\tilde{\mathcal{H}}}^{(0)}.
\end{alignedat} \end{equation} The second-order effective Hamiltonian is given by, \begin{gather} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H}}}}^{(2)} &= \frac{1}{2i\tau_r}\int_{0}^{\tau_r}dt_1\int_{0}^{t_1}[\hat{\tilde{\mathcal{H}}}(t_1),\hat{\tilde{\mathcal{H}}}(t_2)]dt_2\\ &= \frac{1}{2i\tau_r}\int_{0}^{\tau_r}\sum_{n_1,n_2}[\hat{\tilde{\mathcal{H}}}^{(n_1)},\hat{\tilde{\mathcal{H}}}^{(n_2)}]e^{in_1\omega_rt_1}dt_1\int_{0}^{t_1}e^{in_2\omega_rt_2}dt_2. \end{alignedat} \label{eq:irupa} \end{gather} There are four possible cases for the indices being summed over, depending on whether $n_1\omega_r = 0$ (resonance condition, abbreviated as $\texttt{res}$) or $n_1\omega_r \neq 0$ (non-resonance condition, abbreviated as $\texttt{non-res}$), and likewise for $n_2\omega_r$. The cases can be listed with the corresponding commutators as \begin{enumerate} \item $[\texttt{non-res}, \texttt{non-res}]$ \item $[\texttt{res}, \texttt{non-res}]$ \item $[\texttt{non-res}, \texttt{res}]$ \item $[\texttt{res}, \texttt{res}]$. \end{enumerate} \subsubsection*{\underline{Cases 1 and 2:}} For the first two of these cases, where the second term in the commutator corresponds to $\texttt{non-res}$, Eq. \ref{eq:irupa} simplifies to \begin{equation} = \frac{1}{2i\tau_r} \int_{0}^{\tau_r} \sum_{n_1,n_2}^{} [\hat{\tilde{\mathcal{H}}}^{(n_1)}, \hat{\tilde{\mathcal{H}}}^{(n_2)}] e^{in_1\omega_r t_1} \frac{(e^{in_2\omega_r t_1} - 1)}{i (n_2\omega_r)} dt_1. \label{eq:irupa2} \end{equation} This can further be seen as a sum of two terms, one involving the exponential in $n_2\omega_r$ and the other involving the constant $-1$.
As the integration is over $\tau_r$, the first term gives a non-zero contribution only when $(n_1 + n_2)\omega_r = 0$ and allows the contribution to the second-order effective Hamiltonian from $[\texttt{non-res}, \texttt{non-res}]$ to be reformulated as \begin{equation} = \frac{-1}{2} \sum_{n_0,n}^{} \frac{[\hat{\tilde{\mathcal{H}}}^{(n_0-n)}, \hat{\tilde{\mathcal{H}}}^{(n)}]}{n\omega_r}, \end{equation} where the indices being summed over obey $n_0\omega_r = 0$ and $n\omega_r \neq 0$. Similarly, the second term results in a non-zero contribution only when $n_1\omega_r = 0$. This allows the contribution from $[\texttt{res}, \texttt{non-res}]$ to be reformulated as \begin{equation} = \frac{1}{2} \sum_{n_0,n}^{} \frac{[\hat{\tilde{\mathcal{H}}}^{(n_0)}, \hat{\tilde{\mathcal{H}}}^{(n)}]}{n\omega_r}, \label{eq:3-16} \end{equation} where again, $n_0\omega_r = 0$ and $n\omega_r \neq 0$. \subsubsection*{\underline{Cases 3 and 4:}} For the last two of the listed cases, where the second term in the commutator corresponds to $\texttt{res}$, Eq. \ref{eq:irupa} simplifies to \begin{equation} = \frac{1}{2i\tau_r} \int_{0}^{\tau_r} \sum_{n_1,n_2}^{} [\hat{\tilde{\mathcal{H}}}^{(n_1)}, \hat{\tilde{\mathcal{H}}}^{(n_2)}] t_1 e^{in_1\omega_r t_1} dt_1. \end{equation} The contribution from $[\texttt{non-res}, \texttt{res}]$ works out to be the same as given in Eq. \ref{eq:3-16}, while the contribution from $[\texttt{res}, \texttt{res}]$ works out to zero.\\ The total second-order effective Hamiltonian can therefore be expressed as \begin{equation} \hat{\overline{\tilde{\mathcal{H}}}}^{(2)} = \frac{-1}{2} \sum_{n_0,n}^{} \frac{[\hat{\tilde{\mathcal{H}}}^{(n_0-n)}, \hat{\tilde{\mathcal{H}}}^{(n)}]}{n\omega_r} + \sum_{n_0,n}^{} \frac{[\hat{\tilde{\mathcal{H}}}^{(n_0)}, \hat{\tilde{\mathcal{H}}}^{(n)}]}{n\omega_r}, \end{equation} where $n_0\omega_r = 0$ and $n\omega_r \neq 0$.
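The total expression above can be checked numerically. The following sketch (not part of the derivation; the $2\times2$ Fourier components and the frequency are arbitrary, assumed values) compares the Fourier-space second-order formula with a direct evaluation of the double integral in Eq. \ref{eq:irupa}:

```python
import numpy as np

# Numerical sanity check (a sketch, not part of the derivation; the 2x2
# Fourier components and the frequency below are arbitrary assumptions).
rng = np.random.default_rng(1)
w_r = 2 * np.pi                 # spinning frequency, so tau_r = 1
tau_r = 2 * np.pi / w_r

# Fourier components H^{(n)}, n = -2..2, with H^{(-n)} = H^{(n)}^dagger
# so that H(t) is Hermitian at all times.
a0 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
H = {0: (a0 + a0.conj().T) / 2}
for n in (1, 2):
    a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    H[n], H[-n] = a, a.conj().T

def H_t(t):
    """H(t) reconstructed from its Fourier components."""
    return sum(Hn * np.exp(1j * n * w_r * t) for n, Hn in H.items())

def comm(a, b):
    return a @ b - b @ a

# Direct (1/2i tau_r) int_0^tau dt1 int_0^t1 [H(t1), H(t2)] dt2, written as
# [H(t1), int_0^t1 H dt2] with a running inner integral (midpoint rule).
N = 4000
dt = tau_r / N
inner = np.zeros((2, 2), complex)
total = np.zeros((2, 2), complex)
for k in range(N):
    Hk = H_t((k + 0.5) * dt)
    total += comm(Hk, inner + Hk * dt / 2) * dt
    inner += Hk * dt
H2_direct = total / (2j * tau_r)

# Fourier-space result: n0 = 0 is the only resonant index here, so
# H2 = -(1/2) sum_n [H^{(-n)}, H^{(n)}]/(n w_r) + sum_n [H^{(0)}, H^{(n)}]/(n w_r)
H2_formula = sum(-0.5 * comm(H[-n], H[n]) / (n * w_r)
                 + comm(H[0], H[n]) / (n * w_r)
                 for n in (-2, -1, 1, 2))

print(np.max(np.abs(H2_direct - H2_formula)))   # small (discretisation error)
```

The two evaluations agree to the discretisation error of the time grid.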
\subsection{Multiple Frequencies} \label{sect:MultipleFrequencies} Consider $\hat{\mathcal{H}}_{\tr{small}}(t)$ that is periodic in time with $\tau_r$ ($=\frac{2\pi}{\omega_r}$) and $\hat{\mathcal{H}}_{\tr{big}}(t)$ that is periodic in time with a different time $\tau_m$ ($=\frac{2\pi}{\omega_m}$). Although the sum of the two Hamiltonians, $\hat{\mathcal{H}}_{\tr{small}}(t) + \hat{\mathcal{H}}_{\tr{big}}(t)$, is necessarily periodic in time with $\tau'_c = \frac{2\pi}{\gcd(\omega_r,\omega_m)}$, where $\gcd(a,b)$ refers to the greatest common divisor of $a$ and $b$, the interaction frame Hamiltonian $\hat{\tilde{\mathcal{H}}}(t)$ is not necessarily periodic with $\tau'_c$. This is because the propagator of the big Hamiltonian at $\tau'_c$, $\hat{U}_{\tr{big}}(\tau'_c)$, may not be cyclic with time $\tau'_c$, i.e., $\hat{U}_{\tr{big}}(\tau'_c) \neq \mathds{1}$. But a multiple $p$ of $\tau'_c$ can always be found such that $\hat{U}_{\tr{big}}(p\cdot\tau'_c) = \mathds{1}$. Therefore, in general, \begin{empheq}[right={\empheqrbrace \Rightarrow \hat{\tilde{\mathcal{H}}}(\tau_c + t) = \hat{\tilde{\mathcal{H}}}(t),}]{alignat=2} \begin{aligned} \hat{\mathcal{H}}_{\tr{small}}\Bigg(\frac{2\pi}{\omega_r} + t\Bigg) &= \hat{\mathcal{H}}_{\tr{small}}(t)\\ \hat{\mathcal{H}}_{\tr{big}}\Bigg(\frac{2\pi}{\omega_m} + t\Bigg) &= \hat{\mathcal{H}}_{\tr{big}}(t)\\ \hat{U}_{\tr{big}}\big(\tau_c\big) &= \mathds{1} \end{aligned} \label{eq:318} \end{empheq} where $\tau_c = p \cdot \tau'_c = p \cdot \frac{2\pi}{\gcd(\omega_r,\omega_m)}$, with $p$ an integer.\\ In the special case where the propagator of the big Hamiltonian is cyclic with time $\tau'_c$, i.e., $\hat{U}_{\tr{big}}(\tau'_c) = \mathds{1}$, the period of $\hat{\tilde{\mathcal{H}}}(t)$ is defined only by the frequencies $\omega_r$ and $\omega_m$ and is given by \begin{equation} \tau_c = \tau'_c = \frac{2\pi}{\gcd(\omega_r,\omega_m)}.
\end{equation} The time dependency of the interaction frame Hamiltonian in this special case, $\tau_c = \tau'_c$, can therefore be expressed in the Fourier space with only these two frequencies as given by, \begin{equation} \hat{\tilde{\mathcal{H}}}(t) = \sum_{n,k}^{}\hat{\tilde{\mathcal{H}}}^{(n,k)} e^{in\omega_rt} e^{ik\omega_mt}, \label{eq:3-20} \end{equation} where $\hat{\tilde{\mathcal{H}}}^{(n,k)}$ are the Fourier components, the calculation of which is shown later in Sec. \ref{sect:4_ff} and Sec. \ref{sect:5_ff}. The effective or average Hamiltonian for the cycle time $\tau_c$ ($= \tau'_c$), to the first and second order, is then given by, \begin{equation} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H}}}}^{(1)} &= \sum_{n_0,k_0} \hat{\tilde{\mathcal{H}}}^{(n_0,k_0)} \end{alignedat} \label{eq:321} \end{equation} and \begin{gather} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H}}}}^{(2)} &= \frac{-1}{2} \sum_{n_0,k_0}\sum_{n,k}\frac{[\hat{\tilde{\mathcal{H}}}^{(n_0-n,k_0-k)},\hat{\tilde{\mathcal{H}}}^{(n,k)}]}{n\omega_r + k\omega_m} + \sum_{n_0,k_0}\sum_{n,k}\frac{[\hat{\tilde{\mathcal{H}}}^{(n_0,k_0)},\hat{\tilde{\mathcal{H}}}^{(n,k)}]}{n\omega_r + k\omega_m}, \end{alignedat} \label{eq:322} \end{gather} where $n_0\omega_r + k_0\omega_m = 0$ and $n\omega_r + k\omega_m \neq 0$. The expressions in Eq. \ref{eq:321} and Eq. \ref{eq:322} can be derived by following the same procedure described in the previous section. It is noted here that even though different pairs of $(n,k)$ will have different cycle times $\Big(\frac{2\pi}{n\omega_r + k\omega_m}\Big)$ in the integration, each such cycle time is always a factor of $\tau_c$ defined above, and therefore the integration over $\tau_c$ remains valid.\\ The expressions for the effective Hamiltonian given here for the first two orders are exactly the same as those obtained through Floquet theory using the van Vleck transformation, and are available in literature reviews\cite{leskes2010floquet,scholz2010operator}.
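The cycle time $\tau'_c = \frac{2\pi}{\gcd(\omega_r,\omega_m)}$ for commensurate frequencies can be illustrated with a small numerical sketch (the integer multiples and the base frequency below are assumed example values):

```python
from math import gcd, pi

# Sketch with assumed numbers: for commensurate frequencies written as
# integer multiples of a base frequency w0, the cycle time
# tau_c' = 2*pi/gcd(w_r, w_m) is the least common multiple of both periods.
n_r, n_m = 5, 3                 # w_r = 5*w0, w_m = 3*w0 (assumed)
w0 = 2 * pi * 1.0e3             # base frequency in rad/s (assumed)

tau_r = 2 * pi / (n_r * w0)
tau_m = 2 * pi / (n_m * w0)
tau_c = 2 * pi / (gcd(n_r, n_m) * w0)   # = 2*pi/gcd(w_r, w_m)

# tau_c contains an integer number of both periods (5 and 3 here):
print(tau_c / tau_r, tau_c / tau_m)
```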
In the most general case, where the propagator of the big Hamiltonian, $\hat{U}_{\tr{big}}(t)$, is not cyclic with $\tau'_c$, but cyclic with some multiple $p$ of $\tau'_c$, i.e., $\hat{U}_{\tr{big}}(\tau'_c) \neq \mathds{1}$ but $\hat{U}_{\tr{big}}(p\cdot\tau'_c) = \mathds{1}$, the period of the interaction frame Hamiltonian $\hat{\tilde{\mathcal{H}}}(t)$ is defined not only by the two frequencies $\omega_r$ and $\omega_m$, but also by a third frequency which reflects the multiple $p$. However, propagators for the interaction frame Hamiltonian, $\hat{\tilde{\mathcal{H}}}(t)$, can still be found at multiples of $\tau'_c$, using only time-independent averaged or effective Hamiltonians, by employing a trick.\\ Say, $\hat{U}_{\tr{big}}(\tau'_c)$ can be found and expressed as, \begin{equation} \hat{U}_{\tr{big}}(\tau'_c) = e^{-i\hat{\mathcal{H}}_{\tr{eff}}\tau'_c}, \label{eq:323} \end{equation} where $\hat{\mathcal{H}}_{\tr{eff}}$ is the time-independent average (effective) Hamiltonian of $\hat{\mathcal{H}}_{\tr{big}}(t)$ averaged over its period $\tau'_c$. Since $\hat{\mathcal{H}}_{\tr{big}}$ is typically the rf field and/or isotropic chemical shift Hamiltonians, it only involves single-spin operators and so the average $\hat{\mathcal{H}}_{\tr{eff}}$ can be calculated using quaternions, which is discussed later in the chapter, in Sec. \ref{sect:quaternions}. Note that since $\hat{\mathcal{H}}_{\tr{big}}(t)$ is periodic in time with $\tau'_c$ (ref. Eq. \ref{eq:318}), the propagator given in Eq. \ref{eq:323} can be used to find the propagator at multiples of $\tau'_c$ as given by, \begin{equation} \hat{U}_{\tr{big}}(N\tau'_c) = \Big(\hat{U}_{\tr{big}}(\tau'_c)\Big)^N, \label{eq:324} \end{equation} where $N$ is an integer.
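Eqs. \ref{eq:323} and \ref{eq:324} can be illustrated numerically. The sketch below (an assumed piecewise-constant single-spin Hamiltonian with arbitrary parameter values, not from the text) extracts $\hat{\mathcal{H}}_{\tr{eff}}$ from the one-period propagator via a matrix logarithm and then propagates to multiples of $\tau'_c$ by matrix powers:

```python
import numpy as np

# Minimal numerical sketch (assumed example): extract the time-independent
# H_eff from the propagator of a piecewise-constant single-spin "big"
# Hamiltonian over one period tau_c', then propagate to multiples of tau_c'.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def expmih(h, t):
    """exp(-i h t) for a Hermitian h, via eigendecomposition."""
    evals, v = np.linalg.eigh(h)
    return (v * np.exp(-1j * evals * t)) @ v.conj().T

tau = 1.0e-5                                   # tau_c' (assumed)
# two constant pieces inside one period (assumed rf amplitudes + offset)
h1 = 2 * np.pi * 50e3 * sx + 2 * np.pi * 10e3 * sz
h2 = 2 * np.pi * 30e3 * sy + 2 * np.pi * 10e3 * sz
U = expmih(h2, tau / 2) @ expmih(h1, tau / 2)  # U_big(tau_c')

# H_eff = (i/tau) log U, via the eigendecomposition of the unitary U
lam, v = np.linalg.eig(U)
H_eff = (1j / tau) * (v * np.log(lam)) @ np.linalg.inv(v)
H_eff_h = (H_eff + H_eff.conj().T) / 2         # hermitise away rounding error

# exp(-i H_eff tau) reproduces U, and U_big(N tau_c') = U_big(tau_c')^N
# equals exp(-i H_eff N tau_c').
print(np.allclose(expmih(H_eff_h, tau), U))    # True
```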
This allows the different intervals of duration $\tau'_c$ of $\hat{\tilde{\mathcal{H}}}(t)$ to be related according to, \begin{equation} \hat{\tilde{\mathcal{H}}}(N\tau_c' + t) = \hat{U}_{\tr{big}}^{\dagger}(N\tau_c')\hat{\tilde{\mathcal{H}}}(t)\hat{U}_{\tr{big}}(N\tau_c'). \label{eq:314} \end{equation} With the help of Eq. \ref{eq:314}, it can be shown that propagators of the interaction frame Hamiltonian can be defined at multiples of $\tau'_c$, which is only a fraction of the period $\tau_c$ ($= p\cdot\tau'_c$) of the interaction frame Hamiltonian $\hat{\tilde{\mathcal{H}}}(t)$, according to \begin{equation} \begin{aligned} \hat{\tilde{U}}(N\tau_c') &=e^{i\hat{\mathcal{H}}_{\tr{eff}}N\tau_c'} \cdot e^{-i\hat{\overline{\tilde{\mathcal{H}}}}N\tau_c'}. \end{aligned} \label{eq:modProp} \end{equation} Note that in Eq. \ref{eq:modProp}, $\hat{\overline{\tilde{\mathcal{H}}}}$ is the effective time-independent Hamiltonian of $\hat{\tilde{\mathcal{H}}}(t)$ averaged over the period $\tau_c$, while $\hat{\mathcal{H}}_{\tr{eff}}$ is the effective time-independent Hamiltonian of $\hat{\mathcal{H}}_{\tr{big}}(t)$ averaged over $\tau'_c$. This is advantageous as the propagators are now defined at time points ($\tau'_c$) shorter than the period ($\tau_c$) of $\hat{\tilde{\mathcal{H}}}(t)$, using only time-independent effective Hamiltonians.\\ To verify Eq. \ref{eq:modProp}, consider $\tau_c$ sliced into $p$ intervals of duration $\tau'_c$ and each of those intervals further sliced into $m$ divisions of duration $\delta t$.
The explicit expression for the propagator of the interaction frame Hamiltonian at $\tau_c$ can then be written as a product of exponentials, as given by, \begin{equation} \begin{alignedat}{2} \hat{\tilde{U}}^{\dagger}(\tau_c) &= \Bigg(\prod_{k=1}^{m}e^{i\hat{\tilde{\mathcal{H}}}(k\delta t)\delta t}\Bigg)\\ & \hspace*{40pt} \hat{U}_{\tr{big}}^{\dagger}(\tau'_c) \cdot \Bigg(\prod_{k=1}^{m}e^{i\hat{\tilde{\mathcal{H}}}(k\delta t)\delta t}\Bigg) \cdot \hat{U}_{\tr{big}}(\tau'_c)\\ & \hspace*{100pt} \cdots\\ & \hspace*{60pt} \hat{U}_{\tr{big}}^{\dagger}\big((p-1)\tau'_c\big) \cdot \Bigg(\prod_{k=1}^{m}e^{i\hat{\tilde{\mathcal{H}}}(k\delta t)\delta t}\Bigg) \cdot \hat{U}_{\tr{big}}\big((p-1)\tau'_c\big), \end{alignedat} \label{eq:327} \end{equation} where Eq. \ref{eq:314} has been utilised. Note that in Eq. \ref{eq:327}, the first $q$ lines taken together on the RHS correspond to the propagator $\hat{\tilde{U}}^{\dagger}(q\cdot\tau'_c)$, with $q$ taking the values from 1 to $p$. By taking the exponentials corresponding to the $m$-th divisions out of the products and expressing them in terms of the original small and big Hamiltonians, as given by, \begin{equation} \begin{alignedat}{2} e^{i\hat{\tilde{\mathcal{H}}}(m\delta t)\delta t} &= \hat{U}_{\tr{big}}^{\dagger}(\tau'_c) \cdot e^{i\hat{\mathcal{H}}_{\tr{small}}(m\delta t)\delta t} \cdot \hat{U}_{\tr{big}}(\tau'_c), \end{alignedat} \end{equation} Eq.
\ref{eq:327} can be rewritten as, \begin{equation} \begin{alignedat}{2} \hat{\tilde{U}}^{\dagger}(\tau_c) &= \Bigg(\prod_{k=1}^{m-1}e^{i\hat{\tilde{\mathcal{H}}}(k\delta t)\delta t}\Bigg) \hat{U}_{\tr{big}}^{\dagger}(\tau'_c)e^{i\hat{\mathcal{H}}_{\tr{small}}(m\delta t)\delta t}\hat{U}_{\tr{big}}(\tau'_c)\\ & \hspace*{20pt} \hat{U}_{\tr{big}}^{\dagger}(\tau'_c) \Bigg(\prod_{k=1}^{m-1}e^{i\hat{\tilde{\mathcal{H}}}(k\delta t)\delta t}\Bigg) \hat{U}_{\tr{big}}^{\dagger}(\tau'_c)e^{i\hat{\mathcal{H}}_{\tr{small}}(m\delta t)\delta t}\hat{U}_{\tr{big}}(2\tau'_c)\\ & \hspace*{100pt} \cdots\\ & \hspace*{40pt} \hat{U}_{\tr{big}}^{\dagger}\big((p-1)\tau'_c\big) \Bigg(\prod_{k=1}^{m-1}e^{i\hat{\tilde{\mathcal{H}}}(k\delta t)\delta t}\Bigg) \hat{U}_{\tr{big}}^{\dagger}(\tau'_c)e^{i\hat{\mathcal{H}}_{\tr{small}}(m\delta t)\delta t} \hat{U}_{\tr{big}}(p\tau'_c). \end{alignedat} \label{eq:328} \end{equation} Two things can be inferred from Eq. \ref{eq:328}. One is that, \begin{equation} \begin{alignedat}{2} \hat{\tilde{U}}^{\dagger}(\tau_c) = e^{i\hat{\overline{\tilde{\mathcal{H}}}}\tau_c} &= \Big(e^{i\hat{\overline{\tilde{\mathcal{H}}}}\tau'_c}\Big)^p\\ &= \Bigg(\Bigg(\prod_{k=1}^{m-1}e^{i\hat{\tilde{\mathcal{H}}}(k\delta t)\delta t}\Bigg) \hat{U}_{\tr{big}}^{\dagger}(\tau'_c)e^{i\hat{\mathcal{H}}_{\tr{small}}(m\delta t)\delta t}\Bigg)^p. \end{alignedat} \label{eq:329} \end{equation} The second inference is that the first $q$ lines taken together on the RHS of Eq. \ref{eq:328}, which correspond to $\hat{\tilde{U}}^{\dagger}(q\tau'_c)$, can be rewritten using Eq. \ref{eq:329} to yield \begin{equation} \hat{\tilde{U}}^{\dagger}(q\tau'_c) = \Big(e^{i\hat{\overline{\tilde{\mathcal{H}}}}\tau'_c}\Big)^q \cdot \hat{U}_{\tr{big}}(q\tau'_c). \label{eq:330} \end{equation} By making use of Eqs. \ref{eq:323} and \ref{eq:324} in Eq. \ref{eq:330}, it is seen that Eq.
\ref{eq:modProp} is true.\\ It is worth reiterating that the effective Hamiltonian, $\hat{\overline{\tilde{\mathcal{H}}}}$, of the interaction frame Hamiltonian, $\hat{\tilde{\mathcal{H}}}(t)$, is an average over the period $\tau_c$, while the effective Hamiltonian $\hat{\mathcal{H}}_{\tr{eff}}$ of $\hat{\mathcal{H}}_{\tr{big}}(t)$ is averaged over the interval $\tau'_c$, where $\hat{\mathcal{H}}_{\tr{big}}(t)$ is periodic.\\ Since $\hat{\mathcal{H}}_{\tr{big}}(t)$ is usually the rf field and/or isotropic chemical shift Hamiltonians, its average $\hat{\mathcal{H}}_{\tr{eff}}$ can be expressed as a linear combination of single-spin operators and the effective rotation given by Eq. \ref{eq:323} can be written for a single spin as, \begin{equation} \hat{U}_{\tr{big}}(\tau'_c) = e^{-i\hat{\mathcal{H}}_{\tr{eff}}\tau'_c} = e^{-i \omega_{\tr{cw}}\tau'_c \hat{\mathcal{F}}}, \end{equation} where $\hat{\mathcal{F}}$ is a linear combination of single-spin operators of the spin under consideration. The effective field $\omega_{\tr{cw}}$ and the rotation axis $\hat{\mathcal{F}}$ can be found numerically using a variety of methods, one of which uses \textit{quaternions}, described in Sec. \ref{sect:quaternions}. It can be seen that the periodicity of $\hat{\tilde{\mathcal{H}}}(t)$ is now given by $\tau_c = p\cdot\tau'_c$ such that $\omega_{\tr{cw}}\cdot p\cdot\tau'_c = 2\pi$. This implies that $\tau_c$ is defined not only by the two frequencies $\omega_r$ and $\omega_m$, but also by $\omega_{\tr{cw}}$ according to, \begin{equation} \tau_c = \frac{2\pi}{\gcd(\omega_r,\omega_m,\omega_{\tr{cw}})}.
\end{equation} Therefore the time-dependent interaction frame Hamiltonian in this case is written in the Fourier space using the three frequencies, given by, \begin{equation} \begin{aligned} \hat{\tilde{\mathcal{H}}}(t) = \sum_{n,k,l}^{}\hat{\tilde{\mathcal{H}}}^{(n,k,l)} e^{in\omega_rt} e^{ik\omega_mt} e^{il\omega_{\tr{cw}}t}. \end{aligned} \end{equation} The effective Hamiltonian, $\hat{\overline{\tilde{\mathcal{H}}}}$, can subsequently be derived to first and second order as, \begin{equation} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H}}}}^{(1)} &= \sum_{n_0,k_0,l_0} \hat{\tilde{\mathcal{H}}}^{(n_0,k_0,l_0)} \end{alignedat} \end{equation} and \begin{gather} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H}}}}^{(2)} &= \frac{-1}{2} \sum_{n_0,k_0,l_0}\sum_{n,k,l}\frac{[\hat{\tilde{\mathcal{H}}}^{(n_0-n,k_0-k,l_0-l)},\hat{\tilde{\mathcal{H}}}^{(n,k,l)}]}{n\omega_r + k\omega_m + l\omega_{\tr{cw}}} + \sum_{n_0,k_0,l_0}\sum_{n,k,l}\frac{[\hat{\tilde{\mathcal{H}}}^{(n_0,k_0,l_0)},\hat{\tilde{\mathcal{H}}}^{(n,k,l)}]}{n\omega_r + k\omega_m + l\omega_{\tr{cw}}}, \end{alignedat} \end{gather} where $n_0\omega_r + k_0\omega_m + l_0\omega_{\tr{cw}}= 0$ and $n\omega_r + k\omega_m + l\omega_{\tr{cw}} \neq 0$. \section{Quaternions} \label{sect:quaternions} Quaternions are an alternative way to describe rotations\cite{hamilton1853lectures,hamilton1969,klein1965}. Unlike unitary matrices and directional cosines, which describe the rotation axis of a pulse as functions of the pulse parameters with possible discontinuities, quaternions describe the axis as continuous functions of these parameters. Indeed, this favours the use of quaternions over directional cosines for optimisation procedures involving the rotation axis\cite{emsley1992optimization}.
This will be elucidated below.\\ Consider a periodic rf Hamiltonian, $\hat{\mathcal{H}}(t)$, with period $\tau_p$, given by \begin{equation} \hat{\mathcal{H}}(t) = \Omega \hat{I}_z + \omega_{\tr{rf}}(t) (\hat{I}_x \cos\phi(t) + \hat{I}_y \sin\phi(t)), \end{equation} where the offset from the rf carrier frequency is given by $\Omega = \omega_0 - \omega_c$, and $\omega_{\tr{rf}}(t)$ and $\phi(t)$ describe the time-dependent amplitude and phase of an arbitrary pulse. Dividing $\hat{\mathcal{H}}(t)$ into a finite number $N$ of intervals $\Delta t$, such that $\Delta t$ is short enough to assume $\hat{\mathcal{H}}$ to be constant in the interval and $N\Delta t = \tau_p$, the propagator defined in Eq. \ref{eq:3-3} is given by, \begin{equation} \hat{U}(\tau_p,0) = e^{-i \hat{\mathcal{H}}_N \Delta t} e^{-i \hat{\mathcal{H}}_{N-1} \Delta t} \cdots e^{-i \hat{\mathcal{H}}_1 \Delta t}, \end{equation} where $\hat{\mathcal{H}}_j$ is the constant Hamiltonian for the $j$-th interval. For a spin-$\frac{1}{2}$, $I$, the propagator for the $j$-th interval, $\hat{U}\big(j\Delta t, (j-1)\Delta t\big)$, can be expressed as a linear combination of the operators $\hat{I}_x$, $\hat{I}_y$, $\hat{I}_z$ and $\mathds{1}$\cite{goldman1988quantum}.
This can be expressed as \begin{equation} \hat{U}\big(j\Delta t, (j-1)\Delta t\big) = \begin{bmatrix} U_{11}^{(j)} & U_{12}^{(j)}\\ U_{21}^{(j)} & U_{22}^{(j)}\\ \end{bmatrix}, \end{equation} where \begin{equation} \begin{alignedat}{2} U_{11}^{(j)} & = \cos\frac{\beta_j}{2} - i\cos\theta_j\sin\frac{\beta_j}{2}\\ U_{12}^{(j)} & = -i\sin\theta_j\sin\frac{\beta_j}{2}e^{-i\phi_j}\\ U_{21}^{(j)} & = -i\sin\theta_j\sin\frac{\beta_j}{2}e^{i\phi_j}\\ U_{22}^{(j)} & = \cos\frac{\beta_j}{2} + i\cos\theta_j\sin\frac{\beta_j}{2}\\ \end{alignedat} \end{equation} with $\phi_j$ being the phase of the $j$-th pulse, while the polar angle $\theta_j$ and the flip angle $\beta_j$ are given by \begin{equation} \begin{alignedat}{2} \theta_j &= \tan^{-1}\big(\omega_{\tr{rf}}(j\Delta t)/\Omega\big)\\ \beta_j &= \big(\Omega^2 + \omega^2_{\tr{rf}}(j\Delta t)\big)^{\frac{1}{2}} \Delta t. \end{alignedat} \end{equation} To visualise the position of the rotation axis of the $j$-th pulse, defined by the phase and polar angle of the pulse, directional cosines are considered\cite{BLUMICH1985356}. They are simply the components of the axis of rotation along the $x$, $y$ and $z$ directions and are given by \begin{equation} \begin{alignedat}{2} l_x^{(j)} &= \sin\theta_j\cos\phi_j\\ l_y^{(j)} &= \sin\theta_j\sin\phi_j\\ l_z^{(j)} &= \cos\theta_j. \end{alignedat} \end{equation} For a 50 ms Gaussian-shaped $\pi/2$ pulse, the directional cosines and the flip angle extracted from the overall propagator $\hat{U}(\tau_p,0)$ are shown as functions of offset on the left side in Fig. \ref{fig:3-1}\footnote{Figure adapted from Ref. \cite{emsley1992optimization}. As the exact simulation parameters were not found in the said source, the author has only been able to resimulate to roughly match the original figure.}. Note the jump discontinuity in $l_z$ when the flip angle $\beta$ reaches 0 or $2\pi$ radians.
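The closed-form propagator elements above can be checked against a direct matrix exponential; in the sketch below (assumed offset, amplitude, phase and interval values, with $\Omega > 0$ so that $\theta = \tan^{-1}(\omega_{\tr{rf}}/\Omega)$ lies in the first quadrant) the two agree, and the directional cosines form a unit vector:

```python
import numpy as np

# Sketch: check U11..U22 above against a direct matrix exponential of
# H_j = Omega*Iz + w_rf*(cos(phi)*Ix + sin(phi)*Iy) for one interval
# (assumed example values).
Ix = np.array([[0, 1], [1, 0]]) / 2
Iy = np.array([[0, -1j], [1j, 0]]) / 2
Iz = np.array([[1, 0], [0, -1]]) / 2

Omega = 2 * np.pi * 2e3        # offset (rad/s, assumed)
w_rf = 2 * np.pi * 10e3        # rf amplitude (rad/s, assumed)
phi = np.pi / 3                # pulse phase
dt = 5e-6                      # interval duration

H = Omega * Iz + w_rf * (np.cos(phi) * Ix + np.sin(phi) * Iy)
evals, v = np.linalg.eigh(H)
U_exact = (v * np.exp(-1j * evals * dt)) @ v.conj().T   # exp(-i H dt)

theta = np.arctan2(w_rf, Omega)              # polar angle of the rotation axis
beta = np.sqrt(Omega**2 + w_rf**2) * dt      # flip angle of the interval
c, s = np.cos(beta / 2), np.sin(beta / 2)
U_formula = np.array([
    [c - 1j * np.cos(theta) * s, -1j * np.sin(theta) * s * np.exp(-1j * phi)],
    [-1j * np.sin(theta) * s * np.exp(1j * phi), c + 1j * np.cos(theta) * s],
])

# directional cosines of the rotation axis
l = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])
print(np.allclose(U_exact, U_formula))       # True
```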
\begin{figure}\label{fig:3-1} \end{figure} The issue of discontinuity is avoided by describing rotations using quaternions, whose elements are related to the directional cosines as, \begin{equation} \begin{alignedat}{2} A_{j} = l_x^{(j)} \sin\frac{\beta_j}{2},\\ B_{j} = l_y^{(j)} \sin\frac{\beta_j}{2},\\ C_{j} = l_z^{(j)} \sin\frac{\beta_j}{2},\\ D_{j} = \cos\frac{\beta_j}{2}.\\ \end{alignedat} \end{equation} The overall quaternion is obtained by \begin{equation} \begin{alignedat}{2} \begin{bmatrix} A_{\tr{tot}}\\ B_{\tr{tot}}\\ C_{\tr{tot}}\\ D_{\tr{tot}} \end{bmatrix} &= \begin{bmatrix} D_N & -C_N & B_N & A_N \\ C_N & D_N & -A_N & B_N \\ -B_N & A_N & D_N & C_N \\ -A_N & -B_N & -C_N & D_N \end{bmatrix} \cdots \begin{bmatrix} D_2 & -C_2 & B_2 & A_2 \\ C_2 & D_2 & -A_2 & B_2 \\ -B_2 & A_2 & D_2 & C_2 \\ -A_2 & -B_2 & -C_2 & D_2 \end{bmatrix} \begin{bmatrix} A_1\\ B_1\\ C_1\\ D_1 \end{bmatrix} \end{alignedat} \end{equation} Elements of the overall quaternion calculated for the same 50 ms Gaussian-shaped pulse are shown on the right side in Fig. \ref{fig:3-1}. Note that the plots are free of any jump discontinuities associated with directional cosines.\\ Euler angles are another way to describe rotations. However, they suffer from the phenomenon of gimbal lock, where at particular configurations two of the three degrees of freedom (corresponding to the three Euler angles) become redundant. This constrains the motion, unless the system is artificially moved out of those configurations. Quaternions also avoid the problem of gimbal lock and are therefore the preferred description of rotations, as far as numerical optimisation involving the axis of rotation is concerned.\\ Having established the expressions for the effective Hamiltonian in this chapter, the subsequent chapters will discuss the formalism developed to express the interaction frame Hamiltonian as a product of complex exponentials with a finite set of frequencies, a requirement for the validity of the expressions derived here.
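The quaternion composition above can be verified against the direct product of the corresponding SU(2) propagators, using the correspondence $\hat{U}_j = D_j\mathds{1} - i(A_j\sigma_x + B_j\sigma_y + C_j\sigma_z)$; the interval parameters in the sketch below are random, assumed values:

```python
import numpy as np

# Sketch: the 4x4 quaternion composition matrix reproduces the product of
# the corresponding SU(2) propagators (random assumed pulse parameters).
rng = np.random.default_rng(7)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def quat(theta, beta, phi):
    """Quaternion (A, B, C, D) of one interval from its polar/flip/phase angles."""
    s = np.sin(beta / 2)
    return np.array([np.sin(theta) * np.cos(phi) * s,
                     np.sin(theta) * np.sin(phi) * s,
                     np.cos(theta) * s,
                     np.cos(beta / 2)])

def q_to_U(q):
    """SU(2) matrix D*1 - i(A*sx + B*sy + C*sz) of a quaternion."""
    A, B, C, D = q
    return D * I2 - 1j * (A * sx + B * sy + C * sz)

def compose(q2, q1):
    """[A;B;C;D]_tot = M(q2) @ q1, the 4x4 composition matrix from the text."""
    A, B, C, D = q2
    M = np.array([[D, -C,  B, A],
                  [C,  D, -A, B],
                  [-B, A,  D, C],
                  [-A, -B, -C, D]])
    return M @ q1

params = rng.uniform(0, np.pi, size=(6, 3))     # (theta, beta, phi) per interval
qs = [quat(*p) for p in params]

q_tot = qs[0]
U_tot = q_to_U(qs[0])
for q in qs[1:]:
    q_tot = compose(q, q_tot)        # later interval acts from the left
    U_tot = q_to_U(q) @ U_tot

print(np.allclose(q_to_U(q_tot), U_tot))   # True: same overall rotation
```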
The next chapter will deal with pulse sequences that are only amplitude-modulated, while the subsequent chapter will deal with the formalism to handle pulse sequences that are both amplitude- and phase-modulated. \chapter{Amplitude Modulated Pulse Sequences} \label{chap:ampMod} In this chapter, a description is introduced that aids in expressing the rf interaction frame time-dependent Hamiltonian, where the spin part is time-modulated only by an amplitude-modulated rf pulse sequence, as a product of complex exponentials. This enables the application of formulas derived in chapter \ref{chap:DesignPrinciples} to obtain a time-independent effective Hamiltonian that describes the experiment. The idea is illustrated by first describing the homonuclear radio-frequency driven recoupling (RFDR) \cite{rfdr_vega,rfdr_griffin_2008} experiment, where the spin part time modulation is rewritten with only one frequency (see Sec. \ref{sect:3_SingleFrequency}). The description helps with arguments in favour of introducing temporal displacement of $\pi$ pulses in the RFDR experiment for better transfer efficiencies.\\ Later, the theoretical description for amplitude-modulated pulse sequences is applied to study the transfer efficiency of $^{\tr{RESPIRATION}}$CP under the chemical shift interaction, where the spin part is rewritten with two basic frequencies for every rf field channel (see Sec. \ref{sect:MultipleFrequencies}). The low polarisation transfer efficiency of $^{\tr{RESPIRATION}}$CP in spin systems with dominant chemical shift offset interactions is explained by the generation of a second-order effective Hamiltonian term that is along the rf field axis, as this displaces the effective Hamiltonian axis from the plane transverse to the transfer axis, in the zero-quantum subspace. Effective Hamiltonians so calculated are compared with direct-propagation numerical simulations. The validity of the description at large offset is questioned and explained with the help of a quaternion analysis.
Aided by insights from the analysis, a new variant of $^{\tr{RESPIRATION}}$CP is first proposed. The variant, called Broadband-$^{\tr{RESPIRATION}}$CP, is shown to perform better even under large isotropic chemical shift offsets. Second, the insights offer arguments in favour of transforming into an interaction frame defined by both the rf field and the isotropic chemical shift offset, as against an interaction frame defined only by the rf field. Such a treatment would amount to handling phase modulation in the pulse sequence. A full description to explain any amplitude- and phase-modulated pulse sequence is given in the next chapter. \section{Fundamental Frequencies} \label{sect:4_ff} Consider a time-dependent rf field Hamiltonian that is only modulated in amplitude, given by \begin{equation} \hat{\mathcal{H}}_{\tr{rf}}(t) = \omega_{\tr{rf}}(t) \hat{I}_x. \end{equation} This can be split into two components: a time-independent continuous wave component, $\omega_{\tr{cw}} = \overline{\omega_{\tr{rf}}(t)}$, and a time-dependent amplitude-modulated component with zero net rotation, $\omega_{\tr{AM}}(t) = \omega_{\tr{rf}}(t) - \omega_{\tr{cw}}$. This is illustrated for a single pulse in Fig. \ref{fig:4-1} for better understanding and can be extended to multiple pulses in a straightforward manner.
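The splitting of $\omega_{\tr{rf}}(t)$ into $\omega_{\tr{cw}}$ and a zero-mean $\omega_{\tr{AM}}(t)$ can be sketched numerically (the Gaussian amplitude profile and all parameter values below are assumptions for illustration); the sketch also extracts the Fourier coefficients of the cosine and sine of the accumulated AM flip angle:

```python
import numpy as np

# Sketch (assumed Gaussian amplitude profile and parameter values): split
# w_rf(t) into its mean (cw) part and a zero-mean AM part, accumulate the
# AM flip angle beta_m(t) = int_0^t w_AM(t')dt', and extract the Fourier
# coefficients of cos(beta_m) and sin(beta_m).
N = 1024
tau_m = 1.0e-4                              # modulation period (assumed)
t = np.arange(N) * tau_m / N
w_rf = 2 * np.pi * 40e3 * np.exp(-((t - tau_m / 2) ** 2) / (2 * (tau_m / 8) ** 2))

w_cw = w_rf.mean()                          # continuous-wave component
w_am = w_rf - w_cw                          # zero-mean AM component

beta_m = np.cumsum(w_am) * tau_m / N        # running flip angle of the AM part
print(abs(beta_m[-1]))                      # ~0: zero net rotation over the period

# Fourier coefficients multiplying e^{+i k w_m t} (numpy FFT sign convention)
a_z = np.fft.fft(np.cos(beta_m)) / N
a_y = np.fft.fft(np.sin(beta_m)) / N
cos_rec = np.fft.ifft(a_z * N)              # reconstruction check
```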
\begin{figure}\label{fig:4-1} \end{figure}\\ The rf interaction frame transformation of single-spin operators, say $\hat{I}_z$, can now be rewritten as \begin{equation} \begin{alignedat}{2} \hat{\tilde{I}}_z(t) &= \Big(\hat{\mathcal{T}}e^{-i\hat{I}_x\int_{0}^{t}\omega_{\tr{AM}}(t')dt'}\Big)^{\dagger} e^{i\omega_{\tr{cw}}\hat{I}_x t} \hat{I}_z e^{-i\omega_{\tr{cw}}\hat{I}_x t} \Big(\hat{\mathcal{T}}e^{-i\hat{I}_x\int_{0}^{t}\omega_{\tr{AM}}(t')dt'}\Big)\\ &= \hat{I}_z \Big(\cos\big(\omega_{\tr{cw}}t\big)\cos\big(\beta_m(t)\big)-\sin\big(\omega_{\tr{cw}}t\big)\sin\big(\beta_m(t)\big)\Big) +\\ & \hspace*{30pt}\hat{I}_y \Big(\sin\big(\omega_{\tr{cw}}t\big)\cos\big(\beta_m(t)\big)+\cos\big(\omega_{\tr{cw}}t\big)\sin\big(\beta_m(t)\big)\Big), \end{alignedat} \label{eq:4_intFrame} \end{equation} where $\beta_m(t) = \int_{0}^{t}\omega_{\tr{AM}}(t')dt'$. This is valid as the two rf field components commute with each other at all times. As the flip angle imparted by the amplitude-modulated component over the entire period defined by the pulse element is zero, its cosine and sine can be written as Fourier series given by, \begin{equation} \begin{alignedat}{2} \cos\big(\beta_m(t)\big) &= \sum_{k = -\infty}^{\infty} a_{k}^z e^{ik\omega_m t}\\ \sin\big(\beta_m(t)\big) &= \sum_{k = -\infty}^{\infty} a_{k}^y e^{ik\omega_m t}.\\ \end{alignedat} \label{eq:4_Fourier_Omegam} \end{equation} Eq. \ref{eq:4_Fourier_Omegam}, along with the complex exponential forms of $\cos(\omega_{\tr{cw}}t)$ and $\sin(\omega_{\tr{cw}}t)$ and the relations $\hat{I}^{+} = \hat{I}_z + i\hat{I}_y$ and $\hat{I}^{-} = \hat{I}_z - i\hat{I}_y$, can be used to rewrite Eq.
\ref{eq:4_intFrame} as \begin{equation} \begin{alignedat}{2} \hat{\tilde{I}}_z(t) &= \sum_{k=-\infty}^{\infty}\sum_{\substack{l=-1\\l\neq0}}^{1}\hat{\mathcal{H}}^{(k,l)}e^{ik\omega_mt}e^{il\omega_{\tr{cw}}t}, \end{alignedat} \label{eq:4_timeEq_FT} \end{equation} where the Fourier coefficients are $\hat{\mathcal{H}}^{(k,\pm1)} = \frac{1}{2}(a_{k}^z \pm ia_{k}^y) \hat{I}^{\mp}$.\\ The above procedure can be used to express all amplitude-modulated rf field interaction frame time-dependent Hamiltonian terms. It is now valid to use the expressions for the effective Hamiltonian derived in Sec. \ref{sect:MultipleFrequencies}, for both the cases $\omega_{\tr{cw}} = 0$ and $\omega_{\tr{cw}} \neq 0$. \section{Adiabatic RFDR}\cite{straaso2016improved} \label{sect:4_adRFDR} In this section, a modification to the RFDR pulse sequence that improves the transfer efficiency, theoretically to 100\%, is presented. The improvement is a result of adiabatically varying the temporal displacement of $\pi$ pulses in the pulse sequence from their original positions. Consider two homonuclear coupled spin-$\frac{1}{2}$ nuclei, $I_1$ and $I_2$. In the rotating frame, the time-dependent Hamiltonian is given by, \begin{equation} \hat{\mathcal{H}}(t) = \hat{\mathcal{H}}_I(t) + \hat{\mathcal{H}}_{II}(t) + \hat{\mathcal{H}}_{\tr{rf}}(t), \end{equation} with \begin{equation} \begin{aligned} \hat{\mathcal{H}}_I(t) &= \sum_{q=1}^{2} \sum_{n=-2}^{2}\omega_{I_q}^{(n)} e^{in\omega_rt}\hat{I}_{qz}, \end{aligned} \label{eq:4_csH} \end{equation} \begin{equation} \begin{aligned} \hat{\mathcal{H}}_{II}(t) &= \sum_{n=-2}^{2} \omega_{I_1I_2}^{(n)} e^{in\omega_rt} (2\hat{I}_{1z}\hat{I}_{2z}-\hat{I}_{1x}\hat{I}_{2x}-\hat{I}_{1y}\hat{I}_{2y}), \end{aligned} \label{eq:4_dipH} \end{equation} where the angular frequencies $\omega_{I_q}^{(n)}$ are the isotropic ($n = 0$) and anisotropic chemical shifts of the two nuclei, $\omega_{I_1I_2}^{(n)}$ the dipolar coupling and $\omega_r$ the spinning rate.
The original RFDR and adiabatic RFDR pulse sequences are shown in a 2D correlation experiment depicted in Fig. \ref{fig:4-2}A. The RFDR pulse sequence has a $\pi$ pulse in the middle of every rotor period, as shown in Fig. \ref{fig:4-2}B. To compensate for chemical shift offset and pulse imperfections by averaging certain higher-order terms in the Hamiltonian, an XY-4 or XY-8 phase cycling scheme\cite{XY4_XY8_1,XY4_XY8_2} of the pulse train is employed. It will be shown in this section that by adiabatically varying the temporal placement of the $\pi$ pulses, polarisation transfer through the recoupled homonuclear dipolar coupling Hamiltonian can be improved. The modified pulse sequence, called the adiabatic RFDR pulse sequence, is shown in Fig. \ref{fig:4-2}C. It is noted that the sequence presented here is conceptually different from the variant found in the literature, where the $\pi$ pulses are replaced with adiabatic inversion pulses for improved stability of the experiment\cite{rfdr_adiabaticPi}. The basic unit, which is two rotor periods long in this case, has the two $\pi$ pulses shifted in time by $\Delta \tau$ in opposite directions from $\frac{\tau_r}{2}$ and $\frac{3\tau_r}{2}$. Clearly, $\Delta \tau = 0$ corresponds to the original RFDR pulse sequence. The convention here is that $\Delta \tau$ is negative when the first $\pi$ pulse is advanced in time with respect to $\frac{\tau_r}{2}$ (i.e., the second $\pi$ pulse is delayed with respect to $\frac{3\tau_r}{2}$) and vice versa. A specific time-shift $\Delta \tau$ is repeated four times to accommodate the XY-8 phase cycling scheme for the $\pi$ pulses. Ideally, in an adiabatic RFDR experiment, the time shift $\Delta \tau$ is changed gradually over repetitions.\\ \begin{figure} \caption{A $^{13}$C-$^{13}$C correlation experiment using either B) RFDR or C) adiabatic RFDR to mediate polarization transfer. The RFDR element consists of one $\pi$ pulse in the middle of every rotor period, which is repeated M times.
In adiabatic RFDR, two rotor periods are considered where the temporal positions of the $\pi$ pulses are displaced in opposite directions with respect to the centre of the rotor period, by $\Delta \tau$. The element is repeated M times, with a new $\Delta \tau$ for every repetition. During the RFDR mixing element, $^1$H decoupling with constant x-phase is employed.} \label{fig:4-2} \end{figure} Assuming ideal $\pi$ pulses, the dipolar coupling Hamiltonian given in Eq. \ref{eq:4_dipH} remains unchanged in the rf field interaction frame. However, the chemical shift Hamiltonian given in Eq. \ref{eq:4_csH} is modified and is given by, \begin{equation} \begin{aligned} \hat{\tilde{\mathcal{H}}}_I(t) &= \sum_{q=1}^{2} \sum_{n=-2}^{2}\omega_{I_q}^{(n)} e^{in\omega_rt} \varepsilon(t) \hat{I}_{qz}, \end{aligned} \end{equation} where $\varepsilon(t)$ denotes the sign of the spin part of the chemical shift Hamiltonian at time $t$, and is the same for both spins. The isotropic chemical shift Hamiltonian can be rewritten as \begin{equation} \begin{alignedat}{2} \omega_{I_1}^{(0)} \varepsilon(t) \hat{I}_{1z} + \omega_{I_2}^{(0)} \varepsilon(t) \hat{I}_{2z} &= \Omega^{\tr{diff}} \varepsilon(t) \hat{I}_z^{ZQ} + \Omega^{\tr{sum}} \varepsilon(t) \hat{I}_z^{DQ}, \end{alignedat} \label{eq:4_cs_zq_dq} \end{equation} where $\Omega^{\tr{diff}} = \omega_{I_1}^{(0)} - \omega_{I_2}^{(0)}$ and $\Omega^{\tr{sum}} = \omega_{I_1}^{(0)} + \omega_{I_2}^{(0)}$ are the difference and sum of the isotropic chemical shifts, respectively, while $\hat{I}_z^{ZQ} = \frac{1}{2}(\hat{I}_{1z}-\hat{I}_{2z})$ and $\hat{I}_z^{DQ} = \frac{1}{2}(\hat{I}_{1z}+\hat{I}_{2z})$ are the fictitious spin-$\frac{1}{2}$ zero-quantum and double-quantum operators\cite{fictitiousOps_Vega_1978}, respectively.
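The zero-quantum/double-quantum decomposition in Eq. \ref{eq:4_cs_zq_dq} can be verified on explicit $4\times4$ matrices (the common factor $\varepsilon(t)$ multiplies both sides equally and is therefore omitted; the shift values below are assumed):

```python
import numpy as np

# Sketch: verify w1*I1z + w2*I2z = Omega_diff*Iz_ZQ + Omega_sum*Iz_DQ
# on explicit two-spin matrices (assumed shift values).
Iz = np.diag([0.5, -0.5])
E = np.eye(2)
I1z = np.kron(Iz, E)
I2z = np.kron(E, Iz)

Iz_ZQ = (I1z - I2z) / 2          # fictitious zero-quantum operator
Iz_DQ = (I1z + I2z) / 2          # fictitious double-quantum operator

w1, w2 = 2 * np.pi * 3e3, 2 * np.pi * -1e3      # isotropic shifts (assumed)
Omega_diff, Omega_sum = w1 - w2, w1 + w2

lhs = w1 * I1z + w2 * I2z
rhs = Omega_diff * Iz_ZQ + Omega_sum * Iz_DQ
print(np.allclose(lhs, rhs))    # True
```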
As the zero-quantum terms in the effective dipolar coupling Hamiltonian are what mediate transfer in the RFDR experiment\cite{rfdr_Vega_1992,rfdr_griffin_1998}, it proves useful to transform the problem further into a frame defined by the zero-quantum term in Eq. \ref{eq:4_cs_zq_dq}. To do so, a trick similar to the one discussed in Sec. \ref{sect:4_ff} is employed to split $\Omega^{\tr{diff}} \varepsilon(t)$, given in Fig. \ref{fig:4-3}A, as a sum of a time-independent component, $\Omega_{\tr{cw}}^{\tr{diff}} = \overline{\Omega^{\tr{diff}} \varepsilon(t)}$, shown in Fig. \ref{fig:4-3}C, and a time-dependent component with zero mean, $\Omega_m^{\tr{diff}}(t) = \Omega^{\tr{diff}} \varepsilon(t) - \Omega_{\tr{cw}}^{\tr{diff}}$, shown in Fig. \ref{fig:4-3}B.\\ The total rf field interaction frame Hamiltonian of the system with all the components is now given by, \begin{equation} \begin{alignedat}{2} \hat{\tilde{\mathcal{H}}}(t) &= \Omega^{\tr{diff}}_m(t) \hat{I}_z^{ZQ} + \Omega^{\tr{diff}}_{\tr{cw}} \hat{I}_z^{ZQ} + \Omega^{\tr{sum}}\varepsilon(t) \hat{I}_z^{DQ} + \sum_{q=1}^{2}\sum_{\substack{n=-2\\n\neq0}}^{2}\omega_{I_q}^{(n)} e^{in\omega_rt}\varepsilon(t) \hat{I}_{qz}\\ &\hspace{40pt} +\sum_{n=-2}^{2}\omega_{I_1I_2}^{(n)} e^{in\omega_rt} (2\hat{I}_{1z}\hat{I}_{2z}-\hat{I}_{1x}\hat{I}_{2x}-\hat{I}_{1y}\hat{I}_{2y}). \end{alignedat} \label{eq:4_rfFrameH} \end{equation} \begin{figure}\label{fig:4-3} \end{figure} By transforming the problem into a frame defined by the first term in Eq. \ref{eq:4_rfFrameH}, only the dipolar Hamiltonian term is changed, as all other terms commute with the first term. The transformation is given by, \begin{equation} \begin{alignedat}{3} \hat{\tilde{U}}(t) &= \hat{\mathcal{T}} e^{-i\hat{I}^{ZQ}_z\int_{0}^{t}\Omega^{\tr{diff}}_m(t')dt'} &= e^{-i\beta(t)\hat{I}^{ZQ}_z}. \end{alignedat} \end{equation} Similar to Eq.
\ref{eq:4_Fourier_Omegam}, the cosine and sine of $\beta(t)$ can be written as $\cos(\beta(t)) = \sum_{k=-\infty}^{\infty}a_{k}^xe^{ik\omega_mt}$ and $\sin(\beta(t)) = \sum_{k=-\infty}^{\infty}a_{k}^ye^{ik\omega_mt}$, respectively. This leads to the dipolar coupling Hamiltonian in the interaction frame of the first term of Eq. \ref{eq:4_rfFrameH} being given by, \begin{equation} \begin{alignedat}{2} \hat{\tilde{\mathcal{H'}}}_{II}(t) = \sum_{n=-2}^{2}\sum_{k=-\infty}^{\infty}\hat{\tilde{\mathcal{H'}}}^{(n,k)} e^{in\omega_rt} e^{ik\omega_mt} \end{alignedat} \end{equation} where the Fourier components work out to, \begin{equation} \begin{aligned} \hat{\tilde{\mathcal{H'}}}^{(n,k)} = \omega_{I_1I_2}^{(n)}(-\hat{I}_x^{ZQ}a_{k}^x + \hat{I}_y^{ZQ}a_{k}^y + 2\hat{I}_{1z}\hat{I}_{2z} \delta_{k,0}). \end{aligned} \end{equation} The fictitious spin-$\frac{1}{2}$ zero-quantum operators are $\hat{I}_x^{ZQ} = \hat{I}_{1x}\hat{I}_{2x} + \hat{I}_{1y}\hat{I}_{2y}$ and $\hat{I}_y^{ZQ} = \hat{I}_{1y}\hat{I}_{2x} - \hat{I}_{1x}\hat{I}_{2y}$. The first-order effective dipolar coupling Hamiltonian is given by (see Sec. \ref{sect:MultipleFrequencies}) \begin{equation} \hat{\overline{\tilde{\mathcal{H'}}}}^{(1)}_{II} = \sum_{n,k}\hat{\tilde{\mathcal{H'}}}^{(n,k)} \label{eq:4_1stOrdHomo} \end{equation} such that $n\omega_r + k\omega_m = 0$. As $2\omega_m = \omega_r$ for the problem here, the condition becomes $2n+k = 0$, which simplifies Eq.
\ref{eq:4_1stOrdHomo} to \begin{equation} \hat{\overline{\tilde{\mathcal{H'}}}}^{(1)}_{II} = -\hat{I}_x^{ZQ}\omega_x^{\tr{eff}} + \hat{I}_y^{ZQ}\omega_y^{\tr{eff}} \label{eq:4_recDipHamil} \end{equation} where $\omega^{\tr{eff}}_x = \sum_{n,k}^{}\omega_{I_1I_2}^{(n)}a_{k}^x$ and $\omega^{\tr{eff}}_y = \sum_{n,k}^{}\omega_{I_1I_2}^{(n)}a_{k}^y$ with $n\omega_r + k\omega_m = 0$.\\ The total first-order effective Hamiltonian is therefore given by, \begin{equation} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H'}}}}^{(1)} = \Omega^{\tr{diff}}_{\tr{cw}} \hat{I}_z^{ZQ} + \overline{\Omega^{\tr{sum}}\varepsilon(t)} \hat{I}_z^{DQ} - \hat{I}_x^{ZQ}\omega_x^{\tr{eff}} + \hat{I}_y^{ZQ}\omega_y^{\tr{eff}} + \hat{\overline{\tilde{\mathcal{H'}}}}^{(1)}_{CSA} \end{alignedat} \label{eq:4-16} \end{equation} where \begin{equation} \begin{alignedat}{2} \hat{\overline{\tilde{\mathcal{H'}}}}^{(1)}_{CSA} = \sum_{q=1}^{2}\sum_{\substack{n=-2\\n\neq0}}^{2} \omega_{I_q}^{(n)}\cdot 4 \cdot (-1)^{n+1} \frac{\sin(n\omega_r\Delta\tau)}{n\omega_r} \hat{I}_{qz}. \end{alignedat} \end{equation} It is seen from Eq. \ref{eq:4-16} that in the zero-quantum subspace, the effective dipolar coupling Hamiltonian, which lies in the x,y-plane, adds to the remaining terms, in particular $\Omega^{\tr{diff}}_{\tr{cw}}\hat{I}_z^{ZQ}$, which lie along the longitudinal axis, to give the total effective Hamiltonian. Therefore $\Omega^{\tr{diff}}_{\tr{cw}} = (\omega_{I_1}^{(0)}-\omega_{I_2}^{(0)})\frac{2\Delta\tau}{\tau_r}$ can in principle be varied slowly enough, via $\Delta\tau$, that the polarization is adiabatically transferred from $\hat{I}_{1z}$ to $\hat{I}_{2z}$.\\ The powder-averaged strength of the effective dipolar coupling Hamiltonian given in Eq. \ref{eq:4_recDipHamil} is given by \begin{equation} \begin{alignedat}{2} \omega_{\tr{eff}} = \frac{1}{8\pi^2}\int \sqrt{(\omega^{\tr{eff}}_x)^2 + (\omega^{\tr{eff}}_y)^2} d\alpha_{\tr{PR}} \sin\beta_{\tr{PR}} d\beta_{\tr{PR}} d\gamma_{\tr{PR}}.
\end{alignedat} \end{equation} and is shown as a function of $\Delta \tau$ in Fig. \ref{fig:4-4}. The calculation shown was done with $\Omega^{\tr{diff}}/2\pi = 1.2\omega_r/2\pi = 12$ kHz and a MAS rate of 10 kHz. The extremes of the plot, where $\Delta\tau = \pm \tau_r$, correspond to the two $\pi$ pulses being either at the same time or separated by two rotor periods, and lead to no recoupling. The centre of the plot, where $\Delta\tau = 0$, corresponds to the normal RFDR pulse sequence. Even though maximum transfer is achieved for a $\Delta\tau$ that is slightly off zero, there is no significant gain in the strength of the recoupled Hamiltonian compared to the normal RFDR pulse sequence. As the objective here is an adiabatic transfer, it is reassuring that the strength of the recoupled dipolar interaction is not compromised much for any of the $\Delta \tau$ values used in the sweep. It is noted that the profile seen in Fig. \ref{fig:4-4} is highly dependent on the ratio of the chemical shift difference to the spinning rate, and the presence of CSA terms further complicates the overall transfer prediction. \begin{figure}\label{fig:4-4} \end{figure} To find the optimal sweep that adiabatically performs the $I_{1z} \rightarrow I_{2z}$ transfer for a real system with a significant CSA, numerical simulations were performed. For a fixed number $N$ of XY-8 blocks of the pulse sequence, the $\Delta\tau$ for each of the $N$ blocks can be found using\cite{vinod2008swept} \begin{equation} \begin{aligned} \tr{x}_i &= \tr{x}_{\tr{co}} \cdot \bigg(-1 + 2 \frac{i}{N-1}\bigg),\\ \Delta\tau_i &= \frac{\tau_{\tr{sweep}}}{2} \frac{\tan \tr{x}_i}{\tan \tr{x}_{\tr{co}}}, \end{aligned} \end{equation} where $\tau_{\tr{sweep}}$ is the sweep size, $\tr{x}_{\tr{co}}$ is the tangential cut-off angle and $i$ denotes the block number, taking the values $0, 1, \ldots, N-1$. For $N\leq3$, it is noted that $\tr{x}_\tr{co}$ is inconsequential. If $N = 1$, $\Delta\tau$ is chosen to be zero, as in the normal RFDR pulse sequence.
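The tangential sweep above is straightforward to implement. The sketch below (plain numpy; `sweep_shifts` is an illustrative helper name, and the example values are taken from Tab. \ref{tab:adrfdr_opt} assuming they are quoted in $\mu$s and degrees) returns the $N$ time-shifts $\Delta\tau_i$, handling the $N = 1$ case separately as stated in the text.

```python
import numpy as np

def sweep_shifts(N, tau_sweep, x_co_deg):
    """Tangential sweep of the pulse displacement Delta_tau over N XY-8 blocks.

    tau_sweep: total sweep width (returned shifts carry the same time unit).
    x_co_deg:  tangential cut-off angle in degrees.
    """
    if N == 1:
        return np.array([0.0])  # normal RFDR: no displacement
    x_co = np.deg2rad(x_co_deg)
    i = np.arange(N)
    x = x_co * (-1.0 + 2.0 * i / (N - 1))
    return 0.5 * tau_sweep * np.tan(x) / np.tan(x_co)

# Example: the optimal N = 10 parameters from the table (assumed microseconds/degrees)
print(sweep_shifts(10, 3.6, 81.0))
```

The endpoints are always $\pm\tau_{\tr{sweep}}/2$, and for $N \leq 3$ the $\tan$ ratio reduces to $-1$, $0$, $+1$, which is why $\tr{x}_{\tr{co}}$ is inconsequential there.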
Simulations were done using the open-source SIMPSON\cite{bak2000simpson,tosner2014computer} software, based on a representative $^{13}$CO to $^{13}$C$_{\alpha}$ spin pair in a polypeptide with the chemical shift parameters ($\delta_{\tr{iso}}^{\tr{CS}}, \delta_{\tr{aniso}}^{\tr{CS}}, \eta^{\tr{CS}}, \alpha_{\tr{PC}}^{\tr{CS}}, \beta_{\tr{PC}}^{\tr{CS}}, \gamma_{\tr{PC}}^{\tr{CS}}$) set to (170 ppm, -76 ppm, 0.90, 0$^{\circ}$, 0$^{\circ}$, 90$^{\circ}$) and (50 ppm, -20 ppm, 0.43, 90$^{\circ}$, 90$^{\circ}$, 0$^{\circ}$) for $^{13}$CO and $^{13}$C$_{\alpha}$, respectively\cite{bak2002specification}. The dipolar interaction parameters ($b_{I_1I_2}, \beta_{\tr{PE}}^{\tr{D}}, \gamma_{\tr{PE}}^{\tr{D}}$) were (-2142 Hz, 90$^{\circ}$, 120.8$^{\circ}$), the MAS rate was 10 kHz and the $^1$H Larmor frequency was set to 400 MHz. Powder averaging was achieved using the REPULSION\cite{bak1997repulsion} scheme with 66 ($\alpha_{\tr{CR}}$, $\beta_{\tr{CR}}$) pairs and 9 $\gamma_{\tr{CR}}$ angles. The duration of the $\pi$ pulses was set to $5\,\mu$s and ideal $^1$H heteronuclear decoupling was assumed. The results of grid search optimisations of $\tau_{\tr{sweep}}$ and $\tr{x}_\tr{co}$ for fixed $N$ are shown in Tab. \ref{tab:adrfdr_opt}. \begin{table} \centering \begin{tabular}{|l|cccccccccc|} \hline \hline N & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline $\tau_{\tr{sweep}}$ ($\mu$s) & 0 & 2.9 & 2.5 & 3.2 & 3.3 & 3.4 & 3.7 & 3.5 & 3.7 & 3.6 \\ $\tr{x}_\tr{co}$ ($^{\circ}$) & - & - & - & 89 & 80 & 80 & 79 & 79 & 81 & 81\\ \hline \end{tabular} \caption{The optimal adiabatic parameters ($\tau_{\tr{sweep}}, \tr{x}_\tr{co}$), obtained using a grid search, for various numbers of XY-8 element blocks ($N$). Note that the sweep form, i.e., $\tr{x}_\tr{co}$, is immaterial for $N =$ 1, 2, and 3.} \label{tab:adrfdr_opt} \end{table} In Fig. \ref{fig:4-5}A, the simulated transfer efficiencies for RFDR and the different optimal adiabatic RFDR pulse sequences are compared.
Normal RFDR reaches a maximum transfer of 53.3\% after 24 rotor periods ($N = 3$), while adiabatic RFDR reaches 66.7\% for the same mixing time. Adiabatic RFDR achieves about 79\% transfer efficiency with a mixing time of 8 ms ($N = 10$). The robustness towards chemical shift offset variations is seen in the plots given in Fig. \ref{fig:4-5}B-D, where RFDR (Fig. \ref{fig:4-5}B) with a mixing time of 24 rotor periods ($N = 3$), adiabatic RFDR (Fig. \ref{fig:4-5}C) with a mixing time of 24 rotor periods ($N = 3$) and adiabatic RFDR (Fig. \ref{fig:4-5}D) with a mixing time of 80 rotor periods ($N = 10$) are shown. It is evident that the polarisation transfer is higher over the entire chemical shift region for the adiabatic RFDR pulse sequence than for the normal RFDR pulse sequence. \begin{figure} \caption{Simulations of the transfer efficiencies for RFDR (blue squares) and adiabatic RFDR (red squares) with varying mixing time. For a given number of blocks, $N$, the corresponding time-shifts $\Delta\tau_i$ of the most efficient adiabatic RFDR sequence have been selected. The dashed line at $N = 3$ indicates the time point of maximum transfer efficiency of standard RFDR. The transfer efficiency as a function of the chemical shifts of both involved nuclei is numerically calculated for (b) RFDR ($N = 3$), (c) adiabatic RFDR ($N = 3$), and (d) adiabatic RFDR ($N = 10$). All simulations were done for a 400 MHz spectrometer at 10.0 kHz MAS. The rf field strength of the $\pi$-pulses was set to 100 kHz and an XY-8 phase cycling was employed.} \label{fig:4-5} \end{figure} Fig. \ref{fig:4-6} presents a comparison of the experimental transfer efficiencies for the normal RFDR and the adiabatic RFDR ($N = 3$) pulse sequences. The transfer efficiencies have been extracted from slices of 2D spectra recorded on uniformly $^{13}$C,$^{15}$N-labelled glycine at 10 kHz MAS on a 400 MHz spectrometer. Fig. \ref{fig:4-6}A and B show the peak intensities of cross peaks and diagonal peaks with varying mixing time.
The peak intensities have been integrated and normalised to the diagonal peak intensity from a spectrum without any mixing element. The adiabatic RFDR data were recorded consistently with $\tau_{\tr{sweep}} = 2.5\,\mu$s. Hence, only the last measured data point (24 rotor periods with $N = 3$) exploits the entire sweep of the given pulse sequence and matches the simulated data presented in Fig. \ref{fig:4-5}. The adiabatic RFDR is seen to reach a maximum transfer efficiency of about 55\%, a gain of more than 20\% over the normal RFDR pulse sequence, which reached a maximum transfer efficiency of 45\% at 12 rotor periods. The inset in Fig. \ref{fig:4-6}A shows spectrum slices extracted at the highest $^{13}$CO to $^{13}$C$_{\alpha}$ cross peaks for the RFDR and adiabatic RFDR pulse sequences. The signal-to-noise ratio was determined from the reference spectrum to be about 6000. In Fig. \ref{fig:4-6}B, the diagonal peak intensities can be seen to drop continuously for adiabatic RFDR, whereas for the normal RFDR sequence the polarisation appears to equilibrate between the two carbon nuclei. The equilibration after 12 rotor periods can be attributed to the different effective dipolar coupling strengths in a powder sample, where polarisation will be transferred either forward or backward for certain crystallites. \begin{figure}\label{fig:4-6} \end{figure} The observed experimental transfer efficiency for $N = 3$ was found to be lower than the one found using numerical simulation in Fig. \ref{fig:4-5} (55\% compared to 66.7\%). This may be explained by several aspects, as follows. The numerical simulations were performed for an isolated two-spin system without relaxation, which does not describe the full spin dynamics of a multi-spin system. In particular, the $^1$H decoupling, which was assumed to be ideal in the numerical simulations, is not ideal in the experiments.
It has been discussed that better decoupling performance may be achieved using moderate CW irradiation during the windows between the $\pi$ pulses and strong CW irradiation during the $\pi$ pulses\cite{bayro2008radio}. Fig. \ref{fig:4-6}C presents experimental data for the peak intensities of the cross peaks with varying $^1$H rf field strength $\omega_{\tr{pul}}/2\pi$ applied during the $\pi$ pulses on $^{13}$C. The performance is seen to improve for both versions of RFDR; however, the improvement is larger for the adiabatic RFDR pulse sequence. Fig. \ref{fig:4-6}D presents the cross peak intensities with varying strength of the CW irradiation $\omega_{\tr{win}}/2\pi$ applied between the $\pi$ pulses on $^{13}$C, while $\omega_{\tr{pul}}/2\pi = 150$ kHz. It is observed that the transfer efficiencies are insensitive to changes in the $^1$H CW decoupling irradiation between the $\pi$ pulses.\\ In summary, the adiabatic variant of the RFDR experiment, which gradually varies the temporal positions of the $\pi$ pulses during the mixing time, significantly improves polarisation transfer. Theoretically, the effective chemical-shift-difference term, which depends on the temporal placement of the $\pi$ pulses, adiabatically varies the total effective Hamiltonian such that the polarisation is transferred from one nucleus to the other. \section{$^{\textrm{RESPIRATION}}$CP}\cite{nielsen2016theoretical} \label{sect:resp} A widely used and well-established heteronuclear dipolar recoupling pulse sequence is the Hartmann-Hahn cross-polarisation (CP)\cite{hartmann1962,stejskal1977,Stejskal1979}. It comprises simultaneous CW rf irradiation on two heteronuclear spins $I$ and $S$ under MAS at a rate $\omega_r$, with the two rf amplitudes $\omega_{\tr{rf}}^{\tr{(q)}}$, $\tr{q} = \tr{I}$ or $\tr{S}$, satisfying $\omega_{\tr{rf}}^{\tr{(I)}}+\omega_{\tr{rf}}^{\tr{(S)}} = n\omega_r$.
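The sideband matching condition $\omega_{\tr{rf}}^{\tr{(I)}}+\omega_{\tr{rf}}^{\tr{(S)}} = n\omega_r$ is simple bookkeeping. As a quick sketch (the MAS rate and $I$-spin amplitude below are illustrative numbers, not taken from the text), one can list the $S$-spin rf amplitudes that match a fixed $I$-spin amplitude:

```python
# Sketch: S-spin rf amplitudes (kHz) satisfying the matching condition
# w_I + w_S = n * w_r for small integer n. Illustrative values only.
w_r = 10.0   # MAS rate, kHz
w_I = 35.0   # fixed I-spin rf amplitude, kHz

matches = []
for n in range(1, 11):
    w_S = n * w_r - w_I
    if w_S > 0:
        matches.append((n, w_S))
print(matches)  # first matches: n = 4 -> 5 kHz, n = 5 -> 15 kHz, ...
```

In practice, the condition is broadened by rf inhomogeneity and offsets, which is precisely the weakness the modifications discussed next address.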
Several modifications of the CP experiment have been proposed over the years to ensure good performance even under inhomogeneous rf fields or chemical shift offsets\cite{khaneja2006composite,hediger1993cross,kolbert1995broadband,bjerring2003solid}. One such modification is the Rotor Echo Short Pulse IRrAdiaTION mediated cross polarisation ($^{\tr{RESPIRATION}}$CP)\cite{jain2012efficient,nielsen2013adiabatic,jain2014low,basse2014efficient}. In this section, a theoretical description applicable to any amplitude-modulated pulse sequence is detailed, using $^{\tr{RESPIRATION}}$CP as an illustration. It turns out that the rf field interaction frame Hamiltonian in such cases is described by two basic frequencies for every rf pulse channel, as discussed in Sec. \ref{sect:MultipleFrequencies}.\\ Consider two heteronuclear coupled spin-$\frac{1}{2}$ nuclei, $I$ and $S$. In the rotating frame, the time-dependent Hamiltonian is given by, \begin{equation} \hat{\mathcal{H}}(t) = \hat{\mathcal{H}}_I(t) + \hat{\mathcal{H}}_S(t) + \hat{\mathcal{H}}_{IS}(t) + \hat{\mathcal{H}}_{\tr{rf}}(t) \end{equation} with \begin{equation} \begin{alignedat}{2} \hat{\mathcal{H}}_I(t) &= \sum_{n=-2}^{2} \omega_{I}^{(n)} e^{in\omega_rt} \hat{I}_{z}\\ \hat{\mathcal{H}}_S(t) &= \sum_{n=-2}^{2} \omega_{S}^{(n)} e^{in\omega_rt} \hat{S}_{z}\\ \hat{\mathcal{H}}_{IS}(t) &= \sum_{n=-2}^{2} \omega_{IS}^{(n)} e^{in\omega_rt} 2\hat{I}_{z}\hat{S}_z\\ \hat{\mathcal{H}}_{\tr{rf}}(t) &= \omega_{\tr{rf}}^{\tr{(I)}}(t)\hat{I}_x + \omega_{\tr{rf}}^{\tr{(S)}}(t)\hat{S}_x. \end{alignedat} \label{eq:resp_H} \end{equation} The $^{\tr{RESPIRATION}}$CP pulse sequence is shown on the left side of Fig. \ref{fig:4-7}, with $\tau_m^{(\tr{S})} = \tau_m^{\tr{(I)}} = \tau_r$. The pulse sequence is repeated M times to accomplish the transfer. In order to write the time dependence of $\hat{\mathcal{H}}_{\tr{rf}}(t)$ also as complex exponentials, like the rest of the Hamiltonians in Eq. \ref{eq:resp_H}, the splitting of the rf field explained in Sec.
\ref{sect:4_ff} is employed. The rf field on each channel is split into two components: a time-dependent amplitude-modulated component $\omega_{\tr{AM}}^{\tr{(q)}}(t)$ with zero net rotation on single-spin operators, and a time-independent continuous-wave component $\omega_{\tr{cw}}^{\tr{(q)}}$. The splitting is shown on the right side of Fig. \ref{fig:4-7}. \begin{figure} \caption{The $^{\tr{RESPIRATION}}$CP pulse sequence (left) and the splitting of the rf field on each channel into a time-independent continuous-wave component and a zero-mean amplitude-modulated component (right).} \label{fig:4-7} \end{figure} The rf field Hamiltonian given in Eq. \ref{eq:resp_H} can therefore be rewritten as \begin{equation} \begin{alignedat}{2} \hat{\mathcal{H}}_{\tr{rf}}(t) &= \big(\omega_{\tr{AM}}^{\tr{(I)}}(t) + \omega_{\tr{cw}}^{\tr{(I)}}\big) \hat{I}_x + \big(\omega_{\tr{AM}}^{\tr{(S)}}(t) + \omega_{\tr{cw}}^{\tr{(S)}}\big) \hat{S}_x \end{alignedat} \end{equation} where $\omega_{\tr{cw}}^{\tr{(q)}} = \overline{\omega_{\tr{rf}}^{\tr{(q)}}(t)}$ and $\omega_{\tr{AM}}^{\tr{(q)}}(t) = \omega_{\tr{rf}}^{\tr{(q)}}(t) - \omega_{\tr{cw}}^{\tr{(q)}}$. As explained in Sec. \ref{sect:4_ff}, the rf field interaction frame Hamiltonian can therefore be written as \begin{equation} \begin{alignedat}{2} \hat{\tilde{\mathcal{H}}}(t) = \sum_{n=-2}^{2}\sum_{k_{\tr{I}}=-\infty}^{\infty}\sum_{l_{\tr{I}}=-1}^{1}\sum_{k_{\tr{S}}=-\infty}^{\infty}\sum_{l_{\tr{S}}=-1}^{1} \hat{\tilde{\mathcal{H}}}^{(n,k_{\tr{I}},l_{\tr{I}},k_{\tr{S}},l_{\tr{S}})} e^{in\omega_rt} e^{ik_{\tr{I}}\omega_m^{\tr{(I)}}t} e^{il_{\tr{I}}\omega_{\tr{cw}}^{\tr{(I)}}t} e^{ik_{\tr{S}}\omega_m^{\tr{(S)}}t} e^{il_{\tr{S}}\omega_{\tr{cw}}^{\tr{(S)}}t} \end{alignedat} \label{eq:4_respTFH} \end{equation} with the Fourier components given by \begin{equation} \begin{alignedat}{2} \hat{\tilde{\mathcal{H}}}^{(n,k_{\tr{I}},\pm1,0,0)} &= \omega_{I}^{(n)} (a_{k_{\tr{I}}}^z \pm il a_{k_{\tr{I}}}^y) \hat{I}^{\mp}\\ \hat{\tilde{\mathcal{H}}}^{(n,0,0,k_{\tr{S}},\pm1)} &= \omega_{S}^{(n)} (a_{k_{\tr{S}}}^z \pm il a_{k_{\tr{S}}}^y) \hat{S}^{\mp}\\ \hat{\tilde{\mathcal{H}}}^{(n,k_{\tr{I}},\pm1,k_{\tr{S}},\pm1)} &= \omega_{IS}^{(n)}
(a_{k_{\tr{I}}}^z \pm il a_{k_{\tr{I}}}^y) (a_{k_{\tr{S}}}^z \pm il a_{k_{\tr{S}}}^y) \hat{I}^{\mp}\hat{S}^{\mp}\\ \hat{\tilde{\mathcal{H}}}^{(n,k_{\tr{I}},\pm1,k_{\tr{S}},\mp1)} &= \omega_{IS}^{(n)} (a_{k_{\tr{I}}}^z \pm il a_{k_{\tr{I}}}^y) (a_{k_{\tr{S}}}^z \mp il a_{k_{\tr{S}}}^y) \hat{I}^{\mp}\hat{S}^{\pm}\\ \end{alignedat} \label{eq:4_23} \end{equation} In this section, only the dipolar coupling Hamiltonian in the effective Hamiltonian is discussed. The influence of the chemical shift Hamiltonian is discussed later in Chapter \ref{chap:genSeq}.\\ The first-order effective Hamiltonian for the time-dependent Hamiltonian given in Eq. \ref{eq:4_respTFH}, in line with the discussions in Chapter \ref{chap:DesignPrinciples}, is given by \begin{equation} \hat{\overline{\tilde{\mathcal{H}}}}^{(1)} = \sum_{n,k_{\tr{I}},l_{\tr{I}},k_{\tr{S}},l_{\tr{S}}}^{} \hat{\tilde{\mathcal{H}}}^{(n,k_{\tr{I}},l_{\tr{I}},k_{\tr{S}},l_{\tr{S}})} \label{eq:4_resp1stord} \end{equation} where the sum is over the quintuples $(n,k_{\tr{I}},l_{\tr{I}},k_{\tr{S}},l_{\tr{S}})$ that satisfy the resonance condition \begin{equation} n\omega_r + k_{\tr{I}} \omega_m^{\tr{(I)}} + l_{\tr{I}} \omega_{\tr{cw}}^{\tr{(I)}} + k_{\tr{S}} \omega_m^{\tr{(S)}} + l_{\tr{S}} \omega_{\tr{cw}}^{\tr{(S)}} = 0. \end{equation} For $^{\tr{RESPIRATION}}$CP, as the pulse sequence is rotor synchronised, the relation $\omega_r = \omega_m^{\tr{(I)}} = \omega_m^{\tr{(S)}}$ holds, and as the short pulses on both channels are identical, the relation $\omega_{\tr{cw}}^{\tr{(I)}} = \omega_{\tr{cw}}^{\tr{(S)}}$ holds. Eq. \ref{eq:4_resp1stord} therefore simplifies greatly to \begin{equation} \hat{\overline{\tilde{\mathcal{H}}}}^{(1)} = \sum_{n=-2}^{2}\sum_{k_{\tr{I}}=-\infty}^{\infty} \hat{\tilde{\mathcal{H}}}^{(n,k_{\tr{I}},\pm1,-(k_{\tr{I}}+n),\mp1)}. \label{eq:4_resp1stordSimpl} \end{equation} The observation that $l_{\tr{I}} = -l_{\tr{S}}$ holds in Eq.
\ref{eq:4_resp1stordSimpl} suggests that the recoupled dipolar Hamiltonian terms are zero-quantum. Therefore the effective first-order dipolar coupling Hamiltonian can be written as a linear combination of the fictitious spin-$\frac{1}{2}$ operators $\hat{F}_{z}^{ZQ}$ $(= 2\hat{I}_z\hat{S}_z + 2\hat{I}_y\hat{S}_y)$ and $\hat{F}_y^{ZQ}$ $(= 2\hat{I}_z\hat{S}_y - 2\hat{I}_y\hat{S}_z)$. This Hamiltonian enables the transfer, \begin{equation} \hat{I}_x = \hat{F}^{ZQ}_x + \hat{F}^{DQ}_x \xrightarrow{(c\hat{F}^{ZQ}_z + d\hat{F}^{ZQ}_y)t} -\hat{F}^{ZQ}_x + \hat{F}^{DQ}_x = \hat{S}_x, \end{equation} where $c$ and $d$ define the Hamiltonian vector in the zero-quantum operator subspace and $\sqrt{c^2+d^2} = \frac{\pi}{t}$. An illustration is given in Fig. \ref{fig:5-1}A, where the red arrow represents the effective first-order dipolar coupling Hamiltonian.\\ To determine the size of the effective dipolar coupling Hamiltonian, the Fourier components in the last lines of Eq. \ref{eq:4_23} have to be calculated. In Fig. \ref{fig:4-8}, the Fourier coefficients, $a_k^{(\tr{q})} = a_{k_{\tr{q}}}^z + ia_{k_{\tr{q}}}^y$, are shown with varying $|\omega_{\tr{rf}}^{\tr{(q)}}|$, which is used as the amplitude of both the amplitude-modulated part (for the interval 0 to $\tau_m^{\tr{(q)}} - \tau_p^{\tr{(q)}}$) and the short pulse (of duration $\tau_p^{\tr{(q)}}$). The duration $\tau_p^{\tr{(q)}}$ was set to $\tau_m^{\tr{(q)}}/15$. It is evident that for experimentally realistic values, the coefficients are non-zero only for $k_{\tr{q}} \in [-10,10]$. \begin{figure}\label{fig:4-8} \end{figure} The effective dipolar Hamiltonian so found is utilised to calculate the $\hat{I}_x \rightarrow \hat{S}_x$ transfer efficiency. The efficiency with varying rf field amplitude, set constant for the entire pulse sequence, and mixing time is shown in Fig. \ref{fig:4-9}A for a MAS rate of 16.7 kHz and a short pulse length $\tau_p^{\tr{(q)}} = 4\,\mu$s. Fig.
\ref{fig:4-9}B shows the experimental data for the $^{15}$N $\rightarrow$ $^{13}$C$_\alpha$ transfer recorded on a 600 MHz spectrometer with the same MAS rate and other pulse parameters. The experimental plots are qualitatively comparable to the theoretical plots; however, the 20-25\% difference in transfer efficiency between the plots can be attributed to the neglect of the CSA interaction in the theoretical calculation. \begin{figure}\label{fig:4-9} \end{figure} As previous studies have already shown that $^{\tr{RESPIRATION}}$CP is intrinsically broadband with respect to the $I$ spin, and owing to the similar form of the chemical shift analysis for the two spins, the focus in this work will be on the effect of the chemical shift interaction of the $S$ spin.\\ The effective first-order chemical shift Hamiltonian is given by \begin{equation} \hat{\bar{\tilde{\mathcal{H}}}}_{\tr{S}}^{(1)} = \sum_{n_0,k_\tr{S},l_\tr{S}} \hat{\tilde{\mathcal{H}}}^{(n_0,0,0,k_\tr{S},l_\tr{S})}, \label{eq:5_firstord} \end{equation} where the triplets $(n_0,k_\tr{S},l_\tr{S})$ satisfy the resonance condition, $(n_0+k_{\tr{S}})\omega_r + l_\tr{S} \omega_{\tr{cw}}^{(\tr{S})} = 0$. Since $\omega_{\tr{cw}}^{(\tr{S})}$ is not an integral multiple of $\omega_r$, there is no triplet $(n_0,k_\tr{S},l_\tr{S})$ that satisfies the resonance condition, and therefore the effective first-order chemical shift Hamiltonian vanishes for the $^{\tr{RESPIRATION}}$CP pulse sequence.\\ The second-order terms in the effective Hamiltonian that could potentially result in single-spin operators are the ones where the commutator is between the chemical shift interaction and itself.
Therefore the effective second-order Hamiltonian of interest takes the form \begin{equation} \hat{\bar{\tilde{\mathcal{H}}}}_{\tr{S}}^{(2)} = -\frac{1}{2}\sum_{n_0,k_\tr{S},l_\tr{S}}\sum_{n,\kappa_\tr{S},\lambda_\tr{S}} \frac{[\hat{\tilde{\mathcal{H}}}^{(n_0-n,0,0,k_\tr{S}-\kappa_\tr{S},l_\tr{S}-\lambda_\tr{S})}, \hat{\tilde{\mathcal{H}}}^{(n,0,0,\kappa_\tr{S},\lambda_\tr{S})}]}{(n + \kappa_\tr{S})\omega_r + \lambda_\tr{S}\omega_{\tr{cw}}^{(\tr{S})}}, \label{eq:5_secord} \end{equation} where $(n+ \kappa_\tr{S})\omega_r + \lambda_\tr{S}\omega_{\tr{cw}}^{(\tr{S})} \neq 0$. It is noted that, in principle, all three contributions are present, i.e., the commutator of the isotropic chemical shift with itself, of the anisotropic chemical shift with itself, and of the isotropic with the anisotropic chemical shift. However, for easier visualisation of the effects of the isotropic and anisotropic chemical shift interactions on the transfer, only the cases where one of these interactions is present, and not both together, are considered.
In the case where only the anisotropic chemical shift interaction is present, the effective second-order chemical shift Hamiltonian is given by, \begin{equation} \begin{alignedat}{2} \hat{\bar{\tilde{\mathcal{H}}}}_{\tr{S},\tr{aniso}}^{(2)} &= -\frac{1}{2}\sum_{\substack{n_0\\n_0\neq n}}^{}\sum_{\substack{n,\kappa_\tr{S}\\ n\neq0}} \frac{[\hat{\tilde{\mathcal{H}}}^{(n_0-n,0,0,-n_0-\kappa_\tr{S},\mp1)}, \hat{\tilde{\mathcal{H}}}^{(n,0,0,\kappa_\tr{S},\pm1)}]}{(n+\kappa_\tr{S})\omega_r \pm \omega_{\tr{cw}}^{(\tr{S})}}\\ &= \xi_{\tr{aniso}}^{(\tr{S})} \hat{S}_x, \end{alignedat} \label{eq:5_secord_aniso} \end{equation} while for the case where only the isotropic chemical shift interaction is present, the same is given by, \begin{equation} \begin{alignedat}{2} \hat{\bar{\tilde{\mathcal{H}}}}_{\tr{S},\tr{iso}}^{(2)} &= -\frac{1}{2}\sum_{\kappa_\tr{S}} \frac{[\hat{\tilde{\mathcal{H}}}^{(0,0,0,-\kappa_\tr{S},\mp1)}, \hat{\tilde{\mathcal{H}}}^{(0,0,0,\kappa_\tr{S},\pm1)}]}{\kappa_\tr{S}\omega_m^{(\tr{S})} \pm \omega_{\tr{cw}}^{(\tr{S})}}\\ &= 4\pi^2\delta_{\tr{iso}}^{(\tr{S})^2}\xi_{\tr{iso}}^{(\tr{S})} \hat{S}_x, \end{alignedat} \label{eq:5_secord_iso} \end{equation} where $\delta_{\tr{iso}}^{(\tr{S})} = \frac{\omega_{\tr{S}}^{(0)}-\omega_{\tr{S}}^{\tr{rf}}}{2\pi}$ with $\omega_{\tr{S}}^{\tr{rf}}$ being the rf carrier frequency of the $S$-spin channel. As the contributions to the effective second-order chemical shift Hamiltonian are only of the form $\hat{S}_x$, which can be written as $\hat{S}_x = -\hat{F}_x^{ZQ} + \hat{F}_x^{DQ}$, the resultant total effective Hamiltonian in the zero-quantum subspace is shifted away from the $\hat{F}_{z,y}^{ZQ}$ plane. This is illustrated in Fig. \ref{fig:5-0}, where the blue arrow represents the effective second-order chemical shift term, and the total effective Hamiltonian is shown by the purple arrow.
The trajectory of the initial density operator ($\hat{F}_x^{ZQ}$) under the total effective Hamiltonian follows the green curve shown in the figure and results in a diminished transfer efficiency.\\ \begin{figure}\label{fig:5-0} \end{figure} The powder-averaged strength of the effective second-order chemical shift Hamiltonian given in Eq. \ref{eq:5_secord_aniso} was computed and is presented in Fig. \ref{fig:5-1}B as $\xi_{\tr{aniso}}^{(\tr{S})}$ with varying short-pulse length $\tau_p^{(\tr{S})}$. The calculations were performed for 20 kHz MAS and an anisotropic chemical shift $\delta_{\tr{aniso}}^{(\tr{S})} = 5.0$ kHz. $\tau_p^{(\tr{S})}$ was varied while ensuring that $\omega_{\tr{cw}}^{(\tr{S})}$ remained constant through the relation $\omega^{(\tr{S})} = \frac{2\pi}{25\tau_p^{(\tr{S})}}$. This is done to ensure constant dipolar recoupling conditions over the entire considered case space, which spans from an ideal pulse ($\tau_p^{(\tr{S})} = 0$) to continuous-wave irradiation ($\tau_p^{(\tr{S})} = \tau_r = 50\,\mu$s). The strength is seen to be proportional to the length of the short pulse. The impact of the above-calculated chemical shift Hamiltonian on the transfer efficiency is studied by also calculating the effective first-order dipolar coupling Hamiltonian with a dipolar coupling constant of $\frac{b_{IS}}{2\pi} = 50$ Hz. The transfer efficiency calculated with the total effective Hamiltonian, i.e., the sum of the first-order dipolar coupling and second-order anisotropic chemical shift Hamiltonians, for 46 ms of mixing time is shown in Fig. \ref{fig:5-1}C as red circles and is seen to correspond well with Fig. \ref{fig:5-1}B. Additionally, the propagation is verified by direct-propagation numerical simulations, shown in Fig. \ref{fig:5-1}C as blue boxes. \begin{figure}\label{fig:5-1} \end{figure} For the case where only the isotropic chemical shift is present, the strength of the $\hat{S}_x$ term given in Eq.
\ref{eq:5_secord_iso} is $4\pi^2\delta_{\tr{iso}}^{(\tr{S})^2}\xi_{\tr{iso}}^{(\tr{S})}$. As the strength of the effective second-order chemical shift Hamiltonian depends on the square of the offset, convergence of the Magnus series defining the effective Hamiltonian will be slower at large isotropic chemical shift offsets. The strength also depends on the length of the short pulse through the $a_{\kappa_{\tr{S}}}^{(\tr{S})}$ coefficients, and this is shown in Fig. \ref{fig:5-2}, where $\omega^{(\tr{S})} = 2\omega_r$ and the MAS rate was set to 20 kHz. Increasing the short pulse length (thereby increasing $\omega_{\tr{cw}}^{(\tr{S})}$) can be seen to suppress the second-order isotropic chemical shift Hamiltonian term. Of course, varying the short pulse length also affects the effective dipolar coupling Hamiltonian, and so it is important to compare the transfer profiles for varying short pulse lengths against the isotropic chemical shift values. \begin{figure}\label{fig:5-2} \end{figure} Transfer efficiencies at varying isotropic chemical shift offsets for $\hat{I}_x \rightarrow \hat{S}_x$ were obtained through direct-propagation simulations for short pulse lengths of $2\,\mu$s, $6\,\mu$s, $10\,\mu$s and $14\,\mu$s and are shown in Fig. \ref{fig:5-3}A. The MAS rate was set to 20 kHz, the rf field strengths satisfied $\omega_{\tr{rf}}^{(\tr{S})} = \omega_{\tr{rf}}^{(\tr{I})} = 2\omega_r$, the dipolar coupling constant $\frac{b_{\tr{IS}}}{2\pi}$ was set to 1 kHz and the isotropic $S$-spin chemical shift was zero. Even though longer short pulse lengths correspond to broader efficient transfer profiles around zero chemical shift offset, the broadest ($\tau_p^{(\tr{S})} = 14\,\mu$s) corresponds to a range of only $\pm$5 kHz. The same set of simulations was repeated using the total effective Hamiltonian calculated above using Eqs. \ref{eq:5_firstord} and \ref{eq:5_secord}. The results are shown in Fig. \ref{fig:5-3}B and are seen to be consistent with the direct-propagation simulations shown in Fig.
\ref{fig:5-3}A for isotropic chemical shift offset values in the range of $\pm5$ kHz. This justifies our choice to ignore the effective second-order Hamiltonian terms that involve commutators between the isotropic chemical shift term and the dipolar coupling term. However, the second-order isotropic chemical shift Hamiltonian is too large at higher offset values, as is evident from the fact that Figs. \ref{fig:5-3}A and B do not match in that regime. To overcome this problem, it is better to transform the Hamiltonian into an interaction frame defined by both the rf field and the isotropic chemical shift interactions, rather than into the rf field interaction frame alone.\\ \begin{figure}\label{fig:5-3} \end{figure} By transforming into a frame defined by the combined rf field and isotropic chemical shift interactions, the only two of the five fundamental frequencies that change are $\omega_{\tr{cw}}^{(\tr{q})}$. The trick of splitting the amplitude-modulated rf field cannot be applied in this case, as the operators no longer commute at different times. Instead, as shown in Sec. \ref{sect:quaternions}, the effective frequency $\omega_{\tr{cw}}^{(\tr{q})}$ and the effective axis can be found using quaternions. The effective fields, $\omega_{\tr{cw}}^{(\tr{q})}$, were calculated for the parameters used in Fig. \ref{fig:5-3} and are shown in Fig. \ref{fig:5-4}. It is noted that $\omega_{\tr{cw}}^{(\tr{S})} = \omega_{\tr{cw}}^{(\tr{I})}$ corresponds to the presence of effective first-order dipolar coupling resonance conditions; in Fig. \ref{fig:5-4}, this is where the y-axis value is zero. Evidently, the broader efficient transfer profiles seen in Fig. \ref{fig:5-3} can be explained by the correspondingly shallower profiles in Fig. \ref{fig:5-4} around $\delta_{\tr{iso}}^{(\tr{S})} = 0$ kHz. This is because shallower (or flatter) zero-crossings correspond to the resonance conditions being valid over larger ranges.
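The quaternion construction referred to above can be sketched equivalently with SU(2) propagators: compose the piecewise-constant propagators of rf field plus offset over one period, and the effective rotation angle divided by the period gives the effective field frequency. The helper names and parameters below are illustrative, not the exact $^{\tr{RESPIRATION}}$CP waveform.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
E2 = np.eye(2, dtype=complex)

def segment_propagator(tau, w_rf, w_off):
    """Propagator of H = w_rf*Sx + w_off*Sz (spin-1/2, S = sigma/2) over time tau."""
    w = np.hypot(w_rf, w_off)
    if w == 0.0:
        return E2.copy()
    nx, nz = w_rf / w, w_off / w
    half = 0.5 * w * tau
    return np.cos(half) * E2 - 1j * np.sin(half) * (nx * sx + nz * sz)

def effective_field(segments):
    """Effective rotation frequency (rad/s) over one period of
    piecewise-constant irradiation, given as (tau, w_rf, w_off) tuples."""
    U = E2.copy()
    T = 0.0
    for tau, w_rf, w_off in segments:
        U = segment_propagator(tau, w_rf, w_off) @ U
        T += tau
    # For an SU(2) matrix, tr U = 2 cos(theta/2) with theta the rotation angle
    theta = 2.0 * np.arccos(np.clip(np.real(np.trace(U)) / 2.0, -1.0, 1.0))
    return theta / T

# Sanity check: constant rf along x, no offset -> effective field = rf amplitude
w1 = 2 * np.pi * 5e3
print(effective_field([(1e-4, w1, 0.0)]) / (2 * np.pi))  # ~ 5000 Hz
```

Scanning the offset `w_off` of each segment and plotting the mismatch of the two channels' effective fields reproduces the kind of resonance-condition map shown in Fig. \ref{fig:5-4}.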
It can further be seen that the efficient transfers at higher offsets in Fig. \ref{fig:5-3} are also explained by Fig. \ref{fig:5-4}. It is noted that, since transfer from $\hat{I}_x$ to $\hat{S}_x$ is desired and $\omega_{\tr{cw}}^{(\tr{I})}$ is already along $x$, $\omega_{\tr{cw}}^{(\tr{S})}$ should also point along $x$ to satisfy the resonance conditions. The $\hat{S}_x$ components of the effective fields for different short pulse lengths at $\delta_{\tr{iso}}^{(\tr{S})} \neq 0$ are tabulated in Fig. \ref{fig:5-4}. This explains the transfers at higher offset values seen in Fig. \ref{fig:5-3}A, including the negative transfer efficiency for $\tau_p^{(\tr{S})} = 14\,\mu$s at $\pm18.5$ kHz. From this, it is clear that a broadband pulse sequence should satisfy the condition $\omega_{\tr{cw}}^{(\tr{S})} = \omega_{\tr{cw}}^{(\tr{I})}$ over a larger range of offsets. As the generation of effective field plots such as the one shown in Fig. \ref{fig:5-4} is straightforward and takes little to no time, it is faster and easier to test new ideas this way than to run full direct-propagation or effective-Hamiltonian-driven propagation simulations. \begin{figure}\label{fig:5-4} \end{figure}\\ \section{Broadband-$^{\tr{RESPIRATION}}$CP}\cite{shankar2016handling} \label{sect:bb_resp} To make the $S$-spin channel better offset-compensated, it was natural to insert a single pulse of length $\tau_{\tr{com}}^{(\tr{S})}$ in the middle of each free evolution period. It is important to include the compensation pulse on the $I$-spin channel as well, to avoid nullifying the recoupled dipolar Hamiltonian. The questions of the optimal flip angle of the compensation pulses and of whether they should be phase cycled are now simply answered by the effective-field plots they generate. The pulse sequence with compensation pulses is shown in Fig. \ref{fig:5-5}A and two variants with phase cycling of the compensation pulses are shown in Figs. \ref{fig:5-5}B and \ref{fig:5-5}C.
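Given the rf amplitude used for these sequences ($\omega_{\tr{rf}}^{(\tr{q})} = 2\omega_r$ at 20 kHz MAS, i.e., 40 kHz), the flip angle of a compensation pulse is simply $\beta = \omega_{\tr{rf}}\tau_{\tr{com}}$. A short sketch (illustrative, assuming the 40 kHz amplitude also applies to the compensation pulses) reproduces the sub-$\pi$, $\pi$ and super-$\pi$ assignments of the pulse lengths considered in the text.

```python
import numpy as np

# rf amplitude 2 * MAS rate at 20 kHz MAS, in rad/s (assumed for the
# compensation pulses as well)
w_rf = 2 * np.pi * 40e3

for tau_com in (10e-6, 12.5e-6, 15e-6):
    beta = w_rf * tau_com  # flip angle in rad
    print(f"tau_com = {tau_com*1e6:.1f} us -> beta = {beta/np.pi:.2f} pi")
```

This yields flip angles of $0.8\pi$, $\pi$ and $1.2\pi$ for the three pulse lengths, matching the $<\pi$, $=\pi$ and $>\pi$ labels used below.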
\begin{figure}\label{fig:5-5} \end{figure} The effective field frequency plots for the above pulse sequences, when the flip angle of the compensation pulses is $\pi$, are shown in Fig. \ref{fig:5-6}, from which it is evident that the pulse sequences where the compensation pulses are phase cycled perform better with respect to offset compensation.\\ \begin{figure} \caption{Effective field frequencies of the combined rf field and isotropic chemical shift. The rf pulse sequences are given in Fig. \ref{fig:5-5}. The effective fields are shown as a function of the isotropic chemical shift, represented as a quantity that is zero in the presence of resonance conditions.} \label{fig:5-6} \end{figure} To verify whether a flip angle of $\pi$ radians is best for the compensation pulses, the effective field frequency plot for the pulse sequence given in Fig. \ref{fig:5-5}B was calculated for a flip angle smaller than $\pi$ ($\tau_{\tr{com}}^{\tr{(q)}} = 10\mu$s) and greater than $\pi$ ($\tau_{\tr{com}}^{\tr{(q)}} = 15\mu$s), and these are shown along with that for the pulse sequence where the flip angle is exactly $\pi$ ($\tau_{\tr{com}}^{\tr{(q)}} = 12.5\mu$s) in Fig. \ref{fig:5-7} (left). The corresponding transfer efficiencies obtained through direct-propagation simulations are also shown in Fig. \ref{fig:5-7} (right), and they substantiate the claims made using only the effective field plots.\\ \begin{figure}\label{fig:5-7} \end{figure} To experimentally validate the results, the $^{15}$N ($\hat{I}_x$) $\rightarrow ^{13}$C$_{\alpha}$ ($\hat{S}_x$) transfer efficiency with varying $S$-spin isotropic chemical shift was measured in selectively $^{15}$N,$^{13}$C$_{\alpha}$-labelled glycine on a 400 MHz spectrometer using both $^{\tr{RESPIRATION}}$CP and its broadband variant. The MAS rate was set to 20 kHz. The measured transfer efficiency is shown in Fig.
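The flip angles quoted for the compensation pulses follow from simple arithmetic, assuming the rf amplitude $\omega_{\tr{rf}}/2\pi = 2\omega_r/2\pi = 40$ kHz at 20 kHz MAS used in the simulations:

```python
import math

# Flip angle of a compensation pulse: theta = 2*pi * nu_rf * tau_com,
# assuming nu_rf = 2 * (MAS rate) = 40 kHz.
nu_rf = 40e3  # Hz
for tau_com in (10e-6, 12.5e-6, 15e-6):
    theta = 2 * math.pi * nu_rf * tau_com
    print(f"tau_com = {tau_com * 1e6:5.1f} us -> flip = {theta / math.pi:.1f} pi rad")
# tau_com = 12.5 us gives exactly a pi pulse; 10 us and 15 us give 0.8 pi and 1.2 pi
```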
\ref{fig:5-10}A for $^{\tr{RESPIRATION}}$CP, where the duration of the short pulse was varied to take the values 2$\mu$s, 6$\mu$s, 10$\mu$s and 14$\mu$s. The same for BB-$^{\tr{RESPIRATION}}$CP is shown in Fig. \ref{fig:5-10}B, where the duration of the short pulse was kept constant at 2$\mu$s, while the duration of the offset-compensating pulse was varied to take the values $10\mu$s ($<\pi$), $12.5\mu$s ($=\pi$) and $15\mu$s ($>\pi$). The experimentally observed values correspond well with the theoretical prediction. The understanding is that the offset compensation can be tailored by altering the effective rotation of the offset-compensating pulses without significantly affecting the recoupled dipolar coupling Hamiltonian. \begin{figure}\label{fig:5-10} \end{figure} As the offset-compensating pulses are also inserted on the $I$-spin rf channel, the effect of the $\pi$ pulses on this channel, in particular of the length $\tau_{\tr{com}}^{(\tr{I})}$, is studied using effective field frequencies. For the pulse sequence given in Fig. \ref{fig:5-5}B, $\tau_{\tr{com}}^{(\tr{I})}$ was varied while keeping the flip angle constant at $\pi$ radians through a corresponding variation in $\omega_{\tr{rf}}^{(\tr{I})}$ (only for the offset-compensating pulse), and the effective field frequencies obtained are shown in Fig. \ref{fig:5-8}. The rf field amplitude for the rest of the pulses was set at $\omega_{\tr{rf}}^{(\tr{q})} = 2\omega_r$, the spinning rate was set at $\omega_r = 20$ kHz, and the short pulse lengths were $\tau_{p}^{(\tr{q})} = 2\mu$s. The finding is that the shorter the offset-compensating pulse, the better the transfer. This is substantiated by the transfer efficiency offset plots obtained through direct-propagation numerical simulations, shown in Fig. \ref{fig:5-8}B.\\ \begin{figure}\label{fig:5-8} \end{figure} In Fig.
\ref{fig:5-9}A, the transfer efficiency with respect to the isotropic chemical shifts of both spins, obtained through direct-propagation simulation, is shown; the offset compensation with respect to the $S$-spin rf field channel is wider ($\pm13$ kHz, with efficiencies above 73\%) than for $^{\tr{RESPIRATION}}$CP. The parameters used here correspond to the dashed black line in Fig. \ref{fig:5-8}, where $\tau_{\tr{com}}^{(\tr{I})} = 5\mu$s ($\implies \omega_{\tr{rf}}^{(\tr{I})} = 100$ kHz for the $I$-spin channel compensation pulses), $\omega_r = 20$ kHz and $\omega_{\tr{rf}}^{(\tr{S})} = \omega_{\tr{rf}}^{(\tr{I})} = 2\omega_r$ for the rest of the pulses.\\ To further improve the transfer efficiency of the BB-$^{\tr{RESPIRATION}}$CP experiment, an additional amplitude sweep was added to the phase-alternating pulses in the $I$-spin rf channel. Similar to the discussion in Sec. \ref{sect:4_adRFDR}, the additional $\hat{F}^{ZQ}_z$ term in the effective Hamiltonian helps drag the polarisation adiabatically from $\hat{I}_z$ to $\hat{S}_z$. The transfer efficiency so calculated, with respect to varying isotropic chemical shifts of both spins, is shown in Fig. \ref{fig:5-9}B. The simulation used the same parameters as in Fig. \ref{fig:5-9}A, but with a total mixing time of 7.0 ms (M=70). The sweep parameters were $\Delta = 800$ Hz and $d_{\tr{est}} = 160$ Hz. The maximum transfer efficiency obtained is about 90\%, and the offset compensation is also slightly broader on both rf channels. \begin{figure}\label{fig:5-9} \end{figure} To demonstrate the performance of the BB-$^{\tr{RESPIRATION}}$CP transfer element on a more complicated system of biological relevance, 1D and 2D experiments were carried out on a sample of SNNFGAILSS amyloid fibrils, and compared with 1D $^{\tr{RESPIRATION}}$CP and DCP experiments for $^{15}$N $\rightarrow ^{13}$C transfer. The experiments were carried out on a 950 MHz spectrometer at 22.2 kHz MAS.
For the comparisons, adiabatic versions of all three pulse sequences were used. For the DCP experiment, the adiabatic amplitude sweep was added to the $^{13}$C channel, while linear sweeps were added to the phase-alternating pulses of both $^{\tr{RESPIRATION}}$CP experiments. The spectra obtained from 1D experiments recorded with adiabatic DCP, adiabatic $^{\tr{RESPIRATION}}$CP and adiabatic BB-$^{\tr{RESPIRATION}}$CP are shown in three rows, respectively, in Fig. \ref{fig:5-11}, where the first column depicts the transfer to $^{13}$C$_{\alpha}$ and the second column the transfer to $^{13}$CO. The two spectra each for the band-selective adiabatic DCP and adiabatic $^{\tr{RESPIRATION}}$CP were recorded individually by setting the rf carrier frequency at 50 ppm ($^{13}$C$_{\alpha}$) and 170 ppm ($^{13}$CO), while the two spectra for adiabatic BB-$^{\tr{RESPIRATION}}$CP were recorded simultaneously by setting the rf carrier frequency at 100 ppm. \begin{figure}\label{fig:5-11} \end{figure} The transfer efficiencies, $\varepsilon_{\tr{NC}}$, are shown in Tab. \ref{tab:bb_trans} and were calculated according to $\varepsilon_{\tr{NC}} = \frac{\eta_{\tr{HNC}}}{4\cdot\eta_{\tr{C}}\cdot \varepsilon_{\tr{HN}}}$, where $\eta_{\tr{HNC}}$ is the $^{13}$C signal intensity in the $^1$H $\rightarrow ^{15}$N $\rightarrow ^{13}$C transfer, $\eta_{\tr{C}}$ is the direct-excitation $^{13}$C signal intensity and $\varepsilon_{\tr{HN}}$ is the transfer efficiency of the $^1$H $\rightarrow ^{15}$N transfer. Admittedly, the transfers obtained for adiabatic BB-$^{\tr{RESPIRATION}}$CP are lower than those obtained for the other two pulse sequences.
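The use of the efficiency formula can be illustrated with a small worked example; all intensity values below are hypothetical, chosen only to show the arithmetic.

```python
# Worked example of eps_NC = eta_HNC / (4 * eta_C * eps_HN).
# All intensities here are hypothetical placeholders, not measured values.
eta_HNC = 0.54   # 13C signal after the H -> N -> C transfer
eta_C   = 1.00   # direct-excitation 13C signal (reference)
eps_HN  = 0.50   # H -> N transfer efficiency
eps_NC = eta_HNC / (4 * eta_C * eps_HN)
print(f"eps_NC = {eps_NC:.0%}")  # -> eps_NC = 27%
```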
However, since BB-$^{\tr{RESPIRATION}}$CP transfers to the two $^{13}$C sites in a single experiment from the same number of $^{15}$N nuclei, whereas the other two sequences transfer to the two $^{13}$C sites in two individual experiments, a fair comparison between the pulse sequences is to compare the weighted total transfers, shown in the last column of the table. \begin{table}[!h] \centering \begin{tabular}[b]{|c|c|c|c|} \hline & $^{15}$N $\rightarrow ^{13}$C$_{\alpha}$ & $^{15}$N $\rightarrow ^{13}$CO & weighted total\\ \hline \hline DCP & 27\% & 14\% & 20.5\%\\ \hline $^{\tr{RESPIRATION}}$CP & 18\% & 27\% & 22.5\%\\ \hline BB-$^{\tr{RESPIRATION}}$CP & 10\% & 14\% & 24\%\\ \hline \end{tabular} \caption{Transfer efficiencies obtained for $^{15}$N $\rightarrow ^{13}$C transfers using adiabatic versions of DCP, $^{\tr{RESPIRATION}}$CP and BB-$^{\tr{RESPIRATION}}$CP pulse sequences, on a sample of SNNFGAILSS amyloid fibrils.} \label{tab:bb_trans} \end{table} In summary, the effect of the isotropic chemical shift offset on the heteronuclear transfer efficiency of $^{\tr{RESPIRATION}}$CP has been understood theoretically and verified experimentally. The theoretical analysis has been put to use in designing a broadband variant, whose validity has been verified by experimental data recorded on 400 MHz and 950 MHz spectrometers. Though the description has been applied here only to the RFDR and $^{\tr{RESPIRATION}}$CP pulse sequences, it can be applied to any amplitude-modulated pulse sequence to find the effective Hamiltonian and gain insights into the experiments. \chapter{General Pulse Sequences} \label{chap:genSeq} The previous chapter detailed the formalism to find the effective Hamiltonian in problems where the rf field interactions were only amplitude-modulated.
The formalism cannot be used to treat problems where the rf field interactions are modulated in both amplitude and phase, since the underlying assumption, that the rf field Hamiltonians at different times commute, is not valid here. In this chapter, a general formalism is presented that enables the representation of a time-dependent rf field interaction frame Hamiltonian, where the rf field is both amplitude- and phase-modulated, in Fourier space. This in turn enables the use of the formulae derived in chapter \ref{chap:DesignPrinciples} for calculating the time-independent effective Hamiltonian. Sec. \ref{sect:5_ff} deals with the formalism, while Sec. \ref{sect:5-c7} illustrates it by describing C-symmetry based homonuclear dipolar recoupling pulse sequences.\\ \section{Fundamental Frequencies} \label{sect:5_ff} As in the formalism for amplitude-modulated pulse sequences given in Sec. \ref{sect:4_ff}, to find the interaction frame Hamiltonian it is sufficient to find the interaction frame transformation of the single-spin operators of the individual spins. This is valid as long as the interaction frame transformation is defined by only single-spin operators, which is typically the case when it is defined by rf field and/or isotropic chemical shift interactions. Another reason is that two-spin coupling terms in the Hamiltonian can be written as a Kronecker product of two single-spin operators (\textit{separable states}). It was discussed in Sec. \ref{sect:bb_resp} that it is better to transform into a frame defined by both the rf field and the isotropic chemical shift interactions together, rather than a frame defined by just the rf field interaction.
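The separability argument can be checked numerically in a few lines; the sketch below verifies that the two-spin coupling term $2\hat{I}_{1z}\hat{I}_{2z}$ is indeed a Kronecker product of single-spin operators.

```python
import numpy as np

Iz = np.array([[0.5, 0.0], [0.0, -0.5]])   # single-spin Iz (spin-1/2)
E = np.eye(2)

# Embed the single-spin operators in the two-spin space via Kronecker products
I1z = np.kron(Iz, E)
I2z = np.kron(E, Iz)

# The two-spin coupling term 2*I1z*I2z is separable: 2*(Iz (x) Iz)
assert np.allclose(2 * I1z @ I2z, 2 * np.kron(Iz, Iz))
```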
Therefore, consider a combined rf field and isotropic chemical shift interaction experienced by a spin $I$, given by, \begin{equation} \hat{\mathcal{H}}_{\tr{rf+iso}}(t) = \omega_{\tr{I}}^{(0)} \hat{I}_z + \omega_{\tr{rf}}(t)(\hat{I}_x\cos \phi(t) + \hat{I}_y\sin \phi(t)), \end{equation} where $\omega_{\tr{I}}^{(0)} = \omega_0^{(\tr{I})} - \omega_c$ represents the isotropic chemical shift of spin $I$, with $\omega_0^{(\tr{I})}$ denoting the Larmor frequency of spin $I$ and $\omega_c$ the carrier frequency of the rf channel. Say the rf field Hamiltonian, and therefore the combined rf and isotropic chemical shift Hamiltonian $\hat{\mathcal{H}}_{\tr{rf+iso}}(t)$, is periodic with $\tau_m = \frac{2\pi}{\omega_m}$, while under MAS the internal Hamiltonian is periodic with $\tau_r = \frac{2\pi}{\omega_r}$. The independence of the spatial and spin parts of the internal Hamiltonian can be exploited here. Since the spatial part of the internal Hamiltonian is already represented as a Fourier series, and is unaffected by the interaction frame transformation, it can be kept out of the discussion concerning the spin-part transformations and later appended to the interaction frame spin-parts to represent the total interaction frame Hamiltonian. The spin-parts of the internal Hamiltonian are, as yet, not time-modulated.\\ The propagator for the combined rf and isotropic chemical shift Hamiltonian over the interval $(0,\tau_m)$, for each spin, can be expressed as \begin{equation} \hat{U}_{\tr{rf+iso}}(\tau_m) = e^{-i\omega_{\tr{cw}} \tau_m \hat{\mathcal{F}}}, \label{eq:5_2} \end{equation} where $\hat{\mathcal{F}}$ is a linear combination of single-spin operators of the spin ($I$) under consideration and $\omega_{\tr{cw}}$ is the effective field frequency that produces an average flip of $\omega_{\tr{cw}}t$ on the single-spin operators.
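Given a numerically computed one-period propagator, the parameters of this decomposition can be recovered from the trace and the traceless part of the matrix. The sketch below does this for a spin-$\frac{1}{2}$; the function name is illustrative, and the decomposition assumes only the form of the propagator stated above.

```python
import numpy as np

def effective_rotation(U, tau_m):
    """Given a one-period propagator U = exp(-i*w_cw*tau_m*F) for a spin-1/2,
    recover the effective field frequency w_cw and the generator F."""
    U = U / np.sqrt(np.linalg.det(U) + 0j)      # remove global phase (-> SU(2))
    theta = 2 * np.arccos(np.clip(np.real(np.trace(U)) / 2, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return 0.0, None
    # The traceless part of U equals -i*sin(theta/2)*(n.sigma)
    n_sigma = (U - np.trace(U) / 2 * np.eye(2)) / (-1j * np.sin(theta / 2))
    return theta / tau_m, n_sigma / 2           # F = (n.sigma)/2 = n.S
```

For example, a propagator built as a rotation of $0.2\pi$ about $x$ over $\tau_m = 100\,\mu$s yields $\omega_{\tr{cw}} = 0.2\pi/\tau_m$ and $\hat{\mathcal{F}} = \hat{I}_x$.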
The periodicity of the interaction frame single-spin operators is therefore $\frac{2\pi}{\gcd(\omega_m,\omega_{\tr{cw}})}$ if $\omega_{\tr{cw}}$ is non-zero, and simply $\frac{2\pi}{\omega_m} = \tau_m$ if $\omega_{\tr{cw}}$ is zero. The Fourier space representation of the interaction frame single-spin operators can accordingly be given with respect to either one or two frequencies, as shown below.\\ \subsubsection*{\underline{Case: $\omega_{\tr{cw}} = 0$}} In the scenario where the propagator given in Eq. \ref{eq:5_2} is the identity, i.e., $\omega_{\tr{cw}} = 0$, the interaction frame single-spin operators are periodic with the same period as the combined rf and isotropic chemical shift Hamiltonian, $\tau_m$. The interaction frame transformation of the single-spin operators is given by, \begin{equation} \hat{I}_{j} \xrightarrow{\hat{\mathcal{H}}_{\tr{rf+iso}}(t)} \sum_{j' = x,y,z}^{} c_{j,j'}(t) \hat{I}_{j'}, \end{equation} where the time-dependent components $c_{j,j'}(t)$ are given by, \begin{equation} c_{j,j'}(t) = \langle \hat{I}_{j'} | \hat{U}^{\dagger}_{\tr{rf+iso}}(t) \cdot \hat{I}_j \cdot \hat{U}_{\tr{rf+iso}}(t) \rangle \label{eq:5-4} \end{equation} with $j$ and $j'$ each taking any of the three values $x, y$ or $z$. The components $c_{j,j'}(t)$, which are periodic in time with $\tau_m$ (because $\hat{U}_{\tr{rf+iso}}(\tau_m) = \mathds{1}$), can be expressed as a Fourier series given by, \begin{equation} c_{j,j'}(t) = \sum_{k_{\tr{I}} = -\infty}^{\infty} a_{j,j'}(k_{\tr{I}}) \cdot e^{ik_{\tr{I}}\omega_m t}, \label{eq:5-5} \end{equation} where $a_{j,j'}(k_{\tr{I}})$ are the complex Fourier coefficients.
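In practice, the Fourier coefficients of such a periodic component can be obtained by sampling it on an equidistant grid over one period and applying a discrete Fourier transform. A minimal sketch, assuming the convention $c(t) = \sum_k a(k)\,e^{ik\omega_m t}$ and numpy's FFT sign conventions:

```python
import numpy as np

def fourier_coeffs(c_samples):
    """Complex Fourier coefficients a(k) of a tau_m-periodic signal sampled at
    N equally spaced points over one period, with the convention
        c(t) = sum_k a(k) * exp(i*k*w_m*t).
    The result is ordered k = 0, 1, ..., then negative k (numpy FFT order)."""
    return np.fft.fft(c_samples) / len(c_samples)

# Example: c(t) = cos(w_m t) has a(+1) = a(-1) = 1/2
N = 64
t = np.arange(N) / N                 # one period, with w_m = 2*pi
a = fourier_coeffs(np.cos(2 * np.pi * t))
assert np.isclose(a[1], 0.5) and np.isclose(a[-1], 0.5)
```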
The interaction frame single-spin operators are then explicitly given by \begin{equation} \hat{I}_j \xrightarrow{\hat{\mathcal{H}}_{\tr{rf+iso}}(t)} \sum_{k_{\tr{I}} = -\infty}^{\infty} \Big(a_{j,x}(k_{\tr{I}}) e^{ik_{\tr{I}}\omega_m t} \cdot \hat{I}_x +a_{j,y}(k_{\tr{I}}) e^{ik_{\tr{I}}\omega_m t} \cdot \hat{I}_y + a_{j,z}(k_{\tr{I}}) e^{ik_{\tr{I}}\omega_m t} \cdot \hat{I}_z\Big), \label{eq:5-6} \end{equation} where $j$ can take any of the values $x, y$ or $z$. \subsubsection*{\underline{Case: $\omega_{\tr{cw}} \neq 0$}} In the scenario where $\omega_{\tr{cw}} \neq 0$ in the propagator given in Eq. \ref{eq:5_2}, the interaction frame single-spin operators are periodic with $\frac{2\pi}{\gcd\big(\omega_m,\omega_{\tr{cw}}\big)}$. Therefore the coefficients $c_{j,j'}(t)$ in Eq. \ref{eq:5-4} cannot be written as a Fourier series with just $\omega_m$, as was done for the case of $\omega_{\tr{cw}} = 0$ in Eq. \ref{eq:5-5}. To overcome this, the interaction frame transformation of the single-spin operators is rewritten as, \begin{equation} \begin{alignedat}{2} \hat{I}_j \xrightarrow{\hat{\mathcal{H}}_{\tr{rf+iso}}(t)} & \hspace*{20pt} \hat{U}^{\dagger}_{\tr{rf+iso}}(t) \cdot \hat{I}_j \cdot \hat{U}_{\tr{rf+iso}}(t)\\ &= e^{i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \underbrace{e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \hat{U}^{\dagger}_{\tr{rf+iso}}(t) \cdot \hat{I}_j \cdot \hat{U}_{\tr{rf+iso}}(t) e^{i\omega_{\tr{cw}}\hat{\mathcal{F}}t}}_{\text{periodic with $\tau_m = \frac{2\pi}{\omega_m}$}} e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \end{alignedat} \label{eq:5-7} \end{equation} where a positive and a negative rotation about $\hat{\mathcal{F}}$ have been padded in. The reason for this is that the bracketed term in Eq. \ref{eq:5-7} is now periodic with $\tau_m$ and therefore its projection can be rewritten as a Fourier series. For a mathematical convenience that will become apparent in Eq. \ref{eq:5-12}, a rotated basis \{$x',y',z'$\} is defined such that $\hat{\mathcal{F}}$ is along $z'$.
Projection of the bracketed time-dependent term in Eq. \ref{eq:5-7} on the rotated basis allows the transformation given in Eq. \ref{eq:5-7} to be rewritten as \begin{equation} \begin{alignedat}{2} \hat{I}_j \xrightarrow{\hat{\mathcal{H}}_{\tr{rf+iso}}(t)} & \hspace*{20pt} e^{i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \Big(\sum_{j'}^{}c^{rot}_{j,j'}(t) \cdot \hat{I}_{j'}\Big) e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \end{alignedat} \label{eq:5-9} \end{equation} where \begin{equation} c^{rot}_{j,j'}(t) = \langle \hat{I}_{j'} | e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \hat{U}^{\dagger}_{\tr{rf+iso}}(t) \cdot \hat{I}_j \cdot \hat{U}_{\tr{rf+iso}}(t) e^{i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \rangle, \label{eq:5-8} \end{equation} and the index $j$ can take any of the values $\{x,y,z\}$ while $j'$ can take any of the values $\{x',y',z'\}$. Note that only the projection operators $\hat{I}_{j'}$ are in the rotated basis, while the initial operators $\hat{I}_j$ are in the conventional basis. The components $c^{rot}_{j,j'}(t)$ given in Eq. \ref{eq:5-8}, being periodic with $\tau_m$, can now be written as a Fourier series given by, \begin{equation} c^{rot}_{j,j'}(t) = \sum_{k_{\tr{I}} = -\infty}^{\infty} a_{j,j'}(k_{\tr{I}}) \cdot e^{ik_{\tr{I}}\omega_m t}, \label{eq:5-10} \end{equation} where $a_{j,j'}(k_{\tr{I}})$ are the complex Fourier coefficients. The Fourier series expression in Eq. \ref{eq:5-10} substituted in Eq. \ref{eq:5-9} leads to, \begin{equation} \begin{alignedat}{2} \hat{I}_j \xrightarrow{\hat{\mathcal{H}}_{\tr{rf+iso}}(t)} & \hspace*{20pt} e^{i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \Big(\sum_{j'}^{}\sum_{k_{\tr{I}} = -\infty}^{\infty} a_{j,j'}(k_{\tr{I}}) \cdot e^{ik_{\tr{I}}\omega_m t} \cdot \hat{I}_{j'}\Big) e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \end{alignedat} \label{eq:5-11} \end{equation} Now the rotation defined by $e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t}$ in Eq. \ref{eq:5-11} can be made to act on the operators $\hat{I}_{j'}$.
But as $\hat{\mathcal{F}}$ is along $\hat{I}_{z'}$ (that being how the rotated basis was defined above), only the operators $\hat{I}_{x'}$ and $\hat{I}_{y'}$ mix with each other, in a simple fashion. For a given $k_{\tr{I}}$, the rotation of the operators is given by, \begin{equation} \begin{alignedat}{2} e^{i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \Big(a_{j,x'}(k) \cdot \hat{I}_{x'}\Big) e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t} &= a_{j,x'}(k) \Big(\hat{I}_{x'} \cos(\omega_{\tr{cw}}t) - \hat{I}_{y'} \sin(\omega_{\tr{cw}}t)\Big),\\ e^{i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \Big(a_{j,y'}(k) \cdot \hat{I}_{y'}\Big) e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t} &= a_{j,y'}(k) \Big(\hat{I}_{y'} \cos(\omega_{\tr{cw}}t) + \hat{I}_{x'} \sin(\omega_{\tr{cw}}t)\Big),\\ e^{i\omega_{\tr{cw}}\hat{\mathcal{F}}t} \Big(a_{j,z'}(k) \cdot \hat{I}_{z'}\Big) e^{-i\omega_{\tr{cw}}\hat{\mathcal{F}}t} &= a_{j,z'}(k) \hat{I}_{z'}. \end{alignedat} \label{eq:5-12} \end{equation} Replacing the cosines and sines in Eq. \ref{eq:5-12} with their exponential forms, \begin{equation} \cos(\omega_{\tr{cw}}t) = \frac{e^{i\omega_{\tr{cw}}t} + e^{-i\omega_{\tr{cw}}t}}{2}, \qquad \sin(\omega_{\tr{cw}}t) = \frac{e^{i\omega_{\tr{cw}}t} - e^{-i\omega_{\tr{cw}}t}}{2i}, \end{equation} and substituting Eq. \ref{eq:5-11} in Eq.
\ref{eq:5-10}, the transformation of the single-spin operators can be neatly expressed as, \begin{equation} \hat{I}_{j} \xrightarrow{\hat{\mathcal{H}}_{\tr{rf+iso}}(t)} \sum_{j' = x',y',z'}^{} c_{j,j'}(t) \hat{I}_{j'}, \end{equation} where \begin{equation} \begin{alignedat}{2} c_{j,x'}(t) &= \sum_{k = -\infty}^{\infty} \sum_{\substack{l=-1\\l\neq0}}^{1} \frac{1}{2}(a_{j,x'}(k) - ila_{j,y'}(k)) e^{il\omega_{\tr{cw}}t} e^{ik\omega_mt},\\ c_{j,y'}(t) &= \sum_{k = -\infty}^{\infty} \sum_{\substack{l=-1\\l\neq0}}^{1} \frac{1}{2}(a_{j,y'}(k) + ila_{j,x'}(k)) e^{il\omega_{\tr{cw}}t} e^{ik\omega_mt},\\ c_{j,z'}(t) &= \sum_{k = -\infty}^{\infty} a_{j,z'}(k) e^{ik\omega_mt}. \end{alignedat} \label{eq:5-16} \end{equation} The above formalism will be illustrated by describing C-symmetry pulse sequences; in particular, the improved isotropic chemical shift compensation of the POST element of the sequence will be detailed.\\ \section{C-symmetry Homonuclear Dipolar Recoupling Experiments} \label{sect:5-c7} In the C7 pulse scheme\cite{lee1995efficient}, a basic pulse sequence element with phase $\phi$ is repeated seven times over a span of two rotor periods, each time with the phase incremented by $\frac{2\pi}{7}$. This is schematically illustrated in Fig. \ref{fig:C7_Pulse_Scheme}A. The basic element of the original C7 pulse sequence, shown in Fig. \ref{fig:C7_Pulse_Scheme}B, consists of two pulses of flip angle $2\pi$ each, with phases $\phi$ and $\phi+\pi$. Citing experimental evidence for deficiencies in the dipolar recoupling when the rf carrier frequency is not close to the mean of the isotropic Larmor frequencies of the two nuclei, M. Hohwy et al. performed \textit{high-order} error term analysis to design the POST element of C7 and to show its improved robustness with respect to chemical shift offsets\cite{hohwy1998broadband}.
The POST-C7 basic element is merely a permutation of the pulses within the C7 basic element and consists of a $\pi/2$ pulse with phase $\phi$, a $2\pi$ pulse with phase $\phi+\pi$ and a $3\pi/2$ pulse with phase $\phi$, in that order. This is illustrated in Fig. \ref{fig:C7_Pulse_Scheme}C. Employing the formalism described in the previous section, the effect of the isotropic chemical shift on the transfer efficiency of the C7 pulse scheme, using both the C7 and POST-C7 basic elements, will be shown below using \textit{only} the first-order effective Hamiltonian. \begin{figure} \caption{A) A schematic representation of the C7 pulse scheme, where a basic pulse element with a defining phase $\phi$ is repeated seven times over a duration of two rotor periods, each time with an increment in the phase value by $2\pi/7$. Each equally long pulse element has a constant rf field amplitude, equal to seven times the MAS spinning rate, thereby performing a total $4\pi$ rotation. The entire pulse sequence is repeated $M$ times in an experiment to achieve polarisation transfer between two homonuclear coupled nuclei. The basic pulse element in the pulse scheme is either B) C7, where the phase of the pulse is $\phi$ for the first half and $\phi+\pi$ for the second half, with each half performing a $2\pi$ rotation, or C) POST-C7, where the element is divided into three parts performing rotations of $\pi/2$, $2\pi$ and $3\pi/2$ with phases $\phi$, $\phi+\pi$ and $\phi$ respectively.} \label{fig:C7_Pulse_Scheme} \end{figure} \subsubsection*{\underline{System Hamiltonian:}} Consider two homonuclear coupled spin$-\frac{1}{2}$ nuclei, $I_1$ and $I_2$.
The rotating frame Hamiltonian for the present problem is given by, \begin{equation} \hat{\mathcal{H}}(t) = \hat{\mathcal{H}}_{\tr{int}}(t) + \hat{\mathcal{H}}_{\tr{rf}}(t), \end{equation} with the internal Hamiltonian $\hat{\mathcal{H}}_{\tr{int}}(t)$ given by, \begin{equation} \begin{alignedat}{2} \hat{\mathcal{H}}_{\tr{int}}(t) &= \sum_{n=-2}^{2} \omega^{(n)}_{\tr{I$_1$}} e^{in\omega_rt} \cdot \hat{I}_{1z} + \sum_{n=-2}^{2} \omega^{(n)}_{\tr{I$_2$}} e^{in\omega_rt} \hat{I}_{2z} \\ &\hspace*{15pt}+ \sum_{n=-2}^{2} \omega^{(n)}_{\tr{I$_1$I$_2$}} e^{in\omega_rt} \cdot (2\hat{I}_{1z}\hat{I}_{2z} - \hat{I}_{1x}\hat{I}_{2x} - \hat{I}_{1y}\hat{I}_{2y}), \end{alignedat} \end{equation} and the rf Hamiltonian $\hat{\mathcal{H}}_{\tr{rf}}(t)$ given by, \begin{equation} \begin{alignedat}{2} \hat{\mathcal{H}}_{\tr{rf}}(t) &= \omega_{\tr{rf}}(t) \cdot e^{-i\phi(t) (\hat{I}_{1z}+\hat{I}_{2z})} (\hat{I}_{1x} + \hat{I}_{2x}) e^{i\phi(t) (\hat{I}_{1z} + \hat{I}_{2z})} \end{alignedat} \end{equation} where $\omega_r$ is the MAS spinning rate and the spatial parts of the interactions are given as Fourier components $\sum_{n=-2}^{2}\omega_{\lambda}^{(n)}e^{in\omega_rt}$, with $\lambda = \tr{I}_1\tr{I}_2$ for the dipolar coupling, and $\lambda = \tr{I}_1$ and $\lambda = \tr{I}_2$ for the chemical shifts of the nuclei $I_1$ and $I_2$ respectively. For simplicity, only the isotropic component of the chemical shift Hamiltonian is considered in this work. However, the calculation of the effective Hamiltonian including the anisotropic chemical shift is straightforward, although it then also involves second-order terms.
For both the C7 and POST-C7 basic pulse elements, the rf field strength is constant, $\omega_{\tr{rf}}(t) = 7\omega_r$, while the phase $\phi(t)$ is a step function with increments of $2\pi/7$ every $2\tau_r/7$ period.\\ \subsubsection*{\underline{Frequency Domain:}} The total Hamiltonian can be represented in an interaction frame defined by the rf field and the isotropic chemical shift of each spin, given by, \begin{equation} \hat{\mathcal{H}}_{\tr{rf+iso}}(t) = \sum_{q=1}^{2}\Big(\omega_{\tr{rf}}(t) \cdot e^{-i\phi(t)\hat{I}_{qz}}\hat{I}_{qx}e^{i\phi(t)\hat{I}_{qz}} + \omega^{(0)}_{\tr{I}_q}\cdot\hat{I}_{qz}\Big), \label{eq:5-20} \end{equation} where the index $q$ represents the two nuclei. The combined rf field and isotropic chemical shift Hamiltonian is periodic with the same period as the rf field Hamiltonian, which is two rotor periods, i.e., $\tau_m = 2\tau_r$. The propagator, as given in Eq. \ref{eq:5_2}, for the combined rf field and isotropic chemical shift Hamiltonian at $\tau_m$ is \begin{equation} \hat{U}^{{(q)}}_{\tr{rf+iso}}(\tau_m) = e^{-i\omega^{{(q)}}_{\tr{cw}}\tau_m\hat{\mathcal{F}}_q} \label{eq:5-21} \end{equation} where $\hat{\mathcal{F}}_q$ is the axis about which the single-spin operators of spin $I_q$ are rotated by the flip angle $\omega^{{(q)}}_{\tr{cw}}\tau_m$ every $\tau_m$ period.\\ The spin-part of the interaction frame Hamiltonian, for each spin, is now given by \begin{equation} \hat{I}_{qj} \xrightarrow[q=\{1,2\}]{\hat{\mathcal{H}}_{\tr{rf+iso}}(t)} \sum_{j'}^{} c^q_{j,j'}(t) \hat{I}_{qj'}, \end{equation} where the index $j$ represents the conventional basis operators $\{x,y,z\}$, while the index $j'$ represents the rotated basis operators $\{x',y',z'\}$, defined such that $\hat{\mathcal{F}}_q$ is along $z'$, when $\omega^{{(q)}}_{\tr{cw}}\neq0$, and represents the conventional basis operators $\{x,y,z\}$ when $\omega^{{(q)}}_{\tr{cw}}=0$. It is noted here that the rotated basis operators, if present, are defined independently for the two spins.
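A quick numerical check, sketched below for an on-resonance spin-$\frac{1}{2}$ with illustrative helper names, confirms that both basic elements are cycles at zero offset: the element propagator is $\pm\mathds{1}$, so that $\omega^{(\tr{q})}_{\tr{cw}}$ vanishes there and only a chemical shift offset makes it non-zero.

```python
import numpy as np

sx, sy = (np.array([[0, 1], [1, 0]]) / 2,
          np.array([[0, -1j], [1j, 0]]) / 2)

def pulse_U(flip, phase):
    """Propagator of an on-resonance pulse with the given flip angle and phase."""
    n_sigma = 2 * (np.cos(phase) * sx + np.sin(phase) * sy)
    return np.cos(flip / 2) * np.eye(2) - 1j * np.sin(flip / 2) * n_sigma

def element_U(element, phi):
    """Net propagator of a basic element: a list of (flip, relative phase)."""
    U = np.eye(2, dtype=complex)
    for flip, dphi in element:
        U = pulse_U(flip, phi + dphi) @ U
    return U

# C7 element: (2pi)_phi (2pi)_{phi+pi}; POST-C7: (pi/2)_phi (2pi)_{phi+pi} (3pi/2)_phi
C7   = [(2 * np.pi, 0.0), (2 * np.pi, np.pi)]
POST = [(np.pi / 2, 0.0), (2 * np.pi, np.pi), (3 * np.pi / 2, 0.0)]

# On resonance, both basic elements are cycles: net propagator = +/- identity
for elem in (C7, POST):
    assert np.allclose(np.abs(np.trace(element_U(elem, 0.0))), 2.0)
```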
The components $c^q_{j,j'}(t)$ are represented as Fourier series given by Eq. \ref{eq:5-5} when $\omega^{{(q)}}_{\tr{cw}}=0$, and by Eq. \ref{eq:5-16} when $\omega^{{(q)}}_{\tr{cw}}\neq0$.\\ The total interaction frame Hamiltonian can now be represented as, \begin{equation} \hat{\tilde{\mathcal{H}}}(t) = \sum_{n=-2}^{2}\sum_{k_1=-\infty}^{\infty}\sum_{l_1}^{}\sum_{k_2=-\infty}^{\infty}\sum_{l_2}^{}\hat{\tilde{\mathcal{H}}}^{(n,k_1,l_1,k_2,l_2)} e^{in\omega_rt} e^{ik_1\omega^{(1)}_mt} e^{il_1\omega^{(1)}_{\tr{cw}}t} e^{ik_2\omega^{(2)}_mt} e^{il_2\omega^{(2)}_{\tr{cw}}t}, \label{eq:5-23} \end{equation} where the index $l_q=0$ when $\omega^{{(q)}}_{\tr{cw}}=0$, and $l_q = \{-1,1\}$ when $\omega^{{(q)}}_{\tr{cw}}\neq0$. It is admittedly tedious, but straightforward, to see from Eqs. \ref{eq:5-5} and \ref{eq:5-16}, as the case may be, that the Fourier components $\hat{\tilde{\mathcal{H}}}^{(n,k_1,l_1,k_2,l_2)}$ work out to, \begin{equation} \hat{\tilde{\mathcal{H}}}^{(n,k_1,l_1,k_2,l_2)} = \omega^{(n)}_{\tr{I}_1\tr{I}_2} \hat{\tilde{I}}_1 \hat{\tilde{I}}_2, \end{equation} with \begin{equation} \hat{\tilde{I}}_q = \begin{cases} \sum\limits_{j,j'}^{} a^{(q)}_{j,j'}(k_q) \hat{I}_{qj'}; \hspace*{10pt} \tr{if } l_q=0 \begin{cases} j' = \{x,y,z\} & \omega^{{(q)}}_{\tr{cw}} = 0,\\ j' = \{z'\} & \omega^{{(q)}}_{\tr{cw}} \neq 0, \end{cases}&\\ \frac{1}{2} \sum\limits_{j}^{} \big(a^{(q)}_{j,x'}(k_q) \mp ia^{(q)}_{j,y'}(k_q)\big)\hat{I}_{qx'} + \big(a^{(q)}_{j,y'}(k_q) \pm ia^{(q)}_{j,x'}(k_q)\big)\hat{I}_{qy'}; & \tr{if } l_q=\pm1, \end{cases} \end{equation} where the summation index $j$ runs over the conventional basis operators $\{x,y,z\}$ and $l_q=\pm1$ exists only when $\omega^{(q)}_{\tr{cw}} \neq 0$.\\ \subsubsection*{\underline{Effective Hamiltonian:}} The time-independent effective first-order Hamiltonian is now given by, \begin{equation} \hat{\overline{\tilde{\mathcal{H}}}}^{(1)} = \sum_{n,k_1,l_1,k_2,l_2}^{} \hat{\tilde{\mathcal{H}}}^{(n,k_1,l_1,k_2,l_2)}, \label{eq:5-26}
\end{equation} such that the resonance condition, $n\omega_r + k_1\omega^{(1)}_m + l_1\omega^{(1)}_{\tr{cw}} + k_2\omega^{(2)}_m + l_2\omega^{(2)}_{\tr{cw}} = 0$, is satisfied. In the C7 pulse scheme, with either the C7 or the POST-C7 basic element, $\omega^{(1)}_m = \omega^{(2)}_m = \omega_r/2$, whereas $\omega^{(q)}_{\tr{cw}}$ depends on the isotropic chemical shift in addition to the rf field. The effective Hamiltonian can be represented in the zero-quantum (ZQ) / double-quantum (DQ) subspace with fictitious spin$-\frac{1}{2}$ operators defined as \begin{equation} \begin{alignedat}{4} \hat{I}^{ZQ}_{z'} &= \frac{1}{2}\big(\hat{I}_{1z'} - \hat{I}_{2z'}\big), &\hspace*{40pt} \hat{I}^{DQ}_{z'} &= \frac{1}{2}\big(\hat{I}_{1z'} + \hat{I}_{2z'}\big),\\ \hat{I}^{ZQ}_{y'} &= \hat{I}_{1y'}\hat{I}_{2x'} - \hat{I}_{1x'}\hat{I}_{2y'}, &\hspace*{40pt} \hat{I}^{DQ}_{y'} &= \hat{I}_{1x'}\hat{I}_{2y'} + \hat{I}_{1y'}\hat{I}_{2x'},\\ \hat{I}^{ZQ}_{x'} &= \hat{I}_{1x'}\hat{I}_{2x'} + \hat{I}_{1y'}\hat{I}_{2y'}, &\hspace*{40pt} \hat{I}^{DQ}_{x'} &= \hat{I}_{1x'}\hat{I}_{2x'} - \hat{I}_{1y'}\hat{I}_{2y'}. \end{alignedat} \label{eq:5-27} \end{equation} In this basis, the effective first-order Hamiltonian given in Eq. \ref{eq:5-26} lies in the ZQ/DQ $x$-$y$ plane. This is illustrated in Fig. \ref{fig:ZQ-DQ_subspace}A, with red arrows representing the direction for different crystals. The red arrows should ideally have different lengths, in agreement with the differing strengths of the Hamiltonian for different crystals, but for clarity they are drawn with equal lengths. Such a Hamiltonian enables the transfer, over a mixing time $\tau_{mix}$, \begin{equation} \hat{I}_{1z'} = \hat{I}^{ZQ}_{z'} + \hat{I}^{DQ}_{z'} \xrightarrow[\sqrt{c^2 + d^2}\tau_{mix} = \pi]{c\hat{I}^{ZQ/DQ}_{x'} + d\hat{I}^{ZQ/DQ}_{y'}} \begin{cases} -\hat{I}^{ZQ}_{z'} + \hat{I}^{DQ}_{z'} = \hat{I}_{2z'}; & \tr{zero-quantum},\\ \hat{I}^{ZQ}_{z'} - \hat{I}^{DQ}_{z'} = -\hat{I}_{2z'}; & \tr{double-quantum}.
\end{cases} \label{eq:5-28} \end{equation} \begin{figure}\label{fig:ZQ-DQ_subspace} \end{figure} \subsubsection*{\underline{Near-resonance conditions:}} The summation in Eq. \ref{eq:5-26} is carried out only over quintuples $(n,k_1,l_1,k_2,l_2)$ such that the resonance condition, $n\omega_r + k_1\omega^{(1)}_m + l_1\omega^{(1)}_{\tr{cw}} + k_2\omega^{(2)}_m + l_2\omega^{(2)}_{\tr{cw}} = 0$, is satisfied. However, when the condition is close to zero but not exactly zero, i.e., \begin{equation} n\omega_r + k_1\omega^{(1)}_m + l_1\omega^{(1)}_{\tr{cw}} + k_2\omega^{(2)}_m + l_2\omega^{(2)}_{\tr{cw}} = \Delta\omega_{\tr{near}} \label{eq:5-near-res} \end{equation} such that $|\Delta\omega_{\tr{near}}|$ is small\footnote{How small is small enough is discussed a little later in the text.}, one cannot simply claim that there is no contribution whatsoever. In such a circumstance, $\Delta\omega_{\tr{near}}$ can be subtracted from either\footnote{Subtracted from the one with $l_q = 1$. This is merely for mathematical convenience, and is not a limitation, as the presence of $l_q = \pm1$ in the resonance conditions implies that $l_q = \mp1$ is also part of the resonance conditions.} of the $\omega^{(q)}_{\tr{cw}}$. To do so, say spin $I_1$ is chosen; the combined rf field and isotropic chemical shift Hamiltonian of spin $I_1$, as given in Eq. \ref{eq:5-20}, can be updated to, \begin{equation} \hat{\mathcal{H}}_{\tr{rf+iso}}(t) = \underbrace{\omega_{\tr{rf}}(t) \cdot e^{-i\phi(t)\hat{I}_{1z}} \hat{I}_{1x} e^{i\phi(t)\hat{I}_{1z}}+ \omega^{(0)}_{I_1} \cdot \hat{I}_{1z} - \Delta\omega_{\tr{near}}\hat{\mathcal{F}}_{1}} + \Delta\omega_{\tr{near}}\hat{\mathcal{F}}_{1}. \label{eq:5-29} \end{equation} Updating the definition of the interaction frame transformation to the bracketed terms, Eq.
\ref{eq:5-21} for spin $I_1$ is now given by \begin{equation} \hat{U}^{(1)}_{\tr{rf+iso}}(\tau_m) = e^{-i(\omega^{(1)}_{\tr{cw}} - \Delta\omega_{\tr{near}})\tau_m\hat{\mathcal{F}}_1}, \end{equation} provided $\Delta\omega_{\tr{near}}$ is small enough in magnitude not to change the rotation axis much from the originally calculated axis, $\hat{\mathcal{F}}_1$, in Eq. \ref{eq:5-21}. This requirement translates to $|\Delta\omega_{\tr{near}}|\ll\overline{\omega_{\tr{rf}}(t)}$. The interaction frame Hamiltonian, given in Eq. \ref{eq:5-23}, now additionally contains the last term of Eq. \ref{eq:5-29}, which is not part of the interaction frame transformation definition. It enters the interaction frame Hamiltonian unchanged, since it is along $\hat{\mathcal{F}}_1$. The effective first-order Hamiltonian is correspondingly updated to \begin{equation} \hat{\overline{\tilde{\mathcal{H}}}}^{(1)} = \sum_{n,k_1,l_1,k_2,l_2}^{} \hat{\tilde{\mathcal{H}}}^{(n,k_1,l_1,k_2,l_2)} + \Delta\omega_{\tr{near}}\hat{\mathcal{F}}_1, \label{eq:5-32} \end{equation} where $n\omega_r + k_1\omega^{(1)}_m + l_1(\omega^{(1)}_{\tr{cw}}-\Delta\omega_{\tr{near}}) + k_2\omega^{(2)}_m + l_2\omega^{(2)}_{\tr{cw}} = 0$. The additional single-spin operator in the effective Hamiltonian, $\Delta\omega_{\tr{near}}\hat{\mathcal{F}}_1$, can be viewed as $I_{z'}$ of the ZQ/DQ subspace, defined in Eq. \ref{eq:5-27}, as illustrated with a green arrow in Fig. \ref{fig:ZQ-DQ_subspace}B. The total effective Hamiltonian is then no longer in the ZQ/DQ $x-y$ plane, but shifted out of the plane by an amount $\Delta\omega_{\tr{near}}$, and this adversely affects the transfer described in Eq. \ref{eq:5-28}.\\ \subsubsection*{\underline{Simulations}} The effective Hamiltonian, using the description above, was found for the C7 pulse scheme using both C7 and POST-C7 basic elements with $\omega_r/2\pi = 5$ kHz. This corresponds to a constant rf field strength of $\omega_{\tr{rf}} = 7\omega_r$, i.e., $\omega_{\tr{rf}}/2\pi = 35$ kHz.
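The per-element flip angles that generate the $\omega^{(q)}_{\tr{cw}}$ can be sketched numerically from the $2\times2$ single-spin propagator of one basic element under the rf field plus an isotropic offset. The sketch below is our own illustration, not the thesis' calculation: it assumes ideal rectangular pulses with the standard element compositions (C7: $(2\pi)_0(2\pi)_\pi$, POST-C7: $(\pi/2)_0(2\pi)_\pi(3\pi/2)_0$) and omits the inter-element phase increments of $2\pi/7$, which generate $\omega_m$.

```python
# Hedged sketch (ours, not the thesis' code): net rotation of one C7 or
# POST-C7 basic element for a single spin-1/2 under rf field + offset.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

W_RF = 2 * np.pi * 35e3  # rf field strength in rad/s (7 x 5 kHz MAS)

def element_propagator(element, offset):
    """Propagator of a composite element given as (flip, phase) pairs in rad."""
    U = np.eye(2, dtype=complex)
    for flip, phase in element:
        H = W_RF * (np.cos(phase) * sx + np.sin(phase) * sy) + offset * sz
        U = expm(-1j * H * flip / W_RF) @ U  # pulse duration = flip / w_rf
    return U

def net_angle(U):
    """Net rotation angle of an SU(2) propagator, folded into [0, pi]."""
    return 2 * np.arccos(np.clip(abs(np.trace(U)) / 2, 0.0, 1.0))

C7 = [(2 * np.pi, 0.0), (2 * np.pi, np.pi)]
POST = [(np.pi / 2, 0.0), (2 * np.pi, np.pi), (3 * np.pi / 2, 0.0)]

# On resonance both elements are cyclic: no net rotation.
assert net_angle(element_propagator(C7, 0.0)) < 1e-6
assert net_angle(element_propagator(POST, 0.0)) < 1e-6

# An isotropic offset makes the element non-cyclic; this residual
# rotation per element is what accumulates into w_cw.
assert net_angle(element_propagator(C7, 2 * np.pi * 5e3)) > 1e-3
```

Dividing the signed net angle by the element duration estimates $\omega^{(q)}_{\tr{cw}}$ for each spin; the sign requires tracking the rotation axis relative to $\hat{\mathcal{F}}_q$, which this sketch omits.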
As $|\Delta\omega_{\tr{near}}| \ll \overline{\omega_{\tr{rf}}(t)} \implies |\Delta\omega_{\tr{near}}| \ll 35$ kHz, the threshold for the allowed $|\Delta\omega_{\tr{near}}|$ was set to $<1$ kHz, which happens to be of the same order as the dipolar coupling strength $b_{I_1I_2}/2\pi = 1$ kHz. For both basic pulse elements, $\omega^{(1)}_m = \omega^{(2)}_m = \omega_r/2 = 2.5$ kHz, and therefore the resonance condition can be rewritten as \begin{equation} n\omega_r + k_1\omega_r/2 + l_1\omega^{(1)}_{\tr{cw}} + k_2\omega_r/2 + l_2\omega^{(2)}_{\tr{cw}} = \Delta\omega_{\tr{near}}. \label{eq:5-33} \end{equation} This implies that if the generated $\omega^{(q)}_{\tr{cw}}$ are small relative to $\omega_r/2$, then the $\omega^{(q)}_{\tr{cw}}$ have to cancel among themselves to satisfy the resonance condition, i.e., $\omega^{(1)}_{\tr{cw}} \pm \omega^{(2)}_{\tr{cw}} = \Delta\omega_{\tr{near}}$. This is the case for the above-mentioned parameters, with isotropic chemical shifts in the range of $\pm 10$ kHz. The first term in Eq. \ref{eq:5-32}, which is the recoupled dipolar coupling Hamiltonian, is found to be predominantly DQ, and therefore only $\omega^{(1)}_{\tr{cw}} + \omega^{(2)}_{\tr{cw}} = \Delta\omega_{\tr{near}}$ is of relevance. Fig. \ref{fig:sum_of_freq} shows $|\omega^{(1)}_{\tr{cw}} + \omega^{(2)}_{\tr{cw}}|$, with varying isotropic chemical shift offsets of the two nuclei, for the C7 (left) and POST-C7 (right) basic elements. The plot for the basic C7 pulse element suggests that $|\Delta\omega_{\tr{near}}|$, and in turn the deleterious second term in Eq.
\ref{eq:5-32}, is significant when the sum of the isotropic chemical shifts of the two nuclei is farther from zero, thereby explaining the experimental observation stated by Hohwy et al., \begin{center} \textquotedblleft \textit{$\dots$ there is experimental evidence for deficiencies in the dipolar recoupling when the rf carrier frequency is not close to the mean of the isotropic Larmor frequencies of the two involved nuclei.} \textquotedblright \end{center} \begin{figure}\label{fig:sum_of_freq} \end{figure} The suppression of $|\Delta\omega_{\tr{near}}|$ over the entire considered range of isotropic chemical shift values for the POST-C7 basic element is evident from Fig. \ref{fig:sum_of_freq}B. The maximum value of $|\Delta\omega_{\tr{near}}|$ for POST-C7, reached at the isotropic chemical shift values $\omega^{(0)}_{I_1} = \omega^{(0)}_{I_2} = +10$ kHz, is only a fifth of the maximum value for C7, reached at $\omega^{(0)}_{I_1} = \omega^{(0)}_{I_2} = \pm 10$ kHz. The asymmetry of the plot obtained for POST-C7 is due to the choice made in the C7 pulse scheme, where the phase $\phi$ is incremented. If instead the time series of $\phi(t)$ were decremented from $12\pi/7$ to $0$ in steps of $2\pi/7$, a mirrored plot, with the maximum $|\Delta\omega_{\tr{near}}|$ observed at $\omega^{(0)}_{I_1} = \omega^{(0)}_{I_2} = -10$ kHz, would have been obtained. The generated $|\Delta\omega_{\tr{near}}| = |\omega^{(1)}_{\tr{cw}} + \omega^{(2)}_{\tr{cw}}|$ are also smaller than $\omega_r/2$, thus validating the assumption made below Eq. \ref{eq:5-33}.\\ Fig. \ref{fig:sum_of_freq}, which shows how far the total effective Hamiltonian is tilted out of the DQ $x-y$ plane, is indicative of the transfer efficiency.
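The connection between the off-plane tilt and the transfer maximum can be illustrated with a toy fictitious spin-$\frac{1}{2}$ model (our sketch, not the thesis' SIMPSON simulation): take an effective Hamiltonian $\omega_d\hat{I}^{DQ}_{x'} + \Delta\,\hat{I}^{DQ}_{z'}$, with $\omega_d$ standing for the recoupled DQ strength of a single crystal and $\Delta$ playing the role of $\Delta\omega_{\tr{near}}$; the maximum of the $\hat{I}_{1z}\rightarrow-\hat{I}_{2z}$ transfer is then $\omega_d^2/(\omega_d^2+\Delta^2)$.

```python
# Toy model (ours, not the thesis' SIMPSON simulation): transfer maximum
# of I1z -> -I2z under a tilted DQ Hamiltonian  w_d*IDQx + Delta*IDQz.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
Id = np.eye(2)

I1z, I2z = np.kron(sz, Id), np.kron(Id, sz)
IDQx = np.kron(sx, sx) - np.kron(sy, sy)  # I1x I2x - I1y I2y, cf. Eq. 5-27
IDQy = np.kron(sx, sy) + np.kron(sy, sx)  # I1x I2y + I1y I2x
IDQz = (I1z + I2z) / 2

# the DQ operators close an SU(2) algebra ...
assert np.allclose(IDQx @ IDQy - IDQy @ IDQx, 1j * IDQz)

def evolve(H, rho0, t):
    """rho(t) = U rho0 U^dagger via eigendecomposition of Hermitian H."""
    evals, evecs = np.linalg.eigh(H)
    U = (evecs * np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ rho0 @ U.conj().T

# ... so an untilted in-plane pi rotation gives complete transfer (Eq. 5-28)
assert np.allclose(evolve(IDQx, I1z, np.pi), -I2z)

def max_transfer(w_d, delta, nt=2000):
    """Max of -<I2z>(t) starting from I1z; normalization Tr(I1z^2) = 1."""
    H = w_d * IDQx + delta * IDQz
    return max(-np.trace(evolve(H, I1z, t) @ I2z).real
               for t in np.linspace(0, 4 * np.pi / w_d, nt))

assert abs(max_transfer(1.0, 0.0) - 1.0) < 1e-3  # no tilt: complete transfer
assert abs(max_transfer(1.0, 1.0) - 0.5) < 1e-2  # tilt Delta = w_d: halved
assert max_transfer(1.0, 2.0) < max_transfer(1.0, 0.5)
```

The $\omega_d^2/(\omega_d^2+\Delta^2)$ ceiling holds per crystal; the powder-averaged efficiencies of Fig. \ref{fig:c7_prop} additionally average over the orientation-dependent $\omega_d$.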
However, to verify the prediction, direct-propagation numerical simulations were performed, in which the initial density operator $\hat{I}_{1z}$ is evolved under the rf field and internal interaction Hamiltonians for a mixing time $\tau_{mix}$, and the resulting density operator is projected onto $\hat{I}_{2z}$. The transfer efficiencies so calculated, with varying isotropic chemical shift offsets of both nuclei, are shown in Fig. \ref{fig:c7_prop} (left) for the C7 (top) and POST-C7 (bottom) basic pulse elements. $\tau_{mix}$ was kept constant over the entire grid and chosen as the time of maximum transfer for $\omega^{(0)}_{I_1} = \omega^{(0)}_{I_2} = 0$ kHz. The results are in good agreement with the prediction from Fig. \ref{fig:sum_of_freq}, which involves only the flip angles and the corresponding rotation axes imparted on the individual spins by one block of the repeating rf field combined with the isotropic chemical shift of the spin. Calculation of the flip angles and the corresponding rotation axes for individual spins is simpler and faster, as it involves only single-spin operators. \begin{figure} \caption{Numerical simulations for the $\hat{I}_1\rightarrow\hat{I}_2$ transfer efficiencies with varying isotropic chemical shift offsets of spins $I_1$ and $I_2$ for the C7 basic pulse element (top row) and the POST-C7 basic pulse element (bottom row). Simulations were performed using direct propagation with the SIMPSON software package (left column) and using the effective Hamiltonian (right column) given in Eq. \ref{eq:5-32}. All simulations were done for 5.0 kHz MAS with the total mixing time set to 3.2 ms (M=8). The dipolar coupling constant was set to $b_{I_1I_2}/2\pi = 1$ kHz and powder averaging was performed using 11 $\gamma_{CR}$ and 320 $\alpha_{CR}, \beta_{CR}$ crystallite angles.} \label{fig:c7_prop} \end{figure} The transfer efficiencies were also computed using the total effective Hamiltonian, given in Eq. \ref{eq:5-32}, to verify the formalism described above, and are shown in Fig.
\ref{fig:c7_prop} (right) for the C7 (top) and POST-C7 (bottom) basic pulse elements. Recalling the discussion in Sec. \ref{sect:MultipleFrequencies}, the effective Hamiltonian computed here is valid only at times defined by the greatest common divisor of all five fundamental frequencies. However, it was also shown that the propagator can be found at multiples of the time defined only by the spinning and modulation frequencies, using only time-independent Hamiltonians, as given in Eq. \ref{eq:modProp}, \begin{equation} \hat{U}(n\tau_c) = e^{-i\hat{\overline{\tilde{\mathcal{H}}}}^{(1)}\cdot n\tau_c} \cdot e^{i\sum\limits_{q=1}^{2}\omega^{(q)}_{\tr{cw}}\hat{\mathcal{F}}_q\cdot n\tau_c}, \end{equation} where $\tau_c = \frac{2\pi}{\gcd(\omega_r,\omega^{(1)}_m,\omega^{(2)}_m)} = 2\tau_r$ and the effective first-order Hamiltonian $\hat{\overline{\tilde{\mathcal{H}}}}^{(1)}$ is defined in Eq. \ref{eq:5-32}. The transfer efficiencies so calculated, shown in Fig. \ref{fig:c7_prop} (right), are in excellent agreement with the corresponding direct-propagation simulations (left).\\ In summary, the better offset compensation of the POST-C7 basic element over the original C7 basic element has been shown using just the effective first-order Hamiltonian. Hohwy et al.\ resorted to finding higher-order terms to show the same; this was rendered unnecessary here by treating the problem in an interaction frame defined by both the rf field and the isotropic chemical shift. The flip angle imparted on a spin by a repeating pulse element together with the isotropic chemical shift has been shown to be of great value in predicting the total effective Hamiltonian. \chapter{Conclusion} In this work, the equivalence between AHT and Floquet theory with regard to finding an effective time-independent Hamiltonian for a time-dependent Hamiltonian has been revisited.
The equivalence is shown to rest on being able to represent the time-dependent Hamiltonian in Fourier space with a finite number of fundamental frequencies, and a description that represents any time-dependent Hamiltonian with no more than two fundamental frequencies per involved spin has been developed and detailed. The description has been put to use to describe, understand and design variants of established dipolar recoupling pulse sequences, with a focus on the effects of chemical shift on magnetisation transfer.\\ The RFDR pulse sequence, a homonuclear dipolar recoupling experiment, has been described to explain the effect of isotropic chemical shift on the transfer efficiency. The temporal placements of the $\pi$ pulses enter the effective Hamiltonian, and its form enabled the design of an adiabatic variant of the pulse sequence, in which the placements of the $\pi$ pulses are shifted gradually throughout the mixing time. This variant was shown theoretically to significantly improve the transfer efficiency, and the results were verified through experiments.\\ The $^{\tr{RESPIRATION}}$CP pulse sequence, a heteronuclear dipolar recoupling experiment, was described using AHT without the assumptions on the constituent pulses that limited the description in the original work. It was established that for the recoupling conditions to exist, it is enough to match the effective rotations imparted on the individual spins by the combined rf field and isotropic chemical shift Hamiltonians. This led to a natural modification of the experiment, creating a variant that sustains the recoupling conditions over a larger range of isotropic chemical shift values. The new variant was verified experimentally to be significantly more broadband than the original $^{\tr{RESPIRATION}}$CP and DCP experiments.
This enabled the simultaneous recording of NCO and NC$_{\alpha}$ experiments with modest rf power, otherwise possible only with ultra-high-power DCP or dual-band selective transfers.\\ The above two experiments were described using a formalism developed for handling pulse sequences that are only amplitude modulated. Based on the findings in the BB-$^{\tr{RESPIRATION}}$CP work, a general formalism to describe experiments where the pulse sequences are both amplitude and phase modulated was developed and detailed. The general formalism was used to describe the effect of isotropic chemical shift on C7 pulse schemes, and the better offset compensation of the POST-C7 element over the original C7 element was explained using just first-order calculations. In this case too, the simpler calculations of the effective flip angles imparted by the combined rf field and isotropic chemical shift on the individual spins proved to be a useful preview of what to expect from the total effective Hamiltonian calculation. Though the description has been applied only to a select few pulse sequences to analyse and design improved variants, the theory has a larger potential for the design of advanced solid-state NMR experiments. In addition to the insights it provides, the effective Hamiltonian so calculated can also be applied to achieve faster numerical simulations; the gain will be apparent for larger spin systems, as the spins are treated individually and are combined into two-spin operators only at the end, to construct the effective Hamiltonian matrix. Numerical optimisations of pulse sequences have mostly focussed on optimising state-to-state transfers and effective Hamiltonians over the entire mixing time, or only over times where there is no net rotation imparted by the rf field on the corresponding spins.
The theory detailed in this dissertation urges a rethink of the pulse-design optimisation approach, as the effective Hamiltonian can be found even for non-cyclic times and therefore allows more freedom and guidance for the optimisation procedures. {\footnotesize \addcontentsline{toc}{chapter}{Bibliography} } \chapter*{Appendix A. SIMPSON script to find optimised adiabatic RFDR pulse sequence} Below is a script that can be executed via SIMPSON to find the best sweeps via a grid search. One can change the spin system directly in the spinsys section. In the par section, the user can change the spectrometer field (\texttt{proton$\_$frequency}), the MAS frequency (\texttt{spin$\_$rate}) and the duration of the $\pi$ pulse (\texttt{p180}). In the main section, it is possible to control the number of XY-8 blocks via an internal list (\texttt{lN}), the sweeps via an internal list (\texttt{lsweep}) and the tangential cut-off values via an internal list (\texttt{ltco}). Executing the program generates, for a given $N$, a vdlist for implementation in TopSpin. Likewise, the build-up curve is provided.
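The tangential sweep schedule implemented by the script's \texttt{p\_list\_time} procedure can be paraphrased compactly as follows (our Python rendering, for orientation only; the authoritative version is the Tcl code below): for block $i = 1,\dots,N$ the $\pi$ pulse is displaced by $\Delta t_i = (t_{\mathrm{sweep}}/2)\tan(x_i)/\tan(t_{\mathrm{co}})$, with $x_i$ swept linearly over $[-t_{\mathrm{co}}, +t_{\mathrm{co}}]$.

```python
# Our paraphrase (not part of the thesis script) of the tangential
# pi-pulse displacement schedule computed in p_list_time.
import math

def sweep_offsets(n_blocks, tsweep, tco_deg):
    """Tangential displacement (same units as tsweep) per XY-8 block."""
    if n_blocks == 1:
        return [0.0]  # normal RFDR: centred pi pulses
    tco = math.radians(tco_deg)
    out = []
    for i in range(1, n_blocks + 1):
        x = tco * (-1.0 + 2.0 * (i - 1) / (n_blocks - 1))
        out.append(tsweep / 2.0 * math.tan(x) / math.tan(tco))
    return out

dts = sweep_offsets(5, 10.0, 45.0)
assert abs(dts[0] + 5.0) < 1e-12 and abs(dts[-1] - 5.0) < 1e-12  # endpoints +-tsweep/2
assert abs(dts[2]) < 1e-12                                       # symmetric about centre
assert all(b > a for a, b in zip(dts, dts[1:]))                  # monotone sweep
```

Small cut-off angles \texttt{tco} give a nearly linear sweep, while angles approaching $90^\circ$ concentrate the displacement changes at the start and end of the mixing time.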
\begin{lstlisting}[basicstyle=\scriptsize, breaklines=true, upquote=true, aboveskip={1.5\baselineskip}, columns=fixed, showstringspaces=false, extendedchars=false, frame=single, showtabs=false, showspaces=false, identifierstyle=\ttfamily, backgroundcolor={\color[rgb]{0.9,0.9,0.9}}, keywordstyle={\color[rgb]{0,0,1}}, commentstyle={\color[rgb]{0.026,0.112,0.095}}, stringstyle={\color[rgb]{0.627,0.126,0.941}}, numberstyle={\color[rgb]{0.205, 0.142, 0.73}}, language=tcl]
spinsys {
  ## SET UP SYSTEM ACCORDING TO TABLE 2 IN BAK et al, JMR 154, 28 (2002)
  channels 13C
  nuclei 13C 13C
  shift 1 60p -76p 0.9 0 0 94
  shift 2 -60p -20p 0.43 90 90 0
  dipole 1 2 -2142 0 90 120.8
}

par {
  method           cheby2
  proton_frequency 400e6
  start_operator   I1z
  detect_operator  I2z
  spin_rate        10000.0
  crystal_file     rep66
  gamma_angles     9
  np               -1             # np gets changed according to N
  sw               spin_rate/8.0
  ## TIME
  variable tr      1.0e6/spin_rate
  variable p180    5.0
  ## RF
  variable rf_p180 0.5e6/p180
  ## SWEEP
  variable tsweep  0              # CHANGE tsweep via lsweep in main
  variable tco     1              # CHANGE tco via ltco in main
  ## EXTRA
  string form                     # form gives number of digits in delay.
  string save_delays no           # save or no
}

proc pulseq {} {
  global par vd_times
  ## THIS PULSE SEQUENCE EMPLOYS XY-8 PHASE CYCLING OF THE PI PULSES
  reset
  store 0
  acq
  foreach lt $vd_times {
    set t1 [lindex $lt 0]; set t2 [lindex $lt 1]
    # only delays before pi pulses are used to make sure that prop is synchronized
    reset
    delay $t1
    store 1
    reset [expr {$t1+$par(p180)}]
    delay [expr {$par(tr)-$t1-$par(p180)}]
    store 2
    reset
    delay $t2
    store 3
    reset [expr {$t2+$par(p180)}]
    delay [expr {$par(tr)-$t2-$par(p180)}]
    store 4
    reset
    prop 0
    prop 1
    pulse $par(p180) $par(rf_p180) x
    prop 2
    prop 3
    pulse $par(p180) $par(rf_p180) y
    prop 4
    prop 1
    pulse $par(p180) $par(rf_p180) x
    prop 2
    prop 3
    pulse $par(p180) $par(rf_p180) y
    prop 4
    prop 1
    pulse $par(p180) $par(rf_p180) y
    prop 2
    prop 3
    pulse $par(p180) $par(rf_p180) x
    prop 4
    prop 1
    pulse $par(p180) $par(rf_p180) y
    prop 2
    prop 3
    pulse $par(p180) $par(rf_p180) x
    prop 4
    store 0
    acq
  }
}

proc main {} {
  global par vd_times
  ## SET lN
  set lN [list 1 2 3 4 5 6 7 8 9 10]; puts "lN: $lN"
  # N is the number of XY-8 blocks
  ## SET lsweep
  #set lsweep [list 0.0]
  set lsweep {}
  set sdelta 0.1
  for {set s 0} {$s <= 200} {incr s 1} {
    lappend lsweep [format $par(form) [expr {0.0+$s*$sdelta}]]
  }; puts "lsweep: $lsweep"
  ## SET ltco IN DEGREES
  #set ltco [list 1]
  set ltco {}
  for {set t 1} {$t <= 89} {incr t 1} {
    lappend ltco $t
  }; puts "ltco: $ltco"
  set name COCa
  set name $name\_mas$par(spin_rate)\_$par(crystal_file)\g$par(gamma_angles)\_pf[format "
  set log_time [open $name.log.time w]
  puts $log_time "\# N(all) Nmax max tsweep tco duration"
  foreach N $lN {
    set par(np) [expr {$N+1}];  # np includes a zero-point
    set duration [expr {$N*8*$par(tr)}]
    set par(save_delays) "no"
    set transf_max -1e6
    set Npoint_max -1
    set sweep_max -1
    set tco_max -1
    ## GRID SEARCH IN tsweep and tco FOR GIVEN N
    foreach par(tsweep) $lsweep {
      foreach par(tco) $ltco {
        set vd_times [p_list_time $par(tr) $par(tsweep) $par(tco) $par(p180) $N $par(form) $par(save_delays)]
        # vd_times GENERATES LISTS OF DELAYS
        set f [fsimpson]
        set lmax [p_find_max $f]
        set npoint [lindex $lmax 0]
        # npoint includes initial zero-time point
        set transf [lindex $lmax 1]
        if {$transf > $transf_max} {
          set Npoint_max $npoint
          set transf_max $transf
          set sweep_max $par(tsweep)
          set tco_max $par(tco)
        }
        puts "N: $N (duration: $duration), tsweep: $par(tsweep), tco: $par(tco); transfer: $transf vs max: $transf_max "
        funload $f
      }
    }
    ## RECALCULATE BEST SHAPE
    set par(tsweep) $sweep_max
    set par(tco) $tco_max
    set par(save_delays) "save"
    set vd_times [p_list_time $par(tr) $par(tsweep) $par(tco) $par(p180) $N $par(form) $par(save_delays)]
    set f [fsimpson]
    set lmax [p_find_max $f]
    set Npoint_max [lindex $lmax 0]
    set transf_max [lindex $lmax 1]
    puts $log_time "$N [expr {$Npoint_max-1}] $transf_max $sweep_max $tco_max $duration"
    fsave $f $name\_N$N\_max[format $par(form) $transf_max].fid
    fsave $f $name\_N$N\_max[format $par(form) $transf_max].xy -xreim
    funload $f
  }
  close $log_time
}

proc p_list_time {tr tsweep tco p180 N form save} {
  global par
  if {$N < 1} {puts "$N should be >=1"; exit}
  set lsweep {};  # not same sweep as in main
  set lsweepBruker {}
  if {$N == 1} {
    # NORMAL RFDR
    set tFor [expr {($tr-$p180)/2.0}];  # TIME BEFORE FINITE PI-PULSE
    set tRev $tFor;                     # TIME BEFORE FINITE PI-PULSE IN SECOND PERIOD
    lappend lsweep [list $tFor $tRev]
  }
  if {$N > 1} {
    set pi [expr {acos(-1.0)}]
    set tcoRad [expr {$pi/180.0*$tco}]
    for {set i 1} {$i <= $N} {incr i} {
      set xRad [expr {$tcoRad*(-1.0+2.0*($i-1)/($N-1))}]
      set Dt [expr {$tsweep/2.0*tan($xRad)/tan($tcoRad)}]
      set tFor [format $form [expr {$tr/2.0+$Dt-$p180/2.0}]];  # DELAY BEFORE PI IN FIRST TR
      set tFor2nd [format $form [expr {$tr-$tFor-$p180}]];     # DELAY AFTER PI IN FIRST TR
      set tRev [format $form [expr {$tr/2.0-$Dt-$p180/2.0}]];  # DELAY BEFORE PI IN SECOND TR
      set tRev2nd [format $form [expr {$tr-$tRev-$p180}]];     # DELAY AFTER PI IN SECOND TR
      lappend lsweep [list $tFor $tRev]
      lappend lsweepBruker [list $tFor $tFor2nd $tRev $tRev2nd $tFor $tFor2nd $tRev $tRev2nd
$tFor $tFor2nd $tRev $tRev2nd $tFor $tFor2nd $tRev $tRev2nd]
    }
  }
  if {[string equal $save "save"]} {
    set bruker_input COCa_tsw$tsweep\_tco$tco\_p180_$p180\_N$N.input
    set brukerList [open $bruker_input w]
    foreach lt $lsweepBruker {
      set t1 [lindex $lt 0]; set t2 [lindex $lt 1]
      set t3 [lindex $lt 2]; set t4 [lindex $lt 3]
      puts $brukerList "$t1\u\n$t2\u\n$t3\u\n$t4\u"
      puts $brukerList "$t1\u\n$t2\u\n$t3\u\n$t4\u"
      puts $brukerList "$t1\u\n$t2\u\n$t3\u\n$t4\u"
      puts $brukerList "$t1\u\n$t2\u\n$t3\u\n$t4\u"
    }
    close $brukerList
  }
  return $lsweep
}

proc p_find_max {f} {
  # examines all np for max
  set np [fget $f -np]
  set sw [fget $f -sw]
  set np_max 1; set value_max [findex $f 1 -re]
  for {set i 2} {$i <= $np} {incr i} {
    set value_temp [findex $f $i -re]
    if {$value_temp > $value_max} {
      set value_max $value_temp
      set np_max $i
    }
  }
  return [list $np_max $value_max]
}
\end{lstlisting}
\end{document}
\begin{document} \allowdisplaybreaks \renewcommand{\thefootnote}{} \renewcommand{061}{061} \FirstPageHeading \ShortArticleName{Big and Nef Tautological Vector Bundles over the Hilbert Scheme of Points} \ArticleName{Big and Nef Tautological Vector Bundles\\ over the Hilbert Scheme of Points\footnote{This paper is a~contribution to the Special Issue on Enumerative and Gauge-Theoretic Invariants in honor of Lothar G\"ottsche on the occasion of his 60th birthday. The~full collection is available at \href{https://www.emis.de/journals/SIGMA/Gottsche.html}{https://www.emis.de/journals/SIGMA/Gottsche.html}}} \Author{Dragos OPREA} \AuthorNameForHeading{D.~Oprea} \Address{Department of Mathematics, University of California San Diego,\\ 9500 Gilman Drive, La Jolla, CA, USA} \Email{\href{mailto:doprea@math.ucsd.edu}{doprea@math.ucsd.edu}} \URLaddress{\url{http://math.ucsd.edu/~doprea/}} \ArticleDates{Received January 31, 2022, in final form July 31, 2022; Published online August 12, 2022} \Abstract{We study tautological vector bundles over the Hilbert scheme of points on surfaces. For each $K$-trivial surface, we write down a simple criterion ensuring that the tautological bundles are big and nef, and illustrate it by examples. In the $K3$ case, we extend recent constructions and results of Bini, Boissi\`ere and Flamini from the Hilbert scheme of~2 and~3 points to an arbitrary number of points. Among the $K$-trivial surfaces, the case of Enriques surfaces is the most involved. Our techniques apply to other smooth projective surfaces, including blowups of $K3$s and minimal surfaces of general type, as well as to the punctual Quot schemes of curves.} \Keywords{Hilbert scheme; Quot scheme; tautological bundles} \Classification{14C05; 14D20; 14C17} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} Let $X$ be a smooth projective surface, and let $X^{[k]}$ denote the Hilbert scheme of $k$ points on $X$.
Each vector bundle $F\to X$ of rank $r$ yields a tautological vector bundle $F^{[k]}\to X^{[k]}$ of rank $rk$ given by \begin{gather*} F^{[k]}=p_{\star} (q^{\star} F\otimes \mathcal O_{\mathcal Z}). \end{gather*} Here, $p$, $q$ are the natural projections from $X^{[k]}\times X$, and $\mathcal Z\subset X^{[k]}\times X$ denotes the universal subscheme. The literature surrounding the geometry of the tautological bundles is vast. Likewise, many notions of positivity for vector bundles have been studied in algebraic and complex differential geometry. Merging these two themes, it is natural to investigate the positivity properties of the tautological bundles. In this note, we address the question of whether the bundles $F^{[k]}\to X^{[k]}$ are big and nef. To our knowledge, for $K3$s, this has been considered for the first time in the recent article \cite{BBF}, alongside the stability and bigness of twists of the tangent bundle of $X^{[k]}$. Specifically, if $X$ is a $K3$ surface of Picard rank $1$, and the number of points is $k=2, 3$, it is shown in \cite{BBF} that $F^{[k]}$ is big and nef when $F$ is either \begin{itemize}\itemsep=-1pt \item [$(a)$] a positive line bundle, \item [$(b)$] a twist of a Lazarsfeld--Mukai bundle (for suitable numerics), \item [$(c)$] a twist of an Ulrich bundle. \end{itemize} Recall that a vector bundle $V$ over a scheme $Y$ is said to be big and nef if the line bundle $\mathcal O_{\mathbb P(V)}(1)\to \mathbb P(V)$ is big and nef, where $\mathbb P(V)$ denotes the projective bundle of one-dimensional quotients. A discussion of big and nef vector bundles can be found in \cite[Chapters~6 and~7]{L-II}. A useful well-known characterization occurs when $V\to Y$ is globally generated. In this case, if the top Segre class satisfies \begin{equation} \label{pos}(-1)^{\dim Y} \int_{Y} s(V)>0, \end{equation} it follows that $V\to Y$ is big and nef.
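The Segre classes entering the criterion~(\ref{pos}) are the graded terms of the inverse total Chern class, $s(V)=c(V)^{-1}$. The following symbolic check (our illustration, not from the paper) verifies the first few terms for a rank-$2$ bundle:

```python
# Illustrative check (not from the paper): the Segre classes are the
# graded pieces of 1/c(V).  For rank 2: s_1 = -c_1, s_2 = c_1^2 - c_2, ...
import sympy as sp

c1, c2, t = sp.symbols('c1 c2 t')  # t is a formal grading variable

c_total = 1 + c1 * t + c2 * t**2   # total Chern class of a rank-2 bundle
s_total = sp.series(1 / c_total, t, 0, 4).removeO()

s = [sp.expand(s_total.coeff(t, i)) for i in range(4)]
assert s[0] == 1
assert s[1] == -c1
assert s[2] == sp.expand(c1**2 - c2)
assert s[3] == sp.expand(2*c1*c2 - c1**3)
```

On $Y=X^{[k]}$ the dimension $2k$ is even, so the sign $(-1)^{\dim Y}$ in~(\ref{pos}) is $+1$, and the criterion amounts to positivity of the integral of the degree-$2k$ Segre term.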
\subsection{Results} The original motivation for our work was provided by the recent results of \cite{BBF}, which we extend in several directions. \begin{itemize}\itemsep=0pt \item [$(i)$] For $K3$s, we allow for an arbitrary number of points $k$, and derive a condition that ensures $F^{[k]}$ is big and nef, see Theorem~\ref{tt}. We apply this theorem to obtain analogues of examples $(a)$--$(c)$ above for any $k$. \item [$(ii)$] We allow for arbitrary $K$-trivial surfaces. The case of Enriques surfaces is the most difficult, and we only have results in odd rank, see Theorem~\ref{tt2}, as well as a conjectural bound in general. \item [$(iii)$] We consider other smooth projective surfaces, including blowups of $K3$s in Theorem~\ref{blok}, and minimal surfaces of general type, in rank~$1$, in Theorem~\ref{gtyp}. The latter theorem is the most involved result we prove here, requiring a more detailed analysis than for other geometries. \item [$(iv)$] We show how the same ideas yield similar results over the punctual Quot schemes of curves, see Theorem~\ref{quotpun}. \end{itemize} Compared to \cite{BBF}, the new ingredient is the closed-form calculation of the Segre integrals in \cite{MOP2, MOP1, MOP, OP2021}. The formulas are explicit, and the goal here is to show how to apply them to derive geometric positivity results. This is not always immediate, and the arguments require several different ideas. We thus believe it is worthwhile to record the outcome. We also illustrate our calculations by a few geometric examples. \subsection{Applications} By \cite[Proposition 1.4]{Y}, taking determinants of big and nef vector bundles yields big and nef divisors. There are several results in the literature concerning the positivity of the determinants $\det F^{[k]}$, see for instance \cite{BS, CG} with regard to very ampleness when $F$ has rank $1$, for arbitrary surfaces. In general, nef divisors over the Hilbert scheme of $K3$s were studied in \cite[Section~10]{BM}.
Over other surfaces, related results can be found in \cite{ABCH, BC, BHL+, Ko, LQZ, MM, N, QT, YY}, among others. The nef cones of divisors of the punctual Quot schemes of curves of genus $0$ and $1$ were determined in \cite{GS, St}. Whenever $F^{[k]}\to X^{[k]}$ is a big and nef bundle, for every $m_1, \dots, m_h\geq 0$, Demailly vanishing gives\footnote{For an ample vector bundle $V\to Y$, Demailly's vanishing theorem \cite{De} states \begin{gather*} H^i\big(Y, \omega_Y\otimes \textnormal{Sym}^{m_1}V\otimes \dots \otimes \textnormal{Sym}^{m_h}V\otimes (\det V)^{\otimes h}\big)=0, \qquad m_1, \dots, m_h\geq 0, \qquad h>0, \qquad i> 0. \end{gather*} This is derived in \cite[Theorem 7.3.14]{L-II} from Griffiths vanishing. The same argument applies to big and nef vector bundles; \cite[Example 7.3.3]{L-II} notes that Griffiths vanishing holds in this context.} \begin{gather*} H^i\bigl(X^{[k]}, \omega_{X^{[k]}} \otimes \textnormal{Sym}^{m_1}F^{[k]}\otimes \dots \otimes \textnormal{Sym}^{m_h} F^{[k]}\otimes \bigl(\det F^{[k]}\bigr)^{\otimes h}\bigr)=0, \qquad i>0. \end{gather*} For instance, using Theorem \ref{tt} or Corollary \ref{c4}, if $L\to X$ is an ample line bundle on a $K3$ surface $X$ of Picard rank $1$, with $\chi(L)\geq 3k$, we have \begin{gather*} H^i\bigl(X^{[k]}, \textnormal{Sym}^{m_1}L^{[k]}\otimes \dots \otimes \textnormal{Sym}^{m_h} L^{[k]}\otimes \bigl(\det L^{[k]}\bigr)^{\otimes h}\bigr)=0, \qquad i>0. \end{gather*} Analogous statements hold in all geometric situations covered by items $(i)$--$(iv)$ above. Cohomology with values in the tautological bundles and their representations was studied for instance in \cite{A, Da, EGL, Sc, Z}, but the vanishing results above are new.
To further understand the cohomology, the next step would be to compute the holomorphic Euler characteristics, that is, to find the series \begin{gather*} \mathsf Z_{X, F}^{m_1, \dots, m_h}=\sum_{k=0}^{\infty} q^k \chi \bigl(X^{[k]}, \omega_{X^{[k]}} \otimes \textnormal{Sym}^{m_1}F^{[k]}\otimes \dots \otimes \textnormal{Sym}^{m_h} F^{[k]}\otimes \bigl(\det F^{[k]}\bigr)^{\otimes h}\bigr). \end{gather*} This is a difficult but interesting question. We expect that the answer is given by {\it algebraic} functions. The simplest case $m_1=\dots=m_h=0$ corresponds to the Verlinde series determined in \cite[Theorem~5.3]{EGL} for $K$-trivial surfaces, and conjectured for small $h$ for all surfaces in \cite[Section 1.6]{MOP}. (After the writing was completed, we learned about the recent announcement~\cite{G} regarding expressions for the Verlinde series for all surfaces and all values of~$h$.) Similar vanishing statements can be made over the punctual Quot schemes of curves. \subsection{Plan of the paper} The case of $K3$ surfaces is the simplest and is discussed first, see Section~\ref{seck3}. To illustrate the results of Section~\ref{seck3}, in Section~\ref{secex} we extend the constructions in~\cite{BBF} to an arbitrary number of points. Other $K$-trivial surfaces, and in particular Enriques surfaces, are considered in Section~\ref{secktriv}. Other geometries, specifically $K3$ blowups and minimal surfaces of general type, are studied in Section~\ref{secother}. Sections~\ref{secktriv} and~\ref{secother} are the most involved. Finally, Section \ref{seccur} concerns the punctual Quot scheme of curves. \section[K3 surfaces]{$\boldsymbol{K3}$ surfaces} \label{seck3} Let $X$ be a smooth projective surface. The bundle $F^{[k]}\to X^{[k]}$ is globally generated, and therefore nef, provided $F\to X$ is $(k-1)$-very ample.
By definition, $(k-1)$-very ampleness is the requirement that the natural map \begin{gather*} H^0(X, F)\to H^0(X, F\otimes \mathcal O_{\zeta}) \end{gather*} is surjective for all zero-dimensional subschemes $\zeta$ of $X$ of length $k$. Thus, via \eqref{pos}, if $F$ is $(k-1)$-very ample and \begin{gather*} \int_{X^{[k]}} s\big(F^{[k]}\big)>0, \end{gather*} then $F^{[k]}\to X^{[k]}$ is big and nef. This is explained for instance in \cite[Propositions~2.4 and~4.5]{BBF}. Let $(X, H)$ be a polarized $K3$ surface, and let $r= \operatorname{rank}F$. Central to our argument is the following structural expression for the Segre integrals established in \cite{MOP}: \begin{equation}\label{eq1} \sum_{k=0}^{\infty} z^k \int_{X^{[k]}} s \big(F^{[k]}\big) = A_0(z)^{c_2(F)} A_1(z)^{c_1^2(F)} A_2(z). \end{equation} {\samepage The series $A_0$, $A_1$, and $A_2$ are given by explicit algebraic functions \begin{gather} A_0(z)= (1+(1+r)t)^{-r-1} (1 + (2+r) t)^{r}, \nonumber \\ A_1 (z)= (1+(1+r)t)^{\frac{r}{2}} (1 + (2+r) t)^{-\frac{r-1}{2}},\nonumber \\ A_2 (z)= (1+(1+r)t)^{r^2+2r} (1 + (2+r)t)^{-r^2+1} (1 + (1+r)(2+r)t)^{-1},\label{aaa} \end{gather} for the change of variables \begin{equation*} z = t (1+(1+r)t)^{1+r}. \end{equation*}} We point out that in \cite{MOP}, $r$ stands for $\operatorname{rank}(F) + 1$, while for us $r=\operatorname{rank}(F)$; the above expressions account for the different notational conventions. Related formulas over the moduli space of higher rank sheaves were proposed in \cite{GK} and were recently proven in \cite{Ob}. For each rank $r$ vector bundle $F\to X$, we write $v=v(F)=\operatorname{ch} (F) \sqrt{\operatorname{Td}(X)}$ for its Mukai vector, and we set \begin{gather*} \chi=\chi(F),\qquad \delta=1+\frac{1}{2}\langle v, v\rangle. \end{gather*} Here $\langle\,,\, \rangle$ is the Mukai pairing given by \begin{gather*} \langle v, v\rangle =\int_{X} v_2^2-2v_0v_4 \qquad \text{for vectors}\quad v=(v_0, v_2, v_4)\in H^{0}(X)\oplus H^2(X)\oplus H^4(X).
\end{gather*} Moreover, $\delta$ equals the expected dimension of the moduli space of sheaves of type $v$. We show \begin{Theorem}\label{tt} Assume $\chi \geq (r+2) k$ and $\delta \geq 0$. If $F$ is $(k-1)$-very ample, then $F^{[k]}$ is a big and nef vector bundle over $X^{[k]}$. \end{Theorem} \begin{proof} By the first paragraph of this section, it suffices to show that the integral of the Segre class $s_{2k}\big(F^{[k]}\big)$ is positive. To this end, we use an equivalent form of equation \eqref{eq1}, which can be found in \cite[p.~11]{MOP}. Specifically, via a residue calculation, it was established there that \begin{equation}\label{e1} \int_{X^{[k]}} s_{2k}\big(F^{[k]}\big)=\operatorname{Coeff}_{ t^k} \bigl[(1+ (2+r)t)^{\delta} (1+(1+r)t)^{\chi -\delta - (r+1)k} \bigr]. \end{equation} Expanding via the binomial theorem, equation~\eqref{e1} shows that the Segre integral is positive when the expression between brackets is a polynomial of degree at least $k$, which is the case for \begin{gather*} \delta\geq 0, \qquad \chi- \delta - (r+1)k\geq 0,\qquad \chi-(r+1)k\geq k. \end{gather*} However, since $\delta$ could be large, we seek better bounds. To this end, we change variables, setting \begin{gather*} t=\frac{u}{1-(1+r)u}. \end{gather*} We rewrite \eqref{e1} as \begin{align*} \int_{X^{[k]}} s_{2k}\big(F^{[k]}\big)&=\operatorname{Res}_{t=0} (1+ (2+r)t)^{\delta} (1+(1+r)t)^{\chi -\delta - (r+1)k} \frac{\mathrm dt}{t^{k+1}} \\ &= \operatorname{Res}_{u=0} (1+u)^{\delta} (1-(1+r)u)^{-\chi + (r+2) k-1} \frac{\mathrm du}{u^{k+1}} \\ &= \operatorname{Coeff}_{u^k} (1+u)^{\delta} (1-(1+r)u)^{-\chi + (r+2) k-1}. \end{align*} Letting $a_i$ denote the coefficients of the term $(1+u)^{\delta}$, we have $a_i>0$ for $0\leq i\leq \delta$. Similarly, the coefficients of the second term are \begin{gather*} b_j=(1+r)^j (-1)^j \binom{-\chi +(r+2)k-1}{j}>0, \end{gather*} since $-\chi+(r+2)k-1<0$. 
Here, we use the standard definition of binomial numbers \begin{gather*} \binom{x}{j}=\frac{x(x-1)\cdots (x-j+1)}{j!} \end{gather*} for arbitrary $x$. Thus, the Segre integral equals $\sum a_i b_j$, the sum ranging over $i+j=k$, $0\leq i\leq \delta$, $j\geq 0$. The integral is positive since each term $a_ib_j>0$, and the sum is non-empty (it contains the term $a_0b_k$). The proof is complete. \end{proof} \begin{Remark} The theorem is certainly not optimal in all cases, but it suffices for our purposes. To illustrate it, when $\delta=0$, equation \eqref{e1} yields the following result originally noted in \cite[Proposition 2.1]{MOP}: \begin{equation}\label{f1} \int_{X^{[k]}} s_{2k}\big(F^{[k]}\big)=(r+1)^k \binom{\chi-(r+1)k}{k}. \end{equation} In this case, the positivity of the Segre integral is guaranteed when $\chi\geq (r+2)k$, but also when $\chi<(r+1)k$ and $k$ even. The vanishing of the Segre integrals~\eqref{f1} for exceptional bundles with $(r+1)k\leq \chi<(r+2)k$ played an important role in the proof of~(2.2) in~\cite{MOP}. The point of Theorem~\ref{tt} is that we can furthermore pin down the sign of the Segre integral for $\chi$ to the right of the above interval. \end{Remark} \begin{Remark}\label{remk} In rank $1$, $(k-1)$-very ampleness of nef line bundles over $K3$ surfaces can be effectively studied using \cite[Theorem~2.1]{BS}. Specifically, if $L$ is nef and $L^2>4k$, then either $L$ is $(k-1)$-very ample or else there exists an effective divisor $D$ such that $L-2D$ is $\mathbb Q$-effective, with \begin{equation}\label{kva} L.D-k\leq D^2<L.D/2<k. \end{equation} Furthermore, $D$ contains a subscheme $\zeta$ of length less than or equal to $k$ such that \begin{gather*} H^0(L)\to H^0(L|_{\zeta})\quad\text{is not surjective.} \end{gather*} Over arbitrary smooth projective surfaces, a similar result ensures the $(k-1)$-very ampleness of the adjoint bundles $K_X+L$. To our knowledge, an analogous criterion in higher rank is missing.
We point out two constructions yielding $(k-1)$-very ample bundles over $K3$ surfaces: \begin{itemize}\itemsep=0pt \item [$(i)$] If $(X, H)$ satisfies $\text{Pic} (X)=\mathbb Z\langle H\rangle$ and $F$ is a $\mu_H$-stable vector bundle with $\det F=H$ and $\chi\geq (r+1)k+\delta$ then $F$ is $(k-1)$-very ample. This assertion follows by the proof of \cite[Proposition 2.2]{MOP}. \item [$(ii)$] By \cite[Proposition 4.5]{BBF}, over any surface, twisting a globally generated vector bundle by a~$(k-1)$-very ample line bundle yields a $(k-1)$-very ample bundle (with large determinant). \end{itemize} For abelian surfaces, other constructions are possible via isogenies or extensions, see Section~\ref{ababab}. \end{Remark} \subsection[Big and nef tautological bundles over the Hilbert scheme of K3s] {Big and nef tautological bundles over the Hilbert scheme of $\boldsymbol{K3}$s} \label{secex} Theorem \ref{tt} applies to the three examples considered in Theorems~5.3, 5.5 and~5.7 and Corollaries~5.4, 5.6 and~5.8 of \cite{BBF}, and mentioned in items $(a)$--$(c)$ of the Introduction: line bundles and twists of Lazarsfeld--Mukai or Ulrich bundles. The goal here is to show how to extend the results in \cite{BBF} from $k\leq 3$ to any number of points. Throughout this section, we assume $(X, H)$ is a $K3$ surface of Picard rank $1$, and $\text{Pic}(X)=\mathbb Z\langle H\rangle$. Let $H^2=2g-2$. \begin{Corollary} \label{c4}Let $L_n=H^{\otimes n}$ for $n\geq 1$. Assume $g\geq 3k-1$. Then $(L_n)^{[k]}$ is big and nef over~$X^{[k]}$. \end{Corollary} \begin{proof} Global generation, and thus nefness, is explained in \cite[Theorem 5.3]{BBF}. To prove bigness, as also noted in \cite{BBF}, it suffices to establish the positivity of the Segre integral \begin{gather*} \int_{X^{[k]}} s_{2k} \big((L_n)^{[k]}\big)>0. 
\end{gather*} By formula \eqref{f1}, we have \begin{gather*} \int_{X^{[k]}} s_{2k} \big((L_n)^{[k]}\big)=2^k \binom{\chi(L_n)-2k}{k}, \end{gather*} which is positive provided \begin{gather*} \chi(L_n)\geq 3k\iff 2+n^2(g-1)\geq 3k. \end{gather*} The latter inequality is clear under our hypothesis. \end{proof} We next consider Lazarsfeld--Mukai bundles and their induced tautological bundles over $X^{[k]}$. This geometric situation corresponds to Theorem 5.5 and Corollary 5.6 in \cite{BBF}. There are several ways of formulating the result, but in keeping with \cite{BBF}, we prefer bounds which do not depend on the rank. Recall that the Lazarsfeld--Mukai bundles are obtained as duals $E=K_{C, L}^{\vee}$ to kernels \begin{gather*} 0\to K_{C, L}\to H^0(X, L)\otimes \mathcal O_X\to \iota_{\star} L\to 0. \end{gather*} Here $L\to C$ is a line bundle over a nonsingular curve $C\in |H|$, of degree $d$ and with $r\geq 2$ sections, such that $L$ and $\omega_C\otimes L^{\vee}$ are globally generated. It follows that $E$ is globally generated, with \begin{gather*} \operatorname{rk} E=r\geq 2, \qquad c_1(E)=H, \qquad c_2(E)=d\implies v(E)=(r, H, g-1-d+r). \end{gather*} We let $\rho=g-r(r-1+g-d)$ denote the Brill--Noether number. \begin{Example} Let $E$ be a globally generated Lazarsfeld--Mukai bundle as above. Assume that \begin{gather*} \rho\geq 0, \qquad g>2k-2>0, \qquad g> \frac{2}{5}(d+1). \end{gather*} Then $(E\otimes H)^{[k]}$ is big and nef. \end{Example} \begin{proof} Let $F=E\otimes H$. The assumption $g>2k-2$ was used in \cite[Theorem~5.5]{BBF} to prove that $F=E\otimes H$ is $(k-1)$-very ample; this is based on the result cited in Remark~\ref{remk}$(ii)$. As~noted in \cite{BBF}, bigness follows once we verify that $\int_{X^{[k]}} s_{2k}\big(F^{[k]}\big)>0$. To this end, we check that the assumptions of Theorem~\ref{tt} hold true. A simple calculation yields \begin{gather*} \chi(F)=g(r+3) -d +r-3,\qquad \delta(F)=1+\frac{1}{2} \langle v(F), v(F)\rangle=\rho. 
\end{gather*} The inequality $\chi(F)\geq (r+2)k$ is satisfied. Indeed, by hypothesis $g\geq 2k-1$ and $g\geq \frac{2d+3}{5}$, hence, averaging, we have $g\geq k+\frac{d-1}{5}$. Then \begin{align*} \chi(F)-(r+2)k&=g(r+3)-d+r-3-(r+2)k \\ &\geq \biggl(k+\frac{d-1}{5}\biggr)(r+3)-d+r-3-(r+2)k \\ &=\frac{d+4}{5}(r-2)+(k-2)\geq 0, \end{align*} since $r, k\geq 2$. \end{proof} \begin{Example} The (untwisted) Lazarsfeld--Mukai bundles also yield big and nef vector bundles over $X^{[k]}$, under more restrictive assumptions. Take $r=2$ for simplicity. Assume \begin{gather*} 2d-2\geq g> 2k-3+ \frac{3}{2}d,\qquad \text{which implies} \quad \chi(E)\geq 4k+\rho, \quad \rho\geq 0. \end{gather*} This is a bit stronger than what is needed, but it ensures $\chi\geq 4k$ and $\chi\geq 3k+\rho$ simultaneously. It is well known that $E$ is $\mu_H$-stable when $\text{Pic}(X)=\mathbb Z\langle H\rangle$. (Reason: any destabilizing quotient has rank $1$, slope $\leq 0$, and is globally generated since $E$ is. Hence the quotient is trivial, and thus $c_2(E)=d=0$, a contradiction.) Since $c_1(E)=H$, it follows that $E$ is $(k-1)$-very ample by Remark \ref{remk}$(i)$. By Theorem \ref{tt}, we have that $E^{[k]}$ is big and nef. \end{Example} Finally, we turn to Ulrich bundles considered in Theorem 5.7 and Corollary 5.8 in \cite{BBF}. We~write $H^2=2h$, so that $g=h+1$. Recall that a bundle $E$ over $(X, H)$ is said to be Ulrich if \begin{gather*} H^{\star}(X, E(-H))=H^\star(X, E(-2H))=0. \end{gather*} Such bundles always exist for $K3$ surfaces of Picard rank $1$ by \cite[Theorem 1.5]{AFO}, and they have numerics \begin{gather*} \operatorname{rk} E=2a, \qquad c_1(E)=3aH,\qquad c_2(E)=9a^2h-4a(h-1). \end{gather*} \begin{Example} Assume $h>2k-3>0$. Consider an Ulrich bundle $E$ as above. Then $(E\otimes H)^{[k]}$ is big and nef on the Hilbert scheme $X^{[k]}$. \end{Example} \begin{proof} Letting $F=E\otimes H$, we compute \begin{gather*} \chi(F)=12ah, \qquad \delta(F)=1+a^2h+4a^2.
\end{gather*} As noted in \cite[Theorem 5.6]{BBF}, the bundle $F=E\otimes H$ is $(k-1)$-very ample if $h>2k-3$; this uses the statement cited in Remark~\ref{remk}$(ii)$. To prove bigness, it remains to verify the positivity of the top Segre integral. By Theorem \ref{tt}, we check that \begin{gather*} \chi(F)\geq (r+2)k\iff 12ah\geq (2a+2)k. \end{gather*} This is clear since $h> 2k-3>0$. \end{proof} \begin{Example} We can extend the result to Ulrich bundles $E$ over the polarized $K3$ surface $(X, mH)$ for all $m\geq 1$. By \cite[Proposition 4.4]{CNY}, the Mukai vectors of such Ulrich bundles are of the form \begin{gather*} v=\biggl(r, \frac{3rm}{2}H, h\big(2m^2r\big)-r\biggr). \end{gather*} By the same arguments, $(E\otimes H)^{[k]}$ is big and nef for $h>2k-3>0$. \end{Example} \section[Other K-trivial surfaces] {Other $\boldsymbol K$-trivial surfaces} \label{secktriv} \subsection{Abelian and bielliptic surfaces}\label{ababab} The formulas in \cite{MOP} can be used to treat the case of abelian or bielliptic surfaces. For vector bundles $F\to X$, we set \begin{gather*} r=\operatorname{rank}F,\qquad \chi=\chi(F),\qquad v=\operatorname{ch}(F), \qquad \delta=\frac{1}{2}\langle v, v\rangle. \end{gather*} We show \begin{Theorem} For a $(k-1)$-very ample vector bundle $F\to X$, the tautological bundle $F^{[k]}\to X^{[k]}$ is big and nef provided that $\chi\geq (r+2)k$ and~$\delta\geq 0$. \end{Theorem} \begin{proof} We follow the same steps as for $K3$ surfaces, but a few numerical changes are necessary. First, it was noted on \cite[p.~19]{MOP} that the analogue of equation \eqref{e1} for abelian or bielliptic surfaces takes the form \begin{gather*} \int_{X^{[k]}} s_{2k} \big(F^{[k]}\big)= \operatorname{Coeff}_{t^k} \bigl[(1+ (2+r)t)^{\delta} (1+(1+r)t)^{\chi-\delta -(r+1)k-1} (1+(1+r)(2+r)t)\bigr].
\end{gather*} Next, the change of variables \begin{gather*} t=\frac{u}{1-(1+r)u} \end{gather*} turns the above expression into \begin{gather*} \int_{X^{[k]}} s_{2k} \big(F^{[k]}\big)=\operatorname{Coeff}_{u^k} \bigl[(1+u)^{\delta} (1-(1+r)u)^{-\chi+(r+2)k-1} \big(1+(1+r)^2u\big)\bigr]. \end{gather*} The proof is completed by the same argument as in Theorem \ref{tt}, this time letting $a_i$ be the~coefficients of $(1+u)^{\delta}\big(1+(1+r)^2u\big)$ and letting $b_j$ be the~coefficients of $(1-(1+r)u)^{-\chi+(r+2)k-1}$. \end{proof} \begin{Corollary}\label{c10} If $L$ is a $(k-1)$-very ample line bundle over an abelian or bielliptic surface and $L^2\geq 6k$, then $L^{[k]}$ is big and nef. \end{Corollary} In the following examples, we assume $X$ is an abelian surface with Picard rank $1$, with N\'eron--Severi ample generator $H$. \begin{Example} Assume $H^2\geq 6k$. For $H^2>4k$, the line bundle $L_n=H^{\otimes n}$ is $(k-1)$-very ample for all $n\geq 1$. This is an immediate consequence of \cite[Theorem 2.1]{BS}. Indeed, using that the Picard rank is $1$, the inequality \eqref{kva} is impossible. For $H^2\geq 6k$, we also have $\chi(L_n)\geq 3k$. It~follows from Corollary \ref{c10} that $(L_n)^{[k]}$ is big and nef. \end{Example} In higher rank, just as for $K3$s, the reader can consider twists of Ulrich and Lazarsfeld--Mukai bundles. Here, taking advantage of the abelian surface geometry, we discuss simple semihomogeneous bundles, and twists of unipotent and homogeneous bundles. \begin{Example} Assume that $H$ is a principal polarization. Let $a$, $b$ be coprime positive integers with \begin{gather*} b>a^2k. \end{gather*} By \cite[Remark 7.13]{M}, there exist simple semihomogeneous vector bundles $W\to X$ with {\samepage\begin{gather*} \operatorname{rk} W=a^2, \qquad \mu(W)=\frac{bH}{a}\in NS(X)\otimes \mathbb Q, \qquad \chi(W)=b^2. \end{gather*} We claim $W^{[k]}$ is big and nef.} We note first that $W$ is $(k-1)$-very ample.
Indeed, it is shown in \cite[Theorem 5.8] {M} that \begin{gather*} W=f_{\star} L \end{gather*} for some line bundle $L\to Y$, where $f\colon Y\to X$ is an isogeny of degree $a^2$. It was remarked in \cite[Section 2.4]{Op} that $W$ has no higher cohomology for $b>0$. Consequently, the same is true about $L$. Since \begin{gather*} h^0(L)=h^0(W)=\chi(W)=b^2, \end{gather*} we conclude that $L$ is effective and $L^2=2b^2$. We claim $L$ is $\big(a^2k-1\big)$-very ample for $b>a^2k$. This follows again by \cite[Theorem 2.1]{BS}. Indeed, the Picard rank is invariant under isogenies \cite[Proposition~3.2]{BL}, hence $Y$ has Picard rank $1$ since $X$ does. If $M$ is the ample N\'eron--Severi generator, write $L\equiv_{\text{num}} M^{\ell}$, for $\ell> 0$. We have \begin{gather*} 2b^2=L^2=\ell^2 M^2\geq 2\ell^2\implies 0< \ell \leq b. \end{gather*} For any effective divisor $D\neq 0$, we have \begin{gather*} L.D= \ell M.D\geq \ell M.M=\frac{L^2}{\ell}=\frac{2b^2}{\ell}\geq 2b\geq 2a^2k. \end{gather*} This violates \eqref{kva}. Since $L$ is nef and $L^2=2b^2>4a^2k$, it follows that $L$ is $\big(a^2k-1\big)$-very ample. To check $(k-1)$-very ampleness for $W$, note that if $\zeta$ is a subscheme of $X$ of length $k$, then \begin{gather*} H^0(W)\to H^0(W\otimes\mathcal O_{\zeta}) \text{ surjective }\iff H^0(L)\to H^0(L\otimes \mathcal O_{f^{\star} \zeta}) \text { surjective}. \end{gather*} The latter is true since $L$ is $\big(a^2k-1\big)$-very ample and $f^{\star}\zeta$ has length $a^2k$. Finally, the inequality \begin{gather*} \chi(W)\geq (r+2) k\iff b^2\geq \big(a^2+2\big)k \end{gather*} is certainly true when $b>a^2k$. \end{Example} \begin{Example} Assume $H^2>4k$. Let $E$ be a unipotent bundle of rank $r\geq 2$, that is, $E$~admits a filtration whose successive quotients are the trivial line bundle \cite[Definition~4.5]{M}. Let $F=E\otimes H$. If $H^2> 4k$ then $H$ is $(k-1)$-very ample by \cite[Theorem 2.1]{BS}, see \eqref{kva}. 
It follows that~$F$ is also $(k-1)$-very ample. Indeed, $F$ is obtained as an iterated extension of $H$. At each step, we inductively show that the extension is $(k-1)$-very ample without higher cohomology, by examining the relevant short exact sequences. The filtration of $F$ also gives \begin{gather*} \chi(F)=r\chi(H)> 2rk\geq (r+2)k. \end{gather*} Thus $(E\otimes H)^{[k]}$ is big and nef. Let $E$ be a homogeneous bundle of rank $r\geq 2$, that is, $E$ is invariant under translations on $X$. By \cite[Theorem 4.17]{M}, we can write \begin{gather*} E=\bigoplus_i U_i\otimes P_i, \end{gather*} with $U_i$ unipotent, and $P_i$ a line bundle of degree $0$. Repeating the argument above for each summand, we show first that $E\otimes H$ is $(k-1)$-very ample, and then $(E\otimes H)^{[k]}$ is big and nef. \end{Example} For bielliptic surfaces, we leave specific examples to the reader, mentioning only that $(k-1)$-very ampleness of line bundles is studied in \cite{MP}. \subsection{Enriques surfaces} By contrast, the case of Enriques surfaces requires more care, due to the shape of the algebraic functions giving the Segre integrals. As before, we write $v(F)=\operatorname{ch}(F) \sqrt{\operatorname{Td}(X)}$ for the Mukai vector, and set \begin{gather*} \delta=\frac{1}{2}+\frac{1}{2}\langle v, v\rangle \implies \delta=rc_2-\frac{r-1}{2}c_1^2-\frac{r^2-1}{2}. \end{gather*} Note that $\delta$ is an integer if and only if the rank $r$ is odd. In this case, we prove: \begin{Theorem}\label{tt2} Let $F$ be a $(k-1)$-very ample bundle of odd rank $r$ on an Enriques surface, with \begin{gather*} \chi\geq 2k (r+1), \qquad \delta\geq 0. \end{gather*} Then $F^{[k]}$ is big and nef. \end{Theorem} Computer experiments show that the bound $\chi\geq (r+2)k$ imposed for the other $K$-trivial surfaces is insufficient here.
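To make such computer experiments reproducible, the following Python sketch (our own illustration; the function names are ours) expands products $\prod_i (1+c_it)^{e_i}$ with rational exponents exactly over $\mathbb Q$, which suffices to evaluate the coefficient formula \eqref{e1} and its Enriques analogue \eqref{pro} below for any given $(r, \chi, \delta, k)$:

```python
from fractions import Fraction

def gen_binom(e, j):
    # generalized binomial coefficient binom(e, j) for rational e
    out = Fraction(1)
    for i in range(j):
        out = out * (e - i) / (i + 1)
    return out

def coeff(k, factors):
    # coefficient of t^k in prod_i (1 + c_i t)^{e_i}; the e_i may be Fractions
    series = [Fraction(0)] * (k + 1)
    series[0] = Fraction(1)
    for c, e in factors:
        term = [gen_binom(Fraction(e), j) * Fraction(c) ** j for j in range(k + 1)]
        series = [sum(series[i] * term[n - i] for i in range(n + 1))
                  for n in range(k + 1)]
    return series[k]

def k3_segre(r, chi, delta, k):
    # top Segre integral over the Hilbert scheme of a K3, via equation (e1)
    return coeff(k, [(2 + r, delta), (1 + r, chi - delta - (r + 1) * k)])

def enriques_segre(r, chi, delta, k):
    # Enriques analogue with half-integer exponents, via equation (pro)
    return coeff(k, [(1 + r, Fraction(2 * (chi - delta - (r + 1) * k) - 1, 2)),
                     (2 + r, delta),
                     ((1 + r) * (2 + r), Fraction(1, 2))])
```

For example, \texttt{enriques\_segre(1, 8, 0, 2)} evaluates to $34>0$, in line with Theorem~\ref{tt2}, while \texttt{k3\_segre} with $\delta=0$ reproduces formula \eqref{f1}; the same routine can probe even ranks, where exact rational arithmetic avoids any floating-point ambiguity in the sign.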
Nonetheless, we have the following \begin{Conjecture} For all ranks, odd or even, Theorem $\ref{tt2}$ holds under the weaker assumption \begin{gather*} \chi\geq \biggl(\frac{5r}{4}+2\biggr)k,\quad \delta\geq 0. \end{gather*} \end{Conjecture} \begin{proof}[Proof of Theorem \ref{tt2}] Since $F$ is $(k-1)$-very ample, $F^{[k]}$ is globally generated, hence nef. To establish bigness, it remains to show that the top Segre integral satisfies $\int_{X^{[k]}} s_{2k}\big(F^{[k]}\big)>0$. By \cite[Theorem 1]{MOP}, the Enriques analogue of \eqref{eq1} takes the form \begin{equation}\label{eqx} \sum_{k=0}^{\infty} z^k \int_{X^{[k]}} s \big(F^{[k]}\big) = A_0(z)^{c_2(F)} A_1(z)^{c_1(F)^2} A_2(z)^{\frac{1}{2}} \end{equation} for exactly the same universal functions $A_0$, $A_1$, $A_2$ which appear for $K3$ surfaces. Using \eqref{eqx} and following the reasoning in \cite[p.~11]{MOP}, with the modified numerics, we obtain an expression for the top Segre integral \begin{gather}\label{pro} \int_{X^{[k]}} s_{2k}\big(F^{[k]}\big)\nonumber \\ \qquad {}=\operatorname{Coeff}_{t^k} \big[(1+(1+r)t)^{\chi-\delta-k(r+1) - \frac{1}{2}} (1+(2+r)t)^{\delta} (1+(1+r)(2+r)t)^{\frac{1}{2}}\big]. \end{gather} This is the Enriques analogue of equation~\eqref{e1}, but the half-integer exponents complicate our analysis. To determine the sign of the Segre integral, we carry out the usual change of variables \begin{gather*} t=\frac{u}{1-(1+r)u}.
\end{gather*} {\samepage Then we rewrite \eqref{pro} as \begin{align*} \int_{X^{[k]}} s_{2k}\big(F^{[k]}\big) &= \operatorname{Res}_{t=0} (1\!+(1\!+r)t)^{\chi-\delta-k(r+1) - \frac{1}{2}} (1\!+(2\!+r)t)^{\delta} (1\!+(1\!+r)(2\!+r)t)^{\frac{1}{2}}\frac{\mathrm dt}{t^{k+1}} \\ &=\operatorname{Res}_{u=0} (1-(1+r)u)^{-\chi+k(r+2)-1}(1+u)^{\delta} \big(1+(1+r)^2u\big)^{\frac{1}{2}} \frac{\mathrm du}{u^{k+1}} \\ &=\operatorname{Coeff}_{u^k} (1-(1+r)u)^{-\alpha} (1+u)^{\delta}\big(1+(1+r)^2u\big)^{\frac{1}{2}}, \end{align*} where $\alpha=\chi-k(r+2)+1\geq kr+1$.} We note that when $r$ is odd, $\delta\geq 0$ is an integer. Thus, the middle term $(1+u)^{\delta}$ has nonnegative coefficients and constant term equal to $1$. We claim that \begin{gather*} (1-(1+r)u)^{-\alpha}\big(1+(1+r)^2u\big)^{\frac{1}{2}} \end{gather*} has positive coefficients up to order $k$. The same will therefore be true after multiplying by $(1+u)^{\delta}$, showing that the top Segre class is positive. A different idea is needed to establish the above claim. We change variables $u\mapsto u/(1+r)$ and consider instead the series \begin{gather*} W=(1-u)^{-\alpha}(1+(1+r)u)^{\frac{1}{2}}. \end{gather*} For each $0\leq m\leq k$, the coefficient of $u^m$ in $W$ equals \begin{equation}\label{eee} \sum_{i+j=m} \binom {-\alpha}{i} \binom{\frac{1}{2}}{j} (-1)^{i} (1+r)^j=\sum_{i+j=m} \binom{\alpha+i-1}{i} \binom{\frac{1}{2}}{j} (1+r)^j. \end{equation} The term corresponding to $i=m$, $j=0$ is clearly positive since $\alpha\geq 1$. We will ignore this term for the analysis. The next term $i=m-1$, $j=1$ is positive as well. The remaining terms however have alternating signs because of the fractional binomials. We show nonetheless that the sum of the consecutive $(i, j)$ and $(i-1, j+1)$ terms is positive, for $j$ odd: \begin{equation}\label{summ} \binom {\alpha+i-1}{i} \binom{\frac{1}{2}}{j} (1+r)^j+\binom {\alpha+i-2}{i-1} \binom{\frac{1}{2}}{j+1} (1+r)^{j+1}>0. 
\end{equation} This proves that the alternating sum \eqref{eee} is positive as well. To justify \eqref{summ}, we note that \begin{gather*} \binom {\alpha+i-1}{i}=\binom {\alpha+i-2}{i-1} \frac{\alpha+i-1}{i}>0, \qquad \binom{\frac{1}{2}}{j+1} = \binom{\frac{1}{2}}{j} \frac{\frac{1}{2}-j}{j+1}. \end{gather*} For $j$ odd, we have $\binom{\frac{1}{2}}{j} >0$. After cancellation, the inequality to establish becomes \begin{gather*} \frac{\alpha+i-1}{i}+(1+r) \frac{\frac{1}{2}-j}{j+1}>0. \end{gather*} Writing $i=m-j$, and using $\alpha-1=\chi-(r+2)k\geq rk\geq rm$, it suffices to establish \begin{gather*} \frac{rm+m-j}{m-j}+(1+r) \frac{\frac{1}{2}-j}{j+1}>0\iff j^2r+\frac{r+3}{2} (m-j)+mr>0, \end{gather*} which is clearly true. \end{proof} \begin{Corollary} If $L$ is a $(k-1)$-very ample line bundle over an Enriques surface $X$, $k\geq 2$, then $L^{[k]}$ is big and nef. In particular, if $H$ is an ample line bundle over an Enriques surface, and $L_n=H^{\otimes n}$ then $(L_n)^{[k]}$ is big and nef for all $n\geq k+1$. \end{Corollary} \begin{proof} The second half of the corollary follows from the first. Indeed, it is noted in \cite[Proposition $2.5$]{S} that if $H$ is ample then $L_n=H^{\otimes n}$ is $(k-1)$-very ample for all $n\geq k+1$. We prove the first statement. If $\chi:=\chi(L)\geq 4k$, the assertion follows from Theorem \ref{tt2} with $r=1$. Now, over Enriques surfaces, $(k-1)$-very ampleness imposes numerical restrictions on $L$ which are stronger than for the other $K$-trivial surfaces. Indeed, it was noted in \cite[Theorem~2.4]{S} that if $L$ is $(k-1)$-very ample, then $L^2\geq (k+1)^2$. When $k\geq 6$, this is sufficient to guarantee that \begin{gather*} \chi(L)=1+\frac{L^2}{2}\geq 1+\frac{(k+1)^2}{2}>4k. \end{gather*} When $2\leq k\leq 5$, the bound $\chi\geq 4k$ required by Theorem \ref{tt2} may fail. 
However, the finitely many cases \begin{gather*} 4k>\chi\geq 1+ \frac{(1+k)^2}{2}, \qquad 2\leq k\leq 5, \end{gather*} can be checked by hand using equation \eqref{pro}. \end{proof} \section{Other geometries} \label{secother} The Segre integrals for arbitrary surfaces are established only when $F$ has rank $1$ or $2$, see \cite{MOP1, MOP}. In rank $1$, the answers were conjectured by Lehn \cite{Le}. The formulation below can be found in \cite{MOP}: \begin{equation}\label{lehnc} \sum_{k=0}^{\infty} z^k \int_{X^{[k]}} s\big(L^{[k]}\big) = A_1 (z)^{L^2} A_2(z)^{\chi(\mathcal O_X)} A_3 (z)^{L.K_X} A_4(z)^{K_X^2}. \end{equation} Here, for $z=t(1+2t)^2$, we have \begin{gather*} A_1(z)=(1+2t)^{\frac{1}{2}}, \\ A_2(z)=(1+2t)^{\frac{3}{2}} (1+6t)^{-\frac{1}{2}}, \\ A_3(z)=\frac{1}{2} (1+2t)^{-1} \left(\sqrt{1+2t}+\sqrt{1+6t}\right), \\ A_4(z)=4 (1+2t)^{\frac{1}{2}} (1+6t)^{\frac{1}{2}} \left(\sqrt{1+2t}+\sqrt{1+6t}\right)^{-2}. \end{gather*} \subsection{A positivity result} It is more difficult to determine the sign of the top Segre integral from the above formulas. The~following lemma plays an important role in the analysis. We will apply it to geometric situations in the next subsection. \begin{Lemma}\label{cl} For all integers $m$, $n$, $p$ with $m\geq 0$, $p\geq 0$ such that $m+n+p$ is even, the series \begin{gather*} f(t)=\left(\sqrt{1+2t}+\sqrt{1+6t}\right)^m {(1+2t)}^{\frac{n-1}{2}} (1+6t)^{\frac{p-1}{2}} \end{gather*} has positive coefficients up to order less than or equal to $\min\big(\frac{1}{2}(m+n+p)-1, m-1\big)$. \end{Lemma} Taking the minimum is necessary. Indeed, for $(m, n, p)=(2, 19, 1)$ the term of order $\frac{1}{2}(m+n+p)-1$ has negative coefficient, while for $(m, n, p)=(4, 0, 0)$, the term of order $m-1$ has zero coefficient. The lemma also holds for $m+n+p$ odd, but this case {\it never} occurs geometrically. The hypothesis $m, p\geq 0$ can fail in geometric examples, but our proof requires it.
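Before turning to the proof, we record a short Python sketch (ours, for illustration only, not part of the argument) which computes the coefficients of $f(t)$ exactly over the rationals; it can be used to verify both the positivity bound and the two extremal examples $(2, 19, 1)$ and $(4, 0, 0)$ mentioned above:

```python
from fractions import Fraction

def gen_binom(e, j):
    # generalized binomial coefficient binom(e, j) for rational e
    out = Fraction(1)
    for i in range(j):
        out = out * (e - i) / (i + 1)
    return out

def binom_series(c, e, order):
    # power series of (1 + c t)^e, exact coefficients up to t^order
    return [gen_binom(Fraction(e), j) * Fraction(c) ** j for j in range(order + 1)]

def mul(a, b):
    # truncated product of two power series of equal length
    return [sum(a[i] * b[m - i] for i in range(m + 1)) for m in range(len(a))]

def f_coeffs(m, n, p, order):
    # coefficients of (sqrt(1+2t)+sqrt(1+6t))^m (1+2t)^((n-1)/2) (1+6t)^((p-1)/2)
    base = [x + y for x, y in zip(binom_series(2, Fraction(1, 2), order),
                                  binom_series(6, Fraction(1, 2), order))]
    out = mul(binom_series(2, Fraction(n - 1, 2), order),
              binom_series(6, Fraction(p - 1, 2), order))
    for _ in range(m):
        out = mul(out, base)
    return out
```

Running it confirms, for instance, that for $(m, n, p)=(2, 19, 1)$ the coefficient of $t^{10}$ is negative, while for $(4, 0, 0)$ the coefficients of orders $2$ and $3$ vanish, consistent with the statements above.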
\begin{proof} The argument is not straightforward due to the alternating signs in the expansions \begin{gather*} \sqrt{1+2t}= 1+t - \frac{t^2}{2}+\frac{t^3}{2}-\frac{5t^4}{8}+\cdots, \\ \sqrt{1+6t}=1 + 3t - \frac{9t^2}{2}+\frac{27t^3}{2} - \frac{405 t^4}{8} +\cdots. \end{gather*} However, a residue calculation and suitable changes of variables will render the answer manifestly positive. We begin by considering the case $n\geq 0$. In this situation, assume first $m$, $n$, $p$ are even, and~set \begin{gather*} F(t)=\left(\sqrt{1+2t}+\sqrt{1+6t}\right)^m {(1+2t)}^{-\frac{1}{2}} (1+6t)^{-\frac{1}{2}}. \end{gather*} \begin{Claim}\label{claim} The series $F$ has positive coefficients up to order less than or equal to $\frac{m}{2}-1$, and no terms of order between $\frac{m}{2}$ and $m-1$. \end{Claim} We assume this for now. Note that \begin{gather*} f(t)=F(t) P(t), \quad P(t)=(1+2t)^{\frac{n}{2}} (1+6t)^{\frac{p}{2}}. \end{gather*} Clearly, $P$ is a polynomial of degree $\frac{n+p}{2}$ with positive coefficients. This observation, together with Claim~\ref{claim}, yields the result. Indeed, let $k\leq \min\big(\frac{1}{2}(m+n+p)-1, m-1\big)$. Writing $F_i$ and $P_j$ for the coefficients of $F$ and $P$, we have \begin{gather*} \operatorname{Coeff}_{ t^k} f(t)=\sum F_iP_j, \qquad\text{the sum ranging over}\quad i+j=k, \quad i\geq 0,\quad 0\leq j\leq \frac{n+p}{2}. \end{gather*} The sum is nonnegative since $i\leq k\leq m-1$, so $F_i\geq 0$ by Claim~\ref{claim}, and $P_j> 0$. The sum is in fact strictly positive. Indeed, for $k\geq \frac{n+p}{2}$, it contains the term \begin{gather*} F_{k-\frac{n+p}{2}} P_{\frac{n+p}{2}}>0. \end{gather*} The last statement is also a consequence of Claim~\ref{claim}, using that $k-\frac{n+p}{2}\leq\frac{m}{2}-1$. When $k<\frac{n+p}{2}$, the sum contains the term $F_0 P_k=2^m P_k>0$.
\begin{proof}[Proof of Claim~\ref{claim}] We seek to show that for all $k\leq \frac{m}{2}-1$, the residue \begin{gather*} \operatorname{Res}_{t=0} \frac{F(t)}{t^{k+1}} \,{\mathrm dt}>0. \end{gather*} The peculiar change of variables \begin{gather*} t=\frac{s(s+1)}{2\sqrt{3}s+(2+\sqrt{3})} \end{gather*} will simplify the calculation. For convenience, set \begin{gather*} a=\frac{2-\sqrt{3}}{2\sqrt{3}}>0\qquad\text{so that}\quad t=\frac{s(s+1)}{2\sqrt{3}(s+a+1)}. \end{gather*} We note the following identities \begin{gather*} \sqrt{1+2t}=\frac{s+\frac{\sqrt{3}+1}{2}}{3^{\frac{1}{4}} (s+a+1)^{\frac{1}{2}}}, \qquad \sqrt{1+6t}=\frac{3^{\frac{1}{4}}\big(s+\frac{\sqrt{3}+1}{2\sqrt{3}}\big)}{(s+a+1)^{\frac{1}{2}}}, \\ \sqrt{1+2t}+\sqrt{1+6t}=\frac{(\sqrt{3}+1)(s+1)}{3^{\frac{1}{4}}(s+a+1)^{\frac{1}{2}}},\qquad {\mathrm dt}=\frac{\big(s+\frac{\sqrt{3}+1}{2}\big)\big(s+\frac{\sqrt{3}+1}{2\sqrt{3}}\big)}{2\sqrt{3} (s+a+1)^2}\,{\mathrm ds}. \end{gather*} From here, we obtain \begin{gather*} \operatorname{Res}_{t=0} \frac{F(t)}{t^{k+1}} \,{\mathrm dt}=\operatorname{Res}_{s=0} \frac{G(s)}{s^{k+1}} \,{\mathrm ds}, \end{gather*} where \begin{gather*} G(s)=\alpha (s+1)^{m-k-1} (s+a+1)^{k-\frac{m}{2}}, \qquad \alpha=2^k \big(\sqrt{3}+1\big)^{m} \sqrt{3}^{k-\frac{m}{2}}>0. \end{gather*} A second change of variables will be needed next (this change of variables could have been carried out simultaneously with the first, but this is a price worth paying for readability). We~set \begin{gather*} s=\frac{(a+1)^2}{a} \frac{u}{1-\frac{a+1}{a}u}. \end{gather*} The reader can verify that \begin{gather*} s+1=\frac{1+(a+1)u}{1-\frac{a+1}{a}u}, \qquad s+a+1=\frac{a+1}{1-\frac{a+1}{a}u}, \qquad {\mathrm ds}=\frac{(a+1)^2}{a} \frac{1}{\big(1-\frac{a+1}{a}u\big)^2}\,{\mathrm du}. 
\end{gather*} By direct calculation, we find \begin{gather*} \operatorname{Res}_{s=0} \frac{G(s)}{s^{k+1}}\,{\mathrm ds}=\operatorname{Res}_{u=0} \frac{H(u)}{u^{k+1}}\,{\mathrm du} \end{gather*} for \begin{gather*} H(u)=\beta (1+(a+1)u)^{m-k-1} \biggl(1-\frac{a+1}{a}u\biggr)^{k-\frac{m}{2}}, \qquad \beta=\alpha a^{k} (a+1)^{-k-\frac{m}{2}}>0. \end{gather*} The first term is a polynomial with positive coefficients $a_i$ given by binomial numbers, for $0\leq i\leq m-k-1$. The second term also has positive coefficients \begin{gather*} b_j=\biggl(-\frac{a+1}{a}\biggr)^j\binom{k-\frac{m}{2}}{j}>0 \end{gather*} since $k-\frac{m}{2}<0$. Thus \begin{gather*} \operatorname{Coeff}_{u^k} H(u)=\!\sum a_i b_j,\quad \text{the sum ranging over}\!\quad i+j=k, \quad 0\leq i\leq m-k-1, \quad j\geq 0. \end{gather*} This coefficient is positive since $a_i>0$, $b_j> 0$ and the sum is non-empty (the term $a_0b_k=b_k>0$ appears in the sum). When $m$ is even, and $\frac{m}{2}\leq k\leq m-1$, the expression $H(u)$ is a polynomial in $u$ of degree $\frac{m}{2}-1<k$. Hence, the coefficient of $u^k$ in $H(u)$ vanishes. This proves the claim (and completes the argument when $m$, $n$, $p$ are even and $n\geq 0$). \end{proof} When $n\geq 0$, there are three other cases to consider, which require different choices for $F$ and $P$: \begin{itemize}\itemsep=0pt \item [$(i)$] when $m$ even, $n$, $p$ odd, we set \begin{gather*} F(t)=\left(\sqrt{1+2t}+\sqrt{1+6t}\right)^m, \qquad P(t)=(1+2t)^{\frac{n-1}{2}} (1+6t)^{\frac{p-1}{2}}, \end{gather*} \item [$(ii)$] when $m$ odd, $n$ even, $p$ odd, we set \begin{gather*} F(t)=\left(\sqrt{1+2t}+\sqrt{1+6t}\right)^m {(1+2t)}^{-\frac{1}{2}}, \qquad P(t)=(1+2t)^{\frac{n}{2}} (1+6t)^{\frac{p-1}{2}}, \end{gather*} \item [$(iii)$] when $m$ odd, $n$ odd, $p$ even, we set \begin{gather*} F(t)=\left(\sqrt{1+2t}+\sqrt{1+6t}\right)^m (1+6t)^{-\frac{1}{2}}, \qquad P(t)=(1+2t)^{\frac{n-1}{2}} (1+6t)^{\frac{p}{2}}. 
\end{gather*} \end{itemize} In case $(i)$, $F$ has positive coefficients up to order $\frac{m}{2}$, and no terms of order between $\frac{m}{2}+1$ and $m-1$. In cases $(ii)$ and~$(iii)$, $F$ has positive coefficients up to order $\frac{m-1}{2}$, and no terms of order between $\frac{m+1}{2}$ and $m-1$. The arguments are similar, and we leave the details to the reader. When $n<0$, the reasoning used to prove Claim~\ref{claim} also works to deal directly with the function \begin{gather*} f(t)=\left(\sqrt{1+2t}+\sqrt{1+6t}\right)^m {(1+2t)}^{\frac{n-1}{2}} (1+6t)^{\frac{p-1}{2}}. \end{gather*} Following exactly the same steps, first changing from $t$ to $s$ and then from $s$ to $u$, we obtain \begin{align*} H(u)={}&\gamma (1+(a+1)u)^{m-k-1} \biggl(1-\frac{a+1}{a}u\biggr)^{k-\frac{m+n+p}{2}} \biggl(1-\frac{(\sqrt{3}+1)^3}{4\sqrt{3}}u\biggr)^{n} \\ &\times \biggl(1+\frac{\big(\sqrt{3}+1\big)^3}{4}u\biggr)^p, \end{align*} for $\gamma>0$. By the same arguments, this has positive coefficients when the exponents \begin{gather*} m-k-1\geq 0, \qquad k-\frac{m+n+p}{2}<0, \qquad n<0, \qquad p\geq 0, \end{gather*} which we assumed. \end{proof} \subsection[K3 blowups]{$\boldsymbol{K3}$ blowups} The top Segre classes computed by formula \eqref{lehnc} are always coefficients in series of the form~$f(t)$ as in Lemma \ref{cl}, barring the condition $m, p\geq 0$. When this condition is satisfied, we easily obtain big and nef criteria for the tautological bundles $L^{[k]}\to X^{[k]}$. There are several specific examples where our techniques apply. We illustrate them first when \begin{gather*} \pi\colon\ X\to S \end{gather*} is the blowup of a $K3$ surface $S$ at a point $p\in S$. We assume $S$ has Picard rank $1$, with ample Picard generator $H$. Let $H^2=2h$. \begin{Theorem}\label{blok} Let $E$ be the exceptional divisor on $X$, and set $L=H-\ell E$. Assume $\ell\geq k-1$ and \begin{gather*} 2h>\max \bigl((\ell+2)^2-6, (\ell+1)^2+4k, \ell(\ell+1)+6k-6\bigr). 
\end{gather*} Then $L^{[k]}$ is big and nef on $X^{[k]}$. \end{Theorem} \begin{proof} We first show that $H-(\ell+1) E$ is nef. Recall the Seshadri constant \begin{gather*} \epsilon (S, p)= \max \{t \in \mathbb R_{\geq 0}\colon H-tE \text{ is nef}\}, \end{gather*} see for instance \cite[Definition 5.1.1]{L-I}. Note that since $H$ is in the nef cone of $X$, if $H-tE$ is nef, then $H-t'E$ is also nef for all $0\leq t'\leq t$. Thus it suffices to explain that \begin{gather*} \epsilon(S, p)\geq \ell+1. \end{gather*} The Seshadri constants of $K3$ surfaces of Picard rank $1$ have been studied in \cite{K} and shown to satisfy \begin{gather*} \epsilon(S, p)\geq \big\lfloor {\sqrt{H^2}}\big\rfloor, \end{gather*} with two possible exceptions \begin{gather*} H^2=\alpha^2+\alpha-2,\qquad \epsilon(S, p)\geq \alpha - \frac{2}{\alpha+1} \end{gather*} and \begin{gather*} H^2=\alpha^2+\frac{\alpha-1}{2},\qquad \epsilon(S, p)\geq \alpha -\frac{1}{2\alpha+1}, \qquad \alpha\in \mathbb Z_{>0}. \end{gather*} Since the inequality $2h>\ell(\ell+3)$ is implied by our hypothesis, it follows that $\epsilon(S, p)\geq \ell+1$ in all cases (using $\alpha\geq \ell+2$ in the two exceptional cases). We next show that $L$ is $(k-1)$-very ample. Note that $K_X=E$, and write \begin{gather*} L=K_X+M, \qquad M=H-(\ell+1)E. \end{gather*} Observe that $M$ is nef by the first paragraph of the proof, and \begin{gather*} M^2=2h-(\ell+1)^2>4k. \end{gather*} By \cite[Theorem 2.1]{BS}, if $L$ is not $(k-1)$-very ample, there exists an effective divisor $D\neq 0$ such that \begin{gather*} D.M-k\leq D^2<\frac{D.M}{2}<k. \end{gather*} Furthermore, $M-2D$ is $\mathbb Q$-effective and $D$ contains a subscheme $\zeta$ of length at most~$k$ such that \begin{gather*} H^0(L)\to H^0(L\otimes \mathcal O_\zeta) \end{gather*} is not surjective. Write \begin{gather*}D=aH+bE, \end{gather*} and note that $D$ effective implies $a\geq 0$. 
Similarly, \begin{gather*} M-2D=(1-2a)H+(-\ell-1-2b)E \end{gather*} is $\mathbb Q$-effective, so $1-2a\geq 0$. Thus $a=0$, $D=bE$ with $b > 0$. We have \begin{gather*} \frac{D.M}{2}<k\implies b(\ell+1)<2k\implies b<\frac{2k}{\ell+1}\leq 2 \end{gather*} since $k\leq \ell+1$. Hence $b=1$ and $D=E$. For subschemes $\zeta$ of $E$, the map \begin{gather*} H^0(L)\to H^0(L\otimes \mathcal O_\zeta) \end{gather*} can be written as a composition \begin{gather*} H^0(L)\to H^0(L|_{E}), \qquad H^0(L|_E)\to H^0(L\otimes \mathcal O_{\zeta}). \end{gather*} Since $L|_{E}=\mathcal O_E(\ell)$ and $\zeta$ has length at most $k\leq \ell+1$, the second map is clearly surjective. The first map is also surjective since \begin{gather*} H^1(L(-E))=0. \end{gather*} This is a consequence of \cite[Proposition 4.1]{V} and requires the bound $2h>(\ell+2)^2-6$, which we assumed. We conclude $L^{[k]}$ is globally generated and thus nef. It remains to explain that the top Segre class of $L^{[k]}$ is positive. We use \eqref{lehnc} and we change variables \begin{gather*} z=t(1+2t)^2\qquad \text{so that}\quad \mathrm dz=(1+2t)(1+6t)\,{\mathrm dt}. \end{gather*} Then \begin{align} \int_{X^{[k]}} s\big(L^{[k]}\big)&=\operatorname{Res}_{z=0} A_1^{2h-\ell^2} A_2^2 A_3^{\ell} A_4^{-1} \frac{\mathrm dz}{z^{k+1}}\nonumber \\ \nonumber&= \operatorname{Res}_{t=0} 2^{-\ell-2} (1+2t)^{h-\frac{\ell^2}{2}-2k-\ell+\frac{3}{2}} (1+6t)^{-\frac{1}{2}} \left(\sqrt{1+2t}+\sqrt{1+6t}\right)^{\ell+2}\frac{\mathrm dt}{t^{k+1}} \\ &= 2^{-\ell-2} \operatorname{Coeff}_{t^k} (1\!+2t)^{h-\frac{\ell^2}{2}-2k-\ell+\frac{3}{2}} (1\!+6t)^{-\frac{1}{2}} \left(\sqrt{1+2t}\!+\sqrt{1+6t}\right)^{\ell+2}\!. \label{cvvv} \end{align} We are now in the situation considered in Lemma \ref{cl}. Thus, the coefficient above is positive provided $k\leq \ell+1$ and \begin{gather*} k\leq \frac{1}{2} \bigg((\ell+2)+2\bigg(h-\frac{\ell^2}{2}-2k-\ell+2\bigg)\bigg)-1\iff 2h> \ell(\ell+1)+6k-6. \end{gather*} This completes the proof. 
\end{proof} \subsection{Surfaces of general type} It is natural to wonder how far these techniques take us. We show \begin{Theorem} \label{gtyp}Assume $X$ is a smooth projective minimal surface of general type. Let $L$ be a~$(k-1)$-very ample line bundle such that \begin{gather*} \chi(L)\geq 3k, \qquad L.K_X\geq 2K_X^2+k+1. \end{gather*} Then $L^{[k]}$ is big and nef over $X^{[k]}$. \end{Theorem} \begin{proof} Arguing as in the proof of Theorem \ref{blok}, in particular equation \eqref{cvvv}, we express the Segre integral as the $t^k$-coefficient in the series \begin{gather*} f(t)=2^{-m} \left(\sqrt{1+2t}+\sqrt{1+6t}\right)^m {(1+2t)}^{\frac{n-1}{2}} (1+6t)^{\frac{p-1}{2}}, \end{gather*} where \begin{gather*} m=L.K-2K^2, \qquad n=(L-K)^2+3\chi - 4k - 1, \qquad p=K^2-\chi+3, \end{gather*} with $K=K_X$ and $\chi=\chi(\mathcal O_X)$. Note that \begin{gather*} m+n+p=2\chi(L)-4k+2 \end{gather*} is even. Furthermore, $p\geq 0$. Indeed, let $p_g$, $q$ denote the genus and irregularity of $X$. By~Noet\-her's inequality $K^2\geq 2p_g-4$, and $K^2>0$ since $X$ is minimal of general type. Averaging, we obtain $K^2>p_g-2$, and thus \begin{gather*} p=K^2-\chi+3>(p_g-2)-(1-q+p_g)+3=q\geq 0. \end{gather*} By Lemma \ref{cl}, the coefficients of $f(t)$ up to order $\min\big(\frac{1}{2} (m+n+p)-1, m-1\big)$ are positive. In~particular, if \begin{gather*} k\leq \min\bigg(\frac{1}{2} (m+n+p)-1, m-1\bigg), \end{gather*} then the Segre integral is positive. The latter inequality is exactly our hypothesis, and thus $L^{[k]}$ is big and nef. \end{proof} \section{Curves} \label{seccur} Let $C$ be a smooth projective curve of genus $g$, and let $V\to C$ be a vector bundle with $\chi=\chi(V)$. It is natural to ask whether the previous results also apply to the tautological bundles $V^{[k]}\to C^{[k]}$. 
In fact, by similar considerations, we establish an analogue of Theorem~\ref{tt}: \begin{Proposition} \label{tt3}Assume $V\to C$ is a $(k-1)$-very ample rank $r$ vector bundle with $\chi \geq (r+1)k$. Then $V^{[k]}\to C^{[k]}$ is big and nef over $C^{[k]}$. \end{Proposition} To illustrate, if $V$ is stable of degree $d>r(2g-2+k)$, then $V$ is $(k-1)$-very ample. This holds because for any divisor $Z\subset C$ of degree $k$, we have \begin{gather*} H^1(V(-Z))=H^0\big(V^{\vee}\otimes K_C(Z)\big)=0 \end{gather*} by Serre duality, stability and the assumption $\mu(V)>2g-2+k$. \begin{proof} Since $V$ is $(k-1)$-very ample, it follows that $V^{[k]}$ is globally generated, hence nef. To~prove $V^{[k]}$ is big, it suffices to verify \begin{gather*} (-1)^k \int_{C^{[k]}} s\big(V^{[k]}\big)>0. \end{gather*} The latter integrals are computed in \cite[Theorem 2]{MOP1}; they can be viewed as higher rank analogues of the classical $k$-secant integrals. The answer bears analogies with equation \eqref{eq1}: \begin{gather*} \sum_{k=0}^{\infty} z^k \int_{C^{[k]}} s_{k} \big(V^{[k]}\big) =A_1^{d} A_2^{1-g}, \end{gather*} where \begin{gather*} A_1(z)=1+t, \qquad A_2(z)=\frac{(1+t)^{r+1}}{1+(1+r)t}, \qquad z=-t(1+t)^r. \end{gather*} We can express the Segre integrals as residues \begin{align*} (-1)^k \int_{C^{[k]}} s_{k} \big(V^{[k]}\big)&=(-1)^k \operatorname{Res}_{z=0} A_1^d A_2^{1-g} \frac{\mathrm dz}{z^{k+1}} \\ &=\operatorname{Res}_{t=0} (1+t)^{\chi - kr - g} (1+(1+r)t)^{g} \frac{\mathrm dt}{t^{k+1}}. \end{align*} Just as in Theorem \ref{tt}, we further change variables $t=\frac{u}{1-u}$, so that \begin{align*} (-1)^k \int_{C^{[k]}} s_{k} \big(V^{[k]}\big)&=\operatorname{Res}_{u=0} (1+ru)^{g} (1-u)^{-\chi+k(r+1)-1} \frac{\mathrm du}{u^{k+1}} \\ &= \operatorname{Coeff}_{u^k} (1+ru)^{g} (1-u)^{-\chi+k(r+1)-1}. \end{align*} By the same reasoning as in Theorem~\ref{tt}, we see that this coefficient is positive when $-\chi+k(r+1)-1<0\iff \chi\geq k(r+1)$. This completes the argument. 
\end{proof} To go further, we consider the punctual Quot schemes $\mathsf {Quot}_C\big(\mathbb C^N, k\big)$ parametrizing quotients \begin{gather*} 0\to S\to \mathbb C^N\otimes \mathcal O_C\to Q\to 0, \qquad \operatorname{rk} Q=0, \qquad \text{length }Q=k. \end{gather*} These are smooth projective varieties of dimension $Nk$, and carry tautological vector bundles~$V^{[k]}$ for each vector bundle $V\to C$: \begin{gather*} V^{[k]}=p_{\star} (\mathcal Q\otimes q^{\star} V). \end{gather*} Here $\mathcal Q$ is the universal quotient and $p$, $q$ are the two projections over $\mathsf{Quot}_C\big(\mathbb C^N, k\big)\times C$. The associated Segre integrals were studied in~\cite{OP2021}. We extend the above results to the punctual Quot scheme, in rank $1$. The higher rank case appears more involved. \begin{Theorem} \label{quotpun} Let $L$ be a line bundle with $\chi (L)\geq k+g$ and $\chi(L)\geq k\big(1+\frac{1}{N}\big)$. Then, the vector bundle $L^{[k]}$ is big and nef over $\mathsf {Quot}_C\big(\mathbb C^N, k\big)$. \end{Theorem} \begin{proof} We show that $L^{[k]}$ is globally generated, hence nef. The universal sequence over $\mathsf {Quot}_C\big(\mathbb C^N, k\big)\times C$ \begin{gather*} 0\to \mathcal S\to \mathbb C^N\otimes \mathcal O\to \mathcal Q\to 0 \end{gather*} induces, via tensorization by $L$ followed by pushforward, a morphism \begin{gather*} H^0\bigl(C, L^{\oplus N}\bigr)\otimes \mathcal O_{\mathsf{Quot}}\to L^{[k]}. \end{gather*} To prove that the morphism is surjective, we establish that \begin{equation}\label{ss} H^0\bigl(C, L^{\oplus N}\bigr)\to H^0(C, L\otimes Q) \end{equation} is surjective for all length $k$ punctual quotients $Q$ of the rank $N$ trivial bundle. The argument only requires $L$ to be $(k-1)$-very ample, which is certainly true for us. In~fact, $L$ is $(k-1)$-very ample whenever $\deg L\geq 2g-1+k\iff \chi(L)\geq g+k$, as we remarked before the proof of Proposition~\ref{tt3}. 
Surjectivity of~\eqref{ss} for $N=1$ is a rephrasing of $(k-1)$-very ampleness. For the general case, we induct on $N$. We pick a splitting $\mathbb C^N=\mathbb C\oplus \mathbb C^{N-1}$, and form the diagram with exact rows and columns: \begin{center} $\xymatrix{ & \ar[d] 0 & \ar[d]0 & \ar[d]0 &\\ 0\ar[r] & S' \ar[r]\ar[d] & S\ar[r]\ar[d] & S''\ar[r]\ar[d] & 0 \\ 0\ar[r] & \mathcal O_C \ar[r]\ar[d]& \mathbb C^{N}\otimes \mathcal O_{C}\ar[r]\ar[d] &\mathbb C^{N-1}\otimes \mathcal O_{C} \ar[r]\ar[d]& 0 \\ 0\ar[r] & Q' \ar[r]\ar[d] & Q\ar[r]\ar[d] & Q''\ar[r]\ar[d] & 0\\ & 0 & 0 & 0 & }$ \end{center} Tensoring with $L$ and taking global sections yields \begin{center} $\xymatrix{ 0\ar[r] & \ar[r]\ar[d]H^0(C, L)& \ar[r]\ar[d]H^0\big(C, L^{\oplus N}\big) & \ar[r] \ar[d]H^0\big(C, L^{\oplus (N-1)}\big) &0\\ 0 \ar[r]& H^0(C, L\otimes Q')\ar[r]\ar[d] & H^0(C, L\otimes Q)\ar[r] & H^0(C, L\otimes Q'')\ar[r] \ar[d] & 0 \\ & 0 & & 0 & }$ \end{center} Both $Q'$ and $Q''$ have length at most $k$, the length of $Q$. Since $L$ is $(k-1)$-very ample, it is also $(\ell-1)$-very ample for all $1\leq \ell\leq k$, in particular when $\ell$ equals the length of $Q'$ or~$Q''$. Thus, the first and last vertical arrows are surjective by induction. This implies that the middle vertical arrow is surjective as well. It remains to show that $L^{[k]}$ is big. By \eqref{pos}, it suffices to determine the sign of the top Segre class of $L^{[k]}$. No additional calculation is needed in this case. Indeed, the Segre integrals were noted in \cite[Corollary~10]{OP2021} to satisfy the symmetry \begin{gather*} (-1)^{Nk} \int_{{\mathsf{Quot}}_C(\mathbb C^N, k)} s\big(L^{[k]}\big)=(-1)^k\int_{C^{[k]}} s\bigl(\big(L^{\oplus N}\big)^{[k]}\bigr). \end{gather*} In the proof of Proposition \ref{tt3}, the latter integral was shown to be positive if $N\chi (L)\geq (N+1)k$, which is true by hypothesis. \end{proof} \LastPageEnding \end{document}
\begin{document} \title{Periodic sequences of \(p\)-class tower groups} \author{Daniel C. Mayer} \address{Naglergasse 53\\8010 Graz\\Austria} \email{algebraic.number.theory@algebra.at} \urladdr{http://www.algebra.at} \thanks{Research supported by the Austrian Science Fund (FWF): P 26008-N25} \subjclass[2000]{Primary 11R37, 11R29, 11R11, 11R16; secondary 20D15, 20F05, 20F12, 20F14, 20-04} \keywords{\(p\)-class field towers, \(p\)-principalization, \(p\)-class groups, quadratic fields, multiquadratic fields, cubic fields; finite \(p\)-groups, parametrized pc-presentations, \(p\)-group generation algorithm} \date{March 31, 2015} \dedicatory{Dedicated to the memory of Emil Artin} \begin{abstract} Recent examples of periodic bifurcations in descendant trees of finite \(p\)-groups with \(p\in\lbrace 2,3\rbrace\) are used to show that the possible \(p\)-class tower groups \(G\) of certain multiquadratic fields \(K\) with \(p\)-class group of type \((2,2,2)\), resp. \((3,3)\), form periodic sequences in the descendant tree of the elementary abelian root \(C_2^3\), resp. \(C_3^2\). The particular vertex of the periodic sequence which occurs as the \(p\)-class tower group \(G\) of an assigned field \(K\) is determined uniquely by the \(p\)-class number of a quadratic, resp. cubic, auxiliary field \(k\), associated unambiguously to \(K\). Consequently, the hard problem of identifying the \(p\)-class tower group \(G\) is reduced to an easy computation of low degree arithmetical invariants. \end{abstract} \maketitle \section{Introduction} \label{s:Intro} In this article, we establish class field theoretic applications of the purely group theoretic discovery of periodic bifurcations in descendant trees of finite \(p\)-groups, as described in our previous papers \cite[\S\S\ 21--22, pp.182--193]{Ma1} and \cite[\S\ 6.2.2]{Ma2}, and summarized in section \S\ \ref{s:PeriodicBifurcations}. 
The infinite families of Galois groups of \(p\)-class field towers with \(p\in\lbrace 2,3\rbrace\) which are presented in sections \S\S\ \ref{s:TwoStage} and \ref{s:ThreeStage} can be divided into two kinds. Either they form infinite periodic sequences of uniform step size \(1\), and thus of fixed coclass; these are the classical and well-known \textit{coclass sequences}, whose virtual periodicity has been proved independently by du Sautoy and by Eick and Leedham-Green (see \cite[\S\ 7, pp.167--168]{Ma1}). Or they arise from infinite paths of directed edges in descendant trees whose vertices reveal periodic bifurcations (see \cite[Thm.21.1, p.182]{Ma1}, \cite[Thm.21.3, p.185]{Ma1}, and \cite[Thm.6.4]{Ma2}). Extensive finite parts of the latter have been constructed computationally with the aid of the \(p\)-group generation algorithm by Newman \cite{Nm} and O'Brien \cite{Ob} (see \cite[\S\S 12--18]{Ma1}), but their indefinitely repeating periodic pattern has not been proven rigorously so far. They can be of uniform step size \(2\), as in \S\ \ref{s:TwoStage}, or of mixed alternating step sizes \(1\) and \(2\), as in \S\ \ref{s:ThreeStage}, whence their coclass increases unboundedly. \section{Periodic bifurcations in trees of \(p\)-groups} \label{s:PeriodicBifurcations} For the specification of finite \(p\)-groups throughout this article, we use the identifiers of the SmallGroups database \cite{BEO1,BEO2} and of the ANUPQ-package \cite{GNO} implemented in the computational algebra systems GAP \cite{GAP} and MAGMA \cite{BCP,BCFS,MAGMA}, as discussed in \cite[\S\ 9, pp.168--169]{Ma1}. 
The first periodic bifurcations were discovered in August 2012 for the descendant trees of the \(3\)-groups \(Q=\langle 729,49\rangle\) and \(U=\langle 729,54\rangle\) (see \cite[\S\ 3, p.163]{Ma1} and \cite[Thm.21.3, p.185]{Ma1}), having abelian quotient invariants \((3,3)\), when we, in collaboration with Bush, conducted a search for Schur \(\sigma\)-groups as possible candidates for Galois groups \(\mathrm{G}_3^{\infty}(K)=\mathrm{Gal}(\mathrm{F}_3^{\infty}(K)\vert K)\) of three-stage towers of \(3\)-class fields over complex quadratic base fields \(K=\mathbb{Q}(\sqrt{d})\) with \(d\le -9748\) and a certain type of \(3\)-principalization \cite[Cor.4.1.1, p.775]{BuMa}. The result in \cite[Thm.4.1, p.774]{BuMa} will be generalized to more principalization types and groups of higher nilpotency class in section \S\ \ref{s:ThreeStage}. \noindent Similar phenomena were found in May 2013 for the trees with roots \(\langle 2187,168\rangle\) and \(\langle 2187,181\vert 191\rangle\) of type \((9,3)\) but have not been published yet, since we first have to present a classification of all metabelian \(3\)-groups with abelianization \((9,3)\). At the beginning of 2014, we investigated the root \(\langle 729,45\rangle\), which possesses an infinite balanced cover \cite[Dfn.6.1]{Ma2}, and found periodic bifurcations in its descendant tree \cite[Thm.6.4]{Ma2}. In January 2015, we studied complex bicyclic biquadratic fields \(K=\mathbb{Q}(\sqrt{-1},\sqrt{d})\), called \textit{special Dirichlet fields} by Hilbert \cite{Hi}, for whose \(2\)-class tower groups \(\mathrm{G}_2^{\infty}(K)\) presentations had been given by Azizi, Zekhnini and Taous \cite[Thm.2(4)]{AZT}, provided the radicand \(d\) exhibits a certain prime factorization which ensures a \(2\)-class group \(\mathrm{Cl}_2(K)\) of type \((2,2,2)\). 
In section \S\ \ref{s:TwoStage}, we use the viewpoint of descendant trees of finite metabelian \(2\)-groups and our discovery of periodic bifurcations in the tree with root \(\langle 32,34\rangle\) \cite[Thm.21.1, p.182]{Ma1} to prove a group theoretic restatement of the main result in the paper \cite{AZT}, which connects pairs \((m,n)\) of positive integer parameters with vertices of the descendant tree \(\mathcal{T}(\langle 8,5\rangle)\) by means of an injective mapping \((m,n)\mapsto G_{m,n}\), as shown impressively in Figure \ref{fig:IdAndParam222}. \section{Pattern recognition via Artin transfers} \label{s:ArtinPattern} Let \(p\) denote a prime number and suppose that \(G\) is a finite \(p\)-group or an infinite pro-\(p\) group with finite abelianization \(G/G^\prime\) of order \(p^v\) with a positive integer exponent \(v\ge 1\). In this situation, there exist \(v+1\) layers \[\mathrm{Lyr}_n(G):=\lbrace G^\prime\le H\le G\mid (G:H)=p^n\rbrace,\text{ for }0\le n\le v,\] of intermediate normal subgroups \(H\unlhd G\) between \(G\) and its commutator subgroup \(G^\prime\). For each of them, we denote by \(T_{G,H}:G\to H/H^\prime\) the Artin transfer homomorphism from \(G\) to \(H\) \cite{Ar}. In our recent papers \cite[\S\ 3]{Ma2} and \cite{Ma}, the components of the multiple-layered \textit{transfer target type} (TTT) \(\tau(G)=\lbrack\tau_0(G);\ldots;\tau_v(G)\rbrack\) of \(G\), resp. the multiple-layered \textit{transfer kernel type} (TKT) \(\varkappa(G)=\lbrack\varkappa_0(G);\ldots;\varkappa_v(G)\rbrack\) of \(G\), were defined by \[\tau_n(G):=(H/H^\prime)_{H\in\mathrm{Lyr}_n(G)}, \text{ resp. } \varkappa_n(G):=(\ker(T_{G,H}))_{H\in\mathrm{Lyr}_n(G)}, \text{ for }0\le n\le v.\] The following information is known \cite{Ma} to be crucial for identifying the metabelianization \(G/G^{\prime\prime}\) of a \(p\)-class tower group \(G\), but usually does not suffice to determine \(G\) itself. 
\begin{definition} \label{dfn:ArtinPattern} By the \textit{Artin pattern} of \(G\) we understand the pair \begin{equation} \label{eqn:ArtinPattern} \mathrm{AP}(G):=(\tau(G);\varkappa(G)) \end{equation} \noindent consisting of the multiple-layered TTT \(\tau(G)\) and the multiple-layered TKT \(\varkappa(G)\) of \(G\). \noindent If \(G\) is the \(p\)-tower group of a number field \(K\), then we put \(\mathrm{AP}(K):=\mathrm{AP}(G)\) and speak of the \textit{Artin pattern} of \(K\). \end{definition} As Emil Artin \cite{Ar} pointed out already in 1929, using more classical terminology, the concepts transfer target type (TTT) and transfer kernel type (TKT) of a base field \(K\), which we have now combined into the Artin pattern \((\tau(K);\varkappa(K))\) of \(K\), require a \textit{non-abelian} setting of unramified extensions of \(K\). The reason is that the derived subgroup \(H^\prime\) of an intermediate group \(G^\prime<H<G\) between the \(p\)-tower group \(G\) of \(K\) and its commutator subgroup \(G^\prime\) is an intermediate group between \(G^\prime\) and the second derived subgroup \(G^{\prime\prime}\). Therefore, the TTT \(\tau(G)\) of the \(p\)-tower group \(G=G_p^\infty(K)\) coincides with the TTT \(\tau(G_p^n(K))\) of any higher derived quotient \(G_p^n(K)\simeq G/G^{(n)}\), for \(n\ge 2\) but not for \(n=1\), since \(H/H^\prime\simeq(H/G^{(n)})/(H^\prime/G^{(n)})\), according to the isomorphism theorem. Similarly, we have the coincidence of TKTs \(\varkappa(G_p^n(K))=\varkappa(G)\), for \(n\ge 2\). 
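To make the layer structure concrete: for \(p=2\) and \(G/G^\prime\) elementary abelian of rank \(3\) (the situation of the next section), the first layer \(\mathrm{Lyr}_1(G)\) is indexed by the seven planes in the \(\mathbb{F}_2\)-vector space \(G/G^\prime\). The following short Python sketch (our own illustration, not taken from this article) enumerates these planes as kernels of nonzero linear functionals.

```python
# Illustration: the subgroups of index p in the elementary abelian group
# (Z/p)^rank are the kernels of the nonzero linear functionals
# phi: F_p^rank -> F_p; scalar multiples of phi share a kernel, so there
# are (p^rank - 1)/(p - 1) such subgroups. For p = 2, rank = 3 this gives
# the seven "planes" indexing the first layer Lyr_1(G).
from itertools import product

def index_p_subgroups(rank, p=2):
    """Distinct kernels of nonzero functionals on F_p^rank, as frozensets."""
    vectors = list(product(range(p), repeat=rank))
    planes = set()
    for phi in vectors:
        if any(phi):  # skip the zero functional
            ker = frozenset(v for v in vectors
                            if sum(a*b for a, b in zip(phi, v)) % p == 0)
            planes.add(ker)
    return planes

planes = index_p_subgroups(3)
print(len(planes))  # 7 subgroups of index 2, each of order 4
```

The count \((2^3-1)/(2-1)=7\) matches the seven norm class groups \(N_1,\dots,N_7\) appearing in Corollary \ref{cor:TwoStage}.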
\section{Two-stage towers of \(2\)-class fields} \label{s:TwoStage} As our first application of periodic bifurcations in trees of \(2\)-groups, we present a family of biquadratic number fields \(K\) with \(2\)-class group \(\mathrm{Cl}_2(K)\) of type \((2,2,2)\), discovered by Azizi, Zekhnini and Taous \cite{AZT}, whose \(2\)-class tower groups \(G=\mathrm{G}_2^{\infty}(K)\) are conjecturally distributed over infinitely many periodic coclass sequences, without gaps. This claim is stronger than the statements in the following Theorem \ref{thm:TwoStage}. The proof firstly consists of a group theoretic construction of all possible candidates for \(G\), identified by their Artin pattern, up to nilpotency class \(\mathrm{cl}(G)\le 12\) and coclass \(\mathrm{cc}(G)\le 13\), thus having a maximal logarithmic order \(\log_2(\mathrm{ord}(G))\le 25\). (The first part is independent of the actual realization of the possible groups \(G\) as \(2\)-tower groups of suitable fields \(K\).) Secondly, evidence is provided of the realization of at least all those groups constructed in the first part whose logarithmic order does not exceed \(11\). The second part (see \S\ \ref{s:CompRslt}) is done by computing the Artin pattern of sufficiently many fields \(K\) or by using more sophisticated ideas, presented in Theorem \ref{thm:TwoStage}. \begin{remark} \label{rmk:TwoStage} Generally, the first layer of the transfer kernel type \(\varkappa_1(G)\) of \(G\) will turn out to be a permutation \cite[Dfn.21.1, p.182]{Ma1} of the seven planes in the \(3\)-dimensional \(\mathbb{F}_2\)-vector space \(G/G^\prime\simeq\mathrm{Cl}_2(K)\). We are going to use the notation of \cite[Thm.21.1 and Cor.21.1]{Ma1}. 
\end{remark} \begin{theorem} \label{thm:TwoStage} Let \(K=\mathbb{Q}(\sqrt{-1},\sqrt{d})\) be a complex bicyclic biquadratic Dirichlet field with radicand \(d=p_1p_2q\), where \(p_1\equiv 1\pmod{8}\), \(p_2\equiv 5\pmod{8}\) and \(q\equiv 3\pmod{4}\) are prime numbers such that \(\left(\frac{p_1}{p_2}\right)=-1\) and \(\left(\frac{p_1}{q}\right)=-1\). \noindent Then the \(2\)-class group \(\mathrm{Cl}_2(K)\) of \(K\) is of type \((2,2,2)\), the \(2\)-class field tower of \(K\) is metabelian (with exactly two stages), and the isomorphism type of the Galois group \(G=\mathrm{G}_2^{\infty}(K)=\mathrm{Gal}(\mathrm{F}_2^{\infty}(K)\vert K)\) of the maximal unramified pro-\(2\) extension \(\mathrm{F}_2^{\infty}(K)\) of \(K\) is characterized uniquely by the pair of positive integer parameters \((m,n)\) defined by the \(2\)-class numbers \(h_2(k_1)=2^{m+1}\) and \(h_2(k_2)=2^{n}\) of the complex quadratic fields \(k_1=\mathbb{Q}(\sqrt{-p_1})\) and \(k_2=\mathbb{Q}(\sqrt{-p_2q})\). \noindent The Legendre symbol \(\left(\frac{p_2}{q}\right)\) decides whether \(G\) is a descendant of \(\langle 32,34\rangle\) or \(\langle 32,35\rangle\): \begin{itemize} \item \(\left(\frac{p_2}{q}\right)=-1\) \(\Longleftrightarrow\) \((m\ge)n=1\) \(\Longleftrightarrow\) the first layer TKT \(\varkappa_1(G)\) is a permutation with five fixed points and a single \(2\)-cycle \(\Longleftrightarrow\) \(G\) belongs to the mainline \begin{equation} \label{eqn:MainLine35} M_{0,k}:=\langle 32,35\rangle(-\#1;1)^k,\text{ with }k=m-1\ge 0, \end{equation} \noindent of the coclass tree \(\mathcal{T}^3(\langle 32,35\rangle)\). \item \(\left(\frac{p_2}{q}\right)=+1\) \(\Longleftrightarrow\) \(n>1\) \(\Longleftrightarrow\) the first layer TKT \(\varkappa_1(G)\) is a permutation with a single fixed point and three \(2\)-cycles \(\Longleftrightarrow\) \(G\) is a descendant of the group \(\langle 32,34\rangle\), that is \(G\in\mathcal{T}(\langle 32,34\rangle)\). 
\end{itemize} \noindent More precisely, in the second case the following equivalences hold, depending on the parameters \(m,n\le\ell\), where \(\ell\le 11\) denotes a prescribed upper bound: \begin{itemize} \item \(m\ge n\ge 2\) (with \(n\) fixed) \(\Longleftrightarrow\) \(G\) belongs to the mainline \begin{equation} \label{eqn:MainLine} M_{j+1,k}:=\langle 32,34\rangle(-\#2;1)^j-\#2;2(-\#1;1)^k,\text{ with fixed }j=n-2 \end{equation} \noindent and varying \(k=m-n\ge 0\), of the coclass tree \(\mathcal{T}^{n+2}(\langle 32,34\rangle(-\#2;1)^{n-2}-\#2;2)\). \item \(n>m\ge 1\) (with \(m\) fixed) \(\Longleftrightarrow\) \(G\) belongs to the unique periodic coclass sequence \begin{equation} \label{eqn:Sequence} V_{j,k}:=\langle 32,34\rangle(-\#2;1)^j(-\#1;1)^k-\#1;2,\text{ with fixed }j=m-1 \end{equation} \noindent and varying \(k=n-m-1\ge 0\), whose members possess a permutation as their first layer transfer kernel type, of the coclass tree \(\mathcal{T}^{m+2}(\langle 32,34\rangle(-\#2;1)^{m-1})\). \end{itemize} \end{theorem} We add a corollary which gives the Artin pattern of the groups in Theorem \ref{thm:TwoStage}, firstly, since it is interesting in its own right, and secondly, because we are going to use its proof as a starting point for the proof of Theorem \ref{thm:TwoStage}. 
\begin{corollary} \label{cor:TwoStage} Under the assumptions of Theorem \ref{thm:TwoStage}, the Artin pattern \(\mathrm{AP}(G)=(\tau(G);\varkappa(G))\) of the \(2\)-tower group \(G=\mathrm{G}_2^\infty(K)\) of the biquadratic field \(K=\mathbb{Q}(\sqrt{-1},\sqrt{d})\) is given as follows: The ordered multi-layered transfer target type (TTT) \(\tau(G)=\lbrack\tau_0;\tau_1;\tau_2;\tau_3\rbrack\) of the Galois group \(G\) is given by \(\tau_0=(1^3)\), \(\tau_3=(m,n)\), and \begin{equation} \label{eqn:1stLayerTTT} \tau_1= \begin{cases} \lbrack (m+1,2),(2,1)^2,(1^3)^2,(2,1)^2\rbrack,\text{ if }\left(\frac{p_2}{q}\right)=-1, \\ \lbrack (m+1,n+1),(1^3)^6\rbrack,\text{ else}, \end{cases} \end{equation} \begin{equation} \label{eqn:2ndLayerTTT} \tau_2= \begin{cases} \lbrack (m+1,1),(m,2),(m+1,1),(2,1)^4\rbrack,\text{ if }\left(\frac{p_2}{q}\right)=-1, \\ \lbrack (m+1,n),(m,n+1),(\max(m+1,n+1),\min(m,n)),(1^3)^4\rbrack,\text{ else}. \end{cases} \end{equation} \noindent If we now denote by \(N_i:=\mathrm{Norm}_{K_i\vert K}(\mathrm{Cl}_2(K_i))\), \(1\le i\le 7\), the norm class groups of the seven unramified quadratic extensions \(K_i\vert K\), then the ordered multi-layered transfer kernel type (TKT) \(\varkappa(G)=\lbrack\varkappa_0;\varkappa_1;\varkappa_2;\varkappa_3\rbrack\) of the Galois group \(G\) is given by \(\varkappa_0=1\), \(\varkappa_2=(0^7)\), \(\varkappa_3=(0)\), and \begin{equation} \label{eqn:1stLayerTKT} \varkappa_1= \begin{cases} (N_1,N_2,N_3,N_5,N_4,N_6,N_7),\text{ if }\left(\frac{p_2}{q}\right)=-1, \\ (N_1,N_3,N_2,N_5,N_4,N_7,N_6),\text{ else}. \end{cases} \end{equation} \noindent Thus, \(\varkappa_1\) is always a permutation of the norm class groups \(N_i\). For \(\left(\frac{p_2}{q}\right)=-1\) it contains five fixed points and a single \(2\)-cycle, and otherwise it contains a single fixed point and three \(2\)-cycles. \end{corollary} \begin{proof} The underlying order of the \(7\) unramified quadratic, resp. 
bicyclic biquadratic, extensions of \(K\) is taken from \cite[\S\ 2.1, Thm.1,(3),(5)]{AZT}. For the TTT we use logarithmic abelian type invariants as explained in \cite[\S\ 2]{Ma2}. \(\tau_0\) is taken from \cite[\S\ 2.2, Thm.2,(1)]{AZT}, \(\tau_1,\tau_2\) from \cite[\S\ 2.3, Thm.3,(1),(2)]{AZT}, and \(\tau_3\) from \cite[\S\ 2.2, Thm.2,(5)]{AZT}. Concerning the TKT, \(\varkappa_0\) is trivial, \(\varkappa_1,\varkappa_2\) are taken from \cite[\S\ 2.3, Thm.3,(3)--(5)]{AZT}, and \(\varkappa_3\) is total, due to the Hilbert/Artin/Furtw\"angler principal ideal theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:TwoStage}] Firstly, the equivalence \(\left(\frac{p_2}{q}\right)=-1\) \(\Longleftrightarrow\) \(n=1\) is proved in \cite[\S\ 3, Lem.5]{AZT}. \noindent Next, we use the Artin pattern of \(G\), as given in Corollary \ref{cor:TwoStage}, to narrow down the possibilities for \(G\). The possible class-\(2\) quotients of \(G\) are exactly the immediate descendants of the root \(\langle 8,5\rangle\), that is, three vertices \(\langle 16,11\ldots 13\rangle\) of step size \(1\), nine vertices \(\langle 32,27\ldots 35\rangle\) of step size \(2\), and ten vertices \(\langle 64,73\ldots 82\rangle\) of step size \(3\). Among all descendants of \(\langle 8,5\rangle\), the mainline vertices of the tree \(\mathcal{T}(\langle 32,35\rangle)\) are characterized uniquely by the fact that their first layer TKT \(\varkappa_1\) is a permutation with five fixed points and a single \(2\)-cycle, and that their first layer TTT \(\tau_1\) contains the unique polarized (i.e. parameter dependent) component \((m+1,2)\). Note that the mainline vertices of the tree \(\mathcal{T}(\langle 32,31\rangle)\) reveal the same six stable (i.e. parameter independent) components \(((1^3)^2,(2,1)^4)\) of the accumulated (unordered) first layer TTT \(\tau_1\), but their first layer TKT \(\varkappa_1\) contains three \(2\)-cycles, just as for descendants of \(\langle 32,34\rangle\). 
However, vertices of the complete descendant tree \(\mathcal{T}(\langle 32,34\rangle)\) are characterized uniquely by six stable components \(((1^3)^6)\) of their first layer TTT \(\tau_1\). \noindent So far, we have been able to single out that \(G\) must be a descendant of either \(\langle 32,34\rangle\) or \(\langle 32,35\rangle\), by means of Artin patterns, without knowing a presentation. Now, the parametrized presentation for the group \(G=G_{m,n}\) in \cite[\S\ 2.2, Thm.2,(4)]{AZT}, \begin{equation} \label{eqn:Presentation} G_{m,n} = \langle\rho,\sigma,\tau\mid\rho^4=\sigma^{2^{n+1}}=\tau^{2^{m+1}}=1,\rho^2=\sigma^{2^n}, \lbrack\rho,\sigma\rbrack=\sigma^2,\lbrack\rho,\tau\rbrack=\tau^2,\lbrack\sigma,\tau\rbrack=1\rangle, \end{equation} \noindent is used as input for a Magma program script \cite{BCFS,MAGMA} which identifies a \(2\)-group, given by generators and relations, \begin{center} \texttt{Group}\(<\rho,\sigma,\tau\mid\text{ relator words in }\rho,\sigma,\tau>\), \end{center} \noindent with the aid of the following functions: \begin{itemize} \item \texttt{CanIdentifyGroup()} and \texttt{IdentifyGroup()} if \(\lvert G\rvert\le 2^8\), \item \texttt{IsInSmallGroupDatabase(), pQuotient(), NumberOfSmallGroups(), SmallGroup()} \\ and \texttt{IsIsomorphic()} if \(\lvert G\rvert=2^9\), and \item \texttt{GeneratepGroups()}, resp. a recursive call of \texttt{Descendants()} \\ (using \texttt{NuclearRank()} for the recursion), and \texttt{IsIsomorphic()} if \(\lvert G\rvert\ge 2^{10}\). \end{itemize} The output of the Magma script is in perfect accordance with the pruned descendant tree \(\mathcal{T}_\ast(\langle 8,5\rangle)\), as described in Theorem 21.1 and Corollary 21.1 of \cite[pp.182--183]{Ma1}. Finally, the class and coclass of \(G\) are given in \cite[\S\ 2.2, Thm.2,(6)]{AZT}. 
\end{proof} {\tiny \begin{figure} \caption{Pairs \((m,n)\) of parameters distributed over \(\mathcal{T}_\ast(\langle 8,5\rangle)\)} \label{fig:IdAndParam222} \end{figure} } {\tiny \begin{figure} \caption{Minimal radicands \(d\) distributed over \(\mathcal{T}_\ast(\langle 8,5\rangle)\)} \label{fig:MinRad222} \end{figure} } \section{Computational results for two-stage towers} \label{s:CompRslt} With the aid of the computational algebra system MAGMA \cite{MAGMA}, we have determined the pairs of parameters \((m,n)=(m(d),n(d))\), investigated in \cite{AZT}, for all \(11\,753\) square free radicands \(d=p_1p_2q\) of the shape in Theorem \ref{thm:TwoStage} which occur in the range \(0<d<2\cdot 10^6\). As mentioned at the beginning of \S\ \ref{s:TwoStage}, the result supports the conjecture that the corresponding \(2\)-tower groups \(G_{m(d),n(d)}\) cover the pruned tree \(\mathcal{T}_\ast(\langle 8,5\rangle)\) without gaps. Recall that a pair \((m,n)\) contains information on the \(2\)-class numbers of complex quadratic fields. So we have a reduction of hard problems for biquadratic fields to easy questions about quadratic fields. By means of the following invariants, the statistical distribution \(d\mapsto (m(d),n(d))\) of parameter pairs is visualized on the pruned descendant tree \(\mathcal{T}_\ast(\langle 8,5\rangle)\), using the injective (and probably even bijective) mapping \((m,n)\mapsto G_{m,n}\). For each fixed individual pair \((m,n)\), we define its \textit{minimal radicand} \(M(m,n)\) as an absolute invariant: \begin{equation} \label{eqn:StatInv} M(m,n) := \min\lbrace d>0\mid (m(d),n(d))=(m,n)\rbrace. \end{equation} The purely group theoretic pruned descendant tree was constructed in \cite[\S\ 21.1, pp.182--184]{Ma1}, and was shown in \cite[\S\ 10.4.1, Fig.7, p.175]{Ma1}, with vertices labelled by the standard identifiers in the SmallGroups Library \cite{BEO1,BEO2} or of the ANUPQ-package \cite{GNO}. 
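The case distinction \(\left(\frac{p_2}{q}\right)=-1\) versus \(\left(\frac{p_2}{q}\right)=+1\), which by \cite[\S\ 3, Lem.5]{AZT} is equivalent to \(n=1\) versus \(n\ge 2\), can be evaluated directly with Euler's criterion. The following minimal Python sketch is ours (not part of \cite{AZT}); the decomposition of the radicand \(d=p_1p_2q\) into its prime factors is assumed to be given:

```python
def legendre(a, q):
    """Legendre symbol (a/q) for an odd prime q, via Euler's criterion:
    (a/q) = a^((q-1)/2) mod q, normalized to a value in {-1, 0, 1}."""
    a %= q
    if a == 0:
        return 0
    r = pow(a, (q - 1) // 2, q)  # fast modular exponentiation
    return 1 if r == 1 else -1

# Example: the nonzero squares modulo 7 are {1, 2, 4}, so (2/7) = +1 and (3/7) = -1.
print(legendre(2, 7), legendre(3, 7))  # 1 -1
```

Thus `legendre(p2, q) == -1` selects the first case in Corollary \ref{cor:TwoStage}, and `== 1` the second.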
In Figure \ref{fig:IdAndParam222}, a pair \((m,n)\) of parameters is placed adjacent to the corresponding vertex \(G_{m,n}\) of the pruned descendant tree \(\mathcal{T}_\ast(\langle 8,5\rangle)\). There are no overlaps, since the mapping \((m,n)\mapsto G_{m,n}\) is injective. Each vertex is additionally labelled with a formal identifier, as used in \cite[Cor.21.1]{Ma1}. In Figure \ref{fig:MinRad222}, the minimal radicand \(M(m,n)\) for which the adjacent vertex is realized as the corresponding group \(G_{m,n}\) is shown underlined and in boldface. Vertices within the support of the distribution are surrounded by an oval. The oval is drawn in horizontal orientation for mainline vertices and in vertical orientation for vertices in other periodic coclass sequences. \section{Three-stage towers of \(3\)-class fields} \label{s:ThreeStage} Our second discovery of periodic bifurcations in trees of \(3\)-groups will now be applied to a family of quadratic number fields \(K\) with \(3\)-class group \(\mathrm{Cl}_3(K)\) of type \((3,3)\), originally investigated by ourselves in \cite{Ma,Ma0,Ma3}, and extended by Boston, Bush and Hajir in \cite{BBH}. The \(3\)-class tower groups \(G=\mathrm{G}_3^{\infty}(K)\) of these fields are conjecturally distributed over six periodic sequences arising from repeated bifurcations (of the new kind that was unknown until now), whereas it is proven that their metabelianizations populate six well-known periodic coclass sequences of fixed coclass \(2\).
\[\varkappa_1(K)=(2,2,3,4)\text{ or }(2,3,3,4).\] Further, let the integer \(2\le\ell\le 9\) denote a prescribed upper bound. \noindent Then the \(3\)-class field tower of \(K\) is non-metabelian with exactly three stages, and the isomorphism type of the Galois group \(G=\mathrm{G}_3^{\infty}(K)=\mathrm{Gal}(\mathrm{F}_3^{\infty}(K)\vert K)\) of the maximal unramified pro-\(3\) extension \(\mathrm{F}_3^{\infty}(K)\) of \(K\) is characterized uniquely by the positive integer parameter \(2\le u\le\ell\) defined by the \(3\)-class number \(h_3(k_0)=3^u\) of the simply real non-Galois cubic subfield \(k_0\) of the distinguished polarized extension \(L\) among \(L_1,\ldots,L_4\) (i.e., \(L=L_1\), resp. \(L=L_2\)): \begin{equation} \label{eqn:BifurcationSequence} \begin{aligned} G &\simeq\langle 729,49\rangle(-\#2;1-\#1;1)^j-\#2;4\text{ or }5\vert 6,\text{ resp.}\\ G &\simeq\langle 729,54\rangle(-\#2;1-\#1;1)^j-\#2;2\text{ or }4\vert 6,\text{ with }j=u-2. \end{aligned} \end{equation} \noindent The metabelianization \(G/G^{\prime\prime}\) of the Schur \(\sigma\)-group \(G\), that is, the Galois group \(\mathrm{G}_3^2(K)=\mathrm{Gal}(\mathrm{F}_3^2(K)\vert K)\) of the maximal metabelian unramified \(3\)-extension \(\mathrm{F}_3^2(K)\) of \(K\), is unbalanced and given by \begin{equation} \label{eqn:CoclassSequence} \begin{aligned} G/G^{\prime\prime} &\simeq\langle 729,49\rangle(-\#1;1-\#1;1)^k-\#1;4\text{ or }5\vert 6,\text{ resp.}\\ G/G^{\prime\prime} &\simeq\langle 729,54\rangle(-\#1;1-\#1;1)^k-\#1;2\text{ or }4\vert 6,\text{ with }k=u-2. \end{aligned} \end{equation} \end{theorem} Again, we first state a corollary whose proof will initialize the proof of Theorem \ref{thm:ThreeStage}.
\begin{corollary} \label{cor:ThreeStage} Under the assumptions of Theorem \ref{thm:ThreeStage}, the Artin pattern \(\mathrm{AP}(G)=(\tau(G);\varkappa(G))\) of the \(3\)-tower group \(G=\mathrm{G}_3^\infty(K)\) of the complex quadratic field \(K=\mathbb{Q}(\sqrt{d})\) is given as follows: The ordered multi-layered transfer target type (TTT) \(\tau(G)=\lbrack\tau_0;\tau_1;\tau_2\rbrack\) of the Galois group \(G\) is given by \(\tau_0=(1^3)\), \(\tau_2=(u,u,1)\), and \begin{equation} \label{eqn:TTT} \tau_1= \begin{cases} \lbrack (u+1,u),1^3,(2,1)^2\rbrack,\text{ if }G\in\mathcal{T}(\langle 729,49\rangle), \\ \lbrack (2,1),(u+1,u),(2,1)^2\rbrack,\text{ if }G\in\mathcal{T}(\langle 729,54\rangle). \end{cases} \end{equation} \noindent If we now denote by \(N_i:=\mathrm{Norm}_{L_i\vert K}(\mathrm{Cl}_3(L_i))\), \(1\le i\le 4\), the norm class groups of the four unramified cyclic cubic extensions \(L_i\vert K\), then the ordered multi-layered transfer kernel type (TKT) \(\varkappa(G)=\lbrack\varkappa_0;\varkappa_1;\varkappa_2\rbrack\) of the Galois group \(G\) is given by \(\varkappa_0=1\), \(\varkappa_2=(0)\), and \begin{equation} \label{eqn:TKT} \varkappa_1= \begin{cases} (N_1,N_1,N_2,N_2)\text{ or }(N_3,N_1,N_2,N_2),\text{ if }G\in\mathcal{T}(\langle 729,49\rangle), \\ (N_2,N_2,N_3,N_4)\text{ or }(N_2,N_3,N_3,N_4),\text{ if }G\in\mathcal{T}(\langle 729,54\rangle). \end{cases} \end{equation} \noindent Thus, \(\varkappa_1\) is not a permutation of the norm class groups \(N_i\). For \(G\in\mathcal{T}(\langle 729,49\rangle)\) it contains a single or no fixed point and no \(2\)-cycle, and for \(G\in\mathcal{T}(\langle 729,54\rangle)\) it contains three or two fixed points and no \(2\)-cycle. \end{corollary} \begin{proof} First, we must establish the connection of the TTT of \(G\) with the distinguished non-Galois simply real cubic field \(k_0\). 
Anticipating the partial result of Theorem \ref{thm:ThreeStage} that the metabelianization \(G/G^{\prime\prime}\) of \(G\) must be of coclass \(r=2\), we can determine the \(3\)-class numbers of all four non-Galois cubic subfields \(k_i<L_i\) with the aid of Theorem 4.2 in \cite[p.489]{Ma0}: with respect to the normalization in this theorem, we have \(h_3(k_0)=3^u=h_3(k_1)=3^{\frac{m-2}{2}}\) and uniformly \(h_3(k_i)=3\) for \(2\le i\le 4\), since \(e=r+1=3\), which implies \(\frac{e-1}{2}=1\), and \(G/G^{\prime\prime}\) has no defect of commutativity. The parameter \(m\) is the index of nilpotency of \(G/G^{\prime\prime}\), whence the nilpotency class is given by \(c=m-1\). Now, the statements are an immediate consequence of \S\S\ 4.1--4.2 in our recent article \cite{Ma2}, where the claims are reduced to theorems in our earlier papers: \cite[Thm.1.3, p.405]{Ma}, and, more generally, \cite[Thm.4.4, p.440 and Tbl.4.7, p.441]{Ma3}. We must only take into consideration that the \(3\)-class group \(\mathrm{Cl}_3(L)\) of \(L\) is nearly homocyclic with abelian type invariants \(A(3,c)\simeq (u+1,u)\), since \(u=\frac{m-2}{2}\), and thus \(2u+1=m-1=c\). \end{proof} \begin{proof} (Proof of Theorem \ref{thm:ThreeStage}) First, we use the Artin pattern of \(G\), as given in Corollary \ref{cor:ThreeStage}, to narrow down the possibilities for \(G\). The possible class-\(3\) quotients of \(G\) are exactly the immediate descendants of the common class-\(2\) quotient \(\langle 27,3\rangle\) of all \(3\)-groups with abelianization of type \((3,3)\) (apart from \(\langle 27,4\rangle\)), that is, four vertices \(\langle 81,7\ldots 10\rangle\) of step size \(1\) \cite[Fig.3]{Ma1}, and seven vertices \(\langle 243,3\ldots 9\rangle\) of step size \(2\) \cite[Fig.4]{Ma1}. All descendants of the former are of coclass \(1\) and reveal the same three stable (i.e. 
parameter independent) components \(((1^2)^3)\) of the first layer TTT \(\tau_1\), according to \cite[Thm.3.2,(1)]{Ma2}, which does not agree with the required TTT of \(G\). Among the latter, the criterion \cite[Cor.3.0.2, p.772]{BuMa} rejects three of the seven vertices, \(\langle 243,3\vert 4\vert 9\rangle\), since the TKT of \(G\) does not contain a \(2\)-cycle, and \(\langle 243,5\vert 7\rangle\) are discarded, since they are terminal. The remaining two vertices \(\langle 243,6\vert 8\rangle\) are exactly the parents of the decisive groups \(\langle 729,49\vert 54\rangle\), where periodic bifurcations set in. Now, Theorem 21.3 and Corollaries 21.2--21.3 in \cite[pp.185--187]{Ma1} show that, using the local notation of Corollary 21.2, \[G\simeq S_k:=\langle 729,49\vert 54\rangle(-\#2;1-\#1;1)^k-\#2;4\vert 5\vert 6\text{ resp. }2\vert 4\vert 6\] and \[G/G^{\prime\prime}\simeq V_{0,2k}:=\langle 729,49\vert 54\rangle(-\#1;1)^{2k}-\#1;4\vert 5\vert 6\text{ resp. }2\vert 4\vert 6,\] both with \(k=u-2\). \end{proof} {\tiny \begin{figure} \caption{Minimal absolute discriminants \(\lvert d\rvert<10^8\) distributed over \(\mathcal{T}^2(\langle 243,6\rangle)\)} \label{fig:MinDiscCocl2TreeQTyp33} \end{figure} } \section{Computational results for three-stage towers} \label{s:CompRsltThreeSage} With the aid of the computational algebra system MAGMA \cite{MAGMA}, where the class field theoretic techniques of Fieker \cite{Fi} are implemented, we have determined the Artin pattern \((\tau(K);\varkappa(K))\) of all complex quadratic fields \(K=\mathbb{Q}(\sqrt{d})\) with discriminants in the range \(-10^8<d<0\), whose first layer TTT \(\tau_1(K)\) had been precomputed by Boston, Bush and Hajir in the database underlying the numerical results in \cite{BBH}. Figure \ref{fig:MinDiscCocl2TreeQTyp33}, resp.
\ref{fig:MinDiscCocl2TreeUTyp33}, shows the minimal absolute discriminant \(\lvert d\rvert\), underlined and with boldface font, for which the adjacent vertex of the coclass tree \(\mathcal{T}^2(\langle 729,49\rangle)\), resp. \(\mathcal{T}^2(\langle 729,54\rangle)\), is realized as the metabelianization \(G/G^{\prime\prime}\) of the \(3\)-tower group \(G\) of \(K=\mathbb{Q}(\sqrt{d})\). Vertices within the support of the distribution are surrounded by an oval. The corresponding projections \(G\to G/G^{\prime\prime}\) have been visualized in Figures 8--9 of \cite[pp.188--189]{Ma1}. We have published this information in the Online Encyclopedia of Integer Sequences (OEIS) \cite{OEIS}, sequences A247692 to A247697. {\tiny \begin{figure} \caption{Minimal absolute discriminants \(\lvert d\rvert<10^8\) distributed over \(\mathcal{T}^2(\langle 243,8\rangle)\)} \label{fig:MinDiscCocl2TreeUTyp33} \end{figure} } We emphasize that the results of section \ref{s:ThreeStage} provide the background for considerably stronger assertions than those made in \cite{BuMa}: first, they concern the four TKTs E.6, E.14, E.8, E.9 instead of just TKT E.9 \cite[\S\ 4]{Ma2}, and second, they apply to varying odd nilpotency class \(5\le\mathrm{cl}(G)\le 19\) instead of just class \(5\). \end{document}
\begin{document} \title[Nonlinear Helmholtz equation ]{Complex solutions and stationary scattering for the nonlinear Helmholtz equation } \author{Huyuan Chen} \address{Department of Mathematics, Jiangxi Normal University, Nanchang,\\ Jiangxi 330022, PR China} \email{ chenhuyuan@yeah.net} \author{Gilles Ev\'equoz } \address{School of Engineering, University of Applied Sciences of Western Switzerland, Route du Rawil 47,\\ 1950 Sion, Switzerland} \email{ gilles.evequoz@hevs.ch} \author{Tobias Weth} \address{Goethe-Universit\"{a}t Frankfurt, Institut f\"{u}r Mathematik, Robert-Mayer-Str. 10\\ D-60629 Frankfurt, Germany } \email{ weth@math.uni-frankfurt.de} \begin{abstract} We study a stationary scattering problem related to the nonlinear Helmholtz equation $ - \Delta u -k^2 u = f(x,u) \ \ \text{in $\mathbb{R}^N$,} $ where $N \ge 3$ and $k>0$. For a given incident free wave $\varphi \in L^\infty(\mathbb{R}^N)$, we prove the existence of complex-valued solutions of the form $u=\varphi+u_{\text{sc}}$, where $u_{\text{sc}}$ satisfies the Sommerfeld outgoing radiation condition. Since neither a variational framework nor maximum principles are available for this problem, we use topological fixed point theory and global bifurcation theory to solve an associated integral equation involving the Helmholtz resolvent operator. The key step of this approach is the proof of suitable a priori bounds. \end{abstract} \maketitle \section{Introduction} A basic model for wave propagation in an ambient medium with nonlinear response is provided by the nonlinear wave equation \begin{equation}\label{nlw0} \frac{\partial^2 \psi}{\partial t^2}(t,x) - \Delta \psi(t,x) =f(x,\psi(t,x)), \qquad (t,x)\in\mathbb{R}\times\mathbb{R}^N.
\end{equation} Considering nonlinearities of the form $f(x,\psi)=g(x,|\psi|^2)\psi$, where $g$ is a real-valued function, the time-periodic ansatz \begin{equation} \label{psi-ansatz} \psi(t,x)=e^{-i k t}u(x), \qquad k >0 \end{equation} leads to the nonlinear Helmholtz equation \begin{equation}\label{nlh-1} - \Delta u -k^2 u = f(x,u) \qquad \text{in $\mathbb{R}^N$.} \end{equation} Assuming in this model that nonlinear interactions occur only locally in space, we are led to restrict our attention to nonlinearities $f \in C(\mathbb{R}^N \times \mathbb{C},\mathbb{C})$ with $\lim \limits_{|x| \to \infty}f(x,u)=0$ for every $u \in \mathbb{C}$. The stationary scattering problem then consists in analyzing solutions of the form $u=\varphi+u_{\text{sc}}$, where $\varphi$ is a solution of the homogeneous Helmholtz equation $-\Delta \varphi - k^2 \varphi=0$ and $u_{\text{sc}}$ obeys the Sommerfeld outgoing radiation condition \begin{equation}\label{sommerfeld-1} r^{\frac{N-1}{2}}\left|\frac{\partial u_{\text{sc}}}{\partial r}-iku_{\text{sc}} \right|\to 0\quad\text{as }r=|x| \to\infty \end{equation} or a suitable variant of it. The function $\varphi$ represents a given {\em incident free wave} whose interaction with the nonlinear ambient medium gives rise to a scattered wave $u_{\text{sc}}$. Usually, $\varphi$ is chosen as a plane wave \begin{equation} \label{eq:plane-wave} \varphi(x)=e^{ik\:x\cdot \xi},\qquad \xi\in S^{N-1} \end{equation} or as a superposition of plane waves. To justify the notions of incident and scattered wave, let us assume for the moment that the nonlinearity is compactly supported in the space variable $x$. Then $u_{\text{sc}}$ has the asymptotics $u_{\text{sc}}(x)= r^{\frac{1-N}{2}}e^{i kr}g(\frac{x}{|x|})+ o(r^{\frac{1-N}{2}})$ as $r= |x| \to \infty$ with a function $g: S^{N-1} \to \mathbb{C}$ (see \cite[Theorem 2.5]{colton-kress} and \cite[Proposition 2.6]{EW1}).
For incident plane waves $\varphi$ as in \eqref{eq:plane-wave}, this leads to the asymptotic expansion \begin{equation*} \psi(t,x)=e^{i k (x \cdot \xi-t)}+ r^{\frac{1-N}{2}}e^{i k(r-t)}g(\frac{x}{|x|}) +o(r^{\frac{1-N}{2}}) \qquad \text{as $r= |x| \to \infty$} \end{equation*} uniformly in $t \in \mathbb{R}$ for the corresponding time periodic solution given by the ansatz~\eqref{psi-ansatz}. This expansion clearly shows the asymptotic decomposition of the wave function $\psi$ into two parts, of which one is propagating with constant speed $k$ in the given direction $\xi$ and the other part is outward radiating in the radial direction. For a more detailed discussion of the connection between the notions of stationary and dynamical scattering, we refer the reader to \cite{komech} and the references therein. \qquad In the (affine) linear case $f(x,u)=a(x) u+b(x)$, both the forward and the inverse stationary scattering problem have been extensively studied and are reasonably well understood from a functional analytic point of view (see e.g. \cite{colton-kress} and the references therein). In contrast, the nonlinear setting remains largely unexplored, although it appears in important models driven by applications and has therefore been receiving rapidly growing attention in recent years. Specifically, we mention the modeling of propagation and scattering of electromagnetic waves in localized nonlinear Kerr media as considered e.g. in \cite{fibich-tsynkov:2005,baruch-fibich-tsynkov:2009,wu-zou}. In this context, the nonlinear Helmholtz equation arises from a reduction of Maxwell's equations in the case of a linearly polarized electric field after elimination of the corresponding magnetic field.
As noted in \cite{wu-zou}, this leads to a special case of equation (\ref{nlh-1}) given by $$ - \Delta u -k^2 u = \rho 1_{\Omega}|u|^2 u \qquad \text{in $\mathbb{R}^N$.} $$ Here $\Omega \subset \mathbb{R}^N$ is the support of the nonlinear Kerr medium and $\rho$ is the Kerr constant given as quotient of the Kerr coefficient of the medium and the index of refraction of the ambient homogeneous medium. Both from a theoretical and an applied point of view, it is of great interest to understand self-focusing and scattering effects of laser beams interacting with localized nonlinear media, and computational approaches to these questions have been developed e.g. in \cite{fibich-tsynkov:2005,baruch-fibich-tsynkov:2009,wu-zou}. From a theoretical point of view, the current understanding of the stationary scattering problem for (\ref{nlh-1}) is mainly restricted to the case of small incident waves $\varphi$ which can be reduced to a perturbation of an associated linear problem in suitable function spaces. In this case, existence and well-posedness results have been obtained by Guti\'errez \cite{G}, Jalade \cite{J} and Gell-Redman et al. \cite{Gell-Red}. In \cite{J}, the scattering problem is studied for a small incident plane wave and a family of compactly supported nonlinearities in dimension $N=3$. The main result in \cite{G} yields, in dimensions $N=3,4$, the existence of solutions to the scattering problem with small incident Herglotz wave $\varphi$ and cubic power nonlinearity. We recall that a Herglotz wave is a function of the type \begin{equation} \label{eq:18} x \mapsto \varphi(x):= \int_{S^{N-1}} e^{i k(x\cdot \xi)} g (\xi)\,d \sigma(\xi) \qquad \text{for some function $g \in L^2(S^{N-1})$.} \end{equation} Since plane waves of the form (\ref{eq:plane-wave}) cannot be written in this way, they are not admitted in \cite{G}. On the other hand, no asymptotic decay of the nonlinearity is required for the approach developed in \cite{G}. 
This is also the case for the approach in \cite{Gell-Red}, where more general nonlinearities are considered, while the class of admissible incident Herglotz waves $\varphi$ is restricted by assuming smallness measured in higher Sobolev norms on $S^{N-1}$. \qquad The main reason for the smallness assumption in the papers \cite{G,J,Gell-Red} is the use of contraction mappings together with resolvent estimates for the Helmholtz operator. The main aim of this paper is to remove this smallness assumption by means of different tools from nonlinear analysis and new a priori estimates on the set of solutions. More precisely, for a given solution $\varphi \in L^\infty(\mathbb{R}^N)$ of the homogeneous Helmholtz equation $\Delta \varphi + k^2 \varphi= 0$, which we shall refer to as {\em incident free wave} in the following, we wish to find solutions of (\ref{nlh-1}) of the form $u = \varphi + u_{\text{sc}} \in L^\infty(\mathbb{R}^N)$ with $u_{\text{sc}}$ satisfying (\ref{sommerfeld-1}) or a suitable variant of this radiation condition. This problem can be reduced to an integral equation involving the Helmholtz resolvent operator ${\mathcal R}_k$, which is formally given as a convolution ${\mathcal R}_k f= \Phi_k * f$ with the fundamental solution \begin{equation}\label{eqn:fund_sol} \Phi_k : \mathbb{R}^N \setminus \{0\} \to \mathbb{C}, \qquad \Phi_k(x)=\frac{i}{4} \Bigl(\frac{k}{2\pi |x|}\Bigr)^{\frac{N-2}{2}}H^{(1)}_{\frac{N-2}{2}}(k|x|) \end{equation} associated to (\ref{sommerfeld-1}). Here $H^{(1)}_{\frac{N-2}{2}}$ is the Hankel function of the first kind of order $\frac{N-2}{2}$, see e.g. \cite{AS}. It is easy to see from the asymptotics of $H^{(1)}_{\frac{N-2}{2}}$ that $\Phi_k$ satisfies (\ref{sommerfeld-1}), and the same is true for $u:= {\mathcal R}_k h= \Phi_k * h$ e.g. in the case where $h \in L^\infty(\mathbb{R}^N)$ has compact support.
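In the special case \(N=3\), the Hankel function in (\ref{eqn:fund_sol}) has order \(\frac{N-2}{2}=\frac{1}{2}\) and the elementary closed form \(H^{(1)}_{1/2}(z)=-i\sqrt{2/(\pi z)}\,e^{iz}\), so \(\Phi_k\) reduces to the classical outgoing kernel \(e^{ik\lvert x\rvert}/(4\pi\lvert x\rvert)\). The following self-contained Python fragment (ours; it hard-codes the half-integer closed form instead of calling a special-function library) checks this reduction numerically:

```python
import cmath
import math

def hankel1_half(z):
    # Closed form of the Hankel function of order 1/2:
    # H^{(1)}_{1/2}(z) = -i * sqrt(2/(pi z)) * e^{iz}
    return -1j * cmath.sqrt(2.0 / (math.pi * z)) * cmath.exp(1j * z)

def phi_k_general(k, r):
    # Formula (eqn:fund_sol) for N = 3: (i/4) * (k/(2 pi r))^{1/2} * H^{(1)}_{1/2}(k r)
    return 0.25j * math.sqrt(k / (2.0 * math.pi * r)) * hankel1_half(k * r)

def phi_k_closed(k, r):
    # Classical outgoing Helmholtz kernel in R^3: e^{ikr} / (4 pi r)
    return cmath.exp(1j * k * r) / (4.0 * math.pi * r)

k, r = 2.0, 1.7
print(abs(phi_k_general(k, r) - phi_k_closed(k, r)))  # agrees up to rounding error
```

The agreement is exact analytically, since \(\frac{i}{4}\sqrt{\frac{k}{2\pi r}}\cdot(-i)\sqrt{\frac{2}{\pi k r}}\,e^{ikr}=\frac{e^{ikr}}{4\pi r}\).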
\qquad By the estimate in \cite[Theorem 8]{G} and the remark following it, an integral variant of (\ref{sommerfeld-1}) is available under weaker assumptions on $h$. More precisely, if $N=3,4$ and $1 < p \le \frac{2(N+1)}{N+3}$ or $N \ge 5$ and $\frac{2N}{N+4} \le p \le \frac{2(N+1)}{N+3}$, then, for $h \in L^p(\mathbb{R}^N)$, the function $u= {\mathcal R}_k h$ is a well-defined strong solution of the inhomogeneous Helmholtz equation $-\Delta u - k^2 u = h$ satisfying the following variant of the Sommerfeld outgoing radiation condition: \begin{equation}\label{eqn:sommerfeld1-averaged} \lim_{R\to\infty}\frac{1}{R} \int_{B_R}\left|\nabla u(x)-iku(x)\frac{x}{|x|} \right|^2\, dx=0. \end{equation} Hence, under appropriate assumptions on the nonlinearity $f$, we are led to study the integral equation \begin{equation}\label{nlh-1-integral} u = {\mathcal R}_k(N_f(u))+ \varphi \qquad \text{in $L^\infty(\mathbb{R}^N)$} \end{equation} for a given incident free wave $\varphi \in L^\infty(\mathbb{R}^N)$. Here $N_f$ is the substitution operator associated to $f$ given by $N_f(u)(x):= f(x,u(x))$. \qquad To state our main results we need to introduce some more notation. It is convenient to define $\langle x \rangle = (1+|x|^2)^{\frac{1}{2}}$ for $x \in \mathbb{R}^N$. For $\alpha\in\mathbb{R}$ and a measurable subset $A \subset \mathbb{R}^N$, we consider the Banach space $L^\infty_\alpha(A)$ of measurable functions $w: A \to \mathbb{C}$ with $$ \norm{w}_{L^\infty_\alpha(A)}:=\| \langle \,\cdot\, \rangle^{\alpha} w\|_{L^\infty(A) } <+\infty. $$ In particular, $L^\infty(A)=L^\infty_0(A)$. In the case $A= \mathbb{R}^N$, we merely write $\|\cdot\|_{L^\infty_\alpha}$ in place of $\|\cdot\|_{L^\infty_\alpha(\mathbb{R}^N)}$. For subspaces of real-valued functions, we use the notations $L^p(A,\mathbb{R})$ for $1 \le p \le \infty$ and $L^\infty_\alpha(A,\mathbb{R})$. We first note the following preliminary observation regarding properties of the resolvent operator ${\mathcal R}_k$. 
\begin{proposition}\label{resolvent-compact-and-continuous} Let $N\geq2$, $\alpha>\frac{N+1}{2}$ and $\tau(\alpha)$ be defined by \begin{align} \label{exp 1} \tau(\alpha) &= \begin{cases} \alpha-\frac{N+1}{2}\quad&{\rm if}\ \, \frac{N+1}{2}<\alpha<N, \\[1.5mm] \frac{N-1}{2}\quad&{\rm if}\ \, \alpha\geq N. \end{cases} \end{align} Then we have \begin{equation} \label{eq:kappa-sigma-finite} \kappa_{\alpha}:= \sup \Big\{ \bigl \| |\Phi_k| * w \bigr \|_{L^\infty_{\tau(\alpha)}}:\: w \in L^\infty_\alpha(\mathbb{R}^N),\: \| w\|_{L^\infty_{\alpha }}=1 \Big\} < \infty, \end{equation} so ${\mathcal R}_k$ defines a bounded linear map $L^\infty_\alpha(\mathbb{R}^N) \to L^\infty_{\tau(\alpha)}(\mathbb{R}^N)$. Moreover: \begin{enumerate} \item[(i)] The resolvent operator defines a compact linear map ${\mathcal R}_k: L^\infty_\alpha(\mathbb{R}^N) \to L^\infty(\mathbb{R}^N)$.\\ \item[(ii)] If $\alpha > \frac{N(N+3)}{2(N+1)}$ and $h \in L^\infty_{\alpha}(\mathbb{R}^N)$, then the function $u:= {\mathcal R}_k h$ is a strong solution of $-\Delta u - k^2 u = h$ satisfying~(\ref{eqn:sommerfeld1-averaged}). If $\alpha > N$, then $u$ satisfies~(\ref{sommerfeld-1}). \end{enumerate} \end{proposition} \qquad Our first main existence result is concerned with linearly bounded nonlinearities $f$. 
\begin{theorem} \label{W teo 1-sublinear} Let, for some $\alpha>\frac{N+1}{2}$, the nonlinearity $f: \mathbb{R}^N \times \mathbb{C} \to \mathbb{C}$ be a continuous function satisfying \begin{equation} \label{eq:assumption-f1} \sup_{|u|\le M,x \in \mathbb{R}^N} \langle x \rangle^{\alpha}|f(x,u)|< \infty \qquad \text{for all $M >0$.} \end{equation} Moreover, suppose that \underline{one} of the following assumptions is satisfied: \begin{enumerate} \item[$(f_1)$] The nonlinearity is of the form $f(x,u)= a(x) u + b(x,u)$ with $a \in L^\infty_\alpha(\mathbb{R}^N,\mathbb{R})$ and $$ \sup \limits_{|u|\le M, x \in \mathbb{R}^N} \langle x \rangle^{\alpha} |b(x,u)|= o(M) \qquad \text{as $M \to +\infty.$} $$ \item[$(f_2)$] There exist $Q,b \in L^\infty_{\alpha}(\mathbb{R}^N,\mathbb{R})$ with $\|Q\|_{L^\infty_\alpha} < \frac{1}{\kappa_\alpha}$, where $\kappa_\alpha$ is given in (\ref{eq:kappa-sigma-finite}), and \begin{equation*} |f(x,u)|\leq Q(x) |u|+b(x)\qquad\text{for all }\, (x,u)\in \mathbb{R}^N \times \mathbb{C}. \end{equation*} \end{enumerate} Then, for any given solution $\varphi \in L^\infty(\mathbb{R}^N)$ of the homogeneous Helmholtz equation $\Delta \varphi + k^2 \varphi = 0$, the equation (\ref{nlh-1-integral}) admits a solution $u \in L^\infty(\mathbb{R}^N)$. \end{theorem} \begin{remark} $(i)$ In many semilinear elliptic problems with asymptotically linear nonlinearities as in assumption $(f_1)$, additional nonresonance conditions have to be assumed to guarantee a priori bounds which eventually lead to the existence of solutions. This is not the case in the present scattering problem. We shall establish a priori bounds merely as a consequence of $(f_1)$ by means of suitable nonexistence results for solutions of the linear Helmholtz equation satisfying the radiation condition~(\ref{eqn:sommerfeld1-averaged}). The key assumption here is that the function $a$ in $(f_1)$ is real-valued.
\qquad $(ii)$ Theorem~\ref{W teo 1-sublinear} leaves open the question of uniqueness of solutions to (\ref{nlh-1-integral}). In fact, under the sole assumptions of Theorem~\ref{W teo 1-sublinear}, uniqueness is not to be expected. If, however, for some $\alpha>\frac{N+1}{2}$, the nonlinearity $f \in C(\mathbb{R}^N \times \mathbb{R}, \mathbb{R})$ satisfies (\ref{eq:assumption-f1}) and the Lipschitz condition \begin{equation} \label{eq:assumption-f1-lipschitz} \ell_\alpha:= \sup \Bigl \{ \langle x \rangle^{\alpha} \, \Bigl|\frac{f(x,u)-f(x,v)}{u-v}\Bigr|\::\: u,v \in \mathbb{R}, \: x \in \mathbb{R}^N \Bigr\} < \frac{1}{\kappa_\alpha}, \end{equation} then the contraction mapping principle readily yields the existence of a unique solution $u \in L^\infty(\mathbb{R}^N)$ of (\ref{nlh-1-integral}) for given $\varphi \in L^\infty(\mathbb{R}^N)$, see Theorem~\ref{theo-uniqueness} below. \end{remark} \qquad Next we turn our attention to superlinear nonlinearities which do not satisfy $(f_1)$ or $(f_2)$. Assuming additional regularity estimates for $f$, we can still prove the existence of solutions of (\ref{nlh-1-integral}) in the case where $\|\varphi\|_{L^\infty(\mathbb{R}^N)}$ is small. More precisely, we have the following. \begin{theorem} \label{teo-implicit-function} Let, for some $\alpha>\frac{N+1}{2}$, the nonlinearity $f: \mathbb{R}^N \times \mathbb{C} \to \mathbb{C}$ be a continuous function satisfying (\ref{eq:assumption-f1}). Suppose moreover that the function $f(x,\cdot):\mathbb{C} \to \mathbb{C}$ is real differentiable for every $x \in \mathbb{R}^N$, and that $f':= \partial_u f: \mathbb{R}^N \times \mathbb{C} \to {\mathcal L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})$ is a continuous function satisfying \begin{equation} \label{eq:assumption-f1-diff} \sup_{|u|\le M,x \in \mathbb{R}^N} \langle x \rangle^{\alpha}\|f'(x,u)\|_{{\mathcal L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})}< \infty. 
\end{equation} Finally, suppose that $f(x,0)=0$ and $f'(x,0)= 0 \in {\mathcal L}_\mathbb{R}(\mathbb{C},\mathbb{C})$ for all $x \in \mathbb{R}^N$. \qquad Then there exist open neighborhoods $U,V \subset L^\infty(\mathbb{R}^N)$ of zero with the property that for every $\varphi \in V$ there exists a unique solution $u= u_\varphi \in U$ of (\ref{nlh-1-integral}). Moreover, the map $V \to U$, $\varphi \mapsto u_\varphi$ is of class $C^1$. \end{theorem} \qquad The proof of this theorem is very short and merely based on the inverse function theorem, see Section~\ref{sec:proofs-main-results} below. It applies in particular to power type nonlinearities \begin{equation} \label{eq:power-type} f(x,u)=Q(x)|u|^{p-2}u. \end{equation} More precisely, if $p>2$ and $Q \in L^\infty_{\alpha}(\mathbb{R}^N)$ for some $\alpha>\frac{N+1}{2}$, we find that $f(x,\cdot)$ is real differentiable for every $x \in \mathbb{R}^N$, and $f'(x,u)= \partial_u f(x,u) \in {\mathcal L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})$ is given by $f'(x,u)v = Q(x)\bigl( \frac{p}{2}|u|^{p-2}v + \frac{p-2}{2}|u|^{p-4} u^2 \bar v\bigr)$, which implies that $$ \|f'(x,u)\|_{{\mathcal L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})} \le (p-1)|Q(x)||u|^{p-2} \qquad \text{for $x \in \mathbb{R}^N$, $u \in \mathbb{C}$}. $$ From this it is easy to deduce that the assumptions of Theorem~\ref{teo-implicit-function} are satisfied in this case. In particular, for given $\varphi \in L^\infty(\mathbb{R}^N)$, Theorem~\ref{teo-implicit-function} yields the existence of $\epsilon>0$ and a unique local branch $(-\epsilon,\epsilon) \to L^\infty(\mathbb{R}^N)$, $\lambda \mapsto u_\lambda$ of solutions of the equation \begin{equation}\label{nlh-1-integral-parameter-lambda} u = {\mathcal R}_k(Q|u|^{p-2}u)+ \lambda \varphi \qquad \text{in $L^\infty(\mathbb{R}^N)$.} \end{equation} In our next result, we establish the existence of a global continuation of this local branch.
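Before turning to the global continuation, we remark that the displayed formula for \(f'(x,u)v\) can be verified by a real-directional difference quotient, since real differentiability means \(f(x,u+hv)=f(x,u)+h\,f'(x,u)v+o(h)\) for real \(h\to 0\). A minimal Python sketch (with the illustrative choices \(Q\equiv 1\) and \(p=3\), which are ours):

```python
p = 3.0  # any exponent p > 2 works; p = 3 chosen for illustration

def f(u):
    # f(u) = |u|^{p-2} u  (taking Q(x) = 1 for the check)
    return abs(u) ** (p - 2) * u

def df(u, v):
    # Real derivative from the text:
    # f'(u)v = (p/2)|u|^{p-2} v + ((p-2)/2)|u|^{p-4} u^2 conj(v)
    return ((p / 2) * abs(u) ** (p - 2) * v
            + ((p - 2) / 2) * abs(u) ** (p - 4) * u * u * v.conjugate())

u, v, h = 1.0 + 1.0j, 0.5 - 0.3j, 1e-6
fd = (f(u + h * v) - f(u)) / h   # difference quotient along the direction v
print(abs(fd - df(u, v)))        # small, of order h
```

The operator norm bound \(\|f'(x,u)\|\le (p-1)|Q(x)||u|^{p-2}\) then follows from the triangle inequality applied to the two terms.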
\begin{theorem}\label{thm:rabinowitz-applied} Let $N\geq 3$, $2<p<2^\ast$, $Q\in L^\infty_\alpha(\mathbb{R}^N,\mathbb{R})\backslash\{0\}$ for some $\alpha > \frac{N+1}{2}$ and $\varphi\in L^\infty(\mathbb{R}^N)$. Moreover, let $$ {\mathcal S}_\varphi:= \{(\lambda,u)\::\: \lambda \ge 0,\: u \in L^\infty(\mathbb{R}^N),\: \text{$u$ solves (\ref{nlh-1-integral-parameter-lambda})}\}\:\subset \: [0,\infty) \times L^\infty(\mathbb{R}^N), $$ and let ${\mathcal C}_\varphi \subset {\mathcal S}_\varphi$ denote the connected component of ${\mathcal S}_\varphi$ which contains the point $(0,0)$. Then ${\mathcal C}_\varphi \setminus \{(0,0)\}$ is an unbounded subset of $(0,\infty) \times L^\infty(\mathbb{R}^N)$. \end{theorem} \qquad We note that in general the unboundedness of ${\mathcal C}_\varphi$ does not guarantee that ${\mathcal C}_{\varphi}$ intersects $\{1\} \times L^\infty(\mathbb{R}^N)$, since the branch given by ${\mathcal C}_\varphi$ may blow up in $L^\infty(\mathbb{R}^N)$ at some value $\lambda \in (0,1)$. In particular, under the general assumptions of Theorem~\ref{thm:rabinowitz-applied}, we cannot guarantee the existence of solutions of the equation (\ref{nlh-1-integral}). For this, additional a priori bounds on the set of solutions are needed. We shall find such a priori bounds in the case where $Q \le 0$ in $\mathbb{R}^N$, which is usually referred to as the {\em defocusing case}. Moreover, we require $Q$ to have compact support with some control of its diameter. In the following, we let $L^\infty_c(\mathbb{R}^N)$ denote the set of functions $Q \in L^\infty(\mathbb{R}^N)$ with compact support $\operatorname{supp} Q \subset \mathbb{R}^N$, and we let $L^\infty_c(\mathbb{R}^N, \mathbb{R})$ denote the subspace of real-valued functions in $L^\infty_c(\mathbb{R}^N)$. We then have the following result. 
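\qquad Before stating it, we briefly indicate, on an informal level, how such a priori bounds are used; this is the standard continuation argument, and the detailed proofs are given in Section~\ref{sec:proofs-main-results}. If, for some $\Lambda>0$, there is a constant $C(\Lambda)>0$ with
\[
\|u\|_{L^\infty(\mathbb{R}^N)} \le C(\Lambda) \qquad \text{for all $(\lambda,u) \in {\mathcal S}_\varphi$ with $\lambda \le \Lambda$,}
\]
then ${\mathcal C}_\varphi \cap \bigl([0,\Lambda] \times L^\infty(\mathbb{R}^N)\bigr)$ is bounded. Since ${\mathcal C}_\varphi$ is connected, unbounded and contains $(0,0)$, its projection onto the $\lambda$-axis is then an interval containing $[0,\Lambda]$; in particular, ${\mathcal C}_\varphi$ intersects $\{\lambda\} \times L^\infty(\mathbb{R}^N)$ for every $\lambda \in [0,\Lambda]$.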
\begin{theorem}\label{thm:unbounded_branch-defocusing} Let $N\geq 3$, $2<p<2^\ast$, $Q\in L^\infty_c(\mathbb{R}^N,\mathbb{R})\backslash\{0\}$ and $\varphi\in L^\infty(\mathbb{R}^N)$. Assume furthermore that $Q\leq 0$ a.e. in $\mathbb{R}^N$ and $\text{diam}(\text{supp }Q)\le \frac{{\bf z({\text{\tiny $N$}})}}{k}$, where ${\bf z({\text{\tiny $N$}})}$ denotes the first positive zero of the Bessel function $Y_{\frac{N-2}2}$ of the second kind of order $\frac{N-2}{2}$. Then the set ${\mathcal C}_\varphi$ given in Theorem~\ref{thm:rabinowitz-applied} intersects $\{\lambda\} \times L^\infty(\mathbb{R}^N)$ for every $\lambda>0$. In particular, (\ref{nlh-1-integral-parameter-lambda}) admits a solution with $\lambda=1$. \end{theorem} \qquad To put the assumption on the support of $Q$ into perspective, we note that ${\bf z({\text{\footnotesize $3$}})} = \frac{\pi}{2}$ since $Y_{\frac{1}{2}}(t)=- \sqrt{\frac{2}{\pi t}} \cos t$ for $t>0$. Moreover, ${\bf z({\text{\tiny $N$}})} > {\bf z({\text{\footnotesize $3$}})}$ for $N > 3$, see \cite[Section 9.5]{AS}. Consequently, the assumptions of Theorem~\ref{thm:unbounded_branch-defocusing} are satisfied if $Q\in L^\infty_c(\mathbb{R}^N,\mathbb{R})\backslash\{0\}$ is a nonpositive function with $\text{diam}(\text{supp }Q)< \frac{\pi}{2k}$. We also refer to \cite[p. 467]{AS} for a list of the values of ${\bf z({\text{\tiny $N$}})}$ for $3 \le N \le 15$. \qquad It seems appropriate to compare our results with recent work on the existence of {\em real-valued (standing wave) solutions} of (\ref{nlh-1}). A large class of such real-valued solutions has been detected and studied extensively in recent years by considering the associated integral equation \begin{equation} \label{eq:real-valued-variational} u = \Psi_k \ast(N_f(u)), \end{equation} where $\Psi_k$ is the real part of the fundamental solution $\Phi_k$, see e.g. \cite{MMP,EW0,EW1,EY} and the references therein. 
In particular, a variational approach to detect and analyze solutions of (\ref{eq:real-valued-variational}) has been set up in \cite{EW1} for the special case where the nonlinearity $f$ is of the form $f(x,u)=Q(x)|u|^{p-2}u$ with nonnegative $Q \in L^\infty(\mathbb{R}^N,\mathbb{R})$ and suitable exponents $p>2$. Variants of this variational approach have been developed further in \cite{MMP,EY} under appropriate assumptions on the nonlinearity. However, the variational methods in these papers are of no use in the context of the integral equation (\ref{nlh-1-integral}), which has no variational structure. The contrast between real standing wave solutions and complex scattering solutions is even more striking: as we shall see, the related homogeneous equation $u = {\mathcal R}_k [Q|u|^{p-2}u]$ admits only the trivial bounded solution $u\equiv 0$ if $p \ge 2$ and $Q \in L^\infty_\alpha(\mathbb{R}^N,\mathbb{R})$ for some $\alpha> \frac{N+1}{2}$. Indeed, we shall prove this Liouville-type result in Proposition \ref{corol-kato} below by adapting a nonexistence result due to Kato \cite{kato59} to the present nonlinear context. \qquad In the perturbative setting where a priori smallness assumptions are imposed, the detection of real and complex solutions of (\ref{nlh-1-integral}) follows the same strategy of applying contraction mapping arguments in suitable function spaces. In this context, we mention the paper \cite{Mandel1}, where a variant of the contraction mapping argument of Guti\'errez \cite{G} is developed and used to detect continua of small real-valued solutions of (\ref{nlh-1}) for a larger class of nonlinearities than in \cite{G}. More precisely, these continua are found by solving the non-homogeneous variant $u = \Psi_k \ast(Q|u|^{p-2}u) +\varphi$ of (\ref{eq:real-valued-variational}) for a range of given small real-valued solutions $\varphi$ of the homogeneous Helmholtz equation $-\Delta \varphi -\varphi =0$. 
\qquad Due to the lack of a priori smallness assumptions and the lack of a variational structure, our main results given in Theorems 1.2, 1.5 and 1.6 require a different approach than in the above-mentioned papers. As mentioned earlier, this approach is based on topological fixed point theory, and it therefore requires suitable a priori bounds. With regard to this aspect, the present paper is related to \cite{EW2}, where continuous branches of real-valued standing wave solutions of (\ref{eq:real-valued-variational}) have been constructed. However, while the derivation of suitable a priori bounds is the key step both in \cite{EW2} and in the present paper, these bounds are of a different nature as they relate to different integral equations and to different classes of solutions. In \cite{EW2}, under suitable additional assumptions on $Q$ and $p$, a priori bounds are derived for real-valued solutions of $u = \Psi_k \ast(Q|u|^{p-2}u)$ which are positive within the support of the nonlinearity $f$. In contrast, here we need a priori bounds for complex solutions of (\ref{nlh-1-integral}), and for this we cannot use positivity properties and local maximum principles. Instead, the approach of the present paper is based on a Liouville theorem relying on Sommerfeld's radiation condition and on combining regularity and test function estimates with local monotonicity properties of the function $\Psi_k$, see Sections \ref{nonexistence} and~\ref{sec:priori-estim-defoc} below. \qquad The paper is organized as follows. In Section~\ref{sec:estim-helmh-resolv} we establish basic estimates of the resolvent operator ${\mathcal R}_k$, and we prove Proposition~\ref{resolvent-compact-and-continuous}. In Section~\ref{sec:estim-subst-oper}, we show useful estimates and regularity properties of the substitution operator associated with the nonlinearity $f(x,u)$. 
In order to apply topological fixed point theory, we first need to prove the nonexistence of solutions to linear and superlinear integral equations related to the operator ${\mathcal R}_k$. This will be done in Section~\ref{nonexistence}. In Section~\ref{sec:priori-estim-defoc}, we then prove a priori bounds for solutions of equation \eqref{nlh-1-integral} and related variants under various assumptions on the nonlinearity $f$. The proofs of the main theorems are then completed in Section~\ref{sec:proofs-main-results}. Finally, in the appendix, we provide a relative a priori bound based on bootstrap regularity estimates between $L^p$-spaces which is used in the proof of Theorem~\ref{thm:unbounded_branch-defocusing}. \section{Estimates for the Helmholtz resolvent operator} \label{sec:estim-helmh-resolv} \begin{lemma}\label{lm 2.1} Let $N\geq2$, $k>0$, and for $\alpha>\frac{N+1}{2}$ let $\tau(\alpha)$ be defined by (\ref{exp 1}). Then for any $\alpha>\frac{N+1}{2}$ and $v\in L^\infty_{\alpha } (\mathbb{R}^N)$, we have \[ \||\Phi_k|* v\|_{L^\infty_{\tau(\alpha)}}\leq C\| v\|_{L^\infty_{\alpha }},\quad \||\nabla \Phi_k|* v\|_{L^\infty_{\tau(\alpha)}}\leq C\| v\|_{L^\infty_{\alpha }}, \] where the constant $C>0$ depends only on $N$, $\alpha$ and $k$. \end{lemma} \begin{proof} In the following, the letter $C>0$ denotes constants which depend only on $N$, $\alpha$ and $k$. We observe that \[ |\Phi_k(x)|\leq \begin{cases} C\, |x|^{2-N}\quad &\text{if }\ N\geq 3,\\[1.5mm] C\, \log \frac2{|x|} &\text{if }\ N=2, \end{cases}\qquad |\nabla\Phi_k(x)|\leq C|x|^{1-N} \quad\,\text{for $0<|x|\leq 1$} \] and \[ |\Phi_k(x)|,\, |\nabla\Phi_k(x)|\leq C\, |x|^{\frac{1-N}2} \quad \text{if } |x|>1. 
\] It then follows that \begin{align*} &|(|\Phi_k|* v)(x)|\leq \int_{\mathbb{R}^N} |\Phi_k(z)|\,| v(x-z)|\, dz\notag\\ &\leq \begin{cases} C\| v\|_{L^\infty_\alpha}\,\Big(\int_{B_1(0)}|z|^{2-N}\langle x-z\rangle^{-\alpha}\, dz + \int_{\mathbb{R}^N\backslash B_1(0)}|z|^{\frac{1-N}2}\, \langle x-z\rangle^{-\alpha}\, dz\Big)\ \ \text{if }\ N\geq3 ,\\[3mm] C\| v\|_{L^\infty_\alpha}\,\Big(\int_{B_1(0)}\log\frac2{|z|} \langle x-z\rangle^{-\alpha}\, dz + \int_{\mathbb{R}^N\backslash B_1(0)}|z|^{\frac{1-N}2}\, \langle x-z\rangle^{-\alpha}\, dz\Big)\ \ \text{if }\ N=2. \end{cases} \end{align*} For $|x|\leq 4$, it is easy to see that \begin{align}\label{2.2} |(|\Phi_k|* v)(x)|\leq \begin{cases} C\| v\|_{L^\infty_\alpha}\,\Big(\int_{B_1(0)}|z|^{2-N} \,dz + \int_{\mathbb{R}^N\backslash B_1(0)}|z|^{\frac{1-N}2-\alpha}\, dz\Big)\ \ \text{if }N\geq3 \\[3mm] C\| v\|_{L^\infty_\alpha}\,\Big(\int_{B_1(0)}\log\frac2{|z|} \, dz + \int_{\mathbb{R}^N\backslash B_1(0)}|z|^{\frac{1-N}2-\alpha}\, dz\Big)\ \ \text{if }N=2, \end{cases} \end{align} and \begin{align}\label{2.2-gradient} |(|\nabla\Phi_k|* v)(x)|\leq C\| v\|_{L^\infty_\alpha}\,\Big(\int_{B_1(0)}|z|^{1-N} \,dz + \int_{\mathbb{R}^N\backslash B_1(0)}|z|^{\frac{1-N}2-\alpha}\, dz\Big), \end{align} where $\frac{1-N}2-\alpha<-N$. \qquad In the following, we consider $|x|>4$. Since $\alpha>\frac{N+1}{2}$, direct computation shows that \begin{align*} I_1&:=\begin{cases} \int_{B_1(0)}|z|^{2-N}\, \langle x-z\rangle^{-\alpha}\, dz\quad{\rm if}\ N\geq 3 \\[1mm] \int_{B_1(0)}\log\frac{2}{|z|}\, \langle x-z\rangle^{-\alpha}\, dz\quad{\rm if}\ N=2 \end{cases} \\[1.5mm] & \displaystyle\,\leq C |x|^{-\alpha} \le C \langle x\rangle^{-\alpha}. 
\end{align*} Moreover, \begin{align*} I_2&:=\int_{B_{\frac{|x|}2} (0)\setminus B_1(0)} |z|^{\frac{1-N}2}\, \langle x-z\rangle^{-\alpha}\, dz \leq C|x|^{-\alpha} \int_{B_{\frac{|x|}2} (0)\setminus B_1(0)} |z|^{\frac{1-N}2} dz \leq C|x|^{-\alpha+\frac{N+1}{2}},\\ I_3&:=\int_{B_{\frac{|x|}2} (x)} |z|^{\frac{1-N}2}\, \langle x-z\rangle^{-\alpha}\, dz \leq C|x|^{-\frac{N-1}{2}} \int_{B_{\frac{|x|}2} (x) } \langle x-z\rangle^{-\alpha} dz \leq C|x|^{-\tau(\alpha)} \end{align*} and \begin{align*} I_4:&=\int_{\mathbb{R}^N\setminus(B_{\frac{|x|}2} (0)\cup B_{\frac{|x|}2} (x)) } |z|^{\frac{1-N}2}\, \langle x-z\rangle^{-\alpha}\, dz \\ &\le |x|^{-\alpha+\frac{N+1}{2}} \int_{\mathbb{R}^N\setminus(B_{\frac{1}2} (0)\cup B_{\frac{1}2} (\hat x)) } |z|^{\frac{1-N}{2}}|z-\hat x|^{-\alpha} dz \leq C|x|^{-\alpha+\frac{N+1}{2}}, \end{align*} where $\hat x=\frac{x}{|x|}$. Since $-\tau(\alpha)\geq \max\{-\frac{N-1}{2}, -\alpha, -\alpha+\frac{N+1}{2}\}$, we may combine these estimates with (\ref{2.2}) to see that \[|(|\Phi_k|*v)(x)|\leq C\| v\|_{L^\infty_{ \alpha}}\,\bigg(\sum^{4}_{j=1}I_j\bigg) \leq C \langle x \rangle^{-\tau(\alpha)} \| v\|_{L^\infty_{ \alpha}} \quad \text{for all $x\in \mathbb{R}^N.$}\] Moreover, noting that \begin{align*} \tilde I_1:= \int_{B_1(0)}|z|^{1-N}\, \langle x-z\rangle^{-\alpha}\, dz \leq C |x|^{-\alpha} \leq C \langle x\rangle^{-\alpha}\qquad \text{for $|x|>4$,} \end{align*} we find by (\ref{2.2-gradient}) that \[|(|\nabla \Phi_k|*v)(x)|\leq C\| v\|_{L^\infty_{ \alpha}}\,\bigg(\tilde I_1+\sum^{4}_{j=2}I_j\bigg) \leq C \langle x \rangle^{-\tau(\alpha)}\| v\|_{L^\infty_{ \alpha}}\quad \text{for all $x\in \mathbb{R}^N.$}\] The proof is thus complete. 
\end{proof} \begin{proof}[Proof of Proposition~\ref{resolvent-compact-and-continuous}] (i) Clearly, Lemma~\ref{lm 2.1} yields (\ref{eq:kappa-sigma-finite}) and therefore the continuity of the linear resolvent operator ${\mathcal R}_k: L^\infty_\alpha(\mathbb{R}^N) \to L^\infty_{\tau(\alpha)}(\mathbb{R}^N)$, and the latter space is continuously embedded in $L^\infty(\mathbb{R}^N)$. To see the compactness of ${\mathcal R}_k$ as a map $L^\infty_\alpha(\mathbb{R}^N) \to L^\infty(\mathbb{R}^N)$, let $(u_n)_n$ be a sequence in $L^\infty_\alpha(\mathbb{R}^N)$ with $$ m:= \sup_{n \in \mathbb{N}} \|u_n \|_{L^\infty_{\alpha}}< \infty. $$ Moreover, let $v_n:= {\mathcal R}_k u_n = \Phi_k * u_n$ for $n \in \mathbb{N}$. By Lemma~\ref{lm 2.1}, we then have \begin{equation} \label{eq:proof-est-compactness} \|v_n\|_{L^\infty_{\tau(\alpha)}} \le C m \quad \text{and}\qquad \|\nabla v_n\|_{L^\infty_{\tau(\alpha)}}= \|\nabla \Phi_k * u_n\|_{L^\infty_{\tau(\alpha)}} \le C m \end{equation} for all $n \in \mathbb{N}$. In particular, the sequence $(v_n)_n$ is bounded in $C^1_{loc}(\mathbb{R}^N)$. By the Arzel\`a-Ascoli theorem, after passing to a subsequence, there exists $v \in L^\infty_{loc}(\mathbb{R}^N)$ with \begin{equation} \label{eq:proof-locally-uniformly} \text{$v_n \to v$ locally uniformly on $\mathbb{R}^N$.} \end{equation} By (\ref{eq:proof-est-compactness}), it then follows that $v \in L^\infty_{\tau(\alpha)}(\mathbb{R}^N)$ with $\|v\|_{L^\infty_{\tau(\alpha)}} \le C m$. \qquad Moreover, for given $R>0$ we have, with $A_R:= \mathbb{R}^N \setminus B_R(0)$, $$ \|v_n-v\|_{L^\infty(A_R)}\le \|v_n\|_{L^\infty(A_R)} + \|v\|_{L^\infty(A_R)} \le R^{-\tau(\alpha)}\Bigl(\|v_n\|_{L^\infty_{\tau(\alpha)}} + \|v\|_{L^\infty_{\tau(\alpha)}}\Bigr) \le 2Cm R^{-\tau(\alpha)}. $$ Combining this estimate with (\ref{eq:proof-locally-uniformly}), we see that $\limsup \limits_{n \to \infty}\|v_n-v\|_{L^\infty(\mathbb{R}^N)}\le 2Cm R^{-\tau(\alpha)}$ for every $R>0$. 
Since $\tau(\alpha)>0$, we conclude that $v_n \to v$ in $L^\infty(\mathbb{R}^N)$. This shows the compactness of ${\mathcal R}_k$ as an operator $L^\infty_\alpha(\mathbb{R}^N) \to L^\infty(\mathbb{R}^N)$.\\ (ii) Let $\alpha > \frac{N(N+3)}{2(N+1)}$ and $h \in L^\infty_\alpha(\mathbb{R}^N)$. It then follows that $h \in L^{\frac{2(N+1)}{N+3}}(\mathbb{R}^N)$. Consequently, \cite[Proposition A.1]{EW1} implies that $u= {\mathcal R}_k h$ is a strong solution of $-\Delta u - k^2 u = h$. Moreover, $u$ satisfies~(\ref{eqn:sommerfeld1-averaged}) by the estimate in \cite[Theorem 8]{G} and the remark following it. Finally, we suppose that $\alpha > N$. In this case, the linear map $$ \widetilde{\mathcal R}_k: L^\infty_\alpha(\mathbb{R}^N) \to L^\infty_{\text{\tiny $\frac{N-1}{2}$}}(\mathbb{R}^N), \qquad v \mapsto \widetilde{\mathcal R}_k(v):=\frac{d {\mathcal R}_k v}{dr} -ik {\mathcal R}_k v $$ is well-defined and bounded by Lemma~\ref{lm 2.1}. Moreover, if $h \in L^\infty(\mathbb{R}^N)$ has compact support, the fact that $\Phi_k$ satisfies~(\ref{sommerfeld-1}) and elementary convolution estimates show that $u= {\mathcal R}_k h$ also satisfies~(\ref{sommerfeld-1}). 
In the general case $h \in L^\infty_\alpha(\mathbb{R}^N)$, we consider a sequence of functions $h_n \in L^\infty_\alpha(\mathbb{R}^N)$ with compact support and such that $h_n \to h$ in $L^\infty_\alpha(\mathbb{R}^N)$, which then also implies that \begin{equation} \label{eq:extra-argument-strong-s-c} \widetilde{\mathcal R}_k h_n \to \widetilde{\mathcal R}_k h \qquad \text{in $\: L^\infty_{\text{\tiny $\frac{N-1}{2}$}}(\mathbb{R}^N).$} \end{equation} Moreover, for every $n \in \mathbb{N}$ we have \begin{align*} \limsup_{|x| \to \infty} |x|^{\frac{N-1}{2}}\bigl|[\widetilde{\mathcal R}_k h](x)\bigr| &\le \limsup_{|x| \to \infty} |x|^{\frac{N-1}{2}}\bigl|[\widetilde{\mathcal R}_k h_n](x)\bigr| + \|\widetilde{\mathcal R}_k h-\widetilde{\mathcal R}_k h_n\|_{L^\infty_{\text{\tiny $\frac{N-1}{2}$}}}\\ &= \|\widetilde{\mathcal R}_k h-\widetilde{\mathcal R}_k h_n\|_{L^\infty_{\text{\tiny $\frac{N-1}{2}$}}}, \end{align*} and thus $$ \limsup_{|x| \to \infty} |x|^{\frac{N-1}{2}}\bigl|[\widetilde{\mathcal R}_k h](x)\bigr| \le \lim_{n \to \infty}\|\widetilde{\mathcal R}_k h-\widetilde{\mathcal R}_k h_n\|_{L^\infty_{\text{\tiny $\frac{N-1}{2}$}}} = 0 $$ by (\ref{eq:extra-argument-strong-s-c}). Hence $u= {\mathcal R}_k h$ satisfies~(\ref{sommerfeld-1}). \end{proof} \section{Estimates for the substitution operator} \label{sec:estim-subst-oper} \begin{lemma} \label{lem:nemytskii_cont} Let, for some $\alpha \in \mathbb{R}$, the nonlinearity $f: \mathbb{R}^N \times \mathbb{C} \to \mathbb{C}$ be a continuous function satisfying \begin{equation} \label{eq:assumption-f1-1} S_{f,M,\alpha}:= \sup_{|u|\le M,x \in \mathbb{R}^N} \langle x \rangle^{\alpha}|f(x,u)|< \infty \qquad \text{for all $M>0$.} \end{equation} Then the superposition operator $$ N_f: L^\infty (\mathbb{R}^N) \to L^{\infty}_{\alpha'}(\mathbb{R}^N),\qquad N_f(u)(x):= f(x,u(x)) $$ is well defined, bounded and continuous for every $\alpha'<\alpha$. 
\end{lemma} \begin{proof} It clearly follows from (\ref{eq:assumption-f1-1}) that $N_f$ is well defined and satisfies the estimate $$ \|N_f(u)\|_{L^\infty_{\alpha'}}\le \|N_f(u)\|_{L^\infty_\alpha} \le S_{f,M,\alpha} \qquad \text{for $M>0$ and $u \in L^\infty(\mathbb{R}^N)$ with $\|u\|_{L^\infty} \le M$.} $$ To see the continuity, we consider a sequence $(u_n)_n \subset L^\infty(\mathbb{R}^N)$ with $u_n \to u$ in $L^\infty(\mathbb{R}^N)$, and we put $M:= \sup \{\|u_n\|_{L^\infty}\::\: n \in \mathbb{N}\}$. For given $R>0$ we have, with $B_R:= B_R(0)$ and $A_R:= \mathbb{R}^N \setminus B_R$, \begin{align*} \|N_f(u_n)-N_f(u)\|_{L^\infty_{\alpha'}(A_R)} & \le \|N_f(u_n)\|_{L^\infty_{\alpha'}(A_R)} +\|N_f(u)\|_{L^\infty_{\alpha'}(A_R)}\\ &\le R^{\alpha'-\alpha}\Bigl( \|N_f(u_n)\|_{L^\infty_{\alpha}(A_R)} +\|N_f(u)\|_{L^\infty_{\alpha}(A_R)}\Bigr)\\ &\le 2 S_{f,M,\alpha} R^{\alpha'-\alpha}. \end{align*} Moreover, since $f$ is uniformly continuous on the compact set $D_R:= \{(x,z) \in \mathbb{R}^N \times \mathbb{C}\::\: |x| \le R,\:|z| \le M\}$, we find that $$ \|N_f(u_n)-N_f(u)\|_{L^\infty(B_R)}= \sup_{|x| \le R}|f(x,u_n(x))-f(x,u(x))| \to 0 \qquad \text{as $n \to \infty$.} $$ We thus infer that $\limsup \limits_{n \to \infty}\|N_f(u_n)-N_f(u)\|_{L^\infty_{\alpha'}(\mathbb{R}^N)}\le 2 S_{f,M,\alpha} R^{\alpha'-\alpha}$ for every $R>0$. Since $\alpha' < \alpha$ by assumption, we conclude that $N_f(u_n) \to N_f(u)$ in $L^\infty_{\alpha'}(\mathbb{R}^N)$. This shows the continuity of $N_f: L^\infty(\mathbb{R}^N) \to L^\infty_{\alpha'}(\mathbb{R}^N)$. \end{proof} \begin{lemma}\label{lem:nemytskii_C-1} Let, for some $\alpha>\frac{N+1}{2}$, the nonlinearity $f: \mathbb{R}^N \times \mathbb{C} \to \mathbb{C}$ be a continuous function satisfying (\ref{eq:assumption-f1-1}). 
Suppose moreover that the function $f(x,\cdot):\mathbb{C} \to \mathbb{C}$ is real differentiable for every $x \in \mathbb{R}^N$, and that $f':= \partial_u f: \mathbb{R}^N \times \mathbb{C} \to {\mathcal L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})$ is a continuous function satisfying \begin{equation} \label{eq:assumption-f1-1-C-1} T_{f,M,\alpha}:= \sup_{|u|\le M,x \in \mathbb{R}^N} \langle x \rangle^{\alpha}\|f'(x,u)\|_{{\mathcal L}_\mathbb{R}(\mathbb{C},\mathbb{C})}< \infty \qquad \text{for all $M>0$.} \end{equation} Then the superposition operator $N_f: L^\infty (\mathbb{R}^N) \to L^{\infty}_{\alpha'}(\mathbb{R}^N)$ is of class $C^1$ for every $\alpha' < \alpha$ with \begin{equation} \label{eq:expression-derivative} N_f'(u) = N_{f'}(u) \qquad \text{for $u \in L^\infty(\mathbb{R}^N)$,} \end{equation} where $N_{f'}(u) \in {\mathcal L}_\mathbb{R}(L^\infty(\mathbb{R}^N),L^\infty_{\alpha'}(\mathbb{R}^N))$ is defined by \begin{equation} \label{eq:definition-substitution-deriv} [N_{f'}(u)v](x):= f'(x,u(x))v(x) \qquad \text{for $v \in L^\infty(\mathbb{R}^N), x \in \mathbb{R}^N$.} \end{equation} \end{lemma} \begin{proof} For the sake of brevity, we put $X:= L^\infty(\mathbb{R}^N)$ and $Y:= L^{\infty}_{\alpha'}(\mathbb{R}^N)$. By assumption (\ref{eq:assumption-f1-1-C-1}) and a very similar argument as in the proof of Lemma~\ref{lem:nemytskii_cont}, the nonlinear operator $$ N_{f'}: X \to {\mathcal L}_\mathbb{R}(X,Y) $$ defined by (\ref{eq:definition-substitution-deriv}) is well-defined, bounded and continuous. Thus, it suffices to show that $N_f$ is G\^ateaux-differentiable, and that (\ref{eq:expression-derivative}) is valid as a directional derivative. So let $u,v \in X$, and let $M:= \|u\|_{L^\infty}+ \|v\|_{L^\infty}$. 
For $\theta \in \mathbb{R} \setminus \{0\}$ and $x \in \mathbb{R}^N$, we estimate \begin{align*} \Bigl|&\frac{N_f(u+\theta v)(x)-N_f(u)(x)}{\theta}-[N_{f'}(u)v](x)\Bigr|\\ &= \Bigl|\frac{f(x,[u+\theta v](x))-f(x,u(x))}{\theta}-f'(x,u(x))v(x)\Bigr|\\ &= \Bigl|\int_0^1 \bigl[f'(x,[u+\xi \theta v](x))-f'(x,u(x))\bigr] v(x)\, d\xi \Bigr| \le |v(x)| g_\theta(x) \end{align*} with $$ g_\theta(x):= \sup_{\xi\in[0,1]}\bigl\|f'(x,[u+\xi \theta v](x))-f'(x,u(x))\bigr\|_{{\mathcal L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})} \qquad \text{for $\theta \in \mathbb{R}$, $x \in \mathbb{R}^N$.} $$ Since $\|u + \tau v\|_{L^\infty} \le M$ for $\tau \in \mathbb{R}$, $|\tau| \le 1$, we have $$ g_\theta(x)\le \sup_{\tau \in[0,1]}\|f'(x,[u+\tau v](x))\|_{{\mathcal L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})} + \|f'(x,u(x))\|_{{\mathcal L}_{\mathbb{R}}(\mathbb{C},\mathbb{C})} \le 2T_{f,M,\alpha}\langle x\rangle^{-\alpha} $$ for $|\theta| \le 1$, $x \in \mathbb{R}^N$. Similarly as in the proof of Lemma~\ref{lem:nemytskii_cont}, we now define, for given $R>0$, $B_R:= B_R(0)$, $A_R:= \mathbb{R}^N \setminus B_R$, and $D_R:= \{(x,z) \in \mathbb{R}^N \times \mathbb{C}\::\: |x| \le R,\:|z| \le M\}$. From the estimate above, it then follows that \begin{equation} \label{diff-substitution-est-1} \Bigl \| \frac{N_f(u+\theta v)-N_f(u)}{\theta}-N_{f'}(u)v \Bigr\|_{L^\infty_{\alpha'}(A_R)} \le 2 \|v\|_X T_{f,M,\alpha}R^{\alpha'-\alpha}. \end{equation} Moreover, since, by assumption, $f'$ is uniformly continuous on the compact set $D_R$, we find that $$ \|g_\theta\|_{L^\infty(B_R)} \to 0 \qquad \text{as $\theta \to 0$}. $$ We thus conclude that $$ \limsup \limits_{\theta \to 0}\, \Bigl \|\frac{N_f(u+\theta v)-N_f(u)}{\theta}-N_{f'}(u)v \Bigr \|_{L^\infty_{\alpha'}(\mathbb{R}^N)} \le 2 \|v\|_X\, T_{f,M,\alpha}\,R^{\alpha'-\alpha}\qquad \text{for every $R>0$.} $$ Since $\alpha' < \alpha$ by assumption, we conclude that $\frac{N_f(u+\theta v)-N_f(u)}{\theta} \to N_{f'}(u)v$ in $Y$ as $\theta \to 0$. The proof is thus finished. 
\end{proof} \section{Nonexistence of outgoing waves for the nonlinear Helmholtz equation} \label{nonexistence} To begin this section, we recall the following nonexistence result for eigenfunctions of Schr\"odinger operators with positive eigenvalue. It is a consequence of a result by Alsholm and Schmidt \cite[Proposition 2 of Appendix 3]{alsholm-schmidt70} extending earlier results due to Kato \cite{kato59}: \begin{proposition}[see {\cite[Proposition 2]{alsholm-schmidt70}}]\label{prop:kato} Let $u\in W^{2,2}_{\text{loc}}(\mathbb{R}^N,\mathbb{C})$ solve $-\Delta u +Vu=k^2u$ in $\mathbb{R}^N$, where $V\in L^\infty(\mathbb{R}^N)$ satisfies \begin{equation} \label{condition-V} |V(x)|\leq C\langle x\rangle^{-1-\epsilon} \qquad \text{for a.e. $x\in\mathbb{R}^N$ with constants $C, \epsilon>0$.} \end{equation} If $$ \liminf_{R\to\infty}\frac1R\int_{B_R(0)}(|\nabla u|^2+k^2|u|^2)\, dx=0, $$ then $u$ vanishes identically in $\mathbb{R}^N\backslash B_R(0)$ for some $R>0$.\\ If, moreover, $V$ is real-valued, then $u$ vanishes identically in $\mathbb{R}^N$. \end{proposition} \begin{proof} It has been proved in {\cite[Proposition 2]{alsholm-schmidt70}} that $u$ vanishes identically in $\mathbb{R}^N\backslash B_R(0)$ for some $R>0$. Assuming in addition that $V$ is real-valued, we then deduce by a unique continuation result that $u$ vanishes identically on $\mathbb{R}^N$. More precisely, for $u_1=\text{Re}(u)$ and $u_2=\text{Im}(u)$ we have $|\Delta u_i|\leq C |u_i|$ on $\mathbb{R}^N$ with some constant $C>0$. The strong unique continuation property \cite[Theorem 6.3]{jerison-kenig85} (see also Remark 6.7 in the same paper) therefore implies $u_1=u_2=0$ on $\mathbb{R}^N$, and this concludes the proof. \end{proof} \qquad From Proposition~\ref{prop:kato}, we shall now deduce the following nonexistence result for linear and superlinear variants of the corresponding integral equation involving the Helmholtz resolvent operator. 
\begin{proposition} \label{corol-kato} Let $N \ge 3$, $2 \le p < \infty$, $\alpha >\frac{N+1}{2}$, and let $u \in L^\infty(\mathbb{R}^N)$ be a solution of \begin{equation} \label{eq:corol-kato-eq} u = {\mathcal R}_k [Q|u|^{p-2}u] \end{equation} with a function $Q \in L^\infty_\alpha(\mathbb{R}^N, \mathbb{R})$. Then $u \equiv 0$. \end{proposition} \begin{proof} Let $V:= Q|u|^{p-2}$, so that (\ref{eq:corol-kato-eq}) can be written in the form \begin{equation} \label{eq:corol-kato-eq-variant} u = {\mathcal R}_k [V u]. \end{equation} We then have $V \in L^\infty_\alpha(\mathbb{R}^N, \mathbb{R})$ and also $Vu \in L^\infty_\alpha(\mathbb{R}^N)$ since $u \in L^\infty(\mathbb{R}^N)$. Therefore Proposition~\ref{resolvent-compact-and-continuous} implies that $u \in L^\infty_{\tau(\alpha)}(\mathbb{R}^N)$ with $\tau(\alpha)$ given in (\ref{exp 1}). It then follows that $Vu \in L^\infty_{\alpha_1}(\mathbb{R}^N)$ with $\alpha_1 = \alpha +\tau(\alpha)$ and hence $u \in L^\infty_{\tau(\alpha_1)}(\mathbb{R}^N)$, again by Proposition~\ref{resolvent-compact-and-continuous}. Defining inductively $\alpha_{k}:= \alpha_{k-1}+\tau(\alpha_{k-1})$ for $k \ge 2$, we may iterate the application of Proposition~\ref{resolvent-compact-and-continuous} to obtain that $u \in L^\infty_{\tau(\alpha_k)}(\mathbb{R}^N)$ for all $k \in \mathbb{N}$. After a finite number of steps, we therefore deduce from (\ref{exp 1}) that $u \in L^\infty_{\text{\tiny $\frac{N-1}{2}$}}(\mathbb{R}^N)$ and therefore $Vu \in L^\infty_{\alpha+\text{\tiny $\frac{N-1}{2}$}}(\mathbb{R}^N)$. Since $\alpha > \frac{N+1}{2}$ by assumption, this implies that $Vu \in L^\infty(\mathbb{R}^N) \cap L^1(\mathbb{R}^N)$. It then follows e.g. from \cite[Proposition A.1]{EW1} that $u \in W^{2,r}_{\text{loc}}(\mathbb{R}^N) \cap L^{\frac{2(N+1)}{N-1}}(\mathbb{R}^N) \cap L^\infty(\mathbb{R}^N)$ for every $r < \infty$, and $u$ is a strong solution of the differential equation \begin{equation}\label{eqn:nlh_power} -\Delta u -k^2 u = V u\quad\text{in }\mathbb{R}^N. 
\end{equation} Moreover, by \cite[Theorem 8]{G} and the remark following it, $u$ satisfies the Sommerfeld outgoing radiation condition in the form given in (\ref{eqn:sommerfeld1-averaged}), i.e., \begin{equation} \label{eq:sommerfeld-proof} \lim_{R\to\infty}\frac{1}{R} \int_{B_R}\left|\nabla u(x)-iku(x)\frac{x}{|x|} \right|^2\, dx=0. \end{equation} We now proceed similarly as in the proof of Corollary 1 in \cite{G}. Expanding the square in (\ref{eq:sommerfeld-proof}) by means of the pointwise identity $$ \Bigl|\nabla u-iku\frac{x}{|x|}\Bigr|^2 = |\nabla u|^2+k^2|u|^2-2k\,\text{Im}\Bigl(\overline{u}\,\nabla u\cdot\frac{x}{|x|}\Bigr), $$ the condition can be rewritten as \begin{align}\label{eqn:som2} \lim_{R\to\infty}\frac1{R}\left\{\int_{B_R}(|\nabla u|^2+k^2|u|^2)\, dx - 2k \int_0^R \text{Im}\left(\int_{\partial B_\rho}\overline{u}\nabla u\cdot\frac{x}{|x|}\, d\sigma\right)\, d\rho\right\}=0. \end{align} Since $u\in W^{2,2}_{\text{loc}}(\mathbb{R}^N)$ solves \eqref{eqn:nlh_power} in the strong sense, the divergence theorem gives \begin{align*} \int_{\partial B_\rho}\overline{u}\nabla u\cdot\frac{x}{|x|}\, d\sigma &=\int_{B_\rho}|\nabla u|^2\, dx + \int_{B_\rho}\overline{u}\Delta u\, dx\\ &=\int_{B_\rho}|\nabla u|^2\, dx - \int_{B_\rho} (k^2|u|^2 + V|u|^2)\, dx, \end{align*} where the right-hand side in the last line is purely real-valued, since by assumption $V= Q|u|^{p-2}$ takes only real values. Consequently, we find $$ \text{Im}\left( \int_{\partial B_\rho}\overline{u}\nabla u\cdot\frac{x}{|x|}\, d\sigma \right)=0 $$ for all $\rho>0$, and plugging this into \eqref{eqn:som2} yields \begin{equation}\label{eqn:som3} \lim_{R\to\infty}\frac1{R}\int_{B_R}(|\nabla u|^2+k^2|u|^2)\, dx=0. \end{equation} Moreover, since $V \in L^\infty_\alpha(\mathbb{R}^N)$ and $\alpha> \frac{N+1}{2}>1$, condition (\ref{condition-V}) is satisfied for $V$. Hence Proposition~\ref{prop:kato} implies that $u \equiv 0$ on $\mathbb{R}^N$. 
\end{proof} \section{A priori bounds for solutions} \label{sec:priori-estim-defoc} The aim of this section is to collect various a priori bounds for solutions of (\ref{nlh-1-integral}) under different assumptions on the nonlinearity $f$. \subsection{A priori bounds for the case of linearly bounded nonlinearities} \label{sec-a-priori-linearly-bounded} In this subsection we focus on linearly bounded nonlinearities, and we prove the following boundedness property. \begin{proposition} \label{sec:proof-theorem-refw-1} Let, for some $\alpha>\frac{N+1}{2}$, the nonlinearity $f$ satisfy the assumption \begin{equation} \label{eq:assumption-f1-section} \sup_{|u|\le M,x \in \mathbb{R}^N}\langle x\rangle^{\alpha}|f(x,u)|< \infty \qquad \text{for all $M>0$} \end{equation} and \underline{one} of the assumptions $(f_1)$ or $(f_2)$ from Theorem~\ref{W teo 1-sublinear}. Moreover, let $\varphi \in L^\infty(\mathbb{R}^N)$, and let ${\mathcal F} \subset L^\infty(\mathbb{R}^N)$ be the set of functions $u$ which solve the equation \begin{equation} \label{schaefer-equation} u = \mu \Bigl({\mathcal R}_k N_f(u) + \varphi\Bigr) \qquad \text{for some $\mu \in [0,1]$.} \end{equation} Then ${\mathcal F}$ is bounded in $L^\infty(\mathbb{R}^N)$. \end{proposition} \begin{proof} We first assume $(f_2)$. Let $u \in {\mathcal F}$. By (\ref{schaefer-equation}) and Proposition~\ref{resolvent-compact-and-continuous}, we then have \begin{align*} \|u\|_{L^\infty} &\le \|{\mathcal R}_k N_f(u)\|_{L^\infty} + \|\varphi\|_{L^\infty} \le \bigl\| |\Phi_k| * N_f(u) \bigr\|_{L^\infty_{\tau(\alpha)}} + \|\varphi\|_{L^\infty}\\ &\le \kappa_{\alpha} \|N_f(u)\|_{L^\infty_{\alpha}} + \|\varphi\|_{L^\infty} \le \kappa_{\alpha} \Bigl(\|Q |u|\|_{L^\infty_{\alpha}} + \|b\|_{L^\infty_{\alpha}}\Bigr)+ \|\varphi\|_{L^\infty}\\ &\le \kappa_{\alpha} \|Q\|_{L^\infty_{\alpha}}\|u\|_{L^\infty} + \kappa_{\alpha} \|b\|_{L^\infty_{\alpha}} + \|\varphi\|_{L^\infty}. 
\end{align*} Since $\kappa_{\alpha} \|Q\|_{L^\infty_{\alpha}}<1$ by assumption, we conclude that $$ \|u\|_{L^\infty} \le \bigl( 1- \kappa_{\alpha} \|Q\|_{L^\infty_{\alpha}}\bigr)^{-1}\bigl(\kappa_{\alpha} \|b\|_{L^\infty_{\alpha}} + \|\varphi\|_{L^\infty}\bigr), $$ and this shows the boundedness of ${\mathcal F}$. \qquad Next we assume $(f_1)$. In this case we argue by contradiction, so we assume that there exists a sequence $(u_n)_n$ in ${\mathcal F}$ such that $c_n:= \|u_n\|_{L^\infty} \to \infty$ as $n \to \infty$. Moreover, we let $\mu_n \in [0,1]$ be such that (\ref{schaefer-equation}) holds with $u=u_n$ and $\mu= \mu_n$. We then define $w_n:= \frac{u_n}{c_n} \in L^\infty(\mathbb{R}^N)$, so that $\|w_n\|_{L^\infty}= 1$ and, by assumption $(f_1)$, \begin{equation} \label{schaefer-equation-wn-proof} w_n = \mu_n {\mathcal R}_k (a w_n + g_n) + \frac{\mu_n}{c_n} \varphi \qquad \text{with $g_n \in L^\infty_\alpha(\mathbb{R}^N)$, $g_n(x)= \frac{b(x,c_n w_n(x))}{c_n}$.} \end{equation} Passing to a subsequence, we may assume that $\mu_n \to \mu \in [0,1]$. Moreover, by assumption $(f_1)$ we have $$ g_n \to 0 \qquad \text{in $L^\infty_\alpha(\mathbb{R}^N)$ as $n \to \infty$,} $$ whereas the sequence $(a w_n)_n$ is bounded in $L^\infty_\alpha(\mathbb{R}^N)$. Since also $\frac{\mu_n}{c_n} \to 0$ as $n \to \infty$, it follows from the compactness of the operator ${\mathcal R}_k:L^\infty_\alpha(\mathbb{R}^N) \to L^\infty(\mathbb{R}^N)$ that, after passing to a subsequence, $w_n \to w \in L^\infty(\mathbb{R}^N)$. From this we then deduce that $$ a w_n \to a w \qquad \text{in $L^\infty_\alpha(\mathbb{R}^N)$,} $$ and passing to the limit in (\ref{schaefer-equation-wn-proof}) yields $$ w = \mu {\mathcal R}_k [a w]= {\mathcal R}_k [\mu a w]. $$ Applying Proposition~\ref{corol-kato} with $p=2$ and $Q:= \mu a$, we conclude that $w \equiv 0$, but this contradicts the fact that $\|w\|_\infty = \lim \limits_{n \to \infty}\|w_n\|_\infty = 1$. 
Again, we infer the boundedness of ${\mathcal F}$ in $L^\infty(\mathbb{R}^N)$. \end{proof} \subsection{A priori bounds in the superlinear and defocusing case} \label{sec:priori-bounds-superl} In this subsection we restrict our attention to the case $f(x,u)= Q(x)|u|^{p-2}u$ with $Q \le 0$. In this case, we shall prove the following a priori estimate. \begin{proposition}\label{prop:apriori_defocusing} Let $N\geq 3$, $k>0$, $2<p<2^\ast$, $Q\in L^\infty_c(\mathbb{R}^N,\mathbb{R})\backslash\{0\}$ and $\varphi\in L^\infty(\mathbb{R}^N)$. Assume that \begin{itemize} \item[(Q1)] $Q\leq 0$ a.e.\ in $\mathbb{R}^N$ and \item[(Q2)] $\text{diam}(\text{supp }Q)\le \frac{{\bf z({\text{\tiny $N$}})}}{k}$, where ${\bf z({\text{\tiny $N$}})}$ denotes the first positive zero of the Bessel function $Y_{\frac{N-2}2}$ of the second kind of order $\frac{N-2}{2}$. \end{itemize} Then, there exist $C=C(N,k,p,\|Q\|_\infty, |\text{supp }Q|)>0$ and $m=m(N,k,p)\in\mathbb{N}$ such that for any solution $u\in L^\infty(\mathbb{R}^N)$ of \begin{equation}\label{eqn:fp_complex} \begin{aligned} u={\mathcal R}_k\bigl(Q |u|^{p-2}u\bigr) +\varphi \end{aligned} \end{equation} we have \begin{equation} \label{eq:apriori-superlinear-defocusing-estimate} \|u\|_\infty\leq C\left(1+\|\varphi\|_\infty^{(p-1)^m}\right). \end{equation} \end{proposition} \qquad For the proof, we first need two preliminary lemmas. The first lemma gives a sufficient condition for the nonnegativity of the Fourier transform of a radial function. It is well known in the case $N=3$ (see for example \cite{tuck06}). Since we could not find any reference for the general case, we give a proof for completeness. \begin{lemma}\label{lem:FT_rad_positive} Let $N\geq 3$ and consider $f\in L^1(\mathbb{R}^N)$ radially symmetric, i.e., $f(x)=f(|x|)$, such that $f\geq 0$ on $\mathbb{R}^N$. If the function $t\mapsto t^{\frac{N-1}2}f(t)$ is nonincreasing on $(0,\infty)$, then $\widehat{f}\geq 0$ on $\mathbb{R}^N$. 
\end{lemma} \begin{proof} The Fourier transform of the radial function $f$ is given by $$ \widehat{f}(\xi)=|\xi|^{-\frac{N-2}2}\int_0^\infty J_{\frac{N-2}2}(s|\xi|)f(s)s^{\frac{N}2}\, ds. $$ Let $j^{(\ell)}$, $\ell\in\mathbb{N}$, denote the positive zeros of the Bessel function $J_{\frac{N-2}2}$ of the first kind of order $\frac{N-2}2$, arranged in increasing order, and set $j^{(0)}:=0$. Then, it follows that $J_{\frac{N-2}2}>0$ in the interval $\bigl(j^{(2m-2)},j^{(2m-1)}\bigr)$ and $J_{\frac{N-2}2}<0$ in the interval $\bigl(j^{(2m-1)},j^{(2m)}\bigr)$, $m\in\mathbb{N}$. Therefore, for $\xi\neq 0$, we can write \begin{align*} &\int_0^\infty J_{\frac{N-2}2}(s|\xi|)f(s)s^{\frac{N}2}\, ds = \sum_{\ell=1}^\infty \int_{\frac{j^{(\ell-1)}}{|\xi|}}^{\frac{j^{(\ell)}}{|\xi|}} s^\frac12J_{\frac{N-2}2}(s|\xi|) s^{\frac{N-1}2}f(s)\, ds\\ &\quad\geq \sum_{m=1}^{\infty}\left(\frac{j^{(2m-1)}}{|\xi|}\right)^{\frac{N-1}2} f\bigl(\frac{j^{(2m-1)}}{|\xi|}\bigr) \Bigl[ \int_{\frac{j^{(2m-2)}}{|\xi|}}^{\frac{j^{(2m-1)}}{|\xi|}} s^\frac12\bigl|J_{\frac{N-2}2}(s|\xi|)\bigr| ds -\int_{\frac{j^{(2m-1)}}{|\xi|}}^{\frac{j^{(2m)}}{|\xi|}} s^\frac12\bigl|J_{\frac{N-2}2}(s|\xi|)\bigr| ds \Bigr]\\ &\quad=\sum_{m=1}^{\infty}|\xi|^{-\frac32}\left(\frac{j^{(2m-1)}}{|\xi|}\right)^{\frac{N-1}2} f\bigl(\frac{j^{(2m-1)}}{|\xi|}\bigr) \Bigl[ \int_{j^{(2m-2)}}^{j^{(2m-1)}} t^\frac12\bigl|J_{\frac{N-2}2}(t)\bigr| dt -\int_{j^{(2m-1)}}^{j^{(2m)}} t^\frac12\bigl|J_{\frac{N-2}2}(t)\bigr| dt \Bigr], \end{align*} using the fact that $s\mapsto s^{\frac{N-1}2}f(s)$ is nonincreasing by assumption. To conclude, an argument which goes back to Sturm \cite{sturm} (see also \cite{lorch-szego63,M}) shows that \begin{equation} \label{eq:sturm-argument} \int_{j^{(2m-2)}}^{j^{(2m-1)}} t^\frac12\bigl|J_{\frac{N-2}2}(t)\bigr| dt\geq \int_{j^{(2m-1)}}^{j^{(2m)}} t^\frac12\bigl|J_{\frac{N-2}2}(t)\bigr| dt,\quad\text{ for all }m\in\mathbb{N}, \end{equation} provided $N\geq 3$, and this gives the desired result. 
For the reader's convenience, we now give the proof of (\ref{eq:sturm-argument}). \qquad Consider for $\nu>\frac12$ the function $z(t):=t^\frac12J_\nu(t)$. It satisfies $z(j^{(\ell)})=0$ and $(-1)^\ell z'(j^{(\ell)})>0$ for all $\ell\in\mathbb{N}_0$. Moreover, it solves the differential equation \begin{equation}\label{eqn:equa_diff_bessel} z''(t) + \Bigl(1-\frac{\nu^2-\frac14}{t^2}\Bigr)z(t)=0\quad \text{for all $t>0.$} \end{equation} For $m\in\mathbb{N}$ and $t$ in the interval $I:=\bigl(j^{(2m-1)},\min\{j^{(2m)}, 2j^{(2m-1)}-j^{(2m-2)}\}\bigr)$, consider the functions $y_1(t)=-z(t)$ and $y_2(t)=z(2j^{(2m-1)}-t)$. According to the above remark, we have $y_1, y_2>0$ in $I$ and $y_1(j^{(2m-1)})=y_2(j^{(2m-1)})=0$. Moreover, $y_1'(j^{(2m-1)})=y_2'(j^{(2m-1)})\in(0,\infty)$. Using the differential equation \eqref{eqn:equa_diff_bessel}, we find that \begin{align*} \frac{d}{dt}\left(y_1'(t)y_2(t)-y_1(t)y_2'(t)\right)&=y_1''(t)y_2(t)-y_1(t)y_2''(t)\\ &=(\nu^2-\frac14)\left(\frac1{t^2}-\frac1{(2j^{(2m-1)}-t)^2}\right)y_1(t)y_2(t)\\ &<0 \quad\text{for all }t\in I. \end{align*} Hence, \begin{equation}\label{eqn:phi_prime} y_1'(t)y_2(t)-y_1(t)y_2'(t)<0\quad\text{ for all }j^{(2m-1)}<t\leq \min\{j^{(2m)}, 2j^{(2m-1)}-j^{(2m-2)}\}, \end{equation} and since $y_2(2j^{(2m-1)}-j^{(2m-2)})=0$ and $y_2'(2j^{(2m-1)}-j^{(2m-2)})=-z'(j^{(2m-2)})<0$, the positivity of $y_1$ in $I$ implies that $j^{(2m)}<2j^{(2m-1)}-j^{(2m-2)}$, i.e. $I=\bigl(j^{(2m-1)},j^{(2m)}\bigr)$. \qquad Moreover, from \eqref{eqn:phi_prime}, we infer that the quotient $\frac{y_1}{y_2}$ is a decreasing function in $I$ which vanishes at the right boundary of this interval. Consequently, $y_1(t)<y_2(t)$ in $I$, i.e., $|z(t)|< |z(2j^{(2m-1)}-t)|$ for all $t\in(j^{(2m-1)},j^{(2m)})$ and we conclude that $$ \int_{j^{(2m-2)}}^{j^{(2m-1)}}|z(t)|\, dt > \int_{j^{(2m-1)}}^{j^{(2m)}}|z(t)|\, dt. $$ In the case $\nu=\frac12$, we have $z(t)=\sqrt{\frac2\pi}\sin t$ and $j^{(\ell)}=\ell\pi$, $\ell\in\mathbb{N}_0$. 
Thus, $$ \int_{j^{(\ell-1)}}^{j^{(\ell)}}|z(t)|\, dt =\sqrt{\frac2\pi}\int_0^\pi \sin t\, dt=2\sqrt{\frac2\pi}\quad\text{for all }\ell\in\mathbb{N}, $$ and this concludes the proof of (\ref{eq:sturm-argument}). \end{proof} \qquad In our proof of the a priori bound given in Proposition~\ref{prop:apriori_defocusing}, we only need the following corollary of Lemma~\ref{lem:FT_rad_positive}. \begin{corollary}\label{prop:bilinear_positive} Let $N\geq 3$, $k>0$ and choose $\delta>0$ such that $k\delta\leq {\bf z({\text{\tiny $N$}})}$, where ${\bf z({\text{\tiny $N$}})}$ denotes the first positive zero of the Bessel function $Y_{\frac{N-2}{2}}$. Then, $$ \int_{\mathbb{R}^N}f(x) [(1_{B_\delta} \Psi_k)\ast f](x)\, dx\geq 0 \quad \text{for all }f\in L^{p'}(\mathbb{R}^N,\mathbb{R}), \ 2\leq p\leq 2^\ast, $$ where $\Psi_k$ denotes the real part of the fundamental solution $\Phi_k$ defined in (\ref{eq:18}). \end{corollary} \begin{proof} Since $1_{B_\delta}\Psi_k\in L^1(\mathbb{R}^N)\cap L^{\frac{N}{N-2}}_{w}(\mathbb{R}^N)$, by the weak Young inequality there is for each $2\leq p\leq 2^\ast$ a constant $C_p>0$ such that $$ \left|\int_{\mathbb{R}^N}f(x) [(1_{B_\delta}\Psi_k)\ast f](x)\, dx\right|\leq C_p \|f\|_{p'}^2\quad \text{for all }f\in L^{p'}(\mathbb{R}^N,\mathbb{R}). $$ Hence, it suffices to prove the conclusion for $f\in{\mathcal S}(\mathbb{R}^N,\mathbb{R})$. For such functions, Parseval's identity gives \begin{equation}\label{eqn:parseval} \int_{\mathbb{R}^N}f(x) [(1_{B_\delta}\Psi_k)\ast f](x)\, dx =(2\pi)^{\frac{N}2}\int_{\mathbb{R}^N} |\widehat{f}(\xi)|^2 {\mathcal F}\bigl(1_{B_\delta}\Psi_k\bigr)(\xi)\, d\xi. 
\end{equation} \qquad It thus remains to show that \begin{equation} \label{eq:positivity-Fourier-proof} {\mathcal F}\bigl(1_{B_\delta}\Psi_k\bigr) \ge 0 \quad \text{on $\mathbb{R}^N$.} \end{equation} In the radial variable, the radial function $1_{B_\delta}\Psi_k$ is given, up to a positive constant factor, by $t \mapsto -t^{\frac{2-N}{2}}1_{[0,\delta]}(t)Y_{\frac{N-2}2}(kt)$. Moreover, for $N\geq 3$ the function $t\mapsto t^\frac12Y_{\frac{N-2}2}(kt)$ is negative and increasing on $(0,\delta)$. Hence Lemma~\ref{lem:FT_rad_positive} implies (\ref{eq:positivity-Fourier-proof}), and the proof is finished. \end{proof} \qquad We can now prove Proposition~\ref{prop:apriori_defocusing}. \begin{proof}[Proof of Proposition~\ref{prop:apriori_defocusing}] We write $u:=v+\varphi$ and $u=u_1+iu_2$ with real-valued functions $u_1, u_2\in L^p_\text{loc}(\mathbb{R}^N)$. Multiplying the equation \eqref{eqn:fp_complex} by $Q|u|^{p-2}\overline{u}$ and integrating over $\mathbb{R}^N$, we find \begin{align*} &\int_{\mathbb{R}^N}Q|u|^p\, dx-\int_{\mathbb{R}^N}Q|u|^{p-2}\varphi\overline{u}\, dx \\ &\quad= \int_{\mathbb{R}^N}Q|u|^{p-2}(u_1-iu_2)[\Phi_k\ast\bigl(Q|u|^{p-2}(u_1+iu_2)\bigr)]\, dx \\ &\quad= \int_{\mathbb{R}^N}Q|u|^{p-2}u_1[\Phi_k\ast\bigl(Q|u|^{p-2}u_1\bigr)]\, dx +\int_{\mathbb{R}^N}Q|u|^{p-2}u_2[\Phi_k\ast\bigl(Q|u|^{p-2}u_2\bigr)]\, dx \\ &\qquad + i \int_{\mathbb{R}^N}Q|u|^{p-2}u_1[\Phi_k\ast\bigl(Q|u|^{p-2}u_2\bigr)]\, dx - i\int_{\mathbb{R}^N}Q|u|^{p-2}u_2[\Phi_k\ast\bigl(Q|u|^{p-2}u_1\bigr)]\, dx \\ &\quad=\int_{\mathbb{R}^N}Q|u|^{p-2}u_1[\Phi_k\ast\bigl(Q|u|^{p-2}u_1\bigr)]\, dx +\int_{\mathbb{R}^N}Q|u|^{p-2}u_2[\Phi_k\ast\bigl(Q|u|^{p-2}u_2\bigr)]\, dx, \end{align*} where the symmetry of the convolution has been used in the last step. 
Taking real parts on both sides of the equality, we obtain \begin{equation}\label{eqn:integr_estim1} \begin{aligned} \int_{\mathbb{R}^N}Q|u|^p\, dx-\int_{\mathbb{R}^N}Q|u|^{p-2}\text{Re}\left(\varphi\overline{u}\right)\, dx &=\int_{\mathbb{R}^N}Q|u|^{p-2}u_1[\Psi_k\ast\bigl(Q|u|^{p-2}u_1\bigr)]\, dx\\ &\quad+\int_{\mathbb{R}^N}Q|u|^{p-2}u_2[\Psi_k\ast\bigl(Q|u|^{p-2}u_2\bigr)]\, dx, \end{aligned} \end{equation} where again $\Psi_k$ denotes the real part of $\Phi_k$. Notice in addition that setting $\delta=\text{diam}(\text{supp }Q)$, the assumption (Q2) implies $\delta\le \frac{{\bf z({\text{\tiny $N$}})}}{k}$ and hence, for all $f\in L^{p'}_{\text{loc}}(\mathbb{R}^N)$, $$ \int_{\mathbb{R}^N}Qf[\Psi_k\ast (Qf)]\, dx = \int_{\mathbb{R}^N}Qf[(1_{B_\delta}\Psi_k)\ast(Qf)]\, dx\geq 0, $$ by Corollary~\ref{prop:bilinear_positive}. Thus, as a consequence of \eqref{eqn:integr_estim1}, we find $$ \int_{\mathbb{R}^N}Q|u|^p\, dx\geq \int_{\mathbb{R}^N}Q|u|^{p-2}\text{Re}\left(\overline{u}\varphi\right)\, dx, $$ and, since $Q\leq 0$ on $\mathbb{R}^N$ by (Q1), it follows that \begin{equation}\label{eqn:first_bound} \int_{\mathbb{R}^N}|Q|\ |u|^p\, dx \leq \|\varphi\|_\infty \int_{\mathbb{R}^N} |Q|\ |u|^{p-1}\, dx. \end{equation} Using H\"older's inequality we then obtain the estimate \begin{align*} \int_{\mathbb{R}^N} |Q|\ |u|^{p-1}\, dx & \leq \left(\int_{\mathbb{R}^N} |Q|\, dx\right)^{\frac1p}\left(\int_{\mathbb{R}^N}|Q|\ |u|^{p}\, dx\right)^{\frac1{p'}}\\ & \leq \left(\int_{\mathbb{R}^N} |Q|\, dx\right)^{\frac1p} \left(\|\varphi\|_\infty \int_{\mathbb{R}^N} |Q|\ |u|^{p-1}\, dx\right)^{\frac1{p'}}, \end{align*} and therefore $$ \int_{\mathbb{R}^N} |Q|\ |u|^{p-1}\, dx \leq \|\varphi\|_\infty^{p-1} \int_{\mathbb{R}^N}|Q|\, dx \leq |\Omega|\ \|Q\|_\infty\ \|\varphi\|_\infty^{p-1}, $$ where $\Omega=\{x\in\mathbb{R}^N\, :\, Q(x)\neq 0\}$. 
Using again \eqref{eqn:first_bound}, we deduce that $$ \|\ |Q|^{\frac1{p'}}\ |u|^{p-1}\|_{p'}^{p'} =\int_{\mathbb{R}^N}|Q|\ |u|^p\, dx \leq |\Omega|\ \|Q\|_\infty\ \|\varphi\|_\infty^p. $$ Since the support of $Q$ is compact and since $p<2^\ast$, H\"older's inequality yields the estimates \begin{align} \|Q|u|^{p-1}\|_{(2^\ast)'} \leq |\Omega|^{\frac1{(2^\ast)'}-\frac1{p'}} \|Q|u|^{p-1}\|_{p'} &\leq |\Omega|^{\frac1{(2^\ast)'}-\frac1{p'}} \|Q\|_\infty^{\frac1p} \|\ |Q|^{\frac1{p'}}|u|^{p-1}\|_{p'} \nonumber\\ &\leq |\Omega|^{\frac1{(2^\ast)'}} \|Q\|_\infty \|\varphi\|_\infty^{p-1}=:D. \label{eqn:final_bound} \end{align} Lemma~\ref{lem:regularity1} with $a=Q$ and the estimate~\eqref{eqn:final_bound} imply the existence of constants $C=C(N,k,p,\|Q\|_\infty,|\Omega|)>0$ and $m=m(N,p)\in\mathbb{N}$ such that \begin{align*} \|v\|_\infty\leq C\left(D+D^{(p-1)^m}+\|\varphi\|_\infty^{p-1} + \|\varphi\|_\infty^{(p-1)^m}\right). \end{align*} Making $C>0$ larger if necessary, we thus obtain~(\ref{eq:apriori-superlinear-defocusing-estimate}), as claimed. \end{proof} \section{Proofs of the main results} \label{sec:proofs-main-results} In this section, we complete the proofs of the main results in the introduction. \begin{proof}[Proof of Theorem~\ref{W teo 1-sublinear}] Let $\varphi \in X:= L^\infty(\mathbb{R}^N)$. We write (\ref{nlh-1-integral}) as a fixed point equation $$ u = {\mathcal A}(u) \qquad \text{in $X$} $$ with the nonlinear operator \begin{equation} \label{eq:A-operator} {\mathcal A}: X \to X, \qquad {\mathcal A}[w]={\mathcal R}_k (N_f(w))+ \varphi. \end{equation} Since $\alpha > \frac{N+1}{2}$, we may fix $\alpha' \in (\frac{N+1}{2}, \alpha)$. By Lemma~\ref{lem:nemytskii_cont}, the nonlinear operator $N_f: X \to L^\infty_{\alpha'}(\mathbb{R}^N)$ is well-defined and continuous. Moreover, ${\mathcal R}_k: L^\infty_{\alpha'}(\mathbb{R}^N) \to X$ is compact by Proposition~\ref{resolvent-compact-and-continuous}. 
Consequently, ${\mathcal A}$ is a compact and continuous operator. Moreover, the set $$ {\mathcal F}:=\{u\in X: \, u=\mu {\mathcal A}[u]\ \text{ for some } \mu\in[0,1]\} $$ is bounded by Proposition~\ref{sec:proof-theorem-refw-1}. Hence Schaefer's fixed point theorem (see e.g. \cite[Chapter 9.2.2.]{Evans}) implies that ${\mathcal A}$ has a fixed point. \end{proof} \qquad We continue with the proof of Theorem~\ref{thm:rabinowitz-applied}. For this we recall the following variant of Rabinowitz' global continuation theorem (see \cite[Theorem 3.2]{rabinowitz71}; see also \cite[Theorem 14.D]{zeidler}). \begin{theorem}\label{thm:rabinowitz} Let $(X,\|\cdot\|)$ be a real Banach space, and consider a continuous and compact mapping $G$: $\mathbb{R}\times X$ $\to$ $X$ satisfying $G(0,0)=0$. Assume that \begin{itemize} \item[(a)] $G(0,u)=u$ $\Leftrightarrow$ $u=0$, and \item[(b)] there exists $r>0$ such that $\text{\em deg}(id-G(0,\cdot),B_r(0),0)\neq 0$, where $\text{\em deg}$ denotes the Leray-Schauder degree. \end{itemize} Moreover, denote by $S$ the set of solutions $(\lambda, u)\in \mathbb{R}\times X$ of the equation $$ u=G(\lambda,u). $$ Then the connected components $C^+$ and $C^-$ of $S$ in $[0,\infty)\times X$ and $(-\infty,0]\times X$ which contain $(0,0)$ are both unbounded. \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:rabinowitz-applied} (completed)] Let $2<p<2^\ast$, $Q\in L^\infty_\alpha(\mathbb{R}^N,\mathbb{R})\backslash\{0\}$ for some $\alpha > \frac{N+1}{2}$, $\varphi\in X:= L^\infty(\mathbb{R}^N)$ and consider $G$: $\mathbb{R} \times X \to X$ given by \begin{equation} \label{eq:def-G-function} G(\lambda,w)= {\mathcal R}_k\bigl(Q|w|^{p-2}w\bigr)+\lambda\varphi. \end{equation} Using Proposition~\ref{resolvent-compact-and-continuous} and Lemma~\ref{lem:nemytskii_cont}, we obtain that the map $G$ is continuous and compact. Moreover, if $w\in X$ satisfies $w=G(\lambda,w)$, then $w$ is a solution of (\ref{nlh-1-integral-parameter-lambda}). 
Furthermore, if $w \in X$ satisfies $w = G(0,w) = {\mathcal R}_k\bigl(Q|w|^{p-2}w\bigr)$, then $w = 0$ by Proposition~\ref{corol-kato}. \qquad To compute the Leray-Schauder degree, we remark that $G(0,0)=0$ and $\partial_wG(0,0)=0$ by Lemma~\ref{lem:nemytskii_C-1}. Hence, we can find some radius $r>0$ such that $\|G(0,w)\|_{L^\infty} \leq \frac12 \|w\|_{L^\infty}$ for all $w\in X$ such that $\|w\|_{L^\infty} \leq r$. Therefore, the compact homotopy $H(t,w)=tG(0,w)$ is admissible in the ball $B_r(0)\subset X$ and we find that \begin{align*} \text{deg}(id-G(0,\cdot),B_r(0),0)=\text{deg}(id-H(1,\cdot),B_r(0),0) &=\text{deg}(id-H(0,\cdot),B_r(0),0)\\ &=\text{deg}(id,B_r(0),0)=1. \end{align*} Theorem~\ref{thm:rabinowitz} therefore applies and we obtain the existence of an unbounded branch $C_\varphi \subseteq \bigl\{(\lambda,w)\in \mathbb{R}\times X\, :\, w=G(\lambda,w)\text{ and } \lambda\geq 0\bigr\}$ which contains $(0,0)$. Moreover, $C_\varphi \setminus \{(0,0)\}$ is a subset of $(0,\infty) \times X$ since $w=G(0,w)$ implies $w=0$ by Proposition~\ref{corol-kato}, as noted above. \end{proof} \begin{remark} \label{remark-rabinowitz} The application of Theorem~\ref{thm:rabinowitz} to the function $G$ defined in (\ref{eq:def-G-function}) also yields a connected component $$ C_\varphi^- \subset \bigl\{(\lambda,w)\in \mathbb{R}\times X\, :\, w=G(\lambda,w)\text{ and } \lambda\le 0\bigr\} $$ which contains $(0,0)$. However, this component is also obtained by passing from $\varphi$ to $-\varphi$ in the statement of Theorem~\ref{thm:rabinowitz-applied}, since by definition we have $C_\varphi^- = C_{-\varphi}$. \end{remark} \qquad We may now also prove Theorem~\ref{thm:unbounded_branch-defocusing}. 
\begin{proof}[Proof of Theorem~\ref{thm:unbounded_branch-defocusing}] Since, by assumption, $Q\leq 0$ in $\mathbb{R}^N$ and $\text{diam}(\text{supp }Q)\le \frac{{\bf z({\text{\tiny $N$}})}}{k}$, the a priori bounds in Proposition~\ref{prop:apriori_defocusing} show that the $w$-components of solutions of (\ref{nlh-1-integral-parameter-lambda}) remain bounded as long as $\lambda$ varies in a bounded interval. Since the branch $C_\varphi$ is unbounded, its projection onto the $\lambda$-axis must therefore be unbounded and hence, by connectedness, equal to $[0,\infty)$. Consequently, $C_\varphi$ contains, for each $\lambda \ge 0$, at least one pair $(\lambda,w)$, as claimed. \end{proof} \qquad Next, we complete the proof of Theorem~\ref{teo-implicit-function}. \begin{proof}[Proof of Theorem~\ref{teo-implicit-function}] Let again $X:= L^\infty(\mathbb{R}^N)$, and consider the nonlinear operator ${\mathcal B}: X \to X$, ${\mathcal B}(u):= u- {\mathcal R}_k N_f(u)$. Then ${\mathcal B}(0)=0$, since $N_f(0)=0$ by assumption. Since $N_f: X \to L^\infty_{\alpha'}$ is differentiable by Lemma~\ref{lem:nemytskii_C-1} (for fixed $\alpha' \in (\frac{N+1}{2},\alpha)$), ${\mathcal B}$ is differentiable as well. Moreover $$ {\mathcal B}'(0)= {\rm id} - {\mathcal R}_k N_f'(0) = {\rm id} \in {\mathcal L}_{\mathbb{R}}(X,X), $$ since $N_f'(0) =N_{f'}(0) =0 \in {\mathcal L}_{\mathbb{R}}(X,L^\infty_{\alpha'})$ by assumption and Lemma~\ref{lem:nemytskii_C-1}. Consequently, by the inverse function theorem, ${\mathcal B}$ is a diffeomorphism between open neighborhoods $U,V \subset X$ of zero, and this shows the claim. \end{proof} \qquad Finally, we state and prove the unique existence of solutions in the case where $f$ satisfies a suitable Lipschitz condition. \begin{theorem} \label{theo-uniqueness} Let, for some $\alpha>\frac{N+1}{2}$, the nonlinearity $f: \mathbb{R}^N \times \mathbb{C} \to \mathbb{C}$ be a continuous function satisfying (\ref{eq:assumption-f1}) and the Lipschitz condition \begin{equation} \label{eq:assumption-f1-lipschitz2} \ell_\alpha:= \sup \Bigl \{ \langle x \rangle^{\alpha} \, \Bigl|\frac{f(x,u)-f(x,v)}{u-v}\Bigr|\::\: u,v \in \mathbb{C}, \: u \neq v, \: x \in \mathbb{R}^N \Bigr\} < \frac{1}{\kappa_\alpha}, \end{equation} where $\kappa_\alpha$ is defined in Proposition~\ref{resolvent-compact-and-continuous}. 
Then, for any given solution $\varphi \in L^\infty(\mathbb{R}^N)$ of the homogeneous Helmholtz equation $\Delta \varphi + k^2 \varphi = 0$, the equation (\ref{nlh-1-integral}) admits precisely one solution $u \in L^\infty(\mathbb{R}^N)$. \end{theorem} \begin{proof} Let $\varphi \in X:= L^\infty(\mathbb{R}^N)$. As in the proof of Theorem~\ref{W teo 1-sublinear} given above, we write (\ref{nlh-1-integral}) as a fixed point equation $u = {\mathcal A}(u)$ in $X$ with the nonlinear operator ${\mathcal A}$ defined in (\ref{eq:A-operator}). Assumption (\ref{eq:assumption-f1-lipschitz2}) implies that $$ \|{\mathcal A}(u)-{\mathcal A}(v)\|_{X} = \bigl \|{\mathcal R}_k \bigl (N_f(u)-N_f(v)\bigr)\bigr\|_X \le \kappa_{\alpha} \|N_f(u)-N_f(v)\|_{L^\infty_\alpha}\le \kappa_\alpha \ell_\alpha \|u-v\|_X $$ with $\kappa_\alpha \ell_\alpha<1$. Hence ${\mathcal A}$ is a contraction, and thus it has a unique fixed point in $X$. \end{proof} \appendix \section{Uniform regularity estimates} \label{sec:unif-regul-estim} In this section, we wish to prove uniform regularity estimates for solutions of (\ref{nlh-1-integral}) in the case where the nonlinearity $f$ is of the form given in (\ref{eq:power-type}). These estimates, which we used in the proof of the a priori bound given in Proposition~\ref{prop:apriori_defocusing}, allow us to pass from uniform bounds in $L^{(2^*)'}(\mathbb{R}^N)$ to uniform bounds in $L^\infty(\mathbb{R}^N)$. The proof of the following lemma is similar to a regularity estimate for real-valued solutions given in \cite[Proposition 3.1]{EW2}, but the differences justify including a complete proof in this paper. In the following, for $q \in [1,\infty]$, we let $L^q_c(\mathbb{R}^N)$ denote the space of functions in $L^q(\mathbb{R}^N)$ with compact support in $\mathbb{R}^N$. \begin{lemma}\label{lem:regularity1} Let $N\geq 3$, $2<p<2^\ast$ and consider a function $a\in L^\infty_c(\mathbb{R}^N)$. 
For $k>0$ and $\varphi\in L^\infty(\mathbb{R}^N)$, every solution $v\in L^p_{\text{loc}}(\mathbb{R}^N)$ of $$ v=\Phi_k \ast \bigl(a|v|^{p-2}v \bigr)+ \varphi $$ satisfies $v-\varphi\in W^{2,t}_{\text{loc}}(\mathbb{R}^N)$ for all $2_\ast\leq t<\infty$. In particular, $v-\varphi\in L^\infty(\mathbb{R}^N)$ and there exist constants $$ C=C\bigl(N,k,p,\|a\|_\infty\bigr)>0\qquad \text{and}\qquad m=m(N,p)\in\mathbb{N} $$ independent of $v$ and $\varphi$ such that \begin{equation}\label{eqn:infty-estimate} \|v-\varphi\|_\infty\leq C\left( \|a |v|^{p-1}\|_{(2^\ast)'}+\|a|v|^{p-1}\|_{(2^\ast)'}^{(p-1)^m} +\|\varphi\|_{\infty}^{p-1}+\|\varphi\|_{\infty}^{(p-1)^m}\right). \end{equation} \end{lemma} \noindent{\bf Proof.} Since, by assumption, $v \in L^p_{\text{loc}}(\mathbb{R}^N)$, and since $a\in L^\infty_c(\mathbb{R}^N)$, it follows that \begin{equation}\label{eqn:f1_f2_Lpq} f:=a|v |^{p-2}v\in L^q_c(\mathbb{R}^N), \quad\text{ for all }1\leq q\leq p'. \end{equation} Furthermore, since $v=\Phi_k\ast f + \varphi$, we deduce that \begin{equation}\label{eqn:f1_f2_estim_Rf} |f|\leq 2^{p-2} |a| \bigl(|\Phi_k\ast f|^{p-1} + |\varphi|^{p-1}\bigr) \quad\text{a.e. in }\mathbb{R}^N. \end{equation} \qquad We start by proving that $v-\varphi=\Phi_k\ast f\in L^\infty(\mathbb{R}^N)$. For this, we first remark that $f\in L^{(2^\ast)'}_{c}(\mathbb{R}^N)$, since $p<2^\ast$. Consequently, the mapping properties of $\Phi_k$ given in \cite[Proposition A.1]{EW1} yield $\Phi_k\ast f\in L^{2^\ast}(\mathbb{R}^N)\cap W^{2,(2^\ast)'}_{\text{loc}}(\mathbb{R}^N)$ and, for every $0<R<2$, the existence of constants $\tilde{C}_0=\tilde{C}_0(N,k,R)>0$ and $D=D(N,k)>0$ such that \begin{align*} \|\Phi_k\ast f\|_{W^{2,(2^\ast)'}(B_{R}(x_0))} &\leq \tilde{C}_0 \left( \|\Phi_k\ast f\|_{L^{(2^\ast)'}(B_2(x_0))}+\|f\|_{L^{(2^\ast)'}(B_2(x_0))}\right)\\ &\leq \tilde{C}_0(D+1)\|f\|_{(2^\ast)'} \quad\text{for all }x_0\in\mathbb{R}^N. 
\end{align*} \qquad Setting $C_0:=\tilde{C}_0(D+1)$, we consider a strictly decreasing sequence $2>R_1>R_2>\ldots>R_j>R_{j+1}>\ldots>1$. From Sobolev's embedding theorem, there is, for each $1\leq t\leq 2^\ast$, a constant $\kappa_t^{(0)}=\kappa_t^{(0)}(N,t)>0$ such that $$ \|\Phi_k\ast f\|_{L^t(B_{R_1}(x_0))}\leq \kappa_t^{(0)} C_0 \|f\|_{(2^\ast)'}, $$ where $C_0$ is given as above, with $R=R_1$. Choosing $t_1:=\frac{2^\ast}{p-1}$, we obtain from \eqref{eqn:f1_f2_estim_Rf} that, for some constant $D_2=D_2(N,p)>0$, \begin{align*} \|f\|_{L^{t_1}(B_{R_1}(x_0))} &\leq D_2 \|a\|_\infty \bigl(\|\Phi_k\ast f\|_{L^{2^\ast}(B_{R_1}(x_0))}^{p-1}+\|\varphi\|_{L^{2^\ast}(B_{R_1}(x_0))}^{p-1}\bigr)\\ &\leq D_2 \|a\|_\infty\left((\kappa_{2^\ast}^{(0)} C_0)^{p-1} \|f\|_{(2^\ast)'}^{p-1}+|B_{R_1}|^{\frac1{t_1}}\|\varphi\|_\infty^{p-1}\right). \end{align*} \qquad It then follows as in \cite[Proof of Proposition A.1(i)]{EW1} from elliptic regularity theory that $\Phi_k\ast f\in W^{2,t_1}_{\text{loc}}(\mathbb{R}^N)$ and for some constant $\tilde{C}_1=\tilde{C}_1(N,k,p)>0$, \begin{align*} \|\Phi_k\ast f&\|_{W^{2,t_1}(B_{R_2}(x_0))} \leq \tilde{C}_1 \left( \|\Phi_k\ast f\|_{L^{t_1}(B_{R_1}(x_0))}+\|f\|_{L^{t_1}(B_{R_1}(x_0))}\right)\\ &\ \ \leq \tilde{C}_1\Bigl[\kappa_{t_1}^{(0)}C_0\|f\|_{(2^\ast)'} +D_2 \|a\|_\infty\left((\kappa_{2^\ast}^{(0)} C_0)^{p-1} \|f\|_{(2^\ast)'}^{p-1} +|B_{R_1}|^{\frac1{t_1}}\|\varphi\|_\infty^{p-1}\right)\Bigr]\\ &\ \ \leq C_1\left( \|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{p-1}+\|\varphi\|_\infty^{p-1}\right)\qquad\text{for all }x_0\in\mathbb{R}^N, \end{align*} where $C_1=C_1\bigl(N,k,p,\|a\|_\infty\bigr)$. If $t_1\geq \frac{N}{2}$, Sobolev's embedding theorem gives for each $1\leq t<\infty$ the existence of a constant $\kappa^{(1)}_t=\kappa_t^{(1)}(N,p,t)>0$ such that $$ \|\Phi_k\ast f\|_{L^t(B_{R_2}(x_0))}\leq \kappa^{(1)}_t C_1\left( \|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{p-1}+\|\varphi\|_\infty^{p-1}\right). 
$$ As a consequence, we obtain \begin{align*} \|f\|_{L^t(B_{R_2}(x_0))} &\leq D_2 \|a\|_\infty\Bigl(3^{p-2}(\kappa_{t(p-1)}^{(1)} C_1)^{p-1} \left(\|f\|_{(2^\ast)'}^{p-1}+\|f\|_{(2^\ast)'}^{(p-1)^2} +\|\varphi\|_\infty^{(p-1)^2}\right)\\&\quad +|B_{R_2}|^{\frac{p-1}{t}}\|\varphi\|_\infty^{p-1}\Bigr), \end{align*} for all $1\leq t<\infty$. As in \cite[Proof of Proposition A.1(i)]{EW1}, it then follows from elliptic regularity theory that $\Phi_k\ast f\in W^{2,N}_{\text{loc}}(\mathbb{R}^N)$, and since $R_2>1$, there exists some constant $\tilde{C}_2=\tilde{C}_2(N,k)>0$ such that \begin{align*} \|\Phi_k\ast f&\|_{W^{2,N}(B_1(x_0))}\leq \tilde{C}_2 \left( \|\Phi_k\ast f\|_{L^N(B_{R_2}(x_0))}+\|f\|_{L^N(B_{R_2}(x_0))}\right)\\ &\leq \tilde{C}_2\Bigl\{\kappa_N^{(1)}C_1\left(\|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{p-1}+\|\varphi\|_\infty^{p-1}\right)\\ &+D_2 \|a\|_\infty\Bigl(3^{p-2}(\kappa_{N(p-1)}^{(1)} C_1)^{p-1} \left(\|f\|_{(2^\ast)'}^{p-1}+\|f\|_{(2^\ast)'}^{(p-1)^2} +\|\varphi\|_\infty^{(p-1)^2}\right)\\ &+|B_{R_2}|^{\frac{p-1}N}\|\varphi\|_\infty^{p-1}\Bigr)\Bigr\}\\ &\leq C_2\left( \|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{(p-1)^2}+\|\varphi\|_\infty^{p-1}+\|\varphi\|_\infty^{(p-1)^2}\right) \end{align*} for all $x_0\in\mathbb{R}^N$, where $C_2=C_2\bigl(N,k,p,\|a\|_\infty\bigr)$. By Sobolev's embedding theorem, there is a constant $\kappa_\infty=\kappa_\infty(N)>0$ such that $$ \|\Phi_k\ast f\|_{L^\infty(B_1(x_0))} \leq \kappa_\infty C_2\left( \|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{(p-1)^2}+\|\varphi\|_\infty^{(p-1)^2}+\|\varphi\|_\infty^{p-1}\right) $$ for all $x_0\in\mathbb{R}^N$. Therefore, $\Phi_k\ast f\in L^\infty(\mathbb{R}^N)$ and since $v-\varphi=\Phi_k\ast f$, the estimate \eqref{eqn:infty-estimate} holds with $C=2\kappa_\infty C_2$ and $m=2$. 
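\qquad (For orientation, we note the following simple computation, which is not needed for the argument: since $t_1=\frac{2^\ast}{p-1}=\frac{2N}{(N-2)(p-1)}$, we have $$ t_1\geq \frac{N}{2} \quad\Longleftrightarrow\quad 4\geq (N-2)(p-1) \quad\Longleftrightarrow\quad p\leq \frac{N+2}{N-2}, $$ so the case just treated covers precisely the exponents $2<p\leq \frac{N+2}{N-2}$, while for $\frac{N+2}{N-2}<p<2^\ast$ an iteration is needed.)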
\qquad If $t_1<\frac{N}{2}$, we infer from Sobolev's embedding theorem that $$ \|\Phi_k\ast f\|_{L^t(B_{R_2}(x_0))}\leq \kappa^{(1)}_t C_1\left(\|f\|_{(2^\ast)'} +\|f\|_{(2^\ast)'}^{p-1}+\|\varphi\|_\infty^{p-1}\right) $$ for each $1\leq t\leq \frac{Nt_1}{N-2t_1}$, where $\kappa_t^{(1)}=\kappa_t^{(1)}(N,p,t)$. Therefore, setting $t_2:=\frac{Nt_1}{(N-2t_1)(p-1)}$, we obtain from \eqref{eqn:f1_f2_estim_Rf}, \begin{align*} &\|f\|_{L^{t_2}(B_{R_2}(x_0))} \\&\leq D_2 \|a\|_\infty\Bigl(3^{p-2}(\kappa_{t_2(p-1)}^{(1)} C_1)^{p-1} \left(\|f\|_{(2^\ast)'}^{p-1} +\|f\|_{(2^\ast)'}^{(p-1)^2}+\|\varphi\|_\infty^{(p-1)^2}\right)+|B_{R_2}|^{\frac{p-1}{t_2}}\|\varphi\|_\infty^{p-1}\Bigr). \end{align*} Using again elliptic regularity theory as before, we find that $\Phi_k\ast f\in W^{2,t_2}_{\text{loc}}(\mathbb{R}^N)$ and for some constant $\tilde{C}_2=\tilde{C}_2(N,k,p)>0$, \begin{align*} \|\Phi_k\ast f&\|_{W^{2,t_2}(B_{R_3}(x_0))}\leq \tilde{C}_2 \left( \|\Phi_k\ast f\|_{L^{t_2}(B_{R_2}(x_0))}+\|f\|_{L^{t_2}(B_{R_2}(x_0))}\right)\\ \ \ &\leq \tilde{C}_2\Bigl\{\kappa_{t_2}^{(1)}C_1\left(\|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{p-1}+\|\varphi\|_\infty^{p-1}\right)\\ & \ \ \quad +D_2 \|a\|_\infty\Bigl(3^{p-2}(\kappa_{t_2(p-1)}^{(1)} C_1)^{p-1} \left(\|f\|_{(2^\ast)'}^{p-1}+\|f\|_{(2^\ast)'}^{(p-1)^2}+\|\varphi\|_\infty^{(p-1)^2}\right)\\ &\ \ \quad +|B_{R_2}|^{\frac{p-1}{t_2}}\|\varphi\|_\infty^{p-1}\Bigr)\Bigr\}\\ &\ \ \leq C_2\left( \|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{(p-1)^2}+\|\varphi\|_\infty^{p-1}+\|\varphi\|_\infty^{(p-1)^2}\right), \end{align*} for all $x_0\in\mathbb{R}^N$, where $C_2=C_2\bigl(N,k,p,\|a\|_\infty\bigr)$. \qquad Remarking that $t_2>t_1$, since $p<2^\ast$, we may iterate the procedure. 
At each step we find some constant $C_j=C_j\bigl(N,k,p,\|a\|_\infty\bigr)$ such that the estimate $$ \|\Phi_k\ast f\|_{W^{2,t_j}(B_{R_{j+1}}(x_0))} \leq C_j\left(\|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{(p-1)^j}+\|\varphi\|_\infty^{p-1}+\|\varphi\|_\infty^{(p-1)^j}\right) $$ holds, where $t_j$ is defined recursively via $t_0=(2^\ast)'$ and $t_{j+1}=\frac{Nt_j}{(N-2t_j)(p-1)}$, as long as $t_j<\frac{N}{2}$. Since $t_{j+1}\geq \frac{t_1}{p'}\,t_j$ and since $t_1>p'$, we reach after finitely many steps $t_\ell\geq\frac{N}{2}$, where $\ell$ only depends on $N$ and $p$. Since $R_j>1$ for all $j$, using the regularity properties of $\Phi_k$ and arguing as above, we obtain $\Phi_k\ast f\in W^{2,N}_{\text{loc}}(\mathbb{R}^N)$ as well as the estimate $$ \|\Phi_k\ast f\|_{W^{2,N}(B_1(x_0))}\leq C_{\ell+1}\left(\|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{(p-1)^{\ell+1}} +\|\varphi\|_\infty^{p-1}+\|\varphi\|_\infty^{(p-1)^{\ell+1}}\right), $$ where $x_0$ is any point of $\mathbb{R}^N$ and $C_{\ell+1}=C_{\ell+1}\bigl(N,k,p,\|a\|_\infty\bigr)$ is independent of $x_0$. Then, Sobolev's embedding theorem gives a constant $\kappa_\infty=\kappa_\infty(N)$ for which $$ \|\Phi_k\ast f\|_{L^\infty(B_1(x_0))} \leq \kappa_\infty C_{\ell+1}\left(\|f\|_{(2^\ast)'}+\|f\|_{(2^\ast)'}^{(p-1)^{\ell+1}} +\|\varphi\|_\infty^{p-1}+\|\varphi\|_\infty^{(p-1)^{\ell+1}}\right) $$ holds for all $x_0\in\mathbb{R}^N$. Hence, $\Phi_k\ast f\in L^\infty(\mathbb{R}^N)$ and choosing $C=\kappa_\infty C_{\ell+1}$ and $m=\ell+1$ concludes the proof of \eqref{eqn:infty-estimate}. $\Box$ \noindent{\bf Acknowledgements:} H. Chen is supported by NNSF of China, No: 12071189, 12001252, by the Jiangxi Provincial Natural Science Foundation, No: 20202BAB201005, 20202ACBL201001 and by the Alexander von Humboldt Foundation. T. Weth is supported by the German Science Foundation (DFG) within the project WE-2821/5-2. \end{document}
\begin{document} \title{A Presentation for the Dual Symmetric Inverse Monoid} \author{David Easdown\\ {\footnotesize \emph{School of Mathematics and Statistics, University of Sydney, NSW 2006, Australia}}\\ {\footnotesize {\tt de\,@\,maths.usyd.edu.au} }\\~\\ James East\\ {\footnotesize \emph{Department of Mathematics, La Trobe University, Victoria 3083, Australia}}\\ {\footnotesize {\tt james.east\,@\,latrobe.edu.au} }\\~\\ D.~G.~FitzGerald\\ {\footnotesize \emph{School of Mathematics and Physics, University of Tasmania, Private Bag 37, Hobart 7250, Australia}}\\ {\footnotesize {\tt d.fitzgerald\,@\,utas.edu.au} }} \maketitle \begin{abstract} The dual symmetric inverse monoid $\mathscr{I}_n^*$ is the inverse monoid of all isomorphisms between quotients of an $n$-set. We give a monoid presentation of $\mathscr{I}_n^*$ and, along the way, establish criteria for a monoid to be inverse when it is generated by completely regular elements. \end{abstract} \section{Introduction} Inverse monoids model the partial or local symmetries of structures, generalizing the total symmetries modelled by groups. Key examples are the \emph{symmetric inverse monoid} $\mathscr{I}_X$ on a set $X$ (consisting of all bijections between subsets of $X$), and the \emph{dual symmetric inverse monoid} $\mathscr{I}_X^*$ on $X$ (consisting of all isomorphisms between subobjects of $X$ in the category ${\bf Set}^{\text{opp}}$), each with an appropriate multiplication. They share the property that every inverse monoid may be faithfully represented in some $\mathscr{I}_X$ and some $\mathscr{I}_X^*$. The monoid $\mathscr{I}_X^*$ may be realized in many different ways; in \cite{2}, it was described as consisting of bijections between quotient sets of $X$, or \emph{block bijections} on $X$, which map the blocks of a ``domain'' equivalence (or partition) on $X$ bijectively to blocks of a ``range'' equivalence. 
These objects may also be regarded as special binary relations on $X$ called \emph{biequivalences}. The appropriate multiplication involves the join of equivalences---details are found in \cite{2}, and an alternative description in \cite[pp.~122--124]{4}. \subsection{Finite dual symmetric inverse monoids}\label{sect:fdsim} In this paper we focus on finite $X$, and write $\mathbf{n}=\{1,\dots,n\}$ and $\mathscr{I}_n^*=\mathscr{I}_{\mathbf{n}}^*$. In a graphical representation described in \cite{5}, the elements of $\mathscr{I}_n^*$ are thought of as graphs on a vertex set~$\{1,\ldots,n\}\cup\{1',\ldots,n'\}$ (consisting of two copies of $\mathbf{n}$) such that each connected component has at least one dashed and one undashed vertex. This representation is not unique---two graphs are regarded as equivalent if they have the same connected components---but it facilitates visualization and is intimately connected to the combinatorial structure. Conventionally, we draw the graph of an element of $\mathscr{I}_n^*$ such that the vertices $1,\ldots,n$ are in a horizontal row (increasing from left to right), with vertices $1',\ldots,n'$ vertically below. See Fig.~\ref{picoftheta} for the graph of a block bijection $\theta\in \mathscr{I}_{8}^*$ with domain $(1,2\,|\,3\,|\,4,6,7\,|\,5,8)$ and range $(1\,|\,2,4\,|\,3\,|\,5,6,7,8)$. In an obvious notation, we also write $\textstyle{{\theta = \big( {1,2 \atop 2,4} \big| {3 \atop 5,6,7,8} \big| {4,6,7 \atop 1} \big| {5,8 \atop 3} \big).}}$ \begin{figure} \caption{A graphical representation of a block bijection $\theta \in \mathscr{I}_8^*$.} \label{picoftheta} \end{figure} To multiply two such diagrams, they are stacked vertically, with the ``interior'' rows of vertices coinciding; then the connected components of the resulting graph are constructed and the interior vertices are ignored. See Fig.~\ref{prodininstar} for an example. 
\begin{figure} \caption{The product of two block bijections $\theta_1,\theta_2\in\mathscr{I}_4^*$.} \label{prodininstar} \end{figure} \newline It is clear from its graphical representation that $\mathscr{I}_n^*$ is a submonoid of the \emph{partition monoid}, though not one of the submonoids discussed in \cite{3}. Maltcev \cite{5} shows that $\mathscr{I}_n^*$ with the zero of the partition monoid adjoined is a maximal inverse subsemigroup of the partition monoid, and gives a set of generators for $\mathscr{I}_n^*$. These generators are completely regular; later in this paper, we present auxiliary results on the generation of inverse semigroups by completely regular elements. Although these results are of interest in their own right, our main goal is to obtain a presentation, in terms of generators and relations, of~$\mathscr{I}_n^*$. Our method makes use of known presentations of some special subsemigroups of $\mathscr{I}_n^*$. We now describe these subsemigroups, postponing their presentations until a later section. The group of units of $\mathscr{I}_n^*$ is the symmetric group $\mathcal{S}_n$, while the semilattice of idempotents is (isomorphic to) $\mathscr{E}_n$, the set of all equivalences on $\mathbf{n}$, with multiplication being join of equivalences. Another subsemigroup consists of those block bijections which are induced by permutations of $\mathbf{n}$ acting on the equivalence relations; this is the \emph{factorizable part} of~$\mathscr{I}_n^*$, which we denote by $\mathscr{F}_n$, and which is equal to the set product $\mathscr{E}_n\mathcal{S}_n=\mathcal{S}_n\mathscr{E}_n$. In~\cite{2} these elements were called \emph{uniform}, and in \cite{5} \emph{type-preserving}, since they have the characteristic property that corresponding blocks are of equal cardinality. We will also refer to the \emph{local submonoid}~$\varepsilon\mathscr{I}_X^*\varepsilon$ of $\mathscr{I}_X^*$ determined by a non-identity idempotent $\varepsilon$. 
This subsemigroup consists of all $\beta\in\mathscr{I}_X^*$ for which $\varepsilon$ is a (left and right) identity. Recalling that the idempotent~$\varepsilon$ is an equivalence on $X$, it is easy to see that there is a natural isomorphism~$\varepsilon\mathscr{I}_X^*\varepsilon\to\mathscr{I}_{X/\varepsilon}^*$. As an example which we make use of later, when $X=\mathbf{n}$ and~$\varepsilon=(1,2\,|\,3\,|\,\cdots\,|\,n)$, we obtain an isomorphism $\Upsilon:\varepsilon\mathscr{I}_n^*\varepsilon\to\mathscr{I}_{n-1}^*$. Diagrammatically, we obtain a graph of $\beta\Upsilon\in\mathscr{I}_{n-1}^*$ from a graph of $\beta\in\varepsilon\mathscr{I}_n^*\varepsilon$ by identifying vertices $1\equiv2$ and $1'\equiv2'$, relabelling the vertices, and adjusting the edges accordingly; an example is given in Fig. 3. \begin{figure} \caption{The action of the map $\Upsilon:\varepsilon\mathscr{I}_n^*\varepsilon\to\mathscr{I}_{n-1}^*$ in the case $n=5$.} \label{Upsilon} \end{figure} \subsection{Presentations} Let $X$ be an alphabet (a set whose elements are called \emph{letters}), and denote by $X^*$ the free monoid on $X$. For $R\subseteq X^*\times X^*$ we denote by $R^\sharp$ the congruence on $X^*$ generated by~$R$, and we define $\langle X|R\rangle =X^*/R^\sharp$. We say that a monoid $M$ \emph{has presentation} $\langle X|R\rangle$ if~${M\cong\langle X|R\rangle}$. Elements of $X$ and $R$ are called \emph{generators} and \emph{relations} (respectively), and a relation $(w_1,w_2)\in R$ is conventionally displayed as an equation:~${w_1=w_2}$. We will often make use of the following universal property of $\langle X|R\rangle$. We say that a monoid~$S$ \emph{satisfies}~$R$ (or that $R$ \emph{holds in} $S$) via a map $i_S:X\to S$ if for all~${(w_1,w_2)\in R}$ we have~$w_1i_S^*=w_2i_S^*$ (where $i_S^*:X^*\to S$ is the natural extension of $i_S$ to $X^*$). 
Then~${M=\langle X~|~R\rangle}$ is the monoid, unique up to isomorphism, which is universal with respect to the property that it satisfies $R$ (via $i_M:x\mapsto xR^\sharp$); that is, if a monoid $S$ satisfies $R$ via $i_S$, there is a \emph{unique} homomorphism $\phi:M\to S$ such that $i_M\phi=i_S$: \begin{center} \psset{xunit= .6 cm,yunit= 1.1 cm} \psset{origin={0,0}} \uput[r](1,0){$S$} \uput[l](-4,2){$X$} \uput[r](1,2){$M=\langle X|R\rangle$} \psline{->}(-4.2,1.9)(1.1,0.1) \psline{->}(-4.2,2)(1.1,2) \psline{->}(1.56,1.8)(1.56,.4) \rput(-1.2,2.3){\small $i_M$} \rput(-1.6,.7){\small $i_S$} \rput(2,1.1){\small $\phi$} \end{center} This map $\phi$ is called the \emph{canonical homomorphism}. If $X$ generates $S$ via $i_S$, then $\phi$ is surjective since $i_S^*$ is. \section{Inverse Monoids Generated by Completely Regular Elements} In this section we present two general results which give necessary and sufficient conditions for a monoid generated by completely regular elements to be inverse, with a semilattice of idempotents specified by the generators. For a monoid $S$, we write $E(S)$ and $G(S)$ for the set of idempotents and group of units of $S$ (respectively). Suppose now that $S$ is an inverse monoid (so that $E(S)$ is in fact a semilattice). The \emph{factorizable part} of $S$ is $F(S)=E(S)G(S)=G(S)E(S)$, and $S$ is \emph{factorizable} if $S=F(S)$; in general, $F(S)$ is the largest factorizable inverse submonoid of~$S$. Recall that an element $x$ of a monoid $S$ is said to be \emph{completely regular} if its $\mathscr H$-class $H_x$ is a group. For a completely regular element $x\in S$, we write $x^{-1}$ for the inverse of $x$ in $H_x$, and $x^0$ for the identity element of $H_x$. Thus,~${xx^{-1}=x^{-1}x=x^0}$ and, of course, $x^0\in E(S)$. If $X\subseteq S$, we write $X^0=\left\{ x^{0}~|~x\in X\right\}$. \begin{proposition}\label{secondprop} Let $S$ be a monoid, and suppose that $S=\langle X\rangle$ with each $x\in X$ completely regular. 
Then $S$ is inverse with $E(S)=\langle X^0\rangle$ if and only if, for all $x,y\in X$, \bit \item[\emph{(i)}] $x^0y^0=y^0x^0$, and \item[\emph{(ii)}] $y^{-1}x^0y\in\langle X^0\rangle$. \eit \end{proposition} \noindent{\bf Proof}\,\, If $S$ is inverse, then (i) holds. Also, for $x,y\in X$, we have \[ (y^{-1}x^0y)^2 = y^{-1} x^0y^0x^0y = y^{-1}x^0x^0y^0y = y^{-1}x^0y, \] so that $y^{-1}x^0y\in E(S)$. So if $E(S)=\langle X^0\rangle$, then (ii) holds. Conversely, suppose that (i) and (ii) hold. From (i) we see that $\langle X^0\rangle \subseteq E(S)$ and that~$\langle X^0\rangle$ is a semilattice. Now let $y\in X$. Next we demonstrate, by induction on $n$, that \begin{gather} \tag{A} y^{-1}(X^0)^ny \subseteq \langle X^0\rangle, \end{gather} for all $n\in\mathbb N$. Clearly (A) holds for $n=0$. Suppose next that (A) holds for some $n\in\mathbb N$, and that $w\in(X^0)^{n+1}$. So $w=x^0v$ for some $x\in X$ and $v\in(X^0)^n$. But then by (i) we have \[ y^{-1}wy = y^{-1}x^0vy = y^{-1}y^0x^0vy = y^{-1}x^0y^0vy = (y^{-1}x^0y )(y^{-1}vy). \] Condition (ii) and an inductive hypothesis that $y^{-1}vy\in\langle X^0\rangle$ then imply ${y^{-1}wy\in\langle X^0\rangle}$. So (A) holds. Next we claim that for each $w\in S$ there exists $w'\in S$ such that \begin{gather} \tag{B1} w'\langle X^0\rangle w \subseteq \langle X^0\rangle,\\ \tag{B2} ww',w'w \in \langle X^0\rangle,\\ \tag{B3} ww'w = w ,\, w'ww'=w'. \end{gather} We prove the claim by induction on the \emph{length} of $w$ (that is, the minimal value of $n\in\mathbb N$ for which $w\in X^n$). The case $n=0$ is trivial since then $w=1$ and we may take $w'=1$. Next suppose that $n\in\mathbb N$ and that the claim is true for elements of length $n$. Suppose that~$w\in S$ has length $n+1$, so that $w=xv$ for some $x\in X$ and $v\in S$ of length $n$. Put~$w'=v'x^{-1}$. 
Then \[ w'\langle X^0\rangle w = v'x^{-1}\langle X^0\rangle xv \subseteq v'\langle X^0\rangle v \subseteq \langle X^0\rangle, \] the first inclusion holding by (A) above, and the second by inductive hypothesis. Thus (B1) holds. Also, \[ ww' = xvv'x^{-1} \in x\langle X^0\rangle x^{-1} \subseteq \langle X^0\rangle \] and \[ w'w = v'x^{-1}xv = v'x^0v \in v'\langle X^0\rangle v \subseteq \langle X^0\rangle \] by (A), (B1), and the induction hypothesis, establishing (B2). For (B3), we have \[ ww'w = xvv'x^{-1}xv = xvv'x^0 v = xx^0vv'v = xv = w, \] using (B2), (i), and the inductive hypothesis. Similarly we have \[ w'ww' = v'x^0vv'x^{-1} = v'vv'x^0x^{-1} = v'x^{-1} =w', \] completing the proof of (B3). Since $S$ is regular, by (B3), the proof will be complete if we can show that $E(S)\subseteq\langle X^0\rangle$. So suppose that $w\in E(S)$, and choose $w'\in S$ for which (B1---B3) hold. Then \[ w' = w'ww' = (w'w)(ww') \in \langle X^0\rangle \] by (B2), whence $w'\in E(S)$. But then \[ w = ww'w = (ww')(w'w) \in\langle X^0\rangle, \] again by (B2). This completes the proof. $\Box$ \begin{proposition}\label{thirdprop} Suppose that $S$ is a monoid and that $S=\langle G\cup\{z\}\rangle$ where $G=G(S)$ and~$z^3=z$. Then $S$ is inverse with \[ E(S)=\langle {g^{-1}z^2g~|~g\in G}\rangle \text{~~and~~} F(S)=\langle G\cup\{z^2\} \rangle \] if and only if, for all $g\in G$, \begin{align} \tag{C1} g^{-1}z^2gz^2 &= z^2g^{-1}z^2g \\ \tag{C2} zg^{-1}z^2gz &\in \langle G\cup\{z^2\}\rangle. \end{align} \end{proposition} \noindent{\bf Proof}\,\, First observe that $z$ is completely regular, with $z=z^{-1}$ and $z^0=z^2$. Now put \[ X=G\cup \{g^{-1}zg~|~g\in G\}. \] Then $S=\langle X\rangle$, and each $x\in X$ is completely regular. Further, if $y=g^{-1}zg$ (with $g\in G$), then $y^{-1}=y$ and $y^0=y^2=g^{-1}z^2g$. Thus,~${X^0 = \{1\}\cup\{g^{-1}z^2g~|~g\in G\}}$. Now if $S$ is inverse, then (C1) holds. 
Also, $zg^{-1}z^2gz\in E(S)$ for all $g\in G$ so that (C2) holds if $F(S)=\langle G\cup\{z^2\} \rangle$. Conversely, suppose now that (C1) and (C2) hold. We wish to verify Conditions (i) and~(ii) of Proposition \ref{secondprop}, so let $x,y\in X$. If $x^0=1$ or $y^0=1$, then (i) is immediate, so suppose~${x^0=g^{-1}z^2g}$ and $y^0=h^{-1}z^2h$ (where $g,h\in G$). By (C1) we have \[ x^0y^0 = h^{-1}(hg^{-1}z^2gh^{-1})z^2h = h^{-1}z^2(hg^{-1}z^2gh^{-1})h = y^0x^0, \] and (i) holds. If $x^0=1$ or $y\in G$, then (ii) is immediate, so suppose $x^0=g^{-1}z^2g$ and $y=h^{-1}zh$ (where~$g,h\in G$). Then $y=y^{-1}$ and, by (C2), \[ y^{-1}x^0y = h^{-1}(zh g^{-1}z^2g h^{-1}z) h \in h^{-1} \langle G\cup\{z^2\} \rangle h \subseteq\langle G\cup\{z^2\} \rangle. \] But by \cite[Lemma 2]{1} and (C1), $\langle G\cup\{z^2\} \rangle$ is a factorizable inverse submonoid of $S$ with~${E\big(\langle G\cup\{z^2\} \rangle\big)=\langle X^0\rangle}$. Since \[ (y^{-1}x^0y)^2 = y^{-1}x^0y^0x^0y = y^{-1}x^0y \in E\big(\langle G\cup\{z^2\} \rangle\big), \] it follows that $y^{-1}x^0y\in\langle X^0\rangle$, so that (ii) holds. So, by Proposition \ref{secondprop}, $S$ is inverse with $E\left( S\right) =\left\langle X^{0}\right\rangle =\left\langle g^{-1}z^{2}g~|~g\in G\right\rangle $ and, moreover, its factorizable part satisfies \[ F(S) = E(S)G \subseteq \langle G\cup X^0\rangle \subseteq \langle G\cup\{z^2\} \rangle \subseteq F(S). \] Hence $F(S)=\langle G\cup\{z^2\}\rangle$, and the proof is complete. $\Box$ \section{A Presentation of $\mathscr{I}_n^*$} If $n\leq2$, then $\mathscr{I}_n^*=\mathscr{F}_n$; that is, $\mathscr{I}_n^*$ coincides with its factorizable part. A presentation of $\mathscr{F}_n$ (for any~$n$) may be found in \cite{1} so, without loss of generality, we will assume for the remainder of the article that $n\geq3$. We first fix an alphabet \[ \mathscr{X}=\mathscr{X}_n=\{x,s_1,\ldots,s_{n-1}\}. \] Several notational conventions will prove helpful, and we note them here. The empty word will be denoted by $1$.
A word $s_i\cdots s_j$ is assumed to be empty if either (i) $i>j$ and the subscripts are understood to be ascending, or (ii) $i<j$ and the subscripts are understood to be descending. For $1\leq i,j\leq n-1$, we define integers \[ m_{ij} = \begin{cases} 1 &\quad\text{if\, $i=j$}\\ 3 &\quad\text{if\, $|i-j|=1$}\\ 2 &\quad\text{if\, $|i-j|>1$.} \end{cases} \] It will be convenient to use abbreviations for certain words in the generators which will occur frequently in relations and proofs. Namely, we write \[ \sigma = s_2s_3s_1s_2, \] and inductively we define words $l_2,\ldots,l_{n-1}$ and $y_3,\ldots,y_n$ by \begin{align*} l_2=xs_2s_1 &\AND l_{i+1}=s_{i+1}l_is_{i+1}s_i &\hspace{-1 cm}\text{for ${2\leq i\leq n-2}$,}\\ \intertext{and} y_3=x &\AND y_{i+1}=l_iy_is_i &\hspace{-2 cm}\text{for $3\leq i\leq n-1$.} \end{align*} Consider now the set $\mathscr{R}=\mathscr{R}_n$ of relations \begin{align} \tag{R1} (s_is_j)^{m_{ij}} &= 1 &&\text{for\, $1\leq i\leq j\leq n-1$}\\ \tag{R2} x^3 &= x \\ \tag{R3} xs_1=s_1x &= x \\ \tag*{} xs_2x =xs_2xs_2 &= s_2xs_2x\\ \tag{R4} &= xs_2x^2=x^2s_2x\\ \tag{R5} x^2\sigma x^2\sigma = \sigma x^2\sigma x^2 &= xs_2s_3s_2x \\ \tag{R6} y_is_iy_i &= s_iy_is_i &&\text{for\, $3\leq i\leq n-1$} \\ \tag{R7} xs_i &= s_ix &&\text{for\, $4\leq i\leq n-1$.} \end{align} Before we proceed, some words of clarification are in order. We say a relation belongs to~$\mathscr{R}_n$ \emph{vacuously} if it involves a generator $s_i$ which does not belong to $\mathscr{X}_n$; for example,~(R5) is vacuously present if $n=3$ because $\mathscr{X}_3$ does not contain the generator $s_3$. So the reader might like to think of $\mathscr{R}_n$ as the set of relations (R1---R4) if $n=3$, (R1---R6) if $n=4$, and (R1---R7) if $n\geq5$. We also note that we will mostly refer only to the $i=3$ case of relation (R6), which simply says $xs_3x=s_3xs_3$.
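Since the abbreviations $l_i$ and $y_j$ recur throughout the relations and the proofs below, it may be useful to unwind their inductive definitions mechanically. The following sketch is our own illustration, not part of the paper; it builds the abbreviations as words over $\mathscr{X}$, each word represented as a list of letters.

```python
# Sketch (our illustration): expand the inductive definitions of the
# abbreviations l_2,...,l_{n-1} and y_3,...,y_n as words over the
# alphabet {x, s1, ..., s(n-1)}.

def l_words(n):
    """l_2 = x s2 s1 and l_{i+1} = s_{i+1} l_i s_{i+1} s_i."""
    l = {2: ["x", "s2", "s1"]}
    for i in range(2, n - 1):
        l[i + 1] = [f"s{i + 1}"] + l[i] + [f"s{i + 1}", f"s{i}"]
    return l

def y_words(n):
    """y_3 = x and y_{i+1} = l_i y_i s_i."""
    l, y = l_words(n), {3: ["x"]}
    for i in range(3, n):
        y[i + 1] = l[i] + y[i] + [f"s{i}"]
    return y

print(" ".join(y_words(5)[4]))   # s3 x s2 s1 s3 s2 x s3
```

For example, $y_4$ expands to the word $s_3xs_2s_1s_3s_2xs_3$; since $s_1s_3=s_3s_1$ by (R1), this equals $s_3x\sigma xs_3$ in any monoid satisfying the relations.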
We aim to show that $\mathscr{I}_n^*$ has presentation $\langle \mathscr{X}~|~\mathscr{R} \rangle$, so put $M=M_n=\langle \mathscr{X}~|~\mathscr{R} \rangle=\mathscr{X}^*/\mathscr{R}^\sharp$. Elements of $M$ are $\mathscr{R}^\sharp$-classes of words over $\mathscr{X}$. However, in order to avoid cumbersome notation, we will think of elements of $M$ simply as words over $\mathscr{X}$, identifying two words if they are equivalent under the relations $\mathscr{R}$. Thus, the reader should be aware of this when reading statements such as ``\emph{Let $w\in M$}'' and so on. With our goal in mind, consider the map \[ \Phi=\Phi_n:\mathscr{X}\to\mathscr{I}_n^* \] defined by $$\textstyle{\text{$x\Phi = \big( {1,2 \atop 3} \big| {3 \atop 1,2} \big| {4 \atop 4} \big| {\cdots \atop \cdots} \big| {n \atop n} \big)$ \ \ and \ \ $s_i\Phi = \big( {1 \atop 1} \big|{\cdots \atop \cdots} \big|{i-1 \atop i-1} \big|{i \atop i+1} \big|{i+1 \atop i} \big|{i+2 \atop i+2} \big|{\cdots \atop \cdots}\big|{n \atop n} \big)$ for $1\leq i\leq n-1$.}}$$ See also Fig. 4 for illustrations. \begin{figure} \caption{The block bijections $x\Phi$ (left) and $s_i\Phi$ (right) in $\mathscr{I}_n^*$.} \label{picofgens} \end{figure} \begin{lemma}\label{relslemma} The monoid $\mathscr{I}_n^*$ satisfies $\mathscr{R}$ via $\Phi$. \end{lemma} \noindent{\bf Proof}\,\, This lemma may be proved by considering the relations one-by-one and diagrammatically verifying that they each hold. This is straightforward in most cases, but we include a proof for the more technical relation (R6). First, one may check that $l_i\Phi$ ($2\leq i\leq n-1$) and $y_j\Phi$ ($3\leq j\leq n$) have graphical representations as pictured in Fig. 5. \begin{figure} \caption{The block bijections $l_i\Phi$ (left) and $y_j\Phi$ (right) in $\mathscr{I}_n^*$.} \label{picofliyj} \end{figure} Using this, we demonstrate in Fig. 6 that relation (R6) holds. 
$\Box$ \begin{figure} \caption{A diagrammatic verification that relation (R6) is satisfied in $\mathscr{I}_n^*$ via $\Phi$.} \label{picofR6} \end{figure} By Lemma \ref{relslemma}, $\Phi$ extends to a homomorphism from $M=\langle \mathscr{X}~|~\mathscr{R} \rangle$ to $\mathscr{I}_n^*$ which, without causing confusion, we will also denote by $\Phi=\Phi_n$. By \cite[Proposition 16]{5}, $\mathscr{I}_n^*$ is generated by~$\mathscr{X}\Phi$, so that $\Phi$ is in fact an epimorphism. Thus, it remains to show that $\Phi$ is injective, and the remainder of the paper is devoted to this task. The proof we offer is perhaps unusual in the sense that it uses, in the general case, not a normal form for elements of $M$, but rather structural information about $M$ and an inductive argument. The induction is founded on the case~${n=3}$, for which a normal form is given in the next proposition. \begin{proposition}\label{n=3case} The map $\Phi_3$ is injective. \end{proposition} \noindent{\bf Proof}\,\, Consider the following list of 25 words in $M_3$: \begin{itemize} \item the 6 units $\{1,s_1,s_2,s_1s_2,s_2s_1,s_1s_2s_1\}$, \item the 18 products in $\{1,s_2,s_1s_2\}\{x,x^2\}\{1,s_2,s_2s_1\}$, and \item the zero element $xs_2x$. \end{itemize} This list contains the generators, and is easily checked to be closed under multiplication on the right by the generators. Thus, $|M_3|\leq 25$. But $\Phi_3$ is a surjective map from $M_3$ onto $\mathscr{I}_3^*$, which has cardinality $25$. It follows that $|M_3|=25$, and that $\Phi_3$ is injective. $\Box$ From this point forward, we assume that $n\geq4$. The inductive step in our argument relies on Proposition \ref{firstprop} below, which provides a sufficient condition for a homomorphism of inverse monoids to be injective. Let $S$ be an inverse monoid and, for $s,t\in S$, write~$s^t=t^{-1}st$. 
We say that a non-identity idempotent $e\in E(S)$ has property (P) if, for all non-identity idempotents $f\in E(S)$, there exists $g\in G(S)$ such that~$f^g\in eSe$. \begin{proposition}\label{firstprop} Let $S$ be an inverse monoid with $E=E(S)$ and $G=G(S)$. Suppose that~$1\not=e\in E$ has property (P). Let $\phi:S\to T$ be a homomorphism of inverse monoids for which $\phi|_E$, $\phi|_G$, and $\phi|_{eSe}$ are injective. Then $\phi$ is injective. \end{proposition} \noindent{\bf Proof}\,\, By the kernel-and-trace description of congruences on $S$ \cite[Section 5.1]{4}, and the injectivity of $\phi|_G$, it is enough to show that $x\phi=f\phi$ (with $x\in S$ and $1\not=f\in E$) implies $x=f$, so suppose that $x\phi=f\phi$. Choose $g\in G$ such that $f^g\in eSe$. Now $ (xx^{-1})\phi=f\phi=(x^{-1}x)\phi, $ so that $f=xx^{-1}=x^{-1}x$, since $\phi|_E$ is injective. Thus $f \mathscr{H} x$ and it follows that $f^g \mathscr{H} x^g$, so that $x^g\in eSe$. Now $x\phi=f\phi$ also implies $x^g\phi=f^g\phi$ and so, by the injectivity of $\phi|_{eSe}$, we have $x^g=f^g$, whence $x=f$. $\Box$ It is our aim to apply Proposition \ref{firstprop} to the map $\Phi:M\to\mathscr{I}_n^*$. In order to do this, we first use Proposition \ref{thirdprop} to show (in Section \ref{sect:structure}) that~$M$ is inverse, and we also deduce information about its factorizable part~$F(M)$, including the fact that $\Phi|_{F(M)}$ is injective; this then implies that both~$\Phi|_{E(M)}$ and~$\Phi|_{G(M)}$ are injective too. Finally, in Section \ref{sect:local}, we locate a non-identity idempotent $e\in M$ which has property (P). We then show that the injectivity of $\Phi|_{eMe}$ is equivalent to the injectivity of $\Phi_{n-1}$ which we assume, inductively. We first pause to make some observations concerning the factorizable part $\mathscr{F}_n$ of $\mathscr{I}_n^*$. 
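Membership of the factorizable part is easy to test mechanically. Encoding a block bijection on an $n$-set as a partition of $\{0,\ldots,2n-1\}$ (with $i$ standing for the vertex $i+1$ and $n+i$ for the primed vertex $(i+1)'$), an element is uniform, and hence lies in $\mathscr{F}_n$, precisely when each of its blocks contains equally many plain and primed vertices. The following is our own illustrative sketch, not part of the paper.

```python
# Sketch (our illustration): test membership of the factorizable part F_n.
# A block bijection is encoded as a list of blocks over {0,...,2n-1},
# where i stands for vertex i+1 and n+i for the primed vertex (i+1)'.
# It is uniform iff each block has as many plain as primed vertices,
# i.e. corresponding blocks of domain and range have equal cardinality.

def is_uniform(blocks, n):
    return all(sum(v < n for v in b) == sum(v >= n for v in b)
               for b in blocks)

x_phi = [{0, 1, 5}, {2, 3, 4}]        # x Phi for n = 3: not uniform
s1_phi = [{0, 4}, {1, 3}, {2, 5}]     # s_1 Phi: a permutation, uniform
e_phi = [{0, 1, 3, 4}, {2, 5}]        # x^2 Phi, the idempotent (1,2 | 3)

print(is_uniform(x_phi, 3), is_uniform(s1_phi, 3), is_uniform(e_phi, 3))
# False True True
```

That $x\Phi$ fails the test while $s_1\Phi$ and $x^2\Phi$ pass it is consistent with the description of $\mathscr{F}_n$ as the set product $\mathscr{E}_n\mathcal{S}_n$.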
\subsection{The factorizable part of $\mathscr{I}_n^*$}\label{sect:factIn*} Define an alphabet $\mathscr{X}_F=\{t,s_1,\ldots,s_{n-1}\}$, and consider the set $\mathscr{R}_F$ of relations \begin{align} \tag{F1} (s_is_j)^{m_{ij}} &= 1 &&\text{for\, $1\leq i\leq j\leq n-1$}\\ \tag{F2} t^2 &= t \\ \tag{F3} ts_1=s_1t &=t \\ \tag{F4} ts_i &= s_it &&\text{for\, $3\leq i\leq n-1$}\\ \tag{F5} ts_2ts_2 &= s_2ts_2t\\ \tag{F6} t\sigma t\sigma &= \sigma t\sigma t. \end{align} (Recall that $\sigma$ denotes the word $s_2s_3s_1s_2$.) The following result was proved in \cite{1}. \begin{thm}\label{Fnpres} The monoid $\mathscr{F}_n=F(\mathscr{I}_n^*)$ has presentation $\langle \mathscr{X}_F~|~\mathscr{R}_F \rangle$ via \begin{equation*} s_i\mapsto s_i\Phi,\,\, t \mapsto (1,2\,|\,3\,|\cdots|\,n). \Box \end{equation*} \end{thm} \begin{lemma}\label{RFholdsinM} The relations $\mathscr{R}_F$ hold in $M$ via the map $\Theta:t\mapsto x^2,\,s_i\mapsto s_i$. \end{lemma} \noindent{\bf Proof}\,\, Relations (F1---F3) are immediate from (R1---R3); (F5) follows from several applications of~(R4); and (F6) forms part of (R5). The $i\geq4$ case of (F4) follows from (R7), and the $i=3$ case follows from~(R1) and (R6), since \begin{equation*} x^2s_3 = xs_3s_3xs_3 = xs_3 xs_3x = s_3xs_3 s_3x = s_3x^2. \end{equation*} $\Box$ It follows that $\Theta\circ\Phi$ extends to a homomorphism of $\langle \mathscr{X}_F~|~\mathscr{R}_F \rangle$ to $\mathscr{F}_n$, which is an isomorphism by Theorem \ref{Fnpres}. We conclude that $\Phi|_{\langle x^2,s_1,\ldots,s_{n-1}\rangle} = \Phi|_{\text{im} (\Theta)}$ is injective (and therefore an isomorphism). \subsection{The structure of $M$}\label{sect:structure} It is easy to see that the group of units $G(M)$ is the subgroup generated by $\{s_1,\ldots,s_{n-1}\}$. The reason for this is that relations (R2---R7) contain at least one occurrence of $x$ on both sides. 
Now (R1) forms the set of defining relations in Moore's famous presentation~\cite{6} of the symmetric group $\mathcal{S}_n$. Thus we may identify $G(M)$ with~$\mathcal{S}_n$ in the obvious way. Part (i) of the following well-known result (Lemma \ref{Snnorm}) gives a normal form for the elements of $\mathcal{S}_n$ (and is probably due to Burnside; a proof is also sketched in \cite{1}). The second part follows immediately from the first, and is expressed in terms of a convenient contracted notation which is defined as follows. Let $1\leq i\leq n-1$, and $0\leq k\leq n-1$. We write \[ s_i^k = \begin{cases} s_i &\quad\text{if\, $i\leq k$}\\ 1 &\quad\text{otherwise.} \end{cases} \] The reader might like to think of this as abbreviating $s_i^{k\geq i}$, where $k\geq i$ is a boolean value, equal to $1$ if $k\geq i$ holds and $0$ otherwise. \begin{lemma}\label{Snnorm} Let $g\in G(M)=\langle s_1,\ldots,s_{n-1}\rangle$. Then \begin{description} \item[\emph{(i)}] $g=(s_{i_1}\cdots s_{j_1})\cdots(s_{i_k}\cdots s_{j_k})$ for some $k\geq0$ and some $i_1\leq j_1,\ldots,i_k\leq j_k$ with $1\leq i_k<\cdots<i_1\leq n-1$, and \item[\emph{(ii)}] $g=hs_2^ks_3^ks_4^k(s_5\cdots s_k)s_1^\ell s_2^\ell s_3^\ell(s_4\cdots s_\ell) = hs_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell(s_5\cdots s_k)(s_4\cdots s_\ell)$ for some $h\in\langle s_3,\ldots,s_{n-1}\rangle$, $k\geq 1$ and $\ell\geq 0$. $\Box$ \end{description} \end{lemma} We are now ready to prove the main result of this section. \begin{proposition}\label{Misinverse} The monoid $M=\langle \mathscr{X}~|~\mathscr{R} \rangle$ is inverse, and we have \[ E(M) = \langle g^{-1}x^2g~|~g\in G(M)\rangle \text{~~~and~~~} F(M)=\langle x^2, s_1,\ldots,s_{n-1}\rangle. \] \end{proposition} \noindent{\bf Proof}\,\, Put $G=G(M)=\langle s_1,\ldots,s_{n-1}\rangle$. So $M=\langle G\cup\{x\}\rangle$ and $x=x^3$. We will now verify conditions (C1) and (C2) of Proposition \ref{thirdprop}.
By Lemma \ref{RFholdsinM}, $\langle G\cup\{x^2\}\rangle$ is a homomorphic (in fact isomorphic) image of $\mathscr{F}_n$, so $g^{-1}x^2g$ commutes with $x^2$ for all $g\in G$, and condition (C1) is verified. To prove (C2), let $g\in G$. By Lemma \ref{Snnorm}, we have $$g= hs_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell(s_5\cdots s_k)(s_4\cdots s_\ell)$$ for some $h\in\langle s_3,\ldots,s_{n-1}\rangle$, $k\geq 1$ and $\ell\geq 0$. Now \begin{align*} &xg^{-1}x^2gx \\&= x(s_\ell\cdots s_4)(s_k\cdots s_5) s_3^\ell s_2^\ell s_1^\ell s_4^ks_3^ks_2^k (h^{-1}x^2h)s_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell(s_5\cdots s_k)(s_4\cdots s_\ell)x \\ &= (s_\ell\cdots s_4)(s_k\cdots s_5) xs_3^\ell s_2^\ell s_1^\ell s_4^ks_3^ks_2^k x^2 s_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell x(s_5\cdots s_k)(s_4\cdots s_\ell), \end{align*} by (R7), (F4), and (R1). Thus it suffices to show that $x(x^2)^\pi x\in\langle G\cup\{x^2\}\rangle$, where we have written $\pi=s_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell$. Altogether there are 16 cases to consider for all pairs~$(k,\ell)$ with~$k=1,2,3,\geq4$ and $\ell=0,1,2,\geq3$. Table 1 below contains an equivalent form of~$x(x^2)^\pi x$ as a word over $\{x^2,s_1,\ldots,s_{n-1}\}$ for each $(k,\ell)$, as well as a list of the relations used in deriving the expression. We performed the calculations in the order determined by going along the first row from left to right, then the second, third, and fourth rows. Thus, as for example in the case $(k,\ell)=(2,1)$, we have used expressions from previously considered cases. 
\begin{table}[ht] {\footnotesize \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline & $\ell=0$ & $\ell=1$ & $\ell=2$ & $\ell\geq 3$\\ \hline \hline & $x^2$ & $x^2$ & $x^2s_2x^2$ & $x^2\sigma x^2\sigma$ \\ $k=1$ & (R2) & (R2,3) & (R3,4) & (R1,2,3,5) \\ & & & & and (F4) \\ \hline & $x^2s_2x^2$ & $x^2s_2x^2$ & $x^2s_2x^2$ & $x^2\sigma x^2\sigma$ \\ $k=2$ & (R4) & (R3,4) & (R1,3,4) & (R1,3) and \\ & & & & $(k,\ell)=(1,\geq3)$ \\ \hline & $x^2\sigma x^2\sigma$ & $x^2\sigma x^2\sigma$ & $x^2s_2s_3s_2x^2$ & $x^2s_2s_3s_2x^2$ \\ $k=3$ & $(k,\ell)=(1,\geq3)$ & (R3) and & (R1,2,5) & (R1,3) and \\ & & $(k,\ell)=(1,\geq3)$ & & $(k,\ell)=(3,2)$ \\ \hline & $s_4x^2\sigma x^2\sigma s_4$ & $s_4x^2\sigma x^2\sigma s_4$ & $s_4x^2s_2s_3s_2x^2s_4$ & $s_3s_4x^2\sigma x^2\sigma s_4s_3$ \\ $k\geq4$ & (R7) and & (R3) and & (R1,7) and & (R1,2,5,6,7) \\ & $(k,\ell)=(1,\geq3)$ & $(k,\ell)=(\geq4,0)$ & $(k,\ell)=(3,2)$ & and (F4) \\ \hline \end{tabular} \end{center} } \caption{Expressions for $x(x^2)^\pi x$ and the relations used. See text for further explanation.} \label{table1} \end{table} In order that readers need not perform all the calculations themselves, we provide a small number of sample derivations. The first case we consider is that in which $(k,\ell)=(1,2)$. In this case we have $\pi=s_1s_2$ and, by (R3) and several applications of (R4), we calculate \[ x(x^2)^\pi x = xs_2s_1x^2s_1s_2x = xs_2x^2s_2x = x^2s_2x^2. \] Next suppose $(k,\ell)=(1,\geq3)$. (Here we mean that $k=1$ and $\ell\geq3$.)
Then $\pi=s_1s_2s_3$ and \begin{align*} x(x^2)^\pi x &= xs_3s_2s_1x^2s_1s_2s_3x \\ &= xs_3s_2x^2s_2s_3x &&\text{by (R3)}\\ &= xs_3s_2s_3x^2s_3s_2s_3x &&\text{by (R1) and (F4)}\\ &= (x^2\sigma x^2\sigma)(\sigma x^2\sigma x^2) &&\text{by (R1) and (R5)}\\ &= x^2\sigma x^2\sigma x^2 &&\text{by (R1) and (R2)}\\ &= x^2\sigma x^2\sigma &&\text{by (R5) and (R2).} \end{align*} If $(k,\ell)=(3,2)$, then $\pi=\pi^{-1}=\sigma$ by (R1) and, by (R2) and (R5), we have \[ x(x^2)^\pi x = x\sigma x^2\sigma x = x(\sigma x^2 \sigma x^2) x = x(xs_2s_3s_2x)x. \] If $(k,\ell)=(\geq4,2)$ then $\pi=s_2s_3s_4s_1s_2=\sigma s_4$ by (R1), and so, using (R7) and the $(k,\ell)=(3,2)$ case, we have \[ x(x^2)^\pi x = xs_4\sigma x^2\sigma s_4x = s_4x\sigma x^2\sigma xs_4 = s_4(x^2s_2s_3s_2x^2)s_4. \] Finally, we consider the $(k,\ell)=(\geq4,\geq3)$ case. Here we have $\pi=s_2s_3s_4s_1s_2s_3=\sigma s_4s_3$ by (R1), and so \begin{align*} x(x^2)^\pi x &= xs_3s_4\sigma x^2\sigma s_4s_3x\\ &= xx^2s_3s_4\sigma x^2\sigma s_4s_3x &&\text{by (R2)}\\ &= xs_3s_4x^2\sigma x^2\sigma s_4s_3x &&\text{by (F4)}\\ &= xs_3s_4xs_2s_3s_2x s_4s_3x &&\text{by (R5)}\\ &= xs_3xs_4s_2s_3s_2s_4x s_3x &&\text{by (R7)}\\ &= s_3xs_3s_4s_3s_2s_3s_4 s_3xs_3 &&\text{by (R1) and (R6)}\\ &= s_3xs_4s_3s_4s_2s_4s_3 s_4xs_3 &&\text{by (R1)}\\ &= s_3s_4xs_3s_2s_3 xs_4s_3 &&\text{by (R1) and (R7)}\\ &= s_3s_4(x^2\sigma x^2\sigma)s_4s_3 &&\text{by (R1) and (R5).} \end{align*} After checking the other cases, the proof is complete. $\Box$ After the proof of Lemma \ref{RFholdsinM}, we observed that $\Phi|_{\langle x^2,s_1,\ldots,s_{n-1}\rangle}$ is injective. By Proposition \ref{Misinverse}, we conclude that $\Phi|_{F(M)}$ is injective. In particular, both $\Phi|_{E(M)}$ and~$\Phi|_{G(M)}$ are injective. \subsection{A local submonoid}\label{sect:local} Now put $e=x^2\in M$. So clearly $e$ is a non-identity idempotent of $M$. Our goal in this section is to show that $e$ has property (P), and that $eMe$ is a homomorphic image of $M_{n-1}$. 
\begin{lemma}\label{propP} The non-identity idempotent $e=x^2\in M$ has property (P). \end{lemma} \noindent{\bf Proof}\,\, Let $1\not=f\in E(M)$. By Proposition \ref{Misinverse} we have $f=e^{g_1}e^{g_2}\cdots e^{g_k}$ for some~$k\geq1$ and $g_1,g_2,\ldots,g_k\in G(M)$. But then $$f^{g_1^{-1}}= e\,e^{g_2g_1^{-1}}\cdots e^{g_kg_1^{-1}}e\in eMe.$$ $\Box$ We now define words \[ X=s_3x\sigma xs_3 ,\, S_1=x ,\quad\text{and}\quad S_j = es_{j+1} \quad\text{for\, $j=2,\ldots,n-2$,} \] and write $\mathscr{Y}=\mathscr{Y}_{n-1}=\{X,S_1,\ldots,S_{n-2}\}$. We note that $e$ is a left and right identity for the elements of $\mathscr{Y}$ so that $\mathscr{Y}\subseteq eMe$, and that $X=y_4$ (by definition). \begin{proposition}\label{eMegens} The submonoid $eMe$ is generated (as a monoid with identity $e$) by $\mathscr{Y}$. \end{proposition} \noindent{\bf Proof}\,\, We take $w\in M$ with the intention of showing that $u=ewe\in eMe$ belongs to~$\langle\mathscr{Y}\rangle$. We do this by induction on the (minimum) number $d=d(w)$ of occurrences of~$x^\delta$~($\delta\in\{1,2\}$) in $w$. Suppose first that $d=0$, so that $u=ege$ where $g\in G(M)$. By Lemma \ref{Snnorm} we have \[ g=h(s_2^js_3^js_1^is_2^i)(s_4\cdots s_j)(s_3\cdots s_i) \] for some $h\in\langle s_3,\ldots,s_{n-1}\rangle$ and $j\geq1$, $i\geq0$. Put $h'=(s_4\cdots s_j)(s_3\cdots s_i)\in\langle s_3,\ldots,s_{n-1}\rangle$. Now by (F2) and (F4) we have \[ u = ege = eh \cdot e(s_2^js_3^js_1^is_2^i)e \cdot eh'. \] By (F2) and (F4) again, we see that $eh,eh'\in\langle S_2,\ldots,S_{n-2}\rangle$, so it is sufficient to show that the word~${e\pi e}$ belongs to $\langle\mathscr{Y}\rangle$, where we have written $\pi=s_2^js_3^js_1^is_2^i$. Table 2 below contains an equivalent form of $e\pi e$ as a word over $\mathscr{Y}$ for each $(i,j)$, as well as a list of the relations used in deriving the expression. \begin{table}[ht!]
{\footnotesize \begin{center} \begin{tabular}{|c||c|c|c|} \hline & $j=1$ & $j=2$ & $j\geq3$ \\ \hline \hline & $e$ & $X^2$ & $X^2S_2$ \\ $i=0$ & (R2) & (R1,2,5) & (R2), (F4), and \\ & & and (F4) & $(i,j)=(0,2)$ \\ \hline & $e$ & $X^2$ & $X^2S_2$ \\ $i=1$ & (R2,3) & (R3) and & (R3) and \\ & & $(i,j)=(0,2)$ & $(i,j)=(0,\geq3)$ \\ \hline & $X^2$ & $X^2$ & $S_1S_2XS_2S_1$ \\ $i\geq2$ & (R3) and & (R1,3) and & (R1,2) \\ & $(i,j)=(0,2)$ & $(i,j)=(0,2)$ & and (F4) \\ \hline \end{tabular} \end{center} \caption{Expressions for $e\pi e$ and the relations used. See text for further explanation.} } \label{table2} \end{table} Most of these derivations are rather straightforward, but we include two example calculations. For the $(i,j)=(0,2)$ case, note that \begin{align*} X^2 = s_3x\sigma xs_3s_3x\sigma x s_3 &= s_3x(x^2\sigma x^2\sigma) x s_3 &&\text{by (R1) and (R2)}\\ &= s_3x(xs_2s_3s_2x) x s_3 &&\text{by (R5)}\\ &= s_3x^2s_3s_2s_3x^2 s_3 &&\text{by (R1)}\\ &= x^2s_2x^2 &&\text{by (F4) and (R1)}\\ &= e\pi e.\\ \intertext{For the $(i,j)=(\geq2,\geq3)$ case, we have $\pi=\sigma$ and} e\pi e = x^2\sigma x^2 &= xs_3s_3x\sigma xs_3s_3x &&\text{by (R1)}\\ &= x(x^2s_3)s_3x\sigma xs_3(s_3x^2)x &&\text{by (R2)}\\ &= x(x^2s_3)s_3x\sigma xs_3(x^2s_3)x &&\text{by (F4)}\\ &= S_1S_2XS_2S_1. \end{align*} This establishes the $d=0$ case. Now suppose $d\geq1$, so that $w=vx^\delta g$ for some $\delta\in\{1,2\}$, $v\in M$ with $d(v)=d(w)-1$, and $g\in G(M)$. Then by (R2), \[ u = ewe = (eve)x^\delta(ege). \] Now $x^\delta$ belongs to $\langle\mathscr{Y}\rangle$ since $x^\delta$ is equal to $S_1$ (if $\delta=1$) or $e$ (if $\delta=2$). By an induction hypothesis we have $eve\in\langle\mathscr{Y}\rangle$ and, by the $d=0$ case considered above, we also have~${ege\in\langle\mathscr{Y}\rangle}$. $\Box$ The next step in our argument is to prove (in Proposition \ref{eMerels} below) that the elements of~$\mathscr{Y}_{n-1}$ satisfy the relations $\mathscr{R}_{n-1}$ via the obviously defined map.
Before we do this, however, it will be convenient to prove the following basic lemma. If $w\in M$, we write $\text{rev}(w)$ for the word obtained by writing the letters of $w$ in reverse order. We say that $w$ is \emph{symmetric} if $w=\text{rev}(w)$. \begin{lemma}\label{intlem} If $w\in M$ is symmetric, then $w=w^3$ and $w^2\in E(M)$. \end{lemma} \noindent{\bf Proof}\,\, Now $z=z^{-1}$ for all $z\in\mathscr{X}$ and it follows that $w^{-1}=\text{rev}(w)$ for all $w\in M$. So, if~$w$ is symmetric, then $w=w^{-1}$, in which case $w=ww^{-1}w=w^3$. It then follows that $(w^2)^2=w^3w=ww=w^2$, so that $w^2\in E(M)$. $\Box$ \begin{proposition}\label{eMerels} The elements of $\mathscr{Y}_{n-1}$ satisfy the relations $\mathscr{R}_{n-1}$ via the map \[ \Psi: \mathscr{X}_{n-1}\to eMe:x\mapsto X,\,s_i\mapsto S_i. \] \end{proposition} \noindent{\bf Proof}\,\, We consider the relations from $\mathscr{R}_{n-1}$ one at a time. In order to avoid confusion, we will refer to the relations from $\mathscr{R}_{n-1}$ as (R1)$'$, (R2)$'$, etc. We also extend the use of upper case symbols for the element $\Sigma=S_2S_3S_1S_2$ as well as the words $L_i$ (for~$i=2,\ldots,n-2$) and $Y_j$ (for $j=3,\ldots,n-1$). It will also prove convenient to refer to the idempotents \[ e_i = e^{(s_2\cdots s_i)(s_1\cdots s_{i-1})}\in E(M), \] defined for each $1\leq i\leq n-1$. Note that $e_1=e$, and that $e_i\Phi\in\mathscr{I}_n^*$ is the idempotent with domain $(1\,|\cdots|\,i-1\,|\,i,i+1\,|\,i+2\,|\cdots|\,n)$. We first consider relation (R1)$'$. We must show that $(S_iS_j)^{m_{ij}}=e$ for all $1\leq i\leq j\leq n-2$. Suppose first that $i=j$. Now $S_1^2=e$ by definition and if $2\leq i\leq n-2$ then, by (R1), (R2), (R7), and (F4), we have $S_i^2=es_{i+1}es_{i+1}=e^2s_{i+1}^2=e$.
Next, if $2\leq j\leq n-2$, then \begin{align*} (S_1S_j)^{m_{1j}} = (xe&s_{j+1})^{m_{1j}} = (xs_{j+1})^{m_{1j}} \\ &= \begin{cases} xs_3xs_3xs_3 = s_3xs_3s_3xs_3=s_3x^2s_3=x^2=e &\quad\text{if\, $j=2$}\\ xs_{j+1}xs_{j+1}=x^2s_{j+1}^2=x^2=e &\quad\text{if\, $j\geq3$,} \end{cases} \end{align*} by (R1), (R2), (R6), (R7), and (F4). Finally, if $2\leq i\leq j\leq n-2$, then \[ (S_iS_j)^{m_{ij}} = (es_{i+1}es_{j+1})^{m_{i+1,j+1}} = e(s_{i+1}s_{j+1})^{m_{i+1,j+1}} = e, \] by (R1), (R2), and (F4). This completes the proof for (R1)$'$. For (R2)$'$, we have \begin{align*} X^3 &= (s_3x\sigma xs_3)(s_3x\sigma xs_3)(s_3x\sigma xs_3)\\ &= s_3x\sigma x^2\sigma x^2\sigma xs_3 &&\text{by (R1)}\\ &= s_3xx^2\sigma x^2\sigma \sigma xs_3 &&\text{by (R5)}\\ &= s_3x\sigma xs_3 &&\text{by (R1) and (R2)}\\ &= X.\\ \intertext{For (R3)$'$, first note that} XS_1 &=s_3x\sigma xs_3x\\ &=s_3x\sigma s_3xs_3 &&\text{by (R6)}\\ &=s_3xs_1\sigma xs_3 &&\text{by (R1)}\\ &=s_3x\sigma xs_3 &&\text{by (R3)}\\ &=X. \end{align*} (Here, and later, we use the fact that $\sigma s_3=s_1\sigma$, and so also $\sigma s_1=s_3\sigma$. These are easily checked using (R1) or by drawing pictures.) The relation $S_1X=X$ is proved by a symmetrical argument. To prove (R4)$'$, we need to show that $XS_2X$ is a (left and right) zero for $X$ and $S_2$. Since~$X,S_2\in\langle x,s_1,s_2,s_3\rangle$, it suffices to show that $XS_2X$ is a zero for each of $x,s_1,s_2,s_3$. In order to shorten the proof, it will be convenient to use the following ``arrow notation''. If~$a$ and $b$ are elements of a semigroup, we write $a\mathrel{\hspace{-0.35 ex}>\hspace{-1.1ex}-}\hspace{-0.35 ex} b$ and $a\mathrel{\hspace{-0.35ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} b$ to denote the relations~$ab=a$ ($a$ is a left zero for $b$) and $ba=a$ ($a$ is a right zero for $b$), respectively.
The arrows may be superimposed, so that $a\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} b$ indicates the presence of both relations. We first calculate \begin{align*} XS_2X &= (s_3x\sigma xs_3)es_3(s_3x\sigma xs_3)\\ &= s_3x\sigma xs_3x\sigma xs_3 &&\text{by (R1) and (R2)}\\ &= s_3x\sigma s_3xs_3\sigma xs_3 &&\text{by (R6)}\\ &= s_3xs_1\sigma x\sigma s_1xs_3 &&\text{by (R1)}\\ &= s_3x\sigma x\sigma xs_3 &&\text{by (R3).} \end{align*} Put $w=s_3x\sigma x\sigma xs_3$. We see immediately that $w\mathrel{\hspace{-0.35 ex}>\hspace{-1.1ex}-}\hspace{-0.35 ex} x$ since \[ wx=s_3x\sigma x\sigma xs_3x = s_3x\sigma x\sigma s_3xs_3 = s_3x\sigma xs_1\sigma xs_3 = s_3x\sigma x\sigma xs_3 = w, \] by (R1), (R3), and (R6), and a symmetrical argument shows that $w\mathrel{\hspace{-0.35ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} x$. Next, note that~$w$ is symmetric so that $w=w^3$ and $w\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} w^2$, by Lemma \ref{intlem}. Since $\hspace{.2ex}\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}}\hspace{.2ex}$ is transitive, the proof of~(R4)$'$ will be complete if we can show that $w^2\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} s_1,s_2,s_3$. Now by Lemma \ref{intlem} again we have $w^2\in E(M)$ and, since $w^2\Phi=(e_1e_2e_3)\Phi$ as may easily be checked diagrammatically, we have $w^2=e_1e_2e_3$ by the injectivity of $\Phi|_{E(M)}$. But $(e_1e_2e_3)\Phi\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} s_1\Phi,s_2\Phi,s_3\Phi$ in $\mathscr{F}_n$ and so, by the injectivity of $\Phi|_{F(M)}$, it follows that $w^2=e_1e_2e_3\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} s_1,s_2,s_3$. 
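Lemma \ref{intlem}, which was used repeatedly above, also lends itself to a quick machine check in a concrete inverse monoid. The following sketch works in the ordinary symmetric inverse monoid of partial injections on a small finite set (an illustrative analogue, not a computation in $M$ itself); since every generator chosen below is self-inverse, $w^{-1}=\text{rev}(w)$ holds exactly as in the proof of the lemma.

```python
# Sanity check of Lemma (intlem) in an analogous inverse monoid: partial
# injections on {0,1,2}, encoded as dicts.  With self-inverse generators,
# the inverse of a word is the reversed word, so a symmetric word w
# satisfies w = w^3 and w^2 is idempotent.
from functools import reduce

def mul(a, b):
    # Product a*b: apply b first, then a (composition of partial maps).
    return {x: a[b[x]] for x in b if b[x] in a}

def inv(a):
    # Inverse of a partial injection: swap keys and values.
    return {v: k for k, v in a.items()}

t = {0: 1, 1: 0, 2: 2}   # transposition (0 1): self-inverse
u = {0: 0, 1: 2, 2: 1}   # transposition (1 2): self-inverse
d = {0: 0, 1: 1}         # partial identity: self-inverse idempotent

for word in ([t, u, t], [t, d, t], [u, t, d, t, u]):
    w = reduce(mul, word)
    assert w == reduce(mul, list(reversed(word)))   # w is symmetric
    assert inv(w) == w                              # hence w = w^{-1}
    assert mul(mul(w, w), w) == w                   # w = w^3
    w2 = mul(w, w)
    assert mul(w2, w2) == w2                        # w^2 is idempotent
```

Of course, this does not replace the diagrammatic verifications above; it merely gives an independent check of the mechanism behind the lemma.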
Relations (R5---R7)$'$ all hold vacuously if $n=4$, so for the remainder of the proof we assume that $n\geq5$. Next we consider (R5)$'$. Now, by (R2) and (F4), we see that \[ \Sigma=S_2S_3S_1S_2 = (es_3)(es_4)x(es_3) = s_3s_4xs_3. \] In particular, $\Sigma$ is symmetric, by (R7), and $\Sigma^{-1}=\Sigma$. Also, $X=s_3x\sigma xs_3$ is symmetric and so~$X^2$ is idempotent by Lemma \ref{intlem}. It follows that $X^2\Sigma X^2\Sigma$ and $\Sigma X^2\Sigma X^2$ are both idempotent. Since $(X^2\Sigma X^2\Sigma)\Phi=(\Sigma X^2\Sigma X^2)\Phi$, as may easily be checked diagrammatically, we conclude that $X^2\Sigma X^2\Sigma=\Sigma X^2\Sigma X^2$, by the injectivity of $\Phi|_{E(M)}$. It remains to check that $XS_2S_3S_2X=\Sigma X^2\Sigma X^2$. Since $(XS_2S_3S_2X)\Phi=(\Sigma X^2\Sigma X^2)\Phi$, it suffices to show that~$XS_2S_3S_2X\in E(M)$. By (R1), (R2), and (F4), \[ XS_2S_3S_2X = (s_3x\sigma xs_3)es_3es_4es_3(s_3x\sigma xs_3) = s_3(x\sigma x)s_4(x\sigma x)s_3, \] so it is enough to show that $v=(x\sigma x)s_4(x\sigma x)$ is idempotent. We see that \begin{align*} v^2 &= x\sigma xs_4x\sigma x^2\sigma xs_4x\sigma x\\ &= x\sigma xs_4x(x^2\sigma x^2\sigma) xs_4x\sigma x &&\text{by (R2)}\\ &= x\sigma xs_4x(xs_2s_3s_2x) xs_4x\sigma x &&\text{by (R5)}\\ &= x\sigma xs_4s_2s_3s_2 s_4x\sigma x &&\text{by (R2) and (R7).} \end{align*} Put $u=x\sigma x$. Since $u$ is symmetric, Lemma \ref{intlem} says that $u^2\in E(M)$. One verifies easily that $(u^2s_4s_2s_3s_2s_4u^2)\Phi=(u^2s_4u^2)\Phi$ in $\mathscr{F}_n$ and it follows, by the injectivity of~$\Phi|_{F(M)}$, that $u^2s_4s_2s_3s_2s_4u^2=u^2s_4u^2$. By Lemma \ref{intlem} again, we also have $u=u^3$ so that \[ v^2 = us_4s_2s_3s_2 s_4u = u u^2s_4s_2s_3s_2 s_4u^2u = u u^2s_4u^2u = u s_4u = v, \] and (R5)$'$ holds. Now we consider (R6)$'$, which says $Y_iS_iY_i=S_iY_iS_i$ for $i\geq3$. So we must calculate the words $Y_i$ which, in turn, are defined in terms of the words $L_i$. 
Now ${L_2=XS_2S_1}$ and ${L_{i+1}=S_{i+1}L_iS_{i+1}S_i}$ for $i\geq2$. A straightforward induction shows that ${L_i=l_{i+1}e}$ for all~${i\geq2}$. This, together with the definition of the words $Y_i$ (as $Y_3=X$, and~${Y_{i+1}=L_iY_iS_i}$ for $i\geq3$) and a simple induction, shows that $Y_i=y_{i+1}$ for all $i\geq3$. But then for~${3\leq i\leq n-2}$, we have \[ Y_iS_iY_i = y_{i+1}es_{i+1}y_{i+1} = y_{i+1}s_{i+1}y_{i+1} = s_{i+1}y_{i+1}s_{i+1} = s_{i+1}ey_{i+1}es_{i+1} = S_iY_iS_i. \] Here we have used (R6) and (F4), and the fact, verifiable by a simple induction, that $y_je=ey_j=y_j$ for all $j$. Relation (R7)$'$ holds vacuously when $n=5$ so, to complete the proof, suppose $n\geq6$ and~$i\geq4$. Now $Xe=eX=X$ as we have already observed, and $Xs_{i+1}=s_{i+1}X$ by (R1) and (R7). Thus $X$ commutes with $es_{i+1}=S_i$. This completes the proof. $\Box$ \subsection{Conclusion} We are now ready to tie together all the loose ends. \begin{thm} The dual symmetric inverse monoid $\mathscr{I}_n^*$ has presentation $\langle \mathscr{X}~|~\mathscr{R} \rangle$ via $\Phi$. \end{thm} \noindent{\bf Proof}\,\, All that remains is to show that $\Phi=\Phi_n$ is injective. In Proposition \ref{n=3case} we saw that this was true for $n=3$, so suppose that $n\geq4$ and that $\Phi_{n-1}$ is injective. By checking that both maps agree on the elements of $\mathscr{X}_{n-1}$, it is easy to see that~${\Psi\circ\Phi|_{eMe}\circ\Upsilon=\Phi_{n-1}}$. (The map $\Upsilon$ was defined at the end of Section \ref{sect:fdsim}, and $\Psi$ in Proposition \ref{eMerels}.) Now $\Psi$ is surjective (by Proposition~\ref{eMegens}) and $\Phi_{n-1}$ is injective (by assumption), so it follows that $\Phi|_{eMe}$ is injective. After the proof of Proposition \ref{Misinverse}, we observed that $\Phi|_{E(M)}$ and $\Phi|_{G(M)}$ are injective. By Lemma \ref{propP}, $e$ has property (P) and it follows, by Proposition \ref{firstprop}, that $\Phi$ is injective.
$\Box$ We remark that the method of Propositions 5 and 13 may be used to provide a concise proof of the presentation of $\mathscr{I}_n$ originally found by Popova \cite{7}. \end{document}
\begin{document} \title{Approximately Efficient Two-Sided Combinatorial Auctions} \author{Riccardo Colini-Baldeschi \and Paul Goldberg \and Bart de Keijzer \and Stefano Leonardi \and Tim Roughgarden \and Stefano Turchetta} \institute{LUISS Rome, \email{rcolini@luiss.it} \and University of Oxford, \email{paul.goldberg@cs.ox.ac.uk}, \email{stefano.turchetta@cs.ox.ac.uk} \and Centrum Wiskunde \& Informatica (CWI), Amsterdam, \email{keijzer@cwi.nl} \and Sapienza University of Rome, \email{leonardi@diag.uniroma1.it} \and Stanford University, \email{tim@cs.stanford.edu}} \maketitle \thispagestyle{empty} \setcounter{page}{0} \begin{abstract} Mechanism design for one-sided markets has been investigated for several decades in economics and in computer science. More recently, there has been increased attention on mechanisms for two-sided markets, in which buyers and sellers act strategically. For two-sided markets, an impossibility result of Myerson and Satterthwaite states that no mechanism can simultaneously satisfy {\em individual rationality (IR)}, {\em incentive compatibility (IC)}, {\em strong budget-balance (SBB)}, and be efficient. Follow-up work mostly focused on designing mechanisms that trade off among these properties, in settings where there is a single type of item for sale, each buyer asks for one unit of the item, and each seller initially holds one unit of the item. On the other hand, important applications to web advertisement, stock exchange, and frequency spectrum allocation require us to consider two-sided combinatorial auctions, in which buyers have preferences on subsets of items, and sellers may offer multiple heterogeneous items. No efficient mechanism was known so far for such two-sided combinatorial markets. This work provides the first IR, IC, and SBB mechanisms that provide an $O(1)$-approximation to the optimal social welfare for two-sided markets.
An initial construction yields such a mechanism, but exposes a conceptual problem in the traditional SBB notion. This leads us to define the stronger notion of \emph{direct trade strong budget balance (DSBB)}. We then proceed to design mechanisms that are IR, IC, DSBB, and again provide an $O(1)$-approximation to the optimal social welfare. Our mechanisms work for any number of buyers with XOS valuations -- a class in between submodular and subadditive functions -- and any number of sellers. We provide a mechanism that is dominant strategy incentive compatible (DSIC) if the sellers each have one item for sale, and one that is Bayesian incentive compatible (BIC) if sellers hold multiple items and have additive valuations over them. Finally, we present a DSIC mechanism for the case that the valuation functions of all buyers and sellers are additive. We prove our main results by showing that there exists a variant of a sequential posted price mechanism, generalised to two-sided combinatorial markets, that achieves the desired goals. \end{abstract} \section{Introduction} One-sided markets have been studied in economics for several decades and more recently in computer science. Mechanism design in one-sided markets aims to find an efficient (high-welfare) allocation of a set of items to a set of agents, while ensuring that truthfully reporting the input data is the best strategy for the agents. The cornerstone method in mechanism design is the Vickrey-Clarke-Groves (VCG) mechanism \cite{vickrey61, clarke71, groves73}, which optimizes the social welfare of the agents while providing the right incentives for truth-telling: VCG mechanisms are \emph{dominant strategy incentive compatible (DSIC)}, and in many mechanism design settings, VCG is also {\em individually rational (IR)}. IR requires that participating in the mechanism is beneficial to each agent.
DSIC requires that truthfully reporting one's preferences to the mechanism is a dominant strategy for each agent, independently of what the other agents report. Recently, increased attention has been paid to the problems that arise in two-sided markets, in which the set of agents is partitioned into \emph{buyers} and \emph{sellers}. As opposed to the one-sided setting (where one could say that the mechanism itself initially holds the items), in the two-sided setting the items are initially held by the set of sellers, who express valuations over the items they hold, and who are assumed to act rationally and strategically. The mechanism's task is now to decide which buyers and sellers should trade, and at which prices. The growing interest in two-sided markets can be attributed to various important applications. Examples include selling display ads on ad-exchange platforms, the US FCC spectrum license reallocation, and stock exchanges. Two-sided markets are usually studied in a Bayesian setting: there is public knowledge of probability distributions, one for each buyer and one for each seller, from which the valuations of the buyers and sellers are drawn. In two-sided markets, a further important requirement is {\em strong budget-balance (SBB)}, which states that monetary transfers happen only among the agents in the market, i.e., the buyers and the sellers are allowed to trade without leaving to the mechanism any share of the payments, and without the mechanism adding money into the market. A weaker version of SBB often considered in the literature is \emph{weak budget-balance (WBB)}, which only requires the mechanism not to inject money into the market. Unfortunately, Myerson and Satterthwaite \cite{ms83} proved that it is impossible for an IR, Bayesian incentive compatible (BIC),\footnote{Bayesian incentive compatibility is a form of incentive compatibility that is less restrictive than DSIC.
It only requires that reporting truthfully is in expectation the best strategy for a player when everyone else also does so, i.e., truthful reporting is a Bayes-Nash equilibrium.} and WBB mechanism to maximize social welfare in such a market, even in the \emph{bilateral trade} setting, i.e., when there is just one seller and one buyer.\footnote{The VCG mechanism can also be applied to two-sided markets; however, in this setting, VCG is either not IR or it does not satisfy WBB.} Despite the numerous practical contexts mentioned above that call for \emph{combinatorial} two-sided market mechanisms, we are not aware of any mechanism that approximates the social welfare while meeting the IR, IC, and SBB requirements. The purpose of this paper is to provide mechanisms that satisfy these requirements and achieve an $O(1)$-approximation to the social welfare for a broad class of agents' valuation functions. We do, in fact, design mechanisms that work under the assumption of the valuations being {\em fractionally subadditive (XOS)}, a generalisation of submodular functions that is contained in the class of subadditive functions. Our work builds upon previous work on an important special case of a two-sided market, namely the one in which each seller holds a single item, items are identical, and each agent is only interested in holding a single item. In this setting, the valuations of the agents are thus given by a single number, representing the agent's appreciation for holding an item. A mechanism for this setting is known in the literature as a \emph{double auction}. The goal of several works on double auctions \cite{mcafee92, SW89, sw02} has been to trade off the achievable social welfare with the strength of the incentive compatibility and budget balance constraints. A recent work addressed the problem of approximating social welfare in double auctions and related problems under the WBB requirement.
D\"{u}tting, Talgam-Cohen, and Roughgarden \cite{dtr14} proposed a greedy strategy that combines the one-sided VCG mechanism, independently applied to buyers and to sellers, with the trade-reduction mechanism of McAfee \cite{mcafee92}. They obtain IR, DSIC, WBB mechanisms with a good approximation of the social welfare, for knapsack, matching, and matroid allocation constraints. More recently, Colini-Baldeschi et al.~\cite{cdlt16} presented the first double auction that satisfies IR, DSIC, and SBB, and approximates the optimal (expected) social welfare up to a constant factor. These results hold for any number of buyers and sellers with arbitrary, independent distributions on valuations. The mechanisms are also extended to the setting where there is an additional matroid constraint on the set of buyers who can purchase an item. \subsection{The Model} As stated above, the set of agents is partitioned into a set of {\em sellers}, each of which is initially endowed with a set of heterogeneous items, and a set of {\em buyers}, having no items initially. Buyers have money that can be used to pay for items. Every agent has her own private valuation function, which maps subsets of the items to numbers, and agents are assumed to optimize their (quasi-linear) utility, which is given by the valuation of the set of items that the mechanism allocates to the agent, minus the payment that the mechanism asks from the agent. A seller will typically receive money rather than pay money, which we model by a negative payment. For each agent we are given a (publicly known) probability distribution over a set of valuation functions, from which we assume her valuation function is drawn. The mechanism and the other agents have no knowledge of the actual valuation function of the $i$-th agent, but only of her probability distribution.
The general aim of the mechanism is to reallocate the items so as to maximize the expected social welfare (the sum of the agents' valuations of the resulting allocation). Let \OPT\ be the expected social welfare of an optimal allocation of the items. Note that this is a well-defined quantity, even though computing an optimal allocation may be computationally hard, and even though there might not exist an appropriate mechanism that is guaranteed to always output an optimal allocation. We are interested in mechanisms that satisfy \emph{individual rationality (IR)}, \emph{dominant strategy incentive compatibility (DSIC)} (or, failing that, the weaker notion of \emph{Bayesian incentive compatibility (BIC)}), and \emph{strong budget-balance (SBB)}, and that reallocate the items in such a way that the expected social welfare is within some constant fraction of \OPT, where expectation is taken over the given probability distributions of the agents' valuations, and over the randomness of the allocation that the mechanism outputs. The obstacle of interest to us is not the computational one, but the requirement of achieving mechanisms with such approximation guarantees that are also DSIC, IR, and SBB. The main focus of this paper is the existence of mechanisms that obtain a constant approximation to the optimal social welfare, and not the computational issues related to them. We remark, however, that our mechanisms can be implemented in polynomial time at the cost of an additional welfare loss of a factor $c$, where $c$ is the best-known approximation factor for the problem of optimising the buyers' social welfare, provided that both a poly-time approximation algorithm and an approximation of the query oracle exist (see, e.g., \cite{fgl15}). \subsection{Overview of the Results} The present paper starts off by showing that there is a straightforward trick that one may apply to turn any WBB mechanism into an SBB one, with a small loss in the approximation factor.
The technique is to pre-select one random agent, taken out of the market a priori, whose role is to receive all leftover money; applied to the mechanism of \cite{bd14}, it yields an $O(1)$-approximate DSIC, IR, and SBB mechanism for a very broad class of markets. This therefore demonstrates a weakness in the current notion of SBB, which motivates the introduction of the strengthened notion of \emph{direct trade strong budget balance (DSBB)}, which requires that a monetary transfer between two agents in the market is possible only if the agents trade items. The goal of our proposed mechanisms is twofold: (i) achieve a constant approximation to the optimal social welfare, and (ii) design mechanisms that respect the stronger notion of DSBB. Note that with non-unit-supply sellers, a constant approximation is not known even in the context of WBB or standard SBB.\footnote{The mechanism proposed in \cite{bd14} achieves a constant approximation if the size of the initial endowment of each agent is bounded by a constant. Otherwise the mechanism achieves a logarithmic approximation to the optimal social welfare.} We present the following mechanisms: \begin{itemize} \item A $6$-approximate DSIC mechanism for buyers with XOS valuations and sellers with one item at their disposal (i.e., \emph{unit-supply sellers}); \item a $6$-approximate BIC mechanism for buyers with XOS valuations and non-unit-supply sellers with additive valuations; \item a $6$-approximate DSIC mechanism for buyers with additive valuations and sellers with additive valuations. \end{itemize} XOS functions lie in between the class of monotone submodular functions and subadditive functions in terms of generality. Additive and XOS functions are frequently used in the mechanism design literature.
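To make the contrast between SBB and DSBB concrete, the following small sketch (a hypothetical encoding of outcomes, not the paper's formal model) checks both conditions: SBB only constrains the sum of all payments, while DSBB additionally insists that each agent's net payment is accounted for by transfers along actual item trades.

```python
# Hypothetical encoding: `payments` maps each agent to her net payment
# (negative = money received); `trades` lists transfers (payer, payee,
# amount) that accompany an actual exchange of items.

def is_sbb(payments):
    # Strong budget balance: no money enters or leaves the market.
    return sum(payments.values()) == 0

def is_dsbb(payments, trades):
    # Direct trade SBB: on top of SBB, each agent's net payment must be
    # exactly the sum of the transfers of the trades she takes part in.
    if not is_sbb(payments):
        return False
    net = {a: 0 for a in payments}
    for payer, payee, amount in trades:
        net[payer] += amount
        net[payee] -= amount
    return net == payments

# Buyer b1 pays seller s1 for an item: both SBB and DSBB hold.
trades = [("b1", "s1", 5)]
assert is_sbb({"b1": 5, "s1": -5, "a3": 0})
assert is_dsbb({"b1": 5, "s1": -5, "a3": 0}, trades)

# Leftover money handed to the non-trading agent a3: SBB, but not DSBB.
assert is_sbb({"b1": 5, "s1": -3, "a3": -2})
assert not is_dsbb({"b1": 5, "s1": -3, "a3": -2}, trades)
```

The last two assertions exhibit exactly the weakness of plain SBB exploited by the random-agent trick: money may end up at an agent who trades nothing.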
To our knowledge, these are the first mechanisms for these two-sided market settings that are simultaneously incentive compatible, (D)SBB, IR, and approximate the optimal social welfare to within a constant factor. A first ingredient needed to obtain our results is the extension of the {\em two-sided sequential posted price mechanisms (SPMs)} \citep{cdlt16} for double auctions to combinatorial two-sided markets. SPMs are a particularly elegant and well-studied class of mechanisms for one-sided markets. None of the presented SPMs requires any assumption on the arrival order of the agents. A second ingredient of our result is to use the \emph{expected marginal contribution of an item to the social welfare} as the price of the item in a sequential posted price mechanism for buyers with XOS valuations \cite{fgl15}. \subsection{Related Work}\label{sec:related} Due to the impossibility result of \citep{ms83}, no two-sided mechanism can simultaneously satisfy BIC, IR, WBB and be socially efficient, even in the simple bilateral trade setting. Follow-up work thus had to focus on designing mechanisms that trade off among these properties. Several papers in the economics literature studied the rate of convergence to social efficiency, as a function of the number of agents, of IR and WBB mechanisms in which all sellers' and all buyers' valuations are drawn independently from identical regular distributions. \citet{gs89} showed that scaling the number of agents by a factor $\tau$ results in a market where the optimal IR, IC, WBB mechanism's inefficiency goes down as a function of $O(\log{\tau} / \tau^2)$. \citep{rsw94, sw02} investigated a family of non-truthful double auction mechanisms, parameterized by a value $c \in [0,1]$. We remark that the results mentioned above only hold for unit-demand buyers and unit-supply sellers with identical valuation distributions, and the hidden constants in these asymptotic results depend on the particular distribution.
In contrast, our interest is in finding \emph{universal} constant approximation guarantees for combinatorial settings. In \citet{mcafee92}, an IC, WBB, IR double auction is proposed that extracts at least a $(1 - 1/\ell)$ fraction of the maximum social welfare, where $\ell$ is the number of traders in the optimal solution. Optimal revenue-maximizing Bayesian auctions were characterized in \citep{myerson81}, which provides an elegant tool applicable to single-parameter, one-sided auctions. Various subsequent articles dealt with extending these results. Related to our work is \citep{dgtz14}, which studied maximizing the auctioneer's revenue in Bayesian double auctions. The same objective was studied in \citep{dghk02}, yet in the \emph{prior-free} model. Recently, \citet{dtr14} provided black-box reductions from WBB double auctions to one-sided mechanisms. These reductions apply to a prior-free setting and can be used with matroid, knapsack, and matching feasibility constraints on the allocations. More recently, \cite{cdlt16} presented the first IR, IC and SBB mechanism for double auctions that $O(1)$-approximates the optimal social welfare. In \cite{DBLP:journals/corr/Segal-HaleviHA16}, mechanisms for some special cases of two-sided markets are presented that work by a combination of random sampling and random serial dictatorship. These mechanisms are IR, SBB, and DSIC, and their \emph{gain from trade} approaches the optimum when the market is sufficiently large. Mechanisms that are IC, IR, and SBB have been given for bilateral trade in \citep{bd14}. In addition to this, the authors proposed a WBB mechanism for a general class of combinatorial exchange markets. We will use this result to construct our initial mechanism. Sequential posted price mechanisms (SPMs) in one-sided markets have been introduced in \citep{sandholm2004} and have since gained attention due to their simplicity, robustness to collusion, and their easy implementability in practical applications.
One of the first theoretical results concerning SPMs is an asymptotic comparison among three different types of single-parameter mechanisms \citep{bh08}. They were later studied for the objective of revenue maximisation in \citep{chms10}. Additionally, \cite{kw12, dk15} strengthen these results further. Very relevant to our work is the paper of Feldman et al. \cite{fgl15}, showing that sequential posted price mechanisms can approximate the optimal social welfare up to a constant factor of $1/2$ for XOS valuation functions if the published price for an item is equal to the expected additive contribution of the item to the social welfare. \section{Preliminaries}\label{sec:prelims} As a general convention, we use boldface notation for vectors and use $[a]$ to denote the set $\{1, \ldots, a\}$. We will use $\mathbb{I}(X)$ to denote the indicator function that maps to $1$ if and only if event/fact $X$ holds. \smallbreak \noindent \textbf{Markets.} A \emph{two-sided market} comprises two distinct types of agents: the \emph{sellers}, who initially hold items for sale, and the \emph{buyers}, who are interested in buying the sellers' items. All agents possess a monotone and normalized valuation function, mapping subsets of items to $\mathbb{R}_{\geq 0}$.\footnote{By \emph{monotone} we mean that getting more items cannot decrease an agent's overall valuation, and by \emph{normalized} we mean that the empty set is mapped to $0$.} Formally, we represent a two-sided market as a tuple $(n, m, k,\bm{I},\bm{G},\bm{F})$, where $[n]$ denotes the set of buyers, $[m]$ denotes the set of sellers, $[k]$ denotes the set of all items for sale, and $\bm{I} := (I_1, \ldots, I_m)$ is a vector of (mutually disjoint) sets of items initially held by each seller, called the \emph{initial endowment}, where it holds that $\bigcup_{j=1}^m I_j = [k]$.
Vectors $\bm{G} = (G_1, \ldots, G_n)$ and $\bm{F} = (F_1, \ldots, F_m)$ are vectors of probability distributions, from which the buyers' and sellers' valuation functions are assumed to be drawn. A \emph{(combinatorial) exchange market} is a more general version of the above defined two-sided market where an agent can act as both a buyer and a seller. Thus, everyone may initially own items and may both sell and buy items. As a result, in this setting, we overload the notation and simply use $n$ to denote the total number of agents. Formally, an exchange market is thus a tuple $(n, k, \bm{I}, \bm{F})$. In two-sided markets, sellers are assumed to only value items in their initial bundle and are therefore not interested in buying from other sellers, i.e., $\forall j \in [m]$ and $\forall S \subseteq [k]$, $w_{j}(S) = w_{j}(S \cap I_{j})$. On the contrary, in exchange markets, no such restriction on the valuation functions exists. Throughout the paper, we reserve the usage of the letter $i$ to denote a single buyer, the letter $j$ to denote a single seller, and the letter $\ell$ to denote a single item. Moreover, we use $v_i$ to denote buyer $i$'s valuation function and $w_j$ to denote seller $j$'s valuation function. \smallbreak \noindent \textbf{Mechanism Design Goals.} In order to avoid overloading this section, the following concepts will only be described for two-sided markets (which are the main focus of the paper). Nonetheless, any of the subsequent concepts can naturally be extended to combinatorial exchange markets. Given a two-sided market, our aim is to redistribute the items among the agents so as to maximize the \emph{social welfare} (the sum of the agents' valuations).
An \emph{allocation} for a given two-sided market $(n, m, k, \bm{I}, \bm{G}, \bm{F})$ is a pair of vectors $(\bm{X},\bm{Y}) = ((X_1, \ldots, X_n),(Y_1,\ldots,Y_m))$ such that the union of $X_1, \ldots, X_n, Y_1, \ldots, Y_m$ is $[k]$, and $X_1,\ldots, X_n,Y_1, \ldots, Y_m$ are mutually non-intersecting. Redistribution of the items is done by running a \emph{mechanism} $\mathbb{M}$. A mechanism interacts with and receives input from the agents, and outputs an \emph{outcome}, consisting of an allocation $(\bm{X},\bm{Y})$ and a payment vector $(\bm{\rho}^B, \bm{\rho}^S) \in \mathbb{R}^n \times \mathbb{R}^m$, where $\bm{\rho}^B$ refers to the buyers' vector of payments and $\bm{\rho}^S$ to the sellers' one. The outcome of a mechanism $\mathbb{M}$ is represented by a tuple $(\bm{X},\bm{Y},\bm{\rho}^B,\bm{\rho}^S)$. Agents are assumed to maximize their \emph{utility}, which is defined as the valuation for the bundle of items that they get allocated, minus the payment charged by the mechanism. In particular, the utility $u_i^B(v_i,(\bm{X},\bm{Y},\bm{\rho}^B,\bm{\rho}^S))$ of a buyer $i \in [n]$ is $v_i(X_i) - \rho^B_i$, whereas for a seller $j \in [m]$ it is $u_j^S(w_j,(\bm{X},\bm{Y},\bm{\rho}^B,\bm{\rho}^S)) = w_j(Y_j) - \rho^S_j$. Since agents are assumed to maximize their utility, they will interact strategically with the mechanism so as to achieve this. Our goal is to design mechanisms such that there is a dominant strategy or Bayes-Nash equilibrium for the agents under which the mechanism returns an allocation with a high social welfare. For an allocation $(\bm{X}, \bm{Y})$, the social welfare $\SW(\bm{X}, \bm{Y})$ is defined as $\textsf{SW}(\bm{X}, \bm{Y}) = \sum_{i \in [n]} v_i(X_i) + \sum_{j \in [m]} w_j(Y_j)$. The objective function we want to maximize is the above defined social welfare. We now describe three main constraints our mechanisms must adhere to. For each of these constraints, we first introduce the strictest version and then a more relaxed one.
Our mechanisms aim to satisfy the strictest versions, whenever possible. \begin{itemize} \item \emph{Dominant strategy incentive compatibility (DSIC):} It is a dominant strategy for every agent to report her valuation truthfully, no matter what the others do, i.e., no agent can increase her utility by misreporting her true valuation. \item \emph{Bayesian incentive compatibility (BIC):} No agent can obtain a gain in her expected utility by declaring a valuation different from her true one, where the expectation is taken w.r.t.\ the others' valuations.\footnote{Technically, as can be inferred, the DSIC properties are reserved for \emph{direct revelation mechanisms}, i.e., where the agent solely interacts with the mechanism by reporting her valuation function. It is well-known that mechanisms admitting a dominant strategy can be transformed into DSIC direct revelation mechanisms, and those with a Bayes-Nash equilibrium can be transformed into BIC direct revelation mechanisms. This way, we extend the DSIC and BIC definitions to non-direct revelation mechanisms.} \item \emph{Ex-post individual rationality (ex-post IR):} It is not harmful for any agent to participate in the mechanism, i.e., there is guaranteed to be a strategy for each agent that yields her at least the utility she would get by not participating. \item \emph{Interim individual rationality (interim IR):} There is a strategy for each agent that, in expectation, yields her at least the utility of not participating (where the expectation is over the valuations of the other agents and the internal randomness of the mechanism). \item \emph{Strong Budget Balance (SBB):} The sum of all agents' payments output by the mechanism is equal to zero. Conceptually, this means that no money is burnt or ends up at an external party, and no external party needs to subsidise the mechanism. \item \emph{Weak Budget Balance (WBB):} The sum of all payments is at least zero.
In two-sided markets this translates into the buyers' payments being at least as large as the payments received by the sellers. \end{itemize} For valuation profiles $(\bm{v}, \bm{w})$, $\textsf{OPT}(\bm{v}, \bm{w}) := \max_{\bm{X}, \bm{Y}}\{\textsf{SW}(\bm{X}, \bm{Y})\}$ denotes the \emph{optimal social welfare}, where the max ranges over all feasible allocations $\bm{X}, \bm{Y}$. The \emph{expected optimal social welfare} is the value $\textsf{OPT} = \mathbb{E}_{\bm{v}, \bm{w}}[\textsf{OPT}(\bm{v}, \bm{w})]$. We say that a mechanism $\mathbb{M}$ \emph{$\alpha$-approximates the optimal social welfare} for some $\alpha \geq 1$ iff $\textsf{OPT} \leq \alpha \mathbb{E}_{\bm{v}, \bm{w}}[\mathsf{SW}(\mathbb{M}(\bm{v}, \bm{w}))]$. Our goal is to find mechanisms that $\alpha$-approximate the optimal social welfare for a low $\alpha$, are DSIC or BIC, SBB, and ex-post IR or interim IR. \noindent \textbf{Classes of Valuation Functions.} We will consider probability distributions over the following classes of valuation functions. Let $v : 2^{[k]} \rightarrow \mathbb{R}_{\geq 0}$ be a valuation function. Then, \begin{itemize} \item $v$ is \emph{additive} iff there exist numbers $\alpha_1, \ldots, \alpha_k \in \mathbb{R}_{\geq 0}$ such that $v(S) = \sum_{j \in S} \alpha_j$ for all $S \subseteq [k]$. \item $v$ is \emph{fractionally subadditive (or XOS)} iff there exists a collection of additive functions $a_1, \ldots, a_{d}$ such that for every bundle $S \subseteq [k]$ it holds that $v(S) = \max_{i \in [d]} a_i(S)$. \end{itemize} It is easy to see that every additive function is an XOS function. Further, it is well known that the class of submodular functions is contained in the class of XOS functions. Due to space constraints, all proofs in the remainder of this paper have been deferred to the appendix. 
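To make the XOS definition concrete, the following minimal Python sketch (ours, not part of the formal development; all function names are illustrative) encodes additive and XOS valuations as set functions and checks that an additive valuation is the special case $d = 1$:

```python
# Illustrative sketch: additive and XOS (fractionally subadditive) valuations
# encoded as Python set functions. Names are ours, not the paper's.

def additive(alphas):
    """Additive valuation v(S) = sum_{j in S} alphas[j]."""
    return lambda S: sum(alphas[j] for j in S)

def xos(additive_fns):
    """XOS valuation v(S) = max_i a_i(S) over a collection of additive a_i."""
    return lambda S: max(a(S) for a in additive_fns)

# Every additive function is XOS: take the collection to be a singleton.
v_add = additive([3.0, 1.0, 2.0])
v_as_xos = xos([additive([3.0, 1.0, 2.0])])
assert all(v_add(S) == v_as_xos(S) for S in [set(), {0}, {1, 2}, {0, 1, 2}])

# A non-additive XOS valuation: the max of two additive functions.
v = xos([additive([4.0, 0.0]), additive([0.0, 3.0])])
print(v({0}), v({1}), v({0, 1}))  # 4.0 3.0 4.0 (subadditive, not additive)
```

The last line illustrates why XOS functions are subadditive: $v(\{0,1\}) = 4 \leq v(\{0\}) + v(\{1\})$ while the value of the union is not the sum of the singleton values.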
\section{An Initial Mechanism and Direct Trade Strong Budget Balance}\label{sec:dsbb} In \cite{bd14}, Blumrosen and Dobzinski present a mechanism for exchange markets with subadditive valuation functions. They prove the following for this mechanism, which we name $\mathbb{M}_{\text{bd}}$. \begin{theorem}[Blumrosen and Dobzinski \cite{bd14}] Mechanism $\mathbb{M}_{\text{bd}}$ is a DSIC, WBB, ex-post IR randomized direct revelation mechanism that $4H(s)$-approximates the optimal social welfare for combinatorial exchange markets $(n,k,\bm{I},\bm{F})$ with subadditive valuation functions, where $s = \min\{n, |I_i| : i \in [n]\}$ is the minimum of the number of agents and the number of items in an agent's initial endowment, and $H(\cdot)$ denotes the harmonic numbers. \end{theorem} In particular, this mechanism gives us a constant approximation factor if the number of starting items of the agents is bounded by a constant. Now consider a mechanism $\mathbb{M}_{\text{sbb}}$ that selects an agent $i \in [n]$ uniformly at random, runs $\mathbb{M}_{\text{bd}}$ on the remaining agents, and allocates the surplus money of $\mathbb{M}_{\text{bd}}$ to agent $i$. We are then able to prove the following. \begin{theorem} Mechanism $\mathbb{M}_{\text{sbb}}$ is DSIC, ex-post IR, SBB, and achieves an $8nH(s)/(n-1)$-approximation to the optimal social welfare for exchange markets with subadditive valuations and at least $3$ agents.\footnote{For $2$ agents, it is straightforward to come up with alternative mechanisms that have the desired properties.} \end{theorem} The proof and a more precise description of $\mathbb{M}_{\text{sbb}}$ are provided in Appendix \ref{sec:comb_exchange}. This yields an ex-post IR, SBB, DSIC mechanism that $O(1)$-approximates the social welfare if the number of items initially possessed by an agent is bounded by a constant. 
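The reduction behind $\mathbb{M}_{\text{sbb}}$ can be sketched in a few lines of Python (our illustration of the construction, with a toy stand-in for the WBB black box $\mathbb{M}_{\text{bd}}$; all names are hypothetical):

```python
import random

def sbb_from_wbb(run_wbb, endowments, rng):
    """Sketch of the M_sbb construction: exclude a uniformly random agent,
    run the WBB black box (a stand-in for M_bd) on the remaining agents,
    and pay the excluded agent the leftover money."""
    n = len(endowments)
    i = rng.randrange(n)                      # excluded agent
    others = [j for j in range(n) if j != i]
    alloc, pay = run_wbb([endowments[j] for j in others])
    allocation = dict(zip(others, alloc))
    allocation[i] = endowments[i]             # agent i keeps her endowment
    payments = dict(zip(others, pay))
    payments[i] = -sum(pay)                   # i receives the WBB surplus
    return allocation, payments

# Toy WBB black box: no reallocation; payments sum to a surplus of 2.
def toy_wbb(endowments):
    return endowments, [3.0, -1.0] + [0.0] * (len(endowments) - 2)

alloc, pay = sbb_from_wbb(toy_wbb, [{'a'}, {'b'}, {'c'}], random.Random(1))
print(sum(pay.values()))  # 0.0 -- strong budget balance holds by construction
```

Whichever agent is excluded, the total payment is zero by construction, which is exactly why this wrapper turns any WBB mechanism into an SBB one while leaving the remaining agents' incentives untouched.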
The principle that we used to construct Mechanism $\mathbb{M}_{\text{sbb}}$ can more generally be used to turn any WBB mechanism into an SBB one, while preserving the DSIC and ex-post IR properties. It also reveals a problematic aspect of the notion of SBB: it allows agents to receive money while they are not involved in any trade. This motivates a strengthened notion of strong budget balance, which we call \emph{direct trade strong budget balance}. \begin{definition} A mechanism for an exchange market satisfies \emph{direct trade strong budget balance (DSBB)} iff the outcome it generates can be achieved by a set of bilateral trades, where each trade consists of a reallocation of an item from an agent $i$ to an agent $j$, and a monetary transfer from agent $j$ to agent $i$. Moreover, each item may only be traded once. \end{definition} It can be seen that Mechanism $\mathbb{M}_{\text{sbb}}$ does not satisfy DSBB. In the remainder of the paper we will proceed to design mechanisms for two-sided markets that do satisfy DSBB.\footnote{We note that the double auctions given in \cite{cdlt16} also satisfy the DSBB property.} Moreover, two of our results provide an $O(1)$-approximation even in settings in which $\mathbb{M}_{\text{sbb}}$ provides only a logarithmic approximation factor. \section{A Mechanism for Unit-Supply Two-Sided Markets with XOS Buyers}\label{sec:unit-supply} In this section we present a DSIC, ex-post IR, and DSBB mechanism for unit-supply two-sided markets in which buyers have XOS valuation functions. This mechanism achieves a constant approximation to the optimal social welfare. This result extends \citep{fgl15} to two-sided auctions and \citep{cdlt16} to a setting in which buyers have combinatorial preferences over the items. In \emph{unit-supply two-sided markets}, for simplicity, we use $[k]$ to denote both the set of items and the set of sellers, where item $j$ is owned by seller $j$ (so $I_j = \{j\}$ for all $j \in [k]$). 
For each seller $j \in [k]$, we then treat $F_j$ as a distribution over $\mathbb{R}_{\geq 0}$ instead of a distribution over functions. We assume throughout this section that $(n,k,k,\bm{I},\bm{G},\bm{F})$ is a given unit-supply two-sided market, on which we run the mechanism to be defined. For an allocation $(\bm{X},\bm{Y}) \in \mathcal{A}$, we shall use the superscripts $\SW^B, \SW^S$ to respectively denote the buyers' and the sellers' contribution to the social welfare, i.e., \begin{equation*} \SW^B(\bm{X}, \bm{Y}) := \sum_{i=1}^n v_i(X_i), \text{ and } \SW^S(\bm{X}, \bm{Y}) := \sum_{j=1}^k w_j \indicator{j \in Y_j}. \end{equation*} Our mechanism will work by specifying prices on the items in the market. For a bundle $\Lambda$ and an item price vector $\bm{p} = (p_1, \ldots, p_k)$, we define the \emph{demand correspondence} of buyer $i \in [n]$ with valuation function $v_i$ as \begin{equation*} \mathcal{D}(v_i, \bm{p}, \Lambda) := \left\{S \subseteq \Lambda : v_i(S) - \sum_{j \in S} p_j \geq v_i(T) - \sum_{j \in T} p_j \text{ for all } T \subseteq \Lambda \right\}, \end{equation*} i.e., the set of bundles of items in $\Lambda$ that maximize $i$'s utility under the given item prices, when she has valuation function $v_i$. For a buyer $i$ with valuation function $v_i$, we define the \emph{additive representative function} for bundle $T \subseteq [k]$ as the additive function $a(v_i,T,\cdot) : 2^{[k]} \rightarrow \mathbb{R}_{\geq 0}$ such that $v_i(T) = a(v_i,T,T)$, and $v_i(S) \geq a(v_i,T,S)$ for all $S \subseteq [k]$. The additive representative function of a bundle is guaranteed to exist for each buyer $i$, for each valuation function in the support of $G_i$, by the definition of XOS functions. \smallbreak \noindent \textbf{Mechanism.} Let $\mathbb{A}$ be an algorithm that, given a valuation profile of the buyers $\bm{v}$, allocates the items $[k]$ to them, and does not take into account the sellers and their valuations. 
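The demand correspondence above can be computed by brute force when $|\Lambda|$ is small; the following Python sketch (exponential-time, for illustration only; names are ours) returns one member of $\mathcal{D}(v_i, \bm{p}, \Lambda)$ for a valuation given as a set function:

```python
from itertools import combinations

def demand_bundle(v, prices, Lambda):
    """Return one utility-maximising bundle S in D(v, p, Lambda) together
    with its utility, by enumerating all 2^|Lambda| bundles."""
    items = list(Lambda)
    best, best_u = frozenset(), v(frozenset())  # the empty bundle is a candidate
    for r in range(1, len(items) + 1):
        for S in combinations(items, r):
            S = frozenset(S)
            u = v(S) - sum(prices[j] for j in S)
            if u > best_u:
                best, best_u = S, u
    return best, best_u

# Additive valuation with per-item values 8 and 4, both items priced at 2.
v = lambda S: 8.0 * (0 in S) + 4.0 * (1 in S)
S, u = demand_bundle(v, {0: 2.0, 1: 2.0}, [0, 1])
print(sorted(S), u)  # [0, 1] 8.0
```

For additive valuations the demand set is simply the items with positive margin; for general XOS valuations the enumeration (or a demand oracle) is needed.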
Our mechanism will use such an algorithm as a black box, and it can be thought of as outputting either an allocation that is optimal for the buyers (in case one does not care about the runtime of the mechanism) or an approximately optimal one (in case one insists on the mechanism running in polynomial time). Let $\bm{X}^\all(\bm{v}) = (X_1^\all(\bm{v}), \ldots, X_n^\all(\bm{v}))$ be the output allocation of $\mathbb{A}(\bm{v})$. Let $\SW(\bm{X}^\all(\bm{v}))$ be the total social welfare of the allocation $\bm{X}^\all(\bm{v})$. We define for each item $j \in [k]$ its \emph{contribution $\SW^B_j(\bm{v})$ to the social welfare $\SW(\bm{X}^{\all}(\bm{v}))$} as follows: If there exists a buyer $i$ that receives item $j$ in allocation $X_i^\all(\bm{v})$, then $\SW^B_j(\bm{v}) = a(v_i, X_i^\all(\bm{v}), \{j\})$. Otherwise, if $j$ is not allocated to any buyer in $\bm{X}^\all(\bm{v})$, then $\SW^B_j(\bm{v}) = 0$. This notion allows us to make a distinction between \emph{high welfare items} and \emph{low welfare items}. An item $j \in [k]$ is said to have \emph{high welfare with respect to $\SW(\bm{X}^\all(\bm{v}))$} iff $\mathbb{E}_{\bm{v}}[\SW^B_j(\bm{v})] \geq 4\expected{w_j}$, i.e., the expected social welfare contribution of $j$ if we would allocate $j$ according to $\bm{X}^\all(\bm{v})$ is at least four times as high as the social welfare that results from leaving item $j$ at its seller. Formally, let $L$ be the set of high welfare items, i.e., $L := \left\{\ell \in [k] : \expected{\SW^{B}_\ell(\bm{v})} \geq 4 \expected{w_{\ell}}\right\}$, and let $\bar{L}$ be the set of low welfare items, i.e., $\bar{L} := [k]\setminus L$. For each high welfare item $j \in L$, the mechanism makes use of the following associated \emph{item price} $p_j$: \begin{equation*} p_j :=\frac{1}{2} \expectedsub{\bm{v}}{\SW^B_j(\bm{v})}. \end{equation*} Observe that $p_j \geq 2\mathbb{E}[w_j]$ for all $j \in L$, by our definition of high welfare items. 
The reason why $L$ is chosen in this way is twofold: first, keeping the items in $\bar{L}$ at their sellers loses at most a constant factor of the welfare; second, every item in $L$, if sold, is guaranteed to be sold at a high price. Our (randomized) mechanism performs the following simple procedure. First, it goes to every seller $j \in L$ (in any order) and asks each of them whether she would sell her item for a price of $p_j$. As mentioned above, by definition of the prices, every seller $j \in L$ accepts the price with probability at least $1/2$ because of Markov's inequality. To make sure that this probability is exactly $1/2$, the seller $j$ is only given the opportunity to sell her item at the price $p_j$ with a probability $q_j$ such that the offer is made and accepted with probability exactly $1/2$. Formally, the mechanism makes an offer to seller $j$ with probability \[ q_j := \frac{1}{2 F_j(p_j)}, \text{ where } F_j(p_j) = \prob{w_j \leq p_j}. \] Once we have gone through every seller and know which items are in the market, we move to the buyers and ask each of them (in any order) for their favorite bundle among the items currently available for purchase. We call the mechanism sketched above $\mathbb{M}_{\text{1-supply}}$, and we now present it more precisely: \\ \noindent\fbox{ \begin{varwidth}{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax} \small{ \begin{enumerate} \item Let $L := \{j \in [k] : \mathbb{E}_{\bm{v}}[\SW^B_j(\bm{v})] \geq 4\mathbb{E}[w_j]\}$. \item For all $j \in L$, set $p_j :=\frac{1}{2} \expectedsub{\bm{v}}{\SW^B_j(\bm{v})}$. \item Let $\Lambda_1 := \emptyset, X_i := \emptyset$ for all $i \in [n]$ and $Y_j := \{j\}$ for all $j \in [k]$. \item For all $j \in L$: \begin{enumerate} \item Set $q_j := 1 / (2 \mathsf{Pr}[w_j \leq p_j])$. \item With probability $q_j$, offer seller $j$ payment $p_j$ in exchange for her item. Otherwise, skip this seller. \item If $j$ accepts the offer, set $\Lambda_1 := \Lambda_1 \cup \{j\}$. 
\end{enumerate} \item For all $i \in [n]$:\label{algline:buyer_cycle} \begin{enumerate} \item Buyer $i$ chooses a bundle $B_i \in \mathcal{D}(v_i, \bm{p}, \Lambda_i)$, i.e., a bundle that maximizes her utility. \item Allocate the selected items to buyer $i$, i.e., $X_i :=B_i$ and $Y_j := \emptyset$ for all $j \in B_i$. \item Remove the selected items from the available items, i.e., $\Lambda_{i+1} := \Lambda_i \setminus B_i$. \end{enumerate} \item Return the outcome consisting of allocation $(\bm{X} = (X_1, \ldots, X_n), \bm{Y} = (Y_1, \ldots, Y_k))$ and payments $\bm{\rho} = (\bm{\rho}^B, \bm{\rho}^S)$, where $\rho_i^B = \sum_{j \in X_i} p_j$ for $i \in [n]$ and $\rho_j^S = -p_j \indicator{Y_j = \varnothing}$ for $j \in [k]$. \end{enumerate} } \end{varwidth} }\\ Note that Algorithm $\mathbb{A}$ is used in the first steps of mechanism $\mathbb{M}_{\text{1-supply}}$, where $\mathbb{E}_{\bm{v}}[\SW^B_j(\bm{v})]$ is computed. Let $\alpha$ be the factor by which $\mathbb{A}$ is guaranteed to approximate the social welfare of the buyers. \begin{theorem} \label{thm:18-approx_xos} $\mathbb{M}_{\text{1-supply}}$ is ex-post IR, DSIC, DSBB, and $(2 + 4\alpha)$-approximates the optimal social welfare. \end{theorem} In particular, taking for $\mathbb{A}$ an optimal algorithm (i.e., $\alpha = 1$), we obtain that there exists a mechanism that is ex-post IR, DSIC, DSBB, and $6$-approximates the optimal social welfare. Alternatively, one may take for $\mathbb{A}$ an approximation algorithm in order to obtain polynomial time implementable mechanisms, as explained in further detail in \cite{fgl15}. 
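A key calibration in $\mathbb{M}_{\text{1-supply}}$ is that each seller $j \in L$ is offered her price only with probability $q_j = 1/(2F_j(p_j))$, so the item enters the market with probability exactly $q_j \cdot F_j(p_j) = 1/2$. The following Monte Carlo sketch (our illustration; the uniform value distribution and the price are assumptions chosen only to make $F_j(p_j)$ nontrivial, not taken from the paper) checks this numerically:

```python
import random

def item_enters_market(draw_w, p, F_p, rng):
    """One trial of the seller stage: with probability q = 1/(2*F_p) the
    seller is offered price p, and she accepts iff her value w <= p.
    Returns True iff the item ends up in the market."""
    q = 1.0 / (2.0 * F_p)              # requires F_p >= 1/2 so that q <= 1
    if rng.random() >= q:
        return False                   # mechanism skips this seller
    return draw_w(rng) <= p            # seller accepts iff w <= p

# Example: w ~ Uniform[0, 10] and price p = 8, so F(p) = Pr[w <= p] = 0.8
# and the offer probability is q = 1/1.6 = 0.625.
rng = random.Random(0)
trials = 200_000
hits = sum(item_enters_market(lambda r: r.uniform(0.0, 10.0), 8.0, 0.8, rng)
           for _ in range(trials))
print(round(hits / trials, 2))  # close to 0.50, as intended
```

The empirical frequency concentrates around $1/2$, which is what Lemma \ref{lemma:ALG^S} relies on.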
At the end of the section we also prove that in the case where both buyers and sellers possess additive valuation functions, the mechanism we present is DSIC and ex-post IR on both sides of the market. We assume throughout this section that $(n,m,k,\bm{I},\bm{G},\bm{F})$ is a given two-sided market with XOS buyers and additive sellers, on which we run the mechanism to be defined. As in the previous section, the buyers are assumed to have XOS valuation functions over the items. Since the number of items and the number of sellers now differ in general, we use $m$ to denote the number of sellers and $k$ for the number of items. The valuation $w_j$ of a seller $j$ is now an additive function. We reuse the following notation from Section \ref{sec:unit-supply}: the allocation $(X_1^\all(\bm{v}), \ldots, X_n^\all(\bm{v}))$ returned by an allocation algorithm $\mathbb{A}$ on input $\bm{v}$ assigns the items $[k]$ to the buyers $[n]$. We let $\alpha \geq 1$ again denote the approximation factor by which $\mathbb{A}$ approximates the social welfare. For an XOS valuation $v_i$ and bundle $T \subseteq [k]$ we use $a(v_i,T, \cdot)$ to denote the additive representative function of $v_i$ for $T$. We also use the buyers' social welfare contribution $\SW^B_\ell(\bm{v})$ for item $\ell \in [k]$ and buyers' valuation profile $\bm{v}$, as defined in Section \ref{sec:unit-supply}. We define furthermore the \emph{sellers' social welfare contribution} $\SW^S_\ell(\bm{w})$ for item $\ell \in I_j$ and sellers' valuation profile $\bm{w}$ as $\SW^S_\ell(\bm{w}) := w_j(\{\ell\})$. Since $w_j$ is an additive function for every $j \in [m]$, there is no need to define an additive representative function for sellers. \smallbreak \noindent \textbf{Mechanism.} We aim to design a BIC, interim IR, and SBB mechanism that approximates the optimal social welfare within a constant. We propose the following mechanism, which we refer to as $\mathbb{M}_\text{add}$. 
We let $L_j := \{\ell \in I_j : \mathbb{E}[\SW^B_\ell(\bm{v})] \geq 4 \mathbb{E}[\SW^S_\ell(\bm{w})]\}$ and $\bar{L}_j := I_j \setminus L_j$ for all $j \in [m]$, and we let $L := \bigcup_{j = 1}^m L_j$ and $\bar{L} := [k] \setminus L$ denote the sets of high welfare items and low welfare items, respectively. Our mechanism will only allow trading items in $L$, and we define for $\ell \in L$ the item price \begin{equation*} p_\ell := \frac{1}{2} \expected{\SW^B_\ell(\bm{v})}, \end{equation*} similar to what we did for $\mathbb{M}_{\text{1-supply}}$. An essential difference between $\mathbb{M}_{\text{add}}$ and $\mathbb{M}_{\text{1-supply}}$ is that the order in which buyers and sellers are processed is reversed. Mechanism $\mathbb{M}_{\text{add}}$ roughly works as follows. It first asks every buyer which set of items she would like to receive from those items in $L$ that have not been requested yet. Then $\mathbb{M}_{\text{add}}$ offers every seller $j \in [m]$ a payment in exchange for the subset of all items in $I_j$ that have been requested. This offer is made with a probability $q_j$ that ensures that the requested items of seller $j$ are transferred to the buyers with probability $1/2$. The items of the sellers accepting the offer are transferred to the buyers for the corresponding item prices. Buyers act strategically, and request a set of items that maximizes their expected utility, knowing that the item sets requested from each seller will be assigned to them with probability $1/2$.\footnote{The buyer may need to compute complicated expected values in order to establish which bundle maximizes her expected utility; this depends on the query oracle model we assume to have. However, the focus of this paper is not on polynomial time implementability but on the existence of mechanisms that approximate the optimal social welfare within a constant.} 
\noindent\fbox{ \begin{varwidth}{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax} \small{ \begin{enumerate} \item For $\ell \in [k]$, compute $\expected{\SW^B_\ell(\bm{v})}$ and $\expected{\SW^S_\ell(\bm{w})}$. \item For all $j \in [m]$, compute $L_j$. \item Compute $L$, $\bar{L}$, and the prices $p_\ell$ for $\ell \in L$. \item Let $\Lambda_1 := L$, $X_i := \varnothing$ for all $i \in [n]$, and $Y_j := I_j$ for all $j \in [m]$. \item \label{buyerloop} For each buyer $i \in [n]$: \begin{enumerate} \item Ask buyer $i$ to select an expected-utility-maximising bundle $B_i \subseteq \Lambda_i$ given the prices $\{p_\ell : \ell \in \Lambda_i\}$ from the set of available items (where the expectation is taken with respect to the randomness of both the mechanism and the valuations). \item \label{lambda} Update the set of available items $\Lambda_{i+1} := \Lambda_i \setminus B_i$. \end{enumerate} \item Let $B := \bigcup_{i=1}^n B_i$ be the set of all items demanded by the buyers. \item For each seller $j \in [m]$: \begin{enumerate} \item Let $S_j := B \cap L_j$ be the set of items owned by seller $j$ that are demanded. \item Let $p(S_j) := \sum_{\ell \in S_j} p_\ell$ and let $q_j := 1/(2 \prob{w_j(S_j) \leq p(S_j)})$. \item \label{offer} With probability $q_j$, offer seller $j$ payment $p(S_j)$ in exchange for the bundle $S_j$. Otherwise, skip this seller. \item If the seller accepts the offer, allocate each item in $S_j$ to the buyer that requested it (i.e., remove $S_j$ from $Y_j$ and add $S_j \cap B_i$ to $X_i$ for all $i \in [n]$). \end{enumerate} \item Return the outcome consisting of allocation $(\bm{X} = (X_1, \ldots, X_n), \bm{Y} = (Y_1, \ldots, Y_m))$ and payments $\bm{\rho} = (\bm{\rho}^B, \bm{\rho}^S)$, where $\rho_i^B = \sum_{\ell \in X_i} p_\ell$ for $i \in [n]$ and $\rho_j^S = -\sum_{\ell \in I_j \setminus Y_j} p_\ell$ for $j \in [m]$. \end{enumerate} } \end{varwidth} }\\ \begin{example} There is $1$ buyer and $2$ unit-supply sellers. Each seller possesses $1$ item. 
The buyer has two XOS valuation functions $v_{1}$ and $v_{2}$, each chosen with probability $0.5$. $v_{1}$ is the maximum of $3$ additive functions $a_{1}$, $a_{2}$, and $a_{3}$, i.e., $v_{1}(S)=\max \{a_{1}(S), a_{2}(S), a_{3}(S)\}$. $v_{2}$ consists of the single additive function $a_{4}$. The additive functions are given in the following table; recall that an additive function is of the form $a(S) = \sum_{j \in S} \alpha_j$ for all $S \subseteq [k]$. Each seller $j$ has valuation function $w_j = 0$, so we always want to reallocate items from the sellers to the buyer. \begin{center} \begin{tabular}{ | c | c | c | } \hline Function & item $1$ ($\alpha_{1}$)& item $2$ ($\alpha_{2}$)\\ \hline $a_{1}$ & $0$ & $4$ \\ \hline $a_{2}$ & $8$ & $0$ \\ \hline $a_{3}$ & $7$ & $2$ \\ \hline $a_{4}$ & $1$ & $6$ \\ \hline \end{tabular} \end{center} Now, we have to compute the prices that the mechanism posts, and for this we need the expected contribution to the optimal social welfare of every item. First, notice that the optimum allocates items $1$ and $2$ to the buyer when her valuation is $v_{1}$. In this case the contribution to the optimal social welfare of item $1$ is $7$, and the contribution of item $2$ is $2$. Similarly, if the buyer has valuation $v_{2}$, the optimum still allocates items $1$ and $2$ to her, but in this case the contribution to the optimal social welfare of item $1$ is $1$, and the contribution of item $2$ is $6$. Thus, the expected contribution of every item to the optimal social welfare is $4$, i.e., $\mathbb{E}[\SW^B_j(\bm{v})] = 4$ for $j =1,2$. Since the price $p_j$ of each item is defined to be half of its expected contribution to the optimal social welfare, $p_{j} = 2$ for both items. Now the buyer has to compute the bundle that maximises her expected utility given the posted prices, and the fact that each seller is available with probability $0.5$. First, consider the case when the buyer has valuation $v_{1}$. 
In this case the expected utilities for the different bundles are: \[ u(\{1\})=\frac{1}{2} \cdot (8-2) + \frac{1}{2} \cdot 0 = 3 \] \[ u(\{2\})=\frac{1}{2} \cdot (4-2) + \frac{1}{2} \cdot 0 = 1 \] \[ u(\{1,2\}) = \frac{1}{4} \cdot (8-2) + \frac{1}{4} \cdot (4-2) +\frac{1}{4} \cdot (9 - 4) + \frac{1}{4} \cdot 0 = \frac{13}{4} \] The utility-maximising bundle that will be requested by the buyer in case of $v_{1}$ is therefore $\{1,2\}$. Instead, if the valuation of the buyer is $v_{2}$, an analogous computation shows that the requested bundle will be $\{2\}$. \end{example} This example shows that the buyers face involved computations in order to determine their expected-utility-maximising bundle. But, again, our goal is to understand the existence of mechanisms that achieve a constant approximation to the optimal social welfare, even if they are not computationally efficient. \begin{theorem}\label{thm:bicmech} The mechanism $\mathbb{M}_\mathrm{add}$ is interim IR, BIC, DSBB, and $(2 + 4\alpha)$-approximates the optimal social welfare. \end{theorem} By taking for $\mathbb{A}$ an optimal algorithm (i.e., $\alpha = 1$), we obtain the existence of a mechanism that is interim IR, BIC, DSBB, and $6$-approximates the optimal social welfare. Moreover, if both a polynomial time approximation algorithm $\mathbb{A}$ and a polynomial time approximate query oracle exist, then the mechanism runs in polynomial time. \begin{corollary}\label{cor:additive} For the special case that for all $i \in [n]$, distribution $G_i$ is over additive valuation functions, $\mathbb{M}_\mathrm{add}$ is ex-post IR, DSIC, DSBB and $(2 + 4\alpha)$-approximates the optimal social welfare. \end{corollary} \section{Discussion} An open problem is to extend or refine our mechanisms so that they satisfy the DSIC and ex-post IR properties for the case of XOS buyers and additive sellers. 
The first naive approach might be to treat every additive seller as a set of distinct unit-supply sellers and then run $\mathbb{M}_\text{1-supply}$. However, this is not guaranteed to work, due to the fact that the items placed in the market by a seller influence the buyers' decisions on their bundle choices. Additionally, one might ask every seller for her favourite bundle to place in the market. However, this may cause a seller to regret having chosen that particular bundle after seeing the realizations of the buyers' valuations. On the other hand, it also seems highly challenging to establish any sort of impossibility result for any reasonably defined class of posted price mechanisms for two-sided markets. Another natural challenge is to extend the above mechanism to the setting in which both buyers and sellers possess an XOS valuation function over bundles of items. We suspect, however, that it is impossible to devise a suitable mechanism that uses item pricing (i.e., mechanisms that fix a price vector offline and then request buyers or sellers one-by-one to specify their favorite item bundle). The main reason is that if a buyer (or a seller) is asked to select the bundle of items $B$ she desires most, then it is impossible to guarantee that she receives the complete bundle. Instead, she may receive a subset of it, and she may then regret having chosen bundle $B$ rather than another bundle $B'$ after having observed the realizations of the sellers' valuations. \appendix \section{An Initial Mechanism and Direct Trade Strong Budget Balance (Full Details)}\label{sec:comb_exchange} In \cite{bd14}, Blumrosen and Dobzinski present a mechanism for exchange markets with subadditive valuation functions. They prove the following for this mechanism, which we name $\mathbb{M}_{\text{bd}}$. 
\begin{theorem}[Blumrosen and Dobzinski \cite{bd14}]\label{thm:db-sw} Mechanism $\mathbb{M}_{\text{bd}}$ is a DSIC, WBB, ex-post IR randomized direct revelation mechanism that $4H(s)$-approximates the optimal social welfare for combinatorial exchange markets $(n,k,\bm{I},\bm{F})$ with subadditive valuation functions, where $s = \min\{n, |I_i| : i \in [n]\}$ is the minimum of the number of agents and the number of items in an agent's initial endowment, and $H(\cdot)$ denotes the harmonic numbers. \end{theorem} In particular, this mechanism gives us a constant approximation factor if the number of starting items of the agents is bounded by a constant. We now show how to use this mechanism as a black box in order to obtain an SBB mechanism with only a slightly worse approximation ratio. Define mechanism $\mathbb{M}_{\text{sbb}}$ as follows. When given as input a combinatorial exchange market $C = (n,k,\bm{I},\bm{F})$, \begin{enumerate} \item Select an agent $i \in [n]$ uniformly at random. \item Run Mechanism $\mathbb{M}_{\text{bd}}$ on the combinatorial exchange market \begin{equation*} C_{-i} = ([n]\setminus\{i\}, \bm{I}_{-i} = (I_1, \ldots, I_{i-1}, I_{i+1}, \ldots, I_n), \bm{F}_{-i} = (F_1, \ldots, F_{i-1}, F_{i+1}, \ldots, F_n)). \end{equation*} Let $(\bm{X}_{-i},\bm{\rho}_{-i})$ be the outcome that Mechanism $\mathbb{M}_{\text{bd}}$ outputs. \item Set $X_i = I_i$ and set $\rho_i = -\sum_{j \in [n] \setminus\{i\}} \rho_j$. Output the allocation $(X_i, \bm{X}_{-i})$ and the payment vector $(\rho_i, \bm{\rho}_{-i})$. \end{enumerate} So Mechanism $\mathbb{M}_{\text{sbb}}$ essentially runs Mechanism $\mathbb{M}_{\text{bd}}$ with one random agent removed from the market. This agent receives the leftover money that Mechanism $\mathbb{M}_{\text{bd}}$ generates, and does not receive or lose any items. The following is a direct corollary of the DSIC, WBB, and ex-post IR properties of mechanism $\mathbb{M}_{\text{bd}}$. 
\begin{theorem} Mechanism $\mathbb{M}_{\text{sbb}}$ is a DSIC, SBB, and ex-post IR mechanism for exchange markets with subadditive valuation functions. \end{theorem} Second, the following theorem shows that the mechanism loses only a factor $2n/(n-1) \leq 3$ in the approximation ratio for $n \geq 3$. (For $n = 2$ it is straightforward to come up with alternative mechanisms that achieve a good approximation ratio.) \begin{theorem} Mechanism $\mathbb{M}_{\text{sbb}}$ achieves an $8nH(s)/(n-1)$-approximation to the optimal social welfare for exchange markets with subadditive valuations and at least $3$ agents. \end{theorem} \begin{proof} Fix a valuation vector $\bm{v}$ of the agents, and let $\bm{X}_{\bm{v}}^{**} \in \mathcal{A}$ be the social welfare maximising allocation when the agents have valuations $\bm{v}$. For an agent $i \in [n]$, denote by $\bm{X}_{\bm{v},-i}^{**}$ the allocation for $C_{-i}$ where $(X_{\bm{v},-i}^{**})_j = (X_{\bm{v}}^{**})_j \setminus I_i$ for $j \in [n] \setminus \{i\}$, i.e., the allocation obtained from $\bm{X}_{\bm{v}}^{**}$ when $i$ and all items of $i$ are removed. Moreover, let $\bm{X}_{\bm{v},-i}^{*}$ be the optimal allocation of the combinatorial exchange market $C_{-i}$ when the valuation function vector of the agents $[n]\setminus\{i\}$ is fixed to $\bm{v}_{-i}$. 
Mechanism $\mathbb{M}_{\text{sbb}}$ selects $i$ uniformly at random, so by Theorem \ref{thm:db-sw}, the expected social welfare of Mechanism $\mathbb{M}_{\text{sbb}}$ is at least \begin{eqnarray*} \frac{1}{4H(s)}\expectedsub{i}{\sum_{j \in [n] \setminus \{i\}} v_j(\bm{X}_{\bm{v},-i}^{*})} & \geq & \frac{1}{4H(s)}\expectedsub{i}{\sum_{j \in [n] \setminus \{i\}} v_j(\bm{X}_{\bm{v},-i}^{**})}\\ & = & \frac{1}{4nH(s)} \sum_{i \in [n]} \sum_{j \in [n] \setminus \{i\}} v_j((X_{\bm{v}}^{**})_j \setminus I_i) \\ & = & \frac{1}{4nH(s)} \sum_{i \in [n]} \sum_{j \in [n] \setminus \{i\}} v_i((X_{\bm{v}}^{**})_i \setminus I_j) \\ & = & \frac{1}{4nH(s)} \sum_{i \in [n]} \sum_{\substack{\{j,j'\} : j, j' \in [n] \setminus \{i\} \\ \wedge j \not= j'}} \frac{1}{n-2}(v_i((X_{\bm{v}}^{**})_i \setminus I_j) + v_i((X_{\bm{v}}^{**})_i \setminus I_{j'}))\\ & \geq & \frac{1}{4nH(s)} \sum_{i \in [n]} \sum_{\{j,j'\} : j, j' \in [n] \setminus \{i\} \wedge j \not= j'} \frac{1}{n-2}v_i(\bm{X}_{\bm{v}}^{**}) \\ & = & \frac{1}{4nH(s)} \sum_{i \in [n]} \frac{n-1}{2}v_i(\bm{X}_{\bm{v}}^{**}) \\ & = & \frac{n-1}{8nH(s)} \sum_{i\in [n]} v_i(\bm{X}_{\bm{v}}^{**}), \end{eqnarray*} \noindent where the second inequality follows from subadditivity. This proves the claim, since the above holds for every valuation vector $\bm{v}$. \qed \end{proof} This yields an ex-post IR, SBB, DSIC mechanism that $O(1)$-approximates the social welfare if the number of items initially possessed by an agent is bounded by a constant. The principle that we used to construct Mechanism $\mathbb{M}_{\text{sbb}}$ can more generally be used to turn any WBB mechanism into an SBB one, while preserving the DSIC and ex-post IR properties. This principle also reveals a problematic aspect of the notion of SBB: it allows agents to receive money while they are not involved in any trade. This motivates a strengthened notion of strong budget balance, which we call \emph{direct trade strong budget balance}. 
\begin{definition} A mechanism for an exchange market satisfies \emph{direct trade strong budget balance (DSBB)} iff the outcome it generates can be achieved by a set of bilateral trades, where each trade consists of a reallocation of an item from an agent $i$ to an agent $j$, and a monetary transfer from agent $j$ to agent $i$. Moreover, each item may only be traded once. \end{definition} It can be seen that Mechanism $\mathbb{M}_{\text{sbb}}$ does not satisfy DSBB. In the remainder of the paper we will proceed to design mechanisms for two-sided markets that do satisfy DSBB.\footnote{We note that the double auctions given in \cite{cdlt16} also satisfy the DSBB property.} \section{Proof of Theorem \ref{thm:18-approx_xos}} We split the proof into two lemmas that bound the sellers' and the buyers' relative contributions to the social welfare. We use the notation \OPT \ as defined in Section \ref{sec:prelims}, and we use \ALG \ to denote the expected social welfare of the mechanism, i.e., $\mathbb{E}_{\bm{v},\bm{w}}[\SW(\mathbb{M}_\text{1-supply}(\bm{v},\bm{w}))]$. Moreover, the superscripts $S, B$ respectively denote the sellers' and buyers' contributions to the social welfare, e.g., $\OPT = \OPT^S + \OPT^B$ and $\ALG = \ALG^S + \ALG^B$. Recall that we use $\bm{X}^{\all}(\bm{v})$ to denote the allocation resulting from Algorithm $\mathbb{A}$ on buyers' valuation vector $\bm{v}$. The following lemma is a simple consequence of the fact that $\mathbb{M}_\text{1-supply}$ ensures that every seller in $L$ receives an offer and accepts it with probability exactly $1/2$. \begin{lemma} \label{lemma:ALG^S} If every seller $j \in L$ puts her item into the market with probability exactly $1/2$, then \begin{equation*} 2 \ALG^S \geq \sum_{j =1}^k \expected{w_j} \geq \OPT^S. \end{equation*} \end{lemma} \begin{proof} The second inequality is trivial, so we focus on the first inequality. 
First, observe that \[ \prob{w_j > p_j} \leq \prob{w_j > 2 \expected{w_j}} < \frac{1}{2}, \] where the first inequality is because $j \in L$, and the second inequality is by Markov's inequality. Thus with probability at least $1/2$ a seller $j$ is happy to sell her item at price $p_{j}$. But every seller receives an offer from the mechanism with probability $q_j := 1 / (2 \mathsf{Pr}[w_j \leq p_j])$, so every seller gets an offer and accepts it with probability exactly $1/2$. This implies that every $j \in L$ contributes in expectation at least $\expected{w_j}/2$ to the social welfare: she keeps her item whenever she does not accept an offer, which happens with probability $1/2$, and possibly even when she does, in case the buyers do not buy item $j$. Moreover, a seller in $\bar{L}$ never trades, so her full expected valuation is contributed to the expected social welfare. \qed \end{proof} Next, we prove a more difficult bound that relates $\ALG^B$ and $\ALG^S$ to $\OPT^B$. \begin{lemma}\label{lemma:ALG^B} The buyers' contribution to the optimal social welfare is bounded by \[ 4\alpha \ALG^B + 4\alpha \ALG^S \geq \OPT^B. \] \end{lemma} \noindent Before proving Lemma \ref{lemma:ALG^B}, we point out that \refTheorem{thm:18-approx_xos} follows straightforwardly from it. \begin{proof}[of \refTheorem{thm:18-approx_xos}] The bound on the approximation ratio follows from the sum of the inequalities of \refLemma{lemma:ALG^S} and \refLemma{lemma:ALG^B}. Moreover, it is a dominant strategy for a seller to accept if and only if the payment offered to her exceeds her valuation, and it is a dominant strategy for a buyer to choose a utility-maximising bundle for the items and item prices offered to her. Thus, when viewed as a direct revelation mechanism, $\mathbb{M}_{\text{1-supply}}$ is DSIC. It is clear that participating in the mechanism can never lead to a decrease in utility for either buyers or sellers, and therefore the mechanism is also ex-post IR.
Lastly, it is straightforward to see that the mechanism is DSBB, as the definition of $\mathbb{M}_{\text{1-supply}}$ which we gave in terms of sequential posted pricing naturally yields the required set of bilateral trades. \qed \end{proof} So it remains to prove Lemma \ref{lemma:ALG^B}. In order to do this, we first prove two propositions: one of them bounds the expected sum of the buyers' utilities, and one of them bounds the expected sum of the buyers' payments. In both propositions we only consider items in $L$. Given a buyers' valuation profile $\bm{v}$, let $\bm{v}_{<i} = (v_1, \ldots, v_{i-1})$. Further, let $Z$ be a random variable that denotes the sellers that receive and accept an offer from the mechanism, i.e., the set $\Lambda_{1}$ at step \ref{algline:buyer_cycle} of $\mathbb{M}_{\text{1-supply}}$. For $i \in [n]$ let $\Lambda_i(\bm{v}_{<i}, Z)$ be the set $\Lambda_i$ as given in the definition of $\mathbb{M}_{\text{1-supply}}$ when the valuation profile of the buyers is $\bm{v}$ and $Z$ are the sellers in the market. Note that this implies that $X_i \subseteq \Lambda_i(\bm{v}_{<i}, Z) \subseteq Z$. Consequently, $\Lambda_{n+1}(\bm{v}, Z)$ is the subset of items for which the corresponding sellers have accepted the offer made to them by the mechanism, but which remain allocated to the corresponding sellers after execution. \begin{proposition} \label{prop:utility_B} The total expected utility of the buyers for the allocation returned by $\mathbb{M}_{\text{1-supply}}$ is bounded from below by \begin{equation*} \mathbb{E}\left[\sum_{i \in [n]} u_i(\mathbb{M}_{\text{1-supply}}(\bm{v},\bm{w}))\right] \geq \frac{1}{2} \sum_{j \in L} \probsub{\bm{v}, Z}{j \in \Lambda_{n+1}(\bm{v}, Z)\ |\ j \in Z} p_j. \end{equation*} (Note that the random variables in this expression are $\bm{v}, \bm{w}$, and the decisions of the mechanism to make offers to the sellers in $L$.) \end{proposition} \begin{proof} First, note that for each $j \in L$ it holds that $\mathsf{Pr}[j \in Z] = 1/2$.
Recall that we defined $p_j := \frac{1}{2} \mathbb{E}_{\bm{v}}[\SW^B_j(\bm{v})]$. Thus, observe that by definition of $p_j$, $\SW_j^B(\bm{v})$, and the law of total probability, it holds for all $j \in L$ that \begin{equation} \label{eq:pj_sw} p_j = \expectedsub{\bm{v}}{\SW^B_j(\bm{v}) - p_j} = \sum_{i=1}^n \expectedsub{\bm{v}}{(\SW^B_j(\bm{v}) - p_j) \indicator{j \in X_i^\all(\bm{v})}}. \end{equation} Fix $i \in [n]$, a buyers' valuation profile $\bm{v}$, and a set $Z \subseteq L$ of sellers who accepted the mechanism's offer, and consider the set $\Lambda_i(\bm{v}_{<i}, Z) \subseteq L$ of available items that $i$ can choose from. Buyer $i$ selects a bundle that maximizes her utility, i.e., that is in $\mathcal{D}(v_i, \bm{p}, \Lambda_i(\bm{v}_{<i}, Z))$. Now consider an additional randomly drawn profile of valuation functions $\tilde{\bm{v}}_{-i}$ for all buyers except $i$, that is independent of $\bm{v}$. Let $X^\all_i(v_i, \tilde{\bm{v}}_{-i})$ be the allocation of buyer $i$ returned by $\mathbb{A}(v_i, \tilde{\bm{v}}_{-i})$, and consider the corresponding additive representative function $a(v_i, X_i^\all(v_i, \tilde{\bm{v}}_{-i}), \cdot)$, which satisfies $a(v_i, X_i^\all(v_i, \tilde{\bm{v}}_{-i}),\{j\}) = \SW^B_j(v_i, \tilde{\bm{v}}_{-i})$. Let \begin{equation*} S_i(v_i, \bm{v}_{-i}, \tilde{\bm{v}}_{-i}, Z) := X_i^\all(v_i, \tilde{\bm{v}}_{-i}) \cap \Lambda_i(\bm{v}_{<i}, Z) \end{equation*} be the items in $X_i^\all(v_i, \tilde{\bm{v}}_{-i})$ that buyer $i$ may choose from under valuation profile $\bm{v}$. As $i$ chooses a bundle $B_i(\bm{v}, Z) \in \mathcal{D}(v_i, \bm{p}, \Lambda_i(\bm{v}_{<i}, Z))$ that maximizes her utility, and $S_i(v_i, \bm{v}_{-i}, \tilde{\bm{v}}_{-i}, Z)$ is a feasible bundle contained in $\Lambda_i(\bm{v}_{<i}, Z)$, it follows that $i$'s utility for $B_i(\bm{v}, Z)$ is at least the utility she would get for choosing $S_i(v_i, \bm{v}_{-i}, \tilde{\bm{v}}_{-i}, Z)$.
That is, for all $\bm{v}$ and $Z \subseteq L$ \begin{eqnarray*} v_i(B_i(\bm{v}, Z)) - \sum_{j \in B_i(\bm{v},Z)} p_j & \geq & \mathbb{E}_{\tilde{\bm{v}}_{-i}}\left[v_i(S_i(v_i, \bm{v}_{-i}, \tilde{\bm{v}}_{-i}, Z)) - \sum_{j \in S_i(v_i, \bm{v}_{-i}, \tilde{\bm{v}}_{-i}, Z)} p_j\right] \\ & \geq & \expectedsub{\tilde{\bm{v}}_{-i}}{\sum_{j \in S_i(v_i, \bm{v}_{-i}, \tilde{\bm{v}}_{-i}, Z)} (a(v_i, X_i^\all(v_i, \tilde{\bm{v}}_{-i}), \{j\}) - p_j)}\\ & = & \expectedsub{\tilde{\bm{v}}_{-i}}{\sum_{j \in S_i(v_i, \bm{v}_{-i}, \tilde{\bm{v}}_{-i}, Z)} (\SW^B_j(v_i, \tilde{\bm{v}}_{-i}) - p_j)}. \end{eqnarray*} The second inequality follows from the definition of the corresponding additive function $a(v_i, X_i^\all(v_i, \tilde{\bm{v}}_{-i}), \cdot)$. Now summing the above expression over all $i \in [n]$ and taking the expectation over $\bm{v}$ and $Z$, we get \begin{eqnarray*} \expectedsub{\bm{v}, Z}{\sum_{i=1}^n \left(v_i(B_i(\bm{v}, Z)) - \sum_{j \in B_i(\bm{v},Z)} p_j\right)} & \geq & \expectedsub{\bm{v}, \tilde{\bm{v}}_{-i}, Z}{\sum_{i=1}^n \sum_{j \in S_i(v_i, \bm{v}_{-i}, \tilde{\bm{v}}_{-i}, Z)} (\SW^B_j(v_i, \tilde{\bm{v}}_{-i}) - p_j)}\\ & = & \expnobrack_{\bm{v}, \tilde{\bm{v}}_{-i}, Z}\Bigg[\sum_{i=1}^n \sum_{j \in L} (\SW^B_j(v_i, \tilde{\bm{v}}_{-i}) - p_j) \\ && \ \cdot \indicator{j \in X_i^\all(v_i, \tilde{\bm{v}}_{-i})} \indicator{j \in \Lambda_i(\bm{v}_{<i}, Z)}\Bigg]. \end{eqnarray*} Note that we exploited the independence of the events $(j \in X_i^\all(v_i, \tilde{\bm{v}}_{-i}))$ and $(j \in \Lambda_i(\bm{v}_{<i}, Z))$.
Thus, switching the order of the sums and using linearity of expectation, we get that \begin{align*} & \expectedsub{\bm{v}, Z}{\sum_{i=1}^n \left(v_i(B_i(\bm{v}, Z)) - \sum_{j \in B_i(\bm{v},Z)} p_j\right)} \\ & \qquad \geq \sum_{j \in L} \sum_{i=1}^n \probsub{\bm{v}, Z}{j \in \Lambda_i(\bm{v}_{<i}, Z)}\expectedsub{v_i, \tilde{\bm{v}}_{-i}}{(\SW^B_j(v_i, \tilde{\bm{v}}_{-i}) - p_j) \indicator{j \in X_i^\all(v_i, \tilde{\bm{v}}_{-i})}} \\ & \qquad \geq \sum_{j \in L} \probsub{\bm{v}, Z}{j \in \Lambda_{n+1}(\bm{v}, Z)} \sum_{i=1}^n \expectedsub{\bm{v}}{(\SW^B_j(\bm{v}) - p_j) \indicator{j \in X_i^\all(\bm{v})}} \\ & \qquad = \sum_{j \in L} \probsub{\bm{v}, Z}{j \in \Lambda_{n+1}(\bm{v}, Z)} p_j \\ & \qquad = \sum_{j \in L} \probsub{\bm{v}, Z}{j \in \Lambda_{n+1}(\bm{v}, Z)\ |\ j \in Z}\prob{j \in Z} p_j \\ & \qquad = \frac{1}{2}\sum_{j \in L} \probsub{\bm{v}, Z}{j \in \Lambda_{n+1}(\bm{v}, Z)\ |\ j \in Z}p_j. \end{align*} For the last inequality, we used the fact that for any $i \in [n]$ it holds that $\probsub{\bm{v}}{j \in \Lambda_i(\bm{v}_{<i}, Z)} \geq \probsub{\bm{v}}{j \in \Lambda_{n+1}(\bm{v}, Z)}$. The first equality follows from (\ref{eq:pj_sw}). \qed \end{proof} \begin{proposition} \label{prop:revenue_B} The expected sum of the payments charged by $\mathbb{M}_\text{1-supply}$ to the buyers is equal to \begin{equation*} \expected{\sum_{i \in [n]} \rho_i^B} = \frac{1}{2} \sum_{j \in L} p_j \probsub{\bm{v}, Z}{j \notin \Lambda_{n+1}(\bm{v}, Z)\ |\ j \in Z} \end{equation*} \end{proposition} \begin{proof} The revenue extracted by the mechanism, meaning the sum of the payments charged to the buyers, is equal to \begin{eqnarray*} \sum_{j \in L} p_j \probsub{\bm{v}, Z}{j \notin \Lambda_{n+1}(\bm{v}, Z) \wedge j \in Z} & = & \sum_{j \in L} p_j \probsub{\bm{v}, Z}{j \notin \Lambda_{n+1}(\bm{v}, Z)\ |\ j \in Z} \prob{j \in Z}\\ &=& \frac{1}{2} \sum_{j \in L} p_j \probsub{\bm{v}, Z}{j \notin \Lambda_{n+1}(\bm{v}, Z)\ |\ j \in Z}. 
\end{eqnarray*} \qed \end{proof} We now prove \refLemma{lemma:ALG^B} using the above two propositions. Observe that the buyers' contribution to the social welfare $\ALG^B$ extracted by $\mathbb{M}_\text{1-supply}$ is equal to the sum of all the buyers' utilities and all the buyers' payments. \begin{proof}[of \refLemma{lemma:ALG^B}] As just observed, by \refProposition{prop:utility_B} and \refProposition{prop:revenue_B}, we have that \begin{eqnarray*} \ALG^B &=& \mathbb{E}\left[\sum_{i \in [n]} u_i(\mathbb{M}_{\text{1-supply}}(\bm{v},\bm{w}))\right] + \sum_{j \in L} p_j \probsub{\bm{v}, Z}{j \notin \Lambda_{n+1}(\bm{v}, Z) \wedge j \in Z} \\ &\geq& \frac{1}{2}\sum_{j \in L} \probsub{\bm{v}, Z}{j \in \Lambda_{n+1}(\bm{v}, Z)\ |\ j \in Z}p_j + \frac{1}{2} \sum_{j \in L} p_j \probsub{\bm{v}, Z}{j \notin \Lambda_{n+1}(\bm{v}, Z)\ |\ j \in Z}\\ &=& \frac{1}{2} \sum_{j \in L} p_j = \frac{1}{4} \sum_{j \in L} \expected{\SW^B_j(\bm{v})}. \end{eqnarray*} By definition of $\bar{L}$, for each $j \in \bar{L}$ it holds that $4 \mathbb{E}[w_j] > \mathbb{E}[\SW^B_j(\bm{v})]$. Every item in $\bar{L}$ stays unsold, so \begin{equation*} \ALG^S \geq \sum_{j \in \bar{L}} \expected{w_j} > \frac{1}{4} \sum_{j \in \bar{L}} \expected{\SW^B_j(\bm{v})}. \end{equation*} Therefore, \begin{equation*} \ALG^B + \ALG^S \geq \frac{1}{4} \sum_{j=1}^k \expected{\SW^B_j(\bm{v})}. \end{equation*} Now recall that $\mathbb{E}[\SW^B_j(\bm{v})]$ was defined via the allocation $\bm{X}^\all(\bm{v})$ returned by Algorithm $\mathbb{A}$. So, \begin{equation*} \frac{1}{4} \sum_{j=1}^k \expected{\SW^B_j(\bm{v})} = \frac{1}{4} \sum_{i=1}^n \expectedsub{\bm{v}}{v_i(X_i^\all(\bm{v}))} \geq \frac{1}{4\alpha} \OPT^B.
\end{equation*} \qed \end{proof} \section{Proof of Theorem \ref{thm:bicmech} and Corollary \ref{cor:additive}} \begin{proposition} \label{prop:cover_sellers} \begin{equation*} \sum_{\ell \in L} \expected{\SW^B_\ell(\bm{v})} + 4\sum_{\ell \in \bar{L}} \expected{\SW^S_\ell(\bm{w})} > \sum_{i=1}^n \expected{v_i(X_i^\all(\bm{v}))}. \end{equation*} \end{proposition} \begin{proof} Let $a(v_i,X_i^\all(\bm{v}),\cdot)$ be the representative additive function of $v_i$ for the bundle $X_i^\all(\bm{v})$. Then, \begin{eqnarray*} \sum_{i=1}^n \expected{v_i(X_i^\all(\bm{v}))} &=& \sum_{i=1}^n \expected{\sum_{\ell \in X^\all_i(\bm{v})} a(v_i,X^\all_i(\bm{v}),\{\ell\})} \\ & = & \sum_{i=1}^n \sum_{\ell=1}^k \expected{a(v_i, X^\all_i(\bm{v}),\{\ell\}) \indicator{\ell \in X^\all_i(\bm{v})}} \\ & = & \sum_{\ell=1}^k \expected{\SW^B_\ell(\bm{v})} \\ & = & \sum_{\ell \in L} \expected{\SW^B_\ell(\bm{v})} + \sum_{\ell \in \bar{L}} \expected{\SW^B_\ell(\bm{v})} \\ & < & \sum_{\ell \in L} \expected{\SW^B_\ell(\bm{v})} + 4\sum_{\ell \in \bar{L}} \expected{\SW^S_\ell(\bm{w})}. \end{eqnarray*} The last inequality follows because \begin{equation*} 4 \sum_{\ell \in \bar{L}} \expected{\SW^S_\ell(\bm{w})} > \sum_{\ell \in \bar{L}} \expected{\SW^B_\ell(\bm{v})}, \end{equation*} by definition of $\bar{L}$. \qed \end{proof} \begin{proposition} \label{prop:utility_z} Let $\bm{v}$ be a buyers' valuation function profile, let $(X'_1, \ldots, X'_n)$ be any allocation of items to the buyers, and let $X'_{i, j} := X'_i \cap L_j$ be the set of items in $L$ that are allocated to buyer $i \in [n]$ and belonged to seller $j \in [m]$. For each seller $j \in [m]$, let $z_j \in \{0, 1\}$ be a Bernoulli random variable such that $\expected{z_j} = 1/2$. Let $X''_i(\bm{z}) := \bigcup_{j \in [m] : z_j = 1} X'_{i, j}$ for all $i \in [n]$. Then, for all $i \in [n]$ it holds that \begin{equation*} \expectedsub{\bm{z}}{v_i(X''_i(\bm{z}))} \geq \frac{1}{2} v_i(X'_i).
\end{equation*} Moreover, given any vector $\bm{p} \in \mathbb{R}^k$ of item prices, the inequality also holds on the utilities of the buyers: \begin{equation*} \expectedsub{\bm{z}}{v_i(X''_i(\bm{z})) - \sum_{\ell \in X''_i(\bm{z})} p_\ell} \geq \frac{1}{2} \left( v_i(X_i') - \sum_{\ell \in X'_i} p_\ell \right). \end{equation*} \end{proposition} \begin{proof} For the first claim, first note that due to subadditivity \begin{equation*} \expectedsub{\bm{z}}{v_i(X''_i(\bm{z}))} \geq v_i(X'_i) - \expectedsub{\bm{z}}{v_i\left(\bigcup_{j \in [m] : z_j = 0} X'_{i, j}\right)}. \end{equation*} Observe that \begin{equation*} \expectedsub{\bm{z}}{v_i(X''_i(\bm{z}))} = \expectedsub{\bm{z}}{v_i\left(\bigcup_{j \in [m] : z_j = 1} X'_{i, j}\right)} = \expectedsub{\bm{z}}{v_i\left(\bigcup_{j \in [m] : z_j = 0} X'_{i, j}\right)}, \end{equation*} because the events $z_j = 0$ and $z_j = 1$ are equiprobable for all $j \in [m]$. Combining this with the above inequality establishes the first claim. The second claim follows from the following derivation. \begin{eqnarray*} \expectedsub{\bm{z}}{v_i(X''_i(\bm{z})) - \sum_{\ell \in X''_i(\bm{z})} p_\ell} & = & \expectedsub{\bm{z}}{v_i(X''_i(\bm{z}))} - \expectedsub{\bm{z}}{\sum_{j \in [m] : z_j = 1} \sum_{\ell \in X'_{i, j}} p_\ell} \\ &=& \expectedsub{\bm{z}}{v_i(X''_i(\bm{z}))} - \expectedsub{\bm{z}}{\sum_{j = 1}^m \left(\sum_{\ell \in X'_{i, j}} p_\ell \right) \indicator{z_j = 1}}\\ &=& \expectedsub{\bm{z}}{v_i(X''_i(\bm{z}))} - \sum_{j = 1}^m \left(\sum_{\ell \in X'_{i, j}} p_\ell \right)\expectedsub{\bm{z}}{\indicator{z_j = 1}}\\ &=& \expectedsub{\bm{z}}{v_i(X''_i(\bm{z}))} - \frac{1}{2}\sum_{\ell \in X'_i} p_\ell\\ &\geq& \frac{1}{2} v_i(X'_i) - \frac{1}{2} \sum_{\ell \in X'_i} p_\ell. \end{eqnarray*} \qed \end{proof} \begin{proposition} \label{prop:sellers_1/2} Let $j \in [m]$ be a seller. The probability that, in Step \ref{offer}, Mechanism $\mathbb{M}_{\text{add}}$ makes an offer to $j$ that she accepts is $1/2$.
\end{proposition} \begin{proof} For every $j \in [m]$ and $\ell \in L_j$, it holds by definition of $p_\ell$ and $L_j$ that $p_\ell \geq 2\expected{w_j(\{\ell\})}$. From Markov's inequality it follows that \begin{equation*} \prob{w_j(S_j) > \sum_{\ell \in S_j} p_\ell} \leq \prob{w_j(S_j) > 2 \expected{w_j(S_j)}} < \frac{1}{2}. \end{equation*} Thus, $\prob{w_j(S_j) \leq \sum_{\ell \in S_j} p_\ell} \geq 1/2$, meaning that $j$ accepts the offer with probability at least $1/2$, in case she is made an offer. The mechanism makes the offer with probability $q_j$, and \begin{equation*} q_j \prob{w_j(S_j) \leq \sum_{\ell \in S_j} p_\ell} = 1/2. \end{equation*} \qed \end{proof} For $i \in [n+1]$ and valuation profile $\bm{v}$, let $\bm{v}_{<i} = (v_1, \ldots, v_{i-1})$ and let $\Lambda_i(\bm{v}_{<i})$ be the set $\Lambda_i$ defined in Step \ref{lambda}, when $\mathbb{M}_{\text{add}}$ is run and the buyers in $[i-1]$ have valuation profile $\bm{v}_{<i}$. Given this definition, $\Lambda_{n+1}(\bm{v})$ is the set of items not requested by any buyer at the end of Step \ref{buyerloop}, when the buyers' valuation profile is $\bm{v}$. \begin{lemma} The expected total utility of the buyers is at least \begin{equation*} \frac{1}{2} \sum_{\ell \in L} \mathsf{Pr}_{\bm{v}}[\ell \in \Lambda_{n+1}(\bm{v})] p_\ell. \end{equation*} \end{lemma} \begin{proof} First, let us consider a fixed buyer $i \in [n]$ and a fixed buyers' valuation profile $\bm{v}$. Let $\tilde{\bm{v}}_{-i}$ be an independently sampled valuation profile for the buyers in $[n]\setminus\{i\}$, and consider the bundle $X_i^\all(v_i, \tilde{\bm{v}}_{-i})$ that $\mathbb{A}$ allocates to $i$ when the valuation profile is $(v_i, \tilde{\bm{v}}_{-i})$. Let $X_i^L(\bm{v}, \tilde{\bm{v}}_{-i}) = X_i^\all(v_i, \tilde{\bm{v}}_{-i}) \cap L \cap \Lambda_i(\bm{v}_{<i})$.
Moreover, let $\bm{z}$ be a vector of $m$ Bernoulli random variables with $\expected{z_j} = 1/2$ and define for a subset $S(\bm{v}) \subseteq \Lambda_i(\bm{v}_{<i})$ the random variable $S(\bm{v},\bm{z}) = \bigcup_{j \in [m] : z_j = 1} (S(\bm{v}) \cap L_j)$. In particular, from this definition we obtain the random variable $X_i(\bm{v}, \tilde{\bm{v}}_{-i}, \bm{z}) = \bigcup_{j \in [m] : z_j = 1} (X^L_{i}(\bm{v}, \tilde{\bm{v}}_{-i}) \cap L_j)$. Also, note that when the buyers' valuations are $\bm{v}$, the mechanism lets $i$ request a bundle from the set $\Lambda_i(\bm{v}_{<i})$ at item prices $\bm{p}$; the buyer will therefore request a bundle $B_i(\bm{v})$ maximizing her expected utility, which can be expressed as \begin{equation*} \mathbb{E}_{\bm{z}}\left[v_i(B(\bm{v},\bm{z})) - \sum_{\ell \in B(\bm{v},\bm{z})} p_{\ell}\right], \end{equation*} since by Proposition \ref{prop:sellers_1/2} each seller's requested items will be allocated with probability $1/2$, as reflected by the Bernoulli variables $\bm{z}$. Since $B_i(\bm{v})$ is an expected-utility-maximising bundle and $X_i(\bm{v}, \tilde{\bm{v}}_{-i}, \bm{z}) \subseteq \Lambda_i(\bm{v}_{<i})$, it holds that \begin{eqnarray*} \mathbb{E}_{\bm{z}}\left[v_i(B(\bm{v},\bm{z})) - \sum_{\ell \in B(\bm{v},\bm{z})} p_{\ell}\right] & \geq & \expectedsub{\tilde{\bm{v}}_{-i}, \bm{z}}{v_i(X_i(\bm{v}, \tilde{\bm{v}}_{-i}, \bm{z})) - \sum_{\ell \in X_i(\bm{v}, \tilde{\bm{v}}_{-i}, \bm{z})} p_{\ell}} \\ & \geq & \frac{1}{2} \expectedsub{\tilde{\bm{v}}_{-i}}{v_i(X^L_i(\bm{v}, \tilde{\bm{v}}_{-i})) - \sum_{\ell \in X^L_i(\bm{v}, \tilde{\bm{v}}_{-i})} p_{\ell}} \\ & \geq & \frac{1}{2} \expectedsub{\tilde{\bm{v}}_{-i}}{a(v_i,X^\all_i(v_i, \tilde{\bm{v}}_{-i}), X_i^L(\bm{v}, \tilde{\bm{v}}_{-i})) - \sum_{\ell \in X^L_i(\bm{v}, \tilde{\bm{v}}_{-i})} p_\ell}\\ &=& \frac{1}{2} \expectedsub{\tilde{\bm{v}}_{-i}}{\sum_{\ell \in X^L_i(\bm{v}, \tilde{\bm{v}}_{-i})} (\SW^B_\ell(v_i, \tilde{\bm{v}}_{-i}) - p_\ell)}.
\end{eqnarray*} The second inequality follows from \refProposition{prop:utility_z}, and the last inequality follows from the definition of the additive representative function $a(v_i,X^\all_i(v_i, \tilde{\bm{v}}_{-i}), \cdot)$. If we sum over all $i \in [n]$ and take the expectation over $\bm{v}$, we obtain the following bound on the total expected utility of the buyers. \begin{align*} & \mathbb{E}_{\bm{v},\bm{z}}\left[\sum_{i = 1}^n (v_i(B(\bm{v},\bm{z})) - \sum_{\ell \in B(\bm{v},\bm{z})} p_{\ell})\right] \geq \frac{1}{2} \expectedsub{\bm{v},\tilde{\bm{v}}_{-i}}{\sum_{i = 1}^n \sum_{\ell \in X^L_i(\bm{v}, \tilde{\bm{v}}_{-i})} (\SW^B_\ell(v_i, \tilde{\bm{v}}_{-i}) - p_\ell)} \\ & \qquad = \frac{1}{2} \expectedsub{\bm{v}, \tilde{\bm{v}}_{-i}}{\sum_{i=1}^n \sum_{\ell \in L} (\SW^B_\ell(v_i, \tilde{\bm{v}}_{-i}) - p_\ell) \indicator{\ell \in X^L_i(\bm{v}, \tilde{\bm{v}}_{-i})}}\\ & \qquad = \frac{1}{2} \expectedsub{\bm{v}, \tilde{\bm{v}}_{-i}}{\sum_{i=1}^n \sum_{\ell \in L} (\SW^B_\ell(v_i, \tilde{\bm{v}}_{-i}) - p_\ell) \indicator{\ell \in X_i^\all(v_i, \tilde{\bm{v}}_{-i})} \indicator{\ell \in \Lambda_i(\bm{v}_{<i})}} \\ & \qquad = \frac{1}{2} \sum_{\ell \in L} \sum_{i=1}^n \expectedsub{v_i, \tilde{\bm{v}}_{-i}}{(\SW^B_\ell(v_i, \tilde{\bm{v}}_{-i}) - p_\ell) \indicator{\ell \in X_i^\all(v_i, \tilde{\bm{v}}_{-i})}} \expectedsub{\bm{v}_{-i}}{\indicator{\ell \in \Lambda_i(\bm{v}_{<i})}}. \end{align*} For the last equality, we exploited the independence of the events $(\ell \in X_i^\all(v_i, \tilde{\bm{v}}_{-i}))$ and $(\ell \in \Lambda_i(\bm{v}_{<i}))$. Then, $\expectedsub{\bm{v}_{-i}}{\indicator{\ell \in \Lambda_i(\bm{v}_{<i})}} = \prob{\ell \in \Lambda_i(\bm{v}_{<i})}$ and since $L = \Lambda_1(\bm{v}_{<1}) \supseteq \ldots \supseteq \Lambda_{n+1}(\bm{v})$, it holds that $\prob{\ell \in \Lambda_i(\bm{v}_{<i})} \geq \prob{\ell \in \Lambda_{n+1}(\bm{v})}$.
So, we have that the above expression is at least \begin{eqnarray*} && \frac{1}{2} \sum_{\ell \in L} \probsub{\bm{v}}{\ell \in \Lambda_{n+1}(\bm{v})} \sum_{i=1}^n \expectedsub{v_i, \tilde{\bm{v}}_{-i}}{(\SW^B_\ell(v_i, \tilde{\bm{v}}_{-i}) - p_\ell) \indicator{\ell \in X_i^\all(v_i, \tilde{\bm{v}}_{-i})}}\\ &=& \frac{1}{2} \sum_{\ell \in L} \probsub{\bm{v}}{\ell \in \Lambda_{n+1}(\bm{v})} \sum_{i=1}^n \expectedsub{\bm{v}}{(\SW^B_\ell(\bm{v}) - p_\ell) \indicator{\ell \in X_i^\all(\bm{v})}}. \end{eqnarray*} The equality follows from renaming the random variable $v_j := \tilde{v}_j$ for all $j \neq i$. Now observe that by definition of the prices, $p_\ell = \sum_{i=1}^n \expectedsub{\bm{v}}{(\SW^B_\ell(\bm{v}) - p_\ell) \indicator{\ell \in X_i^\all(\bm{v})}}$. Combining these derivations, we obtain the desired bound on the expected utilities: \begin{equation*} \mathbb{E}_{\bm{v},\bm{z}}\left[\sum_{i = 1}^n (v_i(B(\bm{v},\bm{z})) - \sum_{\ell \in B(\bm{v},\bm{z})} p_{\ell})\right] \geq \frac{1}{2} \sum_{\ell \in L} \mathsf{Pr}_{\bm{v}}[\ell \in \Lambda_{n+1}(\bm{v})] p_\ell. \end{equation*} \qed \end{proof} \begin{lemma} The expected sum of payments made by the buyers is equal to \begin{equation*} \frac{1}{2} \sum_{\ell \in L} \probsub{\bm{v}}{\ell \notin \Lambda_{n+1}(\bm{v})} p_\ell. \end{equation*} \end{lemma} \begin{proof} For $j \in [m]$, let $z_j$ be the random $(0,1)$-variable that indicates whether seller $j$ has been made an offer and accepted it in Step \ref{offer} of Mechanism $\mathbb{M}_{\text{add}}$, so $z_j$ is a Bernoulli variable with expected value $1/2$.
The expected sum of payments made by the buyers is then \begin{eqnarray*} \sum_{j = 1}^m \sum_{\ell \in L_j} \prob{\ell \notin \Lambda_{n+1}(\bm{v}) \wedge z_j = 1} p_\ell & = & \sum_{j = 1}^m \sum_{\ell \in L_j} \prob{\ell \notin \Lambda_{n+1}(\bm{v})} \prob{z_j = 1} p_\ell \\ & = & \frac{1}{2} \sum_{j = 1}^m \sum_{\ell \in L_j} \prob{\ell \notin \Lambda_{n+1}(\bm{v})} p_\ell\\ & = & \frac{1}{2} \sum_{\ell \in L} \prob{\ell \notin \Lambda_{n+1}(\bm{v})} p_\ell. \end{eqnarray*} The first equality holds by the independence of the two events. \qed \end{proof} \begin{lemma} \begin{equation*} \ALG^B \geq \frac{1}{4} \sum_{\ell \in L} \expectedsub{\bm{v}}{\SW^B_\ell(\bm{v})}. \end{equation*} \end{lemma} \begin{proof} The expected social welfare contribution of the buyers is equal to the sum of the expected utilities and expected payments. By the above two lemmas, their sum is at least \begin{equation*} \frac{1}{2} \sum_{\ell \in L} \prob{\ell \in \Lambda_{n+1}(\bm{v})} p_\ell + \frac{1}{2} \sum_{\ell \in L} \prob{\ell \notin \Lambda_{n+1}(\bm{v})} p_\ell = \frac{1}{2} \sum_{\ell \in L} p_\ell = \frac{1}{4} \sum_{\ell \in L} \expectedsub{\bm{v}}{\SW^B_\ell(\bm{v})}, \end{equation*} by definition of $p_\ell$. \qed \end{proof} \begin{lemma} \begin{equation*} 4\alpha \ALG^B + 4\alpha \ALG^S \geq \OPT^B. \end{equation*} \end{lemma} \begin{proof} By the above lemma, $4 \ALG^B \geq \sum_{\ell \in L} \mathbb{E}_{\bm{v}}[\SW^B_\ell(\bm{v})]$. Moreover, our mechanism leaves every item $\ell \in \bar{L}$ with its seller, and so $4\ALG^S \geq 4 \sum_{\ell \in \bar{L}} \mathbb{E}_{\bm{w}}[\SW^S_\ell(\bm{w})]$.
Therefore, \begin{equation*} 4 \ALG^B + 4 \ALG^S \geq \sum_{\ell \in L} \mathbb{E}_{\bm{v}}[\SW^B_\ell(\bm{v})] + 4 \sum_{\ell \in \bar{L}} \mathbb{E}_{\bm{w}}[\SW^S_\ell(\bm{w})] \geq \sum_{i=1}^n \mathbb{E}_{\bm{v}}[v_i(X_i^\all(\bm{v}))] \geq \frac{1}{\alpha}\OPT^B. \end{equation*} The second inequality holds by \refProposition{prop:cover_sellers}, and the last inequality follows because we defined $\alpha$ to be the approximation factor of algorithm $\mathbb{A}$, which is the algorithm that we assumed to generate allocation $X^\all(\bm{v})$. \qed \end{proof} \begin{lemma} \begin{equation*} 2 \ALG^S \geq \OPT^S. \end{equation*} \end{lemma} \begin{proof} The only items that our mechanism potentially reallocates are the ones belonging to $L$. Every item in $\bar{L}$ stays with its seller. For the items in $L$, the mechanism ensures that every seller sells her bundle with probability exactly $1/2$, so each seller retains her full initial endowment with probability at least $1/2$, which implies the claim. \qed \end{proof} \begin{proof}[of Theorem \ref{thm:bicmech}] Every buyer chooses a bundle that maximizes her expected utility, so the mechanism is ex-interim IR and BIC on the buyers' side. On the sellers' side, it is actually ex-post IR and DSIC: the sellers solely have to decide between accepting or rejecting a single offer to receive a proposed payment in exchange for a bundle of items, and it is clearly a dominant strategy to accept if and only if such an exchange leads to an improvement in the seller's utility. The fact that the mechanism is DSBB follows from its definition, which makes clear that the outcome is realized by the appropriate sequence of bilateral trades and payments from buyers to sellers. The approximation guarantee follows by the sum of the inequalities of the above two lemmas.
\qed \end{proof} By taking for $\mathbb{A}$ an optimal algorithm (i.e., $\alpha = 1$), we obtain the existence of a mechanism that is ex-post IR, DSIC, DSBB, and $6$-approximates the optimal social welfare. Again, one may also take for $\mathbb{A}$ a polynomial time approximation algorithm in order to obtain a polynomial time mechanism. \begin{proof}[of Corollary \ref{cor:additive}] If a buyer $i \in [n]$ has an additive valuation function, it is a dominant strategy to request the items in $\Lambda_i(\bm{v}_{<i})$ for which it holds that $v_i(\{\ell\}) > p_\ell$. This follows from the simple fact that by additivity, the utility that a player has for any bundle of items $S$ can be written as $\sum_{\ell \in S} (v_i(\{\ell\}) - p_{\ell})$. Thus, for every item $\ell \in [k]$ that a buyer requests (recall that this item is then allocated to her for price $p_\ell$ with probability $1/2$), a term of $(1/2) (v_i(\{\ell\}) - p_{\ell})$ gets added to her expected utility. So including $\ell$ in her requested bundle is profitable if and only if $v_i(\{\ell\}) - p_{\ell} \geq 0$. The ex-post IR property is also satisfied when following this strategy. \qed \end{proof} \end{document}
\begin{document} \begin{center} {\large \textbf{On the study of cellular automata on modulo-recurrent words}}\\ \vspace*{1cm} Moussa Barro \\ \footnotesize {\textit{D\'{e}partement de Mathématiques, UFR-SEA\\ Université Nazi BONI\\ Bobo-Dioulasso, Burkina Faso}\\ mous.barro@yahoo.com }\\ 01 BP 1091 Bobo-Dioulasso\\ \vspace*{0.5cm} K. Ernest Bognini\\ \footnotesize{\textit{Centre Universitaire de Kaya (CU-Kaya)\\ Universit\'{e} Joseph KI-ZERBO\\ Ouagadougou, Burkina Faso}\\ ernestk.bognini@yahoo.fr}\\ 03 BP 7021 Ouagadougou\\ \vspace*{0.5cm} Boucar\'{e} Kient\'{e}ga\\ \footnotesize{\textit{Institut Universitaire Professionnalisant, IUP\\ Universit\'{e} de D\'{e}dougou, Burkina Faso}\\ kientega.boucare@gmail.com }\\ BP 176 D\'{e}dougou\\ \begin{abstract} \noindent In this paper, we study a class of cellular automata (CA) preserving modulo-recurrence, stability by reflection and richness, called stable cellular automata (SCA). After applying these automata to Sturmian words, we establish some combinatorial properties of the resulting words. Next, the classical and palindromic complexity functions of these words are determined. Finally, we show that these words are $2$-balanced and we establish their abelian complexity function. \\[2mm] {\textbf{Keywords:} cellular automata (CA), modulo-recurrence, Sturmian words, palindrome, complexity function. \\[2mm] {\textbf{2020 Mathematics Subject Classification:} 37B15, 68Q80, 68R15, 11B85 } } \end{abstract} \end{center} \section{Introduction} A cellular automaton is a series of cells evolving according to a precise set of rules, resulting in a new generation of cells. These automata were introduced in \cite{b0} with the objective of realizing dynamical systems capable of modeling complex self-reproduction phenomena. Later, in the 1970s, the concept was popularized by the work of John Horton Conway with his famous \emph{Game of Life} on two-dimensional cellular automata (CA).
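To make the evolution rule concrete, here is a minimal sketch (an illustration of the general idea, not a construction from this paper) of a one-dimensional binary CA, using Wolfram's elementary rule 110: each cell's next state is looked up from the 3-cell neighborhood formed by its left neighbor, itself, and its right neighbor.

```python
def step(cells, rule=110):
    """One synchronous update of a 1D binary CA with wrap-around.

    Each cell reads its 3-cell neighborhood (left neighbor, itself,
    right neighbor); the 3-bit pattern indexes into the 8-bit rule table.
    """
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Each application of `step` produces a new generation of cells.
generation = [0] * 7 + [1] + [0] * 7
generation = step(generation)
```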
Thus, CA have become a multidisciplinary field of study, ranging from physics to computer science and from biology to mathematics. Modulo-recurrent words are recurrent words in which any factor appears in all positions modulo its length. For instance, Sturmian words and the Champernowne word are modulo-recurrent. Modulo-recurrent words were introduced in \cite{KT-f} and intensively studied in \cite{BKT, CKT, b23}. A palindrome is a word which reads the same from left to right as from right to left. The study of palindromes in combinatorics on words allows the characterization of some infinite words (see \cite{b7,b9,b24}). Given a finite or infinite word, the complexity function $p$ of this word counts the number of distinct factors of each given length in this word. Its study allows one to characterize some families of infinite words \cite{b6}. This notion has also been used to establish many characterizations and various properties of Sturmian words (see \cite{b27, b5, b20,b28,b12,b16}). Depending on the type of factors considered in the word (finite or infinite), we distinguish several complexity functions: palindromic, abelian, etc. The palindromic complexity function counts the number of distinct palindromic factors of each given length in the word. As for the abelian complexity function, it counts the number of Parikh vectors for each given length in the word. It was intensively studied in \cite{BKT-p, BKT}. These two notions allow us to characterize Sturmian words \cite{b11}. In this work, we study combinatorial properties of infinite words obtained by application of CA. The paper is organized as follows. After giving some definitions and notation, we recall some properties of Sturmian and modulo-recurrent words in Section 2. Next, in Section 3, we apply CA to infinite words and show that these automata preserve some properties such as modulo-recurrence and periodicity.
In Section 4, we define a class of CA called stable cellular automata (SCA) and we establish that they preserve stability by reflection and richness. Lastly, in Section 5, we carry out the combinatorial study of the words obtained by applying these SCA to Sturmian words. \section{Preliminaries} \subsection{Definitions and notation} An alphabet $\mathcal{A}$ is a finite set whose elements are called letters. A word is a finite or infinite sequence of elements over $\mathcal{A}$. We denote by $\mathcal{A}^\ast$ the set of finite words over $\mathcal{A}$ and by $\varepsilon$ the empty word. For all $u\in \mathcal{A}^*$, $|u|$ denotes the length of $u$ and, for all $x$ over $\mathcal{A}$, $|u|_x$ denotes the number of occurrences of $x$ in $u$. A word $u$ of length $n$ constituted by a single letter $x$ is simply denoted $u=x^n$; by convention $x^0=\varepsilon$. Let $u=u_1u_2 \dotsb u_n$ be a finite word with $u_i\in \mathcal{A}$ for all $i\in\left\{1,2,\dots ,n\right\}$. The word $\overline{u}=u_n \dotsb u_2u_1$ is called the reflection of $u$. Given two finite words $u$ and $v$, we have $\overline{uv}=\overline{v}\,\overline{u}$. The word $u$ is called a palindrome if $\overline{u}=u$. We denote by $\mathcal{A}^{\omega}$ (respectively, $\mathcal{A}^\infty=\mathcal{A}^*\cup \mathcal{A}^{\omega}$) the set of infinite (respectively, finite and infinite) words. An infinite word $\textbf{u}$ is ultimately periodic if there are two words $v\in \mathcal{A}^*$ and $w\in \mathcal{A}^+$ such that $\textbf{u}=vw^\omega$; if moreover $v=\varepsilon$, then $\textbf{u}$ is said to be periodic. The word $\textbf{u}$ is said to be recurrent if each of its factors appears infinitely often. The $n$-th power of a finite word $w$ is denoted by $w^n$. Let $\textbf{u}\in \mathcal{A}^\infty$ and $v\in \mathcal{A}^*$. We say that $v$ is a factor of $\textbf{u}$ if there exist $u_1\in \mathcal{A}^*$ and $\textbf{u}_2\in \mathcal{A}^\infty$ such that $\textbf{u}=u_1v\textbf{u}_2$.
In other words, we say that $\textbf{u}$ contains $v$. We also say that $u_1$ is a prefix of $\textbf{u}$ and we write $u_1=\text{Pref}_{|u_1|}(\textbf{u})$. If in particular $\textbf{u} \in \mathcal{A}^*$, then $\textbf{u}_2$ is said to be a suffix of $\textbf{u}$. Let $w$ be a factor of an infinite word $\textbf{u}$ and $x$ a letter of $\mathcal{A}$. Then, $L_n(\textbf{u})$ denotes the set of factors of length $n$ of $\textbf{u}$ and $L(\textbf{u})$ the set of all factors of $\textbf{u}$. The letter $x$ is said to be a left (respectively, right) extension of $w$ if $xw$ (respectively, $wx$) belongs to $L(\textbf{u})$. Let us denote by $\partial^-w$ (respectively, $\partial^+w$) the number of left (respectively, right) extensions of $w$ in $\textbf{u}$. When $\partial^+w=k$ with $k\geq 2$, $w$ is said to be right $k$-prolongable. In the same way, we can define the notion of a left $k$-prolongable factor. A factor $w$ of $\textbf{u}$ is said to be right (respectively, left) special if $\partial^+w>1$ (respectively, $\partial^-w>1$). Any factor that is both right and left special is called a bispecial factor. Given an infinite word $\textbf{u}$, the map from $\mathbb{N}$ into $\mathbb{N^*}$ defined by $p_\textbf{u}(n) = \# L_n(\textbf{u})$ is called the complexity function of $\textbf{u}$, where $\# L_n(\textbf{u})$ denotes the cardinality of $L_n(\textbf{u})$. This function is related to the special factors by the relation (see \cite{b8}): $$p_\textbf{u}(n+1)-p_\textbf{u}(n)= \displaystyle\sum_{w\in L_n(\textbf{u})} (\partial^+(w)-1). $$ We denote by $\text{Pal}_n(\textbf{u})$ the set of palindromic factors of length $n$ and by $\text{Pal}(\textbf{u})$ the set of all palindromic factors of $\textbf{u}$. 
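The relation above between the complexity function and the right special factors can be checked computationally on a concrete word. The sketch below is our own illustrative code (not from the paper); it uses a long prefix of the Fibonacci word, which is Sturmian, so that $p(n)=n+1$ and both sides of the identity equal $1$.

```python
# Illustrative numerical check (our own code) of the identity
#   p(n+1) - p(n) = sum over w in L_n of (d^+(w) - 1)
# on a long prefix of the Fibonacci word, which is Sturmian (p(n) = n + 1).

def fibonacci_word(iterations: int = 20) -> str:
    """Iterate the substitution a -> ab, b -> a starting from 'a'."""
    w = "a"
    for _ in range(iterations):
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w

def complexity(u: str, n: int) -> int:
    """p_u(n): number of distinct factors of length n of the finite word u."""
    return len({u[i:i + n] for i in range(len(u) - n + 1)})

def right_extensions(u: str, w: str) -> int:
    """d^+(w): number of distinct letters x such that wx is a factor of u."""
    k = len(w)
    return len({u[i + k] for i in range(len(u) - k) if u[i:i + k] == w})

u = fibonacci_word()
for n in range(1, 10):
    factors_n = {u[i:i + n] for i in range(len(u) - n + 1)}
    lhs = complexity(u, n + 1) - complexity(u, n)
    rhs = sum(right_extensions(u, w) - 1 for w in factors_n)
    assert lhs == rhs == 1  # Sturmian: one right special factor of each length
```

The prefix used is long enough that every factor of the infinite Fibonacci word of the lengths tested occurs in it, so boundary effects do not disturb the check for small $n$.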
The palindromic complexity function of $\textbf{u}$, denoted $p^{al}_\textbf{u}$, is the map from $\mathbb{N}$ into $\mathbb{N}$ which counts the number of distinct palindromic factors of length $n$ contained in $\textbf{u}$: $$p^{al}_\textbf{u}(n) = \# \left\{w\in L_n(\textbf{u}) : \overline{w}=w \right\}.$$ When $\overline{w}\in L(\textbf{u})$ for all $w\in L(\textbf{u})$, $\textbf{u}$ is said to be stable by reflection. Let $w$ be a factor of an infinite word $\textbf{u}$ over an alphabet $\mathcal{A}_q=\{a_{1}, a_{2}, \cdots, a_{q}\}$. Then, the $q$-tuple $\chi (w)=(| w |_{a_{1}}, | w|_{a_{2}}, \cdots, | w |_{a_{q}})$ is called the Parikh vector of $w$. The set of Parikh vectors of factors of length $n$ in $\textbf{u}$ is denoted by: $$\chi_{n}(\textbf{u})=\{\chi(w): w\in L_{n}(\textbf{u})\}.$$ The abelian complexity function of $\textbf{u}$ is the map from $\mathbb{N}$ into $\mathbb{N}^*$ defined by: $$\rho^{ab}_\textbf{u}(n)= \# \chi_{n}(\textbf{u}).$$ The window complexity function of $\textbf{u}$ is the map $p^f_\textbf{u}$ from $\mathbb{N}$ into $\mathbb{N}^*$ defined by $$p^f_\textbf{u}(n)=\#\left\{u_{kn}u_{kn+1}\cdots u_{n(k+1)-1} : k\geq 0\right\}.$$ The shift $S$ is the map on $\mathcal{A}^\omega$ which erases the first letter of a given word: if $\textbf{u}=u_0u_1u_2\cdots$, then $S(\textbf{u})=u_1u_2u_3\cdots$. A substitution $\varphi$ is a map of $\mathcal{A}^*$ into itself such that $\varphi (uv)=\varphi(u)\varphi(v)$, for any $u$, $v\in \mathcal{A}^*$. \subsection{Sturmian words and modulo-recurrent words} In this subsection, we recall some properties of Sturmian words and modulo-recurrent words that will be used in the sequel. In this part, the alphabet is $\mathcal{A}_{2}=\left\{a,b\right\}$. \begin{definition} An infinite word $\textbf{u}$ over $\mathcal{A}_{2}$ is said to be Sturmian if for any integer $n,\ p_\textbf{u}(n)=n+1$. \end{definition} The best-known Sturmian word is the famous Fibonacci word. 
It is the fixed point of the substitution $\varphi$ defined over $\mathcal{A}_{2}^\ast$ by: $$\varphi(a)=ab \ \textrm{and} \ \varphi(b)=a.$$ It is denoted: $$\mathbf{F}=\displaystyle \lim_{n\rightarrow \infty}\varphi^n(a).$$ \begin{definition} A Sturmian word is said to be $a$-Sturmian (respectively, $b$-Sturmian) when it contains $ a^2 $ (respectively, $ b^2$). \end{definition} \begin{definition} A word $\textbf{u}=u_0u_1u_2 \dotsb$ is said to be modulo-recurrent if any factor of $\textbf{u}$ appears in all positions modulo $i$, for all $i\geq 1$. \end{definition} \begin{definition} Let $w$ be a factor of some infinite word $\textbf{u}$. We say that $w$ is a window factor when it appears in $\textbf{u}$ at a position that is a multiple of its length. \end{definition} \begin{Proposition}\cite{CKT}\label{propo-comp-mod-rec} Let $\textbf{\emph{u}}$ be a modulo-recurrent word. Then, for all integers $n$, the set of window factors of length $n$ in $\textbf{\emph{u}}$ is equal to $L_n(\textbf{\emph{u}})$. \end{Proposition} \begin{definition} An infinite word $\textbf{u}$ is said to be $\alpha$-balanced if $\alpha$ is the smallest integer such that for any pair ($v$, $w$) of factors of $\textbf{u}$ of the same length and for every letter $x\in\mathcal{A}$, we have: \begin{center} $||v|_x-|w|_x|\leq \alpha$. \end{center} If $\alpha =1$, then $\textbf{u}$ is simply said to be balanced. \end{definition} The following theorem gives some classical characterizations of Sturmian words. \begin{theorem}\label{theo-stur} \cite{b11, b9} Let $\textbf{\emph{u}}$ be an infinite binary word. Then, the following assertions are equivalent: \begin{enumerate} \item $\textbf{\emph{u}}$ is Sturmian. \item $\textbf{\emph{u}}$ is non-ultimately periodic and balanced. \item For all $n\in \mathbb{N}^*, \hspace{0.3cm} \rho^{ab}_\textbf{\emph{u}}(n)=2$. 
\item For all $n\in \mathbb{N}$, $$ p^{al}_\textbf{\emph{u}}(n) = \left \{ \begin{array}{ll} 1 & \text{if $n$ is even} \\ 2 & \text{otherwise.} \end{array} \right. $$ \end{enumerate} \end{theorem} \begin{theorem}\label{stur-puis}\cite{b16} Let $\textbf{\emph{v}}$ be an $a$-Sturmian word over $\mathcal{A}_2$. Then, there exist a Sturmian sequence $(\epsilon_i)_{i\geq 1}$ over the alphabet $\left\{0,1\right\}$ and an integer $l$ such that $\textbf{\emph{v}}$ can be written: $\textbf{\emph{v}}=a^{l_0}ba^{l+\epsilon_1}ba^{l+\epsilon_2}ba^{l+\epsilon_3}b \dotsb$ with $l_0\leq l+1$. \end{theorem} It is proved in \cite{b23} that Sturmian words are modulo-recurrent. \begin{theorem}\label{stur-mod}\cite{CKT} Let $\textbf{\emph{u}}$ be an infinite recurrent word. Then, the following assertions are equivalent: \begin{enumerate} \item $\textbf{\emph{u}}$ is a modulo-recurrent word. \item For all integers $n\geq 1,\ p^f_\textbf{\emph{u}}(n)= p_\textbf{\emph{u}}(n)$. \end{enumerate} \end{theorem} \section{Cellular automata (CA)} In this section, we define a class of CA that we apply to infinite words. In the rest of this paper, $\textbf{u}\in \mathcal{A}^{\infty}$ and $F$ is a CA defined over $\mathcal{A}^r$. \begin{definition} Let $\mathcal{A},\mathcal{B}$ be two alphabets, $r\geq 1$ and $f :\mathcal{A}^r \longrightarrow \mathcal{B}$ a map (the local rule). We call a CA any map $F:\mathcal{A}^*\longrightarrow\mathcal{B}^*$ satisfying: $$ \left \{ \begin{array}{ll} F(w)=\varepsilon & \text{if $|w|< r$} \\ F(xyz)= f(xy)F(yz) & \text{if $x\in \mathcal{A}$, $|y|=r-1$, $z\in \mathcal{A}^{\infty}$.} \end{array} \right. $$ \end{definition} From this definition, we have the following remarks: \begin{Remarque} \label{rem-long} \begin{enumerate} \item The map $F$ is a surjection. 
\item For every finite word $w$, we have $|F(w)|=|w|-r+1$ if $|w|\geq r$. \item If $r=1$ then $F$ is a projection. \end{enumerate} \end{Remarque} \begin{Proposition}\label{fac-conserv} Let $\textbf{\emph{u}},\textbf{\emph{v}}\in \mathcal{A}^{\infty}$ and $F$ be a CA. Then, we have: \begin{enumerate} \item If $u_1\in L(\textbf{\emph{u}})$ then $F(u_1)\in L(F(\textbf{\emph{u}}))$. \item If $ L(\textbf{\emph{u}})\subset L(\textbf{\emph{v}})$ then $L(F(\textbf{\emph{u}}))\subset L(F(\textbf{\emph{v}}))$. \item $L(F(\textbf{\emph{u}}))= F(L(\textbf{\emph{u}}))$. \end{enumerate} \end{Proposition} \begin{theorem}\label{cor-inj} For any infinite word $\textbf{\emph{u}}$, we have $p_{F(\textbf{\emph{u}})}(n)\leq p_\textbf{\emph{u}}(n+r-1)$. Moreover, if $F$ is injective, then $p_{F(\textbf{\emph{u}})}(n)= p_\textbf{\emph{u}}(n+r-1)$. \end{theorem} \textbf{Proof:} \\ $\bullet$ By Remark \ref{rem-long}, any factor of $F(\textbf{u})$ of length $n$ comes from some factor of $\textbf{u}$ of length $n+r-1$. Moreover, as $F$ is surjective, we have $\# L_{n}(F(\textbf{u}))\leq \# L_{n+r-1}(\textbf{u})$. Hence, $p_{F(\textbf{u})}(n)\leq p_\textbf{u}(n+r-1)$.\\ $\bullet$ Now suppose that $F$ is injective. Then distinct factors of $\textbf{u}$ of length $n+r-1$ have distinct images, which are factors of $F(\textbf{u})$ of length $n$. Thus, we obtain $\# L_{n}(F(\textbf{u}))\geq \# L_{n+r-1}(\textbf{u})$. Hence, $p_{F(\textbf{u})}(n)= p_\textbf{u}(n+r-1)$. $ \square$ \begin{theorem}\label{mod} Let $\textbf{\emph{u}}$ be an infinite word. Then, $F(\textbf{\emph{u}})$ is modulo-recurrent if and only if $\textbf{\emph{u}}$ is modulo-recurrent. \end{theorem} \textbf{Proof:} Suppose that $\textbf{u}$ is modulo-recurrent. Let $w\in L(F(\textbf{u}))$. As $F$ is surjective, there exists $u_1\in L(\textbf{u})$ such that $w=F(u_1)$. In addition, the factors $u_1$ and $F(u_1)$ appear at the same positions in the words $\textbf{u}$ and $F(\textbf{u})$, respectively. 
Furthermore, $u_1$ appears in all positions $\mod i$ in $\textbf{u}$, for every $i\geq 1$. Hence, $F(u_1)$ also appears in all positions $\mod i$ in $F(\textbf{u})$; in particular, $F(u_1)$ appears in all positions $\mod |F(u_1)|$ in $F(\textbf{u})$. Therefore, $F(\textbf{u})$ is modulo-recurrent. Conversely, suppose that $F(\textbf{u})$ is modulo-recurrent. Let $u_1\in L_{n+r-1}(\textbf{u})$; then, by Proposition \ref{fac-conserv}, we have $F(u_1)\in L_n(F(\textbf{u}))$. The word $F(\textbf{u})$ being modulo-recurrent, the factor $F(u_1)$ appears in all positions modulo $n$ in $F(\textbf{u})$. As $u_1$ and $F(u_1)$ appear at the same positions in $\textbf{u}$ and $F(\textbf{u})$, respectively, $u_1$ appears in all positions modulo $n$ in $\textbf{u}$. It remains to show that $u_1$ appears at positions $i \mod \hspace{0.1cm} (n+r-1)$ with $i\in \lbrace n+1,\dots,n+r-1 \rbrace$. Since $F(\textbf{u})$ is in particular recurrent, there exists a factor $\delta\in L_{r-1}(F(\textbf{u}))$ such that $F(u_1)\delta \in L_{n+r-1}(F(\textbf{u}))$. Furthermore, $u_1$ and $F(u_1)\delta$ appear at the same positions in $\textbf{u}$ and $F(\textbf{u})$, respectively. Since $F(\textbf{u})$ is modulo-recurrent, $F(u_1)\delta$ appears in all positions modulo $n+r-1$ in $F(\textbf{u})$, and hence so does $u_1$ in $\textbf{u}$. Therefore, $\textbf{u}$ is modulo-recurrent. $\square$ \begin{Lemme} Let $\textbf{\emph{u}}$ be an infinite word. Then the following assertions hold. \begin{enumerate} \item If $\textbf{\emph{u}}$ is periodic then $F(\textbf{\emph{u}})$ is periodic. \item If $F$ is bijective and $F(\textbf{\emph{u}})$ is periodic then $\textbf{\emph{u}}$ is periodic. \end{enumerate} \end{Lemme} \textbf{Proof:} \begin{enumerate} \item Suppose that $\textbf{u}$ is periodic. Then, there exists a finite word $u_1$ such that $\textbf{u}=u_1^\omega.$ As a result, $$\textbf{u}=u_1^\omega\Longrightarrow F(\textbf{u})=v_{1}^\omega,$$ where $v_1= F(\text{Pref}_{|u_1|+r-1}( \textbf{u}))=\text{Pref}_{|u_1|}( F(\textbf{u}))$. Consequently, $F(\textbf{u})$ is periodic. 
\item Suppose that $F(\textbf{u})$ is periodic. Then, there exists a factor $v_{1}$ of $F(\textbf{u})$ such that $F(\textbf{u})=v_{1}^{\omega}$. Since $F$ is bijective, there exists a unique factor $w\in L_{\vert v_{1}\vert+r-1}(\textbf{u})$ such that $F(w)=v_1$. Setting $u_1=\text{Pref}_{\vert v_1\vert}(w)$, we get $\textbf{u}=u_1^{\omega}$. Hence, $\textbf{u}$ is periodic. \end{enumerate} $\square$ \begin{Remarque} If $\textbf{\emph{u}}$ and $F(\textbf{\emph{u}})$ are both periodic words, then they have the same period. \end{Remarque} \begin{theorem} Let $\textbf{\emph{u}}$ be a recurrent word and $F$ a CA such that: $$ \left \{ \begin{array}{ll} F(x_{1}y_{1})= F(x_{2}y_{2}) & \text{if } x_{1}= x_{2} \\ F(x_{1}y_{1})\neq F(x_{2}y_{2}) & \text{otherwise}, \end{array} \right.$$ where $x_{1}, x_{2}\in \mathcal{A}$ and $\ y_{1}, y_{2}\in \mathcal{A}^{r-1}$. Then, $F(\textbf{\emph{u}})$ is balanced if and only if $\textbf{\emph{u}}$ is balanced. \end{theorem} \textbf{Proof:} Suppose that $\textbf{u}$ is balanced and $F(\textbf{u})$ is non-balanced. The word $F(\textbf{u})$ being non-balanced, there exists a factor $v_1\in L(F(\textbf{u}))$ such that $x_1v_1x_1$, $x_2v_1x_2\in L(F(\textbf{u}))$. Since $F$ is surjective, there are two factors $u_1,\ u_2\in L(\textbf{u})$ such that $x_1v_1x_1=F(u_1)$ and $x_2v_1x_2=F(u_2)$. Thus, we can write $u_1=au_1'a\delta_1$ and $u_2=bu_1'b\delta_2$ with $\delta_1,\ \delta_2\in L_{r-1}(\textbf{u})$ and for some finite word $u'_{1}$. As a result, we have $au_1'a,\ bu_1'b\in L(\textbf{u})$. We get a contradiction because $\textbf{u}$ is balanced. Conversely, suppose that $F(\textbf{u})$ is balanced and $\textbf{u}$ non-balanced. As $\textbf{u}$ is non-balanced, there exists a factor $u_1\in L(\textbf{u})$ such that $au_1a,\ bu_1b\in L_n(\textbf{u})$, for some letters $a,\ b\in \mathcal{A}$. 
In addition, there are $\delta_1,\ \delta_2\in L_{r-1}(\textbf{u})$ such that $au_1a\delta_1,\ bu_1b\delta_2\in L_{n+r-1}(\textbf{u})$. Furthermore, $F(au_1a\delta_1)=x_1v_1x_1$ and $F(bu_1b\delta_2)=x_2v_1x_2$ are factors in $L(F(\textbf{u}))$. This contradicts our hypothesis. From all the above, we obtain the desired equivalence. $\square$ \begin{theorem}\label{spec} Let $\textbf{\emph{u}}$ be a recurrent word and $F$ a CA such that: $$ \left \{ \begin{array}{ll} F(x_{1}y_{1})= F(x_{2}y_{2}) & \text{if } x_{1}= x_{2} \\ F(x_{1}y_{1})\neq F(x_{2}y_{2}) & \text{otherwise}, \end{array} \right.$$ where $x_{1}, x_{2}\in \mathcal{A}$ and $\ y_{1}, y_{2}\in \mathcal{A}^{r-1}$. Then, any right (respectively, left) special factor of $F(\textbf{\emph{u}})$ comes from a right (respectively, left) special factor of $\textbf{\emph{u}}$. \end{theorem} \textbf{Proof:} Let $v_1$ be a right special factor of $F(\textbf{u})$. Then, we have $v_1x_1,v_1x_2\in L(F(\textbf{u}))$ with $x_{1},x_{2}\in\mathcal{A}$, $x_1\neq x_2$. Since $F$ is surjective, there exist $u'_1,\delta_1, \delta_2\in L(\textbf{u})$ such that $v_1x_1=F(u'_1a\delta_1)$ and $v_1x_2=F(u'_1b\delta_2)$ with $|\delta_1|=|\delta_2|=r-1$ and $a\neq b$. Hence, $u'_1a\delta_1, u'_1b\delta_2 \in L(\textbf{u})$; i.e., $u'_1a, u'_1b \in L(\textbf{u})$. Whence, $u'_1$ is right special in $\textbf{u}$.\\ Let $v_1$ be a left special factor of $F(\textbf{u})$. Then, we have $x_1v_1,x_2v_1\in L(F(\textbf{u}))$ with $x_{1},x_{2}\in\mathcal{A}$, $x_1\neq x_2$. As $F$ is surjective, there exists $u'_1\in L(\textbf{u})$ such that $x_1v_1=F(au'_1)$ and $x_2v_1=F(bu'_1)$ with $a,b\in\mathcal{A}$, $a\neq b$. Hence, $au'_1, bu'_1 \in L(\textbf{u})$. Consequently, $u'_1$ is left special in $\textbf{u}$. $\square$ \begin{Corollaire} Any bispecial factor of $F(\textbf{\emph{u}})$ comes from a bispecial factor of $\textbf{\emph{u}}$. 
\end{Corollaire} \section{Stable cellular automata (SCA)} In this section, we study a class of cellular automata that we call stable cellular automata (SCA). \begin{definition} A cellular automaton $F$ defined over $\mathcal{A}^r$ is invariant if for any infinite word $\textbf{u}$, we have $F(\textbf{u})=\textbf{u}$. \end{definition} The following proposition gives a characterization of invariant cellular automata. \begin{Proposition}\label{eq1} Let $F$ be a CA defined over $\mathcal{A}^r$. Then, the following assertions are equivalent. \begin{enumerate} \item $F$ is invariant. \item $F(xy)=x$, for all $x\in \mathcal{A}\ \text{and} \ y\in \mathcal{A}^{r-1}$. \end{enumerate} \end{Proposition} \textbf{Proof:} Let $\textbf{u}$ be an infinite word of the form $\textbf{u}=x_0x_1x_2\cdots$. Then, $$F(\textbf{u})=F(x_0x_1\cdots x_{r-1})F(x_1x_2\cdots x_{r})F(x_2x_3\cdots x_{r+1})\cdots.$$ As a result, we have the following equivalences. \begin{align*} F\ \text{is invariant}&\Longleftrightarrow F(\textbf{u})=\textbf{u}\\ &\Longleftrightarrow F(x_0x_1\cdots x_{r-1})F(x_1x_2\cdots x_{r})F(x_2x_3\cdots x_{r+1})\cdots=x_0x_1x_2x_3 \cdots\\ &\Longleftrightarrow F(x_ix_{i+1}\cdots x_{i+r-1})=x_i,\ \forall i\in \mathbb{N}\\ &\Longleftrightarrow F(xy)=x,\ \forall\ x\in \mathcal{A},\ y\in \mathcal{A}^{r-1}. \end{align*} $\square$ \begin{Lemme}\label{Lem-Ech} Let $F$ be an invariant CA over $\mathcal{A}_2^r$ and $E$ the exchange map on $\mathcal{A}_2$, i.e., $E(a)=b$ and $E(b)=a$, extended letter by letter to words. Then, $F \circ E=E \circ F$. \end{Lemme} \textbf{Proof:} Let $\textbf{u}$ be an infinite word over $\mathcal{A}_2$ and $u_1\in L(\textbf{u}) $. Then, we distinguish two cases:\\ \noindent\textbf{Case 1}: $|u_1|< r$. Then, we have $F(u_1)=\varepsilon$. As a result, $E (F(u_1))=\varepsilon$. In addition, $|E(u_1)|<r$. Thus, $F(E(u_1))=\varepsilon$. Hence, $F(E(u_1))=\varepsilon=E (F(u_1))$. \noindent\textbf{Case 2}: $|u_1|\geq r$. 
Let $w$ be a factor of length $r$ of $u_1$. Without loss of generality, assume that the first letter of $w$ is $a$. Then, by Proposition \ref{eq1}, we have $F(E(w))=b$. Furthermore, $F(w)=a$, i.e., $E(F(w))=b$. As a result, we obtain $F(E(w))=E(F(w))$. It follows that $F(E(u_1))=E(F(u_1))$.\\ In all cases, $F \circ E=E \circ F$. $\square$ \begin{definition} Let $F$ be a CA. Then, $F$ is said to be stable if $F(\overline{w})=F(w)$, for all $w\in \mathcal{A}^r$. \end{definition} \begin{Lemme}\label{Lem-stable} Let $\textbf{\emph{u}}$ be an infinite word and $F$ a SCA. Then, for all $u_1\in L(\textbf{\emph{u}})$, we have $F(\overline{u_1})=\overline{F(u_1)}$. \end{Lemme} \textbf{Proof:} Let $u_1\in L(\textbf{u})$. Then, we distinguish two cases. \\ \noindent\textbf{Case 1}: $|u_1|< r$. Then, $F(u_1)=F(\overline{u_1})=\varepsilon=\overline{F(u_1)}$. \noindent\textbf{Case 2}: $|u_1|\geq r$. Then, we have $u_1\in L_{n+r}(\textbf{u})$ for some $n\geq 0$, and $F(u_1)=F(w_0)\cdots F(w_n)$ where $w_i=\text{Pref}_r(S^i(u_1))$, for all $i\in \{0,\dots ,n\}$. In addition, \begin{align*} \overline{F(u_1)}&=\overline{F(w_0)F(w_1)\cdots F(w_n)}\\ &=\overline{F(w_n)}\cdots \overline{F(w_1)}\ \overline{F(w_0)}\\ &=F(w_n)\cdots F(w_1)F(w_0),\ \text{because}\ F(w_i)\in \mathcal{A}, \ \text{for all}\ i\in \{0,\dots,n\}. \end{align*} Furthermore, we have $F(\overline{ u}_1)=F(\overline{ w}_n)\cdots F(\overline{ w}_1)\ F(\overline{ w}_0)$. Since $F$ is stable, we have, for all $i\in \{0,\dots,n\}$, $F(w_i)=F(\overline{ w}_i)$ with $w_i\in L_r(\textbf{u})$. Thus, $F(\overline{u}_1)=F(w_n)\cdots F(w_1)F(w_0)$.\\ Hence, $\overline{F(u_1)}=F(\overline{u}_1)$. $\square$ \begin{theorem}\label{Theo-stable} Let $\textbf{\emph{u}}$ be an infinite word and $F$ a SCA. Then, $F(\textbf{\emph{u}})$ is stable by reflection if and only if $\textbf{\emph{u}}$ is. \end{theorem} \textbf{Proof:} Suppose that $\textbf{u}$ is stable by reflection. Let $u_1\in L_{n+r}(\textbf{u})$. 
Then, by Proposition \ref{fac-conserv}, we have $F(u_1)\in L_{n+1}(F(\textbf{u}))$. Write $F(u_1)=F(w_0)F(w_1)\cdots F(w_n)$ with $w_i=\text{Pref}_r(S^i(u_1))$, for all $i\in \{0,\dots,n\}$. Then, by Lemma \ref{Lem-stable}, $\overline{F(u_1)}=F(\overline{u}_1)$. But, $F(\overline{u}_1)\in L(F(\textbf{u}))$. Thus, $\overline{F(u_1)}\in L(F(\textbf{u}))$. Hence, $F(\textbf{u})$ is stable by reflection. Conversely, suppose that $F(\textbf{u})$ is stable by reflection. Let $v_1\in L_{n+1}(F(\textbf{u}))$; then, there exists $u_1\in L_{n+r}(\textbf{u})$ such that $v_1=F(u_1)$. As a result, $F(u_1)\in L_{n+1}(F(\textbf{u}))$ and $\overline{F(u_1)}\in L_{n+1}(F(\textbf{u}))$ because $F(\textbf{u})$ is stable by reflection. But, by Lemma \ref{Lem-stable}, $\overline{F(u_1)}=F(\overline{u}_1)$, i.e., $F(\overline{u}_1) \in L(F(\textbf{u}))$. Since $F$ is surjective, we have $\overline{u}_1\in L(\textbf{u})$. Hence, $\textbf{u}$ is stable by reflection. $\square$ \begin{Corollaire}\label{pal1} Let $\textbf{\emph{u}}$ be an infinite word stable by reflection and $F$ an injective SCA. Then, a factor $u_1$ of $\textbf{\emph{u}}$ is a palindrome if and only if $F(u_1)$ is. \end{Corollaire} \textbf{Proof:} Suppose that $u_1$ is a palindromic factor of $\textbf{u}$. By Lemma \ref{Lem-stable}, we have $F(\overline{u}_1)=\overline{F(u_1)}$. In addition, $F(\overline{u}_{1})=F(u_1)$, since $\overline{u}_1=u_1$. Hence, $\overline{F(u_1)}=F(u_1)$. Conversely, suppose that $\overline{F(u_1)}=F(u_1)$. As $F(\overline{u}_1)=\overline{F(u_1)}$ by Lemma \ref{Lem-stable}, we get $F(\overline{u}_1)=F(u_1)$. From the injectivity of $F$, we deduce that $\overline{u_1}=u_1$. $\square$ \begin{Corollaire}\label{pal2} Let $\textbf{\emph{u}}$ be an infinite word stable by reflection and $F$ an injective SCA. Then, for all integers $n$, we have: $$p^{al}_{F(\textbf{\emph{u}})}(n)=p^{al}_{\textbf{\emph{u}}}(n+r-1).$$ \end{Corollaire} \textbf{Proof:} Use Theorem \ref{cor-inj} and Corollary \ref{pal1}. 
$\square$ \begin{definition} Let $\textbf{u}\in \mathcal{A}^{\infty}$. We say that $\textbf{u}$ is rich if any factor $w$ of $\textbf{u}$ contains exactly $|w|+1$ distinct palindromic factors, including the empty word. \end{definition} The result below, from \cite{bucci}, characterizes the rich words stable by reflection. \begin{theorem}\label{Bucci} Let $\textbf{\emph{u}}$ be an infinite word whose set of factors is stable by reflection. Then, $\textbf{\emph{u}}$ is rich if and only if for all $n \in \mathbb{N}:$ $$p_\textbf{\emph{u}}^{al}(n)+p_\textbf{\emph{u}}^{al}(n+1)=p_{\textbf{\emph{u}}}(n+1)-p_\textbf{\emph{u}}(n)+2.$$ \end{theorem} \begin{theorem}\label{riche} Let $\textbf{\emph{u}}$ be an infinite word stable by reflection and $F$ an injective SCA. Then, $F(\textbf{\emph{u}})$ is rich if and only if $\textbf{\emph{u}}$ is. \end{theorem} \textbf{Proof:} As $F$ is injective and stable, we have, by Theorem \ref{cor-inj} and Corollary \ref{pal2} respectively, for all $n\in \mathbb{N}^*$, $p_{F(\textbf{u})}(n)=p_\textbf{u}(n+r-1)$ and $p^{al}_{F(\textbf{u})}(n)=p^{al}_\textbf{u}(n+r-1)$.\\ In addition, $p_{F(\textbf{u})}(n+1)-p_{F(\textbf{u})}(n)=p_\textbf{u}(n+r)-p_\textbf{u}(n+r-1)$. Moreover, $\textbf{u}$ being stable by reflection, by Theorem \ref{Bucci} we have: \begin{align*} \textbf{u}\ \text{is rich} \Longleftrightarrow p_\textbf{u}(n+r)-p_\textbf{u}(n+r-1)+2 &=p_\textbf{u}^{al}(n+r)+p_\textbf{u}^{al}(n+r-1)\\ &=p_{F(\textbf{u})}^{al}(n+1)+p_{F(\textbf{u})}^{al}(n),\ \text{by Corollary \ref{pal2}}. \end{align*} Consequently, $p_{F(\textbf{u})}(n+1)-p_{F(\textbf{u})}(n)+2=p_{F(\textbf{u})}^{al}(n+1)+p_{F(\textbf{u})}^{al}(n)$, i.e., $F(\textbf{u})$ is rich. $\square$ \section{Cellular automata and Sturmian words} In this section, we apply CA to Sturmian words. \begin{definition} Let $F$ be a CA. 
Then, $F$ is said to be Sturmian if the image by $F$ of any Sturmian word is also Sturmian.\\ Moreover, if $F(\textbf{u})=\textbf{u}$, then $\textbf{u}$ is said to be a fixed point of $F$. \end{definition} \begin{Exemple} Let $\textbf{\emph{u}}$ be a Sturmian word over $\mathcal{A}_2$. Then, for the CA defined by: $$ \left \{ \begin{array}{l} \text{for all}\ x\in \mathcal{A}_{2}\ \text{and}\ |y|=r-1, \\ H(xy)=x\\ G(xy)=E(x), \end{array} \right.$$ we have $H(\textbf{\emph{u}})=\textbf{\emph{u}}$ and $G(\textbf{\emph{u}})=E(H(\textbf{\emph{u}}))=H(E(\textbf{\emph{u}}))=E(\textbf{\emph{u}})$, which are respectively a fixed point of $H$ and a Sturmian word. \end{Exemple} Note that $H$ and $G$ are SCA. \subsection{Classical and window complexity} Let $\textbf{v}$ be a Sturmian word of the form $\textbf{v}=a^{l_0}ba^{l+\epsilon_1}ba^{l+\epsilon_2}ba^{l+\epsilon_3}b \dotsb$ given by Theorem \ref{stur-puis}. Let us consider the SCA $F$ defined over $\mathcal{A}^{l+1}$ by: $$F(w)= \left \{ \begin{array}{ll} a & \text{if $w=a^{l+1}$} \\ b & \text{otherwise.} \end{array} \right. $$ Then, we get: $$F(\textbf{v})=\left \{ \begin{array}{ll} ab^{l_0+1}a^{\epsilon_1}b^{l+1} a^{\epsilon_2}b^{l+1}a^{\epsilon_3}b^{l+1}a^{\epsilon_4}b^{l+1} \dotsb & \text{if $l_0=l+1$} \\ b^{l_0+1}a^{\epsilon_1}b^{l+1} a^{\epsilon_2}b^{l+1}a^{\epsilon_3}b^{l+1}a^{\epsilon_4}b^{l+1} \dotsb & \text{otherwise.} \end{array} \right. $$ In the following, we set $n_0=k_0(l+1)$, where $k_0$ is the maximum power of $a^{l+\epsilon_i}b$ in $\textbf{v}$. We study some combinatorial properties, as well as the classical and window complexity functions, of the word $F(\textbf{v})$ thus obtained. \begin{Proposition} For all $n\geq 0,\ p^f_{F(\textbf{\emph{v}})}(n)=p_{F(\textbf{\emph{v}})}(n).$ \end{Proposition} \textbf{Proof:} Since $\textbf{v}$ is modulo-recurrent, so is $F(\textbf{v})$, by Theorem \ref{mod}. 
Hence, $p^f_{F(\textbf{v})}(n)=p_{F(\textbf{v})}(n)$ by Theorem \ref{stur-mod}. $\square$ \begin{Lemme}\label{Lem1-auto-stur} Let $v_1$ be a factor of $F(\textbf{\emph{v}})$ such that $|v_1|>n_0$. Then, $v_1$ comes from a unique factor of $\textbf{\emph{v}}$. \end{Lemme} \textbf{Proof:} Note that any factor of $F(\textbf{v})$ containing the letter $a$ comes from only one factor of $\textbf{v}$. Indeed, $a$ has only one antecedent, which is $a^{l+1}$. But any factor of $F(\textbf{v})$ of length strictly greater than $n_0$ contains at least one occurrence of $a$. Hence, it necessarily comes from a unique antecedent in $\textbf{v}$. $\square$ \begin{theorem}\label{cc} The classical complexity function of the word $F(\textbf{\emph{v}})$ is given by: $$p_{F(\textbf{\emph{v}})}(n)= \left \{ \begin{array}{ll} n+1 & \text{if $n\leq n_0-l$} \\ 2n-n_0+l+1 & \text{if $n_0-l< n\leq n_0$} \\ n+l+1 & \text{if $n>n_0$.} \end{array} \right. $$ \end{theorem} \textbf{Proof:} Let us proceed by case distinction according to the length $n$ of the factors.\\ \noindent\textbf{Case 1}: $1\leq n\leq n_0-l$. Then, $L_n(F(\textbf{v}))=\left\{b^n,\ b^iab^{n-i-1} :\ i=0,\dots, n-1 \right\}.$ Therefore, $ p_{F(\textbf{v})}(n)=n+1$, for all $n\leq n_0-l$. Let us observe that for all $ n\leq n_0-l-1$, $b^n$ is the only right special factor of length $n$ of $F(\textbf{v})$. Similarly, the right special factors of length $n_0-l$ in $F(\textbf{v})$ are $b^{n_0-l}$ and $ab^{n_0-l-1}$. Hence, we obtain the following equalities: \begin{align*} p_{F(\textbf{v})}(n_0-l+1)&=p_{F(\textbf{v})}(n_0-l)+2\\ &=(n_0-l+1)+2\\ &=n_0-l+3. \end{align*} \noindent\textbf{Case 2}: $n_0-l+1\leq n\leq n_0$. Then: $$L_n(F(\textbf{v}))=\left\{b^n,\ b^iab^{n-i-1}, \ b^jab^{n_0-l-1}ab^{n-n_0+l-1-j} :\ i=0,1,\dots, n-1; \ 0\leq j \leq n-n_0+l-1 \right\}.$$ As a result, we get: \begin{align*} p_{F(\textbf{v})}(n)&=1+n+(n-n_0+l-1+1)\\ &=2n-n_0+l+1. 
\end{align*} \noindent\textbf{Case 3}: $n> n_0$. Then, any factor of $F(\textbf{v})$ of length $n$ comes from only one factor of length $n+r-1$ of $\textbf{v}$, by Lemma \ref{Lem1-auto-stur}. By applying Theorem \ref{cor-inj}, we obtain the following equalities: \begin{align*} p_{F(\textbf{v})}(n)&=p_\textbf{v}(n+r-1)\\ &=n+r\\ &=n+l+1. \end{align*} $\square$ \begin{Remarque} \begin{enumerate} \item The complexity function $p_{F(\textbf{\emph{v}})}$ is continuous over $\mathbb{N}$. \item The word $F(\textbf{\emph{v}})$ is a quasi-Sturmian word. \item The sets of return words for the letters $a$ and $b$ in $F(\textbf{\emph{v}})$ are respectively: $$\left\{ab^{n_0-l-1}, \ ab^{n_0} \right\}\ \text{and}\ \left\{b, \ ba \right\}.$$ \end{enumerate} \end{Remarque} \subsection{Palindromic properties} In this subsection, we study the palindromic complexity function and the richness of $F(\textbf{v})$. \begin{Lemme}\label{Lem2-auto-stur} Any palindromic factor of $F(\textbf{\emph{v}})$ of length greater than $n_0$ comes from a palindromic factor of $\textbf{\emph{v}}$. \end{Lemme} \textbf{Proof:} First, note that $F$ satisfies the conditions of Theorem \ref{Theo-stable}, i.e., $F(\textbf{v})$ is stable by reflection. Let $v_1$ be a palindromic factor of $F(\textbf{v})$ such that $|v_1|> n_0$. Then, by Lemma \ref{Lem1-auto-stur}, $v_1$ comes from only one factor $u_1$ of $\textbf{v}$. Thus, by Corollary \ref{pal1}, we deduce that $u_1$ is a palindrome. $\square$ \begin{theorem}\label{cp} The palindromic complexity function of the word $F(\textbf{\emph{v}})$ is given by: \begin{enumerate} \item If $n \leq n_0-l$, $$p^{al}_{F(\textbf{\emph{v}})}(n)= \left \{ \begin{array}{ll} 1 & \text{if $n$ is even} \\ 2 & \text{otherwise.} \end{array} \right. 
$$ \item If $n_0-l< n \leq n_0$, \begin{itemize} \item for $n_0$ even, we have: $ p^{al}_{F(\textbf{\emph{v}})}(n)= \left \{ \begin{array}{ll} 1 & \text{if $l$ and $n$ are even} \\ 2 & \text{if $l$ is odd}\\ 3 & \text{otherwise,} \end{array} \right. $ \item for $n_0$ odd, we have: $p^{al}_{F(\textbf{\emph{v}})}(n)=2.$ \end{itemize} \item If $n> n_0$, $$p^{al}_{F(\textbf{\emph{v}})}(n)= \left \{ \begin{array}{ll} 1 & \text{if $n+l$ is even} \\ 2 & \text{otherwise.} \end{array} \right. $$ \end{enumerate} \end{theorem} \textbf{Proof:} \begin{enumerate} \item If $ n \leq n_0-l$, then we have: $$L_n(F(\textbf{v}))=\left\{b^n,\ b^iab^{n-i-1} :\ i=0,1,\dots, n-1 \right\}.$$ Hence, the word $b^iab^{n-i-1}$ is a palindromic factor of length $n$ if and only if $i=n-i-1$, i.e., $n=2i+1$. As a result, we obtain: $$\text{Pal}_n(F(\textbf{v}))= \left \{ \begin{array}{ll} \left\{b^n \right\} & \text{if $n$ is even}\\ \left\{b^n, \hspace{0.1cm}b^{\frac{n-1}{2}} ab^{\frac{n-1}{2}} \right\} & \text{otherwise.} \end{array} \right. $$ \item If $n_0-l< n\leq n_0$ then: $$L_n(F(\textbf{v}))=\left\{b^n,\ b^iab^{n-i-1}, \ b^jab^{n_0-l-1}ab^{n-n_0+l-1-j} \ :\ i=0,1,\dots, n-1; \ 0\leq j \leq n-n_0+l-1 \right\}.$$ Hence, the word $b^jab^{n_0-l-1}ab^{n-n_0+l-1-j}$ is a palindromic factor of length $n$ of $F(\textbf{v})$ if and only if $j=n-n_0+l-1-j$. It follows that $n+l=2j+n_0+1$. Thus, let us now reason according to the parity of $n_0$ to determine when the word $b^jab^{n_0-l-1}ab^{n-n_0+l-1-j}$ is a palindromic factor of $F(\textbf{v})$: \begin{itemize} \item for $n_0$ even, as $n+l=2j+n_0+1$, we deduce that $l$ and $n$ have different parities. 
As a consequence, we obtain: $$\text{Pal}_n(F(\textbf{v}))= \left \{ \begin{array}{ll} \left\{b^n \right\} & \text{if $l$ and $n$ are even}\\ \left\{b^n,\hspace{0.1cm} b^{\frac{n-1}{2}} ab^{\frac{n-1}{2}} \right\} & \text{if $l$ and $n$ are odd}\\ \left\{b^n,\hspace{0.1cm} b^{\frac{n-n_0+l-1}{2}}ab^{n_0-l-1}ab^{\frac{n-n_0+l-1}{2}} \right\} & \text{if $l$ is odd and $n$ is even}\\ \left\{b^n,\hspace{0.1cm} b^{\frac{n-1}{2}} ab^{\frac{n-1}{2}},\hspace{0.1cm} b^{\frac{n-n_0+l-1}{2}}ab^{n_0-l-1}ab^{\frac{n-n_0+l-1}{2}} \right\} & \text{if $l$ is even and $n$ is odd.} \end{array} \right. $$ \item for $n_0$ odd, note that $l$ is even, since $n_0=k_0(l+1)$; as $n+l=2j+n_0+1$ must then be even, the word above is a palindromic factor if and only if $n$ is even. Thus: $$\text{Pal}_n(F(\textbf{v}))= \left \{ \begin{array}{ll} \left\{b^n,\hspace{0.1cm} b^{\frac{n-1}{2}} ab^{\frac{n-1}{2}} \right\} & \text{if $n$ is odd}\\ \left\{b^n,\hspace{0.1cm} b^{\frac{n-n_0+l-1}{2}}ab^{n_0-l-1}ab^{\frac{n-n_0+l-1}{2}} \right\} & \text{if $n$ is even.} \end{array} \right. $$ \end{itemize} \item If $n> n_0$, then firstly, it is known that any factor of length $n$ of $F(\textbf{v})$ comes from a factor of length $n+l$ of $\textbf{v}$. Secondly, by Lemma \ref{Lem2-auto-stur}, any palindromic factor of $F(\textbf{v})$ of length $n> n_0$ comes from only one palindromic factor of $\textbf{v}$. In addition, by applying Theorem \ref{theo-stur}, we deduce that $F(\textbf{v})$ has exactly one palindromic factor of length $n$ if $n+l$ is even, and two otherwise. \end{enumerate} $\square$ \begin{Corollaire} The word $F(\textbf{\emph{v}})$ is rich. 
\end{Corollaire} \textbf{Proof:} The proof follows from Theorem \ref{riche}. $\square$ \subsection{Abelian complexity function} In this subsection, we determine the balance property, the Parikh vectors, and the abelian complexity function of $F(\textbf{v})$. \begin{Proposition}$\cite{BKT-p}$\label{pab-binaire} Let $\textbf{\emph{u}}$ be an infinite $\beta$-balanced word over $\left\{a,b\right\}$. Then, for all integers $n$, we have: $$\rho^{ab}_\textbf{\emph{u}}(n)\leq \beta+1.$$ \end{Proposition} \begin{Lemme}\label{eq} The word $F(\textbf{\emph{v}})$ is $2$-balanced. \end{Lemme} \textbf{Proof:} Note that for all factors $u_1$ and $u_2$ of $\textbf{v}$, we have: $\hspace{2cm}|u_1|=|u_2|\hspace{0.2cm} \Longrightarrow \hspace{0.2cm} ||u_1|_{a^{l+1}}-|u_2|_{a^{l+1}}|\leq 2$ $\hspace{0.2cm}$ because $\textbf{v}$ is Sturmian. Let $v_1, v_2\in L_n(F(\textbf{v}))$; then there are two factors $u_1, u_2\in L_{n+l}(\textbf{v})$ such that $v_1=F(u_1)$ and $v_2=F(u_2)$. As $|u_1|=|u_2|$, we have $||u_1|_{a^{l+1}}-|u_2|_{a^{l+1}}|\leq 2$. This implies that $||F(u_1)|_a-|F(u_2)|_a|\leq 2$, i.e., $||v_1|_a-|v_2|_a|\leq 2$. Moreover, $b^{n_0-l+1},\ ab^{n_0-l-1}a,\ b^{n_0}ab^{n_0-l+1}$ and $ab^{n_0}ab^{n_0-l-1}a$ are factors of $F(\textbf{v})$, so the bound $2$ is attained. Hence, $F(\textbf{v})$ is $2$-balanced. $\square$ \begin{theorem}\label{ca} The abelian complexity function of $F(\textbf{\emph{v}})$ is given for all $n\in \mathbb{N}^*$ by: \begin{enumerate} \item For $n \leq n_0-l$, $\rho^{ab}_{F(\textbf{\emph{v}})}(n)=2$. \item For $n_0-l+1\leq n\leq n_0$, $\rho^{ab}_{F(\textbf{\emph{v}})}(n)=3$. \item For $n> n_0 $, $\rho^{ab}_{F(\textbf{\emph{v}})}(n)\in \left\{2,3 \right\}$. \end{enumerate} \end{theorem} \textbf{Proof:} We distinguish the following cases according to the length $n$ of factors. \noindent\textbf{Case 1}: $1\leq n \leq n_0-l$.
Then, since $L_n(F(\textbf{v}))=\left\{b^n,\ b^iab^{n-i-1} \ :\ i=0,1,\dots, n-1 \right\}$, we obtain $$\chi_n(F(\textbf{v}))=\left\{(0,n), (1,n-1)\right\}.$$ Hence, $\rho^{ab}_{F(\textbf{v})}(n)= 2$. \noindent\textbf{Case 2}: $n_0-l+1\leq n\leq n_0$. Then, we have: $$L_n(F(\textbf{v}))=\{b^n,\ b^iab^{n-i-1},\ b^jab^{n_0-l-1}ab^{n-n_0+l-1-j} \ :\ i=0,\dots, n-1; \ 0\leq j \leq n-n_0+l-1\}.$$ It follows that $\chi_n(F(\textbf{v}))=\left\{(0,n),\ (1,n-1),\ (2,n-2)\right\}.$ Consequently, $\rho^{ab}_{F(\textbf{v})}(n)= 3$. \noindent\textbf{Case 3}: Let us consider $n> n_0$. Then, by Theorem \ref{cc}, the classical complexity function of $F(\textbf{v})$ is unbounded. Therefore, $F(\textbf{v})$ is not ultimately periodic. Thus, $\rho^{ab}_{F(\textbf{v})}(n)\geq 2$. Moreover, by Lemma \ref{eq}, the word $F(\textbf{v})$ is $2$-balanced. In addition, $F(\textbf{v})$ being a binary word, we deduce by Proposition \ref{pab-binaire} that $\rho^{ab}_{F(\textbf{v})}(n)\leq 3$. Hence, $\rho^{ab}_{F(\textbf{v})}(n)\in \left\{2,\hspace{0.1cm} 3 \right\}$. $\square$ \begin{Remarque} The sequence $(\rho^{ab}_{F(\textbf{\emph{v}})}(n))_{n\in \mathbb{N}}$ is not ultimately periodic. \end{Remarque} \end{document}
\begin{document} \title{Quantum signal processing with continuous variables} \author{Zane M.\ Rossi} \affiliation{ Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA} \affiliation{ Physics and Informatics Laboratory, \mbox{NTT Research,~Inc.,} 940 Stewart Dr., Sunnyvale, California, 94085, USA} \affiliation{ NTT Basic Research Laboratories and Research Center for Theoretical Quantum Physics, 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan} \author{Victor M.\ Bastidas}\affiliation{ NTT Basic Research Laboratories and Research Center for Theoretical Quantum Physics, 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan} \author{William J.\ Munro}\affiliation{ NTT Basic Research Laboratories and Research Center for Theoretical Quantum Physics, 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan} \author{Isaac L.\ Chuang}\affiliation{ Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA} \date{\today} \begin{abstract} \noindent Quantum singular value transformation (QSVT) enables the application of polynomial functions to the singular values of near arbitrary linear operators embedded in unitary transforms, and has been used to unify, simplify, and improve most quantum algorithms. QSVT depends on precise results in representation theory, with the desired polynomial functions acting simultaneously within invariant two-dimensional subspaces of a larger Hilbert space. These two-dimensional transformations are largely determined by the related theory of quantum signal processing (QSP). While QSP appears to rely on properties specific to the compact Lie group SU(2), many other Lie groups appear naturally in physical systems relevant to quantum information. 
This work considers settings in which SU(1,1) describes system dynamics and finds that, surprisingly, despite the non-compactness of SU(1,1), one can recover a QSP-type ansatz, and show its ability to approximate near arbitrary polynomial transformations. We discuss various experimental uses of this construction, as well as prospects for expanded relevance of QSP-like ansätze to other Lie groups. \end{abstract} \maketitle \section{Introduction} \noindent In quantum computing the Lie group associated with the evolution of a single qubit, SU(2), has received the majority of attention and study; this group and its related algebra permit basic intuition for the character of certain quantum computations, mainly through the ubiquitously applied surjective homomorphism from SU(2) to SO(3), visualized via the Bloch sphere. In some quantum algorithms, e.g., Grover search \cite{grover_05, hoyer_00}, the evolution of multiple-qubit systems can be simplified to two-dimensional transformations, by which the pleasant properties of SU(2) are recovered. The algorithmic techniques at the center of this work, quantum signal processing (QSP) \cite{lyc_16_equiangular_gates, lc_17_simultation, lc_19_qubitization} and its lifted version quantum singular value transformation (QSVT) \cite{gslw_19}, permit similar SU(2)-derived intuition. QSP and QSVT have seen success in unifying, simplifying, and improving most known quantum algorithms \cite{mrtc_21}, in turn showcasing that basic properties of SU(2), when properly understood and applied, are sufficient to capture unexpectedly sophisticated algorithmic behavior.
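The SU(2)-to-SO(3) homomorphism invoked above can be checked numerically via the standard adjoint-representation formula $R_{ij} = \tfrac{1}{2}\mathrm{tr}(\sigma_i U \sigma_j U^\dagger)$. The following is a minimal sketch (the helper name and the particular choice of $U$ are ours, purely for illustration):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

def su2_to_so3(U):
    """Adjoint representation: R_ij = (1/2) tr(sigma_i U sigma_j U^dagger)."""
    R = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            R[i, j] = 0.5 * np.real(np.trace(paulis[i] @ U @ paulis[j] @ U.conj().T))
    return R

# A generic SU(2) element: rotation by angle theta about an axis in the x-y plane
theta, phi = 0.7, 1.3
U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * (np.cos(phi) * X + np.sin(phi) * Y)

R = su2_to_so3(U)
assert np.allclose(R @ R.T, np.eye(3))    # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)  # det +1, so R lies in SO(3)
assert np.allclose(su2_to_so3(-U), R)     # U and -U give the same rotation: 2-to-1
```

The last assertion makes the double covering concrete: the map is surjective onto SO(3) but two-to-one on SU(2).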
These algorithms, by use of a simple alternating circuit ansatz, permit one to modify the singular values of near arbitrary linear operators by polynomial functions; while abstract, this fundamental linear algebraic manipulation subsumes algorithms for Hamiltonian simulation \cite{coherent_ham_sim_21}, phase estimation \cite{rall_21}, quantum-inspired machine learning algorithms \cite{chia_20}, semi-definite programming \cite{q_sdp_solvers_20}, adiabatic methods \cite{lin_eig_filter_20}, computation of approximate correlation functions \cite{rall_correlation_20}, computation of approximate fidelity \cite{gilyen_fidelity_22}, recovery maps \cite{petz_recovery_20}, metrology \cite{dgn_qsp_metrology_22}, and fast inversion of linear systems \cite{tong_inversion_21}. The theory of QSP has roots in the study of composite pulse techniques for NMR \cite{wimperis_bb1_94, ylc_14, lyc_16_equiangular_gates} but was first named for use in Hamiltonian simulation \cite{lc_17_simultation, lc_19_qubitization}, in which the single-qubit nature of QSP was suitably lifted to apply to systems of multiple qubits by a technique known as qubitization. This idea was greatly expanded to cover the manipulation of general, non-normal linear operators and termed QSVT \cite{gslw_19}, by which one can achieve QSP-like manipulation of invariant SU(2) subspaces preserved by alternating projectors according to Jordan's lemma \cite{jordan_75}. Recently this argument has been even further simplified in relation to the cosine-sine decomposition \cite{cs_qsvt_tang_tian}. Parallel to this development, experimental work in quantum optics has long considered basic interferometric operations, whose action on optical modes is also describable by SU(2). In modifying these passive interferometric devices to actively driven ones, one can move from an SU(2) description to one defined by the related but non-compact Lie group SU(1,1).
Such devices have the upshot of enabling improved sensitivity for a variety of interferometric measurements, as well as greatly simplified experimental apparatuses \cite{yurke_su2_su11_86}. This apparently simple change in the defining algebra has deep experimental and theoretical implications, and correspondingly the general analysis of composite systems of SU(1,1) transformations is difficult \cite{ou_li_su11_review_20, su_mode_engineering_19} beyond low-gain regimes. In light of this, it is worthwhile to determine whether QSP and QSVT, which both (1) simply subsume a large number of quantum algorithmic techniques and (2) rely strongly on basic properties of SU(2), can be suitably modified to apply to and analogously simplify the analysis of composite systems of similar SU(1,1) interactions. While this extension may appear mathematically mild, the loss of the compactness of SU(2), as well as movement from finite-dimensional unitary operations to those involving continuous variables, is significant. In bridging the gap from SU(2) to SU(1,1) for QSP-like algorithms, this work takes first steps toward addressing the significant theoretical challenges and correspondingly curious insights within the application of QSP and QSVT to continuous variable quantum computation. This work is somewhat intended for an audience already familiar with the basic structure and theorems of standard QSP and QSVT. However, we reproduce some salient theorems from previous works in Appendix~\ref{appendix:basics_qsp_qsvt}, and discuss and reproduce proof techniques where appropriate. As this work concerns a modification to some of the low-level tenets of these algorithms, we try to phrase our constructions in terms of what they preserve from previous work, and what they break (and thus force us to recover and re-prove). In many senses the QSP and QSVT ansätze are fragile, and modifying them to describe new contexts introduces subtleties, various no-go results, and a few exciting insights.
QSP relies strongly on familiar aspects of SU(2); its constituent gates are rotations on the Bloch sphere, and its action is, despite application to multiple-qubit settings, summarized by the evolution of a single qubit. For the remainder of this work we take $X, Y, Z$ as the Pauli matrices (one representation of the generators of $\mathfrak{su}(2)$) with the common form \begin{equation} X = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}, \quad Y = \begin{bmatrix} 0 & -i\\ i & 0 \end{bmatrix}, \quad Z = \begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix}. \end{equation} QSP protocols can be visualized as walks on the Bloch sphere, where the distance per step is defined by the unknown signal $x = \cos{\theta}$, and the direction of each step by the known and chosen $\phi_k \in \mathbb{R}$ for $k \in [n]$ where $n \in \mathbb{N}$ is the length of the QSP protocol. The power of the theory of QSP lies in the fact that the action of this walk can be precisely controlled such that the same choice of direction for each step results, when the step length is changed, in drastically different behavior. This is depicted in Fig.~\ref{fig:qsp_as_walk}, with more explicit description of these protocols in Fig.~\ref{fig:qsp_generalized_circuit}. \begin{figure} \caption{Conceptual depictions of the actions of (a) standard QSP and (b) SU(1,1) QSP according to the natural surjections (a) SU(2) to SO(3) and (b) SU(1,1) to SO(2, 1). I.e., elements of each group can be recast as preserving the unit sphere or an (infinitely extending) hyperbola respectively. QSP, for a given argument $x$, is a walk on the Bloch sphere with fixed step angle $\theta = \cos^{-1}{x}$ and step direction $\phi_k$ the $k$-th QSP phase. For SU(1,1) QSP, this walk can be seen on a hyperbola, with generalized step angle $\beta = \cosh^{-1}{x}$; that the manifold for (b) is not compact is a major source of the difference in the corresponding theories, and the reason for the modified ranges of the maps from $x$ to $\theta, \beta$.
In both cases walk steps are along geodesics.} \label{fig:qsp_as_walk} \end{figure} The surprising aspect of the theory of QSP is that one can quickly and classically compute the proper $\phi_k$ such that, for walks of a given step size $x$, said \emph{fixed set of $\phi_k$} achieves a near arbitrary desired end point for said walk. In other words, QSP permits immense control over functions of the form \begin{equation} \text{QSP} : \mathbb{R}^{n+1} \,\rightarrow\, (F : [-1, 1] \,\rightarrow\, \text{SU(2)}). \end{equation} Here we mean that QSP is a map from an ordered list of real numbers (the QSP phases $\Phi$) to a function $F$ from the interval to the Lie group SU(2). This basic type signature exemplifies QSP's utility as a generator of \emph{superoperators}. The ansatz takes in a set of parameters and returns a function taking a scalar (encoded as a rotation) to an element of the compact Lie group SU(2) in a tunable way. In fact, one can prove the following result summarizing the expressivity of the QSP ansatz. \begin{theorem}[Expressivity of the QSP ansatz] \label{thm:qsp_ansatz_expressivity} The set of functions achieved by the set of QSP protocols of finite length is dense in the set of definite-parity piecewise-continuous functions with the form $[-1, 1] \rightarrow SU(2)$, up to an ambiguity (a rotation about a known, fixed axis on the Bloch sphere) parameterized by a single function with the form $R: [-1, 1] \rightarrow U(1)$. Additionally, the rate of uniform convergence to a desired function $G: [-1, 1] \rightarrow SU(2)$ up to this ambiguity is inverse polynomial in protocol length in the worst case, and inverse exponential under certain common assumptions of smoothness of the desired functional form. \end{theorem} We claim that the path from QSP to its lifted counterpart, quantum singular value transformation (QSVT), is simple enough to preserve much of the single-qubit character of standard QSP.
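The simplest instance of this tunable behavior is the all-zero phase choice: the walk then composes $n$ identical rotations, and the top-left matrix element of the product is the Chebyshev polynomial $T_n(x) = \cos(n\theta)$. The following is a minimal numerical sketch (the iterate matches the form $W_\phi(\theta)$ given later in the text; the helper names are ours):

```python
import numpy as np

def W(theta, phi):
    """Phased iterate of standard QSP, in the form used in this paper."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, np.exp(1j * phi) * s],
                     [-np.exp(-1j * phi) * s, c]])

def qsp_unitary(phis, theta):
    """Interleaved product of phased iterates for a given phase list."""
    U = np.eye(2, dtype=complex)
    for phi in phis:
        U = U @ W(theta, phi)
    return U

# All phases zero: n identical rotations compose, and the top-left matrix
# element is cos(n*theta) = T_n(x), the Chebyshev polynomial, with x = cos(theta)
n, x = 5, 0.3
theta = np.arccos(x)
U = qsp_unitary([0.0] * n, theta)
assert np.isclose(U[0, 0].real, np.cos(n * theta))
assert np.isclose(np.cos(n * theta), np.polynomial.chebyshev.Chebyshev.basis(n)(x))
```

Nontrivial phase lists reshape this polynomial while keeping the same circuit skeleton, which is precisely the control the theorem above quantifies.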
In QSVT, the geometric intuition of QSP is preserved and lifted almost solely due to a classic result of Jordan \cite{jordan_75} (with modern proofs in \cite{regev_06, cs_qsvt_tang_tian}). This lemma states that products of two reflections necessarily preserve one- and two-dimensional subspaces, which in the context of QSVT means that circuits can be designed which implicitly perform QSP-like SU(2) unitary operations within such subspaces. These unitary operations have a natural interpretation as modifying the singular values of block-encoded linear operators by precisely the same polynomial functions achievable with standard QSP. In this case, the encoded linear operators can be thought of as mappings between sets of left and right singular vectors, where one is assumed to have easy access to projectors onto the \emph{span} of these two sets respectively. While QSVT has enabled a qualified unification of quantum algorithms \cite{mrtc_21}, it could also be seen as indicating a scarcity of diverse quantum algorithmic techniques. Consequently, guided by the success of QSVT, probing related results in functional analysis and representation theory has the potential to generate novel quantum algorithms. Indeed, this work shows that even basic modifications to the QSP ansatz can induce large changes in the character of the resulting algorithm. Here the change specifically concerns the natural Lie group for elements of the ansatz. We can now pose an informal problem statement for this work, relating to the two Lie groups SU(2) and SU(1,1). It is known that quantum signal processing (QSP) and quantum singular value transformation (QSVT) rely strongly on properties of SU(2), the former consisting entirely of interleaved products of elements of this group. Moreover, SU(2) and SU(1,1) are known to be quite similar algebraically (in terms of their generators) despite appearing in substantively different physical contexts, and despite the latter being non-compact.
Together, these two observations point toward the following problem statement. \begin{problem}[Informal problem statement] Do techniques similar to those used in the theory of QSP permit one to \emph{usefully characterize} interleaved products of elements of SU(1,1)? \end{problem} While we claim an answer of `yes' to this problem statement, how we have stated it brings up a few important ambiguities. The first is that we desire a \emph{useful characterization} of such products. In standard QSP this characterization manifests in showing which polynomials in an unknown parameter for an oracle unitary can be embedded as matrix elements. In our setting we slightly modify this condition, asking not only about the description of analogous polynomials, but their density in other functional spaces, as well as their use in approximating arbitrary desired functions. Moreover, as SU(1,1) lacks finite-dimensional unitary representations, we will have to do some lifting in resituating what we mean by embedding a polynomial transform; this is the purview of Sec.~\ref{sec:analyzing_qsp_ansatz}. Secondly, in this new setting we may not be bound to interleaved products of the same form as in QSP, or with the same physical interpretation; defining our ansatz, and discussing its physical reasonableness, is the purview of Sec.~\ref{sec:qsp_ansatz}. Together, these two sections give a concrete, formal setup and analysis of this movement of the theory of QSP from SU(2) to SU(1,1). We stress that modifying QSP to encompass SU(1,1) dynamics is not a superficial translation; while the linear operators describing such interleaved products may appear similar, the non-compactness of SU(1,1) puts into question a vital but overlooked result in standard QSP. I.e., most results in the theory of functional approximation and its famous cousin Fourier analysis rely on natural bases of functions which are either (1) periodic or (2) are square integrable.
Critically, the SU(1,1) analogue of standard QSP sacrifices both of these traits along with its compactness, for fundamental reasons in the theory of Lie groups \cite{wigner_39}. Thus to recover a useful theory of SU(1,1) QSP, we need to provide a bridge from the achievable polynomials with this ansatz to the approximable functions. Spanning this gap (which did not exist in standard QSP) is the heart of this paper, and the focus of Sec.~\ref{sec:analyzing_qsp_ansatz}. \subsection{Prior work} \label{sec:prior_work} Given the current understanding of QSVT as a quantum algorithm, room for research has fallen along two main axes. Along the first, one can translate classical or quantum algorithmic tasks to the language of QSVT, and prove or disprove (possibly under careful assumptions) the existence of efficient block-encodings (ways to load linear operators into the QSVT ansatz for processing) and polynomial transforms required to solve given algorithmic tasks. This axis has experienced great success \cite{coherent_ham_sim_21, rall_21, lombardi_pqzk_2021, chia_20, q_sdp_solvers_20, lin_eig_filter_20, rall_correlation_20, gilyen_fidelity_22, petz_recovery_20, dgn_qsp_metrology_22, tong_inversion_21} and enabled improvements in query complexity lower bounds for a variety of famed quantum algorithms. The second axis involves taking the basic mathematical tenets which enabled the success of the theory of QSVT, and \emph{augmenting or deepening them} to apply to contexts not previously amenable to QSVT. While this latter work remains more exploratory \cite{rc_22, sym_qsp_21, dlnw_infinite_22, tltc_alec_23, rycs_noisy_22}, it has demonstrated significant creative potential for QSP- and QSVT-inspired methods for substantively different circuit ansätze. One such unexplored generalization concerns transporting QSP and QSVT to algebraically distinct settings.
To map algorithmic problems to functional analytic ones, QSP relies strongly on special properties of SU(2), most notably the simple relations between its generators, and the compactness of the Lie group. What remains unclear, however, is whether this map can survive the jump to more exotic (but still physically reasonable) Lie groups and algebras. While not extensively discussed in this paper, the guiding motivation for this work rests on the dual appearance of both SU(2) and SU(1,1) in photonic systems \cite{yurke_su2_su11_86, ou_li_su11_review_20}. There exist multiple works examining the serial application of elements from both of these Lie groups, showing, albeit only non-analytically or perturbatively, that such protocols permit the useful manipulation of continuous variable quantum information \cite{su_mode_engineering_19, cui_mode_engineering_20}. This work re-examines these settings, showing that many of the strong statements possible in QSP due to the simplicity of the underlying Lie algebra, can be ported to similar (but notably non-compact) algebras, and that moreover the resulting protocols have reasonable physical interpretation and a concise analytic characterization. As far as the authors are aware, this is the first work describing such cascaded SU(1,1) interactions in the large-amplification regime, as well as the first application of QSP-like methods to non-compact Lie groups. \subsection{Outline of results} \label{sec:summary_of_results} This work is broken into three major sections. The first, in Sec.~\ref{sec:qsp_ansatz}, discusses the explicit form of an SU(1,1) analogue to QSP, providing a series of supporting definitions and lemmas from the theory of Lie groups and algebras. The second major portion discusses the expressive power of this ansatz: in other words, Sec.~\ref{sec:analyzing_qsp_ansatz} discusses the ability of this ansatz to embed polynomial transforms of its matrix elements.
This section calls on further results in the theory of Lie groups and generalized Fourier analysis, and provides compact proofs where possible, toward an explicit description of the family of achievable functional transforms. The flavor of the results of this work is summarized by the following informal statements: \begin{enumerate}[label=(\arabic*)] \item The achievable polynomials in SU(1,1) QSP are dense in the space of continuous functions which (a) have definite parity and (b) are bounded from below by one. \item However, the length of an SU(1,1) QSP protocol which uniformly approximates a desired continuous function on a \emph{finite} interval in general must grow exponentially in the size of the interval of uniform approximation. \item There exist simple, concrete parameterizations of SU(1,1) QSP of length $n$ which achieve, for signals in a finite interval independent of $n$, polynomial transforms whose magnitude is bounded above by a function independent of $n$. \end{enumerate} The above statements are necessarily qualified by specific statements on the domain and form of the considered functions, which differ from those of standard QSP, and for which we refer the interested reader to the main text. Thirdly, as mentioned, we look at three explicit examples of parameterizations of the SU(1,1) QSP ansatz in Sec.~\ref{sec:worked_numerical_examples}, including those which achieve Chebyshev polynomials, monomials, and generalized bandpass functions. We depict and analytically investigate the behavior of these protocols. Finally, in Sec.~\ref{sec:discussion_conclusion}, we discuss the outlook for SU(1,1) QSP, its limitations, and paths toward the extension of techniques in QSP to further settings in continuous variable quantum computation. Algebraically involved proofs and auxiliary but important results in the theory of Lie groups and generalized Fourier series are relegated to the appendices.
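As a numerical preview of the Chebyshev example mentioned above, the all-zero-phase instance of the SU(1,1) ansatz (built from the phased boosts constructed in the next section) composes $n$ boosts into a single boost, so its top-left matrix element is $\cosh(n\beta) = T_n(\cosh\beta)$, while any phase choice preserves the metric $P = \mathrm{diag}(1,-1)$. A hedged sketch (helper names are ours):

```python
import numpy as np

def V(beta, phi):
    """Phased boost: the basic SU(1,1) element of the ansatz below."""
    ch, sh = np.cosh(beta), np.sinh(beta)
    return np.array([[ch, np.exp(1j * phi) * sh],
                     [np.exp(-1j * phi) * sh, ch]])

def su11_qsp(phis, beta):
    """Interleaved product of phased boosts for a given phase list."""
    S = np.eye(2, dtype=complex)
    for phi in phis:
        S = S @ V(beta, phi)
    return S

n, beta = 5, 0.4
x = np.cosh(beta)

# All-zero phases: rapidities add, so the top-left entry is
# cosh(n*beta) = T_n(x), the Chebyshev polynomial of x = cosh(beta)
S = su11_qsp([0.0] * n, beta)
assert np.isclose(S[0, 0].real, np.cosh(n * beta))
assert np.isclose(np.cosh(n * beta), np.polynomial.chebyshev.Chebyshev.basis(n)(x))

# Generic phases: the product still satisfies S^dag P S = P with P = diag(1, -1),
# i.e., it remains an element of SU(1,1)
rng = np.random.default_rng(0)
S = su11_qsp(rng.uniform(0, 2 * np.pi, size=n + 1), beta)
P_metric = np.diag([1.0, -1.0])
assert np.allclose(S.conj().T @ P_metric @ S, P_metric)
```

Note how $T_n(\cosh\beta)$ grows exponentially in $\beta$, consistent with statement (2) above on the cost of uniform approximation over large intervals.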
\section{Constructing the SU(1,1) QSP ansatz} \label{sec:qsp_ansatz} This section serves two purposes. The first is to provide a brief and self-contained introduction to relevant concepts in Lie groups and algebras (any involved proofs are again relegated to the appendices), limited to discussion and manipulation of the properties of SU(2) and SU(1,1) specifically. We also use this section, following the brief review of algebraic concepts, to introduce the main problem statement of this work: an SU(1,1) analogue of standard QSP. We use this ansatz to more concretely discuss how these Lie groups can appear in the context of quantum information, providing a simple example to motivate the proposed modified QSP ansatz. \subsection{Lie groups and algebras} We give minimal presentations of SU(2) and SU(1,1) as Lie groups, as well as their common definitions in terms of Lie algebras. We discuss common properties of these groups toward their use in quantum computing protocols, and use these connections to present a clear analogy to the theory of QSP before discussing how differences between constructions of QSP relying on SU(2) and SU(1,1) relate to differences in physical implementation. Unless otherwise noted, common definitions and statements can be found in any introductory textbook on compact Lie groups \cite{fegan_lie_groups_1991}. \begin{definition}[Lie group] \label{def:lie_group} Lie groups are both groups and differentiable manifolds. That is, they locally resemble Euclidean space, and multiplication by group elements and inverses is smooth. In other words, the required binary multiplication operation $\mu$ for a Lie group $G$ is such that \begin{equation} \mu :: G\times G \mapsto G, \quad \mu(g, h) = gh, \end{equation} is a smooth mapping of the product manifold $G\times G$ to $G$. With this comes the assumption of a well-defined derivative and the weaker condition of continuity.
\end{definition} \begin{definition}[Lie algebra] \label{def:lie_algebra} Lie groups give rise to Lie algebras, whose formal definition is in terms of the tangent space at the identity of the Lie group. Correspondingly, finite-dimensional Lie algebras correspond to connected Lie groups uniquely up to finite covering (if we choose for the Lie group to be simply connected, this correspondence is indeed unique). Formally, Lie algebras are vector spaces $\mathfrak{g}$ over a field $F$ with a bilinear alternating map $[\ast, \ast]$ (the Lie bracket) that satisfies the Jacobi identity. \begin{align} &[\ast, \ast] :: \mathfrak{g}\times\mathfrak{g} \mapsto \mathfrak{g},\\ &[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0. \end{align} \end{definition} For our purposes we consider finite-dimensional Lie algebras whose Lie bracket is the commutator, and whose representations as linear operators are particularly simple. By the uniqueness of the map between a Lie algebra and its induced Lie group up to questions of covering, the physical definitions of SU(2) and SU(1,1) are often given in terms of the simpler, discrete list of elements generating the corresponding algebra (i.e., acting as a basis for the vector space indicated in Def.~\ref{def:lie_algebra}). \begin{definition}[SU(2)] \label{def:su2} The compact Lie group SU(2) can be defined by its generating algebra $\mathfrak{su}(2)$. This algebra is often given the following presentation: \begin{align} [J_x, J_y] &= iJ_z,\nonumber\\ [J_y, J_z] &= iJ_x,\nonumber\\ [J_z, J_x] &= iJ_y,\label{eq:su2_commutation} \end{align} where we conflate $J_x, J_y, J_z$ with $X/2, Y/2, Z/2$, halves of the Pauli matrices, so that the stated commutation relations hold exactly. It is not difficult to identify the generated Lie group with the sphere (and indeed the double covering of SO(3) by SU(2) is often used in quantum computation). \end{definition} \begin{definition}[SU(1,1)] \label{def:su11} The non-compact Lie group SU(1,1) can also be defined by its generating algebra $\mathfrak{su}(1, 1)$.
In a physical setting this algebra is usually given the following presentation. \begin{align} [K_x, K_y] &= -iK_z\nonumber\\ [K_y, K_z] &= iK_x\nonumber\\ [K_z, K_x] &= iK_y.\label{eq:su11_commutation} \end{align} As a side note, often defined are the \emph{raising} and \emph{lowering} operators $K_{\pm} = (K_x \pm i K_y)$, from which, by standard techniques in quantum mechanics, a useful complete set of basis vectors which are common eigenstates of $K_z$ and $K^2 = K_z^2 - K_x^2 - K_y^2$ is defined. Equivalently one can consider the two-by-two matrices $M$ such that \begin{equation} M^\dagger P M = P, \end{equation} where $P$ is the matrix $\text{diag}(1,-1)$, meaning that such matrices $M$ preserve $P$ up to the given product. It is not so difficult to find a parameterization of such matrices, namely \begin{equation} M(\beta, \phi, \psi) = \begin{pmatrix} e^{i\psi}\cosh{\beta} & e^{i\phi}\sinh{\beta}\\ e^{-i\phi}\sinh{\beta} & e^{-i\psi}\cosh{\beta} \end{pmatrix}, \end{equation} where, for our purposes, ignoring the overall phase (equivalently setting $\psi = 0$) is sufficient. Additionally, in all realistic settings the maximum allowed $\beta$ is some finite constant, and we will often refer to this implicit bound in later results. Note that while the above is commonly given as a definition for SU(1,1), it is instead a representation of SU(1,1), with simple expression in terms of the exponential map applied to the Pauli-like generators of the algebra $\mathfrak{su}(1,1)$. \end{definition} The astute reader notices that SU(2) (Def.~\ref{def:su2}) and SU(1,1) (Def.~\ref{def:su11}) are almost identical in their definition, up to signs in the defining commutation relations. And indeed, the complexifications of the algebras of both SU(2) and SU(1,1) coincide, both being $\mathfrak{sl}(2, \mathbb{C})$.
From the casual definition of Lie algebras as the tangent space at the identity of a corresponding Lie group, we see that the magnitude of the curvature of these manifolds is constant but different in sign between these two Lie groups, suggesting the hyperbolic character of SU(1,1). Both algebras are three-dimensional vector spaces and, as per the definition of Lie algebras, have generators which commute or anticommute among themselves. With these basic definitions out of the way, it is possible to concretely define the types of quantum evolutions (essentially products of SU(1,1) transformations) we would like to physically realize to most closely emulate the structure of the QSP circuit ansatz. We will discuss the properties of this alternating ansatz, and then some examples of its physical realization. Along the way we will try to clarify where seemingly analogous mathematical properties between standard QSP and our modified ansatz may represent dramatically different physical properties. \subsection{Problem statement} Taking inspiration from the QSP constructions given in Appendix~\ref{appendix:basics_qsp_qsvt}, we can build up products of the analogue of the phased iterate, namely the phased boost (Def.~\ref{def:phased_boost}), and compare the structures of the resulting evolutions. It is important to note, as will be remedied later, that necessarily this product represents something physically quite different from its analogue in standard QSP, foremost by the fact that the matrix considered is not an irreducible unitary representation of the evolution being studied (the existence of one for SU(1,1) being impossible under the constraint that it be finite-dimensional, as discussed in Appendix~\ref{appendix:basics_rep_theory}).
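Before constructing the ansatz, the sign flip distinguishing the presentations in Defs.~\ref{def:su2} and \ref{def:su11} can be verified directly with two-by-two matrices. A minimal sketch (the $\mathfrak{su}(1,1)$ triple below is one non-unitary representation choice, ours purely for illustration):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    """Lie bracket taken as the matrix commutator."""
    return A @ B - B @ A

# su(2): J_k = sigma_k / 2 satisfies [J_x, J_y] = i J_z cyclically
Jx, Jy, Jz = X / 2, Y / 2, Z / 2
assert np.allclose(comm(Jx, Jy), 1j * Jz)
assert np.allclose(comm(Jy, Jz), 1j * Jx)
assert np.allclose(comm(Jz, Jx), 1j * Jy)

# su(1,1): one (necessarily non-unitary) 2x2 choice is K_x = iX/2, K_y = iY/2,
# K_z = Z/2, flipping the sign in [K_x, K_y] = -i K_z while keeping the others
Kx, Ky, Kz = 1j * X / 2, 1j * Y / 2, Z / 2
assert np.allclose(comm(Kx, Ky), -1j * Kz)
assert np.allclose(comm(Ky, Kz), 1j * Kx)
assert np.allclose(comm(Kz, Kx), 1j * Ky)
```

That $K_x, K_y$ are anti-Hermitian here reflects the absence of finite-dimensional unitary representations of SU(1,1) noted above.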
\begin{definition}[Phased boost] \label{def:phased_boost} The basic element of SU(1,1) used in our protocol will be the phased boost, which has the explicit form \begin{equation} V_{\phi}(\beta) \equiv \begin{pmatrix} \cosh{\beta} & e^{i\phi}\sinh{\beta}\\ e^{-i\phi}\sinh{\beta} & \cosh{\beta} \end{pmatrix}, \end{equation} where $\beta \in [0, \infty)$ (with some implicit cutoff to be discussed) is called the boost parameter, and $\phi$ is some rotation angle. In terms of representations for the commonly cited generators of SU(1,1) this could be expressed by the simple conjugation \begin{equation} V_{\phi}(\beta) = e^{i\phi K_z}e^{i \beta K_x} e^{-i\phi K_z}. \end{equation} \end{definition} \begin{definition}[SU(1,1)-based QSP] \label{def:qsp_su11} The direct analogue of the QSP circuit ansatz \cite{lyc_16_equiangular_gates, lc_17_simultation, lc_19_qubitization, gslw_19} in SU(1,1) is the repeated application of \emph{phased boosts} (Def.~\ref{def:phased_boost}). This results in the following element of SU(1,1): \begin{equation} \label{eq:qsp_su11_form} S_{\Phi} = \prod_{k = 0}^{n} V_{\phi_k}(\beta), \end{equation} where $\Phi \in \mathbb{R}^{n + 1}$ as per usual, and $\beta \in [0, \infty)$ is generally taken to be the unknown scalar \emph{signal} being processed. We will also refer to this signal, in analogy to standard QSP, in the transformed picture $x = \cosh{\beta}$, where $x \in [1, \infty)$, as in Eq.~\ref{eq:hyperbolic_form}. As in standard QSP, an evolution of this form represents a map from a list of real numbers to a function from an (artificially constrained) compact subset of possible $\beta$ to elements of SU(1,1). \begin{equation} \label{eq:hqsp_type_definition} \text{SU(1,1)-QSP} : \mathbb{R}^{n+1} \rightarrow (F: [-\beta, \beta] \rightarrow SU(1,1)). \end{equation} Here we have overloaded $\beta$ to refer also to the largest permitted boost parameter in our ansatz.
It will turn out that finite energy and experimental constraints will always place a reasonable limit on the allowed $\beta$. \end{definition} For the moment we do not discuss the physical relevance of the product of phased boosts given in Def.~\ref{def:qsp_su11}. Instead, we investigate the formal properties of this product, the map it induces to a space of functions, and the extent to which standard statements in QSP port to this new picture. It turns out that the similarities of the defining algebras (Defs.~\ref{def:su2} and \ref{def:su11}) permit plenty to be directly learned from the theory of standard QSP regarding this mathematical object. However, as the astute reader will remember, non-compact Lie groups do not admit finite-dimensional unitary representations; the matrices we will consider will therefore not represent, as was the case in QSP, the evolution of quantum states, but rather (for our purposes) that of quantum operators. \begin{theorem}[Functional form of SU(1,1) QSP] \label{thm:hyperbolic_substitution} The achievable functional form of an SU(1,1) QSP protocol with constituting phases $\Phi$ is similar to that of a standard QSP protocol, namely the product in Eq.~\ref{eq:qsp_su11_form} permits the non-unitary finite-dimensional representation \begin{equation} \label{eq:hyperbolic_form} S_{\Phi} = \begin{pmatrix} P & Q\sqrt{x^2 - 1} \\ Q^*\sqrt{x^2 - 1} & P^* \end{pmatrix}, \end{equation} with $P, Q$ having definite (but opposite) parity in $x$, degree bounded above by the length $(n + 1)$ of $\Phi \in \mathbb{R}^{n + 1}$, and satisfying $|P|^2 - (x^2 - 1)|Q|^2 = 1$, up to the replacement $x = \cosh{\beta}$, in analogy with standard QSP. Proof of this statement is by direct substitution, observing the forms of the phased iterates in standard QSP and SU(1,1) QSP, reproduced respectively below.
\begin{equation} \label{eq:phased_iterates} W_{\phi}(\theta) \equiv \begin{pmatrix} \cos{\theta} & e^{i\phi}\sin{\theta}\\ -e^{-i\phi}\sin{\theta} & \cos{\theta} \end{pmatrix}, \quad V_{\phi}(\beta) \equiv \begin{pmatrix} \cosh{\beta} & e^{i\phi}\sinh{\beta}\\ e^{-i\phi}\sinh{\beta} & \cosh{\beta} \end{pmatrix}. \end{equation} Note specifically that for standard QSP, with $x = \cos{\theta}$, the analytic continuation $\theta \mapsto i\theta$ takes $\cos{\theta}$ to $\cosh{\theta}$ and $\sin{\theta}$ to $i\sinh{\theta}$, and consequently (up to a factor of $i$) $\sqrt{1 - x^2}$ to $\sqrt{x^2 - 1}$. By modifying the angles defining the phased iterate in QSP, namely \begin{equation} W_{[\phi - \pi/2]}(\theta) \equiv \begin{pmatrix} \cos{\theta} & -e^{i\phi}i\sin{\theta}\\ -e^{-i\phi}i\sin{\theta} & \cos{\theta} \end{pmatrix}, \end{equation} we see that the analytic continuation $\theta \mapsto i\theta$ precisely recovers the form of the phased boost used in SU(1,1) QSP. In this sense the polynomials in $x = \cosh{\beta}$, \emph{as defined by their coefficients}, are in bijection with those of standard QSP. Note that this does not immediately imply that these polynomials are dense in a useful space of functions; this needs to be shown, and relies on a number of basic results in the theory of functional approximation. \end{theorem} \begin{remark}[On analytically continuing the QSP ansatz] \label{remark:analytic_continuation} As discussed in Theorem~\ref{thm:hyperbolic_substitution}, there are a few implicit maps between SU(2)- and SU(1,1)-based QSP. First, one can interpret that the signal oracle has been transformed to rotate by an imaginary angle: \begin{equation} \theta \mapsto i\theta = \beta. \end{equation} In this sense, when viewing QSP unitaries as embedding Laurent polynomials (as in \cite{haah_2019, rc_22}), the function is here being evaluated \emph{off the unit circle}.
Equivalently, taking the natural assignment \begin{equation} x \equiv \cos{\theta} \mapsto x \equiv \cosh{\beta}, \end{equation} the transformation can instead be seen as evaluating QSP-induced polynomial transforms outside the range $x \in [-1,1]$. We will often work in the latter picture to follow previous convention, but both indicate that we now seek to control the behavior of these polynomials outside of regions previously considered (and thus under novel constraints). \end{remark} It is quick work, once the bijection between QSP protocols and SU(1,1) QSP protocols has been established, to prove an analogous theorem to that in \cite{gslw_19}, namely that a \emph{partially specified} finite dimensional representation for a SU(1,1) QSP protocol can be \emph{completed}, that is, its missing elements filled in and its corresponding phases read off following the standard techniques of QSP \cite{chao_machine_prec_20, haah_2019, dong_efficient_phases_21}. \begin{figure} \caption{The abstract form of a QSP protocol. As discussed in Fig.~\ref{fig:qsp_as_walk}, both standard QSP and its SU(1,1) variant can be seen as a walk on a manifold. Namely, one interleaves controllable (blue, $\phi_k$) elements of a Lie group and an unknown but consistent (red, $s_k$) oracle operation, such that the ultimate unitary depends strongly on the unknown signal $s$. 
Here $s$ relates simply to the phased iterates $W_\phi(\theta)$ and $V_\phi(\beta)$ up to $\theta = \cos^{-1}(x)$ and $\beta = \cosh^{-1}(x)$, where in Eq.~\ref{eq:phased_iterates} abutting phases have been absorbed into the signal.} \label{fig:qsp_generalized_circuit} \end{figure} \begin{theorem}[Matrix completion in SU(1,1) QSP] \label{thm:matrix_completion_su11} Take $\beta \in [-\gamma, \gamma]$ for some $\gamma \in \mathbb{R}$ and consider polynomials $P, Q \in \mathbb{R}[\cosh{\beta}]$ where $x \equiv \cosh{\beta}$ such that the following conditions hold \begin{enumerate} \item $P$ has degree $n$ and $Q$ has degree $n - 1$ \item $P$ has parity $n \pmod{2}$ and $Q$ has parity $(n-1) \pmod{2}$. \item $P^2 - (x^2 - 1)Q^2 \geq 1$ for $x \in [1, \infty)$. \end{enumerate} Then there exists $\Phi \in \mathbb{R}^{n+1}$ such that the SU(1,1) QSP protocol with phases $\Phi$ (Def.~\ref{def:qsp_su11}) has the form \begin{equation} S_{\Phi} = \begin{pmatrix} \tilde{P} & \tilde{Q}\sqrt{x^2 - 1} \\ \tilde{Q}^*\sqrt{x^2 - 1} & \tilde{P}^* \end{pmatrix}, \end{equation} where $\Re[\tilde{P}] = P$ and $\Re[\tilde{Q}] = Q$. The phases $\Phi$ are known to be efficiently computable in time $\text{poly}(n)$ by a classical algorithm \cite{dong_efficient_phases_21, haah_2019}. Both of these results follow from the bijection with QSP up to analytic continuation in the variable $\theta$. Making the reverse substitution $\beta \mapsto -i\beta$ and $\phi \mapsto \phi + \pi/2$ allows the required phases to be recovered and repurposed for the SU(1,1) QSP protocol. \end{theorem} The bijection between \emph{circuit descriptions} and \emph{polynomial coefficients} for standard QSP versus SU(1,1) QSP is promising when considering their similarities, but note that this does not tell the entire story. The space of functions on the right-hand-side of the type definition for SU(1,1) QSP protocols given in Eq.~\ref{eq:hqsp_type_definition} is currently not well-defined. 
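The matrix-level structure, at least, is well-defined and straightforward to sanity-check numerically. The following sketch (the phase values are arbitrary and the seed incidental) multiplies out a handful of phased boosts and verifies that the product retains the form of Eq.~\ref{eq:hyperbolic_form}, has unit determinant (equivalently $|P|^2 - (x^2 - 1)|Q|^2 = 1$), and is pseudo-unitary with respect to the metric $\eta = \mathrm{diag}(1, -1)$:

```python
import numpy as np

rng = np.random.default_rng(7)

def phased_boost(phi, beta):
    """V_phi(beta): the basic SU(1,1) element (non-unitary 2x2 representation)."""
    c, s = np.cosh(beta), np.sinh(beta)
    return np.array([[c, np.exp(1j * phi) * s],
                     [np.exp(-1j * phi) * s, c]])

beta = 0.3
phis = rng.uniform(0, 2 * np.pi, size=8)

S = np.eye(2, dtype=complex)
for phi in phis:
    S = S @ phased_boost(phi, beta)

eta = np.diag([1.0, -1.0])  # SU(1,1) metric

# The product keeps the form [[P, Qt], [conj(Qt), conj(P)]] ...
assert np.allclose(S[1, 1], np.conj(S[0, 0]))
assert np.allclose(S[1, 0], np.conj(S[0, 1]))
# ... has unit determinant, i.e., |P|^2 - (x^2 - 1)|Q|^2 = 1 ...
assert np.isclose(np.linalg.det(S), 1.0)
# ... and is pseudo-unitary with respect to eta, rather than unitary.
assert np.allclose(S.conj().T @ eta @ S, eta)
```

Pseudo-unitarity, rather than unitarity, is exactly the property expected of a finite-dimensional representation of the non-compact group SU(1,1), per the discussion above.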
Indeed, neither the map from circuit descriptions to unitaries, nor that from polynomial coefficients to functions of the underlying signal parameter $\beta$, is injective, let alone obviously preserving of functional-analytic properties like density in a space of functions. We leave this discussion for Sec.~\ref{sec:analyzing_qsp_ansatz}, devoted to functional approximation theory. Before this, however, we discuss one physical instantiation of SU(1,1) interactions in quantum information, which will serve as a model for intuition going forward. \subsection{A simple physical implementation} In this section we ground SU(1,1) in the physical context of interferometry. Again we aim to present this minimally, pointing the interested reader toward recent in-depth experimental work on this topic, extensive examination of which is beyond the scope of this paper. We also note that even within the two major interferometric regimes we discuss (i.e., beam-splitter- and parametric-amplifier-based apparatuses) there are multiple incomparable devices with differing performance and underlying physical mechanisms. Ultimately, the primary goal of this section, as shown in Fig.~\ref{fig:su11_interferometer}, is to realize the dynamics given in Def.~\ref{def:qsp_su11} by a reasonable physical device. For us this means specifying the proper (in this case optical) element achieving the SU(1,1) phased iterate of Eq.~\ref{eq:phased_iterates}. \begin{definition}[SU(1,1) interferometer] \label{def:su11_pa_hamiltonian} For the rest of this work we refer to SU(1,1) interferometers as those whose primary optical element is a parametric amplifier.
The defining interaction term for this Hamiltonian is the following \begin{equation} H = i\hbar \xi a_1^\dagger a_2^\dagger + \text{h.c.}, \end{equation} where we will usually work with the composite variable $\beta$, proportional to $\xi$ and thus to the nonlinear coefficient and pump-field amplitude producing the intended amplification. The parameter $\beta$ can then be used directly to define the evolution of the modes under this Hamiltonian, as discussed below. \end{definition} \begin{figure} \caption{One instantiation of a staged SU(1,1) interferometer, following the model considered in \cite{ou_li_su11_review_20}, using notation from \cite{chuang_simple_computer_95}. Here two modes are fed into a series of four-wave-mixing parametric amplifiers, where a phase shift $\phi_k$ is applied to one arm of the interferometer between each amplification $V_k$ (as per $V_\phi$ for $\phi = 0$ in Eq.~\ref{eq:phased_iterates}). Dashed boxes are mirrors, while lines represent the path of light, moving from left to right. The output modes depend non-linearly on the underlying parameters of each parametric amplification; these unknown amplifications, as in standard QSP, are assumed to be \emph{consistent}: the same each time they are applied.} \label{fig:su11_interferometer} \end{figure} SU(1,1) interferometry can be summarized as an ability to perform the following mode transformations, induced by the interaction Hamiltonian in Def.~\ref{def:su11_pa_hamiltonian}. We mainly follow the notation of the work introducing SU(1,1) interferometry \cite{yurke_su2_su11_86}, which in our simple setting is more than sufficient.
\begin{align} a_1 &\mapsto (a_1)\,\cosh{\beta} + (a_2^{\dagger})\,e^{i\phi}\sinh{\beta},\nonumber\\ a_2^\dagger &\mapsto (a_1)\,e^{-i\phi}\sinh{\beta} + (a_2^\dagger)\,\cosh{\beta}.\label{eq:su11_mode_map} \end{align} In other words, the phased boost discussed in Def.~\ref{def:phased_boost} describes the manipulation of the operators $a_1, a_2$ and their Hermitian conjugates under the action of a series of parametric amplifications and phase shifts. Conjugating a squeezing operation by phase shifts is precisely the statement of Def.~\ref{def:phased_boost}. \begin{definition}[Staging SU(1,1) interferometers] The mode transformations enacted by both beam-splitter- and parametric-amplifier-based interferometers can be applied sequentially to achieve more complicated transformations of the modes. Such a sequence will be referred to as a staged SU(1,1) interferometer, and is depicted in Fig.~\ref{fig:su11_interferometer}. In its simplest form, however, as discussed in \cite{ou_li_su11_review_20}, one can consider the low-gain regime, where $\beta$ is small, and therefore the applied unitary (now in the Schrödinger picture) has the simple approximation \begin{equation} U \approx I + (e^{i\phi}\sinh{\beta}\,a_1^\dagger a_2^\dagger + \text{h.c.}) + \mathcal{O}(\sinh^2{\beta}), \end{equation} in which case repeated applications of $U$ interspersed with some phase shift unitary $\Theta$ inducing a phase difference of $\theta$ between the two modes will result in the following transformation (again in the Schrödinger picture) after application $N$ times: \begin{equation} |00\rangle \mapsto |00\rangle + e^{i\phi}\sinh{\beta} \left[\sum_{k = 1}^{N} e^{-i(k - 1)\theta}\right] |11\rangle + \mathcal{O}(\sinh^2{\beta}).
\end{equation} It is worth noting that in this limit, we indeed simply have the effective action of one SU(1,1) interferometric operation with modified $\beta$ term corresponding to the following substitution: \begin{equation} e^{i\phi}\sinh{\beta} \mapsto e^{i(\phi - [N - 1]\theta/2)} \sinh{\beta} \frac{\sin{[N\theta/2]}}{\sin{[\theta/2]}}. \end{equation} We note that this does not make use of the non-commutative aspect of the Hamiltonian's various terms, and in fact that this approximation restricts us to a qubit-like subspace spanned by the vacuum state and $|1,1\rangle$. It is the goal of this work to explicitly violate some of the assumptions of this approximation, which necessarily takes us out of a nice, finite-dimensional unitary representation for system dynamics. \end{definition} While the evolution of the modes in SU(1,1) interferometry matches the mathematical formalism of the previous section, the primary utility of QSP as originally introduced lies in the polynomial transformation of an unknown signal. Taking the parameter $\beta$ or $\cosh{\beta}$ as the unknown, it is worthwhile to consider settings in which this value can either (1) be reasonably said to be unknown, or (2) vary with respect to some other degree of freedom such that the overall transformation, non-linearly dependent on this unknown, performs a useful task. We discuss one possible concrete method of coupling SU(1,1) operations to external quantum systems, such that a protocol as in Fig.~\ref{fig:su11_interferometer} could be implemented. A major open question along this line of work is whether one can construct further natural couplings between (possibly unknown) systems and SU(1,1) interferometric operations. \begin{definition}[Controlled-squeezing operations] A controlled-squeezing operation considers the case where the mode-transformation discussed previously is coherently controlled by the state of another quantum system.
For qubit-coupled oscillators, a simple interaction could take exactly this form, e.g., \begin{equation} C(\beta_0, \beta_1) \equiv |0\rangle\langle 0|\otimes V_{\phi}(\beta_0) + |1\rangle\langle1|\otimes V_{\phi}(\beta_1). \end{equation} Consequently the composite map between modes defined by products of the phased boost used in SU(1,1) QSP, i.e., \begin{align} a_1 &\mapsto (a_1)\,P(\cosh{\beta}) + (a_2^\dagger)\,[\sinh{\beta}]\,Q(\cosh{\beta}) ,\\ a_2^\dagger &\mapsto (a_1)\,[\sinh{\beta}]\,Q^*(\cosh{\beta}) + (a_2^\dagger)\,P^*(\cosh{\beta}), \end{align} can be applied in superposition according to the quantum state of some auxiliary, perhaps qubit-based system. This sort of coupling could be advantageous in systems where the physical instantiation of the auxiliary qubit is one more amenable to measurement, while the system on which the parametric amplifier acts is more resistant to noise. Such problems of coupling are ubiquitous in QSP and QSVT, where they form the basis of the theory of block-encoded linear operators. \end{definition} For controlled-squeezing operations, the utility of a QSP-like ansatz is clear: one can perform these QSP manipulations in superposition according to the state of the coupled qubit. Indeed, given the indefinite direction of controlled operations in quantum computing, such circuits can also be used to preferentially prepare the coupled qubit into a desired state based on the magnitude or direction of the squeezing operation. For the moment though, we set aside the question of the optimal physical instantiation of this method, and focus instead on the expressivity of the SU(1,1) QSP ansatz in comparison to its standard cousin. \section{The expressivity of SU(1,1) QSP} \label{sec:analyzing_qsp_ansatz} As mentioned, the main difference between the theory of standard QSP and its SU(1,1) analogue is the replacement of standard trigonometric polynomials with their hyperbolic equivalents (Remark~\ref{remark:analytic_continuation}).
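Concretely, this replacement is captured by the Chebyshev polynomials $T_n$, which satisfy $T_n(\cos\theta) = \cos(n\theta)$ on $[-1, 1]$ and, under the same analytic continuation, $T_n(\cosh\beta) = \cosh(n\beta)$ for $x \geq 1$. A quick numerical check of both identities (the degree and evaluation grids below are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 5
# Coefficient vector selecting the degree-n Chebyshev polynomial T_n
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0

theta = np.linspace(0, np.pi, 50)
beta = np.linspace(0, 2.0, 50)

# Trigonometric identity: T_n(cos(theta)) = cos(n*theta), x in [-1, 1]
assert np.allclose(C.chebval(np.cos(theta), coeffs), np.cos(n * theta))

# Hyperbolic identity: T_n(cosh(beta)) = cosh(n*beta), x in [1, infinity)
assert np.allclose(C.chebval(np.cosh(beta), coeffs), np.cosh(n * beta))
```

The same coefficient list thus describes a bounded oscillatory function on $[-1, 1]$ and a rapidly growing one on $[1, \infty)$; controlling this growth is the new difficulty alluded to above.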
In this setting, many of the basic results of functional approximation theory employed in standard QSP are no longer immediately applicable. This section discusses the methods by which the functional expressivity of QSP-like algorithms is established, and interprets the relevant theorems for SU(1,1) QSP. As in standard QSP, many of the theorems we care about are, in their most generic form, already fundamentally known in functional analysis and approximation theory: the business of QSP is often massaging our problem statement and use case into a form that matches these established theorems' assumptions. We also discuss the numerical efficiency of classical subroutines used to specify QSP protocols (i.e., those algorithms to compute QSP phases to a specified precision given a desired embedded polynomial). Unless otherwise noted, standard definitions and theorems in topology and functional analysis are taken from common textbooks \cite{bourbaki_66, rudin_func_91}. \subsection{The Stone-Weierstrass theorem} We briefly cover results related to functional approximation, the most famous of which concern the approximation of functions by polynomials. Toward discussing approximation with the natural functions embeddable in SU(1,1) QSP, we provide a series of definitions and standard results. It should be noted that this section mainly discusses how one determines whether a set of functions generates a sub-algebra of the set of continuous functions that is \emph{dense} in the set of continuous functions. This question is largely separate from the efficiency of such approximation or the ease of its computation. \begin{definition}[Hausdorff space] \label{def:hausdorff_space} A Hausdorff space $X$ is a topological space such that for any two distinct elements, $x_1, x_2$, there exist two open sets, $U_1, U_2$, such that $x_1 \in U_1, x_2 \in U_2$ and $U_1 \cap U_2 = \emptyset$.
\end{definition} \begin{definition}[Uniform metric and convergence] \label{def:uniform_metric} A metric space $C(X, \mathbb{R})$ is said to have the \emph{uniform metric} if the distance between two functions $f, g$ is computed according to \begin{equation} d(f, g) \equiv \sup_{x \in X} \lvert f(x) - g(x) \rvert. \end{equation} Given a sequence of functions $f_0, f_1, \cdots$, this sequence is said to \emph{uniformly converge} to some $g$ if the sequence of real numbers $d(f_n, g)$ converges to zero. Sometimes a space of functions is said to have \emph{the topology of uniform convergence} if its underlying metric space is equipped with the uniform metric. \end{definition} \begin{theorem}[Stone-Weierstrass theorem (general form)] \label{thm:stone_weierstrass} Suppose $X$ is a compact Hausdorff space and $A$ is a subalgebra of $C(X, \mathbb{R})$ (real-valued continuous functions on $X$ with the topology induced by $\lVert \cdot \rVert_\infty$) which contains a non-zero constant function. Then $A$ is dense in $C(X, \mathbb{R})$ if and only if it separates points, i.e., for every $x \neq y \in X$ there exists some function $p \in A$ such that $p(x) \neq p(y)$. \end{theorem} \begin{lemma}[Hyperbolic trigonometric functions as bases] \label{lemma:complete_basis} The hyperbolic trigonometric functions $S = \{\cosh{(nx)}\}$ for $n \in \mathbb{N}$ form a complete basis for square integrable functions with compact support on some interval $x \in [-c, c]$ for $c \in \mathbb{R}$. Proof is by bootstrapping via the Stone-Weierstrass theorem. It can easily be seen that the set of real exponentials $\{e^{n\theta}\}, n \in \mathbb{Z}$ defined on a finite interval $[-\beta, \beta], \beta > 0$ both separates points (indeed, two distinct such exponentials agree only at $\theta = 0$) and contains a non-zero constant function, namely $1$. Note that this property is not modified if one considers hyperbolic trigonometric functions or real exponential functions, again indexed by the integers.
\end{lemma} \begin{definition}[$L^2$ functions, $L^p$ spaces, and integrability] \label{def:l2_space} A square integrable function, equivalently an $L^2$-function is a real or complex valued function for which the integral of the square of its absolute value is finite. On the real line this is the statement, for $L^2$-function $f$, that \begin{equation} \int_{-\infty}^{\infty} |f(x)|^2 \,dx < \infty. \end{equation} It is also equivalent to say that the square of the absolute value of the function is Lesbegue integrable. The vector space of square integrable functions is called $L^2$, for which the extension to arbitrary positive integers $p$ define the $L^p$ spaces. The space $L^2$ is unique among the $L^p$ spaces for being compatible with an inner product among functions, and we consider it exclusively. \end{definition} \begin{lemma}[Density in square-integrable functions] \label{lemma:density_in_l2} The set $C([0, 1], \mathbb{R})$ is dense in $L^2[0, 1]$ (the space of square, Lebesgue-integrable functions). Proof is standard in functional analysis. This permits us to use Stone-Weierstrass results for approximating square integrable functions, which are the common goal in Fourier analysis. By the applicability of Stone-Weierstrass to the hyperbolic trigonometric functions we are also able to approximate square integrable functions. \end{lemma} \begin{remark}[On the efficient approximation of a desired function with an arbitrary dense sub-algebra of continuous functions] \label{remark:parsevals_thm} We know from suitably modified versions of Parseval's theorem \cite{rudin_func_91} the generalized Fourier coefficients for the approximation to a given square-integrable function have the sum of their squared-magnitudes equal to the result of the integral of the square of the magnitude of the function itself on the relevant interval (this is one statement of the unitarity of the Fourier transform). 
In other words, given two complex-valued square-integrable functions $f, g$ on $[-\pi, \pi]$, \begin{equation} \sum_{n = -\infty}^{\infty} f_n \overline{g_n} = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\overline{g(x)}\,dx, \end{equation} where $f_n, g_n$ are the $n$-th Fourier coefficients of $f$ and $g$ respectively. Taking $f = g$, this is a statement that the square integrability of a function is equivalent to the finiteness of the sum of the squared magnitudes of its Fourier coefficients. It is known that this relation holds even for generalized Fourier coefficients induced by any valid choice of a complete orthogonal system of univariate functions, and so will notably also hold in our case. \end{remark} \begin{remark}[On Gram-Schmidt orthogonalization] \label{remark:gram_schmidt} In standard QSP the functions achieved for trivial QSP phase lists $\{0, 0, \cdots, 0\}$, e.g., as $\langle 0 |U_{\Phi} |0\rangle$, are the Chebyshev polynomials, which are naturally orthogonal on $[-1,1]$ with respect to the simple functional inner product \begin{equation} \langle P, Q\rangle = \int_{-1}^{1} P(x)Q(x)\,\frac{dx}{\sqrt{1 - x^2}}. \end{equation} In SU(1,1) QSP, the same polynomials are now evaluated over a different region, $[1, \cosh{\beta}]$ for some finite positive $\beta$, and the orthogonality enjoyed by the Chebyshev polynomials is lost. Nevertheless, the density of these polynomials in the relevant functional space discussed above is preserved, and thus successive Gram-Schmidt orthonormalization of these functions is possible, albeit possibly numerically ill-conditioned with increasing degree. We can choose to implicitly work in one of these manufactured bases when discussing Fourier analysis in the next section.
\end{remark} \subsection{On non-harmonic Fourier analysis} \label{sec:non_harmonic_analysis} In the previous section we focused on, for the SU(1,1)-variant ansatz of QSP, the density of generated transforms in a reasonable space of functions. While such density results are useful and necessary for the application of QSP, they are not sufficient. More specifically, one of the major benefits of standard QSP is that the embedded functional transform can be made to quickly converge to a desired functional transform, implying that the realizing circuit is relatively short. I.e., while we have a good description of the types of polynomials that are permissible in these transforms from Theorem~\ref{thm:matrix_completion_su11}, the speed with which such polynomials converge to desired continuous functions with the same properties is not obvious. In other words, we also seek to relate properties of the desired function to the minimum required length of interleaving ansätze whose embedded polynomial transforms achieve such a function (up to uniform approximation). As discussed, in standard QSP the achieved functional transforms are trigonometric polynomials in $\cos{\theta}$, which have clean Fourier series. The nice property we care about in relation to these implicit complex exponentials $\{e^{i n \theta}\}, n \in \mathbb{Z}$ is that they are \emph{closed} over $[-\pi, \pi]$, namely that for Lebesgue-integrable functions $f(\theta)$, the equation \begin{equation} f_n = \int_{-\pi}^{\pi} f(\theta)\,e^{in\theta}\,d\theta = 0, \end{equation} holding for all $n \in \mathbb{Z}$ implies that $f(\theta) = 0$ (almost everywhere). This is one of the basic observations of Fourier analysis, relating closely to the unitarity of the Fourier transform. Integrals similar to those above yield a set of complex numbers $f_n$ (the Fourier coefficients), which can be used to well-approximate a wide class of desired functions.
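To make the role of these coefficients concrete, the following sketch computes Fourier coefficients by simple quadrature for the particular choice $f(\theta) = \theta^2$ (an illustration only; the grid resolution and truncation order are arbitrary) and checks Parseval's identity, $\sum_n |f_n|^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi}|f|^2\,d\theta = \pi^4/5$:

```python
import numpy as np

# f(t) = t^2 on [-pi, pi]; Fourier coefficients f_n = (1/2pi) int f e^{-int} dt.
t = np.linspace(-np.pi, np.pi, 20001)
f = t**2

def integrate(y):
    # Trapezoidal rule on the uniform grid t (kept explicit for portability).
    return np.sum((y[:-1] + y[1:]) / 2) * (t[1] - t[0])

def coeff(n):
    return integrate(f * np.exp(-1j * n * t)) / (2 * np.pi)

# Parseval: sum_n |f_n|^2 = (1/2pi) int |f|^2 dt = pi^4 / 5.
lhs = sum(abs(coeff(n))**2 for n in range(-200, 201))
rhs = integrate(f**2) / (2 * np.pi)

assert np.isclose(rhs, np.pi**4 / 5, rtol=1e-5)
assert np.isclose(lhs, rhs, rtol=1e-3)
```

The rapid decay of the truncation error here reflects the smoothness of the chosen $f$; relating such decay rates to required ansatz lengths is exactly the question pursued below.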
We would like to recover similar properties for SU(1,1) QSP, as well as connect this discussion (on closed sets of functions) to the previous section on density in useful classes of functions. In the literature the results we care about are referred to as closure and gap theorems. These theorems seek to assert similar statements to those applied in standard Fourier analysis, except that one considers a possibly non-orthogonal basis of functions $\{e^{i \lambda_n \theta}\}, n \in \mathbb{Z}$ with possibly complex $\lambda_n$. Gap theorems specifically consider families of functions whose coefficients corresponding to some set $\{\lambda_n\}$ vanish, in which case one may still be able to assert, on some subinterval, that the only function with vanishing coefficients is the one identically zero on that interval. In general, however, we are mainly concerned with closure theorems, which are strong enough for the purposes of this work. We now show that completeness and closure (properties of sets of functions on intervals) are for us effectively interchangeable. We then show that a small modification to a known complete/closed set of functions (complex exponentials) yields a function set which (1) aligns with that of SU(1,1) QSP and (2) maintains the desired closure/completeness properties. We will exclusively work with $L^2$ (square-integrable) functions, unless otherwise noted. Toward this result, we cite a number of constitutive definitions and theorems. \begin{definition}[Closed {$L^p[-a, a]$} function set] \label{def:closure} A set of functions $\{f_n\}, n \in \mathbb{N}$ is $L^p$ closed on an interval $[-a, a], a > 0$ if every function $g \in L^p[-a, a]$ can be approximated according to the $L^p$ norm by linear combinations of the $f_n$ with possibly complex coefficients.
\end{definition} \begin{remark}[On closed versus complete function sets] \label{remark:closure_vs_completeness} A set of functions is said to be incomplete in $L^p$ if there exists a non-trivial function in $L^p$ which is orthogonal to all functions in that set. Closure, however, makes a statement about the \emph{approximation} of functions in $L^p$ (with respect to some interval). For general $p$ and over general measure spaces, the equivalence of closure and completeness is neither obvious nor necessarily true \cite{young_non_harmonic_01}. \end{remark} \begin{definition}[Measure space] \label{def:measure_space} A measure space is a tuple $(X, \mathcal{A}, \mu)$ of a set $X$, a $\sigma$-algebra $\mathcal{A}$ for $X$, and a measure $\mu$ on $(X, \mathcal{A})$ (the pair $(X, \mathcal{A})$ commonly called a measurable space). Here $\mathcal{A}$ is used to assign measurability to $X$, while $\mu$ is used for computing the size of various subsets of $X$. We also say a measure $\mu$ is $\sigma$-finite if $X$ is a countable union of measurable sets with finite measure, i.e., $\mu(X_k) < \infty$, where $\cup_k X_k = X$ (a common example is the Lebesgue measure over the reals, which is not finite, but which is $\sigma$-finite). \end{definition} \begin{theorem}[Completeness and closure equivalence for $L^2$ functions (Example 1 in \cite{young_non_harmonic_01})] \label{thm:completeness_closure_equivalence} If $(X, \mathcal{A}, \mu)$ is a $\sigma$-finite measure space and $1 \leq p < \infty$, then the Riesz representation theorem shows that the dual of $L^p(\mu)$ can be identified with $L^q(\mu)$, where $1/p + 1/q = 1$. From this it follows that a sequence of functions $\{f_n\}, n \in \mathbb{N}$ in $L^p(\mu)$ is closed over $L^p[X]$ if it is complete over $L^p[X]$. For $p = 2$, the space of $L^2$ functions over $X$ equipped with the measure $\mu$ is self-dual, and closure and completeness coincide.
\end{theorem} We now cite and apply a few results from foundational work in functional analysis on closure theorems, geared toward our specific setting. The main cited sources include a standard textbook on non-harmonic analysis \cite{young_non_harmonic_01}, as well as some of the monographs it cites, which provide full proofs of the provided statements \cite{levinson_gap_40, levinson_36}. Where relevant we also cite more recent work \cite{redheffer_completeness_77, redheffer_completeness_83}, which supplies streamlined proofs. \begin{theorem}[Theorem 4 in \cite{levinson_gap_40}, cited as such in \cite{young_non_harmonic_01} (Theorem 4) and \cite{redheffer_completeness_77, redheffer_completeness_83} (Theorem 9) in generalized forms] \label{thm:levinson_completeness} If $0 < p < \infty$ and $\{\lambda_n\}$ is a sequence of real or complex numbers for which \begin{equation} |\lambda_n | \leq |n| + \frac{1}{2p}, \quad n \in \mathbb{Z}, \end{equation} then the system $\{e^{i\lambda_n \theta}\}$ is complete in $L^{p}[-\pi, \pi]$, and the term $1/2p$ cannot be improved in general. Consequently, by Theorem~\ref{thm:completeness_closure_equivalence}, for $p = 2$ this functional set is also closed in $L^{2}[-\pi, \pi]$. Proof of this statement is given in terms of the properties of a counting function. Concretely, the proof shows that a set $\{e^{i\lambda_n\theta}\}, n \in \mathbb{N}$ is complete for $L^p$ on an interval of length $2\pi D$ if \begin{equation} \label{eq:root_counting} \limsup_{r \rightarrow \infty} \left(\int_{1}^{r} \frac{\Lambda(\theta) - 2 D \theta}{\theta}\,d\theta + \frac{\log{r}}{q}\right) > -\infty. \end{equation} Here $\Lambda(\theta)$ is the number of points in $\{\lambda_n\}$ (complex) inside the disk of radius $\theta$, referred to as an unsigned counting function.
\end{theorem} The proof method cited briefly in Theorem~\ref{thm:levinson_completeness}, and its accompanying imposed condition in Eq.~\ref{eq:root_counting}, can appear quite abstract, and so we take a moment to discuss its heuristic justification. In fact, as discussed in \cite{young_non_harmonic_01}, proving the completeness of sets of functions by investigating the properties of roots of special functions is an exceedingly common technique (if not often the only commonly employed technique). The expression in Eq.~\ref{eq:root_counting} permits proof in the following way. Assume, toward a contradiction with completeness, that all $F(\lambda_n)$ (the generalized Fourier coefficients of some nonzero $f$) are zero. Then the following inequality holds \begin{equation} |F(z)| \leq \int_{-a + \delta}^{a - \delta}e^{-yt}|f(t)|\,dt + \int_{\text{out}}e^{-yt}|f(t)|\,dt, \end{equation} where $z = x + iy$ and $\delta > 0$ is some small positive number, and `out' refers to the portions of the interval beyond $a - \delta$ and before $-a + \delta$. Hölder's inequality, with $f$ suitably normalized, then gives \begin{equation} |F(z)| \leq e^{a|y|}|y|^{1/q}(e^{-\delta |y|} + \eta), \end{equation} where $\eta$ goes to zero as $\delta$ does. This permits us to write out the following integral inequality, following the simplified calculations in Theorem 8 of \cite{redheffer_completeness_77}: \begin{align} \int_{-\pi}^{\pi} \log\left|F(re^{i\theta})\right|\,d\theta &\leq \int_{-\pi}^{\pi}ar|\sin{\theta}|\,d\theta - (1/q)\int_{-\pi}^{\pi}\log{r}\,d\theta - (1/q)\int_{-\pi}^{\pi}\log{|\sin{\theta}|}\,d\theta \nonumber\\ &+ \int_{-\pi/3}^{\pi/3}\log\left(e^{-\delta r/2} + \eta\right)\,d\theta + \int_{\text{out}} \log{\left(1 + \eta\theta\right)}\,d\theta.
\end{align} The first two integrals are simple, the third is convergent, but the fourth can be made less than any chosen negative number by first choosing $\delta$ so that $\eta$ is small, and then taking $r$ as large as necessary. This permits us to write the following inequality \begin{equation} \label{eq:magnitude_inequality} \int_{-\pi}^{\pi} \log\left|F(re^{i\theta})\right|\,d\theta \leq 2Dr - ([\log{r}]/q) - \varphi(r), \end{equation} where $\varphi(r)$ goes to infinity as $r$ goes to infinity. The final step is an application of Jensen's formula in Lemma~\ref{lemma:jensens_formula}, which relates the integral we are considering to a root counting function, namely \begin{equation} \label{eq:jensens_inequality} \int_{0}^{r} \frac{\Lambda(t)}{t}\,dt \leq \frac{1}{2\pi}\int_{-\pi}^{\pi}\log{|F(r e^{i\theta})|}\,d\theta - \log{|F(0)|}. \end{equation} Substituting the inequality in Eq.~\ref{eq:jensens_inequality} into Eq.~\ref{eq:magnitude_inequality}, we see that if the integral in Eq.~\ref{eq:root_counting} is greater than $-\infty$, then the growth of $\varphi(r)$ drives the integral of $\log{|F(z)|}$ to negative infinity, in which case the function $f$ itself is forced to vanish identically, contradicting our assumption that $f$ is nonzero. Consequently the satisfaction of Eq.~\ref{eq:root_counting} can be used to show the completeness of the functional set. Below we state Jensen's formula in more detail. \begin{lemma}[Jensen's formula \cite{young_non_harmonic_01}] \label{lemma:jensens_formula} If $f(z)$ is analytic in $|z| < R$, then we denote by $\Lambda(r)$ for $0 \leq r < R$ the number of zeros $z_1, z_2, z_3, \cdots$ of $f(z)$ for which $|z_k| \leq r$.
Provided that $f(0) \neq 0$, simple results in complex analysis can be used to show \begin{equation} \sum_{|z_k| \leq r} \log{\frac{r}{|z_k|}} = \int_{0}^{r} \frac{\Lambda(\theta)}{\theta}\,d\theta, \end{equation} in which case Jensen's formula can be modified from its original statement to the slightly more useful \begin{equation} \frac{1}{2\pi}\int_{0}^{2\pi} \log{|f(r e^{i\phi})|}\,d\phi = \log{|f(0)|} + \int_{0}^{r}\frac{\Lambda(\theta)}{\theta}\,d\theta. \end{equation} This provides a concrete relation between the growth of an entire function and the density of its zeros and can be used, as shown above, in the discussion of the completeness of non-trigonometric functional bases. \end{lemma} The result given in Theorem~\ref{thm:levinson_completeness} concerns completeness, and, by virtue of Theorem~\ref{thm:completeness_closure_equivalence}, also closure of our desired ansatz in the relevant space of functions. The original question of this section, however, also concerns the efficiency of functional approximation: the number of terms required before truncation in order to achieve a given degree of uniform approximation. To investigate such properties, we go through yet another common object in the study of generalized Fourier analysis: Riesz bases. We provide a common definition and related theorem below. \begin{definition}[Riesz basis; from \cite{young_non_harmonic_01}] \label{def:riesz_bases} A basis for a Hilbert space is a Riesz basis if it is equivalent to an orthonormal basis; that is, it is obtained from an orthonormal basis by means of a bounded invertible operator. \end{definition} \begin{theorem}[Properties of Riesz bases \cite{young_non_harmonic_01}] \label{thm:riesz_properties} Let $H$ be a separable Hilbert space; then the following are equivalent. \begin{enumerate} \item The sequence $\{f_n\}$ forms a Riesz basis for $H$. \item There is an equivalent inner product on $H$ for which $\{f_n\}$ becomes an orthonormal basis for $H$.
\item The sequence $\{f_n\}$ is complete in $H$ and there exist positive constants $A, B$ such that for an arbitrary positive integer $n$ and arbitrary scalars $c_1, c_2, \cdots, c_n$ one has \begin{equation} A \sum_{k = 1}^{n} |c_k|^2 \leq \left\lVert \sum_{k = 1}^{n} c_k f_k\right\rVert^2 \leq B \sum_{k = 1}^{n} |c_k|^2. \end{equation} \end{enumerate} \end{theorem} We can make a couple of observations based on Definition~\ref{def:riesz_bases} and the corresponding Theorem~\ref{thm:riesz_properties} of equivalent conditions. The first is that the real exponential functions $\{e^{n\theta}\}, n \in \mathbb{Z}$, \emph{do not} form a Riesz basis, despite their completeness and closure in $L^2[-\beta, \beta]$ for any finite $\beta > 0$ as shown, simply because their norms on $[-\beta, \beta]$ grow without bound in $n$, violating the upper bound in the condition above. More intuitively, for any fixed $\beta$, the growth of the maximum of $f_n$ on the interval $[-\beta, \beta]$ is unbounded as $n$ gets large, and consequently the behavior of an embedded polynomial in $\cosh{(n\beta)}, n \in \mathbb{N}$ for large $n$ will be dominated by the leading term near $\beta$. As we have completeness and closure, we are not precluded from attempting to approximate desired functions using this ansatz, but we will have to remain careful about asymptotic statements, as there may exist sub-classes of functions for which even very long lists of Fourier coefficients approximate the desired function poorly on some sub-interval. To be fair, we did not expect that these unbounded functions would constitute a Riesz basis as stated, and we later provide some indication that to assume the universal efficiency of approximation for functions outside $x \in [-1,1]$ is to assume various unphysical properties for the underlying quantum system's evolution. In what follows, however, we determine that even without this well-conditioned basis, various useful functions can nevertheless be approximated relatively quickly, with experimental utility.
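Jensen's formula in Lemma~\ref{lemma:jensens_formula} is concrete enough to check directly. The following sketch (our own illustration; the polynomial, radius, and grid resolution are arbitrary choices, not taken from the cited sources) compares the circle average of $\log|f|$ against the zero-counting side for a small polynomial:

```python
import numpy as np

# Numerical check of Jensen's formula for f(z) = prod_k (z - z_k).
# The zeros, radius, and grid size below are illustrative choices.
def jensen_sides(zeros, r=2.0, m=50000):
    def f(z):
        out = np.ones_like(z, dtype=complex)
        for zk in zeros:
            out = out * (z - zk)
        return out
    # Left-hand side: average of log|f| over the circle of radius r.
    phi = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    lhs = np.mean(np.log(np.abs(f(r * np.exp(1j * phi)))))
    # Right-hand side: log|f(0)| plus the counting-function integral,
    # which for finitely many zeros is the sum of log(r/|z_k|) over |z_k| <= r.
    rhs = np.log(np.abs(f(np.zeros(1, dtype=complex))[0]))
    rhs += sum(np.log(r / abs(zk)) for zk in zeros if abs(zk) <= r)
    return lhs, rhs

# One zero (at 3.0) lies outside the circle |z| = 2 and is excluded.
lhs, rhs = jensen_sides([0.5, -0.3 + 0.4j, 1.5j, 3.0])
assert abs(lhs - rhs) < 1e-6
```

The two sides agree once the grid resolves the circle; moving a zero across the circle of radius $r$ changes only whether its $\log(r/|z_k|)$ term is included in the sum.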
\section{Worked numerical examples} \label{sec:worked_numerical_examples} Furthering the analogy that SU(1,1)-based QSP can be viewed as standard QSP phase rotations interspersed by an oracle unitary rotating by a complex angle, the numerical side of the computation of these phases follows similar steps. In this section we provide a few examples of concretely computed phase sequences with easily interpreted actions, as well as specific demonstrations of the drawbacks (discussed in the previous section) that expanding over real exponential functions, which are not a Riesz basis, introduces into numerics. The first of these examples is simple, but demonstrates an important point. In standard QSP, trivial protocols of length $n$ serve to generate the Chebyshev polynomials of the first kind $T_n$ in terms of the modified signal $x = \cos{\theta}$, by virtue of their usual definition $T_n(x) = \cos{(n\arccos{x})}$. Such trivial protocols in the case of phased boosts are modified according to $x \mapsto ix$, giving \begin{equation} W(x) \mapsto V(x) = \begin{bmatrix} x & \sqrt{x^2 - 1}\\ \sqrt{x^2 - 1} & x \end{bmatrix}, \end{equation} where it should be noted that $x$ here has been overloaded to refer to $\cosh{\beta}$, and thus has a different domain and codomain. In this setting the trivial protocol of length $n$, i.e., with $\Phi = \{0, 0, \cdots, 0\} \in \mathbb{R}^{n + 1}$, takes the form \begin{equation} V(x)^n = \begin{bmatrix} T_n(x) & U_{n - 1}(x)\sqrt{x^2 - 1}\\ U_{n - 1}(x)\sqrt{x^2 - 1} & T_n(x) \end{bmatrix}, \end{equation} where $x \in [1, \infty)$ now constitutes the valid range, and the $n$-th Chebyshev polynomial of the second kind is notated $U_n(x)$. Unlike their action on the interval $[-1, 1]$, the Chebyshev polynomials outside this interval have the following extremal property.
\begin{theorem}[Extremal growth of Chebyshev polynomials] Among polynomials $P(x)$ of a fixed degree $n$ whose modulus obeys $|P(x)| \leq 1$ on $x \in [-1,1]$ the Chebyshev polynomial $T_n(x)$ is the unique polynomial (up to an overall phase) whose modulus increases most quickly on the complement of that interval, i.e., for $x \not\in [-1,1]$. \end{theorem} From this theorem we see that the trivial protocol in SU(1,1) QSP achieves, as might be expected, an extremal polynomial in a concrete sense: the magnitude of the top left element of the resulting transfer matrix increases as quickly as possible in $x$ for a given length protocol. Note that the competing polynomials in this statement are required to have modulus bounded by one on $x \in [-1,1]$; this requirement is satisfied here because, for $x \in [-1,1]$, we may apply the inverse of the original $x \mapsto ix$ map and recognize the resulting polynomial as that of standard QSP on the relevant interval, whose modulus bound follows from unitarity. We can also look at another special function achievable by standard QSP protocols, extending this function beyond $x \in [-1,1]$. Specifically, we look at the prescription for the phases of fixed-point amplitude amplification. In this setting, the QSP phases for a given protocol are defined recursively in terms of those of a shorter protocol, such that longer sequences better approximate one which, for nearly all $x$, generates a (possibly phased) bit-flip operation. In what follows, we translate the so-called $\pi/3$-protocol \cite{yoder_14, grover_05} directly to the setting of QSP. We note that \cite{yoder_14} gives an improved version of this protocol when one does not desire monotonicity of the success probability, but for our purposes, the achieved function has a much neater form. Moreover, the explicit QSP protocol we provide below is not described explicitly in related work.
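Before turning to that construction, the Chebyshev form of the trivial protocol above is easy to confirm numerically; the sketch below (our own check, using the standard identities $T_n(x) = \cosh(n\,\mathrm{arccosh}\,x)$ and $U_{n-1}(x)\sqrt{x^2-1} = \sinh(n\,\mathrm{arccosh}\,x)$ for $x \geq 1$) compares matrix powers of $V(x)$ against these closed forms:

```python
import numpy as np

# Check that the trivial SU(1,1) protocol V(x)^n has T_n(x) on its diagonal
# and U_{n-1}(x) sqrt(x^2 - 1) off-diagonal, for x >= 1. (Our own check;
# the test points are arbitrary.)
def V(x):
    s = np.sqrt(x**2 - 1.0)
    return np.array([[x, s], [s, x]])

for n in range(1, 8):
    for x in [1.0, 1.3, 2.0, 3.7]:
        M = np.linalg.matrix_power(V(x), n)
        t_n = np.cosh(n * np.arccosh(x))       # T_n(x) for x >= 1
        u_off = np.sinh(n * np.arccosh(x))     # U_{n-1}(x) sqrt(x^2 - 1)
        assert abs(M[0, 0] - t_n) < 1e-8 * max(1.0, t_n)
        assert abs(M[0, 1] - u_off) < 1e-8 * max(1.0, abs(u_off))
```

The relative tolerance matters here: for $x = 3.7$ and $n = 7$ the entries are of order $10^5$, which is exactly the extremal growth the theorem above describes.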
\begin{definition}[Monotonically amplifying QSP protocol] \label{def:monotonic_amplification} Consider the following QSP phase list, which will be the base case of our recursion \begin{equation} \Phi_0 = \{0, -\pi/6 + \pi/2, \pi/6 - \pi/2, 0\}. \end{equation} Moreover, toward defining the recursive step, given a phase list $\Phi$ consider the inverse phase list $\Phi^{-1}$ to be the reversed, negated version of $\Phi$ with its first and last elements modified in the following way \begin{align} \Phi &= \{\phi_0, \phi_1, \cdots, \phi_{n-1}, \phi_{n}\}\\ \Phi^{-1} &= \{-\phi_{n} + \pi/2, -\phi_{n-1}, \cdots, -\phi_1, -\phi_0 -\pi/2\}. \end{align} Additionally, given two phase lists $\Phi_0, \Phi_1$ we define their concatenation as protocols, denoted $\Phi_0 \cup \Phi_1$, by the following operation, which sums the trailing phase of the first and the leading phase of the second phase list: \begin{align} \Phi_0 &= \{\phi_{0,0}, \phi_{0,1}, \cdots, \phi_{0,n-1}, \phi_{0,n}\}\\ \Phi_1 &= \{\phi_{1,0}, \phi_{1,1}, \cdots, \phi_{1,n-1}, \phi_{1,n}\}\\ \Phi_0 \cup \Phi_1 &= \{\phi_{0,0}, \phi_{0,1}, \cdots, \phi_{0,n-1}, \phi_{0,n} + \phi_{1,0}, \phi_{1,1}, \cdots, \phi_{1,n-1}, \phi_{1,n}\}. \end{align} We can now define the recursive step which, given the phases for a monotonically amplifying QSP protocol $\Phi_n$, generates the phases of a longer monotonically amplifying protocol $\Phi_{n + 1}$: \begin{equation} \Phi_{n + 1} = \Phi_{n} \cup \{-\pi/6\} \cup \Phi_{n}^{-1} \cup \{\pi/6\} \cup \Phi_n.
\end{equation} For clarity we list below the trivial seed list $\{0, 0\}$, followed by the base case $\Phi_0$ and the result $\Phi_1$ of one further application of the recursion; the phases at each concatenation junction are displayed before being summed: \begin{align} \Phi_{\text{seed}} &= \{0, 0\}\\ \Phi_{0} &= \{0, 0, -\pi/6, \pi/2, -\pi/2, \pi/6, 0, 0\}\\ \Phi_{1} &= \{0, 0, -\pi/6, \pi/2, -\pi/2, \pi/6, 0, 0, -\pi/6, \pi/2, 0, -\pi/6,\nonumber\\ &\hphantom{{}={}\{} \pi/2, -\pi/2, \pi/6, 0,-\pi/2, \pi/6, 0, 0, -\pi/6, \pi/2, -\pi/2, \pi/6, 0, 0\} \end{align} It is evident that the length of this protocol increases exponentially in $n$. In the following lemma we give the functional transform this protocol achieves, and describe why it is termed monotonically amplifying. \end{definition} \begin{lemma}[Functional form of monotonically amplifying QSP protocol (Def.~\ref{def:monotonic_amplification})] \label{lemma:monotonic_amplification} The protocol with phases $\Phi_n$ as given in Def.~\ref{def:monotonic_amplification} generates a unitary with the following form \begin{equation} U_{\Phi_n} = \begin{bmatrix} P_n & \;\cdot\; \\ \;\cdot\; & \;\cdot\; \end{bmatrix}, \end{equation} where the magnitude-squared of $P_n$ has the following simple form \begin{equation} |P_n(x)|^2 = x^{2(3^{n + 1})}. \end{equation} It is easy to check that this polynomial satisfies the required boundedness and parity conditions on $x \in [-1, 1]$. Moreover, as $n$ grows, the value of this function at every fixed $x \in (-1, 1)$ monotonically approaches $0$, meaning this matrix monotonically approaches a (possibly phased) bit flip as expected. \end{lemma} The function described in Lemma~\ref{lemma:monotonic_amplification} is very simple, and has some nice properties. We now consider this function under the usual map $\cos{\theta} \mapsto \cosh{\beta}$, meaning we evaluate it outside of the interval $[-1,1]$.
While the modulus of $P$ necessarily does not grow as quickly outside $[-1,1]$ as the Chebyshev polynomial of the same degree, it has exceedingly regular form in terms of $x = \cosh{\beta}$, lending itself to easier analytic treatment. Finally, we consider a simple phase sequence whose induced function has interesting and tunable (non-monotonic) behavior outside the interval $[-1,1]$. Specifically, take the QSP protocol of length $n + 1$ whose phases are identical: \begin{equation} \Phi = \{\phi, \phi, \cdots, \phi\} \in \mathbb{R}^{n + 1}. \end{equation} For certain phases (notably $\phi \in \{0, \pi/2\}$), this reduces to simple and known protocols. For our purposes, however, we are interested in the behavior of the induced polynomial transforms for all possible choices of $\phi$ (though by symmetry we can restrict $\phi \in [0, \pi/2]$). It is not hard to determine that the basic iterate of this protocol, $V_\phi = e^{i\phi \sigma_z}V(x)$, has the following eigenvalues: \begin{equation} \lambda_{\pm} = \cos{\phi} \left(x \pm \sqrt{x^2 - \sec^2{\phi}}\right). \end{equation} Moreover, one can quickly determine that the magnitude of these eigenvalues is one precisely when $|x| \leq |\sec{\phi}|$. In continuing the induced polynomial transform $P(x) = \langle 0 | U_\Phi | 0\rangle$ to all real $x$, one can see that there exists an \emph{extended region} in which the eigenvalues of the iterate have magnitude one. We now show that this induces a set of three regions in which the induced polynomial transform has substantively different character.
We phrase this property in terms of the following three inequalities: \begin{alignat}{2} \sqrt{B(\phi, x)} \leq &|P(x)| \leq 1, \quad &&0 \leq |x| \leq 1, \forall n,\\ 1 \leq &|P(x)| \leq \sqrt{B(\phi, x)}, \quad &&1 \leq |x| < \sec{\phi}, \forall n,\\ \sqrt{T_n(x\cos{\phi})} \leq &|P(x)| < \infty, \quad &&\sec{\phi} \leq |x| < \infty , \forall n, \end{alignat} where $T_n(x)$ is the $n$-th Chebyshev polynomial evaluated at $x$ and $B(\phi, x)$ is the following scaled and shifted secant function, which, it should be noted, is independent of $n$: \begin{equation} \label{eq:secant_bound} B(\phi, x) \equiv \frac{\sec{([\pi/2]\, x \cos\phi)} - 1}{\sec{([\pi/2]\cos\phi)} - 1}. \end{equation} For a derivation of a simpler version of this upper bound, we refer the reader to Appendix~\ref{appendix:upper_bounds}; however, most of these bounds are readily recoverable using standard computer algebra software, taking care with the regions in which $P(x)$ is evaluated. Consequently we see that for $|x| \leq |\sec{\phi}|$, the maximum modulus attained by $|P(x)|$ is bounded from above by an expression that is independent of $n$, while for $x$ outside of this region, this modulus is bounded below by a series of functions which, for any fixed $x$, grow exponentially in $n$. Consequently, with a different character than that of the functional transforms seen in standard QSP, we have thresholding behavior in $x$ beyond the interval $[-1,1]$; that is, given any real number $\mu > 1$ and promised gap $\delta > 0$, there exists a positive integer $n$ and real angle $\phi$ such that the SU(1,1) QSP protocol with repeating phase $\phi$ and length $n$ produces a $P(x)$ whose magnitude at arguments $x_{\pm} = \mu \pm \delta$ differs by \emph{at least} any desired finite amount. We capture this statement in the following definition and theorem, and then discuss the character of the induced polynomial function more closely.
\begin{definition}[Weak step function] \label{def:weak_step_function} Given a piecewise-continuous function $f$ across two (possibly infinite) intervals $A, B \subset \mathbb{R}$, where all elements of $A$ are less than all elements of $B$, we say $f$ is a $(g, h)$-weak step function if the following inequalities hold: \begin{equation} f\big|_A \leq g\big|_A, \quad f\big|_B \geq h\big|_B. \end{equation} In other words, the function $f$ is bounded from above by $g$ on the interval $A$, and bounded from below by $h$ on the interval $B$. For example, the Heaviside step function $\Theta(x)$ is a $(0, 1)$-weak step function for $A = \mathbb{R}_{-}$ and $B = \mathbb{R}_{+}$. \end{definition} In general, a function satisfying the properties of a weak step function may not be so interesting, especially if the characters of $g, h$ as given above are not substantively different. By the previous results, however, we will see that the properties of the weak step function induced by the QSP protocol of length $n + 1$ with constant phase factors are dramatic. Specifically, we have shown the following. \begin{lemma}[Weak step functions in constant-phase QSP] Let $P$ be the top-left element of the unitary matrix generated by the QSP protocol with constant phase list $\{\phi, \phi, \cdots, \phi\}$ of length $n + 1$. Then $|P|$ for $A = [1, \sec{\phi})$ and $B = [\sec{\phi}, \infty)$ is a $(\sqrt{B(\phi, x)}, \sqrt{T_n(x\cos{\phi})})$-weak step function. Here we have made reference to Def.~\ref{def:weak_step_function} and the secant bound in Eq.~\ref{eq:secant_bound}. \end{lemma} Before showing a quantitative theorem about constant-phase QSP, we take a moment to analyze the bounds we've already given on the induced $|P|$ more closely, so that we might simplify our proofs. First, we note that most of the interesting behavior of $|P|$ occurs near the critical point $x = \sec{\phi}$, where we transition from our known upper bound (for all $n$) to our known lower bound (for each $n$).
Taking $x = \sec{\phi} - \epsilon$, the upper bound provided approaches the following \begin{equation} \label{eq:negative_deviation} |P(\sec{\phi} - \epsilon)|^2 \leq B(\phi, \sec{\phi} - \epsilon) = \epsilon^{-1}\left[\frac{(2/\pi)\sec{\phi}}{\sec{(\pi\cos{\phi}/2)} - 1}\right] - \left[\frac{1}{\sec{(\pi\cos{\phi}/2)} - 1}\right] + \mathcal{O}(\epsilon). \end{equation} We note that this is expected, as near its singularity the secant behaves like the reciprocal of the distance to that singularity. On the other side of this critical point, we can analyze the computed $|P|$ exactly, rather than its bound, and take an expansion \begin{equation} \label{eq:positive_deviation} |P(\sec{\phi} + \epsilon)| = \sqrt{1 + n^2 \tan^2{\phi}} + \mathcal{O}(\epsilon), \end{equation} where we have kept only the zeroth order term for brevity. These particular limits can be computed laboriously by hand, or by computer algebra software. The key takeaway from both of these limits is that the behavior, for a fixed $\epsilon$ on either side of the critical point $x = \sec{\phi}$, of the function $|P|$ induced by the constant-phase QSP protocol is alternately bounded from above for $x < \sec{\phi}$ by a function constant in $n$, and bounded from below for $x > \sec{\phi}$ by a growing function in $n$. That the latter lower bound increases without bound in $n$ necessitates that the former upper bound approaches infinity near the critical point. The utility of the weak step function induced here relies on that lower bound growing quite quickly in $n$. Before finally posing a quantitative theorem, we also look at the limiting behavior of the functions considered here, and pair them with a few diagrams. In addition to expanding about the critical point, it is worthwhile to look at the behavior of $|P|$ for large $x$. In this case, we can replace terms of the form $\sqrt{x^2 - \sec^2{\phi}}$ in the analytical expression for $P$ by approximately $|x|$, and simplify the resulting expression.
In this case one finds that, for $x$ larger than the critical point, the expression $|P|$ for the QSP protocol of length $n$ with constant phases $\phi$ approaches the clean function \begin{equation} \label{eq:large_x_limit} \lim_{x \gg \sec{\phi}} |P| = 2^{n - 1}\, x^{n} \cos^{n}{\phi}. \end{equation} That is, rather than simply growing quadratically in $n$ as we observed just above the critical point, when one evaluates $P$ at some constant distance above the critical point, its magnitude grows exponentially in $n$ for all such arguments $x$. Indeed, as given by our Chebyshev-dependent lower bound, we see that this is, up to a $\phi$-dependent factor per iterate, as fast as any function achievable with QSP could grow on this interval. \begin{theorem}[Properties of constant-phase SU(1,1) QSP] \label{thm:weak_step_function} Take $\xi > 0$ to be the desired minimal step height and $\mu > 1$ the desired step location. For all $\delta > 0$, there exists a positive integer $N$ such that for all $n > N$, there is a QSP protocol of length $n$ which satisfies the following properties: \begin{align} |P(x < \mu - \delta)| &\leq \sqrt{B(\phi, x)}\\ |P(x > \mu + \delta)| - |P(\mu - \delta)| &\geq \xi. \end{align} Moreover, the minimum length of this protocol goes as $N = \mathcal{O}(\mu\xi/\delta)$. Here $P(x)$ is the top-left element of the SU(1,1) QSP protocol with phase list $\{\phi, \phi, \cdots, \phi\}$ of length $n + 1$ with $\phi = \sec^{-1}{\mu} \in (0, \pi/2)$. Finally, an upper bound on the required $N$ goes as \begin{equation} N \leq \mathcal{O}\left[\delta^{-1/2}\sqrt{\xi \cot{\phi} + \frac{\csc{\phi}}{\sec{([\pi/2]\cos{\phi})} - 1}}\right]. \end{equation} Here again we can restrict $\phi \in (0, \pi/2)$ by symmetry arguments. For simplicity we have also assumed that $\phi$ is not tending extremely close to $0$ or $\pi/2$, in which case the limiting behavior of the protocol becomes more involved.
In this sense there exist suppressed constants dependent on the closeness of $\phi$ to these critical values; moreover, at the critical points, as discussed before, the considered regions become degenerate. Moreover, it is easy to note that in the limits $\phi \rightarrow \{0, \pi/2\}$, the upper bound for $N$ given grows arbitrarily large. The proof follows by working in the limiting region of small $\delta$, in which case the behavior of $|P|$ on either side of the critical point has an analytical expression (Eqs.~\ref{eq:negative_deviation} and \ref{eq:positive_deviation}). Requiring that these points be separated by $\xi$ immediately yields the given scaling, and thus an upper bound for $n$, as the rate of growth of $|P|$ beyond $x = \sec{\phi}$ is near maximal, as given in Eq.~\ref{eq:large_x_limit}. It is worthwhile to note that outside this critical region, and especially for $\delta = \mathcal{O}(\mu)$, the required $N$ can grow as slowly as logarithmically in $\xi$. \end{theorem} \begin{figure} \caption{Two log plots of $|P|^2$ for QSP protocols with phase list $\{\pi/3, \pi/3, \cdots, \pi/3\}$ of lengths $6, 8, 10$, along with the analytically derived (a) upper bound for $|P|^2$ (Eq.~\ref{eq:secant_bound}), and (b) large-$x$ limit of these functions (Eq.~\ref{eq:large_x_limit}). Note that, for each $x$ along these plots above $x = 1$, the evaluated function grows exponentially in $n$, and that $|P|^2$ only converges to these limits well after the critical point. One can also see the changed behavior at $x = \sec{\pi/3} = 2$, where the plotted functions become monotonic.} \label{fig:upper_bound} \end{figure} \begin{remark}[On weak step functions] Note that, contrary to what we might hope, the achieved function in Theorem~\ref{thm:weak_step_function} does not uniformly converge to a discontinuous jump.
Instead, we are guaranteed that, across two regions (here $x \leq \sec{\phi}$ and $x \geq \sec{\phi}$), the magnitude of the relevant matrix element is respectively upper bounded and lower bounded by known and simple analytic functions (Def.~\ref{def:weak_step_function}). Moreover, these bounds are useful because the former is constant in $n$, while the latter is shown to grow without bound in $n$ at a reasonable rate. Consequently, while this form of thresholding is not as strong as uniform approximation, it nevertheless captures a useful, tunable property of a polynomial function extended beyond its usually considered region in QSP. \end{remark} We pause to note that it is interesting that such a simple protocol, which to the authors' knowledge has had no application in standard QSP, nevertheless reveals novel properties when its argument is extended to the region of large modulus. It is reasonable to assume that other repeated QSP-like units could have, when analyzed in a similar way, experimentally useful properties when the argument is extended beyond the compact interval usually considered in QSP. \section{Concluding discussion} \label{sec:discussion_conclusion} In this work we have investigated a variant of the QSP ansatz where the interleaved components are generated not by complex exponentiation of elements of the $\mathfrak{su}(2)$ Lie algebra, but rather $\mathfrak{su}(1,1)$. Quantum systems which evolve within SU(1,1) occur frequently in photonic and mechanical settings, and while there necessarily do not exist finite-dimensional unitary representations of such evolutions, one can identify the Heisenberg-picture evolution of optical modes through a series of parametric amplifiers with our proposed ansatz. One finds that while the achievable polynomial coefficients are identical between the SU(2) and SU(1,1) ansätze, a key difference emerges: said polynomials are evaluated not on the image of the cosine function, but instead on that of the hyperbolic cosine.
This simple equivalence can be entirely captured by the following relations \begin{align} \begin{bmatrix} P(\cos{\theta}) & iQ(\cos{\theta})\sin{\theta}\\ iQ^*(\cos{\theta})\sin{\theta} & P^*(\cos{\theta}) \end{bmatrix} &\Longleftrightarrow \begin{bmatrix} P(\cosh{\beta}) & Q(\cosh{\beta})\sinh{\beta}\\ Q^*(\cosh{\beta})\sinh{\beta} & P^*(\cosh{\beta}) \end{bmatrix},\\[0.8em] \begin{bmatrix} P(x) & iQ(x)\sqrt{1 - x^2}\\ iQ^*(x)\sqrt{1 - x^2} & P^*(x) \end{bmatrix} &\Longleftrightarrow \begin{bmatrix} P(x) & Q(x)\sqrt{x^2 - 1}\\ Q^*(x)\sqrt{x^2 - 1} & P^*(x) \end{bmatrix},\\[1em] \Re[P]^2 + (1 - x^2)\Re[Q]^2 \leq 1 &\Longleftrightarrow \Re[P]^2 - (x^2 - 1)\Re[Q]^2 \geq 1\\[1em] x \in [-1,1] &\Longleftrightarrow x \in [1, \infty). \end{align} We note that even this apparently simple transformation (of only the argument of the achieved function, not the coefficients) nevertheless presents significant barriers to understanding the nature (especially in the approximation of desired functions) of these polynomials. Standard QSP is concerned, by definition, only with $x \in [-1,1]$, and it is not evident (nor even true) that arbitrary functional approximation outside this interval can be achieved, and moreover achieved efficiently (i.e., with short protocols). The business of re-proving similar statements to those in QSP relies on general tools in functional analysis. Ultimately this apparently simple transformation reduces the problem of characterizing the achievable functions of QSP to the study of polynomials over the reals which are (1) bounded above in magnitude by one on the interval $[-1,1]$, (2) bounded below in magnitude by one outside the interval $[-1, 1]$, and (3) of definite parity. We show that the modified ansatz is still dense in the set of continuous functions also obeying constraints (1-3), although the length of the achieving protocols can, in the worst case, grow very fast. This poor scaling is due to the desire to approximate functions over non-compact sets.
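The dual normalization above can be seen as pseudo-unitarity: each phased boost preserves $\eta = \mathrm{diag}(1, -1)$, and hence so does any product of them, forcing $|P|^2 - (x^2 - 1)|Q|^2 = 1$ for $x \geq 1$. The sketch below (our own illustration; the protocol convention, phase lists, and evaluation points are arbitrary choices) confirms this on random instances:

```python
import numpy as np

# Pseudo-unitarity check for SU(1,1) QSP: with eta = diag(1, -1), every
# phased boost satisfies M^dag eta M = eta, so any protocol does too, and
# the top row obeys |P|^2 - (x^2 - 1)|Q|^2 = 1 for x >= 1. (Our own sketch;
# the convention U = e^{i phi_0 Z} prod_k V(x) e^{i phi_k Z} is assumed.)
rng = np.random.default_rng(7)
eta = np.diag([1.0, -1.0])

def phase(p):
    return np.diag([np.exp(1j * p), np.exp(-1j * p)])

def su11_protocol(phis, x):
    s = np.sqrt(x**2 - 1.0)
    Vx = np.array([[x, s], [s, x]], dtype=complex)
    U = phase(phis[0])
    for p in phis[1:]:
        U = U @ Vx @ phase(p)
    return U

for _ in range(20):
    phis = rng.uniform(-np.pi, np.pi, size=6)
    x = rng.uniform(1.0, 3.0)
    U = su11_protocol(phis, x)
    assert np.allclose(U.conj().T @ eta @ U, eta, atol=1e-6)
    # Top-row normalization; U[0, 1] plays the role of Q(x) sqrt(x^2 - 1).
    assert np.isclose(abs(U[0, 0])**2 - abs(U[0, 1])**2, 1.0, atol=1e-6)
```

This is the SU(1,1) counterpart of unitarity in standard QSP, where the analogous identity reads $|P|^2 + (1 - x^2)|Q|^2 = 1$ on $x \in [-1, 1]$.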
Toward useful application of QSP-like ansätze to continuous-variable computations, we provided a series of concrete phase prescriptions for which the achieved polynomial transforms are analytically simple: these include Chebyshev polynomials, monomials, and a thresholding function whose step location and height are precisely tunable. The last of these is noteworthy, as its standard QSP counterpart had no obvious use previously, indicating that simple constructions in standard QSP can reveal unexpected traits when extended to regions in parameter space the standard ansatz had no way of accessing. These protocols, which are shown, unlike their SU(2) counterparts, to be especially sensitive to their underlying parameters, thus form a natural building block for general amplification techniques in quantum systems. Specifically, recent advances not only in optics but also superconducting quantum computing systems have investigated the possibility of cascaded series of parametric amplifications for quantum measurement, most successfully in the form of travelling wave parametric amplifiers (TWPAs) \cite{mohdbzos_twpa_15, peng_floquet_twpa_22}. While we consider a simpler model, the ability to precisely tune the location and magnitude of thresholding behavior resulting from a cascaded series of parametric amplifiers, as shown in Thm.~\ref{thm:weak_step_function}, holds promise for constituting the analytic language of such devices. Simple and exciting possibilities for extensions include the development of band-pass functional transforms \cite{wimperis_bb1_94}, protocols for signal trifurcation, and application of chaos theory to our protocols in the long-length limit. On this line, going forward there is great promise in better understanding functional analysis over non-compact sets and with non-Riesz bases, toward a coherent theory of alternating ansätze over families of physically relevant Lie algebras.
Understanding how one can precisely control such quantum systems opens the door for the development of new algorithms in the continuous variable setting, while maintaining the single-qubit intuition that has made quantum signal processing so successful. \appendix \section{Basic results in QSP and QSVT} \label{appendix:basics_qsp_qsvt} In this appendix we provide proofs of a few main theorems in the body of the paper, as well as additional commentary on relevant mechanisms of the standard proof methods of QSP and QSVT. We aim for these to be self-contained, and to give some intuition for how one attempting to expand or modify the theory of these circuits might begin to do so. \begin{remark} \label{remark:qsp_ansatz_expressivity} \begin{proof} In reference to Theorem~\ref{thm:qsp_ansatz_expressivity}. The unitary matrix corresponding to a QSP protocol has the form \begin{equation} U_{\Phi} \equiv \begin{bmatrix} P(x) & \sqrt{1 - x^2}Q(x) \\ -\sqrt{1 - x^2}Q^*(x) & P^*(x) \end{bmatrix}, \end{equation} where $P, Q \in \mathbb{C}[x]$ have definite parity, are bounded above in magnitude by $1$, and obey the relation $|P|^2 + (1 - x^2)|Q|^2 = 1$. It is also known \cite{gslw_19} that choosing the real part $\tilde{P}$ of $P$ with definite parity and bounded above by $1$ defines a QSP protocol (i.e., a finite list of real phases $\Phi \in \mathbb{R}^{n + 1}$) whose unitary has the following form \begin{equation} U_{\Phi} \equiv \begin{bmatrix} \tilde{P}(x) + iB(x) & i\sqrt{1 - x^2}C(x) \\ i\sqrt{1 - x^2}C(x) & \tilde{P}(x) - iB(x) \end{bmatrix}, \end{equation} where $\tilde{P}, B, C \in \mathbb{R}[x]$ and $\Re[P] = \tilde{P}$ as stated. The map $\tilde{P} \rightarrow \Phi$ is not unique, but can be made unique by choosing how $B, C$ are defined in terms of the zeros of $1 - \tilde{P}^2$.
It is not difficult to see that this unitary is a rotation of the form \begin{align} U_\Phi &= \tilde{P}(x) I + i\, \big[B(x) Z + \sqrt{1 - x^2} C(x) X\big]\\ &= \cos{(\xi(x))} I + i\sin{(\xi(x))}\,e^{iR(x)Y}Ze^{-iR(x)Y}, \end{align} where we have defined new functions of $x$, namely $\xi(x)$ and $R(x)$ with the following form \begin{align} \xi(x) &= \arccos{\left[\tilde{P}(x)\right]},\\ R(x) &= \arccos{\left[B(x)/\sqrt{1 - \tilde{P}^2}\right]}. \end{align} Here we identify $R(x)$ as the overall $Y$-rotation ambiguity for the unitary, parameterized by $x$ in a way that completely depends on both the choice of $\tilde{P}$ and certain choices concerning the roots of $1 - \tilde{P}^2$ as stated to make $\Phi$ unique. Both $\xi(x)$ and $R(x)$ are in truth functions $[-1, 1] \rightarrow U(1)$, choosing an angle on the circle through the branch cut of the arccosine function. The speed of convergence to a desired real function by $\tilde{P}$ is a well-known classical result in functional analysis. \end{proof} \end{remark} \begin{remark} \label{remark:su11_qsp_completion} \begin{proof} In reference to Theorem~\ref{thm:matrix_completion_su11}. The intent of this theorem, in analogy to that of standard QSP, is to determine if partial constraints on the representation of SU(1,1) as given allow one to complete unspecified elements of the matrix such that the overall matrix can be realized as a product of phased boosts. We assume the existence of \emph{real} polynomials $P$ and $Q$ of definite parity (though opposite to one another), such that \begin{equation} P^2 - (x^2 - 1)Q^2 \geq 1, \quad x \in [1, \infty). \end{equation} As in standard QSP, we can examine the polynomial function $F$ with the following form \begin{equation} F = P^2 - (x^2 - 1)Q^2 - 1, \end{equation} which is necessarily positive semidefinite on the relevant interval by our assumption.
By a simplified version of the Fejér-Riesz theorem \cite{rc_22} or simple root analysis \cite{gslw_19}, there exists a complex polynomial $G(x)$ of definite parity such that the function $F$ is the magnitude squared of $G$, i.e., \begin{equation} F = G(x)G^{*}(x). \end{equation} Note that the existence of such a decomposition depends solely on the positive semidefiniteness of the relevant polynomial $F$. The imaginary parts of $\tilde{P}, \tilde{Q}$ in the theorem statement can now be identified with the real and imaginary parts of the polynomial $G$, the latter of which will permit $\sqrt{x^2 - 1}$ to be factored from it by merit of the known boundary conditions at $x = 1$, equivalently $\beta = 0$. That the resulting matrix corresponds to a product of phased boosts now follows directly from the mapping from $\cosh{\beta}$ to $\cos{\theta}$, under which the coefficients do not change, but we are guaranteed that the modified polynomials must now satisfy the standard $P^2 + (1 - x^2)Q^2 \leq 1$ on $x \in [-1,1]$. In this way we see that the constraints for matrix completion in SU(2) and SU(1,1) QSP are dual to one another. But while we have a succinct description of the achievable real $P, Q$, the degree required to uniformly approximate a desired continuous function satisfying the same parity and boundedness constraints is unclear, and left to Sec.~\ref{sec:non_harmonic_analysis}. \end{proof} \end{remark} \section{Upper bounds in concrete SU(1,1) QSP protocols} \label{appendix:upper_bounds} For the QSP protocol of length $n$ whose phases are all fixed to a single value $\phi$, the text gives a particularly simple upper bound for $|P(x)|$ (for $1 \leq |x| < \sec{\phi}$) which also suffices as a lower bound for $|x| \leq 1$.
The derivation of this bound, however, is both involved and not particularly illuminating, and consequently we give a more simply derived upper bound for the relevant region $1 \leq |x| < \sec{\phi}$ whose properties are nevertheless good enough to prove all results in the text on the scaling of the required length $n$ in terms of relevant properties of the achieved functions. The first step in this derivation is determining the general form of $P$ of the generated unitary for fixed $\phi$; this corresponds to taking the $n$-th power of the basic iterate $e^{i\phi\sigma_z}V(x)$ with an additional overall phase, which can be done via computer algebra software or by hand by diagonalizing the relevant small matrix. This element has the form \begin{align} P(x) {}={} \frac{1}{2}e^{-i(n-1)\phi}&\frac{1}{\sqrt{x^2 - \sec^2\phi}}\nonumber\\ \biggl[ &\left(\sqrt{x^2\cos^2\phi - 1} - ix\sin{\phi}\right)\left(x\cos\phi - \sqrt{x^2\cos^2\phi - 1}\right)^n + \nonumber\\ &\left(\sqrt{x^2\cos^2\phi - 1} + ix\sin{\phi}\right)\left(x\cos\phi + \sqrt{x^2\cos^2\phi - 1}\right)^n \biggr]. \end{align} While this expression does indeed have dependence on $n$, we can now show that this dependence is mild, and does not affect the magnitude of this term very strongly in the critical region $1 \leq |x| < \sec{\phi}$. This follows by noting that the terms raised to the $n$-th power in the second and third line are, for $1 \leq |x| < \sec{\phi}$, bounded in magnitude by a constant, namely $1$: in this region $x^2\cos^2\phi - 1 < 0$, so each such term is $x\cos\phi \pm i\sqrt{1 - x^2\cos^2\phi}$, of modulus exactly one.
Replacing these terms with this upper bound, and repeatedly applying the simple triangle-inequality-derived upper bound for the magnitude of a complex number, $|a + bi| \leq |a| + |b|$, we find that this whole expression is upper bounded by the following \begin{align} |P(x)| &\leq \frac{1}{\sqrt{\sec^2\phi - x^2}} \left[\sqrt{\sec^2\phi - x^2} + x\tan\phi\right],\\ &= 1 + \frac{x\tan\phi}{\sqrt{\sec^2\phi - x^2}} \label{eq:simple_bound}, \end{align} which has the expected behavior at $x = 0$, as well as the expected singularity at $x = \sec{\phi}$. Comparing this bound to the one given in the body of the paper (Eq.~\ref{eq:secant_bound}), we see that their limiting behavior around $x = \sec{\phi}$ is characterized by the following limits \begin{align} |P(\sec{\phi} - \delta)| &\approx \frac{1}{\sqrt{2}} \sec^{1/2}\phi \tan\phi \frac{1}{\delta^{1/2}},\\ &\approx \sqrt{\frac{2}{\pi}}\sqrt{\frac{\sec{\phi}}{\sec{[(\pi/2)\cos\phi]} - 1}}\frac{1}{\delta^{1/2}}, \end{align} where the first has been derived from Eq.~\ref{eq:simple_bound} and the second from Eq.~\ref{eq:secant_bound}, in each case taking $x = \sec\phi - \delta$ and computing the leading order singular term in $\delta$. Comparing these two terms numerically one can find that they agree exceptionally well as $\phi$ approaches $\pi/2$, despite radically different forms. Their distinguishing features lie in the difference between the non-shared prefactors multiplying the common $\delta^{-1/2}$ singularity: \begin{equation} \left|\tan\phi - 2\sqrt{\frac{1}{\pi\left(\sec{[(\pi/2)\cos\phi]} - 1\right)}}\right|, \end{equation} where both prefactors individually diverge as $\phi \rightarrow \pi/2$, though their ratio remains close to one, so that this difference stays a small fraction of either term. The similarity of these prefactors also holds in the opposite limit, where both act approximately as $\phi$ as $\phi \rightarrow 0$.
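The claimed leading-order behavior of the simply derived bound near $x = \sec\phi$ can be verified numerically. The sketch below (our illustration with an arbitrarily chosen $\phi$, not part of the derivation) checks that Eq.~\ref{eq:simple_bound} evaluated at $x = \sec\phi - \delta$ approaches $(1/\sqrt{2})\sec^{1/2}\phi\,\tan\phi\,\delta^{-1/2}$ as $\delta \rightarrow 0$.

```python
import numpy as np

phi = 0.8                       # fixed QSP phase, away from pi/2
sec, tan = 1 / np.cos(phi), np.tan(phi)

def simple_bound(x):
    # Eq. (simple_bound): 1 + x*tan(phi)/sqrt(sec(phi)^2 - x^2).
    return 1 + x * tan / np.sqrt(sec**2 - x**2)

def leading_term(delta):
    # Claimed leading singular behavior (1/sqrt(2)) sec^{1/2}(phi) tan(phi) delta^{-1/2}.
    return (1 / np.sqrt(2)) * np.sqrt(sec) * tan / np.sqrt(delta)

for delta in (1e-6, 1e-8, 1e-10):
    ratio = simple_bound(sec - delta) / leading_term(delta)
    assert abs(ratio - 1) < 1e-2    # agreement improves as delta -> 0
```

The subleading corrections are $O(\delta^{1/2})$ relative to the singular term, so the ratio tends to one.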
Consequently, as long as we do not consider $\phi$ too close to $\pi/2$, which is an unphysical region for a variety of reasons, our simply-derived upper bound will be close in its behavior around $x = \sec{\phi}$ to the more sophisticated bound. \section{Representation theory for Lie groups and algebras} \label{appendix:basics_rep_theory} In this section we cover a few basic definitions from representation theory, toward a minimal presentation of a few no-go theorems on the finite-dimensional unitary representation of certain non-compact Lie groups. \begin{definition}[Group representation] A representation of a group $G$ on a finite-dimensional complex vector space $V$ is a homomorphism $\rho: G \rightarrow \text{GL}(V)$, from $G$ to the general linear group over $V$ (here taken over the field $\mathbb{C}$). Usually we will just refer to the vector space $V$ as the representation of $G$, when we really intend $\rho$. \end{definition} \begin{definition}[Irreducible representation] A representation $V$ is called irreducible if there is no proper nonzero invariant subspace $W$ of $V$. Here such a $W$ would be a subrepresentation of $V$, namely a vector subspace $W$ of $V$ which is invariant under $G$, the group being represented, with respect to the defining homomorphism of the representation. \end{definition} \begin{definition}[Unitary representation] A representation $V$ (over a complex Hilbert space) is said to be unitary if $\rho(g)$ is a unitary operator for every $g \in G$. Such representations are often coveted in physics, given the physical interpretation natural to unitary operators. \end{definition} In what follows we reproduce a few prominent results from the last century on the representation theory of certain special Lie groups. The first, the Peter-Weyl theorem, concerns compact Lie groups like SU(2), and showcases a variety of the natural properties on which the success of QSP and QSVT can be seen to crucially rely.
The second theorem, not explicitly named but often cited in the study of the Lorentz group, provides a prominent counterexample when some of the assumptions of the Peter-Weyl theorem (most notably compactness) are violated. \begin{theorem}[Peter-Weyl theorem, from \cite{peter_weyl_01}] Take $\rho$ a finite dimensional continuous representation of a compact group $G$. Then the Peter-Weyl theorem can be summarized in three statements concerning this representation \begin{enumerate}[label=(\arabic*)] \item (Density). The set of matrix coefficients of $G$ is dense in the space of continuous complex functions $C[G]$ on $G$, equipped with the uniform norm (this also implies density in $L^2[G]$). \item (Complete reducibility). Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. \item (Generalized Fourier basis). The matrix coefficients for $G$, suitably renormalized, form an orthonormal basis of $L^2[G]$. \end{enumerate} While the statements of the Peter-Weyl theorem are not in themselves strong enough to recover the theorems of QSP, due to the specificity of its alternating ansatz, they are nevertheless intimately connected to the reason that the achievable space of functions parameterizing QSP circuits is dense in the space of continuous functions with similar constraints. \end{theorem} In contrast to the assurances of the Peter-Weyl theorem, which guarantee most desired analogues to the requirements of harmonic analysis on compact groups, the case for non-compact groups is somewhat bleak. The following theorem justifies our focus on finite dimensional (non-unitary) representations of SU(1,1), as well as the difficulty in finding Riesz bases for natural spaces of functions over the elements of such representations.
\begin{theorem}[Theorem of \cite{wigner_39} (p.~165)] Non-compactness implies, for a connected simple Lie group, that no nontrivial finite-dimensional unitary representations exist. In other words, unitary irreducible representations (except the identity) of non-compact connected simple Lie groups are infinite dimensional. \end{theorem} \end{document}
\begin{document} \title{Note on the group edge irregularity strength of graphs} \author{Marcin Anholcer$^1$, Sylwia Cichacz$^{2,3}$\\ $^1$Pozna\'n University of Economics and Business\\ $^2$AGH University of Science and Technology Krak\'ow, Poland\\ $^3$ University of Primorska, Koper, Slovenia} \maketitle \begin{abstract} We investigate the \textit{group edge irregularity strength} ($es_g(G)$) of graphs, i.e. the smallest value of $s$ such that for any Abelian group $\mathcal{G}$ of order $s$ there exists a function $f:V(G)\rightarrow \mathcal{G}$ such that the sums of vertex labels over the edges (the edge weights) are pairwise distinct. In this note we provide some upper bounds on $es_g(G)$ as well as on the edge irregularity strength $es(G)$ and the harmonious order ${\rm{har}}(G)$. \end{abstract} \section{Introduction} In 1988 Chartrand et al. \cite{ref_ChaJacLehOelRuiSab1} proposed the problem of irregular labeling. This problem was motivated by the well known fact that a simple graph of order at least 2 must contain a pair of vertices with the same degree. The situation changes if we consider multigraphs. Each multiple edge may be represented by an integer label and the (\textit{weighted}) degree of any vertex $x$ is then calculated as the sum of the labels of all the edges incident to $x$. The maximum label $s$ is called the \textit{strength} of the labeling. The labeling itself is called \textit{irregular} if the weighted degrees of \textbf{all} vertices are distinct. The smallest value of $s$ that allows an irregular labeling is called the \textit{irregularity strength of} $G$ and denoted by $s(G)$. This problem was one of the major sources of inspiration in graph theory \cite{AhmAlMBac,ref_AigTri,ref_AmaTog,ref_AnhCic1,ref_BacJenMilRya,ref_FerGouKarPfe,ref_KalKarPfe1,ref_KarLucTho,ref_MajPrz,ref_Nie,ref_ThoWuZha,refXuLiGe}. For example, the concept of $\mathcal{G}$-irregular labeling is a generalization of irregular labeling to Abelian groups.
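For small graphs the irregularity strength $s(G)$ defined above can be computed by exhaustive search. The following Python sketch (our illustration, not part of the note) confirms, for instance, the forced value $s(K_3)=3$: the three weighted degrees of a triangle are distinct exactly when the three edge labels are pairwise distinct.

```python
from itertools import product

def irregularity_strength(edges, n):
    """Smallest s such that some edge labeling with labels in {1,...,s}
    makes all n weighted vertex degrees distinct (brute force)."""
    s = 1
    while True:
        for labels in product(range(1, s + 1), repeat=len(edges)):
            deg = [0] * n
            for (u, v), lab in zip(edges, labels):
                deg[u] += lab
                deg[v] += lab
            if len(set(deg)) == n:   # all weighted degrees distinct
                return s
        s += 1

# Triangle K_3: pairwise-distinct edge labels are forced, so s(K_3) = 3.
assert irregularity_strength([(0, 1), (1, 2), (2, 0)], 3) == 3
```

The same brute force gives, e.g., $s(P_3)=2$ for the path on three vertices.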
The \textit{group irregularity strength} of $G$, denoted $s_g(G)$, is the smallest integer $s$ such that for every Abelian group $\mathcal{G}$ of order $s$ there exists a $\mathcal{G}$-irregular labeling $f$ of $G$. The following theorem, determining the value of $s_g(G)$ for every connected graph $G$ of order $n\geq 3$, was proved by Anholcer, Cichacz and Milani\v{c} \cite{ref_AnhCic1}. \begin{mytheorem}[\cite{ref_AnhCic1}]\label{AnhCic1} Let $G$ be an arbitrary connected graph of order $n\geq 3$. Then $$ s_g(G)=\begin{cases} n+2,&\text{if } G\cong K_{1,3^{2q+1}-2} \text{ for some integer }q\geq 1,\\ n+1,&\text{if } n\equiv 2 \imod 4 \wedge G\not\cong K_{1,3^{2q+1}-2} \text{ for any integer }q\geq 1,\\ n,&\text{otherwise.} \end{cases} $$ \end{mytheorem} The notion of \textit{edge irregularity strength} was defined by Ahmad, Al-Mushayt and Ba\v{c}a \cite{AhmAlMBac}. The weight of an edge $xy$ in $G$ is the sum of the labels of its end vertices $x$ and $y$. A vertex $k$-labeling $\phi\colon V(G)\rightarrow \{1,2,\ldots,k\}$ is called an edge irregular $k$-labeling of $G$ if every two different edges have different weights. The minimum $k$ for which $G$ has an edge irregular $k$-labeling is called the \textit{edge irregularity strength} of $G$ and denoted by $es(G)$. They established the exact value of the edge irregularity strength of paths, stars, double stars and the Cartesian product of two paths. They also gave a lower bound for $es(G)$. The total version of this concept, the \textit{edge irregular total labeling}, has also been investigated in the literature \cite{ref_BacJenMilRya,refXuLiGe}. Graham and~Sloane defined \textit{harmonious labeling} as a direct extension of additive bases and graceful labeling. We call a labeling $f \colon V (G) \rightarrow\mathbb{Z}_{|E(G)|}$ harmonious if it is an injection such that the sums $f (x) + f (y)$ modulo $|E(G)|$ are pairwise distinct over the edges $xy\in E(G)$. When $G$ is a tree, exactly one label may be used on two vertices.
They conjectured that any tree is harmonious (the conjecture is still unsolved) \cite{ref_GraSlo}. Beals et al. (see \cite{ref_BeaGalHeaJun}) considered the concept of harmoniousness with respect to arbitrary Abelian groups. \.Zak in \cite{ref_Zak} generalized the problem and introduced a new parameter, the \textit{harmonious order of $G$}, defined as the smallest number $t$ such that there exists an injection $f:V(G)\rightarrow \mathbb{Z}_t$ (or a surjection if $t<|V(G)|$) that produces distinct edge weights. The problem of harmonious order is connected with a~problem of sets in Abelian groups with distinct sums of pairs. A subset $S$ of an Abelian group $\Gamma$, where $|S|=k$, is an $S_2$-set of size $k$ if all the sums of two different elements in $S$ are distinct in $\Gamma$. Let $s(\Gamma)$ denote the cardinality of the largest $S_2$-set in $\Gamma$. Two central functions in the study of $S_2$-sets are $v(k)$ and $v_{\gamma}(k)$, which give the order of the smallest Abelian group and the smallest cyclic group $\Gamma$, respectively, for which $s(\Gamma)\geq k$. Since cyclic groups are a special case of Abelian groups, clearly $v(k)\leq v_{\gamma}(k)$, and any upper bound on $v_{\gamma}(k)$ is also an upper bound on $v(k)$ \cite{Haa}. Note that ${\rm{har}}(K_n)=v_{\gamma}(n)\leq n^2+O(n^{36/23})$ \cite{ref_Zak}. Recently Montgomery, Pokrovskiy and Sudakov proved the following theorem. \begin{mytheorem}[\cite{ref_Sudakov}] Every tree $T$ of order $n$ has an injective $\Gamma$-harmonious labeling for any Abelian group $\Gamma$ of order $n + o(n)$. \end{mytheorem} In this paper we would like to introduce a new concept which gathers the ideas of $\mathcal{G}$-irregular labeling, edge irregularity strength and harmonious order, namely the \textit{group edge irregularity strength}. Assign an element of an Abelian group $\mathcal{G}$ of order $s$ to every vertex $v \in V(G)$. For every edge $e=uv \in E(G)$ the \textit{weight} is defined as: \begin{eqnarray*} wd(uv)=w(u)+w(v).
\end{eqnarray*} The labeling $w$ is $\mathcal{G}$-edge irregular if for any two distinct edges $e\neq f$ we have $wd(e) \neq wd(f)$. The \textit{group edge irregularity strength} $es_g(G)$ is the smallest $s$ such that for every Abelian group $\mathcal{G}$ of order $s$ there exists a $\mathcal{G}$-edge irregular labeling of $G$. \section{Bounds on $es_g(G)$} Let us start with a general lower bound on $es_g(G)$. Of course, the order of the group must be at least the number of edges of $G$. \begin{myobservation}\label{lemma_cycle_below0} For each graph $G$ with $|E(G)|=m$, $es_g(G)\geq m$. \end{myobservation} The above bound can be improved e.g. for cycles. \begin{myproposition}\label{lemma_cycle_below} If $n \equiv 2 \imod 4$, then $es_g(C_n) \geq n+1$. \end{myproposition} \begin{proof} Since $n \equiv 2 \imod 4$, we have $n=2(2k+1)$ for some integer $k$. Assume we can use some $\mathcal{G}$ of order $2(2k+1)$. Obviously $\mathcal{G}=\mathbb{Z}_2\times \mathcal{G}_1$ for some group $\mathcal{G}_1$ of order $2k+1$. There are $2k+1$ elements $(1,a)$ where $a\in\mathcal{G}_1$ and all of them have to appear as the edge weights, so $$ \sum_{e\in E(G)}{wd(e)}=(1,b_1) $$ for some $b_1\in \mathcal{G}_1$. On the other hand $$ \sum_{e\in E(G)}{wd(e)}=2\sum_{v\in V(G)}{w(v)}=(0,b_2) $$ for some $b_2\in\mathcal{G}_1$, a contradiction. \end{proof} Note that each $\mathcal{G}$-edge irregular labeling of $K_n$ has to be an injection, which implies that ${\rm{har}}(K_n)\leq es_g(K_n)$. So $es_g(K_n)={n \choose 2}$ only for $n\leq 3$ \cite{ref_GraSlo}. Recall that for a squarefree integer $n$ (one with all primes distinct in its factorization) there exists a unique Abelian group of order $n$, namely $\mathbb{Z}_n$. Therefore we obtain that $es_g(K_5)=11$, $es_g(K_6)=19$, $es_g(K_{12})=114$, $es_g(K_{14})=178$ and $es_g(K_{15})=183$ \cite{ref_Zak}. Although the exact values of ${\rm{har}}(K_n)$ for $n\geq16$ are not known, the lower bound $n^2-3n\leq {\rm{har}}(K_n)$ holds for each $n \geq 3$ \cite{ref_Zak}, hence we easily obtain the following observation.
\begin{myobservation} If $n \geq 3$, then $n^2-3n \leq es_g(K_{n})$. \end{myobservation} We now give several upper bounds on $es_g(G)$. In \cite{AhmAlMBac}, Ahmad, Al-Mushayt and Ba\v{c}a obtained an exponential upper bound on $es(G)$, depending on the Fibonacci numbers with seed values $F_1=1, F_2=2$. However, because $es(G)\leq {\rm{har}}(K_n)$, we obtain the following. \begin{myproposition} If $G$ is a graph of order $n$, then $es(G)\leq n^2+O(n^{36/23})$. \end{myproposition} Let $|E(G)|=m$. Note that in general we do not know whether $es_g(G) \leq {\rm{har}}(K_n)$, however we are able to show the following. \begin{mytheorem}\label{marcin} For each graph $G$, $es(G) \leq es_g(G)\leq p(2es(G)) \leq p(2 {\rm{har}}(G))$, where $p(k)$ is the least prime greater than $k$. \end{mytheorem} \begin{proof} The first inequality follows from the fact that if we can find a $\mathcal{G}$-edge irregular labeling for any group $\mathcal{G}$ of a given order $p$, then in particular we can do it for the cyclic group, and if this labeling distinguishes the weights modulo $p$, then the corresponding labeling with integers (where we use $p$ instead of $0$ as a label) is an edge irregular labeling. For every prime $p$ there is only one Abelian group $\mathcal{G}$ of order $p$, namely $\mathbb{Z}_p$. If in the labeling one uses only labels less than $p/2$, the addition inside the group is equivalent to the addition in $\mathbb{Z}$, so the second inequality follows. The last inequality is implied by the fact that $es(G)\leq {\rm{har}}(G)$. \end{proof} From the Bertrand–Chebyshev theorem \cite{ref_The} it follows that for any positive integer $i$ there exists a prime number between $i$ and $2i$, which easily leads to the following. \begin{mycorollary} Let $G$ be a graph of order $n$. Then $es_g(G)\leq 4n^2 +O(n^{36/23}).$ \end{mycorollary} However, for larger $n$, better bounds are known. For example, Nagura \cite{ref_Nag} proved that there is a prime between $i$ and $1.2i$ provided that $i\geq 25$.
This gives us the following upper bound. \begin{mycorollary} Let $G$ be a graph of order $n\geq 25$. Then $es_g(G)\leq 2.4n^2 +O(n^{36/23}).$ \end{mycorollary} Recently Baker, Harman and Pintz \cite{ref_BakHarPin} proved that for sufficiently large $i$, there is a prime between $i$ and $i+i^{0.525}$. This allows us to obtain the following result for large graphs. \begin{mycorollary}\label{corMA} Let $G$ be a graph of sufficiently large order $n$. Then $es_g(G)\leq 2n^2 +O(n^{36/23}).$ \end{mycorollary} The latter result can be improved for some special classes of graphs. First, let us consider a graph $G$ of the following form. Assume that the vertices of $G$ can be divided into four sets $V_{11}$, $V_{12}$, $V_{21}$ and $V_{22}$ such that for every edge $xy$, we have that $x\in V_{ij}$ and $y\in V_{kl}$ implies $i=k$ or $j=l=1$. Moreover, assume that $|V_{11}|+|V_{12}|\leq \left\lceil n/2 \right\rceil$, $|V_{11}|+|V_{21}|\leq \left\lceil n/2 \right\rceil$ and $|V_{21}|+|V_{22}|\leq \left\lceil n/2 \right\rceil$. A special case is a graph in which $|V_{11}|+|V_{21}|=0$, i.e. a graph with two components having orders $\left\lceil n/2 \right\rceil$ and $\left\lfloor n/2 \right\rfloor$. \begin{mycorollary} Let $G$ be a graph defined as above, where $n$ is sufficiently large. Then $es_g(G)\leq 1.5n^2 +O(n^{36/23})$. \end{mycorollary} \begin{proof} For any graph of sufficiently large order $\left\lceil n/2 \right\rceil$, there is a group of prime order $g=0.5n^2 +O(n^{36/23})$ that allows a group edge irregular labeling of this graph by Corollary~\ref{corMA}. Of course, $g$ is not divisible by $3$. Let us start with modifying the graph $G$ by adding some edges so that the subgraphs with vertex sets $V_{11}\cup V_{12}$, $V_{11}\cup V_{21}$ and $V_{21}\cup V_{22}$ become complete graphs. Let us take any group $\mathcal{G}$ of order $3g$. Of course such a group must be of the form $\mathbb{Z}_3\times \mathcal{G}^\prime$ for some group $\mathcal{G}^\prime$ of order $g$.
We label the vertices of $G$ with elements $(g_1,g_2)$ of $\mathcal{G}$, where $g_1\in \mathbb{Z}_3$ and $g_2\in \mathcal{G}^\prime$, in the following way. First we choose $g_2$ for the vertices in $V_{11}\cup V_{21}$ in such a way that the edges are distinguished even if all $g_1$ are equal (this is possible, since any graph of order $\left\lceil n/2 \right\rceil$ can be labeled with a group of order $g$). Then we do the same with the vertices of $V_{11}\cup V_{12}$, by labeling the vertices of $V_{12}$ with the elements of $\mathcal{G}^\prime$ not used in $V_{11}$. Finally we label the vertices of $V_{22}$ with the elements not used in $V_{21}$. This distinguishes the edges inside $V_{11}\cup V_{12}$ and inside $V_{21}\cup V_{22}$, no matter what the values of $g_1$ are. Now we choose $g_1=0$ for the vertices in $V_{11}\cup V_{12}$ and $g_1=1$ for the vertices in $V_{21}\cup V_{22}$, which distinguishes the edges from different sets: the first coordinate of the weight of every edge inside $V_{11}\cup V_{12}$ is now $0$, inside $V_{21}\cup V_{22}$ it equals $2$, and for each edge between $V_{11}$ and $V_{21}$ it equals $1$. This means that the labeling is a group edge irregular labeling of the modified graph, so also of each of its subgraphs, in particular of $G$. Thus $$es_g(G)\leq 3g\leq 1.5n^2 +O(n^{36/23}).$$ \end{proof} A similar reasoning allows us to strengthen the result for graphs having more than two components of almost the same order. As we know, in any group of odd (in particular odd prime) order, $2x=2y$ if and only if $x=y$. Thus if one uses a different value of $g_1$ in every component, it is enough to distinguish the edges inside every component by the elements $g_2$ of the subgroup $\mathcal{G}^\prime$ of $\mathcal{G}=\mathbb{Z}_p\times \mathcal{G}^\prime$ (of course $p$ must be prime and $|\mathcal{G}^\prime|$ not divisible by $p$ if we want this decomposition to be unique). It gives us the following result.
\begin{mycorollary} Let $G$ be a graph of order $n$, consisting of $q\geq 2$ components with orders differing by at most $1$, where $n$ is sufficiently large. Let $p$ be the smallest odd prime not less than $q$. Then $$es_g(G)\leq \frac{2p}{q^2}n^2 +O(n^{36/23}).$$ \end{mycorollary} Note that if also $q$ is sufficiently large, then we obtain $es_g(G)\leq 2n^2/q +O(n^{36/23}).$ \begin{myproposition}\label{forest} For each forest $F$ with $m$ edges, $es_g(F)=m$. Moreover, any weighting of the edges is achievable for an arbitrary choice of the label of one vertex in each component. \end{myproposition} \begin{proof} Process the edges of each component in a root-to-leaves order, starting from the vertex with the prescribed label. Given any edge that is still not weighted, if one of its vertices has label $a$, and the edge is supposed to be weighted with $b$, it is enough to put $b-a$ on the other vertex. \end{proof} The notion of the coloring number of a graph was introduced by Erd\H{o}s and Hajnal in \cite{ErdHaj}. For a given graph $G$ by ${\rm col}(G)$ we denote its coloring number, that is the least integer $k$ such that each subgraph of $G$ has minimum degree less than $k$. Equivalently, it is the smallest $k$ for which we may linearly order all vertices of $G$ into a sequence $v_1,v_2,\ldots,v_n$ so that every vertex $v_i$ has at most $k-1$ neighbors preceding it in the sequence. Hence $\chi(G)\leq {\rm col}(G)\leq \Delta(G)+1$. Note that ${\rm col}(G)$ equals the degeneracy of $G$ plus $1$, and thus the result below may be formulated in terms of either of the two graph invariants.\\ \begin{mytheorem}\label{col_upper} For every graph $G=(V,E)$, there exists a $\mathcal{G}$-edge irregular labeling for any Abelian group $\mathcal{G}$ of order $|\mathcal{G}|\geq ({\rm col}(G)-1)(|E|-1)+1$. \end{mytheorem} \begin{proof} By Proposition \ref{forest} we can assume that $G$ is not a forest. Fix any Abelian group $\mathcal{G}$ of order $|\mathcal{G}|\geq ({\rm col}(G)-1)(|E(G)|-1)+1$. Let $v_1,v_2,\ldots,v_n$ be the ordering of $V(G)$ witnessing the value of ${\rm col}(G)$.
We start by putting an arbitrary color on $v_1$. Then we will color the remaining vertices of $G$ with elements of $\mathcal{G}$ in $n-1$ stages, each corresponding to a consecutive vertex from among $v_2,v_3,\ldots,v_n$. Initially no vertex except $v_1$ is colored. Then at each stage $i$, $i=2,3,\ldots,n$, we color the vertex $v_i$. We choose a color avoiding sum conflicts with the already weighted edges, so that at all times the partial edge coloring has the desired property. Namely, we choose a color $w(v_i)\in \mathcal{G}$ so that every weight $wd(v_iv_j)$ with $j<i$ and $v_iv_j\in E(G)$ is distinguished from the weight $wd(v_tv_k)$ of any previously weighted edge $v_tv_k\in E(G)$ with $t<k<i$. Each pair consisting of one of the at most ${\rm col}(G)-1$ back-edges at $v_i$ and one of the at most $|E(G)|-1$ other edges excludes at most one element of $\mathcal{G}$, so at most $({\rm col}(G)-1)(|E(G)|-1)$ colors are forbidden and a feasible choice for $w(v_i)$ always exists. \end{proof} We immediately obtain the following result. \begin{mycorollary}\label{nullcol} For each graph $G$ of order at least $4$, $es_g(G)\leq ({\rm col}(G)-1)(|E(G)|-1)+1$. \end{mycorollary} Taking into account that for every planar graph $G$ we have ${\rm col}(G)\leq6$, we obtain the following corollary. \begin{mycorollary}\label{Planar} For each planar graph $G$ of order at least $4$, $es_g(G)\leq 5|E(G)|-4$. \end{mycorollary} Note also that if we additionally want the coloring of the vertices to be injective, then within the proof of Theorem~\ref{col_upper} above, we obtain at most $n-1$ additional constraints while choosing a color for a given vertex. Consequently, by a straightforward adaptation of the proof above, we obtain the following. \begin{mycorollary}\label{har} For each graph $G$ of order at least $4$, ${\rm{har}}(G)\leq |V(G)|+ ({\rm col}(G)-1)(|E(G)|-1)$. \end{mycorollary} The exact value of $es_g(C_n)$, where $C_n$ is a cycle of order $n$, is given by the following theorem. \begin{mytheorem} Let $C_n$ be an arbitrary cycle of order $n\geq 3$. Then $$ es_g(C_n)=\begin{cases} n+1,&\text{when } n\equiv 2 \imod 4,\\ n,&\text{otherwise.} \end{cases} $$ Moreover, the respective labeling exists for an arbitrary choice of the label of any vertex.
\end{mytheorem} Remark: in fact, the labeling can be found for any group of order at least $es_g(C_n)$. \begin{proof} Labeling the vertices so as to distinguish the edge weights is in this case equivalent to labeling the edges so as to distinguish the weighted degrees of the vertices (we label the line graph; moreover, $m=n$). Thus the theorem is a simple corollary of Theorem \ref{AnhCic1}. \end{proof} \begin{mytheorem}\label{dwudzielne} Let $G=K_{m,n}$. Then $es_g(G)=mn$. \end{mytheorem} \noindent\textbf{Proof.} Let $\Gamma$ be an Abelian group of order $mn$. One of the consequences of the fundamental theorem of finite Abelian groups is that for any divisor $k$ of $|\Gamma|$ there exists a subgroup $H$ of $\Gamma$ of order $k$. Therefore there exists $\Gamma_0< \Gamma$ such that $|\Gamma_0|=m$. Let $V_1$ and $V_2$ be the partition sets of $G$ such that $|V_1|=m$ and $|V_2|=n$. Put all elements of $\Gamma_0$ on the vertices of the set $V_1$, whereas on the vertices of $V_2$ put all coset representatives for $\Gamma/\Gamma_0$. Note that all edges incident with a vertex $v\in V_2$ obtain different weights, which are elements of the coset $w(v)+\Gamma_0$. Since distinct vertices of $V_2$ represent distinct cosets, the coset decomposition of $\Gamma$ shows that all $mn$ edge weights are distinct, and we are done.~\qed\\ From the above we obtain the following upper bound for bipartite graphs. \begin{mycorollary} Let $G$ be a bipartite graph of order $n$. Then $es_g(G)\leq \left\lceil \frac{n^2-1}{4}\right\rceil$. \end{mycorollary} \noindent\textbf{Proof.} Let $G$ have partition sets $V_1$ and $V_2$ of orders $n_1$ and $n-n_1$, respectively. Obviously $G$ is a subgraph of $K_{n_1,n-n_1}$, so by Theorem~\ref{dwudzielne} we obtain $es_g(G)\leq es_g(K_{n_1,n-n_1})= {n_1(n-n_1)}\leq \left\lceil \frac{n^2-1}{4}\right\rceil$.~\qed\\ \section{Final remarks} In the paper we presented a new graph invariant, the group edge irregularity strength $es_g(G)$. We presented the relations between it and other parameters, like the harmonious order ${\rm{har}}(G)$.
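The coset construction in the proof of Theorem~\ref{dwudzielne} is easy to illustrate concretely. The Python sketch below (our illustration, specialized to the cyclic group $\mathbb{Z}_{mn}$) labels one side of $K_{3,4}$ with the subgroup of order $3$ and the other with coset representatives, and checks that all $12$ edge weights are distinct.

```python
# Coset labeling of K_{m,n} over the cyclic group Z_{mn}, a sketch of the
# construction above specialized to the cyclic case (our illustration).
m, n = 3, 4
order = m * n

subgroup = [n * i % order for i in range(m)]   # subgroup of order m: multiples of n
coset_reps = list(range(n))                    # one representative per coset of it

# Labels for the two sides of K_{m,n}; each edge weight is a sum mod mn.
weights = [(u + v) % order for u in subgroup for v in coset_reps]

# Distinct cosets partition Z_{mn}, so all mn edge weights are distinct.
assert sorted(weights) == list(range(order))
```

Each vertex of the second side sees exactly the coset it represents, and distinct cosets are disjoint, which is the whole argument.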
We also gave some lower and upper bounds for $es_g(G)$. Based on them, we state the following conjecture. \begin{myconjecture} There exists a constant $c>0$ such that for every graph $G$ of size $m$, $es_g(G)\leq 2m+c$. \end{myconjecture} Let us consider a version of the problem for directed graphs. Assume that the weight of an arc $(u,v)$ with tail $u$ and head $v$ is now computed as \begin{eqnarray*} wd((u,v))=w(u)-w(v). \end{eqnarray*} For example, if one considers directed acyclic graphs (DAGs), the following result, analogous to Theorem \ref{col_upper}, easily follows from the fact that the vertices of such a digraph may be ordered so that each of them is preceded by all its in-neighbors (or all its out-neighbors). \begin{myproposition} Let $D$ be a DAG with $m$ arcs, maximum indegree $\Delta^-$ and maximum outdegree $\Delta^+$. Then $es_g(D)\leq (m-1)\min\{\Delta^-,\Delta^+\}+1$. \end{myproposition} Observe that directed acyclic graphs are connected with an old~problem of \textit{difference bases} from number theory \cite{ref_RedRen}, therefore the following problem would be interesting. \begin{myproblem} Find $es_g(D)$ for an arbitrary digraph $D$. \end{myproblem} \nocite{*} \end{document}
\begin{document} \title{Ensemble Conditional Variance Estimator for Sufficient Dimension Reduction} \begin{abstract} \textit{Ensemble Conditional Variance Estimation} (\texttt{ECVE}) is a novel sufficient dimension reduction (SDR) method in regressions with continuous response and predictors. \texttt{ECVE} applies to general non-additive error regression models. It operates under the assumption that the predictors can be replaced by a lower dimensional projection without loss of information. It is a semiparametric, forward-regression-based, exhaustive sufficient dimension reduction estimation method that is shown to be consistent under mild assumptions. It is shown to outperform \textit{central subspace mean average variance estimation} (\texttt{csMAVE}), its main competitor, in several simulation settings and in a benchmark data set analysis. \end{abstract} \section{Introduction} \label{sec:intro} Let $(\Omega ,{\mathcal {F}},\mathbb{ P})$ be a probability space. Let $Y$ be a univariate continuous response and ${\mathbf X}$ a $p$-variate continuous predictor, jointly distributed, with $(Y,{\mathbf X}^T)^T:\Omega \to {\mathbb R}^{p+1}$. We consider the linear sufficient dimension reduction model \begin{align} Y = g_{\textit{cs}}({\mathbf B}^T {\mathbf X}, \epsilon), \label{mod:e_basic} \end{align} where ${\mathbf X} \in {\mathbb R}^p$ is independent of the random variable $\epsilon$, ${\mathbf B}$ is a $ p \times k$ matrix of rank $k$, and $g_{\textit{cs}}: {\mathbb R}^{k+1} \to {\mathbb R}$ is an unknown non-constant function. \cite[Thm. 1]{ZENG2010271} showed that if $(Y,{\mathbf X}^T)^T$ has a joint continuous distribution, \eqref{mod:e_basic} is equivalent to \begin{align}\label{dimredspace} Y \perp \!\!\! \perp {\mathbf X} \mid {\mathbf B}^T{\mathbf X}, \end{align} where the symbol $\perp \!\!\! \perp$ indicates stochastic independence. The matrix ${\mathbf B}$ is not unique.
It can be replaced by any basis of its column space, $\operatorname{span}\{{\mathbf B}\}$. Let ${\mathcal S}$ denote a subspace of ${\mathbb R}^p$, and let $\mathbf{P}_{{\mathcal S}}$ denote the orthogonal projection onto ${\mathcal S}$ with respect to the usual inner product. If the response $Y$ and predictor vector ${\mathbf X}$ are independent conditionally on $\mathbf{P}_{{\mathcal S}}{\mathbf X}$, then $\mathbf{P}_{{\mathcal S}}{\mathbf X}$ can replace ${\mathbf X}$ as the predictor in the regression of $Y$ on ${\mathbf X}$ without loss of information. Such subspaces ${\mathcal S}$ are called dimension reduction subspaces and their intersection, provided it satisfies the conditional independence condition \eqref{dimredspace}, is called the central subspace and denoted by $\mathcal{S}_{Y\mid\X}$ [see \cite[p. 105]{Cook1998}, \cite{Cook2007}]. By their equivalence, under both models \eqref{mod:e_basic} and \eqref{dimredspace}, $F_{Y \mid {\mathbf X}}(y) = F_{Y \mid {\mathbf B}^T{\mathbf X}}(y)$ and $\mathcal{S}_{Y\mid\X}=\operatorname{span}\{{\mathbf B}\}$. Since the conditional distribution of $Y \mid {\mathbf X}$ is the same as that of $Y \mid {\mathbf B}^T{\mathbf X}$, ${\mathbf B}^T{\mathbf X}$ contains all the information in ${\mathbf X}$ for modeling the target variable $Y$, and it can replace ${\mathbf X}$ without any loss of information. If the error term in model \eqref{mod:e_basic} is additive with $\mathbb{E}(\epsilon \mid{\mathbf X})=0$, \eqref{mod:e_basic} reduces to $Y = g({\mathbf B}^T{\mathbf X}) + \epsilon$. Now, $\mathbb{E}(Y \mid {\mathbf X}) =\mathbb{E}(Y \mid {\mathbf B}^T{\mathbf X})= \mathbb{E}(Y \mid \mathbf{P}_{{\mathcal S}} {\mathbf X})$, where ${\mathcal S}=\operatorname{span}\{{\mathbf B}\}$. The mean subspace, denoted by $\mathcal{S}_{\E\left(Y\mid\X\right)}$, is the intersection of all subspaces ${\mathcal S}$ such that $\mathbb{E}(Y \mid {\mathbf X}) = \mathbb{E}(Y \mid \mathbf{P}_{{\mathcal S}} {\mathbf X})$ \cite{CookLi2002}. 
In this case, \eqref{mod:e_basic} becomes the classic mean subspace model with $\operatorname{span}\{{\mathbf B}\} = \mathcal{S}_{\E\left(Y\mid\X\right)}$. \cite{CookLi2002} showed that the mean subspace is a subset of the central subspace, $ \mathcal{S}_{\E\left(Y\mid\X\right)} \subseteq \mathcal{S}_{Y\mid\X}$. Several \textit{linear sufficient dimension reduction} (SDR) methods estimate $\mathcal{S}_{\E\left(Y\mid\X\right)}$ consistently (\cite{AdragniCook2009,MaZhu2013, Li2018,Xiaetal2002}). \textit{Linear} refers to the reduction being a linear transformation of the predictor vector. \textit{Minimum Average Variance Estimation} (\texttt{MAVE}) \cite{Xiaetal2002} is the most competitive and accurate method among them. \texttt{MAVE} differs from the majority of SDR methods in that it is not \textit{inverse regression} based, such as, for example, the widely used \textit{Sliced Inverse Regression} (SIR, \cite{Li1991}). \texttt{MAVE} requires minimal assumptions on the distribution of $(Y,{\mathbf X}^T)^T$ and is based on estimating the gradients of the regression function $\mathbb{E}(Y \mid {\mathbf X})$ via local-linear smoothing \cite{locallinearsmoothing}. The \textit{central subspace mean average variance estimation} (\texttt{csMAVE}) \cite{WangXia2008,MAVEpackage} is the extension of \texttt{MAVE} that consistently and exhaustively estimates $\operatorname{span}\{{\mathbf B}\}$ in model \eqref{mod:e_basic} without restrictive assumptions limiting its applicability. \texttt{csMAVE} has remained the gold standard since it was proposed by \cite{WangXia2008}. It is based on repeatedly applying \texttt{MAVE} to the sliced target variables $f_u(Y) = \mathbf 1_{\{s_{u-1} < Y \leq s_u\}}$ for $s_1 < \ldots < s_H$. \cite{WangXia2008} showed that the mean subspaces of the sliced $Y$ can be combined to recover the central subspace $\mathcal{S}_{Y\mid\X}$.
Several papers made contributions toward bridging the gap between the central mean subspace and the central subspace [see \cite{YinLi2011} for a list of references]. \cite{YinLi2011} recognized that these approaches pointed in the same direction: if one can estimate the mean subspace $\mathcal{S}_{\mathbb{E}(f(Y) \mid {\mathbf X})}$ for sufficiently many functions $f \in \mathcal{F}$ for a family of functions $\mathcal{F}$, then one can recover the central subspace. Such families that are rich enough to obtain the desired outcome are called \textit{characterizing ensembles} by \cite{YinLi2011}, who also proposed and studied such functional families [see also \cite{Li2018} for an overview]. In this paper, we extend the \textit{conditional variance estimator} (\texttt{CVE}) \cite{FertlBura} to the exhaustive \textit{ensemble conditional variance estimator} that fully recovers the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$. \textit{Conditional variance estimation} is a semi-parametric method that estimates $\mathcal{S}_{\E\left(Y\mid\X\right)}$ consistently under minimal regularity assumptions on the distribution of $(Y,{\mathbf X}^T)^T$. In contrast to other SDR approaches, it operates by identifying the orthogonal complement of $\mathcal{S}_{\E\left(Y\mid\X\right)}$. In this paper we apply the \textit{conditional variance estimator} (\texttt{CVE}) to identify the mean subspaces $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$ of transformed responses $f_t(Y)$, where the $f_t$ are elements of an \textit{ensemble} $\mathcal{F} = \{f_t : t \in \Omega_T\}$, and then combine them to form the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$. The paper is organized as follows. In Section~\ref{Preliminaries} we define the notation and concepts we use throughout the paper. A short overview of ensembles is given in Section~\ref{e_motivation}.
The \textit{ensemble conditional variance estimator} (\texttt{ECVE}) is introduced in Section~\ref{sec:ensembleCVE} and the estimation procedure in Section~\ref{e_estimation}. In Section~\ref{sec:consistency}, the consistency of the \textit{ensemble conditional variance estimator} for the central subspace is shown. We assess and compare the performance of the estimator vis-\`a-vis \texttt{csMAVE} via simulations in Section~\ref{sec:simulations} and by applying it to the Boston Housing data in Section~\ref{sec:dataAnalysis}. We conclude in Section~\ref{sec:discussion}. \section{Preliminaries}\label{Preliminaries} We denote by $F_{\mathbf Z}$ the cumulative distribution function (cdf) of a random variable or vector ${\mathbf Z}$. We drop the subscript when the attribution is clear from the context. For a matrix ${\mathbf A}$, $\|{\mathbf A}\|$ denotes its Frobenius norm, and $\|\bm{a}\|$ the Euclidean norm of a vector $\bm{a}$. Scalar product refers to the usual Euclidean scalar product, and $\perp$ denotes orthogonality with respect to it. The probability density function of ${\mathbf X}$ is denoted by $f_{{\mathbf X}}$, and its support by $\mbox{supp}(f_{{\mathbf X}})$. The notation $Y \perp \!\!\! \perp {\mathbf X}$ signifies stochastic independence of the random vector ${\mathbf X}$ and the random variable $Y$. The $j$-th standard basis vector, with zeroes everywhere except for a 1 in the $j$-th position, is denoted by $\mathbf{e}_j \in {\mathbb R}^p$, $\greekbold{\iota}_p = (1,1,\ldots,1)^T \in {\mathbb R}^p$, and ${\mathbf I}_p = (\mathbf{e}_1,\ldots,\mathbf{e}_p)$ is the identity matrix of order $p$. For any matrix ${\mathbf M} \in {\mathbb R}^{p \times q}$, $\mathbf{P}_{{\mathbf M}}$ denotes the orthogonal projection matrix onto its column or range space $\operatorname{span}\{{\mathbf M}\}$; i.e., $\mathbf{P}_{{\mathbf M}} = \mathbf{P}_{\operatorname{span}\{{\mathbf M}\}} ={\mathbf M}({\mathbf M}^T {\mathbf M})^{-1} {\mathbf M}^T \in {\mathbb R}^{p \times p}$.
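The projection formula $\mathbf{P}_{{\mathbf M}} = {\mathbf M}({\mathbf M}^T{\mathbf M})^{-1}{\mathbf M}^T$ can be verified numerically. The following minimal Python/NumPy sketch is our illustration, not part of the paper's methodology:

```python
import numpy as np

def proj(M):
    """Orthogonal projection onto span{M}: P_M = M (M^T M)^{-1} M^T."""
    return M @ np.linalg.solve(M.T @ M, M.T)

M = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])  # p = 3, q = 2, full column rank
P = proj(M)

# Defining properties of an orthogonal projection matrix:
assert np.allclose(P, P.T)    # symmetric
assert np.allclose(P @ P, P)  # idempotent
assert np.allclose(P @ M, M)  # acts as the identity on span{M}
```

Any vector orthogonal to $\operatorname{span}\{{\mathbf M}\}$ is mapped to zero, so ${\mathbf I}_p - \mathbf{P}_{{\mathbf M}}$ projects onto the orthogonal complement.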
For $q \leq p$, \begin{equation}\label{Smanifold} {\mathcal S}(p,q) = \{{\mathbf V} \in {\mathbb R}^{p \times q}: {\mathbf V}^T{\mathbf V} = {\mathbf I}_q\}, \end{equation} denotes the Stiefel manifold that comprises all $p \times q$ matrices with orthonormal columns. ${\mathcal S}(p,q)$ is compact with $\dim({\mathcal S}(p,q)) = pq - q(q+1)/2$ [see \cite{Boothby} and Section 2.1 of \cite{Tagare2011}]. The set \begin{equation}\label{Grassman_def} Gr(p,q) = {\mathcal S}(p,q)/{\mathcal S}(q,q) \end{equation} denotes a Grassmann manifold \cite{Grassman} that contains all $q$-dimensional subspaces in ${\mathbb R}^p$. $Gr(p,q)$ is the quotient space of ${\mathcal S}(p,q)$ by the group ${\mathcal S}(q,q)$ of all $q \times q$ orthogonal matrices. For any ${\mathbf V} \in {\mathcal S}(p,q)$, defined in \eqref{Smanifold}, we generically denote a basis of the orthogonal complement of its column space $\operatorname{span}\{{\mathbf V}\}$ by ${\mathbf U}$. That is, ${\mathbf U} \in {\mathcal S}(p,p-q)$ such that $\operatorname{span}\{{\mathbf V}\} \perp \operatorname{span}\{{\mathbf U}\}$ and $\operatorname{span}\{{\mathbf V}\} \oplus \operatorname{span}\{{\mathbf U}\} = {\mathbb R}^p$, ${\mathbf U}^T{\mathbf V} = {\mathbf 0} \in {\mathbb R}^{(p-q) \times q}, {\mathbf U}^T{\mathbf U} = {\mathbf I}_{p-q}$. For any ${\mathbf x}, \mathbf s_0 \in {\mathbb R}^p$ we can always write \begin{equation}\label{ortho_decomp} {\mathbf x} = \mathbf s_0 + \mathbf{P}_{\mathbf V} ({\mathbf x} - \mathbf s_0) + \mathbf{P}_{\mathbf U} ({\mathbf x} - \mathbf s_0) = \mathbf s_0 + {\mathbf V}{\mathbf r}_1 + {\mathbf U}{\mathbf r}_2, \end{equation} where ${\mathbf r}_1 = {\mathbf V}^T({\mathbf x}-\mathbf s_0) \in {\mathbb R}^{q}, {\mathbf r}_2 = {\mathbf U}^T({\mathbf x}-\mathbf s_0) \in {\mathbb R}^{p-q}$. \subsection{Ensembles}\label{e_motivation} \begin{comment} In the sequel, we refer to the following assumptions as needed.
\begin{assumption1a}\label{A1} Model \eqref{mod:basic} holds with $g:{\mathbb R}^k \to {\mathbb R}$ non constant in all arguments, ${\mathbf X}$ stochastically independent from $\epsilon$, $\mathbb{E}(\epsilon)=0$, $\mathbb{V}\mathrm{ar}(\epsilon)= \eta^2 < \infty$, and $ \Sigmabf_{\x} $ is positive definite. \end{assumption1a} \begin{assumption2a}\label{A2} The link function $g$ is continuous and $f_{{\mathbf X}}$ is continuous. \end{assumption2a} \begin{assumption3a}\label{A3} $\mathbb{E}(|Y|^4) < \infty$. \end{assumption3a} \begin{assumption4a}\label{A4} $\text{supp}(f_{\mathbf X})$ is compact. \end{assumption4a} \begin{assumption5a}\label{A5} $|Y| < M_2 < \infty$ almost surely. \end{assumption5a} The set \begin{equation}\label{Smanifold} S(p,q) = \{{\mathbf V} \in {\mathbb R}^{p \times q}: {\mathbf V}^T{\mathbf V} = {\mathbf I}_q\}, \end{equation} denotes a Stiefel manifold that comprises all $p \times q$ matrices with orthonormal columns. $S(p,q)$ is compact and $\dim(S(p,q)) = pq - q(q+1)/2$ [see \cite{Boothby} and Section 2.1 of \cite{Tagare2011}]. \end{comment} \cite{YinLi2011} introduced \textit{ensembles} as a device to extend mean subspace to central subspace SDR methods. The \textit{ensemble} approach of combining mean subspaces to span the central subspace comprises two components: (a) a rich family of transformation functions of the response and (b) a sampling mechanism for drawing the functions from the ensemble to ascertain coverage of the central subspace. To distinguish between families of functions and ensembles, \cite{YinLi2011} use the term \textit{parametric} ensemble, which we define next. \begin{definition}\label{parametricensemble} A family $\mathcal{F}$ of measurable functions from ${\mathbb R}$ to ${\mathbb R}$ is called an ensemble. If $\mathcal{F}$ is indexed by an index set $\Omega_T$; i.e., $\mathcal{F} = \{f_t : t \in \Omega_T\}$, then $\mathcal{F}$ is called a parametric ensemble.
\end{definition} Let $\mathcal{F}$ be an ensemble and let $f \in \mathcal{F}$ be applied to $Y$ following model~\eqref{mod:e_basic}. The space ${\mathcal S}_{\mathbb{E}(f(Y)\mid {\mathbf X})}$ is defined to be the mean subspace of the transformed random variable $f(Y)$ [see \cite{Cook1998} or \cite{CookLi2002}]. \begin{definition} An ensemble $\mathcal{F}$ characterizes the central subspace $\mathcal{S}_{Y\mid\X}$ if \begin{align}\label{Fa_characterises_cs} \operatorname{span}\{\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}: f_t \in \mathcal{F}\} = \mathcal{S}_{Y\mid\X}. \end{align} \end{definition} As an example, the parametric ensemble $\mathcal{F} =\{f_t: t \in \Omega_T\} = \{1_{\{z \leq t\}}: t \in {\mathbb R}\}$ characterizes the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$, since $\mathbb{E}(f_t(Y)\mid{\mathbf X})$ is the conditional cumulative distribution function evaluated at $t$. To see this, let ${\mathbf B} \in {\mathcal S}(p,k)$ be such that $\mathbb{E}(f_t(Y) \mid {\mathbf X}) = \mathbb{E}(f_t(Y) \mid {\mathbf B}^T {\mathbf X})$ for all $t$. Then, $F_{Y \mid {\mathbf X}}(t) = \mathbb{E}(f_t(Y) \mid {\mathbf X}) = \mathbb{E}(f_t(Y) \mid {\mathbf B}^T {\mathbf X}) = F_{Y \mid {\mathbf B}^T{\mathbf X}}(t)$ for all $t$. Varying over the parametric ensemble $\mathcal{F}$, in this case over $t \in {\mathbb R}$, recovers the conditional cumulative distribution function. This \texttt{indicator} ensemble thus fully recovers the conditional distribution of $Y\mid {\mathbf X}$ and hence also the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$, \begin{align*} \operatorname{span}\{\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}: f_t \in \mathcal{F}\} = \operatorname{span}\{\mathcal{S}_{\mathbb{E}\left(1_{\{Y \leq t\}} \mid {\mathbf X} \right)}: t \in {\mathbb R}\} = \mathcal{S}_{Y\mid\X}. \end{align*} We next reproduce from \cite{YinLi2011} a list of parametric ensembles $\mathcal{F}$, and associated regularity conditions, that characterize $\mathcal{S}_{Y\mid\X}$.
\begin{description} \item[Characteristic ensemble] $\mathcal{F} =\{f_t: t \in \Omega_T\} = \{\exp(i t \cdot): t \in {\mathbb R}\}$ \item[Indicator ensemble] $\mathcal{F} = \{1_{\{z \leq t\}}: t \in {\mathbb R}\}$, where $\operatorname{span}\{\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}: f_t \in \mathcal{F}\}$ recovers the conditional cumulative distribution function \item[Kernel ensemble] $\mathcal{F} = \{ h^{-1}K\left((z-t)/h\right): t \in {\mathbb R}, h > 0\}$, where $K$ is a kernel suitable for density estimation, and $\operatorname{span}\{\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}: f_t \in \mathcal{F}\}$ recovers the conditional density \item[Polynomial ensemble] $\mathcal{F} = \{z^t: t = 1,2,3,...\}$, where $\operatorname{span}\{\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}: f_t \in \mathcal{F}\}$ recovers the conditional moment generating function \item[Box-Cox ensemble] $\mathcal{F} = \{(z^t-1)/t : t \neq 0\} \cup \{ \log(z)\}$, the Box-Cox transformations \item[Wavelet ensemble] the Haar wavelets \end{description} The \texttt{characteristic} and \texttt{indicator} ensembles describe the conditional characteristic and distribution function of $Y\mid {\mathbf X}$, respectively, which always exist and determine the distribution uniquely. If the conditional density function $f_{Y\mid {\mathbf X}}$ of $Y\mid {\mathbf X}$ exists, then the \texttt{kernel} ensemble characterizes the conditional distribution of $Y\mid {\mathbf X}$. Further, if the conditional moment generating function exists, then the \texttt{polynomial} ensemble characterizes $\mathcal{S}_{Y\mid\X}$. \cite{YinLi2011} used the ensemble device to extend \texttt{MAVE} \cite{Xiaetal2002}, which targets the mean subspace, to its ensemble version that also estimates the central subspace $\mathcal{S}_{Y\mid\X}$ consistently. Theorem~\ref{Fa_characteris_cs_thm} \cite[Thm~2.1]{YinLi2011} establishes when an ensemble $\mathcal{F}$ is rich enough to characterize $\mathcal{S}_{Y\mid\X}$.
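The parametric families above can be viewed as function factories indexed by $t$. The following Python sketch is our own illustration of that correspondence (the dictionary keys and the explicit bandwidth argument $h$ of the kernel entry are ours, not notation from the paper):

```python
import math

# Each entry maps an index t (and a bandwidth h where relevant) to a function f_t.
ensembles = {
    # f_t(z) = exp(i t z), represented as a complex number
    "characteristic": lambda t: (lambda z: complex(math.cos(t * z), math.sin(t * z))),
    # f_t(z) = 1{z <= t}
    "indicator": lambda t: (lambda z: 1.0 if z <= t else 0.0),
    # f_{t,h}(z) = h^{-1} K((z - t)/h) with a Gaussian kernel K
    "kernel": lambda t, h: (lambda z: math.exp(-((z - t) / h) ** 2 / 2)
                            / (h * math.sqrt(2 * math.pi))),
    # f_t(z) = z^t, t = 1, 2, 3, ...
    "polynomial": lambda t: (lambda z: z ** t),
    # Box-Cox: f_t(z) = (z^t - 1)/t for t != 0, log(z) for t = 0
    "box_cox": lambda t: (lambda z: (z ** t - 1) / t) if t != 0 else math.log,
}

f = ensembles["indicator"](0.5)
assert f(0.3) == 1.0 and f(0.7) == 0.0
assert abs(ensembles["characteristic"](0.0)(1.23) - 1.0) < 1e-12
assert abs(ensembles["box_cox"](0)(math.e) - 1.0) < 1e-12
```

Averaging $f_t(Y)$ over samples with comparable ${\mathbf B}^T{\mathbf X}$, and varying $t$, is what traces out the conditional cdf (indicator), characteristic function (characteristic), or moments (polynomial).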
\begin{thm}\label{Fa_characteris_cs_thm} Let $\mathcal{B} = \{1_A: A \text{ is a Borel set in}\; \text{supp}(Y) \}$ be the set of indicator functions on $\text{supp}(Y)$ and $ L^2(F_Y)$ be the set of square integrable functions with respect to the distribution $F_Y$ of the response $Y$. If $\mathcal{F} \subseteq L^2(F_Y)$ is dense in $\mathcal{B} \subseteq L^2(F_Y)$, then the ensemble $\mathcal{F}$ characterizes the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$. \end{thm} In Theorem~\ref{finite_characterisation_of_cs} we show that finitely many functions of an ensemble $\mathcal{F}$ suffice to characterize the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$. \begin{thm}\label{finite_characterisation_of_cs} If a parametric ensemble $\mathcal{F}$ characterizes $\mathcal{S}_{Y\mid\X}$, then there exist finitely many functions $f_t \in \mathcal{F}$, with $t = 1,\ldots,m$ and $m \in \mathbb{N}$, such that \begin{align*} \operatorname{span}\{\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}: t \in 1,\ldots,m\} = \mathcal{S}_{Y\mid\X}. \end{align*} \end{thm} \begin{proof}[Proof:] Let $k = \dim(\mathcal{S}_{Y\mid\X}) \leq p$. Since $\mathcal{F}$ characterizes $\mathcal{S}_{Y\mid\X}$, $\dim(\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}) = k_t \leq k$ by \eqref{Fa_characterises_cs} for any $t$. If $k_t = 0$, then $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} = \{{\mathbf 0}\}$, so the corresponding $f_t$ does not contribute to \eqref{Fa_characterises_cs}. Assume $k_t \ge 1$. Choose functions $f_t$ one at a time, retaining an $f_t$ only if $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$ is not contained in the span of the subspaces already retained; each retained $f_t$ then increases the dimension of that span by at least $1$. Since $\dim(\mathcal{S}_{Y\mid\X})=k <\infty$, at most $k$ functions are retained before their mean subspaces span $\mathcal{S}_{Y\mid\X}$.
\end{proof} The importance of Theorem \ref{finite_characterisation_of_cs} lies in the fact that the search to characterize the central subspace is over a finite set, even though it does not offer tools for identifying the elements of the ensemble. \section{Ensemble CVE}\label{sec:ensembleCVE} Throughout the paper, we refer to the following assumptions as needed. \begin{assumption1e} Model \eqref{mod:e_basic}, $Y = g_{\textit{cs}}({\mathbf B}^T{\mathbf X}, \epsilon)$ holds with $Y \in {\mathbb R}$, $g_{\textit{cs}}:{\mathbb R}^k \times {\mathbb R} \to {\mathbb R}$ non constant in the first argument, ${\mathbf B}= ({\mathbf b}_1, ..., {\mathbf b}_k) \in {\mathcal S}(p,k)$, ${\mathbf X} \in {\mathbb R}^p$ is independent of $\epsilon$, the distribution of ${\mathbf X}$ is absolutely continuous with respect to the Lebesgue measure in ${\mathbb R}^p$, $\text{supp}(f_{\mathbf X})$ is convex, and $ \mathbb{V}\mathrm{ar}({\mathbf X}) = \Sigmabf_{\x} $ is positive definite. \end{assumption1e} \begin{assumption2e} The density $f_{\mathbf X} : {\mathbb R}^p \to [0,\infty)$ of ${\mathbf X}$ is twice continuously differentiable with compact support $\text{supp}(f_{\mathbf X})$. \end{assumption2e} \begin{assumption3e} For a parametric ensemble $\mathcal{F}$, its index set $\Omega_T$ is endowed with a probability measure $F_T$ such that for all $t \in \Omega_T$ with $ \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} \neq \{{\mathbf 0}\}$, \begin{align*} \mathbb{ P}_{F_T} \left( \{\tilde{t} \in \Omega_T: {\mathcal S}_{\mathbb{E}(f_{\tilde{t}}(Y)\mid {\mathbf X})} = \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}\} \right) > 0 \end{align*} \end{assumption3e} \begin{assumption4e} For an ensemble $\mathcal{F}$ we assume that for all $f \in \mathcal{F}$, the conditional expectation \begin{align*} \mathbb{E}\left(f(Y) \mid {\mathbf X} \right) \end{align*} is twice continuously differentiable in the conditioning argument. 
Further, for all $f \in \mathcal{F}$ \begin{align*} \mathbb{E}(|f(Y)|^8) < \infty \end{align*} \end{assumption4e} Assumption (E.1) ensures the existence and uniqueness of $\mathcal{S}_{Y\mid\X} = \operatorname{span}\{{\mathbf B}\}$. Furthermore, it allows the mean subspace to be a proper subset of the central subspace, i.e. $\mathcal{S}_{\E\left(Y\mid\X\right)} \subsetneq \mathcal{S}_{Y\mid\X}$. In Assumption (E.2), the compactness assumption for $\text{supp}(f_{\mathbf X})$ is not as restrictive as it might seem. \cite[Prop. 11]{CompactAssumption} showed that there is a compact set $K \subset {\mathbb R}^p$ such that $\mathcal{S}_{Y \mid {\mathbf X}_{|K}} = \mathcal{S}_{Y\mid\X}$, where ${\mathbf X}_{|K} = {\mathbf X} 1_{\{{\mathbf X} \in K\}}$. Assumption (E.3) simply states that the set of indices that characterize the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$ is not a null set. In practice, the probability measure $F_T$ on the index set $\Omega_T$ of a parametric ensemble $\mathcal{F}$ can always be chosen so that this assumption holds. If the characteristic or indicator ensemble is used, (E.4) states that the conditional characteristic or distribution function is twice continuously differentiable. In this case, the eighth moments exist since the complex exponential and indicator functions are bounded. \begin{definition} For $q \leq p \in \mathbb{N}$, $f \in \mathcal{F}$, and any ${\mathbf V} \in {\mathcal S}(p,q)$, we define \begin{equation} \tilde{L}_\mathcal{F}({\mathbf V}, \mathbf s_0,f) = \mathbb{V}\mathrm{ar}\left(f(Y)\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) \label{e_Lvs} \end{equation} where $\mathbf s_0 \in {\mathbb R}^p$ is a non-random shifting point. \end{definition} \begin{definition} Let $\mathcal{F}$ be a parametric ensemble and $F_T$ a cumulative distribution function (cdf) on the index set $\Omega_T$.
For $q \leq p$, and any ${\mathbf V} \in {\mathcal S}(p,q)$, we define \begin{align}\label{e_objective} L_\mathcal{F}({\mathbf V}) &= \int_{\Omega_T}\int_{{\mathbb R}^p}\tilde{L}_\mathcal{F}({\mathbf V},{\mathbf x}, f_t)d F_{{\mathbf X}}({\mathbf x}) d F_T(t) \\ &= \mathbb{E}_{t \sim F_T} \left(\mathbb{E}_{\mathbf X}\left(\tilde{L}_\mathcal{F}({\mathbf V},{\mathbf X},f_t)\right)\right) = \mathbb{E}_{t \sim F_T}(L_\mathcal{F}^*({\mathbf V},f_t)), \notag \end{align} where $F_{{\mathbf X}}$ is the cdf of ${\mathbf X}$, and \begin{align}\label{e_LV1} L_\mathcal{F}^*({\mathbf V},f_t) = \mathbb{E}_{\mathbf X}\left(\tilde{L}_\mathcal{F}({\mathbf V},{\mathbf X},f_t)\right). \end{align} \end{definition} For the identity function, $f_{t_0}(z) = z$, \eqref{e_LV1} is the target function of the \textit{conditional variance estimation} proposed in \cite{FertlBura}. If the random variable $t$ is concentrated on $t_0$; i.e., $t \sim \delta_{t_0}$, then the \textit{ensemble conditional variance estimator} (\texttt{ECVE}) coincides with the \textit{conditional variance estimator} (\texttt{CVE}). The following theorem will be used in establishing the main result of this paper, which obtains the \textit{exhaustive} sufficient reduction of the conditional distribution of $Y$ given the predictor vector ${\mathbf X}$. \begin{comment} \begin{thm} \label{thm4_e} Let ${\mathbf X}$ be a $p$-dimensional continuous random vector with density $f_{\mathbf X}({\mathbf x})$. Under assumption A.2, for $\mathbf s_0 \in \text{supp}(f_{\mathbf X}) \subset {\mathbb R}^p$ and ${\mathbf V} \in S(p,q)$ defined in \eqref{Smanifold}, \eqref{e_density} is a proper density. Under assumptions A.1, A.2 and A.4, \eqref{e_Lvs} and \eqref{e_objective} are well defined and continuous for ${\mathbf V} \in {\mathcal S}(p,q)$ and $\mathbf s_0 \in \text{supp}(f_{\mathbf X})$.
Moreover, \begin{align}\label{e_LtildeVs0} \tilde{L}_\mathcal{F}({\mathbf V},\mathbf s_0,f) = \mu_2({\mathbf V},\mathbf s_0,f) - \mu_1({\mathbf V},\mathbf s_0,f)^2 \end{align} where \begin{displaymath} \mu_l({\mathbf V},\mathbf s_0,f) = \int_{{\mathbb R}^q} \int_{\Omega_\epsilon} f(g({\mathbf B}^\top\mathbf s_0 + {\mathbf B}^\top{\mathbf V}{\mathbf r}_1, e))^l\frac{f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1)}{\int_{{\mathbb R}^q}f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})d{\mathbf r}} d{\mathbf r}_1 dF_\epsilon(e) \end{displaymath} with $F_\epsilon(e)$ the cdf of $\epsilon$, and \begin{align}\label{e_density} f_{{\mathbf X}\mid {\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\}}({\mathbf x}) = \begin{cases} \frac{f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1)}{\int_{{\mathbb R}^q}f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})d{\mathbf r}} & \mbox{if } {\mathbf x} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}, {\mathbf r}_1= {\mathbf V}^\top({\mathbf x}-\mathbf s_0)\\ 0 & \mbox{otherwise} \\ \end{cases} \end{align} being the conditional density of ${\mathbf X}\mid {\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\}$ \end{thm} \end{comment} \begin{thm}\label{Y_decomposition_thm} Assume (E.1) and (E.2) hold, in particular model \eqref{mod:e_basic} holds. Let $\widetilde{{\mathbf B}}$ be a basis of $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$; i.e. $\operatorname{span}\{\widetilde{{\mathbf B}}\} = \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} \subseteq \mathcal{S}_{Y\mid\X} = \operatorname{span}\{{\mathbf B}\}$. Then, for any $f \in \mathcal{F}$ for which assumption (E.4) holds, \begin{align} f(Y) = g(\widetilde{{\mathbf B}}^T{\mathbf X}) + \tilde{\epsilon}, \label{Y_decomposition} \end{align} with $\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X}) = 0$ and $g:{\mathbb R}^{k_t} \to {\mathbb R}$ is a twice continuously differentiable function, where $k_t = \dim(\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)})$. 
\end{thm} By Theorem~\ref{Y_decomposition_thm}, any transformed response $f(Y)$ can be written in the additive error form \eqref{Y_decomposition}. The additive error term is only required to satisfy the mean independence condition $\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X})=0$ in model \eqref{Y_decomposition}. The \textit{conditional variance estimator} \cite{FertlBura} also estimated $\widetilde{{\mathbf B}}$ in \eqref{Y_decomposition}, but under the more restrictive condition of predictor and error independence. \begin{proof}[Proof of Theorem~\ref{Y_decomposition_thm}] \begin{align*} f(Y) &= \mathbb{E}\left( f(Y) \mid {\mathbf X} \right) + \underbrace{f(Y) - \mathbb{E}\left(f(Y) \mid {\mathbf X} \right)}_{\tilde{\epsilon}} = \mathbb{E}\left(f(Y) \mid {\mathbf X} \right) + \tilde{\epsilon} \notag \\ &= \mathbb{E}\left(f(Y) \mid \widetilde{{\mathbf B}}^T{\mathbf X} \right) + \tilde{\epsilon} = g(\widetilde{{\mathbf B}}^T{\mathbf X}) + \tilde{\epsilon}, \end{align*} where $g(\widetilde{{\mathbf B}}^T{\mathbf X})=\mathbb{E}\left(f(Y) \mid \widetilde{{\mathbf B}}^T{\mathbf X} \right)$. By the tower property of the conditional expectation, $\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X}) = \mathbb{E}(f(Y)\mid {\mathbf X}) - \mathbb{E}(\mathbb{E}(f(Y)\mid {\mathbf X})\mid {\mathbf X}) = \mathbb{E}(f(Y)\mid {\mathbf X}) - \mathbb{E}(f(Y)\mid {\mathbf X}) = 0$. The function $g$ is twice continuously differentiable by (E.4). \end{proof} \begin{thm}\label{CVE_targets_meansubspace_thm} Assume (E.1) and (E.2) hold. Let $\mathcal{F}$ be a parametric ensemble, $\mathbf s_0 \in \text{supp}(f_{\mathbf X}) \subset {\mathbb R}^p$, and ${\mathbf V} \in {\mathcal S}(p,q)$ as defined in \eqref{Smanifold}.
Then, for any $f \in \mathcal{F}$ for which assumption (E.4) holds, \begin{align}\label{e_LtildeVs0} \tilde{L}_\mathcal{F}({\mathbf V},\mathbf s_0,f) = \mu_2({\mathbf V},\mathbf s_0,f) - \mu_1^2({\mathbf V},\mathbf s_0,f) + \mathbb{V}\mathrm{ar}(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}) \end{align} where \begin{equation}\label{mu_l} \mu_l({\mathbf V},\mathbf s_0,f) = \int_{{\mathbb R}^q} g(\widetilde{{\mathbf B}}^T\mathbf s_0 + \widetilde{{\mathbf B}}^T{\mathbf V}{\mathbf r}_1)^l\frac{f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1)}{\int_{{\mathbb R}^q}f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})d{\mathbf r}} d{\mathbf r}_1 = \frac{t^{(l)}({\mathbf V},\mathbf s_0,f)}{t^{(0)}({\mathbf V},\mathbf s_0,f)}, \end{equation} for $g$ given in \eqref{Y_decomposition} with \begin{equation}\label{tl} t^{(l)}({\mathbf V},\mathbf s_0,f) = \int_{{\mathbb R}^q} g(\widetilde{{\mathbf B}}^T\mathbf s_0 + \widetilde{{\mathbf B}}^T{\mathbf V}{\mathbf r}_1)^l f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1) d{\mathbf r}_1, \end{equation} and \begin{gather} \mathbb{V}\mathrm{ar}(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}) = \mathbb{E}(\tilde{\epsilon}^2\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}) \notag\\ = \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} h(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 ) f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1)d{\mathbf r}_1 / \int_{{\mathbb R}^q}f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})d{\mathbf r} = \frac{\tilde{h}({\mathbf V},\mathbf s_0,f)}{t^{(0)}({\mathbf V},\mathbf s_0,f)} \label{tilde_eps_var} \end{gather} with $\mathbb{E}(\tilde{\epsilon}^2\mid {\mathbf X} = {\mathbf x}) = h({\mathbf x})$ and $\tilde{h}({\mathbf V},\mathbf s_0,f) = \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} h(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 ) f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1)d{\mathbf r}_1$. 
Assume further that $h(\cdot)$ is continuous. Then $L_\mathcal{F}^*({\mathbf V},f_t)$ in \eqref{e_LV1} is well defined and continuous, \begin{align}\label{CVE_of_transformed_Y} {\mathbf V}^t_q = \operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,q)}L_\mathcal{F}^*({\mathbf V},f_t) \end{align} is well defined, and the conditional variance estimator of the transformed response $f_t(Y)$ identifies $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$, \begin{align}\label{CVE_targets_meansubspace} \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} = \operatorname{span}\{{\mathbf V}^t_q\}^\perp. \end{align} \end{thm} \begin{comment} \begin{remark} For a more detailed proof of \eqref{variance_constant}, let $\tilde{{\mathbf X}}$ be an independent copy of ${\mathbf X}$. Then the vector $({\mathbf X}^T,\tilde{{\mathbf X}}^T,\epsilon)^T \in {\mathbb R}^{2p+1}$ drives all the stochasticity in \eqref{variance_constant} and \begin{align*} \mathbb{E}\left(\mathbb{V}\mathrm{ar}(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\})\right) = \mathbb{E}_{\tilde{{\mathbf X}}}\left(\mathbb{V}\mathrm{ar}(\tilde{\epsilon}| {\mathbf U}^T({\mathbf X}- \tilde{{\mathbf X}})= {\mathbf 0}, \tilde{{\mathbf X}} = \mathbf s_0\right) \end{align*} where ${\mathbf U} \perp {\mathbf V}$. \end{remark} \end{comment} \cite{FertlBura} assumed the model $Y = g({\mathbf B}^T{\mathbf X}) + \epsilon$ with $\epsilon \perp \!\!\! \perp {\mathbf X}$, which implies $\mathcal{S}_{\E\left(Y\mid\X\right)} = \operatorname{span}\{{\mathbf B}\} = \mathcal{S}_{Y\mid\X}$. \cite{FertlBura} showed that the \textit{conditional variance estimator} (\texttt{CVE}) can identify $\mathcal{S}_{\E\left(Y\mid\X\right)}$ at the population level.
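The identification principle behind \eqref{e_Lvs} can be illustrated on a noiseless toy model: with $Y = X_1$ and $p = 2$, the conditional variance of $Y$ along a line $\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}$ vanishes exactly when ${\mathbf V}$ is orthogonal to the reduction direction $\mathbf{e}_1$. The following Monte Carlo sketch is our illustration only, not the sample-level estimator of Section~\ref{e_estimation}:

```python
import numpy as np

rng = np.random.default_rng(1)

def slice_var(V, s0, g, n=2000):
    """Sample variance of Y = g(X) for X on the line X = s0 + r*V, r ~ N(0,1)."""
    r = rng.standard_normal(n)
    X = s0[None, :] + r[:, None] * V[None, :]
    return np.var(g(X))

g = lambda X: X[:, 0]           # Y = X_1, so the central subspace is span{e_1}
s0 = np.array([0.3, -0.7])
v_bad = np.array([1.0, 0.0])    # V = e_1: Y varies along the slice
v_good = np.array([0.0, 1.0])   # V = e_2 orthogonal to e_1: Y is constant on the slice

# Minimizing the slice variance over V selects v_good, whose orthogonal
# complement span{e_1} is exactly the reduction subspace.
assert slice_var(v_good, s0, g) < 1e-12 < slice_var(v_bad, s0, g)
```

In the estimator proper, the variance on an exact slice is replaced by a kernel-weighted variance over sample points close to the affine subspace.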
Theorem~\ref{CVE_targets_meansubspace_thm} extends this result: the \textit{conditional variance estimator} (\texttt{CVE}) identifies the \textit{mean subspace} $\mathcal{S}_{\E\left(Y\mid\X\right)}$ also in models of the form $Y = g({\mathbf B}^T{\mathbf X}) + \tilde{\epsilon}$, where $\tilde{\epsilon}$ is only required to satisfy $\mathbb{E}(\tilde{\epsilon} \mid {\mathbf X}) = 0$. This allows \texttt{CVE} to apply to problems where the \textit{mean subspace} is a proper subset of the \textit{central subspace}, i.e. $\mathcal{S}_{\E\left(Y\mid\X\right)} \subsetneq \mathcal{S}_{Y\mid\X}$. ${\mathbf V}^t_q$ in \eqref{CVE_of_transformed_Y} is not unique, since for every orthogonal ${\mathbf O} \in {\mathbb R}^{q \times q}$, $L_\mathcal{F}^*({\mathbf V}^t_q {\mathbf O},f_t) = L_\mathcal{F}^*({\mathbf V}^t_q ,f_t)$: by \eqref{e_Lvs}, $L_\mathcal{F}^*({\mathbf V}^t_q ,f_t)$ depends on ${\mathbf V}^t_q$ only through $\operatorname{span}\{{\mathbf V}^t_q\}$. Nevertheless, the minimizer is unique as an element of the Grassmann manifold $Gr(p,q)$ in \eqref{Grassman_def}. To see this, suppose ${\mathbf V} \in {\mathcal S}(p,q)$ is an arbitrary basis of a subspace ${\mathbf M} \in Gr(p,q)$. We can identify ${\mathbf M}$ with the projection $\mathbf{P}_{\mathbf M} = {\mathbf V}{\mathbf V}^T$. By \eqref{ortho_decomp}, we write ${\mathbf x} = {\mathbf V} {\mathbf r}_1 + {\mathbf U} {\mathbf r}_2$. Application of the Fubini-Tonelli Theorem yields \begin{align}\label{Grassman} \tilde{t}^{(l)}(\mathbf{P}_{\mathbf M},\mathbf s_0,f) &= \int_{\text{supp}(f_{\mathbf X})} g({\mathbf B}^T\mathbf s_0 + {\mathbf B}^T \mathbf{P}_{\mathbf M} {\mathbf x})^l f_{\mathbf X}(\mathbf s_0 + \mathbf{P}_{\mathbf M} {\mathbf x})d{\mathbf x} \\&= t^{(l)}({\mathbf V},\mathbf s_0,f) \int_{\text{supp}(f_{\mathbf X})\cap {\mathbb R}^{p-q} }d{\mathbf r}_2.
\notag \end{align} Therefore $\tilde{t}^{(l)}(\mathbf{P}_{\mathbf M},\mathbf s_0,f)/\tilde{t}^{(0)}(\mathbf{P}_{\mathbf M},\mathbf s_0,f) = t^{(l)}({\mathbf V},\mathbf s_0,f)/t^{(0)}({\mathbf V},\mathbf s_0,f)$, and $\mu_l(\cdot,\mathbf s_0,f)$ in \eqref{mu_l} can also be viewed as a function from $Gr(p,q)$ to ${\mathbb R}$. Next, we define the \textit{ensemble conditional variance estimator (ECVE)} for a parametric ensemble $\mathcal{F}$ that characterizes the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$. Following the \textit{ensemble minimum average variance estimation} formulation in \cite{YinLi2011}, we extend the original objective function by integrating over the index random variable $t \sim F_T$ that indexes the ensemble $\mathcal{F}$, as in \eqref{e_objective}. \begin{comment} Let \begin{align} {\mathbf V}_q = \operatorname{argmin}_{{\mathbf V} \in S(p,q)}L_\mathcal{F}({\mathbf V}) \end{align} then \begin{align} \mathcal{S}_{Y\mid\X} = \operatorname{span}\{{\mathbf V}_q\}^\perp \end{align} which is well defined since the Stiefel manifold ${\mathcal S}(p,q)$ \eqref{Smanifold} is compact and $L_\mathcal{F}({\mathbf V})$ is continuous and well defined if the index random variable $t$ has a continuous distribution. \end{comment} \begin{defn} Let \begin{align}\label{enVq} {\mathbf V}_q = \operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,q)}L_\mathcal{F}({\mathbf V}). \end{align} The \textbf{Ensemble Conditional Variance Estimator} with respect to the ensemble $\mathcal{F}$ is defined to be any basis ${\mathbf B}_{p-q,\mathcal{F}}$ of $\operatorname{span}\{{\mathbf V}_q\}^\perp$. \end{defn} \begin{thm}\label{ECVE_identifies_cs_thm} Assume (E.1), (E.2), (E.3), and (E.4) hold, and that the function $h(\cdot)$ defined in Theorem~\ref{CVE_targets_meansubspace_thm} is continuous.
Let $\mathcal{F}$ be a parametric ensemble that characterizes $\mathcal{S}_{Y\mid\X}$, with $k = \dim(\mathcal{S}_{Y\mid\X})$, and let ${\mathbf V}$ be an element of the Stiefel manifold ${\mathcal S}(p,q)$, defined in \eqref{Smanifold}, with $q = p - k$. Then, ${\mathbf V}_q$ in \eqref{enVq} is well defined and \begin{align} \mathcal{S}_{Y\mid\X} = \operatorname{span}\{{\mathbf V}_q\}^\perp. \end{align} \end{thm} \section{Estimation of the ensemble CVE}\label{e_estimation} Assume $(Y_i,{\mathbf X}_i^\top)$, $i=1,\ldots,n$, is an i.i.d. sample from model \eqref{mod:e_basic}, and let \begin{align} d_i({\mathbf V},\mathbf s_0)&= \|{\mathbf X}_i - \mathbf{P}_{\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}}{\mathbf X}_i\|_2^2 = \|{\mathbf X}_i -\mathbf s_0\|_2^2 - \langle {\mathbf X}_i - \mathbf s_0,{\mathbf V}{\mathbf V}^\top({\mathbf X}_i - \mathbf s_0)\rangle \notag\\ &= \| ({\mathbf I}_p - {\mathbf V}{\mathbf V}^\top)({\mathbf X}_i - \mathbf s_0)\|_2^2 = \| {\mathbf Q}_{{\mathbf V}}({\mathbf X}_i - \mathbf s_0)\|_2^2, \label{distance} \end{align} where $\langle \cdot, \cdot\rangle$ is the usual inner product in ${\mathbb R}^p$, $\mathbf{P}_{{\mathbf V}}={\mathbf V}{\mathbf V}^\top$, and ${\mathbf Q}_{{\mathbf V}}={\mathbf I}_p-\mathbf{P}_{{\mathbf V}}$. The estimators we propose involve a variation of kernel smoothing that depends on a bandwidth $h_n$. In our procedure, $h_n$ is the squared width of a slice around the affine subspace $\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}$. In order to obtain pointwise convergence for the ensemble CVE, we use the following bias and variance assumptions on the bandwidth, as is typical in nonparametric estimation. \begin{assumptionH1} For $n \to \infty$, $h_n \to 0$. \end{assumptionH1} \begin{assumptionH2} For $n \to \infty$, $nh^{(p-q)/2}_n \to \infty$. \end{assumptionH2} In order to obtain consistency of the proposed estimator, Assumption (H.2) will be strengthened to $\log(n)/(nh^{(p-q)/2}_n) \to 0$.
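As a quick numerical sanity check (an illustration we add here, not part of the estimation procedure), a bandwidth of the form $h_n \propto n^{-2/(4+p-q)}$, the rate used later in \eqref{bandwidth}, satisfies (H.1), (H.2), and the strengthened condition; the dimensions below are hypothetical:

```python
import math

# Illustration under an assumed rate h_n ~ n^{-2/(4+p-q)}.
p, q = 10, 8  # hypothetical dimensions, so p - q = 2
for n in (10**2, 10**4, 10**6):
    h_n = n ** (-2.0 / (4 + p - q))
    print(n,
          round(h_n, 4),                                  # (H.1): h_n -> 0
          round(n * h_n ** ((p - q) / 2.0), 1),           # (H.2): n h_n^{(p-q)/2} -> inf
          round(math.log(n) / (n * h_n ** ((p - q) / 2.0)), 6))  # strengthened: -> 0
```

For $p-q=2$ this rate is $h_n = n^{-1/3}$, so $n h_n^{(p-q)/2} = n^{2/3} \to \infty$ while $\log(n)/n^{2/3} \to 0$.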
We also let $K$, which we refer to as the \textit{kernel}, be a function satisfying the following assumptions. \begin{assumptionK1} $K:[0,\infty) \rightarrow [0,\infty)$ is a nonincreasing and continuous function such that $|K(z)| \leq M_1$ for some finite constant $M_1$, with $\int_{{\mathbb R}^{q}} K(\|{\mathbf r}\|^2) d{\mathbf r} < \infty$ for $q \leq p-1$. \end{assumptionK1} \begin{assumptionK2} There exist positive finite constants $L_1$ and $L_2$ such that $K$ satisfies either (1) or (2) below: \begin{itemize} \item[(1)] $K(u) = 0$ for $|u| > L_2$, and for all $u, \tilde{u}$ it holds that $|K(u) - K(\tilde{u})| \leq L_1 |u - \tilde{u}|$; \item[(2)] $K(u)$ is differentiable with $|\partial_u K(u)| \leq L_1$, and for some $\nu > 1$ it holds that $|\partial_u K(u)| \leq L_1 |u|^{-\nu}$ for $|u| > L_2$. \end{itemize} \end{assumptionK2} The Gaussian kernel $K(z) = \exp(-z^2)$, for example, fulfills both (K.1) and (K.2) [see \cite{Hansen2008}], and will be used throughout the paper. For $i=1,\ldots,n$, we let \begin{equation} w_i({\mathbf V},\mathbf s_0) = \frac{K\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)}{\sum_{j=1}^nK\left(\frac{d_j({\mathbf V},\mathbf s_0)}{h_n}\right)}, \label{weights} \end{equation} \begin{equation} \label{e_ybar} \bar{y}_l({\mathbf V},\mathbf s_0,f) = \sum_{i=1}^n w_i({\mathbf V},\mathbf s_0)f(Y_i)^l \quad \text{for} \quad l=1,2. \end{equation} We estimate $\tilde{L}_\mathcal{F}({\mathbf V},\mathbf s_0,f)$ in \eqref{e_LtildeVs0} with \begin{equation} \tilde{L}_{n,\mathcal{F}}({\mathbf V},\mathbf s_0, f) = \bar{y}_2({\mathbf V},\mathbf s_0,f) - \bar{y}_1({\mathbf V},\mathbf s_0,f)^2, \label{e_Ltilde} \end{equation} and the objective function $L^*_\mathcal{F}({\mathbf V},f)$ in \eqref{e_LV1} with \begin{equation} L^*_n({\mathbf V},f) = \frac{1}{n} \sum_{i=1}^n \tilde{L}_{n,\mathcal{F}}({\mathbf V},{\mathbf X}_i,f), \label{e_LN} \end{equation} where each data point ${\mathbf X}_i$ is a shifting point.
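For concreteness, the estimator built from \eqref{weights}--\eqref{e_LN} can be sketched as follows. This is a minimal NumPy illustration of ours with hypothetical function names (the reference implementation is in R), using the Gaussian kernel above:

```python
import numpy as np

def gauss_kernel(z):
    # Gaussian kernel from the text: K(z) = exp(-z^2)
    return np.exp(-z ** 2)

def L_star_n(V, X, fY, h_n):
    """Sample objective L*_n(V, f): the average over all shifting points
    s0 = X_i of the kernel-weighted conditional variance of f(Y) within
    the slice around s0 + span{V}.  X is (n, p); V is (p, q), V^T V = I_q."""
    n, p = X.shape
    Q = np.eye(p) - V @ V.T                        # projection onto span{V}^perp
    total = 0.0
    for i in range(n):                             # each X_i is a shifting point
        d = np.sum(((X - X[i]) @ Q) ** 2, axis=1)  # distances d_j(V, X_i)
        w = gauss_kernel(d / h_n)
        w = w / w.sum()                            # weights w_j(V, X_i)
        y1 = np.sum(w * fY)                        # \bar y_1
        y2 = np.sum(w * fY ** 2)                   # \bar y_2
        total += y2 - y1 ** 2                      # \tilde L_n(V, X_i, f)
    return total / n
```

Note that the value depends on ${\mathbf V}$ only through $\operatorname{span}\{{\mathbf V}\}$, consistent with the population-level discussion above; minimizing it over the Stiefel manifold then yields $\hat{{\mathbf V}}_q$.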
For a parametric ensemble $\mathcal{F} = \{f_t : t \in \Omega_T\}$, let $(t_j)_{j=1,\ldots,{m_n}}$ be an i.i.d. sample from $F_T$ with $\lim_{n \to \infty} m_n = \infty$. The final estimate of the objective function in \eqref{e_objective} is given by \begin{equation}\label{e_objective_est} L_{n,\mathcal{F}}({\mathbf V}) = \frac{1}{m_n} \sum_{j=1}^{{m_n}} L^*_n({\mathbf V},f_{t_j}). \end{equation} The ensemble conditional variance estimator (ECVE) is defined to be any basis of $\operatorname{span}\{\hat{{\mathbf V}}_q\}^\perp$, where \begin{equation} \label{e_optim} \hat{{\mathbf V}}_q = \operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,q)}L_{n,\mathcal{F}}({\mathbf V}). \end{equation} We use the same algorithm as in \cite{FertlBura} to solve the optimization problem \eqref{e_optim}. It requires the explicit form of the gradient of \eqref{e_objective_est}. Theorem~\ref{e_lemma-one} provides the gradient when a Gaussian kernel is used. \begin{thm}\label{e_lemma-one} The gradient of $\tilde{L}_{n,\mathcal{F}}({\mathbf V},\mathbf s_0, f)$ in \eqref{e_Ltilde} is given by \begin{align*} \nabla_{{\mathbf V}}\tilde{L}_{n,\mathcal{F}}({\mathbf V},\mathbf s_0, f) = \frac{1}{h_n^2}\sum_{i=1}^n (\tilde{L}_{n,\mathcal{F}}({\mathbf V},\mathbf s_0, f) - (f(Y_i)-\bar{y}_1({\mathbf V},\mathbf s_0,f))^2)w_i d_i\nabla_{{\mathbf V}}d_i({\mathbf V},\mathbf s_0) \in {\mathbb R}^{p \times q}, \end{align*} and the gradient of $L_{n,\mathcal{F}}({\mathbf V})$ in \eqref{e_objective_est} is \[ \nabla_{{\mathbf V}}L_{n,\mathcal{F}}({\mathbf V}) = \frac{1}{n {m_n}} \sum_{i=1}^n \sum_{j=1}^{{m_n}} \nabla_{{\mathbf V}}\tilde{L}_{n,\mathcal{F}}({\mathbf V},{\mathbf X}_i,f_{t_j}). \] \end{thm} In the implementation of ECVE, we follow \cite{FertlBura} and set the bandwidth to \begin{equation} \label{bandwidth} h_n = 1.2^2 \frac{2\mbox{tr}(\widehat{\Sigma}_{\mathbf x})}{p} \left(n^{-1/(4+p-q)} \right)^2,
\end{equation} where $\widehat{\Sigma}_{\mathbf x} = (1/n) \sum_i ({\mathbf X}_i -\Bar{{\mathbf X}})({\mathbf X}_i -\Bar{{\mathbf X}})^T$ and $\Bar{{\mathbf X}} = (1/n) \sum_i {\mathbf X}_i$. \subsection{Weighted estimation of $L^*_n({\mathbf V},f)$}\label{weight_section} The set of points $\{{\mathbf x} \in {\mathbb R}^p: \|{\mathbf x} - \mathbf{P}_{\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}}{\mathbf x}\|^2 \leq h_n\}$ represents a \textit{slice} in ${\mathbb R}^p$ about the affine subspace $\mathbf s_0+ \operatorname{span}\{{\mathbf V}\}$. In the estimation of $L^*_n({\mathbf V},f)$, two different weighting schemes are used: (a) \textit{Within slices}: The weights are defined in \eqref{weights} and are used to calculate \eqref{e_Ltilde}. (b) \textit{Between slices}: Equal weights $1/n$ are used to calculate \eqref{e_LN}. Another idea for the between-slices weighting is to assign more weight to slices with more points. This can be realized by altering \eqref{e_LN} to \begin{align} L^{(w)}_n({\mathbf V},f) &= \sum_{i=1}^n \tilde{w}({\mathbf V},{\mathbf X}_i) \tilde{L}_n({\mathbf V},{\mathbf X}_i,f), \quad \mbox{with} \label{wLN}\\ \tilde{w}({\mathbf V},{\mathbf X}_i) &= \frac{\sum_{j=1}^n K(d_j({\mathbf V},{\mathbf X}_i)/h_n) - 1}{\sum_{l,u=1}^nK(d_l({\mathbf V},{\mathbf X}_u)/h_n) -n} = \frac{\sum_{j=1,j\neq i}^n K(d_j({\mathbf V},{\mathbf X}_i)/h_n) }{\sum_{l,u=1, l\neq u}^nK(d_l({\mathbf V},{\mathbf X}_u)/h_n)}.\label{wtilde} \end{align} The denominator in \eqref{wtilde} guarantees that the weights $\tilde{w}({\mathbf V},{\mathbf X}_i)$ sum to one. If \eqref{wLN} instead of \eqref{e_LN} is used in \eqref{e_objective_est}, we refer to this method as \textit{weighted ensemble conditional variance estimation}. For example, if a rectangular kernel is used, $\sum_{j=1,j\neq i}^n K(d_j({\mathbf V},{\mathbf X}_i)/h_n)$ is the number of ${\mathbf X}_j$ ($j \neq i$) points in the slice corresponding to $\tilde{L}_n({\mathbf V},{\mathbf X}_i,f)$.
Therefore, this slice is assigned a weight proportional to the number of ${\mathbf X}_j$ points in it, and the more observations we use for estimating $\tilde{L}_n({\mathbf V},{\mathbf X}_i,f)$, the better its accuracy. \section{Consistency of the ECVE}\label{sec:consistency} The consistency of the ECVE derives from the consistency of the CVE \cite{FertlBura}, which targets a specific $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$, and the fact that we can recover $\mathcal{S}_{Y\mid\X}$ from $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$ across all transformations $f_t \in \mathcal{F} = \{f_t : t \in \Omega_T\}$ for an ensemble that characterizes $\mathcal{S}_{Y\mid\X}$. This is achieved in sequential steps from Theorem \ref{uniform_convergence_ecve}, which is the main building block, to Theorem \ref{ECVE_consistency}. The proofs are technical and lengthy and are therefore given in the Appendix. \begin{comment} \begin{itemize} \item In section~\ref{e_motivation} ensembles which characterize the \textit{central subspace} $\mathcal{S}_{Y\mid\X}$ are introduced and Theorem~\ref{Fa_characteris_cs_thm} gives conditions for when an ensemble $\mathcal{F}$ characterize the $\mathcal{S}_{Y\mid\X}$. Therefore if we can estimate $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$ consistently for all transformations $f_t \in \mathcal{F} = \{f_t : t \in \Omega_T\}$ for an ensemble that characterizes $\mathcal{S}_{Y\mid\X}$, we can recover $\mathcal{S}_{Y\mid\X}$. \item Theorem~\ref{finite_characterisation_of_cs} establishes that a finite subset of functions of the ensemble $\mathcal{F}$ characterizes $\mathcal{S}_{Y\mid\X}$. Nevertheless the caveat is that one cannot know beforehand which finite subset of $\mathcal{F}$ suffices for identifying $\mathcal{S}_{Y\mid\X}$. Therefore we have to let the numbers of functions used in definition \eqref{e_objective_est} go to infinity as the sample size increases, i.e.
$|\mathcal{F}| = m_n \to \infty$, such that asymptotically we can be sure that we use all functions of the finite subset of $\mathcal{F}$ that characterize $\mathcal{S}_{Y\mid\X}$. \item Theorem~\ref{CVE_targets_meansubspace_thm} shows that CVE can identify $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$ by replacing the original response $Y$ by $f_t(Y)$ on the population level without the requirement that the error term $\tilde{\epsilon}$ is independent of the predictors ${\mathbf X}$. I.e. $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} = \operatorname{span}\{ \operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,q)}L^*({\mathbf V},f_t)\}^\perp$. \item Theorem~\ref{ECVE_identifies_cs_thm} states that the target function $L_\mathcal{F}({\mathbf V})$ of ECVE identifies $\mathcal{S}_{Y\mid\X}$ on the population level if $\mathcal{F}$ characterizes $\mathcal{S}_{Y\mid\X}$. I.e. $\mathcal{S}_{Y\mid\X} = \operatorname{span}\{\operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,q)} L_\mathcal{F}({\mathbf V})\}^\perp$ \item Theorems~\ref{uniform_convergence_ecve} shows that the target function $L_n^*({\mathbf V},f)$ defined in \eqref{e_LV1} of CVE converges uniformly in ${\mathbf V} \in {\mathcal S}(p,q)$ to $L^*({\mathbf V},f)$, defined in \eqref{e_LV1}, in probability \item Theorem~\ref{thm_consistency_mean_subspace} establish that CVE estimates $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$ consistently by utilizing the uniform convergence in probability. 
\item Theorem~\ref{uniform_convergence_eobjective} establishes that the target function $L_{n,\mathcal{F}}({\mathbf V})$ defined in \eqref{e_objective_est} of ECVE converges uniformly in ${\mathbf V} \in {\mathcal S}(p,q)$ to $L_\mathcal{F}({\mathbf V})$, defined in \eqref{e_objective}, in probability \item Theorem~\ref{ECVE_consistency} shows that ECVE estimates $\mathcal{S}_{Y\mid\X}$ consistently by utilizing the uniform convergence in probability \end{itemize} \end{comment} \begin{thm}\label{uniform_convergence_ecve} Assume conditions (E.1), (E.2), (E.4), (K.1), (K.2), (H.1) hold, $a_n^2 = \log(n)/nh_n^{(p-q)/2} = o(1)$, and $a_n/h_n^{(p-q)/2} = O(1)$. Let $\mathcal{F}$ be a parametric ensemble such that $\mathbb{E}(|\tilde{\epsilon}|^l \mid{\mathbf X} ={\mathbf x})$ is continuous for $l = 1,\ldots,4$, and the second conditional moment is twice continuously differentiable, where $\tilde{\epsilon}$ is given by Theorem~\ref{Y_decomposition_thm}. Then, $L^*_n({\mathbf V},f)$, defined in \eqref{e_LN}, converges uniformly in probability to $L^*({\mathbf V},f)$ in \eqref{e_LV1} for all $f \in \mathcal{F}$; i.e., \begin{align*} \sup_{{\mathbf V} \in {\mathcal S}(p,q)}|L^*_n({\mathbf V},f) -L^*({\mathbf V},f)| \longrightarrow 0 \quad \text{in probability as $n \to \infty$.} \end{align*} \end{thm} \begin{comment} \begin{proof}[Proof of Theorem~\ref{uniform_convergence_ecve}] Let $(Y_i,{\mathbf X}_i)_{i=1,\ldots,n}$ be an i.i.d. sample from the model $Y = g_{\textit{cs}}({\mathbf B}^T {\mathbf X}, \epsilon)$ as in (E.1). Fix an arbitrary $f \in \mathcal{F}$. By Theorem~\ref{Y_decomposition_thm}, \begin{align}\label{Ytilde_decomposition} \tilde{Y}_i = f(Y_i) = g(\widetilde{{\mathbf B}}^T{\mathbf X}_i) + \tilde{\epsilon_i} \end{align} with $\operatorname{span}\{\widetilde{{\mathbf B}}\} = {\mathcal S}_{\mathbb{E}(\tilde{Y}\mid {\mathbf X})} = {\mathcal S}_{\mathbb{E}(f(Y)\mid {\mathbf X})}$. 
Conditions (E.2), and (E.4) yield that $g$ and $f_{\mathbf X}$ are twice continuously differentiable, $\mathbb{E}(|\tilde{Y}|^8) < \infty$, and $\text{supp}(f_{\mathbf X})$ is compact. Inserting \eqref{Ytilde_decomposition} into \eqref{e_ybar} we can write \begin{align}\label{tnf} t^{(l)}_n({\mathbf V},\mathbf s_0,f)&=\frac{1}{nh_n^{(p-q)/2}}\sum_{i=1}^n K\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\tilde{Y}^l_i,\\ \bar{y}_l({\mathbf V},\mathbf s_0,f) &= \frac{ t^{(l)}_n({\mathbf V},\mathbf s_0,f)}{t^{(0)}_n({\mathbf V},\mathbf s_0,f)},\label{ylf}\\ \tilde{L}_{n,\mathcal{F}}({\mathbf V},\mathbf s_0) &= \bar{y}_2({\mathbf V},\mathbf s_0,f) -\bar{y}_1({\mathbf V},\mathbf s_0,f)^2, \notag \\ L^*_n({\mathbf V},f) &= \frac{1}{n} \sum_{i=1}^n \tilde{L}_{n,\mathcal{F}}({\mathbf V},{\mathbf X}_i,f). \notag \end{align} This set-up is the same as in the proof of Theorem~\ref{thm_L_uniform} except that $\tilde{\epsilon}$ is not independent from ${\mathbf X}$ but instead $\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X}) = 0$. Therefore, it suffices to show that $\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X}) = 0$ suffices at each point in the proof of Theorem~\ref{thm_L_uniform} where independence was used. The independence of the predictors and the error was used three times. \begin{itemize} \item[(a)] The first time the independence was used was in Theorem~\ref{thm4} which can be completely replaced by Theorem~\ref{CVE_targets_meansubspace_thm}. \item[(b)] The second time the independence was used is in Lemma~\ref{aux_lemma3}, which was used in Theorem~\ref{thm_variance} to obtain an upper bound for $\mathbb{V}\mathrm{ar}(n h_n^{(p-q)/2} t^{(l)}_n({\mathbf V},\mathbf s_0) )\leq n h_n^{(p-q)/2} \text{const}$. We show next that the upper bound can be achieved under the current set-up but the constant term is no longer characterized. 
Since a continuous function attains a finite maximum over a compact set, \[ \sup_{{\mathbf x} \in \text{supp}(f_{\mathbf X})}|g(\widetilde{{\mathbf B}}^T{\mathbf x})| < \infty. \] Therefore, \begin{align*} |\tilde{Y}_i| \leq |g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)| +|\tilde{\epsilon_i}| \leq \sup_{{\mathbf x} \in \text{supp}(f_{\mathbf X})}|g(\widetilde{{\mathbf B}}^T{\mathbf x})| +|\tilde{\epsilon_i}| = \text{const} +|\tilde{\epsilon_i}| \end{align*} and $| \tilde{Y}_i|^{2l} \leq \sum_{u=0}^{2l} \binom{2l}{u} \text{const}^u |\tilde{\epsilon_i}|^{2l - u}$. Since $\tilde{Y}_i$ are i.i.d., for $l = 0,1,2$ we have \begin{gather} \mathbb{V}\mathrm{ar}\left(n h_n^{(p-q)/2} t^{(l)}_n({\mathbf V},\mathbf s_0,f)\right) = n \mathbb{V}\mathrm{ar}\left(\tilde{Y}^l K\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \leq n \mathbb{E}\left(\tilde{Y}^{2l} K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \notag \\ = n \mathbb{E}\left(|\tilde{Y}|^{2l} K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \leq n \sum_{u=0}^{2l} \binom{2l}{u} \text{const}^u \mathbb{E}\left( |\tilde{\epsilon_i}|^{2l - u} K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \notag \\ = n \sum_{u=0}^{2l} \binom{2l}{u} \text{const}^u \mathbb{E}\left( \mathbb{E}(|\tilde{\epsilon_i}|^{2l - u}\mid {\mathbf X}_i) K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \label{bound_tnf_var} \end{gather} Let $\mathbb{E}(|\tilde{\epsilon_i}|^{2l - u} \mid {\mathbf X}_i) = g_{2l-u}({\mathbf X}_i)$ for a continuous (by assumption) function $g_{2l-u}(\cdot)$ with finite moments for $l=0,1,2$ by the compactness of $\textbf{supp}(f_{\mathbf X})$, i.e. (E.2). 
Using Lemma~\ref{aux_lemma2} with \[Z_n({\mathbf V},\mathbf s_0) = \frac{1}{nh_n{(p-q)/2}} \sum_i g_{2l-u}({\mathbf X}_i) K^2\left(d_i({\mathbf V},\mathbf s_0)/h_n\right),\] where $K^2(\cdot)$ fulfills (K.1), we calculate \begin{gather} \mathbb{E}\left( \mathbb{E}(|\tilde{\epsilon_i}|^{2l - u}\mid {\mathbf X}_i) K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) = h_n^{(p-q)/2} \mathbb{E}(Z_n({\mathbf V},\mathbf s_0)) \notag \\ = h_n^{(p-q)/2} \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^{p-q}}K^2(\|{\mathbf r}_2\|^2)\times \notag \\ \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} g_{2l-u}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 + h_n^{1/2} {\mathbf U} {\mathbf r}_2) f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 + h_n^{1/2}{\mathbf U}{\mathbf r}_2)d{\mathbf r}_1 d{\mathbf r}_2 \label{ugly_integral}\\ \leq h_n^{(p-q)/2} \text{const} \notag \end{gather} since all integrands in \eqref{ugly_integral} are continuous and over compact sets by (E.2) and the continuity of $g_{2l-u}(\cdot)$ and $K(\cdot)$, so that the integral can be upper bounded by a finite constant, $\text{const}$. Inserting $\eqref{ugly_integral}$ into \eqref{bound_tnf_var} yields \begin{align}\label{upper_bound_var_tnf} \mathbb{V}\mathrm{ar}\left(n h_n^{(p-q)/2} t^{(l)}_n({\mathbf V},\mathbf s_0,f)\right) \leq n h_n^{(p-q)/2} \underbrace{\sum_{u=0}^{2l} \binom{2l}{u} \text{const}^u\text{const}}_{ = \text{const}} = n h_n^{(p-q)/2} \text{const} \end{align} Inequality \eqref{upper_bound_var_tnf} can be used in place of Lemma~\ref{aux_lemma3} in the proof. \item[(c)] The third time independence was used was in Theorem~\ref{thm_bias}. For $l=0$, Theorem~\ref{thm_bias} still applies because $\tilde{Y}^0 = 1$. 
For $l = 1$, \begin{align*} \mathbb{E}( t^{(1)}_n({\mathbf V},\mathbf s_0,f)) &= \frac{1}{h_n^{(p-q)/2}} \Bigl(\mathbb{E}(g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)K\left(d_i({\mathbf V},\mathbf s_0)/h_n\right)) \\ &\qquad \quad + \mathbb{E}(\underbrace{\mathbb{E}(\tilde{\epsilon}_i\mid {\mathbf X}_i)}_{ = 0}K\left(d_i({\mathbf V},\mathbf s_0)/h_n\right))\Bigr) \end{align*} and therefore this case can be handled as in Theorem~\eqref{thm_bias}. For $l =2$, we have $\tilde{Y}_i^2 = g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)^2 + 2g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)\tilde{\epsilon}_i + \tilde{\epsilon}_i^2$ and \begin{align} \mathbb{E}( t^{(2)}_n({\mathbf V},\mathbf s_0,f)) &= \frac{1}{h_n^{(p-q)/2}} \mathbb{E}\left(g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)^2 K\left(d_i({\mathbf V},\mathbf s_0)/h_n\right)\right) \notag \\ &\qquad \quad + \frac{1}{h_n^{(p-q)/2}} \mathbb{E}\left( \mathbb{E}(\tilde{\epsilon}_i^2\mid {\mathbf X}_i) K\left(d_i({\mathbf V},\mathbf s_0)/h_n\right)\right) \label{bias_tmp1} \end{align} since the cross term vanishes as in the case for $l = 1$. The first term on the right hand side of \eqref{bias_tmp1} can be handled as in Theorem~\eqref{thm_bias}. For the second term on the right side of \eqref{bias_tmp1} note that $\mathbb{E}(\tilde{\epsilon}_i^2\mid {\mathbf X}_i) = h({\mathbf X}_i)$ is a function of ${\mathbf X}_i$, so it can be handled as the first term on the right hand side of \eqref{bias_tmp1}. 
In sum, \begin{gather} \mathbb{E}( t^{(l)}_n({\mathbf V},\mathbf s_0,f)) \to t^{(l)}({\mathbf V},\mathbf s_0,f) +\notag \\ 1_{\{l=2\}} \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2)d{\mathbf r}_2 \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} h(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 ) f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1)d{\mathbf r}_1 \label{bias_tmp2} \end{gather} uniformly over ${\mathbf V} \in {\mathcal S}(p,q)$, where $\mathbb{E}(\tilde{\epsilon}_i^2\mid {\mathbf X}_i = {\mathbf x}) = h({\mathbf x})$ and \begin{gather} \mathbb{V}\mathrm{ar}(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}) = \mathbb{E}(\tilde{\epsilon}^2\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}) \notag\\ = \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} \frac{h(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 ) f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1)}{t^{(0)}({\mathbf V},\mathbf s_0,f)} d{\mathbf r}_1 \end{gather} This completes the proof of Theorem~\ref{uniform_convergence_ecve} since the independence was not used at any other point of the proof. \end{itemize} \end{proof} \end{comment} Next, Theorem~\ref{thm_consistency_mean_subspace} shows that the conditional variance estimator is consistent for $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$ for any transformation $f_t \in \mathcal{F}$. \begin{thm}\label{thm_consistency_mean_subspace} Under the same conditions as in Theorem~\ref{uniform_convergence_ecve}, the conditional variance estimator $\operatorname{span}\{\widehat{{\mathbf B}}^t_{{k_t}}\}$ estimates $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$ consistently, for $f_t \in \mathcal{F}$. That is, \begin{equation*} \|\mathbf{P}_{\widehat{{\mathbf B}}^t_{{k_t}}} - \mathbf{P}_{\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}}\| \to 0 \quad \text{in probability as } n \to \infty,
\end{equation*} where $\widehat{{\mathbf B}}^t_{{k_t}}$ is any basis of $\operatorname{span}\{\widehat{{\mathbf V}}_{k_t}^t\}^\perp$ with \begin{align*} \widehat{{\mathbf V}}_{k_t}^t= \operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,q)}L_{n}^*({\mathbf V},f_t), \end{align*} $q = p - k_t$, and $k_t = \dim(\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)})$. \end{thm} A straightforward application of Theorem \ref{thm_consistency_mean_subspace} with the identity function shows that $\mathcal{S}_{\E\left(Y\mid\X\right)}$ can be consistently estimated by ECVE. \begin{thm}\label{uniform_convergence_eobjective} Assume the conditions of Theorem~\ref{uniform_convergence_ecve} hold. Let $\mathcal{F}$ be a parametric ensemble such that $\sup_{t \in \Omega_T} |f_t(Y)| < M < \infty$ almost surely, and let the index random variable $t \sim F_T$ be independent of the data $(Y_i, {\mathbf X}_i)_{i=1,\ldots,n}$. Then $L_{n,\mathcal{F}}({\mathbf V})$, defined in \eqref{e_objective_est}, converges uniformly in probability to $L_{\mathcal{F}}({\mathbf V})$ in \eqref{e_objective}; i.e., \begin{align*} \sup_{{\mathbf V} \in {\mathcal S}(p,q)}|L_{n,\mathcal{F}}({\mathbf V}) -L_{\mathcal{F}}({\mathbf V})| \longrightarrow 0 \quad \text{in probability as $n \to \infty$.} \end{align*} \end{thm} The assumption $\sup_{t \in \Omega_T} |f_t(Y)| < M < \infty$ in Theorem~\ref{uniform_convergence_eobjective} is trivially satisfied by the elements of the characteristic and indicator ensembles. Furthermore, the assumption $a_n/h_n^{(p-q)/2} = O(1)$, used for the truncation step in the proof of Theorem~\ref{uniform_convergence_ecve}, can be dropped since no truncation is needed here. The rate of convergence of $m_n$ is not characterized in Theorem~\ref{uniform_convergence_eobjective}.
In the simulation studies of Sections \ref{sec:consistency_simulations} and \ref{sec:simulations}, we find that $m_n$ should be chosen to be very small relative to the sample size $n$, roughly at the rate of $\log(n)$. The consistency of the ensemble CVE is shown in Theorem~\ref{ECVE_consistency}. \begin{thm}\label{ECVE_consistency} Assume the conditions of Theorem~\ref{uniform_convergence_ecve} and (E.3) hold. Let $\mathcal{F}$ be a parametric ensemble that characterizes $\mathcal{S}_{Y\mid\X}$ and whose members satisfy $\sup_{t \in \Omega_T} |f_t(Y)| < M < \infty$ almost surely. Also, assume the index random variable $t \sim F_T$ is independent of the data $(Y_i, {\mathbf X}_i)_{i=1,\ldots,n}$. Then, the \textit{ensemble conditional variance estimator (ECVE)} is a consistent estimator for $\mathcal{S}_{Y\mid\X}$. That is, for any basis $\widehat{{\mathbf B}}_{p-q, \mathcal{F}}$ of $\operatorname{span}\{\widehat{{\mathbf V}}_q\}^\perp$, where $\widehat{{\mathbf V}}_q$ is defined in \eqref{e_optim} with $q = p- k$ and $k = \dim(\mathcal{S}_{Y\mid\X})$, \begin{align*} \|\mathbf{P}_{\widehat{{\mathbf B}}_{p-q, \mathcal{F}}} - \mathbf{P}_{\mathcal{S}_{Y\mid\X}}\| \longrightarrow 0 \quad \text{in probability as } n \to \infty, \end{align*} where $\mathbf{P}_{{\mathbf M}}$ denotes the orthogonal projection onto the range space of the matrix or linear subspace ${\mathbf M}$. \end{thm} \section{Simulation Studies}\label{sec:simulations} \subsection{Influence of $m_n$ on ECVE}\label{sec:influence} In this section we study how the number of functions $m_n$ of the ensemble $\mathcal{F}$ in \eqref{e_objective_est} affects the accuracy of ensemble conditional variance estimation. Theorems~\ref{uniform_convergence_eobjective} and~\ref{ECVE_consistency} do not specify how fast $m_n$ should approach $\infty$.
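The default rule \eqref{mn_value}, adopted at the end of this subsection, rounds $\lceil \log(n)\rceil$ up to an even integer (so that, e.g., the Fourier ensemble contains equally many sines and cosines); a one-line sketch of ours:

```python
import math

def default_m_n(n):
    """Default ensemble size from the rule adopted in this subsection:
    ceil(log n), rounded up to the next even integer when odd."""
    m = math.ceil(math.log(n))
    return m if m % 2 == 0 else m + 1

print([default_m_n(n) for n in (100, 300, 1000)])  # -> [6, 6, 8]
```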
We consider the two-dimensional regression model \begin{equation}\label{ecve_simmodel} Y = ({\mathbf b}_2^T{\mathbf X}) + (0.5 + ({\mathbf b}_1^T{\mathbf X})^2)\epsilon, \end{equation} where $p = 10$, $k =2$, ${\mathbf X} \sim N(0,{\mathbf I}_{10})$, $\epsilon \sim N(0,1)$ independent of ${\mathbf X}$, ${\mathbf b}_1 = (1,0,\ldots,0)^T \in {\mathbb R}^{p}$, and ${\mathbf b}_2=(0,1,0,\ldots,0)^T \in {\mathbb R}^{p}$. Therefore, $\mathcal{S}_{\E\left(Y\mid\X\right)} = \operatorname{span}\{{\mathbf b}_2\} \subsetneq \mathcal{S}_{Y\mid\X} = \operatorname{span}\{{\mathbf B}\}$, with ${\mathbf B} = ({\mathbf b}_1,{\mathbf b}_2)$. We set the sample size to $n = 300$ and vary $m$ over $\{2,4,8,10,26,50,76,100\}$ for (a) the indicator, $\mathcal{F}_{m,\text{Indicator}}= \{1_{\{x \geq q_j\}}: j = 1,\ldots,m\}$, where $q_j$ is the $j/(m +1)$th empirical quantile of $(Y_i)_{i=1,\ldots,n}$; (b) the characteristic or Fourier, $\mathcal{F}_{m,\text{Fourier}}= \{\sin(jx): j = 1,\ldots,m/2\} \cup \{\cos(jx): j = 1,\ldots,m/2\}$; (c) the monomial, $\mathcal{F}_{m,\text{Monom}}= \{x^j: j = 1,\ldots,m\}$; and (d) the Box-Cox, $\mathcal{F}_{m,\text{BoxCox}}= \{(x^{t_j} -1)/t_j: t_j = 0.1 + 2(j-1)/(m-1), j = 1,\ldots,m-1\} \cup \{\log(x)\}$, ensembles. For each ensemble, we form the ensemble conditional variance estimator and its weighted version as in Section~\ref{weight_section} [see also \cite{FertlBura}]. The results of 100 replications for each method and each $m$ are displayed in Figure~\ref{ecve_test_nr basisfunction_Fig2}. We assess the estimation accuracy with $\text{err}_{j,m} = \|\widehat{{\mathbf B}}\widehat{{\mathbf B}}^T - {\mathbf B}{\mathbf B}^T\|/(2k)^{1/2}$, $j=1,\ldots,100$, $m \in \{2,4,8,10,26,50,76,100\}$. \texttt{ECVE}'s main competitor, \texttt{csMAVE}, does not vary with $m$; its estimate of the central subspace has median error 0.2, with a wide range from 0.1 to 0.6.
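The four ensembles (a)-(d) can be written as lists of transformations; the following is a hypothetical helper of ours (not from the reference implementation), with the indicator thresholds $q_j$ taken as empirical quantiles of the observed responses:

```python
import numpy as np

def make_ensemble(kind, m, Y=None):
    """Return the list of transformations f_t for ensembles (a)-(d).
    For 'indicator', the thresholds q_j are the j/(m+1)-th empirical
    quantiles of the observed responses Y."""
    if kind == "indicator":
        qs = np.quantile(Y, [(j + 1) / (m + 1) for j in range(m)])
        return [lambda x, q=q: (np.asarray(x) >= q).astype(float) for q in qs]
    if kind == "fourier":      # m/2 sines and m/2 cosines (m even)
        return ([lambda x, j=j: np.sin(j * x) for j in range(1, m // 2 + 1)]
              + [lambda x, j=j: np.cos(j * x) for j in range(1, m // 2 + 1)])
    if kind == "monomial":
        return [lambda x, j=j: x ** j for j in range(1, m + 1)]
    if kind == "boxcox":       # t_j = 0.1 + 2(j-1)/(m-1), j = 1,...,m-1, plus log
        ts = [0.1 + 2 * (j - 1) / (m - 1) for j in range(1, m)]
        return [lambda x, t=t: (x ** t - 1) / t for t in ts] + [np.log]
    raise ValueError(kind)
```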
The estimation accuracy of the Fourier, Indicator, and Box-Cox \texttt{ECVE} varies with $m$ and is on par with or better than \texttt{csMAVE} for some values of $m$. For the Fourier ensemble, fewer basis functions give the best performance; the Indicator and Box-Cox ensembles are quite robust to varying $m$; for the monomial ensemble, the errors grow rapidly as $m$ increases. The weighted version of ECVE improves the accuracy for all ensembles. $\mathcal{F}_{4,\text{Fourier\_weighted}}$, $\mathcal{F}_{8,\text{Indicator\_weighted}}$, and $\mathcal{F}_{4,\text{BoxCox\_weighted}}$ are on par with or more accurate than \texttt{csMAVE}. In sum, the simulation results support choosing a small number $m$ of basis functions. Based on this and further unreported simulations, we set the default value of $m$ to \begin{align}\label{mn_value} m_n = \begin{cases} \lceil \log(n)\rceil, & \text{if } \lceil \log(n)\rceil \text{ is even},\\ \lceil \log(n)\rceil + 1, & \text{if } \lceil \log(n)\rceil \text{ is odd}, \end{cases} \end{align} for all simulations in Sections~\ref{sec:consistency_simulations} and~\ref{evaluate_acc} and the data analysis in Section~\ref{sec:dataAnalysis}. \begin{comment} \begin{figure} \caption{Box plots of the estimation errors over $100$ replications of model~\eqref{ecve_simmodel} with $n =300$ over $ m =|\mathcal{F}| = (2,4,8,10,26,50,76,100)$ across four ensembles.} \label{ecve_test_nr basisfunction_Fig2} \end{figure} \end{comment} \subsection{Demonstrating consistency}\label{sec:consistency_simulations} We explore the consistency rate of the \textit{conditional variance estimator (CVE)}, the \textit{ensemble conditional variance estimator (ECVE)}, \texttt{csMAVE}, and \texttt{mMAVE} in model~\eqref{ecve_simmodel}. Specifically, we apply seven estimation methods, the first five targeting the central subspace $\mathcal{S}_{Y\mid\X}$ and the last two $\mathcal{S}_{\E\left(Y\mid\X\right)}$, as follows.
For $\mathcal{S}_{Y\mid\X}$, we compare \texttt{ECVE} for the indicator (I), Fourier (II), monomial (III), and Box-Cox (IV) ensembles, as in Section \ref{sec:influence}, and \texttt{csMAVE} (V). \begin{comment} \begin{itemize} \item[I:] \textbf{Fourier} is \texttt{ECVE}: $\mathcal{F}_{m_n,\text{Fourier}}= \{\sin(jx): j = 1,\ldots,m_n/2\} \cup \{\cos(jx): j = 1,\ldots,m_n/2\}$ \item[II:] \textbf{Indicator} is \texttt{ECVE}: $\mathcal{F}_{m_n,\text{Indicator}}= \{1_{\{x \geq q_j\}}: j = 1,\ldots,m_n\}$, where $q_j$ is the $j/(m_n +1)$th empirical quantile of $(Y_i)_{i=1,\ldots,n}$ \item[III:] \textbf{Monom} is \texttt{ECVE}: $\mathcal{F}_{m_n,\text{Monom}}= \{x^j: j = 1,\ldots,m_n\}$ \item[IV:] \textbf{BoxCox} is \texttt{ECVE}: $\mathcal{F}_{m_n,\text{BoxCox}}= \{(x^{t_j} -1)/t_j: t_j = 0.1 + 2(j-1)/(m_n-1), j = 1,\ldots,m_n-1\} \cup \{\log(x)\}$. \item[V:] \textbf{csMAVE} from \cite{WangXia2008} \end{itemize} \end{comment} For $\mathcal{S}_{\E\left(Y\mid\X\right)}$, we use \texttt{CVE} (VI) of \cite{FertlBura} and \texttt{mMAVE} (VII) of \cite{Xiaetal2002}. The simulation is performed as follows. We generate 100 i.i.d.\ samples $(Y_i,{\mathbf X}_i^T)_{i=1,\ldots,n}$ from \eqref{ecve_simmodel} for each sample size $n=100,200,400,600,800,1000$. Model \eqref{ecve_simmodel} is a two-dimensional model with $\mathcal{S}_{\E\left(Y\mid\X\right)}=\operatorname{span}({\mathbf b}_2) \subsetneq \mathcal{S}_{Y\mid\X}=\operatorname{span}({\mathbf B})$. For methods (I)-(V), we set $k =2$ and estimate ${\mathbf B} \in {\mathbb R}^{10 \times 2}$. For (VI) and (VII), we set $k=1$ and estimate ${\mathbf b}_2 \in {\mathbb R}^{10 \times 1}$. Then, we calculate $\text{err}_{j,n} = \|\widehat{{\mathbf B}}\widehat{{\mathbf B}}^T - {\mathbf B}{\mathbf B}^T\|/(2k)^{1/2}$, $j=1,\ldots,100$, $n \in \{100,200,400,600,800,1000\}$. Figure~\ref{ecve_consisteny_Fig1} displays the distribution of $\text{err}_{j,n}$ for increasing $n$ for the seven methods.
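The accuracy measure $\text{err}_{j,n}$ is a normalized distance between the projections onto the estimated and true subspaces; a small sketch of ours (assuming orthonormal bases), which equals $0$ for identical and $1$ for orthogonal subspaces:

```python
import numpy as np

def subspace_error(B_hat, B):
    """err = || B_hat B_hat^T - B B^T ||_F / sqrt(2 k), for orthonormal
    bases B_hat, B (p x k) of k-dimensional subspaces; lies in [0, 1]."""
    k = B.shape[1]
    P_hat = B_hat @ B_hat.T   # projection onto the estimated subspace
    P = B @ B.T               # projection onto the true subspace
    return np.linalg.norm(P_hat - P, "fro") / np.sqrt(2 * k)
```

Because the projections, not the bases, enter the formula, the measure is invariant to the choice of basis within each subspace.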
As the sample size increases, the Indicator and Fourier \texttt{ECVE} and \texttt{csMAVE} are on par with respect to both speed and accuracy. The accuracy of the Box-Cox \texttt{ECVE} improves as the sample size increases, but at a slower rate. There is no improvement in the accuracy of the monomial \texttt{ECVE}. This is not surprising, as the monomial and Box-Cox ensembles do not satisfy the assumption $\sup_{t \in \Omega_T} |f_t(Y)| < M < \infty$ in Theorem~\ref{ECVE_consistency}, in contrast to the Indicator and Fourier ensembles. The Fourier and Indicator \texttt{ECVE} and \texttt{csMAVE} estimate $\mathcal{S}_{Y\mid\X} = \operatorname{span}\{{\mathbf B}\}$ consistently, and the mean subspace methods, \texttt{CVE} and \texttt{mMAVE}, estimate $\mathcal{S}_{\E\left(Y\mid\X\right)} = \operatorname{span}\{{\mathbf b}_2\}$ consistently. \begin{comment} \begin{figure} \caption{Estimation error distribution of model~\eqref{ecve_simmodel} plotted over $n = (100,200,400,600,800,1000)$ for the seven (I-VII) methods} \label{ecve_consisteny_Fig1} \end{figure} \end{comment} \begin{figure} \caption{Box plots of the estimation errors over $100$ replications of model~\eqref{ecve_simmodel} with $n =300$ over $ m =|\mathcal{F}| = (2,4,8,10,26,50,76,100)$ across four ensembles.} \label{ecve_test_nr basisfunction_Fig2} \end{figure} \begin{figure} \caption{Estimation error distribution of model~\eqref{ecve_simmodel} plotted over $n = (100,200,400,600,800,1000)$ for the seven (I-VII) methods.} \label{ecve_consisteny_Fig1} \end{figure} \subsection{Evaluating estimation accuracy}\label{evaluate_acc} We consider seven models, M1-M7, defined in Table~\ref{tab:e_mod}, three sample sizes $n \in \{100,200,400\}$, and three distributions of the predictor vector ${\mathbf X} = \greekbold{\Sigma}^{1/2}{\mathbf Z} \in {\mathbb R}^p$, where $\greekbold{\Sigma}=(\Sigma_{ij})_{i,j=1,\ldots,p}$ with $\Sigma_{ij} = 0.5^{|i-j|}$. Throughout, $p=10$, ${\mathbf B}$ consists of the first $k$ columns of ${\mathbf I}_p$, and $\epsilon \sim N(0,1)$ is independent of ${\mathbf X}$.
As in \cite{WangXia2008}, we consider three distributions for ${\mathbf Z} \in {\mathbb R}^p$: (I) $N(0,{\mathbf I}_p)$; (II) the $p$-dimensional uniform distribution on $[-\sqrt{3},\sqrt{3}]^p$, i.e.\ all components of ${\mathbf Z}$ are independent and uniformly distributed; and (III) a mixture distribution $N(0,{\mathbf I}_p)+\greekbold{\mu}$, where $\greekbold{\mu} = (\mu_1,\ldots,\mu_p)^T \in {\mathbb R}^p$ with $\mu_j=2$, $\mu_k=0$ for $k \ne j$, and $j$ uniformly distributed on $\{1,\ldots,p\}$. The simple and weighted [see Section~\ref{weight_section}] \texttt{Fourier} and \texttt{Indicator} ensembles are used to form four \textit{ensemble conditional variance estimators} (\texttt{ECVE}), which are compared against the reference method \texttt{csMAVE} \cite{WangXia2008}, implemented in the \texttt{R} package \texttt{MAVE}. The monomial and Box-Cox ensembles were also used but did not give satisfactory results and are not reported. The source code for \textit{conditional variance estimation} and its ensemble version is available at \url{https://git.art-ist.cc/daniel/CVE}.
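The predictor generation just described can be sketched as follows; a minimal NumPy sketch under the stated settings ($p = 10$, $\Sigma_{ij} = 0.5^{|i-j|}$), with an illustrative function name:

```python
import numpy as np

def make_X(n, p=10, dist="normal", seed=None):
    """Draw n predictors X = Sigma^{1/2} Z with Sigma_ij = 0.5^{|i-j|}
    and Z from one of the three distributions (I)-(III)."""
    rng = np.random.default_rng(seed)
    if dist == "normal":      # (I) N(0, I_p)
        Z = rng.standard_normal((n, p))
    elif dist == "uniform":   # (II) uniform on [-sqrt(3), sqrt(3)]^p, unit component variance
        Z = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, p))
    elif dist == "mixture":   # (III) N(0, I_p) + mu, with mu_j = 2 at a uniformly chosen j
        Z = rng.standard_normal((n, p))
        Z[np.arange(n), rng.integers(p, size=n)] += 2.0
    else:
        raise ValueError(dist)
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    w, V = np.linalg.eigh(Sigma)  # symmetric square root of Sigma via eigendecomposition
    return Z @ (V @ np.diag(np.sqrt(w)) @ V.T)

X = make_X(500, dist="uniform", seed=0)
assert X.shape == (500, 10)
```

Each component of (II) has variance $(2\sqrt{3})^2/12 = 1$, so all three choices of ${\mathbf Z}$ are standardized before coloring with $\greekbold{\Sigma}^{1/2}$.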
\begin{comment}\begin{itemize} \item[1] \texttt{Fourier} \item[2] \texttt{Fourier\_weighted} \item[3] \texttt{Indicator} \item[4] \texttt{Indicator\_weighted} \item[5] \texttt{csMAVE} \end{itemize} \end{comment} \begin{table}[!htbp] \centering \caption{Models} {\small \begin{tabular}{lcccc} \toprule Name & Model & $\mathcal{S}_{\E\left(Y\mid\X\right)}$& $\mathcal{S}_{Y\mid\X}$&$k$ \\ \midrule M1& $Y = \frac{1}{{\mathbf b}_1^T{\mathbf X}}+0.2\epsilon$ &$\operatorname{span}\{{\mathbf b}_1\} $&$\operatorname{span}\{{\mathbf b}_1\} $& 1\\ M2& $Y = \cos(2{\mathbf b}_1^T{\mathbf X})+\cos({\mathbf b}_2^T{\mathbf X})+0.2\epsilon$&$\operatorname{span}\{{\mathbf b}_1,{\mathbf b}_2\} $&$\operatorname{span}\{{\mathbf b}_1, {\mathbf b}_2\}$ &2\\ M3&$Y = ({\mathbf b}_2^T{\mathbf X}) + (0.5 + ({\mathbf b}_1^T{\mathbf X})^2)\epsilon$&$\operatorname{span}\{{\mathbf b}_2\} $&$\operatorname{span}\{{\mathbf b}_1,{\mathbf b}_2\} $& 2\\ M4& $Y = \frac{{\mathbf b}_1^T{\mathbf X}}{0.5+(1.5+{\mathbf b}_2^T{\mathbf X})^2}+(|{\mathbf b}_1^T{\mathbf X}| + ({\mathbf b}_2^T{\mathbf X})^2 +0.5)\epsilon$&$\operatorname{span}\{{\mathbf b}_1,{\mathbf b}_2\} $&$\operatorname{span}\{{\mathbf b}_1,{\mathbf b}_2\}$ & 2\\ M5& $Y = {\mathbf b}_3^T{\mathbf X} +\sin({\mathbf b}_1^T {\mathbf X}({\mathbf b}_2^T{\mathbf X})^2)\epsilon$&$\operatorname{span}\{{\mathbf b}_3\} $&$\operatorname{span}\{{\mathbf b}_1,{\mathbf b}_2,{\mathbf b}_3\}$ & 3\\ M6& $Y = 0.5({\mathbf b}_1^T{\mathbf X})^2\epsilon$&$\operatorname{span}\{{\mathbf 0}\} $&$\operatorname{span}\{{\mathbf b}_1\} $& 1\\ M7& $Y = \cos({\mathbf b}_1^T{\mathbf X} - \pi)+ \cos(2{\mathbf b}_1^T{\mathbf X})\epsilon$&$\operatorname{span}\{{\mathbf b}_1\} $&$\operatorname{span}\{{\mathbf b}_1\} $& 1\\ \bottomrule \end{tabular} } \label{tab:e_mod} \end{table} We set $q = p - k$ and generate $r=100$ replicates of models M1-M7 with the specified distribution of ${\mathbf X}$ and sample size $n$. We estimate ${\mathbf B}$ using the four ECVE methods and csMAVE. 
The accuracy of the estimates is assessed using $\text{err} = \|\mathbf{P}_{\mathbf B} - \mathbf{P}_{\widehat{{\mathbf B}}}\|_2/\sqrt{2k} \in [0,1]$, where $\mathbf{P}_{\mathbf B} = {\mathbf B}({\mathbf B}^T{\mathbf B})^{-1}{\mathbf B}^T$ is the orthogonal projection matrix on $\operatorname{span}\{{\mathbf B}\}$. The factor $\sqrt{2k}$ normalizes the distance, with values closer to zero indicating better agreement and values closer to one indicating strong disagreement. The results are displayed in Tables~\ref{tab:summaryM1}-\ref{tab:summaryM8}. In M1, which is taken from \cite{WangXia2008}, the mean subspace agrees with the central subspace, i.e.\ $\mathcal{S}_{\E\left(Y\mid\X\right)} = \mathcal{S}_{Y\mid\X}$, but due to the unboundedness of the link function $g(x) = 1/x$, most mean subspace estimation methods, such as \texttt{SIR}, \texttt{mMAVE} and \texttt{CVE}, fail. In contrast, all four ensemble CVE methods and \texttt{csMAVE} succeed in identifying the minimal dimension reduction subspace, with ensemble CVE performing slightly better, as can be seen in Table~\ref{tab:summaryM1}. In particular, \texttt{Fourier} is the best performing method. M2 is a two-dimensional mean subspace model, i.e.\ $\mathcal{S}_{\E\left(Y\mid\X\right)} = \mathcal{S}_{Y\mid\X}$, and in Table~\ref{tab:summaryM2} we see that \texttt{csMAVE} is the best performing method. M3 is the same as model \eqref{ecve_simmodel}; here the mean subspace is a proper subset of the central subspace. In Table~\ref{tab:summaryM3} we see that \texttt{Indicator\_weighted} and \texttt{csMAVE} are the best performers and are roughly on par. In M4, the two-dimensional mean subspace, which also determines the heteroskedasticity, agrees with the central subspace. In Table~\ref{tab:summaryM4} we see that this model is quite challenging for all methods, and only \texttt{Indicator\_weighted} and \texttt{csMAVE} give satisfactory results, with \texttt{Indicator\_weighted} the clear winner.
In M5, the heteroskedasticity is induced by an interaction term, and the three-dimensional central subspace is a proper superset of the one-dimensional mean subspace. In Table~\ref{tab:summaryM5} we see that M5 is quite challenging for all five methods; therefore, we increase the sample size $n$ to $800$. For M5, the two weighted ensemble conditional variance estimators are the best performing methods, followed by \texttt{csMAVE}. M6 is a one-dimensional pure central subspace model, whereas the mean subspace is the trivial subspace $\{{\mathbf 0}\}$. In Table~\ref{tab:summaryM7}, we see that for $n = 100$ the two weighted ECVEs are the best performing methods, and for higher sample sizes \texttt{csMAVE} is slightly more accurate than the ECVE methods. In M7, the one-dimensional mean subspace agrees with the central subspace, i.e.\ $\mathcal{S}_{\E\left(Y\mid\X\right)} = \mathcal{S}_{Y\mid\X}$, and the conditional first and second moments, $\mathbb{E}(Y^l \mid {\mathbf X})$ for $l =1,2$, are highly nonlinear and periodic functions of the sufficient reduction. In Table~\ref{tab:summaryM8}, we see that all ensemble conditional variance estimators clearly outperform \texttt{csMAVE}.
\begin{table}[!htbp] \centering \caption{Mean and standard deviation (in parenthesis) of estimation errors of M1} {\tiny \begin{tabular}{ll|ccccc} \toprule Distribution &$n$ &\texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule I& 100& \textbf{0.172} &0.201 &0.248& 0.265 & 0.210\\ & & (0.047) &(0.054)& (0.064)& (0.063)& (0.063)\\ \midrule I& 200& \textbf{0.120} &0.142& 0.182& 0.197& 0.128\\ & & (0.029)& (0.037) &(0.045)& (0.049)& (0.037)\\ \midrule I& 400& \textbf{0.079}& 0.091 &0.126& 0.136& 0.080\\ & & (0.020)& (0.024) &(0.037)& (0.040)& (0.024)\\ \midrule II& 100& \textbf{0.174}&0.196& 0.241& 0.254&{0.193}\\ & & (0.038)& (0.049) &(0.055)& (0.056)& (0.059)\\ \midrule II& 200& \textbf{0.110} &0.127& 0.170& 0.182& {0.121}\\ & & (0.031)& (0.033) &(0.043)& (0.045)& (0.036)\\ \midrule II& 400& \textbf{0.078}& 0.091 &0.122 &0.132& {0.079}\\ & & (0.021) &(0.026)& (0.031)& (0.033) &(0.020)\\ \midrule III& 100& \textbf{0.187}& 0.218& 0.256 &0.263& {0.204}\\ & &(0.045)& (0.053)& (0.060)& (0.058) &(0.066)\\ \midrule III& 200& \textbf{0.118}& 0.137 &0.171&0.179 & \textbf{0.118}\\ & & (0.031)& (0.038)& (0.043)& (0.042)& (0.033)\\ \midrule III& 400& 0.082 &0.101&0.127 &0.132& \textbf{0.079}\\ & & (0.020) &(0.029) &(0.031) &(0.032) &(0.022)\\ \bottomrule \end{tabular} } \label{tab:summaryM1} \end{table} \begin{table}[!htbp] \centering \caption{Mean and standard deviation (in parenthesis) of estimation errors of M2} {\tiny \begin{tabular}{ll|ccccc} \toprule Distribution &$n$ &\texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule I& 100& {0.670} &0.601 &0.629& 0.582 & \textbf{0.575}\\ & & (0.089) &(0.135)& (0.130)& (0.140)& (0.176)\\ \midrule I& 200& {0.478} &0.388& 0.436& 0.407& \textbf{0.219}\\ & & (0.201)& (0.152) &(0.193)& (0.162)& (0.136)\\ \midrule I& 400& {0.226}& 0.201 &0.231& 0.236& \textbf{0.098}\\ & & (0.153)& (0.074) 
&(0.127)& (0.111)& (0.025)\\ \midrule II& 100&{0.663}&0.652& 0.687& 0.658&\textbf{0.544}\\ & & (0.097)& (0.104) &(0.057)& (0.080)& (0.176)\\ \midrule II& 200& {0.525} &0.468& 0.601& 0.539& \textbf{0.182}\\ & & (0.171)& (0.171) &(0.127)& (0.148)& (0.096)\\ \midrule II& 400& {0.267}& 0.307 &0.375 &0.357& \textbf{0.087}\\ & & (0.081) &(0.146)& (0.154)& (0.141) &(0.021)\\ \midrule III& 100& {0.657}& 0.590&\textbf{0.530} &0.542& {0.603}\\ & &(0.104)& (0.148)& (0.155)& (0.148) &(0.193)\\ \midrule III& 200& {0.421}& 0.367 &0.306&0.336& \textbf{0.240}\\ & & (0.203)& (0.165)& (0.147)& (0.151)& (0.193)\\ \midrule III& 400& 0.170&0.170&0.144&0.170& \textbf{0.089}\\ & & (0.110) &(0.071) &(0.053) &(0.063) &(0.019)\\ \bottomrule \end{tabular} } \label{tab:summaryM2} \end{table} \begin{table}[!htbp] \centering \caption{Mean and standard deviation (in parenthesis) of estimation errors of M3} {\tiny \begin{tabular}{ll|ccccc} \toprule Distribution &$n$ &\texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule I& 100& 0.744& 0.657& 0.668& \textbf{0.561}& 0.602\\ & & (0.056)&(0.113)&(0.083)&(0.142)&(0.147)\\ \midrule I& 200&0.702 &0.472 &0.559& \textbf{0.369}& 0.374 \\ & & (0.061)&(0.177)&(0.147)&(0.155)&(0.148)\\ \midrule I& 400& 0.621& 0.252& 0.408& 0.223& \textbf{0.203}\\ & & (0.148)&(0.102)&(0.177)&(0.064)&(0.061)\\ \midrule II& 100&0.751& 0.698& 0.683& \textbf{0.570} &0.635\\ & & (0.041)&(0.076)&(0.080)&(0.136)&(0.136)\\ \midrule II& 200& 0.719& 0.521& 0.584 &\textbf{0.355} &0.387\\ & & (0.040)&(0.163)&(0.111)&(0.097)&(0.144)\\ \midrule II& 400& 0.686& 0.267& 0.452 &0.252& \textbf{0.201}\\ & & (0.079)&(0.084)&(0.153)&(0.052)&(0.045)\\ \midrule III& 100& 0.739& 0.676& 0.654& \textbf{0.563}&0.571\\ & &(0.073)&(0.106)&(0.105)&(0.150)&(0.120)\\ \midrule III& 200& 0.704& 0.546& 0.523& 0.368&\textbf{0.330}\\ & & (0.048)&(0.162)&(0.171)&(0.153)&(0.131)\\ \midrule III& 400& 0.616 &0.252 &0.297& 0.202 &\textbf{0.179}\\ & 
& (0.151)&(0.113)&(0.106)&(0.055)&(0.042)\\ \bottomrule \end{tabular} } \label{tab:summaryM3} \end{table} \begin{table}[!htbp] \centering \caption{Mean and standard deviation (in parenthesis) of estimation errors of M4} {\tiny \begin{tabular}{ll|ccccc} \toprule Distribution &$n$ &\texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule I& 100& {0.836} &0.794 &0.774& \textbf{0.713} & {0.803}\\ & & (0.072) &(0.076)& (0.074)& (0.105)& (0.087)\\ \midrule I& 200& {0.820} &0.733& 0.747& \textbf{0.545}& {0.685}\\ & & (0.066)& (0.094) &(0.060)& (0.150)& (0.116)\\ \midrule I& 400& {0.782}& 0.633 &0.710& \textbf{0.364}& {0.534}\\ & & (0.059)& ( 0.142) &(0.081)& (0.129)& (0.155)\\ \midrule II& 100&{0.839}&0.828& 0.788& \textbf{0.751}&{0.818}\\ & & (0.067)& (0.064) &(0.062)& (0.095)& (0.095)\\ \midrule II& 200& {0.834} &0.781& 0.759& \textbf{0.660}& {0.701}\\ & & (0.171)& (0.081) &(0.040)& (0.117)& (0.111)\\ \midrule II& 400& {0.812}& 0.712 &0.739 &\textbf{0.511}& {0.544}\\ & & (0.059) &(0.097)& (0.038)& (0.135) &(0.151)\\ \midrule III& 100& {0.838}& 0.815&{0.764} &\textbf{0.706}& {0.786}\\ & &(0.074)& (0.077)& (0.069)& (0.108) &(0.109)\\ \midrule III& 200& {0.829}&0.761&0.726&\textbf{0.544}& {0.676}\\ & & (0.071)& (0.099)& (0.083)& (0.149)& (0.123)\\ \midrule III& 400& 0.796&0.646&0.669&\textbf{0.317}& {0.506}\\ & & (0.069) &(0.139) &(0.113) &(0.110) &(0.146)\\ \bottomrule \end{tabular} } \label{tab:summaryM4} \end{table} \begin{table}[!htbp] \centering \caption{Mean and standard deviation (in parenthesis) of estimation errors of M5} {\tiny \begin{tabular}{ll|ccccc} \toprule Distribution &$n$ &\texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule I& 100& 0.705& \textbf{0.682}& 0.708 &0.691 &0.709\\ & & (0.060)&(0.067)&(0.060)&(0.056)&(0.069)\\ \midrule I& 200& 0.679 &\textbf{0.634} &0.688 &0.642 &0.687\\ & & 
(0.061)&(0.054)&(0.058)&(0.060)&(0.073)\\ \midrule I& 400& 0.644 &\textbf{0.588}& 0.660& 0.591& 0.646\\ & & (0.050)&(0.047)&(0.056)&(0.061)&(0.082)\\ \midrule I& 800& 0.622& 0.543& 0.629& \textbf{0.493}& 0.553\\ & & (0.032)&(0.078)&(0.035)&(0.100)&(0.077)\\ \midrule II& 100&0.712& \textbf{0.688}& 0.713& 0.697 &0.722\\ & & (0.060)&(0.069)&(0.051)&(0.057)&(0.054)\\ \midrule II& 200& 0.693& \textbf{0.669} &0.694&\textbf{0.669}& 0.697\\ & & (0.058)&(0.065)&(0.054)&(0.057)&(0.064)\\ \midrule II& 400& 0.670& \textbf{0.614}& 0.681& 0.633& 0.687\\ & & (0.054)&(0.059)&(0.052)&(0.050)&(0.067)\\ \midrule II& 800& 0.660& \textbf{0.584}& 0.672& 0.585& 0.589\\ & & (0.053)&(0.045)&(0.052)&(0.055)&(0.074)\\ \midrule III& 100& 0.706& \textbf{0.687}& 0.703& 0.691 &0.724\\ & &(0.062)&(0.062)&(0.061)&(0.061)&(0.051)\\ \midrule III& 200& 0.701& \textbf{0.655} &0.702& 0.668& 0.703\\ & & (0.063)&(0.069)&(0.058)&(0.074)&(0.080)\\ \midrule III& 400& 0.659 &\textbf{0.603} &0.664 &0.604& 0.682\\ & & (0.062)&(0.072)&(0.059)&(0.077)&(0.081)\\ \midrule III& 800&0.657 &0.562& 0.651 &\textbf{0.513}& 0.602\\ & & (0.064)&(0.068)&(0.052)&(0.109)&(0.087)\\ \bottomrule \end{tabular} } \label{tab:summaryM5} \end{table} \begin{table}[!htbp] \centering \caption{Mean and standard deviation (in parenthesis) of estimation errors of M6} {\tiny \begin{tabular}{ll|ccccc} \toprule Distribution &$n$ &\texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule I& 100& {0.304} &\textbf{0.294} &0.492& {0.299} & {0.539}\\ & & (0.092) &(0.082)& (0.135)& (0.087)& (0.255)\\ \midrule I& 200& {0.217} &0.213& 0.329& {0.205}& \textbf{0.194}\\ & & (0.057)& (0.054) &(0.107)& (0.059)& (0.061)\\ \midrule I& 400& {0.142}& 0.146 &0.199& {0.138}& \textbf{0.114}\\ & & (0.036)& ( 0.035) &(0.069)& (0.039)& (0.034)\\ \midrule II& 100&{0.308}&\textbf{0.293}& 0.479& {0.299}&{0.488}\\ & & (0.094)& (0.073) &(0.129)& (0.086)& (0.248)\\ \midrule II& 200& {0.205} &0.210& 
0.321&{0.210}& \textbf{0.192}\\ & & (0.058)& (0.057) &(0.095)& (0.058)& (0.061)\\ \midrule II& 400& {0.144}& 0.150 &0.190 &{0.142}& \textbf{0.111}\\ & & (0.039) &(0.042)& (0.055)& (0.045) &(0.032)\\ \midrule III& 100& {0.373}& 0.375&{0.504} &\textbf{0.322}& {0.562}\\ & &(0.152)& (0.175)& (0.143)& (0.083) &(0.273)\\ \midrule III& 200& {0.226}&0.230&0.340&\textbf{0.218}& \textbf{0.218}\\ & & (0.065)& (0.070)& (0.100)& (0.060)& (0.083)\\ \midrule III& 400& 0.149&0.151&0.194&{0.146}& \textbf{0.114}\\ & & (0.039) &(0.038) &(0.068) &(0.042) &(0.032)\\ \bottomrule \end{tabular} } \label{tab:summaryM7} \end{table} \begin{table}[!htbp] \centering \caption{Mean and standard deviation (in parenthesis) of estimation errors of M7} {\tiny \begin{tabular}{ll|ccccc} \toprule Distribution &$n$ &\texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule I& 100& {0.273} &\textbf{0.237} &0.241& {0.252} & {0.790}\\ & & (0.169) &(0.050)& (0.136)& (0.158)& (0.316)\\ \midrule I& 200& {0.160} &0.159& \textbf{0.143}& {0.153}& {0.425}\\ & & (0.093)& (0.041) &(0.083)& (0.093)& (0.391)\\ \midrule I& 400& {0.098}& 0.104 &\textbf{0.088}& {0.102}& {0.127}\\ & & (0.024)& ( 0.025) &(0.021)& (0.093)& (0.202)\\ \midrule II& 100&\textbf{0.233}&{0.260}& 0.236& {0.265}&{0.902}\\ & & (0.057)& (0.134) &(0.142)& (0.185)& (0.219)\\ \midrule II& 200& {0.154} &0.176& \textbf{0.145}&{0.150}& {0.649}\\ & & (0.058)& (0.124) &(0.093)& (0.094)& (0.414)\\ \midrule II& 400& {0.097}& 0.110 &\textbf{0.087} &{0.099}& {0.295}\\ & & (0.025) &(0.094)& (0.022)& (0.093) &(0.391)\\ \midrule III& 100& {0.274}& 0.303&\textbf{0.238} &{0.298}& {0.933}\\ & &(0.201)& (0.237)& (0.160)& (0.242) &(0.163)\\ \midrule III& 200& {0.167}&0.188&\textbf{0.159}&{0.167}& {0.678}\\ & & (0.120)& (0.159)& (0.150)& (0.144)& (0.408)\\ \midrule III& 400& 0.100&0.116&\textbf{0.089}&{0.112}& {0.375}\\ & & (0.023) &(0.090) &(0.023) &(0.129) &(0.431)\\ \bottomrule \end{tabular} } 
\label{tab:summaryM8} \end{table} \section{Boston Housing Data}\label{sec:dataAnalysis} We apply the ensemble conditional variance estimator and \texttt{csMAVE} to the \texttt{Boston Housing} data set. This data set has been extensively used as a benchmark for assessing regression methods [see, for example, \cite{James2013}], and is available in the \texttt{R}-package \texttt{mlbench}. The data contains 506 instances of 14 variables from the 1970 Boston census, 13 of which are continuous. The binary variable \texttt{chas}, indexing proximity to the Charles river, is omitted from the analysis since ensemble conditional variance estimation operates under the assumption of continuous predictors. The target variable is the median value of owner-occupied homes, \texttt{medv}, in units of \$1{,}000. The 12 predictors are \texttt{crim} (per capita crime rate by town), \texttt{zn} (proportion of residential land zoned for lots over 25{,}000 sq.~ft.), \texttt{indus} (proportion of non-retail business acres per town), \texttt{nox} (nitric oxides concentration, in parts per 10 million), \texttt{rm} (average number of rooms per dwelling), \texttt{age} (proportion of owner-occupied units built prior to 1940), \texttt{dis} (weighted distances to five Boston employment centres), \texttt{rad} (index of accessibility to radial highways), \texttt{tax} (full-value property-tax rate per \$10{,}000), \texttt{ptratio} (pupil-teacher ratio by town), \texttt{lstat} (percentage of lower status of the population), and \texttt{b} ($1000(B - 0.63)^2$, where $B$ is the proportion of Black residents by town). We analyze these data with the weighted and unweighted Fourier and indicator ensembles, and \texttt{csMAVE}. We compute unbiased error estimates by leave-one-out cross-validation.
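The leave-one-out scheme can be sketched generically as below; the reduction and forward-model fits are simple stand-ins (a least-squares direction and a nearest-neighbor predictor), not the actual \texttt{ECVE} or \texttt{mars} implementations:

```python
import numpy as np

def loo_prediction_errors(X, y, fit_reduction, fit_forward):
    """For each case i: standardize the training set without i,
    estimate the reduction B, fit the forward model on the reduced
    training data, and record the squared error on the held-out case."""
    n = X.shape[0]
    errors = np.empty(n)
    for i in range(n):
        tr = np.delete(np.arange(n), i)
        mu, sd = X[tr].mean(axis=0), X[tr].std(axis=0)
        Xtr = (X[tr] - mu) / sd                 # standardize on the training set only
        B = fit_reduction(Xtr, y[tr])           # stand-in for ECVE / csMAVE
        predict = fit_forward(Xtr @ B, y[tr])   # stand-in for mars
        errors[i] = (y[i] - predict(((X[i] - mu) / sd) @ B)) ** 2
    return errors

# Stand-in fits: a normalized OLS direction and a 1-nearest-neighbor forward model.
def ols_direction(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return (b / np.linalg.norm(b)).reshape(-1, 1)

def nn_forward(R, y):
    return lambda r: y[np.argmin(np.linalg.norm(R - r, axis=1))]
```

Summary statistics of the returned errors (quartiles, median, mean) then give an out-of-sample comparison across reduction methods.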
We estimate the sufficient reduction with the five methods from the standardized training set, estimate the forward model from the reduced training set using \texttt{mars}, multivariate adaptive regression splines \cite{mars}, in the \texttt{R}-package \texttt{mda}, and predict the target variable on the test set. We report results for dimension $k =1$. The analysis was repeated setting $k = 2$, with similar results. Table~\ref{table:datasets} reports the first quartile, median, mean and third quartile of the out-of-sample prediction errors. The reductions estimated by the ensemble \texttt{CVE} methods achieve lower mean and median prediction errors than \texttt{csMAVE}. Also, both \texttt{ensemble CVE} and \texttt{csMAVE} are approximately on par with the variable selection methods in \cite[Section 8.3.3]{James2013}. \begin{table}[!htbp] \centering \caption{Summary statistics of the out-of-sample prediction errors for the Boston Housing data obtained by leave-one-out cross-validation} {\footnotesize \begin{tabular}{l|ccccc} \toprule & \texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule 25\% quantile & \textbf{0.766} & 0.785 &0.973& 0.916&0.851\\ median & \textbf{3.323} & 3.358 &3.844& 3.666&4.515\\ mean & 19.971 & 19.948 &19.716& \textbf{19.583} &24.309\\ 75\% quantile & 11.129 & 10.660 &11.099& \textbf{10.429} &16.521\\ \bottomrule \end{tabular} } \label{table:datasets} \end{table} \begin{comment} \begin{table}[!htbp] \centering \caption{Summary statistics of the standardized out-of-sample prediction errors for the Boston Housing data} { \begin{tabular}{l|ccccc} \toprule & \texttt{Fourier} & \texttt{Fourier\_weighted} & \texttt{Indicator}& \texttt{Indicator\_weighted}& \texttt{csMAVE} \\ \midrule 25\% quantile & 0.078 & \textbf{0.077} &0.088& 0.081 &0.120\\ median & \textbf{0.106} & 0.107 &0.107& 0.110 &0.177\\ mean & \textbf{0.251} & \textbf{0.251} &0.254& 0.255 &0.291\\ 75\% quantile &
\textbf{0.210} & \textbf{0.210} &0.234& 0.214 &0.235\\ \bottomrule \end{tabular} } \label{table:datasets} \end{table} \end{comment} Moreover, we plot the standardized response \texttt{medv} against the reduced \texttt{Fourier} and \texttt{csMAVE} predictors, ${\mathbf B}^T{\mathbf X}$, in Figure~\ref{Fig_real_data}. The sufficient reductions are estimated using the entire data set. A particular feature of these data is that the response \texttt{medv} appears to be truncated, as the highest median price of exactly \$50{,}000 is reported in 16 cases. Both methods pick up similar patterns, as reflected in the relatively high absolute correlation of the coefficients of the two reductions, $|\widehat{{\mathbf B}}_{\texttt{Fourier}}^T \widehat{{\mathbf B}}_{\texttt{csMAVE}}| = 0.786$. The coefficients of the reductions, $\widehat{{\mathbf B}}_{\texttt{Fourier}}$ and $\widehat{{\mathbf B}}_{\texttt{csMAVE}}$, are reported in Table~\ref{tab:dataset_reductions}. For the \texttt{Fourier} ensemble, the variables \texttt{rm} and \texttt{lstat} have the highest influence on the target variable \texttt{medv}. This agrees with the analysis in \cite[Section 8.3.4]{James2013}, where these two variables were found to be by far the most important using different variable selection techniques, such as random forests and boosted regression trees. In contrast, the reduction estimated by \texttt{csMAVE} has a lower coefficient for \texttt{rm} and higher ones for \texttt{crim} and \texttt{rad}.
\begin{table}[!htbp] {\scriptsize \centering \caption{{\footnotesize Rounded coefficients of the estimated reductions $\widehat{{\mathbf B}}_{\texttt{Fourier}}$ and $\widehat{{\mathbf B}}_{\texttt{csMAVE}}$ from the full Boston Housing data}} \begin{tabular}{l|cccccccccccc} \toprule & \texttt{crim}& \texttt{zn}& \texttt{indus}& \texttt{nox}& \texttt{rm}& \texttt{age}&\texttt{dis}&\texttt{rad}&\texttt{tax}&\texttt{ptratio}& \texttt{b}&\texttt{lstat}\\ \midrule \texttt{Fourier}&0.21 &-0.01 &0.04 & 0.1 &-0.62 &0.16& 0.2 & 0 & 0.2 & 0.27 &-0.25 & 0.57\\ \texttt{csMAVE}&0.5& -0.05 &-0.06& 0.14& -0.27& 0.11& 0.24& -0.43& 0.3& 0.19& -0.15 & 0.51\\ \bottomrule \end{tabular} \label{tab:dataset_reductions} } \end{table} \begin{figure} \caption{Standardized response \texttt{medv} plotted against the reduced \texttt{Fourier} and \texttt{csMAVE} predictors, $\widehat{{\mathbf B}}^T{\mathbf X}$, estimated from the full Boston Housing data.} \label{Fig_real_data} \end{figure} \section{Discussion}\label{sec:discussion} In this paper, we extend the \textit{mean subspace} conditional variance estimation (\texttt{CVE}) of \cite{FertlBura} to the ensemble conditional variance estimation (\texttt{ECVE}), which exhaustively estimates the \textit{central subspace}, by applying the ensemble device introduced by \cite{YinLi2011}. In Section~\ref{sec:consistency} we showed that the new estimator is consistent for the central subspace. The regularity conditions for consistency require the joint distribution of the target variable and predictors, $(Y,{\mathbf X}^T)^T$, to be sufficiently smooth. They are comparable to those under which the main competitor \texttt{csMAVE} \cite{WangXia2008} is consistent. We analyzed the estimation accuracy of \texttt{ECVE} in Section~\ref{sec:simulations}. We found that it is either on par with \texttt{csMAVE} or exhibits substantial performance improvements in certain models. We could not characterize the defining features of the models for which ensemble conditional variance estimation outperforms \texttt{csMAVE}.
This is an interesting line of further research, together with establishing more theoretical results such as the rate of convergence, the estimation of the structural dimension, and the limiting distribution of the estimator. \texttt{ECVE} identifies the central subspace via the orthogonal complement and thus circumvents the estimation and inversion of the variance matrix of the predictors ${\mathbf X}$. This renders the method formally applicable to settings where the sample size $n$ is small or smaller than $p$, the number of predictors, and leads to potential future research. Throughout, the dimension of the central subspace, $k = \dim(\mathcal{S}_{Y\mid\X})$, is assumed to be known. The derivation of asymptotic tests for the dimension is technically very challenging due to the lack of a closed-form solution and the lack of independence of all quantities in the calculation. The dimension can be estimated via cross-validation, as in \cite{WangXia2008} and \cite{FertlBura}, or via information criteria. \begin{thebibliography}{MMW{\etalchar{+}}63} \bibitem[AC09]{AdragniCook2009} {Kofi P.} Adragni and {R. Dennis} Cook. \newblock Sufficient dimension reduction and prediction in regression. \newblock {\em Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences}, 367(1906):4385--4405, 11 2009. \bibitem[Ame85]{Takeshi} Takeshi Amemiya. \newblock {\em Advanced Econometrics}. \newblock Harvard University Press, 1985. \bibitem[Boo02]{Boothby} W.~M. Boothby. \newblock {\em An Introduction to Differentiable Manifolds and Riemannian Geometry}. \newblock Academic Press, 2002. \bibitem[CD88]{locallinearsmoothing} William~S. Cleveland and Susan~J. Devlin. \newblock Locally weighted regression: An approach to regression analysis by local fitting. \newblock {\em Journal of the American Statistical Association}, 83(403):596--610, 1988. \bibitem[CL02]{CookLi2002} R.~Dennis Cook and Bing Li. \newblock Dimension reduction for conditional mean in regression.
\newblock {\em Ann. Statist.}, 30(2):455--474, 04 2002. \bibitem[Coo98]{Cook1998} R.~Dennis Cook. \newblock {\em Regression Graphics: Ideas for Studying Regressions through Graphics}. \newblock Wiley, New York, 1998. \bibitem[Coo07]{Cook2007} R.~Dennis Cook. \newblock Fisher lecture: Dimension reduction in regression. \newblock {\em Statist. Sci.}, 22(1):1--26, 02 2007. \bibitem[Fad85]{regProb} Arnold~M. Faden. \newblock The existence of regular conditional probabilities: Necessary and sufficient conditions. \newblock {\em The Annals of Probability}, 13(1):288--298, 1985. \bibitem[FB21]{FertlBura} Lukas Fertl and Efstathia Bura. \newblock Conditional variance estimator for sufficient dimension reduction, 2021. \bibitem[Fri91]{mars} Jerome~H. Friedman. \newblock Multivariate adaptive regression splines. \newblock {\em The Annals of Statistics}, 19(1):1--67, 1991. \bibitem[GH94]{Grassman} Phillip Griffiths and Joseph Harris. \newblock {\em Principles of Algebraic Geometry}. \newblock Wiley Classics Library. John Wiley \& Sons, Inc., New York, 1994. \newblock Reprint of the 1978 original. \bibitem[Han08]{Hansen2008} Bruce~E. Hansen. \newblock Uniform convergence rates for kernel estimation with dependent data. \newblock {\em Econometric Theory}, 24:726--748, 2008. \bibitem[Heu95]{HarroHeuser} H.~Heuser. \newblock {\em Analysis 2}, 9. Auflage. \newblock Teubner, 1995. \bibitem[Jen69]{jennrich1969} Robert~I. Jennrich. \newblock Asymptotic properties of non-linear least squares estimators. \newblock {\em Ann. Math. Statist.}, 40(2):633--643, 04 1969. \bibitem[JWHT13]{James2013} Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. \newblock {\em An Introduction to Statistical Learning: with Applications in R}. \newblock Springer, 2013. \bibitem[Kar93]{regProb2} Alan~F. Karr. \newblock {\em Probability}. \newblock Springer Texts in Statistics. Springer-Verlag New York, 1993. \bibitem[Li91]{Li1991} K.~C. Li.
\newblock Sliced inverse regression for dimension reduction. \newblock {\em Journal of the American Statistical Association}, 86(414):316--327, 1991. \bibitem[Li18]{Li2018} Bing Li. \newblock {\em Sufficient Dimension Reduction: Methods and Applications with R}. \newblock CRC Press, Taylor \& Francis Group, 2018. \bibitem[LJFR04]{Leaoetal2004} D.~Leao~Jr., M.~Fragoso, and P.~Ruffino. \newblock Regular conditional probability, disintegration of probability and Radon spaces. \newblock {\em Proyecciones (Antofagasta)}, 23:15--29, 05 2004. \bibitem[MMW{\etalchar{+}}63]{mickey1963test} M.R. Mickey, P.B. Mundle, D.N. Walker, A.M. Glinski, Inc C-E-I-R, and Aerospace Research~Laboratories (U.S.). \newblock {\em Test Criteria for Pearson Type III Distributions}. \newblock Aerospace Research Laboratories, Office of Aerospace Research, United States Air Force, 1963. \bibitem[MZ13]{MaZhu2013} Yanyuan Ma and Liping Zhu. \newblock A review on dimension reduction. \newblock {\em International Statistical Review}, 81(1):134--150, 4 2013. \bibitem[Ber27]{Bernstein} S.~N. Bernstein. \newblock {\em Theory of Probability}. \newblock 1927. \bibitem[Tag11]{Tagare2011} Hemant~D. Tagare. \newblock Notes on optimization on Stiefel manifolds, January 2011. \bibitem[WX08]{WangXia2008} Hansheng Wang and Yingcun Xia. \newblock Sliced regression for dimension reduction. \newblock {\em Journal of the American Statistical Association}, 103(482):811--821, 2008. \bibitem[WY19]{MAVEpackage} Weiqiang Hang and Yingcun Xia. \newblock {\em MAVE: Methods for Dimension Reduction}, 2019. \newblock R package version 1.3.10. \bibitem[XTLZ02]{Xiaetal2002} Yingcun Xia, Howell Tong, W.~K. Li, and Li-Xing Zhu. \newblock An adaptive estimation of dimension reduction space. \newblock {\em Journal of the Royal Statistical Society: Series B (Statistical Methodology)}, 64(3):363--410, 2002. \bibitem[YL11]{YinLi2011} Xiangrong Yin and Bing Li.
\newblock Sufficient dimension reduction based on an ensemble of minimum average variance estimators. \newblock {\em Ann. Statist.}, 39(6):3392--3416, 12 2011. \bibitem[YLC08]{CompactAssumption} Xiangrong Yin, Bing Li, and R.~Dennis Cook. \newblock Successive direction extraction for estimating the central subspace in a multiple-index regression. \newblock {\em Journal of Multivariate Analysis}, 99:1733--1757, 09 2008. \bibitem[ZZ10]{ZENG2010271} Peng Zeng and Yu~Zhu. \newblock An integral transform method for estimating the central mean and central subspaces. \newblock {\em Journal of Multivariate Analysis}, 101(1):271--290, 2010. \end{thebibliography} \section*{Appendix} For any ${\mathbf V} \in {\mathcal S}(p,q)$, defined in \eqref{Smanifold}, we generically denote by ${\mathbf U}$ a basis of the orthogonal complement of its column space $\operatorname{span}\{{\mathbf V}\}$. That is, ${\mathbf U} \in {\mathcal S}(p,p-q)$ is such that $\operatorname{span}\{{\mathbf V}\} \perp \operatorname{span}\{{\mathbf U}\}$ and $\operatorname{span}\{{\mathbf V}\} \oplus \operatorname{span}\{{\mathbf U}\} = {\mathbb R}^p$; equivalently, ${\mathbf U}^T{\mathbf V} = {\mathbf 0} \in {\mathbb R}^{(p-q) \times q}$ and ${\mathbf U}^T{\mathbf U} = {\mathbf I}_{p-q}$. For any ${\mathbf x}, \mathbf s_0 \in {\mathbb R}^p$ we can always write \begin{equation}\label{ortho_decomp} {\mathbf x} = \mathbf s_0 + \mathbf{P}_{\mathbf V} ({\mathbf x} - \mathbf s_0) + \mathbf{P}_{\mathbf U} ({\mathbf x} - \mathbf s_0) = \mathbf s_0 + {\mathbf V}{\mathbf r}_1 + {\mathbf U}{\mathbf r}_2, \end{equation} where ${\mathbf r}_1 = {\mathbf V}^T({\mathbf x}-\mathbf s_0) \in {\mathbb R}^{q}$ and ${\mathbf r}_2 = {\mathbf U}^T({\mathbf x}-\mathbf s_0) \in {\mathbb R}^{p-q}$.
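The decomposition \eqref{ortho_decomp} is easy to verify numerically; a small sketch with a random semi-orthogonal ${\mathbf V}$ obtained from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 5, 2

# Random orthogonal Q; V spans a q-dimensional subspace and U its
# orthogonal complement, so U^T V = 0 and U^T U = I_{p-q}.
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
V, U = Q[:, :q], Q[:, q:]
assert np.allclose(U.T @ V, 0)
assert np.allclose(U.T @ U, np.eye(p - q))

# x = s0 + V r1 + U r2 with r1 = V^T (x - s0), r2 = U^T (x - s0).
x, s0 = rng.standard_normal(p), rng.standard_normal(p)
r1, r2 = V.T @ (x - s0), U.T @ (x - s0)
assert np.allclose(x, s0 + V @ r1 + U @ r2)
```

The identity holds for any ${\mathbf x}$ and $\mathbf s_0$ because $\mathbf{P}_{\mathbf V} + \mathbf{P}_{\mathbf U} = {\mathbf V}{\mathbf V}^T + {\mathbf U}{\mathbf U}^T = {\mathbf I}_p$.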
\begin{proof}[Proof of Theorem~\ref{CVE_targets_meansubspace_thm}] The density of ${\mathbf X} \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}$ is given by \begin{gather}\label{density} f_{{\mathbf X}\mid{\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\}}({\mathbf r}_1) = \frac{f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1)}{\int_{{\mathbb R}^q}f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})d{\mathbf r}}, \end{gather} where ${\mathbf X}$ is the $p$-dimensional continuous random covariate vector with density $f_{\mathbf X}({\mathbf x})$, $\mathbf s_0 \in \text{supp}(f_{\mathbf X}) \subset {\mathbb R}^p$, and ${\mathbf V}$ belongs to the Stiefel manifold ${\mathcal S}(p,q)$ defined in \eqref{Smanifold}. Equation \eqref{density} follows from Theorem 3.1 of \cite{Leaoetal2004} and the fact that ${\mathbb R}^p$, endowed with its Borel $\sigma$-algebra $\mathcal{B}({\mathbb R}^p)$, is a Polish space, which in turn guarantees the existence of the regular conditional probability of ${\mathbf X}\mid{\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\}$ [see also \cite{regProb}]. Further, this measure is concentrated on the affine subspace $\mathbf s_0 +\operatorname{span}\{{\mathbf V}\} \subset {\mathbb R}^p$ with density \eqref{density}, which follows from Definition 8.38 and Theorem 8.39 of \cite{regProb2}, the orthogonal decomposition \eqref{ortho_decomp}, and the continuity of $f_{\mathbf X}$ (E.2). By assumption (E.1), $Y = g_{\textit{cs}}({\mathbf B}^T{\mathbf X},\epsilon)$ with $\epsilon \perp \!\!\! \perp {\mathbf X}$. Let $f \in \mathcal{F}$ be such that assumption (E.4) holds and let $\widetilde{{\mathbf B}}$ be a basis of $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$; that is, $\operatorname{span}\{\widetilde{{\mathbf B}}\} = \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} \subseteq \mathcal{S}_{Y\mid\X} = \operatorname{span}\{{\mathbf B}\}$.
By Theorem~\ref{Y_decomposition_thm}, $f(Y) = g(\widetilde{{\mathbf B}}^T{\mathbf X}) + \tilde{\epsilon}$, with $\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X}) = 0$ and $g$ twice continuously differentiable. Therefore, \begin{gather} \tilde{L}_\mathcal{F}({\mathbf V}, \mathbf s_0,f) = \mathbb{V}\mathrm{ar}\left(f(Y)\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) \notag\\ = \mathbb{V}\mathrm{ar}\left(g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\} \right) +2\mbox{cov}\left(\tilde{\epsilon},g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) \notag \\ +\mathbb{V}\mathrm{ar}\left(\tilde{\epsilon} \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\} \right) \notag \\ =\mathbb{V}\mathrm{ar}\left(g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\} \right) + \mathbb{V}\mathrm{ar}\left(\tilde{\epsilon} \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\} \right).\label{CVE_problem} \end{gather} The covariance term in \eqref{CVE_problem} vanishes since \begin{gather*} \mbox{cov}\left(\tilde{\epsilon},g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) = \mathbb{E}\left( \underbrace{\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X})}_{=0}g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) \\ - \mathbb{E}\left(g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right)\mathbb{E}\left( \underbrace{\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X})}_{=0} \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) = 0; \end{gather*} here the iterated conditioning is justified because
the sigma field generated by the event $\{{\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\}$ is a subset of that generated by ${\mathbf X}$. By the same argument and using \eqref{density}, \begin{gather} \mathbb{V}\mathrm{ar}\left(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) = \mathbb{E}(\tilde{\epsilon}^2\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}) \notag\\ = \mathbb{E}(\mathbb{E}(\tilde{\epsilon}^2\mid {\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}) = \mathbb{E}(h({\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}) \notag \\ = \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} h(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 ) \times f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1)d{\mathbf r}_1 / t^{(0)}({\mathbf V},\mathbf s_0,f), \notag \end{gather} where $\mathbb{E}(\tilde{\epsilon}^2\mid {\mathbf X} = {\mathbf x}) = h({\mathbf x})$. Using \eqref{density} again for the first term in \eqref{CVE_problem} yields formulas \eqref{e_LtildeVs0} and \eqref{tilde_eps_var}. To see that \eqref{e_objective}, \eqref{e_LtildeVs0}, and \eqref{tilde_eps_var} are well defined and continuous, let $\tilde{g}({\mathbf V},\mathbf s_0,{\mathbf r})= g({\mathbf B}^T\mathbf s_0 + {\mathbf B}^T{\mathbf V}{\mathbf r})^l f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})$ for $l = 1,2$, or $\tilde{g}({\mathbf V},\mathbf s_0,{\mathbf r})= h(\mathbf s_0 + {\mathbf V}{\mathbf r}) f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})$ for \eqref{tilde_eps_var}, which are continuous by assumption.
In consequence, the parameter-dependent integrals \eqref{tl} and \eqref{tilde_eps_var} are well defined and continuous if (1) $\tilde{g}({\mathbf V},\mathbf s_0,\cdot)$ is integrable for all ${\mathbf V} \in {\mathcal S}(p,q)$ and $\mathbf s_0 \in \text{supp}(f_{\mathbf X})$, (2) $\tilde{g}(\cdot,\cdot,{\mathbf r})$ is continuous for all ${\mathbf r}$, and (3) there exists an integrable dominating function of $\tilde{g}$ that does not depend on ${\mathbf V}$ and $\mathbf s_0$ [see \cite[p. 101]{HarroHeuser}]. Furthermore, for some compact set $\mathcal{K}$, $t^{(l)}({\mathbf V},\mathbf s_0) = \int_{\mathcal{K}} \tilde{g}({\mathbf V},\mathbf s_0,{\mathbf r}) d{\mathbf r}$, since $\text{supp}(f_{\mathbf X})$ is compact by (E.2). The function $\tilde{g}({\mathbf V},\mathbf s_0,{\mathbf r})$ is continuous in all inputs by the continuity of $g$ (E.4) and of $f_{\mathbf X}$ (E.2), and therefore attains a maximum on its compact domain. In consequence, all three conditions are satisfied, so that $t^{(l)}({\mathbf V},\mathbf s_0)$ is well defined and continuous. By the same argument, \eqref{tilde_eps_var} is well defined and continuous. Next, $\mu_l({\mathbf V},\mathbf s_0) = t^{(l)}({\mathbf V},\mathbf s_0)/t^{(0)}({\mathbf V},\mathbf s_0)$ is continuous since $t^{(0)}({\mathbf V},\mathbf s_0) > 0$ for all interior points $\mathbf s_0 \in \text{supp}(f_{\mathbf X})$ by the continuity of $f_{\mathbf X}$, the convexity of the support, and $\Sigmabf_{\x} > 0$. Then, $\tilde{L}({\mathbf V},\mathbf s_0,f)$ in \eqref{e_LtildeVs0} is continuous, so that \eqref{e_LV1} is also well defined and continuous, being a parameter-dependent integral, by the same arguments as above. Moreover, \eqref{CVE_of_transformed_Y} exists as the minimizer of a continuous function over the compact set ${\mathcal S}(p,q)$.
Then, \eqref{e_LV1} can be written as \begin{align} \label{LV_Fa} L_\mathcal{F}^*({\mathbf V},f) = \mathbb{E}_{\mathbf s_0 \sim {\mathbf X}}\left(\mu_2({\mathbf V},\mathbf s_0,f) - \mu_1({\mathbf V},\mathbf s_0,f)^2 \right) + \mathbb{E}_{\mathbf s_0 \sim {\mathbf X}}\left(\mathbb{V}\mathrm{ar}\left(\tilde{\epsilon} \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\} \right) \right), \end{align} where $\mathbf s_0 \sim {\mathbf X}$ signifies that $\mathbf s_0$ is distributed as ${\mathbf X}$ and the expectation is with respect to the distribution of $\mathbf s_0$. It now suffices to show that the second term on the right-hand side of \eqref{LV_Fa} is constant with respect to ${\mathbf V}$. By the law of total variance, \begin{align} \mathbb{V}\mathrm{ar}(\tilde{\epsilon}) &= \mathbb{E}\left(\mathbb{V}\mathrm{ar}(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\})\right) + \mathbb{V}\mathrm{ar}\left(\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\} )\right) \notag\\ &= \mathbb{E}\left(\mathbb{V}\mathrm{ar}(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\})\right) \label{variance_constant} \end{align} since $\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\} ) = \mathbb{E}(\underbrace{\mathbb{E}(\tilde{\epsilon}\mid {\mathbf X})}_{= 0}\mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\} ) = 0$.
Inserting \eqref{variance_constant} into \eqref{LV_Fa} yields \begin{align} L_\mathcal{F}^*({\mathbf V},f_t) = \mathbb{E}\left(\mu_2({\mathbf V},{\mathbf X},f_t) - \mu_1({\mathbf V},{\mathbf X},f_t)^2\right) + \mathbb{V}\mathrm{ar}(\tilde{\epsilon}) \notag \\ = \mathbb{E}_{\mathbf s_0 \sim {\mathbf X}}\left(\mathbb{V}\mathrm{ar}\left(g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) \right) + \mathbb{V}\mathrm{ar}(\tilde{\epsilon}).\label{CVE_target} \end{align} Next we show that \eqref{e_LV1}, or, equivalently, \eqref{CVE_target}, attains its minimum at ${\mathbf V} \perp \widetilde{{\mathbf B}}$. Let $\mathbf s_0 \in \text{supp}(f_{\mathbf X})$ and ${\mathbf V} = (\mathbf v_1,...,\mathbf v_q) \in {\mathbb R}^{p \times q}$ such that $\mathbf v_u \in \operatorname{span}\{{\mathbf B}\}$ for some $u \in \{1,...,q\}$. Since ${\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\} \Longleftrightarrow {\mathbf X} = \mathbf s_0 + \mathbf{P}_{{\mathbf V}}({\mathbf X} - \mathbf s_0)$, the first term in \eqref{CVE_target} satisfies \begin{align} &\mathbb{V}\mathrm{ar}\left(g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}\right) = \mathbb{V}\mathrm{ar}\left(g(\widetilde{{\mathbf B}}^T{\mathbf X}) \mid {\mathbf X} = \mathbf s_0 + {\mathbf V}{\mathbf V}^T({\mathbf X}-\mathbf s_0)\right) \notag \\ &= \mathbb{V}\mathrm{ar}\left(g(\widetilde{{\mathbf B}}^T\mathbf s_0 + \widetilde{{\mathbf B}}^T{\mathbf V}{\mathbf V}^T({\mathbf X}-\mathbf s_0))\mid{\mathbf X} = \mathbf s_0 + {\mathbf V}{\mathbf V}^T({\mathbf X}-\mathbf s_0)\right) \geq 0. \label{1stterm} \end{align} If \eqref{1stterm} is positive, i.e., $\widetilde{{\mathbf B}}^T{\mathbf V}{\mathbf V}^T({\mathbf X}-\mathbf s_0) \neq 0$ with positive probability, then the lower bound is not attained.
If it is zero, i.e., for ${\mathbf V}$ such that ${\mathbf V}$ and $\widetilde{{\mathbf B}}$ are orthogonal, then $L_\mathcal{F}^*({\mathbf V},f) = \mathbb{V}\mathrm{ar}(\tilde{\epsilon})$. Since $\mathbf s_0$ is arbitrary but fixed, the same inequality holds for \eqref{e_LV1}; that is, \eqref{e_LV1} attains its minimum for ${\mathbf V}$ such that ${\mathbf V}$ and $\widetilde{{\mathbf B}}$ are orthogonal. Since $\operatorname{span}\{\widetilde{{\mathbf B}}\} = \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$, \eqref{CVE_of_transformed_Y} follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{ECVE_identifies_cs_thm}] Under assumptions (E.1), (E.2), and (E.3), \eqref{e_objective} is well defined and continuous by arguments analogous to those in the proof of Theorem~\ref{CVE_targets_meansubspace_thm}. Therefore \eqref{enVq} exists as a minimizer of a continuous function over the compact set ${\mathcal S}(p,q)$. To show $\mathcal{S}_{Y\mid\X} = \operatorname{span}\{{\mathbf V}_q\}^\perp$, let $\tilde{{\mathcal S}} \neq \mathcal{S}_{Y\mid\X}$ with $\dim(\tilde{{\mathcal S}}) = \dim(\mathcal{S}_{Y\mid\X}) = k$. Also, let ${\mathbf Z} \in {\mathbb R}^{p \times (p-k)}$ be an orthonormal basis of $\tilde{{\mathcal S}}^\perp$. Suppose, for contradiction, that $L_\mathcal{F}({\mathbf Z}) = \min_{{\mathbf V} \in {\mathcal S}(p,p-k)}L_\mathcal{F}({\mathbf V})$. By~\eqref{CVE_of_transformed_Y} and~\eqref{CVE_targets_meansubspace} in Theorem~\ref{CVE_targets_meansubspace_thm}, $L_\mathcal{F}^*({\mathbf V},f_t)$, considered as a function on ${\mathbb R}^{p \times (p -k_t)}$, is minimized by an orthonormal basis of $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}^\perp$ with $p -k_t$ elements, where $k_t = \dim(\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}) \leq k$. By (E.1), $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} \subseteq \mathcal{S}_{Y\mid\X}= \operatorname{span}\{{\mathbf B}\}$.
As in the proof of Theorem~\ref{CVE_targets_meansubspace_thm}, we obtain that $L_\mathcal{F}^*({\mathbf V},f_t)$, as a function on ${\mathbb R}^{p \times (p-k)}$, is minimized by an orthonormal basis ${\mathbf U} \in {\mathbb R}^{p \times (p-k)}$ of $\operatorname{span}\{{\mathbf B}\}^\perp$. Since $\tilde{{\mathcal S}} \neq \mathcal{S}_{Y\mid\X}$ implies $\operatorname{span}\{{\mathbf Z}\} = \tilde{{\mathcal S}}^\perp \neq \mathcal{S}_{Y\mid\X}^\perp = \operatorname{span}\{{\mathbf U}\}$, we can rearrange the bases ${\mathbf U} = ({\mathbf U}_1,{\mathbf U}_2)$ and ${\mathbf Z} = ({\mathbf Z}_1,{\mathbf Z}_2)$ such that $\operatorname{span}\{{\mathbf U}_1\} = \operatorname{span}\{{\mathbf Z}_1\}$ and $\operatorname{span}\{{\mathbf U}_2\} \neq \operatorname{span}\{{\mathbf Z}_2\}$. Since $\mathcal{F}$ characterises $\mathcal{S}_{Y\mid\X}$, the set $A = \{t \in \Omega_T: \operatorname{span}\{{\mathbf U}_2\} \subseteq \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} \}$ is non-empty and, by (E.3), $A$ is not a null set with respect to the probability measure $F_T$. \begin{comment} By Theorem~\ref{finite_characterisation_of_cs}, there exists a finite $m$ such that $\mathcal{S}_{Y\mid\X} = \cup_{t=1}^m \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$. We can always write \begin{align*} \tilde{{\mathcal S}} = \underbrace{(\tilde{{\mathcal S}}\setminus \mathcal{S}_{Y\mid\X})}_{= \tilde{{\mathcal S}}_1} \cup \underbrace{(\tilde{{\mathcal S}} \cap \mathcal{S}_{Y\mid\X})}_{\tilde{{\mathcal S}}_2} = \tilde{{\mathcal S}}_1 \cup \tilde{{\mathcal S}}_2 \end{align*} (i.e. $\tilde{{\mathcal S}}_1$ is the part of $\tilde{{\mathcal S}}$ not belonging to $\mathcal{S}_{Y\mid\X}$ and $\tilde{{\mathcal S}}_2$ is the part that is in $\mathcal{S}_{Y\mid\X}$). By $\tilde{{\mathcal S}} \neq \mathcal{S}_{Y\mid\X}$ we have $\tilde{k} = \dim(\tilde{{\mathcal S}}_1) \geq 1$ and by Theorem~\ref{finite_characterisation_of_cs} we have $\tilde{{\mathcal S}}_2 = \cup_{t=1}^{m-\tilde{k}} \mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}$.
By~\eqref{CVE_of_transformed_Y} and~\eqref{CVE_targets_meansubspace} in Theorem~\ref{CVE_targets_meansubspace_thm}, $L_\mathcal{F}^*({\mathbf V},f_t)$ is minimized by $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}^\perp$. \end{comment} Thus, \begin{gather*} \min_{{\mathbf V} \in {\mathcal S}(p,p-k)}L_\mathcal{F}({\mathbf V}) = L_\mathcal{F}({\mathbf Z}) = \mathbb{E}_{t\sim F_T}\left(L_\mathcal{F}^*({\mathbf Z},f_t)\right) \\ = \int_A \underbrace{L_\mathcal{F}^*({\mathbf Z},f_t)}_{> L_\mathcal{F}^*({\mathbf U},f_t)} dF_T(t) +\int_{A^c} \underbrace{L_\mathcal{F}^*({\mathbf Z},f_t)}_{= L_\mathcal{F}^*({\mathbf U},f_t)}dF_T(t) >\mathbb{E}_{t\sim F_T}\left(L_\mathcal{F}^*({\mathbf U},f_t)\right), \end{gather*} which contradicts our assumption that $L_\mathcal{F}({\mathbf Z}) = \min_{{\mathbf V} \in {\mathcal S}(p,p-k)}L_\mathcal{F}({\mathbf V})$. \end{proof} \noindent Next we introduce notation and auxiliary lemmas for the proof of Theorem~\ref{uniform_convergence_ecve}. We suppose all assumptions of Theorem~\ref{uniform_convergence_ecve} hold. We generically use the letter ``C'' to denote constants. Suppose $f$ is an arbitrary element of $\mathcal{F}$ and let \begin{align}\label{Ytilde_decomposition} \tilde{Y}_i = f(Y_i) = g(\widetilde{{\mathbf B}}^T{\mathbf X}_i) + \tilde{\epsilon}_i \end{align} with $\operatorname{span}\{\widetilde{{\mathbf B}}\} = {\mathcal S}_{\mathbb{E}(\tilde{Y}\mid {\mathbf X})} = {\mathcal S}_{\mathbb{E}(f(Y)\mid {\mathbf X})}$. Condition (E.4) yields that $g$ is twice continuously differentiable and $\mathbb{E}(|\tilde{Y}|^8) < \infty$. Since $f$ is fixed, we suppress it in $t^{(l)}({\mathbf V},\mathbf s_0,f)$ and $\tilde{h}({\mathbf V},\mathbf s_0,f)$, so that \begin{align}\label{tn} t^{(l)}_n({\mathbf V},\mathbf s_0,f) = t^{(l)}_n({\mathbf V},\mathbf s_0) =\frac{1}{nh_n^{(p-q)/2}}\sum_{i=1}^n K\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\tilde{Y}^l_i, \end{align} which is the sample version of \eqref{tl} for $l=0,1,2$. Eqn.
\eqref{e_ybar} can be expressed as \begin{align}\label{yl} \bar{y}_l({\mathbf V},\mathbf s_0) &= \frac{ t^{(l)}_n({\mathbf V},\mathbf s_0)}{t^{(0)}_n({\mathbf V},\mathbf s_0)}. \end{align} \begin{lemma}\label{aux_lemma2} Assume (E.2) and (K.1) hold. For a continuous function $g$, we let $Z_n({\mathbf V},\mathbf s_0) = \left(\sum_i g({\mathbf X}_i)^l K(d_i({\mathbf V},\mathbf s_0)/h_n)\right) /(n h_n^{(p-q)/2})$. Then, \begin{align*} \mathbb{E}\left(Z_n({\mathbf V},\mathbf s_0)\right) &= \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2)\int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} \tilde{g}({\mathbf r}_1,h_n^{1/2}{\mathbf r}_2)d{\mathbf r}_1 d{\mathbf r}_2, \end{align*} where $\tilde{g}({\mathbf r}_1,{\mathbf r}_2) = g(\mathbf s_0 + {\mathbf V}{\mathbf r}_1 +{\mathbf U}{\mathbf r}_2)^lf_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1 + {\mathbf U}{\mathbf r}_2)$ and ${\mathbf x}= \mathbf s_0 + {\mathbf V} {\mathbf r}_1 + {\mathbf U} {\mathbf r}_2$ as in \eqref{ortho_decomp}. \end{lemma} \begin{proof}[Proof of Lemma~\ref{aux_lemma2}] By \eqref{ortho_decomp}, $\|\mathbf{P}_{{\mathbf U}} ({\mathbf x}-\mathbf s_0)\|^2 = \|{\mathbf U} {\mathbf r}_2\|^2 = \|{\mathbf r}_2\|^2$.
Further, \begin{gather*} \mathbb{E}\left(Z_n({\mathbf V},\mathbf s_0)\right) = \frac{1}{h_n^{(p-q)/2}} \int_{\text{supp}(f_{\mathbf X})} g( {\mathbf x})^l K(\|\mathbf{P}_{\mathbf U} ({\mathbf x}-\mathbf s_0)/ h^{1/2}_n \|^2) f_{\mathbf X}({\mathbf x})d{\mathbf x} \\ = \frac{1}{h_n^{(p-q)/2}} \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^{p-q}}\int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} g(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 + {\mathbf U} {\mathbf r}_2)^lK(\|{\mathbf r}_2/ h^{1/2}_n\|^2) \times \\ f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 + {\mathbf U} {\mathbf r}_2)d{\mathbf r}_1 d{\mathbf r}_2\\ = \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2)\int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} g(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 + h_n^{1/2} {\mathbf U} {\mathbf r}_2)^l f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 + h_n^{1/2}{\mathbf U}{\mathbf r}_2)d{\mathbf r}_1 d{\mathbf r}_2, \end{gather*} where the substitution $\tilde{{\mathbf r}}_2 = {\mathbf r}_2/h_n^{1/2}$, $d{\mathbf r}_2 = h_n^{(p-q)/2} d\tilde{{\mathbf r}}_2$ was used to obtain the last equality. \end{proof} \begin{lemma}\label{aux_lemma3} Assume (E.1), (E.2), (E.3), (E.4), (H.1) and (K.1) hold. Then, there exists a constant $C > 0$ such that \begin{equation*} \mathbb{V}\mathrm{ar}\left(n h_n^{(p-q)/2} t^{(l)}_n({\mathbf V},\mathbf s_0,f)\right) \leq n h_n^{(p-q)/2} C \end{equation*} for $n > n^\star$ and $t^{(l)}_n({\mathbf V},\mathbf s_0)$, $l = 0,1,2$, in \eqref{tn}. \end{lemma}
\begin{proof}[Proof of Lemma~\ref{aux_lemma3}] Since a continuous function attains a finite maximum over a compact set, $ \sup_{{\mathbf x} \in \text{supp}(f_{\mathbf X})}|g(\widetilde{{\mathbf B}}^T{\mathbf x})| < \infty.$ Therefore, \begin{align*} |\tilde{Y}_i| \leq |g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)| +|\tilde{\epsilon}_i| \leq \sup_{{\mathbf x} \in \text{supp}(f_{\mathbf X})}|g(\widetilde{{\mathbf B}}^T{\mathbf x})| +|\tilde{\epsilon}_i| = C +|\tilde{\epsilon}_i| \end{align*} and $| \tilde{Y}_i|^{2l} \leq \sum_{u=0}^{2l} \binom{2l}{u} C^u |\tilde{\epsilon}_i|^{2l - u}$. Since $(\tilde{Y}_i,{\mathbf X}_i)$ are i.i.d., \begin{gather} \mathbb{V}\mathrm{ar}\left(n h_n^{(p-q)/2} t^{(l)}_n({\mathbf V},\mathbf s_0,f)\right) = n \mathbb{V}\mathrm{ar}\left(\tilde{Y}^l_i K\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \leq n \mathbb{E}\left(\tilde{Y}^{2l}_i K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \notag \\ = n \mathbb{E}\left(|\tilde{Y}_i|^{2l} K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \leq n \sum_{u=0}^{2l} \binom{2l}{u} C^u \mathbb{E}\left( |\tilde{\epsilon}_i|^{2l - u} K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \notag \\ = n \sum_{u=0}^{2l} \binom{2l}{u} C^u \mathbb{E}\left( \mathbb{E}(|\tilde{\epsilon}_i|^{2l - u}\mid {\mathbf X}_i) K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) \label{bound_tnf_var} \end{gather} for $l = 0,1,2$. Let $\mathbb{E}(|\tilde{\epsilon}_i|^{2l - u} \mid {\mathbf X}_i) = g_{2l-u}({\mathbf X}_i)$ for a function $g_{2l-u}(\cdot)$ that is continuous by assumption and has finite moments for $l=0,1,2$ by the compactness of $\text{supp}(f_{\mathbf X})$.
Using Lemma~\ref{aux_lemma2} with \[Z_n({\mathbf V},\mathbf s_0) = \frac{1}{nh_n^{(p-q)/2}} \sum_i g_{2l-u}({\mathbf X}_i) K^2\left(d_i({\mathbf V},\mathbf s_0)/h_n\right),\] where $K^2(\cdot)$ fulfills (K.1), we calculate \begin{gather} \mathbb{E}\left( \mathbb{E}(|\tilde{\epsilon}_i|^{2l - u}\mid {\mathbf X}_i) K^2\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)\right) = h_n^{(p-q)/2} \mathbb{E}(Z_n({\mathbf V},\mathbf s_0)) \notag \\ = h_n^{(p-q)/2} \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^{p-q}}K^2(\|{\mathbf r}_2\|^2)\times \notag \\ \quad \int_{\text{supp}(f_{\mathbf X})\cap{\mathbb R}^q} g_{2l-u}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 + h_n^{1/2} {\mathbf U} {\mathbf r}_2) f_{\mathbf X}(\mathbf s_0 + {\mathbf V} {\mathbf r}_1 + h_n^{1/2}{\mathbf U}{\mathbf r}_2)d{\mathbf r}_1 d{\mathbf r}_2 \label{ugly_integral}\\ \leq h_n^{(p-q)/2} C, \notag \end{gather} since all integrands in \eqref{ugly_integral} are continuous and integrated over compact sets by (E.2) and the continuity of $g_{2l-u}(\cdot)$ and $K(\cdot)$, so that the integral can be bounded above by a finite constant $C$. Inserting \eqref{ugly_integral} into \eqref{bound_tnf_var} yields \begin{align}\label{upper_bound_var_tnf} \mathbb{V}\mathrm{ar}\left(n h_n^{(p-q)/2} t^{(l)}_n({\mathbf V},\mathbf s_0,f)\right) \leq n h_n^{(p-q)/2} \underbrace{\sum_{u=0}^{2l} \binom{2l}{u} C^u C}_{ = C} = n h_n^{(p-q)/2}C. \end{align} \end{proof} In Lemma~\ref{d_inequality} we show that $d_i({\mathbf V},\mathbf s_0)$ in \eqref{distance} is Lipschitz in its inputs under assumption (E.2).
\begin{lemma}\label{d_inequality} Under assumption (E.2) there exists a constant $0 < C_2 < \infty$ such that for all $\delta >0$ and ${\mathbf V}, {\mathbf V}_j \in {\mathcal S}(p,q)$ with $\|\mathbf{P}_{\mathbf V} - \mathbf{P}_{{\mathbf V}_j}\| < \delta$ and for all $\mathbf s_0, \mathbf s_j \in \text{supp}(f_{\mathbf X}) \subset {\mathbb R}^p$ with $\|\mathbf s_0 - \mathbf s_j\| < \delta$, \begin{equation*} |d_i({\mathbf V},\mathbf s_0) - d_i({\mathbf V}_j,\mathbf s_j)| \leq C_2 \delta \end{equation*} for $d_i({\mathbf V},\mathbf s_0)$ given by \eqref{distance}. \end{lemma} \begin{proof}[Proof of Lemma~\ref{d_inequality}] We have \begin{gather} |d_i({\mathbf V},\mathbf s_0) - d_i({\mathbf V}_j,\mathbf s_j)| \leq \left|\|{\mathbf X}_i - \mathbf s_0\|^2 - \|{\mathbf X}_i - \mathbf s_j\|^2\right| + \notag\\ \left|\langle {\mathbf X}_i - \mathbf s_0,\mathbf{P}_{{\mathbf V}}({\mathbf X}_i - \mathbf s_0)\rangle - \langle {\mathbf X}_i - \mathbf s_j,\mathbf{P}_{{\mathbf V}_j}({\mathbf X}_i - \mathbf s_j)\rangle\right| = I_1 + I_2,\label{di-dj2} \end{gather} where $\langle\cdot,\cdot \rangle$ is the scalar product in ${\mathbb R}^p$. We bound the first term on the right-hand side of \eqref{di-dj2} using $\|{\mathbf X}_i \| \leq \sup_{\mathbf z \in \text{supp}(f_{\mathbf X})} \|\mathbf z \| = C_1 < \infty$, which holds with probability 1 by (E.2): \begin{align*} I_1 &= \left|\|{\mathbf X}_i - \mathbf s_0\|^2 - \|{\mathbf X}_i - \mathbf s_j\|^2\right| \leq 2\left|\langle {\mathbf X}_i,\mathbf s_0 -\mathbf s_j \rangle\right| + \left|\|\mathbf s_0\|^2 - \|\mathbf s_j\|^2\right| \\ & \leq 2\|{\mathbf X}_i\|\|\mathbf s_0 -\mathbf s_j\| + 2C_1\|\mathbf s_0 - \mathbf s_j\| \leq 2C_1 \delta + 2C_1\delta = 4 C_1\delta \end{align*} by Cauchy--Schwarz and the reverse triangle inequality, by which $\left|\|\mathbf s_0\|^2 - \|\mathbf s_j\|^2\right| = \left|\|\mathbf s_0\| - \|\mathbf s_j\|\right|(\|\mathbf s_0\| + \|\mathbf s_j\|) \leq \|\mathbf s_0 - \mathbf s_j\|2C_1$.
The second term in \eqref{di-dj2} satisfies \begin{gather*} I_2 \leq \left|\langle {\mathbf X}_i,(\mathbf{P}_{{\mathbf V}}-\mathbf{P}_{{\mathbf V}_j}){\mathbf X}_i\rangle\right| + 2\left|\langle {\mathbf X}_i,\mathbf{P}_{{\mathbf V}}\mathbf s_0 -\mathbf{P}_{{\mathbf V}_j}\mathbf s_j\rangle\right| + \left|\langle \mathbf s_0,\mathbf{P}_{{\mathbf V}}\mathbf s_0\rangle - \langle \mathbf s_j,\mathbf{P}_{{\mathbf V}_j}\mathbf s_j\rangle\right| \\ \leq \|{\mathbf X}_i\|^2\|\mathbf{P}_{{\mathbf V}} - \mathbf{P}_{{\mathbf V}_j}\| + 2\|{\mathbf X}_i\| \left\|\mathbf{P}_{{\mathbf V}}(\mathbf s_0 - \mathbf s_j) + (\mathbf{P}_{{\mathbf V}}-\mathbf{P}_{{\mathbf V}_j})\mathbf s_j\right\| + \left|\langle \mathbf s_0 - \mathbf s_j,\mathbf{P}_{{\mathbf V}} \mathbf s_0 \rangle\right|+\\ \left|\langle \mathbf s_j,\mathbf{P}_{{\mathbf V}} \mathbf s_0 - \mathbf{P}_{{\mathbf V}_j}\mathbf s_j \rangle\right| \leq C_1^2 \delta + 2C_1(\delta + C_1 \delta) + C_1\delta +C_1 (\delta + C_1 \delta) =4C_1\delta + 4C_1^2 \delta. \end{gather*} Collecting all constants into $C_2$ (i.e., $C_2 = 8C_1 + 4C_1^2$) yields the result. \end{proof} To show Theorem~\ref{uniform_convergence_ecve} and Lemma~\ref{thm_variance}, we use the \textbf{Bernstein inequality} \cite{Bernstein}. Let $\{Z_i, i=1,2,\ldots \}$ be an independent sequence of bounded random variables with $|Z_i| \leq b$. Let $S_n = \sum_{i=1}^n Z_i$, $E_n = \mathbb{E}(S_n)$ and $V_n = \mathbb{V}\mathrm{ar}(S_n)$. Then, \begin{equation}\label{Bernstein} \mathbb{P}(|S_n - E_n|>t) < 2 \exp{\left(-\frac{t^2/2}{V_n + b t/3} \right)}. \end{equation} Assumption (K.2) yields \begin{equation} \label{kernel} |K(u) - K(u')| \leq K^*(u') \delta \end{equation} for all $u, u'$ with $|u-u'| < \delta \leq L_2$, where $K^*(\cdot)$ is a bounded and integrable kernel function [see \cite{Hansen2008}]. Specifically, if condition (1) of (K.2) holds, then $K^*(u) = L_1 1_{\{|u| \leq 2L_2\}}$.
If condition (2) holds, then $K^*(u) = L_1 1_{\{|u| \leq 2L_2\}} + 1_{\{|u| > 2L_2\}}|u-L_2|^{-\nu}$. Let $A = {\mathcal S}(p,q) \times \text{supp}(f_{\mathbf X})$. In Lemmas~\ref{thm_variance} and \ref{thm_bias} we show that \eqref{tn} converges uniformly in probability to \eqref{tl} by showing that the variance and bias terms, respectively, vanish uniformly in probability. \begin{lemma}\label{thm_variance} Under the assumptions of Theorem~\ref{uniform_convergence_ecve}, \begin{equation} \sup_{{\mathbf V} \times \mathbf s_0 \in A} \left|t^{(l)}_n({\mathbf V},\mathbf s_0) - \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right)\right| = O_{P}(a_n), \; l=0,1,2. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{thm_variance}] The proof proceeds in three steps: (i) truncation, (ii) discretization by covering $A= {\mathcal S}(p,q) \times \text{supp}(f_{\mathbf X})$, and (iii) application of Bernstein's inequality \eqref{Bernstein}. If the function $f$ in \eqref{Ytilde_decomposition} is bounded, the truncation step and the assumption $a_n/h_n^{(p-q)/2} = O(1)$ are not needed. (i) We let $\tau_n = a_n^{-1}$ and truncate $\tilde{Y}_i^l$ at $\tau_n$ as follows. We let \begin{align}\label{tl.trc} t^{(l)}_{n,\text{trc}}({\mathbf V},\mathbf s_0) &= (1/nh_n^{(p-q)/2})\sum_i K(\|\mathbf{P}_{\mathbf U} ({\mathbf X}_i-\mathbf s_0) \|^2/ h_n)\tilde{Y}_i^l 1_{\{|\tilde{Y}_i|^l \leq \tau_n\}} \end{align} be the truncated version of \eqref{tn} and $\tilde{R}^{(l)}_n = (1/nh_n^{(p-q)/2})\sum_i |\tilde{Y}_i|^l 1_{\{|\tilde{Y}_i|^l > \tau_n\}}$ be the remainder of \eqref{tn}.
Therefore $R^{(l)}_n({\mathbf V},\mathbf s_0) = t^{(l)}_n({\mathbf V},\mathbf s_0) - t^{(l)}_{n,\text{trc}}({\mathbf V},\mathbf s_0) \leq M_1 \tilde{R}^{(l)}_n $ due to (K.1) and \begin{align} \sup_{{\mathbf V} \times \mathbf s_0 \in A} \left|t^{(l)}_n({\mathbf V},\mathbf s_0) - \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right)\right| &\leq M_1(\tilde{R}^{(l)}_n +\mathbb{E}\tilde{R}^{(l)}_n) \notag \\ & \qquad + \sup_{{\mathbf V} \times \mathbf s_0 \in A}\left|t^{(l)}_{n,\text{trc}}({\mathbf V},\mathbf s_0) - \mathbb{E} \left( t^{(l)}_{n,\text{trc}}({\mathbf V},\mathbf s_0)\right)\right|.\label{truncation} \end{align} By Cauchy--Schwarz and the Markov inequality, $\mathbb{ P}(|Z| > t) = \mathbb{ P}(Z^4 > t^4) \leq \mathbb{E}(Z^4)/t^4$, we obtain \begin{align} \mathbb{E}\tilde{R}^{(l)}_n &= \frac{1}{h_n^{(p-q)/2}} \mathbb{E} \left(|\tilde{Y}_i|^{l} 1_{\{|\tilde{Y}_i|^l > \tau_n\}}\right) \leq \frac{1}{h_n^{(p-q)/2}}\sqrt{\mathbb{E}(|\tilde{Y}_i|^{2l})} \sqrt{\mathbb{ P}(|\tilde{Y}_i|^l > \tau_n)} \notag \\ &\leq \frac{1}{h_n^{(p-q)/2}} \sqrt{\mathbb{E}(|\tilde{Y}_i|^{2l})} \left(a_n^{4}\,\mathbb{E}(|\tilde{Y}_i|^{4l})\right)^{1/2} = O(a_n), \label{Rtilde} \end{align} where the last bound uses $\tau_n = a_n^{-1}$ and the assumption $a_n/h_n^{(p-q)/2} = O(1)$, and the expectations are finite due to (E.4) for $l=0,1,2$. No truncation is needed for $l=0$ or if $|\tilde{Y}_i| = |f(Y_i)| \leq \sup_{f \in \mathcal{F}} |f(Y_i)| < C < \infty$. Therefore, the first two terms of the right-hand side of \eqref{truncation} converge to 0 with rate $a_n$ by \eqref{Rtilde} and Markov's inequality. From this point on, $\tilde{Y}_i$ will denote the truncated version $\tilde{Y}_i 1_{\{|\tilde{Y}_i| \leq \tau_n\}}$ and we do not distinguish the truncated from the untruncated $t^{(l)}_n({\mathbf V},\mathbf s_0)$, since the truncation results in an error of magnitude $a_n$.
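The Cauchy--Schwarz/Markov step behind \eqref{Rtilde} can be illustrated numerically (this is only a sketch, not part of the proof); the distribution of $\tilde Y$, the exponent $l=1$, and the threshold $\tau$ below are arbitrary choices.

```python
import numpy as np

# Sketch: check E(|Y|^l 1{|Y|^l > tau}) <= sqrt(E|Y|^{2l}) * sqrt(E|Y|^{4l} / tau^4),
# i.e. Cauchy-Schwarz followed by Markov's inequality with the 4th power.
rng = np.random.default_rng(2)
Y = rng.standard_normal(100_000)  # illustrative stand-in for the response f(Y)
l, tau = 1, 2.5                   # illustrative exponent and truncation level

Yl = np.abs(Y) ** l
lhs = np.mean(Yl * (Yl > tau))                                  # truncation remainder
rhs = np.sqrt(np.mean(Yl ** 2)) * np.sqrt(np.mean(Yl ** 4) / tau ** 4)  # the bound

assert lhs <= rhs  # the bound holds on the simulated sample
```

With $\tau_n = a_n^{-1}$ the right-hand side scales as $a_n^2$, which is the scaling used in \eqref{Rtilde}.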
(ii) For the discretization step we cover the compact set $A = {\mathcal S}(p,q) \times \text{supp}(f_{\mathbf X})$ by finitely many balls, which is possible by (E.2) and the compactness of ${\mathcal S}(p,q)$. Let $\delta_n = a_n h_n$ and $A_j = \{{\mathbf V}: \|\mathbf{P}_{\mathbf V} - \mathbf{P}_{{\mathbf V}_j}\| \leq \delta_n\} \times \{\mathbf s :\|\mathbf s - \mathbf s_j\| \leq \delta_n\}$, $j=1,\ldots,N$, be a cover of $A$ with ball centers ${\mathbf V}_j \times \mathbf s_j$. Then, $A \subset \bigcup_{j=1}^{N} A_j$ and the number of balls can be bounded by $N \leq C \, \delta_n^{-d}\delta_n^{-p}$ for some constant $C \in (0, \infty)$, where $d = \text{dim}({\mathcal S}(p,q)) = pq - q(q+1)/2$. Let ${\mathbf V} \times \mathbf s_0 \in A_j$. Then by Lemma~\ref{d_inequality} there exists $0 < C_2 < \infty$ such that \begin{align}\label{inequality1} |d_i({\mathbf V},\mathbf s_0) - d_i({\mathbf V}_j,\mathbf s_j)| \leq C_2 \delta_n \end{align} for $d_i$ in \eqref{distance}. Under (K.2), which implies \eqref{kernel}, inequality~\eqref{inequality1} yields \begin{equation}\label{Ki-Kj} \left|K\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right) - K\left(\frac{d_i({\mathbf V}_j,\mathbf s_j)}{h_n}\right)\right| \leq K^*\left(\frac{d_i({\mathbf V}_j,\mathbf s_j)}{h_n}\right) C_2 a_n \end{equation} for ${\mathbf V} \times \mathbf s_0 \in A_j$, where $K^*(\cdot)$ is an integrable and bounded function. Define $r^{(l)}_n({\mathbf V}_j,\mathbf s_j) = (1/nh_n^{(p-q)/2}) \sum_{i=1}^n K^*(d_i({\mathbf V}_j,\mathbf s_j)/h_n)|\tilde{Y}_i|^l$.
For notational convenience we next drop the dependence on $l$ and $j$ and observe that \eqref{Ki-Kj} yields \begin{equation}\label{t_diff} |t^{(l)}_n({\mathbf V},\mathbf s_0) - t^{(l)}_n({\mathbf V}_j,\mathbf s_j)| \leq C_2 a_n r^{(l)}_n({\mathbf V}_j,\mathbf s_j). \end{equation} Since $K^*$ fulfills (K.1) except for continuity, an argument analogous to the proof of Lemma~\ref{aux_lemma2} yields that $\mathbb{E}\left(r^{(l)}_n({\mathbf V}_j,\mathbf s_j)\right) < \infty$. By subtracting and adding $t^{(l)}_n({\mathbf V}_j,\mathbf s_j)$ and $\mathbb{E}(t^{(l)}_n({\mathbf V}_j,\mathbf s_j))$, the triangle inequality, \eqref{t_diff} and the integrability of $r^{(l)}_n$, we obtain \begin{gather} \left|t^{(l)}_n({\mathbf V},\mathbf s_0) - \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right)\right| \leq \left|t^{(l)}_n({\mathbf V},\mathbf s_0) - t^{(l)}_n({\mathbf V}_j,\mathbf s_j)\right| + \left|\mathbb{E}\left(t^{(l)}_n({\mathbf V}_j,\mathbf s_j) - t_n^{(l)}({\mathbf V},\mathbf s_0)\right)\right| \notag\\+ \left|t^{(l)}_n({\mathbf V}_j,\mathbf s_j) - \mathbb{E}\left(t_n^{(l)}({\mathbf V}_j,\mathbf s_j)\right)\right| \leq C_2 a_n \left(|r_n| + |\mathbb{E}\left(r_n\right)| \right) + \left|t^{(l)}_n({\mathbf V}_j,\mathbf s_j) - \mathbb{E}\left(t_n^{(l)}({\mathbf V}_j,\mathbf s_j)\right)\right| \notag\\ \leq C_2 a_n(|r_n -\mathbb{E}(r_n)| + 2|\mathbb{E}(r_n)|) + \left|t^{(l)}_n({\mathbf V}_j,\mathbf s_j) - \mathbb{E}\left(t_n^{(l)}({\mathbf V}_j,\mathbf s_j)\right)\right| \notag\\ \leq 2C_3a_n + |r_n -\mathbb{E}(r_n)| + \left|t^{(l)}_n({\mathbf V}_j,\mathbf s_j) - \mathbb{E}\left(t_n^{(l)}({\mathbf V}_j,\mathbf s_j)\right)\right| \label{inequality2} \end{gather} for any constant $C_3 > C_2 \mathbb{E}(r^{(l)}_n({\mathbf V}_j,\mathbf s_j))$ and $n$ large enough that $C_2 a_n \leq 1$, which holds since $a_n = o(1)$; hence there exists $0 < C_3 < \infty$ such that \eqref{inequality2} holds.
Since $\sup_{x \in A} f(x) = \max_{1\leq j\leq N}\sup_{x \in A_j}f(x) \leq \sum_{j=1}^N \sup_{x \in A_j}f(x)$ for any cover of $A$ and continuous function $f$, \begin{gather} \mathbb{ P}(\sup_{{\mathbf V} \times \mathbf s_0 \in A} |t^{(l)}_n({\mathbf V},\mathbf s_0) - \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right)| > 3C_3a_n) \notag\\ \leq \sum_{j=1}^N \mathbb{ P}(\sup_{{\mathbf V} \times \mathbf s_0 \in A_j} |t^{(l)}_n({\mathbf V},\mathbf s_0) - \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right)| > 3C_3a_n) \notag \\ \leq N \max_{1 \leq j \leq N} \mathbb{ P}(\sup_{{\mathbf V} \times \mathbf s_0 \in A_j} |t^{(l)}_n({\mathbf V},\mathbf s_0) - \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right)| > 3C_3a_n) \label{Prob1} \\ \leq N \left(\max_{1 \leq j \leq N}\mathbb{ P}(|t^{(l)}_n({\mathbf V}_j,\mathbf s_j) - \mathbb{E}\left(t_n^{(l)}({\mathbf V}_j,\mathbf s_j)\right)| > C_3 a_n) + \max_{1 \leq j \leq N} \mathbb{ P}(|r_n -\mathbb{E}(r_n)| > C_3a_n)\right) \notag\\ \leq C\, \delta_n^{-(d+p)} \left( \max_{1 \leq j \leq N}\mathbb{ P}(|t^{(l)}_n({\mathbf V}_j,\mathbf s_j) - \mathbb{E}\left(t_n^{(l)}({\mathbf V}_j,\mathbf s_j)\right)| > C_3 a_n) + \max_{1 \leq j \leq N} \mathbb{ P}(|r_n -\mathbb{E}(r_n)| > C_3a_n)\right) \notag \end{gather} where the first inequality follows from the subadditivity of probability, the third from \eqref{inequality2}, and the last from the bound $N \leq C\, \delta_n^{-d}\delta_n^{-p}$ on the size of the cover of $A$. Finally, we bound the first and second term in the last line of \eqref{Prob1} by the Bernstein inequality~\eqref{Bernstein}. For the first term in the last line of \eqref{Prob1}, let $Z_i = Y^l_i K(d_i({\mathbf V}_j,\mathbf s_j)/h_n)$ and $S_n = \sum_i Z_i = nh_n^{(p-q)/2} t^{(l)}_n({\mathbf V}_j,\mathbf s_j)$. Then, the $Z_i$ are independent with $|Z_i| \leq b = M_1\tau_n = M_1/a_n$ by (K.1) and the truncation step (i). 
For $V_n = \mathbb{V}\mathrm{ar}(S_n)$, Lemma~\ref{aux_lemma3} yields $V_n \leq nh_n^{(p-q)/2}C$ with $C>0$; set $t = C_3a_n n h_n^{(p-q)/2}$. The Bernstein inequality~\eqref{Bernstein} yields \begin{gather*} \label{first_term} \mathbb{ P}\left(\left|t^{(l)}_n({\mathbf V}_j,\mathbf s_j) - \mathbb{E}\left(t_n^{(l)}({\mathbf V}_j,\mathbf s_j)\right)\right| > C_3 a_n\right) < 2 \exp{\left(\frac{-t^2/2}{V_n + b t/3}\right)} \leq \\ 2 \exp{\left(-\frac{(1/2)C_3^2a^2_n n^2 h_n^{(p-q)}}{nh_n^{(p-q)/2}C + (1/3) M_1\tau_n C_3 a_n n h_n^{(p-q)/2}} \right)} \leq 2 \exp{\left(-\frac{(1/2)C_3\log(n)}{C/C_3 + M_1/3 } \right)} = 2 n^{-\gamma(C_3)} \end{gather*} where $a_n^2 = \log(n)/(n h_n^{(p-q)/2})$ and $\gamma(C_3) = C_3\left(2(C/C_3 + M_1 /3)\right)^{-1}$, which is increasing in $C_3$ and can be made arbitrarily large by increasing $C_3$. For the second term in the last line of \eqref{Prob1}, set $Z_i = Y^l_i K^*(d_i({\mathbf V}_j,\mathbf s_j)/h_n)$ in~\eqref{Bernstein} and proceed similarly to obtain \begin{gather*} \label{second_term} \mathbb{ P}\left(\left|r^{(l)}_n({\mathbf V}_j,\mathbf s_j) - \mathbb{E}\left(r_n^{(l)}({\mathbf V}_j,\mathbf s_j)\right)\right| > C_3 a_n\right) < 2 n^{- \frac{(1/2)C_3}{C/C_3 + (1/3) M_2}} = 2 n^{-\gamma(C_3)} \end{gather*} By (H.1), $h_n^{(p-q)/4} \leq 1$ for $n$ large, and (H.2) implies $1/(nh_n^{(p-q)/2}) \leq 1$ for $n$ large; therefore $h_n^{-1} \leq n^{2/(p-q)} \leq n^2$ since $p-q \geq 1$. Then, $\delta_n^{-1} = (a_n h_n)^{-1} \leq n^{1/2}h_n^{-1} h_n^{(p-q)/4} \leq n^{5/2}$. Therefore, \eqref{Prob1} is smaller than $4\,C\, \delta_n^{-(d+p)}n^{-\gamma(C_3)} \leq 4C n^{5(d+p)/2 - \gamma(C_3)}$. For $C_3$ large enough, we have $5(d+p)/2 - \gamma(C_3) < 0$ and $n^{5(d+p)/2 - \gamma(C_3)} \to 0$. This completes the proof. 
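The Bernstein inequality invoked here states that for independent $Z_i$ with $|Z_i| \leq b$ and $V_n = \mathbb{V}\mathrm{ar}(S_n)$, $\mathbb{P}(|S_n - \mathbb{E} S_n| > t) \leq 2\exp\left(-\frac{t^2/2}{V_n + bt/3}\right)$. A purely illustrative Monte Carlo sketch (uniform summands are our choice, not part of the proof) confirms that the bound dominates the empirical tail:

```python
import numpy as np

# Illustrative Monte Carlo check of the Bernstein inequality: for
# independent Z_i with |Z_i| <= b and S_n = sum_i Z_i,
#   P(|S_n - E S_n| > t) <= 2 exp( -(t^2/2) / (V_n + b t / 3) ).
rng = np.random.default_rng(1)
n, b, reps = 200, 1.0, 10000
Z = rng.uniform(-b, b, size=(reps, n))       # bounded, mean-zero summands
S = Z.sum(axis=1)
V_n = n * (2 * b) ** 2 / 12                  # exact variance of S_n
for t in (20.0, 30.0):
    emp = np.mean(np.abs(S) > t)             # empirical two-sided tail
    bound = 2 * np.exp(-(t ** 2 / 2) / (V_n + b * t / 3))
    assert emp <= bound                      # the bound dominates the tail
```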
\end{proof} If we assume $|\tilde{Y}_i| < M_2 < \infty$ almost surely, the requirement $a_n/h_n^{(p-q)/2} = O(1)$ for the bandwidth can be dropped and the truncation step of the proof of Lemma~\ref{thm_variance} is no longer necessary. \begin{lemma}\label{thm_bias} Under (E.1), (E.2), (E.3), (E.4), (H.1), (K.1), and $\int_{{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2)d{\mathbf r}_2 = 1$, \begin{equation} \sup_{{\mathbf V} \times\mathbf s_0 \in A} \left| t^{(l)}({\mathbf V},\mathbf s_0)+1_{\{l=2\}}\tilde{h}({\mathbf V},\mathbf s_0) - \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right)\right| =O(h_n), \quad l=0,1,2 \end{equation} where $t^{(l)}({\mathbf V},\mathbf s_0)$ and $\tilde{h}({\mathbf V},\mathbf s_0)$ are defined in Theorem~\ref{CVE_targets_meansubspace_thm}. \end{lemma} \begin{proof}[Proof of Lemma~\ref{thm_bias}] Let $\tilde{g}({\mathbf r}_1,{\mathbf r}_2)= g(\widetilde{{\mathbf B}}^T\mathbf s_0 + \widetilde{{\mathbf B}}^T{\mathbf V}{\mathbf r}_1 +\widetilde{{\mathbf B}}^T{\mathbf U}{\mathbf r}_2)^l f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1 + {\mathbf U}{\mathbf r}_2)$, where ${\mathbf r}_1,{\mathbf r}_2$ satisfy the orthogonal decomposition~\eqref{ortho_decomp}. 
\begin{align*} \mathbb{E}\left(t_n^{(0)}({\mathbf V},\mathbf s_0)\right) &= \mathbb{E}\left(K(d_i({\mathbf V},\mathbf s_0)/h_n)\right)/h_n^{(p-q)/2} \\ \mathbb{E}(t_n^{(1)}({\mathbf V},\mathbf s_0)) &= \mathbb{E}\left(K(d_i({\mathbf V},\mathbf s_0)/h_n)g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)\right)/h_n^{(p-q)/2} \\ &\quad + \mathbb{E}\left(K(d_i({\mathbf V},\mathbf s_0)/h_n)\underbrace{\mathbb{E}(\tilde{\epsilon}_i\mid{\mathbf X})}_{=0}\right)/h_n^{(p-q)/2}\\ \mathbb{E}(t_n^{(2)}({\mathbf V},\mathbf s_0)) &= \mathbb{E}\left(K(d_i({\mathbf V},\mathbf s_0)/h_n)g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)^2\right)/h_n^{(p-q)/2} \\ &\quad + 2\mathbb{E}\left(K(d_i({\mathbf V},\mathbf s_0)/h_n)\underbrace{\mathbb{E}(\tilde{\epsilon}_i\mid{\mathbf X})}_{=0}\right)/h_n^{(p-q)/2} \\ &\qquad+ \mathbb{E}\left(K(d_i({\mathbf V},\mathbf s_0)/h_n)\underbrace{\mathbb{E}(\tilde{\epsilon}^2_i\mid{\mathbf X})}_{= h({\mathbf X}_i)}\right)/h_n^{(p-q)/2} \end{align*} Then \begin{align}\label{bias1} \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right) = \int_{{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2)\int_{{\mathbb R}^p} \tilde{g}({\mathbf r}_1,{h_n}^{1/2}{\mathbf r}_2) d{\mathbf r}_1 d{\mathbf r}_2 \end{align} holds by Lemma~\ref{aux_lemma2} for $l = 0,1$. For $l = 2$, $\tilde{Y}_i^2 = g_i^2 + 2g_i \epsilon_i + \epsilon_i^2$ with $g_i = g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)$ and can be handled as in the case of $l = 0,1$. 
Plugging into \eqref{bias1} the second-order Taylor expansion $\tilde{g}({\mathbf r}_1,{h_n}^{1/2}{\mathbf r}_2) = \tilde{g}({\mathbf r}_1,0) + {h_n}^{1/2} \nabla_{{\mathbf r}_2}\tilde{g}({\mathbf r}_1,0)^T{\mathbf r}_2 + \frac{h_n}{2}{\mathbf r}_2^T \nabla^2_{{\mathbf r}_2}\tilde{g}({\mathbf r}_1,\xi) {\mathbf r}_2$, for some $\xi$ in the neighborhood of $0$, yields \begin{gather*} \mathbb{E}\left(t_n^{(l)}({\mathbf V},\mathbf s_0)\right) = \int_{{\mathbb R}^q}\tilde{g}({\mathbf r}_1,0) d{\mathbf r}_1 + \sqrt{h_n} \left(\int_{{\mathbb R}^q}\nabla_{{\mathbf r}_2}\tilde{g}({\mathbf r}_1,0)d{\mathbf r}_1\right)^T\int_{{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2){\mathbf r}_2 d{\mathbf r}_2 + \\ h_n \frac{1}{2}\int_{{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2)\int_{{\mathbb R}^p}{\mathbf r}_2^T \nabla^2_{{\mathbf r}_2}\tilde{g}({\mathbf r}_1,\xi) {\mathbf r}_2 d{\mathbf r}_1d{\mathbf r}_2 = t^{(l)}({\mathbf V},\mathbf s_0) + h_n \frac{1}{2}R({\mathbf V},\mathbf s_0) \end{gather*} since $\int_{{\mathbb R}^q}\tilde{g}({\mathbf r}_1,0) d{\mathbf r}_1 = t^{(l)}({\mathbf V},\mathbf s_0)$ and $\int_{{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2){\mathbf r}_2 d{\mathbf r}_2 = 0 \in {\mathbb R}^{p-q}$ due to $K(\|\cdot\|^2)$ being even, where $R({\mathbf V},\mathbf s_0) = \int_{{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2)\int_{{\mathbb R}^p}{\mathbf r}_2^T \nabla^2_{{\mathbf r}_2}\tilde{g}({\mathbf r}_1,\xi) {\mathbf r}_2 d{\mathbf r}_1d{\mathbf r}_2$. By (E.4) and (E.2), $|{\mathbf r}_2^T \nabla^2_{{\mathbf r}_2}\tilde{g}({\mathbf r}_1,\xi) {\mathbf r}_2| \leq C \|{\mathbf r}_2\|^2$ for $C = \sup_{{\mathbf x},{\mathbf y}} \| \nabla^2_{{\mathbf r}_2}\tilde{g}({\mathbf x},{\mathbf y})\| < \infty$, since a continuous function over a compact set is bounded. Then, $R({\mathbf V},\mathbf s_0) \leq C C_4 \int_{{\mathbb R}^{p-q}}K(\|{\mathbf r}_2\|^2)\|{\mathbf r}_2\|^2d{\mathbf r}_2 < \infty$ for some $C_4 > 0$, since the integral over ${\mathbf r}_1$ is over a compact set by (E.2). 
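The two facts driving the $O(h_n)$ bias rate, namely that the even kernel annihilates the first-order Taylor term and that the remaining term scales linearly in $h_n$, can be checked numerically in one dimension. The sketch below is an illustration of ours (Gaussian kernel, hypothetical test function $g(x) = \cos(x) + 2$), not part of the proof:

```python
import numpy as np

# Illustrative one-dimensional check: the even kernel has vanishing first
# moment, so the smoothing bias of int K(r^2) g(h^{1/2} r) dr around g(0)
# is first order in h, matching the O(h_n) rate of the lemma.
r = np.linspace(-8.0, 8.0, 200001)
dr = r[1] - r[0]
K = np.exp(-r ** 2 / 2) / np.sqrt(2 * np.pi)     # normalized even kernel
assert abs(np.sum(K * r) * dr) < 1e-10           # first moment vanishes

g = lambda x: np.cos(x) + 2.0                    # smooth test function (ours)
bias = []
for h in (0.1, 0.05, 0.025):
    smoothed = np.sum(K * g(np.sqrt(h) * r)) * dr
    bias.append(abs(smoothed - g(0.0)))
# halving h roughly halves the bias: O(h), not O(h^{1/2})
assert bias[1] / bias[0] < 0.7 and bias[2] / bias[1] < 0.7
```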
\end{proof} Lemma~\ref{t_uniform} follows directly from Lemmas~\ref{thm_variance} and \ref{thm_bias} and the triangle inequality. \begin{lemma}\label{t_uniform} Suppose (E.1), (E.2), (E.3), (E.4), (K.1), (K.2), (H.1) hold. If $a_n^2 = \log(n)/nh_n^{(p-q)/2} = o(1)$, and $a_n/h_n^{(p-q)/2} = O(1)$, then for $l=0,1,2$ \begin{equation*} \sup_{{\mathbf V} \times \mathbf s_0 \in A} \left|t^{(l)}({\mathbf V},\mathbf s_0)+1_{\{l=2\}}\tilde{h}({\mathbf V},\mathbf s_0) - t_n^{(l)}({\mathbf V},\mathbf s_0)\right| = O_P(a_n + h_n) \end{equation*} \end{lemma} \begin{thm}\label{thm_Ltilde_uniform} Suppose (E.1), (E.2), (E.3), (E.4), (K.1), (K.2), (H.1) hold. Let $a_n^2 = \log(n)/nh_n^{(p-q)/2} = o(1)$, $a_n/h_n^{(p-q)/2} = O(1)$, then \begin{equation*} \sup_{{\mathbf V} \times \mathbf s_0 \in A}\left|\bar{y}_l({\mathbf V},\mathbf s_0) - \mu_l({\mathbf V},\mathbf s_0)-1_{\{l=2\}}\tilde{h}({\mathbf V},\mathbf s_0)\right| = o_P(1) , \quad l=0,1,2 \end{equation*} and \begin{equation}\label{Ltilde_uniform} \sup_{{\mathbf V} \times \mathbf s_0 \in A}\left|\tilde{L}_{n,\mathcal{F}}({\mathbf V},\mathbf s_0) - \tilde{L}_\mathcal{F}({\mathbf V},\mathbf s_0)\right| = o_P(1) \end{equation} where $\bar{y}_l({\mathbf V},\mathbf s_0)$, $\mu_l({\mathbf V},\mathbf s_0)$, $\tilde{L}_{n,\mathcal{F}}({\mathbf V},\mathbf s_0)$ and $\tilde{L}_\mathcal{F}({\mathbf V},\mathbf s_0)$ are defined in \eqref{e_ybar}, \eqref{mu_l}, \eqref{e_Ltilde} and \eqref{e_LtildeVs0}, respectively. 
\end{thm} \begin{proof}[Proof of Theorem~\ref{thm_Ltilde_uniform}] Let $\delta_n = \inf_{{\mathbf V} \times \mathbf s_0 \in A_n}t^{(0)}({\mathbf V},\mathbf s_0)$, where $t^{(0)}({\mathbf V},\mathbf s_0)$ is defined in \eqref{tl}, and $A_n = {\mathcal S}(p,q) \times \{{\mathbf x} \in \text{supp}(f_{\mathbf X}): |{\mathbf x} - \partial\text{supp}(f_{\mathbf X})| \geq b_n\}$, where $\partial C$ denotes the boundary of the set $C$ and $|{\mathbf x} - C| = \inf_{{\mathbf r} \in C} |{\mathbf x} - {\mathbf r}|$, for a sequence $b_n \to 0$ so that $\delta_n^{-1}(a_n + h_n) \to 0$ for any bandwidth $h_n$ that satisfies the assumptions. Then, \begin{equation} \bar{y}_l({\mathbf V},\mathbf s_0) = \frac{t_n^{(l)}({\mathbf V},\mathbf s_0)}{t_n^{(0)}({\mathbf V},\mathbf s_0)} = \frac{t_n^{(l)}({\mathbf V},\mathbf s_0)/t^{(0)}({\mathbf V},\mathbf s_0)}{t_n^{(0)}({\mathbf V},\mathbf s_0)/t^{(0)}({\mathbf V},\mathbf s_0)} \label{ylbar} \end{equation} We consider the numerator and denominator of \eqref{ylbar} separately. 
By Lemma~\ref{t_uniform} \begin{gather*} \sup_{{\mathbf V} \times \mathbf s_0 \in A_n} \left|\frac{t_n^{(0)}({\mathbf V},\mathbf s_0)}{t^{(0)}({\mathbf V},\mathbf s_0)} - 1\right| \leq \frac{\sup_{A}|t_n^{(0)}({\mathbf V},\mathbf s_0) - t^{(0)}({\mathbf V},\mathbf s_0)|}{\inf_{A_n} t^{(0)}({\mathbf V},\mathbf s_0)} = O_P(\delta_n^{-1}(a_n + h_n)) \end{gather*} \begin{gather*} \sup_{{\mathbf V} \times \mathbf s_0 \in A_n} \left|\frac{t_n^{(l)}({\mathbf V},\mathbf s_0)}{t^{(0)}({\mathbf V},\mathbf s_0)} - \mu_l({\mathbf V},\mathbf s_0)\right| \leq \frac{\sup_{A}|t_n^{(l)}({\mathbf V},\mathbf s_0) - t^{(l)}({\mathbf V},\mathbf s_0)|}{\inf_{A_n} t^{(0)}({\mathbf V},\mathbf s_0)} = O_P(\delta_n^{-1}(a_n + h_n)), \end{gather*} and therefore by $A_n \uparrow A = {\mathcal S}(p,q) \times \text{supp}(f_{\mathbf X})$, \begin{equation*} \lim_{n \to \infty} \sup_{{\mathbf V} \times \mathbf s_0 \in A_n}\left|\frac{t_n^{(l)}({\mathbf V},\mathbf s_0)}{t^{(0)}({\mathbf V},\mathbf s_0)} - \mu_l({\mathbf V},\mathbf s_0)\right| = \lim_{n \to \infty} \sup_{{\mathbf V} \times \mathbf s_0 \in A}\left|\frac{t_n^{(l)}({\mathbf V},\mathbf s_0)}{t^{(0)}({\mathbf V},\mathbf s_0)} - \mu_l({\mathbf V},\mathbf s_0)\right| \end{equation*} Substituting in \eqref{ylbar}, we obtain \begin{equation*} \bar{y}_l({\mathbf V},\mathbf s_0) = \frac{t_n^{(l)}({\mathbf V},\mathbf s_0)/t^{(0)}({\mathbf V},\mathbf s_0)}{t_n^{(0)}({\mathbf V},\mathbf s_0)/t^{(0)}({\mathbf V},\mathbf s_0)} = \frac{\mu_l + O_P(\delta_n^{-1}(a_n + h_n))}{1 + O_P(\delta_n^{-1}(a_n + h_n))} = \mu_l + O_P(\delta_n^{-1}(a_n + h_n)). \end{equation*} For $l = 2$, $\tilde{Y}^2_i = g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)^2 + 2g(\widetilde{{\mathbf B}}^T{\mathbf X}_i)\tilde{\epsilon}_i + \tilde{\epsilon}_i^2$, and \eqref{Ltilde_uniform} follows from~\eqref{e_LtildeVs0}. 
\end{proof} \begin{lemma}\label{mu_lemma} Under (E.1), (E.2), (E.4), there exists $0 < C_5 < \infty$ such that \begin{align}\label{mu_inequality} \left|\mu_l({\mathbf V},\mathbf s_0) - \mu_l({\mathbf V}_j,\mathbf s_0)\right| \leq C_5 \|\mathbf{P}_{\mathbf V} -\mathbf{P}_{{\mathbf V}_j}\| \end{align} for all interior points $\mathbf s_0 \in \text{supp}(f_{\mathbf X})$. \end{lemma} \begin{proof} Using the representation $\tilde{t}^{(l)}(\mathbf{P}_{\mathbf V},\mathbf s_0)$ from \eqref{Grassman} in place of $t^{(l)}({\mathbf V},\mathbf s_0)$, we consider $\mu_l({\mathbf V},\mathbf s_0) = \mu_l(\mathbf{P}_{\mathbf V},\mathbf s_0)$ as a function on the Grassmann manifold, since $\mathbf{P}_{\mathbf V} \in Gr(p,q)$. Then, \begin{align}\label{LipschitG} \left|\mu_l(\mathbf{P}_{\mathbf V},\mathbf s_0) - \mu_l(\mathbf{P}_{{\mathbf V}_j},\mathbf s_0)\right| &= \left|\frac{\tilde{t}^{(l)}(\mathbf{P}_{\mathbf V},\mathbf s_0)}{\tilde{t}^{(0)}(\mathbf{P}_{\mathbf V},\mathbf s_0)} - \frac{\tilde{t}^{(l)}(\mathbf{P}_{{\mathbf V}_j},\mathbf s_0)}{\tilde{t}^{(0)}(\mathbf{P}_{{\mathbf V}_j},\mathbf s_0)}\right| \notag \\ &\leq \frac{\sup |\tilde{t}^{(0)}(\mathbf{P}_{\mathbf V},\mathbf s_0)|}{(\inf \tilde{t}^{(0)}(\mathbf{P}_{\mathbf V},\mathbf s_0))^2}\left| \tilde{t}^{(l)}(\mathbf{P}_{\mathbf V},\mathbf s_0)-\tilde{t}^{(l)}(\mathbf{P}_{{\mathbf V}_j},\mathbf s_0)\right|\notag \\ &\quad +\frac{\sup \tilde{t}^{(l)}(\mathbf{P}_{\mathbf V},\mathbf s_0)}{(\inf \tilde{t}^{(0)}(\mathbf{P}_{\mathbf V},\mathbf s_0))^2}\left| \tilde{t}^{(0)}(\mathbf{P}_{\mathbf V},\mathbf s_0)-\tilde{t}^{(0)}(\mathbf{P}_{{\mathbf V}_j},\mathbf s_0)\right| \end{align} with $\sup_{\mathbf{P}_{\mathbf V} \in Gr(p,q)} \tilde{t}^{(0)}(\mathbf{P}_{\mathbf V},\mathbf s_0) \in (0,\infty)$ and $\inf_{\mathbf{P}_{\mathbf V} \in Gr(p,q)} \tilde{t}^{(0)}(\mathbf{P}_{\mathbf V},\mathbf s_0) \in (0,\infty)$ since $\tilde{t}^{(l)}$ is continuous, $\Sigmabf_{\x} >0$ and $\mathbf s_0 \in \text{supp}(f_{\mathbf X})$ an interior 
point. By (E.2) and (E.4), $\tilde{g}({\mathbf x}) =g(\widetilde{{\mathbf B}}^T {\mathbf x})f_{\mathbf X}({\mathbf x})$ is twice continuously differentiable and therefore Lipschitz continuous on compact sets. We denote its Lipschitz constant by $L < \infty$. Therefore, \begin{gather} \left| \tilde{t}^{(l)}(\mathbf{P}_{{\mathbf V}},\mathbf s_0)-\tilde{t}^{(l)}(\mathbf{P}_{{\mathbf V}_j},\mathbf s_0)\right| \leq \int_{\text{supp}(f_{\mathbf X})} \left|\tilde{g}(\mathbf s_0 + \mathbf{P}_{{\mathbf V}} {\mathbf r})-\tilde{g}(\mathbf s_0 + \mathbf{P}_{{\mathbf V}_j} {\mathbf r})\right|d {\mathbf r} \notag \\ \leq L \int_{\text{supp}(f_{\mathbf X})} \|(\mathbf{P}_{{\mathbf V}} -\mathbf{P}_{{\mathbf V}_j}) {\mathbf r}\|d{\mathbf r} \leq L\left(\int_{\text{supp}(f_{\mathbf X})} \| {\mathbf r} \|d{\mathbf r}\right) \|\mathbf{P}_{\mathbf V} -\mathbf{P}_{{\mathbf V}_j}\| \label{t_inequality} \end{gather} where the last inequality is due to the sub-multiplicativity of the Frobenius norm and the integral being finite by (E.2). Plugging \eqref{t_inequality} into \eqref{LipschitG} and collecting all constants into $C_5$ yields \eqref{mu_inequality}. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{uniform_convergence_ecve}] By \eqref{e_LN} and \eqref{e_objective}, \begin{align} \left|L^*_n({\mathbf V},f) - L_\mathcal{F}^*({\mathbf V},f)\right| &\leq \left|\frac{1}{n} \sum_i \left(\tilde{L}_{n,\mathcal{F}}({\mathbf V},{\mathbf X}_i,f) -\tilde{L}_\mathcal{F}({\mathbf V},{\mathbf X}_i,f)\right)\right| \notag\\ &\qquad + \left|\frac{1}{n} \sum_i \left(\tilde{L}_\mathcal{F}({\mathbf V},{\mathbf X}_i,f) - \mathbb{E}(\tilde{L}_\mathcal{F}({\mathbf V},{\mathbf X},f))\right) \right| \label{Ln-L} \end{align} By Theorem~\ref{thm_Ltilde_uniform}, \begin{equation} \left|\frac{1}{n} \sum_i \tilde{L}_{n,\mathcal{F}}({\mathbf V},{\mathbf X}_i,f) -\tilde{L}_\mathcal{F}({\mathbf V},{\mathbf X}_i,f)\right| \leq \sup_{{\mathbf V} \times \mathbf s_0 \in A}\left|\tilde{L}_{n,\mathcal{F}}({\mathbf V},\mathbf s_0,f) - \tilde{L}_\mathcal{F}({\mathbf V},\mathbf s_0,f)\right| = o_P(1) \end{equation} The second term in \eqref{Ln-L} converges to 0 almost surely for all ${\mathbf V} \in {\mathcal S}(p,q)$ by the strong law of large numbers. In order to show uniform convergence the same technique as in the proof of Theorem~\ref{thm_variance} is used. Let $B_j = \{{\mathbf V} \in {\mathcal S}(p,q): \|{\mathbf V}{\mathbf V}^T - {\mathbf V}_j{\mathbf V}_j^T\| \leq \tilde{a}_n\}$ be a cover of ${\mathcal S}(p,q)\subset \bigcup_{j=1}^{N} B_j$ with $N \leq C\, \tilde{a}_n^{-d} = C\,(n/\log(n))^{d/2} \leq C\, n^{d/2}$, where $d = \dim({\mathcal S}(p,q))$ is defined in the proof of Theorem~\ref{thm_variance}. By Lemma~\ref{mu_lemma}, \begin{align}\label{inequality3} \left|\mu_l({\mathbf V},{\mathbf X}_i) - \mu_l({\mathbf V}_j,{\mathbf X}_i)\right| \leq C_5 \|\mathbf{P}_{\mathbf V} - \mathbf{P}_{{\mathbf V}_j}\| \end{align} Let $G_n({\mathbf V},f) = \sum_i\tilde{L}_\mathcal{F}({\mathbf V},{\mathbf X}_i,f)/n$ with $\mathbb{E}(G_n(V)) = L^*_\mathcal{F}({\mathbf V},f)$. 
Using \eqref{inequality3} and following the same steps as in the proof of Lemma~\ref{thm_variance} we obtain \begin{align}\label{G_n_ineq} \left|G_n({\mathbf V},f)-L^*_\mathcal{F}({\mathbf V},f)\right| &\leq \left|G_n({\mathbf V},f) - G_n({\mathbf V}_j,f)\right|\notag\\ &\quad+ \left|G_n({\mathbf V}_j,f) -L^*_\mathcal{F}({\mathbf V}_j,f)\right| + \left|L^*_\mathcal{F}({\mathbf V},f)-L^*_\mathcal{F}({\mathbf V}_j,f)\right| \notag \\ &\quad \leq 2C_6\tilde{a}_n + \left|G_n({\mathbf V}_j,f) -L^*_\mathcal{F}({\mathbf V}_j,f)\right| \end{align} for ${\mathbf V} \in B_j$ and some $C_6 > C_5$. Inequality \eqref{G_n_ineq} leads to \begin{align} \mathbb{ P}\left(\sup_{{\mathbf V} \in {\mathcal S}(p,q)}|G_n({\mathbf V},f) - L^*_\mathcal{F}({\mathbf V},f)| > 3C_6\tilde{a}_n\right) &\leq C\,N\, \mathbb{ P}(\sup_{{\mathbf V} \in B_j}|G_n({\mathbf V},f) - L^*_\mathcal{F}({\mathbf V},f)| > 3C_6\tilde{a}_n) \notag \\ &\leq C\, n^{d/2} \mathbb{ P}(|G_n({\mathbf V}_j,f) -L^*_\mathcal{F}({\mathbf V}_j,f)|> C_6\tilde{a}_n) \notag \\ &\leq C\, n^{d/2} n^{-\gamma(C_6)} \to 0 \label{inequality5} \end{align} where the last inequality in \eqref{inequality5} is due to \eqref{Bernstein} with $Z_i = \tilde{L}_\mathcal{F}({\mathbf V}_j,{\mathbf X}_i,f)$, which is bounded since $\tilde{L}_\mathcal{F}(\cdot,\cdot,f)$ is continuous on the compact set $A$, and $\gamma(C_6)$ a monotone increasing function of $C_6$ that can be made arbitrarily large by choosing $C_6$ accordingly. Therefore, $\sup_{{\mathbf V} \in {\mathcal S}(p,q)}\left|L^*_n({\mathbf V},f) - L_\mathcal{F}^*({\mathbf V},f)\right| \leq o_P(1) + O_P(\tilde{a}_n)$ which implies Theorem~\ref{uniform_convergence_ecve}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm_consistency_mean_subspace}] We apply \cite[Thm 4.1.1]{Takeshi} to obtain consistency of the conditional variance estimator. 
This theorem requires three conditions that guarantee the convergence of the minimizer of a sequence of random functions $L^*_n(\mathbf{P}_{\mathbf V},f_t)$ to the minimizer of the limiting function $L^*(\mathbf{P}_{\mathbf V},f_t)$; i.e., $\mathbf{P}_{\operatorname{span}\{\widehat{{\mathbf B}}^t_{{k_t}}\}^\perp} = \operatorname{argmin} L^*_n(\mathbf{P}_{\mathbf V},f_t) \to \mathbf{P}_{\operatorname{span}\{{\mathbf B}\}^\perp} = \operatorname{argmin} L^*(\mathbf{P}_{\mathbf V},f_t)$ in probability. The conditions are: (1) The parameter space is compact; (2) $L^*_n(\mathbf{P}_{\mathbf V},f_t)$ is continuous in $\mathbf{P}_{\mathbf V}$ and a measurable function of the data $(Y_i,{\mathbf X}_i^T)_{i=1,...,n}$; and (3) $L^*_n(\mathbf{P}_{\mathbf V},f_t)$ converges uniformly to $L^*(\mathbf{P}_{\mathbf V},f_t)$ and $L^*(\mathbf{P}_{\mathbf V},f_t)$ attains a unique global minimum at $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)}^\perp$. Since $L^*_n({\mathbf V},f_t)$ depends on ${\mathbf V}$ only through $\mathbf{P}_{\mathbf V} = {\mathbf V}{\mathbf V}^T$, $L^*_n({\mathbf V},f_t)$ can be considered as a function on the Grassmann manifold, which is compact, and the same holds true for $L^*({\mathbf V},f_t)$ by \eqref{Grassman}. Further, $L^*_n({\mathbf V},f_t)$ is by definition a measurable function of the data and continuous in ${\mathbf V}$ if a continuous kernel, such as the Gaussian, is used. Theorem~\ref{uniform_convergence_ecve} establishes the uniform convergence, and Theorem~\ref{CVE_targets_meansubspace_thm} shows that the minimizer is unique when $L({\mathbf V})$ is minimized over the Grassmann manifold $Gr(p,q)$, since $\mathcal{S}_{\E\left(f_t(Y)\mid\X\right)} = \operatorname{span}\{\widetilde{{\mathbf B}}\}$ is uniquely identifiable and so is $\operatorname{span}\{\widetilde{{\mathbf B}}\}^\perp$ (i.e. 
$\|\mathbf{P}_{\operatorname{span}\{\widehat{{\mathbf B}}^t_{k_t}\}} - \mathbf{P}_{\operatorname{span}\{\widetilde{{\mathbf B}}\}}\|=\|\widehat{{\mathbf B}}^t_{k_t}(\widehat{{\mathbf B}}^t_{k_t})^T - \widetilde{{\mathbf B}}\widetilde{{\mathbf B}}^T\| = \| ({\mathbf I}_p- \widetilde{{\mathbf B}}\widetilde{{\mathbf B}}^T)- ({\mathbf I}_p-\widehat{{\mathbf B}}^t_{k_t}(\widehat{{\mathbf B}}^t_{k_t})^T)\| = \| \mathbf{P}_{\operatorname{span}\{\widetilde{{\mathbf B}}\}^\perp}-\mathbf{P}_{\operatorname{span}\{\widehat{{\mathbf B}}^t_{k_t}\}^\perp}\|$). Thus, all three conditions are met and the result is obtained. \end{proof} \begin{proof}[Proof of Theorem~\ref{uniform_convergence_eobjective}] Let $(t_j)_{j=1,\ldots,{m_n}}$ be an i.i.d. sample from $F_T$ and observe that, by the triangle inequality, \begin{align}\label{L_nF_decompostion} |L_{n,\mathcal{F}}({\mathbf V}) -L_{\mathcal{F}}({\mathbf V})| &\leq \left| \frac{1}{{m_n}} \sum_{j=1}^{{m_n}} \left(L_n^*({\mathbf V},f_{t_j}) - L^*({\mathbf V},f_{t_j})\right) \right| \notag \\ &\quad + \left| \frac{1}{{m_n}} \sum_{j=1}^{{m_n}} \left(L^*({\mathbf V},f_{t_j}) - \mathbb{E}_{t \sim F_T}(L^*({\mathbf V},f_{t}))\right) \right| \end{align} Then, $\sup_{{\mathbf V} \in {\mathcal S}(p,q)} \left| L_n^*({\mathbf V},f_{t}) - L^*({\mathbf V},f_{t}) \right| \leq 8M^2$, by the assumption $\sup_{t \in \Omega_T} |f_t(Y)| < M < \infty$, and the triangle inequality. That is, $L_n^*({\mathbf V},f_{t})$ estimates a variance of a bounded response $f_t(Y) \in [-M, M]$ and is therefore bounded by the squared range $4M^2$ of $f_t(Y)$. The same holds true for $L^*({\mathbf V},f_{t})$. Further, $8M^2$ is an integrable dominating function so that Fatou's Lemma applies. Consider the first term on the right hand side of \eqref{L_nF_decompostion} and let $\delta > 0$. 
By Markov's inequality (first step), the triangle inequality (second and third steps) and Fatou's Lemma (last step, with dominating bound $8M^2$), \begin{gather*} \limsup_{n} \mathbb{ P}\left(\sup_{{\mathbf V} \in {\mathcal S}(p,q)} \left| \frac{1}{{m_n}} \sum_{j=1}^{{m_n}} \left(L_n^*({\mathbf V},f_{t_j}) - L^*({\mathbf V},f_{t_j})\right) \right| > \delta \right) \\ \leq \frac{1}{\delta} \limsup_{n} \mathbb{E}_{F_T}\left(\mathbb{E}\left(\sup_{{\mathbf V} \in {\mathcal S}(p,q)} \left| \frac{1}{{m_n}} \sum_{j=1}^{{m_n}} \left(L_n^*({\mathbf V},f_{t_j}) - L^*({\mathbf V},f_{t_j})\right) \right| \right)\right) \\ \leq \frac{1}{\delta}\limsup_{n}\mathbb{E}_{F_T}\left( \frac{1}{{m_n}} \sum_{j=1}^{{m_n}} \mathbb{E}\left(\sup_{{\mathbf V} \in {\mathcal S}(p,q)}\left|L_n^*({\mathbf V},f_{t_j}) - L^*({\mathbf V},f_{t_j})\right|\right)\right) \\ = \frac{1}{\delta}\limsup_{n}\mathbb{E}_{F_T}\left( \mathbb{E}\left(\sup_{{\mathbf V} \in {\mathcal S}(p,q)}\left|L_n^*({\mathbf V},f_{t_j}) - L^*({\mathbf V},f_{t_j})\right|\right)\right) \\ \leq \frac{1}{\delta}\mathbb{E}_{F_T}\left( \mathbb{E}\left(\limsup_{n} \sup_{{\mathbf V} \in {\mathcal S}(p,q)}\left|L_n^*({\mathbf V},f_{t_j}) - L^*({\mathbf V},f_{t_j})\right|\right) \right) = \frac{1}{\delta}\mathbb{E}_{t\sim F_T}\left( \mathbb{E}(0)\right) = 0 \end{gather*} since by Theorem~\ref{uniform_convergence_ecve} it holds that $\limsup_{n} \sup_{{\mathbf V} \in {\mathcal S}(p,q)}|L_n^*({\mathbf V},f_{t_j}) - L^*({\mathbf V},f_{t_j})| = 0$. 
For the second term on the right hand side of \eqref{L_nF_decompostion} we apply Theorem 2 of \cite{jennrich1969} in \cite[p. 
40]{mickey1963test}: \begin{thm}\label{Jennrich} Let $t_j$ be an i.i.d. sample and $L^*({\mathbf V},f_{t}): \Theta \times \Omega_T \to {\mathbb R}$ where $\Theta$ is a compact subset of a Euclidean space. $L^*({\mathbf V},f_{t})$ is continuous in ${\mathbf V}$ and measurable in $t$ by Theorem~\ref{CVE_targets_meansubspace_thm}. If $L^*({\mathbf V},f_{t_j})\leq h(t_j)$, where $h(t_j)$ is integrable with respect to $F_T$, then \begin{align*} \frac{1}{m_n}\sum_{j=1}^{m_n} L^*({\mathbf V},f_{t_j}) \longrightarrow \mathbb{E}_{F_T}\left(L^*({\mathbf V},f_{t}) \right) \quad \text{uniformly over ${\mathbf V} \in \Theta$ almost surely as $n \to \infty$} \end{align*} \end{thm} Here ${\mathbf V} \in {\mathcal S}(p,q) = \Theta \subseteq {\mathbb R}^{pq}$, and by $\sup_{t \in \Omega_T} |f_t(Y)| < M < \infty$ and an argument analogous to that for the first term in \eqref{L_nF_decompostion}, $Z_j({\mathbf V}) = L^*({\mathbf V},f_{t_j}) < 4M^2$. Therefore, $\mathbb{E}(\sup_{{\mathbf V}\in {\mathcal S}(p,q)} |Z_j({\mathbf V})|) < 4M^2$, which is integrable. Further, since the $t_j$ are an i.i.d. sample from $F_T$, $Z_j({\mathbf V})$ is an i.i.d. sequence of random variables, $Z_j({\mathbf V})$ is continuous in ${\mathbf V}$ by Theorem~\ref{CVE_targets_meansubspace_thm} and the parameter space ${\mathcal S}(p,q)$ is compact. Then by Theorem \ref{Jennrich}, \begin{align*} \sup_{{\mathbf V}\in {\mathcal S}(p,q)} \left| \frac{1}{{m_n}} \sum_{j=1}^{{m_n}} L^*({\mathbf V},f_{t_j}) - \mathbb{E}_{t \sim F_T}(L^*({\mathbf V},f_{t})) \right| \longrightarrow 0 \quad \quad \text{almost surely as $n \to \infty$} \end{align*} if $\lim_{n \to \infty} m_n = \infty$. Putting everything together, it follows that $\sup_{{\mathbf V}\in {\mathcal S}(p,q)} |L_{n,\mathcal{F}}({\mathbf V}) -L_{\mathcal{F}}({\mathbf V})| \to 0$ in probability as $n \to \infty$. \end{proof} \begin{proof}[Proof of Theorem~\ref{ECVE_consistency}] The proof is directly analogous to the proof of Theorem~\ref{thm_consistency_mean_subspace}. 
The uniform convergence of the target function $L_{n,\mathcal{F}}({\mathbf V})$ is obtained by Theorem~\ref{uniform_convergence_eobjective}. The minimizer over $Gr(p,q)$ and its uniqueness derive from Theorem~\ref{ECVE_identifies_cs_thm}. \end{proof} \begin{proof}[Proof of Theorem~\ref{e_lemma-one}] In this proof we suppress the dependence on $f$ in the notation. The Gaussian kernel $K$ satisfies $\partial_z K(z) = - z K(z)$. From \eqref{weights} and \eqref{e_Ltilde} we have $\tilde{L}_{n,\mathcal{F}} = \bar{y}_2 - \bar{y}_1^2$ where $\bar{y}_l = \sum_i w_i \tilde{Y}_i^l$, $l=1,2$. We let $K_{j} = K(d_j({\mathbf V},\mathbf s_0)/h_n)$, suppress the dependence on ${\mathbf V}$ and $\mathbf s_0$ and write $w_i = K_i/\sum_j K_j$. Then, $\nabla K_i = (-1/h_n^2)K_i d_i \nabla d_i$ and $\nabla w_i = -\left(K_i d_i \nabla d_i (\sum_j K_j) - K_i \sum_j K_j d_j\nabla d_j\right)/(h_n \sum_j K_j)^2$. Next, \begin{align} \nabla \bar{y}_l &= -\frac{1}{h_n^2}\sum_i \tilde{Y}_i^l\frac{K_id_i\nabla d_i - K_i (\sum_jK_jd_j\nabla d_j)}{( \sum_jK_j)^2} = -\frac{1}{h_n^2}\sum_i \tilde{Y}_i^l w_i \left(d_i \nabla d_i - \sum_j w_j d_j \nabla d_j\right) \notag \\ &= -\frac{1}{h_n^2}\left(\sum_i \tilde{Y}_i^l w_i d_i \nabla d_i - \sum_j\tilde{Y}_j^l w_j \sum_i w_id_i \nabla d_i\right) =-\frac{1}{h_n^2}\sum_i (\tilde{Y}_i^l - \bar{y}_l) w_i d_i \nabla d_i \label{grad.yl} \end{align} Then, $\nabla \tilde{L}_n = \nabla \bar{y}_2 - 2\bar{y}_1 \nabla \bar{y}_1$, and inserting $\nabla \bar{y}_l$ from \eqref{grad.yl} yields $\nabla \tilde{L}_n = (-1/h_n^2)\sum_i (\tilde{Y}_i^2 -\bar{y}_2 - 2\bar{y}_1(\tilde{Y}_i - \bar{y}_1))w_i d_i \nabla d_i = (1/h_n^2)\left(\sum_i \left(\tilde{L}_n -(\tilde{Y}_i -\bar{y}_1)^2 \right) w_i d_i \nabla d_i\right)$, since $\tilde{Y}_i^2 -\bar{y}_2 - 2\bar{y}_1(\tilde{Y}_i - \bar{y}_1) = (\tilde{Y}_i -\bar{y}_1)^2 - \tilde{L}_n$. \end{proof} \end{document}
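The gradient identity \eqref{grad.yl} can be validated against finite differences in a scalar toy model. The sketch below is an illustration of our construction (in particular, $d_i(v) = (v - c_i)^2$ is a hypothetical stand-in for the true distance function), not part of the proof:

```python
import numpy as np

# Illustrative finite-difference check of the gradient formula, in a scalar
# toy model: K(z) = exp(-z**2/2) (so K'(z) = -z K(z)), d_i(v) = (v - c_i)**2,
# w_i = K(d_i/h) / sum_j K(d_j/h), ybar_l = sum_i w_i Y_i**l, and
#   d/dv ybar_l = -(1/h**2) * sum_i (Y_i**l - ybar_l) * w_i * d_i * d_i'(v).
rng = np.random.default_rng(2)
n, h, v0 = 10, 0.7, 0.3
Y = rng.normal(size=n)
c = rng.random(n)

def ybar(v, l):
    d = (v - c) ** 2
    K = np.exp(-((d / h) ** 2) / 2)
    w = K / K.sum()
    return np.sum(w * Y ** l), d, w

for l in (1, 2):
    yl, d, w = ybar(v0, l)
    grad = -(1 / h ** 2) * np.sum((Y ** l - yl) * w * d * 2 * (v0 - c))
    eps = 1e-6
    fd = (ybar(v0 + eps, l)[0] - ybar(v0 - eps, l)[0]) / (2 * eps)
    assert abs(grad - fd) < 1e-6             # analytic gradient matches FD
```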
\begin{document} \title{Hoops, Coops and the Algebraic Semantics of Continuous Logic} \begin{abstract} B\"{u}chi and Owens studied algebraic structures called hoops. Hoops provide a natural algebraic semantics for a class of substructural logics that we think of as intuitionistic analogues of the widely studied {\L}ukasiewicz logics. Ben Yaacov extended {\L}ukasiewicz logic to get what is called continuous logic by adding a halving operator. In this paper, we define the notion of continuous hoop, or coop for short, and show that coops provide a natural algebraic semantics for continuous logic. We characterise the simple and subdirectly irreducible coops and investigate the decision problem for various theories of coops. In passing, we give a new proof that hoops form a variety by giving an algorithm that converts a proof in intuitionistic {\L}ukasiewicz logic into a chain of equations. \end{abstract} \tableofcontents \Section{Introduction} \label{sec:introduction} Around 1930, {\L}ukasiewicz and Tarski \cite{lukasiewicz-tarski30} instigated the study of logics admitting models in which the truth values are real numbers drawn from some subset $T$ of the interval $[0, 1]$. In these models, conjunction is represented by capped addition\footnote{We here follow the convention of the literature on continuous logic in ordering the truth values by increasing logical strength so that $0$ represents truth and $1$ falsehood.}: $A \land B \mathrel{{:}{=}} \Func{inf}\{A + B, 1\}$ and negation is represented by inversion: $\lnot A \mathrel{{:}{=}} 1 - A$. The set $T$ is required to contain $1$ and to be closed under these operations. One then finds that $T$ is the intersection $G \cap [0, 1]$ where $G$ is some additive subgroup of $\mathbb{R}$ with $\mathbb{Z} \subseteq G$ and that $T$ is also closed under disjunction and implication defined by $A \lor B \mathrel{{:}{=}} \Func{sup}\{A + B - 1, 0\}$ and $A \Rightarrow B \mathrel{{:}{=}} \Func{sup}\{B - A, 0\}$. 
These logics are classical in that $\lnot\lnot A$ and $A$ are equivalent. Moreover the law of the excluded middle holds, but $A \Rightarrow A \land A$ only holds in the special case of Boolean logic for which $T = \{0, 1\}$, so apart from this special case, the logics are substructural. These {\L}ukasiewicz logics have been widely studied, e.g., as instances of fuzzy logics \cite{Hajek98}. More recently, ben Yaacov has used them as a building block in what is called continuous logic \cite{ben-yaacov-pedersen09}. Continuous logic unifies work of Henson and others \cite{henson-iovino02} that aims to overcome shortfalls of classical first-order model theory when applied to continuous structures such as metric spaces and Banach spaces. The language of continuous logic extends that of the usual propositional logic by adding a halving operator, written $A/2$. In the standard numerical model of continuous logic the set $T$ of truth values is the interval $[0, 1]$ and $A/2 \mathrel{{:}{=}} \frac{1}{2}A$. Many basic facts about both {\L}ukasiewicz and continuous logics depend on work of Rose and Rosser \cite{rose-rosser58} and Chang \cite{chang58a, chang58b} who proved that the following axiom schemata together with the rule of modus ponens are complete for the propositional {\L}ukasiewicz logics: \begin{gather*} A \Rightarrow (B \Rightarrow A) \tag*{(A1)} \\ (A \Rightarrow B) \Rightarrow (B \Rightarrow C) \Rightarrow (A \Rightarrow C) \tag*{(A2)} \\ ((A \Rightarrow B) \Rightarrow B) \Rightarrow ((B \Rightarrow A) \Rightarrow A) \tag*{(A3)} \\ (\lnot A \Rightarrow \lnot B) \Rightarrow (B \Rightarrow A). \tag*{(A4)} \end{gather*} This had been a long-standing conjecture of {\L}ukasiewicz.
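As a quick machine check (our own, not part of the original completeness proofs), one can confirm that each of (A1)--(A4) evaluates to $0$, i.e., to truth, at every point of a rational grid in the standard model. A grid check is evidence rather than a proof, but validity over all of $[0,1]$ follows from the same case analysis.

```python
from fractions import Fraction
from itertools import product

def neg(a): return 1 - a
def imp(a, b): return max(b - a, Fraction(0))

grid = [Fraction(k, 6) for k in range(7)]
for a, b, c in product(grid, repeat=3):
    assert imp(a, imp(b, a)) == 0                              # (A1)
    assert imp(imp(a, b), imp(imp(b, c), imp(a, c))) == 0      # (A2)
    assert imp(imp(imp(a, b), b), imp(imp(b, a), a)) == 0      # (A3)
    assert imp(imp(neg(a), neg(b)), imp(b, a)) == 0            # (A4)
```

Note in passing that both sides of (A3) evaluate to $\min\{a, b\}$, which is the numerical reading of the "strong disjunction" discussed later.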
Ben Yaacov \cite{ben-yaacov08} added the following axiom schemata for the halving operator: \begin{gather*} (A/2 \Rightarrow A) \Rightarrow A/2 \tag*{(A5)} \\ A/2 \Rightarrow (A/2 \Rightarrow A) \tag*{(A6)} \end{gather*} and showed that (A1)--(A6) together with modus ponens are complete for the standard numerical model of continuous logic. The goal of the present paper is to cast some light onto these axiomatizations by developing propositional {\L}ukasiewicz logic and continuous logic as a series of extensions of \emph{intuitionistic} affine logic. A similar approach for {\L}ukasiewicz logic was developed in \cite{Ciabattoni:1997} where \emph{classical} affine logic was taken as the starting point. The more restricted setting of intuitionistic affine logic will allow us to better calibrate the amount of contraction that needs to be added to affine logic to obtain {\L}ukasiewicz logics. In particular, we obtain an intuitionistic counterpart of {\L}ukasiewicz logic. Our work began with the observation that ben Yaacov's continuous logic, which we call $\Logic{CL}{c}$, is an extension of a primitive intuitionistic substructural logic $\Logic{AL}{i}$. We now consider an even more primitive logic $\Logic{AL}{u}$ and develop $\Logic{CL}{c}$ as depicted in Figure~\ref{fig:logics}, which also shows how the Brouwer-Heyting intuitionistic propositional logic $\Logic{IL}{\relax}$ and Boolean logic $\Logic{BL}{\relax}$ relate to this development. \begin{figure} \caption{Relationships between the Logics} \label{fig:logics} \end{figure} The structure of the rest of this paper is as follows: \begin{description} \item[Section~\ref{sec:the-logics}] gives the definitions of the logical languages we deal with and of each of the twelve logics shown in Figure~\ref{fig:logics}.
\item[Section~\ref{sec:algebraic-semantics}] gives sound and complete algebraic semantics for the logics in terms of certain classes of pocrims and hoops, algebraic structures that have been quite widely studied in connection with $\Logic{AL}{i}$ and related logics. We introduce the notion of a continuous hoop or {\em coop} to give the algebraic semantics of continuous logic. \item[Section~\ref{sec:algebra-of-coops}] considers coops from the perspective of universal algebra. We characterize the simple and subdirectly irreducible coops and use the results to begin an investigation of the decision problem for theories of coops. \item[Section~\ref{sec:future-work}] outlines further work, particularly concerning decidability. \end{description} \Section{The Logics}\label{sec:the-logics} We work in a language $\Lang{\relax}$ (or $\Lang{1\frac{1}{2}}$ for emphasis) whose atomic formulas are the propositional constants $0$ (truth) and $1$ (falsehood) and propositional variables drawn from the set $\Func{Var} = \{P, Q, \ldots\}$. If $A$ and $B$ are formulas of $\Lang{\relax}$ then so are $A \otimes B$ (conjunction), $A \multimap B$ (implication) and $A/2$ (halving). We define $\Lang{1}$ and $\Lang{\frac{1}{2}}$ to be the sublanguages of $\Lang{\relax}$ that disallow halving and $1$ respectively and we define $\Lang{0}$ to be the intersection of $\Lang{1}$ and $\Lang{\frac{1}{2}}$. We write $A{{}^{\perp}}$ as an abbreviation for $A \multimap 1$. The judgments of the logics we consider are {\em sequents} of the form $\Gamma \vdash A$, where the {\em succedent} $A$ is a formula and the {\em antecedent} $\Gamma$ is a multiset of formulas. \begin{figure} \caption{Inference Rules} \label{fig:rules} \end{figure} The inference rules for all our logics are the introduction and elimination rules for the two connectives\footnote{Omitting disjunction from the logic greatly simplifies the algebraic semantics.
While it may be unsatisfactory from the point of view of intuitionistic philosophy, disjunction defined using de Morgan's law is adequate for our purposes.} shown in Figure~\ref{fig:rules}. The various logics we deal with are distinguished only by the axioms we define for them. We define the axioms such that if $\Gamma \vdash B$ is an axiom, then so is $\Gamma, A \vdash B$ for any formula $A$. The way the antecedents of sequents are handled in the inference rules then implies that we have the following derived rule of weakening: \[ \begin{prooftree} \Gamma \vdash B \justifies \mathstrut \Gamma, A \vdash B \using{[\Func{WK}]} \end{prooftree} \] since any instance of this rule in a proof tree may be moved up the proof tree until it is just beneath an axiom and then the conclusion of the rule will already be an axiom. It is easily proved for any logic with the inference rules of Figure~\ref{fig:rules} that a form of the deduction theorem holds in the sense that if one of the following three sequents is provable then so are the other two: \begin{align*} A_1, \ldots, A_m &\vdash B, \\ &\vdash A_1 \multimap \ldots \multimap A_m \multimap B, \\ &\vdash A_1 \otimes \ldots \otimes A_m \multimap B. \end{align*} The axiom schemata for our logics are selected from those shown in Figure~\ref{fig:axioms}. These are the axiom of assumption $[\Func{ASM}]$, ex-falso-quodlibet $[\Func{EFQ}]$, double negation elimination $[\Func{DNE}]$, commutative weak conjunction $[\Func{CWC}]$, commutative strong disjunction $[\Func{CSD}]$, the axiom of contraction $[\Func{CON}]$, and two axioms for the halving operator: one for a lower bound $[\Func{HLB}]$ and one for an upper bound $[\Func{HUB}]$. \begin{figure} \caption{Axiom Schemata} \label{fig:axioms} \end{figure} The logics we deal with are discussed in the next few paragraphs and the axioms for each logic are summarised in Table~\ref{tab:models}.
In the logics that do not have the axiom schema $[\Func{EFQ}]$, $1$ plays no special r\^{o}le and may be omitted from the language and similarly halving may be omitted from the language in the logics that do not have the axiom schemata $[\Func{HLB}]$ and $[\Func{HUB}]$. Alternatively, the full language $\Lang{1\frac{1}{2}}$ may be used in all cases with $1$ and $A/2$ effectively acting as variables in formulas that involve them. Unbounded\footnote{ We use the term ``unbounded'' here for logics in which $1$ has no special meaning and need not be an upper bound for the lattice of truth values. } \ intuitionistic affine logic, $\Logic{AL}{u}$, has $[\Func{ASM}]$ as its only axiom schema. All our other logics include $\Logic{AL}{u}$. Since the contexts $\Gamma$, $\Delta$ are multisets, an assumption in the rules of Figure~\ref{fig:rules} can be used at most once. $\Logic{AL}{u}$ serves as a prototype for substructural logics with this property. It corresponds under the Curry-Howard correspondence to a $\lambda$-calculus with pairing (i.e., $\lambda$-abstractions of the form $\lambda (x, y)\bullet t$, $\lambda((x, y), z)\bullet t$, $\lambda(x, (y, z))\bullet t$ etc.) in which no variable is used twice. Intuitionistic affine logic, $\Logic{AL}{i}$, extends $\Logic{AL}{u}$ with the axiom schema $[\Func{EFQ}]$. Classical affine logic, $\Logic{AL}{c}$, extends $\Logic{AL}{i}$ with the axiom schema $[\Func{DNE}]$. It can also be viewed as the extension of the so-called multiplicative fragment of Girard's linear logic \cite{Girard(87B)} by allowing weakening and the axiom schema $[\Func{EFQ}]$. We do not define an ``unbounded'' version of $\Logic{AL}{c}$ or its extensions, since $[\Func{EFQ}]$ is derivable from $[\Func{DNE}]$ in the presence of weakening $[\Func{WK}]$. Unbounded intuitionistic {\L}ukasiewicz logic, $\Logic{{\L}L}{u}$, extends $\Logic{AL}{u}$ with the axiom schema $[\Func{CWC}]$.
In $\Logic{AL}{u}$, $A \otimes (A \multimap B)$ can be viewed as a weak conjunction of $A$ and $B$. In $\Logic{{\L}L}{u}$, we have commutativity of this weak conjunction. Intuitionistic {\L}ukasiewicz logic, $\Logic{{\L}L}{i}$, extends $\Logic{{\L}L}{u}$ with the axiom schema $[\Func{EFQ}]$. Classical {\L}ukasiewicz logic, $\Logic{{\L}L}{c}$, extends $\Logic{AL}{i}$ with the axiom schema $[\Func{CSD}]$. In $\Logic{AL}{i}$, $(A \multimap B) \multimap B$ can be viewed as a form of disjunction, stronger than that defined by $(A{{}^{\perp}} \otimes B{{}^{\perp}}){{}^{\perp}}$. In $\Logic{{\L}L}{c}$ we have commutativity of this strong disjunction. This gives us the widely-studied multi-valued logic of {\L}ukasiewicz. \begin{table} \centering \begin{tabular}{|c|l|l|} \hline {\bf Logic} & {\bf Axioms} & {\bf Models} \\ \hline \hline $\Logic{AL}{u}$ & $[\Func{ASM}]$ & pocrims \\[1mm] \hline $\Logic{AL}{i}$ & as $\Logic{AL}{u} + [\Func{EFQ}]$ & bounded pocrims \\[1mm] \hline $\Logic{AL}{c}$ & as $\Logic{AL}{i} + [\Func{DNE}]$& involutive pocrims \\[1mm] \hline $\Logic{{\L}L}{u}$ & as $\Logic{AL}{u} + [\Func{CWC}]$ & hoops \\[1mm] \hline $\Logic{{\L}L}{i}$ & as $\Logic{{\L}L}{u} + [\Func{EFQ}]$ & bounded hoops \\[1mm] \hline $\Logic{{\L}L}{c}$ & as $\Logic{AL}{i} + [\Func{CSD}]$ & bounded involutive hoops \\[1mm] \hline $\Logic{IL}{u}$ & as $\Logic{AL}{u} + [\Func{CON}]$ & idempotent pocrims \\[1mm] \hline $\Logic{IL}{\relax}$ & as $\Logic{AL}{i} + [\Func{CON}]$ & bounded idempotent pocrims \\[1mm] \hline $\Logic{BL}{\relax}$ & as $\Logic{IL}{\relax} + [\Func{DNE}]$ & involutive idempotent pocrims \\[1mm] \hline $\Logic{CL}{u}$ & as $\Logic{{\L}L}{u} + [\Func{HLB}] + [\Func{HUB}]$ & coops \\[1mm] \hline $\Logic{CL}{i}$ & as $\Logic{{\L}L}{i} + [\Func{HLB}] + [\Func{HUB}]$ & bounded coops \\[1mm] \hline $\Logic{CL}{c}$ & as $\Logic{{\L}L}{c} + [\Func{HLB}] + [\Func{HUB}]$ & involutive coops \\[1mm] \hline \end{tabular} \caption{The logics and their models} \label{tab:models}
\end{table} Unbounded intuitionistic propositional logic, $\Logic{IL}{u}$, extends $\Logic{AL}{u}$ with the axiom schema $[\Func{CON}]$, which is equivalent to a contraction rule allowing $\Gamma, A \vdash B$ to be derived from $\Gamma, A, A \vdash B$. Intuitionistic propositional logic, $\Logic{IL}{\relax}$, extends $\Logic{IL}{u}$ with the axiom schema $[\Func{EFQ}]$. $\Logic{IL}{\relax}$ is the conjunction-implication fragment of the well-known Brouwer-Heyting intuitionistic propositional logic. Boolean logic, $\Logic{BL}{\relax}$, extends $\Logic{IL}{\relax}$ with the axiom schema $[\Func{DNE}]$. This is the familiar two-valued logic of truth tables. Unbounded intuitionistic continuous logic, $\Logic{CL}{u}$, allows the halving operator and extends $\Logic{{\L}L}{u}$ with the axiom schemata $[\Func{HLB}]$ and $[\Func{HUB}]$, which effectively give lower and upper bounds on the logical strength of $A/2$. The two axioms can also be read as saying that $A/2$ is equivalent to $A/2 \multimap A$. Intuitionistic continuous logic, $\Logic{CL}{i}$, extends $\Logic{CL}{u}$ with the axiom schema $[\Func{EFQ}]$. $\Logic{CL}{i}$ is an intuitionistic version of the continuous logic of ben Yaacov \cite{ben-yaacov-pedersen09}. Classical continuous logic, $\Logic{CL}{c}$, extends $\Logic{CL}{i}$ with the axiom schema $[\Func{DNE}]$. This gives ben Yaacov's continuous logic. The motivating model takes truth values to be real numbers between $0$ and $1$ with conjunction defined as capped addition. Our initial goal was to understand the relationships amongst $\Logic{AL}{i}$, $\Logic{{\L}L}{c}$ and $\Logic{CL}{c}$. The other logics came into focus when we tried to decompose the somewhat intractable axiom $[\Func{CSD}]$ into a combination of $[\Func{DNE}]$ and an intuitionistic component. We will see that the twelve logics are related as shown in Figure~\ref{fig:logics}.
In the figure, an arrow from $T_1$ to $T_2$ means that $T_2$ extends $T_1$, i.e., the set of provable sequents of $T_2$ contains that of $T_1$. In each square, the north-east logic is the least extension of the south-west logic that contains the other two. For human beings, at least, the proof of this fact is quite tricky for the $\Logic{AL}{i}$-$\Logic{{\L}L}{c}$ square, see~\cite[chapters 2 and 3]{Hajek98}. The routes in Figure~\ref{fig:logics} from $\Logic{AL}{u}$ to $\Logic{IL}{\relax}$ and $\Logic{BL}{\relax}$ have been quite extensively studied \cite{blok-ferreirim00,raftery07}. We are not aware of any work on $\Logic{CL}{u}$ and $\Logic{CL}{i}$, but these are clearly natural objects of study in connection with ben Yaacov's continuous logic. It should be noted that $\Logic{IL}{u}$ and $\Logic{CL}{u}$ are incompatible: given $[\Func{CON}]$, $A/2$ and $A/2 \otimes A/2$ are equivalent, so that from $[\Func{HLB}]$ and $[\Func{HUB}]$ one finds that $A/2 \multimap A$ and $A/2$ are both provable; which proves $A$, for arbitrary formulas $A$. \Section{Algebraic Semantics}\label{sec:algebraic-semantics} We give an algebraic semantics to our logics using pocrims: partially ordered, commutative, residuated, integral monoids. A {\em pocrim}\footnote{Strictly speaking, this is a {\em dual pocrim}, since we order it by increasing logical strength and write it additively, whereas in much of the literature the opposite order and multiplicative notation is used (so halves would be square roots).
We follow the ordering convention of the continuous logic literature.}, is a structure for the signature $(0, +, \mathop{\rightarrow})$ of type $(0, 2, 2)$ satisfying the following laws: \begin{gather*} (x + y) + z = x + (y + z) \tag*{$[{\sf m}_1]$} \\ x + y = y + x \tag*{$[{\sf m}_2]$} \\ x + 0 = x \tag*{$[{\sf m}_3]$} \\ x \ge x \tag*{$[{\sf o}_1]$} \\ x \ge y \land y \ge z \Rightarrow x \ge z \tag*{$[{\sf o}_2]$} \\ x \ge y \land y \ge x \Rightarrow x = y \tag*{$[{\sf o}_3]$} \\ x \ge y \Rightarrow x + z \ge y + z \tag*{$[{\sf o}_4]$} \\ x \ge 0 \tag*{$[{\sf le}]$} \\ x + y \ge z \Leftrightarrow x \ge y \mathop{\rightarrow} z \tag*{$[{\sf r}]$} \end{gather*} \noindent where $x \ge y$ is an abbreviation for $x \mathop{\rightarrow} y = 0$. When working in a pocrim, we adopt the convention that $\mathop{\rightarrow}$ associates to the right and has lower precedence than $+$. So, for example, the brackets in $(a + b) \mathop{\rightarrow} (c \mathop{\rightarrow} (d + f))$ are all redundant, while those in $((a \mathop{\rightarrow} b) \mathop{\rightarrow} c) + d$ are all required. Let $\mathbf{M} = (M; 0, +, \mathop{\rightarrow})$ be a pocrim. The laws~$[{\sf m}_i]$, $[{\sf o}_j]$ and $[{\sf le}]$ say that $(M, 0, +; {\ge})$ is a partially ordered commutative monoid with the identity $0$ as least element. In particular, $+$ is monotonic in both its arguments. Law~$[{\sf r}]$, the {\em residuation property}, says that for any $x$ and $z$ the set $\{y \mathrel{|} x + y \ge z\}$ is non-empty and has $x \mathop{\rightarrow} z$ as least element. As is easily verified, $\mathop{\rightarrow}$ is antimonotonic in its first argument and monotonic in its second argument. Let $\alpha : \Func{Var} \rightarrow M$ be an interpretation of logical variables as elements of $M$ and extend $\alpha$ to a function $v_{\alpha} : \Lang{0} \rightarrow M$ by interpreting $0$, $\otimes$ and $\multimap$ as $0$, $+$ and $\mathop{\rightarrow}$ respectively.
We say that $\alpha$ {\em satisfies} the sequent $C_1, \ldots, C_n \vdash A$ iff $v_{\alpha}(C_1) + \ldots + v_{\alpha}(C_n) \ge v_{\alpha}(A)$. We say that $\Gamma \vdash A$ is {\em valid} in $\mathbf{M}$ iff it is satisfied by every assignment $\alpha : \Func{Var} \rightarrow M$, in which case we say $\mathbf{M}$ is a model of $\Gamma \vdash A$. If $\cal C$ is a class of pocrims, we say $\Gamma \vdash A$ is valid in $\cal C$ if it is valid in every member of $\cal C$. We say that a logic $L$ whose language is $\Lang{0}$ is sound for a class of pocrims $\cal C$ if every sequent over $\Lang{0}$ that is provable in $L$ is valid in $\cal C$. We say that $L$ is complete for $\cal C$ if the converse holds. We then have: \begin{Theorem}\label{thm:alu-sound-complete} $\Logic{AL}{u}$ is sound and complete for the class of all pocrims. \end{Theorem} \par \noindent{\bf Proof: } This is standard. Soundness is a routine exercise. For the completeness, one defines an equivalence relation $\simeq$ on formulas such that $A \simeq B$ holds iff both $A \vdash B$ and $B \vdash A$ are provable in the logic. Writing $[A]$ for the equivalence class of a formula $A$, one then shows that the set of equivalence classes $T$ is the carrier set of a pocrim $\mathbf{T} = (T; 0, +, \mathop{\rightarrow})$, where $0 = [0]$ and the operators $+$ and $\mathop{\rightarrow}$ are defined so that $[A] + [B] = [A \otimes B]$ and $[A] \mathop{\rightarrow} [B] = [A \multimap B]$. In $\mathbf{T}$, the {\em term model} of the logic, $C_1, \ldots, C_n \vdash A$ is valid, i.e., $[C_1] + \ldots + [C_n] \mathop{\rightarrow} [A] = 0$ holds, iff $C_1, \ldots, C_n \vdash A$ is provable. Completeness follows, since a sequent that is valid in all pocrims must be valid in the pocrim $\mathbf{T}$ and hence must be provable. \rule{0.5em}{0.5em} The above theorem says that a sequent is provable in $\Logic{AL}{u}$ iff it has every pocrim as a model.
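A concrete pocrim to keep in mind is the standard numerical model: $M = [0,1]$ with capped addition and $x \mathop{\rightarrow} y = \sup\{y - x, 0\}$, in which the pocrim order $\ge$ coincides with the numerical order. The sketch below (our own; the function names are ours) brute-forces the pocrim laws, including residuation $[{\sf r}]$, over a rational grid:

```python
from fractions import Fraction
from itertools import product

def add(a, b): return min(a + b, Fraction(1))     # capped addition
def arrow(a, b): return max(b - a, Fraction(0))   # its residual
def ge(a, b): return arrow(a, b) == 0             # x >= y iff x -> y = 0

grid = [Fraction(k, 5) for k in range(6)]
for x, y, z in product(grid, repeat=3):
    assert add(add(x, y), z) == add(x, add(y, z))        # [m1]
    assert add(x, y) == add(y, x)                        # [m2]
    assert add(x, 0) == x                                # [m3]
    assert ge(x, 0)                                      # [le]
    if ge(x, y):
        assert ge(add(x, z), add(y, z))                  # [o4]
    assert ge(add(x, y), z) == ge(x, arrow(y, z))        # [r]
```

The order laws $[{\sf o}_1]$--$[{\sf o}_3]$ are inherited from the numerical order, so they need no separate check.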
In the sequel we will often use the theorem to derive laws that hold in all pocrims. For example, it is easy to find a proof in $\Logic{AL}{u}$ of the sequent $P, Q \multimap P \vdash Q \multimap (P \otimes P)$, from which we may conclude that the law $x + (y \mathop{\rightarrow} x) \ge y \mathop{\rightarrow} x + x$ holds in any pocrim. A {\em hoop} is a pocrim that is {\em naturally ordered}, i.e., whenever $x \ge y$, there is $z$ such that $x = y + z$. It is a nice exercise in the use of the residuation property to show that a pocrim is a hoop iff it satisfies the identity \[ \begin{array}{l@{\quad\quad}r} x + (x \mathop{\rightarrow} y) = y + (y \mathop{\rightarrow} x) \tag*{$[{\sf cwc}]$} \end{array} \] From this it follows that the logic $\Logic{{\L}L}{u}$ is sound and complete for the class of all hoops. See~\cite{blok-ferreirim00} for more information on hoops. We say a pocrim is {\em idempotent} if it is idempotent as a monoid, i.e., it satisfies $x + x = x$. Note that this condition implies condition $[{\sf cwc}]$, since it implies $x + y \ge x + (x \mathop{\rightarrow} y) = x + (x + (x \mathop{\rightarrow} y)) \ge x + y$, whence $x + (x \mathop{\rightarrow} y) = x + y = y + x = y + (y \mathop{\rightarrow} x)$. Using this, we find that $\Logic{IL}{u}$ is sound and complete for the class of all idempotent pocrims. To complete our treatment of the bottom layer in Figure~\ref{fig:logics}, we need to prove a theorem about hoops that will help us with the algebraic semantics of the halving operator. The hoop axiom $[{\sf cwc}]$ is surprisingly powerful but often requires considerable ingenuity to apply. We have been greatly assisted in our work by using the late Bill McCune's {\tt Prover9} and {\tt Mace4} programs to prove algebraic facts and to find counter-examples.
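In the numerical model both sides of $[{\sf cwc}]$ reduce to $\max\{x, y\}$, the witness for the natural ordering is simply $z = x - y$, and the derived pocrim law quoted above can be checked on a grid as well (our own sketch, with our own helper names):

```python
from fractions import Fraction
from itertools import product

def add(a, b): return min(a + b, Fraction(1))
def arrow(a, b): return max(b - a, Fraction(0))
def ge(a, b): return arrow(a, b) == 0

grid = [Fraction(k, 7) for k in range(8)]
for x, y in product(grid, repeat=2):
    # [cwc]: both sides equal max(x, y) in this model
    assert add(x, arrow(x, y)) == add(y, arrow(y, x)) == max(x, y)
    # natural ordering: if x >= y then x = y + z, witnessed by z = x - y
    if ge(x, y):
        assert add(y, x - y) == x
    # derived pocrim law: x + (y -> x) >= y -> x + x
    assert ge(add(x, arrow(y, x)), arrow(y, add(x, x)))
```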
Readers who enjoy a challenge may like to look for their own proof of the following theorem before reading ours, which is a fairly direct translation of that found after a few minutes by {\tt Prover9}. \begin{Theorem}[{\tt Prover9}] \label{thm:hoop-halving-bounds} The following hold in any hoop: \begin{align*} \mbox{\em(i)}\quad& \mbox{if $a = a \mathop{\rightarrow} b$ and $c = c \mathop{\rightarrow} b$, then $a = c$} \\ \mbox{\em(ii)}\quad& \mbox{if $a \ge a \mathop{\rightarrow} b$ and $c = c \mathop{\rightarrow} b$, then $a \ge c$} \\ \mbox{\em(iii)}\quad& \mbox{if $a \le a \mathop{\rightarrow} b$ and $c = c \mathop{\rightarrow} b$, then $a \le c$}. \end{align*} \end{Theorem} \par \noindent{\bf Proof: } \noindent {\em(i):} this is immediate from parts~{\em(ii)} and~{\em(iii)}. \noindent {\em(ii):} by the hypothesis on $a$, $a + a \ge b$, and so, using the hypothesis on $c$ and the fact that $x + (y \mathop{\rightarrow} x) \ge y \mathop{\rightarrow} x + x$ discussed in the remarks following the proof of Theorem~\ref{thm:alu-sound-complete}, we find: \[ a + (c \mathop{\rightarrow} a) \ge c \mathop{\rightarrow} a + a \ge c \mathop{\rightarrow} b = c \] \noindent so that $c \mathop{\rightarrow} a \ge a \mathop{\rightarrow} c$. Using the fact that if $x \ge y$, then $x + (y \mathop{\rightarrow} z) \ge z$, we have: $ (c \mathop{\rightarrow} a) + ((a \mathop{\rightarrow} c) \mathop{\rightarrow} c) \ge c. $ As the hypothesis on $c$ implies $c + c \ge b$, this gives: \[ c + (c \mathop{\rightarrow} a) + ((a \mathop{\rightarrow} c) \mathop{\rightarrow} c) \ge b. \] \noindent Using $[{\sf cwc}]$ twice and the fact that $x \mathop{\rightarrow} y \mathop{\rightarrow} x = 0$, we find: \begin{align*} b &\le c + (c \mathop{\rightarrow} a) + ((a \mathop{\rightarrow} c) \mathop{\rightarrow} c) \\ &= a + (a \mathop{\rightarrow} c) + ((a \mathop{\rightarrow} c) \mathop{\rightarrow} c) \\ &= a + c + (c \mathop{\rightarrow} a \mathop{\rightarrow} c) \\ &= a + c.
\end{align*} \noindent I.e., $a + c \ge b$, so that $a \ge c \mathop{\rightarrow} b = c$ as required. \noindent {\em(iii):} by the hypothesis on $c$ and using the fact that $(x \mathop{\rightarrow} y) + x \ge y$ twice, we have: \[ c + (a \mathop{\rightarrow} c) + a = (c \mathop{\rightarrow} b) + (a \mathop{\rightarrow} c) + a \ge b. \] \noindent Using the hypothesis on $a$, we have: \[ c + (a \mathop{\rightarrow} c) \ge a \mathop{\rightarrow} b \ge a \] So $c \ge (a \mathop{\rightarrow} c) \mathop{\rightarrow} a$, implying: \[ c \mathop{\rightarrow} (a \mathop{\rightarrow} c) \mathop{\rightarrow} a = 0. \] \noindent Using $[{\sf cwc}]$ and the facts that $x \mathop{\rightarrow} y \le (z \mathop{\rightarrow} x) \mathop{\rightarrow} y$ and $(x \mathop{\rightarrow} y) + x \ge y$, we have: \begin{align*} c &= c + (c \mathop{\rightarrow} (a \mathop{\rightarrow} c) \mathop{\rightarrow} a) \\ &= ((a \mathop{\rightarrow} c) \mathop{\rightarrow} a) + (((a \mathop{\rightarrow} c) \mathop{\rightarrow} a) \mathop{\rightarrow} c) \\ &\ge ((a \mathop{\rightarrow} c) \mathop{\rightarrow} a) + (a \mathop{\rightarrow} c) \\ &\ge a. \tag*{ \rule{0.5em}{0.5em}} \end{align*} We define a {\em coop} to be a structure for the signature $(0, +, \mathop{\rightarrow}, /2)$ of type $(0, 2, 2, 1)$ whose $(0, +, \mathop{\rightarrow})$-reduct is a hoop and such that for every $x$ we have: \[ \begin{array}{l@{\quad\quad}r} x/2 = x/2 \mathop{\rightarrow} x \tag*{$[{\sf h}]$} \end{array} \] From $[{\sf h}]$, one clearly has $x \ge x/2 \mathop{\rightarrow} x = x/2$, i.e., $x \mathop{\rightarrow} x/2 = 0$ and so using also $[{\sf cwc}]$ one finds $x/2 + x/2 = x/2 + (x/2 \mathop{\rightarrow} x) = x + (x \mathop{\rightarrow} x/2) = x$ justifying the choice of notation. (Our convention is that $/2$ binds tighter than the infix operators, so the brackets are needed in $(x + y)/2$ but not in $x \mathop{\rightarrow} (x/2)$). 
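In the standard model the operation of condition $[{\sf h}]$ is literal halving, since $x/2 \mathop{\rightarrow} x = \sup\{x - x/2, 0\} = x/2$. The sketch below (our own) checks $[{\sf h}]$, the identity $x/2 + x/2 = x$ derived above, and, on a grid, that $x/2$ is the only solution of $y = y \mathop{\rightarrow} x$:

```python
from fractions import Fraction

def add(a, b): return min(a + b, Fraction(1))
def arrow(a, b): return max(b - a, Fraction(0))
def half(a): return a / 2

grid = [Fraction(k, 16) for k in range(17)]
for x in grid:
    assert half(x) == arrow(half(x), x)          # [h]
    assert add(half(x), half(x)) == x            # justifies the notation
    # on the grid, y = y -> x has y = x/2 as its only solution
    solutions = [y for y in grid if y == arrow(y, x)]
    assert solutions == ([half(x)] if half(x) in grid else [])
```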
The following very useful theorem shows that the halving operator is uniquely defined by the condition $x/2 = x/2 \mathop{\rightarrow} x$ (so we could have defined a coop to be a hoop that satisfies the axiom $\all{x}\ex{y}y = y \mathop{\rightarrow} x$ and taken the halving operator to be defined on such a hoop by equation~$[{\sf h}]$). \begin{Theorem} \label{thm:coop-halving-bounds} Let $a$ and $b$ be elements of a coop. Then the following hold: \begin{align*} \mbox{\em(i)}\quad& a = b/2 \Leftrightarrow a = a \mathop{\rightarrow} b \\ \mbox{\em(ii)}\quad& a \ge b/2 \Leftrightarrow a \ge a \mathop{\rightarrow} b \\ \mbox{\em(iii)}\quad& a \le b/2 \Leftrightarrow a \le a \mathop{\rightarrow} b \end{align*} \end{Theorem} \par \noindent{\bf Proof: } $\Rightarrow$: let $R \in \{=, \ge, \le\}$, then using the definition of a coop and the fact that $\mathop{\rightarrow}$ is antimonotonic in its left argument, we have: \[ a \mathrel{R} b/2 = b/2 \mathop{\rightarrow} b \mathrel{R} a \mathop{\rightarrow} b \] $\Leftarrow$: immediate from Theorem~\ref{thm:hoop-halving-bounds} and the definition of a coop. \rule{0.5em}{0.5em} \begin{Corollary}\label{cor:halving-miscellany} Let $a$ and $b$ be elements of a coop. Then the following hold: \begin{align*} \mbox{\em(i)}\quad& a = b \Leftrightarrow a/2 = b/2 \\ \mbox{\em(ii)}\quad& a \ge b \Leftrightarrow a/2 \ge b/2 \\ \mbox{\em(iii)}\quad& a/2 = a \Leftrightarrow a = 0 \\ \mbox{\em(iv)}\quad& a/2 + b/2 \ge (a + b)/2 \\ \mbox{\em(v)}\quad& a/2 \mathop{\rightarrow} b/2 = (a \mathop{\rightarrow} b)/2 \end{align*} \end{Corollary} \par \noindent{\bf Proof: } {\em(i):} immediate from {\em(ii)}. \\ {\em(ii)$\Rightarrow$:} if $a \ge b$, then as $a = a/2 + a/2$, we have $a/2 \ge a/2 \mathop{\rightarrow} b$ and then, by the theorem, $a/2 \ge b/2$. \\ {\em(ii)$\Leftarrow$:} if $a/2 \ge b/2$, then $a = a/2 + a/2 \ge b/2 + b/2 = b$. 
\\{\em{(iii)}}: if $a/2 = a$, then, by the theorem, $a/2 = a/2 \mathop{\rightarrow} a = a \mathop{\rightarrow} a = 0$. \\{\em{(iv)}}: We have \begin{align*} a/2 + b/2 \mathop{\rightarrow} a/2 + b/2 \mathop{\rightarrow} a + b &= a/2 + b/2 + a/2 + b/2 \mathop{\rightarrow} a + b \\ &= a + b \mathop{\rightarrow} a + b\\ &= 0 \end{align*} I.e., $a/2 + b/2 \ge a/2 + b/2 \mathop{\rightarrow} a + b$, so, by the theorem, $a/2 + b/2 \ge (a + b)/2$. \\{\em{(v)}}: We claim that $(a \mathop{\rightarrow} b)/2 \ge a/2 \mathop{\rightarrow} b/2$ and $(a \mathop{\rightarrow} b)/2 \le a/2 \mathop{\rightarrow} b/2$, from which the result follows. For the first part of the claim, we have $(a \mathop{\rightarrow} b)/2 \ge a/2 \mathop{\rightarrow} b/2$ iff $a/2 + (a \mathop{\rightarrow} b)/2 \ge b/2$ and, by the theorem, this holds iff $a/2 + (a \mathop{\rightarrow} b)/2 \ge a/2 + (a \mathop{\rightarrow} b)/2 \mathop{\rightarrow} b$, i.e. iff $b \le (a/2 + (a \mathop{\rightarrow} b)/2) + (a/2 + (a \mathop{\rightarrow} b)/2) = a + (a \mathop{\rightarrow} b)$ which is true. For the second part of the claim, we have: \begin{align*} (a/2 \mathop{\rightarrow} b/2) \mathop{\rightarrow} a \mathop{\rightarrow} b &= (a/2 \mathop{\rightarrow} b/2) + a \mathop{\rightarrow} b \\ &= a/2 + a/2 + (a/2 \mathop{\rightarrow} b/2) \mathop{\rightarrow} b \\ &= a/2 + b/2 + (b/2 \mathop{\rightarrow} a/2) \mathop{\rightarrow} b \tag*{$[{\sf cwc}]$} \\ &= a/2 + (b/2 \mathop{\rightarrow} a/2) \mathop{\rightarrow} b/2 \mathop{\rightarrow} b \\ &= a/2 + (b/2 \mathop{\rightarrow} a/2) \mathop{\rightarrow} b/2 \tag*{[{\sf h}]}\\ &\le a/2 \mathop{\rightarrow} b/2. \end{align*} where the inequality follows from the fact that $\mathop{\rightarrow}$ is antimonotonic in its first argument. So, by the theorem, $(a \mathop{\rightarrow} b)/2 \le a/2 \mathop{\rightarrow} b/2$ as required. \rule{0.5em}{0.5em} \begin{Corollary}\label{cor:no-finite-coops} There are no non-trivial finite coops.
\end{Corollary} \par \noindent{\bf Proof: } If $a$ is a non-zero element of a coop, parts {\em(ii)} and {\em(iii)} of the corollary imply that the sequence $a, a/2, (a/2)/2, \ldots$ is strictly decreasing. Hence a finite coop has no non-zero elements. \rule{0.5em}{0.5em} Given an interpretation, $\alpha : \Func{Var} \rightarrow M$ with values in a coop, we extend the function $v_{\alpha} : \Lang{0} \rightarrow M$ to $\Lang{\frac{1}{2}}$ in such a way that $v_{\alpha}(A/2) = (v_{\alpha}(A))/2$ and extend the notions of satisfaction, etc. accordingly. The proof of Theorem~\ref{thm:alu-sound-complete} is easily extended to show that the logic $\Logic{CL}{u}$ is sound and complete for the class of coops (using Theorem~\ref{thm:coop-halving-bounds} to show that the halving operation on the term model is well-defined). We now have a sound and complete algebraic semantics for each of the logics in the bottom layer of Figure~\ref{fig:logics}. Moving to the middle layer, let us say that a pocrim, hoop or coop is {\em bounded} if it has a (necessarily unique) {\em annihilator}, i.e., an element $1$ such that for every $x$ we have: \[ \begin{array}{l@{\quad\quad}r} x + 1 = 1 & [{\sf ann}] \end{array} \] Assume the pocrim $\mathbf{M}$ is bounded. Then $0 \le x \le x + 1 = 1$ for any $x$ and $(M; \le)$ is indeed a bounded ordered set. Given an interpretation, $\alpha : \Func{Var} \rightarrow M$ with values in a bounded pocrim, we extend the function $v_{\alpha} : \Lang{0} \rightarrow M$ to $\Lang{1}$ so that $v_{\alpha}(1) = 1$ and extend the notions of satisfaction etc. accordingly. Yet again the proof of Theorem~\ref{thm:alu-sound-complete} is easily extended to show that the logic $\Logic{AL}{i}$ is sound and complete for the class of bounded pocrims. We then find that the logics $\Logic{{\L}L}{i}$, $\Logic{CL}{i}$ and $\Logic{IL}{\relax}$ are sound and complete for bounded hoops, bounded coops and idempotent bounded hoops respectively. 
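For a concrete bounded coop, one can take the dyadic rationals in $[0,1]$: they contain $0$ and the annihilator $1$ and are closed under capped addition, the residual and halving. A closure check with exact arithmetic (our own sketch over a finite sample of the dyadics):

```python
from fractions import Fraction
from itertools import product

def add(a, b): return min(a + b, Fraction(1))
def arrow(a, b): return max(b - a, Fraction(0))
def half(a): return a / 2

def dyadic(q):  # is the (reduced) denominator a power of two?
    return q.denominator & (q.denominator - 1) == 0

D = [Fraction(k, 32) for k in range(33)]  # a finite sample of the dyadics
for x, y in product(D, repeat=2):
    # closure of the dyadics under the coop operations
    assert dyadic(add(x, y)) and dyadic(arrow(x, y)) and dyadic(half(x))
for x in D:
    assert add(x, Fraction(1)) == Fraction(1)    # [ann]: 1 is the annihilator
```

In line with Corollary~\ref{cor:no-finite-coops}, any such coop is infinite: halving keeps producing new denominators.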
Idempotent bounded hoops are also known as Brouwerian algebras and are known to be the conjunction-implication reducts of Heyting algebras (see \cite{Koehler81} and the works cited therein). Finally, for the top layer of Figure~\ref{fig:logics}, we say a pocrim is {\em involutive} if it is bounded and satisfies $\lnot\lnot x = x$, where we write $\lnot x$ as an abbreviation for $x \mathop{\rightarrow} 1$. Idempotent involutive hoops are easily seen to be the conjunction-implication reducts of Boolean algebras. We find that $\Logic{AL}{c}$, $\Logic{{\L}L}{c}$, $\Logic{CL}{c}$ and $\Logic{BL}{\relax}$ are sound and complete for involutive pocrims, involutive hoops, involutive coops and idempotent involutive hoops respectively. This completes the proof of the following theorem: \begin{Theorem}\label{thm:all-sound-complete} The logics $\Logic{AL}{i}, \ldots, \Logic{CL}{c}$, $\Logic{IL}{u}$, $\Logic{IL}{\relax}$ and $\Logic{BL}{\relax}$ of Figure~\ref{fig:logics} are sound and complete for the corresponding classes of pocrims, hoops and coops listed in Table~\ref{tab:models}. \rule{0.5em}{0.5em} \end{Theorem} A {\em Wajsberg hoop} is a hoop satisfying the identity \[ \begin{array}{l@{\quad\quad}r} (x \mathop{\rightarrow} y) \mathop{\rightarrow} y = (y \mathop{\rightarrow} x) \mathop{\rightarrow} x & [{\sf csd}] \end{array} \] It can be shown that Wajsberg hoops are the same as bounded involutive hoops. The classes of pocrims associated with the logics in the left-hand column in Figure~\ref{fig:logics} are very general: any partial order can be embedded in an involutive pocrim. To see this, let $X$ be any partially ordered set. Take a disjoint copy $X{{}^{\perp}}$ of $X$ (say $X{{}^{\perp}} = X \times \{1\}$) and write $x{{}^{\perp}}$ for the image in $X{{}^{\perp}}$ of $x \in X$.
Choose objects $0$, $1$, $r$ and $s$ distinct from each other and from the elements of $X \cup X{{}^{\perp}}$ and order the disjoint union $P_X = \{0, r\} \cup X \cup X{{}^{\perp}} \cup \{s, 1\}$ so that, {\em(i),} $0 < r < X < X{{}^{\perp}} < s < 1$, {\em(ii),} the subset $X$ has the given ordering and, {\em(iii),} $X{{}^{\perp}}$ has the opposite ordering. Extend the mapping $(\cdot){{}^{\perp}}:X \rightarrow X{{}^{\perp}}$ to all of $P_X$ so that $0{{}^{\perp}} = 1$, $r{{}^{\perp}} = s$ and $a{{}^{\perp}}{{}^{\perp}} = a$ for all $a$. Then ${{}^{\perp}}$ is an order-reversing mapping of $P_X$ onto itself and there is a unique commutative binary operation $+$ on $P_X$ with the following properties: $$ \begin{array}{rcl@{\quad}l} a + 0 &=& a, &\mbox{for every $a$;}\\ a + b &=& s, &\mbox{for every $a, b \ge r$ such that $a \not\ge b{{}^{\perp}}$;}\\ a + b &=& 1, &\mbox{for every $a, b \ge r$ such that $a \ge b{{}^{\perp}}$.} \end{array} $$ Now let $\mathbf{P}_X = (P_X, 0, +, \mathop{\rightarrow})$ where $\mathop{\rightarrow}$ is defined using de Morgan's law: $a \mathop{\rightarrow} b = (a + b{{}^{\perp}}){{}^{\perp}}$. Then one finds that $a \mathop{\rightarrow} b = 0$ iff $a \ge b$ in $P_X$ with respect to the order defined above and the laws for an involutive pocrim other than associativity of $+$ are then easily verified for $\mathbf{P}_X$. For the associativity of $+$, first note that if $0 \in \{a, b, c\}$, $(a + b) + c = a + (b + c)$ is trivial. If $a, b, c \ge r$ then $a + b, b + c \ge r{{}^{\perp}}$ and we have: \[ 1 \ge (a + b) + c \ge r{{}^{\perp}} + r = 1 = r + r{{}^{\perp}} \le a + (b + c) \le 1. \] \noindent so that $a + (b + c) = 1 = (a + b) + c$. Thus $\mathbf{P}_X$ is indeed an involutive pocrim. It is known that the class of involutive pocrims is not a variety i.e., it cannot be characterised by equational laws. 
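The construction of $\mathbf{P}_X$ can be checked mechanically on a small example. The sketch below (ours, illustrative only) takes $X$ to be a two-element antichain $\{p, q\}$, builds the eight-element poset $P_X$, and verifies commutativity and associativity of $+$, the involution, and the characterisation $a \mathop{\rightarrow} b = 0$ iff $a \ge b$; the string labels and the rank trick in `ge` are ad hoc for this particular $X$.

```python
from itertools import product

# X = {p, q}, an antichain; 'P' and 'Q' label the perps of p and q.
ELEMS = ['0', 'r', 'p', 'q', 'P', 'Q', 's', '1']
RANK = {'0': 0, 'r': 1, 'p': 2, 'q': 2, 'P': 3, 'Q': 3, 's': 4, '1': 5}
PERP = {'0': '1', '1': '0', 'r': 's', 's': 'r',
        'p': 'P', 'P': 'p', 'q': 'Q', 'Q': 'q'}

def ge(a, b):
    # layered order 0 < r < X < X-perp < s < 1; the only incomparable
    # pairs in this example are {p, q} and {P, Q}
    if a == b:
        return True
    if {a, b} in ({'p', 'q'}, {'P', 'Q'}):
        return False
    return RANK[a] > RANK[b]

def plus(a, b):
    # the three defining clauses for + from the construction
    if a == '0':
        return b
    if b == '0':
        return a
    return '1' if ge(a, PERP[b]) else 's'

def imp(a, b):
    # a -> b = (a + b-perp)-perp, de Morgan style
    return PERP[plus(a, PERP[b])]

for a, b, c in product(ELEMS, repeat=3):
    assert plus(a, b) == plus(b, a)                    # commutativity
    assert plus(plus(a, b), c) == plus(a, plus(b, c))  # associativity
for a, b in product(ELEMS, repeat=2):
    assert PERP[PERP[a]] == a                          # involution
    assert (imp(a, b) == '0') == ge(a, b)              # a -> b = 0 iff a >= b
```

As in the text, the interesting case of associativity is when all three arguments are non-zero, where both sides collapse to $1$.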
Since involutive pocrims are characterised over bounded pocrims and over pocrims by equational laws, it follows that the class of pocrims and the class of bounded pocrims are also not varieties. See~\cite{raftery07} and the works cited therein for these results and their history and for further information about pocrims in general and involutive pocrims in particular. Bosbach~\cite{bosbach69a} gave a direct proof of an equational axiomatization of the class of hoops. Using Theorem~\ref{thm:all-sound-complete}, we can give an alternative proof that shows how a proof of a sequent $\vdash A$ may be translated into an equational proof that $\alpha = 0$, where $\alpha$ is a translation into the language of pocrims of the formula $A$. \begin{Theorem}\label{thm:luk-provability-equational} A structure $\mathbf{H} = (H; 0, +, \mathop{\rightarrow})$ is a hoop iff $(H; 0, +)$ is a commutative monoid and the following equations hold in $H$: \begin{enumerate} \item\label{luk-imp-self} $x \mathop{\rightarrow} x = 0$ \item\label{luk-imp-zero} $x \mathop{\rightarrow} 0 = 0$ \item\label{luk-zero-imp} $0 \mathop{\rightarrow} x = x$ \item\label{luk-conj-imp} $x + y \mathop{\rightarrow} z = x \mathop{\rightarrow} y \mathop{\rightarrow} z$ \item\label{luk-cwc} $x + (x \mathop{\rightarrow} y) = y + (y \mathop{\rightarrow} x)$ \end{enumerate} \end{Theorem} \par \noindent{\bf Proof: } It follows easily from the definitions (or from Theorem~\ref{thm:all-sound-complete}) that the equations hold in any hoop. For the converse, Theorem~\ref{thm:all-sound-complete} implies that it is sufficient to show that if there is a proof of $\vdash A$ in $\Logic{{\L}L}{u}$ then $[A]$ (the element of the term model of $\Logic{{\L}L}{u}$ represented by $A$) can be reduced to 0 using the commutative monoid laws and equations~\ref{luk-imp-self} to \ref{luk-cwc}.
More generally, if $B_1, \ldots, B_m$ and $A$ are formulas, with $\gamma = [B_1] + \ldots + [B_m]$ and $a = [A]$, we will show how to translate a proof of $B_1, \ldots, B_m \vdash A$ into a sequence of equations $\gamma \mathop{\rightarrow} a = a_1 = \ldots = a_n = 0$ where each equation $a_i = a_{i+1}$ is obtained by applying one of the equations~\ref{luk-imp-self} to \ref{luk-cwc} to a subterm of $a_i$ or $a_{i+1}$ and then simplifying or rearranging as necessary using the commutative monoid laws. We have base cases for the axioms $[\Func{ASM}]$ and $[\Func{CWC}]$ of Figure~\ref{fig:axioms} and inductive steps for the rules of Figure~\ref{fig:rules}. \noindent $[\Func{ASM}]$: we want $\gamma + a \mathop{\rightarrow} a = 0$ for arbitrary $\gamma$ and $a$: \begin{align*} \gamma + a \mathop{\rightarrow} a &= & \tag*{(eq. \ref{luk-conj-imp})} \\ \gamma \mathop{\rightarrow} a \mathop{\rightarrow} a &= & \tag*{(eq. \ref{luk-imp-self})} \\ \gamma \mathop{\rightarrow} 0 &= 0 & \tag*{(eq. \ref{luk-imp-zero})} \end{align*} \noindent $[\Func{CWC}]$: we want $\gamma + a + (a \mathop{\rightarrow} b) \mathop{\rightarrow} b + (b \mathop{\rightarrow} a) = 0$ for arbitrary $\gamma$, $a$ and $b$: \begin{align*} \gamma + a + (a \mathop{\rightarrow} b) \mathop{\rightarrow} b + (b \mathop{\rightarrow} a) &= &\tag*{(eq. \ref{luk-cwc})} \\ \gamma + b + (b \mathop{\rightarrow} a) \mathop{\rightarrow} b + (b \mathop{\rightarrow} a) &= &\tag*{(eq. \ref{luk-conj-imp})} \\ \gamma \mathop{\rightarrow} b + (b \mathop{\rightarrow} a) \mathop{\rightarrow} b + (b \mathop{\rightarrow} a) &= &\tag*{(eq. \ref{luk-imp-self})} \\ \gamma \mathop{\rightarrow} 0 &= 0 &\tag*{(eq. \ref{luk-imp-zero})} \end{align*} \noindent $[{\multimap}\Func{I}]$: we are given $\gamma + a \mathop{\rightarrow} b = 0$ and we want $\gamma \mathop{\rightarrow} a \mathop{\rightarrow} b = 0$: \begin{align*} \gamma \mathop{\rightarrow} a \mathop{\rightarrow} b &= &\tag*{(eq. 
\ref{luk-conj-imp})}\\ \gamma + a \mathop{\rightarrow} b &= 0 &\tag*{(hyp.)} \end{align*} \noindent $[{\multimap}\Func{E}]$: we are given $\gamma \mathop{\rightarrow} a = 0$ and $\delta \mathop{\rightarrow} a \mathop{\rightarrow} b = 0$ and we want $\gamma + \delta \mathop{\rightarrow} b = 0$: \begin{align*} \gamma + \delta \mathop{\rightarrow} b &= &\tag*{(hyp.)}\\ \gamma + (\gamma \mathop{\rightarrow} a) + \delta \mathop{\rightarrow} b &= &\tag*{(eq. \ref{luk-cwc})}\\ a + (a \mathop{\rightarrow} \gamma) + \delta \mathop{\rightarrow} b &= &\relax{}\\ (a \mathop{\rightarrow} \gamma) + \delta + a \mathop{\rightarrow} b &= &\tag*{(hyp.)}\\ (a \mathop{\rightarrow} \gamma) + \delta + a + (\delta \mathop{\rightarrow} a \mathop{\rightarrow} b) \mathop{\rightarrow} b &= &\tag*{(eq. \ref{luk-conj-imp})}\\ (a \mathop{\rightarrow} \gamma) + \delta + a + (\delta + a \mathop{\rightarrow} b) \mathop{\rightarrow} b &= &\tag*{(eq. \ref{luk-cwc})}\\ (a \mathop{\rightarrow} \gamma) + b + (b \mathop{\rightarrow} \delta + a) \mathop{\rightarrow} b &= &\relax{}\\ (a \mathop{\rightarrow} \gamma) + (b \mathop{\rightarrow} \delta + a) + b \mathop{\rightarrow} b &= &\tag*{(eq. \ref{luk-conj-imp})}\\ (a \mathop{\rightarrow} \gamma) + (b \mathop{\rightarrow} \delta + a) \mathop{\rightarrow} b \mathop{\rightarrow} b &= &\tag*{(eq. \ref{luk-imp-self})}\\ (a \mathop{\rightarrow} \gamma) + (b \mathop{\rightarrow} \delta + a) \mathop{\rightarrow} 0 &= 0 &\tag*{(eq. \ref{luk-imp-zero})} \end{align*} \noindent $[{\iAnd}\Func{I}]$: we are given $\gamma \mathop{\rightarrow} a = 0$ and $\delta \mathop{\rightarrow} b = 0$ and we want $\gamma + \delta \mathop{\rightarrow} a + b = 0$. \begin{align*} \gamma + \delta \mathop{\rightarrow} a + b &= &\tag*{(hyp.)}\\ \gamma + (\gamma \mathop{\rightarrow} a) + \delta + (\delta \mathop{\rightarrow} b) \mathop{\rightarrow} a + b &= &\tag*{(eq. 
\ref{luk-cwc})}\\ a + (a \mathop{\rightarrow} \gamma) + b + (b \mathop{\rightarrow} \delta) \mathop{\rightarrow} a + b &= &\relax{}\\ (a \mathop{\rightarrow} \gamma) + (b \mathop{\rightarrow} \delta) + a + b \mathop{\rightarrow} a + b &= &\tag*{(eq. \ref{luk-conj-imp})}\\ (a \mathop{\rightarrow} \gamma) + (b \mathop{\rightarrow} \delta) \mathop{\rightarrow} a + b \mathop{\rightarrow} a + b &= &\tag*{(eq. \ref{luk-imp-self})}\\ (a \mathop{\rightarrow} \gamma) + (b \mathop{\rightarrow} \delta) \mathop{\rightarrow} 0 &= 0 &\tag*{(eq. \ref{luk-imp-zero})} \end{align*} \noindent $[{\iAnd}\Func{E}]$: we are given $\gamma \mathop{\rightarrow} a + b = 0$ and $\delta + a + b \mathop{\rightarrow} c = 0$ and we want $\gamma + \delta \mathop{\rightarrow} c = 0$: \begin{align*} \gamma + \delta \mathop{\rightarrow} c &= &\tag*{(hyp.)}\\ \gamma + (\gamma \mathop{\rightarrow} a + b) + \delta \mathop{\rightarrow} c &= &\tag*{(eq. \ref{luk-cwc})}\\ a + b + (a + b \mathop{\rightarrow} \gamma) + \delta \mathop{\rightarrow} c &= &\relax{} \\ (a + b \mathop{\rightarrow} \gamma) + \delta + a + b \mathop{\rightarrow} c &= &\tag*{(eq. \ref{luk-conj-imp})}\\ (a + b \mathop{\rightarrow} \gamma) \mathop{\rightarrow} \delta + a + b \mathop{\rightarrow} c &= &\tag*{(hyp.)}\\ (a + b \mathop{\rightarrow} \gamma) \mathop{\rightarrow} 0 &= 0 &\tag*{(eq. \ref{luk-imp-zero})} \end{align*} This completes the induction. \rule{0.5em}{0.5em} The axiomatization in the statement of Theorem~\ref{thm:luk-provability-equational} is natural and convenient but by no means minimal. See~\cite{bosbach69a} for more concise axiomatizations. \Section{Algebra of Coops}\label{sec:algebra-of-coops} Blok and Ferreirim \cite{blok-ferreirim00} have studied hoops from the perspective of universal algebra. Here we undertake an analogous study of coops. Our goal is to obtain decidability results for useful theories of coops.
This will require various facts about hoops, most of which may be found in \cite{blok-ferreirim00}, but in the dual (multiplicative) notation. We begin by looking at some special classes of coops, for which certain facts that hold for involutive hoops can be obtained rather efficiently by dint of the halving operator. \Subsection{Some Special Classes of Coops}\label{sec:special-classes} We say a hoop is {\em cancellative} if its underlying monoid is a cancellation monoid ($x + y = x + z$ implies $y = z$). Let us say a hoop is {\em semi-cancellative} if $x + y = x + z$ and $y \not= z$ implies $x + y$ is an annihilator (i.e., the hoop is bounded with $x + y = 1$). Thus a hoop that is semi-cancellative and not bounded is cancellative. In a linearly ordered hoop, the semi-cancellative property is easily seen to be equivalent to the condition that $x + y = x$ implies that either $y = 0$ or $x$ is an annihilator. Semi-cancellative coops enjoy the property that halving is almost a homomorphism, or, indeed, a real homomorphism if the coop is cancellative: \begin{Lemma}\label{lma:semi-cancellative-half-plus} Let $\mathbf{C}$ be a semi-cancellative coop, then, for any $x, y \in C$, either $ (x + y)/2 = x/2 + y/2 $ or $x + y = 1$. \end{Lemma} \par \noindent{\bf Proof: } Since $x/2 + x/2 = x$ and $y/2 + y/2 = y$, we have $x + y = x/2 + y/2 + x/2 + y/2$. On the other hand, since $x + y \ge x/2 + y/2$, $[{\sf cwc}]$ implies that $x + y = x/2 + y/2 + (x/2 + y/2 \mathop{\rightarrow} x + y)$. By the semi-cancellative property, either $x + y = 1$ or $x/2 + y/2 = x/2 + y/2 \mathop{\rightarrow} x + y$. In the latter case, Theorem~\ref{thm:coop-halving-bounds} (i) tells us that $x/2 + y/2 = (x + y)/2$. \rule{0.5em}{0.5em} We now prove a very useful theorem that will let us transfer some important results about bounded coops to unbounded coops. This corresponds to Chang's construction of the enveloping group of an MV-algebra but the proof involves much less tricky algebra. 
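The semi-cancellative property and Lemma~\ref{lma:semi-cancellative-half-plus} can be checked concretely on dyadic rationals in $[0, 1]$ under truncated addition (the bounded coop of dyadic rationals that reappears below). The sketch is ours and purely illustrative; `add` is an ad hoc name for the capped sum.

```python
from fractions import Fraction
from itertools import product

ONE = Fraction(1)

def add(x, y):
    # truncated addition: sums are capped at the annihilator 1
    return min(ONE, x + y)

grid = [Fraction(i, 8) for i in range(9)]  # {0, 1/8, ..., 1}

# semi-cancellative: x + y = x + z with y != z forces x + y to be
# the annihilator (here: both sums were capped at 1)
for x, y, z in product(grid, repeat=3):
    if add(x, y) == add(x, z) and y != z:
        assert add(x, y) == ONE

# Lemma: halving distributes over + unless the sum is the annihilator
for x, y in product(grid, repeat=2):
    assert add(x, y) / 2 == add(x / 2, y / 2) or add(x, y) == ONE
```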
Before stating the theorem, we introduce some notation and terminology that will be used throughout the sequel. Let $\mathbf{G} = (G; 0, +, \ge)$ be a 2-divisible linearly ordered commutative group. Writing $G_{{\ge}0}$ for the set of non-negative elements of $G$, we then have a coop $\mathbf{G}_{{\ge}0} = (G_{{\ge}0}; 0, +, \mathop{\rightarrow}, /2)$ where $x \mathop{\rightarrow} y \mathrel{{:}{=}} \Func{sup}\{0, y - x\}$ and $x/2$ is that element of $G$ such that $x/2 + x/2 = x$ (this is unique because $\mathbf{G}$ is linearly ordered and hence torsion-free). If $\mathbf{L}$ is any coop and $a$ is any non-zero element of $\mathbf{L}$, we have a bounded coop $\mathbf{L}_a = (\{x\in L \mathrel{|} x \le a\}; 0, \cappedplus{a}, \mathop{\rightarrow}, /2)$ where $x \cappedplus{a} y \mathrel{{:}{=}} \Func{inf}\{a, x + y\}$. We say $\mathbf{L}_a$ is $\mathbf{L}$ {\em capped at $a$}. We will just write $x + y$ for $x \cappedplus{a} y$ in contexts where it is clear that we are working in $\mathbf{L}_a$. If $\mathbf{L} = \mathbf{G}_{{\ge}0}$ for some 2-divisible linearly ordered commutative group $\mathbf{G}$, we write $\mathbf{G}_{[0, a]}$ for $\mathbf{L}_a$. Note that $\mathbf{G}_{[0, a]}$ is an involutive coop: with $\lnot x = a - x$, we clearly have $\lnot\lnot x = x$. As an example, take $\mathbf{G}$ to be the additive group $\mathbb{D}$ of dyadic rationals $\mathbb{D} = (\{\frac{i}{2^n} \mathrel{|} i \in \mathbb{Z}, n \in \mathbb{N}\}; 0, +, \ge)$. We then have an unbounded coop $\DD_{{\ge}0}$ and from $\DD_{{\ge}0}$, we obtain the bounded coops $\mathbb{D}_{[0, a]} = ([0, a] \cap \mathbb{D}; 0, \cappedplus{a}, \mathop{\rightarrow}, /2)$ for $a$ any positive dyadic rational. Note that the isomorphism type of $\mathbb{D}_{[0, a]}$ depends on $a$: $\mathbb{D}_{[0, 1]}$ contains no $x$ such that $3x$ is the annihilator but $\mathbb{D}_{[0, 3]}$ does. \begin{Theorem}\label{thm:c-hat} Let $\mathbf{C}$ be a semi-cancellative bounded coop.
Then there exist a cancellative unbounded coop $\hat{\mathbf{C}}$, an element $\hat{1} \in \hat{C}$ and an isomorphism $\alpha : \mathbf{C} \rightarrow \hat{\mathbf{C}}_{\hat{1}}$. Every element of $\hat{\mathbf{C}}$ has the form $2^m\alpha(a)$ for some $a \in C$ and $m \in \mathbb{N}$. If $\mathbf{C}$ is linearly ordered then so is $\hat{\mathbf{C}}$. \end{Theorem} \par \noindent{\bf Proof: } Let $\mathbf{D} = \mathbf{C}^{\mathbb{N}}$ be the product of countably many copies of $\mathbf{C}$. Thus elements of $\mathbf{D}$ are sequences $x = \Tuple{x_0, x_1, \ldots}$ of elements of $C$ and the coop operations are defined pointwise: $(x + y)_i = x_i + y_i$, $(x \mathop{\rightarrow} y)_i = x_i \mathop{\rightarrow} y_i$ and $(x/2)_i = x_i/2$. For this proof, let us say $x \in D$ is {\em regular} if $x_{i+1} = x_i/2$ for all but finitely many $i$. Using Corollary~\ref{cor:halving-miscellany} and Lemma \ref{lma:semi-cancellative-half-plus} as appropriate, it is easy to see that if $x$ and $y$ are regular then so are $x \mathop{\rightarrow} y$, $x + y$ and $x/2$. Thus the regular elements comprise a subcoop $\mathbf{R}$ of $\mathbf{D}$. Define a relation $\sim$ on $R$ by $x \sim y$ iff $x_i = y_i$ for all but finitely many $i$. It is a routine exercise to verify that $\sim$ is a congruence. Let $\hat{\mathbf{C}}$ be $\mathbf{R}/{\sim}$ and, for $a \in C$, let $\alpha(a)$ be given by $(\alpha(a))_i = \frac{1}{2^i}a$ and let $\hat{1} = \alpha(1)$. By Corollary~\ref{cor:halving-miscellany}, $\alpha(a \mathop{\rightarrow} b) = \alpha(a) \mathop{\rightarrow} \alpha(b)$ for any $a, b \in C$, and, by Lemma \ref{lma:semi-cancellative-half-plus}, if $a + b < 1$, $\alpha(a + b) = \alpha(a) + \alpha(b)$. It is easy to verify that $\alpha$ is an injection and that $\alpha(C) = \{a \in \hat{C} \mathrel{|} \hat{1} \ge a\}$, from which it follows that $\alpha$ is an isomorphism between $\mathbf{C}$ and $\hat{\mathbf{C}}_{\hat{1}}$. 
If $x \in R$, there is $a \in C$ and $m \in \mathbb{N}$ such that for all $i \in \mathbb{N}$, $x_{m+i} = \frac{1}{2^i}a$ and then $[x] = 2^m\alpha(a)$. Hence, for any $s \in \hat{C}$, $\frac{1}{2^i}s \in \alpha(C)$ for all but finitely many $i$ and from this it follows that $\hat{\mathbf{C}}$ is semi-cancellative and hence cancellative and that, if $\mathbf{C}$ is linearly ordered, then so is $\hat{\mathbf{C}}$. \rule{0.5em}{0.5em} \begin{Theorem}\label{thm:c-bar} Let $\mathbf{C}$ be a linearly ordered cancellative unbounded coop. Then there exist a 2-divisible linearly ordered group $\overline{\mathbf{C}}$ and an isomorphism $\beta : \mathbf{C} \rightarrow \overline{\mathbf{C}}_{{\ge}0}$. \end{Theorem} \par \noindent{\bf Proof: } Define $\overline{\mathbf{C}}$ to be the group of differences of $\mathbf{C}$ and let $\beta : \mathbf{C} \rightarrow \overline{\mathbf{C}}$ be the natural homomorphism. Every element of $\overline{\mathbf{C}}$ has the form $\beta(a) - \beta(b)$ for $a, b \in C$, and $\beta(a) - \beta(b) = \beta(c) - \beta(d)$ iff there are $x, y \in C$ such that $a + x = c + y$ and $b + x = d + y$. We have $(\beta(a/2) - \beta(b/2)) + (\beta(a/2) - \beta(b/2)) = \beta(a) - \beta(b)$, so $\overline{\mathbf{C}}$ is 2-divisible. As $\mathbf{C}$ is linearly ordered, given $a, b \in C$, either {\em(i)} $a \ge b$, in which case, $\beta(a) - \beta(b) = \beta(b \mathop{\rightarrow} a)$, since $a + 0 = (b \mathop{\rightarrow} a) + b$ and $b + 0 = 0 + b$, or {\em(ii)} $b \ge a$, in which case, $\beta(a) - \beta(b) = -\beta(a \mathop{\rightarrow} b)$, since $a + 0 = 0 + a$ and $b + 0 = (a \mathop{\rightarrow} b) + a$. Thus for any $s \in \overline{C}$, either $s \in \beta(C)$ or $s \in -\beta(C)$. Moreover if $s \in \beta(C) \cap -\beta(C)$, we have $s = \beta(a) = -\beta(b)$ whence for some $x, y \in C$ we have $a + x = y$ and $x = b + y$, whence $a + b + y = y$ implying $a = b = 0$, thus $\beta(C) \cap -\beta(C) = \{0\}$.
Since $\beta(C) + \beta(C) = \beta(C)$, it follows that $\beta(C)$ is the non-negative cone of a linear order on $\overline{\mathbf{C}}$ and that $\beta$ is an isomorphism of $\mathbf{C}$ with $\overline{\mathbf{C}}_{{\ge}0}$. \rule{0.5em}{0.5em} \begin{Theorem}\label{thm:lin-canc-coops-decidable} The first order theories of the following classes of coops are decidable: {\em(i)} linearly ordered cancellative coops {\em(ii)} linearly ordered bounded semi-cancellative coops, {\em(iii)} linearly ordered semi-cancellative coops. \end{Theorem} \par \noindent{\bf Proof: } Using Theorems~\ref{thm:c-hat} and~\ref{thm:c-bar}, one can find primitive recursive reductions of the theory of linearly ordered bounded semi-cancellative coops to that of linearly ordered cancellative coops and of the latter theory to the theory of linearly ordered 2-divisible groups. The theory of linearly ordered groups is decidable by a well-known result of Gurevich, and hence so is the theory of 2-divisible linearly ordered groups (since the latter is a finitely axiomatisable extension of the former). Hence, {\em(i)} and {\em(ii)} hold. As for {\em(iii)}, a general linearly ordered semi-cancellative coop is either cancellative or bounded, so the theory in {\em(iii)} is the intersection of the theories in {\em(i)} and {\em(ii)}. \rule{0.5em}{0.5em} \Subsection{Homomorphisms and Ideals} Let $\mathbf{H}$ be a hoop. An {\em ideal} $I$ of $\mathbf{H}$, is a downwards-closed submonoid: \begin{gather*} 0 \in I \subseteq H \\ I + I \subseteq I \\ \Downset{I} \subseteq I \end{gather*} \noindent where, for any $X, Y \subseteq H$, $X + Y = \{ x + y \mathrel{|} x \in X, y \in Y\}$ and $\Downset{X} = \{y \in H \mathrel{|} \ex{x \in X} x \ge y\}$. For example, if $X \subseteq H$, the {\em ideal generated by $X$}, $\Func{I}(X)$, is the set comprising all $y \in H$, such that for some $x_1, \ldots, x_n \in X$, $y \le x_1 + \ldots + x_n$. 
$\Func{I}(X)$ is easily seen to be an ideal and is clearly the smallest ideal containing $X$. As a special case, the ideal $\Func{I}(x)$ generated by $x \in H$ comprises all elements $y$ such that $y \le nx$ for some $n \in \mathbb{N}$. We say an ideal $I$ is {\em proper} if $\{0\} \not= I \not= H$. If $I$ is an ideal, then $I$ is actually the carrier set of a subhoop, since we have $I \mathop{\rightarrow} I \subseteq H \mathop{\rightarrow} I \subseteq I$ (since $I$ is downwards-closed and $x \mathop{\rightarrow} y \le y$ for any $x$ and $y$). If $\mathbf{K}$ is also a hoop and $f : H \rightarrow K$ is a homomorphism of hoops, we define the {\em kernel} of $f$, $\Func{ker}(f)$, as follows: \[ \Func{ker}(f) \mathrel{{:}{=}} \{x : H \mathrel{|} f(x) = 0\}. \] \noindent $\Func{ker}(f)$ is clearly a submonoid of $\mathbf{H}$. Moreover, if $y \in \Func{ker}(f)$ and $x \le y$, then, by definition, $f(y) = 0$ and $y \mathop{\rightarrow} x = 0$, and then $f(x) = f(y) \mathop{\rightarrow} f(x) = f(y \mathop{\rightarrow} x) = f(0) = 0$, so $x \in \Func{ker}(f)$. Thus $\Func{ker}(f)$ is an ideal of $\mathbf{H}$. Conversely, if $I$ is an ideal of $\mathbf{H}$, define a relation $\theta \subseteq H \times H$ by $x \mathrel{\theta} y \Leftrightarrow x \mathop{\rightarrow} y \in I \land y \mathop{\rightarrow} x \in I$. It is then routine to verify that $\theta$ is a hoop congruence on $\mathbf{H}$ and that, with $p_{\theta} : \mathbf{H} \rightarrow \mathbf{H}/\theta$, the natural projection onto the quotient hoop, we have $\Func{ker}(p_{\theta}) = I$. It follows that the lattice of congruences on $\mathbf{H}$ is isomorphic to its lattice of ideals. In particular, a hoop is simple (i.e., it admits no non-trivial congruences) iff it has no proper ideals (so that $\Func{I}(x) = H$ for every non-zero $x \in H$).
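The ideal $\Func{I}(X)$ and the no-proper-ideals criterion for simplicity can be illustrated computationally. The sketch below (ours, illustrative only) works in the six-element {\L}ukasiewicz chain $\{0, \frac{1}{5}, \ldots, 1\}$ with truncated addition, which is a hoop (though not a coop, since finite coops are trivial), computes the ideal generated by each element as the downward closure of the submonoid it generates, and confirms that every non-zero element generates the whole hoop.

```python
from fractions import Fraction

ONE = Fraction(1)

def add(x, y):
    # truncated addition on [0, 1]
    return min(ONE, x + y)

def ideal_generated(X, H):
    # I(X): all y in H with y <= x1 + ... + xn for some xi in X,
    # i.e. the downward closure of the submonoid generated by X
    sums = {Fraction(0)}
    changed = True
    while changed:
        new = {add(s, x) for s in sums for x in X} - sums
        changed = bool(new)
        sums |= new
    return {y for y in H for s in sums if y <= s}

H = [Fraction(i, 5) for i in range(6)]  # the six-element Lukasiewicz chain

for x in H:
    I = ideal_generated({x}, H)
    # every non-zero element generates the whole hoop, so the chain
    # has no proper ideals and is therefore simple
    assert I == ({Fraction(0)} if x == 0 else set(H))
```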
\begin{Theorem}\label{thm:coop-homomorphisms} If $\mathbf{C}$ and $\mathbf{D}$ are coops then a mapping $f : C \rightarrow D$ is a homomorphism of coops iff it is a homomorphism of the underlying hoops of $\mathbf{C}$ and $\mathbf{D}$. \end{Theorem} \par \noindent{\bf Proof: } Necessity is trivial. For sufficiency, assume $f: C \rightarrow D$ is a homomorphism of hoops. By definition, $f(x \mathop{\rightarrow} y) = f(x) \mathop{\rightarrow} f(y)$ for any $x, y \in C$. So for any $x \in C$, we have: \[ f(x/2) \mathop{\rightarrow} f(x) = f(x/2 \mathop{\rightarrow} x) = f(x/2) \] \noindent whence by Theorem~\ref{thm:coop-halving-bounds} we must have $f(x/2) = f(x)/2$. It follows that $f$ is a homomorphism of coops. \rule{0.5em}{0.5em} Thus we need no new notion for the kernels of coop homomorphisms: the lattice of congruences on a coop is isomorphic to its lattice of ideals in the sense defined above. We have the following immediate corollary: \begin{Corollary}\label{cor:coop-simple} A coop is simple iff its $(0, +, \mathop{\rightarrow})$-reduct is a simple hoop. \rule{0.5em}{0.5em} \end{Corollary} In categorical language, the forgetful functor from the category of coops to the category of hoops provides an isomorphism between the category of coops and the full subcategory of the category of hoops comprising the objects satisfying the axiom $\all{x}\ex{y}\,y = y \mathop{\rightarrow} x$. In fact, there is a functor that maps a hoop to an enveloping coop. This is adjoint to the forgetful functor from coops to hoops. The forgetful functor is faithful (as forgetful functors always are) and the above says that it is full as well. \Subsection{Simple Coops} The hoop $\mathbf{H}$ is said to be {\em archimedean} iff, for any non-zero $x \in H$ and any $y \in H$, there is $m \in \mathbb{N}$ such that $y \le mx$. We then have: \begin{Theorem}\label{thm:simple-hoops-archimedean} A hoop is simple iff it is archimedean.
\end{Theorem} \par \noindent{\bf Proof: } Immediate from the definition of $\Func{I}(x)$ and the fact that $\mathbf{H}$ is simple iff $\Func{I}(x) = H$ for every non-zero $x \in H$. \rule{0.5em}{0.5em} \begin{Theorem}\label{thm:simple-coops-archimedean} A coop is simple iff it is archimedean. \end{Theorem} \par \noindent{\bf Proof: } Immediate from Corollary~\ref{cor:coop-simple} and Theorem~\ref{thm:simple-hoops-archimedean}. \rule{0.5em}{0.5em} We will need an interesting property of hoops due to Bosbach~\cite{bosbach69a}. From a logical perspective, this says that $\Logic{{\L}L}{i}$ enjoys the principle that to prove an implication one may assume the converse implication. \begin{Lemma}\label{lma:bosbach} Let $\mathbf{H}$ be a hoop, $x, y \in H$. Then \[ (x \mathop{\rightarrow} y) \mathop{\rightarrow} (y \mathop{\rightarrow} x) = y \mathop{\rightarrow} x \] \end{Lemma} \par \noindent{\bf Proof: } Clearly $y \mathop{\rightarrow} x \ge (x \mathop{\rightarrow} y) \mathop{\rightarrow} y \mathop{\rightarrow} x$, so it is enough to prove that $(x \mathop{\rightarrow} y) \mathop{\rightarrow} y \mathop{\rightarrow} x \ge y \mathop{\rightarrow} x$, or equivalently that $y + ((x \mathop{\rightarrow} y) \mathop{\rightarrow} y \mathop{\rightarrow} x) \ge x$, but we have: \begin{align*} y + ((x \mathop{\rightarrow} y) \mathop{\rightarrow} y \mathop{\rightarrow} x) &= \\ y + (y \mathop{\rightarrow} (x \mathop{\rightarrow} y) \mathop{\rightarrow} x) &= & \tag*{$[{\sf cwc}]$}\\ ((x \mathop{\rightarrow} y) \mathop{\rightarrow} x) + (((x \mathop{\rightarrow} y) \mathop{\rightarrow} x) \mathop{\rightarrow} y) &\ge \\ ((x \mathop{\rightarrow} y) \mathop{\rightarrow} x) + (x \mathop{\rightarrow} y) &\ge x \end{align*} \noindent where the penultimate inequality holds since $\mathop{\rightarrow}$ is antimonotonic in its first argument and $(x \mathop{\rightarrow} y) \mathop{\rightarrow} x \le x$.
\rule{0.5em}{0.5em} \begin{Lemma}\label{lma:m14-linear} Let $\mathbf{H}$ be a hoop such that for all $x, y \in H$, if $y = x \mathop{\rightarrow} y$, then $x = 0$ or $y = 0$. Then $\mathbf{H}$ is linearly ordered. \end{Lemma} \par \noindent{\bf Proof: } By Lemma~\ref{lma:bosbach}, $(a \mathop{\rightarrow} b) \mathop{\rightarrow} (b \mathop{\rightarrow} a) = b \mathop{\rightarrow} a$ and then by assumption, either $a \mathop{\rightarrow} b = 0$ or $b \mathop{\rightarrow} a = 0$, i.e., either $a \ge b$ or $b \ge a$. \rule{0.5em}{0.5em} \begin{Lemma}\label{lma:simple-m14} If $\mathbf{H}$ is a simple hoop and $x, y \in H$ are such that $y = x \mathop{\rightarrow} y$, then $x = 0$ or $y = 0$. \end{Lemma} \par \noindent{\bf Proof: } If $y = x \mathop{\rightarrow} y$, it is easy to see by induction that $y = nx \mathop{\rightarrow} y$, for every $n \in \mathbb{N}$. But by Theorem~\ref{thm:simple-hoops-archimedean}, $\mathbf{H}$ is archimedean, so either $x = 0$ or, for some $n$, $y = nx \mathop{\rightarrow} y = 0$. \rule{0.5em}{0.5em} \begin{Theorem}\label{thm:simple-hoops-linear} \label{thm:simple-coops-linear} Simple hoops and simple coops are linearly ordered. \end{Theorem} \par \noindent{\bf Proof: } For hoops, this is immediate from Lemmas~\ref{lma:simple-m14} and~\ref{lma:m14-linear}. The statement for coops follows using Corollary~\ref{cor:coop-simple}. \rule{0.5em}{0.5em} We will see later that simple coops are also Wajsberg hoops. \begin{Lemma}\label{lma:subcoops-of-real-coops} Let $\mathbf{C}$ be a coop such that $C \subseteq \mathbb{R}$ and let $\mathbf{G}$ be the subgroup of the additive group $\mathbb{R}$ generated by $C$. Then $\mathbf{G}$ is 2-divisible and:\\ {\em(i)} if $\mathbf{C}$ is a subcoop of $\mathbb{R}_{{\ge}0}$, then $G = C \cup -C$ and $\mathbf{C} = \mathbf{G}_{{\ge}0}$;\\ {\em(ii)} if $\mathbf{C}$ is a subcoop of $\mathbb{R}_{[0, 1]}$ and $1 \in C$, then $G = \bigcup_{n\in\mathbb{Z}} (n + C)$ and $\mathbf{C} = \mathbf{G}_{[0, 1]}$. 
\end{Lemma} \par \noindent{\bf Proof: } If $g \in G$, $g$ can be written as $i_1x_1 + \ldots + i_mx_m$ where $x_j \in C$ and $i_j \in \mathbb{Z}$. But then $g/2 = i_1y_1 + \ldots + i_my_m$, where $y_j = x_j/2 \in C$. So $G$ is indeed 2-divisible. \\ {\em(i):} It is enough to prove that $G = C \cup -C$, for then $C = G \cap \mathbb{R}_{{\ge}0}$ and so $\mathbf{C} = \mathbf{G}_{{\ge}0}$. Since clearly $C \cup -C \subseteq G$, we have only to show $C \cup -C$ is closed under negation and addition. Closure under negation is clear. To show closure under addition, we have to show that if $x, y \in C$, then $x + y$, $-x + -y$ and $x - y$ are in $C \cup -C$. This is clear for $x + y$ and $-x + -y$, since $C$ is closed under addition. As for $x - y$, if $x \ge y$, then, by definition, $y \mathop{\rightarrow} x = x - y \in C$, while, if $x < y$, $x \mathop{\rightarrow} y = y - x \in C$ and so $x - y \in -C$.\\ {\em(ii):} It is enough to prove that $G = \bigcup_{n\in\mathbb{Z}} (n + C)$, for then $C = G \cap [0, 1]$ and so $\mathbf{C} = \mathbf{G}_{[0, 1]}$. Clearly $\bigcup_{n\in\mathbb{Z}} (n + C) \subseteq G$, so we have only to show that $\bigcup_{n\in\mathbb{Z}} (n + C)$ is closed under negation and addition. So let $x, y \in C$ and $j, k \in \mathbb{Z}$ be given. We have: \[ -(j + x) = -(j+1) + 1 - x = -(j+1) + (x \mathop{\rightarrow} 1) \in -(j+1) + C \] \noindent giving closure under negation.
If $x + y \le 1$ (in $\mathbf{G}$, not $\mathbf{C}$), then we have: \[ (j + x) + (k + y) = (j+k) + (x + y) \in (j+k) + C, \] \noindent while if $1 < x + y < 2$, we can find $i, n \in \mathbb{N}$ with $i \le 2^n$ such that $x > \frac{i}{2^n}$ and $y > \frac{2^n-i}{2^n}$ and then we have: \begin{align*} (j + x) + (k + y) &= (j+k+1) + (x - \frac{i}{2^n}) + (y - \frac{2^n-i}{2^n}) \\ &= (j+k+1) + (\frac{i}{2^n} \mathop{\rightarrow} x) + (\frac{2^n-i}{2^n} \mathop{\rightarrow} y) \\ &\in (j+k+1) + C \end{align*} \noindent since $1 \in C$, so that $\frac{i}{2^n}, \frac{2^n-i}{2^n} \in C$, since $C$ is closed under halving and coop addition (which agrees with the group addition when the sum in the group is at most 1). Finally if $x + y = 2$, we have: \[ (j + x) + (k + y) = (j+k+2) + 0 \in (j+k+2) + C. \] In all cases, $(j + x) + (k + y) \in \bigcup_{n\in\mathbb{Z}} (n + C)$ and so $\bigcup_{n\in\mathbb{Z}} (n + C)$ is closed under addition, as claimed. \rule{0.5em}{0.5em} Dyadic rational numbers will play an important r\^{o}le in the sequel as they did in the above proof. We will now generalise the halving operator on a coop to multiplication by arbitrary non-negative dyadic rationals. So, let $\mathbf{C} = (C; 0, +, \mathop{\rightarrow}, /2)$ be any coop and define a function $\phi : \mathbb{N}_{{>}0} \times \mathbb{N} \times C \rightarrow C$ such that: \begin{align*} \phi(1, 0, x) &= x \\ \phi(1, n+1, x) &= \phi(1, n, x)/2 \\ \phi(i, n, x) &= i\phi(1, n, x) \end{align*} Using the fact that $x/2 + x/2 = x$, we find that the following holds for any $i \in \mathbb{N}_{{>}0}$, $n \in \mathbb{N}$ and $x \in C$: \begin{align*} \phi(2i, n+1, x) &= \phi(i, n, x) \end{align*} Thus, if $\frac{i}{2^n} = \frac{j}{2^m}$ (in $\mathbb{Q}$), $\phi(i, n, x) = \phi(j, m, x)$ for any $x$, and so $\phi$ induces a function $\DD_{{\ge}0} \times C \rightarrow C$ which we write multiplicatively: $(p, x) \mapsto px$.
(Here, as with $\mathbb{N}, \mathbb{Z}$, etc., we abuse notation by writing $\mathbb{D}$, $\DD_{{\ge}0}$ and $\mathbb{D}_{[0, a]}$ both for the structures and for their carrier sets.) So for example $\frac{3}{4}x = (x/2)/2 + (x/2)/2 + (x/2)/2$. Clearly we have $(p+q)x = px + qx$, so, for fixed $x$, $p \mapsto px$ defines a homomorphism of monoids from $\DD_{{\ge}0}$ to $\mathbf{C}$. Also, we have $p(x + y) = px + py$, so that for fixed $p$, $x \mapsto px$ is a homomorphism of monoids from $\mathbf{C}$ to itself. If $p, q \in \mathbb{D}$ with $0 \le p, q \le 1$, we have $p(qx) = (pq)x$, so we have an action on $\mathbf{C}$ {\it qua} monoid of the multiplicative monoid of dyadic rationals in the interval $[0, 1]$. However, if $p > 1$ or $q > 1$, $p(qx) \not= (pq)x$ in general; e.g. with $\mathbf{C} = \mathbb{D}_{[0, 1]}$ and $x = 1$, one has $2x = x$, so that $\frac{1}{2}(2x) = \frac{1}{2}x = \frac{1}{2}$, while $(\frac{1}{2}\cdot 2)x = 1x = 1$. \begin{Lemma}\label{lma:dyadic-fractions-in-a-coop} Let $x \not= 0$ be an element of a coop, $\mathbf{C}$, and $0 \le i < j \le 2^n$. Then {\em(i)} $\frac{i}{2^n}x < \frac{j}{2^n}x$ and {\em(ii)} $\frac{i}{2^n}x \mathop{\rightarrow} \frac{j}{2^n}x = \frac{j-i}{2^n}x$. \end{Lemma} \par \noindent{\bf Proof: } We prove {\em(ii)} first. Note that since $\frac{i}{2^n}x + \frac{j-i}{2^n}x = \frac{j}{2^n}x$, we have $\frac{j-i}{2^n}x \ge \frac{i}{2^n}x \mathop{\rightarrow} \frac{j}{2^n}x$ by the residuation property. Thus as $a \mathop{\rightarrow} b \ge a + c \mathop{\rightarrow} b + c$, it is enough to prove {\em(ii)} in the special case when $j = 2^n$, for then for $j < 2^n$ we have: \[ \frac{i}{2^n}x \mathop{\rightarrow} \frac{j}{2^n}x \ge \frac{i + 2^n - j}{2^n}x \mathop{\rightarrow} \frac{2^n}{2^n}x = \frac{2^n - (i + 2^n - j)}{2^n}x = \frac{j-i}{2^n}x. \] So taking $j = 2^n$, let us prove {\em(ii)} by induction on $n$. The statement is trivial when $n=0$.
So given $n \ge 0$ assume that $\frac{i}{2^n}x \mathop{\rightarrow} x = \frac{2^n-i}{2^n}x$ holds for any $x$ and $i$ with $0 \le i < 2^n$. Let $x$ and $i$ with $0 \le i < 2^{n+1}$ be given. If $i = 2^n$, then $\frac{i}{2^{n+1}}x = \frac{1}{2}x$ and we have $\frac{1}{2}x \mathop{\rightarrow} x = \frac{1}{2}x$ by the coop laws. If $i < 2^n$, we have (using the inductive hypothesis on the line marked ($*$)): \begin{align*} \frac{2^{n+1}-i}{2^{n+1}} x &= \frac{2^n-i}{2^{n+1}}x + \frac{1}{2}x \\ &= \frac{2^n-i}{2^{n+1}}x + \left(\frac{1}{2}x \mathop{\rightarrow} x\right) &\tag*{[{\sf h}]}\\ &= \frac{2^n-i}{2^{n+1}}x + \left(\frac{2^n-i}{2^{n+1}}x + \frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) \\ &= \frac{2^n-i}{2^{n+1}}x + \left(\frac{2^n-i}{2^{n+1}}x \mathop{\rightarrow} \frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) \\ &= \left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) + \left[\left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) \mathop{\rightarrow} \frac{2^n-i}{2^{n+1}}x\right] &\tag*{$[{\sf cwc}]$}\\ &= \left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) + \left[\left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) \mathop{\rightarrow} \frac{2^n-i}{2^{n}}\frac{1}{2}x\right] \\ &= \left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) + \left[\left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) \mathop{\rightarrow} \left(\frac{i}{2^n}\frac{1}{2}x \mathop{\rightarrow} \frac{1}{2}x\right)\right] &\tag*{($*$)} \\ &= \left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) + \left[\left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} x\right) \mathop{\rightarrow} \left(\frac{i}{2^{n+1}}x \mathop{\rightarrow} \frac{1}{2}x\right)\right] \\ &= \frac{i}{2^{n+1}}x \mathop{\rightarrow} x. 
\end{align*} If $2^{n+1} > i > 2^n$, then we have: \begin{align*} \frac{i}{2^{n+1}}x \mathop{\rightarrow} x &= \frac{i-2^n}{2^{n+1}}x \mathop{\rightarrow} \frac{1}{2}x \mathop{\rightarrow} x \\ &= \frac{i-2^n}{2^{n+1}}x \mathop{\rightarrow} \frac{1}{2}x &\tag*{[{\sf h}]}\\ &= \frac{i-2^n}{2^n}\frac{1}{2}x \mathop{\rightarrow} \frac{1}{2}x \\ &= \frac{2^n-(i-2^n)}{2^n}\frac{1}{2}x &\tag*{($*$)}\\ &= \frac{2^{n+1}-i}{2^{n+1}}x. \end{align*} \noindent This completes the proof of part {\em(ii)}. Part {\em(i)} follows since, by part {\em(ii)}, we have $\frac{i}{2^n}x \mathop{\rightarrow} \frac{i+1}{2^n}x = \frac{1}{2^n}x \not= 0$, whence $\frac{i}{2^n}x < \frac{i+1}{2^n}x \le \frac{j}{2^n}x$. \rule{0.5em}{0.5em} By the following lemma, simple coops are semi-cancellative. \begin{Lemma}\label{lma:simple-semi-cancellative} Let $\mathbf{C}$ be a coop and let $x, y \in C$ be such that $x + y = x$, then either $x = mx$ for all $m \in \mathbb{N}$ or $y \le \frac{1}{2^n}x$ for all $n \in \mathbb{N}$. In particular, if $\mathbf{C}$ is simple, and hence archimedean, either $x$ is an annihilator or $y = 0$. \end{Lemma} \par \noindent{\bf Proof: } By an easy induction, we have $x + my = x$ for all $m \in \mathbb{N}$. If $y > \frac{1}{2^n}x$ for some $n \in \mathbb{N}$, then we have $$ x = x + 2^ny \ge x + 2^n(\frac{1}{2^n}x) = 2x $$ \noindent and then by another easy induction we have $x = mx$ for all $m \in \mathbb{N}$. \rule{0.5em}{0.5em} If $\mathbf{H}$ is a hoop and $0 \not= x \in H$, define the {\em depth} of $x$ to be the smallest $d \in \mathbb{N}$ such that $(d+1)x = dx$, or to be $\infty$ if no such $d$ exists. Lemma~\ref{lma:dyadic-fractions-in-a-coop} implies that if $x$ is a non-zero element of a coop, then the depth of $\frac{1}{2^n}x$ is at least $2^n$. \begin{Lemma}\label{lma:simple-hoop-depth} Let $\mathbf{H}$ be a simple hoop. 
Then either {\em(i)} every non-zero element has infinite depth or {\em(ii)} $\mathbf{H}$ is bounded and every non-zero element has finite depth. \end{Lemma} \par \noindent{\bf Proof: } Assume {\em(i)} does not hold, so there is a non-zero $x \in H$ with finite depth $d$, so $dx = (d+1)x$. By induction, for any $n > d$, we have $dx = nx$. Let $a = dx$. Then $na = ndx = dx = a$ for any $n > 0$, so that, as $\mathbf{H}$ is simple, $H = \Func{I}(a) = \Downset{a}$. Now if $y$ is any non-zero element, $\Func{I}(y) = H$, so $a \le ny$ for some $n$ and we have $ny \ge a \ge (n+1)y$ so that $ny = (n+1)y$ and $y$ has finite depth. \rule{0.5em}{0.5em} \begin{Theorem}\label{thm:simple-coop-structure} Let $\mathbf{C}$ be a simple coop. Then there is a 2-divisible subgroup $\mathbf{G}$ of the additive group $\mathbb{R}^{+}$, such that either {\em(i)} $\mathbf{C}$ is isomorphic to $\mathbf{G}_{{\ge}0}$, or {\em(ii)} $\mathbf{C}$ is isomorphic to $\mathbf{G}_{[0, 1]}$. \end{Theorem} \par \noindent{\bf Proof: } By Theorems~\ref{thm:simple-coops-archimedean} and~\ref{thm:simple-coops-linear}, $\mathbf{C}$ is archimedean and linearly ordered. We use these properties without further comment in the rest of the proof. \\ If $C$ is not bounded, then, by Lemma~\ref{lma:simple-hoop-depth}, there is a non-zero $e \in C$ with infinite depth, so that $ne < (n+1)e$ for every $n \in \mathbb{N}$. We will show that case {\em(i)} holds. To see this, define $f : C \rightarrow \mathbb{R}_{{\ge}0}$ by: \[ f(x) = \Func{sup} \{\frac{i}{2^n} \mathrel{|} i, n \in \mathbb{N}, \frac{i}{2^n}e \le x\}. \] For every $x$, we have $0e \le x \le ne$ for all large enough $n$. Thus the set whose supremum is used in the definition of $f$ is always non-empty and bounded above, so $f$ is well-defined. Clearly, $f$ is at least weakly monotonic, i.e., if $x \le y$, then $f(x) \le f(y)$. We claim that $f$ is a homomorphism from $\mathbf{C}$ to the coop $\mathbb{R}_{{\ge}0}$.
By Lemma~\ref{lma:simple-semi-cancellative}, the $(0, +)$-reduct of $\mathbf{C}$ is a cancellation monoid, so that as $2e = e + e = e + (e \mathop{\rightarrow} 2e)$, we have $e = e \mathop{\rightarrow} 2e$, whence $e = (2e)/2$. By induction, $e = \frac{1}{2^n}(2^ne)$ for any $n \in \mathbb{N}$. Hence, for any $m$, taking $x = 2^{m}e$ in Lemma~\ref{lma:dyadic-fractions-in-a-coop} we find that for any $a, b$ of the form $\frac{i}{2^n}x$, with $0 \le i \le 2^n$ we have $f(a + b) = f(a) + f(b)$ and $f(a \mathop{\rightarrow} b) = f(a) \mathop{\rightarrow} f(b)$. Letting $m$ tend to infinity, these equations hold for any $a, b \in D$, where $D$ is the set $\{\frac{i}{2^n}e \mathrel{|} i, n \in \mathbb{N}\}$ of all dyadic rational multiples of $e$. Thus $D$ is a subcoop of $\mathbf{C}$ isomorphic to $\mathbb{D}_{{\ge}0}$. By Lemma~\ref{lma:dyadic-fractions-in-a-coop}, given $x, y \in C$ and any $n \in \mathbb{N}$, there are $p_n, q_n, r_n, s_n \in D$ such that $p_n \le x \le q_n$, $r_n \le y \le s_n$, $f(q_n) - f(p_n) = \frac{1}{2^{n+1}}$ and $f(s_n) - f(r_n) = \frac{1}{2^{n+1}}$.
But then as $f|_D$ is a coop-homomorphism and $f$ is weakly monotonic, we have: \[ \begin{array}{c} f(p_n + r_n) = f(p_n) + f(r_n) \\ f(q_n + s_n) = f(q_n) + f(s_n) \\ f(p_n + r_n) \le f(x + y) \le f(q_n + s_n) \\ f(p_n) + f(r_n) \le f(x) + f(y) \le f(q_n) + f(s_n) \\ f(q_n + s_n) - f(p_n + r_n) \le \frac{1}{2^n} \\ \\ f(q_n \mathop{\rightarrow} r_n) = f(q_n) \mathop{\rightarrow} f(r_n) \\ f(p_n \mathop{\rightarrow} s_n) = f(p_n) \mathop{\rightarrow} f(s_n) \\ f(q_n \mathop{\rightarrow} r_n) \le f(x \mathop{\rightarrow} y) \le f(p_n \mathop{\rightarrow} s_n) \\ f(q_n) \mathop{\rightarrow} f(r_n) \le f(x) \mathop{\rightarrow} f(y) \le f(p_n) \mathop{\rightarrow} f(s_n) \\ f(p_n \mathop{\rightarrow} s_n) - f(q_n \mathop{\rightarrow} r_n) \le \frac{1}{2^n} \end{array} \] \noindent Letting $n$ tend to infinity, we must have that $f(x + y) = f(x) + f(y)$ and $f(x \mathop{\rightarrow} y) = f(x) \mathop{\rightarrow} f(y)$ and $f$ is indeed a homomorphism from $\mathbf{C}$ to $\mathbb{R}_{{\ge}0}$ as claimed. But $\mathbf{C}$ is simple, hence $f$ is either identically zero or is one-to-one, but clearly $f(e) = 1 \not= 0$, so $f$ embeds $\mathbf{C}$ as a subcoop of $\mathbb{R}_{{\ge}0}$. {\em(i)} follows immediately using Lemma~\ref{lma:subcoops-of-real-coops}. \\ Now assume $\mathbf{C}$ is bounded, with annihilator $a$, say. So $a \ge x$ for every $x \in C$. To see that case {\em(ii)} holds, define $g : C \rightarrow \mathbb{R}_{[0,1]}$ by: \[ g(x) = \Func{sup} \{\frac{i}{2^n} \mathrel{|} i, n \in \mathbb{N}, i \le 2^n, \frac{i}{2^n}a \le x\}. \] Then by an argument very similar to the one used above in the unbounded case, $C$ has a dense subcoop $D_1$ such that $g|_{D_1}$ is an isomorphism from $D_1$ to $\mathbb{D}_{[0,1]}$.
Then, approximating $x + y$ and $x \mathop{\rightarrow} y$ by elements of $D_1$ just as we did above, we find that $g$ is a homomorphism embedding $\mathbf{C}$ as a subcoop of $\mathbb{R}_{[0,1]}$, from which {\em(ii)} follows using Lemma~\ref{lma:subcoops-of-real-coops}. \rule{0.5em}{0.5em} \Subsection{Subdirectly Irreducible Coops} Recall that an algebra $\mathbf{A}$ is {\em subdirectly irreducible} iff the intersection $\mu$ of its non-identity congruences is not the identity congruence, in which case $\mu$ is called the {\em monolith}. Thus a hoop or a coop is subdirectly irreducible iff the intersection of all its non-zero ideals is non-zero. In this section, we determine the structure of subdirectly irreducible coops. If $\mathbf{C}$ and $\mathbf{D}$ are subcoops of a coop $\mathbf{E}$, we say $\mathbf{E}$ is the {\em ordinal sum} of $\mathbf{C}$ and $\mathbf{D}$ and write $\mathbf{E} = \mathbf{C} \mathop{\stackrel{\frown}{\relax}} \mathbf{D}$ iff $C \cap D = \{0\}$, $C \cup D = E$ and whenever $c \in C$ and $0 \not= d \in D$, $c + d = d$. It is easy to see that, if $\mathbf{E} = \mathbf{C} \mathop{\stackrel{\frown}{\relax}} \mathbf{D}$ and $c \in C$ and $0 \not= d \in D$, then $d > c$ and $c \mathop{\rightarrow} d = d$. Thus $C$ is an ideal and $\mathbf{E}/C \cong \mathbf{D}$. We will find that any subdirectly irreducible coop is $\mathbf{S} \mathop{\stackrel{\frown}{\relax}} \mathbf{F}$ where $\mathbf{S}$ is totally ordered and subdirectly irreducible and $\mathbf{F}$ can be any coop. This could also be established using the analogous result for subdirectly irreducible hoops proved in \cite{blok-ferreirim00}, but the extra structure in a coop admits a slightly more efficient presentation. \begin{Theorem}\label{thm:cep} Hoops and coops have the congruence extension property.
\end{Theorem} \par \noindent{\bf Proof: } By Theorem~\ref{thm:coop-homomorphisms} and the discussion of ideals that precedes it, it suffices to show that if $\mathbf{C}$ is a subhoop of a hoop $\mathbf{D}$, then for any ideal $I \subseteq C$, there is an ideal $J \subseteq D$, such that $I = J \cap C$. But, if $I \subseteq C$ is an ideal, it is easily verified from the definitions that $I = J \cap C$ where $J$ is the ideal of $\mathbf{D}$ generated by $I$. \rule{0.5em}{0.5em} Let $\mathbf{C}$ be a subdirectly irreducible coop, so that the non-zero ideals of $\mathbf{C}$ intersect in a non-zero ideal $M$, which we call the {\em monolithic ideal}. Since coops have the congruence extension property, $M$ viewed as a coop in its own right can have no non-trivial ideals, so $M$ is a simple coop, and so by Theorems~\ref{thm:simple-coops-archimedean} and~\ref{thm:simple-coops-linear}, $M$ is archimedean and linearly ordered. If $x \in C$, we define the {\em implicative stabilizer} $\Func{IS}(x)$ as follows: \begin{align*} \Func{IS}(x) &\mathrel{{:}{=}} \{s \in C \mathrel{|} s \mathop{\rightarrow} x = x\}. \end{align*} \noindent It is easily verified that $\Func{IS}(x)$ is an ideal. So, for any $x$, either $\Func{IS}(x) = \{0\}$ or $\Func{IS}(x) \supseteq M$. If $X \subseteq C$, we write $\Func{IS}(X)$ for $\bigcap_{x\in X} \Func{IS}(x)$.
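The dichotomy for $\Func{IS}(x)$ can be observed concretely in an ordinal sum. The Python sketch below is an illustration only (the names and the sampling are ours): it models $\mathbf{E} = \mathbf{C} \mathop{\stackrel{\frown}{\relax}} \mathbf{D}$ with both summands taken, for definiteness, to be interval coops with truncated subtraction, tags each element by its summand, and evaluates $s \mathop{\rightarrow} x$ on a finite sample of elements.

```python
from fractions import Fraction

# Sketch of an ordinal sum E = C ^ D of two interval coops.  Elements are
# tagged pairs ('C', v) or ('D', v); the two zeros are identified as ZERO.
# Illustrative model, not a construction taken from the text.

ZERO = ('C', Fraction(0))

def norm(e):
    """Identify ('D', 0) with the common zero of the two summands."""
    tag, v = e
    return ZERO if v == 0 else (tag, v)

def imp(s, x):
    """s -> x in the ordinal sum: truncated subtraction within a summand;
    elements of C fix every nonzero element of D, while nonzero elements
    of D dominate all of C."""
    (ts, vs), (tx, vx) = norm(s), norm(x)
    if ts == tx:
        return norm((ts, max(vx - vs, Fraction(0))))
    if ts == 'C':              # s in C, x nonzero in D: s -> x = x
        return (tx, vx)
    return ZERO                # s nonzero in D, x in C: s >= x

def IS(x, sample):
    """Implicative stabilizer of x, restricted to a finite sample."""
    return [s for s in sample if imp(s, x) == norm(x)]
```

On a sample, the stabilizer of a nonzero element of the upper summand $D$ contains the whole of the sampled lower summand $C$, while the stabilizer of a nonzero element of $C$ is trivial, matching the dichotomy just noted.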
\begin{Theorem}\label{thm:subdirectly-irreducible-coops} Let $\mathbf{C}$ be a subdirectly irreducible coop with monolithic ideal $M$ and let $F, S \subseteq C$ be defined as follows: \begin{align*} F &\mathrel{{:}{=}} \{f \in C \mathrel{|} M \subseteq \Func{IS}(f)\} \\ S &\mathrel{{:}{=}} \Func{IS}(F) \end{align*} Then: \begin{align*} \mbox{{\em(i)}} \quad & \all{x \in C \mathop{\backslash} \{0\}}\ex{a \in M \mathop{\backslash} \{0\}} x \ge a; \\ \mbox{{\em(ii)}} \quad & \all{f \in F \mathop{\backslash} \{0\}, a \in M} f \ge a; \\ \mbox{{\em(iii)}} \quad & \all{a \in M, f \in F \mathop{\backslash} \{0\}} a + f = f; \\ \mbox{{\em(iv)}} \quad & \all{f \in F \mathop{\backslash} \{0\}, x \in C} x \ge f \Rightarrow x \in F; \\ \mbox{{\em(v)}} \quad & \all{x \in C, f \in F} x \mathop{\rightarrow} f \in F; \\ \mbox{{\em(vi)}} \quad & \all{f \in F \mathop{\backslash} \{0\}, x \in C \mathop{\backslash} F} f > x; \\ \mbox{{\em(vii)}} \quad & \mbox{$F$ is the carrier set of a subcoop $\mathbf{F}$ of $\mathbf{C}$}; \\ \mbox{{\em(viii)}} \quad & \mbox{$S$ is a linearly ordered ideal of $\mathbf{C}$, and $S \cap F = \{0\}$}; \\ \mbox{{\em(ix)}} \quad & \mbox{Writing $\mathbf{S}$ for the subcoop with carrier set $S$, $\mathbf{S}$ is semi-cancellative}; \\ \mbox{{\em(x)}} \quad & \mathbf{C} = \mathbf{S} \mathop{\stackrel{\frown}{\relax}} \mathbf{F}. \end{align*} \end{Theorem} \par \noindent{\bf Proof: } First note that if $x \in C$ and $a \mathop{\rightarrow} x = x$ for some $a \in C \mathop{\backslash} \{0\}$, then $\Func{IS}(x) \not= \{0\}$, hence $M \subseteq \Func{IS}(x)$ so that $x \in F$. \\ {\em(i):} if $0 \not= x \in C$, then as $\{0\} \not= M \subseteq \Func{I}(x)$, there is $a \in M$ and $n \in \mathbb{N}$, with $2^{n}x \ge a \not= 0$, but then $0 \not= \frac{1}{2^n}a \in M$ and $x \ge \frac{1}{2^n}(2^{n}x) \ge \frac{1}{2^n}a$ (where the first inequality follows by induction using part {\em(iv)} of Corollary~\ref{cor:halving-miscellany}). 
\\ {\em(ii):} since $a \in M \subseteq \Func{I}(f)$, $mf \ge a$ for some $m \in \mathbb{N}$. By Lemma~\ref{lma:bosbach}, $f \mathop{\rightarrow} a = (a \mathop{\rightarrow} f) \mathop{\rightarrow} f \mathop{\rightarrow} a = f \mathop{\rightarrow} f \mathop{\rightarrow} a = 2f \mathop{\rightarrow} a$, since $f \in F$. By induction, $f \mathop{\rightarrow} a = nf \mathop{\rightarrow} a$ for every $n \in \mathbb{N}$. In particular, $f \mathop{\rightarrow} a = mf \mathop{\rightarrow} a = 0$. \\ {\em(iii):} by part {\em(ii)}, $f \ge a$, i.e. $f \mathop{\rightarrow} a = 0$. Hence, using $[{\sf cwc}]$, $a + f = a + (a \mathop{\rightarrow} f) = f + (f \mathop{\rightarrow} a) = f$. \\ {\em(iv)} assume $f \in F$, $x \in C$ and $x \ge f \not= 0$. We need to show that if $a \in M$, $a \mathop{\rightarrow} x = x$. But given $a \in M$, we have $a \mathop{\rightarrow} x \ge a \mathop{\rightarrow} f = f$, i.e., $((a \mathop{\rightarrow} x) \mathop{\rightarrow} f) = 0$. Hence: \begin{align*} a \mathop{\rightarrow} x &= (a \mathop{\rightarrow} x) + ((a \mathop{\rightarrow} x) \mathop{\rightarrow} f) \\ &= f + (f \mathop{\rightarrow} a \mathop{\rightarrow} x) \tag*{$[{\sf cwc}]$}\\ &= f + (a \mathop{\rightarrow} f \mathop{\rightarrow} x) \\ &= f + a + (a \mathop{\rightarrow} f \mathop{\rightarrow} x) \tag*{(iii)} \\ &= f + (f \mathop{\rightarrow} x) + ((f \mathop{\rightarrow} x) \mathop{\rightarrow} a) \tag*{$[{\sf cwc}]$} \\ &\ge x. \end{align*} So $x \ge a \mathop{\rightarrow} x \ge x$ giving $x = a \mathop{\rightarrow} x$ as required.\\ {\em(v):} if $a \in M$ and $f \in F$, $a \mathop{\rightarrow} f = f$ by the definition of $F$. So, for any $x \in C$, $a \mathop{\rightarrow} x \mathop{\rightarrow} f = x \mathop{\rightarrow} a \mathop{\rightarrow} f = x \mathop{\rightarrow} f$, whence $x \mathop{\rightarrow} f \in F$. \\ {\em(vi):} Let $f \in F$ and $x \in C \mathop{\backslash} F$. 
If $a \in M$, we have $a \mathop{\rightarrow} f \mathop{\rightarrow} x = a + f \mathop{\rightarrow} x = f \mathop{\rightarrow} x$, by part {\em(iii)}, so $x \ge f \mathop{\rightarrow} x \in F$ and by part {\em(iv)} we can only have $f \mathop{\rightarrow} x = 0$, i.e., $f \ge x$, and the inequality must be strict, since $x \not\in F$. \\ {\em(vii):} Clearly $0 \in F$. Given $f, g \in F$, we must show that $f + g, f \mathop{\rightarrow} g$ and $f/2$ all belong to $F$. As $f + g \ge f$, $f + g \in F$ follows from part {\em(iv)}. That $f \mathop{\rightarrow} g \in F$ follows from part {\em(v)}. Finally $f/2 \in F$ follows from part {\em(iii)} together with part {\em(v)} of Corollary~\ref{cor:halving-miscellany}, since given $0 \not= a \in M$ and $a \mathop{\rightarrow} f = f$, then we have $0 \not= a/2 \in M$ and $a/2 \mathop{\rightarrow} f/2 = (a \mathop{\rightarrow} f)/2 = f/2$.\\ {\em(viii):} As the intersection of a set of ideals, $S$ is itself an ideal. If $x \in S \cap F$, then $x = x \mathop{\rightarrow} x = 0$, so $S \cap F = \{0\}$. If $s, t \in S$ and $s \mathop{\rightarrow} t = t$, I claim that either $s = 0$ or $t = 0$, whence $S$ is linearly ordered by Lemma~\ref{lma:m14-linear}. To prove the claim, if $s \mathop{\rightarrow} t = t$ and $s \not= 0$, then, by part {\em(i)}, there is $a \in M$ such that $s \ge a > 0$, but then $t \ge a \mathop{\rightarrow} t \ge s \mathop{\rightarrow} t \ge t$, so $a \mathop{\rightarrow} t = t$ and $t \in F$, so $t \in S \cap F$ and therefore $t = 0$. \\ {\em(ix):} let $s, t \in S$ with $t \not= 0$ and $s + t = s$. We must prove that $s$ annihilates $S$, i.e., $S = \Downset{s}$. We may choose an $a \in M$ such that $t \ge a \not= 0$ and then $s = s + t \ge s + a \ge s$, whence $s + a = s$. If $u \in S$, then we have $a \mathop{\rightarrow} s \mathop{\rightarrow} u = a + s \mathop{\rightarrow} u = s \mathop{\rightarrow} u$, so $s \mathop{\rightarrow} u \in S \cap F = \{0\}$ by part {\em(viii)}.
Hence $s \mathop{\rightarrow} u = 0$, i.e., $s \ge u$. \\ {\em(x):} by part {\em(viii)} $S \cap F = \{0\}$. I claim that, if $x \in C \mathop{\backslash} F$ and $0 \not= f \in F$, then $x \in S$ and $x + f = f$. Given this, we must have that $C = S \cup F$ and $S \cap F = \{0\}$ and so $\mathbf{C} = \mathbf{S} \mathop{\stackrel{\frown}{\relax}} \mathbf{F}$. So assume $x \in C \mathop{\backslash} F$ and $0 \not= f \in F$. We must prove that $f = x \mathop{\rightarrow} f = x + f$. By part {\em(v)}, $(x \mathop{\rightarrow} f) \mathop{\rightarrow} f \in F$, but $x \ge (x \mathop{\rightarrow} f) \mathop{\rightarrow} f$ and $x \not\in F$, so by part {\em(iv)}, we must have $(x \mathop{\rightarrow} f) \mathop{\rightarrow} f = 0$, i.e., $x \mathop{\rightarrow} f \ge f$ implying $f = x \mathop{\rightarrow} f$. Using $[{\sf cwc}]$ and part {\em(vi)}, we have $f = f + (f \mathop{\rightarrow} x) = x + (x \mathop{\rightarrow} f) = x + f$ and the claim is true. \rule{0.5em}{0.5em} We refer to the subcoops $\mathbf{F}$ and $\mathbf{S}$ of the theorem as the {\em fixed} subcoop and the {\em support} subcoop respectively. Since the support is linearly ordered and semi-cancellative the following theorem applies to it. \begin{Theorem}\label{thm:lin-semi-canc} Let $\mathbf{L}$ be a linearly ordered semi-cancellative coop. Then \\ {\em(i)} If $\mathbf{L}$ is bounded, it is involutive; \\ {\em(ii)} $\mathbf{L}$ is a Wajsberg coop, i.e., for any $s, t \in L$, $ (t \mathop{\rightarrow} s) \mathop{\rightarrow} s = (s \mathop{\rightarrow} t) \mathop{\rightarrow} t $. \end{Theorem} \par \noindent{\bf Proof: } {\em(i):} As usual write $1$ for the annihilator of $\mathbf{L}$ and $\lnot x$ for $x \mathop{\rightarrow} 1$. Note that by Corollary \ref{cor:halving-miscellany}, $(\lnot s)/2 = s/2 \mathop{\rightarrow} 1/2$. We must show that $\lnot\lnot s = s$ for any $s \in L$. We claim that $\lnot(\cdot) : L \rightarrow L$ is injective. To see this, assume $s, t \in L$ with $\lnot s = \lnot t$. 
Then $s/2 \mathop{\rightarrow} 1/2 = (\lnot s)/2 = (\lnot t)/2 = t/2 \mathop{\rightarrow} 1/2$. As $1/2 \ge s/2$ and $1/2 \ge t/2$, we have $1/2 = (\lnot s)/2 + s/2 = (\lnot t)/2 + t/2$. But $1/2$ is not an annihilator, so by the semi-cancellative property, this implies $s/2 = t/2$, whence $s = t$, completing the proof that $\lnot$ is injective. But for any $s \in L$, $\lnot\lnot\lnot s = \lnot s$, so if $\lnot$ is injective, $\lnot\lnot s = s$. \\ {\em(ii):} \noindent We claim that for any $s, t \in L$, $(t \mathop{\rightarrow} s) \mathop{\rightarrow} s = \Func{min}\{s, t\}$, which is well-defined because $\mathbf{L}$ is linearly ordered. Assuming the claim, we have: $$ (t \mathop{\rightarrow} s) \mathop{\rightarrow} s = \Func{min}\{s, t\} = \Func{min}\{t, s\} = (s \mathop{\rightarrow} t) \mathop{\rightarrow} t $$ \noindent so the claim implies the required identity. To prove the claim, note that if $t \ge s$, we have: $$ (t \mathop{\rightarrow} s) \mathop{\rightarrow} s = 0 \mathop{\rightarrow} s = s = \Func{min}\{s, t\} $$ \noindent while if $s \ge t$, we have $$ s = (t \mathop{\rightarrow} s) + t = (t \mathop{\rightarrow} s) + ((t \mathop{\rightarrow} s) \mathop{\rightarrow} s). $$ If $\mathbf{L}$ is unbounded or if $\mathbf{L}$ is bounded but $s$ is not the annihilator, then the semi-cancellative property gives us $$ (t \mathop{\rightarrow} s) \mathop{\rightarrow} s = t = \Func{min}\{s, t\}. $$ \noindent If $\mathbf{L}$ is bounded, let us write $1$ for its annihilator and $\lnot x$ for $x \mathop{\rightarrow} 1$ as we did in the proof of part {\em(i)}. Then if $s = 1$, we have: $$ (t \mathop{\rightarrow} s) \mathop{\rightarrow} s = \lnot\lnot t = t = \Func{min}\{s, t\} $$ \noindent by part {\em(i)}. In all cases, the claim holds and the proof is complete.
\rule{0.5em}{0.5em} Note that a non-trivial ordinal sum is never Wajsberg: if $s \in S$ and $f \in F$, then in $\mathbf{S} \mathop{\stackrel{\frown}{\relax}} \mathbf{F}$, we have: \begin{align*} (s \mathop{\rightarrow} f) \mathop{\rightarrow} f &= f \mathop{\rightarrow} f = 0 \\ (f \mathop{\rightarrow} s) \mathop{\rightarrow} s &= 0 \mathop{\rightarrow} s = s \end{align*} So $(s \mathop{\rightarrow} f) \mathop{\rightarrow} f = (f \mathop{\rightarrow} s) \mathop{\rightarrow} s$ iff $0 \in \{s, f\}$. \begin{Theorem}\label{thm:wajsberg-coops-univ-decidable} The universal theory of Wajsberg coops is decidable. (I.e., the set of purely universal formulas in the language of a coop that are valid in all Wajsberg coops is decidable). \end{Theorem} \par \noindent{\bf Proof: } We claim that any Wajsberg coop is isomorphic to a subcoop of a product of linearly ordered semi-cancellative coops. Given part {\em(ii)} of Theorem~\ref{thm:lin-semi-canc}, such a product is itself a Wajsberg coop, hence, given the claim, the universal theory of Wajsberg coops reduces to that of linearly ordered semi-cancellative coops and by Theorem~\ref{thm:lin-canc-coops-decidable} the full first order theory of linearly ordered semi-cancellative coops is decidable. \\ As for the claim, let $\mathbf{W}$ be a Wajsberg coop. By Birkhoff's theorem, $\mathbf{W}$ embeds in a product $\prod_i \mathbf{C}_i$, where each $\mathbf{C}_i$ is a subdirectly irreducible homomorphic image of $\mathbf{W}$. By the remarks above, when we write $\mathbf{C}_i$ as the ordinal sum of its support and fixed part, $\mathbf{S}_i \mathop{\stackrel{\frown}{\relax}} \mathbf{F}_i$, $\mathbf{F}_i = \{0\}$, so $\mathbf{C}_i$ is isomorphic to $\mathbf{S}_i$. The claim follows from parts {\em(viii)} and {\em(ix)} of Theorem~\ref{thm:subdirectly-irreducible-coops} and Theorem~\ref{thm:lin-semi-canc}.
\rule{0.5em}{0.5em} \Section{Future Work}\label{sec:future-work} An important goal of our work is to understand the decision problem for useful classes of coops, and we have presented some results on Wajsberg coops, in particular, in the present paper. We already have some more results about general coops, but the proofs are not yet in a very satisfactory form. Blok and Ferreirim have shown that the quasi-equational theory of hoops is decidable. Using their results on subdirectly irreducible hoops one can show that any hoop embeds in a coop and from this conclude that the quasi-equational theory of coops is decidable (since this implies that a Horn clause in the language of coops can be translated into an equisatisfiable Horn clause in the language of hoops\footnote{ To do this, replace subterms of the form $t/2$ by $v_t$ where $v_t$ is a fresh variable and add hypotheses $v_t = v_t \mathop{\rightarrow} t$.}). However, this approach doesn't yield a practically feasible algorithm and gives no information about the complexity of the decision problem. We hope to improve on this position in future work. \end{document}
\begin{document} \numberwithin{equation}{section} \title[Order extreme points]{Order extreme points and solid convex hulls} \author{T.~Oikhberg and M.A.~Tursi} \address{ Dept.~of Mathematics, University of Illinois, Urbana IL 61801, USA} \email{oikhberg@illinois.edu, gramcko2@illinois.edu} \date{\today} \subjclass[2010]{46B22, 46B42} \keywords{Banach lattice, extreme point, convex hull, Radon-Nikod{\'y}m Property} \dedicatory{To the memory of Victor Lomonosov} \parindent=0pt \parskip=3pt \begin{abstract} We consider the ``order'' analogues of some classical notions of Banach space geometry: extreme points and convex hulls. A Hahn-Banach type separation result is obtained, which allows us to establish an ``order'' Krein-Milman Theorem. We show that the unit ball of any infinite dimensional reflexive space contains uncountably many order extreme points, and investigate the set of positive norm-attaining functionals. Finally, we introduce the ``solid'' version of the Krein-Milman Property, and show it is equivalent to the Radon-Nikod{\'y}m Property. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction}\label{s:intro} At the very heart of Banach space geometry lies the study of three interrelated subjects: (i) separation results (starting from the Hahn-Banach Theorem), (ii) the structure of extreme points, and (iii) convex hulls (for instance, the Krein-Milman Theorem on convex hulls of extreme points). Certain counterparts of these notions exist in the theory of Banach lattices as well. For instance, there are positive separation/extension results; see e.g. \cite[Section 1.2]{AB}. One can view solid convex hulls as lattice analogues of convex hulls; these objects have been studied, and we mention some of their properties in the paper. However, no unified treatment of all three phenomena listed above has been attempted. In the present paper, we endeavor to investigate the lattice versions of (i), (ii), and (iii) above.
We introduce the order version of the classical notion of an extreme point: if $A$ is a subset of a Banach lattice $X$, then $a \in A$ is called an \emph{order extreme point} of $A$ if for all $x_0, x_1 \in A$ and $t \in (0,1)$ the inequality $a \leq (1-t) x_0 + t x_1$ implies $x_0 = a = x_1$. Note that, in this case, if $x \geq a$ and $x \in A$, then $x = a$ (write $a \leq (x+a)/2$). Throughout, we work with real spaces. We will be using the standard Banach lattice results and terminology (found in, for instance, \cite{AB}, \cite{M-N} or \cite{Sch}). We also say that a subset of a Banach lattice is \textit{bounded} when it is norm bounded, as opposed to order bounded. Some special notation is introduced in Section \ref{s:definitions}. In the same section, we establish some basic facts about order extreme points and solid hulls. In particular, we note a connection between order and ``canonical'' extreme points (Theorem \ref{t:connection}). In Section \ref{s:separation} we prove a ``Hahn-Banach'' type result (Proposition \ref{p:separation2}), involving separation by positive functionals. This result is used in Section \ref{s:KM} to establish a ``solid'' analogue of the Krein-Milman Theorem. We prove that solid compact sets are solid convex hulls of their order extreme points (see Theorem \ref{t:KM}). A ``solid'' Milman Theorem is also proved (Theorem \ref{t:order-milman}). In Section \ref{s:examples} we study order extreme points in $AM$-spaces. For instance, we show that, for an AM-space $X$, the following three statements are equivalent: (i) $X$ is a $C(K)$ space; (ii) the unit ball of $X$ is the solid convex hull of finitely many of its elements; (iii) the unit ball of $X$ has an order extreme point (Propositions \ref{p:describe_AM} and \ref{p:only_C(K)_has_OEP}). Further in Section \ref{s:examples} we investigate norm-attaining positive functionals. 
Functionals attaining their maximum on certain sets have been investigated since the early days of functional analysis; here we must mention V.~Lomonosov's papers on the subject (see e.g.~the excellent summary \cite{ArL}, and the references contained there). In this paper, we show that a separable AM-space is a $C(K)$ space iff any positive functional on it attains its norm (Proposition \ref{p:only_C(K)_attain_norm}). On the other hand, an order continuous lattice is reflexive iff every positive functional on it attains its norm (Proposition \ref{p:functional_OC}). In Section \ref{s:how_many_extreme_points} we show that the unit ball of any reflexive infinite-dimensional Banach lattice has uncountably many order extreme points (Theorem \ref{t:uncount_many}). Finally, in Section \ref{s:SKMP} we define the ``solid'' version of the Krein-Milman Property, and show that it is equivalent to the Radon-Nikod{\'y}m Property (Theorem \ref{t:RNP}). To close this introduction, we would like to mention that related ideas have been explored before, in other branches of functional analysis. In the theory of $C^*$ algebras, and, later, operator spaces, the notions of ``matrix'' or ``$C^*$'' extreme points and convex hulls have been used. The reader is referred to e.g.~\cite{EWe}, \cite{EWi}, \cite{FaM}, \cite{WeWi} for more information; for a recent operator-valued separation theorem, see \cite{Mag}. \section{Preliminaries}\label{s:definitions} In this section, we introduce the notation commonly used in the paper, and mention some basic facts. The closed unit ball (sphere) of a Banach space $X$ is denoted by $\mathbf{B}(X)$ (resp.~$\mathbf{S}(X)$). If $X$ is a Banach lattice, and $C \subset X$, write $C_+ = C \cap X_+$, where $X_+$ stands for the positive cone of $X$. Further, we say that $C \subset X$ is \emph{solid} if, for $x \in X$ and $z \in C$, the inequality $|x| \leq |z|$ implies the inclusion $x \in C$. In particular, $x \in X$ belongs to $C$ if and only if $|x|$ does.
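In a concrete finite-dimensional lattice such as $\mathbb{R}^n$ with the coordinatewise order, solidity is a condition on moduli alone. The short Python sketch below is purely illustrative (finite subsets of $\mathbb{R}^2$, with helper names of our own choosing): it tests membership in the solid hull of a finite set, i.e. the set of $z$ with $|z| \leq |x|$ for some $x$ in the set.

```python
# Illustrative check of solidity in R^n with coordinatewise order.
# modulus, leq and in_solid_hull are our names, not the paper's notation.

def modulus(v):
    """Coordinatewise modulus |v|."""
    return tuple(abs(c) for c in v)

def leq(u, v):
    """Coordinatewise order u <= v."""
    return all(a <= b for a, b in zip(u, v))

def in_solid_hull(z, C):
    """z lies in the solid hull of the finite set C iff |z| <= |x|
    for some x in C."""
    return any(leq(modulus(z), modulus(x)) for x in C)
```

Since membership depends only on $|z|$, membership of $z$ and of $-z$ always coincide, which is the finite-dimensional shadow of the balancedness remark that follows.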
Note that any solid set is automatically \emph{balanced}; that is, $C = -C$. Restricting our attention to the positive cone $X_+$, we say that $C \subset X_+$ is \emph{positive-solid} if for any $x\in X_+$, the existence of $z \in C$ satisfying $x \leq z$ implies the inclusion $x \in C$. We will denote the set of order extreme points of $C$ (defined in Section \ref{s:intro}) by $\mathrm{OEP}(C)$; the set of ``classical'' extreme points is denoted by $\mathrm{EP}(C)$. \begin{remark}\label{r:G_delta} It is easy to see that the set of all extreme points of a compact metrizable set is $G_\delta$. The same can be said for the set of order extreme points of $A$, whenever $A$ is a closed solid bounded subset of a separable reflexive Banach lattice. Indeed, then the weak topology is induced by a metric $d$. For each $n$ let $F_n$ be the set of all $x \in A$ for which there exist $x_1, x_2 \in A$ with $x \leq (x_1 + x_2)/2$ and $d(x_1, x_2) \geq 1/n$. By compactness, $F_n$ is closed. Now observe that $\cup_n F_n$ is the complement of the set of all order extreme points. \end{remark} Note that every order extreme point is an extreme point in the usual sense, but the converse is not true: for instance, $\mathbf{1}_{(0,1)}$ is an extreme point of $\mathbf{B}(L_\infty(0,2))_+$, but not its order extreme point. However, a connection between ``classical'' and order extreme points exists: \begin{theorem}\label{t:connection} Suppose $A$ is a solid subset of a Banach lattice $X$. Then $a$ is an extreme point of $A$ if and only if $|a|$ is its order extreme point. \end{theorem} The proof of Theorem \ref{t:connection} uses the notion of a quasi-unit. Recall \cite[Definition 1.2.6]{M-N} that for $e,v \in X_+$, $v$ is a \textit{quasi-unit} of $e$ if $v \wedge (e-v) = 0$. This terminology is not universally accepted: the same objects can be referred to as \textit{components} \cite{AB}, or \textit{fragments} \cite{PR}. \begin{proof} Suppose $|a|$ is order extreme.
Let $0<t<1$ and $x, y \in A$ be such that $a = tx+(1-t)y$. Then since $A$ is solid and $|a| \leq t|x| + (1-t)|y|$, one has $|x| = |y| = |a|$. Thus the latter inequality is in fact equality. Thus $|a|+a = 2a_+ = 2t x_+ +2(1-t)y_+$, so $a_+ = tx_+ +(1-t)y_+$. Similarly, $a_- = tx_- + (1-t)y_-$. It follows that $x_+ \perp y_-$ and $x_- \perp y_+$. Since $ x_+ + x_- =|x| = |y| = y_+ +y_- $, we have that $x_+ = x_+ \wedge (y_+ +y_-) = x_+\wedge y_+ + x_+ \wedge y_-$ (since $y_+, y_-$ are disjoint). Now since $x_+ \perp y_-$, the latter is just $x_+ \wedge y_+$, hence $x_+ \leq y_+$. By a similar argument one can show the opposite inequality to conclude that $x_+ = y_+$, and likewise $x_-=y_-$, so $x=y=a$. Now suppose $a$ is extreme. It is sufficient to show that $|a| $ is order extreme for $A_+$. Indeed, if $|a| \leq tx + (1-t)y$ (with $0 \leq t \leq 1$ and $x, y \in A$), then $|a| \leq t|x|+(1-t)|y|$. As $|a|$ is an order extreme point of $A_+$, we conclude that $|x| = |y| = |a|$, so $|a| = tx+(1-t)y= t|x|+(1-t)|y|$. The latter implies that $x_- = y_-=0$, hence $x=|x| =|a|=|y|=y$. Therefore, suppose $|a| \leq tx + (1-t)y$ with $0 \leq t \leq 1$, and $x,y \in A_+$. First show that $|a|$ is a quasi-unit of $x$ (and by similar argument of $y$). To this end, note that $a_+ - tx \wedge a_+ \leq (1-t)y\wedge a_+$. Since $A$ is solid, \[A \ni z_+:= \frac{1}{1-t}( a_+ -tx\wedge a_+ ) \] and similarly, since $a_- - tx \wedge a_- \leq (1-t)y\wedge a_-$, \[A \ni z_-:= \frac{1}{1-t}( a_- -tx\wedge a_-) \] These inequalities imply that $z_+ \perp z_-$, so they correspond to the positive and negative parts of some $z = z_+ -z_-$. Also, $z\in A$ since $|z| \leq |a|$. Now $a_+ = t(x\wedge \frac{a_+}{t}) +(1-t) z_+$ and $a_- = t(x\wedge \frac{a_-}{t}) +(1-t) z_-$. In addition, $|x\wedge\frac{a_+}{t} - x\wedge\frac{a_-}{t}| \leq x$, so since $A$ is solid, \[z':= x\wedge\frac{a_+}{t} - x\wedge\frac{a_-}{t} \in A. \] Therefore $ a=a_+ -a_- = tz'+(1-t)z$.
Since $a$ is an extreme point, $a=z$, hence \[(1-t)z_+ =(1-t) a_+ = a_+ -tx\wedge a_+ \] so $tx \wedge a_+ = ta_+ $ which implies that $(t(x-a_+))\wedge((1-t)a_+) = 0$. As $0<t<1$, we have that $a_+$ (and likewise $a_-$) is a quasi-unit of $x$ (and similarly of $y$). Thus $|a|$ is a quasi-unit of $x$ and of $y$. Now let $s = x-|a|$. Then $a+s, a-s \in A$, since $|a \pm s|= x$. We have \[ a = \frac{a-s}{2} +\frac{a+s}{2}, \] but since $a$ is extreme, $s$ must be $0$. Hence $x=|a|$, and similarly $y=|a|$. \end{proof} The situation is different if $A$ is a positive-solid set: the paragraph preceding Theorem \ref{t:connection} shows that $A$ can have extreme points which are not order extreme. If, however, a positive-solid set satisfies certain compactness conditions, then some connections between extreme and order extreme points can be established; see Proposition \ref{p:under_oep}, and the remark following it. If $C$ is a subset of a Banach lattice $X$, denote by ${\mathrm{S}}(C)$ the \emph{solid hull} of $C$, which is the smallest solid set containing $C$. It is easy to see that ${\mathrm{S}}(C)$ is the set of all $z \in X$ for which there exists $x \in C$ satisfying $|z| \leq |x|$. Clearly ${\mathrm{S}}(C) = {\mathrm{S}}(|C|)$, where $|C| = \{|x| : x \in C\}$. Further, we denote by ${\mathrm{CH}}(C)$ the \emph{convex hull} of $C$. For future reference, observe: \begin{proposition}\label{p:interchange} If $X$ is a Banach lattice, then ${\mathrm{S}}({\mathrm{CH}}(|C|)) = {\mathrm{CH}}({\mathrm{S}}(C))$ for any $C \subset X$. \end{proposition} \begin{proof} Let $x\in {\mathrm{CH}}({\mathrm{S}}(C))$. Then $x = \sum a_iy_i,$ where $\sum a_i = 1, a_i > 0$, and $|y_i| \leq |k_i|$ for some $k_i \in C$. Then \[|x| \leq \sum a_i|y_i| \leq \sum a_i |k_i| \in {\mathrm{CH}}(|C|), \] so $x\in {\mathrm{S}}({\mathrm{CH}}(|C|)).$ If $x\in {\mathrm{S}}({\mathrm{CH}}(|C|))$, then \[|x| \leq \sum_1^n a_i y_i,\quad y_i \in |C|,\quad 0 < a_i, \quad \sum a_i = 1. 
\] We use induction on $n$ to prove that $x \in {\mathrm{CH}}({\mathrm{S}}(C))$. If $n= 1$, $x\in {\mathrm{S}}(C)$ and we are done. Now suppose we have shown that, whenever $u \in X_+$ satisfies $u \leq \sum_1^{n-1} b_iv_i$ with $v_i \in |C|$, $b_i > 0$, and $\sum_1^{n-1} b_i = 1$, there are $z_1,\ldots,z_{n-1} \in {\mathrm{S}}(C)_+$ such that $u = \sum_1^{n-1}b_iz_i$. We have that \[|x| = (\sum_1^n a_i y_i)\wedge |x| \leq (\sum_1^{n-1} a_iy_i)\wedge |x| + (a_ny_n)\wedge |x|. \] Now \[ 0 \leq |x| - (\sum_1^{n-1} a_iy_i)\wedge |x| \leq a_n(y_n\wedge \frac{|x|}{a_n}). \] Let $z_n :=\frac{1}{a_n}(|x| - (\sum_1^{n-1} a_iy_i)\wedge |x|)$. By the above, $z_n \in {\mathrm{S}}(C)_+$. Furthermore, \[\frac{1}{1-a_n}(|x| \wedge \sum_1^{n-1}a_iy_i) \leq \sum_1^{n-1} \frac{a_i}{1-a_n} y_i , \] so by the induction hypothesis, applied with the convex coefficients $\frac{a_i}{1-a_n}$, there exist $z_1,\ldots,z_{n-1} \in {\mathrm{S}}(C)_+$ such that \[ \frac{1}{1-a_n}\big(|x|\wedge \sum_1^{n-1}a_iy_i\big) = \sum_1^{n-1} \frac{a_i}{1-a_n} z_i , \quad \mbox{that is,} \quad |x|\wedge \sum_1^{n-1}a_iy_i = \sum_1^{n-1} a_i z_i . \] Therefore $|x| = \sum_1^n a_iz_i$. Now for each $i$, $a_iz_i \leq |x| = x_+ + x_-$, and, as $x_+ \perp x_-$, \[a_iz_i = a_iz_i\wedge x_+ +a_iz_i\wedge x_- = a_i( z_i\wedge(\frac{x_+}{a_i}) + z_i\wedge(\frac{x_-}{a_i}) ).\] Let $w_i = z_i\wedge(\frac{x_+}{a_i}) - z_i\wedge(\frac{x_-}{a_i})$. Note that $|w_i| = z_i$, so $w_i \in {\mathrm{S}}(C)$. Summing over $i$, and noting that each term $a_iz_i\wedge x_+$ is disjoint from $x_-$, each term $a_iz_i\wedge x_-$ is disjoint from $x_+$, and the two sums add up to $|x|$, we obtain $\sum a_i (z_i\wedge \frac{x_+}{a_i}) = x_+$ and $\sum a_i (z_i\wedge \frac{x_-}{a_i}) = x_-$. It follows that $x= \sum a_iw_i \in {\mathrm{CH}}({\mathrm{S}}(C))$. \end{proof} For $C \subset X$ (as before, $X$ is a Banach lattice) we define the \emph{solid convex hull} of $C$ to be the smallest convex, solid set containing $C$, and denote it by ${\mathrm{SCH}}(C)$; the norm (equivalently, weak) closure of the latter set is denoted by ${\mathrm{CSCH}}(C)$, and referred to as the \emph{closed solid convex hull} of $C$. \begin{corollary}\label{c:equal-sch} Let $C\subseteq X$. Then \begin{enumerate} \item ${\mathrm{SCH}}(C) = {\mathrm{CH}}({\mathrm{S}}(C)) = {\mathrm{SCH}}(|C|)$, and consequently, ${\mathrm{CSCH}}(C) = {\mathrm{CSCH}}(|C|)$. \item If $C \subseteq X_+$, then ${\mathrm{SCH}}(C) = {\mathrm{S}}({\mathrm{CH}}(C))$.
\end{enumerate} \end{corollary} \begin{proof} (1) Suppose $C\subseteq D$, where $D$ is convex and solid. Then ${\mathrm{CH}}({\mathrm{S}}(C)) \subseteq D$. Consequently, ${\mathrm{CH}}({\mathrm{S}}(C)) \subset {\mathrm{SCH}}(C)$. On the other hand, by Proposition \ref{p:interchange}, ${\mathrm{CH}}({\mathrm{S}}(C))$ is also solid, so ${\mathrm{SCH}}(C) \subseteq {\mathrm{CH}}({\mathrm{S}}(C))$. Thus, ${\mathrm{SCH}}(C) = {\mathrm{CH}}({\mathrm{S}}(C)) = {\mathrm{CH}}({\mathrm{S}}(|C|)) = {\mathrm{SCH}}(|C|)$. \\ (2) This follows from (1) and the equality in Proposition \ref{p:interchange}. \end{proof} \begin{remark}\label{r:shadow-not-closed} The two examples below show that ${\mathrm{S}}(C)$ need not be closed, even if $C$ itself is. Example (1) exhibits an unbounded closed set $C$ with ${\mathrm{S}}(C)$ not closed; in Example (2), $C$ is closed and bounded, but the ambient Banach lattice needs to be infinite dimensional. (1) Let $X$ be a Banach lattice of dimension at least two, and consider disjoint norm one $e_1, e_2 \in \mathbf{B}(X)_+$. Let $C = \{ x_n : n \in \mathbb{N}\}$, where $x_n = \frac{n}{n+1}e_1 +ne_2$. Now, $C$ is norm-closed: if $m > n$, then $\|x_m-x_n\| \geq \|e_2\| = 1$. However, ${\mathrm{S}}(C)$ is not closed: it contains $r e_1$ for any $r \in (0,1)$, but not $e_1$. (2) If $X$ is infinite dimensional, then there exists a closed \emph{bounded} $C \subset X_+$, for which ${\mathrm{S}}(C)$ is not closed. Indeed, find disjoint norm one elements $e_1, e_2, \ldots \in X_+$. For $n \in \mathbb{N}$ let $y_n = \sum_{k=1}^n 2^{-k} e_k$ and $x_n = y_n + e_n$. Then clearly $\|x_n\| \leq 2$ for any $n$; further, $\|x_n - x_m\| \geq 1$ for any $n \neq m$, hence $C = \{x_1, x_2, \ldots\}$ is closed. However, $y_n \in {\mathrm{S}}(C)$ for any $n$, and the sequence $(y_n)$ converges to $\sum_{k=1}^\infty 2^{-k} e_k \notin {\mathrm{S}}(C)$. \end{remark} However, under certain conditions we can show that the solid hull of a closed set is closed. 
\begin{proposition}\label{p:conv_closed_KB} A Banach lattice $X$ is reflexive if and only if, for any norm closed, bounded convex $C \subset X_+$, ${\mathrm{S}}(C)$ is norm closed. \end{proposition} \begin{proof} Suppose first that $X$ is reflexive, and $C$ is a norm closed bounded convex subset of $X_+$. Suppose $(x_n)$ is a sequence in ${\mathrm{S}}(C)$, which converges to some $x$ in norm; we show that $x$ belongs to ${\mathrm{S}}(C)$ as well. Clearly $|x_n| \rightarrow |x|$ in norm. For each $n$ find $y_n \in C$ so that $|x_n| \leq y_n$. By passing to a subsequence if necessary, we assume that the sequence $(y_n)$ converges to some $y \in X$ in the weak topology. For convex sets, norm and weak closures coincide, hence $y$ belongs to $C$. For each $n$, $\pm x_n \leq y_n$; passing to the weak limit gives $\pm x \leq y$, hence $|x| \leq y$. Now suppose $X$ is not reflexive. By \cite[Theorem 4.71]{AB}, there exists a sequence of disjoint elements $e_i \in \mathbf{S}(X)_+$, equivalent to the natural basis of either $c_0$ or $\ell_1$. First consider the $c_0$ case. Let $C$ be the closed convex hull of $$ x_1 = \frac{e_1}{2}, \, \, x_n = \big(1 - 2^{-n} \big) e_1 + \sum_{j=2}^n e_j \, \, (n \geq 2) . $$ We shall show that any element of $C$ can be written as $c e_1 + \sum_{i=2}^\infty c_i e_i$, with $c < 1$. This will imply that ${\mathrm{S}}(C)$ is not closed: clearly $e_1 \in \overline{{\mathrm{S}}(C)}$. The elements of ${\mathrm{CH}}(x_1, x_2, \ldots)$ are of the form $\sum_{i=1}^\infty t_i x_i = c e_1 + \sum_{i=2}^\infty c_i e_i$; here, $t_i \geq 0$, $t_i \neq 0$ for finitely many values of $i$ only, and $\sum_i t_i = 1$. Note that $c_i = \sum_{j=i}^\infty t_j$ for $i \geq 2$ (so $c_i = 0$ eventually); for convenience, let $c_1 = \sum_{j=1}^\infty t_j = 1$. Then $t_i = c_i - c_{i+1}$; Abel's summation technique gives $$ c = \sum_{i=1}^\infty \big( 1 - 2^{-i} \big) t_i = 1 - \sum_{i=1}^\infty 2^{-i} \big( c_i - c_{i+1} \big) = \frac12 + \sum_{j=2}^\infty 2^{-j} c_j .
$$ Now consider $x \in C$. Then $x$ is the norm limit of the sequence $$ x^{(m)} = c^{(m)} e_1 + \sum_{i=2}^\infty c_i^{(m)} e_i \in {\mathrm{CH}}(x_1, x_2, \ldots) ; $$ for each $m$, the sequence $(c_i^{(m)})$ has only finitely many non-zero terms, $c^{(m)} = \frac12 + \sum_{j=2}^\infty 2^{-j} c_j^{(m)}$, and for all $m,n \in \mathbb{N}$, $|c_i^{(m)} - c_i^{(n)}| \leq \|x^{(m)} - x^{(n)}\|$. Thus, $x = c e_1 + \sum_{i=2}^\infty c_i e_i$, with $c = \frac12 + \sum_{j=2}^\infty 2^{-j} c_j$. As $0 \leq c_j \leq 1$, and $\lim_j c_j = 0$, we conclude that $c < 1$, as claimed. Now suppose $(e_i)$ are equivalent to the natural basis of $\ell_1$. Let $C$ be the closed convex hull of the vectors $$ x_n = \big(1 - 2^{-n} \big) e_1 + e_n \, \, (n \geq 2) , $$ and show that $e_1 \in \overline{{\mathrm{S}}(C)} \backslash {\mathrm{S}}(C)$. Note that $$ C = \Big\{ \Big( \sum_{i=2}^\infty \big(1 - 2^{-i} \big) t_i \Big) e_1 + \sum_{i=2}^\infty t_i e_i : t_2, t_3, \ldots \geq 0 , \sum_{i=2}^\infty t_i = 1 \Big\} . $$ Clearly $e_1$ belongs to $\overline{{\mathrm{S}}(C)}$, but not to ${\mathrm{S}}(C)$. \end{proof} \section{Separation by positive functionals}\label{s:separation} Throughout the section, $X$ is a Banach lattice, equipped with a locally convex Hausdorff topology $\tau$. This topology is called \emph{sufficiently rich} if the following conditions are satisfied: \begin{enumerate}[(i)] \item The space $X^\tau$ of $\tau$-continuous functionals on $X$ is a Banach lattice (with lattice operations defined by Riesz-Kantorovich formulas). \item $X_+$ is $\tau$-closed. \end{enumerate} Note that (i) and (ii) together imply that positive $\tau$-continuous functionals separate points. That is, for every $x \in X \backslash \{0\}$ there exists $f \in X^\tau_+$ so that $f(x) \neq 0$. Indeed, without loss of generality, $x_+ \neq 0$. Then $- x_+ \notin X_+$; separating $-x_+$ from the $\tau$-closed convex cone $X_+$ yields $f \in X^\tau_+$ so that $f(x_+) > 0$.
By \cite[Proposition 1.4.13]{M-N}, there exists $g \in X^\tau_+$ so that $g(x_+) > f(x_+)/2$ and $g(x_-) < f(x_+)/2$. Then $g(x) > 0$. Clearly, the norm and weak topologies are sufficiently rich; in this case, $X^\tau = X^*$. The weak$^*$ topology on $X$, induced by the predual Banach lattice $X_*$, is sufficiently rich as well; then $X^\tau = X_*$. \begin{proposition}[Separation]\label{p:separation2} Suppose $\tau$ is a sufficiently rich topology on a Banach lattice $X$, and $A \subset X_+$ is a $\tau$-closed positive-solid bounded convex subset of $X_+$. Suppose, furthermore, $x \in X_+$ does not belong to $A$. Then there exists $f \in X^\tau_+$ so that $f(x) > \sup_{a \in A} f(a)$. \end{proposition} \begin{lemma}\label{l:max} Suppose $A$ and $X$ are as above, and $f \in X^\tau$. Then $\sup_{a \in A} f(a) = \sup_{a \in A} f_+(a)$. \end{lemma} \begin{proof} Clearly $\sup_{a \in A} f(a) \leq \sup_{a \in A} f_+(a)$. To prove the reverse inequality, write $f = f_+ - f_-$, with $f_+ \wedge f_- = 0$. Fix $a \in A$; then $$ 0 = \big[ f_+ \wedge f_- \big](a) = \inf_{0 \leq x \leq a} \big( f_+(a-x) + f_-(x) \big) . $$ For any $\varepsilon > 0$ we can find $x$ with $0 \leq x \leq a$ so that $f_+(a-x), f_-(x) < \varepsilon$; note that $x \in A$, since $A$ is positive-solid. Then $f_+(x) = f_+(a) - f_+(a-x) > f_+(a) - \varepsilon$, and therefore, $f(x) = f_+(x) - f_-(x) > f_+(a) - 2\varepsilon$. Now recall that $\varepsilon > 0$ and $a \in A$ are arbitrary. \end{proof} \begin{proof}[Proof of Proposition \ref{p:separation2}] Use the Hahn-Banach theorem to find $f \in X^\tau$ strictly separating $x$ from $A$. By Lemma \ref{l:max}, $f_+$ achieves the separation as well. \end{proof} \begin{remark}\label{r:lattice_needed} In this paper, we do not consider separation results on general ordered spaces. Our reasoning will fail without lattice structure. For instance, Lemma \ref{l:max} is false when $X$ is not a lattice, but merely an ordered space.
Indeed, consider $X = M_2$ (the space of symmetric real $2 \times 2$ matrices, with the positive semi-definite order), $\displaystyle f = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$, and $A = \{t a_0 : 0 \leq t \leq 1\}$, where $\displaystyle a_0 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$; one can check that $A = \{x \in M_2 : 0 \leq x \leq a_0\}$. Then $f|_A = 0$, while $\displaystyle \sup_{x \in A} f_+(x) = 1$. The reader interested in the separation results in the non-lattice ordered setting is referred to an interesting result of \cite{FW}, recently re-proved in \cite{Ama}. \end{remark} \section{Solid convex hulls: theorems of Krein-Milman and Milman}\label{s:KM} Throughout this section, the topology $\tau$ is assumed to be sufficiently rich (defined in the beginning of Section \ref{s:separation}). \begin{theorem}[``Solid'' Krein-Milman]\label{t:KM} Any $\tau$-compact positive-solid convex subset $A$ of $X_+$ coincides with the $\tau$-closed positive-solid convex hull of its order extreme points. \end{theorem} \begin{proof} Let $A$ be a $\tau$-compact positive-solid convex subset of $X_+$. Denote the $\tau$-closed convex hull of $\mathrm{OEP}(A)$ by $B$; then clearly $B \subseteq A$. The proof of the reverse inclusion is similar to that of the ``usual'' Krein-Milman. Suppose $C$ is a $\tau$-compact subset of $X$. We say that a non-void closed $F \subset C$ is an \emph{order extreme subset} of $C$ if, whenever $x \in F$ and $a_1, a_2 \in C$ satisfy $x \leq (a_1 + a_2)/2$, then necessarily $a_1, a_2 \in F$. The set ${\mathcal{F}}(C)$ of order extreme subsets of $C$ can be ordered by reverse inclusion (this makes $C$ the minimal order extreme subset of itself). By compactness, each chain has an upper bound; therefore, by Zorn's Lemma, ${\mathcal{F}}(C)$ has a maximal element. We claim that these maximal elements are singletons, and their elements are the order extreme points of $C$. We need to show that, if $F \in {\mathcal{F}}(C)$ is not a singleton, then there exists $G \subsetneq F$ which is also an order extreme set.
To this end, find distinct $a_1, a_2 \in F$, and $f \in X^\tau_+$ which separates them (recall that positive $\tau$-continuous functionals separate points) -- say $f(a_1) > f(a_2)$. Let $\alpha = \max_{x \in F} f(x)$, then $G = F \cap f^{-1}(\alpha)$ is a proper, order extreme subset of $F$. Suppose, for the sake of contradiction, that there exists $x \in A \backslash B$. Use Proposition \ref{p:separation2} to find $f \in X^\tau_+$ so that $f(x) > \max_{y \in B} f(y)$. Let $\alpha = \max_{x \in A} f(x)$, then $A \cap f^{-1}(\alpha)$ is an order extreme subset of $A$, disjoint from $B$. As noted above, this subset contains at least one order extreme point. This yields a contradiction, as all order extreme points of $A$ lie in $B$ by construction. \end{proof} \begin{corollary}\label{c:KM} Any $\tau$-compact solid convex subset of $X$ coincides with the $\tau$-closed solid convex hull of its order extreme points. \end{corollary} Of course, there exist Banach lattices whose unit ball has no order extreme points at all -- $L_1(0,1)$, for instance. However, an order analogue of \cite[Lemma 1]{Linl1} holds. \begin{proposition}\label{p:some_vs_all} For a Banach lattice $X$, the following two statements are equivalent: \begin{enumerate} \item Every bounded closed solid convex subset of $X$ has an order extreme point. \item Every bounded closed solid convex subset of $X$ is the closed solid convex hull of its order extreme points. \end{enumerate} \end{proposition} \begin{proof} (2) $\Rightarrow$ (1) is evident; we shall prove (1) $\Rightarrow$ (2). Suppose $A \subset X$ is closed, bounded, convex, and solid. Let $B = {\mathrm{CSCH}}(\mathrm{OEP}(A))$ (which is not empty, by (1)). Suppose, for the sake of contradiction, that $B$ is a proper subset of $A$. Let $a \in A \backslash B$. Since $B$ and $A$ are solid, $|a| \in A \backslash B$ as well, so without loss of generality we assume that $a \geq 0$. Then there exists $f \in \mathbf{S}(X^*)_+$ which strictly separates $a$ from $B$; consequently, $$ \sup_{x \in A} f(x) \geq f(a) > \sup_{x \in B} f(x) .
$$ Fix $\varepsilon > 0$ so that $$ 2 \sqrt{2 \varepsilon} \alpha < \sup_{x \in A} f(x) - \sup_{x \in B} f(x) , \, \, {\textrm{where}} \, \, \alpha = \sup_{x \in A} \|x\| . $$ By the Bishop-Phelps-Bollob\'as theorem (see e.g. \cite{Boll} or \cite{CKMetc}), there exists $f' \in \mathbf{S}(X^*)$, attaining its maximum on $A$, so that $\|f - f'\| \leq \sqrt{2 \varepsilon}$. Let $g = |f'|$, then $\|f - g\| \leq \|f - f'\| \leq \sqrt{2 \varepsilon}$. Further, $g$ attains its maximum on $A_+$, and $\max_{x \in A} g(x) > \sup_{x \in B} g(x)$. Indeed, the first statement follows immediately from the definition of $g$. To establish the second one, note that the triangle inequality gives us $$ \sup_{x \in B} g(x) \leq \sqrt{2 \varepsilon} \alpha + \sup_{x \in B} f(x) , \, \, \, \sup_{x \in A} g(x) \geq \sup_{x \in A} f(x) - \sqrt{2 \varepsilon} \alpha . $$ Our assumption on $\varepsilon$ gives us $\max_{x \in A} g(x) > \sup_{x \in B} g(x)$. Let $D = \{a \in A : g(a) = \sup_{x \in A} g(x)\}$. Due to (1), $D$ has an order extreme point which is also an order extreme point of $A$; this point lies inside $B$, leading to the desired contradiction. \end{proof} Milman's theorem \cite[3.25]{Rud} states that, if both $K$ and $\overline{{\mathrm{CH}}(K)}^\tau$ are $\tau$-compact, then $\mathrm{EP}\big(\overline{{\mathrm{CH}}(K)}^\tau\big) \subset K$. An order analogue of Milman's theorem exists: \begin{theorem}\label{t:order-milman} Suppose $X$ is a Banach lattice. \begin{enumerate} \item If $K \subset X_+$ and $\overline{{\mathrm{CH}}(K)}^\tau$ are $\tau$-compact, then $\mathrm{OEP}\big(\overline{{\mathrm{SCH}}(K)}^\tau\big) \subseteq K$. \item If $K \subset X_+$ is weakly compact, then $\mathrm{OEP}({\mathrm{CSCH}}(K)) \subseteq K$. \item If $K \subset X$ is norm compact, then $\mathrm{OEP}({\mathrm{CSCH}}(K)) \subseteq |K|$. \end{enumerate} \end{theorem} The following lemma describes the solid hull of a $\tau$-compact set.
\begin{lemma}\label{l:compact_closed} Suppose a Banach lattice $X$ is equipped with a sufficiently rich topology $\tau$. If $C \subset X_+$ is $\tau$-compact, then ${\mathrm{S}}(C)$ is $\tau$-closed. \end{lemma} \begin{proof} Suppose a net $(y_i) \subset {\mathrm{S}}(C)$ $\tau$-converges to $y \in X$. For each $i$ find $x_i \in C$ so that $|y_i| \leq x_i$ -- or equivalently, $y_i \leq x_i$ and $-y_i \leq x_i$. Passing to a subnet if necessary, we assume that $x_i \to x \in C$ in the topology $\tau$. Then $\pm y \leq x$, which is equivalent to $|y| \leq x$. \end{proof} \begin{proof}[Proof of Theorem \ref{t:order-milman}] (1) We first consider a $\tau$-compact $K\subseteq X_+$. Milman's classical theorem gives $\mathrm{EP}\big(\overline{{\mathrm{CH}}(K)}^\tau\big) \subseteq K$. Every order extreme point of a set is extreme, hence the order extreme points of $\overline{{\mathrm{CH}}(K)}^\tau$ are in $K$. Therefore, by Lemma \ref{l:compact_closed} and Corollary \ref{c:equal-sch}, \[\overline{{\mathrm{SCH}}(K)}^\tau = \overline{{\mathrm{S}}({\mathrm{CH}}(K))}^\tau \subseteq {\mathrm{S}}\big(\overline{{\mathrm{CH}}(K)}^\tau\big) = \{x: |x| \leq y \mbox{ for some } y \in \overline{{\mathrm{CH}}(K)}^\tau \} . \] Thus, the points of $\overline{{\mathrm{SCH}}(K)}^\tau \backslash \overline{{\mathrm{CH}}(K)}^\tau$ cannot be order extreme, being dominated by elements of $\overline{{\mathrm{CH}}(K)}^\tau$. Therefore $\mathrm{OEP}\big(\overline{{\mathrm{SCH}}(K)}^\tau\big) \subseteq \mathrm{OEP}\big(\overline{{\mathrm{CH}}(K)}^\tau\big) \subseteq K$. (2) Combine (1) with Krein's Theorem (see e.g.~\cite[Theorem 3.133]{FHHMZ}), which states that $\overline{{\mathrm{CH}}(K)}^w = \overline{{\mathrm{CH}}(K)}$ is weakly compact. (3) Finally, suppose $K\subseteq X$ is norm compact. By Corollary \ref{c:equal-sch}, ${\mathrm{CSCH}}(K) = {\mathrm{CSCH}}(|K|)$. The set $|K|$ is norm compact, hence by \cite[Theorem 3.20]{Rud}, so is $\overline{{\mathrm{CH}}(|K|)}$.
By the proof of part (1), $\mathrm{OEP} ( {\mathrm{CSCH}}(K) ) \subseteq |K|$. \end{proof} We turn our attention to interchanging ``solidification'' and norm closure. We work with the norm topology, unless specified otherwise. \begin{lemma}\label{l:switch-close-solid} Let $C\subseteq X$, where $X$ is a Banach lattice, and suppose that ${\mathrm{S}}(\overline{|C|})$ is closed. Then $ \overline{{\mathrm{S}}(C)} = {\mathrm{S}}(\overline{|C|})$. \end{lemma} \begin{proof} One direction is easy: ${\mathrm{S}}(C)= {\mathrm{S}}(|C|) \subseteq {\mathrm{S}}(\overline{|C|})$, hence $\overline{{\mathrm{S}}(C)} \subseteq \overline{{\mathrm{S}}(\overline{|C|})} = {\mathrm{S}}(\overline{|C|})$. Now consider $x\in {\mathrm{S}}(\overline{|C|})$. Then by definition, $|x| \leq y$ for some $y \in \overline{|C|}$. Take $y_n \in |C|$ such that $y_n \rightarrow y$. Then $|x|\wedge y_n \in {\mathrm{S}}(|C|) = {\mathrm{S}}(C)$ for all $n$. Furthermore, \[ |x_+\wedge y_n -x_-\wedge y_n| =|x|\wedge y_n, \] so $x_+ \wedge y_n - x_-\wedge y_n \in {\mathrm{S}}(C)$. By the norm continuity of $\wedge$, \[x_+\wedge y_n -x_-\wedge y_n \rightarrow x_+\wedge y -x_-\wedge y = x, \] hence $x \in \overline{{\mathrm{S}}(C)}$. \end{proof} \begin{remark} The assumption of ${\mathrm{S}}(\overline{|C|})$ being closed is necessary: Remark \ref{r:shadow-not-closed} shows that, for a closed $C \subset X_+$, ${\mathrm{S}}(C)$ need not be closed. \end{remark} \begin{corollary}\label{c:switch-close-solid1} Suppose $C\subseteq X$ is relatively compact in the norm topology. Then $\overline{{\mathrm{S}}(C)} = {\mathrm{S}}(\overline{C})$. \end{corollary} \begin{proof} The set $\overline{C}$ is compact, hence, by the continuity of $| \cdot |$, the same is true for $|\overline{C}|$. Consequently, $|\overline{C}| \subseteq \overline{|C|} \subseteq \overline{|\overline{C}|} = |\overline{C}|$, hence $|\overline{C}| = \overline{|C|}$.
By Lemmas \ref{l:compact_closed} and \ref{l:switch-close-solid}, ${\mathrm{S}}(\overline{C}) = {\mathrm{S}}(|\overline{C}|) = {\mathrm{S}}(\overline{|C|}) = \overline{{\mathrm{S}}(C)}$. \end{proof} \begin{remark}\label{r:closure_versus_modulus} In the weak topology, the equality $|\overline{C}| = \overline{|C|}$ may fail (in this remark, all closures are taken in the weak topology). Indeed, equip the Cantor set $\Delta = \{0,1\}^\mathbb{N}$ with its uniform probability measure $\mu$. Define $x_i \in L_2(\mu)$ by setting, for $t = (t_1, t_2, \ldots) \in \Delta$, $x_i(t) = t_i - 1/4$ (that is, $x_i$ equals either $3/4$ or $-1/4$, depending on whether $t_i$ is $1$ or $0$). Then $C = \{x_i : i \in \mathbb{N}\}$ belongs to the unit ball of $L_2(\mu)$, hence it is relatively weakly compact. It is clear that $\overline{C}$ contains $\mathbf{1}/4$ (here and below, $\mathbf{1}$ denotes the constant $1$ function). On the other hand, $\overline{C}$ does not contain $\mathbf{1}/2$, which can be witnessed by applying the integration functional. Conversely, $\overline{|C|}$ contains $\mathbf{1}/2$, but not $\mathbf{1}/4$. \end{remark} \begin{remark}\label{r:weakly_compact_solids} Relative weak compactness of solid hulls has been studied before. If $X$ is a Banach lattice, then, by \cite[Theorem 4.39]{AB}, it is order continuous iff the solid hull of any weakly compact subset of $X_+$ is relatively weakly compact. Further, by \cite{ChW}, the following three statements are equivalent: \begin{enumerate} \item The solid hull of any relatively weakly compact set is relatively weakly compact. \item If $C \subset X$ is relatively weakly compact, then so is $|C|$. \item $X$ is a direct sum of a KB-space and a purely atomic order continuous Banach lattice (a Banach lattice is called purely atomic if its atoms generate it as a band). \end{enumerate} \end{remark} Finally, we return to the connections between extreme points and order extreme points.
As noted in the paragraph preceding Theorem \ref{t:connection}, a non-zero extreme point of a positive-solid set need not be order extreme. However, we have: \begin{proposition}\label{p:under_oep} Suppose $\tau$ is a sufficiently rich topology, and $A$ is a $\tau$-compact positive-solid convex subset of $X_+$. Then for any extreme point $a \in A$ there exists an order extreme point $b \in A$ so that $a \leq b$. \end{proposition} \begin{remark}\label{r:for_domination_need_compactness} The compactness assumption is essential. Consider, for instance, the closed set $A \subset C[-1,1]$, consisting of all functions $f$ so that $0 \leq f \leq \mathbf{1}$, and $f(x) \leq x$ for $x \geq 0$. Then $g(x) = x \vee 0$ is an extreme point of $A$; however, $A$ has no order extreme points. \end{remark} \begin{proof} Suppose $x_1, x_2 \in A$ satisfy $2a \leq x_1 + x_2$; we first show that then necessarily $x_1, x_2 \geq a$. We have $2a = (x_1 + x_2) \wedge (2a) \leq x_1 \wedge (2a) + x_2 \wedge (2a) \leq x_1 + x_2$. Write $2a = x_1 \wedge (2a) + (2a - x_1 \wedge (2a))$. Both summands are positive, and both belong to $A$ (for the second summand, note that $2a - x_1 \wedge (2a) \leq x_2 \wedge (2a) \leq x_2$). As $a$ is an extreme point of $A$, $x_1 \wedge (2a) = a = 2a - x_1 \wedge (2a)$, hence in particular $x_1 \wedge (2a) = a$. Similarly, $x_2 \wedge (2a) = a$. Therefore, we can write $x_1$ as a disjoint sum $x_1 = x_1' + a$ ($a, x_1'$ are quasi-units of $x_1$); in particular, $x_1 \geq a$. In the same way, $x_2 = x_2' + a$ (disjoint sum). Now consider the $\tau$-closed set $B = \{x \in A : x \geq a\}$, which contains $a$. As in the proof of Theorem \ref{t:KM}, we show that the family of $\tau$-closed order extreme subsets of $B$ has a maximal element; moreover, such an element is a singleton $\{b\}$. It remains to prove that $b$ is an order extreme point of $A$. Indeed, suppose $x_1, x_2 \in A$ satisfy $2b \leq x_1 + x_2$. A fortiori, $2a \leq x_1 + x_2$, hence, by the above, $x_1, x_2 \in B$. Thus, $x_1 = b = x_2$.
\end{proof} \section{Examples: AM-spaces and their relatives}\label{s:examples} The following result shows that, in some cases, $\mathbf{B}(X)$ is much larger than the closed convex hull of its extreme points, yet is equal to the closed solid convex hull of its order extreme points. \begin{proposition}\label{p:fin_many} For a Banach lattice $X$, $\mathbf{B}(X)$ is the (closed) solid convex hull of $n$ disjoint non-zero elements if and only if $X$ is lattice isometric to $C(K_1) \oplus_1 \ldots \oplus_1 C(K_n)$ for suitable non-trivial Hausdorff compact topological spaces $K_1,\ldots,K_n$. \end{proposition} \begin{proof} Clearly, the only order extreme points of $\mathbf{B}(C(K_1) \oplus_1 \ldots \oplus_1 C(K_n))$ are $\mathbf{1}_{K_i}$, with $1 \leq i \leq n$. Conversely, suppose $\mathbf{B}(X) = {\mathrm{CSCH}}(x_1, \ldots, x_n)$, where $x_1, \ldots, x_n \in \mathbf{B}(X)_+$ are disjoint. It is easy to see that, in this case, $\mathbf{B}(X) = {\mathrm{SCH}}(x_1, \ldots, x_n)$. Moreover, $x_i \in \mathbf{S}(X)_+$ for each $i$. Indeed, otherwise there exist $i \in \{1, \ldots, n\}$ and $\lambda > 1$ so that $\lambda x_i \in {\mathrm{SCH}}(x_1, \ldots, x_n)$, or in other words, $\lambda x_i \leq \sum_{j=1}^n t_j x_j$, with $t_j \geq 0$ and $\sum_j t_j \leq 1$. Consequently, due to the disjointness of $x_j$'s, $$ \lambda x_i = (\lambda x_i) \wedge (\lambda x_i) \leq \big( \sum_{j=1}^n t_j x_j \big) \wedge (\lambda x_i) \leq \sum_{j=1}^n (t_j x_j) \wedge (\lambda x_i) \leq t_i x_i , $$ which yields the desired contradiction. Let $E_i$ be the ideal of $X$ generated by $x_i$, meaning the set of all $x \in X$ for which there exists $c > 0$ so that $|x| \leq c |x_i|$. Note that, for such $x$, $\|x\|$ is the infimum of all $c$'s with the above property. Indeed, if $|x| \leq |x_i|$, then clearly $x \in \mathbf{B}(X)$. Conversely, suppose $x \in \mathbf{B}(X) \cap E_i$.
In other words, $|x| \leq c x_i$ for some $c$, and also $|x| \leq \sum_j t_j x_j$, with $t_j \geq 0$, and $\sum_j t_j = 1$. Then $|x| \leq (c x_i) \wedge ( \sum_j t_j x_j ) = (c \wedge t_i) x_i$. Consequently, $E_i$ (with the norm inherited from $X$) is an $AM$-space, whose strong unit is $x_i$. By \cite[Theorem 2.1.3]{M-N}, $E_i$ can be identified with $C(K_i)$, for some Hausdorff compact $K_i$. Further, Proposition \ref{p:interchange} shows that $X$ is the direct sum of the ideals $E_i$: any $y \in X$ has a unique disjoint decomposition $y = \sum_{i=1}^n y_i$, with $y_i \in E_i$. We have to show that $\|y\| = \sum_i \|y_i\|$. Indeed, suppose $\|y\| \leq 1$. Then $|y| = \sum_i |y_i| \leq \sum_j t_j x_j$, with $t_j \geq 0$, and $\sum_j t_j = 1$. Note that $\|y_i\| \leq 1$ for every $i$, or equivalently, $|y_i| \leq x_i$. Therefore, $$ |y_i| = |y| \wedge x_i \leq \big( \sum_j t_j x_j \big) \wedge x_i = t_i x_i , $$ which leads to $\|y_i\| \leq t_i$; consequently, $\sum_i \|y_i\| \leq \sum_i t_i \leq 1$. By homogeneity, $\sum_i \|y_i\| \leq \|y\|$ for every $y$, while the converse inequality is just the triangle inequality; hence $\|y\| = \sum_i \|y_i\|$. \end{proof} \begin{example}\label{e:other_etreme_points} For $X = (C(K_1) \oplus_1 C(K_2)) \oplus_\infty C(K_3)$, order extreme points of $\mathbf{B}(X)$ are $\mathbf{1}_{K_1} \oplus_\infty \mathbf{1}_{K_3}$ and $\mathbf{1}_{K_2} \oplus_\infty \mathbf{1}_{K_3}$; $\mathbf{B}(X)$ is the solid convex hull of these points. Thus, the word ``disjoint'' in the statement of Proposition \ref{p:fin_many} cannot be omitted. \end{example} Note that $\mathbf{B}(C(K))$ is the closed solid convex hull of its only order extreme point -- namely, $\mathbf{1}_K$. This is the only type of AM-space with this property. \begin{proposition}\label{p:describe_AM} Suppose $X$ is an AM-space, and $\mathbf{B}(X)$ is the closed solid convex hull of finitely many of its elements. Then $X$ is lattice isometric to $C(K)$ for some Hausdorff compact $K$. \end{proposition} \begin{proof} Suppose $\mathbf{B}(X)$ is the closed solid convex hull of $x_1, \ldots, x_n \in \mathbf{B}(X)_+$.
Then $x_0 := x_1 \vee \ldots \vee x_n \in \mathbf{B}(X)_+$ (due to $X$ being an AM-space), hence $x \in \mathbf{B}(X)$ iff $|x| \leq x_0$. Thus, $x_0$ is a strong unit of $X$, and, by \cite[Theorem 2.1.3]{M-N}, $X$ is lattice isometric to $C(K)$ for some Hausdorff compact $K$. \end{proof} \begin{proposition}\label{p:only_C(K)_has_OEP} If $X$ is an AM-space, and $\mathbf{B}(X)$ has an order extreme point, then $X$ is lattice isometric to $C(K)$, for some Hausdorff compact $K$. \end{proposition} \begin{proof} Suppose $a$ is an order extreme point of $\mathbf{B}(X)$. We claim that $a$ is a strong unit, which means that $a \geq x$ for any $x \in \mathbf{B}(X)_+$. Suppose, for the sake of contradiction, that the inequality $a \geq x$ fails for some $x \in \mathbf{B}(X)_+$. Then $b = a \vee x \in \mathbf{B}(X)_+$ (due to the definition of an AM-space), and $a \leq (a+b)/2$, contradicting the definition of an order extreme point. Thus $a$ is a strong unit, and the conclusion again follows from \cite[Theorem 2.1.3]{M-N}. \end{proof} We next consider norm-attaining functionals. It is known that, for a Banach space $X$, any element of $X^*$ attains its norm iff $X$ is reflexive. If we restrict ourselves to positive functionals on a Banach lattice, the situation is different: clearly every positive functional on $C(K)$ attains its norm at $\mathbf{1}$. Below we show that, among separable AM-spaces, only $C(K)$ has this property. \begin{proposition}\label{p:only_C(K)_attain_norm} Suppose $X$ is a separable AM-space, such that every positive linear functional attains its norm. Then $X$ is lattice isometric to $C(K)$. \end{proposition} \begin{proof} Let $(x_i)_{i=1}^\infty$ be a dense sequence in $\mathbf{S}(X)_+$. For each $i$ find $x_i^* \in \mathbf{B}(X^*)_+$ so that $x_i^*(x_i) = 1$. Let $x^* = \sum_{i=1}^\infty 2^{-i} x_i^*$. We shall show that $\|x^*\| = 1$. Indeed, $\|x^*\| \leq \sum_i 2^{-i} = 1$ by the triangle inequality. For the opposite inequality, fix $N \in \mathbb{N}$, and let $x = x_1 \vee \ldots \vee x_N$.
Then $x \in \mathbf{S}(X)_+$, and $$ \|x^*\| \geq x^*(x) \geq \sum_{i=1}^N 2^{-i} x_i^*(x) \geq \sum_{i=1}^N 2^{-i} x_i^*(x_i) = \sum_{i=1}^N 2^{-i} = 1 - 2^{-N} . $$ As $N$ can be arbitrarily large, we obtain the desired estimate on $\|x^*\|$. Now suppose $x^*$ attains its norm on $a \in \mathbf{S}(X)_+$. We claim that $a$ is a strong unit for $X$. Suppose otherwise; then there exists $y \in \mathbf{B}(X)_+$ so that $a \geq y$ fails. Let $b = a \vee y$, then $z = b - a$ belongs to $X_+\backslash\{0\}$. Then $1 \geq x^*(b) \geq x^*(a) = 1$, hence $x^*(z) = 0$. However, $x^*$ cannot vanish at $z$. Indeed, find $i$ so that $\|z/\|z\| - x_i\| < 1/2$. Then $x_i^*(z) \geq \|z\|/2$, hence $x^*(z) > 2^{-i-1} \|z\| > 0$. This gives the desired contradiction: $a$ is a strong unit, and $X$ is lattice isometric to $C(K)$ by \cite[Theorem 2.1.3]{M-N}. \end{proof} In connection to this, we also mention a result about norm-attaining functionals on order continuous Banach lattices. \begin{proposition}\label{p:functional_OC} An order continuous Banach lattice $X$ is reflexive if and only if every positive linear functional on it attains its norm. \end{proposition} \begin{proof} If an order continuous Banach lattice $X$ is reflexive, then clearly every linear functional is norm-attaining. If $X$ is not reflexive, then, by the classical result of James, there exists $x^* \in X^*$ which does not attain its norm. We show that $|x^*|$ does not attain its norm either. Let $B_+ = \{x \in X : x^*_+(|x|) = 0\}$, and define $B_-$ similarly. As all linear functionals on $X$ are order continuous \cite[Section 2.4]{M-N}, $B_+$ and $B_-$ are bands \cite[Section 1.4]{M-N}. Due to the order continuity of $X$ \cite[Section 2.4]{M-N}, $B_{\pm}$ are ranges of band projections $P_{\pm}$. Let $B$ be the range of $P = P_+ P_-$; let $B_+^o$ be the range of $P_+^o = P_+ P_-^\perp = P_+ - P$ (where we set $Q^\perp = I_X - Q$), and similarly for $B_-^o$ and $P_-^o$. Note that $P_+^o + P_-^o = P^\perp$. Suppose for the sake of contradiction that $x \in \mathbf{S}(X)_+$ satisfies $|x^*|(x) = \|x^*\|$.
Replacing $x$ by $P^\perp x$ if necessary, we assume that $P x = 0$, so $x = P_+^o x + P_-^o x$. Then $\|P_+^o x - P_-^o x\| = 1$, and \begin{align*} x^* \big( P_-^o x - P_+^o x \big) & = x^*_+ \big( P_-^o x \big) - x^*_+ \big( P_+^o x \big) - x^*_- \big( P_-^o x \big) + x^*_- \big( P_+^o x \big) \\ & = x^*_+ \big( P_-^o x \big) + x^*_- \big( P_+^o x \big) = |x^*|(x) = \|x^*\| , \end{align*} which contradicts our assumption that $x^*$ does not attain its norm. \end{proof} \section{On the number of order extreme points}\label{s:how_many_extreme_points} It is shown in \cite{LP} that, if a Banach space $X$ is reflexive and infinite-dimensional, then $\mathbf{B}(X)$ has uncountably many extreme points. Here, we establish a similar lattice result. \begin{theorem}\label{t:uncount_many} If $X$ is a reflexive infinite-dimensional Banach lattice, then $\mathbf{B}(X)$ has uncountably many order extreme points. \end{theorem} Note that if $X$ is a reflexive infinite-dimensional Banach lattice, then Theorems \ref{t:connection} and \ref{t:uncount_many} imply that $\mathbf{B}(X)$ has uncountably many extreme points, re-proving the result of \cite{LP} in this case. \begin{proof} Suppose, for the sake of contradiction, that there are only countably many such points, $\{x_n\}$. For each such $x_n$, we define $F_n =\{ f \in \mathbf{B}(X^*)_+ : f(x_n) = \| f\| \}$. Clearly $F_n$ is weak$^*$ ($=$ weakly) compact. By the reflexivity of $X$, any $f\in \mathbf{B}(X^*)$ attains its norm at some $x \in \mathrm{EP}(\mathbf{B}(X))$. Since $f(x) \leq |f|(|x|)$, we may assume that any positive functional attains its norm at a positive extreme point in $ \mathbf{B}(X)$. By Theorem \ref{t:connection}, these are precisely the order extreme points. Therefore $\bigcup F_n = \mathbf{B}(X^*)_+$. By the Baire Category Theorem, one of these sets $F_n$ must have non-empty interior in $ \mathbf{B}(X^*)_+$. Assume it is $F_1$.
Pick $f_0\in F_1$ and $y_1, \ldots, y_k \in X$, such that if $f \in \mathbf{B}(X^*)_+$ and for each $y_i$, $|f(y_i) - f_0(y_i) | < 1$, then $f \in F_1$. Without loss of generality, we assume that $\|f_0 \| < 1$, and also that each $y_i \geq 0$. Further, we can and do assume that there exist mutually disjoint $u_1, u_2, \ldots \in \mathbf{S}(X)_+$ which are disjoint from $y = \vee_i y_i$. Indeed, find mutually disjoint $z_1, z_2, \ldots \in \mathbf{S}(X)_+$. Denote the corresponding band projections by $P_1, P_2, \ldots$ (such projections exist, due to the $\sigma$-Dedekind completeness of $X$). Then the vectors $P_n y$ are mutually disjoint, and dominated by $y$. As $X$ is reflexive, it must be order continuous, and therefore, $\lim_n \|P_n y\| = 0$. Find $n_1 < n_2 < \ldots$ so that $\sum_j \|P_{n_j} y\| < 1/4$. Let $w_i = \sum_j P_{n_j} y_i$ and $y'_i = 2(y_i - w_i)$; note that $y_i = y'_i/2 + w_i$. Then if $|(f_0-g)(y'_i)|< 1$, with $g\geq 0, \|g\| \leq 1$, it follows that \begin{align*} |(f_0 - g)(y_i)| & \leq \frac{1}{2} |(f_0 - g)(y'_i) | + |(f_0- g)(w_i) | \\ &\leq \frac{1}{2} + \|f_0 - g\| \|w_i\| < \frac12 + 2\cdot \frac14 = 1. \end{align*} We may therefore replace each $y_i$ with $y'_i$; the estimate above shows that the resulting conditions still guarantee membership in $F_1$. Then the vectors $u_j = z_{n_j}$ have the desired properties. Let $P$ be the band projection complementary to $\sum_j P_{n_j}$ (in other words, complementary to the band projection corresponding to $\sum_j 2^{-j} u_j$); then $P y_i = y_i$ for any $i$. By \cite[Lemma 1.4.3 and its proof]{M-N}, there exist linear functionals $g_j \in \mathbf{S}(X^*)_+$ so that $g_j(u_j) = 1$, and $g_j =P_{n_j}^* g_j$. Consequently, the functionals $g_j$ are mutually disjoint, and $g_j|_{\mathrm{ran} \, P} = 0$. For $j \in \mathbb{N}$ find $\alpha_j \in [1 - \|P^* f_0\|, 1]$ so that $\|f_j\| = 1$, where $f_j = P^* f_0 + \alpha_j g_j$.
Then, for $1 \leq i \leq k$, $f_j(y_i) = (P^* f_0)(y_i) + \alpha_j g_j(y_i) = f_0(y_i)$, which implies that, for every $j$, $f_j$ belongs to $F_1$, hence attains its norm at $x_1$. On the other hand, note that $\lim_j g_j(x_1) = 0$. Indeed, otherwise, there exist $\gamma > 0$ and a sequence $(j_k)$ so that $g_{j_k}(x_1) \geq \gamma$ for every $k$. For any finite sequence of positive numbers $(\beta_k)$, we have $$ \sum_k |\beta_k| \geq \big\| \sum_k \beta_k g_{j_k} \big\| \geq \sum_k \beta_k g_{j_k} (x_1) \geq \gamma \sum_k |\beta_k| . $$ As the functionals $g_{j_k}$ are mutually disjoint, the inequalities $$ \sum_k |\beta_k| \geq \big\| \sum_k \beta_k g_{j_k} \big\| \geq \gamma \sum_k |\beta_k| $$ hold for every finite sequence $(\beta_k)$. We conclude that $\overline{\mathrm{span}}[g_{j_k} : k \in \mathbb{N}]$ is isomorphic to $\ell_1$, which contradicts the reflexivity of $X$. Thus, $\lim_j g_j(x_1) = 0$, hence $\lim_j f_j (x_1) = f_0(P x_1) \leq \|f_0\| < 1$, contradicting the fact that $f_j(x_1) = \|f_j\| = 1$ for every $j$. \end{proof} \begin{corollary} Suppose $C$ is a closed, bounded, solid, convex subset of a reflexive Banach lattice, having non-empty interior. Then $C$ contains uncountably many order extreme points. \end{corollary} \begin{proof} We assume without loss of generality that $\sup_{x \in C} \|x\| = 1$. Note that $0$ is an interior point of $C$. Indeed, suppose $x$ is an interior point. Pick $\varepsilon > 0$ such that $x + \varepsilon \mathbf{B}(X) \subset C$. For any $v$ with $\|v\| < \varepsilon$, we have $\frac{v}{2} = \frac{-x}{2} +\frac{x+v}{2} \in C$, since $C$ is solid and convex. Hence $\frac{\varepsilon}{2} \mathbf{B}(X) \subseteq C$. Since $C$ is bounded, we can then define an equivalent norm, with $\|y\|_C = \inf \{\lambda > 0: y \in \lambda C \}$. Since $C$ is solid, $\|y \|_C = \| \ |y | \ \|_C$, and the norm is consistent with the order. Finally, $\| \cdot \|_C$ is equivalent to $\| \cdot \|$, since for all $y\in X$, we have that $\frac{\varepsilon}{2}\|y\|_C \leq \|y\| \leq \|y\|_C$.
The conclusion follows by Theorem \ref{t:uncount_many}. \end{proof} \section{The solid Krein-Milman Property and the RNP}\label{s:SKMP} We say that a Banach lattice (or, more generally, an ordered Banach space) $X$ has the \emph{Solid Krein-Milman Property} (\emph{SKMP}) if every solid closed bounded subset of $X$ is the closed solid convex hull of its order extreme points. This is analogous to the canonical Krein-Milman Property (KMP) in Banach spaces, which is defined in a similar manner, but without any references to order. It follows from Theorem \ref{t:connection} that the KMP implies the SKMP. These geometric properties turn out to be related to the Radon-Nikod{\'ym} Property (RNP). It is known that the RNP implies the KMP, and, for Banach lattices, the converse is also true (see \cite{Cas} for a simple proof). For more information about the RNP in Banach lattices, see \cite[Section 5.4]{M-N}; a good source of information about the RNP in general is \cite{Bour} or \cite{DU}. One of the equivalent definitions of the RNP of a Banach space $X$ involves integral representations of operators $T : L_1 \to X$. If $X$ is a Banach lattice, then, by \cite[Theorem IV.1.5]{Sch}, any such operator is regular (can be expressed as a difference of two positive ones); so positivity comes naturally into the picture. \begin{theorem}\label{t:RNP} For a Banach lattice $X$, the SKMP, KMP, and RNP are equivalent. \end{theorem} \begin{proof} The implications RNP $\Leftrightarrow$ KMP $\Rightarrow$ SKMP are noted above. Now suppose $X$ fails the RNP (equivalently, the KMP).
We shall establish the failure of the SKMP in two different ways, depending on whether $X$ is a KB-space, or not. (1) If $X$ is not a KB-space, then, by \cite[Theorem 2.4.12]{M-N}, there exist disjoint $e_1, e_2, \ldots \in \mathbf{S}(X)_+$, equivalent to the canonical basis of $c_0$. Then the set $$ C = \overline{{\mathrm{S}} \Big( \big\{ \sum_i \alpha_i e_i : \max_i |\alpha_i| = 1 , \, \lim_i \alpha_i = 0 \big\} \Big)} $$ is solid, bounded, and closed. To give a more intuitive description of $C$, for $x \in X$ we let $x_i = |x| \wedge e_i$. It is easy to see that $x \in C$ if and only if $\lim_i \|x_i\| = 0$, and $|x| = \sum_i x_i$. Finally, we show that no $x \in C_+$ can be an order extreme point. Find $i$ so that $\|x_i\| < 1/2$, and consider $x' = \sum_{j \neq i} x_j + e_i$. Then clearly $x' \in C$, and $x' - x \in X_+ \backslash \{0\}$. (2) If $X$ is a KB-space failing the RNP, then, by \cite[Proposition 5.4.9]{M-N}, $X$ contains a separable sublattice $Y$ failing the RNP. Find a quasi-interior point $u \in Y$ -- that is, $y = \lim_n y \wedge (nu)$ for any $y \in Y_+$. By \cite[Corollary 5.4.20]{M-N}, $Y$ is not order dentable -- that is, $Y_+$ contains a non-empty convex bounded subset $A$ so that, for every $n \in \mathbb{N}$, $A = \overline{{\mathrm{CH}}(A \backslash H_n)}$, where $H_n = \{ y \in Y_+ : \|u \wedge y\| > \frac1n \}$. We use the techniques (and notation) of \cite{BourTal} to construct a set $C$ witnessing the failure of the SKMP. For $f\in Y^*$, let $M(A,f) = \sup_{x\in A} |f(x)|$. For $\alpha > 0$, define the \emph{slice} $T(A,f,\alpha) = \{ x\in A: f(x) > M(A,f) - \alpha\}$. By \cite{BourTal}, we can construct an increasing sequence of finite $\sigma$-algebras $\Sigma_n$ on $[0,1]$, as well as $\Sigma_n$-measurable functions $Y_n:[0,1] \rightarrow A$, $f_n:[0,1]\rightarrow Y^*$, and $\alpha_n:[0,1] \rightarrow \mathbb{R}$ such that: \begin{enumerate} \item For any $n$ and $t$, $Y_n(t) \in \overline{T(A, f_n(t), \alpha_n(t))}$.
\item $(Y_n)$ is a martingale -- that is, $Y_n(t) = {\mathbb{E}}^{\Sigma_n}(Y_{n+1}(t))$, for any $t$ and $n$ (${\mathbb{E}}$ stands for the conditional expectation). \item For any $n$ and $t$, $H_n \cap \overline{T(A,f_n(t),\alpha_n(t))} = \emptyset$. \item For any $n$ and $t$, $T(A,f_{n+1}(t), \alpha_{n+1}(t)) \subseteq T(A,f_n(t), \alpha_n(t))$. \end{enumerate} Now let $C' = \overline{{\mathrm{CH}}(\{Y_n(t), n\in \mathbb{N}, t\in [0,1] \})}$; then the set $C = \overline{{\mathrm{S}}(C')}$ (the solid hull is in $X$) is closed, bounded, convex, and solid. We will show that $C$ has no order extreme points. By Theorem \ref{t:connection}, it suffices to show that no $x \in C_+ \backslash \{0\}$ can be an extreme point of $C$, or equivalently, of $C_+ = C \cap X_+$. From now on, fix $x \in C_+ \backslash \{0\}$. Note that $x \wedge u \neq 0$. Indeed, suppose, for the sake of contradiction, that $x \wedge u = 0$. Find $y' \in C' \subset Y_+$, so that $x \leq y'$. For any $n$, we have $y' \wedge (nu) = (y'-x) \wedge (nu) \leq y'-x$. Thus, $\|y' - y' \wedge (nu)\| \geq \|x\|$. However, $u$ is a quasi-interior point of $Y$, hence $y' = \lim_n y' \wedge (nu)$. This is the desired contradiction. Find $n \in \mathbb{N}$ so that $\|x \wedge u\| > \frac1n$. Let $I_1,..., I_m$ be the atoms of $\Sigma_n$. For $i \leq m$, define $C'_i = \overline{{\mathrm{CH}}(\{ Y_k(t):k \geq n, t\in I_i \})}$, and let $C_i = \overline{{\mathrm{S}}(C'_i)}_+$. The sequence $(Y_k)$ is a martingale, hence $C' = \overline{{\mathrm{CH}}(\cup_{i=1}^m C'_i)}$. Thus, by Proposition \ref{p:interchange}, $$ C = \overline{{\mathrm{S}}(C')} = \overline{{\mathrm{S}}\big(\overline{{\mathrm{CH}}(\cup_{i=1}^m C'_i)}\big)} = \overline{{\mathrm{S}}({\mathrm{CH}}(\cup_{i=1}^m C_i))} . $$ By \cite[Lemme 3]{BourTal}, ${\mathrm{CH}}(\cup_{i=1}^m C_i)$ is closed. This set is clearly positive-solid, so by norm continuity of $| \cdot |$, ${\mathrm{S}}({\mathrm{CH}}(\cup_1^m C_i))$ is closed, hence equal to $C$.
In particular, $C_+ = {\mathrm{CH}}(\cup_{i=1}^m C_i)$. Therefore, if $x$ is an extreme point of $C_+$, then it must belong to $C_i$, for some $i$. We show this cannot happen. If $y \in {\mathrm{S}}(C'_i)_+$, then we can find $y' \in C_i'$ with $y \leq y'$. By parts (1) and (4), $C'_i \subseteq \overline{T(A, f_n(t), \alpha_n(t))}$ for $t\in I_i$, hence, by (3), $\|y' \wedge u\| \leq \frac1n$, which implies $\|y \wedge u\| \leq \frac1n$. By the triangle inequality, $$ \|x \wedge u\| \leq \|y \wedge u\| + \|x-y\| \leq \frac1n + \|x-y\| , $$ hence $\|x-y\| \geq \|x \wedge u\| - \frac1n$. Recall that $n$ is selected in such a way that $\|x \wedge u\| > \frac1n$. As $C_i = \overline{{\mathrm{S}}(C'_i)_+}$, it cannot contain $x$. Thus, $C$ witnesses the failure of the SKMP. \end{proof} {\bf Acknowledgments.} We would like to thank the anonymous referee for reading the paper carefully, and providing numerous helpful suggestions. We are also grateful to Prof.~Anton Schep for finding an error in an earlier version of Proposition \ref{p:conv_closed_KB}. \end{document}
\begin{document} \title{Dynamics inside Fatou sets in higher dimensions} \begin{abstract} In this paper, we investigate the behavior of orbits inside attracting basins in higher dimensions. Suppose $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$, $P(0)=Q(0)=0,$ and $0<|P'(0)|, |Q'(0)|<1.$ Let $\Omega$ be the immediate attracting basin of $F(z, w)$. Then there is a constant $C$ such that for every point $(z_0, w_0)\in \Omega$, there exists a point $(\tilde{z}, \tilde{w})\in \bigcup_{k\geq0} F^{-k}(0, 0)$ so that $d_\Omega\big((z_0, w_0), (\tilde{z}, \tilde{w})\big)\leq C$, where $d_\Omega$ is the Kobayashi distance on $\Omega$. However, in many other cases, this result fails. \end{abstract} \section{Introduction}\label{sec1} In discrete dynamical systems, we are interested in qualitatively and quantitatively describing the possible dynamical behavior under the iteration of maps satisfying certain conditions. In \cite{RefH1}, Hu studied the dynamics of holomorphic polynomials on attracting basins and obtained Theorem \ref{thmA}: \begin{thm}\label{thmA} [Hu, 2022] Suppose $f(z)$ is a polynomial of degree $N\geq 2$ on $\mathbb{C}$, $p$ is an attracting fixed point of $f(z),$ $\Omega_1$ is the immediate basin of attraction of $p$, $\{f^{-1}(p)\}\cap \Omega_1\neq\{p\}$, $\mathcal{A}(p)$ is the basin of attraction of $p$, and $\Omega_i$ $(i=1, 2, \ldots)$ are the connected components of $\mathcal{A}(p)$.
Then there is a constant $\tilde{C}$ so that for every point $z_0$ inside any $\Omega_i$, there exists a point $q\in \cup_k f^{-k}(p)$ inside $\Omega_i$ such that $d_{\Omega_i}(z_0, q)\leq \tilde{C}$, where $d_{\Omega_i}$ is the Kobayashi distance on $\Omega_i.$ \end{thm} Theorem \ref{thmA} shows that in an attracting basin of a complex polynomial, the backward orbit of the attracting fixed point either is the point itself or accumulates at the boundary of all the components of the basin in such a way that all points of the basin lie within a uniformly bounded distance of the backward orbit, measured with respect to the Kobayashi metric. This is an interesting and innovative problem; to the best of our knowledge, it has not been studied by other researchers. However, Hu \cite{RefH2} proved that Theorem \ref{thmA} is no longer valid for parabolic basins of polynomials in one dimension, which is an even more surprising result. Compared with one dimension, the dynamics inside attracting basins in higher dimensions is also of great interest. Let $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$, and let $F^{n}: \mathbb{C}^2 \rightarrow \mathbb{C}^2$ be its $n$-fold iterate. In complex dynamics, two crucial disjoint invariant sets are associated with $F$, the {\sl Julia set} and the {\sl Fatou set} \cite{RefM}, which partition $\mathbb{C}^2$. The Fatou set of $F$ is defined as the largest open set where the family of iterates is locally normal. In other words, for any point $(z, w)\in \mathbb{C}^2$, there exists some neighborhood $U$ of $(z, w)$ so that the sequence of iterates of the map restricted to $U$ forms a normal family, so the iterates are well behaved. The complement of the Fatou set is called the Julia set.
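To fix ideas, here is a standard worked example (added purely as an illustration; the computation is classical and is not taken from the cited references), showing how normality can be checked one coordinate at a time for the simplest product map.

```latex
% Illustration (classical example): Fatou and Julia sets of F(z,w) = (z^2, w^2).
% Since F^n(z,w) = (z^{2^n}, w^{2^n}), normality can be checked in each coordinate.
Let $F(z, w) = (z^2, w^2)$, so that $F^{n}(z, w) = (z^{2^n}, w^{2^n})$.
If $|z_0| \neq 1$ and $|w_0| \neq 1$, then on a small neighborhood of $(z_0, w_0)$
each coordinate of $F^{n}$ converges locally uniformly (to $0$ where the modulus
is smaller than $1$, to $\infty$ where it is larger), so $(z_0, w_0)$ lies in the
Fatou set. If instead $|z_0| = 1$ (or $|w_0| = 1$), every neighborhood of
$(z_0, w_0)$ contains points whose first (resp.\ second) coordinate tends to $0$
and points where it tends to $\infty$, so no subsequence of $(F^{n})$ converges
locally uniformly there. Hence the Julia set of $F$ is
$$
\big( S^1 \times \mathbb{C} \big) \cup \big( \mathbb{C} \times S^1 \big),
\qquad S^1 = \{ z \in \mathbb{C} : |z| = 1 \},
$$
and the immediate attracting basin of the fixed point $(0, 0)$ is the bidisc
$\triangle \times \triangle$.
```

This product structure is exactly the situation treated in Section 3 below.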
The classification of Fatou components in one dimension was completed in the '80s when Sullivan \cite{RefS} proved that every Fatou component of a rational map is preperiodic, i.e., there are $n,m \in \mathbb{N}$ such that $f^{n+m}(\Omega)=f^m(\Omega)$. For more details and results, we refer the reader to \cite{RefB, RefCG, RefM}. In higher dimensions, the dynamics is quite different and much richer than in $\mathbb{C}.$ We refer the reader to \cite{RefL, RefFS1, RefFS3, RefFS2} for more details and results. The connected components of the Fatou set of $F$ are called {\sl Fatou components}. A Fatou component $\Omega\subset \mathbb{C}^2$ of $F$ is {\sl invariant} if $F(\Omega)=\Omega$. For $(z, w)\in \mathbb{C}^2$, the set $\{(z_n, w_n)\}=\{(z_1, w_1)=F(z_0, w_0), (z_2, w_2)=F^2(z_0, w_0), \cdots\}$ is called the orbit of the point $(z, w)=(z_0, w_0)$. If $(z_N, w_N)=(z_0, w_0)$ for some positive integer $N$, we say that $(z_0, w_0)$ is a periodic point of $F$ of period $N$. If $N=1$, then $(z_0, w_0)$ is a fixed point of $F.$ Until now, there have been few detailed studies of the precise behavior of orbits inside the Fatou set. For example, let $\mathcal{A}(z', w'):=\{(z, w)\in \mathbb{C}^2; F^n(z, w)\rightarrow (z', w')\}$ be the {\sl basin of attraction} of an attracting fixed point $(z', w')$. One can ask, when $(z_0, w_0)$ is close to $\partial\mathcal{A}(z', w')$, what the orbit $\{(z_n, w_n)\}$ of $(z_0, w_0)$ looks like on its way to the attracting fixed point $(z', w')$, or how many iterations are needed. In this paper, we will investigate how the orbits behave inside the attracting basin of $F(z, w)$ in $\mathbb{C}^2$. We obtain the following result (Theorem \ref{the2} in Section 3): \begin{thm*}{\bf \ref{the2}.} Suppose $F(z, w)=(P(z), Q(w)),$ where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C},$ $P(0)=Q(0)=0,$ and $0< |P'(0)|, |Q'(0)|<1.$ Let $\Omega$ be the immediate attracting basin of $F(z, w)$.
Then there is a constant $C$ such that for every point $(z_0, w_0)\in \Omega$, there exists a point $(\tilde{z}, \tilde{w})\in \bigcup_{k\geq0} F^{-k}(0, 0)$ so that $d_\Omega\big((z_0, w_0), (\tilde{z}, \tilde{w})\big)\leq C$, where $d_\Omega$ is the Kobayashi distance on $\Omega$. \end{thm*} However, Theorem \ref{the2} is not valid for any of the following cases: Let $F(z, w)=(P(z), Q(w)),$ where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$ and $P(0)=Q(0)=0$. \begin{itemize} [itemsep=0pt] \item[(1)] $P(z)=z^{m_1}, Q(w)=w^{m_2}$; \item[(2)] $P(z)=z^m, 0<|Q'(0)|<1, $ i.e., $P'(0)=0;$ \item[(3)] $P(z)=z^m, Q'(0)=1, $ i.e., $P'(0)=0;$ \item[(4)] $0<|P'(0)|<1, Q'(0)=1;$ \item[(5)] $P'(0)=Q'(0)=1$. \end{itemize} Polynomial skew products (see, for example, \cite{RefJM, RefJ}) have been useful test cases for complex dynamics in two dimensions, as they allow one to use one-variable results in one of the variables: Suppose $F$ is a polynomial skew product, $F(z, w)=(P(z), Q(z, w)),$ where $P(z), Q(z, w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$ and $P(0)=0, Q(0,0)=0$. We will prove in Section 4 that Theorem \ref{the2} fails in the following cases as well: \begin{itemize} [itemsep=0pt] \item[(1)] $P(z)=z^2, Q(z, w)=w^2+az, a\in\mathbb{C};$ \item[(2)] $P(z)=az+z^2, Q(z, w)=w^2+cw+bz, 0<|a|, |b|, |c|\ll1$ and $|a|\gg|c|$, $|a|\gg|b|$, $|c|\gg|ab|$. \end{itemize} \section{Preliminaries}\label{subsec1} First of all, let us recall the definition of the Kobayashi metric; see, for example, \cite{RefA, RefBGN, RefK, RefW}. \begin{defn*}\label{def5} Let $\hat{\Omega}\subset\mathbb C^n$ be a domain. We choose a point $z\in \hat{\Omega}$ and a vector $\xi$ tangent to $\mathbb C^n$ at the point $z.$ Let $\triangle$ denote the unit disk in the complex plane $\mathbb C$. We define the {\em Kobayashi metric} $$ F_{\hat{\Omega}}(z, \xi):=\inf\{\lambda>0 : \exists f: \triangle\stackrel{hol}{\longrightarrow} \hat{\Omega}, f(0)=z, \lambda f'(0)=\xi\}.
$$ Let $\gamma: [0, 1]\rightarrow \hat{\Omega}$ be a piecewise smooth curve. Since $F_{\hat{\Omega}}(z, \xi)$ is positively homogeneous in $\xi$, the {\em Kobayashi length} of $\gamma$ is defined to be $$ L_{\hat{\Omega}} (\gamma)=\int_{0}^{1}F_{\hat{\Omega}}\big(\gamma(t), \gamma'(t)\big)\, dt.$$ For any two points $z_1$ and $z_2$ in $\hat{\Omega}$, the {\em Kobayashi distance} between $z_1$ and $z_2$ is defined to be $$d_{\hat{\Omega}}(z_1, z_2)=\inf\{L_{\hat{\Omega}} (\gamma): \gamma ~ \text{is a piecewise smooth curve connecting} ~z_1~ \text{and} ~z_2 \}.$$ Note that $d_{\hat{\Omega}}(z_1, z_2)$ is defined when $z_1$ and $z_2$ are in the same connected component of $\hat{\Omega}.$ Let $d_E(z_1, z_2)$ denote the Euclidean distance for any two points $z_1, z_2\in\triangle.$ \end{defn*} \section{Dynamics of holomorphic polynomials inside Fatou sets of $F(z, w)= (P(z), Q(w))$} In this section, we study how precisely the orbit goes inside attracting basins of $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$. \subsection{Dynamics of $F(z, w)=(z^m, w^m), m\geq2$, i.e., $|P'(0)|=|Q'(0)|=0$} \begin{thm}\label{the1} Let $F(z, w)=(z^m, w^m), m\geq2$, and let $\mathcal{A}=\triangle\times\triangle$ be the attracting basin of the origin. We choose an arbitrary constant $C>0$ and a point $(\varepsilon,\varepsilon)$ with $0<\varepsilon\ll1$. Then there exists a point $(z_0, w_0)\in \mathcal{A}$ so that for any $(\tilde{z}, \tilde{w})\in \bigcup_{k=0}^{\infty}F^{-k}(\varepsilon, \varepsilon)$, the Kobayashi distance satisfies $d_{\mathcal {A}}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{thm} \begin{proof} Every point of $F^{-k}(\varepsilon, \varepsilon)$ has both coordinates of modulus $\varepsilon^{1/m^k}$; for concreteness we write $(\tilde{z}_k, \tilde{w}_k)=(\varepsilon^{1/m^k}, \varepsilon^{1/m^k})$, so that $|\tilde{z}_k|=|\tilde{w}_k|.$ We take $(z_0, w_0)=(1-\delta, \delta)$, where $\delta>0$ is sufficiently small depending on $C$.
Then \begin{equation*} \begin{aligned} d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)&=d_{\triangle\times \triangle}\big((z_0, w_0), (\varepsilon^{1/m^k}, \varepsilon^{1/m^k})\big)\\ &=\max\bigg\{d_\triangle(z_0, \tilde{z}_k), d_\triangle(w_0, \tilde{w}_k)\bigg\}\\ &=\max\bigg\{d_\triangle(0, \frac{z_0-\tilde{z}_k}{1-z_0\overline{\tilde{z}_k}}), d_\triangle(0, \frac{w_0-\tilde{w}_k}{1-w_0\overline{\tilde{w}_k}})\bigg\}\\ &=\max\bigg\{\ln \frac{1+\lvert\frac{z_0-\tilde{z}_k}{1-z_0\overline{\tilde{z}_k}}\rvert}{1-\lvert\frac{z_0-\tilde{z}_k}{1-z_0\overline{\tilde{z}_k}}\rvert}, \ln \frac{1+\lvert\frac{w_0-\tilde{w}_k}{1-w_0\overline{\tilde{w}_k}}\rvert}{1-\lvert\frac{w_0-\tilde{w}_k}{1-w_0\overline{\tilde{w}_k}}\rvert}\bigg\}\\ &=\max\bigg\{\ln \frac{1+\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}, \ln \frac{1+\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}\bigg\}. \end{aligned} \end{equation*} We consider two cases. Case 1: Notice that $d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)\geq\ln \frac{1+\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}\geq\ln\frac{1}{1-\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}\rightarrow\infty$ as $k\rightarrow\infty.$ Hence $d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)\geq C$ if $k>k_0$ for some integer $k_0$, provided $\delta<1/2$.
Case 2: Also, $d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)\geq\ln \frac{1+\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}\rightarrow\infty$ when $\delta\rightarrow0$ for each fixed $k.$ Hence $d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)\geq C$ if $k\leq k_0$ and $\delta$ is small enough. \end{proof} \subsection{Dynamics of $F(z, w)=(P(z), Q(w)), P(0)=Q(0)=0, 0<|P'(0)|, |Q'(0)|<1$ } \begin{thm}\label{the2} Suppose $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$, $P(0)=Q(0)=0,$ and $0<|P'(0)|, |Q'(0)|<1.$ Let $\Omega$ be the immediate attracting basin of $F(z, w)$. Then there is a constant $C$ such that for every point $(z_0, w_0)\in \Omega$, there exists a point $(\tilde{z}, \tilde{w})\in \bigcup_{k\geq0} F^{-k}(0, 0)$ so that $d_\Omega\big((z_0, w_0), (\tilde{z}, \tilde{w})\big)\leq C$, where $d_\Omega$ is the Kobayashi distance on $\Omega$. \end{thm} \begin{proof} Let the immediate basins of attraction of $P(z)$ and $Q(w)$ be denoted by $\Omega_P$ and $\Omega_Q$, respectively. Then $\Omega=\Omega_P\times\Omega_Q.$ By Theorem 2.9 in \cite{RefH1}, there are constants $C_P$ and $C_Q$ such that for every point $z_0\in \Omega_P$ (resp. $w_0\in\Omega_Q$) there exists a point $\tilde{z}\in \bigcup_{k\geq0} P^{-k}(0)$ (resp. $\tilde{w}\in \bigcup_{k'\geq0} Q^{-k'}(0)$) so that $d_{\Omega_P}(z_0, \tilde{z})\leq C_P$ (resp. $d_{\Omega_Q}(w_0, \tilde{w})\leq C_Q$), where $d_{\Omega_P}$ (resp. $d_{\Omega_Q}$) is the Kobayashi distance on $\Omega_P$ (resp. $\Omega_Q$). Let $K=\max\{k, k'\}$ and $C=\max\{C_P, C_Q\}$. Since $0$ is a fixed point of both $P$ and $Q$, we have $\tilde{z}\in P^{-K}(0)$ and $\tilde{w}\in Q^{-K}(0)$, hence $(\tilde{z}, \tilde{w})\in F^{-K}(0, 0)\subset\Omega.$ Since $\Omega=\Omega_P\times\Omega_Q$, the Kobayashi distance on $\Omega$ satisfies $d_\Omega\big((z_0, w_0), (\tilde{z}, \tilde{w})\big)=\max\big\{d_{\Omega_P}(z_0, \tilde{z}), d_{\Omega_Q}(w_0, \tilde{w})\big\}\leq C.$
\end{proof} \subsection{Dynamics of $F(z, w)=(P(z), Q(w)), P(0)= Q(0)=0$} \begin{itemize} \item[(1)] $P(z)=z^m, 0<|Q'(0)|<1, $ i.e., $|P'(0)|=0;$ \item[(2)] $P(z)=z^m, Q'(0)=1, $ i.e., $|P'(0)|=0;$ \item[(3)] $0<|P'(0)|<1, Q'(0)=1;$ \item[(4)] $P'(0)=Q'(0)=1.$ \end{itemize} \begin{thm}\label{the3} Suppose $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$, $P(0)=Q(0)=0$, and the moduli $|P'(0)|, |Q'(0)|$ satisfy any one of the four conditions above. Let $\Omega$ be the immediate attracting basin of $F(z, w)$. Then for any constant $C>0$ there exists a point $(z_0, w_0)\in \Omega$ so that for any $(\tilde{z}, \tilde{w})$ in the preimage set under $\{F^{-k}\}$ of the fixed point $(0, 0)$, or of a point very close to $(0, 0),$ the Kobayashi distance satisfies $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{thm} \begin{proof} If $|P'(0)|=0, 0<|Q'(0)|<1,$ then by Theorems \ref{the1} and \ref{the2}, we know $\Omega=\triangle \times\Omega_Q.$ We choose $z_0=1-\delta$, with $\delta$ small enough. Then we know that $\tilde{z}=\varepsilon^{1/m^k}$ and \begin{equation*} \begin{aligned} d_\Omega((z_0, w_0), (\tilde{z}, \tilde{w})) &=\max\big(d_\triangle(z_0, \tilde{z}), d_{\Omega_Q}(w_0, \tilde{w})\big)\\ &\geq d_\triangle(z_0, \tilde{z})\\ &=\ln \frac{1+\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}\\ &\rightarrow\infty \end{aligned} \end{equation*} as $\delta\rightarrow0.$ This settles the first case. If $P(z)=z^m, Q'(0)=1$ or $0<|P'(0)|<1, Q'(0)=1$, then by the result of \cite{RefH2} the theorem holds, since $d_{\Omega_Q}(w_0, \tilde{w})$ is unbounded. The same reasoning applies when $P'(0)=Q'(0)=1.$ \end{proof} \section{Dynamics of polynomial skew products inside attracting basins of $F(z, w)= (P(z), Q(z,w))$} Polynomial skew products have been useful test cases for complex dynamics in two dimensions.
In favorable situations this allows one to use one-variable results in one of the variables. However, for $F(z, w)= (P(z), Q(z,w))$, we cannot compute the Kobayashi distance $d_\Omega((z_0, w_0), (\tilde{z}, \tilde{w}))$ by analyzing $d_{\Omega_P}(z_0, \tilde{z})$ and $d_{\Omega_Q}(w_0, \tilde{w})$ separately, as we did for $F(z, w)=(P(z), Q(w))$, since the second coordinate now depends on $z$ as well. In this section, we consider the dynamics of $F(z, w)$ near $(0, 0)$ and study two simple cases. \subsection{Dynamics of $F(z, w)=(z^2, w^2+az)$} \begin{thm}\label{thm5} Suppose $F(z, w)=(z^2, w^2+az)$, $a\neq0$. Let $\Omega$ be the immediate attracting basin of $(0, 0)$ and $0<\varepsilon\ll1$. We choose an arbitrary constant $C>0.$ Then there exists a point $(z_0, w_0)\in \Omega$ so that for any $(\tilde{z}, \tilde{w})\in \bigcup_{k=0}^{\infty}F^{-k}(\varepsilon, 0)$, the Kobayashi distance satisfies $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{thm} \begin{proof} We choose $z_0=0.$ Note that the projection map $\pi :\Omega\rightarrow \triangle, \pi(z, w)=z$ is distance decreasing in the Kobayashi metric. In addition, we know that the $z$-coordinate of any point in $F^{-k}(\varepsilon, 0)$ approaches $\partial\triangle$ as $k\rightarrow\infty.$ Hence there is an $l$ so that if $k>l,$ then $d_\Omega((0, w_0), F^{-k}(\varepsilon, 0))\geq C$ for any $w_0.$ Let $(z_j, w_j), j=1, \cdots, N$ be the points in $\{F^{-k}(\varepsilon, 0)\}_{k\leq l}.$ Next, we want to show that there is a $w_0$ so that $(0, w_0)\in\Omega$ and $d_\Omega\big((0, w_0), (z_j, w_j)\big)\geq C$ for any $j=1, \cdots, N.$ We first deal with a small $a$ and then with general $a$. When $a$ is small, we have the following lemma. \begin{lem}\label{lem1} Let $D:=\{(z,w); |z|<1, |w|<3/4\}$ be a bidisc. If $|a|<3/16$, then $D\subset\Omega.$ \end{lem} \begin{proof} If $|w|<3/4$ and $|z|<1$, then $|w^2|<9/16$, hence $|w^2+az|<9/16+|a|<3/4$, so $F(D)\subseteq D$. \end{proof} Now we continue to prove Theorem \ref{thm5}.
Let $f(z)\equiv w_0=2/3$ for $|z|<1.$ Then $\gamma_0=(z, f(z))$ is a graph over $|z|<1$ inside $\Omega$. Then $F^{-1}(\gamma_0)=F^{-1}(z, f(z))=(\sqrt{z}, \sqrt{f(z)- a \sqrt{z}}).$ Let us choose $\gamma_1=(\sqrt{z}, \sqrt{f(z)-a\sqrt{z}}):=(z, \sqrt{f(z^2)-az})=(z, f_1(z)).$ By induction, we obtain $\gamma_2=(z, \sqrt{f_1(z^2)-az})=(z, f_2(z));\dots; \gamma_n=(z, f_n(z))$, where we always choose $f_n(z)$ so that $f_n(0)>0$. Note that inductively $|f_n(z)|\geq 2/3,$ hence $f_n(z^2)-az$ never vanishes, which means that the branches of $f_{n+1}$ cannot meet at any point $z\in D$, i.e., no two of the graphs $\gamma_n$ intersect each other in $D$. Then $\lim_{n\rightarrow\infty}\gamma_n\subset\partial\Omega$: since $\gamma_0\subset\Omega$ and $\gamma_0$ does not pass through $(0,0)$, the backward orbits of $\gamma_0$ converge to $\partial\Omega.$ By Montel's theorem, there is a convergent subsequence $f_n(z)\rightarrow f_\infty(z)$; then $(z, f_\infty(z))\subseteq \partial\Omega$ and $f_\infty(0)=1.$ This implies that for any $z$, $\gamma_\infty(z)\in\partial\Omega_z$ for every slice $\Omega_z.$ Let $h(z, w)=w-f_\infty(z)$; then $h(z, w)$ is holomorphic on $\Omega.$ Since $h(0, w)=w-1$, we have $\lim_{w\rightarrow1}h(0, w)=0.$ Hence $h(z, w)$ vanishes at $(0, 1)$ and $h(\Omega) \subset \triangle(0, R)\setminus\{0\}$ for some constant radius $R>1$ since $\Omega$ is a bounded set. Let us recall the Kobayashi metric on the punctured disk; see Example 2.8 in \cite{RefM}. The universal covering surface of $\triangle(0, R)\setminus\{0\}$ can be identified with the left half-plane $\{w=u+iv; u<0\}$ under the exponential map $ w\mapsto z=Re^w\in\triangle(0, R)\setminus\{0\}$ with $dw=\frac{dz}{z}$.
Hence the Kobayashi metric $\big|\frac{dw}{u}\big|$ on the left half-plane corresponds to the metric $\bigg|\frac{dz}{r \ln \frac{r}{R}}\bigg|$ on the punctured disk $\triangle(0, R)\setminus\{0\}$, where $r=|z|$ and $u=\ln \frac{r}{R}.$ Then let $x:=h(0, w_0):=\lim_{w\rightarrow1}h(0, w)$ and $y:=h(z_j, w_j)$ for $j=1, \cdots, N.$ Then \begin{equation*} \begin{aligned} d_\Omega((0, w_0), (z_j, w_j))&\geq d_{\triangle(0, R)\setminus\{0\}}(x, y)\\ &=\bigg|\int_{x}^{y}\frac{1}{|z| \ln \frac{|z|}{R}}dz\bigg|\\ &=\big|\ln(|\ln (y/R)|)-\ln(|\ln (x/R)|)\big|\\ &\rightarrow\infty, \end{aligned} \end{equation*} as $x\rightarrow0.$ For general $a$, although we cannot choose a bidisc as in Lemma \ref{lem1}, we can first choose $D_0:=\{(z,w); |z|<\eta<<1, |w|<3/4\}$. Let $\hat{f}(z)\equiv\hat{w}_0=2/3$ for $|z|<\eta$; then $\hat{\gamma}_0=(z, \hat{f}(z))$ is a graph over $|z|<\eta.$ We have $F^{-1}(\hat{\gamma}_0)=F^{-1}(z, \hat{w}_0)=(\sqrt{z}, \sqrt{\hat{w}_0- a\sqrt{z}}).$ Then we choose $\hat{f}_1:=\sqrt{\hat{f}(z^2)-az}$ restricted to $|z|<\eta.$ Inductively, $\hat{f}_{n+1}(z)=\sqrt{\hat{f}_n(z^2)-az}$ is restricted to $|z|<\eta$ as well. Then one gets $\hat{f}_n(z)\rightarrow \hat{f}_\infty(z)$ with $(z, \hat{f}_\infty(z))\subseteq \partial\Omega$ and $\hat{f}_\infty(0)=1.$ Second, let $D_1:=\{(z,w); |z|<\sqrt{\eta}, w\in\mathbb{C}\}$. We choose $g_0(z)=w'_0=\hat{f}_\infty(z)$ where $|z|<\eta.$ And $\gamma'_0=(z, g_0(z))$ is a graph over $|z|<\eta.$ Then we have $F^{-1}(z, w'_0):=(z, g_1(z))$ for $|z|<\eta.$ However, there are two cases: (1) If $g_0(z^2)-az\neq0$ for $|z|<\sqrt{\eta}$, then there are two single-valued solutions for $g_1$; we denote them by $g_{1,1}:=\sqrt{g_0(z^2)- az}$ and $g_{1,2}:=-\sqrt{g_0(z^2)- az}$, $|z|<\sqrt{\eta}$. We let $g_1$ be one of them and $D_2:=\{(z,w); |z|<\sqrt{\eta}, w\in\mathbb{C}\}.$ (2) If $g_0(z^2)-az=0$ has one or more zeros on $|z|<\sqrt{\eta}$, we let $g_1$ denote the multivalued function. We repeat this for $g_2(z)=\sqrt{g_1(z^2)-az}$, etc.
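As a brief numerical aside before continuing the construction: the punctured-disk distance formula used above can be cross-checked against direct integration of the metric. The helper names `closed_form` and `by_integration` are ours, not from the text:

```python
import math

# For 0 < x < y < R, integrating |dz| / (|z| |ln(|z|/R)|) along [x, y]
# reproduces the closed form |ln|ln(y/R)| - ln|ln(x/R)||, since
# d/dr ln(-ln(r/R)) = 1 / (r ln(r/R)).
def closed_form(x, y, R):
    return abs(math.log(abs(math.log(y / R))) - math.log(abs(math.log(x / R))))

def by_integration(x, y, R, n=100000):
    f = lambda r: 1.0 / (r * abs(math.log(r / R)))
    h = (y - x) / n
    return h * (0.5 * (f(x) + f(y)) + sum(f(x + i * h) for i in range(1, n)))

R, x, y = 1.5, 0.01, 0.5
print(abs(closed_form(x, y, R) - by_integration(x, y, R)) < 1e-4)  # True
```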
We continue this process until we obtain a multivalued function $g_n(z)$ well defined on $|z|<\eta^{1/2^n}$, where $n$ will be determined below. Hence, $g_n(z)$ has at most $2^n$ sheets. Meanwhile, we let $D_{n+2}:=\{(z,w); |z|<\eta^{1/2^n}, w\in\mathbb{C}\}.$ Then one gets $g_{n}(z)\rightarrow g_\infty(z)$ with $(z, \lim_{n\rightarrow\infty}g_{n}(z))\subseteq \partial\Omega$ and $\lim_{n\rightarrow\infty}g_{n}(0)=1.$ However, we cannot simply choose $\hat{h}(z, w)=w-g_\infty(z)$. The reason is that there are $2^n$ sheets of $g_n(z)$ for every integer $n$, and it is possible that some of them meet at some $z\in D_{n+1}$, e.g., $g_{n, 1}(z')=g_{n, 100}(z')=0$ for some $z'\in\{z; |z|<\eta^{1/2^n}\}$, i.e., $g_{n-1}(z^2)-az$ has zeros in $D_{n}$. Hence we let $\hat{h}(z, w):=\prod_{i=1}^{2^n}\big(w-g_{n, i}(z)\big).$ Then $\hat{h}$ is holomorphic on $D_{n+1}$ and $\lim _{w\rightarrow1}\hat{h}(0, w)=0.$ Thus, $\hat{h}$ vanishes at $(0, 1)$ and $\hat{h}(\Omega) \subset \triangle(0, R)\setminus\{0\}$ for some constant radius $R>1$ since $\Omega$ is a bounded set. In addition, we can choose $n$ big enough so that all finitely many $(z_j, w_j)$ are inside $D_{n+1}.$ Then let $x':=\hat{h}(0, w_0):=\lim_{w\rightarrow1}\hat{h}(0, w)$ and $y':=\hat{h}(z_j, w_j)$ for $j=1, \cdots, N.$ Then \begin{equation*} \begin{aligned} d_{D_{n+1}}((0, w_0), (z_j, w_j))&\geq d_{\triangle(0, R)\setminus\{0\}}(x', y')\\ &=\bigg|\int_{x'}^{y'}\frac{1}{|z| \ln \frac{|z|}{R}}dz\bigg|\\ &=\big|\ln(|\ln (y'/R)|)-\ln(|\ln (x'/R)|)\big|\\ &\rightarrow\infty, \end{aligned} \end{equation*} as $x'\rightarrow0.$ In the end, we still need to show that for any two points $(0, w_0), (z_j, w_j)$ inside $D_{n+1}$, the Kobayashi distance $d_{D_{m}}((0, w_0), (z_j, w_j))$ is approximately equal to $d_\Omega((0, w_0), (z_j, w_j))$ as long as $D_{n+1}\subset\subset D_m$ and $D_m$ is very close to $\Omega.$ We prove the following localization result for the Kobayashi metric (see \cite{RefBGN}).
\begin{lem} For $0<s<1,$ let $\Omega_s=\{(z, w)\in\Omega; |z|<s\}.$ Fix $0<r<1$ and let $0<c<1$. Then there exists an $R$ $ (r<R<1)$ so that for every $p\in\Omega_r$ and every vector $\xi$, we have $$\frac{1}{c} K_\Omega(p, \xi)\geq K_{\Omega_R}(p, \xi)\geq K_\Omega(p, \xi).$$ \end{lem} \begin{proof} By the definition in Section 2, $$K_{\Omega}(p, \xi):=\inf\{\lambda>0 : \exists f: \triangle\stackrel{hol}{\longrightarrow} \Omega, f(0)=p, \lambda f'(0)=\xi\},$$ and for $r<R<1$, $$K_{\Omega_R}(p, \xi):=\inf\{\mu>0 : \exists g: \triangle\stackrel{hol}{\longrightarrow} \Omega_R, g(0)=p, \mu g'(0)=\xi\}.$$ The right-hand inequality is clear since $\Omega_R\subseteq\Omega$. For the left-hand one, let $g :\triangle\rightarrow\Omega$ be holomorphic with $g(0)=p$ and $\mu g'(0)=\xi.$ The $z$-coordinate $\pi\circ g$ maps $\triangle$ into $\triangle$ with $|\pi\circ g(0)|<r$, so by the Schwarz-Pick lemma there is an $R$ $(r<R<1)$, depending only on $r$ and $c$, with $|\pi\circ g(cz)|<R$ for $|z|<1.$ Then $f(z):=g(cz)$ maps $\triangle$ into $\Omega_R$ with $f(0)=p$ and $f'(0)=cg'(0),$ so that $\frac{\mu}{c}f'(0)=\xi.$ Taking the infimum over $g$ gives $K_{\Omega_R}(p, \xi)\leq\frac{1}{c}K_\Omega(p, \xi).$ Therefore, we have $$\frac{1}{c} K_\Omega(p, \xi)\geq K_{\Omega_R}(p, \xi)\geq K_\Omega(p, \xi).$$ \end{proof} Hence, $d_\Omega((0, w_0), (z_j, w_j)) \approx d_{D_{n+1}}((0, w_0), (z_j, w_j))\rightarrow\infty$ for general $a.$ Thus, there exists a point $(z_0, w_0)=(0, 1-\delta)\in \Omega,$ where $\delta\rightarrow0$, so that for any $(\tilde{z}, \tilde{w})\in \bigcup_{k\geq0}F^{-k}(\varepsilon, 0)$, the Kobayashi distance $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{proof} \subsection{Dynamics of $F(z, w)=(z^2+az, w^2+cw+bz), 0<|a|, |b|, |c|<<1$} \begin{thm}\label{thm2} Suppose $F(z, w)=(az+z^2, w^2+cw+bz), 0<|a|, |b|, |c|<<1,$ and $ |a|>>|c|, |a|>>|b|, |c|>>|ab|$. Let $\Omega$ be the immediate attracting basin of $(0, 0)$. We choose an arbitrary constant $C>0.$ Then there exists a point $(z_0, w_0)\in \Omega$ so that for any $(\tilde{z}, \tilde{w})\in \bigcup_{k\geq0}F^{-k}(0, 0)$, the Kobayashi distance $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{thm} Note that if we first fix $a,c$ and let $b$ be chosen smaller and smaller, then Theorem \ref{thm2} always fails, but in the limit case when $b=0$, Theorem \ref{thm2} is valid.
This shows that the situation is very unstable. \begin{proof} To prove this theorem, we first prove the following lemma: \begin{lem}\label{lem3} Let $\Omega_{2/3}=\{(z, w); z\in\Omega_P, |w|<2/3\};$ then $F(\Omega_{2/3})\subseteq\Omega_{2/3}.$ Moreover, $\Omega_{2/3}\subseteq\Omega.$ \end{lem} \begin{proof} We know $\Omega_P\subseteq\{z; |z|<2\}.$ If $|w|<2/3,$ we obtain $|w^2+cw+bz|\leq|w^2|+|c||w|+|b||z|<4/9+2|c|/3+|b||z|<4/9+|c|+2|b|<2/3$ for $0<|c|, |b|<<1.$ Thus, $F(\Omega_{2/3})\subseteq\Omega_{2/3}.$ Let $(z_1, w_1)\in\Omega_{2/3}$ and $(z_n, w_n)$ be the orbit of $(z_1, w_1).$ We know $z_n\rightarrow0,$ and $|w_n|<2/3$ since $F(\Omega_{2/3})\subseteq\Omega_{2/3}.$ Then $|w_{n+1}|\leq|w_n|^2+|c||w_n|+|b||z_n|\leq(2/3+|c|)|w_n|+|b||z_n|.$ Hence $w_n\rightarrow0.$ Therefore, $\Omega_{2/3}\subseteq\Omega.$ \end{proof} \begin{lem} Let $\Omega_z$ be the slice of $\Omega$ at $z$ and $\Omega_{z, 2/3}=\{(z, w)\in\Omega_z; |w|<2/3\}.$ Then each $\Omega_z$ is connected and simply connected. \end{lem} \begin{proof} Since $Q(w)=w^2+cw+bz$ is a two-to-one function, every point $w\in\Omega_{P(z)}$ has two preimages inside $\Omega_z$ counting multiplicity. Hence $F: \Omega_z\rightarrow\Omega_{P(z)}$ is a double covering and it has a critical point $w=-\frac{c}{2}$ in the $w$-coordinate. Suppose there exist at least two disjoint connected components $\Omega^1_z, \Omega^2_z$ inside $\Omega_z$, with $(z, 0)\in\Omega^1_z.$ Then $\Omega_{z, 2/3}\subseteq\Omega^1_z.$ By Lemma \ref{lem3}, we know $F( \Omega_{z, 2/3})\subseteq\Omega_{P(z), 2/3}.$ In addition, $F$ sends $\Omega_z$ to $\Omega_{P(z)},$ so we know $F(\Omega^1_z)\subseteq\Omega_{P(z)}.$ Furthermore, we know that every point inside $\Omega_{P(z), 2/3}$ has two preimages inside $\Omega^1_z$ since $F$ is a double covering, and it has a critical point $w=-\frac{c}{2}$, which is very close to $0$ in the $w$-coordinate.
Hence $F(\Omega_z^2)\cap \Omega_{P(z), 2/3}=\emptyset.$ Inductively, we know $ F^n(\Omega_z^2)\cap\Omega_{P^n(z), 2/3}=\emptyset.$ Thus, $F^n(\Omega^2_z)$ cannot converge to $0.$ Therefore, $\Omega^2_z$ is not inside $\Omega_z.$ By the maximum principle, we know that $\Omega_z$ is simply connected. \end{proof} \begin{lem} Let $z_N$ be a preimage of $-a\in P^{-1}(0),$ i.e., $z_N\in P^{-N}(-a)$, where $N$ is large enough. Then we have $(0, 0)\notin F^n(\Omega_{z_N, 2/3})$ for any integer $n\geq1.$ \end{lem} \begin{proof} Let us take a point $(z_N, w_N)\in\Omega_{z_N, 2/3}$ where $N$ is sufficiently large; then $F(z_N, w_N)=(z_{N-1}, w_{N-1})\in\Omega_{z_{N-1}, 1/2}, F^2(z_N, w_N)=(z_{N-2}, w_{N-2})\in\Omega_{z_{N-2}, 1/3}, F^3(z_N, w_N)=(z_{N-3}, w_{N-3})\in\Omega_{z_{N-3}, 1/4}.$ If $4|b|<|w|<1/4,$ then, since $|bz|\leq 2|b|<|w|/2$, we know that $$ |w^2+cw+bz|<|w|(|w|+|c|+1/2)<\frac{7}{8}|w|.$$ This shows that $w$ shrinks to $0$ very fast. Thus, for some uniformly large $L>>4$, we will have $|w_{N-l}|<4|b|$ for all $L\leq l \leq N.$ Then inductively, $F^{N-1}(z_N, w_N )=(-a, w_1)\in\Omega_{z_1, 4|b|},$ i.e., $|w_1|\leq 4|b|.$ And $F^N(z_N, w_N)=(0, w_0)=(0, w_1^2+cw_1+bz_1).$ Then $$0<\frac{1}{2}|ab|\leq|ab|-16|b|^2-4|cb| \leq|w_0|=|w_1^2+cw_1+bz_1|\leq16|b|^2+4|cb|+|ab|\leq2|ab|<<|c|$$ since $|a|>>|c|, |a|>>|b|, |c|>>|ab|.$ Therefore, $(0, 0)\notin F^n(\Omega_{z_N, 2/3})$ for any $n\leq N.$ However, for $n>N,$ we use that $F$ restricted to the $w$-axis is just $w\mapsto w^2+cw\approx cw$, so $w_n$ goes to $0$ but never lands on $0$.
\end{proof} \begin{lem}\label{lem2} Let $\Omega_n=F^{-n}(\Omega_{2/3}).$ Then for $(\tilde{ z}_n, \tilde{w}_n)\in\Omega_n,$ the Euclidean distance $d_E(\tilde{w}_n, \partial\Omega_{\tilde{z}_n})\rightarrow 0$ in $\Omega_{\tilde{z}_n}$ when $n\rightarrow\infty.$ \end{lem} \begin{proof} Suppose that for some $\varepsilon>0,$ there exist arbitrarily large $N_1$ such that, in $\Omega_{\tilde{z}_{N_1}},$ the Euclidean distance $d_E(\tilde{w}_{N_1}, \partial\Omega_{\tilde{z}_{N_1}})>\varepsilon.$ Then since $\big|\frac{\partial Q}{\partial w}\big|>2|w|-|c|>\frac{7}{6}>1$ for $|w|>\frac{2}{3},$ it follows that the distance $d_E(\tilde{w}_{N_1-1}, \partial\Omega_{\tilde{z}_{N_1-1}})>\frac{7}{6}\varepsilon.$ Repeating this $l$ times for $l$ large, we get $d_E(\tilde{w}_{N_1-l}, \partial\Omega_{\tilde{z}_{N_1-l}})>\big(\frac{7}{6}\big)^l\varepsilon\geq 4.$ This contradicts the fact that $d_E(\tilde{w}_{N_1-l}, \partial\Omega_{\tilde{z}_{N_1-l}})$ is bounded by $4.$ \end{proof} \begin{lem} Let $D:=\partial\Omega\cap\{(z, w); z\in \Omega_P\}$. Then $D$ is laminated by holomorphic graphs $w=f_\alpha(z).$ Moreover, the Kobayashi distance $d_\Omega((z_N, 0), (\tilde{ z}, \tilde{w}))\geq C$ for any $(z_N, 0)\in \Omega_{z_N}$ and any $(\tilde{ z}, \tilde{w})\in F^{-k}(0, 0)\subset\Omega_{z_k}, k\geq0,$ provided $N\geq N(C).$ \end{lem} \begin{proof} The set $\partial\Omega_{2/3}$ is laminated by graphs $w=\frac{2}{3}e^{i\theta}.$ Then we take $F^{-1}\big(w=\{\frac{2}{3}e^{i\theta}\}\big)= \frac{-c\pm \sqrt{c^2-4bz+\frac{8}{3}e^{i\theta}}}{2}:=f^{1, 2}_1.$ It is obvious that $c^2-4bz+\frac{8}{3}e^{i\theta}\neq 0$ since $0<|b|, |c|<<1, |z|<2.$ Hence $F^{-1}\big(w=\{\frac{2}{3}e^{i\theta}\}\big)$ always has two disjoint preimages. Then we can use $f^{1, 2}_1$ to laminate $F^{-1}(\partial\Omega_{2/3})$. Inductively, we can use $f^{1, 2}_j$ to laminate $F^{-j}(\partial\Omega_{2/3}), j\geq 2$.
We know that $F^{-j}(\partial\Omega_{2/3})$ is laminated by $f^{1, 2}_{j}$, hence $w=f^{1,2}_{j}(z)$ are graphs inside $ F^{-j}(\partial\Omega_{2/3}).$ Then we calculate $F^{-1}(w=f^{1, 2}_{j}(z)):$ let $$(Z, W)\in F^{-1}(w=f^{1, 2}_j(z)) ~~\text{i.e.,} ~~F(Z, W)\in(w=f^{1, 2}_j(z)).$$ Hence $$Z^2+aZ=z, \quad W^2+cW+bZ=w,$$ then $$W^2+cW+bZ=f_j(z)=f_j(Z^2+aZ),$$ $$W^2+cW+bZ-f_j(Z^2+aZ)=0,$$ thus, $$W=\frac{-c\pm\sqrt{c^2-4bZ+4f_j(Z^2+aZ)}}{2}=f^{1, 2}_{j+1}(Z).$$ Then let $f_\alpha(z):=\lim_{j\rightarrow\infty}F^{-j}\big(w=\{\frac{2}{3}e^{i\theta}\}\big).$ Therefore, $D$ is laminated by graphs $w=f_\alpha(z).$ Next, we need to show that for $(z_N, 0)$ in any slice $\Omega_{z_N}$ and any $(\tilde{z}, \tilde{w})$ in any slice $\Omega_{z_k}$, we always have $d_\Omega((z_N, 0), (\tilde{ z}, \tilde{w}))\geq C$ as long as $(\tilde{ z}, \tilde{w})\rightarrow\partial\Omega$. Here we choose $N$ sufficiently large. Note that the Kobayashi distance $d_{\Omega_{P}}(z_k, z_N)\geq C$ if $z_N\rightarrow\partial\Omega_P$ for a fixed $k$ (see \cite{RefW}). So we can assume both $k$ and $N$ are very large. However, by Lemma \ref{lem2}, $(\tilde{z}, \tilde{w})$ is very close to some point, denoted by $(\tilde{ z}, \eta)$, in $\partial\Omega_{\tilde{ z}}.$ Let $H:\Omega\rightarrow \triangle(0, R)\setminus\{0\}, H(z, w)=w-f_\alpha(z)$. Here we choose $\alpha$ so that $f_\alpha(\tilde{z})=\eta.$ Then $H(z, w)$ is holomorphic on $\Omega.$ Since $H(z, w)=w-f_\alpha(z)$, we have $\lim_{w\rightarrow f_\alpha(z)}H(z, w)=0.$ Hence $H(z, w)$ vanishes at $(\tilde{z}, \eta)$ and $H(\Omega) \subset \triangle(0, R)\setminus\{0\}$ for some constant radius $R>1$ since $\Omega$ is a bounded set. Then let $x':=H(z, w)$ and $y':=H(\tilde{ z}, \tilde{w})$.
Then \begin{equation*} \begin{aligned} d_{\Omega}((z, w), (\tilde{ z}, \tilde{w}))&\geq d_{\triangle(0, R)\setminus\{0\}}(x', y')\\ &=\bigg|\int_{x'}^{y'}\frac{1}{|z| \ln \frac{|z|}{R}}dz\bigg|\\ &=\big|\ln(|\ln (y'/R)|)-\ln(|\ln (x'/R)|)\big|\\ &\rightarrow\infty, \end{aligned} \end{equation*} as $x'\rightarrow0$, i.e., $w\rightarrow f_\alpha(z).$ \end{proof} Now we continue to prove Theorem \ref{thm2} using the same method as in Theorem \ref{thm5}. We take $(z_0, w_0)=(z_N, 0)\in\Omega_{z_N, 2/3}.$ Then $H(\Omega_{2/3})\subset \triangle(0, R)\setminus \{0\}$ and $H(\tilde{z}, \tilde{w})\rightarrow 0$ as $k\rightarrow\infty.$ Therefore, we know that there exists a point $(z_0, w_0)=(z_N, 0)$ so that for any $(\tilde{z}, \tilde{w})\in \bigcup_{k\geq0}F^{-k}(0, 0)$, the Kobayashi distance $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))=d_{\Omega}\big((z_N, 0), (P^{-i}(0), Q^{-i}(0))\big)\geq d_{\triangle(0, R)\setminus\{0\}}(x', y')\geq C$ for all $i\in\mathbb{N}$. \end{proof} \small University of Parma, Department of Mathematical, Physical and Computer Sciences, Parco Area delle Scienze, 53/A, 43124 Parma PR, Italy \emph{Email address: mi.hu@unipr.it} \end{document}
\begin{document} \begin{abstract} We compute the divisor class group and the Picard group of projective varieties with Hibi rings as homogeneous coordinate rings. These varieties are precisely the toric varieties associated to order polytopes. We use tools from the theory of toric varieties to get a description of the two groups which only depends on combinatorial properties of the underlying poset. \end{abstract} \title{Divisors on Projective Hibi Varieties} \section{introduction} Let $(\mathcal{P},\le)$ be a finite partially ordered set (poset). A subset $I\subseteq\mathcal{P}$ is called an \emph{order ideal} if it is down-closed, i.e. $p\in I$ and $q\le p$ implies $q\in I$. Denote by $\mathcal{I}(\mathcal{P})$ the set of all order ideals of $\mathcal{P}$. The poset $(\mathcal{I}(\mathcal{P}),\subseteq)$ is a distributive lattice with join $I\vee J = I\cup J$ and meet $I\wedge J = I\cap J$ for $I,J\in\mathcal{I}(\mathcal{P})$. \emph{Hibi rings} \cite{Hi87} are graded algebras with straightening laws associated to finite posets. More precisely, for a poset $\mathcal{P}=\{p_1,\ldots,p_n\}$ the Hibi ring $\mathbb{C}[\mathcal{P}]$ is the subalgebra of $\mathbb{C}[x_1,...,x_n,t]$ generated by the set of monomials $\{t\prod_{p_i\in I}{x_i}:I\in\mathcal{I}(\mathcal{P})\}$. Hibi rings are normal Cohen-Macaulay domains and we have $\mathbb{C}[\mathcal{P}]\cong\mathbb{C}[y_I:I\in\mathcal{I}(\mathcal{P})]/\mathfrak{I}_{\mathcal{I}(\mathcal{P})}$, where $\mathfrak{I}_{\mathcal{I}(\mathcal{P})}$ is the ideal generated by the so-called \emph{Hibi relations} $y_Iy_J-y_{I\wedge J}y_{I\vee J}$ for all $I,J\in\mathcal{I}(\mathcal{P})$ (see \cite{Hi87}). Since the Hibi relations are homogeneous there is a natural grading on $\mathbb{C}[\mathcal{P}]$ coming from the standard grading on $\mathbb{C}[y_I:I\in\mathcal{I}(\mathcal{P})]$.
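The definitions above can be illustrated on a small example. The following sketch (the poset and all helper names are ours, chosen only for illustration) enumerates the order ideals of a four-element "diamond" poset, checks that they form a lattice under union and intersection, and verifies the Hibi relations at the level of exponent vectors of the generating monomials:

```python
from itertools import combinations

# The "diamond" poset on {0, 1, 2, 3}: 0 < 1 < 3 and 0 < 2 < 3.
P = [0, 1, 2, 3]
below = {0: {0}, 1: {0, 1}, 2: {0, 2}, 3: {0, 1, 2, 3}}  # {q : q <= p}

def is_ideal(S):
    """S is an order ideal iff it is down-closed."""
    return all(below[p] <= S for p in S)

ideals = [frozenset(S) for r in range(len(P) + 1)
          for S in map(set, combinations(P, r)) if is_ideal(S)]

# I(P) is closed under union and intersection, i.e. a distributive lattice.
assert all(I | J in ideals and I & J in ideals for I in ideals for J in ideals)

# The Hibi relation y_I y_J - y_{I^J} y_{IvJ} holds in C[P] because the
# exponent vectors of the generators add up on both sides:
# chi_I + chi_J = chi_{I&J} + chi_{I|J}, and both sides have t-degree 2.
chi = lambda S: tuple(int(p in S) for p in P)
for I in ideals:
    for J in ideals:
        lhs = tuple(a + b for a, b in zip(chi(I), chi(J)))
        rhs = tuple(a + b for a, b in zip(chi(I & J), chi(I | J)))
        assert lhs == rhs

print(len(ideals))  # 6 ideals: {}, {0}, {0,1}, {0,2}, {0,1,2}, {0,1,2,3}
```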
In the following our central objects of study are the projective varieties $X_\mathcal{P}$ with the graded ring $\mathbb{C}[\mathcal{P}]$ as homogeneous coordinate ring, which we will call \emph{(projective) Hibi varieties}. Hibi varieties appear for example as flat degenerations of Grassmannians and flag varieties (\cite{MS05},\cite{EH12}). Moreover, they generalize several well-studied classes of varieties, such as certain determinantal and ladder determinantal varieties (\cite{BC03},\cite{Co95}). Hibi varieties are toric varieties, hence geometric questions can be reduced to discrete-geometric questions about polytopes and fans. In the case of Hibi varieties one can hope to go even one step further and describe the geometry of $X_\mathcal{P}$ in terms of the combinatorics of $\mathcal{P}$. A first step was done by Wagner in \cite{Wa96}, where the orbits of the torus action and the singular locus of $X_\mathcal{P}$ are described in terms of properties of $\mathcal{P}$. In the present paper we compute the divisor class group and the Picard group of Hibi varieties. In Section 2 we describe the polytope of $X_\mathcal{P}$. This was already used without proof in \cite{Wa96}. In Section 3 we use general results on toric varieties to compute the divisor class group of $X_\mathcal{P}$. Finally, in Section 4 we use the description of the divisor class group to compute the Picard group of $X_\mathcal{P}$. \section{Hibi Varieties and Order Polytopes} Let $\mathcal{P}$ be a finite poset. The projective variety $X_\mathcal{P}=\textnormal{Proj}(\mathbb{C}[\mathcal{P}])$ is called the \emph{(projective) Hibi variety} associated to $\mathcal{P}$. Hibi varieties appear in various contexts and generalize some well-studied classes of varieties, as the following examples show. \begin{example} Let $\mathcal{P}_n$ denote the chain consisting of $n$ elements.
The Hibi variety $X_{\mathcal{P}_n}$ is the complex projective space $\mathbb{P}^n$. More generally, if $\mathcal{P}$ is the disjoint union of chains $\mathcal{P}_{n_1},\ldots,\mathcal{P}_{n_l}$ the associated Hibi variety $X_\mathcal{P}$ is the Segre embedding of $\mathbb{P}^{n_1}\times\cdots\times\mathbb{P}^{n_l}$. \end{example} \begin{example} For $1\le d\le n$ there exists a flat degeneration taking the \emph{Grassmannian} $G_{d,n}$ of $d$-dimensional subspaces of an $n$-dimensional complex vector space to the Hibi variety $X_{\mathcal{P}_d\times\mathcal{P}_{n-d}}$. For details see \cite{EH12}, \cite{Fr13} or \cite{St96}. More generally, also \emph{flag varieties} degenerate to Hibi varieties (see \cite{MS05}). \end{example} \begin{example} \emph{Projective determinantal varieties} are determined by the vanishing of all minors of a fixed size of a matrix of indeterminates. In the case of $2$-minors of an $(n\times m)$-matrix $A$ the determinantal variety is the Hibi variety associated to $\mathcal{P}_{n-1}\cupdot\mathcal{P}_{m-1}$. Indeed, the lattice $\mathcal{I}(\mathcal{P}_{n-1}\cupdot\mathcal{P}_{m-1})$ is isomorphic to $\mathcal{P}_n\times\mathcal{P}_m$ and Hibi relations in $\mathcal{P}_n\times\mathcal{P}_m$ correspond precisely to the $2$-minors of $A$. \end{example} \begin{example} \emph{Ladder determinantal varieties} are a generalization of determinantal varieties, where instead of matrices so-called \emph{ladders} of indeterminates are considered (see e.g. \cite{Co95}). In the case of $2$-minors, these are again Hibi varieties. \end{example} In the following we will describe the polytope associated to the toric variety $X_\mathcal{P}$. For a poset $\mathcal{P}$ a subset $J\subseteq \mathcal{P}$ is called an \emph{order filter} if it is up-closed, i.e. if $b\ge a$ and $a\in J$ implies $b\in J$. Note that $J$ is an order filter if and only if its complement $\mathcal{P}\backslash J$ is an order ideal.
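As a quick cross-check of the chain examples above (the helper name is ours): an order ideal of a $k$-element chain is one of its $k+1$ bottom segments, and ideals of a disjoint union are chosen independently in each component, so the count of ideals multiplies and matches the element count of $\mathcal{P}_n\times\mathcal{P}_m$.

```python
# Number of order ideals of a disjoint union of chains with the given
# lengths: (k + 1) choices per k-element chain, chosen independently.
def num_ideals_of_chains(lengths):
    total = 1
    for k in lengths:
        total *= k + 1
    return total

# I(P_{n-1} disjoint-union P_{m-1}) has n * m elements, in agreement with
# the isomorphism with the product of chains P_n x P_m.
n, m = 4, 5
print(num_ideals_of_chains([n - 1, m - 1]))  # 20 = n * m
```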
The set $\mathcal{J}(\mathcal{P})$ of all order filters is a distributive lattice with union and intersection as join and meet operation, respectively. We have $\mathcal{J}(\mathcal{P})\cong\mathcal{I}(\mathcal{P}^{op})$, where $\mathcal{P}^{op}$ is the \emph{opposite poset} of $\mathcal{P}$, the poset with the same underlying set as $\mathcal{P}$ but with the order reversed. For a subset $S\subset\mathcal{P}$ we denote by $\mathbf{a}_S\in\mathbb{R}^\mathcal{P}$ the characteristic vector of $S$, i.e. $a_p=1$ if $p\in S$ and $a_p=0$ otherwise. The convex hull of the set $\{\mathbf{a}_J:J\in\mathcal{J}(\mathcal{P})\}$ is called the \emph{order polytope} of $\mathcal{P}$ and denoted by $\mathcal{O}(\mathcal{P})$. It can be shown that $\mathcal{O}(\mathcal{P})$ consists of all order-preserving functions $f:\mathcal{P}\to[0,1]\subseteq\mathbb{R}$ (see \cite{St86}). There is the following close connection between Hibi varieties and order polytopes. \begin{prop} The Hibi variety $X_\mathcal{P}$ is isomorphic to the projective toric variety associated to the order polytope $\mathcal{O}(\mathcal{P}^{op})$. \end{prop} \begin{proof} We will only sketch the proof, using results and notation from \cite{CLS11}. As in Chapter 1 and 2 of \cite{CLS11} for a finite set of lattice points $\mathcal{A}=\{\mathbf{a}_1,\ldots,\mathbf{a}_m\}\subseteq\mathbb{Z}^k$ we denote by $Y_\mathcal{A}$ the associated affine toric variety defined to be the Zariski closure of the image of the map \begin{equation*} \Phi_\mathcal{A}:(\mathbb{C}^*)^k\to \mathbb{C}^m, \mathbf{t}\mapsto (\mathbf{t}^{\mathbf{a}_1},...,\mathbf{t}^{\mathbf{a}_m}). \end{equation*} Moreover, let $X_\mathcal{A}$ be the Zariski closure of the image of $\pi\circ\Phi_\mathcal{A}$, where $\pi:(\mathbb{C}^*)^m\to\mathbb{P}^{m-1}$ denotes the canonical projection. The order polytope $\mathcal{O}(\mathcal{P}^{op})$ is normal, since it has a unimodular triangulation (\cite{St86}). 
Using that the integral points of $\mathcal{O}(\mathcal{P}^{op})$ are precisely its vertices, it now follows that the associated toric variety is isomorphic to $X_\mathcal{A}$, where $\mathcal{A}=\{\mathbf{a}_I:I\in\mathcal{I}(\mathcal{P})\}\subseteq\mathbb{Z}^\mathcal{P}$ and $\mathbf{a}_I$ denotes the characteristic vector of the ideal $I$. For the set $\mathcal{A}'=\{(1,\mathbf{a}_I):I\in\mathcal{I}(\mathcal{P})\}\subseteq\mathbb{Z}^{|\mathcal{P}|+1}$ of lattice points of the homogenization of $\mathcal{O}(\mathcal{P}^{op})$ we clearly have $X_\mathcal{A}=X_{\mathcal{A}'}$. On the other hand, $\mathcal{A}'$ forms a set of generators of the affine semigroup of the Hibi ring $\mathbb{C}[\mathcal{P}]$. Hence it follows from Proposition 2.1.4 in \cite{CLS11} and the quotient description of $\mathbb{C}[\mathcal{P}]$ that $X_{\mathcal{A}'}\cong X_\mathcal{P}$. \end{proof} Since $\mathcal{O}(\mathcal{P}^{op})$ is full-dimensional and Hibi rings are normal (see \cite{Hi87}), we have the following immediate corollary. \begin{cor} $X_\mathcal P$ is a projectively normal toric variety of dimension $|\mathcal{P}|$. \end{cor} \section{Divisor Class Group} A relation $p<q$ with $p,q\in\mathcal{P}$ is called a \emph{covering relation} if there is no $r\in\mathcal{P}$ with $p<r<q$. We write $\mathcal{C}(\mathcal P)$ for the set of covering relations in $\mathcal P$. The \emph{Hasse diagram} of $\mathcal{P}$ is the directed graph on the elements of $\mathcal{P}$ with an edge from $p$ to $q$ if and only if $p<q\in\mathcal{C}(\mathcal P)$. For a finite poset $\mathcal{P}$ denote by $\hat{\mathcal{P}}$ the poset obtained from $\mathcal{P}$ by attaching a minimal element $\hat 0$ and a maximal element $\hat{1}$.
For a covering relation $p<q\in\mathcal{C}(\hat{\mathcal P})$ define $\mathbf{u}_{p<q}\in\mathbb{Z}^{\mathcal{P}}$ by \begin{equation}\label{FacetNormals} \mathbf{u}_{p<q}= \begin{cases} \mathbf{e}_p & \textnormal{ if }q=\hat 1\\ -\mathbf{e}_q & \textnormal{ if }p=\hat 0\\ \mathbf{e}_p-\mathbf{e}_q & \textnormal{ otherwise}, \end{cases} \end{equation} where $\mathbf{e}_p$ is the standard basis vector corresponding to an element $p\in\mathcal{P}$. Note that these vectors are precisely the facet normals of the order polytope $\mathcal{O}(\mathcal{P}^{op})$ (see \cite{St86}). To each such facet normal we associate a torus-invariant divisor $D_{p<q}$ on $X_\mathcal{P}$. Moreover, the set $\{D_{p<q}:p<q\in\mathcal{C}(\hat{\mathcal{P}})\}$ of all such divisors forms a basis of $\textnormal{Div}_T(X_\mathcal{P})$, the group of torus-invariant divisors on $X_\mathcal{P}$ (see \cite{CLS11}, Chapter 4). \begin{rem} The facet of $\mathcal{O}(\mathcal{P}^{op})$ with normal vector $\mathbf{u}_{p<q}$ is linearly equivalent to the order polytope $\mathcal{O}((\tilde\mathcal{P})^{op})$, where $\tilde\mathcal{P}$ is the poset obtained by first contracting the edge $p<q$ in the Hasse diagram of $\hat\mathcal{P}$ and then removing $\hat 0$ and $\hat 1$ (see \cite{St86}). Therefore it follows from \cite[Prop. 3.2.9]{CLS11} that $D_{p<q}$ is isomorphic to the Hibi variety $X_{\tilde\mathcal{P}}$. More explicitly, we have $D_{p<q}=X_\mathcal{P}\cap V(x_I:|(I\cup\{\hat{0}\})\cap\{p,q\}|=1)\subseteq\mathbb{P}^{|\mathcal{I}(\mathcal{P})|-1}$. \end{rem} Let $\textnormal{Cl}(X_\mathcal{P})$ denote the divisor class group of $X_\mathcal{P}$. The main result of this section is the following. \begin{thm}\label{Cl} Let $\mathcal{P}$ be a finite poset with $n$ elements and $X_\mathcal{P}$ the associated projective Hibi variety. Then we have \begin{equation*} \textnormal{Cl}(X_\mathcal{P})\cong\mathbb{Z}^{|\mathcal{C}(\hat{\mathcal{P}})|-n}.
\end{equation*} \end{thm} \begin{proof} We have the well-known exact sequence (see e.g. \cite[Thm. 4.1.3]{CLS11}) \begin{equation*} 0\longrightarrow\mathbb{Z}^\mathcal{P}\xlongrightarrow{\phi}\textnormal{Div}_T(X_\mathcal{P})\longrightarrow\textnormal{Cl}(X_\mathcal{P})\longrightarrow 0 \end{equation*} where the second map sends a divisor $D$ to its divisor class $[D]$ and $\phi$ is defined by \begin{equation*} \phi(\mathbf{m})=\sum\limits_{p<q\in\mathcal{C}(\hat{\mathcal{P}})}{\langle \mathbf{m},\mathbf{u}_{p<q}\rangle D_{p<q}}. \end{equation*} More explicitly, we have \begin{equation}\label{phi} \phi(\mathbf{e}_p)=\sum\limits_{p<q\in\mathcal{C}(\hat{\mathcal{P}})}{D_{p<q}}-\sum\limits_{r<p\in\mathcal{C}(\hat{\mathcal{P}})}{D_{r<p}}. \end{equation} To prove the theorem we will define a map $\psi:\textnormal{Div}_T(X_\mathcal{P})\to\mathbb{Z}^{|\mathcal{C}(\hat{\mathcal{P}})|-n}$ such that the sequence \begin{equation*} 0\longrightarrow\mathbb{Z}^{|\mathcal{P}|}\xlongrightarrow{\phi}\textnormal{Div}_T(X_\mathcal{P}) \xlongrightarrow{\psi}\mathbb{Z}^{|\mathcal{C}(\hat{\mathcal{P}})|-n}\longrightarrow 0 \end{equation*} is exact. From this it follows that $\textnormal{Cl}(X_{\mathcal{P}})\cong\mathbb{Z}^{|\mathcal{C}(\hat{\mathcal{P}})|-n}$.\\ To define $\psi$ we do the following. For every $p\in\mathcal P$ we choose an element $r_p\in\mathcal{P}\cup\{\hat 0\}$ such that $r_p<p$ is a covering relation. Let $T$ be the connected subgraph of the Hasse diagram of $\mathcal{P}\cup\{\hat 0\}$ whose edges are the covering relations $r_p<p$ for all $p\in\mathcal P$. Since $T$ has $n$ edges we can define a basis of $\mathbb{Z}^{|\mathcal{C}(\hat{\mathcal{P}})|-n}$ of the form $\{\mathbf{e}_{p<q}:p<q\in\mathcal{C}(\hat{\mathcal{P}})\backslash T\}$. Now define $\psi(D_{p<q})=\mathbf{e}_{p<q}$ for $p<q\in\mathcal{C}(\hat{\mathcal{P}})\backslash T$.
We want to define the image of all other divisors in a way such that $\textnormal{im}(\phi)\subseteq \textnormal{ker}(\psi)$. From \eqref{phi} we get that for $p<q\in\mathcal{C}(\hat{\mathcal{P}})$ we must have \begin{equation}\label{psi} \psi(D_{p<q})=\sum\limits_{q<r\in\mathcal{C}(\hat{\mathcal{P}})}{\psi(D_{q<r})}- \sum\limits_{p'<q\in\mathcal{C}(\hat{\mathcal{P}}):p'\neq p}{\psi(D_{p'<q})}. \end{equation} If $q$ is a leaf of $T$ equation \eqref{psi} uniquely defines $\psi(D_{p<q})$. But in fact, as we see by inductively removing leaves, the condition in \eqref{psi} already determines the value of $\psi$ on all edges of $T$.\\ It remains to show that $\textnormal{ker}(\psi)\subseteq\textnormal{im}(\phi)$. Let $D=\sum\limits_{p<q\in\mathcal{C}(\hat{\mathcal{P}})}{\alpha_{p<q}D_{p<q}}$ be a divisor in $\textnormal{ker}(\psi)$. We claim that it suffices to find $\mathbf{m}\in\mathbb{Z}^{\mathcal{P}}$ such that for $D'=D+\phi(\mathbf{m})=\sum\limits_{p<q\in\mathcal{C}(\hat{\mathcal{P}})}{\alpha'_{p<q}D_{p<q}}$ we have $\alpha'_{p<q}=0$ whenever $p<q\in T$. Indeed, by the first part of the proof, $D'$ must lie in $\textnormal{ker}(\psi)$. But since $\alpha'_{p<q}=0$ for all $p<q\in T$ this implies $D'=0$ and therefore $D=-\phi(\mathbf{m})\in\textnormal{im}(\phi)$. Any such $\mathbf{m}$ has to satisfy \begin{equation*} 0=\alpha'_{r_p<p}= \begin{cases} \alpha_{r_p<p}-m_p & \textnormal{ if }r_p=\hat 0\textnormal{ and}\\ \alpha_{r_p<p}+m_{r_p}-m_p & \textnormal{ otherwise}. \end{cases} \end{equation*} Hence we define $\mathbf{m}=(m_p)_{p\in\mathcal P}$ inductively by \begin{equation*} m_p= \begin{cases} \alpha_{\hat 0<p} & \textnormal{ for }p \textnormal{ minimal element of }\mathcal{P}\textnormal{ and}\\ \alpha_{r_p<p}+m_{r_p} & \textnormal{ otherwise}. \end{cases} \end{equation*} It is easy to see that this $\mathbf{m}$ has the desired properties. 
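The rank formula of Theorem \ref{Cl} can also be sanity-checked combinatorially on small posets; the following sketch (with our own helper `cl_rank`, not part of the argument) recovers the known class groups of $\mathbb{P}^n$ and of a Segre variety:

```python
def cl_rank(elements, covers):
    """Rank |C(P-hat)| - n from the theorem; covers lists the covering
    pairs (p, q) of P, i.e. the edges of its Hasse diagram."""
    lower = {p for p, q in covers}
    upper = {q for p, q in covers}
    n = len(elements)
    minimal = sum(1 for p in elements if p not in upper)  # covered by 0-hat
    maximal = sum(1 for p in elements if p not in lower)  # covering 1-hat
    return len(covers) + minimal + maximal - n

# Chain with 5 elements: X_P = P^5, whose class group Z has rank 1.
print(cl_rank(range(5), [(i, i + 1) for i in range(4)]))  # 1

# Disjoint union of a 2-chain and a 3-chain: the Segre variety
# P^2 x P^3, whose class group Z^2 has rank 2.
print(cl_rank(["a0", "a1", "b0", "b1", "b2"],
              [("a0", "a1"), ("b0", "b1"), ("b1", "b2")]))  # 2
```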
\end{proof} From the proof of Theorem \ref{Cl} we immediately get the following description of generators of $\textnormal{Cl}(X_\mathcal{P})$. \begin{cor}\label{gens} Let $T$ be an arborescence in the Hasse diagram of $\mathcal{P}\cup\{\hat 0\}$, i.e. a subgraph which for every $p\in\mathcal{P}$ contains a unique directed path from $\hat 0$ to $p$. Then the divisor class group $\textnormal{Cl}(X_\mathcal{P})$ is the free abelian group generated by the divisor classes $\{[D_{p<q}]:p<q\in\mathcal{C}(\hat{\mathcal{P}})\backslash T\}$. \end{cor} \begin{rem} The above proof is similar to the one in \cite{HHN92}, where the divisor class group of \emph{affine} Hibi varieties is computed. \end{rem} \section{Picard Group} Let $\textnormal{Pic}(X_\mathcal{P})$ denote the Picard group of $X_{\mathcal{P}}$. The main result of this section is the following. \begin{thm}\label{Pic} We have $\textnormal{Pic}(X_\mathcal{P})\cong\mathbb{Z}^l$ where $l$ denotes the number of connected components of the Hasse diagram of $\mathcal{P}$. \end{thm} The Picard group $\textnormal{Pic}(X_\mathcal{P})$ is isomorphic to the subgroup of $\textnormal{Cl}(X_\mathcal{P})$ which consists of divisor classes of locally principal divisors. Hence, we want to understand when a divisor $D_{p<q}$ is locally principal. For an ideal $I\in\mathcal{I}(\mathcal{P})$ let $\mathcal{C}_I(\hat{\mathcal{P}})=\{p<q\in\mathcal{C}(\hat{\mathcal{P}}):|\{p,q\}\cap(I\cup\{\hat 0\})|\neq 1\}$. Note that $\mathcal{C}_I(\hat{\mathcal{P}})$ corresponds to the set of all facets of $\mathcal{O}(\mathcal{P}^{op})$ which contain the vertex $\mathbf{a}_I$. Recall the description of the facet normals given in equation \eqref{FacetNormals}. With this notation we have the following criterion, which is a consequence of Thm. 4.2.8 in \cite{CLS11}. \begin{lem}\label{LocPrinc} Let $D=\sum\limits_{p<q\in\mathcal{C}(\hat{\mathcal{P}})}{\alpha_{p<q}D_{p<q}}$.
Then $D$ is locally principal if and only if for every $I\in\mathcal{I}(\mathcal{P})$ there is $\mathbf{m}\in\mathbb{Z}^{\mathcal{P}}$ such that \begin{equation*} \langle \mathbf{m},\mathbf{u}_{p<q}\rangle=\alpha_{p<q}\textnormal{ for all }p<q\in\mathcal{C}_I(\hat{\mathcal{P}}). \end{equation*} \end{lem} We will now use this to prove the main theorem. \begin{proof}[Proof of Theorem \ref{Pic}] We want to describe the subgroup of $\textnormal{Cl}(X_\mathcal{P})$ which consists of divisor classes of locally principal divisors. Let $[D]$ be a divisor class such that $D$ is locally principal. By Corollary \ref{gens} we may assume that $D$ is of the form \begin{equation*} D=\sum\limits_{p<q\in\mathcal{C}(\hat{\mathcal{P}})\backslash T}{\alpha_{p<q}D_{p<q}}. \end{equation*} We will first apply Lemma \ref{LocPrinc} for the ideals $\mathcal{P}$ and $\emptyset$ to get some conditions on the coefficients $\alpha_{p<q}$. Then we will show that these conditions are in fact sufficient.\\ Let $I=\mathcal{P}\in\mathcal{I}(\mathcal{P})$. Then $\mathcal{C}_I(\hat{\mathcal{P}})=\{p<q\in\mathcal{C}(\hat{\mathcal{P}}): q\neq\hat 1\}$. We claim that for all $p<q\in\mathcal{C}_I(\hat{\mathcal{P}})$ we must have $\alpha_{p<q}=0$. First note that for any chain $\hat 0< p_1<\cdots< p_k< q$ in the Hasse diagram of $\hat{\mathcal{P}}$ we have by the above lemma \begin{equation*} \alpha_{\hat 0<p_1}+\sum\limits_{1\le i\le k-1}{\alpha_{p_i<p_{i+1}}}+\alpha_{p_k<q}=-m_q=0, \end{equation*} where the last equality follows from choosing a chain in $T$. Now consider a chain of the form $\hat 0< p_1'<\cdots< p_l'< p< q$ such that $\hat 0< p_1'<\cdots< p_l'< p$ lies in $T$. This yields $\alpha_{p<q}=0$.\\ So far we have shown that $D$ must be of the form $D=\sum_{p\in M}{\alpha_{p<\hat 1}D_{p<\hat 1}}$, where $M$ denotes the set of maximal elements of $\mathcal{P}$. Now choose $I=\emptyset\in\mathcal{I}(\mathcal{P})$. 
We have $\mathcal{C}_I(\hat{\mathcal{P}})=\{p<q\in\mathcal{C}(\hat{\mathcal{P}}):p\neq\hat 0\}$. We claim that if $p_1,p_2\in M$ are in the same connected component of $\mathcal{P}$ then we must have $\alpha_{p_1<\hat 1}=\alpha_{p_2<\hat 1}$. We call $p_1,p_2\in M$ adjacent if there exists a $q\in\mathcal{P}$ such that $q<p_1$ and $q<p_2$. Since $\mathcal{P}$ is finite it suffices to prove the claim for adjacent $p_1,p_2$. Let $q\in\mathcal{P}$ such that $q<p_1,p_2$. As above we get $0=m_q-m_{p_1}=m_q-m_{p_2}$, which in particular implies $m_{p_1}=m_{p_2}$. But $m_{p_i}=\alpha_{p_i<\hat 1}$ by Lemma \ref{LocPrinc}, which proves the claim.\\ Let $\mathfrak{C}(\mathcal{P})$ be the set of connected components of $\mathcal{P}$. We have shown that $D$ must be of the form \begin{equation*} D=\sum\limits_{C\in\mathfrak{C}(\mathcal{P})}{\alpha_C D_C}\textnormal{ where }D_C=\sum\limits_{p\in M\cap C}{D_{p<\hat 1}}. \end{equation*} The only thing left to show is that every such $D$ is locally principal by again using Lemma \ref{LocPrinc}. Let $I\in\mathcal{I}(\mathcal{P})$. Define $\mathbf{m}=(m_p)_{p\in\mathcal{P}}$ as follows. For all $p\in I$ set $m_{p}=0$. For all $p\in\mathcal{P}\backslash I$, let $C$ be the connected component that $p$ lies in and set $m_{p}=\alpha_C$. It is easy to check that $\mathbf{m}$ has all the desired properties. \end{proof} \textbf{Acknowledgements.} This work generalizes results from the author's master thesis supervised by Gunnar Fl\o ystad, whom the author would like to thank for fruitful discussions and constant support. The author would also like to thank Raman Sanyal for helpful comments on a previous version of this paper. \linespread{1.0} \setlength{\parskip}{0cm} \small \end{document}
\begin{document} \title{Oscillations and Random Perturbations of a FitzHugh-Nagumo System} \author{Catherine Doss\thanks{ Laboratoire Jacques-Louis Lions, Bo\^ite 189, Universit{\'e} Pierre et Marie Curie-Paris 6, 4, Place Jussieu, 75252 Paris cedex 05, France; doss@ann.jussieu.fr }\\ \and Mich{\`e}le Thieullen\thanks{ Laboratoire de Probabilit{\'e}s et Mod{\`e}les Al{\'e}atoires, Bo\^ite 188, Universit{\'e} Pierre et Marie Curie-Paris 6, 4, Place Jussieu, 75252 Paris cedex 05, France; michele.thieullen@upmc.fr}\\ } \maketitle \begin{abstract} \noindent We consider a stochastic perturbation of a FitzHugh-Nagumo system. We show that it is possible to generate oscillations for values of parameters which do not allow oscillations for the deterministic system. We also study the appearance of a new equilibrium point and new bifurcation parameters due to the noisy component. \end{abstract} \noindent {\bf Keywords:} FitzHugh-Nagumo system, fast-slow system, excitability, equilibrium points, bifurcation parameter, limit cycle, bistable system, random perturbation, large deviations, metastability, stochastic resonance \eject \section{Introduction.} Let us consider the following family of deterministic systems indexed by the parameters $a\in{\bf R}$ and $\delta>0$. \begin{eqnarray}\label{FHNdet} \delta {\dot x}_t&=&-y_t+f(x_t),\quad x_0=x\\ {\dot y}_t&=&x_t -a,\quad y_0=y \end{eqnarray} and their stochastic perturbation by a one dimensional Wiener process $(W_t)$ as follows \begin{eqnarray}\label{FHNsto} \delta dX_t&=&(-Y_t+f(X_t))dt+\sqrt\epsilon dW_t,\quad X_0=x\\ dY_t&=&(X_t-a)dt,\quad Y_0=y \end{eqnarray} The function $f$ is a cubic polynomial: $f(x)= -x(x-\alpha)(x-\beta)$ with $\alpha<0<\beta$. The parameter $\delta$ is small. The deterministic system (\ref{FHNdet})-(1.2) is an example of a slow-fast system: the two variables $x,y$ have different time scales, $x_t$ evolves rapidly while $y_t$ evolves slowly.
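To visualise the two time scales concretely, the perturbed system (1.3)-(1.4) can be integrated by a straightforward Euler-Maruyama scheme. The sketch below is illustrative only and is not part of the paper; the cubic $f(x)=x(4-x^2)$ is the example used later in the text, while the values of $a$, $\delta$, $\epsilon$ and the step size are arbitrary choices.

```python
import numpy as np

def simulate_fhn(a=0.0, delta=0.01, eps=1e-4, T=5.0, dt=1e-4, x0=0.1, y0=0.0, seed=0):
    """Euler-Maruyama scheme for the perturbed FitzHugh-Nagumo system
        delta dX = (-Y + f(X)) dt + sqrt(eps) dW,   dY = (X - a) dt,
    with the example cubic f(x) = x(4 - x^2)."""
    rng = np.random.default_rng(seed)
    f = lambda x: x * (4.0 - x * x)
    n = int(round(T / dt))
    xs, ys = np.empty(n + 1), np.empty(n + 1)
    xs[0], ys[0] = x0, y0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        # fast variable: divide drift and noise increment by delta
        xs[i + 1] = xs[i] + ((-ys[i] + f(xs[i])) * dt + np.sqrt(eps) * dw) / delta
        # slow variable
        ys[i + 1] = ys[i] + (xs[i] - a) * dt
    return xs, ys
```

With these values the fast variable $X_t$ relaxes quickly to a stable branch of $y=f(x)$ while $Y_t$ drifts slowly; which regime the pair settles into depends on the ratio $\frac{\epsilon|\log\delta|}{\delta}$ studied below.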
This system is one version of the so called FitzHugh-Nagumo system and plays an important role in neuronal modelling. In this context $x_t$ denotes the voltage or action potential of the membrane of a single neuron. It was first proposed by FitzHugh and Nagumo (cf. \cite{Fi}, \cite{N}). One interest of this model is that it reproduces periodic oscillations observed experimentally. Indeed the FitzHugh-Nagumo system finds its origin in the nonlinear oscillator model proposed by van der Pol. It is also a simplification of the Hodgkin-Huxley model which describes the coupled evolution of the membrane potential and the different ionic currents: the existence of different time scales enables one to pass from a four dimensional model to a two dimensional one. Oscillations can take place because the deterministic system (\ref{FHNdet})-(1.2) exhibits bifurcations; more details will be given in section 3. Let us mention that oscillations in system (\ref{FHNdet})-(1.2) can only occur when $a\in]a_0,a_1[$ where $a_0<a_1$ are two particular values of the parameter $a$, namely the bifurcation parameters. \\ Our main interest in the present paper is to generate oscillations even for $a<a_0$ (symmetrically $a>a_1$) by adding a stochastic perturbation to the deterministic system. This may be interpreted as a resonance-type effect (cf. \cite{HI}, \cite{G}). We will therefore investigate possible oscillations for system (\ref{FHNsto})-(1.4). The presence of parameter $\epsilon$ introduces a third scale in the system and the relative strength of $\delta$ and $\epsilon$ measured by the ratio $\frac{\epsilon|\log\delta|}{\delta}$ will determine its evolution. Our study was inspired by reference \cite{F1} where M. Freidlin considers a random perturbation of the second order equation $\delta\frac{d^2 y_t}{dt^2}=g(\frac{dy_t}{dt}, y_t)$ and performs the study of its solution using the theory of large deviations (cf. \cite{FW}). See also \cite{F2} for the study of a more general situation.
In our case $g(\dot y, y)=y-f(\dot y+a)$. Although our argument is close to M. Freidlin's, the presence of parameter $a$ leads to a richer behaviour.\\ We prove the existence of equilibrium points and limit cycles different from the deterministic ones, as in \cite{F1}, as well as a new bifurcation parameter which did not exist for the deterministic system. Our study relies on transitions between basins of attraction of stable equilibrium points due to noise. Relying on estimates of a family of exit times (propositions 3.3 and 3.4), we study conditions on the parameters under which a suitable stochastic dynamical system approaches its main state (proposition 3.5), which corresponds to the equilibrium point exhibited in main theorem 2.2, or approaches a metastable state, which corresponds to the limit cycle and to the new bifurcation parameters exhibited in main theorem 2.1.\\ A general study of slow-fast systems perturbed by noise can be found in \cite{BG}. Bursting oscillations, in which a system alternates periodically between phases of quiescence and phases of repetitive spiking, have been studied for stochastically perturbed systems in \cite{HM} and may be studied later in our stochastic setting. We recall that in the deterministic setting a bursting-type behaviour has been generated in \cite{DFP}. The paper is organized as follows. In section 2 we recall basic facts about (\ref{FHNdet})-(1.2) and we state the two main theorems 2.1 and 2.2. Section 3 is devoted to the application of large deviation theory to (\ref{FHNsto})-(1.4) and section 4 to the proof of the main theorems. \section{Some Basic Results.} \subsection{Deterministic FitzHugh-Nagumo System.} \noindent In (\ref{FHNdet})-(1.2) let us consider $\alpha<0<\beta$ and $f(x)= -x(x-\alpha)(x-\beta)$. In order to investigate the asymptotic behaviour of $(x_t,y_t)$ one first looks for equilibrium points of the system and their stability.
The equilibrium points are defined as the points $(x,y)$ where the right-hand sides of both equations of the system vanish. For any value of $a$ there is therefore a unique equilibrium point for (\ref{FHNdet})-(1.2) which is $(a,f(a))$. Moreover let $a_0,a_1$ with $a_0<a_1$ be the two points where $f'$ vanishes. The stability of the equilibrium point $(a,f(a))$ changes when $a$ passes through the value $a_0$ (resp. $a_1$); $a_0$ and $a_1$ are called the bifurcation parameters of the system. Let us focus on $a_0$; an analogous argument holds for $a_1$. By linearizing system (\ref{FHNdet}) at $(a_0+\eta,f(a_0+\eta))$ for $\eta$ small, we obtain the system $\dot Z=AZ$ with $$ A=\pmatrix{{\frac{f'(a_0+\eta)}{\delta}}&-\frac{1}{\delta}\cr1&0\cr} $$ $A$ admits the two eigenvalues $\lambda_\pm=\frac{1}{2\delta}(f'(a_0+\eta)\pm i\sqrt{4\delta-{f'}^2(a_0+\eta)})$. The sign of $f'(a_0+\eta)$ is the same as that of $\eta$ since $f'$ is increasing in the neighbourhood of $a_0$. The point $(a_0+\eta,f(a_0+\eta))$ is therefore an attracting (resp. repulsive) focus when $\eta<0$ (resp. $\eta>0$). In particular, $(a,f(a))$ is stable when $a<a_0$, unstable when $a>a_0$. For $a\in]a_0,a_1[$ the system admits a limit cycle. The bifurcation is of Hopf type \cite{JPF}. It can be verified numerically that if $\delta<0.01$ the limit cycle is very close to the loop made up of the two attracting branches of the curve $y=f(x)$ where $x\mapsto f(x)$ is decreasing and $y\in[f(a_0),f(a_1)]$, and the portions of the two horizontal segments $y=f(a_0)$, $y=f(a_1)$ connecting them. When $\delta\rightarrow 0$ the period of this limit cycle is $O(1)$; for example for $f(x)=x(4-x^2)$ it is equal to $2$ (cf. \cite{Pi}). \subsection{Main Theorems} Consider the solution $(X_t,Y_t)$ of (1.3)-(1.4) and $S>0$ given in definition 3.3. Let us assume that $\epsilon >0$ and $\delta >0$ go to zero in such a way that for some constant $c>0$,
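The stability switch at $a_0$ can be checked numerically. The snippet below is an illustrative sanity check, not part of the paper: for $f(x)=x(4-x^2)$ one has $f'(x)=4-3x^2$ and $a_0=-2/\sqrt{3}$, and the real parts of the eigenvalues of the linearization matrix $A$ change sign as $a$ crosses $a_0$ (the value of $\delta$ is an arbitrary small number).

```python
import numpy as np

delta = 0.01                                 # illustrative small time-scale ratio
fprime = lambda x: 4.0 - 3.0 * x * x         # f'(x) for f(x) = x(4 - x^2)
a0 = -2.0 / np.sqrt(3.0)                     # smaller root of f'(a) = 0

def eig_real_parts(a):
    """Real parts of the eigenvalues of the linearization
    A = [[f'(a)/delta, -1/delta], [1, 0]] at the equilibrium (a, f(a))."""
    A = np.array([[fprime(a) / delta, -1.0 / delta], [1.0, 0.0]])
    return np.linalg.eigvals(A).real
```

Just below $a_0$ both real parts are negative (attracting equilibrium), just above they are positive (repulsive), consistent with the Hopf-type loss of stability described above.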
\begin{equation}\label{hyp} \frac{\epsilon|\log\delta|}{\delta}\rightarrow c \end{equation} Let us denote by $\lim*$ any limit on $\epsilon$ and $\delta$ going to $0$ under condition (\ref{hyp}). \newtheorem{theo}{Theorem}[section] \begin{theo}\label{periodique} Let $c\in]0,S[$; then:\\ \noindent 1. If $a\in]x_{-}(c),x_{+}(c)[$, where $x_{-}(c)$ and $x_{+}(c)$ are introduced in definition 3.4, then there exist two periodic functions $\Phi_c^a$ and $\Psi_c^a$ given in definition 4.2, such that for all $A$, $h>0$, $y\in]f(a_0),f(a_1)[$, \begin{eqnarray} \lim*&{\bf P}_{(x,y)}&(\int_0^A|X_t-\Phi_c^a(t)|^2 dt>h)=0\\ \lim*&{\bf P}_{(x,y)}&(\sup_{[0,A]}|Y_t-\Psi_c^a(t)|>h)=0 \end{eqnarray}\\ \noindent 2. If $a<x_{-}(c)$ or $a>x_{+}(c)$ then for all $y\in ]f(a_0),f(a_1)[$, for all $h>0$ there exists ${\hat t}(y,h)$ such that for all $A>{\hat t}(y,h)$,\\ \begin{equation} \lim*{\bf P}_{(x,y)}(\sup_{[{\hat t}(y,h),A]} (|X_t-a|+|Y_t-f(a)|)>h)=0 \end{equation} \end{theo} \begin{figure} \caption{Limit cycle when $f(x)=x(4-x^2)$ and $\frac{\epsilon|\log\delta|}{\delta}\rightarrow c$} \end{figure} \noindent \textbf{Remark}\\ In the first case the solution stabilizes, when $\delta\rightarrow 0$ and $\frac{\epsilon|\log\delta|}{\delta}\rightarrow c$, around a limit cycle defined by $c$ and different from the one obtained in the deterministic case when $\delta\rightarrow 0$ and $\epsilon=0$ (see figure 1). Moreover $x_{-}(c)$ and by symmetry $x_{+}(c)$ play the role of bifurcation parameters for the stochastic FitzHugh-Nagumo system (\ref{FHNsto})-(1.4). Indeed for $a$ in the neighborhood of $x_{-}(c)$ but smaller, the limit of $(X_t, Y_t)$ is a unique equilibrium point, whereas for $a$ in the neighborhood of $x_{-}(c)$ but greater, it is the graph of a periodic function. These bifurcation parameters are different from those of the deterministic system (\ref{FHNdet}). This theorem is a new result w.r.t. \cite{F1}. It is made possible by the freedom on parameter $a$.
The new limit cycle $(\Phi_c^a(t),\Psi_c^a(t))$ is defined in the same way as in \cite{F1}, Theorem 1, Part 3, provided we take into account the presence of $a$ in our system. \\ Other regimes are considered in the work of Berglund and Gentz (cf. \cite{BG} and references therein). \begin{theo}\label{nouvelequilibre} Let $c> S$. Consider $x_{-}^*(y^*)$ and $x_{+}^*(y^*)$ defined in proposition 3.2 and definition 3.3; then for all $y\in ]f(a_0),f(a_1)[$ and for all $h>0$ there exists ${\hat t}(y,h)$ such that for all $A>{\hat t}(y,h)$,\\ \noindent 1. If $a\in]x_{-}^*(y^*),x_{+}^*(y^*)[$ \begin{equation} \lim*{\bf P}_{(x,y)}(\sup_{[{\hat t}(y,h),A]} |Y_t-y^*|>h)=0 \end{equation} \noindent 2. If $a<x_{-}^*(y^*)$ or $a>x_{+}^*(y^*)$, \begin{equation} \lim*{\bf P}_{(x,y)}(\sup_{[{\hat t}(y,h),A]}(|X_t-a|+|Y_t-f(a)|)>h)=0 \end{equation} \end{theo} \noindent \textbf{Remark}:\\ \noindent Case 1 may be considered as a degenerate version of the limit cycle of case 1 of theorem 2.1. In fact $y^*$ is a fixed point but $X_t$ oscillates between $x_{-}^*(y^*)$ and $x_{+}^*(y^*)$.
\section{Exit Time, Main State and Metastable State} \subsection{Basic results on Large Deviations Theory} Because of the slow-fast property of FitzHugh-Nagumo systems, the slow variable $Y_t$ of system (1.3)-(1.4) may be in a first approximation frozen at the value $y$, which leads us to the study of the family of one dimensional dynamical systems indexed by parameter $y$, which plays a basic role in the study of FitzHugh-Nagumo systems (\ref{FHNdet})-(1.2) and (\ref{FHNsto})-(1.4): \begin{equation}\label{detyfixe} dx_t^y=(-y+f(x_t^y))dt,\quad x_0^y=x \end{equation} \noindent So we are led to consider the real valued deterministic system \begin{equation}\label{sysdet} dx_t=b(x_t)dt,\quad x_0=x \end{equation} and its perturbation by a brownian motion \begin{equation} d{\tilde x}_t=b({\tilde x}_t)dt+\sqrt{\tilde\epsilon} dW_t, \quad {\tilde x}_0=x \end{equation} \noindent Let us briefly recall some results from \cite{FW}. The process $({\tilde x}_t)$ describes the movement of a particle on the real line subject to the force field $b(x)$ and to a stationary Gaussian noise of amplitude $\sqrt{\tilde\epsilon}$. When $\tilde\epsilon\rightarrow 0$, $({\tilde x}_t)$ converges to the solution $(x_t)$ of (\ref{sysdet}): \begin{equation} \forall\eta>0\quad\forall T>0\quad \lim_{{\tilde\epsilon}\rightarrow 0}{\bf P}(\sup_{[0,T]}|{\tilde x}_t -x_t|>\eta)=0 \end{equation} However because of diffusion due to the presence of noise, some trajectories of the process $({\tilde x}_t)$ may present large deviations from those of the deterministic system $(x_t)$. Such deviations are measured by means of the action functional $S_{T_1}^{T_2}(\varphi)$, independent of $\tilde\epsilon$ and defined by \begin{equation} S_{T_1}^{T_2}(\varphi)=\frac{1}{2}\int_{T_1}^{T_2} | {\dot\varphi}_u-b(\varphi_u)|^2 du \end{equation} when $\varphi$ is absolutely continuous, and by $S_{T_1}^{T_2}(\varphi)=+\infty$ otherwise. \begin{theo} (\cite{FW}, Lemma 2.1, Chap. 4)\\ Let $\eta>0$.
Then \begin{equation} {\bf P}_x(\sup_{[0,T]}|{\tilde x}_t-x_t|\geq\eta)\leq\exp(-\frac{1}{\tilde\epsilon}[\inf_{\Delta} S_0^T(\varphi)+o(1)]) \end{equation} when $\tilde\epsilon\rightarrow0$ and where $\Delta:=\{\varphi; \varphi_0=x, \sup_{[0,T]}|\varphi_t-x_t|\geq\eta\}$. \end{theo} Large deviations theory also provides estimates on the first exit time of $({\tilde x}_t)$ from a domain (cf. \cite{FW}, Theorem 4.2, Chap. 4). Domains of interest are basins of attraction of stable equilibrium points of (\ref{sysdet}). The key tools are quasipotentials. \newtheorem{definition}{Definition}[section] \begin{definition} The quasipotential of the deterministic system (\ref{sysdet}) w.r.t. a point $\overline x$ (also called transition rate) is defined as the function \begin{equation} u\mapsto V(u):=\inf \{S_{T_1}^{T_2}(\varphi);0\leq T_1<T_2, \varphi(T_1)=\overline x, \varphi(T_2)=u\} \end{equation} \end{definition} \newtheorem{proposition}{Proposition}[section] \begin{proposition}\label{onedim} The quasipotential of (\ref{sysdet}) w.r.t. $\overline x$ coincides with the function \begin{equation} u\mapsto V(u)=-2\int_{\overline x}^u b(r)dr \end{equation} \end{proposition} \noindent \textbf{Remark}:\\ \noindent The above statement holds since (\ref{sysdet}) is one dimensional. It also holds in the multidimensional case when the drift $b$ of (\ref{sysdet}) is a gradient. \begin{theo}\label{estimeesortie} Let $x^*$ be a stable equilibrium point of (\ref{sysdet}) such that $b(r)<0$ for all $r>x^*$, and $b(r)>0$ for all $ r<x^*$. Let $D$ be the basin of attraction of $x^*$ and $\tilde\tau$ denote the first exit time of ${\tilde x}$ from $D$. Let us assume that $D=]\alpha_1,\alpha_2[$ with $V(\alpha_1)<V(\alpha_2)$.
Then for all $ x\in D$ and $ h>0$, \begin{equation} \lim_{\tilde\epsilon\rightarrow 0}{\bf P}_x({\tilde x}_{\tilde\tau}=\alpha_1)=1 \end{equation} \begin{equation}\label{tempssortie} \lim_{\tilde\epsilon\rightarrow 0}{\bf P}_x({\rm e}^{\frac{V(\alpha_1)-h}{\tilde\epsilon}}<\tilde\tau<{\rm e}^{\frac{V(\alpha_1)+h}{\tilde\epsilon}})=1 \end{equation} \end{theo} \noindent \textbf{Remark}:\\ \noindent With the notations of Theorem \ref{estimeesortie}, with great probability when $\tilde\epsilon\rightarrow 0$, the behaviour of the process $({\tilde x}_t)$ on an interval $[0,T(\tilde\epsilon)]$, in particular whether the process has jumped out of the basin of attraction $D$ or not, depends on the value of $\tilde\epsilon\log T(\tilde\epsilon)$ compared to $V(\alpha_1)$. \subsection{Two Families of Quasipotentials.} Let us now apply these results to the family of one dimensional dynamical systems indexed by parameter $y$ introduced in the preceding subsection, \begin{equation} dx_t^y=(-y+f(x_t^y))dt,\quad x_0^y=x \end{equation} \begin{proposition}\label{equilibriumpoints} Let $y\in]f(a_0),f(a_1)[$ with $a_0$ and $a_1$ the two points where $f'$ vanishes. \noindent (i) The set $\{x\in{\bf R}; f(x)=y\}$ consists of three points $x_{-}^*(y)<x_0^*(y)<x_{+}^*(y)$ (the equilibrium points of (\ref{detyfixe})) each being a continuous function of $y$ with bounded first and second derivatives. \noindent (ii) The two points $x_{\pm}^*(y)$ are stable. $x_0^*(y)$ is unstable. \end{proposition} \noindent \textbf{Proof of Proposition \ref{equilibriumpoints}}:\\ Left to the reader; we refer to figure 1 in section 2. \begin{definition} Let us define the two functions $V_{\pm}$ on $]f(a_0),f(a_1)[$ as follows: \begin{equation}\label{quasipotentiels} V_{\pm}(y)=-2\int_{x_{\pm}^*(y)}^{x_0^*(y)}(-y+f(u))du.
\end{equation} \end{definition} \noindent From Proposition \ref{onedim} we see that $V_{\pm}(y)$ is the value at $x_0^*(y)$ of the quasipotential of (\ref{sysdet}) w.r.t. $x_{\pm}^*(y)$. Both functions $ V_{\pm}(y)$ are strictly monotone: $ V_{-}$ is strictly increasing, $ V_{+}$ is strictly decreasing. Therefore their graphs restricted to $]f(a_0),f(a_1)[$ intersect at a unique point. \begin{definition}\label{notations1} \noindent We denote by $(y^*,S)$ the intersection point of the graphs of $ V_{-}$ and $ V_{+}$. Let ${\cal E}_1:=\{y>y^*\}$ and ${\cal E}_2:=\{y<y^*\}$. \end{definition} \begin{definition}\label{notations2} \noindent For $c\in]0,S[$ we denote by $y_{\pm}(c)$ the points of $]f(a_0),f(a_1)[$ which satisfy $y_{-}(c)< y^*<y_{+}(c)$ and $ V_{-}(y_{-}(c))=c=V_{+}(y_{+}(c))$. Let us also define $x_{-}(c):=x_{-}^*(y_{-}(c))$ and $x_{+}(c):=x_{+}^*(y_{+}(c))$ (cf. figure 1). \end{definition} \noindent\textbf{Remark}:\\ \noindent By definition $ V_{-}(y^*)=V_{+}(y^*)=S$. For $f(x)=4x-x^3$, $y^*=0$ and $S=8$. \noindent For any function $U$ satisfying $y-f(u)\equiv{\partial_u U}/2 $ the following identities hold: \begin{equation}\label{wells} V_{-}(y)=U(x_0^*(y))-U(x_{-}^*(y)), \quad V_{+}(y)=U(x_0^*(y))-U(x_{+}^*(y)) \end{equation} \noindent In our case such a function $U$ is a polynomial of degree $4$ which admits $x_0^*(y)$ as relative maximum and $x_{\pm}^*(y)$ as relative minima. The graph of the function $U$ has two wells with respective bottoms at $x_{\pm}^*(y)$ and one top at $x_0^*(y)$. Identities (\ref{wells}) express that $V_{\pm}(y)$ are the respective depths of these wells. Therefore, on ${\cal E}_1$ the well with bottom $x_{-}^*(y)$ is the deepest one while it is the contrary on ${\cal E}_2$.
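For the example $f(x)=4x-x^3$ the quasipotentials (3.11) can be evaluated in closed form once the three branches $x_{-}^*(y)<x_0^*(y)<x_{+}^*(y)$ are known, since an antiderivative of $f(u)-y$ is $F(u)=2u^2-u^4/4-yu$. The following sketch (illustrative, not part of the paper) checks the monotonicity of $V_{\pm}$ and the symmetry giving $y^*=0$.

```python
import numpy as np

def branches(y):
    """The three real solutions x_-^*(y) < x_0^*(y) < x_+^*(y) of f(x) = y,
    i.e. of x^3 - 4x + y = 0, for f(x) = 4x - x^3 and y in ]f(a_0), f(a_1)[."""
    return np.sort(np.roots([1.0, 0.0, -4.0, y]).real)

def quasipotentials(y):
    """V_-(y) and V_+(y) of (3.11): V_pm = -2 * integral from x_pm^* to x_0^*
    of (f(u) - y) du, using the antiderivative F(u) = 2u^2 - u^4/4 - y*u."""
    xm, x0, xp = branches(y)
    F = lambda u: 2.0 * u * u - u ** 4 / 4.0 - y * u
    return -2.0 * (F(x0) - F(xm)), -2.0 * (F(x0) - F(xp))
```

On $]f(a_0),f(a_1)[$ the function $V_-$ increases and $V_+$ decreases, and by the symmetry of this odd cubic the two graphs intersect at $y^*=0$.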
\noindent Second, the portion of the curve $z=f(x)$ connecting $(x_{-}^*(y),y)$ to $(x_0^*(y),y)$ is situated below the horizontal line $L_y:=\{(x,z);z=y\}$; thus the positive quantity $\frac{1}{2}V_{-}(y)$ measures the area bounded by the curve $z=f(x)$ and the segment of $L_y$ on which $x\in [x_{-}^*(y),x_0^*(y)]$. In the same way $\frac{1}{2}V_{+}(y)$ measures the area bounded by the curve $z=f(x)$ and the segment of $L_y$ on which $x\in [x_0^*(y),x_{+}^*(y)]$, but in this case the portion of the curve is above the line segment. \noindent Moreover as we will see in the following subsection $V_{\pm}$ are connected to exit times of diffusions \begin{equation} d{\tilde Z}_t^y=(-y+f({\tilde Z}_t^y))dt+\sqrt{\tilde\epsilon} dW_t,\quad {\tilde Z}_0^y=x \end{equation} from the basins of attraction of $x_{\pm}^*(y)$ (cf. Theorem \ref{estimeesortie}). \subsection{Exit Times, Main State and Metastable States} We refer the reader to \cite{F2} for the present section. Let us recall the fundamental difference between the two parameters $\epsilon$ and $\delta$. Parameter $\delta$ is already present in the deterministic system (\ref{FHNdet})-(1.2) where it measures the difference between the time scale of the slow variable $y_t$ and the time scale of the fast variable $x_t$. In particular after the time change $s:=t/\delta$, the trajectory $({\tilde x}_s,{\tilde y}_s):=(x_{\delta s},y_{\delta s})$ satisfies \begin{eqnarray}\label{dettime} {\dot {\tilde x}}_s&=&-{\tilde y}_s+f({\tilde x}_s)\\ {\dot {\tilde y}}_s&=&\delta({\tilde x}_s -a) \end{eqnarray} Since $\delta$ is small the component ${\tilde y}_s$ may be considered as constant equal to ${\tilde y}_0$. Let us define $Z_t^y$ to be the family of solutions of \begin{equation} \delta d Z_t^y=(-y+f( Z_t^y))dt+\sqrt\epsilon dW_t,\quad Z_0^y=x.
\end{equation} We will also consider the time change $({\tilde Z}_s^y)$ of $(Z_t^y)$ under $s:=t/\delta$ \begin{equation}\label{Ztilda} d{\tilde Z}_s^y=(-y+f({\tilde Z}_s^y))ds+\sqrt{\tilde\epsilon} d{\tilde W}_s,\quad {\tilde Z}_0^y=x \end{equation} with $\tilde\epsilon=\frac{\epsilon}{\delta}$. $({\tilde Z}_t^y)$ is the stochastic perturbation of (\ref{detyfixe}). \\ In the two following propositions we give crucial estimates on the exit times of the respective solutions of (3.17) and (3.18). \begin{proposition}\label{tempssortie1} Let ${\tilde\tau}_1^y$ (resp. ${\tilde\tau}_2^y$) denote the exit time of ${\tilde Z}^y$ from $D_1^y$ (resp. $D_2^y$) which is the basin of attraction of $x_{-}^*(y)$ (resp. $x_{+}^*(y)$). Let us recall that $D_1^y =]-\infty,x_0^*(y)[$ (resp. $D_2^y=]x_0^*(y),+\infty[$). From Theorem \ref{estimeesortie} identity (\ref{tempssortie}), for $x\in D_1^y$ and $h>0$ we obtain\\ \noindent $\forall x \in ]-\infty,x_0^*(y)[ , \forall h>0$ \begin{equation}\label{tildeexit1} \lim_{\tilde\epsilon\rightarrow 0}{\bf P}_x(\exp(\frac{V_{-}(y)-h}{\tilde\epsilon})<{\tilde\tau}_1^y<\exp(\frac{V_{-}(y)+h}{\tilde\epsilon}))=1 \end{equation} An analogous result holds for $x\in D_2^y$ by replacing $\tilde\tau_1^y$ by $\tilde\tau_2^y$ and $V_{-}(y)$ by $V_{+}(y)$:\\ \noindent $\forall x \in ]x_0^*(y),+\infty[ , \forall h>0 $: \begin{equation}\label{tildeexit2} \lim_{\tilde\epsilon\rightarrow 0}{\bf P}_x(\exp(\frac{V_{+}(y)-h}{\tilde\epsilon})<{\tilde\tau}_2^y<\exp(\frac{V_{+}(y)+h}{\tilde\epsilon}))=1 \end{equation} \end{proposition} \begin{proposition}\label{tempssortie2} Let $\tau_1^y$ (resp. $\tau_2^y$) denote the first exit time of $Z^y$ from $D_1^y$ (resp. $D_2^y$). The law of $\tau_i^y$ is the same as the law of $\delta{\tilde\tau}_i^y$ for $i=1,2$. Let us assume that $\epsilon$ and $\delta$ go to $0$ in such a way that \begin{equation}\label{epsilondelta} \frac{\epsilon}{\delta}|\log\delta|\rightarrow c\in]0,+\infty[.
\end{equation} In this case we obtain: \\ \noindent $\forall x \in ]-\infty,x_0^*(y)[ , \forall h>0$ \begin{equation}\label{sortiebassin1} \quad\lim{\bf P}_x(\delta^{c^{-1}(c-V_{-}(y)+h)}<\tau_1^y<\delta^{c^{-1}(c-V_{-}(y)-h)})=1 \end{equation} An analogous result holds for $x\in D_2^y$ by replacing $\tau_1^y$ by $\tau_2^y$ and $V_{-}(y)$ by $V_{+}(y)$:\\ \noindent$ \forall x \in ]x_0^*(y),+\infty[, \forall h>0$ \begin{equation}\label{sortiebassin2} \quad\lim{\bf P}_x(\delta^{c^{-1}(c-V_{+}(y)+h)}<\tau_2^y<\delta^{c^{-1}(c-V_{+}(y)-h)})=1 \end{equation} \end{proposition} These two propositions enable us to introduce some remarks about the main theorems stated in the previous section. These remarks are linked to the notions of main state and metastable state introduced in \cite{FW}.\\ In a general framework (cf. \cite{F2}) the main state is the point towards which the cost of moving, or the transition rate, is minimum. It is not always unique: for instance in our bistable case there are two main states when $y=y^*$. Before reaching the main state, the process may reach metastable ones accessible for shorter time lengths. \\ Main states may be considered as stable states, whereas metastable states are only stable in some time scale. To study transitions of (\ref{Ztilda}) between the two basins of attraction during $[0,T(\tilde\epsilon)]$, the relevant quantity to consider is $\tilde\epsilon\log T(\tilde\epsilon)$ which we must compare to the transition rates $V_{\pm}(y)$ given by (\ref{quasipotentiels}). \noindent For the time scale $T(\tilde\epsilon)={\rm e}^{\frac{c}{\tilde\epsilon}}$ we have $\tilde\epsilon\log T(\tilde\epsilon)=c$. So we must compare $c$ to the transition rates $V_{\pm}(y)$. Actually this amounts to first comparing $c$ to $S$ defined in definition 3.3. We refer again to figure 1.\\ More precisely we can state, using definitions 3.3 and 3.4: \begin{proposition}\label{state} \noindent 1.
When $c>S$ the main state of ${\tilde Z}^y$ is equal to $x_{+}^*(y)$ (resp. $x_{-}^*(y)$) if $y<y^*$ (resp. $y>y^*$). \\ When $y=y^*$ the two points $x_{\pm}^*(y^*)$ are both main states.\\ \noindent 2. When $c<S$, for $y \in ]y_{-}(c),y_{+}(c)[$ and $x\in D_1^y$ (resp. $x\in D_2^y$) the metastable state of ${\tilde Z}^y$ is equal to $x_{-}^*(y)$ (resp. $x_{+}^*(y)$). \end{proposition} \noindent \textbf{Proof of Proposition \ref{state}}:\\ \noindent Direct consequence of the exit time estimates given in proposition 3.4. \noindent \textbf{Remark}:\\ \noindent 1. When $c>S$ the time interval $[0,T(\tilde\epsilon)]$ is long enough so that the process $Z^y$ reaches with great probability a small neighborhood of its main states.\\ Moreover we can find an open interval $I$ containing $y^*$ such that: \begin{equation}\label{lemme1} \forall y\in I\quad c>\max(V_{-}(y),V_{+}(y)). \end{equation} Following proposition \ref{tempssortie2}, both exit times $\tau_1^y$ and $\tau_2^y$ tend to $0$ in probability, so we can only expect results on the slow component, and the result depends on whether the boundary between the two main states is attracting or not, as is shown in theorem 2.2.\\ \noindent 2. On the contrary, when $c<S$ the time interval $[0,T(\tilde\epsilon)]$ is too short and the process only reaches with great probability a neighborhood of a metastable state. In this case we have $y_{-}(c)<y_{+}(c)$ and if we consider the interval $]y_{-}(c),y_{+}(c)[$, on which $c<V_{-}(y)$ for $x\in D_1^y$ (resp. $c<V_{+}(y)$ for $x\in D_2^y$), the exit time $\tau_1^y$ (resp. $\tau_2^y$) tends to infinity for all $y\in ]y_{-}(c),y_{+}(c)[$. So the process $Z^y$ remains in a neighborhood of one metastable state and switches to the other one as soon as $Y_t$ gets out of $]y_{-}(c),y_{+}(c)[$; this gives rise to a limit cycle as is shown in theorem 2.1.
\section{Proof of the Main Theorems} \begin{definition}\label{period} For $a\in ]x_{-}(c),x_{+}(c)[$ and $z\in ]y_{-}(c),y_{+}(c)[$, as we can check on figure 1, we have:\\ \begin{equation} x_{-}^*(z)<x_{-}(c)<a<x_{+}(c)<x_{+}^*(z) \end{equation} Then for $c\in]0,S[$, $a\in]x_{-}(c),x_{+}(c)[$ and $y\in ]y_{-}(c),y_{+}(c)[$, the following definitions of time durations make sense \begin{equation} T_1^a(c)=\int_{y_{-}(c)}^{y_{+}(c)}\frac{dy}{x_{+}^*(y)-a}\quad {\rm and}\quad T_2^a(c)=\int_{y_{-}(c)}^{y_{+}(c)}\frac{dy}{|x_{-}^*(y)-a|} \end{equation} where $y_{\pm}(c)$ and $x_{\pm}(c)$ have been introduced in Proposition 3.2 and Definition \ref{notations2}. \end{definition} On the interval $[0,T_1^a(c)+T_2^a(c)]$ we can introduce the solutions of the following ordinary differential equations used to define the limit cycle in Theorem \ref{periodique}. \begin{definition}\label{notationsder} For $c\in]0,S[$, $a\in]x_{-}(c),x_{+}(c)[$ and $y\in ]y_{-}(c),y_{+}(c)[$ we define $\Psi_c^a$ as the continuous periodic function with period $T^a(c)=T_1^a(c)+T_2^a(c)$ satisfying $\Psi_c^a(0)=y_{-}(c)$ and the ode \begin{eqnarray}\label{odes} {\dot\Psi}_c^a(t)&=&x_{+}^*(\Psi_c^a(t))-a , \quad t\in[0,T_1^a(c)[\\ {\dot\Psi}_c^a(t)&=&x_{-}^*(\Psi_c^a(t))-a , \quad t\in[T_1^a(c),T_1^a(c)+T_2^a(c)[ \end{eqnarray} For $t\notin \{ kT^a(c), kT^a(c)+T_1^a(c), k\in{\bf Z}\}$, we denote by $\Phi_c^a(t)$ the derivative of $\Psi_c^a$ at $t$. \end{definition} Existence and regularity of solutions to odes (4.3)-(4.4) follow from the $C^2$ regularity of the functions $x_{\pm}^*$. Under the assumptions above we recall that we have (see figure 1) \begin{equation} x_{-}^*(z)<x_{-}(c)<a<x_{+}(c)<x_{+}^*(z) \end{equation} so $\Psi_c^a(t)$ increases from $y_{-}(c)$ to $y_{+}(c)$ in a duration of time equal to $T_1^a(c)$ and decreases from $y_{+}(c)$ to $y_{-}(c)$ in a duration of time equal to $T_2^a(c)$.
The periodic function $\Psi_c^a$ is obtained by continuously sticking together at $y_{\pm}(c)$ the solutions ${\overline y}_{\pm}^a$ of the following odes \begin{equation} {\dot{\overline y}}_{\pm}^a(t)=x_{\pm}^*({\overline y}_{\pm}^a(t))-a,\quad {\overline y}_{\pm}^a(0)=y \end{equation} \noindent \textbf{Remark}:\\ \noindent The function $\Psi_c^a$ is continuous on ${\bf R}$, differentiable with continuous derivative except at points in the set $\{kT^a(c), kT^a(c)+T_1^a(c), k\in{\bf Z}\}$.\\ For $ t\in[0,T_1^a(c)[$ the point $(x_{+}^*(\Psi_c^a(t)), \Psi_c^a(t))$ belongs to the right stable branch; for $t\in[T_1^a(c), T_1^a(c)+T_2^a(c)[ $ the point $(x_{-}^*(\Psi_c^a(t)), \Psi_c^a(t))$ belongs to the left stable branch. \noindent{\bf Proof of Theorem \ref{periodique}.} \\ \noindent 1. Case $a\in]x_{-}(c),x_{+}(c)[$:\\ a) Let $y<y_{-}(c)$ and $x\in D_1^y$. Then $\lim*{\bf P}_x(\tau_1^y=0)=1$ since $V_{-}(y)<V_{-}(y_{-}(c))=c$ (remember that $V_{-}$ is strictly increasing). Therefore the process $(X_t)$ leaves $D_1^y$ instantaneously and is attracted to a neighborhood of $x_{+}^*(y)>x_{+}(c)$. The point $(X_t, Y_t)$ remains in a neighbourhood of the branch $v=f(u)$ containing $(x_{+}^*(y),y)$ and $Y_t$ increases as long as $X_t>x_{+}(c)$ since $dY_t=(X_t-a)dt$. However identity (3.23) implies that $\lim*{\bf P}_x(\tau_2^z=+\infty)=1$ for $z<y_{+}(c)$ (resp. $\lim*{\bf P}_x(\tau_2^z=0)=1$ for $z>y_{+}(c)$). Therefore the point $(X_t,Y_t)$ is instantaneously attracted to a neighborhood of $(x_{-}^*(y_{+}(c)),y_{+}(c))$ after $Y_t$ has crossed $y_{+}(c)$ since the speed of $Y_t$ is strictly positive in a neighbourhood of $(x_{+}(c),y_{+}(c))$. The argument is then the same as before. We detail it for completeness. Since $x_{-}^*(y_{+}(c))<x_{-}(c)<a$, the second coordinate $Y_t$ decreases as long as $X_t-a<0$. However identity (3.22) implies that $\lim*{\bf P}_x(\tau_1^z=+\infty)=1$ for $z>y_{-}(c)$ (resp. $\lim*{\bf P}_x(\tau_1^z=0)=1$ for $z<y_{-}(c)$).
Therefore the point $(X_t,Y_t)$ is instantaneously captured by $(x_{+}^*(y_{-}(c)),y_{-}(c))$ after $Y_t$ has crossed $y_{-}(c)$. Hence $(X, Y)$ converges in probability to a limit cycle of period $T^a(c)=T_1^a(c)+T_2^a(c)$.\\ b) Let $y>y_{-}(c)$ and $x\in D_1^y$. By identity (3.22), $\lim*{\bf P}_x(\tau_1^y=+\infty)=1$, and the fast process $(X_t)$ is attracted to $x_{-}^*(y)$. After this first phase, one can apply the same argument as in a).\\ \noindent 2. Case $a<x_{-}(c)$. As already mentioned in section 3, when $c<S$, the relevant states are the metastable ones. The assumption $a<x_{-}(c)$ implies $f(a)>y_{-}(c)$. Let ${\cal V}$ be a small neighborhood of $(a,f(a))$. Let us first assume that ${\cal V}$ is so small that all $(u,v)\in {\cal V}$ satisfy $v>y_{-}(c)$ and accordingly $u<x_{-}(c)$. Let $(X_t,Y_t)$ start from $(u,v)$. In particular $u\in D_1^v$ and $\tau_1^v=+\infty$ since $V_{-}(v)>c$. Then $(Y_t)$ will evolve like the solution of ${\dot v}_t=x_{-}^*(v_t)-a$, $v_0=v$ for which $f(a)$ is a stable equilibrium point. Let us now assume that there are points $(u,v)\in {\cal V}$ such that $v<y_{-}(c)$. Then $(X_t)$ instantaneously jumps to the right stable branch; $(Y_t)$ becomes close to the solution of ${\dot v}_t=x_{+}^*(v_t)-a$ and therefore increases until it reaches $y_{+}(c)$. At that time it jumps to the left stable branch and we are back to the previous argument since then $(Y_t)$ becomes close to the solution of ${\dot v}_t=x_{-}^*(v_t)-a$.\\ \noindent Let us notice that for the slow variable $Y_t$ we get a result using the uniform norm by large deviation estimates, but for the fast variable $X_t$ the result is formulated in $L^2$ norm thanks to equation (1.4).
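The half-periods $T_1^a(c)$ and $T_2^a(c)$ of definition 4.1 can be evaluated numerically: $y_{\pm}(c)$ are obtained by bisection from the quasipotentials $V_{\pm}$ of (3.11), and the integrals by a trapezoidal rule. The sketch below (illustrative parameter values, not from the paper) uses the example $f(x)=4x-x^3$, for which the symmetry of the odd cubic forces $T_1^a(c)=T_2^a(c)$ when $a=0$.

```python
import numpy as np

def branches(y):
    """Roots x_-^*(y) < x_0^*(y) < x_+^*(y) of 4x - x^3 = y."""
    return np.sort(np.roots([1.0, 0.0, -4.0, y]).real)

def V_minus(y):
    xm, x0, _ = branches(y)
    F = lambda u: 2.0 * u * u - u ** 4 / 4.0 - y * u   # antiderivative of f(u) - y
    return -2.0 * (F(x0) - F(xm))

def V_plus(y):
    _, x0, xp = branches(y)
    F = lambda u: 2.0 * u * u - u ** 4 / 4.0 - y * u
    return -2.0 * (F(x0) - F(xp))

def bisect(g, lo, hi, it=80):
    """Plain bisection for a sign change of g on [lo, hi]."""
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def half_periods(a, c, n=2000):
    """T_1^a(c) and T_2^a(c) of definition 4.1 by trapezoidal quadrature."""
    ymax = 16.0 / (3.0 * np.sqrt(3.0))                          # f(a_1) for f(x) = 4x - x^3
    y_lo = bisect(lambda y: V_minus(y) - c, -ymax + 1e-9, 0.0)  # y_-(c): V_- increasing
    y_hi = bisect(lambda y: V_plus(y) - c, 0.0, ymax - 1e-9)    # y_+(c): V_+ decreasing
    ys = np.linspace(y_lo, y_hi, n)
    g1 = np.array([1.0 / (branches(y)[2] - a) for y in ys])     # 1/(x_+^*(y) - a)
    g2 = np.array([1.0 / abs(branches(y)[0] - a) for y in ys])  # 1/|x_-^*(y) - a|
    trap = lambda g: float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(ys)))
    return trap(g1), trap(g2)
```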
\noindent {\bf Proof of Theorem \ref{nouvelequilibre}.} When $c>S$, as pointed out in subsection (3), $Z^y$ has time to reach its main state, so the evolution of $Y_t$ is close to the solution of \begin{eqnarray*} {\dot v}_t&=&x_{+}^*(v_t)-a\quad {\rm provided}\quad v_t<y^*\\ {\dot v}_t&=&x_{-}^*(v_t)-a\quad {\rm provided} \quad v_t>y^* \end{eqnarray*} and it depends on whether the boundary $\{ y^*\}$ between ${\cal E}_1:=\{y>y^*\}$ and ${\cal E}_2:=\{y<y^*\}$ is attractive for this system or not (cf.~\cite{F2}). If $a\in ]x_{-}^*(y^*),x_{+}^*(y^*)[$ the boundary is attractive. This is not the case when $a<x_{-}^*(y^*)$ nor when $a>x_{+}^*(y^*)$. \noindent 1. Let $x_{-}^*(y^*)<a<x_{+}^*(y^*)$. The point $y^*$ is an attracting boundary since $x_{-}^*(y^*)-a<0<x_{+}^*(y^*)-a$. \noindent Assume for example that $Y_0=y<y^*$, that is, $y\in{\cal E}_2$. The evolution of $(Y_t)$ is close to the solution of ${\dot{\overline Y}}_t=x_{+}^*({\overline Y}_t)-a$. Therefore, since $x_{+}^*(y)>a$, the process $(Y_t)$ starts increasing until it reaches $y^*$. After it has reached $y^*$, the evolution of $(Y_t)$ is close to the solution of ${\overline Y}_0=y^*$, ${\dot{\overline Y}}_t=B({\overline Y}_t)$, where $B$ is a vector field tangent to the boundary (cf. \cite{F2} Theorem 1), which is the $0$-dimensional manifold $\{y^*\}$; therefore $B$ is zero and ${\overline Y}_t\equiv y^*$. The time ${\hat t}(y,h)$ is the time necessary to reach a small ball $B(y^*,h)$. \noindent If $y\in{\cal E}_1$ the evolution of $(Y_t)$ is close to the solution of ${\dot{\overline {\cal Y}}}_t=x_{-}^*({\overline {\cal Y}}_t)-a$. In this case the process $(Y_t)$ starts decreasing, since $x_{-}^*(y)<a$, until it reaches $y^*$. The argument and the conclusion are then identical to the preceding case. \noindent 2. Let $a>x_{+}^*(y^*)$. The point $y^*$ is not an attracting boundary since $x_{-}^*(y^*)-a<0$ and $x_{+}^*(y^*)-a<0$. \noindent Let us assume for example that $Y_0=y\in {\cal E}_2$.
The evolution of $(Y_t)$ is close to the solution of ${\dot{\overline Y}}_t=x_{+}^*({\overline Y}_t)-a$. The value $f(a)$ is a stable equilibrium for $({\overline Y}_t)$. If $x_{+}^*(y)>a$, the process $(Y_t)$ starts increasing until it reaches $f(a)$, to which it is attracted. If now $x_{+}^*(y)<a$ the argument and the conclusion are the same, except that $(Y_t)$ starts decreasing. If $y\in {\cal E}_1$ the evolution of $(Y_t)$ is close to the solution of ${\dot{\overline {\cal Y}}}_t=x_{-}^*({\overline {\cal Y}}_t)-a$. Since $x_{-}^*(y^*)-a<0$, the process $(Y_t)$ starts decreasing until it crosses $y^*$ (which is not attractive) towards ${\cal E}_2$. Then we are back to the previous case. \noindent 3. The case $a<x_{-}^*(y^*)$ is treated analogously. \noindent {\bf Acknowledgment.} We thank Prof. Mark Freidlin for sending us a copy of his paper \cite{F2}, which was not available to us. \end{document}
\begin{document} \title{\LARGE\bf A rational approximation of the arctangent function and a new approach in computing pi} \author{ \normalsize\bf S. M. Abrarov\footnote{\scriptsize{Dept. Earth and Space Science and Engineering, York University, Toronto, Canada, M3J 1P3.}}\, and B. M. Quine$^{*}$\footnote{\scriptsize{Dept. Physics and Astronomy, York University, Toronto, Canada, M3J 1P3.}}} \date{March 24, 2016} \maketitle \begin{abstract} We have shown recently that integration of the error function ${\rm{erf}}\left( x \right)$ represented as a sum of Gaussian functions provides an asymptotic expansion series for the constant pi. In this work we derive a rational approximation of the arctangent function $\arctan \left( x \right)$ that can be readily generalized to its counterpart $ - {\rm{sgn}}\left( x \right)\pi /2 + \arctan \left( x \right)$, where ${\rm{sgn}}\left( x \right)$ is the signum function. The application of the expansion series for these two functions leads to a new asymptotic formula for $\pi$. \\ \noindent {\bf Keywords:} arctangent function, error function, Gaussian function, rational approximation, constant pi \end{abstract} \section{Derivation} Consider the following integral \cite{Fayed2014} \begin{equation}\label{eq_1} \int\limits_0^\infty {{e^{ - {y ^2}{t^2}}}{\rm{erf}}\left( {x t} \right)dt} = \frac{1}{{y \sqrt \pi }}\arctan \left( {\frac{x }{y }} \right), \end{equation} \\ \noindent where all the variables $t$, $x$ and $y$ are assumed to be real. Setting $y = 1$, the integral \eqref{eq_1} can be rewritten as \begin{equation}\label{eq_2} \arctan \left( x \right) = \sqrt \pi \int\limits_0^\infty {{e^{ - {t^2}}}{\rm{erf}}\left( {xt} \right)dt}.
\end{equation} The error function can be represented as a sum of Gaussian functions (see Appendix A) \begin{equation}\label{eq_3} {\rm{erf}}\left( x \right) = \frac{{2x}}{{\sqrt \pi }} \times \mathop {\lim }\limits_{L \to \infty } \frac{1}{L}\sum\limits_{\ell = 1}^L {{e^{ - \frac{{{{\left( {\ell - 1/2} \right)}^2}{x^2}}}{{{L^2}}}}}}. \end{equation} Consequently, substituting this limit into equation \eqref{eq_2} leads to \[ \arctan \left( x \right) = \sqrt \pi \times \mathop {\lim }\limits_{L \to \infty } \int\limits_0^\infty {{e^{ - {t^2}}}\underbrace {\frac{{2xt}}{{\sqrt \pi L}}\sum\limits_{\ell = 1}^L {{e^{ - \frac{{{{\left( {\ell - 1/2} \right)}^2}{x^2}{t^2}}}{{{L^2}}}}}} }_{{\rm{erf}}\left( {xt} \right)}dt}. \] Each integral term in this equation is analytically integrable. Consequently, we obtain a new equation for the arctangent function \begin{equation}\label{eq_4} \arctan \left( x \right) = 4 \times \mathop {\lim }\limits_{L \to \infty } \sum\limits_{\ell = 1}^L {\frac{{Lx}}{{{{\left( {2\ell - 1} \right)}^2}{x^2} + 4{L^2}}}}. \end{equation} Since $$ \pi = 4\arctan \left( 1 \right) $$ it follows that \begin{equation}\label{eq_5} \pi = 16 \times \mathop {\lim }\limits_{L \to \infty } \sum\limits_{\ell = 1}^L {\frac{L}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}}}}. \end{equation} It should be noted that the limit \eqref{eq_5} has already been reported in our recent work \cite{Abrarov2016}. \begin{figure}\end{figure} Truncation of the limit \eqref{eq_4} yields a rational approximation of the arctangent function \begin{equation}\label{eq_6} \arctan \left( x \right) \approx 4L\sum\limits_{\ell = 1}^L {\frac{x}{{{{\left( {2\ell - 1} \right)}^2}{x^2} + 4{L^2}}}}.
\end{equation} Figure 1 shows the difference between the original arctangent function $\arctan \left( x \right)$ and its rational approximation \eqref{eq_6} $$ \varepsilon = \arctan \left( x \right) - 4L\sum\limits_{\ell = 1}^L {\frac{x}{{{{\left( {2\ell - 1} \right)}^2}{x^2} + 4{L^2}}}} $$ over the range $ - 1 \le x \le 1$ at $L = 100$, $L = 200$, $L = 300$, $L = 400$ and $L = 500$, shown by blue, red, green, brown and black curves, respectively. As we can see from this figure, the difference $\varepsilon $ depends upon $x$. In particular, it increases as the absolute value $\left| x \right|$ of the argument increases. Thus, we can conclude that the rational approximation \eqref{eq_6} of the arctangent function is more accurate when its argument is smaller. Consequently, in order to obtain a higher accuracy we have to look for an equation in the form $$ \pi = \sum\limits_{n = 1}^N {{a_n}\arctan \left( {{b_n}} \right)}, \qquad \left|{b_n}\right| \ll 1, $$ where ${a_n}$ and ${b_n}$ are coefficients, with the arguments ${b_n}$ of the arctangent function as small as possible in absolute value. For example, applying the equation \eqref{eq_6} we may expect that at some fixed $L$ the approximation $$ \pi = \, 4\arctan \left( {x = 1} \right) \approx 16L\sum\limits_{\ell = 1}^L {\frac{1}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}}}} $$ is less accurate than the approximation based on Machin's formula \cite{Borwein1989, Borwein2015} \small \[ \begin{aligned} \pi &= \, 4\left[ {4\arctan \left( {\frac{1}{5}} \right) - \arctan \left( {\frac{1}{{239}}} \right)} \right]\\ &\approx 16L\sum\limits_{\ell = 1}^L {\left( {\frac{{4\left( {1/5} \right)}}{{{{\left( {2\ell - 1} \right)}^2}{{\left( {1/5} \right)}^2} + 4{L^2}}} - \frac{{1/239}}{{{{\left( {2\ell - 1} \right)}^2}{{\left( {1/239} \right)}^2} + 4{L^2}}}} \right)}.
\end{aligned} \] \normalsize Furthermore, with the same equation \eqref{eq_6} for $\arctan \left( x \right)$ we can improve accuracy by using another formula for pi \cite{Borwein2015} \small \[ \begin{aligned} \pi = & \, 4\left[ {12\arctan \left( {\frac{1}{{18}}} \right) + 8\arctan \left( {\frac{1}{{57}}} \right) - 5\arctan \left( {\frac{1}{{239}}} \right)} \right]\\ \approx & \, 16L\sum\limits_{\ell = 1}^L \left( \frac{{12\left( {1/18} \right)}}{{{{\left( {2\ell - 1} \right)}^2}{{\left( {1/18} \right)}^2} + 4{L^2}}} + \frac{{8\left( {1/57} \right)}}{{{{\left( {2\ell - 1} \right)}^2}{{\left( {1/57} \right)}^2} + 4{L^2}}} \right. \\ & \left. - \frac{{5\left( {1/239} \right)}}{{{{\left( {2\ell - 1} \right)}^2}{{\left( {1/239} \right)}^2} + 4{L^2}}} \right) \end{aligned} \] \normalsize due to the smaller arguments $b_{n}$ of the arctangent function. \section{Application} \subsection{Counterpart function} Once the rational approximation \eqref{eq_6} for the arctangent function is found, from the identity $$ \arctan \left( {\frac{1}{x}} \right) + \arctan \left( x \right) = {\frac{\pi}{2}} {\rm{sgn}} \left( x \right), $$ where \[ {\rm{sgn}}\left( x \right) = \left\{ \begin{aligned} 1, \qquad & x > 0\\ 0, \qquad & x = 0\\ - 1, \qquad & x < 0 \end{aligned} \right. \] is the signum function \cite{Weisstein2003}, it follows that \begin{equation}\label{eq_7} - 4L\sum\limits_{\ell = 1}^L {\frac{x}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}{x^2}}}} \approx - \frac{\pi }{2}{\rm{sgn}}\left( x \right) + \arctan \left( x \right). \end{equation} Figure 2 shows the expansion series \eqref{eq_7} computed at $L = 100$ (blue curve). The arctangent function is also shown for comparison (red curve). As we can see from this figure, on the left-half plane the expansion series \eqref{eq_7} is greater than the original arctangent function by $\pi /2$, while on the right-half plane it is smaller than the original arctangent function by $\pi /2$.
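The accuracy gain from smaller arctangent arguments can be checked numerically. The following minimal Python sketch (double precision; `arctan_approx` is our name for the truncated sum \eqref{eq_6}, not the authors' code) compares the direct use of $\pi = 4\arctan(1)$ with Machin's formula at the same $L$:

```python
import math

def arctan_approx(x, L):
    # Truncation of the limit (4): the rational approximation (6).
    return 4 * L * sum(x / ((2 * l - 1) ** 2 * x * x + 4 * L * L)
                       for l in range(1, L + 1))

L = 100
# Direct use of pi = 4*arctan(1) ...
err_direct = abs(math.pi - 4 * arctan_approx(1.0, L))
# ... versus Machin's formula pi = 4*(4*arctan(1/5) - arctan(1/239)),
# whose smaller arguments make the truncated sum (6) more accurate.
err_machin = abs(math.pi - 4 * (4 * arctan_approx(1 / 5, L)
                                - arctan_approx(1 / 239, L)))
print(err_direct, err_machin)
```

At $L = 100$ the Machin-based error is several orders of magnitude below the direct one, in line with the discussion above.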
\begin{figure}\end{figure} The approximation \eqref{eq_7} becomes an exact relation as the integer $L$ tends to infinity: \begin{equation}\label{eq_8} - 4 \times \mathop {\lim }\limits_{L \to \infty } \sum\limits_{\ell = 1}^L {\frac{{Lx}}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}{x^2}}}} = - \frac{\pi }{2}{\rm{sgn}}\left( x \right) + \arctan \left( x \right). \end{equation} Since this limit represents a simple generalization of equation \eqref{eq_4}, the function $- {\rm{sgn}} \left( x \right)\pi/2 + \arctan \left( x \right)$ can be regarded as a counterpart to the arctangent function $\arctan \left( x \right)$. \subsection{Asymptotic formula for pi} Using the limits \eqref{eq_4} and \eqref{eq_8} for the arctangent function $\arctan \left( x \right)$ and its counterpart function $ - {\rm{sgn}}\left( x \right)\pi /2 + \arctan \left( x \right)$, we can readily obtain an asymptotic expansion series for pi. Let us rewrite equation \eqref{eq_8} as follows \begin{equation}\label{eq_9} \arctan \left( x \right) = \frac{\pi }{2}{\rm{sgn}}\left( x \right) - 4 \times \mathop {\lim }\limits_{L \to \infty } \sum\limits_{\ell = 1}^L {\frac{{Lx}}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}{x^2}}}}.
\end{equation} The difference of equations \eqref{eq_9} and \eqref{eq_4} yields \footnotesize \[ 0 = \underbrace {\frac{\pi }{2}{\rm{sgn}}\left( x \right) - \left( {4 \times \mathop {\lim }\limits_{L \to \infty } \sum\limits_{\ell = 1}^L {\frac{{Lx}}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}{x^2}}}} } \right)}_{\,{\rm{eq}}{\rm{.}}\,\,\left( 9 \right)} - \underbrace {\left( {4 \times \mathop {\lim }\limits_{L \to \infty } \sum\limits_{\ell = 1}^L {\frac{{Lx}}{{{{\left( {2\ell - 1} \right)}^2}{x^2} + 4{L^2}}}} } \right)}_{{\rm{eq}}{\rm{.}}\,\,\left( 4 \right)} \] \normalsize or $$ 4 \times \mathop {\lim }\limits_{L \to \infty } \sum\limits_{\ell = 1}^L \left[ {\frac{{Lx}}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}{x^2}}}} + \frac{{Lx}}{{{{\left( {2\ell - 1} \right)}^2}{x^2} + 4{L^2}}} \right] = \frac{\pi }{2}{\rm{sgn}}\left( x \right) $$ or \begin{equation}\label{eq_10} \pi = 8 \times \mathop {\lim }\limits_{L \to \infty } \sum\limits_{\ell = 1}^L {L\left| x \right|\left[ {\frac{1}{{{{\left( {2\ell - 1} \right)}^2}{x^2} + 4{L^2}}} + \frac{1}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}{x^2}}}} \right]} \end{equation} since ${\rm{sgn}}\left( x \right) = x/ \left| x \right|$ \cite{Weisstein2003}. Obviously, the equation \eqref{eq_10} can be interpreted as \[ \pi = 2 \frac{\left| x \right|}{x}\left[ \arctan \left( {\frac{1}{x}} \right) + \arctan \left( x \right) \right]. \] Remarkably, although the argument $x$ is still present in the limit \eqref{eq_10}, this asymptotic expansion series is nevertheless independent of $x$. This signifies that, according to equation \eqref{eq_10}, the constant $\pi$ can be computed at any real value of the argument $x \in \mathbb{R}$.
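A quick numerical experiment (a sketch in double precision, not the authors' Mathematica setup) illustrates both the $x$-independence of the limit \eqref{eq_10} and the $x$-dependence that appears once the sum is truncated at a finite $L$:

```python
import math

def pi_approx(x, L):
    # Truncation of the limit (10) at a finite integer L.
    s = sum(1 / ((2 * l - 1) ** 2 * x * x + 4 * L * L)
            + 1 / ((2 * l - 1) ** 2 + 4 * L * L * x * x)
            for l in range(1, L + 1))
    return 8 * L * abs(x) * s

L = 1000
for x in (1.0, 0.1, 1e-3):
    # Moderately small x improves the truncated accuracy, but an x that is
    # too small relative to L degrades it again.
    print(x, abs(math.pi - pi_approx(x, L)))
```

The run reproduces in miniature the behavior reported below for $L = 10^{12}$: shrinking $x$ from $1$ to $0.1$ reduces the error, while $x = 10^{-3}$ is already too small for $L = 1000$ and the error grows sharply.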
The limit \eqref{eq_10} can be truncated at an arbitrarily large value $L \gg 1$ as given by \begin{equation}\label{eq_11} \pi \approx 8L\left| x \right|\sum\limits_{\ell = 1}^L {\left[ {\frac{1}{{{{\left( {2\ell - 1} \right)}^2}{x^2} + 4{L^2}}} + \frac{1}{{{{\left( {2\ell - 1} \right)}^2} + 4{L^2}{x^2}}}} \right]}. \end{equation} We performed sample computations by using Wolfram Mathematica 9 in enhanced precision mode in order to visualize the number of digits coinciding with the actual value of the constant pi $$ 3.1415926535897932384626433832795028841971693993751 \ldots \,\,. $$ The sample computations show that the accuracy of the approximation \eqref{eq_11} depends upon the two values $L$ and $x$ (the dependence on the argument $x$ in equation \eqref{eq_11} is now due to truncation). For example, at $L = {10^{12}}$ and $x = 1$, we get $$ \underbrace {3.141592653589793238462643}_{25\,\,{\rm{coinciding}}\,\,{\rm{digits}}}46661283621753050273271 \ldots \,\,, $$ while at the same $L = {10^{12}}$ but a smaller $x = {10^{ - 9}}$, the result is $$ \underbrace {3.14159265358979323846264338327950}_{33\,\,{\rm{coinciding}}\,\,{\rm{digits}}}305086383606604 \ldots \,\,. $$ Comparing these approximated values with the actual value of the constant pi, one can see that at $x = 1$ and $x = {10^{ - 9}}$ the numbers of coinciding digits are $25$ and $33$, respectively. It should be noted, however, that the argument $x$ cannot be taken arbitrarily small since its optimized value depends upon the chosen integer $L$. \section{Conclusion} We obtain an efficient rational approximation for the arctangent function $\arctan \left( x \right)$ that can be generalized to its counterpart function $- {\rm{sgn}}\left( x \right)\pi /2 + \arctan \left( x \right)$. The application of the expansion series of the arctangent function and its counterpart results in a new formula for $\pi$.
The computational tests we performed show that the new asymptotic expansion series for pi can converge rapidly. \section*{Acknowledgments} This work is supported by National Research Council Canada, Thoth Technology Inc. and York University. The authors would like to thank Prof. H. Rosengren and Prof. L. Tournier for review and useful information. \section*{Appendix A} Consider an integral of the error function (see integral 12 on page 4 in \cite{Ng1969}) \[ {\rm{erf}}\left( x \right) = \frac{1}{\pi }\int\limits_0^\infty {{e^{ - u}}\sin \left( {2x\sqrt u } \right)\frac{{du}}{u}}. \] This integral can be readily expressed through the sinc function $$ \left\{ {\rm{sinc}} \left( x \ne 0 \right) = {\rm{sin}} \left( x \right) / x, \,\, {\rm{sinc}} \left( x = 0 \right) = 1 \right\} $$ by making the change of variable $v = \sqrt u$, leading to \[ \begin{aligned} {\rm{erf}}\left( x \right) &= \frac{1}{\pi }\int\limits_0^\infty {{e^{ - {v^2}}}\sin \left( {2xv} \right)\frac{{2vdv}}{{{v^2}}}} = \frac{2}{\pi }\int\limits_0^\infty {{e^{ - {v^2}}}\sin \left( {2xv} \right)\frac{{dv}}{v}} \\ &= \frac{{4x}}{\pi }\int\limits_0^\infty {{e^{ - {v^2}}}\sin \left( {2xv} \right)\frac{{dv}}{{2xv}}} \end{aligned} \] or \[ {\rm{erf}}\left( x \right) = \frac{{4x}}{\pi }\int\limits_0^\infty {{e^{ - {v^2}}}{\rm{sinc}}\left( {2xv} \right)dv}. \] The factor $2$ in the argument of the sinc function can be excluded by making the further change of variable $t = 2v$. This provides \[ {\rm{erf}}\left( x \right) = \frac{{4x}}{\pi }\int\limits_0^\infty {{e^{ - {t^2}/4}}{\rm{sinc}}\left( {xt} \right)\frac{{dt}}{2}} \] or \[\label{A.1} \tag{A.1} {\rm{erf}}\left( x \right) = \frac{{2x}}{\pi }\int\limits_0^\infty {{e^{ - {t^2}/4}}{\rm{sinc}}\left( {xt} \right)dt}.
\] As has been shown in our recent publication \cite{Abrarov2015}, the sinc function can be expressed as \[\label{A.2} \tag{A.2} {\rm{sinc}}\left( x \right) = \mathop {\lim }\limits_{L \to \infty } \frac{1}{L}\sum\limits_{\ell = 1}^L {\cos \left( {\frac{{\ell - 1/2}}{L}x} \right)}. \] From the following integral \[\label{A.3} \tag{A.3} {\rm{sinc}}\left( x \right) = \int\limits_0^1 {\cos \left( {xu} \right)du} = \frac{1}{x}\int\limits_0^x {\cos \left( t \right)dt} \] it is not difficult to see that the cosine expansion \eqref{A.2} of the sinc function is just a result of integration of equation \eqref{A.3} performed by using the midpoint rule over each subinterval $\Delta u = 1/L$. Many cosine expansions of the sinc function can be found from equation \eqref{A.3} by taking the integral with the help of efficient integration methods \cite{Mathews1999}. For example, another cosine expansion of the sinc function can be found by using the trapezoidal rule \small \[\label{A.4} \tag{A.4} {\rm{sinc}}\left( x \right) = \mathop {\lim }\limits_{L \to \infty } \frac{1}{L}\left[ {\frac{{1 + \cos \left( x \right)}}{2} + \sum\limits_{\ell = 1}^{L - 1} {\cos \left( {\frac{\ell }{L}x} \right)} } \right] \] \normalsize and Simpson's rule \footnotesize \[\label{A.5} \tag{A.5} {\rm{sinc}}\left( x \right) = \mathop {\lim }\limits_{L \to \infty } \frac{1}{{6L}}\left[ {1 + \cos \left( x \right) + 4\sum\limits_{\ell = 1}^L {\cos \left( {\frac{{\ell - 1/2}}{L}x} \right)} + 2\sum\limits_{\ell = 1}^{L - 1} {\cos \left( {\frac{\ell }{L}x} \right)} } \right].
\] \normalsize It is interesting to note that the limit \eqref{A.5} can also be derived trivially as a weighted sum of equations \eqref{A.2} and \eqref{A.4} in a proportion $2/3$ to $1/3$ as follows \small \[ \begin{aligned} {\rm{sinc}}\left( x \right) = &\frac{2}{3} \times \mathop {\lim }\limits_{L \to \infty } \frac{1}{L}\sum\limits_{\ell = 1}^{L} {\cos \left( {\frac{{\ell - 1/2}}{L}x} \right)} \\ & + \frac{1}{3} \times \mathop {\lim }\limits_{L \to \infty } \frac{1}{L}\left[ {\frac{{1 + \cos \left( x \right)}}{2} + \sum\limits_{\ell = 1}^{L - 1} {\cos \left( {\frac{\ell }{L}x} \right)} } \right]. \end{aligned} \] \normalsize Any of these or similar cosine expansions of the sinc function can be used in integration to obtain expansion series for the error function ${\rm{erf}} \left( x \right)$ and, consequently, for the constant pi as well. However, as the simplest case we consider the application of equation \eqref{A.2} only. Thus, substituting the cosine expansion \eqref{A.2} of the sinc function into the integral \eqref{A.1} yields \[ {\rm{erf}}\left( x \right) = \frac{{2x}}{\pi } \times \mathop {\lim }\limits_{L \to \infty } \int\limits_0^\infty {\exp \left( { - {t^2}/4} \right)\underbrace {\frac{1}{L}\sum\limits_{\ell = 1}^L {\cos \left( {\frac{{\ell - 1/2}}{L}xt} \right)} }_{{\rm{sinc}}\left( {xt} \right)}dt}. \] Each term in this equation is analytically integrable. Therefore, its integration leads to the expansion series \eqref{eq_3} of the error function. A more detailed description of the expansion series \eqref{eq_3} of the error function is given in our work \cite{Abrarov2016}. \end{document}
\begin{document} \title{Extremal Unimodular Lattices in Dimension $36$} \author{ Masaaki Harada\thanks{ Research Center for Pure and Applied Mathematics, Graduate School of Information Sciences, Tohoku University, Sendai 980--8579, Japan. email: mharada@m.tohoku.ac.jp. This work was carried out at Yamagata University.} } \maketitle \noindent {\bf Dedicated to Professor Vladimir D. Tonchev on His 60th Birthday} \begin{abstract} In this paper, new extremal odd unimodular lattices in dimension $36$ are constructed. Some new odd unimodular lattices in dimension $36$ with long shadows are also constructed. \end{abstract} \section{Introduction} A (Euclidean) lattice $L \subset \mathbb{R}^n$ in dimension $n$ is {\em unimodular} if $L = L^{*}$, where the dual lattice $L^{*}$ of $L$ is defined as $\{ x \in {\mathbb{R}}^n \mid (x,y) \in \mathbb{Z} \text{ for all } y \in L\}$ under the standard inner product $(x,y)$. A unimodular lattice is called {\em even} if the norm $(x,x)$ of every vector $x$ is even. A unimodular lattice, which is not even, is called {\em odd}. An even unimodular lattice in dimension $n$ exists if and only if $n \equiv 0 \pmod 8$, while an odd unimodular lattice exists for every dimension. Two lattices $L$ and $L'$ are {\em isomorphic}, denoted $L \cong L'$, if there exists an orthogonal matrix $A$ with $L' = L \cdot A$, where $ L \cdot A=\{xA \mid x \in L\}$. The automorphism group $\Aut(L)$ of $L$ is the group of all orthogonal matrices $A$ with $L = L \cdot A$. Rains and Sloane~\cite{RS-bound} showed that the minimum norm $\min(L)$ of a unimodular lattice $L$ in dimension $n$ is bounded by $\min(L) \le 2 \lfloor n/24 \rfloor+2$ unless $n=23$ when $\min(L) \le 3$. We say that a unimodular lattice meeting the upper bound is {\em extremal}. The smallest dimension for which there is an odd unimodular lattice with minimum norm (at least) $4$ is $32$ (see~\cite{lattice-database}). 
There are exactly five odd unimodular lattices in dimension $32$ having minimum norm $4$, up to isomorphism~\cite{CS98}. For dimensions $33,34$ and $35$, the minimum norm of an odd unimodular lattice is at most $3$ (see~\cite{lattice-database}). The next dimension for which there is an odd unimodular lattice with minimum norm (at least) $4$ is $36$. Four extremal odd unimodular lattices in dimension $36$ are known, namely, Sp4(4)D8.4 in~\cite{lattice-database}, $G_{36}$ in~\cite[Table~2]{G04}, $N_{36}$ in~\cite[Section~3]{H11} and $A_4(C_{36})$ in~\cite[Section~3]{H12}. Recently, one more lattice has been found, namely, $A_6(C_{36,6}(D_{18}))$ in~\cite[Table~II]{Hodd}. This situation motivates us to increase the number of known non-isomorphic extremal odd unimodular lattices in dimension $36$. The main aim of this paper is to prove the following: \begin{prop}\label{main} There are at least $26$ non-isomorphic extremal odd unimodular lattices in dimension $36$. \end{prop} The above proposition is established by constructing new extremal odd unimodular lattices in dimension $36$ from self-dual $\mathbb{Z}_k$-codes, where $\mathbb{Z}_{k}$ is the ring of integers modulo $k$, by using two approaches. One approach is to consider self-dual $\mathbb{Z}_4$-codes. Let $B$ be a binary doubly even code of length $36$ satisfying the following conditions: \begin{align} \label{eq:C1} &\text{the minimum weight of $B$ is at least $16$}, \\ \label{eq:C2} &\text{the minimum weight of its dual code $B^\perp$ is at least $4$.} \end{align} Then a self-dual $\mathbb{Z}_4$-code with residue code $B$ gives an extremal odd unimodular lattice in dimension $36$ by Construction A. We show that a binary doubly even $[36,7]$ code satisfying the conditions (\ref{eq:C1}) and (\ref{eq:C2}) has weight enumerator $1+ 63 y^{16}+ 63 y^{20}+ y^{36}$ (Lemma~\ref{lem:WE}). It was shown in~\cite{PST} that there are four codes having this weight enumerator, up to equivalence.
We construct ten new extremal odd unimodular lattices in dimension $36$ from self-dual $\mathbb{Z}_4$-codes whose residue codes are doubly even $[36,7]$ codes satisfying the conditions (\ref{eq:C1}) and (\ref{eq:C2}) (Lemma~\ref{lem:N1}). New odd unimodular lattices in dimension $36$ with minimum norm $3$ having shadows of minimum norm $5$ are constructed from some of the new lattices (Proposition~\ref{prop:longS}). These are often called unimodular lattices with long shadows (see~\cite{NV03}). The other approach is to consider self-dual $\mathbb{Z}_k$-codes $(k=5,6,7,9,19)$, which have generator matrices of a special form given in (\ref{eq:GM}). Eleven more new extremal odd unimodular lattices in dimension $36$ are constructed by Construction A (Lemma~\ref{lem:N2}). Finally, we give a short observation on ternary self-dual codes related to extremal odd unimodular lattices in dimension $36$. All computer calculations in this paper were done by {\sc Magma}~\cite{Magma}. \section{Preliminaries} \label{sec:def} \subsection{Unimodular lattices} Let $L$ be an odd unimodular lattice and let $L_0$ denote the even sublattice, that is, the sublattice of vectors of even norms. Then $L_0$ is a sublattice of $L$ of index $2$~\cite{CS98}. The {\em shadow} $S(L)$ of $L$ is defined to be $L_0^* \setminus L$. There are cosets $L_1,L_2,L_3$ of $L_0$ such that $L_0^* = L_0 \cup L_1 \cup L_2 \cup L_3$, where $L = L_0 \cup L_2$ and $S(L) = L_1 \cup L_3$. Shadows for odd unimodular lattices appeared in~\cite{CS98} and also in~\cite[p.~440]{SPLAG}, in order to provide restrictions on the theta series of odd unimodular lattices. Two lattices $L$ and $L'$ are {\em neighbors} if both lattices contain a sublattice of index $2$ in common. If $L$ is an odd unimodular lattice in dimension divisible by $4$, then there are two unimodular lattices other than $L$ containing $L_0$, namely, $L_0 \cup L_1$ and $L_0 \cup L_3$.
Throughout this paper, we denote the two unimodular neighbors by \begin{equation}\label{eq:N} Ne_1(L)=L_0 \cup L_1 \text{ and } Ne_2(L)=L_0 \cup L_3. \end{equation} The theta series $\theta_{L}(q)$ of $L$ is the formal power series $ \theta_{L}(q) = \sum_{x \in L} q^{(x,x)}. $ The kissing number of $L$ is the second nonzero coefficient of the theta series of $L$, that is, the number of vectors of minimum norm in $L$. Conway and Sloane~\cite{CS98} gave some characterization of theta series of odd unimodular lattices and their shadows. Using~\cite[(2), (3)]{CS98}, it is easy to determine the possible theta series $\theta_{L_{36}}(q)$ and $\theta_{S(L_{36})}(q)$ of an extremal odd unimodular lattice $L_{36}$ in dimension $36$ and its shadow $S(L_{36})$: \begin{align} \label{eq:T1} \theta_{L_{36}}(q) =& 1 + (42840 + 4096 \alpha)q^4 +(1916928 - 98304 \alpha)q^5 + \cdots, \\ \label{eq:T2} \theta_{S(L_{36})}(q) =& \alpha q + (960 - 60 \alpha) q^3 + (3799296 + 1734 \alpha)q^5 + \cdots, \end{align} respectively, where $\alpha$ is a nonnegative integer. It follows from the coefficients of $q$ and $q^3$ in $\theta_{S(L_{36})}(q)$ that $0 \le \alpha \le 16$. \subsection{Self-dual $\mathbb{Z}_k$-codes and Construction A} Let $\mathbb{Z}_{k}$ be the ring of integers modulo $k$, where $k$ is a positive integer greater than $1$. A {\em $\mathbb{Z}_{k}$-code} $C$ of length $n$ is a $\mathbb{Z}_{k}$-submodule of $\mathbb{Z}_{k}^n$. Two $\mathbb{Z}_k$-codes are {\em equivalent} if one can be obtained from the other by permuting the coordinates and (if necessary) changing the signs of certain coordinates. A code $C$ is {\em self-dual} if $C=C^\perp$, where the dual code $C^\perp$ of $C$ is defined as $\{ x \in \mathbb{Z}_{k}^n \mid x \cdot y = 0$ for all $y \in C\}$, under the standard inner product $x \cdot y$. 
If $C$ is a self-dual $\mathbb{Z}_k$-code of length $n$, then the following lattice \[ A_{k}(C) = \frac{1}{\sqrt{k}} \{(x_1,\ldots,x_n) \in \mathbb{Z}^n \mid (x_1 \bmod k,\ldots,x_n \bmod k)\in C\} \] is a unimodular lattice in dimension $n$. This construction of lattices is called Construction A. \section{From self-dual $\mathbb{Z}_4$-codes}\label{sec:4} From now on, we omit the term odd for odd unimodular lattices in dimension $36$, since all unimodular lattices in dimension $36$ are odd. In this section, we construct ten new non-isomorphic extremal unimodular lattices in dimension $36$ from self-dual $\mathbb{Z}_4$-codes by Construction A. Five new non-isomorphic unimodular lattices in dimension $36$ with minimum norm $3$ having shadows of minimum norm $5$ are also constructed. \subsection{Extremal unimodular lattices} Every $\mathbb{Z}_4$-code $C$ of length $n$ has two binary codes $C^{(1)}$ and $C^{(2)}$ associated with $C$: \[ C^{(1)}= \{ c \bmod 2 \mid c \in C \} \text{ and } C^{(2)}= \left\{ c \bmod 2 \mid c \in \mathbb{Z}_4^n, 2c\in C \right\}. \] The binary codes $C^{(1)}$ and $C^{(2)}$ are called the {residue} and {torsion} codes of $C$, respectively. If $C$ is a self-dual $\mathbb{Z}_4$-code, then $ C^{(1)}$ is a binary doubly even code with $C^{(2)} = {C^{(1)}}^{\perp}$~\cite{Z4-CS}. Conversely, starting from a given binary doubly even code $B$, a method for constructing all self-dual $\mathbb{Z}_4$-codes $C$ with $C^{(1)}=B$ was given in~\cite[Section~3]{Z4-PLF}. The {Euclidean weight} of a codeword $x=(x_1,\ldots,x_n)$ of $C$ is $m_1(x)+4m_2(x)+m_3(x)$, where $m_{\alpha}(x)$ denotes the number of components $i$ with $x_i=\alpha$ $(\alpha=1,2,3)$. The {minimum Euclidean weight} $d_E(C)$ of $C$ is the smallest Euclidean weight among all nonzero codewords of $C$. It is easy to see that $\min\{d(C^{(1)}),4d(C^{(2)})\} \le d_E(C)$, where $d(C^{(i)})$ denotes the minimum weight of $C^{(i)}$ $(i=1,2)$.
In addition, $d_E(C) \le 4d(C^{(2)})$ and $A_4(C)$ has minimum norm $\min\{4,d_E(C)/4\}$ (see e.g.~\cite{H11}). Hence, if there is a binary doubly even code $B$ of length $36$ satisfying the conditions (\ref{eq:C1}) and (\ref{eq:C2}), then an extremal unimodular lattice in dimension $36$ is constructed as $A_4(C)$, through a self-dual $\mathbb{Z}_4$-code $C$ with $C^{(1)}=B$. If there is a binary $[36,k]$ code $B$ satisfying the conditions (\ref{eq:C1}) and (\ref{eq:C2}), then $k=7$ or $8$ (see~\cite{Brouwer-Handbook}). \begin{lem}\label{lem:WE} Let $B$ be a binary doubly even $[36,7]$ code satisfying the conditions {\rm (\ref{eq:C1})} and {\rm (\ref{eq:C2})}. Then the weight enumerator of $B$ is $1+ 63 y^{16}+ 63 y^{20}+ y^{36}$. \end{lem} \begin{proof} The weight enumerator of $B$ is written as: \[ W_{B}(y)= 1 +a y^{16} +b y^{20} +c y^{24} +d y^{28} +e y^{32} +(2^7-1-a-b-c-d-e) y^{36}, \] where $a,b,c,d$ and $e$ are nonnegative integers. By the MacWilliams identity, the weight enumerator of $B^\perp$ is given by: \begin{align*} W_{B^\perp}(y)=& 1 +\frac{1}{16}(- 567 + 5a + 4b + 3c + 2d + e) y \\& +\frac{1}{2}(1260 -10a - 10b - 9c - 7d - 4e) y^2 \\& +\frac{1}{16}(- 112455 + 885a + 900b + 883c + 770d + 497e)y^3 + \cdots. \end{align*} Since $d(B^\perp) \ge 4$, the weight enumerator of $B$ is written using $a$ and $b$: \begin{align*} W_{B}(y)=& 1 + a y^{16} + b y^{20} + (882 -10a - 4b) y^{24} + (- 1638 + 20a + 6b) y^{28} \\ & + (1197 -15a - 4b) y^{32} + (- 314 + 4a + b)y^{36}. \end{align*} Suppose that $B$ does not contain the all-one vector $\mathbf{1}$. Then $b=314 -4a$. In this case, since the coefficients $- 374 + 6a$ of $y^{24}$ and $246 -4a$ of $y^{28}$ must be nonnegative, these yield $a \ge 63$ and $a \le 61$, respectively, which is a contradiction. Hence, $B$ contains $\mathbf{1}$. Then $b=315-4a$. Since the coefficients $6a - 378$ of $y^{24}$ and $252 - 4a$ of $y^{28}$ must be nonnegative, we get $a=63$; the coefficient $a - 63$ of $y^{32}$ then vanishes, and the weight enumerator of $B$ is uniquely determined as $1+ 63 y^{16}+ 63 y^{20}+ y^{36}$.
\end{proof} \begin{rem} A similar approach shows that the weight enumerator of a binary doubly even $[36,8]$ code $B$ satisfying the conditions {\rm (\ref{eq:C1})} and {\rm (\ref{eq:C2})} is uniquely determined as $1 + 153 y^{16} + 72 y^{20} + 30 y^{24}$. \end{rem} It was shown in~\cite{PST} that there are four inequivalent binary $[36,7,16]$ codes containing $\mathbf{1}$. The four codes are doubly even. Hence, there are exactly four binary doubly even $[36,7]$ codes satisfying the conditions {\rm (\ref{eq:C1})} and {\rm (\ref{eq:C2})}, up to equivalence. The four codes are optimal in the sense that these codes achieve the Gray--Rankin bound, and the codewords of weight $16$ correspond to quasi-symmetric SDP $2$-$(36,16,12)$ designs~\cite{JT}. Let $B_{36,i}$ be the binary doubly even $[36,7,16]$ code corresponding to the quasi-symmetric SDP $2$-$(36,16,12)$ design, which is the residual design of the symmetric SDP $2$-$(64,28,12)$ design $D_i$ in~\cite[Section~5]{PST} $(i=1,2,3,4)$. As described above, all self-dual $\mathbb{Z}_4$-codes $C$ with $C^{(1)}=B_{36,i}$ have $d_E(C) =16$ $(i=1,2,3,4)$. Hence, $A_4(C)$ are extremal.
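The notions used in this section (residue and torsion codes, Euclidean weight, and the minimum norm $\min\{4, d_E(C)/4\}$ of $A_4(C)$) can be checked mechanically on a toy example. The following minimal Python sketch uses a hypothetical length-$4$ self-dual $\mathbb{Z}_4$-code, not one of the length-$36$ codes of this paper:

```python
from itertools import product

# A toy self-dual Z4-code of length 4 (hypothetical small example):
# generated by (1,1,1,1), (0,2,0,2) and (0,0,2,2).
n = 4
gens = [(1, 1, 1, 1), (0, 2, 0, 2), (0, 0, 2, 2)]

# Enumerate the code as all Z4-linear combinations of the generators.
code = {tuple(sum(a * g[i] for a, g in zip(coeffs, gens)) % 4 for i in range(n))
        for coeffs in product(range(4), repeat=len(gens))}

assert len(code) == 2 ** n                      # size of a self-dual Z4-code
assert all(sum(x * y for x, y in zip(u, v)) % 4 == 0
           for u in code for v in code)         # self-orthogonality

residue = {tuple(c % 2 for c in cw) for cw in code}                    # C^(1)
torsion = {tuple((c // 2) % 2 for c in cw)                             # C^(2)
           for cw in code if all(c % 2 == 0 for c in cw)}

def euclidean_weight(cw):
    # Euclidean weights of the residues 0,1,2,3 are 0,1,4,1.
    return sum({0: 0, 1: 1, 2: 4, 3: 1}[c] for c in cw)

dE = min(euclidean_weight(cw) for cw in code if any(cw))
print(sorted(residue), dE, min(4, dE // 4))     # minimum norm of A_4(C)
```

Here the residue code is the binary repetition code (doubly even, as it must be), the torsion code is its dual, and $d_E(C)=4$ gives minimum norm $1$ for $A_4(C)$; the length-$36$ constructions of this section follow the same pattern with $d_E(C)=16$.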
\begin{table}[thbp] \caption{Extremal unimodular lattices in dimension $36$} \label{Tab:L} \begin{center} {\small \begin{tabular}{l|c|c|c} \noalign{\hrule height0.8pt} \multicolumn{1}{c|}{Lattices $L$} & $\tau(L)$ & $\{n_1(L),n_2(L)\}$ & $\#\Aut(L)$ \\ \hline Sp4(4)D8.4 in~\cite{lattice-database} &42840& $\{480, 480\}$ & 31334400 \\ $G_{36}$ in~\cite[Table~2]{G04} &42840& $\{144, 816\}$ & 576 \\ $N_{36}$ in~\cite{H11} &42840& $\{0, 960\}$ & 849346560 \\ $A_4(C_{36})$ in~\cite{H12} &51032& $\{0, 840\}$ & 660602880 \\ $A_6(C_{36,6}(D_{18}))$ in~\cite{Hodd} &42840& $\{384, 576\}$ & 288 \\ \hline $A_4(C_{36, 1})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 6291456 \\ $A_4(C_{36, 2})$ in Section~\ref{sec:4}& 42840 &$\{ 0, 960\}$& 6291456 \\ $A_4(C_{36, 3})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 22020096\\ $A_4(C_{36, 4})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 1966080 \\ $A_4(C_{36, 5})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 1572864 \\ $A_4(C_{36, 6})$ in Section~\ref{sec:4}& 42840 &$\{ 0, 960\}$& 2621440 \\ $A_4(C_{36, 7})$ in Section~\ref{sec:4}& 42840 &$\{ 0, 960\}$& 1966080 \\ $A_4(C_{36, 8})$ in Section~\ref{sec:4}& 42840 &$\{ 0, 960\}$& 393216 \\ $A_4(C_{36, 9})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 1376256 \\ $A_4(C_{36,10})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 393216 \\ \hline $A_{5}(D_{36,1})$ in Section~\ref{sec:E}&42840&$\{144, 816\}$ & 144 \\ $A_{5}(D_{36,2})$ in Section~\ref{sec:E}&42840&$\{456, 504\}$ & 72 \\ $A_{6}(D_{36,3})$ in Section~\ref{sec:E}&42840&$\{240, 720\}$ & 288 \\ $A_{6}(D_{36,4})$ in Section~\ref{sec:E}&42840&$\{240, 720\}$ & 576 \\ $A_{7}(D_{36,5})$ in Section~\ref{sec:E}&42840&$\{288, 672\}$ & 288 \\ $A_{7}(D_{36,6})$ in Section~\ref{sec:E}&42840&$\{144, 816\}$ & 72 \\ $A_{7}(D_{36,7})$ in Section~\ref{sec:E}&42840&$\{144, 816\}$ & 288 \\ $A_{9}(D_{36,8})$ in Section~\ref{sec:E}&42840&$\{384, 576\}$ & 144 \\ $A_{19}(D_{36,9})$ in Section~\ref{sec:E}&42840&$\{288, 672\}$ & 144 \\ \hline 
$A_{5}(E_{36,1})$ in Section~\ref{sec:E}&42840& $\{456, 504\}$ & 72 \\ $A_{6}(E_{36,2})$ in Section~\ref{sec:E}&42840& $\{384, 576\}$ &144\\ \noalign{\hrule height0.8pt} \end{tabular} } \end{center} \end{table} Using the method in~\cite[Section~3]{Z4-PLF}, self-dual $\mathbb{Z}_4$-codes $C$ are constructed from $B_{36,i}$. Then ten extremal unimodular lattices $A_4(C_{36,i})$ $(i=1,2,\ldots,10)$ are constructed, where $C_{36,i}^{(1)} = B_{36,2}$ $(i=1,2,3)$, $C_{36,i}^{(1)} = B_{36,3}$ $(i=4,5,6,7)$ and $C_{36,i}^{(1)} = B_{36,4}$ $(i=8,9,10)$. To distinguish between the known lattices and our lattices, we give in Table~\ref{Tab:L} the kissing numbers $\tau(L)$, $\{n_1(L),n_2(L)\}$ and the orders $\#\Aut(L)$ of the automorphism groups, where $n_i(L)$ denotes the number of vectors of norm $3$ in $Ne_i(L)$ defined in (\ref{eq:N}) $(i=1,2)$. These have been calculated by {\sc Magma}. Table~\ref{Tab:L} shows the following: \begin{lem}\label{lem:N1} The five known lattices and the ten extremal unimodular lattices $A_4(C_{36,i})$ $(i=1,2,\ldots,10)$ are non-isomorphic to each other. \end{lem} \begin{rem} In this way, we have found two more extremal unimodular lattices $A_4(C)$, where $C$ are self-dual $\mathbb{Z}_4$-codes with $C^{(1)} = B_{36,1}$. However, we have verified by {\sc Magma} that the two lattices are isomorphic to $N_{36}$ in~\cite{H11} and $A_4(C_{36})$ in~\cite{H12}. \end{rem} \begin{rem}\label{rem} For $L=A_4(C_{36,i})$ $(i=1,2,\ldots,10)$, it follows from $\tau(L)$ and $\{n_1(L),n_2(L)\}$ that one of the two unimodular neighbors $Ne_1(L)$ and $Ne_2(L)$ defined in (\ref{eq:N}) is extremal. We have verified by {\sc Magma} that the extremal one is isomorphic to $A_4(C_{36,i})$. 
\end{rem} For $i=1,2,\ldots,10$, the code $C_{36,i}$ is equivalent to some code $\overline{C_{36,i}}$ with generator matrix of the form: \begin{equation} \label{eq:g-matrix} \left(\begin{array}{ccc} I_{7} & A & B_1+2B_2 \\ O &2I_{22} & 2D \end{array}\right), \end{equation} where $A$, $B_1$, $B_2$, $D$ are $(1,0)$-matrices, $I_n$ denotes the identity matrix of order $n$ and $O$ denotes the $22 \times 7$ zero matrix. We only list in Figure~\ref{Fig} the $7 \times 29$ matrix $ M_{i}= \left(\begin{array}{cc} A & B_1+2B_2 \end{array}\right) $ to save space. Note that $\left(\begin{array}{ccc} O &2I_{22} & 2D \end{array}\right)$ in (\ref{eq:g-matrix}) can be obtained from $\left(\begin{array}{ccc} I_{7} & A & B_1+2B_2 \end{array}\right)$ since $\overline{C_{36,i}}^{(2)} = {\overline{C_{36,i}}^{(1)}}^{\perp}$. A generator matrix of $A_4(C_{36,i})$ is obtained from that of $C_{36,i}$. \begin{figure} \caption{The matrices $M_{i}$ $(i=1,2,\ldots,10)$} \label{Fig} \end{figure} \subsection{Unimodular lattices with long shadows}\label{sec:L} The possible theta series of a unimodular lattice $L$ in dimension $36$ having minimum norm $3$ and its shadow are as follows: \begin{align*} & 1 + (960 - \alpha)q^3 + (42840 + 4096 \beta)q^4 + \cdots, \\ & \beta q + (\alpha - 60 \beta) q^3 + (3833856 - 36 \alpha + 1734 \beta)q^5 + \cdots, \end{align*} respectively, where $\alpha$ and $\beta$ are integers with $0 \le \beta \le \frac{\alpha}{60} < 16$~\cite{H11}. Then the kissing number of $L$ is at most $960$ and $\min(S(L)) \le 5$. Unimodular lattices $L$ with $\min(L)=3$ and $\min(S(L))=5$ are often called unimodular lattices with long shadows (see~\cite{NV03}). Only one unimodular lattice $L$ in dimension $36$ with $\min(L)=3$ and $\min(S(L))=5$ was known, namely, $A_4(C_{36})$ in~\cite{H11}. Let $L$ be one of $A_4(C_{36,2})$, $A_4(C_{36,6})$, $A_4(C_{36,7})$ and $A_4(C_{36,8})$.
Since $\{n_1(L),n_2(L)\}=\{0,960\}$, one of the two unimodular neighbors $Ne_1(L)$ and $Ne_2(L)$ in (\ref{eq:N}) is extremal and the other is a unimodular lattice $L'$ with minimum norm $3$ having shadow of minimum norm $5$. We denote such lattices $L'$ constructed from $A_4(C_{36,2})$, $A_4(C_{36,6})$, $A_4(C_{36,7})$ and $A_4(C_{36,8})$ by $N_{36,1}$, $N_{36,2}$, $N_{36,3}$ and $N_{36,4}$, respectively. We list in Table~\ref{Tab:LS} the orders $\#\Aut(N_{36,i})$ $(i=1,2,3,4)$ of the automorphism groups, which have been calculated by {\sc Magma}. Table~\ref{Tab:LS} shows the following: \begin{prop}\label{prop:longS} There are at least $5$ non-isomorphic unimodular lattices $L$ in dimension $36$ with $\min(L)=3$ and $\min(S(L))=5$. \end{prop} \begin{table}[th] \caption{$\#\Aut(N_{36,i})$ $(i=1,2,3,4)$} \label{Tab:LS} \begin{center} {\small \begin{tabular}{c|r} \noalign{\hrule height0.8pt} Lattices $L$ & \multicolumn{1}{c}{$\#\Aut(L)$} \\ \hline $A_4(C_{36})$ in~\cite{H11} & 1698693120\\ $N_{36,1}$ & 12582912\\ $N_{36,2}$ & 5242880 \\ $N_{36,3}$ & 3932160 \\ $N_{36,4}$ & 786432 \\ \noalign{\hrule height0.8pt} \end{tabular} } \end{center} \end{table} \section{From self-dual $\mathbb{Z}_k$-codes $(k \ge 5)$}\label{sec:E} In this section, we construct more extremal unimodular lattices in dimension $36$ from self-dual $\mathbb{Z}_k$-codes $(k \ge 5)$. Let $A^T$ denote the transpose of a matrix $A$. An $n \times n$ matrix is {negacirculant} if it has the following form: \[ \left( \begin{array}{ccccc} r_0 &r_1 & \cdots &r_{n-1} \\ -r_{n-1}&r_0 & \cdots &r_{n-2} \\ -r_{n-2}&-r_{n-1}& \cdots &r_{n-3} \\ \vdots & \vdots && \vdots\\ -r_1 &-r_2 & \cdots&r_0 \end{array} \right). 
\] Let $D_{36,i}$ $(i=1,2,\ldots,9)$ and $E_{36,i}$ $(i=1,2)$ be $\mathbb{Z}_k$-codes of length $36$ with generator matrices of the following form: \begin{equation} \label{eq:GM} \left( \begin{array}{ccc@{}c} \quad & {\Large I_{18}} & \quad & \begin{array}{cc} A & B \\ -B^T & A^T \end{array} \end{array} \right), \end{equation} where $k$ is as listed in Table~\ref{Tab:Codes}, and $A$ and $B$ are $9 \times 9$ negacirculant matrices with first rows $r_A$ and $r_B$ listed in Table~\ref{Tab:Codes}. It is easy to see that these codes are self-dual, since $AA^T+BB^T=-I_9$ over $\mathbb{Z}_k$. Thus, $A_k(D_{36,i})$ $(i=1,2,\ldots,9)$ and $A_k(E_{36,i})$ $(i=1,2)$ are unimodular lattices, for $k$ given in Table~\ref{Tab:Codes}. In addition, we have verified by {\sc Magma} that these lattices are extremal. \begin{table}[thb] \caption{Self-dual $\mathbb{Z}_k$-codes of length 36} \label{Tab:Codes} \begin{center} {\small \begin{tabular}{c|c|l|l} \noalign{\hrule height0.8pt} Codes & $k$ & \multicolumn{1}{c|}{$r_A$} &\multicolumn{1}{c}{$r_B$} \\ \hline $D_{36,1}$& 5&$(0, 0, 0, 1, 2, 2, 0, 4, 2)$ & $(0, 0, 0, 0, 4, 3, 3, 0, 1)$\\ $D_{36,2}$& 5&$(0, 0, 0, 1, 3, 0, 2, 0, 4)$ & $(3, 0, 4, 1, 4, 0, 0, 4, 4)$\\ $D_{36,3}$&6 &$(0,1,5,3,2,0,3,5,1)$&$(3,1,0,0,5,1,1,1,1)$ \\ $D_{36,4}$&6 &$(0,1,3,5,1,5,5,4,4)$&$(4,0,3,2,4,5,5,2,4)$ \\ $D_{36,5}$& 7&$(0, 1, 6, 3, 5, 0, 4, 5, 4)$ & $(0, 1, 6, 3, 5, 2, 1, 5, 1)$\\ $D_{36,6}$& 7&$(0, 1, 1, 3, 2, 6, 1, 4, 6)$ & $(0, 1, 4, 0, 5, 3, 6, 2, 0)$\\ $D_{36,7}$& 7&$(0, 0, 0, 1, 5, 5, 5, 1, 1)$ & $(0, 5, 4, 2, 5, 1, 1, 3, 6)$\\ $D_{36,8}$& 9&$(0, 0, 0, 1, 0, 4, 3, 0, 0)$ & $(0, 4, 1, 5, 3, 5, 1, 7, 0)$\\ $D_{36,9}$&19&$(0, 0, 0, 1, 15, 15, 9, 6, 5)$ &$(14, 16, 0, 14, 15, 8, 8, 3, 12)$ \\ \hline $E_{36,1}$&5 &$(0, 1, 0, 2, 1, 3, 2, 2, 0)$&$(2, 0, 1, 0, 1, 1, 2, 3, 1)$\\ $E_{36,2}$&6 &$(0, 1, 5, 3, 4, 4, 1, 1, 0)$&$(4, 0, 1, 3, 4, 2, 3, 0, 1)$\\ \noalign{\hrule height0.8pt} \end{tabular} } \end{center} \end{table} To distinguish between the above eleven lattices and the $15$ known
lattices, in Table~\ref{Tab:L} we give $\tau(L)$, $\{n_1(L),n_2(L)\}$ and $\#\Aut(L)$, which have been calculated by {\sc Magma}. For each of the pairs $(A_{5}(E_{36,1}), A_{5}(D_{36,2}))$ and $(A_{6}(E_{36,2}),A_{9}(D_{36,8}))$, the two lattices have identical $ (\tau(L),\{n_1(L),n_2(L)\},\#\Aut(L)) $. However, we have verified by {\sc Magma} that the two lattices are non-isomorphic for each pair. Therefore, we have the following: \begin{lem}\label{lem:N2} The $26$ lattices in Table~\ref{Tab:L} are non-isomorphic to each other. \end{lem} Lemma~\ref{lem:N2} establishes Proposition~\ref{main}. \begin{rem} Similarly to Remark~\ref{rem}, it is known~\cite{H11} that the extremal neighbor is isomorphic to $L$ in the case where $L$ is $N_{36}$ in~\cite{H11}, and we have verified by {\sc Magma} that the extremal neighbor is isomorphic to $L$ in the case where $L$ is $A_4(C_{36})$ in~\cite{H12}. \end{rem} \section{Related ternary self-dual codes}\label{sec:T} In this section, we give a short observation on ternary self-dual codes related to some extremal odd unimodular lattices in dimension $36$. \subsection{Unimodular lattices from ternary self-dual codes} Let $T_{36}$ be a ternary self-dual code of length $36$. The two unimodular neighbors $Ne_1(A_3(T_{36}))$ and $Ne_2(A_3(T_{36}))$ given in (\ref{eq:N}) are described in~\cite{HKO} as $L_S(T_{36})$ and $L_T(T_{36})$. In this section, we use the notation $L_S(T_{36})$ and $L_T(T_{36})$, instead of $Ne_1(A_3(T_{36}))$ and $Ne_2(A_3(T_{36}))$, since the explicit constructions and some properties of $L_S(T_{36})$ and $L_T(T_{36})$ are given in~\cite{HKO}.
By Theorem~6 in~\cite{HKO} (see also Theorem~3.1 in~\cite{G04}), $L_T(T_{36})$ is extremal when $T_{36}$ satisfies the following condition (a), and both $L_S(T_{36})$ and $L_T(T_{36})$ are extremal when $T_{36}$ satisfies the following condition (b): \begin{itemize} \item[(a)] extremal (minimum weight $12$) and admissible (the number of $1$'s in the components of every codeword of weight $36$ is even), \item[(b)] minimum weight $9$ and maximum weight $33$. \end{itemize} For each of (a) and (b), no ternary self-dual code satisfying the condition is currently known. \subsection{Condition (a)} Suppose that $T_{36}$ satisfies the condition (a). Since $T_{36}$ has minimum weight $12$, $A_3(T_{36})$ has minimum norm $3$ and kissing number $72$. By Theorem~6 in~\cite{HKO}, $\min(L_T(T_{36}))=4$ and $\min(L_S(T_{36}))=3$. Hence, since the shadow of $L_T(T_{36})$ contains no vector of norm $1$, by (\ref{eq:T1}) and (\ref{eq:T2}) $L_T(T_{36})$ has theta series $1 + 42840 q^4 +1916928 q^5 + \cdots$. It follows that $\{n_1(L_T(T_{36})),n_2(L_T(T_{36}))\}=\{72,888\}$. By Theorem~1 in~\cite{MPS}, the possible complete weight enumerator $W_C(x,y,z)$ of a ternary extremal self-dual code $C$ of length $36$ containing $\mathbf{1}$ is written as \begin{align*} a_{1} \delta_{36} +a_{2} \alpha_{12}^{3} +a_{3} \alpha_{12}^{2} {\beta_6^2} +a_{4} \alpha_{12} (\beta_6^2)^{2} +a_{5} (\beta_6^2)^{3} +a_{6} \beta_6\gamma_{18} \alpha_{12} +a_{7} \beta_6\gamma_{18} {\beta_6^2}, \end{align*} using some $a_i \in \mathbb{R}$ $(i=1,2,\ldots,7)$, where $\alpha_{12}=a(a^3+8p^3)$, $\beta_6 =a^2-12b$, $\gamma_{18}=a^6-20a^3p^3-8p^6$, $\delta_{36}=p^3(a^3-p^3)^3$ and $a=x^3+y^3+z^3$, $p=3xyz$, $b=x^3y^3+x^3z^3+y^3z^3$. 
From the minimum weight, we have the following: \begin{align*} & a_2 = \frac{3281}{13824} - \frac{a_1}{64}, a_3 = \frac{203}{4608} - \frac{9 a_1}{256}, a_4 = \frac{1763}{13824} + \frac{3 a_1}{128}, \\& a_5 = -\frac{277}{13824} - \frac{a_1}{256}, a_6 = \frac{1133}{1728} + \frac{3 a_1}{64}, a_7 = -\frac{77}{1728} - \frac{a_1}{64}. \end{align*} Since $W_C(x,y,z)$ contains the term $(15180 + 2916a_1) y^{15} z^{21}$, if $C$ is admissible, then \[ a_1=-\frac{15180}{2916}. \] Hence, the complete weight enumerator of a ternary admissible extremal self-dual code containing $\mathbf{1}$ is uniquely determined, which is listed in Figure~\ref{Fig:CWE}. \begin{figure} \caption{Complete weight enumerator} \label{Fig:CWE} \end{figure} \subsection{Condition (b)} Suppose that $T_{36}$ satisfies the condition (b). By the Gleason theorem (see Corollary~5 in~\cite{MPS}), the weight enumerator of $T_{36}$ is uniquely determined as: \begin{align*} & 1 + 888 y^9 + 34848 y^{12} + 1432224 y^{15} + 18377688 y^{18} + 90482256 y^{21} \\& + 162551592 y^{24} + 97883072 y^{27} + 16178688 y^{30} + 479232 y^{33}. \end{align*} By Theorem~6 in~\cite{HKO} (see also Theorem~3.1 in~\cite{G04}), $L_S(T_{36})$ and $L_T(T_{36})$ are extremal. Hence, $\min(A_3(T_{36}))=3$ and $\min(S(A_3(T_{36})))=5$. Note that a unimodular lattice $L$ contains a $3$-frame if and only if $L\cong A_3(C) $ for some ternary self-dual code $C$. Let $L_{36}$ be any of the five lattices given in Table~\ref{Tab:LS}. Let $L_{36}^{(3)}$ be the set $\{\{x,-x\}\mid (x,x)=3, x \in L_{36}\}$. We define the simple undirected graph $\Gamma(L_{36})$, whose set of vertices is the set of $480$ pairs in $L_{36}^{(3)}$ and two vertices $\{x,-x\},\{y,-y\}\in L_{36}^{(3)}$ are adjacent if $(x,y)=0$. It follows that the $3$-frames in $L_{36}$ are precisely the $36$-cliques in the graph $\Gamma(L_{36})$. 
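The reduction of $3$-frames to $36$-cliques described above is straightforward to set up computationally. The following sketch is illustrative only (the paper's actual computation on the lattices $L_{36}$ was done in {\sc Magma}): it builds the orthogonality graph for a toy lattice, $\mathbb{Z}^3$, whose norm-$3$ vectors are the eight vectors $(\pm1,\pm1,\pm1)$; since the inner product of two such vectors in odd dimension is odd, the graph has no edges, so $\mathbb{Z}^3$ contains no $3$-frame of norm-$3$ vectors.

```python
from itertools import product, combinations

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# Norm-3 vectors of the toy lattice Z^3: the eight vectors (+-1, +-1, +-1).
vecs = [v for v in product((-1, 0, 1), repeat=3) if dot(v, v) == 3]

# Vertices of the graph: antipodal pairs {x, -x}; keep one representative per pair.
verts = [v for v in vecs if v > tuple(-c for c in v)]

# Edges: two pairs are adjacent iff their representatives are orthogonal.
edges = [(x, y) for x, y in combinations(verts, 2) if dot(x, y) == 0]

assert len(vecs) == 8 and len(verts) == 4
# Dot products of +-1 vectors in dimension 3 lie in {-3, -1, 1, 3}, never 0:
assert edges == []
```

A $3$-frame would be a $3$-clique here; for the lattices $L_{36}$ of Table~\ref{Tab:LS}, the analogous graph has $480$ vertices and one searches for $36$-cliques.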
We have verified by {\sc Magma} that each graph $\Gamma(L_{36})$ is regular of valency $368$, and that the maximum size of a clique in $\Gamma(L_{36})$ is $12$. Hence, none of these lattices is constructed from any ternary self-dual code by Construction A. \noindent {\bf Acknowledgments.} The author would like to thank Masaaki Kitazume for bringing the observation in Section~\ref{sec:T} to the author's attention. This work is supported by JSPS KAKENHI Grant Number 23340021. \end{document}
\begin{document} \title{Three-manifolds at infinity of complex hyperbolic orbifolds} \begin{abstract} We show that the manifolds at infinity of the complex hyperbolic triangle groups $\Delta_{3,4,4;\infty}$ and $\Delta_{3,4,6;\infty}$ are the one-cusped hyperbolic 3-manifolds $m038$ and $s090$ in the Snappy Census, respectively. That is, these two manifolds admit spherical CR uniformizations. Moreover, these two hyperbolic 3-manifolds can be obtained by Dehn surgeries on the first cusp of the two-cusped hyperbolic 3-manifold $m295$ in the Snappy Census with slopes $2$ and $4$, respectively. In general, the main result of this paper allows us to conjecture that the manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,n;\infty}$ is the one-cusped hyperbolic 3-manifold obtained by Dehn surgery on the first cusp of $m295$ with slope $n-2$. \end{abstract} \section{Introduction} \subsection{Main results} Thurston's work on 3-manifolds has shown that geometry has an important role to play in the study of the topology of 3-manifolds. There is a very close relationship between the topological properties of 3-manifolds and the existence of geometric structures on them. A spherical CR-structure on a smooth 3-manifold $M$ is a maximal collection of distinguished charts modeled on the boundary $\partial \mathbf{H}^2_{\mathbb C}$ of the complex hyperbolic space $\mathbf{H}^2_{\mathbb C}$, where coordinate changes are restrictions of transformations in $\mathbf{PU}(2,1)$. In other words, a {\it spherical CR-structure} is a $(G,X)$-structure with $G=\mathbf{PU}(2,1)$ and $X=S^3$. In contrast to the results on other geometric structures carried by 3-manifolds, relatively few examples of spherical CR-structures are known. In general, it is very difficult to determine whether a 3-manifold admits a spherical CR-structure or not. Some of the first examples were given by Burns and Shnider \cite{BS:1976}.
3-manifolds with $Nil^{3}$-geometry naturally admit such structures, but, by Goldman \cite{Goldman:1983}, no closed 3-manifold with Euclidean or $Sol^3$-geometry admits such a structure. We are interested in an important class of spherical CR-structures, called uniformizable spherical CR-structures. A spherical CR-structure on a 3-manifold $M$ is {\it uniformizable} if it is obtained as $M=\Gamma\backslash \Omega_{\Gamma}$, where $\Omega_{\Gamma}\subset \partial \mathbf{H}^2_{\mathbb C}$ is the set of discontinuity of a discrete subgroup $\Gamma$ acting on $\partial \mathbf{H}^2_{\mathbb C}=S^3$. Constructing discrete subgroups of $\mathbf{PU}(2,1)$ thus provides a way to construct spherical CR-structures on 3-manifolds, so the study of the geometry of discrete subgroups of $\mathbf{PU}(2,1)$ is crucial to the understanding of uniformizable spherical CR-structures. Complex hyperbolic triangle groups provide rich examples of such discrete subgroups. As far as we know, almost all known examples of uniformizable spherical CR-structures are related to complex hyperbolic triangle groups. Let $\Delta_{p,q,r}$ be the abstract $(p,q,r)$ reflection triangle group with the presentation $$\langle \sigma_1, \sigma_2, \sigma_3 | \sigma^2_1=\sigma^2_2=\sigma^2_3=(\sigma_2 \sigma_3)^p=(\sigma_3 \sigma_1)^q=(\sigma_1 \sigma_2)^r=id \rangle,$$ where $p,q,r$ are positive integers or $\infty$ satisfying $1/p+1/q+1/r<1$. For simplicity, we assume that $p \leq q \leq r$. If $p$, $q$ or $r$ equals $\infty$, then the corresponding relation does not appear. A \emph{complex hyperbolic $(p,q,r)$ triangle group} is a representation of $\Delta_{p,q,r}$ into $\mathbf{PU}(2,1)$ where the generators fix complex lines. It is well known that the space of $(p,q,r)$-complex reflection triangle groups has real dimension one if $3 \leq p \leq q \leq r$.
Sometimes, we denote by $\Delta_{p,q,r;n}$ the representation of the triangle group $\Delta_{p,q,r}$ into $\mathbf{PU}(2,1)$ for which $\sigma_1 \sigma_3\sigma_2 \sigma_3$ has order $n$. Richard Schwartz has conjectured a necessary and sufficient condition for a complex hyperbolic $(p,q,r)$ triangle group $\langle I_1,I_2,I_3\rangle < \mathbf{PU}(2,1)$ to be a discrete and faithful representation of $\Delta_{p,q,r}$ \cite{schwartz-icm}. Schwartz's conjecture has been proved in a few cases. We now provide a brief historical overview before discussing our results. The study of complex hyperbolic triangle groups began with Goldman and Parker in \cite{GoPa}. They considered complex hyperbolic ideal triangle groups, i.e. the case $p=q=r=\infty$, and obtained the following result: \begin{thm}[Goldman and Parker \cite{GoPa}] Let $\Gamma=\Gamma_{t}=\langle I_1, I_2, I_3 \rangle < \mathbf{PU}(2,1)$ be a complex hyperbolic ideal triangle group, parameterized by $t \in [0, \infty)$. Let $t_{1}=\sqrt{105/3}$ and $t_{2}= \sqrt{125/3}$. If $t > t_{2}$, then $\Gamma_{t}$ is not a discrete embedding of the $(\infty,\infty,\infty)$ triangle group. If $t < t_{1}$, then $\Gamma_{t}$ is a discrete embedding of the $(\infty,\infty,\infty)$ triangle group. \end{thm} They conjectured that a complex hyperbolic ideal triangle group $\Gamma_{t}=\langle I_1, I_2, I_3 \rangle$ is discrete and faithful if and only if $I_1 I_2 I_3$ is not elliptic, that is, if and only if $t \leq t_{2}$ in the above parametrization. Schwartz proved the Goldman-Parker conjecture in \cite{Schwartz:2001ann} (see \cite{schwartz:2006} for a better proof). \begin{thm}[Schwartz \cite{Schwartz:2001ann}] Let $\Gamma=\langle I_1, I_2, I_3 \rangle < \mathbf{PU}(2,1)$ be a complex hyperbolic ideal triangle group. If $I_1 I_2 I_3$ is not elliptic, then $\Gamma$ is discrete and faithful. Moreover, if $I_1 I_2 I_3$ is elliptic, then $\Gamma$ is not discrete. \end{thm} Furthermore, he analyzed the group when $I_1 I_2 I_3$ is parabolic.
\begin{thm}[Schwartz \cite{Schwartz:2001acta}] Let $\Gamma=\langle I_1, I_2, I_3 \rangle$ be the complex hyperbolic ideal triangle group with $I_1 I_2 I_3$ parabolic. Let $\Gamma'$ be the even subgroup of $\Gamma$. Then the manifold at infinity of the quotient ${\bf H}^2_{\mathbb C}/{\Gamma'}$ is commensurable with the Whitehead link complement in the 3-sphere. \end{thm} Recently, Schwartz's conjecture was proved for complex hyperbolic $(3,3,n)$ triangle groups, for positive integers $n\geq 4$ in \cite{ParkerWX:2016} and for $n=\infty$ in \cite{ParkerWill:2016}. \begin{thm}[Parker, Wang and Xie \cite{ParkerWX:2016}, Parker and Will \cite{ParkerWill:2016}] Let $n \geq 4$, and let $\Gamma=\langle I_1, I_2, I_3 \rangle$ be a complex hyperbolic $(3,3,n)$ triangle group. Then $\Gamma$ is a discrete and faithful representation of the $(3,3,n)$ triangle group if and only if $I_1 I_3 I_2 I_3$ is not elliptic. \end{thm} There are some interesting results on complex hyperbolic $(3,3,n)$ triangle groups with $I_1I_3I_2I_3$ parabolic. \begin{thm}[Deraux and Falbel \cite{DerauxF:2015}, Deraux \cite{Deraux:2015} and Acosta \cite{Acosta:2019}] Let $4 \leq n \leq +\infty $, and let $\Gamma=\langle I_1, I_2, I_3 \rangle$ be a complex hyperbolic $(3,3,n)$ triangle group with $I_1I_3I_2I_3$ parabolic. Let $\Gamma'$ be the even subgroup of $\Gamma$. Then the manifold at infinity of the quotient ${\bf H}^2_{\mathbb C}/{\Gamma'}$ is the Dehn surgery on one of the cusps of the Whitehead link complement with slope $n-2$. \end{thm} Note that our choice of the meridian-longitude systems of the Whitehead link complement in Theorem 1.5 is different from that in \cite{Acosta:2019}; the meridian-longitude systems chosen here seem more coherent with the 3-manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,n; \infty}$ below.
These deformations provide an attractive problem, because they furnish some of the simplest interesting examples in the still mysterious subject of complex hyperbolic deformations. While some progress has been made in understanding these examples, there is still a lot unknown about them. The main purpose of this paper is to study the geometry of the triangle groups $\Delta_{3,4,\infty;4}$ and $\Delta_{3,4,\infty;6}$. Thompson showed in \cite{Thompson:2010} that $\Delta_{3,4,\infty;4}$ and $\Delta_{3,4,\infty;6}$ are arithmetic subgroups of $\mathbf{PU}(2,1)$, and thus they are discrete. We will construct the Ford domains for these two groups; this provides another proof of the discreteness of these two groups. Furthermore, we will identify the manifolds at infinity for them. It is well known \cite{kpt, Thompson:2010} that $\Delta_{p,q,r; n}$ and $\Delta_{p,q,n; r}$ are isomorphic, so we often write this family of groups as $\Delta_{3,4,\infty; n}$ or $\Delta_{3,4,n; \infty}$ for convenience. Our main results are the following. \begin{thm} \label{thm:main} Let $\Gamma=\langle I_1, I_2, I_3 \rangle$ be the complex hyperbolic triangle group $\Delta_{3,4,\infty;n}$ with $I_1I_3I_2I_3$ of order $n$. \begin{itemize} \item If $n=4$, then the manifold at infinity of the even subgroup $\langle I_1I_2,I_2I_3\rangle$ of $\Gamma$ is the one-cusped hyperbolic 3-manifold $m038$ in the Snappy Census. \item If $n=6$, then the manifold at infinity of the even subgroup $\langle I_1I_2,I_2I_3\rangle$ of $\Gamma$ is the one-cusped hyperbolic 3-manifold $s090$ in the Snappy Census. \end{itemize} \end{thm} The general idea of the proof is as follows. First, let $\Gamma$ be one of the groups $\Delta_{3,4,\infty;n}$, for $n=4$ or $6$. We construct a Ford domain $D$ for the discrete group $\Gamma$ and analyze the combinatorics of the faces of this domain.
The ideal boundary $\partial_{\infty}D$ of $D$, that is, $\partial_{\infty}D=D \cap \partial \mathbf{H}^2_{\mathbb C}$, is crucial for obtaining the manifold at infinity of the group $\Gamma$. The set $\partial_{\infty}D$ is the complement of a topological solid cylinder in $\partial{\mathbf{H}^2_{\mathbb C}}-\{q_{\infty}\}$; the boundary of $\partial_{\infty}D$ is an infinite annulus with a polyhedral structure induced by the polyhedral structure of $D$. The combinatorial structure of $\partial_{\infty}D$ can be read off from the boundary 2-faces of $\partial_{\infty}D$. Unlike the case of the spherical CR uniformization of the figure eight knot complement in \cite{DerauxF:2015}, the combinatorial structure of $\partial_{\infty}D$ is quite different from that of the Ford domain of the real hyperbolic structure on the figure eight knot complement. In fact, the combinatorial structure of $\partial_{\infty}D$ for the discrete group $\Delta_{3,4,\infty;n}$ $(n=4,6)$ is more complicated than that in the case of $\Delta_{3,3,n;\infty}$ $(n=4,5)$, since one isometric sphere may contribute two or more boundary 2-faces to $\partial_{\infty}D$. Then we produce a 2-dimensional picture of the boundary of $\partial_{\infty}D$, which determines a canonical 2-spine $S$ of our 3-manifold at infinity of ${\bf H}^2_{\mathbb C}/{\Gamma}$. After that, we determine the topology of the manifold at infinity, which is a quotient space of $\partial_{\infty}D$. From the combinatorial description of $\partial_{\infty}D$, we can calculate the fundamental group of the manifold. The end result is that the manifolds at infinity are identified with the hyperbolic 3-manifolds $m038$ and $s090$, respectively \cite{CullerDunfield:2014}. \subsection{Discussion of related topics} We observe that the combinatorial structure of the Ford domain of $\Delta_{3,4,\infty; \infty}$ is not too complicated.
However, the ideal boundary of this Ford domain in the Heisenberg group does not give a horotube structure as in \cite{Deraux:2016,ParkerWill:2016}. Due to the different topological structure, we used more sophisticated methods to show the following: \begin{thm}[Ma and Xie \cite{MaX:2020}]\label{thm:34inftyinfty} The manifold at infinity of the even subgroup of the complex triangle group $\Delta_{3,4,\infty;\infty}$ is the two-cusped hyperbolic 3-manifold $m295$ in the Snappy Census. \end{thm} \begin{thm}[Ma and Xie \cite{MaX:2020}]\label{thm:3inftyinftyinfty} The manifold at infinity of the even subgroup of the complex triangle group $\Delta_{3,\infty,\infty;\infty}$ is the complement of the simplest chain link with three components in $S^3$. \end{thm} The complement of the simplest chain link with three components in $S^3$ is the so-called ``magic'' 3-manifold, see \cite{MartelliP:2006}. It should be possible to show that infinitely many hyperbolic 3-manifolds obtained via Dehn surgeries on the first cusp of the hyperbolic 3-manifold $m295$ admit spherical CR uniformizations, by applying M. Acosta's CR Dehn surgery theory \cite{Acosta:2019} and deforming the Ford domain of $\Delta_{3,4,\infty; \infty}$ in a one-parameter family. \begin{conj}\label{conj:34ninfty} The manifold at infinity of the even subgroup of the complex triangle group $\Delta_{3,4,\infty;n}$ is the hyperbolic 3-manifold obtained by Dehn surgery on the first cusp of $m295$ with slope $n-2$. \end{conj} We now give some remarks on Theorems \ref{thm:34inftyinfty} and \ref{thm:3inftyinftyinfty}, and Conjecture \ref{conj:34ninfty}. Let $N$ be the simplest chain link with three components in $S^3$ \cite{MartelliP:2006}, which is a hyperbolic link. We use the meridian-longitude systems of the cusps of $N$ as in \cite{MartelliP:2006}, which differ from the meridian-longitude systems in Snappy.
Martelli and Petronio classified all the non-hyperbolic Dehn fillings of $N$ \cite{MartelliP:2006}, and obtained much information about its hyperbolic Dehn fillings. Since $N$ has three cusps, we denote by $N(\alpha)$ the two-cusped filled 3-manifold where the filling slope is $\alpha$ on the first cusp of $N$. Similarly, $N(\alpha, \beta)$ is a one-cusped 3-manifold with filling slopes $\alpha$ and $\beta$ on the first two cusps of $N$, and $N(\alpha, \beta, \gamma)$ is a closed 3-manifold. 1. The Dehn filling 3-manifold $N(1)$ is the Whitehead link complement in $S^3$ \cite{MartelliP:2006}; it has two cusps and is the manifold at infinity of $\Delta_{3,3,\infty; \infty}$ \cite{ParkerWill:2016}. The manifold $N(1,2)$ is the figure eight knot complement in the 3-sphere \cite{MartelliP:2006}, which is the manifold at infinity of the index two even subgroup of $\Delta_{3,3,4; \infty}$ \cite{DerauxF:2015}, and the manifold $N(1,2, -2)$ is the Seifert 3-manifold $\left(S^2,(3,1),(3,1),(4,1),-1\right)$. Moreover, $N(1,n-2)$ is a one-cusped hyperbolic 3-manifold for $n \geq 4$, which is the manifold at infinity of the index two even subgroup of $\Delta_{3,3,n; \infty}$ \cite{Acosta:2019}. The manifold $N(1,n-2, -2)$ is the Seifert 3-manifold $\left(S^2,(3,1),(3,1),(n,1),-1\right)$. 2. The manifold $N(2)$ is $m295$ in the Snappy Census \cite{CullerDunfield:2014}, which is also the link complement $9^2_{50}$ in Rolfsen's list \cite{Rolfsen}. The filled 3-manifold $N(2,2)$ is the manifold $m038$, which is the manifold at infinity of the index two even subgroup of $\Delta_{3,4,4; \infty}$ as in Theorem \ref{thm:main}. The manifold $N(2,2,-2)$ is the Seifert 3-manifold $\left(S^2,(3,1),(4,1),(4,1),-1\right)$. The filled 3-manifold $N(2,4)$ is the manifold $s090$, which is the manifold at infinity of the index two even subgroup of $\Delta_{3,4,6; \infty}$ as in Theorem \ref{thm:main}.
The manifold $N(2,4,-2)$ is the Seifert 3-manifold $\left(S^2,(3,1),(4,1),(6,1),-1\right)$. 3. From the above, it is natural to propose Conjecture \ref{conj:34ninfty}. Moreover, if Schwartz's conjecture is true, it should also hold that the manifold at infinity of the index two even subgroup of $\Delta_{3,n,\infty; \infty}$ is the two-cusped 3-manifold $N(n-2)$, and that the manifold at infinity of the index two even subgroup of $\Delta_{3,n,m; \infty}$ is the one-cusped 3-manifold $N(n-2, m-2)$. The paper is organized as follows. In Section 2 we recall well-known background material. Section 3 contains the matrix representations of the complex hyperbolic triangle groups $\Delta_{3,4,n;\infty}$ in $\mathbf{SU}(2,1)$ for $n=4,6$. Section 4 is devoted to the description of the isometric spheres that bound the Ford domain for the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$; we also give some sample calculations in this section. In Section 5, we study the combinatorial structure of the ideal boundary of the Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$ and identify the hyperbolic 3-manifold at infinity. In Section 6, we study the geometry of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$ in the same way, omitting some details of the computation. \section{Background}\label{sec-back} In this section, we present some preliminaries on complex hyperbolic geometry. For more details, see \cite{Go}. \subsection{Complex hyperbolic plane and Siegel domain} Let ${\mathbb C}^{2,1}$ denote the vector space ${\mathbb C}^{3}$ equipped with the Hermitian form $$\langle {\bf{z}}, {\bf{w}} \rangle =z_1 \overline{w}_3+z_2 \overline{w}_2+z_3 \overline{w}_1$$ of signature $(2,1)$, where ${\bf z}=(z_1,z_2,z_3)^T$ and ${\bf w}=(w_1,w_2,w_3)^T$ are vectors in ${\mathbb C}^3$.
The Hermitian form divides ${\mathbb C}^{2,1}$ into three parts $V_{-}, V_{0}$ and $V_{+}$, which are \begin{eqnarray*} V_{-} &=& \{{\bf z}\in {\mathbb C}^3-\{0\} : \langle {\bf z}, {\bf z} \rangle <0 \}, \\ V_{0} &=& \{{\bf z}\in {\mathbb C}^3-\{0\} : \langle {\bf z}, {\bf z} \rangle =0 \}, \\ V_{+} &=& \{{\bf z}\in {\mathbb C}^3-\{0\} : \langle {\bf z}, {\bf z} \rangle >0 \}. \end{eqnarray*} Let $$ \mathbb{P}: {\mathbb C}^{3}-\{0\}\rightarrow {\mathbb C}{\mathbb P}^2$$ be the canonical projection onto the complex projective space. Then the {\it complex hyperbolic plane} ${\bf H}^2_{\mathbb C}$ is the image of $V_{-}$ in ${\mathbb C}{\mathbb P}^2$ under the map ${\mathbb P}$, and its {\it ideal boundary}, or {\it boundary at infinity}, is the image of $V_{0}$ in ${\mathbb C}{\mathbb P}^2$; we denote it by $\partial {\bf H}^2_{\mathbb C}$. The standard lift $(z_1,z_2,1)^T$ of $z=(z_1,z_2)\in \mathbb {C}^{2}$ is negative if and only if $$z_1+|z_2|^2+\overline{z}_1=2{\rm Re}(z_1)+|z_2|^2<0.$$ Thus $\mathbb{P}(V_{-})$ is the region of ${\mathbb C}^{2}$ bounded by a paraboloid, called the {\it Siegel domain}. Its boundary $\mathbb{P}(V_{0})$ satisfies $$2{\rm Re}(z_1)+|z_2|^2=0.$$ The complex hyperbolic space is parameterized in {\it horospherical coordinates} by $\mathbb{C}\times \mathbb{R}\times \mathbb{R}^{+}$: $$(z,t,u)\rightarrow \left(\begin{matrix}\frac{-|z|^2-u+it}{2}\\ z\\1\end{matrix}\right).$$ The standard lift of the point at infinity is $$q_{\infty}=\left(\begin{matrix}1\\ 0\\0\end{matrix}\right).$$ Then $\mathbb{P}(V_{0})=\{\mathbb{C}\times \mathbb{R}\times \{0\}\}\cup \{q_{\infty}\}$. Therefore, the Siegel domain is analogous to the upper half-space model of the real hyperbolic space $H_{\mathbb{R}}^{n}$.
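As a sanity check on these coordinates, the standard lift of the point with horospherical coordinates $(z,t,u)$ has Hermitian square $-u$ with respect to the form above, so it lies in $V_{-}$ for $u>0$ and in $V_{0}$ for $u=0$. A short numerical sketch of this fact (illustrative only, not part of the paper):

```python
# Hermitian form <z,w> = z1*conj(w3) + z2*conj(w2) + z3*conj(w1) of signature (2,1).
def herm(z, w):
    return z[0] * w[2].conjugate() + z[1] * w[1].conjugate() + z[2] * w[0].conjugate()

def lift(z, t, u):
    """Standard lift of the point with horospherical coordinates (z, t, u)."""
    return ((-abs(z) ** 2 - u + 1j * t) / 2, z, 1 + 0j)

# <P, P> = -u: negative for u > 0 (inside H^2_C), zero for u = 0 (ideal boundary).
for z, t, u in [(1 + 2j, 3.0, 0.5), (0.2 - 1j, -1.0, 0.0), (0j, 0.0, 2.0)]:
    P = lift(z, t, u)
    val = herm(P, P)
    assert abs(val.imag) < 1e-12 and abs(val.real + u) < 1e-12
```

The computation expands to $2\,\mathrm{Re}\bigl(\tfrac{-|z|^2-u+it}{2}\bigr)+|z|^2=-u$, which is the algebraic identity behind the parameterization.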
If $$\mathbf{p}=\left(\begin{matrix} p_1\\ p_2\\p_3\end{matrix}\right),\quad \mathbf{q}=\left(\begin{matrix} q_1\\ q_2\\q_3\end{matrix}\right)$$ are lifts of $p,q$ in $\mathbf{H}^2_{\mathbb C}$, then the {\it Hermitian cross product} of $p$ and $q$ is defined by $$\mathbf{p}\boxtimes \mathbf{q}=\left(\begin{matrix} \overline{p_1q_2-p_2q_1}\\ \overline{p_3q_1-p_1q_3}\\ \overline{p_2q_3-p_3q_2}\end{matrix}\right).$$ This vector is orthogonal to $\mathbf{p}$ and $\mathbf{q}$ with respect to the Hermitian form $\langle \cdot,\cdot\rangle$. It is a Hermitian version of the Euclidean cross product. The {\it Bergman metric} $\rho$ on ${\bf H}^2_{\mathbb C}$ is given by the following $$ \cosh^2 \left(\frac{\rho(z,w)}{2}\right) = \frac{\langle \mathbf{z}, \mathbf{w} \rangle \langle \mathbf{w}, \mathbf{z} \rangle}{\langle \mathbf{z}, \mathbf{z} \rangle \langle \mathbf{w}, \mathbf{w} \rangle}, $$ where $ \mathbf{z}, \mathbf{w}\in V_{-} $ are the lifts of $z,w$ respectively. Note that this definition is independent of the choices of lifts. \subsection{The isometries} The complex hyperbolic plane is a K\"{a}hler manifold of constant holomorphic sectional curvature $-1$. We denote by $\mathbf{U}(2,1)$ the Lie group of $\langle \cdot,\cdot\rangle$ preserving complex linear transformations and by $\mathbf{PU}(2,1)$ the group modulo scalar matrices. The group of holomorphic isometries of ${\bf H}^2_{\mathbb C}$ is exactly $\mathbf{PU}(2,1)$. It is sometimes convenient to work with $\mathbf{SU}(2,1)$, which is a 3-fold cover of $\mathbf{PU}(2,1)$. 
The full isometry group of ${\bf H}^2_{\mathbb C}$ is given by $$\widehat{\mathbf{PU}(2,1)}=\langle \mathbf{PU}(2,1),\iota\rangle,$$ where $\iota$ is given on the level of homogeneous coordinates by complex conjugation ${\bf z}=(z_1,z_2,z_3)^T \mapsto {\bf \overline{z}}=(\overline{z}_1,\overline{z}_2,\overline{z}_3)^T$. Elements of $\mathbf{SU}(2,1)$ fall into three types, according to the number and types of the fixed points of the corresponding isometry. Namely, an isometry is {\it loxodromic} (resp. {\it parabolic}) if it has exactly two fixed points (resp. one fixed point) on $\partial {\bf H}^2_{\mathbb C}$. It is called {\it elliptic} when it has (at least) one fixed point inside ${\bf H}^2_{\mathbb C}$. An elliptic $A\in \mathbf{SU}(2,1)$ is called {\it regular elliptic} whenever it has three distinct eigenvalues, and {\it special elliptic} if it has a repeated eigenvalue. Suppose that a non-identity element $T\in \mathbf{SU}(2,1)$ has trace equal to 3. Then all eigenvalues of $T$ equal 1, that is, $T$ is {\it unipotent}. If $A\in\mathbf{SU}(2,1)$ fixes $q_{\infty}$, then it is upper triangular. We now examine the subgroup of $\mathbf{SU}(2,1)$ fixing $q_{\infty}$. Consider the map $T$ from $\partial {\bf H}^2_{\mathbb C}-\{q_{\infty}\}=\mathbb{C}\times \mathbb{R}$ to $\mathbf{GL}(3,\mathbb{C})$ given by $$T_{(z,t)}=\left(\begin{matrix} 1 & -\overline{z}& \frac{-|z|^{2}+it}{2} \\ 0 & 1 & z \\ 0 & 0 & 1 \end{matrix}\right).$$ It is easy to check that $T_{(z,t)}$ fixes $q_{\infty}$ and sends the origin of $\mathbb{C}\times \mathbb{R}$ to the point $(z,t)$. Moreover, composition of such elements gives $\partial {\bf H}^2_{\mathbb C}-\{q_{\infty}\}$ the structure of the Heisenberg group $$(z_1,t_1)\cdot(z_2,t_2)=\left(z_1+z_2,t_1+t_2+2{\rm Im}(z_1\overline{z}_2)\right)$$ and $T_{(z,t)}$ acts as the left {\it Heisenberg translation} by $(z,t)$ on $\partial {\bf H}^2_{\mathbb C}-\{q_{\infty}\}$. A Heisenberg translation by $(0,t)$ is called a {\it vertical translation} by $t$.
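The fact that matrix multiplication of the maps $T_{(z,t)}$ realizes the Heisenberg group law is easy to confirm numerically (an illustrative sketch with our own helper names):

```python
import numpy as np

def T(z, t):
    # the unipotent map T_{(z,t)} fixing q_infinity
    return np.array([[1, -np.conj(z), (-abs(z)**2 + 1j*t)/2],
                     [0, 1, z],
                     [0, 0, 1]])

def heis_mul(p, q):
    # Heisenberg group law (z1,t1).(z2,t2)
    (z1, t1), (z2, t2) = p, q
    return (z1 + z2, t1 + t2 + 2*np.imag(z1*np.conj(z2)))

a, b = (1 + 2j, 0.5), (-0.3 + 1j, 2.0)
# T_a T_b = T_{a.b}, so (z,t) -> T_{(z,t)} is a group homomorphism
err = np.max(np.abs(T(*a) @ T(*b) - T(*heis_mul(a, b))))
```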
The full stabilizer of $q_{\infty}$ is generated by the above unipotent group, together with the isometries of the form \begin{equation} \left(\begin{matrix} 1 & 0& 0 \\ 0 & e^{i\theta} & 0 \\ 0 & 0 & 1 \end{matrix}\right) \quad \mbox{and} \quad \left(\begin{matrix} \lambda & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1/\lambda \end{matrix}\right), \end{equation} where $\theta,\lambda\in \mathbb{R}$ and $\lambda \neq 0$. The first acts on $\partial {\bf H}^2_{\mathbb C}-\{q_{\infty}\}=\mathbb{C}\times \mathbb{R}$ as a rotation with vertical axis: $$(z,t)\mapsto (e^{i\theta}z,t),$$ whereas the second one acts as $$(z,t)\mapsto (\lambda z,\lambda^2 t).$$ The Heisenberg norm of $(z,t)\in \partial {\bf H}^2_{\mathbb C}-\{q_{\infty}\}$ is given by $$\parallel(z,t)\parallel=\left||z|^2+it\right|^{1/2}.$$ This gives rise to a metric, the {\it Cygan metric}, on the Heisenberg group by $$d((z_1,t_1),(z_2,t_2))=\parallel (z_1,t_1)^{-1}\cdot(z_2,t_2)\parallel.$$ The {\it Cygan sphere} with center $(z_0,t_0)$ and radius $r$ has equation $$d\left((z,t),(z_0,t_0)\right)^2=\left||z-z_0|^{2}+i(t-t_0+2{\rm Im}(z\overline{z}_0))\right|=r^2.$$ The Cygan metric can be extended to a metric on points $p$ and $q$ in ${\bf H}^2_{\mathbb C}$ with horospherical coordinates $(z_1,t_1,u_1)$ and $(z_2,t_2,u_2)$ by writing $$d\left(p,q\right)=\left||z_1-z_2|^{2}+|u_1-u_2|+i(t_1-t_2+2{\rm Im}(z_1\overline{z}_2))\right|^{1/2}.$$ \subsection{Totally geodesic submanifolds and complex reflections} There are two kinds of totally geodesic submanifolds of real dimension 2 in ${\bf H}^2_{\mathbb C}$: {\it complex lines} in ${\bf H}^2_{\mathbb C}$ are complex geodesics (represented by ${\bf H}^1_{\mathbb C}\subset {\bf H}^2_{\mathbb C}$) and {\it Lagrangian planes} in ${\bf H}^2_{\mathbb C}$ are totally real geodesic 2-planes (represented by ${\bf H}^2_{\mathbb R}\subset {\bf H}^2_{\mathbb C}$). Since the Riemannian sectional curvature of the complex hyperbolic space is nonconstant, there are no totally geodesic hyperplanes.
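Since $d(p,q)=\parallel p^{-1}\cdot q\parallel$ with $(z,t)^{-1}=(-z,-t)$, the Cygan metric is by construction invariant under left Heisenberg translations; a numeric sketch (helper names ours) confirming this:

```python
import numpy as np

def heis_mul(p, q):
    # Heisenberg group law
    (z1, t1), (z2, t2) = p, q
    return (z1 + z2, t1 + t2 + 2*np.imag(z1*np.conj(z2)))

def cygan(p, q):
    # d(p,q) = ||p^{-1}.q||, where (z,t)^{-1} = (-z,-t)
    z, t = heis_mul((-p[0], -p[1]), q)
    return abs(abs(z)**2 + 1j*t) ** 0.5

g, p, q = (1 - 1j, 3.0), (0.5j, -1.0), (2 + 1j, 0.25)
d_pq = cygan(p, q)
d_gpgq = cygan(heis_mul(g, p), heis_mul(g, q))  # same distance after translating by g
```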
Consider the complex hyperbolic space ${\bf H}^2_{\mathbb C}$ and its boundary $\partial{\bf H}^2_{\mathbb C}$. We define \emph{$\mathbb{C}$-circles} in $\partial{\bf H}^2_{\mathbb C}$ to be the boundaries of complex geodesics in ${\bf H}^2_{\mathbb C}$. Analogously, we define \emph{$\mathbb{R}$-circles} in $\partial{\bf H}^2_{\mathbb C}$ to be the boundaries of Lagrangian planes in ${\bf H}^2_{\mathbb C}$. Let $L$ be a complex line and $\partial L$ be its trace on the boundary $\partial{\bf H}^2_{\mathbb C}$. A {\it polar vector} of $L$ (or $\partial L$) is the unique vector (up to scalar multiplication) perpendicular to this complex line with respect to the Hermitian form. A polar vector belongs to $V_{+}$, and each vector in $V_{+}$ corresponds to a complex line or a $\mathbb{C}$-circle. Moreover, if $L$ is a complex line with polar vector ${\bf c}\in V_{+}$, then the {\it complex reflection} fixing $L$ is given by $$ I_{\bf c}({\bf z}) = -{\bf z}+2\frac{\langle {\bf z}, {\bf c} \rangle}{\langle {\bf c},{\bf c}\rangle}{\bf c}. $$ In the Heisenberg model, $\mathbb{C}$-circles are either vertical lines or ellipses whose projections to the $z$-plane are circles. Finite $\mathbb{C}$-circles are determined by a center and a radius. They may also be described using polar vectors. A finite $\mathbb{C}$-circle with center $(x+yi,z) \in \mathbb{C} \times \mathbb{R}$ and radius $r$ has polar vector $$\left(\begin{matrix} \frac{r^2-x^2-y^2+iz}{2}\\ x+y i \\ 1 \end{matrix}\right).$$ \subsection{Bisectors and spinal coordinates} In order to analyze the 2-faces of a Ford polyhedron, we must study the intersections of isometric spheres. Isometric spheres are special examples of bisectors. In this subsection, we will describe a convenient set of coordinates for bisector intersections, deduced from the slice decomposition. \begin{defn} Given two distinct points $p_0$ and $p_1$ in ${\bf H}^2_{\mathbb C}$ with the same norm (e.g.
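From the formula for $I_{\bf c}$, one sees that $I_{\bf c}({\bf c})={\bf c}$ and that $I_{\bf c}$ is an involution. A numeric sketch (with a hypothetical sample polar vector of our choosing):

```python
import numpy as np

def herm(z, w):
    # Hermitian form <z,w> = z1*conj(w3) + z2*conj(w2) + z3*conj(w1)
    return z[0]*np.conj(w[2]) + z[1]*np.conj(w[1]) + z[2]*np.conj(w[0])

def refl(c, z):
    # complex reflection I_c(z) = -z + 2 <z,c>/<c,c> c
    return -z + 2*herm(z, c)/herm(c, c)*c

c = np.array([1.0, 2.0 - 1j, 0.5j])       # sample vector with <c,c> = 5 > 0
z = np.array([0.3 + 1j, -2.0, 1.0 + 0.2j])
fix_err = np.max(np.abs(refl(c, c) - c))           # the polar vector is fixed
inv_err = np.max(np.abs(refl(c, refl(c, z)) - z))  # I_c is an involution
```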
one could take $\langle \mathbf{p}_0,\mathbf{p}_0\rangle=\langle \mathbf{p}_1,\mathbf{p}_1\rangle= -1$), the \emph{bisector} $\mathcal{B}(p_0,p_1)$ is the projectivization of the set of negative vectors $x$ with $$|\langle x,\mathbf{p_0}\rangle|=|\langle x,\mathbf{p_1}\rangle|.$$ \end{defn} The {\it spinal sphere} of the bisector $\mathcal{B}(p_0,p_1)$ is the intersection of $\partial {\bf H}^2_{\mathbb C}$ with the closure of $\mathcal{B}(p_0,p_1)$ in $\overline{{\bf H}^2_{\mathbb C}}= {\bf H}^2_{\mathbb C}\cup \partial { {\bf H}^2_{\mathbb C}}$. The bisector $\mathcal{B}(p_0,p_1)$ is a topological 3-ball, and its spinal sphere is a 2-sphere. The {\it complex spine} of $\mathcal{B}(p_0,p_1)$ is the complex line through the two points $p_0$ and $p_1$. The {\it real spine} of $\mathcal{B}(p_0,p_1)$ is the intersection of the complex spine with the bisector itself, which is a (real) geodesic; it is the locus of points inside the complex spine which are equidistant from $p_0$ and $p_1$. Bisectors are not totally geodesic, but they have a very nice foliation by two different families of totally geodesic submanifolds. Mostow \cite{Mostow:1980} showed that a bisector is the preimage of the real spine under the orthogonal projection onto the complex spine. The fibres of this projection are complex lines called the {\it complex slices} of the bisector. Goldman \cite{Go} showed that a bisector is the union of all Lagrangian planes containing the real spine. Such Lagrangian planes are called the {\it real slices} of the bisector. From the detailed analysis in \cite{Go}, we know that the intersection of two bisectors is usually not totally geodesic and can be somewhat complicated. In this paper, we shall only consider the intersection of coequidistant bisectors, i.e. bisectors equidistant from a common point. 
When $p,q$ and $r$ are not in a common complex line, that is, their lifts are linearly independent in $\mathbb {C}^{2,1}$, the locus $\mathcal{B}(p,q,r)$ of points in $ {\bf H}^2_{\mathbb C}$ equidistant to $p,q$ and $r$ is a smooth disk that is not totally geodesic, and is often called a \emph{Giraud disk}. The following property is crucial when studying fundamental domains. \begin{prop}[Giraud] If $p,q$ and $r$ are not in a common complex line, then $\mathcal{B}(p,q,r)$ is contained in precisely three bisectors, namely $\mathcal{B}(p,q), \mathcal{B}(q,r)$ and $\mathcal{B}(p,r)$. \end{prop} Note that checking whether an isometry maps a Giraud disk to another is equivalent to checking that the corresponding triples of points are mapped to each other. In order to study Giraud disks, we will use spinal coordinates. The complex slices of $\mathcal{B}(p,q)$ are given explicitly by choosing a lift $\mathbf{p}$ (resp. $\mathbf{q}$) of $p$ (resp. $q$). When $p,q\in {\bf H}^2_{\mathbb C}$, we simply choose lifts such that $\langle \mathbf{p},\mathbf{p}\rangle= \langle \mathbf{q},\mathbf{q}\rangle$. In this paper, we will mainly use this parametrization when $p,q\in \partial{\bf H}^2_{\mathbb C}$. In that case, all lifts are null vectors and the condition $\langle \mathbf{p},\mathbf{p}\rangle= \langle \mathbf{q},\mathbf{q}\rangle$ is vacuous. We then choose some fixed lift $\mathbf{p}$ for the center of the Ford domain, and we take $ \mathbf{q}=G( \mathbf{p})$ for some $G\in \mathbf{SU}(2,1)$. The complex slices of $\mathcal{B}(p,q)$ are obtained as the set of negative lines $(\overline{z}\mathbf{p}-\mathbf{q})^{\bot}$ in ${\bf H}^2_{\mathbb C}$ for some arc of values of $z\in S^1$, which is determined by requiring that $\langle \overline{z}\mathbf{p}-\mathbf{q},\overline{z}\mathbf{p}-\mathbf{q}\rangle>0$.
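The spinal parametrization rests on two properties of the Hermitian cross product: $\mathbf{p}\boxtimes\mathbf{q}$ is $\langle\cdot,\cdot\rangle$-orthogonal to its arguments, and, since $\boxtimes$ is antisymmetric and conjugate-linear in each slot, $(\overline{z}_1\mathbf{p}-\mathbf{q})\boxtimes (\overline{z}_2\mathbf{p}-\mathbf{r})=\mathbf{q}\boxtimes \mathbf{r}+z_1 \mathbf{r}\boxtimes \mathbf{p}+z_2 \mathbf{p}\boxtimes \mathbf{q}$. Both identities are easy to confirm numerically (an illustrative sketch):

```python
import numpy as np
rng = np.random.default_rng(0)

def herm(z, w):
    return z[0]*np.conj(w[2]) + z[1]*np.conj(w[1]) + z[2]*np.conj(w[0])

def box(p, q):
    # Hermitian cross product p boxtimes q
    return np.conj(np.array([p[0]*q[1] - p[1]*q[0],
                             p[2]*q[0] - p[0]*q[2],
                             p[1]*q[2] - p[2]*q[1]]))

p, q, r = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))
orth_err = max(abs(herm(box(p, q), p)), abs(herm(box(p, q), q)))
z1, z2 = np.exp(0.7j), np.exp(-1.9j)
expand_err = np.max(np.abs(box(np.conj(z1)*p - q, np.conj(z2)*p - r)
                           - (box(q, r) + z1*box(r, p) + z2*box(p, q))))
```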
Since a point of the bisector is on precisely one complex slice, we can parameterize the Giraud torus $\hat{\mathcal{B}}(p,q,r)$ by $(z_1,z_2)=(e^{it_1},e^{it_2})\in S^1\times S^1 $ via $$V(z_1,z_2)=(\overline{z}_1\mathbf{p}-\mathbf{q})\boxtimes (\overline{z}_2\mathbf{p}-\mathbf{r})=\mathbf{q}\boxtimes \mathbf{r}+z_1 \mathbf{r}\boxtimes \mathbf{p}+z_2 \mathbf{p}\boxtimes \mathbf{q}.$$ The Giraud disk $\mathcal{B}(p,q,r)$ corresponds to the $(z_1,z_2)\in S^1\times S^1 $ with $$\langle V(z_1,z_2),V(z_1,z_2)\rangle<0.$$ It follows from the fact that the bisectors are covertical that this region is a topological disk, see \cite{Go}. The boundary at infinity $\partial\mathcal{B}(p,q,r)$ is a circle, given in spinal coordinates by the equation $$\langle V(z_1,z_2),V(z_1,z_2)\rangle=0.$$ A defining equation for the trace of another bisector $\mathcal{B}(u,v)$ on the Giraud disk $\mathcal{B}(p,q,r)$ can be written in the form $$| \langle V(z_1,z_2),u\rangle|=| \langle V(z_1,z_2),v\rangle|,$$ provided that $u$ and $v$ are suitably chosen lifts. The expressions $\langle V(z_1,z_2),u\rangle$ and $\langle V(z_1,z_2),v\rangle$ are affine in $z_1, z_2$. This triple bisector intersection can therefore be parameterized fairly explicitly, because one can solve the equation $$|\langle V(z_1,z_2),u\rangle|^2=|\langle V(z_1,z_2),v\rangle|^2 $$ for one of the variables $z_1$ or $z_2$ simply by solving a quadratic equation. A detailed explanation of how this works can be found in \cite{Deraux:2016,dpp}. \subsection{Isometric spheres and Ford domain} \begin{defn} For any $G\in \mathbf{SU}(2,1)$ that does not fix $q_{\infty}$, the \emph{isometric sphere} of $G$, denoted by $\mathcal{I}(G)$, is defined to be $$\mathcal{I}(G)=\{p\in {\bf H}^2_{\mathbb C} \cup \partial{\bf H}^2_{\mathbb C}: |\langle \mathbf{p},q_{\infty}\rangle|=|\langle \mathbf{p},G^{-1}(q_{\infty})\rangle|\},$$ where $ \mathbf{p}$ is the standard lift of $p\in {\bf H}^2_{\mathbb C} \cup \partial{\bf H}^2_{\mathbb C}$.
\end{defn} The {\it interior} of $\mathcal{I}(G)$ is the component of its complement in ${\bf H}^2_{\mathbb C} \cup \partial{\bf H}^2_{\mathbb C}$ that does not contain $q_{\infty}$, namely, $$\{p\in {\bf H}^2_{\mathbb C} \cup \partial{\bf H}^2_{\mathbb C}: |\langle \mathbf{p},q_{\infty}\rangle|>|\langle \mathbf{p},G^{-1}(q_{\infty})\rangle|\}.$$ The {\it exterior} of $\mathcal{I}(G)$ is the component that contains the point at infinity $q_{\infty}$. Suppose that $G\in \mathbf{SU}(2,1)$, written as a $3\times 3$ complex matrix $(g_{ij})_{1\leq i,j\leq 3}$, does not fix $q_{\infty}$. Then $G^{-1}(q_{\infty})=(\overline{g}_{33},\overline{g}_{32},\overline{g}_{31})^{T}$ and $g_{31}\neq 0$. In horospherical coordinates, $$G^{-1}(q_{\infty})=\left(\frac{\overline{g}_{32}}{\overline{g}_{31}}, 2{\rm Im}\left(\frac{\overline{g}_{33}}{\overline{g}_{31}}\right)\right).$$ We can also describe the isometric sphere of $G$ by using the Cygan metric. Concretely, the isometric sphere $\mathcal{I}(G)$ is the Cygan sphere in ${\bf H}^2_{\mathbb C} \cup \partial{\bf H}^2_{\mathbb C}$ with center $G^{-1}(q_{\infty})$ and radius $r_G=\sqrt{\frac{2}{|g_{31}|}}$. Furthermore, we have \begin{prop} Let $G\in \mathbf{SU}(2,1)$ be an isometry of ${\bf H}^2_{\mathbb C}$ not fixing $q_{\infty}$. \begin{itemize} \item The transformation $G$ maps $\mathcal{I}(G)$ to $\mathcal{I}(G^{-1})$, and the interior of $\mathcal{I}(G)$ to the exterior of $\mathcal{I}(G^{-1})$. \item For any $A\in\mathbf{SU}(2,1)$ fixing $q_{\infty}$ and such that the corresponding eigenvalues have unit modulus, we have $\mathcal{I}(G)=\mathcal{I}(AG)$. \end{itemize} \end{prop} \begin{defn}The \emph{Ford domain} $D_{\Gamma}$ for a discrete group $\Gamma < \mathbf{PU}(2,1)$ centered at $q_{\infty}$ is the intersection of the (closures of the) exteriors of all isometric spheres for elements of $\Gamma$ not fixing $q_{\infty}$.
That is, $$D_{\Gamma}=\{p\in {\bf H}^2_{\mathbb C} \cup \partial{\bf H}^2_{\mathbb C}: |\langle \mathbf{p},q_{\infty}\rangle|\leq|\langle \mathbf{p},G^{-1}(q_{\infty})\rangle| \ \forall G\in \Gamma \ \mbox{with} \ G(q_{\infty})\neq q_{\infty} \}.$$ \end{defn} From the definition, one can see that isometric spheres form the boundary of the Ford domain. When $q_{\infty}$ is either in the domain of discontinuity or is a parabolic fixed point, the Ford domain is preserved by $\Gamma_{\infty}$, the stabilizer of $q_{\infty}$ in $\Gamma$. In this case, $D_{\Gamma}$ is only a fundamental domain modulo the action of $\Gamma_{\infty}$. In other words, a fundamental domain for $\Gamma$ is the intersection of the Ford domain with a fundamental domain for $\Gamma_{\infty}$. Facets of codimension one, two, three and four in $D_{\Gamma}$ will be called {\it sides}, {\it ridges}, {\it edges} and {\it vertices}, respectively. Moreover, a {\it bounded ridge} is a ridge which does not intersect $\partial {\bf H}^2_{\mathbb C}$; if the intersection of a ridge $r$ and $\partial {\bf H}^2_{\mathbb C}$ is non-empty, then $r$ is an {\it infinite ridge}. It is usually very hard to determine $D_{\Gamma}$ because one must check infinitely many inequalities. A general method is therefore to guess the Ford polyhedron and verify the guess using the Poincar\'e polyhedron theorem. The basic idea is that the sides of $D_{\Gamma}$ should be paired by isometries, and the images of $D_{\Gamma}$ under these so-called side-pairing maps should give a local tiling of ${\bf H}^2_{\mathbb C}$. If they do (and if the quotient of $D_{\Gamma}$ by the identification given by the side-pairing maps is complete), then the Poincar\'{e} polyhedron theorem implies that the images of $D_{\Gamma}$ actually give a global tiling of ${\bf H}^2_{\mathbb C}$.
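The defining condition of an isometric sphere, and the fact that $G$ maps $\mathcal{I}(G)$ to $\mathcal{I}(G^{-1})$, can be tested numerically. In the following illustrative sketch (helper names ours) we use the matrix $B$ of Section \ref{section:comb344} and the boundary point with Heisenberg coordinates $(3+i,4)$, which turns out to lie on $\mathcal{I}(B)$:

```python
import numpy as np

def herm(z, w):
    return z[0]*np.conj(w[2]) + z[1]*np.conj(w[1]) + z[2]*np.conj(w[0])

B = np.array([[(1 - 2j)/2, (1 - 1j)/2, (-5 - 2j)/2],
              [(-1 + 1j)/2, -1 + 1j, (3 + 1j)/2],
              [-1/2, (-1 + 1j)/2, 1/2]])
q_inf = np.array([1, 0, 0])
Binv = np.linalg.inv(B)

# lift of the boundary point with Heisenberg coordinates (z,t) = (3+i, 4)
z, t = 3 + 1j, 4
p = np.array([(-abs(z)**2 + 1j*t)/2, z, 1])

on_IB = abs(abs(herm(p, q_inf)) - abs(herm(p, Binv @ q_inf)))   # p on I(B)
Bp = B @ p
on_IBinv = abs(abs(herm(Bp, q_inf)) - abs(herm(Bp, B @ q_inf)))  # B(p) on I(B^{-1})
```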
Once a fundamental domain is obtained, one gets an explicit presentation of $\Gamma$ in terms of the generators given by the side-pairing maps together with a generating set for the stabilizer $\Gamma_{\infty}$; the relations correspond to so-called ridge cycles, which describe the local tiling near each codimension-two face. For more on the Poincar\'e polyhedron theorem, see \cite{dpp, ParkerWill:2016}. \section{The representation of the triangle group $\Delta_{3,4,\infty;n}$}\label{sec-gens} We will explicitly give a matrix representation of $\Delta_{3,4,\infty; n}$ in $\mathbf{SU}(2,1)$ for $n=4,6$. First note that $\Delta_{3,4,\infty;n}$ is isomorphic to $\Delta_{3,4,n;\infty}$ as subgroups of $\mathbf{SU}(2,1)$ \cite{kpt, Thompson:2010}. Let $I_1$, $I_2$ and $I_3$ be the three complex reflections generating $\Delta_{3,4,\infty;n}$. We normalize the complex reflections $I_1$ and $I_2$ in $\mathbf{SU}(2,1)$ so that $I_1I_2$ is a parabolic element fixing $q_{\infty}$. By an argument similar to that in \cite{MaX:2020}, the matrices of $I_1$ and $I_2$ are given by \begin{equation*}\label{eq-I1-I2} I_1=\left[\begin{matrix} -1 & 0& 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{matrix}\right], \quad I_2=\left[\begin{matrix} -1 & -2 & 2 \\ 0 & 1 & -2 \\ 0 & 0 & -1 \end{matrix}\right] \end{equation*} and the matrix of $I_3$ is \begin{equation*}\label{eq-I3} I_3=\left[\begin{matrix} -\frac{1}{2}& \frac{5-\sqrt{23}i}{12}& \frac{1}{3} \\ \frac{5+\sqrt{23}i}{8} &0 &\frac{5+\sqrt{23}i}{12} \\ \frac{3}{4} & \frac{5-\sqrt{23}i}{8} & -\frac{1}{2} \end{matrix}\right], \quad I_3=\left[\begin{matrix} -\frac{1}{2}& \frac{5-\sqrt{23}i}{12}& \frac{1}{3} \\ \frac{5+\sqrt{23}i}{8} &0 &\frac{5+\sqrt{23}i}{12} \\ \frac{3}{4} & \frac{5-\sqrt{23}i}{8} & -\frac{1}{2} \end{matrix}\right] \end{equation*} for $n=4,6$ respectively.
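One can confirm numerically (an illustrative sketch) that the displayed $I_3$ is indeed an order-two complex reflection in $\mathbf{SU}(2,1)$: it is an involution, it preserves the Hermitian form, and it has determinant one:

```python
import numpy as np

s23 = 1j*np.sqrt(23)
I3 = np.array([[-1/2, (5 - s23)/12, 1/3],
               [(5 + s23)/8, 0, (5 + s23)/12],
               [3/4, (5 - s23)/8, -1/2]])
J = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])  # matrix of the Hermitian form

inv_err = np.max(np.abs(I3 @ I3 - np.eye(3)))        # I3 is an involution
form_err = np.max(np.abs(I3.conj().T @ J @ I3 - J))  # I3 preserves <.,.>
det_err = abs(np.linalg.det(I3) - 1)                 # det = 1
```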
\section{The combinatorics of the Ford domain for the even subgroup of $\Delta_{3,4,\infty;4}$}\label{section:comb344} In this section, we will construct the Ford domain for the subgroup of $\Delta_{3,4,\infty;4}$ generated by the elements $I_1I_2$ and $I_2I_3$. We will also describe the combinatorial structure of the Ford domain. \begin{defn} Let $\Gamma_1$ be the even length subgroup of the $\Delta_{3,4,\infty;4}$-triangle group, i.e. the subgroup generated by $I_1I_2$ and $I_2I_3$. We define $A=I_1I_2, B=I_2I_3$, and write $a,b$ for $A^{-1}, B^{-1}$ respectively. \end{defn} One easily checks that \begin{equation}\label{eq-a-b} A=\left[\begin{matrix} 1 & 2& -2 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{matrix}\right], \quad B=\left[\begin{matrix} \frac{1-2i}{2} & \frac{1-i}{2} & \frac{-5-2i}{2}\\ \frac{-1+i}{2}& -1+i & \frac{3+i}{2} \\ -\frac{1}{2} & \frac{-1+i}{2}& \frac{1}{2} \end{matrix}\right]. \end{equation} First, we give a brief summary of the method used to get the Ford domain. A reduced word in the letters $A,B,a,b$ is called an \emph{essential word} in $\Gamma_1$ if neither the head nor the tail of the word is $A$ or $a$, and the word cannot be reduced to a shorter one in $\Gamma_1$. For example, $BaB$, $Bab$, $baaB$ are essential words and $aB$, $BA$, $baBA$ are not essential words. Note that $B^3=id$ in $\Gamma_1$, so neither $B^2$ nor $BAB^2$ is an essential word, but $b$ and $BAb$ are essential words. One can find that \begin{itemize} \item the essential words for the group elements of length 1 are $b,B$; \item there are no essential words for the group elements of length 2; \item the essential words for the group elements of length 3 are $Bab$, $BaB$, $BAB$, $bAB$ and their inverses; \item the essential words for the group elements of length 4 are $Baab$, $BaaB$, $BAAB$, $bAAB$ and their inverses. \end{itemize} Let $S_n$ be the set of group elements in $\Gamma_1$ that can be expressed as essential words of length at most $n$.
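Two facts used above are easy to confirm numerically (an illustrative sketch): $A$ is unipotent, hence a parabolic element fixing $q_\infty$, and $B$ has trace $0$ and determinant $1$, so its characteristic polynomial is $\lambda^3-1$ and $B^3=\mathrm{id}$:

```python
import numpy as np

A = np.array([[1, 2, -2], [0, 1, 2], [0, 0, 1]], dtype=complex)
B = np.array([[(1 - 2j)/2, (1 - 1j)/2, (-5 - 2j)/2],
              [(-1 + 1j)/2, -1 + 1j, (3 + 1j)/2],
              [-1/2, (-1 + 1j)/2, 1/2]])

nilp_err = np.max(np.abs(np.linalg.matrix_power(A - np.eye(3), 3)))  # (A-I)^3 = 0
tr_B = np.trace(B)                                                    # expect 0
det_B = np.linalg.det(B)                                              # expect 1
order_err = np.max(np.abs(np.linalg.matrix_power(B, 3) - np.eye(3)))  # B^3 = id
```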
The set $S_n$ is symmetric, that is, if $g\in S_n$, then $g^{-1}$ also belongs to $S_n$. The polyhedron $D_{S_n}$ will be the intersection of the exteriors of $\mathcal{I}(g)$ for all $g\in S_n$ and all their translates by powers of $A$. The polyhedron $D_{S_n}$ is also called a \textsl{partial Ford domain}. Start with $D_{S_3}$ or $D_{S_4}$ and do the following: (1) Check whether the polyhedron $D_{S_n}$ is the Ford domain by using the Poincar\'e polyhedron theorem. If it is, we have found the Ford domain for $\Gamma_1$ and we stop the procedure. Else go to Step (2). (2) $D_{S_n}$ does not have side-pairings. We add some essential words of length $n+1$ to $S_n$ and go to Step (1). We do not know a priori that the procedure stops after finitely many steps; see Bowditch's result \cite{Bowditch:1993}. Fortunately, the procedure terminates quickly in our cases. We find the set $S^*$ consisting of the essential words $$b,B,Bab,BAb,BaB,bAb, BAB,bab,bAB,baB.$$ We denote by $D_{S^*}$ the intersection of the exteriors of the isometric spheres of these ten elements and all their translates by powers of $A$. The polyhedron $D_{S^*}$ is our guess for the Ford domain of $\Gamma_1$. We will show that \begin{thm} \label{thm:Ford} $D_{S^*}$ is the Ford domain of $\Gamma_1$. \end{thm} The tool for the proof of Theorem \ref{thm:Ford} is the Poincar\'e polyhedron theorem. The key step in the verification of the hypotheses of the Poincar\'e polyhedron theorem will be the determination of the combinatorics of $D_{S^*}$. We start with the isometric spheres of $b,BAb,bAb,bab,baB$ and their inverses. First, we fix some notation: \begin{defn} For $k\in \mathbb{Z}$, let $\mathcal{I}_{g}^{k}$ be the isometric sphere of $A^{k}gA^{-k}$, which is $\mathcal{I}(A^{k}gA^{-k})=A^{k}\mathcal{I}(g)$, where $g\in S^*$ and $\mathcal{I}_{g}^{0}=\mathcal{I}_{g}$.
The spinal sphere corresponding to $\mathcal{I}_{g}^{k}$ will be denoted by $\mathcal{S}_{g}^{k}$, where $\mathcal{S}_{g}^{0}=\mathcal{S}_{g}$. \end{defn} Note that $$\mathcal{I}(bAb)=\mathcal{I}(ABaBa),\ \mathcal{I}(bab)=\mathcal{I}(aBABA). $$ So the isometric spheres of the following eight group elements $$b,B,BAb,Bab,bAb,bab,baB,bAB$$ and their conjugates by powers of $A$ will define all the sides of our Ford domain. We summarize the information about these isometric spheres in Table \ref{table:centerradius}. \begin{table}[htbp] \caption{The centers and radii of the eight spinal spheres.} \centering \begin{tabular}{c c c| c c c} \hline Spinal sphere & Center & Radius & Spinal sphere& Center & Radius \\ [1 ex] \hline $\mathcal{S}_{b}$ & $[1,-1,4]$ & $2$ & $\mathcal{S}_{B}$ & $[1,1,0]$ & $2$ \\ [2 ex] $\mathcal{S}_{baB}$ & $[\frac{7}{5},\frac{1}{5},4]$ & $\frac{2}{5^{\frac{1}{4}}}$ & $\mathcal{S}_{bAB}$ &$[\frac{3}{5},\frac{1}{5},-\frac{4}{5}]$ & $\frac{2}{5^{\frac{1}{4}}}$ \\[2 ex] $\mathcal{S}_{bAb}$ & $[0,0,4]$ & $\sqrt{2}$ & $\mathcal{S}_{bab}$ & $[2,0,0]$ & $\sqrt{2}$ \\[2 ex] $\mathcal{S}_{BAb}$ & $[\frac{3}{5},-\frac{1}{5},\frac{24}{5}]$ & $\frac{2}{5^{\frac{1}{4}}}$ & $\mathcal{S}_{Bab}$ & $[\frac{7}{5},-\frac{1}{5},0]$ & $\frac{2}{5^{\frac{1}{4}}}$ \\ [2 ex] \hline \end{tabular} \label{table:centerradius} \end{table} Note that the spinal spheres in Table \ref{table:centerradius} do not contain the point $q_{\infty}$. So they are bounded sets in $\partial {\bf H}^2_{\mathbb C}-\{q_{\infty}\}$. For any two spinal spheres $\mathcal{S}_g$ and $\mathcal{S}_h$, we have $A^{k}(\mathcal{S}_g)\cap \mathcal{S}_h=\emptyset$ whenever $k$ is large enough. It is well known that two spinal spheres intersect if and only if the corresponding bisectors intersect. By some simple calculations, we obtain the rough estimates listed in Table \ref{table:intersecton}.
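The entries of Table \ref{table:centerradius} follow from the center-and-radius description of isometric spheres in Section \ref{sec-back}; for instance, the rows for $\mathcal{S}_{B}$ and $\mathcal{S}_{b}$ can be reproduced numerically (an illustrative sketch, helper names ours):

```python
import numpy as np

B = np.array([[(1 - 2j)/2, (1 - 1j)/2, (-5 - 2j)/2],
              [(-1 + 1j)/2, -1 + 1j, (3 + 1j)/2],
              [-1/2, (-1 + 1j)/2, 1/2]])

def center_radius(G):
    # center G^{-1}(q_inf) = (conj(g32)/conj(g31), 2 Im(conj(g33)/conj(g31))),
    # radius sqrt(2/|g31|)
    z = np.conj(G[2, 1]) / np.conj(G[2, 0])
    t = 2*np.imag(np.conj(G[2, 2]) / np.conj(G[2, 0]))
    return np.array([z.real, z.imag, t]), np.sqrt(2/abs(G[2, 0]))

cB, rB = center_radius(B)                 # expect center [1, 1, 0], radius 2
cb, rb = center_radius(np.linalg.inv(B))  # expect center [1, -1, 4], radius 2
```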
\begin{table}[htbp] \caption{The intersections of $\mathcal{I}_{B}$ and its neighbouring isometric spheres.} \centering \begin{tabular}{c c | c c} \hline Intersection & The value of $k$ & Intersection & The value of $k$ \\ [1 ex] \hline $\mathcal{I}_{B}\cap \mathcal{I}_{B}^{k}$ & $k=\pm 1$ &$\mathcal{I}_{B}\cap \mathcal{I}_{b}^{k}$ & $k=0, \pm1$ \\ [2 ex] $\mathcal{I}_{B}\cap \mathcal{I}_{baB}^{k}$ & $k=0,\pm1$ &$\mathcal{I}_{B}\cap \mathcal{I}_{bAB}^{k}$ & $k=0,\pm1$ \\ [2 ex] $\mathcal{I}_{B}\cap \mathcal{I}_{BaB}^{k}$ & $k=0,\pm1 $ &$\mathcal{I}_{B}\cap \mathcal{I}_{BAB}^{k}$ & $k=0,\pm1$ \\ [2 ex] $\mathcal{I}_{B}\cap \mathcal{I}_{BAb}^{k}$ & $k=0, \pm1 $ &$\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{k}$ & $k=0,\pm1$ \\ [2 ex] \hline \end{tabular} \label{table:intersecton} \end{table} The intersection of $\mathcal{I}_{B}$ and $D_{S^*}$ is a 3-face, that is, a side of the partial Ford domain $D_{S^*}$. The intersection $\mathcal{I}_{B} \cap D_{S^*}$ is a 3-dimensional polytope; we describe its combinatorics in the following proposition, which is very important for the proof of Theorem \ref{thm:main}. \begin{prop}\label{prop: combinatorics} The combinatorics of all the 2-faces (ridges) of $\mathcal{I}_{B} \cap D_{S^*}$ are listed as follows: \begin{itemize} \item $\mathcal{I}_{B}\cap \mathcal{I}_{baB}^{\pm 1} \cap D_{S^*}$, $\mathcal{I}_{B}\cap \mathcal{I}_{bAB}^{\pm 1} \cap D_{S^*}$, $\mathcal{I}_{B}\cap \mathcal{I}_{BaB}^{-1} \cap D_{S^*}$, $\mathcal{I}_{B}\cap \mathcal{I}_{BAB}^{1} \cap D_{S^*}$, $\mathcal{I}_{B}\cap \mathcal{I}_{BAb}^{ 1} \cap D_{S^*}$ and $\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{ -1} \cap D_{S^*}$ are empty; \item $\mathcal{I}_{B}\cap \mathcal{I}_{BAb}^{-1} \cap D_{S^*}$ and $\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1} \cap D_{S^*}$ are combinatorial triangles with one side at infinity; this means, for example, that $\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1} \cap D_{S^*}$ is a triangle with one ideal side in $\partial {\bf H}^2_{\mathbb C}$.
See part (f) and (e) of Figure \ref{fig:B}; \item $\mathcal{I}_{B}\cap \mathcal{I}_{BaB}^{1} \cap D_{S^*}$, $\mathcal{I}_{B}\cap \mathcal{I}_{BAB}^{-1} \cap D_{S^*}$, $\mathcal{I}_{B}\cap \mathcal{I}_{BAb} \cap D_{S^*}$ and $\mathcal{I}_{B}\cap \mathcal{I}_{Bab} \cap D_{S^*}$ are combinatorial quadrangles with one side at infinity. See part (h), (g), (m) and (l) of Figure \ref{fig:B}; \item $\mathcal{I}_{B}\cap \mathcal{I}_{B}^{\pm 1} \cap D_{S^*}$, $\mathcal{I}_{B}\cap \mathcal{I}_{baB} \cap D_{S^*}$ and $\mathcal{I}_{B}\cap \mathcal{I}_{bAB} \cap D_{S^*}$ are combinatorial pentagons with one side at infinity. See part (c), (d), (j) and (k) of Figure \ref{fig:B}; \item $\mathcal{I}_{B}\cap \mathcal{I}_{b} \cap D_{S^*}$ is a combinatorial dodecagon with no side at infinity. See part (i) of Figure \ref{fig:B}; \item The combinatorial types of $\mathcal{I}_B\cap \mathcal{I}_{b}^{1} \cap D_{S^*}$, $\mathcal{I}_B\cap \mathcal{I}_{b}^{-1} \cap D_{S^*}$, $\mathcal{I}_B\cap \mathcal{I}_{BaB} \cap D_{S^*}$ and $\mathcal{I}_B\cap \mathcal{I}_{BAB} \cap D_{S^*}$ are the same. Each is a union of a pentagon including a side at infinity and a quadrangle with a common vertex. See part (a), (b), (o) and (n) of Figure \ref{fig:B}. \end{itemize} \end{prop} We summarize the combinatorics of all 2-faces of the side $\mathcal{I}_B\cap D_{S^*}$ in Figure \ref{fig:B}. See Figures \ref{fig:bAB}, \ref{fig:bab}, \ref{fig:bAb} and \ref{fig:Bab} for the combinatorics of all 2-faces of the sides $\mathcal{I}_{bAB}\cap D_{S^*}$, $\mathcal{I}_{bab}\cap D_{S^*}$, $\mathcal{I}_{bAb}\cap D_{S^*}$ and $\mathcal{I}_{Bab}\cap D_{S^*}$ respectively. Using methods similar to those in \cite{Deraux:2016, dpp}, we give some examples of the calculations; the other routine calculations will be omitted. Throughout the calculation, we denote by $\hat{\mathcal{I}}_{B}$ the extor in the projective space extending the bisector $\mathcal{I}_{B}$ (see \cite{Go} for a definition of extor).
\begin{prop} $\mathcal{I}_{B}\cap \mathcal{I}_{baB}^{1}$ is a Giraud disk, which is entirely contained in the interior of the isometric sphere of $b$. \end{prop} \begin{proof}The Giraud torus $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{baB}^{1}$ can be parameterized by the vector $$V=(\overline{z}_1p_0-b(p_0))\boxtimes (\overline{z}_2p_0-AbAB(p_0)),$$ with $|z_1|=|z_2|=1$. One can choose the sample point \begin{eqnarray*} X_0&=&\left(\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)p_0-b(p_0)\right)\boxtimes \left(p_0-AbAB(p_0)\right)\\ & = & \left(\frac{5}{4}+\frac{\sqrt{3}}{4}+i\left(-\frac{1}{4}+\frac{\sqrt{3}}{4}\right),-\frac{3}{4}+\frac{\sqrt{3}}{2}+i\left(-\frac{1}{2}+\frac{\sqrt{3}}{4}\right),i\right) \end{eqnarray*} to show that $\mathcal{I}_{B}\cap \mathcal{I}_{baB}^{1}$ is a topological disk, since $\langle X_0,X_0\rangle=\frac{9}{4}-\frac{3\sqrt{3}}{2}<0$. Writing out $z_j=x_j+iy_j$ for real $x_j$ and $y_j$, the intersection $\partial_{\infty}(\mathcal{I}_{B}\cap \mathcal{I}_{baB}^{1})\cap \hat{\mathcal{I}}_{b}$ is described by the solutions of the following system $$\left\{ \begin{aligned} &\frac{11}{2}-2x_1+4y_1-2x_2-\frac{x_1x_2}{2}-y_1x_2+x_1y_2-\frac{y_1y_2}{2}=0 \\ &x_1-2y_1+x_2-\frac{x_1x_2}{2}+y_1x_2-x_1y_2-\frac{y_1y_2}{2}-\frac{3}{2}=0 \\ &x_1^2 + y_1^2-1=0\\ & x_2^2 + y_2^2-1=0 \end{aligned} \right. $$ It is easy to check that this system has no real solutions. That is, the intersection $\partial_{\infty}(\mathcal{I}_{B}\cap \mathcal{I}_{baB}^{1})\cap \hat{\mathcal{I}}_{b}$ is empty. One can also exhibit a single point of $\mathcal{I}_{B}\cap \mathcal{I}_{baB}^{1}$ lying inside the isometric sphere of $b$. Therefore $\mathcal{I}_{B}\cap \mathcal{I}_{baB}^{1}$ is entirely contained in the interior of the isometric sphere of $b$. \end{proof} \begin{prop} $\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1} \cap D_{S^*}$ is a combinatorial triangle.
\end{prop} \begin{proof} The Giraud disk $\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1}$ can be parameterized by negative vectors of the form $$V=(\overline{z}_1p_0-b(p_0))\boxtimes (\overline{z}_2p_0-ABAb(p_0)),$$ where $|z_1|=|z_2|=1$. Explicitly, this $V$ can be written as $V(z_1,z_2)=v_0+z_1v_1+z_2v_2$, where \begin{align*} v_0&=\left(\frac{1}{2},\frac{i}{2},1+\frac{i}{2}\right),\\ v_1&=\left(\frac{-1-i}{2},-\frac{1}{2}-i,0\right), \\ v_2&=\left(\frac{-1+i}{2},\frac{1}{2},0\right). \end{align*} Furthermore, $\langle V,V\rangle<0$ can be written as $${\rm Re} \left(\frac{11}{4}-\frac{5}{2}z_1-\left(\frac{1}{2}-i\right)z_2-\left(\frac{1}{2}-i\right)z_2\overline{z}_1\right)<0.$$ In order to encode the combinatorial structure of this ridge, we need to study the intersection of this Giraud disk with the isometric spheres given in Table \ref{table:intersecton}. The equation of the intersection with $\mathcal{I}_g$ is given by $$|\langle V(z_1,z_2),p_0\rangle|^2=|\langle V(z_1,z_2),g^{-1}(p_0)\rangle|^2.$$ We start by studying the intersection of $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}$ with $\hat{\mathcal{I}}_{Aba}$. Writing out $z_j=x_j+iy_j$ for real $x_j$ and $y_j$, the intersection $\partial_\infty(\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1})\cap \hat{\mathcal{I}}_{Aba}$ is described by the solutions of the system $$\left\{ \begin{aligned} &y_1-x_2y_1-2y_2+y_1y_2+x_1(1+x_2+y_2)=0 \\ &11-2x_2+4x_2y_1-4y_2-2y_1y_2-10x_1-2x_1x_2-4x_1y_2=0 \\ &x_1^2 + y_1^2-1=0\\ & x_2^2 + y_2^2-1=0 \end{aligned} \right. $$ This system has exactly two solutions, given approximately by $$x_1=0.378005..., y_1=-0.925803..., x_2 =0.960944..., y_2=0.276744...$$ and $$x_1=0.987712..., y_1=0.156285..., x_2=-0.872037..., y_2=0.48944....$$ So the curve $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}\cap \hat{\mathcal{I}}_{Aba}$ intersects $\partial_\infty(\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1})$ in two points.
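Both the displayed real form of $\langle V,V\rangle$ and the approximate boundary solutions can be verified numerically. In the following illustrative sketch we write the coefficient vectors of the affine parametrization as $v_0, v_1, v_2$ with the values listed above:

```python
import numpy as np

v0 = np.array([1/2, 1j/2, 1 + 1j/2])
v1 = np.array([(-1 - 1j)/2, -1/2 - 1j, 0])
v2 = np.array([(-1 + 1j)/2, 1/2, 0])

def herm(z, w):
    return z[0]*np.conj(w[2]) + z[1]*np.conj(w[1]) + z[2]*np.conj(w[0])

# <V,V> agrees with the displayed real expression on the torus |z1| = |z2| = 1
z1, z2 = np.exp(0.4j), np.exp(2.1j)
V = v0 + z1*v1 + z2*v2
form_err = abs(herm(V, V).real
               - (11/4 - 5/2*z1 - (1/2 - 1j)*z2
                  - (1/2 - 1j)*z2*np.conj(z1)).real)

# the first approximate solution satisfies the boundary system
x1, y1, x2, y2 = 0.378005, -0.925803, 0.960944, 0.276744
e1 = y1 - x2*y1 - 2*y2 + y1*y2 + x1*(1 + x2 + y2)
e2 = 11 - 2*x2 + 4*x2*y1 - 4*y2 - 2*y1*y2 - 10*x1 - 2*x1*x2 - 4*x1*y2
e3, e4 = x1**2 + y1**2 - 1, x2**2 + y2**2 - 1
```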
Similarly, the curve $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}\cap \hat{\mathcal{I}}_{BAB}$ intersects $\partial_\infty(\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1})$ in two points given approximately by $$x_1=0.987712..., y_1=-0.156285..., x_2=-0.784829..., y_2=0.619713...$$ and $$x_1=0.378005..., y_1=0.925803..., x_2=0.107031..., y_2=0.994256....$$ Now the triangle in part (e) of Figure \ref{fig:B} has three boundary arcs, given by $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}\cap \hat{\mathcal{I}}_{Aba}$, $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}\cap \hat{\mathcal{I}}_{BAB}$ and $\partial_\infty(\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1})$. Next, we consider the element $bAB$ in Table \ref{table:intersecton}. By a calculation similar to the one above, we find that the curve $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}\cap \hat{\mathcal{I}}_{bAB}$ does not intersect the boundary arcs of the triangle. This implies that the boundary of this triangle is either completely inside or completely outside the isometric sphere of $bAB$. It is easy to check that it is outside by testing a sample point. We can also prove that the curve $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}\cap \hat{\mathcal{I}}_{bAB}$ does not intersect the interior of the triangle by computing the critical points of the equation for the curve $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}\cap \hat{\mathcal{I}}_{bAB}$. For the element $aBA$ in Table \ref{table:intersecton}, one can verify that the curve $\hat{\mathcal{I}}_{B}\cap \hat{\mathcal{I}}_{Bab}^{1}\cap \hat{\mathcal{I}}_{aBA}$ does not intersect $\partial_\infty(\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1})$. By taking a sample point, one gets that the Giraud disk $\mathcal{I}_{B}\cap \mathcal{I}_{Bab}^{1}$ lies outside the isometric sphere of $aBA$. Similar arguments handle the detailed study of all the remaining elements in Table \ref{table:intersecton}.
\end{proof} \begin{figure} \caption{All the ridges on the side $\mathcal{I}_B$.} \label{fig:B} \end{figure} \begin{figure} \caption{All the ridges on the side $\mathcal{I}_{bAB}$.} \label{fig:bAB} \end{figure} \begin{figure} \caption{All the ridges on the side $\mathcal{I}_{bab}$.} \label{fig:bab} \end{figure} \begin{figure} \caption{All the ridges on the side $\mathcal{I}_{bAb}$.} \label{fig:bAb} \end{figure} \begin{figure} \caption{All the ridges on the side $\mathcal{I}_{Bab}$.} \label{fig:Bab} \end{figure} From Proposition \ref{prop: combinatorics}, the isometric sphere $\mathcal{I}_{B}$ contributes a side of the Ford domain, namely $\mathcal{I}_{B}\cap D_{S^*}$, which contains fourteen infinite ridges. As Table \ref{table:adjacent344} in Section \ref{section: ford344} shows, the spinal sphere of $B$ is adjacent to $14=2+5+3+4$ spinal spheres; compare also Figures 3, 5 and 6 of \cite{Deraux:2016}. We write $E$ for $\partial_{\infty}D_{S^*}$, and $C$ for $\partial E$. It is easy to see that $D_{S^*}$, $E$ and $C$ are all $A$-invariant by construction. The map $A$ acts on the Heisenberg group as a Heisenberg translation preserving the $x$-axis. In Heisenberg coordinates, the action of $A$ is given by $$A(z,t)=(z-2,t+4{\rm Re} z).$$ It follows from Section \ref{section: ford344} that $C$ is tiled by some polygons and their orbits under the action of $A$. The identifications in $C$ come from the action of $A$. We denote by $\sim$ the corresponding equivalence relation on $C$; it is easy to check that $C/ \sim$ is a torus. We claim that this torus is unknotted; to this end, we prove the following. \begin{prop} The $x$-axis of $\partial {\bf H}^2_{\mathbb C}-\{q_{\infty}\}=\mathbb{C} \times \mathbb{R}$ is contained in the complement of $\partial_{\infty}D_{S^*}$. \end{prop} \begin{proof} A fundamental domain for the action of $A$ on the $x$-axis is the segment parameterized by $$\iota_x=\{[x,0,0] \in \partial {\bf H}^2_{\mathbb C} ~~| ~~\ x\in [-1,1]\}.
$$ We decompose this segment into two parts as \begin{align*} \iota_x^1&=\{[x,0,0] \in \partial {\bf H}^2_{\mathbb C} ~~| ~~\ x\in [-1,0]\}, \\ \iota_x^2&=\{[x,0,0] \in \partial {\bf H}^2_{\mathbb C} ~~| ~~\ x\in [0,1]\}. \end{align*} Note that a spinal sphere is convex. It is easy to check that the interval $\iota_x^1$ is in the interior of the spinal sphere $\mathcal{S}_{Aba}$ and the interval $\iota_x^2$ is in the interior of the spinal sphere $\mathcal{S}_{B}$. \end{proof} \section{Manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$}\label{section: ford344} In this section, we study the manifold at infinity of the even subgroup of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$. Let $I_{1}$, $I_{2}$ and $I_{3}$ be the matrices in Section \ref{sec-gens}. Then $I_{1}I_{2}$ is parabolic, $(I_{2}I_{3})^3=(I_{3}I_{1})^4=id$, and $(I_{1}I_{3}I_{2}I_{3})^4=(I_{1}I_{2}I_{3}I_{2})^4=id$. Let $A=I_{1}I_{2}$ and $B=I_{2}I_{3}$; then $B^3=id$, $(AB)^4=id$ and $(AB^2)^4=id$. Recall that we write $a,b$ for $A^{-1}, B^{-1}$ respectively. \subsection{The Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$} In this subsection we show that $D_{S^*}$ is the Ford domain by using the Poincar\'e polyhedron theorem, and we study the ideal boundary of $D_{S^*}$. We will give detailed information on the spinal spheres and infinite ridges, which is enough to determine the manifold at infinity. Recall from Section \ref{section:comb344} that $S^*$ is the set of essential words $$\{b, B, Bab, BAb, BaB, bAb, BAB, bab, bAB, baB\},$$ which define side-pairings of $D_{S^*}$. The main task is to check the Poincar\'e ridge cycles. Recall that a ridge is by definition a codimension-2 facet of $D_{S^*}$.
Since all the complex spines of our isometric spheres intersect at $q_{\infty}$, we can think of the corresponding bisectors as being coequidistant; in particular, their pairwise intersections are all smooth disks, equidistant from three points. Note that no ridge of $D_{S^*}$ is totally geodesic. The ridges of $D_{S^*}$ are the so-called Giraud disks, which are generic intersections of two bisectors. By Giraud's theorem, the ridges of $D_{S^*}$ lie on precisely three bisectors, so the local tiling condition near generic ridges is actually a consequence of the existence of the side-pairings. The ridge cycles can be obtained by computing orbits of these triples of points under the side-pairings; whenever a ridge in the cycle differs from the starting ridge by a power of $A$, we close the cycle up by that power of $A$. The bounded ridge on $\mathcal{I}(b)\cap\mathcal{I}(B)$ is sent to itself by $B$. One checks that $$q_{\infty}\stackrel{B}{\longrightarrow}B(q_{\infty})\stackrel{B}{\longrightarrow}B^2(q_{\infty})=b(q_{\infty})\stackrel{B}{\longrightarrow}q_{\infty}.$$ This gives a cycle transformation of order 3 preserving that ridge, so we get the relation $B^3=id$. In Tables \ref{table:ridge344} and \ref{table:circle344}, we give the information about infinite ridge cycles and infinite ridge relations of the group $\Delta_{3,4,4;\infty}$. In Table \ref{table:ridge344}, for example, $(r_1, bAB\cap Bab)$ means that the isometric sphere $\mathcal{I}(bAB)$ of $bAB$ and the isometric sphere $\mathcal{I}(Bab)$ of $Bab$ intersect in a ridge $r_1$. From Tables \ref{table:ridge344}, \ref{table:circle344}, and the information about bounded ridges, we can use the Poincar\'e polyhedron theorem to see that $D_{S^*}$ is in fact the Ford domain. Furthermore, we have the following proposition. \begin{prop} No elliptic element of $\Gamma_1$ fixes any point in $\partial {\bf H}^2_{\mathbb C}$ and the only parabolic elements in $\Gamma_1$ are conjugates of powers of $A$.
\end{prop} \begin{proof} From the Poincar\'e ridge cycles, we know that any elliptic element in the group $\Gamma_1$ must be conjugate to a power of $B$, a power of $ab$, or a power of $aB$. The element $B$ is regular elliptic of order 3, thus it does not fix any point in $\partial {\bf H}^2_{\mathbb C}$. As for $ab$ and $aB$, one can see that the nontrivial, nonregular elliptic elements are $(ab)^2$ and $(aB)^2$. But these are reflections in points, which have no fixed points in $\partial {\bf H}^2_{\mathbb C}$. It is easy to see that the ideal vertices all have trivial stabilizers. The only parabolic elements in $\Gamma_1$ are conjugates of elements in the cyclic subgroup generated by $A$. \end{proof} The ideal boundary of $D_{S^*}$ is $E$, that is $E=\partial_{\infty}D_{S^*}$, and the boundary of the 3-manifold $E$ is $C$, that is $C=\partial E$. $C$ is an infinite annulus. \begin{figure} \caption{Realistic view of the ideal boundary of the Ford domain of $\Delta_{3,4,4;\infty}$.} \label{figure:front4} \end{figure} We will show that $C$ is tiled by pieces of the spinal spheres of the following elements: 1. $A$-translations of $\{B, b\}$, which are the blue colored polygons in Figure \ref{figure:front4}. Each of the spinal spheres of $\{B, b\}$ contributes a tetradecagon of the ideal boundary of the Ford domain. See Table \ref{table:adjacent344} for more details; 2. $A$-translations of $\{bAB, baB\}$, which are the red colored polygons in Figure \ref{figure:front4}. Each of the spinal spheres of $\{bAB, baB\}$ contributes a pentagon of the ideal boundary of the Ford domain; 3. $A$-translations of $\{bAb, BaB\}$, which are the green colored polygons in Figure \ref{figure:front4}. Each of the spinal spheres of $\{bAb, BaB\}$ contributes two triangles of the ideal boundary of the Ford domain; 4. $A$-translations of $\{bab, BAB\}$, which are the yellow colored polygons in Figure \ref{figure:front4}.
Each of the spinal spheres of $\{bab, BAB\}$ contributes two triangles of the ideal boundary of the Ford domain; 5. $A$-translations of $\{Bab, BAb\}$, which are the black colored polygons in Figure \ref{figure:front4}. Each of the spinal spheres of $\{Bab, BAb\}$ contributes a pentagon of the ideal boundary of the Ford domain. Figure \ref{figure:front4} is a realistic view of the ideal boundary of the Ford domain of $\Delta_{3,4,4;\infty}$. This figure is only for motivation; we still need to prove that it is correct. In Table \ref{table:adjacent344}, we give detailed information about the contributions of the spinal spheres to the ideal boundary of the Ford domain. The labels $(bAb)_1$ and $(bAb)_2$ mean that the spinal sphere of $bAb$ contributes two parts to the boundary of $D_{S^*} \cap \partial {\bf H}^2_{\mathbb C}$. We see that the ideal boundary of the Ford domain of the group $\Delta_{3,4,4;\infty}$ is the complement of a $D^2 \times (-\infty, \infty)$ in $\mathbb{C} \times \mathbb{R} = \partial {\bf H}^2_{\mathbb C}- \{q_{\infty}\}$. From Table \ref{table:adjacent344}, we get Figure \ref{figure:boundary4f}, which gives us an abstract picture of the boundary of the ideal boundary of the Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$. That is, Figure \ref{figure:boundary4f} gives us an abstract picture of $C$. Since $(AB)^4=id$, the isometric sphere $\mathcal{I}(bab)$ of $bab$ is the same as the isometric sphere of $ABABa$. So in Figure \ref{figure:boundary4f}, we only see one part which is labeled by $bab$, even though we have claimed that the spinal sphere of $bab$ contributes two triangles of the boundary of the ideal boundary of the Ford domain. Note that $A$ acts on the picture horizontally to the left, and two zigzag lines with angles $\frac{3\pi}{4}$ to the positive horizontal axis are glued together to get the infinite annulus $C$.
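The polygon types asserted above can be cross-checked against Table \ref{table:adjacent344} by simply counting adjacent spinal spheres. A minimal sketch, with the adjacency lists transcribed verbatim from the table:

```python
# Adjacency lists transcribed from Table \ref{table:adjacent344}.
adjacent = {
    "(bAb)_1": ["b", "Aba", "AbaBa"],
    "(bAb)_2": ["B", "BAb", "ABa"],
    "(BaB)_1": ["B", "aBA", "aBAbA"],
    "(BaB)_2": ["b", "baB", "abA"],
    "(bab)_1": ["B", "Bab", "aBA"],
    "(bab)_2": ["b", "abA", "abABA"],
    "(BAB)_1": ["b", "bAB", "Aba"],
    "(BAB)_2": ["B", "ABa", "ABaba"],
    "Bab": ["b", "aBA", "bab", "B", "bAB"],
    "BAb": ["b", "ABa", "bAb", "B", "baB"],
    "bAB": ["B", "Aba", "Ababa", "b", "Bab"],
    "baB": ["B", "abA", "BaB", "b", "BAb"],
    "B": ["Aba", "bAB", "Bab", "bab", "aBA", "abAbA", "aBAbA",
          "abA", "baB", "BAb", "bAb", "ABa", "Ababa", "ABaba"],
    "b": ["bab", "abABA", "aBA", "Bab", "bAB", "BAB", "Aba",
          "bAb", "AbaBa", "ABa", "BAb", "baB", "abAbA", "abA"],
}

# b and B each bound a tetradecagon (14 neighbours, with 14 = 2+5+3+4),
# Bab, BAb, bAB, baB each bound a pentagon (5 neighbours), and every piece
# labelled (g)_1 or (g)_2 is a triangle (3 neighbours).
assert len(adjacent["B"]) == len(adjacent["b"]) == 14 == 2 + 5 + 3 + 4
assert all(len(adjacent[g]) == 5 for g in ["Bab", "BAb", "bAB", "baB"])
assert all(len(v) == 3 for k, v in adjacent.items() if k.startswith("("))
```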
\begin{table}[htbp] \begin{tabular}{|c|c|} \hline Spinal sphere & Adjacent spinal spheres in clockwise order \\ \hline $(bAb)_1$ & $b$, $Aba$, $AbaBa$ \\ \hline $(bAb)_2$ & $B$, $BAb$, $ABa$ \\ \hline $(BaB)_1$ & $B$, $aBA$, $aBAbA$ \\ \hline $(BaB)_2$ & $b$, $baB$, $abA$ \\ \hline $(bab)_1$ & $B$, $Bab$, $aBA$ \\ \hline $(bab)_2$ & $b$, $abA$, $abABA$ \\ \hline $(BAB)_1$ & $b$, $bAB$, $Aba$ \\ \hline $(BAB)_2$ & $B$, $ABa$, $ABaba$ \\ \hline $Bab$ & $b$, $aBA$, $bab$, $B$, $bAB$ \\ \hline $BAb$ & $b$, $ABa$, $bAb$, $B$, $baB$ \\ \hline $bAB$ & $B$, $Aba$, $Ababa$, $b$, $Bab$ \\ \hline $baB$ & $B$, $abA$, $BaB$, $b$, $BAb$ \\ \hline $B$ & $Aba$, $bAB$, $Bab$, $bab$, $aBA$, $abAbA$, $aBAbA$, \\ & $abA$, $baB$, $BAb$, $bAb$, $ABa$, $Ababa$, $ABaba$ \\ \hline $b$ & $bab$, $abABA$, $aBA$, $Bab$, $bAB$, $BAB$, $Aba$, \\ & $bAb$, $AbaBa$, $ABa$, $BAb$, $baB$, $abAbA$, $abA$ \\ \hline \end{tabular} \caption{Adjacent relations in the boundary of the ideal boundary of the Ford domain of $\Delta_{3,4,4;\infty}$.} \label{table:adjacent344} \end{table} \begin{figure} \caption{A combinatorial picture of the boundary of the ideal boundary of the Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$.} \label{figure:boundary4f} \end{figure} \subsection{A 2-spine of the manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$} In this subsection, we give a canonical 2-spine $S$ of the manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$, from which we can compute the fundamental group of $M$, our manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$. The ideal boundary $D_{S^*} \cap (\partial{\mathbf{H}^2_{\mathbb C}}-\{q_{\infty}\})$ of the Ford domain of $\Delta_{3,4,4;\infty}$ is homeomorphic to $(\mathbb{C}- int (\mathbb{D})) \times \mathbb{R}^{1}$, where $\mathbb{D}$ is the unit disk in $\mathbb{C}$.
Figure \ref{figure:boundary4f} is an abstract picture of $\partial \mathbb{D} \times \mathbb{R}^{1}$, that is, an abstract picture of $C=\partial E$. There is exactly one parabolic fixed point, namely $q_{\infty}$; in particular, there is no parabolic fixed point in $\partial \mathbb{D} \times \mathbb{R}^{1}$. The 3-manifold $M$ is a quotient space of $D_{S^*} \cap (\partial{\mathbf{H}^2_{\mathbb C}}-\{q_{\infty}\})$, where we first consider the equivalence given by the $A$-action, and then the equivalence given by side-pairings of the spinal spheres. Each infinite ridge in the boundary of the Ford domain gives us an edge in the boundary 3-manifold. We take a fundamental domain for the action of $A$ on the infinite annulus, which in Figure \ref{figure:boundary4forientation} is the largest region bounded by the labeled edges. It consists of the following parts in Figure \ref{figure:boundary4f}: (1). Two big tetradecagons correspond to the spinal spheres of the elements $\{B, b\}$; (2). Four triangles correspond to the spinal spheres of the elements $\{BAB, bab\}$, $\{bAb, BaB\}$; (3). Four pentagons correspond to the spinal spheres of the elements $\{bAB, baB\}$, $\{BAb, Bab\}$. Note that we label the edges from 1 to 45, but in fact the edge $e_{35}$ and the edge $e_{22}$ are glued together in the infinite annulus $C$ of the boundary of the ideal boundary of the Ford domain. The same holds for the edges $e_{36}$ and $e_{21}$, and for the edges $e_{37}$ and $e_{20}$. In total, there are $45-3=42$ edges. Note also that the orientations of the edges in Figure \ref{figure:boundary4forientation} can be obtained from the ridge relations, taking care of the neighbouring isometric spheres.
For example, in the cycle $(e_{44}, aBAbA\cap B) \rightarrow (e_{33}, b \cap AbaBa) \rightarrow (e_{18}, abA\cap baB)$, the positive end of $e_{44}$ is adjacent to the spinal sphere of $abA$; when we apply the map $B$, we get $abAB^{-1}=abAb$, but $abAb$ and $bAb$ have the same spinal sphere, so the positive end of $e_{33}$ is adjacent to the spinal sphere of $bAb$. \begin{figure} \caption{Fundamental domain and edge cycles in the boundary of the ideal boundary of the Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$.} \label{figure:boundary4forientation} \end{figure} \begin{table}[htbp] \begin{tabular}{|c|c|} \hline Number & Ridge relation \\ \hline $R_{1}$ & $(r_1, bAB\cap Bab)\rightarrow (r_{23}, BAb\cap ABa)\rightarrow (r_{44}, aBAbA\cap B)$ \\ & $\rightarrow (r_{33}, b \cap AbaBa) \rightarrow (r_{18}, abA\cap baB) \rightarrow (r_1, bAB\cap Bab)$ \\ \hline $R_{2}$ & $ (r_2, BAb\cap baB)\rightarrow (r_{29}, bAB\cap Aba)\rightarrow (r_{40}, abABA\cap b)$ \\ & $\rightarrow (r_{27}, B\cap ABaba)\rightarrow (r_{10}, aBA\cap Bab) \rightarrow (r_2, BAb\cap baB)$ \\ \hline $R_{3}$ & $(r_3, bAb\cap BAb)\rightarrow (r_{13}, Bab\cap B)\rightarrow (r_{37}, b\cap abAbA)$\\ &$ \rightarrow (r_3, bAb\cap BAb)$ \\ \hline $R_{4}$ & $(r_4, baB\cap BaB)\rightarrow (r_{15}, bAb\cap B)\rightarrow (r_{8}, b\cap bAB)$\\ &$ \rightarrow (r_4, baB\cap BaB)$ \\ \hline $R_{5}$ & $(r_5, BAB\cap bAB)\rightarrow (r_{36}, baB\cap b)\rightarrow (r_{12}, B\cap bab) $\\ &$\rightarrow (r_5, BAB\cap bAB)$ \\ \hline $R_{6}$ & $(r_6, Bab\cap bab)\rightarrow (r_{7}, BAB\cap b)\rightarrow (r_{16}, B\cap BAb)$\\ &$ \rightarrow (r_6, Bab\cap bab)$ \\ \hline $R_{7}$ & $(r_9, Bab\cap b)\rightarrow (r_{25}, B\cap ABa)\rightarrow (r_{42}, aBA\cap B) $\\ &$\rightarrow(r_{35}, b \cap BAb) \rightarrow (r_{22}, b\cap BAb) \rightarrow (r_9, Bab\cap b)$ \\ \hline $R_{8}$ & $(r_{11}, bab\cap aBA)\rightarrow (r_{26}, Ababa\cap B)\rightarrow (r_{41}, b\cap aBA)$\\
&$ \rightarrow(r_{28}, Aba \cap B) \rightarrow (r_{39}, b\cap bab) \rightarrow (r_{30}, Aba\cap BAB) $\\ &$\rightarrow (r_{11}, bab\cap aBA)$ \\ \hline $R_{9}$ & $(r_{14}, bAB\cap B)\rightarrow (r_{38}, b\cap abA)\rightarrow$ \\ & $(r_{31}, Aba\cap b) \rightarrow (r_{17}, B \cap baB) \rightarrow (r_{14}, bAB\cap B)$ \\ \hline $R_{10}$ & $(r_{19}, abA\cap BaB)\rightarrow (r_{24}, bAb\cap ABa)\rightarrow (r_{43}, BaB\cap B) $\\ & $\rightarrow(r_{34}, b\cap ABa) \rightarrow (r_{45}, abA\cap B) \rightarrow (r_{32}, b\cap bAb)$\\ &$ \rightarrow (r_{19}, abA\cap BaB)$ \\ \hline \end{tabular} \caption{Infinite ridges of the Ford domain of $\Delta_{3,4,4;\infty}$.} \label{table:ridge344} \end{table} \begin{table}[htbp] \begin{tabular}{|c|c|c|} \hline Number & Ridge cycle & Cycle relation \\ \hline $R_{1}$ & $(baB)*a*B*a*(Bab)$ &$(aB)^{4}=id$\\ \hline $R_{2}$ & $(Bab)*a*b*a*(baB)$ &$ (ab)^{4}=id$\\ \hline $R_{3}$ & $(BaB)*B*(BAb)$ &$ B^{3}=id$ \\ \hline $R_{4}$ & $(bAB)*B*(BaB)$ &$B^{3}=id$\\ \hline $R_{5}$ & $(bab)*b*(bAB)$ & $ b^{3}=id$ \\ \hline $R_{6}$& $(BAb)*b*(bab)$& $ b^{3}=id$ \\ \hline $R_{7}$ & $(BAb)*B*a*b$ & $id$\\ \hline $R_{8}$ & $(BAB)*A*B*A*B*A$ &$ (AB)^{4}=id$\\ \hline $R_{9}$ & $(baB)*b*A*B$ &$id$\\ \hline $R_{10}$ & $a*B*a*B*a*(BaB)$ &$(aB)^{4}=id$\\ \hline \end{tabular} \caption{Infinite ridge relations of the Ford domain of $\Delta_{3,4,4;\infty}$.} \label{table:circle344} \end{table} The manifold at infinity of $\Gamma_{1}$ is the quotient space of $E=D_{S^*} \cap (\partial{\mathbf{H}^2_{\mathbb C}}-\{q_{\infty}\})$ by the equivalence given by the $A$-action and side-pairings of the spinal spheres. That is, if the 3-manifold at infinity is $M$, then $M=E/\sim$, where $\sim$ is the equivalence given by the $A$-action and side-pairings of the spinal spheres. Now $\partial \mathbb{D} \times \mathbb{R}^{1}$ is a deformation retract of $(\mathbb{C}- int (\mathbb{D})) \times \mathbb{R}^{1}$.
That is, $C= \partial E$ is a deformation retract of $E$. The deformation is $A$-equivariant, so we also obtain the quotient space $S=C/\sim$, where $\sim$ is the equivalence given by the $A$-action and side-pairings of the spinal spheres. Thus we get a 2-spine $S$ of our 3-manifold at infinity of $\Delta_{3,4,4;\infty}$. From the side-pairings and $A$-translations, we get the equivalence classes of the oriented edges in Figure \ref{figure:boundary4forientation}; they are listed in Table \ref{table:edgerelation4}, which is obtained using Tables \ref{table:ridge344} and \ref{table:circle344}. Note that $S$ is canonical for the boundary 3-manifold of $\Delta_{3,4,4;\infty}$ from the viewpoint of the Ford domain. \begin{table}[htbp] \begin{tabular}{|c|c|} \hline Edge & Equivalence class \\ \hline $E_1$ & $e_{1}$, $e_{23}$, $e_{44}$, $e_{33}$, $e_{18}$\\ \hline $E_2$ & $e_{2}$, $e_{29}$, $e_{40}$, $e_{27}$, $e_{10}$\\ \hline $E_3$ & $e_{3}$, $e_{13}$, $e_{37}$, $e_{20}$\\ \hline $E_4$ & $e_{4}$, $e_{8}$, $e_{15}$\\ \hline $E_5$ & $e_{5}$, $e_{36}$, $e_{21}$, $e_{12}$\\ \hline $E_6$ & $e_{6}$, $e_{7}$, $e_{16}$\\ \hline $E_7$ & $e_{9}$, $e_{42}$, $e_{35}$, $e_{25}$, $e_{22}$\\ \hline $E_8$ & $e_{11}$, $e_{26}$, $e_{41}$, $e_{28}$, $e_{39}$, $e_{30}$\\ \hline $E_9$ & $e_{14}$, $e_{38}$, $e_{31}$, $e_{17}$\\ \hline $E_{10}$ & $e_{19}$, $e_{24}$, $e_{34}$, $e_{43}$, $e_{32}$, $e_{45}$\\ \hline \end{tabular} \caption{Equivalence classes of edges in the boundary of the ideal boundary of the Ford domain of $\Delta_{3,4,4;\infty}$.} \label{table:edgerelation4} \end{table} Moreover, in Table \ref{table:vertex4} we give the equivalence relations on the vertices; we orient each edge $E_{i}$, and write $E_{i}^{-}$ and $E_{i}^{+}$ for the negative and the positive vertex of $E_{i}$, respectively.
\begin{table}[htbp] \begin{tabular}{|c|c|} \hline Vertex & Equivalence class \\ \hline $V_1$ & $E_{1}^{-}$, $E_{3}^{-}$, $E_{9}^{-}$, $E_{10}^{-}$\\ \hline $V_2$ & $E_{1}^{+}$, $E_{4}^{+}$, $E_{7}^{-}$, $E_{10}^{+}$\\ \hline $V_3$ & $E_{2}^{+}$, $E_{6}^{+}$, $E_{9}^{+}$, $E_{8}^{-}$\\ \hline $V_4$ & $E_{8}^{+}$, $E_{7}^{+}$, $E_{2}^{-}$, $E_{5}^{-}$\\ \hline $V_5$ & $E_{3}^{+}$, $E_{4}^{-}$, $E_{6}^{-}$, $E_{5}^{+}$\\ \hline \end{tabular} \caption{Equivalence relations on the vertices in Figure \ref{figure:boundary4forientation}.} \label{table:vertex4} \end{table} From Tables \ref{table:edgerelation4} and \ref{table:vertex4}, we get Figure \ref{figure:1-skeleton4f}, which is the 1-skeleton of the canonical 2-spine $S$ of our 3-manifold at infinity of $\Delta_{3,4,4;\infty}$. Then the 2-spine $S$ is obtained from the graph in Figure \ref{figure:1-skeleton4f} by attaching a set of disks according to Table \ref{table:4disk}. Table \ref{table:4disk} can be obtained from Figure \ref{figure:boundary4forientation}. Note that we only give some of the labels of the spinal spheres in Figure \ref{figure:boundary4f}, and we only give some of the orientations of the edges in Figure \ref{figure:boundary4forientation}; both can be obtained from Tables \ref{table:adjacent344} and \ref{table:edgerelation4}. From Figure \ref{figure:1-skeleton4f}, we consider the fundamental group of the graph. We orient the edges of the graph so that both edges $E_1$ and $E_{10}$ go from $V_1$ to $V_2$. Edge $E_2$ goes from $V_4$ to $V_3$, and edge $E_8$ from $V_3$ to $V_4$. Edge $E_3$ goes from $V_1$ to $V_5$, and $E_4$ from $V_5$ to $V_2$. Edge $E_5$ goes from $V_4$ to $V_5$, and edge $E_6$ from $V_5$ to $V_3$. Edge $E_7$ goes from $V_2$ to $V_4$, and edge $E_9$ from $V_1$ to $V_3$. We let $\phi_1=E_4*E_7*E_5$, $\phi_2=E^{-1}_5*E_2*E^{-1}_6$, $\phi_3=E_6*E^{-1}_9*E_3$, $\phi_4=E^{-1}_3*E_1*E^{-1}_4$, $\phi_5=E_6*E_8*E_5$, $\phi_6=E_4*E^{-1}_{10}*E_3$.
Here $E^{-1}_i$ denotes the inverse path of $E_i$. It is easy to see that $\phi_{1},\dots,\phi_{6}$ form a free generating set of the fundamental group of the graph based at the vertex $V_{5}$; indeed, the graph has $10$ edges and $5$ vertices, so its fundamental group is free of rank $10-5+1=6$. For example, the boundary of the disk corresponding to the side-pairing $baB$ is $E_{4}*E^{-1}_{10}*E_{3}$, which can be read off from Figure \ref{figure:boundary4forientation} and Table \ref{table:edgerelation4}; for more details, see Table \ref{table:4disk}. \begin{figure} \caption{The 1-skeleton of the canonical 2-spine $S$ of the 3-manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$.} \label{figure:1-skeleton4f} \end{figure} \begin{table}[htbp] \begin{tabular}{|c|c|} \hline Disk & Boundary path \\ \hline $b$ & $E_{10}*E^{-1}_{1}*E_{10}*E_{7}*E_{5}*E^{-1}_{3}*E_{9}*E_{8}*E_{2}*E_{8}*$\\ & $E^{-1}_{7}*E^{-1}_{4}*E_{6}*E^{-1}_{9}$ \\ \hline $BAB$ & $E^{-1}_{8}*E^{-1}_{6}*E^{-1}_{5}$\\ \hline $bAB$ & $E_{5}*E_{4}*E^{-1}_{1}*E_{9}*E^{-1}_{2}$ \\ \hline $Bab$ & $E_{1}*E_{7}*E_{2}*E^{-1}_{6}*E^{-1}_{3}$\\ \hline $baB$ & $E_{4}*E^{-1}_{10}*E_{3}$ \\ \hline \end{tabular} \caption{Boundary of the attaching disks in the 2-spine $S$ of the manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,4;\infty}$.} \label{table:4disk} \end{table} \begin{table}[htbp] \begin{tabular}{|c|c|} \hline Disk& Relation \\ \hline $b$ & $\phi_3\phi^{-1}_6\phi^{-1}_4\phi^{-1}_6\phi_1\phi^{-1}_3\phi_5\phi_2\phi_5\phi^{-1}_1$ \\ \hline $BAB$ & $\phi_5$\\ \hline $bAB$ & $\phi^{-1}_4\phi^{-1}_3\phi^{-1}_2$\\ \hline $Bab$ & $\phi_4\phi_1\phi_2$\\ \hline $baB$ & $\phi_6$\\ \hline \end{tabular} \caption{Relations of the attaching disks in the 2-spine $S$ of the manifold at infinity of the hyperbolic triangle group $\Delta_{3,4,4;\infty}$.} \label{table:4relation} \end{table} In Table \ref{table:4relation}, we give the relations of the attaching disks in the 2-spine $S$ of the manifold at infinity, which can be obtained from Table \ref{table:4disk}.
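The combinatorics of the edges and of the loops $\phi_i$ can be verified mechanically. The following sketch transcribes Table \ref{table:edgerelation4} and the stated edge orientations, and checks that the 45 edge labels fall into the 10 classes with the three glued pairs identified (giving 42 distinct edges), and that each $\phi_i$ is a closed loop based at $V_5$:

```python
# Edge classes transcribed from Table \ref{table:edgerelation4}.
classes = {
    "E1": [1, 23, 44, 33, 18],  "E2": [2, 29, 40, 27, 10],
    "E3": [3, 13, 37, 20],      "E4": [4, 8, 15],
    "E5": [5, 36, 21, 12],      "E6": [6, 7, 16],
    "E7": [9, 42, 35, 25, 22],  "E8": [11, 26, 41, 28, 39, 30],
    "E9": [14, 38, 31, 17],     "E10": [19, 24, 34, 43, 32, 45],
}
labels = [e for cls in classes.values() for e in cls]
assert sorted(labels) == list(range(1, 46))  # the 45 labels, each exactly once
# Each pair glued in the annulus C lies in a single class: 45 - 3 = 42 edges.
for p, q in [(35, 22), (36, 21), (37, 20)]:
    assert any(p in cls and q in cls for cls in classes.values())

# Edge orientations of the 1-skeleton as stated in the text: name -> (tail, head).
edges = {
    "E1": ("V1", "V2"), "E2": ("V4", "V3"), "E3": ("V1", "V5"),
    "E4": ("V5", "V2"), "E5": ("V4", "V5"), "E6": ("V5", "V3"),
    "E7": ("V2", "V4"), "E8": ("V3", "V4"), "E9": ("V1", "V3"),
    "E10": ("V1", "V2"),
}

def is_loop_at(path, base):
    """Check that a path of (edge, sign) pairs is a closed loop at `base`."""
    at = base
    for name, sign in path:
        tail, head = edges[name]
        if sign == -1:
            tail, head = head, tail
        if at != tail:
            return False
        at = head
    return at == base

phis = [
    [("E4", 1), ("E7", 1), ("E5", 1)],        # phi_1
    [("E5", -1), ("E2", 1), ("E6", -1)],      # phi_2
    [("E6", 1), ("E9", -1), ("E3", 1)],       # phi_3
    [("E3", -1), ("E1", 1), ("E4", -1)],      # phi_4
    [("E6", 1), ("E8", 1), ("E5", 1)],        # phi_5
    [("E4", 1), ("E10", -1), ("E3", 1)],      # phi_6
]
assert all(is_loop_at(p, "V5") for p in phis)
# 10 edges and 5 vertices: the fundamental group is free of rank 10 - 5 + 1 = 6.
assert len(edges) - 5 + 1 == 6
```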
From Table \ref{table:4relation} we get a presentation of the fundamental group of the 2-spine $S$, and hence a presentation of the fundamental group of $M$, the manifold at infinity of $\Delta_{3,4,4;\infty}$. Then, using Magma, we can simplify it to $$\pi_{1}(S)=\pi_{1}(M)=\langle x_1, x_2 | x_2^{-1} x_1^{-1}x_2^{2}x_1^{-1}x_2^{-1}x_1x_2x_1x_2x_1\rangle.$$ Let $m038$ be the one-cusped hyperbolic 3-manifold in the SnapPy census \cite{CullerDunfield:2014}, which has volume 3.17729327860... and $$\pi_{1}(m038)=\langle x, y |x^3y^{-1}x^{-1}y^3x^{-1}y^{-1}\rangle.$$ Using Magma, it is easy to see that the groups above are isomorphic. This finishes the proof of Theorem \ref{thm:main} for $\Delta_{3,4,4;\infty}$. \section{Manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$} In this section, we study the manifold at infinity of the even subgroup of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$. We follow the notation of Section \ref{section: ford344}. Let $I_{1}$, $I_{2}$ and $I_{3}$ be the matrices in Section \ref{sec-gens}. Then $I_{1}I_{2}$ is parabolic, $(I_{2}I_{3})^3=(I_{3}I_{1})^4=id$, and $(I_{1}I_{3}I_{2}I_{3})^6=(I_{1}I_{2}I_{3}I_{2})^6=id$. Let $A=I_{1}I_{2}$ and $B=I_{2}I_{3}$; then $B^3=id$, $(AB)^4=id$ and $(AB^2)^6=id$. Let $\Gamma_{1}$ be the even subgroup of $\Delta_{3,4,6;\infty}$ generated by $A$ and $B$, and we write $a,b$ for $A^{-1}, B^{-1}$ respectively. \subsection{The Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$} We omit the detailed information about finite ridges; even though they are crucial for applying the Poincar\'e polyhedron theorem, they are similar to those in Section \ref{section:comb344}. We study the ideal boundary of the Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$ centered at the fixed point $q_{\infty}$ of the parabolic element $A$. We give the detailed information on spinal spheres and infinite ridges, which is enough to get the 3-manifold at infinity.
Let $S^*$ be the set of the essential words $$\{B, b, BAB, bab, BAb, Bab, BaB, bAb, bAB, baB, BaBaB, bAbAb \}$$ and $D_{S^*}$ be the intersection of the exteriors of the isometric spheres of the essential words in $S^*$ and all their translates by powers of $A$. The ideal boundary of the (partial) Ford domain $D_{S^*}$ is $E$, that is $E=\partial_{\infty}D_{S^*}$, and the boundary of the 3-manifold $E$ is $C$, that is $C=\partial E$. $C$ is an infinite annulus. \begin{figure} \caption{Realistic view of the boundary of the ideal boundary of the Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$.} \label{figure:f6f} \end{figure} Figure \ref{figure:f6f} is only for motivation; we will give rigorous infinite ridge cycles later. We will show that $C$ is tiled by pieces of the following spinal spheres: 1. $\langle A \rangle$-translations of $\{B, b\}$, which are the blue colored polygons in Figure \ref{figure:f6f}; 2. $\langle A \rangle$-translations of $\{BAB, bab\}$, which are the yellow colored polygons in Figure \ref{figure:f6f}; 3. $\langle A \rangle$-translations of $\{BAb, Bab\}$, which are the pink colored polygons in Figure \ref{figure:f6f}; 4. $\langle A \rangle$-translations of $\{BaB, bAb\}$, which are the red colored polygons in Figure \ref{figure:f6f}; 5. $\langle A \rangle$-translations of $\{bAB, baB\}$, which are the black colored polygons in Figure \ref{figure:f6f}; 6. $\langle A \rangle$-translations of $\{BaBaB, bAbAb\}$, which are the green colored polygons in Figure \ref{figure:f6f}.
\begin{table}[htbp] \begin{tabular}{|c|c|} \hline Spinal sphere & Adjacent spinal spheres in clockwise order \\ \hline $baB$ & $[b]_{20}$, $[BaB]_{6}$, $[B]_{5}$\\ \hline $bAB$ & $[B]_{5}$, $[b]_{20}$, $Ababa$\\ \hline $BAb$ & $[b]_{5}$, $[B]_{20}$, $[bAb]_{6}$\\ \hline $Bab$ & $[b]_{5}$, $[B]_{20}$, $bab$\\ \hline $(BAB)_{1}$ & $[B]_{20}$, $[Aba]_{5}$, $ABaba$, $[ABa]_{20}$, $[Aba]_{20}$\\ \hline $(bab)_{1}$ & $[b]_{20}$, $[aBA]_{20}$, $[b]_{5}$, $Bab$, $[B]_{20}$\\ \hline $(BAB)_{2}$ & $[b]_{20}$, $[B]_{20}$, $[Aba]_{20}$, $[B]_{5}$, $bAB$\\ \hline $(bab)_{2}$ & $[aBA]_{5}$, $abABA$, $[abA]_{20}$, $[aBA]_{20}$, $[b]_{20}$\\ \hline $(BaB)_{1}$ & $[B]_{20}$, $BaBaB$, $[abA]_{20}$,\\ \hline $(bAb)_{1}$ & $[b]_{20}$, $ABaBaBa$, $[ABa]_{20}$\\ \hline $(BaB)_{2}$ & $[abA]_{20}$, $[B]_{5}$ $baB$, $[b]_{20}$, $[B]_{20}$, $BaBaB$\\ \hline $(bAb)_{2}$ & $[b]_{5}$, $BAb$, $[B]_{20}$, $[b]_{20}$, $ABaBaBa$, $[ABa]_{20}$\\ \hline $(BaBaB)_{1}$ & $[B]_{20}$, $[abAbA]_{3}$, $[abA]_{20}$, $[BaB]_{6} (near~~baB)$\\ \hline $(bAbAb)_{1}$ & $[ABa]_{20}$, $[bAb]_{3}$, $[b]_{20}$, $[ABaBa]_{6} (near~~ AbaBA)$ \\ \hline $(BaBaB)_{2}$ & $[abA]_{20}$, $[BaB]_{3}$, $[B]_{20}$, $[abAbA]_{6} (near~~ aBAbA)$\\ \hline $(bAbAb)_{2}$ & $[b]_{20}$, $[ABaBa]_{3}$, $[ABa]_{20}$, $[bAb]_{6} (near~~ BAb)$ \\ \hline $(B)_{1}$ & $[b]_{20}$, $[bAb]_{6}$, $BAb$, $[b]_{5}$, $[Aba]_{5}$, $BAB$, $[Aba]_{20}$, \\ & $BAB$, $[b]_{20}$, $bab$, $Bab$, $[b]_{5}$, $[abA]_{5}$, $[abAbA]_{6}$, \\ & $BaBaB$, $[BaB]_{3}$, $[abA]_{20}$, $[abAbA]_{3}$, $[BaBaB]$, $[BaB]_{6}$\\ \hline $(b)_{1}$ & $[B]_{20}$, $[BaB]_{6}$, $baB$, $[B]_{5}$, $[aBA]_{5}$, $bab$, $[aBA]_{20}$, \\ & $bab$, $[B]_{20}$, $BAB$, $bAB$, $[B]_{5}$, $[ABa]_{5}$, $[ABaBa]_{6}$, \\ & $bAbAb$, $[bAb]_{3}$, $[ABa]_{20}$, $[ABaBa]_{3}$, $bAbAb$, $[bAb]_{6}$ \\ \hline $(B)_{2}$ & $[abA]_{20}$, $[aBA]_{5}$, $[b]_{20}$, $baB$, $[BaB]_{6}$ \\ \hline $(b)_{2}$ & $[B]_{20}$, $Bab$, $bab$, $[aBA]_{20}$, $[abA]_{5}$\\ \hline $(B)_{3}$ & $[b]_{20}$, $bAB$, $BAB$, 
$[Aba]_{20}$, $[ABa]_{5}$\\ \hline $(b)_{3}$ & $[B]_{20}$, $BAb$, $[bAb]_{6}$, $[ABa]_{20}$, $[Aba]_{5}$ \\ \hline \end{tabular} \caption{Adjacent relations in the boundary of the ideal boundary of the Ford domain of $\Delta_{3,4,6;\infty}$.} \label{table:346adjacet} \end{table} In Table \ref{table:346adjacet}, we give the adjacent relations of the spinal spheres in the boundary of the ideal boundary of the Ford domain of $\Delta_{3,4,6;\infty}$. This is more complicated than the $\Delta_{3,4,4;\infty}$ case. For example, the spinal sphere of $B$ gives three parts in this boundary, one icosagon and two pentagons; we denote them by $(B)_{1}$, $(B)_{2}$ and $(B)_{3}$ in Table \ref{table:346adjacet}. It is easy to see that $(B)_{1}$ is an icosagon, while $(B)_{2}$ and $(B)_{3}$ are pentagons. Moreover, we also use $[B]_{20}$ to denote $(B)_{1}$, and $[B]_{5}$ to denote either of $(B)_{2}$ and $(B)_{3}$; one can tell which is meant since the adjacent spinal spheres of $(B)_{2}$ and $(B)_{3}$ are different. Namely, $(B)_{3}$ is adjacent to the spinal spheres of $bAB$ and $BAB$, while $(B)_{2}$ is adjacent to the spinal sphere of $baB$. Each of the spinal spheres of $\{bAB, baB\}$ gives a triangle. Similarly, $[bAb]_{6}$ means the part of the spinal sphere of $bAb$ which is a hexagon. For concrete information about the adjacent relations of the spinal spheres, see also Figure \ref{figure:comb6f}. In Figure \ref{figure:comb6f}, we only give some of the labels; a label indicates that the region is a part of the spinal sphere with that label. The remaining labels can be obtained from the partial labels in Figure \ref{figure:comb6f} and Table \ref{table:346adjacet}. Figure \ref{figure:comb6f} gives a tiling of the plane; gluing the tiles with the same label, we get the infinite annulus $C=\partial E$, and the $A$-action is just a negative horizontal translation.
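As in the $\Delta_{3,4,4;\infty}$ case, the polygon types can be read off from Table \ref{table:346adjacet} by counting entries. A minimal sketch transcribing a few representative rows (the bracketed labels of the table are kept as plain strings):

```python
# Representative adjacency lists transcribed from Table \ref{table:346adjacet}.
adjacent = {
    "(B)_1": ["[b]_20", "[bAb]_6", "BAb", "[b]_5", "[Aba]_5", "BAB",
              "[Aba]_20", "BAB", "[b]_20", "bab", "Bab", "[b]_5",
              "[abA]_5", "[abAbA]_6", "BaBaB", "[BaB]_3", "[abA]_20",
              "[abAbA]_3", "[BaBaB]", "[BaB]_6"],
    "(B)_2": ["[abA]_20", "[aBA]_5", "[b]_20", "baB", "[BaB]_6"],
    "(B)_3": ["[b]_20", "bAB", "BAB", "[Aba]_20", "[ABa]_5"],
    "baB": ["[b]_20", "[BaB]_6", "[B]_5"],
    "bAB": ["[B]_5", "[b]_20", "Ababa"],
}

# (B)_1 is an icosagon, (B)_2 and (B)_3 are pentagons, and each of the
# spinal spheres of baB and bAB contributes a triangle, as claimed.
assert len(adjacent["(B)_1"]) == 20
assert len(adjacent["(B)_2"]) == len(adjacent["(B)_3"]) == 5
assert len(adjacent["baB"]) == len(adjacent["bAB"]) == 3
```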
\begin{figure} \caption{A combinatorial picture of the boundary of the ideal boundary of the Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$.} \label{figure:comb6f} \end{figure} \begin{table}[htbp] \begin{tabular}{|c|c|} \hline Number & Ridge relation \\ \hline $R_{1}$ & $(r_1, b\cap B)\rightarrow (r_{62}, b\cap B)\rightarrow$ \\ & $(r_{9}, Aba\cap ABa) \rightarrow (r_{32}, b \cap B) \rightarrow (r_1, b\cap B)$ \\ \hline $R_{2}$ & $ (r_2, bAb\cap b)\rightarrow (r_{33}, B\cap abA)\rightarrow$ \\ & $(r_{8}, ABa\cap b)\rightarrow (r_{65}, ABa\cap ABaBa) \rightarrow (r_2, bAb\cap b)$ \\ \hline $R_{3}$ & $(r_3, bAbAb\cap b)\rightarrow (r_{34}, B\cap abAbA)\rightarrow$ \\ & $(r_{7}, ABa\cap bAb) \rightarrow (r_{69}, ABaBa \cap bAbAb) \rightarrow (r_3, bAbAb\cap b)$ \\ \hline $R_{4}$ & $(r_4, ABaBa\cap b)\rightarrow (r_{35}, B\cap BaBaB)\rightarrow (r_{6}, ABa\cap bAbAb)$ \\ & $(r_{71}, bAbAb\cap bAb) \rightarrow (r_4, ABaBa\cap b)$ \\ \hline $R_{5}$ & $(r_5, ABaBa\cap ABa)\rightarrow (r_{36}, BaB\cap B)\rightarrow (r_{22}, b\cap ABa) $ \\ & $ \rightarrow (r_{37}, abA \cap B) \rightarrow (r_{72}, b \cap bAb) \rightarrow (r_5, ABaBa\cap ABa)$ \\ \hline $R_{6}$ & $(r_{10}, ABaba\cap ABa)\rightarrow (r_{31}, Bab\cap B)\rightarrow (r_{23}, b\cap BaB)$ \\ & $ \rightarrow (r_{18}, Aba\cap ABaBa) \rightarrow (r_{44}, bAb \cap BAb) \rightarrow (r_{10}, ABaba\cap ABa)$ \\ \hline $R_{7}$ & $(r_{11}, BAB\cap ABa)\rightarrow (r_{30}, aBABa\cap B)\rightarrow (r_{24}, b\cap baB) \rightarrow$ \\ & $(r_{17}, Aba \cap AbaBa) \rightarrow (r_{58}, bAB\cap BAB) \rightarrow (r_{11}, BAB\cap ABa)$ \\ \hline $R_{8}$ & $(r_{12}, BAB\cap Aba)\rightarrow (r_{29}, aBABA\cap b)\rightarrow (r_{50}, B\cap Aba) \rightarrow$ \\ & $ (r_{59}, B\cap BAB) \rightarrow (r_{12}, BAB\cap Aba)$ \\ \hline $R_{9}$ & $(r_{13}, B\cap Aba)\rightarrow (r_{28}, aBA\cap b)\rightarrow (r_{54}, B\cap BAB) \rightarrow$ \\ & $(r_{14}, BAB\cap AbA) \rightarrow (r_{27}, bab\cap b) \rightarrow 
(r_{13}, B\cap Aba)$ \\ \hline $R_{10}$ & $(r_{15}, B\cap Aba)\rightarrow (r_{26}, aBA\cap b)\rightarrow (r_{55}, B\cap BAB) \rightarrow$ \\ & $ (r_{52}, BAB\cap Aba) \rightarrow (r_{15}, B\cap Aba)$ \\ \hline $R_{11}$ & $(r_{16}, ABa\cap Aba)\rightarrow (r_{25}, B\cap b)\rightarrow$ \\ & $(r_{56}, B\cap b) \rightarrow (r_{48}, B \cap b) \rightarrow (r_{16}, ABa\cap Aba)$ \\ \hline $R_{12}$ & $(r_{19}, ABaBa\cap ABa)\rightarrow (r_{40}, BaB\cap B)\rightarrow$ \\ & $(r_{64}, b\cap ABa) \rightarrow (r_{45}, b \cap bAb) \rightarrow (r_{19}, ABaBa\cap ABa)$ \\ \hline $R_{13}$ & $(r_{20}, bAbAb\cap ABa)\rightarrow (r_{39}, BaBaB\cap B)\rightarrow$ \\ & $(r_{68}, b\cap ABaBa) \rightarrow (r_{42}, bAb \cap bAbAb) \rightarrow (r_{20}, bAbAb\cap ABa)$ \\ \hline $R_{14}$ & $(r_{21}, bAb\cap ABa)\rightarrow (r_{38}, abAbA\cap B)\rightarrow$ \\ & $(r_{70}, b\cap bAbAb) \rightarrow (r_{41}, bAbAb \cap ABaBa) \rightarrow (r_{21}, bAb\cap ABa)$ \\ \hline $R_{15}$ & $(r_{43}, bAb\cap B)\rightarrow (r_{60}, b\cap bAB)\rightarrow (r_{67}, AbaBa\cap ABaBa)$ \\ & $\rightarrow (r_{43}, bAb\cap B)$ \\ \hline $R_{16}$ & $(r_{46}, BAb\cap B)\rightarrow (r_{57}, b\cap BAB)\rightarrow (r_{53}, BAB\cap ABaba)$ \\ & $ \rightarrow (r_{46}, BAb\cap B)$ \\ \hline $R_{17}$ & $(r_{47}, BAb\cap b)\rightarrow (r_{63}, ABa\cap B)\rightarrow (r_{51}, Aba\cap ABaba)$ \\ & $ \rightarrow (r_{47}, BAb\cap b)$ \\ \hline $R_{18}$ & $(r_{49}, b\cap Aba)\rightarrow (r_{61}, B\cap bAB)\rightarrow (r_{66}, AbaBa\cap ABa)$ \\ & $ \rightarrow (r_{49}, b\cap Aba)$ \\ \hline \end{tabular} \caption{Infinite ridge relations of the Ford domain of $\Delta_{3,4,6;\infty}$, part I.} \label{table:ridge346} \end{table} From Figure \ref{figure:comb6f}, we have Tables \ref{table:ridge346} and \ref{table:circle346}, which give all the infinite ridge circle relations of the Ford domain of $\Delta_{3,4,6;\infty}$. 
We should note that the side-pairings and the ridge relations are more complicated than in the $\Delta_{3,4,4;\infty}$ case. For example, consider the circle relation for the ridge $R_{1}$, which comes from the word $B*a*A*B*B$. Firstly, from $(r_1, b\cap B)\rightarrow (r_{62}, b\cap B)$ we get the word $B$, which maps the isometric sphere of $B$ to the isometric sphere of $b$. Secondly, from $(r_{62}, b\cap B)$ we again apply the map $B$, which maps the isometric sphere of $B$ to the isometric sphere of $b$; even though this takes us to $(r_{32}, b \cap B)$, note that this is not yet a side-pairing of the fundamental domain chosen in Subsection \ref{subsection:346spine}. We must also perform the translation given by $A$, which brings us to $(r_{9}, Aba\cap ABa)$; this gives the side-pairing from $(r_{32}, b \cap B)$ to $(r_{9}, Aba\cap ABa)$. In other words, $A*B$ is a side-pairing from $(r_{62}, b\cap B)$ to $(r_{9}, Aba\cap ABa)$. Together with the first map $B$ from $(r_1, b\cap B)$ to $(r_{62}, b\cap B)$, we now have the word $A*B*B$. Thirdly, using the translation by $A^{-1}=a$, from $(r_{9}, Aba\cap ABa)$ we get $(r_{32}, b \cap B)$, so we now have the word $a*A*B*B$. Fourthly, from $(r_{32}, b \cap B) \rightarrow (r_{1}, b\cap B)$ we append the word $B$. In total, we obtain the word $B*a*A*B*B$, which reduces to $B^3$, so the circle relation is $B^3=id$. We omit the details for the other ridges.
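Schematically, the computation above can be summarized in one line: the cycle of sides for $R_{1}$ and the maps between them are
\begin{align*}
(r_1, b\cap B)\ \xrightarrow{\ B\ }\ (r_{62}, b\cap B)\ \xrightarrow{\ A*B\ }\ (r_{9}, Aba\cap ABa)\ \xrightarrow{\ a\ }\ (r_{32}, b\cap B)\ \xrightarrow{\ B\ }\ (r_1, b\cap B),
\end{align*}
and accumulating each new map on the left gives the total word
\begin{align*}
B*a*(A*B)*B \;=\; B*(a*A)*B*B \;=\; B^{3},
\end{align*}
so the circle relation is $B^{3}=id$.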
\begin{table}[htbp] \begin{tabular}{|c|c|c|} \hline Number & Ridge cycle & Cycle relation \\ \hline $R_{1}$ & $B*a*A*B*B$ & $B^3=id$\\ \hline $R_{2}$ & $a*(ABaBa)*A*b*A*b$ & $id$\\ \hline $R_{3}$ & $A*(bAbAb)*A*(bAb)*A*b$ & $(Ab)^{6}=id$\\ \hline $R_{4}$ & $A*(bAb)*A*(bAbAb)*A*b$ & $(Ab)^{6}=id$ \\ \hline $R_{5}$ & $A*(bAb)*B*a*B*a$ & $id$ \\ \hline $R_{6}$ & $A*(BAb)*a*(ABaBa)*A*B*a$ & $B^3=id$ \\ \hline $R_{7}$ & $A*(BAB)*a*(AbaBa)*A*B*a$ & $B^3=id$ \\ \hline $R_{8}$ & $A*(BAB)*a*(Aba)*b*a$ & $id$ \\ \hline $R_{9}$ & $b*a*A*(BAB)*b*a$ & $id$ \\ \hline $R_{10}$ & $a*(Aba)*A*(BAB)*b*a$ & $id$ \\ \hline $R_{11}$ & $A*b*b*b*a$ & $b^{3}=id$ \\ \hline $R_{12}$ & $A*(bAb)*a*(ABa)*B*a$ & $id$ \\ \hline $R_{13}$ & $A*(bAbAb)*a*(ABaBa)*B*a$ & $id$ \\ \hline $R_{14}$ & $a*(ABaBa)*A*(bAbAb)*B*a$ & $id$ \\ \hline $R_{15}$ & $a*(ABaBa)*A*(bAB)*B$ & $B^{3}=id$\\ \hline $R_{16}$ & $a*(ABaba)*A*(BAB)*B$ & $B^{3}=id$\\ \hline $R_{17}$ & $a*(ABaba)*A*B*A*b$ & $id$\\ \hline $R_{18}$ & $a*(ABa)*A*(bAB)*a*(Aba)$ & $id$\\ \hline \end{tabular} \caption{Infinite ridge relations of the Ford domain of $\Delta_{3,4,6;\infty}$, part II.} \label{table:circle346} \end{table} Moreover, by studying the intersections of the spinal spheres, we have \begin{prop} \label{prop: 346para} There is no parabolic fixed point in the boundary of the Ford domain of $\Delta_{3,4,6;\infty}$. \end{prop} From Tables \ref{table:ridge346} and \ref{table:circle346}, together with the information about the bounded ridges that we omit as in Section \ref{section:comb344}, we can use the Poincar\'e polyhedron theorem to see that the partial Ford domain bounded by the $\langle A \rangle$-translations of the isometric spheres of $\{B, b\}$, $\{BAB, bab\}$, $\{BAb, Bab\}$, $\{BaB, bAb\}$, $\{bAB, baB\}$ and $\{BaBaB, bAbAb\}$ is in fact the Ford domain. So we can study the manifold at infinity of $\Delta_{3,4,6;\infty}$.
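As a quick mechanical check of the cycle words in Table \ref{table:circle346}, one can freely reduce them, using the convention of this section that corresponding lower- and upper-case letters are mutually inverse generators ($A^{-1}=a$ and, likewise, $B^{-1}=b$). The following Python sketch (the function name is ours) verifies, for instance, the words for $R_{1}$ and $R_{17}$:

```python
def reduce_word(w):
    """Freely reduce a word over {a, A, b, B}, where x and X are inverses."""
    out = []
    for ch in w:
        if out and out[-1] == ch.swapcase():
            out.pop()  # cancel an adjacent inverse pair
        else:
            out.append(ch)
    return "".join(out)

# R_1: the word B*a*A*B*B collapses to B^3, giving the relation B^3 = id.
assert reduce_word("BaABB") == "BBB"
# R_17: a*(ABaba)*A*B*A*b reduces to the empty word, i.e. the identity.
assert reduce_word("aABabaABAb") == ""
```

The stack-based scan performs complete free reduction, since cancellation of adjacent inverse pairs is confluent.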
\subsection{A 2-spine $S$ of the 3-manifold at infinity of $\Delta_{3,4,6;\infty}$} \label{subsection:346spine} We now consider the canonical 2-spine $S$ of the 3-manifold at infinity of $\Delta_{3,4,6;\infty}$. \begin{figure} \caption{Fundamental domain and edge cycles in the boundary of the ideal boundary of the Ford domain of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$.} \label{figure:boundary6forientation} \end{figure} As in the case of $\Delta_{3,4,4;\infty}$, each infinite ridge in the boundary of the Ford domain gives us an edge in the canonical 2-spine of the 3-manifold at infinity. We take a fundamental domain of $A$ acting on the infinite annulus $C$, which in Figure \ref{figure:boundary6forientation} is the largest region bounded by the labeled edges. It consists of: (1). Two big icosagons, corresponding to the spinal spheres of $\{B, b\}$. (2). Two hexagons, corresponding to the spinal spheres of $\{ABaBa, bAb\}$. (3). Four pentagons, corresponding to the spinal spheres of the elements $\{B,b\}$ and $\{Aba, ABa\}$; two more pentagons correspond to the spinal spheres of $BAB$. Note that since $(AB)^4=id$, $BAB=ababa$ and $Ababa$ have the same isometric sphere. (4). Two quadrilaterals, corresponding to the spinal spheres of $bAbAb$. Note that since $(AB^2)^6=id$, $bAbAb=ABaBaBa$, so the isometric sphere of $BaBaB$ is just an $A$-translation of the isometric sphere of $bAbAb$. (5). Six triangles, corresponding to the spinal spheres of $\{bAb, ABaBa\}$, $\{BAb,ABaba\}$ and $\{bAB,AbaBa\}$.
\begin{table}[htbp] \begin{tabular}{|c|c|} \hline Edge & Equivalence class \\ \hline $E_1$ & $e_{1}$, $e_{62}$, $e_{9}$, $e_{32}$\\ \hline $E_2$ & $e_{2}$, $e_{33}$, $e_{8}$, $e_{65}$\\ \hline $E_3$ & $e_{3}$, $e_{34}$, $e_{7}$, $e_{69}$\\ \hline $E_4$ & $e_{4}$, $e_{35}$, $e_{6}$, $e_{71}$\\ \hline $E_5$ & $e_{5}$, $e_{36}$, $e_{22}$, $e_{37}$, $e_{72}$\\ \hline $E_6$ & $e_{10}$, $e_{31}$, $e_{23}$, $e_{18}$, $e_{44}$\\ \hline $E_7$ & $e_{11}$, $e_{30}$, $e_{24}$, $e_{17}$, $e_{58}$\\ \hline $E_8$ & $e_{12}$, $e_{29}$, $e_{50}$, $e_{59}$\\ \hline $E_9$ & $e_{13}$, $e_{28}$, $e_{54}$, $e_{14}$, $e_{27}$\\ \hline $E_{10}$ & $e_{15}$, $e_{26}$, $e_{55}$, $e_{52}$\\ \hline $E_{11}$ & $e_{16}$, $e_{25}$, $e_{56}$, $e_{48}$\\ \hline $E_{12}$ & $e_{19}$, $e_{40}$, $e_{64}$, $e_{45}$\\ \hline $E_{13}$ & $e_{20}$, $e_{39}$, $e_{68}$, $e_{42}$\\ \hline $E_{14}$ & $e_{21}$, $e_{38}$, $e_{70}$, $e_{41}$\\ \hline $E_{15}$ & $e_{43}$, $e_{60}$, $e_{67}$\\ \hline $E_{16}$ & $e_{46}$, $e_{57}$, $e_{53}$\\ \hline $E_{17}$ & $e_{47}$, $e_{63}$, $e_{51}$\\ \hline $E_{18}$ & $e_{49}$, $e_{61}$, $e_{66}$\\ \hline \end{tabular} \caption{Equivalence classes of edges in the 1-skeleton of the canonical 2-spine $S$ of the 3-manifold at infinity of $\Delta_{3,4,6;\infty}$.} \label{table:edge6} \end{table} Note that in Figure \ref{figure:boundary6forientation} we labeled the edges from $1$ to $72$, so there are $72$ edges in total. Table \ref{table:edge6}, which is obtained from Figure \ref{figure:boundary6forientation} and Tables \ref{table:ridge346} and \ref{table:circle346}, gives the equivalence classes of edges in the 1-skeleton of the canonical 2-spine $S$ of the 3-manifold at infinity of $\Delta_{3,4,6;\infty}$. A pair of 2-cells in the fundamental domain with inverse labels as in Figure \ref{figure:boundary6forientation} gives a 2-cell of the 2-spine $S$.
In Table \ref{table:disk6}, we give the information about the attaching disks of the canonical 2-spine $S$ of the 3-manifold at infinity of $\Delta_{3,4,6;\infty}$, which is obtained from Figure \ref{figure:boundary6forientation} and Table \ref{table:edge6}. The orientations of the edges and the equivalence classes of the endpoints of the edges are shown in Table \ref{table:vertex6}. \begin{table}[htbp] \begin{tabular}{|c|c|} \hline Disk & Boundary path \\ \hline $[B]_{20}$ & $E_{1}*E^{-1}_{15}*E^{-1}_{16}*E^{-1}_{11}*E^{-1}_{8}*(E^{-1}_{9})^{2}*E^{-1}_{10}*E^{-1}_{11}*$ \\ & $E_{7}*E_{6}*E_{1}*E_{2}*E_{3}*E_{4}*(E_{5})^{2}*E_{14}*E_{13}*E_{12}$ \\ \hline $ABaBa$ & $E_{4}*E^{-1}_{5}*E_{14}$\\ \hline $bAbAb$ & $E_{3}*E^{-1}_{14}*E^{-1}_{4}*E_{13}$ \\ \hline $bAb$ & $E_{2}*E^{-1}_{13}*E^{-1}_{3}*E_{12}*E^{-1}_{6}*E_{15}$\\ \hline $BAb$ & $E_{6}*E^{-1}_{17}*E_{16}$\\ \hline $[b]_{5}$ & $E^{-1}_{12}*E^{-1}_{2}*E^{-1}_{18}*E_{11}*E_{17}$\\ \hline $Aba$ & $E_{1}*E^{-1}_{18}*E^{-1}_{8}*E^{-1}_{10}*E_{17}$ \\ \hline $Ababa$ & $E_{17}*E^{-1}_{6}*E^{-1}_{16}$ \\ \hline $BAB$ & $E_{16}*E^{-1}_{7}*E^{-1}_{8}*E_{9}*E^{-1}_{10}$\\ \hline $bAB$ & $E_{18}*E^{-1}_{15}*E^{-1}_{7}$ \\ \hline \end{tabular} \caption{Boundary of the attaching disks of the canonical 2-spine $S$ of the manifold at infinity of $\Delta_{3,4,6;\infty}$.} \label{table:disk6} \end{table} \begin{table}[htbp] \begin{tabular}{|c|c|} \hline Vertex & Equivalence class \\ \hline $V_1$ & $E_{1}^{+}$, $E_{2}^{-}$, $E_{15}^{+}$, $E_{18}^{+}$\\ \hline $V_2$ & $E_{1}^{-}$, $E_{12}^{+}$, $E_{6}^{+}$, $E_{17}^{+}$\\ \hline $V_3$ & $E_{2}^{+}$, $E_{3}^{-}$, $E_{13}^{+}$, $E_{12}^{-}$\\ \hline $V_4$ & $E_{4}^{-}$, $E_{14}^{+}$, $E_{3}^{+}$, $E_{13}^{-}$\\ \hline $V_5$ & $E_{4}^{+}$, $E_{5}^{+}$, $E_{5}^{-}$, $E_{14}^{-}$\\ \hline $V_6$ & $E_{6}^{-}$, $E_{7}^{+}$, $E_{16}^{+}$, $E_{15}^{-}$\\ \hline $V_7$ & $E_{8}^{-}$, $E_{9}^{+}$, $E_{9}^{-}$, $E_{10}^{+}$\\ \hline $V_8$ & $E_{7}^{-}$,
$E_{8}^{+}$, $E_{11}^{-}$, $E_{18}^{-}$\\ \hline $V_9$ & $E_{10}^{-}$, $E_{11}^{+}$, $E_{16}^{-}$, $E_{17}^{-}$\\ \hline \end{tabular} \caption{Equivalence classes of vertices of the canonical 2-spine $S$ of the 3-manifold at infinity of $\Delta_{3,4,6;\infty}$.} \label{table:vertex6} \end{table} In Table \ref{table:vertex6}, for example, $E_{1}^{+}$ means the positive end of the oriented edge $E_{1}$, and $E_{1}^{-}$ means the negative end of the oriented edge $E_{1}$. From Table \ref{table:vertex6} we get Figure \ref{figure:1-skeleton6f}, which is the 1-skeleton of the canonical 2-spine $S$ of the 3-manifold at infinity of $\Delta_{3,4,6;\infty}$. From Figure \ref{figure:1-skeleton6f}, we consider the fundamental group of the graph. In Figure \ref{figure:1-skeleton6f}, the orientations of the edges are obtained from Table \ref{table:vertex6}: edge $E_1$ is from $V_2$ to $V_1$; edge $E_2$ is from $V_1$ to $V_3$; edge $E_3$ is from $V_3$ to $V_4$; edge $E_4$ is from $V_4$ to $V_5$; edge $E_5$ is from $V_5$ to $V_5$; edge $E_6$ is from $V_2$ to $V_6$; edge $E_7$ is from $V_8$ to $V_6$; edge $E_8$ is from $V_7$ to $V_8$; edge $E_9$ is from $V_7$ to $V_7$; edge $E_{10}$ is from $V_9$ to $V_7$; edge $E_{11}$ is from $V_8$ to $V_9$; edge $E_{12}$ is from $V_3$ to $V_2$; edge $E_{13}$ is from $V_4$ to $V_3$; edge $E_{14}$ is from $V_5$ to $V_4$; edge $E_{15}$ is from $V_6$ to $V_1$; edge $E_{16}$ is from $V_9$ to $V_6$; edge $E_{17}$ is from $V_9$ to $V_2$; edge $E_{18}$ is from $V_8$ to $V_1$.
\begin{figure} \caption{The 1-skeleton of the canonical 2-spine $S$ of the 3-manifold at infinity of the complex hyperbolic triangle group $\Delta_{3,4,6;\infty}$.} \label{figure:1-skeleton6f} \end{figure} We let $\phi_1=E_1*E_2*E_{12}$, $\phi_2=E^{-1}_{12}*E_3*E_{13}*E_{12}$, $\phi_3=E^{-1}_{12}*E_3*E_{4}*E_{14}*E^{-1}_{3}*E_{12}$, $\phi_4=E^{-1}_{12}*E_3*E_{4}*E_{5}*E^{-1}_{4}*E^{-1}_{3}*E_{12}$, $\phi_5=E_1*E^{-1}_{15}*E_{6}$, $\phi_6=E_1*E^{-1}_{18}*E_{7}*E_{6}$, $\phi_7=E^{-1}_{17}*E_{16}*E_{6}$, $\phi_8=E^{-1}_6*E^{-1}_{7}*E_{11}*E_{17}$, $\phi_9=E^{-1}_6*E^{-1}_{7}*E^{-1}_{8}*E^{-1}_{10}*E_{17}$, $\phi_{10}=E^{-1}_6*E^{-1}_{7}*E^{-1}_{8}*E_{9}*E_{8}*E_{7}*E_{6}$ be closed loops in the graph, where $E^{-1}_i$ denotes the inverse path of $E_i$. It is easy to see that the loops $\phi_{i}$, $1 \leq i \leq 10$, form a generating set of the fundamental group of the graph based at the vertex $V_2$. \begin{table}[htbp] \begin{tabular}{|c|c|} \hline Disk & Relation \\ \hline $[B]_{20}$ & $\phi_{5}\phi^{-1}_7\phi^{-1}_{8}\phi^{-1}_{10}\phi^{-1}_{10}\phi_{9}\phi^{-1}_{8}\phi_{1}\phi_{4} \phi_{4} \phi_3\phi_{2}$ \\ \hline $ABaBa$ & $\phi^{-1}_{4}\phi_{3}$\\ \hline $bAbAb$ & $\phi^{-1}_{3}\phi_{2}$\\ \hline $bAb$ & $\phi_1\phi^{-1}_{2}\phi^{-1}_{5}$ \\ \hline $BAb$ & $\phi_7$ \\ \hline $[b]_{5}$ & $\phi^{-1}_{1}\phi_6\phi_{8}$ \\ \hline $Aba$ & $\phi_6\phi_{9}$ \\ \hline $Ababa$ & $\phi^{-1}_{7}$ \\ \hline $BAB$ & $\phi_7\phi_{10}\phi_9$\\ \hline $bAB$ & $\phi_5\phi^{-1}_{6}$\\ \hline \end{tabular} \caption{Relations of the attaching disks of the 2-spine $S$ of the manifold at infinity of $\Delta_{3,4,6;\infty}$.} \label{table:6relation} \end{table} From Table \ref{table:disk6}, we get Table \ref{table:6relation}, and thus a presentation of the fundamental group $\pi_{1}(S)$ of the 2-spine $S$, which is also the fundamental group of $M$, the manifold at infinity of $\Delta_{3,4,6;\infty}$.
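As an independent sanity check on this presentation, one can abelianize it: sending each $\phi_i$ to a generator of $\mathbb{Z}^{10}$ turns each relation in Table \ref{table:6relation} into its vector of exponent sums, and $H_1(S)$ is the cokernel of the resulting integer matrix. The following Python sketch (with the exponent sums transcribed by hand from the table, so they should be treated as an assumption) checks that this matrix has rank $9$, so that $H_1(S)$ has free rank $1$:

```python
from fractions import Fraction

# Exponent-sum vectors of the ten disk relations, in the generators
# phi_1..phi_10; transcribed by hand from the table of relations.
relations = [
    [1, 1, 1, 2, 1, 0, -1, -2, 1, -2],  # [B]_20
    [0, 0, 1, -1, 0, 0, 0, 0, 0, 0],    # ABaBa
    [0, 1, -1, 0, 0, 0, 0, 0, 0, 0],    # bAbAb
    [1, -1, 0, 0, -1, 0, 0, 0, 0, 0],   # bAb
    [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],     # BAb
    [-1, 0, 0, 0, 0, 1, 0, 1, 0, 0],    # [b]_5
    [0, 0, 0, 0, 0, 1, 0, 0, 1, 0],     # Aba
    [0, 0, 0, 0, 0, 0, -1, 0, 0, 0],    # Ababa
    [0, 0, 0, 0, 0, 0, 1, 0, 1, 1],     # BAB
    [0, 0, 0, 0, 1, -1, 0, 0, 0, 0],    # bAB
]

def rank(rows):
    """Rank over the rationals by Gaussian elimination with exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col]:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Rank 9 with 10 generators: the abelianization has free rank 1.
assert rank(relations) == 9
```

A rank of $9$ over $\mathbb{Q}$ leaves one free generator in the abelianization, consistent with an infinite cyclic free part of $H_1$.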
Let $s090$ be the one-cusped hyperbolic 3-manifold in the SnapPy census \cite{CullerDunfield:2014}; it is hyperbolic with volume $3.89049957640\ldots$ and $$\pi_{1}(s090)=\langle x, y \mid x^5y^{-1}x^{-1}y^3x^{-1}y^{-1}\rangle.$$ Using Magma, it is easy to see that $\pi_{1}(s090)$ and the group $\pi_{1}(S)$ above are isomorphic. This finishes the proof of Theorem \ref{thm:main} for $\Delta_{3,4,6;\infty}$. \end{document}
\begin{document} \title{Detecting $k$-(Sub-)Cadences and Equidistant Subsequence Occurrences} \begin{abstract} \fontsize{9}{12}\selectfont The equidistant subsequence pattern matching problem is considered. Given a pattern string $P$ and a text string $T$, we say that $P$ is an \emph{equidistant subsequence} of $T$ if $P$ is a subsequence of the text such that consecutive symbols of $P$ in the occurrence are equally spaced. The problem of equidistant subsequences can be seen as a generalization of (sub-)cadences. We give bit-parallel algorithms that yield $o(n^2)$ time algorithms for finding $k$-(sub-)cadences and equidistant subsequences. Furthermore, $O(n\log^2 n)$ and $O(n\log n)$ time algorithms are shown for equidistant and equidistant Abelian matching, respectively, for the case $|P| = 3$. The algorithms make use of a recently introduced technique that can efficiently compute convolutions with linear constraints. \end{abstract} \section{Introduction} Pattern matching on strings is a very important topic in string processing. Usually, strings are regarded and stored as one-dimensional sequences, and many pattern matching algorithms have been proposed to efficiently find particular substrings occurring in them~\cite{KnuthMP77,BoyerM77,DBLP:conf/stringology/FaroLBMM16,Horspool1980,GALIL1983280,CrochemorePerrin91}. However, when one views string/text data on paper or on a screen, it is usually shown in two dimensions: the one-dimensional sequence is displayed in several lines, folded at some length. It is known that the two-dimensional arrangement can be used to embed hidden messages, and/or cause occurrences of unexpected or unintentional messages in the text. A common form of such an embedding is the occurrence of a pattern in a linear layout: vertically, or possibly diagonally, along the two-dimensional display.
For example, there was a (rather controversial) paper~\cite{witztum94:_equid_letter_sequen_book_genes} on the so-called Bible Code, claiming that the Bible contains statistically significant occurrences of various related words, occurring vertically and/or diagonally, in close proximity. Furthermore, there was an incident with a veto letter by the California State Governor~\cite{matier09:_did_schwar}; although it was considered a ``weird coincidence'', the first character on each line of the letter could be connected and interpreted as a very provocative message. In Japanese internet forums, there was a culture of actively using these techniques, referred to as ``tate-yomi'' (vertical reading) and ``naname-yomi'' (diagonal reading), where the author of a message purposely embeds a hidden message in his/her post. Most commonly, the author will write a message that praises some object or opinion in question, but embed a message with the completely opposite meaning bearing the author's true intention. The hidden message can be recovered by reading the text message vertically or diagonally from some position, and is used as a form of sarcasm, as well as a clever method to mock those who were unable to \emph{get} it. Assuming that the text is folded into lines of equal length, vertical or diagonal occurrences of the pattern in two dimensions can be regarded as subsequences of the original text in which the distance between consecutive characters is equal. We call the problem of detecting such occurrences of the pattern the \emph{equidistant subsequence matching} problem. To the best of the authors' knowledge, the only existing publications concern the statistical properties of occurrences of equidistant subsequence patterns, mainly in connection with the so-called Bible Code. Recently, a notion of regularities in strings called \emph{(sub-)cadences}, defined by equidistant occurrences of the same character, was considered by Amir et al.~\cite{Cadences_Amir}.
A $k$-sub-cadence of a string can be viewed as an occurrence of an equidistant subsequence of length $k$ that consists of a single repeated character. A $k$-sub-cadence is a $k$-cadence if the starting position is less than or equal to $d$ and the ending position is greater than $n-d$, where $d$ is the distance between consecutive character occurrences and $n$ is the length of the string. To date, algorithms for detecting anchored cadences (cadences whose starting position is equal to $d$), $3$-(sub-)cadences, and $(\pi_1, \pi_2, \pi_3)$-partial-$3$-cadences (an occurrence of an equidistant subsequence that can become a cadence by changing at most all but 3 characters) have been proposed~\cite{Cadences_Amir, Funakoshi_STACS2020}. However, no efficient algorithm for detecting $k$-(sub-)cadences for arbitrary $k~(1\leq k \leq n)$ is known so far. In this paper, we present counting algorithms for $k$-sub-cadences, $k$-cadences, equidistant subsequence patterns of length $m$ and of length $3$, and equidistant Abelian subsequence patterns of length $3$. Table~\ref{tab:complexity} shows a summary of the results. All algorithms run in $O(n)$ space. Furthermore, we present locating algorithms for $k$-sub-cadences, $k$-cadences, and equidistant subsequence patterns of length $m$. The time complexities of these algorithms can be obtained by adding $\mathit{occ}$ to the second term inside the minimum function of the time complexity of each counting algorithm. To the best of the authors' knowledge, these are the first $o(n^2)$ time algorithms for $k$-(sub-)cadences and equidistant subsequence patterns. Unless otherwise noted, we assume a word RAM model with word size $\Theta(\log n)$, and strings over a general ordered alphabet.
\begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|} \hline Counting time & For a constant size alphabet & For a general ordered alphabet \\ \hline \hline $k$-sub-cadences & $O\left(\min\left\{\frac{n^2}{k},\frac{n^2}{\log n}\right\}\right)$ & $O\left(\min\left\{\frac{n^2}{k},\frac{n^2 \sqrt{k}}{\sqrt{\log n}}\right\}\right)$ \\ \hline $k$-cadences & $O\left(\min\left\{\frac{n^2}{k},\frac{1}{\log n}\left(\frac{n^2}{k^2}+kn\right)\right\}\right)$& $O\left(\min\left\{\frac{n^2}{k},\frac{n \sqrt{k}}{\sqrt{\log n}}\sqrt{\frac{n^2}{k^2}+kn}\right\}\right)$ \\ \hline \end{tabular} \scalebox{0.98}{ \begin{tabular}{|c||c|} \hline Counting time & For a general ordered alphabet \\ \hline \hline Equidistant subsequence pattern & $O\left(\min\left\{\frac{n^2}{m},\frac{n^2}{\log n}\right\}\right)$ \\ \hline Equidistant subsequence pattern of length three & $O(n \log^2 n)$ \\ \hline Equidistant Abelian subsequence pattern of length three & $O(n \log n)$ \\ \hline \end{tabular} } \end{center} \caption{Summary of results.} \label{tab:complexity} \end{table} \section{Preliminaries}\label{sec:preliminaries} Let $\Sigma$ be the \emph{alphabet}. An element of $\Sigma^*$ is called a \emph{string}. The length of a string $T$ is denoted by $|T|$. String $s \in \Sigma^{*}$ is said to be a \emph{subsequence} of string $T \in \Sigma^{*}$ if $s$ can be obtained by removing zero or more characters from $T$. For a string $T$ and an integer $1 \leq i \leq |T|$, $T[i]$ denotes the $i$-th character of $T$. For two integers $1 \leq i \leq j \leq |T|$, $T[i..j]$ denotes the substring of $T$ that begins at position $i$ and ends at position $j$. For convenience, let $T[i..j] = \varepsilon$ when $i > j$. \subsection{k-(Sub-)Cadences} The term ``cadence'' has been used in slightly different ways in the literature (e.g., see~\cite{Cadences_Gardelle, Cadences_Lothaire, Cadences_Amir}). 
In this paper, we use the definitions of cadences and sub-cadences which are used in~\cite{Cadences_Amir} and~\cite{Funakoshi_STACS2020}. For integers $i$ and $d$, the pair $(i,d)$ is called a \emph{$k$-sub-cadence} of $T \in \Sigma^{n}$ if $T[i] = T[i+d] = T[i+2d] = \cdots = T[i+(k-1)d]$, where $1 \leq i \leq n$ and $1 \leq d \leq \lfloor \frac{n-1}{k-1}\rfloor$. The set of $k$-sub-cadences of $T$ can be defined as follows: \begin{definition} For $T\in \Sigma^{n}$, $n \in \mathcal{N}$, and $k \in [1..n]$, \begin{eqnarray*} \mathit{KSC}(T,k) = \left\{(i,d) \left| \begin{array}{l} T[i] = T[i+d] = T[i+2d] = \cdots = T[i+(k-1)d]\\ 1 \leq i \leq n, 1 \leq d \leq \lfloor \frac{n-1}{k-1}\rfloor \end{array} \right. \right\}. \end{eqnarray*} \end{definition} For integers $i$ and $d$, the pair $(i,d)$ is called a \emph{$k$-cadence} of $T \in \Sigma^{n}$ if $(i,d)$ is a $k$-sub-cadence and satisfies the inequalities $i-d \leq 0$ and $n < i+kd$. The set of $k$-cadences of $T$ can be defined as follows: \begin{definition} For $T\in \Sigma^{n}$, $n \in \mathcal{N}$, and $k \in [1..n]$, \begin{eqnarray*} \mathit{KC}(T,k) = \left\{(i,d) \left| \begin{array}{l} T[i] = T[i+d] = T[i+2d] = \cdots = T[i+(k-1)d]\\ 1 \leq i \leq n, 1 \leq d \leq \lfloor \frac{n-1}{k-1}\rfloor, i-d \leq 0, n < i+kd \end{array} \right. \right\}. \end{eqnarray*} \end{definition} \subsection{Equidistant Subsequence Occurrences} For integers $i$ and $d$, we say that pair $(i,d)$ is an \emph{equidistant subsequence occurrence} of $P\in\Sigma^m$ in $T\in\Sigma^{n}$ if $P = T[i] \cdot T[i+d] \cdot T[i+2d] \cdots T[i+(m-1)d]$, where $1\leq i\leq n$ and $1 \leq d \leq \lfloor \frac{n-1}{m-1}\rfloor$. 
The set of equidistant subsequence occurrences of $P$ in $T$ can be defined as follows: \begin{definition} For $T\in \Sigma^{n}, P \in \Sigma^{m}$ and $n,m\in \mathcal{N}$, \begin{eqnarray*} \mathit{ESP}(T,P) = \left\{(i,d) \left| \begin{array}{l} P=T[i] \cdot T[i+d] \cdot T[i+2d] \cdots T[i+(m-1)d]\\ 1\leq i \leq n, 1\leq d \leq \lfloor \frac{n-1}{m-1}\rfloor \end{array} \right. \right\}. \end{eqnarray*} \end{definition} \subsection{Equidistant Abelian Subsequence Occurrences} Two strings $S_1$ and $S_2$ are said to be \emph{Abelian equivalent} if $S_1$ is a permutation of $S_2$, or vice versa. Now for integers $i$ and $d$, we say that pair $(i,d)$ is an \emph{equidistant Abelian subsequence occurrence} of $P\in\Sigma^m$ in $T\in\Sigma^{n}$ if $T[i] \cdot T[i+d] \cdot T[i+2d] \cdots T[i+(m-1)d]$ and $P$ are Abelian equivalent, where $1\leq i\leq n$ and $1 \leq d \leq \lfloor \frac{n-1}{m-1}\rfloor$. The set of equidistant Abelian subsequence occurrences of $P$ in $T$ can be defined as follows: \begin{definition} For $T\in \Sigma^{n}, P \in \Sigma^{m}$ and $n,m\in \mathcal{N}$, \begin{eqnarray*} \mathit{EASP}(T,P) = \left\{(i,d) \left| \begin{array}{l} T[i] \cdot T[i+d] \cdots T[i+(m-1)d] {\ \rm and \ } P {\ \rm are \ Abelian \ equivalent}\\ 1\leq i \leq n, 1\leq d \leq \lfloor \frac{n-1}{m-1}\rfloor \end{array} \right. \right\}. \end{eqnarray*} \end{definition} When it is clear from the context, we denote $\mathit{KSC}(T,k)$ as $\mathit{KSC}$, $\mathit{KC}(T,k)$ as $\mathit{KC}$, and $\mathit{ESP}(T,P)$ as $\mathit{ESP}$. \section{Detecting k-Sub-Cadences}\label{sec:k-sub-cadences} In this section, we consider algorithms for detecting $k$-sub-cadences. 
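As a baseline for the algorithms that follow, the definition of $\mathit{KSC}$ can be realized directly. Below is a minimal Python sketch (the function name is ours, and $k \geq 2$ is assumed so that the bound $\lfloor \frac{n-1}{k-1} \rfloor$ on $d$ is defined):

```python
def ksc(T, k):
    """All k-sub-cadences (i, d) of T, with 1-indexed positions; k >= 2."""
    n = len(T)
    result = []
    for d in range(1, (n - 1) // (k - 1) + 1):
        for i in range(1, n + 1):
            # The last position i + (k-1)d must stay inside T.
            if i + (k - 1) * d <= n and all(
                T[i - 1] == T[i - 1 + j * d] for j in range(1, k)
            ):
                result.append((i, d))
    return result

assert ksc("aaaa", 3) == [(1, 1), (2, 1)]
assert ksc("abab", 2) == [(1, 2), (2, 2)]
```

This brute force checks $O(k)$ characters for each of $O(\frac{n^2}{k})$ candidate pairs $(i,d)$, so it serves only as a reference implementation for the faster algorithms below.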
\subsection*{Algorithm 1} One of the simplest methods is as follows: for each distance $d$ $(1 \leq d \leq \lfloor \frac{n-1}{k-1}\rfloor)$, we construct the text $ST_d=T[1]\cdot T[1+d]\cdots T[1+d \lfloor \frac{n-1}{d}\rfloor]\cdot\$ \cdot T[2]\cdot T[2+d]\cdots T[2+d \lfloor \frac{n-2}{d}\rfloor]\cdot \$ \cdots T[d]\cdot T[2d]\cdots T[d \lfloor \frac{n}{d}\rfloor]$ of length $n+d-1$. To find the $k$-sub-cadences with distance $d$ in text $T$, we find runs of the same character of length $k$ as substrings in $ST_d$. \begin{figure} \caption{Preprocessing for Algorithm 1.} \label{fig:3split} \end{figure} Fig.~\ref{fig:3split} shows an example of the 3-split text. In this figure, the strings in the middle row are called \emph{$d$-skip strings}, and the string on the bottom is called the \emph{$d$-split text $ST_d$}. In $ST_d$, we use a symbol $\$\notin\Sigma$ in order to prevent a run of the same character of length $k$ that crosses the ends of $d$-skip strings from being falsely detected as a $k$-sub-cadence. The text obtained by concatenating all the $ST_d$, for $1\leq d \leq \lfloor \frac{n-1}{k-1}\rfloor$, separated by $\$$, is called the \emph{split text}. Given the split text, we can compute $\mathit{KSC}$ simply by checking where the same character is repeated $k$ times. The length of $ST_d$ is at most $n+d$ including $\$$. The maximum value of $d$ is $\lfloor \frac{n-1}{k-1}\rfloor$, and therefore the number of $ST_d$ of text $T$ is at most $\lfloor \frac{n-1}{k-1} \rfloor$. Hence, the length of the split text of $T$ is $O(\frac{n^2}{k})$. We can check whether the same character is repeated $k$ times in the split text in $O(\frac{n^2}{k})$ time. Although we have introduced the split text to ease the description, it does not have to be constructed explicitly. From the above, we get the following result.
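A minimal Python sketch of the scan above (we materialize each $ST_d$ for clarity, even though, as noted, the split text need not be constructed explicitly; the function name is ours):

```python
def count_ksc(T, k):
    """Count k-sub-cadences by scanning runs in each d-split text; k >= 2."""
    n = len(T)
    total = 0
    for d in range(1, (n - 1) // (k - 1) + 1):
        # d-split text: the d-skip strings T[r::d], joined by '$'.
        st_d = "$".join(T[r::d] for r in range(d))
        run = 1
        for prev, cur in zip(st_d, st_d[1:]):
            run = run + 1 if prev == cur and cur != "$" else 1
            # A run of length L >= k contributes L - k + 1 occurrences.
            if run >= k:
                total += 1
    return total

assert count_ksc("aaaa", 3) == 2
assert count_ksc("abab", 2) == 2
```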
\begin{theorem} \label{theo:k-sub-1} There is an algorithm for locating all $k$-sub-cadences for given $k$ $(1 \leq k \leq n)$ which uses $O\left(\frac{n^2}{k}\right)$ time and $O(n)$ space. \end{theorem} As can be seen from the example of $T=\mathtt{a}^n$, $|\mathit{KSC}|$ can be $\Omega(\frac{n^2}{k})$. Therefore, when we locate all $(i,d) \in \mathit{KSC}$, this algorithm is optimal in the worst case. In the next subsection, we show a counting algorithm that is efficient when the value of $k$ is small. Moreover, we show a locating algorithm that is efficient when both the value of $k$ and $|\mathit{KSC}|$ are small. \subsection*{Algorithm 2} In this subsection, we will show the following result: \begin{theorem} \label{theo:k-sub-2} For a constant size alphabet, there is an $O\left(\frac{n^2}{\log n}\right)$ time algorithm for counting all $k$-sub-cadences for given $k$. We can also locate these occurrences in $O\left(\frac{n^2}{\log n}+\mathit{occ}\right)$ time, where $\mathit{occ}$ is the number of the outputs. For a general ordered alphabet, there is an $O\left(\frac{n^2 \sqrt{k}}{\sqrt{\log n}}\right)$ time algorithm for counting all $k$-sub-cadences for given $k$. We can also locate these occurrences in $O\left(\frac{n^2 \sqrt{k}}{\sqrt{\log n}}+\mathit{occ}\right)$ time. These algorithms run in $O(n)$ space. \end{theorem} Note that for counting all $k$-sub-cadences, for a constant size alphabet (resp. for a general ordered alphabet), this algorithm is faster than Algorithm 1 if $k$ is $o(\log n)$ (resp. $o\left(\sqrt[3]{\log n}\right)$). For locating all $k$-sub-cadences, for a constant size alphabet (resp. for a general ordered alphabet), if $|\mathit{KSC}|$ is $o(\frac{n^2}{k})$ and $k$ is $o(\log n)$ (resp. $o\left(\sqrt[3]{\log n}\right)$), then this algorithm is faster. Now we will show how to count all $k$-sub-cadences of a character $c \in\Sigma$.
Let $\delta_c[1..n]$ be a binary sequence for character $c$ defined as follows: \[\delta_c[i] := \begin{cases} 1 &\textup{if $T[i] = c$}, \\ 0 &\textup{if $T[i] \neq c$}. \end{cases}\] If $(i,d)$ is a $k$-sub-cadence, then $\delta_c[i]=\delta_c[i+d]=\cdots=\delta_c[i+(k-1)d]=1$. Therefore, we can check whether $(i,d)$ is a $k$-sub-cadence or not by computing $\delta_c[i] \cdot \delta_c[i+d] \cdots \delta_c[i+(k-1)d]$. To compute this, we use bit-parallelism, i.e., the bit-wise operations AND and SHIFT\_LEFT, denoted by $\texttt{\&}$ and $\texttt{<<}$, respectively, as in the C language. For each $d$ $(1 \leq d \leq \lfloor \frac{n-1}{k-1} \rfloor)$, let $Q_d = \delta_c \texttt{ \& } (\delta_c \texttt{ << } d) \texttt{ \& } (\delta_c \texttt{ << } 2d) \texttt{ \& } \cdots \texttt{ \& } (\delta_c \texttt{ << } (k-1)d)$. If $Q_d[i]=1$, then $(i,d)$ is a $k$-sub-cadence. See Figure~\ref{fig:alg2} for a concrete example. \begin{figure} \caption{Let $T=\mathtt{caaacaabaabaabcabc}$. $(3,3)$, $(4,3)$, and $(7,3)$ are $4$-sub-cadences of character `$\mathtt{a}$' with $d=3$.} \label{fig:alg2} \end{figure} If we want to count all $k$-sub-cadences with distance $d$, we only have to count the number of $1$'s in $Q_d$. If we want to locate all $k$-sub-cadences with distance $d$, we have to locate all $1$'s in $Q_d$. In the word RAM model, SHIFT\_LEFT and AND operations can be done in constant time per operation on bit sequences of length $O(\log n)$. Since $\delta_c$ is a binary sequence of length $n$, one SHIFT\_LEFT or AND operation can be done in $O(\frac{n}{\log n})$ time. Therefore, $Q_d$ can be obtained in $O(k \frac{n}{\log n})$ time. Since it is known that the number of $1$'s in a bit sequence of length $O(\log n)$ can be obtained in $O(1)$ time by using the ``popcnt'' operation, the number of $1$'s in $Q_d$ can be counted in $O(\frac{n}{\log n})$ time.
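The computation of $Q_d$ can be sketched in Python, whose arbitrary-precision integers serve as the bit sequences. This is a sketch with names of our own choosing; we store position $i$ at bit $i-1$, so the alignment is done with right shifts, the mirror image of the SHIFT\_LEFT layout above:

```python
def qd(T, c, k, d):
    """Bitmask whose bit i-1 is set iff (i, d) is a k-sub-cadence of c in T."""
    delta = 0
    for i, ch in enumerate(T):
        if ch == c:
            delta |= 1 << i          # bit i encodes text position i + 1
    q = delta
    for j in range(1, k):
        q &= delta >> (j * d)        # align position i + j*d onto position i
    return q

# The running example: T = caaacaabaabaabcabc, character 'a', k = 4, d = 3.
T = "caaacaabaabaabcabc"
q = qd(T, "a", 4, 3)
starts = [i + 1 for i in range(len(T)) if (q >> i) & 1]
assert starts == [3, 4, 7]           # the 4-sub-cadences (3,3), (4,3), (7,3)
# bin(q).count("1") plays the role of the popcnt operation.
```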
Hence, for all $1 \leq d \leq \lfloor \frac{n-1}{k-1} \rfloor$, we can count all $k$-sub-cadences of character $c$ in $O\left(k \frac{n}{\log n} \lfloor \frac{n-1}{k-1} \rfloor + \frac{n}{\log n} \lfloor \frac{n-1}{k-1} \rfloor\right) \subseteq O(\frac{n^2}{\log n})$ time. It is also known that the position of the rightmost $1$ (the least significant set bit) in a bit sequence of length $O(\log n)$ can be found in constant time. We split $Q_d$ into $O(\frac{n}{\log n})$ blocks of length $O(\log n)$. For each block, the least significant set bit can be found in $O(1)$ time if the block contains at least one $1$. After finding the least significant set bit, we mask this bit to $0$ and repeat the above operation. The bit mask operation can be done in $O(1)$ time. Hence, we can report all the positions of $1$'s in $Q_d$ in $O(\frac{n}{\log n} + \mathit{occ})$ time. Therefore, we can locate all $k$-sub-cadences of character $c$ in $O(\frac{n^2}{\log n} + \mathit{occ})$ time. We showed how to detect all $k$-sub-cadences of character $c$, so we can detect all $k$-sub-cadences by doing the above operations for each character in $\Sigma$. For a constant size alphabet, since we only do the above operations a constant number of times, we can count all $k$-sub-cadences in $O(\frac{n^2}{\log n})$ time. We can also locate these occurrences in $O(\frac{n^2}{\log n} + \mathit{occ})$ time. However, for a general ordered alphabet, we have to do the above operations $|\Sigma|$ times. In this case, if the number of occurrences of the character is small, we use another algorithm that generalizes Amir et al.'s algorithm~\cite{Cadences_Amir} for detecting $3$-cadences to $k$-sub-cadences: Let $N_c$ be the set of positions which are occurrences of a character $c$.
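For illustration, this pair-based generalization can be sketched in Python as follows (the function name is ours): every pair of occurrences of $c$ fixes a candidate $(i,d)$, which is then verified in $O(k)$ time, for $O(k|N_c|^2)$ time in total.

```python
def k_sub_cadences_sparse(T, c, k):
    # Pair-based sketch: any k-sub-cadence (i, d) of character c has both
    # i and i + d among the occurrences of c, so trying all pairs of
    # occurrences and verifying each candidate in O(k) time finds them all.
    n = len(T)
    occurrences = [i for i in range(1, n + 1) if T[i - 1] == c]
    found = set()
    for a_idx, i in enumerate(occurrences):
        for j in occurrences[a_idx + 1:]:
            d = j - i
            if i + (k - 1) * d > n:
                continue  # the progression would leave the text
            if all(T[i + t * d - 1] == c for t in range(k)):
                found.add((i, d))
    return sorted(found)
```

On the text of Figure~\ref{fig:alg2} it agrees with the bit-parallel method.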
If we pick two positions in $N_c$ and regard the smaller one as the starting position $i$ of a $k$-sub-cadence and the larger one as the second position $i+d$, then the distance $d$ is uniquely determined. We can check whether the pair $(i,d)$ is a $k$-sub-cadence or not in $O(k)$ time. Since the number of pairs is at most $|N_c|^2$, we can count or locate $k$-sub-cadences of character $c$ in $O(k|N_c|^2)$ time. Thus, for a general ordered alphabet, all $k$-sub-cadences can be counted in\\ $O(\sum_{c \in \Sigma}^{} \min\{k|N_c|^2, \frac{n^2}{\log n}\})$ time. Since $\min\{a,b\} \leq \sqrt{ab}$ for non-negative $a$ and $b$, we have $\min\{k|N_c|^2, \frac{n^2}{\log n}\} \leq |N_c| \frac{n \sqrt{k}}{\sqrt{\log n}}$, and hence $O(\sum_{c \in \Sigma}^{} \min\{k|N_c|^2, \frac{n^2}{\log n}\}) \subseteq O((\sum_{c \in \Sigma}^{} |N_c|) \frac{n \sqrt{k}}{\sqrt{\log n}}) \subseteq O(\frac{n^2 \sqrt{k}}{\sqrt{\log n}})$. Therefore, we can count all $k$-sub-cadences in $O(\frac{n^2 \sqrt{k}}{\sqrt{\log n}})$ time by combining Algorithm 2 with the generalized algorithm described above. Also, all $k$-sub-cadences can be located in $O(\sum_{c \in \Sigma}^{} \min\{k|N_c|^2, \frac{n^2}{\log n}+\mathit{occ}_c\})$ time, where $\mathit{occ}_c$ is the number of $k$-sub-cadences of character $c$. Since $O(\sum_{c \in \Sigma}^{} \min\{k|N_c|^2, \frac{n^2}{\log n}+\mathit{occ}_c\}) \subseteq O(\sum_{c \in \Sigma}^{} \min\{k|N_c|^2, \frac{n^2}{\log n}\}+\mathit{occ}) \subseteq O(\frac{n^2 \sqrt{k}}{\sqrt{\log n}}+ \mathit{occ})$, we can locate all $k$-sub-cadences in $O(\frac{n^2 \sqrt{k}}{\sqrt{\log n}}+ \mathit{occ})$ time. From the above, we obtain the following result: \begin{theorem} \label{theo:k-sub-cadences} For a constant size alphabet (resp. for a general ordered alphabet), all $k$-sub-cadences with given $k$ can be counted in $O\left(\min\left\{\frac{n^2}{k},\frac{n^2}{\log n}\right\}\right)$ time\\ (resp.
$O\left(\min\left\{\frac{n^2}{k},\frac{n^2 \sqrt{k}}{\sqrt{\log n}}\right\}\right)$ time) and $O(n)$ space, and can be located in\\ $O\left(\min\left\{\frac{n^2}{k},\frac{n^2}{\log n}+\mathit{occ}\right\}\right)$ time (resp. $O\left(\min\left\{\frac{n^2}{k},\frac{n^2 \sqrt{k}}{\sqrt{\log n}}+\mathit{occ}\right\}\right)$ time) and $O(n)$ space. \end{theorem} \section{Detecting k-Cadences}\label{sec:k-cadences} In this section, we consider algorithms for detecting $k$-cadences. \subsection*{Algorithm 3} We use the same techniques as in Algorithm 1, and then check whether each element belongs to $\mathit{KC}$ or not. After computing $\mathit{KSC}$ by using Algorithm 1, we have to check whether each $(i,d) \in \mathit{KSC}$ satisfies the following conditions: $i \leq d$ and $i+kd > n$. Since $|\mathit{KSC}| \in O(\frac{n^2}{k})$, we can do this operation in $O(\frac{n^2}{k})$ time. Therefore, we can obtain the following result: \begin{theorem} \label{theo:k-1} There is an algorithm for locating all $k$-cadences for given $k$ which uses $O\left(\frac{n^2}{k}\right)$ time and $O(n)$ space. \end{theorem} \subsection*{Algorithm 4} Now, we will show the following result: \begin{theorem} \label{theo:k-cadences-2} For a constant size alphabet, there is an $O\left(\frac{1}{\log n}\left(\frac{n^2}{k^2}+kn\right)\right)$ time algorithm for counting all $k$-cadences for given $k$. We can also locate these occurrences in\\ $O\left(\frac{1}{\log n}\left(\frac{n^2}{k^2}+kn\right)+\mathit{occ}\right)$ time. For a general ordered alphabet, there is an \\$O\left(\frac{n \sqrt{k}}{\sqrt{\log n}}\sqrt{\frac{n^2}{k^2}+kn}\right)$ time algorithm for counting all $k$-cadences for given $k$. We can also locate these occurrences in $O\left(\frac{n \sqrt{k}}{\sqrt{\log n}}\sqrt{\frac{n^2}{k^2}+kn}+\mathit{occ}\right)$ time. These algorithms run in $O(n)$ space. \end{theorem} These time complexities are at least as fast as those of Algorithm 2 for $k$-sub-cadences.
Moreover, if the value of $k$ is neither constant nor $\Omega(n)$, this algorithm is faster than Algorithm 2 because $\frac{n^2}{k^2}+kn$ will be $o(n^2)$. Note that when we count all $k$-cadences, for a constant size alphabet, this algorithm is faster than Algorithm 3 if $k$ is $o(\sqrt{n \log n})$. (This is because $\frac{n}{k}+k^2$ will be $o(n \log n)$.) Also, for a general ordered alphabet, this algorithm is faster if $k$ is $o(\log n)$. (This is because $k \sqrt{k} \sqrt{\frac{n^2}{k^2}+kn}$ will be $o\left(n \sqrt{\log n}\right)$ and then $kn^2+k^4n$ will be $o(n^2 \log n)$.) When we locate all $k$-cadences, for a constant size alphabet (resp. for a general ordered alphabet), if $|\mathit{KC}|$ is $o(\frac{n^2}{k})$ and $k$ is $o(\sqrt{n \log n})$ (resp. $o(\log n)$), then this algorithm is faster. First, we will show that (the size of) $\mathit{KC}$ can be obtained by using techniques similar to those of Algorithm 2 for a single character, and then we will show how to speed this up. Again, each $(i,d)$ has to satisfy the following conditions: $i \leq d$ and $i+kd > n$, that is, $n-kd < i \leq d$. Then let $R_d[1..n]$ be the binary sequence defined as follows: \[R_d[i] := \begin{cases} 1 &\textup{if $n-kd < i \leq d$}, \\ 0 &\textup{otherwise}. \end{cases}\] Let $Q'_d = Q_d \texttt{ \& } R_d$. If $Q'_d[i]=1$, then $(i,d)$ is a $k$-cadence. Since $R_d$ and $Q'_d$ can be computed in $O(\frac{n}{\log n})$ time, this algorithm runs in the same time complexity as Algorithm 2. Now we show how to speed up this algorithm. In the above algorithm, we obtain $Q'_d$ by masking $Q_d$. However, to calculate $k$-cadences, we need only the range $[n-kd+1..d]$ of the sequence, and it is useless to calculate other ranges.
Therefore, we compute $Q'_d$ by the following operations: $Q'_d = \delta_c[n-kd+1..d] \texttt{ \& } (\delta_c \texttt{ << } d)[n-kd+1..d] \texttt{ \& } (\delta_c \texttt{ << } 2d)[n-kd+1..d] \texttt{ \& } \cdots \texttt{ \& } (\delta_c \texttt{ << } (k-1)d)[n-kd+1..d] = \delta_c[n-kd+1..d] \texttt{ \& } \delta_c[n-(k-1)d+1..2d] \texttt{ \& } \delta_c[n-(k-2)d+1..3d] \texttt{ \& } \cdots \texttt{ \& } \delta_c[n-d+1..kd]$, where the $m$-th operand is $\delta_c[n-(k-m+1)d+1..md]$. The length of the range $[n-kd+1..d]$ is at most $(k+1)d-n$, with $\lceil \frac{n}{k+1} \rceil \leq d \leq \lfloor \frac{n-1}{k-1} \rfloor$. We can compute all $Q'_d$ for $\lceil \frac{n}{k+1} \rceil \leq d \leq \lfloor \frac{n-1}{k-1} \rfloor$\\ in $\sum_{d=\lceil \frac{n}{k+1} \rceil}^{\lfloor \frac{n-1}{k-1} \rfloor}\left(\frac{(k+1)d-n}{\log n}k\right)$ time. Then, \footnotesize \begin{align*} & \sum_{d=\left\lceil \frac{n}{k+1} \right\rceil}^{\left\lfloor \frac{n-1}{k-1} \right\rfloor}\left(\frac{(k+1)d-n}{\log n}k\right) \\ & =\sum_{d=1}^{\left\lfloor \frac{n-1}{k-1} \right\rfloor}\left(\frac{(k+1)d-n}{\log n}k\right)-\sum_{d=1}^{\left\lceil \frac{n}{k+1} \right\rceil-1}\left(\frac{(k+1)d-n}{\log n}k\right) \\ & =\frac{k}{\log n} \left( (k+1)\frac{\left\lfloor\frac{n-1}{k-1}\right\rfloor\left(\left\lfloor\frac{n-1}{k-1}\right\rfloor+1\right)}{2}-\left\lfloor\frac{n-1}{k-1}\right\rfloor n \right. \\ & \left. \qquad -(k+1)\frac{\left\lceil\frac{n}{k+1}\right\rceil\left(\left\lceil\frac{n}{k+1}\right\rceil-1\right)}{2} +\left(\left\lceil\frac{n}{k+1}\right\rceil-1\right)n\right) \\ & < \frac{1}{2 \log n}\left( \frac{4n^2+n(k^3+k^2-5k-5)-k^3+3k+2}{(k-1)^2}+ 3nk +k^2 +k\right). \end{align*} \normalsize $\frac{1}{2 \log n}\left( \frac{4n^2+n(k^3+k^2-5k-5)-k^3+3k+2}{(k-1)^2}+ 3nk +k^2 +k\right)$ is in $O(\frac{1}{\log n}( \frac{n^2}{k^2}+kn))$.\\ Therefore, we can count all $k$-cadences of a character in $O(\frac{1}{\log n}( \frac{n^2}{k^2}+kn))$ time.
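As a correctness check of the range restriction (not of the word-parallel packing), the computation can be sketched position by position in Python; the function name and the brute-force inner test are ours.

```python
import math

def k_cadences_of_char(T, c, k):
    # Sketch of the range-restricted computation of Q'_d: a k-cadence
    # (i, d) must satisfy n - k*d < i <= d, which forces
    # ceil(n / (k+1)) <= d <= floor((n-1) / (k-1)), so only the window
    # (n - k*d, d] of each Q'_d has to be evaluated.
    n = len(T)
    out = []
    for d in range(math.ceil(n / (k + 1)), (n - 1) // (k - 1) + 1):
        lo = max(1, n - k * d + 1)
        hi = min(d, n - (k - 1) * d)  # all k positions must fit in T
        for i in range(lo, hi + 1):
            if all(T[i + j * d - 1] == c for j in range(k)):
                out.append((i, d))
    return out
```

For example, in $T=\mathtt{abcabcabc}$ the triple of `$\mathtt{a}$'s at positions $1,4,7$ is a $3$-cadence, and it is the only one for that character.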
For the locating algorithm and for a general ordered alphabet, we can use the same techniques as in the previous section. Therefore, we get Theorem~\ref{theo:k-cadences-2}. From the above, we obtain the following result: \begin{theorem} \label{theo:k-cadences} For a constant size alphabet (resp. for a general ordered alphabet), all $k$-cadences with given $k$ can be counted in $O\left(\min\left\{\frac{n^2}{k},\frac{1}{\log n}\left(\frac{n^2}{k^2}+kn\right)\right\}\right)$ time\\ (resp. $O\left(\min\left\{\frac{n^2}{k},\frac{n \sqrt{k}}{\sqrt{\log n}}\sqrt{\frac{n^2}{k^2}+kn}\right\}\right)$ time) and $O(n)$ space, and can be located in\\ $O\left(\min\left\{\frac{n^2}{k},\frac{1}{\log n}\left(\frac{n^2}{k^2}+kn\right)+\mathit{occ}\right\}\right)$ time\\(resp. $O\left(\min\left\{\frac{n^2}{k},\frac{n \sqrt{k}}{\sqrt{\log n}}\sqrt{\frac{n^2}{k^2}+kn}+\mathit{occ}\right\}\right)$ time) and $O(n)$ space. \end{theorem} \section{Detecting Equidistant Subsequence Pattern}\label{sec:equidistant_subsequence_pattern} In this section, we consider algorithms for detecting equidistant subsequence patterns. \subsection*{Algorithm 5} We use techniques similar to those of Algorithm 1. For each distance $d$ $(1 \leq d \leq \lfloor \frac{n-1}{m-1}\rfloor)$, we construct the text $ST_d$. After preparing the split text, we can compute $\mathit{ESP}$ using existing substring pattern matching algorithms. Since the Knuth-Morris-Pratt algorithm~\cite{KnuthMP77} runs in $O(n)$ time for a text of length $n$, we obtain the following result: \begin{theorem} \label{theo:ESPM-1} There is an algorithm for locating all equidistant subsequence occurrences for a given pattern $P$ of length $m$ which uses $O\left(\frac{n^2}{m}\right)$ time and $O(n)$ space. \end{theorem} Like $\mathit{KSC}$, for text $T=\mathtt{a}^n$ and pattern $P=\mathtt{a}^m$, $|\mathit{ESP}|$ can be $\Omega(\frac{n^2}{m})$. Therefore, when we locate all $(i,d) \in \mathit{ESP}$, this algorithm is optimal in the worst case.
In the next subsection, we show a counting algorithm that is efficient when the value of $m$ is small. Moreover, we show a locating algorithm that is efficient when both $m$ and $|\mathit{ESP}|$ are small. \subsection*{Algorithm 6} Now we will show the following results: \begin{theorem} \label{theo:ESPM-2} There is an algorithm for counting all equidistant subsequence occurrences which uses $O\left(\frac{n^2}{\log n}\right)$ time and $O\left(\frac{|\Sigma_P| n}{\log n}\right)$ space, where $\Sigma_P$ is the set of distinct characters in the given pattern $P$. We can also locate these occurrences in $O\left(\frac{n^2}{\log n}+\mathit{occ}\right)$ time and $O\left(\frac{|\Sigma_P| n}{\log n}\right)$ space. \end{theorem} First, we construct $\delta_c$ for all $c \in \Sigma_P$. For each $d$ $(1 \leq d \leq \lfloor \frac{n-1}{m-1} \rfloor)$, let $Q^{''}_d = \delta_{P[1]} \texttt{ \& }\\ (\delta_{P[2]} \texttt{ << } d) \texttt{ \& } (\delta_{P[3]} \texttt{ << } 2d) \texttt{ \& } \cdots \texttt{ \& } (\delta_{P[m]} \texttt{ << } (m-1)d)$. If $Q^{''}_d[i]=1$, then $(i,d)$ is an occurrence of equidistant subsequence pattern $P$. See Figure~\ref{fig:alg6} for a concrete example. \begin{figure} \caption{Let $T=\mathtt{caaacaabaabaabcabc}$ and $P=\mathtt{aacc}$. $(9,3)$ is an occurrence of the equidistant subsequence pattern with $d=3$.} \label{fig:alg6} \end{figure} All of the elements of $\mathit{ESP}$ can be counted/located by applying to $Q^{''}_d$ the same method as in Algorithm 2. After constructing $\delta_c$ for all $c \in \Sigma_P$, all occurrences of the equidistant subsequence pattern can be counted in $O(\frac{n^2}{\log n})$ time and $O(n)$ space and can be located in $O(\frac{n^2}{\log n}+\mathit{occ})$ time and $O(n)$ space. Constructing $\delta_c$ for all $c \in \Sigma_P$ needs $O(\frac{|\Sigma_P| n}{\log n})$ time and space. Since $\frac{|\Sigma_P| n}{\log n}$ is at most $O(\frac{n^2}{\log n})$, we get Theorem~\ref{theo:ESPM-2}.
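The computation of $Q''_d$ can be sketched in Python much like Algorithm 2, ANDing one shifted indicator sequence per pattern character. The function name is ours; with position $1$ stored in the lowest bit of a Python integer, the paper's SHIFT\_LEFT again becomes a right shift.

```python
def equidistant_occurrences(T, P):
    # Sketch of Algorithm 6 (assumes |P| >= 2).  Q''_d is the AND of the
    # shifted indicator sequences delta_{P[1]}, ..., delta_{P[m]}.
    n, m = len(T), len(P)
    delta = {}
    for c in set(P):
        bits = 0
        for i, ch in enumerate(T, start=1):
            if ch == c:
                bits |= 1 << (i - 1)
        delta[c] = bits
    result = []
    for d in range(1, (n - 1) // (m - 1) + 1):
        q = delta[P[0]]
        for j in range(1, m):
            q &= delta[P[j]] >> (j * d)
        i = 1
        while q:  # report set bits naively for clarity
            if q & 1:
                result.append((i, d))
            q >>= 1
            i += 1
    return result
```

On the text and pattern of Figure~\ref{fig:alg6}, the only occurrence found is $(9,3)$.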
If $m$ is $o(\log n)$, Algorithm 6 is faster than Algorithm 5, and its space usage $O(\frac{|\Sigma_P| n}{\log n})$ is in $O(n)$. From the above, we obtain the following result: \begin{theorem} \label{theo:ESPM} All occurrences of equidistant subsequence pattern can be counted in\\ $O\left(\min\left\{\frac{n^2}{m},\frac{n^2}{\log n}\right\}\right)$ time and $O(n)$ space and can be located in $O\left(\min\left\{\frac{n^2}{m},\frac{n^2}{\log n}+\mathit{occ}\right\}\right)$ time and $O(n)$ space. \end{theorem} \section{Detecting Equidistant Subsequence Pattern of Length Three}\label{sec:equidistant_subsequence_pattern_of_length_three} In this section, we show more efficient algorithms that count all occurrences of an equidistant subsequence pattern for the case where the length of the pattern is three. In addition, we show an algorithm for counting all occurrences of equidistant Abelian subsequence patterns of length three. Since we heavily use the techniques of~\cite{Funakoshi_STACS2020} for $3$-sub-cadences, we first show their algorithm for $3$-sub-cadences and then generalize it to solve the equidistant subsequence pattern matching problem. \subsection*{Counting 3-sub-cadences~\cite{Funakoshi_STACS2020}} Let $a[1..n]$ and $b[1..n]$ be two sequences. The sequence $c[1..2n]$ can be computed by the discrete acyclic convolution $c[z] = \sum_{\substack{x+y=z \\ (x,y) \in [0,1,2,\dots,n]^2}} a[x] b[y]$. The discrete acyclic convolution can be computed in $O(n \log n)$ time by using the fast Fourier transform. This convolution can be interpreted geometrically as follows: $c[z] = \sum_{\substack{x+y=z \\ (x,y) \in G \cap \mathbb{Z}^2}} a[x] b[y]$, where $G$ is the square given by $\{(x,y): 0\leq x,y \leq n \}$. Funakoshi and Pape-Lange~\cite{Funakoshi_STACS2020} showed that $3$-sub-cadences can be counted by using the discrete acyclic convolution. If $(i,d)$ is a $3$-sub-cadence with a character $c$, then $\delta_c[i] \cdot \delta_c[i+2d]=1$ and $T[i+d]=c$.
Let $c[2z]=\sum_{\substack{x+y=2z \\ (x,y) \in [0,1,2,\dots,n]^2}} \delta_c[x] \delta_c[y]$; then $c[2z]$ is the number of pairs of occurrences of $c$ whose midpoint is the index $z$. Since $z+z=2z$ and $\delta_c[z] \cdot \delta_c[z]=1$ if $T[z]=c$, $c[2z]$ counts one false positive. In addition, since $x+y=2z$ implies $y+x=2z$ and $\delta_c[x] \cdot \delta_c[y] = \delta_c[y] \cdot \delta_c[x]$, $c[2z]$ counts each pair with $x \neq y$ twice. Let $f[z]$ be the number of all $3$-sub-cadences with a character $c$ such that the second occurrence of $c$ is at index $z$. $f[z]$ can be computed in $O(n \log n)$ time as follows: \[f[z] := \begin{cases} \frac{c[2z]-1}{2} &\textup{if $T[z]=c$},\\ 0 &\textup{if $T[z]\neq c$}. \end{cases}\] Furthermore, they extended the geometric interpretation of convolution and showed that if $G$ is a triangle with perimeter $p$, the sequence $c$ can be computed in $O(p \log^2 p)$ time. \subsection*{Counting Equidistant Subsequence Patterns of Length Three} Now we show the algorithm for counting all occurrences of an equidistant subsequence pattern whose length is three. Let $g[z]$ be the number of all occurrences of the equidistant subsequence pattern such that the second position of the occurrence is index $z$. If $P=\alpha \alpha \alpha$, this problem is equal to the problem of counting all $3$-sub-cadences. Therefore, $g[z]$ can be computed in $O(n \log n)$ time as follows: \[g[z] := \begin{cases} \frac{c[2z]-1}{2} &\textup{if $T[z]=\alpha$}\\ 0 &\textup{if $T[z]\neq \alpha$} \end{cases}\] where $c[2z]=\sum_{\substack{x+y=2z \\ (x,y) \in [0,1,2,\dots,n]^2}} \delta_\alpha[x] \delta_\alpha[y]$. If $P=\alpha \beta \alpha$, since the pattern is symmetrical, $g[z]$ can be computed in $O(n \log n)$ time as follows, by using almost the same technique as above: \[g[z] := \begin{cases} \frac{c[2z]}{2} &\textup{if $T[z]=\beta$}\\ 0 &\textup{if $T[z]\neq \beta$} \end{cases}\] where $c[2z]=\sum_{\substack{x+y=2z \\ (x,y) \in [0,1,2,\dots,n]^2}} \delta_\alpha[x] \delta_\alpha[y]$.
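To make the counting scheme concrete, here is a small Python sketch; for clarity the convolution is computed naively in $O(n^2)$ rather than with the FFT, so only the counting logic (false-positive removal and halving) mirrors the method above. The function name is ours.

```python
def count_3_sub_cadences(T, c):
    # Convolution-based counting of 3-sub-cadences of character c.  The
    # paper computes the same convolution in O(n log n) time via the FFT.
    n = len(T)
    delta = [0] * (n + 1)                 # delta[i] = 1 iff T[i] = c
    for i in range(1, n + 1):
        if T[i - 1] == c:
            delta[i] = 1
    conv = [0] * (2 * n + 1)              # conv[s] = #{(x, y) : x + y = s}
    for x in range(1, n + 1):
        for y in range(1, n + 1):
            conv[x + y] += delta[x] * delta[y]
    total = 0
    for z in range(1, n + 1):
        if delta[z]:
            # f[z] = (conv[2z] - 1) / 2: drop the pair x = y = z, and
            # halve because each pair with x != y is counted in both orders.
            total += (conv[2 * z] - 1) // 2
    return total
```

For instance, $T=\mathtt{aaa}$ has the single $3$-sub-cadence $(1,1)$, while $T=\mathtt{aaaa}$ has two.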
However, if $P=\alpha \beta \gamma$, $c[2z]=\sum_{\substack{x+y=2z \\ (x,y) \in [0,1,2,\dots,n]^2}} \delta_\alpha[x] \delta_\gamma[y]$ would also include occurrences of the equidistant subsequence pattern $\gamma \beta \alpha$. Thus, in order to compute $g[z]$, we further add the condition $x<y$. By using the triangle convolution of~\cite{Funakoshi_STACS2020}, $g[z]$ can be computed in $O(n \log^2 n)$ time as follows: \[g[z] := \begin{cases} c[2z] &\textup{if $T[z]=\beta$}\\ 0 &\textup{if $T[z]\neq \beta$} \end{cases}\] where $c[z] = \sum_{\substack{x+y=z \\ (x,y) \in G \cap \mathbb{Z}^2}} \delta_\alpha[x] \delta_\gamma[y]$ and $G$ is the triangle shown in Figure~\ref{fig:esp3}. \begin{figure} \caption{The triangle $G$.} \label{fig:esp3} \end{figure} If $P=\alpha \alpha \gamma$ or $P=\alpha \gamma \gamma$, we can compute $g[z]$ by using the same technique as for the case of $P=\alpha \beta \gamma$. Therefore, we get the following result: \begin{theorem} \label{theo:esp3} All occurrences of equidistant subsequence pattern of length three can be counted in $O(n \log^2 n)$ time and $O(n)$ space. \end{theorem} \subsection*{Counting Equidistant Abelian Subsequence Patterns of Length Three} Now we show the algorithm for counting all occurrences of an equidistant Abelian subsequence pattern whose length is three. In this subsection we consider the case where all of the three characters are distinct, namely, $P=\alpha \beta \gamma$. The other cases can be computed similarly. In the previous subsection, we showed that if $P=\alpha \beta \gamma$, then $c[2z]=\sum_{\substack{x+y=2z \\ (x,y) \in [0,1,2,\dots,n]^2}} \delta_\alpha[x] \delta_\gamma[y]$ counts the occurrences of both equidistant subsequence patterns $\alpha \beta \gamma$ and $\gamma \beta \alpha$.
Therefore, we can compute all occurrences of the equidistant subsequence patterns $\alpha \beta \gamma$, $\gamma \beta \alpha$, $\beta \gamma \alpha$, $\alpha \gamma \beta$, $\gamma \alpha \beta$, and $\beta \alpha \gamma$ by using the discrete acyclic convolution for $P=\alpha \beta \gamma$, $P=\beta \gamma \alpha$, and $P=\gamma \alpha \beta$. Hence, we obtain the following result: \begin{theorem} \label{theo:easp3} All occurrences of equidistant Abelian subsequence pattern of length three can be counted in $O(n \log n)$ time and $O(n)$ space. \end{theorem} \end{document}
\begin{document} \title{Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness} \begin{abstract} We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier. \fcom{ In addition to the usual cross-entropy loss, we add regularization terms for every intermediate layer to ensure that the latent representations retain useful information for output prediction while reducing redundant information.} We show that the HSIC bottleneck enhances robustness to adversarial attacks both theoretically and experimentally. \fcom{In particular, we prove that the HSIC bottleneck regularizer reduces the sensitivity of the classifier to adversarial examples.} Our experiments on multiple benchmark datasets and architectures demonstrate that incorporating an HSIC bottleneck regularizer attains competitive natural accuracy and improves adversarial robustness, both with and without adversarial examples during training. \fcom{Our code and adversarially robust models are publicly available.\footnote{\texttt{\url{https://github.com/neu-spiral/HBaR}}}} \end{abstract} \section{Introduction} \label{sec:intro} Adversarial attacks \citep{goodfellow2014explaining, madry2017towards, moosavi2016deepfool, carlini2017towards,autoattack} on deep neural networks (DNNs) have received considerable attention recently. Such attacks are intentionally crafted to change prediction outcomes, e.g., by adding visually imperceptible perturbations to the original, natural examples \citep{szegedy2013intriguing}. Adversarial robustness, i.e., the ability of a trained model to maintain its predictive power under such attacks, is an important property for many safety-critical applications \citep{adv_self_driving, adv_health, adv_surveillance}.
The most common approach to constructing adversarially robust models is adversarial training \citep{yan2018deep, zhang2019theoretically, wang2019improving}, i.e., training the model over adversarially constructed samples. Alemi et al.~\citep{alemi2016deep} propose using the so-called \emph{Information Bottleneck} (IB) \citep{tishby2000information, tishby2015deep} to enhance adversarial robustness. Proposed by Tishby and Zaslavsky \citep{tishby2015deep}, the information bottleneck expresses a tradeoff between (a) the mutual information of the input and latent layers and (b) the mutual information between latent layers and the output. Alemi et al. show empirically that using IB as a learning objective for DNNs indeed leads to better adversarial robustness. Intuitively, the IB objective reduces the dependence between the input and latent layers; in turn, this also increases the model's robustness, as it makes latent layers less sensitive to input perturbations. Nevertheless, mutual information is notoriously expensive to compute. The Hilbert-Schmidt independence criterion (HSIC) has been used as a tractable, efficient substitute in a variety of machine learning tasks \citep{knet, wu2020deep, wu2019solving}. Recently, Ma et al.~\citep{ma2020hsic} also exploited this relationship to propose an \emph{HSIC bottleneck} (HB), as a variant of the more classic (mutual-information based) information bottleneck, though not in the context of adversarial robustness. We revisit the HSIC bottleneck, studying its adversarial robustness properties. In contrast to both Alemi et al.~\citep{alemi2016deep} and Ma et al.~\citep{ma2020hsic}, we use the HSIC bottleneck as a regularizer in addition to commonly used losses for DNNs (e.g., cross-entropy). Our proposed approach, HSIC-Bottleneck-as-Regularizer (HBaR\xspace), can be used in conjunction with adversarial examples; even without adversarial training, it is able to improve a classifier's robustness.
It also significantly outperforms previous IB-based methods for robustness, as well as the method proposed by Ma et al. Overall, we make the following contributions: \begin{packeditemize} \item[1.] We apply the HSIC bottleneck as a regularizer for the purpose of adversarial robustness. \item[2.] We provide a theoretical motivation for the constituent terms of the HBaR\xspace penalty, proving that it indeed constrains the output perturbation produced by adversarial attacks. \item[3.] We show that HBaR\xspace can be naturally combined with a broad array of state-of-the-art adversarial training methods, consistently improving their robustness. \item[4.] We empirically show that this phenomenon persists even for weaker methods. In particular, HBaR\xspace can even enhance the adversarial robustness of plain SGD, without access to adversarial examples. \end{packeditemize} The remainder of this paper is structured as follows. We review related work in Sec.~\ref{sec:related_work}. In Sec.~\ref{sec:background}, we discuss the standard setting of adversarial robustness and HSIC. In Sec.~\ref{sec:method}, we provide a theoretical justification that HBaR reduces the sensitivity of the classifier to adversarial examples. Sec.~\ref{sec:experiments} includes our experiments; we conclude in Sec.~\ref{sec:conclusion}. \begin{figure} \caption{Illustration of HBaR\xspace\ for adversarial robustness. A neural network trained with HBaR\xspace gives a more constrained prediction w.r.t. perturbed inputs. Thus, it is less sensitive to adversarial examples.} \label{fig:hbar} \end{figure} \section{Related Work} \label{sec:related_work} \textbf{Adversarial Attacks.} Adversarial attacks often add a constrained perturbation to natural inputs with the goal of maximizing classification loss. Szegedy et al.~\citep{szegedy2013intriguing} learn a perturbation via box-constrained L-BFGS that misleads the classifier but minimally distorts the input. FGSM, proposed by Goodfellow et al.
\citep{goodfellow2014explaining}, is a one step adversarial attack perturbing the input based on the sign of the gradient of the loss. PGD \citep{kurakin2016adversarial, madry2017towards} generates adversarial examples through multi-step projected gradient descent optimization. DeepFool \citep{moosavi2016deepfool} is an iterative attack strategy, which perturbs the input towards the direction of the decision boundaries. CW \citep{carlini2017towards} applies a rectifier function regularizer to generate adversarial examples near the original input. \com{ AutoAttack (AA) \citep{autoattack} is an ensemble of parameter-free attacks, that also deals with common issues like gradient masking \citep{gradient_masking} and fixed step sizes \citep{madry2017towards}.} \textbf{Adversarial Robustness.} A common approach to obtaining robust models is \textit{adversarial training}, i.e., training models over adversarial examples generated via the aforementioned attacks. For example, Madry et al.~\citep{madry2017towards} show that training with adversarial examples generated by PGD achieves good robustness under different attacks. DeepDefense \citep{yan2018deep} penalizes the norm of adversarial perturbations. TRADES \citep{zhang2019theoretically} minimizes the difference between the predictions of natural and adversarial examples to get a smooth decision boundary. MART \citep{wang2019improving} pays more attention to adversarial examples from misclassified natural examples and adds a KL-divergence term between natural and adversarial samples to the cross-entropy loss. \emph{We show that our proposed method HBaR\xspace can be combined with several such state-of-the-art defense methods and boost their performance.} \textbf{Information Bottleneck.} The information bottleneck (IB) \citep{tishby2000information, tishby2015deep} expresses a tradeoff in latent representations between information useful for output prediction and information retained about the input. 
IB has been employed both to explore the training dynamics of deep learning models \citep{shwartz2017opening, saxe2019information} and as a learning objective \citep{alemi2016deep, amjad2018not}. \fcom{Fischer \citep{fischer2020conditional} proposes a conditional entropy bottleneck (CEB) based on IB and observes its robust generalization ability empirically.} Closer to us, Alemi et al.~\citep{alemi2016deep} propose a variational information bottleneck (VIB) \fcom{for supervised learning}. They empirically show that training VIB on natural examples provides good generalization and adversarial robustness. We show that HBaR\xspace can be combined with various adversarial defense methods enhancing their robustness, but also outperforms VIB \citep{alemi2016deep} when given access only to natural samples. Moreover, \emph{we provide theoretical guarantees on how HBaR\xspace bounds the output perturbation induced by adversarial attacks.} \noindent\textbf{Mutual Information vs.~HSIC.} Mutual information is difficult to compute in practice. To address this, Alemi et al.~\citep{alemi2016deep} estimate IB via variational inference. Ma et al.~\citep{ma2020hsic} replaced mutual information by the Hilbert Schmidt Independence Criterion (HSIC) and named this the \emph{HSIC Bottleneck} (HB). Like Ma et al.~\citep{ma2020hsic}, we utilize HSIC to estimate IB. However, our method differs from that of Ma et al.~\citep{ma2020hsic} in several aspects. First, they use HB to train the neural network stage-wise, layer-by-layer, without backpropagation, while we use the HSIC bottleneck as a regularizer in addition to cross-entropy and optimize the parameters jointly by backpropagation. Second, they only evaluate the model performance on classification accuracy, while we demonstrate adversarial robustness. Finally, we show that HBaR\xspace further enhances robustness to adversarial examples both theoretically and experimentally.
Greenfeld et al.~\citep{greenfeld2020robust} use HSIC between the residual of the prediction and the input data as a learning objective for model robustness on covariate distribution shifts. Their focus is on robustness to distribution shifts, whereas our work focuses on robustness to adversarial examples, on which HBaR\xspace outperforms their proposed objective. \section{Background} \label{sec:background} \subsection{Adversarial Robustness} In standard $k$-ary classification, we are given a dataset $\mathcal{D} = \{(x_i, y_i)\}_{i= 1}^{n}$, where $x_i \in \mathbb{R}^{d_X}, y_i \in \{0,1\}^{k}$ are i.i.d.~samples drawn from joint distribution $P_{XY}$. A learner trains a neural network $h_\theta:\mathbb{R}^{d_X}\to\mathbb{R}^{k}$ parameterized by weights $\theta\in \mathbb{R}^{d_{\theta}}$ to predict $Y$ from $X$ by minimizing \begin{align}\mathcal{L}(\theta)= \mathbb{E}_{XY}[\ell(h_{\theta}(X), Y)]\approx\frac{1}{n} \sum_{i=1}^n \ell(h_{\theta}(x_i), y_i) , \label{eq:loss} \end{align} where $\ell:\mathbb{R}^k \times \mathbb{R}^k \to\mathbb{R}$ is a loss function, e.g., cross-entropy. We aim to find a model $h_\theta$ that has high prediction accuracy but is also \emph{adversarially robust}: the model should maintain high prediction accuracy against a constrained adversary, that can perturb input samples in a restricted fashion. 
Formally, prior to submitting a sample $x\in \mathbb{R}^{d_X}$ to the classifier, an adversary may perturb $x$ by an arbitrary $\delta\in \mathcal{S}_r$, where $\mathcal{S}_r\subseteq \mathbb{R}^{d_X}$ is the $\ell_{\infty}$-ball of radius $r$, i.e., \begin{align} \label{def:S_r} \mathcal{S}_r=B(0,r)=\{\delta \in \mathbb{R}^{d_X}: \|\delta\|_{\infty}\leq r\}.\end{align} The \emph{adversarial robustness} \citep{madry2017towards} of a model $h_{\theta}$ is measured by the expected loss attained by such adversarial examples, i.e., \begin{equation} \label{eq:adv_robustness} \begin{split} \loss_r(\theta)= \mathbb{E}_{X Y} \left[\max_{\delta \in \mathcal{S}_r}\ell\left(h_{\theta}(X+\delta), Y\right)\right] \approx \frac{1}{n} \sum_{i=1}^n \max_{\delta\in \mathcal{S}_r}\ell(h_{\theta}(x_i+\delta), y_i).\end{split} \end{equation} An adversarially robust neural network $h_\theta$ can be obtained via \emph{adversarial training}, i.e., by minimizing the adversarial robustness loss in \eqref{eq:adv_robustness} empirically over the training set $\mathcal{D}$. In practice, this amounts to training via stochastic gradient descent (SGD) over adversarial examples $x_i+\delta$ (see, e.g., \citep{madry2017towards}). In each epoch, $\delta$ is generated on a per sample basis via an inner optimization over $\mathcal{S}_r$, e.g., via projected gradient descent (PGD) on $-\mathcal{L}$. \subsection{Hilbert-Schmidt Independence Criterion (HSIC)} The Hilbert-Schmidt Independence Criterion (HSIC) is a statistical dependency measure introduced by Gretton et al.~\citep{gretton2005measuring}. HSIC is the Hilbert-Schmidt norm of the cross-covariance operator between the distributions in Reproducing Kernel Hilbert Space (RKHS). Similar to Mutual Information (MI), HSIC captures non-linear dependencies between random variables. 
$\mathop{\mathrm{HSIC}}(X, Y)$ is defined as: \fcom{ \begin{align} \begin{split} \mathop{\mathrm{HSIC}}(X, Y) &= \mathbb{E}_{X Y X^{\prime} Y^{\prime}}\left[k_{X}\left(X, X^{\prime}\right) k_{Y}\left(Y, Y^{\prime}\right)\right] \\ &+\mathbb{E}_{X X^{\prime}}\left[k_{X}\left(X, X^{\prime}\right)\right] \mathbb{E}_{YY^{\prime}}\left[k_{Y}\left(Y, Y^{\prime}\right)\right] \\ &-2 \mathbb{E}_{X Y}\left[\mathbb{E}_{X^{\prime}}\left[k_{X}\left(X, X^{\prime}\right)\right] \mathbb{E}_{Y^{\prime}}\left[k_{Y}\left(Y, Y^{\prime}\right)\right]\right], \end{split} \end{align} } where $X'$, $Y'$ are independent copies of $X$, $Y$, respectively, and $k_{X}$, $k_{Y}$ are kernels. In practice, we often approximate HSIC empirically. Given $n$ i.i.d.~samples $\{(x_i, y_i)\}_{i=1}^{n}$ drawn from $P_{XY}$, we estimate HSIC via: \begin{equation} \label{eq:empirical_hsic} \widehat{\mathop{\mathrm{HSIC}}}(X, Y)={(n-1)^{-2}} \operatorname{tr}\left(K_{X} H K_{Y} H\right), \end{equation} where $K_X$ and $K_Y$ are kernel matrices with entries $K_{X_{ij}}=k_{X}(x_i, x_j)$ and $K_{Y_{ij}}=k_{Y}(y_i, y_j)$, respectively, and $H = \mathbf{I}- \frac{1}{n} \mathbf{1} \mathbf{1}^\top$ is a centering matrix. \section{Methodology} \label{sec:method} In this section, we present our method, HSIC bottleneck as regularizer (HBaR\xspace), as a means of enhancing a classifier's robustness. The effect of HBaR\xspace on adversarial robustness is illustrated in Figure~\ref{fig:hbar}; the HSIC bottleneck penalty reduces the sensitivity of the classifier to adversarial examples. We provide a theoretical justification for this below, in Theorems~\ref{thm:main_theorem}~and~\ref{thm:new_theorem}, but also validate the efficacy of the HSIC bottleneck extensively with experiments in Section~\ref{sec:experiments}.
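For concreteness, the empirical estimator in Eq.~\eqref{eq:empirical_hsic} can be implemented in a few lines. The sketch below uses numpy and Gaussian kernels; the kernel choice and bandwidth are illustrative assumptions, not mandated by the estimator itself.

```python
import numpy as np

def gaussian_kernel(a, sigma=1.0):
    # Pairwise Gaussian kernel matrix: K[i, j] = exp(-||a_i - a_j||^2 / (2 sigma^2)).
    sq = np.sum(a * a, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (a @ a.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def empirical_hsic(x, y, sigma=1.0):
    # Biased empirical HSIC: (n-1)^{-2} tr(K_X H K_Y H), with H = I - (1/n) 11^T.
    n = x.shape[0]
    kx, ky = gaussian_kernel(x, sigma), gaussian_kernel(y, sigma)
    h = np.eye(n) - np.ones((n, n)) / n
    return np.trace(kx @ h @ ky @ h) / (n - 1) ** 2
```

On i.i.d.~samples, dependent pairs yield a markedly larger value than independent ones, which is the property the regularizer exploits.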
\subsection{HSIC Bottleneck as Regularizer for Robustness} Given a feedforward neural network $h_\theta:\mathbb{R}^{d_X}\to\mathbb{R}^{k}$ parameterized by $\theta$ with $M$ layers, and an input r.v.~$X$, we denote by $Z_j\in\mathbb{R}^{d_{Z_j}}$, $j\in \{1,\ldots,M\}$, the output of the $j$-th layer under input $X$ (i.e., the $j$-th latent representation). We define our HBaR\xspace\ learning objective as follows: \begin{align} \label{eq:obj} \begin{split} \tilde{\loss}(\theta) = \mathcal{L}(\theta) + \lambda_{x} &\sum_{j=1}^{M} \mathop{\mathrm{HSIC}}(X, Z_j) -\lambda_{y} \sum_{j=1}^{M} \mathop{\mathrm{HSIC}}(Y, Z_j), \end{split} \end{align} where $\mathcal{L}$ is the standard loss given by Eq.~\eqref{eq:loss} and $\lambda_{x}, \lambda_{y}\in \mathbb{R}_+$ are balancing hyperparameters. Together, the second and third terms in Eq.~\eqref{eq:obj} form the HSIC bottleneck penalty. As HSIC measures dependence between two random variables, minimizing $\mathop{\mathrm{HSIC}}(X, Z_j)$ corresponds to removing redundant or noisy information contained in $X$. Hence, this term also naturally reduces the influence of an adversarial attack, i.e., a perturbation added to the input data. This is intuitive, but we also provide theoretical justification in the next subsection. Meanwhile, maximizing $\mathop{\mathrm{HSIC}}(Y, Z_j)$ encourages this lack of sensitivity to the input to happen while retaining the discriminative nature of the classifier, captured by its dependence on information useful for predicting the output label $Y$. Note that minimizing $\mathop{\mathrm{HSIC}}(X, Z_j)$ alone would also lead to the loss of useful information, so it is necessary to keep the $\mathop{\mathrm{HSIC}}(Y, Z_j)$ term to make sure $Z_j$ remains sufficiently informative about $Y$. The overall algorithm is described in Alg.~\ref{alg:hsic}. In practice, we perform Stochastic Gradient Descent (SGD) over $\tilde{\loss}$: both $\mathcal{L}$ and HSIC can be evaluated empirically over batches.
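The resulting per-batch objective is simply a weighted combination of the batch loss and the per-layer HSIC estimates; the helper below is a minimal sketch (the function name and the convention of passing precomputed per-layer HSIC values are our own illustration):

```python
def hbar_objective(ce_loss, hsic_x_layers, hsic_y_layers, lam_x, lam_y):
    # tilde L = L + lam_x * sum_j HSIC(X, Z_j) - lam_y * sum_j HSIC(Y, Z_j)
    return ce_loss + lam_x * sum(hsic_x_layers) - lam_y * sum(hsic_y_layers)
```

For instance, `hbar_objective(1.0, [0.2, 0.1], [0.3, 0.4], 1.0, 2.0)` evaluates to $1.0 + 0.3 - 1.4 = -0.1$ (up to floating point).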
For the latter, we use the estimator \eqref{eq:empirical_hsic}, restricted to the current batch. As we have $m$ samples in a mini-batch, the complexity of calculating the empirical HSIC \eqref{eq:empirical_hsic} is $O(m^2d_{\bar{Z}})$ \citep{song2012feature} for a single layer, where $d_{\bar{Z}}=\max_{j}d_{Z_j}$. Thus, the overall complexity for~\eqref{eq:obj} is $O(Mm^2d_{\bar{Z}})$. This computation is highly parallelizable; thus, the additional computation time of HBaR\xspace is small when compared to training a neural network via cross-entropy only. \subsection{Combining HBaR\xspace with Adversarial Examples}\label{sec:combine-hb} HBaR\xspace can also be naturally applied in combination with adversarial training. Given a perturbation magnitude $r>0$ for the adversarial examples, one can optimize the following objective instead of $\tilde{\loss}(\theta)$ in Eq.~\eqref{eq:obj}: \begin{align} \begin{split} \tilde{\loss}_r(\theta) = \loss_r(\theta) + \lambda_{x} \sum_{j=1}^{M} \mathop{\mathrm{HSIC}}(X, Z_j) -\lambda_{y} \sum_{j=1}^{M} \mathop{\mathrm{HSIC}}(Y, Z_j),\end{split} \end{align} where $\loss_r$ is the adversarial loss given by Eq.~\eqref{eq:adv_robustness}. This can be used instead of $\mathcal{L}$ in Alg.~\ref{alg:hsic}. Adversarial examples need to be used in the computation of the gradient of the loss $\loss_r$ in each minibatch; these need to be computed on a per-sample basis, e.g., via PGD over $\mathcal{S}_r$, at additional computational cost. Note that the natural samples $(x_i,y_i)$ in a batch are used to compute the HSIC bottleneck regularizer. The HBaR\xspace penalty can similarly be combined with other adversarial learning methods and/or used with different means for selecting adversarial examples, other than PGD. We illustrate this in Section~\ref{sec:experiments}, where we combine HBaR\xspace with state-of-the-art adversarial learning methods TRADES \citep{zhang2019theoretically} and MART \citep{wang2019improving}.
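As an illustration of the inner maximization, the sketch below performs PGD over $\mathcal{S}_r$: sign-gradient ascent steps followed by projection back onto the $\ell_\infty$ ball. The gradient oracle \texttt{grad\_fn} is a stand-in for differentiating the network loss w.r.t.~the input, and the step-size rule is an illustrative choice, not the only one used in practice.

```python
import numpy as np

def pgd_linf(x, grad_fn, r, steps=10, alpha=None):
    # Maximize the loss over delta in S_r = {delta : ||delta||_inf <= r}
    # via projected sign-gradient ascent.
    alpha = (2.5 * r / steps) if alpha is None else alpha
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta = delta + alpha * np.sign(grad_fn(x + delta))
        delta = np.clip(delta, -r, r)  # projection onto the l_inf ball
    return delta
```

With a toy loss $\ell(v)=\|v\|^2$ (gradient $2v$), the returned perturbation stays inside $\mathcal{S}_r$ and strictly increases the loss.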
\begin{algorithm}[!t] \SetAlgoLined \textbf{Input:} input sample tuples $\{(x_i, y_i)\}_{i=1}^{n}$, kernel functions $k_x$, $k_y$, $k_z$, a neural network $h_\theta$ parameterized by $\theta$, mini-batch size $m$, learning rate $\alpha$.\\ \textbf{Output:} classifier parameters $\theta$\\ \While{$\theta$ has not converged}{ Sample a mini-batch of size $m$ from the input samples. \\ Forward propagation: calculate $z_i$ and $h_\theta(x_i)$.\\ Compute kernel matrices for $X$, $Y$, and $Z_j$ using $k_x$, $k_y$, $k_z$, respectively, inside the mini-batch. \\ Compute $\tilde{\loss}(\theta)$ via \eqref{eq:obj}, where $\mathop{\mathrm{HSIC}}$ is evaluated empirically via \eqref{eq:empirical_hsic}.\\ Backward propagation: $\theta \leftarrow \theta - \alpha \nabla \tilde{\loss}(\theta)$. } \caption{Robust Learning with HBaR} \label{alg:hsic} \end{algorithm} \subsection{HBaR\xspace Robustness Guarantees} \label{sec:HBAR_theorem} We provide here a formal justification for the use of HBaR\xspace to enhance robustness: we prove that the regularization terms $\mathop{\mathrm{HSIC}}(X, Z_j)$, $j=1,\ldots,M$, lead to classifiers that are less sensitive to input perturbations. For simplicity, we focus on the case where $k=1$ (i.e., binary classification). Let $Z\in \mathbb{R}^{d_Z}$ be the latent representation at some arbitrary intermediate layer of the network. That is, $Z=Z_j$ for some $j\in \{1,\ldots,M\}$; we omit the subscript $j$ to further reduce notation clutter. Then $h_\theta = g \circ f$, where $f: \mathbb{R}^{d_X} \rightarrow \mathbb{R}^{d_Z}$ maps the inputs to this intermediate layer, and $g: \mathbb{R}^{d_Z} \rightarrow \mathbb{R}$ maps the intermediate layer to the final layer. Then, $Z = f(X)$ and $g(Z) = h_\theta(X)\in \mathbb{R}$ are the latent and final outputs, respectively. Recall that, in HBaR\xspace, $\mathop{\mathrm{HSIC}}(X,Z)$ is associated with kernels $k_X$, $k_Z$.
We make the following technical assumptions: \begin{assumption}\label{asm:cont} Let $\mathcal{X}\subseteq \mathbb{R}^{d_X}$, $\mathcal{Z}\subseteq \mathbb{R}^{d_Z}$ be the supports of random variables $X$, $Z$, respectively. We assume that both $h_\theta$ and $g$ are continuous and bounded functions in $\mathcal{X}$, $\mathcal{Z}$, respectively, i.e.: \begin{align} \label{eq:fi} h_\theta \in C(\mathcal{X}), g \in C(\mathcal{Z}). \end{align} Moreover, we assume that all functions $h_\theta$ and $g$ we consider are uniformly bounded, i.e., there exist $0 < M_\mathcal{X},M_\mathcal{Z} < \infty $ such that: \begin{align} M_{\mathcal{X}} = \max_{h_\theta \in C(\mathcal{X})} \|h_\theta\|_\infty \quad\text{and} \quad M_{\mathcal{Z}} = \max_{g \in C(\mathcal{Z})} \|g\|_\infty. \label{eq:fbound} \end{align} \end{assumption} The continuity stated in Assumption~\ref{asm:cont} is natural if all activation functions are continuous. Boundedness follows if, e.g., $\mathcal{X}$, $\mathcal{Z}$ are closed and bounded (i.e., compact), or if activation functions are bounded (e.g., softmax, sigmoid, etc.). \begin{assumption}\label{asm:universal}We assume kernels $k_X$, $k_Z$ are universal with respect to functions $h_\theta$ and $g$ that satisfy Assumption \ref{asm:cont}, i.e., if $\mathcal{F}$ and $\mathcal{G}$ are the induced RKHSs for kernels $k_X$ and $k_Z$, respectively, then for any $h_\theta,g$ that satisfy Assumption \ref{asm:cont} and any $\varepsilon>0$ there exist functions $h'\in \mathcal{F}$ and $g'\in \mathcal{G}$ such that $||h_\theta-h'||_{\infty} \leq \varepsilon$ and $||g-g'||_{\infty} \leq \varepsilon$. Moreover, functions in $\mathcal{F}$ and $\mathcal{G}$ are uniformly bounded, i.e., there exist $0<M_{\mathcal{F}}, M_{\mathcal{G}}<\infty $ such that for all $h'\in \mathcal{F}$ and all $g'\in \mathcal{G}$: \begin{align} M_{\mathcal{F}} = \max_{h'\in\mathcal{F}} \|h'\|_{\infty}\quad \text{and} \quad M_{\mathcal{G}} = \max_{g'\in\mathcal{G}} \|g'\|_{\infty}.
\label{eq:kbound} \end{align} \end{assumption} We note that several kernels used in practice are universal, including, e.g., the Gaussian and Laplace kernels. Moreover, given that functions that satisfy Assumption~\ref{asm:cont} are uniformly bounded by \eqref{eq:fbound}, such kernels can indeed remain universal while satisfying \eqref{eq:kbound} via an appropriate rescaling. Our first result shows that $\mathop{\mathrm{HSIC}}(X,Z)$ \emph{at any intermediate layer $Z$} bounds the \emph{output} variance: \begin{theorem} \label{thm:main_theorem} Under Assumptions~\ref{asm:cont} and~\ref{asm:universal}, we have: \begin{equation} \label{eq:main_theorem_v2} \operatorname{HSIC}(X, Z) \geq \frac{M_{\mathcal{F}}M_{\mathcal{G}}}{M_{\mathcal{X}}M_{\mathcal{Z}}} \sup_{\theta}\operatorname{Var}(h_{\theta}(X)). \end{equation} \end{theorem} \com{ The proof of Theorem~\ref{thm:main_theorem} is in Appendix~\ref{sec:supp-proof-thm1} in the supplement. We use a result by Greenfeld and Shalit \citep{greenfeld2020robust} that links $\operatorname{HSIC}(X,Z)$ to the supremum of the covariance of bounded continuous functionals over $\mathcal{X}$ and $\mathcal{Z}$.} Theorem~\ref{thm:main_theorem} indicates that the regularizer $\mathop{\mathrm{HSIC}}(X,Z)$ at any intermediate layer naturally suppresses the variability of the output, i.e., the classifier prediction $h_\theta(X)$. To see this, observe that by Chebyshev's inequality \citep{papoulis1989probability} the distribution of $h_\theta(X)$ concentrates around its mean when $\operatorname{Var}(h_{\theta}(X))$ approaches $0$. As a result, bounding $\mathop{\mathrm{HSIC}}(X,Z)$ inherently also bounds the (global) variability of the classifier (across all parameters $\theta$). 
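The Chebyshev step in this argument is easy to check numerically. In the sketch below, a sigmoid of a Gaussian input stands in for a generic bounded classifier output (an assumption made purely for illustration); the empirical frequency of large deviations indeed stays below $\operatorname{Var}(h_\theta(X))/t^2$.

```python
import numpy as np

# Chebyshev's inequality: P(|h(X) - E[h(X)]| >= t) <= Var(h(X)) / t^2.
rng = np.random.default_rng(0)
h = 1.0 / (1.0 + np.exp(-rng.normal(size=100_000)))  # bounded "classifier output"

t = 0.4
cheb_bound = np.var(h) / t ** 2                    # Chebyshev upper bound
emp_freq = np.mean(np.abs(h - np.mean(h)) >= t)    # observed deviation frequency
```

Driving the variance down, as a small $\operatorname{HSIC}(X,Z)$ forces by Theorem~\ref{thm:main_theorem}, shrinks this bound and hence concentrates the prediction.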
This observation motivates us to also maximize $\operatorname{HSIC} (Y, Z)$ to recover essential information useful for classification: if we want to achieve good adversarial robustness as well as good predictive accuracy, we have to strike a balance between $\operatorname{HSIC} (X, Z)$ and $\operatorname{HSIC} (Y, Z)$. This perfectly aligns with the intuition behind the information bottleneck \citep{tishby2015deep} and the well-known accuracy-robustness trade-off \citep{madry2017towards, zhang2019theoretically, tsipras2018robustness, raghunathan2020understanding}. We also confirm this experimentally: we observe that both additional terms (the standard loss and $\mathop{\mathrm{HSIC}}(Y,Z)$) are necessary for ensuring good prediction performance in practice (see Table~\ref{tab:ablation}). Most importantly, by further assuming that features are normal, we can show that HSIC bounds the power of an arbitrary adversary, as defined in Eq.~\eqref{eq:adv_robustness}: \begin{theorem} \label{thm:new_theorem} \color{black} Assume that $X \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$. Then, under Assumptions~\ref{asm:cont} and~\ref{asm:universal}, we have:\footnote{Recall that for functions $f,g:\mathbb{R}\to\mathbb{R}$ we have $f=o(g)$ if $\lim_{r\to 0}\frac{f(r)}{g(r)}=0$.} \begin{equation} \frac{ r \sqrt{-2 \log o(1)} d_X M_{\mathcal{Z}}}{ \sigma M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) + o(r) \geq \mathbb{E} [|h_\theta(X+\delta) - h_\theta(X)|], \quad \text{for all}~\delta\in \mathcal{S}_r. \end{equation} \end{theorem} The proof of Theorem~\ref{thm:new_theorem} can also be found in Appendix~\ref{sec:supp-proof-thm2} in the supplement. We again use the result by Greenfeld and Shalit \citep{greenfeld2020robust} along with Stein's Lemma~\citep{liu1994siegel}, which relates covariances of Gaussian r.v.s and their functions to expected gradients.
In particular, we apply Stein's Lemma to the bounded functionals considered by Greenfeld and Shalit by using a truncation argument. Theorem~\ref{thm:new_theorem} implies that $\mathop{\mathrm{HSIC}}(X, Z)$ indeed bounds the output perturbation produced by an arbitrary adversary: suppressing HSIC sufficiently can ensure that the adversary cannot alter the output significantly, in expectation. Specifically, if $ \operatorname{HSIC}(X, Z) = o\left(\frac{\sigma M_{\mathcal{F}}M_{\mathcal{G}}}{ \sqrt{-2 \fcom{\log o(1)}} d_X M_{\mathcal{Z}}}\right),$ then $ \lim_{r\to 0} \sup_{\delta \in \mathcal{S}_r} {\mathbb{E} [|h_\theta(X+\delta) - h_\theta(X)|]}/{r} =0,$ i.e., the output is almost constant under small input perturbations. \section{Experiments}\label{sec:experiments} \subsection{Experimental Setting}\label{sec:experient-setup} We experiment with three standard datasets: MNIST \citep{mnist}, CIFAR-10 \citep{cifar}, and CIFAR-100 \citep{cifar}. We use a 4-layer LeNet \citep{madry2017towards} for MNIST, ResNet-18 \citep{he2016deep} and WideResNet-28-10 \citep{wideresnet} for CIFAR-10, and WideResNet-28-10 \citep{wideresnet} for CIFAR-100. We use cross-entropy as the loss $\mathcal{L}(\theta)$. \com{Licensing information for all existing assets can be found in Appendix~\ref{sec:licensing} in the supplement.} \noindent\textbf{Algorithms.} We compare \emph{HBaR\xspace} to the following non-adversarial learning algorithms: \emph{Cross-Entropy (CE)}, \emph{Stage-Wise HSIC Bottleneck (SWHB)} \citep{ma2020hsic}, \emph{XIC} \citep{greenfeld2020robust}, and \emph{Variational Information Bottleneck (VIB)} \citep{alemi2016deep}. We also incorporate HBaR\xspace into several adversarial learning algorithms, as described in Section~\ref{sec:combine-hb}, and compare against the original methods, without the HBaR\xspace penalty.
The adversarial methods we use are: \emph{Projected Gradient Descent (PGD)} \citep{madry2017towards}, \emph{TRADES} \citep{zhang2019theoretically}, and \emph{MART} \citep{wang2019improving}. Further details and parameters can be found in Appendix~\ref{sec:supp-algorithms} in the supplement. \noindent\textbf{Performance Metrics.} For all methods, we evaluate the obtained model $h_\theta$ via the following metrics: (a) \emph{Natural} (i.e., clean test data) accuracy, and adversarial robustness via test accuracy under (b) \emph{FGSM}, the fast gradient sign attack \citep{goodfellow2014explaining}, (c) \emph{PGD}$^m$, the PGD attack with $m$ steps used for the internal PGD optimization \citep{madry2017towards}, (d) \emph{CW}, the CW-loss within the PGD framework \citep{cw}, and (e) \emph{AA}, AutoAttack \citep{autoattack}. All five metrics are reported in percent (\%) accuracy. Following prior literature, we set the step size to 0.01 and radius $r=0.3$ for MNIST, and the step size to $2/255$ and $r=8/255$ for CIFAR-10 and CIFAR-100. All attacks happen during the test phase and have full access to model parameters (i.e., are white-box attacks). All experiments are carried out on a Tesla V100 GPU with 32 GB memory and 5120 cores. \subsection{Results} \begin{table*}[!t] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{3pt} \small \caption{Natural test accuracy (in \%) and adversarial robustness (in \%, under FGSM, PGD, CW, and AA attacks) on MNIST and CIFAR-100 of \textbf{[row i, iii, v] adversarial learning baselines} and \textbf{[row ii, iv, vi] combining HBaR\xspace with each of them}.
Each result is the average of five runs.} \label{tab:adv-competing} \resizebox{1\textwidth}{!}{ \begin{tabular}{||c || c c c c c c||c c c c c c||} \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c||}{MNIST by LeNet} & \multicolumn{6}{c||}{CIFAR-100 by WideResNet-28-10} \\ \cline{2-13} & Natural & FGSM & PGD$^{20}$ & PGD$^{40}$ & CW & AA & Natural & FGSM & PGD$^{10}$ & PGD$^{20}$ & CW & AA \\ \hline \hline PGD & 98.40 & 93.44 & 94.56 & 89.63 & 91.20 & 86.62 & 59.91 & 29.85 & 26.05 & 25.38 & 22.28 & 20.91 \\ HBaR\xspace + PGD & \textbf{98.66} & \textbf{96.02} & \textbf{96.44} & \textbf{94.35} & \textbf{95.10} & \textbf{91.57} & \textbf{63.84} & \textbf{31.59} & \textbf{27.90} & \textbf{27.21} & \textbf{23.23} & \textbf{21.61} \\ \hline \hline TRADES & \textbf{97.64} & 94.73 & 95.05 & 93.27 & 93.05 & 89.66 & 60.29 & 34.19 & 31.32 & 30.96 & 28.20 & 26.91 \\ HBaR\xspace + TRADES & \textbf{97.64} & \textbf{95.23} & \textbf{95.17} & \textbf{93.49} & \textbf{93.47} & \textbf{89.99} & \textbf{60.55} & \textbf{34.57} & \textbf{31.96} & \textbf{31.57} & \textbf{28.72} & \textbf{27.46} \\ \hline \hline MART & \textbf{98.29} & 95.57 & 95.23 & 93.55 & 93.45 & 88.36 & 58.42 & 32.94 & 29.17 & 28.19 & 27.31 & 25.09 \\ HBaR\xspace + MART & 98.23 & \textbf{96.09} & \textbf{96.08} & \textbf{94.64} & \textbf{94.62} & \textbf{89.99} & \textbf{58.93} & \textbf{33.49} & \textbf{30.72} & \textbf{30.16} & \textbf{28.89} & \textbf{25.21} \\ \hline \end{tabular}} \end{table*} \begin{table*}[!t] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{3pt} \small \caption{Natural test accuracy (in \%) and adversarial robustness (in \%, under FGSM, PGD, CW, and AA attacks) on CIFAR-10 by ResNet-18 and WideResNet-28-10 of \textbf{[row i, iii, v] adversarial learning baselines} and \textbf{[row ii, iv, vi] combining HBaR\xspace with each of them}.
Each result is the average of five runs.} \label{tab:adv-competing-cifar10} \resizebox{1\textwidth}{!}{ \begin{tabular}{||c || c c c c c c ||c c c c c c||} \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c||}{CIFAR-10 by ResNet-18} & \multicolumn{6}{c||}{CIFAR-10 by WideResNet-28-10} \\ \cline{2-13} & Natural & FGSM & PGD$^{10}$ & PGD$^{20}$ & CW & AA & Natural & FGSM & PGD$^{10}$ & PGD$^{20}$ & CW & AA \\ \hline \hline PGD & 84.71 & 55.95 & 49.37 & 47.54 & 41.17 & 43.42 & 86.63 & 58.53 & 52.21 & 50.59 & 49.32 & 47.25 \\ HBaR\xspace + PGD & \textbf{85.73} & \textbf{57.13} & \textbf{49.63} & \textbf{48.32} & \textbf{41.80} & \textbf{44.46} & \textbf{87.91} & \textbf{59.69} & \textbf{52.72} & \textbf{51.17} & \textbf{49.52} & \textbf{47.60} \\ \hline \hline TRADES & 84.07 & 58.63 & 53.21 & 52.36 & 50.07 & 49.38 & \textbf{85.66} & 61.55 & 56.62 & 55.67 & 54.02 & 52.71 \\ HBaR\xspace + TRADES & \textbf{84.10} & \textbf{58.97} & \textbf{53.76} & \textbf{52.92} & \textbf{51.00} & \textbf{49.43} & 85.61 & \textbf{62.20} & \textbf{57.30} & \textbf{56.51} & \textbf{54.89} & \textbf{53.53} \\ \hline \hline MART & 82.15 & 59.85 & 54.75 & 53.67 & 50.12 & 47.97 & \textbf{85.94} & 59.39 & 51.30 & 49.46 & 47.94 & 45.48 \\ HBaR\xspace + MART & \textbf{82.44} & \textbf{59.86} & \textbf{54.84} & \textbf{53.89} & \textbf{50.53} & \textbf{48.21} & 85.52 & \textbf{60.54} & \textbf{53.42} & \textbf{51.81} & \textbf{49.32} & \textbf{46.99} \\ \hline \end{tabular}} \end{table*} \noindent\textbf{Combining HBaR\xspace with Adversarial Examples.} We show how HBaR\xspace improves robustness when used as a regularizer, as described in \Cref{sec:combine-hb}, alongside state-of-the-art adversarial learning methods. We run each experiment five times and report the mean natural test accuracy and adversarial robustness of all models on the MNIST, CIFAR-10, and CIFAR-100 datasets with four architectures in Table \ref{tab:adv-competing} and Table \ref{tab:adv-competing-cifar10}.
Combined with all adversarial training baselines, HBaR\xspace \emph{consistently improves adversarial robustness against all types of attacks on all datasets}. \com{The resulting improvements are larger than 2 standard deviations (which range between 0.05 and 0.2) in most cases}; we report the \com{results with standard deviations} in Appendix~\ref{sec:supp-errorbar} in the supplement. Although natural accuracy is generally restricted by the trade-off between robustness and accuracy \citep{zhang2019theoretically}, we observe that incorporating HBaR\xspace comes with an actual improvement over natural accuracy in most cases. \begin{figure*} \caption{CIFAR-10 by ResNet-18: Adversarial robustness of \textbf{IB-based baselines} and \textbf{proposed HBaR\xspace} under (a) PGD attacks, (b) the CW attack with various values of the constant $c$, and (c) AA with different radii. Interestingly\com{, while achieving the highest adversarial robustness in almost all cases}, HBaR\xspace achieves natural accuracy (95.27\%) comparable to CE (95.32\%)\fcom{, which is much higher than VIB (92.35\%), XIC (92.93\%) and SWHB (59.18\%).}} \label{fig:hsic-competing} \end{figure*} \noindent\textbf{Adversarial Robustness Analysis without Adversarial Training.}\label{sec:experiments-hsiccompare} Next, we show that HBaR\xspace can achieve modest robustness even without adversarial examples during training. We evaluate the robustness of HBaR\xspace on CIFAR-10 by ResNet-18 against various adversarial attacks, and compare HBaR\xspace with other information bottleneck penalties without adversarial training in Figure \ref{fig:hsic-competing}. Specifically, we compare the robustness of HBaR\xspace with other IB-based methods under various attacks and hyperparameters. Our proposed HBaR\xspace achieves the best \com{overall} robustness against all three types of attacks while attaining competitive natural test accuracy.
Interestingly, HBaR\xspace achieves natural accuracy (95.27\%) comparable to CE (95.32\%), which is much higher than VIB (92.35\%), XIC (92.93\%) and SWHB (59.18\%). \fcom{We observe that SWHB underperforms HBaR\xspace on CIFAR-10 for both natural accuracy and robustness.} One possible explanation may be that when the model is deep, minimizing HSIC without backpropagation, as in SWHB, does not suffice to transmit the learned information across layers. Compared to SWHB, HBaR\xspace backpropagates over the HSIC objective through each intermediate layer and computes gradients only once in each batch, \fcom{improving accuracy and robustness while reducing computational cost significantly. } \begin{figure*}\label{fig:metrics_versus_epochs} \end{figure*} \begin{figure*} \caption{HSIC plane dynamics versus adversarial robustness. The x-axis plots HSIC between the last intermediate layer $Z_M$ and the input $X$, while the y-axis plots HSIC between $Z_M$ and the output $Y$. The color scale indicates adversarial robustness against PGD attack (PGD$^{40}$ and PGD$^{20}$ on MNIST and CIFAR-10, respectively). The arrows indicate the direction of the dynamics w.r.t.~training epochs. Each marker in the figures represents a different setting: \textbf{dots}, \textbf{stars}, and \textbf{triangles} represent CE-only, HBaR\xspace-high, and HBaR\xspace-low, respectively, consistent with the definitions in Figure \ref{fig:metrics_versus_epochs}.} \label{fig:hsic_plain} \end{figure*} \noindent\textbf{Synergy between HSIC Terms.} Focusing on $Z_M$, the last latent layer, Figure~\ref{fig:metrics_versus_epochs} shows the evolution per epoch of: (a) $\operatorname{HSIC}(X,Z_M)$, (b) $\operatorname{HSIC}(Y,Z_M)$, (c) natural accuracy (in \%), and (d) adversarial robustness (in \%) under PGD attack on MNIST and CIFAR-10. Different lines correspond to CE, HBaR\xspace-high (HBaR\xspace with high weights $\lambda$), and HBaR\xspace-low (HBaR\xspace with small weights $\lambda$).
HBaR\xspace-low parameters are selected so that the values of the loss $\mathcal{L}$ and each of the $\mathop{\mathrm{HSIC}}$ terms are close after the first epoch. Figure~\ref{fig:metrics_versus_epochs}(c) illustrates that all three settings achieve good natural accuracy on both datasets. However, in Figure~\ref{fig:metrics_versus_epochs}(d), only HBaR\xspace-high, which puts sufficient weight on the $\mathop{\mathrm{HSIC}}$ terms, attains relatively high adversarial robustness. In Figure~\ref{fig:metrics_versus_epochs}(a), we see that CE leads to high $\mathop{\mathrm{HSIC}}(X,Z_M)$ for the shallow LeNet, but to a low value for the (much deeper) ResNet-18, even lower than HBaR\xspace-low. Moreover, we also see that the best performer in terms of adversarial robustness, HBaR\xspace-high, lies in between the other two w.r.t.~$\operatorname{HSIC}(X, Z_M)$. Both of these observations indicate the importance of the $\operatorname{HSIC}(Y, Z_M)$ penalty: minimizing $\operatorname{HSIC}(X, Z_M)$ appropriately leads to good adversarial robustness, but coupling learning to labels via the third term is integral to maintaining useful label-related information in latent layers, thus preserving good predictive accuracy as well. Figure~\ref{fig:metrics_versus_epochs}(b) confirms this, as HBaR\xspace-high achieves relatively high $\operatorname{HSIC}(Y, Z_M)$ on both datasets. Figure~\ref{fig:hsic_plain} provides another perspective of the same experiments via the learning dynamics on the HSIC plane. We again observe that the best performer in terms of robustness, HBaR\xspace-high, lies in between the other two methods, crucially attaining a much higher $\operatorname{HSIC}(Y, Z_M)$ than HBaR\xspace-low.
Moreover, for both HBaR\xspace methods, we clearly observe the two distinct optimization phases first observed by \fcom{Shwartz-Ziv and Tishby}~\citep{shwartz2017opening} in the context of the mutual information bottleneck: the \textit{fast empirical risk minimization phase}, where the neural network tries to learn a meaningful representation by increasing $\operatorname{HSIC}(Y, Z_M)$ regardless of information redundancy ($\operatorname{HSIC}(X, Z_M)$ increasing), and the \textit{representation compression phase}, where the neural network turns its focus to compressing the latent representation by minimizing $\operatorname{HSIC}(X, Z_M)$, while maintaining highly label-related information. Interestingly, the HBaR\xspace penalty produces the two-phase behavior even though our networks use ReLU activation functions; Shwartz-Ziv and Tishby~\citep{shwartz2017opening} only observed these two optimization phases on neural networks with tanh activation functions, a phenomenon further confirmed by Saxe et al.~\citep{saxe2019information}. \begin{table*}[!t] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{0.9pt} \caption{Ablation study on HBaR\xspace. Rows [i-iv] indicate the effect of removing each component of the learning objective defined in Eq.~\eqref{eq:obj} (row [v]). We evaluate each objective over $\operatorname{HSIC}(X,Z_M)$, $\operatorname{HSIC}(Y,Z_M)$, natural test accuracy (in \%), and adversarial robustness (in \%) against PGD$^{40}$ and PGD$^{20}$ on MNIST and CIFAR-10, respectively.
We set $\lambda_x$ to 1 and 0.006, and $\lambda_y$ to 50 and 0.05, for MNIST and CIFAR-10, respectively.} \label{tab:ablation} \resizebox{1\textwidth}{!}{ \begin{tabular}{||c | c || c c | c c || c c | c c ||} \hline \multirow{3}{*}{Rows} & \multirow{3}{*}{Objectives} & \multicolumn{4}{c||}{MNIST by LeNet} & \multicolumn{4}{c||}{CIFAR-10 by ResNet-18} \\ \cline{3-10} & & \multicolumn{2}{c|}{HSIC} & \multirow{2}{*}{Natural} & \multirow{2}{*}{PGD$^{40}$} & \multicolumn{2}{c|}{HSIC} & \multirow{2}{*}{Natural} & \multirow{2}{*}{PGD$^{20}$} \\ & &$(X,Z_M)$&$(Y,Z_M)$& & &$(X,Z_M)$&$(Y,Z_M)$& &\\ \hline \hline [i] & $\mathcal{L}(\theta)$ & 45.29 & 8.73 & 99.23 & 0.00 & 3.45 & 4.76 & 95.32 & 8.57 \\ [ii] & $\lambda_{x} \sum_j\operatorname{HSIC}(X, Z_j) -\lambda_{y} \sum_j\operatorname{HSIC}(Y, Z_j)$ & 16.45 & 8.65 & 30.08 & 9.47 & 44.37 & 8.72 & 19.30 & 8.58 \\ [iii] & $\mathcal{L}(\theta)+\lambda_{x} \sum_j\operatorname{HSIC}(X, Z_j)$ & 0.00 & 0.00 & 11.38 & 10.00 & 0.00 & 0.00 & 10.03 & 10.10 \\ [iv] & $\mathcal{L}(\theta) -\lambda_{y} \sum_j\operatorname{HSIC}(Y, Z_j)$ & 56.38 & 9.00 & 99.33 & 0.00 & 43.71 & 8.93 & 95.50 & 1.90 \\ \hline [v] & $\mathcal{L}(\theta) +\lambda_{x} \sum_j\operatorname{HSIC}(X, Z_j) -\lambda_{y} \sum_j\operatorname{HSIC}(Y, Z_j)$ & 15.68 & 8.89 & 98.90 & 8.33 & 6.07 & 8.30 & 95.35 & 34.85\\ \hline \end{tabular} } \end{table*} \noindent\textbf{Ablation Study.} Motivated by the above observations, we turn our attention to how the three terms in the loss function in Eq.~\eqref{eq:obj} affect HBaR\xspace. As illustrated in Table~\ref{tab:ablation}, removing any part leads to a significant degradation of either natural accuracy or robustness.
Specifically, using $\mathcal{L}(\theta)$ only (row [i]) lacks adversarial robustness; removing $\mathcal{L}(\theta)$ (row [ii]) or the penalty on $Y$ (row [iii]) degrades natural accuracy significantly \fcom{(a similar result was also observed in \citep{amjad2018not})}; finally, removing the penalty on $X$ (row [iv]) improves natural accuracy while degrading adversarial robustness. Combining the three terms with properly chosen hyperparameters $\lambda_x$ and $\lambda_y$ (row [v]) achieves both high natural accuracy and adversarial robustness. We provide a comprehensive ablation study on the sensitivity of $\lambda_x$ and $\lambda_y$ and draw conclusions in Appendix~\ref{sec:supp-ablation} in the supplement (Tables~\ref{tab:mnist-weight} and \ref{tab:cifar-weight}). \section{Conclusions} \label{sec:conclusion} We investigate the HSIC bottleneck as regularizer (HBaR) as a means to enhance adversarial robustness. We theoretically prove that HBaR suppresses the sensitivity of the classifier to adversarial examples while retaining its discriminative nature. One limitation of our method is that the robustness gain is modest when training with only natural examples. Moreover, a possible negative societal impact is overconfidence in adversarial robustness: overconfidence in the \emph{adversarially robust} models produced by HBaR\xspace, as well as by other defense methods, may lead to overlooking their potential failure against newly invented attacks; this should be taken into account in safety-critical applications like healthcare~\citep{adv_health} or security~\citep{adv_surveillance}. \com{We extend the discussion on the limitations and potential negative societal impacts of our work in Appendix~\ref{sec:supp-limit}~and~\ref{sec:supp-impact}, respectively, in the supplement.} \begin{thebibliography}{10} \bibitem{alemi2016deep} Alexander~A Alemi, Ian Fischer, Joshua~V Dillon, and Kevin Murphy. \newblock Deep variational information bottleneck. \newblock In {\em ICLR}, 2017.
\bibitem{amjad2018not} Rana~Ali Amjad and Bernhard~C Geiger. \newblock How (not) to train your neural network using the information bottleneck principle. \newblock {\em arXiv preprint arXiv:1802.09766}, 2018. \bibitem{carlini2017towards} Nicholas Carlini and David Wagner. \newblock Towards evaluating the robustness of neural networks. \newblock In {\em 2017 ieee symposium on security and privacy (sp)}, pages 39--57. IEEE, 2017. \bibitem{cw} Nicholas Carlini and David~A. Wagner. \newblock Towards evaluating the robustness of neural networks. \newblock In {\em 2017 {IEEE} Symposium on Security and Privacy, {SP} 2017, San Jose, CA, USA, May 22-26, 2017}, pages 39--57. {IEEE} Computer Society, 2017. \bibitem{adv_self_driving} Alesia Chernikova, Alina Oprea, Cristina Nita-Rotaru, and BaekGyu Kim. \newblock Are self-driving cars secure? evasion attacks against deep neural networks for steering angle prediction. \newblock In {\em 2019 IEEE Security and Privacy Workshops (SPW)}, pages 132--137. IEEE, 2019. \bibitem{autoattack} Francesco Croce and Matthias Hein. \newblock Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. \newblock In {\em Proceedings of the 37th International Conference on Machine Learning, {ICML} 2020, 13-18 July 2020, Virtual Event}, volume 119 of {\em Proceedings of Machine Learning Research}, pages 2206--2216, 2020. \bibitem{adv_health} Samuel~G Finlayson, John~D Bowers, Joichi Ito, Jonathan~L Zittrain, Andrew~L Beam, and Isaac~S Kohane. \newblock Adversarial attacks on medical machine learning. \newblock {\em Science}, 363(6433):1287--1289, 2019. \bibitem{fischer2020conditional} Ian Fischer. \newblock The conditional entropy bottleneck. \newblock {\em Entropy}, 22(9):999, 2020. \bibitem{goodfellow2014explaining} Ian~J Goodfellow, Jonathon Shlens, and Christian Szegedy. \newblock Explaining and harnessing adversarial examples. \newblock In {\em ICLR}, 2015. 
\bibitem{greenfeld2020robust} Daniel Greenfeld and Uri Shalit. \newblock Robust learning with the Hilbert-Schmidt independence criterion. \newblock In {\em International Conference on Machine Learning}, pages 3759--3768. PMLR, 2020. \bibitem{gretton2005measuring} Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Sch{\"o}lkopf. \newblock Measuring statistical dependence with Hilbert-Schmidt norms. \newblock In {\em International Conference on Algorithmic Learning Theory}, pages 63--77. Springer, 2005. \bibitem{he2016deep} Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. \newblock Deep residual learning for image recognition. \newblock In {\em CVPR}, pages 770--778, 2016. \bibitem{cifar} Alex Krizhevsky. \newblock Learning multiple layers of features from tiny images. \newblock Technical report, 2009. \bibitem{kurakin2016adversarial} Alexey Kurakin, Ian Goodfellow, and Samy Bengio. \newblock Adversarial machine learning at scale. \newblock In {\em International Conference on Learning Representations}, 2017. \bibitem{mnist} Y.~Lecun, L.~Bottou, Y.~Bengio, and P.~Haffner. \newblock Gradient-based learning applied to document recognition. \newblock {\em Proceedings of the IEEE}, 86(11):2278--2324, 1998. \bibitem{liu1994siegel} Jun~S Liu. \newblock Siegel's formula via Stein's identities. \newblock {\em Statistics \& Probability Letters}, 21(3):247--251, 1994. \bibitem{ma2020hsic} Wan-Duo~Kurt Ma, JP~Lewis, and W~Bastiaan Kleijn. \newblock The HSIC bottleneck: Deep learning without back-propagation. \newblock In {\em Proceedings of the AAAI Conference on Artificial Intelligence}, volume~34, pages 5085--5092, 2020. \bibitem{madry2017towards} Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. \newblock Towards deep learning models resistant to adversarial attacks. \newblock In {\em ICLR}, 2018. \bibitem{moosavi2016deepfool} Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 
\newblock Deepfool: a simple and accurate method to fool deep neural networks. \newblock In {\em Proceedings of the IEEE conference on computer vision and pattern recognition}, pages 2574--2582, 2016. \bibitem{gradient_masking} Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z~Berkay Celik, and Ananthram Swami. \newblock Practical black-box attacks against machine learning. \newblock In {\em Proceedings of the 2017 ACM on Asia conference on computer and communications security}, pages 506--519, 2017. \bibitem{papoulis1989probability} Athanasios Papoulis and H~Saunders. \newblock Probability, random variables and stochastic processes. \newblock 1989. \bibitem{raghunathan2020understanding} Aditi Raghunathan, Sang~Michael Xie, Fanny Yang, John Duchi, and Percy Liang. \newblock Understanding and mitigating the tradeoff between robustness and accuracy. \newblock {\em arXiv preprint arXiv:2002.10716}, 2020. \bibitem{saxe2019information} Andrew~M Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan~D Tracey, and David~D Cox. \newblock On the information bottleneck theory of deep learning. \newblock {\em Journal of Statistical Mechanics: Theory and Experiment}, 2019(12):124020, 2019. \bibitem{shwartz2017opening} Ravid Shwartz-Ziv and Naftali Tishby. \newblock Opening the black box of deep neural networks via information. \newblock {\em arXiv preprint arXiv:1703.00810}, 2017. \bibitem{song2012feature} Le~Song, Alex Smola, Arthur Gretton, Justin Bedo, and Karsten Borgwardt. \newblock Feature selection via dependence maximization. \newblock {\em Journal of Machine Learning Research}, 13(5), 2012. \bibitem{szegedy2013intriguing} Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. \newblock Intriguing properties of neural networks. \newblock In {\em ICLR}, 2014. \bibitem{adv_surveillance} Simen Thys, Wiebe Van~Ranst, and Toon Goedem{\'e}. 
\newblock Fooling automated surveillance cameras: adversarial patches to attack person detection. \newblock In {\em Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops}, pages 0--0, 2019. \bibitem{tishby2000information} Naftali Tishby, Fernando~C Pereira, and William Bialek. \newblock The information bottleneck method. \newblock {\em arXiv preprint physics/0004057}, 2000. \bibitem{tishby2015deep} Naftali Tishby and Noga Zaslavsky. \newblock Deep learning and the information bottleneck principle. \newblock In {\em 2015 IEEE Information Theory Workshop (ITW)}, pages 1--5. IEEE, 2015. \bibitem{tsipras2018robustness} Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. \newblock Robustness may be at odds with accuracy. \newblock In {\em International Conference on Learning Representations}, 2019. \bibitem{wang2019improving} Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. \newblock Improving adversarial robustness requires revisiting misclassified examples. \newblock In {\em International Conference on Learning Representations}, 2019. \bibitem{knet} Zifeng Wang, Batool Salehi, Andrey Gritsenko, Kaushik Chowdhury, Stratis Ioannidis, and Jennifer Dy. \newblock Open-world class discovery with kernel networks. \newblock In {\em 2020 IEEE International Conference on Data Mining (ICDM)}, pages 631--640, 2020. \bibitem{wu2020deep} Chieh Wu, Zulqarnain Khan, Stratis Ioannidis, and Jennifer~G Dy. \newblock Deep kernel learning for clustering. \newblock In {\em Proceedings of the 2020 SIAM International Conference on Data Mining}, pages 640--648. SIAM, 2020. \bibitem{wu2019solving} Chieh Wu, Jared Miller, Mario Sznaier, and Jennifer Dy. \newblock Solving interpretable kernel dimensionality reduction. \newblock {\em Advances in Neural Information Processing Systems 32 (NIPS 2019)}, 32, 2019. \bibitem{yan2018deep} Ziang Yan, Yiwen Guo, and Changshui Zhang. 
\newblock Deep defense: Training DNNs with improved adversarial robustness. \newblock In {\em NeurIPS}, 2018. \bibitem{wideresnet} Sergey Zagoruyko and Nikos Komodakis. \newblock Wide residual networks. \newblock In {\em Proceedings of the British Machine Vision Conference 2016, {BMVC} 2016, York, UK, September 19-22, 2016}. {BMVA} Press, 2016. \bibitem{zhang2019theoretically} Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El~Ghaoui, and Michael Jordan. \newblock Theoretically principled trade-off between robustness and accuracy. \newblock In {\em International Conference on Machine Learning}, pages 7472--7482. PMLR, 2019. \end{thebibliography} \appendix \onecolumn \section{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{See Section~\ref{sec:conclusion} and Appendix~\ref{sec:supp-limit} in the supplement.} \item Did you discuss any potential negative societal impacts of your work? \answerYes{See Section~\ref{sec:conclusion} and Appendix~\ref{sec:supp-impact} in the supplement.} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{See Section~\ref{sec:HBAR_theorem}, Assumptions~\ref{asm:cont}~and~\ref{asm:universal}.} \item Did you include complete proofs of all theoretical results? \answerYes{See Appendices~\ref{sec:supp-proof-thm1} and~\ref{sec:supp-proof-thm2} in the supplement.} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? 
\answerYes{See Section~\ref{sec:experient-setup}. We provide code and instructions to reproduce the main experimental results for our proposed method in the supplement.} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{See Section~\ref{sec:experient-setup} and Appendix~\ref{sec:supp-algorithms} in the supplement.} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{See Figure~\ref{fig:metrics_versus_epochs} in the main text and Figure~\ref{fig:hsic-adv-variance} in the supplement.} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{See Section~\ref{sec:experient-setup}.} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{See Section~\ref{sec:experient-setup}.} \item Did you mention the license of the assets? \answerYes{See Section~\ref{sec:experient-setup}~and Appendix~\ref{sec:licensing} in the supplement.} \item Did you include any new assets either in the supplemental material or as a URL? \answerYes{We provide code for our proposed method in the supplement}. \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? 
\answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \section{Proof of Theorem \ref{thm:main_theorem}} \label{sec:supp-proof-thm1} \begin{proof} The following lemma holds: \begin{lemma} \label{thm:hsic_cov_1} \citep{gretton2005measuring, greenfeld2020robust} Let $X$, $Z$ be random variables residing in metric spaces $\mathcal{X}$, $\mathcal{Z}$, respectively. Let also $\mathcal{F}, \mathcal{G}$ be the two separable RKHSs on $\mathcal{X}, \mathcal{Z}$ induced by $k_X$ and $k_Z$, respectively. Then, the following inequality holds: \begin{equation} \operatorname{HSIC}(X, Z) \geq \sup _{s \in \mathcal{F}, t \in \mathcal{G}} \operatorname{Cov}[s(X), t(Z)]. \end{equation} \end{lemma} Lemma \ref{thm:hsic_cov_1} shows that HSIC upper-bounds the supremum of the covariance between any pair of functions in the RKHSs $\mathcal{F}$ and $\mathcal{G}$. Assumption~\ref{asm:universal} states that functions in $\mathcal{F}$ and $\mathcal{G}$ are uniformly bounded by $M_{\mathcal{F}}> 0$ and $M_{\mathcal{G}}>0 $, respectively. Let $\mathcal{\tilde{F}}$ and $\mathcal{\tilde{G}}$ be the rescalings of $\mathcal{F}$ and $\mathcal{G}$ into the unit balls of the respective RKHSs, i.e.: \begin{equation}\label{'eq: unit_ball'} \begin{split} \mathcal{\tilde{F}} = \left\{ \frac{h}{M_{\mathcal{F}}} : h \in \mathcal{F}\right\} \quad \text{and} \quad \mathcal{\tilde{G}} = \left\{ \frac{g}{M_{\mathcal{G}}} : g \in \mathcal{G}\right\}. \end{split} \end{equation} The following lemma links the covariance of the functions in the original RKHSs to their normalized versions: \begin{lemma}\label{'le:A_2'} \citep{greenfeld2020robust} Suppose $\mathcal{F}$ and $\mathcal{G}$ are RKHSs over $\mathcal{X}$ and $\mathcal{Z}$, s.t. $\|s\|_{\infty} \leq M_{\mathcal{F}}$ for all $s \in \mathcal{F}$ and $\|t\|_{\infty} \leq M_{\mathcal{G}}$ for all $t \in \mathcal{G}$. 
Then the following holds: \begin{align} \begin{split} \sup _{s \in \mathcal{F}, t \in \mathcal{G}} \operatorname{Cov}[s(X), &t(Z)] = M_{\mathcal{F}}M_{\mathcal{G}} \sup _{s \in \tilde{\mathcal{F}}, t \in \tilde{\mathcal{G}}} \operatorname{Cov}[s(X), t(Z)]. \end{split} \end{align} \end{lemma} For simplicity of notation, we define the following sets containing functions that satisfy Assumption \ref{asm:cont}: \begin{equation}\label{'eq: continuous'} \begin{split} C_{b}(\mathcal{X}) = \left\{h \in C(\mathcal{X}) : ||h||_{\infty} \leq M_{\mathcal{X}}\right\} \quad \text{and} \quad C_{b}(\mathcal{Z}) = \left\{g \in C(\mathcal{Z}) : ||g||_{\infty} \leq M_{\mathcal{Z}}\right\}. \end{split} \end{equation} In Assumption \ref{asm:universal}, we mention that functions in $\mathcal{F}$ and $\mathcal{G}$ may require appropriate rescaling to keep the universality of the corresponding kernels. To make the rescaling explicit, we define the following \textit{rescaled} RKHSs: \begin{equation}\label{'eq: rescaled_rkhs'} \begin{split} \mathcal{\hat{F}} = \left\{\frac{M_{\mathcal{X}}}{M_{\mathcal{F}}} \cdot h : h\in \mathcal{F} \right\} \quad \text{and} \quad \mathcal{\hat{G}} = \left\{\frac{M_{\mathcal{Z}}}{M_{\mathcal{G}}} \cdot g : g\in \mathcal{G} \right\}. \end{split} \end{equation} This rescaling ensures that $||\hat{h}||_{\infty} \leq M_\mathcal{X}$ for every $\hat{h}\in \mathcal{\hat{F}}$. Similarly, $||\hat{g}||_{\infty} \leq M_\mathcal{Z}$ for every $\hat{g}\in \mathcal{\hat{G}}$. We also prove that $\mathcal{F}$ is convex: given $f,g\in \mathcal{F}$, we need to show that, for all $0\leq \alpha \leq 1$, the function $\alpha f + (1-\alpha) g \in \mathcal{F}$. 
As a linear combination of RKHS functions is in the RKHS, we just need to check that $|| \alpha f + (1-\alpha) g||_{\infty} \leq M_{\mathcal{F}}$; indeed: \begin{equation} || \alpha f + (1-\alpha) g||_{\infty} \leq \alpha ||f||_{\infty} + (1-\alpha) ||g||_{\infty} \leq \alpha M_{\mathcal{F}} + (1-\alpha) M_{\mathcal{F}} = M_{\mathcal{F}}. \end{equation} We thus conclude that the bounded RKHS $\mathcal{F}$ is indeed convex. Hence any rescaling of a function, as long as its norm is at most $M_{\mathcal{F}}$, remains inside $\mathcal{F}$. Indeed, the following lemma holds: \begin{lemma} \label{'lem: hat_universal'} If $\mathcal{F},\mathcal{G}$ are universal with respect to $C_{b}(\mathcal{X}),C_{b}(\mathcal{Z})$, then: \begin{equation} \label{'eq:restricted universal'} \mathcal{\hat{F}} = C_{b}(\mathcal{X})\quad\text{and}\quad \mathcal{\hat{G}} = C_{b}(\mathcal{Z}). \end{equation} \end{lemma} \begin{proof} We prove this by first showing $C_{b}(\mathcal{X}) \subseteq \mathcal{\hat{F}}$ and then $\mathcal{\hat{F}} \subseteq C_{b}(\mathcal{X})$, which leads to equality of the sets. \begin{itemize} \item $C_{b}(\mathcal{X}) \subseteq \mathcal{\hat{F}}$: For all $h \in C_{b}(\mathcal{X})$, we show $h\in\mathcal{\hat{F}}$. Based on the definition of $C_{b}(\mathcal{X})$ in~\eqref{'eq: continuous'}, we know $\|h\|_{\infty} \leq M_{\mathcal{X}}$. From the universality stated in Assumption~\ref{asm:universal}, $h \in \mathcal{F}$. Let $g = \frac{M_{\mathcal{F}}}{M_{\mathcal{X}}}h$. Then $\|g\|_{\infty} = \|\frac{M_{\mathcal{F}}}{M_{\mathcal{X}}}h\|_{\infty} = \frac{M_{\mathcal{F}}}{M_{\mathcal{X}}}||h||_{\infty} \leq M_{\mathcal{F}}$. Based on the convexity of $\mathcal{F}$, $g \in \mathcal{F}$. We rescale every function in $\mathcal{F}$ by $\frac{M_{\mathcal{X}}}{M_{\mathcal{F}}}$ to form $\mathcal{\hat{F}}$, so $\frac{M_{\mathcal{X}}}{M_{\mathcal{F}}} g = \frac{M_{\mathcal{X}}}{M_{\mathcal{F}}} \frac{M_{\mathcal{F}}}{M_{\mathcal{X}}} h = h \in \mathcal{\hat{F}}$. 
\item $\mathcal{\hat{F}} \subseteq C_{b}(\mathcal{X})$: On the other hand, for all $h \in \mathcal{\hat{F}}$, $h$ is continuous and bounded by $M_{\mathcal{X}}$. So based on the definition of $C_{b}(\mathcal{X})$ in~\eqref{'eq: continuous'}, $h\in C_{b}(\mathcal{X})$. Thus, $ \mathcal{\hat{F}} \subseteq C_{b}(\mathcal{X})$. \end{itemize} Having shown both inclusions, we conclude that $\mathcal{\hat{F}}= C_{b}(\mathcal{X})$. One can prove $\mathcal{\hat{G}}= C_{b}(\mathcal{Z})$ similarly. \end{proof} Applying the universality of kernels from Assumption~\ref{asm:universal}, we can prove the following lemma: \begin{lemma}\label{'le:equality in sup'} Let $X$, $Z$ be random variables residing in metric spaces $\mathcal{X}$, $\mathcal{Z}$ with separable RKHSs $\mathcal{F}$, $\mathcal{G}$ induced by kernel functions $k_X$ and $k_Z$, respectively, for which Assumption~\ref{asm:universal} holds. Let $\mathcal{\hat{F}}$ and $\mathcal{\hat{G}}$ be the rescaled RKHSs defined in~\eqref{'eq: rescaled_rkhs'}. Then: \begin{equation} \frac{M_{\mathcal{X}}M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}} \sup _{s \in \mathcal{F}, t \in \mathcal{G}} \operatorname{Cov}[s(X), t(Z)]= \sup _{s \in \mathcal{\hat{F}}, t \in \mathcal{\hat{G}}} \operatorname{Cov}[s(X), t(Z)] = \sup _{s \in C_{b}(\mathcal{X}), t \in C_{b}(\mathcal{Z})} \operatorname{Cov}[s(X), t(Z)], \end{equation} where $C_{b}(\mathcal{X}), C_{b}(\mathcal{Z})$ are defined in \eqref{'eq: continuous'}. \end{lemma} \begin{proof} The right equality of Lemma \ref{'le:equality in sup'} follows immediately from Lemma~\ref{'lem: hat_universal'}: \begin{equation} \sup _{s \in \mathcal{\hat{F}}, t \in \mathcal{\hat{G}}} \operatorname{Cov}[s(X), t(Z)] = \sup _{s \in C_{b}(\mathcal{X}), t \in C_{b}(\mathcal{Z})} \operatorname{Cov}[s(X), t(Z)]. 
\end{equation} Applying Lemma \ref{'le:A_2'} on $\mathcal{F}, \mathcal{G}, \mathcal{\tilde{F}},\mathcal{\tilde{G}}$, we have: \begin{equation}\label{'eq:lemma3_result_1'} \sup _{s \in \mathcal{F}, t \in \mathcal{G}} \operatorname{Cov}[s(X), t(Z)] = M_{\mathcal{F}}M_{\mathcal{G}} \sup _{s \in \tilde{\mathcal{F}}, t \in \tilde{\mathcal{G}}} \operatorname{Cov}[s(X), t(Z)]. \end{equation} Note that from \eqref{'eq: rescaled_rkhs'} and \eqref{'eq: unit_ball'}, we have that the corresponding normalized space for $\mathcal{\hat{F}}$ is: \begin{equation} \label{eq:21} \begin{split} \left\{ \frac{h}{M_{\mathcal{X}}} : h \in \mathcal{\hat{F}}\right\} = \left\{ \frac{M_{\mathcal{X}}}{M_{\mathcal{F}}}\frac{h}{M_{\mathcal{X}}} : h \in \mathcal{{F}}\right\} = \left\{ \frac{h}{M_{\mathcal{F}}} : h \in \mathcal{F}\right\} = \mathcal{\tilde{F}}. \end{split} \end{equation} Similarly, the normalized space for $\mathcal{\hat{G}}$ is: \begin{equation} \label{eq:22} \begin{split} \left\{ \frac{g}{M_{\mathcal{Z}}} : g \in \mathcal{\hat{G}}\right\} = \left\{ \frac{g}{M_{\mathcal{G}}} : g \in \mathcal{G}\right\}= \mathcal{\tilde{G}}. \end{split} \end{equation} Equation~\eqref{eq:21} implies that the normalized space induced from $\mathcal{\hat{F}}$ coincides with the normalized space induced from $\mathcal{{F}}$. Similarly, Equation \eqref{eq:22} implies the normalized spaces for $\mathcal{G}$ and $\mathcal{\hat{G}}$ also coincide. Moreover, for all $\hat{h}\in \mathcal{\hat{F}}$, $||\hat{h}||_{\infty} \leq M_\mathcal{X}$ and for all $\hat{g}\in \mathcal{\hat{G}}$, $||\hat{g}||_{\infty} \leq M_\mathcal{Z}$. Hence, applying Lemma~\ref{'le:A_2'} on $\mathcal{\hat{F}}, \mathcal{\hat{G}}, \mathcal{\tilde{F}},\mathcal{\tilde{G}}$, we have: \begin{equation}\label{'eq:lemma3_result_2'} \sup _{s \in \mathcal{\hat{F}}, t \in \mathcal{\hat{G}}} \operatorname{Cov}[s(X), t(Z)] = M_{\mathcal{X}}M_{\mathcal{Z}} \sup _{s \in \tilde{\mathcal{F}}, t \in \tilde{\mathcal{G}}} \operatorname{Cov}[s(X), t(Z)]. 
\end{equation} Dividing Equation~\eqref{'eq:lemma3_result_1'} by Equation~\eqref{'eq:lemma3_result_2'}, we prove the left part of Lemma \ref{'le:equality in sup'}: \begin{equation} \frac{M_{\mathcal{X}}M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}} \sup _{s \in \mathcal{F}, t \in \mathcal{G}} \operatorname{Cov}[s(X), t(Z)]= \sup _{s \in \mathcal{\hat{F}}, t \in \mathcal{\hat{G}}} \operatorname{Cov}[s(X), t(Z)]. \end{equation} \end{proof} By combining Lemma \ref{thm:hsic_cov_1} and Lemma \ref{'le:equality in sup'}, we have the following result: \begin{equation} \label{eq:last_second} \begin{split} \frac{M_{\mathcal{X}}M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) \geq \sup _{s \in C_{b}(\mathcal{X}), t \in C_{b}(\mathcal{Z})} \operatorname{Cov}[s(X), t(Z)]. \end{split} \end{equation} Recall that $h_{\theta}$ is a neural network from $\mathcal{X}$ to $\mathcal{Y}$ that can be written as the composition $g\circ f$, where $f:\mathcal{X}\to \mathcal{Z}$ and $g: \mathcal{Z}\to \mathcal{Y}$. Moreover, $h_{\theta} \in C_{b}(\mathcal{X})$ and $g \in C_{b}(\mathcal{Z})$. Using the fact that the supremum over a subset is at most the supremum over the whole set, we conclude that: \begin{equation} \begin{split} \label{eq:hsic_var} \frac{M_{\mathcal{X}}M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) &\geq \sup_{\theta} \operatorname{Cov}[h_{\theta}(X), g(Z)] \\ &= \sup_{\theta} \operatorname{Cov}[h_{\theta}(X) , g\circ f (X)] \\ &= \sup_{\theta}\operatorname{Var}[h_{\theta}(X)]. \end{split} \end{equation} \end{proof} \section{Proof of Theorem 2} \label{sec:supp-proof-thm2} \begin{proof} Let $t_i:\mathbb{R}^{d_X}\to\mathbb{R}$, $i = 1, 2, ..., d_X$ be the following truncation functions: \begin{align} \label{eq:truncation} \begin{split} t_i(X) = \begin{cases} -R, & \text{if}~X_i < -R, \\ X_i, & \text{if}~-R \leq X_i \leq R, \\ R, &\text{if}~ X_i > R. 
\end{cases} \end{split} \end{align} where $0 < R < \infty$ and $X_i$ is the $i$-th dimension of $X$. The functions $t_i$ are continuous and bounded on $\mathcal{X}$, and \begin{equation} t_i \in C_{b'}(\mathcal{X}), \quad \text{where}\quad C_{b'}(\mathcal{X})= \{t \in C(\mathcal{X}): \|t\|_\infty \leq R\}. \end{equation} Moreover, $g$ satisfies Assumptions~\ref{asm:cont} and~\ref{asm:universal}. As in the proof of Theorem~\ref{thm:main_theorem}, by combining Lemma \ref{thm:hsic_cov_1} and Lemma \ref{'le:equality in sup'}, we have that: \begin{equation} \label{eq:main_for_thm2} \begin{split} \frac{R M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) &\geq \sup_{t \in C_{b'}(\mathcal{X}),\ g \in C_{b}(\mathcal{Z})} \operatorname{Cov}[t (X), g(Z)] \\ &\geq \operatorname{Cov}[t_i (X), h_{\theta}(X)], \quad i = 1, \ldots, d_X. \end{split} \end{equation} Moreover, the following lemma holds: \begin{lemma} \label{lemma: cov_diff} Let $X \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$ and let $t_i(X)$ be defined by~\eqref{eq:truncation}. For all $h_\theta$ that satisfy Assumption~\ref{asm:cont}, we have: \begin{align} \operatorname{Cov}[X_i, h_{\theta}(X)] - \operatorname{Cov}[t_i(X), h_{\theta}(X)] \leq \frac{2 M_\mathcal{X} \sigma}{\sqrt{2 \pi}} \exp(-\frac{R^2}{2\sigma^2}), \quad \text{for all}~i = 1, 2, \ldots, d_X. 
\end{align} \end{lemma} \begin{proof} \begin{subequations} \begin{align} \label{cov_diff_1} \text{LHS} &= \int_{-\infty}^{\infty} (x_i-t_i(x)) h_\theta (x) \frac{1}{\sqrt{2 \pi \sigma^2}} \exp(-\frac{x_i^2}{2\sigma^2}) dx_i \\ \label{cov_diff_2} &=\frac{1}{\sqrt{2 \pi \sigma^2}} \left( \int_{-\infty}^{-R} (x_i+R) h_\theta (x) \exp(-\frac{x_i^2}{2\sigma^2}) dx_i + \int_{R}^{\infty} (x_i-R) h_\theta (x) \exp(-\frac{x_i^2}{2\sigma^2}) dx_i \right) \\ \label{cov_diff_3} &\leq \frac{2 M_\mathcal{X}}{\sqrt{2 \pi \sigma^2}} \int_{R}^{\infty} (x_i-R) \exp(-\frac{x_i^2}{2\sigma^2}) dx_i \\ \label{cov_diff_4} &= \frac{2 M_\mathcal{X}}{\sqrt{2 \pi \sigma^2}} \int_{R}^{\infty} x_i \exp(-\frac{x_i^2}{2\sigma^2}) dx_i - \frac{2 M_\mathcal{X} R}{\sqrt{2 \pi \sigma^2}} \int_{R}^{\infty} \exp(-\frac{x_i^2}{2\sigma^2}) dx_i \\ \label{cov_diff_5} &\leq \frac{2 M_\mathcal{X}}{\sqrt{2 \pi \sigma^2}} \int_{R}^{\infty} x_i \exp(-\frac{x_i^2}{2\sigma^2}) dx_i \\ \label{cov_diff_6} &= \frac{2 M_\mathcal{X} \sigma}{\sqrt{2 \pi}} \exp(-\frac{R^2}{2\sigma^2}), \end{align} where \eqref{cov_diff_1}, \eqref{cov_diff_2}, \eqref{cov_diff_4}, \eqref{cov_diff_6} are direct results from definition or simple calculation, \eqref{cov_diff_3} comes from the fact that $M_\mathcal{X} = \max \| h_\theta(X) \|_\infty $ and the symmetry of two integrals, and \eqref{cov_diff_5} is due to the non-negativity of the probability density function. \end{subequations} \end{proof} Combining Lemma~\ref{lemma: cov_diff} with~\eqref{eq:main_for_thm2}, we have the following result: \begin{equation} \label{eq:thm2_intermediate} \frac{R M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) + \frac{2 M_\mathcal{X} \sigma}{\sqrt{2 \pi}} \exp(-\frac{R^2}{2\sigma^2}) \geq \operatorname{Cov}[ X_i, h_{\theta}(X)], \quad \text{for all}~i = 1, \ldots, d_X. 
\end{equation} We can further bridge HSIC to adversarial robustness directly by taking advantage of the following lemma: \begin{lemma}[Stein's Identity \citep{liu1994siegel}] \label{thm:stein} Let $X = (X_1, X_2, \ldots X_{d_X})$ be multivariate normally distributed with arbitrary mean vector $\mu$ and covariance matrix $\Sigma$. For any function $h(x_1, \ldots, x_{d_X})$ such that $\frac{\partial h}{\partial x_i}$ exists almost everywhere and $\mathbb{E} \left|\frac{\partial h}{\partial x_i}\right| < \infty$, $i=1, \ldots, d_X$, we write $\nabla h(X) = (\frac{\partial h(X)}{\partial x_1}, \ldots, \frac{\partial h(X)}{\partial x_{d_X}})^\top$. Then the following identity holds: \begin{equation} \operatorname{Cov}[X, h(X)]=\Sigma \mathbb{E}[\nabla h(X)]. \end{equation} Specifically, \begin{equation} \operatorname{Cov}\left[X_{1}, h\left(X_{1}, \ldots, X_{d_X}\right)\right]=\sum_{i=1}^{d_X} \operatorname{Cov}\left(X_{1}, X_{i}\right) \mathbb{E}\left[\frac{\partial}{\partial x_{i}} h\left(X_{1}, \ldots, X_{d_X}\right)\right]. \end{equation} \end{lemma} Given that $X \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$, Lemma~\ref{thm:stein} implies: \begin{equation} \label{eq:stein} \operatorname{Cov}\left[X_{i}, h_\theta\left(X\right)\right]= \sigma^2 \mathbb{E}\left[\frac{\partial}{\partial x_{i}} h_\theta\left(X\right)\right]. \end{equation} Combining~\eqref{eq:thm2_intermediate} and~\eqref{eq:stein}, we have: \begin{align} \frac{R M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) + \frac{2 M_\mathcal{X} \sigma}{\sqrt{2 \pi}} \exp(-\frac{R^2}{2\sigma^2}) &\geq \sigma^2 \mathbb{E}\left[\frac{\partial}{\partial x_{i}} h_\theta\left(X\right)\right], \quad \text{for all}~i = 1, \ldots, d_X. \end{align} Note that a similar derivation can be repeated exactly by replacing $h_\theta (X)$ with $-h_\theta (X)$. 
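As a quick numerical sanity check of Stein's identity in the one-dimensional form of~\eqref{eq:stein}, the following Monte Carlo sketch (using $\tanh$ as a hypothetical stand-in for a smooth, bounded $h_\theta$) compares the two sides of the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
# X ~ N(0, sigma^2), one-dimensional for simplicity
x = rng.normal(0.0, sigma, size=2_000_000)

# tanh is a hypothetical stand-in for the smooth, bounded h_theta
h = np.tanh(x)
h_prime = 1.0 - h ** 2  # derivative of tanh

cov = np.mean(x * h) - np.mean(x) * np.mean(h)  # Cov[X, h(X)]
stein = sigma ** 2 * np.mean(h_prime)           # sigma^2 * E[h'(X)]

# The two estimates agree up to Monte Carlo sampling error
print(f"cov={cov:.4f}  stein={stein:.4f}")
```

Both estimates converge to the same value as the sample size grows, matching $\operatorname{Cov}[X_i, h_\theta(X)] = \sigma^2\,\mathbb{E}[\partial h_\theta(X)/\partial x_i]$.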
Thus, for every $i = 1, 2, \ldots, d_X$, we have: \begin{align} \label{eq:abs_partial_bound} \frac{R M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) + \frac{2 M_\mathcal{X} \sigma}{\sqrt{2 \pi}} \exp(-\frac{R^2}{2\sigma^2}) &\geq \sigma^2 \mathbb{E}\left[ \left| \frac{\partial}{\partial x_{i}} h_{\theta}\left(X\right) \right |\right]. \end{align} Summing up both sides of \eqref{eq:abs_partial_bound} for $i = 1, 2, \ldots, d_X$, we have: \begin{align} \label{eq:total_abs_partial_bound} \frac{d_X R M_{\mathcal{Z}}}{ M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) + \frac{2 d_X M_\mathcal{X} \sigma}{\sqrt{2 \pi}} \exp(-\frac{R^2}{2\sigma^2}) &\geq \sigma^2 \mathbb{E}\left[ \sum_{i=1}^{d_X} \left| \frac{\partial}{\partial x_{i}} h_{\theta}\left(X\right) \right |\right]. \end{align} On the other hand, for $\delta \in \mathcal{S}_r$, by Taylor's theorem: \begin{subequations} \label{eq:adv_difference_bound} \begin{align} \label{eq:taylor} \mathbb{E} [|h_\theta(X+\delta) - h_\theta(X)|] &\leq \mathbb{E} [|\delta^\top \nabla_{X} h_\theta (X)|] + o(r) \\ \label{eq:holder} &\leq \mathbb{E} \left[ \| \delta \|_{\infty} \|\nabla_{X} h_\theta (X)\|_1 \right] + o(r) \\ \label{eq:triangle} &\leq r \mathbb{E} \left[ \sum_{i=1}^{d_X} \left| \frac{\partial}{\partial x_{i}} h_{\theta}\left(X\right) \right | \right] + o(r), \end{align} \end{subequations} where~\eqref{eq:holder} is implied by H\"older's inequality, and~\eqref{eq:triangle} follows since $\| \delta \|_{\infty} \leq r$ for $\delta \in \mathcal{S}_r$ and $\|\nabla_{X} h_\theta (X)\|_1 = \sum_{i=1}^{d_X} \left| \frac{\partial}{\partial x_{i}} h_{\theta}(X) \right|$. Combining \eqref{eq:total_abs_partial_bound} and \eqref{eq:adv_difference_bound}, we have: \begin{equation} \frac{ r d_X R M_{\mathcal{Z}}}{ \sigma^2 M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) + \frac{2 r d_X M_\mathcal{X}}{\sqrt{2 \pi} \sigma} \exp(-\frac{R^2}{2\sigma^2}) + o(r) \geq \mathbb{E} [|h_\theta(X+\delta) - h_\theta(X)|]. 
\end{equation} Let $R = \sigma \sqrt{-2\log o(1)}$, where $o(1)$ stands for an arbitrary function $w:\mathbb{R}\to\mathbb{R}$ s.t. \begin{align}\lim_{r\to 0} w(r) = 0.\end{align} Then, we have $\frac{2 r d_X M_\mathcal{X}}{\sqrt{2 \pi} \sigma} \exp(-\frac{R^2}{2\sigma^2}) = o(r)$, because: \begin{align} \begin{split} \lim_{r \to 0} \frac{2 r d_X M_\mathcal{X}}{\sqrt{2 \pi} \sigma} \exp(-\frac{R^2}{2\sigma^2}) / r &= \lim_{r \to 0} \frac{2 d_X M_\mathcal{X}}{\sqrt{2 \pi} \sigma} \exp( \log o(1)) \\ &= \lim_{r \to 0} \frac{2 d_X M_\mathcal{X}}{\sqrt{2 \pi} \sigma} o(1) \\ &= 0. \end{split} \end{align} Thus, we conclude that: \begin{align} \frac{ r \sqrt{-2 \log o(1)} d_X M_{\mathcal{Z}}}{ \sigma M_{\mathcal{F}}M_{\mathcal{G}}}\operatorname{HSIC}(X, Z) + o(r) \geq \mathbb{E} [|h_\theta(X+\delta) - h_\theta(X)|]. \end{align} \end{proof} \section{Licensing of Existing Assets} \label{sec:licensing} We provide the licensing information of each existing asset below: \textbf{Datasets.} \begin{packeditemize} \item \emph{MNIST}~\citep{mnist} is licensed under the Creative Commons Attribution-Share Alike 3.0 license. \item \emph{CIFAR-10}~and~\emph{CIFAR-100}~\citep{cifar} are licensed under the MIT license. \end{packeditemize} \textbf{Models.} \begin{packeditemize} \item The implementations of \emph{LeNet} \citep{madry2017towards} and \emph{ResNet-18} \citep{he2016deep} used in our paper are licensed under the BSD 3-Clause License. \item The implementation of \emph{WideResNet-28-10} \citep{wideresnet} is licensed under the MIT license. \end{packeditemize} \textbf{Algorithms.} \begin{packeditemize} \item The implementations of \emph{SWHB} \citep{ma2020hsic}, \emph{PGD} \citep{madry2017towards}, and \emph{TRADES} \citep{zhang2019theoretically} are licensed under the MIT license. \item The implementation of \emph{VIB} \citep{alemi2016deep} is licensed under the Apache License 2.0. 
\item No license is provided for \emph{MART}~\citep{wang2019improving} or \emph{XIC}~\citep{greenfeld2020robust}. \end{packeditemize} \textbf{Adversarial Attacks.} The implementations of \emph{FGSM}~\citep{goodfellow2014explaining}, \emph{PGD}~\citep{madry2017towards}, \emph{CW}~\citep{cw} and \emph{AutoAttack}~\citep{autoattack} are all licensed under the MIT license. \section{Algorithm Details and Hyperparameter Tuning} \label{sec:supp-algorithms} Non-adversarial learning, information-bottleneck-based methods: \begin{packeditemize} \item \emph{Cross-Entropy (CE)}, which includes only the loss $\mathcal{L}$. \item \emph{Stage-Wise HSIC Bottleneck (SWHB)} \citep{ma2020hsic}: This is the original HSIC bottleneck. It does not perform full backpropagation over the HSIC objective: early layers are fixed stage-wise, and gradients are computed only for the current layer. \item \emph{XIC} \citep{greenfeld2020robust}: To enhance generalization under distributional shifts, this penalty involves inputs and residuals (i.e., $\mathop{\mathrm{HSIC}}(X, Y-h(X))$). \item \emph{Variational Information Bottleneck (VIB)} \citep{alemi2016deep}: This is a variational autoencoder that includes a mutual information bottleneck penalty. \end{packeditemize} Adversarial learning methods: \begin{packeditemize} \item \emph{Projected Gradient Descent (PGD)} \citep{madry2017towards}: This optimizes $\loss_r$, given by \eqref{eq:adv_robustness}, via projected gradient ascent over $\mathcal{S}_r$. \item \emph{TRADES} \citep{zhang2019theoretically}: This uses a regularization term that minimizes the difference between the predictions on natural and adversarial examples to obtain a smooth decision boundary. \item \emph{MART} \citep{wang2019improving}: Compared to TRADES, MART pays more attention to adversarial examples generated from misclassified natural examples and adds a KL-divergence term between natural and adversarial examples to the binary cross-entropy loss. 
\end{packeditemize} We use the code provided by the authors, including the recommended hyperparameter settings and tuning strategies. In both SWHB and HBaR\xspace, we apply Gaussian kernels for $X$ and $Z$ and a linear kernel for $Y$. For the Gaussian kernels, we set $\sigma=5\sqrt{d}$, where $d$ is the dimension of the corresponding random variable. We report all tuning parameters in Table~\ref{tab:adv-competing-params}. In particular, we report the parameter settings on the 4-layer LeNet \citep{madry2017towards} for MNIST, ResNet-18 \citep{he2016deep} and WideResNet-28-10 \citep{wideresnet} for CIFAR-10, and WideResNet-28-10 \citep{wideresnet} for CIFAR-100, both with the basic HBaR\xspace and when combining HBaR\xspace with state-of-the-art adversarial learning methods (i.e., PGD, TRADES, MART). For HBaR\xspace, to make a fair comparison with SWHB \citep{ma2020hsic}, we build our code, along with the implementation of PGD and PGD+HBaR\xspace, upon their framework. When combining HBaR\xspace with other state-of-the-art adversarial learning methods (i.e., TRADES and MART), we add our HBaR\xspace implementation to the MART framework and use the recommended hyperparameter settings and tuning strategies from MART and TRADES. To make a fair comparison, we use the same network architectures among all methods with the same random weight initialization and report last-epoch results. \begin{table}[hbt!] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{5pt} \small \caption{Parameter summary for MNIST, CIFAR-10, and CIFAR-100. $\lambda_x$ and $\lambda_y$ are the balancing hyperparameters for HBaR\xspace; $\lambda$ is the balancing hyperparameter for TRADES and MART.} \label{tab:adv-competing-params} \resizebox{1\textwidth}{!}{ \begin{tabular}{||c || c || c | c c || c c | c c ||} \hline Dataset & param. 
& HBaR & PGD & PGD+HBaR & TRADES & TRADES+HBaR & MART & MART+HBaR \\ \hline \hline \multirow{8}{*}{MNIST} & $\lambda_x$ & 1 & - & 0.003 & - & 0.001 & - & 0.001 \\ & $\lambda_y$ & 50 & - & 0.001 & - & 0.005 & - & 0.005 \\ \cline{3-9} & $\lambda$& \multicolumn{3}{c||}{-} & 5 & 5 & 5 & 5 \\ \cline{3-9} & batch size & \multicolumn{3}{c||}{256} & \multicolumn{4}{c||}{256} \\ & optimizer & \multicolumn{3}{c||}{adam} & \multicolumn{4}{c||}{sgd} \\ & learning rate & \multicolumn{3}{c||}{0.0001} & \multicolumn{4}{c||}{0.01} \\ & lr scheduler & \multicolumn{3}{c||}{divided by 2 at the 65-th and 90-th epoch} & \multicolumn{4}{c||}{divided by 10 at the 20-th and 40-th epoch} \\ & \# epochs & \multicolumn{3}{c||}{100} & \multicolumn{4}{c||}{50} \\ \cline{3-9} \hline \hline \multirow{8}{*}{CIFAR-10/100} & $\lambda_x$ & 0.006 & - & 0.0005 & - & 0.0001 & - & 0.0001 \\ & $\lambda_y$ & 0.05 & - & 0.005 & - & 0.0005 & - & 0.0005 \\ \cline{3-9} & $\lambda$& \multicolumn{3}{c||}{-} & 5 & 5 & 5 & 5 \\ \cline{3-9} & batch size & \multicolumn{3}{c||}{128} & \multicolumn{4}{c||}{128} \\ & optimizer & \multicolumn{3}{c||}{adam} & \multicolumn{4}{c||}{sgd} \\ & learning rate & \multicolumn{3}{c||}{0.01} & \multicolumn{4}{c||}{0.01} \\ & lr scheduler & \multicolumn{3}{c||}{cosine annealing} & \multicolumn{4}{c||}{divided by 10 at the 75-th and 90-th epoch} \\ \cline{3-9} & \# epochs & 300 & \multicolumn{2}{c||}{95} & \multicolumn{4}{c||}{95} \\ \hline \end{tabular}} \end{table} \section{Sensitivity of Regularization Hyperparameters $\lambda_x$ and $\lambda_y$}\label{sec:supp-ablation} \fcom{We provide a comprehensive ablation study on the sensitivity of $\lambda_x$ and $\lambda_y$ on MNIST and CIFAR-10 dataset with (Table \ref{tab:mnist-weight-adv} and \ref{tab:cifar-weight-adv}) and without (Table \ref{tab:mnist-weight} and \ref{tab:cifar-weight}) adversarial training. 
In conclusion: (a) We set the weight of the cross-entropy loss to $1$ and empirically set $\lambda_x$ and $\lambda_y$ according to the performance on a validation set. (b) For MNIST with adversarial training, we empirically find that $\lambda_x : \lambda_y$ ratios around $5:1$ perform better; for MNIST without adversarial training, $\lambda_x=1$ and $\lambda_y=50$, inspired by SWHB \citep{ma2020hsic}, provide the best performance. (c) For CIFAR-10 (and CIFAR-100), with and without adversarial training, $\lambda_x : \lambda_y$ ratios ranging from $1:5$ to $1:10$ perform better.} \begin{table}[hbt!] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{7pt} \small \caption{\textbf{MNIST by LeNet with adversarial training}: Ablation study on HBaR regularization hyperparameters $\lambda_x$ and $\lambda_y$ trained by HBaR\xspace+TRADES over the metric of natural test accuracy (\%) and adversarial test robustness (PGD$^{40}$ and AA, \%).} \label{tab:mnist-weight-adv} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{||c c|| c c c ||} \hline $\lambda_x$ & $\lambda_y$ & Natural & PGD$^{40}$ & AA \\ \hline \hline 0.003 & 0.001 & 98.66 & 94.35 & 91.57 \\ \hline 0.003 & 0 & 98.92 & 93.05 & 90.95 \\ 0 & 0.001 & 98.86 & 91.77 & 88.21 \\ \hline 0.0025 & 0.0005 & 98.96 & 94.52 & 91.42 \\ 0.002 & 0.0005 & 98.92 & 94.13 & 91.33 \\ 0.0015 & 0.0005 & 98.93 & 94.06 & 91.43 \\ 0.001 & 0.0005 & 98.95 & 93.76 & 91.14 \\ \hline 0.001 & 0.0002 & 98.92 & 94.61 & 91.37 \\ 0.0008 & 0.0002 & 98.94 & 94.15 & 91.07 \\ 0.0006 & 0.0002 & 98.91 & 94.13 & 90.72 \\ 0.0004 & 0.0002 & 98.90 & 93.96 & 90.56 \\ \hline \end{tabular}} \end{table} \begin{table}[hbt!]
\centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{7pt} \small \caption{\textbf{CIFAR-10 by WideResNet-28-10 with adversarial training}: Ablation study on HBaR regularization hyperparameters $\lambda_x$ and $\lambda_y$ trained by HBaR\xspace+TRADES over the metric of natural test accuracy (\%), and adversarial test robustness (PGD$^{20}$ and AA, \%).} \label{tab:cifar-weight-adv} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{||c c|| c c c ||} \hline $\lambda_x$ & $\lambda_y$ & Natural & PGD$^{20}$ & AA \\ \hline \hline 0.0001 & 0.0005 & 85.61 & 56.51 & 53.53 \\ \hline 0.0001 & 0 & 80.19 & 49.49 & 45.33 \\ 0 & 0.0005 & 84.74 & 55.00 & 51.50 \\ \hline 0.001 & 0.005 & 85.70 & 55.74 & 52.78 \\ 0.0005 & 0.005 & 84.42 & 55.95 & 52.66 \\ 0.00005 & 0.0005 & 85.37 & 56.43 & 53.40 \\ \hline \end{tabular}} \end{table} \begin{table}[hbt!] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{7pt} \small \caption{\textbf{MNIST by LeNet without adversarial training}: Ablation study on HBaR regularization hyperparameters $\lambda_x$ and $\lambda_y$ over the metric of $\operatorname{HSIC}(X,Z_M)$, $\operatorname{HSIC}(Y,Z_M)$, natural test accuracy (\%), and adversarial test robustness (PGD$^{40}$, \%).} \label{tab:mnist-weight} \resizebox{0.52\textwidth}{!}{ \begin{tabular}{||c c|| c c | c c ||} \hline \multirow{2}{*}{$\lambda_x$} & \multirow{2}{*}{$\lambda_y$} & \multicolumn{2}{c|}{HSIC} & \multirow{2}{*}{Natural} & \multirow{2}{*}{PGD$^{40}$}\\ &&$(X,Z_M)$&$(Y,Z_M)$& & \\ \hline \hline \multicolumn{2}{||c||}{CE only} & 45.29 & 8.73 & 99.23 & 0.00\\ \hline 0.0001 & 0 & 21.71 & 8.01 & 99.28 & 0.00 \\ 0.001 & 0 & 5.82 & 6.57 & 99.36 & 0.00 \\ 0.01 & 0 & 3.22 & 4.28 & 99.13 & 0.00 \\ \hline 0 & 1 & 56.45 & 9.00 & 98.92 & 0.00 \\ \hline 0.001 & 0.05 & 53.70 & 8.99 & 99.13 & 0.03 \\ 0.001 & 0.01 & 10.44 & 8.51 & 99.37 & 0.00 \\ 0.001 & 0.005 & 8.86 & 8.24 & 99.38 & 0.00 \\ \hline 0.01 & 0.5 & 16.13 & 8.90 & 99.14 & 5.00 \\ 0.1 & 5 & 15.81 & 8.90 & 98.96 
& 7.72 \\ 1 & 50 & 15.68 & 8.89 & 98.90 & 8.33 \\ 1.1 & 55 & 15.90 & 8.88 & 98.88 & 6.99 \\ 1.2 & 60 & 15.76 & 8.89 & 98.95 & 7.24 \\ 1.5 & 75 & 15.62 & 8.89 & 98.94 & 8.23 \\ 2 & 100 & 15.41 & 8.89 & 98.91 & 7.00 \\ \hline \end{tabular}} \end{table} \begin{table}[hbt!] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{7pt} \small \caption{\textbf{CIFAR-10 by ResNet-18 without adversarial training}: Ablation study on HBaR regularization hyperparameters $\lambda_x$ and $\lambda_y$ over the metric of $\operatorname{HSIC}(X,Z_M)$, $\operatorname{HSIC}(Y,Z_M)$, natural test accuracy (\%), and adversarial test robustness (PGD$^{20}$, \%).} \label{tab:cifar-weight} \resizebox{0.52\textwidth}{!}{ \begin{tabular}{||c c || c c | c c ||} \hline \multirow{2}{*}{$\lambda_x$} & \multirow{2}{*}{$\lambda_y$} & \multicolumn{2}{c|}{HSIC} & \multirow{2}{*}{Natural} & \multirow{2}{*}{PGD$^{20}$}\\ &&$(X,Z_L)$&$(Y,Z_L)$& & \\ \hline \hline \multicolumn{2}{||c||}{CE only} & 3.45 & 4.76 & 95.32 & 8.57 \\ \hline 0.001 & 0.05 & 43.48 & 8.93 & 95.36 & 2.91 \\ 0.002 & 0.05 & 43.15 & 8.92 & 95.55 & 2.29 \\ 0.003 & 0.05 & 41.95 & 8.90 & 95.51 & 3.98 \\ 0.004 & 0.05 & 30.12 & 8.77 & 95.45 & 5.23 \\ 0.005 & 0.05 & 11.56 & 8.45 & 95.44 & 23.73\\ 0.006 & 0.05 & 6.07 & 8.30 & 95.35 & 34.85\\ 0.007 & 0.05 & 4.81 & 8.24 & 95.13 & 15.80 \\ 0.008 & 0.05 & 4.44 & 8.21 & 95.13 & 8.43 \\ 0.009 & 0.05 & 3.96 & 8.14 & 94.70 & 10.83\\ 0.01 & 0.05 & 4.09 & 7.87 & 92.33 & 2.90 \\ \hline \end{tabular}} \end{table} \begin{figure*} \caption{Error bar of natural test accuracy (in \%) and adversarial robustness ((in \%) on FGSM, PGD, CW, and AA attacked test examples) on MNIST by LeNet, CIFAR-100 by WideResNet-28-10, CIFAR-10 by ResNet-18 and WideResNet-28-10 of adversarial learning baselines and combining HBaR with each correspondingly.} \label{fig:hsic-adv-variance} \end{figure*} \begin{table*}[!t] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{3pt} \small 
\caption{\textbf{MNIST by LeNet}: Mean and Standard deviation of natural test accuracy (in \%) and adversarial robustness ((in \%) on FGSM, PGD, CW, and AA attacked test examples) of adversarial learning baselines and combining HBaR with each correspondingly.} \label{tab:hsic-adv-variance-mnist} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{||c || c c c c c c||} \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c||}{MNIST by LeNet} \\ \cline{2-7} & Natural & FGSM & PGD$^{20}$ & PGD$^{40}$ & CW & AA \\ \hline \hline PGD & 98.40 $\pm$ 0.018 & 93.44 $\pm$ 0.177 & 94.56 $\pm$ 0.079 & 89.63 $\pm$ 0.117 & 91.20 $\pm$ 0.097 & 86.62 $\pm$ 0.166 \\ HBaR\xspace + PGD & \textbf{98.66} $\pm$ 0.026 & \textbf{96.02} $\pm$ 0.161 & \textbf{96.44}$\pm$0.030 & \textbf{94.35}$\pm$0.130 & \textbf{95.10}$\pm$0.106 & \textbf{91.57}$\pm0.123$ \\ \hline \hline TRADES & \textbf{97.64}$\pm$0.017 & 94.73$\pm$0.196 & 95.05$\pm$0.006 & 93.27$\pm$0.088 & 93.05$\pm$0.025 & 89.66$\pm$0.085\\ HBaR\xspace + TRADES & \textbf{97.64}$\pm$0.030 & \textbf{95.23}$\pm$0.106 & \textbf{95.17}$\pm$0.023 & \textbf{93.49}$\pm$0.147 & \textbf{93.47}$\pm$0.089 & \textbf{89.99}$\pm$0.155\\ \hline \hline MART & \textbf{98.29}$\pm$0.059 & 95.57$\pm$0.113 & 95.23$\pm$0.144 & 93.55$\pm$0.018 & 93.45$\pm$0.077 & 88.36$\pm$0.179 \\ HBaR\xspace + MART & 98.23$\pm$0.054 & \textbf{96.09}$\pm$0.074 & \textbf{96.08}$\pm$0.035 & \textbf{94.64}$\pm$0.125 & \textbf{94.62}$\pm$0.06 & \textbf{89.99}$\pm$0.13 \\ \hline \end{tabular}} \end{table*} \begin{table*}[!t] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{3pt} \small \caption{\textbf{CIFAR-10 by ResNet-18}: Mean and Standard deviation of natural test accuracy (in \%) and adversarial robustness ((in \%) on FGSM, PGD, CW, and AA attacked test examples) of adversarial learning baselines and combining HBaR with each correspondingly.} \label{tab:hsic-adv-variance-cifar10} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{||c || c c c c c c ||} \hline 
\multirow{2}{*}{Methods} & \multicolumn{6}{c||}{CIFAR-10 by ResNet-18} \\ \cline{2-7} & Natural & FGSM & PGD$^{10}$ & PGD$^{20}$ & CW & AA \\ \hline \hline PGD & 84.71$\pm$0.16 & 55.95$\pm$0.097 & 49.37$\pm$0.075 & 47.54$\pm$0.080 & 41.17$\pm$0.086 & 43.42$\pm$0.064 \\ HBaR\xspace + PGD & \textbf{85.73}$\pm$0.166 & \textbf{57.13}$\pm$0.099 & \textbf{49.63}$\pm$0.058 & \textbf{48.32}$\pm$0.103 & \textbf{41.80}$\pm$0.116 & \textbf{44.46}$\pm$0.169 \\ \hline \hline TRADES & 84.07$\pm$0.201 & 58.63$\pm$0.167 & 53.21$\pm$0.118 & 52.36$\pm$0.189 & 50.07$\pm$0.106 & 49.38$\pm$0.069 \\ HBaR\xspace + TRADES & \textbf{84.10}$\pm$0.104 & \textbf{58.97}$\pm$0.093 & \textbf{53.76}$\pm$0.080 & \textbf{52.92}$\pm$0.175 & \textbf{51.00}$\pm$0.085 & \textbf{49.43}$\pm$0.064 \\ \hline \hline MART & 82.15$\pm$0.117 & 59.85$\pm$0.154 & 54.75$\pm$0.089 & 53.67$\pm$0.088 & 50.12$\pm$0.106 & 47.97$\pm$0.156 \\ HBaR\xspace + MART & \textbf{82.44}$\pm$0.156 & \textbf{59.86}$\pm$0.132 & \textbf{54.84}$\pm$0.051 & \textbf{53.89}$\pm$0.135 & \textbf{50.53}$\pm$0.069 & \textbf{48.21}$\pm$0.100 \\ \hline \end{tabular}} \end{table*} \begin{table*}[!t] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{3pt} \small \caption{\textbf{CIFAR-10 by WideResNet-28-10}: Mean and Standard deviation of natural test accuracy (in \%) and adversarial robustness ((in \%) on FGSM, PGD, CW, and AA attacked test examples) of adversarial learning baselines and combining HBaR with each correspondingly.} \label{tab:hsic-adv-variance-cifar10-wrn} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{||c || c c c c c c ||} \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c||}{CIFAR-10 by WideResNet-28-10} \\ \cline{2-7} & Natural & FGSM & PGD$^{10}$ & PGD$^{20}$ & CW & AA \\ \hline \hline PGD & 86.63$\pm$0.186 & 58.53$\pm$0.073 & 52.21$\pm$0.084 & 50.59$\pm$0.096 & 49.32$\pm$0.089 & 47.25$\pm$0.124 \\ HBaR\xspace + PGD & \textbf{87.91}$\pm$0.102 & \textbf{59.69}$\pm$0.097 & \textbf{52.72}$\pm$0.081 & 
\textbf{51.17}$\pm$0.152 & \textbf{49.52}$\pm$0.174 & \textbf{47.60}$\pm$0.131 \\ \hline \hline TRADES & \textbf{85.66}$\pm$0.103 & 61.55$\pm$0.134 & 56.62$\pm$0.097 & 55.67$\pm$0.098 & 54.02$\pm$0.106 & 52.71$\pm$0.169 \\ HBaR\xspace + TRADES & 85.61$\pm$0.0133 & \textbf{62.20}$\pm$0.102 & \textbf{57.30}$\pm$0.059 & \textbf{56.51}$\pm$0.136 & \textbf{54.89}$\pm$0.098 & \textbf{53.53}$\pm$0.127 \\ \hline \hline MART & \textbf{85.94}$\pm$0.156 & 59.39$\pm$0.109 & 51.30$\pm$0.052 & 49.46$\pm$0.136 & 47.94$\pm$0.098 & 45.48$\pm$0.100 \\ HBaR\xspace + MART & 85.52$\pm$0.136 & \textbf{60.54}$\pm$0.071 & \textbf{53.42}$\pm$0.142 & \textbf{51.81}$\pm$0.177 & \textbf{49.32}$\pm$0.131 & \textbf{46.99}$\pm$0.137 \\ \hline \end{tabular}} \end{table*} \begin{table*}[!t] \centering \setlength{\extrarowheight}{.2em} \setlength{\tabcolsep}{3pt} \small \caption{\textbf{CIFAR-100 by WideResNet-28-10}: Mean and Standard deviation of natural test accuracy (in \%) and adversarial robustness ((in \%) on FGSM, PGD, CW, and AA attacked test examples) of adversarial learning baselines and combining HBaR with each correspondingly.} \label{tab:hsic-adv-variance-cifar100} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{||c || c c c c c c||} \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c||}{CIFAR-100 by WideResNet-28-10} \\ \cline{2-7} & Natural & FGSM & PGD$^{20}$ & PGD$^{40}$ & CW & AA \\ \hline \hline PGD & 59.91$\pm$0.116 & 29.85$\pm$0.117 & 26.05$\pm$0.106 & 25.38$\pm$0.129 & 22.28$\pm$0.079 & 20.91$\pm$0.133 \\ HBaR\xspace + PGD & \textbf{63.84}$\pm$0.105 & \textbf{31.59}$\pm$0.054 & \textbf{27.90}$\pm$0.030 & \textbf{27.21}$\pm$0.025 & \textbf{23.23}$\pm$0.088 & \textbf{21.61}$\pm$0.061 \\ \hline \hline TRADES & 60.29$\pm$0.122 & 34.19$\pm$0.132 & 31.32$\pm$0.134 & 30.96$\pm$0.135 & 28.20$\pm$0.097 & 26.91$\pm$0.172 \\ HBaR\xspace + TRADES & \textbf{60.55}$\pm$0.065 & \textbf{34.57}$\pm$0.068 & \textbf{31.96}$\pm$0.067 & \textbf{31.57}$\pm$0.079 & \textbf{28.72}$\pm$0.071 & 
\textbf{27.46}$\pm$0.098\\ \hline \hline MART & 58.42$\pm$0.164 & 32.94$\pm$0.160 & 29.17$\pm$0.166 & 28.19$\pm$0.252 & 27.31$\pm$0.096 & 25.09$\pm$0.179 \\ HBaR\xspace + MART & \textbf{58.93}$\pm$0.102 & \textbf{33.49}$\pm$0.144 & \textbf{30.72}$\pm$0.130 & \textbf{30.16}$\pm$0.133 & \textbf{28.89}$\pm$0.118 & \textbf{25.21}$\pm$0.111 \\ \hline \end{tabular}} \end{table*} \section{Error Bars for Combining HBaR\xspace with Adversarial Examples}\label{sec:supp-errorbar} We show how HBaR\xspace can be used to improve robustness when used as a regularizer, as described in \Cref{sec:combine-hb}, along with state-of-the-art adversarial learning methods. We run each experiment five times. \fcom{Figure \ref{fig:hsic-adv-variance} illustrates the mean and standard deviation of the natural test accuracy and adversarial robustness against various attacks on CIFAR-10 by ResNet-18 and WideResNet-28-10. Tables \ref{tab:hsic-adv-variance-mnist}, \ref{tab:hsic-adv-variance-cifar10}, \ref{tab:hsic-adv-variance-cifar10-wrn}, and \ref{tab:hsic-adv-variance-cifar100} report the detailed standard deviations.} Combined with the adversarial training baselines, HBaR\xspace consistently improves adversarial robustness against all types of attacks, with small variance. \section{Limitations} \label{sec:supp-limit} \com{ One limitation of our method is that the robustness gain, though beating other IB-based methods, is modest when training with only natural examples. However, the potential of achieving adversarial robustness \emph{without} adversarial training is interesting and worth further exploration in the future. Another limitation of our method, shared by many proposed adversarial defense methods, is its uncertain performance against new attack methods. Although we have established concrete theories and conducted comprehensive experiments, there is no guarantee that our method is able to handle novel, well-designed attacks.
Finally, in our theoretical analysis in Section~\ref{sec:HBAR_theorem}, we have made several assumptions for Theorem~\ref{thm:new_theorem}. While Assumptions~\ref{asm:cont} and~\ref{asm:universal} hold in practice, the distribution of the input features is not guaranteed to be standard Gaussian. Although the empirical evaluation supports the correctness of the theorem, we admit that the claim is not general enough. We aim to prove a more general version of Theorem~\ref{thm:new_theorem} in the future, hopefully agnostic to input distributions. We will keep track of the advances in the adversarial robustness field and further improve our work correspondingly.} \section{Potential Societal Negative Impact} \label{sec:supp-impact} \com{ Although HBaR\xspace has great potential as a general strategy to enhance the robustness of various machine learning systems, we still need to be aware of the potential negative societal impacts it might have. For example, over-confidence in the \emph{adversarially-robust} models produced by HBaR\xspace as well as other defense methods may lead to overlooking their potential failure on newly-invented attack methods; this should be taken into account in safety-critical applications like healthcare~\citep{adv_health} or security~\citep{adv_surveillance}. Another example is that one might draw insights from the theoretical analysis of our method to design stronger adversarial attacks. These attacks, if they fall into the wrong hands, might cause severe societal problems. Thus, we encourage the machine learning community to further explore this field and to be judicious, in order to avoid misunderstanding or misuse of our method. Moreover, we propose to establish more reliable adversarial robustness checking routines for machine learning models deployed in safety-critical applications. For example, we should test these models with the latest adversarial attacks and make corresponding updates to them annually. } \end{document}
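As an illustration of the quantities regularized throughout the appendix above: HBaR penalizes $\mathop{\mathrm{HSIC}}(X,Z)$ and rewards $\mathop{\mathrm{HSIC}}(Y,Z)$, with Gaussian kernels for $X$ and $Z$ (bandwidth $\sigma=5\sqrt{d}$) and a linear kernel for $Y$. The sketch below is a plain-NumPy version of the standard biased HSIC estimator with that kernel choice; it is not the authors' implementation, and the batch shapes are made up for illustration.

```python
import numpy as np

def gaussian_gram(A, sigma):
    """Gaussian (RBF) Gram matrix; ||a_i - a_j||^2 via the standard expansion."""
    sq = np.sum(A * A, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * A @ A.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(K, L):
    """Biased empirical HSIC: trace(K H L H) / (n-1)^2, H the centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Hypothetical batch: 64 flattened inputs, a hidden representation, labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 784))          # e.g. a flattened MNIST batch
Z = X @ rng.normal(size=(784, 32))      # a hidden-layer representation
Y = rng.normal(size=(64, 10))           # label vectors (one-hot in practice)

Kx = gaussian_gram(X, 5.0 * np.sqrt(X.shape[1]))  # sigma = 5 * sqrt(d)
Kz = gaussian_gram(Z, 5.0 * np.sqrt(Z.shape[1]))
Ly = Y @ Y.T                                      # linear kernel for Y

hsic_xz = hsic(Kx, Kz)  # term weighted by lambda_x (to be minimized)
hsic_yz = hsic(Ly, Kz)  # term weighted by lambda_y (to be maximized)
```

Since $K$, $L$ are positive semidefinite Gram matrices, both estimates are nonnegative; here $Z$ is a deterministic function of $X$, so `hsic_xz` is clearly positive.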
\begin{document} \title{Example of a diffeomorphism for which the special ergodic theorem doesn't hold} \begin{abstract} In this work we present an example of a $C^\infty$-diffeomorphism of a compact $4$-manifold that admits a global SRB measure but for which the special ergodic theorem doesn't hold. Namely, for this transformation there exist a continuous function $\varphi$ and a positive constant $\alpha$ such that the following holds: the set of initial points at which the Birkhoff time averages of the function~$\varphi$ differ from its $\mu$-space average by at least~$\alpha$ has zero Lebesgue measure but full Hausdorff dimension. \end{abstract} \section*{Introduction} Let us recall the basic concepts and the notion of the special ergodic theorem, and then state our result. For a more detailed survey of the subject we refer the reader to the work~\cite{KR} and to the references therein. \subsection{SRB measures and special ergodic theorems} Let $M$ be a compact Riemannian manifold (equipped with the Lebesgue measure), and let $f:M\to M$ be a transformation of $M$ with an invariant measure $\mu$. For a function $\varphi\colon M\to \mathbb R$ and a point $x\in M$ we denote the $n$-th time average of $\varphi$ at $x$ by \begin{equation} \varphi_n(x):=\frac{1}{n}\sum_{k=0}^{n-1} \varphi\circ f^k(x), \end{equation} and the space average of $\varphi$ by \begin{equation} \bar\varphi:=\int_M \varphi\,d\mu. \end{equation} \begin{definition} An invariant probability measure $\mu$ is called a \emph{(global) SRB measure} for $f:M\to M$ if for any continuous function $\varphi$ and for Lebesgue-almost every $x\in M$, the time averages of $\varphi$ at $x$ tend to the space average of $\varphi$: \begin{equation}\label{eq:time-averages} \lim\limits_{n\to\infty}\varphi_n (x)=\bar{\varphi}.
\end{equation} \end{definition} Taking a continuous test function $\varphi\in C(M)$ and any $\alpha\geqslant 0$, we define the set of ($\varphi,\alpha$)-nontypical points as \begin{equation*} K_{\varphi,\alpha}:=\left\{ x\in M\colon \varlimsup\limits_{n\to\infty} |\varphi_n(x)-\bar\varphi|>\alpha \right\}. \end{equation*} By definition, if $\mu$ is a global SRB measure, then $\mathop{\mathrm{Leb}}(K_{\varphi,0})=0$. \begin{definition}\label{def:spec-erg} Let $\mu$ be a global SRB measure of $f$. We say that the \emph{special ergodic theorem} holds for $(f,\mu)$ if for every continuous function $\varphi\in C(M)$ and every $\alpha>0$ the Hausdorff dimension of the set $K_{\varphi,\alpha}$ is strictly less than the dimension of the phase space: \begin{equation}\label{eq:set} \forall \varphi\in C(M), \alpha>0 \qquad \dim_H K_{\varphi,\alpha}<\dim M. \end{equation} \end{definition} Our interest in the special ergodic theorem is related to its possible applications to studying perturbations of skew products; see, for example,~\cite{IKS}. In~\cite{IKS} the special ergodic theorem was proved for the doubling map of the circle, in~\cite{Saltykov} for linear Anosov diffeomorphisms of the two-dimensional torus, and in~\cite{KR} for all transformations for which the so-called dynamical large deviation principle holds (in particular, for all $C^2$-uniformly hyperbolic maps with a transitive attractor). \subsection{The counterexample} All known sufficient conditions on a dynamical system that guarantee the special ergodic theorem (SET) are quite restrictive. Thus one may expect that the SET does not hold for every system. The aim of the present work is to give such an example. \begin{theorem*} There exists a $C^\infty$-diffeomorphism of a compact $4$-manifold that admits a global SRB measure, but for which the special ergodic theorem doesn't hold.
\end{theorem*} Our idea is to start with a set that has zero Lebesgue measure and full Hausdorff dimension, and then to construct a transformation for which this set is the set of $(\varphi,\alpha)$-nontypical points for some test function $\varphi$ and some $\alpha>0$. For this purpose, we first describe a family of subsets of an interval that have zero Lebesgue measure and Hausdorff dimension $1$. We can construct only a discontinuous transformation for which a set from this family is $(\varphi,\alpha)$-nontypical, so we increase the dimension from $1$ to $4$, step by step overcoming the lack of continuity and smoothness of the map. This is done in the next four sections. One can easily verify that the construction fails for a typical perturbation of the system, and our example has infinite codimension in the space of all $C^\infty$-diffeomorphisms of the manifold. The existence of an open set of diffeomorphisms that do not satisfy the SET is a challenging open problem. \section{Dimension 1: discontinuous map of an interval} For every sequence $P=\{p_n\}\in (0,1)^{\mathbb N}$ consider the Cantor set $C_P$, obtained from the interval $I=[0,1]$ by the standard infinite procedure of consecutively deleting the middle intervals. Namely, on step number $n$ we take all the intervals that were obtained as a result of the previous steps (\textit{intervals of rank $n$}), and delete their central parts of relative length $p_n$, see Figure~1. In the particular case when $p_n=1/3 \; \forall n\in\mathbb N$, we obtain the standard ``middle third'' Cantor set.
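The measure computation behind this construction can be checked numerically. The following is a small numerical sketch (illustration only, not part of the paper), using the example sequence $p_j = 1/(j+1)$, which satisfies both conditions (i) and (ii) defining the class $\mathcal P$ below: the total length of the rank-$n$ intervals is $\prod_{j\leqslant n}(1-p_j)$, which here telescopes to $1/(n+1)$ and tends to $0$.

```python
# Sketch: total Lebesgue measure of the 2^n rank-n intervals of C_P is
# prod_{j<=n} (1 - p_j); it tends to 0 exactly when sum_j p_j diverges.
# Example sequence: p_j = 1/(j+1), so p_j -> 0 and sum_j p_j = infinity.

def remaining_length(n):
    """Total length of the rank-n intervals for p_j = 1/(j+1)."""
    total = 1.0
    for j in range(1, n + 1):
        total *= 1.0 - 1.0 / (j + 1)
    return total

# The product telescopes: prod_{j<=n} j/(j+1) = 1/(n+1).
for n in (1, 10, 1000):
    assert abs(remaining_length(n) - 1.0 / (n + 1)) < 1e-9

# Each single rank-n interval has length lambda_n = remaining_length(n) / 2**n,
# so C_P has Lebesgue measure zero, although (by Lemma 1) its Hausdorff
# dimension is still 1.
```

For a sequence violating condition (ii), e.g. $p_j = 2^{-j}$, the product converges to a positive limit and the resulting Cantor set has positive measure, which is why (ii) is needed.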
\begin{center} \scalebox{1.2} { \begin{pspicture}(0,-1.108125)(8.582812,1.108125) \psline[linewidth=0.04cm](0.3009375,-0.0903125)(8.300938,-0.0903125) \psdots[dotsize=0.16](0.3009375,-0.0903125) \psdots[dotsize=0.16](8.280937,-0.0903125) \psdots[dotsize=0.16](3.3009374,-0.0903125) \psdots[dotsize=0.16](5.3009377,-0.0903125) \psdots[dotsize=0.16](1.5009375,-0.0903125) \psdots[dotsize=0.16](2.1009376,-0.0903125) \psdots[dotsize=0.16](6.5009375,-0.0903125) \psdots[dotsize=0.16](7.1009374,-0.0903125) \usefont{T1}{ptm}{m}{n} \rput(4.3723435,0.1396875){$p_1$} \psdots[dotsize=0.16](0.7609375,-0.0903125) \psdots[dotsize=0.16](1.0409375,-0.0903125) \psdots[dotsize=0.16](2.5609374,-0.0903125) \psdots[dotsize=0.16](2.8409376,-0.0903125) \psdots[dotsize=0.16](5.7609377,-0.0903125) \psdots[dotsize=0.16](6.0409374,-0.0903125) \psdots[dotsize=0.16](7.5609374,-0.0903125) \psdots[dotsize=0.16](7.8409376,-0.0903125) \psline[linewidth=0.08cm](1.1009375,-0.0903125)(1.5009375,-0.0903125) \psline[linewidth=0.08cm](2.1409376,-0.0903125)(2.5409374,-0.0903125) \psline[linewidth=0.08cm](2.8209374,-0.0903125)(3.2609375,-0.0903125) \psline[linewidth=0.08cm](5.3009377,-0.0903125)(5.7009373,-0.0903125) \psline[linewidth=0.08cm](6.0609374,-0.0903125)(6.4609375,-0.0903125) \psline[linewidth=0.08cm](7.1009374,-0.0903125)(7.5409374,-0.0903125) \psline[linewidth=0.08cm](7.8809376,-0.0903125)(8.220938,-0.0903125) \psline[linewidth=0.08cm](0.3009375,-0.0903125)(0.7009375,-0.0903125) \usefont{T1}{ptm}{m}{n} \rput(4.3723435,-0.8803125){$p_2$} \usefont{T1}{ptm}{m}{n} \rput(4.3723435,0.9196875){$p_3$} \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(0.8809375,0.0296875)(4.1009374,0.8896875) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(7.7009373,0.0296875)(4.6809373,0.8896875) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(2.7209375,-0.0103125)(4.1209373,0.7496875) 
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(5.9009376,-0.0103125)(4.6209373,0.7096875) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(6.7809377,-0.1503125)(4.7409377,-0.9303125) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(1.7809376,-0.1503125)(4.0609374,-0.9303125) \usefont{T1}{ptm}{m}{n} \rput(0.23234375,-0.3603125){$0$} \usefont{T1}{ptm}{m}{n} \rput(8.272344,-0.3603125){$1$} \end{pspicture} } \\ Fig. 1: first three steps of constructing the set $C_P$ \end{center} Let $\mathcal P\subset [0,1]^{\mathbb N}$ denote the set of the sequences $P=(p_n)$ such that: (i) $\lim_{n\to\infty} p_n=0$; (ii) $\sum^\infty_{n=1} p_n=\infty$. \begin{lemma}\label{1} For every $P\in\mathcal P$ the set $C_P$ has zero Lebesgue $1$-measure and Hausdorff dimension $1$. \end{lemma} Though the lemma is intuitively clear, we give a rigorous proof. It makes use of the concept of $\gamma$-regular measures, which will be applied several times in our proofs. \begin{definition} A measure $\mu$ on a Riemannian manifold is called $\gamma$-regular if there exist constants $c,\delta>0$ such that \begin{equation}\label{eq:reg} \mu(U)<c\cdot|U|^\gamma \end{equation} for every measurable set $U$ with $|U|<\delta$ (here $|\cdot|$ stands for the diameter of a set). \end{definition} \begin{proposition}[\cite{Falc}, Mass Distribution Principle]\label{mdp} If $\text{supp }\mu\subset F$ for some $\gamma$-regular probability measure $\mu$, then $\dim_H F\geqslant \gamma$. \end{proposition} \begin{proof} For $\varepsilon$ small enough, the $\gamma$-volume of any cover of $F$ by balls of diameter less than $\varepsilon$ is uniformly bounded away from zero: $$ \sum |U_i|^\gamma \geqslant \frac{\sum \mu(U_i)}{c}\geqslant \frac{\mu(F)}{c}=\frac{1}{c}>0. $$ \end{proof} \begin{proof}[Proof of Lemma~\ref{1}] Let us prove that $C_P$ has Lebesgue measure zero.
Indeed, the second property $\sum^\infty_{n=1} p_n=\infty$ implies that $$ \sum^\infty_{n=1} (-\ln (1-p_n))=\infty. $$ Therefore $\prod^\infty_{n=1} (1-p_n)=0$ and hence $\mathop{\mathrm{Leb}}(C_P)=0$. To prove that $\dim_H C_P=1$, consider the standard probability measure $\mu_{P}$ on $C_P$ (that is, the pullback of the Bernoulli $(1/2,1/2)$-measure under the natural encoding of $C_P$ by right-infinite sequences of zeroes and ones). Let us verify that for every $0<\gamma<1$ the measure $\mu_P$ is $\gamma$-regular. That is, we need to prove the existence of a constant $c$ such that~\eqref{eq:reg} holds for every sufficiently small interval $U$. Note that we can suppose that $U$ is an interval of rank $n$ for some $n$. Indeed, let \begin{equation}\label{eq:lambdan} \lambda_n=\prod\limits_{j=1}^n \frac{1-p_j}{2} \end{equation} be the length of any interval of rank $n$. Then for all sufficiently large $n$ the ratio $\lambda_n/\lambda_{n+1}$ is less than $3$. Hence one can contract and shift any interval $U$ of length between $\lambda_{n+1}$ and $\lambda_n$ to an interval of rank $n+1$, changing the suitable value of the constant $c$ in~\eqref{eq:reg} by no more than a factor of $3^\gamma$. Any interval of rank $n$ has length $\lambda_n$ and $\mu_P$-measure $\frac{1}{2^n}$. As a consequence of the first property $\lim_{j\to\infty} p_j=0$, there exists $n\in\mathbb N$ such that $2^{\gamma-1}<(1-p_j)^\gamma$ for every $j>n$. Starting from this number $n$, the sequence \begin{equation}\label{eq:regconst} \frac{\mu_P (U_n)}{|U_n|^\gamma}=\frac{2^{-n}}{\lambda_n^\gamma} =\frac{2^{n(\gamma-1)}}{\left(\prod_{j=1}^n (1-p_j)\right)^\gamma} \end{equation} decreases, and hence is bounded. This proves the $\gamma$-regularity of $\mu_P$. Therefore, by Proposition~\ref{mdp}, $\dim_H C_P \geqslant\gamma$ for every $\gamma<1$. The second conclusion of the lemma is also proved.
\end{proof} Of course, taking any sequence $P\in\mathcal P$ one can easily construct a discontinuous map of the unit interval for which the set $C_P$ is $(\varphi,\alpha)$-nontypical for some $\varphi\in C(I)$ and $\alpha>0$. Indeed, let $f$ send the set $C_P\setminus \{1\}$ to the left endpoint $0$ and $(I\setminus C_P)\cup \{1\}$ to the right endpoint $s_1=1$. Then the $\delta$-measure sitting at the right endpoint would be an SRB measure, and the set $C_P$ would play the role of the set of $(\varphi,1)$-nontypical points $K_{\varphi,1}$ (provided that the test function $\varphi$ takes the value $1$ at the left endpoint and $0$ at the right endpoint). But this map is discontinuous on the Cantor set! So, relying on this lemma and keeping in mind the mentioned lack of continuity, let us make the next step: construct an example of a map of the square $[0,1] \times [0,1]$ which is, in some sense, less discontinuous than the previous one, and still has a $(\varphi,1)$-nontypical set of full Hausdorff dimension. \section{Dimension 2: a sieving construction} Consider the square $$ Y=\{(x,p): x,p\in [0,1]\} $$ and let us describe a map $g$ on it. Fix the ``horizontal level'' \begin{equation}\label{eq:Ip} I_p:=\{(x,p)\in Y \mid x\in [0,1]\} \end{equation} and split it into three parts: $$I_p=I_{p,-1}\sqcup I_{p,0}\sqcup I_{p,1},$$ where \begin{equation}\label{eq:Ipk} \begin{aligned} I_{p,-1}&:=[0;\frac{1-p}{2}),\\ I_{p,0}&:=[\frac{1-p}{2};\frac{1+p}{2}], \text{ and}\\ I_{p,1}&:=(\frac{1+p}{2};1] \end{aligned} \end{equation} (i.e., the interval $I_{p,0}$ is the central part of length $p$). To define the transformation $g$, we introduce the function $q\colon [0,1]\to [0,1]$, \begin{equation}\label{eq:q-def} q(p)=p/(1+p) \end{equation} (so that $q(1/n)=1/(n+1)$), and mark the point $s_2=(1/2,1)$, which will be the support of the SRB measure. We define $g$ separately on the different parts of every horizontal level $I_p$. For $p=1$, $g(x,p)=s_2$.
For $0\leqslant p<1$, we send the central part $I_{p,0}$ to the point $s_2$ and linearly stretch the other parts $I_{p,-1}$ and $I_{p,1}$ to the whole level $I_{q(p)}$ each, see Figure~2. The shaded triangle in this figure goes to the fixed point $s_2$ under one iterate of the map $g$. This map is defined by the following formula: \begin{equation}\label{eq:g-def} g(x,p)= \begin{cases} (\frac{2}{1-p}x, q(p)), &x\in I_{p,-1}\\ s_2, &x\in I_{p,0}\\ (\frac{2}{1-p}(x-\frac{1+p}{2}), q(p)), &x\in I_{p,1}. \end{cases} \end{equation} \begin{center} \scalebox{1.2} { \begin{pspicture}(0,-4.5629687)(9.982813,4.5629687) \definecolor{color108b}{rgb}{0.8,0.8,0.8} \psframe[linewidth=0.04,dimen=outer](9.039531,3.8445315)(1.0795312,-4.115469) \rput{-180.0}(10.12,-0.2700627){\pstriangle[linewidth=0.03,dimen=outer,fillstyle=solid,fillcolor=color108b](5.06,-4.1150312)(7.94,7.96)} \psline[linewidth=0.04cm](1.0795312,0.4445314)(9.02,0.46296865) \psline[linewidth=0.08cm](1.0795312,-1.4554687)(9.039531,-1.4554687) \psline[linewidth=0.08cm](1.0995312,0.4445314)(2.7995312,0.4445314) \psline[linewidth=0.08cm](7.3,0.44296864)(9.02,0.44296864) \usefont{T1}{ptm}{m}{n} \rput(0.8123438,0.4345314){$p_0$} \usefont{T1}{ptm}{m}{n} \rput(0.6223438,-1.4454687){$q(p_0)$} \psdots[dotsize=0.14](5.079531,3.8445315) \usefont{T1}{ptm}{m}{n} \rput(5.072344,4.0745316){$s_2$} \usefont{T1}{ptm}{m}{n} \rput(1.9023439,0.7345314){$I_{p_0,-1}$} \usefont{T1}{ptm}{m}{n} \rput(5.1023436,0.1945314){$I_{p_0,0}$} \usefont{T1}{ptm}{m}{n} \rput(8.382343,0.7345314){$I_{p_0,1}$} \psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(5.08,0.54296863)(5.079531,3.4645314) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(6.5,0.52296865)(5.259531,3.5045316) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(3.66,0.52296865)(4.959531,3.5045316) \psline[linewidth=0.02cm,arrowsize=0.05291667cm
4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.3195312,0.3445314)(1.48,-1.3570313) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.8795311,0.3645314)(4.84,-1.3970313) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(2.6395311,0.3645314)(8.12,-1.3770313) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(8.8,0.32296866)(8.58,-1.3570313) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(8.259532,0.3445314)(5.22,-1.3770313) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(7.5,0.34296864)(1.92,-1.3570313) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.1,3.5829687)(1.1,4.422969) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(8.58,-4.097031)(9.86,-4.097031) \usefont{T1}{ptm}{m}{n} \rput(0.93234366,4.3745313){$p$} \usefont{T1}{ptm}{m}{n} \rput(0.9323437,3.8345313){$1$} \usefont{T1}{ptm}{m}{n} \rput(9.012344,-4.365469){$1$} \usefont{T1}{ptm}{m}{n} \rput(5.012344,-4.3854685){$1/2$} \usefont{T1}{ptm}{m}{n} \rput(0.9723436,-4.285469){$0$} \usefont{T1}{ptm}{m}{n} \rput(9.6823435,-4.365469){$x$} \end{pspicture} } \\ Fig. 2: the map $g$: one step of the sieving construction \end{center} Let $$ K_2=\{(x,p)\mid \forall n \quad g^n(x,p)\ne s_2\}. $$ Note that under the iterates of the function $q$, see~\eqref{eq:q-def}, every point $p\in[0,1]$ tends to zero. Then our construction immediately implies the following alternative description of the set $K_2$: $$ K_2=\{(x,p)\mid \mathop{\mathrm{dist}}(g^n(x,p),I_0)\to 0\}.
$$ Thus we have a mechanism that acts like an infinite sieve: a point passes down through it to the zero level (rather than being ``thrown out'' to the point $s_2$ under the iterates of $g$) if and only if its forward orbit never enters the shaded triangle, which is the union of the central parts of the level intervals.

\begin{lemma}\label{2} The $\delta$-measure sitting at the point $s_2$ is a global SRB measure for the map $g$. The set $K_2$ of the points that tend to the level $\{p=0\}$ under the iterates of $g$ has zero Lebesgue $2$-measure and Hausdorff dimension $2$. \end{lemma}

\begin{proof} First, let us prove that on every level $I_{p_0}$, $0<p_0\leqslant 1$, see~\eqref{eq:Ip}, Lebesgue almost every point is thrown out to $s_2$ under some iterate of $g$; that is, the set $D_{p_0}:=I_{p_0}\cap K_2$ has $1$-dimensional Lebesgue measure $0$. By construction, $D_{p_0}=\{(x,p_0)\mid x\in C_{P_0}\}$, where $P_0=P_0(p_0)=(p_0, q(p_0),\dots, q^n(p_0),\dots)$. By Lemma~\ref{1}, it is sufficient to show that $P_0\in\mathcal P$. For $p_0=1$ we have $P_0=(1,1/2,1/3,1/4,\dots)\in \mathcal P$. For any $0<p_0<1$ we also have $P_0(p_0)\in\mathcal P$ due to the monotonicity of the function $q$. Indeed, since $q$ is increasing and $q(1/n)=1/(n+1)$, once $1/(n+1)<p_0\leqslant 1/n$ for some $n\in\mathbb N$, then $1/(n+k+1)<q^k(p_0)\leqslant 1/(n+k)$ for every $k\in\mathbb N$. Therefore the sequence $P_0$ tends to $0$, and the sum of its components diverges, just as for the harmonic series. Hence $P_0$ belongs to $\mathcal P$. The relation $\mathop{\mathrm{Leb}}_2(K_2)=0$ now follows from the Fubini theorem. Hence the $\delta$-measure at the point $s_2$ is a global SRB measure.

It remains to prove that $\dim_H K_2=2$. For every level $I_p$ denote by $\mu_p$ the measure constructed in the proof of Lemma~\ref{1} and supported on the Cantor set $D_p$ of the points of $I_p$ that are never thrown out to $s_2$ under the iterates of $g$. Fix any $\gamma<1$.
As was shown in the proof of Lemma~\ref{1}, all the measures $\mu_p$ for $0<p<1$ are $\gamma$-regular. Note also that the sequence of iterates $P_0(1)=(1,1/2,1/3,\dots)$ of the point $p_0=1$ componentwise majorizes the sequences $P_0(p_0)$ for all other $p_0\in [0,1)$, due to the monotonicity of the function $q$. Hence, by~\eqref{eq:lambdan}, for any $n$ the length $\lambda_n$ of the intervals of rank $n$ for $P_0=P_0(p_0)$ is minimal for $p_0=1$, and by~\eqref{eq:regconst} the regularity constant $c=c(\gamma)$ can be chosen independently of $p$. Then the measure $$ \mu:=\int_0^1 \mu_p \, dm(p) $$ is $(\gamma+1)$-regular with the same constant $c(\gamma)$ by the Fubini theorem. Therefore, by Proposition~\ref{mdp}, $\dim_H K_2 \geqslant\gamma+1$ for every $\gamma<1$, and hence $\dim_H K_2=2$. \end{proof}

The map $g$ is not continuous either, but its set of discontinuity now consists only of two segments formed by the endpoints of the intervals $I_{p,0}$, see~\eqref{eq:Ipk} (they form the two lateral sides of the shaded triangle in Figure~2). Note also that even the image of the map $g$ is disconnected (it consists of the point $s_2$ and the lower half of the square $Y$). To get rid of these problems we embed this construction into a flow. Namely, we consider a flow whose Poincar\'e map for some cross-section resembles the map $g$ described above.

\section{Dimension 3: a flow on a stratified manifold}

In this section we present a $3$-manifold and a flow on it. The time $1$ map of this flow does not satisfy the SET. The shortcoming is that the phase space is a stratified rather than a genuine smooth manifold. We start with a brief description of both. The idea is to avoid the discontinuity of the previous $2$-dimensional example by dividing the images of the parts of $I_p$ by separatrices of saddles of a smooth flow.
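Before passing to the construction, let us record an elementary computation concerning the sieve of the previous section; it is a side remark, given only to keep the quantitative picture in mind, and is not used below. By induction, \eqref{eq:q-def} yields the closed form
$$
q^n(p)=\frac{p}{1+np},
$$
since $q\bigl(p/(1+np)\bigr)=p/(1+(n+1)p)$. Hence on a level $I_p$, $0<p<1$, the Lebesgue measure of the set of points surviving $n$ steps of the sieve telescopes:
$$
\prod_{k=0}^{n-1}\bigl(1-q^k(p)\bigr)=\prod_{k=0}^{n-1}\frac{1+(k-1)p}{1+kp}=\frac{1-p}{1+(n-1)p}\;\longrightarrow\;0 \quad\text{as } n\to\infty,
$$
which gives one more way to see that the set $I_p\cap K_2$ has zero $1$-dimensional Lebesgue measure.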
\subsection{Heuristic description} We consider a closed simply connected subset $T$ of a square and take its direct product with the interval $[0,1]$, thus obtaining a $3$-dimensional manifold with boundary, homeomorphic to a closed $3$-cube. Then we glue parts of the boundary of this manifold and obtain a $2$-stratum $S_2$ (every point of this stratum has a neighborhood homeomorphic to three half-balls glued together along their boundary discs). Then we define a flow such that the Poincar\'e map for the cross-section $S_2$ is very similar to the map $g$, see \eqref{eq:g-def}, while the time $1$ map of the flow is continuous and displays the same asymptotic behaviour as the Poincar\'e map. From now on we take $p\in [0,1/2]$ instead of $p\in [0,1]$ to avoid degeneracies at $p=1$. Now we pass to a rigorous description.

\subsection{Construction of a stratified manifold} First we describe three curves in the square $R=\{(x,y)\mid x,y\in [-1,2]\}$. It is not necessary to define them by explicit formulas; we only state the properties that we need, and one can easily verify that such curves exist. The curve $\gamma_{-1}$ starts at the point $(0,-1)$, ends at $(-1,0)$, and coincides with the straight intervals $\{(0,-1+t)\}$ and $\{(-1+t,0)\}$, $t\in[0,\varepsilon]$, in some $\varepsilon$-neighborhoods of its endpoints. Analogously, the curve $\gamma_1$ starts at the point $(1,-1)$, ends at $(2,0)$, and coincides with the straight intervals $\{(1,-1+t)\}$ and $\{(2-t,0)\}$, $t\in[0,\varepsilon]$, in some $\varepsilon$-neighborhoods of its endpoints, see Figure~3a. The curve $\gamma_0$ starts at the point $(-1,1)$ and ends at $(2,1)$. It is tangent to the line $\{y=2\}$ at the point $(1/2,2)$ and coincides with the straight intervals $\{(-1+t,1)\}$ and $\{(2-t,1)\}$, $t\in[0,\varepsilon]$, in some $\varepsilon$-neighborhoods of its endpoints.
Each of these three curves is assumed to be simple and $C^\infty$-smooth, to lie in the square $R$ and not to have any intersections with the other two curves. For every $y_0\in (1,2)$ the curve $\gamma_0$ intersects the line $\{y=y_0\}$ at two points. We denote by $T\subset R$ the closed subset of a square, bounded by these three curves and segments on the boundary of the square (see Figure~3a). \begin{center} \scalebox{1.2} { \begin{pspicture}(0,-3.6229687)(7.2628126,3.6229687) \definecolor{color3208g}{rgb}{0.6,0.6,0.6} \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color3208g,gradend=color3208g,gradmidpoint=1.0] { \newpath \moveto(0.4609375,-3.1754687) \lineto(0.4609375,-3.1254687) \curveto(0.4609375,-3.1004686)(0.4609375,-3.0504687)(0.4609375,-3.0254688) \curveto(0.4609375,-3.0004687)(0.4609375,-2.9504688)(0.4609375,-2.9254687) \curveto(0.4609375,-2.9004688)(0.4609375,-2.8504686)(0.4609375,-2.8254688) \curveto(0.4609375,-2.8004687)(0.4559375,-2.7554688)(0.4509375,-2.7354689) \curveto(0.4459375,-2.7154686)(0.4409375,-2.6704688)(0.4409375,-2.6454687) \curveto(0.4409375,-2.6204689)(0.4409375,-2.5704687)(0.4409375,-2.5454688) \curveto(0.4409375,-2.5204687)(0.4459375,-2.4754686)(0.4509375,-2.4554687) \curveto(0.4559375,-2.4354687)(0.4559375,-2.3954687)(0.4509375,-2.3754687) \curveto(0.4459375,-2.3554688)(0.4409375,-2.3104687)(0.4409375,-2.2854688) \curveto(0.4409375,-2.2604687)(0.4459375,-2.2154686)(0.4509375,-2.1954687) \curveto(0.4559375,-2.1754687)(0.4609375,-2.1304688)(0.4609375,-2.1054688) \curveto(0.4609375,-2.0804687)(0.4609375,-2.0304687)(0.4609375,-2.0054688) \curveto(0.4609375,-1.9804688)(0.4609375,-1.9304688)(0.4609375,-1.9054687) \curveto(0.4609375,-1.8804687)(0.4559375,-1.8354688)(0.4509375,-1.8154688) \curveto(0.4459375,-1.7954688)(0.4409375,-1.7504687)(0.4409375,-1.7254688) \curveto(0.4409375,-1.7004688)(0.4409375,-1.6504687)(0.4409375,-1.6254687) \curveto(0.4409375,-1.6004688)(0.4409375,-1.5504688)(0.4409375,-1.5254687) 
\curveto(0.4409375,-1.5004687)(0.4459375,-1.4554688)(0.4509375,-1.4354688) \curveto(0.4559375,-1.4154687)(0.4609375,-1.3704687)(0.4609375,-1.3454688) \curveto(0.4609375,-1.3204688)(0.4609375,-1.2704687)(0.4609375,-1.2454687) \curveto(0.4609375,-1.2204688)(0.4759375,-1.1854688)(0.4909375,-1.1754688) \curveto(0.5059375,-1.1654687)(0.5459375,-1.1554687)(0.5709375,-1.1554687) \curveto(0.5959375,-1.1554687)(0.6459375,-1.1554687)(0.6709375,-1.1554687) \curveto(0.6959375,-1.1554687)(0.7459375,-1.1554687)(0.7709375,-1.1554687) \curveto(0.7959375,-1.1554687)(0.8459375,-1.1554687)(0.8709375,-1.1554687) \curveto(0.8959375,-1.1554687)(0.9459375,-1.1554687)(0.9709375,-1.1554687) \curveto(0.9959375,-1.1554687)(1.0409375,-1.1504687)(1.0609375,-1.1454687) \curveto(1.0809375,-1.1404687)(1.1259375,-1.1354687)(1.1509376,-1.1354687) \curveto(1.1759375,-1.1354687)(1.2259375,-1.1354687)(1.2509375,-1.1354687) \curveto(1.2759376,-1.1354687)(1.3209375,-1.1404687)(1.3409375,-1.1454687) \curveto(1.3609375,-1.1504687)(1.4059376,-1.1554687)(1.4309375,-1.1554687) \curveto(1.4559375,-1.1554687)(1.5059375,-1.1554687)(1.5309376,-1.1554687) \curveto(1.5559375,-1.1554687)(1.6009375,-1.1604687)(1.6209375,-1.1654687) \curveto(1.6409374,-1.1704688)(1.6809375,-1.1804688)(1.7009375,-1.1854688) \curveto(1.7209375,-1.1904688)(1.7609375,-1.2004688)(1.7809376,-1.2054688) \curveto(1.8009375,-1.2104688)(1.8409375,-1.2204688)(1.8609375,-1.2254688) \curveto(1.8809375,-1.2304688)(1.9209375,-1.2404687)(1.9409375,-1.2454687) \curveto(1.9609375,-1.2504687)(2.0009375,-1.2604687)(2.0209374,-1.2654687) \curveto(2.0409374,-1.2704687)(2.0759375,-1.2854687)(2.0909376,-1.2954688) \curveto(2.1059375,-1.3054688)(2.1359375,-1.3254688)(2.1509376,-1.3354688) \curveto(2.1659374,-1.3454688)(2.1909375,-1.3704687)(2.2009375,-1.3854687) \curveto(2.2109375,-1.4004687)(2.2309375,-1.4304688)(2.2409375,-1.4454688) \curveto(2.2509375,-1.4604688)(2.2709374,-1.4904687)(2.2809374,-1.5054687) 
\curveto(2.2909374,-1.5204687)(2.3109374,-1.5504688)(2.3209374,-1.5654688) \curveto(2.3309374,-1.5804688)(2.3459375,-1.6154687)(2.3509376,-1.6354687) \curveto(2.3559375,-1.6554687)(2.3659375,-1.6954688)(2.3709376,-1.7154688) \curveto(2.3759375,-1.7354687)(2.3859375,-1.7754687)(2.3909376,-1.7954688) \curveto(2.3959374,-1.8154688)(2.4059374,-1.8554688)(2.4109375,-1.8754687) \curveto(2.4159374,-1.8954687)(2.4209375,-1.9404688)(2.4209375,-1.9654688) \curveto(2.4209375,-1.9904687)(2.4259374,-2.0354688)(2.4309375,-2.0554688) \curveto(2.4359374,-2.0754688)(2.4409375,-2.1204689)(2.4409375,-2.1454687) \curveto(2.4409375,-2.1704688)(2.4409375,-2.2204688)(2.4409375,-2.2454689) \curveto(2.4409375,-2.2704687)(2.4409375,-2.3204687)(2.4409375,-2.3454688) \curveto(2.4409375,-2.3704689)(2.4459374,-2.4154687)(2.4509375,-2.4354687) \curveto(2.4559374,-2.4554687)(2.4609375,-2.5004687)(2.4609375,-2.5254688) \curveto(2.4609375,-2.5504687)(2.4609375,-2.6004686)(2.4609375,-2.6254687) \curveto(2.4609375,-2.6504688)(2.4609375,-2.7004688)(2.4609375,-2.7254686) \curveto(2.4609375,-2.7504687)(2.4609375,-2.8004687)(2.4609375,-2.8254688) \curveto(2.4609375,-2.8504686)(2.4609375,-2.9004688)(2.4609375,-2.9254687) \curveto(2.4609375,-2.9504688)(2.4609375,-3.0004687)(2.4609375,-3.0254688) \curveto(2.4609375,-3.0504687)(2.4609375,-3.1004686)(2.4609375,-3.1254687) \curveto(2.4609375,-3.1504688)(2.4609375,-3.1754687)(2.4609375,-3.1754687) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color3208g,gradend=color3208g,gradmidpoint=1.0] { \newpath \moveto(6.4609375,-3.1554687) \lineto(6.4509373,-3.1154687) \curveto(6.4459376,-3.0954688)(6.4409375,-3.0504687)(6.4409375,-3.0254688) \curveto(6.4409375,-3.0004687)(6.4409375,-2.9504688)(6.4409375,-2.9254687) \curveto(6.4409375,-2.9004688)(6.4409375,-2.8504686)(6.4409375,-2.8254688) \curveto(6.4409375,-2.8004687)(6.4409375,-2.7504687)(6.4409375,-2.7254686) \curveto(6.4409375,-2.7004688)(6.4409375,-2.6504688)(6.4409375,-2.6254687) 
\curveto(6.4409375,-2.6004686)(6.4409375,-2.5504687)(6.4409375,-2.5254688) \curveto(6.4409375,-2.5004687)(6.4409375,-2.4504688)(6.4409375,-2.4254687) \curveto(6.4409375,-2.4004688)(6.4409375,-2.3504686)(6.4409375,-2.3254688) \curveto(6.4409375,-2.3004687)(6.4409375,-2.2504687)(6.4409375,-2.2254686) \curveto(6.4409375,-2.2004688)(6.4409375,-2.1504688)(6.4409375,-2.1254687) \curveto(6.4409375,-2.1004686)(6.4409375,-2.0504687)(6.4409375,-2.0254688) \curveto(6.4409375,-2.0004687)(6.4409375,-1.9504688)(6.4409375,-1.9254688) \curveto(6.4409375,-1.9004687)(6.4409375,-1.8504688)(6.4409375,-1.8254688) \curveto(6.4409375,-1.8004688)(6.4409375,-1.7504687)(6.4409375,-1.7254688) \curveto(6.4409375,-1.7004688)(6.4409375,-1.6504687)(6.4409375,-1.6254687) \curveto(6.4409375,-1.6004688)(6.4409375,-1.5504688)(6.4409375,-1.5254687) \curveto(6.4409375,-1.5004687)(6.4409375,-1.4504688)(6.4409375,-1.4254688) \curveto(6.4409375,-1.4004687)(6.4409375,-1.3504688)(6.4409375,-1.3254688) \curveto(6.4409375,-1.3004688)(6.4409375,-1.2504687)(6.4409375,-1.2254688) \curveto(6.4409375,-1.2004688)(6.4209375,-1.1704688)(6.4009376,-1.1654687) \curveto(6.3809376,-1.1604687)(6.3359375,-1.1554687)(6.3109374,-1.1554687) \curveto(6.2859373,-1.1554687)(6.2359376,-1.1554687)(6.2109375,-1.1554687) \curveto(6.1859374,-1.1554687)(6.1359377,-1.1554687)(6.1109376,-1.1554687) \curveto(6.0859375,-1.1554687)(6.0359373,-1.1554687)(6.0109377,-1.1554687) \curveto(5.9859376,-1.1554687)(5.9359374,-1.1554687)(5.9109373,-1.1554687) \curveto(5.8859377,-1.1554687)(5.8359375,-1.1554687)(5.8109374,-1.1554687) \curveto(5.7859373,-1.1554687)(5.7359376,-1.1554687)(5.7109375,-1.1554687) \curveto(5.6859374,-1.1554687)(5.6409373,-1.1604687)(5.6209373,-1.1654687) \curveto(5.6009374,-1.1704688)(5.5559373,-1.1754688)(5.5309377,-1.1754688) \curveto(5.5059376,-1.1754688)(5.4559374,-1.1754688)(5.4309373,-1.1754688) \curveto(5.4059377,-1.1754688)(5.3609376,-1.1804688)(5.3409376,-1.1854688) 
\curveto(5.3209376,-1.1904688)(5.2759376,-1.1954688)(5.2509375,-1.1954688) \curveto(5.2259374,-1.1954688)(5.1809373,-1.2004688)(5.1609373,-1.2054688) \curveto(5.1409373,-1.2104688)(5.1009374,-1.2254688)(5.0809374,-1.2354687) \curveto(5.0609374,-1.2454687)(5.0209374,-1.2604687)(5.0009375,-1.2654687) \curveto(4.9809375,-1.2704687)(4.9409375,-1.2804687)(4.9209375,-1.2854687) \curveto(4.9009376,-1.2904687)(4.8659377,-1.3054688)(4.8509374,-1.3154688) \curveto(4.8359375,-1.3254688)(4.8059373,-1.3454688)(4.7909374,-1.3554688) \curveto(4.7759376,-1.3654687)(4.7459373,-1.3904687)(4.7309375,-1.4054687) \curveto(4.7159376,-1.4204688)(4.6909375,-1.4504688)(4.6809373,-1.4654688) \curveto(4.6709375,-1.4804688)(4.6509376,-1.5104687)(4.6409373,-1.5254687) \curveto(4.6309376,-1.5404687)(4.6109376,-1.5704688)(4.6009374,-1.5854688) \curveto(4.5909376,-1.6004688)(4.5759373,-1.6354687)(4.5709376,-1.6554687) \curveto(4.5659375,-1.6754688)(4.5509377,-1.7104688)(4.5409374,-1.7254688) \curveto(4.5309377,-1.7404687)(4.5159373,-1.7754687)(4.5109377,-1.7954688) \curveto(4.5059376,-1.8154688)(4.5009375,-1.8604687)(4.5009375,-1.8854687) \curveto(4.5009375,-1.9104687)(4.4959373,-1.9554688)(4.4909377,-1.9754688) \curveto(4.4859376,-1.9954687)(4.4809375,-2.0404687)(4.4809375,-2.0654688) \curveto(4.4809375,-2.0904686)(4.4759374,-2.1354687)(4.4709377,-2.1554687) \curveto(4.4659376,-2.1754687)(4.4609375,-2.2204688)(4.4609375,-2.2454689) \curveto(4.4609375,-2.2704687)(4.4559374,-2.3204687)(4.4509373,-2.3454688) \curveto(4.4459376,-2.3704689)(4.4409375,-2.4204688)(4.4409375,-2.4454687) \curveto(4.4409375,-2.4704688)(4.4409375,-2.5204687)(4.4409375,-2.5454688) \curveto(4.4409375,-2.5704687)(4.4359374,-2.6204689)(4.4309373,-2.6454687) \curveto(4.4259377,-2.6704688)(4.4209375,-2.7204688)(4.4209375,-2.7454689) \curveto(4.4209375,-2.7704687)(4.4259377,-2.8154688)(4.4309373,-2.8354688) \curveto(4.4359374,-2.8554688)(4.4409375,-2.9004688)(4.4409375,-2.9254687) 
\curveto(4.4409375,-2.9504688)(4.4409375,-3.0004687)(4.4409375,-3.0254688) \curveto(4.4409375,-3.0504687)(4.4459376,-3.0954688)(4.4509373,-3.1154687) \curveto(4.4559374,-3.1354687)(4.4609375,-3.1604688)(4.4609375,-3.1754687) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color3208g,gradend=color3208g,gradmidpoint=1.0] { \newpath \moveto(3.5609374,2.8245313) \lineto(3.6009376,2.8045313) \curveto(3.6209376,2.7945313)(3.6609375,2.7745314)(3.6809375,2.7645311) \curveto(3.7009375,2.7545311)(3.7359376,2.7345312)(3.7509375,2.7245312) \curveto(3.7659376,2.7145312)(3.7959375,2.6945312)(3.8109374,2.6845312) \curveto(3.8259375,2.6745312)(3.8559375,2.6545312)(3.8709376,2.6445312) \curveto(3.8859375,2.6345313)(3.9109375,2.6095312)(3.9209375,2.5945313) \curveto(3.9309375,2.5795312)(3.9509375,2.5495312)(3.9609375,2.5345314) \curveto(3.9709375,2.5195312)(3.9909375,2.4895313)(4.0009375,2.4745312) \curveto(4.0109377,2.4595313)(4.0309377,2.4295313)(4.0409374,2.4145312) \curveto(4.0509377,2.3995314)(4.0709376,2.3695312)(4.0809374,2.3545313) \curveto(4.0909376,2.3395312)(4.1109376,2.3045313)(4.1209373,2.2845314) \curveto(4.1309376,2.2645311)(4.1509376,2.2245312)(4.1609373,2.2045312) \curveto(4.1709375,2.1845312)(4.1909375,2.1445312)(4.2009373,2.1245313) \curveto(4.2109375,2.1045313)(4.2309375,2.0695312)(4.2409377,2.0545313) \curveto(4.2509375,2.0395312)(4.2659373,2.0045311)(4.2709374,1.9845313) \curveto(4.2759376,1.9645313)(4.2909374,1.9295312)(4.3009377,1.9145312) \curveto(4.3109374,1.8995312)(4.3309374,1.8695313)(4.3409376,1.8545313) \curveto(4.3509374,1.8395313)(4.3659377,1.8045312)(4.3709373,1.7845312) \curveto(4.3759375,1.7645313)(4.3909373,1.7295313)(4.4009376,1.7145313) \curveto(4.4109373,1.6995312)(4.4309373,1.6695312)(4.4409375,1.6545312) \curveto(4.4509373,1.6395313)(4.4759374,1.6045313)(4.4909377,1.5845313) \curveto(4.5059376,1.5645312)(4.5309377,1.5295312)(4.5409374,1.5145313) \curveto(4.5509377,1.4995313)(4.5709376,1.4695313)(4.5809374,1.4545312) 
\curveto(4.5909376,1.4395312)(4.6109376,1.4095312)(4.6209373,1.3945312) \curveto(4.6309376,1.3795313)(4.6509376,1.3495313)(4.6609373,1.3345313) \curveto(4.6709375,1.3195312)(4.6959376,1.2945312)(4.7109375,1.2845312) \curveto(4.7259374,1.2745312)(4.7559376,1.2545313)(4.7709374,1.2445313) \curveto(4.7859373,1.2345313)(4.8159375,1.2095313)(4.8309374,1.1945312) \curveto(4.8459377,1.1795312)(4.8759375,1.1545312)(4.8909373,1.1445312) \curveto(4.9059377,1.1345313)(4.9359374,1.1145313)(4.9509373,1.1045313) \curveto(4.9659376,1.0945313)(4.9959373,1.0745312)(5.0109377,1.0645312) \curveto(5.0259376,1.0545312)(5.0559373,1.0345312)(5.0709376,1.0245312) \curveto(5.0859375,1.0145313)(5.1159377,0.9945313)(5.1309376,0.9845312) \curveto(5.1459374,0.97453123)(5.1859374,0.95953125)(5.2109375,0.95453125) \curveto(5.2359376,0.94953126)(5.2809377,0.93953127)(5.3009377,0.9345313) \curveto(5.3209376,0.9295313)(5.3609376,0.9195312)(5.3809376,0.91453123) \curveto(5.4009376,0.90953124)(5.4409375,0.89953125)(5.4609375,0.89453125) \curveto(5.4809375,0.88953125)(5.5259376,0.87953126)(5.5509377,0.87453127) \curveto(5.5759373,0.8695313)(5.6259375,0.8645313)(5.6509376,0.8645313) \curveto(5.6759377,0.8645313)(5.7259374,0.8645313)(5.7509375,0.8645313) \curveto(5.7759376,0.8645313)(5.8259373,0.8595312)(5.8509374,0.8545312) \curveto(5.8759375,0.84953123)(5.9309373,0.84453124)(5.9609375,0.84453124) \curveto(5.9909377,0.84453124)(6.0459375,0.84453124)(6.0709376,0.84453124) \curveto(6.0959377,0.84453124)(6.1459374,0.84453124)(6.1709375,0.84453124) \curveto(6.1959376,0.84453124)(6.2459373,0.83953124)(6.2709374,0.83453125) \curveto(6.2959375,0.82953125)(6.3459377,0.82453126)(6.3709373,0.82453126) \curveto(6.3959374,0.82453126)(6.4259377,0.84453124)(6.4309373,0.8645313) \curveto(6.4359374,0.88453126)(6.4459376,0.9245312)(6.4509373,0.94453126) \curveto(6.4559374,0.96453124)(6.4609375,1.0095313)(6.4609375,1.0345312) \curveto(6.4609375,1.0595312)(6.4609375,1.1095313)(6.4609375,1.1345313) 
\curveto(6.4609375,1.1595312)(6.4609375,1.2095313)(6.4609375,1.2345313) \curveto(6.4609375,1.2595313)(6.4559374,1.3045312)(6.4509373,1.3245312) \curveto(6.4459376,1.3445313)(6.4409375,1.3895313)(6.4409375,1.4145312) \curveto(6.4409375,1.4395312)(6.4409375,1.4895313)(6.4409375,1.5145313) \curveto(6.4409375,1.5395312)(6.4409375,1.5895313)(6.4409375,1.6145313) \curveto(6.4409375,1.6395313)(6.4409375,1.6895312)(6.4409375,1.7145313) \curveto(6.4409375,1.7395313)(6.4409375,1.7895312)(6.4409375,1.8145312) \curveto(6.4409375,1.8395313)(6.4409375,1.8895313)(6.4409375,1.9145312) \curveto(6.4409375,1.9395312)(6.4409375,1.9895313)(6.4409375,2.0145311) \curveto(6.4409375,2.0395312)(6.4409375,2.0895312)(6.4409375,2.1145313) \curveto(6.4409375,2.1395311)(6.4409375,2.1895313)(6.4409375,2.2145312) \curveto(6.4409375,2.2395313)(6.4409375,2.2895312)(6.4409375,2.3145313) \curveto(6.4409375,2.3395312)(6.4409375,2.3895311)(6.4409375,2.4145312) \curveto(6.4409375,2.4395313)(6.4409375,2.4895313)(6.4409375,2.5145311) \curveto(6.4409375,2.5395312)(6.4409375,2.5895312)(6.4409375,2.6145313) \curveto(6.4409375,2.6395311)(6.4409375,2.6895313)(6.4409375,2.7145312) \curveto(6.4409375,2.7395313)(6.4409375,2.7795312)(6.4409375,2.8245313) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color3208g,gradend=color3208g,gradmidpoint=1.0] { \newpath \moveto(0.4409375,2.8245313) \lineto(0.4409375,2.7745314) \curveto(0.4409375,2.7495313)(0.4409375,2.6995313)(0.4409375,2.6745312) \curveto(0.4409375,2.6495314)(0.4409375,2.5995312)(0.4409375,2.5745313) \curveto(0.4409375,2.5495312)(0.4409375,2.4995313)(0.4409375,2.4745312) \curveto(0.4409375,2.4495313)(0.4409375,2.3995314)(0.4409375,2.3745313) \curveto(0.4409375,2.3495312)(0.4409375,2.2995312)(0.4409375,2.2745314) \curveto(0.4409375,2.2495313)(0.4409375,2.1995313)(0.4409375,2.1745312) \curveto(0.4409375,2.1495314)(0.4409375,2.0995312)(0.4409375,2.0745313) \curveto(0.4409375,2.0495312)(0.4409375,1.9995313)(0.4409375,1.9745313) 
\curveto(0.4409375,1.9495312)(0.4409375,1.8995312)(0.4409375,1.8745313) \curveto(0.4409375,1.8495313)(0.4409375,1.7995312)(0.4409375,1.7745312) \curveto(0.4409375,1.7495313)(0.4409375,1.6995312)(0.4409375,1.6745312) \curveto(0.4409375,1.6495312)(0.4409375,1.5995313)(0.4409375,1.5745312) \curveto(0.4409375,1.5495312)(0.4409375,1.5095313)(0.4409375,1.4945313) \curveto(0.4409375,1.4795313)(0.4409375,1.4395312)(0.4409375,1.4145312) \curveto(0.4409375,1.3895313)(0.4409375,1.3395313)(0.4409375,1.3145312) \curveto(0.4409375,1.2895312)(0.4409375,1.2395313)(0.4409375,1.2145313) \curveto(0.4409375,1.1895312)(0.4409375,1.1395313)(0.4409375,1.1145313) \curveto(0.4409375,1.0895313)(0.4409375,1.0395312)(0.4409375,1.0145313) \curveto(0.4409375,0.9895313)(0.4409375,0.93953127)(0.4409375,0.91453123) \curveto(0.4409375,0.88953125)(0.4609375,0.8595312)(0.4809375,0.8545312) \curveto(0.5009375,0.84953123)(0.5459375,0.84453124)(0.5709375,0.84453124) \curveto(0.5959375,0.84453124)(0.6409375,0.83953124)(0.6609375,0.83453125) \curveto(0.6809375,0.82953125)(0.7259375,0.82453126)(0.7509375,0.82453126) \curveto(0.7759375,0.82453126)(0.8259375,0.82453126)(0.8509375,0.82453126) \curveto(0.8759375,0.82453126)(0.9259375,0.82453126)(0.9509375,0.82453126) \curveto(0.9759375,0.82453126)(1.0159374,0.82453126)(1.0309376,0.82453126) \curveto(1.0459375,0.82453126)(1.0859375,0.82453126)(1.1109375,0.82453126) \curveto(1.1359375,0.82453126)(1.1809375,0.82953125)(1.2009375,0.83453125) \curveto(1.2209375,0.83953124)(1.2659374,0.84453124)(1.2909375,0.84453124) \curveto(1.3159375,0.84453124)(1.3609375,0.84953123)(1.3809375,0.8545312) \curveto(1.4009376,0.8595312)(1.4409375,0.8695313)(1.4609375,0.87453127) \curveto(1.4809375,0.87953126)(1.5209374,0.88953125)(1.5409375,0.89453125) \curveto(1.5609375,0.89953125)(1.6009375,0.90953124)(1.6209375,0.91453123) \curveto(1.6409374,0.9195312)(1.6809375,0.9295313)(1.7009375,0.9345313) \curveto(1.7209375,0.93953127)(1.7559375,0.95453125)(1.7709374,0.96453124) 
\curveto(1.7859375,0.97453123)(1.8159375,0.9945313)(1.8309375,1.0045313) \curveto(1.8459375,1.0145313)(1.8759375,1.0345312)(1.8909374,1.0445312) \curveto(1.9059376,1.0545312)(1.9359375,1.0745312)(1.9509375,1.0845313) \curveto(1.9659375,1.0945313)(1.9959375,1.1145313)(2.0109375,1.1245313) \curveto(2.0259376,1.1345313)(2.0559375,1.1545312)(2.0709374,1.1645312) \curveto(2.0859375,1.1745312)(2.1109376,1.1995312)(2.1209376,1.2145313) \curveto(2.1309376,1.2295313)(2.1559374,1.2545313)(2.1709375,1.2645313) \curveto(2.1859374,1.2745312)(2.2109375,1.2995312)(2.2209375,1.3145312) \curveto(2.2309375,1.3295312)(2.2509375,1.3595313)(2.2609375,1.3745313) \curveto(2.2709374,1.3895313)(2.2909374,1.4195312)(2.3009374,1.4345312) \curveto(2.3109374,1.4495312)(2.3309374,1.4795313)(2.3409376,1.4945313) \curveto(2.3509376,1.5095313)(2.3759375,1.5345312)(2.3909376,1.5445312) \curveto(2.4059374,1.5545312)(2.4309375,1.5795312)(2.4409375,1.5945313) \curveto(2.4509375,1.6095313)(2.4709375,1.6395313)(2.4809375,1.6545312) \curveto(2.4909375,1.6695312)(2.5109375,1.7045312)(2.5209374,1.7245313) \curveto(2.5309374,1.7445313)(2.5459375,1.7845312)(2.5509374,1.8045312) \curveto(2.5559375,1.8245312)(2.5709374,1.8595313)(2.5809374,1.8745313) \curveto(2.5909376,1.8895313)(2.6059375,1.9145312)(2.6109376,1.9245312) \curveto(2.6159375,1.9345312)(2.6309376,1.9595313)(2.6409376,1.9745313) \curveto(2.6509376,1.9895313)(2.6659374,2.0245314)(2.6709375,2.0445313) \curveto(2.6759374,2.0645313)(2.6909375,2.0995312)(2.7009375,2.1145313) \curveto(2.7109375,2.1295311)(2.7309375,2.1595314)(2.7409375,2.1745312) \curveto(2.7509375,2.1895313)(2.7659376,2.2245312)(2.7709374,2.2445312) \curveto(2.7759376,2.2645311)(2.7909374,2.2995312)(2.8009374,2.3145313) \curveto(2.8109374,2.3295312)(2.8309374,2.3595312)(2.8409376,2.3745313) \curveto(2.8509376,2.3895311)(2.8709376,2.4195313)(2.8809376,2.4345312) \curveto(2.8909376,2.4495313)(2.9109375,2.4795313)(2.9209375,2.4945312) 
\curveto(2.9309375,2.5095313)(2.9509375,2.5395312)(2.9609375,2.5545313) \curveto(2.9709375,2.5695312)(2.9909375,2.6045313)(3.0009375,2.6245313) \curveto(3.0109375,2.6445312)(3.0359375,2.6745312)(3.0509374,2.6845312) \curveto(3.0659375,2.6945312)(3.0959375,2.7145312)(3.1109376,2.7245312) \curveto(3.1259375,2.7345312)(3.1609375,2.7495313)(3.1809375,2.7545311) \curveto(3.2009375,2.7595313)(3.2359376,2.7745314)(3.2509375,2.7845314) \curveto(3.2659376,2.7945313)(3.2959375,2.8045313)(3.3409376,2.8045313) } \psframe[linewidth=0.06,dimen=outer](6.4809375,2.8345313)(0.4309375,-3.2154686) \psline[linewidth=0.1cm](0.4409375,0.82453126)(1.0609375,0.82453126) \psline[linewidth=0.1cm](5.8409376,0.84453124)(6.4609375,0.84453124) \psbezier[linewidth=0.06](1.0409375,0.82453126)(2.8809376,0.84453124)(2.4609458,2.808614)(3.4609375,2.8045313)(4.4609294,2.8004484)(4.0609374,0.84453124)(5.8609376,0.84453124) \psline[linewidth=0.1cm](0.4409375,-1.1554687)(1.0609375,-1.1554687) \psline[linewidth=0.1cm](5.8409376,-1.1754688)(6.4609375,-1.1754688) \psline[linewidth=0.1cm](2.4609375,-3.2154686)(2.4609375,-2.5954688) \psline[linewidth=0.1cm](4.4609375,-3.1954687)(4.4609375,-2.5754688) \psbezier[linewidth=0.06](1.0409375,-1.1554687)(2.2609375,-1.1554687)(2.4609375,-1.3554688)(2.4609375,-2.6154687) \psbezier[linewidth=0.06](4.4609375,-2.5954688)(4.4609375,-1.3954687)(4.6809373,-1.1754688)(5.8609376,-1.1754688) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.4409375,2.7645311)(0.4409375,3.5245314) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.2609377,-3.1954687)(7.1809373,-3.1954687) \usefont{T1}{ptm}{m}{n} \rput(3.3923438,-3.4454687){$1/2$} \usefont{T1}{ptm}{m}{n} \rput(4.452344,-3.4454687){$1$} \usefont{T1}{ptm}{m}{n} \rput(6.412344,-3.4454687){$2$} \usefont{T1}{ptm}{m}{n} \rput(2.4123437,-3.4254687){$0$} \usefont{T1}{ptm}{m}{n} \rput(0.29234374,-3.4254687){$-1$} \usefont{T1}{ptm}{m}{n} 
\rput(0.25234374,-1.1854688){$0$} \usefont{T1}{ptm}{m}{n} \rput(0.27234375,0.7945312){$1$} \usefont{T1}{ptm}{m}{n} \rput(0.25234374,2.7945313){$2$} \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](3.4609375,-3.1554687)(3.4609375,2.8045313) \usefont{T1}{ptm}{m}{n} \rput(0.29234374,3.4345312){$y$} \usefont{T1}{ptm}{m}{n} \rput(6.9623437,-3.4254687){$x$} \usefont{T1}{ptm}{m}{n} \rput(2.7723436,1.3745313){$\gamma_0$} \usefont{T1}{ptm}{m}{n} \rput(2.6323438,-1.3254688){$\gamma_{-1}$} \usefont{T1}{ptm}{m}{n} \rput(4.432344,-1.3254688){$\gamma_{1}$} \usefont{T1}{ptm}{m}{n} \rput(3.7823439,0.57453126){$T$} \psdots[dotsize=0.16](0.4609375,0.82453126) \psdots[dotsize=0.16](0.4609375,-1.1554687) \psdots[dotsize=0.16](0.4409375,2.7845314) \psdots[dotsize=0.16](0.4609375,-3.1754687) \psdots[dotsize=0.16](2.4609375,-3.1954687) \psdots[dotsize=0.16](4.4609375,-3.1754687) \psdots[dotsize=0.16](3.4609375,-3.1954687) \psdots[dotsize=0.16](6.4609375,-3.1954687) \end{pspicture} } \\ Fig. 3a: ``base'' subset $T$ of the square $R$ \end{center} Let \begin{equation}\label{eq:X-def} X=T\times [0,1/2]\subset \{(x,y,p)\mid x,y\in [-1,2],p\in [0,1/2]\}. \end{equation} Denote the left and right parts of the boundary of the level \begin{equation}\label{eq:Tp} T_p:=T\times \{p\} \end{equation} by \begin{equation}\label{eq:Epm} \begin{aligned} E_{p,-1}&:=\{(-1,y,p)\mid y\in [0,1]\} \text{ and}\\ E_{p,1}&:=\{(2,y,p)\mid y\in [0,1]\}.
\end{aligned} \end{equation} We keep the same notation and denote the lower boundary $\{y=-1\}$ of $T_p$ by \begin{equation}\label{eq:Ip3dim} I_p=\{(x,-1,p)\mid x\in [0,1]\}, \end{equation} and split it into three parts \begin{equation}\label{eq:Ipk3dim} \begin{aligned} I_{p,-1}&:=\{(x,-1,p)\mid x\in [0;\frac{1-p}{2})\},\\ I_{p,0}&:=\{(x,-1,p)\mid x\in [\frac{1-p}{2};\frac{1+p}{2}]\}, \text{ and}\\ I_{p,1}&:=\{(x,-1,p)\mid x\in (\frac{1+p}{2};1]\} \end{aligned} \end{equation} as in~\eqref{eq:Ip},~\eqref{eq:Ipk} (the only difference in notation is the additional coordinate $y$, fixed at $y=-1$). We take $X$ and for every $p\in [0,1/2]$ glue linearly both intervals $E_{p,-1}$ and $E_{p,1}$ to \textit{the same} interval $I_{q(p)}$, where the function $q$ is defined in~\eqref{eq:q-def}. This equivalence is specified as follows: \begin{equation}\label{eq:equiv3} \begin{aligned} (-1,y,p)\equiv(y,-1,q(p));\\ (2,y,p)\equiv(y,-1,q(p)). \end{aligned} \end{equation} Thus we obtain a stratified $3$-manifold $\widetilde X$ with one $2$-dimensional stratum \begin{equation}\label{eq:S_2} S_2:=\bigcup_{p\in[0,1/2]} I_{q(p)}=\{(x,-1,p)\mid x\in[0,1],p\in[0,1/3]\}, \end{equation} which coincides (in $\widetilde X$) with both rectangles \begin{equation}\label{eq:Fpm} F_{\pm}:=\bigcup_{p\in[0,1/2]} E_{p,\pm 1}, \end{equation} see~\eqref{eq:Epm}, according to the equivalence~\eqref{eq:equiv3}. \subsection{Construction of a flow} To describe the flow on $\widetilde X$, we first describe the flow on $X$. This flow has two saddles $a_p$ and $b_p$ on every level $T_p$, see~\eqref{eq:Tp}. They lie at the intersection of the curve $\gamma_{p,0}=\gamma_0 \times \{p\}$ of the level $T_p$ with the line $l_p=\{(x,y,p)\mid y=2-p/2\}$; hence they depend smoothly on $p$ and collide at the point $(1/2,2,0)$ for $p=0$. Let \begin{equation}\label{eq:Xpm} \begin{aligned} X^-:=\{(x,y,p)\in X\mid y< 2-p/2\},\\ X^+:=\{(x,y,p)\in X\mid y\geqslant 2-p/2\}.
\end{aligned}
\end{equation}
The set $X^-$ can be described as ``the part of $X$ that lies below the plane $L$ containing all the $a_p$'s and $b_p$'s''. The function $p$ is a first integral of the flow restricted to $X^-$. The flow has no singular points on $X^-$. The left and right endpoints of the interval $I_{p,0}$ (see~\eqref{eq:Ip3dim},~\eqref{eq:Ipk3dim}) lie on the incoming separatrices of $a_p$ and $b_p$ respectively (for $p=0$ these endpoints collide, just as $a_0$ and $b_0$ do). The outgoing separatrices of $a_p$ and $b_p$ for the flow restricted to $X^-_p=X_p\cap X^-$ are the respective parts of $\gamma_{p,0}$. All other trajectories in $X^-_p$ go from $I_{p,\pm 1}$ to $E_{p,\pm 1}$, and from $I_{p,0}$ to the interval $E_{p,0}$ on the line $l_p$ between $a_p$ and $b_p$ (see Figures~3b,~3c).
\begin{center}
\scalebox{1.2}
{
\begin{pspicture}(0,-3.6359375)(8.45375,3.6359375)
\definecolor{color4517g}{rgb}{0.6,0.6,0.6}
\pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4517g,gradend=color4517g,gradmidpoint=1.0]
{
\newpath
\moveto(1.3009375,-3.1625)
\lineto(1.3009375,-3.1125)
\curveto(1.3009375,-3.0875)(1.3009375,-3.0375)(1.3009375,-3.0125)
\curveto(1.3009375,-2.9875)(1.3009375,-2.9375)(1.3009375,-2.9125)
\curveto(1.3009375,-2.8875)(1.3009375,-2.8375)(1.3009375,-2.8125)
\curveto(1.3009375,-2.7875)(1.2959375,-2.7425)(1.2909375,-2.7225)
\curveto(1.2859375,-2.7025)(1.2809376,-2.6575)(1.2809376,-2.6325)
\curveto(1.2809376,-2.6075)(1.2809376,-2.5575)(1.2809376,-2.5325)
\curveto(1.2809376,-2.5075)(1.2859375,-2.4625)(1.2909375,-2.4425)
\curveto(1.2959375,-2.4225)(1.2959375,-2.3825)(1.2909375,-2.3625)
\curveto(1.2859375,-2.3425)(1.2809376,-2.2975)(1.2809376,-2.2725)
\curveto(1.2809376,-2.2475)(1.2859375,-2.2025)(1.2909375,-2.1825)
\curveto(1.2959375,-2.1625)(1.3009375,-2.1175)(1.3009375,-2.0925)
\curveto(1.3009375,-2.0675)(1.3009375,-2.0175)(1.3009375,-1.9925)
\curveto(1.3009375,-1.9675)(1.3009375,-1.9175)(1.3009375,-1.8925)
\curveto(1.3009375,-1.8675)(1.2959375,-1.8225)(1.2909375,-1.8025) \curveto(1.2859375,-1.7825)(1.2809376,-1.7375)(1.2809376,-1.7125) \curveto(1.2809376,-1.6875)(1.2809376,-1.6375)(1.2809376,-1.6125) \curveto(1.2809376,-1.5875)(1.2809376,-1.5375)(1.2809376,-1.5125) \curveto(1.2809376,-1.4875)(1.2859375,-1.4425)(1.2909375,-1.4225) \curveto(1.2959375,-1.4025)(1.3009375,-1.3575)(1.3009375,-1.3325) \curveto(1.3009375,-1.3075)(1.3009375,-1.2575)(1.3009375,-1.2325) \curveto(1.3009375,-1.2075)(1.3159375,-1.1725)(1.3309375,-1.1625) \curveto(1.3459375,-1.1525)(1.3859375,-1.1425)(1.4109375,-1.1425) \curveto(1.4359375,-1.1425)(1.4859375,-1.1425)(1.5109375,-1.1425) \curveto(1.5359375,-1.1425)(1.5859375,-1.1425)(1.6109375,-1.1425) \curveto(1.6359375,-1.1425)(1.6859375,-1.1425)(1.7109375,-1.1425) \curveto(1.7359375,-1.1425)(1.7859375,-1.1425)(1.8109375,-1.1425) \curveto(1.8359375,-1.1425)(1.8809375,-1.1375)(1.9009376,-1.1325) \curveto(1.9209375,-1.1275)(1.9659375,-1.1225)(1.9909375,-1.1225) \curveto(2.0159376,-1.1225)(2.0659375,-1.1225)(2.0909376,-1.1225) \curveto(2.1159375,-1.1225)(2.1609375,-1.1275)(2.1809375,-1.1325) \curveto(2.2009375,-1.1375)(2.2459376,-1.1425)(2.2709374,-1.1425) \curveto(2.2959375,-1.1425)(2.3459375,-1.1425)(2.3709376,-1.1425) \curveto(2.3959374,-1.1425)(2.4409375,-1.1475)(2.4609375,-1.1525) \curveto(2.4809375,-1.1575)(2.5209374,-1.1675)(2.5409374,-1.1725) \curveto(2.5609374,-1.1775)(2.6009376,-1.1875)(2.6209376,-1.1925) \curveto(2.6409376,-1.1975)(2.6809375,-1.2075)(2.7009375,-1.2125) \curveto(2.7209375,-1.2175)(2.7609375,-1.2275)(2.7809374,-1.2325) \curveto(2.8009374,-1.2375)(2.8409376,-1.2475)(2.8609376,-1.2525) \curveto(2.8809376,-1.2575)(2.9159374,-1.2725)(2.9309375,-1.2825) \curveto(2.9459374,-1.2925)(2.9759376,-1.3125)(2.9909375,-1.3225) \curveto(3.0059376,-1.3325)(3.0309374,-1.3575)(3.0409374,-1.3725) \curveto(3.0509374,-1.3875)(3.0709374,-1.4175)(3.0809374,-1.4325) \curveto(3.0909376,-1.4475)(3.1109376,-1.4775)(3.1209376,-1.4925) 
\curveto(3.1309376,-1.5075)(3.1509376,-1.5375)(3.1609375,-1.5525) \curveto(3.1709375,-1.5675)(3.1859374,-1.6025)(3.1909375,-1.6225) \curveto(3.1959374,-1.6425)(3.2059374,-1.6825)(3.2109375,-1.7025) \curveto(3.2159376,-1.7225)(3.2259376,-1.7625)(3.2309375,-1.7825) \curveto(3.2359376,-1.8025)(3.2459376,-1.8425)(3.2509375,-1.8625) \curveto(3.2559376,-1.8825)(3.2609375,-1.9275)(3.2609375,-1.9525) \curveto(3.2609375,-1.9775)(3.2659376,-2.0225)(3.2709374,-2.0425) \curveto(3.2759376,-2.0625)(3.2809374,-2.1075)(3.2809374,-2.1325) \curveto(3.2809374,-2.1575)(3.2809374,-2.2075)(3.2809374,-2.2325) \curveto(3.2809374,-2.2575)(3.2809374,-2.3075)(3.2809374,-2.3325) \curveto(3.2809374,-2.3575)(3.2859375,-2.4025)(3.2909374,-2.4225) \curveto(3.2959375,-2.4425)(3.3009374,-2.4875)(3.3009374,-2.5125) \curveto(3.3009374,-2.5375)(3.3009374,-2.5875)(3.3009374,-2.6125) \curveto(3.3009374,-2.6375)(3.3009374,-2.6875)(3.3009374,-2.7125) \curveto(3.3009374,-2.7375)(3.3009374,-2.7875)(3.3009374,-2.8125) \curveto(3.3009374,-2.8375)(3.3009374,-2.8875)(3.3009374,-2.9125) \curveto(3.3009374,-2.9375)(3.3009374,-2.9875)(3.3009374,-3.0125) \curveto(3.3009374,-3.0375)(3.3009374,-3.0875)(3.3009374,-3.1125) \curveto(3.3009374,-3.1375)(3.3009374,-3.1625)(3.3009374,-3.1625) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4517g,gradend=color4517g,gradmidpoint=1.0] { \newpath \moveto(7.3209376,-3.1625) \lineto(7.3109374,-3.1225) \curveto(7.3059373,-3.1025)(7.3009377,-3.0575)(7.3009377,-3.0325) \curveto(7.3009377,-3.0075)(7.3009377,-2.9575)(7.3009377,-2.9325) \curveto(7.3009377,-2.9075)(7.3009377,-2.8575)(7.3009377,-2.8325) \curveto(7.3009377,-2.8075)(7.3009377,-2.7575)(7.3009377,-2.7325) \curveto(7.3009377,-2.7075)(7.3009377,-2.6575)(7.3009377,-2.6325) \curveto(7.3009377,-2.6075)(7.3009377,-2.5575)(7.3009377,-2.5325) \curveto(7.3009377,-2.5075)(7.3009377,-2.4575)(7.3009377,-2.4325) \curveto(7.3009377,-2.4075)(7.3009377,-2.3575)(7.3009377,-2.3325) 
\curveto(7.3009377,-2.3075)(7.3009377,-2.2575)(7.3009377,-2.2325) \curveto(7.3009377,-2.2075)(7.3009377,-2.1575)(7.3009377,-2.1325) \curveto(7.3009377,-2.1075)(7.3009377,-2.0575)(7.3009377,-2.0325) \curveto(7.3009377,-2.0075)(7.3009377,-1.9575)(7.3009377,-1.9325) \curveto(7.3009377,-1.9075)(7.3009377,-1.8575)(7.3009377,-1.8325) \curveto(7.3009377,-1.8075)(7.3009377,-1.7575)(7.3009377,-1.7325) \curveto(7.3009377,-1.7075)(7.3009377,-1.6575)(7.3009377,-1.6325) \curveto(7.3009377,-1.6075)(7.3009377,-1.5575)(7.3009377,-1.5325) \curveto(7.3009377,-1.5075)(7.3009377,-1.4575)(7.3009377,-1.4325) \curveto(7.3009377,-1.4075)(7.3009377,-1.3575)(7.3009377,-1.3325) \curveto(7.3009377,-1.3075)(7.3009377,-1.2575)(7.3009377,-1.2325) \curveto(7.3009377,-1.2075)(7.2809377,-1.1775)(7.2609377,-1.1725) \curveto(7.2409377,-1.1675)(7.1959376,-1.1625)(7.1709375,-1.1625) \curveto(7.1459374,-1.1625)(7.0959377,-1.1625)(7.0709376,-1.1625) \curveto(7.0459375,-1.1625)(6.9959373,-1.1625)(6.9709377,-1.1625) \curveto(6.9459376,-1.1625)(6.8959374,-1.1625)(6.8709373,-1.1625) \curveto(6.8459377,-1.1625)(6.7959375,-1.1625)(6.7709374,-1.1625) \curveto(6.7459373,-1.1625)(6.6959376,-1.1625)(6.6709375,-1.1625) \curveto(6.6459374,-1.1625)(6.5959377,-1.1625)(6.5709376,-1.1625) \curveto(6.5459375,-1.1625)(6.5009375,-1.1675)(6.4809375,-1.1725) \curveto(6.4609375,-1.1775)(6.4159374,-1.1825)(6.3909373,-1.1825) \curveto(6.3659377,-1.1825)(6.3159375,-1.1825)(6.2909374,-1.1825) \curveto(6.2659373,-1.1825)(6.2209377,-1.1875)(6.2009373,-1.1925) \curveto(6.1809373,-1.1975)(6.1359377,-1.2025)(6.1109376,-1.2025) \curveto(6.0859375,-1.2025)(6.0409374,-1.2075)(6.0209374,-1.2125) \curveto(6.0009375,-1.2175)(5.9609375,-1.2325)(5.9409375,-1.2425) \curveto(5.9209375,-1.2525)(5.8809376,-1.2675)(5.8609376,-1.2725) \curveto(5.8409376,-1.2775)(5.8009377,-1.2875)(5.7809377,-1.2925) \curveto(5.7609377,-1.2975)(5.7259374,-1.3125)(5.7109375,-1.3225) \curveto(5.6959376,-1.3325)(5.6659374,-1.3525)(5.6509376,-1.3625) 
\curveto(5.6359377,-1.3725)(5.6059375,-1.3975)(5.5909376,-1.4125) \curveto(5.5759373,-1.4275)(5.5509377,-1.4575)(5.5409374,-1.4725) \curveto(5.5309377,-1.4875)(5.5109377,-1.5175)(5.5009375,-1.5325) \curveto(5.4909377,-1.5475)(5.4709377,-1.5775)(5.4609375,-1.5925) \curveto(5.4509373,-1.6075)(5.4359374,-1.6425)(5.4309373,-1.6625) \curveto(5.4259377,-1.6825)(5.4109373,-1.7175)(5.4009376,-1.7325) \curveto(5.3909373,-1.7475)(5.3759375,-1.7825)(5.3709373,-1.8025) \curveto(5.3659377,-1.8225)(5.3609376,-1.8675)(5.3609376,-1.8925) \curveto(5.3609376,-1.9175)(5.3559375,-1.9625)(5.3509374,-1.9825) \curveto(5.3459377,-2.0025)(5.3409376,-2.0475)(5.3409376,-2.0725) \curveto(5.3409376,-2.0975)(5.3359375,-2.1425)(5.3309374,-2.1625) \curveto(5.3259373,-2.1825)(5.3209376,-2.2275)(5.3209376,-2.2525) \curveto(5.3209376,-2.2775)(5.3159375,-2.3275)(5.3109374,-2.3525) \curveto(5.3059373,-2.3775)(5.3009377,-2.4275)(5.3009377,-2.4525) \curveto(5.3009377,-2.4775)(5.3009377,-2.5275)(5.3009377,-2.5525) \curveto(5.3009377,-2.5775)(5.2959375,-2.6275)(5.2909374,-2.6525) \curveto(5.2859373,-2.6775)(5.2809377,-2.7275)(5.2809377,-2.7525) \curveto(5.2809377,-2.7775)(5.2859373,-2.8225)(5.2909374,-2.8425) \curveto(5.2959375,-2.8625)(5.3009377,-2.9075)(5.3009377,-2.9325) \curveto(5.3009377,-2.9575)(5.3009377,-3.0075)(5.3009377,-3.0325) \curveto(5.3009377,-3.0575)(5.3059373,-3.1025)(5.3109374,-3.1225) \curveto(5.3159375,-3.1425)(5.3209376,-3.1675)(5.3209376,-3.1825) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4517g,gradend=color4517g,gradmidpoint=1.0] { \newpath \moveto(4.4009376,2.8375) \lineto(4.4409375,2.8175) \curveto(4.4609375,2.8075)(4.5009375,2.7875)(4.5209374,2.7775) \curveto(4.5409374,2.7675)(4.5759373,2.7475)(4.5909376,2.7375) \curveto(4.6059375,2.7275)(4.6359377,2.7075)(4.6509376,2.6975) \curveto(4.6659374,2.6875)(4.6959376,2.6675)(4.7109375,2.6575) \curveto(4.7259374,2.6475)(4.7509375,2.6225)(4.7609377,2.6075) 
\curveto(4.7709374,2.5925)(4.7909374,2.5625)(4.8009377,2.5475) \curveto(4.8109374,2.5325)(4.8309374,2.5025)(4.8409376,2.4875) \curveto(4.8509374,2.4725)(4.8709373,2.4425)(4.8809376,2.4275) \curveto(4.8909373,2.4125)(4.9109373,2.3825)(4.9209375,2.3675) \curveto(4.9309373,2.3525)(4.9509373,2.3175)(4.9609375,2.2975) \curveto(4.9709377,2.2775)(4.9909377,2.2375)(5.0009375,2.2175) \curveto(5.0109377,2.1975)(5.0309377,2.1575)(5.0409374,2.1375) \curveto(5.0509377,2.1175)(5.0709376,2.0825)(5.0809374,2.0675) \curveto(5.0909376,2.0525)(5.1059375,2.0175)(5.1109376,1.9975) \curveto(5.1159377,1.9775)(5.1309376,1.9425)(5.1409373,1.9275) \curveto(5.1509376,1.9125)(5.1709375,1.8825)(5.1809373,1.8675) \curveto(5.1909375,1.8525)(5.2059374,1.8175)(5.2109375,1.7975) \curveto(5.2159376,1.7775)(5.2309375,1.7425)(5.2409377,1.7275) \curveto(5.2509375,1.7125)(5.2709374,1.6825)(5.2809377,1.6675) \curveto(5.2909374,1.6525)(5.3159375,1.6175)(5.3309374,1.5975) \curveto(5.3459377,1.5775)(5.3709373,1.5425)(5.3809376,1.5275) \curveto(5.3909373,1.5125)(5.4109373,1.4825)(5.4209375,1.4675) \curveto(5.4309373,1.4525)(5.4509373,1.4225)(5.4609375,1.4075) \curveto(5.4709377,1.3925)(5.4909377,1.3625)(5.5009375,1.3475) \curveto(5.5109377,1.3325)(5.5359373,1.3075)(5.5509377,1.2975) \curveto(5.5659375,1.2875)(5.5959377,1.2675)(5.6109376,1.2575) \curveto(5.6259375,1.2475)(5.6559377,1.2225)(5.6709375,1.2075) \curveto(5.6859374,1.1925)(5.7159376,1.1675)(5.7309375,1.1575) \curveto(5.7459373,1.1475)(5.7759376,1.1275)(5.7909374,1.1175) \curveto(5.8059373,1.1075)(5.8359375,1.0875)(5.8509374,1.0775) \curveto(5.8659377,1.0675)(5.8959374,1.0475)(5.9109373,1.0375) \curveto(5.9259377,1.0275)(5.9559374,1.0075)(5.9709377,0.9975) \curveto(5.9859376,0.9875)(6.0259376,0.9725)(6.0509377,0.9675) \curveto(6.0759373,0.9625)(6.1209373,0.9525)(6.1409373,0.9475) \curveto(6.1609373,0.9425)(6.2009373,0.9325)(6.2209377,0.9275) \curveto(6.2409377,0.9225)(6.2809377,0.9125)(6.3009377,0.9075) 
\curveto(6.3209376,0.9025)(6.3659377,0.8925)(6.3909373,0.8875) \curveto(6.4159374,0.8825)(6.4659376,0.8775)(6.4909377,0.8775) \curveto(6.5159373,0.8775)(6.5659375,0.8775)(6.5909376,0.8775) \curveto(6.6159377,0.8775)(6.6659374,0.8725)(6.6909375,0.8675) \curveto(6.7159376,0.8625)(6.7709374,0.8575)(6.8009377,0.8575) \curveto(6.8309374,0.8575)(6.8859377,0.8575)(6.9109373,0.8575) \curveto(6.9359374,0.8575)(6.9859376,0.8575)(7.0109377,0.8575) \curveto(7.0359373,0.8575)(7.0859375,0.8525)(7.1109376,0.8475) \curveto(7.1359377,0.8425)(7.1859374,0.8375)(7.2109375,0.8375) \curveto(7.2359376,0.8375)(7.2659373,0.8575)(7.2709374,0.8775) \curveto(7.2759376,0.8975)(7.2859373,0.9375)(7.2909374,0.9575) \curveto(7.2959375,0.9775)(7.3009377,1.0225)(7.3009377,1.0475) \curveto(7.3009377,1.0725)(7.3009377,1.1225)(7.3009377,1.1475) \curveto(7.3009377,1.1725)(7.3009377,1.2225)(7.3009377,1.2475) \curveto(7.3009377,1.2725)(7.2959375,1.3175)(7.2909374,1.3375) \curveto(7.2859373,1.3575)(7.2809377,1.4025)(7.2809377,1.4275) \curveto(7.2809377,1.4525)(7.2809377,1.5025)(7.2809377,1.5275) \curveto(7.2809377,1.5525)(7.2809377,1.6025)(7.2809377,1.6275) \curveto(7.2809377,1.6525)(7.2809377,1.7025)(7.2809377,1.7275) \curveto(7.2809377,1.7525)(7.2809377,1.8025)(7.2809377,1.8275) \curveto(7.2809377,1.8525)(7.2809377,1.9025)(7.2809377,1.9275) \curveto(7.2809377,1.9525)(7.2809377,2.0025)(7.2809377,2.0275) \curveto(7.2809377,2.0525)(7.2809377,2.1025)(7.2809377,2.1275) \curveto(7.2809377,2.1525)(7.2809377,2.2025)(7.2809377,2.2275) \curveto(7.2809377,2.2525)(7.2809377,2.3025)(7.2809377,2.3275) \curveto(7.2809377,2.3525)(7.2809377,2.4025)(7.2809377,2.4275) \curveto(7.2809377,2.4525)(7.2809377,2.5025)(7.2809377,2.5275) \curveto(7.2809377,2.5525)(7.2809377,2.6025)(7.2809377,2.6275) \curveto(7.2809377,2.6525)(7.2809377,2.7025)(7.2809377,2.7275) \curveto(7.2809377,2.7525)(7.2809377,2.7925)(7.2809377,2.8375) } 
\pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4517g,gradend=color4517g,gradmidpoint=1.0] { \newpath \moveto(1.2809376,2.8375) \lineto(1.2809376,2.7875) \curveto(1.2809376,2.7625)(1.2809376,2.7125)(1.2809376,2.6875) \curveto(1.2809376,2.6625)(1.2809376,2.6125)(1.2809376,2.5875) \curveto(1.2809376,2.5625)(1.2809376,2.5125)(1.2809376,2.4875) \curveto(1.2809376,2.4625)(1.2809376,2.4125)(1.2809376,2.3875) \curveto(1.2809376,2.3625)(1.2809376,2.3125)(1.2809376,2.2875) \curveto(1.2809376,2.2625)(1.2809376,2.2125)(1.2809376,2.1875) \curveto(1.2809376,2.1625)(1.2809376,2.1125)(1.2809376,2.0875) \curveto(1.2809376,2.0625)(1.2809376,2.0125)(1.2809376,1.9875) \curveto(1.2809376,1.9625)(1.2809376,1.9125)(1.2809376,1.8875) \curveto(1.2809376,1.8625)(1.2809376,1.8125)(1.2809376,1.7875) \curveto(1.2809376,1.7625)(1.2809376,1.7125)(1.2809376,1.6875) \curveto(1.2809376,1.6625)(1.2809376,1.6125)(1.2809376,1.5875) \curveto(1.2809376,1.5625)(1.2809376,1.5225)(1.2809376,1.5075) \curveto(1.2809376,1.4925)(1.2809376,1.4525)(1.2809376,1.4275) \curveto(1.2809376,1.4025)(1.2809376,1.3525)(1.2809376,1.3275) \curveto(1.2809376,1.3025)(1.2809376,1.2525)(1.2809376,1.2275) \curveto(1.2809376,1.2025)(1.2809376,1.1525)(1.2809376,1.1275) \curveto(1.2809376,1.1025)(1.2809376,1.0525)(1.2809376,1.0275) \curveto(1.2809376,1.0025)(1.2809376,0.9525)(1.2809376,0.9275) \curveto(1.2809376,0.9025)(1.3009375,0.8725)(1.3209375,0.8675) \curveto(1.3409375,0.8625)(1.3859375,0.8575)(1.4109375,0.8575) \curveto(1.4359375,0.8575)(1.4809375,0.8525)(1.5009375,0.8475) \curveto(1.5209374,0.8425)(1.5659375,0.8375)(1.5909375,0.8375) \curveto(1.6159375,0.8375)(1.6659375,0.8375)(1.6909375,0.8375) \curveto(1.7159375,0.8375)(1.7659374,0.8375)(1.7909375,0.8375) \curveto(1.8159375,0.8375)(1.8559375,0.8375)(1.8709375,0.8375) \curveto(1.8859375,0.8375)(1.9259375,0.8375)(1.9509375,0.8375) \curveto(1.9759375,0.8375)(2.0209374,0.8425)(2.0409374,0.8475) 
\curveto(2.0609374,0.8525)(2.1059375,0.8575)(2.1309376,0.8575) \curveto(2.1559374,0.8575)(2.2009375,0.8625)(2.2209375,0.8675) \curveto(2.2409375,0.8725)(2.2809374,0.8825)(2.3009374,0.8875) \curveto(2.3209374,0.8925)(2.3609376,0.9025)(2.3809376,0.9075) \curveto(2.4009376,0.9125)(2.4409375,0.9225)(2.4609375,0.9275) \curveto(2.4809375,0.9325)(2.5209374,0.9425)(2.5409374,0.9475) \curveto(2.5609374,0.9525)(2.5959375,0.9675)(2.6109376,0.9775) \curveto(2.6259375,0.9875)(2.6559374,1.0075)(2.6709375,1.0175) \curveto(2.6859374,1.0275)(2.7159376,1.0475)(2.7309375,1.0575) \curveto(2.7459376,1.0675)(2.7759376,1.0875)(2.7909374,1.0975) \curveto(2.8059375,1.1075)(2.8359375,1.1275)(2.8509376,1.1375) \curveto(2.8659375,1.1475)(2.8959374,1.1675)(2.9109375,1.1775) \curveto(2.9259374,1.1875)(2.9509375,1.2125)(2.9609375,1.2275) \curveto(2.9709375,1.2425)(2.9959376,1.2675)(3.0109375,1.2775) \curveto(3.0259376,1.2875)(3.0509374,1.3125)(3.0609374,1.3275) \curveto(3.0709374,1.3425)(3.0909376,1.3725)(3.1009376,1.3875) \curveto(3.1109376,1.4025)(3.1309376,1.4325)(3.1409376,1.4475) \curveto(3.1509376,1.4625)(3.1709375,1.4925)(3.1809375,1.5075) \curveto(3.1909375,1.5225)(3.2159376,1.5475)(3.2309375,1.5575) \curveto(3.2459376,1.5675)(3.2709374,1.5925)(3.2809374,1.6075) \curveto(3.2909374,1.6225)(3.3109374,1.6525)(3.3209374,1.6675) \curveto(3.3309374,1.6825)(3.3509376,1.7175)(3.3609376,1.7375) \curveto(3.3709376,1.7575)(3.3859375,1.7975)(3.3909376,1.8175) \curveto(3.3959374,1.8375)(3.4109375,1.8725)(3.4209375,1.8875) \curveto(3.4309375,1.9025)(3.4459374,1.9275)(3.4509375,1.9375) \curveto(3.4559374,1.9475)(3.4709375,1.9725)(3.4809375,1.9875) \curveto(3.4909375,2.0025)(3.5059376,2.0375)(3.5109375,2.0575) \curveto(3.5159376,2.0775)(3.5309374,2.1125)(3.5409374,2.1275) \curveto(3.5509374,2.1425)(3.5709374,2.1725)(3.5809374,2.1875) \curveto(3.5909376,2.2025)(3.6059375,2.2375)(3.6109376,2.2575) \curveto(3.6159375,2.2775)(3.6309376,2.3125)(3.6409376,2.3275) 
\curveto(3.6509376,2.3425)(3.6709375,2.3725)(3.6809375,2.3875) \curveto(3.6909375,2.4025)(3.7109375,2.4325)(3.7209375,2.4475) \curveto(3.7309375,2.4625)(3.7509375,2.4925)(3.7609375,2.5075) \curveto(3.7709374,2.5225)(3.7909374,2.5525)(3.8009374,2.5675) \curveto(3.8109374,2.5825)(3.8309374,2.6175)(3.8409376,2.6375) \curveto(3.8509376,2.6575)(3.8759375,2.6875)(3.8909376,2.6975) \curveto(3.9059374,2.7075)(3.9359374,2.7275)(3.9509375,2.7375) \curveto(3.9659376,2.7475)(4.0009375,2.7625)(4.0209374,2.7675) \curveto(4.0409374,2.7725)(4.0759373,2.7875)(4.0909376,2.7975) \curveto(4.1059375,2.8075)(4.1359377,2.8175)(4.1809373,2.8175) } \psframe[linewidth=0.06,dimen=outer](7.3409376,2.8575)(1.2709374,-3.2125) \psline[linewidth=0.1cm](1.2809376,0.8375)(1.9009376,0.8375) \psline[linewidth=0.1cm](6.6809373,0.8575)(7.3009377,0.8575) \psbezier[linewidth=0.06](1.8809375,0.8375)(3.7209375,0.8575)(3.3009458,2.8215828)(4.3009377,2.8175)(5.300929,2.8134172)(4.9009376,0.8575)(6.7009373,0.8575) \psline[linewidth=0.1cm](1.2809376,-1.1425)(1.9009376,-1.1425) \psline[linewidth=0.1cm](6.6809373,-1.1625)(7.3009377,-1.1625) \psline[linewidth=0.1cm](3.3009374,-3.1825)(3.3009374,-2.5625) \psline[linewidth=0.1cm](5.3009377,-3.1825)(5.3009377,-2.5625) \psbezier[linewidth=0.06](1.8809375,-1.1425)(3.1009376,-1.1425)(3.3009374,-1.3425)(3.3009374,-2.6025) \psbezier[linewidth=0.06](5.3009377,-2.5825)(5.3009377,-1.3825)(5.5209374,-1.1625)(6.7009373,-1.1625) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.3009375,2.7575)(1.3009375,3.5175) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.9209375,-3.1825)(7.8409376,-3.1825) \usefont{T1}{ptm}{m}{n} \rput(0.52234375,1.8475){$2-p/2$} \usefont{T1}{ptm}{m}{n} \rput(5.3923435,-3.3925){$1$} \usefont{T1}{ptm}{m}{n} \rput(7.2523437,-3.4125){$2$} \usefont{T1}{ptm}{m}{n} \rput(3.1323438,-3.3725){$0$} \usefont{T1}{ptm}{m}{n} \rput(1.1323438,-3.4125){$-1$} \usefont{T1}{ptm}{m}{n} 
\rput(1.1323438,-1.1325){$0$} \usefont{T1}{ptm}{m}{n} \rput(1.1523438,0.8275){$1$} \usefont{T1}{ptm}{m}{n} \rput(1.1323438,2.8075){$2$} \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](1.3009375,1.8375)(7.9009376,1.8375) \usefont{T1}{ptm}{m}{n} \rput(1.1323438,3.4475){$y$} \usefont{T1}{ptm}{m}{n} \rput(7.722344,-3.4125){$x$} \psdots[dotsize=0.16](3.4009376,1.8375) \psdots[dotsize=0.16](5.2009373,1.8375) \psbezier[linewidth=0.06](4.6809373,-3.1625)(4.6809373,-0.5425)(5.0609374,1.4975)(5.1809373,1.8375)(5.3009377,2.1775)(5.4809375,2.6375)(5.7009373,2.8175) \psbezier[linewidth=0.06](3.8809376,-3.1425)(3.9009376,-0.5425)(3.5287697,1.4598192)(3.4009376,1.8375)(3.2731051,2.2151809)(3.0809374,2.6375)(2.8809376,2.8175) \psline[linewidth=0.1cm](4.6809373,-3.1825)(4.6809373,-2.5625) \psline[linewidth=0.1cm](3.8809376,-3.1825)(3.8809376,-2.5625) \psbezier[linewidth=0.04](5.0809374,-3.1425)(5.1009374,-0.7425)(5.2409377,-0.0425)(5.7009373,-0.1425)(6.1609373,-0.2425)(5.7809377,-0.5425)(7.2609377,-0.5425) \psbezier[linewidth=0.04](3.4809375,-3.1625)(3.4609375,-1.4425)(3.3810425,-0.06492391)(2.9009376,-0.1625)(2.4208326,-0.26007608)(2.7209375,-0.5625)(1.3009375,-0.5425) \psbezier[linewidth=0.04](3.6809375,-3.1425)(3.6809375,-1.1425)(3.4809375,0.9575)(3.1009376,0.8575)(2.7209375,0.7575)(3.1209376,0.2575)(1.3009375,0.2575) \psbezier[linewidth=0.04](4.8809376,-3.1625)(4.9209375,-1.1425)(5.1209373,0.9175)(5.4809375,0.8375)(5.8409376,0.7575)(5.5209374,0.2375)(7.2609377,0.2375) \psbezier[linewidth=0.04](4.0809374,-3.1625)(4.1009374,0.6375)(3.6809375,1.2575)(3.7009375,1.4575)(3.7209375,1.6575)(3.8009374,1.7175)(3.8809376,1.8375) \psbezier[linewidth=0.04](4.4809375,-3.1625)(4.5009375,0.6575)(4.9609375,1.1975)(4.9009376,1.4575)(4.8409376,1.7175)(4.7409377,1.7775)(4.7009373,1.8375) \psbezier[linewidth=0.04](4.2809377,-3.1425)(4.3009377,0.2375)(4.3009377,-0.3025)(4.3009377,1.8375) \usefont{T1}{ptm}{m}{n} \rput(7.682344,2.0475){$l_p$} \usefont{T1}{ptm}{m}{n} 
\rput(4.912344,2.0475){$b_p$} \usefont{T1}{ptm}{m}{n} \rput(3.7823439,2.0075){$a_p$} \usefont{T1}{ptm}{m}{n} \rput(3.6684375,-3.4225){\small $I_{p,-1}$} \usefont{T1}{ptm}{m}{n} \rput(4.3084373,-3.4225){\small $I_{p,0}$} \usefont{T1}{ptm}{m}{n} \rput(4.9084377,-3.4225){\small $I_{p,1}$} \usefont{T1}{ptm}{m}{n} \rput(7.6784377,-0.1425){\small $E_{p,1}$} \usefont{T1}{ptm}{m}{n} \rput(0.7784375,-0.2025){\small $E_{p,-1}$} \psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(3.0809374,2.5575)(3.3009374,2.1975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.5009375,2.5775)(5.3609376,2.2975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.9009376,-0.2225)(6.0809374,-0.3825) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(2.8609376,1.1175)(2.5609374,0.9575) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.5209374,1.3575)(5.8409376,1.0575) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.3009377,0.1775)(4.3009377,0.5375) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.6809373,0.1175)(4.7209377,0.4975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.9409375,0.2175)(3.9009376,0.4975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.7009375,0.1575)(3.6609375,0.5175) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.8809376,0.0975)(4.9409375,0.4975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.1409373,0.0975)(5.2409377,0.5175) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.4609375,0.1575)(3.3809376,0.4975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 
6.0,arrowlength=2.0,arrowinset=0.4]{->}(2.8009374,-0.1825)(2.5809374,-0.3025) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.8009377,-1.2825)(6.0609374,-1.2025) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.0409374,-1.3825)(2.8009374,-1.2425) \psline[linewidth=0.1cm](3.4809375,-3.1825)(3.4809375,-2.5625) \psline[linewidth=0.1cm](3.6809375,-3.1825)(3.6809375,-2.5625) \psline[linewidth=0.1cm](4.0809374,-3.1825)(4.0809374,-2.5625) \psline[linewidth=0.1cm](4.2809377,-3.1825)(4.2809377,-2.5625) \psline[linewidth=0.1cm](4.4809375,-3.1825)(4.4809375,-2.5625) \psline[linewidth=0.1cm](4.8809376,-3.1825)(4.8809376,-2.5625) \psline[linewidth=0.1cm](5.0809374,-3.1825)(5.0809374,-2.5625) \psline[linewidth=0.1cm](1.3209375,0.2775)(1.9409375,0.2775) \psline[linewidth=0.1cm](1.3009375,-0.5225)(1.9209375,-0.5225) \psline[linewidth=0.1cm](6.6609373,0.2575)(7.2809377,0.2575) \psline[linewidth=0.1cm](6.6609373,-0.5225)(7.2809377,-0.5225) \psdots[dotsize=0.16](1.3009375,0.8575) \psdots[dotsize=0.16](1.3009375,2.8175) \psdots[dotsize=0.16](1.3009375,-1.1425) \psdots[dotsize=0.16](1.2809376,-3.1825) \psdots[dotsize=0.16](3.3009374,-3.1825) \psdots[dotsize=0.16](5.3009377,-3.1825) \psdots[dotsize=0.16](7.3209376,-3.1825) \psdots[dotsize=0.16](3.8809376,-3.1825) \psdots[dotsize=0.16](4.6809373,-3.1825) \psdots[dotsize=0.16](7.2809377,0.8575) \psdots[dotsize=0.16](7.2809377,-1.1625) \end{pspicture} } \\ Fig. 
3b: the flow on the level $T_p$ for $0<p\leqslant 1/2$ \end{center} \begin{center} \scalebox{1.2} { \begin{pspicture}(0,-3.6459374)(8.435,3.6459374) \definecolor{color4300g}{rgb}{0.6,0.6,0.6} \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4300g,gradend=color4300g,gradmidpoint=1.0] { \newpath \moveto(1.2821875,-3.1525) \lineto(1.2821875,-3.1025) \curveto(1.2821875,-3.0775)(1.2821875,-3.0275)(1.2821875,-3.0025) \curveto(1.2821875,-2.9775)(1.2821875,-2.9275)(1.2821875,-2.9025) \curveto(1.2821875,-2.8775)(1.2821875,-2.8275)(1.2821875,-2.8025) \curveto(1.2821875,-2.7775)(1.2771875,-2.7325)(1.2721875,-2.7125) \curveto(1.2671875,-2.6925)(1.2621875,-2.6475)(1.2621875,-2.6225) \curveto(1.2621875,-2.5975)(1.2621875,-2.5475)(1.2621875,-2.5225) \curveto(1.2621875,-2.4975)(1.2671875,-2.4525)(1.2721875,-2.4325) \curveto(1.2771875,-2.4125)(1.2771875,-2.3725)(1.2721875,-2.3525) \curveto(1.2671875,-2.3325)(1.2621875,-2.2875)(1.2621875,-2.2625) \curveto(1.2621875,-2.2375)(1.2671875,-2.1925)(1.2721875,-2.1725) \curveto(1.2771875,-2.1525)(1.2821875,-2.1075)(1.2821875,-2.0825) \curveto(1.2821875,-2.0575)(1.2821875,-2.0075)(1.2821875,-1.9825) \curveto(1.2821875,-1.9575)(1.2821875,-1.9075)(1.2821875,-1.8825) \curveto(1.2821875,-1.8575)(1.2771875,-1.8125)(1.2721875,-1.7925) \curveto(1.2671875,-1.7725)(1.2621875,-1.7275)(1.2621875,-1.7025) \curveto(1.2621875,-1.6775)(1.2621875,-1.6275)(1.2621875,-1.6025) \curveto(1.2621875,-1.5775)(1.2621875,-1.5275)(1.2621875,-1.5025) \curveto(1.2621875,-1.4775)(1.2671875,-1.4325)(1.2721875,-1.4125) \curveto(1.2771875,-1.3925)(1.2821875,-1.3475)(1.2821875,-1.3225) \curveto(1.2821875,-1.2975)(1.2821875,-1.2475)(1.2821875,-1.2225) \curveto(1.2821875,-1.1975)(1.2971874,-1.1625)(1.3121876,-1.1525) \curveto(1.3271875,-1.1425)(1.3671875,-1.1325)(1.3921875,-1.1325) \curveto(1.4171875,-1.1325)(1.4671875,-1.1325)(1.4921875,-1.1325) \curveto(1.5171875,-1.1325)(1.5671875,-1.1325)(1.5921875,-1.1325) 
\curveto(1.6171875,-1.1325)(1.6671875,-1.1325)(1.6921875,-1.1325) \curveto(1.7171875,-1.1325)(1.7671875,-1.1325)(1.7921875,-1.1325) \curveto(1.8171875,-1.1325)(1.8621875,-1.1275)(1.8821875,-1.1225) \curveto(1.9021875,-1.1175)(1.9471875,-1.1125)(1.9721875,-1.1125) \curveto(1.9971875,-1.1125)(2.0471876,-1.1125)(2.0721874,-1.1125) \curveto(2.0971875,-1.1125)(2.1421876,-1.1175)(2.1621876,-1.1225) \curveto(2.1821876,-1.1275)(2.2271874,-1.1325)(2.2521875,-1.1325) \curveto(2.2771876,-1.1325)(2.3271875,-1.1325)(2.3521874,-1.1325) \curveto(2.3771875,-1.1325)(2.4221876,-1.1375)(2.4421875,-1.1425) \curveto(2.4621875,-1.1475)(2.5021875,-1.1575)(2.5221875,-1.1625) \curveto(2.5421875,-1.1675)(2.5821874,-1.1775)(2.6021874,-1.1825) \curveto(2.6221876,-1.1875)(2.6621876,-1.1975)(2.6821876,-1.2025) \curveto(2.7021875,-1.2075)(2.7421875,-1.2175)(2.7621875,-1.2225) \curveto(2.7821875,-1.2275)(2.8221874,-1.2375)(2.8421874,-1.2425) \curveto(2.8621874,-1.2475)(2.8971875,-1.2625)(2.9121876,-1.2725) \curveto(2.9271874,-1.2825)(2.9571874,-1.3025)(2.9721875,-1.3125) \curveto(2.9871874,-1.3225)(3.0121875,-1.3475)(3.0221875,-1.3625) \curveto(3.0321875,-1.3775)(3.0521874,-1.4075)(3.0621874,-1.4225) \curveto(3.0721874,-1.4375)(3.0921874,-1.4675)(3.1021874,-1.4825) \curveto(3.1121874,-1.4975)(3.1321876,-1.5275)(3.1421876,-1.5425) \curveto(3.1521876,-1.5575)(3.1671875,-1.5925)(3.1721876,-1.6125) \curveto(3.1771874,-1.6325)(3.1871874,-1.6725)(3.1921875,-1.6925) \curveto(3.1971874,-1.7125)(3.2071874,-1.7525)(3.2121875,-1.7725) \curveto(3.2171874,-1.7925)(3.2271874,-1.8325)(3.2321875,-1.8525) \curveto(3.2371874,-1.8725)(3.2421875,-1.9175)(3.2421875,-1.9425) \curveto(3.2421875,-1.9675)(3.2471876,-2.0125)(3.2521875,-2.0325) \curveto(3.2571876,-2.0525)(3.2621875,-2.0975)(3.2621875,-2.1225) \curveto(3.2621875,-2.1475)(3.2621875,-2.1975)(3.2621875,-2.2225) \curveto(3.2621875,-2.2475)(3.2621875,-2.2975)(3.2621875,-2.3225) \curveto(3.2621875,-2.3475)(3.2671876,-2.3925)(3.2721875,-2.4125) 
\curveto(3.2771876,-2.4325)(3.2821875,-2.4775)(3.2821875,-2.5025) \curveto(3.2821875,-2.5275)(3.2821875,-2.5775)(3.2821875,-2.6025) \curveto(3.2821875,-2.6275)(3.2821875,-2.6775)(3.2821875,-2.7025) \curveto(3.2821875,-2.7275)(3.2821875,-2.7775)(3.2821875,-2.8025) \curveto(3.2821875,-2.8275)(3.2821875,-2.8775)(3.2821875,-2.9025) \curveto(3.2821875,-2.9275)(3.2821875,-2.9775)(3.2821875,-3.0025) \curveto(3.2821875,-3.0275)(3.2821875,-3.0775)(3.2821875,-3.1025) \curveto(3.2821875,-3.1275)(3.2821875,-3.1525)(3.2821875,-3.1525) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4300g,gradend=color4300g,gradmidpoint=1.0] { \newpath \moveto(7.3021874,-3.1525) \lineto(7.2921877,-3.1125) \curveto(7.2871876,-3.0925)(7.2821875,-3.0475)(7.2821875,-3.0225) \curveto(7.2821875,-2.9975)(7.2821875,-2.9475)(7.2821875,-2.9225) \curveto(7.2821875,-2.8975)(7.2821875,-2.8475)(7.2821875,-2.8225) \curveto(7.2821875,-2.7975)(7.2821875,-2.7475)(7.2821875,-2.7225) \curveto(7.2821875,-2.6975)(7.2821875,-2.6475)(7.2821875,-2.6225) \curveto(7.2821875,-2.5975)(7.2821875,-2.5475)(7.2821875,-2.5225) \curveto(7.2821875,-2.4975)(7.2821875,-2.4475)(7.2821875,-2.4225) \curveto(7.2821875,-2.3975)(7.2821875,-2.3475)(7.2821875,-2.3225) \curveto(7.2821875,-2.2975)(7.2821875,-2.2475)(7.2821875,-2.2225) \curveto(7.2821875,-2.1975)(7.2821875,-2.1475)(7.2821875,-2.1225) \curveto(7.2821875,-2.0975)(7.2821875,-2.0475)(7.2821875,-2.0225) \curveto(7.2821875,-1.9975)(7.2821875,-1.9475)(7.2821875,-1.9225) \curveto(7.2821875,-1.8975)(7.2821875,-1.8475)(7.2821875,-1.8225) \curveto(7.2821875,-1.7975)(7.2821875,-1.7475)(7.2821875,-1.7225) \curveto(7.2821875,-1.6975)(7.2821875,-1.6475)(7.2821875,-1.6225) \curveto(7.2821875,-1.5975)(7.2821875,-1.5475)(7.2821875,-1.5225) \curveto(7.2821875,-1.4975)(7.2821875,-1.4475)(7.2821875,-1.4225) \curveto(7.2821875,-1.3975)(7.2821875,-1.3475)(7.2821875,-1.3225) \curveto(7.2821875,-1.2975)(7.2821875,-1.2475)(7.2821875,-1.2225) 
\curveto(7.2821875,-1.1975)(7.2621875,-1.1675)(7.2421875,-1.1625) \curveto(7.2221875,-1.1575)(7.1771874,-1.1525)(7.1521873,-1.1525) \curveto(7.1271877,-1.1525)(7.0771875,-1.1525)(7.0521874,-1.1525) \curveto(7.0271873,-1.1525)(6.9771876,-1.1525)(6.9521875,-1.1525) \curveto(6.9271874,-1.1525)(6.8771877,-1.1525)(6.8521876,-1.1525) \curveto(6.8271875,-1.1525)(6.7771873,-1.1525)(6.7521877,-1.1525) \curveto(6.7271876,-1.1525)(6.6771874,-1.1525)(6.6521873,-1.1525) \curveto(6.6271877,-1.1525)(6.5771875,-1.1525)(6.5521874,-1.1525) \curveto(6.5271873,-1.1525)(6.4821873,-1.1575)(6.4621873,-1.1625) \curveto(6.4421873,-1.1675)(6.3971877,-1.1725)(6.3721876,-1.1725) \curveto(6.3471875,-1.1725)(6.2971873,-1.1725)(6.2721877,-1.1725) \curveto(6.2471876,-1.1725)(6.2021875,-1.1775)(6.1821876,-1.1825) \curveto(6.1621876,-1.1875)(6.1171875,-1.1925)(6.0921874,-1.1925) \curveto(6.0671873,-1.1925)(6.0221877,-1.1975)(6.0021877,-1.2025) \curveto(5.9821873,-1.2075)(5.9421873,-1.2225)(5.9221873,-1.2325) \curveto(5.9021873,-1.2425)(5.8621874,-1.2575)(5.8421874,-1.2625) \curveto(5.8221874,-1.2675)(5.7821875,-1.2775)(5.7621875,-1.2825) \curveto(5.7421875,-1.2875)(5.7071877,-1.3025)(5.6921873,-1.3125) \curveto(5.6771874,-1.3225)(5.6471877,-1.3425)(5.6321874,-1.3525) \curveto(5.6171875,-1.3625)(5.5871873,-1.3875)(5.5721874,-1.4025) \curveto(5.5571876,-1.4175)(5.5321875,-1.4475)(5.5221877,-1.4625) \curveto(5.5121875,-1.4775)(5.4921875,-1.5075)(5.4821873,-1.5225) \curveto(5.4721875,-1.5375)(5.4521875,-1.5675)(5.4421873,-1.5825) \curveto(5.4321876,-1.5975)(5.4171877,-1.6325)(5.4121876,-1.6525) \curveto(5.4071875,-1.6725)(5.3921876,-1.7075)(5.3821874,-1.7225) \curveto(5.3721876,-1.7375)(5.3571873,-1.7725)(5.3521876,-1.7925) \curveto(5.3471875,-1.8125)(5.3421874,-1.8575)(5.3421874,-1.8825) \curveto(5.3421874,-1.9075)(5.3371873,-1.9525)(5.3321877,-1.9725) \curveto(5.3271875,-1.9925)(5.3221874,-2.0375)(5.3221874,-2.0625) \curveto(5.3221874,-2.0875)(5.3171873,-2.1325)(5.3121877,-2.1525) 
\curveto(5.3071876,-2.1725)(5.3021874,-2.2175)(5.3021874,-2.2425) \curveto(5.3021874,-2.2675)(5.2971873,-2.3175)(5.2921877,-2.3425) \curveto(5.2871876,-2.3675)(5.2821875,-2.4175)(5.2821875,-2.4425) \curveto(5.2821875,-2.4675)(5.2821875,-2.5175)(5.2821875,-2.5425) \curveto(5.2821875,-2.5675)(5.2771873,-2.6175)(5.2721877,-2.6425) \curveto(5.2671876,-2.6675)(5.2621875,-2.7175)(5.2621875,-2.7425) \curveto(5.2621875,-2.7675)(5.2671876,-2.8125)(5.2721877,-2.8325) \curveto(5.2771873,-2.8525)(5.2821875,-2.8975)(5.2821875,-2.9225) \curveto(5.2821875,-2.9475)(5.2821875,-2.9975)(5.2821875,-3.0225) \curveto(5.2821875,-3.0475)(5.2871876,-3.0925)(5.2921877,-3.1125) \curveto(5.2971873,-3.1325)(5.3021874,-3.1575)(5.3021874,-3.1725) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4300g,gradend=color4300g,gradmidpoint=1.0] { \newpath \moveto(4.3821874,2.8475) \lineto(4.4221873,2.8275) \curveto(4.4421873,2.8175)(4.4821873,2.7975)(4.5021877,2.7875) \curveto(4.5221877,2.7775)(4.5571876,2.7575)(4.5721874,2.7475) \curveto(4.5871873,2.7375)(4.6171875,2.7175)(4.6321874,2.7075) \curveto(4.6471877,2.6975)(4.6771874,2.6775)(4.6921873,2.6675) \curveto(4.7071877,2.6575)(4.7321873,2.6325)(4.7421875,2.6175) \curveto(4.7521877,2.6025)(4.7721877,2.5725)(4.7821875,2.5575) \curveto(4.7921877,2.5425)(4.8121877,2.5125)(4.8221874,2.4975) \curveto(4.8321877,2.4825)(4.8521876,2.4525)(4.8621874,2.4375) \curveto(4.8721876,2.4225)(4.8921876,2.3925)(4.9021873,2.3775) \curveto(4.9121876,2.3625)(4.9321876,2.3275)(4.9421873,2.3075) \curveto(4.9521875,2.2875)(4.9721875,2.2475)(4.9821873,2.2275) \curveto(4.9921875,2.2075)(5.0121875,2.1675)(5.0221877,2.1475) \curveto(5.0321875,2.1275)(5.0521874,2.0925)(5.0621877,2.0775) \curveto(5.0721874,2.0625)(5.0871873,2.0275)(5.0921874,2.0075) \curveto(5.0971875,1.9875)(5.1121874,1.9525)(5.1221876,1.9375) \curveto(5.1321874,1.9225)(5.1521873,1.8925)(5.1621876,1.8775) \curveto(5.1721873,1.8625)(5.1871877,1.8275)(5.1921873,1.8075) 
\curveto(5.1971874,1.7875)(5.2121873,1.7525)(5.2221875,1.7375) \curveto(5.2321873,1.7225)(5.2521877,1.6925)(5.2621875,1.6775) \curveto(5.2721877,1.6625)(5.2971873,1.6275)(5.3121877,1.6075) \curveto(5.3271875,1.5875)(5.3521876,1.5525)(5.3621874,1.5375) \curveto(5.3721876,1.5225)(5.3921876,1.4925)(5.4021873,1.4775) \curveto(5.4121876,1.4625)(5.4321876,1.4325)(5.4421873,1.4175) \curveto(5.4521875,1.4025)(5.4721875,1.3725)(5.4821873,1.3575) \curveto(5.4921875,1.3425)(5.5171876,1.3175)(5.5321875,1.3075) \curveto(5.5471873,1.2975)(5.5771875,1.2775)(5.5921874,1.2675) \curveto(5.6071873,1.2575)(5.6371875,1.2325)(5.6521873,1.2175) \curveto(5.6671877,1.2025)(5.6971874,1.1775)(5.7121873,1.1675) \curveto(5.7271876,1.1575)(5.7571874,1.1375)(5.7721877,1.1275) \curveto(5.7871876,1.1175)(5.8171873,1.0975)(5.8321877,1.0875) \curveto(5.8471875,1.0775)(5.8771877,1.0575)(5.8921876,1.0475) \curveto(5.9071875,1.0375)(5.9371877,1.0175)(5.9521875,1.0075) \curveto(5.9671874,0.9975)(6.0071874,0.9825)(6.0321875,0.9775) \curveto(6.0571876,0.9725)(6.1021876,0.9625)(6.1221876,0.9575) \curveto(6.1421876,0.9525)(6.1821876,0.9425)(6.2021875,0.9375) \curveto(6.2221875,0.9325)(6.2621875,0.9225)(6.2821875,0.9175) \curveto(6.3021874,0.9125)(6.3471875,0.9025)(6.3721876,0.8975) \curveto(6.3971877,0.8925)(6.4471874,0.8875)(6.4721875,0.8875) \curveto(6.4971876,0.8875)(6.5471873,0.8875)(6.5721874,0.8875) \curveto(6.5971875,0.8875)(6.6471877,0.8825)(6.6721873,0.8775) \curveto(6.6971874,0.8725)(6.7521877,0.8675)(6.7821875,0.8675) \curveto(6.8121877,0.8675)(6.8671875,0.8675)(6.8921876,0.8675) \curveto(6.9171877,0.8675)(6.9671874,0.8675)(6.9921875,0.8675) \curveto(7.0171876,0.8675)(7.0671873,0.8625)(7.0921874,0.8575) \curveto(7.1171875,0.8525)(7.1671877,0.8475)(7.1921873,0.8475) \curveto(7.2171874,0.8475)(7.2471876,0.8675)(7.2521877,0.8875) \curveto(7.2571874,0.9075)(7.2671876,0.9475)(7.2721877,0.9675) \curveto(7.2771873,0.9875)(7.2821875,1.0325)(7.2821875,1.0575) 
\curveto(7.2821875,1.0825)(7.2821875,1.1325)(7.2821875,1.1575) \curveto(7.2821875,1.1825)(7.2821875,1.2325)(7.2821875,1.2575) \curveto(7.2821875,1.2825)(7.2771873,1.3275)(7.2721877,1.3475) \curveto(7.2671876,1.3675)(7.2621875,1.4125)(7.2621875,1.4375) \curveto(7.2621875,1.4625)(7.2621875,1.5125)(7.2621875,1.5375) \curveto(7.2621875,1.5625)(7.2621875,1.6125)(7.2621875,1.6375) \curveto(7.2621875,1.6625)(7.2621875,1.7125)(7.2621875,1.7375) \curveto(7.2621875,1.7625)(7.2621875,1.8125)(7.2621875,1.8375) \curveto(7.2621875,1.8625)(7.2621875,1.9125)(7.2621875,1.9375) \curveto(7.2621875,1.9625)(7.2621875,2.0125)(7.2621875,2.0375) \curveto(7.2621875,2.0625)(7.2621875,2.1125)(7.2621875,2.1375) \curveto(7.2621875,2.1625)(7.2621875,2.2125)(7.2621875,2.2375) \curveto(7.2621875,2.2625)(7.2621875,2.3125)(7.2621875,2.3375) \curveto(7.2621875,2.3625)(7.2621875,2.4125)(7.2621875,2.4375) \curveto(7.2621875,2.4625)(7.2621875,2.5125)(7.2621875,2.5375) \curveto(7.2621875,2.5625)(7.2621875,2.6125)(7.2621875,2.6375) \curveto(7.2621875,2.6625)(7.2621875,2.7125)(7.2621875,2.7375) \curveto(7.2621875,2.7625)(7.2621875,2.8025)(7.2621875,2.8475) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4300g,gradend=color4300g,gradmidpoint=1.0] { \newpath \moveto(1.2621875,2.8475) \lineto(1.2621875,2.7975) \curveto(1.2621875,2.7725)(1.2621875,2.7225)(1.2621875,2.6975) \curveto(1.2621875,2.6725)(1.2621875,2.6225)(1.2621875,2.5975) \curveto(1.2621875,2.5725)(1.2621875,2.5225)(1.2621875,2.4975) \curveto(1.2621875,2.4725)(1.2621875,2.4225)(1.2621875,2.3975) \curveto(1.2621875,2.3725)(1.2621875,2.3225)(1.2621875,2.2975) \curveto(1.2621875,2.2725)(1.2621875,2.2225)(1.2621875,2.1975) \curveto(1.2621875,2.1725)(1.2621875,2.1225)(1.2621875,2.0975) \curveto(1.2621875,2.0725)(1.2621875,2.0225)(1.2621875,1.9975) \curveto(1.2621875,1.9725)(1.2621875,1.9225)(1.2621875,1.8975) \curveto(1.2621875,1.8725)(1.2621875,1.8225)(1.2621875,1.7975) 
\curveto(1.2621875,1.7725)(1.2621875,1.7225)(1.2621875,1.6975) \curveto(1.2621875,1.6725)(1.2621875,1.6225)(1.2621875,1.5975) \curveto(1.2621875,1.5725)(1.2621875,1.5325)(1.2621875,1.5175) \curveto(1.2621875,1.5025)(1.2621875,1.4625)(1.2621875,1.4375) \curveto(1.2621875,1.4125)(1.2621875,1.3625)(1.2621875,1.3375) \curveto(1.2621875,1.3125)(1.2621875,1.2625)(1.2621875,1.2375) \curveto(1.2621875,1.2125)(1.2621875,1.1625)(1.2621875,1.1375) \curveto(1.2621875,1.1125)(1.2621875,1.0625)(1.2621875,1.0375) \curveto(1.2621875,1.0125)(1.2621875,0.9625)(1.2621875,0.9375) \curveto(1.2621875,0.9125)(1.2821875,0.8825)(1.3021874,0.8775) \curveto(1.3221875,0.8725)(1.3671875,0.8675)(1.3921875,0.8675) \curveto(1.4171875,0.8675)(1.4621875,0.8625)(1.4821875,0.8575) \curveto(1.5021875,0.8525)(1.5471874,0.8475)(1.5721875,0.8475) \curveto(1.5971875,0.8475)(1.6471875,0.8475)(1.6721874,0.8475) \curveto(1.6971875,0.8475)(1.7471875,0.8475)(1.7721875,0.8475) \curveto(1.7971874,0.8475)(1.8371875,0.8475)(1.8521875,0.8475) \curveto(1.8671875,0.8475)(1.9071875,0.8475)(1.9321876,0.8475) \curveto(1.9571875,0.8475)(2.0021875,0.8525)(2.0221875,0.8575) \curveto(2.0421875,0.8625)(2.0871875,0.8675)(2.1121874,0.8675) \curveto(2.1371875,0.8675)(2.1821876,0.8725)(2.2021875,0.8775) \curveto(2.2221875,0.8825)(2.2621875,0.8925)(2.2821875,0.8975) \curveto(2.3021874,0.9025)(2.3421874,0.9125)(2.3621874,0.9175) \curveto(2.3821876,0.9225)(2.4221876,0.9325)(2.4421875,0.9375) \curveto(2.4621875,0.9425)(2.5021875,0.9525)(2.5221875,0.9575) \curveto(2.5421875,0.9625)(2.5771875,0.9775)(2.5921874,0.9875) \curveto(2.6071875,0.9975)(2.6371875,1.0175)(2.6521876,1.0275) \curveto(2.6671875,1.0375)(2.6971874,1.0575)(2.7121875,1.0675) \curveto(2.7271874,1.0775)(2.7571876,1.0975)(2.7721875,1.1075) \curveto(2.7871876,1.1175)(2.8171875,1.1375)(2.8321874,1.1475) \curveto(2.8471875,1.1575)(2.8771875,1.1775)(2.8921876,1.1875) \curveto(2.9071875,1.1975)(2.9321876,1.2225)(2.9421875,1.2375) 
\curveto(2.9521875,1.2525)(2.9771874,1.2775)(2.9921875,1.2875) \curveto(3.0071876,1.2975)(3.0321875,1.3225)(3.0421875,1.3375) \curveto(3.0521874,1.3525)(3.0721874,1.3825)(3.0821874,1.3975) \curveto(3.0921874,1.4125)(3.1121874,1.4425)(3.1221876,1.4575) \curveto(3.1321876,1.4725)(3.1521876,1.5025)(3.1621876,1.5175) \curveto(3.1721876,1.5325)(3.1971874,1.5575)(3.2121875,1.5675) \curveto(3.2271874,1.5775)(3.2521875,1.6025)(3.2621875,1.6175) \curveto(3.2721875,1.6325)(3.2921875,1.6625)(3.3021874,1.6775) \curveto(3.3121874,1.6925)(3.3321874,1.7275)(3.3421874,1.7475) \curveto(3.3521874,1.7675)(3.3671875,1.8075)(3.3721876,1.8275) \curveto(3.3771875,1.8475)(3.3921876,1.8825)(3.4021876,1.8975) \curveto(3.4121876,1.9125)(3.4271874,1.9375)(3.4321876,1.9475) \curveto(3.4371874,1.9575)(3.4521875,1.9825)(3.4621875,1.9975) \curveto(3.4721875,2.0125)(3.4871874,2.0475)(3.4921875,2.0675) \curveto(3.4971876,2.0875)(3.5121875,2.1225)(3.5221875,2.1375) \curveto(3.5321875,2.1525)(3.5521874,2.1825)(3.5621874,2.1975) \curveto(3.5721874,2.2125)(3.5871875,2.2475)(3.5921874,2.2675) \curveto(3.5971875,2.2875)(3.6121874,2.3225)(3.6221876,2.3375) \curveto(3.6321876,2.3525)(3.6521876,2.3825)(3.6621876,2.3975) \curveto(3.6721876,2.4125)(3.6921875,2.4425)(3.7021875,2.4575) \curveto(3.7121875,2.4725)(3.7321875,2.5025)(3.7421875,2.5175) \curveto(3.7521875,2.5325)(3.7721875,2.5625)(3.7821875,2.5775) \curveto(3.7921875,2.5925)(3.8121874,2.6275)(3.8221874,2.6475) \curveto(3.8321874,2.6675)(3.8571875,2.6975)(3.8721876,2.7075) \curveto(3.8871875,2.7175)(3.9171875,2.7375)(3.9321876,2.7475) \curveto(3.9471874,2.7575)(3.9821875,2.7725)(4.0021877,2.7775) \curveto(4.0221877,2.7825)(4.0571876,2.7975)(4.0721874,2.8075) \curveto(4.0871873,2.8175)(4.1171875,2.8275)(4.1621876,2.8275) } \psframe[linewidth=0.06,dimen=outer](7.3021874,2.8375)(1.2521875,-3.2125) \psline[linewidth=0.1cm](1.2621875,0.8475)(1.8821875,0.8475) \psline[linewidth=0.1cm](6.6621876,0.8675)(7.2821875,0.8675) 
\psbezier[linewidth=0.06](1.8621875,0.8475)(3.7021875,0.8675)(3.2821958,2.8315828)(4.2821875,2.8275)(5.2821794,2.8234172)(4.8821874,0.8675)(6.6821876,0.8675) \psline[linewidth=0.1cm](1.2621875,-1.1325)(1.8821875,-1.1325) \psline[linewidth=0.1cm](6.6621876,-1.1525)(7.2821875,-1.1525) \psline[linewidth=0.1cm](3.2821875,-3.1725)(3.2821875,-2.5525) \psline[linewidth=0.1cm](5.2821875,-3.1725)(5.2821875,-2.5525) \psbezier[linewidth=0.06](1.8621875,-1.1325)(3.0821874,-1.1325)(3.2821875,-1.3325)(3.2821875,-2.5925) \psbezier[linewidth=0.06](5.2821875,-2.5725)(5.2821875,-1.3725)(5.5021877,-1.1525)(6.6821876,-1.1525) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.2821875,2.7675)(1.2821875,3.5275) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.9021873,-3.1725)(7.7821875,-3.1725) \usefont{T1}{ptm}{m}{n} \rput(5.353594,-3.3825){$1$} \usefont{T1}{ptm}{m}{n} \rput(7.233594,-3.4025){$2$} \usefont{T1}{ptm}{m}{n} \rput(3.1935937,-3.3825){$0$} \usefont{T1}{ptm}{m}{n} \rput(1.1135937,-3.4025){$-1$} \usefont{T1}{ptm}{m}{n} \rput(1.0735937,-1.1225){$0$} \usefont{T1}{ptm}{m}{n} \rput(1.0935937,0.8375){$1$} \usefont{T1}{ptm}{m}{n} \rput(1.0735937,2.8175){$2$} \usefont{T1}{ptm}{m}{n} \rput(1.1135937,3.4575){$y$} \usefont{T1}{ptm}{m}{n} \rput(7.7035937,-3.4025){$x$} \psdots[dotsize=0.16](4.2821875,2.8275) \psline[linewidth=0.1cm](4.2621875,-3.1725)(4.2621875,-2.5525) \psline[linewidth=0.1cm](4.0621877,-3.1725)(4.0621877,-2.5525) \psbezier[linewidth=0.04](5.0621877,-3.1325)(5.0821877,-0.7325)(5.2221875,-0.0325)(5.6821876,-0.1325)(6.1421876,-0.2325)(5.7621875,-0.5325)(7.2421875,-0.5325) \psbezier[linewidth=0.04](3.4621875,-3.1525)(3.4421875,-1.4325)(3.3622925,-0.054923914)(2.8821876,-0.1525)(2.4020827,-0.2500761)(2.7021875,-0.5525)(1.2821875,-0.5325) \psbezier[linewidth=0.04](3.6621876,-3.1325)(3.6621876,-1.1325)(3.9221876,1.1075)(3.5421875,1.0475)(3.1621876,0.9875)(3.6421876,-0.1325)(1.3021874,0.0475) 
\psbezier[linewidth=0.04](4.8621874,-3.1525)(4.9021873,-1.1325)(4.5421877,1.1275)(4.9021873,1.0475)(5.2621875,0.9675)(4.9021873,-0.2125)(7.2621875,0.0475) \psbezier[linewidth=0.04](4.0621877,-3.1525)(4.0821877,0.6475)(4.1221876,2.0275)(3.8421874,2.1075)(3.5621874,2.1875)(3.5621874,0.1275)(1.2821875,0.5075) \psbezier[linewidth=0.04](4.4621873,-3.1525)(4.4821873,0.6675)(4.3821874,2.2475)(4.6021876,2.0875)(4.8221874,1.9275)(5.4021873,0.1875)(7.2821875,0.4875) \psbezier[linewidth=0.06](4.2621875,-3.1325)(4.2821875,0.2475)(4.2821875,0.6475)(4.2821875,2.8475) \usefont{T1}{ptm}{m}{n} \rput(4.193594,3.1175){$a_0=b_0$} \usefont{T1}{ptm}{m}{n} \rput(3.8296876,-3.4325){\small $I_{p,-1}$} \usefont{T1}{ptm}{m}{n} \rput(4.7296877,-3.4325){\small $I_{p,1}$} \usefont{T1}{ptm}{m}{n} \rput(7.6596875,-0.1325){\small $E_{p,1}$} \usefont{T1}{ptm}{m}{n} \rput(0.7596875,-0.1925){\small $E_{p,-1}$} \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.8821874,-0.2125)(6.0621877,-0.3725) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(2.8421874,1.1275)(2.5421875,0.9675) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.5021877,1.3675)(5.8221874,1.0675) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.2821875,0.1875)(4.2821875,0.5475) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.4621873,1.3275)(4.5021877,1.7075) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.0421877,1.3875)(4.0021877,1.6675) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.7421875,0.6875)(4.8221874,0.9875) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.7421875,0.6475)(3.6621876,0.9875) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 
6.0,arrowlength=2.0,arrowinset=0.4]{->}(2.7821875,-0.1725)(2.5621874,-0.2925) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.7821875,-1.2725)(6.0421877,-1.1925) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.0221875,-1.3725)(2.7821875,-1.2325) \psline[linewidth=0.1cm](4.4621873,-3.1725)(4.4621873,-2.5525) \psline[linewidth=0.1cm](4.8621874,-3.1725)(4.8621874,-2.5525) \psline[linewidth=0.1cm](5.0621877,-3.1725)(5.0621877,-2.5525) \psline[linewidth=0.1cm](3.6621876,-3.1725)(3.6621876,-2.5525) \psline[linewidth=0.1cm](3.4621875,-3.1725)(3.4621875,-2.5525) \psline[linewidth=0.1cm](1.2821875,0.4875)(1.9021875,0.4875) \psline[linewidth=0.1cm](1.2621875,0.0475)(1.8821875,0.0475) \psline[linewidth=0.1cm](1.2621875,-0.5325)(1.8821875,-0.5325) \psline[linewidth=0.1cm](6.6621876,0.4675)(7.2821875,0.4675) \psline[linewidth=0.1cm](6.6621876,0.0275)(7.2821875,0.0275) \psline[linewidth=0.1cm](6.6621876,-0.5325)(7.2821875,-0.5325) \psdots[dotsize=0.16](1.2821875,2.8275) \psdots[dotsize=0.16](1.2621875,0.8475) \psdots[dotsize=0.16](1.2821875,-1.1325) \psdots[dotsize=0.16](1.2621875,-3.1725) \psdots[dotsize=0.16](3.2821875,-3.1525) \psdots[dotsize=0.16](5.2821875,-3.1525) \psdots[dotsize=0.16](7.2821875,-3.1725) \psdots[dotsize=0.16](4.2621875,-3.1725) \psdots[dotsize=0.16](7.2621875,-1.1525) \psdots[dotsize=0.16](7.2621875,0.8675) \end{pspicture} } \\ Fig. 3c: the flow on the level $T_p$ for $p=0$ \end{center} We also assume that there are some neighborhoods $V_\pm$ of $F_\pm$, see~\eqref{eq:Fpm}, such that the generator of the flow is equal to $(0,\pm 1,0)$ in $V_\pm$ respectively; and a neighborhood $U$ of $S_2$ in $X$, see~\eqref{eq:S_2}, such that the generator is equal to $(1,0,0)$ in $U$. We have already claimed that all the trajectories in $X^-$ preserve the $p$-coordinate.
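For later reference, these constancy assumptions can be written out in formulas; here $v$ denotes the generator of the flow (a notation introduced only for this display):
\begin{equation*}
v\big|_{V_\pm}\equiv(0,\pm 1,0),\qquad v\big|_{U}\equiv(1,0,0),
\end{equation*}
so near $F_\pm$ the flow moves vertically with unit speed, while near $S_2$ it moves horizontally with unit speed.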
The latter assumptions mean that all the trajectories that escape the entrance cross-section $I_p$ are ``locally vertical'' (preserving the $x$-coordinate), all the trajectories that come to the exit cross-sections $E_{p,\pm 1}$ are ``locally horizontal'' (preserving the $y$-coordinate), and, moreover, the velocity of the flow near $F_\pm$ and $S_2$ is locally constant and equal to $1$. These assumptions will allow us to descend the flow easily and correctly ($C^\infty$-smoothly) to the glued manifold in the $4$-dimensional case, where we deal with a genuine (non-stratified) manifold. The description of the flow restricted to $X^-$ is finished. All trajectories in $X^+$, except for the fixed points $a_p$ and $b_p$, $p\in [0,1/2]$, and the unique sink $s_3:=(1/2,2,1/2)$, start from the plane $L$ that divides $X^+$ and $X^-$, smoothly continue the corresponding trajectories from $X^-$ that come to this plane, and tend to the sink $s_3$. The description of the flow on $X$ is finished. We descend it to $\widetilde X$ (without any worries about its smoothness, because there is no smooth structure on the stratum $S_2$ anyway). Let $G\colon \widetilde X\to\widetilde X$ denote its time $1$ map. \begin{lemma}\label{3} The $\delta$-measure sitting at the point $s_3$ is a global SRB measure for the map $G$. The set $K_3$ of the points that do not tend to $s_3$ under iterates of $G$ has zero Lebesgue $3$-measure and Hausdorff dimension $3$. \end{lemma} \begin{proof} The set of the points $(x,p)$ of the ``front side'' $\{y=0\}$ that do not tend to the point $s_3$ is the intersection $K_2'$ of the set $K_2$ from the previous section with the half-space $p\leqslant 1/2$. By Lemma~\ref{2}, the set $K_2'$ has Hausdorff dimension $2$ and zero Lebesgue $2$-measure. The set $K_3$ is the saturation of $K_2'$ by the trajectories of the flow. Therefore $K_3$ has zero Lebesgue $3$-measure by the Fubini theorem and Hausdorff dimension $3$ by Proposition~\ref{mdp}.
The rest of the points obviously tend to $s_3$. \end{proof} The map $G$ is continuous, but, as we discussed before, after the gluing~\eqref{eq:equiv3} described above we obtain a stratified manifold. To get rid of the $2$-stratum $S_2$ we need to add one more dimension. \section{Dimension 4: gluing up a genuine manifold} We start with an informal description of a flow on the $4$-manifold with a piecewise-smooth boundary, and then present the corresponding formulas. \subsection{Heuristic description again} Recall that $X=T\times [0,1]$ is the subset of the $3$-parallelepiped, before gluing the boundaries. Multiply it by an interval $[0,1]$, introducing a new coordinate $h$. Denote \begin{equation}\label{eq:Mpm} \begin{aligned} M^-=X^-\times [0,1],\\ M^+=X^+\times [0,1]. \end{aligned} \end{equation} We define a flow in $M^-$ as follows: its trajectories start on each \textit{entrance square cross-section on the level $p$} (that is, $I_p$ multiplied by the $h$-interval $[0,1]$) and preserve the $h$-coordinate until they reach the boundary. The flow is actually the same as in $X^-$ in the previous section, with the condition $\dot h=0$ added to the generator. Almost all trajectories in $M^+$ tend to a unique sink $s_4$ (as in the previous section, with $M^+$ instead of $X^+$ and $s_4$ instead of $s_3$). Then we define the gluing of the boundaries of $X\times [0,1]$ using the new coordinate $h$: the \textit{left and right hand exit square cross-sections} (that are $E_\pm$ multiplied by the $h$-interval $[0,1]$) are contracted by a factor of three in the $h$-direction and are kept apart by gluing them to \textit{separated} parts of the entrance square cross-section on the level $q(p)$. Thus we obtain a flow that acts on a compact manifold with a piecewise smooth boundary. The time $1$ map of this flow is continuous, admits a global SRB measure sitting at the point $s_4$, and the SET does not hold for it.
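In coordinates, the flow on $M^-$ just described is generated (with $v=(v_1,v_2,v_3)$ denoting the generator of the three-dimensional flow on $X^-$, a notation introduced only for this display) by
\begin{equation*}
(\dot x,\dot y,\dot p,\dot h)=\bigl(v_1(x,y,p),\,v_2(x,y,p),\,v_3(x,y,p),\,0\bigr),
\end{equation*}
so $h$ is a first integral and each slice $X^-\times\{h\}$ is invariant.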
Then, by means of several simple tricks, we derive from this flow another one, which acts on a boundaryless manifold and whose time $1$ map is bijective. \subsection{Construction of a manifold with a piecewise smooth boundary} Denote the ``front'', or ``entrance'', cross-section by $$ M_{-1}:=\{(x,-1,p,h)\mid x,h\in [0,1], p\in [0,1/2]\}. $$ The entrance square cross-section on the level $p$ \begin{equation*} X_p=\{(x,-1,p,h)\mid x,h\in [0,1]\} \end{equation*} is divided into three parts in the $x$-direction according to the splitting~\eqref{eq:Ipk}: \begin{equation*} \begin{aligned} &X_{p,-1}=\{(x,-1,p,h)\mid x\in I_{p,-1},h\in [0,1]\};\\ &X_{p,0}=\{(x,-1,p,h)\mid x\in I_{p,0},h\in [0,1]\};\\ &X_{p,1}=\{(x,-1,p,h)\mid x\in I_{p,1},h\in [0,1]\}. \end{aligned} \end{equation*} Let us also divide $X_p$ into three equal parts in the $h$-direction: \begin{equation*} \begin{aligned} &Z_{p,-1}=\{(x,-1,p,h)\mid x\in [0,1], h\in [0,1/3)\};\\ &Z_{p,0}=\{(x,-1,p,h)\mid x\in [0,1], h\in [1/3,2/3]\};\\ &Z_{p,1}=\{(x,-1,p,h)\mid x\in [0,1], h\in (2/3,1]\}. \end{aligned} \end{equation*} We also define the ``exit squares'' on the level $p$: \begin{equation*} \begin{aligned} &A_{p,0}=\{(x,y,p,h)\mid (x,y,p)\in E_{p,0}, h\in [0,1]\},\\ &A_{p,\pm 1}=\{(x,y,p,h)\mid (x,y,p)\in E_{p,\pm 1}, h\in [0,1]\}, \end{aligned} \end{equation*} and glue the squares $A_{p,\pm 1}$ linearly to the rectangles $Z_{q(p),\pm 1}$ by the following equivalence: \begin{equation}\label{eq:equiv4} \begin{aligned} &(-1,y,p,h)\equiv(y,-1,q(p),h/3);\\ &(2,y,p,h)\equiv(y,-1,q(p),1-(h/3)). \end{aligned} \end{equation} Thus we obtain a genuine $C^\infty$-smooth manifold $M$ with a piecewise smooth boundary. Indeed, all we need is to verify the existence of local charts around the set $N\subset M_{-1}$, \begin{equation}\label{eq:N} N:=\{(x,-1,p,h)\mid x\in [0,1], p\in[0,1/3], h\in[0,1/3]\cup[2/3,1]\}.
\end{equation} Note that one can continue the manifold $X\times [0,1]$ in the $x$-direction to the neighborhoods of $A_{p,\pm 1}$ and in the $y$-direction to the neighborhood of $Z_{q(p),\pm 1}$ (in $\mathbb R^4$), and extend the equivalence~\eqref{eq:equiv4} by the following formulas (for $\varepsilon$ sufficiently close to $0$): \begin{equation}\label{eq:equiv4ext} \begin{aligned} &(-1+\varepsilon,y,p,h)\equiv(y,-1-\varepsilon,q(p),h/3);\\ &(2-\varepsilon,y,p,h)\equiv(y,-1-\varepsilon,q(p),1-(h/3)). \end{aligned} \end{equation} Then the local charts on $N$ descend from $\mathbb R^4$ to $M$ according to the latter equivalence. \subsection{Construction of a flow} In fact, we should add only a few words to the heuristic description of the flow to make it rigorous. As was mentioned before, the flow in $M^-$ is actually the same flow as described in the $3$-dimensional case, multiplied directly by the coordinate $h$ (preserving it). We assumed before that, for the flow described in the $3$-dimensional case, there are some neighborhoods $V_\pm$ of $F_\pm$ and $U$ of $S_2$ such that the generator of the flow is equal to $(0,\pm 1,0)$ in $V_\pm$ respectively, and is equal to $(1,0,0)$ in $U$. Thus, by the equivalence~\eqref{eq:equiv4ext}, there is a neighborhood of $N$ in $M$ such that the generator of the flow is identically equal to $(1,0,0,0)$ in this neighborhood. This means that the flow descends to $M$ $C^\infty$-smoothly. Figure~4 displays the first return map to the ``front'' cubic cross-section $M_{-1}$ for points in $X_{p,-1}$ and $X_{p,1}$ that return to this cross-section (equivalently, for points whose $x$-coordinate is not central in terms of the map $g$, see Section~2.2). This map resembles a ``non-autonomous horseshoe map'': the $p$-levels are non-invariant and the expansion rate depends on the level.
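As a sanity check of the gluing~\eqref{eq:equiv4}, note that the two exit squares are sent into disjoint $h$-slices of the entrance square: for $h\in[0,1]$,
\begin{equation*}
\frac h3\in\Bigl[0,\frac13\Bigr],\qquad 1-\frac h3\in\Bigl[\frac23,1\Bigr],
\end{equation*}
so, up to the boundary values $h=1/3$ and $h=2/3$, the image of the square at $x=-1$ lies in $Z_{q(p),-1}$, the image of the square at $x=2$ lies in $Z_{q(p),1}$, and the middle slice $Z_{q(p),0}$ remains free; this is exactly the contraction by a factor of three from the heuristic description.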
\begin{center} \scalebox{1.2} { \begin{pspicture}(0,-4.5289063)(12.722813,4.5089064) \definecolor{color509b}{rgb}{0.6,0.6,0.6} \definecolor{color97b}{rgb}{0.8,0.8,0.8} \psline[linewidth=0.08cm](1.1395311,3.4689062)(4.539531,4.4689064) \psline[linewidth=0.08cm](8.559532,3.448906)(11.959531,4.4489064) \psline[linewidth=0.08cm](8.559532,-3.971094)(12.039531,-3.011094) \psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.1595312,-3.971094)(5.2795315,-2.7710938) \psline[linewidth=0.08cm](11.979531,4.4689064)(11.999531,-3.0310938) \psline[linewidth=0.08cm](4.5195312,4.4689064)(12.019531,4.4689064) \psline[linewidth=0.03cm,linestyle=dashed,dash=0.16cm 0.16cm](4.5195312,0.48890612)(12.019531,0.48890612) \psline[linewidth=0.06cm](1.099531,-2.0510938)(8.599532,-2.0510938) \psline[linewidth=0.03cm,linestyle=dashed,dash=0.16cm 0.16cm](1.1795312,-2.0310938)(4.579531,-1.031094) \pspolygon[linewidth=0.03,fillstyle=solid,fillcolor=color509b](9.89953,0.48890612)(6.5195312,-0.53109396)(8.579531,-0.55109394)(11.93953,0.48890612) \psline[linewidth=0.06cm](1.1395311,-0.53109396)(8.639532,-0.53109396) \psline[linewidth=0.06cm](8.619531,-0.53109396)(11.999531,0.48890612) \psdots[dotsize=0.16](9.879532,0.48890612) \psdots[dotsize=0.16](11.979531,0.48890612) \psdots[dotsize=0.16](6.539531,-0.5110939) \psdots[dotsize=0.16](6.4195313,0.5089061) \psdots[dotsize=0.16](1.1595312,-0.53109396) \psdots[dotsize=0.16](8.579531,-0.55109394) \pspolygon[linewidth=0.03,fillstyle=solid,fillcolor=color97b](10.879531,-1.371094)(3.439531,-1.371094)(4.539531,-1.031094)(11.959531,-1.031094) \pspolygon[linewidth=0.03,fillstyle=solid,fillcolor=color509b](4.559531,0.48890612)(1.2195312,-0.5110939)(3.1195314,-0.5110939)(6.4395313,0.48890612) \pspolygon[linewidth=0.03,fillstyle=solid,fillcolor=color97b](8.599532,-2.0310938)(1.1595312,-2.0310938)(2.279531,-1.7110939)(9.679532,-1.6910938) 
\psline[linewidth=0.06cm](8.559532,-2.0310938)(11.959531,-1.031094) \psframe[linewidth=0.08,dimen=outer](8.619531,3.488906)(1.1195312,-4.0110936) \psdots[dotsize=0.16](3.0995314,-0.5110939) \psdots[dotsize=0.16](4.5195312,0.5089061) \psline[linewidth=0.06cm](4.7795315,-3.971094)(1.1795312,3.448906) \psline[linewidth=0.06cm](4.7795315,-3.951094)(8.599532,3.488906) \psline[linewidth=0.03cm,linestyle=dashed,dash=0.16cm 0.16cm](8.079531,-2.971094)(11.979531,4.4689064) \psline[linewidth=0.03cm,linestyle=dashed,dash=0.16cm 0.16cm](8.079531,-2.9910936)(4.539531,4.4889064) \psframe[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm,dimen=outer](12.019531,4.4889064)(4.5195312,-3.011094) \psdots[dotsize=0.16](1.1395311,-2.0510938) \psdots[dotsize=0.16](2.299531,-1.7110939) \psdots[dotsize=0.16](3.439531,-1.3910939) \psdots[dotsize=0.16](4.539531,-1.011094) \psdots[dotsize=0.16](11.979531,-1.031094) \psdots[dotsize=0.16](10.799531,-1.391094) \psdots[dotsize=0.16](9.659532,-1.7110939) \psdots[dotsize=0.16](8.579531,-2.0510938) \psline[linewidth=0.04cm,linestyle=dotted,dotsep=0.16cm](3.0995314,-0.53109396)(3.1195314,-3.931094) \psline[linewidth=0.04cm,linestyle=dotted,dotsep=0.16cm](6.539531,-0.55109394)(6.539531,-3.951094) \psdots[dotsize=0.16](6.539531,-3.971094) \psdots[dotsize=0.16](3.1195314,-3.971094) \psdots[dotsize=0.16](4.7795315,-3.971094) \usefont{T1}{ptm}{m}{n} \rput(4.882344,-4.3010936){$I_{p,0}$} \usefont{T1}{ptm}{m}{n} \rput(7.6023436,-4.2610936){$I_{p,1}$} \usefont{T1}{ptm}{m}{n} \rput(2.2223437,-4.3010936){$I_{p,-1}$} \usefont{T1}{ptm}{m}{n} \rput(5.2923436,-3.721094){$1/2$} \psline[linewidth=0.03cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.1595312,3.2689064)(1.1595312,4.328906) \psline[linewidth=0.03cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(8.179532,-3.9910936)(9.879532,-3.971094) \usefont{T1}{ptm}{m}{n} \rput(9.602344,-4.1810937){$x$} \usefont{T1}{ptm}{m}{n} \rput(0.9723436,4.2589064){$p$} \usefont{T1}{ptm}{m}{n} 
\rput(5.0723433,-2.6210938){$h$} \usefont{T1}{ptm}{m}{n} \rput(2.0123436,-3.441094){$1/3$} \psline[linewidth=0.034cm,linestyle=dotted,dotsep=0.16cm](2.299531,-1.6910938)(2.3,-3.5910938) \psline[linewidth=0.034cm,linestyle=dotted,dotsep=0.16cm](3.44,-1.5110937)(3.439531,-3.291094) \psdots[dotsize=0.16](2.279531,-3.6310937) \psdots[dotsize=0.16](3.439531,-3.311094) \usefont{T1}{ptm}{m}{n} \rput(3.1123435,-3.181094){$2/3$} \usefont{T1}{ptm}{m}{n} \rput(4.3723435,-2.801094){$1$} \usefont{T1}{ptm}{m}{n} \rput(0.8923437,-3.9810936){$0$} \usefont{T1}{ptm}{m}{n} \rput(8.612343,-4.2610936){$1$} \usefont{T1}{ptm}{m}{n} \rput(0.9323437,3.558906){$1$} \usefont{T1}{ptm}{m}{n} \rput(0.85234374,-0.5010939){$p_0$} \usefont{T1}{ptm}{m}{n} \rput(0.6223438,-2.0610938){$q(p_0)$} \psdots[dotsize=0.16](1.1595312,3.4689062) \psdots[dotsize=0.16](4.559531,-2.971094) \psdots[dotsize=0.16](8.579531,-3.971094) \psbezier[linewidth=0.1,arrowsize=0.05291667cm 5.0,arrowlength=1.0,arrowinset=0.4]{->}(3.7395313,-0.0310939)(3.82,-0.89109373)(5.179531,-1.6710938)(5.48,-1.8910937) \psbezier[linewidth=0.1,arrowsize=0.05291667cm 5.0,arrowlength=1.0,arrowinset=0.4]{->}(9.43953,0.0289061)(8.9,-0.87109375)(8.51953,-0.85109395)(7.74,-1.2710937) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 3.0,arrowlength=3.0,arrowinset=0.4]{->}(2.8395312,2.208906)(2.8395312,1.4089061)(3.5995314,0.4289061)(4.3595314,0.0289061) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 3.0,arrowlength=3.0,arrowinset=0.4]{->}(9.839532,2.188906)(10.259532,1.6889061)(10.519531,0.84890616)(10.219532,0.0489061) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 3.0,arrowlength=3.0,arrowinset=0.4]{->}(11.759532,-3.451094)(11.72,-2.3710938)(12.0,-1.1910938)(10.12,-1.1910938) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 3.0,arrowlength=3.0,arrowinset=0.4]{->}(10.139532,-3.711094)(9.86,-2.8710938)(9.74,-1.8310938)(8.1,-1.8510938) \usefont{T1}{ptm}{m}{n} \rput(10.702343,-3.8810937){$Z_{q(p_0),0}$} \usefont{T1}{ptm}{m}{n} 
\rput(11.502344,-3.601094){$Z_{q(p_0),2}$} \usefont{T1}{ptm}{m}{n} \rput(2.8723438,2.4789062){$X_{p_0,0}$} \usefont{T1}{ptm}{m}{n} \rput(9.812344,2.418906){$X_{p_0,2}$} \psdots[dotsize=0.16](1.1595312,-3.971094) \end{pspicture} } \\ Fig. 4: first return map for the front cubic transversal $M_{-1}$ \end{center} Analogously to the $3$-dimensional case, we put in $M^+$, see~\eqref{eq:Mpm}, the unique sink $s_4=(1/2,2,1/2,1/2)$. This sink attracts all the trajectories of the flow as soon as they reach $M^+$, except for two surfaces of fixed points: $$ \{(x,y,p,h)\mid (x,y)=a_p, p\in [0,1/2], h\in [0,1]\} $$ and $$ \{(x,y,p,h)\mid (x,y)=b_p, p\in [0,1/2], h\in [0,1]\}. $$ Let $H\colon M\to M$ denote the time $1$ map of the flow described above (one should distinguish it from the first return map displayed in Figure~4). \begin{lemma}\label{4} The $\delta$-measure sitting at the point $s_4$ is a global SRB measure for the map $H$. The set $K_4$ of the points that do not tend to $s_4$ under iterations of $H$ has zero Lebesgue $4$-measure and Hausdorff dimension $4$. \end{lemma} \begin{proof} The set $K_4$ is the product of $K_3$ and the interval $[0,1]$, modulo a set of Hausdorff dimension $3$, which affects neither its Lebesgue $4$-measure nor its Hausdorff dimension. Hence all the assertions of the lemma follow from the corresponding statements of Lemma~\ref{3}. \end{proof} \subsection{Proof of the Theorem} \begin{proof}[Proof of the theorem] The theorem can be easily deduced from Lemma~\ref{4}. \textbf{Step 1.} To construct an example of an endomorphism of a $4$-manifold with a piecewise smooth boundary into itself for which the SET fails, one should take the map $H$, which has a global SRB measure (the $\delta$-measure sitting at $s_4$), and any function $\varphi$ such that $\varphi(s_4)=0$ and $\varphi(t)=1$ for all $t\in K_4$. Then $K_{\varphi,1}=K_4$, and hence, by Lemma~\ref{4}, the SET does not hold for $H$. Then we make three simple improvements.
First, we construct a similar example on the manifold $\widetilde M$ with a $C^\infty$-smooth boundary. \textbf{Step 2.} The manifold $M$ has a piecewise smooth boundary. The non-smooth set of the boundary lies on the cross-section $M_{-1}$. It consists of the boundary (in $M_{-1}$) of the set $N$ of points that are glued by the equivalence~\eqref{eq:equiv4}, see~\eqref{eq:N}. Then one can attach $4$-dimensional regions to a small neighborhood of this set, bounded by some $C^\infty$-hypersurfaces that link $C^\infty$-smoothly to the boundary of $M$ (see Figure~5). Thus we obtain a manifold $\widetilde M$ that has a $C^\infty$-smooth boundary. The flow can be naturally extended to $\widetilde M$ by the same formula $(1,0,0,0)$ for the generator (in the natural chart in a neighborhood of $N$ in $\mathbb R^4$). Obviously, the SET still does not hold for the time $1$ map $\widetilde H$ of this flow on $\widetilde M$. \begin{center} \scalebox{1.2} { \begin{pspicture}(0,-2.65)(9.937813,2.64) \definecolor{color898b}{rgb}{0.6,0.6,0.6} \pscustom[linewidth=0.02,fillstyle=solid,fillcolor=color898b] { \newpath \moveto(3.9078126,1.25) \lineto(3.9178126,1.21) \curveto(3.9228125,1.19)(3.9278126,1.145)(3.9278126,1.12) \curveto(3.9278126,1.095)(3.9228125,1.045)(3.9178126,1.02) \curveto(3.9128125,0.995)(3.9078126,0.945)(3.9078126,0.92) \curveto(3.9078126,0.895)(3.9078126,0.845)(3.9078126,0.82) \curveto(3.9078126,0.795)(3.9078126,0.745)(3.9078126,0.72) \curveto(3.9078126,0.695)(3.9078126,0.645)(3.9078126,0.62) \curveto(3.9078126,0.595)(3.9078126,0.545)(3.9078126,0.52) \curveto(3.9078126,0.495)(3.9078126,0.445)(3.9078126,0.42) \curveto(3.9078126,0.395)(3.9078126,0.345)(3.9078126,0.32) \curveto(3.9078126,0.295)(3.9078126,0.245)(3.9078126,0.22) \curveto(3.9078126,0.195)(3.9078126,0.145)(3.9078126,0.12) \curveto(3.9078126,0.095)(3.9128125,0.05)(3.9178126,0.03) \curveto(3.9228125,0.01)(3.9278126,-0.035)(3.9278126,-0.06) \curveto(3.9278126,-0.085)(3.9278126,-0.135)(3.9278126,-0.16)
\curveto(3.9278126,-0.185)(3.9278126,-0.235)(3.9278126,-0.26) \curveto(3.9278126,-0.285)(3.9228125,-0.33)(3.9178126,-0.35) \curveto(3.9128125,-0.37)(3.9078126,-0.415)(3.9078126,-0.44) \curveto(3.9078126,-0.465)(3.9078126,-0.515)(3.9078126,-0.54) \curveto(3.9078126,-0.565)(3.9078126,-0.615)(3.9078126,-0.64) \curveto(3.9078126,-0.665)(3.9128125,-0.715)(3.9178126,-0.74) \curveto(3.9228125,-0.765)(3.9278126,-0.815)(3.9278126,-0.84) \curveto(3.9278126,-0.865)(3.9328125,-0.89)(3.9378126,-0.89) \curveto(3.9428124,-0.89)(3.9578125,-0.875)(3.9678125,-0.86) \curveto(3.9778125,-0.845)(3.9928124,-0.805)(3.9978125,-0.78) \curveto(4.0028124,-0.755)(4.0178127,-0.71)(4.0278125,-0.69) \curveto(4.0378127,-0.67)(4.0528126,-0.63)(4.0578127,-0.61) \curveto(4.0628123,-0.59)(4.0728126,-0.55)(4.0778127,-0.53) \curveto(4.0828123,-0.51)(4.0978127,-0.475)(4.1078124,-0.46) \curveto(4.1178126,-0.445)(4.1328125,-0.41)(4.1378126,-0.39) \curveto(4.1428127,-0.37)(4.1578126,-0.33)(4.1678123,-0.31) \curveto(4.1778126,-0.29)(4.1928124,-0.25)(4.1978126,-0.23) \curveto(4.2028127,-0.21)(4.2178125,-0.175)(4.2278123,-0.16) \curveto(4.2378125,-0.145)(4.2578125,-0.115)(4.2678127,-0.1) \curveto(4.2778125,-0.085)(4.2978125,-0.055)(4.3078127,-0.04) \curveto(4.3178124,-0.025)(4.3328123,0.01)(4.3378124,0.03) \curveto(4.3428125,0.05)(4.3578124,0.085)(4.3678126,0.1) \curveto(4.3778124,0.115)(4.4028125,0.145)(4.4178123,0.16) \curveto(4.4328127,0.175)(4.4578123,0.205)(4.4678125,0.22) \curveto(4.4778123,0.235)(4.4978123,0.265)(4.5078125,0.28) \curveto(4.5178127,0.295)(4.5378127,0.325)(4.5478125,0.34) \curveto(4.5578127,0.355)(4.5828123,0.38)(4.5978127,0.39) \curveto(4.6128125,0.4)(4.6378126,0.425)(4.6478124,0.44) \curveto(4.6578126,0.455)(4.6828127,0.48)(4.6978126,0.49) \curveto(4.7128124,0.5)(4.7378125,0.525)(4.7478123,0.54) \curveto(4.7578125,0.555)(4.7778125,0.585)(4.7878127,0.6) \curveto(4.7978125,0.615)(4.8278127,0.64)(4.8478127,0.65) \curveto(4.8678126,0.66)(4.9028125,0.68)(4.9178123,0.69) 
\curveto(4.9328127,0.7)(4.9678125,0.72)(4.9878125,0.73) \curveto(5.0078125,0.74)(5.0378127,0.755)(5.0478125,0.76) \curveto(5.0578127,0.765)(5.0828123,0.78)(5.0978127,0.79) \curveto(5.1128125,0.8)(5.1478124,0.82)(5.1678123,0.83) \curveto(5.1878123,0.84)(5.2228127,0.865)(5.2378125,0.88) \curveto(5.2528124,0.895)(5.2828126,0.92)(5.2978125,0.93) \curveto(5.3128123,0.94)(5.3428125,0.96)(5.3578124,0.97) \curveto(5.3728123,0.98)(5.4128127,0.99)(5.4378123,0.99) \curveto(5.4628124,0.99)(5.5078125,1.0)(5.5278125,1.01) \curveto(5.5478125,1.02)(5.5878124,1.04)(5.6078124,1.05) \curveto(5.6278124,1.06)(5.6678123,1.075)(5.6878123,1.08) \curveto(5.7078123,1.085)(5.7478123,1.095)(5.7678127,1.1) \curveto(5.7878127,1.105)(5.8278127,1.115)(5.8478127,1.12) \curveto(5.8678126,1.125)(5.9078126,1.135)(5.9278126,1.14) \curveto(5.9478126,1.145)(5.9878125,1.155)(6.0078125,1.16) \curveto(6.0278125,1.165)(6.0678124,1.175)(6.0878124,1.18) \curveto(6.1078124,1.185)(6.1478124,1.195)(6.1678123,1.2) \curveto(6.1878123,1.205)(6.2278123,1.215)(6.2478123,1.22) \curveto(6.2678127,1.225)(6.2978125,1.235)(6.3278127,1.25) } \psline[linewidth=0.06cm](2.0478125,1.25)(9.907812,1.25) \psline[linewidth=0.06cm](3.9278126,-2.49)(3.9078126,2.61) \psbezier[linewidth=0.08](9.887813,1.23)(8.987812,1.23)(8.867812,1.25)(6.6878123,1.23)(4.5078125,1.21)(3.9078126,-0.45)(3.9278126,-1.53)(3.9478126,-2.61)(3.9278126,-1.63)(3.9278126,-2.51) \psline[linewidth=0.03cm](4.5078125,0.25)(4.5078125,2.61) \psline[linewidth=0.03cm](5.1078124,0.83)(5.1078124,2.61) \psline[linewidth=0.03cm](5.7078123,1.11)(5.7078123,2.61) \psline[linewidth=0.03cm](6.3078127,1.23)(6.3078127,2.61) \psline[linewidth=0.03cm](6.9078126,1.25)(6.9078126,2.61) \psline[linewidth=0.03cm](7.4878125,1.23)(7.4878125,2.61) \psline[linewidth=0.03cm](8.087812,1.25)(8.087812,2.61) \psline[linewidth=0.03cm](8.687813,1.27)(8.687813,2.61) \psline[linewidth=0.03cm](9.287812,1.23)(9.287812,2.61) \psline[linewidth=0.03cm](3.2878125,-2.45)(3.3078125,2.61) 
\psline[linewidth=0.03cm](2.6878126,-2.45)(2.7078125,2.61) \psline[linewidth=0.03cm](2.0878124,-2.45)(2.1078124,2.61) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(3.9078126,2.11)(3.9078126,2.35) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(4.5078125,0.87)(4.5078125,1.11) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(2.1078124,2.07)(2.1078124,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(2.7078125,2.07)(2.7078125,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(3.3078125,2.07)(3.3078125,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(4.5078125,2.09)(4.5078125,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(5.1078124,2.09)(5.1078124,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(5.7078123,2.09)(5.7078123,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(6.3078127,2.07)(6.3078127,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(6.9078126,2.07)(6.9078126,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(7.4878125,2.09)(7.4878125,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(8.087812,2.11)(8.087812,2.35) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(8.687813,2.09)(8.687813,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(9.287812,2.09)(9.287812,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(5.1078124,1.13)(5.1078124,1.37) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 
8.0,arrowlength=3.0,arrowinset=0.4]{->}(3.9078126,0.49)(3.9078126,0.73) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(3.2878125,-0.13)(3.2878125,0.11) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(2.6878126,-0.51)(2.6878126,-0.27) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(2.0878124,-0.95)(2.0878124,-0.71) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(7.2678127,0.31)(6.6478124,0.95)(4.9078126,1.19)(4.2678127,0.67) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(5.6278124,-1.11)(4.8078127,-0.87)(4.6278124,-0.59)(4.4078126,0.05) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(0.8878125,1.75)(1.6478125,2.59)(2.1878126,1.83)(2.5678124,1.31) \usefont{T1}{ptm}{m}{n} \rput(0.9109375,1.66){} \usefont{T1}{ptm}{m}{n} \rput(0.92921877,1.32){$M_{-1}$} \usefont{T1}{ptm}{m}{n} \rput(6.3301563,-1.04){} \usefont{T1}{ptm}{m}{n} \rput(6.49,-1.32){$\partial\widetilde M$} \usefont{T1}{ptm}{m}{n} \rput(7.6834373,-0.22){$\widetilde M\setminus M$} \usefont{T1}{ptm}{m}{n} \rput(7.310625,0.2){} \end{pspicture} } \\ Fig. 5: slow flow on the smoothened manifold $\widetilde M$ \end{center} \textbf{Step 3.} The time $1$ map $\widetilde H$ of the described flow is not invertible. To make it invertible, we use the ``slow-down procedure'' near the boundary of $\widetilde M$. Namely, we multiply the generator of the flow by a $C^\infty$-function that is equal to $0$ on the boundary $\partial \widetilde M$ of $\widetilde M$ and is positive in its interior $\mathop{\mathrm{Int}} \widetilde M$ (see Figure~5 again). For the function $\varphi$ described above, this procedure changes neither the Hausdorff dimension of the $(\varphi,1)$-nontypical set of the time $1$ map of the flow nor the Lebesgue $4$-measure of the basin of attraction of $s_4$.
Indeed, for every point of $\mathop{\mathrm{Int}} \widetilde M$, its positive semiorbit under the flow does not intersect the boundary of $\widetilde M$. Therefore, the $(\varphi,1)$-nontypical set and the basin of attraction of $s_4$ for these two flows (the ``fast flow'' and the ``slow flow'') can differ only by subsets of $\partial \widetilde M$. But $\partial \widetilde M$ has dimension $3$ and does not affect the Lebesgue $4$-measure or the Hausdorff dimension of larger sets (sets of Hausdorff dimension greater than $3$). We have thus constructed an example of a $C^\infty$-\textit{diffeomorphism} $H_{slow}$ of a compact $4$-manifold with a smooth boundary for which the SET does not hold. \textbf{Step 4.} Note that in the previous example the boundary $\partial \widetilde M$ consists entirely of fixed points of $H_{slow}$ (one of these points, $s_4$, is the support of a global SRB measure). To obtain an example on a boundaryless manifold, we consider the double of the manifold $\widetilde M$. Namely, we take two copies of $\widetilde M$ and glue the boundaries of these copies by the natural ``identity'' map. The two points $s_4$ on the copies are glued together. Our diffeomorphism $H_{slow}$ naturally extends to a diffeomorphism $\mathcal H$ of the new manifold $\mathcal M$ (because $\partial \widetilde M$ is fixed pointwise by $H_{slow}$). Obviously, $\mathcal H$ has a global SRB measure on $\mathcal M$ (the glued point $s_4$), but the SET does not hold for it, just as it does not hold for $H_{slow}$. \end{proof} \subsection{Topological type of the manifold} Our construction shows that the $4$-dimensional manifold $M$ is homeomorphic to a neighborhood in $\mathbb R^4$ of a union of two circles. It is also homeomorphic to the direct product of a filled pretzel and an interval. The same can be said about $\widetilde M$. Hence, the manifold $\mathcal M$ is the double of the direct product of a filled pretzel and an interval. $\mathcal M$ can also be described as a connected sum of two copies of $S^3\times S^1$.
Indeed, the filled pretzel is a connected sum of two solid tori $D^2\times S^1$. Hence, the direct product of a filled pretzel and an interval is homeomorphic to the connected sum of two copies of $D^3\times S^1$. But the doubling operation and the operation of taking a connected sum (of two equal manifolds) topologically commute. The doubling of $D^3\times S^1$ is obviously homeomorphic to $S^3\times S^1$ (since the double of every $n$-disk $D^n$ is $S^n$). Therefore, $\mathcal M$ is homeomorphic to a connected sum of two copies of $S^3\times S^1$. \end{document}
\begin{document} \newcommand{\gruppo}{\mathbb{Z}_3} \newcommand{\luna}{\mathbb{Z}_9} \newcommand{\freefactor}{\mathfrak{L} \left (\mathbf{F} _\frac{11}{3}\right)} \newcommand{\inff}{\mathfrak{L} \left (\mathbf{F} _{t}\right)} \newcommand{\pippo}{\inff\otimes R} \newcommand{\tenpro}{\freefactor\otimes R_{0}} \newcommand{\pten}{\left (\tenpro\right )} \newcommand{\obstruction}{e^{\frac{2\pi i}{3}}} \newcommand{\obstruconj}{e^{\frac{-2\pi i}{3}}} \newcommand{\jonesinv}{e^{\frac{2\pi i}{9}}} \newcommand{\jonesconj}{e^{\frac{-2\pi i}{9}}} \newcommand{\M}{\mathcal {M}} \newcommand{\Ad}{\operatorname{Ad\,}} \newcommand{\dualgr}{\widehat{\mathbb{Z}}_{3}} \title[On a Subfactor Construction]{On a Subfactor Construction of a Factor Non-Antiisomorphic to Itself} \author{Maria Grazia Viola} \address{\hskip-\parindent Department of Mathematics \\Texas A\&M University \\ College Station TX 77843, USA\\ Fax: (979) 845-3643.} \email{viola@math.tamu.edu} \subjclass[2000]{46L37, 46L40, 46L54} \begin{abstract} We define a $\mathbb{Z}_3$-kernel $\alpha$ on $\freefactor$ and a $\mathbb{Z}_3$-kernel $\beta$ on the hyperfinite factor $R$, which have conjugate obstructions to
lifting. Hence, $\alpha\otimes\beta$ can be perturbed by an inner automorphism to produce an action $\gamma$ on $\freefactor\otimes R_{0}$. The aim of this paper is to show that the factor $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$, which is similar to Connes's example of a II$_{1}$ factor non-antiisomorphic to itself, is the enveloping algebra of an inclusion of II$_{1}$ factors $A\subset B$. Here $A$ is a free group factor and $B$ is isomorphic to the crossed product $A\rtimes_{\theta}\mathbb{Z}_9$, where $\theta$ is a $\mathbb{Z}_3$-kernel of $A$ with non-trivial obstruction to lifting. By using an argument due to Connes, which involves the invariant $\chi (\mathcal {M})$, we show that $\mathcal {M}$ is not anti-isomorphic to itself. Furthermore, we prove that for one of the generators of $\chi (\mathcal {M})$, which we will denote by $\sigma$, the Jones invariant $\varkappa (\sigma)$ is equal to $\jonesinv$. \end{abstract} \maketitle \section{Introduction} A von Neumann algebra $M$ is anti-isomorphic to itself if there exists a vector space isomorphism $\Phi :M\longrightarrow M$ with the properties $\Phi (x^{*})=\Phi (x)^*$ and $\Phi (xy)=\Phi (y)\Phi (x)$ for every $x,y\in M$. This is equivalent to saying that $M$ is isomorphic to its conjugate algebra $M^{c}$ (defined in Section 6). With his example of a II$_{1}$ factor non-antiisomorphic to itself (cf. \cite{Connes6}), A. Connes gave in the '70s an answer to a crucial problem posed by F. Murray and J. von Neumann a few decades earlier, concerning the possibility of realizing every II$_{1}$ factor as the left regular representation of a discrete group. His example was obtained from the II$_{1}$ factor $\mathfrak{L} \left (\mathbf{F} _{4}\right)\otimes R$, where $R$ denotes the hyperfinite II$_{1}$ factor, using a crossed product construction with a $\mathbb{Z}_3$-action. After the innovative work on subfactors done by V.
Jones in the '80s (see \cite{Jones3}), it was a natural question to ask whether Connes's factor could be obtained through a subfactor construction of finite index. Although extensive work (\cite{Connes1}, \cite{Connes6}, and \cite{Jones1}) has been done by both Connes and Jones on examples of II$_{1}$ and III$_{\lambda}$ factors non-antiisomorphic to themselves, it does not seem that this problem has been addressed before, and there is little literature on the subject. In this paper we provide a positive answer to this question by giving an explicit subfactor construction for our example of a II$_{1}$ factor non-antiisomorphic to itself. Our model is a variation of Connes's example (\cite{Connes4} and \cite{Connes6}), since we utilize in our approach the recently developed theory of interpolated free group factors (\cite{Radulescu1} and \cite{Dykema2}). The II$_{1}$ factor $\mathcal {M}$ we are going to study is constructed from the tensor product of the interpolated free group factor $\freefactor$ and the hyperfinite II$_{1}$ factor $R$. We use two $\mathbb{Z}_3$-kernels, $\alpha\in\operatorname{Aut}\left (\freefactor\right )$ and $\beta\in\operatorname{Aut}(R)$, which have conjugate obstructions to lifting, to generate an action of $\mathbb{Z}_3$ on $\freefactor\otimes R$. The action is given, up to an inner automorphism, by $\alpha\otimes\beta$, and the factor $\mathcal {M}$ is equal to the crossed product $\left (\freefactor\otimes R\right )\rtimes _{\gamma}\mathbb{Z}_3$ (cf. Section $4$). The main result of this paper is Theorem \ref{main}, where we show that $\mathcal {M}$ is the enveloping algebra of an inclusion $A\subset B$ of interpolated free group factors. Here $A$ is isomorphic to $\mathfrak{L}\left (\mathbf{F} _{\frac{35}{27}} \right)$ (Proposition \ref{proposition4.3}), and $B$ is equal to the crossed product $A\rtimes _{\theta}\mathbb{Z}_9$, for a $\mathbb{Z}_9$-action $\theta$ of $A$ with outer period $3$ and obstruction $\obstruction$ to lifting.
The proof is based on Voiculescu's random matrix model for circular and semicircular elements introduced in \cite{Vocu}. An analogous argument has been used by F. R\v{a}dulescu in \cite{Radulescu2} to prove that a variation of the example given by Jones of a II$_{1}$ factor with Connes invariant equal to $\mathbb{Z}_{2}\otimes\mathbb{Z} _{2}$ has a subfactor construction. We also show in Section 5 that the Connes invariant of our factor $\mathcal {M}$ is equal to $\mathbb{Z}_9$, a result announced by Connes in \cite{Connes4}. This invariant, which is defined for every factor $M$ with a separable predual, was introduced by Connes in \cite{Connes6}. It is an abelian subgroup of the group of outer automorphisms, and it is denoted by $\chi (M)$. It is an important tool for distinguishing factors, since it is an isomorphism invariant of the factor $M$. It is trivial for some of the most common II$_{1}$ factors, like the interpolated free group factors and the hyperfinite II$_{1}$ factor, as well as for any tensor product of these factors. However, a crossed product construction yields in general a non-trivial $\chi (M)$. To compute the Connes invariant of the factor $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$, we use the short exact sequence described by Connes in \cite{Connes6}. In addition, we show that if $\sigma$ denotes the generator of $\chi (\mathcal {M})$ described in Remark \ref{add}, then the invariant $\varkappa (\sigma)$, introduced by Jones in \cite{Jones1}, is equal to $\jonesinv$. The Jones invariant is defined for any element $\theta$ of $\chi (M)$, where $M$ is a II$_{1}$ factor without non-trivial hypercentral sequences, and it is a complex number of modulus one. It is a finer invariant than $\chi (M)$ and it is constant on the conjugacy class of $\theta$ in the group of outer automorphisms.
Moreover, it behaves nicely with respect to antiautomorphisms of $M$, in the sense that conjugation by an antiautomorphism changes $\varkappa (\theta )$ by complex conjugation (see \cite{Jones1} for details). Lastly, in Section 6 we use an argument of Connes \cite{Connes4} to show that $\mathcal {M}=$\linebreak $\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ is not anti-isomorphic to itself. The two main ingredients of this argument are the uniqueness (up to inner automorphism) of the decomposition of $\gamma$ into the product of an approximately inner automorphism and a centrally trivial automorphism, and the fact that the unique subgroup of order $3$ in $\chi (\mathcal {M})$ is generated by the dual action $\widehat{\gamma}$ on $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$. Using this decomposition we obtain a canonical way to associate to the II$_{1}$ factor $\mathcal {M}$ a complex number, the obstruction to lifting of the approximately inner automorphism in the decomposition of $\gamma$, which is invariant under isomorphism and distinguishes $\mathcal {M}$ from its conjugate algebra $\mathcal {M} ^{c}$. \section{Definitions} Let $M$ be a factor with separable predual. Denote by $\operatorname{Aut}(M)$ the group of automorphisms of $M$, endowed with the $u$-topology, i.e., a sequence of automorphisms $\alpha _{n}$ converges to $\alpha$ if and only if $\|\varphi\circ\alpha _{n}-\varphi\circ\alpha\|\rightarrow 0$ for all $\varphi\in M_{*}$. The definition of the Connes invariant $\chi (M)$ involves three normal subgroups of the group of automorphisms $\operatorname{Aut}(M)$. For a unitary $u$ in $M$ denote by $\operatorname{Ad} _M (u)$ the inner automorphism of $M$ defined by $\operatorname{Ad} _M (u)(x)=uxu^{*}$, for any $x$ in $M$. Let $\operatorname{Int}(M)$ be the subgroup of $\operatorname{Aut}(M)$ formed by all inner automorphisms.
Then $\operatorname{Int}(M)$ is normal in $\operatorname{Aut}(M)$ since $\alpha\operatorname{Ad} _M (u)\alpha ^{-1}=\operatorname{Ad} _M (\alpha(u))$ for every $\alpha\in\operatorname{Aut}(M)$. Let $\overline{\operatorname {Int}(M)}$ denote the closure of the group $\operatorname{Int}(M)$ in the $u$-topology. The group $\operatorname{Int}(M)$ of inner automorphisms and the group $\overline{\operatorname {Int}(M)}$ of approximately inner automorphisms are two of the groups involved in the definition of the Connes invariant. We now restrict our attention to II$_{1}$ factors. Recall that if $\tau$ denotes the unique faithful trace on $M$, then $M$ inherits an $L^{2}$-norm from the inclusion $M\subset L^{2}(M)$, given by $\|x\|_{2}=\tau(x^{*}x)^{1/2}$ for all $x\in M$. Note also that for a II$_{1}$ factor $M$, the $u$-topology on $\operatorname{Aut}(M)$ is equivalent to the topology in which a sequence of automorphisms $\alpha _{n}$ on $M$ converges to $\alpha$ if and only if $\displaystyle\lim _{n\rightarrow\infty} \|\alpha _{n}(x)-\alpha (x)\|_{2}=0$ for every $x\in M$. Since our study of automorphisms is simplified in the II$_1$ case, we will always assume that our factors are II$_{1}$, unless otherwise specified. The last group of automorphisms involved in the definition of the Connes invariant is formed by the centrally trivial automorphisms of $M$. Given $a,b\in M$ set $[a,b]=ab-ba$. We say that a bounded sequence $(x_{n})_{n\geq 0}$ in $M$ is central if $\displaystyle\lim_{n \rightarrow \infty}{\|[x_n, y]\|_{2}}=0$ for every $y\in M$. \begin{definition} An automorphism $\alpha \in \operatorname{Aut}(M)$ is centrally trivial if for any central sequence $(x_n)$ in $M$ we have $\displaystyle\lim_{n \rightarrow \infty}{\| \alpha (x_n)-x_n \|_{2}}=0.$ \end{definition} We denote by $\operatorname{Ct}(M)$ the set of centrally trivial automorphisms of $M$, which is a normal subgroup of $\operatorname{Aut}(M)$.
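The normality of $\operatorname{Ct}(M)$ in $\operatorname{Aut}(M)$ follows from a short computation; the sketch below uses only the fact that automorphisms of a II$_{1}$ factor preserve the trace and are therefore $\|\cdot\|_{2}$-isometric.

```latex
Let $\alpha\in\operatorname{Ct}(M)$, $\beta\in\operatorname{Aut}(M)$, and let $(x_n)$ be a
central sequence in $M$. Since $\beta^{-1}$ is $\|\cdot\|_{2}$-isometric, the sequence
$(\beta^{-1}(x_n))$ is again central:
\[
  \|[\beta^{-1}(x_n),y]\|_{2}
  =\|\beta^{-1}([x_n,\beta(y)])\|_{2}
  =\|[x_n,\beta(y)]\|_{2}\longrightarrow 0
  \quad\text{for every }y\in M.
\]
Hence central triviality of $\alpha$, applied to $(\beta^{-1}(x_n))$, gives
\[
  \|\beta\alpha\beta^{-1}(x_n)-x_n\|_{2}
  =\|\alpha(\beta^{-1}(x_n))-\beta^{-1}(x_n)\|_{2}\longrightarrow 0,
\]
so $\beta\alpha\beta^{-1}\in\operatorname{Ct}(M)$.
```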
Let $\operatorname{Out}(M)=\displaystyle\frac{\operatorname{Aut}(M)}{\operatorname{Int}(M)}$ be the group of outer automorphisms of $M$, and denote by $\xi :\operatorname{Aut}(M)\to\operatorname{Out}(M)$ the quotient map. The Connes invariant was introduced by Connes in \cite{Connes6} (see also \cite{Connes4}). \begin{definition} Let $M$ be a II$_{1}$ factor with separable predual. The Connes invariant of $M$ is the abelian group $$ \chi(M) =\frac{\operatorname{Ct}(M)\cap\overline{\operatorname {Int}(M)}}{\operatorname{Int}(M)}\subset\operatorname{Out}(M). $$ \end{definition} Note that the hyperfinite II$_{1}$ factor $R$ has trivial Connes invariant since $\operatorname{Ct}(R)=\operatorname{Int}(R)$. Next we want to define a particular class of central sequences, the hypercentral sequences. \begin{definition} A central sequence $(x_{n})$ is hypercentral if $\displaystyle \lim_{n\rightarrow\infty}\|[x_n , y_n]\|_{2}=0$ for every central sequence $(y_{n})$ in $M$. \end{definition} Let $\omega$ be a free ultrafilter over $\mathbb{N}$ and $M$ a II$_{1}$ factor. Denote by $\ell ^{\infty}(\mathbb{N}, M)$ the algebra of bounded sequences in $M$, and by $C_{\omega}$ the subalgebra of bounded sequences $(x_{n})_{n\in\mathbb{N}}$ in $M$ with $\displaystyle\lim_{n\rightarrow\omega}\|[x_{n},y]\|_2 =0$ for all $y\in M$. Let $\mathfrak{I}_{\omega}$ be the subalgebra of $\ell ^{\infty}(\mathbb{N}, M)$ consisting of the sequences for which $\displaystyle\lim _{n\rightarrow\omega}\|x_{n}\|_{2}=0$. Set \[M^{\omega}=\frac{\ell ^{\infty}(\mathbb{N}, M)}{\mathfrak{I}_{\omega}}\;\text{ and }\; M_{\omega}=\displaystyle\frac{C_{\omega}}{\mathfrak{I}_{\omega}\cap C_{\omega}}.\] Then $M^{\omega}$ and $M_{\omega}$ are finite von Neumann algebras and \begin{equation} \label{purple} M_{\omega}=M^{\omega}\cap M^{\prime}.
\end{equation} \begin{remark} \label{uno} By \cite{McDuff} the existence of non-trivial hypercentral sequences is equivalent to the non-triviality of the center of $M_{\omega}$ for some (and then for all) free ultrafilter $\omega$. \end{remark} \begin{definition} A $\mathbb{Z}_{k}$-kernel on a von Neumann algebra $M$ is an automorphism $\alpha\in\operatorname{Aut}(M)$ such that there exists a unitary $U$ in $M$ with the property $\alpha ^{k} =\operatorname{Ad} _M U$. \end{definition} Note that if $\alpha$ is a $\mathbb{Z}_{k}$-kernel then $\alpha (U)=\lambda U$ for some $k$-th root of unity $\lambda$. Indeed, $\alpha$ commutes with $\alpha ^{k}=\operatorname{Ad} _M U$, so $\operatorname{Ad} _M \alpha (U)=\operatorname{Ad} _M U$ and, when $M$ is a factor, $\alpha (U)U^{*}$ is a scalar of modulus one, i.e., $\alpha (U)=\lambda U$; moreover $\alpha ^{k}(U)=\lambda ^{k}U$ while $\alpha ^{k}(U)=UUU^{*}=U$, whence $\lambda ^{k}=1$. If $\lambda\neq 1$ we say that $\alpha$ has obstruction $\lambda$ to lifting, meaning that the homomorphism $\varphi :\mathbb{Z}_{k}\longrightarrow \operatorname{Out}(M)$ defined by $\varphi (1)=[\alpha]$ cannot be lifted to a homomorphism $\Phi :\mathbb{Z}_{k}\longrightarrow\operatorname{Aut}(M)$ with $\Phi (1)=\alpha$. We conclude this section by defining an invariant $\varkappa (\theta )$ for any element $\theta$ in $\chi (M)$. \begin{definition} Let $M$ be a II$_1$ factor without non-trivial hypercentral sequences and take $\theta\in\chi (M)$. Let $\phi$ be an automorphism in $\operatorname{Ct}(M)\cap\overline{\operatorname {Int}(M)}$ with $\xi (\phi )=\theta$, and $u_{n}$ a sequence of unitaries such that $\phi=\displaystyle\lim _{n\rightarrow\infty}\operatorname{Ad\,} u_{n}$. Since the sequence $(u_{n}^{*}\phi (u_{n}))_{n\geq 0}$ is hypercentral, there exists a sequence of scalars $(\lambda _{n})_{n\geq 0}$ with the properties that $\displaystyle\lim_{n\rightarrow\infty}\| \phi (u_n )-\lambda _{n}u_{n}\|_{2}=0$, and $(\lambda _{n})_{n\geq 0}$ converges to some $\lambda _{\phi}\in\mathbb{T}$ (Lemmas 2.1 and 2.2 in \cite{Jones1}). Hence, \[\lim_{n\rightarrow\infty}{\| \phi (u_n )-\lambda_{\phi }u_n\|}_{2}=0 \] and the Jones invariant is $\varkappa (\theta)=\lambda_{\phi }$. \end{definition} Jones proved in \cite{Jones1} that this definition makes sense (i.e.
$\varkappa (\theta )$ does not depend on the choice of $\phi$ or $u_n$) and that $\varkappa$ is a conjugacy invariant, meaning that if $\alpha,\beta$ belong to $\operatorname{Ct}(M)\cap\overline{\operatorname {Int}(M)}$ and there exists $\psi\in\operatorname{Aut}(M)$ such that $\psi\alpha\psi ^{-1}=\beta$, then $\varkappa(\xi (\alpha))=\varkappa (\xi (\beta))$. \section{Preliminaries} Let $M$ be a factor with separable predual. $M$ is said to be full if $\operatorname{Int}(M)$ is closed in $\operatorname{Aut}(M)$ with respect to the $u$-topology. Obviously all type $I$ factors are full, while the hyperfinite factor $R$ provides an example of a II$_{1}$ factor which is not full, since $\overline{\operatorname{Int}(R)}=\operatorname{Aut}(R)\neq \operatorname{Int}(R)$. \begin{remark} \label{due} For an arbitrary factor, being full is equivalent to having no non-trivial central sequence (see \cite{Connes2}). \end{remark} The following result, due to Connes, is an easy consequence of Lemma 4.3.3 in \cite{Sakai} and Corollary 3.6 in \cite{Connes1}. Some of the arguments used in the proof can be found in \cite{Jones2}. \begin{lemma} \label{second} Let $G$ be a discrete group containing a non-abelian free group and let $\tau$ be the usual trace on $\mathfrak{L}(G)$. Then $\mathfrak{L}(G)$ is full. \end{lemma} {\bf Proof} Set $E=G\setminus\{e\}$, where $e$ denotes the identity element in $G$. Let $g_{1},\, g_{2}$ be two generators of the free group and $F=\{g\in E\, |\, g=g_{1}\tilde{g},\;\tilde{g}\in E\}$. Take $x\in \mathfrak{L}(G)$. Then $x$ can be expressed as $x=\displaystyle\sum_{g\in G}{\lambda_g \delta_g}$ and the function $f:G\to \mathbb{C}$ defined by $f(g)=\lambda_g$ belongs to $l^2(G)$.
For such $f$ we have that \[ \sum_{g\in E}{|f(g)|^2}=\|x-\tau (x)\|_2 ^2\,\text{ and }\, \sum_{g\in G}{|f(g_i g g_i ^{-1})-f(g)|^2} =\|[x,\delta_{g_i} ]\|_{2}^{2}.\] Now if $(x_n )$ is a central sequence in $\mathfrak{L}(G)$ then $\| [x_{n},\delta_{g_i}] \|_{2}\rightarrow 0$ as $n\rightarrow\infty$, so we can apply Lemma 4.3.3 in \cite{Sakai} and conclude that \begin{equation} \label{puro} \lim _{n\rightarrow\infty}\|x_n -\tau (x_n )\|_2 =0. \end{equation} Let $\alpha$ be any automorphism in $\overline{\operatorname {Int}(\mathfrak{L}(G))}$ and choose a sequence of unitaries $(u_{n})_{n}$ such that $\alpha=\displaystyle\lim _{n\longrightarrow\infty} \operatorname{Ad\,} (u_{n})$. Since $(u_{n}^{*}u_{n+1})_{n\geq 0}$ is a central sequence in $\mathfrak{L}(G)$, by (\ref{puro}) there exists $\lambda _{n}\in\mathbb{T}$ such that $$ \|u_{n}^{*}u_{n+1}-\lambda _{n}1\|_{2}<\frac{1}{2^n}. $$ Set $\displaystyle v_{n}=\left (\prod _{i=1}^{n}{\overline{\lambda _{i}}}\right )u_{n+1}$. Then $(v_{n})_{n\in\mathbb{N}}$ is a Cauchy sequence in $\|\cdot\|_{2}$, so it converges to some $t$ in $\mathfrak{L}(G)$. Since $$ \operatorname{Ad\,} (t)=\lim _{n\rightarrow\infty}\operatorname{Ad\,} (v_{n})=\lim _{n\rightarrow\infty}\operatorname{Ad\,} (u_{n+1})=\alpha $$ we have that $\alpha\in\operatorname {Int}(\mathfrak{L}(G))$. $ \blacksquare$ Using the following remark we can conclude that not only the free group factors are full, but also the interpolated ones. \begin{remark} If $N\subseteq M$ is an inclusion of II$_{1}$ factors and $p$ is a projection in $N$, then $p(N^{\prime}\cap M)p=pN^{\prime}p\cap pMp$. \end{remark} One inclusion of the previous remark is obvious. The other one is proved using the following argument due to S. Popa. Take any element $z\in pN^{\prime}p\cap pMp$, so that $z=x^{\prime}p$ with $x^{\prime}\in N^{\prime}$. Let $q$ be a maximal projection in $N$ with the properties that $p\leq q\leq 1$ and $x^{\prime}q\in M$. To show that $q=1$, suppose that $1-q\neq 0$.
Then $(1-q)Np\neq 0$, so using the polar decomposition of a non-zero element in $(1-q)Np$ we can find $0\neq v\in N$ such that $v^{*}v\leq p$ and $vv^{*}\leq 1-q$. Thus $x^{\prime}vv^{*}=vx^{\prime}v^{*}=vpx^{\prime}pv^{*}\in M$ and $x^{\prime}(q+vv^{*})\in M$, contradicting the maximality of $q$. Taking $N=A$ and $M=A^{\omega}$ in the previous remark, and using (\ref{purple}), we obtain that the compression of a full factor is also full. \begin{remark} \label{remark3.4} Let $A$ be a II$_{1}$ factor with separable predual and $p$ a projection in $A$. Then $A$ is full if and only if $pAp$ is full. In particular, any interpolated free group factor $\mathfrak{L} \left (\mathbf{F} _{t}\right)$, with $t>1$, is full. \end{remark} \begin{proposition} \label{sixth} Let $\mathfrak{L} \left (\mathbf{F} _{t}\right)$, for $t\in\mathbb{R}$ and $t>1$, be any interpolated free group factor, and denote by $R$ the hyperfinite II$_{1}$ factor. Then $\mathfrak{L} \left (\mathbf{F} _{t}\right)\otimes R$ has no non-trivial hypercentral sequences. \end{proposition} {\bf Proof} We start by proving the result for $\mathfrak{L}(G)\otimes R$, where $G$ is a discrete group containing a non-abelian free group. First we want to show that any central sequence in $\mathfrak{L}(G)\otimes R$ has the form $(1\otimes x_{n})_{n\geq 0}$, for a central sequence $(x_{n})_{n\geq 0}$ in $R$. Denote by $g_{i}$, for $i=1,2$, the generators of $\mathbf{F} _{2}\subseteq G$. By the proof of Lemma \ref{second}, $\mathfrak{L}(G)$ satisfies the hypothesis of Lemma 2.11 in \cite{Connes2} with $Q_1=\mathfrak{L}(G)$, $Q_2=R$ and $b_{i} =\delta _{g_{i}}$. Therefore, we can apply the above-mentioned lemma to any central sequence $(X_n)_{n\geq 0}$ in $\mathfrak{L}(G)\otimes R$ to obtain that $\displaystyle\lim_{n\rightarrow\infty}\|X_n-(\tau\otimes 1)(X_n)\|_2=0$. Since $(\tau\otimes 1)(X_n)\in\mathbb{C}\otimes R$, this implies that $X_n=1\otimes x_n$ for a central sequence $(x_{n})_{n\geq 0}$ in $R$.
Now suppose $(Y_{n})_{n\geq 0}$ is a hypercentral sequence in $\mathfrak{L}(G)\otimes R$. Since $(Y_{n})$ is central, it has the form $Y_n=1\otimes y_n$, for a hypercentral sequence $(y_{n})$ in $R$. So we only need to prove that $R$ has no non-trivial hypercentral sequences. This follows immediately from Remark \ref{uno} and Theorem 15.15 in \cite{Kawi}. In the case of the factor $\inff\otimes R$, we can find an integer $k>1$ and a projection $p$ in $\mathfrak{L}(\mathbf{F} _{k})\otimes R$ such that $\inff\otimes R\cong p(\mathfrak{L}(\mathbf{F} _{k})\otimes R)p$. Obviously $p$ belongs to $(\mathfrak{L}(\mathbf{F} _{k})\otimes R)' _{\omega}$. Therefore, by Remark \ref{uno} it is enough to show that given a II$_{1}$ factor $M$ and a projection $p\in M_{\omega}^{\prime}$, $(pMp)_{\omega}$ is a factor if and only if $M_{\omega}$ is a factor. This is an immediate consequence of the equality $(pMp)_{\omega}=pM_{\omega}p$. $ \blacksquare$ \begin{lemma} \label{ninth} If $\alpha\in\operatorname{Ct}\left (\pippo\right )$ then $\alpha=\operatorname{Ad\,} z (\nu\otimes id)$, for some unitary $z\in\inff\otimes R$ and an automorphism $\nu$ of $\mathfrak{L} \left (\mathbf{F} _{t}\right)$. \end{lemma} {\bf Proof} Let $(K_n)_{n\in\mathbb{N}}$ be an increasing sequence of finite dimensional subfactors of $R$ generating $R$, and denote by $R_n=K_n^{\prime}\cap R$ the relative commutant of $K_n$ in $R$. Set $L_n=1\otimes R_n\subset\inff\otimes R$. Then there exists an $n_0$ such that for all $x\in L_{n_0}$ with $\|x\| \leq 1$ one has $\| \alpha(x)-x\|_2\leq \frac{1}{2}$. In fact, otherwise there would exist a sequence $(x_{n})$ with $x_{n}\in L_{n}$ and $\|x_n\|\leq 1$, such that $\| \alpha(x_n)- x_n\|_2 >\frac{1}{2}$. But $(x_{n})$ is a central sequence in $\inff\otimes R$ because for each $m$ and $n\geq m$, $x_n$ commutes with $\mathfrak{L} \left (\mathbf{F} _{t}\right)\otimes K_m$, so we get a contradiction. 
By Lemma 3.3 in \cite{Connes1}, up to inner automorphism, $\alpha$ is of the form $\alpha _1\otimes 1_{R_{n_0}}$ where $\alpha _1$ is an automorphism of $\mathfrak{L} \left (\mathbf{F} _{t}\right)\otimes K_{n_0}$. Set $F=1\otimes K_{n_0}$. Then $F$ is a type $I$ subfactor of $\mathfrak{L} \left (\mathbf{F} _{t}\right)\otimes K_{n_0}$. Applying \cite[Lemma 3.11]{Connes1} to $\alpha_1$ we obtain that $\alpha_1 |_{1\otimes K_{n_0}}=\operatorname{Ad\,} V |_{1\otimes K_{n_0}}$ for some unitary $V$. This implies that $\alpha=\operatorname{Ad\,} z(\nu\otimes 1)$ for some unitary $z$ and some automorphism $\nu$ of $\mathfrak{L} \left (\mathbf{F} _{t}\right)$. $ \blacksquare$ \section{The subfactor construction of interpolated free group factors} In this section we use Voiculescu's random matrix model for free group algebras \cite{Vocu} to show that the crossed product $\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ can be realized as the enveloping algebra of an inclusion of interpolated free group factors $A\subset B$. For this purpose we first give an explicit construction of $\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$, by providing models for the II$_{1}$ factors $\freefactor$ and $R$. Let $\{X_1, X_2, X_3\}$ be a free semicircular family and $u=\displaystyle\sum_{j=1}^{3}{e^{\frac{2\pi ij}{3}} e_j}$ a unitary whose spectral projections $\{e_{j}\}_{j=1} ^{3}$ have trace $\frac{1}{3}$. Assume also that $\{u\}^{\prime\prime}$ is free with respect to $\{X_1, X_2, X_3\}^{\prime\prime}$. Then $\freefactor$ can be thought of as the von Neumann algebra generated by $\{X_1, X_2, X_3, u\}$ as in \cite{Vocu} and \cite{Radulescu1}. The model for $R$ is outlined in the following lemma. It is analogous to the construction given in the case of $\mathbb{Z}_{2}$ by R\u{a}dulescu \cite{Radulescu2}. We include it here for the sake of completeness. \begin{lemma} \label{tenth} Given a von Neumann algebra $M$, let $(U_k)_{k\in\mathbb{Z}}$ be a family of unitaries in $M$ of order $9$. 
Assume that each $U_{k}$ has a decomposition of the form $\displaystyle U_{k}=\sum _{j=1} ^{9}\jonesinv ^{j}\, U_{k}^{(j)}$, where each spectral projection $U_{k}^{(j)}$ has trace $\frac{1}{9}$. Let $g=\displaystyle\sum_{j=1}^{3}{e^{\frac{2\pi i j}{3}}g_j}$ be a unitary in $M$ of order $3$ whose spectral projections $\{g_j\}_{j=1}^{3}$ have trace $\frac{1}{3}$. Suppose that the following relations hold between the $U_{k}$'s and $g$: \begin{enumerate} \item [(i)] $U_k gU_k^* =\obstruconj\, g\;\text{ if }k=0,-1,\; \text{ while }\; U_k gU_k ^* =g\;\text{ if }\;k\in\mathbb{Z}\backslash\{0,-1\}$, \item [(ii)] $U_{k} U_{k+1}U_{k}^{*}=\jonesinv\, U_{k+1},\quad\text{for } k\in\mathbb{Z}$, \item [(iii)] $U_{i} U_{j}=U_{j} U_{i}\quad\text{ if}\quad |i-j|\geq 2$. \end{enumerate} The algebra generated by the $U_{k}$'s and $g$ is endowed with a trace defined by $\tau (m)=0$, for each non-trivial monomial $m$ in these unitaries. Set $$ R_{-1}=\{gU_{0}^{3}, U_{1}, U_{2},\hdots\}^{\prime\prime}\subset\{g,U_{0}, U_{1}, U_{2},\hdots\}^{\prime\prime}=R_{0}. $$ This defines an inclusion of type II$_{1}$ factors of index $9$ such that $R_{-1}'\cap R_{0}=\{g\}^{\prime\prime}$. Let $\theta=\operatorname{Ad} _{R_{-1}}(U_0)$. Then $\theta$ is an outer automorphism of $R_{-1}$ of order $9$ with outer invariant $(3,\obstruconj )$. Moreover, $R_0$ is equal to the crossed product $R_{-1}\rtimes _{\theta}\luna$. Also, the Jones tower for the inclusion $R_{-1}\subset R_0$ is given by $$ R_{-1}\subset R_{0}\subset R_{1}\subset\cdots\subset R_{k-1}\subset R_{k}\subset R_{k+1}\subset\cdots , $$ where $R_k =\{gU_{-1}^3\cdots U_{-k}^{3}, U_{-k}, U_{-k+1},\hdots\}^{\prime\prime}$ for $k\geq 1$. \end{lemma} {\bf Proof} The properties of the family $(U_k)_{k\in\mathbb{Z}}$ and of the unitary $g$ imply immediately that $U_{0}xU_{0}^{*}\in R_{-1}$ for every $x\in R_{-1}$. Therefore, $\theta=\operatorname{Ad\,} (U_{0})$ defines an automorphism of $R_{-1}$. 
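Indeed, relations (i)--(iii) show that conjugation by $U_{0}$ sends each generator of $R_{-1}$ to a scalar multiple of itself: \[ U_{0}\,(gU_{0}^{3})\,U_{0}^{*}=\obstruconj\, gU_{0}^{3},\qquad U_{0}U_{1}U_{0}^{*}=\jonesinv\, U_{1},\qquad U_{0}U_{k}U_{0}^{*}=U_{k}\ \text{ for }k\geq 2. \]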
Since $g$ commutes with $R_{-1}$, $\theta ^{3}=Ad_{R_{-1}}(U_0 ^3)=Ad_{R_{-1}}(gU_0 ^3)$ belongs to $\operatorname{Int}(R_{-1})$. Moreover $\theta (gU_{0}^{3})=\obstruconj gU_{0}^{3}$, so $\theta$ has outer invariant $(3,\obstruconj )$. Obviously, any monomial in $R_{0}$ can be written using only one occurrence of $U_{0}$ to some power because of the relations between the generators of $R_{0}$. Moreover, by definition, the trace on the algebra generated by the $\{U_{k}\}_{k\in\mathbb{Z}}$ and $g$ (and therefore on its subalgebra $R_{0}$) is compatible with the usual trace defined on the crossed product $R_{-1}\rtimes _{\theta}\luna$, so that $R_{0}=(R_{-1}\cup\{U_{0}\})^{\prime\prime}=R_{-1}\rtimes _{\theta}\luna$. To show that $R_{-1}^{\prime}\cap R_{0}\subset \{g\}^{\prime\prime}$, write any element $x\in R_{-1}^{\prime}\cap R_{0}$ as $\displaystyle x=\sum _{k=0}^{8}a_{k}U_{0}^{k}$. It is easy to check that $x$ belongs to $R_{-1}^{\prime}$ if and only if $a_{0}\in\mathbb{C}$, $a_{3}$ and $a_{6}$ are multiples of $g^{2}U_{0}^{6}$ and $gU_{0}^{3}$, respectively, and all the other $a_{k}$'s are zero. The other inclusion follows immediately from the relations verified by the $U_{k}$'s and $g$, thus $R_{-1}^{\prime}\cap R_{0}= \{g\}^{\prime\prime}$. Note that $\operatorname{Ad} _{R_0}(U_{-1})(U_0)=\jonesinv U_0$, while $\operatorname{Ad} _{R_0}(U_{-1})(x)=x$ for all $x\in R_{-1}$. Hence, $\operatorname{Ad} _{R_0}(U_{-1})$ implements the dual action of $\mathbb{Z}_9$ on the crossed product $R_{-1}\rtimes _{\theta}\luna$, and the next step in the Jones tower for the inclusion $R_{-1}\subset R_0$ is given by $$ R_1 =\{U_{-1},g,U_{0},U_{1},\hdots\}^{\prime\prime}. 
$$ Similarly, the other steps in the Jones construction are obtained by adding the unitaries $U_{-2},\, U_{-3}, \hdots$, so that the $k$-th step is given by \[R_{k} =\{U_{-k},U_{-k+1},\hdots ,U_{-1}, g, U_{0}, U_{1},\hdots\}^{\prime\prime}.\] $ \blacksquare$ Observe that to construct unitaries with the properties in the statement it is enough to consider the Jones tower for an inclusion of the form $R\subset R\rtimes_{\beta}\mathbb{Z}_9$, where $\beta$ is an automorphism of order $9$ with outer invariant $(3,\obstruconj )$. We choose the unitary $g$ of order $3$ among the elements of the relative commutant. The unitaries implementing the crossed product in the successive steps of the basic construction will satisfy the desired relations. The next step is to give a concrete realization of the crossed product \linebreak $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$, with $\gamma$ a $\mathbb{Z}_3$--action on $\freefactor\otimes R_{0}$. Using the model $\freefactor=\{X_1, X_2, X_3, u\}^{\prime\prime}$, where the $X_{i}$'s are semicircular elements, $u$ is a unitary of order $3$ and $\{X_{i},u \mid \, i=1,\hdots ,3\}$ is a free family, we define the automorphism $\alpha$ on $\freefactor$ by: \begin{itemize} \item [] $\alpha (X_i)=X_{i+1}$, for $i=1,2$, \item[]$\alpha(X_3)=uX_1 u^*$, and \item [] $\alpha (u)=\obstruction u$. \end{itemize} Since $\alpha ^{3}=\operatorname{Ad\,} u$, $\alpha$ is a $\mathbb{Z}_3$--kernel with obstruction $\obstruction$ to lifting. For the automorphism $\beta$ on the hyperfinite II$_{1}$ factor we use the model of Lemma \ref{tenth}: $R\cong R_0=\{g, U_0, U_1,\hdots\}^{\prime\prime}$ and $\beta =\operatorname{Ad} _{R_0}(U_{-1})$, with $\beta ^3 =\operatorname{Ad} _{R_0}(g)$ and $\beta (g)=\obstruconj g$. 
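Since $\alpha ^{3}=\operatorname{Ad\,} u$ and $\beta ^{3}=\operatorname{Ad} _{R_0}(g)$, the obstructions of $\alpha$ and $\beta$ cancel in the tensor product: \[ (\alpha\otimes\beta)^{3}=\operatorname{Ad\,} (u\otimes g)\quad\text{ and }\quad (\alpha\otimes\beta)(u\otimes g)=\obstruction u\otimes\obstruconj g=u\otimes g. \]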
Observe that $\alpha\otimes\beta\in\operatorname{Aut}\left (\tenpro\right )$ has outer period $3$ and obstruction to lifting $1$, so it can be perturbed by an inner automorphism to obtain a $\mathbb{Z}_3$-action on $\freefactor\otimes R_{0}$. This action is defined by \begin{displaymath} \gamma=\left (\operatorname{Ad} _{\left [\freefactor\otimes R_{0}\right ]} W\right)(\alpha\otimes \beta ), \end{displaymath} where $W$\ is any cube root of $u^{*}\otimes g^{*}$ which is fixed by the automorphism $\alpha\otimes\beta$. For example, if $\delta=\jonesinv$ and $\{e_{i}\}_{i=1} ^{3}$, $\{g_{j}\}_{j=1} ^{3}$ denote the spectral projections of $u$ and $g$, respectively, take \begin{equation} \label{proiezione} W = \delta E_1+\delta ^{2} E_2+E_3, \end{equation} where \begin{equation} \label{Wproj} E_{l}=\displaystyle\sum_{\substack{i,j=1,\hdots , 3,\\ i+j\equiv l\mod 3}}e_{i}\otimes g_{j}, \text{ for } l=1,\hdots ,3. \end{equation} Note that $\alpha$ acts on the spectral projections of $u$ as $\alpha (e_{i})=e_{i-1}$ for $i=2,3$ and $\alpha (e_{1})=e_{3}$, while $\beta$ acts on the spectral projections of $g$ as $\beta (g_{j})=g_{j+1}$ for $j=1,2$ and $\beta (g_{3})=g_{1}$. Hence, $\alpha\otimes\beta$ fixes $W$. \begin{observation} \label{observ} Note that $W$ belongs to the center of the fixed point algebra of $\alpha \otimes\beta$ since for any element $z$ in the fixed point algebra we have \begin{equation*} z\,(u\otimes g)=(\alpha\otimes\beta )^{3}(z\,(u\otimes g))=\operatorname{Ad\,} (u\otimes g) (z\,(u\otimes g))=(u\otimes g)\,z. \end{equation*} \end{observation} We can now prove our main theorem, using an argument similar to the one used by R\u{a}dulescu in \cite{Radulescu2}. We show that $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ is the enveloping algebra of an inclusion $A\subset B$ of interpolated free group factors. 
We divide the proof into two parts, first proving that $A$ is isomorphic to the interpolated free group factor $\mathfrak{L}\left (\mathbf{F} _{\frac{35}{27}}\right )$. \begin{proposition} \label{proposition4.3} Let $v$ be the unitary implementing the crossed product $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$. Consider the von Neumann subalgebra $$A=\{X_{i}\otimes 1, u\otimes 1, 1\otimes g, v| i=1,\hdots ,3\}^{\prime\prime}\subset \mathcal {M},$$ endowed with a trace with respect to which $\{X_{i}\otimes 1, u\otimes 1| i=1,\hdots ,3\}$ is a free family, $\{X_{i}\otimes 1\}_{i=1}^{3}$ are semicircular elements, and $u\otimes 1, 1\otimes g, v$ are unitaries of order 3 with spectral projections of trace $\frac{1}{3}$. Moreover, assume that the following relations are satisfied by the elements of $A$: \begin{itemize} \item[i)] $[1\otimes g,X_{i}\otimes 1]=0$ for $i=1,\hdots , 3$, and $[1\otimes g,u\otimes 1]=0$, \item[ii)] $v(u\otimes 1)=\obstruction\, (u\otimes 1)v$, \item[iii)] $v(1\otimes g)=\obstruconj\, (1\otimes g)v$, \item[iv)] $\operatorname{Ad\,} v(X_{i}\otimes 1)=\operatorname{Ad\,} W \circ\alpha(X_{i}\otimes 1)$, where $W=\jonesinv E_{1}+e^{\frac{4\pi i}{9}}E_{2}+E_{3}$ as in (\ref{proiezione}). \end{itemize} For any monomial $m$ in the variables $\{X_{i}\otimes 1, u\otimes 1, 1\otimes g| i=1,\hdots , 3\}$, suppose that the trace $\tau$ on the algebra $A$ satisfies the following properties: \begin{enumerate} \item [(v)] $\tau(mv^{k})=0$ for $k=1,2$, \item [(vi)] the trace of $m$ in $A$ coincides with its trace as an element of the von Neumann algebra $\{X_{i}, u| i=1, \hdots ,3\}^{\prime\prime}\otimes\{g\}^{\prime\prime}$. \end{enumerate} Under these conditions, $A$ is isomorphic to $\mathfrak{L}\left (\mathbf{F} _{\frac{35}{27}}\right )$. \end{proposition} {\bf Proof} First we realize the algebra $A$ in terms of random matrices \cite{Vocu} and then use Voiculescu's free probability theory to show that $A$ is an interpolated free group factor. 
The random matrix model we give is a subalgebra of the algebra of $9\times 9$ matrices with entries in a von Neumann algebra. Let $D$ be a von Neumann algebra with a finite trace $\widetilde{\tau}$ which contains a family of free elements $\{a_i\}_{i=1}^{18}$, with the property that the elements $\{a_{i}\}_{i=1} ^{9}$ are semicircular, while the other ones are circular. Denote by $(e_{i j})_{i,j=1,\hdots ,9}$ the canonical system of matrix units in $M_{9}(\mathbb{C})$. Set $\epsilon=\obstruction$. Using the same notation as before for the spectral projections of $u$ and $g$, we set $$ e_{i}\otimes 1=\displaystyle \sum _{\substack{j=1,\hdots ,9,\\j\equiv i\mod 3}}e_{jj}\in M_{9}(\mathbb{C})\subset D\otimes M_{9}(\mathbb{C}),\text{ for } i=1,\hdots ,3, $$ and \begin{align*} 1\otimes g_{1}&=e_{11}+e_{55}+e_{99},\\ 1\otimes g_{2}&=e_{33}+e_{44}+e_{88}, \\ 1\otimes g_{3}&=e_{22}+e_{66}+e_{77}. \end{align*} Therefore, $u\otimes 1=\epsilon\, (e_{1}\otimes 1)+\epsilon ^{2}\, (e_{2}\otimes 1)+e_{3}\otimes 1$ and $1\otimes g=\epsilon\, (1\otimes g_{1})+\epsilon ^{2}\, (1\otimes g_{2})+1\otimes g_{3}$ can be written in matrix notation as \begin{equation*} u\otimes 1=\begin{pmatrix} \epsilon & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \epsilon ^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \epsilon & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \epsilon ^2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \epsilon & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \epsilon ^2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*} \addvspace{\baselineskip} \begin{equation*} 1\otimes g=\begin{pmatrix} \epsilon & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \epsilon ^2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \epsilon ^2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \epsilon & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \epsilon ^2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \epsilon \end{pmatrix} \end{equation*} \addvspace{\baselineskip} Moreover, set $$ v=(e_{1 2}+e_{2 3}+e_{3 1})+(e_{4 5}+e_{5 6}+e_{6 4})+(e_{7 8}+e_{8 9}+e_{9 7}),$$ i.e., \begin{equation*} v=\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix}. \end{equation*} Note that the unitaries $u\otimes 1$, $1\otimes g$, and $v$ generate a copy of $M_3 (\mathbb{C})\oplus M_3 (\mathbb{C})\oplus M_3 (\mathbb{C})\subseteq M_{9}(\mathbb{C})$. In addition, with this choice of $u\otimes 1$, $1\otimes g$ and $v$ we obtain that \begin{equation*} W=\delta ^2\, Id\oplus Id\oplus\delta\, Id, \end{equation*} where $\delta=\jonesinv$ and $Id$ denotes the identity of $M_3 (\mathbb{C})$. In matrix notation \begin{equation*} W=\begin{pmatrix} \delta ^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \delta ^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \delta^2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \delta & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta \end{pmatrix} \end{equation*} Furthermore, $u\otimes 1$, $1\otimes g$ and $v$ satisfy the required conditions \begin{align*} (u\otimes 1)(1\otimes g) &=(1\otimes g)(u\otimes 1),\\ v(u\otimes 1)&=\epsilon \, (u\otimes 1) v,\\ v(1\otimes g)&=\overline{\epsilon}\, (1\otimes g)v. \end{align*} Set $\mathrm{Re}(a)=\frac{1}{2}(a+a^{*})$, for $a\in D\otimes M_{9}(\mathbb{C})$. 
We set: \begin{align*} X_{1}\otimes 1= & a_1\otimes e_{1 1}+a_2\otimes e_{22}+a_3\otimes e_{33}+a_{4}\otimes e_{44}+a_{5}\otimes e_{55}+a_{6}\otimes e_{66} \\ & +a_{7}\otimes e_{77}+a_{8}\otimes e_{8 8}+a_{9}\otimes e_{99}+2\mathrm{Re}(a_{10}\otimes e_{15})+2\mathrm{Re}(a_{11}\otimes e_{19}) \\ & +2\mathrm{Re}(a_{12}\otimes e_{26})+2\mathrm{Re}(a_{13}\otimes e_{27})+2\mathrm{Re}(a_{14}\otimes e_{3 4})+2\mathrm{Re}(a_{15}\otimes e_{38}) \\ & +2\mathrm{Re}(a_{16}\otimes e_{48})+2\mathrm{Re}(a_{17}\otimes e_{59})+2\mathrm{Re}(a_{18}\otimes e_{67}). \end{align*} To find $X_2$ and $X_3$, use the relations: \begin{equation*} X_2\otimes 1=\operatorname{Ad\,} (W^* v)(X_1\otimes 1)\quad\text{ and }\quad X_3\otimes 1=\operatorname{Ad\,} (W^* v)(X_2\otimes 1) \end{equation*} Thus, using matrix notation we have: \begin{equation*} X_1\otimes 1=\begin{pmatrix} a_{1} & 0 & 0 & 0 & a_{10} & 0 & 0 & 0 & a_{11} \\ 0 & a_{2} & 0 & 0 & 0 & a_{12} & a_{13} & 0 & 0 \\ 0 & 0 & a_{3} & a_{14} & 0 & 0 & 0 & a_{15} & 0 \\ 0 & 0 & a_{14}^* & a_{4} & 0 & 0 & 0 & a_{16} & 0 \\ a_{10}^* & 0 & 0 & 0 & a_{5} & 0 & 0 & 0 & a_{17} \\ 0 & a_{12}^* & 0 & 0 & 0 & a_{6} & a_{18} & 0 & 0 \\ 0 & a_{13}^* & 0 & 0 & 0 & a_{18}^* & a_{7} & 0 & 0 \\ 0 & 0 & a_{15}^* & a_{16}^* & 0 & 0 & 0 & a_{8} & 0 \\ a_{11}^* & 0 & 0 & 0 & a_{17}^* & 0 & 0 & 0 & a_{9} \end{pmatrix} \end{equation*} \begin{equation*} X_2\otimes 1=\begin{pmatrix} a_{2} & 0 & 0 & 0 & \bar{\delta} ^{2} a_{12} & 0 & 0 & 0 & \bar{\delta} a_{13} \\ 0 & a_{3} & 0 & 0 & 0 & \bar{\delta} ^{2} a_{14} & \bar{\delta} a_{15} & 0 & 0 \\ 0 & 0 & a_{1} & \bar{\delta} ^{2} a_{10} & 0 & 0 & 0 & \bar{\delta} a_{11} & 0 \\ 0 & 0 & \delta ^{2} a_{10}^{*} & a_{5} & 0 & 0 & 0 & \delta a_{17} & 0 \\ \delta ^{2} a_{12}^{*} & 0 & 0 & 0 & a_{6} & 0 & 0 & 0 & \delta a_{18} \\ 0 & \delta ^{2} a_{14}^{*} & 0 & 0 & 0 & a_{4} & \delta a_{16} & 0 & 0 \\ 0 & \delta a_{15}^{*} & 0 & 0 & 0 & \bar{\delta} a_{16}^{*} & a_{8} & 0 & 0 \\ 0 & 0 & \delta a_{11}^{*} & \bar{\delta} 
a_{17}^{*} & 0 & 0 & 0 & a_{9} & 0 \\ \delta a_{13}^{*} & 0 & 0 & 0 & \bar{\delta} a_{18}^{*} & 0 & 0 & 0 & a_{7} \end{pmatrix} \end{equation*} \begin{equation*} X_3\otimes 1=\begin{pmatrix} a_{3} & 0 & 0 & 0 & \bar{\delta} ^{4} a_{14} & 0 & 0 & 0 & \bar{\delta} ^{2} a_{15} \\ 0 & a_{1} & 0 & 0 & 0 & \bar{\delta} ^{4} a_{10} & \bar{\delta} ^{2} a_{11} & 0 & 0 \\ 0 & 0 & a_{2} & \bar{\delta} ^{4} a_{12} & 0 & 0 & 0 & \bar{\delta} ^{2}a_{13} & 0 \\ 0 & 0 & \delta ^{4}a_{12}^{*} & a_{6} & 0 & 0 & 0 & \delta ^{2} a_{18} & 0 \\ \delta ^{4}a_{14}^{*} & 0 & 0 & 0 & a_{4} & 0 & 0 & 0 & \delta ^{2} a_{16} \\ 0 & \delta ^{4}a_{10}^{*} & 0 & 0 & 0 & a_{5} & \delta ^{2} a_{17} & 0 & 0 \\ 0 & \delta ^{2} a_{11}^{*} & 0 & 0 & 0 & \bar{\delta} ^{2} a_{17}^{*} & a_{9} & 0 & 0 \\ 0 & 0 & \delta ^{2} a_{13}^{*} & \bar{\delta} ^{2} a_{18}^{*} & 0 & 0 & 0 & a_{7} & 0 \\ \delta ^{2} a_{15}^{*} & 0 & 0 & 0 & \bar{\delta} ^{2} a_{16}^{*} & 0 & 0 & 0 & a_{8} \end{pmatrix} \end{equation*} \addvspace{1.3\baselineskip} Obviously $X_1\otimes 1, X_2\otimes 1, X_3\otimes 1$ commute with $1\otimes g$. Moreover, the assumption that the family $\{a_i\}_{i=1,\hdots ,18}$ is free implies that $\{X_{i}\otimes 1, u\otimes 1 |i=1,\hdots ,3\}$ is a free family with respect to the normalized trace on $D\otimes M_{9}(\mathbb{C})$. In addition, $\{X_{i}\otimes 1, u\otimes 1|i=1,\hdots ,3\}^{\prime\prime}$ and $\{g\}^{\prime\prime}$ are independent with respect to this trace. To show that the normalized trace of $D\otimes M_{9}(\mathbb{C})$ has the properties (v) and (vi) of the statement, let $m$ be any monomial in the variables $\{X_{i}\otimes 1, u\otimes 1, 1\otimes g| i=1,\hdots ,3\}$, and consider the product $v^k m$ with $k=1,2$. 
Since $m$ commutes with $1\otimes g$, it must be of the form \begin{equation*} m=\begin{pmatrix} * & 0 & 0 & 0 & * & 0 & 0 & 0 & * \\ 0 & * & 0 & 0 & 0 & * & * & 0 & 0 \\ 0 & 0 & * & * & 0 & 0 & 0 & * & 0 \\ 0 & 0 & * & * & 0 & 0 & 0 & * & 0 \\ * & 0 & 0 & 0 & * & 0 & 0 & 0 & * \\ 0 & * & 0 & 0 & 0 & * & * & 0 & 0 \\ 0 & * & 0 & 0 & 0 & * & * & 0 & 0 \\ 0 & 0 & * & * & 0 & 0 & 0 & * & 0 \\ * & 0 & 0 & 0 & * & 0 & 0 & 0 & * \end{pmatrix} \end{equation*} If we multiply $m$ by $v$ or $v^2$, we obtain a matrix with zero on the diagonal and a few non-zero entries outside the diagonal. This implies that $v^k m$ has zero trace for $k=1,2$. Thus, we have built a matrix model for $A$ which satisfies the conditions of the statement. One can easily check that $A$ is a factor. In order to prove that $A$ is an interpolated free group factor we reduce $A$ by one of the spectral projections of $g$, and show that the new factor we obtain is an interpolated free group factor. Reduce $A$ by $g_3 =1\otimes e_{22}+1\otimes e_{66}+1\otimes e_{77}$, which has trace $\frac{1}{3}$. 
Then $g_3 A g_3$ is generated by: \begin{enumerate} \item [(i)] $a_{2}\otimes e_{22}+2\mathrm{Re}(a_{12}\otimes e_{26})+2\mathrm{Re}(a_{13}\otimes e_{27})+a_{6}\otimes e_{66}+ a_{7}\otimes e_{77} \\ +2\mathrm{Re}(a_{18}\otimes e_{67})$ \item [(ii)] $a_{3}\otimes e_{22}+2\mathrm{Re}(\bar{\delta} ^{2}a_{14}\otimes e_{26})+ 2\mathrm{Re}(\bar{\delta} a_{15}\otimes e_{27})+a_{4}\otimes e_{66}+a_{8}\otimes e_{77} \\ +2\mathrm{Re}(\delta a_{16}\otimes e_{67})$ \item [(iii)] $a_{1}\otimes e_{22}+2\mathrm{Re}(\bar{\delta} ^{4}a_{10}\otimes e_{26})+ 2\mathrm{Re}(\bar{\delta} ^{2} a_{11}\otimes e_{27})+a_{5}\otimes e_{66}+a_{9}\otimes e_{77} \\ +2\mathrm{Re}(\delta ^{2} a_{17}\otimes e_{67})$ \item [(iv)] $\epsilon ^{2}\otimes e_{22}+1\otimes e_{66}+\epsilon\otimes e_{77}$. \end{enumerate} Thus, by Voiculescu's random matrix model \cite{VoDy}, $g_3 A g_3$ is isomorphic to the free group factor $\mathfrak{L}(\mathbf{F}_{3}*\mathbb{Z}_{3})\cong\freefactor$. This implies that $A$ is also an interpolated free group factor, and using the well-known formula for reduced factors (\cite{Dykema1}, \cite{Dykema2} or \cite{Radulescu1}) we get that $A\cong\mathfrak{L}\left (\mathbf{F} _{\frac{35}{27}}\right )$. $ \blacksquare$ \begin{theorem} \label{main} Set $$ A=\{X_{i}\otimes 1,u\otimes 1, 1\otimes g,v\mid i=1,\hdots ,3\}^{\prime\prime}\subset B=(A\cup \{1\otimes U_{0}\})^{\prime\prime} $$ in $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$. Then $A$ is isomorphic to the interpolated free group factor $\mathfrak{L}\left (\mathbf{F}_{\frac{35}{27}}\right )$, and $B$ is the crossed product of $A$ by a $\mathbb{Z}_9$-action $\theta$ on $A$ with outer invariant $(3, \obstruction )$. Furthermore, $\mathcal {M}$ is the enveloping algebra in the basic construction for the inclusion $A\subset B$. \end{theorem} {\bf Proof} First we want to describe how $\operatorname{Ad\,} (1\otimes U_0)$ acts on the subalgebra $A$ of $\mathcal {M}$. 
Obviously \begin{equation*} \operatorname{Ad\,} (1\otimes U_{0})(X_{i}\otimes 1)=X_{i}\otimes 1\quad\text{ for }i=1,\hdots ,3, \end{equation*} \begin{equation*} \operatorname{Ad\,} (1\otimes U_{0})(u\otimes 1)=u\otimes 1 \end{equation*} and \begin{equation*} \operatorname{Ad\,} (1\otimes U_{0})(1\otimes g)=\obstruconj (1\otimes g). \end{equation*} In addition, if $\{E_{i}\}_{i=1} ^{3}$ are the projections defined in (\ref{Wproj}), then \begin{equation} \label{E} \operatorname{Ad\,} (1\otimes U_{0})(E_{i})=E_{i+1}\text{ for } i=1,\hdots , 3, \end{equation} where $i+1$ is taken mod 3. Using (\ref{proiezione}) and (\ref{E}), together with the relation $\gamma (1\otimes U_{0})=\delta \operatorname{Ad\,} W(1\otimes U_{0})$, for $\delta=\jonesinv$, we obtain \begin{align*} \operatorname{Ad\,} (1\otimes U_{0})(v) & =(1\otimes U_{0})(v(1\otimes U_{0}^{*})v^* )v=\bar{\delta}\,\operatorname{Ad\,} (1\otimes U_0)(W)W^{*} v \\ & =\bar{\delta}\,(E_{1}+\delta E_{2}+\delta ^{2}E_{3})W^{*}v =\bar{\delta} ^{2}\, (E_1 +E_2 +\obstruction E_3) v. \end{align*} It follows that $\operatorname{Ad\,} (1\otimes U_0)$ leaves $A$ invariant. Furthermore, $\operatorname{Ad} _{A}(1\otimes U_{0})^{3}$ acts as the identity on $\{X_{i}\otimes 1, u\otimes 1, 1\otimes g| i=1,\hdots , 3\}''$, while $\operatorname{Ad} _{A}(1\otimes U_{0})^{3} (v)=\obstruconj\, v$. Set $\theta=\operatorname{Ad} _{A}(1\otimes U_{0})$. Then, using the fact that $\operatorname{Ad} _{A}(1\otimes g^*)$ acts as the identity on $\{X_{i}\otimes 1, u\otimes 1, 1\otimes g| i=1,\hdots ,3\}^{\prime\prime}$, and $\operatorname{Ad\,} v(1\otimes g)=\obstruconj (1\otimes g)$, we conclude that $$ \theta ^{3}=\operatorname{Ad} _{A}(1\otimes g^*)\quad\text{ and }\quad\theta(1\otimes g^*)= \obstruction\, (1\otimes g^*). $$ Thus $\theta$ is a $\mathbb{Z}_3$-kernel on $A$ with obstruction $\obstruction$ to lifting. 
To complete the proof that $B=A\rtimes_{\theta}\mathbb{Z}_9$ we need to check that any monomial $m$ in the variables $\{X_{i}\otimes 1, u\otimes 1, 1\otimes g, 1\otimes U_0 , v| i=1,\hdots ,3\}$ can be written with at most one occurrence of $1\otimes U_0$ to some power. We also need to verify that any monomial containing a non-trivial power of $1\otimes U_0$ has zero trace. Since the crossed product $\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ is implemented by the unitary $v$, any monomial in the variables $\{X_{i}\otimes 1, u\otimes 1,1\otimes g, 1\otimes U_0 , v| i=1,\hdots ,3\}$ has the form $v^{k} m$, where $m$ is an element in $\freefactor\otimes R_{0}$ and $k=0,1,2$. Because $1\otimes U_{0}$ commutes with the elements of $\freefactor\otimes 1$ and $\operatorname{Ad\,} (1\otimes U_{0})(1\otimes g)=\obstruconj\, (1\otimes g)$, it follows that $1\otimes U_{0}$ can appear at most once in $m$ with some power. Thus, any monomial in $B$ can be written using at most one occurrence of $1\otimes U_0$. Furthermore, by the definition of the trace on the crossed product, any monomial of the form $v^k m$, for $k=1,2$, has zero trace in $B$. Therefore, it is enough to compute the trace of any monomial $m$ in $\freefactor\otimes R_{0}$ containing $1\otimes U_{0}$. Because of the definition of the trace on $R_{0}$ (Lemma \ref{tenth}), and hence on $\freefactor\otimes R_{0}$, we can conclude immediately that the trace of $m$ is zero. Therefore $B=A\rtimes_{\theta}\mathbb{Z}_9$. In addition, using an argument similar to the one used to build the model for $R$ (Lemma \ref{tenth}), one can easily verify that the unitaries $1\otimes U_1, 1\otimes U_2,\hdots$ implement the consecutive terms in the iterated basic construction of $A\subset A\rtimes_{\theta}\mathbb{Z}_9$. This implies that $\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ is the enveloping algebra of the above inclusion of factors. 
$ \blacksquare$ \section{The Connes invariant of the crossed product $\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$} The arguments in this section are analogous to the ones used by Jones in \cite{Jones1} to compute the Connes invariant for his example of a factor which is anti-isomorphic to itself but has no involutory antiautomorphism. The definitions of $K$, $K^{\bot}$ and $L$ given below are due to Connes (see \cite{Connes6}). Let $N$\ be a II$_{1}$ factor without non-trivial hypercentral sequences and $G$ a finite subgroup of $\operatorname{Aut}(N)$ such that $G \cap\overline{\operatorname{Int}(N)}=\{Id\}$. Set $K=G\cap\operatorname{Ct}(N)$ and \linebreak $K^{\bot}=\{f:G\to\mathbb{T}\, | \text{ $f$ is a homomorphism and } f|_{K}\equiv 1\}$. Let $\xi :\operatorname{Aut}(N)\to\operatorname{Out}(N)$ be the usual quotient map and $\mbox{Fint}$ be the subgroup $\{\operatorname{Ad} _{N}u\, |\, u\in N\text{ is fixed by }G\}\subseteq\operatorname{Aut}(N)$. Denote by $\overline{\mbox{Fint}}$ its closure in $\operatorname{Aut}(N)$ with respect to the pointwise weak topology, and by $G\vee\operatorname{Ct}(N)$ the subgroup of $\operatorname{Aut}(N)$ generated by $G\cup\operatorname{Ct}(N)$. Set $L=\xi ((G\vee\operatorname{Ct}(N))\cap\overline{\mbox{Fint}})\subseteq\operatorname{Out}(N)$. Connes showed in \cite{Connes6} that there exists an exact sequence \begin{equation*} 0\rightarrow K^{\bot}\overset{\partial}{\rightarrow}\chi (W^{*}(N,G)) \overset{\Pi}{\rightarrow} L\rightarrow 0, \end{equation*} where $M=W^{*}(N,G)$ denotes the crossed product implemented by the action of $G$ on $N$. We briefly describe the maps $\partial$ and $\Pi$ in the exact sequence above. Given an element $x$ in $M$, write it as $\displaystyle\sum_{g\in G}{a_{g}u_g }$, with $a_{g}\in N$ and $\operatorname{Ad} _{M}u_{g}|_{N}=g$. 
For each $\eta :G \rightarrow \mathbb{T}$ in $K^{\bot}$, define the map $\Delta (\eta):M\longrightarrow M$ by \begin{displaymath} \Delta (\eta )\left (\sum_{g\in G}{a_g u_g}\right )= \sum_{g\in G}{\eta (g) a_{g}u_{g}}. \end{displaymath} Then $\Delta (\eta )$ belongs to $\operatorname{Ct}(M)\cap \overline{\operatorname {Int}(M)}$, and $\partial=\xi\circ\Delta$ is the desired map. To see how $\Pi$ acts on $\chi (M)$, for any element $\sigma\in\chi (M)$ choose an automorphism $\alpha\in\operatorname{Ct}(M)\cap\overline{\operatorname {Int}(M)}$ such that $\xi (\alpha )=\sigma$. The hypothesis $G\cap\overline{\operatorname{Int}(N)}=\{Id\}$ implies that there exists a sequence of unitaries $(u_{n})_{n\geq 0}$ in $N$, which are fixed by $G$, and a unitary $z$ in $M$ such that $\alpha=\displaystyle \operatorname{Ad\,} z\lim _{n\rightarrow\infty}{\operatorname{Ad\,} u_n}$ (Corollary 6 and Lemma 2 in \cite{Jones2}, or Lemma 15.42 in \cite{Kawi}). Set $\psi _{\sigma}=\operatorname{Ad\,} (z^*)\alpha|_{N}\in\operatorname{Aut}(N)$. One can show that $\psi_{\sigma}\in \overline{\mbox{Fint}}\cap (G\vee\operatorname{Ct}(N))$, and that the class $\xi (\psi _{\sigma})$ does not depend on the choice of $\alpha$, but only on the class $\sigma=\xi (\alpha)$. Therefore, the map \,$\Pi :\chi(M)\to L$\, given by $\Pi(\sigma )=\xi (\psi_{\sigma })$ is well defined. To show that $\Pi$ is surjective, let $\mu$ be any element in $L$ and denote by $\alpha_{\mu}\in\overline{\mbox{Fint}}\cap (G\vee\operatorname{Ct}(N))$ a representative of $\mu$, i.e.\ $\xi (\alpha_{\mu})=\mu$. The automorphism $\alpha_{\mu}$ commutes with $G$ since it is the limit of automorphisms with this property. Hence, the map $\beta_{\mu}$ defined by \begin{equation} \label{lifting} \beta_{\mu}\left (\sum_{g\in G}{a_{g}u_g }\right )=\sum_{g\in G}{\alpha_{\mu}(a_g)u_g} \end{equation} is an automorphism of $M$. 
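The multiplicativity of $\beta_{\mu}$ follows from the fact that $\alpha_{\mu}$ commutes with $G$: for $a,b\in N$ and $g,h\in G$, \[ \beta_{\mu}\left ((au_{g})(bu_{h})\right )=\alpha_{\mu}\left (a\, g(b)\right )u_{gh} =\alpha_{\mu}(a)\, g(\alpha_{\mu}(b))\, u_{gh}=\beta_{\mu}(au_{g})\,\beta_{\mu}(bu_{h}), \] where we used that $u_{g}bu_{g}^{*}=g(b)$ and $u_{g}u_{h}=u_{gh}$.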
In addition, we have that $\beta_{\mu}\in\operatorname{Ct}(M)\cap\overline{\operatorname {Int}(M)}$ and $\Pi (\xi (\beta_{\mu}))=\mu$. \addvspace{\baselineskip} \begin{remark}[Jones] \label{tre} Note that if $(u_{n})$ is a sequence of unitaries left invariant by $G$, with the property $\alpha_{\mu}=\displaystyle\lim_{n\rightarrow\infty}{\operatorname{Ad\,} u_{n}}$ in $\operatorname{Aut}(N)$, and $\beta_{\mu}$ is the map defined in (\ref{lifting}), then $\beta_{\mu}=\displaystyle\lim_{n\rightarrow\infty} {\operatorname{Ad\,} u_{n}}$ in $\operatorname{Aut}(M)$. Hence, \begin{equation*} \varkappa (\varepsilon (\beta_{\mu}))=\lim_{n\rightarrow\infty}u_{n}^{*} \beta_{\mu}(u_{n})=\lim_{n\rightarrow\infty}{u_{n}^{*}\alpha_{\mu} (u_{n})}. \end{equation*} \addvspace{\baselineskip} \end{remark} Our next goal is to show that if $N=\freefactor\otimes R_{0}$ and $G$ is the subgroup of $\operatorname{Aut}\pten$ generated by $\gamma =\operatorname{Ad\,} W (\alpha\otimes\beta)$, then $\chi (\mathcal {M})\cong\mathbb{Z}_9$, as was observed by Connes in \cite{Connes4}. For the rest of this paper we identify $G=\langle\gamma\rangle$ with $\mathbb{Z}_3$. Note that by Proposition \ref{sixth}, $\freefactor\otimes R_{0}$ has no non-trivial hypercentral sequences, so in order to use the exactness of the Connes sequence described above, we only need to show that $\mathbb{Z}_3\cap\overline{\operatorname{Int}\pten} = \{Id \}$. Obviously it is enough to check that $\gamma\not\in\overline{\operatorname{Int}\pten}$. This is equivalent to showing that $\alpha\otimes\beta\not\in\overline{\operatorname{Int}\pten}$. By \cite[Corollary 3.3]{Connes3}, if $\alpha\otimes\beta\in\overline{\operatorname{Int}\pten}$ then $\alpha\in\overline{\operatorname{Int}\left (\freefactor\right )}$ and $\beta\in\overline{\operatorname{Int}(R)}$. 
But $\freefactor$ is full (Remark \ref{remark3.4}) and $\alpha\not\in\operatorname{Int}\left (\freefactor\right )$, thus we can conclude that $\mathbb{Z}_3\cap\overline{\operatorname{Int}\pten} =\{Id\}$. Hence, for the factor $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ the sequence $$ 0 \rightarrow K^{\bot}\overset{\partial}{\rightarrow}\chi (\mathcal {M})\overset{\Pi} {\rightarrow} L\rightarrow 0 $$ is exact (\cite{Connes6}). In order to compute $\chi (\mathcal {M})$ we first show that $$ K^{\bot}=\mathbb{Z}_3\quad\text{and}\quad L=\mathbb{Z}_3. $$ \begin{lemma} The group $K=\mathbb{Z}_3\cap\operatorname{Ct}\pten$ is trivial. \end{lemma} {\bf Proof} Obviously it is enough to show that the automorphism $\gamma=\operatorname{Ad\,} W (\alpha\otimes\beta )$ is not in $\operatorname{Ct}\pten$. We have already shown in the proof of Lemma \ref{sixth} that any central sequence in $\freefactor\otimes R_{0}$ has the form $(1\otimes x_n )_{n\in\mathbb{N}}$ for a central sequence $(x_{n} )_{n\in\mathbb{N}}$ in $R_{0}$. It follows that $\gamma\in\operatorname{Ct}\pten$ if and only if $\beta\in\operatorname{Ct}(R_{0})$. Since for $\epsilon=\obstruction$, $\beta$ is outer conjugate to the automorphism $s_{3}^{\bar{\epsilon}}$ described by Connes in \cite{Connes5}, and $s_{3}^{\bar{\epsilon}}$ does not belong to $\operatorname{Ct}(R_{0})$ \cite[Proposition 1.6]{Connes5}, we conclude that $\beta\not\in\operatorname{Ct}(R_{0})$. $ \blacksquare$ \begin{lemma} \label{eleventh} The group $L$ is isomorphic to $\mathbb{Z}_3$, and a generator of $L$ is given by \[ \mu=\xi\left (Id\otimes (\Ad U_0 ^{*}\,\beta)\right ), \] where $U_0$\ is the unitary in $R_{0}$ defined in Lemma \ref{tenth}, such that $\beta (U_{0})=\jonesinv U_{0}$. \end{lemma} {\bf Proof} We need to show that the automorphism $Id\otimes (\Ad U_0 ^{*}\,\beta)$ belongs to \linebreak $\left (\mathbb{Z}_3\vee\operatorname{Ct}\pten\right )\cap\overline{\mbox{Fint}}$. 
To prove that it is in $\mathbb{Z}_3\vee\operatorname{Ct}\pten$, multiply $Id\otimes (\Ad U_0 ^{*}\,\beta)$ by $\gamma^{-1}=(\operatorname{Ad\,} W (\alpha\otimes\beta))^{-1}$ to obtain the automorphism \[ \operatorname{Ad\,}\left (W^* (1\otimes U_{0}^{*} )\right )(\alpha^{-1}\otimes Id), \] which is in $\operatorname{Ct}\pten$, since the central sequences in $\freefactor\otimes R_{0}$ have the form $1\otimes x_{n}$, with $x_{n}$ central in $R_{0}$. Thus $Id\otimes (\Ad U_0 ^{*}\,\beta)\in\mathbb{Z}_3\vee\operatorname{Ct}\pten$. Next we want to show that $Id\otimes (\Ad U_0 ^{*}\,\beta)\in\overline{\mbox{Fint}}$. To this end, we need to exhibit a sequence $(\tilde{u} _{n})$ of unitaries in $\freefactor\otimes R_{0}$ that are invariant with respect to $\gamma$ and satisfy $Id\otimes (\Ad U_0 ^{*}\,\beta) =\displaystyle\lim_{n\rightarrow\infty}{\operatorname{Ad\,}\tilde{u} _n}$. Observe that the sequence $(x_n)_{n\in\mathbb{N}}$ of unitaries in $R_{0}$ given by \begin{equation*} x_n=\begin{cases} U_{0}U_{1}^{*}U_{2}U_{3}^{*}\hdots U_{n}^{*},&\text{ if $n$ is odd}, \\ U_{0}U_{1}^{*}U_{2}U_{3}^{*}\hdots U_{n},& \text{ if $n$ is even} \\ \end{cases} \end{equation*} has the properties \begin{equation*} \beta =\lim_{n\rightarrow\infty}{\operatorname{Ad\,} x_n}\quad\text{ and }\quad\beta (x_n)=\jonesinv\, x_n. \end{equation*} Define $u_{n}=U_{0}^{*} x_n$. Obviously \begin{equation*} Id\otimes (\operatorname{Ad\,} U_{0}^{*}\beta) =\lim_{n\rightarrow\infty}\operatorname{Ad\,} (1\otimes u_n)\quad\text{ and }\quad\beta (u_n)=u_n \end{equation*} so that \begin{equation} \label{quarta} (\alpha\otimes\beta)(1\otimes u_n)=1\otimes u_n. \end{equation} In addition, from Observation \ref{observ} it follows that \[ \gamma(1\otimes u_{n})=\operatorname{Ad\,} W (\alpha\otimes\beta)(1\otimes u_{n})=1\otimes u_{n}. 
\] Thus, \[Id\otimes (\Ad U_0 ^{*}\,\beta)\in\left (\mathbb{Z}_3\vee\operatorname{Ct}\pten\right )\cap\overline{\mbox{Fint}}.\] Lastly we want to prove that the order 3 element $\mu =\xi\left (Id\otimes (\Ad U_0 ^{*}\,\beta)\right )$ generates $L$. That is, we need to show that any element $\varphi$ in $\left (\mathbb{Z}_3\vee\operatorname{Ct}\pten\right )\cap\overline{\mbox{Fint}}$ is of the form $\mu ^{n}\operatorname{Ad\,} w$, for some unitary $w$ in $\freefactor\otimes R_{0}$ and $n=0,1,2$. Since $\varphi\in\mathbb{Z}_3\vee\operatorname{Ct}\pten$, there exists $n\in\{0,1,2\}$ such that $\varphi\gamma ^{n}$ is centrally trivial. By Proposition 3.6, it follows that there exists a unitary $z\in\freefactor\otimes R_{0}$ and an automorphism $\nu\in\operatorname{Aut}\left (\freefactor\right )$ such that $\varphi\gamma ^{n}=\operatorname{Ad\,} z(\nu\otimes Id)$. Therefore, \[ \varphi=\operatorname{Ad\,} x(\nu\alpha^{-n}\otimes\beta^{-n}), \] with $x=z(\nu\otimes Id)(W^{n})^{*}\in\freefactor\otimes R_{0}$. Since $\varphi\in\overline{\mbox{Fint}}\subseteq\overline{\operatorname{Int}\pten}$ and $\freefactor$ is full (Remark \ref{remark3.4}), by \cite[Corollary 3.3]{Connes3} there exists a unitary $w$ in $\freefactor$ such that $\nu\alpha^{-n}=\operatorname{Ad\,} w$. This implies that $\varphi=\operatorname{Ad\,} x' (Id\otimes\beta ^{-n})$, where $x'=x\,(w\otimes 1)$. Thus $\varphi$ differs from a power of $Id\otimes (\Ad U_0 ^{*}\,\beta)$ only by an inner automorphism. Hence $Id\otimes (\Ad U_0 ^{*}\,\beta)$ generates $L$. $ \blacksquare$ \addvspace{\baselineskip} Note that if $u_{n}$ is the sequence of unitaries defined in the previous lemma, then \[ (Id\otimes (\Ad U_0 ^{*}\,\beta))(1\otimes u_{n})=1\otimes U_{0}^{*}u_{n}U_0. 
\] But $\displaystyle\lim_{n\rightarrow\infty}{\operatorname{Ad\,} u_n }=\operatorname{Ad\,} U_{0}^*\,\beta$ and $\beta(U_{0})=\jonesinv\, U_{0}$, so \[ \lim_{n\rightarrow\infty}{u_{n}^{*}U_{0}^{*}u_{n}}=\operatorname{Ad\,} U_{0}(\beta^{-1}(U_{0}^{*}))= \jonesinv\, U_{0}^{*} \] and \begin{equation*} \lim_{n\rightarrow\infty}{(1\otimes u_{n}^{*})(Id\otimes (\Ad U_0 ^{*}\,\beta) )(1\otimes u_n )}=\jonesinv. \end{equation*} Thus, in view of Remark \ref{tre} we obtain the following: \begin{remark} \label{add} Let $\mu=\xi\left (Id\otimes (\Ad U_0 ^{*}\,\beta)\right )$ be the generator of $L$ as in the previous lemma. If $\beta_{\mu}\in\operatorname{Ct}(M)\cap\overline{\operatorname {Int}(M)}$ is the automorphism described in equation (\ref{lifting}), and $\sigma=\varepsilon (\beta_{\mu})$, then $\varkappa (\sigma)=\jonesinv$. \addvspace{\baselineskip} \end{remark} \begin{theorem} \label{twelve} Let $\mathcal {M}$ denote the crossed product $\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$. Then $\chi (\mathcal {M})\cong\mathbb{Z}_9$. \end{theorem} {\bf Proof} By Connes \cite{Connes6} the sequence \begin{displaymath} 0 \rightarrow\mathbb{Z}_3\rightarrow\chi (\mathcal {M})\rightarrow\mathbb{Z}_3\rightarrow 0 \end{displaymath} is exact. Moreover, according to (\ref{lifting}), $\mu=\xi\left (Id\otimes (\Ad U_0 ^{*}\,\beta)\right )$ lifts to the element $\sigma =\xi (\beta _{\mu})$ of $\chi (M)$, where \begin{displaymath} \beta _{\mu}\left (\sum_{k=0}^{2}{a_{k} v^{k}}\right )=\sum_{k=0}^{2}{(Id\otimes (\Ad U_0 ^{*}\,\beta))(a_{k}) v^{k}}. \end{displaymath} Since the only possibilities for $\chi (M)$ are $\mathbb{Z}_9$ and $\mathbb{Z}_3\oplus\mathbb{Z}_3$, it is enough to show that $\sigma ^{3}\neq 1$, i.e. $\beta _{\mu} ^{3} \not\in\operatorname{Int}(M)$. 
From the relations $$ (\alpha\otimes\beta )(1\otimes (U_{0}^{*})^{3}g)=\obstruction\, (1 \otimes (U_{0}^{*})^{3}g) $$ and $$ \operatorname{Ad\,} W (1\otimes (U_{0}^{*})^{3} g)=1\otimes (U_{0}^{*})^{3}g $$ we obtain that \begin{equation*} \gamma (1\otimes (U_{0}^{*})^{3}g)= \obstruction (1\otimes (U_{0}^{*})^{3}g). \end{equation*} Since $\operatorname{Ad\,} v=\gamma$\, on $\freefactor\otimes R_{0}$, the last equality can be rewritten as \linebreak $\operatorname{Ad\,} v(1\otimes(U_{0}^{*})^{3}g)=\obstruction (1\otimes (U_{0}^{*})^{3}\, g)$ or \begin{equation} \label{vact} \operatorname{Ad\,} (1\otimes (U_{0}^{*})^{3}\, g)(v)=\obstruconj v. \end{equation} Thus, using the definition of $\beta _{\mu}$, the relation $\beta ^{3}=\operatorname{Ad\,} g$, and (\ref{vact}) we obtain \begin{displaymath} \begin{split} \beta _{\mu} ^{3}\left (\sum_{k=0}^{2}{a_k\, v^{k}}\right )& =\sum_{k=0}^{2} {\left (Id\otimes(\operatorname{Ad\,} (U_{0}^{*})^{3}\, \beta ^{3})\right )(a_{k})v^{k}}=\sum_{k=0}^{2}{\operatorname{Ad} (1\otimes (U_{0}^{*})^{3}g)(a_{k}) v^{k}} \\ &=\operatorname{Ad\,} (1\otimes (U_{0}^{*})^{3}g)\left (\sum_{k=0}^{2}e^{\frac{2\pi i k}{3}}a_{k}\, v^{k}\right ), \end{split} \end{displaymath} which implies that up to an inner automorphism $\beta _{\mu}^{3}$ is a dual action, so it is outer and $\chi (\mathcal {M})\cong\mathbb{Z}_9$. $ \blacksquare$ \section{$\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ is not anti-isomorphic to itself.} In this section we are going to show that $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ is not anti-isomorphic to itself by using the dual action $\widehat{\mathbb{Z}}_{3}\to\operatorname{Aut}(M)$, which gives rise to the only subgroup of order $3$ in $\chi (\mathcal {M})$. This argument has been described by Connes in \cite{Connes4} and \cite{Connes6}. 
First of all note that the action $\gamma$ can be decomposed as $$ \gamma=\operatorname{Ad} W\,\gamma_{1}\gamma_{2}, $$ where $\gamma_{1}\in\operatorname{Ct}\pten$, $\gamma_{2}\in\overline{\operatorname{Int}\pten}$, and $W$ is a unitary in $\freefactor\otimes R_{0}$. In fact, $\gamma=\operatorname{Ad\,} W(\alpha\otimes Id)(Id\otimes\beta)$ and $\gamma _{1}=\alpha\otimes Id$ is centrally trivial, since any central sequence in $\freefactor\otimes R_{0}$ has the form $(1\otimes x_{n})$, for a central sequence $(x_n)$ in $R_{0}$. Furthermore, in the proof of Lemma \ref{eleventh} we showed that $\beta=\displaystyle\lim_{n\rightarrow\infty}{\operatorname{Ad\,} x_{n}}$ so that $\gamma _{2}=Id\otimes\beta=\lim_{n\rightarrow\infty}{\operatorname{Ad\,} (1\otimes x_{n})}$ belongs to $\overline{\operatorname{Int}\pten}$. Note also that this decomposition of $\gamma$ into an approximately inner automorphism and a centrally trivial automorphism is unique up to inner automorphisms, since $\chi\left (\tenpro\right )=1$. Let $M$ be an arbitrary von Neumann algebra. Define the conjugate $M^{c}$ of $M$ as the algebra whose underlying vector space is the conjugate of $M$ (i.e. for $\lambda\in\mathbb{C}$ and $x\in M$ the product of $\lambda$ by $x$ in $M^{c}$ is equal to $\bar{\lambda}x$) and whose ring structure is the same as in $M$. The opposite $M^{o}$ of $M$ is by definition the algebra whose underlying vector space is the same as for $M$ while the product of $x$ by $y$ is equal to $yx$ instead of $xy$. The algebras $M^{c}$ and $M^{o}$ are clearly isomorphic through the map $x\mapsto x^{*}$. For $\psi\in\operatorname{Aut}(M)$ we denote by $\psi ^{c}$ the automorphism of $M^{c}$ induced by $\psi$. For the convenience of the reader we detail here Connes's argument to show that the factor $\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ is not anti-isomorphic to itself (see \cite{Connes4} and \cite{Connes6}). 
\begin{theorem} $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ is not anti-isomorphic to itself. \end{theorem} {\bf Proof} We proved in the previous section (Theorem \ref{twelve}) that $\chi (\mathcal {M})\cong\mathbb{Z}_9$ and that the dual action $\widehat{\gamma}:\widehat{\mathbb{Z}}_{3}\to\operatorname{Aut}(M)$ produces the only subgroup of order $3$ in $\chi (\mathcal {M})$, namely $\langle\sigma ^{3}\rangle =\langle\xi (\beta _{\mu} ^{3})\rangle$. Since $\chi (\mathcal {M})$ is an invariant of the von Neumann algebra $\mathcal {M}$, it follows that $\widehat{\mathbb{Z}}_{3}$ is an invariant of $\mathcal {M}$. This implies that $\M\rtimes_{\widehat{\gamma}}\dualgr$ is an invariant of $\mathcal {M}$, as is the dual action $\widetilde{\gamma}:\mathbb{Z}_3\longrightarrow\mbox{Aut}(\M\rtimes_{\widehat{\gamma}}\dualgr )$ of $\widehat{\gamma}$. Since, by Takesaki's duality theory \cite[Theorem 4.5]{Take}, \[ \M\rtimes_{\widehat{\gamma}}\dualgr \cong\left (\tenpro\right )\otimes B(\ell ^{2}(\mathbb{Z}_3 )), \] and in this identification the dual action $\widetilde{\gamma}$ of $\widehat{\gamma}$ corresponds to the action \linebreak $\gamma\otimes\operatorname{Ad\,} (\lambda (1)^{*})$, where $\lambda$ is the left regular representation of $\mathbb{Z}_3$ on $\ell ^{2}(\mathbb{Z}_3 )$, we conclude that $\gamma$ is an invariant of $\mathcal {M}$. Consequently, the centrally trivial automorphism $\gamma _{1}$ and the approximately inner automorphism $\gamma_{2}$, which appear in the decomposition of $\gamma$, are also invariants of $\mathcal {M}$. Note that \[ \gamma _{1}^{3}=\operatorname{Ad\,} (u\otimes 1)\quad\text{and}\quad\gamma_{1}(u\otimes 1)=\obstruction\, (u\otimes 1), \] so that $\gamma_{1}$ is a $\mathbb{Z}_3$--kernel of $\freefactor\otimes R_{0}$ with obstruction $\obstruction$ to lifting. 
Observe also that in the above argument we have only used the abstract group $\widehat{\mathbb{Z}}_{3}$ and the dual action defined on $\M\rtimes_{\widehat{\gamma}}\dualgr$. Hence we have found a canonical way to associate to the von Neumann algebra $\mathcal {M}$ a scalar, equal to $\obstruction$ in our case, which is invariant under isomorphisms. Now if $\mathcal {M}=\left ( \tenpro\right )\rtimes_{\gamma}\gruppo$ were anti-isomorphic to itself, then $\mathcal {M}$ and $\mathcal {M} ^{c}$ would be isomorphic. But the obstruction associated to $\gamma _{1}^{c}$, and therefore to $\mathcal {M} ^{c}$, is equal to $\obstruconj$, a contradiction. $ \blacksquare$ \section*{Acknowledgments} We thank our advisor, Professor F. R\u{a}dulescu, for his help and support, Professor D. Bisch and Professor D. Shlyakhtenko for useful discussions, and Dr. M. M\"{u}ger for his suggestions regarding this manuscript. \end{document}
\begin{document} \newcommand{\into}[0]{\ensuremath{\hookrightarrow}} \newcommand{\onto}[0]{\ensuremath{\twoheadrightarrow}} \newcommand{\eps}[0]{\varepsilon} \newcommand{\Pfin}[0]{\mathcal{P}_{\mathrm{fin}}} \newcommand{\mdim}[0]{\ensuremath{\mathrm{mdim}}} \newcommand{\mesh}[0]{\ensuremath{\mathrm{mesh}}} \newcommand{\widim}[0]{\ensuremath{\mathrm{widim}}} \newtheorem{satz}{Satz}[section] \newaliascnt{corCT}{satz} \newtheorem{cor}[corCT]{Corollary} \aliascntresetthe{corCT} \providecommand*{\corCTautorefname}{Corollary} \newaliascnt{lemmaCT}{satz} \newtheorem{lemma}[lemmaCT]{Lemma} \aliascntresetthe{lemmaCT} \providecommand*{\lemmaCTautorefname}{Lemma} \newaliascnt{propCT}{satz} \newtheorem{prop}[propCT]{Proposition} \aliascntresetthe{propCT} \providecommand*{\propCTautorefname}{Proposition} \newaliascnt{theoremCT}{satz} \newtheorem{theorem}[theoremCT]{Theorem} \aliascntresetthe{theoremCT} \providecommand*{\theoremCTautorefname}{Theorem} \newtheorem*{theoreme}{Theorem} \theoremstyle{definition} \newaliascnt{conjectureCT}{satz} \newtheorem{conjecture}[conjectureCT]{Conjecture} \aliascntresetthe{conjectureCT} \providecommand*{\conjectureCTautorefname}{Conjecture} \newaliascnt{defiCT}{satz} \newtheorem{defi}[defiCT]{Definition} \aliascntresetthe{defiCT} \providecommand*{\defiCTautorefname}{Definition} \newaliascnt{remCT}{satz} \newtheorem{rem}[remCT]{Remark} \aliascntresetthe{remCT} \providecommand*{\remCTautorefname}{Remark} \newaliascnt{exampleCT}{satz} \newtheorem{example}[exampleCT]{Example} \aliascntresetthe{exampleCT} \providecommand*{\exampleCTautorefname}{Example} \begin{abstract} For a countable amenable group $G$ and a fixed dimension $m\geq 1$, we investigate when it is possible to embed a $G$-space $X$ into the $m$-dimensional cubical shift $([0,1]^m)^G$. We focus our attention on systems that arise as an extension of an almost finite $G$-action on a totally disconnected space $Y$, in the sense of Matui and Kerr. 
We show that if such a $G$-space $X$ has mean dimension less than $m/2$, then $X$ embeds into the $(m+1)$-dimensional cubical shift. If the distinguished factor $G$-space $Y$ is assumed to be a subshift of finite type, then this can be improved to an embedding into the $m$-dimensional cubical shift. This result ought to be viewed as the generalization of a theorem by Gutman--Tsukamoto for $G=\mathbb Z$ to actions of all amenable groups, and represents the first result supporting the Lindenstrauss--Tsukamoto conjecture for actions of groups other than $G=\mathbb{Z}^k$. \end{abstract} \maketitle \section*{Introduction} It is a ubiquitous phenomenon in mathematics that when one deals with a category of objects carrying any kind of rich structure, there are often natural distinguished examples that are \emph{large} enough to make it worthwhile to study conditions under which a general object embeds into one of them. Depending on the precise context, the solution to such a problem can reveal an a priori surprising hierarchy present in the objects under consideration, or in the best case scenario give rise to new invariants that may have applications far beyond the original embedding problem. In order to illustrate the historical importance of embedding problems, one need not look further than geometry and/or topology. The Whitney embedding theorem, asserting that every smooth $m$-dimensional manifold embeds smoothly as a submanifold of $\mathbb R^{2m}$, was not only impactful for its statement, but also introduced various concepts in its proof that remain fundamental in the area of differential geometry to this day. 
The Menger--Nöbeling theorem, asserting that every compact metrizable space with covering dimension $m$ embeds continuously into the cube $[0,1]^{2m+1}$, not only generalizes this kind of phenomenon beyond the geometric context, but also shows that covering dimension introduces a level of hierarchy among spaces with consequences beyond merely acting as an obstruction to embeddings. This short article aims to make progress on a similar embedding problem for topological dynamical systems, i.e., countably infinite discrete groups $G$ acting via homeomorphisms on compact metrizable spaces. In this case the distinguished examples are given by so-called cubical shifts. That is, given a natural number $m\geq 1$, we may consider $G$ acting on the space $([0,1]^m)^G$ by sending $g\in G$ to the homeomorphism $[(x_h)_{h\in G}\mapsto (x_{g^{-1}h})_{h\in G}]$; we refer to this as the \emph{$m$-dimensional cubical shift}. A priori, it is not at all clear how to determine when a given action $\alpha: G\curvearrowright X$ embeds into such an example, nor even how to certify that it does not. Given that $X$ embeds into the Hilbert cube as a consequence of Urysohn--Tietze, say via $\iota: X\into [0,1]^{\mathbb N}$, it is trivial to obtain an equivariant embedding into the analogous Hilbert cube shift $([0,1]^{\mathbb N})^G$ via $x\mapsto (\iota(\alpha_{g^{-1}}(x)))_{g\in G}$. Since $([0,1]^{\mathbb N})^G$ is actually homeomorphic to $([0,1]^{m})^G$, this raises the question of how much of a hierarchy there really is between cubical shifts of different dimensions regarding the class of dynamical systems that embed into them. The first really substantial embedding result appeared in the PhD thesis of Jaworski, who showed that every aperiodic homeomorphism on a finite-dimensional space (which we view as a free $\mathbb Z$-system) embeds equivariantly into $[0,1]^{\mathbb Z}$. 
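For the record, here is the coordinate-wise check of equivariance behind the Hilbert cube shift embedding just described; note that with the left-shift convention $\sigma_g\big((x_h)_{h\in G}\big)=(x_{g^{-1}h})_{h\in G}$ used above, the $g$-th coordinate of the embedding $\Phi$ should carry $\iota(\alpha_{g^{-1}}(x))$ (for abelian $G$ this differs from $\iota(\alpha_g(x))$ only by the relabeling $g\mapsto g^{-1}$):

```latex
% Coordinate-wise equivariance check for
% \Phi(x) = (\iota(\alpha_{g^{-1}}(x)))_{g\in G},
% against the shift (\sigma_g x)_h = x_{g^{-1}h}:
\[
\Phi(\alpha_g(x))_h
  \;=\; \iota\big(\alpha_{h^{-1}}(\alpha_g(x))\big)
  \;=\; \iota\big(\alpha_{h^{-1}g}(x)\big)
  \;=\; \Phi(x)_{g^{-1}h}
  \;=\; \big(\sigma_g(\Phi(x))\big)_h .
\]
% Injectivity of \Phi follows from that of \iota by looking at the
% coordinate h = e, so \Phi is indeed an embedding of G-spaces.
```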
Later on, this led to a question of Auslander, asking whether this holds for arbitrary aperiodic homeomorphisms on any space. This problem remained open for over a decade before it was settled in the negative by the introduction of mean topological dimension, the ideas of which initially appeared in Gromov's work \cite{Gromov99} and were subsequently fleshed out by Lindenstrauss--Weiss \cite{LindenstraussWeiss00}. Under the assumption that $G$ is amenable, every topological dynamical system $\alpha: G\curvearrowright X$ can be assigned its mean dimension $\mdim(X,\alpha)\in [0,\infty]$, which respects embeddings. (Although one should perhaps mention that mean dimension has since been extended to sofic groups \cite{Li13}, the methods in this paper reveal nothing new beyond the amenable case, hence we shall ignore sofic mean dimension here.) In a nutshell, mean dimension is a dimensional analog of entropy and is designed to be useful for distinguishing systems of infinite topological entropy. The conceptual difference between these notions can be summarized by the slogan that entropy measures the number of bits per second needed to describe points in a system, whereas mean dimension measures the number of real parameters per second. From this intuitive perspective, it is not surprising that the mean dimension of every $m$-dimensional cubical shift is equal to $m$. By the mere existence of free minimal actions with arbitrarily large mean dimension --- see \cite[§3]{LindenstraussWeiss00} and \cite{Krieger09} --- one gets plenty of examples that cannot embed into the $m$-dimensional cubical shift. In a surprising twist at the time, Lindenstrauss in \cite{Lindenstrauss99} proved that (extensions of) minimal homeomorphisms with mean dimension less than $m/36$ do embed into the $m$-dimensional cubical shift, however. This has triggered the search for the optimal embedding result that can be seen as the dynamical generalization of the Menger--Nöbeling theorem. 
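The claim above that the $m$-dimensional cubical shift has mean dimension $m$ has an elementary half; in the notation recalled in the preliminaries below, the upper bound can be sketched as follows (the matching lower bound is the substantial part; see \cite[Proposition 3.3]{LindenstraussWeiss00}):

```latex
% Upper-bound sketch for the m-dimensional cubical shift, G amenable.
% Fix eps > 0 and F \Subset G. For a suitably enlarged finite window
% F' \supseteq F (depending only on eps and the chosen compatible metric),
% coordinates outside F' contribute less than eps to d^\sigma_F, so
% pulling back a fine open cover of the finite-dimensional cube
% ([0,1]^m)^{F'} along the coordinate projection yields
\[
\widim_\eps\big(([0,1]^m)^G,\, d^\sigma_F\big)
  \;\le\; \dim\big(([0,1]^m)^{F'}\big) \;=\; m\,|F'| .
\]
% Arranging |F_n'|/|F_n| -> 1 along a Foelner sequence (F_n) then gives
\[
\mdim\big(([0,1]^m)^G,\sigma\big)
  \;=\; \sup_{\eps>0}\,\lim_{n\to\infty}
        \frac{\widim_\eps\big(([0,1]^m)^G,\, d^\sigma_{F_n}\big)}{|F_n|}
  \;\le\; m .
\]
```

This sketch is only meant as orientation; complete arguments appear in \cite{LindenstraussWeiss00} and \cite[Chapter 10]{Coornaert}.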
Although the situation for completely general systems is rather subtle and unsolved even for $G=\mathbb Z$, there has been amazing progress for aperiodic or even minimal homeomorphisms. Building on various substantial precursor results \cite{LindenstraussTsukamoto14, GutmanTsukamoto14, Gutman15, Gutman17}, the optimal embedding result was recently proved by Gutman--Tsukamoto \cite{GutmanTsukamoto20} for minimal homeomorphisms:\ Every minimal homeomorphism with mean dimension less than $m/2$ embeds into the $m$-dimensional cubical shift. A generalization of this result for $\mathbb Z^k$-actions was successfully pursued in \cite{Gutman11, GutmanQiaoSzabo18, GutmanLindenstraussTsukamoto16, GutmanQiaoTsukamoto19}, the final approach of which involves extremely sophisticated tools from signal analysis to take advantage of the surrounding geometry for these groups. As was noted in the introduction of \cite{GutmanTsukamoto20}, ``the generalization to non-commutative groups seems to require substantially new ideas''. Indeed there has been no progress on the embedding problem for dynamical systems over nonabelian groups to the best of the authors' knowledge, and this article aims to change that. Our main result (\autoref{thm:embedding-result}+\autoref{cor:optimal-embedding}) asserts: \begin{theoreme} Suppose $G$ is a countable amenable group. Let $\beta: G\curvearrowright Y$ be an almost finite action on a compact totally disconnected metrizable space. Let $\alpha: G\curvearrowright X$ be an action on a compact metrizable space that arises as an extension of $\beta$. Let $m \geq 1$ be a natural number and suppose that $\mdim(X, \alpha) < \frac{m}{2}$. Then there exists an embedding of $G$-spaces $X\into ([0,1]^{m+1})^G$. If $(Y,\beta)$ is assumed to be a subshift of finite type, then there exists an embedding of $G$-spaces $X\into ([0,1]^{m})^G$. 
\end{theoreme} In the context of the above theorem, we remark that the concept of almost finiteness for actions, introduced in \cite{Matui12, Kerr20} with a motivation towards $\mathrm{C}^*$-algebraic applications, is a kind of freeness property that is designed as a topological version of the Ornstein--Weiss lemma \cite{OrnsteinWeiss87} for free probability measure preserving actions. Since it is by now known for a large class of groups that almost finiteness for $\beta$ follows if $\beta$ is assumed to be free (see \autoref{rem:almost-finite-actions}), our main result should be viewed as a generalization of Gutman--Tsukamoto's approach from \cite{GutmanTsukamoto14} to the setting of amenable groups. This is indeed reflected not just in the similarity of the main result, but also at the level of our proof. More specifically, there are clear parallels between \autoref{lem:dense-eps-embeddings}, \autoref{thm:embedding-result} and \autoref{cor:optimal-embedding} on the one hand, and \cite[Proposition 3.1, Theorem 1.5, Corollary 1.8]{GutmanTsukamoto14} on the other hand. In a nutshell, almost finiteness of $\beta$ in our proof acts as the correct substitute for the well-known clopen Rokhlin lemma for aperiodic homeomorphisms on the Cantor set. We further point out that, to the best of our knowledge, this provides the first application of almost finiteness to prove a new result in topological dynamics that is entirely unrelated to questions about crossed product $\mathrm{C}^*$-algebras. The problem of whether the above result is true for all free actions $\alpha: G\curvearrowright X$, regardless of whether they admit well-behaved factor systems, remains open. In light of the technical difficulties already present in the state-of-the-art for $\mathbb Z^k$, however, we expect this challenge to be rather difficult to tackle without ideas that go substantially beyond our present work. \section{Preliminaries} We start with some basic remarks on notation and terminology. 
Throughout the article we fix a countable amenable group $G$. We write $F\Subset G$ to mean that $F$ is a finite subset of $G$. Given $K\Subset G$ and a constant $\delta>0$, we say that a non-empty set $F\Subset G$ is \emph{$(K,\delta)$-invariant}, if $|KF\setminus F|\leq\delta|F|$. We will freely use the well-known characterization of amenability via the F{\o}lner criterion, i.e., $G$ is amenable precisely when every pair $(K,\delta)$ admits some $(K,\delta)$-invariant finite subset in $G$. We call a sequence $(F_n)_{n\in\mathbb N}$ with $F_n\Subset G$ a \emph{F{\o}lner sequence}, if for every pair $(K,\delta)$, there is some $n_0\in\mathbb N$ such that $F_n$ is $(K,\delta)$-invariant for all $n\geq n_0$. The letters $X$ and $Y$ will always be reserved to denote compact metrizable spaces. Under a \emph{topological dynamical system} (over $G$) or {\emph{$G$-space}} we understand a pair $(X,\alpha)$, where $X$ is a compact metrizable space and $\alpha: G\curvearrowright X$ is an action by homeomorphisms. When there is no ambiguity about which action is considered on $X$, we sometimes just talk of the $G$-space $X$ to lighten notation. An action $\alpha$ is called \emph{free} if for every point $x\in X$, its orbit map $[g\mapsto\alpha_g(x)]$ is injective. Given another action $\beta: G\curvearrowright Y$, we say that a continuous map $\phi: X\to Y$ is \emph{equivariant (w.r.t.\ $\alpha$ and $\beta$)}, if $\phi\circ\alpha_g=\beta_g\circ\phi$ for all $g\in G$, in which case we indicate this by writing $\phi: (X,\alpha)\to (Y,\beta)$. Using the alternate arrow $\onto$ means that the map is surjective, whereas using $\into$ means that the map is injective, in which case we also speak of an embedding. If we are given an equivariant surjective map $\pi: (X,\alpha)\onto (Y,\beta)$, then one calls $(Y,\beta)$ a \emph{factor} of $(X,\alpha)$ and refers to $\pi$ as the \emph{factor map}. 
On the flip side, one says that $(X,\alpha)$ is an \emph{extension} of $(Y,\beta)$. Of particular importance for this work is the example given by cubical shifts over a group $G$. That is, given a natural number $m\geq 1$, the \emph{$m$-dimensional cubical shift} is the action $\sigma: G\curvearrowright ([0,1]^m)^G$ given by $\sigma_g\big( (x_h)_{h\in G} \big)=(x_{g^{-1}h})_{h\in G}$. Let us now introduce the concepts underpinning this article, as well as some known results from the literature. \subsection{Almost finiteness} \begin{defi} Let $\alpha: G\curvearrowright X$ be an action. \begin{enumerate}[leftmargin=*,label=$\bullet$] \item A {\it tower} is a pair $(V,S)$ consisting of a subset $V$ of $X$ and a finite subset $S$ of $G$ such that the sets $\alpha_s(V)$ for $s\in S$ are pairwise disjoint. \item Given such a tower, the set $V$ is the {\it base} of the tower, the set $S$ is the {\it shape} of the tower, and the sets $\alpha_s(V)$ for $s\in S$ are the {\it levels} of the tower. \item The tower $(V,S)$ is {\it open} if $V$ is open. It is called {\it clopen} if $V$ is clopen. \item A {\it castle} is a finite collection of towers $\{ (V_i , S_i) \}_{i\in I}$ such that for all $i,j\in I$ and $s\in S_i$, $t\in S_j$, we have that $\alpha_{s}(V_i)\cap\alpha_{t}(V_j)=\emptyset$ if $i\neq j$ or $s\neq t$. \item The castle is {\it open} if each of the towers is open, and {\it clopen} if each of the towers is clopen. \end{enumerate} \end{defi} The following definition originates in \cite[Definition 6.2]{Matui12} for principal ample groupoids, which was then adapted in \cite[Definition 8.2]{Kerr20} for actions of amenable groups on arbitrary spaces. Although not trivially identical to the general version, the definition below is known to be an equivalent one in our setting due to \cite[Theorem 10.2]{Kerr20}. \begin{defi} Let $\beta: G\curvearrowright Y$ be an action on a totally disconnected space. 
We say that $\beta$ is \emph{almost finite}, if for every $K\Subset G$ and $\delta>0$, there exists a clopen castle $\{ (W_i , S_i) \}_{i\in I}$ such that $Y=\bigsqcup_{i\in I} \bigsqcup_{s\in S_i} \beta_s(W_i)$ and for every $i\in I$, the shape $S_i$ is $(K,\delta)$-invariant. \end{defi} \begin{rem} \label{rem:almost-finite-actions} One of the possible ways to view almost finiteness is as a strong topological variant of the Ornstein--Weiss tower lemma \cite[Theorem 5]{OrnsteinWeiss87} that characterizes freeness of probability measure preserving actions in ergodic theory, which was recently strengthened in \cite{CJKMSTD18}. Conjecturally, every free action $\beta: G\curvearrowright Y$ on a totally disconnected space is almost finite.\footnote{We note that the converse is not true for all groups $G$. In general one can only conclude from almost finiteness that the action is \emph{essentially free}, i.e., sets of the form $\{y\in Y\mid \beta_g(y)=y\}$ vanish under all $\beta$-invariant Borel probability measures. Examples of almost finite but non-free actions are found among generalized Odometers; see \cite{OrtegaScarparo20}.} This is not so hard to see for $G=\mathbb Z$, as almost finiteness just boils down to the well-known clopen Rokhlin tower lemma for aperiodic homeomorphisms; see for example \cite[Proposition 3]{BezuglyiDooleyMedynets05}. Although the general case is still open, the following partial results are by now known: \begin{enumerate}[leftmargin=*,label=$\bullet$] \item For any amenable group $G$, almost finite actions on the Cantor set are generic among all free minimal $G$-actions; see \cite[Theorem 4.2]{CJKMSTD18}. \item The conjecture holds when $G$ has local subexponential growth, i.e., given any $F\Subset G$, one has $\lim_{n\to\infty}\frac{|F^{n+1}|}{|F^n|}=1$. This was shown in \cite{KerrSzabo18} as a consequence of \cite{DownarowiczZhang17}. \item Let $H\leq G$ be a normal subgroup so that the above conjecture holds for $H$-actions. 
If $G/H$ is finite or cyclic, then the conjecture holds for all $G$-actions; see \cite{KerrNaryshkin21}. In particular, the conjecture is verified for all elementary amenable groups. \end{enumerate} \end{rem} \subsection{Mean dimension} \begin{defi} \label{def:open-covers} Given a finite open cover $\mathcal U$ of a topological space $X$, we define its \emph{order} as the minimal number $n\geq 0$ such that every point $x\in X$ is an element of at most $n+1$ members of $\mathcal U$. If $X$ is equipped with a metric $d$, then $\mesh_d(\mathcal U)$ is defined as the maximal diameter of a member of $\mathcal U$. Given a constant $\eps>0$, one defines \[ \widim_\eps(X,d) = \min\{ \operatorname{ord}(\mathcal U) \mid \mathcal U \text{ is an open cover with } \mesh_d(\mathcal U)\leq\eps\}. \] \end{defi} Before we can define mean dimension, we recall the following technical result, which is a non-trivial consequence of the Ornstein--Weiss quasitiling machinery. \begin{theorem}[see {\cite[Theorem 6.1]{LindenstraussWeiss00}}] \label{thm:subadditive-convergence} Let $G$ be a countable amenable group. Denote by $\Pfin(G)$ the set of all non-empty finite subsets of $G$. Suppose we are given a function $\phi: \Pfin(G)\to [0,\infty)$ satisfying the following conditions: \begin{enumerate}[leftmargin=*,label=$\bullet$] \item $\phi(F_1) \leq \phi(F_2)$ whenever $F_1 \subseteq F_2$; \item $\phi(Fg) = \phi(F)$ for all $F \Subset G$ and $g \in G$; \item $\phi(F_1 \cup F_2) \leq \phi(F_1) + \phi(F_2)$ for all $F_1, F_2\Subset G$. \end{enumerate} Then there exists $b\geq 0$ such that for every $\eps > 0$ there exists $K\Subset G$ and $\delta > 0$ such that $\big| b - \frac{\phi(F)}{|F|} \big| \leq \eps$ for every $(K, \delta)$-invariant set $F\Subset G$. \end{theorem} \begin{prop}[see {\cite[Proposition 10.4.1]{Coornaert}}] Let $G$ be a countable amenable group and $\alpha: G\curvearrowright X$ a topological dynamical system. 
For a compatible metric $d$ on $X$ and $F\Subset G$, define the metric $d^\alpha_F$ via \[ d^\alpha_F(x,y)=\max_{g\in F} \ d(\alpha_g(x),\alpha_g(y)). \] Let $\eps>0$ be a constant. Then the map \[ \Pfin(G)\ \ni \ F\mapsto \widim_\eps(X,d^\alpha_F) \] has the properties required by \autoref{thm:subadditive-convergence}. Consequently, if $(F_n)_n$ is a F{\o}lner sequence of finite subsets of $G$, then the limit \[ \mdim_\eps(X,\alpha,d)=\lim_{n\to\infty} |F_n|^{-1}\widim_\eps(X,d^\alpha_{F_n}) \ \in \ [0,\infty] \] exists and is independent of the choice of $(F_n)_n$. \end{prop} \begin{defi} \label{def:mdim} Let $G$ be a countable amenable group and $\alpha: G\curvearrowright X$ a topological dynamical system. The \emph{mean dimension} of $(X, \alpha)$ is defined as \[ \mdim(X,\alpha) = \sup_{\eps>0} \ \mdim_\eps(X,\alpha,d) \ \in \ [0,\infty], \] where $d$ is some compatible metric on $X$.\footnote{This definition contains the implicit claim that this supremum does not depend on the chosen metric. This is not completely trivial, but it is well-known; see \cite[Theorem 10.4.2]{Coornaert}.} When the choice of the action $\alpha$ is clear from context, we just write $\mdim(X)$. \end{defi} \begin{example}[see {\cite[Proposition 3.3]{LindenstraussWeiss00}}] For every natural number $m\geq 1$, we can consider the $m$-dimensional cubical shift $\sigma: G\curvearrowright ([0,1]^m)^G$ as defined before. Then $\mdim(([0,1]^m)^G)=m$. \end{example} \begin{rem} It is an easy consequence of its definition that mean dimension respects inclusions. That is, given an equivariant inclusion $X_1\into X_2$ of $G$-spaces, one has the inequality $\mdim(X_1)\leq\mdim(X_2)$. In light of the above, it follows immediately that mean dimension provides an obstruction to the embeddability of a $G$-space $X$ into the $m$-dimensional cubical shift. \end{rem} \section{The embedding result} \begin{defi} Let $(X,d)$ be a compact metric space and $\eps>0$ a constant.
A continuous map $f: X\to Z$ into another topological space is called an \emph{$\eps$-embedding}, if $\operatorname{diam}(f^{-1}(z))<\eps$ for all $z\in Z$. \end{defi} The following lemma of Gutman--Tsukamoto plays the same role in the proof of our main result as it did in the proof of theirs. \begin{lemma}[{\cite[Lemma 2.1]{GutmanTsukamoto14}}] \label{lem:widim} Let $(X,d)$ be a compact metric space, $m\geq 1$ a natural number and $f_0 :X\to [0,1]^m$ a continuous map. Suppose that the numbers $\delta,\eps>0$ satisfy the implication \[ d(x,y)<\eps\quad\implies\quad\|f_0(x)-f_0(y)\|_\infty <\delta. \] If $\widim_\eps(X,d) < m/2$, then there exists an $\eps$-embedding $f: X\to [0,1]^m$ satisfying \[ \|f-f_0\|_\infty:=\max_{x\in X} \|f(x)-f_0(x)\|_\infty <\delta. \] \end{lemma} \begin{defi} Let $(X,\alpha)$ be a topological dynamical system, $m\geq 1$ a natural number and $f: X\to [0,1]^m$ a continuous map. We then define a continuous equivariant map $I_f: X\to ([0,1]^m)^G$ via $I_f(x)=(f(\alpha_g(x)))_{g\in G}$. \end{defi} \begin{lemma} \label{lem:dense-eps-embeddings} Let $\beta: G\curvearrowright Y$ be an almost finite action on a compact totally disconnected space. Let $\alpha: G\curvearrowright X$ be an action on a compact metrizable space that arises as an extension of $\beta$ via the factor map $\pi: (X,\alpha)\onto (Y,\beta)$. Let $m \geq 1$ be a natural number and suppose that $\mdim(X, \alpha) < \frac{m}{2}$. Choose a compatible metric $d$ on $X$. Then for any $\eta > 0$ the set of functions \[ A_{\eta} = \{ f \in \mathcal C(X,[0,1]^m) \mid I_f \times \pi \text{ is an $\eta$-embedding} \} \] is dense in $\mathcal C(X,[0,1]^m)$ with respect to $\|\cdot\|_\infty$. \end{lemma} \begin{proof} Let $f_0: X\to [0,1]^m$ be a continuous map and let $\eta, \delta>0$. We shall argue that there exists $f\in A_\eta$ with $\|f-f_0\|_\infty<\delta$.
Since $f_0$ is uniformly continuous, we can find some $0<\eps\leq\eta$ that fits into the implication \[ d(x,y)<\eps \quad\implies\quad\|f_0(x)-f_0(y)\|_\infty <\delta. \] By assumption, we have $\mdim_\eps(X,\alpha,d)\leq\mdim(X,\alpha)<m/2$. Since $\mdim_\eps(X,\alpha,d)$ arises as a limit in the sense of \autoref{thm:subadditive-convergence}, we can find a constant $\gamma>0$ and $K\Subset G$ such that for every $(K,\gamma)$-invariant set $S\Subset G$, we have $\widim_\eps(X,d^\alpha_S)<|S|m/2$. Since we assumed $\beta$ to be almost finite, we may find a clopen castle $\{(W_i,S_i)\}_{i\in I}$ with $(K,\gamma)$-invariant shapes and $Y=\bigsqcup_{i\in I}\bigsqcup_{s\in S_i} \beta_s(W_i)$. By defining the pullbacks $Z_i=\pi^{-1}(W_i)$ for $i\in I$, we obtain the clopen castle $\{(Z_i,S_i)\}_{i\in I}$ partitioning $X$. Given $i\in I$, we have in particular that $\widim_\eps(X,d^\alpha_{S_i})< |S_i|m/2$. Consider the continuous map $F^0_i: X\to [0,1]^{|S_i|m}\cong ([0,1]^m)^{S_i}$ given by $F_i^0(x)=(f_0(\alpha_s(x)))_{s\in S_i}$. Note that by design, we have the implication \[ d^\alpha_{S_i}(x,y)<\eps \quad\implies\quad \|F^0_i(x)-F^0_i(y)\|_\infty <\delta. \] Using \autoref{lem:widim}, we may choose a continuous $\eps$-embedding $F_i: X\to ([0,1]^m)^{S_i}$ with respect to the metric $d^{\alpha}_{S_i}$ such that $\|F_i-F_i^0\|_\infty<\delta$. We now define the continuous function $f: X\to [0,1]^m$ as follows. If $x\in X$ is a point, choose the unique index $i\in I$ and $s\in S_i$ with $x\in\alpha_s(Z_i)$, and set $f(x)=F_i(\alpha_s^{-1}(x))(s)$. Since this assignment is clearly continuous on each member of a clopen partition of $X$, $f$ is indeed a well-defined continuous map. We claim that $\|f-f_0\|_\infty<\delta$ and $f\in A_\eta$. The first of these properties holds because given $x\in X$ as above, we see that \[ f(x)=F_i(\alpha_s^{-1}(x))(s) \approx_\delta F_i^0(\alpha_s^{-1}(x))(s)=f_0(\alpha_s(\alpha_s^{-1}(x)))=f_0(x).
\] So let us argue $f\in A_\eta$. Suppose $x,y\in X$ are two points such that $(I_f\times\pi)(x)=(I_f\times\pi)(y)$. Then certainly $\pi(x)=\pi(y)$. Since the clopen partition $X=\bigsqcup_{i\in I}\bigsqcup_{s\in S_i} \alpha_s(Z_i)$ is the pullback from a clopen partition of $Y$, we see that there is a unique index $i\in I$ and $s\in S_i$ with $x,y\in\alpha_s(Z_i)$. Since we have $I_f(x)=I_f(y)$, or in other words $f(\alpha_g(x))=f(\alpha_g(y))$ for all $g\in G$, it follows for all $t\in S_i$ that $\alpha_{ts^{-1}}(x), \alpha_{ts^{-1}}(y)\in \alpha_t(Z_i)$, so \[ F_i(\alpha^{-1}_s(x))(t)= f(\alpha_{ts^{-1}}(x))=f(\alpha_{ts^{-1}}(y))=F_i(\alpha^{-1}_s(y))(t). \] Since $t\in S_i$ is arbitrary, it follows that $F_i(\alpha_s^{-1}(x))=F_i(\alpha_s^{-1}(y))$. Since $F_i$ was an $\eps$-embedding with respect to the metric $d^\alpha_{S_i}$ and $s\in S_i$, we may finally conclude $d(x,y)<\eps\leq\eta$. This finishes the proof. \end{proof} \begin{theorem} \label{thm:embedding-result} Let $\beta: G\curvearrowright Y$ be an almost finite action on a compact totally disconnected metrizable space. Let $\alpha: G\curvearrowright X$ be an action on a compact metrizable space that arises as an extension of $\beta$ via the factor map $\pi: (X,\alpha)\onto (Y,\beta)$. Let $m \geq 1$ be a natural number and suppose that $\mdim(X, \alpha) < \frac{m}{2}$. Then the set of functions $f\in\mathcal C(X,[0,1]^m)$ for which \[ I_f\times\pi: (X,\alpha)\to \big( ([0,1]^m)^G\times Y, \sigma\times\beta \big) \] is an embedding, is dense with respect to $\|\cdot\|_\infty$. Consequently, there exists an embedding of $G$-spaces $X\into ([0,1]^{m+1})^G$. \end{theorem} \begin{proof} Let us first explain the last sentence of the claim. Since $Y$ is totally disconnected, it can be embedded into $[0,1]$, say via a continuous map $\psi$. This implies that $\bar{\psi}: Y\to [0,1]^G$ given by $y\mapsto(\psi(\beta_g(y)))_{g\in G}$ is an equivariant embedding.
So assuming the rest of the claim holds, we obtain a chain of embeddings of $G$-spaces \[ X\stackrel{I_f\times\pi}{\longrightarrow} ([0,1]^m)^G\times Y \stackrel{\operatorname{id}\times\bar{\psi}}{\longrightarrow} ([0,1]^{m})^G\times ([0,1])^G \cong ([0,1]^{m+1})^G. \] If we adopt the notation from \autoref{lem:dense-eps-embeddings}, it is clear that the set of functions in question is equal to the intersection $\bigcap_{n\geq 1} A_{1/n}$. In light of the fact that $\mathcal C(X,[0,1]^m)$ is a closed subset of the Banach space $\mathcal C(X,\mathbb R^m)$ with respect to $\|\cdot\|_\infty$, the claim follows immediately from the Baire category theorem if we show that the sets $A_\eta$ are open for all $\eta>0$. So let us briefly argue that this is the case. Recall that we have chosen a compatible metric $d$ on $X$. Let $f\in A_\eta$. Given an infinite tuple $(c_g)_{g\in G}$ of strictly positive numbers with $\sum_{g\in G} c_g=1$, we define the constant $\delta$ via \[ 2\delta = \inf\Big\{ \sum_{g\in G} c_g\|f(\alpha_g(x))-f(\alpha_g(y)) \|_\infty \Big| x,y\in X, \pi(x)=\pi(y), d(x,y)\geq\eta \Big\}. \] Keep in mind that the assignment \[ \big( (z^{(1)}_g)_{g\in G}, (z^{(2)}_g)_{g\in G} \big) \mapsto\sum_{g\in G} c_g\|z^{(1)}_g-z^{(2)}_g\|_\infty \] defines a compatible metric on $([0,1]^m)^G$. Since $I_f$ is continuous, $I_f\times\pi$ is an $\eta$-embedding and $X$ is compact, it follows that $\delta>0$. We claim that the open $\delta$-ball around $f$ is contained in $A_\eta$. Indeed, let $f_0\in\mathcal C(X,[0,1]^m)$ with $\|f-f_0\|_\infty<\delta$. Suppose that $x,y\in X$ satisfy $(I_{f_0}\times\pi)(x)=(I_{f_0}\times\pi)(y)$. Then $\pi(x)=\pi(y)$ and it follows from the triangle inequality that \[ \sum_{g\in G} c_g\|f(\alpha_g(x))-f(\alpha_g(y))\|_\infty < \sum_{g\in G} c_g(2\delta+\|f_0(\alpha_g(x))-f_0(\alpha_g(y))\|_\infty) = 2\delta. \] By the definition of $\delta$, it follows that $d(x,y)<\eta$.
Since $x$ and $y$ were arbitrary, we conclude $f_0\in A_\eta$ and the proof is finished. \end{proof} We also record an improved version of the embedding result, which is an immediate consequence of the above if we assume more about the system $(Y,\beta)$. \begin{cor} \label{cor:optimal-embedding} Let $\beta: G\curvearrowright Y$ be an almost finite action on a compact totally disconnected metrizable space. Suppose that $\beta$ is a subshift of finite type, i.e., there exists some natural number $\ell\geq 2$ and an embedding $Y\into\{1,\dots,\ell\}^G$ of $G$-spaces. Let $\alpha: G\curvearrowright X$ be an action on a compact metrizable space that arises as an extension of $\beta$. Let $m \geq 1$ be a natural number and suppose that $\mdim(X, \alpha) < \frac{m}{2}$. Then there exists an embedding of $G$-spaces $X\into ([0,1]^{m})^G$. \end{cor} \begin{proof} Find some embedding $\varphi: [0,1]\times\{1,\dots,\ell\}\into [0,1]$, which gives rise to an equivariant embedding \[ \bar{\varphi}: [0,1]^G\times\{1,\dots,\ell\}^G \cong \big([0,1]\times\{1,\dots,\ell\}\big)^G \into [0,1]^G \] by applying $\varphi$ componentwise. This allows us to proceed exactly as in the last part of \autoref{thm:embedding-result}, except that we may appeal to the embedding \[ \begin{array}{ccl} ([0,1]^m)^G\times Y &\into& ([0,1]^m)^G\times\{1,\dots,\ell\}^G \\ &\cong& ([0,1]^{m-1})^G\times \big([0,1]\times\{1,\dots,\ell\}\big)^G \\ &\stackrel{\operatorname{id}\times\bar{\varphi}}{\longrightarrow}& ([0,1]^{m-1})^G\times ([0,1])^G\ \cong \ ([0,1]^{m})^G. \end{array} \] \end{proof} \begin{rem} In light of the fact that almost finiteness is a concept that can be defined for actions on arbitrary spaces, one might wonder how far the main result of this note can be generalized. Suppose $\gamma: G\curvearrowright Z$ is an almost finite action on a not necessarily disconnected space. 
It is then well-known that $\gamma$ has the small boundary property and therefore also $\mdim(Z,\gamma)=0$; see \cite[Theorem 5.6]{KerrSzabo18} and \cite[Theorem 5.4]{LindenstraussWeiss00}.\footnote{Note that a priori, this reference in \cite{KerrSzabo18} assumes freeness of the action. However, the statement in question is an ``if and only if'', of which we only need the ``if'' direction, whose proof does not use freeness of the involved action in any way.} Can one prove directly that $(Z,\gamma)$ embeds into the 1-dimensional cubical shift? If so, is the statement of \autoref{thm:embedding-result} true if we replace $\beta: G\curvearrowright Y$ by $\gamma: G\curvearrowright Z$? Although this would seem plausible, the proof does by no means generalize in any obvious way to this more general case. The first named author has proved a partial result in this direction in his master's thesis \cite{Lanckriet21}, namely under the assumption that $Z$ has finite covering dimension $d$. In that case, a version of \autoref{thm:embedding-result} is true, where the conclusion is weakened to obtain an embedding into the $(m(d+2)+1)$-dimensional cubical shift. Since this dimensional upper bound is far from what we expect to be optimal, and since it does not actually recover \autoref{thm:embedding-result} as a special case, we decided not to include this generalized approach in this note. \end{rem} \textbf{Acknowledgements.} The second named author has been supported by research project C14/19/088 funded by the research council of KU Leuven, and the project G085020N funded by the Research Foundation Flanders (FWO). \end{document}
\begin{document} \baselineskip 6.1mm \title {Divisibility of the class numbers of imaginary quadratic fields} \author{K. Chakraborty, A. Hoque, Y. Kishi, P. P. Pandey} \address[K. Chakraborty]{Harish-Chandra Research Institute, HBNI, Chhatnag Road, Jhunsi, Allahabad-211019, India.} \email{kalyan@hri.res.in} \address[A. Hoque]{Harish-Chandra Research Institute, HBNI, Chhatnag Road, Jhunsi, Allahabad-211019, India.} \email{azizulhoque@hri.res.in} \address[Y. Kishi]{Department of Mathematics, Aichi University of Education, 1 Hirosawa Igaya-cho, Kariya, Aichi 448-8542, Japan.} \email{ykishi@auecc.aichi-edu.ac.jp} \address[P. P. Pandey]{Department of Mathematics, IISER Berhampur, Berhampur-760010, Odisha, India.} \email{prem.p2506@gmail.com} \subjclass[2010]{11R11; 11R29} \date{\today} \keywords{Imaginary Quadratic Field; Class Number; Ideal Class Group} \begin{abstract} For a given odd integer $n>1$, we provide some families of imaginary quadratic number fields of the form $\mathbb{Q}(\sqrt{x^2-t^n})$ whose ideal class group has a subgroup isomorphic to $\mathbb{Z}/n\mathbb{Z}$. \end{abstract} \maketitle{} \section{Introduction} The divisibility properties of the class numbers of number fields are very important for understanding the structure of the ideal class groups of number fields. For a given integer $n>1$, the Cohen--Lenstra heuristic \cite{CL84} predicts that a positive proportion of imaginary quadratic number fields have class number divisible by $n$. Proving this heuristic seems out of reach with the current state of knowledge. On the other hand, many infinite families of imaginary quadratic fields with class number divisible by $n$ are known.
Most such families are of the type $\mathbb{Q}(\sqrt{x^2-t^n})$ or of the type $\mathbb{Q}(\sqrt{x^2-4t^n})$, where $x$ and $t$ are positive integers with some restrictions (for the former see \cite{AC55, IT11p, IT11, KI09, RM97, RM99, NA22, NA55, SO00, MI12}, and for the latter see \cite{Cohn, GR01, IS11, IT15, LO09, YA70}). Our focus in this article will be on the family $K_{t,x}=\mathbb{Q}(\sqrt{x^2-t^n})$. In 1922, T. Nagell~\cite{NA22} proved that for an odd integer $n$, the class number of the imaginary quadratic field $K_{t,x}$ is divisible by $n$ if $t$ is odd, $(t,x)=1$, and $q\mid x$, $q^2\nmid x$ for all prime divisors $q$ of $n$. Let $b$ denote the square factor of $x^2-t^n$, that is, $x^2-t^n=b^2d$, where $d<0$ is the square-free part of $x^2-t^n$. Under the condition $b=1$, N. C. Ankeny and S. Chowla \cite{AC55} (resp.\ M. R. Murty \cite[Theorem~1]{RM97}) considered the family $K_{3,x}$ (resp.\ $K_{t,1}$). M. R. Murty also treated the family $K_{t,1}$ with $b<t^{n/4}/2^{3/2}$ (\cite[Theorem~2]{RM97}). Moreover, K. Soundararajan \cite{SO00} (resp.\ A. Ito \cite{IT11p}) treated the family $K_{t,x}$ under the condition that $b<\sqrt{(t^n-x^2)/(t^{n/2}-1)}$ holds (resp.\ all divisors of $b$ divide $d$). On the other hand, T. Nagell~\cite{NA55} (resp.\ Y. Kishi~\cite{KI09}, A. Ito~\cite{IT11} and M. Zhu and T. Wang~\cite{AC55}) studied the family $K_{t,1}$ (resp.\ $K_{3,2^k}$, $K_{p,2^k}$ and $K_{t,2^k}$) unconditionally for $b$, where $p$ is an odd prime. In the present paper, we consider the case when both $t$ and $x$ are odd primes, with no condition imposed on $b$, and prove the following: \begin{thm}\label{T1} Let $n\geq 3$ be an odd integer and $p,q$ be distinct odd primes with $q^2<p^n$. Let $d$ be the square-free part of $q^2-p^n$. Assume that $q \not \equiv \pm 1 \pmod {|d|}$. Moreover, we assume $p^{n/3}\not= (2q+1)/3, (q^2+2)/3$ whenever both $d \equiv 1 \pmod 4$ and $3\mid n$. Then the class number of $K_{p,q}=\mathbb{Q}(\sqrt{d})$ is divisible by $n$.
\end{thm} In Table 1 (respectively Table 2), we list $K_{p,q}$ for small values of $p,q$ for $n=3$ (respectively for $n=5$). It is readily seen from these tables that the assumptions in Theorem \ref{T1} hold very often. We can easily prove, by reading modulo $4$, that the condition \enquote{$p^{n/3}\not= (2q+1)/3, (q^2+2)/3$} in Theorem \ref{T1} holds whenever $p \equiv 3 \pmod 4$. Further, if we fix an odd prime $q$, then the condition \enquote{$q \not\equiv \pm 1 \pmod{|d|}$} in Theorem \ref{T1} holds almost always, and this can be proved using Siegel's celebrated theorem on integral points on affine curves. More precisely, we prove the following theorem in this direction. \begin{thm}\label{T2} Let $n\geq 3$ be an odd integer not divisible by $3$. For each odd prime $q$ the class number of $K_{p,q}$ is divisible by $n$ for all but finitely many primes $p$. Furthermore, for each $q$ there are infinitely many fields $K_{p,q}$. \end{thm} \section{Preliminaries} In this section we mention some results which are needed for the proof of Theorem \ref{T1}. First we state a basic result from algebraic number theory. \begin{prop}\label{P1} Let $d \equiv 5 \pmod 8$ be an integer and $\ell$ be a prime. For odd integers $a,b$ we have $$\left(\frac{a+b\sqrt{d}}{2}\right)^{\ell} \in \mathbb{Z}[\sqrt{d}] \mbox{ if and only if } \ell=3.$$ \end{prop} \begin{proof} This can be proved easily by reduction modulo a suitable power of two. \end{proof} We now recall a result of Y. Bugeaud and T. N. Shorey \cite{BS01} on Diophantine equations which is one of the main ingredients in the proof of Theorem \ref{T1}. Before stating the result of Y. Bugeaud and T. N. Shorey, we need to introduce some definitions and notation. Let $F_k$ denote the $k$th term in the Fibonacci sequence defined by $F_0=0, \ F_1= 1$ and $F_{k+2}=F_k+F_{k+1}$ for $k\geq 0$. Similarly $L_k$ denotes the $k$th term in the Lucas sequence defined by $L_0=2, \ L_1=1$ and $L_{k+2}=L_k+L_{k+1}$ for $k\geq 0$.
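As a small numerical sanity check (ours, not part of the paper; the computations in the paper were done with PARI/GP), one can tabulate the two sequences just defined and confirm, for small indices, the fact recalled below that the only perfect squares in the Lucas sequence are $L_1=1$ and $L_3=4$:

```python
# Sanity check (ours): the Fibonacci and Lucas recurrences above, and the
# perfect squares occurring among the first few Lucas numbers.
from math import isqrt

def fib_lucas(n):
    """Return the first n Fibonacci and Lucas numbers (F_0=0, F_1=1; L_0=2, L_1=1)."""
    F, L = [0, 1], [2, 1]
    while len(F) < n:
        F.append(F[-2] + F[-1])
        L.append(L[-2] + L[-1])
    return F, L

F, L = fib_lucas(50)
lucas_squares = {(k, L[k]) for k in range(50) if isqrt(L[k]) ** 2 == L[k]}
print(lucas_squares)  # {(1, 1), (3, 4)}
```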
For $\lambda\in \{1, \sqrt{2}, 2\}$, we define the subsets $\mathcal{F}, \ \mathcal{G_\lambda}, \ \mathcal{H_\lambda}\subset \mathbb{N}\times\mathbb{N}\times\mathbb{N}$ by \begin{align*} \mathcal{F}&:=\{(F_{k-2\varepsilon},L_{k+\varepsilon},F_k)\,|\, k\geq 2,\varepsilon\in\{\pm 1\}\},\\ \mathcal{G_\lambda}&:=\{(1,4p^r-1,p)\,|\,\text{$p$ is an odd prime},r\geq 1\},\\ \mathcal{H_\lambda}&:=\left\{(D_1,D_2,p)\,\left|\, \begin{aligned} &\text{$D_1$, $D_2$ and $p$ are mutually coprime positive integers with $p$}\\ &\text{an odd prime and there exist positive integers $r$, $s$ such that}\\ &\text{$D_1s^2+D_2=\lambda^2p^r$ and $3D_1s^2-D_2=\pm\lambda^2$} \end{aligned}\right.\right\}, \end{align*} except when $\lambda =2$, in which case the condition ``odd'' on the prime $p$ should be removed in the definitions of $\mathcal{G_\lambda}$ and $\mathcal{H_\lambda}$. \begin{thma}\label{A1} Given $\lambda\in \{1, \sqrt{2}, 2\}$, a prime $p$ and positive co-prime integers $D_1$ and $D_2$, the number of positive integer solutions $(x, y)$ of the Diophantine equation \begin{equation}\label{E1} D_1x^2+D_2=\lambda^2p^y \end{equation} is at most one, except when $$ (\lambda,D_1,D_2,p)\in\mathcal{E}:=\left\{\begin{aligned} &(2,13,3,2),(\sqrt 2,7,11,3),(1,2,1,3),(2,7,1,2),\\ &(\sqrt 2,1,1,5),(\sqrt 2,1,1,13),(2,1,3,7) \end{aligned}\right\} $$ or $(D_1, D_2, p)\in \mathcal{F}\cup \mathcal{G_\lambda}\cup \mathcal{H_\lambda}$. \end{thma} We recall the following result of J. H. E. Cohn \cite{Cohn1} about the appearance of squares in the Lucas sequence. \begin{thma}\label{A2} The only perfect squares appearing in the Lucas sequence are $L_1=1$ and $L_3=4$. \end{thma} \section{Proofs} We begin with the following crucial proposition. \begin{prop}\label{P2} Let $n,q,p,d$ be as in Theorem \ref{T1} and let $m$ be the positive integer with $q^2-p^n=m^2d$.
Then the element $\alpha =q+m\sqrt{d}$ is not an $\ell^{th}$ power of an element in the ring of integers of $K_{p,q}$ for any prime divisor $\ell $ of $n$. \end{prop} \begin{proof} Let $\ell$ be a prime divisor of $n$. Since $n$ is odd, so is $\ell$. We first consider the case when $d \equiv 2 \mbox{ or }3 \pmod 4$. If $\alpha$ is an $\ell^{th}$ power, then there are integers $a,b$ such that $$q+m \sqrt{d}=\alpha =(a+b\sqrt{d})^{\ell}.$$ Comparing the real parts, we have $$q=a^{\ell}+\sum_{i=1}^{(\ell-1)/2} \binom{\ell}{2i} a^{\ell-2i}b^{2i}d^i.$$ This gives $a\mid q$ and hence $a=\pm q$ or $a=\pm 1$.\\ Case (1A): $a= \pm q$.\\ We have $q+m\sqrt{d}=(\pm q+b\sqrt{d})^{\ell}$. Taking norms on both sides, we obtain $$p^n=(q^2-b^2d)^{\ell}.$$ Writing $D_1=-d>0$, we obtain \begin{equation*} D_1b^2+q^2=p^{n/ \ell}. \end{equation*} Also, we have \begin{equation*} D_1m^2+q^2=p^n. \end{equation*} Since $\ell$ is a prime divisor of $n$, the pairs $(x,y)=(|b|,n/ \ell)$ and $(x,y)=(m,n)$ are distinct solutions of (\ref{E1}) in positive integers for $D_1=-d>0,D_2=q^2, \lambda=1$ (note that $b\neq 0$, for otherwise taking norms would give $p^n=q^{2\ell}$). Now we verify that $(1, D_1,D_2,p) \not \in \mathcal{E}$ and $(D_1,D_2,p) \not \in \mathcal{F}\cup \mathcal{G_\lambda}\cup \mathcal{H_\lambda}$. This will give a contradiction. Clearly $(1,D_1,D_2,p) \not \in \mathcal{E}$. Further, as $D_1>3$, we see that $(D_1,D_2,p) \not \in \mathcal{G}_1$. From Theorem \ref{A2}, we see that $(D_1,D_2,p) \not \in \mathcal{F}$. Finally, if $(D_1,D_2,p) \in \mathcal{H}_1$ then there are positive integers $r,s$ such that \begin{equation}\label{eq:1} 3D_1s^2-q^2=\pm 1 \end{equation} and \begin{equation}\label{eq:2} D_1s^2+q^2=p^r. \end{equation} By (\ref{eq:1}), we have $q\not= 3$; hence $q^2\equiv 1\pmod 3$, which forces $3D_1s^2-q^2=- 1$. From this together with (\ref{eq:2}), we obtain $$4q^2=3p^r+1,$$ that is $$(2q-1)(2q+1)=3p^r.$$ This leads to $2q-1=1 \mbox{ or } 2q-1=3$, but this is not possible as $q$ is an odd prime.
Thus $(D_1,D_2,p) \not \in \mathcal{H}_1$.\\ Case (1B): $a=\pm 1$.\\ In this case we have $q+m\sqrt{d}=(\pm 1+b \sqrt{d})^{\ell}$. Comparing the real parts on both sides, we get $q \equiv \pm 1 \pmod{|d|}$, which contradicts the assumption \enquote{$q \not\equiv \pm 1 \pmod{|d|}$}. Next we consider the case when $d \equiv 1 \pmod 4$. If $\alpha$ is an $\ell^{th}$ power of some integer in $K_{p,q}$, then there are rational integers $a,b$ such that $$q+m\sqrt{d}=\left( \frac{a+b\sqrt{d}}{2} \right)^{\ell},\ \ a\equiv b\pmod 2.$$ If both $a$ and $b$ are even, then we can proceed as in the case $d \equiv 2 \mbox{ or }3 \pmod 4$ and obtain a contradiction under the assumption $q \not\equiv \pm 1 \pmod{|d|}$. Thus we can assume that both $a$ and $b$ are odd. Again, taking norms on both sides, we obtain \begin{equation}\label{E4} 4p^{n/\ell}=a^2-b^2d. \end{equation} Since $a,b$ are odd and $p \neq 2$, reading (\ref{E4}) modulo $8$ we get $d \equiv 5 \pmod 8$. As $\left( \frac{a+b\sqrt{d}}{2} \right)^{\ell}=q+m\sqrt{d} \in \mathbb{Z}[\sqrt{d}]$, by Proposition \ref{P1} we obtain $\ell=3$. Thus we have $$q+m\sqrt{d}=\left( \frac{a+b\sqrt{d}}{2} \right)^3.$$ Comparing the real parts, we have \begin{align}\label{E44} 8q&=a(a^2+3b^2d). \end{align} Since $a$ is odd and $a$ divides $8q$, we have $a\in\{\pm 1,\pm q\}$.\\ Case (2A): $a=q$.\\ By (\ref{E44}), we have $8=q^2+3b^2d$, and hence, $2\equiv q^2\pmod{3}$. This is not possible.\\ Case (2B): $a=-q$.\\ By (\ref{E4}) and (\ref{E44}), we have $$4p^{n/3}=q^2-b^2d\ \text{and}\ 8=-(q^2+3b^2d).$$ From these, we have $3p^{n/3}=q^2+2$, which violates our assumption.\\ Case (2C): $a=1$.\\ By (\ref{E44}) and $d<0$, we have $8q=1+3b^2d<0$. This is not possible.\\ Case (2D): $a=-1$.\\ By (\ref{E4}) and (\ref{E44}), we have $$4p^{n/3}=1-b^2d\ \text{and}\ 8q=-(1+3b^2d).$$ From these, we have $3p^{n/3}=2q+1$, which violates our assumption. This completes the proof. \end{proof} We are now in a position to prove Theorem \ref{T1}.
\begin{proof}[\bf Proof of Theorem~$\ref{T1}$] Let $m$ be the positive integer with $q^2-p^n=m^2d$ and put $\alpha =q+m\sqrt{d}$. We note that $\alpha$ and $\bar{\alpha}$ are co-prime and $N(\alpha)=\alpha \bar{\alpha}=p^n$. Thus we get $(\alpha)= \mathfrak{a}^n$ for some integral ideal $\mathfrak{a}$ of $K_{p,q}$. We claim that the order of $[\mathfrak{a}]$ in the ideal class group of $K_{p, q} $ is $n$. If this is not the case, then we obtain an odd prime divisor $\ell$ of $n$ and an integer $\beta $ in $K_{p,q}$ such that $(\alpha)=(\beta)^{\ell}$. As $q$ and $p$ are distinct odd primes, the condition \enquote{$q \not \equiv \pm 1 \pmod{|d|}$} ensures that $d<-3$. Also $d$ is square-free, hence the only units in the ring of integers of $K_{p,q}=\mathbb{Q}(\sqrt{d})$ are $\pm1$. Thus we have $\alpha =\pm \beta^{\ell}$. Since $\ell$ is odd, we obtain $\alpha= \gamma^{\ell}$ for some integer $\gamma$ in $K_{p,q}$, which contradicts Proposition \ref{P2}. \end{proof} We now give a proof of Theorem \ref{T2}. This is obtained as a consequence of a well-known theorem of Siegel (see \cite{ES, LS}). \begin{proof}[\bf Proof of Theorem $\ref{T2}$] Let $n$ be as in Theorem \ref{T2} and $q$ be an arbitrary odd prime. For each odd prime $p \neq q$, from Theorem \ref{T1}, the class number of $K_{p,q}$ is divisible by $n$ unless $q\equiv \pm 1 \pmod {|d|}$. If $q\equiv \pm 1 \pmod {|d|}$, then $|d|\leq q+1$. For any positive integer $D$, the curve \begin{equation}\label{E5} DX^2+q^2=Y^n \end{equation} is an irreducible algebraic curve (see \cite{WS}) of genus greater than $0$. From Siegel's theorem (see \cite{LS}) it follows that there are only finitely many integral points $(X,Y)$ on the curve (\ref{E5}). Thus, for each $d<0$ there are only finitely many primes $p$ such that $$q^2-p^n=m^2d.$$ Since $K_{p,q}=\mathbb{Q}(\sqrt{d})$, it follows that there are infinitely many fields $K_{p,q}$ for each odd prime $q$.
Further, if $p$ is large enough, then for $q^2-p^n=m^2d$, we have $|d|>q+1$. Hence, by Theorem \ref{T1}, the class number of $K_{p,q}$ is divisible by $n$ for $p$ sufficiently large. \end{proof} \section{Concluding remarks} We remark that the strategy of the proof of Theorem \ref{T1} can be adapted, together with the following result of W. Ljunggren \cite{LJ43}, to prove Theorem \ref{T4}. \begin{thma}\label{TE} The only solution of the Diophantine equation \begin{equation} \frac{x^n-1}{x-1}=y^2 \end{equation} in positive integers $x, y$ and odd $n>1$ with $x>1$ is $n=5, x=3, y=11$. \end{thma} \begin{thm}\label{T4} For any positive odd integer $n$ and any odd prime $p$, the class number of the imaginary quadratic field $\mathbb{Q}(\sqrt{1-p^n})$ is divisible by $n$ except for the case $(p, n)=(3, 5)$. \end{thm} Theorem \ref{T4} alternatively follows from the work of T. Nagell (Theorem 25 in \cite{NA55}), which was elucidated by J. H. E. Cohn (Corollary 1 in \cite{CO03}). M. R. Murty gave a proof of Theorem \ref{T4} under the condition that either \enquote{$1-p^n$ is square-free with $n>5$} or \enquote{$m<p^{n/4}/2^{3/2}$ whenever $m^2\mid 1-p^n$ for some integer $m$ with odd $n>5$} (Theorem 1 and Theorem 2 in \cite{RM97}; see also \cite{RM99}). We now give some numerical illustrations of Theorem \ref{T1}. All the computations in this paper were done using PARI/GP (version 2.7.6). Table 1 gives the list of imaginary quadratic fields $K_{p,q}$ corresponding to $n=3$, $p \leq 19$ (and hence discriminant not exceeding $19^3$). Note that the list does not exhaust all the imaginary quadratic fields $K_{p,q}$ of discriminant not exceeding $19^3$.
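The table entries below are easy to re-check; the following Python sketch (ours; the computations in the paper were carried out in PARI/GP, and the helper names are illustrative) recomputes the square-free part $d$ of $q^2-p^n$, tests the condition $q\not\equiv\pm 1\pmod{|d|}$ of Theorem \ref{T1} on sample rows of Table 1, and searches a small range for solutions of the Ljunggren equation $(x^n-1)/(x-1)=y^2$:

```python
# Sanity checks (ours; the paper's computations used PARI/GP).
from math import isqrt

def squarefree_part(N):
    """Write N = m^2 * d with d square-free and return d (keeping the sign of N)."""
    sign, N, d, f = (-1 if N < 0 else 1), abs(N), 1, 2
    while f * f <= N:
        e = 0
        while N % f == 0:
            N //= f
            e += 1
        if e % 2:
            d *= f
        f += 1
    return sign * d * N  # any leftover factor of N is a prime to the first power

def condition_holds(p, q, n=3):
    """Square-free part d of q^2 - p^n, and whether q is not ±1 mod |d|."""
    d = squarefree_part(q**2 - p**n)
    return d, q % abs(d) not in {1 % abs(d), (-1) % abs(d)}

# A few rows of Table 1 (n = 3); starred rows are those failing the condition.
print(condition_holds(3, 5))   # (-2, False)  -- starred row
print(condition_holds(5, 3))   # (-29, True)
print(condition_holds(7, 3))   # (-334, True)

# Brute-force search for (x^n - 1)/(x - 1) = y^2 with odd n and small x:
# only Ljunggren's solution x = 3, n = 5 (with y = 11) shows up.
sols = [(x, n) for n in range(3, 12, 2) for x in range(2, 200)
        if isqrt((x**n - 1) // (x - 1)) ** 2 == (x**n - 1) // (x - 1)]
print(sols)  # [(3, 5)]
```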
Table 2 is the list of $K_{p,q}$ for $n=5$ and $p \leq 7$.\\ \begin{center} \begin{longtable}{|l|l|l|l|l|l|l|l|l|l|} \caption{Numerical examples of Theorem 1 for $n=3$.} \label{tab:long1} \\ \hline \multicolumn{1}{|c|}{$p$} & \multicolumn{1}{c|}{$q$} & \multicolumn{1}{c|}{$q^2-p^n$}& \multicolumn{1}{c|}{$d$} & \multicolumn{1}{c|}{$h(d)$}& \multicolumn{1}{|c|}{$p$} & \multicolumn{1}{c|}{$q$} & \multicolumn{1}{c|}{$q^2-p^n$}& \multicolumn{1}{c|}{$d$} & \multicolumn{1}{c|}{$h(d)$}\\ \hline \endfirsthead \multicolumn{10}{c} {{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\ \hline \multicolumn{1}{|c|}{$p$} & \multicolumn{1}{c|}{$q$} & \multicolumn{1}{c|}{$q^2-p^n$}& \multicolumn{1}{c|}{$d$} & \multicolumn{1}{c|}{$h(d)$}& \multicolumn{1}{|c|}{$p$} & \multicolumn{1}{c|}{$q$} & \multicolumn{1}{c|}{$q^2-p^n$}& \multicolumn{1}{c|}{$d$} & \multicolumn{1}{c|}{$h(d)$}\\ \hline \endhead \hline \multicolumn{10}{|r|}{{Continued on next page}} \\ \hline \endfoot \hline \endlastfoot 3&5&-2&-2&1*& 5&3&-116&-29&6\\ 5&7&-76&-19&1**& 7&3&-334&-334&12\\ 7&5&-318&-318&12&7&11&-222&-222&12\\ 7&13&-174&-174&12& 7&17&-54&-6&2*\\ 11&3&-1322&-1322&42&11&5&-1306&-1306&18\\ 11&7&-1282&-1282&12&11&13&-1162&-1162&12\\ 11&17&-1042&-1042&12&11&19&-970&-970&12\\ 11&23&-802&-802&12&11&29&-490&-10&2*\\ 11&31&-370&-370&12&11&37&-38&-38&6*\\ 13&3&-2188&-547&3&13&5&-2172&-543&12\\ 13&7&-2148&-537&12&13&11&-2076&-519&18\\ 13&17&-1908&-53&6&13&19&-1836&-51&2**\\ 13&23&-1668&-417&12&13&29&-1356&-339&6\\ 13&31&-1236&-309&12&13&37&-828&-23&3\\ 13&41&-516&-129&12&13&43&-348&-87&6\\ 13&47&-12&-3&1*&17&3&-4904&-1226&42\\ 17&5&-4888&-1222&12&17&7&-4864&-19&1**\\ 17&11&-4792&-1198&12&17&13&-4744&-1186&24\\ 17&19&-4552&-1138&12&17&23&-4384&-274&12\\ 17&29&-4072&-1018&18&17&31&-3952&-247&6\\ 17&37&-3544&-886&18&17&41&-3232&-202&6\\ 17&43&-3064&-766&24&17&47&-2704&-1&1*\\ 17&53&-2104&-526&12&17&59&-1432&-358&6\\ 17&61&-1192&-298&6&17&67&-424&-106&6\\ 19&3&-6850&-274&12&19&5&-6834&-6834&48\\
19&7&-6810&-6810&48&19&11&-6738&-6738&48\\ 19&13&-6690&-6690&72&19&17&-6570&-730&12\\ 19&23&-6330&-6330&48&19&29&-6018&-6018&48\\ 19&31&-5898&-5898&48&19&37&-5490&-610&12\\ 19&41&-5178&-5178&48&19&43&-5010&-5010&48\\ 19&47&-4650&-186&12&19&53&-4050&-2&1*\\ 19&59&-3378&-3378&24&19&61&-3138&-3138&24\\ 19&67&-2370&-2370&24&19&71&-1818&-202&6\\ 19&73&-1530&-170&12&19&79&-618&-618&12\\ \end{longtable} \end{center} \begin{center} \begin{longtable}{|l|l|l|l|l|l|l|l|l|l|} \caption{Numerical examples of Theorem 1 for $n=5$.} \label{tab:long2} \\ \hline \multicolumn{1}{|c|}{$p$} & \multicolumn{1}{c|}{$q$} & \multicolumn{1}{c|}{$q^2-p^n$}& \multicolumn{1}{c|}{$d$} & \multicolumn{1}{c|}{$h(d)$}& \multicolumn{1}{|c|}{$p$} & \multicolumn{1}{c|}{$q$} & \multicolumn{1}{c|}{$q^2-p^n$}& \multicolumn{1}{c|}{$d$} & \multicolumn{1}{c|}{$h(d)$}\\ \hline \endfirsthead \multicolumn{10}{c} {{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\ \hline \multicolumn{1}{|c|}{$p$} & \multicolumn{1}{c|}{$q$} & \multicolumn{1}{c|}{$q^2-p^n$}& \multicolumn{1}{c|}{$d$} & \multicolumn{1}{c|}{$h(d)$}& \multicolumn{1}{|c|}{$p$} & \multicolumn{1}{c|}{$q$} & \multicolumn{1}{c|}{$q^2-p^n$}& \multicolumn{1}{c|}{$d$} & \multicolumn{1}{c|}{$h(d)$}\\ \hline \endhead \hline \multicolumn{10}{|r|}{{Continued on next page}} \\ \hline \endfoot \hline \endlastfoot 3&5&-218&-218&10&3&7&-194&-194&20\\ 3&11&-122&-122&10&3&13&-74&-74&10\\ 5&3&-3116&-779&10&5&7&-3076&-769&20\\ 5&11&-3004&-751&15&5&13&-2956&-739&5\\ 5&17&-2836&-709&10&5&19&-2764&-691&5\\ 5&23&-2596&-649&20&5&29&-2284&-571&5\\ 5&31&-2164&-541&5&5&37&-1756&-439&15\\ 5&41&-1444&-1&1*&5&43&-1276&-319&10\\ 5&47&-916&-229&10&5&53&-316&-79&5\\ 7&3&-16798&-16798&60&7&5&-16782&-16782&100\\ 7&11&-16686&-206&20&7&13&-16638&-16638&80\\ 7&17&-16518&-16518&60&7&19&-16446&-16446&100\\ 7&23&-16278&-16278&80&7&29&-15966&-1774&20\\ 7&31&-15846&-15846&160&7&37&-15438&-15438&80\\ 7&41&-15126&-15126&120&7&43&-14958&-1662&20\\ 
7&47&-14598&-1622&30&7&53&-13998&-13998&100\\ 7&59&-13326&-13326&100&7&61&-13086&-1454&60\\ 7&67&-12318&-12318&60&7&71&-11766&-11766&120\\ 7&73&-11478&-11478&60&7&79&-10566&-1174&30\\ 7&83&-9918&-1102&20&7&89&-8886&-8886&60\\ 7&97&-7398&-822&20&7&101&-6606&-734&40\\ 7&103&-6198&-6198&40&7&107&-5358&-5358&40\\ 7&109&-4926&-4926&40&7&113&-4038&-4038&60\\ 7&127&-678&-678&20&&&&& \\ \end{longtable} \end{center} In both tables we use $*$ in the class number column to indicate the failure of the condition \enquote{$q\not\equiv \pm 1 \pmod{|d|}$} of Theorem \ref{T1}. A $**$ in the class number column indicates that both of the conditions \enquote{$q\not\equiv \pm 1 \pmod{|d|}$ and $p^{n/3}\ne (2q+1)/3, (q^2+3)/3$} fail to hold. For $n=3$, the number of imaginary quadratic number fields obtained from the family provided by T. Nagell (namely $K_{t,1}$ with $t$ any odd integer) with class number divisible by $3$ and discriminant not exceeding $19^3$ is at most $9$, whereas Table 1 contains 59 imaginary quadratic fields $K_{p,q}$ with class number divisible by $3$ and discriminant not exceeding $19^3$ (Table 1 does not exhaust all such $K_{p,q}$). Of these 59 fields in Table 1, the conditions of Theorem~\ref{T1} hold for 58. A similar phenomenon occurs for all values of $n$. \noindent\textbf{Acknowledgements.} The third and fourth authors gratefully acknowledge the hospitality of Harish-Chandra Research Institute, Allahabad, where the main part of this work was done. The authors thank the anonymous referee for valuable comments which improved the presentation of this paper. \end{document}
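The columns of the two tables above can be rechecked mechanically: $d$ is the squarefree part of $q^{2}-p^{n}$ and $h(d)$ is the class number of $\mathbb{Q}(\sqrt{d})$. The following sketch (ours, not part of the paper) computes both quantities by the classical count of reduced binary quadratic forms; it assumes $d<0$ squarefree.

```python
def squarefree_part(n):
    # largest squarefree divisor carrying the sign of n, e.g. -76 -> -19
    sign, n, result, p = (-1 if n < 0 else 1), abs(n), 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e % 2 == 1:
            result *= p
        p += 1
    return sign * result * n  # any leftover n is prime (or 1)

def class_number(d):
    # class number of Q(sqrt(d)) for squarefree d < 0, by counting
    # reduced binary quadratic forms (a, b, c) of discriminant D
    D = d if d % 4 == 1 else 4 * d   # field discriminant
    h, b = 0, D % 2                  # b must have the parity of D
    while 3 * b * b <= abs(D):       # reduced forms satisfy 3b^2 <= |D|
        q = (b * b - D) // 4         # q = a*c for forms with this b
        a = max(b, 1)                # reduction requires b <= a <= c
        while a * a <= q:
            if q % a == 0:
                c = q // a
                # the forms with b and -b are distinct unless b=0, a=b or a=c
                h += 1 if (b == 0 or a == b or a == c) else 2
            a += 1
        b += 2
    return h

# sample rows from Table 1: (p, q, n, expected d, expected h(d))
for p, q, n, d_exp, h_exp in [(3, 5, 3, -2, 1), (5, 3, 3, -29, 6), (13, 37, 3, -23, 3)]:
    d = squarefree_part(q**2 - p**n)
    assert (d, class_number(d)) == (d_exp, h_exp)
```

The form-counting loop is the standard enumeration of reduced forms $(a,b,c)$ with $b^2-4ac=D$, $|b|\leq a\leq c$; it is practical for all discriminants appearing in the tables.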
\begin{document} \title{A note on generically stable measures and $fsg$ groups} \begin{abstract} We prove (Proposition 2.1) that if $\mu$ is a generically stable measure in an $NIP$ theory, and $\mu(\phi(x,b)) = 0$ for all $b$, then for some $n$, $\mu^{(n)}(\exists y(\phi(x_{1},y)\wedge .. \wedge \phi(x_{n},y))) = 0$. As a consequence we show (Proposition 3.2) that if $G$ is a definable group with $fsg$ in an $NIP$ theory, and $X$ is a definable subset of $G$, then $X$ is generic if and only if every translate of $X$ does not fork over $\emptyset$, precisely as in stable groups, answering positively Problem 5.5 from \cite{NIP2}. \end{abstract} \section{Introduction and preliminaries} This short paper is a contribution to the generalization of stability theory and stable group theory to $NIP$ theories, and also provides another example where we need to resort to measures to prove statements (about definable sets and/or types) which do not explicitly mention measures. The observations in the current paper can and will be used in the future to sharpen existing results around measures and $NIP$ theories (and this is why we wanted to record the observations here). Included in these sharpenings will be: (i) replacing average types by generically stable types in a characterization of strong dependence in terms of measure and weight in \cite{strongdep}, and (ii) showing the existence of ``external generic types" (in the sense of Newelski \cite{Newelski}), over any model, for $fsg$ groups in $NIP$ theories, improving on Lemma 4.14 and related results from \cite{Newelski}. If $p(x)\in S(A)$ is a stationary type in a stable theory and $\phi(x,b)$ any formula, then we know that $\phi(x,b)\in p|\mathfrak C$ if and only if $\models \bigwedge_{i=1,..,n}\phi(a_{i},b)$ for some independent realizations $a_{1},..,a_{n}$ of $p$ (for some $n$ depending on $\phi(x,y)$). 
Hence $\phi(x,b)\notin p|\mathfrak C$ for all $b$ implies that (and is clearly implied by) the inconsistency of $\bigwedge_{i=1,..,n}\phi(a_{i},y)$ for some (any) independent set $a_{1},..,a_{n}$ of realizations of $p$. This also holds for generically stable types in $NIP$ theories (as well as for generically stable types in arbitrary theories, with definition as in \cite{PT}). In \cite{strongdep}, an analogous result was proved for ``average measures" in strongly dependent theories. Here we prove it (Proposition 2.1) for generically stable measures in arbitrary $NIP$ theories, as well as giving a generalization (Remark 2.2). The $fsg$ condition on a definable group $G$ is a kind of ``definable compactness" assumption, and in fact means precisely this in $o$-minimal theories and suitable theories of valued fields (and of course stable groups are $fsg$). Genericity of a definable subset $X$ of $G$ means that finitely many translates of $X$ cover $G$. Proposition 2.1 is used to show that for $X$ a definable subset of an $fsg$ group $G$, $X$ is generic if and only if every translate of $X$ does not fork over $\emptyset$. This is a somewhat striking extension of stable group theory to the $NIP$ environment. We work with an $NIP$ theory $T$ and inside some monster model $\mathfrak C$. If $A$ is any set of parameters, let $L_x(A)$ denote the Boolean algebra of $A$-definable sets in the variable $x$. A \emph{Keisler measure} over $A$ is a finitely additive probability measure on $L_x(A)$. Equivalently, it is a regular Borel probability measure on the compact space $S_x(A)$. We will denote by $\mathfrak M_x(A)$ the space of Keisler measures over $A$ in the variable $x$. We might omit $x$ when it is not needed or when it is included in the notation of the measure itself ({\it e.g.} $\mu_x$). 
If $X$ is a sort, or more generally a definable set, we may also use notation such as $L_{X}(A)$, $S_{X}(A)$, $\mathfrak M_{X}(A)$, where for example $S_{X}(A)$ denotes the complete types over $A$ which contain the formula defining $X$ (or which ``concentrate on $X$"). \begin{defi} A type $p\in S_x(A)$ is \emph{weakly random} for $\mu_{x}$ if $\mu(\phi(x))>0$ for any $\phi(x)\in L(A)$ such that $p\vdash \phi(x)$. A point $b$ is weakly random for $\mu$ over $A$ if $\tp(b/A)$ is weakly random for $\mu$. \end{defi} We briefly recall some definitions and properties of Keisler measures, referring the reader to \cite{NIP3} for more details. If $\mu \in \mathfrak M_x(\mathfrak C)$ is a global measure and $M$ a small model, we say that $\mu$ is $M$-invariant if $\mu(\phi(x,a) \triangle \phi(x,a'))=0$ for every formula $\phi(x,y)$ and $a,a'\in \mathfrak C$ having the same type over $M$. Such a measure admits a Borel defining scheme over $M$: for every formula $\phi(x,y)$, the value $\mu(\phi(x,b))$ depends only on $\tp(b/M)$, and for any Borel $B\subset [0,1]$, the set $\{p\in S_y(M) : \mu(\phi(x,b))\in B \text{ for some }b\models p\}$ is a Borel subset of $S_y(M)$. Let $\mu_x \in \mathfrak M(\mathfrak C)$ be $M$-invariant. If $\lambda_y\in \mathfrak M(\mathfrak C)$ is any measure, then we can define the \emph{invariant extension} of $\mu_x$ over $\lambda_y$, denoted $\mu_x \otimes \lambda_y$. It is a measure in the two variables $x,y$ defined in the following way. Let $\phi(x,y) \in L(\mathfrak C)$. Take a small model $N$ containing $M$ and the parameters of $\phi$. Define $\mu_x \otimes \lambda_y (\phi(x,y)) = \int f(p) d\lambda_y,$ the integral ranging over $S_y(N)$, where $f(p) = \mu_x(\phi(x,b))$ for $b\in \mathfrak C$, $b\models p$ (this function is Borel by Borel definability). It is easy to check that this does not depend on the choice of $N$. If $\lambda_y$ is also invariant, we can also form the product $\lambda_y \otimes \mu_x$. 
In general it will not be the case that $\lambda_y \otimes \mu_x=\mu_x \otimes \lambda_y$. If $\mu_x$ is a global $M$-invariant measure, we define by induction: $\mu^{(n)}_{x_1...x_n}$ by $\mu^{(1)}_{x_1}=\mu_{x_1}$ and $\mu^{(n+1)}_{x_1...x_{n+1}} = \mu_{x_{n+1}} \otimes \mu^{(n)}_{x_1...x_n}$. We let $\mu^{(\omega)}_{x_1x_2...}$ be the union and call it the \emph{Morley sequence} of $\mu_x$. \\ Special cases of $M$-invariant measures include definable and finitely satisfiable measures. A global measure $\mu_x$ is \emph{definable} over $M$ if it is $M$-invariant and for every formula $\phi(x,y)$ and open interval $I\subset [0,1]$ the set $\{p\in S_y(M) : \mu(\phi(x,b))\in I \text{ for some }b\models p\}$ is open in $S_y(M)$. The measure $\mu$ is \emph{finitely satisfiable} in $M$ if $\mu(\phi(x,b))>0$ implies that $\phi(x,b)$ is satisfied in $M$. Equivalently, any weakly random type for $\mu$ is finitely satisfiable in $M$. \begin{lemma} Let $\mu \in \mathfrak M_{x}(\mathfrak C)$ be definable over $M$, and $p(x)\in S_{x}(\mathfrak C)$ be weakly random for $\mu$. Let $\phi(x_{1},..,x_{n})$ be a formula over $\mathfrak C$. Suppose that $\phi(x_{1},..,x_{n})\in p^{(n)}$. Then $\mu^{(n)}(\phi(x_{1},..,x_{n})) > 0$. \end{lemma} \begin{proof} We will carry out the proof in the case where $\mu$ is definable (over $M$), which is anyway the case we need. Note that $p^{(m)}$ is $M$-invariant for all $m$. The proof of the lemma is by induction on $n$. For $n=1$ it is just the definition of weakly random. Assume true for $n$ and we prove for $n+1$. So suppose $\phi(x_{1},..,x_{n},x_{n+1}) \in p^{(n+1)}$. This means that for $(a_{1},..,a_{n})$ realizing $p^{(n)}|M$, $\phi(a_{1},..,a_{n},x) \in p$. So as $p$ is weakly random for $\mu$, $\mu(\phi(a_{1},..,a_{n},x)) = r>0$. 
So as $\mu$ is $M$-invariant, $tp(a_{1}',..,a_{n}'/M) = tp(a_{1},..,a_{n}/M)$ implies $\mu(\phi(a_{1}',..,a_{n}',x)) = r$ and thus also $r-\epsilon < \mu(\phi(a_{1}',..,a_{n}',x))$ for any small positive $\epsilon$. By definability of $\mu$ and compactness there is a formula $\psi(x_{1},..,x_{n}) \in tp(a_{1},..,a_{n}/M)$ such that $\models\psi(a_{1}',..,a_{n}')$ implies $ 0 < r-\epsilon < \mu(\phi(a_{1}',..,a_{n}',x))$. By induction hypothesis, $\mu^{(n)}(\psi(x_{1},..,x_{n})) > 0$. So by definition of $\mu^{(n+1)}$ we have that $\mu^{(n+1)}(\phi(x_{1},..,x_{n},x_{n+1})) > 0$ as required. \end{proof} A measure $\mu_{x_1,...,x_n}$ is \emph{symmetric} if for any permutation $\sigma$ of $\{1,...,n\}$ and any formula $\phi(x_1,...,x_n)$, we have $\mu(\phi(x_1,...,x_n))=\mu(\phi(x_{\sigma.1},...,x_{\sigma.n}))$. A special case of a symmetric measure is given by powers of a generically stable measure, as we recall now. The following is Theorem 3.2 of \cite{NIP3}: \begin{fact}\label{genstable} Let $\mu_x$ be a global $M$-invariant measure. Then the following are equivalent: \begin{enumerate} \item $\mu_x$ is both definable and finitely satisfiable (necessarily over $M$), \item $\mu^{(n)}_{x_1,...,x_n}|_M$ is symmetric for all $n<\omega$, \item for any global $M$-invariant Keisler measure $\lambda_y$, $\mu_x \otimes \lambda_y=\lambda_y \otimes \mu_x$, \item $\mu$ commutes with itself: $\mu_x \otimes \mu_y=\mu_y\otimes \mu_x$. \end{enumerate} If $\mu_x$ satisfies one of those properties, we say it is \emph{generically stable}. \end{fact} If $\mu\in \mathfrak M_x(A)$ and $D$ is a definable set such that $\mu(D)>0$, we can consider the \emph{localisation} of $\mu$ at $D$, which is a Keisler measure $\mu_D$ over $A$ defined by $\mu_D(X)=\mu(X\cap D)/\mu(D)$ for any definable set $X$. We will use the notation $Fr(\theta(x),x_1,...,x_n)$ to mean $$\frac 1 n |\{i\in \{1,...,n\} : \models \theta(x_i)\}|.$$ The following is a special case of Lemma 3.4 of \cite{NIP3}. 
\begin{prop}\label{genslemma} Let $\phi(x,y)$ be a formula over $M$ and fix $r\in (0,1)$ and $\epsilon >0$. Then there is $n$ such that for any symmetric measure $\mu_{x_1,...,x_{2n}}$, we have $$\mu_{x_1,...,x_{2n}}( \exists y (|Fr(\phi(x,y),x_1,...,x_n) - Fr(\phi(x,y),x_{n+1},...,x_{2n}) | > r)) \leq \epsilon.$$ \end{prop} \section{Main result} \begin{prop} Let $\mu_x$ be a global generically stable measure. Let $\phi(x,y)$ be any formula in $L(\mathfrak C)$. Suppose that $\mu(\phi(x,b))=0$ for all $b\in \mathfrak C$. Then there is $n$ such that $\mu^{(n)}( \exists y (\phi(x_1,y) \wedge ... \wedge \phi(x_n,y)))=0$. Moreover, $n$ depends only on $\phi(x,y)$ and not on $\mu$. \end{prop} \begin{proof} Let $\mu_x$ be a global generically stable measure and $M$ a small model over which $\phi(x,y)$ is defined and such that $\mu_x$ is $M$-invariant. Assume that $\mu(\phi(x,b))=0$ for all $b\in \mathfrak C$. For any $k$, define $$W_k = \{(x_1,...,x_k) : \exists y (\bigwedge_{i=1..k} \phi(x_i,y))\}.$$ This is a definable set. We want to show that $\mu^{(n)}(W_n)=0$ for $n$ large enough. Assume for a contradiction that this is not the case. Let $n$ be given by Proposition \ref{genslemma} for $r=1/2$ and $\epsilon=1/2$. Consider the measure $\lambda_{x_1,...,x_{2n}}$ over $M$ defined as being equal to $\mu^{(2n)}$ localised at the set $W_{2n}$ (by our assumption, this is well defined). As the measure $\mu^{(2n)}$ is symmetric and the set $W_{2n}$ is symmetric in the $2n$ variables, the measure $\lambda$ is symmetric. Let $\chi(x_1,...,x_{2n})$ be the formula ``$(x_1,...,x_{2n}) \in W_{2n} \wedge \forall y (|Fr(\phi(x,y),x_1,...,x_n)-Fr(\phi(x,y),x_{n+1},...,x_{2n})| \leq 1/2)$". By definition of $n$, we have $\lambda ( \exists y (|Fr(\phi(x,y),x_1,...,x_n) - Fr(\phi(x,y),x_{n+1},...,x_{2n}) | > 1/2)) \leq 1/2$. Therefore $\mu^{(2n)} (\chi(x_1,...,x_{2n})) >0$. 
As $\mu$ is $M$-invariant, we can write $$\mu^{(2n)}(\chi(x_1,...,x_{2n})) = \int_{q \in S_{x_1,...,x_n}(M)} \mu^{(n)} (\chi(q,x_{n+1},...,x_{2n}))d\mu^{(n)},$$ where $\mu^{(n)} (\chi(q,x_{n+1},...,x_{2n}))$ stands for $\mu^{(n)}(\chi(a_1,...,a_n,x_{n+1},...,x_{2n}))$ for some (any) realization $(a_1,...,a_n)$ of $q$. As $\mu^{(2n)}(\chi(x_1,...,x_{2n})) >0$, there is $q\in S_{x_1,...,x_n}(M)$ such that \\(*) $\mu^{(n)}(\chi(q,x_{n+1},...,x_{2n}))>0$.\\Fix some $(a_1,...,a_n)\models q$. By (*), we have $(a_1,...,a_n) \in W_n$. So let $b\in \mathfrak C$ such that $\models \bigwedge_{i=1...n} \phi(a_i,b)$. Again by (*), we can find some $(a_{n+1},...,a_{2n})$ weakly random for $\mu^{(n)}$ over $Ma_1...a_nb$ and such that \\ (**) $\models \chi(a_1,...,a_n,a_{n+1},...,a_{2n})$. \\ In particular, for $j=n+1,...,2n$, $a_j$ is weakly random for $\mu$ over $Mb$, hence $\models \neg \phi(a_j,b)$. But then $|Fr(\phi(x,b),a_1,...,a_n) - Fr(\phi(x,b),a_{n+1},...,a_{2n})| =1$. This contradicts (**). \end{proof} \begin{rem} The proof above adapts to showing the following generalization: \newline Let $\mu_{x}$ be a global generically stable measure, $\phi(x,y)$ a formula in $L(\mathfrak C)$. Let $\Sigma(y)$ be the partial type (over the parameters in $\phi$ together with a small model over which $\mu$ is definable) defining $\{b:\mu(\phi(x,b)) = 0\}$. Then for some $n$: $\mu^{(n)}(\exists y(\Sigma(y) \wedge \phi(x_{1},y)\wedge .. \wedge\phi(x_{n},y))) = 0$. \end{rem} \section{Generics in $fsg$ groups } Let $G$ be a definable group, without loss defined over $\emptyset$. We call a definable subset $X$ of $G$ left (right) generic if finitely many left (right) translates of $X$ cover $G$, and a type $p(x)\in S_{G}(A)$ is left (right) generic if every formula in $p$ is. 
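To illustrate genericity with a standard example (ours, not from this note): in the stable group $(\mathbb{C},+)$, the definable subsets in one variable are exactly the finite and cofinite sets, and the generic definable sets are precisely the cofinite ones. Indeed, if $X=\mathbb{C}\setminus F$ with $F$ finite, then choosing any $g\notin F-F=\{a-b : a,b\in F\}$ gives
\[
F\cap (g+F)=\emptyset, \quad\text{hence}\quad X\cup (g+X)=\mathbb{C},
\]
so two translates of $X$ already cover the group, while no finite union of translates of a finite set can cover $\mathbb{C}$.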
We originally defined (\cite{NIP1}) $G$ to have ``finitely satisfiable generics", or to be $fsg$, if there is some global complete type $p(x)\in S_{G}(\mathfrak C)$ of $G$ every left $G$-translate of which is finitely satisfiable in some fixed small model $M$. The following summarizes the situation, where the reader is referred to Proposition 4.2 of \cite{NIP1} for (i) and Theorem 7.7 of \cite{NIP2} and Theorem 4.3 of \cite{NIP3} for (ii), (iii), and (iv). \begin{fact} Suppose $G$ is an $fsg$ group. Then \newline (i) A definable subset $X$ of $G$ is left generic iff it is right generic, and the family of nongeneric definable sets is a (proper) ideal of the Boolean algebra of definable subsets of $G$, \newline (ii) There is a left $G$-invariant Keisler measure $\mu\in \mathfrak M_G(\mathfrak C)$ which is generically stable, \newline (iii) Moreover $\mu$ from (ii) is the unique left $G$-invariant global Keisler measure on $G$ as well as the unique right $G$-invariant global Keisler measure on $G$, \newline (iv) Moreover $\mu$ from (ii) is {\em generic} in the sense that for any definable set $X$, $\mu(X) > 0$ iff $X$ is generic. \end{fact} Remember that a definable set $X$ (or rather a formula $\phi(x,b)$ defining it) forks over a set $A$ if $\phi(x,b)$ implies a finite disjunction of formulas $\psi(x,c)$ each of which divides over $A$, and $\psi(x,c)$ is said to divide over $A$ if for some $A$-indiscernible sequence $(c_{i}:i<\omega)$ with $c_{0} = c$, $\{\psi(x,c_{i}):i<\omega\}$ is inconsistent. \begin{prop} Suppose $G$ is $fsg$ and $X\subseteq G$ a definable set. Then $X$ is generic if and only if for all $g\in G$, $g\cdot X$ does not fork over $\emptyset$ (if and only if for all $g\in G$, $X\cdot g$ does not fork over $\emptyset$). 
\end{prop} \begin{proof} Left to right: It suffices to prove that any generic definable set $X$ does not fork over $\emptyset$, and as the set of nongenerics forms an ideal it is enough to prove that any generic definable set does not divide over $\emptyset$. This is carried out in (the proof of) Proposition 5.12 of \cite{NIP2}. \noindent Right to left: Assume that $X$ is nongeneric. We will prove that for some $g\in G$, $g\cdot X$ divides over $\emptyset$ (so also forks over $\emptyset$). Let $\mu_{x}$ be the generically stable $G$-invariant global Keisler measure given by Fact 3.1. Let $M_{0}$ be a small model such that $\mu$ does not fork over $M_{0}$ (namely, as $\mu$ is generic, every generic formula does not fork over $M_{0}$) and $X$ is definable over $M_{0}$. Let $\phi(x,y)$ denote the formula defining $\{(x,y)\in G\times G: y\in x\cdot X\}$. So $\phi$ has additional (suppressed) parameters from $M_{0}$. Note that for $b\in G$, $\phi(x,b)$ defines the set $b\cdot X^{-1}$. As $X$ is nongeneric, so is $X^{-1}$, so also $b\cdot X^{-1}$ for all $b\in G$. Hence, as $\mu$ is generic, $\mu(\phi(x,b)) = 0$ for all $b$. By Proposition 2.1, for some $n$, $\mu^{(n)}(\exists y(\phi(x_{1},y)\wedge .. \wedge \phi(x_{n},y))) = 0$. Let $p$ be any weakly random type for $\mu$ (which in this case amounts to a global generic type; note that $p$ is $M_{0}$-invariant). So by Lemma 1.2 the formula $\exists y(\phi(x_{1},y)\wedge .. \wedge \phi(x_{n},y))\notin p^{(n)}$. Let $(a_{1},..,a_{n})$ realize $p^{(n)}|M_{0}$. Then $(a_{1},..,a_{n})$ extends to an $M_{0}$-indiscernible sequence $(a_{i}:i=1,2,...)$, a Morley sequence in $p$ over $M_{0}$, and $\models \neg \exists y(\phi(a_{1},y) \wedge ... \wedge \phi(a_{n},y))$. So in particular $\{\phi(a_{i},y):i=1,2,...\}$ is inconsistent. Hence the formula $\phi(a_{1},y)$ divides over $M_{0}$, so also divides over $\emptyset$. But $\phi(a_{1},y)$ defines the set $a_{1}\cdot X$, so $a_{1}\cdot X$ divides over $\emptyset$ as required. 
\end{proof} Recall that we called a global type $p(x)$ of a $\emptyset$-definable group $G$, left $f$-generic if every left $G$-translate of $p$ does not fork over $\emptyset$. We conclude the following (answering positively Problem 5.5 from \cite{NIP2} as well as strengthening Lemma 4.14 of \cite{CPI}): \begin{cor} Suppose $G$ is $fsg$ and $p(x) \in S_{G}(\mathfrak C)$. Then the following are equivalent: \newline (i) $p$ is generic, \newline (ii) $p$ is left (right) $f$-generic, \newline (iii) (Left or right) $Stab(p)$ has bounded index in $G$ (where left $Stab(p) = \{g\in G:g\cdot p = p\}$). \end{cor} \begin{proof} The equivalence of (i) and (ii) is given by Proposition 3.2 and the definitions. We know from \cite{NIP1}, Corollary 4.3, that if $p$ is generic then $Stab(p)$ is precisely $G^{00}$. Now suppose that $p$ is nongeneric. Hence there is a definable set $X\in p$ such that $X$ is nongeneric. Let $M$ be a small model over which $X$ is defined. Note that the $fsg$ property is invariant under naming parameters. Hence $G$ is $fsg$ in $Th(\mathfrak C,m)_{m\in M}$. By Proposition 3.2 (as well as what is proved in ``Right to left" there), for some $g\in G$, $g\cdot X$ divides over $M$. As $X$ is defined over $M$ this means that there is an $M$-indiscernible sequence $(g_{\alpha}:\alpha < {\bar\kappa})$ (where $\bar\kappa$ is the cardinality of the monster model) and some $n$ such that $g_{\alpha_{1}}\cdot X \cap ... \cap g_{\alpha_{n}}\cdot X = \emptyset$ whenever $\alpha_{1} < ... < \alpha_{n}$. This clearly implies that among $\{g_{\alpha}\cdot p: \alpha < \bar\kappa\}$, there are $\bar\kappa$ many types, whereby $Stab(p)$ has unbounded index. \end{proof} \end{document}
\begin{document} \maketitle \centerline{\scshape Yunfeng Liu $^{a,b}$, Zhiming Guo $^{a,*}$, Mohammad El Smaily $^{b}$ and Lin Wang $^{b}$} {\footnotesize \centerline{$^{a}$ School of Mathematics and Information Science,} \centerline{Guangzhou University} \centerline{ Guangzhou, 510006, PR China} \centerline{$^{b}$ Department of Mathematics and Statistics,} \centerline{University of New Brunswick} \centerline{ Fredericton, NB, Canada}} \begin{abstract} In this paper, we consider a Leslie-Gower predator-prey model in a one-dimensional environment. We study the asymptotic behavior of two species evolving in a domain with a free boundary. Sufficient conditions for spreading success and spreading failure are obtained. We also derive sharp criteria for spreading and vanishing of the two species. Finally, when spreading is successful, we show that the spreading speed lies between the minimal speed of traveling wavefront solutions for the predator-prey model on the whole real line (without a free boundary) and a value determined by an elliptic problem that follows from the original model. \end{abstract} \section{Introduction} A variety of models are used to describe predator-prey interactions. The dynamical relationship between a predator and a prey has long been among the dominant topics in mathematical ecology due to its universal existence and importance. Recently, many works have studied predator-prey systems with the Leslie-Gower scheme \cite{MM2003,L-G4,HH1995,L-G1,WN2016,L-G5,ZhouJun2014}. A typical Leslie-Gower predator-prey model is the following \begin{equation}\label{1.1} \left\{\begin{array}{l} \displaystyle {\frac{dN}{dt}=rN\left(1-\frac{N}{G}\right)-bNP,} \\ \displaystyle{\frac{dP}{dt}=P\left(a-\frac{cP}{N+G_{1}}\right),} \end{array}\right. \end{equation} where $N$ and $P$ denote the population densities of the prey and predator populations respectively. The parameter $r$ represents the intrinsic growth rate of the prey species and $G$ stands for its carrying capacity. 
The parameter $a$ is the growth rate of the predator, and $b$ (resp. $c$) is the maximum value which the per capita reduction rate of $N$ (resp. $P$) can attain. $G_{1}$ denotes the extent to which the environment provides protection to the predator $P$. {\em All parameters are assumed to be positive.} To capture the spatiotemporal dynamics of system \eqref{1.1}, the following reaction-diffusion system is widely used \begin{equation}\label{1.2} \left\{\begin{array}{ll} \displaystyle{\frac{\partial N}{\partial t}=d_{1}\frac{\partial^{2} N}{\partial x^{2}} + rN\left(1-\frac{N}{G}\right)-bNP,}~~& t,~x \in \mathbb{R}, \\ \displaystyle{\frac{\partial P}{\partial t}=d_{2}\frac{\partial^{2} P}{\partial x^{2}}+P\left(a-\frac{cP}{N+G_{1}}\right)},\qquad\qquad\qquad&~t,~x \in \mathbb{R}. \end{array}\right. \end{equation} By setting $$N=Gu,~P=\frac{aG}{c}\upsilon,~t=\frac{\hat{t}}{r},~x=\sqrt{\frac{d_{1}}{r}}\hat{x},$$ $$\delta=\frac{abG}{rc},~\alpha=\frac{G_{1}}{G},~\kappa=\frac{a}{r}\text{ and } D=\frac{d_{2}}{d_{1}},$$ and dropping the hat sign, \eqref{1.2} turns into the following system \begin{equation}\label{1.3} \left\{\begin{array}{ll} \displaystyle{\frac{ \partial u}{ \partial t}=u_{xx} + u(1-u)-\delta u\upsilon,} &t, x \in \mathbb{R}, \\ \displaystyle{\frac{ \partial \upsilon}{ \partial t}=D\upsilon_{xx}+\kappa\upsilon\left(1-\frac{\upsilon}{u+\alpha}\right),}\qquad\qquad\qquad&t, x \in \mathbb{R} .\ \end{array}\right. \end{equation} System \eqref{1.3} has three boundary equilibrium solutions $E_{1}=(0,0),$ $E_{2}=(0,\alpha),$ $E_{3}=(1,0)$. Moreover, if $ \delta\alpha<1,$ there exists a unique interior equilibrium solution $E_{*}=(u^{*}, \upsilon^{*}),$ where $$\displaystyle{\upsilon^{*}=\alpha+u^{*}} \text{ and }\displaystyle{u^{*}=\frac{1-\delta\alpha}{1+\delta}}.$$ Our main objective is to understand the long time behavior of a Leslie-Gower predator-prey model via a free boundary. 
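For the reader's convenience (a routine computation, not spelled out above), $E_{*}$ is obtained by setting the reaction terms in \eqref{1.3} to zero and seeking $u,\upsilon>0$:
\[
u(1-u)-\delta u\upsilon=0 \quad\text{and}\quad \kappa\upsilon\left(1-\frac{\upsilon}{u+\alpha}\right)=0
\]
give $\upsilon=u+\alpha$ and $1-u=\delta\upsilon=\delta(u+\alpha)$, so that
\[
u^{*}=\frac{1-\delta\alpha}{1+\delta},\qquad \upsilon^{*}=\alpha+u^{*},
\]
and $u^{*}>0$ precisely when $\delta\alpha<1$.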
In this paper, we consider the following model: \begin{equation}\label{FBP} \left\{\begin{array}{ll} \displaystyle{\frac{\partial u}{\partial t}=u_{xx} + u(1-u)-\delta u\upsilon,}&\text{for all } t>0\text{ and }0<x<h(t), \\ \displaystyle{\frac{\partial \upsilon}{\partial t}=D\upsilon_{xx}+\kappa\upsilon\left(1-\frac{\upsilon}{u+\alpha}\right),}&\text{for all }t>0\text{ and }0<x<h(t), \\ h'(t)=-\mu (u_{x}(t,h(t))+\rho \upsilon_{x}(t,h(t))),&\text{for all }t>0, \\ h(0)=h_{0}, \\ u_{x}(t,0)=\upsilon_{x}(t,0)=u(t,h(t))=\upsilon(t,h(t))=0,~&\text{for all }t>0, \\ u(0,x)=u_{0}(x)~\text{ and }~\upsilon(0,x)=\upsilon_{0}(x),\qquad& \text{for all }x\in[0,h_{0}], \end{array}\right. \end{equation} where $\mu$ and $\rho$ are positive parameters. The initial data $(u_{0},\upsilon_{0})$ satisfy \begin{equation}\label{1.5} \left\{\begin{array}{l} u_{0},~\upsilon_{0}\in C^{2}([0,h_{0}]), \\ u'_{0}(0)=\upsilon'_{0}(0)=u_{0}(h_{0})=\upsilon_{0}(h_{0})=0, \\ h_{0}>0,~u_{0}(x)>0 ~ \text{ and }~\upsilon_{0}(x)>0~ \text{ for all }~ x\in [0,h_{0}). \end{array}\right. \end{equation} From a biological point of view, model \eqref{FBP} describes how the two species evolve if they initially occupy the bounded region $[0, h_{0}]$. The homogeneous Neumann boundary condition at $x=0$ indicates that the left boundary is fixed, with the population confined to move only to the right of the boundary point $x=0.$ We assume that both species have a tendency to emigrate through the right boundary point to obtain their new habitat: the free boundary $x=h(t)$ represents the spreading front. Moreover, it is assumed that the expanding speed of the free boundary is proportional to the normalized population gradient at the free boundary. This is the well-known Stefan condition. Many previous works study free boundary problems in predator-prey models. We refer the reader, for instance, to \cite{ziyou1,ziyou2,ziyou3,ziyou4} and the references cited therein. 
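To make the Stefan condition concrete, here is a minimal numerical sketch (ours, not from the paper) for a single-species analogue of \eqref{FBP}: one explicit finite-difference step for $u_t=u_{xx}+u(1-u)$ on $[0,h(t)]$ with $u_x(t,0)=u(t,h(t))=0$, where the front moves by $h'(t)=-\mu u_x(t,h(t))$. The grid is simply stretched with the growing domain, so this illustrates the mechanism rather than giving a convergent scheme for the coupled model.

```python
import numpy as np

def step(u, h, dt, mu=1.0):
    """One explicit step of u_t = u_xx + u(1-u) on [0, h] with a Stefan front.

    Boundary conditions: Neumann u_x = 0 at x = 0, Dirichlet u = 0 at x = h.
    The front obeys h'(t) = -mu * u_x(t, h(t)), a scalar analogue of the
    two-species Stefan condition in the model above.
    """
    n = len(u) - 1
    dx = h / n
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2      # ghost point enforcing u_x(0) = 0
    lap[-1] = 0.0                             # boundary value is reset below
    u_new = u + dt * (lap + u * (1.0 - u))
    u_new[-1] = 0.0                           # u = 0 on the free boundary
    grad = (u[-1] - u[-2]) / dx               # one-sided u_x at x = h (<= 0)
    h_new = h + dt * (-mu * grad)             # Stefan condition: front advances
    # note: the grid count is fixed, so the mesh stretches with the domain
    return u_new, h_new

# initial habitat [0, 1] with positive, compatible initial data
h = 1.0
x = np.linspace(0.0, h, 51)
u = np.cos(np.pi * x / 2.0)                   # u'(0) = 0, u(1) = 0, u > 0 inside
for _ in range(200):                          # dt/dx^2 = 0.25: stable explicit step
    u, h = step(u, h, dt=1e-4)
```

Because $u>0$ inside the habitat and $u=0$ at the front, the one-sided gradient is negative, so $h(t)$ is strictly increasing, mirroring the a priori bound $0<h'(t)\leq\Lambda$ established below.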
Throughout this paper, we work under the following assumption $$\text{\bf{(H1)}}:\quad \delta\alpha+\delta<1.$$ \paragraph*{Organization of the paper.} In Section 2, we use a contraction mapping argument to prove the local existence and uniqueness of the solution to \eqref{FBP}, then make use of suitable estimates on the solution to show that it exists for all time $t>0$. In Section 3, we derive several lemmas which will be used later. Section \ref{dichotomy} is devoted to the long time behavior of $(u,\upsilon)$: we prove a spreading-vanishing dichotomy and derive criteria for spreading and vanishing. We estimate the spreading speed in Section \ref{spreading.estimate} and then summarize through a brief discussion in Section \ref{discuss}. \section{Existence and uniqueness of solutions} In this section, we first state a result about the local existence and uniqueness of a solution to \eqref{FBP} in Lemma \ref{local.exist}. Then we derive a priori estimates (Lemma \ref{estimates}) in order to justify that the solution is defined for all time $t>0$. The global existence of a solution to the system \eqref{FBP} is stated in Theorem \ref{th5.1}. \begin{lemma}\label{local.exist} Assume that $(u_{0},\upsilon_{0})$ satisfies condition \eqref{1.5}. Then for any $\theta\in (0,1),$ there is a $T> 0$ such that the problem~(\ref{FBP}) admits a unique solution $(u(t,x), \upsilon(t,x), h(t)),$ which satisfies $$(u,\upsilon,h) \in C^{\frac{(1+\theta)}{2},1+\theta}(Q_{T})\times C^{\frac{(1+\theta)}{2},1+\theta}(Q_{T})\times C^{1+\frac{\theta}{2}}([0,T]),$$ where $Q_{T}=\{(t,x)\in \mathbb{R}^{2}:~t\in[0,T],~x\in[0,h(t)]\}$. \end{lemma} The proof of Lemma \ref{local.exist} will be postponed to Section \ref{exist.}. \begin{lemma}\label{estimates} Let $(u,\upsilon,h(t))$ be a solution of \eqref{FBP} for $t\in [0,T]$ \text{for some} $T>0$. 
Then \begin{equation}\label{5.6} 0<u(t,x)\leq \max\{1, \|u_{0}\|_{\infty}\}:=M_{1}~\text{ for }~t\in [0,T] \text{ and }x\in [0,h(t)), \end{equation} \begin{equation}\label{5.7} 0< \upsilon(t,x)\leq \max\{M_{1}+\alpha, \|\upsilon_{0}\|_{\infty}\}:=M_{2}~\text{ for }~t\in [0,T]\text{ and } x\in [0,h(t)), \end{equation} \begin{equation}\label{5.8} 0< h'(t)\leq \Lambda~\text{ for all }~t\in (0,T], \end{equation} where $\Lambda> 0$ depends on $\mu,$ $\rho,$ $D,$ $\kappa,$ $\|u_{0}\|_{\infty},$ $\|\upsilon_{0}\|_{\infty},$ $\|u'_{0}\|_{C([0,h_{0}])}$ and $\|\upsilon'_{0}\|_{C([0,h_{0}])}$. \end{lemma} The proof of Lemma \ref{estimates} will be postponed to Section \ref{exist.} as well. \begin{theorem}\label{th5.1} Assume that $(u_{0},\upsilon_{0})$ satisfies condition \eqref{1.5}. Then for any $\theta\in (0,1),$ the problem~(\ref{FBP}) admits a unique solution $(u(t,x), \upsilon(t,x), h(t)),$ which satisfies $$(u,\upsilon,h) \in C^{\frac{(1+\theta)}{2},1+\theta}(Q)\times C^{\frac{(1+\theta)}{2},1+\theta}(Q)\times C^{1+\frac{\theta}{2}}([0,+\infty)),$$ where \[Q=\{(t,x)\in \mathbb{R}^{2}: t\in[0,+\infty),~x\in[0,h(t)]\}.\] \end{theorem} \begin{proof}[On the proof of Theorem \ref{th5.1}.] We only give a brief sketch of the proof here since it is similar to the arguments in \cite{dushou} and \cite{dusupandinf2014}: the global existence of the solution to problem \eqref{FBP} follows from the uniqueness of the local solution, Zorn's lemma and the uniform estimates of $u,$ $\upsilon$ and $h'(t)$ obtained in Lemma \ref{estimates} above.\end{proof} \section{Known results from prior works} In this section, we recall from prior works some important results that will be used repeatedly in our arguments. We start with some results regarding the stationary state(s) of the model \begin{equation}\label{2.1} \left\{\begin{array}{ll} \displaystyle{\frac{\partial u}{\partial t}=d u_{xx}+a u(1-bu),}\qquad &(t,x)\in (0, \infty)\times (0, L), \\ u_{x}(t,0)=u(t,L)=0, & t>0. \end{array}\right. 
\end{equation} The stationary state will be determined via the eigenvalue problem \begin{equation}\label{2.11} \left\{\begin{array}{ll} d\phi_{xx}+a\phi=\sigma\phi, \qquad &0< x< L, \\ \phi_{x}(0)=\phi(L)=0& \end{array}\right. \end{equation} as well as the spatial domain's size. The following lemma summarizes the result. \begin{lemma}[\cite{Cantrell2003} and \cite{zhouxiao}]\label{le2.1} Let $L^{*}=\displaystyle{\frac{\pi}{2}\sqrt{\frac{d}{a}}}$ and $\displaystyle{d^{*}=\frac{4aL^{2}}{\pi^{2}}.}$ Then we have: \begin{enumerate}[\bf (i)] \item If $L\leq L^{*}$, all positive solutions of \eqref{2.1} tend to zero in $C([0,L])$ as $t\rightarrow +\infty$.\\ \item If $L> L^{*}$, then \eqref{2.1} has a minimal positive equilibrium $\phi,$ and all positive solutions to \eqref{2.1} approach $\phi$ in $C([0,L])$ as $t\rightarrow +\infty$.\\ \item If $0< d < d^{*},$ the principal eigenvalue of \eqref{2.11} is positive ($\sigma_{1}> 0$). If $d=d^{*}$ then $\sigma_{1}=0,$ and if $d > d^{*}$ then $\sigma_{1}< 0$. \end{enumerate} \end{lemma} For a detailed proof of (i) and (ii) one can refer to Propositions 3.1 and 3.2 of \cite{Cantrell2003}. The result in (iii) is obtained through a simple computation and can be found in the proof of Corollary 3.1 in \cite{zhouxiao}. Now, we state a comparison principle that we will use in proving the results of Section \ref{dichotomy} below. This comparison principle is extracted from Lemma 4.1 and Lemma 4.2 of \cite{w3} with minor modifications. 
\begin{lemma}\label{le2.2} Let $\bar{h}$ and $\underline{h}$ be two positive $C^{1}([0,+\infty))$ functions {\rm (}$\bar{h}, \underline{h}>0$ in $[0,+\infty)${\rm ).} Define \[\Omega=\left\{(t,x):t>0,x\in[0,\bar{h}(t)]\right\}\] and \[\Omega_{1}=\{(t,x):t>0,x\in[0,\underline{h}(t)]\}.\] Let $\bar{u}, \bar{\upsilon}\in C(\bar{\Omega})\cap C^{1,2}(\Omega)$ and $\underline{u},\underline{\upsilon}\in C(\bar{\Omega}_{1})\cap C^{1,2}(\Omega_{1}).$ Assume that \[0<\bar{u},~\underline{u}\leq M_{1}\text{ and }0<\bar{\upsilon},\underline{\upsilon}\leq M_{2}\] and that $(\bar{u},\bar{\upsilon},\bar{h})$ satisfies \begin{equation}\label{2.2} \left\{\begin{array}{ll} \bar{u}_{t}-\bar{u}_{xx}\geq\bar{u}(1-\bar{u}),~\qquad &t>0,0<x<\bar{h}(t), \\ \bar{\upsilon}_{t}-D\bar{\upsilon}_{xx}\geq \kappa\bar{\upsilon}\left(1-\frac{\bar{\upsilon}}{M_{1}+\alpha}\right),~&t>0,0<x<\bar{h}(t), \\ \bar{u}_{x}(t,0)\leq 0,\bar{\upsilon}_{x}(t,0)\leq 0,~&t>0, \\ \bar{u}(t,\bar{h}(t))=\bar{\upsilon}(t,\bar{h}(t))=0,~&t>0, \\ \bar{h}'(t)\geq -\mu(\bar{u}_{x}(t,\bar{h}(t))+\rho\bar{\upsilon}_{x}(t,\bar{h}(t))),~&t>0, \end{array}\right. \end{equation} and the couple $(\underline{u}, \underline{h})$ satisfies \begin{equation}\label{2.20} \left\{\begin{array}{ll} \underline{u}_{t}-\underline{u}_{xx}\leq\underline{u}(1-\delta M_{2}-\underline{u}), ~~\qquad \qquad &t>0,0<x<\underline{h}(t),\\ \underline{u}_{x}(t,0)\geq 0,~&t>0, \\ \underline{u}(t,\underline{h}(t))=0,~&t>0, \\ \underline{h}'(t)\leq-\mu\underline{u}_{x}(t,\underline{h}(t)),~&t>0 \end{array}\right. 
\end{equation} and the couple $(\underline{\upsilon},\underline{h})$ satisfies \begin{equation}\label{2.21} \left\{\begin{array}{ll} \underline{\upsilon}_{t}-D\underline{\upsilon}_{xx}\leq \kappa\underline{\upsilon}(1-\frac{\underline{\upsilon}}{\alpha}),~\qquad \qquad\qquad &t>0,\quad 0<x<\underline{h}(t), \\ \underline{\upsilon}_{x}(t,0)\geq 0,~&t>0, \\ \underline{\upsilon}(t,\underline{h}(t))=0,~&t>0, \\ \underline{h}'(t)\leq-\mu\rho\underline{\upsilon}_{x}(t,\underline{h}(t)),~&t>0. \end{array}\right. \end{equation} Assume that the initial data of \eqref{2.2} satisfy \[\bar{h}(0)\geq h_{0},~ \bar{u}(0,x), \bar{\upsilon}(0,x)\geq 0\text{ on }[0,\bar{h}(0)]\] and \[\bar{u}(0,x)\geq u_{0}(x)\text{ and }\bar{\upsilon}(0,x)\geq\upsilon_{0}(x)\text{ on }[0,h_{0}],\] and the initial data of \eqref{2.20} and \eqref{2.21} satisfy \[\underline{h}(0)\leq h_{0}, ~0<\underline{u}(0,x)\leq u_{0}(x)\text{ and } 0<\underline{\upsilon}(0,x)\leq \upsilon_{0}(x)\text{ on }[0,\underline{h}(0)].\] Then, the solution $(u,\upsilon,h)$ of \eqref{FBP} satisfies \[\underline{h}(t)\leq h(t)\leq \bar{h}(t)\text{ on }[0,+\infty),\] \[u\leq\bar{u}\text{ and }\upsilon\leq\bar{\upsilon}\text{ for all }t\geq 0\text{ and } 0\leq x\leq h(t),\] and \[u\geq\underline{u}\text{ and }\upsilon\geq\underline{\upsilon}\text{ for all }t\geq 0 \text{ and }0\leq x\leq \underline{h}(t).\] \end{lemma} \noindent The proof of Lemma \ref{le2.2} is very similar to the proofs of Lemma 5.1 of \cite{b} and Lemmas 4.1 and 4.2 of \cite{w3}; we therefore omit the details here. In order to discuss the spreading of the species, we will use Lemmas A.2 and A.3 of \cite{wx1} and Proposition 8.1 of \cite{w2}. We restate these results here for the reader's convenience.
\begin{lemma}\label{le2.3} Let $M\geq 0.$ For any given $\varepsilon>0$ and $l_{\varepsilon}>0,$ there exists $\displaystyle{l>\max\left\{l_{\varepsilon}, \frac{\pi}{2}\sqrt{\frac{d}{a}}\right\}}$ such that, if the continuous and non-negative function $U(t,x)$ satisfies \begin{equation}\label{2.3} \left\{\begin{array}{ll} U_{t}-dU_{xx}\geq U(a-bU), \qquad &t>0, 0<x<l, \\ U_{x}(t,0)=0,\quad U(t,l)\geq M,& t>0, \end{array}\right. \end{equation} and if $U(0,x)>0$ in $[0,l),$ then $$\displaystyle{\liminf_{t\rightarrow+\infty}U(t,x)>\frac{a}{b}- \varepsilon\text{ uniformly on } [0,l_{\varepsilon}].}$$ \end{lemma} \begin{lemma}\label{le2.4} Let $M$ be a nonnegative constant. For any given $\varepsilon>0$ and $l_{\varepsilon}>0,$ there exists $\displaystyle{l>\max\left\{l_{\varepsilon}, \frac{\pi}{2}\sqrt{\frac{d}{a}}\right\}}$ such that, if the continuous and non-negative function $V(t,x)$ satisfies \begin{equation}\label{2.4} \left\{\begin{array}{ll} V_{t}-dV_{xx}\leq V(a-bV), \qquad &t>0, 0<x<l, \\ V_{x}(t,0)=0,V(t,l)\leq M, &t>0, \end{array}\right. \end{equation} and if $V(0,x)>0$ in $[0,l),$ then $$ \displaystyle{\limsup_{t\rightarrow+\infty}V(t,x)<\frac{a}{b}+\varepsilon ~\text{uniformly~on}~[0,l_{\varepsilon}].}$$ \end{lemma} In contrast, in order to discuss the vanishing case of the species, we will use the following lemma, which is Proposition 3.1 of \cite{w3}. \begin{lemma}[Proposition 3.1 in \cite{w3}]\label{le2.5} Let $d$ and $s_{0}$ be positive constants and let $a\in\mathbb{R}$.
Assume that $\omega_{0}\in C^{2}([0,s_{0}])$ satisfies \[\omega_{0}'(0)=0,~\omega_{0}(s_{0})=0\text{ and }\omega_{0}(x)>0 \text{ for all }x\in(0,s_{0}).\] Let $s\in C^{1+\frac{\theta}{2}}([0,+\infty))$ and $\omega\in C^{\frac{1+\theta}{2},1+\theta}([0,\infty)\times[0,s(t)]),$ for some $\theta > 0.$ Assume that $s(t)>0$ and $\omega(t,x)>0$ for all $0\leq t<\infty$ and $0<x<s(t).$ We further assume that \[\displaystyle{\lim_{t\rightarrow+\infty}s(t)=s_{\infty}<+\infty}, \quad \displaystyle{\lim_{t\rightarrow+\infty}s'(t)=0} \text{ and }\|\omega(t,\cdot)\|_{C^{1}[0,s(t)]}\leq \widetilde{M} \text{ for all }t>1,\] for some constant $\widetilde{M}>0$. If the functions $\omega$ and $s$ satisfy \begin{equation}\label{2.5} \left\{\begin{array}{ll} \omega_{t}-d\omega_{xx} \geq \omega(a-\omega),\qquad\qquad\qquad& t>0\text{ and } 0<x<s(t),\\ s'(t)\geq-\mu \omega_{x}(t,s(t)), & t>0, \\ s(0)=s_{0},\\ \omega_{x}(t,0)=0,\quad \omega(t,s(t))=0, & t>0,\\ \omega(0,x)=\omega_{0}(x), & x\in[0,s_{0}], \end{array}\right. \end{equation} then $$\lim_{t\rightarrow+\infty}\|\omega(t,\cdot)\|_{C[0,s(t)]}=0.$$ \end{lemma} To discuss the asymptotic behaviors of $u$ and $\upsilon$ in the vanishing case, we need the following lemma. \begin{lemma}\label{le2.8} Let $(u, \upsilon, h(t))$ be the solution of \eqref{FBP} and recall that $\displaystyle{h_{\infty}=\lim_{t\rightarrow +\infty}h(t)}.$ If $h_{\infty}<\infty,$ then there exists a constant $M>0$ such that $\|u(t,\cdot)\|_{C^{1}[0,h(t)]}\leq M$ and $\|\upsilon(t,\cdot)\|_{C^{1}[0,h(t)]}\leq M$ for all $t>0$. Moreover, $\displaystyle{\lim_{t\rightarrow +\infty}h'(t)=0}$. \end{lemma} We skip the proof of the above lemma since it is similar to that of Theorem 4.1 in \cite{w2}. Furthermore, we need the following lemma, which appears in \cite{b} and \cite{w3} (pages 893 and 3388, respectively).
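For the reader's convenience, we also recall the standard traveling-wave computation on which the next lemma is based: if $u(t,x)=U(\xi)$ with $\xi=x-st,$ then \[u_{t}=-sU'(\xi)\qquad\text{and}\qquad u_{xx}=U''(\xi),\] so the first equation of \eqref{2.6} becomes $-sU'=U''+U(1-U),$ that is, $sU'+U''+U(1-U)=0;$ the same computation, with diffusion coefficient $D$ and reaction term $\kappa V\left(1-\frac{V}{M_{1}+\alpha}\right),$ yields the second equation of \eqref{2.7}.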
\begin{lemma}\label{le2.6} Consider the following problem \begin{equation}\label{2.6} \left\{\begin{array}{ll} \displaystyle{\frac{\partial u}{ \partial t}=u_{xx} + u(1-u),}~\qquad \qquad \qquad&t>0, x>0, \\ \displaystyle{\frac{\partial \upsilon}{\partial t}=D\upsilon_{xx}+\kappa\upsilon\left(1-\frac{\upsilon}{M_{1}+\alpha}\right),}~&t>0, x>0. \end{array}\right. \end{equation} Assume that $u(t,x)=U(\xi)$ and $\upsilon(t,x)=V(\xi),$ where $\xi=x-st$. Then \eqref{2.6} is equivalent to \begin{equation}\label{2.7} \left\{\begin{array}{ll} sU'+U''+U(1-U)=0,~\qquad&\xi \in \mathbb{R}, \\ sV'+DV''+\kappa V\left(1-\frac{V}{M_{1}+\alpha}\right)=0,~~\qquad&\xi \in \mathbb{R}. \end{array}\right. \end{equation} If $s\geq s_{\rm min}=2\max \{1,\sqrt{D \kappa} \},$ then problem \eqref{2.7} admits a solution $(U,V)$ which satisfies the conditions \begin{equation}\label{limiting.cond} \begin{array}{l} U(-\infty)=1,~V(-\infty)=M_{1}+\alpha, \quad U(+\infty)=V(+\infty)=0, \\ U'(\xi)<0\text{ and }V'(\xi)< 0~\text{ for all }~\xi \in \mathbb{R}. \end{array} \end{equation} \end{lemma} The following lemma will be used to give a lower estimate of the ``asymptotic spreading speed'' (when spreading occurs). The notions of spreading and spreading speed will become clearer later on. Before we state the needed lemma, let us first consider the following problem, which is related to the original problem \eqref{FBP} and leads to problem \eqref{2.77}, the subject of Lemma \ref{le2.7}: \begin{equation}\label{2.771} \left\{\begin{array}{ll} \displaystyle{\partial_{t}\underline{\upsilon}-D\partial_{xx}\underline{\upsilon}= \kappa\underline{\upsilon}\left(1-\frac{\underline{\upsilon}}{\alpha}\right),}~~\qquad\qquad\qquad &t>0,0<x<\underline{h}(t), \\ \partial_{x}\underline{\upsilon}(t,0)= 0,~&t>0, \\ \underline{\upsilon}(t,\underline{h}(t))=0,~&t>0, \\ \underline{h}'(t)=-\mu\rho\,\partial_{x}\underline{\upsilon}(t,\underline{h}(t)),~&t>0. \end{array}\right.
\end{equation} We assume that $(\underline{\upsilon},\underline{h})$ is the unique solution of \eqref{2.771} and $\underline{h}(t)\rightarrow +\infty$ as $t\rightarrow +\infty$. Setting $$\omega(t,x)=\underline{\upsilon}(t, \underline{h}(t)-x),$$ we then obtain \begin{equation}\label{2.772} \left\{\begin{array}{ll} \displaystyle{\omega_{t}-D\omega_{xx}+\underline{h}'(t) \omega_{x}= \kappa\omega\left(1-\frac{\omega}{\alpha}\right),} &\text{for all }t>0\text{ and }0<x<\underline{h}(t), \\ \omega_{x}(t,\underline{h}(t))= 0,~&t>0, \\ \omega(t,0)=0,&t>0, \\ \underline{h}'(t)=\mu\rho\,\omega_{x}(t,0),~&t>0. \end{array}\right. \end{equation} Since $\displaystyle{\lim_{t\rightarrow +\infty}\underline{h}(t)=+\infty},$ if $\underline{h}'(t)$ approaches a constant $s_{*}$ and $\omega(t,x)$ approaches a positive function $V(x)$ as $t\rightarrow+\infty,$ then $V(x)$ must be a positive solution of \eqref{2.77} with $s_{*}=\mu\rho V'(0)$. We now state the lemma. \begin{lemma}[Proposition 4.1 in \cite{dushou}]\label{le2.7} For any $s\geq 0,$ the following problem \begin{equation}\label{2.77} \left\{\begin{array}{ll} \displaystyle{sV'-DV''-\kappa V\left(1-\frac{V}{\alpha}\right)=0,}& x>0 ,\\ V(0)=0, \end{array}\right. \end{equation} admits a unique positive solution $V=V_{s}.$ Furthermore, for each $\mu,\rho > 0,$ there exists a unique $s_{*}$ such that $\mu\rho V'_{s_{*}}(0)=s_{*}$. \end{lemma} \section{The spreading-vanishing dichotomy}\label{dichotomy} We have seen in Lemma \ref{estimates} that $h'(t)>0$ for all $t>0.$ This allows us to define \begin{equation}\label{define.hinfty}h_{\infty}:=\lim_{t\rightarrow+\infty}h(t)\text{ in }[0,+\infty)\cup \{\infty\}.\end{equation} We can then define the notions of spreading and vanishing as follows.
\begin{definition}\label{terminology}We say that the two species $u$ and $\upsilon$ {\em vanish eventually} if $h_{\infty}< \infty$ and \[\lim_{t\rightarrow+\infty}\|u(t,\cdot)\|_{C([0,h(t)])} =\lim_{t\rightarrow+\infty}\|\upsilon(t,\cdot)\|_{C([0,h(t)])}=0.\] We say that the two species $u$ and $\upsilon$ {\em spread successfully} if \[h_{\infty}=+\infty,~\displaystyle{\liminf_{t\rightarrow+\infty}u(t,x)>0}\text{ and }\displaystyle{\liminf_{t\rightarrow+\infty}\upsilon(t,x)>0}\] uniformly in any compact subset of $[0,+\infty)$.\end{definition} \subsection{The Spreading Case} The following theorem shows that $h_{\infty}=+\infty$ is sufficient for successful spreading: \begin{theorem}\label{th3.1} Suppose that $(u,\upsilon,h(t))$ is the solution of \eqref{FBP}. If $h_{\infty}=+\infty,$ then we have $$\lim_{t\rightarrow+\infty}u(t,x)=u^{*}~\text{ and }~\lim_{t\rightarrow+\infty}\upsilon(t,x)=\upsilon^{*}.$$ \end{theorem} \begin{proof}We will divide the proof of this theorem into two steps. {\bf Step 1.} Since $h_{\infty}=+\infty,$ for any given $\varepsilon>0$ and $l_\varepsilon>0$ there exist $\displaystyle{l_1>\max \left\{l_\varepsilon,\frac{\pi}{2}\right\}}$ and $T_1>0$ such that $h(t)>l_1$ when $t>T_1$; then $u$ satisfies \begin{equation}\label{3.1} \left\{\begin{array}{ll} u_{t}-u_{xx}\leq u(1-u),&t>T_{1},~0<x<l_1, \\ u_{x}(t,0)=0,~u(t,l_1)\leq M,&t>T_{1}, \\ u(T_{1},x)>0,&x\in [0,l_1), \end{array}\right. \end{equation} where $M=\max\{M_{1},M_{2}\}$ (the constants appearing in \eqref{5.6} and \eqref{5.7}). Applying Lemma \ref{le2.4}, we obtain that \[\displaystyle{\limsup_{t\rightarrow +\infty}u(t,x)< 1+\varepsilon}\text{ uniformly in $[0,l_\varepsilon]$}.\] Since $\varepsilon$ and $l_\varepsilon$ are arbitrary, $\displaystyle{\limsup_{t\rightarrow +\infty}u(t,x)\leq 1=:\bar{u}_{1}}$ uniformly on $[0,+\infty)$.
Now let $\displaystyle{l_2>\max\left\{l_\varepsilon,\frac{\pi}{2}\sqrt{\frac{D}{\kappa}}\right\}}.$ In view of the last conclusion, there exists $T_{2}>T_{1}$ such that $u(t,x)<\bar{u}_{1}+\varepsilon$ when $t> T_{2}$ and $0< x <l_2.$ Then $\upsilon$ satisfies \begin{equation}\label{3.2} \left\{\begin{array}{ll} \displaystyle{\upsilon_{t}-D\upsilon_{xx}\leq \kappa\upsilon\left(1-\frac{\upsilon}{\bar{u}_{1}+\varepsilon+\alpha}\right)},& t>T_{2},~0<x<l_2, \\ \upsilon_{x}(t,0)=0\text{ and }\upsilon(t,l_2)\leq M,& t>T_{2}, \\ \upsilon(T_{2},x)>0, & x\in [0,l_2). \end{array}\right. \end{equation} Applying Lemma \ref{le2.4} again, we get $\displaystyle{\limsup_{t\rightarrow +\infty}\upsilon(t,x)<\bar{u}_{1} +\alpha+\varepsilon}$ uniformly on $[0,l_\varepsilon].$ The arbitrariness of $\varepsilon$ and $l_\varepsilon$ allows us to conclude that $\displaystyle{\limsup_{t\rightarrow +\infty}\upsilon(t,x)\leq \bar{u}_{1}+\alpha=:\bar{\upsilon}_{1}},$ uniformly on $[0,+\infty)$. Let $\displaystyle{l_3>\max \left\{l_\varepsilon,\frac{\pi}{2}\right\}}.$ From the above conclusion, we know that there exists $T_{3}>T_{2}$ such that $\upsilon(t,x)<\bar{\upsilon}_{1}+\varepsilon$ and $u(t,x)>0$ whenever $t> T_{3}$ and $0< x <l_3.$ Then $u$ satisfies \begin{equation}\label{3.3} \left\{\begin{array}{ll} u_{t}-u_{xx}\geq u(1-u)-\delta u(\bar{\upsilon}_{1}+\varepsilon), & t>T_{3},~0<x<l_3, \\ u_{x}(t,0)=0,u(t,l_3)\geq 0, & t>T_{3}, \\ u(T_{3},x)>0,~&x\in [0,l_3). \end{array}\right. \end{equation} By Lemma \ref{le2.3}, we get $\displaystyle{\liminf_{t\rightarrow+\infty}u(t,x)>1-\delta\bar{\upsilon}_{1}-\varepsilon}$ uniformly on $[0,l_\varepsilon]$. Again using the arbitrariness of $\varepsilon$ and $l_\varepsilon,$ it follows that $\displaystyle{\liminf_{t\rightarrow+\infty}u(t,x)\geq 1-\delta\bar{\upsilon}_{1}=:\underline{u}_{1}}$ uniformly on $[0,+\infty),$ where $\underline{u}_{1}>0$ because of the hypothesis {\bf (H1)}. Let $\displaystyle{l_4>\max\left\{l_\varepsilon,\frac{\pi}{2}\sqrt{\frac{D}{\kappa}}\right\}}$.
In view of the above result, there exists $T_4>T_{3}$ such that $u(t,x)>\underline{u}_{1}-\varepsilon$ whenever $t> T_{4}$ and $0< x <l_4.$ Then $\upsilon$ satisfies \begin{equation}\label{3.4} \left\{\begin{array}{ll} \displaystyle{\upsilon_{t}-D\upsilon_{xx}\geq \kappa\upsilon\left(1-\frac{\upsilon}{\underline{u}_{1}-\varepsilon+\alpha}\right),}&t>T_{4},~0<x<l_4, \\ \upsilon_{x}(t,0)=0,\upsilon(t,l_4)\geq 0, &t>T_{4}, \\ \upsilon(T_{4},x)>0, &x\in [0,l_4). \end{array}\right. \end{equation} Applying Lemma \ref{le2.3}, we have $\displaystyle{\liminf_{t\rightarrow+\infty}\upsilon(t,x)>\underline{u}_{1}+\alpha-\varepsilon}$ uniformly on $[0,l_\varepsilon],$ and consequently (as $\varepsilon$ and $l_\varepsilon$ are arbitrary) we obtain $\displaystyle{\liminf_{t\rightarrow+\infty}\upsilon(t,x)\geq\underline{u}_{1}+\alpha=:\underline{\upsilon}_{1}}$ uniformly on $[0,+\infty)$. Now we construct $\bar{u}_{2}$. Let $\displaystyle{l_5>\max \left\{l_\varepsilon,\frac{\pi}{2}\right\}}$. By the above conclusion, there exists $T_{5}>T_{4}$ such that $\upsilon(t,x)>\underline{\upsilon}_{1}-\varepsilon$ when $t> T_{5}$ and $0< x <l_5$; then $u$ satisfies \begin{equation}\label{3.1aa} \left\{\begin{array}{ll} u_{t}-u_{xx}\leq u(1-u)-\delta u(\underline{\upsilon}_{1}-\varepsilon),~~~~~~~~ & t>T_{5},~0<x<l_5,\\ u_{x}(t,0)=0,u(t,l_5)\leq M, & t>T_{5},\\ u(T_{5},x)>0,~&x\in [0,l_5). \end{array}\right. \end{equation} By Lemma \ref{le2.4}, we have $\displaystyle{\limsup_{t\rightarrow+\infty}u(t,x)<1-\delta(\underline{\upsilon}_{1}-\varepsilon)+\varepsilon}$ uniformly on $[0,l_\varepsilon]$. Again using the arbitrariness of $\varepsilon$ and $l_\varepsilon,$ it follows that $\displaystyle{\limsup_{t\rightarrow+\infty}u(t,x)\leq 1-\delta\underline{\upsilon}_{1}=:\bar{u}_{2}>0}$ uniformly on $[0,+\infty)$. For the construction of $\bar{\upsilon}_{2},$ let $\displaystyle{l_6>\max\left\{l_\varepsilon,\frac{\pi}{2}\sqrt{\frac{D}{\kappa}}\right\}}$.
In view of the above result, there exists $T_{6}>T_{5}$ such that $u(t,x)<\bar{u}_{2}+\varepsilon$ when $t> T_{6}$ and $0< x <l_6$; then $\upsilon$ satisfies \begin{equation}\label{3.2aa} \left\{\begin{array}{ll} \displaystyle{\upsilon_{t}-D\upsilon_{xx}\leq \kappa\upsilon\left(1-\frac{\upsilon}{\bar{u}_{2}+\varepsilon+\alpha}\right),}~~~~~~~& t>T_{6},~0<x<l_6,\\ \upsilon_{x}(t,0)=0,\upsilon(t,l_6)\leq M,& t>T_{6},\\ \upsilon(T_{6},x)>0, & x\in [0,l_6). \end{array}\right. \end{equation} Applying Lemma \ref{le2.4}, we have $\displaystyle{\limsup_{t\rightarrow +\infty}\upsilon(t,x)<\bar{u}_{2} +\alpha+\varepsilon}$ uniformly on $[0,l_\varepsilon]$. Considering the arbitrariness of $\varepsilon$ and $l_\varepsilon$, we then have $\displaystyle{\limsup_{t\rightarrow +\infty}\upsilon(t,x)\leq \bar{u}_{2}+\alpha=:\bar{\upsilon}_{2}}$ uniformly on $[0,+\infty)$. Furthermore, let $\displaystyle{l_7>\max \left\{l_\varepsilon,\frac{\pi}{2}\right\}}$. By the above conclusion, there exists $T_{7}>T_{6}$ such that $\upsilon(t,x)<\bar{\upsilon}_{2}+\varepsilon$ and $u(t,x)>0$ when $t> T_{7}$ and $0< x <l_7$; then $u$ satisfies \begin{equation}\label{3.3aa} \left\{\begin{array}{ll} u_{t}-u_{xx}\geq u(1-u)-\delta u(\bar{\upsilon}_{2}+\varepsilon),~~~~~~~~ & t>T_{7},~0<x<l_7,\\ u_{x}(t,0)=0,u(t,l_7)\geq 0, & t>T_{7},\\ u(T_{7},x)>0,~&x\in [0,l_7). \end{array}\right. \end{equation} By Lemma \ref{le2.3}, we have $\displaystyle{\liminf_{t\rightarrow+\infty}u(t,x)>1-\delta\bar{\upsilon}_{2}-\varepsilon}$ uniformly on $[0,l_\varepsilon]$. Again using the arbitrariness of $\varepsilon$ and $l_\varepsilon$, it follows that $\displaystyle{\liminf_{t\rightarrow+\infty}u(t,x)\geq 1-\delta\bar{\upsilon}_{2}=:\underline{u}_{2}}$ uniformly on $[0,+\infty)$.
In order to sharpen the upper and lower bounds above, we continue with the same approach. Let $\displaystyle{l_8>\max\left\{l_\varepsilon,\frac{\pi}{2}\sqrt{\frac{D}{\kappa}}\right\}}$. In view of the above result, there exists $T_8>T_{7}$ such that $u(t,x)>\underline{u}_{2}-\varepsilon$ when $t> T_{8}$ and $0< x <l_8$; then $\upsilon$ satisfies \begin{equation}\label{3.4aa} \left\{\begin{array}{ll} \displaystyle{\upsilon_{t}-D\upsilon_{xx}\geq \kappa\upsilon\left(1-\frac{\upsilon}{\underline{u}_{2}-\varepsilon+\alpha}\right),}~~~~~~~~~~&t>T_{8},~0<x<l_8,\\ \upsilon_{x}(t,0)=0,\upsilon(t,l_8)\geq 0, &t>T_{8},\\ \upsilon(T_{8},x)>0, &x\in [0,l_8). \end{array}\right. \end{equation} Applying Lemma \ref{le2.3}, we have $\displaystyle{\liminf_{t\rightarrow+\infty}\upsilon(t,x)>\underline{u}_{2}+\alpha-\varepsilon}$ uniformly on $[0,l_\varepsilon]$. Because of the arbitrariness of $\varepsilon$ and $l_\varepsilon$, this implies that $\displaystyle{\liminf_{t\rightarrow+\infty}\upsilon(t,x)\geq\underline{u}_{2}+\alpha=:\underline{\upsilon}_{2}}$ uniformly on $[0,+\infty)$. {\bf Step 2.} We can continue the above strategy to obtain the following sequences, whose monotonicity is a straightforward consequence of the construction: $$\displaystyle{\underline{u}_{1}\leq\ldots\leq\underline{u}_{i}\leq\ldots\leq \liminf_{t\rightarrow+\infty}u(t,x) \leq\limsup_{t\rightarrow+\infty}u(t,x)\leq\ldots\leq\bar{u}_{i} \leq\ldots\leq\bar{u}_{1},}$$ $$\displaystyle{\underline{\upsilon}_{1}\leq\ldots\leq\underline{\upsilon}_{i} \leq\ldots\leq\liminf_{t\rightarrow+\infty}\upsilon(t,x) \leq\limsup_{t\rightarrow+\infty}\upsilon(t,x) \leq\ldots\leq\bar{\upsilon}_{i}\leq\ldots\leq\bar{\upsilon}_{1},}$$ where $\bar{u}_{1}=1,$ $\displaystyle{\underline{u}_{i}=1-\delta\bar{\upsilon}_{i}}$ for $i\geq 1,$ $\displaystyle{\bar{u}_{i}=1-\delta\underline{\upsilon}_{i-1}}$ for $i\geq 2,$ and $\displaystyle{\underline{\upsilon}_{i}=\underline{u}_{i}+\alpha}$ and $\displaystyle{\bar{\upsilon}_{i}=\bar{u}_{i}+\alpha}$ for $i=1,2,3,\cdots$.
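The monotonicity of these sequences can be checked by induction from the defining relations. Indeed, \[\underline{u}_{i+1}-\underline{u}_{i}=-\delta\left(\bar{\upsilon}_{i+1}-\bar{\upsilon}_{i}\right)=-\delta\left(\bar{u}_{i+1}-\bar{u}_{i}\right)\quad (i\geq 1)\qquad\text{and}\qquad \bar{u}_{i+1}-\bar{u}_{i}=-\delta\left(\underline{\upsilon}_{i}-\underline{\upsilon}_{i-1}\right)=-\delta\left(\underline{u}_{i}-\underline{u}_{i-1}\right)\quad (i\geq 2),\] while the base case gives $\bar{u}_{2}-\bar{u}_{1}=-\delta\underline{\upsilon}_{1}\leq 0.$ The signs then alternate at each step, so $\{\bar{u}_{i}\}$ and $\{\bar{\upsilon}_{i}\}$ are non-increasing while $\{\underline{u}_{i}\}$ and $\{\underline{\upsilon}_{i}\}$ are non-decreasing.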
Since the sequences $\{\bar{u}_{i}\}$ and $\{\bar{\upsilon}_{i}\}$ are monotone non-increasing and bounded from below, while the sequences $\{\underline{u}_{i}\}$ and $\{\underline{\upsilon}_{i}\}$ are monotone non-decreasing and bounded from above, the limits of these sequences exist. Let us denote their limits, as $i\rightarrow+\infty,$ by $\bar{u},$ $\bar{\upsilon},$ $\underline{u}$ and $\underline{\upsilon},$ respectively. We then have \[\bar{u}=1-\delta\underline{\upsilon},~\underline{u}=1-\delta\bar{\upsilon},~\bar{\upsilon}=\bar{u}+\alpha \text{ and }\underline{\upsilon}=\underline{u}+\alpha.\] Thus, \begin{equation}\label{3.5} \left\{\begin{array}{l} \bar{u}=1-\delta(\underline{u}+\alpha), \\ \underline{u}=1-\delta(\bar{u}+\alpha). \end{array}\right. \end{equation} Subtracting the two equations in \eqref{3.5} gives $(1-\delta)(\bar{u}-\underline{u})=0,$ and hypothesis {\bf (H1)} ensures $\delta<1$; we can therefore conclude that $\bar{u}=\underline{u}=u^{*}$ and, consequently, $\bar{\upsilon}=\underline{\upsilon}=u^{*}+\alpha=\upsilon^{*}.$ This implies that \[\displaystyle{\liminf_{t\rightarrow+\infty}u(t,x)=\limsup_{t\rightarrow+\infty}u(t,x)=u^{*}}\text{ and }\displaystyle{\liminf_{t\rightarrow+\infty}\upsilon(t,x)=\limsup_{t\rightarrow+\infty} \upsilon(t,x)=\upsilon^{*}}.\]The proof of Theorem \ref{th3.1} is now complete. \end{proof} \subsection{The Vanishing Case} The following theorem shows that the finiteness of $h_{\infty}$ causes both species, $u$ and $\upsilon$, to vanish. \begin{theorem}\label{th3.2} Let $(u,\upsilon,h(t))$ be the solution of \eqref{FBP}.
If $h_{\infty}<\infty,$ then we have $\displaystyle{\lim_{t\rightarrow+\infty}\| u(t,\cdot)\|_{C[0,h(t)]}=0}$ and $\displaystyle{\lim_{t\rightarrow+\infty}\| \upsilon(t,\cdot)\|_{C[0,h(t)]}=0.}$ \end{theorem} \begin{proof} Since $u(t,x)>0$ and $u_{x}(t,h(t))<0,$ $\upsilon$ satisfies \begin{equation}\label{3.6} \left\{\begin{array}{ll} \displaystyle{\upsilon_{t}-D\upsilon_{xx}\geq \kappa\upsilon\left(1-\frac{\upsilon}{\alpha}\right),}&\text{for all }t>0\text{ and }0<x<h(t), \\ \upsilon_{x}(t,0)=0, &t>0, \\ \upsilon(t,h(t))=0, h'(t)\geq-\mu\rho \upsilon_{x}(t,h(t)),&t>0, \\ \upsilon(0,x)=\upsilon_{0}(x), & x\in[0,h_{0}]. \end{array}\right. \end{equation} In view of Lemmas \ref{le2.5} and \ref{le2.8}, we have that $\displaystyle{\lim_{t\rightarrow+\infty} \|\upsilon(t,\cdot)\|_{C[0,h(t)]}=0}$. Hence, there exists $T>0$ such that $\upsilon(t,x)< \varepsilon$ for all $t\geq T$ and $ 0\leq x \leq h(t),$ where $0<\varepsilon\ll 1$. Since $u(t,x)>0$ and $\upsilon_{x}(t,h(t))<0,$ $u$ satisfies \begin{equation}\label{3.7} \left\{\begin{array}{ll} u_{t}-u_{xx}\geq u(1-\delta\varepsilon-u),& t>T, 0<x<h(t), \\ u_{x}(t,0)=0, & t>T, \\ u(t,h(t))=0, h'(t)\geq-\mu u_{x}(t,h(t)),& t>T, \\ u(T,x)>0, & x\in[0,h(T)]. \end{array}\right. \end{equation} Applying Lemmas \ref{le2.5} and \ref{le2.8}, we obtain that $\displaystyle{\lim_{t\rightarrow+\infty}\| u(t,\cdot)\|_{C[0,h(t)]}=0}$.\end{proof} \subsection{Sharp criteria for spreading and vanishing} In this subsection, we derive some criteria governing the spreading and vanishing for the free-boundary problem \eqref{FBP}. \begin{lemma}\label{le3.1} If $h_{\infty}<\infty,$ then $\displaystyle{h_{\infty}\leq\frac{\pi}{2}\min\left\{1, \sqrt{\frac{D}{\kappa}}\right\}=:h_{*}.}$ Furthermore, $h_{0}\geq h_{*}$ implies that $h_{\infty}=+\infty$. \end{lemma} \begin{proof} The proof of Lemma \ref{le3.1} is essentially the same as that of Theorem 5.1 in \cite{w3}.
By Theorem \ref{th3.2}, we know that if $h_{\infty}<\infty,$ then $$\lim_{t\rightarrow+\infty}\| u(t,\cdot)\|_{C[0,h(t)]}=0,\lim_{t\rightarrow+\infty}\| \upsilon(t,\cdot)\|_{C[0,h(t)]}=0.$$ In the following, we assume that $\displaystyle{h_{\infty}>\frac{\pi}{2}\min\left\{1, \sqrt{\frac{D}{\kappa}}\right\}}$ and derive a contradiction; then $\displaystyle{h_{\infty}>\frac{\pi}{2}}$ or $\displaystyle{h_{\infty}>\frac{\pi}{2}\sqrt{\frac{D}{\kappa}}},$ and we treat the two cases in turn. First, suppose that $\displaystyle{h_{\infty}>\frac{\pi}{2}}.$ Then there exists $\varepsilon>0$ such that $\displaystyle{h_{\infty}>\frac{\pi}{2}\sqrt{\frac{1}{1-\delta\varepsilon}}}.$ For such $\varepsilon,$ there exists $T>0$ such that $\displaystyle{h(T)>\frac{\pi}{2}\sqrt{\frac{1}{1-\delta\varepsilon}}}$ and $\upsilon(t,x)\leq\varepsilon,$ for $t>T$ and $x\in[0,h(T)].$ Let $\underline{u}(t,x)$ be the solution of the following problem: \begin{equation}\label{3.8} \left\{\begin{array}{ll} \partial_{t}\underline{u}-\partial_{xx}\underline{u}=\underline{u}(1-\delta\varepsilon- \underline{u}),&\text{for }t>T\text{ and }0<x<h(T), \\ \partial_x\underline{u}(t,0)=\underline{u}(t,h(T))=0,&t>T, \\ \underline{u}(T,x)=u(T,x), &0<x<h(T). \end{array}\right. \end{equation} By the comparison principle, we have $\underline{u}(t, x)\leq u(t, x),$ for all $t>T$ and $0<x<h(T)$. Since $\displaystyle{h(T)>\frac{\pi}{2}\sqrt{\frac{1}{1-\delta\varepsilon}}},$ Proposition 3.2 of \cite{Cantrell2003} yields $\displaystyle{\liminf_{t\rightarrow+\infty}u(t,x)\geq\liminf_{t\rightarrow+\infty}\underline{u}(t,x)>0,}$ which contradicts Theorem \ref{th3.2}.
Second, suppose that $\displaystyle{h_{\infty}>\frac{\pi}{2}\sqrt{\frac{D}{\kappa}}}.$ Then there exists $T>0$ such that $\displaystyle{h(T)>\frac{\pi}{2}\sqrt{\frac{D}{\kappa}}}$ and $u(t,x)>0,$ for all $t>T$ and $0<x<h(T).$ Let $\underline{\upsilon}(t, x)$ be the solution of the following equation \begin{equation}\label{3.10} \left\{\begin{array}{ll} \displaystyle{\partial_t\underline{\upsilon}-D\partial_{xx}\underline{\upsilon}= \kappa\underline{\upsilon}\left(1-\frac{\underline{\upsilon}}{\alpha}\right),}&t>T,~0<x<h(T), \\ \partial_x\underline{\upsilon}(t,0)=\underline{\upsilon}(t, h(T))=0, &t>T, \\ \underline{\upsilon}(T, x)=\upsilon(T, x), ~&0<x<h(T). \end{array}\right. \end{equation} By the comparison principle, we have $\underline{\upsilon}(t, x)\leq\upsilon(t, x),$ for all $t>T$ and $0<x<h(T)$. Since $\displaystyle{h(T)>\frac{\pi}{2}\sqrt{\frac{D}{\kappa}}},$ by Proposition 3.2 of \cite{Cantrell2003}, we have \[\displaystyle{\liminf_{t\rightarrow+\infty}\upsilon(t,x)\geq\liminf_{t\rightarrow+\infty}\underline{\upsilon}(t,x)>0,}\] which contradicts Theorem \ref{th3.2}. Finally, since $h'(t)>0$ for all $t>0,$ the above arguments show that $h_{\infty}=+\infty$ whenever $\displaystyle{h_{0}\geq\frac{\pi}{2}\min\left\{1, \sqrt{\frac{D}{\kappa}}\right\}}=h_{*}$.
\end{proof} \begin{lemma}\label{le3.2} Suppose that the initial datum $h_{0}$ in problem \eqref{FBP} is such that $h_{0}<h_{*}.$ Then, there exists $\bar{\mu}>0$ depending on $u_{0}$ and $\upsilon_{0}$ such that $h_{\infty}=+\infty$ when $\mu\geq\bar{\mu}.$ More precisely, we have $$\bar{\mu}=\mu_1:=\frac{D}{\rho}\max\left\{1,\frac{\|\upsilon_{0}\|_{\infty}}{\alpha} \right\}\left(\frac{\pi}{2}\sqrt{\frac{D}{\kappa}} -h_{0}\right)\left(\int_{0}^{h_{0}}\upsilon_{0}(x) dx\right)^{-1}.$$ Furthermore, if $\|\upsilon_{0}\|_{\infty}\leq 1+\theta$ and $\|u_0\|_{\infty}\leq 1,$ then $\bar{\mu}=\min \left\{\mu_1 , \mu_2\right\},$ where \[\displaystyle{\mu_2=\max\left\{1, \frac{\| u_{0} \|_{\infty}}{1-\delta(1+\theta)}\right\}\left(\frac{\pi}{2}-h_{0}\right) \left(\int_{0}^{h_{0}}u_{0}(x) dx \right)^{-1}}.\] \end{lemma} \begin{proof} We consider the following problem: \begin{equation}\label{3.12} \left\{\begin{array}{ll} \displaystyle{\partial_t\underline{\upsilon}-D\partial_{xx}\underline{\upsilon}= \kappa\underline{\upsilon}\left(1-\frac{\underline{\upsilon}}{\alpha}\right),} &t>0,0<x<\underline{h}(t), \\ \partial_x\underline{\upsilon}(t, 0)=0,&t>0, \\ \underline{\upsilon}(t,\underline{h}(t))=0,&t>0, \\ \underline{h}'(t)=-\mu\rho\,\partial_{x}\underline{\upsilon}(t,\underline{h}(t)),&t>0, \\ \underline{\upsilon}(0,x)=\upsilon_{0}(x),&0\leq x \leq h_{0}, \\ \underline{h}(0)=h_{0}, &t=0. \end{array}\right. \end{equation} By Lemma \ref{le2.2}, we have $\underline{h}(t)\leq h(t)$ and $ \underline{\upsilon}(t, x)\leq\upsilon(t,x),$ for $t>0$ and $0< x <\underline{h}(t)$. By Lemma 3.7 of \cite{dushou}, if $\displaystyle{\underline{h}(0)=h_{0}<h_{*}\leq \frac{\pi}{2}\sqrt{\frac{D}{\kappa}}}$ and $\mu\geq\bar{\mu},$ then $\underline{h}(\infty)=+\infty.$ It then follows that $h_{\infty}=+\infty$.
Suppose now that $\|\upsilon_{0}\|_{\infty}\leq 1+\theta$ and $\|u_{0}\|_{\infty}\leq 1,$ that is, $M_{2}=1+\theta.$ We consider the following problem \begin{equation}\label{3.13} \left\{\begin{array}{ll} \displaystyle{\partial_t\underline{u}-\partial_{xx}\underline{u}=\underline{u}(1-\delta(1+\theta)-\underline{u}),} &t>0,0<x<\underline{h}(t), \\ \partial_x\underline{u}(t,0)=0,&t>0, \\ \underline{u}(t,\underline{h}(t))=0,&t>0, \\ \underline{h}'(t)=-\mu\,\partial_{x}\underline{u}(t,\underline{h}(t)),&t>0, \\ \underline{u}(0,x)=u_{0}(x), &0\leq x \leq \underline{h}(0), \\ \underline{h}(0)=h_{0}, &t=0. \end{array}\right. \end{equation} From Lemma 3.7 of \cite{dushou}, we know that if $\displaystyle{\underline{h}(0)=h_{0}<h_{*}\leq \frac{\pi}{2}}$ and $\mu\geq \mu_2,$ then $\underline{h}(\infty)=+\infty$. Thus, in either case, $\mu\geq \min \{\mu_1 , \mu_2\}$ implies that $\underline{h}(\infty)=+\infty,$ and therefore $h_{\infty}=+\infty$. \end{proof} \begin{lemma}\label{le3.3} Suppose that the initial datum $h_{0},$ in problem \eqref{FBP}, is such that $h_{0}<h_{*}.$ Then, there exists $\underline{\mu}>0$ depending on $u_{0}(x)$ and $\upsilon_{0}(x)$ such that $h_{\infty}<\infty$ when $\mu\leq \underline{\mu}$. \end{lemma} \begin{proof} We adopt the same method used to prove Lemma 5.2 of \cite{w3}, Lemma 3.8 of \cite{dushou} and Corollary 1 of \cite{b}. Since $h_{0}<h_{*},$ we may set $\displaystyle{\varepsilon=\frac{1}{2}\left(\frac{h_{*}}{h_{0}}-1\right)>0}$.
Define $$\bar{h}(t)=h_{0}(1+\varepsilon-\frac{\varepsilon}{2}e^{-\beta t})~~\text{ for }~t\geq 0,$$ $$V(y)=\displaystyle{\cos\frac{\pi y}{2}}~~ \text{ for }~0\leq y \leq 1,$$ and $$\bar{u}(t,x)=\bar{\upsilon}(t,x)=\widetilde{M}e^{-\beta t}V\left(\frac{x}{\bar{h}(t)}\right) ~~\text{ for }~~0\leq x \leq \bar{h}(t),$$ where $\displaystyle{\beta=\frac{1}{2} \min\left\{\left(\frac{\pi}{2}\right)^{2}\frac{D}{(1+\varepsilon)^{2}h_{0}^{2}} -\kappa,\left(\frac{\pi}{2}\right)^{2}\frac{1}{(1+\varepsilon)^{2}h_{0}^{2}}-1 \right\}>0},$ as $h_{0}(1+\varepsilon)<h_{*},$ and $$\displaystyle{\widetilde{M}=\frac{\max\left\{\|u_{0}\|_{\infty},\| \upsilon_{0}\|_{\infty}\right\}}{\displaystyle{\cos\left(\frac{\pi}{2+\varepsilon}\right)}}.}$$ If $\displaystyle{\mu\leq \underline{\mu}=\frac{\varepsilon h_{0}^{2}\beta (2+\varepsilon)}{2 (1+\rho)\pi \widetilde{M}}}$\,, then a direct computation yields \begin{equation}\label{3.14} \left\{\begin{array}{ll} \displaystyle{\bar{u}_{t}-\bar{u}_{xx}-\bar{u}(1-\bar{u})\geq \widetilde{M}e^{-\beta t}V\left(\frac{x}{\bar{h}(t)}\right)\left[\left(\frac{\pi}{2}\right)^{2}\frac{1}{(1+\varepsilon)^{2}h_{0}^{2}}-1-\beta\right]\geq0,}~&t>0,~0<x<\bar{h}(t), \\ \displaystyle{\bar{\upsilon}_{t}-D\bar{\upsilon}_{xx}-\kappa\bar{\upsilon}\left(1-\frac{\bar{\upsilon}}{M_{1}+\alpha}\right)\geq \widetilde{M}e^{-\beta t}V\left(\frac{x}{\bar{h}(t)}\right)\left[\left(\frac{\pi}{2}\right)^{2}\frac{D}{(1+\varepsilon)^{2}h_{0}^{2}}-\kappa-\beta\right]\geq 0,}&t>0,~0<x<\bar{h}(t), \\ \bar{u}_{x}(t,0)=\bar{\upsilon}_{x}(t,0)=0,~&t>0, \\ \bar{u}(t,\bar{h}(t))=\bar{\upsilon}(t,\bar{h}(t))=0,~&t>0, \\ \displaystyle{\bar{h}'(t)+\mu[\bar{u}_{x}(t,\bar{h}(t))+\rho\bar{\upsilon}_{x}(t,\bar{h}(t))]\geq \frac{\varepsilon h_{0} \beta e^{-\beta t}}{2}\left(1-\frac{2\mu (1+\rho)\pi \widetilde{M}}{\varepsilon h_{0}^{2}\beta (2+\varepsilon)}\right)\geq 0,}~&t>0. \end{array}\right. \end{equation} Since $h_{0}\leq \bar{h}(0),$ $\bar{u}(0,x)\geq u_{0}(x)$ and $\bar{\upsilon}(0,x)\geq \upsilon_{0}(x)$ for all $x\in[0,h_{0}],$ Lemma \ref{le2.2} yields that $h(t)\leq \bar{h}(t)$ on $[0,+\infty)$.
Taking $t\rightarrow +\infty,$ we obtain $$h_\infty\leq \bar{h}(\infty)=h_{0}(1+\varepsilon)<h_{*}.$$ This, together with Lemma \ref{le3.1}, completes the proof. \end{proof} Lemmas \ref{le3.1} and \ref{le3.3} lead to further criteria for spreading and vanishing, in terms of the parameter $D,$ when $h_{0}$ is fixed. \begin{lemma}\label{le3.4} For a fixed $h_{0}>0,$ let $D^{*}=\displaystyle{\frac{4\kappa h^{2}_{0}}{\pi^{2}}.}$ Then, \begin{enumerate}[\rm (i)] \item If $0 < D\leq D^{*},$ spreading occurs (see Definition \ref{terminology}). \\ \item Suppose that $D^{*}< D \leq \kappa$. If $\mu\geq\bar{\mu},$ then spreading occurs. If $\mu\leq \underline{\mu},$ then vanishing occurs (see Definition \ref{terminology}). \end{enumerate} \end{lemma} \section{Spreading speed}\label{spreading.estimate} In this section, we derive upper and lower bounds for the spreading speed under the free boundary conditions stated in \eqref{FBP}. The estimates are given in terms of well-known parameters. \begin{theorem}\label{spreading.speed} Let $(u,\upsilon,h)$ be the solution of problem \eqref{FBP} with $h_{\infty}=\infty$ and recall that $$s_{\rm min}=2 \max \left\{1,\sqrt{D\kappa}\right\}.$$ Then, $$\displaystyle{s_{*}\leq\liminf_{t\rightarrow +\infty}\frac{h(t)}{t}\leq\limsup_{t\rightarrow +\infty}\frac{h(t)}{t}\leq s_{\rm min},}$$ where $s_{*}$ is the constant appearing in Lemma \ref{le2.7}. \end{theorem} \begin{proof}[Proof of Theorem \ref{spreading.speed}] First, we prove $\displaystyle{\limsup_{t\rightarrow +\infty}\frac{h(t)}{t}\leq s_{\rm min}}.$ From Lemma \ref{le2.6}, we know that $(U(\xi), V(\xi))\rightarrow (0, 0)$ and $(U'(\xi), V'(\xi))\rightarrow (0, 0)$ as $\xi\rightarrow+\infty$. Then, we can choose $l\gg 1$ and $g\gg 1$ such that \begin{equation}\label{4.01} l U(\xi)\geq \| u_{0}\|_{\infty},~~g V(\xi)\geq \| \upsilon_{0}\|_{\infty} \text{ for all }\xi\in[0,h_{0}].
\end{equation} Moreover, there exists $\sigma_{0} > h_{0}$ depending on $D, \kappa, \mu, \rho$ such that \begin{equation}\label{4.02} U(\sigma_{0})< \min_{0\leq x\leq h_{0}}\left(U(x)-\frac{u_{0}(x)}{l}\right),~~~~V(\sigma_{0})< \min_{0\leq x\leq h_{0}}\left(V(x)-\frac{\upsilon_{0}(x)}{g}\right), \end{equation} \begin{equation}\label{4.03} U(\sigma_{0})\leq 1-\frac{1}{l},~~~V(\sigma_{0})\leq\left (1-\frac{1}{g}\right)(M_{1}+\alpha), \end{equation} and \begin{equation}\label{4.04} -\mu(lU'(\sigma_{0})+g\rho V'(\sigma_{0}))< s_{\rm min}. \end{equation} \noindent Now let $\sigma(t)=\sigma_{0}+s_{\rm min}t~$ for $~t\geq 0,$ \[\bar{u}=lU(x-s_{\rm min}t)-lU(\sigma_{0})~\text{ and }~ \bar{\upsilon}=gV(x-s_{\rm min}t)-gV(\sigma_{0})\text{ for }t\geq 0 \text{ and } 0\leq x\leq \sigma(t).\] It is obvious from \eqref{4.02} and \eqref{4.04} that $$\bar{u}(0,x)>u_{0}(x),~\bar{\upsilon}(0,x)>\upsilon_{0}(x),~~\text{for}~~0\leq x\leq h_{0};$$ and $$\sigma'(t)=s_{\rm min}> -\mu(\bar{u}_{x}(t,\sigma(t))+\rho\bar{\upsilon}_{x}(t,\sigma(t))).$$ Moreover, $$\bar{u}(t,\sigma(t))=\bar{\upsilon}(t,\sigma(t))=0~~\text{for all}~t\geq 0;$$ $$\bar{u}_{x}(t,0)<0,~\bar{\upsilon}_{x}(t,0)< 0~~\text{for all}~t\geq 0 ~\text{ (by~Lemma~\ref{le2.6}).}$$ Then by a calculation, we obtain from \eqref{4.03} that \begin{align*} \bar{u}_{t}-\bar{u}_{xx}-\bar{u}(1-\bar{u}) &=l\left[(l-1)\left(U-\frac{lU(\sigma_{0})}{l-1}\right)^{2}+U(\sigma_{0})\frac{l-1-lU( \sigma_{0})}{l-1}\right] \\ &\geq 0 , \end{align*} and \begin{align*} &\bar{\upsilon}_{t}-D\bar{\upsilon}_{xx}-\kappa\bar{\upsilon} \left(1-\frac{\bar{\upsilon}}{M_{1}+\alpha}\right)\\ &=\frac{g\kappa}{M_{1}+\alpha}\left[(g-1)\left(V-\frac{gV(\sigma_{0})}{g-1}\right)^{2} +V(\sigma_{0})\frac{(g-1)(M_{1}+\alpha)-gV(\sigma_{0})}{g-1}\right]\\ &\geq 0. \end{align*} Then, by Lemma \ref{le2.2}, we have $h(t)\leq\sigma(t)~ \text{for}~t\geq 0$. 
Therefore, $$\displaystyle{\limsup_{t\rightarrow +\infty}\frac{h(t)}{t} \leq \lim_{t\rightarrow +\infty}\frac{\sigma(t)}{t}= s_{\rm min}.}$$ Now, we prove that $\displaystyle{\liminf_{t\rightarrow +\infty}\frac{h(t)}{t} \geq s_{*}}.$ Let $(\underline{\upsilon},\underline{h})$ be the solution of the free boundary problem \begin{equation}\label{4.1} \left\{\begin{array}{ll} \displaystyle{\partial_{t}\underline{\upsilon}-D\partial_{xx}\underline{\upsilon}= \kappa\underline{\upsilon}\left(1-\frac{\underline{\upsilon}}{\alpha}\right),}&t>0, 0<x<\underline{h}(t), \\ \underline{\upsilon}_{x}(t,0)= 0,&t>0, \\ \underline{\upsilon}(t,\underline{h}(t))=0,&t>0, \\ \underline{h}'(t)=-\mu\rho\,\partial_{x}\underline{\upsilon}(t,\underline{h}(t)),&t>0. \end{array}\right. \end{equation} By the comparison principle, we then have $\underline{h}(t)\leq h(t).$ From Theorem 4.2 in \cite{dushou}, we have $$\displaystyle{s_{*}=\lim_{t\rightarrow +\infty}\frac{\underline{h}(t)}{t}\leq\liminf_{t\rightarrow +\infty}\frac{h(t)}{t}.}$$ \end{proof} \section{Proof of existence and uniqueness}\label{exist.} This section is devoted to proving the results about local existence and uniqueness of the solution to the main problem \eqref{FBP}. \begin{proof}[Proof of Lemma \ref{local.exist}] The main idea is adapted from \cite{Chenxf}. Let $\zeta \in C^3([0,\infty))$ be such that $\zeta(y)=1$ if $| y-h_{0}|\leq \frac{h_{0}}{4},$ $\zeta(y)=0$ if $| y-h_{0}|> \frac{h_{0}}{2},$ and $|\zeta'(y)|\leq \frac{6}{h_{0}}$ for all $y$. Define \begin{equation}\label{5.1} x=y+\zeta(y)(h(t)-h_{0}),~~~0\leq y<+\infty. \end{equation} Note that, as long as $\displaystyle{| h(t)-h_{0}|\leq \frac{h_{0}}{8}},$ the map $(x,t)\longmapsto(y,t)$ is a diffeomorphism from $[0,+\infty)$ to $[0,+\infty)$. Moreover, \begin{equation}\label{5.2} 0\leq x\leq h(t)\Leftrightarrow 0\leq y \leq h_{0}\text{ and } x=h(t) \Leftrightarrow y=h_{0}.
\end{equation} We then compute $$\frac{\partial y}{\partial x}=\frac{1}{1+\zeta'(y)(h(t)-h_{0})}=A(h(t),y(t)), $$ $$\frac{\partial^{2}y}{\partial x^{2}}=\frac{-\zeta''(y)(h(t)-h_{0})} {[1+\zeta'(y)(h(t)-h_{0})]^{3}}=B(h(t),y(t)),$$ $$\frac{\partial y}{\partial t}=\frac{-h'(t)\zeta(y)}{1+\zeta'(y)(h(t)-h_{0})}=C(h(t),y(t)).$$ Now, we denote $$U(t,y(t))=u(t,x), ~~V(t,y(t))=\upsilon(t,x),~~F(U,V)= U(1-U-\delta V)~\text{ and }~G(U,V)=\kappa V\left(1-\frac{V}{U+\alpha}\right).$$ Then problem \eqref{FBP} becomes \begin{equation}\label{5.3} \left\{\begin{array}{ll} \displaystyle{\frac{\partial U}{\partial t}=A^{2}U_{yy}+(B-C) U_{y}+F(U,V),} \qquad\qquad&t>0, 0<y<h_{0}, \\ \displaystyle{\frac{\partial V}{\partial t}= DA^{2}V_{yy}+(DB-C)V_{y}+G(U,V),} &t>0,0<y< h_{0}, \\ \displaystyle{U_{y}(t,0)=V_{y}(t,0)=U(t,h_{0})=V(t,h_{0})=0,} &t>0, \\ \displaystyle{h'(t)=-\mu (U_{y}(t,h_{0})+\rho V_{y}(t,h_{0})),}&t>0, \\ \displaystyle{U(0,y)=U_{0}(y)=u_{0}(y),} &y\in[0,h_{0}],t=0, \\ \displaystyle{V(0,y)=V_{0}(y)=\upsilon_{0}(y),} &y\in[0,h_{0}],t=0. \end{array}\right. \end{equation} We denote by $\tilde{h}=-\mu (U'_{0}(h_{0})+\rho V'_{0}(h_{0})).$ As in \cite{linzhigui2007}, we shall prove the local existence by using the contraction mapping theorem. 
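For the reader's convenience, the identities for $A$, $B$, and $C$ above can be checked by implicit differentiation of \eqref{5.1}; we sketch the computation for $A$ and $C$ (that for $B$ follows by differentiating $A$ once more with respect to $x$):

```latex
% Differentiating x = y + \zeta(y)(h(t)-h_0) with respect to x at fixed t:
%   1 = [1 + \zeta'(y)(h(t)-h_0)]\,\partial_x y,
% which gives the first identity:
\frac{\partial y}{\partial x}=\frac{1}{1+\zeta'(y)(h(t)-h_{0})}=A(h(t),y(t)).
% Differentiating the same relation with respect to t at fixed x:
%   0 = [1 + \zeta'(y)(h(t)-h_0)]\,\partial_t y + \zeta(y)h'(t),
% which gives the third identity:
\frac{\partial y}{\partial t}=\frac{-h'(t)\zeta(y)}{1+\zeta'(y)(h(t)-h_{0})}=C(h(t),y(t)).
```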
We let $T$ be such that $0<T\leq \frac{h_{0}}{8(1+\tilde{h})}$ and introduce the function spaces \begin{equation}\nonumber \begin{array}{l} X_{1T}:=\{ U\in C(\mathcal{R}):U(0,y)=U_{0}(y), \| U-U_{0}\|_{C(\mathcal{R})} \leq 1 \}, \\ X_{2T}:=\{ V\in C(\mathcal{R}):V(0,y)=V_{0}(y), \| V-V_{0}\|_{C(\mathcal{R})} \leq 1 \}, \\ X_{3T}:=\{ h\in C^{1}[0,T]:\| h'-\tilde{h}\|_{C[0,T]}\leq 1 \}, \end{array} \end{equation} where \[\mathcal{R}=\{(t,y): 0 \leq t \leq T, 0 < y < h_{0}\}.\] Then, the space $X_{T}=X_{1T}\times X_{2T}\times X_{3T}$ is a complete metric space, with the metric \[d((U_{1},V_{1},h_{1}),(U_{2},V_{2},h_{2}))=\| U_{1}-U_{2}\|_{C(\mathcal{R})}+\| V_{1}-V_{2}\|_{C(\mathcal{R})}+\| h'_{1}-h'_{2}\|_{C[0,T]}.\] We have $$| h(t)-h_{0}|\leq \int^T_{0}|h'(s)|ds \leq T(1+\tilde{h})\leq \frac{h_{0}}{8},$$ so that the mapping $(t,x) \rightarrow (t,y)$ is a diffeomorphism. As mentioned above, we will construct a contraction mapping from $X_{T}$ into $X_{T}$ in order to prove the existence of a local solution. We begin this construction now. As $0\leq t\leq T,$ the coefficients $A$, $B$ and $C$ are bounded and $A^{2}$ is between two positive constants. By standard $L^{p}$ theory and the Sobolev embedding theorem, for any $(U,V,h)\in X_{T},$ the following initial boundary value problem \begin{equation}\label{5.4} \left\{\begin{array}{ll} \displaystyle{\frac{\partial \hat{U}}{\partial t}=A^{2}\hat{U}_{yy}+(B-C) \hat{U}_{y}+F(U,V),} &t>0, 0<y<h_{0}, \\ \displaystyle{\frac{\partial \hat{V}}{\partial t}= DA^{2}\hat{V}_{yy}+(DB-C)\hat{V}_{y}+G(U,V),} \qquad\qquad&t>0,0<y<h_{0}, \\ \displaystyle{\hat{U}_{y}(t,0)=\hat{V}_{y}(t,0)=0,} &t>0, \\ \displaystyle{\hat{U}(t,h_{0})=\hat{V}(t,h_{0})=0,} &t>0, \\ \displaystyle{\hat{U}(0,y)=U_{0}(y)=u_{0}(y),} &y\in[0,h_{0}], \\ \displaystyle{\hat{V}(0,y)=V_{0}(y)=\upsilon_{0}(y),}&y\in[0,h_{0}], \end{array}\right.
\end{equation} admits, for any $\theta\in (0,1),$ a unique bounded solution $(\hat{U},\hat{V})\in C^{\frac{(1+\theta)}{2},1+\theta}(\mathcal{R})\times C^{\frac{(1+\theta)}{2},1+\theta}(\mathcal{R}).$ Moreover, \[\| \hat{U}\|_{C^{\frac{(1+\theta)}{2},1+\theta}(\mathcal{R})}\leq C_{1}\text{ and } \|\hat{V}\|_{C^{\frac{(1+\theta)}{2},1+\theta}(\mathcal{R})}\leq C_{2},\] where the constants $C_{1}$ and $C_{2}$ depend on $h_{0},$ $\theta,$ $\| U_{0}\|_{C^{2}[0,h_{0}]}$ and $\| V_{0}\|_{C^{2}[0,h_{0}]}$. We now define $$\hat{h}(t)=h_{0}-\mu \int^t_{0} [\hat{U}_{y}(\tau,h_{0})+\rho\hat{V}_{y}(\tau,h_{0})]d\tau.$$ Then, $\hat{h}'(t)=-\mu (\hat{U}_{y}(t,h_{0})+\rho\hat{V}_{y}(t,h_{0}))\in C^{\frac{\theta}{2}}[0,T]$ and $\| \hat{h}'\|_{C^{\frac{\theta}{2}}}\leq C_{3},$ where $C_{3}$ depends on $\mu,$ $\rho,$ $h_{0},$ $\alpha,$ $\| U_{0}\|_{C^{2}[0,h_{0}]}$ and $\| V_{0}\|_{C^{2}[0,h_{0}]}$. Now, we are ready to introduce the mapping $\Phi: (U,V,h)\rightarrow (\hat{U},\hat{V},\hat{h})$. We claim that $\Phi$ maps $X_{T}$ into itself for sufficiently small $T$. Indeed, if we take $T$ such that $$0< T \leq\min\left\{C_{1}^{\frac{-2}{1+\theta}},C_{2}^{\frac{-2}{1+\theta}},C_{3}^{\frac{-2}{\theta}}\right\},$$ we then have $$\|\hat{U}-U_{0}\|_{C(\mathcal{R})}\leq \| \hat{U}\|_{C^{0,\frac{1+\theta}{2}}(\mathcal{R})}T^{\frac{1+\theta}{2}}\leq C_{1}T^{\frac{1+\theta}{2}}\leq 1,$$ $$\| \hat{V}-V_{0}\|_{C(\mathcal{R})}\leq \| \hat{V}\|_{C^{0,\frac{1+\theta}{2}}(\mathcal{R})}T^{\frac{1+\theta}{2}}\leq C_{2}T^{\frac{1+\theta}{2}}\leq 1,$$ $$\| \hat{h}'-\tilde{h}\|_{C[0,T]}\leq \| \hat{h}'\|_{C^{\frac{\theta}{2}}[0,T]}T^{\frac{\theta}{2}}\leq C_{3}T^{\frac{\theta}{2}}\leq 1.$$ Thus $\Phi$ maps $X_{T}$ into itself. Now we show that $\Phi$ is a contraction mapping for sufficiently small $T$. Let $(U_{i},V_{i},h_{i})\in X_{T}$ for $i=1,2,$ and set $(\hat{U}_{i},\hat{V}_{i},\hat{h}_{i})=\Phi(U_{i},V_{i},h_{i})$.
We set $\bar{U}=\hat{U}_{1}-\hat{U}_{2}$ and $\bar{V}=\hat{V}_{1}-\hat{V}_{2}.$ Then, \begin{equation}\label{5.5} \left\{\begin{array}{l} \displaystyle{\frac{\partial \bar{U}}{\partial t}=A^{2}(h_{2}(t),y(t))\bar{U}_{yy}+[B(h_{2}(t),y(t))-C(h_{2}(t),y(t))] \bar{U}_{y}+\mathbf{F},} \\ \text{ for }t>0\text{ and } 0<y<h_{0}, \\ \displaystyle{\frac{\partial\bar{V}}{\partial t}= DA^{2}(h_{2}(t),y(t))\bar{V}_{yy}+(DB(h_{2}(t),y(t))-C(h_{2}(t),y(t)))\bar{V}_{y}+ \mathbf{G},} \\ \text{ for }t>0 \text{ and } 0<y<h_{0}, \\ \displaystyle{\bar{U}_{y}(t,0)=\bar{V}_{y}(t,0)=0,}\quad t>0, \\ \displaystyle{\bar{U}(t,h_{0})=\bar{V}(t,h_{0})=0,}\quad t>0, \\ \displaystyle{\bar{U}(0,y)=\bar{V}(0,y)=0,} \quad 0\leq y\leq h_{0}, \end{array}\right. \end{equation} where \begin{equation*} \begin{array}{lll} \mathbf{F}:&=&[A^{2}(h_{1}(t),y(t))-A^{2}(h_{2}(t),y(t))]\hat{U}_{1yy} \\ &&{}+[(B(h_{1}(t),y(t))-B(h_{2}(t),y(t)))-(C(h_{1}(t),y(t))-C(h_{2}(t),y(t)))]\hat{U}_{1y} \\ &&{}+F(U_{1},V_{1})-F(U_{2},V_{2}), \end{array} \end{equation*} \begin{equation*} \begin{array}{lll} \mathbf{G}:&=&[DA^{2}(h_{1}(t),y(t))-DA^{2}(h_{2}(t),y(t))]\hat{V}_{1yy} \\ &&{}+[(DB(h_{1}(t),y(t))-DB(h_{2}(t),y(t)))-(C(h_{1}(t),y(t))-C(h_{2}(t),y(t)))]\hat{V}_{1y} \\ &&{}+G(U_{1},V_{1})-G(U_{2},V_{2}).
\end{array} \end{equation*} Again, using standard $L^{p}$ estimates and the Sobolev embedding theorem, we have $$\displaystyle{\|\bar{U}\|_{C^{\frac{1+\theta}{2},1+\theta}(\mathcal{R})}\leq C_{4}(\| U_{1}-U_{2}\|_{C(\mathcal{R})}+\| V_{1}-V_{2}\|_{C(\mathcal{R})}+\| h_{1}-h_{2}\|_{C^{1}[0,T]}),}$$ $$\| \bar{V}\|_{C^{\frac{1+\theta}{2},1+\theta}(\mathcal{R})}\leq C_{5}(\| U_{1}-U_{2}\|_{C(\mathcal{R})}+\| V_{1}-V_{2}\|_{C(\mathcal{R})}+\| h_{1}-h_{2}\|_{C^{1}[0,T]}),$$ and $$\| \hat{h}_{1}'-\hat{h}_{2}'\|_{C^{\frac{\theta}{2}}[0,T]}\leq C_{6}(\| U_{1}-U_{2}\|_{C(\mathcal{R})}+\| V_{1}-V_{2}\|_{C(\mathcal{R})}+\| h_{1}-h_{2}\|_{C^{1}[0,T]}),$$ where the constants $C_{4},~C_{5},$ and $C_{6}>0$ depend on $A,$ $B,$ $C$ and $C_{i},$ for $ i=1, 2, 3$. \noindent We also have \begin{equation*} \begin{array}{ll} \|\bar{U}\|_{C(\mathcal{R})}+\|\bar{V}\|_{C(\mathcal{R})}+\|\hat{h}_{1}'-\hat{h}_{2}'\|_{C[0,T]}&\leq T^{\frac{1+\theta}{2}}\|\bar{U}\|_{C^{\frac{1+\theta}{2},1+\theta}(\mathcal{R})}+T^{\frac{1+\theta}{2}}\| \bar{V}\|_{C^{\frac{1+\theta}{2},1+\theta}(\mathcal{R})} \\ ~&+T^{\frac{\theta}{2}}\|\hat{h}_{1}'-\hat{h}_{2}'\|_{C^{\frac{\theta}{2}}[0,T]}. \end{array}\end{equation*} Based on the above, if $T\in (0,1],$ then \begin{equation*} \begin{array}{ll} \|\bar{U}\|_{C(\mathcal{R})}+\|\bar{V}\|_{C(\mathcal{R})}+\| \hat{h}_{1}'-\hat{h}_{2}'\|_{C[0,T]}&\leq C_{7}T^{\frac{\theta}{2}}\left\{\|U_{1}-U_{2}\|_{C(\mathcal{R})} +\|V_{1}-V_{2}\|_{C(\mathcal{R})}\right. \\ ~&\left.+\| h_{1}'- h_{2}'\|_{C[0,T]}\right\}, \end{array}\end{equation*} where $C_{7}>0$ is a constant depending only on $C_{4},$ $C_{5},$ and $C_{6}$. We choose $$T=\frac{1}{2} \min \left\{1,\frac{h_{0}}{8(1+\tilde{h})},C_{1}^{\frac{-2}{1+\theta}} ,C_{2}^{\frac{-2}{1+\theta}},C_{3}^{\frac{-2}{\theta}},C_{7}^{\frac{-2}{\theta}} \right\},$$ so that $C_{7}T^{\frac{\theta}{2}}\leq 2^{-\frac{\theta}{2}}<1,$ and apply the contraction mapping theorem to conclude that $\Phi$ has a unique fixed point in $X_{T}$. This completes the proof of Lemma \ref{local.exist}.
\end{proof} We now turn to the proof of Lemma \ref{estimates}. \begin{proof}[Proof of Lemma \ref{estimates}] The strong maximum principle yields that $u>0$ and $\upsilon>0,$ for all $t\in [0,T]$ and $x\in [0,h(t)).$ Since $u(t,h(t))=\upsilon(t,h(t))=0,$ the Hopf lemma yields that $u_{x}(t,h(t))<0$ and $\upsilon_{x}(t,h(t))<0$ for all $t\in(0,T].$ Thus, $h'(t)>0$ for all $t\in (0,T]$. Now, we consider the following initial value problem \begin{equation}\label{5.9} \bar{u}'(t)=\bar{u}(1-\bar{u})~ \text{ for } t>0,\qquad \bar{u}(0)=\|u_{0}\|_{\infty} . \end{equation} The comparison principle implies that $u(t,x)\leq \bar{u}(t)\leq\max\{1,\|u_{0}\|_{\infty}\}$ for all $t\in[0,T]$ and for all $x\in[0,h(t)]$. Similarly, we consider the following problem \begin{equation}\label{5.10} \bar{\upsilon}'(t)=\kappa\bar{\upsilon}\left(1-\frac{\bar{\upsilon}}{M_{1}+\alpha}\right)\text{ for } t>0,\qquad \bar{\upsilon}(0)=\|\upsilon_{0}\|_{\infty} , \end{equation} to conclude, via the comparison principle, that $\upsilon(t,x)\leq \max\{M_{1}+\alpha,\|\upsilon_{0}\|_{\infty}\}$ for all $t\in[0,T]$ and $x\in[0,h(t)]$. We now prove that $h'(t)\leq \Lambda~\text{for}~t\in (0,T]$. In order to achieve this, we shall compare $u$ and $\upsilon$ to the following two auxiliary functions \begin{equation*}\label{5.11} \begin{array}{l} \omega_{1}(t,x)=M_{1}[2M(h(t)-x)-M^{2}(h(t)-x)^{2}] \text{ for }t\in[0,T]\text{ and } x\in[h(t)-M^{-1}, h(t)], \\ \text{and} \\ \omega_{2}(t,x)=M_{2}[2M(h(t)-x)-M^{2}(h(t)-x)^{2}] \text{ for }t\in[0,T]\text{ and }x\in[h(t)-M^{-1}, h(t)].
\end{array} \end{equation*} As a first choice, we pick $M=\displaystyle{\max \left\{\frac{1}{h_{0}},\frac{\sqrt{2}}{2},\sqrt{\frac{\kappa}{2D}}\right\}}$ in order to obtain that \begin{equation}\label{5.12} \left\{\begin{array}{l} \partial_{t}\omega_{1}-\partial_{xx}\omega_{1}\geq 2M_{1}M^{2}\geq u\geq u(1-u-\delta\upsilon)=\partial_{t}u-\partial_{xx}u, \\ \partial_{t}\omega_{2}-D\partial_{xx}\omega_{2}\geq 2DM_{2}M^{2}\geq \kappa\upsilon\geq\kappa\upsilon(1-\frac{\upsilon}{u+\alpha})=\partial_{t}\upsilon-D\partial_{xx}\upsilon, \\ \omega_{1}(t,h(t))=0=u(t,h(t)), \\ \omega_{2}(t,h(t))=0=\upsilon(t,h(t)), \\ \omega_{1}(t,h(t)-M^{-1})=M_{1}\geq u(t,h(t)-M^{-1}), \\ \omega_{2}(t,h(t)-M^{-1})=M_{2}\geq\upsilon(t,h(t)-M^{-1}). \end{array}\right. \end{equation} We plan to use a comparison argument to complete the proof. For this, we need to have $\omega_{1}(0,x)\geq u_{0}(x)$ and $\omega_{2}(0,x)\geq \upsilon_{0}(x)$. Note that, for $x\in [h_{0}-M^{-1}, h_{0}],$ \[u_{0}(x)=-\int^{h_{0}}_{x}u_{0}'(s)ds\leq (h_{0}-x)\|u_{0}'\|_{C[0,h_{0}]},\] \[\upsilon_{0}(x)=-\int^{h_{0}}_{x}\upsilon_{0}'(s)ds\leq (h_{0}-x)\|\upsilon_{0}'\|_{C[0,h_{0}]},\] \[\omega_{1}(0,x)=M_{1}M(h_{0}-x)[2-M(h_{0}-x)]\geq M_{1}M(h_{0}-x)\] \[\text{and }\omega_{2}(0,x)=M_{2}M(h_{0}-x)[2-M(h_{0}-x)]\geq M_{2}M(h_{0}-x)\text{ for }x\in [h_{0}-M^{-1},h_{0}].\] Thus, if $\displaystyle{ M \geq\max\left\{ \frac{\|u_{0}'\|_{C[0,h_{0}]}}{M_{1}}, \frac{\|\upsilon_{0}'\|_{C[0,h_{0}]}}{M_{2}}\right\}},$ then we have $\omega_{1}(0,x)\geq u(0,x)$ and $\omega_{2}(0,x)\geq \upsilon(0,x).$ By now, we have two constraints that $M$ should satisfy. We choose $M$ such that \[M = \max\left\{\frac{1}{h_{0}},~\frac{\sqrt{2}}{2},~\sqrt{\frac{\kappa}{2D}},~\frac{\|u_{0}'\|_{C[0,h_{0}]}}{M_{1}}, ~\frac{\|\upsilon_{0}'\|_{C[0,h_{0}]}}{M_{2}}\right\}.\] Then, the comparison principle yields that $\omega_{1}\geq u$ and $\omega_{2}\geq \upsilon$ for $t\in [0,T]$ and $x\in [h(t)-M^{-1}, h(t)]$.
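The boundary derivatives of the barriers $\omega_{1}$ and $\omega_{2},$ used in the next step, are obtained by direct differentiation:

```latex
% Since \omega_1(t,x) = M_1[2Ms - M^2 s^2] with s = h(t)-x and \partial_x s = -1,
\partial_{x}\omega_{1}(t,x)=-M_{1}\left[2M-2M^{2}(h(t)-x)\right]
=-2MM_{1}\left[1-M(h(t)-x)\right],
% so that, evaluating at the free boundary x = h(t),
\partial_{x}\omega_{1}(t,h(t))=-2MM_{1},
\qquad\text{and similarly}\qquad
\partial_{x}\omega_{2}(t,h(t))=-2MM_{2}.
```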
Since $\omega_{1}(t,h(t))=u(t,h(t))=0$ and $\omega_{2}(t,h(t))=\upsilon(t,h(t))=0,$ we then obtain that \[\partial_{x}u(t,h(t))\geq \partial_{x}\omega_{1}(t,h(t))=-2MM_{1} \text{ and } \partial_{x}\upsilon(t,h(t))\geq\partial_{x}\omega_{2}(t,h(t))=-2MM_{2}.\] Therefore, we have $h'(t)\leq \Lambda ,$ where $\Lambda:=2M\mu(M_{1}+ \rho M_{2})$. The proof of Lemma \ref{estimates} is now complete. \end{proof} \section{Discussion and summary of the results}\label{discuss} In this paper, we considered a Leslie-Gower and Holling-type II predator-prey model in a one-dimensional environment. The model studies two species that initially occupy the region $[0,h_{0}]$ and both have a tendency to expand their territory. We obtain several results in this setting. \begin{enumerate}[(i)] \item Theorem \ref{th3.1} and Theorem \ref{th3.2} describe the asymptotic behavior of the two species in the cases of spreading success and spreading failure, in terms of $h_{\infty}$: If $h_{\infty}=+\infty,$ then we have $$\lim_{t\rightarrow+\infty}u(t,x)=u^{*}, \lim_{t\rightarrow+\infty}\upsilon(t,x)=\upsilon^{*}.$$ If $h_{\infty}<+\infty,$ then we have $$\lim_{t\rightarrow+\infty}\|u(t,\cdot)\|_{C[0,h(t)]}=0, \lim_{t\rightarrow+\infty}\| \upsilon(t,\cdot)\|_{C[0,h(t)]}=0.$$ \item A spreading-vanishing dichotomy can be established by using Lemma \ref{le3.1}, and the critical length for the habitat can be characterized by $h_{*},$ in the sense that the two species will spread successfully if $h_{\infty}>h_{*},$ while the two species will vanish eventually if $h_{\infty}\leq h_{*}$. If the size of the initial habitat satisfies $h_{0}\geq h_{*},$ or if $h_{0}<h_{*}$ but $\mu \geq \bar{\mu}$ or $0 < D\leq D^{*},$ then the two species will spread successfully.
If instead the size of the initial habitat is less than $h_{*},$ with $D^{*}< D \leq \kappa$ and $\mu \leq \underline{\mu},$ then the two species will disappear eventually. \item Finally, Theorem \ref{spreading.speed} reveals that the spreading speed (if it exists) lies between $s_{*},$ a constant determined by an elliptic problem induced from the original model, and $s_{\rm min},$ the minimal speed of traveling wavefront solutions for the predator-prey model on the whole real line (without a free boundary). \end{enumerate} \end{document}
\begin{document} \begin{abstract} We study a nonlinear system of partial differential equations arising in macroeconomics which utilizes a mean field approximation. This system together with the corresponding data, subject to two moment constraints, is a model for debt and wealth across a large number of similar households, and was introduced in a recent paper of Achdou, Buera, Lasry, Lions, and Moll. We introduce a relaxation of their problem, generalizing one of the moment constraints; any solution of the original model is a solution of this relaxed problem. We prove existence and uniqueness of strong solutions to the relaxed problem, under the assumption that the time horizon is small. Since these solutions are unique and since solutions of the original problem are also solutions of the relaxed problem, we conclude that if the original problem does have solutions, then such solutions must be the solutions we prove to exist. Furthermore, for some data and for sufficiently small time horizons, we are able to show that solutions of the relaxed problem are in fact not solutions of the original problem. In this way we demonstrate nonexistence of solutions for the original problem in certain cases. \end{abstract} \maketitle \section{Introduction} A recent paper of Achdou, Buera, Lasry, Lions, and Moll calls attention to PDE models in macroeconomics; we study a model proposed there for the distribution of wealth across many similar households \cite{mollPhilTrans}. In this model, the independent variables are $a,$ wealth, $z,$ income, and $t,$ time. Each household of a given wealth and income must decide how much of their income to put towards consumption and how much to instead save. Note that wealth and savings can be positive or negative, representing debt for negative values. 
The authors make a mean field assumption in the modeling, so that a representative household is seen as interacting not with all the many other individual households, but only with the aggregation of these. In addition to introducing the model, the authors of \cite{mollPhilTrans} work with stationary solutions and state that existence and uniqueness of time-dependent solutions is an open problem. The present work gives the first theory of existence and uniqueness for time-dependent solutions. The particular nonlinear PDE model from \cite{mollPhilTrans} is given by the two equations \begin{equation}\label{vEquation} \partial_{t}v+\frac{1}{2}\sigma^{2}(z)\partial_{zz}v+\mu(z)\partial_{z}v+(z+r(t)a)\partial_{a}v +H(\partial_{a}v)-\rho v=0, \end{equation} \begin{equation}\label{gEquation} \partial_{t}g-\frac{1}{2}\partial_{zz}(\sigma^{2}(z)g)+\partial_{z}(\mu(z)g) +\partial_{a}((z+r(t)a)g)+\partial_{a}(gH_{p}(\partial_{a}v))=0. \end{equation} The dependent variables are $g,$ the distribution of households, and $v,$ the present discounted value of future utility derived from consumption; the discount rate is $\rho.$ The nonlinear function $H$ is the Hamiltonian for the problem and is related to a given utility function, $u;$ the specific form of $H$ is given below in Section \ref{formulationSection}. We consider the $z$ variable to be taken from the domain $[z_{\mathrm{min}},z_{\mathrm{max}}],$ and the $a$ variable to be taken from $\mathbb{R}.$ The function $\sigma\geq0$ is a diffusion coefficient and the function $\mu$ is a transport coefficient. We take these to be smooth and to satisfy $\sigma(z_{\mathrm{min}})=\sigma(z_{\mathrm{max}})=0$ and $\mu(z_{\mathrm{min}})=\mu(z_{\mathrm{max}})=0,$ so there is no transport or diffusion through the boundary of the domain. The interest rate $r(t)$ is not given but instead depends on the unknowns; determining $r$ will be a major focus of the present work.
The model is based on models appearing previously in the economics literature \cite{aiyagari}, \cite{bewley}, \cite{huggett}. Our choice of domain with respect to the $a$ variable is different from \cite{mollPhilTrans}, in which the $a$ variable was taken from the semi-infinite interval $[a_{\mathrm{min}},\infty)$ for a given value $a_{min}<0.$ The theorem we prove will be for compactly supported distributions $g,$ and thus our theorem is consistent with \cite{mollPhilTrans} with respect to the spatial domain as long as $a_{min}$ is taken to be beyond the edge of the support of our $g,$ in particular at the initial time. At the end, in Section \ref{discussionSection}, we will discuss further the restriction of our solutions to the domain given in \cite{mollPhilTrans}. We have two moment conditions which must be satisfied: \begin{equation}\label{gZeroMoment} \int g \ dadz =1, \end{equation} \begin{equation}\label{gFirstMoment} \int ag\ dadz =0. \end{equation} Of course, condition \eqref{gZeroMoment} simply expresses that $g$ is a probability measure. On the other hand, \eqref{gFirstMoment} is an equilibrium condition which expresses that the system is closed in the sense that all money available to be borrowed in the system is in fact borrowed, and conversely all money borrowed in the system comes from within the system. Restated, condition \eqref{gFirstMoment} expresses that households with negative wealth have borrowed from households with positive wealth, that households with positive wealth have lent to households with negative wealth, and these total amounts borrowed and lent balance with each other. It is from the condition \eqref{gFirstMoment} that the interest rate, $r(t),$ is to be determined. The equation \eqref{vEquation} for $v$ is backward parabolic, while the equation \eqref{gEquation} for $g$ is forward parabolic; this is the typical situation for mean field games.
We therefore specify initial data $g_{0}$ for $g,$ giving an initial distribution of households, and terminal data $v_{T}$ for $v,$ giving a final utility function. We actually are not able to fully solve the problem specified by \eqref{vEquation}, \eqref{gEquation}, \eqref{gZeroMoment}, \eqref{gFirstMoment}, with the accompanying data; however, this is not a defect of our method, as we are able to prove in some cases that this problem does not have a solution. In \cite{mollPhilTrans}, the authors did not indicate that a general terminal condition $v_{T}$ should be specified, but instead indicated a particular choice: that $T$ should be taken to be large and that $v_{T}$ should be associated to a stationary solution of the system. We will discuss this proposed restriction on the data further in our concluding section, Section \ref{discussionSection} below. Another condition was stated in \cite{mollPhilTrans}, which is related to their choice of the spatial domain with respect to the $a$ variable being $[a_{min},\infty).$ Since the equations \eqref{vEquation}, \eqref{gEquation} include transport terms with respect to $a,$ a boundary condition at $a=a_{min}$ must be carefully given. This is the ``state constraint boundary condition'' of \cite{mollPhilTrans}, which indicates that the relevant characteristics point into the domain; such boundary conditions for transport equations have been developed by Feller \cite{feller57}. The existence of the boundary at $a_{min}$ is a modeling decision, stating that lenders will no longer lend to households with debt of $a_{min};$ the state constraint boundary condition then implies that for these households, their incomes are necessarily high enough that, in the absence of further borrowing, their debt load will not increase from the accumulating interest. By considering compactly supported solutions and taking the support to be away from a given value of $a_{min},$ we obviate the need for any such state constraint boundary condition.
Furthermore, with our compactly supported distribution $g,$ our solutions feature a maximum and minimum wealth at each time, but these maximum and minimum values are not fixed in time. The system \eqref{vEquation}, \eqref{gEquation} is an example from the realm of mean field games, which have been introduced by Lasry and Lions \cite{lasryLions1}, \cite{lasryLions2}, \cite{lasryLions3}, and also by Caines, Huang, and Malhame \cite{huang1}, \cite{huang2}, to study problems in game theory with a large number of similar agents. Existence theory for such systems has been developed by several authors \cite{degenerateMFG}, \cite{cirantSobolev}, \cite{gomesSuper}, \cite{gomesLog}, \cite{gomesSub}, \cite{porretta1}, \cite{porretta2}, \cite{porretta3}, but the system \eqref{vEquation}, \eqref{gEquation} does not fall readily into any previously developed existence theory for two main reasons. First, some existence theory such as that of the author relies strongly on the presence of parabolic effects \cite{ambroseMFG1}, \cite{ambroseMFG2}, \cite{ambroseMFG3}, but in \eqref{vEquation}, \eqref{gEquation} the diffusion is anisotropic and cannot be used to bound derivatives with respect to the $a$ variable. Second, many of these works assume structure on the nonlinearity, especially additive separability into a part which depends on $v$ and a part which depends on $g,$ and this separability is not present here. Instead, the unknowns interact through the interest rate $r(t),$ and this multiplies other terms in the equations. The author's prior works \cite{ambroseMFG1}, \cite{ambroseMFG2}, \cite{ambroseMFG3} could be described as viewing the mean field games system as a coupled pair of nonlinear heat equations. With the anisotropic effects, we now take the view instead that \eqref{vEquation}, \eqref{gEquation} form a coupled pair of nonlinear transport equations. 
Otherwise, once we have reformulated the system appropriately, the method used to prove existence and uniqueness of solutions is broadly similar to that of the author's prior work \cite{ambroseMFG3}; this is the energy method, but adapted to the forward-backward setting of mean field games. The plan of the paper is as follows: in Section \ref{formulationSection} we make some reformulation of the problem, changing to a more convenient variable than $v.$ In Section \ref{interestRateSection} we take care to discuss how the interest rate $r(t)$ is calculated, introducing a modification of the original problem. In Section \ref{iterativeSection} we set up an approximation scheme for solving our modified problem. In Section \ref{existenceAndBounds} we prove that our approximate problems have solutions, and develop bounds for the solutions which are uniform in the approximation parameters. We pass to the limit to find solutions of our modified problem in Section \ref{limitSection}, to complete our existence proof. We then prove uniqueness of these solutions in Section \ref{uniquenessSection}. Finally, we make some concluding remarks in Section \ref{discussionSection}, including pointing out that our existence theory for the modified problem demonstrates that the original problem in some cases in fact has no solution. Our main theorems are Theorem \ref{mainExistenceTheorem} in Section \ref{limitSection}, which establishes existence of solutions to our modified problem, and Theorem \ref{mainUniquenessTheorem} in Section \ref{uniquenessSection}, which establishes uniqueness of these solutions. \section{Formulation}\label{formulationSection} The Hamiltonian satisfies \begin{equation}\nonumber H(p)=\max_{c\geq0}\left(-cp+u(c)\right), \end{equation} where $u$ is a given consumer utility function. Since $u$ is a consumer utility function, standard economic assumptions are that $u'(c)>0$ and $u''(c)<0$ for all $c$.
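As a concrete illustration, consider the logarithmic utility $u(c)=\log c;$ this is a standard example satisfying the above assumptions, though the model does not fix any particular $u$:

```latex
% For u(c) = log c we have u'(c) = 1/c > 0 and u''(c) = -1/c^2 < 0 for c > 0.
% For p > 0, the maximum of -cp + log c over c >= 0 occurs where -p + 1/c = 0,
% i.e., at c = (u')^{-1}(p) = 1/p, so that
H(p)=-p\cdot\frac{1}{p}+\log\frac{1}{p}=-1-\log p,
\qquad
H_{p}(p)=-\frac{1}{p}=-(u')^{-1}(p).
```

Both formulas are consistent with the general expressions for $H$ and $H_{p}$ computed below, and in this example the range of $u'$ is indeed $(0,\infty).$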
For simplicity, we take $u$ to be infinitely smooth away from $c=0,$ and we also assume for simplicity that the range of $u'$ is $(0,\infty)$ and thus the domain of $(u')^{-1}$ is also $(0,\infty).$ We will comment briefly on the general case in our concluding remarks in Section \ref{discussionSection}. Doing some calculus, we see that $-cp+u(c)$ is maximized when $p=u'(c),$ so we may rewrite $H$ as \begin{equation}\nonumber H(p)=-p(u')^{-1}(p)+u((u')^{-1}(p)). \end{equation} We may then also calculate $H_{p},$ which is given by the formula \begin{equation}\nonumber H_{p}(p)=-(u')^{-1}(p)-\frac{p}{u''((u')^{-1}(p))}+\frac{p}{u''((u')^{-1}(p))}=-(u')^{-1}(p). \end{equation} Since we have taken $u$ to be smooth, we see that $H$ and $H_{p}$ inherit this smoothness. The above calculation requires $p>0;$ if instead $p\leq0,$ then there is no maximum, and the Hamiltonian would have the value $+\infty.$ To restrict to $p>0$ we must take $\partial_{a}v>0,$ and thus it is convenient to change variables to $w=\partial_{a}v$ and seek positive solutions for $w.$ We furthermore wish to have compactly supported solutions, and this is not possible with the condition we have just stated, that $w>0$ on the whole domain. So, we introduce $y=w-f(t)w_{\infty}$ for some positive constant $w_{\infty},$ and we require $y$ to be smooth and compactly supported. We will likewise require $g$ to be compactly supported. That is, we let $y=\partial_{a}v-f(t)w_{\infty},$ and seek a favorable choice of the function $f(t).$ We need to determine the equation satisfied by $y$ and also to choose our $f.$ To this end, we begin by differentiating \eqref{vEquation} with respect to $a$: \begin{multline}\label{vaEquation} \partial_{t}(\partial_{a}v)+\frac{1}{2}\sigma^{2}(z)\partial_{zz}(\partial_{a}v)+\mu(z)\partial_{z}(\partial_{a}v) \\ +r(t)\partial_{a}v+(z+r(t)a)\partial_{a}(\partial_{a}v)+H_{p}(\partial_{a}v)\partial_{a}(\partial_{a}v)-\rho\partial_{a}v =0.
\end{multline} To each $\partial_{a}v$ appearing in \eqref{vaEquation}, we add and subtract $f(t)w_{\infty}.$ We find the following evolution equation for $y:$ \begin{multline}\nonumber \partial_{t}y + f'(t)w_{\infty} + \frac{1}{2}\sigma^{2}(z)\partial_{zz}y + \mu(z)\partial_{z}y + r(t)y +r(t)f(t)w_{\infty} \\ + (z+r(t)a)\partial_{a}y + \Theta(y,f)\partial_{a}y -\rho y - \rho f(t)w_{\infty}=0. \end{multline} Here we have introduced $\Theta$ to be the function given by \begin{equation}\nonumber \Theta(y,f)=H_{p}(y+fw_{\infty}). \end{equation} We choose $f$ such that \begin{equation}\label{fFinalEquation} f'(t)+r(t)f(t)-\rho f(t) =0; \end{equation} note that this is a simple ordinary differential equation which may be solved with an integrating factor. We also must specify a terminal condition for $f,$ and we take $f(T)=1.$ This choice leaves the equation for $y$ as \begin{equation}\label{yFinalEquation} \partial_{t}y+\frac{1}{2}\sigma^{2}(z)\partial_{zz}y+\mu(z)\partial_{z}y +(r(t)-\rho)y+(z+r(t)a+\Theta(y,f))\partial_{a}y = 0. \end{equation} In terms of $y$ and $f,$ and thus also in terms of $\Theta,$ our equation for $g$ is \begin{equation}\label{gFinalEquation} \partial_{t}g-\frac{1}{2}\partial_{zz}(\sigma^{2}(z)g)+\partial_{z}(\mu(z)g)+\partial_{a}((z+r(t)a+\Theta(y,f))g) =0. \end{equation} \section{Determining the interest rate, and a relaxed problem}\label{interestRateSection} In this section we explore the nature of the coupling between the $v$ equation \eqref{vEquation} and the $g$ equation \eqref{gEquation}. We will proceed first in terms of $v,$ and then summarize in terms of our new variable $y.$ As stated in \cite{mollPhilTrans}, the coupling is through the interest rate, $r(t),$ and this interest rate is determined through the moment condition \eqref{gFirstMoment}. We proceed with our first calculation on this point, which we expect is what was intended in \cite{mollPhilTrans}.
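Before carrying out this calculation, we record the explicit form of $f,$ which is available once $r$ is known: the terminal-value problem \eqref{fFinalEquation} with $f(T)=1$ is solved by an integrating factor:

```latex
% Multiplying f'(t) + (r(t)-\rho)f(t) = 0 by \exp(\int_0^t (r(s)-\rho)\,ds)
% shows that f(t)\exp(\int_0^t (r(s)-\rho)\,ds) is constant in t;
% imposing the terminal condition f(T) = 1 then gives
f(t)=\exp\left(\int_{t}^{T}\bigl(r(s)-\rho\bigr)\,ds\right),
% which in particular is strictly positive on [0,T].
```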
We assume that \eqref{gFirstMoment} is satisfied by the data $g_{0}.$ Call $\mathcal{C}=\int\int ag\ dadz.$ Then we differentiate $\mathcal{C}$ with respect to time: \begin{multline}\nonumber \mathcal{C}_{t}=\int\int\frac{a}{2}\partial_{zz}(\sigma^{2}g)\ dadz -\int\int a\partial_{z}(\mu g)\ dadz \\ -\int\int a\partial_{a}((z+ra)g)\ dadz -\int\int a\partial_{a}(H_{p} g)\ dadz. \end{multline} By the assumptions on the diffusion and drift coefficients $\sigma$ and $\mu,$ the first and second terms on the right-hand side vanish. For the third and fourth terms on the right-hand side, we integrate by parts: \begin{multline}\nonumber \mathcal{C}_{t}=-\int a(z+ra)g\Bigg|_{a=a_{min}}^{a=\infty}\ dz + \int\int(z+ra)g\ dadz \\ -\int a H_{p}g\Bigg|_{a=a_{min}}^{a=\infty}\ dz+\int\int H_{p}g\ dadz. \end{multline} Because of our assumption of compact support with respect to $a$ in $(a_{min},\infty),$ the first and third terms on the right-hand side also vanish. This leaves us with \begin{equation}\label{unfortunateR} \mathcal{C}_{t}-r(t)\mathcal{C}=\mathcal{Q}, \end{equation} with the quantity $\mathcal{Q}$ defined by $\mathcal{Q}=\int\int (z+H_{p})g\ dadz.$ Unfortunately, this presents a difficulty, as it is unclear how to determine $r$ from \eqref{unfortunateR}. That is, if we believe that $r$ will enforce $\mathcal{C}=0,$ then we must have $\mathcal{C}_{t}=0$ as well, and then \eqref{unfortunateR} tells us that $\mathcal{Q}$ must equal zero as well. However, this would not tell us what the interest rate is actually equal to. Worse yet, there is no reason to believe at present that $\mathcal{Q}$ would equal zero. We deal with this difficulty by generalizing the problem. Instead of seeking solutions for which $\mathcal{C}=0,$ we will now determine the interest rate by insisting $\mathcal{Q}_{t}=0.$ \begin{remark} Note that if $g|_{a_{min}}\neq0,$ then there would be another term proportional to $r$ in \eqref{unfortunateR}.
It would then be possible to choose a value of $r$ to cancel the $\mathcal{Q}$ term. \end{remark} As we have just said, the condition $\mathcal{C}=0$ does indeed imply $\mathcal{Q}=0$ and thus $\mathcal{Q}_{t}=0.$ Thus solutions of the original problem ($\mathcal{C}=0$) also solve the relaxed problem ($\mathcal{Q}_{t}=0$). In the other direction, if we have a solution of the relaxed problem, since $\mathcal{Q}_{t}=0$ we have $\mathcal{Q}=\mathcal{Q}_{0}$ for all $t.$ If $\mathcal{Q}_{0}=0$ and if $\mathcal{C}(0)=0,$ then we may conclude that $\mathcal{C}=0$ after all. If however $\mathcal{Q}_{0}\neq0$ and if $\mathcal{C}(0)=0,$ then we see that $\mathcal{C}_{t}(0)\neq0$ and thus $\mathcal{C}$ is not identically zero. We will be proving existence and uniqueness of solutions for the relaxed problem. Thus if there is a solution of the original problem, then it must be the solution we prove to exist. We will in some cases be able to guarantee that in fact $\mathcal{Q}_{0}\neq0,$ and thus in these cases, the original problem does not have a solution. Now that we are considering the relaxed problem, we return our attention to determination of the interest rate. Taking the time derivative of $\mathcal{Q},$ we have \begin{multline}\nonumber \mathcal{Q}_{t}=\int\int z\partial_{t}g\ dadz + \int\int H_{pp}(\partial_{a}v)(\partial_{t}\partial_{a}v)g\ dadz \\ +\int\int H_{p}(\partial_{a}v)(\partial_{t}g)\ dadz :=Q_{1}+Q_{2}+Q_{3}. 
\end{multline} For each of these terms, we decompose into a part which explicitly involves $r$ and a piece which does not: \begin{eqnarray} \label{Q1Equation} Q_{1} & = & P_{1} - \int\int z\partial_{a}\left((z+r(t)a)g\right)\ dadz,\\ \label{Q2Equation} Q_{2} & = & P_{2} - \int\int g(H_{pp}(\partial_{a}v))\partial_{a}\left((z+r(t)a)\partial_{a}v\right)\ dadz,\\ \label{Q3Equation} Q_{3} & = & P_{3} - \int\int (H_{p}(\partial_{a}v))\partial_{a}\left((z+r(t)a)g\right)\ dadz, \end{eqnarray} where \begin{equation}\nonumber P_{1}=-\int\int z\partial_{z}(\mu(z)g)\ dadz, \end{equation} \begin{equation}\nonumber P_{2}=\int\int gH_{pp}(\partial_{a}v)\left(-\frac{\sigma^{2}(z)}{2}\partial_{zz}(\partial_{a}v) -\mu(z)\partial_{z}(\partial_{a}v)-H_{p}(\partial_{a}v)\partial_{a}(\partial_{a}v)+\rho\partial_{a}v\right)\ dadz, \end{equation} \begin{equation}\nonumber P_{3}=\int\int H_{p}(\partial_{a}v)\left(\frac{1}{2}\partial_{zz}(\sigma^{2}(z)g)-\partial_{z}(\mu(z)g) -\partial_{a}(gH_{p}(\partial_{a}v))\right)\ dadz. \end{equation} We first notice that, because of the compact support with respect to $a$ in $(a_{min},\infty),$ the integral on the right-hand side of \eqref{Q1Equation} is equal to zero. We apply the derivative in the integral on the right-hand side of \eqref{Q2Equation}, and we integrate by parts in \eqref{Q3Equation}: \begin{multline}\label{Q2last} Q_{2} = P_{2} - r(t)\int\int g(H_{pp}(\partial_{a}v))\partial_{a}v\ dadz\\ - \int\int g(H_{pp}(\partial_{a}v))(z+r(t)a)\partial_{a}^{2}v\ dadz, \end{multline} \begin{equation}\label{Q3last} Q_{3} = P_{3} + \int\int (H_{pp}(\partial_{a}v))(\partial_{a}^{2}v)(z+r(t)a)g\ dadz. \end{equation} We introduce the notation $P=P_{1}+P_{2}+P_{3},$ and \begin{equation}\nonumber K=\int\int g(H_{pp}(\partial_{a}v))\partial_{a}v\ dadz. 
\end{equation} Then adding $Q_{1},$ $Q_{2},$ and $Q_{3}$ back together again, we find \begin{equation}\nonumber \mathcal{Q}_{t}=P-r(t)K; \end{equation} to arrive at this, notice that there is a cancellation when adding \eqref{Q2last} and \eqref{Q3last}. We therefore have concluded that we may determine $r(t)$ in the relaxed problem by \begin{equation}\nonumber r(t)=\frac{P}{K}. \end{equation} (Note that both $P$ and $K$ depend on time.) For this to be a complete description of the determination of the interest rate, we must do two further things. First, we remark that it is clear that $K$ is nonzero. Since $H_{p}(p)=-(u')^{-1}(p)$ and since $u'$ is strictly decreasing, we see that $H_{pp}(p)>0$ always. As discussed above, we are only considering solutions for which $\partial_{a}v>0.$ Together with the fact that $g$ is a probability distribution, we have $K>0.$ We will still, however, need to control $K$ to ensure that it cannot get arbitrarily small. Finally, we give an explicit formula for $P,$ in terms of $y$ and $f$ rather than $\partial_{a}v:$ \begin{multline}\label{definitionOfP} P=P[y,f,g]=-\int\int z\partial_{z}(\mu(z)g)\ dadz \\ +\int\int gH_{pp}(y+fw_{\infty})\left(-\frac{1}{2}\sigma^{2}\partial_{zz}y-\mu\partial_{z}y-H_{p}(y+fw_{\infty})\partial_{a}y +\rho (y+fw_{\infty})\right)\ dadz\\ +\int\int H_{p}(y+fw_{\infty})\left(\frac{1}{2}\partial_{zz}(\sigma^{2}g)-\partial_{z}(\mu g)-\partial_{a}(gH_{p}(y+fw_{\infty})) \right)\ dadz. \end{multline} \section{Iterative scheme}\label{iterativeSection} We will prove our existence theorem using an iterative scheme, and we will now set up this scheme. We fix $s\in\mathbb{N}$ such that $s\geq 4;$ we will provide some further comments on this later. Let $A>0$ be given. 
We let $A_{1}=[-A,A],$ $A_{2}=[-2A,2A],$ and $A_{3}=[-3A,3A].$ We let $\chi$ be such that $\chi\in C^{\infty}(\mathbb{R}),$ such that $\chi(a)=1$ for $a\in A_{2},$ such that $\chi(a)=0$ for $a\in A_{3}^{c},$ and such that on each component of $A_{3}\setminus A_{2},$ $\chi$ is smooth and monotone. For all $a\in \mathbb{R},$ we then have $|\chi(a)a|\leq 3A.$ We will henceforth work in the spatial domain which we denote by $D,$ which is $D=A_{3}\times[z_{min},z_{max}].$ Let data $g_{0}\in H^{s}(D)$ and $y_{T}\in H^{s+1}(D)$ be given, such that the support of $g_{0}$ with respect to $a$ is contained in the interior of $A_{1}$ and the support of $y_{T}$ with respect to $a$ is contained in the interior of $A_{1}.$ We initialize our scheme with $g^{0}=g_{0,\delta}$ and $y^{0}=y_{T,\delta}.$ Here, for small parameter values $\delta>0,$ we have taken a $C^{\infty}$ function $g_{0,\delta}$ and a $C^{\infty}$ function $y_{T,\delta}$ to be within $\delta$ of $g_{0}$ in $H^{s}(D)$ and within $\delta$ of $y_{T}$ in $H^{s+1}(D),$ respectively. As we have assumed that $g_{0}$ and $y_{T}$ are each supported in the interior of $A_{1}$ with respect to the $a$ variable, we may take our approximations to also be supported in this set with respect to $a.$ That our data can be approximated in this way follows from standard density results \cite{adams}. The solutions of our iterated system will actually depend on both $n$ and $\delta$ and would more properly be called $y^{n,\delta}$ and $g^{n,\delta};$ we will suppress this $\delta$ dependence, however, for the time being, considering for now $\delta>0$ to be fixed, and we will call the iterates $y^{n}$ and $g^{n},$ and so on. 
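The $\delta$-approximation of the data can be mimicked numerically. The sketch below (with hypothetical one-dimensional data, and measuring the error only in a discrete $L^{2}$ norm rather than $H^{s}$) shows Gaussian smoothing of a kinked profile converging to the profile as the smoothing width shrinks; note that a genuine construction must in addition preserve the compact support in $a,$ as required above, whereas plain Gaussian smoothing enlarges the support slightly.

```python
import numpy as np

# Illustration (discrete L^2 norm, not H^s) of approximating rough data by
# smooth functions to within delta: convolving a kinked profile with a
# narrowing Gaussian kernel drives the error to zero.
x = np.linspace(-2.0, 2.0, 1001)
g0 = np.maximum(0.0, 1.0 - np.abs(x))  # rough: kinks at x = -1, 0, 1
h = x[1] - x[0]

def mollify(u, sigma, h):
    m = int(5 * sigma / h)               # symmetric, odd-length kernel
    t = np.arange(-m, m + 1) * h
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(u, kernel, mode="same")

errs = [np.sqrt(h * np.sum((mollify(g0, s, h) - g0) ** 2))
        for s in (0.2, 0.1, 0.05)]
print(errs)  # decreasing as the smoothing width shrinks
```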
We take the function $f^{0}(t)=1$ for all $t,$ and we let the initial interest rate be given as $r^{0}(t)=0$ for all $t.$ We will still need to initialize $K.$ For our constant $w_{\infty}>0$ and the data $y_{T}$ we define \begin{equation}\label{dataW} W=\min_{(a,z)\in D}\left(y_{T}(a,z)+w_{\infty}\right), \end{equation} and we require that $W>0;$ this is the positivity condition for $\partial_{a}v.$ Noting that our terminal data in our approximate problems is not exactly equal to $y_{T}+w_{\infty},$ we also take $\delta$ sufficiently small so that \begin{equation}\label{deltaData1} \min_{(a,z)\in D}\left(y_{T,\delta}+w_{\infty}\right)\geq \frac{3W}{4}. \end{equation} We similarly define $K_{data}>0$ as \begin{equation}\nonumber K_{data}=\int\int g_{0}(H_{pp}(y_{T}+w_{\infty}))(y_{T}+w_{\infty})\ dadz. \end{equation} Note that $K_{data}$ is positive since $g_{0}$ is a probability distribution, since $H_{pp}>0$ (this sign is inherited from properties of the utility function, $u$) and because we have taken $W>0.$ We need to initialize $K$ and use something like $K_{data},$ but adapted to the data for our approximate problems, \begin{equation}\nonumber K^{0}=\int\int g_{0,\delta}(H_{pp}(y_{T,\delta}+w_{\infty}))(y_{T,\delta}+w_{\infty})\ dadz, \end{equation} and we may take $\delta$ sufficiently small so that \begin{equation}\label{deltaData2} K^{0}\geq \frac{3K_{data}}{4}. \end{equation} Having initialized our iteration scheme with initial iterates $y^{0}=y_{T,\delta}$ and $g^{0}=g_{0,\delta},$ the support of each of $y^{0}$ and $g^{0}$ with respect to $a$ is contained in $A_{1}$ and thus also in $A_{2}.$ We fix $M>1.$ We may take $\delta>0$ sufficiently small so that we also have the following bounds for $y^{0}$ and $g^{0}:$ \begin{equation}\nonumber \sup_{t\in[0,T]}\|y^{0}(t,\cdot)\|^{2}_{H^{s+1}} +\|g^{0}(t,\cdot)\|^{2}_{H^{s}} \leq M\Big(\|y_{T}\|^{2}_{H^{s+1}}+\|g_{0}\|^{2}_{H^{s}}\Big).
\end{equation} These two bounds, on the supports and on the norms, are features we will seek to maintain for all subsequent iterates. We introduce another cutoff function, related to the fact that the function $H_{p}$ is only defined for positive arguments. We have given the definition of $W>0$ above in \eqref{dataW}. We let $\psi:\mathbb{R}\rightarrow\mathbb{R}$ be a $C^{\infty}$ function which satisfies $\psi(x)=x$ for $x\geq W/2,$ which satisfies $\psi(x)=W/4$ for $x\leq W/4,$ and which is monotone. We define $\Theta_{c}$ by \begin{equation}\nonumber \Theta_{c}(y,f)=H_{p}(\psi(y+fw_{\infty})). \end{equation} It will be important later to note that if $y+fw_{\infty}\geq W/2,$ then $\Theta_{c}(y,f)=\Theta(y,f).$ We set up our iterative scheme, beginning with $g:$ \begin{multline}\label{gIteratedTransport} \partial_{t}g^{n+1}-\frac{1}{2}\partial_{zz}\left(\sigma^{2}(z)g^{n+1}\right) \\ +\partial_{z}\left(\mu(z)g^{n+1}\right) +\partial_{a}\left(\chi(z+r^{n}(t)a)g^{n+1}\right)+\partial_{a}\left(g^{n+1}\chi\Theta_{c}(y^{n},f^{n})\right)=0. \end{multline} We take this with initial data \begin{equation}\label{mollData} g^{n+1}(0,\cdot)=g_{0,\delta}. \end{equation} Note that we have inserted a factor of the cutoff function $\chi$ in the transport terms. A difficulty of the system is that as long as $r\neq0,$ the transport speeds are unbounded. With the factors of $\chi$ present, this is no longer the case for our approximate equations. We will be able to remove the factors of $\chi$ by the end of our existence argument. 
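The need for the cutoff $\psi,$ and the sign $H_{pp}>0$ used earlier for the positivity of $K,$ can be made concrete in the CRRA case (an illustrative assumption; the text only uses that $u'$ is strictly decreasing and that $H_{p}$ is defined for positive arguments): with $u(c)=c^{1-\gamma}/(1-\gamma)$ we have $u'(c)=c^{-\gamma},$ so $H_{p}(p)=-(u')^{-1}(p)=-p^{-1/\gamma},$ defined only for $p>0,$ with $H_{pp}(p)=(1/\gamma)p^{-1/\gamma-1}>0.$

```python
import numpy as np

# CRRA illustration (an assumption made for illustration only): u'(c) = c^(-gamma),
# so H_p(p) = -(u')^{-1}(p) = -p^(-1/gamma), defined only for p > 0 -- which is
# why the cutoff psi keeps the argument of H_p above W/4 in the text.
gamma = 2.0
H_p = lambda p: -p ** (-1.0 / gamma)

p = np.linspace(0.05, 5.0, 1000)
H_pp = np.gradient(H_p(p), p)        # finite-difference approximation of H_pp
assert np.all(H_pp > 0)              # the sign used to show K > 0

# mid-grid comparison with the closed form H_pp(p) = (1/gamma) p^(-1/gamma - 1)
exact = (1.0 / gamma) * p ** (-1.0 / gamma - 1.0)
i = len(p) // 2
print(H_pp[i], exact[i])
```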
The transport speed in \eqref{gIteratedTransport} with respect to the variable $a,$ then, is $\chi z+r^{n}(t)\chi a+\chi\Theta_{c}(y^{n},f^{n}).$ Denote by $R$ an upper bound on $r^{n}(t),$ and denote by $Y$ an upper bound on $\Theta_{c}(y^{n},f^{n}),$ presuming for the moment that these bounds can be found independent of our parameters $n$ and $\delta.$ Then the transport speed is bounded by $z_{max}+3RA+Y,$ independently of $n$ and $\delta.$ Thus, until time $T,$ the support of $g^{n+1}$ with respect to $a,$ which is initially contained in $A_{1},$ remains contained in $A_{2}$ as long as $T\leq\frac{A}{z_{max}+3RA+Y}.$ We next give the iterated equation for $y:$ \begin{multline}\label{yIteratedTransport} \partial_{t}y^{n+1} + \frac{1}{2}\sigma^{2}(z)\partial_{zz}y^{n+1} + \mu(z)\partial_{z}y^{n+1} + r^{n}(t)y^{n+1} \\ + (\chi z+r^{n}(t)\chi a)\partial_{a}y^{n+1} + \chi\Theta_{c}(y^{n},f^{n})\partial_{a}y^{n+1} -\rho y^{n+1} =0. \end{multline} As above, we take this with mollified data \begin{equation}\label{yMollData} y^{n+1}(T,\cdot)=y_{T,\delta}. \end{equation} Again, the solutions may more properly be called $y^{n,\delta},$ but we will suppress the $\delta$ dependence for the time being.
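The support argument can be simulated along a characteristic. In the sketch below (all constants are hypothetical stand-ins for $A,$ $z_{max},$ $R,$ and $Y$), a characteristic started at the extreme point $a=A$ of $A_{1}$ is integrated with any speed respecting the bound $z_{max}+3RA+Y,$ and indeed stays inside $A_{2}=[-2A,2A]$ up to time $T=\frac{A}{z_{max}+3RA+Y}:$

```python
import numpy as np

# Simulation of the support containment (hypothetical constants): since the
# transport speed in a is bounded by z_max + 3RA + Y, a characteristic
# starting in A_1 = [-A, A] cannot leave A_2 = [-2A, 2A] before
# time T = A / (z_max + 3RA + Y).
A_const = 1.0
z_max, R, Y = 1.5, 0.05, 0.8
speed_bound = z_max + 3.0 * R * A_const + Y
T = A_const / speed_bound

def speed(t):
    # any transport speed respecting |speed| <= speed_bound; illustrative choice
    return speed_bound * np.cos(3.0 * t)

# forward Euler along the characteristic from the extreme starting point a = A
a, t, n = A_const, 0.0, 1000
dt = T / n
for _ in range(n):
    a += dt * speed(t)
    t += dt
print(a, 2.0 * A_const)  # |a(T)| stays below 2A
```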
Note that we have the same transport speed with respect to $a$ as in the $g^{n+1}$ equation, and therefore we have the same support properties; with terminal data supported in $A_{1},$ and with the presumed upper bounds, the support of $y^{n+1}$ remains in $A_{2}$ as long as $T\leq\frac{A}{z_{max}+3RA+Y}.$ To finish specifying the iterated problem, we must specify $f^{n+1}$ and $r^{n+1},$ and the latter of these will require specifying $P^{n+1}$ and $K^{n+1}.$ We take $f^{n+1}$ to be the solution of the ordinary differential equation \begin{equation}\label{fIteratedEquation} (f^{n+1})'(t)+r^{n}(t)f^{n+1}(t)-\rho f^{n+1}(t)=0, \end{equation} with terminal condition $f^{n+1}(T)=1.$ Notice that the solution of this terminal value problem is \begin{equation}\label{fIteratedExplicit} f^{n+1}(t)=\exp\left\{\int_{t}^{T}r^{n}(t')-\rho\ dt'\right\}. \end{equation} We take $r^{n+1}$ to be given by \begin{equation}\label{rIterated} r^{n+1}=\frac{P^{n}}{K^{n}}, \end{equation} where we need to define $P^{n}$ and $K^{n}.$ Consistent with our previous definition of $K,$ we denote \begin{equation}\nonumber K[y,g,f]=\int\int g(H_{pp}(y+fw_{\infty}))(y+fw_{\infty})\ dadz; \end{equation} but $K[y^{n},g^{n},f^{n}]$ is not sufficient for use in our iterative scheme because we need to use the cutoff function $\psi.$ Thus, for any value of $n,$ given $y^{n},$ $f^{n},$ and $g^{n},$ we define $K^{n+1}$ as \begin{equation}\label{KIteratedDefinition} K^{n+1}=\int\int g^{n}(H_{pp}(\psi(y^{n}+f^{n}w_{\infty})))(y^{n}+f^{n}w_{\infty})\ dadz.
\end{equation} Finally, recalling $P[y,f,g]$ as defined in \eqref{definitionOfP}, we must introduce a version $P_{c}$ which involves the cutoff function $\psi:$ \begin{multline}\nonumber P_{c}[y,f,g]= -\int\int z\partial_{z}(\mu(z)g)\ dadz \\ +\int\int gH_{pp}(\psi(y+fw_{\infty})) \left(-\frac{1}{2}\sigma^{2}\partial_{zz}y-\mu\partial_{z}y-\Theta_{c}(y,f)\partial_{a}y +\rho(y+fw_{\infty})\right)\ dadz\\ +\int\int \Theta_{c}(y,f) \left(\frac{1}{2}\partial_{zz}(\sigma^{2}g)-\partial_{z}(\mu g)-\partial_{a}(g\Theta_{c}(y,f)) \right)\ dadz. \end{multline} We can then define our iterated $P$ as \begin{equation}\nonumber P^{n}=P_{c}[y^{n},f^{n},g^{n}]. \end{equation} \section{Existence and bounds for the iterates}\label{existenceAndBounds} In order to eliminate our approximation parameters, i.e. send $n\rightarrow\infty$ and $\delta\rightarrow0,$ we need to establish bounds for the iterates which are uniform with respect to $n$ and $\delta.$ We fix a value $M>1$ and we assume the following are satisfied by the $n$-th iterates: \begin{equation}\label{yInductiveHypothesis} \|y^{n}\|_{H^{s+1}}\leq M\|y_{T}\|_{H^{s+1}}, \end{equation} \begin{equation}\label{gInductiveHypothesis} \|g^{n}\|_{H^{s}}\leq M\|g_{0}\|_{H^{s}}, \end{equation} \begin{equation}\label{fInductiveHypothesis} f^{n}\in\left[\frac{1}{2},2\right],\quad\forall t\in[0,T], \end{equation} \begin{equation}\label{KInductiveHypothesis} K^{n}\geq \frac{K_{data}}{2},\quad\forall t\in[0,T]. \end{equation} We furthermore assume that the $n$-th iterates are infinitely smooth. Based on these values, we define a value $P_{max};$ we take this to be the supremum of the set of values $\{|P_{c}[\tilde{y},\tilde{f},\tilde{g}]|\},$ where $\tilde{y},$ $\tilde{g},$ and $\tilde{f}$ satisfy \begin{equation}\nonumber \|\tilde{y}\|_{H^{s+1}}\leq M\|y_{T}\|_{H^{s+1}},\qquad \|\tilde{g}\|_{H^{s}}\leq M\|g_{0}\|_{H^{s}},\qquad \tilde{f}\in[1/2,2]. 
\end{equation} With this definition, we then have our inductive hypothesis for the iterates for the interest rate: \begin{equation}\label{rInductiveHypothesis} r^{n}\in\left[-\frac{2P_{max}}{K_{data}},\frac{2P_{max}}{K_{data}}\right],\quad\forall t\in[0,T]. \end{equation} Finally, we have one more condition we wish to have satisfied for our iterates, and that is the positivity condition for $\partial_{a}v.$ Recall the definition of $W>0$ in \eqref{dataW}. Then we desire that the following condition is satisfied for $y^{n}$ and $f^{n}:$ \begin{equation}\label{WInductiveHypothesis} \min_{(t,a,z)\in[0,T]\times D} \left(y^{n}(t,a,z)+f^{n}(t)w_{\infty}\right) \geq \frac{W}{2}. \end{equation} Note that with our specification of the initial iterates, the bounds \eqref{yInductiveHypothesis}, \eqref{gInductiveHypothesis}, \eqref{fInductiveHypothesis}, and \eqref{rInductiveHypothesis} are satisfied for $n=0.$ By \eqref{deltaData1} we have satisfied \eqref{WInductiveHypothesis} as well for $n=0.$ Similarly, by \eqref{deltaData2}, we have satisfied \eqref{KInductiveHypothesis} when $n=0.$ We may also note that all of the initial iterates are in $C^{\infty}.$ We must verify that each of \eqref{yInductiveHypothesis}, \eqref{gInductiveHypothesis}, \eqref{fInductiveHypothesis}, \eqref{KInductiveHypothesis}, \eqref{rInductiveHypothesis}, and \eqref{WInductiveHypothesis} is satisfied for the $(n+1)$-st iterates, but first we must ensure that the $(n+1)$-st iterates exist.
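As an aside, the closed form \eqref{fIteratedExplicit} for $f^{n+1}$ can be checked numerically; in the sketch below, the rate $r^{n},$ the discount $\rho,$ and the horizon $T$ are hypothetical stand-ins:

```python
import numpy as np

# Check that the closed form f(t) = exp( int_t^T (r^n(t') - rho) dt' )
# solves f' + (r^n - rho) f = 0 with f(T) = 1.  The rate r^n, rho, and T
# below are illustrative values, not from the text.
rho, T = 0.05, 1.0
r = lambda t: 0.03 + 0.01 * np.sin(2.0 * np.pi * t)

def trap(y, x):  # trapezoid rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def f_explicit(t, n=20000):
    s = np.linspace(t, T, n)
    return np.exp(trap(r(s) - rho, s))

def f_ode(n=2000):  # RK4, integrated backward from the terminal value f(T) = 1
    dt, t, f = -T / n, T, 1.0
    rhs = lambda t, f: (rho - r(t)) * f
    for _ in range(n):
        k1 = rhs(t, f)
        k2 = rhs(t + dt / 2, f + dt * k1 / 2)
        k3 = rhs(t + dt / 2, f + dt * k2 / 2)
        k4 = rhs(t + dt, f + dt * k3)
        f += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return f

print(f_ode(), f_explicit(0.0))  # the two values at t = 0 agree
```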
\begin{lemma}\label{gIteratedLemma} Let $T>0,$ and let $y^{n},$ $g^{n},$ $r^{n},$ $f^{n},$ and $K^{n}$ be as described above, on the time interval $[0,T].$ There exists a unique $C^{\infty}$ solution $g^{n+1}$ to the initial value problem \eqref{gIteratedTransport}, \eqref{mollData} on the time interval $[0,T].$ \end{lemma} \begin{proof} We prove existence by the energy method, the steps of which are to introduce mollifiers, use the Picard theorem to get existence of solutions, prove an estimate uniform with respect to the mollification parameter, and then pass to the limit as the mollification parameter vanishes. To use standard theory of mollifiers, we first replace our spatial domain with a torus. We make an extension of the domain in the $z$ variable. Let $\omega\in\mathbb{N}$ be any finite degree of regularity, sufficiently large. We take $\widetilde{\sigma},$ $\widetilde{\mu},$ and $\widetilde{\Theta}$ to be $H^{\omega+2}$ extensions of $\sigma,$ $\mu,$ and $\Theta(y^{n},f^{n})$ to the domain $[z_{min}-3, z_{max}+3]$ (for $\sigma$ and $\mu$) and to the domain $A_{3}\times[z_{min}-3, z_{max}+3]$ (for $\Theta$). There are many versions of the existence of such extensions available in the literature, and we cite \cite{millerExtension} in particular. We let $\phi$ be a cutoff function which is equal to $1$ for $z\in[z_{min}-1, z_{max}+1]$ and which is equal to zero on $[z_{min}-3,z_{min}-2]$ and on $[z_{max}+2,z_{max}+3],$ and which is smooth and monotone on the remaining components of the new $z$ domain. In writing an evolution equation to approximate \eqref{gIteratedTransport}, we will replace $\sigma,$ $\mu,$ and $\Theta(y^{n},f^{n})$ with $\phi\widetilde{\sigma},$ $\phi\widetilde{\mu},$ and $\phi\widetilde{\Theta},$ respectively. 
We also replace the transport coefficient $\chi(z+r^{n}a)$ with $\phi\chi(z+r^{n}a).$ We take $\widetilde{g}_{0}$ to be an $H^{\omega}$ extension of $g_{0,\delta},$ and we will use data $\phi\chi\widetilde{g}_{0}.$ The coefficients in our new evolution equation, because they are zeroed out at the ends of the interval $[z_{min}-3, z_{max}+3],$ are periodic with respect to $z.$ Similarly, the coefficients are all also periodic with respect to $a$ on $A_{3}.$ Because of the presence of $\chi$ and $\phi$ in our proposed data, we also have periodic initial data. We call our new domain $\widetilde{D},$ and we consider this now to be a torus, i.e. we take periodic boundary conditions. We let $\mathcal{J}_{\tau}$ be a standard mollifier on the two-dimensional torus with parameter $\tau>0.$ We introduce an approximate equation: \begin{multline}\label{hEvolution} \partial_{t}h^{\tau}-\frac{1}{2}\partial_{zz}\mathcal{J}_{\tau}((\phi\widetilde{\sigma})^{2}\mathcal{J}_{\tau}h^{\tau}) \\ +\partial_{z}\mathcal{J}_{\tau}((\phi\widetilde{\mu})\mathcal{J}_{\tau}h^{\tau})+ \partial_{a}\mathcal{J}_{\tau}(\phi\chi(z+r^{n}a)\mathcal{J}_{\tau}h^{\tau}) +\partial_{a}\mathcal{J}_{\tau}(\phi\chi\widetilde{\Theta}\mathcal{J}_{\tau}h^{\tau})=0. \end{multline} As we have said, we take this evolution with initial condition \begin{equation}\nonumber h(0,\cdot)=\phi\chi\widetilde{g}_{0}. 
\end{equation} The presence of the mollifiers turns all derivatives on the right-hand side of \eqref{hEvolution} into bounded operators; the Picard Theorem \cite{majdaBertozzi} then implies that there exists a solution for a time $T_{\tau}>0.$ This solution may be continued as long as the solution does not blow up; in this case, an energy estimate, using standard mollifier properties and integration by parts, implies that the $H^{\omega}(\widetilde{D})$ norm of $h$ does not blow up on $[0,T].$ We introduce an energy, equivalent to the square of the $H^{\omega}(\widetilde{D})$ norm, \begin{equation}\nonumber E(t)=\sum_{j=0}^{\omega}\sum_{\ell=0}^{\omega-j}E_{j,\ell}(t), \qquad E_{j,\ell}(t)=\frac{1}{2}\int_{\widetilde{D}}\left(\partial_{a}^{j}\partial_{z}^{\ell}h^{\tau}(t,a,z)\right)^{2}\ dadz. \end{equation} Taking the time derivative of the energy, using the facts that $\mathcal{J}_{\tau}$ commutes with derivatives and is self-adjoint, and using other mollifier properties such as $\|\mathcal{J}_{\tau}f\|_{H^{m}}\leq\|f\|_{H^{m}}$ for any $f$ and any $m,$ and integrating by parts yields the conclusion \begin{equation}\label{tauEnergyEstimate} \frac{dE}{dt}\leq cE, \end{equation} where $c$ is independent of $\tau.$ (We do not provide further details of this energy estimate as it is very similar to the estimate in Theorem \ref{uniformBoundTheorem} below). The bound \eqref{tauEnergyEstimate} implies that the solutions $h^{\tau}$ are uniformly bounded in $H^{\omega}(\widetilde{D})$ with respect to the approximation parameter $\tau,$ and that our solutions $h^{\tau}$ all exist on the common time interval $[0,T].$ The uniform bound implies that the first derivatives of the solutions with respect to $a,$ $z,$ and $t$ are all uniformly bounded, and thus our solutions $h^{\tau}$ form an equicontinuous family. 
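The two mollifier properties used here, norm non-increase and $\mathcal{J}_{\tau}f\rightarrow f$ as $\tau\rightarrow0,$ can be illustrated with a Fourier-multiplier mollifier on a one-dimensional torus (a sketch only; the multiplier, constants, and test function below are illustrative stand-ins for the standard mollifier of the text):

```python
import numpy as np

# A Fourier-multiplier mollifier on the one-dimensional torus, a stand-in
# for J_tau: its symbol exp(-(tau k)^2) lies in (0, 1], so every Sobolev
# norm is non-increasing under mollification, and J_tau f -> f as tau -> 0.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers on the torus
f = np.exp(np.sin(x)) + 0.3 * np.cos(5.0 * x)

def sobolev_norm(u, m):                  # discrete H^m norm via Parseval
    uh = np.fft.fft(u) / N
    return np.sqrt(np.sum((1.0 + k**2) ** m * np.abs(uh) ** 2))

def mollify(u, tau):
    return np.real(np.fft.ifft(np.exp(-((tau * k) ** 2)) * np.fft.fft(u)))

for m in (0, 1, 2):                      # norm non-increase in H^m
    assert sobolev_norm(mollify(f, 0.1), m) <= sobolev_norm(f, m) + 1e-12

# L^2 error decreases as tau shrinks: J_tau f -> f
errs = [sobolev_norm(mollify(f, tau) - f, 0) for tau in (0.2, 0.1, 0.05)]
print(errs)
```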
Thus there is a uniformly convergent subsequence (which we do not relabel), as $\tau$ vanishes; we call the limit $h.$ Uniform convergence implies convergence in $L^{2}$ in a bounded domain, so we see that $h^{\tau}$ converges to $h$ in $C([0,T];L^{2}(\widetilde{D})).$ Using the uniform bound in $H^{\omega}(\widetilde{D}),$ a standard Sobolev interpolation theorem (see \cite{ambroseThesis}, for example) then implies convergence in $C([0,T];H^{\omega-1}(\widetilde{D})).$ Furthermore the uniform bound implies that we have a weak limit at every time in $H^{\omega},$ and this weak limit must be $h,$ so we have $h\in L^{\infty}([0,T];H^{\omega})$ as well. Taking the integral with respect to time of \eqref{hEvolution} and then passing to the limit as $\tau$ vanishes (this is possible because of the regularity we have established, including convergence in $C([0,T];H^{\omega-1})$), and then differentiating with respect to time, we see that $h$ satisfies \begin{equation}\label{hEquationAfterLimit} \partial_{t}h-\frac{1}{2}\partial_{zz}((\phi\widetilde{\sigma})^{2}h) +\partial_{z}(\phi\widetilde{\mu}h)+\partial_{a}(\phi\chi(z+r^{n}a)h)+\partial_{a}(\phi\chi\widetilde{\Theta}h)=0. \end{equation} When taking this limit we again use various standard mollifier properties; a good list of such properties can be found in Lemma 3.5 of \cite{majdaBertozzi}. Perhaps the most useful of these to arrive at \eqref{hEquationAfterLimit} is, for any $m\in\mathbb{N},$ \begin{equation}\nonumber \|\mathcal{J}_{\tau}f-f\|_{H^{m}}\leq\tau\|f\|_{H^{m+1}}. \end{equation} We define $g^{n+1}$ to be the restriction of $h$ to the domain $D.$ On $D,$ we have $\phi=1,$ $\widetilde\sigma=\sigma,$ $\widetilde{\mu}=\mu,$ $\widetilde{\Theta}=\Theta(y^{n},f^{n}),$ and $\widetilde{g}_{0}=g_{0,\delta}.$ Furthermore on $D$ we also have $\chi g_{0,\delta}=g_{0,\delta}.$ We conclude that $g^{n+1}$ satisfies \eqref{gIteratedTransport} and \eqref{mollData}. We have two further points to make, to complete the proof. 
First, we mention that uniqueness of solutions of the initial value problem \eqref{gIteratedTransport}, \eqref{mollData} is straightforward. The initial value problem satisfied by the difference of two solutions is a linear equation with zero forcing and zero data, and an estimate in $L^{2}$ for the difference of two smooth solutions can be made. Finally, on regularity, we mention that the regularity parameter $\omega$ was arbitrary, so we see that the solution $g^{n+1}$ is infinitely smooth with respect to the spatial variables. Upon taking higher derivatives of \eqref{gIteratedTransport} with respect to time, it can be seen that the solutions are also infinitely smooth with respect to time. This completes the proof. \end{proof} We also have existence of the iterated $y^{n+1},$ given in the following lemma. \begin{lemma}\label{yIteratedLemma} Let $y^{n},$ $g^{n},$ $r^{n},$ $f^{n},$ and $K^{n}$ be as described above. There exists a unique $C^{\infty}$ solution $y^{n+1}$ to the terminal value problem \eqref{yIteratedTransport}, \eqref{yMollData} on the time interval $[0,T].$ \end{lemma} We omit the proof of Lemma \ref{yIteratedLemma}, as the method is entirely the same as that of Lemma \ref{gIteratedLemma}. To conclude this section, we mention that it is immediate from their definitions and the smoothness assumptions on the $n$-th iterates that $f^{n+1},$ $K^{n+1},$ and $r^{n+1}$ are $C^{\infty}$ in time. \subsection{Uniform Bounds}\label{uniformSection} Recall that we have fixed $s\in\mathbb{N}$ satisfying $s\geq4,$ and we have taken $g_{0}\in H^{s}$ and $y_{T}\in H^{s+1}.$ The requirement $s\geq 4$ will guarantee that the solutions we find are classical solutions of the PDE system, and will allow us to use Sobolev embedding and related inequalities as needed.
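To be concrete about the role of $s\geq 4:$ for the rectangular domain $D\subset\mathbb{R}^{2},$ the Sobolev embedding theorem gives \begin{equation}\nonumber H^{s}(D)\hookrightarrow C^{k}(\overline{D})\qquad\text{whenever } s>k+\frac{n}{2}=k+1, \end{equation} so $s\geq 4$ places $g\in C^{2}$ and $y\in C^{3}$ in the spatial variables, which is enough regularity for classical solutions of our second-order equations.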
Note that while we have demonstrated above that the iterates are infinitely smooth, this has relied on the $C^{\infty}$ approximation $g_{0,\delta}$ to the intended data $g_{0};$ with the data $g_{0}\in H^{s}$ and $y_{T}\in H^{s+1},$ we can only expect bounds on the iterates which are uniform with respect to the parameters in these spaces. \begin{theorem}\label{uniformBoundTheorem} There exists $T_{*}>0$ such that if the time horizon satisfies $T\in(0,T_{*}),$ then for all $n\in\mathbb{N}$ and for all $\delta>0,$ the iterates $(y^{n},g^{n},f^{n},K^{n},r^{n})$ defined above satisfy \eqref{yInductiveHypothesis}, \eqref{gInductiveHypothesis}, \eqref{fInductiveHypothesis}, \eqref{KInductiveHypothesis}, \eqref{rInductiveHypothesis}, and \eqref{WInductiveHypothesis}. \end{theorem} \begin{proof} The proof will be by induction. We have remarked previously that \eqref{yInductiveHypothesis}, \eqref{gInductiveHypothesis}, \eqref{fInductiveHypothesis}, \eqref{KInductiveHypothesis}, \eqref{rInductiveHypothesis}, and \eqref{WInductiveHypothesis} hold in the case $n=0;$ this is the base case. The statements \eqref{yInductiveHypothesis}, \eqref{gInductiveHypothesis}, \eqref{fInductiveHypothesis}, \eqref{KInductiveHypothesis}, \eqref{rInductiveHypothesis}, and \eqref{WInductiveHypothesis} then together constitute the inductive hypothesis. We begin by determining a bound for the next iterate $g^{n+1}.$ We let the functional $E_{j,\ell}$ be given by \begin{equation}\nonumber E_{j,\ell}(t)=\frac{1}{2}\int_{D}\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right)^{2}\ dadz, \end{equation} and we sum over $j$ and $\ell$ to form the energy $E(t),$ \begin{equation}\nonumber E(t)=\sum_{j=0}^{s}\sum_{\ell=0}^{s-j}E_{j,\ell}(t). \end{equation} Of course, the energy $E$ is equivalent to the square of the $H^{s}$-norm of $g^{n+1}.$ We will now demonstrate a bound for the growth of the energy. 
For given values of $j$ and $\ell,$ we take the time derivative of $E_{j,\ell}:$ \begin{equation}\label{timeDerivativeEnergy} \frac{dE_{j,\ell}}{dt}=\int_{D}\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right) \left(\partial_{a}^{j}\partial_{z}^{\ell}\partial_{t}g^{n+1}\right)\ dadz. \end{equation} We therefore need to write a helpful expression for $\partial_{a}^{j}\partial_{z}^{\ell}\partial_{t}g^{n+1}.$ Applying derivatives to \eqref{gIteratedTransport}, we arrive at the expression \begin{multline}\label{ellNonzero} \partial_{a}^{j}\partial_{z}^{\ell}\partial_{t}g^{n+1}=\frac{1}{2}\sigma^{2}\partial_{a}^{j}\partial_{z}^{\ell+2}g^{n+1} +(\ell+2)\sigma(\partial_{z}\sigma)\partial_{a}^{j}\partial_{z}^{\ell+1}g^{n+1}-\mu\partial_{a}^{j}\partial_{z}^{\ell+1}g^{n+1} \\ -(\chi z+r^{n}\chi a)\partial_{a}^{j+1}\partial_{z}^{\ell}g^{n+1} -\chi\Theta_{c}(y^{n},f^{n})\partial_{a}^{j+1}\partial_{z}^{\ell}g^{n+1} +\Phi, \end{multline} where $\Phi$ is a collection of terms which will be more routine to estimate. We can write $\Phi$ explicitly: \begin{multline}\nonumber \Phi=\frac{1}{2}\sum_{m=2}^{\ell}{\ell+2 \choose m} \left(\partial_{z}^{m}\sigma^{2}\right)\partial_{a}^{j}\partial_{z}^{\ell+2-m}g^{n+1} -\sum_{m=1}^{\ell}{\ell+1 \choose m} \left(\partial_{z}^{m}\mu\right)\partial_{a}^{j}\partial_{z}^{\ell+1-m}g^{n+1} \\ -\sum_{m=1}^{j}{j+1 \choose m}\left(\partial_{a}^{m}(\chi z+r^{n}\chi a)\right)\partial_{a}^{j+1-m}\partial_{z}^{\ell}g^{n+1} \\ -\ell\sum_{m=0}^{j+1}{j+1 \choose m}(\partial_{a}^{m}\chi) \partial_{a}^{j+1-m}\partial_{z}^{\ell-1}g^{n+1}\\ +\left[\partial_{a}^{j+1}\partial_{z}^{\ell}\left(g^{n+1}\chi\Theta_{c}(y^{n},f^{n})\right) -\left(\partial_{a}^{j+1}\partial_{z}^{\ell}g^{n+1}\right)\chi\Theta_{c}(y^{n},f^{n}) \right]. \end{multline} Using inequalities for Sobolev functions, we have an estimate for $\Phi,$ namely \begin{equation}\nonumber \|\Phi\|_{L^{2}}\leq c\left(1+|r^{n}(t)|+\|\Theta_{c}(y^{n},f^{n})\|_{H^{s+1}}\right)\|g^{n+1}\|_{H^{s}}. 
\end{equation} Since $\Theta_{c}$ is smooth and since the prior iterates satisfy \eqref{yInductiveHypothesis}, \eqref{rInductiveHypothesis}, and \eqref{fInductiveHypothesis}, we see that we may bound $\Phi$ by a constant (independent of our parameters $n$ and $\delta$) times the norm of $g^{n+1},$ i.e. \begin{equation}\label{usefulPhiBound} \|\Phi\|_{L^{2}}\leq c\|g^{n+1}\|_{H^{s}}. \end{equation} We proceed by substituting \eqref{ellNonzero} into \eqref{timeDerivativeEnergy}: \begin{multline}\nonumber \frac{dE_{j,\ell}}{dt}=\int_{D}\frac{\sigma^{2}}{2}\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right) \left(\partial_{a}^{j}\partial_{z}^{\ell+2}g^{n+1}\right)\ dadz\\ + \int_{D}(\ell+2)\sigma(\partial_{z}\sigma)\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right) \left(\partial_{a}^{j}\partial_{z}^{\ell+1}g^{n+1}\right)\ dadz \\ -\int_{D}\mu\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right) \left(\partial_{a}^{j}\partial_{z}^{\ell+1}g^{n+1}\right)\ dadz \\ -\int_{D}(\chi z+r^{n}\chi a)\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right) \left(\partial_{a}^{j+1}\partial_{z}^{\ell}g^{n+1}\right)\ dadz \\ -\int_{D}\chi \Theta_{c}(y^{n},f^{n})\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right) \left(\partial_{a}^{j+1}\partial_{z}^{\ell}g^{n+1}\right)\ dadz +\int_{D}\Phi\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right)\ dadz\\ =I+II+III+IV+V+VI. \end{multline} We integrate $I$ by parts with respect to $z$ and add the result to $II,$ finding \begin{multline}\nonumber I+II=-\int_{D}\frac{\sigma^{2}}{2}\left(\partial_{a}^{j}\partial_{z}^{\ell+1}g^{n+1}\right)^{2}\ dadz \\ +(\ell+1)\int_{D}\sigma(\partial_{z}\sigma)\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right) \left(\partial_{a}^{j}\partial_{z}^{\ell+1}g^{n+1}\right)\ dadz, \end{multline} where the properties of $\sigma$ eliminate the presence of a boundary term. 
The first integral on the right-hand side could be used to find gain of regularity, but we will not need this for the present and we instead simply note that it is nonpositive. The second integral on the right-hand side can be integrated by parts with respect to $z$ once more (and there is again no boundary term), yielding \begin{equation}\nonumber I+II\leq -\frac{\ell+1}{2}\int_{D}\left(\sigma\partial_{z}^{2}\sigma+(\partial_{z}\sigma)^{2}\right) \left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right)^{2}\ dadz. \end{equation} There exists $c>0,$ then, depending on the function $\sigma$ such that \begin{equation}\label{firstTwoBounded} I+II\leq cE. \end{equation} Next, we integrate $III$ by parts with respect to the $z$ variable, and we integrate each of $IV$ and $V$ by parts with respect to the $a$ variable. This yields the following: \begin{equation}\nonumber III=\int_{D}\frac{\partial_{z}\mu}{2}\left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right)^{2}\ dadz, \end{equation} \begin{equation}\nonumber IV=\int_{D}\frac{\partial_{a}(\chi z+r^{n}\chi a)}{2} \left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right)^{2}\ dadz, \end{equation} \begin{equation}\nonumber V=\int_{D}\frac{\partial_{a}\left(\chi\Theta_{c}(y^{n},f^{n})\right)}{2} \left(\partial_{a}^{j}\partial_{z}^{\ell}g^{n+1}\right)^{2}\ dadz. \end{equation} Here, there is no boundary term when integrating by parts in $III$ because of the properties of $\mu$ at $z_{min}$ and $z_{max}.$ There are no boundary terms in $IV$ and $V$ when integrating by parts because of the presence of the factors of $\chi.$ Just as we bounded $I+II$ in \eqref{firstTwoBounded}, we may bound $III:$ \begin{equation}\label{iiiBound} III\leq cE. \end{equation} For $IV$ and $V,$ since they involve the prior iterates, we must utilize the inductive hypothesis. For $IV,$ we use \eqref{rInductiveHypothesis} to find \begin{equation}\nonumber IV\leq c\left(1+\frac{2P_{max}}{K_{data}}\right)E. 
\end{equation} Since the constants $P_{max}$ and $K_{data}$ are considered to be fixed (and, in particular, they do not depend on $n$ or $\delta$), we incorporate these into the constant $c$ to write this as \begin{equation}\label{ivBound} IV\leq cE. \end{equation} Since the function $\Theta_{c}$ is smooth, there exists a constant $c>0$ such that for all $\tilde{y}$ and $\tilde{f}$ satisfying $\|\tilde{y}\|_{H^{s+1}}\leq M\|y_{T}\|_{H^{s+1}}$ and $\tilde{f}\in[1/2,2],$ we have $\|\partial_{a}\Theta_{c}(\tilde{y},\tilde{f})\|_{L^{\infty}(D)}\leq c.$ In light of \eqref{yInductiveHypothesis} and \eqref{fInductiveHypothesis}, then, we conclude \begin{equation}\label{vBound} V\leq c E. \end{equation} Finally, we may use \eqref{usefulPhiBound} directly to bound $VI$ as \begin{equation}\label{viBound} VI\leq cE. \end{equation} Adding \eqref{firstTwoBounded}, \eqref{iiiBound}, \eqref{ivBound}, \eqref{vBound}, and \eqref{viBound}, also summing over $j$ and $\ell,$ we have \begin{equation}\nonumber \frac{dE}{dt}\leq cE, \end{equation} with this constant $c$ independent of $n$ and $\delta.$ Thus, as claimed, for the given value $M>1$ chosen above, there exists $T_{g}>0$ such that if $T\in(0,T_{g})$, then for all $t\in[0,T],$ \begin{equation}\nonumber \|g^{n+1}(t,\cdot)\|_{H^{s}}\leq M\|g_{0}\|_{H^{s}}, \end{equation} and this value of $T_{g}$ is independent of both our parameters $n$ and $\delta.$ The details for $y^{n+1}$ are very similar and we omit them. Our conclusion is that there exists $T_{y}>0$ such that if $T\in(0,T_{y}),$ then for all $t\in[0,T],$ \begin{equation}\nonumber \|y^{n+1}(t,\cdot)\|_{H^{s+1}}\leq M\|y_{T}\|_{H^{s+1}}.
\end{equation} Again, this value of $T_{y}$ is independent of $n$ and $\delta.$ We now turn to the estimates for $r^{n+1},$ $f^{n+1},$ and $K^{n+1}.$ The bound for $r^{n+1}$ is immediate from the definition \eqref{rIterated}, the definition of $P_{max},$ and the bounds in the inductive hypothesis \eqref{yInductiveHypothesis}, \eqref{gInductiveHypothesis}, \eqref{fInductiveHypothesis}, and \eqref{KInductiveHypothesis}. Given the bound \eqref{rInductiveHypothesis} and the formula \eqref{fIteratedExplicit} for $f^{n+1},$ we see that there exists $T_{f}>0,$ independent of $n$ and $\delta,$ such that if $T\in(0,T_{f})$ then for all $t\in[0,T]$ we have $f^{n+1}(t)\in[1/2,2].$ We next deal with $K^{n+1},$ as defined in \eqref{KIteratedDefinition}. Given the bounds on the $n$-th iterates in the inductive hypothesis, we see that for sufficiently small values of the time horizon, $g^{n}$ remains close to the initial data $g_{0,\delta},$ $f^{n}$ remains close to its terminal value which is $f^{n}(T)=1,$ and $y^{n}$ remains close to its terminal data $y_{T,\delta}.$ We conclude that there exists $T_{K}>0,$ with this value independent of $n$ and $\delta,$ such that if $T\in(0,T_{K}),$ then for all $t\in[0,T],$ we have $K^{n+1}(t)\geq K_{data}/2.$ (To be clear, we have already taken $\delta$ sufficiently small so that the initial iterate $K^{0}$ satisfies $K^{0}\geq 3K_{data}/4,$ and the value of $T_{K}$ is otherwise independent of $\delta.$) Finally we wish to ensure that $y^{n+1}+f^{n+1}w_{\infty}$ remains bounded below by $W/2.$ Similarly to the bound for $K^{n+1},$ the bounds of the inductive hypothesis imply that the time derivatives of $y^{n+1}$ and $f^{n+1}$ are uniformly bounded, and thus if $T$ is sufficiently small, the minimum of $y^{n+1}+f^{n+1}w_{\infty}$ remains close to its terminal value, which by \eqref{deltaData1} is at least $3W/4.$ Thus there exists $T_{W}>0$ such that if $T\in(0,T_{W}),$ then \begin{equation}\nonumber \min_{(t,a,z)\in[0,T]\times 
D}\left(y^{n+1}+f^{n+1}w_{\infty}\right)\geq\frac{W}{2}, \end{equation} and this value of $T_{W}$ is independent of $n$ and $\delta.$ Choosing $T_{*}=\min\{T_{g},T_{y},T_{f},T_{K},T_{W}\},$ the proof is complete. \end{proof} \section{Passage to the limit}\label{limitSection} We now take the limit of our iterates, proving our main theorem. \begin{theorem}\label{mainExistenceTheorem} Let $s\in\mathbb{N}$ satisfying $s\geq 4$ be given, and let $w_{\infty}>0$ be given. Let $A>0$ be given and let the spatial domain $D$ be as above. Let $y_{T}\in H^{s+1}(D)$ and $g_{0}\in H^{s}(D)$ be given, such that the support of $g_{0}$ with respect to $a$ and the support of $y_{T}$ with respect to $a$ are in the interior of the interval $[-A,A],$ and assume that $g_{0}$ is a probability measure. Assume $W>0,$ where $W$ is defined by \eqref{dataW}. There exists $T_{**}>0$ such that if $T\in(0,T_{**}),$ then there exist $y\in L^{\infty}([0,T];H^{s+1}(D))\cap C([0,T];H^{s'+1}(D))$ for all $s'<s,$ and $g\in L^{\infty}([0,T];H^{s}(D))\cap C([0,T];H^{s'}(D))$ for all $s'<s,$ and $f\in C^{1}([0,T]),$ such that $K[y,g,f]>0$ for all $t\in[0,T]$ and such that, with $r$ defined by $r=P[y,g,f]/K[y,g,f],$ the triple $(y,g,f)$ solves \eqref{yFinalEquation}, \eqref{gFinalEquation}, and \eqref{fFinalEquation} with data $g(0,\cdot)=g_{0},$ $y(T,\cdot)=y_{T},$ and $f(T)=1.$ The solution $g$ is a probability measure at each time $t\in[0,T]$ and $y+fw_{\infty}$ is positive at each time $t\in[0,T].$ \end{theorem} We make a remark on the data and the constraint $\mathcal{C}=0$ before beginning the proof. \begin{remark} Note that we have not required in our existence theorem that $\int\int ag_{0}\ dadz=0;$ the existence theorem holds whether or not the constraint is initially satisfied.
Of course, if one hopes to have $\mathcal{C}=0$ for all time, then the initial data should be taken to satisfy $\mathcal{C}(t=0)=0.$ \end{remark} \begin{proof} We have previously suppressed the dependence of the solutions on the mollification parameter $\delta,$ and we have left this value $\delta>0$ to be arbitrary. We now consider the sequence of solutions resulting from taking a specific value of $\delta$ for each $n\in\mathbb{N},$ namely $\delta=1/n.$ In this section we will show that there is a subsequence of $(y^{n},g^{n},f^{n},K^{n},r^{n})$ which converges to a solution of the transformed system. We restrict $T$ to values in $(0,T_{*}),$ with $T_{*}$ given by Theorem \ref{uniformBoundTheorem}. There will be another restriction on $T$ later in the proof. We begin with $y^{n}.$ By Theorem \ref{uniformBoundTheorem}, on the time interval $[0,T],$ the sequence $y^{n}$ is uniformly bounded in $H^{s+1}(D).$ With $s\geq 2,$ Sobolev embedding then implies that $\nabla_{a,z}y^{n}$ is bounded in $L^{\infty}$ uniformly with respect to $n.$ Inspecting the family of evolution equations \eqref{yIteratedTransport}, again using the uniform bounds of Theorem \ref{uniformBoundTheorem} and now using $s\geq 3,$ we see that $\partial_{t}y^{n}$ is bounded in $L^{\infty},$ uniformly with respect to $n.$ We conclude that $\{y^{n}:n\in\mathbb{N}\}$ is an equicontinuous family, and we apply the Arzela-Ascoli theorem to find a uniformly convergent subsequence, which we do not relabel. We call the limit $y.$ We now address regularity of the limit.
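Before doing so, we record the equicontinuity estimate just invoked explicitly; it is a standard consequence of the mean value theorem (using that $D$ is convex) together with the uniform derivative bounds above: for all $(t,a,z)$ and $(t',a',z')$ in $[0,T]\times D,$ \begin{equation}\nonumber \left|y^{n}(t,a,z)-y^{n}(t',a',z')\right|\leq\|\partial_{t}y^{n}\|_{L^{\infty}}|t-t'| +\|\nabla_{a,z}y^{n}\|_{L^{\infty}}\left(|a-a'|+|z-z'|\right), \end{equation} with the right-hand side bounded uniformly in $n.$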
The Arzela-Ascoli theorem gives convergence in $C([0,T]\times D),$ and this immediately implies convergence in $C([0,T];L^{2}(D)).$ With the uniform bound in $H^{s+1}$ from Theorem \ref{uniformBoundTheorem}, Sobolev interpolation then implies convergence in $C([0,T];H^{s'+1}),$ for any $s'<s.$ Furthermore, since the iterates are uniformly bounded in $H^{s+1},$ at every time $t\in[0,T]$ there is a weak limit in $H^{s+1}$ obeying the same bound, and this limit must again equal $y.$ Thus $y$ is also in $L^{\infty}([0,T];H^{s+1}).$ The argument for $g^{n}$ is the same, except that $g^{n}$ being bounded in $H^{s}$ rather than $H^{s+1}$ means that we require $s\geq 4$ to have the $L^{\infty}$ bound on the time derivatives. We call the limit of the subsequence (which we do not relabel) $g.$ This $g$ is in $C([0,T];H^{s'})$ for any $s'<s,$ and also is in $L^{\infty}([0,T];H^{s}).$ We next take the limit of $f^{n}.$ From the uniform bounds of Theorem \ref{uniformBoundTheorem}, inspection of \eqref{fIteratedEquation} implies that $(f^{n})'$ is uniformly bounded. Thus $\{f^{n}:n\in\mathbb{N}\}$ is an equicontinuous family on $[0,T].$ The Arzela-Ascoli theorem again applies, yielding a uniform limit of a subsequence (which we do not relabel); we call the limit $f.$ We turn now to the sequence $K^{n}.$ Considering \eqref{KIteratedDefinition} and the fact that $y^{n},$ $g^{n},$ and $f^{n}$ all converge uniformly, we see that $K^{n}$ converges to a limit $K$ which is given by \begin{equation}\nonumber K=\int\int g(H_{pp}(\psi(y+fw_{\infty})))(y+fw_{\infty})\ dadz, \end{equation} and this convergence is uniform. Since we have $K^{n}\geq K_{data}/2$ for all $n,$ we also have $K\geq K_{data}/2$ for all times. 
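For concreteness, the Sobolev interpolation used for $y^{n}$ above is the standard inequality (valid on domains regular enough for the usual interpolation theory, as in our setting): with $\theta=(s'+1)/(s+1)\in(0,1),$ \begin{equation}\nonumber \|y^{n}(t,\cdot)-y(t,\cdot)\|_{H^{s'+1}}\leq c\|y^{n}(t,\cdot)-y(t,\cdot)\|_{L^{2}}^{1-\theta} \|y^{n}(t,\cdot)-y(t,\cdot)\|_{H^{s+1}}^{\theta}, \end{equation} so the uniform $H^{s+1}$ bound together with convergence in $C([0,T];L^{2})$ yields convergence in $C([0,T];H^{s'+1})$ for any $s'<s;$ the same inequality with $\theta=s'/s$ applies to $g^{n}.$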
Finally we consider convergence of $r^{n}.$ In light of \eqref{rIterated} and since we know that $K^{n}$ converges, we only need to consider the convergence of $P^{n}.$ The convergence that we have established for $y^{n}$ implies that the derivatives of $y^{n}$ up through second order converge uniformly to the corresponding derivatives of $y.$ Similarly, the derivatives of $g^{n}$ up through second order converge uniformly to the corresponding derivatives of $g.$ This is enough regularity to ensure that $P^{n}=P_{c}[y^{n},f^{n},g^{n}]$ converges uniformly to $P_{c}[y,f,g].$ Since $P^{n}$ and $K^{n}$ both converge, we see that $r^{n}$ converges to $r=P/K,$ and this convergence is uniform. We next demonstrate that the limits $y$ and $g$ satisfy the appropriate equations. We provide the details only for $g,$ as the argument for $y$ is the same. We integrate \eqref{gIteratedTransport} with respect to time, on the interval $[0,t]:$ \begin{multline}\nonumber g^{n+1}(t,\cdot)=g_{0,1/n}+\int_{0}^{t} \frac{1}{2}\partial_{zz}(\sigma^{2}g^{n+1})-\partial_{z}(\mu g^{n+1}) -\partial_{a}(\chi(z+r^{n}a)g^{n+1}) \\ -\partial_{a}(\chi\Theta_{c}(y^{n},f^{n})g^{n+1})\ dt'. \end{multline} The uniform convergence of the iterates $y^{n},$ $g^{n},$ $f^{n},$ and $r^{n}$ implies convergence of the integral. Taking the limit, we have \begin{equation}\nonumber g(t,\cdot)=g_{0}+\int_{0}^{t} \frac{1}{2}\partial_{zz}(\sigma^{2}g)-\partial_{z}(\mu g) -\partial_{a}(\chi(z+ra)g) -\partial_{a}(\chi\Theta_{c}(y,f)g)\ dt'. \end{equation} Taking the time derivative of this, we see that \begin{equation}\label{gLimitCutoffs} \partial_{t}g-\frac{1}{2}\partial_{zz}(\sigma^{2}g)+\partial_{z}(\mu g)+\partial_{a}(\chi(z+ra)g) +\partial_{a}(\chi\Theta_{c}(y,f)g)=0. \end{equation} Similarly, we conclude that the equation satisfied by $y$ is \begin{equation}\label{yLimitCutoffs} \partial_{t}y+\frac{1}{2}\sigma^{2}\partial_{zz}y+\mu\partial_{z}y+(r-\rho)y +(\chi(z+ra))\partial_{a}y+\chi\Theta_{c}(y,f)\partial_{a}y=0.
\end{equation} The last step in the existence proof is to remove the cutoff functions $\chi$ and $\psi.$ As discussed when the iterative scheme was set up, as long as the iterates remain uniformly bounded, and as long as $T$ is small enough, the support of the iterates with respect to the $a$ variable remains within the set $A_{2}.$ Since the iterates converge uniformly, the support of $y$ and $g$ with respect to the $a$ variable also remains confined to the set $A_{2}$ throughout the interval $[0,T].$ Since the cutoff function satisfies $\chi=1$ when restricted to $A_{2}$, we see that in \eqref{gLimitCutoffs} and \eqref{yLimitCutoffs}, we have $\chi g=g$ and $\chi\partial_{a}y=\partial_{a}y.$ Since \eqref{WInductiveHypothesis} is satisfied for all $n$ and since $y^{n}$ and $f^{n}$ converge to $y$ and $f,$ we see that \begin{equation}\nonumber \min_{(t,a,z)\in[0,T]\times D}\left(y(t,a,z)+f(t)w_{\infty}\right)\geq\frac{W}{2}. \end{equation} This implies $\psi(y+fw_{\infty})=y+fw_{\infty},$ and therefore that $\Theta_{c}(y,f)=\Theta(y,f).$ We conclude that the equations satisfied by $y$ and $g$ are \eqref{yFinalEquation} and \eqref{gFinalEquation}, as desired. The proof of the existence theorem is complete. \end{proof} \section{Uniqueness}\label{uniquenessSection} We now prove uniqueness of our solutions. \begin{theorem}\label{mainUniquenessTheorem} Let $s\in\mathbb{N}$ satisfying $s\geq 4$ be given, and let $w_{\infty}>0$ be given. Let $A>0$ be given and let the spatial domain $D$ be as above. Let $y_{T}\in H^{s+1}(D)$ and $g_{0}\in H^{s}(D)$ be given, such that the support of $g_{0}$ with respect to $a$ and the support of $y_{T}$ with respect to $a$ are in the interior of the interval $[-A,A],$ and assume that $g_{0}$ is a probability measure. Assume $W>0,$ where $W$ is defined by \eqref{dataW}. 
Let $(y_{1},g_{1},f_{1})$ and $(y_{2},g_{2},f_{2})$ and the associated interest rates $r_{i}=P[y_{i},g_{i},f_{i}]/K[y_{i},g_{i},f_{i}]$ satisfy \eqref{yFinalEquation}, \eqref{gFinalEquation}, \eqref{fFinalEquation}, with data $g_{i}(0,\cdot)=g_{0},$ $y_{i}(T,\cdot)=y_{T},$ and $f_{i}(T)=1.$ Let $T>0$ be such that $y_{i}\in L^{\infty}([0,T];H^{s+1}(D))\cap C([0,T];H^{s'+1}(D)),$ for all $s'<s,$ and such that $g_{i}\in L^{\infty}([0,T];H^{s}(D))\cap C([0,T];H^{s'}(D)),$ for all $s'<s,$ and such that $f_{i}\in C^{1}([0,T]).$ Assume that $g_{i}$ and $y_{i}$ are compactly supported with respect to the $a$ variable in the interval $(-2A,2A).$ There exists $T_{***}$ such that if $T\in(0,T_{***}),$ then $(y_{1},g_{1},f_{1})=(y_{2},g_{2},f_{2}).$ \end{theorem} \begin{proof} By arguments such as those in \cite{constantinEscher} we see that the evolution for $\partial_{a}v=y+fw_{\infty}$ is positivity preserving, and thus we do not need to assume an explicit lower bound for $y+fw_{\infty}$ over the given interval $[0,T].$ Similarly we could dispense with the explicit bound on the support with respect to the $a$ variable, but we do state it here so as to keep the domain consistent with the solutions we have already proved to exist. We define three components for the energy for the difference of two solutions, called $E_{d,g},$ $E_{d,y},$ and $E_{d,f},$ where \begin{equation}\nonumber E_{d,g}(t)=\frac{1}{2}\int\int(g_{1}-g_{2})^{2}\ dadz, \end{equation} \begin{equation}\nonumber E_{d,y}(t)=\frac{1}{2}\int\int|\nabla_{a,z}(y_{1}-y_{2})|^{2}\ dadz, \end{equation} and \begin{equation}\nonumber E_{d,f}=\sup_{t\in[0,T]}\frac{1}{2}|f_{1}(t)-f_{2}(t)|^{2}. 
\end{equation} Note that $E_{d,g}(0)=0$ and that $E_{d,y}(T)=0.$ We start by estimating $E_{d,f}.$ Noting that for $i\in\{1,2\}$ we have the equations $f_{i}'=(\rho-r_{i})f_{i},$ and that $f_{i}(T)=1,$ we can write, for any $t\in[0,T],$ \begin{equation}\nonumber \frac{1}{2}|f_{1}(t)-f_{2}(t)|^{2}=-\int_{t}^{T}(f_{1}(t')-f_{2}(t'))(f_{1}'(t')-f_{2}'(t'))\ dt'. \end{equation} Substituting from the equations for $f_{i}'$ and adding and subtracting, this becomes \begin{equation}\nonumber \frac{1}{2}|f_{1}(t)-f_{2}(t)|^{2}=-\int_{t}^{T}(f_{1}-f_{2})(r_{2}-r_{1})f_{1}\ dt' -\int_{t}^{T}(f_{1}-f_{2})^{2}(\rho-r_{2})\ dt'. \end{equation} Taking the supremum with respect to time and performing some other manipulations, we can bound this as \begin{equation}\label{EdfIntermediate} E_{d,f}\leq cTE_{d,f}+cT|r_{1}-r_{2}|_{L^{\infty}}^{2}. \end{equation} We will now work with $r_{1}-r_{2}.$ Since $r_{i}=P_{i}/K_{i},$ it is clear that at any time $r_{1}-r_{2}$ can be bounded in terms of $K_{1}-K_{2}$ and $P_{1}-P_{2}.$ We consider $K_{1}-K_{2}$ first. For any $t\in[0,T],$ we have \begin{multline}\nonumber |K_{1}(t)-K_{2}(t)|=\Bigg|\int\int g_{1}(H_{pp}(y_{1}+f_{1}w_{\infty}))(y_{1}+f_{1}w_{\infty})\ dadz \\ -\int\int g_{2}(H_{pp}(y_{2}+f_{2}w_{\infty}))(y_{2}+f_{2}w_{\infty})\ dadz\Bigg|. \end{multline} After some adding and subtracting and using a Lipschitz estimate for $H_{pp},$ it is evident that this can be bounded by \begin{equation}\label{KDifferenceAtT} |K_{1}(t)-K_{2}(t)|\leq c(E_{d,g}^{1/2}+E_{d,y}^{1/2}+E_{d,f}^{1/2}). \end{equation} We will need \eqref{KDifferenceAtT} as well as the supremum of this with respect to time, \begin{equation}\label{KDifferenceSup} |K_{1}-K_{2}|_{L^{\infty}}\leq c\left(\sup_{t\in[0,T]}\left(E_{d,g}^{1/2}+E_{d,y}^{1/2}\right)+E_{d,f}^{1/2}\right). \end{equation} The difference $P_{1}-P_{2}$ is similar but slightly more involved, as we must integrate by parts in some instances.
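Before turning to $P_{1}-P_{2},$ we record the manipulation behind \eqref{EdfIntermediate}; it is an application of Young's inequality together with the uniform bounds on $f_{1}$ and on $\rho-r_{2}.$ For instance, for the first of the two integrals, \begin{multline}\nonumber \left|\int_{t}^{T}(f_{1}-f_{2})(r_{2}-r_{1})f_{1}\ dt'\right| \leq\|f_{1}\|_{L^{\infty}}\int_{t}^{T}\frac{1}{2}|f_{1}-f_{2}|^{2}+\frac{1}{2}|r_{1}-r_{2}|^{2}\ dt'\\ \leq cTE_{d,f}+cT|r_{1}-r_{2}|_{L^{\infty}}^{2}, \end{multline} and the second integral is bounded by $cTE_{d,f}$ directly.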
We start by noting that the definition \eqref{gFinalEquation} of $P$ includes three terms, so we decompose $P_{1}-P_{2}$ as \begin{equation}\nonumber P_{1}-P_{2}=\Upsilon_{I}+\Upsilon_{II}+\Upsilon_{III}. \end{equation} We believe the meaning here is clear, and we will only write out $\Upsilon_{II}$ in detail. We add and subtract to decompose $\Upsilon_{II}$ as \begin{equation}\nonumber \Upsilon_{II}=\sum_{j=1}^{7}\Upsilon_{II,j}, \end{equation} where we have the following definitions: \begin{multline}\nonumber \Upsilon_{II,1}=\int\int(g_{1}-g_{2})H_{pp}(y_{1}+f_{1}w_{\infty})\cdot \\ \cdot\left(-\frac{1}{2}\sigma^{2}\partial_{zz}y_{1}-\mu\partial_{z}y_{1}-H_{p}(y_{1}+f_{1}w_{\infty})\partial_{a}y_{1} +\rho(y_{1}+f_{1}w_{\infty})\right)\ dadz, \end{multline} \begin{multline}\nonumber \Upsilon_{II,2}=\int\int g_{2}\left[H_{pp}(y_{1}+f_{1}w_{\infty})-H_{pp}(y_{2}+f_{2}w_{\infty})\right]\cdot \\ \cdot\left(-\frac{1}{2}\sigma^{2}\partial_{zz}y_{1}-\mu\partial_{z}y_{1}-H_{p}(y_{1}+f_{1}w_{\infty})\partial_{a}y_{1} +\rho(y_{1}+f_{1}w_{\infty})\right)\ dadz, \end{multline} \begin{equation}\nonumber \Upsilon_{II,3}=\int\int g_{2}H_{pp}(y_{2}+f_{2}w_{\infty}) \left(-\frac{1}{2}\sigma^{2}\partial_{zz}(y_{1}-y_{2})\right)\ dadz, \end{equation} \begin{equation}\nonumber \Upsilon_{II,4}=\int\int g_{2}H_{pp}(y_{2}+f_{2}w_{\infty}) \left(-\mu\partial_{z}(y_{1}-y_{2})\right)\ dadz, \end{equation} \begin{equation}\nonumber \Upsilon_{II,5}=\int\int g_{2}H_{pp}(y_{2}+f_{2}w_{\infty}) \left(-H_{p}(y_{1}+f_{1}w_{\infty})+H_{p}(y_{2}+f_{2}w_{\infty})\right)\partial_{a}y_{1}\ dadz, \end{equation} \begin{equation}\nonumber \Upsilon_{II,6}=\int\int g_{2}H_{pp}(y_{2}+f_{2}w_{\infty}) \left(-H_{p}(y_{2}+f_{2}w_{\infty})\partial_{a}(y_{1}-y_{2})\right)\ dadz, \end{equation} and \begin{equation}\nonumber \Upsilon_{II,7}=\int\int g_{2}H_{pp}(y_{2}+f_{2}w_{\infty})\rho(y_{1}-y_{2}+(f_{1}-f_{2})w_{\infty})\ dadz. 
\end{equation} It is immediate that $\Upsilon_{II,1},$ $\Upsilon_{II,2},$ $\Upsilon_{II,5},$ and $\Upsilon_{II,7}$ may be bounded in terms of $E_{d,g},$ $E_{d,y},$ and $E_{d,f};$ note that Lipschitz estimates for $H_{p}$ and $H_{pp}$ are needed, but these are smooth functions and the Lipschitz estimates are thus available. The terms $\Upsilon_{II,3}$ and $\Upsilon_{II,4}$ may be bounded in terms of $E_{d,y}$ after integrating by parts with respect to the $z$ variable; note that there are no boundary terms because of the properties of the diffusion and transport coefficients, $\sigma$ and $\mu,$ at the boundaries of the domain. The term $\Upsilon_{II,6}$ can be bounded in terms of $E_{d,y}$ after an integration by parts with respect to the $a$ variable; there is no boundary term because solutions are compactly supported with respect to the $a$ variable. We omit further details, and the result of these and similar considerations is the bounds \begin{equation}\nonumber |P_{1}(t)-P_{2}(t)|\leq c(E_{d,g}^{1/2}+E_{d,y}^{1/2}+E_{d,f}^{1/2}), \end{equation} and \begin{equation}\nonumber |P_{1}-P_{2}|_{L^{\infty}}\leq c\left(\sup_{t\in[0,T]}\left(E_{d,g}^{1/2}+E_{d,y}^{1/2}\right)+E_{d,f}^{1/2}\right). \end{equation} From our bounds on differences of $K$ and $P,$ we conclude that at each $t\in[0,T],$ \begin{equation}\nonumber |r_{1}(t)-r_{2}(t)|\leq c\left(E_{d,g}^{1/2}+E_{d,y}^{1/2}+E_{d,f}^{1/2}\right), \end{equation} and taking the supremum in time, \begin{equation}\label{rDifferenceBound} |r_{1}-r_{2}|_{L^{\infty}}\leq c\left(\sup_{t\in[0,T]}\left(E_{d,g}^{1/2}+E_{d,y}^{1/2}\right)+E_{d,f}^{1/2}\right). \end{equation} Using this in \eqref{EdfIntermediate}, our bound for $E_{d,f}$ is \begin{equation}\label{EdfFinal} E_{d,f}\leq cT\left(\sup_{t\in[0,T]}\left(E_{d,g}+E_{d,y}\right)+E_{d,f}\right).
\end{equation} We next establish that there exists $c>0$ such that \begin{equation}\label{uniqToIntegrate1} \frac{dE_{d,g}}{dt}\leq c(E_{d,g}+E_{d,y}+E_{d,f}), \end{equation} and \begin{equation}\label{uniqToIntegrate2} \frac{dE_{d,y}}{dt}\leq c(E_{d,g}+E_{d,y}+E_{d,f}). \end{equation} To this end, we take the time derivative of $E_{d,g}:$ \begin{equation}\nonumber \frac{dE_{d,g}}{dt}=\int\int(g_{1}-g_{2})\partial_{t}(g_{1}-g_{2})\ dadz. \end{equation} We then substitute from the equations \eqref{gFinalEquation} satisfied by each $g_{i},$ and add and subtract: \begin{multline}\nonumber \frac{dE_{d,g}}{dt}=\frac{1}{2}\int\int(g_{1}-g_{2})\partial_{zz}(\sigma^{2}(g_{1}-g_{2}))\ dadz \\ -\int\int(g_{1}-g_{2})\partial_{z}(\mu(g_{1}-g_{2}))\ dadz -\int\int(g_{1}-g_{2})\partial_{a}((r_{1}-r_{2})ag_{1})\ dadz \\ -\int\int(g_{1}-g_{2})\partial_{a}((z+r_{2}a)(g_{1}-g_{2}))\ dadz \\ -\int\int(g_{1}-g_{2})\partial_{a}((g_{1}-g_{2})\Theta(y_{1},f_{1}))\ dadz \\ -\int\int(g_{1}-g_{2})\partial_{a}(g_{2}(\Theta(y_{1},f_{1})-\Theta(y_{2},f_{2})))\ dadz. \end{multline} There are six terms on the right-hand side, and estimating these is very much like the estimate for $g^{n+1}$ in the proof of Theorem \ref{uniformBoundTheorem}. Specifically, for the first term, the two derivatives with respect to $z$ should be applied, and then some integrations by parts can be made. For the second term, the one derivative with respect to $z$ can be applied, and then an integration by parts can be made. To estimate the third term, the bound \eqref{rDifferenceBound} is employed. For the fourth and fifth terms, the derivative with respect to $a$ should be applied, and then an integration by parts can be made. For the sixth term, the derivative with respect to $a$ should be applied, a Lipschitz estimate for $\Theta$ is used, and a further addition and subtraction can be utilized as well. We omit further details. 
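As a brief illustration of these omitted computations (with the boundary terms vanishing, as before, by the properties of $\sigma$), two integrations by parts in $z$ give for the first term \begin{multline}\nonumber \frac{1}{2}\int\int(g_{1}-g_{2})\partial_{zz}(\sigma^{2}(g_{1}-g_{2}))\ dadz =-\frac{1}{2}\int\int\sigma^{2}\left(\partial_{z}(g_{1}-g_{2})\right)^{2}\ dadz\\ +\frac{1}{4}\int\int\left(\partial_{zz}(\sigma^{2})\right)(g_{1}-g_{2})^{2}\ dadz \leq cE_{d,g}, \end{multline} where the first integral on the right-hand side is nonpositive and has been discarded in the final inequality, and the second is bounded using the smoothness of $\sigma.$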
Integrating \eqref{uniqToIntegrate1} forward in time, and using the initial data, we find \begin{multline}\label{uniqToAdd1} E_{d,g}(t)\leq c\int_{0}^{t}E_{d,g}(t')+E_{d,y}(t')+E_{d,f}\ dt' \\ \leq cT\left(\sup_{t\in[0,T]}\left(E_{d,g}(t)+E_{d,y}(t)\right)+E_{d,f}\right). \end{multline} Integrating \eqref{uniqToIntegrate2} backward in time, and using the terminal data, we find \begin{multline}\label{uniqToAdd2} E_{d,y}(t)\leq c\int_{t}^{T}E_{d,g}(t')+E_{d,y}(t')+E_{d,f}\ dt' \\ \leq cT\left(\sup_{t\in[0,T]}\left(E_{d,g}(t)+E_{d,y}(t)\right)+E_{d,f}\right). \end{multline} Adding \eqref{EdfFinal}, \eqref{uniqToAdd1}, and \eqref{uniqToAdd2}, and taking the supremum in time and reorganizing terms, we find \begin{equation}\nonumber (1-cT)\left(E_{d,f}+\sup_{t\in[0,T]}\left(E_{d,g}(t)+E_{d,y}(t)\right)\right)\leq 0. \end{equation} Thus as long as $0<T<1/c,$ we have $y_{1}=y_{2},$ $g_{1}=g_{2},$ and $f_{1}=f_{2}.$ \end{proof} \section{Discussion}\label{discussionSection} We mentioned above that we would remark again on the difference between the choice of spatial domain here and that in \cite{mollPhilTrans}. We have taken the same domain with respect to the $z$ variable, but in \cite{mollPhilTrans} the domain with respect to the $a$ variable was taken to be $[a_{min},\infty)$ for a given $a_{min}<0.$ We have instead taken the initial support of our functions with respect to the $a$ variable in $[-A,A]$ for a given $A>0,$ and the support of our solutions has remained in $[-2A,2A]$ over the time interval $[0,T].$ If we take the view that $a_{min}<-2A,$ then our solutions fit into the framework of \cite{mollPhilTrans} with regard to this aspect. We also mentioned above that we would comment on our assumption that the range of $u'$ is equal to $(0,\infty),$ as would be the case, for instance, if $u(c)=\sqrt{c}.$ This assumption is only for simplicity and the general case can be treated by our same method.
We stated in Section \ref{formulationSection} that the quantity in the definition of the Hamiltonian is maximized when $p=u'(c),$ and so $c=(u')^{-1}(p).$ This formula is still valid if $p$ is in $\mathrm{Range}(u'),$ which is necessarily an interval. Thus the formula we have used throughout the work is valid for values of $p$ in a given interval. But $p$ stands in for $\partial_{a}v,$ and the method we have applied does find solutions where $\partial_{a}v=y+fw_{\infty}$ only takes values in a given interval. For a general utility function, the terminal data $y_{T}+w_{\infty}$ can be taken with values in the appropriate interval, and the time horizon $T$ can be taken sufficiently small so that solutions remain in this interval. We have discussed in the introduction that we can show that in some cases, the solutions we have proved to exist are not solutions of the original system. For every solution we have proved to exist via Theorem \ref{mainExistenceTheorem}, there is associated a value of the constant $\mathcal{Q}.$ If this constant $\mathcal{Q}$ is nonzero, then the solution does not solve the original problem, i.e., if $\mathcal{Q}\neq0,$ then $\mathcal{C}\not\equiv0.$ It is straightforward to see that we can guarantee in some cases that $\mathcal{Q}\neq0.$ We define \begin{equation}\nonumber \mathcal{Q}_{data}= \int\int (z+H_{p}(y_{T}+w_{\infty}))g_{0}\ dadz. \end{equation} Assume that $g_{0},$ $y_{T},$ and $w_{\infty}$ are specified such that $\mathcal{Q}_{data}\neq0.$ Then for sufficiently small values of $T>0,$ for the solutions $(y,g,f)$ proved to exist in our main theorem, the solution will not vary much from the data. Therefore for small values of $T,$ we will have $\mathcal{Q}$ close to $\mathcal{Q}_{data},$ and $\mathcal{Q}$ will therefore be nonzero. 
As mentioned in the introduction, the authors of \cite{mollPhilTrans} proposed a restriction on the choice of terminal values for $v,$ and thus in our case for the terminal data for $\partial_{a}v,$ which is $y_{T}+w_{\infty}.$ In particular they proposed that $T$ should be taken to be fairly large and the final value of $v$ should be associated to a stationary solution. Since a stationary solution can be viewed as the infinite-$T$ limit of solutions of the system under consideration, and stationary solutions would satisfy $\mathcal{C}=0,$ this proposed data would be expected to yield solutions satisfying only $\mathcal{C}\approx 0.$ Further work is warranted, though, to find solutions which satisfy the constraint exactly. Specifically, given a value of the time horizon $T$ and the initial distribution $g_{0}$ initially satisfying the constraint, the author intends to perform computational and analytical studies seeking existence of terminal data $y_{T}+w_{\infty}$ which yield $\mathcal{Q}=\mathcal{C}=0.$ \section*{Acknowledgments} The author gratefully acknowledges support from the National Science Foundation through grant DMS-1515849. \section*{Conflict of Interest} The author declares that there is no conflict of interest. \end{document}
\begin{document} \vfuzz2pt \hfuzz2pt \newtheorem{thm}{Theorem}[section] \newtheorem{corollary}[thm]{Corollary} \newtheorem{lemma}[thm]{Lemma} \newtheorem{proposition}[thm]{Proposition} \newtheorem{defn}[thm]{Definition} \newtheorem{remark}[thm]{Remark} \newtheorem{example}[thm]{Example} \newtheorem{fact}[thm]{Fact} \ \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\mathbb R}{\mathbb R} \newcommand{\varepsilon}{\varepsilon} \newcommand{\longrightarrow}{\longrightarrow} \newcommand{\mathbf{B}(X)}{\mathbf{B}(X)} \newcommand{\mathcal{A}}{\mathcal{A}} \def Proof.\ { Proof.\ } \font\lasek=lasy10 \chardef\kwadrat="32 \def{\lasek\kwadrat}{{\lasek\kwadrat}} \def \lower 2pt\hbox{\kwadracik} { \lower 2pt\hbox{{\lasek\kwadrat}} } \newcommand*{\C}{\mathbf{C}} \newcommand*{\R}{\mathbf{R}} \newcommand*{\Z}{\mathbf {Z}} \deff:M\longrightarrow \C ^n{f:M\longrightarrow \C ^n} \def\hbox{\rm det}\, {\hbox{\rm det}\, } \def\hbox{\rm det }_{\C}{\hbox{\rm det }_{\C}} \def\hbox{\rm i}{\hbox{\rm i}} \def\hbox{\rm tr}\, {\hbox{\rm tr}\, } \def\hbox{\rm rk}\,{\hbox{\rm rk}\,} \def\hbox{\rm vol}\,{\hbox{\rm vol}\,} \def\hbox{\rm Im}\, {\hbox{\rm Im}\, } \def\hbox{\rm Re}\, {\hbox{\rm Re}\, } \def\hbox{\rm int}\, {\hbox{\rm int}\, } \def\hbox{\rm e}{\hbox{\rm e}} \def\partial _u{\partial _u} \def\partial _v{\partial _v} \def\partial _{u_i}{\partial _{u_i}} \def\partial _{u_j}{\partial _{u_j}} \def\partial {u_k}{\partial {u_k}} \def\hbox{\rm div}\,{\hbox{\rm div}\,} \def\hbox{\rm Ric}\,{\hbox{\rm Ric}\,} \def\r#1{(\ref{#1})} \def\hbox{\rm ker}\,{\hbox{\rm ker}\,} \def\hbox{\rm im}\, {\hbox{\rm im}\, } \def\hbox{\rm I}\,{\hbox{\rm I}\,} \def\hbox{\rm id}\,{\hbox{\rm id}\,} \def\hbox{{\rm exp}^{\tilde\nabla}}\.{\hbox{{\rm exp}^{\tilde\nabla}}\.} \def{\mathcal C}^{k,a}{{\mathcal C}^{k,a}} \def{\mathcal C}^{k+1,a}{{\mathcal C}^{k+1,a}} \def{\mathcal C}^{1,a}{{\mathcal C}^{1,a}} \def{\mathcal 
C}^{2,a}{{\mathcal C}^{2,a}} \def{\mathcal C}^{3,a}{{\mathcal C}^{3,a}} \def{\mathcal C}^{0,a}{{\mathcal C}^{0,a}} \def{\mathcal F}^{0}{{\mathcal F}^{0}} \def{\mathcal F}^{n-1}{{\mathcal F}^{n-1}} \def{\mathcal F}^{n}{{\mathcal F}^{n}} \def{\mathcal F}^{n-2}{{\mathcal F}^{n-2}} \def{\mathcal H}^n{{\mathcal H}^n} \def{\mathcal H}^{n-1}{{\mathcal H}^{n-1}} \def\mathcal C^{\infty}_{emb}(M,N){\mathcal C^{\infty}_{emb}(M,N)} \def\mathcal M{\mathcal M} \def\mathcal E _f{\mathcal E _f} \def\mathcal E _g{\mathcal E _g} \def\mathcal N _f{\mathcal N _f} \def\mathcal N _g{\mathcal N _g} \def\mathcal T _f{\mathcal T _f} \def\mathcal T _g{\mathcal T _g} \def{\mathcal Diff}^{\infty}(M){{\mathcal Diff}^{\infty}(M)} \def\mathcal C^{\infty}_{emb}(M,M){\mathcal C^{\infty}_{emb}(M,M)} \def{\mathcal U}^1 _f{{\mathcal U}^1 _f} \def{\mathcal U} _f{{\mathcal U} _f} \def{\mathcal U} _g{{\mathcal U} _g} \def\[f]{{\mathcal U}^1 _{[f]}} \title{A moduli space of minimal affine Lagrangian submanifolds} \author{Barbara Opozda} \subjclass{ Primary: 53C40, 57R40, 58B99 Secondary: 53C38, 58A10} \keywords{infinite dimensional Fr\'echet manifold, variation, phase function, tubular mapping, H\"older-Banach space, differential operator, elliptic regularity theorems} \thanks{The research partially supported by the grant NN 201 545738} \maketitle \address{Instytut Matematyki UJ, ul. \L ojasiewicza 6, 30-348 Cracow, Poland} \email{Barbara.Opozda@im.uj.edu.pl} \vskip 1in \noindent \vskip 0.5in \noindent {\bf Abstract.} It is proved that the moduli space of all connected compact orientable embedded minimal affine Lagrangian submanifolds of a complex equiaffine space constitutes an infinite dimensional Fr\'echet manifold (if it is not $\emptyset$). The moduli space of all connected compact orientable metric Lagrangian embedded surfaces in an almost K\"ahler 4-dimensional manifold forms an infinite dimensional Fr\'echet manifold (if it is not $\emptyset$). \maketitle \section{Introduction} R. 
McLean proved in \cite{McL} that special Lagrangian submanifolds near a compact special Lagrangian submanifold of a Calabi-Yau manifold form a manifold of dimension $b_1$, where $b_1$ is the first Betti number of the submanifold. Since then, a few papers have been published generalizing this result to cases where the ambient space is not a Calabi-Yau manifold but a more general type of space. All those cases are, in fact, within metric geometry. The aim of this paper is to prove a similar result in the non-metric case. Moreover, we prove a global result, that is, we describe the set of all minimal affine Lagrangian embeddings of a compact manifold. It turns out that this set has a nice structure. Namely, it is an infinite dimensional Fr\'echet manifold modeled on the Fr\'echet space of all closed $(n-1)$-forms on the submanifold, where $n$ is the complex dimension of the ambient space. The main result of this paper says that the set of all minimal affine Lagrangian embeddings of a compact manifold into an equiaffine complex space is a submanifold of the Fr\'echet manifold of all compact submanifolds of the complex equiaffine space. We provide a rigorous proof of this fact. It seems that, from the viewpoint of differential geometry, non-metric analogues of Calabi-Yau manifolds are equiaffine complex manifolds, that is, complex manifolds equipped with a torsion-free complex connection and a non-vanishing covariant constant complex volume form. There are many very natural complex equiaffine manifolds. For instance, complex affine hyperspheres of the complex affine space $\C ^n$, with an induced equiaffine structure obtained in a way standard in affine differential geometry (see \cite{DV}), are examples. Equiaffine structures are, in general, non-metrizable. For instance, the complex hyperspheres of $\C ^n$ with the induced equiaffine structure are non-metrizable. In particular, they are not related to Stenzel's metric.
If $N$ is a complex $n$-dimensional space with a complex structure $J$, then an $n$-dimensional real submanifold $M$ of $N$ is affine Lagrangian if $JTM$ is transversal to $TM$. Of course, if $N$ is almost Hermitian, then Lagrangian (in the metric sense) submanifolds, for which $JTM$ is orthogonal to $TM$, are affine Lagrangian, and there are many affine Lagrangian submanifolds which are not metric Lagrangian even if the ambient space is almost Hermitian. A metric structure is not necessary in order to discuss the minimality of submanifolds. It is sufficient to have induced volume elements on submanifolds. Such a situation exists in the case of affine Lagrangian submanifolds. In this case there does not exist (in general) any mean curvature vector, but there exists the Maslov 1-form, which can play, in some situations, a role similar to that played by the mean curvature vector. Note that in the general affine case we do not have any canonical duality between tangent vectors and 1-forms. The vanishing of the Maslov form implies that the submanifold is a point where a naturally defined volume functional attains its minimum for compactly supported variations. Affine Lagrangian submanifolds have a phase function. It turns out that a connected affine Lagrangian submanifold is minimal if and only if its phase function is constant. If a connected affine Lagrangian submanifold is minimal (i.e. of constant phase), then after rescaling the complex volume form in the ambient space we can assume that the constant phase function vanishes on $M$. Analogously to the metric case, an affine Lagrangian submanifold is called special if its phase function vanishes on $M$. The notion of special submanifolds corresponds to the notion of calibrations. Calibrations in Riemannian geometry were introduced in the famous paper \cite{HL}.
The notion can be generalized to the affine case and, as in the metric case, an affine Lagrangian submanifold is special if and only if it is calibrated by the real part of the complex volume form in the ambient space. The minimality of affine Lagrangian submanifolds is discussed in \cite{O_1} and \cite{O}. In this paper we try to assume as little as possible. In particular, we do not assume that the ambient space $N$ is complex equiaffine; we only assume that it is almost complex and endowed with a nowhere vanishing closed complex $n$--form $\Omega$, where $2n=\dim _{\R}N$. Then affine Lagrangian immersions $f:M\to N$, where $\dim M=n$, are those for which $f^*\Omega \ne 0$ at each point of $M$. If $M$ is oriented, then $\Omega$ induces on $M$ a unique volume element $\nu $. We have $f^*\Omega=\hbox{\rm e} ^{\hbox{\rm i}\theta}\nu $, where $\theta $ is the phase function of $f$. In this paper minimal (relative to $\Omega$) affine Lagrangian submanifolds will be those (by definition) which have constant phase. We shall prove the following theorem. \begin{thm}\label{main} Let $M$ be a connected compact oriented $n$-dimensional real manifold admitting a minimal affine Lagrangian embedding into an almost complex $2n$--dimensional manifold $N$ equipped with a nowhere-vanishing complex closed $n$-form. The set of all minimal affine Lagrangian embeddings of $M$ into $N$ has the structure of an infinite dimensional manifold modeled on the Fr\'echet vector space $\mathcal C ^{\infty}(\mathcal F_{closed}^{n-1})$ of all smooth closed $(n-1)$--forms on $M$. \end{thm} A precise formulation of this theorem is given in Section 4. The Fr\'echet manifold in Theorem \ref{main} may have many connected components. In the above theorem the manifold $M$ is fixed, but the theorem in fact concerns all compact (connected, oriented) minimal affine Lagrangian embedded submanifolds. Non-diffeomorphic submanifolds are in different connected components.
At the end of this paper we observe that, in contrast with the metric geometry, in the affine case there exist non-smooth minimal submanifolds of smooth manifolds. Almost the same consideration as in the proof of Theorem \ref{main} gives the statement that the set of all (metric) Lagrangian embeddings of a connected compact 2-dimensional manifold into a 4-dimensional almost K\"ahler manifold forms an infinite dimensional Fr\'echet manifold modeled on the Fr\'echet vector space $\mathcal C ^{\infty} (\mathcal F ^1_{closed})$ (if it is not $\emptyset$). We also give a simple application of Theorem \ref{main} in the case where the ambient space is the tangent bundle of a flat manifold. \section{Basic notions} Let $N$ be a $2n$--dimensional almost complex manifold with an almost complex structure $J$. Let $M$ be a connected $n$-dimensional manifold and $f:M\to N$ be an immersion. We say that $f$ is affine Lagrangian (some authors call it purely real or totally real) if the bundle $Jf_*(TM)$ is transversal to $f_*(TM)$. We shall call this transversal bundle the normal bundle. The almost complex structure $J$ gives an isomorphism between the normal bundle and the tangent bundle $TM$. If $\Omega$ is a nowhere vanishing complex $n$-form on $N$, then $f$ is affine Lagrangian if and only if $f^*\Omega \ne 0$ at each point of $M$. If $f:M \to N$ is a mapping such that $f^*\Omega \ne 0$ at each point of $M$, then $f$ is automatically an immersion. Recall now the notion of a phase. Let $\bf V$ be an $n$-dimensional complex vector space with a complex volume form $\Omega$ and $\bf U$ be its $n$-dimensional real oriented vector subspace such that $\Omega _{| {\bf U}}\ne 0$. Let $X_1,..., X_n$ be a positively oriented basis of $\bf U$. Then $\Omega (X_1,...,X_n)=\mu e^{\hbox{\rm i}\theta}$, where $\mu \in \R^+$ and $\theta \in \R$. If we change the basis $X_1,..., X_n$ to another positively oriented basis of $\bf U$, then $e^{\hbox{\rm i}\theta}$ remains unchanged.
$\theta$ is called the phase or the angle of the subspace $\bf U$. Assume $N$ is endowed with a nowhere vanishing complex volume form $\Omega$ and $M$ is oriented. For an affine Lagrangian immersion $f:M\longrightarrow N$, at each point $x$ of $M$ we have the phase $\theta _x$ of the tangent vector subspace $f_*(T_xM)$ of $T_{f(x)} N$. The phase function $x\longrightarrow \theta _x$ is multi-valued. In general, if we want the phase function to be smooth, it can be defined only locally. For each point $x\in M$ there is a smooth phase function of $f$ defined around $x$. The constancy of the phase function is a well defined global notion, that is, if $\theta$ is locally constant, then it can be chosen globally constant. Recall a few facts concerning the situation where the ambient space is complex equiaffine or, in the metric case, Calabi-Yau. A Lagrangian submanifold (affine or metric) is minimal if and only if it is volume minimizing for compactly supported variations. This is equivalent to the fact that the Maslov form vanishes. Moreover, Lagrangian submanifolds (affine or metric) are minimal if and only if they have constant phase. In this paper, where, in general, we do not assume that the ambient space is complex equiaffine, we shall say (by definition) that an affine Lagrangian immersion $f$ is minimal if and only if its phase is constant. As usual, if the phase constantly vanishes on $M$, then the submanifold will be called special. If the phase $\theta$ is constant, then we can rescale $\Omega$ in the ambient space by multiplying it by $\hbox{\rm e} ^{-\hbox{\rm i} \theta}$, and after this change the given immersion becomes special. But if we have a family of minimal affine Lagrangian immersions of $M$ into $N$, and we adjust the complex volume form $\Omega$ to one member of the family, then, in general, the rest of the family remains only minimal.
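As a simple illustration of the notions of phase and rescaling (an elementary example added here; it is not used in the sequel), consider the model case $N=\C ^n$ with $\Omega =dz_1\wedge ...\wedge dz_n$. For a fixed $\theta\in\R$ take the real oriented subspace $$ {\bf U}_{\theta}=\hbox{\rm span}_{\R}\{\hbox{\rm e}^{\hbox{\rm i}\theta}e_1,\, e_2,\, ...\, ,\, e_n\}\subset \C ^n, $$ where $e_1,..., e_n$ is the standard complex basis. Then $$ \Omega (\hbox{\rm e}^{\hbox{\rm i}\theta}e_1, e_2,..., e_n)=\hbox{\rm e}^{\hbox{\rm i}\theta}, $$ so $\mu =1$ and the phase of ${\bf U}_{\theta}$ equals $\theta$. An affine Lagrangian submanifold with tangent spaces parallel to ${\bf U}_{\theta}$ has constant phase $\theta$, hence is minimal in the sense adopted above, and it is special only when $\theta =0$; replacing $\Omega$ by $\hbox{\rm e}^{-\hbox{\rm i}\theta}\Omega$ makes it special, in accordance with the rescaling just described.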
For an oriented affine Lagrangian immersion $f:M\to N$ we have the induced volume form $\nu $ on $M$ defined by the condition $\nu (X_1,..., X_n)=\vert\omega (X_1,..., X_n)\vert$, where $X_1,...,X_n$ is a positively oriented basis of $T_xM$, $x\in M$, and $\omega =f^*\Omega$. The form $\omega$ is a complex-valued $n$--form on $M$. We have $$\omega =\hbox{\rm e} ^{\hbox{\rm i}\theta } \nu ,$$ where $\theta$ is the phase function. Note that by multiplying $\Omega$ by $\hbox{\rm e} ^{\hbox{\rm i} \alpha}$ for any $\alpha \in\R$ we do not change the induced volume form on $M$. Decompose $\omega$ into the real and imaginary parts: $\omega= \omega _1+\hbox{\rm i}\omega_2$, where $\omega _1=\cos \theta\,\nu $, $\omega _2=\sin\theta\,\nu.$ If $W\in\mathcal X (M)$, then, since $\Omega$ is complex, $\iota _{(Jf_*W)}\Omega =\hbox{\rm i}\,\iota _{(f_*W)}\Omega$, and hence $$ f^*(\iota _{(Jf_*W)} \Omega)=\hbox{\rm i}\,\iota _W\omega =-\iota _W\omega _2 +\hbox{\rm i} \iota _W\omega _1, $$ where $\iota$ stands for the interior product operator. Hence, if $f$ is special (i.e. $\nu =\omega_1$) we get \begin{equation}\label{przedGriffiths} f^*(\iota _{(J f_*W )}\Im\Omega) = \iota _W\nu. \end{equation} Assume now that $f_t$, $|t|<\varepsilon$, is a smooth variation of $f$. Denote by $\mathcal V (t, x)$ its variation vector field. Assume it is normal to $f$ at $t=0$. Then $V:=\mathcal V_{\vert{\{0\}}\times M}$ is equal to $Jf_*W$ for some $W\in\mathcal X (M)$. If $f$ is special and $\Omega$ is closed, then using formula (\ref{przedGriffiths}) and Proposition (I.b.5) from \cite{G}, we obtain \begin{equation}\label{Griffiths} {d\over{dt}}\left (f^*_t\hbox{\rm Im}\, \Omega\right )_{|t=0} =d(\iota _W\nu). \end{equation} This formula is also directly computed in \cite{O}, but there the form $\Omega$ is assumed to be parallel relative to a torsion-free complex connection. We shall now give a justification of the term ``minimal'' adopted in this paper. Assume $M$ is compact.
If affine Lagrangian immersions $f, \tilde f: M\to N$ are cohomologous (in particular, if they are homotopic), then the cohomology class of $\omega _i$ is equal to the cohomology class of $\tilde\omega _i$, for $i=1,2$, where $\tilde\omega _1=\cos \tilde\theta\, \tilde\nu$, $\tilde\omega _2=\sin\tilde\theta\, \tilde \nu$ are the real and imaginary parts of $\tilde{\omega} =\tilde f ^*\Omega$ and $\tilde\theta$, $\tilde\nu$ are the phase and the induced volume element for $\tilde f$. Assume that $f$ is special. Then $\omega _1=\nu $ and consequently $$ \int _M \nu=\int _M \omega _1 =\int _M \tilde\omega _1=\int _M \cos\tilde\theta \, \tilde \nu \le\int _M\tilde\nu,$$ which means that, with the definition of minimality adopted in this paper, compact special (and consequently minimal) affine Lagrangian submanifolds are volume minimizing in their respective cohomology classes. Assume additionally that $\tilde f$ is minimal with the constant phase $\tilde\theta$. We have $$0=\int _M\omega _2=\int _M \tilde\omega_2= \int _M \sin\tilde\theta\, \tilde \nu =\sin\tilde\theta \int_M \tilde\nu ,$$ which means that $\tilde\omega _2=0$, that is, $\tilde f$ is also special. If $f$ is minimal (special), then for any diffeomorphism $\varphi$ of $M$ the composition $f\circ\varphi$ is minimal (special). \section{Moduli spaces of compact embedded submanifolds} Assume first that $M$ and $N$ are arbitrary manifolds such that $\dim M\le \dim N$. Assume moreover that $M$ is connected and compact, and that it admits an embedding into $N$. Denote by $\mathcal C^{\infty}_{emb}(M,N)$ the set of all embeddings from $M$ into $N$. This is a well known topological space forming an open subset (in the $\mathcal C^1$ topology) of $\mathcal C ^{\infty}(M,N)$. Denote by $\mathcal M$ the space $\mathcal C^{\infty}_{emb}(M,N) _{/ {\mathcal Diff}^{\infty}(M)}$ with the quotient topology. The equivalence class of $f\in \mathcal C^{\infty}_{emb}(M,N)$ will be denoted by $[f]$.
For $f,g\in \mathcal C^{\infty}_{emb}(M,N)$ we have that $f\sim g$ if and only if the images of $f$ and $g$ are equal in $N$. We shall now introduce a structure of an infinite dimensional manifold (modeled on Fr\'echet spaces) on $\mathcal M$. This is certainly well known, but we have not found a suitable reference, and moreover we shall need the construction itself. We use the notion of a manifold modeled on Fr\'echet vector spaces given in \cite{H}. We denote the Fr\'echet space of all $\mathcal C ^{\infty}$ sections of a vector bundle $E\to M$ by $\mathcal C^{\infty}(M\leftarrow E)$. Analogously, the Banach space of all $\mathcal C ^k$ sections of a vector bundle $E\to M$ will be denoted by $\mathcal C ^k(M\leftarrow E)$. The basic tool in the construction is the tubular mapping. We use the following version of this notion. Assume that $\mathcal N _f$ is any smooth transversal bundle for an embedding $f:M\to N$. Fixing any connection on $N$, we have the exponential mapping $exp $ given by the connection. No relation between the connection and the transversal bundle is needed. From the theory of connections one knows that there is an open neighborhood $\mathcal U$ of the zero-section in the total space $\mathcal N _f$ and an open neighborhood $\mathcal T$ of $f(M)$ in $N$ such that $exp_{\vert \mathcal U} :\mathcal U\to \mathcal T$ is a diffeomorphism, $exp _{|M}=\hbox{\rm id}\, _M$ and the differential $exp _*: T_{0_x}(\mathcal N_f)=f_*(T_xM)\oplus (\mathcal N_f)_x\to T_xN$ of $exp$ at $0$ is the identity for each point $x$ of $M$. The mapping $exp_{\vert \mathcal U}$ is a tubular mapping. In order to reduce the bookkeeping of neighborhoods, we shall use the following lemma, which allows us to take the whole total space $\mathcal N _f$ as the domain of a tubular mapping. In what follows $\mathcal N _f$ will denote either the transversal vector bundle or its total space, depending on the context.
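As a simple illustration of the tubular mapping (an example added here; it is not needed in the sequel), consider the flat case $N=\R ^m$ with the standard flat connection, for which $exp _y(v)=y+v$. If $f:M\to \R ^m$ is an embedding and $\mathcal N _f$ is a transversal bundle, then $$ exp :\mathcal N _f\ni (x,v)\longrightarrow f(x)+v\in \R ^m , $$ and the differential at a point $0_x$ of the zero section, $T_{0_x}(\mathcal N _f)=f_*(T_xM)\oplus (\mathcal N _f)_x\ni (X,v)\longrightarrow X+v\in T_{f(x)}\R ^m\cong \R ^m$, is the identity under the obvious identifications. The restriction of this mapping to a sufficiently small neighborhood of the zero section is a diffeomorphism onto a neighborhood of $f(M)$ in $\R ^m$.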
\begin{lemma}\label{lemacik_o_funkcji} Let $E\longrightarrow M$ be a Riemannian vector bundle and $\mathcal U_{\varepsilon}$ be the neighborhood of the zero section of $E$ given as follows $$\mathcal U_{\varepsilon}=\{v\in E;\ \mid v\mid<\varepsilon\}, $$ where $\mid\ \mid$ is the norm on fibers of $E$ determined by the Riemannian structure. There is a fiber-respecting diffeomorphism $\sigma :E\longrightarrow \mathcal U_{\varepsilon}$ which is the identity on $\mathcal U_{\varepsilon/2}$. \end{lemma} Proof.\ Let $\psi : [0, \infty)\to \R$ be a smooth strictly increasing function such that $\psi (t)=t$ for $t\le \varepsilon/2$, $\psi (t)< \varepsilon$ for all $t$ and $\psi(t)\rightarrow \varepsilon$ for $t\to{\infty}$. Such a $\psi$ exists: for $t>\varepsilon /2$ set $\psi (t)=\varepsilon /2+\int _{\varepsilon /2}^{t}h(s)\,ds$, where $h$ is a smooth positive function equal to $1$ near $\varepsilon /2$ and such that $\int _{\varepsilon /2}^{\infty}h=\varepsilon /2$. Then the function $\Upsilon(t)=(1/t)\psi (t)$ is also a smooth function on $[0,\infty)$. The mapping $\sigma:E\to\mathcal U_{\varepsilon} $ given by $$\sigma (v)=\Upsilon (\mid v\mid )v$$ satisfies the required conditions. \lower 2pt\hbox{\kwadracik} We now endow the bundle $\mathcal N _f$ with any Riemannian metric. Since $M$ is compact, there is $\varepsilon>0$ such that $\mathcal U _{\varepsilon}\subset \mathcal U$. We use Lemma \ref{lemacik_o_funkcji} for $\mathcal U_{\varepsilon}$ and take the tubular mapping $\mathcal E _f = exp \circ \sigma $. The tubular neighborhood $\mathcal E _f(\mathcal N _f)$ of $f(M)$ will be denoted by $\mathcal T _f$. The set $\mathcal C^{\infty}_{emb}(M,\mathcal T_f )$ is open in $\mathcal C^{\infty}_{emb}(M,N)$ (in the $\mathcal C ^0$-topology). Consider the mapping \begin{equation} \Psi: \mathcal C^{\infty}_{emb}(M,\mathcal T_f )\ni h\longrightarrow \Pi_f\circ\mathcal E _f ^{-1}\circ h\in \mathcal C^{\infty}(M,M), \end{equation} where $\Pi _f:\mathcal N _f\longrightarrow M$ is the natural projection. The mapping is continuous and the set ${\mathcal Diff}^{\infty}(M) $ is open in $\mathcal C ^{\infty} (M,M)$ (in the $\mathcal C ^1$-topology).
Thus the set \begin{equation} {\mathcal U}^1_f =\Psi^{-1}({\mathcal Diff}^{\infty}(M) )= \{h\in\mathcal C^{\infty}(M,\mathcal T _f ); \ \Pi _f\circ \mathcal E _f ^{-1}\circ h\in{\mathcal Diff}^{\infty}(M) \} \end{equation} is open in $\mathcal C^{\infty}_{emb}(M,N)$ in the $\mathcal C^1$ topology. Observe that $h\in\mathcal U ^1_f$ if and only if there is a section $V\in {\mathcal C}^{\infty}(M\longleftarrow\mathcal N _f )$ and $\varphi \in {\mathcal Diff}^{\infty}(M) $ such that \begin{equation} \mathcal E _f\circ V=h\circ \varphi . \end{equation} The set ${\mathcal U}^1 _f$ has the following properties: 1) If $h\in{\mathcal U}^1 _f$ and $\varphi \in{\mathcal Diff}^{\infty}(M)$, then $h\circ\varphi \in{\mathcal U}^1 _f$ . 2) For every $\varphi \in{\mathcal Diff}^{\infty}(M)$ we have ${\mathcal U} ^1_{f\circ\varphi}={\mathcal U}^1 _f$. Take the neighborhood $\[f]= \{[h]\in\mathcal M ; \ h\in{\mathcal U}^1 _f\}$ of $[f]$ in $\mathcal M$. Observe that the elements of $\[f]$ can be parametrized simultaneously. Namely, we have \begin{lemma}\label{simultaneously} Let $\xi _0 \in \[f] $ and $h_0\in\mathcal C^{\infty}_{emb}(M,N)$ be its fixed parametrization. For each $\xi \in \[f]$ there is a unique parametrization $h_{\xi}\in\mathcal C^{\infty}_{emb}(M,N)$ of $\xi$ such that $$\Pi _f\circ\mathcal E _f ^{-1}\circ h_0 =\Pi _f\circ \mathcal E _f ^{-1}\circ h_{\xi}$$ \end{lemma} Proof.\ We first reparametrize $f$ in such a way that after the reparametrization $$\Pi _f \circ\mathcal E _f ^{-1}\circ h_0=\hbox{\rm id}\, _M.$$ Assume that $f$ is already parametrized in this way. For every $h\in {\mathcal U}^1 _f $ the mapping $\varphi ^{-1}=\Pi _f\circ\mathcal E _f ^{-1}\circ h$ is a diffeomorphism and it is sufficient to replace $h$ representing $[h]$ by $h\circ\varphi$. The uniqueness is obvious. 
{\lasek\kwadrat} By the above lemma we see that $\[f]$ can be identified with the set \begin{equation}{\mathcal U_{[f]}} =\{h\in\mathcal C^{\infty}_{emb}(M,\mathcal T_f ) ; \ \Pi _f\circ\mathcal E _f ^{-1}\circ h=\hbox{\rm id}\,_M\}.\end{equation} We now define the bijection $$ u_{[f]}:\mathcal U_{[f]}\longrightarrow \mathcal C ^{\infty}(M\leftarrow \mathcal N _f)$$ as follows: \begin{equation} u_{[f]}(h)= \mathcal E _f^{-1}\circ h . \end{equation} We see that $$u_{[f]} ^{-1} (V)=\mathcal E _f\circ V$$ and $\mathcal E _f\circ V$ has values in $\mathcal T _f$. If $ U$ is an open subset of $\mathcal T _f$, then $$u_{[f]}(\{h\in \mathcal U_{[f]}; h(M)\subset U\})=\{V\in\mathcal C ^{\infty}(M\leftarrow\mathcal N _f);\ V(M)\subset\mathcal E _f^{-1}( U)\},$$ and hence this set is open in $\mathcal C ^{\infty}(M\leftarrow \mathcal N _f)$. Assume now that $f, g\in \mathcal C^{\infty}_{emb}(M,N)$ and $\mathcal U_{[f]}\cap \mathcal U_{[g]}\ne\emptyset$. Take $\xi _0\in \mathcal U_{[f]}\cap \mathcal U_{[g]}$ and fix its parametrization $h_0$. Reparametrize $f$ and $g$ as in Lemma \ref{simultaneously} adjusting the parametrizations to $h_0$.
Then \begin{eqnarray*}\mathcal U_{[f]}\cap \mathcal U_{[g]}&=\{h\in \mathcal C ^{\infty}_{emb}(M,\mathcal T _f\cap\mathcal T _g);\ \Pi _f\circ \mathcal E _f ^{-1}\circ h=\hbox{\rm id}\, _M, \ \Pi _g\circ\mathcal E _g ^{-1}\circ h=\hbox{\rm id}\, _M\}\\ &=\{\mathcal E _f\circ V;\ V\in \mathcal C^{\infty} (M\leftarrow\mathcal N _f);\ V(M)\subset \mathcal E _f ^{-1} (\mathcal T _f\cap\mathcal T _g )\}\\ \end{eqnarray*} and consequently $$u_{[f]}(\mathcal U_{[f]}\cap \mathcal U _{[g]})=\{ \ V\in \mathcal C^{\infty} (M\leftarrow\mathcal N _f );\ V(M)\subset \mathcal E _f ^{-1} (\mathcal T _f\cap\mathcal T _g )\}.$$ The mapping $\mathcal E _g^{-1}\circ\mathcal E _f :\mathcal E _f^{-1}(\mathcal T _f\cap\mathcal T _g)\to \mathcal E _g ^{-1}(\mathcal T _f\cap\mathcal T _g )$ is smooth and fiber-respecting (because of the specially chosen parametrizations of $f$ and $g$). It is known, \cite{H}, that the set $u_{[f]}(\mathcal U_{[f]}\cap \mathcal U _{[g]})$ is open in the Fr\'echet space $\mathcal C^{\infty}(M\leftarrow \mathcal N _f)$ and the mapping \begin{eqnarray*} &u_{[g][f]}:u_{[f]}(\mathcal U_{[f]}\cap \mathcal U _{[g]})\ni V\to \mathcal E _g ^{-1}\circ\mathcal E _f\circ V \in u_{[g]}(\mathcal U_{[f]}\cap \mathcal U _{[g]}) \end{eqnarray*} is smooth. For the same reason the set $u_{[g]}(\mathcal U_{[f]}\cap \mathcal U_{[g]})$ is open and the mapping $u_{[f][g]}$ is smooth. We have built a smooth atlas on $\mathcal M$. Hence we have \begin{thm} Let $M$ be a connected compact manifold admitting an embedding in a manifold $N$. Then $\mathcal M$ is an infinite dimensional manifold modeled on the Fr\'echet vector spaces $\mathcal C ^{\infty}(M\leftarrow \mathcal N _f)$ for $f\in\mathcal C_{emb} ^{\infty}(M,N)$, where $\mathcal N _f$ is any bundle transversal to $f$. \end{thm} In the theorem $\mathcal N _f$ can be replaced by any bundle isomorphic (over the identity on $M$) to the transversal bundle $\mathcal N _f$.
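For later use, let us record a standard coordinate computation illustrating the correspondence, determined by a volume form, between tangent vectors and $(n-1)$-forms (added here for illustration). If $\nu =dx^1\wedge ...\wedge dx^n$ in local coordinates $x^1,..., x^n$ on $M$ and $W=\sum _{i=1}^n W^i\partial _{x^i}$, then $$\iota _W\nu =\sum _{i=1}^n (-1)^{i-1}W^i\, dx^1\wedge ...\wedge \widehat{dx^i}\wedge ...\wedge dx^n ,$$ where $\widehat{dx^i}$ means that the factor $dx^i$ is omitted. In particular, the mapping $W\longrightarrow \iota _W\nu$ is a pointwise linear isomorphism $T_xM\to \Lambda ^{n-1}(T_xM)^*$.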
In what follows the Fr\'echet space of all smooth $r$-forms on $M$ will be denoted by $\mathcal C ^{\infty}(\mathcal F ^{r})$. The Banach space of $r$-forms of class $\mathcal C ^k$, $k\in\mathbf N$, will be denoted by $\mathcal C ^k(\mathcal F ^r)$. Assume now additionally that $N$ is a $2n$-dimensional manifold with an almost complex structure $J$ and $M$ is $n$-dimensional orientable with a fixed volume form $\nu$. Having the volume element $\nu$, we have an isomorphic correspondence between tangent vectors and $(n-1)$-forms. It is given by the interior multiplication $$T_xM\ni W\longrightarrow \iota _W\nu \in \Lambda ^{n-1}(T_xM)^*,$$ for $x\in M$. If $f:M\to N$ is affine Lagrangian, then by composing this isomorphism with the isomorphism determined by $J$ between the tangent bundle $TM$ and the normal bundle $\mathcal N _f$ we get an isomorphism, say $\rho$, of vector bundles \begin{equation}\label{rho} \rho: \Lambda ^{n-1} TM^*\longrightarrow \mathcal N _f. \end{equation} The isomorphism gives a smooth isomorphism (a linear smooth diffeomorphism) $\wp $ between the Fr\'echet vector spaces $\mathcal C ^{\infty}(\mathcal F ^{n-1})$ and $\mathcal C^ {\infty}(M\leftarrow \mathcal N _f)$ given by $\wp (\gamma)=\rho\circ \gamma$. We now have \begin{thm}\label{Mal} Let $M$ be a connected compact orientable $n$-dimensional real manifold admitting an affine Lagrangian embedding into a $2n$-dimensional almost complex manifold $N$. The set $\mathcal M aL =\{[f]\in \mathcal M;\ f \ is \ affine\ Lagrangian\}$ is an infinite dimensional manifold modeled on the Fr\'echet vector space $\mathcal C^{\infty}(\mathcal F^{n-1})$. \end{thm} Proof.\ For each $y\in N$ there is an open neighborhood $U_y$ of $y$ in $N$ and a smooth complex $n$-form $\Omega _y$ on $N$ such that $\Omega _y\ne 0$ at each point of $U_y$. Let $U_{y_1},..., U_{y_l}$ cover $f(M)$. Set $\tilde{\Theta} _j= {\mathcal E _f} ^*\Omega _{y_j} $.
Consider the mapping \begin{equation}\mathcal C ^1(M\leftarrow \mathcal N _f)\ni V\to (V^*\tilde{\Theta}_1,...,V^* \tilde{\Theta}_l)\in (\mathcal C ^0(\mathcal F (\C)))^l, \end{equation} where $\mathcal C ^0(\mathcal F(\C))$ stands for the space of all complex-valued $n$-forms on $M$ of class $\mathcal C ^0$. It is known, see Theorem 2.2.15 from \cite{Ba}, that this mapping is continuous between Banach spaces. Hence $$\tilde{\mathcal U}=\{V\in\mathcal C ^{\infty}(M\leftarrow\mathcal N _f);\ ((V^*\tilde{\Theta }_1)_x, ... , (V^*\tilde {\Theta }_l)_x)\ne 0 \ \forall x\in M\}$$ is open in $\mathcal C^{\infty}(M\leftarrow\mathcal N _f)$. It is clear that $[h]\in \mathcal U ^1_{[f]}$ is affine Lagrangian if and only if $u_{[f]}([h])\in \tilde {\mathcal U}$. We now compose $u_{[f]}$ with the isomorphism $\wp ^{-1}$, where $\wp$ is determined by any fixed volume form on $M$. \lower 2pt\hbox{\kwadracik} In the above atlas we can compose a chart $u_{[f]}$ with a bijective mapping, say $\phi$, sending an open neighborhood of $0$ in $\mathcal C^{\infty}(\mathcal F^{n-1})$ onto an open neighborhood of $0$ in $\mathcal C^{\infty}(\mathcal F^{n-1})$ and such that $\phi $ and $\phi ^{-1}$ are smooth in the sense of the theory of Fr\'echet vector spaces. This does not change the differentiable structure on $\mathcal M aL$. We shall use this possibility in the next section. \section{The moduli space of minimal submanifolds} The precise formulation of Theorem \ref{main} is the following \begin{thm}\label{main-theorem} Let $N$ be a $2n$-dimensional almost complex manifold equipped with a smooth nowhere-vanishing closed complex $n$-form $\Omega$. Let $M$ be a connected compact oriented $n$-dimensional real manifold admitting a minimal (relative to $\Omega$) affine Lagrangian embedding into $N$. Then the set $$\mathcal M maL= \{[f]\in \mathcal M aL;\ f \ is \ minimal\}$$ is an infinite dimensional manifold modeled on the Fr\'echet vector space \newline $\mathcal C^{\infty}(\mathcal F^{n-1}_{closed})$.
It is a submanifold of $\mathcal M aL$. \end{thm} Proof.\ We shall improve the charts obtained in Theorem \ref{Mal} in such a way that the set $\mathcal M maL$ will get a structure of a submanifold of $\mathcal M aL$ in the sense of the theory of Fr\'echet manifolds. Let $f:M\to N$ be a given minimal affine Lagrangian embedding. By rescaling $\Omega$ in the ambient space we make $f$ special. We have the normal bundle $\mathcal N _f=J f_*(TM)$. Fix a tubular mapping $\mathcal E _f :\mathcal N _f\to\mathcal T _f$. For each section $V\in \mathcal C ^{\infty}(M\leftarrow \mathcal N _f)$ we have the embedding $f_V=\mathcal E _f\circ V$. In general, $f_V$ is neither special nor minimal nor even affine Lagrangian. Consider the mapping $$\tilde P: \mathcal C ^{\infty}(M\leftarrow \mathcal N _f)\ni V\to \tilde P (V)=f_V^* (\Im\, \Omega)\in\mathcal C ^{\infty} (\mathcal F^{n}).$$ Of course $\tilde P(0)=0$. For a section $V\in \mathcal C ^{\infty}(M\leftarrow \mathcal N _f)$ take the variation $f_t=f_{tV}$. The section $V$ is the variation vector field for $f_t$ at $0$. Using now formula (\ref{Griffiths}) one sees that the linearization $L_0 \tilde P$ of $\tilde P$ at $0$ is given by the formula \begin{equation}\label{LP} L_0\tilde P(V)=d(\iota _W\nu ), \end{equation} where $V=Jf_*W$ and $\nu$ is the volume form on $M$ induced by $f$. Since for each $V\in \mathcal C ^{\infty}(M\leftarrow \mathcal N _f)$ the embedding $f_V$ is homotopic to $f$, we have that $\tilde P$ has values in $\mathcal C^{\infty}(\mathcal F_{exact}^{n})$. Moreover, as was observed in Section 2, if $f_V$ is minimal affine Lagrangian, then it is automatically special. We shall now use the isomorphism $\rho$ given by (\ref{rho}). If $\gamma \in \mathcal C ^{\infty}(\mathcal F ^{n-1})$ and $V=\rho\circ\gamma$, then $\gamma= \iota_W\nu$, where $V=Jf_*W$.
We now have the mapping $$P: \mathcal C^{\infty}({\mathcal F}^{n-1})\longrightarrow \mathcal C^{\infty}({\mathcal F}^n_{exact})$$ defined as follows: \begin{equation} P(\gamma) =\tilde P (\rho \circ\gamma). \end{equation} The mapping $P$ can also be expressed as follows. If we set $\Theta=({\mathcal E _f}\circ\rho)^*\Im\Omega$, then $\Theta$ is a closed $n$-form on the total space of $\Lambda ^{n-1}TM^*$. We have $ P(\gamma )=\gamma ^*\Theta $ for any $(n-1)$-form $\gamma$. Obviously $ P(0)=0$. Moreover, $P(\gamma)=0$ if and only if $f_V$, where $V= \rho\circ\gamma $, is special (provided $f_V$ is affine Lagrangian). We shall now regard $ P$ as a differential operator. It is smooth, of order 1, non-linear and, by (\ref{LP}), the linearization $L_0 P$ of $P$ at $0$ is given by \begin{equation} L_0 P=d. \end{equation} We shall now fix an arbitrary positive definite metric tensor field on $M$. The metric is only a tool here and has no relation to the affine geometric structure considered in this paper. Denote by $\delta$ the codifferential operator determined by the metric. Denote by ${\mathcal C}^{k,a}(\mathcal F^{r})$ the H\"older-Banach space of all $r$-forms on $M$ of class ${\mathcal C}^{k,a}$, where $k\in \mathbf N$ and $a$ is a real number from $(0,1)$. We extend the action of the operators $ P, d, \delta$ to the action on the forms of class ${\mathcal C}^{k,a}$. The extensions will be denoted by the same letters. In particular, after extending, $P$ becomes a ${\mathcal C}^{\infty}$ mapping between Banach spaces, see \cite{Ba} p. 34, \begin{equation} P: {\mathcal C}^{k,a} ({\mathcal F}^{n-1} ) \longrightarrow {\mathcal C}^{k-1,a}({{\mathcal F}^{n}}_{exact}) \end{equation} for each $k=1,2,...$. As in the proof of Theorem \ref{Mal} one sees that there is an open neighborhood, say $\mathcal W$, of $0$ in ${\mathcal C}^{1,a} ({\mathcal F}^{n-1} )$ such that $f_V$ is affine Lagrangian for $V=\rho\circ\gamma$ and $\gamma \in \mathcal W$.
From now on all neighborhoods of $0$ in $\mathcal C ^{1,a}({\mathcal F}^{n-1})$ will be contained in $\mathcal W$. Moreover, all neighborhoods will be assumed open. Consider now $P: {\mathcal C}^{1,a} ({\mathcal F}^{n-1} ) \longrightarrow {\mathcal C}^{0,a}({{\mathcal F}^{n}}_{exact})$ as a mapping between Banach spaces. The mapping $L_0 P$ is a surjection. Moreover $\hbox{\rm ker}\, L_0 P=\hbox{\rm ker}\, d$. Denote the Banach space $\mathcal C ^{1,a}(\mathcal F_{closed}^{n-1})=\hbox{\rm ker}\, d\subset{\mathcal C}^{1,a}({\mathcal F}^{n-1})$ by $X$. The space $\delta ({\mathcal C}^{2,a} ({\mathcal F}^{n} ))$ is a closed complement to $\hbox{\rm ker}\, d$. Denote this Banach space by $Y$. Using the implicit function theorem for Banach spaces one gets that there are an open neighborhood $A$ of $0$ in $X$, an open neighborhood $B$ of $0$ in $Y$ and a unique smooth mapping $G:A\longrightarrow B$ such that $$(A+B)\cap P^{-1}(0)=\{\alpha +G(\alpha);\, \alpha \in A\}.$$ We shall now observe that if $\alpha $ is of class ${\mathcal C} ^{k,a}$, where $k\ge 2$, or of class $\mathcal C ^{\infty}$, then so is $G(\alpha)$, for $\alpha$ from some neighborhood of $0$ in $X$. In Riemannian geometry special submanifolds, being minimal, are automatically $\mathcal C ^{\infty}$ (after a possible reparametrization), but in the affine case we do not have such a statement and we have to prove that $\alpha$ of class $\mathcal C ^{\infty}$ gives rise to a smooth embedding. For an $(n-1)$--form $\gamma$ we define a differential operator $ P_{\gamma}$ of the second order from the vector bundle $\Lambda ^n{TM^*}$ into itself by the formula \begin{equation} P_{\gamma} (\beta)= P(\gamma+\delta\beta) \end{equation} for an $n$-form $\beta$. Since $d\beta=0$, the linearization of $ P_0$ at $0$ is the Laplace operator. We also have \begin{equation} L_{\beta}P_{\gamma}=L_{\gamma +\delta \beta}P\circ \delta.
\end{equation} Hence, if $\gamma$ is of class $\mathcal C ^{k,a}$ and $\beta$ is of class $\mathcal C ^{k+1,a}$, then the linear differential operator $L_{\beta}P_{\gamma}$ is of class $\mathcal C ^{k-1,a}$. We have the following smooth mapping between Banach spaces $$\mathcal C^{1,a}(\mathcal F^{n-1})\times \mathcal C^{2,a}(\mathcal F ^{n})\ni (\gamma,\beta)\longrightarrow P_\gamma (\beta)\in \mathcal C ^{0,a}(\mathcal F ^n)$$ and the continuous mapping \begin{equation} \Phi :STM^*\times {\mathcal C}^{1,a}({\mathcal F}^{n-1})\times \mathcal C ^{2,a}({\mathcal F}^{n})\ni (\xi, \alpha,\beta ) \longrightarrow \hbox{\rm det}\, \, \sigma _{\xi} (L_{\beta} { P}_{\alpha})\in \R , \end{equation} where $STM^*$ stands for the total space of the unit sphere bundle in $TM^*$ and $\sigma _\xi$ denotes the principal symbol of a differential operator. Since $STM^*$ is compact and $\Phi (\xi, 0,0)\ne 0$ for every $\xi \in STM^*$, we obtain the following \begin{lemma}\label{eliptyczny} There is a neighborhood $\mathcal U_0$ of $0$ in $\mathcal C ^{1,a}({\mathcal F}^{n-1})$ and a neighborhood $\mathcal V_0$ of $0$ in $\mathcal C ^{2,a}({\mathcal F}^{n})$ such that for each $\gamma \in \mathcal U_0$ and $\beta \in \mathcal V_0$ the differential operator $L_{\beta}P_{\gamma}$ is elliptic. \end{lemma} From the theory of elliptic differential operators applied to $d+\delta$ we know that the codifferential (after restricting) is a linear homeomorphism of Banach spaces $$ \delta : \mathcal C ^{2,a}(\mathcal F ^n_{exact})\longrightarrow Y=\delta (\mathcal C ^{2,a}({\mathcal F}^{n})).$$ Take the neighborhood of $0$ in $X$ given by $\mathcal U_1=G^{-1}(\delta (\mathcal V_0\cap \mathcal C ^{2,a}(\mathcal F ^n_{exact})))\cap \mathcal U_0$. Let $\alpha \in \mathcal U_1$. Then $G(\alpha)$ exists and there exists $\beta\in \mathcal V_0$ such that $G(\alpha )=\delta\beta$. Moreover $P_\alpha (\beta)=0$ and $L_\beta P_{\alpha}$ is elliptic.
Take now any $k\ge 2$ and $\alpha \in \mathcal U_1\cap \mathcal C^{k,a}({\mathcal F}^{n-1})$. Then the differential operator $P_\alpha$ is of class $\mathcal C ^{k-1, a}$. For $G(\alpha)$ we have $\beta\in \mathcal V_0$ of class $\mathcal C ^2$ such that $G(\alpha) =\delta\beta$, and then $P_\alpha(\beta)= 0$. Hence $\beta$ is a solution of the elliptic equation $P_{\alpha}(\beta)=0$, and from the elliptic regularity theorem for non-linear differential operators we know that $\beta$ is of class $\mathcal C ^{k+1,a}$ and consequently $G(\alpha)=\delta\beta$ is of class $\mathcal C ^{k,a}$. Thus if $\alpha$ is of class $\mathcal C ^{\infty}$ then so is $G(\alpha)$. We have obtained \begin{lemma} There is a neighborhood $\mathcal U_1$ of $0$ in $X$ such that for each $k\ge 1$ we have the mapping \begin{equation}\label{odwzorowanie_klasy_ck} G_{| \mathcal C ^{k,a}({\mathcal F}^{n-1})}: \mathcal U_1\cap \mathcal C ^{k,a}({\mathcal F}^{n-1}) \longrightarrow Y\cap \mathcal C^{k,a}({\mathcal F}^{n-1})=\delta ( \mathcal C^{k+1,a}(\mathcal F ^n)).\end{equation} Consequently we have the mapping \begin{equation}\label{odwzorowanie_nieskonczonosc} G_{| \mathcal C ^{\infty}({\mathcal F}^{n-1})}: \mathcal U_1\cap\mathcal C ^{\infty}({\mathcal F}^{n-1})\longrightarrow Y\cap \mathcal C ^{\infty}({\mathcal F}^{n-1})=\delta (\mathcal C ^{\infty}({\mathcal F}^{n})).\end{equation} \end{lemma} We know that $G:\mathcal U_1\to Y$ is smooth between Banach spaces. We shall now prove that the mappings (\ref{odwzorowanie_klasy_ck}), for $k=2,...\, $, are smooth mappings between Banach spaces, when we replace $\mathcal U_1$ by a (possibly) smaller neighborhood of $0$ in $X$. This will imply that the mapping (\ref{odwzorowanie_nieskonczonosc}) is smooth as a mapping between Fr\'echet spaces in a sufficiently small neighborhood of $0$ in $\mathcal C ^{\infty}({\mathcal F}^{n-1})$.
We have the continuous mapping \begin{equation} \mathcal C ^{1,a}({\mathcal F}^{n-1})\ni\gamma\longrightarrow (L_{\gamma}P)_{| Y}\in \mathcal L (Y,Z),\end{equation} where $Z= \mathcal C^{0,a}(\mathcal F ^n_{exact})$ and $\mathcal L (Y,Z)$ stands for the Banach space of continuous linear mappings from $Y$ to $Z$. We know that $(L_0P)_{| Y}=d_{|Y} :Y\to Z$ is an isomorphism (linear and topological) between Banach spaces $Y$, $Z$. Since the set of isomorphisms is open in $\mathcal L (Y,Z)$, there is a neighborhood, say $\mathcal U_2$, of $0$ in $\mathcal C ^{1,a} ({\mathcal F}^{n-1})$ such that if $\gamma \in \mathcal U_2$, then $(L_{\gamma}P)_{|Y}$ is an isomorphism between $Y$ and $Z$. Take $\gamma \in \mathcal U_2\cap \mathcal C ^{k,a}({\mathcal F}^{n-1})$. We have the mapping \begin{equation}\label{mapping} (L_{\gamma}P)_{|Y_k} : Y_k\longrightarrow \mathcal C ^{k-1,a}(\mathcal F ^n_{exact})=Z\cap \mathcal C ^{k-1,a}({\mathcal F}^{n} ), \end{equation} where $Y_k=Y\cap \mathcal C ^{k,a}({\mathcal F}^{n-1})=\delta (\mathcal C ^{k+1,a}({\mathcal F}^{n}))$. As a restriction of the injection $L_{\gamma}P:Y\to Z$, it is injective. Since $P_{| \mathcal C ^{k,a}({\mathcal F}^{n-1})}:\mathcal C ^{k,a}({\mathcal F}^{n-1})\longrightarrow \mathcal C ^{k-1, a}(\mathcal F ^n_{exact})$ is smooth between Banach spaces, we have that $$L_{\gamma} \left (P_{|\mathcal C ^{k,a}({\mathcal F}^{n-1})}\right): \mathcal C ^{k,a}({\mathcal F}^{n-1})\longrightarrow \mathcal C ^{k-1,a}(\mathcal F ^n_{exact})$$ is continuous. Hence $$ \left(L_{\gamma} \left (P_{|\mathcal C ^{k,a}({\mathcal F}^{n-1})}\right )\right) _{|Y _k}: Y_k\longrightarrow \mathcal C ^{k-1,a}(\mathcal F ^n_{exact})$$ is continuous. On the other hand $$ L_{\gamma}\left(P_{|\mathcal C ^{k,a}({\mathcal F}^{n-1})}\right) =(L_{\gamma}P)_{|\mathcal C ^{k,a}({\mathcal F}^{n-1})}. $$ Thus the mapping given by (\ref{mapping}) is a continuous linear monomorphism.
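For the reader's convenience, we recall the standard fact behind the openness of the set of isomorphisms used above (a side remark, not needed elsewhere): if $S\in\mathcal L (Y,Z)$ is an isomorphism and $T\in\mathcal L (Y,Z)$ satisfies $\|T-S\|<\|S^{-1}\|^{-1}$, then $\|I-S^{-1}T\|\le \|S^{-1}\|\,\|S-T\|<1$ and the Neumann series
$$T^{-1}=\Big(\sum_{j=0}^{\infty}\big(I-S^{-1}T\big)^j\Big)S^{-1}$$
converges in $\mathcal L (Z,Y)$. Applied to $S=d_{|Y}$ and $T=(L_{\gamma}P)_{|Y}$, this produces the neighborhood $\mathcal U_2$.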
We shall now show that it is surjective for $\gamma \in \mathcal U_2\cap \mathcal U_0\cap \mathcal C ^{k,a}({\mathcal F}^{n-1})$. Let $\mu \in \mathcal C ^{k-1,a}(\mathcal F ^n_{exact})$. Since $\gamma\in \mathcal U_0$, by Lemma \ref{eliptyczny}, we know that $L_0P_{\gamma}$ is elliptic. The differential operator $L_0P_{\gamma}$ is of class $\mathcal C ^{k-1,a}$. Since $\gamma \in \mathcal U_2$, there is $\beta\in \mathcal C ^{2,a}({\mathcal F}^{n})$ such that $L_{\gamma} P(\delta \beta)=\mu$. From the elliptic regularity theorem we know that $\beta $ is of class $\mathcal C ^{k+1,a}$, i.e. $\delta\beta$ is of class $\mathcal C ^{k,a}$. Set $\mathcal U_3=\mathcal U_0\cap \mathcal U_2$. We have obtained \begin{lemma} There is a neighborhood $\mathcal U_3$ of $0$ in $\mathcal C ^{1,a}({\mathcal F}^{n-1})$ such that for every $\gamma\in \mathcal U_3\cap \mathcal C ^{k,a}({\mathcal F}^{n-1})$ the mapping $$L_{\gamma}P_{|Y_k}:Y_k\longrightarrow \mathcal C ^{k-1,a}(\mathcal F ^n_{exact})$$ is an isomorphism (topological and linear). \end{lemma} Denote by $\tilde G:A\to\mathcal C ^{1,a}(\mathcal F ^{n-1})$ the mapping given by $\tilde G(\alpha)=\alpha + G(\alpha)$. Take $\mathcal U_4=\tilde G^{-1}(\mathcal U_3)\cap \mathcal U_1\subset X$. Let $\alpha _0 \in \mathcal U_4\cap \mathcal C ^{k,a}({\mathcal F}^{n-1})$. Then $\gamma _0=\alpha _0+G (\alpha _0)$ is of class $\mathcal C ^{k,a}$ (because $\alpha _0\in \mathcal U_1$) and $L_{\gamma_0}P_{| Y_k}:Y_k\to \mathcal C ^{k-1,a}(\mathcal F^n_{exact})$ is an isomorphism (because $\alpha _0\in \tilde G ^{-1}(\mathcal U_3)$). Denote by $\tilde X _k$ the Banach space $\hbox{\rm ker}\, L_{\gamma _0}P$. We have $P(\gamma _0)=0$ and $\tilde X _k\oplus Y_k=\mathcal C ^{k,a}(\mathcal F ^{n-1})$. We want to prove that $G$ is smooth around $\alpha _0$ in the sense of the Banach spaces theory. Denote by $\tilde \pi :\mathcal C ^{k,a}({\mathcal F}^{n-1})=\tilde X _k\oplus Y_k\longrightarrow \tilde X_k$ the canonical projection.
It is a smooth mapping between Banach spaces. Set $\tilde \alpha _0=\tilde \pi(\alpha _0)$. From the implicit mapping theorem we know that there is a neighborhood $\tilde U $ of $\tilde\alpha _0$ in $\tilde X _k$ and a smooth mapping $F$ defined on $\tilde U$ such that $ \{\tilde\alpha + F(\tilde\alpha); \tilde\alpha \in \tilde U\}\subset P^{-1}(0)$. In a neighborhood of $\alpha _0$ we have $$G(\alpha)= \tilde\pi (\alpha) +F(\tilde \pi (\alpha))-\alpha.$$ Hence in a neighborhood of $\alpha _0$ the mapping $G$ is smooth and consequently it is smooth in $\mathcal U_4\cap \mathcal C ^{k,a}({\mathcal F}^{n-1})$. It follows that $G_{|\mathcal U_4\cap \mathcal C ^{\infty}({\mathcal F}^{n-1})}$ is smooth in the sense of the theory of Fr\'echet spaces. The projections in the Hodge decomposition $\mathcal C ^{\infty}(\mathcal F^{n-1})= \mathcal C^{\infty}(\mathcal F^{n-1}_{closed})\oplus \delta (\mathcal C ^{\infty}(\mathcal F ^n))$ are smooth mappings of Fr\'echet spaces. Denote by $p: \mathcal C ^{\infty}(\mathcal F^{n-1})\to \mathcal C^{\infty}(\mathcal F^{n-1}_{closed})$ the projection. Set $\mathfrak U=(\mathcal U_4 \cap \mathcal C ^{\infty}(\mathcal F^{n-1}))\oplus \delta(\mathcal C^{\infty }(\mathcal F^{n}))$. Consider the mapping $\phi: \mathfrak U\to \mathfrak U$ defined as $\phi (z)=z-(G\circ p)(z)$. It is a bijection and its inverse is given by $\phi ^{-1}(z)= z+(G\circ p)(z)$. Both mappings $\phi$ and $\phi ^{-1}$ are smooth in the sense of the theory of Fr\'echet vector spaces. We now compose the chart obtained in the proof of Theorem \ref{Mal} with $\phi$. Since $$\phi ( \{\gamma =\alpha +G(\alpha); \ \alpha\in \mathcal U_4\cap \mathcal C ^{\infty}(\mathcal F ^{n-1}) \})=\mathcal U_4 \cap \mathcal C ^{\infty}(\mathcal F ^{n-1})$$ is an open subset of the closed subspace $\mathcal C ^{\infty}(\mathcal F^{n-1}_{closed})$ of $\mathcal C ^{\infty}(\mathcal F ^{n-1})$, we have that the set $\mathcal MmaL$ is a submanifold of $\mathcal MaL$.
The proof is completed. \begin{remark} {\rm We shall now observe that there exist minimal affine Lagrangian submanifolds which are not smooth. We refer to Section 3 for notation. Having a smooth special affine Lagrangian embedding $f:M\to N$ we have the mapping $$ \Phi _k:\mathcal C ^k_{emb}(M, \mathcal T _f)\ni h\longrightarrow \Pi _f \circ \mathcal E ^{-1}_f\circ h\in \mathcal C ^k(M,M).$$ The set $Diff ^k(M)$ is open in $\mathcal C ^k(M,M)$ and the set $$\mathcal U _{k,f}=\{ h\in \mathcal C ^k_{emb}(M,\mathcal T _f) ;\ \exists V\in \mathcal C ^k(M\leftarrow \mathcal N _f): \mathcal E _f\circ V=h\}$$ can be regarded as an open neighborhood of $[f]$ in $\mathcal M^ {k}=\mathcal C ^k(M, \mathcal T _f)_{/ Diff ^k(M)}$. We have the bijection \begin{equation}\label{bijection} \mathcal C ^k(M\leftarrow \mathcal N_f) \ni V\longrightarrow \mathcal E _f\circ V\in \mathcal U _{k,f}. \end{equation} In order to study minimal affine Lagrangian submanifolds of complex equiaffine spaces as in \cite{O_1}, \cite{O} or in Section 1 of this paper it suffices that the immersions or embeddings under consideration are of class $\mathcal C ^2$. Also in the proof of Theorem \ref{main-theorem} the class $\mathcal C ^2$ is sufficient, that is, if $\alpha$ is of class $\mathcal C ^k$, where $k\ge 2$, then $G(\alpha)$ is of class $\mathcal C ^k$. Since $\mathcal C ^k(\mathcal F ^{n-1}_{closed})\ne\mathcal C ^{\infty}(\mathcal F ^{n-1}_{closed})$, it is clear by the proof of Theorem \ref{main-theorem} that there exist non-smooth minimal affine Lagrangian embeddings of class $\mathcal C ^k$, for $k\ge 2$. } \end{remark} \begin{example}{\rm Let $M$ be an $n$-dimensional real manifold equipped with a torsion-free linear connection $\nabla$. The tangent bundle to the tangent bundle $TTM$ admits a decomposition into a direct sum of the vertical bundle (tangent to the fibers of $TM$) and the horizontal bundle (depending on the connection).
The vertical lift of $X\in T_xM$ to $T_ZTM$ for $Z\in T_xM$ will be denoted by $X^v_Z$. Analogously the horizontal lift will be denoted by $X^h_Z$. The following formulas for the lifts of vector fields $X, Y\in \mathcal X (M)$ are known, see \cite{D}, \begin{equation}\label{nawiasy} \begin{array}{rcl} && [X^v,Y^v]=0,\\ &&[X^h, Y^v]=(\nabla _XY)^v,\\ && [X^h,Y^h]_Z=-(R(X,Y)Z)^v_Z +[X,Y]^h_Z, \end{array} \end{equation} where $R$ denotes the curvature tensor of $\nabla$. The total space $TM$ has an almost complex structure $J$ determined by $\nabla$. Namely \begin{equation} JX^h=X^v,\ \ \ \ \ \ JX^v=-X^h. \end{equation} From (\ref{nawiasy}) it follows that the almost complex structure is integrable if and only if the connection $\nabla$ is flat. Assume that $\nu$ is a volume form on $M$ such that $\nabla \nu =0$. In other words, the pair $\nabla$, $\nu$ is an equiaffine structure on $M$. We define a complex volume form $\Omega$ on $TM$ by the formula \begin{equation}\label{omega} \Omega (X_1^h,...,X_n^h)=\nu (X_1,...,X_n). \end{equation} By using (\ref{nawiasy}) one sees that $d\Omega=0$ if and only if $\nabla$ is flat. From now on we assume that $\nabla$ is flat and $\nabla\nu =0$. A manifold with such a structure is usually called an affine manifold with parallel volume. Take the zero-section of $TM$. The horizontal space at $0_x$ is equal to $T_xM$ (independently of a connection $\nabla$). Hence the zero-section treated as a mapping $ 0:M\to TM$ is an affine Lagrangian embedding. By (\ref{omega}) it is special (also independently of a given connection). We have \begin{proposition} Each affine manifold with parallel volume admits a special affine Lagrangian embedding into a complex space with closed complex volume form. \end{proposition} From the main theorem of this paper we know that if $M$ is additionally compact, then such embeddings are plentiful.
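As a concrete illustration (our example, stated here for the reader's convenience): take $M=\mathbb R ^n/\mathbb Z ^n$ with the standard flat connection $\nabla$ and the parallel volume form $\nu =dx^1\wedge\cdots\wedge dx^n$. Then $TM$ is identified with $\mathbb C ^n/\mathbb Z ^n$ via the coordinates $z^j=x^j+iy^j$, the form (\ref{omega}) becomes $\Omega =dz^1\wedge\cdots\wedge dz^n$, and the zero-section is the real torus $\{y=0\}$, a special affine Lagrangian submanifold.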
If $\nabla$ is flat and $\nabla\nu =0$ then, in fact, the total space of the tangent bundle $TM$ has a structure of a complex equiaffine manifold.} \end{example} \begin{remark} {\rm Assume now that $N$ is a 4-dimensional almost K\"ahler manifold with symplectic form $\kappa$. Let $M$ be a connected compact orientable $2$-dimensional manifold and $f:M\to N$ be a Lagrangian embedding (in the metric sense). We now have the canonical (depending only on the metric) isomorphism, say $\mathfrak{b}$, between vector fields and $1$-forms on $M$. By Theorem \ref{Mal} we have the manifold $\mathcal MaL$ modeled on the Fr\'echet space $\mathcal C ^{\infty}(\mathcal F ^1)$. As in the proof of Theorem \ref{main-theorem}, we define the mapping \begin{equation} \tilde P : \mathcal C ^{\infty}(M\leftarrow \mathcal N) \ni V\to f^*_V\kappa \in\mathcal C ^{\infty}(\mathcal F^2). \end{equation} Since $f^*\kappa =0$ and the normal bundle is star-shaped, the mapping $\tilde P$ has values in $\mathcal C ^{\infty}(\mathcal F^2_{exact})$. By composing the mapping with the isomorphism between the normal bundle and the tangent bundle and the isomorphism $\mathfrak{b}$ we obtain the mapping $$ P:\mathcal C ^{\infty}(\mathcal F^1)\to \mathcal C ^{\infty}(\mathcal F ^2)$$ whose linearization at $0$ is equal to the exterior differential operator $d$. Now we can argue as in the proof of Theorem \ref{main-theorem} and we get \begin{thm} Let $N$ be a 4-dimensional almost K\"ahler manifold and $f:M\to N$ be a Lagrangian embedding of a connected compact orientable 2-dimensional manifold. Then the set $$\mathcal M L= \{ [f]\in \mathcal MaL;\ f\ \hbox{is Lagrangian}\}$$ is an infinite dimensional Fr\'echet manifold modeled on the Fr\'echet vector space $\mathcal C^{\infty}(\mathcal F ^1_{closed})$. It is a submanifold of $\mathcal MaL$. \end{thm} } \end{remark} \end{document}
\begin{document} \title{Conditions for Boundedness into Hardy spaces} \author[Grafakos]{Loukas Grafakos} \address{Department of Mathematics, University of Missouri, Columbia, MO 65211} \email{grafakosl@missouri.edu} \author[Nakamura]{Shohei Nakamura} \address{Department of Mathematical Science and Information Science, Tokyo Metropolitan University, 1-1 Minami-Ohsawa, Hachioji, Tokyo, 192-0397, Japan} \email{pokopoko9131@icloud.com} \author[Nguyen]{Hanh Van Nguyen} \address{Department of Mathematics, University of Alabama, Tuscaloosa, AL 35487} \email{hvnguyen@ua.edu} \author[Sawano]{Yoshihiro Sawano} \address{Department of Mathematical Science and Information Science, Tokyo Metropolitan University, 1-1 Minami-Ohsawa, Hachioji, Tokyo, 192-0397, Japan} \email{yoshihiro-sawano@celery.ocn.ne.jp} \thanks{The first author would like to thank the Simons Foundation.} \thanks{MSC 42B15, 42B30} \begin{abstract} We obtain boundedness from a product of Lebesgue or Hardy spaces into Hardy spaces under suitable cancellation conditions for a large class of multilinear operators that includes the Coifman-Meyer class, sums of products of linear Calder\'{o}n-Zygmund operators and combinations of these two types. \end{abstract} \maketitle \tableofcontents \section{Introduction} In this work, we obtain boundedness for multilinear singular operators of various types from products of Lebesgue or Hardy spaces into Hardy spaces, under suitable cancellation conditions. This particular line of investigation was initiated in the work of Coifman, Lions, Meyer and Semmes \cite{CLMS} who showed that certain bilinear operators with vanishing integral map $L^q\times L^{q'} $ into the Hardy space $H^1$ for $1<q<\infty$ with $q'=q/( q-1)$. This result was extended by Dobyinski \cite{Do} for Coifman-Meyer multiplier operators and by Coifman and Grafakos \cite{CG} for finite sums of products of Calder\'{o}n-Zygmund operators.
In \cite{CG} boundedness was extended to $H^{p_1}\times H^{p_2}\to H^p$ for the entire range $0<p_1,p_2,p<\infty$ and $1/p=1/p_1+1/p_2$, under the necessary cancellation conditions. Additional proofs of these results were provided by Grafakos and Li \cite{GLLXW00}, Hu and Meng \cite{HuMe12}, and Huang and Liu \cite{HuangLiu13}. All the aforementioned accounts of this topic are based on different approaches and address these two classes of operators, but \cite{CG}, \cite{HuMe12}, and \cite{HuangLiu13} seem to contain flaws in their proofs; in fact, as of this writing, only the approach in \cite{GLLXW00} stands, which deals with the case of finite sums of products of Calder\'{o}n-Zygmund operators. In this work we revisit this line of investigation via a new method based on $(p,\infty)$-atomic decompositions. Our approach is powerful enough to encompass many types of multilinear operators, including all those previously studied (Coifman-Meyer type and finite sums of products of Calder\'on-Zygmund operators) as well as mixed types. An alternative approach to Hardy space estimates for bilinear operators has appeared in the recent work of Hart and Lu \cite{HartLu}. Recall that the Hardy space $H^p$ with $0<p\le \infty$ is given as the space of all tempered distributions $f$ for which \[ \|f\|_{H^p}=\big\|\sup_{t>0}|e^{t\Delta}f|\big\|_{L^p} \] is finite, where $e^{t\Delta}$ denotes the heat semigroup. Note that $H^p$ and $L^p$ are isomorphic with norm equivalence when $1<p \le \infty$. In this work we study the boundedness into $H^p$ of the following three types of operators: \begin{itemize} \item multilinear singular integral operators of Coifman-Meyer type; \item sums of $m$-fold products of linear Calder\'{o}n-Zygmund singular integrals; \item multilinear singular integrals of mixed type (i.e., combinations of the previous two types). \end{itemize} Let $m,n$ be positive integers.
For a bounded function $\sigma$ on $({\mathbb R}^n)^m$ we consider the multilinear operator \[ {\mathcal T}_\sigma(f_1,\ldots,f_m)(x) = \int_{({\mathbb R}^n)^m} \sigma(\xi_1,\ldots,\xi_m) \widehat{f_1}(\xi_1)\cdots\widehat{f_m}(\xi_m) e^{2\pi i x\cdot (\xi_1+\cdots+\xi_m)} \,d\xi_1\cdots\,d\xi_m \quad (x \in {\mathbb R}^n) \] for $f_1,\ldots,f_m \in {\mathscr S}$. Here $\mathscr S$ is the space of Schwartz functions and $\widehat f (\xi) = \int_{\mathbb R^n} f(x) e^{-2\pi i x\cdot \xi}dx$ is the Fourier transform of a given Schwartz function $f$ on $\mathbb R^n$. The space of tempered distributions is denoted by ${\mathscr S}'$. Certain conditions on $\sigma$ imply that ${\mathcal T}_\sigma$ extends to a bounded linear operator from $L^{p_1} \times \cdots \times L^{p_m}$ to $L^p$ as long as $1<p_1,\ldots,p_m \le \infty$ and $0<p<\infty$ satisfies \begin{equation}\label{Holder} \frac{1}{p} = \frac{1}{p_1}+\cdots+\frac{1}{p_m}. \end{equation} One such condition, due to Coifman and Meyer and modeled after the classical Mihlin linear multiplier condition, is the following: \begin{equation}\label{CMcond} |\partial^\alpha \sigma(\xi_1,\ldots,\xi_m)| \lesssim (|\xi_1|+\cdots+|\xi_m|)^{-|\alpha|}, \quad (\xi_1,\ldots,\xi_m) \in ({\mathbb R}^n)^m \setminus \{0\} \end{equation} for $\alpha \in ({\mathbb N}_0{}^n)^m$ satisfying $|\alpha| \le M$ for some large $M$. Such operators are called $m$-linear Calder\'{o}n-Zygmund operators and there is a rich theory for them analogous to the linear one. An $m$-linear Calder\'{o}n-Zygmund operator associated with a Calder\'{o}n-Zygmund kernel $K$ on $\mathbb{R}^{mn}$ is defined by \begin{equation}\label{eq.CalZygOPT} {\mathcal T}_\sigma(f_1,\ldots,f_m)(x) = \int_{({\mathbb R}^n)^m} K(x-y_1,\ldots,x-y_m) f_1(y_1)\cdots f_m(y_m)\; dy_1\cdots dy_m, \end{equation} where $\sigma$ is the distributional Fourier transform of $K$ on $({\mathbb R}^n)^m$ that satisfies (\ref{CMcond}).
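To keep a concrete model in mind (a standard illustration, not needed in the sequel): for $m=1$, condition \eqref{CMcond} is the classical Mihlin condition, and it is satisfied, for instance, by the Riesz transform symbols $\sigma(\xi)=-i\xi_j/|\xi|$. Indeed, $\sigma$ is smooth on ${\mathbb R}^n\setminus\{0\}$ and homogeneous of degree $0$, so each derivative $\partial^\alpha\sigma$ is homogeneous of degree $-|\alpha|$ and therefore
\[ |\partial^\alpha \sigma(\xi)| \le \Big(\sup_{|\eta|=1}|\partial^\alpha\sigma(\eta)|\Big)\,|\xi|^{-|\alpha|}, \quad \xi \ne 0. \]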
When $m=1$, these operators reduce to classical Calder\'{o}n-Zygmund singular integral operators. An $m$-linear operator of {\it product type} on $\mathbb{R}^{mn}$ is defined by \begin{equation}\label{eq.CalZygOPT-2} \sum_{\rho=1}^T T_{\sigma_1^\rho}(f_1)(x)\cdots T_{\sigma_m^\rho}(f_m)(x) \quad (x \in {\mathbb R}^n), \end{equation} where the $T_{\sigma_j^\rho}$'s are linear Calder\'{o}n-Zygmund operators associated with the multipliers $\sigma_j^\rho$. In terms of kernels these operators can be expressed as \[ {\mathcal T}_\sigma(f_1,\ldots,f_m)(x) = \sum_{\rho=1}^T \prod_{j=1}^m \int_{{\mathbb R}^n} K^\rho_{\sigma_j}(x-y_j) f_j(y_j) dy_j, \] where $K^\rho_{\sigma_1},\ldots,K^\rho_{\sigma_m}$ are the Calder\'{o}n-Zygmund kernels of the operators $T_{\sigma_1^\rho},\ldots,T_{\sigma_m^\rho}$, respectively, for $\rho=1,\ldots,T$. In this work we also consider operators of {\it mixed type}, i.e., of the form \begin{equation}\label{eq.CalZygOPT-3} {\mathcal T}_\sigma(f_1,\ldots,f_m)(x) = \sum_{\rho=1}^T \sum_{\substack{I_1^\rho,\ldots,I_{G(\rho)}^\rho }} \prod_{g=1}^{G(\rho)} T_{\sigma_{I_g^\rho}}(\{f_l\}_{l \in I_g^{{\rho}} })(x), \end{equation} where for each $\rho=1,\ldots,T$, $I^\rho_1, \cdots ,I^\rho_{G(\rho)}$ is a partition of $\{1,\ldots,m\}$ and each $T_{\sigma_{I_g^\rho}}$ is an $| I_g^\rho|$-linear Coifman-Meyer multiplier operator. We write $I^\rho_1+ \cdots + I^\rho_{G(\rho)}=\{1,\ldots,m\}$ to denote such partitions. In summary, we study operators of the form \eqref{eq.CalZygOPT}, \eqref{eq.CalZygOPT-2}, and \eqref{eq.CalZygOPT-3}. We will be working with indices in the following range \[ 0<p_1,\ldots,p_m \le \infty, \quad 0<p< \infty \] that satisfy (\ref{Holder}). Throughout this paper we reserve the letter $s$ to denote the following index: \begin{equation}\label{eq:s} s=[n(1/p-1)]_+ \end{equation} and we fix $N\gg s$ a sufficiently large integer, say $N=m(n+1+2s)$.
We recall that a $(p,\infty)$-atom is an $L^\infty$-function $a$ that satisfies $|a|\le \chi_Q$, where $Q$ is a cube on $\mathbb R^n$ with sides parallel to the axes and \[ \int_{{\mathbb R}^n}x^\alpha a(x)\,dx=0 \] for all $\alpha$ with $|\alpha| \le N$. By convention, when $p=\infty$, $a$ is called an $(\infty,\infty)$-atom if $Q= \mathbb{R}^n$ and $\|a\|_{L^{\infty}} \le 1.$ No cancellations are required for $(\infty,\infty)$-atoms. Our main results are as follows: \begin{theorem}\label{ThmMain} Let ${\mathcal T}_\sigma$ be the operator defined in \eqref{eq.CalZygOPT} and assume that it satisfies \eqref{CMcond}. Let $0<p_1,\ldots,p_m \le \infty$ and $0<p<\infty$ satisfy $(\ref{Holder})$. Assume that \begin{equation}\label{eq.TmCan} \int_{{\mathbb R}^n} x^{\alpha}{\mathcal T}_\sigma(a_1,\ldots,a_m)(x)\; dx=0, \end{equation} for all $|\alpha|\le s$ and all $(p_l,\infty)$-atoms $a_l$. Then ${\mathcal T}_\sigma$ can be extended to a bounded map from $H^{p_1}\times\cdots\times H^{p_m}$ to $H^p$. \end{theorem} \begin{theorem}\label{ThmMain-2} Let ${\mathcal T}_\sigma$ be the operator defined in \eqref{eq.CalZygOPT-2}, where each $\sigma_j^\rho$ satisfies \eqref{CMcond} with $m=1$, and let $0<p_1,\ldots,p_m< \infty$ and $0<p<\infty$ satisfy $(\ref{Holder})$. Assume that $(\ref{eq.TmCan})$ holds for all $|\alpha|\le s$. Then ${\mathcal T}_\sigma$ can be extended to a bounded map from $H^{p_1}\times\cdots\times H^{p_m}$ to $H^p$. \end{theorem} \begin{theorem}\label{ThmMain-3} Let ${\mathcal T}_\sigma$ be the operator defined in \eqref{eq.CalZygOPT-3}, and let $0<p_1,\ldots,p_m \le \infty$ and $0<p<\infty$ satisfy $(\ref{Holder})$. Suppose that each $\sigma_{I_g^\rho}$ satisfies \eqref{CMcond} with $m=|I_g^\rho|$. Assume that $(\ref{eq.TmCan})$ holds for all $|\alpha|\le s$ and that \begin{equation}\label{160902-1} \sup_{\rho\, =1,\ldots,T}\ \max_{g =1,\ldots,G(\rho)}\ \inf_{l \in I_g^\rho}p_l<\infty.
\end{equation} Then ${\mathcal T}_\sigma$ can be extended to a bounded map from $H^{p_1}\times\cdots\times H^{p_m}$ to $H^p$. \end{theorem} \begin{remark} \noindent (1) In Theorem \ref{ThmMain-2}, we exclude the cases $p_l=\infty$, $l=1,\ldots, m$. In fact, one cannot expect the mapping property of $\mathcal{T}_\sigma$ given by \eqref{eq.CalZygOPT-2} if $p_l=\infty$ for some $l=1,\ldots,m$. Similarly, in Theorem \ref{ThmMain-3}, we need to assume \eqref{160902-1} instead of excluding the cases $p_l=\infty$ for some $l=1,\ldots,m$. (2) The convergence of the integral in \eqref{eq.TmCan} is a consequence of Lemma \ref{lm.3A1} for all $x$ outside a fixed dilate of the union of the supports of the $a_i$, while the function $\mathcal{T}_\sigma(a_1, \dots, a_m)$ is integrable over any compact set. \end{remark} A few comments about the notation. For brevity we write $d\vec{y}=dy_1\,\cdots\,dy_m$ and we use the symbol $C $ to denote a nonessential constant whose value may vary at different occurrences. For $(k_1,\ldots,k_m) \in {\mathbb Z}^m$, we write $\vec{k}=(k_1,\ldots,k_m)$. We use the notation $A\lesssim B$ to indicate that $A\le C\, B$ for some constant $C$. We denote the Hardy-Littlewood maximal operator by $M$: \begin{equation}\label{eq:160918-1} M f(x)=\sup_{r>0}\frac{1}{r^n}\int_{B(x,r)}|f(y)|\,dy. \end{equation} We say that $A\approx B$ if both $A\lesssim B$ and $B\lesssim A$ hold. The cardinality of a finite set $J$ is denoted by either $|J|$ or $\sharp J$. A cube $Q$ in $\mathbb R^n$ has sides parallel to the axes. For a cube $Q$, we denote by $Q^*$ the cube with the same center as $Q$ whose side length is $3\sqrt{n}$ times that of $Q$; then \begin{equation}\label{eq:QStar} Q^*=3\sqrt{n}\,Q, \quad Q^{**}=(Q^*)^*=9n\, Q. \end{equation} \section{Preliminaries and related results} \label{s:160725-2} \subsection{Equivalent definitions of Hardy spaces} We begin this section by recalling Hardy spaces.
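Before doing so, we record a standard pointwise estimate for the maximal function \eqref{eq:160918-1} of the characteristic function of a cube, which lies behind the quantities $\inf_{z}M\chi_{Q}(z)^{\theta}$ appearing in the later sections (a well-known fact, stated for the reader's convenience): if $Q$ has center $c_Q$ and side length $\ell(Q)$, then
\[ M\chi_Q(x) \approx \Big(\frac{\ell(Q)}{\ell(Q)+|x-c_Q|}\Big)^{n}, \quad x \in {\mathbb R}^n, \]
with implicit constants depending only on the dimension $n$.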
Let $\phi \in \mathscr C^\infty_{\rm c}$ satisfy \begin{equation}\label{eq:160716-101} {\rm supp}(\phi) \subset \left\{x\in \mathbb{R}^n\ :\ |x|\le 1\right\} \end{equation} and \begin{equation}\label{eq:160716-102} \int_{{\mathbb R}^n} \phi(y)\; dy=1. \end{equation} For $t>0$, we set $\phi_t(x)=t^{-n}\phi(t^{-1}x)$. The maximal function $M_\phi$ associated with the smooth bump $\phi$ is given by: \begin{equation}\label{eq:M phi} M_\phi(f)(x) = \sup_{t>0}\big| (\phi_t*f)(x) \big| = \sup_{t>0}\Big| t^{-n}\int_{{\mathbb R}^n} \phi\big(y/t\big)f(x-y)\; dy \Big| \end{equation} for $f\in \mathscr S'(\mathbb R^n)$. For $0<p<\infty$, the Hardy space $H^{p}$ is characterized as the space of all tempered distributions $f$ for which $M_\phi(f)\in L^p$; also the $H^p$ quasinorm satisfies \[ \|f\|_{H^p} \approx \|M_\phi(f)\|_{L^p}. \] Denote by $\mathscr C^\infty_{\rm c}$ the space of all smooth functions on $\mathbb{R}^n$ with compact support. The following density property of Hardy spaces will be useful in the proof of the main theorems. \begin{proposition}[{\rm \cite[Chapter III, 5.2(b)]{SteinHA}}] \label{prop:160726-1} Let $N \gg s$ be fixed. Then the following space is dense in $H^p$: \[ \mathcal{O}_N(\mathbb{R}^n)= \bigcap_{\alpha \in {\mathbb N}_0^n, |\alpha| \le N} \left\{f \in \mathscr C^\infty_{\rm c}\,:\, \int_{{\mathbb R}^n}x^\alpha f(x)\,dx=0\right\}, \] where $\mathscr C^\infty_{\rm c}$ is the space of all smooth functions with compact support in $\mathbb{R}^n.$ \end{proposition} The usefulness of this definition of the Hardy space is demonstrated by the following theorem: \begin{theorem}[\cite{Nakai-Sawano-2014}] \label{th-molecule} Let $0<p<\infty$.
If $f \in H^p$, then there exist a collection of $(p,\infty)$-atoms $\{a_k\}_{k=1}^\infty$ and a nonnegative sequence $\{\lambda_k\}_{k=1}^\infty$ such that \[ f=\sum_{k=1}^\infty \lambda_k a_k \] in ${\mathscr S}'({\mathbb R}^n)$ and that we have \begin{equation*} \Big\| \sum_{k=1}^\infty \lambda_k\chi_{Q_k} \Big\|_{L^p} \lesssim \|f\|_{H^p}. \end{equation*} Moreover, if $f \in \mathscr C^\infty_{\rm c}$ and $\displaystyle \int_{{\mathbb R}^n}x^\alpha f(x)\,dx=0 $ for all $\alpha$ with $|\alpha| \le [n(1/p-1)]_+$, then we can arrange that $\lambda_k=0$ for all but finitely many $k$. \end{theorem} The following lemma, whose proof is just an application of the Fefferman-Stein vector-valued inequality for the maximal function, will be used frequently in the next sections. \begin{lemma} \label{lm.2B00} If $0<p<\infty$, $\gamma >\max(1,\frac1p)$, $\lambda_k\ge 0$, and $\{Q_k\}_k$ is a sequence of cubes, then \[ \Big\| \sum_{k}\lambda_k (M\chi_{Q_k})^{\gamma} \Big\|_{L^p} \lesssim \Big\|\sum_{k}\lambda_k \chi_{Q_k}\Big\|_{L^p}. \] In particular \[ \Big\|\sum_{k}\lambda_k \chi_{Q_k^{**}}\Big\|_{L^p} \lesssim \Big\|\sum_{k}\lambda_k \chi_{Q_k}\Big\|_{L^p}. \] \end{lemma} We will also make use of the following result: \begin{lemma}\label{lm.2A00} Let $p \in(0,\infty)$. Assume that $q \in (p,\infty] \cap [1,\infty]$. Suppose that we are given a sequence of cubes $\{Q_j\}_{j=1}^\infty$ and a sequence of non-negative $L^q$-functions $\{F_j\}_{j=1}^\infty$. Then \[ \Big\| \sum_{j=1}^\infty \chi_{Q_j}F_j \Big\|_{L^p} \lesssim \Big\| \sum_{j=1}^\infty \left( \frac{1}{|Q_j|} \int_{Q_j}F_j(y)^q\,dy\right)^{1/q} \chi_{Q_j} \Big\|_{L^p}. \] \end{lemma} \begin{proof} See \cite{HuMe12} for the case of $0<p \le 1$ and \cite{Nakai-Sawano-2014}, \cite{Sawano13} for the case of $1<p<\infty$.
\end{proof} \subsection{Reductions in the proof of main results} To start the proof of the main results, let $p_1,\ldots,p_m$ and $p$ be given as in Theorems \ref{ThmMain}, \ref{ThmMain-2} or \ref{ThmMain-3} and note that $H^{p_l}\cap\mathcal{O}_N(\mathbb{R}^n)$ is dense in $H^{p_l}$ for $1\le l\le m$ and $0<p_l<\infty$. Recall the integer $N\gg s$ and fix $f_l\in H^{p_l}\cap\mathcal{O}_N(\mathbb{R}^n)$ for which $0<p_l<\infty$. By Theorem \ref{th-molecule}, we can decompose $f_l = \sum_{k_l =1}^\infty \lambda_{l,k_l} a_{l,k_l}$, where $\{\lambda_{l,k_l}\}_{k_l=1}^\infty$ is a non-negative sequence with only finitely many nonzero terms and $\{a_{l,k_l}\}_{k_l=1}^\infty$ is a sequence of $(p_l,\infty)$-atoms such that $a_{l,k_l}$ is supported in a cube $Q_{l,k_l}$ satisfying \[ |a_{l,k_l}| \le \chi_{Q_{l,k_l}},\quad \int_{{\mathbb R}^n}x^\alpha a_{l,k_l}(x)\,dx=0,\quad \ |\alpha| \le N \] and that \begin{equation}\label{crucial77} \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l}\chi_{Q_{l,k_l}} \Big\|_{L^{p_l}} \lesssim \|f_l\|_{H^{p_l}}. \end{equation} If $p_l=\infty$ and $f_l\in L^{\infty}$, then we can conventionally rewrite $f_l=\lambda_{l,k_l} a_{l,k_l}$ where $\lambda_{l,k_l}=\|f_l\|_{L^{\infty}}$ and $a_{l,k_l}=\|f_l\|_{L^{\infty}}^{-1}f_l$ is an $(\infty,\infty)$-atom supported in $Q_{l,k_l}=\mathbb{R}^n$. In this case the sum in \eqref{crucial77} reduces to a single term. By the multi-sublinearity of $M_\phi\circ \mathcal{T}_\sigma$, we can estimate \[ M_\phi\circ \mathcal{T}_\sigma(f_1,\ldots,f_m)(x)\le \sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})(x). \] To prove Theorems~\ref{ThmMain}, ~\ref{ThmMain-2}, and~\ref{ThmMain-3}, it now suffices to establish the following result: \begin{proposition} \label{LM.Key-31} Let ${\mathcal T}_\sigma$ be the operator defined in \eqref{eq.CalZygOPT}, \eqref{eq.CalZygOPT-2} or \eqref{eq.CalZygOPT-3}.
Let $p_1,\ldots,p_m$ and $p$ be given as in corresponding Theorems \ref{ThmMain}, \ref{ThmMain-2} or \ref{ThmMain-3}. Then we have \begin{equation}\label{eq.PWEST} \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\Big\|_{L^p} \lesssim \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l}\chi_{Q_{l,k_l}} \Big\|_{L^{p_l}}. \end{equation} \end{proposition} Notice that in view of \eqref{crucial77} and Proposition \ref{LM.Key-31}, one obtains the required estimate $$ \| {\mathcal T}_\sigma(f_1,\dots , f_m) \|_{H^p} = \| M_\phi\circ \mathcal{T}_\sigma(f_1,\ldots,f_m) \|_{L^p} \lesssim \| f_1\|_{H^{p_1}} \cdots \| f_m\|_{H^{p_m}}\, . $$ We may therefore focus on the proof of Proposition~\ref{LM.Key-31}. In the sequel we will prove \eqref{eq.PWEST}. Its proof will depend on whether ${\mathcal T}_\sigma$ is of type \eqref{eq.CalZygOPT}, \eqref{eq.CalZygOPT-2} or \eqref{eq.CalZygOPT-3}. The detailed proof for each type is given in the subsequent sections. \section{The Coifman-Meyer type} Throughout this section, $\mathcal{T}_\sigma$ denotes the operator defined in \eqref{eq.CalZygOPT}. The main purpose of this section is to establish \eqref{eq.PWEST} for $\mathcal{T}_\sigma$. \subsection{Fundamental estimates for the Coifman-Meyer type} We treat the case of Coifman-Meyer multiplier operators whose symbols satisfy \eqref{CMcond}. The study of such operators was initiated by Coifman and Meyer \cite{CM2}, \cite{CM3} and was later pursued by Grafakos and Torres \cite{GrafakosTorresAdvances}; see also \cite{G} for an account. Denoting by $K$ the inverse Fourier transform of $\sigma$, in view of \eqref{CMcond}, we have \[ |\partial^{\beta}_{y} K(y_1,\ldots, y_m)| \lesssim \big( \sum_{i=1}^m |y_i| \big)^{-mn -|\beta|},\quad (y_1,\ldots,y_m)\ne (0,\ldots,0) \] for all $\beta=(\beta_1,\ldots,\beta_m)\in{\mathbb N}_0{}^{mn}=({\mathbb N}_0{}^n)^m$ and $|\beta|\le N$.
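Let us briefly sketch the standard derivation of these kernel estimates (classical, and included only for the reader's convenience). Choose a Littlewood-Paley partition of unity $\{\psi_j\}_{j\in{\mathbb Z}}$ on $({\mathbb R}^n)^m\setminus\{0\}$ with ${\rm supp}\,\psi_j \subset \{2^{j-1}\le |\vec\xi\,| \le 2^{j+1}\}$ and write $K=\sum_{j\in{\mathbb Z}}(\sigma\psi_j)^{\vee}$. Condition \eqref{CMcond} together with integration by parts gives, for suitably large $L$ (available since $M$ is large),
\[ |\partial^{\beta}(\sigma\psi_j)^{\vee}(\vec y\,)| \lesssim 2^{j(mn+|\beta|)}\big(1+2^{j}|\vec y\,|\big)^{-L}, \]
and summing separately over $2^{j}\le |\vec y\,|^{-1}$ and over $2^{j}>|\vec y\,|^{-1}$ (using $L>mn+|\beta|$ in the second range) yields $|\partial^{\beta}K(\vec y\,)|\lesssim |\vec y\,|^{-mn-|\beta|}$.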
Examining carefully the smoothness of the kernel, we obtain the following estimates: \begin{lemma} \label{lm.3A1} Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. Let $\Lambda$ be a non-empty subset of $\{1,\ldots,m\}$. Then we have \[ |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)| \lesssim \dfrac{\big(\min\{\ell(Q_k) : k\in \Lambda\}\big)^{n+N+1}} {\big(\sum_{k\in \Lambda}|y-c_k|\big)^{n+N+1}} \] for all $y\notin \cup_{k\in \Lambda}Q_k^*$. \end{lemma} \begin{proof} We may suppose that $\Lambda=\{1,\ldots,r\}$ for some $1\le r\le m$ and that \[ \ell(Q_1) = \min\{\ell(Q_k) : k\in \Lambda\}. \] Let $c_k$ be the center of $Q_k$ and fix $y\notin \cup_{k\in \Lambda}Q_k^*$. Using the cancellation of $a_1$ we can rewrite \begin{align} \notag \mathcal{T}_\sigma(a_1,\ldots,a_m)(y) =& \int_{\mathbb{R}^{mn}}K(y-y_1,\ldots,y-y_m)a_1(y_1)\cdots a_m(y_m)d\vec{y}\\ \notag =& \int_{\mathbb{R}^{mn}}\big[K(y-y_1,\ldots,y-y_m)-P_N(y,y_1,y_2,\ldots,y_m)\big] a_1(y_1)\cdots a_m(y_m)d\vec{y}\\ =& \int_{\mathbb{R}^{mn}}K^1(y,y_1,y_2,\ldots,y_m) a_1(y_1)\cdots a_m(y_m)d\vec{y}, \label{eq.3A5} \end{align} where \begin{equation*} P_N(y,y_1,y_2,\ldots,y_m) = \sum_{|\alpha|\le N}\frac{1}{\alpha!} \partial^{\alpha}_{1}K(y-c_1,y-y_2,\ldots,y-y_m)(c_1-y_1)^{\alpha} \end{equation*} is the Taylor polynomial of degree $N$ of $K(y-\cdot,y-y_2,\ldots,y-y_m)$ at $c_1$ and \begin{equation} \label{eq.3A6} K^1(y,y_1,\ldots,y_m) = K(y-y_1,\ldots,y-y_m)-P_N(y,y_1,y_2,\ldots,y_m). \end{equation} By the smoothness condition of the kernel and the fact that \[ |y-y_k|\approx |y-c_k| \] for all $k \in\Lambda$ and $y_k\in Q_k$ we can estimate \begin{align*} \big|K^1(y,y_1,y_2,\ldots,y_m)\big|\lesssim& |y_1-c_1|^{N+1}\Big( \sum_{k\in \Lambda}|y-c_k|+\sum_{j=2}^m|y-y_j| \Big)^{-mn-N-1}.
\end{align*}
Thus,
\begin{align*}
|\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)| \lesssim& \int_{\mathbb{R}^{mn}}\frac{|y_1-c_1|^{N+1}|a_1(y_1)|\cdots |a_m(y_m)|} {\Big( \sum_{k\in \Lambda}|y-c_k|+\sum_{j=2}^m|y-y_j| \Big)^{mn+N+1}} d\vec{y}\\
\lesssim& \int_{\mathbb{R}^{(m-1)n}}\frac{ \ell(Q_1)^{n+N+1}} {\Big( \sum_{k\in \Lambda}|y-c_k|+\sum_{j=2}^m|y_j| \Big)^{mn+N+1}} dy_2\cdots dy_m\\
\lesssim& \frac{\ell(Q_1)^{n+N+1}} {\Big( \sum_{k\in \Lambda}|y-c_k| \Big)^{n+N+1}}.
\end{align*}
\end{proof}
\begin{lemma}\label{lem:160726-51}
Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. Suppose $Q_1$ is the cube such that $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$. Then for fixed $1\le r<\infty$ and $j\in \mathbb{N}$, we have
\begin{align}
\label{eq.3A2}
\|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^{r}}&\lesssim |Q_1|^{\frac{1}{r}} \prod_{l=1}^m \inf_{z\in Q_1^{*}} M\chi_{Q_l}(z)^\frac{n+N+1}{mn},\\
\label{eq.3A3}
\|M\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^{r}}&\lesssim |Q_1|^{\frac{1}{r}} \prod_{l=1}^m \inf_{z\in Q_1^{*}} M\chi_{Q_l}(z)^\frac{n+N+1}{mn}.
\end{align}
Furthermore, if $Q_0$ is a cube such that $\ell(Q_0)\le \ell(Q_1)$ and $2^jQ_0^{**}\cap 2^jQ_l^{**}=\emptyset$ for some $l$, then
\begin{equation}\label{eq.3D3}
\|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{2^jQ_0^{**}}\|_{L^\infty} \lesssim \prod_{l=1}^m \inf_{z\in 2^jQ_0^{*}} M\chi_{2^jQ_l^{**}}(z)^\frac{n+N+1}{mn}.
\end{equation}
In particular, under the above assumption,
\begin{align}\label{eq.3D4}
\Big( \frac{1}{|2^jQ_0^{**}|} \int_{2^jQ_0^{**}} |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)|^rdy \Big)^{\frac1r} &\lesssim \prod_{l=1}^m \inf_{z\in 2^jQ_0^{*}} M\chi_{2^jQ_l^{**}}(z)^\frac{n+N+1}{mn}.
\end{align}
\end{lemma}
\begin{proof}
To check \eqref{eq.3A2}, it is enough to consider $1<r<\infty$ and the following two cases.
First, if $Q_1^{**}\cap Q_k^{**}\ne \emptyset$ for all $2\le k\le m$, then, by the assumption $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$, $Q_1^{**}\subset 3Q_k^{**}$ for all $1\le k\le m$. This implies
\[
\inf_{z\in Q_1^{*}}M\chi_{3Q_k^{**}}(z)\ge 1,
\]
for all $1\le k\le m.$ Now the boundedness of $\mathcal{T}_\sigma$ from $L^r\times L^\infty\times\cdots\times L^\infty$ to $L^r$ yields
\begin{align}
\notag
\|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^r} \le& \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\|_{L^r}\\
\notag
\lesssim& \|a_1\|_{L^r}\|a_2\|_{L^\infty}\cdots \|a_m\|_{L^\infty}\\
\label{eq.3A4}
\lesssim& |Q_1|^{\frac{1}{r}} \prod_{k=1}^m\inf_{z\in Q_1^{*}}M\chi_{3Q_k^{**}}(z)^\frac{n+N+1}{mn}.
\end{align}
Second, if $Q_1^{**}\cap Q_k^{**}=\emptyset$ for some $k$, then the set
\[
\Lambda = \{2\le k\le m : Q_1^{**}\cap Q_k^{**}=\emptyset\}
\]
is a non-empty subset of $\{2,\ldots, m\}$. Fix an arbitrary $y\in \mathbb{R}^n$. By the cancellation of $a_1$, we rewrite
\[
\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)= \int_{\mathbb{R}^{mn}}K^1(y,y_1,y_2,\ldots,y_m) a_1(y_1)\cdots a_m(y_m)d\vec{y},
\]
where $K^1(y,y_1,\ldots,y_m)$ is defined in \eqref{eq.3A6}. For $y_1\in Q_1$ we estimate
\begin{align*}
\big|K^1(y,y_1,\ldots,y_m)\big|\le& C\ell(Q_1)^{N+1}\Big( |y-\xi_1|+\sum_{j=2}^m|y-y_j| \Big)^{-mn-N-1},
\end{align*}
for some $\xi_1\in Q_1$ and for all $y_l\in Q_l$. Since $Q_1^{**}\cap Q_k^{**}=\emptyset$ for all $k\in \Lambda$, $|y-\xi_1|+|y-y_k|\ge |\xi_1-y_k| \ge C|c_1-c_k|$ for all $y_k\in Q_k$ and $k\in \Lambda$. Therefore
\[
\big|K^1(y,y_1,\ldots,y_m)\big|\lesssim \ell(Q_1)^{N+1}\Big( \sum_{k\in \Lambda}|c_1-c_k| +\sum_{j=2}^{m}|y-y_j| \Big)^{-mn-N-1},
\]
for all $y_1\in Q_1^*$ and $y_k\in Q_k$ for $k\in \Lambda$.
Insert the above inequality into \eqref{eq.3A5} to obtain \[ |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)| \lesssim \dfrac{\ell(Q_1)^{n+N+1}} {\big(\sum_{k\in \Lambda}|c_1-c_k|\big)^{n+N+1}} \lesssim \dfrac{\ell(Q_1)^{n+N+1}} {\sum_{k\in \Lambda}\big[\ell(Q_1)+|c_1-c_k|+\ell(Q_k)\big]^{n+N+1}}. \] Noting that $Q_1^{**}\subset 3Q_l^{**}$ for $l\notin \Lambda,$ the last inequality gives \begin{equation} \label{eq.3A8} \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\|_{L^\infty} \lesssim \prod_{k=1}^m\inf_{z\in Q_1^{*}}M\chi_{3Q_k^{**}}(z)^\frac{n+N+1}{mn}, \end{equation} which yields \begin{equation} \label{eq.3A9} \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^r} \lesssim |Q_1|^{\frac{1}{r}} \prod_{k=1}^m\inf_{z\in Q_1^{*}}M\chi_{3Q_k^{**}}(z)^\frac{n+N+1}{mn}. \end{equation} Combining \eqref{eq.3A4} and \eqref{eq.3A9} and noting that $M\chi_{3Q} \lesssim M\chi_Q$, we obtain \eqref{eq.3A2}. Similarly, we can prove \eqref{eq.3A3}--\eqref{eq.3D3}. For example, to show \eqref{eq.3A3}, we again consider the case where $Q_1^{**}\cap Q_l^{**}\neq \emptyset$ holds for all $l$ and the case where this fails. In the first case, using the boundedness of $M$ on $L^r$, we arrive at the same situation as above. In the second case, we use the boundedness of $M$ on $L^\infty$ to see \[ \|M\circ \mathcal{T}_\sigma(a_1,\dots,a_m)\chi_{Q_1^{**}}\|_{L^r} \lesssim |Q_1|^\frac{1}{r} \| \mathcal{T}_\sigma(a_1,\dots,a_m)\|_{L^\infty}. \] Notice that the right-hand side is already treated in \eqref{eq.3A8}. \end{proof} Lemma \ref{lem:160726-51} will be used to study the behavior of the operator $M_\phi\circ \mathcal{T}_\sigma$ inside $Q_1^{**}$. For the region outside of $Q_1^{**}$, we need the following estimates. \begin{lemma} \label{lm.3A10} Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. If $p_k=\infty$ then $Q_k=\mathbb{R}^n$. Suppose that $Q_1$ is the cube for which $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$. Fix $0<t<\infty$. 
\begin{enumerate} \item If $x \notin Q_1^{**}$ and $c_1 \notin B(x,100n^2t)$, then \begin{equation}\label{eq.3A10} \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{m n}}. \end{equation} \item If $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$, then \begin{equation}\label{eq.3A11} \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^*}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^{\frac{n+N+1}{m n}}, \end{equation} and \begin{equation}\label{eq.3A12} \frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c} |y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^{\frac{N-s}{m n}}. \end{equation} \item For all $x\notin Q_1^{**}$, we have \begin{align} \label{eq.3B12} M_\phi\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)(x)\lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{m n}} + M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} \Big( M\chi_{Q_l}(z)^{\frac{N-s}{m n}} \Big). \end{align} \end{enumerate} \end{lemma} \begin{proof} Fix $x\notin Q_1^{**}$ and denote $\Lambda = \{1\le k\le m : x\notin Q_k^{**}\}$. \textit{(1)} Suppose $c_1 \notin B(x,100n^2t)$. For $y\in B(x,t)$, from \eqref{eq.3A5} we rewrite \[ \mathcal{T}_\sigma(a_1,\ldots,a_m)(y) = \int_{\mathbb{R}^{mn}}K^1(y,y_1,\ldots,y_m) a_1(y_1)\cdots a_m(y_m)d\vec{y}, \] where $K^1$ is defined in \eqref{eq.3A6}. Note that for $y\in B(x,t)$, $y_1\in Q_1$ and $c_1\notin B(x,100n^2t)$, we have \[ t\lesssim |x-c_1|\lesssim |y-y_1|. \] Since $x\notin Q_k^{**}$ for all $k\in \Lambda$, \[ |x-c_k|\lesssim |x-y_k|\lesssim t+ |y-y_k|\lesssim |y-y_1|+ |y-y_k| \] for all $k \in \Lambda$ and $y_k \in Q_k$. 
Consequently, \begin{align} \label{eq.3A13} \left|K^{1}(y,y_1,\ldots,y_m)\prod_{l=1}^m a_l(y_l)\right| \lesssim \frac{\ell(Q_1)^{N+1}\chi_{Q_1}(y_1)}{\displaystyle \left(\sum_{l=2}^m |y-y_l|+ \sum_{k \in \Lambda} |x-c_k|\right)^{m n+N+1}}. \end{align} Integrating (\ref{eq.3A13}) over $({\mathbb R}^n)^m$, and using that $\ell(Q_1)\leq \ell(Q_l)$ for all $2\le l\le m$, we obtain that \begin{align*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)| &\lesssim \frac{\ell(Q_1)^{n+N+1}}{\displaystyle \left(\sum_{l \in \Lambda} |x-c_l|\right)^{n+N+1}} \\ &\lesssim \prod_{l\in \Lambda} \frac{\ell(Q_l)^{\frac{n+N+1}{|\Lambda|}}}{\displaystyle |x-c_l|^{\frac{n+N+1}{|\Lambda|}}} \chi_{(Q_l^{**})^c}(x) \cdot \prod_{k\notin \Lambda}\chi_{Q_k^{**}}(x) \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{m n}}. \end{align*} This pointwise estimate proves \eqref{eq.3A10}. \textit{(2)} Assume $c_1 \in B(x,100n^2t)$. Fix $1<r<\infty$ and estimate the left-hand side of \eqref{eq.3A11} by \[ \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}}|Q_1|^{1-\frac1{r}} \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^r} \lesssim \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^{\frac{n+N+1}{m n}}, \] where we used \eqref{eq.3A2} in the above inequality. Since $x\notin Q_1^{**}$ and $c_1\in B(x,100n^2t)$, $Q_1^*\subset B(x,1000n^2t)$ and hence, $\ell(Q_1)/t\lesssim M\chi_{Q_1}(x)$. This combined with the last inequality implies \eqref{eq.3A11}. To verify \eqref{eq.3A12}, we recall the expression of $\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)$ in \eqref{eq.3A5} and the pointwise estimate for $K^1(y,y_1,\ldots,y_m)$ defined in \eqref{eq.3A6}. Denote $J=\{2\le k\le m : Q_1^{**}\cap Q_k^{**}=\emptyset\}$. 
Using the facts that $|y-y_1|\sim |y-c_1|\geq \ell(Q_1)$ for $y\notin Q_1^*$, $y_1\in Q_1$, and $|y-y_1|+|y-y_l|\geq |y_1-y_l|\gtrsim |z-c_l|$ for all $z\in Q_1^*$ and $l\in J$, we now estimate
\begin{align*}
|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)| \lesssim \int_{({\mathbb R}^n)^m} \frac{\ell(Q_1)^{N+1}\chi_{Q_1}(y_1)\,d\vec{y}}{ \left( \ell(Q_1)+|y-c_1|+ \displaystyle \sum_{l\in J} |z-c_l| +\sum_{l=2}^m|y-y_l| \right)^{m n+N+1}}
\end{align*}
for all $y \in (Q_1^*)^c$ and $z\in Q_1^*$. Thus,
\begin{align*}
&\frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\
&\lesssim \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \times ({\mathbb R}^n)^m} \frac{|y-c_1|^{s+1}\ell(Q_1)^{N+1}\chi_{Q_1}(y_1)\,d\vec{y}dy} {\displaystyle\bigg(\ell(Q_1)+|y-c_1| +\sum_{l\in J}|c_1-c_l| +\sum_{l=2}^m |y-y_l|\bigg)^{m n+N+1}} \\
&\lesssim \left( \frac{\ell(Q_1)}{t} \right)^{n+s+1} \prod_{l\in J} \left( \frac{\ell(Q_l)}{|z-c_l|} \right)^\frac{N-s}{m}.
\end{align*}
Note that $ 1\lesssim\inf_{z\in Q_1^*}M\chi_{2Q_l^{**}}(z) $ if $Q_1^{**} \cap Q_l^{**} \ne \emptyset$; otherwise, $\ell(Q_l)/|z-c_l|\lesssim M\chi_{2Q_l^{**}}(z)^\frac{1}{n}$ for all $z\in Q_1^*$. Consequently,
\begin{align*}
\frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy &\lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^{\frac{N-s}{mn}},
\end{align*}
which proves \eqref{eq.3A12}.

\textit{(3)} It remains to prove \eqref{eq.3B12}. Fix $x\notin Q_1^{**}$. To calculate $M_{\phi}\circ \mathcal{T}_{\sigma}(a_1,\ldots,a_m)(x)$, we need to estimate
\[
\Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big|
\]
for each $t\in (0,\infty)$. We consider two cases: $c_1 \notin B(x,100n^2t)$ and $c_1 \in B(x,100n^2t)$.
In the first case, since $\phi$ is supported in the unit ball,
\begin{equation*}
\Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy.
\end{equation*}
Since $c_1 \notin B(x,100n^2t)$, \eqref{eq.3A10} implies that
\begin{equation}\label{eq.3A14}
\Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{m n}}.
\end{equation}
In the second case, we will exploit the moment condition of ${\mathcal T}_\sigma(a_1,\ldots,a_m)$. Denote
\begin{equation} \label{eq.3C01}
\delta_1^s(t;x,y) = \phi_t(x-y) - \sum_{|\alpha|\le s}\frac{\partial^{\alpha}[\phi_t](x-c_1)}{\alpha!}(c_1-y)^{\alpha}.
\end{equation}
Since the Taylor remainder satisfies $|\delta_1^s(t;x,y)|\lesssim t^{-n-s-1}|y-c_1|^{s+1}$ for all $x,y$, the moment condition \eqref{eq.TmCan} gives
\begin{align}
\notag
\Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big|&= \Big| \int_{{\mathbb R}^n} \delta^s_1(t;x,y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \\
\notag
\lesssim& \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\
\notag
=& \frac{1}{t^{n+s+1}} \int_{Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\
\notag
&+ \frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\
&\lesssim \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^*}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \label{eq.3C02}\\
\notag
&+ \frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy.
\end{align}
Invoking \eqref{eq.3A11} and \eqref{eq.3A12}, we obtain
\begin{align}
\notag
& \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big|\\
\notag
&\lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} \Big[ M\chi_{Q_l}(z)^{\frac{n+N+1}{m n}} + M\chi_{Q_l}(z)^{\frac{N-s}{m n}} \Big]\\
\label{eq.3A15}
&\lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} \Big( M\chi_{Q_l}(z)^{\frac{N-s}{m n}} \Big).
\end{align}
Combining \eqref{eq.3A14} and \eqref{eq.3A15} yields the required estimate \eqref{eq.3B12}. The proof of Lemma \ref{lm.3A10} is now complete.
\end{proof}
\subsection{The proof of Proposition \ref{LM.Key-31} for the Coifman-Meyer type}
We now turn to the proof of \eqref{eq.PWEST}, i.e., we estimate
\begin{equation} \label{eq.3C3}
A= \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\Big\|_{L^p} .
\end{equation}
For each $\vec{k}=(k_1,\ldots,k_m)$, we denote by $R_{\vec{k}}$ the cube with the smallest side length among $Q_{1,k_1},\ldots,Q_{m,k_m}$. Then we have $A\lesssim B+G$, where
\begin{equation} \label{eq.3C4}
B = \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\chi_{R_{\vec{k}}^{**}}\Big\|_{L^p}
\end{equation}
and
\begin{equation} \label{eq.3C5}
G = \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\chi_{(R_{\vec{k}}^{**})^c}\Big\|_{L^p}.
\end{equation}
To estimate $B$, fix $r$ with $\max(1,p)<r<\infty$; then Lemma \ref{lm.2A00} and \eqref{eq.3A3} imply
\begin{align*}
B\lesssim & \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) \frac{\chi_{R_{\vec{k}}^{**}}}{|R_{\vec{k}}^{**}|^{\frac1r}} \| M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\chi_{R_{\vec{k}}^{**}} \|_{L^r} \Big\|_{L^p} \\
\lesssim& \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) \Big( \prod_{l=1}^m \inf_{z\in R_{\vec{k}}^{*}} M\chi_{Q_{l,k_l}}(z)^{\frac{n+N+1}{mn}} \Big) \chi_{R_{\vec{k}}^{**}} \Big\|_{L^p}\\
=& \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big( \prod_{l=1}^m \inf_{z\in R_{\vec{k}}^{*}} \lambda_{l,k_l} M\chi_{Q_{l,k_l}}(z)^{\frac{n+N+1}{mn}} \Big) \chi_{R_{\vec{k}}^{**}} \Big\|_{L^p}\\
\lesssim& \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big( \prod_{l=1}^m \inf_{z\in R_{\vec{k}}^{*}} \lambda_{l,k_l} M\chi_{Q_{l,k_l}}(z)^{\frac{n+N+1}{mn}} \Big) \chi_{R_{\vec{k}}^{*}} \Big\|_{L^p},
\end{align*}
where we used Lemma \ref{lm.2B00} in the last inequality. Now we can remove the infimum and apply H\"older's inequality to obtain
\begin{align}
\notag
B\lesssim & \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \prod_{l=1}^m \lambda_{l,k_l} \Big(M\chi_{Q_{l,k_l}}\Big)^{\frac{n+N+1}{mn}} \Big\|_{L^p} = \Big\| \prod_{l=1}^m \sum_{k_l=1}^\infty \lambda_{l,k_l} \Big( M\chi_{Q_{l,k_l}}\Big)^{\frac{n+N+1}{mn}} \Big\|_{L^p}\\
\notag
\le& \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \Big( M\chi_{Q_{l,k_l}}\Big)^{\frac{n+N+1}{mn}} \Big\|_{L^{p_l}} \lesssim \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}^{**}} \Big\|_{L^{p_l}}\\
\label{eq.3A16}
\lesssim& \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}} \Big\|_{L^{p_l}}.
\end{align}
Once again, Lemma \ref{lm.2B00} was used in the last two inequalities.
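In the manipulations above, and repeatedly in what follows, it is convenient to keep in mind the elementary pointwise description of the maximal function of an indicator, which we record here for the reader's convenience (a standard fact; $c_Q$ denotes the center of $Q$):
\[
M\chi_{Q}(x) \approx \Big( \frac{\ell(Q)}{\ell(Q)+|x-c_Q|} \Big)^{n}, \qquad x\in\mathbb{R}^n,
\]
with implicit constants depending only on $n$. In particular, $M\chi_{3Q}\lesssim M\chi_{Q}$ and $M\chi_{2^jQ}\lesssim 2^{jn}M\chi_{Q}$, two facts used throughout.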
To deal with $G$, we use \eqref{eq.3B12} and estimate $G \lesssim G_1+G_2$, where
\[
G_1 = \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) \prod_{l=1}^m \left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{m n}} \Big\|_{L^p}
\]
and
\[
G_2 = \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) \Big( \prod_{l=1}^m \inf_{z \in R_{\vec{k}}^*} M\chi_{Q_{l,k_l}}(z)^{\frac{N-s}{m n}} \Big) (M\chi_{R_{\vec{k}}^*})^{\frac{n+s+1}{n}} \Big\|_{L^p}.
\]
Repeating the argument used to estimate $B$, and noting that $\frac{(n+s+1)p}{n}>1$ and $N\gg s$, we obtain
\begin{equation}\label{eq.3A17}
G \lesssim G_1+ G_2 \lesssim \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}} \Big\|_{L^{p_l}}.
\end{equation}
Combining \eqref{eq.3A16} and \eqref{eq.3A17} yields \eqref{eq.PWEST}. This completes the proof of Proposition \ref{LM.Key-31} for the operator $\mathcal{T}_\sigma$ of type \eqref{eq.CalZygOPT}.
\begin{remark}
The techniques in this paper also work for Calder\'on-Zygmund operators of non-convolution type; this recovers the results in \cite{HuMe12}.
\end{remark}
\section{The product type}
Throughout this section, we denote by $\mathcal{T}_\sigma$ the operator defined in \eqref{eq.CalZygOPT-2} and prove Proposition \ref{LM.Key-31} for this operator. We first establish some results analogous to Lemmas \ref{lem:160726-51} and \ref{lm.3A10}.
\subsection{Fundamental estimates for the product type}
Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. Here and below $M^{(r)}$ denotes the powered maximal operator: $M^{(r)}f(x)=M(|f|^r)(x)^\frac{1}{r}$. Suppose that $Q_1$ is a cube such that $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$. Then we have the following lemmas.
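We also record a standard observation about $M^{(r)}$, stated here for the reader's convenience: since $M$ is bounded on $L^{q/r}$ whenever $q/r>1$,
\[
\|M^{(r)}f\|_{L^q} = \big\| M(|f|^r) \big\|_{L^{q/r}}^{\frac{1}{r}} \lesssim \big\| |f|^r \big\|_{L^{q/r}}^{\frac{1}{r}} = \|f\|_{L^q}, \qquad q>r.
\]
In particular, $M^{(m)}\circ T_{\sigma_l}(a_l)$ inherits $L^q$-bounds from $T_{\sigma_l}a_l$ for every $q>m$.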
\begin{lemma} \label{lm.4A00}
For all $x\in Q_1^{**}$, we have
\begin{align}\label{eq.4A01}
M_\phi\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)(x)\chi_{Q_1^{**}}(x) \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right).
\end{align}
\end{lemma}
\begin{proof}
Fix $x\in Q_1^{**}$. Since $\phi$ is supported in the unit ball,
\[
\Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \frac{1}{t^n} \int_{B(x,t)} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
\]
for each $t\in (0,\infty)$, so it suffices to bound the right-hand side. The proof of \eqref{eq.4A01} is mainly based on the boundedness of $\mathcal{T}_\sigma$ and the smoothness condition of each Calder\'on-Zygmund kernel in \eqref{eq.CalZygOPT-2}. Instead of considering the whole sum in \eqref{eq.CalZygOPT-2}, for notational simplicity, it is convenient to consider one term, i.e.,
\begin{equation}\label{eq.4A02}
{\mathcal T}_\sigma(f_1,\ldots,f_m) = T_{\sigma_1}(f_1) \cdots T_{\sigma_m}(f_m),
\end{equation}
except when cancellation is used, in which case the entire sum is needed. We consider two cases: $t\le \ell(Q_1)$ and $t>\ell(Q_1)$.

\textbf{Case 1: $t\le \ell(Q_1)$}. By the H\"older inequality, we have
\begin{align*}
\frac{1}{t^n} \int_{B(x,t)} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim& \prod_{l=1}^m \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m}.
\end{align*}
Now we decompose the above product according to whether $B(x,t)\cap Q_l^{**}=\emptyset$ or not. Then
\begin{align*}
&\prod_{l=1}^m \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m}\\
=& \prod_{l:B(x,t)\cap Q_l^{**}=\emptyset} \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m} \prod_{l:B(x,t)\cap Q_l^{**}\neq \emptyset} \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m}.
\end{align*}
For the first sub-case, we employ \eqref{eq.3D4}. For the second sub-case, we observe that the assumptions $t\leq \ell(Q_1)\le \ell(Q_l)$ and $B(x,t)\cap Q_l^{**}\neq \emptyset$ imply $B(x,t)\subset 3Q_l^{**}$. As a result,
\begin{align*}
\lefteqn{ \prod_{l=1}^m \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m} }\\
&\lesssim \prod_{l:B(x,t)\cap Q_l^{**}\neq \emptyset} \chi_{3Q_l^{**}}(x) M^{(m)} \circ T_{\sigma_l}(a_l)(x) \prod_{l:B(x,t)\cap Q_l^{**}=\emptyset} M\chi_{Q_l}(x)^\frac{n+N+1}{mn}.
\end{align*}
Thus
\begin{equation} \label{eq.4A03}
\Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right).
\end{equation}

\textbf{Case 2: $t>\ell(Q_1)$}.
Now we can estimate
\begin{align*}
\frac{1}{t^n} \int_{B(x,t)} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim& \frac{1}{|Q_1^*|} \int_{\mathbb{R}^n} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\
=& \frac{1}{|Q_1^*|} \int_{Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\
&+ \frac{1}{|Q_1^*|} \int_{\mathbb{R}^n\setminus Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy.
\end{align*}
By the H\"older inequality and \eqref{eq.3D4}, an argument similar to that used for \eqref{eq.4A03} yields
\begin{align}
\notag
\frac{1}{|Q_1^*|} \int_{Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim& \prod_{l=1}^m \Big( \frac{1}{|Q_1^*|} \int_{Q_1^*} |T_{\sigma_l}a_l(y)|^m\,dy \Big)^{\frac1m} \\
\notag
\lesssim& \prod_{l=1}^m \Big( \inf_{z \in Q_{1}^*} M\chi_{Q_{l}^{**}}(z)^{\frac{n+N+1}{mn}} + \inf_{z \in Q_{1}^*} M^{(m)} \circ T_{\sigma_l}(a_l)(z) \chi_{3Q_l^{**}}(z) \Big) \\
\label{eq.4A05}
\lesssim& \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right),
\end{align}
since $x\in Q_1^{**}$. For the second term, using the decay of $T_{\sigma_1}a_1(y)$ when $y\notin Q_1^*$ as in Lemma \ref{lm.3A1}, we obtain
$$
\frac{1}{|Q_1^*|} \int_{{\mathbb R}^n \setminus Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim \frac{1}{|Q_1^*|} \int_{{\mathbb R}^n \setminus Q_1^*} \frac{\ell(Q_1)^{n+N+1}}{|y-c_1|^{n+N+1}} \prod_{l=2}^m |T_{\sigma_l}a_l(y)|\,dy.
$$ We decompose ${\mathbb R}^n \setminus Q_1^*$ into dyadic annuli and estimate \begin{align*} & \frac{1}{|Q_1^*|} \int_{{\mathbb R}^n \setminus Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ &\lesssim \sum_{j=1}^\infty 2^{j(-N-1)} \frac{1}{|2^jQ_1^*|} \int_{2^{j}Q_1^*} \chi_{2^{j}Q_1^*}(y) \prod_{l=2}^m |T_{\sigma_l}a_l(y)|\,dy\\ &\lesssim \sum_{j=1}^\infty 2^{j(-N-1)} \prod_{l=2}^m \Big( \frac{1}{|2^jQ_1^*|}\int_{2^jQ_1^*}|T_{\sigma_l}a_l(y)|^m\,dy \Big)^{\frac1m}\\ &\lesssim \sum_{j=1}^\infty 2^{j(-N-1)} \prod_{l=2}^m \Big( \inf_{z\in 2^jQ_1^*}(M\chi_{2^jQ_l^{**}})(z)^{\frac{n+N+1}{mn}} + \inf_{z\in 2^jQ_1^*} M^{(m)} \circ T_{\sigma_l}(a_l)(z) \chi_{2^{j+1}Q_l^{**}}(z) \Big), \end{align*} where we used \eqref{eq.3D4} in the last inequality. Since $M\chi_{2^jQ}\lesssim 2^{jn}M\chi_Q$, \[ \chi_{2^{j+1}Q_l^{**}}(x) \le (M\chi_{2^jQ_l^{**}})^{\frac{n+N+1}{mn}} \lesssim 2^{\frac{j(n+N+1)}{m}} M\chi_{Q_l}^{\frac{n+N+1}{mn}}. \] Insert this inequality into the previous estimate to obtain \begin{align} \notag &\frac{1}{|Q_1^*|} \int_{{\mathbb R}^n \setminus Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ \notag &\lesssim \sum_{j=1}^\infty 2^{-j(\frac{n+N+1}{m}-n)} \prod_{l=1}^m \Big( M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}} + M^{(m)} \circ T_{\sigma_l}(a_l)(x) M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \Big) \\ \label{eq.4E00} &\lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right), \end{align} since $N \gg n$. Combining \eqref{eq.4A03}--\eqref{eq.4E00} together completes the proof of \eqref{eq.4A01}. \end{proof} \begin{lemma}\label{lm.4F01} Assume $x \notin Q_1^{**}$ and $c_1 \notin B(x,100n^2t)$. Then we have \begin{align*} \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right). \end{align*} \end{lemma} \begin{proof} Fix any $x\notin Q_1^{**}$ and $t>0$ such that $c_1\notin B(x,100n^2t)$. 
We denote
\begin{equation} \label{lm.4F02}
J= \{ 2\le l\le m: x\notin Q_l^{**} \},\ J_0 = \{l\in J : B(x,2t)\cap Q_l^*=\emptyset\},\ J_1 = J\setminus J_0.
\end{equation}
As in the previous lemma, it is enough to consider the reduced form \eqref{eq.4A02} of $\mathcal{T}_\sigma$. From the H\"{o}lder inequality, we have
\begin{align*}
&\frac{1}{t^n} \int_{B(x,t)} |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)|dy\\
&\lesssim \| T_{\sigma_1}a_1\chi_{B(x,t)} \|_{L^\infty} \prod_{l\in J_0} \|T_{\sigma_l}a_l\chi_{B(x,t)}\|_{L^{\infty}}\\
&\quad \times \prod_{l\in J_1} \left( \frac{1}{|B(x,t)|} \int_{B(x,t)}|T_{\sigma_l}a_l(y)|^mdy \right)^{\frac{1}{m}} \prod_{l\notin J} \left( \frac{1}{|B(x,t)|} \int_{B(x,t)}|T_{\sigma_l}a_l(y)|^mdy \right)^{\frac{1}{m}}\\
&=: {\rm I}\times {\rm II}\times {\rm III}\times {\rm IV}.
\end{align*}
For $\rm I$, we notice that $Q_1^*\cap B(x,2t)=\emptyset$, since $x\notin Q_1^{**}$ and $c_1\notin B(x,100n^2t)$. So we have only to use the decay estimate for $T_{\sigma_1}a_1$ to get
\[
{\rm I}= \|T_{\sigma_1}a_1\chi_{B(x,t)}\|_{L^\infty} \lesssim \left( \frac{\ell(Q_1)}{ |x-c_1|+\ell(Q_1) } \right)^{n+N+1}.
\]
For all $l\in J_1$, since $B(x,2t)\cap Q_l^*\neq\emptyset$, we have $t\gtrsim \ell(Q_l)$ and hence $Q_l^*\subset B(x,100n^2t)$. Therefore, for all $l\in J_1$,
\begin{equation} \label{eq.4E01}
\left( \frac{1}{|B(x,t)|} \int_{B(x,t)}|T_{\sigma_l}a_l(y)|^mdy \right)^{\frac{1}{m}} \lesssim \left( \frac{|Q_l|}{|B(x,t)|} \right)^{\frac{1}{m}} \lesssim 1.
\end{equation}
Now combining the above inequality with the estimate for $\rm I$ yields
\begin{equation} \label{eq.4B01}
{\rm I}\times {\rm III} \lesssim \left( \frac{\ell(Q_1)}{ |x-c_1|+\ell(Q_1) } \right)^{\frac{n+N+1}{m}} \prod_{l\in J_1} \left( \frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)} \right)^{\frac{n+N+1}{m}}.
\end{equation}
As shown above, $Q_l^*\subset B(x,100n^2t)$ for all $l\in J_1$, which implies $|x-c_l|\lesssim t$.
Furthermore, $c_1\notin B(x,100n^2t)$ means $t\lesssim |x-c_1|$, which yields $|x-c_l|\lesssim t\lesssim |x-c_1|$. Recalling $\ell(Q_1)\leq \ell(Q_l)$, we see that
\[
\frac{\ell(Q_1)}{ |x-c_1|+\ell(Q_1) } \lesssim \frac{\ell(Q_l)}{ |x-c_l|+\ell(Q_l) }.
\]
From \eqref{eq.4B01}, we obtain
\begin{align}
{\rm I}\times {\rm III} \lesssim M\chi_{Q_1^{**}}(x)^{\frac{n+N+1}{m n}} \label{eq:161003-11} \prod_{l\in J_1} M\chi_{Q_l}(x)^\frac{n+N+1}{m n}.
\end{align}
Now we turn to the estimates of ${\rm II}$ and ${\rm IV}$. For ${\rm II}$, we have only to employ the moment condition of $a_l$ to get
\begin{align}
\label{eq:161003-12}
{\rm II} &= \prod_{l\in J_0} \|T_{\sigma_l}a_l \cdot \chi_{B(x,t)}\|_{L^\infty} \lesssim \prod_{l\in J_0} M\chi_{Q_l}(x)^{\frac{n+N+1}{n}}.
\end{align}
For ${\rm IV}$, since $x\in Q_l^{**}$ for $l\notin J$, we can estimate
\begin{align}
\label{eq:161003-13}
{\rm IV} &\lesssim \prod_{l\notin J} M^{(m)} \circ T_{\sigma_l}(a_l)(x) \chi_{Q_l^{**}}(x).
\end{align}
Putting (\ref{eq:161003-11})--(\ref{eq:161003-13}) together, we conclude the proof of Lemma \ref{lm.4F01}.
\end{proof}
\begin{lemma}\label{lem:160722-3}
Assume $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$. Then we have
\begin{align}
\notag
& \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^{*}}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\
\label{eq:160726-123}
& \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z\in Q_1^*} M\chi_{Q_l}(z)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right).
\end{align}
\end{lemma}
\begin{proof}
It is enough to restrict $\mathcal{T}_\sigma$ to the form \eqref{eq.4A02}.
By the H\"older inequality we have \begin{align*} & \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^*}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ \le& \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \prod_{l=1}^m \Big( \frac{1}{|Q_1^*|} \int_{Q_1^*}|T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m}\\ \lesssim& \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \prod_{l=1}^m \Big( \inf_{z\in Q_1^*} M\chi_{Q_l}(z)^{\frac{n+N+1}{n}} + \inf_{z\in Q_1^*} M^{(m)} \circ T_{\sigma_l}(a_l)(z) \chi_{2Q_l^{**}}(z) \Big), \end{align*} where the last inequality is deduced from \eqref{eq.3D4}. Since $x\notin Q_1^{**}$ and $c_1\in B(x,100n^2t)$, $Q_1 \subset B(x,10000n^3t)$ which implies $ \ell(Q_1)/t\lesssim M\chi_{Q_1}(x)^\frac{1}{n} $. As a result, \begin{align*} & \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^*}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &\lesssim M\chi_{Q_1}(x)^\frac{n+s+1}{n} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align*} This proves (\ref{eq:160726-123}). \end{proof} \begin{lemma}\label{lem:160722-4} Assume $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$. Then we have \begin{align*} & \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*} |y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ & \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z\in Q_1^*}M\chi_{Q_l}(z)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align*} \end{lemma} \begin{proof} Using the decay of $T_{\sigma_1}a_1(y)$ when $y\notin Q_1^*$, we obtain \begin{align*} \lefteqn{ \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy }\\ &\lesssim \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*} |y-c_1|^{s+1} \frac{\ell(Q_1)^{n+N+1}}{|y-c_1|^{n+N+1}} \prod_{l=2}^m |T_{\sigma_l}a_l(y)|\,dy. 
\end{align*} By dyadic decomposition of ${\mathbb R}^n \setminus Q_1^*$ as in the proof of Lemma \ref{lm.4A00}, we can estimate \begin{align*} &\frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ &\lesssim \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \sum_{j=1}^\infty 2^{j(s-N-n)} \int_{2^{j}Q_1^*} \chi_{2^{j}Q_1^*}(y) \prod_{l=2}^m |T_{\sigma_l}a_l(y)|\,dy\\ &\lesssim \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \sum_{j=1}^\infty 2^{j(s-N)} \prod_{l=2}^m \Big( \frac{1}{|2^jQ_1^*|}\int_{2^jQ_1^*}|T_{\sigma_l}a_l(y)|^m\,dy \Big)^{\frac1m}\\ &\lesssim \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \sum_{j=1}^\infty 2^{j(s-N)} \prod_{l=2}^m \Big( \inf_{z\in 2^jQ_1^*}M\chi_{2^jQ_l^{**}}(z)^{\frac{n+N+1}{mn}} + \inf_{z\in 2^jQ_1^*} M^{(m)} \circ T_{\sigma_l}(a_l)(z) \chi_{2^{j+1}Q_l^{**}}(z) \Big), \end{align*} where we used \eqref{eq.3D4} in the last inequality. We now repeat the argument in establishing \eqref{eq.4E00} to obtain \begin{align*} &\frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ &\lesssim \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \prod_{l=1}^m \inf_{z\in Q_1^*}M\chi_{Q_l}(z)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align*} Moreover, the assumption $x\notin Q_1^{**}$ and $c_1\in B(x,100n^2t)$ implies $ \frac{\ell(Q_1)}{t}\lesssim M\chi_{Q_1}(x)^{\frac1n}. $ Therefore, \begin{align*} & \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &\lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z\in Q_1^*}M\chi_{Q_l}(z)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align*} This proves Lemma \ref{lem:160722-4}. 
\end{proof}

\begin{lemma} \label{lm.4B02}
For all $x\in {\mathbb R}^n$, we have
\begin{align*}
M_\phi\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)(x)
\lesssim&
\prod_{l=1}^m
M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}}
\left(
1+M^{(m)} \circ T_{\sigma_l}(a_l)(x)
\right)\\
&+
M\chi_{Q_1}(x)^{\frac{n+s+1}{n}}
\prod_{l=1}^m
\inf_{z\in Q_1^*}M\chi_{Q_l}(z)^{\frac{n+N+1}{mn}}
\left(
1+M^{(m)} \circ T_{\sigma_l}(a_l)(z)
\right).
\end{align*}
\end{lemma}

\begin{proof}
If $x\in Q_1^{**}$, the desired estimate is a consequence of Lemma \ref{lm.4A00}. Fix $x\notin Q_1^{**}$. To estimate $M_{\phi}\circ \mathcal{T}_{\sigma}(a_1,\ldots,a_m)(x)$, we need to examine
\[
\Big|
\int_{{\mathbb R}^n}
\phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy
\Big|
\]
for each $t\in (0,\infty)$. If $c_1 \notin B(x,100n^2t)$, then we make use of Lemma \ref{lm.4F01}; otherwise, when $c_1 \in B(x,100n^2t)$, we recall \eqref{eq.3C02} and then apply Lemmas \ref{lem:160722-3} and \ref{lem:160722-4} to obtain the required estimate in Lemma \ref{lm.4B02}. This completes the proof of the lemma.
\end{proof}

\subsection{The proof of Proposition \ref{LM.Key-31} for the product type}

To proceed with the proof of \eqref{eq.PWEST}, we set
\begin{equation*}
A=
\Big\|\sum_{k_1,\ldots,k_m=1}^\infty
\Big(\prod_{l=1}^m \lambda_{l,k_l}\Big)
M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\Big\|_{L^p}.
\end{equation*}
For each $\vec{k}=(k_1,\ldots,k_m)$, we recall $R_{\vec{k}}$, the smallest-length cube among $Q_{1,k_1},\ldots,Q_{m,k_m}$. In view of Lemma \ref{lm.4B02}, we have
\begin{equation}\label{161031-1}
A\lesssim B:=
\left\|
\sum_{k_1,\ldots, k_m=1}^\infty
\prod_{l=1}^m
\lambda_{l,k_l}
\left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{mn}}
\left(
1+M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})
\right)
\right\|_{L^p}.
\end{equation}
In fact, our assumption imposed on $s$ means that $(n+s+1)p/n>1$, and hence we may employ the boundedness of $M$ to obtain
\begin{align*}
A&\lesssim
B+
\left\|
\sum_{k_1,\ldots,k_m=1}^\infty
\left(M\chi_{R_{\vec{k}}^*}\right)^{\frac{n+s+1}{n}}
\prod_{l=1}^m
\lambda_{l,k_l}
\inf_{z\in R_{\vec{k}}^*}
\bigg(M\chi_{Q_{l,k_l}}\bigg)
^{\frac{n+N+1}{mn}}
\left(
1+M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})
\right)
\right\|_{L^p}\\
&\lesssim
B.
\end{align*}
So, our task is to estimate $B$. Here, we prepare the following lemma.

\begin{lemma}\label{lm-161031-1}
Let $p \in(0,\infty)$ and $\alpha>\max{(1,p^{-1})}$. Assume that $q \in (p,\infty] \cap [1,\infty]$. Suppose that we are given a sequence of cubes $\{Q_k\}_{k=1}^\infty$ and a sequence of non-negative $L^q$-functions $\{F_k\}_{k=1}^\infty$. Then
\[
\Big\|
\sum_{k=1}^\infty
(M\chi_{Q_k})^\alpha F_k
\Big\|_{L^p}
\lesssim
\Big\|
\sum_{k=1}^\infty
\chi_{Q_k} M^{(q)}F_k
\Big\|_{L^p}.
\]
\end{lemma}

\begin{proof}
By Lemma \ref{lm.2A00} and the fact that
\(
M\chi_{Q}
\lesssim
\chi_{Q}+
\sum_{j=1}^{\infty}
2^{-jn}\chi_{2^{j}Q\setminus 2^{j-1}Q},
\)
we have
\begin{align*}
\left\|
\sum_{k=1}^\infty
(M\chi_{Q_k})^\alpha F_k
\right\|_{L^p}
&\lesssim
\left\|
\sum_{j=0}^\infty
\sum_{k=1}^\infty
2^{-\alpha jn}
\chi_{2^jQ_k}F_k
\right\|_{L^p}\\
&\lesssim
\left\|
\sum_{j=0}^\infty
\sum_{k=1}^\infty
2^{-\alpha jn}
\chi_{2^jQ_k}
\left(
\frac{1}{|2^jQ_k|}
\int_{2^jQ_k}F_k(y)^qdy
\right)^\frac{1}{q}
\right\|_{L^p}.
\end{align*}
Choose $\alpha>\beta>\max(1,\frac1p)$ and observe the trivial estimate
\[
\chi_{2^jQ_k}
\lesssim
\left(
2^{jn}M\chi_{Q_k}
\right)^{\beta}.
\]
Now, Lemma \ref{lm.2B00} gives
\begin{align*}
\left\|
\sum_{k=1}^\infty
(M\chi_{Q_k})^\alpha F_k
\right\|_{L^p}
&\lesssim
\left\|
\sum_{j=0}^\infty
\sum_{k=1}^\infty
\chi_{Q_k}
\left(
\frac{2^{(\beta-\alpha)jqn}}{|2^jQ_k|}
\int_{2^jQ_k}F_k(y)^qdy
\right)^\frac{1}{q}
\right\|_{L^p}\\
&\lesssim
\Big\|
\sum_{k=1}^\infty
\chi_{Q_k} M^{(q)}F_k
\Big\|_{L^p},
\end{align*}
which yields the desired estimate.
\end{proof}

Lemma \ref{lm-161031-1} can be regarded as a substitute for Lemma \ref{lm.2A00}. Before applying Lemma \ref{lm-161031-1} to $B$, we observe
\begin{align*}
B \leq
\prod_{l=1}^m
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{mn}}
\left(
1+M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})
\right)
\right\|_{L^{p_l}}.
\end{align*}
Then, applying the Fefferman--Stein vector-valued inequality and Lemma \ref{lm-161031-1}, we obtain
\begin{align*}
&\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{mn}}
\left(
1+
M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})
\right)
\right\|_{L^{p_l}}\\
&\lesssim
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
M
\big(
\chi_{Q_{l,k_l}^{**}}\big)^{\frac{n+N+1}{mn}}
\right\|_{L^{p_l}}
+
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{mn}}
M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})
\right\|_{L^{p_l}}\\
&\lesssim
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\chi_{Q_{l,k_l}}
\right\|_{L^{p_l}}
+
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\chi_{Q_{l,k_l}^{**}}
M\circ M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})
\right\|_{L^{p_l}}.
\end{align*}
For the second term, we choose $q\in(m,\infty)$ and employ Lemma \ref{lm.2A00} together with the boundedness of $M$ and $T_{\sigma_l}$ to obtain
\begin{align*}
&\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\chi_{Q_{l,k_l}^{**}}
M\circ M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})
\right\|_{L^{p_l}}\\
&\lesssim
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\frac{\chi_{Q_{l,k_l}^{**}}}{|Q_{l,k_l}|^\frac{1}{q}}
\left\|M\circ M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})\right\|_{L^q}
\right\|_{L^{p_l}}
\lesssim
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\chi_{Q_{l,k_l}}
\right\|_{L^{p_l}}.
\end{align*}
As a result,
\[
A\lesssim B\lesssim
\prod_{l=1}^m
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\chi_{Q_{l,k_l}}
\right\|_{L^{p_l}},
\]
which completes the proof of Proposition \ref{LM.Key-31}.

\section{The mixed type}

In this section, we prove Proposition \ref{LM.Key-31} for operators of type \eqref{eq.CalZygOPT-3}. The main techniques for dealing with the mixed-type operator $\mathcal{T}_\sigma$ are combinations of those used for the two previous types. We now establish some necessary estimates for $\mathcal{T}_\sigma$. For the mixed type, we need the following lemma, which can be shown in a way similar to that of Lemma \ref{lm.3A1}.\footnote{
The detailed proof is as follows. Fix any $y\in 2^{j+1}Q_1^*\setminus 2^jQ_1^*$. Let us use the notation $K^1(y,y_1,\ldots, y_m)$ as in the proof of Lemma \ref{lm.3A1}. Then for any $y_l\in Q_l$, $l=1,\ldots,m$, we have
\begin{align*}
|
K^1(y,y_1,\ldots,y_m)
|
&\lesssim
\left(
\frac{
\ell(Q_1)
}
{
|y-y_1|
+
\sum_{l\in \Lambda_j}
|y-y_l|
+
\sum_{l\geq2}
|y-y_l|
}
\right)^{n+N+1}\\
&\lesssim
\left(
\frac{
\ell(Q_1)
}
{
|y-c_1|
+
\sum_{l\in \Lambda_j}
|y-c_l|
+
\sum_{l\geq2}
|y-y_l|
}
\right)^{n+N+1}.
\end{align*}
In fact, if $l\in \Lambda_j$, then $2^jQ_1^{**}\cap 2^jQ_l^{**}=\emptyset$, and hence $y\in 2^{j+1}Q_1^*$ implies $|y-y_l|\sim |y-c_l|$ for all $y_l\in Q_l$. Of course, $|y-y_1|\sim |y-c_1|$ is clear since $y\notin 2^jQ_1^*$.
Using this kernel estimate, we may prove the desired estimate.
}

\begin{lemma}\label{lm-161112-1}
Let $\sigma$ be a Coifman-Meyer multiplier, and let $a_l$ be $(p_l,\infty)$-atoms supported on $Q_l$ for $1\leq l\leq m$. Assume $\ell(Q_1)=\min{\{\ell(Q_l):l=1,\ldots, m\}}$ and write $\Lambda_j=\{l=1,\ldots,m: 2^jQ_1^{**}\cap 2^jQ_l^{**}=\emptyset\}$. Then for any $y\in 2^{j+1}Q_1^*\setminus 2^jQ_1^*$ we have
\[
|T_\sigma(a_1,\ldots, a_m)(y)|
\lesssim
\left(
\frac{\ell(Q_1)}
{
|y-c_1|
+
\sum_{l\in \Lambda_j}
|y-c_l|}
\right)^{n+N+1}.
\]
\end{lemma}

\subsection{Fundamental estimates for the mixed type}

Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. Suppose $Q_1$ is the cube such that $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$. For each $1\le g\le G$, let $Q_{l(g)}$ be the smallest cube among $\{Q_l\}_{l \in I_g}$ and let $m_g = \sharp I_g$ be the cardinality of $I_g$. Then we have the following analogues of Lemmas \ref{lm.4A00}--\ref{lm.4B02}.

\begin{lemma} \label{lm.5A00}
For all $x\in Q_1^{**}$, we have
\begin{align}
\notag
&M_\phi\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)(x)\chi_{Q_1^{**}}(x)\\
\label{eq.5A01}
&\lesssim
\prod_{g=1}^G
\left(
M\chi_{Q_{l(g)}}(x)^\frac{(n+N+1)m_g}{nm}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)
+
\prod_{l\in I_g}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}
\right).
\end{align}
\end{lemma}

\begin{proof}
Fix $x\in Q_1^{**}$. We need to estimate
\[
\Big|
\int_{{\mathbb R}^n}
\phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy
\Big|
\lesssim
\frac{1}{t^n}
\int_{B(x,t)}
|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
\]
for each $t\in (0,\infty)$. As in the previous section, it is enough to consider the following form:
\begin{equation}\label{eq.CalZygOPT-31}
{\mathcal T}_\sigma(f_1,\ldots,f_m)
=
\prod_{g=1}^{G}
T_{\sigma_{I_g}}(\{f_l\}_{l \in I_g}),
\end{equation}
where $\{I_g\}_{g=1}^{G}$ is a partition of $\{1,\ldots,m\}$ with $1\in I_1$.
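As a concrete illustration, in the case $m=3$ with the partition $I_1=\{1,3\}$ and $I_2=\{2\}$ (so that $G=2$ and $1\in I_1$), the reduced form \eqref{eq.CalZygOPT-31} reads
\[
{\mathcal T}_\sigma(f_1,f_2,f_3)
=
T_{\sigma_{I_1}}(f_1,f_3)\cdot T_{\sigma_{I_2}}(f_2).
\]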
By the H\"older inequality, we have
\begin{equation}
\label{eq.4B07}
\frac{1}{t^n}
\int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
\lesssim
\prod_{g=1}^G
\Big(
\frac{1}{t^n}
\int_{B(x,t)}
|T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy
\Big)^{\frac1{G}}.
\end{equation}
For each $1\le g\le G$, we need to examine
\[
\Big(
\frac{1}{t^n}
\int_{B(x,t)}
|T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy
\Big)^{\frac1{G}}.
\]
We consider two cases as in the proof of Lemma \ref{lm.4A00}.

\textbf{Case 1: $t\le \ell(Q_{1})$}. We observe that
\begin{align*}
\frac{1}{t^n}
\int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
&\le
\prod_{g: B(x,t)\cap Q_{l(g)}^{**}\neq\emptyset}
\Big(
\frac{1}{t^n}
\int_{B(x,t)}
|T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy
\Big)^{\frac1{G}}\\
&\quad \times
\prod_{g: B(x,t)\cap Q_{l(g)}^{**}=\emptyset}
\Big(
\frac{1}{t^n}
\int_{B(x,t)}
|T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy
\Big)^{\frac1{G}}.
\end{align*}
When $B(x,t)\cap Q_{l(g)}^{**}\neq\emptyset$, we see that $x\in 3Q_{l(g)}^{**}$. This shows
\begin{align}\label{161112-4}
\lefteqn{
\prod_{g: B(x,t)\cap Q_{l(g)}^{**}\neq\emptyset}
\Big(
\frac{1}{t^n}
\int_{B(x,t)}
|T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy
\Big)^{\frac1{G}}
}\\
\nonumber
&\lesssim
\prod_{g: B(x,t)\cap Q_{l(g)}^{**}\neq\emptyset}
\chi_{3Q_{l(g)}^{**}}(x)
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x).
\end{align}
When $B(x,t)\cap Q_{l(g)}^{**}=\emptyset$, we may use \eqref{eq.3D4} to have
\begin{equation}\label{161112-3}
\prod_{g:B(x,t)\cap Q_{l(g)}^{**}=\emptyset}
\Big(
\frac{1}{t^n}
\int_{B(x,t)}
|T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy
\Big)^{\frac1{G}}
\lesssim
\prod_{g: B(x,t)\cap Q_{l(g)}^{**}=\emptyset}
\prod_{l\in I_g}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}.
\end{equation}
These two estimates \eqref{161112-4} and \eqref{161112-3} yield the desired estimate in Case 1.

\textbf{Case 2: $t> \ell(Q_{1})$}.
We split
\begin{align*}
\lefteqn{
\frac{1}{t^n}
\int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
}\\
&\lesssim
\frac{1}{|Q_1^*|}
\int_{Q_1^*}
|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
+
\frac{1}{|Q_1^*|}
\int_{(Q_1^*)^c}
|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy.
\end{align*}
For the first term, the same argument as in Case 1, with $B(x,t)$ replaced by $Q_1^*$, yields
\begin{align*}
&
\frac{1}{|Q_1^*|}
\int_{Q_1^*}
|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\
&\lesssim
\prod_{g=1}^G
\left(
\prod_{l\in I_g}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}
+
\chi_{Q_{l(g)}^{**}}(x)
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)
\right).
\end{align*}
For the second term, by a dyadic decomposition of $(Q_1^*)^c$,
\begin{align*}
&\frac{1}{|Q_1^*|}
\int_{(Q_1^*)^c}
|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\
&=
\sum_{j=0}^{\infty}
\frac{1}{|Q_1^*|}
\int_{2^{j+1}Q_1^*\setminus 2^jQ_1^*}
|T_{\sigma_{I_1}}(\{a_l\}_{l\in I_1})(y)|
\prod_{g\geq 2}
|T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|dy
=
\sum_{j=0}^\infty I_j.
\end{align*}
Now, we fix any $j$ and estimate $I_j$. Letting $\Lambda_j=\{l=1,\ldots,m: 2^jQ_1^{**}\cap 2^jQ_l^{**}=\emptyset\}$ and using Lemma \ref{lm-161112-1}, for $y\in2^{j+1}Q_1^*\setminus2^jQ_1^*$ we obtain
\begin{align*}
|T_{\sigma_{I_1}}(\{a_l\}_{l\in I_1})(y)|
&\lesssim
\left(
\frac{\ell(Q_1)}{|y-c_1|+\sum_{l\in I_1\cap \Lambda_j}|y-c_l|}
\right)^{n+N+1}\\
&\lesssim
2^{-j(n+N+1)}
\left(
\frac{2^j\ell(Q_1)}{2^j\ell(Q_1)+\sum_{l\in I_1\cap \Lambda_j}|c_1-c_l|}
\right)^{n+N+1}
.
\end{align*}
We estimate this term further. If $l\in I_1\cap \Lambda_j$, then $|c_1-c_l|\sim |x-c_l|$ since $x\in Q_1^{**}$. On the other hand, if $l\in I_1\setminus \Lambda_j$, then $\chi_{2^jQ_l^{**}}(x)=\chi_{Q_1^{**}}(x)=1$ since $x\in Q_1^{**}$.
So, we have
\begin{align*}
|T_{\sigma_{I_1}}(\{a_l\}_{l\in I_1})(y)|
&\lesssim
2^{-j(n+N+1)}
\prod_{l\in I_1\cap \Lambda_j}
\left(
\frac{2^j\ell(Q_l)}{|x-c_l|}
\right)^\frac{n+N+1}{m}
\prod_{l\in I_1\setminus \Lambda_j}
\chi_{2^jQ_l^{**}}(x)\\
&\lesssim
2^{-j(n+N+1)}
\prod_{l\in I_1\setminus \{1\}}
M\chi_{2^jQ_l^{**}}(x)^\frac{n+N+1}{mn}\\
&\lesssim
2^{-j(n+N+1)}
2^{j\frac{n+N+1}{m}(m_1-1)}
\prod_{l\in I_1}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}.
\end{align*}
This and H\"{o}lder's inequality imply that
\begin{equation}\label{161112-1}
I_j
\lesssim
2^{-j(N+1)}
2^{j\frac{n+N+1}{m}(m_1-1)}
\prod_{l\in I_1}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}
\prod_{g\geq2}
\left(
\frac{1}{|2^jQ_1^*|}
\int_{2^jQ_1^*}
|T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy
\right)^\frac{1}{G}.
\end{equation}
In the usual way, we claim that
\begin{align}\label{161112-2}
&\prod_{g\geq2}
\left(
\frac{1}{|2^jQ_1^*|}
\int_{2^jQ_1^*}
|T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy
\right)^\frac{1}{G}\\
&\lesssim
2^{j\frac{n+N+1}{m}(m-m_1)}
\prod_{g\geq2}
\left(
M\chi_{Q_{l(g)}}(x)^\frac{(n+N+1)m_g}{nm}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)
+
\prod_{l\in I_g}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}
\right).\nonumber
\end{align}
To see this, for each $j$ we consider two possibilities for $g$: either $2^jQ_1^{**}\cap 2^jQ_{l(g)}^{**}\neq \emptyset$ or not. In the first case, we notice that $x\in Q_1^*\subset 2^jQ_{l(g)}^{**}$ and recall that $m_g=\sharp I_g$; hence
\begin{align*}
\lefteqn{
\left(
\frac{1}{|2^jQ_1^*|}
\int_{2^jQ_1^*}
|T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy
\right)^\frac{1}{G}
}\\
&\lesssim
\chi_{2^jQ_{l(g)}^{**}}(x)
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)\\
&\lesssim
2^{j\frac{m_g(n+N+1)}{m}}
M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x).
\end{align*}
In the second case, where $2^jQ_1^{**}\cap 2^jQ_{l(g)}^{**}=\emptyset$, we use \eqref{eq.3D4} to see
\begin{align*}
\lefteqn{
\left(
\frac{1}{|2^jQ_1^*|}
\int_{2^jQ_1^*}
|T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy
\right)^\frac{1}{G}
}\\
&\lesssim
\prod_{l\in I_g}
M\chi_{2^jQ_l^{**}}(x)^\frac{n+N+1}{mn}
\lesssim
2^{j\frac{m_g(n+N+1)}{m}}
\prod_{l\in I_g}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}.
\end{align*}
These two estimates yield \eqref{161112-2}.

Inserting \eqref{161112-2} into \eqref{161112-1}, we arrive at
\[
I_j\lesssim
2^{-j(\frac{n+N+1}{m}-n)}
\prod_{g=1}^G
\left(
M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)
+
\prod_{l\in I_g}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}
\right).
\]
Taking $N$ sufficiently large, we can sum the above estimate over $j \in {\mathbb N}$ and get the desired estimate. This completes the proof of Lemma \ref{lm.5A00}.
\end{proof}

\begin{lemma}\label{lm.5A02}
Assume $x \notin Q_1^{**}$ and $c_1 \notin B(x,100n^2t)$. Then we have
\begin{align*}
&\frac{1}{t^n}
\int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\
&\lesssim
\prod_{g=1}^G
\left(
M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)
+
\prod_{l\in I_g}
M\chi_{Q_l}(x)^\frac{n+N+1}{mn}
\right).
\end{align*}
\end{lemma}

\begin{proof}
Fix $x\notin Q_1^{**}$ and $t>0$ such that $c_1\notin B(x,100n^2t).$ Let ${\mathcal T}_\sigma$ be the operator of type \eqref{eq.CalZygOPT-3}. We may consider the reduced form \eqref{eq.CalZygOPT-31} of $\mathcal{T}_{\sigma}$ and start from \eqref{eq.4B07}.
We define
\[
J=
\{
g=2,\ldots, G: x\notin Q_{l(g)}^{**}
\},
\quad
J_0=
\{
g\in J: B(x,2t)\cap Q_{l(g)}^*=\emptyset
\},
\quad
J_1=J\setminus J_0
\]
and split the product as follows:
\begin{align*}
\frac{1}{t^n}
\int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
&\lesssim
\left\|
T_{\sigma_{I_1}}(\{a_l\}_{l\in I_1})
\chi_{B(x,t)}
\right\|_{L^\infty}
\prod_{g\in J_0}
\left\|
T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})
\chi_{B(x,t)}
\right\|_{L^\infty}\\
&\quad \times
\prod_{g\in J_1}
\left(
\frac{1}{|B(x,t)|}
\int_{B(x,t)}
|T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy
\right)^\frac{1}{G}\\
&\quad \times
\prod_{g\in \{2,\ldots,G\} \setminus J}
\left(
\frac{1}{|B(x,t)|}
\int_{B(x,t)}
|T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy
\right)^\frac{1}{G}\\
&=
{\rm I} \times {\rm II} \times {\rm III} \times {\rm IV}.
\end{align*}
To estimate I, we further define the partition of $I_1$:
\begin{align*}
I_1^0 &= \{l\in I_1 : x\notin Q_l^{**}, B(x,2t)\cap Q_l^*=\emptyset\},
\quad
I_1^1 = \{l\in I_1 : x\notin Q_l^{**}, B(x,2t)\cap Q_l^*\ne\emptyset\},\\
I_1^2 &= I_1\setminus(I_1^0\cup I_1^1).
\end{align*}
Since $x\notin Q_1^{**}$ and $c_1\notin B(x,100n^2t)$, we can see that $1\in I_1^0$. From Lemma \ref{lm.3A1}, we deduce
\begin{align*}
&|T_{\sigma_{I_1}}(\{a_l\}_{l \in I_1})(y)|\lesssim
\frac{\ell(Q_1)^{n+N+1}}{(\sum_{l\in I_1^0}|y-c_l|)^{n+N+1}}
\lesssim
\frac{\ell(Q_1)^{n+N+1}}{(\sum_{l\in I_1^0}|x-c_l|)^{n+N+1}} \\
&\lesssim
\Big(\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}\Big)^{(m-m_1)\frac{n+N+1}{m}}
\prod_{l\in I_1^0}
\Big(\frac{\ell(Q_l)}{|x-c_l|+\ell(Q_l)}\Big)^{\frac{n+N+1}{m}}
\prod_{l\in I_1^1}
\Big(\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}\Big)^{\frac{n+N+1}{m}}
\end{align*}
for all $y\in B(x,t)$, where $m_1=|I_1|$ is the cardinality of the set $I_1$. As in the proof of Lemma \ref{lm.4F01} for the product type, if $x\notin Q_l^{**}$ and $B(x,2t)\cap Q_l^{*}\ne\emptyset$ then $|x-c_l|\lesssim t\lesssim |x-c_1|$.
This observation implies
\[
\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}
\lesssim
\frac{\ell(Q_l)}{|x-c_l|+\ell(Q_l)}
\]
for all $l\in I_1^1$. Therefore, we can estimate
\[
|T_{\sigma_{I_1}}(\{a_l\}_{l \in I_1})(y)|
\lesssim
\Big(\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}\Big)^{(m-m_1)\frac{n+N+1}{m}}
\prod_{l\in I_1^0\cup I_1^1}M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}}
\]
for all $y\in B(x,t)$. Obviously, $1\lesssim M\chi_{Q_l}(x)$ for all $l\in I_1^2$, and hence we have
\begin{equation}
\label{eq.5B08}
|T_{\sigma_{I_1}}(\{a_l\}_{l \in I_1})(y)|
\lesssim
\Big(\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}\Big)^{(m-m_1)\frac{n+N+1}{m}}
\prod_{l\in I_1}M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}}
\end{equation}
for all $y\in B(x,t)$, which gives the estimate for I.

For the third term III, we simply have
\[
{\rm III}
\leq
\prod_{g\in J_1}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x).
\]
So, we obtain
\begin{align*}
{\rm I}\times {\rm III}
&\lesssim
\prod_{l\in I_1}M\chi_{Q_l}(x)^\frac{n+N+1}{mn}
\prod_{g\in J_1}
\left(
\frac{\ell(Q_1)}
{
\ell(Q_1)+
|x-c_1|
}
\right)^\frac{m_g(n+N+1)}{m}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)\\
&\lesssim
\prod_{l\in I_1}M\chi_{Q_l}(x)^\frac{n+N+1}{mn}
\prod_{g\in J_1}
M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x),
\end{align*}
since $g\in J_1$ implies $|x-c_{l(g)}|\lesssim|x-c_1|$. For the second term II, we use Lemma \ref{lm-161112-1} and an argument similar to the one for I to get
\[
{\rm II}=
\prod_{g\in J_0}
\left\|
T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})
\chi_{B(x,t)}
\right\|_{L^\infty}
\lesssim
\prod_{g\in J_0}
\prod_{l\in I_g}M\chi_{Q_l}(x)^\frac{n+N+1}{mn}.
\]
For the last term IV, we recall that $g\notin J$ means $x\in Q_{l(g)}^{**}$ and hence,
\[
{\rm IV}
\lesssim
\prod_{g\notin J}
M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x).
\]
Combining the estimates for I, II, III and IV, we complete the proof of Lemma \ref{lm.5A02}.
\end{proof}

\begin{lemma} \label{lm.5C05}
Assume $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$. Then we have
\begin{align*}
\lefteqn{
\frac{\ell(Q_1)^{s+1}}{t^{n+s+1}}
\int_{Q_1^{*}}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
}\\
& \lesssim
M\chi_{Q_1}(x)^{\frac{n+s+1}{n}}
\prod_{g=1}^G
\inf_{z\in Q_1^*}
\!\!
\left[
M\chi_{Q_{l(g)}}(z)^\frac{m_g(n+N+1)}{mn}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(z)
+
\prod_{l\in I_g}
M\chi_{Q_l}(z)^\frac{n+N+1}{mn}
\right]
\!.
\end{align*}
\end{lemma}

\begin{lemma} \label{lm.5C06}
Assume $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$. Then we have
\begin{align*}
\lefteqn{
\frac{1}{t^{n+s+1}}
\int_{(Q_1^{*})^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy
}\\
&\quad \lesssim
M\chi_{Q_1}(x)^{\frac{n+s+1}{n}}
\prod_{g=1}^G
\inf_{z\in Q_1^*}
\!\!
\left[
M\chi_{Q_{l(g)}}(z)^\frac{m_g(n+N+1)}{mn}
M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(z)
+
\prod_{l\in I_g}
M\chi_{Q_l}(z)^\frac{n+N+1}{mn}
\right]\! .
\end{align*}
\end{lemma}

The proofs of Lemmas \ref{lm.5C05} and \ref{lm.5C06} are very similar to that of Lemma \ref{lm.5A00}, so we omit the details here.

\subsection{The proof of Proposition \ref{LM.Key-31} for the mixed type}

Employing the above lemmas, we complete the proof of \eqref{eq.PWEST}. For each $\vec{k}=(k_1,\ldots, k_m)$, recall the smallest-length cube $R_{\vec{k}}$ among $Q_{1,k_1},\ldots, Q_{m,k_m}$ and write $Q_{l(g),\vec{k}(g)}$ for the cube of smallest length among $\{Q_{l,k_l}\}_{l\in I_g}$.
Combining Lemmas \ref{lm.5A00}-\ref{lm.5C06}, we have the following pointwise estimate \begin{align*} &M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})(x) \lesssim \prod_{g=1}^G b_{g,\vec{k}(g)}(x) + M\chi_{R_{\vec{k}}^*}(x)^{\frac{n+s+1}{n}} \prod_{g=1}^G \inf_{z\in R_{\vec{k}}^*} b_{g,\vec{k}(g)}(z),\\ &b_{g,\vec{k}(g)}(x) = M\chi_{Q_{l(g),\vec{k}(g)}^*}(x)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g})(x) + \prod_{l\in I_g} M\chi_{Q_{l,k_l}}(x)^\frac{n+N+1}{mn} \end{align*} for all $x\in {\mathbb R}^n$. As in the proof for the product type, we let \begin{equation*} A= \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\Big\|_{L^p}. \end{equation*} In view of $(n+s+1)p/n>1$, using Lemma \ref{lm.2B00} and H\"{o}lder's inequality, we see \begin{align*} A &\lesssim \prod_{g=1}^G \bigg\| \sum_{k_l\ge 1:l\in I_g} \bigg(\prod_{l\in I_g}\lambda_{l,k_l}\bigg) \bigg( \Big(M\chi_{Q_{l(g),\vec{k}(g)}^*}\Big)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g}) \\ & \qquad \qquad \qquad \qquad \qquad \qquad\qquad \qquad \qquad\qquad \qquad \qquad+ \prod_{l\in I_g} (M\chi_{Q_{l,k_l}})^\frac{n+N+1}{mn} \bigg) \bigg\|_{L^{q_g}}\\ &\lesssim \prod_{g=1}^G \Big( A_{g,1}+ A_{g,2} \Big), \end{align*} where $q_g\in (0,\infty)$ is defined by $ 1/q_g = \sum_{l\in I_g} 1/p_l $ and \begin{align*} A_{g,1}&= \left\| \sum_{k_l\ge 1 : l\in I_g} \left(\prod_{l\in I_g}\lambda_{l,k_l}\right) \Big(M\chi_{Q_{l(g),\vec{k}(g)}^*}\Big)^\frac{m_g(n+N+1)}{mn} M^{(G)}\circ T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g}) \right\|_{L^{q_g}},\\ A_{g,2}&= \left\| \sum_{k_l\ge 1 : l\in I_g} \prod_{l\in I_g}\lambda_{l,k_l} (M\chi_{Q_{l,k_l}})^\frac{n+N+1}{mn} \right\|_{L^{q_g}}. \end{align*} For $A_{g,2}$, we have only to employ Lemma \ref{lm.2B00} to get the desired estimate. 
For $A_{g,1}$, take $r$ large enough and employ Lemma \ref{lm-161031-1} to obtain
\[
A_{g,1}
\lesssim
\left\|
\sum_{k_l\ge 1 : l\in I_g}
\left(\prod_{l\in I_g}\lambda_{l,k_l}\right)
\chi_{Q_{l(g),\vec{k}(g)}^*}
M^{(r)}\circ M^{(G)}
[T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g})]
\right\|_{L^{q_g}}.
\]
Then it follows from Lemma \ref{lm.2A00} and \eqref{eq.3A3} that
\begin{align*}
A_{g,1}
&\lesssim
\left\|
\sum_{k_l\ge 1 : l\in I_g}
\left(\prod_{l\in I_g}\lambda_{l,k_l}\right)
\frac{\chi_{Q_{l(g),\vec{k}(g)}^*}}{|Q_{l(g),\vec{k}(g)}|^{1/q}}
\left\|
\chi_{Q_{l(g),\vec{k}(g)}^*}
M^{(r)}\circ M^{(G)}
[T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g})]
\right\|_{L^q}
\right\|_{L^{q_g}}\\
&\lesssim
\left\|
\sum_{k_l\ge 1 : l\in I_g}
\left(\prod_{l\in I_g}\lambda_{l,k_l}\right)
\chi_{Q_{l(g),\vec{k}(g)}^*}
\inf_{z\in Q_{l(g),\vec{k}(g)}^*}
\prod_{l\in I_g}
M\chi_{Q_{l,k_l}}(z)^\frac{n+N+1}{mn}
\right\|_{L^{q_g}}\\
&\leq
\prod_{l\in I_g}
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
(M\chi_{Q_{l,k_l}})^\frac{n+N+1}{mn}
\right\|_{L^{p_l}}
\lesssim
\prod_{l\in I_g}
\left\|
\sum_{k_l=1}^\infty
\lambda_{l,k_l}
\chi_{Q_{l,k_l}}
\right\|_{L^{p_l}},
\end{align*}
which completes the proof of Proposition \ref{LM.Key-31} for operators of mixed type.

\section{Examples}

We provide some examples of operators of the kinds discussed in this paper: all of the following are symbols of trilinear operators acting on functions on the real line, and thus they are functions on $\mathbb R^3= \mathbb R\times \mathbb R\times \mathbb R$. The symbol
$$
\sigma_1(\xi_1,\xi_2,\xi_3)
=
\frac{ (\xi_1+\xi_2+\xi_3)^2 }{ \xi_1^2+\xi_2^2+\xi_3^2}
$$
is associated with an operator of type~\eqref{eq.CalZygOPT}.
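Let us note in passing that $\sigma_1$ is homogeneous of degree $0$ and smooth away from the origin, so that, by homogeneity, it satisfies the standard Coifman--Meyer type bounds
\[
|\partial_\xi^\alpha \sigma_1(\xi_1,\xi_2,\xi_3)|
\lesssim_\alpha
(|\xi_1|+|\xi_2|+|\xi_3|)^{-|\alpha|}
\]
for every multi-index $\alpha$.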
The symbol
\begin{align*}
\sigma_2 (\xi_1,\xi_2,\xi_3)
&=
\frac{\xi_1^3}{(1+\xi_1^2)^{\frac 32}}
\frac{1}{(1+\xi_2^2+\xi_3^2)^{\frac 32}}
+
\frac{1}{(1+\xi_1^2)^{\frac 32}}
\frac{\xi_2^3}{(1+\xi_2^2+\xi_3^2)^{\frac 32}} \\
&\quad
+\frac{1}{(1+\xi_1^2)^{\frac 32}}
\frac{\xi_3^3}{(1+\xi_2^2+\xi_3^2)^{\frac 32}}
-
\frac{3\xi_1} {(1+\xi_1^2)^{\frac 32}}
\frac{\xi_2\xi_3}{(1+\xi_2^2+\xi_3^2)^{\frac 32}} \\
&=
\frac{(\xi_1+\xi_2+\xi_3)(\xi_1^2+\xi_2^2+\xi_3^2-\xi_1\xi_2-\xi_2\xi_3-\xi_3\xi_1)}
{(1+\xi_1^2)^{\frac 32}(1+\xi_2^2+\xi_3^2)^{\frac 32}}
\end{align*}
provides an example of an operator of type~\eqref{eq.CalZygOPT-3}. Note that each term is given as a product of a multiplier of $\xi_1$ times a multiplier of $(\xi_2,\xi_3)$. The symbol
\begin{eqnarray*}
\sigma_3 (\xi_1,\xi_2,\xi_3)
&= &
\frac{ \xi_1^4}{(1+\xi_1^2)^2} \frac{\xi_2^2}{(1+\xi_2^2)^2} \frac{\xi_3} { (1+\xi_3^2)^2}
-
\frac{ \xi_1^4}{(1+\xi_1^2)^2} \frac{\xi_2}{(1+\xi_2^2)^2} \frac{ \xi_3^2} { (1+\xi_3^2)^2} \\
&& \quad-
\frac{\xi_1^2}{(1+\xi_1^2)^2}\frac{ \xi_2^4}{(1+\xi_2^2)^2} \frac{\xi_3}{ (1+\xi_3^2)^2}
+
\frac{ \xi_1}{(1+\xi_1^2)^2} \frac{ \xi_2^4}{(1+\xi_2^2)^2} \frac{\xi_3^2} { (1+\xi_3^2)^2} \\
&& \quad+
\frac{\xi_1^2}{(1+\xi_1^2)^2}\frac{\xi_2} {(1+\xi_2^2)^2} \frac{ \xi_3^4}{(1+\xi_3 ^2)^2}
-
\frac{\xi_1}{(1+\xi_1^2)^2}\frac{\xi_2^2} {(1+\xi_2^2)^2} \frac{ \xi_3^4}{(1+\xi_3 ^2)^2}\\
&=&-
\frac{\xi_1\xi_2\xi_3(\xi_1-\xi_2)(\xi_2-\xi_3)(\xi_3-\xi_1)
(\xi_1+\xi_2+\xi_3)}{(1+\xi_1^2)^2(1+\xi_2^2)^2(1+\xi_3^2)^2}
\end{eqnarray*}
yields an example of an operator of type~\eqref{eq.CalZygOPT-2}. The next example,
\begin{eqnarray*}
\sigma_4 (\xi_1,\xi_2,\xi_3)
&= &
\frac{\xi_1\xi_2}{\xi_1^2+\xi_2^2+(\xi_1+\xi_2)^2} \cdot 1
-
\frac{\xi_1\xi_2}{\xi_1^2+\xi_2^2+\xi_3^2},
\end{eqnarray*}
shows that the integer $G(\rho)$ varies according to $\rho$.
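For the reader's convenience, we record the classical algebraic identities behind the factorizations of $\sigma_2$ and $\sigma_3$:
\[
\xi_1^3+\xi_2^3+\xi_3^3-3\xi_1\xi_2\xi_3
=(\xi_1+\xi_2+\xi_3)(\xi_1^2+\xi_2^2+\xi_3^2-\xi_1\xi_2-\xi_2\xi_3-\xi_3\xi_1)
\]
and
\[
\xi_1^3(\xi_2-\xi_3)+\xi_2^3(\xi_3-\xi_1)+\xi_3^3(\xi_1-\xi_2)
=-(\xi_1-\xi_2)(\xi_2-\xi_3)(\xi_3-\xi_1)(\xi_1+\xi_2+\xi_3),
\]
respectively.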
Notice that all four examples satisfy $$ \sigma_1(\xi_1,\xi_2,\xi_3)=\sigma_2(\xi_1,\xi_2,\xi_3) =\sigma_3(\xi_1,\xi_2,\xi_3)=\sigma_4(\xi_1,\xi_2,\xi_3)=0 $$ when $\xi_1+\xi_2+\xi_3=0$. This yields condition \eqref{eq.TmCan} when $s=0$; see \cite{Our-next-paper}. For the case of $s\in \mathbb Z^+$, we consider $\sigma_1{}^{s+1},\sigma_2{}^{s+1},\sigma_3{}^{s+1}$, for example. \end{document}
\begin{document}
\title[ local well-posedness theory for MHD boundary layer] {MHD boundary layers theory in Sobolev spaces without monotonicity. \uppercase\expandafter{\romannumeral1}. well-posedness theory}
\author[Cheng-Jie Liu]{Cheng-Jie Liu}
\address{Cheng-Jie Liu \newline\indent Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong}
\email{cjliusjtu@gmail.com}
\author[Feng Xie]{Feng Xie}
\address{Feng Xie \newline\indent School of Mathematical Sciences, and LSC-MOE, Shanghai Jiao Tong University, Shanghai 200240, P. R. China}
\email{tzxief@sjtu.edu.cn}
\author[Tong Yang]{Tong Yang}
\address{Tong Yang \newline\indent Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong \newline\indent Department of Mathematics, Jinan University, Guangzhou 510632, P. R. China }
\email{matyang@cityu.edu.hk}
\begin{abstract}
We study the well-posedness theory for the MHD boundary layer. The boundary layer equations are governed by the Prandtl type equations that are derived from the incompressible MHD system with non-slip boundary condition on the velocity and perfectly conducting condition on the magnetic field. Under the assumption that the initial tangential magnetic field is not zero, we establish the local-in-time existence and uniqueness of solutions for the nonlinear MHD boundary layer equations. Compared with the well-posedness theory of the classical Prandtl equations, for which the monotonicity condition of the tangential velocity plays a crucial role, this monotonicity condition is not needed for the MHD boundary layer. This rigorously justifies the physical understanding that the magnetic field has a stabilizing effect on the MHD boundary layer.
\end{abstract}
\keywords{Prandtl type equations, MHD, well-posedness, Sobolev space, non-monotone condition}
\subjclass[2000]{76N20, 35A07, 35G31,35M33}
\maketitle
\tableofcontents

\section{Introduction and Main Result}
\label{S1}

One important problem in magnetohydrodynamics (MHD) is to understand the high Reynolds number limit in a domain with boundary. In this paper, we consider the following initial boundary value problem for the two dimensional (2D) viscous MHD equations (cf. \cite{Cow, davidson, D-L, S-T}) in a periodic domain $\{(t,x,y):~t\in[0,T], x\in\mathbb T, y\in\mathbb R_+\}:$
\begin{align}\label{eq_mhd}
\left\{
\begin{array}{ll}
{\partial}_t\ue+(\ue\cdot\nabla)\ue-(\he\cdot\nabla)\he +\nabla p^\epsilon=\mu\epsilon\triangle\ue,\\
{\partial}_t\he -\nabla\times(\ue\times \he) =\kappa\epsilon\triangle \he,\\
\nabla\cdot\ue=0,\quad\nabla\cdot \he=0.
\end{array}
\right.
\end{align}
Here, we assume the viscosity and resistivity coefficients have the same order as a small parameter $\epsilon$. $\ue=(u^\epsilon_1,u^\epsilon_2)$ denotes the velocity vector, $\he=(h^\epsilon_1,h^\epsilon_2)$ denotes the magnetic field, and $p^\epsilon=\tilde p^\epsilon+\frac{|\he|^2}{2}$ denotes the total pressure with $\tilde p^\epsilon$ the pressure of the fluid. On the boundary, the non-slip boundary condition is imposed on the velocity field
\begin{align} \label{bc_u}
\ue|_{y=0}=\bf{0},
\end{align}
and the perfectly conducting boundary condition on the magnetic field
\begin{align} \label{bc_h}
h_2^\epsilon|_{y=0}={\partial}_yh_1^\epsilon|_{y=0}=0.
\end{align}
The formal limiting system of \eqref{eq_mhd} yields the ideal MHD equations when $\epsilon$ tends to zero. However, there is a mismatch in the tangential velocity on the boundary $y=0$ between the equations \eqref{eq_mhd} and the limiting equations. This is why a boundary layer forms in the vanishing viscosity and resistivity limit.
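As a heuristic (not part of the rigorous derivation that follows), the boundary-layer thickness $\sqrt{\epsilon}$ can be read off by balancing convection against diffusion in \eqref{eq_mhd} near the boundary: if the tangential scales are $O(1)$ and the layer has thickness $\delta$, then
\[
(\ue\cdot\nabla)\ue \sim \mu\epsilon\,{\partial}_y^2\ue
\quad\Longrightarrow\quad
1 \sim \frac{\mu\epsilon}{\delta^2}
\quad\Longrightarrow\quad
\delta \sim \sqrt{\epsilon},
\]
which motivates the scaling $\tilde y=\epsilon^{-\frac{1}{2}}y$ used below.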
To find out the terms in \eqref{eq_mhd} whose contribution is essential for the boundary layer, we use the same scaling as the one used in \cite{OS},
\begin{align*}
t=t,\quad x=x,\quad \tilde y=\epsilon^{-\frac{1}{2}}y,
\end{align*}
then, set
\begin{align*}
\left\{
\begin{array}{ll}
u_1(t,x,\tilde y)=u_1^\epsilon(t,x, y),\\
u_2(t,x,\tilde y)=\epsilon^{-\frac{1}{2}}u^\epsilon_2(t,x, y),
\end{array}
\right.
\qquad
\left\{
\begin{array}{ll}
h_1(t,x,\tilde y)=h_1^\epsilon(t,x, y),\\
h_2(t,x,\tilde y)=\epsilon^{-\frac{1}{2}}h_2^\epsilon(t,x, y),
\end{array}
\right.
\end{align*}
and
\[p(t,x,\tilde y)=p^\epsilon(t,x,y).\]
Then by taking the leading order, the equations \eqref{eq_mhd} are reduced to
\begin{align} \label{eq_bl}
\left\{
\begin{array}{ll}
\partial_tu_1+u_1\partial_xu_1+u_2\partial_yu_1-h_1\partial_xh_1-h_2\partial_yh_1+{\partial}_xp=\mu\partial^2_yu_1,\\
{\partial}_yp=0,\\
\partial_th_1+\partial_y(u_2h_1-u_1h_2)=\kappa\partial_y^2h_1,\\
\partial_th_2-\partial_x(u_2h_1-u_1h_2)=\kappa\partial_y^2h_2,\\
\partial_xu_1+\partial_yu_2=0,\quad \partial_xh_1+\partial_yh_2=0,
\end{array}
\right.
\end{align}
in $\{t>0, x\in\mathbb{T},y\in\mathbb{R}^+\}$, where we have replaced $\tilde y$ by $y$ for simplicity of notations. The second equation of \eqref{eq_bl} implies that, to leading order, the total pressure $p^\epsilon(t,x,y)$ is invariant across the boundary layer, and should be matched to the outflow pressure $P(t,x)$ on top of the boundary layer, that is, the trace of the pressure of the ideal MHD flow. Consequently, we have
\[p(t,x,y)~\equiv~P(t,x).\]
It is worth noting that the fluid pressure $\tilde p^\epsilon$ may exhibit a leading-order boundary layer because of the appearance of the boundary layer for the magnetic field. This differs from the case of a general fluid in the absence of a magnetic field, for which the leading-order boundary layer for the fluid pressure always vanishes.
The tangential component $u_1(t,x,y)$ of the velocity field, respectively $h_1(t,x,y)$ of the magnetic field, should match the outflow tangential velocity $U(t,x)$, respectively the outflow tangential magnetic field $H(t,x)$, at the top of the boundary layer, that is, \begin{equation}\label{bc_infty} u_1(t,x,y)~\rightarrow~U(t,x),\quad h_1(t,x,y)~\rightarrow~H(t,x),\quad\mbox{as}\quad y~\rightarrow~+\infty, \end{equation} where $U(t,x)$ and $H(t,x)$ are the traces of the tangential velocity and magnetic field, respectively. Therefore, we have the following ``matching'' condition: \begin{align}\label{Brou} U_t+UU_x-HH_x+P_x=0,\quad H_t+UH_x-HU_x=0, \end{align} which shows that \eqref{bc_infty} is consistent with the first and third equations of \eqref{eq_bl}. Moreover, on the boundary $\{y=0\}$, the boundary conditions \eqref{bc_u} and \eqref{bc_h} give \begin{align} \label{bc_bl} u_1|_{y=0}=u_2|_{y=0}={\partial}_yh_1|_{y=0}=h_2|_{y=0}=0. \end{align} On the other hand, note that equation $\eqref{eq_bl}_4$ is a direct consequence of equation $\eqref{eq_bl}_3$, the constraint ${\partial}_xh_1+{\partial}_yh_2=0$ in $(\ref{eq_bl})_5$ and the boundary condition (\ref{bc_bl}). Hence, we only need to study the following initial-boundary value problem of the MHD boundary layer equations in $\{t\in[0,T], x\in\mathbb{T},y\in\mathbb{R}^+\}$, \begin{align} \label{bl_mhd} \left\{ \begin{array}{ll} \partial_tu_1+u_1\partial_xu_1+u_2\partial_yu_1-h_1\partial_xh_1-h_2\partial_yh_1=\mu\partial^2_yu_1-P_x,\\ \partial_th_1+\partial_y(u_2h_1-u_1h_2)=\kappa\partial_y^2h_1,\\ \partial_xu_1+\partial_yu_2=0,\quad \partial_xh_1+\partial_yh_2=0,\\ u_1|_{t=0}=u_{10}(x,y),\quad h_1|_{t=0}=h_{10}(x,y),\\ (u_1,u_2,\partial_yh_1,h_2)|_{y=0}=\textbf{0},\quad \lim\limits_{y\rightarrow+\infty}(u_1,h_1)=(U,H)(t,x). \end{array} \right.
\end{align} The aim of this paper is to show the local well-posedness of the system \eqref{bl_mhd} with non-zero tangential component of the magnetic field, that is, without loss of generality, by assuming \begin{equation} \label{ass_m} h_1(t,x,y)>0. \end{equation} Let us first introduce some weighted Sobolev spaces for later use. Denote \[\Omega~:=~\big\{(x,y):~x\in\mathbb T,~y\in\mathbb R_+\big\}.\] For any $l\in\mathbb R,$ denote by $L_l^2(\Omega)$ the weighted Lebesgue space with respect to the spatial variables: \[L_l^2(\Omega)~:=~\Big\{f(x,y):~\Omega\rightarrow\mathbb R,~ \|f\|_{L^2_l(\Omega)}:=\Big(\int_{\Omega}\langle y\rangle^{2l}|f(x,y)|^2dxdy\Big)^{\frac{1}{2}}<+\infty\Big\},\qquad \langle y\rangle~=~1+y, \] and then, for any given $m\in\mathbb{N},$ denote by $H_l^m(\Omega)$ the weighted Sobolev space: $$H_l^m(\Omega)~:=~\Big\{f(x,y):~\Omega\rightarrow\mathbb R,~\|f\|_{H_l^m(\Omega)}:=\Big(\sum_{m_1+m_2\leq m}\|\langle y\rangle^{l+m_2}{\partial}_x^{m_1}{\partial}_y^{m_2}f\|_{L^2(\Omega)}^2\Big)^{\frac{1}{2}}<+\infty\Big\}.$$ Now, we can state the main result as follows. \begin{thm}\label{Th1} Let $m\geq5$ be an integer, and $l\geq0$ a real number. Assume that the outer flow $(U,H,P_x)(t,x)$ satisfies, for some $T>0,$ \begin{equation}\label{ass_outflow} M_0~:=~\sum_{i=0}^{2m+2}\Big(\sup_{0\leq t\leq T}\|{\partial}_t^i(U,H,P)(t,\cdot)\|_{H^{2m+2-i}(\mathbb T_x)}+\|{\partial}_t^i(U,H,P)\|_{L^2(0,T;H^{2m+2-i}(\mathbb T_x))}\Big)<+\infty. \end{equation} Also, we suppose that the initial data $(u_{10},h_{10})(x,y)$ satisfies \begin{equation}\label{ass_ini} \Big(u_{10}(x,y)-U(0,x),h_{10}(x,y)-H(0,x)\Big)\in H^{3m+2}_l(\Omega), \end{equation} and the compatibility conditions up to $m$-th order. Moreover, there exists a sufficiently small constant $\delta_0>0$ such that \begin{align}\label{ass_bound} \big|\langle y\rangle^{l+1}{\partial}_y^i(u_{10}, h_{10})(x,y)\big|\leq(2\delta_0)^{-1}, \qquad h_{10}(x,y)\geq2\delta_0,\quad\mbox{for}\quad i=1,2,~ (x,y)\in\Omega.
\end{align} Then, there exist a positive time $0<T_*\leq T$ and a unique solution $(u_1,u_2, h_1,h_2)$ to the initial boundary value problem (\ref{bl_mhd}), such that \begin{align}\label{est_main1} (u_1-U,h_1-H)\in\bigcap_{i=0}^mW^{i,\infty}\Big(0,T_*;H_l^{m-i}(\Omega)\Big), \end{align} and \begin{align}\label{est_main2} (u_2+U_xy,h_2+H_xy)&\in\bigcap_{i=0}^{m-1}W^{i,\infty}\Big(0,T_*;H_{-1}^{m-1-i}(\Omega)\Big),\nonumber\\%\qquad\mbox{for}\quad \lambda>\frac{1}{2},\nonumber\\ ({\partial}_yu_2+U_x,{\partial}_yh_2+H_x)&\in\bigcap_{i=0}^{m-1}W^{i,\infty}\big(0,T_*;H_l^{m-1-i}(\Omega)\big). \end{align} Moreover, if $l>\frac{1}{2},$ \begin{align}\label{est_main3} &(u_2+U_xy,h_2+H_xy)\in\bigcap_{i=0}^{m-1}W^{i,\infty}\Big(0,T_*;L^\infty\big(\mathbb R_{y,+};H^{m-1-i}(\mathbb T_x)\big)\Big). \end{align} \end{thm} \begin{rem} Note that the regularity assumption on the outflow $(U,H,P)$ and the initial data $(u_{10}, h_{10})$ is not optimal. Here, we need this much regularity to simplify the construction of the approximate solution, cf. Section 4. One may relax the regularity requirement by using other approximations. \end{rem} \iffalse \begin{rem} The result in Theorem \ref{Th1} can also be extended to the case on the half plane, i.e., $(x,y)\in\mathbb R_+^2$ under some extra assumption on the outflow $(U,H,P)$, such as \[(U,H,P)(t,\cdot)\in L^\infty(\mathbb R_x),\quad{\partial}_t^i{\partial}_x^j(U,H,P)(t,\cdot)\in L^2(\mathbb R_x),\quad \mbox{for}\quad i+j\geq1.\] \end{rem} \fi We now review some works related to the problem studied in this paper. First of all, the study of fluids around a rigid body at high Reynolds numbers is an important problem in both physics and mathematics. The classical work traces back to Prandtl's 1904 derivation of the Prandtl equations for boundary layers from the incompressible Navier-Stokes equations with the non-slip boundary condition, cf. \cite{P}.
About sixty years after its derivation, the first systematic work in rigorous mathematics was achieved by Oleinik, cf. \cite{O}, in which she showed that, under the monotonicity condition on the tangential velocity field in the normal direction to the boundary, local in time well-posedness of the Prandtl system can be justified in 2D by using the Crocco transformation. This result, together with some extensions, is presented in Oleinik-Samokhin's classical book \cite{OS}. Recently, this well-posedness result was proved by an elementary energy method in the framework of Sobolev spaces, independently in \cite{AWXY} and \cite{MW1}, by taking care of a cancellation in the convection terms to overcome the loss of derivative in the tangential direction. Moreover, by imposing an additional favorable condition on the pressure, a global in time weak solution was obtained in \cite{XZ}. Some three space dimensional cases were studied for both classical and weak solutions in \cite{LWY1,LWY2}. Since Oleinik's classical work, the necessity of the monotonicity condition on the velocity field for well-posedness remained a question until the 1980s, when Caflisch and Sammartino \cite{SC1, SC2} obtained well-posedness in the framework of analytic functions without this condition, cf. \cite{IV,KV, KMVW, LCS, Mae, ZZ} and the references therein. Recently, the analyticity requirement was further relaxed to Gevrey regularity, cf. \cite{GM, GMM, LWX, L-Y}. When the monotonicity condition is violated, separation of the boundary layer is expected and observed for classical fluids. For this, E-Engquist constructed a finite time blowup solution to the Prandtl equations in \cite{EE}. Recently, when the background shear flow has a non-degenerate critical point, some interesting ill-posedness (or instability) phenomena of solutions to both the linear and nonlinear Prandtl equations around the shear flow have been studied, cf. \cite{GD,GN,G,GN1,LWY, LY} and the references therein.
All these results show that the monotonicity assumption on the tangential velocity is essential for well-posedness, except in the framework of analytic or Gevrey functions. On the other hand, for electrically conducting fluids such as plasmas and liquid metals, the system of magnetohydrodynamics (MHD) is a fundamental system describing the motion of a fluid under the influence of an electromagnetic field. The study of MHD was initiated by Alfv\'en \cite{Alf}, who showed that the magnetic field can induce currents in a moving conductive fluid, giving rise to a new propagation mechanism along the magnetic field, the so-called Alfv\'en waves. For plasmas, the boundary layer equations can be derived from the fundamental MHD system; they are more complicated than the classical Prandtl system because of the coupling of the magnetic field with the velocity field through the Maxwell equations. On the other hand, in physics, it is believed that the magnetic field has a stabilizing effect on the boundary layer, which could provide a mechanism for containment of, for example, high temperature gas. If the magnetic field is transversal to the boundary, there are extensive discussions on the so-called Hartmann boundary layer, cf. \cite{davidson, Har, H-L}. In addition, there are works on the stability of boundary layers with minimum Reynolds number for flows with different structures, revealing the difference from the classical boundary layers without an electromagnetic field, cf. \cite{A,D,R}. In terms of mathematical derivation, when the non-slip boundary condition for the velocity is present, the boundary layer systems that capture the leading order of the fluid variables near the boundary depend on three physical parameters: the Reynolds number, the magnetic Reynolds number, and their ratio, called the magnetic Prandtl number.
When the Reynolds number tends to infinity while the magnetic Reynolds number is fixed, the derived boundary layer system is similar to the Prandtl system for classical fluids, and its well-posedness was discussed in Oleinik-Samokhin's book \cite{OS}, for which the monotonicity condition on the velocity field is needed. When the Reynolds number is fixed while the magnetic Reynolds number tends to infinity, which corresponds to an infinite magnetic Prandtl number, the boundary layer system is similar to the inviscid Prandtl system and the monotonicity condition on the velocity field is not needed for well-posedness. In the case of finite magnetic Prandtl number, when both the Reynolds number and the magnetic Reynolds number tend to infinity at the same rate, the boundary layer system is totally different from the classical Prandtl system; this is the system to be discussed in this paper. Note that for this system, no mathematical well-posedness results have been obtained so far in Sobolev spaces. Furthermore, we mention that in \cite{XXW}, the authors establish the vanishing viscosity limit for the MHD system in a bounded smooth domain of $\mathbb{R}^d, d=2,3$ with a slip boundary condition, for which the leading order of the boundary layers for both the velocity and the magnetic field vanishes because of the slip boundary conditions. Precisely, in this paper, to capture the stabilizing effect of the magnetic field, we establish the well-posedness theory for the problem (\ref{bl_mhd}) without any monotonicity assumption on the tangential velocity. The only essential condition is that the background tangential magnetic field has a positive lower bound. Hence, the result in this paper enriches the classical local well-posedness theory for the Prandtl equations. At the same time, it is in agreement with the general physical understanding that the magnetic field stabilizes the boundary layer. The rest of the paper is organized as follows. Some preliminaries are given in Section 2.
In Section 3, we establish the a priori energy estimates for the nonlinear problem (\ref{bl_mhd}). The local-in-time existence and uniqueness of the solution to (\ref{bl_mhd}) in Sobolev spaces are given in Section 4. In Section 5, we introduce another approach to the well-posedness theory for (\ref{bl_mhd}), using a nonlinear coordinate transformation in the spirit of the Crocco transformation for the classical Prandtl system. Finally, the technical proof of a lemma is given in the Appendix. \section{Preliminaries} First, we introduce some notation. We use the tangential derivative operator $${\partial}^\beta_{\tau}={\partial}^{\beta_1}_t{\partial}^{\beta_2}_x,\quad\mbox{for}\quad\beta=(\beta_1,\beta_2)\in\mathbb N^2,\quad|\beta|=\beta_1+\beta_2,$$ and then denote the derivative operator (in both time and space) by $$\quad D^\alpha={\partial}_\tau^\beta{\partial}_y^k, \quad\mbox{for}\quad\alpha=(\beta_1,\beta_2,k)\in\mathbb N^3, \quad|\alpha|=|\beta|+k.$$ Set $e_i\in\mathbb N^2, i=1,2$ and $E_j\in\mathbb N^3, j=1,2,3$ by $$e_1=(1,0)\in\mathbb N^2, ~e_2=(0,1)\in\mathbb N^2,~E_1=(1,0,0)\in\mathbb N^3, ~E_2=(0,1,0)\in\mathbb N^3, ~E_3=(0,0,1)\in\mathbb N^3,$$ and denote by ${\partial}_y^{-1}$ the inverse of the derivative ${\partial}_y$, i.e., $({\partial}_y^{-1}f)(y):=\int_0^yf(z)dz.$ Moreover, we use the notation $[\cdot,\cdot]$ to denote the commutator, and denote by $\mathcal P(\cdot)$ a nondecreasing polynomial function, which may differ from line to line. For $m\in\mathbb{N},$ define the function space $\mathcal H_l^m$ of measurable functions $f(t,x,y): [0,T]\times\Omega\rightarrow\mathbb R,$ such that for any $t\in[0,T],$ \begin{align}\label{def_h} \|f(t)\|_{\mathcal H_l^m}~:=~ \Big(\sum_{|\alpha|\leq m}\|\langle y\rangle^{l+k}D^\alpha f(t,\cdot)\|_{L^2(\Omega)}^2\Big)^{\frac{1}{2}}<+\infty.
\end{align} \iffalse For $m\in\mathbb{N},$ we define the function spaces $\mathcal A_l^m(T)$ and $\mathcal B_l^m(T)$ of measurable functions $f(t,x,y): [0,T]\times\Omega\rightarrow\mathbb R,$ such that \[\begin{split} &\|f\|_{\mathcal A_l^m(T)}~:=~ \Big(\sum_{|\alpha|\leq m}\|\langle y\rangle^{l+k}D^\alpha f\|_{L^2(\Omega)}^2\Big)^{\frac{1}{2}}<+\infty,\\\ &\|f\|_{\mathcal B_l^m(T)}~:=~ \Big(\sup_{0\leq t\leq T}\sum_{|\alpha|\leq m}\|\langle y\rangle^{l+k}D^\alpha f(t,\cdot)\|_{L^2(\Omega)}^2\Big)^{\frac{1}{2}}<+\infty. \end{split}\] Obviously, it follows that \[\mathcal A_l^m(T)~=~\bigcap_{i=0}^mH^{i}\big(0,T;H_l^{m-i}(\Omega)\big),\quad \mathcal B_l^m(T)~=~\bigcap_{i=0}^mW^{i,\infty}\big(0,T;H_l^{m-i}(\Omega)\big). \] \fi The following inequalities will be used frequently in this paper. \begin{lem}\label{lemma_ineq} For proper functions $f,g,h$, the following holds.\\ \romannumeral1) If $\lim\limits_{y\rightarrow+\infty}(fg)(x,y)=0,$ then \begin{equation}\label{trace} \Big|\int_{\mathbb T_x}(fg)|_{y=0}dx\Big|\leq \|{\partial}_yf\|_{L^2(\Omega)}\|g\|_{L^2(\Omega)}+\|f\|_{L^2(\Omega)}\|{\partial}_yg\|_{L^2(\Omega)}. \end{equation} In particular, if $\lim\limits_{y\rightarrow+\infty}f(x,y)=0,$ then \begin{equation}\label{trace0} \big\|f|_{y=0}\big\|_{L^2(\mathbb T_x)}\leq \sqrt{2}~\|f\|_{L^2(\Omega)}^{\frac{1}{2}}\|{\partial}_yf\|_{L^2(\Omega)}^{\frac{1}{2}}. \end{equation} \romannumeral2) For $l\in\mathbb R$ and an integer $m\geq3, $ any $\alpha=(\beta,k)\in\mathbb N^3, \tilde\alpha=(\tilde\beta,\tilde k)\in\mathbb N^3$ with $|\alpha|+|\tilde\alpha|\leq m$, \begin{align}\label{Morse} \big\|\big(D^\alpha f\cdot D^{\tilde\alpha}g\big)(t,\cdot)\big\|_{L^2_{l+k+\tilde k}(\Omega)}\leq C\|f(t)\|_{\mathcal H_{l_1}^m}\|g(t)\|_{\mathcal H_{l_2}^m},\qquad \forall~l_1,l_2\in\mathbb R,\quad l_1+l_2=l. 
\end{align} \romannumeral3) For any $\lambda>\frac{1}{2}, \tilde\lambda>0$, \begin{align}\label{normal} \big\|\langle y\rangle^{-\lambda}({\partial}_y^{-1}f)(y)\big\|_{L^2_y(\mathbb R_+)}\leq \frac{2}{2\lambda-1}\big\|\langle y\rangle^{1-\lambda}f(y)\big\|_{L^2_y(\mathbb R_+)},~ \big\|\langle y\rangle^{-\tilde\lambda}({\partial}_y^{-1}f)(y)\big\|_{L^\infty_y(\mathbb R_+)}\leq \frac{1}{\tilde\lambda}\big\|\langle y\rangle^{1-\tilde\lambda}f(y)\big\|_{L^\infty_y(\mathbb R_+)}, \end{align} and then, for $l\in\mathbb R$, an integer $m\geq3, $ and any $\alpha=(\beta,k)\in\mathbb N^3, \tilde \beta=(\tilde \beta_1,\tilde\beta_2)\in\mathbb N^2$ with $|\alpha|+|\tilde\beta|\leq m$, \begin{align}\label{normal0} \big\|\big(D^\alpha g\cdot{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}h\big)(t,\cdot)\big\|_{L^2_{l+k}(\Omega)}\leq C\|g(t)\|_{\mathcal H_{l+\lambda}^m}\|h(t)\|_{\mathcal H_{1-\lambda}^m}. \end{align} In particular, for $\lambda=1,$ \begin{align}\label{normal1} \big\|\langle y\rangle^{-1}({\partial}_y^{-1}f)(y)\big\|_{L^2_y(\mathbb R_+)}\leq 2\big\|f\big\|_{L^2_y(\mathbb R_+)},\quad\big\|\big(D^\alpha g\cdot{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}h\big)(t,\cdot)\big\|_{L^2_{l+k}(\Omega)}\leq C\|g(t)\|_{\mathcal H_{l+1}^m}\|h(t)\|_{\mathcal H_{0}^m}. \end{align} \romannumeral4) For any $\lambda>\frac{1}{2}$, \begin{align}\label{normal2} \big\|({\partial}_y^{-1}f)(y)\big\|_{L^\infty_{y}(\mathbb R_+)}\leq C\|f\|_{L_{y,\lambda}^2(\mathbb R_+)}, \end{align} and then, for $l\in\mathbb R$, an integer $m\geq2, $ and any $\alpha=(\beta,k)\in\mathbb N^3, \tilde \beta=(\tilde \beta_1,\tilde\beta_2)\in\mathbb N^2$ with $|\alpha|+|\tilde\beta|\leq m$, \begin{align}\label{normal3} \big\|\big(D^\alpha f\cdot{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}g\big)(t,\cdot)\big\|_{L^2_{l+k}(\Omega)}\leq C\|f(t)\|_{\mathcal H_l^m}\|g(t)\|_{\mathcal H_\lambda^m}. 
\end{align} \end{lem} To overcome the technical difficulty originating from the boundary terms at $\{y=+\infty\}$, we introduce an auxiliary function $\phi(y)\in C^\infty(\mathbb R_+)$ satisfying \[\phi(y)=\begin{cases} y,\quad y\geq 2R_0,\\ 0,\quad 0\leq y\leq R_0 \end{cases}\] for some constant $R_0>0$. Then, set the new unknowns: \begin{align}\label{new_quan} &u(t,x,y)~:=~u_1(t,x,y)-U(t,x)\phi'(y),\quad v(t,x,y)~:=~u_2(t,x,y)+U_x(t,x)\phi(y),\nonumber\\ &h(t,x,y)~:=~h_1(t,x,y)-H(t,x)\phi'(y),\quad g(t,x,y)~:=~h_2(t,x,y)+H_x(t,x)\phi(y). \end{align} This construction of $(u,v,h,g)$ is chosen to ensure the divergence-free conditions and homogeneous boundary conditions, i.e., \[\begin{split} &{\partial}_x u+{\partial}_y v=0,\quad {\partial}_x h+{\partial}_y g=0,\\ &(u,v,{\partial}_yh,g)|_{y=0}=\textbf 0,\quad \lim_{y\rightarrow+\infty}(u,h)=\textbf 0, \end{split}\] which implies that $v=-{\partial}_y^{-1}{\partial}_xu$ and $g=-{\partial}_y^{-1}{\partial}_xh.$ It is easy to see that \begin{align*} (u, h)(t,x,y)=\big(u_1(t,x,y)-U(t,x), h_1(t,x,y)-H(t,x)\big)+\big(U(t,x)(1-\phi'(y)),H(t,x)(1-\phi'(y))\big), \end{align*} which implies, by the construction of $\phi(y)$, \begin{align}\label{est_axu} \|(u, h)(t)\|_{\mathcal H_l^m}-CM_0\leq\|(u_1-U,h_1-H)(t)\|_{\mathcal H_l^m}\leq&\|(u,h)(t)\|_{\mathcal H_l^m}+CM_0.
\end{align} By using the new unknowns $(u,v,h,g)$ given by \eqref{new_quan}, we can reformulate the original problem \eqref{bl_mhd} to the following: \begin{align}\label{bl_main} \begin{cases} \partial_tu+\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]u-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]h-\mu\partial^2_yu\\ \qquad\qquad+U_x\phi'u+U\phi''v-H_x\phi'h-H\phi''g=r_1,\\ \partial_th+\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]h-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]u-\kappa\partial_y^2h\\ \qquad\qquad+H_x\phi'u+H\phi''v-U_x\phi'h-U\phi''g=r_2,\\ \partial_xu+\partial_yv=0,\quad \partial_xh+\partial_yg=0,\\ (u,v,\partial_yh,g)|_{y=0}=\textbf 0,\\%\quad \lim\limits_{y\rightarrow+\infty}u_1=U(t,x),\quad \lim\limits_{y\rightarrow+\infty}b_1=B(t,x). (u,h)|_{t=0}=\big(u_{10}(x,y)-U(0,x)\phi'(y),h_{10}(x,y)-H(0,x)\phi'(y)\big)\triangleq(u_0,h_0)(x,y), \end{cases} \end{align} where \begin{align}\label{def_rhs} \begin{cases} r_1=&U_t[(\phi')^2-\phi\phi''-\phi']+P_x\big[(\phi')^2-\phi\phi''-1\big]+\mu U\phi^{(3)},\\ r_2=&H_t[(\phi')^2+\phi\phi''-\phi']+\kappa H\phi^{(3)}. \end{cases} \end{align} Note that we have used the divergence free conditions in obtaining the equations of $(u,h)$ in \eqref{bl_main}, and the relations \eqref{Brou} in the calculation of \eqref{def_rhs}. It is worth noting that by substituting \eqref{new_quan} into the second equation of \eqref{bl_mhd} directly, there is another equivalent form for the equation of $h$, which may be convenient for use in some situations: \begin{align}\label{eq_h} {\partial}_t h+{\partial}_y\big[(v-U_x\phi)(h+H\phi')-(u+U\phi')(g-H_x\phi)\big]-\kappa{\partial}_y^2h=-H_t\phi'+\kappa H\phi^{(3)}. 
\end{align} By the choice of $\phi(y)$, it is easy to see that \begin{align}\label{property_r} r_1(t,x,y),~r_2(t,x,y)~\equiv~0,\qquad &y\geq2R_0,\nonumber\\ r_1(t,x,y)~\equiv~-P_x(t,x),\quad r_2(t,x,y)~\equiv~0,\qquad &0\leq y\leq R_0, \end{align} and then, for any $t\in[0,T],\lambda\geq0$ and $|\alpha|\leq m$, by virtue of \eqref{ass_outflow}, \begin{equation} \label{est_rhd} \|\langle y\rangle^\lambda D^\alpha r_1(t)\|_{L^2(\Omega)},~\|\langle y\rangle^\lambda D^\alpha r_2(t)\|_{L^2(\Omega)}\leq C\sum_{|\beta|\leq|\alpha|+1}\|{\partial}_\tau^{\beta}(U,H,P_x)(t)\|_{L^{2}(\mathbb T_x)}\leq CM_0. \end{equation} Furthermore, similarly to \eqref{est_axu}, we have for the initial data: \begin{align}\label{est_ini} \|(u_0,h_0)\|_{H_l^{2m}(\Omega)}-CM_0\leq&\big\|\big(u_{10}(x,y)-U(0,x),h_{10}-H(0,x)\big)\big\|_{H_l^{2m}(\Omega)}\leq \|(u_0,h_0)\|_{H_l^{2m}(\Omega)}+CM_0. \end{align} Finally, from the transformation \eqref{new_quan} and the relations \eqref{est_axu} and \eqref{est_ini}, it is easy to see that Theorem \ref{Th1} is a corollary of the following result. \begin{thm}\label{thm_main} Let $m\geq5$ be an integer, $l\geq0$ a real number, and let $(U,H,P_x)(t,x)$ satisfy the hypotheses given in Theorem \ref{Th1}. In addition, assume that for the problem \eqref{bl_main}, the initial data satisfies \(\big(u_{0}(x,y),h_{0}(x,y)\big)\in H^{3m+2}_l(\Omega)\) and the compatibility conditions up to $m$-th order. Moreover, there exists a sufficiently small constant $\delta_0>0$, such that \begin{align}\label{ass_bound-modify} \big|\langle y\rangle^{l+1}{\partial}_y^i(u_{0}, h_{0})(x,y)\big|\leq(2\delta_0)^{-1}, \quad h_{0}(x,y)+H(0,x)\phi'(y)\geq2\delta_0,\quad\mbox{for}\quad i=1,2,~ (x,y)\in\Omega.
\end{align} Then, there exist a time $0<T_*\leq T$ and a unique solution $(u,v,h,g)$ to the initial boundary value problem (\ref{bl_main}), such that \begin{align}\label{result_1} (u, h)\in\bigcap_{i=0}^mW^{i,\infty}\Big(0,T_*;H_l^{m-i}(\Omega)\Big), \end{align} and \begin{align}\label{result_2} (v,g)&\in\bigcap_{i=0}^{m-1}W^{i,\infty}\Big(0,T_*;H_{-1}^{m-1-i}(\Omega)\Big),\quad ({\partial}_yv,{\partial}_yg)\in\bigcap_{i=0}^{m-1}W^{i,\infty}\big(0,T_*;H_l^{m-1-i}(\Omega)\big). \end{align} Moreover, if $l>\frac{1}{2},$ \begin{align}\label{result_3} &(v, g)\in\bigcap_{i=0}^{m-1}W^{i,\infty}\Big(0,T_*; L^\infty\big(\mathbb R_{y,+};H^{m-1-i}(\mathbb T_x)\big)\Big). \end{align} \end{thm} Therefore, our main task is to show the above Theorem \ref{thm_main}; its proof will be given in the following two sections. \section{A priori estimates} In this section, we establish a priori estimates for the nonlinear problem (\ref{bl_main}). \begin{prop}\label{prop_priori}[\textit{Weighted estimates for $D^m(u,h)$}]\\ Let $m\geq5$ be an integer, $l\geq0$ a real number, and suppose the hypotheses for $(U,H,P_x)(t,x)$ given in Theorem \ref{Th1} hold. Assume that $(u,v,h,g)$ is a classical solution to the problem \eqref{bl_main} in $[0,T],$ satisfying $(u,h)\in L^\infty\big(0,T; \mathcal H_l^m\big),~ ({\partial}_yu,{\partial}_yh)\in L^2\big(0,T; \mathcal H_l^m\big),$ and for sufficiently small $\delta_0$: \begin{equation}\label{ass_h} h(t,x,y)+H(t,x)\phi'(y)\geq\delta_0,\quad\langle y\rangle^{l+1}{\partial}_y^i(u, h)(t,x,y)\leq \delta_0^{-1},\quad i=1,2,~(t,x,y)\in [0,T]\times\Omega. \end{equation} Then, it holds that for small time, \begin{align}\label{est_priori} \sup_{0\leq s\leq t}\|(u, h)(s)\|_{\mathcal H_l^m}~\leq~&\delta_0^{-4}\Big(\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big)+CM_0^6t\Big)^{\frac{1}{2}}\nonumber\\ &\cdot\Big\{1-C\delta_0^{-24}\Big(\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big)+CM_0^6t\Big)^2t\Big\}^{-\frac{1}{4}}.
\end{align} Also, we have that for $i=1,2,$ \begin{align}\label{upbound_uy} \|\langle y\rangle^{l+1}{\partial}_y^i(u, h)(t)\|_{L^\infty(\Omega)} \leq~&\|\langle y\rangle^{l+1}{\partial}_y^i(u_0, h_0)\|_{L^\infty(\Omega)}+C\delta_0^{-4}t \Big(\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big)+CM_0^6t\Big)^{\frac{1}{2}}\nonumber\\ &\cdot\Big\{1-C\delta_0^{-24}\Big(\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big)+CM_0^6t\Big)^2t\Big\}^{-\frac{1}{4}}, \end{align} and \begin{align}\label{h_lowbound} h(t,x,y)\geq~&h_0(x,y)-C\delta_0^{-4}t \Big(\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big)+CM_0^6t\Big)^{\frac{1}{2}}\nonumber\\ &\cdot\Big\{1-C\delta_0^{-24}\Big(\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big)+CM_0^6t\Big)^2t\Big\}^{-\frac{1}{4}}. \end{align} \end{prop} The proof of Proposition \ref{prop_priori} will be given in the following two subsections. More precisely, we will obtain the weighted estimates for $D^\alpha(u,h)$ for $\alpha=(\beta,k)=(\beta_1,\beta_2,k)$, satisfying $|\alpha|=|\beta|+k\leq m,~|\beta|\leq m-1$, in the first subsection, and the weighted estimates for ${\partial}_\tau^\beta(u,h)$ for $|\beta|=m$ in the second subsection. \subsection{Weighted $H^m_l-$estimates with normal derivatives} \indent\newline The weighted estimates on $D^\alpha(u,h)$ with $|\alpha|=|\beta|+k\leq m,~|\beta|\leq m-1$ can be obtained by the standard energy method, because a loss of one order of tangential regularity is allowed. That is, we have the following estimates: \begin{prop}\label{prop_estm}[\textit{Weighted estimates for $D^\alpha(u,h)$ with $|\alpha|\leq m,|\beta|\leq m-1$}]\\ Let $m\geq5$ be an integer, $l\geq0$ a real number, and suppose the hypotheses for $(U,H,P_x)(t,x)$ given in Theorem \ref{Th1} hold.
Assume that $(u,v,h,g)$ is a classical solution to the problem \eqref{bl_main} in $[0,T],$ and satisfies $(u, h)\in L^\infty\big(0,T; \mathcal H_l^m\big),~ ({\partial}_yu,{\partial}_yh)\in L^2\big(0,T; \mathcal H_l^m\big).$ Then, there exists a positive constant $C$, depending on $m, l$ and $\phi$, such that for any small $0<\delta_1<1,$ \begin{align}\label{est_prop1} &\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\Big(\frac{d}{dt}\|D^\alpha(u,h)(t)\|_{L_{l+k}^2(\Omega)}^2+\mu\|D^\alpha{\partial}_yu(t)\|_{L_{l+k}^2(\Omega)}^2+\kappa\|D^\alpha{\partial}_yh(t)\|_{L_{l+k}^2(\Omega)}^2\Big)\nonumber\\ \leq~&\delta_1C\|({\partial}_yu,{\partial}_yh)(t)\|_{\mathcal H_0^m}^2+C\delta_1^{-1}\|(u, h)(t)\|_{\mathcal H_l^m}^2\big(1+\|(u, h)(t)\|_{\mathcal H_l^m}^2\big)+\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(r_1, r_2)(t)\|_{L^2_{l+k}(\Omega)}^2\nonumber\\ &+C\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta (U,H,P)(t)\|_{L^2(\mathbb T_x)}^2. \end{align} \end{prop} \begin{proof}[\textbf{Proof.}] Applying the operator $D^{\alpha}={\partial}_\tau^{\beta}{\partial}_y^{k}$ for $\alpha=(\beta,k)=(\beta_1,\beta_2,k)$, satisfying $|\alpha|=|\beta|+k\leq m,~|\beta|\leq m-1,$ to the first two equations of $(\ref{bl_main})$, it yields that \begin{align}\label{eq_u} \begin{cases} {\partial}_tD^\alpha u=D^\alpha r_1+\mu {\partial}_y^2D^\alpha u-D^\alpha\Big\{\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]u-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]h\\ \qquad\qquad\qquad+U_x\phi'u+U\phi''v-H_x\phi'h-H\phi''g\Big\},\\ {\partial}_tD^\alpha h=D^\alpha r_2+\kappa {\partial}_y^2D^\alpha h-D^\alpha\Big\{\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]h-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]u\\ \qquad\qquad\qquad+H_x\phi'u+H\phi''v-U_x\phi'h-U\phi''g\}. 
\end{cases} \end{align} Multiplying $(\ref{eq_u})_1$ by $\langle y\rangle^{2l+2k}D^\alpha u$, $(\ref{eq_u})_2$ by $\langle y\rangle^{2l+2k}D^\alpha h$ respectively, and integrating them over $\Omega$, with respect to the spatial variables $x$ and $y$, we obtain that \begin{align}\label{est-m0} \frac{1}{2}\frac{d}{dt}\big\|\langle y\rangle^{l+k}D^\alpha(u,h)(t)\big\|_{L^2(\Omega)}^2 =&\int_{\Omega}\Big(D^\alpha r_1\cdot\langle y\rangle^{2l+2k}D^\alpha u+D^\alpha r_2\cdot\langle y\rangle^{2l+2k}D^\alpha h\Big)dxdy\nonumber\\ &+\mu\int_{\Omega}\big({\partial}_y^2D^\alpha u\cdot\langle y\rangle^{2l+2k}D^\alpha u\big)dxdy+\kappa\int_{\Omega}\big({\partial}_y^2D^\alpha h\cdot\langle y\rangle^{2l+2k}D^\alpha h\big)dxdy\nonumber\\ &-\int_{\Omega}\Big(I_1\cdot\langle y\rangle^{2l+2k}D^\alpha u+I_2\cdot\langle y\rangle^{2l+2k}D^\alpha h\Big)dxdy, \end{align} where \begin{align}\label{def_I} \begin{cases} I_1=&D^\alpha\Big\{\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]u-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]h\\ &\quad+U_x\phi'u+U\phi''v-H_x\phi'h-H\phi''g\Big\},\\ I_2=&D^\alpha\Big\{\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]h-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]u\\ &\quad+H_x\phi'u+H\phi''v-U_x\phi'h-U\phi''g\Big\}. \end{cases} \end{align} First of all, it is easy to get that by virtue of \eqref{est_rhd}, \begin{align}\label{est-remainder} &\int_{\Omega}\Big(D^\alpha r_1\cdot\langle y\rangle^{2l+2k}D^\alpha u+D^\alpha r_2\cdot\langle y\rangle^{2l+2k}D^\alpha h\Big)dxdy\nonumber\\ \leq &\frac{1}{2}\|D^\alpha(u, h)(t)\|^2_{L_{l+k}^2(\Omega)} +\frac{1}{2}\|D^\alpha (r_1, r_2)(t)\|_{L^2_{l+k}(\Omega)}^2. 
\end{align} Next, we assume that the following two estimates hold, which will be proved later: for any small $0<\delta_1<1,$ \begin{align} \label{est-duff} & \mu\int_{\Omega}\big({\partial}_y^2D^\alpha u\cdot\langle y\rangle^{2l+2k}D^\alpha u\big) dxdy+\kappa\int_{\Omega}\big({\partial}_y^2D^\alpha h\cdot\langle y\rangle^{2l+2k}D^\alpha h\big) dxdy\nonumber\\ \leq& -\frac{\mu}{2}\big\|D^\alpha {\partial}_yu(t)\big\|^2_{L^2_{l+k}(\Omega)}-\frac{\kappa}{2}\big\|D^\alpha {\partial}_yh(t)\big\|^2_{L_{l+k}^2(\Omega)}+\delta_1\big\|({\partial}_yu,{\partial}_yh)(t)\big\|_{\mathcal H_0^m}^2 \nonumber\\ &+C\delta_1^{-1}\|(u,h)(t)\|_{\mathcal H_l^m}^2\big(1+\|(u,h)(t)\|_{\mathcal H_l^m}^2\big)+C\sum_{|\beta|\leq m-1}\|{\partial}_\tau^\beta P_x(t)\|_{L^2(\mathbb T_x)}^2, \end{align} and \begin{align}\label{est-convect} & -\int_{\Omega}\Big(I_1\cdot\langle y\rangle^{2l+2k}D^\alpha u+I_2\cdot\langle y\rangle^{2l+2k}D^\alpha h\Big)dxdy\nonumber\\ \leq~&C\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^2(\mathbb T_x)}+\big\|(u,h)(t)\big\|_{\mathcal H_l^m}\big)\big\|(u,h)(t)\big\|_{\mathcal H_l^m}^2.
\end{align} Now, by plugging the above inequalities \eqref{est-remainder}-\eqref{est-convect} into \eqref{est-m0} and summing over $\alpha$, we obtain that there exists a constant $C_m>0$, depending only on $m,$ such that \begin{align}\label{est_both} &\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\Big(\frac{d}{dt}\big\|D^\alpha(u,h)(t)\big\|_{L^2_{l+k}(\Omega)}^2+\mu\big\|D^\alpha{\partial}_yu(t)\big\|_{L^2_{l+k}(\Omega)}^2+\kappa\big\|D^\alpha{\partial}_yh(t)\big\|_{L^2_{l+k}(\Omega)}^2\Big)\nonumber\\ \leq~&\delta_1C_m\big\|({\partial}_yu,{\partial}_yh)(t)\big\|_{\mathcal H_0^m}^2+C\delta_1^{-1}\|(u, h)(t)\|_{\mathcal H_l^m}^2\big(1+\|(u, h)(t)\|_{\mathcal H_l^m}^2\big)+\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(r_1, r_2)(t)\|_{L^2_{l+k}(\Omega)}^2\nonumber\\ &+C\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta (U,H,P)(t)\|_{L^2(\mathbb T_x)}^2, \end{align} which implies the estimate \eqref{est_prop1} immediately. It remains to show the estimates \eqref{est-duff} and \eqref{est-convect}, which will be done in what follows. \indent\newline \textbf{\textit{Proof of \eqref{est-duff}.}} In this part, we first handle the term $\mu\int_{\Omega}\big({\partial}_y^2D^\alpha u\cdot\langle y\rangle^{2l+2k}D^\alpha u\big) dxdy$; the term $\kappa\int_{\Omega}\big({\partial}_y^2D^\alpha h\cdot\langle y\rangle^{2l+2k}D^\alpha h\big) dxdy$ can be estimated similarly. By integration by parts, we have \begin{align} \label{ex_duff} \mu\int_{\Omega}\big({\partial}_y^2D^\alpha u\cdot\langle y\rangle^{2l+2k}D^\alpha u\big)dxdy =&-\mu\big\|\langle y\rangle^{l+k}{\partial}_yD^\alpha u(t)\big\|^2_{L^2(\Omega)}+2(l+k)\mu\int_{\Omega}\big(\langle y\rangle^{2l+2k-1}{\partial}_y D^\alpha u\cdot D^\alpha u \big)dxdy\nonumber\\ &+\mu\int_{\mathbb T_x}({\partial}_y D^\alpha u\cdot D^\alpha u)\big|_{y=0} dx.
\end{align} By the Cauchy-Schwarz inequality, \begin{align}\label{est_duff0} &2(l+k)\mu\int_{\Omega}\big(\langle y\rangle^{2l+2k-1}{\partial}_yD^\alpha u\cdot D^\alpha u\big)dxdy\nonumber\\ \leq ~&\frac{\mu}{14}\big\|\langle y\rangle^{l+k}{\partial}_y D^\alpha u(t)\big\|^2_{L^2(\Omega)}+14\mu(l+k)^2\|\langle y\rangle^{l+k}D^\alpha u(t)\|^2_{L^2(\Omega)}. \end{align} Plugging \eqref{est_duff0} into \eqref{ex_duff} then yields \begin{align}\label{est_duff1} & \mu\int_{\Omega}\big({\partial}_y^2D^\alpha u\cdot\langle y\rangle^{2l+2k}D^\alpha u\big)dxdy\nonumber\\ \leq& -\frac{13\mu}{14}\big\|\langle y\rangle^{l+k}D^\alpha{\partial}_y u(t)\big\|^2_{L^2(\Omega)}+C\|u(t)\|_{\mathcal H_l^m}^2+ \mu\int_{\mathbb T_x}({\partial}_y D^\alpha u\cdot D^\alpha u)\big|_{y=0} dx. \end{align} The last term in \eqref{est_duff1}, that is, the boundary integral $\mu\int_{\mathbb T_x}({\partial}_yD^\alpha u\cdot D^\alpha u)\big|_{y=0}dx$, is treated in the following two cases. {\bf Case 1: $|\alpha|\leq m-1.$} By the inequality \eqref{trace}, we obtain that for any small $0<\delta_1<1,$ \begin{align} \label{est_duff2} \mu\Big|\int_{\mathbb T_x}({\partial}_yD^\alpha u\cdot D^\alpha u)\big|_{y=0}dx\Big| \leq~ & \mu\big\|{\partial}_y^2D^\alpha u(t)\big\|_{L^2(\Omega)}\big\| D^\alpha u(t)\big\|_{L^2(\Omega)}+\mu\big\|{\partial}_y D^\alpha u(t)\big\|_{L^2(\Omega)}^2\nonumber\\ \leq~& \delta_1\big\|{\partial}_y^2D^\alpha u(t)\big\|^2_{L^{2}(\Omega)}+\frac{\mu^2}{4\delta_1}\| D^\alpha u(t)\|^2_{L^{2}(\Omega)}+\mu\big\|{\partial}_y D^\alpha u(t)\big\|_{L^2(\Omega)}^2\nonumber\\ \leq~& \delta_1\|{\partial}_yu(t)\|_{\mathcal H_0^m}^2+C\delta_1^{-1}\|u(t)\|^2_{\mathcal H_0^{m}}.
\end{align} {\bf Case 2: $|\alpha|=|\beta|+k=m$.} Since $|\beta|\leq m-1,$ we have $k\geq1$. Denoting $\gamma\triangleq\alpha-E_3=(\beta,k-1)$, with $|\gamma|=|\beta|+k-1= m-1$, the first equation in \eqref{bl_main} reads \begin{align*} \mu{\partial}_yD^\alpha u=\mu {\partial}^2_yD^\gamma u =&D^\gamma\Big\{{\partial}_t u+\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]u-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]h\nonumber\\ &\qquad+U_x\phi'u+U\phi''v-H_x\phi'h-H\phi''g-r_1\Big\}. \end{align*} Then, combining \eqref{property_r} with the fact that $\phi\equiv0$ for $y\leq R_0$, we obtain that at $y=0,$ \begin{align}\label{bc_um} \mu{\partial}_yD^\alpha u=~&D^\gamma\Big[{\partial}_t u+\big(u\partial_x+v\partial_y\big)u-\big(h\partial_x+g\partial_y\big)h+P_x\Big]\nonumber\\ =~&D^\gamma P_x+D^{\gamma+E_1}u+D^\gamma\big(u{\partial}_xu-h{\partial}_xh\big)+D^\gamma\big(v{\partial}_yu-g{\partial}_yh\big). \end{align} By \eqref{trace0}, it is easy to see that \begin{align}\label{est_bc1} \Big|\int_{\mathbb T_x}\big(D^\gamma P_x\cdot D^\alpha u\big)\big|_{y=0}dx\Big|\leq~&\big\|D^\gamma P_x(t)\big\|_{L^2(\mathbb T_x)}\big\|D^\alpha u(t)|_{y=0}\big\|_{L^2(\mathbb T_x)} \nonumber\\ \leq~&\sqrt{2}\big\|D^\gamma P_x(t)\big\|_{L^2(\mathbb T_x)}\big\|D^\alpha u(t)\big\|_{L^2(\Omega)}^{\frac{1}{2}}\big\|D^\alpha {\partial}_yu(t)\big\|_{L^2(\Omega)}^{\frac{1}{2}}\nonumber\\ \leq~&\frac{\mu}{14}\|D^\alpha{\partial}_yu(t)\|_{L^2(\Omega)}^2+C\|u(t)\|_{\mathcal H_0^m}^2+C\big\|D^\gamma P_x(t)\big\|_{L^2(\mathbb T_x)}^2, \end{align} provided $|\alpha|=m.$ Also, by \eqref{trace} and $|\gamma+E_1|=m,$ \begin{align}\label{est_bc2} \Big|\int_{\mathbb T_x}\big(D^{\gamma+E_1}u\cdot D^\alpha u\big)\big|_{y=0}dx\Big|\leq~&\big\|D^{\gamma+E_1}{\partial}_yu(t)\big\|_{L^2(\Omega)}\big\|D^{\alpha}u(t)\big\|_{L^2(\Omega)}+\big\|D^{\gamma+E_1}u(t)\big\|_{L^2(\Omega)}\big\|D^{\alpha}{\partial}_yu(t)\big\|_{L^2(\Omega)}\nonumber\\
\leq~&\frac{\delta_1}{3}\|D^{\gamma+E_1}{\partial}_yu(t)\|_{L^2(\Omega)}^2+\frac{\mu}{14}\|D^\alpha{\partial}_yu(t)\|_{L^2(\Omega)}^2+C\delta_1^{-1}\|u(t)\|_{\mathcal H_0^m}^2. \end{align} Hence, since $D^\gamma\big(u{\partial}_xu\big)=\sum_{\tilde\gamma\leq\gamma}\left(\begin{array}{ccc}\gamma \\ \tilde\gamma \end{array}\right)\Big(D^{\tilde\gamma}u\cdot D^{\gamma-\tilde\gamma+E_2}u\Big),$ it follows that \begin{align}\label{est_bc3-0} \Big|\int_{\mathbb T_x}\big(D^{\gamma}(u{\partial}_xu)\cdot D^\alpha u\big)\big|_{y=0}dx\Big|\leq~&C\sum_{\tilde\gamma\leq\gamma}\Big\{\big\|{\partial}_y\big(D^{\tilde\gamma}u\cdot D^{\gamma-\tilde\gamma+E_2}u\big)\big\|_{L^2(\Omega)}\big\|D^\alpha u\big\|_{L^2(\Omega)}\nonumber\\ &\qquad\quad+\big\|D^{\tilde\gamma}u\cdot D^{\gamma-\tilde\gamma+E_2}u\big\|_{L^2(\Omega)}\big\|D^\alpha {\partial}_yu\big\|_{L^2(\Omega)}\Big\}. \end{align} Then, by using \eqref{Morse} and noting that $|\gamma|=m-1\geq3$, we have \begin{align*} \big\|{\partial}_y\big(D^{\tilde\gamma}u\cdot D^{\gamma-\tilde\gamma+E_2}u\big)\big\|_{L^2(\Omega)}\leq~&\big\| D^{\tilde\gamma}{\partial}_yu\cdot D^{\gamma-\tilde\gamma+E_2}u\big\|_{L^2(\Omega)}+\big\|D^{\tilde\gamma}u\cdot D^{\gamma-\tilde\gamma+E_2}{\partial}_yu\big\|_{L^2(\Omega)}\\ \leq~&C\|{\partial}_yu(t)\|_{\mathcal H_0^{m-1}}\|{\partial}_xu(t)\|_{\mathcal H_0^{m-1}}+C\|u(t)\|_{\mathcal H_0^{m-1}}\|{\partial}_{xy}^2u(t)\|_{\mathcal H_0^{m-1}}\\ \leq~&C\|u(t)\|_{\mathcal H_0^m}\|{\partial}_yu(t)\|_{\mathcal H_0^m}+C\|u(t)\|_{\mathcal H_0^m}^2, \end{align*} and \begin{align*} \big\|D^{\tilde\gamma}u\cdot D^{\gamma-\tilde\gamma+E_2}u\big\|_{L^2(\Omega)} \leq~&C\|u(t)\|_{\mathcal H_0^{m}}\|u(t)\|_{\mathcal H_0^{m}} \leq~C\|u(t)\|_{\mathcal H_0^m}^2.
\end{align*} Substituting the above two inequalities into \eqref{est_bc3-0} gives \begin{align}\label{est_bc3} &\Big|\int_{\mathbb T_x}\big(D^{\gamma}(u{\partial}_xu)\cdot D^\alpha u\big)\big|_{y=0}dx\Big|\nonumber\\ \leq~&C\sum_{\tilde\gamma\leq\gamma}\Big(\big(\|u(t)\|_{\mathcal H_0^m}\|{\partial}_yu(t)\|_{\mathcal H_0^m}+\|u(t)\|_{\mathcal H_0^m}^2\big)\big\|D^\alpha u\big\|_{L^2(\Omega)}+\|u(t)\|_{\mathcal H_0^m}^2\big\|{\partial}_yD^\alpha u\big\|_{L^2(\Omega)}\Big)\nonumber\\ \leq~&\frac{\delta_1}{3}\|{\partial}_yu(t)\|_{\mathcal H_0^m}^2+\frac{\mu}{14}\|D^\alpha{\partial}_yu(t)\|_{L^2(\Omega)}^2+C\delta_1^{-1}\|u(t)\|_{\mathcal H_0^m}^4+C\|u(t)\|_{\mathcal H_0^m}^2. \end{align} Similarly, we have \begin{align}\label{est_bc4} &\Big|\int_{\mathbb T_x}\big(D^{\gamma}(h{\partial}_x h)\cdot D^\alpha u\big)\big|_{y=0}dx\Big|\nonumber\\ \leq~&C\big(\|h(t)\|_{\mathcal H_0^m}\|{\partial}_yh(t)\|_{\mathcal H_0^m}+C\|h(t)\|_{\mathcal H_0^m}^2\big)\big\|D^\alpha u\big\|_{L^2(\Omega)}+C\|h(t)\|_{\mathcal H_0^m}^2\big\|{\partial}_yD^\alpha u\big\|_{L^2(\Omega)}\nonumber\\ \leq~&\frac{\delta_1}{3}\|({\partial}_yu,{\partial}_yh)(t)\|_{\mathcal H_0^m}^2+\frac{\mu}{14}\|D^\alpha{\partial}_yu(t)\|_{L^2(\Omega)}^2+C\delta_1^{-1}\|(u,h)(t)\|_{\mathcal H_0^m}^4+C\|(u,h)(t)\|_{\mathcal H_0^m}^2. \end{align} We now turn to control the integral $\Big|\int_{\mathbb T_x}\big(D^{\gamma}(v{\partial}_yu)\cdot D^\alpha u\big)\big|_{y=0}dx\Big|$. 
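Before doing so, let us record the elementary Young-type inequality behind the absorption arguments in \eqref{est_duff0}, \eqref{est_bc1} and below; this is a standard fact, recalled here only for the reader's convenience: for any $\epsilon>0$ and $a,b\geq0$,
\begin{align*}
2ab~\leq~\epsilon a^2+\epsilon^{-1}b^2.
\end{align*}
For instance, \eqref{est_duff0} follows by choosing $a=\langle y\rangle^{l+k}|{\partial}_yD^\alpha u|$, $b=\langle y\rangle^{l+k-1}|D^\alpha u|\leq\langle y\rangle^{l+k}|D^\alpha u|$ and $\epsilon=\frac{1}{14(l+k)}$, so that the ${\partial}_y$-term is absorbed at the price of the large constant $14\mu(l+k)^2$ in front of the lower-order term.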
Recalling that $D^\gamma={\partial}_\tau^\beta{\partial}_y^{k-1}$, and using the boundary condition $v|_{y=0}=0$ and the divergence-free condition $u_x+v_y=0,$ we obtain that on $\{y=0\},$ \[\begin{split} D^\gamma(v{\partial}_yu)=~&{\partial}_\tau^\beta\Big(v{\partial}_y^ku+\sum_{i=1}^{k-1}\left(\begin{array}{ccc} k-1 \\ i \end{array}\right){\partial}_y^iv\cdot{\partial}_y^{k-i}u\Big)=~\sum_{j=0}^{k-2}\left(\begin{array}{ccc} k-1 \\ j+1 \end{array}\right){\partial}_\tau^\beta\Big[-{\partial}_y^j{\partial}_xu\cdot{\partial}_y^{k-j-1}u\Big]\\ =~& -\sum_{\tiny\substack{\tilde\beta\leq\beta \\ 0\leq j\leq k-2}}\left(\begin{array}{ccc} k-1 \\ j+1 \end{array}\right)\left(\begin{array}{ccc}\beta \\ \tilde \beta \end{array}\right)\Big({\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^j u\cdot{\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-j-1}u\Big), \end{split}\] where we denote $\left(\begin{array}{ccc}j \\ i \end{array}\right)=0$ for $i>j$. Note that the right-hand side of the above equality vanishes when $k=1$, so we only need to consider the case $k\geq2$. Thus, from the above expression for $D^\gamma(v{\partial}_yu)$ at $y=0,$ we obtain by \eqref{trace} that \begin{align}\label{est_bc5-0} \Big|\int_{\mathbb T_x}\big(D^{\gamma}(v{\partial}_yu)\cdot D^\alpha u\big)\big|_{y=0}dx\Big|\leq~C\sum_{\tiny\substack{\tilde\beta\leq\beta \\ 0\leq j\leq k-2}}\Big\{&\big\|{\partial}_y\big({\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^ju\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-j-1}u\big)\big\|_{L^2(\Omega)}\big\|D^\alpha u\big\|_{L^2(\Omega)}\nonumber\\ &+\big\|{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^ju\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-j-1}u\big\|_{L^2(\Omega)}\big\|D^\alpha {\partial}_yu\big\|_{L^2(\Omega)}\Big\}.
\end{align} Since $0\leq j\leq k-2$, it follows from \eqref{Morse} that \begin{align*} \big\|{\partial}_y\big({\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^ju\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-j-1}u\big)\big\|_{L^2(\Omega)}\leq~&\big\|{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^{j+1}u\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-j-1}u\big\|_{L^2(\Omega)}+\big\|{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^{j}u\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-j}u\big\|_{L^2(\Omega)}\\ \leq~&C\|{\partial}_yu(t)\|_{\mathcal H_0^{m-1}}\|{\partial}_yu(t)\|_{\mathcal H_0^{m-1}}+C\|{\partial}_xu(t)\|_{\mathcal H_0^{m-1}}\|{\partial}_{y}u(t)\|_{\mathcal H_0^{m-1}}\\ \leq~&C\|u(t)\|_{\mathcal H_0^m}^2, \end{align*} and \begin{align*} \big\|{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^ju\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-j-1}u\big\|_{L^2(\Omega)} \leq~&C\|u(t)\|_{\mathcal H_0^{m}}\|u(t)\|_{\mathcal H_0^{m}} \leq~C\|u(t)\|_{\mathcal H_0^m}^2, \end{align*} provided that $|\beta|+k=|\alpha|=m.$ Substituting the above two inequalities into \eqref{est_bc5-0} gives \begin{align}\label{est_bc5} \Big|\int_{\mathbb T_x}\big(D^{\gamma}(v{\partial}_yu)\cdot D^\alpha u\big)\big|_{y=0}dx\Big|\leq&C\sum_{\tiny\substack{\tilde\beta\leq\beta \\ 0\leq j\leq k-2}}\Big\{\|u(t)\|_{\mathcal H_0^m}^2\big\|D^\alpha u\big\|_{L^2(\Omega)}+\|u(t)\|_{\mathcal H_0^m}^2\big\|{\partial}_yD^\alpha u\big\|_{L^2(\Omega)}\Big\}\nonumber\\ \leq~& \frac{\mu}{14}\|D^\alpha{\partial}_yu(t)\|_{L^2(\Omega)}^2+C\|u(t)\|_{\mathcal H_0^m}^4+C\|u(t)\|_{\mathcal H_0^m}^2.
\end{align} Similarly, we can obtain \begin{align}\label{est_bc6} \Big|\int_{\mathbb T_x}\big(D^{\gamma}(g{\partial}_yh)\cdot D^\alpha u\big)\big|_{y=0}dx\Big|\leq&C\|h(t)\|_{\mathcal H_0^m}^2\big\|D^\alpha u\big\|_{L^2(\Omega)}+C\|h(t)\|_{\mathcal H_0^m}^2\big\|{\partial}_yD^\alpha u\big\|_{L^2(\Omega)}\nonumber\\ \leq~& \frac{\mu}{14}\|D^\alpha{\partial}_yu(t)\|_{L^2(\Omega)}^2+C\|(u, h)(t)\|_{\mathcal H_0^m}^4+C\|(u, h)(t)\|_{\mathcal H_0^m}^2. \end{align} Therefore, by \eqref{bc_um} and the estimates \eqref{est_bc1}, \eqref{est_bc2}, \eqref{est_bc3}, \eqref{est_bc4}, \eqref{est_bc5} and \eqref{est_bc6}, we have that when $|\alpha|=|\beta|+k=m$ with $|\beta|\leq m-1$, \begin{align}\label{est_duff3} \Big|\int_{\mathbb T_x}(\mu{\partial}_yD^\alpha u\cdot D^\alpha u)\big|_{y=0}dx\Big| \leq~&\delta_1\big\|({\partial}_yu,{\partial}_yh)(t)\big\|^2_{\mathcal H_0^{m}}+\frac{3\mu}{7}\|D^\alpha{\partial}_yu(t)\|_{L^2(\Omega)}^2+C\delta_1^{-1}\|(u, h)(t)\|_{\mathcal H_0^{m}}^4\nonumber\\ &+C\delta_1^{-1}\|(u, h)(t)\|^2_{\mathcal H_0^{m}}+C\|D^\gamma P_x(t)\|_{L^2(\mathbb T_x)}^2. \end{align} Combining \eqref{est_duff2} with \eqref{est_duff3}, we deduce that for $|\alpha|=|\beta|+k\leq m, |\beta|\leq m-1$, \begin{align}\label{est_duff4} &\Big|\int_{\mathbb T_x}(\mu{\partial}_yD^\alpha u\cdot D^\alpha u)\big|_{y=0}dx\Big|\nonumber\\ \leq~&\delta_1\big\|({\partial}_yu,{\partial}_yh)(t)\big\|^2_{\mathcal H_0^{m}}+\frac{3\mu}{7}\|D^\alpha{\partial}_yu(t)\|_{L^2(\Omega)}^2+C\delta_1^{-1}\|(u, h)(t)\|_{\mathcal H_0^{m}}^2\big(1+\|(u, h)(t)\|_{\mathcal H_0^{m}}^2\big) \nonumber\\ &+C\sum_{|\beta|\leq m-1}\|{\partial}_\tau^\beta P_x(t)\|_{L^2(\mathbb T_x)}^2.
\end{align} Then, plugging the above estimate \eqref{est_duff4} into \eqref{est_duff1}, we have \begin{align}\label{est_duff5} &\mu\int_{\Omega}\big({\partial}_y^2D^\alpha u\cdot\langle y\rangle^{2l+2k}D^\alpha u\big)dxdy\nonumber\\ \leq&-\frac{\mu}{2}\big\|D^\alpha {\partial}_yu(t)\big\|^2_{L_{l+k}^2(\Omega)}+\delta_1\big\|({\partial}_yu,{\partial}_yh)(t)\big\|^2_{\mathcal H_0^{m}}+C\delta_1^{-1}\|(u, h)(t)\|_{\mathcal H_l^{m}}^2\big(1+\|(u, h)(t)\|_{\mathcal H_l^{m}}^2\big)\nonumber\\ &+C\sum_{|\beta|\leq m-1}\|{\partial}_\tau^\beta P_x(t)\|_{L^2(\mathbb T_x)}^2. \end{align} On the other hand, one can obtain a similar estimate for the term $\kappa\int_{\Omega}\big({\partial}_y^2D^\alpha h\cdot\langle y\rangle^{2l+2k}D^\alpha h\big) dxdy$: \begin{align}\label{est_duff6} \kappa\int_{\Omega}\big({\partial}_y^2D^\alpha h\cdot\langle y\rangle^{2l+2k}D^\alpha h\big)dxdy \leq&-\frac{\kappa}{2}\big\|D^\alpha {\partial}_yh(t)\big\|^2_{L_{l+k}^2(\Omega)}+\delta_1\big\|({\partial}_yu,{\partial}_yh)(t)\big\|^2_{\mathcal H_0^{m}}\nonumber\\ &+C\delta_1^{-1}\|(u, h)(t)\|_{\mathcal H_l^{m}}^2\big(1+\|(u, h)(t)\|_{\mathcal H_l^{m}}^2\big). \end{align} Thus, we prove \eqref{est-duff} by combining \eqref{est_duff5} with \eqref{est_duff6}. \indent\newline \textit{\textbf{Proof of \eqref{est-convect}.}} From the definition \eqref{def_I} of $I_1$ and $I_2$, we have \begin{align*} I_1~=~&\big[(u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big] D^\alpha u-\big[(h+H\phi'){\partial}_x+(g-H_x\phi){\partial}_y\big]D^\alpha h\\ &+\big[D^\alpha, (u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big]u-\big[D^\alpha, (h+H\phi'){\partial}_x+(g-H_x\phi){\partial}_y\big]h\\ &+D^\alpha\big[U_x\phi'u+U\phi''v-H_x\phi'h-H\phi''g\big]\\ \triangleq~&I_1^1+I_1^2+I_1^3, \end{align*} and \begin{align*} I_2~=~&\big[(u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big] D^\alpha h-\big[(h+H\phi'){\partial}_x+(g-H_x\phi){\partial}_y\big]D^\alpha u\\ &+\big[D^\alpha, (u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big]h-\big[D^\alpha, (h+H\phi'){\partial}_x+(g-H_x\phi){\partial}_y\big]u\\ &+D^\alpha\big[H_x\phi'u+H\phi''v-U_x\phi'h-U\phi''g\big]\\ \triangleq~&I_2^1+I_2^2+I_2^3.
\end{align*} Thus, we divide the term $-\int_{\Omega}\Big(I_1\cdot\langle y\rangle^{2l+2k}D^\alpha u+I_2\cdot\langle y\rangle^{2l+2k}D^\alpha h\Big)dxdy$ into three parts: \begin{align}\label{divide} &-\int_{\Omega}\Big(I_1\cdot\langle y\rangle^{2l+2k}D^\alpha u+I_2\cdot\langle y\rangle^{2l+2k}D^\alpha h\Big)dxdy\nonumber\\ =~&-\sum_{i=1}^3\int_{\Omega}\Big(I_1^i\cdot\langle y\rangle^{2l+2k}D^\alpha u+I_2^i\cdot\langle y\rangle^{2l+2k}D^\alpha h\Big)dxdy\nonumber\\ :=~&G_1+G_2+G_3, \end{align} and estimate each $G_i$, $i=1,2,3$, in the following. Firstly, note that \begin{align*} \phi(y)~\equiv~y,\quad \phi'(y)~\equiv~1,\quad \phi^{(i)}(y)~\equiv~0,\qquad\mbox{for}~y\geq2R_0,~i\geq2,\end{align*} and hence there exists a positive constant $C$ such that \begin{align}\label{phi_y} \|\langle y\rangle^{i-1}\phi^{(i)}(y)\|_{L^\infty(\mathbb R_+)},~\|\langle y\rangle^{\lambda}\phi^{(j)}(y)\|_{L^\infty(\mathbb R_+)}~\leq~C,\quad\mbox{for}\quad i=0,1,~j\geq2,~\lambda\in\mathbb R. \end{align} \indent\newline \textbf{\textit{Estimate for $G_1$:}} Noting that \[{\partial}_x(u+U\phi')+{\partial}_y(v-U_x\phi)=0,\quad {\partial}_x(h+H\phi')+{\partial}_y(g-H_x\phi)=0,\] and using the boundary conditions $(v-U_x\phi)|_{y=0}=(g-H_x\phi)|_{y=0}=0,$ we obtain by integration by parts that \begin{align*} G_1~=~&-\frac{1}{2}\int_{\Omega}\Big\{\langle y\rangle^{2l+2k}\big[(u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big]\big(|D^\alpha u|^2+|D^\alpha h|^2\big)\Big\}dxdy\nonumber\\ &+\int_{\Omega}\Big\{\langle y\rangle^{2l+2k}\big[(h+H\phi'){\partial}_x+(g-H_x\phi){\partial}_y\big]\big(D^\alpha u\cdot D^\alpha h\big)\Big\}dxdy\nonumber\\ =~&(l+k)\int_{\Omega}\Big\{\langle y\rangle^{2l+2k-1}(v-U_x\phi)\cdot\big(|D^\alpha u|^2+|D^\alpha h|^2\big)\Big\}dxdy\nonumber\\ &-2(l+k)\int_{\Omega}\Big\{\langle y\rangle^{2l+2k-1}(g-H_x\phi)\cdot\big(D^\alpha u\cdot D^\alpha h\big)\Big\}dxdy.
\end{align*} Then, using $v=-{\partial}_y^{-1}{\partial}_xu$, $g=-{\partial}_y^{-1}{\partial}_xh$ and \eqref{phi_y} for $i=0$, together with \eqref{normal} and the Sobolev embedding inequality, we get \begin{align}\label{est_G1} G_1~\leq~ &(l+k)\Big(\Big\|\frac{v-U_x\phi}{1+y}\Big\|_{L^\infty(\Omega)}+\Big\|\frac{g-H_x\phi}{1+y}\Big\|_{L^\infty(\Omega)}\Big)\cdot\big\|\langle y\rangle^{l+k}D^\alpha(u,h)(t)\big\|_{L^2(\Omega)}^2\nonumber\\ \leq~&C\big(\|u_x(t)\|_{L^\infty(\Omega)}+\|h_x(t)\|_{L^\infty(\Omega)}+\|(U_x,H_x)(t)\|_{L^\infty(\mathbb T_x)} \big)\cdot\big\|\langle y\rangle^{l+k}D^\alpha(u,h)(t)\big\|_{L^2(\Omega)}^2\nonumber\\ \leq~&C\big(\|(u, h)(t)\|_{\mathcal H_0^3}+\|(U_x,H_x)(t)\|_{L^\infty(\mathbb T_x)}\big)\|(u, h)(t)\|_{\mathcal H_l^m}^2. \end{align} \indent\newline \textbf{\textit{Estimate for $G_2$:}} Note that \begin{align}\label{G2} G_2~\leq~\|I_1^2(t)\|_{L^2_{l+k}(\Omega)}\|D^\alpha u(t)\|_{L^2_{l+k}(\Omega)}+\|I_2^2(t)\|_{L^2_{l+k}(\Omega)}\|D^\alpha h(t)\|_{L^2_{l+k}(\Omega)}. \end{align} Thus, we need to bound $\|I_1^2(t)\|_{L^2_{l+k}(\Omega)}$ and $\|I_2^2(t)\|_{L^2_{l+k}(\Omega)}$. To this end, we only estimate the $L^2_{l+k}$-norm of $I_1^2$, since the $L^2_{l+k}$-estimate of $I_2^2$ can be obtained similarly. Rewrite the quantity $I_1^2$ as: \begin{align}\label{I12} I_1^2~=~&\big[D^\alpha, u{\partial}_x+v{\partial}_y\big]u-\big[D^\alpha, h{\partial}_x+g{\partial}_y\big]h\nonumber\\ &+\big[D^\alpha, U\phi'{\partial}_x-U_x\phi{\partial}_y\big]u-\big[D^\alpha, H\phi'{\partial}_x-H_x\phi{\partial}_y\big]h\nonumber\\ :=~&I_{1,1}^2+I_{1,2}^2. \end{align} In the following, we estimate $\|I_{1,1}^2\|_{L^2_{l+k}(\Omega)}$ and $\|I_{1,2}^2\|_{L^2_{l+k}(\Omega)}$, respectively.
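Before carrying out these estimates, we recall the meaning of the notation ${\partial}_y^{-1}$ used repeatedly below; this is only a restatement of the divergence-free conditions together with the boundary conditions $v|_{y=0}=g|_{y=0}=0$:
\begin{align*}
v(t,x,y)~=~-{\partial}_y^{-1}{\partial}_xu~\triangleq~-\int_0^y{\partial}_xu(t,x,\tilde y)\,d\tilde y,\qquad g(t,x,y)~=~-{\partial}_y^{-1}{\partial}_xh~\triangleq~-\int_0^y{\partial}_xh(t,x,\tilde y)\,d\tilde y,
\end{align*}
so that in particular ${\partial}_y^iv=-{\partial}_y^{i-1}{\partial}_xu$ and ${\partial}_y^ig=-{\partial}_y^{i-1}{\partial}_xh$ for all $i\geq1$.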
\indent\newline \underline{\textit{$L^2_{l+k}-$estimate on $I_{1,1}^2$:}} The quantity $I_{1,1}^2$ can be expressed as: \begin{align}\label{I112} I_{1,1}^2~=~&\sum_{0<\tilde\alpha\leq\alpha}\left(\begin{array}{ccc} \alpha \\ \tilde\alpha \end{array}\right)\Big\{\Big(D^{\tilde\alpha}u~{\partial}_x+D^{\tilde\alpha}v~{\partial}_y\Big)(D^{\alpha-\tilde\alpha}u)-\Big(D^{\tilde\alpha}h~{\partial}_x+D^{\tilde\alpha}g~{\partial}_y\Big)(D^{\alpha-\tilde\alpha}h)\Big\}. \end{align} Let $\tilde\alpha\triangleq(\tilde\beta,\tilde k)$, then we will study the terms in \eqref{I112} through the following two cases corresponding to $\tilde k=0$ and $\tilde k\geq1$ respectively. \indent\newline \textit{Case 1: $\tilde k=0.$} Firstly, $D^{\tilde\alpha}={\partial}_\tau^{\tilde\beta}$ and $\tilde\beta\geq e_i, i=1$ or 2 since $|\tilde\alpha|>0$. Then, we obtain that by \eqref{Morse}, \begin{align*} \big\|D^{\tilde\alpha}u\cdot{\partial}_x D^{\alpha-\tilde\alpha}u\big\|_{L^2_{l+k}(\Omega)}=&\big\|{\partial}_\tau^{\tilde\beta-e_i}({\partial}_\tau^{e_i}u)\cdot D^{\alpha-\tilde\alpha}({\partial}_xu) \big\|_{L^2_{l+k}(\Omega)}\\ \leq~&C\|{\partial}_\tau^{e_i}u(t)\|_{\mathcal H_0^{m-1}}\|{\partial}_xu(t)\|_{\mathcal H_l^{m-1}}~\leq~C\|u(t)\|_{\mathcal H_l^{m}}^2, \end{align*} provided that $m-1\geq3$. Similarly, it also holds \begin{align*} \big\|D^{\tilde\alpha}h\cdot{\partial}_x D^{\alpha-\tilde\alpha}h\big\|_{L^2_{l+k}(\Omega)}\leq~C\|h(t)\|_{\mathcal H_l^{m}}^2. \end{align*} On the other hand, by using $v=-{\partial}_y^{-1}{\partial}_xu,$ we have \begin{align*} D^{\tilde\alpha}v\cdot{\partial}_y D^{\alpha-\tilde\alpha}u~=~&-{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}({\partial}_xu)\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k+1}u. 
\end{align*} Then, when $|\alpha|=|\beta|+k\leq m-1$, applying \eqref{normal1} to the right-hand side of the above equality yields \begin{align*} \big\|D^{\tilde\alpha}v\cdot{\partial}_y D^{\alpha-\tilde\alpha}u\big\|_{L^2_{l+k}(\Omega)}=~&\big\|{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}({\partial}_xu)\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^k({\partial}_yu) \big\|_{L^2_{l+k}(\Omega)}\\ \leq~&C\|{\partial}_xu(t)\|_{\mathcal H_0^{m-1}}\|{\partial}_yu(t)\|_{\mathcal H_{l+1}^{m-1}}~\leq~C\|u(t)\|_{\mathcal H_l^{m}}^2, \end{align*} provided that $m-1\geq3$. When $|\alpha|=|\beta|+k=m$, it implies that $k\geq1$ since $|\beta|\leq m-1,$ and consequently, we get that by \eqref{normal1}, \begin{align*} \big\|D^{\tilde\alpha}v\cdot{\partial}_y D^{\alpha-\tilde\alpha}u\big\|_{L^2_{l+k}(\Omega)}=~&\big\|{\partial}_\tau^{\tilde\beta-e_i}{\partial}_y^{-1}({\partial}_\tau^{e_i+e_2}u)\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-1}({\partial}_y^2u) \big\|_{L^2_{l+1+(k-1)}(\Omega)}\\ \leq~&C\|{\partial}_\tau^{e_i+e_2}u(t)\|_{\mathcal H_0^{m-2}}\|{\partial}_y^2u(t)\|_{\mathcal H_{l+2}^{m-2}}~\leq~C\|u(t)\|_{\mathcal H_l^{m}}^2, \end{align*} provided that $m-2\geq3$. Therefore, it holds that for $|\alpha|=|\beta|+k\leq m, |\beta|\leq m-1$, \begin{align*} \big\|D^{\tilde\alpha}v\cdot{\partial}_y D^{\alpha-\tilde\alpha}u\big\|_{L^2_{l+k}(\Omega)}\leq~C\|u(t)\|_{\mathcal H_l^{m}}^2. \end{align*} Similarly, one can obtain \begin{align*} \big\|D^{\tilde\alpha}g\cdot{\partial}_y D^{\alpha-\tilde\alpha}h\big\|_{L^2_{l+k}(\Omega)}\leq~C\|h(t)\|_{\mathcal H_l^{m}}^2. \end{align*} Thus, we conclude that for $\tilde k=0$ with $\tilde\alpha=(\tilde\beta, \tilde k)$, \begin{align}\label{est_I112-1} \big\|\big(D^{\tilde\alpha}u~{\partial}_x+D^{\tilde\alpha}v~{\partial}_y\big)(D^{\alpha-\tilde\alpha}u)-\big(D^{\tilde\alpha}h~{\partial}_x+D^{\tilde\alpha}g~{\partial}_y\big)(D^{\alpha-\tilde\alpha}h)\big\|_{L^2_{l+k}(\Omega)}\leq~&C\|(u,h)(t)\|_{\mathcal H_l^{m}}^2. 
\end{align} \indent\newline \textit{Case 2: $\tilde k\geq1.$} Then $\tilde\alpha\geq E_3,$ and the right-hand side of \eqref{I112} becomes: \begin{align*} &\big(D^{\tilde\alpha}u~{\partial}_x+D^{\tilde\alpha}v~{\partial}_y\big)(D^{\alpha-\tilde\alpha}u)-\big(D^{\tilde\alpha}h~{\partial}_x+D^{\tilde\alpha}g~{\partial}_y\big)(D^{\alpha-\tilde\alpha}h)\\ =~&\big(D^{\tilde\alpha}u~{\partial}_x-D^{\tilde\alpha-E_3}({\partial}_xu)~{\partial}_y\big)(D^{\alpha-\tilde\alpha}u)-\big(D^{\tilde\alpha}h~{\partial}_x-D^{\tilde\alpha-E_3}({\partial}_xh)~{\partial}_y\big)(D^{\alpha-\tilde\alpha}h). \end{align*} By applying \eqref{Morse} to the terms on the right-hand side of the above equality, we get \begin{align*} \big\|D^{\tilde\alpha}u\cdot{\partial}_x D^{\alpha-\tilde\alpha}u\big\|_{L^2_{l+k}(\Omega)}=~&\big\|D^{\tilde\alpha-E_3}({\partial}_yu)\cdot D^{\alpha-\tilde\alpha}({\partial}_xu) \big\|_{L^2_{l+1+(k-1)}(\Omega)}\\ \leq~&C\|{\partial}_yu(t)\|_{\mathcal H_{l+1}^{m-1}}\|{\partial}_xu(t)\|_{\mathcal H_0^{m-1}}~\leq~C\|u(t)\|_{\mathcal H_l^{m}}^2,\\ \big\|D^{\tilde\alpha-E_3}({\partial}_xu)\cdot{\partial}_y D^{\alpha-\tilde\alpha}u\big\|_{L^2_{l+k}(\Omega)}=~&\big\|D^{\tilde\alpha-E_3}({\partial}_xu)\cdot D^{\alpha-\tilde\alpha}({\partial}_yu) \big\|_{L^2_{l+1+(k-1)}(\Omega)}\\ \leq~&C\|{\partial}_xu(t)\|_{\mathcal H_{0}^{m-1}}\|{\partial}_yu(t)\|_{\mathcal H_{l+1}^{m-1}}~\leq~C\|u(t)\|_{\mathcal H_l^{m}}^2,\\ \big\|D^{\tilde\alpha}h\cdot{\partial}_x D^{\alpha-\tilde\alpha}h\big\|_{L^2_{l+k}(\Omega)}=~&\big\|D^{\tilde\alpha-E_3}({\partial}_yh)\cdot D^{\alpha-\tilde\alpha}({\partial}_xh) \big\|_{L^2_{l+1+(k-1)}(\Omega)}\\ \leq~&C\|{\partial}_yh(t)\|_{\mathcal H_{l+1}^{m-1}}\|{\partial}_xh(t)\|_{\mathcal H_0^{m-1}}~\leq~C\|h(t)\|_{\mathcal H_l^{m}}^2,\\ \big\|D^{\tilde\alpha-E_3}({\partial}_xh)\cdot{\partial}_y D^{\alpha-\tilde\alpha}h\big\|_{L^2_{l+k}(\Omega)}=~&\big\|D^{\tilde\alpha-E_3}({\partial}_xh)\cdot D^{\alpha-\tilde\alpha}({\partial}_yh)
\big\|_{L^2_{l+1+(k-1)}(\Omega)}\\ \leq~&C\|{\partial}_xh(t)\|_{\mathcal H_{0}^{m-1}}\|{\partial}_yh(t)\|_{\mathcal H_{l+1}^{m-1}}~\leq~C\|h(t)\|_{\mathcal H_l^{m}}^2. \end{align*} Consequently, we conclude that for $\tilde k\geq1$ with $\tilde\alpha=(\tilde\beta, \tilde k)$, \begin{align}\label{est_I112-2} \big\|\big(D^{\tilde\alpha}u~{\partial}_x+D^{\tilde\alpha}v~{\partial}_y\big)(D^{\alpha-\tilde\alpha}u)-\big(D^{\tilde\alpha}h~{\partial}_x+D^{\tilde\alpha}g~{\partial}_y\big)(D^{\alpha-\tilde\alpha}h)\big\|_{L^2_{l+k}(\Omega)}\leq~&C\|(u,h)(t)\|_{\mathcal H_l^{m}}^2. \end{align} Finally, based on the results obtained in the above two cases, it holds that by using \eqref{est_I112-1} and \eqref{est_I112-2} in \eqref{I112}, \begin{align}\label{est_I112} \|I_{1,1}^2(t)\|_{L^2_{l+k}(\Omega)}\leq ~C\|(u,h)(t)\|_{\mathcal H_l^m}^2. \end{align} \indent\newline \underline{\textit{$L^2_{l+k}-$estimate on $I_{1,2}^2$:}} Write \begin{align*} I_{1,2}^2~=~\sum_{0<\tilde\alpha\leq\alpha}\left(\begin{array}{ccc} \alpha \\ \tilde\alpha \end{array}\right)\Big\{&\Big(D^{\tilde\alpha}(U\phi')~{\partial}_x-D^{\tilde\alpha}(U_x\phi)~{\partial}_y\Big)(D^{\alpha-\tilde\alpha}u)-\Big(D^{\tilde\alpha}(H\phi')~{\partial}_x-D^{\tilde\alpha}(H_x\phi)~{\partial}_y\Big)(D^{\alpha-\tilde\alpha}h)\Big\}.
\end{align*} Let $\tilde\alpha\triangleq(\tilde\beta,\tilde k)$ and note that $|\alpha-\tilde\alpha|\leq |\alpha|-1\leq m-1.$ By using \eqref{phi_y}, we estimate each term on the right-hand side of the above equality as follows: \begin{align*} \big\|D^{\tilde\alpha}(U\phi')\cdot{\partial}_x D^{\alpha-\tilde\alpha}u\big\|_{L^2_{l+k}(\Omega)}\leq~&\big\|\langle y\rangle^{\tilde k}D^{\tilde\alpha}(U\phi')(t)\big\|_{L^\infty(\Omega)}\big\|\langle y\rangle^{l+k-\tilde k}{\partial}_x D^{\alpha-\tilde\alpha}u(t)\big\|_{L^2(\Omega)}\\\leq~&C\big\|{\partial}_\tau^{\tilde\beta}U(t)\big\|_{L^\infty(\mathbb T_x)}\|u(t)\|_{\mathcal H_l^m},\\ \big\|D^{\tilde\alpha}(U_x\phi)\cdot{\partial}_y D^{\alpha-\tilde\alpha}u\big\|_{L^2_{l+k}(\Omega)}\leq~&\big\|\langle y\rangle^{\tilde k-1}D^{\tilde\alpha}(U_x\phi)(t)\big\|_{L^\infty(\Omega)}\big\|\langle y\rangle^{l+k-\tilde k+1}{\partial}_y D^{\alpha-\tilde\alpha}u(t)\big\|_{L^2(\Omega)}\\ \leq~&C\big\|{\partial}_\tau^{\tilde\beta}U_x(t)\big\|_{L^\infty(\mathbb T_x)}\|u(t)\|_{\mathcal H_l^m}, \end{align*} and similarly, \begin{align*} \big\|D^{\tilde\alpha}(H\phi')\cdot{\partial}_x D^{\alpha-\tilde\alpha}h\big\|_{L^2_{l+k}(\Omega)} \leq~C\big\|{\partial}_\tau^{\tilde\beta}H(t)\big\|_{L^\infty(\mathbb T_x)}\|h(t)\|_{\mathcal H_l^m},\\ \big\|D^{\tilde\alpha}(H_x\phi)\cdot{\partial}_y D^{\alpha-\tilde\alpha}h\big\|_{L^2_{l+k}(\Omega)} \leq~C\big\|{\partial}_\tau^{\tilde\beta}H_x(t)\big\|_{L^\infty(\mathbb T_x)}\|h(t)\|_{\mathcal H_l^m}. \end{align*} Therefore, it follows that \begin{align}\label{est_I122} \|I_{1,2}^2(t)\|_{L^2_{l+k}(\Omega)}~\leq~C\|(u,h)(t)\|_{\mathcal H_l^m}\cdot\big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}\big). \end{align} Now, we can obtain the estimate of $\|I_1^2\|_{L_{l+k}^2(\Omega)}$.
Indeed, plugging \eqref{est_I112} and \eqref{est_I122} into \eqref{I12} yields \begin{align}\label{est_I12} \|I_{1}^2(t)\|_{L^2_{l+k}(\Omega)}~\leq~C\big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}+\|(u, h)(t)\|_{\mathcal H_l^m}\big)~\|(u,h)(t)\|_{\mathcal H_l^m}. \end{align} Similarly, one can also get \begin{align}\label{est_I22} \|I_{2}^2(t)\|_{L^2_{l+k}(\Omega)}~\leq~C\big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}+\|(u, h)(t)\|_{\mathcal H_l^m}\big)~\|(u,h)(t)\|_{\mathcal H_l^m}, \end{align} then, substituting \eqref{est_I12} and \eqref{est_I22} into \eqref{G2} gives \begin{align}\label{est_G2} G_2~\leq~&C\Big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}+\|(u, h)(t)\|_{\mathcal H_l^m}\Big)~\|(u, h)(t)\|_{\mathcal H_l^m}\|D^\alpha (u, h)(t)\|_{L^2_{l+k}(\Omega)}\nonumber\\ \leq~&C\Big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u, h)(t)\|_{\mathcal H_l^m}\Big)~\|(u, h)(t)\|_{\mathcal H_l^m}^2. \end{align} \indent\newline \textbf{\textit{Estimate on $G_3$:}} For $G_3,$ the Cauchy-Schwarz inequality implies \begin{align}\label{G3} G_3~\leq~\|I_1^3(t)\|_{L^2_{l+k}(\Omega)}\|D^\alpha u(t)\|_{L^2_{l+k}(\Omega)}+\|I_2^3(t)\|_{L^2_{l+k}(\Omega)}\|D^\alpha h(t)\|_{L^2_{l+k}(\Omega)}. \end{align} Then, it remains to estimate $\|I_1^3(t)\|_{L^2_{l+k}(\Omega)}$ and $\|I_2^3(t)\|_{L^2_{l+k}(\Omega)}$. In the following, we establish the weighted estimate on $I_1^3$; the weighted estimate on $I_2^3$ can be obtained in a similar way.
Recalling that $D^\alpha={\partial}_\tau^\beta{\partial}_y^k$, we have \begin{align}\label{I13} I_1^3~=~\sum_{\tilde\alpha\leq\alpha}\left(\begin{array}{ccc}\alpha \\ \tilde\alpha\end{array}\right)\Big[D^{\tilde\alpha}u\cdot D^{\alpha-\tilde\alpha}(U_x\phi')+D^{\tilde\alpha}v\cdot D^{\alpha-\tilde\alpha}(U\phi'')-D^{\tilde\alpha}h\cdot D^{\alpha-\tilde\alpha}(H_x\phi')-D^{\tilde\alpha}g\cdot D^{\alpha-\tilde\alpha}(H\phi'')\Big]. \end{align} Then, let $\tilde\alpha\triangleq(\tilde\beta,\tilde k)$, and we estimate each term in \eqref{I13} as follows. Firstly, by using \eqref{phi_y} we have \begin{align*} \big\|D^{\tilde\alpha}u\cdot D^{\alpha-\tilde\alpha}(U_x\phi')\big\|_{L^2_{l+k}(\Omega)}\leq~&\big\|\langle y\rangle^{l+\tilde k}D^{\tilde\alpha}u(t)\big\|_{L^2(\Omega)}\big\|\langle y\rangle^{k-\tilde k}D^{\alpha-\tilde\alpha}(U_x\phi')(t)\big\|_{L^\infty(\Omega)}\\\leq~&C\|u(t)\|_{\mathcal H_l^m}\big\|{\partial}_\tau^{\beta-\tilde\beta}U_x(t)\big\|_{L^\infty(\mathbb T_x)}, \end{align*} and similarly, \begin{align*} \big\|D^{\tilde\alpha}h\cdot D^{\alpha-\tilde\alpha}(H_x\phi')\big\|_{L^2_{l+k}(\Omega)} \leq~&C\|h(t)\|_{\mathcal H_l^m}\big\|{\partial}_\tau^{\beta-\tilde\beta}H_x(t)\big\|_{L^\infty(\mathbb T_x)}. \end{align*} Secondly, as $v=-{\partial}_y^{-1}{\partial}_xu$, we have \begin{align*} D^{\tilde\alpha}v\cdot D^{\alpha-\tilde\alpha}(U\phi'')=-D^{\tilde\alpha+E_2}{\partial}_y^{-1}u\cdot D^{\alpha-\tilde\alpha}(U\phi'').
\end{align*} Therefore, if $\tilde k\geq1$, it follows that by \eqref{phi_y}, \begin{align*} \big\|D^{\tilde\alpha}v\cdot D^{\alpha-\tilde\alpha}(U\phi'')\big\|_{L^2_{l+k}(\Omega)}=~&\big\|{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^{\tilde k-1}u\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-\tilde k}(U\phi'')\big\|_{L^2_{l+k}(\Omega)}\\ \leq~&\big\|\langle y\rangle^{\tilde k-1}{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^{\tilde k-1}u(t)\big\|_{L^2(\Omega)}\big\|\langle y\rangle^{l+k-\tilde k+1} {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{k-\tilde k}(U\phi'')(t)\big\|_{L^\infty(\Omega)}\\ \leq~&C\|u(t)\|_{\mathcal H_0^m}\big\|{\partial}_\tau^{\beta-\tilde\beta}U(t)\big\|_{L^\infty(\mathbb T_x)}; \end{align*} if $\tilde k=0$, we obtain that by \eqref{normal1} and \eqref{phi_y}, \begin{align*} \big\|D^{\tilde\alpha}v\cdot D^{\alpha-\tilde\alpha}(U\phi'')\big\|_{L^2_{l+k}(\Omega)}=~&\Big\|\frac{{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^{-1}u}{1+y}\cdot {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^k(U\phi'')\Big\|_{L^2_{l+k+1}(\Omega)}\\ \leq~&\Big\|\frac{{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^{-1}u(t)}{1+y}\Big\|_{L^2(\Omega)}\big\|\langle y\rangle^{l+k+1} {\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^k(U\phi'')(t)\big\|_{L^\infty (\Omega)}\\ \leq~&C\|{\partial}_\tau^{\tilde\beta+e_2}u(t)\|_{L^2(\Omega)}\big\|{\partial}_\tau^{\beta-\tilde\beta}U(t)\big\|_{L^\infty(\mathbb T_x)}\\ \leq~ &C\|u(t)\|_{\mathcal H_0^m}\big\|{\partial}_\tau^{\beta-\tilde\beta}U(t)\big\|_{L^\infty(\mathbb T_x)}, \end{align*} provided that $|\tilde\beta|\leq|\beta|\leq m-1.$ Combining the above two inequalities yields that \begin{align*} \big\|D^{\tilde\alpha}v\cdot D^{\alpha-\tilde\alpha}(U\phi'')\big\|_{L^2_{l+k}(\Omega)} \leq~&C\|u(t)\|_{\mathcal H_0^m}\cdot\big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}\big). 
\end{align*} Similarly, we have \begin{align*} \big\|D^{\tilde\alpha}g\cdot D^{\alpha-\tilde\alpha}(H\phi'')\big\|_{L^2_{l+k}(\Omega)} \leq~&C\|h(t)\|_{\mathcal H_0^m}\cdot\big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}\big). \end{align*} Taking the above arguments into account, we conclude that \begin{align}\label{est_I13} \|I_1^3(t)\|_{L^2_{l+k}(\Omega)}~\leq~C\|(u, h)(t)\|_{\mathcal H_l^m}\cdot\big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}\big). \end{align} Then, one can obtain a similar estimate of $I_2^3$: \begin{align}\label{est_I23} \|I_2^3(t)\|_{L^2_{l+k}(\Omega)}~\leq~C\|(u, h)(t)\|_{\mathcal H_l^m}\cdot\big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}\big), \end{align} which implies that by plugging \eqref{est_I13} and \eqref{est_I23} into \eqref{G3}, \begin{align}\label{est_G3} G_3~\leq~&C\|D^\alpha (u, h)(t)\|_{L^2_{l+k}(\Omega)} \|(u, h)(t)\|_{\mathcal H_l^m}\cdot\big(\sum_{|\beta|\leq m+1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}\big)\nonumber\\ \leq~&C\|(u, h)(t)\|_{\mathcal H_l^m}^2\cdot\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^2(\mathbb T_x)}\big). \end{align} Now, having completed the estimates on $G_i, i=1,2,3,$ given by \eqref{est_G1}, \eqref{est_G2} and \eqref{est_G3} respectively, the conclusion of this step follows immediately from \eqref{divide}: \begin{align*} &-\int_{\Omega}\Big(I_1\cdot\langle y\rangle^{2l+2k}D^\alpha u+I_2\cdot\langle y\rangle^{2l+2k}D^\alpha h\Big)dxdy\nonumber\\ \leq~&C\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u, h)(t)\|_{\mathcal H_l^m}\big)~\|(u, h)(t)\|_{\mathcal H_l^m}^2, \end{align*} which completes the proof of \eqref{est-convect}.
\end{proof} \subsection{Weighted $H^m_l-$estimates only in tangential variables} \indent\newline Similar to the classical Prandtl equations, an essential difficulty for solving the problem \eqref{bl_main} is the loss of one derivative in the tangential variable $x$ in the terms $v{\partial}_yu-g{\partial}_yh$ and $v{\partial}_yh-g{\partial}_yu$. In other words, $v=-{\partial}_y^{-1}{\partial}_xu$ and $g=-{\partial}_y^{-1}{\partial}_xh$, given by the divergence-free conditions, create a loss of $x-$derivative that prevents us from applying the standard energy estimates. Precisely, by taking $m-$th order tangential derivatives of the first two equations of \eqref{bl_main}, we obtain the following equations for ${\partial}_\tau^\beta(u,h)$ with $|\beta|=m$: \begin{equation} \label{eq_xm}\begin{cases} {\partial}_t{\partial}_\tau^\beta u+\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]{\partial}_\tau^\beta u-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]{\partial}_\tau^\beta h-\mu{\partial}_y^2{\partial}_\tau^\beta u\\ \qquad\qquad+({\partial}_yu+U\phi''){\partial}_\tau^\beta v -({\partial}_yh+H\phi''){\partial}_\tau^\beta g={\partial}_\tau^\beta r_1+R_u^\beta,\\ {\partial}_t{\partial}_\tau^\beta h+\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]{\partial}_\tau^\beta h-\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]{\partial}_\tau^\beta u-\kappa{\partial}_y^2{\partial}_\tau^\beta h\\ \qquad\qquad+({\partial}_yh+H\phi''){\partial}_\tau^\beta v -({\partial}_yu+U\phi''){\partial}_\tau^\beta g={\partial}_\tau^\beta r_2+R_h^\beta, \end{cases} \end{equation} where \begin{equation}\label{def_R} \begin{cases} R_u^\beta~=&{\partial}_\tau^\beta\big(-U_x\phi'u+H_x\phi'h\big)-[{\partial}_\tau^\beta, U\phi'']v+[{\partial}_\tau^\beta, H\phi'']g-[{\partial}_\tau^\beta,(u+U\phi'){\partial}_x-U_x\phi{\partial}_y]u\\ &+[{\partial}_\tau^\beta,(h+H\phi'){\partial}_x-H_x\phi{\partial}_y]h-\sum\limits_{0<\tilde\beta<\beta}\left(\begin{array}{ccc} \beta\\ \tilde\beta
\end{array}\right)\Big({\partial}_\tau^{\tilde\beta} v\cdot{\partial}_\tau^{\beta-\tilde\beta}{\partial}_yu-{\partial}_\tau^{\tilde\beta} g\cdot{\partial}_\tau^{\beta-\tilde\beta}{\partial}_yh\Big),\\ R_h^\beta~=&{\partial}_\tau^\beta\big(-H_x\phi'u+U_x\phi'h\big)-[{\partial}_\tau^\beta, H\phi'']v+[{\partial}_\tau^\beta, U\phi'']g-[{\partial}_\tau^\beta,(u+U\phi'){\partial}_x-U_x\phi{\partial}_y]h\\ &+[{\partial}_\tau^\beta,(h+H\phi'){\partial}_x-H_x\phi{\partial}_y]u-\sum\limits_{0<\tilde\beta<\beta}\left(\begin{array}{ccc} \beta\\ \tilde\beta \end{array}\right)\Big({\partial}_\tau^{\tilde\beta} v\cdot{\partial}_\tau^{\beta-\tilde\beta}{\partial}_yh-{\partial}_\tau^{\tilde\beta} g\cdot{\partial}_\tau^{\beta-\tilde\beta}{\partial}_yu\Big). \end{cases} \end{equation} From the expression \eqref{def_R} and by using the inequalities \eqref{Morse}-\eqref{normal1}, we can control the $L_l^2(\Omega)-$estimates of each term given in \eqref{def_R}, and then obtain the estimates of $\|R_u^\beta(t)\|_{L_l^2(\Omega)}$ and $\|R_h^\beta(t)\|_{L_l^2(\Omega)}$. 
For example, for $\tilde\beta>0$, which implies that $\tilde\beta\geq e_i, i=1$ or 2, by virtue of \eqref{Morse}, \begin{align*} &\big\|\big[{\partial}_\tau^{\tilde\beta}(u+U\phi'){\partial}_x-{\partial}_\tau^{\tilde\beta}(U_x\phi){\partial}_y\big]({\partial}_\tau^{\beta-\tilde\beta}u)\big\|_{L_l^2(\Omega)}\\ \leq~&\big\|{\partial}_\tau^{\tilde\beta-e_i}({\partial}_\tau^{e_i}u)\cdot{\partial}_\tau^{\beta-\tilde\beta}({\partial}_xu)\big\|_{L_l^2(\Omega)}+\big\|{\partial}_\tau^{\tilde\beta}(U\phi')(t)\big\|_{L^\infty(\Omega)}\|{\partial}_x{\partial}_\tau^{\beta-\tilde\beta}u(t)\|_{L_l^2(\Omega)}\\ &+\Big\|\frac{{\partial}_\tau^{\tilde\beta}(U_x\phi)(t)}{1+y}\Big\|_{L^\infty(\Omega)}\|{\partial}_y{\partial}_\tau^{\beta-\tilde\beta}u(t)\|_{L_{l+1}^2(\Omega)}\\ \leq~&C\|{\partial}_\tau^{e_i}u(t)\|_{\mathcal H_0^{m-1}}\|{\partial}_xu(t)\|_{\mathcal H_l^{m-1}}+C\|{\partial}_\tau^{\tilde\beta}(U, U_x)(t)\|_{L^\infty(\mathbb T_x)}\|u(t)\|_{\mathcal H^m_l}\\ \leq ~&C\big(\|{\partial}_\tau^{\tilde\beta}(U, U_x)(t)\|_{L^\infty(\mathbb T_x)}+\|u(t)\|_{\mathcal H^m_l}\big)\|u(t)\|_{\mathcal H^m_l}, \end{align*} provided $m-1\geq3$ and $|\beta-\tilde\beta|\leq m-1$; \eqref{normal1} gives that for $\tilde\beta<\beta$ \begin{align*} \big\|{\partial}_\tau^{\tilde\beta}v\cdot{\partial}_\tau^{\beta-\tilde\beta}(U\phi'')\big\|_{L_l^2(\Omega)}\leq~&\Big\|\frac{{\partial}_\tau^{\tilde\beta+e_2}{\partial}_y^{-1}u(t)}{1+y}\Big\|_{L^2(\Omega)}\big\|\langle y\rangle^{l+1}{\partial}_\tau^{\beta-\tilde\beta}(U\phi'')(t)\big\|_{L^\infty(\Omega)}\\ \leq~& C\|{\partial}_\tau^{\beta-\tilde\beta}U(t)\|_{L^\infty(\mathbb T_x)}\|u(t)\|_{\mathcal H_0^m}; \end{align*} moreover, for $0<\tilde\beta<\beta$ which implies that $\tilde\beta\geq e_i, \beta-\tilde\beta\geq e_j, i, j=1$ or 2, \eqref{normal1} yields that \begin{align*} \big\|{\partial}_\tau^{\tilde\beta}v\cdot{\partial}_\tau^{\beta-\tilde\beta}({\partial}_y
u)\big\|_{L_l^2(\Omega)}=~&\big\|{\partial}_\tau^{\tilde\beta-e_i}{\partial}_y^{-1}({\partial}_\tau^{e_i+e_2}u)\cdot{\partial}_\tau^{\beta-\tilde\beta-e_j}({\partial}_\tau^{e_j}{\partial}_y u)\big\|_{L_l^2(\Omega)}\\ \leq~&C\|{\partial}_\tau^{e_i+e_2}u(t)\|_{\mathcal H_0^{m-2}}\|{\partial}_y{\partial}_\tau^{e_j}u(t)\|_{\mathcal H_{l+1}^{m-2}}\leq~C\|u(t)\|_{\mathcal H_l^m}^2 \end{align*} provided $m-2\geq3.$ The other terms in $R_u^\beta$ and $R_h^\beta$ can be estimated similarly, so that \begin{align}\label{est_error-m} \|R_u^\beta(t)\|_{L_l^2(\Omega)},~\|R_h^\beta(t)\|_{L_l^2(\Omega)}\leq~& C\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U, H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)(t)\|_{\mathcal H_l^m}\big)\|(u,h)(t)\|_{\mathcal H_l^m}. \end{align} On the other hand, considering the equations \eqref{eq_xm}, the main difficulty comes from the terms $$({\partial}_yu+U\phi''){\partial}_\tau^\beta v-({\partial}_yh+H\phi''){\partial}_\tau^\beta g=-({\partial}_yu+U\phi'')\cdot({\partial}_y^{-1}{\partial}_\tau^{\beta+e_2}u)+({\partial}_yh+H\phi'')\cdot({\partial}_y^{-1}{\partial}_\tau^{\beta+e_2}h),$$ and $$({\partial}_yh+H\phi''){\partial}_\tau^\beta v-({\partial}_yu+U\phi''){\partial}_\tau^\beta g=-({\partial}_yh+H\phi'')\cdot({\partial}_y^{-1}{\partial}_\tau^{\beta+e_2}u)+({\partial}_yu+U\phi'')\cdot({\partial}_y^{-1}{\partial}_\tau^{\beta+e_2}h),$$ which contain the $(m+1)$-th order tangential derivatives that cannot be controlled by the standard energy method. To overcome this difficulty, we rely on the following two key observations.
One is that from the equation \eqref{eq_h}, ${\partial}_y^{-1}h$ satisfies the following equation (see also the equation \eqref{eq-psi} for $\psi$) \[{\partial}_t ({\partial}_y^{-1}h)+(v-U_x\phi)(h+H\phi')-(g-H_x\phi)(u+U\phi')-\kappa{\partial}_yh=-H_t\phi+\kappa H\phi'',\] or \[{\partial}_t ({\partial}_y^{-1}h)+(h+H\phi')v+(u+U\phi'){\partial}_x({\partial}_y^{-1}h)-U_x\phi h+H_x\phi u-\kappa{\partial}_yh=H_t\phi(\phi'-1)+\kappa H\phi'',\] by using $g=-{\partial}_x{\partial}_y^{-1}h$ and the second relation of \eqref{Brou}. This inspires us, in the case $h+H\phi'>0$, to introduce the following two quantities \begin{equation}\label{new_qu} u_\beta:={\partial}_\tau^\beta u-\frac{{\partial}_yu+U\phi''}{h+H\phi'}{\partial}_\tau^\beta{\partial}_y^{-1}h,\qquad h_\beta:={\partial}_\tau^\beta h-\frac{{\partial}_yh+H\phi''}{h+H\phi'}{\partial}_\tau^\beta{\partial}_y^{-1}h, \end{equation} to eliminate the terms involving ${\partial}_\tau^\beta v$ and thus avoid the loss of $x-$derivative on $v$. Note that the new quantities $(u_\beta, h_\beta)$ are almost equivalent to ${\partial}_\tau^\beta(u,h)$ in the $L^2_l$-norm, that is, \begin{equation}\label{equ} \|{\partial}_\tau^\beta(u,h)\|_{L^2_l(\Omega)}~\lesssim~\|(u_\beta,h_\beta)\|_{L^2_l(\Omega)}~\lesssim~\|{\partial}_\tau^\beta(u,h)\|_{L^2_l(\Omega)}, \end{equation} which will be proved at the end of this subsection.
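To illustrate the second inequality in \eqref{equ}, here is a minimal sketch, assuming the weighted bound $\big\|\langle y\rangle^{l+1}\tfrac{{\partial}_yu+U\phi''}{h+H\phi'}\big\|_{L^\infty(\Omega)}\leq C$ (which one expects from the lower bound \eqref{priori_ass} and the weighted regularity of $(u,h)$ and the background): \begin{align*} \|u_\beta\|_{L^2_l(\Omega)}~\leq~&\|{\partial}_\tau^\beta u\|_{L^2_l(\Omega)}+\Big\|\langle y\rangle^{l+1}\frac{{\partial}_yu+U\phi''}{h+H\phi'}\Big\|_{L^\infty(\Omega)}\big\|\langle y\rangle^{-1}{\partial}_\tau^\beta{\partial}_y^{-1}h\big\|_{L^2(\Omega)}\\ \leq~&\|{\partial}_\tau^\beta u\|_{L^2_l(\Omega)}+2C\|{\partial}_\tau^\beta h\|_{L^2(\Omega)}~\lesssim~\|{\partial}_\tau^\beta(u,h)\|_{L^2_l(\Omega)}, \end{align*} where the bound $\|\langle y\rangle^{-1}{\partial}_\tau^\beta{\partial}_y^{-1}h\|_{L^2(\Omega)}\leq2\|{\partial}_\tau^\beta h\|_{L^2(\Omega)}$ follows from \eqref{normal1}; the estimate for $h_\beta$ and the reverse inequality are obtained in the same way.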
Another observation is that, by using the two new unknowns $(u_\beta, h_\beta)$ in \eqref{new_qu}, the regularity loss generated by $g=-{\partial}_y^{-1}{\partial}_xh$ can be cancelled by the convection terms $-(h+H\phi'){\partial}_xh$ and $-(h+H\phi'){\partial}_xu$. More precisely, \[\begin{split} &-(h+H\phi'){\partial}_x{\partial}_\tau^\beta h-({\partial}_yh+H\phi''){\partial}_\tau^\beta g\\ =&-(h+H\phi'){\partial}_x\Big(h_\beta+\frac{{\partial}_yh+H\phi''}{h+H\phi'}{\partial}_\tau^\beta{\partial}_y^{-1}h\Big)+({\partial}_yh+H\phi'')\cdot({\partial}_y^{-1}{\partial}_\tau^{\beta+e_2}h)\\ =&-(h+H\phi'){\partial}_x h_\beta-(h+H\phi'){\partial}_x\Big(\frac{{\partial}_yh+H\phi''}{h+H\phi'}\Big)\cdot{\partial}_\tau^\beta{\partial}_y^{-1} h, \end{split}\] and \[\begin{split} &-(h+H\phi'){\partial}_x{\partial}_\tau^\beta u-({\partial}_yu+U\phi''){\partial}_\tau^\beta g\\ =&-(h+H\phi'){\partial}_x\Big(u_\beta+\frac{{\partial}_yu+U\phi''}{h+H\phi'}{\partial}_\tau^\beta{\partial}_y^{-1}h\Big)+({\partial}_yu+U\phi'')\cdot({\partial}_y^{-1}{\partial}_\tau^{\beta+e_2}h)\\ =&-(h+H\phi'){\partial}_x u_\beta-(h+H\phi'){\partial}_x\Big(\frac{{\partial}_yu+U\phi''}{h+H\phi'}\Big)\cdot{\partial}_\tau^\beta {\partial}_y^{-1}h. \end{split}\] This cancellation mechanism reveals the stabilizing effect of the magnetic field on the boundary layer. Note that in the above expressions, the convection terms can be handled by the symmetric structure of the system. Based on the above discussion, we will carry out the estimation as follows. First of all, we always assume that there exists a positive constant $\delta_0\leq1,$ such that \begin{equation} \label{priori_ass} h(t,x,y)+H(t,x)\phi'(y)\geq\delta_0,\qquad \mbox{for}\quad (t,x,y)\in [0,T]\times\Omega.
\end{equation} Now, from the divergence-free condition ${\partial}_xh+{\partial}_yg=0$, there exists a stream function $\psi$, such that \begin{align}\label{psi} h={\partial}_y\psi,\quad g=-{\partial}_x\psi,\quad \psi|_{y=0}=0. \end{align} Then, the equation \eqref{eq_h} for $h$ reads \begin{align}\label{eq_psi} &{\partial}_t {\partial}_y\psi+{\partial}_y\big[(v-U_x\phi)({\partial}_y\psi+H\phi')+({\partial}_x\psi+H_x\phi)(u+U\phi')\big]-\kappa{\partial}_y^3\psi=-H_t\phi'+\kappa H\phi^{(3)}. \end{align} By virtue of the boundary conditions: \begin{align*} {\partial}_t\psi|_{y=0}={\partial}_x\psi|_{y=0}={\partial}_y^2\psi|_{y=0}=v|_{y=0}=0, \end{align*} and $\phi(y)\equiv0$ for $y\in[0,R_0]$, we integrate the equation (\ref{eq_psi}) with respect to the variable $y$ over $[0,y]$, to obtain \begin{align}\label{eq-psi} {\partial}_t \psi+\big[(u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big]\psi+H_x\phi u+H\phi'v-\kappa{\partial}_y^2\psi=r_3, \end{align} with \begin{align}\label{r_3} r_3~=~H_t\phi(\phi'-1)+\kappa H\phi^{(3)}. \end{align} Next, applying the $m$-th order tangential derivative operator to (\ref{eq-psi}) and using ${\partial}_y\psi=h$, we obtain \begin{align}\label{psi-m} {\partial}_t {\partial}_\tau^\beta\psi+\big[(u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big]{\partial}_\tau^\beta\psi+(h+H\phi'){\partial}_\tau^\beta v-\kappa{\partial}_y^2{\partial}_\tau^\beta\psi={\partial}_\tau^\beta r_3+R_\psi^\beta, \end{align} where $R_\psi^\beta$ is defined as follows: \begin{align}\label{r0} R_\psi^\beta=~&-{\partial}_\tau^\beta\big(H_x\phi u\big)-[{\partial}_\tau^\beta,H\phi']v-[{\partial}_\tau^\beta,(u+U\phi'){\partial}_x-U_x\phi{\partial}_y]\psi-\sum\limits_{0<\tilde\beta<\beta}\left(\begin{array}{ccc} \beta\\ \tilde\beta \end{array}\right)\big({\partial}_\tau^{\tilde\beta} v\cdot{\partial}_\tau^{\beta-\tilde\beta}{\partial}_y\psi\big).
\end{align} By $\psi={\partial}_y^{-1}h$ and $v=-{\partial}_x{\partial}_y^{-1}u$, it gives \begin{align*} R_\psi^\beta=~&-{\partial}_\tau^\beta\big(H_x\phi u\big)+[{\partial}_\tau^\beta,H\phi']{\partial}_x{\partial}_y^{-1}u-[{\partial}_\tau^\beta,(u+U\phi')]{\partial}_x{\partial}_y^{-1}h+[{\partial}_\tau^\beta, U_x\phi]h\\ &+\sum\limits_{0<\tilde\beta<\beta}\left(\begin{array}{ccc} \beta\\ \tilde\beta \end{array}\right)\big({\partial}_\tau^{\tilde\beta+e_2} {\partial}_y^{-1}u\cdot{\partial}_\tau^{\beta-\tilde\beta}h\big)\\ =~&-\sum\limits_{\tilde\beta\leq\beta}\left(\begin{array}{ccc} \beta\\ \tilde\beta \end{array}\right)\big[{\partial}_\tau^{\tilde\beta}(H_x\phi)\cdot{\partial}_\tau^{\beta-\tilde\beta}u\big]+\sum\limits_{0<\tilde\beta\leq\beta}\left(\begin{array}{ccc} \beta\\ \tilde\beta \end{array}\right)\Big[{\partial}_\tau^{\tilde\beta}(H\phi')\cdot{\partial}_\tau^{\beta-\tilde\beta+e_2}{\partial}_y^{-1}u\\ &\qquad-{\partial}_\tau^{\tilde\beta}(u+U\phi')\cdot{\partial}_\tau^{\beta-\tilde\beta+e_2}{\partial}_y^{-1}h+{\partial}_\tau^{\tilde\beta}(U_x\phi)\cdot{\partial}_\tau^{\beta-\tilde\beta}h \Big]+\sum\limits_{0<\tilde\beta<\beta}\left(\begin{array}{ccc} \beta\\ \tilde\beta \end{array}\right)\big({\partial}_\tau^{\tilde\beta+e_2} {\partial}_y^{-1}u\cdot{\partial}_\tau^{\beta-\tilde\beta}h\big), \end{align*} and then, we can estimate $\Big\|\frac{R_\psi^\beta(t)}{1+y}\Big\|_{L^2(\Omega)}$ from the above expression term by term. 
For example, it is easy to get that \begin{align*} \Big\|\frac{{\partial}_\tau^{\tilde\beta}(H_x\phi)\cdot{\partial}_\tau^{\beta-\tilde\beta}u}{1+y}\Big\|_{L^2(\Omega)}\leq~&\Big\|\frac{{\partial}_\tau^{\tilde\beta}(H_x\phi)(t)}{1+y}\Big\|_{L^\infty(\Omega)}\big\|{\partial}_\tau^{\beta-\tilde\beta}u(t)\big\|_{L^2(\Omega)}\leq~ C\|{\partial}_\tau^{\tilde\beta}H_x(t)\|_{L^\infty(\mathbb T_x)}\|u(t)\|_{\mathcal H_0^m}, \end{align*} and \eqref{normal1} implies that \begin{align*} \Big\|\frac{{\partial}_\tau^{\tilde\beta}(H\phi')\cdot{\partial}_\tau^{\beta-\tilde\beta+e_2}{\partial}_y^{-1}u}{1+y}\Big\|_{L^2(\Omega)}\leq\big\|{\partial}_\tau^{\tilde\beta}(H\phi')(t)\big\|_{L^\infty(\Omega)}\Big\|\frac{{\partial}_\tau^{\beta-\tilde\beta+e_2}{\partial}_y^{-1}u(t)}{1+y}\Big\|_{L^2(\Omega)}\leq C\|{\partial}_\tau^{\tilde\beta}H(t)\|_{L^\infty(\mathbb T_x)}\|u(t)\|_{\mathcal H_0^m},\\ \end{align*} provided $|\beta-\tilde\beta|\leq|\beta|-1=m-1$. Also, \eqref{normal1} allows us to get that for $\tilde\beta\geq e_i, i=1$ or 2, \begin{align*} \Big\|\frac{{\partial}_\tau^{\tilde\beta}u\cdot{\partial}_\tau^{\beta-\tilde\beta+e_2}{\partial}_y^{-1}h}{1+y}\Big\|_{L^2(\Omega)}=~&\big\|{\partial}_\tau^{\tilde\beta-e_i}({\partial}_\tau^{e_i}u)\cdot{\partial}_\tau^{\beta-\tilde\beta}{\partial}_y^{-1}({\partial}_xh)\big\|_{L_{-1}^2(\Omega)}\\ \leq~ &C\|{\partial}_\tau^{e_i}u(t)\|_{\mathcal H_0^{m-1}}\|{\partial}_xh(t)\|_{\mathcal H_0^{m-1}}\leq C\|(u, h)(t)\|_{\mathcal H_0^m}^2. \end{align*} The other terms in $R_\psi^\beta$ can be estimated similarly, and we have \begin{equation}\label{est_r0} \Big\|\frac{R_\psi^\beta(t)}{1+y}\Big\|_{L^2(\Omega)}\leq C\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u, h)\|_{\mathcal H_0^m}\big)\|(u, h)\|_{\mathcal H_0^m}. 
\end{equation} Now, combining \eqref{new_qu} with \eqref{psi}, we define the new unknowns \begin{align}\label{new} u_\beta={\partial}_\tau^\beta u-\frac{{\partial}_yu+U\phi''}{h+H\phi'}{\partial}_\tau^\beta\psi,\quad h_\beta={\partial}_\tau^\beta h-\frac{{\partial}_yh+H \phi''}{h+H\phi'}{\partial}_\tau^\beta\psi, \end{align} and denote \begin{equation} \label{def_eta} \eta_1~\triangleq~\frac{{\partial}_yu+U\phi''}{h+H\phi'},\quad \eta_2~\triangleq~\frac{{\partial}_yh+H\phi''}{h+H\phi'}. \end{equation} Then, noting that ${\partial}_\tau^\beta g=-{\partial}_x{\partial}_\tau^\beta\psi$ by \eqref{psi}, we compute $(\ref{eq_xm})_1 -(\ref{eq-psi})\times\eta_1$ and $(\ref{eq_xm})_2 -(\ref{eq-psi})\times\eta_2$ respectively, to obtain \begin{align}\label{eq_hu} \begin{cases} {\partial}_tu_\beta +\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]u_\beta -\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]h_\beta-\mu{\partial}_y^2u_\beta +(\kappa-\mu)\eta_1{\partial}_y h_\beta &=R_1^\beta,\\ {\partial}_th_\beta +\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]h_\beta -\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]u_\beta-\kappa{\partial}_y^2h_\beta &=R_2^\beta, \end{cases} \end{align} where \begin{align}\label{def_newr} \begin{cases} R_1^\beta&={\partial}_\tau^\beta r_1-\eta_1{\partial}_\tau^\beta r_3+ R_u^\beta-\eta_1R_\psi^\beta+[2\mu{\partial}_y\eta_1+(g-H_x\phi)\eta_2+(\mu-\kappa)\eta_1\eta_2]{\partial}_\tau^\beta h-\zeta_1{\partial}_\tau^\beta\psi,\\ R_2^\beta&={\partial}_\tau^\beta r_2-\eta_2{\partial}_\tau^\beta r_3+ R_h^\beta-\eta_2R_\psi^\beta+\big[2\kappa{\partial}_y\eta_2+(g-H_x\phi)\eta_1\big]{\partial}_\tau^\beta h-\zeta_2{\partial}_\tau^\beta\psi, \end{cases} \end{align} with \begin{align}\label{zeta} \zeta_1~&=~{\partial}_t\eta_1+\big[(u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big]\eta_1-\big[(h+H\phi'){\partial}_x+(g-H_x\phi){\partial}_y\big]\eta_2-\mu{\partial}_y^2\eta_1+(\kappa-\mu)\eta_1{\partial}_y\eta_2,\nonumber\\
\zeta_2~&=~{\partial}_t\eta_2+\big[(u+U\phi'){\partial}_x+(v-U_x\phi){\partial}_y\big]\eta_2-\big[(h+H\phi'){\partial}_x+(g-H_x\phi){\partial}_y\big]\eta_1-\kappa{\partial}_y^2\eta_2. \end{align} Also, direct calculation gives the corresponding initial-boundary values as follows: \begin{equation}\label{ib_hat} \begin{cases} &u_\beta|_{t=0}={\partial}_\tau^\beta u(0,x,y)-\frac{{\partial}_yu_{0}(x,y)+U(0,x)\phi''(y)}{h_0(x,y)+H(0,x)\phi'(y)}\int_0^y{\partial}_\tau^\beta h(0,x,z)dz\triangleq u_{\beta 0}(x,y),\\ &h_\beta|_{t=0}={\partial}_\tau^\beta h(0,x,y)-\frac{{\partial}_yh_{0}(x,y)+H(0,x)\phi''(y)}{h_0(x,y)+H(0,x)\phi'(y)}\int_0^y{\partial}_\tau^\beta h(0,x,z)dz\triangleq h_{\beta 0}(x,y),\\ &u_\beta|_{y=0}=0,\quad {\partial}_y h_\beta|_{y=0}=0. \end{cases}\end{equation} Finally, we obtain the initial-boundary value problem for $(u_\beta, h_\beta)$: \begin{equation}\label{pr_hat} \begin{cases} {\partial}_tu_\beta +\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]u_\beta -\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]h_\beta-\mu{\partial}_y^2u_\beta+(\kappa-\mu)\eta_1{\partial}_yh_\beta=R_1^\beta,\\ {\partial}_th_\beta +\big[(u+U\phi')\partial_x+(v-U_x\phi)\partial_y\big]h_\beta -\big[(h+H\phi')\partial_x+(g-H_x\phi)\partial_y\big]u_\beta-\kappa{\partial}_y^2h_\beta =R_2^\beta,\\ ( u_\beta,{\partial}_y h_\beta)|_{y=0}=0,\qquad (u_\beta,h_\beta)|_{t=0}=(u_{\beta 0},h_{\beta 0})(x,y), \end{cases}\end{equation} with the initial data $(u_{\beta 0},h_{\beta 0})(x,y)$ given by \eqref{ib_hat}. Moreover, by combining $\psi={\partial}_y^{-1}h$ with \eqref{normal1}, \begin{align}\label{est_psi} \|\langle y\rangle^{-1}{\partial}_\tau^\beta\psi(t)\|_{L^2(\Omega)}\leq 2\|{\partial}_\tau^\beta h(t)\|_{L^2(\Omega)}. 
\end{align} From the expression \eqref{def_eta} of $\eta_1$ and $\eta_2$, together with \eqref{priori_ass} and the Sobolev embedding inequality, we have that for $\lambda\in\mathbb R$ and $i=1,2,$ \begin{align}\label{est_eta} \|\langle y\rangle^\lambda\eta_i\|_{L^\infty(\Omega)}\leq C\delta_0^{-1}\big(\|(U,H)(t)\|_{L^\infty(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{\lambda-1}^3}\big),\nonumber\\ \|\langle y\rangle^\lambda{\partial}_y\eta_i\|_{L^\infty(\Omega)}\leq C\delta_0^{-2}\big(\|(U,H)(t)\|_{L^\infty(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{\lambda-1}^4}\big)^2, \end{align} and \begin{align}\label{est_zeta} \|\langle y\rangle^\lambda\zeta_i\|_{L^\infty(\Omega)}\leq C\delta_0^{-3}\big(\sum_{|\beta|\leq1}\|{\partial}_\tau^\beta(U,H)(t)\|_{L^\infty(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{\lambda-1}^5}\big)^3,\qquad i=1,2. \end{align} Then, for the terms $R_1^\beta$ and $R_2^\beta$ given by \eqref{def_newr}, the inequalities \eqref{est_psi}-\eqref{est_zeta} together with the estimates \eqref{est_error-m} and \eqref{est_r0} imply that for $|\beta|=m\geq5$ and $l\geq0,$ \begin{equation} \label{est-r1} \begin{split} \|R_1^\beta(t)\|_{L_l^2(\Omega)}\leq~&\|{\partial}_\tau^\beta r_1-\eta_1{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}+\|R_u^\beta\|_{L_l^2(\Omega)}+\|\langle y\rangle^{l+1}\eta_1\|_{L^{\infty}(\Omega)}\|\langle y\rangle^{-1}R_\psi^\beta\|_{L^2(\Omega)}\\ &+\big(\big\|2\mu{\partial}_y\eta_1+(\mu-\kappa)\eta_1\eta_2\big\|_{L^\infty(\Omega)}+\big\|\langle y\rangle^{-1}(g-H_x\phi)\big\|_{L^\infty(\Omega)}\big\|\langle y\rangle\eta_2\big\|_{L^\infty(\Omega)}\big)\big\|{\partial}_\tau^\beta h\big\|_{L^2_l(\Omega)}\\ &+\|\langle y\rangle^{l+1}\zeta_1\|_{L^\infty(\Omega)}\|\langle y\rangle^{-1}{\partial}_\tau^\beta\psi\|_{L^2(\Omega)}\\ \leq~&\|{\partial}_\tau^\beta r_1-\eta_1{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}+C\delta_0^{-3}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{l}^m}\big)^3\|(u,h)(t)\|_{\mathcal H_l^m},
\end{split}\end{equation} and \begin{equation}\label{est-r2}\begin{split} \|R_2^\beta(t)\|_{L_l^2(\Omega)}\leq~&\|{\partial}_\tau^\beta r_2-\eta_2{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}+\|R_h^\beta\|_{L_l^2(\Omega)}+\|\langle y\rangle^{l+1}\eta_2\|_{L^{\infty}(\Omega)}\|\langle y\rangle^{-1}R_\psi^\beta\|_{L^2(\Omega)}\\ &+\big(\big\|2\kappa{\partial}_y\eta_2\big\|_{L^\infty(\Omega)}+\big\|\langle y\rangle^{-1}(g-H_x\phi)\big\|_{L^\infty(\Omega)}\big\|\langle y\rangle\eta_1\big\|_{L^\infty(\Omega)}\big)\big\|{\partial}_\tau^\beta h\big\|_{L^2_l(\Omega)}\\ &+\|\langle y\rangle^{l+1}\zeta_2\|_{L^\infty(\Omega)}\|\langle y\rangle^{-1}{\partial}_\tau^\beta\psi\|_{L^2(\Omega)}\\ \leq~&\|{\partial}_\tau^\beta r_2-\eta_2{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}+C\delta_0^{-3}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{l}^m}\big)^3\|(u,h)(t)\|_{\mathcal H_l^m}. \end{split} \end{equation} Now, we derive the following $L^2_l$-estimate on $(u_\beta,h_\beta)$.
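We also record, for repeated later use, the Hardy-type step behind \eqref{est_psi}: assuming, as in the estimates above, that \eqref{normal1} encodes the weighted Hardy inequality for functions vanishing on $\{y=0\}$, and using ${\partial}_\tau^\beta\psi|_{y=0}=0$ together with ${\partial}_y\psi=h$,
\begin{align*}
\Big\|\frac{{\partial}_\tau^\beta\psi(t)}{1+y}\Big\|_{L^2(\Omega)}~\leq~2\big\|{\partial}_y{\partial}_\tau^\beta\psi(t)\big\|_{L^2(\Omega)}~=~2\big\|{\partial}_\tau^\beta h(t)\big\|_{L^2(\Omega)}.
\end{align*}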
\begin{prop}[\textit{$L^2_l$-estimate on $(u_\beta,h_\beta)$}]\label{prop_xm} Under the hypotheses of Proposition \ref{prop_priori}, for any $t\in[0,T]$ the quantity $(u_\beta,h_\beta)$ given in \eqref{new} satisfies \begin{align}\label{est_hat} &\sum_{|\beta|=m}\Big(\frac{d}{dt}\|(u_\beta, h_\beta)(t)\|_{L^2_l(\Omega)}^2 +\mu\|{\partial}_y u_\beta(t)\|_{L^2_l(\Omega)}^2+\kappa\|{\partial}_y h_\beta(t)\|_{L^2_l(\Omega)}^2\Big)\nonumber\\ \leq~&\sum_{|\beta|=m}\Big(\|{\partial}_\tau^\beta r_1-\eta_1{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}^2+\|{\partial}_\tau^\beta r_2-\eta_2{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}^2\Big)\nonumber\\ &+C\delta_0^{-2}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)(t)\|_{\mathcal H_l^m}\big)^2\Big(\sum_{|\beta|=m}\|(u_\beta, h_\beta)(t)\|_{L^2_l(\Omega)}^2\Big)\nonumber\\ &+C\delta_0^{-4}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{l}^m}\big)^4\|(u,h)(t)\|_{\mathcal H_l^m}^2. \end{align} \end{prop} \begin{proof}[\textbf{Proof.}] Multiplying $(\ref{pr_hat})_1$ and $(\ref{pr_hat})_2$ by $\langle y\rangle^{2l}u_\beta$ and $\langle y\rangle^{2l}h_\beta$ respectively, integrating over $\Omega$ for $t\in[0,T]$, and integrating by parts, we obtain \begin{align}\label{est_m} &\frac{1}{2}\frac{d}{dt}\|(u_\beta,h_\beta)(t)\|_{L^2_l(\Omega)}^2 +\mu\|{\partial}_y u_\beta\|_{L^2_l(\Omega)}^2+\kappa\|{\partial}_y h_\beta\|_{L^2_l(\Omega)}^2\nonumber\\ =~&2l\int_{\Omega}\langle y\rangle^{2l-1}\big[(v-U_x\phi)\frac{u_\beta^2+h_\beta^2}{2}-(g-H_x\phi)u_\beta h_\beta\big]dxdy+(\mu-\kappa)\int_{\Omega}\langle y\rangle^{2l}\big(\eta_1{\partial}_yh_\beta\cdot u_\beta\big)dxdy\nonumber\\ &+\int_{\Omega}\langle y\rangle^{2l}\big(u_\beta R^\beta_1+h_\beta R^\beta_2\big)dxdy -2l\int_{\Omega}\langle y\rangle^{2l-1}\big(\mu u_\beta{\partial}_y u_\beta+\kappa h_\beta{\partial}_y h_\beta\big)dxdy, \end{align} where we have used the boundary conditions in \eqref{pr_hat} and $(v,g)|_{y=0}=0.$ By \eqref{normal}, we have \begin{align}\label{est_m0} &\Big|2l\int_{\Omega}\langle y\rangle^{2l-1}\big[(v-U_x\phi)\frac{u_\beta^2+h_\beta^2}{2}-(g-H_x\phi)u_\beta h_\beta\big]dxdy\Big|\nonumber\\ \leq~&2l\Big(\Big\|\frac{v-U_x\phi}{1+y}\Big\|_{L^\infty(\Omega)}+\Big\|\frac{g-H_x\phi}{1+y}\Big\|_{L^\infty(\Omega)}\Big)\|(u_\beta, h_\beta)\|_{L_l^2(\Omega)}^2\nonumber\\ \leq~&2l\big(\|(U_x,H_x)(t)\|_{L^\infty(\mathbb T_x)}+\|u_x(t)\|_{L^\infty(\Omega)}+\|h_x(t)\|_{L^\infty(\Omega)}\big)\|(u_\beta,
h_\beta)(t)\|_{L_l^2(\Omega)}^2\nonumber\\ \leq~&C\big(\|(U_x,H_x)(t)\|_{L^\infty(\mathbb T_x)}+\|(u, h)(t)\|_{\mathcal H_l^m}\big)\|(u_\beta, h_\beta)(t)\|_{L_l^2(\Omega)}^2. \end{align} By integration by parts and the boundary condition $u_\beta|_{y=0}=0$, we obtain \begin{align}\label{est_m1} &(\mu-\kappa)\int_{\Omega}\langle y\rangle^{2l}\big(\eta_1{\partial}_yh_\beta\cdot u_\beta\big)dxdy\nonumber\\ =&-\mu\int_{\Omega}h_\beta{\partial}_y\big(\langle y\rangle^{2l}\eta_1u_\beta\big)dxdy-\kappa \int_{\Omega}\langle y\rangle^{2l}\big(\eta_1{\partial}_yh_\beta\cdot u_\beta\big)dxdy\nonumber\\ \leq~&\frac{\mu}{4}\|{\partial}_y u_\beta(t)\|_{L^2_l(\Omega)}^2+\frac{\kappa}{4}\|{\partial}_y h_\beta(t)\|_{L^2_l(\Omega)}^2+C\big(1+\|\eta_1(t)\|_{L^\infty(\Omega)}^2+\|{\partial}_y\eta_1(t)\|_{L^\infty(\Omega)}\big)\|(u_\beta, h_\beta)(t)\|_{L^2_l(\Omega)}^2\nonumber\\ \leq~&\frac{\mu}{4}\|{\partial}_y u_\beta(t)\|_{L^2_l(\Omega)}^2+\frac{\kappa}{4}\|{\partial}_y h_\beta(t)\|_{L^2_l(\Omega)}^2+C\delta_0^{-2}\big(\|(U,H)(t)\|_{L^\infty(\mathbb T_x)}+\|(u,h)(t)\|_{\mathcal H_l^m}\big)^2\|(u_\beta, h_\beta)(t)\|_{L^2_l(\Omega)}^2, \end{align} where we have used \eqref{est_eta} in the second inequality.
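For clarity, the integration by parts behind the first equality in \eqref{est_m1} reads, since the boundary term vanishes by $u_\beta|_{y=0}=0$ (and the decay as $y\to\infty$),
\begin{align*}
\mu\int_{\Omega}\langle y\rangle^{2l}\eta_1{\partial}_yh_\beta\cdot u_\beta\,dxdy~=~-\mu\int_{\Omega}h_\beta\,{\partial}_y\big(\langle y\rangle^{2l}\eta_1u_\beta\big)\,dxdy,
\end{align*}
after which the product rule generates the terms controlled by $\|\eta_1\|_{L^\infty(\Omega)}$ and $\|{\partial}_y\eta_1\|_{L^\infty(\Omega)}$ in \eqref{est_m1}.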
Next, by \eqref{est-r1} and \eqref{est-r2} it is easy to get that \begin{align}\label{est_m2} \int_{\Omega}\langle y\rangle^{2l}\big(u_\beta R^\beta_1+h_\beta R^\beta_2\big)dxdy \leq~&\|u_\beta(t)\|_{L_l^2(\Omega)}\|R_1^\beta(t)\|_{L_l^2(\Omega)}+\|h_\beta(t)\|_{L_l^2(\Omega)}\|R_2^\beta(t)\|_{L_l^2(\Omega)}\nonumber\\ \leq~&\|{\partial}_\tau^\beta r_1-\eta_1{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}^2+\|{\partial}_\tau^\beta r_2-\eta_2{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}^2\nonumber\\ &+C\delta_0^{-2}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)(t)\|_{\mathcal H_l^m}\big)^2\|(u_\beta, h_\beta)(t)\|_{L^2_l(\Omega)}^2\nonumber\\ &+C\delta_0^{-4}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{l}^m}\big)^4\|(u,h)(t)\|_{\mathcal H_l^m}^2. \end{align} Also, \begin{align}\label{est_m3} &\Big|2l\int_{\Omega}\langle y\rangle^{2l-1}\big(\mu u_\beta{\partial}_y u_\beta+\kappa h_\beta{\partial}_y h_\beta\big)dxdy\Big|\nonumber\\ \leq~&\frac{\mu}{4}\|{\partial}_y u_\beta(t)\|_{L^2_l(\Omega)}^2+\frac{\kappa}{4}\|{\partial}_y h_\beta(t)\|_{L^2_l(\Omega)}^2+C\|(u_\beta, h_\beta)(t)\|_{L^2_l(\Omega)}^2. \end{align} Substituting \eqref{est_m0}-\eqref{est_m3} into \eqref{est_m} yields \begin{align}\label{est-m} &\frac{d}{dt}\|(u_\beta,h_\beta)(t)\|_{L^2_l(\Omega)}^2 +\mu\|{\partial}_y u_\beta\|_{L^2_l(\Omega)}^2+\kappa\|{\partial}_y h_\beta\|_{L^2_l(\Omega)}^2\nonumber\\ \leq~&\|{\partial}_\tau^\beta r_1-\eta_1{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}^2+\|{\partial}_\tau^\beta r_2-\eta_2{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}^2\nonumber\\ &+C\delta_0^{-2}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)(t)\|_{\mathcal H_l^m}\big)^2\|(u_\beta, h_\beta)(t)\|_{L^2_l(\Omega)}^2\nonumber\\ &+C\delta_0^{-4}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{l}^m}\big)^4\|(u,h)(t)\|_{\mathcal H_l^m}^2, \end{align} and \eqref{est_hat} follows by summing over all $|\beta|=m$ in \eqref{est-m}. \end{proof} Finally, we give the following result, which shows the almost equivalence in $L_l^2$-norm between ${\partial}_\tau^\beta(u,h)$ and the quantities $(u_\beta,h_\beta)$ given by \eqref{new}.
\begin{lem}[\textit{Equivalence between $\|{\partial}_\tau^\beta(u,h)\|_{L_l^2}$ and $\|(u_\beta,h_\beta)\|_{L_l^2}$}]\label{lem_equ} If the smooth function $(u,h)$ satisfies the problem \eqref{bl_main} on $[0,T]$ and \eqref{priori_ass} holds, then for any $t\in[0,T]$, $l\geq0$, any integer $m\geq3$, and the quantity $(u_\beta,h_\beta)$ with $|\beta|=m$ defined by \eqref{new}, we have \begin{equation}\label{equi} M(t)^{-1}\|{\partial}_\tau^\beta (u, h)(t)\|_{L_l^2(\Omega)}~\leq~\|(u_\beta,h_\beta)(t)\|_{L_l^2(\Omega)}~\leq~M(t)\|{\partial}_\tau^\beta(u, h)(t)\|_{L_l^2(\Omega)}, \end{equation} and \begin{equation} \label{equi_y1} \big\|{\partial}_y{\partial}_\tau^\beta (u, h)(t)\big\|_{L_l^2(\Omega)} \leq\|{\partial}_y (u_\beta, h_\beta)(t)\|_{L_l^2(\Omega)}+M(t)\|h_\beta(t)\|_{L_l^2(\Omega)}, \end{equation} where \begin{equation}\label{def_M} M(t)~:=~2\delta_0^{-1}\Big(C\|(U,H)(t)\|_{L^\infty(\mathbb T_x)}+\big\|\langle y\rangle^{l+1}{\partial}_y(u, h)(t)\big\|_{L^\infty(\Omega)}+\big\|\langle y\rangle^{l+1}{\partial}_y^2(u, h)(t)\big\|_{L^\infty(\Omega)}\Big). \end{equation} \end{lem} \begin{proof}[\textbf{Proof.}] First, from the definitions of $u_\beta$ and $h_\beta$ in \eqref{new} and by \eqref{est_psi}, we have \begin{equation*} \label{equivalent}\begin{split} \|u_\beta(t)\|_{L^2_l(\Omega)}\leq~&\|{\partial}_\tau^\beta u(t)\|_{L^2_l(\Omega)}+ \|\langle y\rangle^{l+1}\eta_1(t)\|_{L^\infty(\Omega)}\|\langle y\rangle^{-1}{\partial}_\tau^\beta\psi(t)\|_{L^2(\Omega)}\\ \leq~&\|{\partial}_\tau^\beta u(t)\|_{L^2_l(\Omega)}+2\delta_0^{-1}\big(C\|U(t)\|_{L^\infty(\mathbb T_x)}+\|\langle y\rangle^{l+1}{\partial}_yu(t)\|_{L^\infty(\Omega)}\big)\|{\partial}_\tau^\beta h(t)\|_{L^2(\Omega)}, \end{split}\end{equation*} and \begin{align*} \|h_\beta(t)\|_{L^2_l(\Omega)}\leq~&\|{\partial}_\tau^\beta h(t)\|_{L^2_l(\Omega)}+ \|\langle y\rangle^{l+1}\eta_2(t)\|_{L^\infty(\Omega)}\|\langle y\rangle^{-1}{\partial}_\tau^\beta\psi(t)\|_{L^2(\Omega)}\\ \leq~&2\delta_0^{-1}\big(C\|H(t)\|_{L^\infty(\mathbb T_x)}+\|\langle y\rangle^{l+1}{\partial}_yh(t)\|_{L^\infty(\Omega)}\big)\|{\partial}_\tau^\beta h(t)\|_{L_l^2(\Omega)}. \end{align*} Thus, by \eqref{def_M}, \begin{equation} \label{equ_1}\begin{split} \|(u_\beta,h_\beta)(t)\|_{L_l^2(\Omega)}~&\leq~ M(t)\big\|{\partial}_\tau^\beta (u, h)(t)\big\|_{L_l^2(\Omega)}. \end{split}\end{equation} On the other hand, note that from ${\partial}_y\psi=h$ and the expression of $h_\beta$ in \eqref{new}, \[h_\beta~=~{\partial}_\tau^\beta h-\frac{{\partial}_yh+H\phi''}{h+H\phi'}{\partial}_\tau^\beta\psi~=~(h+H\phi')\cdot{\partial}_y\Big(\frac{{\partial}_\tau^\beta\psi}{h+H\phi'}\Big),\] which implies, since ${\partial}_\tau^\beta\psi|_{y=0}=0,$ that \begin{equation} \label{def_psi} {\partial}_\tau^\beta\psi(t,x,y)=\big(h(t,x,y)+H(t,x)\phi'(y)\big)\cdot\int_0^y\frac{h_\beta(t,x,z)}{h(t,x,z)+H(t,x)\phi'(z)}dz.
\end{equation} Therefore, combining the definition \eqref{new} of $(u_\beta, h_\beta)$ with \eqref{def_psi}, we have \begin{equation}\label{for_m} \begin{cases} {\partial}_\tau^\beta u(t,x,y)=u_\beta(t,x,y)+\big({\partial}_yu(t,x,y)+U(t,x)\phi''(y)\big)\cdot\int_0^y\frac{h_\beta(t,x,z)}{h(t,x,z)+H(t,x)\phi'(z)}dz,\\ {\partial}_\tau^\beta h(t,x,y)=h_\beta(t,x,y)+\big({\partial}_yh(t,x,y)+H(t,x)\phi''(y)\big)\cdot\int_0^y\frac{h_\beta(t,x,z)}{h(t,x,z)+H(t,x)\phi'(z)}dz. \end{cases}\end{equation} Then, by \eqref{normal1}, \[\begin{split} \|{\partial}_\tau^\beta u(t)\|_{L^2_l(\Omega)} \leq~&\|u_\beta(t)\|_{L_l^2(\Omega)}+\big\|\langle y\rangle^{l+1}\big({\partial}_yu+U\phi''\big)(t)\big\|_{L^\infty(\Omega)}\Big\|\frac{1}{1+y}\int_0^y\frac{h_\beta(t,x,z)}{h(t,x,z)+H(t,x)\phi'(z)}dz\Big\|_{L^2(\Omega)}\\ \leq~&\|u_\beta(t)\|_{L_l^2(\Omega)}+2\big(C\|U(t)\|_{L^\infty(\mathbb T_x)}+\|\langle y\rangle^{l+1}{\partial}_yu(t)\|_{L^\infty(\Omega)}\big)\Big\|\frac{h_\beta}{h+H\phi'}\Big\|_{L^2(\Omega)}\\ \leq~&\|u_\beta(t)\|_{L_l^2(\Omega)}+2\delta_0^{-1}\big(C\|U(t)\|_{L^\infty(\mathbb T_x)}+\|\langle y\rangle^{l+1}{\partial}_yu(t)\|_{L^\infty(\Omega)}\big)\|h_\beta(t)\|_{L^2(\Omega)}, \end{split}\] and similarly, \[\begin{split} \|{\partial}_\tau^\beta h(t)\|_{L^2_l(\Omega)} \leq~&2\delta_0^{-1}\big(C\|H(t)\|_{L^\infty(\mathbb T_x)}+\|\langle y\rangle^{l+1}{\partial}_yh(t)\|_{L^\infty(\Omega)}\big)\|h_\beta(t)\|_{L^2_l(\Omega)}, \end{split}\] which implies \begin{equation} \label{equ_2} \|{\partial}_\tau^\beta (u, h)(t)\|_{L_l^2(\Omega)}~\leq~ M(t)\|(u_\beta, h_\beta)(t)\|_{L_l^2(\Omega)}, \end{equation} with $M(t)$ given by \eqref{def_M}. Thus, combining \eqref{equ_1} with \eqref{equ_2} yields \eqref{equi}.
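As a consistency check for \eqref{def_psi}, differentiating its right-hand side in $y$ and using ${\partial}_y{\partial}_\tau^\beta\psi={\partial}_\tau^\beta h$ together with the definition \eqref{def_eta} of $\eta_2$ recovers the definition of $h_\beta$:
\begin{align*}
{\partial}_y\Big(\frac{{\partial}_\tau^\beta\psi}{h+H\phi'}\Big)~=~\frac{{\partial}_\tau^\beta h}{h+H\phi'}-\frac{({\partial}_yh+H\phi''){\partial}_\tau^\beta\psi}{(h+H\phi')^2}~=~\frac{h_\beta}{h+H\phi'},
\end{align*}
so both sides of \eqref{def_psi} have the same $y$-derivative and vanish at $y=0$.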
Furthermore, differentiating \eqref{for_m} in $y$, we get the following expressions for ${\partial}_y{\partial}_\tau^\beta u$ and ${\partial}_y{\partial}_\tau^\beta h$: \begin{align*}\begin{cases} {\partial}_y{\partial}_\tau^\beta u(t,x,y)=&{\partial}_y u_\beta(t,x,y)+\eta_1(t,x,y) h_\beta(t,x,y)+\big({\partial}_y^2u(t,x,y)+U(t,x)\phi^{(3)}(y)\big)\cdot\int_0^y\frac{h_\beta(t,x,z)}{h(t,x,z)+H(t,x)\phi'(z)}dz,\\ {\partial}_y{\partial}_\tau^\beta h(t,x,y)=&{\partial}_y h_\beta(t,x,y)+\eta_2(t,x,y) h_\beta(t,x,y)+\big({\partial}_y^2h(t,x,y)+H(t,x)\phi^{(3)}(y)\big)\cdot\int_0^y\frac{h_\beta(t,x,z)}{h(t,x,z)+H(t,x)\phi'(z)}dz. \end{cases}\end{align*} Then, by \eqref{normal1} and \eqref{est_eta}, it follows that \begin{align*} \|{\partial}_y{\partial}_\tau^\beta u(t)\|_{L_l^2(\Omega)} \leq~&\|{\partial}_y u_\beta(t)\|_{L_l^2(\Omega)}+\|\eta_1(t)\|_{L^\infty(\Omega)}\|h_\beta(t)\|_{L_l^2(\Omega)}\\ &+\big\|\langle y\rangle^{l+1}\big({\partial}_y^2u+U\phi^{(3)}\big)(t)\big\|_{L^\infty(\Omega)}\Big\|\frac{1}{1+y}\int_0^y\frac{h_\beta(t,x,z)}{h(t,x,z)+H(t,x)\phi'(z)}dz\Big\|_{L^2(\Omega)}\\ \leq~&\|{\partial}_yu_\beta(t)\|_{L_l^2(\Omega)}+\delta_0^{-1}\big(C\|U(t)\|_{L^\infty(\mathbb T_x)}+\|{\partial}_yu(t)\|_{L^\infty(\Omega)}\big)\|h_\beta(t)\|_{L_l^2(\Omega)}\\ &+2\big(C\|U(t)\|_{L^\infty(\mathbb T_x)}+\|\langle y\rangle^{l+1}{\partial}_y^2u(t)\|_{L^\infty(\Omega)}\big)\Big\|\frac{h_\beta}{h+H\phi'}\Big\|_{L^2(\Omega)}\\ \leq~&\|{\partial}_yu_\beta(t)\|_{L_l^2(\Omega)}+ M(t)\|h_\beta(t)\|_{L_l^2(\Omega)}, \end{align*} and similarly, \begin{align*} \|{\partial}_y{\partial}_\tau^\beta h(t)\|_{L_l^2(\Omega)} \leq~&\|{\partial}_y h_\beta(t)\|_{L_l^2(\Omega)}+ M(t)\|h_\beta(t)\|_{L_l^2(\Omega)}. \end{align*} Combining the above two inequalities with \eqref{def_M} yields \begin{align*} \|{\partial}_y{\partial}_\tau^\beta (u, h)(t)\|_{L_l^2(\Omega)} \leq~&\|{\partial}_y (u_\beta, h_\beta)(t)\|_{L_l^2(\Omega)}+M(t)\|h_\beta(t)\|_{L_l^2(\Omega)}.
\end{align*} Thus we obtain \eqref{equi_y1} and this completes the proof. \end{proof} \iffalse {\color{red}????????????????} Note that $\eta_2|_{y=0}=0$ and \[{\partial}_y\big(\eta_2{\partial}_x^m\psi\big)=\eta_2{\partial}_y{\partial}_x^m\psi+{\partial}_y\eta_2{\partial}_x^m\psi~=~0,\quad\mbox{on}~\{y=0\},\] by virtue of \eqref{psi} and \eqref{est_v}, it implies that by integration by parts, \begin{equation}\label{est_r21}\begin{split} J_1&= -\kappa\int_{\Omega}\big(\langle y\rangle^{2l}{\partial}_y\hat b+2l\langle y\rangle^{2l-1}\hat b\big)\big(\eta_2{\partial}_x^m b+{\partial}_y\eta_2{\partial}_x^m\psi\big)dxdy\\ &\quad+\frac{\kappa}{16}\|{\partial}_y\hat b\|_{L_l^2(\Omega)}^2+C\|\eta_1\|_{L^\infty}^2\|\hat u\|_{L^2_l(\Omega)}^2\\ &\leq \frac{\mu}{16}\|{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}+\frac{k}{16}\|{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}+C\big(1+\|\eta_1\|_{L^\infty}^2\big)\|\hat u\|_{L^2_l(\Omega)}^2\\ &\quad +C\|\eta_1\|^2_{L^\infty}\|{\partial}_x^mb\|_{L_l^2}^2+C\|{\partial}_y\eta_1\|^2_{L_x^\infty(L^2_{y,l})}\|{\partial}_x^m\psi\|_{L_x^2(L_y^\infty)}^2\\ &\leq \frac{\mu}{16}\|{\partial}_y\hat{u}(t)\|_{L^2_l(\mathbb{R}^2_+)}^2+\frac{k}{16}\|{\partial}_y\hat{b}(t)\|_{L^2_l(\mathbb{R}^2_+)}^2+CP\big(E_2(t)\big)\big(\|\hat u(t)\|_{L^2_l(\Omega)}^2+\|b(t)\|_{H_l^{m,0}}^2\big). 
\end{split}\end{equation} By the estimate of $\widetilde R_1$ in \eqref{est-r} it gives \begin{equation} \label{est_r12}\begin{split} I_2\leq&\|\hat u\|_{L_l^2(\Omega)}\|\widetilde R_1\|_{L_l^2(\Omega)}\lesssim \|\hat u(t)\|_{L_l^2(\Omega)}^2+P\big(E_{m-1}(t)\big)\big(1+\|(u,b)(t)\|_{H_l^{m,0}}^2\big) \end{split} \end{equation} provided $m\geq4.$ Pluggin \eqref{est_r11} and \eqref{est_r12} into \eqref{est-R1} yields that \begin{equation} \label{est-r1}\begin{split} \int_0_{\mathbb{R}^2_+}\langle y\rangle^{2l}\hat u\widehat{R}_1dxdy\leq&\frac{\mu}{16}\|{\partial}_y\hat{u}(t)\|_{L^2_l(\mathbb{R}^2_+)}^2+\frac{k}{16}\|{\partial}_y\hat{b}(t)\|_{L^2_l(\mathbb{R}^2_+)}^2\\ &+CP\big(E_{m-1}(t)\big)\big(\|\hat u(t)\|_{L^2_l(\Omega)}^2+\|(u,b)(t)\|_{H_l^{m,0}}^2\big). \end{split} \end{equation} Then, it suffices to estimate each term in $\bar{R}_1$. First, we estimate each term in $R_1$ \begin{align*} \|[(u_s+u), {\partial}_\tau^m]{\partial}_xu\|_{L^2_l(\mathbb{R}^2_+)}\leq C(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|u\|_{H^m_l(\mathbb{R}^2_+)}, \end{align*} and \begin{align*} \|[{\partial}_y(u_s+u), {\partial}_\tau^m]v\|_{L^2_l(\mathbb{R}^2_+)}\leq C(M+\|{\partial}_yu\|_{H^m_l(\mathbb{R}^2_+)})\|u\|_{H^m_l(\mathbb{R}^2_+)}. \end{align*} Similarly, we have \begin{align*} \|[(1+b), {\partial}_\tau^m]{\partial}_xb\|_{L^2_l(\mathbb{R}^2_+)}\leq C(1+\|b\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}, \end{align*} and \begin{align*} \|[{\partial}_yb, {\partial}_\tau^m]g\|_{L^2_l(\mathbb{R}^2_+)}\leq C(1+\|{\partial}_yb\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}. \end{align*} Consequently, \begin{align*} \|R_1\|_{L^2_l(\mathbb{R}^2_+)}\leq& C(M+\|u\|_{H^m_l(\mathbb{R}^2_+)}+\|{\partial}_yu\|_{H^m_l(\mathbb{R}^2_+)})\|u\|_{H^m_l(\mathbb{R}^2_+)}\\ &+C(1+\|b\|_{H^m_l(\mathbb{R}^2_+)}+\|{\partial}_yb\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}. 
\end{align*} And \begin{align*} \|v{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}\leq& C\|v\|_{L^\infty}\|{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}\leq C\|u_x\|_{L^\infty_x(L^2_{yl})}\|{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq &C\|u\|_{H^m_l(\mathbb{R}^2_+)}\|{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}\leq C\|u\|^2_{H^m_l(\mathbb{R}^2_+)}+\frac{\mu}{16}\|{\partial}_y\hat{u}\|^2_{L^2_l(\mathbb{R}^2_+)}. \end{align*} Notice that \begin{align*} &(1+b){\partial}_x(\frac{{\partial}_yb}{1+b}{\partial}_\tau^m\psi)+{\partial}_\tau^mg{\partial}_yb\\ =&(1+b){\partial}_x(\frac{{\partial}_yb}{1+b}){\partial}_\tau^m\psi+{\partial}_yb{\partial}_x{\partial}_\tau^m\psi-{\partial}_\tau^m{\partial}_x\psi{\partial}_yb\\ =&(1+b){\partial}_x(\frac{{\partial}_yb}{1+b}){\partial}_\tau^m\psi, \end{align*} and \begin{align*} \|(1+b){\partial}_x(\frac{{\partial}_yb}{1+b}){\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}\leq& C\|(1+b)\|_{L^\infty}\|{\partial}_x(\frac{{\partial}_yb}{1+b})\|_{L^2_{y,l}(L^\infty_x)}\|{\partial}_\tau^m\psi\|_{L^\infty_y(L^2_x)}\\ \leq &C(1+\|b\|_{H^m_l(\mathbb{R}^2_+)})(\|b\|_{H^m_l(\mathbb{R}^2_+)}+\|b\|^2_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}. \end{align*} And \begin{align*} \|g{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}\leq& C\|g\|_{L^\infty}\|{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}\leq C\|b_x\|_{L^\infty_x(L^2_{yl})}\|{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq &C\|b\|_{H^m_l(\mathbb{R}^2_+)}\|{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)} \leq C\|b\|^2_{H^m_l(\mathbb{R}^2_+)}+\frac{k}{16}\|{\partial}_y\hat{b}\|^2_{L^2_l(\mathbb{R}^2_+)}. \end{align*} By the similar arguments as in the estimation of $R_1$, we can estimate the commutator of $R_0$, then we have \begin{align*} &\|R_0\frac{{\partial}_y(u_s+u)}{1+b}\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq& C(M+\|u\|_{H^m_l(\mathbb{R}^2_+)}) [(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}+(1+\|b\|_{H^m_l(\mathbb{R}^2_+)})\|u\|_{H^m_l(\mathbb{R}^2_+)}]. 
\end{align*} And \begin{align*} &\|{\partial}_\tau^m\psi\Big[{\partial}_t(\frac{{\partial}_y(u_s+u)}{1+b})+(u_s+u){\partial}_x(\frac{{\partial}_y(u_s+u)}{1+b})-k{\partial}_y^2(\frac{{\partial}_y(u_s+u)}{1+b})\Big]\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq&\|{\partial}_\tau^m\psi\|_{L^\infty_{y}(L^2_x)}\|\Big[{\partial}_t(\frac{{\partial}_y(u_s+u)}{1+b})+(u_s+u){\partial}_x(\frac{{\partial}_y(u_s+u)}{1+b})-k{\partial}_y^2(\frac{{\partial}_y(u_s+u)}{1+b})\Big]\|_{L^2_{yl}(L^\infty_x)}\\ \leq&C\|b\|_{H^m_l(\mathbb{R}^2_+)}\Big\{\tilde{M}+\|u\|_{H^m_l(\mathbb{R}^2_+)}+(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}\\ &+ (M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\Big[\tilde{M}+\|u\|_{H^m_l(\mathbb{R}^2_+)}+(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}\Big]\\ &+(\tilde{M}+\|u\|_{H^m_l(\mathbb{R}^2_+)})(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)} \Big\} \end{align*} where \begin{align*} \tilde{M}\triangleq\sup\{\|{\partial}_t{\partial}_yu_s\|_{L^2_{yl}(L^\infty_x)}, \|{\partial}_yu_s\|_{L^2_{yl}(L^\infty_x)}, \|{\partial}^3_yu_s\|_{L^2_{yl}(L^\infty_x)}\}. \end{align*} And \begin{align*} &\|2k{\partial}_y(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_y{\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq& C\|{\partial}_y(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_y{\partial}_\tau^m\psi\|_{L^2_{yl}(L^\infty_x)}\|{\partial}_y{\partial}_\tau^m\psi\|_{L^\infty_y(L^2_x)}\\ \leq& \delta\|{\partial}_yb\|_{H^m_l(\mathbb{R}^2_+)}+C\delta^{-1}(\tilde{M}+\|u\|_{H^m_l(\mathbb{R}^2_+)}+(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}). 
\end{align*} Moreover, \begin{align*} &\Big|\int_{\mathbb{R}^2_+}(\mu-k){\partial}_y^2\Big[(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi\Big]\hat{u}\langle y\rangle^{2l}dxdy\Big|\\ \leq&C \Big|\int_{\mathbb{R}^2_+}{\partial}_y\Big[(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi\Big]{\partial}_y\hat{u}\langle y\rangle^{2l}dxdy\Big|+C\Big|\int_{\mathbb{R}^2_+}{\partial}_y\Big[(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi\Big]\hat{u}(\langle y\rangle^{2l})'dxdy\Big|\\ \leq&C(\|{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}+\|\hat{u}\|_{L^2_l(\mathbb{R}^2_+)})\|{\partial}_y\Big[(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi\Big]\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq& \frac{\mu}{16}\|{\partial}_y\hat{u}\|^2_{L^2_l(\mathbb{R}^2_+)}+C\|\hat{u}\|^2_{L^2_l(\mathbb{R}^2_+)}+C\|{\partial}_y\Big[(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi\Big]\|^2_{L^2_l(\mathbb{R}^2_+)}, \end{align*} and \begin{align*} &\|{\partial}_y\Big[(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi\Big]\|^2_{L^2_l(\mathbb{R}^2_+)}\\ \leq &C\|{\partial}_y{\partial}_\tau^m\psi\|^2_{L^\infty_{y}(L^2_x)}(\tilde{M}^2+\|u\|^2_{L^2_{yl}(L^\infty_x)})\\ &+C\|{\partial}_\tau^m\psi\|^2_{L^\infty_{y}(L^2_x)}(\tilde{M}^2+\|{\partial}_yu\|^2_{L^2_{yl}(L^\infty_x)}+(M^2+\|u\|^2_{L^\infty})\|{\partial}_yu\|^2_{L^2_{yl}(L^\infty_x)})\\ \leq &C(\tilde{M}^2+\|u\|^2_{L^2_{yl}(L^\infty_x)})\|{\partial}_yb\|^2_{H^m_l(\mathbb{R}^2_+)}\\ &+C\|b\|^2_{H^m_l(\mathbb{R}^2_+)}(\tilde{M}^2+\|{\partial}_yu\|^2_{L^2_{yl}(L^\infty_x)}+(M^2+\|u\|^2_{L^\infty})\|{\partial}_yu\|^2_{L^2_{yl}(L^\infty_x)})\\ \leq &C(\tilde{M}^2+\|u\|^2_{H^m_l(\mathbb{R}^2_+)})\|{\partial}_yb\|^2_{H^m_l(\mathbb{R}^2_+)}\\ &+C\|b\|^2_{H^m_l(\mathbb{R}^2_+)}(\tilde{M}^2+\|u\|^2_{H^m_l(\mathbb{R}^2_+)}+(M^2+\|u\|^2_{H^m_l(\mathbb{R}^2_+)})\|u\|^2_{H^m_l(\mathbb{R}^2_+)}).
\end{align*} \textbf{\textit{ Estimates of $\int_{\mathbb{R}^2_+}\bar{R}_2\hat{b}\langle y\rangle^{2l}dxdy$}}: \begin{align*} \int_{\mathbb{R}^2_+}\bar{R}_2\hat{b}\langle y\rangle^{2l}dxdy\leq \|\bar{R}_2\|_{L^2_l(\mathbb{R}^2_+)}\|\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}. \end{align*} We estimate each term in $\bar{R}_2$. First, each term in $R_2$ is estimated as follows. \begin{align*} \|[(u_s+u), {\partial}_\tau^m]{\partial}_xb\|_{L^2_l(\mathbb{R}^2_+)}\leq C(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}, \end{align*} and \begin{align*} \|[{\partial}_y(u_s+u), {\partial}_\tau^m]g\|_{L^2_l(\mathbb{R}^2_+)}\leq C(M+\|{\partial}_yu\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}. \end{align*} Similarly, \begin{align*} \|[(1+b), {\partial}_\tau^m]{\partial}_xu\|_{L^2_l(\mathbb{R}^2_+)}\leq C(1+\|b\|_{H^m_l(\mathbb{R}^2_+)})\|u\|_{H^m_l(\mathbb{R}^2_+)}, \end{align*} and \begin{align*} \|[{\partial}_yb, {\partial}_\tau^m]v\|_{L^2_l(\mathbb{R}^2_+)}\leq C(1+\|{\partial}_yb\|_{H^m_l(\mathbb{R}^2_+)})\|u\|_{H^m_l(\mathbb{R}^2_+)}. \end{align*} We now estimate the other terms in $\bar{R}_2$. \begin{align*} \|v{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}\leq& \|v\|_{L^\infty}\|{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}\leq C\|u_x\|_{L^\infty_x(L^2_{yl})}\|{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq &C\|u\|_{H^m_l(\mathbb{R}^2_+)}\|{\partial}_y\hat{b}\|_{L^2_l(\mathbb{R}^2_+)}\leq C\|u\|^2_{H^m_l(\mathbb{R}^2_+)}+\frac{k}{16}\|{\partial}_y\hat{b}\|^2_{L^2_l(\mathbb{R}^2_+)}.
\end{align*} Notice that \begin{align*} &(1+b){\partial}_x(\frac{{\partial}_y(u_s+u)}{1+b}{\partial}_\tau^m\psi)+{\partial}_\tau^mg{\partial}_y(u_s+u)\\ =&(1+b){\partial}_x(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi+{\partial}_y(u_s+u){\partial}_x{\partial}_\tau^m\psi-{\partial}_\tau^m{\partial}_x\psi{\partial}_y(u_s+u)\\ =&(1+b){\partial}_x(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi, \end{align*} and \begin{align*} &\|(1+b){\partial}_x(\frac{{\partial}_y(u_s+u)}{1+b}){\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq& C\|(1+b)\|_{L^\infty}\|{\partial}_x(\frac{{\partial}_y(u_s+u)}{1+b})\|_{L^2_{y,l}(L^\infty_x)}\|{\partial}_\tau^m\psi\|_{L^\infty_y(L^2_x)}\\ \leq &(1+\|b\|_{H^m_l(\mathbb{R}^2_+)}) (\tilde{M}+\|{\partial}_{xy}u\|_{L^2_{yl}(L^\infty_x)}+(M+\|{\partial}_yu\|_{L^\infty})\|{\partial}_xb\|_{L^2_{yl}(L^\infty_x)})\|b\|_{H^m_l(\mathbb{R}^2_+)}\\ \leq &(1+\|b\|_{H^m_l(\mathbb{R}^2_+)}) (\tilde{M}+\|u\|_{H^m_l(\mathbb{R}^2_+)}+(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}. \end{align*} And \begin{align*} \|g{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}\leq& C\|g\|_{L^\infty}\|{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}\leq C\|b_x\|_{L^\infty_x(L^2_{yl})}\|{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq &C\|b\|_{H^m_l(\mathbb{R}^2_+)}\|{\partial}_y\hat{u}\|_{L^2_l(\mathbb{R}^2_+)} \leq C\|b\|^2_{H^m_l(\mathbb{R}^2_+)}+\frac{\mu}{16}\|{\partial}_y\hat{u}\|^2_{L^2_l(\mathbb{R}^2_+)}. 
\end{align*} Moreover, \begin{align*} &\|R_0\frac{{\partial}_yb}{1+b}\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq& C\|b\|_{H^m_l(\mathbb{R}^2_+)} [(M+\|u\|_{H^m_l(\mathbb{R}^2_+)})\|b\|_{H^m_l(\mathbb{R}^2_+)}+(1+\|b\|_{H^m_l(\mathbb{R}^2_+)})\|u\|_{H^m_l(\mathbb{R}^2_+)}], \end{align*} and \begin{align*} &\|{\partial}_\tau^m\psi\Big[{\partial}_t(\frac{{\partial}_yb}{1+b})+(u_s+u){\partial}_x(\frac{{\partial}_yb}{1+b})-k{\partial}_y^2(\frac{{\partial}_yb}{1+b})\Big]\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq&\|{\partial}_\tau^m\psi\|_{L^\infty_{y}(L^2_x)}\|\Big[{\partial}_t(\frac{{\partial}_yb}{1+b})+(u_s+u){\partial}_x(\frac{{\partial}_yb}{1+b})-k{\partial}_y^2(\frac{{\partial}_yb}{1+b})\Big]\|_{L^2_{yl}(L^\infty_x)}\\ \leq&\|b\|_{H^m_l(\mathbb{R}^2_+)}\Big\{(M+\|u\|_{H^m_l(\mathbb{R}^2_+)}+\|b\|_{H^m_l(\mathbb{R}^2_+)})(\|b\|_{H^m_l(\mathbb{R}^2_+)}+\|b\|_{H^m_l(\mathbb{R}^2_+)}^2)\Big\}. \end{align*} In addition, \begin{align*} &\|2k{\partial}_y(\frac{{\partial}_yb}{1+b}){\partial}_y{\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}\\ \leq& C\|{\partial}_y(\frac{{\partial}_yb}{1+b})\|_{L^2_{yl}(L^\infty_x)}\|{\partial}_y{\partial}_\tau^m\psi\|_{L^\infty_y(L^2_x)}\\ \leq& \delta\|{\partial}_yb\|^2_{H^m_l(\mathbb{R}^2_+)}+C\delta^{-1}(\|b\|_{H^m_l(\mathbb{R}^2_+)}+\|b\|^2_{H^m_l(\mathbb{R}^2_+)})^2. \end{align*} \fi \subsection{Closeness of the a priori estimates} \indent\newline In this subsection, we will prove Proposition \ref{prop_priori}. Before that, we need some preliminaries.
First of all, we know from \eqref{ass_h} that \begin{align*} \big\|\langle y\rangle^{l+1}{\partial}_y^i(u, h)(t)\big\|_{L^\infty(\Omega)}\leq \delta_0^{-1},\quad\mbox{for}\quad i=1,2,\quad t\in[0,T], \end{align*} which, combined with the definitions \eqref{def_eta} of $\eta_i, i=1,2$, and \eqref{def_M} of $M(t)$, implies that for $\delta_0$ sufficiently small, \begin{align}\label{bound_eta} \|\langle y\rangle^{l+1}\eta_i(t)\|_{L^\infty(\Omega)}\leq2\delta_0^{-2},\quad M(t)\leq2\delta_0^{-1}\big(C\|(U,H)(t)\|_{L^\infty(\mathbb T_x)}+2\delta_0^{-1}\big)\leq 5\delta^{-2}_0,\quad i=1,2. \end{align} Then, recalling that $D^\alpha={\partial}_\tau^\beta{\partial}_y^k$, we obtain by \eqref{equi} and \eqref{equi_y1} given in Lemma \ref{lem_equ} that \begin{align*} \|(u, h)(t)\|_{\mathcal H_l^m}^2=&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u, h)(t)\|_{L_l^2(\Omega)}^2+\sum_{|\beta|=m}\|{\partial}_\tau^\beta(u, h)(t)\|_{L_l^2(\Omega)}^2\nonumber\\ \leq&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u, h)(t)\|_{L_l^2(\Omega)}^2+25\delta_0^{-4}\sum_{|\beta|=m}\|(u_\beta,h_\beta)(t)\|_{L_l^2(\Omega)}^2, \end{align*} and \begin{align*} \|{\partial}_y (u, h)(t)\|_{\mathcal H_l^m}^2=&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\| D^\alpha{\partial}_y (u, h)(t)\|_{L_l^2(\Omega)}^2+\sum_{|\beta|=m}\|{\partial}_y{\partial}_\tau^\beta (u, h)(t)\|_{L_l^2(\Omega)}^2\nonumber\\ \leq&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\| D^\alpha {\partial}_y(u, h)(t)\|_{L_l^2(\Omega)}^2+2\sum_{|\beta|=m}\|{\partial}_y (u_\beta, h_\beta)(t)\|_{L_l^2(\Omega)}^2+50\delta_0^{-4}\sum_{|\beta|=m}\|h_\beta (t)\|_{L^2_l(\Omega)}^2.
\end{align*} Consequently, we have the following \begin{cor} \label{cor_equi} Under the assumptions of Proposition \ref{prop_priori}, for any $t\in[0,T]$ and the quantity $(u_\beta, h_\beta), |\beta|=m$ given by \eqref{new}, it holds that \begin{align}\label{equ_m0} \|(u, h)(t)\|_{\mathcal H_l^m}^2 \leq&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u, h)(t)\|_{L_l^2(\Omega)}^2+25\delta_0^{-4}\sum_{|\beta|=m}\|(u_\beta, h_\beta)(t)\|_{L_l^2(\Omega)}^2, \end{align} and \begin{align} \label{equ_ym} \|{\partial}_y(u, h)(t)\|_{\mathcal H_l^m}^2 \leq&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\| D^\alpha{\partial}_y(u, h)(t)\|_{L_l^2(\Omega)}^2+2\sum_{|\beta|=m}\|{\partial}_y(u_\beta, h_\beta)(t)\|_{L_l^2(\Omega)}^2+50\delta_0^{-4}\sum_{|\beta|=m}\|h_\beta (t)\|_{L^2_l(\Omega)}^2. \end{align} \end{cor} Now, we can derive the desired a priori estimates of $(u,h)$ for the problem \eqref{bl_main}. From Proposition \ref{prop_estm} and \ref{prop_xm}, it follows that for $m\geq5$ and any $t\in[0,T],$ \begin{align}\label{est_all} &\frac{d}{dt}\Big(\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\big\|D^\alpha(u, h)(t)\big\|^2_{L^2_l(\mathbb{R}^2_+)}+25\delta_0^{-4}\sum_{|\beta|=m}\big\|(u_\beta, h_\beta)(t)\big\|_{L_l^2(\Omega)}^2\Big)\nonumber\\ &+\mu\Big(\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\big\|D^\alpha{\partial}_y u(t)\big\|^2_{L^2_l(\mathbb{R}^2_+)}+25\delta_0^{-4}\sum_{|\beta|=m}\big\|{\partial}_y u_\beta(t)\big\|_{L_l^2(\Omega)}^2\Big)\nonumber\\ &+\kappa\Big(\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\big\|D^\alpha{\partial}_y h(t)\big\|^2_{L^2_l(\mathbb{R}^2_+)}+25\delta_0^{-4}\sum_{|\beta|=m}\big\|{\partial}_y h_\beta(t)\big\|_{L_l^2(\Omega)}^2\Big)\nonumber\\ \leq~& \delta_1C\|{\partial}_y(u, h)(t)\|_{\mathcal H_0^m}^2+C\delta_1^{-1}\|(u, h)(t)\|^2_{\mathcal H_l^m}\big(1+\|(u, h)(t)\|^2_{\mathcal H_l^m}\big)+\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(r_1, r_2)(t)\|_{L^2_{l+k}(\Omega)}^2\nonumber\\ 
&+C\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta (U,H,P)(t)\|_{L^2(\mathbb T_x)}^2+25\delta_0^{-4}\sum_{|\beta|=m}\Big(\|{\partial}_\tau^\beta r_1-\eta_1{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}^2+\|{\partial}_\tau^\beta r_2-\eta_2{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}^2\Big)\nonumber\\ &+C\delta_0^{-6}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)(t)\|_{\mathcal H_l^m}\big)^2\Big(\sum_{|\beta|=m}\|(u_\beta, h_\beta)(t)\|_{L^2_l(\Omega)}^2\Big)\nonumber\\ &+C\delta_0^{-8}\big(\sum_{|\beta|\leq m+2}\|{\partial}_\tau^{\beta}(U,H)(t)\|_{L^2(\mathbb T_x)}+\|(u,h)\|_{\mathcal H_{l}^m}\big)^4\|(u ,h)(t)\|_{\mathcal H_l^m}^2. \end{align} \iffalse Note that from the equations \eqref{eq_main} we know that there exist positive constants $\mathcal{I}_m$ such that \begin{equation*} \sum_{|\alpha|\leq m}\big\|D^\alpha(u,b)(0)\big\|_{L_l^2(\Omega)}~\lesssim~\big\|(u_0,b_0)\big\|_{H_l^{2m}(\Omega)(\Omega)}^m, \end{equation*} and combining with \eqref{est_equib}, \begin{equation} \label{est_ini} \sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\big\|D^\alpha(u, b)(0)\big\|^2_{L^2_l(\mathbb R^2_+)}+\sum_{|\alpha|=m}\big\|(\hat u_{\alpha0},\hat b_{\alpha0})\big\|_{L_l^2(\Omega)}^2~\leq~ \tilde C_m\big\|(u_0,b_0)\big\|_{H_l^{2m}(\Omega)(\Omega)}^{2m+1} \end{equation} for some constant $\tilde C_m>0.$ Recall that $D^\alpha={\partial}_\tau^\beta{\partial}_y^k$, we obtain that by \eqref{equi1} and \eqref{equi_y1} in Lemma \ref{lem_equ}, \begin{align} \label{equ_m0} \|(u,b)\|_{\mathcal B_l^m(\Omega)}^2=&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u,b)(t)\|_{L_l^2(\Omega)}^2+\sum_{|\alpha|=m}\|{\partial}_\tau^\beta(u,b)(t)\|_{L_l^2(\Omega)}^2\nonumber\\ \lesssim&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u,b)(t)\|_{L_l^2(\Omega)}^2+M(t)^2\sum_{|\alpha|=m}\|(\hat u_\alpha,\hat b_\alpha)(t)\|_{L_l^2(\Omega)}^2, \end{align} \begin{align} \label{equ_m} \|(u,b)\|_{\mathcal 
A_l^m(\Omega)}^2=&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u,b)\|_{L_L^2(\Omega)}^2+\sum_{|\alpha|=m}\|{\partial}_\tau^\beta(u,b)\|_{L_L^2(\Omega)}^2\nonumber\\ \lesssim&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u,b)\|_{L_L^2(\Omega)}^2+M(t)^2\sum_{|\alpha|=m}\|(\hat u_\alpha,\hat b_\alpha)\|_{L_L^2(\Omega)}^2, \end{align} where $M(t)$ is given in \eqref{def_M}, and \begin{align} \label{equ_ym} \|{\partial}_y(u,b)\|_{\mathcal A_0^m(\Omega)}^2=&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|{\partial}_y D^\alpha(u,b)\|_{L_L^2(\Omega)}^2+\sum_{|\alpha|=m}\|{\partial}_y{\partial}_\tau^\beta(u,b)\|_{L_L^2(\Omega)}^2\nonumber\\ \lesssim&\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|{\partial}_y D^\alpha(u,b)\|_{L_L^2(\Omega)}^2+\sum_{|\alpha|=m}\|{\partial}_y(\hat u_\alpha,\hat b_\alpha)\|_{L_L^2(\Omega)}^2+P\big(E_3(t)\big)\sum_{|\alpha|=m}\|{\partial}_\tau^\beta b\|_{L^2_l(\Omega)}. \end{align} \fi Plugging the inequalities \eqref{equ_m0} and \eqref{equ_ym} given in Corollary \ref{cor_equi} into \eqref{est_all}, and choosing $\delta_1$ small enough, we get \begin{align}\label{est_all1} &\frac{d}{dt}\Big(\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\big\|D^\alpha(u, h)(t)\big\|^2_{L^2_l(\mathbb{R}^2_+)}+25\delta_0^{-4}\sum_{|\beta|=m}\big\|(u_\beta, h_\beta)(t)\big\|_{L_l^2(\Omega)}^2\Big)\nonumber\\ &+\Big(\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\big\|D^\alpha{\partial}_y (u, h)(t)\big\|^2_{L^2_l(\mathbb{R}^2_+)}+25\delta_0^{-4}\sum_{|\beta|=m}\big\|{\partial}_y (u_\beta, h_\beta)(t)\big\|_{L_l^2(\Omega)}^2\Big)\nonumber\\ \leq~&C\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(r_1, r_2)(t)\|_{L^2_{l+k}(\Omega)}^2+C\delta_0^{-4}\sum_{|\beta|=m}\Big(\|{\partial}_\tau^\beta (r_1, r_2)(t)\|_{L_l^2(\Omega)}^2+4\delta_0^{-4}\|{\partial}_\tau^\beta r_3\|_{L_{-1}^2(\Omega)}^2\Big)\nonumber\\ &+C\delta_0^{-8}\Big(1+\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta (U,H,P)(t)\|_{L^2(\mathbb 
T_x)}^2\Big)^3\nonumber\\ &+C\delta_0^{-8}\Big(\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u, h)(t)\|_{L_l^2(\Omega)}^2+25\delta_0^{-4}\sum_{|\beta|=m}\|(u_\beta, h_\beta)(t)\|_{L_l^2(\Omega)}^2\Big)^3, \end{align} where we have used the fact that \[\|\eta_i{\partial}_\tau^\beta r_3\|_{L_l^2(\Omega)}\leq\|\langle y\rangle^{l+1}\eta_i(t)\|_{L^\infty(\Omega)}\|\langle y\rangle^{-1}{\partial}_\tau^\beta r_3\|_{L^2(\Omega)}\leq 2\delta_0^{-2}\|{\partial}_\tau^\beta r_3\|_{L_{-1}^2(\Omega)},\quad i=1,2.\] Denote \begin{align}\label{F_def} F_0~:=~\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u, h)(0)\|_{L_l^2(\Omega)}^2+25\delta_0^{-4}\sum_{|\beta|=m}\big\|(u_{\beta0}, h_{\beta0})\big\|_{L^2_l(\Omega)}^2, \end{align} and \begin{align}\label{Ft_def} F(t)~:=~&C\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(r_1, r_2)(t)\|_{L^2_{l+k}(\Omega)}^2+C\delta_0^{-4}\sum_{|\beta|=m}\Big(\|{\partial}_\tau^\beta (r_1, r_2)(t)\|_{L_l^2(\Omega)}^2+4\delta_0^{-4}\|{\partial}_\tau^\beta r_3\|_{L_{-1}^2(\Omega)}^2\Big)\nonumber\\ &+C\delta_0^{-8}\Big(1+\sum_{|\beta|\leq m+2}\|{\partial}_\tau^\beta (U,H,P)(t)\|_{L^2(\mathbb T_x)}^2\Big)^3. \end{align} Applying the comparison principle for ordinary differential equations to \eqref{est_all1}, we obtain \begin{align} \label{est_fin0} &\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u, h)(t)\|_{L_l^2(\Omega)}^2+25\delta_0^{-4}\sum_{|\beta|=m}\big\|(u_\beta, h_\beta)(t)\big\|_{L^2_l(\Omega)}^2\nonumber\\ &+\int_0^t\Big(\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\big\|D^\alpha{\partial}_y (u, h)(s)\big\|^2_{L^2_l(\Omega)}+25\delta_0^{-4}\sum_{|\beta|=m}\big\|{\partial}_y (u_\beta, h_\beta)(s)\big\|_{L_l^2(\Omega)}^2\Big)ds\nonumber\\ \leq~&\big(F_0+\int_0^tF(s)ds\big)\cdot\Big\{1-2C\delta_0^{-8}\big(F_0+\int_0^tF(s)ds\big)^2t\Big\}^{-\frac{1}{2}}.
\end{align} \iffalse which implies that by combining with \eqref{def_F}, \begin{align} \label{est_fin1} &\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(u, h)(t)\|_{L_l^2(\Omega)}^2+36\delta_0^{-4}\sum_{|\beta|=m}\big\|(u_\beta, h_\beta)(t)\big\|_{L^2_l(\Omega)}^2\nonumber\\ &+\int_0^t\Big(\sum_{\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\big\|D^\alpha{\partial}_y (u, h)(s)\big\|^2_{L^2_l(\Omega)}+36\delta_0^{-4}\sum_{|\beta|=m}\big\|{\partial}_y (u_\beta, h_\beta)(s)\big\|_{L_l^2(\Omega)}^2\Big)ds\nonumber\\ \leq~& \delta_0^{-6}~\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big)\cdot\Big[1-\delta_0^{-20}\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big)t\Big]^{-\frac{1}{2}}. \end{align} \fi Then, combining \eqref{equ_m0} with \eqref{est_fin0}, it follows that \begin{align} \label{est_fin2} \sup_{0\leq s\leq t}\|(u, h)(s)\|_{\mathcal H_l^m}~\leq~\big(F_0+\int_0^tF(s)ds\big)^{\frac{1}{2}}\cdot\Big\{1-2C\delta_0^{-8}\big(F_0+\int_0^tF(s)ds\big)^2t\Big\}^{-\frac{1}{4}}. \end{align} Note that \begin{align*} &\langle y\rangle^{l+1}{\partial}_y^i(u, h)(t,x,y)=\langle y\rangle^{l+1}{\partial}_y^i(u_0, h_0)(x,y)+\int_0^t\langle y\rangle^{l+1}{\partial}_t{\partial}_y^i(u, h)(s,x,y)ds,\quad i=1,2,\end{align*} and \begin{align*} &h(t,x,y)=h_0(x,y)+\int_0^t{\partial}_th(s,x,y)ds.
\end{align*} Then, by the Sobolev embedding inequality and \eqref{est_fin2} we have that for $i=1,2,$ \begin{align}\label{bound_uy} &\|\langle y\rangle^{l+1}{\partial}_y^i(u, h)(t)\|_{L^\infty(\Omega)}\nonumber\\ \leq ~&\|\langle y\rangle^{l+1}{\partial}_y^i(u_0, h_0)\|_{L^\infty(\Omega)}+\int_0^t\|\langle y\rangle^{l+1}{\partial}_t{\partial}_y^i(u, h)(s)\|_{L^\infty(\Omega)}ds\nonumber\\ \leq~&\|\langle y\rangle^{l+1}{\partial}_y^i(u_0, h_0)\|_{L^\infty(\Omega)}+C\big(\sup_{0\leq s\leq t}\|(u, h)(s)\|_{\mathcal H_l^5}\big)\cdot t\nonumber\\ \leq~&\|\langle y\rangle^{l+1}{\partial}_y^i(u_0, h_0)\|_{L^\infty(\Omega)}+C t\cdot\big(F_0+\int_0^tF(s)ds\big)^{\frac{1}{2}}\Big\{1-2C\delta_0^{-8}\big(F_0+\int_0^tF(s)ds\big)^2t\Big\}^{-\frac{1}{4}}. \end{align} Similarly, one can obtain that \begin{align}\label{bound_h} h(t,x,y)\geq ~&h_0(x,y)-\int_0^t\|{\partial}_t h(s)\|_{L^\infty(\Omega)}ds \geq~h_0(x,y)-C\big(\sup_{0\leq s\leq t}\|h(s)\|_{\mathcal H_0^3}\big)\cdot t\nonumber\\ \geq~&h_0(x,y)-C t\cdot\big(F_0+\int_0^tF(s)ds\big)^{\frac{1}{2}}\Big\{1-2C\delta_0^{-8}\big(F_0+\int_0^tF(s)ds\big)^2t\Big\}^{-\frac{1}{4}}. \end{align} Therefore, we obtain the following \begin{prop}\label{prop-priori} Under the assumptions of Proposition \ref{prop_priori}, there exists a constant $C>0$, depending only on $m, M_0$ and $\phi$, such that \begin{align}\label{est_priori-1} \sup_{0\leq s\leq t}\|(u, h)(s)\|_{\mathcal H_l^m}~\leq~\big(F_0+\int_0^tF(s)ds\big)^{\frac{1}{2}}\cdot\Big\{1-2C\delta_0^{-8}\big(F_0+\int_0^tF(s)ds\big)^2t\Big\}^{-\frac{1}{4}}, \end{align} for small time, where the quantities $F_0$ and $F(t)$ are defined by \eqref{F_def} and \eqref{Ft_def} respectively. 
Also, we have that for $i=1,2,$ \begin{align}\label{upbound_uy-1} &\|\langle y\rangle^{l+1}{\partial}_y^i(u, h)(t)\|_{L^\infty(\Omega)}\nonumber\\ \leq~&\|\langle y\rangle^{l+1}{\partial}_y^i(u_0, h_0)\|_{L^\infty(\Omega)}+Ct\cdot\big(\sup_{0\leq s\leq t}\|(u, h)(s)\|_{\mathcal H_l^5}\big)\nonumber\\ \leq~&\|\langle y\rangle^{l+1}{\partial}_y^i(u_0, h_0)\|_{L^\infty(\Omega)}+C t\cdot\big(F_0+\int_0^tF(s)ds\big)^{\frac{1}{2}}\Big\{1-2C\delta_0^{-8}\big(F_0+\int_0^tF(s)ds\big)^2t\Big\}^{-\frac{1}{4}}, \end{align} and \begin{align}\label{h_lowbound-1} h(t,x,y)\geq~&h_0(x,y)-C\big(\sup_{0\leq s\leq t}\|h(s)\|_{\mathcal H_0^3}\big)\cdot t\nonumber\\ \geq~&h_0(x,y)-C t\cdot\big(F_0+\int_0^tF(s)ds\big)^{\frac{1}{2}}\Big\{1-2C\delta_0^{-8}\big(F_0+\int_0^tF(s)ds\big)^2t\Big\}^{-\frac{1}{4}}. \end{align} \end{prop} With Proposition \ref{prop-priori} at hand, we are now ready to prove Proposition \ref{prop_priori}. Indeed, using \eqref{ass_outflow}, \eqref{est_rhd} and the fact that $\|{\partial}_\tau^\beta r_3\|_{L_{-1}^2(\Omega)}\leq CM_0$, which follows from the expression \eqref{r_3}, we deduce from the definition \eqref{Ft_def} of $F(t)$ that \begin{align}\label{est_Ft} F(t)~\leq~C\delta_0^{-8}M_0^6. \end{align} Next, by direct calculation we know that $D^\alpha (u, h)(0,x,y), |\alpha|\leq m,$ can be expressed in terms of the spatial derivatives of the initial data $(u_0, h_0)$ up to order $2m$. Then, combining this with \eqref{ib_hat}, we get that $F_0$, given by \eqref{F_def}, is a polynomial in $\big\|\big(u_{0}, h_{0}\big)\big\|_{H_l^{2m}(\Omega)}$, and consequently \begin{align}\label{def_F} F_0~\leq~\delta_0^{-8}~\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{2m}(\Omega)}\big). \end{align} Plugging \eqref{est_Ft} and \eqref{def_F} into \eqref{est_priori-1}-\eqref{h_lowbound-1}, we derive the estimates \eqref{est_priori}-\eqref{h_lowbound}, which completes the proof of Proposition \ref{prop_priori}.
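For the reader's convenience, we sketch the elementary comparison argument used above to derive \eqref{est_fin0}. Writing $E(t)$ for the quantity under $\frac{d}{dt}$ in \eqref{est_all1}, $a:=C\delta_0^{-8}$ and $G(t):=F_0+\int_0^tF(s)ds$, and dropping the nonnegative dissipation terms (which are recovered afterwards by integrating \eqref{est_all1} once more in time), we have for $0\leq s\leq t$:
\begin{align*}
E'(s)\leq F(s)+aE(s)^3\quad&\Longrightarrow\quad E(s)\leq G(t)+a\int_0^s E(\sigma)^3d\sigma,\\
y(s)=G(t)+a\int_0^s y(\sigma)^3d\sigma\quad&\Longrightarrow\quad y(s)=G(t)\big\{1-2aG(t)^2s\big\}^{-\frac{1}{2}},\\
E(t)\leq y(t)\quad&\Longrightarrow\quad E(t)\leq G(t)\Big\{1-2C\delta_0^{-8}G(t)^2t\Big\}^{-\frac{1}{2}},
\end{align*}
which is exactly the bound stated in \eqref{est_fin0}.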
\iffalse Then, by using \eqref{equi} and \eqref{equi_y1} in Corollary \ref{cor_equi} we obtain that from \eqref{est_fin2}, \begin{align*} \|(u,b)\|_{\mathcal B_l^m(\Omega)}^2+\|{\partial}_y(u,b)\|^2_{\mathcal A_l^m(\Omega)}\leq& \tilde C_m\big\|(u_0,b_0)\big\|_{H_l^{2m}(\Omega)(\Omega)}^{2m+1}\cdot P\big(E_3(t)\big)\exp\{P\big(E_3(T)\big)\cdot t\},\qquad\forall t\in[0,T]. \end{align*} Therefore, we obtain the following result: \begin{prop}\label{prop_m}[\textit{Weighted estimates for $D^m(u,b)$}]\\ Under the hypotheses of Proposition \ref{prop_tm}, it holds that for any $t\in[0,T],$ \begin{align}\label{est_tangm} \|(u,b)\|_{\mathcal B_l^m(\Omega)}^2+\|{\partial}_y(u,b)\|^2_{\mathcal A_l^m(\Omega)}\leq& \tilde C_m\big\|(u_0,b_0)\big\|_{H_l^{2m}(\Omega)(\Omega)}^{2m+1}\cdot P\big(E_3(t)\big)\exp\{P\big(E_3(T)\big)\cdot t\}. \end{align} \end{prop} Firstly, from the Proposition \ref{prop_xm} and Lemma \ref{lem_equ}, we immediately obtain that for any $t\in[0,T],$ \[\begin{split} &\quad\sum_{|\alpha|=m}\Big[\|{\partial}_\tau^\beta(u,b)(t)\|_{L^2_l(\Omega)}^2+\|{\partial}_y{\partial}_\tau^\beta(u,b)\|^2_{L^2_l(\Omega)}\Big]\\ &\lesssim\sum_{|\alpha|=m}\Big[M(t)\|(\hat u_\alpha,\hat b_\alpha)(t)\|_{L^2_l(\Omega)}^2+\|{\partial}_y(\hat u_\alpha,\hat b_\alpha)\|^2_{L^2_l(\Omega)}+P\big(E_2(t)\big)\|{\partial}_\tau^\beta b\|_{L_L^2(\Omega)}\Big]\\ &\lesssim M(t)\sum_{|\alpha|=m}\|(\hat u_\alpha,\hat b_\alpha)(0)\|_{L^2_l(\Omega)}^2+P\big(E_3(t)\big)\Big(\sum_{|\alpha|=m}\|(\hat u_\alpha, \hat b_\alpha)\|_{L^2_l(\Omega)}^2\\ &\qquad\qquad+\|(u,b)\|_{\mathcal A_l^{m,0}(\Omega)}^2+\|(u,b)\|_{\mathcal A_l^{m-1,1}(\Omega)}^2\Big), \end{split}\] which implies that by using Lemma \ref{lem_equ} again and \eqref{est_equib}, \begin{equation} \label{est_equ} \begin{split} &\quad\|(u,b)\|_{\mathcal B_l^{m,0}(\Omega)}+\|{\partial}_y(u,b)\|_{\mathcal A_l^{m,0}(\Omega)}\\ &\lesssim P\big(E_3(t)\big)\Big(\|(u_0, b_0)\|_{H^{2m}_l(\Omega)}^2+\|(u,b)\|_{\mathcal 
A_l^{m,0}(\Omega)}^2+\|(u,b)\|_{\mathcal A_l^{m-1,1}(\Omega)}^2\Big). \end{split}\end{equation} Below, we will show \begin{align*} &\sum_{\alpha_1+\alpha_2\leq m}\|{\partial}_\tau^{\alpha_1}{\partial}_y^{\alpha_2}b\|_{L^2_l(\mathbb{R}^2_+)}\\ \cong& \sum_{\alpha_1+\alpha_2\leq m, \alpha_1\leq m-1}\|{\partial}_\tau^{\alpha_1}{\partial}_y^{\alpha_2}b\|_{L^2_l(\mathbb{R}^2_+)} +\|{\partial}_\tau^mb-\frac{{\partial}_yb}{1+b}{\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}. \end{align*} On one hand \begin{align*} \|{\partial}_\tau^mb-\frac{{\partial}_yb}{1+b}{\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}\leq& \|{\partial}_\tau^mb\|_{L^2_l(\mathbb{R}^2_+)} +\|\frac{{\partial}_yb}{1+b}\|_{L^2_{yl}(L^\infty_x)}\|{\partial}_\tau^m\psi\|_{L^\infty_y(L^2_x)}\\ \leq& (1+C\|b\|_{H^m_l(\mathbb{R}^2_l)})\|{\partial}_\tau^mb\|_{L^2_l(\mathbb{R}^2_+)}, \end{align*} on another hand \begin{align*} \|{\partial}_\tau^mb-\frac{{\partial}_yb}{1+b}{\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}\geq& \|{\partial}_\tau^mb\|_{L^2_l(\mathbb{R}^2_+)} -\|\frac{{\partial}_yb}{1+b}\|_{L^2_{yl}(L^\infty_x)}\|{\partial}_\tau^m\psi\|_{L^\infty_y(L^2_x)}\\ \geq& (1-C\|b\|_{H^m_l(\mathbb{R}^2_l)})\|{\partial}_\tau^mb\|_{L^2_l(\mathbb{R}^2_+)}. \end{align*} In this way, if $\|b\|_{H^m_l(\mathbb{R}^2_l)}$ is suitably small, the equivalence is done. Moreover, we also have the following relationship. 
\begin{align*} \|{\partial}_\tau^mu-\frac{{\partial}_yb}{1+b}{\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}\leq & \|{\partial}_\tau^mu\|_{L^2_l(\mathbb{R}^2_+)} +\|\frac{{\partial}_yb}{1+b}\|_{L^2_{yl}(L^\infty_x)}\|{\partial}_\tau^m\psi\|_{L^\infty_y(L^2_x)}\\ \leq& \|{\partial}_\tau^mu\|_{L^2_l(\mathbb{R}^2_+)} +C\|b\|_{H^m_l(\mathbb{R}^2_l)})\|{\partial}_\tau^mb\|_{L^2_l(\mathbb{R}^2_+)}, \end{align*} and \begin{align*} \|{\partial}_\tau^mu-\frac{{\partial}_yb}{1+b}{\partial}_\tau^m\psi\|_{L^2_l(\mathbb{R}^2_+)}\geq& \|{\partial}_\tau^mu\|_{L^2_l(\mathbb{R}^2_+)} -\|\frac{{\partial}_yb}{1+b}\|_{L^2_{yl}(L^\infty_x)}\|{\partial}_\tau^m\psi\|_{L^\infty_y(L^2_x)}\\ \geq& \|{\partial}_\tau^mu\|_{L^2_l(\mathbb{R}^2_+)}-\|b\|_{H^m_l(\mathbb{R}^2_l)})\|{\partial}_\tau^mb\|_{L^2_l(\mathbb{R}^2_+)}. \end{align*} In addition, by similar arguments, we have \begin{align*} &\sum_{\alpha_1+\alpha_2\leq m}\|{\partial}_y({\partial}_\tau^{\alpha_1}{\partial}_y^{\alpha_2}b)\|_{L^2_l(\mathbb{R}^2_+)}\\ \cong& \sum_{\alpha_1+\alpha_2\leq m, \alpha_1\leq m-1}\|{\partial}_y({\partial}_\tau^{\alpha_1}{\partial}_y^{\alpha_2}b)\|_{L^2_l(\mathbb{R}^2_+)} +\|{\partial}_y({\partial}_\tau^mb-\frac{{\partial}_yb}{1+b}{\partial}_\tau^m\psi)\|_{L^2_l(\mathbb{R}^2_+)}. \end{align*} And then the similar relationship also holds for $u$. Under the assumptions that $\|u\|_{H^m_l(\mathbb{R}^2_+)}$ and $\|b\|_{H^m_l(\mathbb{R}^2_+)}$ suitably small, we combine all of the estimates in Subsection 2.1-2.3, use the equivalent relationship of the norms in Sobolev spaces, and chose $\delta$ small enough, then we obtain \begin{align} \label{2.29} &\frac{d}{dt}(\|u\|^2_{H^m_l(\mathbb{R}^2_+)}+\|b\|^2_{H^m_l(\mathbb{R}^2_+)})+(\|{\partial}_yu\|^2_{H^m_l(\mathbb{R}^2_+)}+\|{\partial}_yb\|^2_{H^m_l(\mathbb{R}^2_+)})\nonumber\\ \leq& C(\|u\|^2_{H^m_l(\mathbb{R}^2_+)}+\|b\|^2_{H^m_l(\mathbb{R}^2_+)}+\|u\|^6_{H^m_l(\mathbb{R}^2_+)}+\|b\|^6_{H^m_l(\mathbb{R}^2_+)}). 
\end{align} If $\mu\neq k$, we need an additional assumption that $\|{\partial}_yu_s\|_{L^2_{yl}(L^\infty_x)}$ small enough, then (\ref{2.29}) still holds for true. It is not difficult to find that we can close the a priori energy estimates, provided that $\|u_0\|_{H^m_l(\mathbb{R}^2_+)}$ and $\|b_0\|_{H^m_l(\mathbb{R}^2_+)}$ enough small. \fi \section{Local-in-time existence and uniqueness} In this section, we will establish the local-in-time existence and uniqueness of solutions to the nonlinear problem (\ref{bl_main}). \subsection{Existence} \indent\newline For this, we consider a parabolic regularized system for problem \eqref{bl_main}, from which we can obtain the local (in time) existence of solution by using classical energy estimates. Precisely, for a small parameter $0<\epsilon<1,$ we investigate the following problem: \begin{align} \label{pr_app} \left\{ \begin{array}{ll} \partial_tu^\epsilon+\big[(u^\epsilon+U\phi')\partial_x+(v^\epsilon-U_x\phi)\partial_y\big]u^\epsilon-\big[(h^\epsilon+H\phi')\partial_x+(g^\epsilon-H_x\phi)\partial_y\big]h^\epsilon+U_x\phi'u^\epsilon+U\phi''v^\epsilon\\ \qquad-H_x\phi'h^\epsilon-H\phi''g^\epsilon=\epsilon{\partial}_x^2u^\epsilon+\mu\partial^2_yu^\epsilon+r_1^\epsilon,\\ \partial_t h^\epsilon+\big[(u^\epsilon+U\phi')\partial_x+(v^\epsilon-U_x\phi)\partial_y\big]h^\epsilon-\big[(h^\epsilon+H\phi')\partial_x+(g^\epsilon-H_x\phi)\partial_y\big]u^\epsilon+H_x\phi'u^\epsilon+H\phi''v^\epsilon\\ \qquad-U_x\phi'h^\epsilon-U\phi''g^\epsilon=\epsilon{\partial}_x^2h^\epsilon+\kappa\partial^2_y h^\epsilon+r_2^\epsilon,\\ \partial_xu^\epsilon+\partial_y v^\epsilon=0,\quad \partial_xh^\epsilon+\partial_y g^\epsilon=0,\\ (u^\epsilon,h^\epsilon)|_{t=0}=(u_{0}, h_{0})(x,y),\qquad (u^\epsilon,v^\epsilon,\partial_yh^\epsilon,g^\epsilon)|_{y=0}=0, \end{array} \right. 
\end{align} \iffalse \begin{align} \label{pr_app} \left\{ \begin{array}{ll} \partial_tu_1^\epsilon+(u_1^\epsilon\partial_x+u_2^\epsilon\partial_y)u_1^\epsilon-(h_1^\epsilon\partial_x+h_2^\epsilon\partial_y)h_1^\epsilon=\epsilon{\partial}_x^2u_1^\epsilon+\mu\partial^2_yu_1^\epsilon-P_x^\epsilon,\\ \partial_th_1^\epsilon+\partial_y(u_2^\epsilon h_1^\epsilon-u_1^\epsilon h_2^\epsilon)=\epsilon{\partial}_x^2h_1^\epsilon+\kappa\partial_y^2h_1^\epsilon,\\ \partial_xu_1^\epsilon+\partial_yu_2^\epsilon=0,\quad \partial_xh_1^\epsilon+\partial_yh_2^\epsilon=0,\\ (u_1^\epsilon,h_1^\epsilon)|_{t=0}=(u_{10}, h_{10})(x,y)+\epsilon(\zeta_1^\epsilon,\zeta_2^\epsilon)(x,y)\triangleq (u_{10}^\epsilon,h_{10}^\epsilon)(x,y),\\%\quad h_1|_{t=0}=h_{10}(x,y),\\ (u_1^\epsilon,u_2^\epsilon,\partial_yh_1^\epsilon,h_2^\epsilon)|_{y=0}=0,\quad \lim\limits_{y\rightarrow+\infty}(u_1^\epsilon,h_1^\epsilon)=(U,H^\epsilon)(t,x). \end{array} \right. \end{align} \fi where the source term \begin{align}\label{new_source} (r_1^\epsilon,r_2^\epsilon)(t,x,y)~=~(r_1,r_2)+\epsilon(\tilde r_1^\epsilon, \tilde r_2^\epsilon)(t,x,y). \end{align} Here, $(r_1,r_2)$ is the source term of the original problem \eqref{bl_main}, and $(\tilde r_1^\epsilon, \tilde r_2^\epsilon)$ is constructed to ensure that the initial data $(u_{0}, h_{0})$ also satisfies the compatibility conditions of \eqref{pr_app} up to the order of $m$. Actually, we can use the given functions ${\partial}_t^i(u, h)(0,x,y), 0\leq i\leq m$, which can be derived from the equations and initial data of \eqref{bl_main} by induction with respect to $i$, and it follows that ${\partial}_t^i(u, h)(0,x,y)$ can be expressed as polynomials of the spatial derivatives, up to order $2i$, of the initial data $(u_0,h_0)$. 
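To motivate the Taylor-polynomial form of the corrector \eqref{modify} below, note that at $t=0$ the first equation of \eqref{pr_app} and the corresponding equation of \eqref{bl_main} share the same initial data, so all common terms coincide and
\begin{align*}
{\partial}_tu^\epsilon(0,x,y)~=~{\partial}_tu(0,x,y)+\epsilon\big({\partial}_x^2u_0(x,y)+\tilde r_1^\epsilon(0,x,y)\big).
\end{align*}
Hence, choosing $\tilde r_1^\epsilon$ so that ${\partial}_t^j\tilde r_1^\epsilon(0,x,y)=-{\partial}_x^2{\partial}_t^ju(0,x,y)$ for $0\leq j\leq m$, i.e. as minus the $m$-th order Taylor polynomial in $t$ of ${\partial}_x^2u$ at $t=0$, cancels the $\epsilon$-perturbation of the compatibility conditions order by order; the same applies to $\tilde r_2^\epsilon$ and $h$.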
Then, we may choose the corrector $(\tilde r_1^\epsilon, \tilde r_2^\epsilon)$ in the following form: \begin{align}\label{modify} (\tilde r_1^\epsilon, \tilde r_2^\epsilon)(t,x,y)~:=~-\sum_{i=0}^m\Big(\frac{t^i}{i!}{\partial}_x^2{\partial}_t^i(u, h)(0,x,y)\Big), \end{align} which, by direct calculation, yields \[{\partial}_t^i(u^\epsilon, h^\epsilon)(0,x,y)~=~{\partial}_t^i(u, h)(0,x,y),\quad 0\leq i\leq m.\] Likewise, we can derive that $\psi^\epsilon:={\partial}_y^{-1}h^\epsilon$ satisfies \begin{align*} {\partial}_t \psi^\epsilon+\big[(u^\epsilon+U\phi'){\partial}_x+(v^\epsilon-U_x\phi){\partial}_y\big]\psi^\epsilon+H_x\phi u^\epsilon+H\phi'v^\epsilon-\kappa{\partial}_y^2\psi^\epsilon=r_3^\epsilon, \end{align*} where \begin{align}\label{r_3ep} r_3^\epsilon~=~r_3-\epsilon\sum_{i=0}^m\Big(\frac{t^i}{i!}\int_0^y{\partial}_x^2{\partial}_t^ih(0,x,z)dz\Big)~=:~r_3+\epsilon\tilde r_3^\epsilon \end{align} with $r_3$ given by \eqref{r_3}. Moreover, we have for $\alpha=(\beta,k)=(\beta_1,\beta_2,k)$ with $|\alpha|\leq m$, \begin{align}\label{est_rmodify} \|D^\alpha (\tilde r_1^\epsilon, \tilde r_2^\epsilon)(t)\|_{L_{l+k}^2(\Omega)},~\|{\partial}_\tau^\beta \tilde r_3^\epsilon(t)\|_{L_{-1}^2(\Omega)}\leq \sum_{\beta_1\leq i\leq m}t^{i-\beta_1}\mathcal P\Big(M_0+\|(u_0,h_0)\|_{H_l^{2i+2+\beta_2+k}}\Big).
\end{align} Based on the a priori energy estimates established in Proposition \ref{prop-priori}, we can obtain \begin{prop} \label{Th2} Under the hypotheses of Theorem \ref{thm_main}, there exist a time $0<T_*\leq T$, independent of $\epsilon$, and a solution $(u^\epsilon,v^\epsilon,h^\epsilon,g^\epsilon)$ to the initial boundary value problem (\ref{pr_app}) with $(u^\epsilon, h^\epsilon)\in L^\infty\big(0,T_*; \mathcal H_l^m\big)$, which satisfies the following uniform estimates in $\epsilon$: \begin{align}\label{est_modify1} \sup_{0\leq t\leq T_*}\big\|\big(u^\epsilon, h^\epsilon\big)(t)\big\|_{\mathcal H_l^m}\leq~2F^{\frac{1}{2}}_0, \end{align} where $F_0$ is given by \eqref{F_def}. Moreover, for $t\in[0,T_*], (x,y)\in\Omega,$ \begin{align} \label{est_modify2} \big\|\langle y\rangle^{l+1}{\partial}_y^i(u^\epsilon,h^\epsilon)(t)\big\|_{L^\infty(\Omega)}\leq~\delta_0^{-1}, \quad h^\epsilon(t,x,y)+H(t,x)\phi'(y)~\geq~\delta_0,\quad i=1,2. \end{align} \end{prop} \iffalse we have\\ (i) If $\mu=k$, there exists a unique solution $(u_\epsilon,v_\epsilon, b_\epsilon,g_\epsilon)$ to the initial boundary value problem (\ref{pr_app}). Moreover, \begin{align} \label{3.3} \|u_\epsilon,b_\epsilon\|_{H^{m}_l(\mathbb{R}^2_+)}\leq C\|u_{\epsilon0},b_{\epsilon0}\|_{H^{m}_l(\mathbb{R}^2_+)}\leq C\|u_{\epsilon0}, b_{\epsilon0}\|_{\tilde{H}^{2m}_l(\mathbb{R}^2_+)}. \end{align} (ii) If $\mu\neq k$, we assume the additional condition that $\|{\partial}_yu_s\|_{L^2_{yl}(L^\infty_x)}$ is small enough; then there exists a unique solution $(u_\epsilon,v_\epsilon, b_\epsilon,g_\epsilon)$ to the initial boundary value problem (\ref{pr_app}). And the estimates (\ref{3.3}) still hold true. Here the constant $l>1/2$. \end{thm} \begin{rem} In general, the lifespan of solutions to the initial-boundary value problem (\ref{pr_app}) depends on the small parameter. However, in Section 2, we have established the uniform a priori energy estimates, which are independent of $\epsilon$.
That means the lifespan of solutions to (\ref{pr_app}) is indeed uniform with respect to small parameter $\epsilon$. \end{rem} \fi \begin{proof}[\textbf{Proof.}] Since the problem \eqref{pr_app} is a parabolic system, it is standard to show that \eqref{pr_app} admits a solution in a time interval $[0,T_\epsilon]$ ($T_\epsilon$ may depend on $\epsilon$) satisfying the estimates \eqref{est_modify2}. Indeed, one can establish a priori estimates for \eqref{pr_app}, and then obtain the local existence of solution by the standard iteration and weak convergence methods. On the other hand, we can derive the similar a priori estimates as in Proposition \ref{prop-priori} for \eqref{pr_app}, so by the standard continuity argument we can obtain the existence of solution in a time interval $[0,T_*], T_*>0$ independent of $\epsilon$. Therefore, we only determine the uniform lifespan $T_*$, and verify the estimates \eqref{est_modify1} and \eqref{est_modify2}. According to Proposition \ref{prop-priori}, we can obtain the estimates for $(u^\epsilon, h^\epsilon)$ similar as \eqref{est_priori-1}: \begin{align}\label{est_priori-ep1} \sup_{0\leq s\leq t}\|(u^\epsilon, h^\epsilon)(s)\|_{\mathcal H_l^m}~\leq~\big(F_0+\int_0^tF^\epsilon(s)ds\big)^{\frac{1}{2}}\cdot\Big\{1-2C\delta_0^{-8}\big(F_0+\int_0^tF^\epsilon(s)ds\big)^2t\Big\}^{-\frac{1}{4}}, \end{align} as long as the quantity in $\{\cdot\}$ on the right-hand side of \eqref{est_priori} is positive, where the quantity $F_0$ is given by \eqref{F_def}, and $F^\epsilon(t)$ is defined as follows (similar as \eqref{Ft_def}): \begin{align}\label{Ft_def-ep} F^\epsilon(t)~:=~&C\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(r_1^\epsilon, r_2^\epsilon)(t)\|_{L^2_{l+k}(\Omega)}^2+C\delta_0^{-4}\sum_{|\beta|=m}\Big(\|{\partial}_\tau^\beta (r_1^\epsilon, r_2^\epsilon)(t)\|_{L_l^2(\Omega)}^2+4\delta_0^{-4}\|{\partial}_\tau^\beta r_3^\epsilon\|_{L_{-1}^2(\Omega)}^2\Big)\nonumber\\ &+C\delta_0^{-8}\Big(1+\sum_{|\beta|\leq 
m+2}\|{\partial}_\tau^\beta (U,H,P)(t)\|_{L^2(\mathbb T_x)}^2\Big)^3. \end{align} Substituting \eqref{new_source}-\eqref{r_3ep} into \eqref{Ft_def-ep} and recalling $F(t)$ defined by \eqref{Ft_def}, we find that \begin{align*} F^\epsilon(t)~=~&F(t)+C\epsilon^2\sum_{\tiny\substack{|\alpha|\leq m\\|\beta|\leq m-1}}\|D^\alpha(\tilde r_1^\epsilon, \tilde r_2^\epsilon)(t)\|_{L^2_{l+k}(\Omega)}^2+C\epsilon^2\delta_0^{-4}\sum_{|\beta|=m}\Big(\|{\partial}_\tau^\beta (\tilde r_1^\epsilon, \tilde r_2^\epsilon)(t)\|_{L_l^2(\Omega)}^2+4\delta_0^{-4}\|{\partial}_\tau^\beta \tilde r_3^\epsilon\|_{L_{-1}^2(\Omega)}^2\Big), \end{align*} which, by \eqref{est_Ft} and \eqref{est_rmodify}, implies \begin{align*} F^\epsilon(t) \leq~ &C\delta_0^{-8}M_0^6+\epsilon^2\delta_0^{-8}\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{3m+2}}\big)\leq~\delta_0^{-8}\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{3m+2}}\big). \end{align*} Therefore, by choosing \begin{align*} T_1~:=~\min\Big\{\frac{\delta_0^8F_0}{\mathcal P\big(M_0+\|(u_0,h_0)\|_{H_l^{3m+2}}\big)}, ~\frac{3\delta_0^8}{32CF_0^2}\Big\} \end{align*} in \eqref{est_priori-ep1}, we obtain \eqref{est_modify1} for $T_*\leq T_1$. On the other hand, similarly to the estimates \eqref{upbound_uy-1} and \eqref{h_lowbound-1} given in Proposition \ref{prop-priori}, we have the following bounds for $\langle y\rangle^{l+1}{\partial}_y^i(u^\epsilon,h^\epsilon), i=1,2,$ and $h^\epsilon:$ \begin{align}\label{upbound_ep} \|\langle y\rangle^{l+1}{\partial}_y^i(u^\epsilon, h^\epsilon)(t)\|_{L^\infty(\Omega)} \leq~&\|\langle y\rangle^{l+1}{\partial}_y^i(u_0, h_0)\|_{L^\infty(\Omega)}+Ct\cdot\big(\sup_{0\leq s\leq t}\|(u^\epsilon, h^\epsilon)(s)\|_{\mathcal H_l^5}\big),\quad i=1,2, \end{align} and \begin{align}\label{lowbound_ep} h^\epsilon(t,x,y)\geq~&h_0(x,y)-Ct\cdot\big(\sup_{0\leq s\leq t}\|(u^\epsilon, h^\epsilon)(s)\|_{\mathcal H_0^3}\big).
\end{align} Then, from the assumptions \eqref{ass_bound-modify} on the initial data $(u_0,h_0)$ and the choice of $T_1$ above, we obtain, by \eqref{est_priori-ep1}, \begin{align*} &\|\langle y\rangle^{l+1}{\partial}_y^i(u^\epsilon, h^\epsilon)(t)\|_{L^\infty(\Omega)} \leq~(2\delta_0)^{-1}+2CF_0^{\frac{1}{2}}t,\quad i=1,2, \\ &h^\epsilon(t,x,y)+H(t,x)\phi'(y)\geq 2\delta_0+\big(H(t,x)-H(0,x)\big)\phi'(y)-2CF_0^{\frac{1}{2}}t\geq2\delta_0-C\big(M_0+2F_0^{\frac{1}{2}}\big)t. \end{align*} Thus, choosing \begin{align*} T_2~:=~\min\Big\{T_1,~\frac{1}{4C\delta_0F_0^{\frac{1}{2}}},~\frac{\delta_0}{C\big(M_0+2F_0^{\frac{1}{2}}\big)}\Big\}, \end{align*} we see that \eqref{est_modify2} holds for $T_*= T_2.$ Therefore, with the lifespan $T_*=T_2$ the estimates \eqref{est_modify1} and \eqref{est_modify2} are established, which completes the proof of this proposition. \end{proof} From the above Proposition \ref{Th2}, we obtain the local existence of solutions $(u^\epsilon,v^\epsilon,h^\epsilon,g^\epsilon)$ to the problem \eqref{pr_app} and their uniform estimates in $\epsilon$. Now, letting $\epsilon\rightarrow0$, we will obtain the solution to the original problem \eqref{bl_main} through some compactness arguments.
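For the reader's convenience, the compactness ingredient used in the next step is of Aubin--Lions type. The following is a standard general formulation, recorded here as a reminder only; it is not the exact statement of the weighted version \cite[Lemma 6.2]{MW1} invoked below. For Banach spaces $X\hookrightarrow\hookrightarrow B\hookrightarrow Y$ (the first embedding compact, the second continuous),

```latex
% Aubin--Lions-type compactness, standard form (a reminder only; the precise
% weighted version used in the text is [MW1, Lemma 6.2]).
\[
\sup_{\epsilon}\Big(\|f^\epsilon\|_{L^\infty(0,T;X)}
+\|{\partial}_t f^\epsilon\|_{L^\infty(0,T;Y)}\Big)<\infty
\quad\Longrightarrow\quad
\{f^\epsilon\}\ \mbox{is relatively compact in}\ C([0,T];B).
\]
% In the argument below this is applied with X a weighted Sobolev space
% H_l^{m-i}(\Omega) and B a local space H_{loc}^{m'}(\Omega), m' < m - i.
```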
Indeed, from the uniform estimate \eqref{est_modify1}, by the Lions-Aubin lemma and the compact embedding of $H_l^m(\Omega)$ in $H_{loc}^{m'}$ for $m'<m$ (see \cite[Lemma 6.2]{MW1}), we know that there exists $(u, h)\in L^\infty\big(0,T_*;\mathcal H_l^m\big)\bigcap\Big(\bigcap_{m'<m-1}C^1\big([0,T_*]; H^{m'}_{loc}(\Omega)\big)\Big)$, such that, up to a subsequence, \begin{align*} {\partial}_t^i(u^\epsilon,h^\epsilon)~\stackrel{*}{\rightharpoonup}~{\partial}_t^i(u, h),\qquad &\mbox{in}\quad L^\infty\big(0,T_*; H^{m-i}_l(\Omega)\big),\quad 0\leq i\leq m,\\ (u^\epsilon,h^\epsilon)~\rightarrow~(u, h),\qquad &\mbox{in}\quad C^1\big([0,T_*]; H^{m'}_{loc}(\Omega)\big). \end{align*} Then, since $({\partial}_x u^\epsilon, {\partial}_x h^\epsilon)\in \mathrm{Lip}(\Omega_{T_*})$, the convergence of $({\partial}_x u^\epsilon, {\partial}_x h^\epsilon)$ is uniform, and we get the pointwise convergence of $(v^\epsilon, g^\epsilon)$, i.e., \begin{align}\label{vg_limit} (v^\epsilon, g^\epsilon)=\big(-\int_0^y{\partial}_x u^\epsilon dz, -\int_0^y{\partial}_x h^\epsilon dz\big)\rightarrow\big(-\int_0^y{\partial}_x u dz, -\int_0^y{\partial}_x h dz\big):=(v,g). \end{align} Now, we can pass to the limit $\epsilon\rightarrow0$ in the problem \eqref{pr_app}, and obtain that $(u,v,h,g)$, with $v$ and $g$ given by \eqref{vg_limit}, solves the original problem \eqref{bl_main}. Since $(u, h)\in L^\infty\big(0,T_*;\mathcal H_l^m\big)$, it follows that $(u, h)\in\bigcap_{i=0}^mW^{i,\infty}\Big(0,T;H_l^{m-i}(\Omega)\Big),$ and consequently \eqref{result_1} is proven. Moreover, the relation \eqref{result_2}, respectively \eqref{result_3}, follows immediately by combining the divergence-free conditions $v=-{\partial}_y^{-1}{\partial}_xu, g=-{\partial}_y^{-1}{\partial}_xh$ with \eqref{normal1}, respectively \eqref{normal2}. Thus, the local existence result of Theorem \ref{thm_main} is proved. \subsection{Uniqueness} \indent\newline We will show the uniqueness of the obtained solution to (\ref{bl_main}).
Let $(u^1,v^1, h^1,g^1)$ and $(u^2, v^2, h^2, g^2)$ be two solutions in $[0,T_*]$, constructed in the previous subsection, with respect to the initial data $(u_0^1, h_0^1)$ and $(u_0^2, h_0^2)$ respectively. Set \[(\tilde{u}, \tilde v, \tilde{h}, \tilde g)=(u^1-u^2, v^1-v^2, h^1-h^2, g^1-g^2),\] then we have \begin{align} \label{pr_diff} \left\{ \begin{array}{ll} {\partial}_t \tilde{u}+\big[(u^1+U\phi'){\partial}_x+(v^1-U_x\phi){\partial}_y\big]\tilde u-\big[(h^1+H\phi'){\partial}_x+(g^1-H_x\phi){\partial}_y\big]\tilde h-\mu{\partial}_y^2\tilde{u}\\ \qquad+({\partial}_xu^2+U_x\phi')\tilde u+({\partial}_yu^2+U\phi'')\tilde v-({\partial}_xh^2+H_x\phi')\tilde h-({\partial}_yh^2+H\phi'')\tilde g=0,\\ {\partial}_t \tilde{h}+\big[(u^1+U\phi'){\partial}_x+(v^1-U_x\phi){\partial}_y\big]\tilde h-\big[(h^1+H\phi'){\partial}_x+(g^1-H_x\phi){\partial}_y\big]\tilde u-\kappa{\partial}_y^2\tilde{h}\\ \qquad+({\partial}_xh^2+H_x\phi')\tilde u+({\partial}_yh^2+H\phi'')\tilde v-({\partial}_xu^2+U_x\phi')\tilde h-({\partial}_yu^2+U\phi'')\tilde g=0,\\ {\partial}_x\tilde{u}+{\partial}_y\tilde{v}=0,\quad {\partial}_x\tilde{h}+{\partial}_y\tilde{g}=0,\\ (\tilde{u}, \tilde h)|_{t=0}=(u_0^1-u_0^2,h_0^1-h_0^2),\quad (\tilde{u},\tilde v,{\partial}_y\tilde h,\tilde g)|_{y=0}=\textbf{0}. \end{array} \right. \end{align} Set $\tilde\psi:={\partial}_y^{-1}\tilde h={\partial}_y^{-1}(h^1-h^2)$; then from the second equation $\eqref{pr_diff}_2$ for $\tilde h$ and the divergence-free conditions, we know that $\tilde\psi$ satisfies the following equation: \begin{align}\label{eq_tpsi} {\partial}_t \tilde{\psi}+\big[(u^1+U\phi'){\partial}_x+(v^1-U_x\phi){\partial}_y\big]\tilde \psi-\big(g^2-H_x\phi\big)\tilde u+(h^2+H\phi')\tilde v-\kappa{\partial}_y^2\tilde{\psi}=0.
\end{align} Similarly to \eqref{new_qu}, we introduce the new quantities \begin{align}\label{new_tqu} \bar u~:=~\tilde u-\frac{{\partial}_y u^2+U\phi''}{h^2+H\phi'}\tilde\psi,\quad \bar h~:=~\tilde h-\frac{{\partial}_y h^2+H\phi''}{h^2+H\phi'}\tilde\psi, \end{align} that is, \begin{align}\label{new_tqu1} \bar u~:=~u^1-u^2-\eta_1^2~{\partial}_y^{-1}(h^1-h^2),\quad \bar h~:=~ h^1-h^2-\eta_2^2~{\partial}_y^{-1}(h^1-h^2), \end{align} where we denote \[\eta_1^2~:=~\frac{{\partial}_y u^2+U\phi''}{h^2+H\phi'},\quad \eta_2^2~:=~\frac{{\partial}_y h^2+H\phi''}{h^2+H\phi'}.\] Next, a direct calculation shows that $(\bar u,\bar h)$ satisfies the following initial-boundary value problem: \begin{align}\label{eq_buh}\begin{cases} {\partial}_t\bar u+\big[(u^1+U\phi'){\partial}_x+(v^1-U_x\phi){\partial}_y\big]\bar u-\big[(h^1+H\phi'){\partial}_x+(g^1-H_x\phi){\partial}_y\big]\bar h-\mu{\partial}_y^2\bar{u}+(\kappa-\mu)\eta_1^2{\partial}_y\bar h\\ \qquad+a_1\bar u+b_1\bar h+c_1\tilde\psi=0,\\ {\partial}_t\bar h+\big[(u^1+U\phi'){\partial}_x+(v^1-U_x\phi){\partial}_y\big]\bar h-\big[(h^1+H\phi'){\partial}_x+(g^1-H_x\phi){\partial}_y\big]\bar u-\kappa{\partial}_y^2\bar h\\ \qquad+a_2\bar u+b_2\bar h+c_2\tilde\psi=0,\\ (\bar u,{\partial}_y\bar h)|_{y=0}=0,\quad (\bar u, \bar h)|_{t=0}=\big(u_0^1-u_0^2-\eta_{10}^2~{\partial}_y^{-1}(h_0^1-h_0^2),h_0^1-h_0^2-\eta_{20}^2~{\partial}_y^{-1}(h_0^1-h_0^2)\big), \end{cases}\end{align} where \begin{align}\label{nota_abc} a_1=&{\partial}_xu^2+U_x\phi'+(g^2-H_x\phi)\eta_1^2,\quad b_1=(\kappa-\mu)\eta_1^2\eta_2^2-2\mu{\partial}_y\eta_1^2-({\partial}_xh^2+H_x\phi')-(g^2-H_x\phi)\eta_2^2,\nonumber\\ c_1=&\big[{\partial}_t+(u^1+U\phi'){\partial}_x+(v^1-U_x\phi){\partial}_y-\mu{\partial}_y^2\big]\eta_1^2-\big[(h^1+H\phi'){\partial}_x+(g^1-H_x\phi){\partial}_y\big]\eta_2^2-2\mu\eta_2^2{\partial}_y\eta_1^2\nonumber\\
&+(\kappa-\mu)\eta_1^2\big[(\eta_2^2)^2+{\partial}_y\eta_2^2\big]+(g^2-H_x\phi)\big[(\eta_1^2)^2-(\eta_2^2)^2\big]+({\partial}_xu^2+U_x\phi')\eta_1^2-({\partial}_xh^2+H_x\phi')\eta_2^2,\nonumber\\ a_2=&{\partial}_xh^2+H_x\phi'+(g^2-H_x\phi)\eta_2^2,\quad b_2=-2\kappa{\partial}_y\eta_2^2-({\partial}_xu^2+U_x\phi')-(g^2-H_x\phi)\eta_1^2,\nonumber\\ c_2=&\big[{\partial}_t+(u^1+U\phi'){\partial}_x+(v^1-U_x\phi){\partial}_y-\kappa{\partial}_y^2\big]\eta_2^2-\big[(h^1+H\phi'){\partial}_x+(g^1-H_x\phi){\partial}_y\big]\eta_1^2-2\kappa\eta_2^2{\partial}_y\eta_2^2\nonumber\\ &+({\partial}_xh^2+H_x\phi')\eta_1^2-({\partial}_xu^2+U_x\phi')\eta_2^2, \end{align} and \[\eta_{10}^2(x,y)~:=~\frac{{\partial}_yu_0^2+U(0,x)\phi''(y)}{h^2_0+H(0,x)\phi'(y)},\quad\eta_{20}^2(x,y)~:=~\frac{{\partial}_yh_0^2+H(0,x)\phi''(y)}{h^2_0+H(0,x)\phi'(y)}.\] Combining \eqref{new_tqu} with the fact $\tilde\psi={\partial}_y^{-1}\tilde h$, we get that \[\bar h~=~(h^2+H\phi')\cdot{\partial}_y\Big(\frac{\tilde\psi}{h^2+H\phi'}\Big),\] and then, by $\tilde\psi|_{y=0}=0,$ \begin{align}\label{tpsi} \tilde\psi(t,x,y)=\big(h^2(t,x,y)+H(t,x)\phi'(y)\big)\cdot\int_0^y\frac{\bar h(t,x,z)}{h^2(t,x,z)+H(t,x)\phi'(z)}dz. \end{align} Since $h^2+H\phi'\geq \delta_0$, applying \eqref{normal1} in \eqref{tpsi} gives \begin{align}\label{est_tpsi} \Big\|\frac{\tilde\psi(t)}{1+y}\Big\|_{L^2(\Omega)}\leq 2\delta_0^{-1}\big\|h^2+H\phi'\big\|_{L^\infty([0,T_*]\times\Omega)}~\|\bar h(t)\|_{L^2(\Omega)}. \end{align} Moreover, by an argument similar to that used for the estimates \eqref{est_zeta}, we can show that there exists a constant $$C=C\Big(T_*,\delta_0, \phi, U, H, \|(u^1,h^1)\|_{\mathcal H_l^5},\|(u^2,h^2)\|_{\mathcal H_l^5}\Big)>0,$$ such that \begin{align}\label{est_abc} \|a_i\|_{L^\infty([0,T_*]\times\Omega)},~\|b_i\|_{L^\infty([0,T_*]\times\Omega)},~\|(1+y)c_i\|_{L^\infty([0,T_*]\times\Omega)}~\leq~C,\quad i=1,2.
\end{align} Thus, from \eqref{est_tpsi} and \eqref{est_abc} we have \begin{align}\label{est_c} \|(c_i\tilde\psi)(t)\|_{L^2(\Omega)}~\leq~C~\|\bar h(t)\|_{L^2(\Omega)},\quad i=1,2. \end{align} \begin{prop} \label{Prop_uni} Let $(u^1,v^1,h^1,g^1)$ and $(u^2, v^2, h^2,g^2)$ be two solutions of problem \eqref{bl_main} with respect to the initial data $(u_0^1, h_0^1)$ and $(u_0^2, h_0^2)$ respectively, satisfying $(u^j, h^j)\in\bigcap_{i=0}^mW^{i,\infty}\Big(0,T;H_l^{m-i}(\Omega)\Big)$ for $m\geq5,~j=1,2$.
Then, there exists a positive constant $$C=C\Big(T_*,\delta_0, \phi, U, H,\|(u^1,h^1)\|_{\mathcal H_l^5},\|(u^2,h^2)\|_{\mathcal H_l^5}\Big)>0,$$ such that for the quantities $(\bar u, \bar h)$ given by \eqref{new_tqu1}, \begin{align}\label{est_unique} \frac{d}{dt}\|(\bar u, \bar h)(t)\|_{L^{2}(\Omega)}^2+\|({\partial}_y\bar u, {\partial}_y\bar h)(t)\|_{L^2(\Omega)}^2 \leq ~&C\|(\bar u, \bar h)\|_{L^2(\Omega)}^2. \end{align} \end{prop} The above Proposition \ref{Prop_uni} can be proved by the standard energy method together with the estimates \eqref{est_abc} and \eqref{est_c}; we omit the proof for brevity. Then, by virtue of Proposition \ref{Prop_uni} we can prove the uniqueness of solutions to (\ref{bl_main}) as follows. First, if the initial data satisfy $(u^1,h^1)|_{t=0}=(u^2,h^2)|_{t=0}$, then from \eqref{eq_buh} we see that $(\bar u, \bar h)$ has zero initial data, which implies that $(\bar u, \bar h)\equiv0$ by applying Gronwall's lemma to \eqref{est_unique}. Second, plugging $\bar h\equiv0$ into \eqref{tpsi} yields $\tilde\psi\equiv0$. Then, from \eqref{new_tqu1} we have $(u^1,h^1)\equiv(u^2,h^2)$ immediately through the following calculation: \begin{align*} (u^1,h^1)-(u^2,h^2)~=~(\tilde u, \tilde h)~=~(\bar u, \bar h)+(\eta_1^2, \eta_2^2)~\tilde \psi~\equiv~0. \end{align*} Finally, we obtain $(v^1,g^1)~\equiv~(v^2,g^2)$ since $v^i=-{\partial}_y^{-1}{\partial}_x u^i$ and $g^i=-{\partial}_y^{-1}{\partial}_x h^i$ for $i=1,2,$ which proves the uniqueness of solutions. \begin{rem} We mention that in the independent recent preprint \cite{G-P}, the authors give a systematic derivation of MHD boundary layer models, and consider the linearization of a system similar to \eqref{bl_mhd} around a shear flow. Using a transformation analogous to \eqref{new}, they obtain the linear stability of the system in the Sobolev framework.
\end{rem} \section{A coordinate transformation} In this section, we introduce another method to study the initial-boundary value problem considered in this paper: \begin{align}\label{pr_com} \left\{ \begin{array}{ll} \partial_tu_1+u_1\partial_xu_1+u_2\partial_yu_1=h_1\partial_x h_1+h_2\partial_y h_1+\mu\partial^2_yu_1,\\ \partial_th_1+\partial_y(u_2h_1-u_1h_2)=\kappa\partial_y^2h_1,\\ \partial_xu_1+\partial_yu_2=0,\quad \partial_xh_1+\partial_yh_2=0,\\ (u_1,u_2,\partial_yh_1,h_2)|_{y=0}=0,\quad \lim\limits_{y\rightarrow+\infty}(u_1, h_1)=(U,H). \end{array} \right. \end{align} As we mentioned in Subsection 2.3, by the divergence-free condition, \begin{align*} \partial_x h_1+\partial_y h_2=0, \end{align*} there exists a stream function $\psi$, such that \begin{align} \label{4.3} h_1={\partial}_y\psi,\quad h_2=-{\partial}_x\psi,\quad \psi|_{y=0}=0; \end{align} moreover, $\psi$ satisfies \begin{align} \label{4.5} {\partial}_t \psi+u_1{\partial}_x\psi+u_2{\partial}_y\psi=\kappa{\partial}_y^2\psi.
\end{align} Under the assumption that \begin{align} \label{4.6} h_1(t,x,y)>0,\quad \hbox{or}\quad {\partial}_y\psi(t,x,y)>0, \end{align} we can introduce the following transformation \begin{align} \label{4.7} \tau=t,\ \xi=x,\ \eta=\psi(t,x,y), \end{align} and then \eqref{pr_com} can be written in the new coordinates as follows: \begin{align}\label{pr_crocco} \left\{ \begin{array}{ll} {\partial}_\tau u_1+u_1{\partial}_\xi u_1-h_1{\partial}_\xi h_1+(\kappa-\mu)h_1{\partial}_\eta h_1{\partial}_\eta u_1=\mu h_1^2{\partial}^2_\eta u_1,\\ {\partial}_\tau h_1-h_1{\partial}_\xi u_1+u_1{\partial}_\xi h_1 =\kappa h_1^2 {\partial}^2_\eta h_1,\\ (u_1, h_1{\partial}_\eta h_1)|_{y=0}=0,\quad \lim\limits_{\eta\rightarrow+\infty}(u_1,h_1)=(U,H). \end{array} \right. \end{align} \begin{rem} The equations \eqref{pr_crocco} are quasi-linear, and there is no loss-of-regularity term in \eqref{pr_crocco}, so we can use the classical Picard iteration scheme to establish the local existence. However, in order to guarantee that the coordinate transformation is valid, one needs to assume that $h_1(t,x,y)>0$. Moreover, one can obtain the stability of solutions to \eqref{pr_crocco} in the new coordinates $(\tau, \xi, \eta)$. It is then necessary to transfer the well-posedness of solutions back to the original equations \eqref{pr_com}, which causes some loss of regularity. \end{rem} \begin{rem} Based on the well-posedness result for the MHD boundary layer in the Sobolev framework given in this paper, we will show the validity of the vanishing limit of the viscous MHD equations \eqref{eq_mhd} as $\epsilon\rightarrow0$ in a future work \cite{LXY}, that is, we will show that the solution to \eqref{eq_mhd} converges to a solution of the ideal MHD equations (corresponding to $\epsilon=0$ in \eqref{eq_mhd}) outside the boundary layer, and to the boundary layer profile studied in this paper inside the boundary layer.
\end{rem} \appendix \section{Some inequalities} In this appendix, we prove the inequalities given in Lemma \ref{lemma_ineq}. Such inequalities can be found in \cite{MW1} and \cite{X-Z}; here we give a proof for the reader's convenience. \begin{proof}[\textbf{Proof of Lemma \ref{lemma_ineq}.}] \romannumeral1) Since $\lim\limits_{y\rightarrow+\infty}(fg)(x,y)=0$, we have \begin{align*} \Big|\int_{\mathbb T_x}(fg)|_{y=0}dx\Big|~=~&\Big|\int_{\Omega}{\partial}_y(fg)dxdy\Big|\leq \int_{\Omega}|{\partial}_yf\cdot g|dxdy+\int_{\Omega}|f\cdot{\partial}_yg|dxdy\\ \leq~& \|{\partial}_yf\|_{L^2(\Omega)}\|g\|_{L^2(\Omega)}+\|f\|_{L^2(\Omega)}\|{\partial}_yg\|_{L^2(\Omega)}, \end{align*} and we get \eqref{trace}. Estimate \eqref{trace0} follows immediately by letting $g=f$ in \eqref{trace}. \romannumeral 2) Since $m\geq3$ and $|\alpha|+|\tilde\alpha|\leq m$, we know that either $|\alpha|\leq m-2$ or $|\tilde\alpha|\leq m-2$. Without loss of generality, we assume that $|\alpha|\leq m-2$; then for any $l_1,l_2\geq0$ with $l_1+l_2=l$, we have, by the Sobolev embedding inequality, \begin{align*} \big\|\big(D^\alpha f\cdot D^{\tilde\alpha}g\big)(t,\cdot)\big\|_{L^2_{l+k+\tilde k}(\Omega)}\leq~&\big\|\langle y\rangle^{l_1+k}D^\alpha f(t,\cdot)\big\|_{L^\infty(\Omega)}\cdot \big\|\langle y\rangle^{l_2+\tilde k}D^{\tilde\alpha}g(t,\cdot)\big\|_{L^2(\Omega)}\\ \leq~& C\big\|\langle y\rangle^{l_1+k}D^\alpha f(t,\cdot)\big\|_{H^2(\Omega)}\|g(t)\|_{\mathcal H_{l_2}^{|\tilde\alpha|}}\\ \leq~ &C\big\|f(t)\big\|_{\mathcal H_{l_1}^{|\alpha|+2}(\Omega)}\|g(t)\|_{\mathcal H_{l_2}^{m}}, \end{align*} which implies \eqref{Morse} since $|\alpha|+2\leq m$.
\romannumeral3) For $\lambda>\frac{1}{2}$, integration by parts gives \begin{align*} \big\|\langle y\rangle^{-\lambda}({\partial}_y^{-1}f)(y)\big\|_{L^2_y(\mathbb R_+)}^2=&\int_0^{+\infty}\frac{\big[({\partial}_y^{-1}f)(y)\big]^2}{1-2\lambda}d(1+y)^{1-2\lambda}=\frac{2}{2\lambda-1}\int_0^{+\infty}(1+y)^{1-2\lambda}f(y)\cdot({\partial}_y^{-1}f)(y)dy\\ \leq~& \frac{2}{2\lambda-1}\big\|\langle y\rangle^{-\lambda}({\partial}_y^{-1}f)(y)\big\|_{L^2_y(\mathbb R_+)}\cdot\big\|\langle y\rangle^{1-\lambda}f(y)\big\|_{L^2_y(\mathbb R_+)}, \end{align*} which implies the first inequality of \eqref{normal}. On the other hand, note that for $\tilde\lambda>0,$ \begin{align*} |({\partial}_y^{-1}f)(y)|\leq~&\int_0^y|f(z)|dz\leq \|(1+z)^{1-\tilde\lambda}f(z)\|_{L^\infty(0,y)}\cdot\int_0^y(1+z)^{\tilde\lambda-1}dz\\ \leq~&\frac{(1+y)^{\tilde\lambda}-1}{\tilde\lambda}\|(1+y)^{1-\tilde\lambda}f(y)\|_{L^\infty_y(\mathbb R_+)}, \end{align*} which implies the second inequality of \eqref{normal} immediately. Next, since $m\geq3$ and $|\alpha|+|\tilde\beta|\leq m$, we again have $|\alpha|\leq m-2$ or $|\tilde\beta|\leq m-2$. If $|\alpha|\leq m-2$, by the Sobolev embedding inequality and the first inequality of \eqref{normal}, we have for any $\lambda>\frac{1}{2}$, \begin{align*} \big\|\big(D^\alpha g\cdot{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}h\big)(t,\cdot)\big\|_{L^2_{l+k}(\Omega)} \leq~&\big\|\langle y\rangle^{l+\lambda+k}D^\alpha g(t,\cdot)\big\|_{L^\infty(\Omega)}\cdot \big\|\langle y\rangle^{-\lambda}{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}h(t,\cdot)\big\|_{L^2(\Omega)}\\ \leq~&C\big\|\langle y\rangle^{l+\lambda+k}D^\alpha g(t,\cdot)\big\|_{H^2(\Omega)}\cdot \big\|\langle y\rangle^{1-\lambda}{\partial}_\tau^{\tilde\beta}h(t,\cdot)\big\|_{L^2(\Omega)}\\ \leq~&C\|g(t)\|_{\mathcal H_{l+\lambda}^{|\alpha|+2}}\|h(t)\|_{\mathcal H_{1-\lambda}^{|\tilde\beta|}}.
\end{align*} If $|\tilde\beta|\leq m-2$, by the Sobolev embedding inequality and the second inequality of \eqref{normal}, \begin{align*} \big\|\big(D^\alpha g\cdot{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}h\big)(t,\cdot)\big\|_{L^2_{l+k}(\Omega)} \leq~&\big\|\langle y\rangle^{l+\lambda+k}D^\alpha g(t,\cdot)\big\|_{L^2(\Omega)}\cdot \big\|\langle y\rangle^{-\lambda}{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}h(t,\cdot)\big\|_{L^\infty(\Omega)}\\ \leq~&C\|g(t)\|_{\mathcal H_{l+\lambda}^{|\alpha|}}\cdot \big\|\langle y\rangle^{1-\lambda}{\partial}_\tau^{\tilde\beta}h(t,\cdot)\big\|_{H^2(\Omega)}\\ \leq~&C\|g(t)\|_{\mathcal H_{l+\lambda}^{|\alpha|}}\|h(t)\|_{\mathcal H_{1-\lambda}^{|\tilde\beta|+2}}. \end{align*} Thus, \eqref{normal0} is proved, and \eqref{normal1} follows by taking $\lambda=1$ in \eqref{normal0}. \romannumeral4) For any $\lambda>\frac{1}{2}$, \begin{align*} \big|({\partial}_y^{-1}f)(y)\big|\leq\|f\|_{L_y^1(\mathbb R_+)}\leq \|\langle y\rangle^{-\lambda}\|_{L_y^2(\mathbb R_+)}\|\langle y\rangle^{\lambda}f\|_{L_{y}^2(\mathbb R_+)}\leq C\|\langle y\rangle^{\lambda}f\|_{L_{y}^2(\mathbb R_+)}, \end{align*} and we get \eqref{normal2}. Since $m\geq2$ and $|\alpha|+|\tilde\beta|\leq m$, we have $|\alpha|\leq m-1$ or $|\tilde\beta|\leq m-1$. If $|\alpha|\leq m-1$, by the Sobolev embedding inequality and \eqref{normal2}, we have for any $\lambda>\frac{1}{2}$, \begin{align*} \big\|\big(D^\alpha f\cdot{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}g\big)(t,\cdot)\big\|_{L^2_{l+k}(\Omega)} \leq~&\big\|\langle y\rangle^{l+k}D^\alpha f(t,\cdot)\big\|_{L_x^\infty L^2_y(\Omega)}\cdot \big\|{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}g(t,\cdot)\big\|_{L_x^2L^\infty_y(\Omega)}\\ \leq~&C\big\|\langle y\rangle^{l+k}D^\alpha f(t,\cdot)\big\|_{H^1(\Omega)}\cdot \big\|\langle y\rangle^{\lambda}{\partial}_\tau^{\tilde\beta}g(t,\cdot)\big\|_{L^2(\Omega)}\\ \leq~&C\|f(t)\|_{\mathcal H_{l}^{|\alpha|+1}}\|g(t)\|_{\mathcal H_{\lambda}^{|\tilde\beta|}}.
\end{align*} If $|\tilde\beta|\leq m-1$, by the Sobolev embedding inequality and \eqref{normal2}, \begin{align*} \big\|\big(D^\alpha f\cdot{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}g\big)(t,\cdot)\big\|_{L^2_{l+k}(\Omega)} \leq~&\big\|\langle y\rangle^{l+k}D^\alpha f(t,\cdot)\big\|_{L^2(\Omega)}\cdot \big\|{\partial}_\tau^{\tilde\beta}{\partial}_y^{-1}g(t,\cdot)\big\|_{L^\infty(\Omega)}\\ \leq~&C\|f(t)\|_{\mathcal H_{l}^{|\alpha|}}\cdot \big\|\langle y\rangle^{\lambda}{\partial}_\tau^{\tilde\beta}g(t,\cdot)\big\|_{H_x^1L_y^2(\Omega)}\\ \leq~&C\|f(t)\|_{\mathcal H_{l}^{|\alpha|}}\|g(t)\|_{\mathcal H_{\lambda}^{|\tilde\beta|+1}}. \end{align*} Thus, we obtain \eqref{normal3}, which completes the proof of the lemma. \end{proof} \noindent {\bf Acknowledgements:} The second author is partially supported by NSFC (Grants No. 11171213 and No. 11571231). The third author is supported by the General Research Fund of Hong Kong, CityU No. 11320016, and he would like to thank Pierre Degond for the initial discussion on this problem at Imperial College. \end{document}
\begin{document} \begin{center} {\Large\bf The matrix Stieltjes moment problem: a description of all solutions.} \end{center} \begin{center} {\bf S.M. Zagorodnyuk} \end{center} \section{Introduction.} The matrix Stieltjes moment problem consists of finding a left-continuous non-decreasing matrix function $M(x) = ( m_{k,l}(x) )_{k,l=0}^{N-1}$ on $\mathbb{R}_+ = [0,+\infty)$, $M(0)=0$, such that \begin{equation} \label{f1_1} \int_{\mathbb{R}_+} x^n dM(x) = S_n,\qquad n\in\mathbb{Z}_+, \end{equation} where $\{ S_n \}_{n=0}^\infty$ is a given sequence of Hermitian $(N\times N)$ complex matrices, $N\in\mathbb{N}$. This problem is said to be determinate if there exists a unique solution, and indeterminate otherwise. In the scalar ($N=1$) indeterminate case the Stieltjes moment problem was solved by M.G.~Krein (see~\cite{c_1000_Kr_st},\cite{c_2000_KrN}), while in the scalar degenerate case the problem was solved by F.R.~Gantmacher in~\cite[Chapter XVI]{Cit_3000_Gantmacher}. The operator (and, in particular, the matrix) Stieltjes moment problem was introduced by M.G.~Krein and M.A.~Krasnoselskiy in~\cite{c_4000_Kr_Kras}. They obtained necessary and sufficient conditions for the solvability of this problem. \noindent Let us introduce the following matrices: \begin{equation} \label{f1_2} \Gamma_n = (S_{i+j})_{i,j=0}^n = \left( \begin{array}{cccc} S_0 & S_1 & \ldots & S_n\\ S_1 & S_2 & \ldots & S_{n+1}\\ \vdots & \vdots & \ddots & \vdots\\ S_n & S_{n+1} & \ldots & S_{2n}\end{array} \right), \end{equation} \begin{equation} \label{f1_3} \widetilde\Gamma_n = (S_{i+j+1})_{i,j=0}^n = \left( \begin{array}{cccc} S_1 & S_2 & \ldots & S_{n+1}\\ S_2 & S_3 & \ldots & S_{n+2}\\ \vdots & \vdots & \ddots & \vdots\\ S_{n+1} & S_{n+2} & \ldots & S_{2n+1}\end{array} \right),\qquad n\in\mathbb{Z}_+. \end{equation} The moment problem~(\ref{f1_1}) has a solution if and only if \begin{equation} \label{f1_4} \Gamma_n \geq 0,\quad \widetilde\Gamma_n \geq 0,\qquad n\in\mathbb{Z}_+.
\end{equation} In~2004, Yu.M.~Dyukarev performed a deep investigation of the moment problem~(\ref{f1_1}) in the case when \begin{equation} \label{f1_5} \Gamma_n > 0,\quad \widetilde\Gamma_n > 0,\qquad n\in\mathbb{Z}_+, \end{equation} and some limit matrix intervals (which he called the limit Weyl intervals) are non-degenerate, see~\cite{c_5000_D}. He obtained a parameterization of all solutions of the moment problem in this case. \noindent Our aim here is to obtain a description of all solutions of the moment problem~(\ref{f1_1}) in the general case. No conditions besides the solvability (i.e. conditions~(\ref{f1_4})) will be assumed. We shall apply an operator approach which was used in~\cite{c_6000_Z}, and Krein's formula for the generalized $\Pi$-resolvents of non-negative Hermitian operators~\cite{c_7000_Kr},\cite{c_8000_Kr_Ovch}. We shall use Krein's formula in the form which was proposed by V.A.~Derkach and M.M.~Malamud in~\cite{c_9000_D_M}. We should also notice that these authors presented a detailed proof of Krein's formula. \noindent {\bf Notations.} As usual, we denote by $\mathbb{R}, \mathbb{C}, \mathbb{N}, \mathbb{Z}, \mathbb{Z}_+$ the sets of real numbers, complex numbers, positive integers, integers and non-negative integers, respectively; $\mathbb{R}_+ = [0,+\infty)$, $\mathbb{C}_+ = \{ z\in \mathbb{C}:\ \mathop{\rm Im}\nolimits z>0 \}$. The space of $n$-dimensional complex vectors $a = (a_0,a_1,\ldots,a_{n-1})$ will be denoted by $\mathbb{C}^n$, $n\in \mathbb{N}$. If $a\in \mathbb{C}^n$ then $a^*$ means the complex conjugate vector. By $\mathbb{P}$ we denote the set of all complex polynomials. \noindent Let $M(x)$ be a left-continuous non-decreasing matrix function $M(x) = ( m_{k,l}(x) )_{k,l=0}^{N-1}$ on $\mathbb{R}_+$, $M(0)=0$, and $\tau_M (x) := \sum_{k=0}^{N-1} m_{k,k} (x)$; $\Psi(x) = ( dm_{k,l}/ d\tau_M )_{k,l=0}^{N-1}$ (the Radon-Nikodym derivative).
We denote by $L^2(M)$ the set (of equivalence classes) of vector functions $f: \mathbb{R}\rightarrow \mathbb{C}^N$, $f = (f_0,f_1,\ldots,f_{N-1})$, such that (see, e.g.,~\cite{c_10000_M_M}) $$ \| f \|^2_{L^2(M)} := \int_\mathbb{R} f(x) \Psi(x) f^*(x) d\tau_M (x) < \infty. $$ The space $L^2(M)$ is a Hilbert space with the scalar product $$ ( f,g )_{L^2(M)} := \int_\mathbb{R} f(x) \Psi(x) g^*(x) d\tau_M (x),\qquad f,g\in L^2(M). $$ For a separable Hilbert space $H$ we denote by $(\cdot,\cdot)_H$ and $\| \cdot \|_H$ the scalar product and the norm in $H$, respectively. The indices may be omitted in obvious cases. By $E_H$ we denote the identity operator in $H$, i.e. $E_H x = x$, $x\in H$. \noindent For a linear operator $A$ in $H$ we denote by $D(A)$ its domain, by $R(A)$ its range, and by $\mathop{\rm ker}\nolimits A$ its kernel. By $A^*$ we denote its adjoint if it exists. By $\rho(A)$ we denote the resolvent set of $A$; $N_z = \mathop{\rm ker}\nolimits(A^* - zE_H)$. If $A$ is bounded, then $\| A \|$ stands for its operator norm. For a set of elements $\{ x_n \}_{n\in T}$ in $H$, we denote by $\mathop{\rm Lin}\nolimits\{ x_n \}_{n\in T}$ and $\mathop{\rm span}\nolimits\{ x_n \}_{n\in T}$ the linear span and the closed linear span (in the norm of $H$), respectively. Here $T$ is an arbitrary set of indices. For a set $M\subseteq H$ we denote by $\overline{M}$ the closure of $M$ with respect to the norm of $H$. \noindent If $H_1$ is a subspace of $H$, by $P_{H_1} = P_{H_1}^{H}$ we denote the operator of the orthogonal projection onto $H_1$ in $H$. If $\mathcal{H}$ is another Hilbert space, by $[H,\mathcal{H}]$ we denote the space of all bounded operators from $H$ into $\mathcal{H}$; $[H]:= [H,H]$. $\mathfrak{C}(H)$ is the set of closed linear operators $A$ such that $\overline{D(A)}=H$. \section{The matrix Stieltjes moment problem: the solvability.} Consider the matrix Stieltjes moment problem~(\ref{f1_1}).
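Before verifying necessity, it may help to record (as an illustrative aside of ours, not part of the original argument) what conditions~(\ref{f1_4}) look like in the simplest scalar case $N=1$: the block matrices reduce to ordinary Hankel matrices built from the scalar moments. For instance, for $n=1$,

```latex
% Scalar case N = 1, n = 1: the block matrices defined above reduce to
% ordinary 2x2 Hankel matrices built from the (scalar) moments S_0,...,S_3.
\[
\Gamma_1=\left(\begin{array}{cc} S_0 & S_1\\ S_1 & S_2 \end{array}\right)\geq 0,
\qquad
\widetilde\Gamma_1=\left(\begin{array}{cc} S_1 & S_2\\ S_2 & S_3 \end{array}\right)\geq 0.
\]
% Concrete illustration (ours): for dM(x) = e^{-x}dx one has S_n = n!, so
% det(Gamma_1) = 2 - 1 = 1 > 0 and det(tilde Gamma_1) = 6 - 4 = 2 > 0,
% consistent with solvability.
```

This is exactly the classical pair of Hankel positivity conditions of the scalar Stieltjes moment problem.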
Let us check that conditions~(\ref{f1_4}) are necessary for the solvability of the problem~(\ref{f1_1}). In fact, suppose that the moment problem has a solution $M(x)$. Choose an arbitrary function $a(x) = (a_0(x),a_1(x),...,a_{N-1}(x))$, where $$ a_j(x) = \sum_{k=0}^n \alpha_{j,k} x^k,\quad \alpha_{j,k}\in \mathbb{C},\ n\in \mathbb{Z}_+. $$ This function belongs to $L^2(M)$ and $$ 0 \leq \int_{\mathbb{R}_+} a(x) dM(x) a^*(x) = \sum_{k,l=0}^n \int_{\mathbb{R}_+}(\alpha_{0,k},\alpha_{1,k},...,\alpha_{N-1,k}) x^{k+l} dM(x) $$ $$* (\alpha_{0,l},\alpha_{1,l},...,\alpha_{N-1,l})^* = \sum_{k,l=0}^n (\alpha_{0,k},\alpha_{1,k},...,\alpha_{N-1,k}) S_{k+l} $$ $$* (\alpha_{0,l},\alpha_{1,l},...,\alpha_{N-1,l})^* = A\Gamma_n A^*, $$ where $A = (\alpha_{0,0},\alpha_{1,0},...,\alpha_{N-1,0},\alpha_{0,1},\alpha_{1,1},..., \alpha_{N-1,1},...,\alpha_{0,n},\alpha_{1,n},...,\alpha_{N-1,n})$, and we have used the rules for the multiplication of block matrices. In a similar manner we get $$ 0 \leq \int_{\mathbb{R}_+} a(x) x dM(x) a^*(x) = A\widetilde\Gamma_n A^*, $$ and therefore conditions~(\ref{f1_4}) hold. \noindent On the other hand, let the moment problem~(\ref{f1_1}) be given and suppose that conditions~(\ref{f1_4}) are true. For the prescribed moments $$ S_j = (s_{j;k,l})_{k,l=0}^{N-1},\quad s_{j;k,l}\in \mathbb{C},\qquad j\in \mathbb{Z}_+, $$ we consider the following block matrices \begin{equation} \label{f2_1} \Gamma = (S_{i+j})_{i,j=0}^\infty = \left( \begin{array}{cccc} S_{0} & S_{1} & S_2 & \ldots \\ S_{1} & S_{2} & S_3 & \ldots \\ S_{2} & S_{3} & S_4 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{array}\right), \end{equation} \begin{equation} \label{f2_1_1} \widetilde\Gamma = (S_{i+j+1})_{i,j=0}^\infty = \left( \begin{array}{cccc} S_{1} & S_{2} & S_3 & \ldots \\ S_{2} & S_{3} & S_4 & \ldots \\ S_{3} & S_{4} & S_5 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{array}\right). 
\end{equation} The matrix $\Gamma$ can be viewed as a scalar semi-infinite matrix \begin{equation} \label{f2_2} \Gamma = (\gamma_{n,m})_{n,m=0}^\infty,\qquad \gamma_{n,m}\in \mathbb{C}. \end{equation} Notice that \begin{equation} \label{f2_3} \gamma_{rN+j,tN+n} = s_{r+t;j,n},\qquad r,t\in \mathbb{Z}_+,\ 0\leq j,n\leq N-1. \end{equation} The matrix $\widetilde\Gamma$ can also be viewed as a scalar semi-infinite matrix \begin{equation} \label{f2_3_1} \widetilde\Gamma = (\widetilde\gamma_{n,m})_{n,m=0}^\infty = (\gamma_{n+N,m})_{n,m=0}^\infty. \end{equation} The conditions in~(\ref{f1_4}) imply that \begin{equation} \label{f2_4} (\gamma_{k,l})_{k,l=0}^r \geq 0,\qquad r\in \mathbb{Z}_+; \end{equation} \begin{equation} \label{f2_4_1} (\gamma_{k+N,l})_{k,l=0}^r \geq 0,\qquad r\in \mathbb{Z}_+. \end{equation} We shall use the following important fact (e.g.,~\cite[Supplement 1]{c_11000_AG}): \begin{thm} \label{t2_1} Let $\Gamma = (\gamma_{n,m})_{n,m=0}^\infty$, $\gamma_{n,m}\in \mathbb{C}$, be a semi-infinite complex matrix such that condition~(\ref{f2_4}) holds. Then there exist a separable Hilbert space $H$ with a scalar product $(\cdot,\cdot)_H$ and a sequence $\{ x_n \}_{n=0}^\infty$ in $H$, such that \begin{equation} \label{f2_5} \gamma_{n,m} = (x_n,x_m)_H,\qquad n,m\in \mathbb{Z}_+, \end{equation} and $\mathop{\rm span}\nolimits\{ x_n \}_{n=0}^\infty = H$. \end{thm} {\bf Proof. } Consider an arbitrary infinite-dimensional linear vector space $V$. For example, we can choose the linear space of all complex sequences $(u_n)_{n\in \mathbb{Z}_+}$, $u_n\in \mathbb{C}$. Let $X = \{ x_n \}_{n=0}^\infty$ be an arbitrary infinite sequence of linearly independent elements in $V$. Let $L = \mathop{\rm Lin}\nolimits\{ x_n \}_{n\in\mathbb{Z}_+}$ be the linear span of elements of $X$.
Introduce the following functional: \begin{equation} \label{f2_6} [x,y] = \sum_{n,m=0}^\infty \gamma_{n,m} a_n\overline{b_m}, \end{equation} for $x,y\in L$, $$ x=\sum_{n=0}^\infty a_n x_n,\quad y=\sum_{m=0}^\infty b_m x_m,\quad a_n,b_m\in\mathbb{C}. $$ Here and in what follows we assume that for elements of linear spans all but a finite number of coefficients are zero. The space $L$ with $[\cdot,\cdot]$ will be a quasi-Hilbert space. Factorizing and taking the completion, we obtain the required space $H$ (see~\cite{c_12000_Ber}). $\Box$ From~(\ref{f2_3}) it follows that \begin{equation} \label{f2_7} \gamma_{a+N,b} = \gamma_{a,b+N},\qquad a,b\in\mathbb{Z}_+. \end{equation} In fact, if $a=rN+j$, $b=tN+n$, $0\leq j,n \leq N-1$, $r,t\in\mathbb{Z}_+$, we can write $$ \gamma_{a+N,b} = \gamma_{(r+1)N+j,tN+n} = s_{r+t+1;j,n} = \gamma_{rN+j,(t+1)N+n} = \gamma_{a,b+N}. $$ By Theorem~\ref{t2_1} there exist a Hilbert space $H$ and a sequence $\{ x_n \}_{n=0}^\infty$ in $H$, such that $\mathop{\rm span}\nolimits\{ x_n \}_{n=0}^\infty = H$, and \begin{equation} \label{f2_8} (x_n,x_m)_H = \gamma_{n,m},\qquad n,m\in\mathbb{Z}_+. \end{equation} Set $L := \mathop{\rm Lin}\nolimits\{ x_n \}_{n=0}^\infty$. Notice that elements $\{ x_n \}$ are {\it not necessarily linearly independent}. Thus, for an arbitrary $x\in L$ there can exist different representations: \begin{equation} \label{f2_9} x = \sum_{k=0}^\infty \alpha_k x_k,\quad \alpha_k\in \mathbb{C}, \end{equation} \begin{equation} \label{f2_10} x = \sum_{k=0}^\infty \beta_k x_k,\quad \beta_k\in \mathbb{C}. \end{equation} (Here all but a finite number of coefficients $\alpha_k$, $\beta_k$ are zero).
Using~(\ref{f2_7}),(\ref{f2_8}) we can write $$ \left( \sum_{k=0}^\infty \alpha_k x_{k+N}, x_l \right) = \sum_{k=0}^\infty \alpha_k ( x_{k+N}, x_l ) = \sum_{k=0}^\infty \alpha_k \gamma_{k+N,l} = \sum_{k=0}^\infty \alpha_k \gamma_{k,l+N} $$ $$ = \sum_{k=0}^\infty \alpha_k ( x_{k}, x_{l+N} ) = \left( \sum_{k=0}^\infty \alpha_k x_{k}, x_{l+N} \right) = (x,x_{l+N}),\qquad l\in\mathbb{Z}_+. $$ In a similar manner we obtain that $$ \left( \sum_{k=0}^\infty \beta_k x_{k+N}, x_l \right) = (x,x_{l+N}),\qquad l\in\mathbb{Z}_+, $$ and therefore $$ \left( \sum_{k=0}^\infty \alpha_k x_{k+N}, x_l \right) = \left( \sum_{k=0}^\infty \beta_k x_{k+N}, x_l \right),\qquad l\in\mathbb{Z}_+. $$ Since $\overline{L} = H$, we obtain that \begin{equation} \label{f2_11} \sum_{k=0}^\infty \alpha_k x_{k+N} = \sum_{k=0}^\infty \beta_k x_{k+N}. \end{equation} Let us introduce the following operator: \begin{equation} \label{f2_12} A x = \sum_{k=0}^\infty \alpha_k x_{k+N},\qquad x\in L,\ x = \sum_{k=0}^\infty \alpha_k x_{k}. \end{equation} Relations~(\ref{f2_9}),(\ref{f2_10}) and~(\ref{f2_11}) show that this definition does not depend on the choice of a representation for $x\in L$. Thus, this definition is correct. In particular, we have \begin{equation} \label{f2_13} A x_k = x_{k+N},\qquad k\in\mathbb{Z}_+. \end{equation} Choose arbitrary $x,y\in L$, $x = \sum_{k=0}^\infty \alpha_k x_{k}$, $y = \sum_{n=0}^\infty \gamma_n x_{n}$, and write $$ (Ax,y) = \left( \sum_{k=0}^\infty \alpha_k x_{k+N},\sum_{n=0}^\infty \gamma_n x_{n} \right) = \sum_{k,n=0}^\infty \alpha_k \overline{\gamma_n} (x_{k+N},x_n) $$ $$ = \sum_{k,n=0}^\infty \alpha_k \overline{\gamma_n} (x_{k},x_{n+N}) = \left( \sum_{k=0}^\infty \alpha_k x_{k},\sum_{n=0}^\infty \gamma_n x_{n+N} \right) = (x,Ay). 
$$ By relation~(\ref{f2_4_1}) we get $$ (Ax,x) = \left( \sum_{k=0}^\infty \alpha_k x_{k+N},\sum_{n=0}^\infty \alpha_n x_{n} \right) = \sum_{k,n=0}^\infty \alpha_k \overline{\alpha_n} (x_{k+N},x_n) $$ $$ = \sum_{k,n=0}^\infty \alpha_k \overline{\alpha_n} \gamma_{k+N,n} \geq 0. $$ Thus, the operator $A$ is a linear non-negative Hermitian operator in $H$ with the domain $D(A)=L$. Such an operator has a non-negative self-adjoint extension~\cite[Theorem 7, p.450]{c_13000_Kr}. Let $\widetilde A\supseteq A$ be an arbitrary non-negative self-adjoint extension of $A$ in a Hilbert space $\widetilde H\supseteq H$, and $\{ \widetilde E_\lambda \}_{\lambda\in \mathbb{R}_+}$ be its left-continuous orthogonal resolution of unity. Choose an arbitrary $a\in \mathbb{Z}_+$, $a=rN + j$, $r\in \mathbb{Z}_+$, $0\leq j\leq N-1$. Notice that $$ x_a = x_{rN+j} = A x_{(r-1)N+j} = ... = A^r x_j. $$ Using~(\ref{f2_3}),(\ref{f2_8}) we can write $$ s_{r+t;j,n} = \gamma_{rN+j,tN+n} = ( x_{rN+j},x_{tN+n} )_H = (A^r x_j, A^t x_n)_H $$ $$ = ( \widetilde A^r x_j, \widetilde A^t x_n)_{\widetilde H} = \left( \int_{\mathbb{R}_+} \lambda^r d\widetilde E_\lambda x_j, \int_{\mathbb{R}_+} \lambda^t d\widetilde E_\lambda x_n \right)_{\widetilde H} $$ $$ = \int_{\mathbb{R}_+} \lambda^{r+t} d (\widetilde E_\lambda x_j, x_n)_{\widetilde H} = \int_{\mathbb{R}_+} \lambda^{r+t} d \left( P^{\widetilde H}_H \widetilde E_\lambda x_j, x_n \right)_{H}. $$ Let us write the last relation in matrix form: \begin{equation} \label{f2_14} S_{r+t} = \int_{\mathbb{R}_+} \lambda^{r+t} d \widetilde M(\lambda),\qquad r,t\in\mathbb{Z}_+, \end{equation} where \begin{equation} \label{f2_15} \widetilde M(\lambda) := \left( \left( P^{\widetilde H}_H \widetilde E_\lambda x_j, x_n \right)_{H} \right)_{j,n=0}^{N-1}. \end{equation} If we set $t=0$ in relation~(\ref{f2_14}), we obtain that the matrix function $\widetilde M(\lambda)$ is a solution of the matrix Stieltjes moment problem~(\ref{f1_1}).
In fact, from the properties of the orthogonal resolution of unity it easily follows that $\widetilde M (\lambda)$ is left-continuous non-decreasing and $\widetilde M(0) = 0$. Thus, we obtained another proof of the solvability criterion for the matrix Stieltjes moment problem~(\ref{f1_1}): \begin{thm} \label{t2_2} Let a matrix Stieltjes moment problem~(\ref{f1_1}) be given. This problem has a solution if and only if conditions~(\ref{f1_4}) hold true. \end{thm} \section{A description of solutions.} Let $B$ be an arbitrary non-negative Hermitian operator in a Hilbert space $\mathcal{H}$. Choose an arbitrary non-negative self-adjoint extension $\widehat B$ of $B$ in a Hilbert space $\widehat{\mathcal{H}} \supseteq \mathcal{H}$. Let $R_z(\widehat B)$ be the resolvent of $\widehat B$ and $\{ \widehat E_\lambda\}_{\lambda\in \mathbb{R}_+}$ be the orthogonal left-continuous resolution of unity of $\widehat B$. Recall that the operator-valued function $\mathbf R_z = P_{ \mathcal{H} }^{ \widehat{\mathcal{H}} } R_z(\widehat B)$ is called {\bf a generalized $\Pi$-resolvent of $B$}, $z\in\mathbb{C}\backslash\mathbb{R}$~\cite{c_8000_Kr_Ovch}. If $\widehat{\mathcal{H}} = \mathcal{H}$ then $R_z(\widehat B)$ is called {\bf a canonical $\Pi$-resolvent}. The function $\mathbf E_\lambda = P_{\mathcal{H}}^{\widehat{\mathcal{H}}} \widehat E_\lambda$, $\lambda\in\mathbb{R}$, is called a {\bf $\Pi$-spectral function} of the non-negative Hermitian operator $B$. There exists a one-to-one correspondence between generalized $\Pi$-resolvents and $\Pi$-spectral functions established by the following relation (\cite{c_11000_AG}): \begin{equation} \label{f3_1} (\mathbf R_z f,g)_{\mathcal{H}} = \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d( \mathbf E_\lambda f,g)_{\mathcal{H}},\qquad f,g\in \mathcal{H},\ z\in \mathbb{C}\backslash \mathbb{R}. \end{equation} Denote the set of all generalized $\Pi$-resolvents of $B$ by $\Omega^0(-\infty,0)=\Omega^0(-\infty,0)(B)$.
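In the canonical case $\widehat{\mathcal{H}} = \mathcal{H}$ and in finite dimensions, relation~(\ref{f3_1}) is simply the spectral decomposition of the resolvent. The following sketch (not from the paper; it assumes Python with numpy, and all names are ours) checks it for a random non-negative self-adjoint matrix, where the measure $d(\mathbf E_\lambda f,g)$ becomes a finite sum over eigenvalues.

```python
import numpy as np

# Sketch: for a non-negative self-adjoint matrix B = sum_k lam_k u_k u_k^*,
# the canonical resolvent R_z = (B - z I)^{-1} satisfies
# (R_z f, g) = sum_k (u_k^* f)(g^* u_k) / (lam_k - z), cf. relation (f3_1).
rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = X @ X.conj().T               # non-negative self-adjoint
lam, U = np.linalg.eigh(B)       # spectral decomposition
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = 0.3 + 1.7j                   # z in C \ R

lhs = g.conj() @ np.linalg.solve(B - z * np.eye(n), f)   # (R_z f, g)
# "integral" over the discrete spectral measure d(E_lam f, g)
rhs = sum((U[:, k].conj() @ f) * (g.conj() @ U[:, k]) / (lam[k] - z)
          for k in range(n))
assert abs(lhs - rhs) < 1e-8
```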
Let a moment problem~(\ref{f1_1}) be given and conditions~(\ref{f1_4}) hold. Consider the operator $A$ defined as in~(\ref{f2_12}). Formula~(\ref{f2_15}) shows that $\Pi$-spectral functions of the operator $A$ produce solutions of the matrix Stieltjes moment problem~(\ref{f1_1}). Let us show that an arbitrary solution of~(\ref{f1_1}) can be produced in this way. \noindent Choose an arbitrary solution $\widehat M(x) = ( \widehat m_{k,l}(x) )_{k,l=0}^{N-1}$ of the matrix Stieltjes moment problem~(\ref{f1_1}). Consider the space $L^2(\widehat M)$ and let $Q$ be the operator of multiplication by the independent variable in $L^2(\widehat M)$. The operator $Q$ is self-adjoint and its resolution of unity is given by (see~\cite{c_10000_M_M}) \begin{equation} \label{f3_2} E_b - E_a = E([a,b)): h(x) \rightarrow \chi_{[a,b)}(x) h(x), \end{equation} where $\chi_{[a,b)}(x)$ is the characteristic function of an interval $[a,b)$, $0 \leq a<b\leq +\infty$. Set $$ \vec e_k = (e_{k,0},e_{k,1},\ldots,e_{k,N-1}),\quad e_{k,j}=\delta_{k,j},\qquad 0\leq j\leq N-1, $$ where $k=0,1,\ldots,N-1$. By $\mathbb{P}^2(\widehat M)$ we denote the set of (equivalence classes of) functions $f\in L^2(\widehat M)$ such that (the corresponding class contains) a function $f=(f_0,f_1,\ldots, f_{N-1})$ with $f_j\in \mathbb{P}$, $0\leq j\leq N-1$. It is called the set of vector polynomials in $L^2(\widehat M)$. Set $L^2_0(\widehat M) := \overline{ \mathbb{P}^2(\widehat M) }$. For an arbitrary (representative) $f\in \mathbb{P}^2(\widehat M)$ there exists a unique representation of the following form: \begin{equation} \label{f3_3} f(x) = \sum_{k=0}^{N-1} \sum_{j=0}^\infty \alpha_{k,j} x^j \vec e_k,\quad \alpha_{k,j}\in \mathbb{C}. \end{equation} Here the sum is assumed to be finite. Let $g\in \mathbb{P}^2(\widehat M)$ have a representation \begin{equation} \label{f3_4} g(x) = \sum_{l=0}^{N-1} \sum_{r=0}^\infty \beta_{l,r} x^r \vec e_l,\quad \beta_{l,r}\in \mathbb{C}.
\end{equation} Then we can write $$ (f,g)_{L^2(\widehat M)} = \sum_{k,l=0}^{N-1} \sum_{j,r=0}^\infty \alpha_{k,j}\overline{\beta_{l,r}} \int_\mathbb{R} x^{j+r} \vec e_k d\widehat M(x) \vec e_l^* $$ \begin{equation} \label{f3_5} = \sum_{k,l=0}^{N-1} \sum_{j,r=0}^\infty \alpha_{k,j}\overline{\beta_{l,r}} \int_\mathbb{R} x^{j+r} d\widehat m_{k,l}(x) = \sum_{k,l=0}^{N-1} \sum_{j,r=0}^\infty \alpha_{k,j}\overline{\beta_{l,r}} s_{j+r;k,l}. \end{equation} On the other hand, we can write $$ \left( \sum_{j=0}^\infty \sum_{k=0}^{N-1} \alpha_{k,j} x_{jN+k}, \sum_{r=0}^\infty \sum_{l=0}^{N-1} \beta_{l,r} x_{rN+l} \right)_H = \sum_{k,l=0}^{N-1} \sum_{j,r=0}^\infty \alpha_{k,j}\overline{\beta_{l,r}} (x_{jN+k}, x_{rN+l})_H $$ \begin{equation} \label{f3_6} = \sum_{k,l=0}^{N-1} \sum_{j,r=0}^\infty \alpha_{k,j}\overline{\beta_{l,r}} \gamma_{jN+k,rN+l} = \sum_{k,l=0}^{N-1} \sum_{j,r=0}^\infty \alpha_{k,j}\overline{\beta_{l,r}} s_{j+r;k,l}. \end{equation} From relations~(\ref{f3_5}),(\ref{f3_6}) it follows that \begin{equation} \label{f3_7} (f,g)_{L^2(\widehat M)} = \left( \sum_{j=0}^\infty \sum_{k=0}^{N-1} \alpha_{k,j} x_{jN+k}, \sum_{r=0}^\infty \sum_{l=0}^{N-1} \beta_{l,r} x_{rN+l} \right)_H. \end{equation} Let us introduce the following operator: \begin{equation} \label{f3_8} Vf = \sum_{j=0}^\infty \sum_{k=0}^{N-1} \alpha_{k,j} x_{jN+k}, \end{equation} for $f(x)\in \mathbb{P}^2(\widehat M)$, $f(x) = \sum_{k=0}^{N-1} \sum_{j=0}^\infty \alpha_{k,j} x^j \vec e_k$, $\alpha_{k,j}\in \mathbb{C}$. Let us show that this definition is correct. In fact, if vector polynomials $f$, $g$ have representations~(\ref{f3_3}),(\ref{f3_4}), and $\| f-g \|_{L^2(\widehat M)} = 0$, then from~(\ref{f3_7}) it follows that $V(f-g)=0$. Thus, $V$ is a correctly defined operator from $\mathbb{P}^2(\widehat M)$ into $H$. Relation~(\ref{f3_7}) shows that $V$ is an isometric transformation from $\mathbb{P}^2(\widehat M)$ onto $L$. 
By continuity we extend it to an isometric transformation from $L^2_0(\widehat M)$ onto $H$. In particular, we note that \begin{equation} \label{f3_9} V x^j \vec e_k = x_{jN+k},\qquad j\in \mathbb{Z}_+;\quad 0\leq k\leq N-1. \end{equation} Set $L^2_1 (\widehat M) := L^2(\widehat M)\ominus L^2_0 (\widehat M)$, and $U := V\oplus E_{L^2_1 (\widehat M)}$. The operator $U$ is an isometric transformation from $L^2(\widehat M)$ onto $H\oplus L^2_1 (\widehat M)=:\widehat H$. Set $$ \widehat A := UQU^{-1}. $$ The operator $\widehat A$ is a non-negative self-adjoint operator in $\widehat H$. Let $\{ \widehat E_\lambda \}_{\lambda\in\mathbb{R}_+}$ be its left-continuous orthogonal resolution of unity. Notice that $$ UQU^{-1} x_{jN+k} = VQV^{-1} x_{jN+k} = VQ x^j \vec e_k = V x^{j+1} \vec e_k = x_{(j+1)N+k} $$ $$ = x_{jN+k+N} = Ax_{jN+k},\qquad j\in\mathbb{Z}_+;\quad 0\leq k\leq N-1. $$ By linearity we get $$ UQU^{-1} x = Ax,\qquad x\in L = D(A), $$ and therefore $\widehat A\supseteq A$. Choose an arbitrary $z\in\mathbb{C}\backslash\mathbb{R}$ and write $$ \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d( \widehat E_\lambda x_k, x_j)_{\widehat H} = \left( \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d\widehat E_\lambda x_k, x_j \right)_{\widehat H} $$ $$ = \left( U^{-1} \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d\widehat E_\lambda x_k, U^{-1} x_j \right)_{L^2(\widehat M)} $$ $$ = \left( \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d U^{-1} \widehat E_\lambda U \vec e_k, \vec e_j \right)_{L^2(\widehat M)} = \left( \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d E_{\lambda} \vec e_k, \vec e_j \right)_{L^2(\widehat M)} $$ \begin{equation} \label{f3_10} = \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d(E_{\lambda} \vec e_k, \vec e_j)_{L^2(\widehat M)},\qquad 0\leq k,j\leq N-1. 
\end{equation} Using~(\ref{f3_2}) we can write $$ (E_{\lambda} \vec e_k, \vec e_j)_{L^2(\widehat M)} = \widehat m_{k,j}(\lambda), $$ and therefore \begin{equation} \label{f3_11} \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d( P^{\widehat H}_H \widehat E_\lambda x_k, x_j)_H = \int_{\mathbb{R}_+} \frac{1}{\lambda - z} d\widehat m_{k,j}(\lambda),\qquad 0\leq k,j\leq N-1. \end{equation} By the Stieltjes-Perron inversion formula (see, e.g.,~\cite{c_14000_Akh}) we conclude that \begin{equation} \label{f3_12} \widehat m_{k,j} (\lambda) = ( P^{\widehat H}_H \widehat E_\lambda x_k, x_j)_H. \end{equation} \begin{prop} \label{p3_1} Let the matrix Stieltjes moment problem~(\ref{f1_1}) be given and conditions~(\ref{f1_4}) hold. Let $A$ be a non-negative Hermitian operator which is defined by~(\ref{f2_12}). The deficiency index of $A$ is equal to $(n,n)$, $0\leq n\leq N$. \end{prop} {\bf Proof. } Choose an arbitrary $u\in L$, $u = \sum_{k=0}^\infty c_k x_k$, $c_k\in \mathbb{C}$. Suppose that $c_k = 0$, $k\geq N+R+1$, for some $R\in \mathbb{Z}_+$. Consider the following system of linear equations: \begin{equation} \label{f3_13} -z d_k = c_k,\qquad k=0,1,...,N-1; \end{equation} \begin{equation} \label{f3_14} d_{k-N} - z d_k = c_k,\qquad k=N,N+1,N+2,...; \end{equation} where $\{ d_k \}_{k\in \mathbb{Z}_+}$ are unknown complex numbers, $z\in \mathbb{C}\backslash \mathbb{R}$ is a fixed parameter. Set $$ d_k = 0,\qquad k\geq R+1; $$ \begin{equation} \label{f3_15} d_{j} = c_{N+j} + z d_{N+j},\qquad j=R,R-1,R-2,...,0. \end{equation} For the numbers $\{ d_k \}_{k\in\mathbb{Z}_+}$ defined in this way, all equations in~(\ref{f3_14}) are satisfied. However, equations~(\ref{f3_13}) are not necessarily satisfied. Set $$ v = \sum_{k=0}^\infty d_k x_k,\ v\in L. $$ Notice that $$ (A-zE_H) v = \sum_{k=0}^\infty (d_{k-N} - z d_k) x_k, $$ where $d_{-1}=d_{-2}=...=d_{-N}=0$.
By the construction of $d_k$ we have $$ (A-zE_H) v - u = \sum_{k=0}^\infty (d_{k-N} - z d_k - c_k) x_k = \sum_{k=0}^{N-1} (-zd_k - c_k) x_k; $$ \begin{equation} \label{f3_16} u = (A-zE_H) v + \sum_{k=0}^{N-1} (zd_k + c_k) x_k,\qquad u\in L. \end{equation} Set $$ H_z := \overline{(A-zE_H) L} = (\overline{A} - zE_H) D(\overline{A}), $$ and \begin{equation} \label{f3_17} y_k := x_k - P^H_{H_z} x_k,\qquad k=0,1,...,N-1. \end{equation} Set $$ H_0 := \mathop{\rm span}\nolimits\{ y_k \}_{k=0}^{N-1}. $$ Notice that the dimension of $H_0$ is less than or equal to $N$, and $H_0\perp H_z$. From~(\ref{f3_16}) it follows that $u\in L$ can be represented in the following form: \begin{equation} \label{f3_18} u = u_1 + u_2,\qquad u_1\in H_z,\quad u_2\in H_0. \end{equation} Therefore we get $L\subseteq H_z\oplus H_0$; $H\subseteq H_z\oplus H_0$, and finally $H=H_z\oplus H_0$. Thus, $H_0$ is the corresponding defect subspace. So, the defect numbers of $A$ are less than or equal to $N$. Since the operator $A$ is non-negative, its defect numbers are equal. $\Box$ \begin{thm} \label{t3_1} Let a matrix Stieltjes moment problem~(\ref{f1_1}) be given and conditions~(\ref{f1_4}) hold. Let an operator $A$ be constructed for the moment problem as in~(\ref{f2_12}). All solutions of the moment problem have the following form \begin{equation} \label{f3_19} M(\lambda) = (m_{k,j} (\lambda))_{k,j=0}^{N-1},\quad m_{k,j} (\lambda) = ( \mathbf E_\lambda x_k, x_j)_H, \end{equation} where $\mathbf E_\lambda$ is a $\Pi$-spectral function of the operator $A$. Moreover, the correspondence between all $\Pi$-spectral functions of $A$ and all solutions of the moment problem is one-to-one. \end{thm} {\bf Proof. } It remains to prove that different $\Pi$-spectral functions of the operator $A$ produce different solutions of the moment problem~(\ref{f1_1}). Suppose to the contrary that two different $\Pi$-spectral functions produce the same solution of the moment problem.
This means that there exist two non-negative self-adjoint extensions $A_j\supseteq A$ in Hilbert spaces $H_j\supseteq H$, $j=1,2$, such that \begin{equation} \label{f3_20} P_{H}^{H_1} E_{1,\lambda} \not= P_{H}^{H_2} E_{2,\lambda}, \end{equation} \begin{equation} \label{f3_21} (P_{H}^{H_1} E_{1,\lambda} x_k,x_j)_H = (P_{H}^{H_2} E_{2,\lambda} x_k,x_j)_H,\qquad 0\leq k,j\leq N-1,\quad \lambda\in\mathbb{R}_+, \end{equation} where $\{ E_{n,\lambda} \}_{\lambda\in\mathbb{R}_+}$ are orthogonal left-continuous resolutions of unity of operators $A_n$, $n=1,2$. Set $L_N := \mathop{\rm Lin}\nolimits\{ x_k \}_{k=0}^{N-1}$. By linearity we get \begin{equation} \label{f3_22} (P_{H}^{H_1} E_{1,\lambda} x,y)_H = (P_{H}^{H_2} E_{2,\lambda} x,y)_H,\qquad x,y\in L_N,\quad \lambda\in\mathbb{R}_+. \end{equation} Denote by $R_{n,z}$ the resolvent of $A_n$, and set $\mathbf R_{n,z} := P_{H}^{H_n} R_{n,z}$, $n=1,2$. From~(\ref{f3_22}),(\ref{f3_1}) it follows that \begin{equation} \label{f3_23} (\mathbf R_{1,z} x,y)_H = (\mathbf R_{2,z} x,y)_H,\qquad x,y\in L_N,\quad z\in \mathbb{C}\backslash \mathbb{R}. \end{equation} Choose an arbitrary $z\in\mathbb{C}\backslash\mathbb{R}$ and consider the space $H_z$ defined as above. Since $$ R_{j,z} (A-zE_H) x = (A_j - z E_{H_j} )^{-1} (A_j - z E_{H_j}) x = x,\qquad x\in L=D(A),$$ we get \begin{equation} \label{f3_24} R_{1,z} u = R_{2,z} u \in H,\qquad u\in H_z; \end{equation} \begin{equation} \label{f3_25} \mathbf R_{1,z} u = \mathbf R_{2,z} u,\qquad u\in H_z,\ z\in\mathbb{C}\backslash\mathbb{R}. \end{equation} We can write $$ (\mathbf R_{n,z} x, u)_H = (R_{n,z} x, u)_{H_n} = ( x, R_{n,\overline{z}}u)_{H_n} = ( x, \mathbf R_{n,\overline{z}} u)_H, $$ \begin{equation} \label{f3_26} x\in L_N,\ u\in H_{\overline z},\ n=1,2, \end{equation} and therefore we get \begin{equation} \label{f3_27} (\mathbf R_{1,z} x,u)_H = (\mathbf R_{2,z} x,u)_H,\qquad x\in L_N,\ u\in H_{\overline z}.
\end{equation} By~(\ref{f3_16}) an arbitrary element $y\in L$ can be represented as $y=y_{ \overline{z} } + y'$, $y_{ \overline{z} }\in H_{ \overline{z} }$, $y'\in L_N$. Using~(\ref{f3_23}) and~(\ref{f3_25}) we get $$ (\mathbf R_{1,z} x,y)_H = (\mathbf R_{1,z} x, y_{ \overline{z} } + y')_H $$ $$ = (\mathbf R_{2,z} x, y_{ \overline{z} } + y')_H = (\mathbf R_{2,z} x,y)_H,\qquad x\in L_N,\ y\in L. $$ Since $\overline{L}=H$, we obtain \begin{equation} \label{f3_28} \mathbf R_{1,z} x = \mathbf R_{2,z} x,\qquad x\in L_N,\ z\in\mathbb{C}\backslash\mathbb{R}. \end{equation} For an arbitrary $x\in L$, $x=x_z + x'$, $x_z\in H_z$, $x'\in L_N$, using relations~(\ref{f3_25}),(\ref{f3_28}) we obtain \begin{equation} \label{f3_29} \mathbf R_{1,z} x = \mathbf R_{1,z} (x_z + x') = \mathbf R_{2,z} (x_z + x') = \mathbf R_{2,z} x,\qquad x\in L,\ z\in\mathbb{C}\backslash\mathbb{R}, \end{equation} and, by continuity, \begin{equation} \label{f3_30} \mathbf R_{1,z} x = \mathbf R_{2,z} x,\qquad x\in H,\ z\in\mathbb{C}\backslash\mathbb{R}. \end{equation} By~(\ref{f3_1}) this means that the $\Pi$-spectral functions coincide, and we obtain a contradiction. $\Box$ We shall recall some basic definitions and facts from~\cite{c_9000_D_M}. Let $A$ be a closed Hermitian operator in a Hilbert space $H$, $\overline{D(A)} = H$. \begin{dfn} \label{d3_1} A collection $\{ \mathcal{H}, \Gamma_1, \Gamma_2 \}$ in which $\mathcal{H}$ is a Hilbert space, $\Gamma_1, \Gamma_2 \in [D(A^*),\mathcal{H}]$, is called {\bf a space of boundary values (SBV)} for $A^*$, if \noindent (1) $(A^* f,g)_H - (f,A^* g)_H = (\Gamma_1 f,\Gamma_2 g)_{\mathcal{H}} - (\Gamma_2 f, \Gamma_1 g)_{\mathcal{H}}$, $\forall f,g\in D(A^*)$; \noindent (2) the mapping $\Gamma: f\rightarrow \{ \Gamma_1 f,\Gamma_2 f \}$ from $D(A^*)$ to $\mathcal{H}\oplus \mathcal{H}$ is surjective.
\end{dfn} Naturally associated with each SBV are self-adjoint operators $\widetilde A_1,\widetilde A_2\ (\subset A^*)$ with $$ D(\widetilde A_1) = \mathop{\rm ker}\nolimits \Gamma_1,\ D(\widetilde A_2) = \mathop{\rm ker}\nolimits \Gamma_2. $$ The operator $\Gamma_2$ restricted to the defect subspace $N_z = \mathop{\rm ker}\nolimits(A^* - zE_H)$, $z\in \rho(\widetilde A_2)$, is boundedly invertible. For every $z\in \rho(\widetilde A_2)$ set \begin{equation} \label{f3_31} \gamma(z) = \left( \Gamma_2|_{N_z} \right)^{-1} \in [\mathcal{H},N_z]. \end{equation} \begin{dfn} \label{d3_2} The operator-valued function $M(z)$ defined for $z\in\rho(\widetilde A_2)$ by \begin{equation} \label{f3_32} M(z) \Gamma_2 f_z = \Gamma_1 f_z,\qquad f_z\in N_z, \end{equation} is called {\bf a Weyl function} of the operator $A$, corresponding to the SBV $\{ \mathcal{H}, \Gamma_1, \Gamma_2 \}$. \end{dfn} The Weyl function can also be obtained from the equality: \begin{equation} \label{f3_33} M(z) = \Gamma_1 \gamma(z),\qquad z\in \rho(\widetilde A_2). \end{equation} For an arbitrary operator $\widetilde A = \widetilde A^* \subset A^*$ there exists an SBV with (\cite{c_15000_D_M}) \begin{equation} \label{f3_34} D(\widetilde A_2) = \mathop{\rm ker}\nolimits\Gamma_2 = D(\widetilde A). \end{equation} (There even exists a family of such SBVs.) An extension $\widehat A$ of $A$ is called {\bf proper} if $A\subset\widehat A\subset A^*$ and $(\widehat A^*)^* = \widehat A$. Two proper extensions $\widehat A_1$ and $\widehat A_2$ are {\bf disjoint} if $D(\widehat A_1)\cap D(\widehat A_2) = D(A)$ and {\bf transversal} if they are disjoint and $D(\widehat A_1) + D(\widehat A_2) = D(A^*)$. Suppose that the operator $A$ is non-negative, $A\geq 0$.
In this case there exist two non-negative self-adjoint extensions of $A$ in $H$, the Friedrichs extension $A_\mu$ and the Krein extension $A_M$, such that for an arbitrary non-negative self-adjoint extension $\widehat A$ of $A$ in $H$ one has: \begin{equation} \label{f3_35} (A_\mu + xE_H)^{-1} \leq (\widehat A + xE_H)^{-1} \leq (A_M + xE_H)^{-1},\qquad x\in \mathbb{R}_+. \end{equation} Recall some definitions and facts from~\cite{c_8000_Kr_Ovch},\cite{c_13000_Kr}. To the non-negative operator $A$ we associate the following operator: \begin{equation} \label{f3_36} T = (E_H - A)(E_H + A)^{-1} = -E_H + 2(E_H + A)^{-1},\qquad D(T) = (A+E_H)D(A). \end{equation} The operator $T$ is a Hermitian contraction (i.e. $\| T \| \leq 1$). Its domain is not dense in $H$ if $A$ is not self-adjoint. The defect subspace $N_{-1} = H\ominus D(T)$ has dimension equal to the defect number $n(A)$ of $A$. The inverse transformation to~(\ref{f3_36}) is given by \begin{equation} \label{f3_37} A = (E_H - T)(E_H + T)^{-1} = -E_H + 2(E_H + T)^{-1},\qquad D(A) = (T+E_H)D(T). \end{equation} Relations~(\ref{f3_36}),(\ref{f3_37}) (with $\widehat T,\widehat A$ instead of $T,A$) also establish a bijective correspondence between self-adjoint contractive extensions $\widehat T\supseteq T$ in $H$ and self-adjoint non-negative extensions $\widehat A\supseteq A$ in $H$ (\cite[p.451]{c_13000_Kr}). \noindent Consider an arbitrary Hilbert space $\widehat H \supseteq H$. It is not hard to see that relations~(\ref{f3_36}),(\ref{f3_37}) (with $\widehat T,\widehat A$ instead of $T,A$) establish a bijective correspondence between self-adjoint contractive extensions $\widehat T\supseteq T$ in $\widehat H$ and self-adjoint non-negative extensions $\widehat A\supseteq A$ in $\widehat H$, as well.
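In finite dimensions the transform~(\ref{f3_36}) and its inverse~(\ref{f3_37}) can be tested directly. The sketch below (not from the paper; it assumes Python with numpy, and all names are ours) verifies that $T = (E_H - A)(E_H + A)^{-1}$ is a Hermitian contraction for a positive semi-definite matrix $A$, and that~(\ref{f3_37}) recovers $A$.

```python
import numpy as np

# Sketch of the transform (f3_36): for a non-negative Hermitian matrix A,
# T = (I - A)(I + A)^{-1} is a Hermitian contraction; eigenvalue-wise this
# is t = (1 - lam)/(1 + lam), which maps [0, +inf) into (-1, 1].
rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X @ X.conj().T               # non-negative (PSD) Hermitian matrix
I = np.eye(n)

T = (I - A) @ np.linalg.inv(I + A)
assert np.allclose(T, T.conj().T)            # T is Hermitian
assert np.linalg.norm(T, 2) <= 1 + 1e-12     # ||T|| <= 1 (contraction)

# The inverse transform (f3_37) recovers A from T.
A_back = (I - T) @ np.linalg.inv(I + T)
assert np.allclose(A_back, A)
```

Note that in finite dimensions every Hermitian operator is self-adjoint, so this illustrates only the transform itself, not the extension theory built on it.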
There exist extremal self-adjoint contractive extensions $T_\mu$ and $T_M$ of $T$ in $H$ such that for an arbitrary self-adjoint contractive extension $\widetilde T\supseteq T$ in $H$ it holds \begin{equation} \label{f3_38} T_\mu \leq \widetilde T \leq T_M. \end{equation} Notice that \begin{equation} \label{f3_39} A_\mu = -E_H + 2(E_H + T_\mu)^{-1},\quad A_M = -E_H + 2(E_H + T_M)^{-1}. \end{equation} Set \begin{equation} \label{f3_40} C = T_M - T_\mu. \end{equation} Consider the following subspace: \begin{equation} \label{f3_41} \Upsilon = \mathop{\rm ker}\nolimits \left( C|_{ N_{-1} } \right). \end{equation} \begin{dfn} \label{d3_3} Let a closed non-negative Hermitian operator $A$ be given. We say that for the operator $A$ {\bf the completely indeterminate case} takes place if $\Upsilon = \{ 0 \}$. \end{dfn} By Theorem~1.4 in~\cite{c_16000_KO}, on the set $\{ x\in H:\ T_\mu x = T_M x \} = \mathop{\rm ker}\nolimits C$, all self-adjoint contractive extensions in a Hilbert space $\widetilde H\supseteq H$ coincide. Thus, all such extensions are extensions of the operator $T_{ext}$: \begin{equation} \label{f3_42} T_{ext} x = \left\{\begin{array}{cc} Tx, & x\in D(T)\\ T_\mu x = T_M x, & x\in \mathop{\rm ker}\nolimits C \end{array}\right.. \end{equation} Introduce the following operator: \begin{equation} \label{f3_43} A_{ext} = -E_H + 2(E_H + T_{ext})^{-1} \supseteq A. \end{equation} Thus, {\it the set of all non-negative self-adjoint extensions of $A$ coincides with the set of all non-negative self-adjoint extensions of $A_{ext}$.} Since $T_{ext,\mu} = T_\mu$ and $T_{ext,M} = T_M$, the completely indeterminate case takes place for $A_{ext}$. \begin{prop} \label{p3_2} Let $A$ be a closed non-negative Hermitian operator with finite defect numbers, and suppose that the completely indeterminate case takes place for $A$. Then the extensions $A_\mu$ and $A_M$ given by~(\ref{f3_39}) are transversal. \end{prop} {\bf Proof. } Notice that \begin{equation} \label{f3_44} D(A_M) \cap D(A_\mu) = D(A).
\end{equation} In fact, suppose that there exists $y\in D(A_M) \cap D(A_\mu)$, $y\notin D(A)$. Since $A_M\subset A^*$ and $A_\mu\subset A^*$ we have $A_M y = A_\mu y$. Set $$ g := (A_M + E_H)y = (A_\mu + E_H)y . $$ Then $$ T_M g = -g + 2(E_H + A_M)^{-1}g = -g + 2y, $$ $$ T_\mu g = -g + 2(E_H + A_\mu)^{-1}g = -g + 2y, $$ and therefore $Cg = (T_M - T_\mu)g = 0$. Since $y\notin D(A)$, we get $g\in N_{-1}$. We obtain a contradiction, since the completely indeterminate case takes place for $A$. Introduce the following sets: \begin{equation} \label{f3_45} D_M := (A_M + E_H)^{-1} N_{-1},\quad D_\mu := (A_\mu + E_H)^{-1} N_{-1}. \end{equation} Since $D(A_M) = (A_M + E_H)^{-1} D(T_M)$, $D(A_\mu) = (A_\mu + E_H)^{-1} D(T_\mu)$, we have \begin{equation} \label{f3_46} D_M \subset D(A_M),\ D_\mu \subset D(A_\mu), \end{equation} and \begin{equation} \label{f3_47} D_M \cap D(A) = \{ 0 \},\quad D_\mu \cap D(A) = \{ 0 \}. \end{equation} By~(\ref{f3_44}),(\ref{f3_46}) and~(\ref{f3_47}) we obtain that \begin{equation} \label{f3_48} D_M \cap D_\mu = \{ 0 \}. \end{equation} Set \begin{equation} \label{f3_49} D := D_M \dotplus D_\mu. \end{equation} By~(\ref{f3_45}) we obtain that the sets $D_M$ and $D_\mu$ have linear dimension $n(A)$. Elementary arguments show that $D$ has linear dimension $2 n(A)$. Since $D(A_\mu)\subset D(A^*)$, $D(A_M)\subset D(A^*)$, we can write \begin{equation} \label{f3_50} D(A) \dotplus D_M \dotplus D_\mu \subseteq D(A^*) = D(A) \dotplus N_z \dotplus N_{\overline{z}}, \end{equation} where $z\in \mathbb{C}\backslash \mathbb{R}$. Let $$ g_1,g_2,...,g_{2n(A)} $$ be $2n(A)$ linearly independent elements from $D$. Let \begin{equation} \label{f3_52} g_j = g_{A,j} + g_{z,j} + g_{\overline{z},j},\qquad 1\leq j\leq 2n(A), \end{equation} where $g_{A,j}\in D(A)$, $g_{z,j}\in N_z$, $g_{\overline{z},j}\in N_{\overline{z}}$. Set \begin{equation} \label{f3_53} \widehat g_j := g_j - g_{A,j},\qquad 1\leq j\leq 2n(A).
\end{equation} If for some $\alpha_j\in \mathbb{C}$, $1\leq j\leq 2n(A)$, we have $$ 0 = \sum_{j=1}^{ 2n(A) } \alpha_j \widehat g_j = \sum_{j=1}^{ 2n(A) } \alpha_j g_j - \sum_{j=1}^{ 2n(A) } \alpha_j g_{A,j}, $$ then $$ \sum_{j=1}^{ 2n(A) } \alpha_j g_j = 0, $$ and $\alpha_j = 0$, $1\leq j\leq 2n(A)$. Therefore the elements $\widehat g_j$, $1\leq j\leq 2n(A)$, are linearly independent. By~(\ref{f3_52}),(\ref{f3_53}) we have $\widehat g_j = g_{z,j} + g_{\overline{z},j}$, so they form a linear basis in the finite-dimensional subspace $N_z \dotplus N_{\overline{z}}$, whose dimension is equal to $2n(A)$. On the other hand, $\widehat g_j = g_j - g_{A,j} \in D(A) \dotplus D$. Then \begin{equation} \label{f3_54} N_z \dotplus N_{\overline{z}} \subseteq D(A) \dotplus D, \end{equation} \begin{equation} \label{f3_55} D(A^*) = D(A) \dotplus N_z \dotplus N_{\overline{z}} \subseteq D(A) \dotplus D. \end{equation} So, we get the equality \begin{equation} \label{f3_56} D(A) \dotplus D_M \dotplus D_\mu = D(A^*). \end{equation} Since $D(A) + D_M \subseteq D(A_M)$, $D_\mu \subseteq D(A_\mu)$, we get $$ D(A^*) = D(A)+D_M+D_\mu \subseteq D(A_M) + D(A_\mu). $$ Since $D(A_M) + D(A_\mu)\subseteq D(A^*)$, we get \begin{equation} \label{f3_57} D(A^*) = D(A_M) + D(A_\mu). \end{equation} From~(\ref{f3_44}) and~(\ref{f3_57}) the statement of the Proposition follows. $\Box$ We shall use the following classes of functions~\cite{c_9000_D_M}. Let $\mathcal{H}$ be a Hilbert space. Denote by $R_{\mathcal{H}}$ the class of operator-valued functions $F(z)=F^*(\overline{z})$ holomorphic in $\mathbb{C}\backslash \mathbb{R}$ with values (for $z\in \mathbb{C}_+$) in the set of maximal dissipative operators in $\mathfrak{C}(\mathcal{H})$. Completing the class $R_{\mathcal{H}}$ with ideal elements, we get the class $\widetilde R_{\mathcal{H}}$. Thus, $\widetilde R_{\mathcal{H}}$ is a collection of functions holomorphic in $\mathbb{C}\backslash \mathbb{R}$ with values (for $z\in \mathbb{C}_+$) in the set of maximal dissipative linear relations $\theta(z)=\theta^*(\overline{z})$ in $\mathcal{H}$.
The indeterminate part of the relation $\theta(z)$ does not depend on $z$, and the relation $\theta(z)$ admits the representation \begin{equation} \label{f3_58} \theta(z) = \{ < h_1, F_1(z) h_1 + h_2 >:\ h_1\in D(F_1(z)),\ h_2\in \mathcal{H}_2 \}, \end{equation} where $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$, $F_1(z) \in R_{\mathcal{H}_1}$. \begin{dfn} \cite{c_9000_D_M} \label{d3_4} An operator-valued function $F(z)\in R_{\mathcal{H}}$ belongs to the class $S_{\mathcal{H}}^{-0} (-\infty,0)$ if for all $n\in \mathbb{N}$, $z_j\in \mathbb{C}_+$, $h_j\in D(F(z_j))$ and $\xi_j\in \mathbb{C}$, the following holds: \begin{equation} \label{f3_59} \sum_{i,j=1}^n \frac{ (z_i^{-1} F(z_i) h_i, h_j) - (h_i, z_j^{-1} F(z_j) h_j) } { z_i - \overline{z_j} } \xi_i \overline{\xi_j} \geq 0. \end{equation} Completing the class $S_{\mathcal{H}}^{-0} (-\infty,0)$ with ideal elements~(\ref{f3_58}), we obtain the class $\widetilde S_{\mathcal{H}}^{-0} (-\infty,0)$. \end{dfn} From Theorem~9 in~\cite[p.46]{c_9000_D_M}, taking into account Proposition~\ref{p3_2}, we have the following conclusion (see also Remark~17 in~\cite[p.49]{c_9000_D_M}): \begin{thm} \label{t3_2} Let $A$ be a closed non-negative Hermitian operator in a Hilbert space $H$, and suppose that the completely indeterminate case holds for $A$. Let $\{ \mathcal{H}, \Gamma_1, \Gamma_2 \}$ be an arbitrary SBV for $A$ such that $\widetilde A_2 = A_\mu$, and let $M(z)$ be the corresponding Weyl function. Then the formula \begin{equation} \label{f3_60} \mathbf{R}_z = (A_\mu - zE_H)^{-1} - \gamma(z) (\tau(z)+M(z)-M(0))^{-1} \gamma^*(\overline{z}),\quad z\in \mathbb{C}\backslash \mathbb{R}, \end{equation} establishes a bijective correspondence between $\mathbf{R}_z\in \Omega^0(-\infty,0)(A)$ and $\tau\in \widetilde S_{\mathcal{H}}^{-0} (-\infty,0)$. The function $\tau(z)\equiv \tau = \tau^*$ in~(\ref{f3_60}) corresponds to the canonical $\Pi$-resolvents and only to them. \end{thm} Now we can state our main result.
\begin{thm} \label{t3_3} Let a matrix Stieltjes moment problem~(\ref{f1_1}) be given and let conditions~(\ref{f1_4}) hold. Let an operator $A$ be the closure of the operator constructed for the moment problem in~(\ref{f2_12}). Then the following statements are true: 1) The moment problem~(\ref{f1_1}) is determinate if and only if the Friedrichs extension $A_\mu$ and the Krein extension $A_M$ coincide: $A_\mu = A_M$. In this case the unique solution of the moment problem is generated by the orthogonal spectral function $\mathbf E_\lambda$ of $A_\mu$ by formula~(\ref{f3_19}); 2) If $A_\mu\not= A_M$, define the extended operator $A_{ext}$ for $A$ as in~(\ref{f3_43}). Let $\{ \mathcal{H}, \Gamma_1, \Gamma_2 \}$ be an arbitrary SBV for $A_{ext}$ such that $\widetilde A_2 = (A_{ext})_\mu$, and let $M(z)$ be the corresponding Weyl function. All solutions of the moment problem~(\ref{f1_1}) have the following form: \begin{equation} \label{f3_61} M(\lambda) = (m_{k,j} (\lambda))_{k,j=0}^{N-1}, \end{equation} where $$ \int_{\mathbb{R}_+} \frac{dm_{k,j} (\lambda)}{\lambda - z} = \left( (A_\mu - zE_H)^{-1} x_k,x_j \right)_H $$ \begin{equation} \label{f3_62} - \left( \gamma(z) (\tau(z)+M(z)-M(0))^{-1} \gamma^*(\overline{z}) x_k, x_j \right)_H,\quad z\in \mathbb{C}\backslash \mathbb{R}, \end{equation} where $\tau\in \widetilde S_{\mathcal{H}}^{-0} (-\infty,0)$. Moreover, the correspondence between all $\tau\in \widetilde S_{\mathcal{H}}^{-0} (-\infty,0)$ and all solutions of the moment problem~(\ref{f1_1}) is one-to-one. \end{thm} {\bf Proof. } The statements of the Theorem follow directly from Theorems~\ref{t3_1} and~\ref{t3_2}. $\Box$ \begin{center} \large\bf The matrix Stieltjes moment problem: a description of all solutions. \end{center} \begin{center} \bf S.M. Zagorodnyuk \end{center} We describe all solutions of the matrix Stieltjes moment problem in the general case (no conditions besides solvability are assumed).
We use Krein's formula for the generalized $\Pi$-resolvents of positive Hermitian operators in the form of V.A.~Derkach and M.M.~Malamud. MSC: 44A60; Secondary 30E05 Key words: moment problem, positive definite kernel, spectral function. \end{document}
\begin{document} \title[\hfil Impulsive hematopoiesis models with delays] {On the existence and exponential attractivity of a unique positive almost periodic solution to an impulsive hematopoiesis model with delays} \author[T.T. Anh, T.V. Nhung \& L.V. Hien] {Trinh Tuan Anh, Tran Van Nhung and Le Van Hien} \address{Trinh Tuan Anh \newline Department of Mathematics, Hanoi National University of Education\\ 136 Xuan Thuy Road, Hanoi, Vietnam} \email{anhtt@hnue.edu.vn} \address{Tran Van Nhung \newline Vietnam State Council for Professor Promotion, No. 1, Dai Co Viet Road, Hanoi, Vietnam} \email{tvnhung@moet.edu.vn} \address{Le Van Hien\newline Department of Mathematics, Hanoi National University of Education\\ 136 Xuan Thuy Road, Hanoi, Vietnam} \email{hienlv@hnue.edu.vn} \thanks{Accepted for publication in AMV} \maketitle \begin{abstract} In this paper, a generalized model of hematopoiesis with delays and impulses is considered. By employing the contraction mapping principle and a novel type of impulsive delay inequality, we prove the existence of a unique positive almost periodic solution of the model. It is also proved that, under the conditions proposed in this paper, the unique positive almost periodic solution is globally exponentially attractive. A numerical example is given to illustrate the effectiveness of the obtained results.
\end{abstract} \keywords{Hematopoiesis model; almost periodic solution; impulsive systems.} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}{Lemma}[section] \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \newtheorem{remark}{Remark}[section] \newtheorem{example}{Example}[section] \allowdisplaybreaks \section{Introduction} The nonlinear delay differential equation \begin{equation}\label{eq1} \dot x(t)=-ax(t)+\frac{b}{1+x^n(t-\tau)},\quad n>0, \end{equation} where $a,b,\tau$ are positive constants, proposed by Mackey and Glass \cite{MG}, has been used as an appropriate model for the dynamics of hematopoiesis (blood cell production) \cite{BBI2,GL,MG}. In medical terms, $x(t)$ denotes the density of mature cells in blood circulation at time $t$, and $\tau$ is the time delay between the production of immature cells in the bone marrow and their maturation for release into the circulating bloodstream. Periodic and almost periodic phenomena arise in many natural problems of real-world applications \cite{A,BBI2,F,HLSC,LZ,LD,Sm,WLX,WZh}. Compared with periodicity, almost periodicity is more frequent in nature and much more complicated to study for such models \cite{Stamov,WZ}. On the other hand, many dynamical systems describing real phenomena depend on their history as well as undergo abrupt changes in their states. Models of this kind are best described by impulsive delay differential equations \cite{BS,SP,Stamov}. A great deal of effort has been devoted to studying the existence and asymptotic behavior of almost periodic solutions of \eqref{eq1} and its generalizations due to their practical significance. We refer the reader to \cite{HW,Liu,GYZ,WZ,WLZ,ZYW} and the references therein.
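As a purely illustrative aside (not part of the original analysis), the qualitative behavior of \eqref{eq1} can be explored with a simple explicit Euler scheme for delay differential equations; all parameter values below are hypothetical choices, not taken from the paper.

```python
import math

def simulate_mackey_glass(a=2.0, b=1.0, n=4.0, tau=1.0,
                          x0=0.5, t_end=50.0, dt=0.01):
    """Explicit Euler scheme for x'(t) = -a x(t) + b/(1 + x(t - tau)^n),
    with constant history x(s) = x0 for s <= 0."""
    d = int(round(tau / dt))          # delay measured in time steps
    steps = int(round(t_end / dt))
    xs = [x0] * (d + 1)               # history buffer; xs[-1] is the current value
    for _ in range(steps):
        x_now = xs[-1]
        x_del = xs[-d - 1]            # x(t - tau)
        xs.append(x_now + dt * (-a * x_now + b / (1.0 + x_del ** n)))
    return xs

traj = simulate_mackey_glass()
# Since 0 <= b/(1 + x^n) <= b, an Euler step with 1 - a*dt > 0 keeps the
# iterates positive and below max(x0, b/a), mirroring the biological
# meaning of x(t) as a nonnegative cell density.
assert all(x > 0 for x in traj)
assert max(traj) <= max(0.5, 1.0 / 2.0) + 1e-12
```

With these hypothetical parameters the trajectory settles near the positive equilibrium of $ax=b/(1+x^n)$, which is roughly $0.476$ here.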
Particularly, in \cite{WZ}, Wang and Zhang investigated the existence, nonexistence and uniqueness of a positive almost periodic solution of the following model \begin{equation}\label{eq2} \dot x(t)=-a(t)x(t)+\frac{b(t)x(t-\tau(t))}{1+x^n(t-\tau(t))},\quad n>1, \end{equation} by using a new fixed point theorem in cones. Very recently, using a fixed point theorem for contraction mappings combined with the Lyapunov functional method, Zhang et al. \cite{ZYW} obtained sufficient conditions for the existence and exponential stability of a positive almost periodic solution to a generalized model of \eqref{eq1} \begin{equation}\label{eq3} \dot x(t)=-a(t)x(t)+\sum_{i=1}^m\frac{b_i(t)}{1+x^n(t-\tau_i(t))},\quad n>0. \end{equation} By employing a novel argument, a delay-independent criterion was established in \cite{Liu} ensuring the existence, uniqueness, and global exponential stability of positive almost periodic solutions of a non-autonomous delayed model of hematopoiesis with almost periodic coefficients and delays. In \cite{ANS}, Alzabut et al. considered the following model of hematopoiesis with impulses \begin{equation}\label{eq4} \begin{aligned} &\dot x(t)=-a(t)x(t)+\frac{b(t)}{1+x^n(t-\tau)},\; t\geq 0,\; t\neq t_k,\\ &\Delta x(t_k)=\gamma_kx\left(t_k^-\right)+\delta_k,\; k\in \mathbb{N}, \end{aligned} \end{equation} where $t_k$ represents the instant at which the density suffers an increment of $\delta_k$ units and $\Delta x(t_k)=x\left(t_k^+\right)-x\left(t^-_k\right)$. The density of mature cells in blood circulation decreases at the prescribed instants $t_k$ due to medication, in proportion to the density at time $t_k^-$. By employing the contraction mapping principle and applying the Gronwall-Bellman inequality, sufficient conditions which guarantee the existence and exponential stability of a positive almost periodic solution of system \eqref{eq4} were given in \cite{ANS} as follows.
\begin{theorem}[\cite{ANS}]\label{thm1} Assume that \begin{itemize} \item[\rm(C1)] The function $a\in C\left(\mathbb{R}^+,\mathbb{R}^+\right)$ is almost periodic in the sense of Bohr and there exists a positive constant $\mu$ such that $a(t)\geq \mu$. \item[\rm(C2)] The sequence $\{\gamma_k\}$ is almost periodic and $-1\leq \gamma_k \leq 0,\ k\in \mathbb{N}$. \item[\rm(C3)] The set of sequences $\{t_k^p\}$ is uniformly almost periodic and there exists a positive constant $\eta $ such that $\inf_{k\in \mathbb{N}}t^1_k=\eta$, where $0<\sigma\leq t_k<t_{k+1}, \forall k\in\mathbb{N}$, $\lim_{k\to \infty}t_k=\infty$ and $t_k^p=t_{k+p}-t_k, k,p\in \mathbb{N}$. \item[\rm(C4)] The function $b\in C\left(\mathbb{R}^+,\mathbb{R}^+\right)$ is almost periodic in the sense of Bohr, $b(0)=0$, and there exists a positive constant $\nu$ such that $\sup_{t\in \mathbb{R}^+}|b(t)|< \nu$. \item[\rm(C5)] The sequence $\{\delta_k\}$ is almost periodic and there exists a constant $\delta>0$ such that $\sup_{k\in\mathbb{N}}|\delta_k|<\delta$. \end{itemize} If $\nu <\mu$, then equation \eqref{eq4} has a unique positive almost periodic solution. \end{theorem} Unfortunately, the above theorem is incorrect. To see this, let us consider the following example. \begin{example}\label{exam1} Consider the following equation \begin{equation}\label{eq5} \dot x(t)= - x(t), \quad t\geq 0, t\neq k \in \mathbb{N},\quad \Delta x(k)=-1, \; k\in \mathbb{N}. \end{equation} Note that \eqref{eq5} is a special case of \eqref{eq4}. Moreover, we can easily see that equation \eqref{eq5} satisfies conditions (C1)-(C5), where $t_k=k, \gamma_k=0$ and $\delta_k=-1$. Suppose that system \eqref{eq5} has a positive almost periodic solution $x^*(t)$. It is obvious that \begin{equation*} x^*(t)=e^{-t}x^*(0)-\sum_{k\in\mathbb{N},\; k\leq t}e^{-(t-k)},\quad t>0.
\end{equation*} For any positive integer $n$, we have \begin{equation*} 0<x^*(n)=e^{-n}x^*(0)-e^{-1}\left(\frac{1-e^{-n}}{1-e^{-1}}\right) \longrightarrow \frac{-e^{-1}}{1-e^{-1}}<0 \text{ as } n\to \infty, \end{equation*} which yields a contradiction. This shows that \eqref{eq5} has no positive almost periodic solution. Thus, Theorem \ref{thm1} is incorrect, and Theorem 3.2 in \cite{ANS} is also incorrect. \end{example} Motivated by the aforementioned discussions, in this paper we consider a generalized model of hematopoiesis with delays, harvesting terms \cite{BBI1,LM,Long} and impulses of the form \begin{equation}\label{eq6} \begin{split} &\begin{aligned}\dot x(t)=-a(t)x(t)&+\sum_{i=1}^m\bigg[\frac{b_i(t)}{1+x^{\alpha_i}(t-\tau_i(t))}\\ &+c_i(t)\int_0^{T}\frac{v_i(s)}{1+x^{\beta_i}(t-s)}ds-H_i(t,x(t-\sigma_i(t)))\bigg],\quad t\neq t_k, \end{aligned}\\ &\Delta x(t_k)=x\left(t_k^+\right)-x\left(t^-_k\right) =\gamma_kx\left(t_k^-\right)+\delta_k, \quad k\in \mathbb{Z}, \end{split} \end{equation} where $m$ is a given positive integer; $a, b_i,c_i:\mathbb{R}\to \mathbb{R}, i\in\underline{m}:=\{1,2,\ldots,m\}$, are nonnegative functions; $H_i:\mathbb{R}\times \mathbb{R}\to \mathbb{R}_+, i\in\underline{m},$ are nonnegative functions representing harvesting terms; $\tau_i(t),\sigma_i(t)\geq 0, i\in\underline{m}$, are time delays; $\alpha_i, \beta_i, i\in\underline{m},$ are positive numbers and $T>0$ is a constant; $\gamma_k, \delta_k, k\in \mathbb{Z},$ are constants; $v_i(t), i\in\underline{m},$ are nonnegative integrable functions on $[0,T]$ with $\int_0^Tv_i(s)ds=1$; $\{t_k\}, k\in\mathbb{Z}$, is an increasing sequence of fixed impulsive points with $\lim_{k\to \pm\infty}t_k=\pm\infty$. The main goal of the present paper is to establish conditions for the existence of a unique positive almost periodic solution of model \eqref{eq6}.
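The failure exhibited in Example \ref{exam1} is easy to confirm numerically. The sketch below (purely illustrative, not part of the original argument; the starting value $x_0=10$ is arbitrary) iterates the exact between-jump solution of \eqref{eq5}, namely $x(k+1)=e^{-1}\bigl(x(k)-1\bigr)$, and checks that every solution with positive initial value eventually becomes negative and converges to $-e^{-1}/(1-e^{-1})$.

```python
import math

def left_values(x0, n_steps):
    """Exact left-hand values x(k) of the solution of
    x'(t) = -x(t) (t != k), x(k+) = x(k) - 1, with x(0+) = x0:
    between consecutive jumps x decays by the factor e^{-1},
    so x(k+1) = e^{-1} (x(k) - 1)."""
    vals, y = [], x0
    for _ in range(n_steps):
        y = math.exp(-1.0) * (y - 1.0)
        vals.append(y)
    return vals

vals = left_values(x0=10.0, n_steps=80)
limit = -math.exp(-1.0) / (1.0 - math.exp(-1.0))   # fixed point of the recursion
assert any(v < 0 for v in vals)         # the solution becomes negative
assert abs(vals[-1] - limit) < 1e-9     # and converges to the negative limit
```

The recursion is a contraction with rate $e^{-1}$, so its unique fixed point, which is negative, attracts every initial value; this is the numerical counterpart of the contradiction in Example \ref{exam1}.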
It is also proved that, under the proposed conditions, the unique positive almost periodic solution of \eqref{eq6} is globally exponentially attractive. The rest of this paper is organized as follows. Section 2 introduces some notations, basic definitions and technical lemmas. Main results on the existence and exponential attractivity of a unique positive almost periodic solution of \eqref{eq6} are presented in Section 3. An illustrative example is given in Section 4. The paper ends with the conclusion and cited references. \section{Preliminaries} Let $\{t_k\}_{k\in \mathbb{Z}}$ be a fixed sequence of real numbers satisfying $t_k<t_{k+1}$ for all $k\in \mathbb{Z}$ and $\lim_{k\to \pm\infty}t_k=\pm\infty$. Let $X$ be an interval of $\mathbb{R}$. Denote by $PLC(X,\mathbb{R})$ the space of all piecewise left continuous functions $\phi: X\to \mathbb{R}$ with points of discontinuity of the first kind at $t=t _k$, $k\in \mathbb{Z}$. The following notations will be used in this paper. For bounded functions $f:\mathbb{R}\to \mathbb{R}$, $F:\mathbb{R}\times \mathbb{R}_+\to \mathbb{R}$ and a bounded sequence $\{ z_k\}$, we set \begin{align*} &f_L=\inf_{t\in \mathbb{R}}f(t),\; f_M=\sup_{t\in \mathbb{R}}f(t),\\ &F_L=\inf_{(t,x)\in \mathbb{R}\times\mathbb{R}_+}F(t,x),\; F_M=\sup_{(t,x)\in \mathbb{R}\times \mathbb{R}_+}F(t,x),\\ &z_L=\inf_{k\in \mathbb{Z}}z_k,\; z_M=\sup_{k\in \mathbb{Z}}z_k. \end{align*} The following definitions are borrowed from \cite{SP}. \begin{definition}[\cite{SP,Stamov}]\label{def1} The set of sequences $\{t_k^p\}$, where $t_k^p=t_{k+p}-t_k, p, k\in \mathbb{Z}$, is said to be uniformly almost periodic if for any positive number $\epsilon$, there exists a relatively dense set of $\epsilon$-almost periods common for all sequences.
\end{definition} \begin{definition}[\cite{LZ,SP}]\label{def2} A function $\phi \in PLC(\mathbb{R},\mathbb{R})$ is said to be almost periodic if the following conditions hold \begin{itemize} \item[(i)] The set of sequences $\{t_k^p\}$ is uniformly almost periodic. \item[(ii)] For any $\epsilon>0$, there exists $\delta=\delta(\epsilon)>0$ such that, if $t,\bar t$ belong to the same interval of continuity of $\phi (t)$ and $|t-\bar t|<\delta$, then $|\phi (t)-\phi (\bar t)|<\epsilon$. \item[(iii)] For any $\epsilon>0$, there exists a relatively dense set $\Omega$ of $\epsilon$-almost periods such that, if $\omega \in \Omega$, then $|\phi (t+\omega )-\phi (t)|<\epsilon$ for all $t\in \mathbb{R}$, $k\in \mathbb{Z}$ satisfying $|t-t_k|>\epsilon$. \end{itemize} \end{definition} For equation \eqref{eq6}, we introduce the following assumptions \begin{itemize} \item[(A1)] The function $a(t)$ is almost periodic in the sense of Bohr and $a_L>0$. \item[(A2)] The functions $ b_i(t),c_i(t), i\in\underline{m}$, are nonnegative and almost periodic in the sense of Bohr. \item[(A3)] The functions $H_i(t,x), i\in\underline{m}$, are bounded, nonnegative and almost periodic in the sense of Bohr in $t\in \mathbb{R}$ uniformly in $x\in \mathbb{R}_+$, and there exist positive constants $L_i$ such that \[ |H_i(t,x)-H_i(t,y)|\leq L_i|x-y|, \; \forall (t,x),(t,y)\in \mathbb{R}\times \mathbb{R}_+. \] \item[(A4)] The functions $\tau_i (t), \sigma_i(t), i\in\underline{m}$, are almost periodic in the sense of Bohr, $\dot\tau_i(t), \dot\sigma_i(t)$ are bounded, $\inf_{t\in \mathbb{R}}\left(1-\dot \tau_i(t)\right)>0$, $\inf_{t\in \mathbb{R}}\left(1-\dot \sigma_i(t)\right)>0$. \item[(A5)] The sequence $\{\delta_k\}$ is almost periodic.
\item[(A6)] The sequence $\{\gamma_k\}$ is almost periodic and satisfies \[ \gamma_L >-1, \quad \Gamma_M=\sup_{p,q\in\mathbb{Z}, p\geq q}\Gamma(q,p)<\infty,\quad \Gamma_L=\inf_{p,q\in \mathbb{Z}, p\geq q}\Gamma(q,p)>0, \] where $\Gamma (q,p)=\prod\limits_{i=q}^p(1+\gamma_i), p\geq q$. \item[(A7)] The set of sequences $\{t_k^p\}$ is uniformly almost periodic, $\eta =\inf_{k\in \mathbb{Z}}t_k^1>0$. \end{itemize} \begin{remark}\label{rm1} It should be noted that model \eqref{eq6} includes \eqref{eq4} as a special case. For that model, assumptions (A3) and (A4) can obviously be removed. Furthermore, we make assumption (A6) in order to correct condition (C2) in \cite{ANS}. \end{remark} The following lemmas will be used in the proof of our main results. \begin{lemma}[\cite{SP}]\label{lem1} Let Assumption {\rm(A7)} hold. Assume that the functions $g_i(t), i\in\underline{m}$, are almost periodic in the sense of Bohr, and that a function $\phi (t)$ and sequences $\{\delta_k\}$, $\{\gamma_k\}$ are almost periodic. Then for any $\epsilon>0$, there exist $\epsilon_1\in(0,\epsilon)$ and relatively dense sets $\Omega\subset\mathbb{R}$, $\mathcal P\subset\mathbb{Z}$ such that \begin{itemize} \item[\rm(a1)] $|\phi (t+\omega)-\phi (t)|<\epsilon, t\in \mathbb{R}, |t-t_k|>\epsilon, k\in\mathbb{Z}$; \item[\rm(a2)] $|g_i(t+\omega)-g_i(t)|<\epsilon, t\in \mathbb{R}, i\in\underline{m}$; \item[\rm(a3)] $|\gamma _{k+p}-\gamma _k|< \epsilon, |\delta _{k+p}-\delta _k|< \epsilon, |t^p_k-\omega|<\epsilon _1, \omega \in \Omega, p\in {\mathcal P}, k\in \mathbb{Z}.$ \end{itemize} \end{lemma} \begin{lemma}\label{lem2} Let Assumption {\rm(A7)} hold. Assume that the functions $f_i(t,x), i\in\underline{m}$, are almost periodic in $t\in \mathbb{R}$ uniformly in $x \in \mathbb{R}$ in the sense of Bohr, and that a function $\phi (t)$ and sequences $\{\delta_k\}$, $\{\gamma_k\}$ are almost periodic.
Then for any compact set ${\mathcal M}\subset \mathbb{R}$ and any positive number $\epsilon$, there exist $\epsilon_1\in(0,\epsilon)$ and relatively dense sets $\Omega\subset\mathbb{R}$, $\mathcal P\subset\mathbb{Z}$ such that \begin{itemize} \item[\rm(b1)] $|\phi (t+\omega)-\phi (t)|<\epsilon, t\in \mathbb{R}, |t-t_k|>\epsilon, \omega\in\Omega$; \item[\rm(b2)] $|f_i(t+\omega,x)-f_i(t,x)|<\epsilon, t\in \mathbb{R}, x\in {\mathcal M}, \omega\in\Omega, i\in\underline{m}$; \item[\rm(b3)] $|\gamma _{k+p}-\gamma _k|< \epsilon, |\delta _{k+p}-\delta _k|< \epsilon, |t^p_k-\omega|<\epsilon _1, \omega \in \Omega, p\in {\mathcal P}, k\in \mathbb{Z}.$ \end{itemize} \end{lemma} \begin{proof} The proof of this lemma is similar to that of Lemma 2.1 in \cite{Stamov}, so it is omitted here. \end{proof} \begin{lemma}\label{lem3} For given $\epsilon>\epsilon_1>0$, a real number $\omega$ and integers $k,p$ such that $|t^p_k-\omega|<\epsilon_1$, if $|t-t_i|>\epsilon$ for all $i\in \mathbb{Z}$ and $t_{k-1}<t<t_k$, then $t_{k+p-1}<t+\omega <t_{k+p}$. \end{lemma} \begin{proof} The proof is straightforward, so it is omitted here. \end{proof} \begin{lemma}\label{lem4} Let Assumptions {\rm(A6)} and {\rm (A7)} hold. If $p\in \mathbb{Z}$ satisfies $|\gamma_{i+p}-\gamma_i|\leq \epsilon$ for all $i\in \mathbb{Z}$, then \begin{equation*} \left|\Gamma (n+p, k+p)-\Gamma (n,k)\right| \leq\frac{\Gamma_M}{1+\gamma_L}(k-n+1)\epsilon,\quad \forall k, n\in\mathbb{Z}, k\geq n.
\end{equation*} \end{lemma} \begin{proof} Using the facts that $\left|e^u-e^v\right|\leq |u-v|\max\{e^u,e^v\}, \forall u,v\in\mathbb{R}$, and $\left|\ln(1+u)-\ln(1+v)\right|\leq \dfrac{1}{1+\min\{u,v\}}|u-v|, \forall u,v>-1$, from (A6) we have \begin{align*} \left|\Gamma (n+p,k+p)-\Gamma (n,k)\right| &\leq\left|\exp\bigg(\sum_{i=n+p}^{k+p}\ln(1+\gamma_i)\bigg) -\exp\bigg(\sum_{i=n}^{k}\ln(1+\gamma_i)\bigg)\right| \\ & \leq \frac{{\Gamma_M}}{1+\gamma_L} \sum\limits_{i=n}^k|\gamma_{i+p} -\gamma_i| \leq \frac{{\Gamma_M}}{1+\gamma_L}(k-n+1)\epsilon,\ k\geq n . \end{align*} The proof is completed. \end{proof} \begin{lemma}\label{lem5} Let Assumption {\rm(A7)} hold. For any $\alpha>0$ and $0<\epsilon < \eta/2$, we have \begin{equation*} \sum_{t_k<t}e^{-\alpha (t-t_k)}\leq \frac{1}{1-e^{-\alpha \eta}},\quad \sum_{t_k<t}\int_{t_k-\epsilon}^{t_k+\epsilon}e^{-\alpha (t-s)}ds \leq 2 e^{\frac{1}{2}\alpha \eta}\frac{\epsilon}{1-e^{-\alpha\eta}}. \end{equation*} \end{lemma} \begin{proof} The proof follows by some direct estimates and, thus, is omitted here. \end{proof} Now, let $\sigma =\max_{i\in\underline{m}}\{T, \tau_{iM}, \sigma_{iM}\}$; then $0<\sigma <+\infty$. Due to the biomedical significance of the model, we consider only the initial condition \begin{equation}\label{eq9} x(s)=\xi(s)\geq 0, \ s\in [\alpha-\sigma, \alpha),\ \xi(\alpha)>0,\ \xi \in PLC([\alpha-\sigma,\alpha],\mathbb{R}). \end{equation} It should be noted that problem \eqref{eq6}, \eqref{eq9} has a unique solution $x(t)=x(t;\alpha,\xi)$ defined on $[\alpha-\sigma,\infty)$ which is piecewise continuous with points of discontinuity of the first kind, namely $t_k, k\in\mathbb{Z}$, at which it is left continuous, and the following relations are satisfied \cite{SP} \begin{equation*} x\left(t^-_k\right)=x(t_k),\ \Delta x(t_k):=x\left(t_k^+\right)-x\left(t^-_k\right) =\gamma_kx\left(t_k^-\right)+\delta_k.
\end{equation*} Related to \eqref{eq6}, we consider the following linear equation \begin{equation}\label{eq10} \dot y(t)=- a(t)y(t), \ t\neq t_k,\quad \Delta y(t_k)=\gamma_ky(t^-_k),\ k\in \mathbb{Z}. \end{equation} \begin{lemma}\label{lem6} Let Assumptions {\rm(A1), (A6)} and {\rm(A7)} hold. Then \begin{equation*} Be^{-a_M(t-s)} \leq H(t,s)\leq Ae^{-a_L(t-s)}, \ s \leq t, \end{equation*} where \begin{equation}\label{eq11} H(t,s)=\begin{cases} \exp\left(-\int_s ^t a(r)dr\right),\ \text{ if } \ t_{k-1}<s \leq t \leq t_k,\\ \Gamma(n,k)\exp\left(-\int_s^t a(r) dr\right),\ \text{ if } \ t_{n-1}<s \leq t_n \leq t_k<t \leq t_{k+1} \end{cases} \end{equation} is the Cauchy matrix of \eqref{eq10}, $A= \max\left\{\Gamma_M, 1\right\}$ and $B= \min\left\{\Gamma_L, 1\right\}$. \end{lemma} \begin{proof} The proof is straightforward from \eqref{eq11}, so it is omitted here. \end{proof} Similarly to Lemma 36 in \cite{SP} and Lemma 2.6 in \cite{Stamov}, we have the following lemma. \begin{lemma}\label{lem7} Let Assumptions {\rm(A1), (A6)} and {\rm(A7)} hold. Then, for given $0<\epsilon_1<\epsilon$ and relatively dense sets $\Omega\subset \mathbb{R}$, $\mathcal{P}\subset\mathbb{Z}$ satisfying \begin{itemize} \item[\rm(c1)] $|a(t+\omega)-a(t)|<\epsilon, \ t\in \mathbb{R}, \omega \in \Omega$; \item[\rm (c2)] $|\gamma_{k+p}-\gamma_k|<\epsilon,\ |t^p_k-\omega|<\epsilon_1, \omega \in \Omega,\ p\in{\mathcal P},\ k\in \mathbb{Z}$, \end{itemize} the following estimate holds \begin{equation*} \left|H(t+\omega, s+\omega)-H(t,s)\right|\leq \epsilon Me^{-\frac{1}{2}a_L(t-s)}, \end{equation*} for any $\omega \in \Omega$ and $t,s\in\mathbb{R}$ satisfying $t\geq s$, $|t-t_k|>\epsilon, \ |s-t_k| >\epsilon, k\in \mathbb{Z}$, where \begin{equation}\label{eq12} M=\max\left\{\frac{2}{a_L}, \Gamma_M\left[\frac{2}{a_L} +\frac{1}{1+\gamma_L}\left(1+\frac{2}{a_L\eta}\right)\right]\right\}. \end{equation} \end{lemma} \begin{proof} We divide the proof into two possible cases as follows.
\noindent {\it Case 1:} $t_{k-1}<s\leq t\leq t_k$. By Lemma \ref{lem3}, $t_{k+p-1}<s+\omega\leq t+\omega <t_{k+p}$. Since $|a(t+\omega)-a(t)|\leq \epsilon$, $\forall t\in \mathbb{R}$, $\epsilon <\eta/2$ and $\frac12a_L(t-s)e^{-\frac{1}{2}a_L(t-s)}<1$, it follows from \eqref{eq11}, \eqref{eq12} that \begin{equation}\label{eq13} \begin{aligned} |H(t+\omega,s+\omega)-H(t,s)| &= \left|\exp\left(-\int_{s}^{t}a(r+\omega)dr\right)-\exp\left(-\int_{s}^{t}a(r)dr\right)\right| \\ & \leq e^{-a_L(t-s)}\int_s^t|a(r+\omega)-a(r)|dr\\ &\leq \frac{2}{a_L} \epsilon e^{-\frac{1}{2}a_L(t-s)}\leq \epsilon Me^{-\frac{1}{2}a_L(t-s)}. \end{aligned} \end{equation} \noindent {\it Case 2:} $t_{n-1}<s \leq t_n \leq t_k<t \leq t_{k+1}$. Similarly, we have $$t_{n+p-1}<s+\omega<t_{n+p}\leq t+\omega <t_{k+p+1}.$$ By Lemma \ref{lem4}, from \eqref{eq11}-\eqref{eq13} we obtain \begin{align*} |H(t+\omega,s+\omega)-H(t,s)|&\leq \Gamma (n+p,k+p)\left| \exp\left(-\int_{s+\omega}^{t+\omega}a(r)dr\right) -\exp\left(-\int_{s}^{t}a(r)dr\right)\right|\\ &\quad + |\Gamma (n+p,k+p)-\Gamma (n,k)|\exp\left(-\int_{s}^{t}a(r)dr\right)\\ &\leq \frac{2\Gamma_M\epsilon}{a_L}e^{-\frac{1}{2}a_L(t-s)} +\frac{\Gamma_M\epsilon}{1+\gamma_L}(k-n+1)e^{-a_L(t-s)}\\ &\leq \frac{2\Gamma_M\epsilon}{a_L}e^{-\frac{1}{2}a_L(t-s)} +\frac{\Gamma_M\epsilon}{1+\gamma_L}\left(\frac{t-s}{\eta}+1\right)e^{-a_L(t-s)}\\ &\leq \frac{2\Gamma_M\epsilon}{a_L}e^{-\frac{1}{2}a_L(t-s)} +\frac{\Gamma_M\epsilon}{1+\gamma_L}\left(1+\frac{2}{a_L\eta}\right)e^{-\frac{1}{2}a_L(t-s)}\\ &\leq \epsilon Me^{-\frac{1}{2}a_L(t-s)} . \end{align*} The proof is completed. \end{proof} It is worth noting that the proof of Lemma \ref{lem7} is different from those in \cite{ANS,SP,Stamov}. By employing Lemma \ref{lem4}, we obtain a new bound for the constant $M$ given in \eqref{eq12}.
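The bound of Lemma \ref{lem4} can also be sanity-checked numerically. The sketch below is illustrative only: the sequence $\gamma_k=0.2+0.05\sin k$, the shift $p=3$ and the index window are hypothetical choices, and the suprema in (A6) are approximated over that finite window.

```python
import math

# Hypothetical almost periodic sequence gamma_k and shift p.
gamma = [0.2 + 0.05 * math.sin(k) for k in range(60)]
p = 3

def Gamma(q, r):
    """Gamma(q, r) = prod_{i=q}^{r} (1 + gamma_i)."""
    prod = 1.0
    for i in range(q, r + 1):
        prod *= 1.0 + gamma[i]
    return prod

# epsilon, Gamma_M and gamma_L computed over the finite window
# (the window covers every index used in the check below).
eps = max(abs(gamma[i + p] - gamma[i]) for i in range(len(gamma) - p))
Gamma_M = max(Gamma(q, r) for q in range(40) for r in range(q, 40))
gamma_L = min(gamma)

# Verify |Gamma(n+p, k+p) - Gamma(n, k)| <= Gamma_M/(1+gamma_L)*(k-n+1)*eps.
for n in range(30):
    for k in range(n, 30):
        lhs = abs(Gamma(n + p, k + p) - Gamma(n, k))
        rhs = Gamma_M / (1.0 + gamma_L) * (k - n + 1) * eps
        assert lhs <= rhs + 1e-12
```

The check passes for every window tested, in line with the elementary inequalities used in the proof of Lemma \ref{lem4}.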
\begin{lemma}[\cite{L}]\label{lem8} Assume that there exist constants $R,S>0,\tau\geq 0$, $T_0\in\mathbb{R}$ and a function $y\in PLC([T_0-\tau,\infty),\mathbb{R}^+)$ satisfying \begin{itemize} \item[\rm(d1)] $\Delta y(t_k)\leq \gamma_ky(t^-_k)$ for $t_k\geq T_0$, where $\gamma_k> -1$ and $\max_{ t_k\geq T_0}\left\{(\gamma_k+1)^{-1},1\right\} <\dfrac{R}{S}$; \item[\rm(d2)] $D^+y(t)\leq -Ry(t)+S\overline{y}(t)$ for $t\geq T_0, t\neq t_k$, where $\overline{y}(t)=\sup_{t-\tau \leq s \leq t}y(s)$ and $D^+$ denotes the upper-right Dini derivative; \item[\rm(d3)] $\tau\leq t_k-t_{k-1}$ for all $k\in \mathbb{Z}$ satisfying $t_k\geq T_0$. \end{itemize} Then \begin{equation*} y(t)\leq\overline{y}(T_0)\bigg(\prod_{T_0< t_k \leq t}(\gamma_k+1)\bigg)e^{-\lambda (t-T_0)}, \forall t\geq T_0, \end{equation*} where $0<\lambda \leq R-S \max_{ t_k\geq T_0}\left\{(\gamma_k+1)^{-1},1\right\}e^{\lambda \tau}.$ \end{lemma} \section{Main results} Let us set $\mathcal{D}_1=\Big\{\phi\in PLC(\mathbb{R},\mathbb{R}): \phi \text{ is almost periodic}, \phi(t)\geq 0 \text{ for all } t\in \mathbb{R}\Big\}$ and $\|\phi\|=\sup _{t\in \mathbb{R}}|\phi (t)|$. We define an operator $F: \mathcal{D}_1\to PLC(\mathbb{R},\mathbb{R})$ as follows \begin{equation}\label{eq14} \begin{aligned} F\phi(t)=\int_{-\infty}^tH(t,s)\sum_{i=1}^m\bigg\{ &\frac{b_i(s)}{1+\phi^{\alpha_i}(s-\tau_i(s))} +c_i(s)\int_0^T\frac{v_i(r)}{1+\phi^{\beta_i}(s-r)}dr\\ &-H_i(s,\phi(s-\sigma_i(s)))\bigg\}ds+\sum_{t_k<t}H(t,t_k)\delta_k. \end{aligned} \end{equation} It can be verified that $x^*(t)=\phi (t)$, with $\phi\in\mathcal{D}_1$, is an almost periodic solution of \eqref{eq6} if and only if $F\phi = \phi$.
We define the following constants \begin{equation}\label{eq15} \begin{aligned} \underline{\delta}&= \inf_{k\in\mathbb{Z}}|\delta_k|, \ \overline{\delta} = \sup_{k\in\mathbb{Z}}|\delta_k|, \ \overline{\eta} =\sup_{k\in \mathbb{Z}}t^1_k,\\ M_1&=\frac{A}{a_L}\sum_{i=1}^m\left(b_{iM}+c_{iM}-H_{iL}\right)+\frac{A\delta_M}{1-e^{-a_L\eta}},\\ M_2&= \begin{cases}\displaystyle \frac{B}{a_M}\sum_{i=1}^m\bigg(\frac{b_{iL}}{1+M_1^{\alpha_i}} +\frac{c_{iL}}{1+M_1^{\beta_i}}-H_{iM}\bigg) +\frac{B\delta_Le^{-a_M{\overline{\eta}}}}{1-e^{-a_M{ \bar \eta}}}\ \text{ if } \delta_L\geq 0,\\ \frac{B}{a_M}\sum_{i=1}^m\bigg(\dfrac{b_{iL}}{1+M_1^{\alpha_i}} +\frac{c_{iL}}{1+M_1^{\beta_i}}-H_{iM}\bigg) +\frac{A\delta_L}{1-e^{-a_L \eta}}\ \text{ if } \delta_L<0. \end{cases} \end{aligned} \end{equation} \begin{lemma}\label{lem9} Let Assumptions {\rm(A1)-(A7)} hold. If $\phi\in\mathcal{D}_1$ then \[ M_2\leq F\phi (t)\leq M_1,\ \forall t\in \mathbb{R}. \] \end{lemma} \begin{proof} Let $\phi \in\mathcal{D}_1$. By Lemma \ref{lem5} and Lemma \ref{lem6}, from \eqref{eq14} we have \begin{equation}\label{eq16} \begin{aligned} F\phi (t)&\leq\int_{-\infty}^tAe^{-a_L(t-s)}\sum_{i=1}^m\left(b_{iM}+c_{iM}-H_{iL}\right)ds +A\delta_M\sum_{t_k<t}e^{-a_L(t-t_k)}\\ &\leq \frac{A}{a_L}\sum_{i=1}^m\left(b_{iM}+c_{iM}-H_{iL}\right) +\frac{A\delta_M}{1-e^{-a_L\eta}}=M_1,\ \forall t\in \mathbb{R}. \end{aligned} \end{equation} For each $t\in \mathbb{R}$, let $n_0$ be an integer such that $t_{n_0}<t\leq t_{n_0+1}$. If $\delta_L\geq 0$ then, by Lemma \ref{lem6}, it follows from the fact $(n-k)\eta \leq t_n-t_k\leq (n-k)\bar \eta,\ \forall k\leq n$ that \begin{equation}\label{eq17} \begin{aligned} \sum_{t_k<t}H(t,t_k)\delta_k &\geq \sum_{t_k<t}B\delta_L e^{-a_M (t-t_k)} \geq \sum_{t_k\leq t_{n_0}}B\delta_Le^{-a_M (t_{n_0+1}-t_k)}\\ &=\sum_{k\leq n_0}B\delta_Le^{-a_M (t_{n_0+1}-t_k)} \geq\sum_{q=1}^{\infty}B\delta_Le^{-a_M \bar \eta q} =\frac{B\delta_Le^{-a_M\bar \eta}}{1-e^{-a_M\bar \eta}}. 
\end{aligned} \end{equation} If $\delta_L<0$ then from (A7) and Lemma \ref{lem6}, we have \begin{equation}\label{eq18} \begin{aligned} \sum_{t_k<t}H(t,t_k)\delta_k&\geq \sum_{t_k<t}A\delta_L e^{-a_L(t-t_k)} \geq \sum_{t_k\leq t_{n_0}}A\delta_L e^{-a_L(t_{n_0}-t_k)}\\ &\geq \sum_{q=0}^\infty A\delta_Le^{-a_Lq\eta}=\frac{A\delta_L}{1-e^{-a_L\eta}}. \end{aligned} \end{equation} From \eqref{eq17} and \eqref{eq18} we obtain \begin{equation}\label{eq19} \begin{aligned} F\phi(t)\geq \frac{B}{a_M}\sum_{i=1}^m\bigg(\frac{b_{iL}}{1+M_1^{\alpha_i}} +\frac{c_{iL}}{1+M_1^{\beta_i}}-H_{iM}\bigg)+\sum_{t_k<t}H(t,t_k)\delta_k\geq M_2. \end{aligned} \end{equation} The proof is completed. \end{proof} Now we are in a position to present our main results. \begin{theorem}\label{thm2} Under Assumptions {\rm(A1)-(A7)}, if $\phi\in \mathcal{D}_1$ then $F\phi (t)$ is almost periodic. \end{theorem} \begin{proof} Let $\phi \in\mathcal{D}_1$. For given $\epsilon\in(0, \eta/2)$, there exists $0<\delta <\epsilon/2$ such that, if $t,\bar t$ belong to the same interval of continuity of $\phi(t)$, then \begin{equation}\label{eq20} \left|\phi(t)-\phi\left(\bar t\right)\right|<\epsilon, \; |t-\bar t|<\delta.
\end{equation} By Lemma \ref{lem2} and Lemma \ref{lem7}, there exist $0< \epsilon _1<\delta$, relatively dense sets $\Omega\subset\mathbb{R}$, $\mathcal{P}\subset\mathbb{Z}$ such that, for all $\omega\in\Omega$, we have \begin{equation}\label{eq21} \begin{aligned} &|H(t+\omega, s+\omega)-H(t,s)|\leq \delta Me^{-\frac{1}{2}a_L(t-s)},\ t\geq s, |t-t_k|>\delta, |s-t_k| >\delta;\\ & |\phi (t+\omega)-\phi (t)|<\delta, \ t\in \mathbb{R}, |t-t_k|>\delta, k\in \mathbb{Z};\\ &|H_i(t+\omega,x)-H_i(t,x)|<\delta, \ t\in \mathbb{R}, x\in [\phi_L,\|\phi\|], i\in\underline{m};\\ &|a(t+\omega)-a(t)|<\delta, \ t\in \mathbb{R};\\ &|b_i(t+\omega)-b_i(t)|<\delta,\; |c_i(t+\omega)-c_i(t)|<\delta, \ t\in \mathbb{R}, i\in\underline{m};\\ & |\tau_i(t+\omega)-\tau_i(t)|<\delta, \; |\sigma_i(t+\omega)-\sigma_i(t)|<\delta, \ t\in \mathbb{R}, i\in\underline{m};\\ &|\gamma _{k+p}-\gamma _k|< \delta,\; |\delta _{k+p}-\delta _k|< \delta,\; |t^p_k-\omega|<\epsilon _1, p\in\mathcal{P}, k\in \mathbb{Z}. \end{aligned} \end{equation} Let $\omega \in \Omega$, $p\in \mathcal{P}$. One can easily see that \begin{equation}\label{eq22} \begin{aligned} F\phi (t+\omega)=\; &\int_{-\infty}^tH(t+\omega,s+\omega) \sum_{i=1}^m\bigg\{\frac{b_i(s+\omega)}{1+\phi^{\alpha_i}(s+\omega-\tau_i(s+\omega))}\\ &+\int_0^T\frac{c_i(s+\omega)v_i(r)}{1+\phi^{\beta_i}(s+\omega-r)}dr -H_i(s+\omega,\phi(s+\omega-\sigma_i(s+\omega)))\bigg\}ds\\ &+\sum_{t_k<t}H(t+\omega,t_{k+p})\delta_{k+p}. \end{aligned} \end{equation} We define $E_\epsilon(\{t_k\})=\{t\in \mathbb{R}:\; |t-t_k|>\epsilon, \forall k\in \mathbb{Z}\}$. 
For $t\in E_\epsilon(\{t_k\})$, $i\in\underline{m}$, let us set \begin{equation}\label{eq23} \begin{aligned} C_i&=\int_{-\infty}^t\left|\frac{H(t+\omega,s+\omega) b_i(s+\omega)}{1+\phi^{\alpha_i}(s+\omega-\tau_i(s+\omega))} -\frac{H(t,s)b_i(s)}{1+\phi^{\alpha_i}(s-\tau_i(s))}\right|ds,\\ D_i&=\int_{-\infty}^t\left|H(t+\omega,s+\omega)\int_0^T \frac{c_i(s+\omega)v_i(r)}{1+\phi^{\beta_i}(s+\omega-r)}dr -H(t,s)\int_0^T\frac{c_i(s)v_i(r)}{1+\phi^{\beta_i}(s-r)}dr\right|ds,\\ E_i&=\int_{-\infty}^t\left|H(t+\omega,s+\omega)H_i(s+\omega,\phi(s+\omega-\sigma_i(s+\omega))) -H(t,s)H_i(s,\phi(s-\sigma_i(s)))\right|ds,\\ G&=\sum_{t_k<t}\left|H(t+\omega,t_{k+p})\delta_{k+p}-H(t,t_k)\delta_k\right|, \end{aligned} \end{equation} then we have \begin{equation}\label{eq24} |F\phi(t+\omega)-F\phi(t)|\leq \sum_{i=1}^m(C_i+D_i+E_i)+G,\; t\in E_\epsilon(\{t_k\}). \end{equation} We also define \begin{equation}\label{eq25} \begin{aligned} C_{i1}&=\int_{-\infty}^t|H(t+\omega,s+\omega)-H(t,s)| \frac{|b_i(s+\omega)|}{1+\phi^{\alpha_i}(s+\omega-\tau_i(s+\omega))}ds,\\ C_{i2}&=\int_{-\infty}^tH(t,s) \frac{|b_i(s+\omega)-b_i(s)|}{1+\phi^{\alpha_i}(s+\omega-\tau_i(s+\omega))}ds,\\ C_{i3}&=\int_{-\infty}^tH(t,s)|\phi(s+\omega-\tau_i(s+\omega))-\phi(s-\tau_i(s+\omega))|ds,\\ C_{i4}&=\int_{-\infty}^tH(t,s)|\phi(s-\tau_i(s+\omega))-\phi(s-\tau_i(s))|ds \end{aligned} \end{equation} and $K_i=\sup_{\phi_L\leq x\leq \|\phi\|}\alpha_i x^{\alpha_i-1}$. It can be seen from \eqref{eq23} and \eqref{eq25} that \begin{equation}\label{eq26} C_i\leq C_{i1}+C_{i2}+b_{iM}K_i(C_{i3}+C_{i4}),\; i\in\underline{m}. 
\end{equation} By Lemma \ref{lem5} and Lemma \ref{lem6}, from \eqref{eq21}, \eqref{eq25} and the fact that $\int_{-\infty}^te^{-\frac{1}{2}a_L(t-s)}ds=2/a_L$, we have \begin{equation}\label{eq27} C_{i1}\leq \frac{2b_{iM}M}{a_L}+\sum_{t_k<t}2Ab_{iM} \int_{t_k-\epsilon}^{t_k+\epsilon}e^{-a_L(t-s)}ds\leq b_{iM} \bigg(\frac{2M}{a_L}+\frac{4Ae^{\frac{1}{2}a_L\eta}}{1-e^{-a_L\eta}}\bigg)\epsilon, \end{equation} and \begin{equation}\label{eq28} \begin{aligned} &C_{i2}\leq \int_{-\infty}^tA\epsilon e^{-a_L(t-s)}ds=\frac{A}{a_L}\epsilon,\\ &C_{i3}\leq \int_{-\infty}^t A\epsilon e^{-a_L(t-s)}ds+2A\|\phi\|\sum_{t_k<t} \int_{\{s: |s-\tau_i(s+\omega)-t_k|<\epsilon, s\leq t\}}e^{-a_L(t-s)}ds. \end{aligned} \end{equation} It should be noted that, by (A4), the functions $t-\tau_i(t)$, $i\in\underline{m}$, are strictly increasing, and thus their inverse functions $\tau^*_i(t)$ exist. For each $t\in\mathbb{R}$, set $\bar t=t-\epsilon-\tau_i(t+\omega)$; then \begin{equation}\label{eq29} t+\omega =\tau^*_i(\bar t+\omega +\epsilon).
\end{equation} Let $\underline{\lambda}_i=\inf_{s\in\mathbb{R}}\dot\tau^*_i(s)$, $\overline{\lambda}_i= \sup_{s\in\mathbb{R}}\dot\tau^*_i(s), i\in\underline{m}$, then, by (A4), $0<\underline{\lambda}_i, \overline{\lambda}_i<\infty.$ Therefore, \begin{equation}\label{eq30} \tau^*_i(\bar t+\omega+\epsilon)-\tau^*_i(t_{k}+\omega+\epsilon) \geq\underline{\lambda}_i(\bar t-t_k),\ t_k<\bar t, \end{equation} and hence, from equations \eqref{eq28}-\eqref{eq30}, we have \begin{equation}\label{eq31} \begin{aligned} C_{i3}&\leq \frac{A\epsilon}{a_L}+2A\|\phi\|\sum_{t_k<\bar t} \int_{\tau^*_i(t_k+\omega-\epsilon)-\omega}^{\tau^*_i(t_k+\omega+\epsilon)-\omega}e^{-a_L(t-s)}ds\\ & \leq \frac{A\epsilon}{a_L}+2A\|\phi\|\sum_{t_k<\bar t} e^{-a_L[\tau^*_i(\bar t+\omega+\epsilon)-\tau^*_i(t_k+\omega+\epsilon)]} \left[\tau^*_i( t_k+\omega+\epsilon)-\tau^*_i(t_k+\omega-\epsilon)\right]\\ &\leq\frac{A\epsilon}{a_L}+4A\|\phi\|\overline{\lambda}_i\epsilon \sum_{t_k<\bar t}e^{-a_L{\underline \lambda}_i(\bar t-t_k)} \leq \bigg(\frac{1}{a_L}+\frac{4\overline{\lambda}_i\|\phi\|}{1-e^{-a_L{\underline\lambda}_i\eta}}\bigg)A\epsilon. \end{aligned} \end{equation} By the same arguments used in deriving \eqref{eq31}, we obtain \begin{equation}\label{eq32} C_{i4}\leq A\epsilon\int_{-\infty}^te^{-a_L(t-s)}ds +2A\|\phi\|\sum_{t_k<t}\int_{\tau^*_i(t_k-\epsilon)}^{\tau^*_i(t_k+\epsilon)}e^{-a_L(t-s)}ds \leq \bigg(\frac{1}{a_L}+\frac{4\overline{\lambda}_i\|\phi\|}{1-e^{-a_L{\underline\lambda}_i\eta}}\bigg)A\epsilon. \end{equation} Combining \eqref{eq26}-\eqref{eq28}, \eqref{eq31} and \eqref{eq32}, we readily obtain \begin{equation}\label{eq33} C_i\leq \left[\frac{A}{a_L}+\frac{2b_{iM}(M+AK_i)}{a_L}+Ab_{iM} \bigg(\frac{4e^{\frac{1}{2}a_L\eta}}{1-e^{-a_L\eta}} +\frac{8K_i\overline{\lambda}_i\|\phi\|}{1-e^{-a_L{\underline \lambda}_i\eta}}\bigg)\right]\epsilon. 
\end{equation} Next, let us set \begin{equation}\label{eq34} \begin{aligned} D_{i1}&=\int_{-\infty}^t|H(t+\omega,s+\omega)-H(t,s)| \int_0^T\frac{v_i(r)}{1+\phi^{\beta_i}(s+\omega-r)} drds,\\ D_{i2}&=\int_{-\infty}^tH(t,s)|c_i(s+\omega)-c_i(s)| \int_0^T\frac{v_i(r)}{1+\phi^{\beta_i}(s+\omega-r)} drds,\\ D_{i3}&=\int_{-\infty}^tH(t,s)\int_0^Tv_i(r) \left|\frac{1}{1+\phi^{\beta_i}(s+\omega-r)}-\frac{1}{1+\phi^{\beta_i}(s-r)}\right|drds. \end{aligned} \end{equation} It follows from \eqref{eq23} and \eqref{eq34} that \begin{equation}\label{eq35} D_i\leq c_{iM}D_{i1}+D_{i2}+c_{iM}D_{i3},\ i\in\underline{m}. \end{equation} By Lemmas \ref{lem5} and \ref{lem6}, from \eqref{eq21} we have \begin{equation}\label{eq36} \begin{aligned} D_{i1}&\leq \int_{-\infty}^tM\epsilon e^{-\frac{1}{2}a_L(t-s)}ds +\sum_{t_k<t}2A\int_{t_k-\epsilon}^{t_k+\epsilon}e^{-a_L(t-s)}ds \leq \bigg(\frac{2M}{a_L}+\frac{4Ae^{\frac{1}{2}a_L\eta}}{1-e^{-a_L\eta}}\bigg)\epsilon,\\ D_{i2}&\leq\int_{-\infty}^tA\epsilon e^{-a_L(t-s)}ds=\frac{A}{a_L}\epsilon, \end{aligned} \end{equation} and \begin{equation}\label{eq37} \begin{aligned} D_{i3}&\leq\int_{-\infty}^t Ae^{-a_L(t-s)}\int_0^TG_i v_i(r)|\phi(s+\omega-r)-\phi(s-r)|drds\\ &\leq \int_{-\infty}^tAG_i\epsilon e^{-a_L(t-s)}ds+2AG_i\|\phi\|\int_0^Tv_i(r) \bigg(\sum_{t_k+r<t}\int_{t_k+r-\epsilon}^{t_k+r+\epsilon}e^{-a_L(t-s)}ds\bigg)dr\\ &\leq\frac{AG_i\epsilon}{a_L}+{4AG_i\|\phi\|\epsilon}\int_0^Tv_i(r) \bigg(\sum_{t_k+r<t}e^{-a_L(t-t_k-r -\epsilon)}\bigg)dr\\ &\leq\bigg(\frac{1}{a_L}+\frac{4e^{\frac{1}{2}a_L\eta}\|\phi\|}{1-e^{-a_L\eta}}\bigg)AG_i\epsilon, \end{aligned} \end{equation} where $G_i=\sup_{\phi_L\leq x\leq \|\phi\|}\beta_i x^{\beta_i-1}$. From \eqref{eq35}-\eqref{eq37}, we readily obtain \begin{equation}\label{eq38} D_i\leq\left[\frac{A+2Mc_{iM}+Ac_{iM}G_i}{a_L} +\frac{4Ac_{iM}e^{\frac{1}{2}a_L\eta}(1+G_i\|\phi\|)}{1-e^{-a_L\eta}}\right]\epsilon.
\end{equation} Now, we define \begin{equation}\label{eq39} \begin{aligned} E_{i1}&=\int_{-\infty}^t|H(t+\omega,s+\omega)-H(t,s)|H_i(s+\omega,\phi(s+\omega-\sigma_i(s+\omega)))ds, \\ E_{i2}&=\int_{-\infty}^tH(t,s)|H_i(s+\omega,\phi(s+\omega-\sigma_i(s+\omega))) -H_i(s+\omega,\phi(s-\sigma_i(s+\omega)))|ds, \\ E_{i3}&=\int_{-\infty}^tH(t,s)|H_i(s+\omega,\phi(s-\sigma_i(s+\omega))) -H_i(s+\omega,\phi(s-\sigma_i(s)))|ds, \\ E_{i4}&=\int_{-\infty}^tH(t,s)|H_i(s+\omega,\phi(s-\sigma_i(s)))- H_i(s,\phi(s-\sigma_i(s)))|ds, \end{aligned} \end{equation} then, from \eqref{eq23} and \eqref{eq39}, we have \begin{equation}\label{eq40} E_i\leq E_{i1}+E_{i2}+E_{i3}+E_{i4},\ i\in\underline{m}. \end{equation} Also using Lemma \ref{lem5} and Lemma \ref{lem6}, from \eqref{eq21} and the fact that $\int_{-\infty}^te^{-\frac{1}{2}a_L(t-s)}ds=2/a_L$, we obtain \begin{equation}\label{eq41} \begin{aligned} E_{i1}&\leq \frac{2H_{iM}M\epsilon}{a_L}+\sum_{t_k<t}2AH_{iM} \int_{t_k-\epsilon}^{t_k+\epsilon}e^{-a_L(t-s)}ds \leq 2\left(\frac{M}{a_L}+\frac{2Ae^{\frac{1}{2}a_L\eta}}{1-e^{-a_L\eta}}\right)H_{iM}\epsilon,\\ E_{i4}&\leq \int_{-\infty}^tA\epsilon e^{-a_L(t-s)}ds \leq \frac{A}{a_L}\epsilon. \end{aligned} \end{equation} Let $\underline{\xi}_i=\inf_{t\in \mathbb{R}}\dot \sigma^*_i(t),\overline{\xi}_i =\sup_{t\in \mathbb{R}}\dot \sigma^*_i(t), i\in\underline{m}$. Similarly to \eqref{eq31} and \eqref{eq32}, we readily obtain \begin{equation}\label{eq42} \begin{aligned} E_{i2}&\leq L_i\int_{-\infty}^tH(t,s)|\phi(s+\omega-\sigma_i(s+\omega))-\phi(s-\sigma_i(s+\omega))|ds\\ &\leq \left(\frac{1}{a_L}+\frac{4\overline{\xi}_i\|\phi\|}{1-e^{-a_L{\underline\xi}_i\eta}}\right) AL_i\epsilon,\\ E_{i3}&\leq L_i\int_{-\infty}^tH(t,s)|\phi(s-\sigma_i(s+\omega))-\phi(s-\sigma_i(s))|ds\\ &\leq \left(\frac{1}{a_L}+\frac{4\overline{\xi}_i\|\phi\|}{1-e^{-a_L{\underline\xi}_i\eta}}\right) AL_i\epsilon. 
\end{aligned} \end{equation} Inequalities \eqref{eq40}-\eqref{eq42} yield \begin{equation}\label{eq43} E_i\leq\bigg(\frac{A(2L_i+1)+2H_{iM}M}{a_L} +\frac{4Ae^{\frac{1}{2}a_L\eta} H_{iM}}{1-e^{-a_L\eta}} +\frac{8AL_i\overline{\xi}_i\|\phi\|}{1-e^{-a_L\underline{\xi}_i\eta}}\bigg)\epsilon. \end{equation} Let us set \begin{equation}\label{eq44} G_1= \sum_{t_k<t}|H(t+\omega,t_{k+p})-H(t,t_k)|, \quad G_2= \sum_{t_k<t}| H(t,t_k)| \end{equation} then \begin{equation}\label{eq45} G\leq\sum_{t_k<t}|H(t+\omega,t_{k+p})-H(t,t_k)||\delta_{k+p}| +\sum_{t_k<t}| H(t,t_k)||\delta_{k+p}-\delta_k| \leq \bar\delta G_1+\epsilon G_2. \end{equation} For each $t\geq t_k$, there exists a unique integer $l=l(t)$ such that $t_l<t\leq t_{l+1}$. By Lemma \ref{lem3} we have $t_{l+p}<t+\omega<t_{l+p+1}$. Thus \begin{equation}\label{eq46} \begin{aligned} G_1&\leq\sum_{t_k<t}\bigg(\Gamma_M e^{-a_L(t-t_k-\epsilon)} \left|\int_{t_{k+p}}^{t+\omega}a(s)ds-\int_{t_{k}}^{t}a(s)ds\right|\\ &\quad +e^{-a_L(t-t_k)}\left|\Gamma (k+p,l+p)-\Gamma (k,l)\right|\bigg)\\ &\leq \Gamma_M I_1 +I_2, \end{aligned} \end{equation} where \begin{equation}\label{eq47} \begin{aligned} &I_1=\sum_{t_k<t} e^{-a_L(t-t_k-\epsilon)} \left|\int_{t_{k+p}}^{t+\omega}a(s)ds-\int_{t_{k}}^{t}a(s)ds\right|,\\ &I_2= \sum_{t_k<t}e^{-a_L(t-t_k)}\left|\Gamma (k+p,l+p)-\Gamma (k,l)\right|. 
\end{aligned} \end{equation} Noting that $\epsilon <\eta/2$, $\dfrac12a_L(t-t_k)e^{-\frac{1}{2}a_L(t-t_k)}<1$ and $|t^p_k-\omega |<\epsilon_1<\epsilon$, from \eqref{eq21} and Lemma \ref{lem5} we have \begin{equation}\label{eq48} \begin{aligned} I_1&\leq \sum_{t_k<t} e^{-a_L(t-t_k-\epsilon)}\bigg(\int_{t_{k}}^{t}|a(s+\omega)-a(s)|ds +\bigg|\int_{t_{k+p}-\omega}^{t_k}a(s)ds\bigg|\bigg)\\ &\leq \epsilon\sum_{t_k<t} e^{-a_L(t-t_k-\epsilon)}(t-t_k+a_M)\\ &\leq\left(a_M\epsilon e^{\frac{1}{2}a_L\eta} +\frac{2\epsilon}{a_L}e^{\frac{1}{2}a_L\eta}\right) \sum_{t_k<t} e^{-\frac{1}{2}a_L(t-t_k)}\\ &\leq\frac{e^{\frac{1}{2}a_L\eta}}{1-e^{-\frac{1}{2}a_L\eta}} \bigg(a_M+\frac{2 }{a_L}\bigg)\epsilon. \end{aligned} \end{equation} Similarly, we obtain \begin{equation}\label{eq49} \begin{aligned} I_2&\leq \frac{\Gamma_M}{1+\gamma_L} \sum_{t_k<t}e^{-a_L(t-t_k)}(l-k+1)\epsilon \leq \frac{\Gamma_M}{1+\gamma_L}\sum_{t_k<t}e^{-a_L(t-t_k)} \bigg(\frac{t-t_k}{\eta}+1\bigg)\epsilon\\ &\leq\frac{\Gamma_M}{(1+\gamma_L)(1-e^{-\frac{1}{2}a_L\eta})} \bigg(\frac{2}{a_L\eta }+1\bigg)\epsilon. \end{aligned} \end{equation} It follows from \eqref{eq46}-\eqref{eq49} that \begin{equation}\label{eq50} G_1\leq\frac{\Gamma_M}{1-e^{-\frac{1}{2}a_L\eta}} \left[e^{\frac{1}{2}a_L\eta}\left(a_M+\frac{2}{a_L}\right) +\frac{1}{1+\gamma_L}\left(\frac{2}{a_L\eta}+1\right)\right]\epsilon. \end{equation} We also have \begin{equation}\label{eq51} G_2 \leq A\sum_{t_k<t}e^{-a_L(t-t_k)}\leq \frac{A}{1-e^{-a_L\eta}}. \end{equation} Therefore \begin{equation}\label{eq52} G\leq\frac{\epsilon\overline\delta\Gamma_M}{1-e^{-\frac{1}{2}a_L\eta}} \left[e^{\frac{1}{2}a_L\eta}\bigg(a_M+\frac{2 }{a_L}\bigg) +\frac{1}{1+\gamma_L}\bigg(\frac{2}{a_L\eta }+1\bigg)\right] +\frac{A\epsilon}{1-e^{-\frac{1}{2}a_L\eta}}.
\end{equation} We can see clearly from \eqref{eq24}, \eqref{eq33}, \eqref{eq38}, \eqref{eq43} and \eqref{eq52} that there exists a positive constant $\Lambda$ such that $|F\phi (t+\omega)-F\phi (t)|\leq \Lambda \epsilon$, for all $t\in\mathbb{R}, |t-t_k|>\epsilon$, $k\in \mathbb{Z}$. This shows that $F\phi(t)$ is almost periodic. The proof is completed. \end{proof} \begin{theorem}\label{thm3} Let Assumptions {\rm (A1)-(A7)} hold. If $M_2$, defined in \eqref{eq15}, is positive and \begin{equation}\label{eq53} \frac{A}{a_L}\sum_{i=1}^m\left(b_{iM}K^*_i+c_{iM}G^*_i+L_i\right)<1, \end{equation} where $K^*_i=\sup_{M_2\leq x\leq M_1}\alpha_ix^{\alpha_i-1},\ G^*_i =\sup_{M_2\leq x\leq M_1}\beta_ix^{\beta_i-1}$, then equation \eqref{eq6} has a unique positive almost periodic solution. \end{theorem} \begin{proof} We define $\mathcal{D}_2=\big\{\phi \in\mathcal{D}_1: M_2 \leq \phi(t)\leq M_1,\ t\in \mathbb{R}\big\}$. It is worth noting that, from Lemma \ref{lem9}, Theorem \ref{thm2} and the assumption $M_2>0$, we have $F(\mathcal{D}_2)\subset {\mathcal D}_2$. For any $\phi, \psi\in\mathcal{D}_2$, applying Lemma \ref{lem6} we obtain \begin{align*} |F\phi (t)-F\psi (t)|\leq\; &\int_{-\infty}^tH(t,s)\sum_{i=1}^m\bigg\{L_i|\phi(s-\sigma_i(s))-\psi(s-\sigma_i(s))|\\ &+c_{iM}\int_0^Tv_i(r)\left|\frac{1}{1+\phi^{\beta_i}(s-r)}-\frac{1}{1+\psi^{\beta_i}(s-r)}\right|dr \\ &+ b_{iM}\left|\frac{1}{1+\phi^{\alpha_i}(s-\tau_i(s))}-\frac{1}{1+\psi^{\alpha_i}(s-\tau_i(s))}\right|\bigg\}ds\\ \leq \; & \frac{A}{a_L}\sum_{i=1}^m\left(b_{iM}K^*_i+c_{iM}G^*_i+L_i\right)\|\phi -\psi\|. \end{align*} Therefore \begin{equation*} \|F\phi -F\psi \|\leq \frac{A}{a_L}\sum_{i=1}^m\left(b_{iM}K^*_i+c_{iM}G^*_i+L_i\right)\|\phi-\psi\| \end{equation*} which, by condition \eqref{eq53}, shows that $F$ is a contraction mapping on ${\mathcal D}_2$. Then $F$ has a unique fixed point in $\mathcal{D}_2$, namely $\phi_0$.
It should be noted that $F(\mathcal{D}_1)\subset\mathcal{D}_2$, and hence $F$ also has a unique fixed point $\phi_0$ in $\mathcal{D}_1$. This shows that \eqref{eq6} has a unique positive almost periodic solution $x^*(t)=\phi_0(t)$. The proof is completed. \end{proof} \begin{theorem}\label{thm4} Let Assumptions {\rm(A1)-(A7)} hold. If $M_2>0, \sigma\leq\eta$ and \begin{equation}\label{eq54} \frac{1}{a_L}\max\left\{A, (\gamma _L +1)^{-1}\right\} \sum_{i=1}^m\left(b_{iM}K^*_i+c_{iM}G^*_i+L_i\right)<1, \end{equation} then \eqref{eq6} has a unique positive almost periodic solution $x^*(t)$. Moreover, every solution $x(t)=x(t,\alpha,\xi)$ of \eqref{eq6} converges exponentially to $x^*(t)$ as $t\to \infty$. \end{theorem} \begin{proof} By Theorem \ref{thm3}, \eqref{eq6} has a unique positive almost periodic solution $x^*(t)$. Let $x(t)=x(t,\alpha,\xi)$ be a solution of \eqref{eq6} and \eqref{eq9}. We define $V(t)=|x(t)-x^*(t)|$; then \begin{align*} D^+V(t) & \leq -a_L|x(t)-x^*(t)|+\sum_{i=1}^m \bigg[b_{iM}K^*_i|x(t-\tau_i(t))- x^*(t-\tau_i(t))| \\ &\quad+c_{iM}G^*_i\int_0^Tv_i(s)|x(t-s)-x^*(t-s)|ds+L_i|x(t-\sigma_i(t))- x^*(t-\sigma_i(t))|\bigg]\\ &\leq -a_LV(t)+ \sum_{i=1}^m\left(b_{iM}K^*_i+c_{iM}G^*_i +L_i\right)\overline{V}(t), \; t\ne t_k,\ t\geq \alpha,\\ \Delta V(t_k)&=\gamma_k V(t^-_k), \; t_k\geq\alpha, k\in \mathbb{Z}, \end{align*} where $\overline{V}(t)=\sup_{t-\sigma\leq s \leq t}V(s)$. By Lemma \ref{lem8}, there exists a positive constant $\lambda$ such that \begin{equation*} V(t)\leq \overline{V}(\alpha)\prod_{\alpha <t_k\leq t}(\gamma_k+1)e^{-\lambda (t-\alpha)} \leq\Gamma_M\overline{V}(\alpha) e^{-\lambda (t-\alpha)},\; t\geq \alpha. \end{equation*} This shows that $x(t)$ converges exponentially to $x^*(t)$ as $t\to \infty$. The proof is completed. \end{proof} The existence and exponential stability of the positive almost periodic solution of \eqref{eq4} are presented in the following corollary, as an application of our results with $m=1$.
\begin{corollary}\label{cr1} Under the Assumptions {\rm (A1), (A2)} (with $c(t)=0$) and {\rm (A5)-(A7)}, if $M_2>0, \tau\leq\eta$ and \begin{equation}\label{eq55} \frac{1}{a_L}\max\left\{A, (\gamma _L +1)^{-1}\right\}b_MK^*<1, \end{equation} where $K^*=\sup_{M_2\leq x\leq M_1}\alpha x^{\alpha-1}$, then \eqref{eq4} has a unique positive almost periodic solution which is exponentially stable. \end{corollary} \section{An illustrative example} In this section we give a numerical example to illustrate the effectiveness of our conditions. For illustration purposes, let us consider the following equation \begin{equation}\label{eq56} \begin{split} \dot x(t)=\; &-a(t)x(t)+\frac{b(t)}{1+x^2(t-\tau(t))}+\int_0^1\frac{c(t)ds}{1+x^2(t-s)}\\ &-d(t)\frac{|x(t-\sigma(t))|}{10+|x(t-\sigma(t))|},\quad t\neq k,\\ \Delta x(k)=\;&\gamma_kx(k-0)+\delta_k, \quad k\in \mathbb{Z}, \end{split} \end{equation} where \begin{align*} &a(t)=5+|\sin(t\sqrt 2)|,\; b(t)=\frac1{10}(1+|\sin(t\sqrt 3)|),\; c(t)=\frac1{10}(1+|\cos(t\sqrt 3)|),\\ &d(t)=\frac1{20}\sin^2(t\sqrt3),\; \tau(t)=\sin^2(\frac{\sqrt3}{2}t),\; \sigma(t)=\cos^2(\frac{\sqrt3}{2}t),\\ &\gamma_{2m}=-\frac12,\ \gamma_{2m+1}=1,\ \delta_{2m}=1,\ \delta_{2m+1}= \frac12,\ m\in\mathbb{Z}. \end{align*} It should be noted that the functions $a(t), b(t), c(t), \tau(t)$ and $\sigma(t)$ are almost periodic in the sense of Bohr, $H(t,x)=d(t)\dfrac{|x|}{10+|x|}$ is almost periodic in $t\in\mathbb{R}$ uniformly in $x\in\mathbb{R}_+$, and $|H(t,x)-H(t,y)|\leq \frac12|x-y|$. Therefore, Assumptions (A1)-(A5) and (A7) are satisfied. On the other hand, $\Gamma(q,p)=\prod_{i=q}^p(1+\gamma_i)\in\{\frac12, 1, 2\}$ for any $p,q\in\mathbb{Z}, p\geq q$. Thus, $\Gamma_M=2, \Gamma_L=\dfrac12$ and Assumption (A6) is satisfied.
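Both claims about this example, namely that $\Gamma(q,p)$ only takes the values $\frac12$, $1$, $2$, and that the contraction bound of condition \eqref{eq53} stays below $1$, can be checked numerically. A minimal sketch (the constants $a_L=5$, $A=2$, $b_M=c_M=0.2$, $L=0.5$ and $M_1=2.1736$ are the ones computed for this example in the text):

```python
# Impulse coefficients of the example: gamma_{2m} = -1/2, gamma_{2m+1} = 1
gamma = lambda k: -0.5 if k % 2 == 0 else 1.0

# Gamma(q, p) = prod_{i=q}^{p} (1 + gamma_i) over a range of index windows
values = set()
for q in range(-6, 6):
    for p in range(q, 6):
        prod = 1.0
        for i in range(q, p + 1):
            prod *= 1.0 + gamma(i)
        values.add(round(prod, 12))   # rounding guards against float noise
# only the three values 1/2, 1, 2 occur, so Gamma_M = 2 and Gamma_L = 1/2

# Contraction bound of condition (eq53) with the constants of the example
a_L, A, b_M, c_M, L, M1 = 5.0, 2.0, 0.2, 0.2, 0.5, 2.1736
K_star = G_star = 2.0 * M1            # sup of 2x on [M_2, M_1] since alpha = 2
bound = (A / a_L) * (b_M * K_star + c_M * G_star + L)
print(sorted(values), bound)          # bound is about 0.8956 < 1
```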
After some computations we obtain \begin{align*} &a_L=5,\ a_M=6,\ b_L=c_L=0.1,\ b_M=c_M=0.2, L\leq 0.5, H_M=\frac1{20}, H_L=0,\\ &A=2,\ B=0.5,\ M_1=2.1736, \ M_2=0.0027,\ K^*=G^*=2M_1 \end{align*} and $\dfrac{A}{a_L}(b_MK^*+c_MG^*+L)\leq 0.8956$. By Theorem \ref{thm3}, equation \eqref{eq56} has a unique positive almost periodic solution $x^*(t)$. Furthermore, it can be seen that $\gamma_L=-\frac12$, and hence $\dfrac1{a_L}\max\{A,(\gamma_L+1)^{-1}\}(b_MK^*+c_MG^*+L)\leq 0.8956$. By Theorem \ref{thm4}, every solution $x(t,\alpha,\xi)$ of \eqref{eq56} converges exponentially to $x^*(t)$ as $t$ tends to infinity. As presented in Figure \ref{fig1}, the state trajectories of \eqref{eq56} with different initial conditions converge to the unique positive almost periodic solution of \eqref{eq56}. \begin{figure} \caption{State trajectories of \eqref{eq56} converge to the unique positive almost periodic solution} \label{fig1} \end{figure} \section{Conclusion} This paper has dealt with the existence and exponential attractivity of a unique positive almost periodic solution for a generalized model of hematopoiesis with delays and impulses. Using the contraction mapping principle and a novel type of impulsive delay inequality, new sufficient conditions have been derived ensuring that all solutions of the model converge exponentially to the unique positive almost periodic solution. \end{document}
\begin{document} \preprint{SB/F/06-339} \title{Total Quantum Zeno effect and Intelligent States for a two level system in a squeezed bath} \author{D. Mundarain$^{1}$, M. Orszag$^{2}$ and J. Stephany$^{1}$} \address{ \ ${}^{1}$ Departamento de F\'{\i}sica, Universidad Sim{\'o}n Bol{\'\i}var, Apartado Postal 89000, Caracas 1080A, Venezuela \\ ${}^{2}$ Facultad de F\'{\i}sica, Pontificia Universidad Cat\'{o}lica de Chile, Casilla 306, Santiago, Chile} \begin{abstract} In this work we show that by frequent measurements of adequately chosen observables, a complete suppression of the decay in an exponentially decaying two level system interacting with a squeezed bath is obtained. The observables for which the effect is observed depend on the squeezing parameters of the bath. The initial states which display Total Zeno Effect are intelligent states of two conjugate observables associated to the electromagnetic fluctuations of the bath. \end{abstract} \maketitle \section{Introduction} Frequent measurements modify the dynamics of a quantum system. This result of quantum measurement theory is known as the quantum Zeno effect (QZE) \cite{1,2,3,4,5} and has attracted much attention since it was first discussed. The QZE is related to the suppression of induced transitions in interacting systems or the reduction of the decay rate in unstable systems. It has also been pointed out that the opposite effect, i.e.\ the acceleration of the decay process, may be caused by frequent measurements as well; this effect is known as the anti-Zeno effect (AZE). The experimental observation of the QZE in the early days was restricted to oscillating quantum systems \cite{6}, but recently both QZE and AZE were successfully observed in irreversible decaying processes \cite{7,8,9}. The quantum theory of measurement predicts a reduction in the decay rate of an unstable system if the time between successive measurements is smaller than the Zeno time, which in general is smaller than the correlation time of the bath.
The effect is universal in the sense that it does not depend on the measured observable whenever the time between measurements is very small. This observation does not preclude the manifestation of the Zeno effect for larger times (for example in the exponentially decaying regime) for well selected observables in a particular bath. Recently it was shown in Ref.~\cite{mund} that a reduction of the decay rate occurs in an exponentially decaying two level system when it interacts with a squeezed bath. In this case, variations of the squeezing phase of the bath may lead to the appearance of either the Zeno or the anti-Zeno effect when continuously monitoring the associated fictitious spin. In this work we show that for the same system it is possible to select a pair of observables whose measurement, for adequately prepared systems, leads to the total suppression of the transitions, i.e.\ the Total Zeno Effect. This paper is organized as follows: In Section (\ref{sec2}) we discuss some general facts and review some results obtained in reference \cite{mund} which are needed for our discussion. In Section (\ref{sec3}) we define the system we deal with and identify the observables and the corresponding initial states which are shown to display Total Zeno Effect. In Section (\ref{sec4}) we show that the initial states which show Total Zeno Effect are intelligent spin states, i.e.\ states that saturate the Heisenberg Uncertainty Relation between the (fictitious) spin operators. Finally, we discuss the results in Section (\ref{sec5}). \section{Total Zeno effect in unstable systems} \label{sec2} Consider a closed system with Hamiltonian $H$ and an observable $A$ with discrete spectrum.
If the system is initialized in an eigenstate $|a_n\rangle$ of $A$ with eigenvalue $a_n$, the probability of survival in a sequence of $S$ measurements, that is the probability that in all measurements one gets the same result $a_n$, is \begin{equation} P_n(\Delta t,S) = \left( 1 - \frac{\Delta t^2}{\hbar^2} \Delta_n^2 \mathit{H} \right)^S \end{equation} where \begin{equation} \Delta_n^2 \mathit{H} = \langle a_n |\mathit{H}^2|a_n\rangle -\langle a_n | \mathit{H}|a_n\rangle^2 \end{equation} and $\Delta t$ is the time between consecutive measurements. In the limit of continuous monitoring ($S \rightarrow \infty$, $\Delta t \rightarrow 0$ and $S \Delta t \rightarrow t$), $P_n \rightarrow 1$ and the system is frozen in the initial state. For an unstable system, considering evolution times larger than the correlation time of the reservoir with which it interacts, the irreversible evolution is well described in terms of the Liouville operator $\mathit{L} \{\rho \}$ by the master equation \begin{equation} \frac{\partial \rho }{\partial t}=\mathit{L}\{\rho \}\ . \label{master} \end{equation} If measurements are done frequently, the master equation (\ref{master}) is modified. The survival probability is time dependent and is shown to be given by \cite{mund} \begin{equation} P_{n}(t)=\exp \left\{ \langle a_{n}|\mathit{L}\{|a_{n}\rangle \langle a_{n}|\}|a_{n}\rangle t\right\} \ \ . \label{ec13} \end{equation} This expression is valid only when the time between consecutive measurements is small enough but greater than the correlation time of the bath. For mathematical simplicity, in what follows we consider the zero correlation time limit for the bath; one is then allowed to take the limit of continuous monitoring. From equation (\ref{ec13}) one observes that the Total Zeno Effect is possible when \begin{equation} \langle a_{n}|\mathit{L}\{|a_{n}\rangle \langle a_{n}|\}|a_{n}\rangle =0\ \ .
\label{ec1239} \end{equation} Then, for times larger than the correlation time, the possibility of having Total Zeno effect depends on the dynamics of the system (determined by the interaction with the bath), on the particular observable to be measured and on the initial state. The concept of zero correlation time of the bath $\tau _{D}$ is of course an idealization. If this time is not zero, equation (\ref{ec13}) is only approximate, since $\Delta t$ cannot be strictly zero and at the same time be larger than $\tau _{D}$. Also, if equation (\ref{ec1239}) is satisfied, then equation (\ref{ec13}) must be corrected, taking the next non-zero contribution in the expansion of $\rho (\Delta t)$. In that case, a decay rate proportional to $\Delta t$ appears, and the decay time is $\varpropto \frac{1}{\gamma ^{2}\Delta t}$, which is in general much larger than the typical evolution time of the system. Notice that as the spectrum of the squeezed bath gets broader, $\tau _{D}$ becomes smaller, and one is able to choose a smaller $\Delta t$, approaching in this way the ideal situation and the Total Zeno Effect. \section{Total Zeno observables} \label{sec3} In the interaction picture the Liouville operator for a two level system in a broadband squeezed vacuum has the following structure \cite{gar}, \begin{eqnarray} L\{\rho \} &=&\frac{1}{2}\gamma \left( N+1\right) \left( 2\sigma _{-}{\rho } \sigma _{+}-\sigma _{+}\sigma _{-}{\rho }-{\rho }\sigma _{+}\sigma _{-}\right) \nonumber \label{em1} \\ &&+\frac{1}{2}\gamma N\left( 2\sigma _{+}{\rho }\sigma _{-}-\sigma _{-}\sigma _{+}{\rho }-{\rho }\sigma _{-}\sigma _{+}\right) \nonumber \\ &&-\gamma Me^{i\psi }\sigma _{+}{\rho }\sigma _{+}-\gamma Me^{-i\psi }\sigma _{-}{\rho }\sigma _{-}\ \ . \end{eqnarray} where $\gamma $ is the vacuum decay constant, and $N$, $M=\sqrt{N(N+1)}$ and $\psi $ are the parameters of the squeezed bath.
Here $\sigma _{-}$ and $\sigma _{+}$ are the two ladder operators, \begin{equation} \sigma_{+} =\frac{1}{2}(\sigma_x+i\sigma_y) \qquad \sigma_{-} =\frac{1}{2} (\sigma_x-i\sigma_y) \end{equation} with $\sigma _{x}$, $\sigma _{y}$ and $\sigma _{z}$ the Pauli matrices. Let us introduce the Bloch representation of the two level density matrix \begin{equation} \rho =\frac{1}{2}\left( 1+\vec{\rho} \cdot \vec{\sigma}\right) \label{dm1} \end{equation} In this representation the master equation takes the form: \begin{eqnarray} \frac{\partial \rho }{\partial t} &=&-\frac{1}{2}\gamma \left( N+1\right) \left( (1+\rho _{z})\sigma _{z}+\frac{1}{2}\rho _{x}\sigma _{x}+\frac{1}{2}{ \rho _{y}}\sigma _{y}\right) \nonumber \\ &&+\frac{1}{2}\gamma N\left( (1-\rho _{z})\sigma _{z}-\frac{1}{2}\rho _{x}\sigma _{x}-\frac{1}{2}{\rho _{y}}\sigma _{y}\right) \nonumber \\ &&-\frac{1}{2}\gamma M\rho _{x}(\cos (\psi )\sigma _{x}-\sin (\psi )\sigma _{y}) \nonumber \\ &&+\frac{1}{2}\gamma M\rho _{y}(\sin (\psi )\sigma _{x}+\cos (\psi )\sigma _{y}) \end{eqnarray} and has the following solutions for the Bloch vector components, which give the behavior of the system without measurements \begin{eqnarray} \rho _{x}(t) &=&\left( \rho _{x}(0)\sin ^{2}(\psi /2)+\rho _{y}(0)\sin (\psi /2)\cos (\psi /2)\right) \nonumber \\ &&e^{-\gamma (N+1/2-M)\,t} \nonumber \\ &&+\left( \rho _{x}(0)\cos ^{2}(\psi /2)-\rho _{y}(0)\sin (\psi /2)\cos (\psi /2)\right) \nonumber \\ &&e^{-\gamma (N+1/2+M)\,t} \label{ec1222} \end{eqnarray} \begin{eqnarray} \rho _{y}(t) &=&\left( \rho _{y}(0)\cos ^{2}(\psi /2)+\rho _{x}(0)\sin (\psi /2)\cos (\psi /2)\right) \nonumber \\ &&e^{-\gamma (N+1/2-M)\,t} \nonumber \\ &&+\left( \rho _{y}(0)\sin ^{2}(\psi /2)-\rho _{x}(0)\sin (\psi /2)\cos (\psi /2)\right) \nonumber \\ &&e^{-\gamma (N+1/2+M)\,t} \label{ec1223} \end{eqnarray} \begin{equation} \rho _{z}(t)=\rho _{z}(0)e^{-\gamma (2N+1)t}+\frac{1}{2N+1}\left( e^{-\gamma (2N+1)t}-1\right) \end{equation} Now consider the hermitian operator $\sigma
_{\mu }$ associated to the fictitious spin component in the direction of the unitary vector $\hat{\mu} =(\cos (\phi )\sin (\theta ),\sin (\phi )\sin (\theta ),\cos (\theta ))$ defined by the angles $\theta $ and $\phi $, \begin{equation} \sigma _{\mu }=\vec{\sigma}\cdot \hat{\mu}=\sigma _{x}\cos (\phi )\sin (\theta )+\sigma _{y}\sin (\phi )\sin (\theta )+\sigma _{z}\cos (\theta ) \end{equation} The eigenstates of $\sigma _{\mu }$ are \begin{equation} |+\rangle _{\mu }=\cos (\theta /2)\,|+\rangle +\sin (\theta /2)\,\exp {(i\phi )}\,|-\rangle \end{equation} \begin{equation} |-\rangle _{\mu }=-\sin (\theta /2)\,|+\rangle +\cos (\theta /2)\,\exp {(i\phi )}\,|-\rangle \end{equation} If the system is initialized in the state $|+\rangle _{\mu }$, the survival probability at time $t$ is \begin{equation} P_{\mu }^{+}(t)=\exp \left\{ F(\theta ,\phi )\,\,t\,\right\} \end{equation} where \begin{equation} F(\theta ,\phi )=\,_{\mu }\langle +|\,\,L\left\{ \,\,|+\rangle _{\mu }\,_{\mu }\langle +|\,\,\right\} \,\,|+\rangle _{\mu }\ \ . \end{equation} In this case the function $F(\theta ,\phi )$ has the structure \begin{eqnarray} F(\theta ,\phi ) &=&-\frac{1}{2}\gamma \left( N+1\right) \left( \rho _{z}(0)+\rho _{z}^{2}(0)+\frac{1}{2}\rho _{x}^{2}(0)+\frac{1}{2}{\rho _{y}^{2}(0)}\right) \nonumber \\ &&+\frac{1}{2}\gamma N\left( \rho _{z}(0)-\rho _{z}^{2}(0)-\frac{1}{2}\rho _{x}^{2}(0)-\frac{1}{2}{\rho _{y}^{2}(0)}\right) \nonumber \\ &&-\frac{1}{2}\gamma M\rho _{x}(0)(\cos (\psi )\rho _{x}(0)-\sin (\psi ){ \rho _{y}(0)}) \nonumber \\ &&+\frac{1}{2}\gamma M\rho _{y}(0)(\sin (\psi )\rho _{x}(0)+\cos (\psi ){ \rho _{y}(0)}) \end{eqnarray} where now $\vec{\rho}(0)=\hat{\mu}$ is a function of the angles. In figure (\ref{etiqueta1}) we show $F(\phi ,\theta )$ for $N=1$ and $\psi =0$ as a function of $\phi $ and $\theta $. The maxima correspond to $F(\phi ,\theta )=0$.
For arbitrary values of $N$ and $\psi $ there are two maxima corresponding to the following angles: \begin{equation} \phi_1^M = \frac{\pi-\psi}{2} \quad \mathrm{and} \quad \cos(\theta^M) = - \frac{1}{2 \left( N+M+ 1/2\right)} \end{equation} and \begin{equation} \phi_2^M = \frac{\pi-\psi}{2}+\pi \quad \mathrm{and} \quad \cos(\theta^M) = - \frac{1}{2 \left( N+M+ 1/2\right)} \end{equation} \begin{figure} \caption{$F(\phi,\theta)$ for $N=1$ and $\psi =0$ } \label{etiqueta1} \end{figure} These preferential directions, given by the vectors $\mathbf{\hat{\mu}_1} =(\cos(\phi_1^M)\sin(\theta^M),\sin(\phi_1^M)\sin (\theta^M),\cos(\theta^M))$ and $\mathbf{\hat{\mu}_2}=(\cos(\phi_2^M)\sin(\theta^M),\sin(\phi_2^M)\sin( \theta^M),\cos(\theta^M))$, define the operators $\sigma _{\mu_1}$ and $\sigma _{\mu_2}$ which show Total Zeno Effect if the initial state of the system is the eigenstate $|+\rangle _{\mu_1}$ or, respectively, $|+\rangle _{\mu_2}$. To be specific, let us consider measurements of the observable $\sigma_{\mu_1}= \vec{\sigma} \cdot \hat{\mu}_1$ (analogous results are obtained for $\sigma_{\mu_2}$). The modified master equation with measurements of $\sigma_{\mu_1}$ is given by \cite{mund}, \begin{equation} \frac{\partial \rho }{\partial t}=P_{\mu _{1}}^{+}\,L\left\{ \rho \right\} \,P_{\mu _{1}}^{+}+\left( 1-P_{\mu _{1}}^{+}\right) \,L\left\{ \rho \right\} \,\left( 1-P_{\mu _{1}}^{+}\right) \label{ecmcm} \end{equation} where \begin{equation} P_{\mu _{1}}^{+}=|+\rangle _{\mu _{1}}\,_{\mu _{1}}\langle +| \end{equation} and $\mathit{L}\{\rho \}$ is given by (\ref{em1}). Besides the Total Zeno effect obtained in the cases specified above, it is also very interesting to discuss the effect of measurements for other choices of the initial state. This can be done numerically.
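The location of these maxima can be checked numerically by evaluating $F(\theta ,\phi )$ directly from the Liouville operator (\ref{em1}). A minimal sketch with \texttt{numpy}, taking $\gamma =1$ and the sample values $N=1$, $\psi =0$; both preferential directions give $F=0$, while a generic direction gives $F<0$:

```python
import numpy as np

def liouvillian(rho, N, psi, gamma=1.0):
    # Squeezed-bath Liouville operator for a two level system, basis (|+>, |->)
    sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
    sm = sp.conj().T                                 # sigma_-
    M = np.sqrt(N * (N + 1))
    out = 0.5 * gamma * (N + 1) * (2 * sm @ rho @ sp - sp @ sm @ rho - rho @ sp @ sm)
    out += 0.5 * gamma * N * (2 * sp @ rho @ sm - sm @ sp @ rho - rho @ sm @ sp)
    out -= gamma * M * np.exp(1j * psi) * sp @ rho @ sp
    out -= gamma * M * np.exp(-1j * psi) * sm @ rho @ sm
    return out

def F(theta, phi, N, psi):
    # F(theta, phi) = <+_mu| L{ |+_mu><+_mu| } |+_mu>
    ket = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])
    rho = np.outer(ket, ket.conj())
    return (ket.conj() @ liouvillian(rho, N, psi) @ ket).real

N, psi = 1.0, 0.0
M = np.sqrt(N * (N + 1))
theta_M = np.arccos(-1.0 / (2.0 * (N + M + 0.5)))
phi_M = (np.pi - psi) / 2.0
print(F(theta_M, phi_M, N, psi))          # vanishes: Total Zeno direction mu_1
print(F(theta_M, phi_M + np.pi, N, psi))  # vanishes: Total Zeno direction mu_2
print(F(np.pi / 2, 0.0, N, psi))          # strictly negative: decaying direction
```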
In figure (\ref{etiqueta2}) we show the evolution of $\langle \sigma _{\mu _{1}}\rangle $, that is the mean value of the observable $\sigma _{\mu _{1}}$, when the system is initialized in the state $|+\rangle _{\mu _{1}}$, without measurements (master equation (\ref{em1})) and with frequent monitoring of $\sigma _{\mu _{1}}$ (master equation (\ref{ecmcm})). Consistently with our discussion of frequent measurements, the system is frozen in the state $|+\rangle _{\mu _{1}}$ (Total Zeno Effect). In figure (\ref{etiqueta3}) we show the time evolution of $\langle \sigma _{\mu _{1}}\rangle $ when the initial state is $|-\rangle _{\mu _{1}}$, without measurements and with measurements of the same observable as in the previous case. One observes that with measurements the system evolves from $|-\rangle _{\mu _{1}}$ to $|+\rangle _{\mu _{1}}$. In general, for any initial state, the system under frequent measurements evolves to $|+\rangle _{\mu _{1}}$, which is the stationary state of Eq.~(\ref{ecmcm}) whenever we measure $\sigma _{\mu _{1}}$. Analogous effects are observed if one measures $\sigma _{\mu _{2}}$. In contrast, for measurements along directions different from those defined by $\mathbf{\hat{\mu}_{1}}$ or $\mathbf{\hat{\mu}_{2}}$, the system evolves to states which are not eigenstates of the measured observables. \begin{figure}\label{etiqueta2} \end{figure} \begin{figure}\label{etiqueta3} \end{figure} \section{Intelligent States} \label{sec4} Aragone et al.\ \cite{ar} considered well defined angular momentum states that satisfy the equality $(\Delta J_{x}\Delta J_{y})^{2}=\frac{1}{4}\mid \langle J_{z}\rangle \mid ^{2}$ in the uncertainty relation. They are called Intelligent States in the literature. The difference with the coherent or squeezed states, associated to harmonic oscillators, is that these Intelligent States are not Minimum Uncertainty States (MUS), since the uncertainty is a function of the state itself.
In this section we show that the states $|+\rangle _{\mu _{1}}$ and $|+\rangle _{\mu _{2}}$ are intelligent states of two observables associated to the bath fluctuations. The master equation (\ref{em1}) can be written in an explicit Lindblad form \begin{equation} \frac{\partial \rho }{\partial t}=\frac{\gamma }{2}\left\{ 2S\rho S^{\dagger }-\rho S^{\dagger }S-S^{\dagger }S\rho \right\} \label{em2} \end{equation} using only one Lindblad operator $S$, \begin{eqnarray} S &=&\sqrt{N+1}\sigma _{-}-\sqrt{N}\exp \left\{ i\psi \right\} \sigma ^{+} \nonumber \\ &=&\cosh (r)\sigma _{-}-\sinh (r)\exp \left\{ i\psi \right\} \sigma ^{+} \end{eqnarray} Obviously the eigenstates of $S$ satisfy the condition (\ref{ec1239}). Moreover, the states $|\phi _{1}\rangle =|+\rangle _{\mu _{1}}$ and $|\phi _{2}\rangle =|+\rangle _{\mu _{2}}$ are the two eigenstates of $S$, with eigenvalues $\lambda _{\pm }=\pm i\sqrt{M}\exp \{i\psi /2\}$: \begin{equation} S|\phi _{1,2}\rangle =\lambda _{\pm }|\phi _{1,2}\rangle \end{equation} Consider now the standard fictitious angular momentum operators for the two level system, $\{J_{x}=\sigma _{x}/2,J_{y}=\sigma _{y}/2,J_{z}=\sigma _{z}/2\}$, and also two rotated operators $J_{1}$ and $J_{2}$ which are consistent with the electromagnetic bath fluctuations in phase space (see fig. 2 in ref \cite{mund}) and which satisfy the same Heisenberg uncertainty relation as $J_{x}$ and $J_{y}$. They are, \begin{eqnarray} J_{1} &=&\exp \{i\psi /2J_{z}\}J_{x}\exp \{-i\psi /2J_{z}\} \nonumber \\ &=&\cos (\psi /2)J_{x}-\sin (\psi /2)J_{y} \end{eqnarray} \begin{eqnarray} J_{2} &=&\exp \{i\psi /2J_{z}\}J_{y}\exp \{-i\psi /2J_{z}\} \nonumber \\ &=&\sin (\psi /2)J_{x}+\cos (\psi /2)J_{y} \end{eqnarray} These two operators are associated respectively with the major and minor axes of the ellipse which represents the fluctuations of the bath. Using Eqs.
(\ref{ec1222}) and (\ref{ec1223}) one can show that the mean values of these operators have the following exponentially decaying evolution: \begin{equation} \langle J_{1}\rangle (t)=\langle J_{1}\rangle (0)\exp \{-\gamma (N+M+1/2)t\} \end{equation} \begin{equation} \langle J_{2}\rangle (t)=\langle J_{2}\rangle (0)\exp \{-\gamma (N-M+1/2)t\} \end{equation} Notice that the above averages decay with the maximum and minimum rates, respectively. In terms of $J_{1}$ and $J_{2}$ we have \begin{equation} J_{-}=\sigma =(J_{x}-iJ_{y})=\exp \{i\psi /2\}(J_{1}-iJ_{2})\ , \end{equation} \begin{equation} J_{+}=\sigma ^{\dagger }=(J_{x}+iJ_{y})=\exp \{-i\psi /2\}(J_{1}+iJ_{2})\ . \end{equation} $S$ has the form \begin{equation} S=\exp \{i\psi /2\}\left( \cosh (r)-\sinh (r)\right) \left( J_{1}-i\alpha J_{2}\right) \end{equation} with \begin{equation} \alpha =\frac{\cosh (r)+\sinh (r)}{\cosh (r)-\sinh (r)}=\exp \{2r\} \end{equation} Following Rashid \textit{et al.}\ \cite{ra} we define a non-Hermitian operator $J_{-}(\alpha )$ \begin{equation} J_{-}(\alpha )=\frac{\left( J_{1}-i\alpha J_{2}\right) }{(1-\alpha ^{2})^{1/2}} \end{equation} so that \begin{equation} S=\exp \{i\psi /2\}\left( \cosh (r)-\sinh (r)\right) \,(1-\alpha ^{2})^{1/2}\,J_{-}(\alpha ) \end{equation} After some algebra one then obtains \begin{equation} S=2\,\lambda _{+}\,J_{-}(\alpha ) \end{equation} The eigenstates of $S$ are then eigenstates of $J_{-}(\alpha )$ with eigenvalues $\pm 1/2$. The eigenstates of $J_{-}(\alpha )$ are also shown to be intelligent states, \textit{i.e.}, they satisfy the equality condition in the Heisenberg uncertainty relation for $J_{1}$ and $J_{2}$.
\begin{equation} \Delta ^{2}J_{1}\Delta ^{2}J_{2}=\frac{|\langle J_{z}\rangle |^{2}}{4} \end{equation} The operator $J_{-}(\alpha )$ is obtained from $J_{1}$ by the following transformation \begin{equation} J_{-}(\alpha )=\exp \{\beta J_{z}\}J_{1}\exp \{-\beta J_{z}\} \end{equation} with \begin{equation} \exp \{\beta \}=\sqrt{\frac{1-\alpha }{1+\alpha }}=i\sqrt{\frac{\sinh (r)}{ \cosh (r)}} \end{equation} In terms of the real and imaginary parts of $\beta =\beta _{r}+i\beta _{i}$, \begin{equation} \beta _{i}=\frac{\pi }{2} \end{equation} \begin{equation} \exp \{\beta _{r}\}=\sqrt{\frac{\sinh (r)}{\cosh (r)}}=\left( \frac{N}{N+1} \right) ^{1/4} \end{equation} $S$ takes the form \begin{eqnarray} S &=&2i\sqrt{M}\exp \{i\psi /2\} \\ &&\,\exp \{i\frac{\pi }{2}J_{z}\}\,\exp \{\beta _{r}J_{z}\}\,J_{1}\,\exp \{-i \frac{\pi }{2}J_{z}\}\,\exp \{-\beta _{r}J_{z}\} \nonumber \end{eqnarray} Finally $S$ may be written in the form: \begin{equation} S=2i\sqrt{M}\exp \{i\psi /2\}U\,J_{z}\,U^{-1} \end{equation} with \begin{equation} U=\,\exp \{i\frac{\pi }{2}J_{z}\}\,\exp \{\beta _{r}J_{z}\}\,\exp \{i\frac{ \psi }{2}J_{z}\}\,\exp \{-i\frac{\pi }{2}J_{y}\} \end{equation} Then the eigenstates of $S$ can be obtained from the eigenstates of $J_{z}$ using $U$ as \begin{equation} |\phi _{1,2}\rangle =C_{0}\text{ }U\text{ }|\pm \rangle \end{equation} where $C_{0}$ is a normalization constant. It is quite clear that $|\phi _{1,2}\rangle $ are intelligent states of the observables $J_{1},J_{2}$, which are rotated versions of $J_{x},J_{y}$. They are also quasi-intelligent states of the original observables $J_{x},J_{y}$ \cite{ra}. One can verify the above result by finding directly the eigenstates of $S$: \begin{equation} |\phi _{1,2}\rangle =\sqrt{\frac{N}{N+M}}|+\rangle \pm i\sqrt{\frac{M}{N+M}} e^{-i\psi /2}|-\rangle \end{equation} Finally, when the system is initialized in one of these states, the mean value of $J_{1}$ is zero, $\langle J_{1}\rangle (t) =0$.
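The claims above lend themselves to a direct numerical sanity check. The following sketch (with assumed parameter values for $r$, $\psi$, $\gamma$, and the convention $|+\rangle=(1,0)^{T}$, $|-\rangle=(0,1)^{T}$; not part of the original text) verifies the eigenvalues of $S$, the decay rates of $\langle J_{1}\rangle$ and $\langle J_{2}\rangle$ via the adjoint Lindblad generator, and the intelligent-state equality for $|\phi_{1,2}\rangle$:

```python
import numpy as np

# Assumed (illustrative) squeezing and decay parameters.
r, psi, gamma = 0.6, 0.8, 1.3
c, s = np.cosh(r), np.sinh(r)
N, M = s**2, s*c                       # N = sinh^2 r, M = sinh r cosh r

sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_-
sp = sm.conj().T                                 # sigma_+
S = c*sm - s*np.exp(1j*psi)*sp                   # the Lindblad operator
Jx, Jy = (sm + sp)/2, 1j*(sm - sp)/2
Jz = np.diag([0.5, -0.5]).astype(complex)
J1 = np.cos(psi/2)*Jx - np.sin(psi/2)*Jy         # rotated operators
J2 = np.sin(psi/2)*Jx + np.cos(psi/2)*Jy

# Eigenvalues lambda_{+-} = +-i sqrt(M) e^{i psi/2}.
lam = np.sqrt(M)*np.exp(1j*psi/2)*np.array([1j, -1j])
assert np.allclose(sorted(np.linalg.eigvals(S), key=lambda z: z.imag),
                   sorted(lam, key=lambda z: z.imag))

# Decay rates: J1, J2 are eigenoperators of the adjoint Lindblad generator.
def adjoint_L(A):
    Sd = S.conj().T
    return gamma/2*(2*Sd @ A @ S - Sd @ S @ A - A @ Sd @ S)

assert np.allclose(adjoint_L(J1), -gamma*(N + M + 0.5)*J1)   # maximal rate
assert np.allclose(adjoint_L(J2), -gamma*(N - M + 0.5)*J2)   # minimal rate

# |phi_{1,2}> are eigenstates of S and intelligent states for (J1, J2).
def ev(A, v):                          # expectation value <v|A|v>
    return (v.conj() @ A @ v).real

for sign in (+1, -1):
    phi = np.array([np.sqrt(N/(N + M)),
                    sign*1j*np.sqrt(M/(N + M))*np.exp(-1j*psi/2)])
    w = S @ phi                        # must be proportional to phi
    assert np.allclose(w, (phi.conj() @ w)*phi)
    var1 = ev(J1 @ J1, phi) - ev(J1, phi)**2
    var2 = ev(J2 @ J2, phi) - ev(J2, phi)**2
    assert np.isclose(var1*var2, abs(ev(Jz, phi))**2/4)      # intelligent state
    assert np.isclose(ev(J1, phi), 0)                        # <J1> vanishes
```

All assertions pass, confirming term by term the analytic statements of this section in a concrete two-level realization.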
Then the term with the largest decay rate does not appear in the mean value of the measured observable $\sigma_{\mu}$, which becomes: \begin{equation} \langle \sigma_{\mu} \rangle (t) = \langle J_{2}\rangle (t) \sin (\theta^M)+ \langle J_{z}\rangle (t) \cos (\theta^M) \end{equation} Using the definition of the angle $\theta^M$ one can prove that \begin{equation} \frac{d \langle \sigma_{\mu} \rangle }{dt}(0) =0 \end{equation} which is a necessary condition in order to obtain the Total Zeno Effect when one is measuring the observable $\sigma_{\mu}$ (see Fig.~\ref{etiqueta2}). \section{Discussion} \label{sec5} We have shown that the Total Zeno Effect is obtained for two particular observables, $\sigma _{\mu_1}$ and $\sigma _{\mu_2}$, for which the azimuthal phases in the fictitious spin representation depend on the phase of the squeezing parameter of the bath and the polar phases depend on the squeeze amplitude $r$. In this sense, the parameters of the squeezed bath specify some definite atomic directions. When performing frequent measurements on $\sigma _{\mu _{1}}$, starting from the initial state $|+\rangle _{\mu _{1}}$, the system freezes at the initial state, as opposed to the usual decay when no measurements are done. On the other hand, if the system is initially prepared in the state $|-\rangle _{\mu _{1}}$, the frequent measurements on $\sigma _{\mu _{1}}$ will make it evolve from the state $|-\rangle _{\mu _{1}}$ to $|+\rangle _{\mu _{1}}$. More generally, when performing the measurements on $\sigma _{\mu _{1}}$, any initial state evolves to the same state $|+\rangle _{\mu _{1}}$, which is the steady state of the master equation (\ref{ecmcm}) in this situation. The above discussion could at first sight appear surprising. However, consider the more familiar case of a two-level atom in contact with a thermal bath at zero temperature: if one starts from any initial state, the atom will necessarily decay to the ground state.
This is because the time evolution of $\langle \sigma _{z}\rangle $ is the same with or without measurements of $\sigma _{z}$. In both cases the system goes to the ground state, which is an eigenstate of the measured observable $\sigma _{z}$. In the limit $N,M\rightarrow 0$, $\sigma _{\mu _{1}}\rightarrow -\sigma _{z}$ and the state $|+\rangle _{\mu _{1}}\rightarrow |-\rangle _{z}$, which agrees with the known results. Finally, we also found that the eigenstates of $S$ are quasi-intelligent states of the observables $J_{x},J_{y}$, \textit{i.e.}, intelligent states of the rotated versions of these observables, $J_{1},J_{2}$. Starting from an eigenstate of $\sigma _{z}$, these intelligent states are obtained by applying the transformation defined by $U$. \end{document}
\begin{document} \startdocumentlayoutoptions \thispagestyle{plain} \begin{abstract} The space of unitary $\ensuremath{C_{0}}$-semigroups on separable infinite-dimensional Hilbert space, when viewed under the topology of uniform weak operator convergence on compact subsets of $\reals_{+}$, is known to admit various interesting residual subspaces. Before treating the contractive case, the problem of the complete metrisability of this space was raised in \cite{eisner2010buchStableOpAndSemigroups}. Utilising Borel complexity computations and automatic continuity results for semigroups, we obtain a general result, which in particular implies that the one-/multiparameter contractive $\ensuremath{C_{0}}$-semigroups constitute Polish spaces and thus positively addresses the open problem. \end{abstract} \subjclass[2020]{47D06, 54E35} \keywords{Semigroups of operators, metrisability, completeness, Polish spaces, Borel complexity.} \title[On the complete metrisability of spaces of contractive semigroups]{On the complete metrisability of spaces of contractive semigroups} \author{Raj Dahya} \email{raj.dahya@web.de} \address{Fakult\"at f\"ur Mathematik und Informatik\newline Universit\"at Leipzig, Augustusplatz 10, D-04109 Leipzig, Germany} \maketitle \setcounternach{section}{1} \section{Introduction} \label{sec:intro} \noindent In \cite{dahya2022weakproblem} the space of contractive $\ensuremath{C_{0}}$-semigroups over a separable infinite-dimensional Hilbert space, when viewed with the topology of uniform weak operator convergence on compact subsets of $\reals_{+}$, was shown to constitute a Baire space.
The main application of this is \cite[Proposition 5.1]{dahya2022weakproblem}, which relies on the approximation result in \cite[Theorem~2.1]{krol2009} and shows that residual properties for the unitary case automatically transfer to the contractive case. In particular, this application renders meaningful the residuality results achieved in \cite{eisnersereny2009catThmStableSemigroups}, \cite[Corollary~3.2]{krol2009}, and \cite[\S{}III.6 and \S{}IV.3.3]{eisner2010buchStableOpAndSemigroups}. Note also that residual properties of (contractive) operators on Banach spaces, as initiated in \cite{eisner2010typicalContraction,eisnermaitrai2010typicalOperators}, have recently been studied in connection with \emph{hypercyclicity} and the \emph{Invariant Subspace Problem} in \cite{grivaux2021localspecLp,grivaux2021invariantsubspaceLp,grivaux2021typicalexamples}. The continuous case remains to be investigated. In this paper we improve upon the result in \cite{dahya2022weakproblem} and show that the space of contractive $\ensuremath{C_{0}}$-semigroups is \highlightTerm{Polish} (\idest separable, completely metrisable). In fact, we prove this for spaces of more generally defined semigroups, including multiparameter semigroups. In particular, our result positively solves a problem raised in \cite[\S{}III.6.3]{eisner2010buchStableOpAndSemigroups} (\cf also \cite[Remark~2.2]{Eisner2008kato}). There it was shown that, when viewed under the topology of uniform weak operator convergence on compact time intervals, the space of contractive $\ensuremath{C_{0}}$-semigroups is not sequentially closed within the larger space of continuous contraction-valued functions. We shall reinforce this by studying the geometric properties of these spaces in a general setting, and providing a deeper reason for this failure (see \Cref{cor:broad-class-of-counterexamples-to-Hs-closed:sig:article-str-problem-raj-dahya}). This renders the complete metrisability problem non-trivial. 
The approach in \cite[Theorem~1.20]{dahya2022weakproblem} involves studying and transferring properties from the subspace of unitary semigroups, which is a Baire space. This method crucially relies on the fact that contractive semigroups can be weakly approximated by unitary semigroups. These density results in turn arise from the theory of dilations (\cf \cite[Theorem~1]{peller1981estimatesOperatorPolyLp} and \cite[Theorem~2.1]{krol2009}). By contrast, the approach here bypasses any dependency upon dilation theory. Instead we directly classify the space of contractive $\ensuremath{C_{0}}$-semigroups in terms of its Borel complexity within a larger, completely metrisable space. This complexity result in turn implies complete metrisability (see \Cref{thm:main-result:sig:article-str-problem-raj-dahya}). Our result encompasses a broad class of spaces on which the semigroups are defined. We provide basic examples in the main text and broaden this to a larger class in \Cref{app:continuity}. The generality of the main result may also be of interest to other fields. Multiparameter semigroups, for example, occur in structure theorems (see \exempli \cite{lopushanskyMultiparamFourier}), the study of diffusion equations in space-time dynamics (see \exempli \cite{zelik2004}), the approximation of periodic functions in multiple variables (see \exempli \cite{terehin1975}), \etcetera. \section{Definitions of spaces of semigroups} \label{sec:intro-definitions} \noindent Throughout this paper, $\mathcal{H}$ shall denote a fixed separable infinite-dimensional Hilbert space.
Furthermore, \begin{mathe}[mc]{rcccccl} \BoundedOps{\mathcal{H}} &\supseteq &\OpSpaceC{\mathcal{H}} &\supseteq &\OpSpaceI{\mathcal{H}} &\supseteq &\OpSpaceU{\mathcal{H}}\\ \end{mathe} \noindent denote (from left to right) the spaces of bounded linear operators, contractions, isometries, and unitaries over $\mathcal{H}$. These can be endowed with the weak operator topology ($\text{\upshape \scshape wot}$) or the strong operator topology ($\text{\upshape \scshape sot}$). Instead of working with semigroups defined on $\reals_{+}$ (continuous time) or over $\mathbb{N}_{0}$ (discrete time), we shall more generally work with semigroups parameterised by a topological monoid. \begin{defn} \makelabel{defn:semigroup-on-h:sig:article-str-problem-raj-dahya} Let $(M,\cdot,1)$ be a topological monoid. A \highlightTerm{semigroup} over $\mathcal{H}$ on $M$ shall mean any operator-valued function, ${T:M\to\BoundedOps{\mathcal{H}}}$, satisfying ${T(1)=\text{\upshape\bfseries I}}$ and ${T(st)=T(s)T(t)}$ for ${s,t\in M}$. \end{defn} In other words, semigroups are just certain kinds of algebraic morphisms. Observe that the above definition applied to the topological monoid $(\reals_{+},+,0)$ yields the usual definition of an operator semigroup. The continuous contractive semigroups defined on $M$ may be viewed as subspaces of the function spaces $\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{M}{\OpSpaceC{\mathcal{H}}}$, where $\OpSpaceC{\mathcal{H}}$ may be endowed with either the $\text{\upshape \scshape wot}$- or $\text{\upshape \scshape sot}$-topology. We summarise these spaces and their topologies as follows: \begin{defn} \makelabel{defn:standard-funct-spaces:sig:article-str-problem-raj-dahya} Let $M$ be a topological monoid. 
Denote via ${\cal{F}^{c}_{s}(M)\colonequals\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{M}{(\OpSpaceC{\mathcal{H}},\text{\upshape \scshape sot})}}$ the $\text{\upshape \scshape sot}$-continuous contraction-valued functions defined on $M$, and via ${\cal{C}_{s}(M)\colonequals\Hom{M}{(\OpSpaceC{\mathcal{H}},\text{\upshape \scshape sot})}}$ the $\text{\upshape \scshape sot}$-continuous contractive semigroups over $\mathcal{H}$ on $M$. Denote via $\cal{F}^{c}_{w}(M)$ and $\cal{C}_{w}(M)$ the respective $\text{\upshape \scshape wot}$-continuous counterparts. \end{defn} \begin{defn} \makelabel{defn:loc-wot-sot-convergence:sig:article-str-problem-raj-dahya} Let $M$ be a topological monoid and let $X$ be any of the spaces in \Cref{defn:standard-funct-spaces:sig:article-str-problem-raj-dahya}. Let $\KmpRm{M}$ denote the collection of compact subsets of $M$. The topologies of \highlightTerm{uniform \text{\upshape \scshape wot}-convergence on compact subsets of $M$} (short: $\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}}$), \respectively of \highlightTerm{uniform \text{\upshape \scshape sot}-convergence on compact subsets of $M$} (short: $\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape sot}}$) are induced via the convergence conditions: \begin{mathe}[mc]{rcll} T_{i} \overset{\text{\scriptsize{{{$\mathpzc{k}$}}}-{\upshape \scshape wot}}}{\longrightarrow} T &:\Longleftrightarrow &\forall{\xi,\eta\in\mathcal{H}:~} \forall{K\in\KmpRm{M}:~} &\sup_{t\in K}|\BRAKET{(T_{i}(t)-T(t))\xi}{\eta}|\underset{i}{\longrightarrow}0\\ T_{i} \overset{\text{\scriptsize{{{$\mathpzc{k}$}}}-{\upshape \scshape sot}}}{\longrightarrow} T &:\Longleftrightarrow &\forall{\xi\in\mathcal{H}:~} \forall{K\in\KmpRm{M}:~} &\sup_{t\in K}\|(T_{i}(t)-T(t))\xi\|\underset{i}{\longrightarrow}0\\ \end{mathe} \noindent for all nets $(T_{i})_{i}\subseteq X$ and all $T\in X$.
\end{defn} Working with these definitions, one can readily classify some of these spaces as follows: \begin{prop} \makelabel{prop:basic:SpC:basic-polish:sig:article-str-problem-raj-dahya} Let $M$ be a locally compact Polish monoid. Then $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$, $(\cal{F}^{c}_{s}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape sot}})$, and $(\cal{C}_{s}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape sot}})$ are Polish spaces. \end{prop} For a full proof see \cite[Propositions~1.16~and~1.18]{dahya2022weakproblem}. For the reader's convenience, we sketch the arguments here. \begin{proof}[of \Cref{prop:basic:SpC:basic-polish:sig:article-str-problem-raj-dahya}] First note that the spaces, ${(\OpSpaceC{\mathcal{H}},\text{\upshape \scshape wot})}$ and ${(\OpSpaceC{\mathcal{H}},\text{\upshape \scshape sot})}$, are well-known to be Polish ( see \exempli \cite[Exercise~3.4~(5) and Exercise~4.9]{kech1994} ). To prove the first claim, we need to show that $(\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{M}{Y},\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is Polish, where ${Y\colonequals(\OpSpaceC{\mathcal{H}},\text{\upshape \scshape wot})}$. Since $M$ is a locally compact Polish space, and since for metrisable spaces, separability is equivalent to second countability, one can readily construct a countable collection of compact subsets, ${\tilde{\cal{K}}\subseteq\KmpRm{M}}$, such that $\{\topInterior{K}\mid K\in\tilde{\cal{K}}\}$ covers $M$. Consider the spaces $\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{K}{Y}$ for $K\in\tilde{\cal{K}}$ and endow these with the topology of uniform convergence, which makes them Polish spaces (see \exempli \cite[Theorem~4.19]{kech1994} or \cite[Lemma~3.96--7,~3.99]{aliprantis2005} ). 
The map \begin{mathe}[mc]{rcccl} \Psi &: &\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{M}{Y} &\to &\prod_{K\in\tilde{\cal{K}}}\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{K}{Y}\\ &&f &\mapsto &(f\restr{K})_{K\in\tilde{\cal{K}}}\\ \end{mathe} \noindent is clearly well-defined. Since $\{\topInterior{K} \mid K\in\tilde{\cal{K}}\}$ covers $M$ and $M$ is locally compact, the map is also clearly bicontinuous. The covering property also guarantees that every coherent sequence of continuous functions $(f_{K})_{K\in\tilde{\cal{K}}}\in\prod_{K\in\tilde{\cal{K}}}\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{K}{Y}$ corresponds to a unique continuous function, ${f\colonequals\bigcup_{K\in\tilde{\cal{K}}}f_{K}\in\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{M}{Y}}$, satisfying $\Psi(f)=(f_{K})_{K\in\tilde{\cal{K}}}$. Thus $\Psi$ is a homeomorphism between $\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{M}{Y}$ and the subspace of coherent sequences of continuous functions. Since the product of Polish spaces is Polish (see \cite[Corollary~3.39]{aliprantis2005} ) and the subspace of coherent sequences is clearly closed under the product topology, it follows that $(\@ifnextchar_{\Cts@tief}{\Cts@tief_{}}{M}{Y},\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is Polish. The second claim is obtained in the same manner, by replacing $Y$ by $(\OpSpaceC{\mathcal{H}},\text{\upshape \scshape sot})$ and $\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}}$ by $\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape sot}}$ above. Finally, it is easy to verify that $\cal{C}_{s}(M)$ is a closed subspace within $(\cal{F}^{c}_{s}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape sot}})$, and thus that $(\cal{C}_{s}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape sot}})$ too is Polish. 
\end{proof} The aim of this paper is to show that $(\cal{C}_{s}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is completely metrisable for some topological monoids $M$, in particular for $M=(\reals_{+},+,0)$. We can achieve this for a broad class of monoids, by appealing to the following condition: \begin{defn} \makelabel{defn:good-monoids:sig:article-str-problem-raj-dahya} Call a topological monoid, $M$, \highlightTerm{\usesinglequotes{good}}, if the contractive $\text{\upshape \scshape wot}$-continuous semigroups over $\mathcal{H}$ on $M$ are automatically $\text{\upshape \scshape sot}$-continuous, \idest if $\cal{C}_{w}(M)=\cal{C}_{s}(M)$ holds. \end{defn} All discrete monoids (including non-commutative ones) are trivially \usesinglequotes{good}. By a classical result, $(\reals_{+},+,0)$ is \usesinglequotes{good} (\cf \cite[Theorem~5.8]{Engel1999} as well as \cite[Theorem~9.3.1 and Theorem~10.2.1--3]{hillephillips2008} ). Furthermore, it is easy to see that \usesinglequotes{good} monoids are closed under products: \begin{prop} \makelabel{prop:good-monoids-closed-under-products:sig:article-str-problem-raj-dahya} Let $d\in\mathbb{N}$ and $M_{1},M_{2},\ldots,M_{d}$ be \usesinglequotes{good} topological Polish monoids. Then $\prod_{i=1}^{d}M_{i}$ is \usesinglequotes{good}. \end{prop} \begin{proof} Let ${T:\prod_{i=1}^{d}M_{i}\to\OpSpaceC{\mathcal{H}}}$ be a $\text{\upshape \scshape wot}$-continuous contractive semigroup. We need to show that $T$ is $\text{\upshape \scshape sot}$-continuous. For each $k\in\{1,2,\ldots,d\}$ let ${\pi_{k}:\prod_{i=1}^{d}M_{i}\to M_{k}}$ denote the canonical projection, which is a (continuous) monoid homomorphism, and let ${r_{k}:M_{k}\to\prod_{i=1}^{d}M_{i}}$ denote the canonical (continuous) monoid homomorphism defined by $r_{k}(t)=(1,1,\ldots,t,\ldots,1)$ (the $d$-tuple with $t$ in the $k$-th position and identity elements elsewhere) for all $t\in M_{k}$. 
For each $k\in\{1,2,\ldots,d\}$ observe further that ${T_{k}:M_{k}\to\OpSpaceC{\mathcal{H}}}$ defined by ${T_{k} \colonequals T\circ r_{k}}$ is a $\text{\upshape \scshape wot}$-continuous homomorphism. That is, each $T_{k}$ is a $\text{\upshape \scshape wot}$-continuous contractive semigroup over $\mathcal{H}$ on $M_{k}$. Since each $M_{k}$ is \usesinglequotes{good}, these are $\text{\upshape \scshape sot}$-continuous. Observe now that \begin{mathe}[mc]{rcl} T(t) &= &T(\prod_{k=1}^{d}r_{k}(\pi_{k}(t)))\\ &= &T(r_{1}(\pi_{1}(t)))\cdot T(r_{2}(\pi_{2}(t)))\cdot\ldots\cdot T(r_{d}(\pi_{d}(t)))\\ &= &T_{1}(\pi_{1}(t))\cdot T_{2}(\pi_{2}(t))\cdot\ldots\cdot T_{d}(\pi_{d}(t))\\ \end{mathe} \noindent holds for all $t\in \prod_{i=1}^{d}M_{i}$. Since the canonical projections are continuous and the $T_{k}$ are $\text{\upshape \scshape sot}$-continuous and contractive, and since multiplication of contractions is $\text{\upshape \scshape sot}$-continuous, it follows that $T$ is $\text{\upshape \scshape sot}$-continuous. \end{proof} Thus we immediately obtain the following examples of \usesinglequotes{good} monoids: \begin{cor} \makelabel{cor:multiparameter-auto-cts:sig:article-str-problem-raj-dahya} For each $d\in\mathbb{N}$ the monoid $\reals_{+}^{d}$, viewed under pointwise addition, is \usesinglequotes{good}. \end{cor} If one more generally considers monoids which are closed subspaces of locally compact Hausdorff topological groups, a sufficient topological condition exists, which guarantees that a monoid is \usesinglequotes{good} (see \Cref{app:continuity:defn:conditition-II:sig:article-str-problem-raj-dahya} and \Cref{app:continuity:thm:generalised-auto-continuity:sig:article-str-problem-raj-dahya}).
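The factorisation $T(t)=T_{1}(\pi_{1}(t))\cdots T_{d}(\pi_{d}(t))$ used in the proof above can be illustrated in a toy finite-dimensional setting (an assumption made purely for illustration; the diagonal generators below are hypothetical and chosen so that the one-parameter factors commute and are contractive):

```python
import numpy as np

# Toy two-parameter semigroup on R_+^2: with commuting contractive generators
# A = diag(d_spec) and B = diag(e_spec), T(t1, t2) := e^{t1 A} e^{t2 B} is a
# contractive semigroup, and it factors through the coordinate embeddings r_k
# as T(t1, t2) = T(t1, 0) T(0, t2), mirroring the decomposition in the proof.
d_spec = np.array([-0.5, -1.0, -2.0])   # assumed generator spectra
e_spec = np.array([-0.3, -0.7, -1.5])

def T(t1, t2):
    return np.diag(np.exp(t1*d_spec + t2*e_spec))

s, t = (0.4, 1.2), (0.9, 0.3)
assert np.allclose(T(s[0] + t[0], s[1] + t[1]), T(*s) @ T(*t))   # semigroup law
assert np.allclose(T(0, 0), np.eye(3))                           # T(1) = I
assert np.allclose(T(*s), T(s[0], 0) @ T(0, s[1]))               # factorisation
assert np.linalg.norm(T(*s), 2) <= 1 + 1e-12                     # contractive
```

Here each factor $T(\cdot,0)$, $T(0,\cdot)$ plays the role of $T_{k}=T\circ r_{k}$, and the product recovers $T$ exactly as in the displayed computation.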
By \Cref{app:continuity:e.g.:extendible-mon:reals:sig:article-str-problem-raj-dahya, app:continuity:e.g.:extendible-mon:p-adic:sig:article-str-problem-raj-dahya, app:continuity:e.g.:extendible-mon:discrete:sig:article-str-problem-raj-dahya, app:continuity:e.g.:extendible-mon:non-comm-non-discrete:sig:article-str-problem-raj-dahya} and \Cref{app:continuity:prop:monoids-with-cond-closed-under-fprod:sig:article-str-problem-raj-dahya}, the class of monoids satisfying this condition is closed under finite products and includes all discrete monoids, the non-negative reals under addition $(\reals_{+},+,0)$, the $p$-adic integers under addition $(\mathbb{Z}_{p},+,0)$ for all $p\in\mathbb{P}$, and even non-discrete non-commutative monoids, including naturally definable monoids contained within the Heisenberg group of order $2d-3$ for each $d\geq 2$. \section{The $\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}}$-closure of the space of contractive semigroups} \label{sec:convexity} \noindent The simplest approach to demonstrate the complete metrisability of $(\cal{C}_{s}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ would be to show that it is a closed subspace within the function space $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$, which we already know to be Polish (see \Cref{prop:basic:SpC:basic-polish:sig:article-str-problem-raj-dahya}). In \cite[Example~\S{}III.6.10]{eisner2010buchStableOpAndSemigroups} and \cite[Example~2.1]{Eisner2008kato} a construction is provided, which demonstrates that this fails in particular in the case of one-parameter contractive $\ensuremath{C_{0}}$-semigroups.
In this section we reveal that the deeper reason for this failure is that the closure of $\cal{C}_{s}(M)$ within $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is always convex, whereas for a broad class of topological monoids, $M$, the subset $\cal{C}_{s}(M)$ is not convex (see \Cref{cor:broad-class-of-counterexamples-to-Hs-closed:sig:article-str-problem-raj-dahya} below). Before we proceed, we require a few definitions. In the following $M$ shall denote an arbitrary topological monoid. We continue to use $\mathcal{H}$ to denote a separable infinite-dimensional Hilbert space and $\text{\upshape\bfseries I}_{\mathcal{H}}$ (or simply $\text{\upshape\bfseries I}$) for the identity operator. \begin{defn} \makelabel{defn:partition-of-identity-operator:sig:article-str-problem-raj-dahya} For $n\in\mathbb{N}$ and ${u_{1},u_{2},\ldots,u_{n}\in\BoundedOps{\mathcal{H}}}$ we shall call $(u_{1},u_{2},\ldots,u_{n})$ an \highlightTerm{isometric partition of the identity}, just in case the following axioms hold: \begin{kompaktenum}{(\bfseries {P}1)}[\rtab][\rtab] \item\label{ax:partition:1} $u_{j}^{\ast}u_{i}=\delta_{ij}\cdot\text{\upshape\bfseries I}$ for all $i,j\in\{1,2,\ldots,n\}$. \item\label{ax:partition:2} $\sum_{i=1}^{n}u_{i}u_{i}^{\ast}=\text{\upshape\bfseries I}$. \end{kompaktenum} \@ifnextchar\bgroup{\nvraum@c}{\vspace*{-\baselineskip}}{1} \end{defn} Note that by axiom (P\ref{ax:partition:1}) the operators in an isometric partition are necessarily isometries. \begin{rem} \makelabel{rem:isometric-partitions-of-one-correspond-to-decompositions:sig:article-str-problem-raj-dahya} Let $n\in\mathbb{N}$. 
If $(u_{1},u_{2},\ldots,u_{n})$ is an isometric partition of $\text{\upshape\bfseries I}$, then, letting ${\mathcal{H}_{i}\colonequals\mathop{\textup{ran}}(u_{i})}$ for $i\in\{1,2,\ldots,n\}$, it is easy to see that (P\ref{ax:partition:1}) and (P\ref{ax:partition:2}), together with the fact that each $u_{i}$ is necessarily an isometry, imply that the $\mathcal{H}_{i}$ are mutually orthogonal closed subspaces of $\mathcal{H}$ and that $\mathcal{H}=\bigoplus_{i=1}^{n}\mathcal{H}_{i}$. Conversely, if $\mathcal{H}$ can be decomposed as $\bigoplus_{i=1}^{n}\mathcal{H}_{i}$ where each $\mathcal{H}_{i}\subseteq\mathcal{H}$ is a closed subspace with $\mathop{\textup{dim}}(\mathcal{H}_{i})=\mathop{\textup{dim}}(\mathcal{H})$ for each $i$, then letting $u_{i}\in\BoundedOps{\mathcal{H}}$ be any isometries with $\mathop{\textup{ran}}(u_{i})=\mathcal{H}_{i}$ for each $i\in\{1,2,\ldots,n\}$, one can readily see that (P\ref{ax:partition:1}) and (P\ref{ax:partition:2}) are satisfied. Thus, isometric partitions of $\text{\upshape\bfseries I}$ can be constructed from orthogonal decompositions into infinite-dimensional closed subspaces of $\mathcal{H}$ and vice versa. \end{rem} Of course, these observations only apply for infinite-dimensional Hilbert spaces. \begin{defn} Let $n\in\mathbb{N}$, $(u_{1},u_{2},\ldots,u_{n})$ be an isometric partition of $\text{\upshape\bfseries I}$, and $T_{1},T_{2},\ldots,T_{n}\in\cal{F}^{c}_{w}(M)$. Denote via $(\bigoplus_{i=1}^{n}T_{i})_{\quer{u}}$ the operator-valued function ${T:M\to\BoundedOps{\mathcal{H}}}$ given by \begin{mathe}[mc]{rcl} T(\cdot) &= &\sum_{i=1}^{n}u_{i} T_{i}(\cdot) u_{i}^{\ast}.\\ \end{mathe} \@ifnextchar\bgroup{\nvraum@c}{\vspace*{-\baselineskip}}{1} \end{defn} \begin{defn} Let $A\subseteq\cal{F}^{c}_{w}(M)$. 
Say that $A$ is \highlightTerm{closed under finite joins} just in case for all $n\in\mathbb{N}$, all isometric partitions $\quer{u} \colonequals (u_{1},u_{2},\ldots,u_{n})$ of $\text{\upshape\bfseries I}$, and all $T_{1},T_{2},\ldots,T_{n}\in A$, it holds that ${(\bigoplus_{i=1}^{n}T_{i})_{\quer{u}}\in A}$. \end{defn} The property of being closed under finite joins is a key ingredient in proving the convexity of the closure of subsets (see \Cref{lemm:closure-is-convex:sig:article-str-problem-raj-dahya} below). We first provide some basic observations about which subsets are closed under finite joins. \begin{prop} \makelabel{prop:basic-some-subspaces-are-closed-under-finite-stream-addition:sig:article-str-problem-raj-dahya} Let $A \in \{ \cal{F}^{c}_{w}(M), \cal{F}^{c}_{s}(M), \cal{C}_{w}(M), \cal{C}_{s}(M) \}$. Then $A$ is closed under finite joins. \end{prop} \begin{proof} First consider the case $A=\cal{F}^{c}_{w}(M)$. Let $n\in\mathbb{N}$, $\quer{u} \colonequals (u_{1},u_{2},\ldots,u_{n})$ be an isometric partition of $\text{\upshape\bfseries I}$, and $T_{1},T_{2},\ldots,T_{n}\in A$. We need to show that ${T \colonequals (\bigoplus_{i=1}^{n}T_{i})_{\quer{u}}}$ is in $A$. Applying the properties of the partition yields \begin{mathe}[mc]{rcl} \|T(t)\xi\|^{2} &= &\sum_{i,j=1}^{n} \BRAKET{u_{i}T_{i}(t)u_{i}^{\ast}\xi}{u_{j}T_{j}(t)u_{j}^{\ast}\xi}\\ &\textoverset{(P\ref{ax:partition:1})}{=} &\sum_{i=1}^{n} \BRAKET{u_{i}T_{i}(t)u_{i}^{\ast}\xi}{u_{i}T_{i}(t)u_{i}^{\ast}\xi}\\ &\leq &\sum_{i=1}^{n} \|u_{i}T_{i}(t)\|^{2}\|u_{i}^{\ast}\xi\|^{2}\\ &\leq &\sum_{i=1}^{n} \|u_{i}^{\ast}\xi\|^{2}\\ &\multispan{2}{\text{since $u_{i}$ is isometric and $T_{i}(t)$ contractive for all $i$}}\\ &= &\BRAKET{\sum_{i=1}^{n}u_{i}u_{i}^{\ast}\xi}{\xi}\\ &\textoverset{(P\ref{ax:partition:2})}{=} &\BRAKET{\text{\upshape\bfseries I} \xi}{\xi} = \|\xi\|^{2}\\ \end{mathe} \noindent for all $\xi\in\mathcal{H}$ and all $t\in M$.
And since by construction, ${T(\cdot)=\sum_{i=1}^{n}u_{i}T_{i}(\cdot)u_{i}^{\ast}}$, it clearly holds that $T$ is $\text{\upshape \scshape wot}$-continuous. Thus $T$ is a $\text{\upshape \scshape wot}$-continuous contraction-valued function, \idest $T\in A$. Hence $A$ is closed under finite joins. The case of $A=\cal{F}^{c}_{s}(M)$ is analogous. Next we consider the case $A=\cal{C}_{w}(M)$. Let $n\in\mathbb{N}$, $\quer{u} \colonequals (u_{1},u_{2},\ldots,u_{n})$ be an isometric partition of $\text{\upshape\bfseries I}$, and $T_{1},T_{2},\ldots,T_{n}\in A$. We need to show that ${T \colonequals (\bigoplus_{i=1}^{n}T_{i})_{\quer{u}}}$ is in $A$. Since $A\subseteq\cal{F}^{c}_{w}(M)$ and $\cal{F}^{c}_{w}(M)$ is closed under finite joins, we already know that $T\in\cal{F}^{c}_{w}(M)$, \idest that $T$ is contraction-valued and $\text{\upshape \scshape wot}$-continuous. To show that $T\in A$, it remains to show that $T$ is a semigroup. Since each of the $T_{i}$ is a semigroup, applying the properties of the partition yields \begin{mathe}[mc]{rcccccccl} T(1) &= &\sum_{i=1}^{n}u_{i}T_{i}(1)u_{i}^{\ast} &= &\sum_{i=1}^{n}u_{i}\cdot\text{\upshape\bfseries I}\cdot u_{i}^{\ast} &= &\sum_{i=1}^{n}u_{i}u_{i}^{\ast} &\textoverset{(P\ref{ax:partition:2})}{=} &\text{\upshape\bfseries I}\\ \end{mathe} \noindent and \begin{mathe}[mc]{rcl} T(s)T(t) &= &(\sum_{i=1}^{n}u_{i}T_{i}(s)u_{i}^{\ast})(\sum_{j=1}^{n}u_{j}T_{j}(t)u_{j}^{\ast})\\ &= &\sum_{i,j=1}^{n}u_{i}T_{i}(s)u_{i}^{\ast}\cdot u_{j} T_{j}(t)u_{j}^{\ast}\\ &\textoverset{(P\ref{ax:partition:1})}{=} &\sum_{i=1}^{n}u_{i}T_{i}(s)T_{i}(t)u_{i}^{\ast}\\ &= &\sum_{i=1}^{n}u_{i}T_{i}(st)u_{i}^{\ast}\\ &= &T(st)\\ \end{mathe} \noindent for all $s,t\in M$. Thus $T$ is a $\text{\upshape \scshape wot}$-continuous contractive semigroup, \idest $T\in A$. Hence $A$ is closed under finite joins. The case of $A=\cal{C}_{s}(M)$ is analogous.
\end{proof} \begin{prop} \makelabel{prop:closure-is-convex:sig:article-str-problem-raj-dahya} Let $A\subseteq \cal{F}^{c}_{w}(M)$ and let $\quer{A}$ be the closure of $A$ within $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$. If $A$ is closed under finite joins, then so too is $\quer{A}$. \end{prop} \begin{proof} Let $n\in\mathbb{N}$, $\quer{u} \colonequals (u_{1},u_{2},\ldots,u_{n})$ be an isometric partition of $\text{\upshape\bfseries I}$, and $T_{1},T_{2},\ldots,T_{n}\in\quer{A}$. We need to show that ${T \colonequals (\bigoplus_{j=1}^{n}T_{j})_{\quer{u}}}$ is in $\quer{A}$. To see this, we may simply fix a net ${((T^{(i)}_{1},T^{(i)}_{2},\ldots,T^{(i)}_{n}))_{i} \subseteq \prod_{j=1}^{n}A}$ such that ${T^{(i)}_{j}\underset{i}{\overset{\text{\scriptsize{{{$\mathpzc{k}$}}}-{\upshape \scshape wot}}}{\longrightarrow}} T_{j}}$ for all $j\in\{1,2,\ldots,n\}$. Since $A$ is closed under finite joins, we have ${T^{(i)}\colonequals (\bigoplus_{j=1}^{n}T^{(i)}_{j})_{\quer{u}}\in A}$ for all $i$. We also clearly have \begin{mathe}[mc]{rcccccl} A \ni T^{(i)} &= &\sum_{j=1}^{n}u_{j}T^{(i)}_{j}(\cdot)u_{j}^{\ast} &\underset{i}{\overset{\text{\scriptsize{{{$\mathpzc{k}$}}}-{\upshape \scshape wot}}}{\longrightarrow}} &\sum_{j=1}^{n}u_{j}T_{j}(\cdot)u_{j}^{\ast} &= &T(\cdot).\\ \end{mathe} \noindent Hence $T\in\quer{A}$. \end{proof} \begin{lemm} \makelabel{lemm:closure-is-convex:sig:article-str-problem-raj-dahya} Let $A\subseteq \cal{F}^{c}_{w}(M)$ be closed under finite joins. Then the closure, $\quer{A}$, of $A$ within $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is convex. \end{lemm} \begin{proof} Since $\mathcal{H}$ is a separable Hilbert space, it admits a countable orthonormal basis (ONB) ${B\subseteq\mathcal{H}}$, which we shall fix. It suffices to show for $S,T\in\quer{A}$ and $\alpha,\beta\in[0,1]$ with ${\alpha+\beta=1}$, that ${R \colonequals \alpha S + \beta T \in \quer{A}}$. 
To do this, we need to show that $R$ can be approximated within the $\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}}$-topology by elements in $\quer{A}$. In order to achieve this, it suffices to fix arbitrary $K\in\KmpRm{M}$, $F\subseteq B$ finite, and $\varepsilon>0$, and show that some $\tilde{R}\in\quer{A}$ exists satisfying \begin{mathe}[mc]{lql} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:0:\beweislabel] \sup_{t\in K}|\BRAKET{(R(t)-\tilde{R}(t))e}{e'}|<\varepsilon, &\text{for all $e,e'\in F$.} \end{mathe} To construct $\tilde{R}$, we first construct an isometric partition, $(w_{0},w_{1})$, of $\text{\upshape\bfseries I}$, such that $w_{0}$ fixes the vectors in $F$. This can be achieved as follows: Since the ONB $B$ is infinite, a partition $\{B_{0},B_{1}\}$ of $B$ exists satisfying $F\subseteq B_{0}$ and $|B_{0}|=|B_{1}|=|B|=\mathop{\textup{dim}}(\mathcal{H})$. There thus exist bijections ${f_{0}:B\to B_{0}}$ and ${f_{1}:B\to B_{1}}$ and since $B_{0}\supseteq F$, we may assume without loss of generality that $f_{0}\restr{F}=\mathop{\textit{id}}_{F}$. Using these bijections we obtain (unique) isometries ${w_{0},w_{1}\in\BoundedOps{\mathcal{H}}}$ satisfying ${w_{0}e=f_{0}(e)}$ and ${w_{1}e=f_{1}(e)}$ for all $e\in B$. In particular, $\mathop{\textup{ran}}(w_{0})=\quer{\mathop{\textup{lin}}}(B_{0})$ and $\mathop{\textup{ran}}(w_{1})=\quer{\mathop{\textup{lin}}}(B_{1})$. Now since $\{B_{0},B_{1}\}$ partitions $B$, we have $\mathcal{H}=\quer{\mathop{\textup{lin}}}(B)=\quer{\mathop{\textup{lin}}}(B_{0})\oplus\quer{\mathop{\textup{lin}}}(B_{1})$. As per \Cref{rem:isometric-partitions-of-one-correspond-to-decompositions:sig:article-str-problem-raj-dahya} it follows that $(w_{0},w_{1})$ satisfies the axioms of an isometric partition of $\text{\upshape\bfseries I}$. 
Now set \begin{mathe}[mc]{rclcrcl} u_{0} &\colonequals &\sqrt{\alpha}w_{0} + \sqrt{\beta}w_{1} &\text{and} &u_{1} &\colonequals &\sqrt{\beta}w_{0} - \sqrt{\alpha}w_{1}.\\ \end{mathe} \noindent One can easily derive from the fact that $(w_{0},w_{1})$ is an isometric partition of $\text{\upshape\bfseries I}$, that $\quer{u} \colonequals (u_{0},u_{1})$ also satisfies the axioms of an isometric partition of $\text{\upshape\bfseries I}$. Moreover, since $w_{0}$ was chosen to fix the vectors in $F$, applying the properties of the partition $(w_{0},w_{1})$ yields \begin{mathe}[mc]{rcccccccl} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:sqrt-alpha:\beweislabel] u_{0}^{\ast}e &= &u_{0}^{\ast}w_{0}e &= &\sqrt{\alpha}w_{0}^{\ast}w_{0}e+\sqrt{\beta}w_{1}^{\ast}w_{0}e &\textoverset{(P\ref{ax:partition:1})}{=} &\sqrt{\alpha}\text{\upshape\bfseries I} e+\sqrt{\beta}\text{\upshape\bfseries 0} e &= &\sqrt{\alpha}e\\ u_{1}^{\ast}e &= &u_{1}^{\ast}w_{0}e &= &\sqrt{\beta}w_{0}^{\ast}w_{0}e-\sqrt{\alpha}w_{1}^{\ast}w_{0}e &\textoverset{(P\ref{ax:partition:1})}{=} &\sqrt{\beta}\text{\upshape\bfseries I} e-\sqrt{\alpha}\text{\upshape\bfseries 0} e &= &\sqrt{\beta}e\\ \end{mathe} \noindent for all $e\in F$. Finally set $\tilde{R}\colonequals(S\bigoplus T)_{\quer{u}}$. Since $A$ is closed under finite joins, by \Cref{prop:closure-is-convex:sig:article-str-problem-raj-dahya} $\quer{A}$ is also closed under finite joins, and hence the constructed operator-valued function, $\tilde{R}$, lies in $\quer{A}$. 
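As a sanity check (not part of the proof), the partition axioms for the rotated pair $(u_{0},u_{1})$ can be verified numerically in a finite-dimensional stand-in, with $w_{0},w_{1}$ realised as isometries $\mathbb{R}^{2}\to\mathbb{R}^{4}$ with complementary orthogonal ranges; the Python sketch below assumes the illustrative values $\alpha=0.3$, $\beta=0.7$:

```python
import math

def matmul(A, B):
    # naive matrix product for rectangular list-of-lists matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def close(A, B, tol=1e-12):
    return all(abs(a - b) < tol for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# w0, w1: isometries R^2 -> R^4 whose ranges are orthogonal and span R^4
w0 = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]
w1 = [[0.0, 0.0], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]

alpha, beta = 0.3, 0.7
sa, sb = math.sqrt(alpha), math.sqrt(beta)

# the rotated pair: u0 = sqrt(alpha) w0 + sqrt(beta) w1,
#                   u1 = sqrt(beta)  w0 - sqrt(alpha) w1
u0 = [[sa * w0[i][j] + sb * w1[i][j] for j in range(2)] for i in range(4)]
u1 = [[sb * w0[i][j] - sa * w1[i][j] for j in range(2)] for i in range(4)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
Z2 = [[0.0, 0.0], [0.0, 0.0]]
I4 = [[float(i == j) for j in range(4)] for i in range(4)]

# (P1): u_i^* u_j = delta_ij * I
assert close(matmul(transpose(u0), u0), I2)
assert close(matmul(transpose(u1), u1), I2)
assert close(matmul(transpose(u0), u1), Z2)

# (P2): u0 u0^* + u1 u1^* = I
s00 = matmul(u0, transpose(u0))
s11 = matmul(u1, transpose(u1))
total = [[s00[i][j] + s11[i][j] for j in range(4)] for i in range(4)]
assert close(total, I4)
```

The cross terms $w_{0}^{\ast}w_{1}$ and $w_{1}^{\ast}w_{0}$ vanish by orthogonality of the ranges, which is exactly why the $\sqrt{\alpha},\sqrt{\beta}$ coefficients recombine to give the partition axioms.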
For all $e,e'\in F$ we obtain \begin{mathe}[mc]{rcl} \BRAKET{(\tilde{R}(t)-R(t))e}{e'} &= &\BRAKET{u_{0}S(t)u_{0}^{\ast}e}{e'} + \BRAKET{u_{1}T(t)u_{1}^{\ast}e}{e'} - \BRAKET{R(t)e}{e'}\\ &= &\BRAKET{S(t)u_{0}^{\ast}e}{u_{0}^{\ast}e'} + \BRAKET{T(t)u_{1}^{\ast}e}{u_{1}^{\ast}e'} - \BRAKET{(\alpha S(t) + \beta T(t))e}{e'}\\ &\eqcrefoverset{eq:sqrt-alpha:\beweislabel}{=} &\BRAKET{S(t)\sqrt{\alpha}e}{\sqrt{\alpha}e'} + \BRAKET{T(t)\sqrt{\beta}e}{\sqrt{\beta}e'} - \BRAKET{(\alpha S(t) + \beta T(t))e}{e'}\\ &= &0\\ \end{mathe} \noindent for all $t\in M$. In particular we have found an $\tilde{R}\in\quer{A}$ which clearly satisfies \eqcref{eq:0:\beweislabel}. This establishes that the convex hull of $\quer{A}$ is contained in $\quer{A}$. Thus $\quer{A}$ is convex. \end{proof} By \Cref{prop:basic-some-subspaces-are-closed-under-finite-stream-addition:sig:article-str-problem-raj-dahya} and \Cref{lemm:closure-is-convex:sig:article-str-problem-raj-dahya} we thus immediately obtain the general result: \begin{cor} \makelabel{cor:closure-of-Hs-is-convex:sig:article-str-problem-raj-dahya} For all topological monoids, $M$, the closure of $\cal{C}_{s}(M)$ within $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is convex. \end{cor} We now provide a large class of topological monoids, $M$, for which $\cal{C}_{s}(M)$ is not convex. \begin{defn} Denote by $\text{\upshape\bfseries I}(\cdot)$ the trivial $\text{\upshape \scshape sot}$-continuous semigroup over $\mathcal{H}$ on $M$, which is everywhere equal to the identity operator. Say that $M$ has \highlightTerm{non-trivial unitary semigroups} over $\mathcal{H}$, just in case there exists a unitary $\text{\upshape \scshape sot}$-continuous semigroup, $U$, over $\mathcal{H}$ on $M$ with ${U \neq \text{\upshape\bfseries I}(\cdot)}$. \end{defn} \begin{defn} Let $(G,\cdot,{}^{-1},1)$ be a topological group. 
Say that $M\subseteq G$ is a \highlightTerm{topological submonoid}, just in case $M$ is endowed with the subspace topology, contains the neutral element $1$, and is closed under $\cdot$. \end{defn} In particular, if $G$ is a topological group and $M\subseteq G$ is a topological submonoid, then $M$ is itself a topological monoid. \begin{prop} \makelabel{prop:gelfand-raikov-monoids:sig:article-str-problem-raj-dahya} Let $G$ be a locally compact Polish group and suppose that $M\subseteq G$ is a topological submonoid with $M\neq\{1\}$. Then $M$ has non-trivial unitary semigroups over $\mathcal{H}$. \end{prop} \begin{proof} Let $t_{0}\in M\setminus\{1\}$. By the Gelfand-Raikov theorem (see \exempli \cite[Theorem~6]{yoshi1949}), there exists a Hilbert space $\mathcal{H}_{0}$ and an irreducible $\text{\upshape \scshape sot}$-continuous unitary representation, ${U_{0}:G\to\OpSpaceU{\mathcal{H}_{0}}}$, satisfying $U_{0}(t_{0})\neq \text{\upshape\bfseries I}_{\mathcal{H}_{0}}$. By irreducibility and since $G$ is separable, it necessarily holds that $\mathop{\textup{dim}}(\mathcal{H}_{0})\leq\aleph_{0}=\mathop{\textup{dim}}(\mathcal{H})$. If $\mathop{\textup{dim}}(\mathcal{H}_{0})=\mathop{\textup{dim}}(\mathcal{H})$, then we may assume without loss of generality that $\mathcal{H}_{0}=\mathcal{H}$, and thus that $U_{0}$ is a representation of $G$ over $\mathcal{H}$. If $\mathop{\textup{dim}}(\mathcal{H}_{0})$ is finite, we may assume that $\mathcal{H}_{0}\subset\mathcal{H}$ and view the orthogonal complement $\mathcal{H}_{1}\colonequals\mathcal{H}_{0}^{\perp}$ within $\mathcal{H}$. Replacing $U_{0}$ by ${t\in G\mapsto U_{0}(t)\oplus \text{\upshape\bfseries I}_{\mathcal{H}_{1}}}$ yields an $\text{\upshape \scshape sot}$-continuous unitary representation of $G$ over $\mathcal{H}$ which satisfies $U_{0}(t_{0})\neq\text{\upshape\bfseries I}_{\mathcal{H}}$. 
In both cases, restricting $U_{0}$ to $M$ yields a non-trivial $\text{\upshape \scshape sot}$-continuous unitary semigroup over $\mathcal{H}$ on $M$. Hence $M$ has non-trivial unitary semigroups over $\mathcal{H}$. \end{proof} \begin{lemm} \makelabel{lemm:non-trivial-Hs-is-nonconvex:sig:article-str-problem-raj-dahya} Suppose that $M$ has non-trivial unitary semigroups over $\mathcal{H}$. Then $\cal{C}_{s}(M)$ is not a convex subset of $\cal{F}^{c}_{w}(M)$. \end{lemm} \begin{proof} Let ${S\colonequals\text{\upshape\bfseries I}(\cdot)}$ be the trivial $\text{\upshape \scshape sot}$-continuous unitary semigroup over $\mathcal{H}$ on $M$. And by non-triviality we may fix some $\text{\upshape \scshape sot}$-continuous unitary semigroup ${T\in\cal{C}_{s}(M)\setminus\{\text{\upshape\bfseries I}(\cdot)\}}$. In particular, $T(t_{0})\neq\text{\upshape\bfseries I}$ for some $t_{0}\in M$, which we shall fix. Choose any $\alpha,\beta\in(0,1)$ with $\alpha+\beta=1$. It suffices to show that $R \colonequals \alpha S + \beta T \notin \cal{C}_{s}(M)$. Suppose \textit{per contra} that $R \in \cal{C}_{s}(M)$. Then by the semigroup law we have \begin{mathe}[mc]{rcl} \text{\upshape\bfseries 0} &= &R(st)-R(s)R(t)\\ &= &\big(\alpha S(st)+\beta T(st)\big) -\big(\alpha S(s)+\beta T(s)\big) \big(\alpha S(t)+\beta T(t)\big)\\ &= &\big(\alpha S(s)S(t)+\beta T(s)T(t)\big) -\big(\alpha S(s)+\beta T(s)\big) \big(\alpha S(t)+\beta T(t)\big)\\ &= &\alpha(1-\alpha)\,S(s)S(t) +\beta(1-\beta)\,T(s)T(t) -\alpha\beta\,S(s)T(t) -\beta\alpha\,T(s)S(t)\\ &= &\alpha\beta\,S(s)S(t) +\beta\alpha\,T(s)T(t) -\alpha\beta\,S(s)T(t) -\beta\alpha\,T(s)S(t)\\ &= &\alpha\beta\,(S(s)-T(s))(S(t)-T(t)).\\ \end{mathe} \noindent for all $s,t\in M$. Since $\alpha,\beta \neq 0$ setting $s\colonequals t\colonequals t_{0}$ in the above yields \begin{mathe}[mc]{rcl} (\text{\upshape\bfseries I} - u)^{2} &= &\text{\upshape\bfseries 0},\\ \end{mathe} \noindent where $u\colonequals T(t_{0})\in\OpSpaceU{\mathcal{H}}$. 
Applying the spectral mapping theorem to ${(\text{\upshape\bfseries I} - u)^{2}=\text{\upshape\bfseries 0}}$ yields that the spectrum of $u$ is $\{1\}$, and hence that the spectral radius of $\text{\upshape\bfseries I}-u$ is $0$. Since $u$ is unitary, $\text{\upshape\bfseries I}-u$ is normal, so by the Gelfand theorem (see \cite[Theorem~2.1.10]{murphy1990}) its norm coincides with its spectral radius. It follows that $T(t_{0})=u=\text{\upshape\bfseries I}$, which is a contradiction. \end{proof} Applying \Cref{cor:closure-of-Hs-is-convex:sig:article-str-problem-raj-dahya}, \Cref{prop:gelfand-raikov-monoids:sig:article-str-problem-raj-dahya}, and \Cref{lemm:non-trivial-Hs-is-nonconvex:sig:article-str-problem-raj-dahya} yields: \begin{cor} \makelabel{cor:broad-class-of-counterexamples-to-Hs-closed:sig:article-str-problem-raj-dahya} Let $G$ be a locally compact Polish group and suppose that $M\subseteq G$ is a topological submonoid with $M\neq\{1\}$. Then the closure of $\cal{C}_{s}(M)$ within $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is convex, whilst $\cal{C}_{s}(M)$ itself is not convex. In particular, $\cal{C}_{s}(M)$ is not a closed subspace within $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$. \end{cor} Considering $G=(\mathbb{R}^{d},+,\text{\upshape\bfseries 0})$ with $d\geq 1$ and ${M \colonequals \reals_{+}^{d} \subseteq G}$, the conditions of \Cref{cor:broad-class-of-counterexamples-to-Hs-closed:sig:article-str-problem-raj-dahya} are satisfied. In particular, the subspace of one-/multiparameter $\ensuremath{C_{0}}$-semigroups is not closed within the larger space of $\text{\upshape \scshape wot}$-continuous contraction-valued functions. \@startsection{section}{1}{\z@}{.7\linespacing\@plus\linespacing}{.5\linespacing}{\formatsection@text}[Complete metrisability results]{Complete metrisability results} \label{sec:results} \noindent As indicated in the introduction, we shall demonstrate the complete metrisability of $\cal{C}_{s}(M)$ by directly classifying its Borel complexity within the larger Polish space, $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$. 
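Before proceeding, the non-convexity counterexample of the preceding section can be made concrete in the simplest scalar instance (a Python sketch with $\mathcal{H}=\mathbb{C}$, which simplifies the lemma's infinite-dimensional setting): taking $S(t)=1$ and $T(t)=e^{it}$ on $M=(\reals_{+},+,0)$, the defect of the semigroup law for $R=\alpha S+\beta T$ matches the closed form $\alpha\beta(S(s)-T(s))(S(t)-T(t))$ derived in the proof and is non-zero.

```python
import cmath

def S(t):
    # the trivial (scalar) unitary semigroup: identically 1
    return 1.0

def T(t):
    # a non-trivial scalar unitary semigroup: e^{it}
    return cmath.exp(1j * t)

alpha, beta = 0.5, 0.5  # any convex weights in (0, 1) work

def R(t):
    # convex combination of the two semigroups
    return alpha * S(t) + beta * T(t)

s, t = 1.3, 0.7  # arbitrary sample points in the additive monoid R_+

# defect of the semigroup law, oriented as in the proof: R(st) - R(s)R(t)
defect = R(s + t) - R(s) * R(t)

# closed form derived in the proof: alpha*beta*(S(s)-T(s))*(S(t)-T(t))
closed_form = alpha * beta * (S(s) - T(s)) * (S(t) - T(t))

assert abs(defect - closed_form) < 1e-12
assert abs(defect) > 0.1  # the semigroup law genuinely fails for R
```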
By the previous section we know that in a very general setting, $\cal{C}_{s}(M)$ is not closed within $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$. Hence we require weaker conditions, which determine when subspaces are completely metrisable. For this we rely on the following classical result from descriptive set theory (see \cite[Theorem~3.11]{kech1994} for a proof): \begin{lemm*}[Alexandroff's lemma] Let $X$ be a completely metrisable space. Then $A\subseteqX$ viewed with the relative topology is completely metrisable if and only if it is a $G_{\delta}$-subset of $X$. \end{lemm*} We now present the main result. \begin{schattierteboxdunn}[ backgroundcolor=leer, nobreak=true, ] \begin{thm} \makelabel{thm:main-result:sig:article-str-problem-raj-dahya} Let $\mathcal{H}$ denote a separable infinite-dimensional Hilbert space and $M$ be a locally compact Polish monoid. If $M$ is \usesinglequotes{good}, then the space $\cal{C}_{s}(M)$ of contractive $\ensuremath{C_{0}}$-semigroups over $\mathcal{H}$ on $M$ is Polish under the $\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}}$-topology. \end{thm} \end{schattierteboxdunn} \begin{proof} Since $\{1\}$ is a compact subset of $M$, it is easy to see that \begin{mathe}[mc]{c} X \colonequals \{T\in\cal{F}^{c}_{w}(M) \mid T(1)=\text{\upshape\bfseries I}\}\\ \end{mathe} \noindent is a closed subset in $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ and thus $(X,\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is Polish (\cf \Cref{prop:basic:SpC:basic-polish:sig:article-str-problem-raj-dahya}). By Alexandroff's lemma it thus suffices to prove that $\cal{C}_{s}(M)$ is a $G_{\delta}$-subset of $X$. 
To proceed, observe that since $M$ is a locally compact Polish space, it is $\sigma$-compact, \idest there exists a countable collection of compact subsets, ${\tilde{\cal{K}}\subseteq\KmpRm{M}}$, such that ${\bigcup_{K\in\tilde{\cal{K}}}K=M}$. \Withoutlog one may assume that $\tilde{\cal{K}}$ is closed under finite unions. Since $\mathcal{H}$ is a separable Hilbert space, it admits a countable ONB ${B\subseteq\mathcal{H}}$. For each finite $F\subseteq B$ define \begin{mathe}[mc]{c} \pi_{F} \colonequals \mathop{\mathrm{Proj}}_{\mathop{\textup{lin}}(F)} \in \BoundedOps{\mathcal{H}},\\ \end{mathe} \noindent \idest, the projection onto the closed subspace generated by $F$. Using these, we construct \begin{mathe}[mc]{c} d_{K,F,e,e'}(T) \colonequals {\displaystyle\sup_{s,t\in K}}|\BRAKET{(T(s)\pi_{F}T(t)-T(st))e}{e'}|\\ \end{mathe} \noindent for each ${K\in\KmpRm{M}}$, ${F\subseteq B}$ finite, ${e,e'\in\mathcal{H}}$, and ${T\inX}$, and \begin{mathe}[mc]{rcl} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:sets:\beweislabel] V_{\varepsilon;K,F}(\tilde{T}) &\colonequals &{\displaystyle\bigcap_{e,e'\in F}} \{ T\inX \mid {\displaystyle\sup_{t\in K}} |\BRAKET{(T(t)-\tilde{T}(t))e}{e'}| < \varepsilon \},\\ W_{\varepsilon;K,F,e,e'} &\colonequals &\{T\inX \mid d_{K,F,e,e'}(T)<\varepsilon\}\\ \end{mathe} \noindent for each ${\varepsilon>0}$, ${K\in\KmpRm{M}}$, ${F\subseteq B}$ finite, ${e,e'\in\mathcal{H}}$, and ${\tilde{T}\inX}$. 
We can now present our strategy for the rest of the proof: To show that $\cal{C}_{s}(M)$ is a $G_{\delta}$-subset of $X$, it suffices to show (I) that the $W$-sets defined in \eqcref{eq:sets:\beweislabel} are open and (II) that \begin{mathe}[tc]{rcccl} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:G-delta-expression:\beweislabel] \cal{C}_{s}(M) &\subseteq &\displaystyle\bigcap_{\substack{ \varepsilon\in\mathbb{Q}^{+},~K\in\tilde{\cal{K}},\\ e,e'\in B,\\ F_{0}\subseteq B~\text{finite} }}\; \displaystyle\bigcup_{\substack{ F\subseteq B~\text{finite}\\ \text{s.t.}~F\supseteq F_{0} }}\; W_{\varepsilon;K,F,e,e'} &\subseteq &\cal{C}_{w}(M).\\ \end{mathe} \noindent If these two statements hold, then by assumption of $M$ being \usesinglequotes{good}, (I) + (II) will yield that $\cal{C}_{s}(M)=\cal{C}_{w}(M)$ are equal to a $G_{\delta}$-subset of $X$, which will complete the proof. \null Towards (I), fix arbitrary ${\varepsilon>0}$, ${K\in\KmpRm{M}}$, ${F\subseteq B}$ finite, and ${e,e'\in B}$, and consider an arbitrary element, ${\tilde{T}\in W_{\varepsilon;K,F,e,e'}}$. We need to show that $\tilde{T}$ is in the interior of $W_{\varepsilon;K,F,e,e'}$. By continuity of multiplication in the topological monoid, $M$, the set ${K\cdot K=\{st\mid s,t\in K\}}$ is compact. Setting ${K'\colonequals K\cup (K\cdot K)}$ and ${F'\colonequals F\cup\{e,e'\}}$, it suffices to show that \begin{mathe}[tc]{c} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:openness-of-W:\beweislabel] V_{\varepsilon';K',F'}(\tilde{T}) \subseteq W_{\varepsilon;K,F,e,e'}\\ \end{mathe} \noindent holds for some ${\varepsilon'>0}$, since clearly $\tilde{T}$ is an element of the left hand side and by definition of the $\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}}$-topology, the $V$-sets are clearly open. We determine $\varepsilon'$ as follows. 
First note that by virtue of $\tilde{T}$ being in $W_{\varepsilon;K,F,e,e'}$ \begin{mathe}[tc]{c} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:modified-eps:\beweislabel] r \colonequals \varepsilon - d_{K,F,e,e'}(\tilde{T}) > 0\\ \end{mathe} \noindent holds. Since the unit disc, ${\cal{D}_{1}=\{z\in\mathbb{C}\mid |z|\leq 1\}}$, is compact, the map ${(a,b)\in\cal{D}_{1}^{2}\mapsto ab\in\mathbb{C}}$ is uniformly continuous, and hence some ${\varepsilon'>0}$ exists, such that \begin{mathe}[tc]{c} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:uniform-cts-func:\beweislabel] |a'b'-ab| < \frac{r}{4|F|+1}\\ \end{mathe} \noindent for all ${a,b,a',b'\in\cal{D}_{1}}$ with ${|a-a'|<\varepsilon'}$ and ${|b-b'|<\varepsilon'}$. We may also assume without loss of generality that $\varepsilon'<\frac{r}{4}$. With this $\varepsilon'$-value, the left hand side of \eqcref{eq:openness-of-W:\beweislabel} is now determined. It remains to show that the inclusion holds. Since the elements in $X$ are all contraction-valued functions and the ONB, $B$, consists of unit vectors, it holds that ${\BRAKET{T(t)\xi}{\eta}\in\cal{D}_{1}}$ for all $T\inX$, ${t\in M}$, and ${\xi,\eta\in B}$. Now consider an arbitrary $T$ in the left hand side of \eqcref{eq:openness-of-W:\beweislabel}. Let $s,t\in K$ be arbitrary. Then $s,t,st\in K'$, so that by the choice of $F'$ and by virtue of $T$ being inside $V_{\varepsilon';K',F'}(\tilde{T})$, we have \begin{mathe}[mc]{rcl} |\BRAKET{T(s)e''}{e'}-\BRAKET{\tilde{T}(s)e''}{e'}| &< &\varepsilon',\\ |\BRAKET{T(t)e}{e''}-\BRAKET{\tilde{T}(t)e}{e''}| &< &\varepsilon',\\ |\BRAKET{T(st)e}{e'}-\BRAKET{\tilde{T}(st)e}{e'}| &< &\varepsilon'\\ \end{mathe} \noindent for all $e''\in F$. 
Since $F$ is an orthonormal collection, the choice of $\varepsilon'$ and \eqcref{eq:uniform-cts-func:\beweislabel} yield \begin{longmathe}[mc]{RCL} \begin{array}[b]{0l} |\BRAKET{(T(s)\pi_{F}T(t)-T(st))e}{e'}\\ \;-\BRAKET{(\tilde{T}(s)\pi_{F}\tilde{T}(t)-\tilde{T}(st))e}{e'}|\\ \end{array} &\leq &{\displaystyle\sum_{e''\in F}} | \BRAKET{T(s)e''}{e'} \BRAKET{T(t)e}{e''} - \BRAKET{\tilde{T}(s)e''}{e'} \BRAKET{\tilde{T}(t)e}{e''} |\\ &&\quad +\;|\BRAKET{T(st)e}{e'}-\BRAKET{\tilde{T}(st)e}{e'}|\\ &\eqcrefoverset{eq:uniform-cts-func:\beweislabel}{<} &{\displaystyle\sum_{e''\in F}}\frac{r}{4|F|+1}\;+\;\varepsilon' <\frac{r|F|}{4|F|+1} + \frac{r}{4} <\frac{r}{2}\\ \end{longmathe} \noindent for all $s,t\in K$. Thus \begin{longmathe}[mc]{RCL} d_{K,F,e,e'}(T) &= &{\displaystyle\sup_{s,t\in K}}|\BRAKET{(T(s)\pi_{F}T(t)-T(st))e}{e'}|\\ &\leq &{\displaystyle\sup_{s,t\in K}}|\BRAKET{(\tilde{T}(s)\pi_{F}\tilde{T}(t)-\tilde{T}(st))e}{e'}| +\frac{r}{2}\\ &= &d_{K,F,e,e'}(\tilde{T})+\frac{r}{2}\\ &\eqcrefoverset{eq:modified-eps:\beweislabel}{=} &\varepsilon-r+\frac{r}{2} < \varepsilon,\\ \end{longmathe} \noindent whence ${T\in W_{\varepsilon;K,F,e,e'}}$. Hence the inclusion in \eqcref{eq:openness-of-W:\beweislabel} holds, as desired. \null To prove (II), consider the first inclusion of \eqcref{eq:G-delta-expression:\beweislabel}. Let ${T\in\cal{C}_{s}(M)}$ be arbitrary. To show that $T$ is in the $G_{\delta}$-set in the middle of \eqcref{eq:G-delta-expression:\beweislabel}, consider arbitrary fixed ${\varepsilon>0}$, ${K\in\KmpRm{M}}$, ${F_{0}\subseteq B}$ finite, and ${e,e'\in B}$. Our goal is to find some finite ${F\subseteq B}$ with ${F\supseteq F_{0}}$, such that $T \in W_{\varepsilon;K,F,e,e'}$. 
To this end, we rely on the fact that $T$ is a contractive semigroup and observe that for all finite $F\subseteq B$ the functions \begin{mathe}[mc]{rcccl} \tilde{f}_{F} &: &K &\to &\reals_{+}\\ &&t &\mapsto &\|(\text{\upshape\bfseries I}-\pi_{F})T(t)e\|,\\ \\ f_{F} &: &K\times K &\to &\reals_{+}\\ &&(s,t) &\mapsto &|\BRAKET{(T(s)\pi_{F}T(t)-T(st))e}{e'}|\\ \end{mathe} \noindent satisfy \begin{mathe}[mc]{rclcccl} f_{F}(s,t) &= &|\BRAKET{(T(s)\pi_{F}T(t)-T(s)T(t))e}{e'}|\\ &= &|\BRAKET{(\text{\upshape\bfseries I}-\pi_{F})T(t)e}{T(s)^{\ast}e'}| &\leq &\|T(s)^{\ast}e'\|\tilde{f}_{F}(t) &\leq &\tilde{f}_{F}(t)\\ \end{mathe} \noindent for all $s,t\in K$. Furthermore, the $\text{\upshape \scshape sot}$-continuity of $T$ guarantees that $\tilde{f}_{F}$ is continuous. Now consider the net $(\tilde{f}_{F})_{F}$, where the indices run over all finite $F\subseteq B$, ordered by inclusion. Note that the correspondingly indexed net of projections, $(\pi_{F})_{F}$, is monotone, and, since ${\bigcup_{F\subseteq B~\text{finite}}F=B}$ and $B$ is a basis for $\mathcal{H}$, it holds that ${\pi_{F}\underset{F}{\longrightarrow}\text{\upshape\bfseries I}}$ weakly (in fact strongly). Clearly then \begin{mathe}[mc]{c} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:pt-wise-convergence:\beweislabel] \tilde{f}_{F} \underset{F}{\longrightarrow} 0\\ \end{mathe} \noindent pointwise and monotone. Since $K$ is compact and the $\tilde{f}_{F}$ are continuous for all $F$, by Dini's Theorem (\cf \cite[Theorem~2.66]{aliprantis2005} ) the monotone pointwise convergence in \eqcref{eq:pt-wise-convergence:\beweislabel} is in fact uniform convergence. Hence, by the definition of the net, for some finite $F\subseteq B$ with $F\supseteq F_{0}$ \begin{mathe}[mc]{c} d_{K,F,e,e'}(T) = \sup_{s,t\in K}f_{F}(s,t) \leq \sup_{t\in K}\tilde{f}_{F}(t) < \varepsilon,\\ \end{mathe} \noindent and thereby ${T\in W_{\varepsilon;K,F,e,e'}}$. 
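The effect just established, namely that enlarging $F$ forces $d_{K,F,e,e'}(T)$ below any $\varepsilon$ for a genuine semigroup, can be illustrated in a small toy case (a Python sketch with the planar rotation semigroup, outside the paper's infinite-dimensional setting): for $F$ a proper subset of the basis the defect stays positive, while for $F$ the full basis one has $\pi_{F}=\text{\upshape\bfseries I}$ and the defect vanishes.

```python
import math

def T(t):
    # planar rotation semigroup on R^2, restricted to t >= 0 (monoid (R_+, +))
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def defect(proj, grid):
    # d_{K,F,e,e'}(T) with K = grid, e = e' = first basis vector:
    # sup over s,t of |<(T(s) pi_F T(t) - T(s+t)) e1, e1>|
    worst = 0.0
    for s in grid:
        for t in grid:
            approx = matmul(matmul(T(s), proj), T(t))
            exact = T(s + t)  # "st" is s + t in this additive monoid
            worst = max(worst, abs(approx[0][0] - exact[0][0]))
    return worst

grid = [k / 100 for k in range(101)]  # discretisation of K = [0, 1]
pi_small = [[1.0, 0.0], [0.0, 0.0]]  # projection onto F = {e_1}
pi_full  = [[1.0, 0.0], [0.0, 1.0]]  # projection onto the full basis

d_small = defect(pi_small, grid)
d_full  = defect(pi_full, grid)

# With the full basis, pi_F = I and the semigroup law makes the defect vanish;
# with F = {e_1} the defect is sup |sin(s) sin(t)| = sin(1)^2 on K = [0, 1].
assert d_full < 1e-12
assert abs(d_small - math.sin(1.0) ** 2) < 1e-9
```

In infinite dimensions the role of the grid of projections is played by the net $(\pi_{F})_{F}$ over finite subsets of the ONB, and Dini's theorem upgrades the pointwise decay to the uniform bound used above.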
\null To complete the proof of (II), we treat the second inclusion in \eqcref{eq:G-delta-expression:\beweislabel}. So let ${T\inX}$ be an arbitrary element in the $G_{\delta}$-set in the middle of \eqcref{eq:G-delta-expression:\beweislabel}. To show that ${T\in\cal{C}_{w}(M)}$, it is necessary and sufficient to show that ${T(st)=T(s)T(t)}$ for all ${s,t\in M}$. So fix arbitrary ${s,t\in M}$. It suffices to show that $\BRAKET{(T(s)T(t)-T(st))e}{e'}=0$ for all basis vectors, ${e,e'\in B}$. So fix arbitrary $e,e'\in B$. Note that since $\tilde{\cal{K}}$ covers $M$ and is closed under finite unions, there exists some ${K\in\tilde{\cal{K}}}$, such that ${s,t\in K}$. Fix this compact set. Now consider the net $(d_{K,F,e,e'}(T))_{F}$, whose indices run over all finite ${F\subseteq B}$, ordered by inclusion. Since $T$ is in the set in the middle of \eqcref{eq:G-delta-expression:\beweislabel}, working through the definitions yields \begin{mathe}[mc]{c} \forall{\varepsilon\in\mathbb{Q}^{+}:~} \forall{F_{0}\subseteq B~\text{finite}:~} \,\exists{F\subseteq B~\text{finite},~\text{s.t.}~F\supseteq F_{0}:~} \,d_{K,F,e,e'}(T)<\varepsilon,\\ \end{mathe} \noindent which is clearly equivalent to \begin{mathe}[mc]{c} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:liminf:\beweislabel]\relax {\displaystyle\liminf_{F}}\,d_{K,F,e,e'}(T) = 0.\\ \end{mathe} Now, since $s,t\in K$, it holds that \begin{mathe}[mc]{c} |\BRAKET{(T(s)T(t)-T(st))e}{e'}| \leq \underbrace{ |\BRAKET{T(s)(\text{\upshape\bfseries I}-\pi_{F})T(t)e}{e'}| }_{=:d'_{F}} +\underbrace{ |\BRAKET{(T(s)\pi_{F}T(t)-T(st))e}{e'}| }_{\leq d_{K,F,e,e'}(T)}\\ \end{mathe} \noindent for all finite $F\subseteq B$. Since ${\pi_{F}\underset{F}{\longrightarrow}\text{\upshape\bfseries I}}$ weakly (see above), we have ${d'_{F}\underset{F}{\longrightarrow}0}$. Noting \eqcref{eq:liminf:\beweislabel}, taking the limit inferior of the right hand side of the above expression thus yields ${\BRAKET{(T(s)T(t)-T(st))e}{e'}=0}$. 
Since ${e,e'\in B}$ were arbitrarily chosen, and $B$ is a basis for $\mathcal{H}$, it follows that ${T(st)=T(s)T(t)}$. This completes the proof. \end{proof} \begin{rem} The proof of \Cref{thm:main-result:sig:article-str-problem-raj-dahya} reveals that in fact claims (I) and (II) hold, provided the topological monoid $M$ is at least $\sigma$-compact. If $M$ is furthermore \usesinglequotes{good}, then these again imply that $\cal{C}_{s}(M)$ is a $G_{\delta}$-subset in $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$. The stronger assumption of $M$ being locally compact is only relied upon to obtain that $(\cal{F}^{c}_{w}(M),\text{{{$\mathpzc{k}$}}}_{\text{\tiny\upshape \scshape wot}})$ is itself completely metrisable (\cf the proof of \Cref{prop:basic:SpC:basic-polish:sig:article-str-problem-raj-dahya}), and thus via Alexandroff's lemma that $G_{\delta}$-subsets of this space are completely metrisable. \end{rem} Finally, by \Cref{cor:multiparameter-auto-cts:sig:article-str-problem-raj-dahya} we obtain: \begin{cor} The spaces of one-/multiparameter contractive $\ensuremath{C_{0}}$-semigroups on a separable Hilbert space, viewed under the topology of uniform $\text{\upshape \scshape wot}$-convergence on compact subsets, are Polish. \end{cor} This positively solves the open problem raised in \cite[\S{}III.6.3]{eisner2010buchStableOpAndSemigroups}. \setcounternach{section}{1} \appendix \@startsection{section}{1}{\z@}{.7\linespacing\@plus\linespacing}{.5\linespacing}{\formatsection@text}[Strong continuity of operator semigroups]{Strong continuity of operator semigroups} \label{app:continuity} \noindent The main result of this paper is proved for semigroups defined on \usesinglequotes{good} monoids (see \Cref{defn:good-monoids:sig:article-str-problem-raj-dahya}). 
By a well-known result, any $\text{\upshape \scshape wot}$-continuous semigroup on a Banach space $\mathcal{E}$ over the monoid $(\reals_{+},+,0)$ is automatically $\text{\upshape \scshape sot}$-continuous (\cf \cite[Theorem~5.8]{Engel1999}), and thus $\reals_{+}$ is by our definition a \usesinglequotes{good} monoid. In this appendix, we provide sufficient conditions for topological monoids to possess this property, and thus broaden the applicability of the main result. These conditions are given as follows: \begin{defn} \makelabel{app:continuity:defn:conditition-I:sig:article-str-problem-raj-dahya} A topological monoid, $M$, shall be called \highlightTerm{extendible}, if there exists a locally compact Hausdorff topological group, $G$, such that $M$ is topologically and algebraically isomorphic to a closed subset of $G$. \end{defn} If $M$ is extendible to $G$ via the above definition, then one can assume \withoutlog that $M \subseteq G$. \begin{defn} \makelabel{app:continuity:defn:conditition-II:sig:article-str-problem-raj-dahya} Let $G$ be a locally compact Hausdorff group. Call a subset $A \subseteq G$ \highlightTerm{positive in the identity}, if for all neighbourhoods, $U \subseteq G$, of the group identity, $U \cap A$ has non-empty interior within $G$. \end{defn} \begin{e.g.}[The non-negative reals] \makelabel{app:continuity:e.g.:extendible-mon:reals:sig:article-str-problem-raj-dahya} Consider ${M \colonequals \reals_{+}}$ viewed under addition. Since ${M\subseteq\mathbb{R}}$ is closed, we have that $M$ is an extendible locally compact Hausdorff monoid. For any open neighbourhood, ${U\subseteq\mathbb{R}}$, of the identity, there exists an ${\varepsilon>0}$, such that ${(-\varepsilon,\varepsilon)\subseteq U}$ and thus ${U\cap M\supseteq(0,\varepsilon)\neq\emptyset}$. Hence $M$ is positive in the identity. 
\end{e.g.} \begin{e.g.}[The $p$-adic integers] \makelabel{app:continuity:e.g.:extendible-mon:p-adic:sig:article-str-problem-raj-dahya} Consider ${M \colonequals \mathbb{Z}_{p}}$ with ${p\in\mathbb{P}}$, viewed under addition and with the topology generated by the $p$-adic norm. Since ${M\subseteq\mathbb{Q}_{p}}$ is clopen, it is an extendible locally compact Hausdorff monoid. Since $M$ is clopen, it is clearly positive in the identity. \end{e.g.} \begin{e.g.}[Discrete cases] \makelabel{app:continuity:e.g.:extendible-mon:discrete:sig:article-str-problem-raj-dahya} Let $G$ be a discrete group, and let $M\subseteq G$ contain the identity and be closed under group multiplication. Clearly, $M$ is a locally compact Hausdorff monoid, extendible to $G$ and positive in the identity. For example, one can take the free group $\mathbb{F}_{2}$ with generators $\{a,b\}$, and $M$ to be the closure of $\{1,a,b\}$ under multiplication. \end{e.g.} \begin{e.g.}[Non-discrete, non-commutative cases] \makelabel{app:continuity:e.g.:extendible-mon:non-comm-non-discrete:sig:article-str-problem-raj-dahya} Let $d\in\mathbb{N}$ with $d>1$ and consider the space, $X$, of $\mathbb{R}$-valued $d\times d$ matrices. Topologised with any matrix norm (equivalently the strong or the weak operator topologies), this space is homeomorphic to $\mathbb{R}^{d^{2}}$ and thus locally compact Hausdorff. Since the determinant map ${X\ni T\mapsto \det(T)\in\mathbb{R}}$ is continuous, the subspace of invertible matrices $\{T\inX\mid \det(T)\neq 0\}$ is open and thus a locally compact Hausdorff topological group. The subspace, $G$, of upper triangular matrices with positive diagonal entries, is a closed subgroup and thus locally compact Hausdorff. 
Letting \begin{mathe}[mc]{rcl} G_{0} &\colonequals &\{T\in G\mid \det(T)=1\},\\ G_{+} &\colonequals &\{T\in G\mid \det(T)>1\},~\text{and}\\ G_{-} &\colonequals &\{T\in G\mid \det(T)<1\},\\ \end{mathe} \noindent it is easy to see that $M \colonequals G_{0}\cup G_{+}$ is a topologically closed subspace containing the identity and is closed under multiplication. Moreover, $M$ is a proper monoid, since the inverses of the elements in $G_{+}$ are clearly in ${G\setminus M}$. Consider now an open neighbourhood, ${U\subseteq G}$, of the identity. Since inversion is continuous, $U^{-1}$ is also an open neighbourhood of the identity. Since, as a locally compact Hausdorff space, $G$ satisfies the Baire category theorem, and since ${G_{+}\cup G_{-}}$ is clearly dense (and open) in $G$, and thus comeagre, we clearly have $(U\cap U^{-1})\cap(G_{+}\cup G_{-})\neq\emptyset$. So either ${U\cap G_{+}\neq\emptyset}$, or else ${U^{-1}\cap G_{-}\neq\emptyset}$ and thus ${U\cap G_{+}=(U^{-1}\cap G_{-})^{-1}\neq\emptyset}$. Hence in either case ${U\cap M}$ contains a non-empty open subset, \viz ${U\cap G_{+}}$. So $M$ is extendible to $G$ and positive in the identity. Next, consider the subgroup, ${G_{h} \subseteq G}$, consisting of matrices of the form $T = \text{\upshape\bfseries I} + \tilde{T}$, where $\tilde{T}$ is a strictly upper triangular matrix whose non-zero entries occur at most in the top row and the right-hand column. That is, $G_{h}$ is the \highlightTerm{continuous Heisenberg group}, $\Heisenberg{2d-3}(\mathbb{R})$, of order ${2d-3}$. The elements of the Heisenberg group occur in the study of Kirillov's \highlightTerm{orbit method} (see \cite{kirillov1962}) and have important applications in physics (see \exempli \cite{kirillov2003}). Clearly, $G_{h}$ is topologically closed within $G$ and thus locally compact Hausdorff. 
Now consider the subspace, \begin{mathe}[mc]{c} M_{h} \colonequals \{T\in G_{h} \mid \forall{i,j\in\{1,2,\ldots,d\}:~}T_{ij}\geq 0\},\\ \end{mathe} \noindent of matrices with only non-negative entries. This is clearly a topologically closed subspace of $G_{h}$ containing the identity and closed under multiplication. Moreover, for each ${S,T \in M_{h}\setminus\{\text{\upshape\bfseries I}\}}$ we have \begin{mathe}[mc]{c} ST = \text{\upshape\bfseries I} + \big((S-\text{\upshape\bfseries I}) + (T-\text{\upshape\bfseries I}) + (S-\text{\upshape\bfseries I})(T-\text{\upshape\bfseries I})\big) \in M_{h} \setminus \{\text{\upshape\bfseries I}\},\\ \end{mathe} \noindent which implies that no non-trivial element in $M_{h}$ has its inverse in $M_{h}$, making $M_{h}$ a proper monoid. Consider now an open neighbourhood, ${U \subseteq G_{h}}$, of the identity. Since $G_{h}$ is homeomorphic to $\mathbb{R}^{2d-3}$, there exists some ${\varepsilon>0}$, such that \begin{mathe}[mc]{c} U \supseteq \{ T \in G_{h} \mid \forall{(i,j)\in\mathcal{I}:~}T_{ij}\in(-\varepsilon,\varepsilon) \},\\ \end{mathe} \noindent where $\mathcal{I} \colonequals \{(1,2),(1,3),\ldots,(1,d),(2,d),\ldots,(d-1,d)\}$. Hence \begin{mathe}[mc]{rcl} U \cap M_{h} &\supseteq &\{ T \in G_{h} \mid \forall{(i,j)\in\mathcal{I}:~}T_{ij}\in(0,\varepsilon) \} \eqcolon V,\\ \end{mathe} \noindent where $V$ is clearly a non-empty open subset of $G_{h}$, since the $2d-3$ entries in the matrices can be freely and independently chosen. Thus $M_{h}$ is extendible to $G_{h}$ and positive in the identity. Finally, we may consider the subgroup, ${G_{u} \colonequals \mathop{\mathrm{UT}_{1}}(d)}$, of upper triangular matrices over $\mathbb{R}$ with unit diagonal. The elements of $\mathop{\mathrm{UT}_{1}}(d)$ have important applications in image analysis (see \exempli \cite{kirillov2003} and \cite[\S{}5.5.2]{pennec2020} ) and representations of the group have been studied in \cite[Chapter~6]{samoilenko1991}. 
Setting ${M_{u} \colonequals \{T \in G_{u}\mid \forall{i,j\in\{1,2,\ldots,d\}:~}T_{ij}\geq 0\}}$, one may argue similarly to the case of the continuous Heisenberg group and obtain that $G_{u}$ is locally compact and that $M_{u}$ is a proper topological monoid which is furthermore extendible to $G_{u}$ and positive in the identity. \end{e.g.} The following result allows us to generate infinitely many examples from basic ones: \begin{prop} \makelabel{app:continuity:prop:monoids-with-cond-closed-under-fprod:sig:article-str-problem-raj-dahya} Let ${n\in\mathbb{N}}$ and let $M_{i}$ be locally compact Hausdorff monoids for ${1\leq i\leq n}$. Assume for each ${1\leq i\leq n}$ that $M_{i}$ is extendible to a locally compact Hausdorff group $G_{i}$, and that $M_{i}$ is positive in the identity of $G_{i}$. Then ${M \colonequals \prod_{i=1}^{n}M_{i}}$ is a locally compact Hausdorff monoid which is extendible to ${G \colonequals \prod_{i=1}^{n}G_{i}}$ and positive in the identity. \end{prop} \begin{proof} The extendibility of $M$ to $G$ is clear. Now consider an arbitrary open neighbourhood, $U$, of the identity in $G$. For each ${1\leq i\leq n}$, one can find open neighbourhoods, $U_{i}$, of the identity in $G_{i}$, so that ${U' \colonequals \prod_{i=1}^{n}U_{i}\subseteq U}$. By assumption, $M_{i}\cap U_{i}$ contains a non-empty open set, ${V_{i}\subseteq G_{i}}$, for each ${1\leq i\leq n}$. It follows that $M\cap U \supseteq M\cap U' = \prod_{i=1}^{n}(M_{i}\cap U_{i}) \supseteq \prod_{i=1}^{n}V_{i}\neq\emptyset$. Thus ${M\cap U}$ contains the non-empty open set $\prod_{i=1}^{n}V_{i}$ and, in particular, has non-empty interior. Hence $M$ is positive in the identity. \end{proof} Our argument for the generalised continuity result builds on \cite[Theorem~5.8]{Engel1999}.
\begin{schattierteboxdunn}[ backgroundcolor=leer, nobreak=true, ] \begin{thm} \makelabel{app:continuity:thm:generalised-auto-continuity:sig:article-str-problem-raj-dahya} Let $M$ be a topological monoid such that $M$ is extendible to a locally compact Hausdorff group $G$ and such that $M$ is positive in the identity. Then for any Banach space, $\mathcal{E}$, every $\text{\upshape \scshape wot}$-continuous semigroup, ${T:M\to\BoundedOps{\mathcal{E}}}$, is $\text{\upshape \scshape sot}$-continuous. In particular, $M$ is \usesinglequotes{good}. \end{thm} \end{schattierteboxdunn} (Note that a semigroup over a Banach space $\mathcal{E}$ on a topological monoid is defined analogously to \Cref{defn:semigroup-on-h:sig:article-str-problem-raj-dahya}.) \begin{proof} First note that the principle of uniform boundedness applied twice to the $\text{\upshape \scshape wot}$-continuous function, $T$, ensures that $T$ is norm-bounded on all compact subsets of $M$. Fix now a left-invariant Haar measure, $\lambda$, on $G$ and set \begin{mathe}[mc]{rcl} S &\colonequals &\{F\subseteq G\mid F~\text{a compact neighbourhood of the identity}\}.\\ \end{mathe} Consider arbitrary ${F\in S}$ and ${x \in \mathcal{E}}$. By the closure of $M$ in $G$ as well as positivity in the identity, ${M\cap F}$ is compact and contains a non-empty open subset of $G$. It follows that ${0<\lambda(M\cap F)<\infty}$. The $\text{\upshape \scshape wot}$-continuity of $T$, the compactness (and thus measurability) of ${M\cap F}$, and the norm-boundedness of $T$ on compact subsets ensure that \begin{mathe}[mc]{rcl} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:defn-xF:\beweislabel] \BRAKET{x_{F}}{\altvarphi} &\colonequals &\frac{1}{\lambda(M\cap F)} \displaystyle\int_{t\in M\cap F} \BRAKET{T(t)x}{\altvarphi} ~\textup{d}{}t, \quad\text{for $\altvarphi \in \mathcal{E}^{\prime}$}\\ \end{mathe} \noindent describes a well-defined element $x_{F} \in \mathcal{E}^{\prime\prime}$. 
Exactly as in \cite[Theorem~5.8]{Engel1999}, one may now argue by the $\text{\upshape \scshape wot}$-continuity of $T$ and compactness of ${M\cap F}$ that in fact ${x_{F} \in \mathcal{E}}$ for each ${x \in \mathcal{E}}$ and ${F\in S}$. Moreover, since $M$ is locally compact, and $T$ is $\text{\upshape \scshape wot}$-continuous with $T(1)=\text{\upshape\bfseries I}$, one readily obtains that each $x \in \mathcal{E}$ can be weakly approximated by the net, $(x_{F})_{F \in S}$, ordered by inverse inclusion. So \begin{mathe}[mc]{rcl} D &\colonequals &\{x_{F}\mid x \in \mathcal{E},~F \in S\}\\ \end{mathe} \noindent is weakly dense in $\mathcal{E}$. Since the weak and strong closures of any convex subset in a Banach space coincide (\cf \cite[Theorem~5.98]{aliprantis2005}), it follows that the convex hull, $\mathop{\textup{co}}(D)$, is strongly dense in $\mathcal{E}$. Now, to prove the $\text{\upshape \scshape sot}$-continuity of $T$, we need to show that \begin{mathe}[mc]{rcl} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:map:\beweislabel] t\in M &\mapsto &T(t)x \in \mathcal{E}\\ \end{mathe} \noindent is strongly continuous for all $x \in \mathcal{E}$. Since $M$ is locally compact and $T$ is norm-bounded on compact subsets of $M$, the set of $x \in \mathcal{E}$ such that \eqcref{eq:map:\beweislabel} is strongly continuous, is itself a strongly closed convex subset of $\mathcal{E}$. So, since $\mathop{\textup{co}}(D)$ is strongly dense in $\mathcal{E}$, it suffices to prove the strong continuity of \eqcref{eq:map:\beweislabel} for each ${x\in D}$. To this end, fix arbitrary ${x \in \mathcal{E}}$, ${F\in S}$ and ${t\in M}$. We need to show that ${T(t')x_{F}\longrightarrow T(t)x_{F}}$ strongly for ${t'\in M}$ as ${t'\longrightarrow t}$. 
First recall, that by basic harmonic analysis, the canonical \highlightTerm{left-shift}, \begin{mathe}[mc]{rcccl} L &: &G &\to &\BoundedOps{L^{1}(G)},\\ \end{mathe} \noindent defined via ${(L_{t}f)(s)=f(t^{-1}s)}$ for ${s,t\in G}$ and $f\in L^{1}(G)$, is an $\text{\upshape \scshape sot}$-continuous morphism (\cf \cite[Proposition~3.5.6 ($\lambda_{1}$--$\lambda_{4}$)]{reiter2000} ). Now, by compactness, ${f \colonequals \text{\textbf{1}}_{M\cap F}\in L^{1}(G)}$ and it is easy to see that ${\|L_{t'}f-L_{t}f\|_{1}=\lambda(t'(M\cap F)\mathbin{\Delta} t(M\cap F))}$ for ${t'\in M}$. The $\text{\upshape \scshape sot}$-continuity of $L$ thus yields \begin{mathe}[mc]{rcl} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:1:\beweislabel] \lambda(t'(M\cap F) \mathbin{\Delta} t(M\cap F)) &\longrightarrow &0\\ \end{mathe} \noindent for ${t'\in M}$ as ${t'\longrightarrow t}$. Fix now a compact neighbourhood, ${K\subseteq G}$, of $t$. For ${t'\in M\cap K}$ and ${\altvarphi \in \mathcal{E}^{\prime}}$ one obtains \begin{mathe}[mc]{rcl} |\BRAKET{T(t')x_{F}-T(t)x_{F}}{\altvarphi}| &= &|\BRAKET{x_{F}}{T(t')^{\ast}\altvarphi}-\BRAKET{x_{F}}{T(t)^{\ast}\altvarphi}|\\ &= &\frac{1}{\lambda(M\cap F)} \cdot\left| \displaystyle\int_{s\in M\cap F}\textstyle \BRAKET{T(s)x}{T(t')^{\ast}\altvarphi}~\textup{d}{}s - \displaystyle\int_{s\in M\cap F}\textstyle \BRAKET{T(s)x}{T(t)^{\ast}\altvarphi}~\textup{d}{}s \right|\\ &\multispan{2}{\text{by construction of $x_{F}$ in \eqcref{eq:defn-xF:\beweislabel}}}\\ &= &\frac{1}{\lambda(M\cap F)} \cdot\left| \displaystyle\int_{s\in M\cap F}\textstyle \BRAKET{T(t's)x}{\altvarphi}~\textup{d}{}s - \displaystyle\int_{s\in M\cap F}\textstyle \BRAKET{T(ts)x}{\altvarphi}~\textup{d}{}s \right|\\ &\multispan{2}{\text{since $T$ is a semigroup}}\\ &= &\frac{1}{\lambda(M\cap F)} \cdot\left| \displaystyle\int_{s\in t'(M\cap F)}\textstyle \BRAKET{T(s)x}{\altvarphi}~\textup{d}{}s - \displaystyle\int_{s\in t(M\cap F)}\textstyle \BRAKET{T(s)x}{\altvarphi}~\textup{d}{}s \right|\\ 
&\multispan{2}{\text{by left-invariance}}\\ &\leq &\frac{1}{\lambda(M\cap F)} \displaystyle\int_{s\in t'(M\cap F)\mathbin{\Delta} t(M\cap F)}\textstyle |\BRAKET{T(s)x}{\altvarphi}|~\textup{d}{}s\\ &\leq &\frac{1}{\lambda(M\cap F)} \cdot\displaystyle\sup_{s\in(M\cap K)(M\cap F)}\textstyle\|T(s)\| \cdot\|x\|\cdot\|\altvarphi\| \cdot\lambda(t'(M\cap F)\mathbin{\Delta} t(M\cap F))\\ &\multispan{2}{\text{since $t,t'\in M\cap K$}.}\\ \end{mathe} Since $K' \colonequals (M\cap K)(M\cap F)$ is compact, and $T$ is uniformly bounded on compact sets (see above), it holds that ${C \colonequals \displaystyle\sup_{s\in K'}\|T(s)\|\textstyle<\infty}$. The above calculation thus yields \begin{mathe}[mc]{rcl} \@ifnextchar[{\eqtag@loc@}{\eqtag@loc@[*]}[eq:2:\beweislabel] \|T(t')x_{F}-T(t)x_{F}\| &= &\displaystyle\sup\textstyle\{ |\BRAKET{T(t')x_{F}-T(t)x_{F}}{\altvarphi}| \mid \altvarphi \in \mathcal{E}^{\prime},~\|\altvarphi\|\leq 1 \}\\ &\leq &\frac{1}{\lambda(M\cap F)} \cdot C \cdot\|x\| \cdot\lambda(t'(M\cap F)\mathbin{\Delta} t(M\cap F))\\ \end{mathe} \noindent for all $t'\in M$ sufficiently close to $t$. By \eqcref{eq:1:\beweislabel}, the right-hand side of \eqcref{eq:2:\beweislabel} converges to $0$ and hence ${T(t')x_{F}\longrightarrow T(t)x_{F}}$ strongly as ${t'\longrightarrow t}$. This completes the proof. \end{proof} \begin{rem} In the proof of \Cref{app:continuity:thm:generalised-auto-continuity:sig:article-str-problem-raj-dahya}, weak continuity only played a role in obtaining the boundedness of $T$ on compact sets, as well as the well-definedness of the elements in $D$. In \cite[Theorem~9.3.1 and Theorem~10.2.1--3]{hillephillips2008} the classical continuity result is proved under weaker conditions, \viz weak measurability, provided the semigroups are almost separably valued. It would be interesting to know whether the above approach can be adapted to these weaker assumptions.
\end{rem} \subsection*{Acknowledgement} \noindent The author is grateful to Tanja Eisner for her feedback, to Konrad Zimmermann for his helpful comments on the results in the appendix, and to the referee for their constructive feedback. \defReferences{References} \bgroup \footnotesize \egroup \addresseshere \end{document}
\begin{document} \begin{abstract} Using a cap product, we construct an explicit Poincar\'e duality isomorphism between the blown-up intersection cohomology and the Borel-Moore intersection homology, for any commutative ring of coefficients and second-countable, oriented pseudomanifolds. \end{abstract} \maketitle \section*{Introduction} Poincar\'e duality of singular spaces is the ``raison d'\^etre'' (\cite[Section 8.2]{FriedmanBook}) of intersection homology. It has been proven by Goresky and MacPherson in their first paper on intersection homology (\cite{GM1}) for compact PL pseudomanifolds and rational coefficients, and extended to $\mathbb{Z}$ coefficients, under some hypothesis on the torsion part, by Goresky and Siegel in \cite{GS}. In \cite{FM}, Friedman and McClure obtain this isomorphism for a topological pseudomanifold, from a cap product with a fundamental class, for any field of coefficients; see also \cite{FriedmanBook} for a commutative ring of coefficients with restrictions on the torsion. Using the blown-up intersection cohomology with compact supports, we have established in \cite{CST2} a Poincar\'e duality for any commutative ring of coefficients, without hypothesis on the torsion part, for any oriented paracompact pseudomanifold. Moreover, we also set up in \cite{CST5} a Poincar\'e duality between the blown-up intersection cohomology and the Borel-Moore intersection homology of an oriented PL pseudomanifold $X$. This paper is the ``cha\^inon manquant'' (the missing link): it establishes an explicit Poincar\'e duality isomorphism between the blown-up intersection cohomology and the Borel-Moore intersection homology, obtained from a cap product with the fundamental class, for any commutative ring of coefficients and any second-countable, oriented pseudomanifold.
This allows the definition of an intersection product on the Borel-Moore intersection homology, induced from the Poincar\'e duality and a cup product, as in the case of manifolds, see \corref{cor:intersectionproduct} and \remref{rem:lastone}. Let us note that a Poincar\'e duality isomorphism cannot exist with an intersection cohomology defined as the homology of the dual complex of intersection chains, see \exemref{exam:pasdual}. In \secref{sec:back}, we recall basic background on pseudomanifolds and intersection homology. In particular, we present the complex of blown-up cochains, already introduced and studied in a series of papers \cite{CST6,CST7,CST4,CST1,CST2,CST5,CST3} (also called Thom-Whitney cochains in some works). \secref{sec:BM} contains the main properties of Borel-Moore intersection homology: the existence of a Mayer-Vietoris exact sequence in \thmref{thm:MV} and the recollection of some results established in \cite{CST5}. \secref{sec:Poincare} is devoted to the proof of the main result stated in \thmref{thm:dual}: the existence of an isomorphism between the blown-up intersection cohomology and the Borel-Moore intersection homology, given by the cap product with the fundamental class of a second-countable, oriented pseudomanifold, $$\mathcal D_{X}\colon {\mathscr H}^*_{\overline{p}}(X;R)\to {\mathfrak{H}}^{\infty,\overline{p}}_{n-*}(X;R),$$ for any commutative ring of coefficients. In \cite{CST5}, we prove it for PL pseudomanifolds with a sheaf presentation of intersection homology. Here the duality is realized for topological pseudomanifolds, by a map defined at the chain complex level by the cap product with a cycle representing the fundamental class. Notice also that the intersection homology of this work (\defref{def:lessimplexes}) is a general version, called tame homology in \cite{CST3} and non-GM in \cite{FriedmanBook}, which coincides with the original one for the perversities of \cite{GM1}.
Let us also observe that our definition of Borel-Moore intersection homology coincides with the one studied by G. Friedman in \cite{MR2276609} for perversities depending only on the codimension of strata. Homology and cohomology are considered with coefficients in a commutative ring, $R$. In general, we do not make them explicit in the proofs. For any topological space $X$, we denote by ${\mathtt c} X=X\times [0,1]/X\times\{0\}$ the cone on $X$ and by ${\mathring{\tc}} X=X\times [0,1[/X\times\{0\}$ the open cone on $X$. Elements of the cones ${\mathtt c} X$ and ${\mathring{\tc}} X$ are denoted $[x,t]$ and the apex is ${\mathtt v}=[-,0]$. \tableofcontents \section{Background}\label{sec:back} \subsection{Pseudomanifold} In \cite{GM1}, M. Goresky and R. MacPherson introduce intersection homology for the study of pseudomanifolds. Some basic properties of intersection homology, such as the existence of a Mayer-Vietoris sequence, do not require such a structure and exist for filtered spaces. \begin{definition} A \emph{filtered space of (formal) dimension $n$,} $X$, is a Hausdorff space endowed with a filtration by closed subsets, $$\emptyset=X_{-1}\subseteq X_0\subseteq X_1\subseteq\dots\subseteq X_n=X,$$ such that $X_n\backslash X_{n-1}\neq \emptyset$. The \emph{strata} of $X$ of dimension $i$ are the connected components $S$ of $X_{i}\backslash X_{i-1}$; we denote $\dim S=i$ and ${\rm codim\,} S=\dim X-\dim S$. The \emph{regular strata} are the strata of dimension $n$ and the \emph{singular set} is the subspace $\Sigma =X_{n-1}$. We denote by ${\mathcal S}_{X}$ (or ${\mathcal S}$ if there is no ambiguity) the set of non-empty strata. \end{definition} \begin{definition} An $n$-dimensional \emph{CS set} is a filtered space of dimension $n$, $X$, such that, for any $i\in\{0,\dots,n\}$, $X_i\backslash X_{i-1}$ is an $i$-dimensional topological manifold or the empty set.
Moreover, for each point $x \in X_i \backslash X_{i-1}$, $i\neq n$, there exist \begin{enumerate}[(i)] \item an open neighborhood $V$ of $x$ in $X$, endowed with the induced filtration, \item an open neighborhood $U$ of $x$ in $X_i\backslash X_{i-1}$, \item a compact filtered space $L$ of dimension $n-i-1$, whose cone ${\mathring{\tc}} L$ is endowed with the filtration $({\mathring{\tc}} L)_{i}={\mathring{\tc}} L_{i-1}$, \item a homeomorphism, $\varphi \colon U \times {\mathring{\tc}} L\to V$, such that \begin{enumerate}[(a)] \item $\varphi(u,{\mathtt v})=u$, for any $u\in U$, where ${\mathtt v}$ is the apex of the cone ${\mathring{\tc}} L$, \item $\varphi(U\times {\mathring{\tc}} L_{j})=V\cap X_{i+j+1}$, for all $j\in \{0,\dots,n-i-1\}$. \end{enumerate} \end{enumerate} The filtered space $L$ is called a \emph{link} of $x$. \end{definition} Except when recalling a previous result (see \propref{prop:supersuperbredon}), this work is only concerned with particular CS sets, the pseudomanifolds. \begin{definition} An $n$-dimensional \emph{pseudomanifold} is an $n$-dimensional CS set for which the link of a point $x\in X_i\backslash X_{i-1}$, $i\neq n$, is a compact pseudomanifold $L$ of dimension $n-i-1$. \end{definition} For a pseudomanifold, the formal dimension of the underlying filtered space coincides with the classical dimension of the manifold $X_{n}\backslash X_{n-1}$. In \cite{GM1}, the pseudomanifolds are assumed to have no strata of codimension~1. Here, we do not require this property. The class of pseudomanifolds is large enough to include (\cite{GM2}) complex algebraic or analytic varieties, real analytic varieties, Whitney and Thom-Mather stratified spaces, quotients of manifolds by compact Lie groups acting smoothly, Thom spaces of vector bundles over triangulable compact manifolds, suspensions of manifolds, etc. \begin{remark}\label{rem:plentyofdef} For the convenience of the reader, we first collect basic topological definitions.
A topological space $X$ is said to be \begin{enumerate} \item \emph{separable} if it contains a countable, dense subset; \item \emph{second-countable} if its topology has a countable basis; that is, there exists some countable collection $\mathcal U = \{U_{j} \mid j \in \mathbb{N}\}$ of open subsets such that any open subset of $X$ can be written as a union of elements of some subfamily of $\mathcal U$; \item \emph{hemicompact} if it is locally compact and there exists a countable sequence of relatively compact open subsets, $(U_{i})_{i\in \mathbb{N}}$, such that $\overline{U}_{i}\subset U_{i+1}$ and $X=\cup_{i}U_{i}$. \end{enumerate} To relate the hypotheses of some of the following results to those of previous works, we list some interactions between these notions. \begin{itemize} \item A second-countable space is separable and, for metric spaces, the two properties are equivalent (\cite[Theorem 16.9]{Wil}). \item A space is locally compact and second-countable if, and only if, it is metrizable and hemicompact, see \cite[Corollary of Proposition 16]{MR0173226}. \end{itemize} \end{remark} As second-countability is a hereditary property, any open subset of a second-countable pseudomanifold is again second-countable. Moreover, we also know that a second-countable pseudomanifold is paracompact, separable, metrizable and hemicompact. \subsection{Intersection homology} We consider intersection homology relatively to the general perversities defined in \cite{MacPherson90}. \begin{definition}\label{def:perversite} A \emph{perversity on a filtered space,} $X$, is a map, $\overline{p}\colon {\mathcal S}\to \mathbb{Z}$, defined on the set of strata of $X$ and taking the value~0 on the regular strata. Among them, let us mention the null perversity $\overline{0}$, constant with value~0, and the \emph{top perversity} defined by $\overline{t}(S)={\rm codim\,} S-2$ on singular strata. (This gives $\overline{t}(S)=-1$ for codimension~1 strata.)
For any perversity, $\overline{p}$, the perversity $D\overline{p}:=\overline{t}-\overline{p}$ is called the \emph{complementary perversity} of $\overline{p}$. The pair $(X,\overline{p})$ is called a \emph{perverse space}. For a pseudomanifold, we say \emph{perverse pseudomanifold.} \end{definition} \begin{example} Let $(X,\overline{p})$ be a perverse space of dimension $n$. \begin{itemize} \item An \emph{open perverse subspace $(U,\overline{p})$} is an open subset $U$ of $X$, endowed with the induced filtration and a perversity still denoted $\overline{p}$ and defined as follows: if $S\subset U$ is a stratum of $U$, such that $S\subset U\cap S'$ with $S'$ a stratum of $X$, then $\overline{p}(S)=\overline{p}(S')$. In the case of a perverse pseudomanifold, $(U,\overline{p})$ is one also. \item If $M$ is a connected topological manifold, the product $M\times X$ is a filtered space for the \emph{product filtration,} $\left(M \times X\right) _i = M \times X_{i}$. The perversity $\overline p$ induces a perversity on $M\times X$, still denoted $\overline p$ and defined by $\overline p(M \times S) = \overline p(S)$ for each stratum $S$ of $X$. \item If $X$ is compact, the open cone ${\mathring{\tc}} X $ is endowed with the \emph{conical filtration,} $\left({\mathring{\tc}} X\right) _i ={\mathring{\tc}} X_{i-1}$, $0\leq i\leq n+1$, where ${\mathring{\tc}} \,\emptyset=\{ {\mathtt v} \}$ is the apex of the cone. A perversity $\overline{p}$ on ${\mathring{\tc}} X$ induces a perversity on $X$, still denoted $\overline{p}$ and defined by $\overline{p}(S)=\overline{p}(S\times ]0,1[)$ for each stratum $S$ of $X$. \end{itemize} \end{example} \emph{For the rest of this section, we consider a perverse space $(X,\overline{p})$.} We now introduce a chain complex computing the intersection homology with coefficients in $R$, cf. \cite{CST3}.
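Before doing so, let us illustrate \defref{def:perversite} with a classical example (not needed in the sequel): the \emph{lower-middle} and \emph{upper-middle} perversities of \cite{GM1}, defined on singular strata by $$\overline{m}(S)=\left\lfloor \frac{{\rm codim\,} S-2}{2}\right\rfloor \quad\text{and}\quad \overline{n}(S)=\left\lceil \frac{{\rm codim\,} S-2}{2}\right\rceil.$$ Since $\overline{m}(S)+\overline{n}(S)={\rm codim\,} S-2=\overline{t}(S)$ on singular strata, these two perversities are complementary to each other, $D\overline{m}=\overline{n}$ and $D\overline{n}=\overline{m}$; similarly, $D\overline{0}=\overline{t}$ and $D\overline{t}=\overline{0}$.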
\begin{definition}\label{def:regularsimplex} A \emph{regular simplex} is a continuous map $\sigma\colon\Delta\to X$ whose domain is an Euclidean simplex decomposed as a join, $\Delta=\Delta_{0}\ast\Delta_{1}\ast\dots\ast\Delta_{n}$, such that $\sigma^{-1}X_{i} =\Delta_{0}\ast\Delta_{1}\ast\dots\ast\Delta_{i}$ for all~$i \in \{0, \dots, n\}$ and $\Delta_n \ne \emptyset$. Given an Euclidean regular simplex $\Delta = \Delta_0 * \dots *\Delta_n$, we denote by ${\mathfrak{d}}\Delta$ the regular part of the chain $\partial \Delta$. More precisely, we set ${\mathfrak{d}} \Delta =\partial (\Delta_0 * \dots * \Delta_{n-1})* \Delta_n$, if $\dim(\Delta_n) = 0 $ and ${\mathfrak{d}} \Delta = \partial \Delta$, if $\dim(\Delta_n)\geq 1$. For any regular simplex $\sigma \colon\Delta \to X$, we set ${\mathfrak{d}} \sigma=\sigma_* \circ {\mathfrak{d}}$. Notice that ${\mathfrak{d}}^2=0$. We denote by ${\mathfrak{C}}_{*}(X;R)$ the complex of linear combinations of regular simplices (called finite chains) with the differential~${\mathfrak{d}}$. \end{definition} \begin{definition}\label{def:lessimplexes} The \emph{perverse degree of} a regular simplex $\sigma\colon\Delta=\Delta_{0}\ast \dots\ast\Delta_{n} \to X$ is the $(n+1)$-tuple, $\|\sigma\|=(\|\sigma\|_0,\dots,\|\sigma\|_n)$, where $\|\sigma\|_{i}= \dim \sigma^{-1}X_{n-i}=\dim (\Delta_{0}\ast\dots\ast\Delta_{n-i})$, with the convention $\dim \emptyset=-\infty$. For each stratum $S$ of $X$, the \emph{perverse degree of $\sigma$ along $S$} is defined by $$\|\sigma\|_{S}=\left\{ \begin{array}{cl} -\infty,&\text{if } S\cap \sigma(\Delta)=\emptyset,\\ \|\sigma\|_{{\rm codim\,} S},&\text{otherwise.} \end{array}\right. $$ A \emph{regular simplex $\sigma$ is $\overline{p}$-allowable} if \begin{equation*} \|\sigma\|_{S}\leq \dim \Delta-{\rm codim\,} S+\overline{p}(S), \end{equation*} for any stratum $S$.
A finite chain $\xi$ is \emph{$\overline{p}$-allowable} if it is a linear combination of $\overline{p}$-allowable simplices and of \emph{$\overline{p}$-intersection} if $\xi$ and its boundary ${\mathfrak{d}} \xi$ are $\overline{p}$-allowable. We denote by ${\mathfrak{C}}_{*}^{\overline{p}}(X;R)$ the complex of $\overline{p}$-intersection chains and by ${\mathfrak{H}}_{*}^{\overline{p}}(X;R)$ its homology, called \emph{$\overline{p}$-intersection homology}. \end{definition} If $(U,\overline{p})$ is an open perverse subspace of $(X,\overline{p})$, we define the \emph{complex of relative $\overline{p}$-intersection chains} as the quotient ${\mathfrak{C}}_{*}^{\overline{p}}(X,U;R)= {\mathfrak{C}}_{*}^{\overline{p}}(X;R)/ {\mathfrak{C}}_{*}^{\overline{p}}(U;R)$. Its homology is denoted ${\mathfrak{H}}_{*}^{\overline{p}}(X,U;R)$. Finally, if $K\subset U$ is compact, we have ${\mathfrak{H}}_{*}^{\overline{p}}(X,X\backslash K;R)= {\mathfrak{H}}_{*}^{\overline{p}}(U,U\backslash K;R) $ by excision, cf. \cite[Corollary 4.5]{CST3}. \begin{remark} This homology is called tame intersection homology in \cite{CST3}. As it is the only one used in this work, for the sake of simplicity, we call it intersection homology. It coincides with the non-GM intersection homology of \cite{FriedmanBook} (see \cite[Theorem~B]{CST3}) and with intersection homology for the original perversities of \cite{GM1}, see \cite[Remark 3.9]{CST3}. The GM-perversities introduced by Goresky and MacPherson in \cite{GM1} depend only on the codimension of the strata and satisfy a growth condition, $\overline{p}(k)\leq \overline{p}(k+1)\leq \overline{p}(k)+1$, that implies the topological invariance of the intersection homology groups, if there are no strata of codimension~1. In \cite{CST3}, we prove that a certain topological invariance still holds within the framework of strata-dependent perversities.
For that, we consider the intrinsic space, $X^*$, associated to a pseudomanifold $X$ (introduced by King in \cite{MR800845}) and use pushforward and pullback perversities. In particular, if no singular stratum of $X$ becomes regular in $X^*$, we establish the invariance of intersection homology under a refinement of the stratification (\cite[Remark 6.14]{CST3}). \end{remark} \subsection{Blown-up intersection cohomology} Let $N_{*}(\Delta)$ and $N^*(\Delta)$ be the simplicial chains and cochains, with coefficients in $R$, of an Euclidean simplex $\Delta$. Given a face $F$ of $\Delta$, we write ${\mathbf 1}_{F}$ for the element of $N^*(\Delta)$ taking the value 1 on $F$ and 0 otherwise. We also denote by $(F,0)$ the same face viewed as a face of the cone ${\mathtt c}\Delta=[{\mathtt v}]\ast \Delta $ and by $(F,1)$ the face ${\mathtt c} F$ of ${\mathtt c} \Delta$. The apex is denoted $(\emptyset,1)={\mathtt c} \emptyset =[{\mathtt v}]$. If $\Delta=\Delta_{0}\ast\dots\ast\Delta_{n}$ is a regular Euclidean simplex, we set $${\widetilde{N}}^*(\Delta)=N^*({\mathtt c} \Delta_{0})\otimes\dots\otimes N^*({\mathtt c} \Delta_{n-1})\otimes N^*(\Delta_{n}).$$ A basis of ${\widetilde{N}}^*(\Delta)$ is made of the elements ${\mathbf 1}_{(F,\varepsilon)}={\mathbf 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\mathbf 1}_{(F_{n-1},\varepsilon_{n-1})}\otimes {\mathbf 1}_{F_{n}}$, where $\varepsilon_{i}\in\{0,1\}$ and $F_{i}$ is a face of $\Delta_{i}$ for $i\in\{0,\dots,n\}$ or the empty set with $\varepsilon_{i}=1$ if $i<n$. We set $|{\mathbf 1}_{(F,\varepsilon)}|_{>s}=\sum_{i>s}(\dim F_{i}+\varepsilon_{i})$, with $\varepsilon_{n}=0$. \begin{definition}\label{def:degrepervers} Let $\ell\in \{1,\ldots,n\}$.
The \emph{$\ell$-perverse degree} of ${\mathbf 1}_{(F,\varepsilon)}\in {\widetilde{N}}^*(\Delta)$ is $$ \|{\mathbf 1}_{(F,\varepsilon)}\|_{\ell}=\left\{ \begin{array}{ccl} -\infty&\text{if} & \varepsilon_{n-\ell}=1,\\ |{\mathbf 1}_{(F,\varepsilon)}|_{> n-\ell} &\text{if}& \varepsilon_{n-\ell}=0. \end{array}\right.$$ For a cochain $\omega = \sum_b\lambda_b \ {\mathbf 1}_{(F_b,\varepsilon_b) }\in{\widetilde{N}}^*(\Delta)$ with $\lambda_{b}\neq 0$ for all $b$, the \emph{$\ell$-perverse degree} is $$\|\omega \|_{\ell}=\max_{b}\|{\mathbf 1}_{(F_b,\varepsilon_b)}\|_{\ell}.$$ By convention, we set $\|0\|_{\ell}=-\infty$. \end{definition} Let $(X,\overline{p})$ be a perverse space and $\sigma\colon \Delta=\Delta_{0}\ast\dots\ast\Delta_{n}\to X$ a regular simplex. We set ${\widetilde{N}}^*_{\sigma}={\widetilde{N}}^*(\Delta)$. Given the inclusion of a face, $\delta_{\ell}\colon \Delta' \to\Delta$, we set $\partial_{\ell}\sigma=\sigma\circ\delta_{\ell}\colon \Delta'\to X$, with the induced filtration $\Delta'=\Delta'_{0}\ast\dots\ast\Delta'_{n}$. The \emph{blown-up complex} of $X$ is the cochain complex ${\widetilde{N}}^*(X;R)$ composed of the elements $\omega$ associating to each regular simplex $\sigma\colon \Delta_{0}\ast\dots\ast\Delta_{n}\to X$ an element $\omega_{\sigma}\in {\widetilde{N}}^*_{\sigma}$ such that $\delta_{\ell}^*(\omega_{\sigma})=\omega_{\partial_{\ell}\sigma}$, for any regular face operator $\delta_{\ell}\colon\Delta'\to\Delta$. The differential $d \omega$ is defined by $(d \omega)_{\sigma}=d(\omega_{\sigma})$. The \emph{perverse degree of $\omega$ along a singular stratum $S$} equals $$\|\omega\|_S=\sup\left\{ \|\omega_{\sigma}\|_{{\rm codim\,} S}\mid \sigma\colon \Delta\to X \; \text{regular such that } \sigma(\Delta)\cap S\neq\emptyset \right\}.$$ By setting $\|\omega\|_{S}=0$ for any regular stratum $S$, we get a map $\|\omega\|\colon {\mathcal S}\to \mathbb{N}$.
\begin{definition}\label{def:blowup} A \emph{cochain $\omega\in {\widetilde{N}}^*(X;R)$ is $\overline{p}$-allowable} if $\| \omega\|\leq \overline{p}$ and of \emph{$\overline{p}$-intersection} if $\omega$ and $d\omega$ are $\overline{p}$-allowable. We denote by ${\widetilde{N}}^*_{\overline{p}}(X;R)$ the complex of $\overline{p}$-intersection cochains and by ${\mathscr H}^*_{\overline{p}}(X;R)$ its homology, called \emph{blown-up $\overline{p}$-intersection cohomology} of $X$. \end{definition} Let us recall its main properties. First, the canonical projection ${\rm pr} \colon X\times\mathbb{R}\rightarrow X$ induces an isomorphism (\cite[Theorem D]{CST4}) \begin{equation}\label{pro} {\rm pr}^* \colon {\mathscr H}^*_{\overline{p}}(X;R)\rightarrow{\mathscr H}^*_{\overline{p}}(X\times \mathbb{R};R). \end{equation} Also, if $L$ is a compact pseudomanifold and $\overline p$ a perversity on the cone ${\mathring{\tc}} L$, inducing $\overline{p}$ on $L$, we have, for each degree $k$ (\cite[Theorem E]{CST4}): \begin{equation}\label{Cone} {\mathscr H}^k_{\overline{p}}({\mathring{\tc}} L;R) =\begin{cases} {\mathscr H}^k_{\overline{p}}(L;R), & \text{if }k\leq\overline p({\mathtt v}),\\ 0, & \text{if }k>\overline p({\mathtt v}), \end{cases} \end{equation} where ${\mathtt v}$ is the apex of the cone. If $k\leq\overline p({\mathtt v})$, the isomorphism ${\mathscr H}^k_{\overline{p}}({\mathring{\tc}} L;R)\cong {\mathscr H}^k_{\overline{p}}(L;R)$ is given by the inclusion $L \times ]0,1[ = {\mathring{\tc}} L \backslash\{{\mathtt v}\} \hookrightarrow {\mathring{\tc}} L$. \begin{definition}\label{def:Upetit} Let $\mathcal U$ be an open cover of $X$. A \emph{$\mathcal U$-small simplex} is a regular simplex, $\sigma\colon \Delta=\Delta_{0}\ast\dots\ast\Delta_{n}\to X$, such that there exists $U\in\mathcal U$ with ${\rm Im\,}\sigma\subset U$.
The \emph{blown-up complex of $\mathcal U$-small cochains of $X$ with coefficients in $R$,} written ${\widetilde{N}}^{*,\mathcal U}(X;R)$, is the cochain complex made up of elements $\omega$, associating to any $\mathcal U$-small simplex, $\sigma\colon\Delta= \Delta_{0}\ast\dots\ast\Delta_{n}\to X$, an element $\omega_{\sigma}\in \Hiru {\widetilde{N}}*\Delta$, so that $\delta_{\ell}^*(\omega_{\sigma})=\omega_{\partial_{\ell}\sigma}$, for any face operator, $\delta_{\ell}\colon \Delta'_{0}\ast\dots\ast\Delta'_{n}\to \Delta_{0}\ast\dots\ast\Delta_{n}$, with $\Delta'_{n}\neq\emptyset$. If $\overline{p}$ is a perversity on $X$, we denote by ${\widetilde{N}}^{*,\mathcal U}_{\overline{p}}(X;R)$ the complex of $\mathcal U$-small cochains verifying $ \|\omega\|\leq \overline{p}$ and $\|d \omega\|\leq\overline{p}$. \end{definition} \begin{proposition}{\cite[Corollary 9.7]{CST4}}\label{cor:Upetits} The restriction map is a quasi-isomor\-phism,\newline $\rho_{\mathcal U}\colon \lau {\widetilde{N}}{*}{\overline{p}}{X;R} \to \lau{\widetilde{N}}{*,\mathcal U}{\overline{p}}{X;R}$. \end{proposition} Finally, the blown-up intersection cohomology satisfies the Mayer-Vietoris property. \begin{proposition}{\cite[Theorem C]{CST4}}\label{thm:MVcourte} Let $(X,\overline{p})$ be a paracompact perverse space, endowed with an open cover $\mathcal U =\{W_{1},W_{2}\}$ and a subordinated partition of the unity, $(f_{1},f_{2})$. For $i=1,\,2$, we denote by $\mathcal U_{i}$ the cover of $W_{i}$ consisting of the open subsets $W_{1}\cap W_{2}$ and $f_{{i}}^{-1}(]1/2,1])$, and by $\mathcal U$ the cover of $X$, union of the covers $\mathcal U_{i}$.
Then, the canonical inclusions, $W_{i}\subset X$ and $W_{1}\cap W_{2}\subset W_{i}$, induce a short exact sequence, where $\varphi(\omega_{1},\omega_{2})=\omega_{1}-\omega_{2}$, $$ \xymatrix@C=4mm{ 0\ar[r]& {\widetilde{N}}^{*,\mathcal U}_{\overline{p}}(X;R)\ar[r] & {\widetilde{N}}^{*,\mathcal U_{1}}_{\overline{p}}(W_{1};R) \oplus {\widetilde{N}}^{*,\mathcal U_{2}}_{\overline{p}}(W_{2};R) \ar[r]^-{\varphi}& {\widetilde{N}}^{*}_{\overline{p}}(W_{1}\cap W_{2};R)\ar[r]& 0. }$$ \end{proposition} \section{Borel-Moore intersection homology}\label{sec:BM} In a filtered space $X$, locally finite chains are sums, perhaps infinite, $\xi=\sum_{j\in J}\lambda_{j}\sigma_{j}$, such that every point $x\in X$ has a neighborhood $U_{x}$ for which all but a finite number of the regular simplices $\sigma_{j}$ (see \defref{def:regularsimplex}) with support intersecting $U_{x}$ have a coefficient $\lambda_{j}$ equal to 0. \begin{definition}\label{def:BM} Let $(X,\overline{p})$ be a perverse space. We denote by ${\mathfrak{C}}^{\infty,\overline{p}}_{*}(X;R)$ the complex of locally finite chains of $\overline{p}$-intersection with the differential ${\mathfrak{d}}$. Its homology, ${\mathfrak{H}}^{\infty,\overline{p}}_{*}(X;R)$, is called \emph{the locally finite (or Borel-Moore) $\overline p$-intersection homology.} \end{definition} Recall a characterization of locally finite $\overline{p}$-intersec\-tion chains. \begin{proposition}{\cite[Proposition 3.4]{CST5}}\label{prop:BMprojectivelim} Let $(X,\overline{p})$ be a perverse space. Suppose that $X$ is locally compact, metrizable and separable. Then, the complex of locally finite $\overline{p}$-intersection chains is isomorphic to the inverse limit of complexes, $${\mathfrak{C}}^{\infty,\overline{p}}_{*}(X;R) \cong \varprojlim_{K\subset X}{\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R), $$ where the limit is taken over all compact subsets of $X$. 
\end{proposition} Since locally finite chains in an open subset of $X$ are not necessarily locally finite in $X$, the complex ${\mathfrak{C}}^{\infty,\overline{p}}_{*}(-;R)$ is not functorial for the inclusions of open subsets. To get around this defect, we introduce a contravariant chain complex as in \cite{Bre}. In the context of intersection homology, a similar approach was taken by Friedman (see \cite[Section 2.3.2]{MR2276609}), the sheafification of the resulting complex being nothing but Deligne's sheaf of \cite{GM2}. So we set $${\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U;R):= \varprojlim_{K\subset U} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R), $$ where $K$ runs over the family of compact subsets of $U$. An element $\alpha\in {\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U;R)$ is a family $\alpha=\langle\alpha_K\rangle_{K}$, indexed by the family of compact subsets of $U$, with $\alpha_K \in {\mathfrak{C}}_{*}^{\overline{p}}(X;R)$ and $\alpha_{K'} - \alpha_{K} \in {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash K;R)$ if $K \subset K'$. In particular, $\alpha=0$ if, and only if, $\alpha_K \in{\mathfrak{C}}_{*}^{\overline{p}}(X\backslash K;R)$ for every $K$. For the construction of the projective limit, an exhaustive family of compact subsets suffices. Therefore, if $X$ is hemicompact, we may use a countable increasing sequence of compact subsets, $(K_{i})_{i\in\mathbb{N}}$, and get $\alpha=\langle \alpha_{i}\rangle_{i}$, with $\alpha_{i}\in {\mathfrak{C}}_{*}^{\overline{p}}(X;R)$ and $\alpha_{i+1} - \alpha_{i} \in {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash K_{i};R)$. Given two open subsets $V \subset U \subset X$, we denote by \begin{equation}\label{equa:maps2} I^{X,\overline{p}}_{V,U}\colon \varprojlim_{K\subset U} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R) \to \varprojlim_{K\subset V} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R) \end{equation} the map induced by the identity.
So, the complex ${\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(-;R)$ defines a contravariant functor from the poset of open subsets of $X$. Moreover, it is an appropriate substitute for the study of locally finite $\overline p$-intersec\-tion homology, as the following result shows. \begin{proposition}\label{prop:BMprojectivelimU} Let $(X,\overline{p})$ be a locally compact, second-countable perverse space and $U \subset X$ an open subset. The natural restriction $I_{U}^{\overline{p}}\colon {\mathfrak{C}}^{\infty,\overline{p}}_{*}(U;R) \to {\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U;R)$ is a quasi-isomorphism. \end{proposition} \begin{proof} Let $(K_{i})_{i\in\mathbb{N}}$ be a countable increasing sequence of compact subsets of $U$, covering $U$ and cofinal in the family of compact subsets of $U$. The maps ${\mathfrak{C}}_{*}^{\overline{p}}(U,U\backslash K_{i+1})\to {\mathfrak{C}}_{*}^{\overline{p}}(U,U\backslash K_{i})$ and ${\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i+1})\to {\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i})$ being surjective, these two inverse systems satisfy the Mittag-Leffler condition. Thus the inclusions $ ({\mathfrak{C}}_{*}^{\overline{p}}(U,U\backslash K_{i}))_{i} \to ({\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i}))_{i} $ give a morphism of short exact sequences (\cite[Proposition 3.5.8]{MR1269324}): $$\xymatrix@C=4mm{ 0\ar[r]& \varprojlim^1_{i}{\mathfrak{H}}_{k+1}^{\overline{p}}(U,U\backslash K_{i}) \ar[d]\ar[r]& H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(U,U\backslash K_{i})) \ar[d]\ar[r]& \varprojlim_{i}{\mathfrak{H}}_{k}^{\overline{p}}(U,U\backslash K_{i}) \ar[d]\ar[r]& 0\\ 0\ar[r]& \varprojlim^1_{i}{\mathfrak{H}}_{k+1}^{\overline{p}}(X,X\backslash K_{i}) \ar[r]& H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i})) \ar[r]& \varprojlim_{i}{\mathfrak{H}}_{k}^{\overline{p}}(X,X\backslash K_{i}) \ar[r]& 0. }$$ The result is now a consequence of the excision property.
\end{proof} The existence of a Mayer-Vietoris exact sequence in this context can be deduced from a sheaf theoretic argument in the case of perversities depending only on the codimension of strata, as mentioned in \cite[Proof of Proposition 2.20]{MR2276609}. We provide below a direct proof for general perversities. \begin{theorem}\label{thm:MV} Let $(X,\overline{p})$ be a locally compact, second-countable perverse space and $\{U,V\}$ an open covering of $X$. Then we have a Mayer-Vietoris exact sequence, with coefficients in $R$, \begin{equation}\label{equa:MV} \xymatrix@C=4.9mm{ \dots \ar[r] & {\mathfrak{H}}^{\infty,\overline{p}}_{k}(X) \ar[r] & {\mathfrak{H}}^{\infty,\overline{p}}_{k}(V) \oplus {\mathfrak{H}}^{\infty,\overline{p}}_{k}(U) \ar[r] & {\mathfrak{H}}^{\infty,\overline{p}}_{k}(U\cap V) \ar[r] & {\mathfrak{H}}^{\infty,\overline{p}}_{k-1}(X) \ar[r] & \dots } \end{equation} \end{theorem} \begin{proof} As $U$ and $V$ are hemicompact, we choose sequences $(U_{i})_{i\in\mathbb{N}}$ and $(V_{i})_{i\in\mathbb{N}}$ of relatively compact open subsets of $U$ and $V$, respectively, such that $\overline{U}_{i}\subset U_{i+1}$, $\cup_{i\in\mathbb{N}}U_{i}=U$ and $\overline{V}_{i}\subset V_{i+1}$, $\cup_{i\in\mathbb{N}}V_{i}=V$. Let us notice that $(\overline{U}_{i}\cup \overline{V}_{i})_{i\in\mathbb{N}}$ and $(\overline{U}_{i}\cap \overline{V}_{i})_{i\in\mathbb{N}}$ are sequences of compact subsets such that $\overline{U}_{i}\cup \overline{V}_{i}\subset \overline{U}_{i+1}\cup \overline{V}_{i+1}$ and $\overline{U}_{i}\cap \overline{V}_{i}\subset \overline{U}_{i+1}\cap \overline{V}_{i+1}$ which are exhaustive for $U\cup V$ and $U\cap V$ respectively. As observed in the proof of \propref{prop:BMprojectivelimU}, the sequences $({\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i}))_{i\in\mathbb{N}}$ satisfy the Mittag-Leffler property, for $K_{i}=\overline{U}_{i}$, $\overline{V}_{i}$, $\overline{U}_{i}\cap \overline{V}_{i}$ or $\overline{U}_{i}\cup \overline{V}_{i}$. 
Therefore, the short exact sequences $$0\to {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash (\overline{U}_{i}\cup \overline{V}_{i})) \to {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{U}_{i})\oplus {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{V}_{i}) \to {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{U}_{i})+ {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{V}_{i}) \to 0 $$ induce the short exact sequence \begin{equation}\label{MV2} \def\scriptstyle{\scriptstyle} \xymatrix@C=4mm{ 0\ar[r]& \varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash (\overline{U}_{i}\cup \overline{V}_{i})) \ar[r]& \varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{U}_{i})\oplus \varprojlim_{i} {\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{V}_{i}) \ar[r]& \varprojlim_{i} {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i}) \ar[r]& 0} \end{equation} with $ {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i}) ={\mathfrak{C}}_{*}^{\overline{p}}(X)/({\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{U}_{i})+ {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{V}_{i}))$. The long exact sequence associated to \eqref{MV2} gives \eqref{equa:MV}. Let us check this. $\bullet$ First, by \propref{prop:BMprojectivelimU}, we have ${\mathfrak{H}}^{\infty,\overline{p}}_{k}(U\cup V) \cong H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash (\overline{U}_{i}\cup \overline{V}_{i})))$, ${\mathfrak{H}}^{\infty,\overline{p}}_{k}(U) \cong H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{U}_{i}))$ and ${\mathfrak{H}}^{\infty,\overline{p}}_{k}( V) \cong H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{V}_{i}))$.
$\bullet$ As for the \emph{third term} of the sequence (\ref{MV2}), we proved in \cite[Proposition 4.1]{CST3} that the identity map on $X$ induces a quasi-isomorphism, ${\mathfrak{C}}^{\overline{p}}_{*}(X\backslash \overline{U}_{i}) + {\mathfrak{C}}^{\overline{p}}_{*}(X\backslash \overline{V}_{i}) \to {\mathfrak{C}}^{\overline{p}}_{*}(X\backslash (\overline{U}_{i}\cap \overline{V}_{i}))$. Therefore, it induces a quasi-isomorphism $$ {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i}) \to {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash (\overline{U}_{i}\cap \overline{V}_{i})). $$ With the Mittag-Leffler property and \propref{prop:BMprojectivelimU}, the identity map also gives a quasi-isomorphism \begin{equation}\label{equa:MV3} \psi\colon \varprojlim_{i} {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i}) \to \varprojlim_{i} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash (\overline{U}_{i}\cap \overline{V}_{i})) = {\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U\cap V). \end{equation} \end{proof} The following properties have been proved in \cite{CST5}. \begin{proposition}{\cite[Proposition 3.5]{CST5}}\label{prop:foisR} Let $(L,\overline{p})$ be a compact perverse space. Then we have $${\mathfrak{H}}^{\infty,\overline{p}}_{k}(\mathbb{R}^m\times L;R) = {\mathfrak{H}}_{k-m}^{\overline{p}}(L;R).$$ \end{proposition} \begin{proposition}{\cite[Proposition 3.7]{CST5}}\label{prop:conefoisR} Let $L$ be a compact space and $\overline{p}$ be a perversity on the cone ${\mathring{\tc}} L$ of apex ${\mathtt v}$. Then we have $${\mathfrak{H}}^{\infty,\overline{p}}_{k}(\mathbb{R}^m\times {\mathring{\tc}} L;R)=\left\{ \begin{array}{ccl} 0&\text{if}&k\leq m+D\overline{p}({\mathtt v})+1,\\ {\mathfrak{H}}^{\overline{p}}_{k-m-1}(L;R) &\text{if}&k\geq m+ D\overline{p}({\mathtt v})+2. \end{array}\right.
$$ \end{proposition} \section{Poincar\'e duality}\label{sec:Poincare} \subsection{Fundamental class and cap product}\label{subsec:cap} Let $(X,\overline{p})$ be a perverse pseudomanifold of dimension $n$. Recall from \cite{GM1} that an $R$-orientation of $X$ is an $R$-orientation of the manifold $X^n:=X\backslash X_{n-1}$. For any $x\in X^n$, we denote by ${\mathtt{o}}_{x}\in H_{n}(X^n,X^n\backslash\{x\};R)={\mathfrak{H}}_{n}^{\overline{0}}(X,X\backslash\{x\};R)$ the associated local orientation. We know (see \cite{FM} or \cite[Theorem 8.1.18]{FriedmanBook}) that, for any compact $K\subset X$, there exists a unique element $\Gamma^{K}_{X}\in {\mathfrak{H}}_{n}^{\overline{0}}(X,X\backslash K;R)$ whose restriction equals ${\mathtt{o}}_{x}$ for any $x\in K$. These classes give a Borel-Moore homology class, called \emph{the fundamental class of $X$,} $$\Gamma_{X}=\langle \Gamma^{K}_{X}\rangle_{K}\in {\mathfrak{H}}^{\infty,\overline{0}}_{n}(X;R).$$ The fundamental classes are natural for the injections between open subsets of $X$. Given two open subsets $V \subset U \subset X$, the map induced in Borel-Moore homology by the identity \begin{equation}\label{equa:restr} I^{X,\overline{p}}_{V,U}\colon \varprojlim_{K\subset U} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R) \to \varprojlim_{K\subset V} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R) \end{equation} sends $\Gamma_{U}$ to $\Gamma_{V}$, see \cite[Theorem 8.1.18]{FriedmanBook}. Suppose that $X$ is equipped with two perversities, $\overline{p}$ and $\overline{q}$.
In \cite[Proposition 4.2]{CST4}, we prove the existence of a map \begin{equation}\label{equa:cupprduitespacefiltre} -\smile -\colon{\widetilde{N}}^k_{\overline{p}}(X;R)\otimes {\widetilde{N}}^{\ell}_{\overline{q}}(X;R)\to {\widetilde{N}}^{k+\ell}_{\overline{p}+\overline{q}}(X;R), \end{equation} inducing an associative and commutative graded product, called \emph{intersection cup product,} \begin{equation}\label{equa:cupprduitTWcohomologie} -\smile -\colon {\mathscr H}^k_{\overline{p}}(X;R)\otimes {\mathscr H}^{\ell}_{\overline{q}}(X;R)\to {\mathscr H}^{k+\ell}_{\overline{p}+\overline{q}}(X;R). \end{equation} Let us also mention, from \cite[Propositions 6.6 and 6.7]{CST4}, the existence of \emph{cap products,} \begin{equation}\label{equa:caphomology} -\frown - \colon \widetilde{N}_{\overline{p}}^{i}(X;R)\otimes {{\mathfrak{C}}}_{j}^{\overline{q}}(X;R) \to {{\mathfrak{C}}}_{j-i}^{\overline{p}+\overline{q}}(X;R) \end{equation} such that $ (\eta\smile \omega)\frown \xi=\eta\frown(\omega\frown\xi)$ and ${\mathfrak{d}}(\omega\frown \xi)=d\omega\frown\xi+(-1)^{|\omega|}\omega\frown {\mathfrak{d}} \xi $. Thus, this cap product induces a \emph{cap product} in homology, \begin{equation}\label{equa:caphomologyhomology} - \frown - \colon {\mathscr H}_{\overline{p}}^{i}(X;R)\otimes {{\mathfrak{H}}}_{j}^{\overline{q}}(X;R) \to {{\mathfrak{H}}}_{j-i}^{\overline{p}+\overline{q}}(X;R). \end{equation} The map \eqref{equa:caphomology} can be extended to a map \begin{equation}\label{equa:capBM} -\frown - \colon \widetilde{N}_{\overline{p}}^{i}(X;R)\otimes {{\mathfrak{C}}}_{j}^{\infty,\overline{q}}(X;R) \to {{\mathfrak{C}}}_{j-i}^{\infty,\overline{p}+\overline{q}}(X;R) \end{equation} as follows:\\ given $\alpha\in {\widetilde{N}}^i_{\overline{p}}(X;R)$ and $\eta=\langle \eta_{K}\rangle_{K}\in {\mathfrak{C}}^{\infty,\overline{q}}_{j}(X;R)$, we set $\alpha\frown \eta=\langle \alpha\frown \eta_{K}\rangle_{K}\in {\mathfrak{C}}^{\infty,\overline{p}+\overline{q}}_{j-i}(X;R)$.
This definition makes sense since the cap product \eqref{equa:caphomology} is natural. Moreover, from the compatibility with the differentials, we get an induced map, \begin{equation}\label{equa:caphomologyhomologyBM} - \frown - \colon {\mathscr H}_{\overline{p}}^{i}(X;R)\otimes {{\mathfrak{H}}}_{j}^{\infty,\overline{q}}(X;R) \to {{\mathfrak{H}}}_{j-i}^{\infty,\overline{p}+\overline{q}}(X;R). \end{equation} As $\Gamma_{X}\in {{\mathfrak{H}}}_{n}^{\infty,\overline{0}}(X;R)$, the cap product with the fundamental class gives a map, \begin{equation}\label{equa:dualmap} \mathcal D_{X}:=-\frown \Gamma_{X}\colon {\mathscr H}^*_{\overline{p}}(X;R)\to {\mathfrak{H}}^{\infty,\overline{p}}_{n-*}(X;R), \end{equation} which is the Poincar\'e duality map of the next theorem. Let us emphasize that this map exists at the level of chain complexes. In this paradigm, Poincar\'e duality comes from a cap product with the fundamental class, $\Gamma_{X}$. As the latter is a Borel-Moore homology class, we need to adapt \eqref{equa:caphomologyhomology}. In fact, there are two ways to proceed. \begin{itemize} \item We may keep the blown-up cohomology, for which the cap product with $\Gamma_{X}$ gives a Borel-Moore homology class as in \eqref{equa:dualmap}. This is the subject of the present work, leading to \thmref{thm:dual} below. \item We may also work with blown-up cohomology with compact supports, for which the cap product with $\Gamma_{X}$ gives a (finite) homology class.
That is, we extend \eqref{equa:caphomology} to \begin{equation}\label{equa:capcompact} -\frown - \colon \widetilde{N}_{\overline{p},c}^{i}(X;R)\otimes {{\mathfrak{C}}}_{j}^{\infty,\overline{q}}(X;R) \to {{\mathfrak{C}}}_{j-i}^{\overline{p}+\overline{q}}(X;R). \end{equation} This approach was used in \cite{CST2} and led to the Poincar\'e duality (\cite[Theorem B]{CST2}) $$-\frown \Gamma_{X}\colon {\mathscr H}^{i}_{\overline{p},c}(X;R)\xrightarrow{\cong} {\mathfrak{H}}_{n-i}^{\overline{p}}(X;R).$$ \end{itemize} \subsection{Main theorem} We prove the existence of an isomorphism between the Borel-Moore $\overline{p}$-intersection homology and the blown-up $\overline{p}$-intersection cohomology. \begin{theorem}\label{thm:dual} Let $(X,\overline p)$ be an $n$-dimensional, second-countable and oriented perverse pseudomanifold. The cap product with the fundamental class induces a Poincar\'e duality isomorphism $$\mathcal D_{X}\colon {\mathscr H}^*_{\overline{p}}(X;R) \xrightarrow{\cong} {\mathfrak{H}}^{\infty,\overline{p}}_{n-*}(X;R).$$ \end{theorem} The proof uses the following result. \begin{proposition}{\cite[Proposition 13.2]{CST5}}\label{prop:supersuperbredon} Let $\mathcal F_{X}$ be the category whose objects are (stratified homeomorphic to) open subsets of a given paracompact and separable CS set $X$ and whose morphisms are stratified homeomorphisms and inclusions. Let $\mathcal Ab_{*}$ be the category of graded abelian groups. Let $F^{*},\,G^{*}\colon \mathcal F_{X}\to \mathcal Ab_{*}$ be two functors and $\Phi\colon F^{*}\to G^{*}$ a natural transformation satisfying the conditions listed below. \begin{enumerate}[(i)] \item The functors $F^{*}$ and $G^{*}$ admit Mayer-Vietoris exact sequences and the natural transformation $\Phi$ induces a commutative diagram between these sequences.
\item If $\{U_{\alpha}\}$ is a disjoint collection of open subsets of $X$ and $\Phi\colon F^{*}(U_{\alpha})\to G^{*}(U_{\alpha})$ is an isomorphism for each $\alpha$, then $\Phi\colon F^{*}(\bigsqcup_{\alpha}U_{\alpha})\to G^{*}(\bigsqcup_{\alpha}U_{\alpha})$ is an isomorphism. \item If $L$ is a compact filtered space such that $X$ has an open subset stratified homeomorphic to $\mathbb{R}^i\times {\mathring{\tc}} L$ and, if $\Phi\colon F^{*}(\mathbb{R}^i\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\}))\to G^{*}(\mathbb{R}^i\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\}))$ is an isomorphism, then so is $\Phi\colon F^{*}(\mathbb{R}^i\times {\mathring{\tc}} L)\to G^{*}(\mathbb{R}^i\times {\mathring{\tc}} L)$. (Here, ${\mathtt v}$ is the apex of the cone ${\mathring{\tc}} L$.) \item If $U$ is an open subset of $X$ contained within a single stratum and homeomorphic to a Euclidean space, then $\Phi\colon F^{*}(U)\to G^{*}(U)$ is an isomorphism. \end{enumerate} Then $\Phi\colon F^{*}(X)\to G^{*}(X)$ is an isomorphism. \end{proposition} \begin{proof}[Proof of \thmref{thm:dual}] As any open subset $U\subset X$ is an oriented pseudomanifold, we may consider the associated homomorphism defined in (\ref{equa:dualmap}), $\dos {\mathcal D}{U}\colon {\mathscr H}^{k}_{\overline{p}}(U) \to {\mathfrak{H}}^{\infty,X,\overline{p}}_{n-k}(U)$, where we use the identification ${\mathfrak{H}}^{\infty,\overline{p}}_{n-k}(U) \xrightarrow{\cong} {\mathfrak{H}}^{\infty,X,\overline{p}}_{n-k}(U)$ given by \propref{prop:BMprojectivelimU}. Let $V\subset U\subset X$ be two open subsets of $X$, endowed with the induced structures of pseudomanifold of $X$.
The canonical inclusion (see \eqref{equa:maps2}) and the cap product with the fundamental class give a commutative diagram, \begin{equation}\label{conm} \xymatrix{ {\mathscr H}^k_{\overline{p}}(U) \ar[d]_{\mathcal D_{U}} \ar[r]^-{(I^{\overline{p}}_{V,U})^*} & {\mathscr H}^k_{\overline{p}}(V) \ar[d]^{\mathcal D_{V}}\\ {\mathfrak{H}}_{n-k}^{\infty,X,\overline{p}}(U) \ar[r]^-{(I^{X,\overline{p}}_{V,U})^*} & {\mathfrak{H}}_{n-k}^{\infty,X,\overline{p}}(V). } \end{equation} We apply \propref{prop:supersuperbredon} to the natural transformation $\mathcal D_{U}\colon {\mathscr H}_{\overline{p}}^k(U) \to {\mathfrak{H}}_{n-k}^{\infty,X,\overline{p}}(U)$. The proof reduces to the verification of its hypotheses. $\bullet$ First, let us notice that conditions (ii) and (iv) are immediate. $\bullet$ \emph{Property} (i). Let $\mathcal U = \{W_1,W_2\}$ be an open covering of $X$. Mayer-Vietoris sequences are constructed in \propref{thm:MVcourte} and \thmref{thm:MV}. We build a morphism between them with the following diagram. In the first row, we keep the notation of \propref{thm:MVcourte}. As in the proof of \thmref{thm:MV}, we choose sequences $(U_{i})_{i\in\mathbb{N}}$ and $(V_{i})_{i\in\mathbb{N}}$ of relatively compact open subsets of $W_{1}$ and $W_{2}$, respectively, such that $\overline{U}_{i}\subset U_{i+1}$, $\cup_{i\in\mathbb{N}}U_{i}=W_{1}$ and $\overline{V}_{i}\subset V_{i+1}$, $\cup_{i\in\mathbb{N}}V_{i}=W_{2}$. The last row corresponds to \eqref{MV2}.
$$ \def\scriptstyle{\scriptstyle} \xymatrix@C=3.6mm{ 0\ar[r]& {\widetilde{N}}^{*,\mathcal U}_{\overline{p}}(X) \ar[r] & {\widetilde{N}}^{*,\mathcal U_{1}}_{\overline{p}}(W_{1}) \oplus {\widetilde{N}}^{*,\mathcal U_{2}}_{\overline{p}}(W_{2}) \ar[r] \ar@{}[dl]+<10pt>^{\fbox{\tiny{I}}}& {\widetilde{N}}^*_{\overline{p}}(W_{1}\cap W_{2}) \ar[r] \ar@{}[dl]+<10pt>^{\fbox{\tiny{II}}}& 0\\ & {\widetilde{N}}^*_{\overline{p}}(X) \ar[r] \ar[u]^{\rho} \ar[d]_{\frown \gamma_X}& {\widetilde{N}}^*_{\overline{p}}(W_{1}) \oplus {\widetilde{N}}^*_{\overline{p}}(W_{2}) \ar[r] \ar[u]_{\rho} \ar@{}[dl]+<10pt>^{\fbox{\tiny{III}}} \ar[d]^{\frown \gamma_{W_1} \oplus \frown \gamma_{W_2}}& {\widetilde{N}}^*_{\overline{p}}(W_{1}\cap W_{2}) \ar@{=}[u] \ar@{}[dl]+<10pt>^{\fbox{\tiny{IV}}} \ar[d]^{\frown \gamma_{W_1 \cap W_2}} &&\\ &{\mathfrak{C}}^{\infty,\overline{p}}_{n-*}(X) \ar[r] \ar[d]_{I^{\overline{p}}_{X}}& {\mathfrak{C}}^{\infty,\overline{p}}_{n-*}(W_{1}) \oplus {\mathfrak{C}}^{\infty,\overline{p}}_{n-*}(W_{2}) \ar[r] \ar@{}[dl]+<10pt>^{\fbox{\tiny{V}}} \ar[d]^{I^{\overline{p}}_{W_{1}}\oplus {I^{\overline{p}}_{W_{2}}}}& {\mathfrak{C}}^{\infty,\overline{p}}_{n-*}(W_{1}\cap W_{2}) \ar@{}[dl]+<10pt>^{\fbox{\tiny{VI}}} \ar[d]^{I^{\overline{p}}_{W_{1}\cap W_{2}}} &&\\ &{\mathfrak{C}}^{\infty,X,\overline{p}}_{n-*}(X) \ar[r] & {\mathfrak{C}}^{\infty,X,\overline{p}}_{n-*}(W_{1}) \oplus {\mathfrak{C}}^{\infty,X,\overline{p}}_{n-*}(W_{2}) \ar[r] \ar@{}[dl]+<10pt>^{\fbox{\tiny{VII}}}& {\mathfrak{C}}^{\infty,X,\overline{p}}_{n-*}(W_{1}\cap W_{2}) \ar@{}[dl]+<10pt>^{\fbox{\tiny{VIII}}}&&\\ 0\ar[r]& \varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash (\overline{U}_{i}\cup \overline{V}_{i})) \ar@{=}[u] \ar[r]& \varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{U}_{i})\oplus \varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{V}_{i}) \ar@{=}[u] \ar[r]& \varprojlim_{i} {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i})
\ar[u]_{\psi} \ar[r]&0. } $$ The maps $\rho$ denote restriction and the cycles $\gamma_{Y}\in {\mathfrak{C}}^{\infty,\overline{0}}_{n}(Y)$ represent the fundamental class of $Y$, for $Y=X,\,W_{1},\,W_{2}, W_{1}\cap W_{2}$. First, we prove that the above diagram commutes. The square VII is clearly commutative. The vertical maps of squares I, II, V and VI are induced by restrictions. Thus these squares are commutative. By naturality of the fundamental classes, we may choose for $\gamma_{W_1}$ the restriction of $\gamma_X$ and similarly for $\gamma_{W_2}$ and $\gamma_{W_1\cap W_2}$. So, diagrams III and IV commute. The diagram VIII is commutative by construction of the map $\psi$, see \eqref{equa:MV3}. Let us observe that the vertical maps of the squares I, II, V, VI, VII and VIII are quasi-isomorphisms. This is a consequence of \cite[Theorem B]{CST4}, \propref{prop:BMprojectivelimU} and \eqref{equa:MV3} in the proof of \thmref{thm:MV}, respectively. So, we get the following commutative diagram, $$ \def\scriptstyle{\scriptstyle} \xymatrix@C=6mm{ \dots\ar[r]& {\mathscr H}^k_{\overline{p}}(X) \ar[r] \ar[d]^{\mathcal D_X}& {\mathscr H}^k_{\overline{p}}(W_{1}) \oplus {\mathscr H}^k_{\overline{p}}(W_{2}) \ar[r] \ar[d]^{\mathcal D_{W_1} \oplus \mathcal D_{W_2}} & {\mathscr H}^k_{\overline{p}}(W_{1}\cap W_{2}) \ar[r] \ar[d]^{\mathcal D_{W_1 \cap W_2}} & {\mathscr H}^{k+1}_{\overline{p}}(X) \ar[r] \ar[d]^{\mathcal D_X} & \dots \\ \dots \ar[r] & {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(X) \ar[r] & {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(W_{1})\oplus {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(W_{2}) \ar[r] & {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(W_{1}\cap W_{2}) \ar[r] & {\mathfrak{H}}_{n-k-1}^{\infty,\overline{p}}(X) \ar[r]& \dots. } $$ \emph{$\bullet$ Property} (iii). We apply \eqref{pro}, \eqref{Cone}, \propref{prop:foisR} and \propref{prop:conefoisR}.
First, we have the isomorphism $ {\mathscr H}^k_{\overline{p}}(\mathbb{R}^i\times {\mathring{\tc}} L) = 0 = {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(\mathbb{R}^i\times {\mathring{\tc}} L)$ for $k > \overline p({\mathtt v})$, or equivalently $n-k < i + D \overline p({\mathtt v}) +2$. (Observe that $D\overline{p}({\mathtt v})=n-i-2-\overline{p}({\mathtt v})$, so that $i+D\overline{p}({\mathtt v})+2=n-\overline{p}({\mathtt v})$.) Next, let $k \leq \overline p({\mathtt v})$. The following commutative diagram comes from \eqref{conm}. $$ \xymatrix{ {\mathscr H}^k_{\overline{p}}(\mathbb{R}^i\times {\mathring{\tc}} L) \ar[r] \ar[d]_{\mathcal D_{\mathbb{R}^i \times {\mathring{\tc}} L }}& {\mathscr H}^k_{\overline{p}}(\mathbb{R}^i \times L \times ]0,1[) \ar[d]^{\mathcal D_{\mathbb{R}^i \times L \times ]0,1[ }} \\ {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(\mathbb{R}^i \times {\mathring{\tc}} L) \ar[r] & {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(\mathbb{R}^i \times L \times ]0,1[). } $$ The right vertical map is an isomorphism by hypothesis. Since the horizontal maps are also isomorphisms, we deduce that $\mathcal D_{\mathbb{R}^i \times {\mathring{\tc}} L }$ is an isomorphism. This proves Property (iii). \end{proof} \begin{corollary}\label{cor:intersectionproduct} Let $(X,\overline p)$ be an $n$-dimensional, second-countable and oriented perverse pseudomanifold. The Borel-Moore intersection homology can be endowed with an intersection product, induced by Poincar\'e duality and the cup product.
\end{corollary} \begin{proof} The following commutative diagram, whose vertical maps are isomorphisms, defines the intersection product $\pitchfork$ from the cup product, \begin{equation}\label{equa:fork} \xymatrix{ {\mathscr H}^k_{\overline{p}}(X;R)\otimes {\mathscr H}^{\ell}_{\overline{q}}(X;R) \ar[r]^-{-\smile-}\ar[d]^{\cong}\ar[d]_{\mathcal D_X\otimes \mathcal D_X}& {\mathscr H}^{k+\ell}_{\overline{p}+\overline{q}}(X;R) \ar[d]^{\mathcal D_X}\ar[d]_{\cong} \\ {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(X;R)\otimes {\mathfrak{H}}_{n-\ell}^{\infty,\overline{q}}(X;R) \ar[r]^-{-\pitchfork-}& {\mathfrak{H}}_{n-k-\ell}^{\infty,\overline{p}+\overline{q}}(X;R). } \end{equation} \end{proof} \begin{remark} If $\overline{p}\colon \mathbb{N}\to \mathbb{Z}$ is a loose perversity as in \cite{MR2276609}, we have an inclusion of complexes $\iota\colon {\mathfrak{C}}_{*}^{\infty,\overline{p}}(X;R) \hookrightarrow I^{\overline{p}} {\mathfrak{C}}_{*}^{\infty}(X;R)$, where $I^{\overline{p}} {\mathfrak{C}}_{*}^{\infty}(X;R)$ denotes the chain complex studied by Friedman in \cite{MR2276609}. With the technique of the proof of \thmref{thm:dual}, using \propref{prop:supersuperbredon}, we also deduce that $\iota$ is a quasi-isomorphism. For the complex $I^{\overline{p}} {\mathfrak{C}}_{*}^{\infty}(X;R)$, the existence of a Mayer-Vietoris exact sequence follows from the fact that its homology arises from sheaf theory, and the computations involving a cone are done in \cite[Propositions 2.18 and 2.20]{MR2276609}. Thus, the blown-up intersection cohomology is Poincar\'e dual to the Borel-Moore intersection homology defined in \cite{MR2276609}, for any commutative ring of coefficients. \end{remark} \begin{remark}\label{rem:GJE} An intersection cohomology, denoted ${\mathfrak{H}}_{\overline{p}}^{*}(X;R)$, can also be defined from the linear dual, ${\mathfrak{C}}_{\overline{p}}^{*}(X;R)=\hom({\mathfrak{C}}^{\overline{p}}_{*}(X;R),R)$.
This is the point of view adopted in \cite{FriedmanBook}, \cite{FR1}, \cite{FM}. If $(X,\overline{p})$ is a locally $(\overline{p},R)$-torsion free perverse space, this cohomology is isomorphic to the blown-up cohomology with the complementary perversity, ${\mathfrak{H}}_{\overline{p}}^{*}(X;R)\cong {\mathscr H}^*_{D\overline{p}}(X;R)$, see \cite[Theorem F]{CST4}. (We refer the reader to \cite{GM1} for the condition on the torsion, noting that it is always satisfied if $R$ is a field.) Therefore, in this case, there is a Poincar\'e duality, ${\mathfrak{H}}_{\overline{p}}^{k}(X;R)\cong {\mathfrak{H}}_{n-k}^{\infty,D\overline{p}}(X;R)$, induced by a cap product. But, in contrast to the blown-up cohomology situation, such an isomorphism does not exist in general, as the following example shows. \end{remark} \begin{example}\label{exam:pasdual} Let $X=\Sigma\mathbb{R} P^3\backslash\{*\}$ be the complement of a regular point in the suspension of the real projective space $\mathbb{R} P^3$, together with the constant perversity with value 1, denoted $\overline{1}$. (Observe that $\overline{1}=D\overline{1}$.) Direct calculations from the Mayer-Vietoris exact sequences show that the only nonzero values of the homology and cohomologies are \begin{enumerate}[(i)] \item ${\mathscr H}^0_{\overline{1}}(X;\mathbb{Z})= \mathbb{Z}$ and ${\mathscr H}^3_{\overline{1}}(X;\mathbb{Z})= \mathbb{Z}_{2}$, \item ${\mathfrak{H}}_{1}^{\infty,\overline{1}}(X;\mathbb{Z})=\mathbb{Z}_{2}$ and ${\mathfrak{H}}_{4}^{\infty,\overline{1}}(X;\mathbb{Z})=\mathbb{Z}$, \item ${\mathfrak{H}}^{0}_{\overline{1}}(X;\mathbb{Z})=\mathbb{Z}$ and ${\mathfrak{H}}^{1}_{\overline{1}}(X;\mathbb{Z})=\mathbb{Z}_{2}$. \end{enumerate} We notice that $ {\mathscr H}^k_{\overline{1}}(X;R) \cong {\mathfrak{H}}^{\infty,\overline{1}}_{4-k}(X;R)$, as stated in \thmref{thm:dual}, but ${\mathfrak{H}}^{1}_{\overline{1}}(X;\mathbb{Z})=\mathbb{Z}_{2}\not\cong {\mathfrak{H}}_{3}^{\infty,\overline{1}}(X;\mathbb{Z})=0$.
\end{example} \begin{remark}\label{rem:lastone} In the case of a compact oriented pseudomanifold, a diagram analogous to \eqref{equa:fork} has already been introduced in \cite[Section~4]{CST7} for the definition of a product in intersection homology, for any ring of coefficients. For a PL pseudomanifold, a product of geometric cycles is defined by Goresky and MacPherson (\cite{GM1}) for a simplicial intersection homology. The relationship between \eqref{equa:fork} and the geometric product can be made precise as follows. (For the sake of simplicity, we consider coefficients in a field.) \begin{itemize} \item From a diagram similar to \eqref{equa:fork}, Friedman and McClure define an intersection product on the intersection homology of a PL pseudomanifold, via a duality isomorphism (\cite{FM}) with the intersection cohomology ${\mathfrak{H}}_{\overline{p}}^{*}(X)$ recalled in \remref{rem:GJE}. In \cite[Theorem 1.3]{2018arXiv181210585F}, they connect it with the geometric product of \cite{GM1}. \item As recalled in \remref{rem:GJE}, the blown-up cohomology studied in this paper is connected to ${\mathfrak{H}}_{\overline{p}}^{*}(X)$ through a natural quasi-isomorphism, which is compatible with the algebra structures (see \cite[Corollary 4.4 and end of Section 4]{CST6}). \end{itemize} Consequently, the intersection product on ${\mathfrak{H}}_{\ast}^{\bullet}(X;R)$ defined by \eqref{equa:fork} coincides with the original geometric product of Goresky and MacPherson in the PL compact case. Finally, let us mention the construction by Friedman \cite{MR3857199} of the intersection product via a Leinster partial algebra structure, which extends to the perverse setting the construction made by McClure on the chain complex of a PL manifold, see \cite{MR2255502}.
\end{remark} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title[On locally analytic vectors of completed cohomology II]{On locally analytic vectors of the completed cohomology of modular curves II} \author{Lue Pan} \address{Department of Mathematics, Princeton University, Fine Hall, Washington Road Princeton NJ 08544} \email{lpan@princeton.edu} \begin{abstract} This is a continuation of our previous work on the locally analytic vectors of the completed cohomology of modular curves. We construct differential operators on modular curves with infinite level at $p$ in both ``holomorphic'' and ``anti-holomorphic'' directions. As applications, we reprove a classicality result of Emerton which says that every absolutely irreducible two-dimensional Galois representation which is regular de Rham at $p$ and appears in the completed cohomology of modular curves comes from an eigenform. Moreover, we give a geometric description of the locally analytic representations of $\mathrm{GL}_2(\mathbb{Q}_p)$ attached to such a Galois representation in the completed cohomology. \end{abstract} \maketitle \tableofcontents \section{Introduction} \subsection{Main results} \begin{para} This is a continuation of our previous work \cite{Pan20} on the locally analytic vectors of the completed cohomology of modular curves. Roughly speaking, in \cite{Pan20} we study the \textit{Hodge-Tate} structure of the completed cohomology. The main goal of this work is to study the \textit{de Rham} structure. We first introduce some notation. See \cite[\S 1]{Pan20} for more details. Fix a prime number $p$ throughout this paper. Let $K^p=\prod_{l\neq p}K_l$ be an open compact subgroup of $\mathrm{GL}_2(\mathbb{A}_f^p)$.
Set \[\tilde{H}^i(K^p,\mathbb{Z}/p^n):=\varinjlim_{K_p\subseteq\mathrm{GL}_2(\mathbb{Q}_p)} H^i(Y_{K^pK_p}(\mathbb{C}),\mathbb{Z}/p^n),\] where $K_p$ runs through all open compact subgroups of $\mathrm{GL}_2(\mathbb{Q}_p)$, and $Y_{K^pK_p}$ denotes the modular curve of level $K^pK_p$ over $\mathbb{Q}$ whose $\mathbb{C}$-points are given by the usual quotient $Y_{K^pK_p}(\mathbb{C})=\mathrm{GL}_2(\mathbb{Q})\setminus (\mathbb{H}^{\pm}\times\mathrm{GL}_2(\mathbb{A}_f)/K^pK_p)$, and $\mathbb{H}^{\pm}=\mathbb{C}-\mathbb{R}$ denotes the union of the upper and lower half planes. The completed cohomology of tame level $K^p$, introduced by Emerton, is defined as \[\tilde{H}^i(K^p,\mathbb{Z}_p):=\varprojlim_n \tilde{H}^i(K^p,\mathbb{Z}/p^n).\] It is $p$-adically complete and equipped with a natural continuous action of $\mathrm{GL}_2(\mathbb{Q}_p)\times G_{\mathbb{Q}}$, where $G_{\mathbb{Q}}=\Gal(\overline\mathbb{Q}/\mathbb{Q})$ denotes the absolute Galois group of $\mathbb{Q}$. We are interested in the finite-dimensional Galois representations of $G_{\mathbb{Q}}$ inside $\tilde{H}^1(K^p,\mathbb{Z}_p)$. \end{para} \begin{thm}[Theorem \ref{MTCH}] \label{MTI} Let $E$ be a finite extension of $\mathbb{Q}_p$ and \[\rho:G_{\mathbb{Q}}\to\mathrm{GL}_2(E)\] be a two-dimensional continuous absolutely irreducible representation of $G_{\mathbb{Q}}$. By abuse of notation, we also use $\rho$ to denote its underlying representation space. Suppose that \begin{enumerate} \item $\rho$ appears in $\tilde{H}^1(K^p,E):=\tilde{H}^1(K^p,\mathbb{Z}_p)\otimes_{\mathbb{Z}_p} E$, i.e. $\Hom_{E[G_{\mathbb{Q}}]}(\rho,\tilde{H}^1(K^p,E))\neq 0$. \item $\rho|_{G_{\mathbb{Q}_p}}$ is de Rham of Hodge-Tate weights $0,k$ for some integer $k>0$, where $G_{\mathbb{Q}_p}\subseteq G_\mathbb{Q}$ denotes a decomposition group at $p$. \end{enumerate} Then $\rho$ arises from a cuspidal eigenform of weight $k+1$. 
\end{thm} \begin{rem} This result was first obtained by Matthew Emerton in his famous work \cite{Eme1} (at least in the generic cases) and was used by him in the same paper to attack the Fontaine-Mazur conjecture. To do this, Emerton proved a local-global compatibility result for the completed cohomology which implies that $\Hom_{E[G_{\mathbb{Q}}]}(\rho,\tilde{H}^1(K^p,E))$ corresponds to $\rho|_{G_{\mathbb{Q}_p}}$ under the $p$-adic local Langlands correspondence for $\mathrm{GL}_2(\mathbb{Q}_p)$ (up to some multiplicities). Then the claim follows from Colmez's result on the existence of locally algebraic vectors and the relationship between locally algebraic vectors in the completed cohomology and the cohomology of certain local systems on modular curves. We remark that our method is geometric and uses the intertwining operator constructed in this paper as a key input. In particular, we do not need the $p$-adic local Langlands correspondence for $\mathrm{GL}_2(\mathbb{Q}_p)$ in our argument. It is conceivable that our method can be generalized to more general settings and hopefully can shed some light on the conjectural $p$-adic local Langlands correspondence for groups beyond $\mathrm{GL}_2(\mathbb{Q}_p)$. \end{rem} \begin{rem} $\rho|_{G_{\mathbb{Q}_l}}$ is unramified for all but finitely many $l$ and $\rho|_{G_\mathbb{R}}$ is odd since $\rho$ appears in $\tilde{H}^1(K^p,E)$. Hence this theorem is a special case of the Fontaine-Mazur conjecture. In fact, in Emerton's approach to the Fontaine-Mazur conjecture \cite{Eme1}, he first showed that any two-dimensional $p$-adic geometric absolutely irreducible odd Galois representation of $G_{\mathbb{Q}}$ appears in $\tilde{H}^1(K^p,E)$ (under some generic assumptions), then he invoked this result to deduce that it actually comes from a cuspidal eigenform. \end{rem} \begin{para} Let $\rho$ be as in Theorem \ref{MTI}. 
\[\Pi_\rho:=\Hom_{E[G_{\mathbb{Q}}]}(\rho,\tilde{H}^1(K^p,E))\] is an $E$-Banach space representation of $\mathrm{GL}_2(\mathbb{Q}_p)$. Our method also gives a geometric description of the $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic vectors of $\Pi_\rho$. For the purpose of the introduction, we will restrict ourselves to the case when \begin{itemize} \item $k=1$, i.e. $\rho$ corresponds to an eigenform of weight $2$. \end{itemize} See Subsection \ref{rbpLLC} for results when $k\geq 2$. Fix an embedding $\tau:E\to C$, where $C$ denotes the $p$-adic completion of $\overline\mathbb{Q}_p$. By Theorem \ref{MTI}, $\rho$ corresponds to an irreducible representation $\pi^\infty=\otimes'_l \pi_l$ of $\mathrm{GL}_2(\mathbb{A}_f)$ over $C$, which is isomorphic to the finite part of an automorphic representation of $\mathrm{GL}_2(\mathbb{A})$. See Section \ref{noraf} below for the precise definition of $\pi^\infty$. \end{para} \begin{para} We first explain the case when $\pi_p$ is supercuspidal. By the Jacquet-Langlands correspondence, $\pi^\infty$ transfers to an irreducible representation $\pi'^\infty=\otimes'_l \pi'_l$ of $(D\otimes_\mathbb{Q} \mathbb{A}_f)^\times$, where $D$ denotes the quaternion algebra over $\mathbb{Q}$ ramified exactly at $p$ and $\infty$. In particular, $\pi'_p$ is an irreducible representation of $D_p^\times$. Not surprisingly, in this case, $\Pi_\rho$ is closely related to the coverings of the Drinfeld upper half plane. Let $\Omega:=\mathbb{P}^1\setminus \mathbb{P}^1(\mathbb{Q}_p)$ be the Drinfeld upper half plane, viewed as an adic space over $\Spa(C,\mathcal{O}_C)$. There is a natural action of $\mathrm{GL}_2(\mathbb{Q}_p)$ on $\Omega$. As explained by Drinfeld, $\Omega$ has a sequence of finite \'etale Galois coverings $\{\mathcal{M}_{\mathrm{Dr},n}^{(0)}\}_{n\geq 0}$ with Galois group $(\mathcal{O}_{D_p}/p^n\mathcal{O}_{D_p})^\times$. Let $\mathrm{GL}_2(\mathbb{Q}_p)^o$ denote the subgroup of $\mathrm{GL}_2(\mathbb{Q}_p)$ with determinants in $\mathbb{Z}_p^\times$. 
There are natural actions of $\mathrm{GL}_2(\mathbb{Q}_p)^o$ on $\mathcal{M}_{\mathrm{Dr},n}^{(0)}$ so that the projection maps $\pi_{\mathrm{Dr},n}^{(0)}:\mathcal{M}_{\mathrm{Dr},n}^{(0)}\to \Omega$ are $\mathrm{GL}_2(\mathbb{Q}_p)^o$-equivariant. We denote by $j:\Omega\to\mathbb{P}^1$ the natural inclusion. \end{para} \begin{thm}[Theorem \ref{scch}] \label{scch1} Suppose $k=1$ and $\pi_p$ is supercuspidal. There are a natural embedding \[i_\pi:\pi_p\to \left(\varinjlim_n H^1\left(\mathbb{P}^1,j_!\pi^{(0)}_{\mathrm{Dr},n*} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}\right)\otimes_C \pi'_p\right)^{\mathcal{O}_{D_p}^\times}\] coming from the position of the Hodge filtration of $\rho|_{G_{\mathbb{Q}_p}}$ and a $\mathrm{GL}_2(\mathbb{Q}_p)^o$-equivariant isomorphism \[\Pi_\rho^{\mathrm{la}}\widehat\otimes_{E,\tau} C\cong(\pi^{\infty,p})^{K^p}\otimes_C \left[\left(\varinjlim_n H^1\left(\mathbb{P}^1,j_!\pi^{(0)}_{\mathrm{Dr},n*} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}\right)\otimes_C \pi'_p\right)^{\mathcal{O}_{D_p}^\times}/i_\pi(\pi_p)\right]\] where $\Pi_\rho^{\mathrm{la}}\subseteq\Pi_\rho$ is the subspace of $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic vectors, $j_!$ denotes the usual extension by zero, and the cohomology is computed on the analytic site of $\mathbb{P}^1$. This isomorphism can be upgraded to a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant isomorphism, cf. Remark \ref{GL2oGL2}. \end{thm} \begin{rem} $H^1\left(\mathbb{P}^1,j_!\pi^{(0)}_{\mathrm{Dr},n*} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}\right)$ is isomorphic to the compactly supported cohomology of $\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}$ on $\mathcal{M}_{\mathrm{Dr},n}^{(0)}$. Hence this result essentially verifies, in dual form, a conjecture of Breuil-Strauch on $\Pi_\rho^{\mathrm{la}}$, which was earlier solved completely by Dospinescu-Le Bras \cite{DLB17}. See Remark \ref{scla} for more details. 
We emphasize again that our method does not rely on any known results from the $p$-adic local Langlands correspondence for $\mathrm{GL}_2(\mathbb{Q}_p)$. \end{rem} \begin{para} Next assume that $\pi_p$ is a principal series. In this case we need the rigid cohomology of Igusa curves. Let $\Gamma(p^n)=1+p^nM_2(\mathbb{Z}_p)\subseteq \mathrm{GL}_2(\mathbb{Z}_p)$ be the principal congruence subgroup of level $n\geq 1$. The modular curve $Y_{K^p\Gamma(p^n)}$ has a natural compactification $X_{K^p\Gamma(p^n)}$. As explained by Katz-Mazur \cite[Theorem 13.7.6]{KM85}, $X_{K^p\Gamma(p^n)}\times_{\mathbb{Q}_p} C$ has a natural integral model $\mathfrak{X}_{K^p\Gamma(p^n)}$ over $\mathcal{O}_C$; the irreducible components of its special fiber are indexed by surjective homomorphisms $(\mathbb{Z}/p^n)^2\to \mathbb{Z}/p^n$ and meet at supersingular points. Let $\mathfrak{X}_{K^p\Gamma(p^n),\bar\mathbb{F}_p,c}\subseteq \mathfrak{X}_{K^p\Gamma(p^n),\bar\mathbb{F}_p}$ denote the open subset of the non-supersingular points of irreducible components with indices sending $(1,0)\in (\mathbb{Z}/p^n)^2$ to $0$. It is an affine variety over $\bar\mathbb{F}_p$. In more classical language, $\mathfrak{X}_{K^p\Gamma(p^n),\bar\mathbb{F}_p,c}$ is a union of Igusa curves of level $n$ (and tame level $K^p$). Let $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p,n)):=H^1_{\mathrm{rig}}(\mathfrak{X}_{K^p\Gamma(p^n),\overline\mathbb{F}_p,c}/W(\bar\mathbb{F}_p)[\frac{1}{p}])\otimes_{W(\bar\mathbb{F}_p)[\frac{1}{p}]} C$ be the rigid cohomology of $\mathfrak{X}_{K^p\Gamma(p^n),\bar\mathbb{F}_p,c}$ with $C$-coefficients. It forms a direct system when $n$ varies. Let \[H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))=\varinjlim_n H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p,n)).\] It follows from the construction of $\mathfrak{X}_{K^p\Gamma(p^n),\bar\mathbb{F}_p,c}$ that the upper-triangular Borel subgroup $B\subseteq\mathrm{GL}_2(\mathbb{Q}_p)$ acts on $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))$. The $B$-action is smooth, hence locally analytic. 
On the other hand, fix a finite set of rational primes $S$ such that $K_l\cong\mathrm{GL}_2(\mathbb{Z}_l)$ for $l\notin S$. The spherical Hecke algebra $\mathbb{T}^S$ (cf. Subsection \ref{Hd} below) acts naturally on $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))$. By Theorem \ref{MTI}, the Galois representation $\rho$ and the chosen embedding $\tau$ induce a homomorphism $\lambda_\tau:\mathbb{T}^S\to C$. See Subsection \ref{noraf} for the normalization. \end{para} \begin{thm}[Theorem \ref{PSla}] \label{PSla1} Suppose $k=1$ and $\pi_p$ is a principal series. There are a natural embedding \[i_\pi:(\pi^{\infty})^{K^p}\to \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p)) [\lambda_\tau]\] coming from the position of the Hodge filtration of $\rho|_{G_{\mathbb{Q}_p}}$ and a natural $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant isomorphism \[\Pi_\rho^{\mathrm{la}}\widehat\otimes_{E,\tau} C\cong \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p)) [\lambda_\tau]/i_\pi\left((\pi^{\infty})^{K^p}\right),\] where $\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)}$ denotes the locally analytic induction and $[\lambda_\tau]$ denotes the $\lambda_\tau$-isotypic part. \end{thm} \begin{rem} There is an isomorphism $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p)) [\lambda_\tau]\cong (\pi^{\infty,p})^{K^p}\otimes_E D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}})$, which is $B$-equivariant if we equip the right hand side with the correct $B$-action using the congruence relation. See Remark \ref{psla} for more details. 
Hence \[\Pi_\rho^{\mathrm{la}}\widehat\otimes_{E,\tau} C\cong (\pi^{\infty,p})^{K^p}\otimes \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}})/\pi_p\] and the embedding $\pi_p\subseteq \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}})$ comes from \[\Fil^1D_{\mathrm{dR}}(\rho|_{G_{\mathbb{Q}_p}})\subseteq D_\mathrm{dR}(\rho|_{G_{\mathbb{Q}_p}})\cong D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}}).\] Such an isomorphism was previously known from Emerton's local-global compatibility result and the description of the locally analytic representation of $\mathrm{GL}_2(\mathbb{Q}_p)$ associated to $\rho|_{G_{\mathbb{Q}_p}}$ under the $p$-adic local Langlands correspondence for $\mathrm{GL}_2(\mathbb{Q}_p)$. In this case, this was essentially conjectured by Berger-Breuil and Emerton, and was proved by Liu-Xie-Zhang and Colmez. \end{rem} \begin{rem} We have a slightly weaker result when $\pi_p$ is special. See Remark \ref{scpsConj}. \end{rem} \begin{rem} \label{conunif} In both Theorems \ref{scch1} and \ref{PSla1}, $\Pi_\rho^{\mathrm{la}}\widehat\otimes_{E,\tau} C$ is written as a quotient by $(\pi^\infty)^{K^p}$. This is not a coincidence. In fact, we will show that there is a natural isomorphism (cf. Remark \ref{DRstn}, Theorem \ref{HT0kerF}) \[\Pi_\rho^{\mathrm{la}}\widehat\otimes_{E,\tau} C\cong \mathbb{H}^1(DR_{k-1})[\lambda]/\Fil^1 \mathbb{H}^1(DR_{k-1})[\lambda]\] for some two-term complex $DR_{k-1}$ on $\mathbb{P}^1$, where $\Fil^\bullet$ denotes the Hodge filtration on $\mathbb{H}^1(DR_{k-1})$. Let $\mathcal{X}_{K^pK_p}$ denote the adic space associated to $X_{K^pK_p}\times C$. 
When $k=1$, the de Rham cohomology $H^1_{\mathrm{dR}}(\mathcal{X}_{K^pK_p})$ naturally embeds into $\mathbb{H}^1(DR_{0})$ and induces an isomorphism \[\Fil^1 \mathbb{H}^1(DR_0)[\lambda]\cong \Fil^1 \varinjlim_{K_p}H^1_{\mathrm{dR}}(\mathcal{X}_{K^pK_p})[\lambda] \cong (\pi^\infty)^{K^p}.\] This shows that the Breuil-Strauch conjecture in the supercuspidal case and the conjecture of Berger-Breuil and Emerton in the principal series case can be formulated in a uniform way. It is also interesting to point out that the cohomology group $\mathbb{H}^1(DR_{k-1})[\lambda]$ itself does not see the information of the Hodge filtration. Intuitively, $DR_0$ is some completed tensor product of the de Rham complex of $\mathcal{X}_{K^pK_p}$ and the structure sheaf $\mathcal{O}_{\mathbb{P}^1}$. See Remark \ref{KSreal} below for more details regarding $\mathbb{H}^1(DR_{k-1})$ and its analogy with geometric constructions of representations of real groups, e.g. work of Kashiwara-Schmid in \cite{KS94}. \end{rem} Our method also yields a finiteness result. Let $\mathfrak{n}\subseteq\Lie(\mathrm{GL}_2(\mathbb{Q}_p))$ denote the upper-triangular nilpotent subalgebra. The following result was also obtained very recently by Dospinescu-Pa\v{s}k\=unas-Schraen \cite{DPS22}, by a totally different method, cf. Remark \ref{relDPS}. \begin{thm} [part (2) of Theorem \ref{MTCH}] \label{Intfin} Let $\rho$ be as in Theorem \ref{MTI}. Then $\Pi_\rho^{\Gamma(p^n),\mathfrak{n}}$ is finite-dimensional for any $n\geq 2$, where $\Pi_\rho^{\Gamma(p^n),\mathfrak{n}}$ denotes the $\mathfrak{n}$-invariants of the $\Gamma(p^n)$-analytic vectors of $\Pi_\rho$. \end{thm} \subsection{Strategy} \begin{para} Now we explain our strategy for proving Theorem \ref{MTI}. Fix $k\geq 1$ throughout this subsection. The starting point is a characterization of de Rham Galois representations in Fontaine's classification of almost $B_{\mathrm{dR}}$-representations \cite{Fo04}. 
More precisely, let $V$ be a two-dimensional continuous representation of $G_{\mathbb{Q}_p}$ over $\mathbb{Q}_p$. Suppose that $V$ is Hodge-Tate of weights $0,k$, i.e. there is a natural decomposition \[W:=V\otimes_{\mathbb{Q}_p} C=W_0\oplus W_k\] with both $W_0$ and $W_k$ being non-zero, and $W_i(i)=W_i(i)^{G_{\mathbb{Q}_p}}\otimes_{\mathbb{Q}_p} C$ for $i=0,k$. Fontaine proved that there is a natural $C$-linear, $G_{\mathbb{Q}_p}$-equivariant operator \[N:W_0\to W_k(k)\] such that $V$ is de Rham exactly when $N=0$. For the reader's convenience, we briefly recall Fontaine's construction here. Let $B_{\mathrm{dR}}$ denote the usual de Rham period ring and $H_{\mathbb{Q}_p}=\ker \varepsilon_p$, the kernel of the $p$-adic cyclotomic character $\varepsilon_p:G_{\mathbb{Q}_p}\to\mathbb{Z}_p^\times$. Let $\Gamma=G_{\mathbb{Q}_p}/H_{\mathbb{Q}_p}\cong\mathbb{Z}_p^\times$. Consider \[D_{\mathrm{pdR}}(V):= (V\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}})^{H_{\mathbb{Q}_p},\Gamma\text{-unipotent}},\] the subspace of $H_{\mathbb{Q}_p}$-invariants on which the action of $\Gamma$ is unipotent. The $t$-adic filtration on $B_{\mathrm{dR}}$ naturally defines a decreasing filtration on $D_{\mathrm{pdR}}(V)$. Generalizing Sen's work on the decompletion of $C$-representations, Fontaine showed that $\dim_{\mathbb{Q}_p} D_{\mathrm{pdR}}(V)=\dim_{\mathbb{Q}_p} V$ and \[\gr^i D_{\mathrm{pdR}}(V)\otimes_{\mathbb{Q}_p} C=W_i(i)\] where $W_i(i)=0$ if $i\neq 0,k$. The action of $1\in\mathbb{Q}_p\cong\Lie\Gamma$ on $D_{\mathrm{pdR}}(V)$ is nilpotent and preserves the filtration, hence induces a map $\gr^0 D_{\mathrm{pdR}}(V)\to \gr^k D_{\mathrm{pdR}}(V)$ whose tensor product with $C$ gives $N$. We will call $N$ the Fontaine operator. \end{para} \begin{para} \label{IntFO} Back to the completed cohomology. For simplicity, we will assume $E=\mathbb{Q}_p$ in the introduction. 
Since the completed cohomology is an admissible Banach space representation of $\mathrm{GL}_2(\mathbb{Q}_p)$, it follows that $\Pi_\rho$ is also admissible. Hence $\Pi^\mathrm{la}_\rho\neq 0$ by a theorem of Schneider-Teitelbaum. Moreover, by \cite[Proposition 6.1.5]{Pan20} (see also \cite{DPS20}), $\Pi^\mathrm{la}_\rho$ has an infinitesimal character $\tilde\chi_k:Z(U(\mathfrak{gl}_2(\mathbb{Q}_p)))\to\mathbb{Q}_p$ which is equal to the infinitesimal character of the $(k-1)$-th symmetric power of the dual of the standard representation. In fact, it was shown in \cite{Pan20} that the $\tilde\chi_k$-isotypic part $\tilde{H}^1(K^p,\mathbb{Q}_p)^{\mathrm{la},\tilde\chi_k}$ is Hodge-Tate of weights $0,k$, i.e.\ there is a natural Hodge-Tate decomposition \[\tilde{H}^1(K^p,\mathbb{Q}_p)^{\mathrm{la},\tilde\chi_k}\widehat\otimes_{\mathbb{Q}_p} C=W_0\oplus W_k\] such that $W_i(i)=W_i(i)^{G_{\mathbb{Q}_p}}\widehat\otimes_{\mathbb{Q}_p} C$, $i=0,k$. Fontaine's construction can also be generalized to this setting: we will show that there exists a natural continuous map \[N:W_0\to W_k(k)\] such that for any two-dimensional irreducible sub-representation $V\subseteq \tilde{H}^1(K^p,\mathbb{Q}_p)^{\mathrm{la},\tilde\chi_k}$ of $G_{\mathbb{Q}}$, the restriction $N|_{(V\otimes_{\mathbb{Q}_p}C)_0}$ agrees with the Fontaine operator $(V\otimes_{\mathbb{Q}_p}C)_0\to (V\otimes_{\mathbb{Q}_p}C)_k(k)$ for $V$. Therefore the following spectral decomposition theorem implies Theorem \ref{MTI} in view of Fontaine's result. 
\end{para} \begin{thm} [Theorem \ref{sd}, Corollary \ref{sd'}] There is a natural generalized eigenspace decomposition with respect to the action of $\mathbb{T}^S$ \[\ker N=\bigoplus_{\lambda} \ker N\widetilde{[\lambda]}\] where $\lambda:\mathbb{T}^S\to C$ runs over all systems of eigenvalues associated to eigenforms of weight $k+1$ and eigenvalues appearing in $\displaystyle \varinjlim_{K_p\subseteq\mathrm{GL}_2(\mathbb{Q}_p)} H^0(Y_{K^pK_p}(\mathbb{C}),\mathbb{Q}_p)$, and $\widetilde{[\lambda]}$ denotes the generalized eigenspace of $\lambda$. \end{thm} \begin{rem} In fact, we will show that $\ker N\widetilde{[\lambda]}=\ker N{[\lambda]}$, the eigenspace of $\lambda$, if the associated automorphic representation is cuspidal and is not special at $p$. Using results from the $p$-adic local Langlands correspondence for $\mathrm{GL}_2(\mathbb{Q}_p)$, we can even show that this is true in the Steinberg case. See Remark \ref{stn}. \end{rem} \begin{para} It remains to prove this spectral decomposition. Here comes the main innovation of this paper: we will give another construction of the Fontaine operator $N$ using the $p$-adic geometry of the perfectoid modular curves. We need some constructions from our previous work \cite{Pan20}. Let $\mathcal{X}_{K^p}\sim\varprojlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)}\mathcal{X}_{K^pK_p}$ denote the modular curve of infinite level at $p$ introduced by Scholze in \cite{Sch15}. There is a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant Hodge-Tate period map \[\pi_{\mathrm{HT}}:\mathcal{X}_{K^p}\to {\mathscr{F}\!\ell}\cong \mathbb{P}^1\] where ${\mathscr{F}\!\ell}$ denotes the flag variety of $\mathrm{GL}_2$, viewed as an adic space over $\Spa(C,\mathcal{O}_C)$. Let $\mathcal{O}_{K^p}:=\pi_{\mathrm{HT}*}\mathcal{O}_{\mathcal{X}_{K^p}}$ and $\mathcal{O}_{K^p}^\mathrm{la}\subseteq \mathcal{O}_{K^p}$ be the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic sections. 
In \cite{Pan20}, using the relative Sen theory, we showed that $\mathcal{O}^{\mathrm{la}}_{K^p}$ is annihilated by the horizontal nilpotent subalgebra $\mathfrak{n}^o$ on ${\mathscr{F}\!\ell}$. Fix a Cartan subalgebra $\mathfrak{h}:=\{\begin{pmatrix} * & 0 \\ 0 & * \end{pmatrix}\}\subseteq \{\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}\}\subseteq\mathfrak{gl}_2(\mathbb{Q}_p)$. Hence there is a natural horizontal Cartan action $\theta_\mathfrak{h}$ of $\mathfrak{h}$ on $\mathcal{O}^{\mathrm{la}}_{K^p}$ induced from $\mathfrak{h}\hookrightarrow \mathfrak{b}^o/\mathfrak{n}^o$, where $\mathfrak{b}^o$ denotes the horizontal Borel. Given a weight $\chi:\mathfrak{h}\to \mathbb{Q}_p$, written as a pair $\chi=(n_1,n_2)\in\mathbb{Q}_p^2$, we will denote by $\mathcal{O}^{\mathrm{la},\chi}_{K^p}$ the $\chi$-isotypic part of $\mathcal{O}^{\mathrm{la}}_{K^p}$ with respect to $\theta_\mathfrak{h}$. It was shown in \cite{Pan20} that there are natural isomorphisms \begin{itemize} \item $W_0\cong H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p})$ when $k\geq 2$; \item $W_k\cong H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p})$. \end{itemize} When $k=1$, there is a natural map $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p})\to W_0$ and the difference between $W_0$ and $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p})$ comes from $\tilde{H}^0(K^p,\mathbb{Z}_p)$. In particular, $W_0[\lambda_\tau]\cong H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})[\lambda_\tau]$ still holds. We recommend that the reader ignore this difference in the following discussion. \end{para} \begin{para} Let $\mathcal{O}^{\mathrm{sm}}_{K^p}\subseteq\mathcal{O}_{K^p}$ be the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)$-smooth sections. 
Equivalently, \[\displaystyle \mathcal{O}^{\mathrm{sm}}_{K^p}=\varinjlim_{K_p\subseteq\mathrm{GL}_2(\mathbb{Q}_p)} \pi_{\mathrm{HT}*}\pi_{K_p}^{-1}\mathcal{O}_{\mathcal{X}_{K^pK_p}},\] where $\pi_{K_p}:\mathcal{X}_{K^p}\to \mathcal{X}_{K^pK_p}$ denotes the natural projection map. Similarly, set \[\displaystyle \Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C})=\varinjlim_{K_p\subseteq\mathrm{GL}_2(\mathbb{Q}_p)} \pi_{\mathrm{HT}*}\pi_{K_p}^{-1}\Omega^1_{\mathcal{X}_{K^pK_p}}(\mathcal{C}_{K_p}),\] where $\mathcal{C}_{K_p}$ denotes the cusps of $\mathcal{X}_{K^pK_p}$. Clearly $\Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C})$ is an $\mathcal{O}^\mathrm{sm}_{K^p}$-module and there is a natural derivation $d:\mathcal{O}^{\mathrm{sm}}_{K^p}\to \Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C})$. We note that there are natural inclusions $\mathcal{O}^{\mathrm{sm}}_{K^p},\mathcal{O}_{{\mathscr{F}\!\ell}} \subseteq \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}$. In Section \ref{Ioef} below, we will prove the following result. \end{para} \begin{thm} The natural map $\mathcal{O}^{\mathrm{sm}}_{K^p}\otimes_{C} \mathcal{O}_{{\mathscr{F}\!\ell}} \to\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}$ has dense image. Moreover, \begin{enumerate} \item There exists a (necessarily unique) continuous map $d^1: \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\to \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C})$ which is $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linear and extends $d:\mathcal{O}^{\mathrm{sm}}_{K^p}\to \Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C})$; \item There exists a (necessarily unique) continuous map $\bar{d}^1: \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\to \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}\Omega^1_{{\mathscr{F}\!\ell}}$ which is $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear and extends $d_{\mathscr{F}\!\ell}:\mathcal{O}_{{\mathscr{F}\!\ell}}\to \Omega^{1}_{\mathscr{F}\!\ell}$. 
\end{enumerate} \end{thm} \begin{rem} $\bar{d}^1$ is essentially the pull-back of derivations on ${\mathscr{F}\!\ell}$ along the Hodge-Tate period map $\pi_{\mathrm{HT}}$. As explained by Scholze in \cite{Sch16}, $\pi_{\mathrm{HT}}$ can be regarded as a $p$-adic analogue of the \textit{anti-holomorphic} Borel embedding. This explains the notation $\bar{d}$ here. \end{rem} \begin{para} We define $I_0$ as the composite map \[\mathcal{O}^{\mathrm{la},(0,0)}_{K^p} \xrightarrow{d^1} \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C})\xrightarrow{\bar{d}^1\otimes 1} \Omega^1_{{\mathscr{F}\!\ell}}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C}).\] There is a natural isomorphism $\Omega^1_{{\mathscr{F}\!\ell}}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C})\cong \mathcal{O}^{\mathrm{la},(1,-1)}_{K^p}(1)$ coming from the Kodaira-Spencer isomorphisms: \begin{itemize} \item $\Omega^1_{\mathcal{X}_{K^pK_p}}(\mathcal{C}_{K_p})\cong \omega^2$ on $\mathcal{X}_{K^pK_p}$, where $\omega$ denotes the usual automorphic line bundle; \item $\Omega^1_{{\mathscr{F}\!\ell}}\cong \omega_{{\mathscr{F}\!\ell}}^{-2}$ on ${\mathscr{F}\!\ell}$, where $\omega_{\mathscr{F}\!\ell}$ denotes the tautological ample line bundle on ${\mathscr{F}\!\ell}=\mathbb{P}^1$. \end{itemize} It follows from the construction of $\pi_\mathrm{HT}$ that the pull-backs of $\omega$ and $\omega_{{\mathscr{F}\!\ell}}$ to $\mathcal{X}_{K^p}$ are isomorphic to each other (up to a Tate twist), which gives the desired isomorphism. Hence $I_0=\bar{d}'^1\circ d^1$ can be viewed as a map $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p} \to \mathcal{O}^{\mathrm{la},(1,-1)}_{K^p}(1)$, where $\bar{d}'^1=\bar{d}^1\otimes 1$. 
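Concretely, on the dense image of $\mathcal{O}^{\mathrm{sm}}_{K^p}\otimes_{C} \mathcal{O}_{{\mathscr{F}\!\ell}}$ the two operators act on separate tensor factors (this is a direct unwinding of the linearity properties above): for local sections $g$ of $\mathcal{O}^{\mathrm{sm}}_{K^p}$ and $h$ of $\mathcal{O}_{{\mathscr{F}\!\ell}}$, the $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linearity of $d^1$ and the $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linearity of $\bar{d}^1$ give \[d^1(gh)=h\,dg,\qquad \bar{d}^1(gh)=g\,d_{{\mathscr{F}\!\ell}}h,\qquad I_0(gh)=d_{{\mathscr{F}\!\ell}}h\otimes dg,\] up to the Kodaira-Spencer identification of the target with $\mathcal{O}^{\mathrm{la},(1,-1)}_{K^p}(1)$; by continuity and density these formulas determine $I_0$.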
In general, a standard BGG construction allows us to define $I_{k-1}=\bar{d}'^{k}\circ d^{k}: \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p} \to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(k)$. We denote $H^1(I_{k-1})$ by $I^1_{k-1}$. Recall that in \ref{IntFO}, we have the Fontaine operator $N: W_0\cong H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p} ) \to W_k(k)\cong H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(k))$ when $k\geq 2$. Here is the main result of Section \ref{IopHti}. \end{para} \begin{thm} There exist constants $c_k\in\mathbb{Q}_p^\times$ such that $I^1_{k-1}\cong c_k N$ when $k\geq 2$ and $I^1_0|_{H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})[\lambda_\tau]}\cong c_1 N|_{W_0[\lambda_\tau]}$ when $k=1$. \end{thm} \begin{para} \label{kerdinf} It remains to prove that $\ker I^1_{k-1}$ has a spectral decomposition. In fact, we will show such a decomposition on the sheaf level. For simplicity we assume $k=1$ here. The following observations for $\bar{d}^1:\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\to \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}\Omega^1_{\mathscr{F}\!\ell}$ are very useful: \begin{itemize} \item $\bar{d}^1$ is surjective; \item $\ker \bar{d}^1=\mathcal{O}^{\mathrm{sm}}_{K^p}$. \end{itemize} We give some very informal explanations here. We view $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\xrightarrow{\bar{d}^1} \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}\Omega^1_{\mathscr{F}\!\ell}$ as a ``relative de Rham complex of $\mathcal{X}_{K^p}$ over $\mathcal{X}_{K^pK_p}$''. Each fiber of the projection map $\mathcal{X}_{K^p}\to\mathcal{X}_{K^pK_p}$ is a profinite set, i.e. zero-dimensional, hence $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\xrightarrow{\bar{d}^1} \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}\Omega^1_{\mathscr{F}\!\ell}$ has no $H^1$, which implies the first claim. 
For the second claim, by Beilinson-Bernstein's theory of localization, the derivation $\bar{d}^1$ on ${\mathscr{F}\!\ell}$ essentially comes from the action of $\mathfrak{gl}_2(\mathbb{Q}_p)$. Since $\ker{\bar{d}^1}$ is $\mathrm{GL}_2(\mathbb{Q}_p)$-invariant, the second claim follows. From this, it is easy to see that $H^1(\bar{d}'^1)$ is injective. Therefore $\ker I^1_0=\ker H^1(d^1)$. Consider the following de Rham complex \[DR_0: \mathcal{O}^{\mathrm{la},(0,0)}_{K^p} \xrightarrow{d^1} \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^{1,\mathrm{sm}}_{K^p}(\mathcal{C}).\] Then $\ker I^1_0=\ker H^1(d^1)=\mathbb{H}^1(DR_0)/\Fil^1\mathbb{H}^1(DR_0)$, where $\Fil^\bullet$ denotes the Hodge filtration. This in particular explains Remark \ref{conunif}. \end{para} \begin{para} Now it is enough to show that $H^\bullet(DR_0)$ has a spectral decomposition. Roughly speaking, for $y\in{\mathscr{F}\!\ell}$, $H^\bullet(DR_0)_y$ computes the ``de Rham cohomology of the fiber $\pi_\mathrm{HT}^{-1}(y)$''. Hence we need to stratify ${\mathscr{F}\!\ell}=\mathbb{P}^1=\Omega\bigsqcup \mathbb{P}^1(\mathbb{Q}_p)$ according to the fibers. \begin{itemize} \item On $\Omega$, using the uniformization of supersingular locus in terms of the Lubin-Tate space, we get a natural spectral decomposition of $H^\bullet(DR_0)|_{\Omega}$ with eigenvalues corresponding to automorphic representations on $(D\otimes_\mathbb{Q} \mathbb{A})^\times$. Note that each fiber is a profinite set in this case, hence $H^1(DR_0)|_{\Omega}=0$. \item On $\mathbb{P}^1(\mathbb{Q}_p)$, the fiber is closely related to the Igusa curves, and we have a natural spectral decomposition of $H^\bullet(DR_0)|_{\mathbb{P}^1(\mathbb{Q}_p)}$ with eigenvalues appearing in the rigid cohomology $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))$ of Igusa curves. 
\end{itemize} For a $C$-point $y\in\Omega$, there is an isomorphism $\pi_{\mathrm{HT}}^{-1}(y)\cong D^\times\setminus (D\otimes_\mathbb{Q} \mathbb{A}_f)^\times/K^p$. The key point is to show that any element in $H^0(DR_0)_y=(\ker d^1)_y$, viewed as a continuous function on $D^\times\setminus (D\otimes_\mathbb{Q} \mathbb{A}_f)^\times/K^p$, is an automorphic function, i.e. is $D_p^\times$-smooth. To see this, we investigate the duality isomorphism between the Lubin-Tate towers and Drinfeld towers at the infinite level in Subsection \ref{Drinfty} and show that the roles of $\bar{d}^1$ and $d^1$ are swapped under this isomorphism. In particular, the statement that $\ker d^1$ consists of $D_p^\times$-smooth elements is equivalent to the statement that $\ker\bar{d}^1$ consists of $\mathrm{GL}_2(\mathbb{Q}_p)$-smooth elements, which we have already seen in \ref{kerdinf}! For $y\in \mathbb{P}^1(\mathbb{Q}_p)$, the fiber $\pi_{\mathrm{HT}}^{-1}(y)$ is some perfect Igusa curve. The main point here is to show that $H^\bullet(DR_0)_y$ computes the rigid cohomology of the classical, \textit{non-perfect} Igusa curves. I believe this is a consequence of our differential equation \cite[Lemma 4.3.3]{Pan20}. \end{para} \subsection{Organization of the paper} \begin{para} This paper is organized as follows. The next two sections are preliminary and can be skipped on first reading (although some notation will be introduced in Section \ref{Rf20}). In Section \ref{Lav}, we collect some basic facts about locally analytic representations over a Hausdorff LB-space. In Section \ref{Rf20}, we recall some results and constructions from our previous work \cite{Pan20} and make some slight generalizations. In Section \ref{Ioef}, we will prove the existence of the differential operators $d^1$ and $\bar{d}^1$ and study the $\mathfrak{n}$-invariants of $\ker H^1(d^{k})$. The spectral decomposition will be proved in Section \ref{Iosd}. 
As mentioned above, we will also study a ``local picture'': the Lubin-Tate towers and Drinfeld towers. In Section \ref{IopHti}, we prove that the Fontaine operator has the simple formula $H^1(\bar{d}'^k\circ d^k)$ (up to a non-zero scalar). Not surprisingly, one main ingredient is the period sheaf $\mathcal{O}\mathbb{B}_\mathrm{dR}^+$. Finally, we collect everything and prove the main results in Section \ref{mrs}. Although this paper is quite long, I hope it is clear to the reader that at least the main idea is very simple. \end{para} \begin{para} In this work, we also make improvements to some results obtained in \cite{Pan20}, which may simplify and clarify many arguments in our previous work. \begin{enumerate} \item In \cite[\S 2.2]{Pan20}, we introduced two notions: $\mathfrak{LA}$-acyclic and strongly $\mathfrak{LA}$-acyclic. In Proposition \ref{LAasLAa}, we will show that the two notions are equivalent. \item In \cite[Theorem 4.2.7]{Pan20}, we calculated the differential equation that $\mathcal{O}^{\mathrm{la}}_{K^p}$ satisfies using a result of Faltings. We will give a more conceptual explanation in the proof of Theorem \ref{LTde} below by exploring the relationship between the Higgs bundle and the variation of Hodge structure attached to a de Rham local system. Roughly speaking, we will show that the differential equation is essentially given by the Kodaira-Spencer class. This point of view is more flexible and works for general Shimura varieties. \item In our previous work, the inclusion map in the relative Hodge-Tate sequence (see sequence \eqref{rHT} below) is not Hecke-equivariant. More precisely, we implicitly fix a trivialization of the $H^2$ of the relative de Rham cohomology of the universal elliptic curves. We will make everything independent of choices in Section \ref{HEt} below and explain how our choice of the trivialization affects the Hecke action. In particular, the Hodge-Tate sequence should be sequence \eqref{rHTH}. 
\item In \cite[Proof of Theorem 5.1.11]{Pan20}, we had to take the largest separated quotient for some map in a \v{C}ech complex. In Subsection \ref{Cech} below, we will use the Weyl group to show that all the \v{C}ech complexes considered are strict, hence there is no need to take the separated quotient. We also explain how to carry out a Cartan-Serre type of argument in this context, cf. the proof of Lemma \ref{Lemcom} (1). \end{enumerate} \end{para} \begin{para} After finishing this work, Juan Esteban Rodriguez Camargo explained to me a more direct and conceptual construction of the differential operators $d^1$ and $\bar{d}^1$ using $\mathbb{B}_{\mathrm{dR}}^+/t^2$. It would be very interesting to simplify arguments in Section \ref{IopHti} with his construction. \end{para} \subsection*{Notation} For a topological space $X$, we denote by $\pi_0(X)$ the set of connected components of $X$. For a map $f:\mathcal{F}\to\mathcal{G}$ of sheaves of abelian groups on $X$, we will denote by $H^i(f):H^i(X,\mathcal{F})\to H^i(X,\mathcal{G})$ the induced map of the $i$-th cohomology groups. Fix an algebraic closure $\overbar\mathbb{Q}$ of $\mathbb{Q}$. Denote by $G_{\mathbb{Q}}$ the absolute Galois group $\Gal(\overbar\mathbb{Q}/\mathbb{Q})$. For each rational prime $l$, fix an algebraic closure $\overbar\mathbb{Q}_l$ of $\mathbb{Q}_l$ with ring of integers $\overbar\mathbb{Z}_l$, an embedding $\overbar\mathbb{Q}\to\overbar\mathbb{Q}_l$ which determines a decomposition group $G_{\mathbb{Q}_l}\subset G_{\mathbb{Q}}$ at $l$, and a lift of geometric Frobenius $\Frob_l\in G_{\mathbb{Q}_l}$. Our convention for local class field theory sends a uniformizer to a lift of geometric Frobenius. \begin{itemize} \item $\varepsilon:\mathbb{A}_f^\times/\mathbb{Q}_{>0}^{\times}\to\mathbb{Z}_p^\times$ denotes the character corresponding to the $p$-adic cyclotomic character via global class field theory.
By abuse of notation, we will also use $\varepsilon$ to denote its composite with the determinant map $\mathrm{GL}_2(\mathbb{A}_f)\xrightarrow{\det}\mathbb{A}_f^\times\xrightarrow{\varepsilon}\mathbb{Z}_p^\times$. \item $\varepsilon_p:G_{\mathbb{Q}_p}\to\mathbb{Z}_p^\times$ denotes the $p$-adic cyclotomic character. \item $\varepsilon'_p:=\varepsilon|_{\mathbb{Q}_p^\times}:\mathbb{Q}_p^\times\to\mathbb{Z}_p^\times$ denotes the composite of $\varepsilon_p$ with the local Artin map. Explicitly, it sends $x\in\mathbb{Q}_p^\times$ to $x|x|$. By abuse of notation, we will also use $\varepsilon'_p$ to denote its composite with the determinant map $\mathrm{GL}_2(\mathbb{Q}_p)\xrightarrow{\det}\mathbb{Q}_p^\times\xrightarrow{\varepsilon'_p}\mathbb{Z}_p^\times$. \item For a $\mathrm{GL}_2(\mathbb{A}_f)$-representation $M$, we will denote by $M\cdot D_0^k$ the twist of $M$ by the character $|\cdot|_{\mathbb{A}}^{-k}\circ \det$, where $|\cdot|_\mathbb{A}:\mathbb{A}^\times_f\to \mathbb{Q}^\times$ denotes the usual adelic norm map. \end{itemize} Suppose $S$ is a finite set of rational primes. We denote by $G_{\mathbb{Q},S}$ the Galois group of the maximal extension of $\mathbb{Q}$ unramified outside $S$ and $\infty$. Let $B\subseteq\mathrm{GL}_2(\mathbb{Q}_p)$ denote the upper triangular Borel subgroup. For $i=1,2$, $e_i':B\to \mathbb{Q}_p^\times$ will denote the character sending $\begin{pmatrix} a_1 & b \\ 0 & a_2\end{pmatrix}$ to $a_i$. We fix an isomorphism $\bar{\mathbb{Q}}_p\cong \mathbb{C}$. \section{Locally analytic vectors} \label{Lav} Let $G$ be a $p$-adic Lie group of dimension $d$. In \cite[\S 2]{Pan20}, we consider locally analytic vectors in a $\mathbb{Q}_p$-Banach space representation of $G$. For the purpose of this paper, we also need to study locally analytic vectors of representations on Hausdorff LB-spaces. One good reference is \cite{Eme17}.
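To fix ideas, let us recall a classical example (not needed in the sequel, but it illustrates why LB-spaces enter naturally). Take $G=G_0=\mathbb{Z}_p$ acting by translation on the space $W=\mathscr{C}(\mathbb{Z}_p,\mathbb{Q}_p)$ of continuous functions. Every $f\in W$ admits a Mahler expansion \[f=\sum_{k\geq 0}a_k\binom{x}{k},\qquad a_k\in\mathbb{Q}_p,\ |a_k|\to 0,\] and a theorem of Amice says that $f$ is locally analytic if and only if $|a_k|r^{k}\to 0$ for some $r>1$. Hence $W^{\mathrm{la}}=\mathscr{C}^{\mathrm{la}}(\mathbb{Z}_p,\mathbb{Q}_p)$ is exhausted by an increasing sequence of Banach spaces, indexed by the radius $r$, and is the prototype of the Hausdorff LB-spaces studied below.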
\subsection{Representations on Hausdorff LB-spaces} \label{roHLB} \begin{para} \label{GanBan} First we recall some constructions from \cite[\S 2]{Pan20}. Fix a compact open subgroup $G_0$ of $G$ equipped with an integer valued, saturated $p$-valuation. Let $G_n=G_0^{p^n}$. We denote by $\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)$ the space of $\mathbb{Q}_p$-valued analytic functions on $G_n$. There are two left actions of $G_n$ on $\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)$: left translation action and right translation action. Unless otherwise stated, we will always use the left translation action. For a $\mathbb{Q}_p$-Banach space representation $W$ of $G$, we denote by $\mathscr{C}^{\mathrm{an}}(G_n,W)$ the space of $W$-valued analytic functions on $G_n$. The group $G_n$ acts on it by \[(g\cdot f)(h)=g\left(f(g^{-1}h)\right)\] for any $g,h\in G_n$ and $f\in \mathscr{C}^{\mathrm{an}}(G_n,W)$. There is a natural $G_n$-equivariant isomorphism $\mathscr{C}^{\mathrm{an}}(G_n,W)\cong \mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)\widehat{\otimes}_{\mathbb{Q}_p}W$. The subspace of $G_n$-analytic vectors in $W$ (denoted by $W^{G_n-\mathrm{an}}$) is defined as the image of the $G_n$-invariants under the evaluation map at the identity element \[\mathscr{C}^{\mathrm{an}}(G_n,W)^{G_n}\xrightarrow{ev_{\mathbf{1}}} W.\] $G_n$ acts continuously on $W^{G_n-\mathrm{an}}$. One way to see this is by identifying this action with the right translation action of $G_n$ on $\mathscr{C}^{\mathrm{an}}(G_n,W)$. If $W^{G_n-\mathrm{an}}=W$, we say the action of $G_n$ on $W$ is analytic. 
In this case, the evaluation map ${ev_{\mathbf{1}}}$ is an isomorphism, hence it induces a natural isomorphism \[\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)\widehat{\otimes}_{\mathbb{Q}_p}W\cong \mathscr{C}^{\mathrm{an}}(G_n,W)\] equipped with the usual actions of $G_n$ on $\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)$ and $ \mathscr{C}^{\mathrm{an}}(G_n,W)$ but with the trivial action on $W$ on the left hand side. Explicitly, if we identify $\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)\widehat{\otimes}_{\mathbb{Q}_p}W$ with $\mathscr{C}^{\mathrm{an}}(G_n,W)$, this isomorphism sends $f\in \mathscr{C}^{\mathrm{an}}(G_n,W)$ on the left hand side to the function $g\mapsto g(f(g)),g\in G_n$. If $W$ is a $C$-Banach space representation of $G$, we can view it as a $\mathbb{Q}_p$-Banach space representation. Then it is clear that $W^{G_n-\mathrm{an}}$ is also a $C$-Banach space. \end{para} \begin{para} \label{indVn} Now we extend everything to representations on LB-spaces. By an LB-space, we mean a locally convex topological $\mathbb{Q}_p$-vector space which is isomorphic to a locally convex inductive limit of a countable inductive system of $\mathbb{Q}_p$-Banach spaces. Our reference here is \cite{Eme17}. In this paper, we will only consider Hausdorff LB-spaces. Such a space can be written as the inductive limit of a sequence of $\mathbb{Q}_p$-Banach spaces with injective transition maps. See the discussion below \cite[Definition 1.1.16]{Eme17} for more details about this notion. \end{para}
(To see that $W^{\mathrm{la}}$ is Hausdorff, one simply observes that the natural inclusion $W^{\mathrm{la}}\subseteq W$ is continuous because $\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)\subseteq\mathscr{C}(G_n,\mathbb{Q}_p)$ is continuous, where $\mathscr{C}(G_n,\mathbb{Q}_p)$ denotes the space of $\mathbb{Q}_p$-valued continuous functions on $G_n$.) \end{exa} \begin{para} \label{discLB} Let $\displaystyle V=\varinjlim_i V_i$ be a Hausdorff LB-space, where $\{V_i\}_{i\geq 1}$ is an inductive sequence of Banach spaces. We denote by $\mathscr{C}^{\mathrm{an}}(G_n,V)$ the space of $V$-valued analytic functions on $G_n$, cf. \cite[Definition 2.1.11]{Eme17}. Then it is easy to see that \[\mathscr{C}^{\mathrm{an}}(G_n,V)\cong\varinjlim_i \mathscr{C}^{\mathrm{an}}(G_n,V_i)\cong \varinjlim_i \mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)\widehat{\otimes}_{\mathbb{Q}_p}V_i\] is a Hausdorff LB-space. As the notation suggests, we need to justify that this does not depend on the choice of $\{V_i\}_{i\geq 1}$. Let $U_i$ be the kernel of the natural map $V_i\to V$ and $W_i=V_i/U_i$. Since $V$ is Hausdorff, $U_i$ is a closed subspace hence there is a natural Banach space structure on $W_i$. Moreover $W_i$ can be viewed as a subspace of $V$ and $\bigcup_{i\geq 1} W_i=V$. We claim that \begin{itemize} \item for any $i$, there exists $i'\geq i$ such that the image of $U_i$ in $U_{i'}$ is zero, i.e. there is an induced map $W_i\to V_{i'}$ through which the inclusion $W_i\subseteq W_{i'}$ factors. \end{itemize} Indeed, $U_i$ is equal to the increasing union $\bigcup_{j\geq i}\ker(V_i\to V_{j})$. Hence our claim follows from the Baire category theorem. In particular, this implies that the natural map \[\varinjlim_{j} \mathscr{C}^{\mathrm{an}}(G_n,V_j)\to \varinjlim_{j} \mathscr{C}^{\mathrm{an}}(G_n,W_j)\] is an isomorphism. Now it suffices to show $\mathscr{C}^{\mathrm{an}}(G_n,V)$ does not depend on the choice of $\{W_i\}_{i\geq 1}$. 
But this is a direct consequence of \cite[Proposition 1.1.10]{Eme17}. We are going to use this result a few times, so we recall it here. \end{para} \begin{prop} \label{1.1.10} Let $V$ be a Hausdorff locally convex topological $\mathbb{Q}_p$-vector space. Suppose that there exists a sequence of $\mathbb{Q}_p$-Banach spaces $\{V_n\}_{n\geq 1}$, and for each $n$, an injective continuous linear mapping $v_n:V_n\to V$ such that $\bigcup_{n\geq 1}v_n(V_n)=V$. Let $W$ be a $\mathbb{Q}_p$-Banach space and $u:W\to V$ a continuous linear mapping. Then $u(W)\subseteq v_n(V_n)$ for some $n$. \end{prop} \begin{proof} This is essentially \cite[Proposition 1.1.10]{Eme17}, which follows from \cite[prop. 1, p. I.20]{Bou87} by observing that the graph of $u$ is closed in $W\times V$ by our assumption. \end{proof} \begin{para} Suppose everything admits a continuous action of $G_0$, i.e. $\{V_i\}_{i\geq 1}$ is an inductive sequence of Banach space representations of $G_0$. There is a natural action of $G_n$ on $\mathscr{C}^{\mathrm{an}}(G_n,V)$ defined by \[(g\cdot f)(h)=g\left(f(g^{-1}h)\right)\] for any $g,h\in G_n$ and $f\in \mathscr{C}^{\mathrm{an}}(G_n,V)$. We define the subspace of $G_n$-analytic vectors in $V$ as the image of the $G_n$-invariants under the evaluation map at the identity element \[\mathscr{C}^{\mathrm{an}}(G_n,V)^{G_n}\xrightarrow{ev_{\mathbf{1}}} V\] equipped with the subspace topology from $\mathscr{C}^{\mathrm{an}}(G_n,V)$ and denote it by $V^{G_n-\mathrm{an}}$. Equivalently, \[V^{G_n-\mathrm{an}}=\varinjlim_{i} V_i^{G_n-\mathrm{an}}\subseteq V.\] It is clear from this definition that $V^{G_n-\mathrm{an}}$ is a Hausdorff LB-space and $\{V^{G_n-\mathrm{an}}\}_n$ forms an inductive sequence. The subspace of locally analytic vectors in $V$ is defined as the inductive limit $\displaystyle \varinjlim_n V^{G_n-\mathrm{an}}$ and will be denoted by $V^{\mathrm{la}}$. This is a Hausdorff LB-space.
It is easy to check that $V^{\mathrm{la}}$ does not depend on the choice of $G_0$ in the sense that we get the same $V^{\mathrm{la}}$ if we replace $G_0$ by a compact open subgroup of $G_0$ equipped with an integer valued, saturated $p$-valuation. \end{para} \subsection{LB-space of compact type} \begin{para} We will need a sufficient condition for an LB-space to be Hausdorff. For simplicity, all the discussions in this subsection are restricted to $\mathbb{Q}_p$-coefficients. But all the results actually hold with $\mathbb{Q}_p$ replaced by a finite extension. Recall that a linear operator $f:X\to Y$ between two $\mathbb{Q}_p$-Banach spaces is called compact if the closure of $f(X^o)$ in $Y$ is compact. Here $X^o\subseteq X$ denotes the unit ball of $X$. Equivalently, $f$ is compact if and only if the closure of $f(T)$ is compact for any bounded subset $T$ of $X$. From this description, it is clear that the notion of compact operator only depends on the topology on $X$ and $Y$, not the norms. A compact operator is necessarily continuous. Conversely, suppose $f:X\to Y$ is a continuous linear map between two $\mathbb{Q}_p$-Banach spaces. Then $f(X^o)\subseteq p^kY^o$ for some integer $k$. By definition, $f$ is compact if and only if the image of $X^o\xrightarrow{f} p^kY^o/p^{k+n}Y^o$ is finite for any $n\geq 0$. Suppose $g:Y\to Z$ and $h:W\to X$ are continuous operators between Banach spaces. Then $g\circ f$ and $f\circ h$ are also compact if $f$ is compact. \end{para} \begin{exa} \label{compexa} Here is a standard example of a compact operator in rigid analytic geometry. Consider the natural inclusion map between Tate algebras $\mathbb{Q}_p\langle T \rangle \to \mathbb{Q}_p \langle \frac{T}{p}\rangle$. It is easy to see that this map is compact: writing $\sum_k a_kT^k=\sum_k a_kp^k(\frac{T}{p})^k$, the image of the unit ball $\mathbb{Z}_p\langle T\rangle$ modulo $p^n$ lies in the finite set $\bigoplus_{0\leq k<n}(\mathbb{Z}/p^{n-k}\mathbb{Z})\cdot(\frac{T}{p})^k$. Geometrically, it corresponds to the restriction from the closed unit disc to the closed disc with radius $\|p\|$. More generally, suppose $B$ is a $\mathbb{Q}_p$-Banach algebra.
Then a continuous $\mathbb{Q}_p$-algebra homomorphism $f:\mathbb{Q}_p\langle T_1,\cdots, T_n \rangle \to B$ is compact if $f(T_1),\cdots, f(T_n)$ are topologically nilpotent. We will need a variant of this in our later application. Consider the $\mathbb{Q}_p$-Banach algebra $A=\mathbb{Z}_p\langle T_1,\cdots, T_n\rangle [[x]]\otimes_{\mathbb{Z}_p}\mathbb{Q}_p$ with open unit ball $\mathbb{Z}_p\langle T_1,\cdots, T_n\rangle [[x]]$. It is easy to see that, again, a continuous $\mathbb{Q}_p$-algebra homomorphism $g:A \to B$ is compact if $g(T_1),\cdots, g(T_n)$ and $g(x)$ are topologically nilpotent. \end{exa} The following results will only be used in \ref{LBH}. \begin{prop} \label{comHau} Suppose $\{V_i\}_{i\geq 1}$ is an inductive sequence of $\mathbb{Q}_p$-Banach spaces with injective compact transition maps. Then the locally convex inductive limit $\displaystyle \varinjlim_i V_i$ is Hausdorff. \end{prop} \begin{proof} \cite[Lemma 16.9]{Sch02}. \end{proof} We will also need the following generalization. \begin{cor} \label{comWHau} Suppose $\{V_i\}_{i\geq 1}$ is an inductive sequence of $\mathbb{Q}_p$-Banach spaces with injective compact transition maps and $W$ is a $\mathbb{Q}_p$-Banach space. Then the locally convex inductive limit $\displaystyle \varinjlim_i V_i\widehat\otimes_{\mathbb{Q}_p}W$ is Hausdorff. \end{cor} \begin{proof} Given a non-zero vector $v\in V_j\widehat\otimes_{\mathbb{Q}_p}W$, we can find a continuous linear functional $l:W\to\mathbb{Q}_p$ such that the image of $v$ under the induced map $1\otimes l:V_j\widehat\otimes_{\mathbb{Q}_p}W\to V_j\otimes\mathbb{Q}_p=V_j$ is non-zero. Consider the locally convex inductive limit over $i$ \[ \varinjlim_i V_i\widehat\otimes_{\mathbb{Q}_p}W\xrightarrow{1\otimes l} \varinjlim_i V_i.\] This is a continuous map and $v$ has non-zero image.
Since $\displaystyle \varinjlim_i V_i$ is Hausdorff by Proposition \ref{comHau}, there exists an open neighborhood of $0\in \varinjlim_i V_i\widehat\otimes_{\mathbb{Q}_p}W$ not containing $v$. This shows that $\varinjlim_i V_i\widehat\otimes_{\mathbb{Q}_p}W$ is Hausdorff. \end{proof} In our later application, we will prove that certain inductive sequences of $\mathbb{Q}_p$-Banach spaces have compact transition maps by a Cartan-Serre type argument. Here is the key lemma. \begin{lem} \label{CStarg} Let $\{V_i\}_{i\geq 1}$ and $\{W_i\}_{i\geq 1}$ be inductive sequences of $\mathbb{Q}_p$-Banach spaces with continuous linear transition maps. Suppose there are continuous linear maps $f_i:V_i\to W_i,i\geq 1$ with the following properties: \begin{enumerate} \item $\{f_i\}_{i\geq 1}$ commute with transition maps; \item For any $i\geq 1$, there exists $j\geq i$ such that $\im(W_i\to W_j)\subseteq f_j(V_j)$; \item For any $i\geq 1$, the composite map $V_i\xrightarrow{f_i}W_i\to W_j$ is compact for $j$ sufficiently large. \end{enumerate} Then the transition map $W_i\to W_j$ is compact for any $i\geq 1$ and $j$ sufficiently large. \end{lem} \begin{proof} Fix $i\geq 1$. By the second condition, $\im(W_i\to W_k)\subseteq f_k(V_k)$ for some $k\geq i$. We equip $f_k(V_k)=V_k/\ker(f_k)$ with the quotient topology. Hence it is also a $\mathbb{Q}_p$-Banach space. The induced map \[W_i\to f_k(V_k)\] is continuous by the closed graph theorem. Indeed, the graph of $W_i\to W_k$ is closed and the inclusion $W_i\times f_k(V_k)\subseteq W_i\times W_k$ is continuous. Now take $j\geq k$ such that $V_k\xrightarrow{f_k}W_k\to W_j$ is compact. Clearly $W_i\to W_j$ is compact as it is the composite $W_i\to f_k(V_k)\to W_j$. \end{proof} \subsection{Some miscellaneous results} \begin{para} In this subsection, we will collect some simple results which will be used later.
Besides, we will show the equivalence between $\mathfrak{LA}$-acyclicity and strong $\mathfrak{LA}$-acyclicity introduced in \cite[\S 2.2]{Pan20}. We keep the same notation used in the previous subsection. \end{para} \begin{lem} \label{Gnanprecle} Let $V\subseteq W$ be a closed embedding of $\mathbb{Q}_p$-Banach space representations of $G$. Then the natural maps $V^{G_n-\mathrm{an}}\to W^{G_n-\mathrm{an}},n\geq 0$ and $V^{\mathrm{la}}\to W^{\mathrm{la}}$ are also closed embeddings. \end{lem} \begin{proof} It suffices to prove that $V^{G_n-\mathrm{an}}\to W^{G_n-\mathrm{an}}$ is a closed embedding. By our assumption, the natural map \[\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)\widehat{\otimes}_{\mathbb{Q}_p} V \to \mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)\widehat{\otimes}_{\mathbb{Q}_p} W\] is a closed embedding. Hence the map induced on $G_n$-invariants is also a closed embedding, which is nothing but $V^{G_n-\mathrm{an}}\to W^{G_n-\mathrm{an}}$. \end{proof} \begin{prop} \label{tenscomHcont} Let $H$ be a profinite group and $K$ a finite extension of $\mathbb{Q}_p$. Suppose $W$ is a $K$-Banach space representation of $H$ and \[H^i_\mathrm{cont}(H,W)=0,i\geq1.\] Then \[H^i_\mathrm{cont}(H,W\widehat{\otimes}_{K}V)=0,i\geq 1\] for any $K$-Banach space $V$ equipped with a trivial action of $H$. \end{prop} \begin{proof} By definition, $H^i_\mathrm{cont}(H,W)$ is computed by a complex $C^\bullet(H,W)$ with $C^{i}(H,W)=\mathscr{C}(\underbrace{H\times\cdots\times H}_\text{$i+1$ times},W)^H\cong \mathscr{C}(\underbrace{H\times\cdots\times H}_\text{$i$ times},W)$. Then $W^{H}\to C^\bullet(H,W)$ is a strict exact complex by our assumption and the open mapping theorem. There are natural isomorphisms \[\mathscr{C}(\underbrace{H\times\cdots\times H}_\text{$n$ times},W)\widehat{\otimes}_{K}V\cong \mathscr{C}(\underbrace{H\times\cdots\times H}_\text{$n$ times},W\widehat{\otimes}_{K}V),\] i.e.
$C^\bullet(H,W)\widehat{\otimes}_{K}V\cong C^\bullet(H,W\widehat{\otimes}_{K}V)$. Hence $(W\widehat{\otimes}_{K}V)^H\cong W^H\widehat{\otimes}_{K}V\to C^\bullet(H,W\widehat{\otimes}_{K}V)$ is a strict exact complex as well. \end{proof} \begin{cor} \label{injnoHi} Let $H$ be a profinite group and $W$ be a $\mathbb{Q}_p$-Banach space representation of $H$. Then \[H^i_\mathrm{cont}(H,\mathscr{C}(H,\mathbb{Q}_p)\widehat\otimes_{\mathbb{Q}_p}W)=0,i\geq 1.\] \end{cor} \begin{proof} Let $W'=W$ equipped with the \textit{trivial} action of $H$. Then there is a natural $H$-equivariant isomorphism \[\mathscr{C}(H,\mathbb{Q}_p)\widehat\otimes_{\mathbb{Q}_p}W'\cong \mathscr{C}(H,\mathbb{Q}_p)\widehat\otimes_{\mathbb{Q}_p}W.\] Explicitly it sends $f\in\mathscr{C}(H,W')=\mathscr{C}(H,\mathbb{Q}_p)\widehat\otimes_{\mathbb{Q}_p}W'$ to the function $g\mapsto g\left(f(g)\right),g\in H$, viewed as an element in $\mathscr{C}(H,W)=\mathscr{C}(H,\mathbb{Q}_p)\widehat\otimes_{\mathbb{Q}_p}W$. Hence we may assume the action of $H$ on $W$ is trivial. Now our claim follows from Proposition \ref{tenscomHcont} since $H^i_\mathrm{cont}\left(H,\mathscr{C}(H,\mathbb{Q}_p)\right)=0,i\geq 1$, cf. Proof of Proposition 1.1.3 of \cite{Eme06}. \end{proof} \begin{para} For a $\mathbb{Q}_p$-Banach space representation $W$ of $G$ and $i\geq 1$, we define (following \cite[\S 2.2]{Pan20}) \[R^i\mathfrak{LA}(W):=\varinjlim_n H^i_{\mathrm{cont}}(G_n,W\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p))\] which measures the failure of taking locally analytic vectors in $W$. Note that there is an isomorphism $W\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p)\cong \mathscr{C}^{\mathrm{an}}(G_n,W)$. 
Hence \[R^i\mathfrak{LA}(W)=\varinjlim_n H^i_{\mathrm{cont}}\left(G_n,\mathscr{C}^{\mathrm{an}}(G_n,W)\right).\] We say $W$ is \begin{itemize} \item $\mathfrak{LA}$-acyclic if $R^i\mathfrak{LA}(W)=0$ for any $i\geq 1$; \item strongly $\mathfrak{LA}$-acyclic if the direct system $\left\{H^i_{\mathrm{cont}}(G_n,W\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p))\right\}_n$ is essentially zero for any $i\geq1$, i.e. for any $i\geq 1$ and $n\geq0$, we can find $m\geq n$ such that \[H^i_{\mathrm{cont}}(G_n,\mathscr{C}^{\mathrm{an}}(G_n,W)) \to H^i_{\mathrm{cont}}(G_m,\mathscr{C}^{\mathrm{an}}(G_m,W))\] is the zero map. \end{itemize} Neither notion depends on the choice of $G_0$. Clearly $W$ is $\mathfrak{LA}$-acyclic if it is strongly $\mathfrak{LA}$-acyclic. \end{para} \begin{prop} \label{LAasLAa} Suppose $W$ is an $\mathfrak{LA}$-acyclic $\mathbb{Q}_p$-Banach space representation of $G$. Then $W$ is strongly $\mathfrak{LA}$-acyclic. \end{prop} \begin{proof} Consider the natural closed embedding $\mathbb{Q}_p\to \mathscr{C}(G_0,\mathbb{Q}_p)$ which maps $\mathbb{Q}_p$ to constant functions. Take the completed tensor product with $W$ over $\mathbb{Q}_p$. We get a closed embedding \[W\to \mathscr{C}(G_0,\mathbb{Q}_p)\widehat\otimes_{\mathbb{Q}_p} W\cong \mathscr{C}(G_0,W).\] Denote the quotient by $Q$ equipped with the quotient topology. This is naturally a $\mathbb{Q}_p$-Banach representation of $G_0$.
Hence we get a short exact sequence \[0\to W\to \mathscr{C}(G_0,W)\to Q\to 0.\] Passing to the $G_n$-analytic vectors, we have \[ \mathscr{C}(G_0,W)^{G_n-\mathrm{an}}\to Q^{G_n-\mathrm{an}}\to H^1_{\mathrm{cont}}(G_n,\mathscr{C}^{\mathrm{an}}(G_n,W))\to H^1_{\mathrm{cont}}\left(G_n,\mathscr{C}^{\mathrm{an}}(G_n, \mathscr{C}(G_0,W))\right)\to\cdots.\] Since $\mathscr{C}^{\mathrm{an}}\left(G_n, \mathscr{C}(G_0,W)\right)\cong \mathscr{C}(G_0,\mathbb{Q}_p)\widehat\otimes_{\mathbb{Q}_p}\mathscr{C}^\mathrm{an}(G_n,W)$, it follows from Corollary \ref{injnoHi} that \[H^i_{\mathrm{cont}}\left(G_n,\mathscr{C}^{\mathrm{an}}(G_n, \mathscr{C}(G_0,W))\right)=0,i\geq 1.\] (Note that $ \mathscr{C}(G_0,\mathbb{Q}_p)\cong \mathscr{C}(G_n,\mathbb{Q}_p)^{\oplus [G_0:G_n]}$.) In particular, we have \[H^{i+1}_\mathrm{cont}\left(G_n,\mathscr{C}^\mathrm{an}(G_n,W)\right)\cong H^i_\mathrm{cont}\left(G_n,\mathscr{C}^\mathrm{an}(G_n,Q)\right),i\geq 1,\] and an exact sequence \[ 0\to W^{G_n-\mathrm{an}}\to\mathscr{C}(G_0,W)^{G_n-\mathrm{an}}\to Q^{G_n-\mathrm{an}}\to H^1_{\mathrm{cont}}(G_n,\mathscr{C}^{\mathrm{an}}(G_n,W))\to 0.\] Denote the quotient $\mathscr{C}(G_0,W)^{G_n-\mathrm{an}}/W^{G_n-\mathrm{an}}$ by $Q_n$ equipped with the quotient topology. This is a $\mathbb{Q}_p$-Banach space. Now consider the direct limit over $n$: \[0\to \varinjlim_n Q_n\to\varinjlim_n Q^{G_n-\mathrm{an}}\to R^1\mathfrak{LA}(W)=0~~~~\mbox{(by our assumption)}.\] Hence $\displaystyle \varinjlim_n Q_n\to\varinjlim_n Q^{G_n-\mathrm{an}}=Q^{\mathrm{la}}$ is an isomorphism. Example \ref{exala} shows that $Q^{\mathrm{la}}$ is a Hausdorff LB-space. Thus $Q^{G_n-\mathrm{an}}\subseteq Q_m$ for some $m\geq n$ by Proposition \ref{1.1.10}. Equivalently, the image of \[H^1_{\mathrm{cont}}(G_n,\mathscr{C}^{\mathrm{an}}(G_n,W))\to H^1_{\mathrm{cont}}(G_m,\mathscr{C}^{\mathrm{an}}(G_m,W))\] is zero.
This proves that $\left\{H^1_{\mathrm{cont}}(G_n,W\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(G_n,\mathbb{Q}_p))\right\}_n$ is essentially zero. The general case can be proved by induction on $i$ using the isomorphism \[H^{i+1}_\mathrm{cont}\left(G_n,\mathscr{C}^\mathrm{an}(G_n,W)\right)\cong H^i_\mathrm{cont}\left(G_n,\mathscr{C}^\mathrm{an}(G_n,Q)\right),i\geq 1.\] We omit the details here. \end{proof} \section{Some results of \texorpdfstring{\cite{Pan20}}{} and generalizations} \label{Rf20} \subsection{Horizontal Cartan action}\label{R20} \begin{para} \label{brr} First we recall some notation from \cite[\S 4]{Pan20}. As in the introduction, let $C$ be the completion of $\overbar\mathbb{Q}_p$. For a neat open compact subgroup $K$ of $\mathrm{GL}_2(\mathbb{A}_f)$, we denote by $X_K$ the complete modular curve of level $K$ over $\mathbb{Q}$ and by $\mathcal{X}_K$ the adic space associated to $X_K\times_{\mathbb{Q}}C$. Throughout this paper, fix an open compact subgroup $K^p$ of $\mathrm{GL}_2(\mathbb{A}_f^p)$ contained in the level-$N$-subgroup $\{g\in\mathrm{GL}_2(\hat{\mathbb{Z}}^p)=\prod_{l\neq p}\mathrm{GL}_2(\mathbb{Z}_l)\,\vert\, g\equiv1\mod N\}$ for some $N\geq 3$ prime to $p$. Then there exists a unique perfectoid space $\mathcal{X}_{K^p}$ over $C$ such that \[\mathcal{X}_{K^p}\sim\varprojlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)}\mathcal{X}_{K^pK_p},\] where $K_p$ runs through all open compact subgroups of $\mathrm{GL}_2(\mathbb{Q}_p)$. Very loosely speaking, on non-cusp points, $\mathcal{X}_{K^p}$ parametrizes elliptic curves with level-$K^p$-structure away from $p$ and a trivialization of the $p$-adic Tate module. Let $V=\mathbb{Q}_p^{\oplus 2}$ be the standard representation of $\mathrm{GL}_2(\mathbb{Q}_p)$. Our convention here is that the local system associated to $V$ agrees with the first (relative) $p$-adic \'etale cohomology of the universal family of elliptic curves.
Hence the universal $p$-adic Tate module corresponds to the representation $V(1)$, where $(1)$ denotes the Tate twist by $1$, or $V^*$, the dual representation of $V$. We denote the automorphic line bundle on the finite-level modular curves by $\omega^1$ and its pull-back to $\mathcal{X}_{K^p}$ by $\omega_{K^p}$. On the infinite-level space $\mathcal{X}_{K^p}$, we have the following exact sequence coming from the relative Hodge-Tate filtration: \begin{eqnarray}\label{rHT} 0\to\omega_{K^p}^{-1}(1)\to V(1)\otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{X}_{K^p}} \to \omega_{K^p}\to 0, \end{eqnarray} or equivalently, \[0\to\omega_{K^p}^{-1}\to V\otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{X}_{K^p}} \to \omega_{K^p}(-1)\to 0.\] Then the variation of Hodge-Tate decompositions, or more precisely the position of $\omega^{-1}_{K^p}$ in $V\otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{X}_{K^p}}$, induces a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant Hodge-Tate period map \[\pi_{\mathrm{HT}}:\mathcal{X}_{K^p}\to{\mathscr{F}\!\ell}.\] Here ${\mathscr{F}\!\ell}=\mathbb{P}^1$ is the adic space over $C$ associated to the usual flag variety for $\mathrm{GL}_2$. We note that there is an ample line bundle $\omega_{{\mathscr{F}\!\ell}}$ on ${\mathscr{F}\!\ell}$ (the tautological line bundle) whose pull-back along $\pi_{\mathrm{HT}}$ agrees with $\omega_{K^p}(-1)$. Let $\mathcal{O}_{K^p}={\pi_{\mathrm{HT}}}_*\mathcal{O}_{\mathcal{X}_{K^p}}$ be the push-forward of the structure sheaf of $\mathcal{X}_{K^p}$ along $\pi_{\mathrm{HT}}$ and $\mathcal{O}^{\mathrm{la}}_{K^p}\subset \mathcal{O}_{K^p}$ be the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic sections introduced in \cite[4.2.6]{Pan20}. Let $\mathfrak{g}:=\mathfrak{gl}_2(C)$ be the complexified Lie algebra of $\mathrm{GL}_2(\mathbb{Q}_p)$. Then there is a natural action of $\mathfrak{g}^0:=\mathcal{O}_{{\mathscr{F}\!\ell}}\otimes_{C}\mathfrak{g}$ on $\mathcal{O}^{\mathrm{la}}_{K^p}$. Let $\mathfrak{b}^0$ (resp.
$\mathfrak{n}^0$) be the subsheaf of horizontal Borel subalgebras (resp. the subsheaf of horizontal nilpotent subalgebras). By \cite[Theorem 4.2.7]{Pan20}, $\mathfrak{n}^0$ acts trivially on $\mathcal{O}^{\mathrm{la}}_{K^p}$, hence we get an action of $\mathfrak{b}^0/\mathfrak{n}^0$ on $\mathcal{O}^{\mathrm{la}}_{K^p}$. Let $\mathfrak{h}:=\{\begin{pmatrix} * & 0 \\ 0 & * \end{pmatrix}\}$ be a Cartan subalgebra of the Borel subalgebra $\mathfrak{b}:=\{\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}\}$. It acts on $\mathcal{O}^{\mathrm{la}}_{K^p}$ via the natural embedding $\mathfrak{h}\to\mathcal{O}_{{\mathscr{F}\!\ell}}\otimes_{C}\mathfrak{h}=\mathfrak{b}^0/\mathfrak{n}^0$. This horizontal action will be denoted by $\theta_\mathfrak{h}$ and encodes the infinitesimal character, cf. \cite[Corollary 4.2.8]{Pan20}. \end{para} \begin{para} \label{omegakla} In this paper, we also need to consider twists of $\mathcal{O}^{\mathrm{la}}_{K^p}$. Fix an integer $k$. Let $U$ be an affinoid open subset of ${\mathscr{F}\!\ell}$. Then $\omega_{{\mathscr{F}\!\ell}}$ is trivial on $U$. Hence by choosing a generator of $\omega_{{\mathscr{F}\!\ell}}|_U$, we get an isomorphism $\mathcal{O}_{\mathcal{X}_{K^p}}(\pi_{\mathrm{HT}}^{-1}(U))\cong \omega^k_{K^p}(\pi_{\mathrm{HT}}^{-1}(U))$. This defines a Banach space structure on $\omega^k_{K^p}(\pi_{\mathrm{HT}}^{-1}(U))$. It is clear that the topology defined on $\omega^k_{K^p}(\pi_{\mathrm{HT}}^{-1}(U))$ does not depend on the choice of the generator of $\omega_{{\mathscr{F}\!\ell}}|_U$. In particular, the subspace of locally analytic vectors in $\omega^k_{K^p}(\pi_{\mathrm{HT}}^{-1}(U))$ is well-defined. Let $\omega^{k,\mathrm{la}}_{K^p}$ be the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic sections of ${\pi_{\mathrm{HT}}}_*(\omega_{K^p})^{\otimes k}$. Similar to $\theta_\mathfrak{h}$, there is a natural horizontal Cartan action of $\mathfrak{h}$ on $\omega^{k,\mathrm{la}}_{K^p}$.
There are two natural ways of viewing $\omega^{k,\mathrm{la}}_{K^p}$ as a twist of $\mathcal{O}^{\mathrm{la}}_{K^p}$: \begin{enumerate} \item (twist on ${\mathscr{F}\!\ell}$) $\omega^{k,\mathrm{la}}_{K^p}(-k)\cong \mathcal{O}^{\mathrm{la}}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\omega_{{\mathscr{F}\!\ell}})^{\otimes k}$; \item (twist on modular curves) $\omega^{k,\mathrm{la}}_{K^p}\cong \omega^{k,\mathrm{sm}} _{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}} _{K^p}}\mathcal{O}^{\mathrm{la}}_{K^p}$. \end{enumerate} Here $\mathcal{O}^{\mathrm{sm}}_{K^p}=\omega^{0,\mathrm{sm}} _{K^p}$, and $\omega^{k,\mathrm{sm}} _{K^p}$ were introduced in \cite[5.3.3]{Pan20}. As the superscript suggests, $\omega^{k,\mathrm{sm}} _{K^p}\subset\omega^{k,\mathrm{la}} _{K^p}$ is the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)$-smooth sections, i.e. sections annihilated by $\mathfrak{g}$. Equivalently, \[\omega^{k,\mathrm{sm}}_{K^p}={\pi_{\mathrm{HT}}}_{*} (\varinjlim_{K_p\subset \mathrm{GL}_2(\mathbb{Q}_p)}(\pi_{K_p})^{-1} \omega^{ k}),\] where $\pi_{K_p}:\mathcal{X}_{K^p}\to\mathcal{X}_{K^pK_p}$ denotes the natural projection and $\pi_{K_p}^{-1}$ is the pull-back as a sheaf of abelian groups. The first isomorphism (twist on ${\mathscr{F}\!\ell}$) is clear and the second isomorphism will be proved in the next subsection. It is easy to compute that the horizontal action of $\mathfrak{h}$ on $\omega^{k,\mathrm{sm}}_{K^p}$ is trivial and the action on $\omega_{{\mathscr{F}\!\ell}}$ is via the character sending $\begin{pmatrix} a & 0\\ 0 & d \end{pmatrix}\in\mathfrak{h}$ to $d$. Both isomorphisms are $\mathfrak{h}$-equivariant with respect to these actions. \end{para} \subsection{Local structure of \texorpdfstring{$\mathcal{O}^{\mathrm{la},\chi}_{K^p}$}{Lg}} \begin{para} \label{exs} For a weight $\chi$ of $\mathfrak{h}$, we write $\chi(\begin{pmatrix} a & 0\\ 0 & d \end{pmatrix}) =n_1a+n_2d$ for some $n_1,n_2\in C$ and identify $\chi$ with an ordered pair $(n_1,n_2)\in C^2$.
Throughout the paper, we will only consider the case of \textit{integral weights}, i.e. \[(n_1,n_2)\in\mathbb{Z}^2.\] Denote by $\mathcal{O}^{\mathrm{la},\chi}_{K^p}$ the weight-$\chi$ subsheaf of $\mathcal{O}^{\mathrm{la}}_{K^p}$ under $\theta_\mathfrak{h}$ and define $\omega^{k,\mathrm{la},\chi}_{K^p}\subset\omega^{k,\mathrm{la}}_{K^p}$ similarly. An explicit description of $\mathcal{O}^{\mathrm{la},\chi}_{K^p}$ was given in \cite[4.3,5.1]{Pan20}. We recall it here and generalize it to $\omega^{k,\mathrm{la},\chi}_{K^p}$ as well. Recall that we have the following exact sequence: \begin{eqnarray*} 0\to\omega_{K^p}^{-1}(1)\to V(1)\otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{X}_{K^p}} \to \omega_{K^p}\to 0 \end{eqnarray*} coming from the relative Hodge-Tate filtration. Taking $\wedge^2$ of $V(1)\otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{X}_{K^p}}$ in the exact sequence \eqref{rHT}, we get an isomorphism \[\mathcal{O}_{\mathcal{X}_{K^p}}(1)=\omega_{K^p}^{-1}(1)\otimes_{\mathcal{O}_{\mathcal{X}_{K^p}}}\omega_{K^p}\cong \wedge^2 V(1) \otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{X}_{K^p}}=\mathcal{O}_{\mathcal{X}_{K^p}}\otimes\det(2),\] i.e. $\mathcal{O}_{\mathcal{X}_{K^p}}\cong \mathcal{O}_{\mathcal{X}_{K^p}}\otimes\det(1)$, where $\det$ denotes the determinant representation of $\mathrm{GL}_2(\mathbb{Q}_p)$. Fix a basis $b$ of $\mathbb{Q}_p(1)$ from now on. Then under this isomorphism, $1\otimes b$ defines an invertible function $\mathrm{t}\in H^0(\mathcal{X}_{K^p},\mathcal{O}_{\mathcal{X}_{K^p}})$, on which the Galois group $G_{\mathbb{Q}_p}$ acts via the cyclotomic character. We remark that $\mathrm{GL}_2(\mathbb{A}_f)$ acts on $\mathrm{t}$ via a non-trivial character because implicitly we identify $\wedge^2 D$ with a trivial line bundle. See \ref{HEt} below for more details. Note that the notation for $\mathrm{t}$ in \cite[4.3.1]{Pan20} was $t$. We change the notation here because $t$ will be used for Fontaine's $2\pi i$ later.
We will use the notation $\cdot \mathrm{t}$ to denote the twist by $\mathrm{t}$ to remember the action of $G_{\mathbb{Q}_p}\times\mathrm{GL}_2(\mathbb{A}_f)$. Let $(1,0),(0,1)$ be the standard basis of $V$. We denote their images in $H^0(\mathcal{X}_{K^p},\omega_{K^p})$ under the surjective map in \eqref{rHT} by $e_1,e_2$. (Here we identify $V(1)$ with $V$ using $b$.) Hence we may view $e_1,e_2$ as sections of $\omega_{{\mathscr{F}\!\ell}}$ on ${\mathscr{F}\!\ell}$ and $x:=\frac{e_2}{e_1}$ defines a rational function on ${\mathscr{F}\!\ell}$. By \cite[Theorem III.1.2]{Sch15} (see also \cite[Theorem 4.1.7]{Pan20}), there exists a basis of open affinoid subsets $\mathfrak{B}$ of ${\mathscr{F}\!\ell}$ stable under finite intersections such that for any $U\in\mathfrak{B}$, \begin{itemize} \item its preimage $V_\infty=\pi_{\mathrm{HT}}^{-1}(U)$ is affinoid perfectoid; \item $V_\infty$ is the preimage of an affinoid subset $V_{K_p}\subset\mathcal{X}_{K^pK_p}$ for a sufficiently small open subgroup $K_p$ of $\mathrm{GL}_2(\mathbb{Q}_p)$; \item the map $\varinjlim_{K_p}H^0(V_{K_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})\to H^0(V_\infty,\mathcal{O}_{\mathcal{X}_{K^p}})$ has dense image. \end{itemize} We briefly recall the choice of $\mathfrak{B}$ in \cite[Theorem 4.1.7]{Pan20}. Let $U_1,U_2\subset {\mathscr{F}\!\ell}$ be the affinoid open subsets defined by $\{\|x\|\leq1\},\{\|x\|\geq1\}$. Then $\mathfrak{B}$ consists of finite intersections of rational subsets of $U_1,U_2$. Fix $U\in\mathfrak{B}$ and assume that \begin{itemize} \item $U$ is an open subset of $U_1$, hence $e_1$ generates $H^0(V_\infty, \omega_{K^p})$ and $x=\frac{e_2}{e_1}$ is a regular function on $V_\infty$. \end{itemize} Now we would like to give an explicit description of $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)$ and $\omega^{k,\mathrm{la},\chi}_{K^p}(U)$.
Let $G_0=1+p^mM_2(\mathbb{Z}_p),m\geq 2$ be an open subgroup of $\mathrm{GL}_2(\mathbb{Q}_p)$ so that $V_\infty$ is the preimage of an affinoid subset $V_{G_0}\subset\mathcal{X}_{K^pG_0}$. Let $G_n=G_0^{p^n}=1+p^{m+n}M_2(\mathbb{Z}_p)$ and $V_{G_n}\subset\mathcal{X}_{K^pG_n}$ be the preimage of $V_{G_0}$. Then $\varinjlim_{n}H^0(V_{G_n},\mathcal{O}_{\mathcal{X}_{K^pG_n}})\to H^0(V_\infty,\mathcal{O}_{\mathcal{X}_{K^p}})$ has dense image. As explained in \cite[4.3.5]{Pan20}, for any $n\geq0$, we can find \begin{itemize} \item an integer $r(n)>r(n-1)>0$; \item $x_n\in H^0(V_{G_{r(n)}},\mathcal{O}_{\mathcal{X}_{K^pG_{r(n)}}})$ such that $\|x-x_n\|_{G_{r(n)}}=\|x-x_n\|\leq p^{-n}$ in $H^0(V_\infty,\mathcal{O}_{\mathcal{X}_{K^p}})$, \end{itemize} and assume that $\omega^1$ is a trivial line bundle on $V_{G_{r(n)}}$. Here $\|\cdot\|_{G_{r(n)}}$ denotes the norm on $G_{r(n)}$-analytic vectors and $\|\cdot\|$ is the usual norm on $\mathcal{O}_{\mathcal{X}_{K^p}}$. \end{para} \begin{thm} \label{str} Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$. \begin{enumerate} \item For any $n\geq0$, given a sequence of sections \[c_i\in H^0(V_{G_{r(n)}},\omega^{n_1-n_2}),i=0,1,\cdots\] such that the norms of $e_1^{n_2-n_1}c_i p^{(n-1)i}\in \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)$, $i\geq 0$, are uniformly bounded, then \[f=\mathrm{t}^{n_1}e_1^{n_2-n_1}\sum_{i=0}^{+\infty}c_i(x-x_n)^i=\mathrm{t}^{n_1}\sum_{i=0}^{+\infty}e_1^{n_2-n_1}c_i(x-x_n)^i\] converges in $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_{r(n)}-\mathrm{an}}$ and any $G_{n}$-analytic vector in $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)$ arises in this way. Moreover $f=0$ if and only if all $c_i=0$.
\item Similarly, for any $n\geq0$, given a sequence of sections $c'_i\in H^0(V_{G_{r(n)}},\omega^{n_1-n_2+k}),i=0,1,\cdots$ such that $\|e_1^{n_2-n_1-k}c'_i \|p^{-(n-1)i}$ is uniformly bounded, then \[f=\mathrm{t}^{n_1}e_1^{n_2-n_1}\sum_{i=0}^{+\infty}c'_i(x-x_n)^i\] converges in $\omega^{k,\mathrm{la},\chi}_{K^p}(U)^{G_{r(n)}-\mathrm{an}}$ and any $G_n$-analytic vector in $\omega^{k,\mathrm{la},\chi}_{K^p}(U)$ arises in this way. Moreover $f=0$ if and only if all $c'_i=0$. \end{enumerate} \end{thm} \begin{proof} The first part follows from Theorem 4.3.9 and Lemma 5.1.2 of \cite{Pan20}. Note that $c_i$ here is $(\frac{1}{t_N})^{n_1}(\frac{1}{e_{1,N}})^{n_2-n_1}c_i^{(n)}(f)$ in \cite[Lemma 5.1.2]{Pan20}. The second part can be reduced to the first part since multiplication by $e_1^{k}$ induces an isomorphism $\mathcal{O}^{\mathrm{la}}_{K^p}(U)\cong \omega^{k,\mathrm{la}}_{K^p}(U) $. \end{proof} \begin{cor} \label{density} Let $\chi=(n_1,n_2)\in\mathbb{Z}^2,k\in\mathbb{Z}$. We denote by $\omega^{k,\mathrm{la},\chi}_{K^p}(U)^{\mathfrak{n}-fin}\subseteq\omega^{k,\mathrm{la},\chi}_{K^p}(U)$ the subspace consisting of elements of the form \[\mathrm{t}^{n_1}e_1^{n_2-n_1}\sum_{i=0}^lc_ix^i,~~~~~c_i\in H^0(V_{G_n},\omega^{n_1-n_2+k})\mbox{ for some }l,n\geq 0.\] Then \begin{itemize} \item $\omega^{k,\mathrm{la},\chi}_{K^p}(U)^{\mathfrak{n}-fin}$ is dense in $\omega^{k,\mathrm{la},\chi}_{K^p}(U)$. \item There is a natural isomorphism \[\omega^{k,\mathrm{la},\chi}_{K^p}(U)^{\mathfrak{n}-fin}\cong \omega^{n_1-n_2+k,\mathrm{sm}}_{K^p}(U)\otimes_{\mathbb{Q}_p} M^{\vee}_{(n_2,n_1)},\] where $M^{\vee}_{(n_2,n_1)}\subseteq \omega^{n_2-n_1,\mathrm{la},\chi}_{K^p}(U)$ consists of elements of the form \[\mathrm{t}^{n_1}e_1^{n_2-n_1}\sum_{i=0}^lc_ix^i,~~~~~c_i\in \mathbb{Q}_p \mbox{ for some }l\geq 0.\] (The notation comes from the category $\mathcal{O}$, cf. \ref{CatO} below.) \end{itemize} \end{cor} \begin{proof} This is obvious in view of Theorem \ref{str}. 
\end{proof} \begin{rem} \label{tn1} It is clear that $\mathcal{O}^{\mathrm{la},(n_1+k,n_2+k)}_{K^p}=\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\cdot\mathrm{t}^k$ for any integer $k$. We will use this later to reduce calculations to the case when $n_1=0$. \end{rem} \begin{para} \label{lalg0k} Suppose $k$ is a non-negative integer and consider the case when $\chi=(0,k)$, i.e. $n_1=0,n_2=k$. Recall that $V=\mathbb{Q}_p^{\oplus 2}$ denotes the standard representation of $\mathrm{GL}_2(\mathbb{Q}_p)$. The relative Hodge-Tate filtration induces a map $V(1)\to \omega^1_{K^p}$. Taking its $k$-th symmetric power, we get a map $\Sym^k V(k)\to \omega^k_{K^p}$, which is nothing but the natural map $H^0({\mathscr{F}\!\ell},\omega^k_{{\mathscr{F}\!\ell}})(k)\to \omega^k_{K^p}$. This defines a map $\Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p}\to \mathcal{O}^{\mathrm{la}}_{K^p}$ whose image lies in the weight-$(0,k)$ part. In fact, this map is injective, hence we get a natural inclusion \[\Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p}\subseteq \mathcal{O}^{\mathrm{la},(0,k)}_{K^p}\] and it is easy to see that the image consists exactly of the $\mathrm{GL}_2(\mathbb{Q}_p)$-locally algebraic vectors of $\mathcal{O}^{\mathrm{la},(0,k)}_{K^p}$. In general, if $\chi=(n_1,n_1+k)$, we have an inclusion \[(\Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p})\cdot\mathrm{t}^{n_1}\subseteq \mathcal{O}^{\mathrm{la},\chi}_{K^p}.\] \end{para} \begin{defn} \label{lalgKp} We denote the image of this inclusion by $\mathcal{O}^{\mathrm{lalg},\chi}_{K^p}\subseteq \mathcal{O}^{\mathrm{la},\chi}_{K^p}$. \end{defn} \begin{para}[Definition of $A^n$] \label{An} Let $A^n\subset \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)$ be the subset of those $f$ that can be written as $\mathrm{t}^{n_1}e_1^{n_2-n_1}\sum_{i=0}^{+\infty}c_i(x-x_n)^i$ for some $c_i\in H^0(V_{G_{r(n)}},\omega^{n_1-n_2})$ such that $\|e_1^{n_2-n_1}c_i p^{(n-1)i}\|$ is uniformly bounded.
\cite[Remark 4.3.11]{Pan20} implies that $A^n$ is independent of the choice of $x_n$. This is a Banach space with respect to the norm \[\|\mathrm{t}^{n_1}e_1^{n_2-n_1}\sum_{i=0}^{+\infty}c_i(x-x_n)^i\|_n:=\sup_{i\geq 0}\|e_1^{n_2-n_1}c_i p^{(n-1)i}\|.\] Then there are continuous inclusions \[\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}\subset A^n \subset \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_{r(n)}-\mathrm{an}}.\] In particular, $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)=\varinjlim_n A^n$ as LB-spaces. Note that $A^n$ can be viewed as a subspace of $\prod_{i=0}^{+\infty} H^0(V_{G_{r(n)}},\omega^{n_1-n_2})$ by sending $f$ to $\{c_i\}_{i=0,1,\cdots}$. More precisely, if we fix a generator $s\in H^0(V_{G_{r(n)}},\omega^{n_1-n_2})$, i.e. an isomorphism $H^0(V_{G_{r(n)}},\mathcal{O}_{\mathcal{X}_{K^pG_{r(n)}}})\cong H^0(V_{G_{r(n)}},\omega^{n_1-n_2})$, then we have an isomorphism \begin{eqnarray} \label{exAn} \left(\prod_{i=0}^\infty \mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n)}}}(V_{G_{r(n)}})\right)\otimes_{\mathbb{Z}_p}\mathbb{Q}_p\cong A^n \end{eqnarray} by sending $(a_i)_{i\geq 0}\in \prod_{i=0}^\infty \mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n)}}}(V_ {G_{r(n)}})$ to \[\mathrm{t}^{n_1}e_1^{n_2-n_1}s\sum_{i=0}^{+\infty}a_ip^{-(n-1)i}(x-x_n)^i.\] In other words, we have \begin{eqnarray} \label{exAn[[]]} \mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n)}}}(V_{G_{r(n)}})[[\frac{x-x_n}{p^{n-1}}]] \otimes_{\mathbb{Z}_p} \mathbb{Q}_p\cong A^n. \end{eqnarray} Similarly, we can define a subspace $B^n\subset \omega^{k,\mathrm{la},\chi}_{K^p}(U)$, which can also be viewed as a subspace of $\prod_{i=0}^{+\infty} H^0(V_{G_{r(n)}},\omega^{n_1-n_2+k})$. Then the natural isomorphism \[H^0(V_{G_{r(n)}},\omega^{k})\otimes_{\mathcal{O}(V_{G_{r(n)}})}\prod_{i=0}^{+\infty} H^0(V_{G_{r(n)}},\omega^{n_1-n_2})\cong \prod_{i=0}^{+\infty} H^0(V_{G_{r(n)}},\omega^{n_1-n_2+k})\] induces an isomorphism $H^0(V_{G_{r(n)}},\omega^{k})\otimes_{\mathcal{O}(V_{G_{r(n)}})}A^n\cong B^n$.
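To see that the map in \eqref{exAn} indeed takes values in $A^n$ (a routine verification, recorded here for the reader's convenience): for $(a_i)_{i\geq 0}$ with $\|a_i\|\leq 1$, the corresponding coefficients are $c_i=a_ip^{-(n-1)i}s$, so
\[\|e_1^{n_2-n_1}c_ip^{(n-1)i}\|=\|e_1^{n_2-n_1}s\,a_i\|\leq \|e_1^{n_2-n_1}s\|,\]
which is uniformly bounded in $i$, as required in the definition of $A^n$.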
Recall that we introduced the sheaf $\omega^{k,\mathrm{sm}}_{K^p}$ in \ref{omegakla}. By definition, $\omega^{k,\mathrm{sm}}_{K^p}(U)=\varinjlim_n H^0(V_{G_{n}},\omega^{k})$. Hence by passing to the direct limit over $n$ of the previous isomorphisms, we get the following natural isomorphism \[\omega^{k,\mathrm{sm}}_{K^p}(U)\otimes_{\omega^{0,\mathrm{sm}}_{K^p}(U)} \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)\cong \omega^{k,\mathrm{la},\chi}_{K^p}(U).\] The same argument works when $e_2$ is a generator of $H^0(V_\infty,\omega_{K^p})$. Therefore, we have \end{para} \begin{lem} \label{tensomegaksm} \[\omega^{k,\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} \mathcal{O}^{\mathrm{la},\chi}_{K^p}\cong \omega^{k,\mathrm{la},\chi}_{K^p}.\] \end{lem} \begin{rem} One can repeat the same argument without fixing the weight $\chi$ (using \cite[Theorem 4.3.9]{Pan20} as an input); then one has the natural isomorphism $\omega^{k,\mathrm{la}}_{K^p}\cong \omega^{k,\mathrm{sm}} _{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}} _{K^p}}\mathcal{O}^{\mathrm{la}}_{K^p}$, which was claimed in \ref{omegakla}. \end{rem} \begin{para} \label{vCech} As pointed out in \cite[Lemma 4.3.14]{Pan20}, we can use $A^n$ to prove the following vanishing result for \v{C}ech cohomology. This will only be used in the proof of Proposition \ref{Haus} below. Let $\mathfrak{U}=\{U^1,\cdots,U^l\}$ be a finite subset of $\mathfrak{B}$. We can find $G_0$ of the form $1+p^mM_2(\mathbb{Z}_p)$ such that each $U^{i}\in\mathfrak{U}$ is $G_0$-stable. Let $C^\bullet(\mathfrak{U},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ be the \v{C}ech complex for $\mathcal{O}^{\mathrm{la},\chi}_{K^p}$ with respect to $\mathfrak{U}$. For $n\geq 0$, let $C^\bullet(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$ be the subcomplex defined by restricting to the $G_n$-analytic vectors, i.e.
\[C^\bullet(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p}):=C^\bullet(\mathfrak{U},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{G_n-\mathrm{an}},\] where $G_n=G_0^{p^n}$ as before. We denote the cohomology of this complex by $\check{H}^\bullet(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$. Clearly there is a natural map $C^\bullet(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})\to C^\bullet(\mathfrak{U},\mathcal{O}^{G_{m}-\mathrm{an},\chi}_{K^p})$ for $m\geq n$. \end{para} \begin{prop} \label{VCech} Suppose $\mathfrak{U}=\{U^1,\cdots,U^l\}\subseteq \mathfrak{B}$ is a finite cover of $U\in\mathfrak{B}$. For any $i\geq 1$ and $n\geq 0$, the natural map \[\check{H}^i(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})\to\check{H}^i(\mathfrak{U},\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})\] is zero when $m\geq n$ is sufficiently large. \end{prop} \begin{proof} After possibly shrinking $G_0$, we may assume that $\pi_{\mathrm{HT}}^{-1}(U^{i})$ (resp. $\pi_{\mathrm{HT}}^{-1}(U)$) is the preimage of an affinoid subset $V^i_{G_0}\subseteq\mathcal{X}_{K^pG_0}$ (resp. $V_{G_0}\subseteq\mathcal{X}_{K^pG_0}$). Let $V^i_{G_n}\subseteq\mathcal{X}_{K^pG_n}$ be the preimage of $V^i_{G_0}$. Clearly $\{V^i_{G_n}\}_{i=1,\cdots,l}$ forms a finite affinoid cover of $V_{G_n}$. Using the action of $\mathrm{GL}_2(\mathbb{Q}_p)$, we may assume $U\subseteq U_1$ and apply the construction in \ref{exs}. In particular, we have sections $x_n$'s and can define $A^n\subseteq \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)$ as in \ref{An}. Now suppose $\mathcal{I}$ is a subset of $\{1,\cdots,l\}$. We denote $\bigcap_{i\in \mathcal{I}} U^i$ by $U^{\mathcal{I}}$ and $\bigcap_{i\in \mathcal{I}} V^i_{G_{n}}$ by $V^\mathcal{I}_{G_n}$. ($U^{\emptyset}$ is understood as $U$ and $V^\emptyset_{G_n}=V_{G_n}$.) Using $x_n|_{U^\mathcal{I}}$, we can define $A^{\mathcal{I},n}\subseteq \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U^{\mathcal{I}})$ similarly. Suppose $\mathcal{J}\subseteq \mathcal{I}$.
Clearly there is a natural restriction map $A^{\mathcal{J},n}\to A^{\mathcal{I},n}$. Hence we can replace $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U^\mathcal{I})$ by $A^{\mathcal{I},n}$ in the construction of the \v{C}ech complex $C^\bullet(\mathfrak{U},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ and get a subcomplex $C^{\bullet}(\mathfrak{U},A^n)$. Since $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}\subset A^n \subset \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_{r(n)}-\mathrm{an}}$ and similar statements hold for $U^\mathcal{I}$, we have natural inclusions \[C^\bullet(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})\subseteq C^{\bullet}(\mathfrak{U},A^n)\subseteq C^\bullet(\mathfrak{U},\mathcal{O}^{G_{r(n)}-\mathrm{an},\chi}_{K^p}).\] Therefore it suffices to prove that for $i\geq 1$, \[H^i(C^{\bullet}(\mathfrak{U},A^n))=0.\] Fix a generator $s$ of $\omega^{n_1-n_2}|_{V_{G_{r(n)}}}$. Then $s|_{V^\mathcal{I}_{G_{r(n)}}}$ is a generator of $\omega^{n_1-n_2}|_{V^\mathcal{I}_{G_{r(n)}}}$. As in \eqref{exAn}, there is an isomorphism \[\left(\prod_{i=0}^\infty \mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n)}}}(V^\mathcal{I}_{G_{r(n)}})\right)\otimes_{\mathbb{Z}_p}\mathbb{Q}_p\cong A^{\mathcal{I},n}\] which is compatible with replacing $\mathcal{I}$ by a subset. Note that $\mathfrak{V}^n=\{V^i_{G_{r(n)}}\}_{i=1,\cdots,l}$ is a finite affinoid cover of the affinoid $V_{G_{r(n)}}$. Let $C^\bullet(\mathfrak{V}^n,\mathcal{O}^+)$ be the \v{C}ech complex for $\mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n)}}}$ with respect to $\mathfrak{V}^n$. Then the above isomorphism implies that \[\left(\prod_{i=0}^\infty C^\bullet(\mathfrak{V}^n,\mathcal{O}^+)\right)\otimes_{\mathbb{Z}_p}\mathbb{Q}_p\cong C^{\bullet}(\mathfrak{U},A^n).\] Our claim follows by noting that $H^i(C^\bullet(\mathfrak{V}^n,\mathcal{O}^+)),i\geq 1$ is annihilated by some power of $p$ by Tate's acyclicity result, cf. \cite[Proof of Lemma 4.3.14]{Pan20}.
\end{proof} \begin{para} \label{mathfrakU'} Here is how we are going to apply this result. Let $\mathfrak{U}=\{U_1,U_2\}$. Recall that $U_1$ (resp. $U_2$) is defined by $\|x\|\leq 1$ (resp. $\|x\|\geq 1$). Let $U'_1\subseteq{\mathscr{F}\!\ell}$ (resp. $U'_2\subseteq{\mathscr{F}\!\ell}$) be the affinoid open subset defined by $\|x\|\leq \|p^{-1}\|$ (resp. $\|x\|\geq \|p\|$) and $\mathfrak{U}'=\{U'_1,U'_2\}$. Then both $\mathfrak{U}$ and $\mathfrak{U}'$ are covers of ${\mathscr{F}\!\ell}$. Note that $U'_1,U'_2$ can also be obtained by applying the action of $\begin{pmatrix} p & 0\\ 0 & 1 \end{pmatrix}$ and its inverse to $U_1,U_2$. In particular, even though $U'_1,U'_2\notin \mathfrak{B}$, the results we proved for open subsets in $\mathfrak{U}$ are also valid for $U'_1,U'_2$. Take $G_0=1+p^mM_2(\mathbb{Z}_p)$ so that $U'_1,U_1,U_2,U'_2$ are $G_0$-stable. As in \ref{vCech}, we have the \v{C}ech complexes $C^\bullet(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p}), C^\bullet(\mathfrak{U'},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$ and their cohomology groups $\check{H}^i(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$, $\check{H}^i(\mathfrak{U}',\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$, where $G_n=G_0^{p^n}$. Restriction from $U'_1$ (resp. $U'_2$) to $U_1$ (resp. $U_2$) induces a natural map $\check{H}^i(\mathfrak{U'},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p}) \to \check{H}^i(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$. \end{para} \begin{cor} \label{CSarg} For any $n\geq 0$, there exists $m\geq n$ such that the image of \[\check{H}^1(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})\to \check{H}^1(\mathfrak{U},\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})\] is contained in the image of $\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})\to \check{H}^1(\mathfrak{U},\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})$. \end{cor} \begin{proof} This is a formal consequence of Proposition \ref{VCech}.
Equivalently, we need to find $m\geq n$ such that for any $s\in \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1\cap U_2)^{G_n-\mathrm{an}}$, there exist sections $s_i\in \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_i)^{G_m-\mathrm{an}}$, $i=1,2$ so that \[s+s_1|_{U_1\cap U_2}+s_2|_{U_1\cap U_2}\] can be extended to a section in $ \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U'_1\cap U'_2)^{G_m-\mathrm{an}}$. Consider the cover $\{U_1,U'_1\cap U_2\}$ of $U'_1$. Then Proposition \ref{VCech} implies that there exists $m'\geq n$ such that for any $s\in \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1\cap U_2)^{G_n-\mathrm{an}}$, we can find $s_1\in \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1)^{G_{m'}-\mathrm{an}}$ so that $s+s_1|_{U_1\cap U_2}$ can be extended to $U'_1\cap U_2$. Similarly, by applying Proposition \ref{VCech} to the cover $\{U_2,U'_1\cap U'_2\}$ of $U'_2$, we find $m\geq m'$ and $s_2\in \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{G_{m}-\mathrm{an}}$ such that $s+s_1|_{U_1\cap U_2}+s_2|_{U_1\cap U_2}$ can be extended to $U'_1\cap U'_2$, which is exactly what we want. \end{proof} \subsection{Hodge-Tate structure} \begin{para} \label{stpHT} Since modular curves $\mathcal{X}_{K^pK_p}$ are defined over $\mathbb{Q}_p$, there is a natural continuous action of $G_{\mathbb{Q}_p}$ on $\mathcal{X}_{K^p}$ which commutes with the action of $\mathrm{GL}_2(\mathbb{Q}_p)$. Hence it also acts on the sheaf $\mathcal{O}^{\mathrm{la},\chi}_{K^p}$, where $\chi=(n_1,n_2)$ is an integral weight. In this subsection, we mainly study this semilinear action with respect to the $C$-vector space structure on $\mathcal{O}^{\mathrm{la},\chi}_{K^p}$. Our reference here is \cite[Theorem 5.1.8]{Pan20} and its proof. Let $U\in\mathfrak{B}$. We keep the same notation as in \ref{exs}. Fix $G_0=1+p^mM_2(\mathbb{Z}_p)$ so that $\pi_{\mathrm{HT}}^{-1}(U)$ is the preimage of an affinoid subset $V_{G_0}\subset\mathcal{X}_{K^pG_0}$.
Let $G_n=G_0^{p^n}$. One useful observation is that \begin{itemize} \item $V_{G_0}$, hence also its preimage $V_{G_n}$ in $\mathcal{X}_{K^pG_n}$, are defined over a finite extension of $\mathbb{Q}_p$. \end{itemize} Indeed, this is clear when $U=U_1=\{\|x\|\leq 1\}$ or $U_2=\{\|x\|\geq 1\}$ because $\overbar\mathbb{Q}_p$ is dense in $C$, hence we can approximate $x$ by a section defined over a finite extension of $\mathbb{Q}_p$ on a finite level modular curve. A similar argument handles the case when $U$ is a rational subset of $U_1$ or $U_2$. In general, our assertion follows since, by the construction of $\mathfrak{B}$, $U$ is a finite intersection of rational subsets of $U_1$ and $U_2$, cf. \ref{exs}. Fix a finite extension $K$ of $\mathbb{Q}_p$ in $C$ over which $V_{G_0}$ is defined. Then $G_K$ acts continuously on the LB-space $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)$ and the $C$-Banach space $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}$. \end{para} \begin{para} \label{HTsetup} In order to state our result, we need a general discussion on semilinear representations of $G_K$ on $C$-Banach spaces. Let $K$ be a finite extension of $\mathbb{Q}_p$ in $C$. We denote by $K_\infty\subseteq C$ the maximal $\mathbb{Z}_p$-extension of $K$ in $K(\mu_{p^\infty})$. Equivalently, let $\varepsilon_p:G_{\mathbb{Q}_p}\to\mathbb{Z}_p^\times$ be the $p$-adic cyclotomic character. There is a natural decomposition $\mathbb{Z}_p^\times=(\mathbb{Z}/p\mathbb{Z})^\times \times(1+2p\mathbb{Z}_p)$. We denote the composite map $G_K\xrightarrow{\varepsilon_p}\mathbb{Z}_p^\times\to 1+2p\mathbb{Z}_p$ by $\tilde\varepsilon_p$. Let $H_K$ be the kernel of $\tilde\varepsilon_p$. Then $K_\infty=\overbar\mathbb{Q}_p{}^{H_K}$. Hence $\Gal(K_\infty/K)$ is (non-canonically) isomorphic to $\mathbb{Z}_p$. Suppose $W$ is a $C$-Banach space equipped with a continuous semilinear action of $G_{K}$.
Denote by \[W^{K}\subseteq W\] the subspace of $G_{K_\infty}$-fixed, $G_K$-analytic vectors \footnote{In the classical Sen theory for finite-dimensional $C$-representations, people usually consider $G_{K_\infty}$-fixed, $G_K$-finite vectors. As pointed out by Berger-Colmez in \S 3.1 of \cite{BC16}, the correct notion in general should be analytic vectors.} in $W$. This is naturally a $K$-Banach space. The $C$-Banach space structure on $W$ induces a map \[\varphi^K:C\widehat{\otimes}_K W^{K} \to W.\] We are particularly interested in $W$ satisfying that \begin{itemize} \item $\varphi^K$ is an isomorphism. \end{itemize} Then a standard argument using Tate's normalized trace shows that the natural map \[L\otimes_{K}W^K\xrightarrow{\cong} W^L\] is an isomorphism for any finite extension $L$ of $K$ in $C$. See for example the proof of \cite[Th\'eor\`eme 3.2]{BC16} \footnote{The $K_\infty$ in \cite{BC16} is different from our $K_\infty$ by a finite group in general. But the same argument works.}. In particular, $\varphi^L$ is an isomorphism for any finite extension $L$ of $K$. More generally, one can use Tate's normalized trace to prove the following result. \end{para} \begin{prop} \label{exeTtr} Let $X$ be a $K$-Banach space representation of $G_K$ such that $X^K=X$, i.e. the action of $G_{K_\infty}$ on $X$ is trivial and the induced action of $\Gal({K_\infty}/K)$ is analytic. Then $(C\widehat\otimes_K X)^K=X$. \end{prop} \begin{proof} Exercise. \end{proof} \begin{cor} \label{strHT} Let $K$ be a finite extension of $\mathbb{Q}_p$ in $C$. Let $W_1,W_2$ be $C$-Banach spaces equipped with continuous semilinear actions of $G_{K}$. Suppose that the natural maps \[\varphi_i^K:C\widehat{\otimes}_K W_i^K \to W_i,i=1,2,\] are isomorphisms. Let $\phi:W_1\to W_2$ be a $G_{\mathbb{Q}_p}$-equivariant, $C$-linear continuous map. Then \begin{enumerate} \item $\ker(\phi)= C\widehat\otimes_K \ker(\phi)^K$. \item Suppose $\phi$ is a closed embedding.
Let $Z=W_2/\phi(W_1)$ equipped with the quotient topology. Then $Z= C\widehat\otimes_K Z^K$. \item In general, suppose $\phi$ is strict. Let $Y=W_2/\phi(W_1)$ equipped with the quotient topology. Then $Y= C\widehat\otimes_K Y^K$. \end{enumerate} \end{cor} \begin{proof} For the first two parts, in view of the previous proposition, it suffices to prove that if we have an exact sequence \[0\to M'\to M\xrightarrow{f} M''\] of $K$-Banach spaces with continuous homomorphisms, then \[0\to C\widehat\otimes_K M' \to C\widehat\otimes_K M\xrightarrow{1\otimes f} C\widehat\otimes_K M''\] is still exact, and moreover, $1\otimes f$ is surjective if $f$ is surjective. This is clear by choosing an orthonormal basis of $C$ over $K$. The last part follows from the first two parts. \end{proof} \begin{para} Back to the setup in \ref{HTsetup}. By definition, there is a natural action of the Lie algebra $\Lie(\Gal({K_\infty}/K))$ on $W^K$. Note that we can naturally identify $\Lie(\Gal({K_\infty}/K))$ with $\Lie(\mathbb{Z}_p^\times)=\mathbb{Q}_p$ via $\varepsilon_p$. The action of $1\in\mathbb{Q}_p\cong \Lie(\Gal({K_\infty}/K))$ is classically called the Sen operator. Clearly the Sen operator is $K$-linear. \end{para} \begin{defn} \label{HTg} Let $K$ be a finite extension of $\mathbb{Q}_p$ in $C$ and let $W$ be a $C$-Banach space equipped with a continuous semilinear action of $G_{K}$. \begin{enumerate} \item We say $W$ is Hodge-Tate of weight $\{w_1,\cdots,w_n\}\subseteq\mathbb{Z}$ if there exists a finite extension $L$ of $K$ in $C$ such that \begin{itemize} \item $\varphi^L:C\widehat{\otimes}_L W^{L} \to W$ is an isomorphism; \item The action of the Sen operator $1\in\mathbb{Q}_p\cong \Lie(\Gal(L_\infty/L))$ on $W^L$ is semi-simple with eigenvalues $-w_1,\cdots,-w_n$, i.e. there is a natural decomposition \[W^L=W^L_{-w_1}\oplus\cdots\oplus W^L_{-w_n}\] such that the Sen operator acts on $W^L_{-w_i}$ via multiplication by $-w_i$ for $i=1,\cdots,n$. All $W^{L}_{-w_i}$ are non-zero.
\end{itemize} Moreover, we have a decomposition for $W$ if it is Hodge-Tate of weight $\{w_1,\cdots,w_n\}$: \[W=W_{-w_1}\oplus\cdots\oplus W_{-w_n},\] where $W_{-w_i}=C\widehat\otimes_{L}W^L_{-w_i}\subseteq W$. It is easy to see that $W_{-w_i}$ is Hodge-Tate of weight $w_i$ and does not depend on the choice of $L$. \item We say $W$ is Hodge-Tate if $W$ is Hodge-Tate of some weight $\{w_1,\cdots,w_n\}\subseteq\mathbb{Z}$ in the above sense. \end{enumerate} \end{defn} \begin{rem} \label{clHT} Keep the same notation as in Definition \ref{HTg}. Comparing with the classical definition of Hodge-Tate representations, we remark that $W$ is Hodge-Tate of weight $0$ if and only if \[W= C\widehat\otimes_{K} W^{G_K}.\] Indeed, suppose $W=C \widehat\otimes_L W^L_0$ for some finite Galois extension $L$ of $K$. Then $\Gal(L_\infty/L)$ acts trivially on $W^L_0$ as this action is analytic and its infinitesimal action is trivial. Hence $W^L_0=W^{G_L}$. By Galois descent, we have \[W^{G_L}=L\otimes_K (W^{G_L})^{G_K}=L\otimes_K W^{G_K}.\] It follows that $W= C\widehat\otimes_{K} W^{G_K}$. More generally, $W$ is Hodge-Tate of weight $w\in\mathbb{Z}$ if and only if its $w$-th Tate twist $W(w)$ is Hodge-Tate of weight $0$, equivalently, \[W=C(-w) \widehat\otimes_K W(w)^{G_K}. \] In general, $W$ is Hodge-Tate of weight $\{w_1,\cdots,w_n\}\subseteq\mathbb{Z}$ if and only if \[W=\bigoplus_{i=1}^n C(-w_i) \widehat\otimes_K W(w_i)^{G_K}.\] It recovers the classical notion of Hodge-Tate representations when $W$ is finite-dimensional. This also implies that \[W=C\widehat\otimes_{K} W^K\] if $W$ is Hodge-Tate. \end{rem} Now we can state our main results. \begin{prop} \label{GnanHT} Let $\chi=(n_1,n_2)$ be an integral weight and $U\in\mathfrak{B}$ as in \ref{stpHT}. Then $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}$ is Hodge-Tate of weight $-n_2$ for any $n\geq 0$. \end{prop} \begin{proof} We may assume $U\subseteq U_1$. Keep the same notation as in \ref{stpHT}.
In particular, $V_{G_n}$ is defined over some finite extension $K$ of $\mathbb{Q}_p$. We may simply assume $K$ contains $\mu_{p^2}$, hence $K_\infty=K(\mu_{p^\infty})$. Now choose $x_n\in H^0(V_{G_{r(n)}},\mathcal{O}_{\mathcal{X}_{K^pG_{r(n)}}})$ as in \ref{exs}. Since $\overbar\mathbb{Q}_p$ is dense in $C$, after possibly replacing $K$ by a finite extension, $x_n$ can be chosen to be defined over $K$. Then we have $A^n \subset \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)$ as in \ref{An}. It follows from the construction that $G_K$ acts on $\mathrm{t},e_1,e_2$ via the cyclotomic character $\varepsilon_p$. Hence $G_K$ acts trivially on $x$. This implies that $A^n$ is $G_K$-stable. \begin{lem} \label{AnHT} $A^n$ is Hodge-Tate of weight $-n_2$. \end{lem} \begin{proof} Recall that $A^n$ consists of elements of the form $\mathrm{t}^{n_1}e_1^{n_2-n_1}\sum_{i=0}^{+\infty}c_i(x-x_n)^i$ for some $c_i\in H^0(V_{G_{r(n)}},\omega^{n_1-n_2})$ such that $\|e_1^{n_2-n_1}c_i p^{(n-1)i}\|$ is uniformly bounded. Since $V_{G_{r(n)}}$ and $\omega^{n_1-n_2}$ are defined over $K$, we have \[H^0(V_{G_{r(n)}},\omega^{n_1-n_2})=C\widehat\otimes_K H^0(V_{G_{r(n)}},\omega^{n_1-n_2})^{G_K}.\] Let $A^{n,K}\subseteq A^n$ be the subset of $f=\mathrm{t}^{n_1}e_1^{n_2-n_1}\sum_{i=0}^{+\infty}c_i(x-x_n)^i$ with $c_i\in H^0(V_{G_{r(n)}},\omega^{n_1-n_2})^{G_K}$. This is a $K$-Banach space and $G_K$ acts on it via $\varepsilon_p^{n_1}\cdot \varepsilon_p^{n_2-n_1}=\varepsilon_p^{n_2}$. It follows from Proposition \ref{exeTtr} that \[A^n = C\widehat\otimes_K A^{n,K}\] and by definition, $A^n$ is Hodge-Tate of weight $-n_2$. \end{proof} Note that $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}\subseteq A^n$. By the previous lemma, we have \[\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}=(A^n)^{G_n-\mathrm{an}}=(C\widehat\otimes_K A^{n,K})^{G_n-\mathrm{an}}=C\widehat\otimes_K (A^{n,K})^{G_n-\mathrm{an}}\] where the last identity is clear by choosing an orthonormal basis of $C$ over $K$.
Our claim follows as $G_K$ acts on $A^{n,K}$ via $\varepsilon_p^{n_2}$. \end{proof} The following result will be used later in a Cartan-Serre type argument. Roughly speaking, this is a generalization of the well-known fact in analytic geometry that relatively compact inclusions between $\mathbb{Q}_p$-affinoid spaces induce compact operators between $\mathbb{Q}_p$-affinoid algebras. \begin{prop} \label{strcomp} Let $\chi=(n_1,n_2)$ be an integral weight. Let $U,U'\in \mathfrak{B}$ be $G_0=1+p^lM_2(\mathbb{Z}_p)$-stable open subsets of ${\mathscr{F}\!\ell}$, with $l\geq 2$, such that $\pi_\mathrm{HT}^{-1}(U),\pi_\mathrm{HT}^{-1}(U')$ are the preimages of affinoid subsets $V_{G_0},V'_{G_0}\subseteq \mathcal{X}_{K^pG_0}$. Suppose that $V_{G_0}$ and $V'_{G_0}$ are defined over a finite extension $K$ of $\mathbb{Q}_p$ in $C$. Assume that \begin{itemize} \item $U$ is a strict neighborhood of $U'$, i.e. $\overbar{U'}\subseteq U$ as adic spaces. \end{itemize} Then \begin{enumerate} \item $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}=C\widehat\otimes_K \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an},K}$ for any $n\geq 0$, where $G_n=G_0^{p^n}$; \item $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U')^{G_n-\mathrm{an}}=C\widehat\otimes_K \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U')^{G_n-\mathrm{an},K}$ for any $n\geq 0$; \item For any $n\geq0 $, the restriction map $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an},K}\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U')^{G_m-\mathrm{an},K}$ is compact for $m\geq n$ sufficiently large. \end{enumerate} \end{prop} \begin{rem} This proposition roughly says that the restriction map $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U')^{G_m-\mathrm{an}}$ is essentially the completed tensor product of a compact operator with $C$. \end{rem} \begin{proof} The first two claims follow from Lemma \ref{AnHT} and Remark \ref{clHT}. It remains to prove the last assertion.
Note that we are free to replace $K$ by a finite extension. Indeed, suppose $L$ is a finite extension of $K$ in $C$. Then $ \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an},K}$ is a closed subspace of $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an},L}$ because \[ \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an},L}=L\otimes_K \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an},K},\] cf. the discussion above Proposition \ref{exeTtr}. We may assume $U\subseteq U_1$. Denote by $V_{G_{n}}$ (resp. $V'_{G_n}$) the preimage of $V_{G_0}$ (resp. $V'_{G_0}$) in $\mathcal{X}_{K^pG_n}$. As in \ref{exs}, we can choose $x_n\in H^0(V_{G_{r(n)}},\mathcal{O}_{\mathcal{X}_{K^pG_{r(n)}}})$ and define $A^n\subseteq \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)$. Using $x_n|_{V'_{G_{r(n)}}}$, we can define $A'{}^{n}\subseteq \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U')$ similarly. Then $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an}}\subseteq A^n$ and $A'{}^n \subseteq \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U')^{G_{r(n)}-\mathrm{an}}$. Fix $n\geq 0$. After possibly replacing $K$ by a finite extension, we may assume that $V_{G_0},V'_{G_0},x_n,x_{n+1}$ are defined over $K$. It suffices to prove that the natural map \[f:A^{n,K}\to A'{}^{n+1,K}\] is compact. Indeed, this will imply that $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)^{G_n-\mathrm{an},K}\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U')^{G_m-\mathrm{an},K}$ is compact with $m=r(n+1)$. To see the compactness of $f$, we fix a generator $s$ of $\omega^{n_1-n_2}|_{V_{G_{r(n)}}}$ defined over $K$.
By \ref{exAn[[]]} and the proof of Lemma \ref{AnHT}, there is an isomorphism \[A^{n,K}\cong \mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n)}}}(V_{G_{r(n)}})^{G_K}[[\frac{x-x_n}{p^{n-1}}]] \otimes_{\mathbb{Z}_p} \mathbb{Q}_p.\] Similarly, using the pull-back of $s$ to $V_{G_{r(n+1)}}$, we have \[A'{}^{n+1,K}\cong \mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n+1)}}}(V'_{G_{r(n+1)}})^{G_K}[[\frac{x-x_{n+1}}{p^{n}}]] \otimes_{\mathbb{Z}_p} \mathbb{Q}_p.\] By our assumption, $U$ is a strict neighborhood of $U'$. Hence $\overbar{\pi_{\mathrm{HT}}^{-1}(U')}\subseteq \pi_\mathrm{HT}^{-1}(U)$. Both are $G_{r(n)}$-stable subsets. Note that $\displaystyle \pi_\mathrm{HT}^{-1}(U)\sim\varprojlim_{l}V_{G_l}$ and $V_{G_l}$ is the quotient of $V_{G_k}$ by $G_l/G_k$ for $k\geq l$. Hence $V_{G_{r(n)}}=\pi_\mathrm{HT}^{-1}(U)/G_{r(n)}$ as topological spaces. From this, we conclude that $V_{G_{r(n)}}$ is a strict neighborhood of $V'_{G_{r(n)}}$. Therefore \[\mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n)}}}(V_{G_{r(n)}})^{G_K}\to \mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n+1)}}}(V'_{G_{r(n+1)}})^{G_K}\] is compact because modular curves are of finite type. On the other hand, we have \[\frac{x-x_n}{p^{n-1}}=p\left(\frac{x-x_{n+1}}{p^n}\right)+\frac{x_{n+1}-x_n}{p^{n-1}}.\] By our choice of $x_n$ in \ref{exs}, \[\|x_{n+1}-x_{n}\|\leq\max\{\|x-x_n\|,\|x-x_{n+1}\|\} \leq p^{-n}.\] In particular, $\frac{x_{n+1}-x_n}{p^{n-1}}\in p \mathcal{O}^+_{\mathcal{X}_{K^pG_{r(n+1)}}}(V'_{G_{r(n+1)}})^{G_K}$. Hence $\frac{x-x_n}{p^{n-1}}$ is topologically nilpotent in $A'{}^{n+1,K}$. This shows that $f:A^{n,K}\to A'{}^{n+1,K}$ is compact, cf. Example \ref{compexa}. \end{proof} \subsection{A digression: topology on the cohomology of \texorpdfstring{$\mathcal{O}^{\mathrm{la},\chi}_{K^p}$}{Lg}} \label{LBH} \begin{para} \label{Cech} $H^0(U,\mathcal{O}^{\mathrm{la},\chi}_{K^p})=H^0(U,\mathcal{O}_{K^p})^{\mathrm{la},\chi}$ is a Hausdorff LB-space if $U\in\mathfrak{B}$. 
The goal of this subsection is to define such a topology on the cohomology $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$. We first consider the cohomology group $H^i({\mathscr{F}\!\ell},\mathcal{O}^+_{K^p})$, where $\mathcal{O}^+_{K^p}={\pi_{\mathrm{HT}}}_*\mathcal{O}^+_{\mathcal{X}_{K^p}}$. By Scholze's work, $H^i({\mathscr{F}\!\ell},\mathcal{O}^+_{K^p})$ is isomorphic to the $\mathcal{O}_C$-coefficient completed cohomology as almost $\mathcal{O}_C$-modules, cf. \cite[Corollary 4.4.3]{Pan20} and the proof of Theorem 4.4.6 \textit{ibid.}. In particular, $H^i({\mathscr{F}\!\ell},\mathcal{O}^+_{K^p})$ has bounded $p$-torsion and $H^i({\mathscr{F}\!\ell},\mathcal{O}_{K^p})$ is a $C$-Banach space. Recall that $U_1$ (resp. $U_2$) denotes the affinoid open subset $\{\|x\|\leq1\}$ (resp. $\{\|x\|\geq1\}$) in ${\mathscr{F}\!\ell}$. Let $U_{12}=U_1\cap U_2$. As proved in the proof of Theorem 4.4.6 of \cite{Pan20}, the cohomology $H^\bullet({\mathscr{F}\!\ell},\mathcal{O}^+_{K^p})$ can be computed by the \v{C}ech complex \[\mathcal{O}^+_{K^p}(U_1)\oplus \mathcal{O}^+_{K^p}(U_2)\to \mathcal{O}^+_{K^p}(U_{12}).\] Therefore after inverting $p$, we see that the homomorphism in the \v{C}ech complex \[C^\bullet:\mathcal{O}_{K^p}(U_1)\oplus \mathcal{O}_{K^p}(U_2)\to \mathcal{O}_{K^p}(U_{12})\] is strict. \end{para} \begin{lem} \label{rescle} The restriction map $\mathcal{O}_{K^p}(U_1)\to \mathcal{O}_{K^p}(U_{12})$ is a closed embedding. \end{lem} \begin{proof} First we show that this map is injective. Suppose $f\in\ker\left(\mathcal{O}_{K^p}(U_1)\to \mathcal{O}_{K^p}(U_{12})\right)$. Then $f$ can be extended to a global section $\tilde{f}\in H^0({\mathscr{F}\!\ell},\mathcal{O}_{K^p})$ with $\tilde{f}|_{U_2}=0$ (and $\tilde{f}|_{U_1}=f$). Since $H^0({\mathscr{F}\!\ell},\mathcal{O}_{K^p})$ is isomorphic to the $C$-coefficient completed cohomology $\tilde{H}^0(K^p,C)$ by \cite[Theorem 4.4.6]{Pan20}, on which the action of $\mathrm{SL}_2(\mathbb{Q}_p)$ is trivial, cf. 
\cite[4.2]{Eme06}, we conclude that $\tilde{f}$ is fixed by $\mathrm{SL}_2(\mathbb{Q}_p)$. However $\mathrm{SL}_2(\mathbb{Q}_p)\cdot U_2={\mathscr{F}\!\ell}$. Hence $f=0$. It remains to show the strictness of $\mathcal{O}_{K^p}(U_1)\to \mathcal{O}_{K^p}(U_{12})$. We use the Weyl group. \begin{lem} \label{keyweyl} Let $i:A\to B$ be a continuous map of $\mathbb{Q}_p$-Banach spaces. Let $w$ be a continuous involution on $B$. Then $i$ is strict if and only if $(i,w\circ i):A\oplus A\to B$ is strict. \end{lem} \begin{proof} Suppose $(i,w\circ i):A\oplus A\to B$ is strict, then $i$ is strict as it is the composite $A\xrightarrow{x\mapsto (x,0)} A\oplus A\xrightarrow{(i,w\circ i)} B$. Now suppose $i$ is strict. There is a continuous involution $w'$ on $A\oplus A$ sending $(x_1,x_2)$ to $(x_2,x_1)$. It is clear that $(i,w\circ i)$ intertwines $w'$ and $w$. Hence we only need to prove that the induced maps $(A\oplus A)^{w'=1}\to B^{w=1}$ and $(A\oplus A)^{w'=-1}\to B^{w=-1}$ are strict. Under the isomorphism $A\xrightarrow{x\mapsto (x,x)}(A\oplus A)^{w'=1}$, one can check easily that $(A\oplus A)^{w'=1}\to B^{w=1}$ agrees with the composite \[A\xrightarrow{i} B \xrightarrow{1+w} B^{w=1}.\] Note that $(1+w)/2$ is the projection map of $B$ onto $B^{w=1}$ hence a strict homomorphism. It follows from our assumption that $(1+w)\circ i$ is strict. Similarly $(A\oplus A)^{w'=-1}\to B^{w=-1}$ is strict as it essentially agrees with $(1-w)\circ i$. \end{proof} We apply this lemma with $A=\mathcal{O}_{K^p}(U_1)$, $B=\mathcal{O}_{K^p}(U_{12})$, $i$ being the restriction map and $w=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\in\mathrm{GL}_2(\mathbb{Q}_p)$. In this case, $(i,w\circ i):A\oplus A\to B$ is nothing but the \v{C}ech complex $\mathcal{O}_{K^p}(U_1)\oplus \mathcal{O}_{K^p}(U_2)\to \mathcal{O}_{K^p}(U_{12})$ up to a sign and we have seen this is a strict complex. Hence $\mathcal{O}_{K^p}(U_1)\to \mathcal{O}_{K^p}(U_{12})$ is strict. 
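To spell out the identification used above (a routine check, with the conventions of this paper): $w$ maps $U_1$ isomorphically onto $U_2$ and stabilizes $U_{12}$, so it identifies $\mathcal{O}_{K^p}(U_2)$ with $A=\mathcal{O}_{K^p}(U_1)$. Under this identification, the \v{C}ech differential $(f,g)\mapsto f|_{U_{12}}-g|_{U_{12}}$ becomes \[(f,g)\mapsto i(f)-(w\circ i)(g),\] which agrees with $(i,w\circ i)$ up to the sign of the second component.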
\end{proof} \begin{para} \label{Celachi} Let $G_0=1+p^2M_2(\mathbb{Z}_p)\subseteq \mathrm{GL}_2(\mathbb{Z}_p)$ and $G_n=G_0^{p^n}$. There are unitary continuous actions of $G_n$ on $\mathcal{O}_{K^p}(U_1)$ and $\mathcal{O}_{K^p}(U_{12})$. Then Lemma \ref{rescle} and Lemma \ref{Gnanprecle} imply that \[\mathcal{O}_{K^p}(U_1)^{G_n-\mathrm{an}}\to \mathcal{O}_{K^p}(U_{12})^{G_n-\mathrm{an}}\] is also a closed embedding. Applying Lemma \ref{keyweyl} to this map, we see that the complex obtained by taking $G_n$-analytic vectors in $C^{\bullet}$ \[\mathcal{O}_{K^p}(U_1)^{G_n-\mathrm{an}}\oplus \mathcal{O}_{K^p}(U_2)^{G_n-\mathrm{an}}\to \mathcal{O}_{K^p}(U_{12})^{G_n-\mathrm{an}}\] is still a strict complex. Similarly, let $\chi$ be a weight of $\mathfrak{h}$. Then $\mathcal{O}_{K^p}(U_1)^{G_n-\mathrm{an},\chi}\to \mathcal{O}_{K^p}(U_{12})^{G_n-\mathrm{an},\chi}$ is a closed embedding. We get a strict homomorphism \[\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1)^{G_n-\mathrm{an}}\oplus \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{G_n-\mathrm{an}}\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_{12})^{G_n-\mathrm{an}}.\] Therefore the cohomology groups of this complex are also Banach spaces. If we pass to the inductive limit over $n$, we get the \v{C}ech complex \[\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1)\oplus \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_{12}).\] Note that this actually computes the cohomology $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$. Indeed, the proof of Theorem 4.4.6 of \cite{Pan20} says that the \v{C}ech cohomology $\check{H}^j(U,\mathcal{O}^{\mathrm{la}}_{K^p})=0$ for any $j>0$ and $U\in\mathfrak{B}$. By \cite[Theorem 5.1.2.(1)]{Pan20}, this shows that $\check{H}^j(U,\mathcal{O}^{\mathrm{la},\chi}_{K^p})=0,~j>0,~U\in\mathfrak{B}$. 
Hence by the same argument used in the proof of Theorem 4.4.6 of \cite{Pan20}, $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ can be computed by the \v{C}ech cohomology. \end{para} \begin{defn} \label{chGanchi} We define $\check{H}^\bullet(\mathcal{O}^{G_n-\mathrm{an}}_{K^p})$ to be the cohomology of \[\mathcal{O}_{K^p}(U_1)^{G_n-\mathrm{an}}\oplus \mathcal{O}_{K^p}(U_2)^{G_n-\mathrm{an}}\to \mathcal{O}_{K^p}(U_{12})^{G_n-\mathrm{an}}\] and $\check{H}^\bullet(\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$ (which was denoted by $\check{H}^i(\mathfrak{U},\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$ in \ref{vCech}) to be the cohomology of \[\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1)^{G_n-\mathrm{an}}\oplus \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{G_n-\mathrm{an}}\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_{12})^{G_n-\mathrm{an}}.\] These are natural Banach spaces over $C$ equipped with an analytic action of $G_n$. This defines an LB-space structure on $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ by \[H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})=\varinjlim_n \check{H}^i(\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p}).\] \end{defn} \begin{rem} \label{LBstreqv} Similarly, we can define an LB-space structure on $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la}}_{K^p})$ by \[H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la}}_{K^p})=\varinjlim_{n}\check{H}^i(\mathcal{O}^{G_n-\mathrm{an}}_{K^p}).\] On the other hand, it follows from \cite[Theorem 4.4.6]{Pan20} that we can define another LB-space structure by \[H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la}}_{K^p})\cong H^i({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^\mathrm{la} =\varinjlim_n H^i({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an}}.\] Not surprisingly, both LB-space structures are equivalent. 
This is clear when $i=0$ because the natural map \[\check{H}^0(\mathcal{O}^{G_n-\mathrm{an}}_{K^p})\to H^0({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an}}\] is an isomorphism. When $i=1$, the natural map $\check{H}^1(\mathcal{O}^{G_n-\mathrm{an}}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an}}$ is continuous. It suffices to prove that for $n\geq0$, there exist $m\geq n$ and a continuous map \[H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an}}\to \check{H}^1(\mathcal{O}^{G_m-\mathrm{an}}_{K^p})\] whose composite with $\check{H}^1(\mathcal{O}^{G_m-\mathrm{an}}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_m-\mathrm{an}}$ agrees with the natural inclusion $H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an}}\subseteq H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_m-\mathrm{an}}$. This follows from the strong $\mathfrak{LA}$-acyclicity of $\mathcal{O}_{K^p}(U_i)$ and $H^0({\mathscr{F}\!\ell},\mathcal{O}_{K^p})$, cf. the beginning of the proof of Theorem 5.1.11 of \cite{Pan20}. \end{rem} \begin{rem} We can also equip $H^i({\mathscr{F}\!\ell},\omega^{k,\mathrm{la},\chi}_{K^p})$ with a topology similar to that of Definition \ref{chGanchi}. This will not be used in this paper but might be of some independent interest. Again we need to show that the \v{C}ech complex associated to the cover $\{U_1,U_2\}$ \[{\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_1)\oplus {\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_2)\to {\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_{12})\] is strict. The topology on ${\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_?),?=1,2,12$ is defined as follows: by choosing an invertible section of $\omega^k_{K^p}$ on $\pi_{\mathrm{HT}}^{-1}(U_?)$, we identify ${\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_?)$ with $\mathcal{O}_{K^p}(U_?)$ and thus define a Banach space structure on ${\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_?)$. 
It is easy to see that the topology obtained in this way does not depend on the choice of the invertible section: two invertible sections differ by multiplication by an invertible function, which induces a bicontinuous automorphism of $\mathcal{O}_{K^p}(U_?)$. By Lemma \ref{keyweyl} again, we see that \[{\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_1)\oplus {\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_2)\to {\pi_{\mathrm{HT}}}_*(\omega^k_{K^p})(U_{12})\] is strict. From this, we can proceed as before. Note that this argument shows that we can put a Banach space structure on $H^i({\mathscr{F}\!\ell},{\pi_{\mathrm{HT}}}_*(\omega^k_{K^p}))$. However the action of $\mathrm{GL}_2(\mathbb{Q}_p)$ is only continuous, not unitary in general. \end{rem} \begin{rem} One can also use Lemma \ref{keyweyl} to prove the claim in Remark 5.1.14 of \cite{Pan20} without any calculations. \end{rem} \begin{para} \label{GKcoh} As explained in \ref{stpHT}, the Galois group $G_K$ acts naturally on $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_i),i=1,2$ for some finite extension $K$ of $\mathbb{Q}_p$ in $C$. Assume $\chi=(n_1,n_2)$ is integral. We have seen in Proposition \ref{GnanHT} that $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_?)^{G_n-\mathrm{an}}, ?=1,2,12$ are Hodge-Tate of weight $-n_2$. \end{para} \begin{prop} \label{chcohHT} $\check{H}^\bullet(\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$ is Hodge-Tate of weight $-n_2$. \end{prop} \begin{proof} This follows directly from Corollary \ref{strHT}. \end{proof} We now return to $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$. Here are the main results of this subsection. \begin{prop} \label{Haus} $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ is a Hausdorff LB-space. \end{prop} \begin{proof} The continuous inclusion $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_{12})\subseteq \mathcal{O}_{K^p}(U_{12})$ induces a natural continuous map \[H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})\to H^i({\mathscr{F}\!\ell},\mathcal{O}_{K^p}).\] By \cite[Corollary 5.1.3.(2)]{Pan20}, this is injective unless $i=1$ and $\chi(h)=0$, where $h=\begin{pmatrix} 1& 0\\ 0&-1\end{pmatrix}$. 
Hence $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ is Hausdorff except in this case, as $H^i({\mathscr{F}\!\ell},\mathcal{O}_{K^p})$ is a $C$-Banach space. By Remark \ref{tn1}, it remains to show that $\displaystyle \varinjlim_n \check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})$ is Hausdorff. We need several lemmas. By the discussion in \ref{GKcoh}, $G_K$ acts on $\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p}),n\geq 0$ for some finite extension $K$ of $\mathbb{Q}_p$ in $C$. \begin{lem} For $n\geq 0$, there is a natural isomorphism \[\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})\cong \check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})^{G_{K}}\widehat\otimes_{K}C.\] \end{lem} \begin{proof} This follows from Proposition \ref{chcohHT} and Remark \ref{clHT}. \end{proof} \begin{lem} \label{Lemcom} For $m\geq n$, we denote by $i_{n,m}$ the transition map \[\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})^{G_{K}}\to \check{H}^1(\mathcal{O}^{G_m-\mathrm{an},(0,0)}_{K^p})^{G_{K}}.\] Then \begin{enumerate} \item $i_{n,m}$ is compact if $m$ is sufficiently large. \item $\ker(i_{n,m})$ stabilizes when $m$ is sufficiently large, i.e. \[\ker(i_{n,m})=\ker\left(\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})^{G_{K}}\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})\right),~~~~\mbox{if } m\gg n.\] Equivalently, $\ker\left(\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})^{G_{K}}\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})\right)$ is a closed subspace. \end{enumerate} \end{lem} Assuming this lemma at the moment, we finish the proof of Proposition \ref{Haus}. Denote by $V_n$ the image of $\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})^{G_{K}}\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})$ equipped with the quotient topology. 
Then it follows from the previous two lemmas that \[\varinjlim_n V_n\widehat\otimes_K C\cong \varinjlim_n \check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})\] as locally convex inductive limits. Note that $V_n\to V_m$ is injective and compact for sufficiently large $m\geq n$. We can apply Corollary \ref{comWHau} here with $W=C$ and conclude that $\displaystyle \varinjlim_n \check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})$ is Hausdorff. \end{proof} \begin{proof}[Proof of Lemma \ref{Lemcom} (1)] We will use a Cartan-Serre type argument and invoke Lemma \ref{CStarg}. Our starting point is that $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ can be computed by another cover $\mathfrak{U}'$ of ${\mathscr{F}\!\ell}$ introduced in \ref{mathfrakU'}. Recall that $\mathfrak{U}'=\{U_1',U_2'\}$ with $U'_1$ (resp. $U'_2$) defined by $\|x\|\leq\|p^{-1}\|$ (resp. $\|x\|\geq \|p\|$). Then we can redo everything in this subsection with $U_1,U_2$ replaced by $U'_1,U'_2$. As in \ref{vCech}, we denote the cokernel of \[\mathcal{O}_{K^p}(U'_1)^{G_n-\mathrm{an}}\oplus \mathcal{O}_{K^p}(U'_2)^{G_n-\mathrm{an}}\to \mathcal{O}_{K^p}(U'_{1}\cap U'_{2})^{G_n-\mathrm{an}}\] by $\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$. Then $\displaystyle H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})=\varinjlim_n \check{H}^1(\mathfrak{U}',\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})$. 
Also $G_K$ acts on these cohomology groups and there are natural isomorphisms \[\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})=\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})^{G_K}\widehat\otimes_K C.\] Moreover, we have natural restriction maps \[\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p}) \to \check{H}^1(\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p}).\] By Corollary \ref{CSarg}, the image of $\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p}) \to \check{H}^1(\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})$ contains the image of $\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p}) \to \check{H}^1(\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})$ for sufficiently large $m\geq n$. We claim that this is also true for the $G_K$-invariants, i.e. \[\im\left( \check{H}^1(\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})^{G_K} \to \check{H}^1(\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})\right) \subseteq \im\left(\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})^{G_K} \to \check{H}^1(\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})\right).\] Indeed, by the last part of Corollary \ref{strHT}, the image of $\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p}) \to \check{H}^1(\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})$, equipped with the quotient topology, is Hodge-Tate of weight $0$. Note that $U'_1\cap U'_2$ is a strict neighborhood of $U_1\cap U_2$. Hence by Proposition \ref{strcomp}, $\mathcal{O}_{K^p}(U'_{1}\cap U'_{2})^{G_n-\mathrm{an},G_K}\to \mathcal{O}_{K^p}(U_{1}\cap U_{2})^{G_m-\mathrm{an},G_K}$ is compact for sufficiently large $m\geq n$. Therefore \[\check{H}^1(\mathfrak{U}',\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})^{G_K} \to \check{H}^1(\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})^{G_K}\] is compact for $m\gg n$. 
Finally, our claim follows from Lemma \ref{CStarg} with $V_i= \check{H}^1(\mathfrak{U}',\mathcal{O}^{G_i-\mathrm{an},\chi}_{K^p})^{G_K}$ and $W_i= \check{H}^1(\mathcal{O}^{G_i-\mathrm{an},\chi}_{K^p})^{G_K}$. \end{proof} \begin{proof}[Proof of Lemma \ref{Lemcom} (2)] Fix $n\geq 0$. It suffices to prove that \[U:=\ker\left(\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})\right)\] is a closed subspace. By \cite[Corollary 5.1.3.(2)]{Pan20}, there is a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant exact sequence \begin{eqnarray} \label{chio} ~\,~\,~\,~\,~\,~\,~\,~\, 0\to \varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^0(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}}) \to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}) \to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},(0,0)}\to 0 \end{eqnarray} where $K_p$ runs through all open compact subgroups of $\mathrm{GL}_2(\mathbb{Q}_p)$. Recall that \[H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})=\varinjlim_n \check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p}).\] Let $V$ be the kernel of $\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},(0,0)}$. Then $V$ is closed and contains $U$. We claim that $V/U$ is a finite-dimensional $C$-vector space. This will imply our claim by a standard argument using the open mapping theorem. Note that $V/U$ naturally embeds into $\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^0(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})$. Since the action of $\mathrm{GL}_2(\mathbb{Q}_p)$ on $\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^0(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})$ is smooth, the action of $G_n$ on $V/U$ is also smooth. On the other hand, the action of $G_n$ on $\check{H}^1(\mathcal{O}^{G_n-\mathrm{an},(0,0)}_{K^p})$ is analytic. 
It is easy to see that $G_{n+1}$ acts trivially on $V/U$. Now $V/U$ is finite-dimensional because \[ \left(\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^0(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})\right)^{G_{n+1}}= H^0(\mathcal{X}_{K^pG_{n+1}},\mathcal{O}_{\mathcal{X}_{K^pG_{n+1}}})\] is finite-dimensional. \end{proof} Now we can talk about $G_n$-analytic vectors in $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$. Our next result gives control over these vectors. \begin{prop} \label{contrGnan} For any $n\geq0$, the subspace of $G_n$-analytic vectors $H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{G_n-\mathrm{an}}$ is contained in the image of $\check{H}^i(\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})\to H^i({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ for some $m\geq n$. \end{prop} \begin{proof} Note that $\check{H}^0(\mathcal{O}^{G_n-\mathrm{an},\chi}_{K^p})=H^0({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an},\chi}=H^0({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{G_n-\mathrm{an}}$ and there is no $H^i$ for $i\geq2$ as ${\mathscr{F}\!\ell}$ is one-dimensional. Hence we only need to consider the case $i=1$. For $m\geq 0$, denote by $f_m$ the composite map \[f_m:\check{H}^1(\mathcal{O}^{G_{m}-\mathrm{an},\chi}_{K^p}) \to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},\chi}.\] First we claim that $H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an},\chi}$ is contained in the image of $f_{m'}$ for some $m'\geq n$. Indeed, it follows from the exact sequence \eqref{chio} that \[H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},\chi}=\bigcup_{m=0}^\infty \im(f_m).\] Since $H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an},\chi}$ is a Banach space and each $f_m$ is continuous by Remark \ref{LBstreqv}, our claim follows from Proposition \ref{1.1.10}. 
We denote the image of $\check{H}^1(\mathcal{O}^{G_m-\mathrm{an},\chi}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ by $V_m$. Note that \[ \left(\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^0(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})\right)^{G_{m'}}= H^0(\mathcal{X}_{K^pG_{m'}},\mathcal{O}_{\mathcal{X}_{K^pG_{m'}}})\] is finite-dimensional over $C$, hence is contained in $V_m$ via \eqref{chio} for some $m\geq m'$. Now we claim $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{G_n-\mathrm{an}}\subseteq V_m$. Take the $G_n$-analytic vectors in \eqref{chio}. \[0\to \left(\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^0(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})\right)^{G_n-\mathrm{an}} \to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{G_n-\mathrm{an}} \to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{G_n-\mathrm{an},\chi}.\] For any $v\in H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{G_n-\mathrm{an}}$, we can find $v'\in V_{m'}$ such that \[v-v'\in \left(\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^0(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})\right)^{G_{m'}-\mathrm{an}}=\left(\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^0(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})\right)^{G_{m'}}.\] Hence $v-v'\in V_m$. Thus $v=v'+(v-v')\in V_m$. \end{proof} \section{Intertwining operators: explicit formulas} \label{Ioef} \subsection{Differential operators I} \label{Do1} \begin{para} In this subsection, we will construct an operator from $\mathcal{O}^{\mathrm{la},\chi}_{K^p}$ to a twist of $\omega^{2k+2,\mathrm{la},\chi}_{K^p}$ for some integer $k$ depending on $\chi$. Roughly speaking, this operator comes from \textit{differential operators on modular curves} and $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linearly extends the operator $\theta^{k+1}$ in the classical theory of modular forms. 
Let us first recall the definition of $\theta^{k+1}$. \end{para} \begin{para} \label{XCY} Fix an open compact subgroup $K_p$ of $\mathrm{GL}_2(\mathbb{Q}_p)$. Let $\mathcal{X}=\mathcal{X}_{K^pK_p}$ and let $\mathcal{C}\subset \mathcal{X}$ be the subset of cusps. There is a universal family of elliptic curves $\mathcal{E}$ over the modular curve $\mathcal{X}-\mathcal{C}$. We denote its first relative de Rham cohomology by $D_{\mathrm{dR}}$. This is a rank two vector bundle equipped with a Hodge filtration and a Gauss-Manin connection. Its canonical extension to $\mathcal{X}$ will be denoted by $D$. In particular, it admits a logarithmic connection \[\nabla:D\to D\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C}),\] and a decreasing filtration \[\Fil^n(D)=\left\{ \begin{array}{ll} D, & n\leq0\\ \omega, & n=1\\ 0, & n\geq 2 \end{array}\right.\] Here as usual, $\Omega^1_{\mathcal{X}}(\mathcal{C})$ denotes the sheaf of differentials on $\mathcal{X}$ with simple poles at $\mathcal{C}$. It is well-known that the composite map \[\omega=\Fil^1 D\subset D\xrightarrow{\nabla}D\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})\to \gr^0 D\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})=\wedge^2 D\otimes_{\mathcal{O}_{\mathcal{X}}}\omega^{-1}\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})\] is an isomorphism (the Kodaira-Spencer isomorphism): $\omega\stackrel{\sim}{\to}\wedge^2 D\otimes_{\mathcal{O}_{\mathcal{X}}}\omega^{-1}\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})$. Under this isomorphism, we sometimes identify $\Omega^1_{\mathcal{X}}(\mathcal{C})$ with $(\wedge^2 D)^{-1}\otimes_{\mathcal{O}_{\mathcal{X}}}\omega^2$. We also note that the cup product induces a natural isomorphism between $\wedge^2 D$ and the canonical extension of $H^2_{\mathrm{dR}}(\mathcal{E}/\mathcal{X})$. Everything here is compatible with varying $K_p$. Let $k$ be a non-negative integer. 
Then the $k$-th symmetric power $\Sym^k D$ also admits a logarithmic connection $\nabla_k:\Sym^kD\to \Sym^k D\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})$ and a decreasing filtration with $\Fil^{k} \Sym^k D=\omega^k$. The Kodaira-Spencer isomorphism implies that the composite map \[\Fil^1\Sym^k D\xrightarrow{\nabla_k}\Sym^k D\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})\to (\Sym^k D/\Fil^k)\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})\] is an isomorphism. In particular, given a section $s$ of $\gr^0 \Sym^k D=\Sym^k D/\Fil^1 \Sym^k D$, there exists a unique lift $\tilde{s}\in \Sym^k D$ such that $\nabla_k(\tilde{s})\in \Fil^k \Sym^k D\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})=\omega^k\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})$. This defines a map \[\theta'^{k+1}: \omega^{-k}\otimes_{\mathcal{O}_{\mathcal{X}}}(\wedge^2 D)^{k}\cong \gr^0 \Sym^k D\to \omega^k\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})\cong \omega^{-k}\otimes_{\mathcal{O}_{\mathcal{X}}}(\wedge^2 D)^{k}\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})^{\otimes k+1}\] which can be viewed as a log differential operator on $ \omega^{-k}\otimes_{\mathcal{O}_{\mathcal{X}}}(\wedge^2 D)^{k}$ of order $k+1$. Note that $\wedge^2 D$ is a line bundle equipped with a logarithmic connection. 
If we apply this construction to $\Sym^k D\otimes_{\mathcal{O}_\mathcal{X}} (\wedge^2 D)^{-k}=\Sym^k D^*$, where $D^*$ denotes the dual of $D$, we get a map \[\theta^{k+1}:\omega^{-k}\to \omega^k\otimes_{\mathcal{O}_{\mathcal{X}}}\Omega^1_{\mathcal{X}}(\mathcal{C})\otimes_{\mathcal{O}_\mathcal{X}} (\wedge^2 D)^{-k}\cong\omega^{k+2}\otimes_{\mathcal{O}_{\mathcal{X}}} (\wedge^2 D)^{-k-1}.\] Equivalently, since $\wedge^2 D$ is isomorphic to $\mathcal{O}_{\mathcal{X}}$ with the standard logarithmic connection, $\theta^{k+1}$ is nothing but the twist of $\theta'^{k+1}$ by $(\wedge^2 D)^{-k}$. When $k=0$, $\theta^1$ is simply the composite of $d:\mathcal{O}_{\mathcal{X}}\to \Omega^1_{\mathcal{X}}(\mathcal{C})$ with the Kodaira-Spencer isomorphism. \end{para} \begin{para} \label{KSsm} One can take the direct limit of the constructions of the previous paragraph over $K_p$ in the following way. Since the vector bundle $D$ on $\mathcal{X}=\mathcal{X}_{K^pK_p}$ is compatible with varying $K_p$, \[D^{\mathrm{sm}}_{K^p}={\pi_{\mathrm{HT}}}_{*} (\varinjlim_{K_p\subset \mathrm{GL}_2(\mathbb{Q}_p)}(\pi_{K_p})^{-1} D)\] defines a locally free $\mathcal{O}^{\mathrm{sm}}_{K^p}$-module on ${\mathscr{F}\!\ell}$. See \ref{omegakla} for the notation here. It has a natural decreasing filtration with $\Fil^1=\omega^{1,\mathrm{sm}}_{K^p}$ and a connection \[\nabla:D^{\mathrm{sm}}_{K^p}\to D^{\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^\mathrm{sm}_{K^p}}\Omega_{K^p}^{1}(\mathcal{C})^{\mathrm{sm}},\] where \[\Omega_{K^p}^{1}(\mathcal{C})^{\mathrm{sm}}={\pi_{\mathrm{HT}}}_{*} (\varinjlim_{K_p\subset \mathrm{GL}_2(\mathbb{Q}_p)}(\pi_{K_p})^{-1} \Omega_{\mathcal{X}}^{1}(\mathcal{C})).\] Then we can repeat the construction of the previous paragraph. 
For example, the Kodaira-Spencer isomorphism induces a natural isomorphism $\omega^{2,\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^\mathrm{sm}_{K^p}}(\wedge^2 D^{\mathrm{sm}}_{K^p})^{-1}\cong \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}$. We note that $\wedge^2 D^{\mathrm{sm}}_{K^p}$ is isomorphic to $\mathcal{O}^{\mathrm{la}}_{K^p}$, but the isomorphism will not be Hecke-equivariant. Now we can state the main result of this subsection, which basically says that all the constructions here can be extended in an $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linear way. \end{para} \begin{thm} \label{I1} Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$ and $k=n_2-n_1\geq 0$. Then there exists a unique natural continuous operator \[d^{k+1}:\mathcal{O}^{\mathrm{la},\chi}_{K^p}\to \omega^{2k+2,\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\wedge^2 D^{\mathrm{sm}}_{K^p})^{-k-1}\cong \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}(\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm})^{\otimes k+1}\] satisfying the following properties: \begin{enumerate} \item $d^{k+1}$ is $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linear; \item $d^{k+1}(\mathrm{t}^{n_1}e_1^ie_2^{k-i} s)=\mathrm{t}^{n_1}e_1^ie_2^{k-i} \theta^{k+1}(s)$ for any section $s\in \omega^{-k,\mathrm{sm}}_{K^p}$ and $i=0,1,\cdots,k$. See Theorem \ref{str} for notation here. \end{enumerate} \end{thm} \begin{rem} \label{dd'} Since $d^{k+1}$ is $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linear, it can be twisted by a line bundle on ${\mathscr{F}\!\ell}$. Let $\chi'=\chi-(0,k)=(n_1,n_2-k)=(n_1,n_1)$. 
If we twist by $(\omega_{{\mathscr{F}\!\ell}})^{-k}(-k)$ and use the isomorphisms in \ref{omegakla}, we get a continuous operator \[{d'}^{k+1}:\omega^{-k,\mathrm{la},\chi'}_{K^p}\to \omega^{k+2,\mathrm{la},\chi'}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\wedge^2 D^{\mathrm{sm}}_{K^p})^{-k-1}\] satisfying the following properties \begin{enumerate} \item ${d'}^{k+1}$ is $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linear; \item ${d'}^{k+1}(\mathrm{t}^{n_1} s)=\mathrm{t}^{n_1} \theta^{k+1}(s)$ for any section $s\in \omega^{-k,\mathrm{sm}}_{K^p}$. \end{enumerate} Clearly the existence of ${d'}^{k+1}$ is equivalent to the existence of $d^{k+1}$. \end{rem} \begin{proof} By Theorem \ref{str}, given an open set $U\in\mathfrak{B}$ introduced in \ref{exs}, elements of the form $\mathrm{t}^{n_1}e_1^ie_2^{k-i} s$ generate a dense $\mathcal{O}_{{\mathscr{F}\!\ell}}(U)$-submodule of $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)$. In particular, $d^{k+1}$ is unique if it exists. First we prove the case $k=0$. By the uniqueness we just showed, it suffices to prove the existence of $d^1$ on each $U\in\mathfrak{B}$. Fix an open set $U$ as in \ref{exs}. In particular, $e_1$ is a generator of $\omega^1_{K^p}$ on $V_\infty=\pi_{\mathrm{HT}}^{-1}(U)$. In general, one can use the action of $\mathrm{GL}_2(\mathbb{Q}_p)$ to reduce to this case. We will freely use the notation in Theorem \ref{str} and \ref{An}. Let $n$ be a non-negative integer. Consider \[f=\mathrm{t}^{n_1}\sum_{i=0}^{+\infty}c_i(x-x_n)^i\in A^n,\] where $c_i\in H^0(V_{G_{r(n)}},\mathcal{O}_{\mathcal{X}_{K^pG_{r(n)}}}),i=0,1,\cdots$ are such that $\|c_i p^{(n-1)i}\|$ are uniformly bounded. We use $d$ to denote the derivation $\mathcal{O}_{V_{G_{r(n)}}}\to \Omega^1_{V_{G_{r(n)}}}(\mathcal{C})$. Define \[d_n(f)=\mathrm{t}^{n_1}(\sum_{i=0}^{+\infty}(x-x_n)^idc_i-\sum_{i=0}^{+\infty}(i+1)c_{i+1}(x-x_n)^idx_n).\] Formally, this formula is obtained by applying the Leibniz rule to $c_i(x-x_n)^i$. 
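To spell out this formal computation (a routine term-by-term check): treating $x$ as a constant, so that $d(x-x_n)=-dx_n$, the Leibniz rule gives for each term \[d\left(c_i(x-x_n)^i\right)=(x-x_n)^i\,dc_i-ic_i(x-x_n)^{i-1}\,dx_n,\] and summing over $i$ and re-indexing the second sum by $i\mapsto i+1$ yields exactly the expression defining $d_n(f)$. Treating $x$ as a constant is consistent with the $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linearity of $d^1$ established below.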
Since the derivation $d$ is continuous, it follows from our choice of $c_i$ that the $p^{(n-1)i}dc_i$ are also uniformly bounded. Hence the expression for $d_n(f)$ converges to a $G_{r(n)}$-analytic vector in $\mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}(U)$, i.e. $d_n$ defines a continuous map \[d_n:A^n\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}(U)^{G_{r(n)}-\mathrm{an}}.\] Note that if $f=\mathrm{t}^{n_1}\sum_{i=0}^l c_ix^i$ is a finite sum, then $d_n(f)$ is simply $\mathrm{t}^{n_1}\sum_{i=0}^lx^idc_i$. Since these finite sums are dense in $A^n$, this proves that $d_{n+1}|_{A^{n}}=d_n$. In particular, by passing to the limit over $n$, we get a continuous map \[d^1:\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}(U),\] which is independent of the choice of $x_n$. Clearly $d^1$ commutes with restriction from $U$ to any affinoid subdomain in $\mathfrak{B}$. Hence $d^1$ defines a map of sheaves \[d^1:\mathcal{O}^{\mathrm{la},\chi}_{K^p}\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}.\] We claim that $d^1$ is $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linear. This is a local question on ${\mathscr{F}\!\ell}$. Again using the action of $\mathrm{GL}_2(\mathbb{Q}_p)$, we only need to prove that $d^1:\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U)\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}(U)$ commutes with multiplication by elements of $\mathcal{O}_{{\mathscr{F}\!\ell}}(U)$ for any rational subdomain $U$ of $U_1$.
When $U=U_1=\{\|x\|\leq 1\}=\Spa(C\langle x\rangle,\mathcal{O}_C\langle x\rangle)$, it follows from our construction that $d^1$ commutes with multiplication by any polynomial in $x$ over $C$, hence commutes with $\mathcal{O}_{{\mathscr{F}\!\ell}}(U_1)$ as well by continuity. In general, if $U=U_1(\frac{f_1,\cdots,f_n}{g})$ is a rational subset of $U_1$, this can be proved in a similar way by noting that the image of $C\langle x\rangle[\frac{1}{g}]$ in $\mathcal{O}_{{\mathscr{F}\!\ell}}(U)$ is dense. This finishes the proof when $k=0$. In general, one can repeat the construction of $\theta^{k+1}$ to prove the existence of $d^{k+1}$. In fact, we will construct ${d'}^{k+1}$ in Remark \ref{dd'}. Let $\chi'=\chi-(0,k)=(n_1,n_1)$ be as introduced in Remark \ref{dd'}. Consider $D^{\chi'}:=D^{\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\mathcal{O}^{\mathrm{la},\chi'}_{K^p}$. The filtration on $D^{\mathrm{sm}}_{K^p}$ extends $\mathcal{O}^{\mathrm{la},\chi'}_{K^p}$-linearly to $D^{\chi'}$. In particular, \[\Fil^1 D^{\chi'}=\omega^{1,\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\mathcal{O}^{\mathrm{la},\chi'}_{K^p}=\omega^{1,\mathrm{la},\chi'}_{K^p},\] \[\gr^0 D^{\chi'}\cong \omega^{-1,\mathrm{la},\chi'}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\wedge^2 D^{\mathrm{sm}}_{K^p}.\] Recall that there is a connection $\nabla:D^{\mathrm{sm}}_{K^p}\to D^{\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}$.
Hence using $d^1:\mathcal{O}^{\mathrm{la},\chi'}_{K^p}\to \mathcal{O}^{\mathrm{la},\chi'}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}$, we have a connection on $D^{\chi'}$: \[\nabla^{\chi'}:D^{\chi'}\to D^{\chi'}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}\] such that the composite \[\Fil^1 D^{\chi'}\xrightarrow{\nabla^{\chi'}} D^{\chi'}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}\to \gr^0 D^{\chi'} \otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm}\] is an isomorphism. Therefore, if we apply the construction of $\theta^{k+1}$ to the $k$-th symmetric power $\Sym^k (D^{\chi'}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}(\wedge^2 D^{\mathrm{sm}}_{K^p})^{-1})$ as an $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}$-module, or equivalently, $\Sym^k D^{*,\chi'}$, where $D^{*,\chi'}:=\Hom_{\mathcal{O}^{\mathrm{sm}}_{K^p}}(D^{\mathrm{sm}}_{K^p},\mathcal{O}^{\mathrm{sm}}_{K^p})\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\mathcal{O}^{\mathrm{la},\chi'}_{K^p}$, we get an operator \[\omega^{-k,\mathrm{la},\chi'}_{K^p}\to \omega^{k+2,\mathrm{la},\chi'}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\wedge^2 D^{\mathrm{sm}}_{K^p})^{-k-1}\] which satisfies all the properties of ${d'}^{k+1}$ listed in Remark \ref{dd'}. By the same remark, this implies the existence of $d^{k+1}$. \end{proof} We record the following result obtained in the proof, which will be used in Subsection \ref{PTII}. \begin{lem} \label{KSsec} Recall that \[D^{*,(0,0)}=D^{(0,0)}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}(\wedge^2 D^{\mathrm{sm}}_{K^p})^{-1}\cong\Hom_{\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}}(D^{(0,0)},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})\] denotes the dual of $D^{(0,0)}$. We normalize the filtration on $D^{*,(0,0)}$ so that $\gr^iD^{*,(0,0)}=0,~i<0$ and $\gr^0D^{*,(0,0)}=\omega^{-1,\mathrm{la},(0,0)}_{K^p}$.
Then the surjective map \[\Sym^k D^{*,(0,0)} \to \gr^0 \Sym^k D^{*,(0,0)}\cong \omega^{-k,\mathrm{la},(0,0)}_{K^p}\] has a unique continuous left inverse \[r_k: \omega^{-k,\mathrm{la},(0,0)}_{K^p}\to \Sym^k D^{*,(0,0)}\] such that $\nabla_k\circ r_k\subseteq \Fil^{k}\Sym^k D^{*,(0,0)}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}$, where by abuse of notation $\nabla_k$ denotes the logarithmic connection on $\Sym^k D^{*,(0,0)}$. Similarly, if we twist $D^{*,(0,0)}$ by $\wedge^2 D^{\mathrm{sm}}_{K^p}\cong\mathcal{O}^{\mathrm{sm}}_{K^p}$, we obtain \[r'_k: \omega^{-k,\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}(\wedge^2 D^{\mathrm{sm}}_{K^p})^k=\gr^0 \Sym^k D^{(0,0)}\to \Sym^k D^{(0,0)}=\Sym^k D^{*,(0,0)}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}(\wedge^2 D^{\mathrm{sm}}_{K^p})^k\] which is the unique left inverse of $\Sym^k D^{(0,0)}\to\gr^0 \Sym^k D^{(0,0)}$ and satisfies $\nabla'_k\circ r'_k\subseteq \Fil^{k}\Sym^k D^{(0,0)}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}$. Here $\nabla'_k$ denotes the logarithmic connection on $\Sym^k D^{(0,0)}$. Explicitly, let $c$ be a global invertible section of $\wedge^2 D$ (which is necessarily horizontal). Then $r'_k(f\otimes c^k)=r_k(f)\otimes c^k$. \end{lem} \begin{proof} Only the continuity of $r_k$ needs to be shown. Consider the map \[r:\Sym^k D^{*,(0,0)}\to \gr^0 \Sym^k D^{*,(0,0)}\oplus \left ((\Sym^k D^{*,(0,0)}/\Fil^{k})\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}\right)\] sending $f\in\Sym^k D^{*,(0,0)}$ to $(f\mod \Fil^1,\nabla_k(f)\mod \Fil^k)$. The Kodaira-Spencer isomorphism implies that this map is an isomorphism, hence a topological isomorphism by the open mapping theorem. From this, it is easy to see that $r_k$ is continuous because $r\circ r_k(f)=(f,0)$.
\end{proof} \subsection{Differential operators II} \label{DII} \begin{para} In this subsection, we construct an operator from $\mathcal{O}^{\mathrm{la},\chi}_{K^p}$ to a twist of $\omega^{-2k-2,\mathrm{la},-w\cdot(-\chi)}_{K^p}$ for some weight $-w\cdot(-\chi)$ and some integer $k$ depending on $\chi$. Roughly speaking, this operator is $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear and essentially comes from \textit{differential operators on the flag variety} ${\mathscr{F}\!\ell}$. The construction here is much simpler than that of $d^{k+1}$ because, by Beilinson-Bernstein's theory of localization \cite{BB83}, differential operators on ${\mathscr{F}\!\ell}$ can be expressed in terms of the Lie algebra $\mathfrak{g}$. \end{para} \begin{para} \label{KSFl} Denote by $\Omega^1_{{\mathscr{F}\!\ell}}$ the sheaf of differential forms on ${\mathscr{F}\!\ell}$. Then there is a $\mathrm{GL}_2$-equivariant isomorphism of line bundles on ${\mathscr{F}\!\ell}$: \[\Omega^1_{{\mathscr{F}\!\ell}}\cong \omega_{{\mathscr{F}\!\ell}}^{-2}\otimes\det{}.\] Consider $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}$. It follows from Beilinson-Bernstein's theory that differential operators on ${\mathscr{F}\!\ell}$ act on this sheaf. In particular, we get a natural map \[\bar{d}^1:\mathcal{O}^{\mathrm{la},(0,0)}_{K^p} \to \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\Omega^1_{\mathscr{F}\!\ell}.\] Concretely, note that the derivation $\frac{d}{dx}$ on $\mathcal{O}_{{\mathscr{F}\!\ell}}$ agrees with the action of $u^+:=\begin{pmatrix} 0& 1 \\ 0 & 0 \end{pmatrix}\in\mathfrak{g}$.
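To make this identification concrete (a one-line check, under one standard convention for the action on the big cell; the opposite convention produces an overall sign): the one-parameter subgroup $\exp(tu^+)=\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$ translates the coordinate by $x\mapsto x+t$, so for a function $f$ in $x$,
\[u^+(f)=\frac{d}{dt}\Big|_{t=0}f(x+t)=\frac{df}{dx}.\]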
Hence on any open subset of ${\mathscr{F}\!\ell}$ where $e_1$ is invertible, \[\bar{d}^1:s\in \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\mapsto u^+(s)\otimes dx.\] In other words, $\bar{d}^1$ extends $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linearly the de Rham sequence $\mathcal{O}_{{\mathscr{F}\!\ell}}\xrightarrow{d_{\mathscr{F}\!\ell}} \Omega^1_{\mathscr{F}\!\ell}$ on ${\mathscr{F}\!\ell}$, where $d_{\mathscr{F}\!\ell}$ denotes the derivation on ${\mathscr{F}\!\ell}$. \end{para} \begin{para} \label{HEt} Before moving on, we need to modify the relative Hodge-Tate filtration \eqref{rHT} to make everything Hecke-equivariant. It follows from the construction in \cite[4.1.3]{Pan20} that the $\omega^{-1}$ in \eqref{rHT} should be replaced by $\gr^1D=\wedge^2 D\otimes_{\mathcal{O}_\mathcal{X}} \omega^{-1}$: \begin{eqnarray}\label{rHTH} 0\to\wedge^2 D\otimes_{\mathcal{O}_{\mathcal{X}}}\omega_{K^p}^{-1}(1)\to V(1)\otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{X}_{K^p}} \to \omega_{K^p}\to 0, \end{eqnarray} where $\mathcal{X}=\mathcal{X}_{K^pK_p}$ is a modular curve of finite level. Implicitly in the discussion of \ref{R20}, we already fixed a trivialization $\wedge^2 D\cong \mathcal{O}_{\mathcal{X}}$, equivalently a global non-vanishing section $c$ of $\wedge^2 D$. Our choice of $c$ is the pull-back of a global section of $\wedge^2 D$ on ``$\mathcal{X}_{\mathrm{GL}_2(\hat\mathbb{Z})}$'', i.e. a $\mathrm{GL}_2(\hat\mathbb{Z})$-fixed non-zero global section. Recall that $\wedge^2 D$ extends $H^2_{\mathrm{dR}}(\mathcal{E}/\mathcal{Y})$. It follows that $\mathrm{GL}_2(\mathbb{A}_f)$ acts on $c$ via the degree map $|\cdot|_{\mathbb{A}}^{-1}\circ \det$, where $|\cdot|_\mathbb{A}:\mathbb{A}^\times_f\to \mathbb{Q}^\times$ denotes the usual adelic norm map on finite ideles. If we take the wedge product of $V(1)\otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{X}_{K^p}}$, cf.
\ref{exs}, we get a natural isomorphism \[\mathcal{O}_{\mathcal{X}_{K^p}}\cong (\wedge^2 D)^{-1}\otimes_{\mathcal{O}_\mathcal{X}} \mathcal{O}_{\mathcal{X}_{K^p}}\otimes \det(1).\] The element $\mathrm{t}$ introduced in \ref{exs} should be regarded as the image of $c^{-1}\otimes 1\otimes b$ under this isomorphism, i.e. $\mathrm{t}=c^{-1}\otimes 1\otimes b$. We remind the reader that $b\in\mathbb{Q}_p(1)$ is our choice of a basis. Hence we have the following natural isomorphism on ${\mathscr{F}\!\ell}$: \begin{eqnarray} \label{rmtwddet1} \mathcal{O}^{\mathrm{la},(1,1)}_{K^p}=\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\cdot \mathrm{t}\cong \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{K^p}^{\mathrm{sm}}}(\wedge^2 D^{\mathrm{sm}}_{K^p})^{-1}\otimes \det(1), \end{eqnarray} which is independent of the choice of $\mathrm{t}$. Recall that we have the Kodaira-Spencer isomorphism in \ref{KSsm}: $\omega^{2,\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^\mathrm{sm}_{K^p}}(\wedge^2 D^{\mathrm{sm}}_{K^p})^{-1}\cong \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}$. Therefore, it follows from the isomorphisms in \ref{omegakla} and \ref{KSFl} that there is a natural isomorphism \begin{eqnarray} \label{period} \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\Omega^1_{{\mathscr{F}\!\ell}}\cong \mathcal{O}^{\mathrm{la},(1,-1)}_{K^p}(1). \end{eqnarray} Explicitly, we require \[s\otimes1\otimes dx\in \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\Omega^1_{{\mathscr{F}\!\ell}}\mapsto\frac{s\mathrm{t}}{e_1^2}\in \mathcal{O}^{\mathrm{la},(1,-1)}_{K^p}(1)\] on an open set where $e_1$ is invertible. Here we view $s$ as a section of $\omega^{2,\mathrm{sm}}_{K^p}$ using the Kodaira-Spencer isomorphism and our choice of generator of $\wedge^2 D$.
It is easy to check that $\frac{s\mathrm{t}}{e_1^2}$ does not depend on the choice of the trivializations of $\wedge^2 D$ and $\mathbb{Q}_p(1)$. \end{para} \begin{para} \label{dbar} For a non-negative integer $k$, it is well-known that the ``$(k+1)$-th derivative'' defines a $\mathrm{GL}_2$-equivariant map \[\omega_{\mathscr{F}\!\ell}^{k}\to\omega_{\mathscr{F}\!\ell}^k\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}(\Omega_{{\mathscr{F}\!\ell}}^1)^{k+1}\cong \omega_{{\mathscr{F}\!\ell}}^{-k-2}\otimes \det{}^{k+1}\] sending $s\in \omega_{\mathscr{F}\!\ell}^{k}$ to $(u^+)^{k+1}(s)\otimes (dx)^{k+1}$. The operator we are going to construct is, up to a constant, an $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear extension of this map. We follow the idea of Bernstein-Gelfand-Gelfand (BGG). Let $U(\mathfrak{g})$ be the universal enveloping algebra of $\mathfrak{g}$ and $Z(U(\mathfrak{g}))$ be its centre. Then $Z(U(\mathfrak{g}))$ acts via the same character, which will be denoted by $\lambda_k$, on \[\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes _{\mathcal{O}_{{\mathscr{F}\!\ell}}}\omega_{{\mathscr{F}\!\ell}}^{k}\cong \omega^{k,\mathrm{la},(0,k)}_{K^p}(-k)=(\omega^{1,\mathrm{la},(0,1)}_{K^p}(-1))^{\otimes k}\] and \begin{eqnarray*} \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes _{\mathcal{O}_{{\mathscr{F}\!\ell}}}\omega_{\mathscr{F}\!\ell}^k\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}(\Omega_{{\mathscr{F}\!\ell}}^1)^{\otimes k+1} &\cong& \omega^{-k,\mathrm{la},(0,-k)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}} \Omega^1_{{\mathscr{F}\!\ell}} \otimes \det{}^{k}(k) \\ &\xrightarrow[\times \mathrm{t}^k]{\simeq} &(\wedge^2 D^{\mathrm{sm}}_{K^p})^{k}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega_{K^p}^{-k,\mathrm{la},(k,0)}\otimes \Omega^1_{{\mathscr{F}\!\ell}}. \end{eqnarray*} The last isomorphism uses the $k$-th power of \eqref{rmtwddet1}.
Now we take the locally analytic vectors of the push forward of the sequence \eqref{rHTH} along $\pi_{\mathrm{HT}}$ and twist by $(-1)$: \[0\to \wedge^2 D^{\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{-1,\mathrm{la}}_{K^p}\to V \otimes_{\mathbb{Q}_p} \mathcal{O}_{K^p}^{\mathrm{la}}\to \omega_{K^p}^{1,\mathrm{la}}(-1)\to 0.\] Restricting the middle term to $V\otimes_{\mathbb{Q}_p} \mathcal{O}_{K^p}^{\mathrm{la},(0,0)}$ induces \begin{eqnarray} \label{rHTcc} 0\to \wedge^2 D^{\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{-1,\mathrm{la},(1,0)}_{K^p}\to V\otimes_{\mathbb{Q}_p} \mathcal{O}_{K^p}^{\mathrm{la},(0,0)}\to \omega_{K^p}^{1,\mathrm{la},(0,1)}(-1)\to 0, \end{eqnarray} which is essentially the same as the tensor product of $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}$ and $0\to \omega^{-1}_{{\mathscr{F}\!\ell}}\otimes \det\to V\otimes_{\mathbb{Q}_p}\mathcal{O}_{{\mathscr{F}\!\ell}}\to\omega_{{\mathscr{F}\!\ell}}\to 0$. By Harish-Chandra's work and the relation between $\theta_\mathfrak{h}$ and infinitesimal characters \cite[Corollary 4.2.8]{Pan20}, the weight-$(n_1,n_2)$ part and weight-$(k_1,k_2)$ part of $\mathcal{O}^{\mathrm{la}}_{K^p}$ have the same infinitesimal character if and only if $n_1+n_2=k_1+k_2$ and $(n_1-n_2-1)^2=(k_1-k_2-1)^2$. Hence the $\lambda_1$-isotypic component of $V\otimes_{\mathbb{Q}_p}\mathcal{O}_{K^p}^{\mathrm{la},(0,0)}$ is canonically isomorphic to $\omega_{K^p}^{1,\mathrm{la},(0,1)}(-1)$ since the infinitesimal character of $\omega^{-1,\mathrm{la},(1,0)}_{K^p}$ is different.
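To illustrate the criterion (a direct check of the claim just made): the two weights occurring in \eqref{rHTcc} are $(1,0)$ (for $\omega^{-1,\mathrm{la},(1,0)}_{K^p}$) and $(0,1)$ (for $\omega^{1,\mathrm{la},(0,1)}_{K^p}(-1)$). They have the same sum $1+0=0+1$, but
\[(1-0-1)^2=0\neq 4=(0-1-1)^2,\]
so their infinitesimal characters are indeed different.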
Similarly, if we twist this exact sequence by $\Omega^1_{{\mathscr{F}\!\ell}}$, we see that the $\lambda_1$-isotypic component of $V\otimes_{\mathbb{Q}_p}\mathcal{O}_{K^p}^{\mathrm{la},(0,0)}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\Omega^1_{\mathscr{F}\!\ell}$ is naturally identified with $\wedge^2 D^{\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{-1,\mathrm{la},(1,0)}_{K^p}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}\Omega^1_{\mathscr{F}\!\ell}$. Therefore, if we take the $\lambda_1$-isotypic component of the tensor product of $V$ and $\bar{d}^1:\mathcal{O}^{\mathrm{la},(0,0)}_{K^p} \to \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\Omega^1_{\mathscr{F}\!\ell}$, we get \[\bar{d}^2:\omega_{K^p}^{1,\mathrm{la},(0,1)}(-1)\to \wedge^2 D^{\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{-1,\mathrm{la},(1,0)}_{K^p}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}\Omega^1_{\mathscr{F}\!\ell}\cong \omega_{K^p}^{1,\mathrm{la},(0,1)}(-1)\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}(\Omega_{{\mathscr{F}\!\ell}}^1)^{\otimes 2}.\] A direct calculation shows that this operator agrees with the explicit formula $s\in \omega_{K^p}^{1,\mathrm{la},(0,1)}\mapsto(u^+)^{2}(s)\otimes (dx)^{2}$ up to a non-zero constant in $\mathbb{Q}$ on any open subset of ${\mathscr{F}\!\ell}$ where $e_1$ is invertible, or equivalently $x$ is a regular function. Clearly, $\bar{d}^2$ is $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear, hence it can be twisted by locally free $\mathcal{O}^{\mathrm{sm}}_{K^p}$-modules. By abuse of notation, we still denote the resulting operators by $\bar{d}^2$.
For example, the twist by $\omega^{-1,\mathrm{sm}}_{K^p}(1)$ is \[\bar{d}^2:\mathcal{O}^{\mathrm{la},(0,1)}_{K^p}\to \mathcal{O}_{K^p}^{\mathrm{la},(0,1)}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}(\Omega_{{\mathscr{F}\!\ell}}^1)^{\otimes 2}\xrightarrow[\times \mathrm{t}^{-2}]{\simeq}(\wedge^2 D^{\mathrm{sm}}_{K^p})^{2}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{-4,\mathrm{la},(2,-1)}_{K^p}(2).\] \end{para} \begin{para} \label{genklambdak} For general $k\geq1$, one can repeat the same construction with $V$ replaced by its $k$-th symmetric power and $\lambda_1$ replaced by $\lambda_k$. It follows from the sequence \eqref{rHTcc} that there are a natural surjective map \[\phi_{k,1}:\Sym^k V\otimes_{\mathbb{Q}_p} \mathcal{O}_{K^p}^{\mathrm{la},(0,0)}\to \omega_{K^p}^{k,\mathrm{la},(0,k)}(-k)\] and a natural injective map \[\phi_{k,2}:\omega_{K^p}^{k,\mathrm{la},(0,k)}(-k)\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}(\Omega_{{\mathscr{F}\!\ell}}^1)^{\otimes k+1}\to \Sym^k V\otimes_{\mathbb{Q}_p} \mathcal{O}_{K^p}^{\mathrm{la},(0,0)}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}\Omega_{{\mathscr{F}\!\ell}}^1.\] \end{para} \begin{lem} \label{secinf} Both $\phi_{k,1}$ and $\phi_{k,2}$ induce isomorphisms on the $\lambda_k$-isotypic components. In particular, $\phi_{k,1}$ has a natural $U(\mathfrak{g})$-equivariant left inverse \[\phi'_{k}: \omega_{K^p}^{k,\mathrm{la},(0,k)}(-k) \to \Sym^k V\otimes_{\mathbb{Q}_p} \mathcal{O}_{K^p}^{\mathrm{la},(0,0)}.\] \end{lem} \begin{proof} For $\phi_{k,1}$, we note that \eqref{rHTcc} implies that $\Sym^k V\otimes_{\mathbb{Q}_p} \mathcal{O}_{K^p}^{\mathrm{la},(0,0)}$ is filtered by \[ (\wedge^2 D^{\mathrm{sm}}_{K^p})^{\otimes i}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega_{K^p}^{k-2i,\mathrm{la},(i,k-i)}(i-k),~i=0,\cdots,k\] and only $\omega_{K^p}^{k,\mathrm{la},(0,k)}(-k)$ contributes to the $\lambda_k$-isotypic component. The same argument works for $\phi_{k,2}$. 
\end{proof} As a consequence, we get \[\bar{d}^{k+1}:\omega_{K^p}^{k,\mathrm{la},(0,k)}(-k)\cong \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\omega^k_{{\mathscr{F}\!\ell}}\to \omega_{K^p}^{k,\mathrm{la},(0,k)}(-k)\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}(\Omega_{{\mathscr{F}\!\ell}}^1)^{\otimes k+1},\] which agrees with the explicit formula $s\in \omega_{K^p}^{k,\mathrm{la},(0,k)}\mapsto(u^+)^{k+1}(s)\otimes (dx)^{k+1}$ up to a non-zero constant in $\mathbb{Q}$ by a direct calculation. It is $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear and its twist by $\omega^{k,\mathrm{sm}}_{K^p}(k)$ becomes \[\bar{d}^{k+1}:\mathcal{O}^{\mathrm{la},(0,k)}_{K^p}\to\mathcal{O}_{K^p}^{\mathrm{la},(0,k)}\otimes_{\mathcal{O}_{\mathscr{F}\!\ell}}(\Omega_{{\mathscr{F}\!\ell}}^1)^{\otimes k+1}\xrightarrow[\times \mathrm{t}^{-k-1}]{\simeq}(\wedge^2 D^{\mathrm{sm}}_{K^p})^{k+1}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{-2k-2,\mathrm{la},(k+1,-1)}_{K^p}(k+1).\] It is clear from the construction that $\bar{d}^{k+1}$ is $\mathrm{GL}_2(\mathbb{Q}_p)$ and Hecke-equivariant. Since we are free to multiply by powers of $\mathrm{t}$, we obtain the following theorem. \begin{thm} \label{I2} Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$ and $k=n_2-n_1\geq 0$. Let $-w\cdot(-\chi)=(n_2+1,n_1-1)$. 
Then there exists a natural continuous operator \[\bar{d}^{k+1}:\mathcal{O}^{\mathrm{la},\chi}_{K^p}\to (\wedge^2 D^{\mathrm{sm}}_{K^p})^{k+1}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{-2k-2,\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1)\cong \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}\] which formally can be regarded as $(d_{\mathscr{F}\!\ell})^{k+1}$ satisfying the following properties: \begin{enumerate} \item $\bar{d}^{k+1}$ is $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear; \item there exists a non-zero constant $c\in\mathbb{Q}$ such that $\bar{d}^{k+1}(s)=c(u^+)^{k+1}(s)\otimes (dx)^{k+1}$ for any $s\in \mathcal{O}^{\mathrm{la},\chi}_{K^p}$ defined over an open subset of ${\mathscr{F}\!\ell}$ where $x$ is a regular function. \end{enumerate} Moreover, $\bar{d}^{k+1}$ commutes with $\mathrm{GL}_2(\mathbb{Q}_p)$ and Hecke actions away from $p$. \end{thm} \begin{rem} \label{ddbarcom} It is natural to compare the differential operator $d^{k+1}$ introduced in Theorem \ref{I1} with $\bar{d}^{k+1}$. It follows from the explicit formulae that $\bar{d}^{k+1}$ commutes with $d^{k+1}$. In fact, both differential operators can be defined in a uniform way. This is probably best explained in the local picture; see Subsections \ref{LTinfty}, \ref{Drinfty} below. \end{rem} \begin{prop} \label{dbarsurj} Suppose $\chi=(0,k)\in\mathbb{Z}^2$ and $k\geq 0$. We have the following results for \[\bar{d}^{k+1}:\mathcal{O}^{\mathrm{la},(0,k)}_{K^p}\to \mathcal{O}^{\mathrm{la},(0,k)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}.\] \begin{enumerate} \item $\bar{d}^{k+1}$ is surjective. \item $\ker(\bar{d}^{k+1})=\mathcal{O}^{\mathrm{lalg},(0,k)}_{K^p}=\Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p}$. See \ref{lalg0k}, \ref{lalgKp} for the notation here.
\end{enumerate} In general, if $\chi=(n_1,n_2)\in\mathbb{Z}^2$ with $k=n_2-n_1\geq 0$, then the same results hold: $\ker(\bar{d}^{k+1})=\mathcal{O}^{\mathrm{lalg},\chi}_{K^p}=\Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p}\cdot \mathrm{t}^{n_1}$. \end{prop} \begin{proof} Since $\bar{d}^{k+1}$ is $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant, it suffices to prove these claims on open sets on which $e_1\in\omega^1_{K^p}$ is a generator. Then both claims follow from the explicit formula $\bar{d}^{k+1}=(u^+)^{k+1}\otimes (dx)^{k+1}$ up to a non-zero constant. The first part follows from \cite[Proposition 5.2.9]{Pan20} which says that $u^+$ is surjective on such open sets. For the second part, since $\ker(\bar{d}^{k+1})$ is $\mathfrak{gl}_2(\mathbb{Q}_p)$-stable and annihilated by $(u^+)^{k+1}$, the action of $\mathfrak{gl}_2(\mathbb{Q}_p)$ on it is locally finite. A consideration of the infinitesimal character shows that $\ker(\bar{d}^{k+1})$ is exactly the $\Sym^k V$-isotypic subspace. Hence $\ker(\bar{d}^{k+1})=\mathcal{O}^{\mathrm{lalg},(0,k)}_{K^p}$. One can also prove this by a direct calculation. \end{proof} \begin{rem}\label{dbarres} When $k=0$, it follows that there is a natural exact sequence \[0\to \mathcal{O}^{\mathrm{sm}}_{K^p} \to \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\xrightarrow{\bar{d}} \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}\Omega^1_{{\mathscr{F}\!\ell}}\to 0\] which can be regarded as a $\bar{d}$-resolution of $\mathcal{O}^{\mathrm{sm}}_{K^p}$ in this setting.
In general, $\bar{d}^{k+1}$ gives rise to a resolution \[0\to \Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p}\to \mathcal{O}^{\mathrm{la},(0,k)}_{K^p}\xrightarrow{\bar{d}^{k+1}} \mathcal{O}^{\mathrm{la},(0,k)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}\to 0.\] \end{rem} \subsection{Intertwining operator} \begin{defn} \label{I} Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$ and $k=n_2-n_1\geq 0$. We define \[I_k:\mathcal{O}^{\mathrm{la},\chi}_{K^p}\to \mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1)\] as the composite of \[\mathcal{O}^{\mathrm{la},\chi}_{K^p}\xrightarrow{d^{k+1}} \omega^{2k+2,\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\wedge^2 D^{\mathrm{sm}}_{K^p})^{-k-1} \xrightarrow{\bar{d}'^{k+1}} \mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1),\] where $\bar{d}'^{k+1}:=\bar{d}^{k+1}\otimes 1: \omega^{2k+2,\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\wedge^2 D^{\mathrm{sm}}_{K^p})^{-k-1}\to \mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1)$. \end{defn} Note that $\bar{d}'^{k+1}$ makes sense because $\bar{d}^{k+1}$ is $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear by Theorem \ref{I2}. Formally, $I_k=\bar{d}'^{k+1}\circ d^{k+1}$, and it commutes with $\mathrm{GL}_2(\mathbb{Q}_p)$ and Hecke actions away from $p$. Note that $d^{k+1}$ commutes with $\bar{d}^{k+1}$ by \ref{ddbarcom}.
Hence $I_k$ also agrees with the composite: \begin{eqnarray*} \mathcal{O}^{\mathrm{la},\chi}_{K^p}\xrightarrow{\bar{d}^{k+1}} \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}&\xrightarrow{d^{k+1}\otimes 1} & (\Omega_{K^p}^1(\mathcal{C})^\mathrm{sm})^{\otimes k+1}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}\\ &\cong &\mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1), \end{eqnarray*} where the last isomorphism follows from \eqref{period}. Hence $I_k$ can also be regarded as $d^{k+1}\circ\bar{d}^{k+1}$. \begin{rem} We will give a $p$-adic Hodge-theoretic interpretation of $I_k$ in the next section. Roughly speaking, it comes from the action of $\mathbb{G}_a$ (a monodromy operator) in Fontaine's classification of almost de Rham $B_{\mathrm{dR}}$-representations \cite{Fo04}. The Tate twist $(k+1)$ in the definition is better understood as $\gr^k B_{\mathrm{dR}}$. \end{rem} \begin{defn} \label{I^1k} We denote $H^1(I_k)$ by \[I^1_k:H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1)).\] \end{defn} \begin{para}[A variant of $I_k$] \label{VIk} If $k>0$, then $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})=H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},\chi}$ and similar results hold for $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1))$ by \cite[Corollary 5.1.3.(2)]{Pan20}.
Thus we can also view $I^1_k$ as a map \[I^{1}_k:H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},\chi} \to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},-w\cdot(-\chi)}(k+1).\] If $k=0$, it follows from the exact sequence \eqref{chio} and Remark \ref{A00rinv} below that $I^1_0$ factors through the quotient $H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},\chi}$ of $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ and we will denote by \[{I}'_0:H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},\chi} \to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},-w\cdot(-\chi)}(1)\] the induced map. \end{para} \subsection{\texorpdfstring{$\mathfrak{n}$}{n}-cohomology} \begin{para} \label{I1k} Fix a non-negative integer $k$ and a weight $\chi=(n_1,n_2)\in\mathbb{Z}^2$ with $n_2-n_1=k$ throughout this subsection. Let $\mathfrak{n}=Cu^+=\{\begin{pmatrix} 0 & * \\ 0 & 0 \end{pmatrix}\}\subset \mathfrak{g}$ be the upper triangular nilpotent subalgebra. We will determine the $\mathfrak{n}$-invariants of $\ker I^1_k$ below as a representation of the upper-triangular Borel subgroup $B$ of $\mathrm{GL}_2(\mathbb{Q}_p)$ and deduce the finiteness result mentioned in the introduction (Theorem \ref{Intfin}). We note that the main result will not be used in the next two sections; readers interested only in the spectral decomposition may skip this subsection on a first reading. It suffices to study the case when $n_1=0$, i.e. $\chi=(0,k)$, as the general case can be reduced to this case by multiplying by $\mathrm{t}^{-n_1}$. Thus we will simply assume $n_1=0$ from now on. Recall that $d^{k+1}: \mathcal{O}^{\mathrm{la},\chi}_{K^p}\to\omega^{2k+2,\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\wedge^2 D^{\mathrm{sm}}_{K^p})^{-k-1}$. \end{para} \begin{lem} \label{kerdk+1} \[\ker I^1_k=\ker \left(H^1(d^{k+1})\right).\] \end{lem} \begin{proof} Recall that $I_k=\bar{d}'^{k+1}\circ d^{k+1}$ (Definition \ref{I}).
It suffices to prove that the kernel of \[H^1(\bar{d}'^{k+1}): H^1({\mathscr{F}\!\ell},\omega^{2k+2,\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\wedge^2 D^{\mathrm{sm}}_{K^p})^{-k-1})\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1))\] is zero. Since $\bar{d}^{k+1}\otimes 1$ is surjective by Proposition \ref{dbarsurj}, we only need to show that \[H^1({\mathscr{F}\!\ell},\ker\bar{d}'^{k+1})=0.\] On the other hand, by the second part of Proposition \ref{dbarsurj} and the isomorphism $\omega^{2k+2,\mathrm{la},\chi}_{K^p}\cong \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{2k+2,\mathrm{sm}}_{K^p}$, we have \[\ker(\bar{d}^{k+1}\otimes 1)\cong\Sym^k V(k)\otimes_{\mathbb{Q}_p} \omega^{k+2,\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\wedge^2 D^{\mathrm{sm}}_{K^p})^{-k-1}.\] Since $\wedge^2 D^{\mathrm{sm}}_{K^p}\cong\mathcal{O}^{\mathrm{sm}}_{K^p}$ (non-canonically), it is enough to prove \[H^1({\mathscr{F}\!\ell},\omega^{k+2,\mathrm{sm}}_{K^p})=0,\] which is exactly \cite[Corollary 5.3.6]{Pan20} as $k\geq 0$. \end{proof} \begin{para} We are going to compute $(\ker I^1_k)^\mathfrak{n}$, the $\mathfrak{n}$-invariants of $\ker I^1_k$. This is a subspace of the $\mathfrak{n}$-invariants $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^\mathfrak{n}$, which was determined in \S 5.3 of \cite{Pan20}. We briefly recall the computation here. One ingredient is the following simple lemma. \end{para} \begin{lem} \label{ninvlem} Let $\mathcal{F},\mathcal{G}$ be sheaves of abelian groups on ${\mathscr{F}\!\ell}$ (or any topological space with cohomological dimension one) and $X:\mathcal{F}\to\mathcal{G}$ be a map. 
Then there are natural homomorphisms $r:H^0({\mathscr{F}\!\ell},\mathcal{G})\to H^0({\mathscr{F}\!\ell},\coker X)$ and $\delta:\ker r =H^0({\mathscr{F}\!\ell},X(\mathcal{F}))\to H^1({\mathscr{F}\!\ell},\ker X)$, and the following natural short exact sequence: \[0\to \coker(\delta) \to \ker H^1(X) \to \coker(r)\to 0.\] In particular, \begin{enumerate} \item If $H^1({\mathscr{F}\!\ell},\ker X)=0$, then $\ker H^1(X)\cong \coker(r)$. \item If $H^0({\mathscr{F}\!\ell},\mathcal{G})=0$, then $H^0({\mathscr{F}\!\ell},X(\mathcal{F}))=0$ and $0\to H^1({\mathscr{F}\!\ell},\ker X)\to \ker H^1(X)\to H^0({\mathscr{F}\!\ell},\coker X)\to 0$. \end{enumerate} \end{lem} \begin{proof} The map $r:H^0({\mathscr{F}\!\ell},\mathcal{G})\to H^0({\mathscr{F}\!\ell},\coker X)$ simply comes from the quotient map $\mathcal{G}\to\coker(X)$. For $\delta$, consider the short exact sequence $0\to \ker(X)\to\mathcal{F}\to X(\mathcal{F}) \to 0$ and the long exact sequence of its cohomology groups: \[H^0({\mathscr{F}\!\ell},X(\mathcal{F}))\xrightarrow{\delta} H^1({\mathscr{F}\!\ell},\ker X)\to H^1({\mathscr{F}\!\ell},\mathcal{F}) \to H^1 ({\mathscr{F}\!\ell},X(\mathcal{F}))\to 0.\] The first connecting homomorphism will be our $\delta$. Hence $\coker(\delta)$ is isomorphic to the kernel of $H^1({\mathscr{F}\!\ell},\mathcal{F}) \to H^1 ({\mathscr{F}\!\ell},X(\mathcal{F}))$. On the other hand, consider the cohomology groups of $0\to X(\mathcal{F})\to \mathcal{G} \to \coker X\to 0$: \[H^0({\mathscr{F}\!\ell},\mathcal{G})\xrightarrow{r} H^0({\mathscr{F}\!\ell},\coker X)\to H^1({\mathscr{F}\!\ell},X(\mathcal{F}))\to H^1({\mathscr{F}\!\ell},\mathcal{G}).\] Hence $\coker(r)$ is isomorphic to the kernel of $H^1({\mathscr{F}\!\ell},X(\mathcal{F}))\to H^1({\mathscr{F}\!\ell},\mathcal{G})$. By writing $H^1(X)$ as the composite map $H^1({\mathscr{F}\!\ell},\mathcal{F})\to H^1({\mathscr{F}\!\ell},X(\mathcal{F}))\to H^1({\mathscr{F}\!\ell},\mathcal{G})$, we get $0\to \coker(\delta) \to \ker(H^1(X)) \to \coker(r)$. 
Since $H^1({\mathscr{F}\!\ell},\mathcal{F}) \to H^1 ({\mathscr{F}\!\ell},X(\mathcal{F}))$ is surjective, the last map is surjective as well. \end{proof} \begin{rem} \label{hyclem} Here is a more direct way to prove this lemma. If we denote the hypercohomology of the complex $\mathcal{F}\xrightarrow{X}\mathcal{G}$ by $\mathbb{H}^*(X)$, then there is a natural exact sequence: \[H^0({\mathscr{F}\!\ell},\mathcal{F})\xrightarrow{H^0(X)} H^0({\mathscr{F}\!\ell},\mathcal{G}) \to \mathbb{H}^1(X)\to \ker (H^1(X))\to 0.\] Hence \[ \ker (H^1(X))\cong \mathbb{H}^1(X)/\Fil^1\mathbb{H}^1(X),\] where $\Fil^1\mathbb{H}^1(X)$ denotes the image of $H^0({\mathscr{F}\!\ell},\mathcal{G})$ in $\mathbb{H}^1(X)$. On the other hand, since there is no $H^2$ on ${\mathscr{F}\!\ell}$, we have another exact sequence \[0\to H^1({\mathscr{F}\!\ell},\ker X)\to \mathbb{H}^1(X)\to H^0({\mathscr{F}\!\ell},\coker X)\to 0.\] The composite map $H^0({\mathscr{F}\!\ell},\mathcal{G}) \to \mathbb{H}^1(X)\to H^0({\mathscr{F}\!\ell},\coker X)$ is nothing but $r$. From this we easily deduce the lemma. \end{rem} \begin{para} \label{e_1e_2ch} We apply this lemma to $\mathcal{F}=\mathcal{G}=\mathcal{O}^{\mathrm{la},\chi}_{K^p}$ and $X=u^+$; it then remains to determine the $\mathfrak{n}$-invariants $\mathcal{O}^{\mathrm{la},\chi,\mathfrak{n}}_{K^p}$, the coinvariants $(\mathcal{O}^{\mathrm{la},\chi}_{K^p})_{\mathfrak{n}}$ and their cohomology. This was computed in \S 5.2, 5.3 of \cite{Pan20}. To state the result, we denote by $\infty\in {\mathscr{F}\!\ell}$ the point where $e_1$ vanishes and by $i_{\infty}:\infty\to {\mathscr{F}\!\ell}$ the natural embedding. Following \cite[Definition 5.2.5, 5.3.8]{Pan20}, we define \begin{eqnarray*} M^\dagger_k(K^p)&:=&(\omega^{k,\mathrm{sm}}_{K^p})_\infty \mbox{, the stalk of } \omega^{k,\mathrm{sm}}_{K^p} \mbox{ at }\infty,\\ M_k(K^p)&:=&H^0({\mathscr{F}\!\ell},\omega^{k,\mathrm{sm}}_{K^p})=\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)}H^0(\mathcal{X}_{K^pK_p},\omega^k).
\end{eqnarray*} $M^\dagger_k(K^p)$ (resp. $M_k(K^p)$) is our space of overconvergent modular forms (resp. classical modular forms) of weight $k$ of tame level $K^p$. Then there is a natural restriction map $M_k(K^p)\to M^\dagger_k(K^p)$. Clearly $M_k(K^p)=0$ if $k<0$. Following \cite[Remark 5.2.7]{Pan20}, for a $B\times G_{\mathbb{Q}_p}$-representation $W$ and integers $i,j$, we use $W\cdot e_1^ie_2^j$ to denote the twist of $W(i+j)$ by the character sending $\begin{pmatrix} a & b\\ 0 & d \end{pmatrix}\in B$ to $a^id^j$. This agrees with the definition in \cite[Remark 5.2.7]{Pan20}. \end{para} \begin{prop} \label{kninvcoinv} Let $\chi=(0,k)\in\mathbb{Z}^2$ with $k\geq 0$. \begin{enumerate} \item There is a natural isomorphism \begin{eqnarray*} \omega^{-k,\mathrm{sm}}_{K^p}\cdot e_1^k&\cong&\mathcal{O}^{\mathrm{la},\chi,\mathfrak{n}}_{K^p} \\ s& \mapsto&e_1^ks; \end{eqnarray*} \item $(\mathcal{O}^{\mathrm{la},\chi}_{K^p})_{\mathfrak{n}}$ is a skyscraper sheaf supported at $\infty$ and there is a natural isomorphism \begin{eqnarray*} (i_\infty)_* M^\dagger_{-k}(K^p)\cdot e_2^{k} \oplus (i_\infty)_* M^\dagger_{-k}(K^p)\cdot e_1^{1+k}e_2^{-1} &\cong& (\mathcal{O}^{\mathrm{la},\chi}_{K^p})_{\mathfrak{n}} \\ (s_1,s_2)&\mapsto& e_2^{k}s_1+ e_1^{1+k}e_2^{-1}s_2. \end{eqnarray*} \item $H^0({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})=0$ unless $k=0$. If $k=0$, then there is a natural isomorphism \begin{eqnarray*} H^0({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{sm}}_{K^p})&\cong&H^0({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p}) \end{eqnarray*} induced from the natural inclusion $\mathcal{O}^{\mathrm{sm}}_{K^p}\subset \mathcal{O}^{\mathrm{la},\chi}_{K^p}$. In particular, $H^0({\mathscr{F}\!\ell},u^+(\mathcal{O}^{\mathrm{la},\chi}_{K^p}))=0$. \end{enumerate} \end{prop} \begin{proof} See Propositions 5.2.6, 5.2.9, 5.2.11 and Corollary 5.1.3 of \cite{Pan20}. \end{proof} We note that all these isomorphisms are $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear.
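\begin{rem} For later weight computations, let us record the $\mathfrak{h}$-weights of the summands above; this is pure bookkeeping with the twist convention of \cite[Remark 5.2.7]{Pan20}, under which $e_1^ie_2^j$ contributes the $B$-character $a^id^j$, i.e. the $\mathfrak{h}$-weight $(i,j)$: \begin{eqnarray*} \omega^{-k,\mathrm{sm}}_{K^p}\cdot e_1^k &:& \mbox{weight } (k,0),\\ (i_\infty)_* M^\dagger_{-k}(K^p)\cdot e_2^{k} &:& \mbox{weight } (0,k),\\ (i_\infty)_* M^\dagger_{-k}(K^p)\cdot e_1^{1+k}e_2^{-1} &:& \mbox{weight } (1+k,-1). \end{eqnarray*} After the further twist by $e_1^{-1}e_2$ making the coinvariants $B$-equivariant, cf. \ref{chininv}, the last two weights become $(-1,k+1)$ and $(k,0)$ respectively. \end{rem}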
\begin{cor} \label{gso2k2} Let $\chi=(0,k)\in\mathbb{Z}^2$ with $k\geq 0$. There are natural isomorphisms: \begin{enumerate} \item $(\omega^{2k+2,\mathrm{la},\chi}_{K^p})^{\mathfrak{n}} \cong \omega^{k+2,\mathrm{sm}}_{K^p}\cdot e_1^k$. \item $(\omega^{2k+2,\mathrm{la},\chi}_{K^p})_{\mathfrak{n}}\cong (i_\infty)_* M^\dagger_{k+2}(K^p)\cdot e_2^{k} \oplus (i_\infty)_* M^\dagger_{k+2}(K^p)\cdot e_1^{1+k}e_2^{-1} $. \item $H^0({\mathscr{F}\!\ell},\omega^{2k+2,\mathrm{la},\chi}_{K^p})\cong \Sym^k V(k) \otimes_{\mathbb{Q}_p} M_{k+2}(K^p)$ induced from the natural inclusion $\Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p}\subseteq \mathcal{O}^{\mathrm{la},\chi}_{K^p}$, cf. \ref{lalg0k}. \end{enumerate} \end{cor} \begin{proof} The first two claims follow from Proposition \ref{kninvcoinv} and the $\mathcal{O}^{\mathrm{sm}}_{K^p}$-linear isomorphism $\omega^{2k+2,\mathrm{la},\chi}_{K^p}\cong \mathcal{O}^{\mathrm{la},\chi}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\omega^{2k+2,\mathrm{sm}}_{K^p}$. For the last part, consider \[\bar{d}'^{k+1}:\omega^{2k+2,\mathrm{la},\chi}_{K^p}\to (\wedge^2 D^{\mathrm{sm}}_{K^p})^{k+1}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}\mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1)\cong \mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p}(k+1).\] By \cite[Corollary 5.1.3]{Pan20}, we have $H^0({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},-w\cdot(-\chi)}_{K^p})=0$. Hence \[H^0({\mathscr{F}\!\ell},\omega^{2k+2,\mathrm{la},\chi}_{K^p})=\ker(H^0(\bar{d}'^{k+1}))=H^0({\mathscr{F}\!\ell},\ker(\bar{d}'^{k+1}))=H^0({\mathscr{F}\!\ell}, \Sym^k V(k) \otimes_{\mathbb{Q}_p}\omega^{k+2,\mathrm{sm}}_{K^p})\] by Proposition \ref{dbarsurj}. \end{proof} \begin{rem} \label{resmap} There is a natural restriction map \[\mathrm{res}:H^0({\mathscr{F}\!\ell},\omega^{2k+2,\mathrm{la},\chi}_{K^p})\to H^0({\mathscr{F}\!\ell},(\omega^{2k+2,\mathrm{la},\chi}_{K^p})_{\mathfrak{n}}).
\] Using the isomorphisms in the previous corollary, we get \[\Sym^k V(k) \otimes_{\mathbb{Q}_p} M_{k+2}(K^p) \to M^\dagger_{k+2}(K^p)\cdot e_2^{k} \oplus M^\dagger_{k+2}(K^p)\cdot e_1^{1+k}e_2^{-1}.\] It is important to know this map explicitly. The $\mathfrak{n}$-coinvariants $(\Sym^k V)_{\mathfrak{n}}$ are isomorphic to $\mathbb{Q}_p$ by sending $(0,1)^{\otimes k}$ to $1$. (Recall that $V=\mathbb{Q}_p^{\oplus 2}$.) One can check easily that this map factors through the quotient \[(\Sym^k V)_{\mathfrak{n}}(k) \otimes_{\mathbb{Q}_p} M_{k+2}(K^p) \cong M_{k+2}(K^p) \cdot e_2^k\] of $\Sym^k V(k) \otimes_{\mathbb{Q}_p} M_{k+2}(K^p)$, i.e. $\mathrm{res}$ is the composite with the natural map $M_{k+2}(K^p)\cdot e_2^k\to M^\dagger_{k+2}(K^p)\cdot e_2^k$. \end{rem} \begin{para} \label{chininv} Now Lemma \ref{ninvlem} and Proposition \ref{kninvcoinv} imply that there is a natural exact sequence \[0\to H^1({\mathscr{F}\!\ell},\omega^{-k,\mathrm{sm}}_{K^p})\cdot e_1^k\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^\mathfrak{n} \to M^\dagger_{-k}(K^p)/M_{-k}(K^p)\cdot e_1^{-1}e_2^{k+1} \oplus M^\dagger_{-k}(K^p)\cdot e_1^{k} \to 0.\] We note that to make everything $B$-equivariant, $H^0({\mathscr{F}\!\ell},(\mathcal{O}^{\mathrm{la},\chi}_{K^p})_{\mathfrak{n}})$ needs to be twisted by $\mathfrak{n}^*$, or formally $\cdot e_1^{-1}e_2$. We remark that by \cite[Lemma 5.3.5]{Pan20}, \[H^1({\mathscr{F}\!\ell},\omega^{-k,\mathrm{sm}}_{K^p})= \varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^1(\mathcal{X}_{K^pK_p},\omega^{-k}).\] Similarly, we can also compute the $\mathfrak{n}$-invariants $H^1({\mathscr{F}\!\ell},\omega^{2k+2,\mathrm{la},\chi}_{K^p})^{\mathfrak{n}}$.
Since \[H^1({\mathscr{F}\!\ell},\omega^{2k+2,\mathrm{la},\chi,\mathfrak{n}}_{K^p})\cong H^1({\mathscr{F}\!\ell},\omega^{k+2,\mathrm{sm}}_{K^p})=0\] by \cite[Corollary 5.3.6]{Pan20}, Lemma \ref{ninvlem} simply becomes \[H^1({\mathscr{F}\!\ell},\omega^{2k+2,\mathrm{la},\chi}_{K^p})^{\mathfrak{n}}\cong M^\dagger_{k+2}(K^p)/M_{k+2}(K^p)\cdot e_1^{-1}e_2^{k+1} \oplus M^\dagger_{k+2}(K^p)\cdot e_1^{k}\] by taking Remark \ref{resmap} into account. We are ready to compute $(\ker I^1_k)^\mathfrak{n}$. By Lemma \ref{kerdk+1}, it is the same as $(\ker H^1(d^{k+1}))^{\mathfrak{n}}$. Fix a trivialization $\wedge^2 D^{\mathrm{sm}}_{K^p}\cong \mathcal{O}^{\mathrm{sm}}_{K^p}$. In particular, the operator $\theta^{k+1}$ in \ref{Do1} defines a map $M_{-k}^{\dagger}(K^p)\to M_{k+2}^\dagger(K^p)$ which we still call $\theta^{k+1}$ by abuse of notation. Note that Lemma \ref{ninvlem} is functorial in the pair $(\mathcal{F},X)$. It follows from Theorem \ref{I1} that we have the following commutative diagram \[\begin{tikzcd} H^1({\mathscr{F}\!\ell},\omega^{-k,\mathrm{sm}}_{K^p})\cdot e_1^k \arrow [d] \arrow[r] & H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^\mathfrak{n} \arrow[d,"H^1(d^{k+1})"] \arrow[r] & M^\dagger_{-k}/M_{-k}\cdot e_1^{-1}e_2^{k+1} \oplus M^\dagger_{-k}\cdot e_1^{k} \arrow[d,"\theta^{k+1}\oplus\theta^{k+1}"]\\ 0 \arrow[r] & H^1({\mathscr{F}\!\ell},\omega^{2k+2,\mathrm{la},\chi}_{K^p})^\mathfrak{n} \arrow[r,"\simeq"] & M^\dagger_{k+2}/M_{k+2}\cdot e_1^{-1}e_2^{k+1} \oplus M^\dagger_{k+2}\cdot e_1^{k} \end{tikzcd}.\] Here we drop all $(K^p)$ in $M^\dagger_{-k}(K^p),M^\dagger_{k+2}(K^p),M_{k+2}(K^p),M_{-k}(K^p)$ for simplicity. Recall that $\mathfrak{b}\subset\mathfrak{g}$ is the Lie algebra of $B$ and $\mathfrak{h}\subset \mathfrak{b}$ is a Cartan subalgebra, cf. the last paragraph of \ref{brr}. 
Then there is a natural action of $\mathfrak{h}$ on $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^\mathfrak{n}$, which induces a decomposition of $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^\mathfrak{n}$ into weight $(k,0)$ and $(-1,k+1)$ components. Note that this constant $\mathfrak{h}$-action is different from the horizontal action $\theta_\mathfrak{h}$. \end{para} \begin{defn} \label{D0} \[D_0:=H^0({\mathscr{F}\!\ell},\wedge^2 D^{\mathrm{sm}}_{K^p}).\] This is a free $M_0(K^p)$-module of rank one with a generator on which $\mathrm{GL}_2(\mathbb{A}_f)$ acts via $|\cdot|^{-1}\circ\det$. \end{defn} \begin{prop} There is a natural weight decomposition \[(\ker I^1_k)^\mathfrak{n}=A_{(k,0)}\cdot e_1^k\oplus A_{(-1,k+1)}\cdot e_1^{-1}e_2^{k+1},\] where \[A_{(-1,k+1)}\cong\ker\left(M^\dagger_{-k}(K^p)/M_{-k}(K^p)\xrightarrow{\theta^{k+1}}(M^\dagger_{k+2}(K^p)/M_{k+2}(K^p))\otimes_{M_0(K^p)}D_0^{\otimes -k-1}\right),\] and $A_{(k,0)}$ sits inside the exact sequence \[0\to \varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^1(\mathcal{X}_{K^pK_p},\omega^{-k})\to A_{(k,0)} \to M^{\dagger}_{-k}(K^p)\xrightarrow{\theta^{k+1}}M^\dagger_{k+2}(K^p)\otimes_{M_0(K^p)}D_0^{\otimes -k-1}.\] All the maps here are Hecke and $B$-equivariant. \end{prop} \begin{proof} Apply the snake lemma to the commutative diagram above. \end{proof} To simplify this result, we need the following well-known result on the kernel of $\theta^{k+1}$. \begin{lem} \label{kertheta} Suppose $k\geq 0$. Then the kernel of $\theta^{k+1}:M_{-k}^{\dagger}(K^p)\to M_{k+2}^\dagger(K^p)$ is $M_{-k}(K^p)$. In particular, $\theta^{k+1}$ is injective if $k>0$. \end{lem} \begin{proof} As explained in \cite[5.2.4]{Pan20}, $M^\dagger_k(K^p)$ has the following equivalent description. Let $\Gamma(p^n)=1+p^n M_2(\mathbb{Z}_p)$. We defined an open subset $\mathcal{X}_{K^p\Gamma(p^n),c}$ (canonical locus) of $\mathcal{X}_{K^p\Gamma(p^n)}$.
For each connected component $\mathcal{Z}$ of $\mathcal{X}_{K^p\Gamma(p^n)}$, the intersection $\mathcal{Z}\cap \mathcal{X}_{K^p\Gamma(p^n),c}$ is the generic fiber of an Igusa curve in the integral model of $\mathcal{X}_{K^p\Gamma(p^n)}$ defined by Katz-Mazur. In particular, the irreducibility of the Igusa curve implies that the connected components of $\mathcal{X}_{K^p\Gamma(p^n),c}$ are naturally in bijection with the connected components of $\mathcal{X}_{K^p\Gamma(p^n)}$, i.e. $\pi_0(\mathcal{X}_{K^p\Gamma(p^n),c})=\pi_0(\mathcal{X}_{K^p\Gamma(p^n)})$. We denote by $M^\dagger_k(K^p\Gamma(p^n))$ the set of sections of $\omega^k$ defined in a strict neighborhood of $\mathcal{X}_{K^p\Gamma(p^n),c}$. Then $\{M^\dagger_k(K^p\Gamma(p^n))\}_n$ form a direct system and $\varinjlim_n M^\dagger_k(K^p\Gamma(p^n))=M^\dagger_k(K^p)$. Note that $\theta^{k+1}$ maps $M^\dagger_{-k}(K^p\Gamma(p^n))$ to $M^\dagger_{k+2}(K^p\Gamma(p^n))$. Suppose $k=0$. Then $\theta^1$ is essentially the derivation. Hence $\ker(\theta^1|_{M^\dagger_0(K^p\Gamma(p^n))})$ is the set of locally constant functions on $\mathcal{X}_{K^p\Gamma(p^n),c}$. By our previous discussion, this is also the set of locally constant functions on $\mathcal{X}_{K^p\Gamma(p^n)}$, i.e. $H^0(\mathcal{X}_{K^p\Gamma(p^n)},\mathcal{O}_{\mathcal{X}_{K^p\Gamma(p^n)}})$. By passing to the limit over $n$, we get $\ker(\theta^1)=M_0(K^p)$. It remains to show that $\theta^{k+1}$ is injective when $k\geq 1$. This can be deduced in several different ways. We sketch a proof here using infinitesimal characters, or equivalently, using weights. \begin{lem} \label{unifb} We have $\dim_C \ker(\theta^{k+1}|_{M^\dagger_{-k}(K^p\Gamma(p^n))})\leq (k+1)|\pi_0(\mathcal{X}_{K^p\Gamma(p^n)})|$, and this bound depends only on $\det(K^p\Gamma(p^n))\mathbb{Q}^\times_{>0}\subset\mathbb{A}_f^\times$. \end{lem} Let us assume this lemma for the moment. Suppose $k\geq 1$.
Consider the direct limit over all open compact subgroups $K^p$ of $\mathrm{GL}_2(\mathbb{A}^p_f)$: \[N:=\varinjlim_{K^p\subseteq\mathrm{GL}_2(\mathbb{A}_f^p)}\ker(\theta^{k+1}|_{M^\dagger_{-k}(K^p \Gamma(p^n))}),\] which is a representation of $\mathrm{GL}_2(\mathbb{A}_f^p)$. Let $\psi:(\mathbb{A}_f^p)^\times\to C^\times$ be a smooth character. Then the previous lemma implies that its $\psi$-isotypic part $N[\psi]$ is a finite-dimensional smooth representation of $\mathrm{GL}_2(\mathbb{A}_f^p)$. Hence the action of $\mathrm{GL}_2(\mathbb{A}_f^p)$ on $N[\psi]$ factors through the determinant homomorphism. Therefore, if $\ker(\theta^{k+1}|_{M^\dagger_{-k}(K^p\Gamma(p^n))})\neq 0$, we can find a system of spherical Hecke eigenvalues appearing in $\ker(\theta^{k+1}|_{M^\dagger_{-k}(K^p\Gamma(p^n))})$ whose associated two-dimensional $p$-adic Galois representation has consecutive Hodge-Tate weights $a,a+1$. But this contradicts Theorem 1.1 of \cite{Pan2020N} as $|-k-1|>1$ when $k\geq 1$. Hence $\ker(\theta^{k+1})=0$. \end{proof} \begin{proof}[Proof of Lemma \ref{unifb}] Since $\pi_0(\mathcal{X}_{K^p\Gamma(p^n),c})=\pi_0(\mathcal{X}_{K^p\Gamma(p^n)})$, it suffices to show that on each connected component of $\mathcal{X}_{K^p\Gamma(p^n),c}$, the kernel of $\theta^{k+1}$ has dimension at most $k+1$. It is enough to prove this away from the cusps. Note that outside of the cusps, computing the kernel of \[\theta^{k+1}:\omega^{-k}\to\omega^{-k}\otimes_{\mathcal{O}_{\mathcal{X}_{K^pK_p}}}(\Omega^1(\mathcal{C}))^{\otimes k+1}\] is the same as solving a homogeneous linear ordinary differential equation of order $k+1$ (using the Kodaira-Spencer isomorphism). Hence the space of solutions is at most $(k+1)$-dimensional on a connected component by our knowledge of linear ODEs. \end{proof} Hence we get the following theorem.
\begin{thm} \label{kernk} For $\chi=(0,k)$, there is a natural weight decomposition \[(\ker I^1_k)^\mathfrak{n}=A_{(k,0)}\cdot e_1^k\oplus A_{(-1,k+1)}\cdot e_1^{-1}e_2^{k+1},\] where \begin{itemize} \item $A_{(-1,k+1)}\cong\ker\left(M^\dagger_{-k}(K^p)/M_{-k}(K^p)\xrightarrow{\theta^{k+1}}(M^\dagger_{k+2}(K^p)/M_{k+2}(K^p))\otimes_{M_0(K^p)}D_0^{\otimes -k-1}\right)$ is isomorphic to a subspace of $M_{k+2}(K^p)\otimes_{M_0(K^p)}D_0^{\otimes -k-1}$ via $\theta^{k+1}$. \item $A_{(k,0)}$ sits inside the exact sequence \[0\to \varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^1(\mathcal{X}_{K^pK_p},\omega^{-k})\to A_{(k,0)} \to M_{-k}(K^p)\to 0.\] \end{itemize} All maps here are Hecke and $B$-equivariant. In particular, the Hecke action on $(\ker I^1_k)^\mathfrak{n}$ is ``classical'' in the sense that systems of Hecke eigenvalues appearing in $(\ker I^1_k)^\mathfrak{n}$ all come from automorphic forms on $\mathrm{GL}_2(\mathbb{A})$. \end{thm} \begin{rem} \label{A00rinv} When $k>0$, we have $A_{(k,0)}=\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^1(\mathcal{X}_{K^pK_p},\omega^{-k})$ because $M_{-k}(K^p)=0$. When $k=0$, it turns out that the surjective map $A_{(0,0)}\to M_{0}(K^p)$ has a natural right inverse $M_0(K^p)\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})$ constructed in \cite[Corollary 5.1.3.(2)]{Pan20} which is $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant. See the discussion before Theorem 5.3.16 \textit{ibid.}. Hence we have \[A_{(0,0)}=\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^1(\mathcal{X}_{K^pK_p},\mathcal{O}_{\mathcal{X}_{K^pK_p}})\oplus M_0(K^p).\] \end{rem} \begin{rem} It is natural to ask whether one can give a more precise description of $A_{(-1,k+1)}$. 
Essentially one needs to understand the cokernel of $\theta^{k+1}$ on $\mathcal{X}_{K^pK_p,c}$, which is the $H^1$ of the de Rham complex $\Sym^k D\xrightarrow{\nabla} \Sym^k D \otimes_{\mathcal{O}_{\mathcal{X}_{K^pK_p}}} \Omega^1_{\mathcal{X}_{K^pK_p}}(\mathcal{C})$ on the dagger space associated to $\mathcal{X}_{K^pK_p,c}$. One can also interpret this as some rigid cohomology on the Igusa curve. Coleman was able to determine the primitive part in his famous work \cite{Cole97}, and his result roughly says that $\coker(\theta^{k+1})$ is closely related to the finite slope part of classical modular forms. \end{rem} \begin{cor} \label{knfin} Let $K_p=1+p^lM_2(\mathbb{Z}_p)$ for some integer $l\geq 2$ and let $(\ker I^1_k)^{K_p-\mathrm{an}}\subseteq \ker I^1_k$ be the subspace of $K_p$-analytic vectors. Then \begin{enumerate} \item $(\ker I^1_k)^{K_p-\mathrm{an},\mathfrak{n}}$ is finite-dimensional. \item The subspace of $\mathfrak{n}$-finite vectors \[(\ker I^1_k)^{K_p-\mathrm{an},\mathfrak{n}-\mathrm{fin}}:=\{v\in(\ker I^1_k)^{K_p-\mathrm{an}},\ (u^+)^m\cdot v=0\mbox{ for some }m\}\] is a finitely generated $U(\mathfrak{g})$-module. \end{enumerate} \end{cor} \begin{proof} The second part follows from the first part. For the first part, we first explain the rough idea. The $K_p$-analytic vectors in a smooth representation of $\mathrm{GL}_2(\mathbb{Q}_p)$ are equal to the $K_p$-fixed vectors. In particular, the $K_p$-analytic vectors in $M_{k+2}(K^p)$ are \[M_{k+2}(K^pK_p):=H^0(\mathcal{X}_{K^pK_p},\omega^{k+2}),\] a finite-dimensional vector space over $C$ as $\mathcal{X}_{K^pK_p}$ is proper over $C$. Similarly, the subspace of $K_p$-analytic vectors in $\varinjlim_{K'_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^1(\mathcal{X}_{K^pK'_p},\omega^{-k})$ is also finite-dimensional. Therefore our finiteness claim will follow from Theorem \ref{kernk} if we can show that the maps in Theorem \ref{kernk} preserve the property of being $K_p$-analytic.
For $A_{(k,0)}$, there is a natural inclusion $\Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p}\subseteq \mathcal{O}^{\mathrm{la},(0,k)}_{K^p}$ by \ref{lalg0k}. Taking the $H^1$, we get a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant inclusion \[\Sym^k V(k)\otimes_{\mathbb{Q}_p}H^1({\mathscr{F}\!\ell},\omega^{-k,\mathrm{sm}}_{K^p})\cong H^1({\mathscr{F}\!\ell},\Sym^k V(k)\otimes_{\mathbb{Q}_p}\omega^{-k,\mathrm{sm}}_{K^p})\subseteq H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,k)}_{K^p})\] which recovers $\varinjlim_{K_p\subset\mathrm{GL}_2(\mathbb{Q}_p)} H^1(\mathcal{X}_{K^pK_p},\omega^{-k})\to A_{(k,0)}$ in Theorem \ref{kernk} by taking the $\mathfrak{n}$-invariants of this map. Note that $\Sym^k V$ is an algebraic representation of $\mathrm{GL}_2(\mathbb{Q}_p)$. Hence it is clear from this description that \[H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,k)}_{K^p})^{K_p-\mathrm{an}}\cap H^1({\mathscr{F}\!\ell},\omega^{-k,\mathrm{sm}}_{K^p})\cdot e_1^k=H^1(\mathcal{X}_{K^pK_p},\omega^{-k})\cdot e_1^k\] is finite-dimensional. If $k=0$, by Remark \ref{A00rinv}, the inclusion $M_{0}(K^p)\subset H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(0,0)}_{K^p})$ is $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant. Hence the same argument works. This proves that $(A_{(k,0)}\cdot e_1^k)^{K_p-\mathrm{an}}$ is finite-dimensional. (Essentially, we are showing that $A_{(k,0)}\cdot e_1^k$ comes entirely from the locally algebraic vectors.) For $A_{(-1,k+1)}$, the situation is a bit more complicated. We need several lemmas. Note that $e_1^k\neq e_1^{-1}e_2^{k+1}$ as $k\geq 0$. Hence the exact sequence in the beginning of \ref{chininv} implies that there is a natural inclusion \[M^\dagger_{-k}(K^p)/M_{-k}(K^p)\cdot e_1^{-1}e_2^{k+1} \subseteq H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^\mathfrak{n}.\] We need to understand which vectors in $M^\dagger_{-k}(K^p)/M_{-k}(K^p)\cdot e_1^{-1}e_2^{k+1}$ are $K_p$-analytic.
Recall that $M^\dagger_k(K^p\Gamma(p^m))$ denotes the set of sections of $\omega^k$ defined in a strict neighborhood of $\mathcal{X}_{K^p\Gamma(p^m),c}$. \begin{lem} \label{bdan} There exist an integer $m>0$ and an affinoid strict open neighborhood $V$ of $\mathcal{X}_{K^p\Gamma(p^m),c}$ such that the $K_p$-analytic vectors \[M^\dagger_{-k}(K^p)/M_{-k}(K^p)\cdot e_1^{-1}e_2^{k+1} \cap H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{K_p-\mathrm{an},\mathfrak{n}}\] are contained in the image of $H^0(V,\omega^{-k})\to M^\dagger_{-k}(K^p\Gamma(p^m)) \to M^\dagger_{-k}(K^p)/M_{-k}(K^p)$. \end{lem} We also need an irreducibility result. For $m\geq n$, denote the natural map $\mathcal{X}_{K^p\Gamma(p^m)} \to \mathcal{X}_{K^p\Gamma(p^n)}$ by $\pi_{m/n}$. \begin{lem} \label{irredigsnbhd} Let $V$ be an affinoid subset of $\mathcal{X}_{K^p\Gamma(p^m)}$ for some $m$. Suppose that for each connected component $X$ of $\mathcal{X}_{K^p\Gamma(p^m)}$, the intersection $V\cap X$ is either empty or a connected strict open neighborhood of $\mathcal{X}_{K^p\Gamma(p^m),c}\cap X$. (This is possible as Igusa curves are irreducible.) Then there exists an integer $m'\geq m$ such that for each connected component $V'$ of $\pi_{m'/m}^{-1}(V)$ and any integer $m''\geq m'$, the intersection of $\pi_{m''/m'}^{-1}(V')$ with each connected component of $\mathcal{X}_{K^p\Gamma(p^{m''})}$ is either connected or empty. \end{lem} Both lemmas will be proved later. We now return to the proof of Corollary \ref{knfin}. Let $V$ be an open subset of $\mathcal{X}_{K^p\Gamma(p^m)}$ as in Lemma \ref{bdan}. We may assume that it satisfies the assumption in Lemma \ref{irredigsnbhd}, i.e. $V\cap X$ is connected for each connected component $X$ of $\mathcal{X}_{K^p\Gamma(p^m)}$. Fix a trivialization $D_0\cong M_0(K^p)$. It suffices to show that the kernel of the composite \[H^0(V,\omega^{-k})\xrightarrow{\theta^{k+1}} H^0(V,\omega^{k+2}) \to M^\dagger_{k+2}(K^p)/M_{k+2}(K^p)\] is finite-dimensional.
In fact, we claim that $\ker\left(H^0(V,\omega^{k+2}) \to M^\dagger_{k+2}(K^p)/M_{k+2}(K^p)\right)$ is finite-dimensional. Indeed, by Lemma \ref{irredigsnbhd}, after possibly replacing $m$ by $m'$ and $V$ by the union of irreducible components of $\pi_{m'/m}^{-1}(V)$ whose intersection with $\mathcal{X}_{K^p\Gamma(p^{m'}),c}$ is non-empty, we may assume further that \begin{itemize} \item the preimage of $V$ in each connected component of $\mathcal{X}_{K^p\Gamma(p^{m''})}$ is connected for any $m''\geq m$. \end{itemize} Suppose $s\in\ker\left(H^0(V,\omega^{k+2}) \to M^\dagger_{k+2}(K^p)/M_{k+2}(K^p) \right)$. This means that the pull-back of $s$ to $\mathcal{X}_{K^p\Gamma(p^{m''})}$ is equal to some $s'\in H^0(\mathcal{X}_{K^p\Gamma(p^{m''})},\omega^{k+2})$ when restricted to a strict open neighborhood of $\mathcal{X}_{K^p\Gamma(p^{m''}),c}$ for some $m''\geq m$. On the other hand, it follows from our assumption that we can choose this open neighborhood to be $\pi_{m''/m}^{-1}(V)$, which is $\Gamma(p^m)$-stable. Since $s$ is fixed by $\Gamma(p^m)$, so is $s'$, i.e. $s'$ is the pull-back of some element in $M_{k+2}(K^p\Gamma(p^m))$. This implies our claim as $M_{k+2}(K^p\Gamma(p^m))$ is finite-dimensional. \end{proof} \begin{proof}[Proof of Lemma \ref{bdan}] By Proposition \ref{contrGnan}, $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{K_p-\mathrm{an}}$ is contained in the image of $\check{H}^1(\mathcal{O}^{\Gamma(p^m)-\mathrm{an},\chi})\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})$ for some $m$. Moreover after possibly enlarging $m$, we may even assume there is a natural lifting $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{K_p-\mathrm{an}}\subseteq \check{H}^1(\mathcal{O}^{\Gamma(p^m)-\mathrm{an},\chi})$ by the discussion in \ref{discLB}. 
In particular, we may view $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^{K_p-\mathrm{an},\mathfrak{n}}$ as a subspace of $\check{H}^1(\mathcal{O}^{\Gamma(p^m)-\mathrm{an},\chi})^\mathfrak{n}$. Let $H=\Gamma(p^m)$. By Definition \ref{chGanchi}, there is an exact sequence \[0\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1)^{H-\mathrm{an}}\oplus \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{H-\mathrm{an}}/\check{H}^0(\mathcal{O}^{H-\mathrm{an},\chi})\to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_{12})^{H-\mathrm{an}}\to \check{H}^1(\mathcal{O}^{H-\mathrm{an},\chi})\to 0.\] Taking the $\mathfrak{n}$-cohomology, we obtain the exact sequence \[\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_{12})^{H-\mathrm{an},\mathfrak{n}}\to \check{H}^1(\mathcal{O}^{H-\mathrm{an},\chi})^\mathfrak{n} \to \left(\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1)^{H-\mathrm{an}}\right)_\mathfrak{n} \oplus \left( \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{H-\mathrm{an}}/\check{H}^0(\mathcal{O}^{H-\mathrm{an},\chi})\right)_\mathfrak{n}.\] Recall that $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^\mathfrak{n}$ has a natural weight decomposition into weight $(k,0)$ and $(-1,k+1)$ components, and we are interested in the $(-1,k+1)$ part. Since $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_{12})^{H-\mathrm{an},\mathfrak{n}}$ has weight $(k,0)$, it is easy to see that we have the following commutative diagram \[\begin{tikzcd} \check{H}^1(\mathcal{O}^{H-\mathrm{an},\chi})^\mathfrak{n}_{(-1,k+1)} \arrow [d] \arrow[r] & \left( \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{H-\mathrm{an}}/\check{H}^0(\mathcal{O}^{H-\mathrm{an},\chi})\right)_\mathfrak{n} \arrow[d] \\ H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p})^\mathfrak{n}_{(-1,k+1)} \arrow[r,"\simeq"] & M^\dagger_{-k}(K^p)/M_{-k}(K^p)\cdot e_1^{-1}e_2^{k+1} \end{tikzcd},\] where the subscript $(-1,k+1)$ means the weight-$(-1,k+1)$ component and the bottom line follows from the beginning of \ref{chininv}.
(There is no contribution from $(\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_1)^{H-\mathrm{an}})_\mathfrak{n}$ because $(\mathcal{O}^{\mathrm{la},\chi}_{K^p})_\mathfrak{n}$ is supported outside of $U_1$.) Hence it remains to study the image of \[ev_\infty:\left( \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{H-\mathrm{an}}/\check{H}^0(\mathcal{O}^{H-\mathrm{an},\chi})\right)_\mathfrak{n} \to M^\dagger_{-k}(K^p)/M_{-k}(K^p).\] By \cite[Proposition 5.2.10.(2)]{Pan20}, this map comes from the composite map \[ \left( \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{H-\mathrm{an}}\right)_\mathfrak{n} \to \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)/(y)\cong M^\dagger_{-k}(K^p),\] where $y=1/x$ vanishes at $\infty$. Now our claim essentially follows from the proof of \cite[Proposition 5.2.6]{Pan20}, i.e. the construction of the isomorphism $\mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)/(y)\cong M^\dagger_{-k}(K^p)$. Indeed, by Theorem \ref{str} (with $x$ replaced by $y$ and $x_n=0$ here), we can find sufficiently large integers $m'<r(m')$ such that for any $f\in \mathcal{O}^{\mathrm{la},\chi}_{K^p}(U_2)^{H-\mathrm{an}}$, its restriction to $U'_2:=\{\|y\|\leq p^{-{m'}}\} \subseteq U_2$ can be written as \[f|_{U'_2}=e_2^{k}\sum_{i=0}^\infty c_i y^i,\] where $c_i\in\omega^{-k}(V'_{2})$ and $e_2^kc_ip^{(m'-1)i}$ is uniformly bounded for $i\geq 0$. Here $V'_2\subseteq \mathcal{X}_{K^p\Gamma(p^{r(m')})}$ denotes the open affinoid subset whose preimage in $\mathcal{X}_{K^p}$ is $\pi_{\mathrm{HT}}^{-1}(U'_2)$. The map $ev_\infty$ simply sends $f$ to $c_0$. See the proof of \cite[Proposition 5.2.10.(3),(4)]{Pan20} for more details. Note that $V'_2$ is a strict neighborhood of $\mathcal{X}_{K^p\Gamma(p^{r(m')}),c}$ by \cite[Lemma 5.2.9]{Pan20}. This proves Lemma \ref{bdan} with $m=r(m')$ and $V=V'_2$. \end{proof} \begin{proof}[Proof of Lemma \ref{irredigsnbhd}] We may assume $V$ is connected.
Let \[S_1=\varprojlim_{n\geq m} \pi_0 \left(\pi_{n/m}^{-1}(V)\right),~~~~~~~~~~~~S_2=\varprojlim_{n} \pi_0 (\mathcal{X}_{K^p\Gamma(p^n)}).\] $\Gamma(p^m)$ acts naturally on both profinite sets. The inclusion of $V\subseteq \mathcal{X}_{K^p\Gamma(p^m)}$ induces a natural $\Gamma(p^m)$-equivariant map \[f:S_1\to S_2.\] Our claim is equivalent to the existence of an integer $m'\geq m$ such that $f|_T$ is injective for any $\Gamma(p^{m'})$-orbit $T$ of $S_1$. By our knowledge of connected components of modular curves, for any $s\in S_2$, the stabilizer of $s$ in $\Gamma(p^m)$ is $\Gamma(p^m)\cap\mathrm{SL}_2(\mathbb{Z}_p)$. Since $V$ is connected, $\Gamma(p^m)$ acts transitively on $S_1$. Hence $S_1=\Gamma(p^m)/H$ for some closed subgroup $H$ of $\Gamma(p^m)$. It suffices to prove that \[H\cap\Gamma(p^{m'})=\Gamma(p^{m'})\cap\mathrm{SL}_2(\mathbb{Z}_p)\] for some $m'\geq m$. Equivalently, $\Lie(H)=\mathfrak{sl}_2(\mathbb{Q}_p)$. To see this, we consider the space of $\Gamma(p^m)$-locally analytic $\mathbb{Q}_p$-valued functions $\mathscr{C}(S_1,\mathbb{Q}_p)^{\mathrm{la}}$ on $S_1$. It is enough to show that $\mathfrak{sl}_2(\mathbb{Q}_p)$ acts trivially on $\mathscr{C}(S_1,\mathbb{Q}_p)^{\mathrm{la}}$. Let $V_\infty$ be the preimage of $V$ in $\mathcal{X}_{K^p}$. By shrinking $V$ if necessary, we may assume $e_2$ is a generator on $V_\infty$, cf. \cite[Lemma 5.2.9]{Pan20} and its proof. There is a natural continuous map $c:|V_\infty|\to S_1$, hence we may view $\mathscr{C}(S_1,\mathbb{Q}_p)^{\mathrm{la}}$ as a subspace of $\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)^\mathrm{la}$. In \cite[\S 3]{Pan20}, we showed that elements in $\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)^\mathrm{la}$ satisfy a first-order differential equation, which we calculated in \cite[Theorem 4.2.4]{Pan20}. Note that $e_2$ is a generator. Hence by \textit{loc.
cit.} \[\begin{pmatrix} y & 1 \\ -y^2 & -y\end{pmatrix}\cdot f=0,~~~~~f\in\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)^\mathrm{la}.\] Recall $y=e_1/e_2$ is a coordinate function on $\pi_\mathrm{HT}(V_\infty)$. We can rewrite this equation as \begin{eqnarray} \label{sl2eqn} \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}\cdot f +y\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}\cdot f +y^2\begin{pmatrix} 0 & 0 \\ -1 & 0\end{pmatrix}\cdot f=0. \end{eqnarray} \begin{lem} \label{largeim} $\pi_{\mathrm{HT}}\left(c^{-1}(s)\right)$ contains at least $3$ classical points for any $s\in S_1$, where $c$ denotes the natural map $|V_\infty|\to S_1$. \end{lem} \begin{proof} Since $\Gamma(p^m)$ acts transitively on $S_1$, it is enough to find $3$ classical points in $\pi_{\mathrm{HT}}(V_\infty)$ which belong to different orbits of $\Gamma(p^m)$. In fact, it follows from Scholze's construction of $\mathcal{X}_{K^p}$ in \cite[Chapter III]{Sch15} that $\pi_{\mathrm{HT}}(V_\infty)$ contains an open subset of ${\mathscr{F}\!\ell}$. We give a more direct proof here. By our assumption, $V$ contains the canonical locus of a connected component of $\mathcal{X}_{K^p\Gamma(p^m)}$, hence $\infty\in \pi_{\mathrm{HT}}(V_\infty)$. Take a classical point outside of the canonical locus in $V$. Its preimage in $\mathcal{X}_{K^p}$ goes to a classical non-$\mathbb{Q}_p$-rational point $s_2$ in ${\mathscr{F}\!\ell}$ under $\pi_\mathrm{HT}$. Note that $q=\displaystyle \mathrm{inf}_{g\in\Gamma(p^m)} |y(g\cdot s_2)|>0$. We apply Lemma 5.2.9 of \cite{Pan20} with $U=\{\|y\|\leq q/2\}\subseteq{\mathscr{F}\!\ell}$ and conclude that there is a classical point outside of the canonical locus in $\pi^{-1}_{m'/m}(V)$ for some $m'$, whose preimage in $\mathcal{X}_{K^p}$ maps to a point $s_3\in U$ under $\pi_\mathrm{HT}$. Clearly $\infty,s_2,s_3$ belong to different $\Gamma(p^m)$-orbits. \end{proof} Now let $f\in\mathscr{C}(S_1,\mathbb{Q}_p)^{\mathrm{la}}$.
For any $s\in S_1$, by Lemma \ref{largeim}, there exist $3$ points $s_1,s_2,s_3$ in $c^{-1}(s)\subseteq V_\infty$ such that $y(s_1),y(s_2),y(s_3)\in C$ are distinct. Then we can evaluate \eqref{sl2eqn} at $s_1,s_2,s_3$; since the $3\times 3$ Vandermonde matrix in the distinct values $y(s_1),y(s_2),y(s_3)$ is invertible, we conclude that \[\left(\begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}\cdot f\right)(s)=\left(\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}\cdot f \right)(s)=\left(\begin{pmatrix} 0 & 0 \\ -1 & 0\end{pmatrix}\cdot f\right)(s)=0.\] This immediately implies our claim as $\begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix},\begin{pmatrix} 0 & 0 \\ -1 & 0\end{pmatrix}$ generate $\mathfrak{sl}_2(\mathbb{Q}_p)$. \end{proof} \begin{rem} The same argument gives a purely $p$-adic proof of the well-known fact that the action of $\mathrm{SL}_2(\mathbb{Q}_p)$ on $\varinjlim_{K_p}\pi_0(\mathcal{X}_{K^pK_p})$ is trivial. \end{rem} \section{Intertwining operators: spectral decomposition} \label{Iosd} In this section, we continue our study of the intertwining operators $I_k$ and $I^1_k$. Our main result (Theorem \ref{sd}) below gives a decomposition of $\ker I^1_k$ with respect to the Hecke action. To do this, probably not surprisingly, we will use the Newton stratification on the flag variety ${\mathscr{F}\!\ell}=\mathbb{P}^1=\mathbb{P}^1(\mathbb{Q}_p)\sqcup \Omega$, where $ \Omega=\mathbb{P}^1\setminus \mathbb{P}^1(\mathbb{Q}_p)$ denotes the usual Drinfeld upper half plane, and compute the kernel and cokernel of $I_k$ on each stratum. Recall some notations. Fix a non-negative integer $k$ throughout this section.
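Before proceeding, we record a mechanical verification of the elementary linear algebra used in the proof of the previous claim: the decomposition behind \eqref{sl2eqn}, the invertibility of the Vandermonde system arising from three distinct values of $y$, and the fact that the three matrices there span $\mathfrak{sl}_2$. The following sketch (plain Python, exact rationals via the standard \texttt{fractions} module) is purely illustrative and not part of the argument.

```python
from fractions import Fraction

# Standard generators of sl_2 as they appear in equation (sl2eqn) above.
E = [[0, 1], [0, 0]]
H = [[1, 0], [0, -1]]
F = [[0, 0], [-1, 0]]

def combo(y):
    """Entrywise E + y*H + y^2*F."""
    return [[E[i][j] + y * H[i][j] + y * y * F[i][j] for j in range(2)]
            for i in range(2)]

# (1) The rewriting behind (sl2eqn): E + y*H + y^2*F = [[y, 1], [-y^2, -y]].
for y in [Fraction(2), Fraction(1, 3), Fraction(-5, 7)]:
    assert combo(y) == [[y, 1], [-y * y, -y]]

# (2) Three distinct evaluation values of y give an invertible Vandermonde
# system: det = prod_{i<j} (y_j - y_i) != 0, so each of the three coefficient
# terms in (sl2eqn) must vanish separately.
def vandermonde_det(y1, y2, y3):
    return (y2 - y1) * (y3 - y1) * (y3 - y2)

assert vandermonde_det(Fraction(0), Fraction(1, 2), Fraction(3)) != 0

# (3) a*E + b*H + c*F has entries [[b, a], [-c, -b]], so E, H, F are linearly
# independent and hence span the 3-dimensional Lie algebra sl_2.
a, b, c = Fraction(4), Fraction(-1), Fraction(7)
M = [[a * E[i][j] + b * H[i][j] + c * F[i][j] for j in range(2)]
     for i in range(2)]
assert M == [[b, a], [-c, -b]]
```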
For $\chi=(n_1,n_2)\in\mathbb{Z}^2$ with $n_2-n_1=k$, we have \[d^{k+1}: \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\to \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1}\] \[d'^{k+1}:\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}\to\mathcal{O}^{\mathrm{la},(n_2+1,n_1-1)}_{K^p}(k+1).\] Similarly, \[\bar{d}^{k+1}: \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\to \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1},\] \[\bar{d}'^{k+1}: \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1}\to \mathcal{O}^{\mathrm{la},(n_2+1,n_1-1)}_{K^p}(k+1).\] Hence $I_k=d'^{k+1}\circ \bar{d}^{k+1}=\bar{d}'^{k+1}\circ d^{k+1}$. \subsection{\texorpdfstring{$I_k$}{Lg} on \texorpdfstring{$\mathbb{P}^1(\mathbb{Q}_p)$}{Lg}} \begin{para} We first compute the stalks of $\ker d^{k+1}$ and $\coker d^{k+1}$ at points in $\mathbb{P}^1(\mathbb{Q}_p)$. Since everything is $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant and $\mathrm{GL}_2(\mathbb{Q}_p)$ acts transitively on $\mathbb{P}^1(\mathbb{Q}_p)$, it's enough to determine the stalks at one point. Here we choose it to be $\infty\in{\mathscr{F}\!\ell}$, the vanishing locus of $e_1$. Note that $y=\frac{e_1}{e_2}$ is a local coordinate around $\infty$. We need the following description of the stalk of $\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}$ at $\infty$. Note that \[(\mathcal{O}_{{\mathscr{F}\!\ell}})_{\infty}=\varinjlim_{n} \mathcal{O}_C[[\frac{y}{p^n}]][\frac{1}{p}]\] and we view $\mathcal{O}_C[[\frac{y}{p^n}]][\frac{1}{p}]$ as a $C$-Banach space with unit open ball $\mathcal{O}_C[[\frac{y}{p^n}]]$.
On the other hand, for $l\in\mathbb{Z}$, recall that \[(\omega^{l,\mathrm{sm}}_{K^p})_\infty=M^{\dagger}_{l}(K^p)=\varinjlim_{n}\varinjlim_{U\supseteq \bar{\mathcal{X}}_{K^p\Gamma(p^n),c}}\omega^l(U),\] where $U$ runs through all strict affinoid open neighborhoods of $\mathcal{X}_{K^p\Gamma(p^n),c}$. See the notation in the proof of Lemma \ref{kertheta}. Define \[(\mathcal{O}_{{\mathscr{F}\!\ell}})_{\infty}\widehat\otimes_{C} M^{\dagger}_{l}(K^p):= \varinjlim_{n}\varinjlim_{U\supseteq \bar{\mathcal{X}}_{K^p\Gamma(p^n),c}} \mathcal{O}_C[[ \frac{y}{p^n}]][\frac{1}{p}] \widehat\otimes_C~ \omega^l(U),\] equipped with the inductive limit topology. Equivalently, any element of $(\mathcal{O}_{{\mathscr{F}\!\ell}})_{\infty}\widehat\otimes_{C} M^{\dagger}_{l}(K^p)$ can be written as $\displaystyle \sum_{i=0}^{+\infty}c_iy^{i}$, where all $c_i\in \omega^l(U)$ for some strict affinoid open neighborhood $U$ of some $\mathcal{X}_{K^p\Gamma(p^n),c}$ and $c_ip^{mi},i\geq 0$ are uniformly bounded for some $m$. Take $l=n_1-n_2$. Then there is a natural continuous map \[(\mathcal{O}_{{\mathscr{F}\!\ell}})_{\infty}\widehat\otimes_{C} M^{\dagger}_{n_1-n_2}(K^p) \cdot \mathrm{t}^{n_1}e_2^{n_2-n_1}\to (\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p})_\infty\] sending $\displaystyle \sum_{i=0}^{+\infty}c_iy^{i}$ to $\displaystyle \mathrm{t}^{n_1}e_2^{n_2-n_1}\sum_{i=0}^{+\infty}c_iy^{i}$. Note that $(\mathcal{O}_{{\mathscr{F}\!\ell}})_{\infty}\cdot e_2^{n_2-n_1}=(\omega^{n_2-n_1}_{{\mathscr{F}\!\ell}})_{\infty}(n_2-n_1)$. We can also rewrite this map as \[(\omega^{n_2-n_1}_{{\mathscr{F}\!\ell}})_{\infty}(n_2-n_1)\widehat\otimes_{C} M^{\dagger}_{n_1-n_2}(K^p) \cdot \mathrm{t}^{n_1}\to (\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p})_\infty.\] \end{para} \begin{lem} \label{Olainfty} This is an isomorphism, i.e.
\[(\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p})_\infty\cong (\omega^{n_2-n_1}_{{\mathscr{F}\!\ell}})_{\infty}\widehat\otimes_{C} M^{\dagger}_{n_1-n_2}(K^p)(n_2-n_1) \cdot \mathrm{t}^{n_1}.\] Similarly, for $l\in\mathbb{Z}$, \[(\omega^{l,\mathrm{la},(n_1,n_2)}_{K^p})_\infty\cong (\omega^{n_2-n_1}_{{\mathscr{F}\!\ell}})_{\infty}\widehat\otimes_{C} M^{\dagger}_{n_1-n_2+l}(K^p)(n_2-n_1) \cdot \mathrm{t}^{n_1}.\] \end{lem} \begin{proof} The first claim is essentially proved in the proof of \cite[Proposition 5.2.10]{Pan20}. We give a sketch here. By Theorem \ref{str} (after applying $\begin{pmatrix} 0 & 1\\ 1& 0\end{pmatrix}$), a section of $\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}$ defined in an open neighborhood of $\infty$ can be written as $\displaystyle \mathrm{t}^{n_1}e_2^{n_2-n_1}\sum_{i=0}^{+\infty}c_i(y-y_n)^{i}$ for some $y_n$. After shrinking this open neighborhood, we may choose $y_n=0$. Thus we see that any element of $(\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p})_\infty$ has the desired form. This implies the second claim by Lemma \ref{tensomegaksm}. \end{proof} \begin{para} Suppose $(n_1,n_2)\in\mathbb{Z}^2$ and $n_2-n_1=k$. Fix a trivialization of $\wedge^2 D^{\mathrm{sm}}_{K^p}$ as before, i.e. an isomorphism $D_0=H^0(\wedge^2 D^{\mathrm{sm}}_{K^p})\cong M_0(K^p)$. Consider $d^{k+1}:\mathcal{O}^{\mathrm{la},\chi}_{K^p}\to \omega^{2k+2,\mathrm{la},\chi}_{K^p}$ at $\infty$. Under the isomorphisms in Lemma \ref{Olainfty}, we may identify $(d^{k+1})_\infty:(\mathcal{O}^{\mathrm{la},\chi}_{K^p})_\infty\to (\omega^{2k+2,\mathrm{la},\chi}_{K^p})_\infty$ with the map \[1\otimes\theta^{k+1}:(\omega^k_{{\mathscr{F}\!\ell}})_{\infty}\widehat\otimes_{C} M^{\dagger}_{-k}(K^p) \cdot \mathrm{t}^{n_1} \to (\omega^k_{{\mathscr{F}\!\ell}})_{\infty}\widehat\otimes_{C} M^{\dagger}_{k+2}(K^p) \cdot \mathrm{t}^{n_1}\] induced from $\theta^{k+1}:M^{\dagger}_{-k}(K^p)\to M^{\dagger}_{k+2}(K^p)$. 
By Lemma \ref{kertheta}, this map is injective if $k\geq 1$, and when $k=0$, the kernel of $M^{\dagger}_0(K^p\Gamma(p^n))\to M^{\dagger}_2(K^p\Gamma(p^n))$ is $M_0(K^p\Gamma(p^n))$, which is finite-dimensional. Hence \[\ker(d^{1})_\infty= (\mathcal{O}_{{\mathscr{F}\!\ell}})_{\infty}\otimes_{C} M_{0}(K^p) \cdot \mathrm{t}^{n_1}\] (the usual tensor product). Since $\mathrm{GL}_2(\mathbb{Q}_p)$ acts transitively on $\mathbb{P}^1(\mathbb{Q}_p)$, this also implies the following result. \end{para} \begin{prop} \label{kerdc} For $d^{k+1}:\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\to \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}(\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm})^{\otimes k+1}$, we have \[(\ker d^{k+1})|_{\mathbb{P}^1(\mathbb{Q}_p)}=0,~~~k\geq 1,\] and \[\ker(d^{1})|_{\mathbb{P}^1(\mathbb{Q}_p)}= (\mathcal{O}_{{\mathscr{F}\!\ell}})|_{\mathbb{P}^1(\mathbb{Q}_p)}\otimes_{C} M_{0}(K^p) \cdot \mathrm{t}^{n_1}.\] \end{prop} \begin{para} Next we consider the cokernel of $d^{k+1}$ at $\infty$. Using Lemma \ref{Olainfty}, essentially we need to understand the cokernel of $\theta^{k+1}:M_{-k}^{\dagger}(K^p)\to M_{k+2}^{\dagger}(K^p)$. \end{para} \begin{defn} \label{H1Ig} \[H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p\Gamma(p^n)),\Sym^k):=\coker\left(M_{-k}^{\dagger}(K^p\Gamma(p^n))\xrightarrow{\theta^{k+1}} M_{k+2}^{\dagger}(K^p\Gamma(p^n))\otimes D_0^{-k-1}\right)\] for $n\geq 0$, and \[H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k):=\coker\left(M_{-k}^{\dagger}(K^p)\xrightarrow{\theta^{k+1}} M_{k+2}^{\dagger}(K^p)\otimes D_0^{-k-1}\right).\] Clearly, $\displaystyle H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)=\varinjlim_n H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p\Gamma(p^n)),\Sym^k)$ and there is a natural smooth action of $B:=\{\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}\}\subseteq \mathrm{GL}_2(\mathbb{Q}_p)$ on it. Note that we put $\otimes D_0^{-k-1}$ here to make sure that everything does not depend on the trivialization of $D_0$.
When $k=0$, we will simply write $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))$ instead of $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^0)$. \end{defn} $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p\Gamma(p^n)),\Sym^k)$ was studied in detail by Coleman in his famous works \cite{Cole96,Cole97}. He essentially showed that it is closely related to the finite slope part of classical modular forms. For our purpose, we only need the following finiteness result. \begin{prop} \label{Colefin} $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p\Gamma(p^n)),\Sym^k)$ is a finite-dimensional vector space over $C$. \end{prop} \begin{proof} Coleman determined its dimension in the proof of \cite[Theorem 2.1]{Cole97}, which is based on his previous work \cite[\S 8]{Cole96}. Roughly speaking, as Coleman explained, $(\Sym^k D,\nabla_k)$ defines an overconvergent logarithmic F-isocrystal on the special fiber of $\mathcal{X}_{K^p\Gamma(p^n),c}$ (a union of Igusa curves) and $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p\Gamma(p^n)),\Sym^k)$ is simply its log-rigid cohomology (after extending the coefficients to $C$). The claim here follows from the general finiteness results for rigid cohomology. \end{proof} \begin{prop} \label{cokdk+1infty} For $d^{k+1}:\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\to \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}}(\Omega^1_{K^p}(\mathcal{C})^\mathrm{sm})^{\otimes k+1}$, we have \[(\coker d^{k+1})_{\infty}=(\omega^k_{\mathscr{F}\!\ell})_\infty\otimes_C H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)(k) \cdot \mathrm{t}^{n_1}.\] \end{prop} \begin{proof} By Proposition \ref{Colefin}, the image of $M_{-k}^{\dagger}(K^p\Gamma(p^n))$ in $M_{k+2}^{\dagger}(K^p\Gamma(p^n))$ under $\theta^{k+1}$ is closed.
From this, by the standard argument using the open mapping theorem, it's easy to deduce that \[\varinjlim_{U\supseteq \bar{\mathcal{X}}_{K^p\Gamma(p^n),c}} \mathcal{O}_C[[ \frac{y}{p^n}]][\frac{1}{p}] \widehat\otimes_C~ \omega^{-k}(U)\xrightarrow{1\otimes\theta^{k+1}} \varinjlim_{U\supseteq \bar{\mathcal{X}}_{K^p\Gamma(p^n),c}} \mathcal{O}_C[[ \frac{y}{p^n}]][\frac{1}{p}] \widehat\otimes_C~ \omega^{k+2}(U)\] has closed image, with cokernel $\mathcal{O}_C[[ \frac{y}{p^n}]][\frac{1}{p}] \widehat\otimes_C H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p\Gamma(p^n)),\Sym^k)$. Taking the inductive limit over $n$ gives the proposition. \end{proof} \begin{para} \label{H1ord} To deduce a description of the sheaf $(\coker d^{k+1})|_{\mathbb{P}^1(\mathbb{Q}_p)}$, we note that $B$ acts smoothly on $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)$. The smooth induction of it to $\mathrm{GL}_2(\mathbb{Q}_p)$ naturally defines a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant sheaf on $\mathbb{P}^1(\mathbb{Q}_p)$ which will be denoted by $\mathcal{H}^1_{\mathrm{ord}}(K^p,k)$. Explicitly, the (right) action of $\mathrm{GL}_2(\mathbb{Q}_p)$ on $\infty$ induces a map $\pi:\mathrm{GL}_2(\mathbb{Q}_p)\to\mathbb{P}^1(\mathbb{Q}_p)$. Let $U$ be an open subset of $\mathbb{P}^1(\mathbb{Q}_p)$. Then $\mathcal{H}^1_{\mathrm{ord}}(K^p,k)(U)$ is the set of smooth functions \[f:\pi^{-1}(U)\to H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\] such that $f(bg)=b\cdot f(g)$ for all $b\in B$. It's clear that \[H^0(\mathbb{P}^1(\mathbb{Q}_p),\mathcal{H}^1_{\mathrm{ord}}(K^p,k))=\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k),\] where $\mathrm{Ind}$ denotes the smooth induction. By abuse of notation, we will also view $\mathcal{H}^1_{\mathrm{ord}}(K^p,k)$ as a sheaf on ${\mathscr{F}\!\ell}$ via the closed embedding $i:\mathbb{P}^1(\mathbb{Q}_p)\subseteq {\mathscr{F}\!\ell}$. Roughly speaking, $\mathcal{H}^1_{\mathrm{ord}}(K^p,k)$ is defined by the rigid cohomology of the ordinary locus of modular curves.
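For orientation, we note the stalk of this sheaf at the base point: since the action of $B$ on $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)$ is smooth and $B$ is the stabilizer of $\infty$, the smooth induction has stalk
\[(\mathcal{H}^1_{\mathrm{ord}}(K^p,k))_\infty\cong H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k),\]
in agreement with the stalk computation in Proposition \ref{cokdk+1infty}.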
\end{para} \begin{para}[Definition of $e'_1,{e'_2}$] Note that $B$ acts on the fiber of $\omega^l_{\mathscr{F}\!\ell}$ at $\infty$ via the character ${e'_2}^l$ sending $\begin{pmatrix} a & b\\ 0 &d\end{pmatrix}\in B$ to $d^l$. We remark that $e'_2=e_2$ in \ref{e_1e_2ch} as a character of $B$ but we don't put any Galois action on $e'_2$. As in \ref{e_1e_2ch}, $W\cdot {e'_2}^l$ denotes the twist of $W$ by ${e'_2}^l$ for any $B$-representation $W$ over $\mathbb{Q}_p$. Similarly, $e'_1:B\to\mathbb{Q}_p^\times$ denotes the character sending $\begin{pmatrix} a & b\\ 0 &d\end{pmatrix}\in B$ to $a$. There is a Hausdorff LB-space structure on $ H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\cdot {e'_2}^l$ and the action of $B$ on it is locally analytic. Its locally analytic induction from $B$ to $\mathrm{GL}_2(\mathbb{Q}_p)$ defines naturally a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant sheaf on $\mathbb{P}^1(\mathbb{Q}_p)$ whose global sections are given by the locally analytic induction \[\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)}H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\cdot {e'_2}^l . \] The sheaf defined by $\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} {e'_2}^l$ is nothing but $\omega^l_{{\mathscr{F}\!\ell}}|_{\mathbb{P}^1(\mathbb{Q}_p)}$. Hence Proposition \ref{cokdk+1infty} has the following corollaries. \end{para} \begin{cor} \label{cokerdk+1P1} There are natural isomorphisms \[(\coker d^{k+1})|_{\mathbb{P}^1(\mathbb{Q}_p)}\cong\omega^k_{{\mathscr{F}\!\ell}}|_{\mathbb{P}^1(\mathbb{Q}_p)}\otimes_C \mathcal{H}^1_{\mathrm{ord}}(K^p,k)(k)\cdot \mathrm{t}^{n_1},\] \[H^0(\mathbb{P}^1(\mathbb{Q}_p),\coker d^{k+1})\cong\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) (k)\cdot {e'_2}^k\mathrm{t}^{n_1}.\] \end{cor} Recall that $d'^{k+1}$ denotes the twist of $d^{k+1}$ by $(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}\cong\omega_{\mathscr{F}\!\ell}^{-2k-2}\otimes\det{}^{k+1}$. 
\begin{cor} \label{cokerd'k+1} For $\chi=(-k,0)$, \[(\coker d'^{k+1})|_{\mathbb{P}^1(\mathbb{Q}_p)}\cong\omega^{-k-2}_{\mathscr{F}\!\ell}|_{\mathbb{P}^1(\mathbb{Q}_p)}\otimes_C \mathcal{H}^1_{\mathrm{ord}}(K^p,k)(k)\otimes\det{}^{k+1}\cdot \mathrm{t}^{-k},\] \[H^0(\mathbb{P}^1(\mathbb{Q}_p),\coker d'^{k+1})\cong\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) (k)\cdot {e'_1}^{k+1}{e'_2}^{-1}\mathrm{t}^{-k}.\] \end{cor} \subsection{The Lubin-Tate space at infinite level} \label{LTinfty} \begin{para} Now we study the intertwining operator $I_k$ on the Drinfeld upper half plane $\Omega$. Since it is well-known that $\pi_\mathrm{HT}^{-1}(\Omega)$ is a finite disjoint union of copies of the Lubin-Tate space $\mathcal{M}_{\mathrm{LT},\infty}$ at infinite level, we essentially need to understand the geometry of the Hodge-Tate period map for $\mathcal{M}_{\mathrm{LT},\infty}$. The key point here is that the differential operators $d^{k+1}$ and $\bar{d}^{k+1}$ are swapped under the isomorphism between Lubin-Tate and Drinfeld towers, hence the results we proved for $\bar{d}^{k+1}$ before (Proposition \ref{dbarsurj}) can be applied directly to $d^{k+1}$. In this subsection, we will restrict ourselves to this local picture and come back to modular curves in the next subsection. \end{para} \begin{para}\label{LTsetup} First we recall the definition of the Lubin-Tate towers. One reference is \cite[\S 6]{SW13}. However, we warn the readers that we work with \textit{contravariant} objects instead of the \textit{covariant} objects used in the reference. Hence some statements might be slightly different from \cite{SW13}. Let $H_0$ be a one-dimensional formal group over $\bar{\mathbb{F}}_p$ of height $2$. It is unique up to isomorphism. Let $\mathrm{Nilp}_{W(\bar{\mathbb{F}}_p)}$ be the category of $W(\bar{\mathbb{F}}_p)$-algebras on which $p$ is nilpotent.
Consider the functor which assigns to $R\in\mathrm{Nilp}_{W(\bar{\mathbb{F}}_p)}$ the set of isomorphism classes of deformations of $H_0$ to $R$, i.e. pairs $(G,\rho)$ where $G$ is a $p$-divisible group over $R$ and $\rho:H_0\otimes_{\bar{\mathbb{F}}_p} R/p \to G \otimes_{R} R/p$ is a quasi-isogeny. This is represented by a formal scheme $\mathcal{M}$ over $\Spf W(\bar{\mathbb{F}}_p)$. Let $\mathcal{M}^{(i)}\subseteq \mathcal{M}$ be the subset parametrizing quasi-isogenies of height $i\in\mathbb{Z}$. Then each $\mathcal{M}^{(i)}$ is non-canonically isomorphic to $\Spf W(\bar{\mathbb{F}}_p)[[T]]$ and $\mathcal{M}$ is the disjoint union of $\mathcal{M}^{(i)},i\in\mathbb{Z}$. For $*=(i)$ or empty, we denote by $\mathcal{M}^{*}_{\mathrm{LT}}$ the base change of the generic fiber of $\mathcal{M}^{*}$ (viewed as an adic space) from $\Spa(W(\bar{\mathbb{F}}_p)[\frac{1}{p}],W(\bar{\mathbb{F}}_p))$ to $\Spa(C,\mathcal{O}_C)$. Let $(\mathcal{G},\rho)$ be a universal deformation of $H_0$ on $\mathcal{M}$. Its covariant Dieudonn\'e crystal defines a vector bundle $M(\mathcal{G})$ of rank $2$ on $\mathcal{M}_{\mathrm{LT}}$ equipped with an integrable connection $\nabla_{\mathrm{LT},0}$, the (dual of) Gauss-Manin connection. Let $M(H_0)$ denote the covariant Dieudonn\'e module of $H_0$, a free $W(\bar{\mathbb{F}}_p)$-module of rank $2$. Then $\rho$ induces a natural trivialization $M(\mathcal{G})\cong M(H_0)\otimes_{W(\bar{\mathbb{F}}_p)}\mathcal{O}_{\mathcal{M}_{\mathrm{LT}}}$ under which $\nabla_{\mathrm{LT},0}$ is identified with the standard connection on $\mathcal{O}_{\mathcal{M}_{\mathrm{LT}}}$. The Lie algebra of $\mathcal{G}$ defines a line bundle on $\mathcal{M}_{\mathrm{LT}}$, whose dual will be denoted by $\omega_{\mathrm{LT},0}$.
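Explicitly, the standard connection here is the one for which $M(H_0)$ consists of horizontal sections: under the trivialization above,
\[\nabla_{\mathrm{LT},0}(m\otimes f)=m\otimes df,\qquad m\in M(H_0),~f\in\mathcal{O}_{\mathcal{M}_{\mathrm{LT}}}.\]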
By the Grothendieck-Messing theory, $(\mathcal{G},\rho)$ gives rise to a surjection \[M(H_0)\otimes_{W(\bar{\mathbb{F}}_p)}\mathcal{O}_{\mathcal{M}_{\mathrm{LT}}} \cong M(\mathcal{G})\to (\omega_{\mathrm{LT},0})^{-1} \] whose kernel is a line bundle on $\mathcal{M}_{\mathrm{LT}}$. This induces the so-called \textit{Gross-Hopkins period map} \cite{GH94} \[\pi_{\mathrm{GM}}:\mathcal{M}_{\mathrm{LT}}\to {\mathscr{F}\!\ell}_{\mathrm{GM}},\] where ${\mathscr{F}\!\ell}_{\mathrm{GM}}$ is the adic space over $\Spa(C,\mathcal{O}_C)$ associated to the flag variety parametrizing $1$-dimensional quotients of the $2$-dimensional $C$-vector space $M(H_0)\otimes_{W(\bar{\mathbb{F}}_p)} C$. Again it follows from the Grothendieck-Messing theory that this map is an \'etale morphism of adic spaces locally of finite type. Moreover it admits local sections, cf. \cite[Lemma 6.1.4]{SW13}. By definition, the pull-back of the tautological ample line bundle on ${\mathscr{F}\!\ell}_{\mathrm{GM}}$ to $\mathcal{M}_{\mathrm{LT}}$ is $(\omega_{\mathrm{LT},0})^{-1} $. Let $D(\mathcal{G})$ be the dual of $M(\mathcal{G})$ as a vector bundle. Then there is a natural inclusion $\omega_{\mathrm{LT},0}\subset D(\mathcal{G})$ which gives rise to a decreasing filtration on $D(\mathcal{G})$ (the Hodge filtration) with $\Fil^0=D(\mathcal{G})$, $\Fil^1=\omega_{\mathrm{LT},0}$ and $\Fil^2=0$. It trivially satisfies the Griffiths transversality with respect to the connection $\nabla_{\mathrm{LT}}$ on $D(\mathcal{G})$ induced by $\nabla_{\mathrm{LT},0}$. We have the usual Kodaira-Spencer map defined as the composite map \begin{eqnarray} \label{KSmap} KS:\Fil^1D(\mathcal{G})\xrightarrow{\nabla_{\mathrm{LT}}} D(\mathcal{G})\otimes_{\mathcal{O}_{\mathcal{M}_{\mathrm{LT}}}} \Omega^1_{\mathcal{M}_{\mathrm{LT}}} \to \gr^0 D(\mathcal{G}) \otimes_{\mathcal{O}_{\mathcal{M}_{\mathrm{LT}}}} \Omega^1_{\mathcal{M}_{\mathrm{LT}}}.
\end{eqnarray} An easy and standard computation on ${\mathscr{F}\!\ell}_{\mathrm{GM}}$ shows the following well-known result. \end{para} \begin{prop} \label{KSLT} $KS$ is an isomorphism. \end{prop} \begin{para} \label{LTet} Next we turn to the \'etale side. Note that $\mathcal{G}$ defines a $\mathbb{Z}_p$-local system of rank $2$ on the \'etale site of $\mathcal{M}_{\mathrm{LT}}$, whose dual will be denoted by $V_\mathrm{LT}$. Here we take a dual in order to be consistent with our normalization used for modular curves. For $n\geq0$, we get a $\mathrm{GL}_2(\mathbb{Z}/p^n\mathbb{Z})=\mathrm{GL}_2(\mathbb{Z}_p)/\Gamma(p^n)$-\'etale covering $\mathcal{M}_{\mathrm{LT},n}$ of $\mathcal{M}_{\mathrm{LT}}$ by considering isomorphisms between $V_\mathrm{LT}/p^nV_\mathrm{LT}$ and $(\mathbb{Z}/p^n)^{2}$. One main result of the work of Scholze-Weinstein \cite[Theorem 6.3.4]{SW13} shows that there exists a perfectoid space $\mathcal{M}_{\mathrm{LT},\infty}$ over $\Spa(C,\mathcal{O}_C)$ such that \[\mathcal{M}_{\mathrm{LT},\infty} \sim\varprojlim_n \mathcal{M}_{\mathrm{LT},n}.\] Strictly speaking, $\mathcal{M}_{\mathrm{LT},\infty}$ is the strong completion of the base change to $C$ of the preperfectoid space constructed in the reference, cf. Proposition 2.3.6 of \cite{SW13}. There is a natural continuous right action of $\mathrm{GL}_2(\mathbb{Z}_p)$ on $\mathcal{M}_{\mathrm{LT},\infty}$. A purely $p$-adic Hodge-theoretic description of $\mathcal{M}_{\mathrm{LT},\infty}$ can be found in \cite[Proposition 6.3.9]{SW13}. Using this, one can extend the action of $\mathrm{GL}_2(\mathbb{Z}_p)$ to $\mathrm{GL}_2(\mathbb{Q}_p)$. As a consequence of our normalization, the action of $\mathrm{GL}_2(\mathbb{Q}_p)$ here differs from the one in the reference by $g\mapsto (g^{-1})^t$. Let $\omega_{\mathrm{LT},\infty}$ be the pull-back of $\omega_{\mathrm{LT},0}$ to $\mathcal{M}_{\mathrm{LT},\infty}$. 
Since the Tate module of $\mathcal{G}$ gets trivialized on $\mathcal{M}_{\mathrm{LT},\infty}$, the dual of the Hodge-Tate sequence induces a surjection \[\mathbb{Z}_p^2\otimes_{\mathbb{Z}_p} \mathcal{O}_{\mathcal{M}_{\mathrm{LT},\infty}} \to \omega_{\mathrm{LT},\infty}(-1)\] whose kernel is isomorphic to $\omega^{-1}_{\mathrm{LT}}$ if we choose an isomorphism between $\mathcal{G}$ and its dual, i.e. a principal polarization here. Recall that ${\mathscr{F}\!\ell}=\mathbb{P}^1$ denotes the flag variety of $\mathrm{GL}_2$. Then the above surjection defines a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant map, called the \textit{Hodge-Tate period map} \[\pi_{\mathrm{LT},\mathrm{HT}}: \mathcal{M}_{\mathrm{LT},\infty}\to {\mathscr{F}\!\ell}.\] The image is exactly the Drinfeld upper half plane $\Omega$. This will be clear from the point of view of the Drinfeld towers, cf. Theorem \ref{LTDrdual} below. It follows from the construction that the pull-back of the tautological ample line bundle $\omega_{{\mathscr{F}\!\ell}}$ along $\pi_{\mathrm{LT},\mathrm{HT}}$ is $\omega_{\mathrm{LT},\infty}(-1)$. \end{para} \begin{para} \label{AA+} Let $X=\Spa(A,A^+)$ be an open affinoid subset of $\mathcal{M}_{\mathrm{LT}}$ and $\tilde{X}=\Spa(B,B^+)$ be its preimage in $\mathcal{M}_{\mathrm{LT},\infty}$. Then $X$ is a one-dimensional smooth affinoid adic space over $\Spa(C,\mathcal{O}_C)$ and $\tilde{X}$ is a $G=\mathrm{GL}_2(\mathbb{Z}_p)$-Galois pro-\'etale perfectoid covering of $X$. In \cite[Theorem 3.1.2]{Pan20}, we show that the $\mathrm{GL}_2(\mathbb{Z}_p)$-locally analytic vectors $B^{\mathrm{la}}\subseteq B$ satisfy a first-order differential equation $\theta_{\tilde{X}}=0$ for some $\theta_{\tilde{X}}\in B\otimes_{\mathbb{Q}_p}\Lie(G)$ (under some smallness assumption on $X$ which can be easily removed here, cf. \cite[Remark 3.1.5]{Pan20}). 
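For comparison, in the modular curve case this operator was computed in \cite[Theorem 4.2.4]{Pan20} and recalled in \eqref{sl2eqn} above: in the coordinate $y$ it is, up to a unit,
\[\theta=\begin{pmatrix} y & 1 \\ -y^2 & -y\end{pmatrix},\qquad \mathrm{tr}(\theta)=0,~\det(\theta)=0,\]
so $\theta$ is nilpotent at every point. Theorem \ref{LTde} below gives the same pointwise-nilpotency picture for $\theta_{\tilde{X}}$.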
To describe this differential equation, we recall the following construction on the flag variety ${\mathscr{F}\!\ell}$ of $\mathrm{GL}_2/C$ from the last paragraph of \ref{brr}. For a $\Spa(C,\mathcal{O}_C)$-point $x$ of ${\mathscr{F}\!\ell}$, let $\mathfrak{b}_x,\mathfrak{n}_x\subseteq\mathfrak{gl}_2(C)$ be its corresponding Borel subalgebra and nilpotent subalgebra. Let \begin{eqnarray*} \mathfrak{g}^0&:=&\mathcal{O}_{{\mathscr{F}\!\ell}}\otimes_{C}\mathfrak{gl}_2(C),\\ \mathfrak{b}^0&:=&\{f\in \mathfrak{g}^0\,| \, f_x\in \mathfrak{b}_x,\mbox{ for every }\Spa(C,\mathcal{O}_C)\mbox{-point }x\in{\mathscr{F}\!\ell}\},\\ \mathfrak{n}^0&:=&\{f\in \mathfrak{g}^0\,| \, f_x\in \mathfrak{n}_x,\mbox{ for every }\Spa(C,\mathcal{O}_C)\mbox{-point }x\in{\mathscr{F}\!\ell}\}. \end{eqnarray*} $\mathfrak{g}^0$ acts naturally on $B^\mathrm{la}$ through $\pi_{\mathrm{LT},\mathrm{HT}}$ in the following sense: suppose that $U$ is an open subset of ${\mathscr{F}\!\ell}$ containing $\pi_{\mathrm{LT},\mathrm{HT}}(\tilde{X})$, then there is a natural map $\mathcal{O}_{{\mathscr{F}\!\ell}}(U)\to B^{\mathrm{la}}$ induced by $\pi_{\mathrm{LT},\mathrm{HT}}$, which induces a natural action of $\mathfrak{g}^0(U)=\mathcal{O}_{{\mathscr{F}\!\ell}}(U)\otimes_{C}\mathfrak{gl}_2(C)$ on $B^{\mathrm{la}}$. Hence we get natural actions of $\mathfrak{b}^0$ and $\mathfrak{n}^0$ on $B^\mathrm{la}$. Note that $\mathfrak{n}^0$ is an invertible sheaf. Locally it is generated by one differential operator. \end{para} \begin{thm} \label{LTde} Let $\tilde{X}=\Spa(B,B^+)$ be as above. Then $\theta_{\tilde{X}}$ is given by a generator of $\mathfrak{n}^0$ up to $B^\times$, i.e. $B^{\mathrm{la}}$ is annihilated by $\mathfrak{n}^0$. \end{thm} \begin{proof} This follows from the corresponding result for the modular curves \cite[Theorem 4.2.7]{Pan20} by observing that the Lubin-Tate space $\mathcal{M}^{(0)}_{\mathrm{LT}}$ can be embedded into modular curves. In fact, one can also repeat the argument in \cite[\S 4.2]{Pan20}.
The key point in the computation there is that the \textit{Kodaira-Spencer map} is an isomorphism (Proposition \ref{KSLT}). We give a sketch of the proof here, which is more conceptual than the original argument and was first suggested to me by Michael Harris. To compute $\theta_{\tilde{X}}$, as in \cite[Remark 3.3.7]{Pan20}, we can choose a faithful finite-dimensional continuous $\mathbb{Q}_p$-representation $V$ of $G$ and compute the corresponding \textit{Higgs field} \cite[Remark 3.1.7]{Pan20} \[\phi_{V}:B\otimes_{\mathbb{Q}_p} V \to \Omega^1_{A/C}\otimes_{A}B\otimes_{\mathbb{Q}_p} V(-1). \] Then $\theta_{\tilde{X}}$ is obtained by choosing a generator of $\Omega^1_{A/C}$. In our situation, we take $V=\mathbb{Q}_p^2$, the $2$-dimensional representation associated to $V_{\mathrm{LT}}$. To compute the Higgs field, we note that $V_{\mathrm{LT}}$ is \textit{de Rham} and make use of the corresponding variation of $p$-adic Hodge structure $(D(\mathcal{G}),\nabla_{\mathrm{LT}})$. Recall that there is the dual of the Hodge-Tate sequence $0\to \omega^{-1}_{\mathrm{LT}}\to V\otimes \mathcal{O}_{\mathcal{M}_{\mathrm{LT},\infty}} \to \omega_{\mathrm{LT},\infty}(-1)\to 0$ (after choosing a polarization). This gives rise to an ascending filtration on $V\otimes_{\mathbb{Q}_p} \mathcal{O}_{\mathcal{M}_{\mathrm{LT},\infty}}$ with $\Fil_0=\omega^{-1}_{\mathrm{LT}}$, $\Fil_{-1}=0$ and $\Fil_1$ is everything. Observe that when we evaluate this at $\tilde{X}$, \begin{itemize} \item $\phi_{V}(\Fil_0(\mathbb{Q}_p^2\otimes B))=0$ and $\phi_V$ induces a map \[ \omega_{\mathrm{LT},\infty}(B)(-1)=\gr_1 (\mathbb{Q}_p^2\otimes B)\xrightarrow{\phi_V} \Fil_0 (\mathbb{Q}_p^2\otimes B)\otimes_A \Omega^1_{A/C}(-1)= \omega^{-1}_{\mathrm{LT}} (B)\otimes_A \Omega^1_{A/C}(-1)\] which is the unique $B$-linear map extending the Kodaira-Spencer map $KS$ \eqref{KSmap}.
\end{itemize} In the classical theory of non-abelian Hodge theory over the complex numbers, this result follows directly from the construction of the Higgs bundle from a variation of Hodge structures (which was called a system of Hodge bundles by Simpson). In the $p$-adic case, it is not too hard to deduce this from the work of Liu-Zhu \cite[Theorem 2.1, 3.8]{LZ17} by observing that the Higgs field is obtained from the $0$th graded piece of the connection $(V\otimes_{\mathbb{Q}_p}\mathcal{O}\mathbb{B}_{\mathrm{dR}},1\otimes\nabla)$. \footnote{Here we need a compatibility between the Higgs bundle in Liu-Zhu's work and the Higgs field constructed in \cite{Pan20}. This can be checked on a toric chart. We plan to provide more details in a future work.} Now since $KS$ is an isomorphism, we see that $\phi_V$ is essentially a generator of $\mathfrak{n}^0$ up to $B^\times$ by unraveling the definition of $\mathfrak{n}^0$. This is exactly what we need to show. \end{proof} \begin{para} \label{khLT} Keep the same notation as above. Once we have Theorem \ref{LTde}, we can repeat our work in \cite[\S4]{Pan20}. The action of $\mathfrak{b}^0$ on $B^{\mathrm{la}}$ factors through the quotient $\mathfrak{b}^0/\mathfrak{n}^0$. As in \ref{brr}, let $\mathfrak{h}:=\{\begin{pmatrix} * & 0 \\ 0 & * \end{pmatrix}\}$ be a Cartan subalgebra of the Borel subalgebra $\mathfrak{b}:=\{\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}\}$. It acts on $B^{\mathrm{la}}$ via the natural embedding $\mathfrak{h}\to\mathcal{O}_{\mathrm{{\mathscr{F}\!\ell}}}\otimes_{C}\mathfrak{h}\cong\mathfrak{b}^0/\mathfrak{n}^0$ and we denote this action by $\theta_{\mathrm{LT},\mathfrak{h}}$. Similarly, as in \ref{omegakla}, the same construction defines a natural action of $\mathfrak{h}$ on the $\mathrm{GL}_2(\mathbb{Z}_p)$-locally analytic vectors of $\omega_{\mathrm{LT},\infty}$, which is also denoted by $\theta_{\mathrm{LT},\mathfrak{h}}$. We can also describe elements in $B^{\mathrm{la}}$ as in \cite[\S4.3]{Pan20}.
By fixing a principal polarization of $\mathcal{G}$, we get a natural map $\mathcal{M}_{\mathrm{LT},\infty}\to\mathrm{Isom}(\mathbb{Z}_p,\mathbb{Z}_p(1))$ of topological spaces, which classically can be defined in terms of the connected components of $\mathcal{M}_{\mathrm{LT},n}$. Choosing a generator of $\mathbb{Z}_p(1)$ induces an identification $\mathrm{Isom}(\mathbb{Z}_p,\mathbb{Z}_p(1))=\mathbb{Z}_p^\times$ and $1\in\mathbb{Z}_p^\times$ defines a global section of $\mathcal{O}_{\mathcal{M}_{\mathrm{LT},\infty}}$ which will be denoted by $\mathrm{t}$. As in \cite[\S4.3.1]{Pan20}, for $n\geq 1$, we can find \begin{itemize} \item $\mathrm{t}_n\in H^0(\mathcal{M}_{\mathrm{LT},n},\mathcal{O}_{\mathcal{M}_{\mathrm{LT},n}})$ which factors through $\pi_0(\mathcal{M}_{\mathrm{LT},n})$ so that $||\mathrm{t}-\mathrm{t}_n||\leq p^{-n}$. \end{itemize} Fix a generator of $\mathbb{Z}_p(1)$. Let $e_1,e_2\in H^0(\mathcal{M}_{\mathrm{LT},\infty},\omega_{\mathrm{LT},\infty})$ be the images of $(1,0),(0,1)\in\mathbb{Z}_p^2$ under the map $\mathbb{Z}_p^2(1)\otimes_{\mathbb{Z}_p} \mathcal{O}_{\mathcal{M}_{\mathrm{LT},\infty}} \to \omega_{\mathrm{LT},\infty}$ in the dual of the Hodge-Tate sequence. Note that $x=e_2/e_1$ is a standard coordinate function on ${\mathscr{F}\!\ell}$ and is an invertible function on $\mathcal{M}_{\mathrm{LT},\infty}$, or equivalently $e_1$ is invertible, because $\pi_{\mathrm{LT},\mathrm{HT}}(\mathcal{M}_{\mathrm{LT},\infty})= \Omega$. Let $G_n=\Gamma(p^n)$ and $X_n$ be the preimage of $X$ in $\mathcal{M}_{\mathrm{LT},n}$. Then $X_n=\Spa(B^{G_n},(B^+)^{G_n})$. Following \cite[\S4.3.5]{Pan20}, for each $n\geq1$, we can find \begin{itemize} \item an integer $r(n)>r(n-1)>0$; \item $x_n\in B^{G_{r(n)}}$ such that $\|x-x_n\|_{G_{r(n)}}=\|x-x_n\|\leq p^{-n}$ in $B$; \item $e_{1,n}\in \omega_{\mathrm{LT},\infty}(\tilde{X})^{G_{r(n)}}$ invertible such that $\|1-e_{1}/e_{1,n}\|_{G_{r(n)}}=\|1-e_{1}/e_{1,n}\|\leq p^{-n}$ in $B$. 
This implies that $\log(\frac{e_1}{e_{1,n}}):=-\sum_{i=1}^{+\infty}(-1)^i\frac{1}{i}(\frac{e_1}{e_{1,n}}-1)^i$ converges. \item $||\mathrm{t}-\mathrm{t}_n||_{G_{r(n)}}=||\mathrm{t}-\mathrm{t}_n||\leq p^{-n}$. Similarly, $\log(\frac{\mathrm{t}}{\mathrm{t}_{n}})$ converges. \end{itemize} Here $||\cdot||_{G_{r(n)}}$ denotes the norm on $G_{r(n)}$-analytic vectors. Using these elements, we have the following description of $B^{\mathrm{la}}$. \end{para} \begin{thm} \label{expGL2} For any $n\geq0$ and any sequence of sections \[c_{i,j,k}\in B^{G_{r(n)}},~i,j,k=0,1,\cdots\] such that the norms of $c_{i,j,k}p^{(n-1)(i+j+k)}$, $i,j,k\geq 0$ are uniformly bounded, the sum \[f=\sum_{i,j,k\geq 0} c_{i,j,k}(x-x_n)^i \left(\log(\frac{e_1}{e_{1,n}})\right)^j\left(\log(\frac{\mathrm{t}}{\mathrm{t}_{n}})\right)^k\] converges in $B^{G_{r(n)}-\mathrm{an}}$, and any $G_{n}$-analytic vector in $B$ arises in this way. \end{thm} \begin{proof} Same proof as \cite[Theorem 4.3.9]{Pan20}. \end{proof} \begin{para} Let $O_{D_p}:=\End(H_0)$ and $D_p:=\End(H_0)\otimes_{\mathbb{Z}}\mathbb{Q}$. Then $D_p$ is a non-split quaternion algebra over $\mathbb{Q}_p$. The multiplicative group $D_p^\times$ is the group of self-quasi-isogenies of $H_0$ and acts naturally on $\mathcal{M}_{\mathrm{LT}}$ and ${\mathscr{F}\!\ell}_{\mathrm{GM}}$ (on the right) through its action on $H_0$. By choosing a basis of $M(H_0)$, one may identify $D_p^\times$ with a subgroup of $\mathrm{GL}_2(C)$ and identify ${\mathscr{F}\!\ell}_{\mathrm{GM}}$ with the projective space $\mathbb{P}^1$ over $\Spa(C,\mathcal{O}_C)$, on which the action of $D_p^\times$ is the usual linear action. In particular $D_p^\times$ acts continuously on ${\mathscr{F}\!\ell}_{\mathrm{GM}}$ in the sense of \cite[Proposition 6.5.5]{SW13}. Its action on $\mathcal{M}_{\mathrm{LT}}$ is also continuous by \cite[Proposition 19.2]{GH94}. Clearly $\pi_{\mathrm{GM}}$ is $D_p^\times$-equivariant.
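Recall the standard explicit description of this quaternion algebra: writing $\mathbb{Q}_{p^2}$ for the unramified quadratic extension of $\mathbb{Q}_p$ and $\sigma$ for its Frobenius,
\[D_p\cong\mathbb{Q}_{p^2}\oplus\mathbb{Q}_{p^2}\Pi,\qquad \Pi^2=p,~~\Pi a=\sigma(a)\Pi~(a\in\mathbb{Q}_{p^2}),\]
the quaternion algebra over $\mathbb{Q}_p$ of invariant $\frac{1}{2}$; under this identification $O_{D_p}=\mathbb{Z}_{p^2}\oplus\mathbb{Z}_{p^2}\Pi$ is the maximal order.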
We remark that the left action of $D_p^\times$ on $M(H_0)\otimes_{W(\bar{\mathbb{F}}_p)}C$ is irreducible and that its action on $\wedge^2 M(H_0)\otimes_{W(\bar{\mathbb{F}}_p)}C$ is given by the reduced norm map. It follows from the constructions that $\mathcal{M}_{\mathrm{LT},n},n\geq 0$ are $D^\times_p$-equivariant finite coverings. The continuity of the action of $D^\times_p$ on $\mathcal{M}_{\mathrm{LT}}$ implies the continuity of the action on each $\mathcal{M}_{\mathrm{LT},n}$ and hence on $\mathcal{M}_{\mathrm{LT},\infty}$. Note that $\omega_{\mathrm{LT},0}$ and its pull-backs to $\mathcal{M}_{\mathrm{LT},n}$ and $\mathcal{M}_{\mathrm{LT},\infty}$ are $D_p^\times$-equivariant line bundles. We denote by \[\pi_{\mathrm{LT},\mathrm{GM}}:\mathcal{M}_{\mathrm{LT},\infty}\to {\mathscr{F}\!\ell}_{\mathrm{GM}}\] the composite of the projection map $\mathcal{M}_{\mathrm{LT},\infty}\to \mathcal{M}_{\mathrm{LT}}$ and $\pi_{\mathrm{GM}}$. It is $\mathrm{GL}_2(\mathbb{Q}_p)\times D_p^\times$-equivariant with respect to the trivial action of $\mathrm{GL}_2(\mathbb{Q}_p)$ on ${\mathscr{F}\!\ell}_{\mathrm{GM}}$. We note that the centers $\mathbb{Q}_p^\times\subseteq \mathrm{GL}_2(\mathbb{Q}_p)$ and $\mathbb{Q}_p^\times\subseteq D_p^\times$ act in the same way on $\mathcal{M}_{\mathrm{LT},\infty}$. Keep the same notation as in \ref{AA+}. The continuity of the action of $D^\times_p$ shows that we may find an open compact subgroup $K\subseteq D^\times_p$ such that $X$ is $K$-stable. Then $K$ acts continuously on $B$. We denote by $B^{D^\times_p-\mathrm{la}}\subseteq B$ the subspace of $K$-locally analytic vectors, which does not depend on the choice of $K$. One consequence of Theorem \ref{expGL2} is an inclusion in one direction between the $K$-locally analytic vectors and the $\mathrm{GL}_2(\mathbb{Z}_p)$-locally analytic vectors. \end{para} \begin{cor} \label{BlasubsetBDpla} $B^{\mathrm{la}}\subseteq B^{D^\times_p-\mathrm{la}}$.
\end{cor} We will see (Corollary \ref{GL2Dpan}) that $B^{D^\times_p-\mathrm{la}}\subseteq B^{\mathrm{la}}$ by swapping the roles of $D^\times_p$ and $\mathrm{GL}_2(\mathbb{Q}_p)$. \begin{proof} Recall that $X_{r(n)}=\Spa(B^{G_{r(n)}},(B^+)^{G_{r(n)}})$ is affinoid of finite type over $\Spa(C,\mathcal{O}_C)$. In particular, there exists an open subgroup $K'\subseteq K$ of the form $1+p^m\mathcal{O}_{D_p}$ such that the action of $K'$ on $(B^+)^{G_{r(n)}}/p$ is trivial. This implies that the action of $K'$ on $B^{G_{r(n)}}$ is analytic. Hence the $K'$-analytic norm on $B^{G_{r(n)}}$ is equivalent to the norm induced from $B$. Note that $D^\times_p$ acts trivially on $x,e_1$. Its action on $\mathrm{t}$ factors through the reduced norm map, and is in fact given by the reduced norm map because the center of $D^\times_p$ acts in the same way as the center of $\mathrm{GL}_2(\mathbb{Q}_p)$. By shrinking $K'$ if necessary, we may assume that the $K'$-analytic norm $||x-x_n||_{K'}$ is equal to $||x-x_n||$ and that similar statements hold for $\log(e_1/e_{1,n})$ and $\log(\mathrm{t}/\mathrm{t}_n)$. Hence the function $f$ in Theorem \ref{expGL2} is $K'$-analytic. Our claim now follows, as any $f\in B^{\mathrm{la}}$ has this form. \end{proof} \begin{rem} So far we have assumed that $X$ is an affinoid subset of $\mathcal{M}_{\mathrm{LT}}$. It is easy to see that all of these results remain true if $X$ is an open affinoid subset of $\mathcal{M}_{\mathrm{LT},n}$ for some $n\geq0$. \end{rem} \begin{para} \label{OLTla} Let $\mathcal{M}_{\mathrm{LT},\infty}^{(0)}$ be the preimage of $\mathcal{M}_{\mathrm{LT}}^{(0)}$ in $\mathcal{M}_{\mathrm{LT},\infty}$. Recall that the superscript $(0)$ means the subset parametrizing quasi-isogenies of height $0$.
Note that $\mathcal{M}_{\mathrm{LT},\infty}^{(0)}$ is $\mathrm{GL}_2(\mathbb{Q}_p)^0\times \mathcal{O}^\times_{D_p}$-stable, where $\mathrm{GL}_2(\mathbb{Q}_p)^0\subseteq \mathrm{GL}_2(\mathbb{Q}_p)$ denotes the subgroup of elements with determinants in $\mathbb{Z}_p^\times$. Since $\pi_{\mathrm{LT},\mathrm{HT}}$ has image in $\Omega$, we denote by \[\pi_{\mathrm{LT},\mathrm{HT}}^{(0)}:\mathcal{M}_{\mathrm{LT},\infty}^{(0)}\to \Omega\] the restriction of $\pi_{\mathrm{LT},\mathrm{HT}}$ to $\mathcal{M}_{\mathrm{LT},\infty}^{(0)}$, viewed as a map to $\Omega$. By embedding Lubin-Tate spaces into the modular curves (Theorem \ref{ssunif} below), we may invoke \cite[Theorem III.1.2]{Sch15} and conclude that there exists a basis of open affinoid subsets $\mathfrak{B}$ of $\Omega$ such that for any $U\in\mathfrak{B}$, the preimage $(\pi^{(0)}_{\mathrm{LT},\mathrm{HT}})^{-1}(U)$ in $\mathcal{M}_{\mathrm{LT},\infty}^{(0)}$ is affinoid perfectoid and is also the preimage of some open affinoid subset of $\mathcal{M}^{(0)}_{\mathrm{LT},n}$. In particular, we can apply the previous results for $\tilde{X}$ to $(\pi^{(0)}_{\mathrm{LT},\mathrm{HT}})^{-1}(U)$. Let \[\mathcal{O}_{\mathrm{LT}}:=\pi^{(0)}_{\mathrm{LT},\mathrm{HT}}{}_* \mathcal{O}_{\mathcal{M}_{\mathrm{LT},\infty}^{(0)}}.\] This is a $\mathrm{GL}_2(\mathbb{Q}_p)^0$-equivariant sheaf on $\Omega$ equipped with an action of $\mathcal{O}^\times_{D_p}$. We denote by \[\mathcal{O}^{\mathrm{la}}_{\mathrm{LT}}\subseteq \mathcal{O}_{\mathrm{LT}}\] the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)^0$-locally analytic sections. By Theorem \ref{LTde}, $\mathcal{O}^{\mathrm{la}}_{\mathrm{LT}}$ is annihilated by $\mathfrak{n}^0$ and we get an induced action of $\mathfrak{h}$ on it via $\mathfrak{h}\to\mathcal{O}_{{\mathscr{F}\!\ell}}\otimes_{C}\mathfrak{h}=\mathfrak{b}^0/\mathfrak{n}^0$, which will also be denoted by $\theta_{\mathrm{LT},\mathfrak{h}}$ by abuse of notation.
As before, given a weight $\chi=(n_1,n_2):\mathfrak{h}\to C$, we denote the $\chi$-isotypic part of $\mathcal{O}^{\mathrm{la}}_{\mathrm{LT}}$ by $\mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}$. \end{para} \begin{para}[Compare \ref{lalg0k}] \label{LTlalg} Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$ and $k=n_2-n_1\geq 0$. By Theorem \ref{expGL2}, for $i\in\{0,\cdots,k\}$ and $s\in\omega^{-k,\mathrm{sm}}_{\mathrm{LT}}$, the product $\mathrm{t}^{n_1}e_1^ie_2^{k-i} s$ defines an element in $\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{\mathrm{LT}}$. We denote by \[\mathcal{O}^{\mathrm{lalg},(n_1,n_2)}_{\mathrm{LT}}\subseteq \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{\mathrm{LT}}\] the subsheaf spanned by sections of this form. As pointed out in \ref{lalg0k} and as suggested by the notation, it consists exactly of the $\mathrm{GL}_2(\mathbb{Z}_p)$-locally algebraic vectors. Equivalently, it is the image of the natural map \[H^0({\mathscr{F}\!\ell},\omega^k_{{\mathscr{F}\!\ell}})(k)\otimes_C \omega^{-k,\mathrm{sm}}_{\mathrm{LT}}\cdot\mathrm{t}^{n_1}\to \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{\mathrm{LT}}.\] From this, we see that $\mathcal{O}^{\mathrm{lalg},(n_1,n_2)}_{\mathrm{LT}}=\Sym^k V(k)\otimes_{\mathbb{Q}_p} \omega^{-k,\mathrm{sm}}_{\mathrm{LT}}\cdot\mathrm{t}^{n_1}$ as a representation of $\mathrm{GL}_2(\mathbb{Z}_p)$. \end{para} \begin{para}\label{dLT} Let \[\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}\subseteq \mathcal{O}_{\mathrm{LT}}\] be the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)^0$-smooth sections. Equivalently, this is also \[(\pi^{(0)}_{\mathrm{LT},\mathrm{HT}})_{*} (\varinjlim_{n}(\pi_n)^{-1} \mathcal{O}_{\mathcal{M}^{(0)}_{\mathrm{LT},n}}),\] where $\pi_n:\mathcal{M}_{\mathrm{LT},\infty}\to \mathcal{M}_{\mathrm{LT},n}$ is the natural projection. It is naturally an $\mathcal{O}_{\Omega}$-module.
Similarly, we can define $\omega^{\mathrm{la}}_{\mathrm{LT}},\omega^{\mathrm{sm}}_{\mathrm{LT}}$ and their tensor powers $\omega^{k,\mathrm{la}}_{\mathrm{LT}},\omega^{k,\mathrm{sm}}_{\mathrm{LT}}$. They are natural $\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}\otimes_{C}\mathcal{O}_{\Omega}$-modules. Set \[\Omega^{1}_{\mathrm{LT}}:=(\pi^{(0)}_{\mathrm{LT},\mathrm{HT}})_{*} (\varinjlim_{n}(\pi_n)^{-1} \Omega^1_{\mathcal{M}^{(0)}_{\mathrm{LT},n}}).\] It is non-canonically isomorphic to $\omega^{2,\mathrm{sm}}_{\mathrm{LT}}$ via the Kodaira-Spencer isomorphism and we have the usual derivation \[d_{\mathrm{LT}}:\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}\to \Omega^{1}_{\mathrm{LT}}.\] Let $k$ be a non-negative integer. As in \ref{XCY}, the Kodaira-Spencer isomorphism implies that the $(k+1)$-th power of $d_{\mathrm{LT}}$ defines a map \[(d_{\mathrm{LT}})^{k+1}:\omega^{-k,\mathrm{sm}}_{\mathrm{LT}}\to \omega^{-k,\mathrm{sm}}_{\mathrm{LT}}\otimes_{\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}} (\Omega^{1}_{\mathrm{LT}})^{\otimes k+1}.\] We can repeat our work in \ref{Do1} with every subscript $K^p$ replaced by $\mathrm{LT}$. In particular, we have the following result, which can be viewed as the restriction of Theorem \ref{I1} to the supersingular locus. \end{para} \begin{thm} \label{LTI1} Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$ and $k=n_2-n_1\geq 0$. Then there exists a unique natural continuous operator \[d_{\mathrm{LT}}^{k+1}:\mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\to \mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\otimes_{\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}}(\Omega^1_{\mathrm{LT}})^{\otimes k+1}\] satisfying the following properties: \begin{enumerate} \item $d_{\mathrm{LT}}^{k+1}$ is $\mathcal{O}_{\Omega}$-linear; \item $d_{\mathrm{LT}}^{k+1}(\mathrm{t}^{n_1}e_1^ie_2^{k-i} s)=\mathrm{t}^{n_1}e_1^ie_2^{k-i} (d_{\mathrm{LT}})^{k+1}(s)$ for any section $s\in \omega^{-k,\mathrm{sm}}_{\mathrm{LT}}$ and $i=0,1,\cdots,k$.
\end{enumerate} \end{thm} Similarly, we can repeat our work in \ref{DII}, cf. Theorem \ref{I2} and Proposition \ref{dbarsurj}. \begin{thm} \label{LTI2} Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$ and $k=n_2-n_1\geq 0$. Then there exists a natural continuous operator \[\bar{d}_{\mathrm{LT}}^{k+1}:\mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\to \mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\otimes_{\mathcal{O}_{\Omega}}(\Omega^1_{\Omega})^{\otimes k+1}\] which can formally be regarded as $(d_{\Omega})^{k+1}$, satisfying the following properties: \begin{enumerate} \item $\bar{d}_{\mathrm{LT}}^{k+1}$ is $\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}$-linear; \item there exists a non-zero constant $c\in\mathbb{Q}$ such that $\bar{d}_{\mathrm{LT}}^{k+1}(s)=c(u^+)^{k+1}(s)\otimes (dx)^{k+1}$ for any $s\in \mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}$, where $u^+=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \in \Lie(\mathrm{GL}_2(\mathbb{Q}_p))$. \end{enumerate} Moreover, \begin{enumerate} \item $\bar{d}_{\mathrm{LT}}^{k+1}$ is surjective and commutes with the action of $\mathrm{GL}_2(\mathbb{Z}_p)$. \item $\ker(\bar{d}_{\mathrm{LT}}^{k+1})=\mathcal{O}^{\mathrm{lalg},\chi}_{\mathrm{LT}}$. \end{enumerate} \end{thm} In the next subsection, we will show that there is a similar description of $\ker d_{\mathrm{LT}}^{k+1}$ and $\coker d_{\mathrm{LT}}^{k+1}$ obtained by swapping the roles of $\mathrm{GL}_2(\mathbb{Q}_p)$ and $D^\times_p$. \subsection{The Drinfeld space at the infinite level} \label{Drinfty} \begin{para} In this subsection, we study the action of $D_p^\times$ on $\mathcal{M}_{\mathrm{LT},\infty}$. To do this, we use the isomorphism between the Lubin-Tate space and the Drinfeld space at the infinite level. See Theorem E of \cite{SW13} and the discussion below it for some historical remarks. Roughly speaking, all of the results for the Lubin-Tate tower obtained in the previous subsection also hold for the Drinfeld tower by exactly the same arguments.
\end{para} \begin{para}[Compare \ref{LTsetup}] \label{Drsetup} We begin by recalling the construction of the Drinfeld towers, cf. \cite{Dr76,BC91}. Let $H_1$ be a special formal $\mathcal{O}_{D_p}$-module over $\bar{\mathbb{F}}_p$. It is unique up to isogeny and is isomorphic to $H_0\times H_0$ as a formal group. Consider the functor which assigns to $R\in\mathrm{Nilp}_{W(\bar{\mathbb{F}}_p)}$ the set of isomorphism classes of deformations of $H_1$ to $R$, i.e. pairs $(G,\rho)$ where $G$ is a special formal $\mathcal{O}_{D_p}$-module over $R$ and $\rho:H_1\otimes_{\bar{\mathbb{F}}_p} R/p \to G \otimes_{R} R/p$ is a quasi-isogeny. Drinfeld showed that this functor is pro-represented by a formal scheme $\mathcal{M}'$ over $\Spf W(\bar{\mathbb{F}}_p)$. We denote by $\mathcal{M}_{\mathrm{Dr}}$ the base change of the generic fiber of $\mathcal{M}'$ (viewed as an adic space) from $\Spa(W(\bar{\mathbb{F}}_p)[\frac{1}{p}],W(\bar{\mathbb{F}}_p))$ to $\Spa(C,\mathcal{O}_C)$. Then $\mathcal{M}_{\mathrm{Dr}}$ is the disjoint union of $\mathcal{M}^{(i)}_{\mathrm{Dr}}$, $i\in\mathbb{Z}$, where $\mathcal{M}^{(i)}_{\mathrm{Dr}}$ denotes the subset on which the quasi-isogeny has height $i$. Let $(\mathcal{G}',\rho')$ be a universal deformation of $H_1$ on $\mathcal{M}'$. Its covariant Dieudonn\'e crystal defines a vector bundle $M(\mathcal{G}')$ of rank $4$ on $\mathcal{M}_{\mathrm{Dr}}$ equipped with an action of $D_p$ and an integrable connection $\nabla_{\mathrm{Dr},0}$, the (dual of the) Gauss-Manin connection. Let $M(H_1)$ denote the Dieudonn\'e module of $H_1$. Then $\rho'$ induces a natural trivialization $M(\mathcal{G}')\cong M(H_1)\otimes_{W(\bar{\mathbb{F}}_p)}\mathcal{O}_{\mathcal{M}_{\mathrm{Dr}}}$ under which $\nabla_{\mathrm{Dr},0}$ is identified with the standard connection on $\mathcal{O}_{\mathcal{M}_{\mathrm{Dr}}}$. The Lie algebra of $\mathcal{G}'$ defines a rank-$2$ vector bundle $\Lie(\mathcal{G}')\otimes C$ on $\mathcal{M}_{\mathrm{Dr}}$.
By Grothendieck-Messing theory, $(\mathcal{G}',\rho')$ gives rise to a $D_p$-equivariant surjection \[M(H_1)\otimes_{W(\bar{\mathbb{F}}_p)}\mathcal{O}_{\mathcal{M}_{\mathrm{Dr}}} \cong M(\mathcal{G}')\to \Lie(\mathcal{G}')\otimes C. \] This induces the so-called \textit{Gross-Hopkins period map} \cite{GH94} \[\pi'_{\mathrm{GM}}:\mathcal{M}_{\mathrm{Dr}}\to {\mathscr{F}\!\ell}'_{\mathrm{GM}},\] where ${\mathscr{F}\!\ell}'_{\mathrm{GM}}$ is the adic space over $\Spa(C,\mathcal{O}_C)$ associated to the flag variety parametrizing $2$-dimensional $D_p$-equivariant quotients of the $4$-dimensional $C$-vector space $M(H_1)\otimes_{W(\bar{\mathbb{F}}_p)} C$. Drinfeld proved that $\pi'_{\mathrm{GM}}|_{\mathcal{M}^{(0)}_{\mathrm{Dr}}}$ is an isomorphism onto its image. Moreover, one can naturally identify ${\mathscr{F}\!\ell}'_{\mathrm{GM}}$ with $\mathbb{P}^1$ (cf. Theorem \ref{LTDrdual} below) and the image of $\pi'_{\mathrm{GM}}$ is exactly $\Omega$, the Drinfeld upper half plane. As in \ref{LTsetup}, we denote by $D(\mathcal{G}')$ the dual of $M(\mathcal{G}')$. Then there is a natural inclusion $(\Lie(\mathcal{G}')\otimes C)^\vee\subseteq D(\mathcal{G}')$ which defines the Hodge filtration on $D(\mathcal{G}')$. Again we have the Kodaira-Spencer isomorphism, which can be easily checked on $\Omega$.
Scholze and Weinstein \cite[Theorem 6.5.4]{SW13} show that there exists a perfectoid space $\mathcal{M}_{\mathrm{Dr},\infty}$ over $\Spa(C,\mathcal{O}_C)$ such that \[\mathcal{M}_{\mathrm{Dr},\infty} \sim\varprojlim_n \mathcal{M}_{\mathrm{Dr},n}.\] It admits a natural continuous right action of $\mathcal{O}_{D_p}^\times$, which extends naturally to an action of $D_p^\times$. On the other hand, we can fix an isomorphism $\End(H_1)\otimes_{\mathbb{Z}}\mathbb{Q}\cong M_2(\mathbb{Q}_p)$. Then the multiplicative group $\mathrm{GL}_2(\mathbb{Q}_p)$ is the group of self-quasi-isogenies of $H_1$ and acts naturally on $\mathcal{M}_{\mathrm{Dr}},\,\mathcal{M}_{\mathrm{Dr},n}, \,\mathcal{M}_{\mathrm{Dr},\infty}$ and ${\mathscr{F}\!\ell}'_{\mathrm{GM}}$ (on the right) through its action on $H_1$. All of these actions are continuous. We denote by \[\pi_{\mathrm{Dr},\mathrm{GM}}: \mathcal{M}_{\mathrm{Dr},\infty}\to {\mathscr{F}\!\ell}'_{\mathrm{GM}}\] the composite of the projection map $ \mathcal{M}_{\mathrm{Dr},\infty}\to \mathcal{M}_{\mathrm{Dr}}$ and $\pi'_{\mathrm{GM}}$. It is $\mathrm{GL}_2(\mathbb{Q}_p)\times D_p^\times$-equivariant with respect to the trivial action of $D_p^\times$ on ${\mathscr{F}\!\ell}'_{\mathrm{GM}}$. Let $\Lie(\mathcal{G}')\otimes\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},\infty}}$ be the pull-back of $\Lie(\mathcal{G}')\otimes C$ to $\mathcal{M}_{\mathrm{Dr},\infty}$. Since the Tate module of $\mathcal{G}'$ is trivialized on $\mathcal{M}_{\mathrm{Dr},\infty}$ and identified with $\mathcal{O}_{D_p}$, the Hodge-Tate sequence induces a $D_p$-equivariant surjection \[ \mathcal{O}_{D_p}\otimes_{\mathbb{Z}_p} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},\infty}} \to (\Lie(\mathcal{G}')\otimes\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},\infty}})^{\vee}(-1),\] whose kernel is isomorphic to $\Lie(\mathcal{G}')\otimes\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},\infty}}$ if we choose an isomorphism between $\mathcal{G}'$ and its dual, i.e. a principal polarization.
Let ${\mathscr{F}\!\ell}'$ be the adic space over $\Spa(C,\mathcal{O}_C)$ associated to the flag variety parametrizing $2$-dimensional $D_p$-equivariant quotients of the $4$-dimensional $C$-vector space $ \mathcal{O}_{D_p}\otimes_{\mathbb{Z}_p} C$. Then the Hodge-Tate sequence defines a $D_p^\times$-equivariant map, called the \textit{Hodge-Tate period map} \[\pi_{\mathrm{Dr},\mathrm{HT}}: \mathcal{M}_{\mathrm{Dr},\infty}\to {\mathscr{F}\!\ell}'.\] We have the following duality result due to Scholze-Weinstein \cite[Proposition 7.2.2, Theorem 7.2.3]{SW13}. \end{para} \begin{thm} \label{LTDrdual} There are natural $\mathrm{GL}_2(\mathbb{Q}_p)\times D_p^\times$-equivariant isomorphisms \[\mathcal{M}_{\mathrm{LT},\infty}\cong \mathcal{M}_{\mathrm{Dr},\infty},\] and ${\mathscr{F}\!\ell}\cong{\mathscr{F}\!\ell}'_{\mathrm{GM}}$, ${\mathscr{F}\!\ell}'\cong{\mathscr{F}\!\ell}_{\mathrm{GM}}$ under which $\pi_{\mathrm{LT},\mathrm{HT}}$ (resp. $\pi_{\mathrm{Dr},\mathrm{HT}}$) is identified with $\pi_{\mathrm{Dr},\mathrm{GM}}$ (resp. $\pi_{\mathrm{LT},\mathrm{GM}}$) and $\mathcal{M}_{\mathrm{LT},\infty}^{(0)}$ is identified with $\mathcal{M}^{(0)}_{\mathrm{Dr},\infty}$. \end{thm} This shows that $\pi_{\mathrm{LT},\mathrm{HT}}(\mathcal{M}_{\mathrm{LT},\infty})=\Omega$. Under the duality isomorphism, there is a natural isomorphism $\omega_{\mathrm{LT},\infty}(-1)\cong\omega_{\mathrm{Dr},\infty}^{-1}$ because both are pull-backs of $\omega_{{\mathscr{F}\!\ell}}$ along $\pi_{\mathrm{LT},\mathrm{HT}}$ and $\pi_{\mathrm{Dr},\mathrm{GM}}$. We will freely use these isomorphisms from now on. \begin{para}[Compare \ref{AA+}] \label{DrBB+} Let $Y$ be an open affinoid subset of $\mathcal{M}_{\mathrm{Dr}}$ and $\tilde{Y}=\Spa(B,B^+)$ be its preimage in $\mathcal{M}_{\mathrm{Dr},\infty}$.
Assume that \begin{itemize} \item $\pi_{\mathrm{Dr},\mathrm{HT}}(\tilde{Y})\neq {\mathscr{F}\!\ell}'$; hence $\pi_{\mathrm{Dr},\mathrm{HT}}(\tilde{Y})$, being compact, is contained in an open affinoid subset of ${\mathscr{F}\!\ell}'$. \end{itemize} We note that such $Y$ form a basis of open affinoid subsets of $\mathcal{M}_{\mathrm{Dr}}$. Indeed, since $\mathrm{GL}_2(\mathbb{Q}_p)$ acts transitively on $\pi_{\mathrm{LT},\mathrm{GM}}^{-1}(y)$ for any $\Spa(C,\mathcal{O}_C)$-point $y$ of ${\mathscr{F}\!\ell}_{\mathrm{GM}}$ (cf. Proposition 23.28 of \cite{GH94} and the discussion below it), it is enough to consider those $Y$ such that there exists a $\Spa(C,\mathcal{O}_C)$-point of $\mathcal{M}_{\mathrm{Dr}}$ not contained in any translate of $Y$ under $\mathrm{GL}_2(\mathbb{Q}_p)$. Now $Y$ is a one-dimensional smooth affinoid adic space over $\Spa(C,\mathcal{O}_C)$ and $\tilde{Y}$ is an $H=\mathcal{O}^\times_{D_p}$-Galois pro-\'etale perfectoid covering of $Y$. Again by \cite[Theorem 3.1.2]{Pan20}, the $\mathcal{O}^\times_{D_p}$-locally analytic vectors $B^{D^\times_p-\mathrm{la}}\subseteq B$ satisfy a first-order differential equation $\theta_{\tilde{Y}}=0$ for some $\theta_{\tilde{Y}}\in B\otimes_{\mathbb{Q}_p}\Lie(H)$. It turns out that, as in the case of the Lubin-Tate tower, $\theta_{\tilde{Y}}$ essentially comes from the pull-back of the horizontal nilpotent subalgebra along the Hodge-Tate period map. More precisely, for a $\Spa(C,\mathcal{O}_C)$-point $y$ of ${\mathscr{F}\!\ell}'$, it follows from the construction that $y$ corresponds to a $2$-dimensional $D_p$-stable $C$-subspace of $D_p\otimes_{\mathbb{Q}_p}C$, equivalently, to a Borel subalgebra $\mathfrak{b}_y\subseteq C\otimes_{\mathbb{Q}_p} \Lie(D_p^\times)$. Denote the nilpotent subalgebra of $\mathfrak{b}_y$ by $\mathfrak{n}_y$.
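For orientation, here is the familiar picture on $\mathbb{P}^1$; this is a standard computation, recorded only as an illustration. If, after an identification $C\otimes_{\mathbb{Q}_p}\Lie(D_p^\times)\cong \mathfrak{gl}_2(C)$, the point $y$ corresponds to the line $L_y=C\cdot\begin{pmatrix} x\\ 1\end{pmatrix}\subseteq C^2$ with affine coordinate $x$, then
\[\mathfrak{b}_y=\{g\in\mathfrak{gl}_2(C)\,|\,g(L_y)\subseteq L_y\},\qquad \mathfrak{n}_y=\{g\in\mathfrak{gl}_2(C)\,|\,g(C^2)\subseteq L_y,\ g(L_y)=0\},\]
and $\mathfrak{n}_y$ is spanned by the square-zero matrix
\[\begin{pmatrix} -x & x^2\\ -1 & x \end{pmatrix},\]
whose image and kernel both equal $L_y$.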
Let \begin{eqnarray*} \mathfrak{g}_{D_p}^0&:=&\mathcal{O}_{{\mathscr{F}\!\ell}'}\otimes_{\mathbb{Q}_p}\Lie(D_p^\times),\\ \mathfrak{b}^0_{D_p}&:=&\{f\in \mathfrak{g}_{D_p}^0\,| \, f_y\in \mathfrak{b}_y,\mbox{ for all }\Spa(C,\mathcal{O}_C)\mbox{-points }y\in{\mathscr{F}\!\ell}'\},\\ \mathfrak{n}^0_{D_p}&:=&\{f\in \mathfrak{g}_{D_p}^0\,| \, f_y\in \mathfrak{n}_y,\mbox{ for all }\Spa(C,\mathcal{O}_C)\mbox{-points }y\in{\mathscr{F}\!\ell}'\}. \end{eqnarray*} Then we have a natural action of $\mathfrak{g}_{D_p}^0$ on $B^{D^\times_p-\mathrm{la}}$, just as at the end of \ref{AA+}. \end{para} \begin{thm} \label{Drde} Let $\tilde{Y}=\Spa(B,B^+)$ be as above. Then $\theta_{\tilde{Y}}$ is given by a generator of $\mathfrak{n}_{D_p}^0$ up to $B^\times$; in particular, $B^{D^\times_p-\mathrm{la}}$ is annihilated by $\mathfrak{n}_{D_p}^0$. \end{thm} \begin{proof} Same proof as Theorem \ref{LTde}, observing that the Kodaira-Spencer isomorphism also holds for the Drinfeld towers. \end{proof} \begin{para}[Compare \ref{khLT}] \label{khDr} Theorem \ref{Drde} implies that there is a natural action of $\mathfrak{b}^0_{D_p}/\mathfrak{n}^0_{D_p}$ on $B^{D_p^\times-\mathrm{la}}$. Fix an isomorphism $\iota:C\otimes_{\mathbb{Q}_p} \Lie(D^\times_p)\cong \mathfrak{gl}_2(C)$. Then we can identify ${\mathscr{F}\!\ell}$ with ${\mathscr{F}\!\ell}'$, and hence get an isomorphism $\mathfrak{b}^0_{D_p}/\mathfrak{n}^0_{D_p}\cong \mathcal{O}_{{\mathscr{F}\!\ell}'}\otimes_C \mathfrak{h}$. We denote the induced action of $\mathfrak{h}\to \mathcal{O}_{{\mathscr{F}\!\ell}'}\otimes_C \mathfrak{h}\cong \mathfrak{b}^0_{D_p}/\mathfrak{n}^0_{D_p}$ on $B^{D_p^\times-\mathrm{la}}$ by $\theta_{\mathrm{Dr},\mathfrak{h}}$. It is easy to see that $\theta_{\mathrm{Dr},\mathfrak{h}}$ does not depend on $\iota$.
As in \ref{omegakla}, the same construction defines a natural action of $\mathfrak{h}$ on the $D_p^\times$-locally analytic vectors of $\omega_{\mathrm{Dr},\infty}$, which is also denoted by $\theta_{\mathrm{Dr},\mathfrak{h}}$. We can also describe elements in $B^{D_p^\times-\mathrm{la}}$ in exactly the same way as $B^{\mathrm{la}}$ in Theorem \ref{expGL2}. Using the chosen polarization, we can define a global section $\mathrm{t}'$ of $\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},\infty}}$ and sections $\mathrm{t}'_n\in H^0(\mathcal{M}_{\mathrm{Dr},n},\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}})$ which factor through $\pi_0(\mathcal{M}_{\mathrm{Dr},n})$ so that $||\mathrm{t}'-\mathrm{t}'_n||\leq p^{-n}$, $n\geq 0$. In fact, we can simply choose $\mathrm{t}'=\mathrm{t}$ because $\mathcal{O}_{D_p}^\times$ acts on $\mathrm{t}$ via the reduced norm map, cf. the proof of Corollary \ref{BlasubsetBDpla}. Let $W_p$ be a $2$-dimensional irreducible representation of $D_p$ over $C$ in $D_p\otimes_{\mathbb{Q}_p} C$, which is unique up to isomorphism. Then $\Hom_{C[D_p]}(W_p,\Lie(\mathcal{G}')\otimes C)$ defines a line bundle on $\mathcal{M}_{\mathrm{Dr}}$. The inverse of its pull-back to $\mathcal{M}_{\mathrm{Dr},*}$ will be denoted by $\omega_{\mathrm{Dr},*}$ for $*=0,1,\cdots$ and $\infty$. Note that $\omega_{\mathrm{Dr},\infty}$ is isomorphic to the pull-back of the tautological ample line bundle $\omega_{{\mathscr{F}\!\ell}'}$ on ${\mathscr{F}\!\ell}'$ along $\pi_{\mathrm{Dr},\mathrm{HT}}$. Choose a basis $f_1,f_2$ of $H^0({\mathscr{F}\!\ell}',\omega_{{\mathscr{F}\!\ell}'})$ so that $\pi_{\mathrm{Dr},\mathrm{HT}}^*f_1$ is invertible on $\tilde{Y}$. This is possible as $\pi_{\mathrm{Dr},\mathrm{HT}}(\tilde{Y})\neq {\mathscr{F}\!\ell}'$. Then $y=f_2/f_1$ is an invertible function on $\tilde{Y}$. Let $H_n=1+p^n\mathcal{O}_{D_p}$.
As in \ref{khLT}, for $n\geq 0$, we can find \begin{itemize} \item an integer $r(n)>r(n-1)>0$; \item $y_n\in B^{H_{r(n)}}$ such that $\|y-y_n\|_{H_{r(n)}}=\|y-y_n\|\leq p^{-n}$ in $B$; \item $f_{1,n}\in \omega_{\mathrm{Dr},\infty}(\tilde{Y})^{H_{r(n)}}$ invertible such that $\|1-f_{1}/f_{1,n}\|_{H_{r(n)}}=\|1-f_{1}/f_{1,n}\|\leq p^{-n}$ in $B$. This implies that $\log(\frac{f_1}{f_{1,n}})$ makes sense. \item $||\mathrm{t}-\mathrm{t}'_n||_{H_{r(n)}}=||\mathrm{t}-\mathrm{t}'_n||\leq p^{-n}$. \end{itemize} \end{para} \begin{thm} \label{expDp} For any $n\geq0$ and any sequence of sections \[d_{i,j,k}\in B^{H_{r(n)}},\,i,j,k=0,1,\cdots\] such that the norms of $d_{i,j,k}p^{(n-1)(i+j+k)}$ are uniformly bounded, the sum \[f=\sum_{i,j,k\geq 0} d_{i,j,k}(y-y_n)^i \left(\log(\frac{f_1}{f_{1,n}})\right)^j\left(\log(\frac{\mathrm{t}}{\mathrm{t}'_{n}})\right)^k\] converges in $B^{H_{r(n)}-\mathrm{an}}$, and any $H_{n}$-analytic vector in $B$ arises in this way. \end{thm} \begin{proof} Same proof as \cite[Theorem 4.3.9]{Pan20}. \end{proof} Note that by continuity, $Y$ is $K$-stable for some open subgroup $K\subseteq \mathrm{GL}_2(\mathbb{Q}_p)$. As before, we denote by $B^{\mathrm{la}}$ the subspace of $K$-locally analytic vectors. \begin{cor} \label{GL2Dpan} Let $\Spa(B,B^+)$ be as in \ref{DrBB+}. Then $B^{\mathrm{la}}=B^{D_p^\times-\mathrm{la}}$, i.e. the subspace of $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic vectors is the same as the subspace of $D_p^\times$-locally analytic vectors on $\mathcal{M}_{\mathrm{LT},\infty}$. \end{cor} \begin{rem} Our proof relies on explicit calculations. It would be very interesting and conceptually satisfying to have a more intrinsic proof. \end{rem} \begin{rem} I was informed by Gabriel Dospinescu that this result was also obtained by him and Juan Esteban Rodriguez Camargo.
\end{rem} \begin{proof} The same argument as in the proof of Corollary \ref{BlasubsetBDpla}, now using Theorem \ref{expDp}, shows that $B^{D_p^\times-\mathrm{la}}\subseteq B^{\mathrm{la}}$. On the other hand, $B^{\mathrm{la}}\subseteq B^{D_p^\times-\mathrm{la}}$ by Corollary \ref{BlasubsetBDpla}. Hence equality holds. Strictly speaking, in Corollary \ref{BlasubsetBDpla} we assumed that $\tilde{Y}$ is the preimage of some open affinoid subset of $\mathcal{M}_{\mathrm{LT},n}$. This is not a problem here because it follows from the discussion in \ref{OLTla} that we can find a finite cover of $Y$ by open affinoid subsets of $\mathcal{M}_{\mathrm{Dr}}$ to whose preimages in $\mathcal{M}_{\mathrm{Dr},\infty}$ Corollary \ref{BlasubsetBDpla} applies. \end{proof} \begin{para} We can rewrite these results on the sheaf level as in \ref{OLTla}. Recall that $\mathcal{O}_{\mathrm{LT}}=\pi^{(0)}_{\mathrm{LT},\mathrm{HT}}{}_* \mathcal{O}_{\mathcal{M}_{\mathrm{LT},\infty}^{(0)}}$, and $\mathcal{O}_{\mathrm{LT}}^{\mathrm{la}}$ (resp. $\mathcal{O}_{\mathrm{LT}}^{\mathrm{sm}}$) denotes the subsheaf of $\mathrm{GL}_2(\mathbb{Z}_p)$-locally analytic sections (resp. $\mathrm{GL}_2(\mathbb{Z}_p)$-smooth sections). By Corollary \ref{GL2Dpan}, $\mathcal{O}_{\mathrm{LT}}^{\mathrm{la}}$ is also the subsheaf of $\mathcal{O}_{D_p}^\times$-locally analytic sections. Let \[\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}\subseteq\mathcal{O}^{\mathrm{la}}_{\mathrm{LT}}\] denote the subsheaf of $\mathcal{O}_{D_p}^\times$-smooth sections. Concretely, let $\pi^{(0)}_{\mathrm{Dr},n}:\mathcal{M}^{(0)}_{\mathrm{Dr},n}\to \Omega$ denote the Gross-Hopkins map on $\mathcal{M}^{(0)}_{\mathrm{Dr},n}$.
Then \[\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}=\varinjlim_n \pi^{(0)}_{\mathrm{Dr},n}{}_* \mathcal{O}_{\mathcal{M}^{(0)}_{\mathrm{Dr},n}}.\] Let $\omega_{\mathrm{LT}}:=\pi^{(0)}_{\mathrm{LT},\mathrm{HT}}{}_* \omega_{\mathrm{LT},\infty}|_{\mathcal{M}_{\mathrm{LT},\infty}^{(0)}}$ and $\omega_{\mathrm{Dr}}:=\pi^{(0)}_{\mathrm{LT},\mathrm{HT}}{}_* \omega_{\mathrm{Dr},\infty}|_{\mathcal{M}_{\mathrm{Dr},\infty}^{(0)}}$. We can define subsheaves $\omega^{\mathrm{la}}_{\mathrm{LT}},\omega^{\mathrm{sm}}_{\mathrm{LT}},\omega^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}\subseteq \omega_{\mathrm{LT}}$ and $\omega^{\mathrm{la}}_{\mathrm{Dr}},\omega^{\mathrm{sm}}_{\mathrm{Dr}},\omega^{D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}\subseteq \omega_{\mathrm{Dr}}$ similarly. Note that the isomorphism $\omega_{\mathrm{LT},\infty}(-1)\cong\omega_{\mathrm{Dr},\infty}^{-1}$ induces isomorphisms between these sheaves. It is easy to check that the operator $\theta_{\mathrm{Dr},\mathfrak{h}}$ introduced in \ref{khDr} acts on $\mathcal{O}_{\mathrm{LT}}^{\mathrm{la}}$ and on $\omega^{\mathrm{la}}_{\mathrm{LT}}$, $\omega^{\mathrm{la}}_{\mathrm{Dr}}$. \end{para} \begin{cor} \label{LTDrkh} $\theta_{\mathrm{LT},\mathfrak{h}}=\theta_{\mathrm{Dr},\mathfrak{h}}$ on $\mathcal{O}_{\mathrm{LT}}^{\mathrm{la}}$. \end{cor} \begin{proof} The most ``natural'' way to prove this would be to use the moduli interpretations of $\mathcal{M}_{\mathrm{LT},\infty}$ and $\mathcal{M}_{\mathrm{Dr},\infty}$. Here we give a proof by explicit calculations. Note that the centers of $\mathrm{GL}_2(\mathbb{Q}_p)$ and $D_p^\times$ act in the same way on $B^{\mathrm{la}}$. It suffices to show that $\theta_{\mathrm{LT},\mathfrak{h}}(h)=\theta_{\mathrm{Dr},\mathfrak{h}}(h)$, where $h=\begin{pmatrix} 1 & 0\\ 0 & -1\end{pmatrix}$.
Let $Y$ be an open affinoid subset of $\Omega$ such that $\tilde{Y}:=(\pi^{(0)}_{\mathrm{LT},\mathrm{HT}})^{-1}(Y)$ is the preimage of some open affinoid subset $X$ of $\mathcal{M}^{(0)}_{\mathrm{LT},m}$ for some $m\geq0$. Write $\tilde{Y}=\Spa(B,B^+)$. Fix $f\in B^{\mathrm{la}}$. Then by Theorem \ref{expGL2}, we can write \[f=\sum_{i,j,k\geq 0} c_{i,j,k}(x-x_n)^i \left(\log(\frac{e_1}{e_{1,n}})\right)^j\left(\log(\frac{\mathrm{t}}{\mathrm{t}_{n}})\right)^k\] for some $c_{i,j,k}\in B^{G_{r(n)}}$, $i,j,k=0,1,\cdots$. See \ref{khLT} for the constructions of $x_n,e_{1,n},\mathrm{t}_n$. Since both $\theta_1:=\theta_{\mathrm{LT},\mathfrak{h}}(h)$ and $\theta_2:=\theta_{\mathrm{Dr},\mathfrak{h}}(h)$ are first-order differential operators, it is enough to show that they agree on $B^{G_{r(n)}},x,e_1/e_{1,n},\mathrm{t}$. This is clear for $\mathrm{t}$ because the action of $\mathrm{GL}_2(\mathbb{Q}_p)$ (resp. $D_p^\times$) on it factors through the determinant (resp. reduced norm) map. For the other elements, we have the following lemma. (Note that $B^{G_{r(n)}}\subseteq \mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}(Y)$, $x\in \mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}(Y)$, $e_{1,n}\in \omega^{\mathrm{sm}}_{\mathrm{LT}}(Y)$, $e_1\in \omega^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}(Y)$ and $1/e_{1,n}\in \omega^{\mathrm{sm}}_{\mathrm{Dr}}(Y)$, $1/e_1\in \omega^{D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}(Y)$.) \end{proof} \begin{lem} \begin{enumerate} \item $\theta_{\mathrm{LT},\mathfrak{h}}$ and $\theta_{\mathrm{Dr},\mathfrak{h}}$ are zero on $\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}$ and $\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$. \item $\theta_1=0$ on $\omega^{\mathrm{sm}}_{\mathrm{LT}}$ and $\theta_1=-1$ on $ \omega^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$. \item $\theta_2=0$ on $ \omega^{D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}$ and $\theta_2=-1$ on $\omega^{\mathrm{sm}}_{\mathrm{Dr}}$.
\end{enumerate} \end{lem} \begin{proof} We only prove the results for $\theta_{\mathrm{LT},\mathfrak{h}}$ because the same argument also works for $\theta_{\mathrm{Dr},\mathfrak{h}}$. It follows from the construction that $\theta_{\mathrm{LT},\mathfrak{h}}$ acts trivially on $\mathrm{GL}_2(\mathbb{Z}_p)$-smooth vectors. Hence $\theta_{\mathrm{LT},\mathfrak{h}}=0$ on $\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}$, $\omega^{\mathrm{sm}}_{\mathrm{LT}}$. For $\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$, we first observe that $\theta_{\mathrm{LT},\mathfrak{h}}=0$ on $\mathcal{O}_{{\mathscr{F}\!\ell}}$. This can be seen by writing ${\mathscr{F}\!\ell}$ as the quotient of $N\setminus\mathrm{GL}_2$ by $H$, where $H=\{\begin{pmatrix} * & 0 \\ 0 & * \end{pmatrix}\}$ and $N=\{\begin{pmatrix} 1 & * \\ 0 & 1 \end{pmatrix}\}$, and $\theta_{\mathrm{LT},\mathfrak{h}}$ essentially comes from the infinitesimal action of $H$. One can also prove this by a direct calculation, cf. \cite[5.1.1]{Pan20}. Now this implies that $\theta_{\mathrm{LT},\mathfrak{h}}$ is zero on $\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$ because sections of $\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$ are analytic functions on some finite \'etale coverings of $\Omega$ and $\theta_{\mathrm{LT},\mathfrak{h}}$ is a first-order differential operator. For $ \omega^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$, the same argument reduces us to showing that $\theta_1=-1$ on $\omega_{{\mathscr{F}\!\ell}}$. Again this can be proved by a simple calculation. \end{proof} \begin{para}[Compare \ref{LTlalg}] Let $\chi:\mathfrak{h}\to C$ be a character. As before, we denote by $\mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\subseteq\mathcal{O}^{\mathrm{la}}_{\mathrm{LT}}$ the $\chi$-isotypic subsheaf with respect to $\theta_{\mathrm{LT},\mathfrak{h}}=\theta_{\mathrm{Dr},\mathfrak{h}}$. Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$ and $k=n_2-n_1\geq 0$.
By Theorem \ref{expDp}, for $i\in\{0,\cdots,k\}$ and $s\in\omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}\cong\omega^{k,D_p^\times-\mathrm{sm}}_{\mathrm{LT}}(-k)$, the product $\mathrm{t}^{n_1}f_1^if_2^{k-i} s$ defines an element in $\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{\mathrm{LT}}$. We denote by \[\mathcal{O}^{D_p^\times-\mathrm{lalg},(n_1,n_2)}_{\mathrm{LT}}\subseteq \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{\mathrm{LT}}\] the subsheaf spanned by sections of this form. It is easy to see that it consists exactly of the $\mathcal{O}_{D_p}^\times$-locally algebraic vectors. Equivalently, it is the image of the natural inclusion \[H^0({\mathscr{F}\!\ell}',\omega^{k}_{{\mathscr{F}\!\ell}'})\otimes_C \omega^{k,D_p^\times-\mathrm{sm}}_{\mathrm{LT}}\cdot \mathrm{t}^{n_1}\cong H^0({\mathscr{F}\!\ell}',\omega^{k}_{{\mathscr{F}\!\ell}'})(k)\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}\cdot \mathrm{t}^{n_1}\subseteq \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{\mathrm{LT}}.\] \end{para} \begin{para} We can repeat the construction in Theorem \ref{I2} and prove the following analogue of Theorem \ref{LTI2}. Note that, in this Drinfeld picture, the $\Omega$ of the Lubin-Tate picture corresponds to ${\mathscr{F}\!\ell}'$, or to its \'etale coverings $\mathcal{M}_{\mathrm{LT},n}$. In particular, $\mathcal{O}_{\Omega}$ is replaced by $\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}$ and $\Omega^1_{\Omega}$ is replaced by $\Omega^1_{\mathrm{LT}}$ here. Also note that since there is no canonical choice of bases of $H^0({\mathscr{F}\!\ell}',\omega_{{\mathscr{F}\!\ell}'})$, we need to make a choice as in \ref{khDr}. \end{para} \begin{thm} \label{DrI2} Suppose $\chi=(n_1,n_2)\in\mathbb{Z}^2$ and $k=n_2-n_1\geq 0$.
Then there exists a natural continuous operator \[\bar{d}_{\mathrm{Dr}}^{k+1}:\mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\to \mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\otimes_{\mathcal{O}^{\mathrm{sm}}_{\mathrm{LT}}}(\Omega^1_{\mathrm{LT}})^{\otimes k+1}\] which can formally be regarded as $(d_{{\mathscr{F}\!\ell}'})^{k+1}$ and satisfies the following properties: \begin{enumerate} \item $\bar{d}_{\mathrm{Dr}}^{k+1}$ is $\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$-linear; \item Suppose that $Y$ is an open affinoid subset of $\Omega$ and $\{f_1,f_2\}$ is a basis of $H^0({\mathscr{F}\!\ell}',\omega_{{\mathscr{F}\!\ell}'})$ such that $f_1$ is a generator of $\omega_{\mathrm{Dr}}$ on $Y$. Then there exists a non-zero constant $c\in\mathbb{Q}$ such that $\bar{d}_{\mathrm{Dr}}^{k+1}(s)=c(u'^+)^{k+1}(s)\otimes (dy')^{k+1}$ for any $s\in \mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}$, where $y'=f_2/f_1$ and $u'^+$ denotes the unique nilpotent element of $\Lie(D_p^\times)\otimes_{\mathbb{Q}_p}C$ sending $f_2$ to $f_1$. \end{enumerate} Here $d_{{\mathscr{F}\!\ell}'}$ denotes the derivation on ${\mathscr{F}\!\ell}'$. Moreover, \begin{enumerate} \item $\bar{d}_{\mathrm{Dr}}^{k+1}$ is surjective and commutes with the action of $\mathcal{O}_{D_p}^\times$. \item $\ker(\bar{d}_{\mathrm{Dr}}^{k+1})=\mathcal{O}^{D_p^\times-\mathrm{lalg},\chi}_{\mathrm{LT}}=H^0({\mathscr{F}\!\ell}',\omega^{k}_{{\mathscr{F}\!\ell}'})\otimes_C \omega^{k,D_p^\times-\mathrm{sm}}_{\mathrm{LT}}\cdot \mathrm{t}^{n_1}$. \end{enumerate} \end{thm} We remark that the set of subsets $Y$ considered in the theorem forms a basis of open affinoid subsets of $\Omega$, cf. \ref{DrBB+}.
\begin{rem}\label{dDR} Although this will not be used, we can also redo the construction of Theorem \ref{LTI1} and prove that there exists a unique natural continuous $\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$-linear operator \[d_{\mathrm{Dr}}^{k+1}:\mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\to \mathcal{O}^{\mathrm{la},\chi}_{\mathrm{LT}}\otimes_{\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}}(\Omega^1_{\mathrm{Dr}})^{\otimes k+1}\] extending $(d_{\mathrm{Dr}})^{k+1}$ on $\omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}$, where \[\Omega^1_{\mathrm{Dr}}:=\varinjlim_n \pi^{(0)}_{\mathrm{Dr},n}{}_*\Omega^1_{\mathcal{M}^{(0)}_{\mathrm{Dr},n}}\] and $d_{\mathrm{Dr}}$ denotes the derivation $\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}\to \Omega^1_{\mathrm{Dr}}$. \end{rem} \begin{para} Now we would like to compare $\bar{d}_{\mathrm{Dr}}^{k+1}$ and $d_{\mathrm{LT}}^{k+1}$ introduced in Theorem \ref{LTI1}. Keep the notation in Theorem \ref{DrI2}. Note that $u'^+=\frac{d}{dy'}$ on ${\mathscr{F}\!\ell}'$. Hence this also holds on its \'etale coverings $\mathcal{M}_{\mathrm{LT},n}$. From this, it is easy to see that $\bar{d}_{\mathrm{Dr}}^{k+1}$ satisfies the properties in Theorem \ref{LTI1} up to a unit. Hence by the uniqueness of $d_{\mathrm{LT}}^{k+1}$, we have \end{para} \begin{thm} $\bar{d}_{\mathrm{Dr}}^{k+1}=c'd_{\mathrm{LT}}^{k+1}$ for some $c'\in \mathbb{Q}^\times$.
\end{thm} \begin{cor} \label{dLTkcok} $d_{\mathrm{LT}}^{k+1}$ is surjective and \[\ker(d_{\mathrm{LT}}^{k+1})=\mathcal{O}^{D_p^\times-\mathrm{lalg},\chi}_{\mathrm{LT}}=H^0({\mathscr{F}\!\ell}',\omega^{k}_{{\mathscr{F}\!\ell}'})(k)\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}\cdot \mathrm{t}^{n_1}.\] \end{cor} \subsection{\texorpdfstring{$I_k$}{Lg} on \texorpdfstring{$\Omega$}{Lg}: uniformization of supersingular locus} \begin{para} To apply the results obtained in the previous two sections to modular curves, we note that the supersingular locus of modular curves has a uniformization by Lubin-Tate towers and $\pi_{\mathrm{HT}}^{-1}(\Omega)$ is simply a finite disjoint union of copies of $\mathcal{M}^{(0)}_{\mathrm{LT}}$. This was used by Deligne in his proof \cite{De73} of the local Langlands correspondence for $\mathrm{GL}_2(\mathbb{Q}_l)$. More precisely, the uniformization theorem relates the supercuspidal part of the $\ell$-adic cohomology of modular curves with the $\ell$-adic cohomology of Lubin-Tate towers in terms of the Jacquet-Langlands correspondence. We will see a similar picture here with the $\ell$-adic cohomology replaced by the completed cohomology. We now recall the uniformization theorem. Let $E_0$ be a supersingular elliptic curve over $\bar{\mathbb{F}}_p$ whose $p$-divisible group is isomorphic to the $H_0$ considered in \ref{LTsetup}. Fix such an isomorphism. Let $D:=\End(E_0)\otimes\mathbb{Q}$. It is well-known that $D$ is a quaternion algebra which is ramified exactly at $p$ and $\infty$. The chosen isomorphism gives a natural identification $D\otimes_{\mathbb{Q}}\mathbb{Q}_p\cong D_p$. Fix an $\mathbb{A}^p_f$-linear isomorphism \[H^1_{\mathrm{\acute{e}t}}(E_0,\mathbb{A}^p_f)\cong (\mathbb{A}^p_f)^{\oplus 2}.\] Note that $D$ acts naturally on the right of it. We switch this to a left action by applying the main involution $g\mapsto \iota(g)$.
This induces an isomorphism $D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p}\cong M_2(\mathbb{A}_f^p)$ of $\mathbb{A}_f^p$-algebras and $K^p\subseteq\mathrm{GL}_2(\mathbb{A}_f^p)$ is considered as an open subgroup of $(D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times$ through this isomorphism. Recall that $\pi_{\mathrm{HT}}:\mathcal{X}_{K^p}\to {\mathscr{F}\!\ell}$ denotes the Hodge-Tate period morphism for modular curves, and we have the Lubin-Tate space at infinite level $\mathcal{M}_{\mathrm{LT},\infty}$ in \ref{LTet} equipped with a right action of $\mathrm{GL}_2(\mathbb{Q}_p)\times D_p^\times$ and a $\mathrm{GL}_2(\mathbb{Q}_p)\times D_p^\times$-equivariant period map $\pi_{\mathrm{LT},\mathrm{HT}}:\mathcal{M}_{\mathrm{LT},\infty}\to{\mathscr{F}\!\ell}$. \end{para} \begin{thm} \label{ssunif} There is a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant isomorphism \[\pi_{\mathrm{HT}}^{-1}(\Omega)\cong D^\times \setminus [\mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p]\] which is functorial in $K^p$ and identifies $\pi_{\mathrm{HT}}: \pi_{\mathrm{HT}}^{-1}(\Omega)\to {\mathscr{F}\!\ell}$ with $\pi_{\mathrm{LT},\mathrm{HT}}\circ \pi_1$ on $\mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p$ and identifies $\omega_{K^p}|_{\Omega}$ with $\pi_{1}{}_*\omega_{\mathrm{LT}}$, where $\pi_1:\mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p \to\mathcal{M}_{\mathrm{LT},\infty}$ denotes the projection map onto the first factor. Here $D^\times$ acts diagonally on the left of $\mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p$ via the embedding \[D^\times\to D_p^\times\] sending $g$ to $\iota(g)$ (hence $D^\times$ acts on the left of $\mathcal{M}_{\mathrm{LT},\infty}$) and the natural embedding $D^\times\subseteq (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times$. 
\end{thm} \begin{proof} It was shown in \cite[Lemma III.3.6]{Sch15} that $\pi_{\mathrm{HT}}^{-1}(\Omega)$ is exactly the preimage of the supersingular locus of $\mathcal{X}_{K^p\mathrm{GL}_2(\mathbb{Z}_p)}$. The supersingular locus in $\mathcal{X}_{K^p\Gamma(p^n)}$ has a uniformization by $\mathcal{M}_{\mathrm{LT},n}$ by Theorem III of \cite{RZ96}. See \cite[6.13]{RZ96} for the construction. The isomorphism on the infinite level follows by taking the inverse limit. The key point is that all supersingular elliptic curves are isogenous to $E_0$ and the isomorphism is guaranteed by Serre-Tate theory. The identifications of the Hodge-Tate period maps and of $\omega_{K^p}|_{\Omega}$ with $\pi_{1}{}_*\omega_{\mathrm{LT}}$ are clear in view of the constructions. Since the normalization here is slightly different from the reference, we briefly explain where the main involution of $D^\times$ in the embedding $D^\times\to D_p^\times$ comes from. Note that $D^\times$ acts naturally on the left of $H^1_{\mathrm{\acute{e}t}}(E_0,\mathbb{A}^p_f)$ via $g\mapsto(g^{-1})^*$, $g\in D^\times$. Then under the fixed isomorphisms $H^1_{\mathrm{\acute{e}t}}(E_0,\mathbb{A}^p_f)\cong (\mathbb{A}^p_f)^{\oplus 2}$ and $D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p}\cong M_2(\mathbb{A}_f^p)$, this induces an embedding $i:D^\times\to (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times$ which is nothing but $g\mapsto \iota(g)^{-1}$. On the other hand, the construction of \cite[6.13]{RZ96} gives an isomorphism \[\pi_{\mathrm{HT}}^{-1}(\Omega)\cong D^\times \setminus [\mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p]\] via the embedding $D^\times\to D_p^\times$ sending $g$ to $g^{-1}$ and $i:D^\times\to (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times$. We get the isomorphism in the theorem by applying $g\mapsto \iota(g)^{-1}$ to $D^\times$.
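For the reader's convenience, we record the standard matrix description of the main involution (a general fact about quaternion algebras, not specific to the constructions above): under any identification with $2\times 2$ matrices, $\iota$ is given by the adjugate.

```latex
% Main involution in matrix form (standard fact):
\iota\begin{pmatrix} a & b \\ c & d \end{pmatrix}
  =\begin{pmatrix} d & -b \\ -c & a \end{pmatrix},
\qquad
g\,\iota(g)=\iota(g)\,g=\det(g)\cdot 1.
% Intrinsically, \iota(g)=\mathrm{Trd}(g)-g and g\,\iota(g)=\mathrm{Nrd}(g),
% so \iota(g)^{-1}=\mathrm{Nrd}(g)^{-1}\,g for invertible g.
```

In particular, since $\iota$ is an anti-automorphism, $g\mapsto \iota(g)^{-1}$ is a group homomorphism, which is why the embeddings above are well-defined.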
\end{proof} \begin{para} The quotient $D^\times \setminus [\mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p]$ can be made more explicit. Consider \[D^\times\setminus (D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times/K^p,\] which admits a natural right action of $D_p^\times$. We note that \begin{itemize} \item the action of $\mathcal{O}_{D_p}^\times$ on it is free. \end{itemize} This follows easily from our assumption in \ref{brr} that $K^p$ is contained in the level-$N$ subgroup $\{g\in\mathrm{GL}_2(\hat{\mathbb{Z}}^p)=\prod_{l\neq p}\mathrm{GL}_2(\mathbb{Z}_l)\,\vert\, g\equiv1\mod N\}$ for some $N\geq 3$ prime to $p$. Indeed, for any $g\in (D\otimes_{\mathbb{Q}} \mathbb{A}_f^p)^\times$, the intersection $gD^\times g^{-1}\cap K^p\mathcal{O}_{D_p}^\times$ is finite, hence has to be trivial by the assumption. By the finiteness of the class group, we can write \[(D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times=\bigsqcup_{i\in I}D^\times \gamma_i K^p\mathcal{O}_{D_p}^\times\] for some elements $\gamma_i\in (D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times$ and a finite set $I$. Write $\gamma_i=\gamma_i^p \gamma_{i,p}$ where $\gamma_i^p\in (D\otimes_{\mathbb{Q}}\mathbb{A}^p_f)^\times$ and $\gamma_{i,p}\in D_p^\times$. Consider the map \[\bigsqcup_{i\in I} \mathcal{M}^{(0)}_{\mathrm{LT},\infty}\to D^\times \setminus [\mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p]\] sending $z_i\in\mathcal{M}^{(0)}_{\mathrm{LT},\infty}$, $i\in I$, to the coset containing $(z_i\iota(\gamma_{i,p}),\gamma_{i}^p)\in \mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p$. \end{para} \begin{lem} \label{expuf} This map is an isomorphism; i.e. it induces an isomorphism $u:\pi_{\mathrm{HT}}^{-1}(\Omega)\cong \bigsqcup_{i\in I} \mathcal{M}^{(0)}_{\mathrm{LT},\infty}$, and this isomorphism is $\mathrm{GL}_2(\mathbb{Q}_p)^0$-equivariant.
\end{lem} \begin{proof} The obvious map sending $x\in \mathcal{M}_{\mathrm{LT},\infty}$ to $(x,1)\in \mathcal{M}_{\mathrm{LT},\infty}\times D_p^\times$ induces an isomorphism \[ \mathcal{M}_{\mathrm{LT},\infty}\cong ( \mathcal{M}_{\mathrm{LT},\infty}\times D_p^\times)/D_p^\times\] where $D_p^\times$ acts diagonally on the right hand side via $g\mapsto \iota(g)^{-1}$ on $ \mathcal{M}_{\mathrm{LT},\infty}$ and the right multiplication on $D_p^\times$. The (right) action of $D_p^\times$ on the left hand side is identified with $g\cdot(z,a)=(z,\iota(g)a)$, $g\in D_p^\times,~(z,a)\in\mathcal{M}_{\mathrm{LT},\infty}\times D_p^\times$ on the right hand side. On the other hand, \[ ( \mathcal{M}_{\mathrm{LT},\infty}\times D_p^\times)/D_p^\times= ( \mathcal{M}^{(0)}_{\mathrm{LT},\infty}\times D_p^\times)/\mathcal{O}_{D_p}^\times.\] Hence \[ D^\times \setminus [\mathcal{M}_{\mathrm{LT},\infty}\times (D\otimes_{\mathbb{Q}}\mathbb{A}_f^{p})^\times/K^p]\cong \left[\mathcal{M}_{\mathrm{LT},\infty}^{(0)} \times [D^\times\setminus (D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times/K^p]\right]/\mathcal{O}_{D_p}^\times \] where $\mathcal{O}_{D_p}^\times$ acts via $g\mapsto \iota(g)^{-1}$ on $ \mathcal{M}^{(0)}_{\mathrm{LT},\infty}$ and the right multiplication on $D^\times\setminus (D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times/K^p$. Our claim follows immediately. \end{proof} \begin{para} The uniformization isomorphism $u:\pi_{\mathrm{HT}}^{-1}(\Omega)\cong \bigsqcup_{i\in I} \mathcal{M}^{(0)}_{\mathrm{LT},\infty}$ induces a $\mathrm{GL}_2(\mathbb{Q}_p)^0 $-equivariant isomorphism $\mathcal{O}_{K^p}|_{\Omega}\cong \bigoplus_{i\in I}\mathcal{O}_{\mathrm{LT}}$ of $\mathcal{O}_{\Omega}$-modules. In particular, it is easy to see that after taking the locally analytic vectors, the horizontal Cartan action $\theta_{\mathfrak{h}}$ is identified with $\theta_{\mathrm{LT},\mathfrak{h}}$. Let $\chi=(n_1,n_2)\in\mathbb{Z}^2$ be a character of $\mathfrak{h}$ with $k=n_2-n_1\geq 0$.
Then the differential operator $d^{k+1}|_{\Omega}$ in Theorem \ref{I1} gets identified with $d_{\mathrm{LT}}^{k+1}$ introduced in Theorem \ref{LTI1}. In particular, the surjectivity of $d_{\mathrm{LT}}^{k+1}$ (Corollary \ref{dLTkcok}) shows that \end{para} \begin{cor} \label{dk+1Os} $d^{k+1}|_{\Omega}$ is surjective. \end{cor} In particular, $H^0({\mathscr{F}\!\ell},\coker d^{k+1})=H^0(\mathbb{P}^1(\mathbb{Q}_p),\coker d^{k+1})$. Hence by Corollaries \ref{cokerdk+1P1} and \ref{cokerd'k+1}, we have the following result. Recall that $i$ denotes the embedding $\mathbb{P}^1(\mathbb{Q}_p)\subseteq{\mathscr{F}\!\ell}$. \begin{cor} \label{cokerIkinfty} For $\chi=(-k,0)$, \[\coker d^{k+1}=i_*(\coker d^{k+1})|_{\mathbb{P}^1(\mathbb{Q}_p)}\cong \omega^k_{{\mathscr{F}\!\ell}}\otimes_C i_*\mathcal{H}^1_{\mathrm{ord}}(K^p,k)(k)\cdot \mathrm{t}^{-k},\] \[\coker d'^{k+1}=i_*(\coker d'^{k+1})|_{\mathbb{P}^1(\mathbb{Q}_p)}\cong \omega^{-k-2}_{{\mathscr{F}\!\ell}}\otimes_C i_*\mathcal{H}^1_{\mathrm{ord}}(K^p,k)(k)\otimes\det{}^{k+1}\cdot \mathrm{t}^{-k},\] \[H^0({\mathscr{F}\!\ell},\coker d^{k+1})=\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) (k)\cdot {e'_2}^k\mathrm{t}^{-k},\] \[H^0({\mathscr{F}\!\ell},\coker d'^{k+1})=\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) (k)\cdot {e'_1}^{k+1}{e'_2}^{-1}\mathrm{t}^{-k}.\] \end{cor} Recall that $I_k= d'^{k+1}\circ \bar{d}^{k+1}$ and that $\bar{d}^{k+1}$ is surjective by Proposition \ref{dbarsurj}. \begin{cor} \label{cokik} $\coker I_k$ is supported on $\mathbb{P}^1(\mathbb{Q}_p)\subseteq{\mathscr{F}\!\ell}$. Moreover, \[I^1_k:H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(n_2+1,n_1-1)}_{K^p}(k+1))\] is surjective. \end{cor} \begin{proof} Let $i:\mathbb{P}^1(\mathbb{Q}_p)\to {\mathscr{F}\!\ell}$ denote the closed embedding (of topological spaces).
Then $H^1({\mathscr{F}\!\ell},\coker I_k)=H^1(\mathbb{P}^1(\mathbb{Q}_p),i^*\coker I_k)=0$ because $\mathbb{P}^1(\mathbb{Q}_p)$ is a profinite set (``zero-dimensional''). This implies the surjectivity of $I^1_k$ as ${\mathscr{F}\!\ell}$ is one-dimensional, hence taking $H^1$ is right-exact. \end{proof} \begin{rem} It is interesting to observe that the complex $\mathcal{I}_k:\mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\xrightarrow{I_k} \mathcal{O}^{\mathrm{la},(1,-k-1)}_{K^p}(k+1)$ satisfies the following perversity properties: $\mathrm{supp}\, H^1(\mathcal{I}_k)=\mathbb{P}^1(\mathbb{Q}_p)$ is ``zero-dimensional'' and $\mathrm{supp}\, H^0(\mathcal{I}_k)\subseteq {\mathscr{F}\!\ell}$ is at most ``one-dimensional''. \end{rem} \begin{para} We can also use Corollary \ref{dLTkcok} to give a description of $\ker d^{k+1}$. Let $X_{K^p}:=D^\times\setminus (D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times/K^p$, which is isomorphic to $\bigsqcup_{i\in I}\mathcal{O}_{D_p}^\times$ as an $\mathcal{O}_{D_p}^\times$-space. Recall that in the proof of Lemma \ref{expuf}, we showed that \[ \pi_{\mathrm{HT}}^{-1} (\Omega)\cong [\mathcal{M}_{\mathrm{LT},\infty}^{(0)} \times X_{K^p}]/\mathcal{O}_{D_p}^\times ,\] where $\mathcal{O}_{D_p}^\times$ acts on $\mathcal{M}_{\mathrm{LT},\infty}^{(0)}$ via $g\mapsto \iota(g)^{-1}$. From this, it is easy to see that $\mathcal{O}_{K^p}|_{\Omega}\cong\mathrm{Map}_{\mathcal{O}_{D_p}^\times}(X_{K^p},\mathcal{O}_{\mathrm{LT}})$, the set of $\mathcal{O}_{D_p}^\times$-equivariant maps from $X_{K^p}$ to $\mathcal{O}_{\mathrm{LT}}$. (In fact, this holds for general $\mathcal{O}_{D_p}^\times$-equivariant sheaves on $\mathcal{M}_{\mathrm{LT},\infty}^{(0)}$ but we do not need this generality here.)
Similarly, we have \[\ker d^{k+1}|_{\Omega}\cong\mathrm{Map}_{\mathcal{O}_{D_p}^\times}(X_{K^p},\ker d_{\mathrm{LT}}^{k+1}).\] To further simplify it, by Corollary \ref{dLTkcok}, we have \[\ker d^{k+1}_{\mathrm{LT}}=H^0({\mathscr{F}\!\ell}',\omega^{k}_{{\mathscr{F}\!\ell}'})(k)\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}\cdot \mathrm{t}^{n_1}.\] Note that $H^0({\mathscr{F}\!\ell}',\omega^{k}_{{\mathscr{F}\!\ell}'})\cdot \mathrm{t}^{n_1}$ is an irreducible algebraic representation of $D_p^\times$ over $C$ with highest weight $(k,0)+(n_1,n_1)=(n_1+k,n_1)=(n_2,n_1)$. We denote the \textit{dual} of this irreducible representation by $W_{(-n_1,-n_2)}$ which has highest weight $(-n_1,-n_2)$. \end{para} \begin{defn} \label{AFD} Let $\chi=(n_1,n_2)\in\mathbb{Z}^2$ with $n_2-n_1\geq 0$. \begin{enumerate} \item $\mathcal{A}_{D,-\chi}=\mathcal{A}_{D,(-n_1,-n_2)}$ denotes the set of maps \[f: D^\times\setminus (D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times\to W_{(-n_1,-n_2)}\] such that $f(dh^ph_p)=h_p^{-1}\cdot f(d)$, for any $d\in D^\times\setminus (D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times$ and any $h^p$ (resp. $h_p$) in some open compact subgroup $U^p\subseteq (D\otimes_{\mathbb{Q}}\mathbb{A}_f^p)^\times$ (resp. $U_p\subseteq D_p^\times$). This is usually called the space of quaternionic forms on $(D\otimes_\mathbb{Q} \mathbb{A})^\times$. It admits a right translation action of $ (D\otimes_{\mathbb{Q}}\mathbb{A}_f^p)^\times$ and an action of $D_p^\times$ via $(h_p\cdot f)(d)=h_p\cdot f(dh_p)$, $h_p\in D_p^\times,d\in D^\times\setminus (D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times$. Both actions are smooth. \item $\mathcal{A}^{1}_{D,-\chi}\subseteq\mathcal{A}_{D,-\chi}$ denotes the subset of maps which factor through the reduced norm map. Clearly this is non-zero only when $n_1=n_2$. \item $\mathcal{A}^c_{D,-\chi}:=\mathcal{A}_{D,-\chi}/\mathcal{A}^1_{D,-\chi}$. Then the Hecke action (cf. 
\ref{Hd} below) induces a natural splitting $\mathcal{A}_{D,-\chi}\cong\mathcal{A}^c_{D,-\chi}\oplus \mathcal{A}^1_{D,-\chi}$. \end{enumerate} \end{defn} \begin{prop} \label{kerdo} There is a Hecke and $\mathrm{GL}_2(\mathbb{Q}_p)^0$-equivariant isomorphism \[\ker d^{k+1}|_{\Omega}\cong (\mathcal{A}_{D,(-n_1,-n_2)}^{K^p}\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})^{\mathcal{O}_{D_p}^\times}(n_2)\cdot {\varepsilon'_p}^{n_1},\] where $\mathcal{O}_{D_p}^\times$ acts diagonally via the action defined in \ref{AFD} on $\mathcal{A}_{D,(-n_1,-n_2)}^{K^p}$ and the action $g\mapsto \iota(g)^{-1}$ on $ \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}$, and $\varepsilon'_p$ denotes the character sending $x\in\mathrm{GL}_2(\mathbb{Q}_p)$ to $|\det{x}|\det{x}$, i.e. the character of $\mathrm{GL}_2(\mathbb{Q}_p)$ corresponding to the $p$-adic cyclotomic character via $\det$ and local class field theory. \end{prop} \begin{proof} We denote by $\mathrm{Map}^{\mathrm{sm}}(X_{K^p},\ker d_{\mathrm{LT}}^{k+1})$ the set of maps $X_{K^p} \to \ker d_{\mathrm{LT}}^{k+1}$ which are invariant under some open subgroup of $\mathcal{O}_{D_p}^\times$.
Then \[\mathrm{Map}_{\mathcal{O}_{D_p}^\times}(X_{K^p},\ker d_{\mathrm{LT}}^{k+1})=\mathrm{Map}^{\mathrm{sm}}(X_{K^p},\ker d_{\mathrm{LT}}^{k+1})^{\mathcal{O}_{D_p}^\times}.\] Since the action of $\mathcal{O}_{D_p}^\times$ on $\omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}$ is smooth, hence locally finite, we have by definition \begin{eqnarray*} \mathrm{Map}^{\mathrm{sm}}(X_{K^p},\ker d_{\mathrm{LT}}^{k+1})&\cong&\mathrm{Map}^{\mathrm{sm}}(X_{K^p},H^0({\mathscr{F}\!\ell}',\omega^{k}_{{\mathscr{F}\!\ell}'})\cdot \mathrm{t}^{n_1})\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}(k)\\ &\cong&\mathcal{A}_{D,(-n_1,-n_2)}^{K^p}\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}(n_1+k)\cdot {\varepsilon'_p}^{n_1}\\ &=&\mathcal{A}_{D,(-n_1,-n_2)}^{K^p}\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}(n_2)\cdot {\varepsilon'_p}^{n_1}, \end{eqnarray*} where the second isomorphism holds as $\mathcal{O}_{D_p}^\times$ acts on $\ker d_{\mathrm{LT}}^{k+1}$ via $g\mapsto \iota(g)^{-1}$ and $\mathrm{GL}_2(\mathbb{A}_f)$ acts on $\mathrm{t}$ via $\varepsilon\circ\det$ with $\varepsilon:\mathbb{A}_f^\times/\mathbb{Q}^\times_{>0}\to\mathbb{Z}_p^\times$ denoting the $p$-adic cyclotomic character, cf. \ref{HEt}, and $G_{\mathbb{Q}_p}$ acts on $\mathrm{t}$ also via the $p$-adic cyclotomic character. Our claim follows immediately. \end{proof} \begin{rem} \label{GL20} We can extend this isomorphism to a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant one in the following way. It follows from the construction of $\omega_{\mathrm{Dr}}$ that the action of $\mathcal{O}_{D_p}^\times\times\mathrm{GL}_2(\mathbb{Q}_p)^0$ can be extended naturally to an action of $D_p^\times\times\mathrm{GL}_2(\mathbb{Q}_p)$ on $\bigoplus_{i\in\mathbb{Z}} \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}$.
Then \begin{eqnarray*} \ker d^{k+1}|_{\Omega}&\cong &(\mathcal{A}_{D,(-n_1,-n_2)}^{K^p}\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})^{\mathcal{O}_{D_p}^\times}(n_2)\cdot {\varepsilon'_p}^{n_1}\\ &\cong & (\mathcal{A}_{D,(-n_1,-n_2)}^{K^p}\otimes_C \bigoplus_{i\in\mathbb{Z}} \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})^{D_p^\times}(n_2)\cdot {\varepsilon'_p}^{n_1} \end{eqnarray*} which is $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant. \end{rem} \begin{para} We denote by $j:\Omega\to {\mathscr{F}\!\ell}$ the open embedding. Let $\mathcal{F}$ be a sheaf on $\Omega$. As usual, $j_{!}\mathcal{F}$ denotes the extension by zero of $\mathcal{F}$. We can now determine the kernel of $d^{k+1}: \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\to \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1}$ by Proposition \ref{kerdc} and Proposition \ref{kerdo}. \end{para} \begin{prop} \label{kerdk+1s} \[\ker d^{k+1}\cong j_{!}(\mathcal{A}_{D,(-n_1,-n_2)}^{K^p}\otimes_C \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})^{\mathcal{O}_{D_p}^\times}(n_2)\cdot {\varepsilon'_p}^{n_1},~~~~~~k\geq 1,\] \[\ker d^1\cong j_{!} (\mathcal{A}_{D,(-n_1,-n_1)}^{c,K^p}\otimes_C \mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}})^{\mathcal{O}_{D_p}^\times}(n_1)\cdot {\varepsilon'_p}^{n_1} \oplus \mathcal{O}_{{\mathscr{F}\!\ell}} \otimes_C M_0(K^p)(n_1)\cdot {\varepsilon'_p}^{n_1},\] where $\mathcal{A}_{D,(0,0)}^{c}$ was introduced in Definition \ref{AFD}. Recall that \[\omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}:=\varinjlim_{n}(\pi^{(0)}_{\mathrm{Dr},n})_* (\pi^{(0)}_{\mathrm{Dr},n})^* \omega_{{\mathscr{F}\!\ell}}^k,\] where $\pi^{(0)}_{\mathrm{Dr},n}:\mathcal{M}_{\mathrm{Dr},n}^{(0)}\to \Omega$ denotes the projection map. We also recall that the action of ${\mathcal{O}_{D_p}^\times}$ on $\omega^{k+2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}$ is via $g\mapsto \iota(g)^{-1}$.
\end{prop} \begin{proof} The only case that requires some extra explanation is when $k=0$. We may assume $n_1=0$ by twisting by $\mathrm{t}^{-n_1}$. There is a natural map \[\mathcal{O}_{{\mathscr{F}\!\ell}} \otimes_C M_0(K^p)=\mathcal{O}_{{\mathscr{F}\!\ell}} \otimes_C H^0({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{sm}}_{K^p})\to \mathcal{O}_{{\mathscr{F}\!\ell}} \otimes_C \mathcal{O}^\mathrm{sm}_{K^p}\to \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\] whose image is inside of $\ker d^1$. On the other hand, by Proposition \ref{kerdc} and Proposition \ref{kerdo}, there is an exact sequence \[0\to j_{!} (\mathcal{A}_{D,(0,0)}^{K^p}\otimes_C \mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}})^{\mathcal{O}_{D_p}^\times} \to \ker d^{1} \to (\mathcal{O}_{{\mathscr{F}\!\ell}})|_{\mathbb{P}^1(\mathbb{Q}_p)} \otimes_C M_0(K^p) \to 0.\] Here we use that $ \omega^{0,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}=\mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}}$. Hence we get an isomorphism \[\ker d^1\cong \left[\mathcal{O}_{{\mathscr{F}\!\ell}} \otimes_C M_0(K^p)\right] \oplus j_{!} \left[(\mathcal{A}_{D,(0,0)}^{K^p}\otimes_C \mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}})^{\mathcal{O}_{D_p}^\times}/ \mathcal{O}_{\Omega} \otimes_C M_0(K^p) \right].\] We claim that the image of the map $ \mathcal{O}_{\Omega} \otimes_C M_0(K^p)\to (\mathcal{A}_{D,(0,0)}^{K^p}\otimes_C \mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}})^{\mathcal{O}_{D_p}^\times}$ is exactly $(\mathcal{A}_{D,(0,0)}^{1,K^p}\otimes_C \mathcal{O}^{D_p^\times-\mathrm{sm}}_{\mathrm{LT}})^{\mathcal{O}_{D_p}^\times}$. Recall that $\mathcal{A}_{D,(0,0)}^{1,K^p}$ denotes the subset of smooth functions on $D^\times\setminus(D\otimes_{\mathbb{Q}}\mathbb{A}_f)^\times/K^p$ that factor through the reduced norm map. This implies the Proposition because by definition $\mathcal{A}_{D,(0,0)}^{c,K^p}=\mathcal{A}_{D,(0,0)}^{K^p}/\mathcal{A}_{D,(0,0)}^{1,K^p}$. 
To see the claim, we note that there is a natural identification \[\mathcal{A}_{D,(0,0)}^{1,K^p}=M_0(K^p)\] because both sides denote the space of smooth functions on $\mathbb{Q}_{>0}^\times\setminus \mathbb{A}_f^\times/\det(K^p)$. Let $\psi:\mathcal{O}_{D_p}^\times\to C^\times$ be a smooth character. Then the $\psi$-isotypic part $\mathcal{O}_{\mathrm{LT}}[\psi]$ of $\mathcal{O}_{\mathrm{LT}}$ can be identified with $\mathcal{O}_{\Omega}$. Indeed, suppose that $1+p^n\mathcal{O}_{D_p}\subseteq \ker \psi$. There is a natural surjective map $c_n:\mathcal{M}_{\mathrm{Dr},n}^{(0)}\to \pi_0(\mathcal{M}_{\mathrm{Dr},n}^{(0)})\to (\mathbb{Z}/p^n)^\times$, cf. second paragraph of \ref{khDr}. Then restriction to $c_n^{-1}(1)\subseteq \mathcal{M}_{\mathrm{Dr},n}^{(0)}$ induces an isomorphism \[\mathcal{O}_{\mathrm{LT}}[\psi]\cong(\pi_{\mathrm{Dr},n*}^{(0)}\mathcal{O}_{c_n^{-1}(1)})^{\mathcal{O}_{D_p}^{\times,1}}=\mathcal{O}_\Omega\] where $\mathcal{O}_{D_p}^{\times,1}\subseteq \mathcal{O}_{D_p}^{\times}$ denotes the kernel of the reduced norm map. From this we deduce our claim on each $\psi$-isotypic component, and hence the whole claim. \end{proof} There is a natural isomorphism $\omega^{-2}_{{\mathscr{F}\!\ell}}\otimes\det\cong \Omega_{{\mathscr{F}\!\ell}}$, cf. \ref{KSFl}. We obtain the following result by twisting Proposition \ref{kerdk+1s} by $(\Omega_{{\mathscr{F}\!\ell}})^{\otimes k+1}$.
\begin{cor} \label{kerd'k+1s} \hspace{2em} \[\ker d'^{k+1}\cong j_{!}(\mathcal{A}_{D,(-n_1,-n_2)}^{K^p}\otimes_C \omega^{k+2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})^{\mathcal{O}_{D_p}^\times}(n_2)\otimes \det{}^{k+1}\cdot {\varepsilon'_p}^{n_1},~~~~~~k\geq 1,\] \[\ker d'^1\cong j_{!} (\mathcal{A}_{D,(-n_1,-n_1)}^{c,K^p}\otimes_C \omega^{2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})^{\mathcal{O}_{D_p}^\times}(n_2)\otimes\det\cdot{\varepsilon'_p}^{n_1} \oplus \Omega_{{\mathscr{F}\!\ell}} \otimes_C M_0(K^p)(n_2)\cdot {\varepsilon'_p}^{n_1}.\] \end{cor} \begin{rem} It is natural to ask whether we can rewrite these results about $\ker d^{k+1}$, $\ker \bar{d}^{k+1}$ and the cokernels in a uniform way. Let us restrict to the case $k=0$. Roughly speaking, the complex $d^{1}: \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\to \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}$ computes the \textit{de Rham cohomology} of the fibers of the Hodge-Tate period map $\pi_{\mathrm{HT}}$. In particular, since the fiber over each point of $\Omega$ is a profinite set by the uniformization theorem, the cohomology only appears in degree zero. On $\mathbb{P}^1(\mathbb{Q}_p)$, the fibers are closely related to the Igusa curve, and it is not surprising at all that we are essentially computing the de Rham cohomology of the Igusa curve here, cf. Proposition \ref{kerdc}, \ref{cokdk+1infty}. Similarly, for $\bar{d}$, it is like computing the de Rham cohomology of the fibers of the projection map $\mathcal{X}_{K^p}\to \mathcal{X}_{K^p\mathrm{GL}_2(\mathbb{Z}_p)}$. Again since the fibers are profinite sets, there is no $H^1$, i.e. $\coker\bar{d}=0$. \end{rem} \subsection{Spectral decompositions} \label{Sd} \begin{para} We are ready to decompose $\ker I_k^1$ with respect to the Hecke action. Recall that $I^1_k=H^1(I_k)$. In fact, we will present two decompositions.
The first comes from $I_k=d'^{k+1}\circ \bar{d}^{k+1}$, while the second comes from $ I_k=\bar{d}'^{k+1}\circ d^{k+1}$. They give different perspectives on $\ker I_k^1$, and we will see some interesting results when comparing the two decompositions. First we explore the factorization $I_k=d'^{k+1}\circ \bar{d}^{k+1}$. We have the following easy lemma. \end{para} \begin{lem} \label{kerIk1fil} There are natural short exact sequences \[0\to \ker H^1(\bar{d}^{k+1})\to \ker I_k^1 \to \ker H^1(d'^{k+1})\to 0,\] \[0\to H^1({\mathscr{F}\!\ell},\ker d'^{k+1})\to \ker H^1(d'^{k+1})\to H^0(\coker d'^{k+1})\to 0\] and an isomorphism $ \ker H^1(d'^{k+1})\cong \mathbb{H}^1(d'^{k+1})$, where $ \mathbb{H}^1(d'^{k+1})$ denotes the first hypercohomology of the complex $\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}\xrightarrow{d'^{k+1}}\mathcal{O}^{\mathrm{la},(n_2+1,n_1-1)}_{K^p}(k+1)$. \end{lem} \begin{proof} Note that $\bar{d}^{k+1}$ is surjective by Proposition \ref{dbarsurj}, hence $H^1(\bar{d}^{k+1})$ is surjective, which clearly implies the first claim. For the second claim, by \cite[Corollary 5.1.3]{Pan20}, we have $H^0({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(n_2+1,n_1-1)}_{K^p})=0$. In particular, the image of $d'^{k+1}$ also has no global sections. With these vanishing results, our claim follows from Lemma \ref{ninvlem} and Remark \ref{hyclem} with $X=d'^{k+1}$. \end{proof} \begin{para} Some notation. Note that $G_{\mathbb{Q}_p}$ acts on $\mathrm{t}$ via the $p$-adic cyclotomic character and $\mathrm{GL}_2(\mathbb{A}_f)$ acts on $\mathrm{t}$ via $\varepsilon\circ \det$. This means that $\mathrm{t}(-1)=\varepsilon\circ \det$ as a representation of $\mathrm{GL}_2(\mathbb{A}_f)\times G_{\mathbb{Q}_p}$. For simplicity, we use $\varepsilon$ to denote twisting by $\mathrm{t}(-1)$.
\end{para} \begin{thm} \label{dec1} For $\chi=(-k,0)$, there is a natural increasing filtration $\Fil_\bullet$ on $\ker I^1_k$ with $\Fil_n=0$, $n\leq 0$ and $\Fil_n= \ker I^1_k$, $n\geq 3$, and \[\gr_n \ker I^1_k=\left\{ \begin{array}{lll} \ker H^1(\bar{d}^{k+1})\cong \Sym^k V\otimes_{\mathbb{Q}_p} H^1(\omega^{-k,\mathrm{sm}}_{K^p})\cdot{\varepsilon}^{-k}, &n=1&\\ H^1(\ker d'^{k+1})\cong (H^1(j_{!} \omega^{k+2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \mathcal{A}_{D,(k,0)}^{K^p})^{\mathcal{O}_{D_p}^\times}\otimes\det^{k+1}\cdot{\varepsilon}_p^{-k}, &n=2& \\ H^0(\coker d'^{k+1})\cong \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) \cdot{\varepsilon}^{-k} {e'_1}^{k+1}{e'_2}^{-1}, &n=3& \end{array},\right.\] when $k\geq 1$ and \[\gr_n \ker I^1_0=\left\{ \begin{array}{lll} \ker H^1(\bar{d}^1)\cong H^1(\mathcal{O}^{\mathrm{sm}}_{K^p}) , &n=1&\\ H^1(\ker d'^{1})\cong (H^1(j_{!} \omega^{2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \mathcal{A}_{D,(0,0)}^{c,K^p})^{\mathcal{O}_{D_p}^\times}\otimes\det \oplus M_0(K^p) , &n=2&\\ H^0(\coker d'^{1})\cong \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))\cdot {e'_1}{e'_2}^{-1} , &n=3& \end{array}.\right.\] All of the cohomology groups are computed on ${\mathscr{F}\!\ell}$ and we drop ${\mathscr{F}\!\ell}$ for simplicity. All of the isomorphisms are Hecke and $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant. See \ref{Hd} below for the precise meaning of the Hecke actions. \end{thm} \begin{proof} The claim for $H^0(\coker d'^{k+1})$ follows from Corollary \ref{cokerIkinfty}.
For $H^1(\ker d'^{k+1})$, when $k\geq 1$, it follows from Corollary \ref{kerd'k+1s} by noting that \[ H^1(j_{!}(\mathcal{A}_{D,(k,0)}^{K^p}\otimes_C \omega^{k+2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})^{\mathcal{O}_{D_p}^\times})\cong\left(H^1( j_{!} \omega^{k+2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \mathcal{A}_{D,(k,0)}^{K^p}\right)^{\mathcal{O}_{D_p}^\times}\] because the actions of $\mathcal{O}_{D_p}^\times$ on $\mathcal{A}_{D,(k,0)}^{K^p}$ and $\omega^{k+2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}$ are smooth, hence semi-simple as $\mathcal{O}_{D_p}^\times$ is compact. The same argument, together with the fact that $H^1({\mathscr{F}\!\ell},\Omega_{{\mathscr{F}\!\ell}})=C$, proves the case $k=0$. For $\ker H^1(\bar{d}^{k+1})$, by our description of $\ker \bar{d}^{k+1}$ in Proposition \ref{dbarsurj}, it suffices to show that \[\ker H^1(\bar{d}^{k+1})=H^1(\ker\bar{d}^{k+1}).\] Since $\bar{d}^{k+1}: \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\to \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}$ is surjective, we only need to show \[H^0( \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1})=0.\] Consider the exact sequence \[0\to H^0(\ker d'^{k+1})\to H^0( \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}) \xrightarrow{H^0(d'^{k+1})} H^0(\mathcal{O}^{\mathrm{la},(n_2+1,n_1-1)}_{K^p}).\] The last term is zero by \cite[Corollary 5.1.3]{Pan20}. On the other hand, we have \[H^0(j_!
\omega_{\mathrm{Dr}}^{k+2,\mathrm{sm}})=0\] because each connected component of $\mathcal{M}_{\mathrm{Dr},n}$ maps surjectively onto $\Omega$ and \[H^0({\mathscr{F}\!\ell},\Omega_{\mathscr{F}\!\ell})=0.\] Hence $H^0(\ker d'^{k+1})=0$, which implies the vanishing of $H^0( \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1})$. \end{proof} \begin{para} \label{Hd} To further state our result, we need a digression on Hecke algebras. Let $S$ be a finite set of rational primes containing $p$ such that $K_l=K^p\cap\mathrm{GL}_2(\mathbb{Q}_l)$ is maximal for $l\notin S$. As usual, we denote by $T_l$ the double coset action of \[[K_l\begin{pmatrix} l & 0 \\ 0 & 1 \end{pmatrix} K_l]\] and by $S_l$ the action of $\begin{pmatrix} l & 0 \\ 0 & l \end{pmatrix}$. Let \[\mathbb{T}^S:=\mathbb{Z}_p[T_l,S_l,l\notin S]\] be the $\mathbb{Z}_p$-algebra generated by the symbols $T_l,S_l$ for $l\notin S$. Then it acts naturally on $\mathcal{O}^{\mathrm{la}}_{K^p}$ and commutes with $\theta_\mathfrak{h}$, $I_k$, $d^{k+1}$, $\bar{d}^{k+1}$ as these operators are defined purely at the place $p$. For $k\in\mathbb{Z}$, recall that \[M_{k}(K^p)=H^0({\mathscr{F}\!\ell},\omega^{k,\mathrm{sm}}_{K^p})=\varinjlim_{K_p} H^0(\mathcal{X}_{K^pK_p},\omega^k)\] denotes the space of classical holomorphic modular forms of weight $k$. It is well-known that the action of $\mathbb{T}^S$ on $M_k(K^p)$ is semi-simple, hence there is a natural eigenspace decomposition \[M_k(K^p)=\bigoplus_{\lambda:\mathbb{T}^S\to C} M_k(K^p)[\lambda].\] We will denote by $\sigma_k^{K^p}$ the spectrum of $\mathbb{T}^S$ on $M_k(K^p)$.
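For concreteness, when $K_l=\mathrm{GL}_2(\mathbb{Z}_l)$ is maximal, the double coset defining $T_l$ admits the classical left coset decomposition (a standard fact, recalled here for the reader's convenience)
\[K_l\begin{pmatrix} l & 0 \\ 0 & 1 \end{pmatrix} K_l=\coprod_{b=0}^{l-1}\begin{pmatrix} l & b \\ 0 & 1 \end{pmatrix} K_l\ \sqcup\ \begin{pmatrix} 1 & 0 \\ 0 & l \end{pmatrix} K_l,\]
so $T_l$ acts as a sum of $l+1$ translates.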
In general, since $\mathrm{GL}_2(\mathbb{A}_f)$ acts on $\mathrm{t}$ via the cyclotomic character, we also have a spectral decomposition \[M_k(K^p)\cdot \mathrm{t}^{n}=\bigoplus_{\lambda\in\sigma_{k,n}^{K^p}} M_k(K^p)\cdot \mathrm{t}^{n}[\lambda]\] where $\sigma_{k,n}^{K^p}$ denotes the spectrum of $\mathbb{T}^S$ on $M_k(K^p)\cdot \mathrm{t}^{n}$. On the other hand, for $k\geq 2$, there is also a spectral decomposition for $H^1({\mathscr{F}\!\ell},\omega^{2-k,\mathrm{sm}}_{K^p})=\varinjlim_{K_p} H^1(\mathcal{X}_{K^pK_p},\omega^{2-k})$: \[H^1({\mathscr{F}\!\ell},\omega^{2-k,\mathrm{sm}}_{K^p})=\bigoplus_{\lambda\in \sigma_{k,k-1}^{K^p}} H^1({\mathscr{F}\!\ell},\omega^{2-k,\mathrm{sm}}_{K^p})[\lambda],\] where $\lambda$ runs over systems of Hecke eigenvalues associated to cuspidal automorphic representations $\pi$ of $\mathrm{GL}_2(\mathbb{A})$ such that $\pi_\infty$ has the same infinitesimal character as the irreducible algebraic representation of $\mathrm{GL}_2$ with highest weight $(k-2,0)$ and $(\pi^\infty)^{K^p}\neq 0$. Hence the spectrum is a subset of $\sigma_{k,k-1}^{K^p}$. This can be seen from the Hodge-Tate decomposition of $H^1_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^pK_p},\Sym^{k-2} V)$. Recall that $\mathcal{A}^{K^p}_{(-n_1,-n_2)}$ denotes the space of quaternionic forms, cf. Definition \ref{AFD}. The Hecke algebra $\mathbb{T}^S$ acts naturally on it. By the Jacquet-Langlands correspondence, this action is semi-simple, and moreover for $k\geq 1$, there is a natural decomposition into eigenspaces \[\mathcal{A}^{K^p}_{(k,0)}=\bigoplus_{\lambda\in \sigma^{K^p}_{k+2,1}} \mathcal{A}^{K^p}_{(k,0)}[\lambda]\] where $\lambda:\mathbb{T}^S\to C$ runs over cuspidal automorphic representations $\pi$ of $\mathrm{GL}_2(\mathbb{A})$ such that $\pi_\infty$ has the same infinitesimal character as the irreducible algebraic representation of $\mathrm{GL}_2$ with highest weight $(0,-k)$ and such that $(\pi^\infty)^{K^p}\neq 0$ and $\pi_p$ is special or supercuspidal. 
In particular, the spectrum is a subset of $\sigma^{K^p}_{k+2,1}$. When $k=0$, for a Hecke eigenform, either it is contained in $\mathcal{A}^{1}_{(0,0)}$, i.e. (as a function on $(D\otimes \mathbb{A}_f)^\times$) it factors through the reduced norm map, hence transfers to an eigenform in $M_0(K^p)$, or it transfers to a cuspidal eigenform of weight $2$ as in the higher weight case. Therefore \[\mathcal{A}^{K^p}_{(0,0)}=\bigoplus_{\lambda\in\sigma_{2,1}^{K^p}\cup \sigma_{0}^{K^p}} \mathcal{A}^{K^p}_{(0,0)}[\lambda].\] \end{para} \begin{para} \label{D0a} Recall that in Definition \ref{D0}, we introduced \[D_0:=H^0({\mathscr{F}\!\ell},\wedge^2 D^{\mathrm{sm}}_{K^p}).\] It follows from \ref{HEt} that $D_0=M_0(K^p)\cdot c$ and $\mathrm{GL}_2(\mathbb{A}_f)$ acts on $c$ via $|\cdot|^{-1}\circ\det$. Now we can state the main result of this section. \end{para} \begin{thm} \label{sd} For $\chi=(-k,0)$, there are natural generalized eigenspace decompositions \[\ker I^1_k=\bigoplus_{\lambda \in\sigma_{k+2,1}^{K^p}} \ker I^1_k\widetilde{[\lambda]}\] when $k\geq 1$, and \[\ker I^1_0=\bigoplus_{\lambda \in\sigma_{2,1}^{K^p}\cup \sigma_{0}^{K^p}} \ker I^1_0\widetilde{[\lambda]}.\] Here $\widetilde{[\lambda]}$ denotes the generalized eigenspace of $\lambda$. \end{thm} \begin{proof} By Theorem \ref{dec1}, it's enough to show that there are generalized eigenspace decompositions for each of the graded pieces $\ker H^1(\bar{d}^{k+1})$, $H^1(\ker d'^{k+1})$ and $H^0(\coker d'^{k+1})$. This is clear for $\ker H^1(\bar{d}^{k+1})$ and $H^1(\ker d'^{k+1})$ in view of the discussion in \ref{Hd}.
For $H^0(\coker d'^{k+1})$, we recall that $\coker I_k$ is supported on $\mathbb{P}^1(\mathbb{Q}_p)$ and \[(\coker d'^{k+1})_{\infty}=(\omega^{-k-2}_{\mathscr{F}\!\ell})_\infty\otimes_C H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)(k) \cdot \mathrm{t}^{-k}.\] Since $\mathrm{GL}_2(\mathbb{Q}_p)$ acts transitively on $\mathbb{P}^1(\mathbb{Q}_p)$, it is enough to show that $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\cdot \mathrm{t}^{-k}$ has a generalized eigenspace decomposition with the spectrum contained in $\sigma_{k+2,1}^{K^p}$, or in $\sigma^{K^p}_{2,1}\cup \sigma_0^{K^p}$ if $k=0$. This can be deduced either from Coleman's result \cite[Theorem 2.1]{Cole97} or from people's work on the $\ell$-adic cohomology of the Igusa curves and ``independence of $\ell$'' results. Here we provide a proof without using these works. See also Theorem \ref{ijdR} for another proof. By Proposition \ref{Colefin} (see also Definition \ref{H1Ig}), $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)$ is a direct limit of finite-dimensional Hecke-stable vector spaces. Hence there is a generalized eigenspace decomposition \[ H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\cdot \mathrm{t}^{-k}=\bigoplus_{\lambda:\mathbb{T}^S\to C} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\cdot \mathrm{t}^{-k}\widetilde{[\lambda]},\] which also implies a decomposition \[\coker d'^{k+1} =\bigoplus_{\lambda} (\coker d'^{k+1})\widetilde{[\lambda]}.\] We denote the spectrum by $\sigma$. It remains to show $\sigma\subseteq \sigma_{k+2,1}^{K^p}$ (or $\sigma_{2,1}^{K^p}\cup \sigma_{0}^{K^p}$ if $k=0$). Suppose $\lambda\in\sigma$ is not contained in $\sigma_{k+2,1}^{K^p}$ ($\sigma_{2,1}^{K^p}\cup \sigma_{0}^{K^p}$ if $k=0$). Then it follows from what we have proved that $\Fil_2 \ker I^1_k[\lambda]=0$ and $(\ker I^1_k)\widetilde{[\lambda]} = H^0({\mathscr{F}\!\ell},(\coker d'^{k+1})\widetilde{[\lambda]})$. \begin{lem} $H^0({\mathscr{F}\!\ell},(\coker d'^{k+1})\widetilde{[\lambda]})^{\mathfrak{n}}\neq 0$.
Recall that $\mathfrak{n}=\{\begin{pmatrix} 0 & * \\ 0& 0\end{pmatrix}\} \subseteq \mathfrak{gl}_2(\mathbb{Q}_p)$. \end{lem} \begin{proof} Let $0\in\mathbb{P}^1(\mathbb{Q}_p)$ denote the zero of $e_2$. Then $x$ is a local coordinate around $0$. Since $(\coker d'^{k+1})\widetilde{[\lambda]}$ is supported on $\mathbb{P}^1(\mathbb{Q}_p)$, a profinite set, it suffices to show that the stalk $(\coker d'^{k+1})_0\widetilde{[\lambda]}$ at $0$ has non-zero $\mathfrak{n}$-invariants. This is clear as \[(\coker d'^{k+1})_0\widetilde{[\lambda]}\cong(\omega^{-k-2}_{\mathscr{F}\!\ell})_0\otimes_C H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)(k) \cdot \mathrm{t}^{-k}\widetilde{[\lambda]}\neq0,\] and $(\omega^{-k-2}_{\mathscr{F}\!\ell})_0^{\mathfrak{n}}=Ce_1^{-k-2}\neq 0$. \end{proof} The lemma shows that $(\ker I^1_k)\widetilde{[\lambda]}^{\mathfrak{n}}\neq 0$. However, by Theorem \ref{kernk} (after twisting by $\mathrm{t}^{-k}$), we see that one of the following holds: \begin{itemize} \item $M_{k+2}(K^p)\otimes_{M_0(K^p)}D_0^{\otimes -k-1}\cdot\mathrm{t}^{-k}[\lambda]\neq 0$; \item $H^1({\mathscr{F}\!\ell},\omega^{-k,\mathrm{sm}}_{K^p})\cdot\mathrm{t}^{-k}[\lambda]\neq 0$; \item $M_{-k}(K^p)[\lambda]\neq 0$. \end{itemize} In the first case, $\lambda\in\sigma_{k+2,1}^{K^p}$ because $\mathrm{GL}_2(\mathbb{A}^p_f)$ acts on $D_0$ via the inverse of the $p$-adic cyclotomic character. The same conclusion is true for the second case. The third case can only happen when $k=0$, and it implies that $\lambda\in \sigma_0^{K^p}$. Hence we get a contradiction. Therefore the spectrum of $\coker I_k$ is contained in $\sigma_{k+2,1}^{K^p}$ if $k\geq 1$ and in $\sigma_{2,1}^{K^p}\cup \sigma_{0}^{K^p}$ if $k=0$. \end{proof} In Section \ref{VIk}, we introduced $I'_0:H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},(0,0)}\to H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p})^{\mathrm{la},(1,-1)}(1)=H^1({\mathscr{F}\!\ell},\mathcal{O}_{K^p}^{\mathrm{la},(1,-1)})(1)$ induced by $I^1_0$.
\begin{cor} \label{sd'} \[\ker I'_0=\bigoplus_{\lambda \in\sigma_{2,1}^{K^p}\cup \sigma_{0}^{K^p}} \ker I'_0\widetilde{[\lambda]}.\] \end{cor} \begin{proof} This follows from Theorem \ref{sd} because $\ker I'_0$ is a quotient of $\ker I^1_0$, cf. Subsection \ref{VIk}. \end{proof} \begin{para} \label{2nddec} Next we explore another decomposition of $\ker I^1_k$. Note that $\ker I_k^1=\ker H^1(d^{k+1})$ by Lemma \ref{kerdk+1}. Consider the following (de Rham) complex \[ DR_k:\mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\xrightarrow{d^{k+1}} \mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1}.\] The decreasing filtration $\Fil^1 DR_k: 0\to \mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1}$ defines a natural filtration on $\mathbb{H}^\bullet(DR_k)$. Then \[\ker H^1(d^{k+1})=\mathbb{H}^1(DR_k)/\Fil^1\mathbb{H}^1(DR_k)\] with $\Fil^1\mathbb{H}^1(DR_k)\cong \coker H^0(d^{k+1})$ by Remark \ref{hyclem}. \end{para} \begin{lem} \label{lemH0} \hspace{2em} \begin{enumerate} \item $H^0(d^{k+1})=0$. In particular, \[\Fil^1\mathbb{H}^1(DR_k)\cong\coker H^0(d^{k+1})=H^0( \mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1}).\] \item There is a natural isomorphism \[H^0( \mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1})\cong \Sym^k V\otimes_{\mathbb{Q}_p} M_{k+2}(K^p)(k)\otimes D_0^{-k-1}\cdot\mathrm{t}^{-k}\] induced by the inclusion $\mathcal{O}^{\mathrm{lalg},(-k,0)}_{K^p}\subseteq \mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}$, cf. \ref{lalg0k}, \ref{lalgKp}. 
\end{enumerate} \end{lem} \begin{proof} The first claim is clear as $H^0( \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p})=0$ when $k\neq 0$ and $H^0( \mathcal{O}^{\mathrm{la},(n_1,n_1)}_{K^p})=M_0(K^p)\cdot \mathrm{t}^{n_1}$, which is annihilated by $d^1$, cf. \cite[Corollary 5.1.3]{Pan20}. The second claim is the last part of Corollary \ref{gso2k2}. \end{proof} \begin{defn} Denote by $i_{H}$ the induced map \[\Sym^k V\otimes_{\mathbb{Q}_p} M_{k+2}(K^p)(k)\otimes D_0^{-k-1}\cdot\mathrm{t}^{-k}\cong\Fil^1\mathbb{H}^1(DR_k)\subseteq \mathbb{H}^1(DR_k).\] Recall that $D_0^{-1}(1)\cdot \mathrm{t}^{-1}=M_0(K^p)\otimes_{\mathbb{Q}_p}\det^{-1}$ as a $\mathrm{GL}_2(\mathbb{A}_f)\times G_{\mathbb{Q}_p}$-representation, cf. \ref{HEt}. We may also view $i_H$ as \[i_H: \Sym^k V^*\otimes_{\mathbb{Q}_p} M_{k+2}(K^p)\otimes D_0^{-1}\to\mathbb{H}^1(DR_k)\] where $V^*$ denotes the dual of $V$. We will see in Remark \ref{HodFil} that $i_H$ describes the Hodge filtration on the de Rham cohomology of the modular curves. \end{defn} Note that there is a short exact sequence \[0\to H^1(\ker d^{k+1}) \to \mathbb{H}^1(DR_k)\to H^0(\coker d^{k+1})\to 0.\] Combining with Corollary \ref{cokerIkinfty} and Proposition \ref{kerdk+1s}, we obtain the following result. \begin{prop} \label{bH1DRk} There is a natural exact sequence \[0\to (H^1(j_{!} \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes \mathcal{A}_{D,(k,0)}^{c,K^p})^{\mathcal{O}_{D_p}^\times}\cdot{\varepsilon}_p^{-k} \to \mathbb{H}^1(DR_k) \to H^0( i_* \mathcal{H}^1_{\mathrm{ord}}(K^p,k)\otimes\omega^k_{{\mathscr{F}\!\ell}}) \cdot\varepsilon^{-k}\to 0 \] with $H^0( i_* \mathcal{H}^1_{\mathrm{ord}}(K^p,k)\otimes\omega^k_{{\mathscr{F}\!\ell}}) \cdot\varepsilon^{-k}\cong\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) \cdot \varepsilon^{-k}{e'_2}^k$. \end{prop} Assuming Corollary \ref{ssIg} below, we have the following result.
\begin{cor} \label{BScon} Suppose $\lambda\in\sigma^{K^p}_{k+2,1}$ corresponds to a cuspidal automorphic representation $\pi$ of $\mathrm{GL}_2(\mathbb{A})$. \begin{enumerate} \item If $\pi_p$ is supercuspidal, then $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\cdot\mathrm{t}^{-k}[\lambda]=0$ by Corollary \ref{ssIg} below, and \[\ker I^1_k\widetilde{[\lambda]} \cong (H^1(j_{!} \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \mathcal{A}_{D,(k,0)}^{c,K^p}[\lambda])^{\mathcal{O}_{D_p}^\times}\cdot{\varepsilon}_p^{-k}/ i_H(\Sym^k V^*\otimes_{\mathbb{Q}_p} M_{k+2}(K^p)\otimes D_0^{-1}[\lambda]). \] In particular, $\ker I^1_k\widetilde{[\lambda]}=\ker I^1_k[\lambda]$ is the eigenspace of $\lambda$. Moreover, by Theorem \ref{dec1}, there is an exact sequence \[\begin{multlined}0\to \Sym^k V\otimes_{\mathbb{Q}_p} H^1(\omega^{-k,\mathrm{sm}}_{K^p})\cdot{\varepsilon}^{-k}[\lambda]\to \ker I^1_k[\lambda] \\ \to (H^1(j_{!} \omega^{k+2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes \mathcal{A}_{D,(k,0)}^{K^p}[\lambda])^{\mathcal{O}_{D_p}^\times}\otimes \det{}^{k+1}\cdot{\varepsilon}_p^{-k}\to 0.\end{multlined}\] \item If $ \mathcal{A}_{D,(k,0)}^{c,K^p}[\lambda]=0$, i.e. $\pi_p$ is a principal series, then \[\ker I^1_k\widetilde{[\lambda]}\cong \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) \cdot \varepsilon^{-k}{e'_2}^k[\lambda]/ i_H(\Sym^k V^*\otimes_{\mathbb{Q}_p} M_{k+2}(K^p)\otimes D_0^{-1}[\lambda]).\] In particular, $\ker I^1_k\widetilde{[\lambda]}=\ker I^1_k[\lambda]$ is the eigenspace of $\lambda$ by Corollary \ref{ssIg} below. 
Moreover, by Theorem \ref{dec1}, there is an exact sequence \[\begin{multlined}0\to \Sym^k V\otimes_{\mathbb{Q}_p} H^1(\omega^{-k,\mathrm{sm}}_{K^p})\cdot{\varepsilon}^{-k}[\lambda]\to \ker I^1_k[\lambda] \\ \to \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) \cdot \varepsilon^{-k}{e'_1}^{k+1}{e'_2}^{-1}[\lambda]\to 0.\end{multlined}\] \end{enumerate} \end{cor} \begin{rem} \label{DRstn} We believe that $\ker I^1_k\widetilde{[\lambda]}=\ker I^1_k[\lambda]$ also holds when $\pi_p$ is special. This can be proved by using Emerton's local-global compatibility and people's work on the $p$-adic local Langlands correspondence for $\mathrm{GL}_2(\mathbb{Q}_p)$. See Remark \ref{stn} below. It would be very interesting to have a more direct proof. Note that in both cases when $\pi_p$ is either supercuspidal or a principal series, we actually proved that $\mathbb{H}^1(DR_k)[\lambda] = \mathbb{H}^1(DR_k)\widetilde{[\lambda]}$ and \[ \ker I^1_k[\lambda]=\mathbb{H}^1(DR_k)[\lambda]/ \Fil^1\mathbb{H}^1(DR_k)[\lambda]. \] It is natural to guess that this is also true when $\pi_p$ is special. \end{rem} \begin{rem} \label{KSreal} Proposition \ref{bH1DRk} says that $\mathbb{H}^1(DR_k)$ is an extension of \begin{itemize} \item $H^0( i_* \mathcal{H}^1_{\mathrm{ord}}(K^p,k)\otimes\omega^k_{{\mathscr{F}\!\ell}}) \cdot\varepsilon^{-k}$ by \item $(H^1(j_{!} \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes \mathcal{A}_{D,(k,0)}^{c,K^p})^{\mathcal{O}_{D_p}^\times}\cdot{\varepsilon}_p^{-k}$. \end{itemize} We can rewrite these two cohomology groups in the following more uniform way. On $\Omega$, we may view $\mathcal{A}_{D,(k,0)}^{c,K^p}$ as a $C$-local system $\mathcal{L}^{ss}$ on the \'etale site $\Omega_{\mathrm{\acute{e}t}}$ of $\Omega$. 
On the other hand, $\omega^k_{{\mathscr{F}\!\ell}}|_{\Omega}=j^*\omega^k_{\mathscr{F}\!\ell}$ naturally defines a sheaf on $\Omega_{\mathrm{\acute{e}t}}$, which by abuse of notation will also be denoted by $j^*\omega^k_{\mathscr{F}\!\ell}$. Let $\nu:\Omega_{\mathrm{\acute{e}t}}\to \Omega$ denote the projection map from the \'etale site to the analytic site. Then \begin{itemize} \item $(H^1(j_{!} \omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes \mathcal{A}_{D,(k,0)}^{c,K^p})^{\mathcal{O}_{D_p}^\times}=H^1({\mathscr{F}\!\ell}, j_!\nu_* (\mathcal{L}^{ss}\otimes j^*\omega^k_{\mathscr{F}\!\ell}))$. \item $H^0( i_* \mathcal{H}^1_{\mathrm{ord}}(K^p,k)\otimes\omega^k_{{\mathscr{F}\!\ell}})=H^0({\mathscr{F}\!\ell}, i_! (\mathcal{L}^{\mathrm{ord}}\otimes i^*\omega^k_{{\mathscr{F}\!\ell}}))$. \end{itemize} where $\mathcal{L}^{\mathrm{ord}}= \mathcal{H}^1_{\mathrm{ord}}(K^p,k)$ is viewed as a local system on $\mathbb{P}^1(\mathbb{Q}_p)$, and we note that $i_*=i_!$. This is very similar to Kashiwara-Schmid's geometric constructions of representations of real groups on the flag variety in \cite{KS94}. It is also very interesting to observe that these representations should be considered as objects in the ``classical local Langlands correspondence'' in the sense that they do not see the information of the Hodge filtration. This suggests that for the $p$-adic local Langlands correspondence, it might be necessary to work in a certain filtered (equivariant) derived category on ${\mathscr{F}\!\ell}$. \end{rem} \subsection{de Rham cohomology of modular curves} \begin{para} The goal of this subsection is to show that most generalized eigensubspaces in Theorem \ref{sd} are eigenspaces. To do this, we need to understand the Hecke action on $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)$. This is well-known for the $\ell$-adic cohomology (essentially the Jacquet module of the automorphic forms).
Here we work with the de Rham cohomology and do all the computations on ${\mathscr{F}\!\ell}$ using ``the $\bar{d}$-resolution'' in Remark \ref{dbarres}. For an integer $k\geq 0$, recall that $\theta^{k+1}$ was introduced in \ref{XCY}. Let $D^k$ denote the complex \[ \omega^{-k,\mathrm{sm}}_{K^p}\xrightarrow{\theta^{k+1}} \omega^{-k,\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1}.\] It follows from the construction of $\theta^{k+1}$ that the inclusion $\omega^{-k,\mathrm{sm}}_{K^p}\subseteq\Sym^k (D^{\mathrm{sm}}_{K^p})^* $ induces a quasi-isomorphism between $D^k$ and $\Sym^k (D^{\mathrm{sm}}_{K^p})^*\xrightarrow{\nabla} \Sym^k (D^{\mathrm{sm}}_{K^p})^* \otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} \Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}}$. The vanishing of $H^0(\theta^{k+1})$ and $H^1(\omega^{k+2,\mathrm{sm}}_{K^p})$ implies that there is a natural exact sequence \[0\to H^0( \omega^{-k,\mathrm{sm}}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1}) \to \mathbb{H}^1(D^k) \to H^1(\omega^{-k,\mathrm{sm}}_{K^p}) \to 0 \] and $\mathbb{H}^2(D^k)=0$, where $\mathbb{H}^*(D^k)$ denotes the hypercohomology of $D^k$. It is easy to see that $\displaystyle \mathbb{H}^1(D^k)=\varinjlim_{K_p\subseteq\mathrm{GL}_2(\mathbb{Q}_p)} \mathbb{H}^1(\mathcal{X}_{K^pK_p}, \Sym^k D^*\xrightarrow{\nabla} \Sym^k D^*\otimes\Omega^1(\mathcal{C}))$. Note that the complex $\Sym^k D^*\xrightarrow{\nabla} \Sym^k D^*\otimes\Omega^1(\mathcal{C})$ is defined over $\mathbb{Q}$ and we have the usual Shimura isomorphism for the cohomology of its base change to the complex numbers, which implies that the Hecke action of $\mathbb{T}^S$ on its cohomology is \textit{semi-simple}. In particular, we just proved the following result. \end{para} \begin{prop} The Hecke action of $\mathbb{T}^S$ on $\mathbb{H}^1(D^k)$ is {semi-simple}. \end{prop} \begin{para} We also have the following complexes on ${\mathscr{F}\!\ell}$.
\[ DR_k:\mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\xrightarrow{d^{k+1}} \mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\otimes_{\mathcal{O}^{\mathrm{sm}}_{K^p}} (\Omega^1_{K^p}(\mathcal{C})^{\mathrm{sm}})^{\otimes k+1},\] \[DR'_k:\mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1}\xrightarrow{d'^{k+1}}\mathcal{O}^{\mathrm{la},(1,-k-1)}_{K^p}(k+1).\] Then by Proposition \ref{ddbarcom}, $\bar{d}^{k+1}$ induces an exact triangle \[ D^k\otimes_{\mathbb{Q}_p} \Sym^k V\cdot\varepsilon^{-k}\to DR_k \xrightarrow{\bar{d}^{k+1}} DR'_k \xrightarrow{+1} D^k[1]\otimes_{\mathbb{Q}_p} \Sym^k V\cdot\varepsilon^{-k}.\] It was shown in the proof of Theorem \ref{dec1} that $H^0(\mathcal{O}^{\mathrm{la},(-k,0)}_{K^p}\otimes_{\mathcal{O}_{{\mathscr{F}\!\ell}}}(\Omega^1_{{\mathscr{F}\!\ell}})^{\otimes k+1})=0$. Hence $\mathbb{H}^0(DR'_k)=0$. Thus \[0\to \mathbb{H}^1(D^k)\otimes_{\mathbb{Q}_p} \Sym^k V\cdot\varepsilon^{-k}\to \mathbb{H}^1(DR_k)\to \mathbb{H}^1(DR'_k)\to 0. \] On the other hand, we have $0\to H^1(\ker d^{k+1}) \to \mathbb{H}^1(DR_k)\to H^0(\coker d^{k+1})\to 0$ as there is no $H^2$ on ${\mathscr{F}\!\ell}$, and a similar statement holds for $\mathbb{H}^1(DR'_k)$. An application of the snake lemma to \[\begin{tikzcd} 0 \arrow[r] & H^1(\ker d^{k+1}) \arrow[d,"SS^k"]\arrow[r] & \mathbb{H}^1(DR_k) \arrow[d,"\mathbb{H}^1(\bar{d}^{k+1})"] \arrow[r]& H^0(\coker d^{k+1}) \arrow[d,"ORD^k"] \arrow[r] & 0 \\ 0 \arrow[r] & H^1(\ker d'^{k+1}) \arrow[r] & \mathbb{H}^1(DR'_k) \arrow[r]& H^0(\coker d'^{k+1}) \arrow[r] & 0 \end{tikzcd}\] shows that \begin{eqnarray} \label{dRstr} 0\to \ker SS^k \to \mathbb{H}^1(D^k)\otimes_{\mathbb{Q}_p} \Sym^k V \cdot\varepsilon^{-k} \to \ker ORD^k \to \coker SS^k\to 0 \end{eqnarray} where the outer vertical maps $SS^k,ORD^k$ are induced by $\bar{d}^{k+1}$.
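The short exact sequences in the diagram above are instances of a standard piece of homological algebra (recalled here; not specific to our situation): for a two-term complex $C:\mathcal{F}^0\xrightarrow{f}\mathcal{F}^1$ of sheaves on a space $X$ with no cohomology in degrees $\geq 2$, the hypercohomology spectral sequence $E_2^{p,q}=H^p(X,\mathcal{H}^q(C))\Rightarrow \mathbb{H}^{p+q}(C)$ has only two non-zero columns and degenerates at $E_2$, giving
\[0\to H^1(X,\ker f)\to \mathbb{H}^1(C)\to H^0(X,\coker f)\to 0.\]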
\end{para} \begin{lem} \label{ORDk} $\ker ORD^k\cong \Sym^k V \otimes_{\mathbb{Q}_p} \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\cdot\varepsilon^{-k}$. \end{lem} \begin{proof} By Corollary \ref{cokerIkinfty}, we have $\coker d^{k+1}\cong \omega^k_{{\mathscr{F}\!\ell}}\otimes_C \mathcal{H}^1_{\mathrm{ord}}(K^p,k)\cdot\varepsilon^{-k}$, $\coker d'^{k+1}\cong \omega^{-k-2}_{{\mathscr{F}\!\ell}}\otimes_C \mathcal{H}^1_{\mathrm{ord}}(K^p,k)\otimes\det{}^{k+1}\cdot\varepsilon^{-k}$, and $ORD^k$ is induced by $\bar{d}^{k+1}:\omega^k_{{\mathscr{F}\!\ell}}\to \omega^{-k-2}_{{\mathscr{F}\!\ell}}\otimes\det{}^{k+1}$, whose kernel is $H^0({\mathscr{F}\!\ell},\omega^k_{{\mathscr{F}\!\ell}})=\Sym^k V\otimes_{\mathbb{Q}_p} C$. This shows that \begin{eqnarray*} \ker ORD^k &\cong& \Sym^k V \otimes_{\mathbb{Q}_p} H^0(\mathcal{H}^1_{\mathrm{ord}}(K^p,k))\cdot\varepsilon^{-k}\\ &\cong & \Sym^k V \otimes_{\mathbb{Q}_p} \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k) \cdot\varepsilon^{-k}. \end{eqnarray*} \end{proof} For $SS^k$, recall that $\displaystyle \mathcal{O}_{\mathrm{LT}}^{D_p^\times-\mathrm{sm}}=\varinjlim_{n}(\pi^{(0)}_{\mathrm{Dr},n})_* \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}$, where $\pi^{(0)}_{\mathrm{Dr},n}:\mathcal{M}_{\mathrm{Dr},n}^{(0)}\to \Omega$ denotes the projection map, and $\displaystyle \Omega^1_{\mathrm{Dr}}=\varinjlim_{n}(\pi^{(0)}_{\mathrm{Dr},n})_* \Omega^1_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}$ cf. Remark \ref{dDR}. Then $\Omega^1_\mathrm{Dr}\cong \omega^{2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}\otimes\det$ by the Kodaira-Spencer isomorphism. There is a natural derivation $d_{\mathrm{dR}}: \mathcal{O}_{\mathrm{LT}}^{D_p^\times-\mathrm{sm}}\to \Omega^1_\mathrm{Dr}$. We denote this complex by $DR_{\mathrm{dR}}$. \begin{defn} For $i\geq 0$, we define \[H^{i}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)}):=\mathbb{H}^i(j_! 
\mathcal{O}_{\mathrm{LT}}^{D_p^\times-\mathrm{sm}}\xrightarrow{j_! d_{\mathrm{dR}}} j_!\Omega^1_\mathrm{Dr}).\] Since $H^i(j_! \mathcal{O}_{\mathrm{LT}}^{D_p^\times-\mathrm{sm}})=H^i(j_! \Omega^1_{\mathrm{Dr}})=0$ unless $i=1$, we have \[H^{i}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})=H^{i-1}\left(H^1(j_! \mathcal{O}_{\mathrm{LT}}^{D_p^\times-\mathrm{sm}})\xrightarrow{H^1(j_!d_\mathrm{dR})} H^1(j_! \Omega^1_{\mathrm{Dr}})\right).\] \end{defn} $\bar{d}^{k+1}$ induces a map $\omega_{\mathrm{Dr}}^{-k,D_p^\times-\mathrm{sm}}\to \omega_{\mathrm{Dr}}^{-k,D_p^\times-\mathrm{sm}}\otimes_{\mathcal{O}_{\mathrm{LT}}^{D_p^\times-\mathrm{sm}}}(\Omega^1_{\mathrm{Dr}})^{\otimes k+1}\cong \omega_{\mathrm{Dr}}^{k+2,D_p^\times-\mathrm{sm}}\otimes\det^{k+1}$. \begin{lem} \label{SSk} There is a natural morphism of complexes \[\left[\omega_{\mathrm{Dr}}^{-k,D_p^\times-\mathrm{sm}}\xrightarrow{\bar{d}^{k+1}} \omega_{\mathrm{Dr}}^{k+2,D_p^\times-\mathrm{sm}}\otimes\det{}^{k+1}\right]\to \Sym^k V\otimes_{\mathbb{Q}_p} DR_{\mathrm{dR}},\] which is a quasi-isomorphism and has a left inverse. In particular, applying $j_!$, we have \[\ker SS^k\cong (H^{1}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(k,0)}^{c,K^p})^{\mathcal{O}_{D_p}^\times}\otimes_{\mathbb{Q}_p}\Sym^k V\cdot{\varepsilon}_p^{-k}, \] \[\coker SS^k \cong \left\{ \begin{array}{lll} (H^{2}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(k,0)}^{K^p})^{\mathcal{O}_{D_p}^\times}\otimes_{\mathbb{Q}_p}\Sym^k V\cdot{\varepsilon}_p^{-k}, &k\geq 1&\\ (H^{2}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(0,0)}^{c,K^p})^{\mathcal{O}_{D_p}^\times}\oplus M_0(K^p) , &k=0& \end{array}.\right.\] \end{lem} \begin{proof} The quasi-isomorphism follows from the construction of $\bar{d}^{k+1}$, cf. \ref{dbar}. More precisely, there is a natural action of $Z(U(\mathfrak{g}))$ on $\Sym^k V\otimes_{\mathbb{Q}_p}DR_{\mathrm{dR}}$. 
For an infinitesimal character $\tilde\chi:Z(U(\mathfrak{g}))\to C$, a simple calculation shows that the $\tilde{\chi}$-isotypic part of $\Sym^k V\otimes_{\mathbb{Q}_p}DR_{\mathrm{dR}}$ is non-zero only when $\tilde{\chi}$ is the infinitesimal character of $\omega^k_{{\mathscr{F}\!\ell}}$, and in this case, the $\tilde{\chi}$-isotypic part is naturally isomorphic to $\omega_{\mathrm{Dr}}^{-k,D_p^\times-\mathrm{sm}}\xrightarrow{\bar{d}^{k+1}} \omega_{\mathrm{Dr}}^{k+2,D_p^\times-\mathrm{sm}}\otimes\det{}^{k+1}$. By Proposition \ref{kerdk+1s} and Corollary \ref{kerd'k+1s}, we may identify $SS^k$ with \[(\mathcal{A}_{D,(k,0)}^{K^p}\otimes H^1( j_{!}\omega^{-k,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}))^{\mathcal{O}_{D_p}^\times}\cdot {\varepsilon'_p}^{-k} \xrightarrow{H^1(\bar{d}^{k+1})} (\mathcal{A}_{D,(k,0)}^{K^p}\otimes H^1( j_{!}\omega^{k+2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}}))^{\mathcal{O}_{D_p}^\times}\otimes\det{}^{k+1}\cdot {\varepsilon'_p}^{-k}\] when $k\geq 1$ and a similar result holds for $k=0$. From this we easily deduce our claim. \end{proof} \begin{thm} \label{ijdR} For $k\geq 1$, there is a natural exact sequence \begin{eqnarray*} 0\to (H^{1}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(k,0)}^{K^p})^{\mathcal{O}_{D_p}^\times}\to \mathbb{H}^1(D^k) \to \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\\ \to (H^{2}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(k,0)}^{K^p})^{\mathcal{O}_{D_p}^\times}\to 0. \end{eqnarray*} When $k=0$, we have \begin{eqnarray*} 0\to (H^{1}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(0,0)}^{c,K^p})^{\mathcal{O}_{D_p}^\times}\to \mathbb{H}^1(D^k) \to \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))\\ \to (H^{2}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(0,0)}^{c,K^p})^{\mathcal{O}_{D_p}^\times}\oplus M_0(K^p)\to 0. 
\end{eqnarray*} \end{thm} \begin{proof} Apply the results of Lemma \ref{ORDk} and Lemma \ref{SSk} to sequence \eqref{dRstr} and remove $\otimes_{\mathbb{Q}_p}\Sym^k V\cdot {\varepsilon'_p}^{-k}$ in every term of the exact sequence by taking the $\mathrm{GL}_2(\mathbb{Q}_p)$-smooth parts of the tensor product of sequence \eqref{dRstr} with $ \Sym^k V^*\cdot {\varepsilon'_p}^{k}$. \end{proof} \begin{rem} Theorem \ref{ijdR} can be viewed as a $p$-adic analogue of the usual vanishing cycle exact sequence in the $\ell$-adic case \cite[4.2]{Car86}. It can also be obtained from taking the hypercohomology of the exact triangle $ j_! D^k|_{\Omega}\to D^k \to i_*i^* D^k\xrightarrow{+1} j_! D^k|_{\Omega}[1]$. It's clear that $\mathbb{H}^*( i_*i^* D^k)$ computes $ \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^*_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)$. For $ j_! D^k|_{\Omega}$, note that $\mathbb{H}^i(j_! D^k|_{\Omega})$ computes the compactly supported de Rham cohomology of the supersingular locus of modular curves, or essentially the compactly supported de Rham cohomology of the Lubin-Tate towers. To compare with Theorem \ref{ijdR}, we use \cite[Th\'eor\`eme 4.4]{CDN20} which says that the Lubin-Tate towers and Drinfeld towers have the same compactly supported de Rham cohomology. \end{rem} \begin{para} \label{H1dRc} As the notation suggests, $H^{i}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})$ should be viewed as the compactly supported de Rham cohomology of the Drinfeld towers in the following sense. Let \[H^i_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},n}^{(0)}):= H^{i-1}\left(H^1(j_! \pi_{\mathrm{Dr},n\,*}^{(0)}\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}})\xrightarrow{d_{\mathrm{dR}}} H^1(j_! \pi_{\mathrm{Dr},n\,*}^{(0)}\Omega^1_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}) \right).\] Then $\displaystyle H^i_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})=\varinjlim_n H^i_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},n}^{(0)})$. 
Let \[H^i_{\mathrm{dR}}(\mathcal{M}_{\mathrm{Dr},n}^{(0)}):= H^{i}\left(H^0(\mathcal{M}_{\mathrm{Dr},n}^{(0)},\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}})\xrightarrow{d_{\mathrm{dR}}} H^0(\mathcal{M}_{\mathrm{Dr},n}^{(0)},\Omega^1_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}) \right).\] Since $\mathcal{M}_{\mathrm{Dr},n}^{(0)}$ is partially proper, the higher cohomology groups of $\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}$ and $\Omega^1_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}$ vanish. Hence $H^i_{\mathrm{dR}}(\mathcal{M}_{\mathrm{Dr},n}^{(0)})=\mathbb{H}^i(\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}\xrightarrow{d_{\mathrm{dR}}}\Omega^1_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}} )$, i.e. the de Rham cohomology of $\mathcal{M}_{\mathrm{Dr},n}^{(0)}$. As explained in \cite[\S 4.3]{CDN20}, the Serre duality on $\mathcal{M}_{\mathrm{Dr},n}^{(0)}$ induces an isomorphism \[H^i_{\mathrm{dR}}(\mathcal{M}_{\mathrm{Dr},n}^{(0)})\cong \Hom_C(H^{2-i}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},n}^{(0)}),C),\] where $\Hom$ is calculated between $C$-vector spaces. Comparing our notations here with the reference, we note that since $\pi_{\mathrm{Dr},n}^{(0)}$ is a finite morphism, it's easy to see that the cohomology group $H^1(j_! \pi_{\mathrm{Dr},n\,*}^{(0)}\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}})$ agrees with the $H^1_c(\mathcal{M}_{\mathrm{Dr},n}^{(0)},\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}})$ introduced in \cite[\S 4.3.2]{CDN20}. Moreover, $H^0_{\mathrm{dR}}(\mathcal{M}_{\mathrm{Dr},n}^{(0)})=\mathscr{C}(\pi^0(\mathcal{M}_{\mathrm{Dr},n}^{(0)}),C)$, where $\pi^0(\mathcal{M}_{\mathrm{Dr},n}^{(0)})$ denotes the set of connected components of $\mathcal{M}_{\mathrm{Dr},n}^{(0)}$. Hence $H^{2}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},n}^{(0)})$ is finite-dimensional and we have the following result. \end{para} \begin{lem} \label{detnm} The action of $\mathrm{GL}_2(\mathbb{Q}_p)^0$ on $H^2_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})$ factors through the determinant map.
Similarly, the action of $\mathcal{O}_{D_p}^\times$ factors through the reduced norm map. \end{lem} \begin{cor} \label{ssIg} Let $\lambda:\mathbb{T}^S\to C$ correspond to a cuspidal automorphic representation $\pi$ of $\mathrm{GL}_2(\mathbb{A})$. Suppose that the generalized eigenspace $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\widetilde{[\lambda]}\neq 0$. Then \begin{enumerate} \item $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\widetilde{[\lambda]}=H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)[\lambda]$ is the eigenspace associated to $\lambda$; \item $\pi_p$ is either a principal series or special. \end{enumerate} \end{cor} \begin{proof} By Theorem \ref{ijdR}, there is an exact sequence \[\mathbb{H}^1(D^k)\widetilde{[\lambda]} \to \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\widetilde{[\lambda]} \to (H^{2}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(k,0)}^{K^p})^{\mathcal{O}_{D_p}^\times}\widetilde{[\lambda]}\to 0.\] Note that $\mathbb{T}^S$ acts semi-simply on the first and third terms. Hence if $\mathbb{T}^S$ does not act semi-simply on $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)\widetilde{[\lambda]}$, we see that $(H^{2}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(k,0)}^{K^p})^{\mathcal{O}_{D_p}^\times}\widetilde{[\lambda]}$ at least contains $\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} W$ for some smooth representation $W$ of $B$, which contradicts Lemma \ref{detnm}. This proves the first claim. Lemma \ref{detnm} also implies that \[\Hom_{C[\mathrm{GL}_2(\mathbb{Q}_p)]}(\pi_p,\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^k)[\lambda])\neq 0.\] Hence $\pi_p$ is either a principal series or special.
(To rule out the possibility that $\pi_p$ is a character, either we use the fact that $\pi$ is generic, or we observe that this would imply that a special representation appears in $(H^{2}_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},\infty}^{(0)})\otimes_C \mathcal{A}_{D,(k,0)}^{K^p})^{\mathcal{O}_{D_p}^\times}$, which again contradicts Lemma \ref{detnm}.) \end{proof} \begin{rem} \label{HodFil} It follows from Lemma \ref{lemH0} that the natural map $\mathbb{H}^1(D^k)\to \mathbb{H}^1(DR_k)$ induces $\Fil^1 \mathbb{H}^1(D^k)=\Fil^1 \mathbb{H}^1(DR_k)$. In particular, the inclusion $\Fil^1 \mathbb{H}^1(DR_k)\subseteq \mathbb{H}^1(DR_k)$ exactly describes the position of the Hodge filtration on $\mathbb{H}^1(D^k)$. \end{rem} \section{Intertwining operators: \texorpdfstring{$p$}{Lg}-adic Hodge-theoretic interpretation} \label{IopHti} In this section, we give a $p$-adic Hodge-theoretic meaning of the intertwining operators $I_k$ and $I^1_k$ introduced in Section \ref{Ioef}. Our main result says that it essentially agrees with an operator introduced by Fontaine \cite{Fo04} in the classification of almost de Rham $B_\mathrm{dR}$-representations, which we will call the Fontaine operator. We first recall its construction and generalize it to our setting. \subsection{The Fontaine operator} \begin{para}[Construction of $B_{\mathrm{dR}}^+$] \label{CBdR+} We start with the classical construction of $B^+_{\mathrm{dR}}$. Let $\displaystyle \mathcal{O}_{C^{\flat}}=\varprojlim_{x\mapsto x^p}\mathcal{O}_C/p\cong \varprojlim_{x\mapsto x^p}\mathcal{O}_C$ be the tilt of $\mathcal{O}_C$ (classically denoted by $R$) and $A_\mathrm{inf}=W(\mathcal{O}_{C^{\flat}})$ its ring of Witt vectors. We have the usual surjective ring homomorphism \[\theta:A_\mathrm{inf}\to \mathcal{O}_C\] sending the Teichm\"{u}ller lifting $[x]$ of $\displaystyle x=(x_n)_{n\geq 0}\in\varprojlim_{x\mapsto x^p}\mathcal{O}_C$ to $x_0$.
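As a standard illustration of $\theta$ (a well-known fact, not needed in the sequel): a compatible choice of $p$-power roots of $p$ gives an element $p^\flat:=(p,p^{1/p},p^{1/p^2},\dots)\in\varprojlim_{x\mapsto x^p}\mathcal{O}_C=\mathcal{O}_{C^{\flat}}$, and
\[
\theta([p^\flat])=p,\qquad\text{so that}\qquad \xi:=[p^\flat]-p\in\ker(\theta);
\]
it is a standard fact that $\xi$ generates the ideal $\ker(\theta)\subseteq A_\mathrm{inf}$.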
By abuse of notation, we also denote by $\theta:A_\mathrm{inf}[\frac{1}{p}]\to C$ the rational version. The period ring $B_{\mathrm{dR}}^+$ is defined as the $\ker(\theta)$-adic completion of $A_\mathrm{inf}[\frac{1}{p}]$. This is a complete discrete valuation ring with residue field isomorphic to $C$ via $\theta$. It has a natural decreasing filtration with $\Fil^k B_\mathrm{dR}^+=\ker(\theta)^k B_\mathrm{dR}^+, k\geq 0$. By Hensel's lemma, for any finite extension $K$ of $\mathbb{Q}_p$ in $C$, we have a canonical lifting of $K\subseteq C$ to $K\subseteq B_{\mathrm{dR}}^+$. The Galois group $G_{\mathbb{Q}_p}$ acts on everything and $\theta$ is $G_{\mathbb{Q}_p}$-equivariant. Fix a compatible system of primitive $p^n$-th roots of unity $\{\zeta_{p^n}\}_{n\geq 0}$ in $\mathcal{O}_C$ such that $\zeta_{p^{n+1}}^p=\zeta_{p^n}$, or equivalently a generator of $\mathbb{Z}_p(1)$. Then $(\zeta_{p^n})_{n\geq 0}$ defines an element $\varepsilon$ in $\mathcal{O}_{C^\flat}$. It is well-known that $[\varepsilon]-1$ generates the kernel of $\theta:B_{\mathrm{dR}}^+\to C$. The element \[t:=\log([\varepsilon])=-\sum_{i=1}^{+\infty} \frac{(1-[\varepsilon])^i}{i}\in B_{\mathrm{dR}}^+\] is Fontaine's ``$2\pi i$'' in $p$-adic Hodge theory on which $G_{\mathbb{Q}_p}$ acts via the cyclotomic character. There is a canonical isomorphism $\gr^kB_{\mathrm{dR}}^+=t^k B_\mathrm{dR}^+/t^{k+1}B_{\mathrm{dR}}^+\cong C(k)$, $k\geq 0$. \end{para} \begin{para}[Banach $B_{\mathrm{dR}}^+/(t^k)$-modules] \label{BBdrtkmod} For the purpose of this paper, we will only consider modules over $B_{\mathrm{dR}}^+/(t^k)$. There is a natural $\mathbb{Q}_p$-Banach algebra structure on $B_{\mathrm{dR}}^+/(t^k)$ with the unit open ball given by the image of $A_\mathrm{inf}\to B_{\mathrm{dR}}^+/(t^k)$.
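Before turning to modules over $B_{\mathrm{dR}}^+/(t^k)$, we make explicit the standard computation behind the assertion that $G_{\mathbb{Q}_p}$ acts on $t$ via the cyclotomic character: for $g\in G_{\mathbb{Q}_p}$ we have $g(\varepsilon)=\varepsilon^{\varepsilon_p(g)}$ in $\mathcal{O}_{C^\flat}$, hence
\[
g(t)=\log\left([\varepsilon]^{\varepsilon_p(g)}\right)=\varepsilon_p(g)\log([\varepsilon])=\varepsilon_p(g)\,t,
\]
which is the $G_{\mathbb{Q}_p}$-equivariance underlying the isomorphism $\gr^kB_{\mathrm{dR}}^+\cong C(k)$.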
We say a $\mathbb{Q}_p$-Banach space $W$ is a Banach $B_{\mathrm{dR}}^+/(t^k)$-module if it is equipped with a $B_{\mathrm{dR}}^+/(t^k)$-module structure such that the multiplication map $B_{\mathrm{dR}}^+/(t^k)\times W\to W$ is jointly continuous. We equip $W$ with the $t$-adic filtration: $\Fil^iW=t^iW,i\geq 0$ and set \[W_i:=\gr^i W=W\otimes_{B_{\mathrm{dR}}^+/(t^k)} t^iB_{\mathrm{dR}}^+/t^{i+1}B_{\mathrm{dR}}^+,~~~~~~i\geq0.\] Hence $W_0=W/tW$. Moreover if $W$ is flat over $B_{\mathrm{dR}}^+/(t^k)$, then $t^iW\subseteq W$ is a closed subspace because it can be identified with the kernel of $W\xrightarrow{\times t^{k-i}} W$. Hence in this case $W_i$ is a $C$-Banach space for $i\geq 0$ and there are natural isomorphisms \[W_i\cong W_0\otimes_C C(i)=W_0(i),~~~~~~i\geq 0.\] \end{para} \begin{para}[Assumption] \label{setup} Throughout this subsection, we fix a finite extension $K$ of $\mathbb{Q}_p$ in $C$. Now suppose $W$ is a Banach $B_{\mathrm{dR}}^+/(t^k)$-module equipped with a continuous semilinear action of $G_{K}$. Then $G_{K}$ also acts on $W_i$. Recall that $K_\infty\subset \overbar\mathbb{Q}_p$ denotes the maximal $\mathbb{Z}_p$-extension of $K$ in $K(\mu_{p^\infty})$, cf. \ref{HTsetup}. Let \[W^{K}_i\subseteq W_i\] be the subspace of $G_{K_\infty}$-fixed, $G_K$-analytic vectors in $W_i$. This is naturally a $K$-Banach space. The $C$-Banach space structure on $W_i$ induces a map \[\varphi_{W,i}^K:C\widehat{\otimes}_K W^{K}_i \to W_i.\] We will write $\varphi_i^K$ instead of $\varphi_{W,i}^K$ if there is no confusion. We make the following assumptions from now on: \begin{itemize} \item $W$ is flat over $B_{\mathrm{dR}}^+/(t^k)$; \item $\varphi_0^K$ is an isomorphism. \end{itemize} A standard argument using Tate's normalized trace shows that the second assumption implies that the natural map \[L\otimes_{K}W^K_0\xrightarrow{\cong} W^L_0\] is an isomorphism for any finite extension $L$ of $K$ in $C$.
In particular, $\varphi^L_0$ is an isomorphism for any finite extension $L$ of $K$. \end{para} \begin{para}[Fontaine's generalization of Sen theory] We denote by \[W^K\subseteq W\] the subspace of $G_{K_\infty}$-fixed, $G_K$-analytic vectors. It has a natural $K$-linear structure coming from the inclusion $K\subseteq B_{\mathrm{dR}}^+$ and inherits the $K$-Banach space structure and the decreasing filtration from $W$. Taking the graded pieces, we get a natural injective map \[g_i^K:\gr^i W^K\to W_i^K\] for $i\geq 0$. Here is Fontaine's generalization of Sen theory to $B_{\mathrm{dR}}^+$-representations. \end{para} \begin{prop} \label{FonSen} Let $W$ be a flat Banach $B_{\mathrm{dR}}^+/(t^k)$-module equipped with a continuous semilinear action of $G_{K}$ such that $\varphi_0^K$ is an isomorphism. Then there exists a finite extension $K'$ of $K$ in $K_\infty$ such that \begin{enumerate} \item $g_i^{K'}$ is an isomorphism for $i\geq0$. \item $g^L_i$ is an isomorphism for $i\geq 0$ and any finite extension $L$ of $K'$. In particular, there is a natural isomorphism \[L\otimes_{K'}W^{K'}\cong W^L.\] \end{enumerate} \end{prop} \begin{proof} Denote by $K_n\subseteq K_\infty$ the unique $\mathbb{Z}/(p^n)$-extension of $K$ and by $\Gamma_n$ the group $\Gal(K_\infty/K_n)$. \begin{lem} \label{surj^K} Let $W$ be as in Proposition \ref{FonSen}. Then for $j\geq 0$, \begin{enumerate} \item $W^{G_{K_\infty}}\to (W/t^jW)^{G_{K_\infty}}$ is surjective. \item The image of $W^{K_n}\to (W/t^jW)^{K_n}$ contains $(W/t^jW)^K$ for some $n\geq 0$. \end{enumerate} \end{lem} \begin{proof} The first claim is a direct consequence of the well-known result $H^1_\mathrm{cont}(G_{K_\infty},C)=0$. Indeed, consider the $G_{K_\infty}$-invariants of the exact sequence \[0\to t^jW\to W\to W/t^jW\to 0.\] It suffices to show that $H^1_{\mathrm{cont}}(G_{K_\infty},t^jW)=0$. Since $t^jW$ is filtered by $W_i$, it is enough to know that $H^1_{\mathrm{cont}}(G_{K_\infty},W_i)=0$.
Since $W_i= C\widehat{\otimes}_K W^{K}_i$ and $G_{K_\infty}$ fixes $W^K_i$, \[H^1_{\mathrm{cont}}(G_{K_\infty},W_i)= H^1_{\mathrm{cont}}(G_{K_\infty},C\widehat{\otimes}_K W^{K}_i) =H^1_{\mathrm{cont}}(G_{K_\infty},C)\widehat{\otimes}_K W^{K}_i=0\] where the second equality follows from Proposition \ref{tenscomHcont} and the third equality is a classical result of Tate. Note that this shows that $W^{G_{K_\infty}}$ is filtered by $W_i^{G_{K_\infty}}$. To see the second part of the lemma, consider the exact sequence \[0\to (t^jW)^{G_{K_\infty}}\to W^{G_{K_\infty}}\to (W/t^jW)^{G_{K_\infty}}\to 0.\] Let $n\geq0$ be an integer. Now take the $\Gamma_n$-analytic vectors of this sequence; equivalently, take the completed tensor product with $\mathscr{C}^{\mathrm{an}}(\Gamma_n,\mathbb{Q}_p)$ over $\mathbb{Q}_p$ and then take the $\Gamma_n$-invariants. Hence we get an exact sequence \[W^{K_n}\to (W/t^jW)^{K_n}\to H^1_\mathrm{cont}\left(\Gamma_n,(t^jW)^{G_{K_\infty}}\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_n,\mathbb{Q}_p)\right)\] functorial in $n$. \begin{lem} \label{Gamacyc} There exists $n\geq 0$ such that the image of \[H^1_\mathrm{cont}\left(\Gamma_0,(t^jW)^{G_{K_\infty}}\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_0,\mathbb{Q}_p)\right)\to H^1_\mathrm{cont}\left(\Gamma_n,(t^jW)^{G_{K_\infty}}\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_n,\mathbb{Q}_p)\right)\] is zero. \end{lem} This lemma implies that the image of $W^{K_n}\to (W/t^jW)^{K_n}$ contains $(W/t^jW)^{K_0}=(W/t^jW)^{K}$, which is exactly what we want.
\end{proof} \begin{proof}[Proof of Lemma \ref{Gamacyc}] Since $(t^jW)^{G_{K_\infty}}$ is filtered by $W_i^{G_{K_\infty}}$, it suffices to show that for any $i,m\geq 0$, we can find $n\geq m$ such that the image of \[H^1_\mathrm{cont}\left(\Gamma_m,W_i^{G_{K_\infty}}\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_m,\mathbb{Q}_p)\right)\to H^1_\mathrm{cont}\left(\Gamma_n,W_i^{G_{K_\infty}}\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_n,\mathbb{Q}_p)\right)\] is zero. By the Ax-Sen-Tate theorem, $C^{G_{K_\infty}}=\widehat{K_\infty}$, the $p$-adic completion of $K_\infty$. Since $\varphi_i^K$ is an isomorphism, we have \[W_i^{G_{K_\infty}}=W^K_i\widehat{\otimes}_{K} \widehat{K_\infty}.\] By our construction, the action of $\Gamma_m$ on $W_i^K$ is analytic, hence there is a natural $\Gamma_m$-equivariant isomorphism \[W_i^K\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_m,\mathbb{Q}_p) \cong W_i^K\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_m,\mathbb{Q}_p)\] functorial in $m$, where $\Gamma_m$ acts on everything except $W_i^K$ on the right hand side, cf. \ref{GanBan}. We use $W'$ to denote $W_i^K$ equipped with the trivial action of $\Gamma_m$. Thus it suffices to find $n\geq m$ such that the image of \[H^1_\mathrm{cont}\left(\Gamma_m,\widehat{K_\infty}\widehat\otimes_K W'\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_m,\mathbb{Q}_p)\right)\to H^1_\mathrm{cont}\left(\Gamma_n,\widehat{K_\infty}\widehat\otimes_K W'\widehat{\otimes}_{\mathbb{Q}_p}\mathscr{C}^{\mathrm{an}}(\Gamma_n,\mathbb{Q}_p)\right)\] is zero, i.e. $\widehat{K_\infty}\widehat\otimes_K W'$ is strongly $\mathfrak{LA}$-acyclic with respect to the action of $\Gamma_0$ in the sense of \cite[\S 2.2]{Pan20}. Denote by $\bar{\mathrm{tr}}_n:\widehat{K_\infty}\to K_n$ Tate's normalized trace, which exists for sufficiently large $n$ and gives a continuous left inverse of the inclusion $K_n\subseteq \widehat{K_\infty}$.
See for example \cite[4.1]{BC08}. Fix a generator $\gamma$ of $\Gamma_m$. Then $\gamma-1$ is invertible on $\ker(\bar\mathrm{tr}_n)$ and the norm of its inverse $\|(\gamma-1)^{-1}\|$ converges to $1$ as $n\to+\infty$. Since $(\gamma-1)^p$ has norm $\leq p^{-1}$ on $\mathscr{C}^{\mathrm{an}}(\Gamma_m,\mathbb{Q}_p)$, by the same argument as in the proof of \cite[Lemma 3.6.6]{Pan20}, we have \[H^1_\mathrm{cont}(\Gamma_m,\ker(\bar\mathrm{tr}_n)\widehat\otimes_K W'\widehat{\otimes}_{\mathbb{Q}_p} \mathscr{C}^{\mathrm{an}}(\Gamma_m,\mathbb{Q}_p))=0\] if $\|(\gamma-1)^{-1}\|< p^{1/p}$ on $\ker(\bar\mathrm{tr}_n)$. Fix such an $n$. Since $\widehat{K_\infty}=\ker(\bar\mathrm{tr}_n)\oplus K_n$, it is enough to find $n'\geq n$ such that the image of \[H^1_\mathrm{cont}(\Gamma_m, K_n\otimes_K W'\widehat\otimes_{\mathbb{Q}_p}\mathscr{C}^\mathrm{an}(\Gamma_m,\mathbb{Q}_p))\to H^1_\mathrm{cont}(\Gamma_{n'}, K_n\otimes_K W'\widehat\otimes_{\mathbb{Q}_p}\mathscr{C}^\mathrm{an}(\Gamma_{n'},\mathbb{Q}_p)) \] is zero. This is clear if we take $n'=n+1$ because the map $H^1_{\mathrm{cont}}(\Gamma_n,\mathscr{C}^\mathrm{an}(\Gamma_n,\mathbb{Q}_p))\to H^1_{\mathrm{cont}}(\Gamma_{n+1},\mathscr{C}^\mathrm{an}(\Gamma_{n+1},\mathbb{Q}_p))$ is zero. \end{proof} Back to the proof of Proposition \ref{FonSen}. We argue by induction on $k$. When $k=1$, we have $W=W_0$ and all the claims are clear. Suppose we have already proved Proposition \ref{FonSen} for $k\leq l$. We will deduce the analogous statements for $k=l+1$. Apply our induction hypothesis to the $B_{\mathrm{dR}}^+/(t^{k-1})$-module $tW$. We can find a finite extension $K_m$ of $K$ in $K_\infty$ such that $\gr^i\left( (tW)^{K_m} \right)\cong W_{i+1}^{K_m}$ for $i\geq 0$, and for any finite extension $L$ of $K_m$, we have $(tW)^L=(tW)^{K_m}\otimes_{K_m} L$. By Lemma \ref{surj^K}, the image of $W^{K_n}\to (W/tW)^{K_n}=W_0^{K_n}$ contains $W_0^{K_m}$ for some $n\geq m$. Note that $W_0^{K_n}=W_0^{K_m}\otimes_{K_m} K_n$.
Hence \[W^{K_n}\to W_0^{K_n}\] is surjective as this map is $K_n$-linear. It follows that \[g_0^{K_n}:\gr^0 W^{K_n}\to W_0^{K_n}\] is an isomorphism. Thus $g_i^{K_n}$ is an isomorphism for $i\geq 0$ because $g_i^{K_n}$ is already an isomorphism for $i\geq 1$ by the induction hypothesis. Now we observe that we can repeat the same argument with $K_n$ replaced by a finite extension $L$ of $K_n$ and prove that $g_i^{L}$ is an isomorphism for $i\geq 0$ and any finite extension $L$ of $K_n$. This implies that the natural map $L\otimes_{K_n}W^{K_n}\to W^L$ is an isomorphism by looking at the graded pieces, which are nothing but $L\otimes_{K_n}W_i^{K_n}=W^L_i$. \end{proof} Our next result concerns the kernel and cokernel of a morphism between such $W$'s. \begin{prop} \label{Bmor} Let $f:X\to Y$ be a continuous $B_{\mathrm{dR}}^+/(t^k)$-linear, $G_K$-equivariant map between flat Banach $B^{+}_\mathrm{dR}/(t^k)$-modules equipped with continuous semilinear actions of $G_K$ such that $\varphi^K_{X,0}, \varphi^K_{Y,0}$ are isomorphisms. Suppose that \begin{itemize} \item $f$ is strict with respect to the topology on $X,Y$; hence $\ker f, \coker f$ are natural Banach $B^{+}_\mathrm{dR}/(t^k)$-modules equipped with continuous semilinear actions of $G_K$. \item $f$ is strict with respect to the $t$-adic filtrations on $X,Y$, that is, $f(t^i X)=f(X)\cap t^i Y$ for any $i\geq 0$. \end{itemize} Then $\ker f$ and $\coker f$ are also flat over $B^{+}_\mathrm{dR}/(t^k)$, and $\varphi^K_{\ker f,0}, \varphi^K_{\coker f,0}$ are isomorphisms. Moreover, there exists a finite extension $L$ of $K$ in $K_\infty$ such that the natural sequence \begin{eqnarray} \label{fkc} 0\to (\ker f)^L\to X^L\to Y^L\to (\coker f)^L\to 0 \end{eqnarray} is exact. \end{prop} \begin{proof} The second assumption implies that $0\to t^i\ker f\to t^i X\to t^i Y\to t^i \coker f\to 0$ is exact. Hence the flatness of $\ker f$ and $\coker f$ follows from the flatness of $X,Y$ over $B^{+}_\mathrm{dR}/(t^k)$.
Moreover this implies that $0\to (\ker f)_0\to X_0\xrightarrow{f_0} Y_0\to (\coker f)_0\to 0$ is exact with strict homomorphisms. Since $X_0= C\widehat\otimes_K X_0^K$ and $Y_0= C\widehat\otimes_K Y_0^K$, it follows from Corollary \ref{strHT} that $\varphi^K_{\ker f,0}, \varphi^K_{\coker f,0}$ are isomorphisms and $0\to (\ker f)_0^K\to X_0^K\xrightarrow{f_0} Y_0^K\to (\coker f)_0^K\to 0$ is exact. For the last part, by Proposition \ref{FonSen}, we can find a finite extension $L$ of $K$ in $K_\infty$ such that $\gr^iW^L=W_i^L$ for $W=\ker f,X,Y,\coker f$. Then it's clear that the sequence \eqref{fkc} is exact for such $L$. \end{proof} For the purpose of this paper, we will only consider Hodge-Tate $B_{\mathrm{dR}}^+/(t^k)$-representations of weights $0,l$ in the following sense. \begin{defn} \label{HTBdR} Let $W,K$ be as in \ref{setup} and $l\geq 1$ a positive integer. We say $W$ is Hodge-Tate of weights $0,l$ if $W_0$ is Hodge-Tate of weights $0,l$ in the sense of Definition \ref{HTg}, equivalently, there exists a finite extension $L$ of $K$ in $C$ such that $W_0=C\widehat\otimes_L W_0^L$ and the action of the Sen operator (i.e. $1\in\mathbb{Q}_p=\Lie(\mathbb{Z}_p^\times)\cong \Lie(\Gal(L_\infty/L))$) on $W_0^L$ is semi-simple with eigenvalues $0$ and $-l$. \end{defn} \begin{para} \label{HT0l} Suppose $W$ is Hodge-Tate of weights $0,l$. Following the notation in Definition \ref{HTg}, we decompose $W^L_0$ into the direct sum of eigenspaces \[W^L_0=W^L_{0,0}\oplus W^L_{0,-l},\] where $W^L_{0,j}$ denotes the eigenspace for $j\in\{0,-l\}$. Let $W_{0,j}=C\widehat\otimes_L W^L_{0,j}$. Then \[W_0=W_{0,0}\oplus W_{0,-l}\] and this decomposition does not depend on the choice of $L$. Similarly we can define \[W^L_i=W^L_{i,0}\oplus W^L_{i,-l},\] \[W_i=W_{i,0}\oplus W_{i,-l}\cong W_{0,0}(i)\oplus W_{0,-l}(i),~~~~~~i\geq 0\] via the isomorphism $W_i\cong W_0(i)$. We remark that the action of the Sen operator on $W^L_{i,j}$ is multiplication by $i+j$. 
Hence $ \Lie(\Gal({L_\infty/L}))$ acts trivially on $W^L_{i,j}$ if and only if $W^L_{i,j}=W^L_{0,0}$ or $W^L_{i,j}=W^L_{l,-l}$ (if the latter exists). \end{para} By Remark \ref{clHT}, we can take $L=K$ in Definition \ref{HTBdR}. In fact, a slightly stronger result holds. \begin{lem} \label{HTK} Let $W$ be a flat Banach $B_{\mathrm{dR}}^+/(t^k)$-module equipped with a continuous semilinear action of $G_{K}$. Suppose $W$ is Hodge-Tate of weights $0,l$. Then $W_0=C\widehat\otimes_K W_0^K$ and $g_i^K:\gr^i W^K\to W_i^K$ are isomorphisms for $i\geq 0$. \end{lem} \begin{proof} It follows from Remark \ref{clHT} that $W_0=C\widehat\otimes_K W_0^K$. It remains to show that $g_i^K:\gr^i W^K\to W_i^K$ are isomorphisms for $i\geq 0$. By Proposition \ref{FonSen}, $g_i^{K_n},i\geq 0$ are isomorphisms for some $n\geq 0$. An induction argument on $i$ shows that it is enough to prove that \begin{eqnarray} \label{twKn} 0\to (tW)^{K_n}\to W^{K_n} \to W_0^{K_n}\to 0 \end{eqnarray} remains exact when passing to the $\Gal(K_\infty/K)$-analytic vectors. Fix a topological generator $\sigma\in\Gal(K_\infty/K)$. Then $\sigma^{p^n}$ topologically generates $\Gal(K_\infty/K_n)$. Since the action of $\Gal(K_\infty/K_n)$ on $W_i^{K_n}$ is analytic, the eigenvalues of $\sigma^{p^n}$ on $W^{K_n}$ are of the form $\varepsilon_p(\sigma^{p^n})^{i+j}$, $i=0,\cdots, k-1$, $j=0,-l$ and there is a natural decomposition \[W^{K_n}=\bigoplus_m E_m,\] where $E_m$ denotes the generalized eigenspace associated to $\varepsilon_p(\sigma^{p^n})^{m}$. Note that each $E_m$ has a natural generalized eigenspace decomposition with respect to the action of $\sigma$ and $E_m^{\Gal(K_\infty/K)-\mathrm{an}}$ is nothing but the generalized eigenspace associated to the eigenvalue $\varepsilon_p(\sigma)^{m}$. Using this interpretation of the $\Gal(K_\infty/K)$-analytic vectors, we see easily that \eqref{twKn} remains exact after taking $\Gal(K_\infty/K)$-analytic vectors.
\end{proof} \begin{para} \label{FonOpsetup} Finally we can define the Fontaine operator. Our setup is as follows. Let $k$ be a positive integer. Let $W$ be a flat Banach $B_{\mathrm{dR}}^+/(t^{k+1})$-module equipped with a continuous semilinear action of $G_{K}$. Moreover, we assume that \begin{itemize} \item $W$ is Hodge-Tate of weights $0,k$. \end{itemize} By Lemma \ref{HTK}, $W_0=C\widehat\otimes_K W^K_0$ and $\gr^iW^K= W_i^K$, $i\geq 0$. Hence $W^K$ is filtered by $W^K_{i,0}$ and $W^K_{i,-k}$. The Lie algebra $\Lie(\Gal(K_\infty/K))$ acts naturally on $W^K$. Denote by $\nabla\in \End_K(W^K)$ the bounded operator defined by the action of $1\in\mathbb{Q}_p\cong\Lie(\Gal(K_\infty/K))$. (See for example \cite[Lemma 2.1.8]{Pan20} for the boundedness.) Consider the subspace $E_0(W^K)$ of $W^K$ consisting of the elements annihilated by some power of $\nabla$. Equivalently, this is the generalized eigenspace $E_0(W^K)$ associated to the eigenvalue $0$. It follows from our discussion in \ref{HT0l} that there is a natural exact sequence \[0\to W^K_{k,-k}\to E_0(W^K)\to W^K_{0,0}\to 0\] and the induced actions of $\nabla$ on $W^K_{k,-k}$ and $W^K_{0,0}$ are trivial. In particular, if we apply $\nabla$ to $E_0(W^K)$, it induces a $K$-linear map \[N_W^K:W^K_{0,0}\to W^K_{k,-k}\cong W^K_{0,-k}(k).\] Since $W_{0,0}=W^K_{0,0}\widehat{\otimes}_{K} C$ and $W_{k,-k}=W^K_{k,-k}\widehat{\otimes}_{K} C$, we can $C$-linearly extend $N_W^K$ to a map \[N_W:W_{0,0}\to W_{k,-k}\cong W_{0,-k}(k).\] It follows from Proposition \ref{FonSen} that $N_W$ does not depend on the choice of $K$. Since $\nabla$ commutes with the action of $G_{K}$ on $W^K$, we conclude that $N_W$ also commutes with $G_{K}$. \end{para} \begin{defn} \label{defnNW} The $G_{K}$-equivariant, $C$-linear map \[N_W:W_{0,0}\to W_{0,-k}(k)\] is called the Fontaine operator associated to $W$. \end{defn} \begin{rem} It is clear from the definition that $N_W$ is functorial in $W$.
More precisely, suppose $Z$ is another flat Banach $B_{\mathrm{dR}}^+/(t^{k+1})$-module equipped with a continuous semilinear action of $G_{K}$ and $Z$ is Hodge-Tate of weights $0,k$. Let $f:W\to Z$ be a $C$-linear $G_{K}$-equivariant continuous map. Then $f$ intertwines $N_W$ and $N_Z$. \end{rem} \begin{para}[The connection with Fontaine's work] \label{relFon} We explain the meaning of $N_W$ when $W$ comes from a $2$-dimensional $p$-adic Galois representation of $G_{\mathbb{Q}_p}$. Let $E$ be a finite extension of $\mathbb{Q}_p$ and $V$ a $2$-dimensional $E$-vector space equipped with a continuous action of $G_{\mathbb{Q}_p}$. We assume that \begin{itemize} \item $V$ is Hodge-Tate of weights $0,k$ for some positive integer $k$. \end{itemize} In this case, it simply means that \[(V\otimes_{\mathbb{Q}_p}C)^{G_{\mathbb{Q}_p}}\neq 0~~~~~~~~~~~\mbox{ and } ~~~~~~~~~\left(V\otimes_{\mathbb{Q}_p}C(k)\right)^{G_{\mathbb{Q}_p}}\neq 0.\] Let \[W:=V\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}^+/(t^{k+1}).\] This is a finite free $B_{\mathrm{dR}}^+/(t^{k+1})$-module. Choose a lattice $V^o$ of $V$. We get a $\mathbb{Q}_p$-Banach space structure on $W$ by requiring the image of $V^o\otimes_{\mathbb{Z}_p} A_{\mathrm{inf}}\to V\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}^+/(t^{k+1})$ to be the unit ball. The $p$-adic topology on $W$ defined in this way does not depend on the choice of $V^o$. Note that \[W_0=W/tW=V\otimes_{\mathbb{Q}_p}C.\] It follows from our assumption that there is a natural decomposition \[W_0=C\otimes_{\mathbb{Q}_p}W^{\mathbb{Q}_p}_{0,0}\oplus C\otimes_{\mathbb{Q}_p}W^{\mathbb{Q}_p}_{0,-k}\] where $W^{\mathbb{Q}_p}_{0,j}\subseteq W_0=V\otimes_{\mathbb{Q}_p} C$ denotes the (one-dimensional) $E$-subspace on which $G_{\mathbb{Q}_p}$ acts via $\tilde\varepsilon_p^{j}$ for $j=0,-k$, where $\tilde\varepsilon_p:G_{\mathbb{Q}_p}\to\mathbb{Z}_p^\times$ is a twist of $\varepsilon_p$ introduced in \ref{HTsetup}. Hence $\dim_E E_0(W^{\mathbb{Q}_p})=2$.
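In this rank-two situation, the Fontaine operator admits a simple matrix description: with respect to any $E$-linear splitting $E_0(W^{\mathbb{Q}_p})\cong W^{\mathbb{Q}_p}_{0,0}\oplus W^{\mathbb{Q}_p}_{k,-k}$ of the exact sequence in \ref{FonOpsetup} (both summands are one-dimensional over $E$ here), the operator $\nabla$ acts by
\[
\nabla|_{E_0(W^{\mathbb{Q}_p})}=\begin{pmatrix}0 & 0\\ N_W^{\mathbb{Q}_p} & 0\end{pmatrix},
\]
since $\nabla$ acts trivially on the submodule $W^{\mathbb{Q}_p}_{k,-k}$ and on the quotient $W^{\mathbb{Q}_p}_{0,0}$. In particular $\nabla^2=0$ on $E_0(W^{\mathbb{Q}_p})$, and $N_W=0$ if and only if $\nabla$ acts trivially on $E_0(W^{\mathbb{Q}_p})$.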
Comparing with Fontaine's work, we remark that our $E_0(W^{\mathbb{Q}_p})$ is $D_{\mathrm{pdR}}(V)$ in \cite[\S 4]{Fo04} and our $\nabla|_{E_0(W^{\mathbb{Q}_p})}$ agrees with his $-\nu$. Fontaine also proved the following result. \end{para} \begin{thm} \label{FondR} Let $V$ be a two-dimensional continuous representation of $G_{\mathbb{Q}_p}$ over a finite extension of $\mathbb{Q}_p$ with Hodge-Tate weights $0,k>0$. Let $W=V\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}^+/(t^{k+1})$. Then $N_W=0$ if and only if $V$ is a de Rham representation. \end{thm} \begin{proof} Since this result will play an important role later, we sketch a proof here. Recall that in this case, $V$ is de Rham if and only if $\dim_E D_{\mathrm{dR}}(V)=\dim_E V=2$, where $D_\mathrm{dR}(V)=(V\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}^+)^{G_{\mathbb{Q}_p}}$. By our assumption, $\left(V\otimes_{\mathbb{Q}_p}C(l)\right)^{G_{\mathbb{Q}_p}}=0$ when $l\geq k+1$. Hence \[(V\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}^+)^{G_{\mathbb{Q}_p}}\cap V\otimes_{\mathbb{Q}_p}t^{k+1}B_{\mathrm{dR}}^+=\{0\}.\] In particular, the natural map \[(V\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}^+)^{G_{\mathbb{Q}_p}}\to V\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}}^+/(t^{k+1})=W\] is injective. Hence it induces an inclusion \[ D_{\mathrm{dR}}(V)\subseteq E_0(W^{\mathbb{Q}_p}).\] If $\dim_E D_{\mathrm{dR}}(V)=2$, a consideration of the dimensions shows that this is in fact an equality $D_{\mathrm{dR}}(V)= E_0(W^{\mathbb{Q}_p})$. Since $G_{\mathbb{Q}_p}$ acts trivially on $D_{\mathrm{dR}}(V)$, it follows from the definition that $N_W=0$. Now suppose $N_W=0$. Then $E_0(W^{\mathbb{Q}_p})$ is fixed by $G_{\mathbb{Q}_p}$, i.e. $W^{G_{\mathbb{Q}_p}}=E_0(W^{\mathbb{Q}_p})$. The kernel of $B_{\mathrm{dR}}^+/(t^l)\otimes_{\mathbb{Q}_p}V\to W$ for $l\geq {k+1}$ is filtered by $C(i)$, $i>0$, and $H^0_{\mathrm{cont}}(G_{\mathbb{Q}_p},C(i))=H^1_{\mathrm{cont}}(G_{\mathbb{Q}_p},C(i))=0$ for such $i$ by Tate.
Hence the natural map $(B_{\mathrm{dR}}^+/(t^l)\otimes_{\mathbb{Q}_p}V)^{G_{\mathbb{Q}_p}}\to W^{G_{\mathbb{Q}_p}}$ is an isomorphism when $l\geq k+1$. Passing to the limit over $l$, we conclude that $\dim_E D_\mathrm{dR}(V)=2$. \end{proof} \begin{para}[Generalizations to LB-spaces] \label{LBsetup} We briefly discuss how to generalize the construction of the Fontaine operator to Hausdorff LB-$B_{\mathrm{dR}}^+/(t^{k+1})$-modules. Let \[W^0\subseteq W^1\subseteq W^2\subseteq\cdots\] be an increasing sequence of Banach $B_{\mathrm{dR}}^+/(t^{k+1})$-modules $W^j$ equipped with continuous semilinear actions of $G_{K}$ whose transition maps are continuous, $G_{K}$-equivariant and $B_{\mathrm{dR}}^+/(t^{k+1})$-linear. Denote the direct limit $\displaystyle \varinjlim_j W^j$ by $W$ and assume that this is a Hausdorff LB-space. We also denote the quotient $W/tW$ by $W_0$. Assume that \begin{itemize} \item $W$ is flat over $B_{\mathrm{dR}}^+/(t^{k+1})$. This implies that $W_0$ is also a Hausdorff LB-space by the same argument as in \ref{BBdrtkmod}. \end{itemize} We remark that we don't require each $W^j$ to be flat over $B_{\mathrm{dR}}^+/(t^{k+1})$. As before, we denote by $W^K$ (resp. $W_0^K$) the subspace of $G_{K_\infty}$-invariant, $G_K$-analytic vectors in $W$ (resp. $W_0$). This definition makes sense in view of the discussion in \ref{roHLB}. The $t$-adic filtration on $W$ induces a decreasing filtration $\Fil^\bullet$ on $W^K$. We say $W$ is Hodge-Tate of weights $0,k$ if \begin{itemize} \item $W_0$ is Hodge-Tate of weights $0,k$ in the sense that the natural map \[W_0^K\widehat\otimes_K C\to W_0\] is an isomorphism and the action of the Sen operator $1\in\mathbb{Q}_p=\Lie(\Gal(K_\infty/K))$ on $W_0^K$ is semi-simple with eigenvalues $0,-k$. Here $W_0^K\widehat\otimes_K C$ is understood as $\displaystyle \varinjlim_{j} Z^j \widehat\otimes_K C$ by writing the LB-space $W_0^K=\bigcup_{j=0}^{+\infty}Z^j$ as an increasing sequence of $K$-Banach subspaces of $W_0^K$.
\end{itemize} Following our notation in \ref{HT0l}, we have an eigenspace decomposition \[W_0^K=W_{0,0}^{K}\oplus W_{0,-k}^K\] and an induced decomposition of $W_0$: \[W_0=W_{0,0}\oplus W_{0,-k}.\] Similarly, let $W_i=t^iW/t^{i+1}W$. Then $W_i\cong W_0(i)$ and there is a natural decomposition \[W_i=W_{i,0}\oplus W_{i,-k}.\] \end{para} \begin{prop} \label{Kcmgr} Let $W$ be as in \ref{LBsetup} and assume that $W$ is Hodge-Tate of weights $0,k$. Then the natural maps \[\gr^i W^K\to W^K_i\] are isomorphisms for $i\geq 0$. Moreover for any finite extension $L$ of $K$ in $C$, we have \[W^L=W^K\otimes_K L.\] \end{prop} \begin{proof} The key observation is as follows. By our assumption, we can write \[W_0=\bigcup_{j=0}^{+\infty}Y^j\] as an increasing sequence of $C$-Banach Hodge-Tate representations of $G_K$ of weights $0,k$. Hence \[W_i=\bigcup_{j=0}^{+\infty}Y^j(i)=\bigcup_{j=0}^{+\infty}\gr^i W^j.\] By Proposition \ref{1.1.10}, this implies that for any $j\geq 0$, there exist $j',j''\geq j$ such that \[\gr^i W^j\subseteq Y^{j'}(i)\subseteq \gr^i W^{j''}.\] The rest of the argument is almost the same as the proof of Proposition \ref{FonSen} and Lemma \ref{HTK}. For example, using the vanishing of $H^1_\mathrm{cont}(G_{K_\infty}, Y^{j'}(i))$ in the proof of the first part of Lemma \ref{surj^K}, we see that $H^1_\mathrm{cont}(G_{K_\infty}, W_i)=0$ and $W^{G_{K_\infty}}\to W_0^{G_{K_\infty}}$ is surjective. We omit the details here. \end{proof} \begin{para} \label{NWLB} Now we can carry out the same construction in \ref{FonOpsetup}. Let $W$ be as in \ref{LBsetup} and assume that $W$ is Hodge-Tate of weights $0,k$. Denote by $\nabla$ the action of $1\in\mathbb{Q}_p=\Lie(\Gal(K_\infty/K))$ on $W^K$ and denote by $E_0(W^K)$ the generalized eigenspace associated to the eigenvalue $0$. There is a natural exact sequence \[0\to W^K_{k,-k}\to E_0(W^K)\to W^K_{0,0}\to 0.\] Then $\nabla$ induces a map $W^K_{0,0}\to W^K_{k,-k}$. 
We can $C$-linearly extend it to a map \[N_W:W_{0,0}\to W_{k,-k}=W_{0,-k}(k)\] which is called the Fontaine operator. Clearly $N_W$ is functorial in $W$. Our last result extends Proposition \ref{Bmor} to this setup. \end{para} \begin{prop} \label{LBfXY} Let $f:X\to Y$ be a continuous $B_{\mathrm{dR}}^+/(t^k)$-linear, $G_K$-equivariant map between Hausdorff flat LB-$B^{+}_\mathrm{dR}/(t^k)$-modules equipped with continuous semilinear actions of $G_K$ such that $\varphi^K_{X,0}, \varphi^K_{Y,0}$ are isomorphisms. Suppose that \begin{itemize} \item $X=\bigcup_n X_n$ and $Y=\bigcup_n Y_n$ are unions of increasing sequences of $G_K$-stable $B^{+}_\mathrm{dR}/(t^k)$-Banach modules; \item $X_n$ and $Y_n$ are Hodge-Tate of weights $0,k$ for $n\geq 0$; \item $f(X_n)\subseteq Y_n$ and $f|_{X_n}:X_n\to Y_n$ is strict; hence $\ker f=\bigcup_n \ker f|_{X_n}$ and $\coker f=\bigcup_n Y_n/f(X_n)$ are natural $B^{+}_\mathrm{dR}/(t^k)$-LB modules; \item $\coker f$ is Hausdorff (while $\ker f$ is automatically Hausdorff); \item $f$ is strict with respect to the $t$-adic filtrations on $X,Y$. \end{itemize} Then $\ker f$ and $\coker f$ are also Hodge-Tate of weights $0,k$ and flat over $B^{+}_\mathrm{dR}/(t^k)$. Moreover, $\varphi^K_{\ker f,0}, \varphi^K_{\coker f,0}$ are isomorphisms and the natural sequence \[ 0\to (\ker f)^K\to X^K\to Y^K\to (\coker f)^K\to 0 \] is exact. \end{prop} \begin{proof} Same proof as Proposition \ref{Bmor}. \end{proof} \begin{cor} \label{Ncokerf} Same setup as in Proposition \ref{LBfXY}. Let $N_W:W_{0,0}\to W_{k,-k}$ denote the Fontaine operator of $W$ for $W=\ker f, X,Y, \coker f$. Then the following diagram commutes. \[ \begin{tikzcd} 0 \arrow[r] & (\ker f)_{0,0} \arrow[d,"N_{\ker f}"]\arrow[r] & X_{0,0} \arrow[r] \arrow[d, "N_X"] & Y_{0,0} \arrow[r] \arrow[d,"N_Y"] & (\coker f)_{0,0} \arrow[d,"N_{\coker f}"]\arrow[r]& 0 \\ 0 \arrow[r] & (\ker f)_{k,-k} \arrow[r] & X_{k,-k} \arrow[r] & Y_{k,-k} \arrow[r] & (\coker f)_{k,-k} \arrow[r]& 0 \end{tikzcd}.
\] \end{cor} \begin{proof} This is a direct consequence of Proposition \ref{LBfXY} and the construction of the Fontaine operator. \end{proof} \subsection{de Rham period sheaves} \label{dRps} \begin{para} In this subsection, we apply our discussion in the previous subsection to the infinite level modular curves. To do this, we need the construction of de Rham period sheaves, i.e. a relative construction of $B_{\mathrm{dR}}^+$, cf. \ref{CBdR+}. Our reference here is \cite[\S 6]{Sch13}. First we have the sheaf $\mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}$ on $\mathcal{X}_{K^p}$ which assigns to an affinoid perfectoid open subset $V_\infty=\Spa(B,B^+)\subseteq\mathcal{X}_{K^p}$ the ring $W(B^{\flat+})$ of Witt vectors of $\displaystyle B^{\flat+}:=\varprojlim_{x\mapsto x^p} B^+/p$, the tilt of $B^+$. There is the usual surjective ring homomorphism \[\theta: \mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}\to\mathcal{O}^+_{\mathcal{X}_{K^p}}.\] By abuse of notation, we also denote by $\theta: \mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}[\frac{1}{p}]\to \mathcal{O}_{\mathcal{X}_{K^p}}$ the rational version. The (positive) de Rham ring $\mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+$ is defined as the $\ker(\theta)$-adic completion of $\mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}[\frac{1}{p}]$. The natural inclusion $C\subseteq \mathcal{O}_{\mathcal{X}_{K^p}}$ defines a map $B_{\mathrm{dR}}^+\to \mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+$. As explained in \cite[Lemma 6.3]{Sch13}, $t\in B_{\mathrm{dR}}^+$ is also a generator of the kernel of $\mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+\to \mathcal{O}_{\mathcal{X}_{K^p}}$. We define a decreasing filtration on $\mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+$ by $\Fil^i\mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+=t^i\mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+,i\geq 0$. Then $\gr^i\mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+\cong \mathcal{O}_{\mathcal{X}_{K^p}}(i),i\geq 0$.
Set \[\mathbb{B}_{\mathrm{dR}}^+:={\pi_{\mathrm{HT}}}_*\mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+.\] Clearly $\gr^i \mathbb{B}_{\mathrm{dR}}^+\cong\mathcal{O}_{K^p}(i)$. We will only consider truncated sheaves. More precisely, let $k\geq 0$ be an integer. Set \[\mathbb{B}_{\mathrm{dR},k}^+:=\mathbb{B}_{\mathrm{dR}}^+/(t^k).\] Let $U\in\mathfrak{B}$ be an affinoid open subset of ${\mathscr{F}\!\ell}$ with $V_\infty:=\pi_{\mathrm{HT}}^{-1}(U)=\Spa(B,B^+)$. Then $\mathbb{B}_{\mathrm{dR}}^+/(t^k)(U)$ is naturally a flat $B_{\mathrm{dR}}^+/(t^k)$-Banach module with the unit open ball given by the image of $\mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}(V_\infty)\to \mathbb{B}_{\mathrm{dR},\mathcal{X}_{K^p}}^+(V_\infty)/(t^k)$. It is clear from the construction that $\mathrm{GL}_2(\mathbb{Q}_p)$ acts on $\mathbb{B}_{\mathrm{dR},k}^+$. We denote by \[\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}\subset \mathbb{B}_{\mathrm{dR},k}^+\] the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic sections. There is a natural decreasing filtration on $\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}$ induced from the $t$-adic filtration on $\mathbb{B}_{\mathrm{dR},k}^+$. By the same argument as in \ref{stpHT}, we see that $G_K$ acts naturally on $\mathbb{B}_{\mathrm{dR},k}^+(U)$ for some finite extension $K$ of $\mathbb{Q}_p$ in $C$. The Lie algebra $\Lie(\mathrm{GL}_2(\mathbb{Q}_p))=\mathfrak{gl}_2(\mathbb{Q}_p)$ acts naturally on $\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}$. Let $Z:=Z(U(\mathfrak{gl}_2(\mathbb{Q}_p)))$ be the centre of the universal enveloping algebra of $\mathfrak{gl}_2(\mathbb{Q}_p)$ over $\mathbb{Q}_p$. Given an infinitesimal character $\tilde\chi:Z\to\mathbb{Q}_p\subseteq B_{\mathrm{dR}}^+$, \footnote{We can allow infinitesimal characters valued in a finite extension of $\mathbb{Q}_p$.
But since only integral weights are considered in this paper, we restrict ourselves to characters valued in $\mathbb{Q}_p$ here.} we denote by \[\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}\subset \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}\] the $\tilde\chi$-isotypic part. Then there is also an induced decreasing filtration on $\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}$. On the other hand, we denote by $\mathcal{O}^{\mathrm{la},\tilde\chi}_{K^p}$ the $\tilde\chi$-isotypic part of $\mathcal{O}^{\mathrm{la}}_{K^p}$. There are natural maps \[\gr^i \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}} \to \mathcal{O}^{\mathrm{la}}_{K^p}(i),\] \[\gr^i\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}\to \mathcal{O}^{\mathrm{la},\tilde\chi}_{K^p}(i)\] for $i=0,\cdots,k-1$, induced by the identification $\gr^{i}\mathbb{B}_{\mathrm{dR},k}^{+}\cong\mathcal{O}_{K^p}(i)$. \end{para} \begin{lem} \label{BdRcmgr} For $i=0,\cdots,k-1$, the natural maps \begin{enumerate} \item $\gr^i \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}} \to \mathcal{O}^{\mathrm{la}}_{K^p}(i)$, \item $\gr^i\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}\to \mathcal{O}^{\mathrm{la},\tilde\chi}_{K^p}(i)$ \end{enumerate} are isomorphisms. In particular, $\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}$ and $\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}$ are flat over $B_{\mathrm{dR}}^+/(t^k)$. \end{lem} \begin{proof} Both are local statements on ${\mathscr{F}\!\ell}$. Fix $U\in\mathfrak{B}$ which is $G_0=1+p^m M_2(\mathbb{Z}_p)$-stable. For the first part, since $\mathbb{B}_{\mathrm{dR},k}^+(U)$ is filtered by $\mathcal{O}_{K^p}(U)(i)$, it suffices to show that $\mathcal{O}_{K^p}(U)$ is $\mathfrak{LA}$-acyclic with respect to the action of $G_0$. This is \cite[Proposition 4.3.15]{Pan20}.
For the second part, by the same argument, it is enough to show that \[\Ext^i_{Z}(\tilde\chi, \mathcal{O}^{\mathrm{la}}_{K^p}(U))=\Ext^i_Z(\mathbb{Q}_p,\mathcal{O}^{\mathrm{la}}_{K^p}(U)(\tilde\chi^{-1}))=0,i\geq 1.\] Recall that there is the horizontal action $\theta_\mathfrak{h}$ of $\mathfrak{h}=\begin{pmatrix} * & 0\\ 0 &*\end{pmatrix}\subseteq\mathfrak{gl}_2(C)$ on $\mathcal{O}^{\mathrm{la}}_{K^p}(U)$, cf. \ref{brr}. Moreover it follows from \cite[Corollary 4.2.8]{Pan20} that the action of $Z$ on $\mathcal{O}^{\mathrm{la}}_{K^p}(U)$ factors through $S(\mathfrak{h})$, the symmetric algebra of $\mathfrak{h}$ (over $C$). In fact, by Harish-Chandra's isomorphism, we can choose an isomorphism $S(\mathfrak{h})\cong C[x_1,x_2]$ and identify $Z$ with the subalgebra $\mathbb{Q}_p[x_1,x_2^2]$. Hence $\Ext^i_{Z}(\mathbb{Q}_p,\cdot)$ (resp. $\Ext^i_{S(\mathfrak{h})}(C,\cdot)$) can be computed by the Koszul complex with respect to $x_1,x_2^2$ (resp. $x_1,x_2$). Choose a character $\chi:S(\mathfrak{h})\to C$ extending $\tilde\chi$. By \cite[Lemma 5.1.2.(1)]{Pan20}, \[\Ext^i_{S(\mathfrak{h})}(\chi, \mathcal{O}^{\mathrm{la}}_{K^p}(U))=\Ext^i_{S(\mathfrak{h})}(C, \mathcal{O}^{\mathrm{la}}_{K^p}(U)(\chi^{-1}))=0,i\geq 1.\] It is easy to see that this implies the vanishing of $\Ext^i_Z(\mathbb{Q}_p,\mathcal{O}^{\mathrm{la}}_{K^p}(U)(\tilde\chi^{-1})),i\geq 1$ using the Koszul complexes. \end{proof} \begin{para} \label{tilchik} By Harish-Chandra's isomorphism, $Z\otimes_{\mathbb{Q}_p}C=Z(U(\mathfrak{gl}_2(C)))\cong S(\mathfrak{h})^W$, where $W$ denotes the Weyl group of $\mathfrak{gl}_2$ and acts on $S(\mathfrak{h})$ via the dot action: $w\cdot \mu=w(\mu+\rho)-\rho,\mu\in\mathfrak{h}^*,w\in W$. Here $\rho$ denotes the half sum of the positive roots as usual. In particular, $\tilde\chi$ is identified with a $W$-orbit of $\mathfrak{h}^*$. Let $k$ be a positive integer.
Consider \[\tilde\chi_k=\{(0,1-k), (-k,1)\}\subseteq \mathfrak{h}^*,\] the infinitesimal character of the $(k-1)$-th symmetric power of the dual of the standard representation. It follows from the relation between $\theta_\mathfrak{h}$ and the infinitesimal character \cite[Corollary 4.2.8]{Pan20} that on $\mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}$, \[\left(\theta_\mathfrak{h}(\begin{pmatrix} a & 0\\ 0 & d\end{pmatrix})-(1-k)a \right)\left(\theta_\mathfrak{h}(\begin{pmatrix} a & 0\\ 0 & d\end{pmatrix})-(a-kd) \right)=0.\] Hence we have a natural decomposition \[\mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}=\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}\oplus \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}.\] By Proposition \ref{GnanHT}, $\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)$ (resp. $ \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)$) is Hodge-Tate of weight $0$ (resp. $k$) for any $U\in\mathfrak{B}$. This shows that $\mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}(U)$ is Hodge-Tate of weights $0,k$. \end{para} \begin{para} \label{noBdRK} Let $U\in\mathfrak{B}$. Then $G_K$ acts naturally on $\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U)$ for some finite extension $K$ of $\mathbb{Q}_p$ in $C$. We just verified that $\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U)$ satisfies all the assumptions in \ref{LBsetup}. Therefore we can carry out the construction in \ref{NWLB} and get the Fontaine operator \[\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U) \to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k).\] By the functorial property of the Fontaine operator, this actually defines a map of sheaves \[\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p} \to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(k).\] \end{para} \begin{defn} Let $k$ be a positive integer. We denote this map by \[N_k: \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p} \to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(k).\] \end{defn} Now we can state the main theorem of this section.
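\begin{rem}
As a sanity check on the identification of $\tilde\chi_k$ with a $W$-orbit in \ref{tilchik}, note that with the usual normalization $\rho=(\tfrac{1}{2},-\tfrac{1}{2})$ for $\mathfrak{gl}_2$, the nontrivial element $s\in W$ satisfies
\[s\cdot(0,1-k)=s\left((0,1-k)+(\tfrac{1}{2},-\tfrac{1}{2})\right)-(\tfrac{1}{2},-\tfrac{1}{2})=(\tfrac{1}{2}-k,\tfrac{1}{2})-(\tfrac{1}{2},-\tfrac{1}{2})=(-k,1),\]
so $\{(0,1-k),(-k,1)\}$ is indeed a single orbit under the dot action.
\end{rem}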
Recall that we have introduced an intertwining operator $I_{k-1}:\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p} \to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(k)$ in Definition \ref{I}. \begin{thm} \label{MT} Let $k$ be a positive integer. Then $N_k=c_kI_{k-1}$ for some $c_k\in\mathbb{Q}_p^\times$. \end{thm} The proof of this theorem will be given in the rest of this section. \begin{rem}It is possible to show that $c_k\in\mathbb{Q}^\times$, but we do not need this here. See Remark \ref{Qstr} below.\end{rem} \begin{para} Theorem \ref{MT} also implies a similar result on the cohomology level. We need some constructions from subsection \ref{LBH}. Let $U_1,U_2,U_{12}$ be the open subsets of ${\mathscr{F}\!\ell}$ introduced in \ref{Cech}. Fix an integer $k$. \end{para} \begin{lem} \label{Bdrce} The natural map $\mathbb{B}_{\mathrm{dR},k}^+(U_1)\to \mathbb{B}_{\mathrm{dR},k}^+(U_{12})$ is a closed embedding and is strict with respect to the $t$-adic filtrations. \end{lem} \begin{proof} Since $t^i\mathbb{B}_{\mathrm{dR},k}^+/t^{i+1}\mathbb{B}_{\mathrm{dR},k}^+\cong\mathcal{O}_{K^p}(i)$, both claims follow directly from Lemma \ref{rescle}. \end{proof} \begin{cor} The \v{C}ech complex $\mathbb{B}_{\mathrm{dR},k}^{+}(U_1)\oplus \mathbb{B}_{\mathrm{dR},k}^{+}(U_2)\to \mathbb{B}_{\mathrm{dR},k}^{+}(U_{12})$ is strict with respect to the topologies on $\mathbb{B}_{\mathrm{dR},k}^{+}(U_1)\oplus \mathbb{B}_{\mathrm{dR},k}^{+}(U_2)$ and $\mathbb{B}_{\mathrm{dR},k}^{+}(U_{12})$, and is strict with respect to the $t$-adic filtrations. \end{cor} \begin{proof} An application of Lemma \ref{keyweyl} and Lemma \ref{Bdrce}. We note that the same argument of Lemma \ref{keyweyl} also proves the strictness for the $t$-adic filtrations. \end{proof} Set $G_0=1+p^2M_2(\mathbb{Z}_p)$ and $G_n=G_0^{p^n}$. Let $\tilde\chi:Z\to\mathbb{Q}_p$ be a character.
\begin{cor} \label{strCeLB} The natural maps \[\mathbb{B}_{\mathrm{dR},k}^{+}(U_1)^{G_n-\mathrm{an},\tilde\chi}\to \mathbb{B}_{\mathrm{dR},k}^{+}(U_{12})^{G_n-\mathrm{an},\tilde\chi},\] \[\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U_1)\to \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U_{12})\] are closed embeddings. The second map is also strict with respect to the $t$-adic filtrations. Hence by Lemma \ref{keyweyl}, the following \v{C}ech complexes are strict \[\mathbb{B}_{\mathrm{dR},k}^{+}(U_1)^{G_n-\mathrm{an},\tilde\chi}\oplus \mathbb{B}_{\mathrm{dR},k}^{+}(U_2)^{G_n-\mathrm{an},\tilde\chi}\to \mathbb{B}_{\mathrm{dR},k}^{+}(U_{12})^{G_n-\mathrm{an},\tilde\chi},\] \[\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U_1)\oplus \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U_2)\to \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U_{12}),\] and the second complex is strict with respect to the $t$-adic filtrations. \end{cor} \begin{proof} All of the claims follow from Lemma \ref{Bdrce} except for the strictness of $\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U_1)\to \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U_{12})$ for the $t$-adic filtrations. To see this, we use Lemma \ref{BdRcmgr} which says that $\gr^i \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}=\mathcal{O}^{\mathrm{la},\tilde\chi}_{K^p}(i)$. \end{proof} \begin{defn} Let $\mathcal{F}$ be one of $\mathbb{B}_{\mathrm{dR},k}^+,\mathbb{B}_{\mathrm{dR},k}^{+,G_n-\mathrm{an},\tilde\chi},\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}$. We define $\check{H}^\bullet(\mathcal{F})$ as the cohomology of the \v{C}ech complex $\mathcal{F}(U_1)\oplus \mathcal{F}(U_2)\to \mathcal{F}(U_{12})$, where $\mathbb{B}_{\mathrm{dR},k}^{+,G_n-\mathrm{an},\tilde\chi}(U_i)$ is understood as $\mathbb{B}_{\mathrm{dR},k}^{+}(U_i)^{G_n-\mathrm{an},\tilde\chi}$.
These are natural $B_{\mathrm{dR}}^+/(t^k)$-Banach modules when $\mathcal{F}=\mathbb{B}_{\mathrm{dR},k}^+,\mathbb{B}_{\mathrm{dR},k}^{+,G_n-\mathrm{an},\tilde\chi}$. It follows that \[\check{H}^\bullet(\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi})=\varinjlim_n \check{H}^\bullet(\mathbb{B}_{\mathrm{dR},k}^{+,G_n-\mathrm{an},\tilde\chi})\] which defines a $B_{\mathrm{dR}}^+/(t^k)$-LB module structure on $\check{H}^\bullet(\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi})$. \end{defn} \begin{prop} For $\mathcal{F}=\mathbb{B}_{\mathrm{dR},k}^+,\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}$, there are natural isomorphisms \[\check{H}^\bullet(\mathcal{F})\cong H^{\bullet}({\mathscr{F}\!\ell},\mathcal{F}).\] These cohomology groups are flat over $B_{\mathrm{dR}}^+/(t^k)$ and we equip them with the $t$-adic filtrations. Then \[\gr^i H^{\bullet}({\mathscr{F}\!\ell},\mathcal{F})\cong H^{\bullet}({\mathscr{F}\!\ell},\gr^i\mathcal{F}).\] Moreover, $H^{\bullet}({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi})$ is Hausdorff. \end{prop} \begin{proof} For the first isomorphism, it suffices to prove that $\check{H}^\bullet(\gr^i\mathcal{F})\cong H^{\bullet}({\mathscr{F}\!\ell},\gr^i\mathcal{F})$, where $\gr^i\mathcal{F}=\mathcal{O}_{K^p}(i)$ or $\mathcal{O}^{\mathrm{la},\tilde\chi}_{K^p}(i)$. When $\gr^i\mathcal{F}=\mathcal{O}_{K^p}(i)$, this was shown in the first paragraph of the proof of \cite[Theorem 4.4.6]{Pan20}. When $\gr^i\mathcal{F}=\mathcal{O}^{\mathrm{la},\tilde\chi}_{K^p}(i)$, this was explained in section \ref{Celachi}. 
Since $\mathcal{F}=\mathbb{B}_{\mathrm{dR},k}^+,\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}$ is flat over $B_{\mathrm{dR}}^+/(t^k)$, the strictness of $\mathcal{F}(U_1)\oplus \mathcal{F}(U_2)\to \mathcal{F}(U_{12})$ for $t$-adic filtrations implies the flatness of $\check{H}^\bullet(\mathcal{F})$ over $B_{\mathrm{dR}}^+/(t^k)$ and $\gr^i H^{\bullet}({\mathscr{F}\!\ell},\mathcal{F})\cong H^{\bullet}({\mathscr{F}\!\ell},\gr^i\mathcal{F})$. It remains to show that $H^{\bullet}({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi})$ is Hausdorff. This is clear as $H^{\bullet}({\mathscr{F}\!\ell},\gr^i \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi})\cong H^\bullet({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\tilde\chi}_{K^p}(i))$ is filtered by $H^\bullet ({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\chi}_{K^p}(i))$ for some characters $\chi:\mathfrak{h}\to\mathbb{Q}_p$, which is Hausdorff by Proposition \ref{Haus}. \end{proof} \begin{para} Now set $\tilde\chi=\tilde\chi_k$ defined in section \ref{tilchik}. Let $K$ be a finite extension of $\mathbb{Q}_p$ such that $G_K$ acts on $\mathbb{B}_{\mathrm{dR},k+1}^+(U_1),\mathbb{B}_{\mathrm{dR},k+1}^+(U_{12})$. As explained in section \ref{noBdRK}, $\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U_{?})$ satisfies all the assumptions in \ref{LBsetup} for $?=1,2,12$. Hence by applying Proposition \ref{LBfXY} to $\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U_1)\oplus \mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U_2)\to \mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U_{12})$, we have the following result. \end{para} \begin{cor} \label{I1Fono} $H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k})$ is Hodge-Tate of weights $0,k$ and satisfies all of the assumptions in \ref{LBsetup}. The Fontaine operator in this case (cf.
section \ref{NWLB}) agrees with \[I^1_{k-1}: H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p})\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(k))\] up to $\mathbb{Q}_p^\times$. Recall that $I^1_{k-1}=H^1(I_{k-1})$, cf. Definition \ref{I^1k}. \end{cor} \begin{proof} A direct consequence of Theorem \ref{MT} and Corollary \ref{Ncokerf}. \end{proof} \begin{para} It will be useful to relate $H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k})$ with $H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la},\tilde\chi_k}$. Note that $\mathbb{B}_{\mathrm{dR},k+1}^{+}$ is filtered by $\mathcal{O}_{K^p}(i)$, which is $\mathfrak{LA}$-acyclic as a representation of $\mathrm{GL}_2(\mathbb{Q}_p)$ by \cite[Proposition 4.3.15]{Pan20}. Hence by the same proof as \cite[Theorem 4.4.6]{Pan20}, there are natural isomorphisms \[H^i({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la}}\cong H^i({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la}}).\] \end{para} \begin{prop} \label{compinfch} Let $k$ be a positive integer. If $k\geq 2$, then \[H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k})\cong H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la},\tilde\chi_k}. \] When $k=1$, there is a natural exact sequence \[0\to \Ext^1_{Z}(\tilde\chi_1,H^0(\mathbb{B}_{\mathrm{dR},2}^{+})^{\mathrm{la}}) \to H^1(\mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde\chi_1})\to H^1(\mathbb{B}_{\mathrm{dR},2}^{+})^{\mathrm{la},\tilde\chi_1}\to \Ext^2_{Z}(\tilde\chi_1,H^0(\mathbb{B}_{\mathrm{dR},2}^{+})^{\mathrm{la}}), \] where all of the sheaf cohomology groups are computed on ${\mathscr{F}\!\ell}$.
\end{prop} \begin{proof} Since $\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}$ is filtered by $\mathcal{O}^{\mathrm{la}}_{K^p}(i)$, it follows from the proof of Lemma \ref{BdRcmgr} that \[\Ext^i_Z(\tilde\chi, \mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}(U))=0\] for $i\geq 1$ and $U\in\mathfrak{B}$. Hence by exactly the same argument as the proof of \cite[Corollary 5.1.3]{Pan20}, there is an exact sequence \[0\to \Ext^1_{Z}(\tilde\chi_k,H^0(\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la}}) \to H^1(\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k})\to H^1(\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la},\tilde\chi_k}\to \Ext^2_{Z}(\tilde\chi_k,H^0(\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la}}). \] Note that $H^0({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la}})$ is filtered by $H^0({\mathscr{F}\!\ell}, \mathcal{O}^{\mathrm{la}}_{K^p})(i)$ on which the action of $\mathfrak{sl}_2(\mathbb{Q}_p)\subseteq \mathfrak{gl}_2(\mathbb{Q}_p)$ is trivial by its relation with the completed cohomology (see Theorem \ref{BdRcomp} below). Hence $\Ext^i_{Z}(\tilde\chi_k,H^0(\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la}})=0$ if $k\geq 2$. \end{proof} \subsection{Proof of Theorem \ref{MT} I} \label{PfMTI} \begin{para} Recall that in Definition \ref{I}, $I_{k-1}$ was defined as the composite $\bar{d}'^k\circ d^k$. We will also factorize $N_k$ as a composite of two maps and identify each map with $d^k$ or $\bar{d}'^k$. Not surprisingly, the basic idea is that instead of working directly with $\mathbb{B}_{\mathrm{dR}}^{+}$, we replace $\mathbb{B}_{\mathrm{dR},k+1}^{+}$ by a resolution using the \textit{Poincar\'e lemma sequence}. We first sketch a proof when $k=1$ in the following discussion. We hope this helps the reader understand the general argument. Consider the Poincar\'e lemma sequence on $\mathcal{X}_{K^p}$ (viewed as a log pro-\'etale covering of $\mathcal{X}=\mathcal{X}_{K^pK_p}$ for some $K_p$).
\[0\to \mathbb{B}_{\mathrm{dR}}^+\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+\otimes\Omega_{\mathcal{X}}^1(\mathcal{C})\to 0.\] See sections \ref{setupOBdR} and \ref{PlsFe} below for more details. This sequence is strict with respect to the natural decreasing filtrations on each term. Consider the quotient by $\Fil^2$ and note that $\gr^0\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+=\mathcal{O}_{\mathcal{X}_{K^p}}$. \[0\to \mathbb{B}_{\mathrm{dR},2}^+\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+/\Fil^2 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+\to \mathcal{O}_{\mathcal{X}_{K^p}}\otimes\Omega_{\mathcal{X}}^1(\mathcal{C})\to 0.\] It is easy to see that this sequence remains exact after restricting to the $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic vectors and taking the $\tilde{\chi}_1$-parts. \[0\to \mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde{\chi}_1}\to (\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+/\Fil^2 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+)^{\mathrm{la},\tilde{\chi}_1}\xrightarrow{\nabla_{\log}} \mathcal{O}_{\mathcal{X}_{K^p}}^{\mathrm{la},\tilde{\chi}_1}\otimes\Omega_{\mathcal{X}}^1(\mathcal{C})\to 0.\] Now given a section $s$ of $\mathcal{O}_{\mathcal{X}_{K^p}}^{\mathrm{la},\tilde{\chi}_1}=\mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde{\chi}_1}/ \Fil^1$ fixed by $G_K$ for some finite extension $K$ of $\mathbb{Q}_p$, we would like to compute $N_1(s)$. By definition, we need to find a lifting of $s$ in $\mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde{\chi}_1,K}$ and compute its image under $\nabla$. It turns out that there exists \begin{itemize} \item $\tilde{s}\in (\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+/\Fil^2 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+)^{\mathrm{la},\tilde{\chi}_1}$ fixed by $G_K$ such that $\tilde{s}\mod\Fil^1=s$. \end{itemize} In general the lifting $\tilde{s}$ does not lie in $\mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde{\chi}_1,K}$, hence we need a modification.
There always exists \begin{itemize} \item $s'\in(\Fil^1\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+/\Fil^2 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+)^{\mathrm{la},\tilde{\chi}_1,K}$ such that $\nabla_{\log}(s')=\nabla_{\log}(\tilde{s})$. \end{itemize} This is essentially because the first graded piece of the Poincar\'e lemma sequence is exact. \[0\to \mathcal{O}_{\mathcal{X}_{K^p}}(1)\to \gr^1\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+\to \mathcal{O}_{\mathcal{X}_{K^p}}\otimes\Omega_{\mathcal{X}}^1(\mathcal{C})\to 0.\] Clearly $\tilde{s}-s'\in \mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde{\chi}_1}$ is a lifting of $s$. Moreover since $\tilde{s}$ is fixed by $G_K$, \[N_1(s)=\nabla(\tilde{s}-s')=-\nabla(s').\] It remains to compare $\nabla(s')$ with $\bar{d}'^1\circ d^1 (s)$. This is a consequence of the following results. \begin{enumerate} \item \label{parta} $\nabla_{\log}(\tilde{s})=d^1(s)$. \item \label{partb} $\nabla(s')=c_1\bar{d}'^1 (\nabla_{\log}(s'))$ for some $c_1\in\mathbb{Q}_p^\times$. \end{enumerate} Indeed, combining with our choice of $s'$, we see that \[N_1(s)=-\nabla(s')=-c_1 \bar{d}'^1\circ d^1(s).\] Part (1) in fact follows from the definition of $\tilde{s}$ which is constructed from a certain $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+$-comparison isomorphism. (This can be viewed as a ``$B_\mathrm{dR}^+$-deformation'' of our construction of elements in $\mathcal{O}_{\mathcal{X}_{K^p}}$ using the Hodge-Tate sequence in section \ref{exs}.) For Part (2), one key observation (due to Faltings) is that the first graded piece of the Poincar\'e lemma sequence, also known as the Faltings's extension, is essentially isomorphic to the pull-back of the non-split exact sequence $0\to \mathcal{O}_{{\mathscr{F}\!\ell}}\to (\omega^1)^{\oplus 2}\to \omega^2_{{\mathscr{F}\!\ell}}\to 0$ on ${\mathscr{F}\!\ell}$ via $\pi_\mathrm{HT}$. Hence all the calculations are reduced to ${\mathscr{F}\!\ell}$ and it is not hard to compare with $\bar{d}$, the derivation on ${\mathscr{F}\!\ell}$.
\end{para} \begin{para} \label{setupM'MM''} Now we treat the general case. The abstract linear algebra setup is as follows. Let $M$ be an abelian group equipped with an endomorphism $N\in \End(M)$. Suppose that $N^2=0$. Let $M'\subseteq \ker(N)$ be a subgroup such that $N(M)\subseteq M'$. Hence $N$ becomes trivial on $M'':=M/M'$ and induces a map \[N_{M'}: M''\to M'.\] This recovers $N^K_W$ in \ref{FonOpsetup} with $M=E_0(W^K),M'=W_{k,-k}^K,M''=W_{0,0}^K$ and $N=\nabla$. Now suppose $(M_1,N_1,\theta:M_1\to M'')$ is a triple extending the triple $(M,N,M\to M'')$. More precisely, $M_1$ is an abelian group containing $M$ as a subgroup, $N_1\in \End(M_1)$ with $N_1^2=0$ and $N_1|_M=N$, and $\theta:M_1\to M''$ is a map such that $\theta|_M$ agrees with the natural quotient map $M\to M''$. Suppose that $M'_1\subseteq \ker \theta$ is an $N_1$-stable subgroup and $M'_1\cap M=M'$. In particular, $M'_1$ contains $M'$. We denote the quotient $M'_1/M'$ by $M'_2\subseteq M_2:=M_1/M$. \begin{eqnarray} \label{absdiag} \begin{tikzcd} & M''=M/M' &&&\\ 0 \arrow[r] & M \arrow[u]\arrow[r] & M_1 \arrow[r,"\pi"] \arrow[lu, "\theta" ']& M_2 \arrow[r] & 0 \\ 0 \arrow[r] & M' \arrow[r] \arrow[u] & M'_1 \arrow[r] \arrow[u] & M'_2 \arrow[r] \arrow[u] & 0 \end{tikzcd}. \end{eqnarray} Assume that \begin{itemize} \item there is a right inverse $s:M''\to \ker(N_1)$ of $\theta:M_1\to M''$, i.e. $\theta\circ s=\Id_{M''}$ such that $\pi\circ s(M'')\subseteq M'_2$, where $\pi:M_1\to M_2$ denotes the quotient map. \begin{tikzcd} &&& M'' \arrow[d,"s"] \arrow[r,"\pi\circ s"] & M'_2 \arrow[d] \\ &&& \ker(N_1) \arrow[r,"\pi"] & M_2 \end{tikzcd}; \item $N_1(M'_1)\subseteq M'$. Hence $N_1$ induces a map $N_{1,M'}:M'_{2}\to M'$. \end{itemize} \end{para} \begin{lem} \label{decomN} $N_{M'}=-N_{1,M'}\circ (\pi\circ s)$ as maps $M''\to M'$. \end{lem} \begin{proof} The basic idea is very simple. Let $a\in M''=M/M'$. By definition, $N_{M'}(a)=N(\tilde{a})$ if $\tilde{a}\in M$ is a lifting of $a$.
Note that $s(a)$ is a lifting of $a$ in $M_1$ but not necessarily in $M$. To make it inside of $M$, we will modify it with an element in $M'_1$. Let $b\in M'_1$ be a lifting of $\pi(s(a))\in M'_2=M'_1/M'$. By definition, \[-N_{1,M'}\circ \pi\circ s(a)=-N_1(b).\] On the other hand, consider $s(a)-b\in M_1$. Since \[\pi(s(a)-b)=\pi(s(a))-\pi(b)=0\in M_2\] by our choice of $b$, we have $s(a)-b\in M$. Note that $b\in M'_1\subseteq\ker(\theta)$. Hence $\theta(s(a)-b)=a$, i.e. $s(a)-b$ is a lifting of $a$ in $M$. By definition, \[N_{M'}(a)=N(s(a)-b)=-N_1(b)=-N_{1,M'}\circ \pi\circ s(a)\] where the last equality follows from $s(M'')\subseteq \ker(N_1)$. This finishes the proof. \end{proof} \begin{rem} In our application, using the notation in \ref{noBdRK} and \ref{FonOpsetup}, we will take $M=E_0(\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U)^K)$, $M'=\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^K$, $M''=\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^K$, $N=\nabla$ and the exact sequence $0\to M\to M_1\to M_2\to 0$ (resp. $0\to M'\to M'_1\to M'_2\to 0$) will be constructed from the Poincar\'e lemma sequence (resp. its $k$-th graded piece). It remains to construct $M_1, M'_1,N_1$ and $s$. \end{rem} \begin{para} \label{setupOBdR} Next we collect some results about the positive structural de Rham sheaf $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+$ (and its log version). We will not recall the construction here. The construction can be found in \cite[\S 2.2]{DLLZ2} (with a log structure) and \cite[(3)]{Sch13e} (without log structures). Let $U\in\mathfrak{B}$ and $V_\infty=\pi_\mathrm{HT}^{-1}(U)$. Then $V_\infty$ is the inverse image of some affinoid $V_{K_p}\subseteq\mathcal{X}_{K^pK_p}$ for a sufficiently small open subgroup $K_p\subseteq \mathrm{GL}_2(\mathbb{Z}_p)$. Let $K$ be a finite extension of $\mathbb{Q}_p$ in $C$ over which all $V_{K_p}$'s are defined. Hence we can write $V_{K_p}=V_{K_p,K}\times_{\Spa(K,\mathcal{O}_K)} \Spa(C,\mathcal{O}_C)$.
Fix a sufficiently small open subgroup $G_0\subseteq\mathrm{GL}_2(\mathbb{Z}_p)$. After enlarging $K$, we may assume that all cusps in $V_{G_0}$ are defined over $K$. Let $V_0=V_{G_0,K}$. The cusps in $V_0$ define a natural log structure on $V_{0}$, cf. \cite[Example 2.1.2]{DLLZ2}. Consider the pro-Kummer \'etale site $(V_{0})_{\mathrm{prok\acute{e}t}}$ of $V_{0}$. Then we can regard $V_\infty$ as the open covering $\displaystyle \varprojlim_{L,K_p} V_{K_p,L}$ in $(V_{0})_{\mathrm{prok\acute{e}t}}$, where $K_p$ runs through all open subgroups of $G_0$ and $L$ runs through all finite extensions of $K$ in $C$. In particular, we can evaluate the positive structural de Rham sheaf $\mathcal{O}\mathbb{B}_{\mathrm{dR},\log,V_0}^+$ (called the geometric de Rham period sheaf in \cite[Definition 2.2.10]{DLLZ2}) at $V_\infty$ and denote it by $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$. It has the following ring-theoretic properties. \begin{enumerate} \item There is a natural surjective ring homomorphism $\theta_{\log}:\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\to\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)$. It induces a decreasing filtration on $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ with $\Fil^i\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)=(\ker\theta_{\log})^i$. This filtration is exhaustive and $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ is complete with respect to it. Moreover, the natural map $\Sym^i_{\gr^0 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)} \gr^1\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\to\gr^i\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ is an isomorphism for $i\geq 0$. \item There is a natural injection $\mathbb{B}_\mathrm{dR}^+(V_\infty)\to\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ which is strict with respect to the filtrations such that its composite with $\theta_{\log}$ agrees with $\theta:\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\to \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)$.
\item There is a natural $\mathbb{Q}_p$-algebra injection $\displaystyle \varinjlim_{L,K_p} \mathcal{O}_{V_{K_p,L}}(V_{K_p,L})\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ such that its composite with $\theta_{\log}$ agrees with the natural inclusion $\mathcal{O}_{V_{K_p,L}}(V_{K_p,L})\subseteq \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)$. \item $G_{K}$ and $G_0$ act naturally on $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ and all the maps above are $G_K\times G_0$-equivariant. \end{enumerate} The last two properties can be seen directly from the construction. The first two properties follow from the explicit description of $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ in \cite[Proposition 2.3.15]{DLLZ2}. We will also give a description of $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ in terms of $\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ below, cf. \eqref{locBdR}. We remark that $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ does not depend on the choice of $G_0$ and $K$. See the paragraph below the proof of Proposition 2.3.15 of \cite{DLLZ2} and also \cite[(3)]{Sch13e}. \end{para} \begin{para}[Poincar\'e lemma sequence and Faltings's extension] \label{PlsFe} There is a natural logarithmic connection on $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$, cf. \cite[2.2.15]{DLLZ2}. Following the notation in \ref{XCY}, we use $\mathcal{C}$ to denote the set of cusps. Then the logarithmic connection \[\mathcal{O}_{V_0}\to\Omega^1_{V_0}(\mathcal{C})\] can be extended $\mathbb{B}_\mathrm{dR}^+(V_\infty)$-linearly to a logarithmic connection \[\nabla_{\log}:\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})\] where by abuse of notation $\mathcal{O}_{V_0}=\mathcal{O}_{V_0}(V_0)$ and $\Omega^1_{V_0}(\mathcal{C})=\Omega^1_{V_0}(\mathcal{C})(V_0)$. As the notation suggests, $\nabla_{\log}$ also does not depend on the choice of $V_0$.
More precisely, $\nabla_{\log}$ also extends the log connection $\mathcal{O}_{V_{K_p,L}}\to \Omega^1_{{V_{K_p,L}}}(\mathcal{C})$ for any $K_p,L$ under the natural identification $ \Omega^1_{{V_{K_p,L}}}(\mathcal{C})=\mathcal{O}_{V_{K_p,L}}\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})$. We will sometimes drop the $\mathcal{O}_{V_0}$ in the tensor products below. $\nabla_{\log}$ induces a short strict exact sequence (the Poincar\'e lemma sequence) \[0\to \mathbb{B}_\mathrm{dR}^+(V_\infty) \to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})\to 0\] where $\Omega^1_{V_0}(\mathcal{C})$ has degree $1$. Taking the first graded piece, we obtain an exact sequence of $\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)$-modules \begin{eqnarray} \label{Fext} 0\to \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)(1)\to \gr^1 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\to \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})\to 0 \end{eqnarray} called the log Faltings's extension. Note that $\Omega^1_{V_0}(\mathcal{C})\cong \omega^2$ by the Kodaira-Spencer isomorphism, hence $\Omega^1_{V_0}(\mathcal{C})$ is trivial. Then $\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})\cong \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)$ by choosing a generator of $\Omega^1_{V_0}(\mathcal{C})$. Recall that there are natural isomorphisms $\Sym^k_{\gr^0 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)} \gr^1\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)\cong\gr^k\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$.
Hence the Faltings's extension induces a natural filtration on $\gr^k\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ with graded pieces isomorphic to $\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)(i)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes k-i} \cong \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)(i),i=0,\cdots,k$. We also have the following explicit description of $ \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$. Choose an element $X\in \Fil^1 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$ whose image in $\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})$ is a generator. Then the natural map \begin{eqnarray} \label{locBdR} \mathbb{B}_{\mathrm{dR}}^+(V_\infty)[[X]]\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty) \end{eqnarray} is an isomorphism of filtered rings where $X$ has degree $1$. \end{para} \begin{para}[$\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+$] \label{cOBdRk+} It is clear that assigning to $U$ the ring $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)$, where $V_\infty=\pi_{\mathrm{HT}}^{-1}(U)$, defines a sheaf on ${\mathscr{F}\!\ell}$, which will be denoted by $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+$ by abuse of notation. It is equipped with a natural decreasing filtration and we denote $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+/\Fil^k\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+$ by $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+$. There is a natural map $\mathbb{B}_{\mathrm{dR},k}^+\to \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+$. It follows from the discussion in the previous paragraph that $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)$ is a finite $\mathbb{B}_{\mathrm{dR},k}^+(U)$-module for $U\in\mathfrak{B}$. Hence we can define a natural $p$-adic topology on $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)$.
More precisely, using the notation in \eqref{locBdR}, we can define a $\mathbb{Q}_p$-Banach algebra structure on $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)$ with the unit open ball given by the image of \[\mathbb{A}_\mathrm{inf}(V_\infty)[[X]]\to \mathbb{B}_{\mathrm{dR},k}^+(U)[[X]]\to \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U).\] The $p$-adic topology defined in this way does not depend on the choice of $X$. \end{para} \begin{para}[subsheaves of $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+$] \label{subexa} It follows from the construction that $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+$ is a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant sheaf. We denote by \[\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}\subseteq \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+\] the subsheaf of $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic sections. The Lie algebra $\mathfrak{gl}_2(\mathbb{Q}_p)$ acts naturally on it. Given an infinitesimal character $\tilde\chi:Z\to\mathbb{Q}_p$, we denote by \[\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}\subseteq \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}\] the $\tilde\chi$-isotypic part. Both sheaves $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}$ and $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}$ inherit a decreasing filtration from $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+$. Note that by Faltings's extension, $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+$ is naturally filtered by sheaves isomorphic to $\mathcal{O}_{K^p}$ (up to Tate twists). Hence the same argument as in the proof of Lemma \ref{BdRcmgr} shows that taking graded pieces commutes with taking locally analytic vectors and $\tilde\chi$-isotypic parts. Now take $\tilde\chi=\tilde\chi_{l}$ for some positive integer $l$ defined in \ref{tilchik}. Let $U\in\mathfrak{B}$.
Then, for some finite extension $K$ of $\mathbb{Q}_p$ in $C$, $G_K$ acts naturally on $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)$, hence also on $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U)$. Consider the subspace of $G_{K_\infty}$-fixed, $G_K$-analytic vectors $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U)^K\subseteq \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U)$ as before. We have seen that $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi}(U)$ is naturally filtered by LB-spaces isomorphic to $\mathcal{O}^{\mathrm{la},\tilde\chi}_{K^p}(U)(i)$ for some integer $i$, which is Hodge-Tate. Therefore taking graded pieces also commutes with taking $G_{K_\infty}$-fixed, $G_K$-analytic vectors by the same argument as in the proof of Proposition \ref{Kcmgr}. Let's summarize these results in the following proposition. \end{para} \begin{prop} \label{griFIl} Let $U\in\mathfrak{B}$ and suppose $G_K$ acts on $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)$ for some finite extension $K$ of $\mathbb{Q}_p$ in $C$. For integers $i\geq 0$ and $l>0$, the following natural maps \begin{enumerate} \item $\gr^i \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}\to (\gr^i\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+)^{\mathrm{la}}$, \item $\gr^i \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_l}\to (\gr^i\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+)^{\mathrm{la},\tilde\chi_l}$, \item $\gr^i \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_l}(U)^K\to (\gr^i\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U))^{\mathrm{la},\tilde\chi_l,K}$ \end{enumerate} are isomorphisms.
Moreover the $i$-th symmetric power of Faltings's extension induces a natural filtration on $\gr^i \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_l}(U)^K$ whose graded pieces are isomorphic to \[ \mathcal{O}^{\mathrm{la},\tilde\chi_l}_{K^p}(U)(j)^K\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes i-j},~~~j=0,\cdots,i.\] \end{prop} For $i\geq 0$, since $\gr^i \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la}}$ does not depend on $k>i$, we will simply write it as $\gr^i\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}$. Similarly, we can define $\gr^i \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_l}$, $\gr^i\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}$, and so on. \begin{para} Consider the action of $1\in\mathbb{Q}_p\cong\Lie(\Gal(K_\infty/K))$ on $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_l}(U)^K$. It is easy to see that $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_l}(U)^K$ is filtered by $\mathcal{O}^{\mathrm{la},\tilde\chi_l}_{K^p}(U)(i)^K$, hence there is a generalized eigenspace decomposition \[\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_l}(U)^K=\bigoplus_{\lambda} E_{\lambda}(\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_l}(U)^K)\] where $\lambda$ runs through all eigenvalues. We will be particularly interested in the generalized eigenspace associated to $\lambda=0$. \end{para} \begin{para} \label{constdiag} Now we can construct a diagram similar to \eqref{absdiag}.
Consider the following truncated Poincar\'e lemma sequence and its $\Fil^k$-parts: \[\begin{tikzcd} & \mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p} &&&\\ 0 \arrow[r] & \mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k} \arrow[r] \arrow[u] & \mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k} \arrow[r,"\nabla_{\log}"] \arrow[lu,"\theta_{\log}" ']& \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k} \otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C}) \arrow[r] & 0 \\ 0 \arrow[r] & \gr^k\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k} \arrow[r] \arrow[u] & \gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k} \arrow[r] \arrow[u] & \gr^{k-1} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k} \otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C}) \arrow[r] \arrow[u] & 0 \end{tikzcd}.\] Evaluate this diagram at $U$, take the subspace of $G_{K_\infty}$-fixed, $G_K$-analytic vectors, and take the generalized eigenspace of $0$ with respect to the action of $1\in\mathbb{Q}_p\cong\Lie(\Gal(K_\infty/K))$. We obtain the following diagram, cf. \eqref{absdiag}. \[ \begin{tikzcd} \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} &&&\\ E_0(\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U)^K) \arrow[r,hook] \arrow[u] & E_0(\mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k} (U)^K) \arrow[r,"E_0(\nabla_{\log})"] \arrow[lu,"E_0(\theta_{\log})" ']& E_0(\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)^K) \otimes\Omega^1_{V_0}(\mathcal{C}) \arrow[r] & 0 \\ \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K} \arrow[r,hook] \arrow[u] & E_0(\gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^K) \arrow[r] \arrow[u] & \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k} \arrow[r] \arrow[u] & 0 \end{tikzcd}.\] Both horizontal sequences are exact by exactly the same argument as in \ref{subexa}. 
Here we identify \begin{itemize} \item $E_0(\mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}(U)^K)=\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^K=\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}$; \item $E_0( \gr^k\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k} (U)^K)= E_0(\mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}(U)(k)^K)=\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^K=\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K}$; \item $E_0(\gr^{k-1} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^K) \otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})=\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes k}$. \end{itemize} The first two follow from the fact that $\mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}(U)^K$ is Hodge-Tate of weights $0,k$, cf. \ref{tilchik}. For the last one, we note that by Proposition \ref{griFIl}, $\gr^{k-1} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^K$ is filtered by \[ \mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}(U)(j)^K\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes k-1-j} \cong \mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}(U)(j)^K,~~~j=0,\cdots,k-1.\] Hence the only Hodge-Tate weight-$0$ part is contributed by $ \mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p}(U)^K\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes k-1} $, i.e. when $j=0$. This implies our claim.
\end{para} \begin{para} We basically have everything in \ref{setupM'MM''} here with $N$ and $N_1$ given by the action of $1\in\mathbb{Q}_p\cong\Lie(\Gal(K_\infty/K))$, except that we still need to construct a section of $E_0(\theta_{\log})$ (also denoted by $s$ in \ref{setupM'MM''}) \[s_{k+1}: \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\to \mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k} (U)^{G_K} \] such that $\im(E_0(\nabla_{\log})\circ s_{k+1})\subseteq \Fil^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k} (U)^{G_K}\otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k} = \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k} $. (Note that $\ker(N_1)$ is exactly the subspace of $G_K$-invariants here.) This will be done in the next subsection, cf. \ref{defnsk}. Let's assume its existence for the moment. Then by Lemma \ref{decomN}, $N_k(U): \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K}$ (corresponding to $N_{M'}$ in Lemma \ref{decomN}) can be written as \[N_k(U)=-N'_k\circ (E_0(\nabla_{\log})\circ s_{k+1}),\] where $N'_k: \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}\to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K}$ is obtained by applying $1\in\mathbb{Q}_p\cong\Lie(\Gal(K_\infty/K))$ to the middle term of the exact sequence \[0 \to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K} \to E_0(\gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k}(U)^K) \to \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k} \to 0. \] Thus in order to prove that $N_k=c_kI_{k-1}=c_k\bar{d}'^k\circ d^k$ for some $c_k\in\mathbb{Q}_p^\times$, it remains to compare $E_0(\nabla_{\log})\circ s_{k+1}$ with $d^k$ and $N'_k$ with $\bar{d}'^k$.
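Spelling out the constant bookkeeping (a purely formal consequence of the factorization above): once one knows $E_0(\nabla_{\log})\circ s_{k+1}=d^k$ and $N'_k=c_k\bar{d}'^k$, substituting into the displayed expression for $N_k(U)$ yields
\[N_k(U)=-N'_k\circ\left(E_0(\nabla_{\log})\circ s_{k+1}\right)=-c_k\,\bar{d}'^k\circ d^k=-c_k I_{k-1},\]
i.e. $N_k$ is a nonzero scalar multiple of $I_{k-1}$, with the sign absorbed into the constant.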
\end{para} \begin{prop} \label{comdk} Let $s_{k+1}: \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\to \mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k} (U)^{G_K}$ be the map defined in Definition \ref{defnsk} below. Then \[E_0(\nabla_{\log})\circ s_{k+1}=d^k.\] Both sides are viewed as maps $\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\to \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k} $. \end{prop} \begin{prop} \label{comdbark} There exists $c_k\in\mathbb{Q}_p^\times$ such that \[N'_k=c_k\bar{d}'^k.\] Both sides are viewed as maps $\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}\to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K}$. \end{prop} We will prove these two propositions in the next two subsections. \subsection{Proof of Theorem \ref{MT} II} \label{PTII} \begin{para} \label{reldRcomp} In this subsection, we prove Proposition \ref{comdk}. Keep the same notation as in the previous subsection. As claimed before, we will construct a map \[s_{k+1}: \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\to \mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k} (U)^{G_K} \] which is a section of $E_0(\theta_{\log}):\mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k} (U)^{G_K}\to \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^K$, and we will compute its composite with $E_0(\nabla_{\log})$. Essentially we need to construct some explicit elements in $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+$. Our strategy is to use a relative de Rham comparison theorem on modular curves. One reference here is \cite[4.1.3]{Pan20}. Keep the same setup and notation as in \ref{setupOBdR}. 
The first relative $p$-adic \'etale cohomology of the universal family of elliptic curves on $V_0$ defines a rank two $\hat\mathbb{Z}_p$-local system $\hat{\underline{V}}$ (denoted by $\hat{\underline{V}}_{\log}$ in \cite{Pan20}) on $(V_0)_{\mathrm{prok\acute{e}t}}$. Recall that in \ref{XCY}, we have a rank two filtered vector bundle $D$ on $V_0$ equipped with a logarithmic connection $\nabla$, which is defined as the canonical extension of the first de Rham cohomology of the universal family of elliptic curves. Then $\hat{\underline{V}}$ and $(D,\nabla)$ are associated. Concretely, evaluating everything at $V_\infty\in (V_0)_{\mathrm{prok\acute{e}t}}$, we have a natural isomorphism \[V\otimes_{\mathbb{Q}_p} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)[\frac{1}{t}]\cong D\otimes_{\mathcal{O}_{V_0}}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)[\frac{1}{t}]\] preserving the filtrations and logarithmic connections. Here $V=\mathbb{Q}_p^{\oplus 2}$ as $\hat{\underline{V}}$ becomes trivial on $V_\infty$ and $\frac{1}{t}$ has degree $-1$. Under this isomorphism, we even have a more refined result \begin{eqnarray} \label{relOBdR} V\otimes_{\mathbb{Q}_p} t\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty) \subseteq D\otimes_{\mathcal{O}_{V_0}}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty) \subseteq V\otimes_{\mathbb{Q}_p} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty), \end{eqnarray} cf. \cite[(4.1.2)]{Pan20}. Thus there is a natural $\mathcal{O}_{V_0}$-linear, $G_0$-equivariant map \begin{eqnarray}\label{l_1} l_1:D(V_0)\otimes_{\mathbb{Q}_p} V^*\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)=\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U) \end{eqnarray} preserving filtrations and logarithmic connections on both sides. The elements in the image of $l_1$ should be considered as the ``$\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+$-periods of $V$''. Take the composite of this map with $\theta_{\log}$.
\[ \theta_{\log}\circ l_1: D(V_0)\otimes_{\mathbb{Q}_p} V^* \to \gr^0 D(V_0)\otimes_{\mathbb{Q}_p} V^*\to \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty).\] Note that this induces a map \[\gr^0 D(V_0)\cong \omega^{-1}\otimes\wedge^2 D(V_0)\to \mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)\otimes_{\mathbb{Q}_p} V\] which is nothing but the injection in \eqref{rHTH} (when restricting to $\omega^{-1}\otimes\wedge^2 D(V_0)$) up to a Tate twist by \cite[Proposition 7.9]{Sch13}. Let $(1,0)^*,(0,1)^*\in V^*$ be the dual basis of $V=\mathbb{Q}_p^{\oplus 2}$. Then it follows from our discussion in \ref{HEt} and the construction of $e_1,e_2,\mathrm{t}$ in \ref{exs} that for $f\in D(V_0)$, \[\theta_{\log}\circ l_1(f\otimes (1,0)^*)=\frac{\bar{f}e_2}{\mathrm{t}c},\] \[\theta_{\log}\circ l_1(f\otimes (0,1)^*)=-\frac{\bar{f}e_1}{\mathrm{t}c},\] where $\bar{f}\equiv f \mod \Fil^1D$, viewed as an element in $\gr^0 D\cong \omega^{-1}\otimes \wedge^2 D$. Recall that $c$ is our chosen global non-vanishing section of $\wedge^2 D$ used in the construction of $\mathrm{t}$. See \ref{HEt} for more details. Below, we will sometimes also use $c^i$ to denote twisting by $(\wedge^2 D)^i$. For example, $\gr^0 D=\omega^{-1}c$. In general, we can consider the $k$-th symmetric powers $\Sym^k\hat{\underline{V}}$ and $\Sym^k D$ and obtain a natural map \begin{eqnarray} \label{l_k} l_k:\Sym^k D(V_0)\otimes_{\mathbb{Q}_p} \Sym^k V^*\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(V_\infty)=\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U). \end{eqnarray} Similarly, it has the property that for $f\in \Sym^k D(V_0)$, \begin{eqnarray} \label{thetalk} \theta_{\log}\circ l_k\left(f\otimes ((1,0)^*)^{\otimes i}(-(0,1)^*)^{\otimes k-i}\right) =\frac{\bar{f}e_1^{k-i}e_2^i}{\mathrm{t}^kc^k},~~~~~i=0,\cdots,k \end{eqnarray} where $\bar{f}\equiv f\mod \Fil^1$. We will use these maps to produce elements in $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+$. First we need several easy lemmas.
\end{para} \begin{lem} \label{thetainv} $f\in \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)$ is invertible if and only if $\theta_{\log}(f)\in\mathcal{O}_{K^p}(U)$ is invertible. \end{lem} \begin{proof} This is clear as $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)$ is complete with respect to the $\ker(\theta_{\log})$-adic topology. \end{proof} \begin{lem} \label{OBdRconv} Let $\{a_n\}_{n\geq 0}$ be a sequence of sections of $\mathcal{O}_{V_0}$ on $V_0$ and $y\in\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)$. Suppose $\theta_{\log}(y)\in\mathcal{O}_{\mathcal{X}_{K^p}}^+(V_\infty)$, i.e.\ $\|\theta_{\log}(y)\|\leq 1$, and $\displaystyle \lim_{n\to+\infty} a_n=0$. Then \[\sum_{n=0}^{+\infty} a_n y^n\] converges in $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)$. In other words, $\sum_{n=0}^{+\infty} a_n y^n$ converges if and only if $\sum_{n=0}^{+\infty} a_n \theta_{\log}(y)^n$ converges. \end{lem} \begin{proof} This is essentially \cite[Lemme 4.9]{BC16}. We give a sketch of their argument here. Since $\theta:\mathbb{A}_\mathrm{inf}(V_\infty)\to \mathcal{O}_{\mathcal{X}_{K^p}}^+(V_\infty)$ is surjective, we can find $\tilde{y}\in \mathbb{A}_\mathrm{inf}(V_\infty)$ such that $z=y-\tilde{y}\in\ker(\theta_{\log})$. Hence $\sum_{n=0}^{+\infty} a_n \tilde{y}^n$ converges in view of the topology defined in \ref{cOBdRk+}. Using that $z^k=0$, we deduce that \[\sum_{n=0}^{+\infty} a_n y^n=\sum_{n=0}^{+\infty} a_n (z+\tilde{y})^n=\sum_{i=0}^{k-1}z^i\sum_{n=0}^{+\infty} \binom{n}{i} a_n\tilde{y}^{n-i}\] converges in $\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)$. \end{proof} \begin{para} We start with the case $k=1$. Assume that $e_1$ is invertible on $V_\infty$. Fix a section $f_1\in D(V_0)$ whose image in $\gr^0 D(V_0)$ is a generator. Consider $l_1(f_1\otimes (0,1)^*)\in\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)$. Note that $\theta_{\log}\circ l_1(f_1\otimes (0,1)^*)\in\mathcal{O}_{K^p}(U)^\times$ by our choice of $f_1$.
Hence by Lemma \ref{thetainv}, $l_1(f_1\otimes (0,1)^*)$ is invertible in $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)$. Let \[\tilde{x}:=-\frac{l_1(f_1\otimes (1,0)^*)}{l_1(f_1\otimes (0,1)^*)}\in\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U).\] Then it follows from the discussion in \ref{reldRcomp} that $\theta_{\log}(\tilde{x})=x$. Clearly $G_K$ fixes $\tilde{x}$ and the action of $G_0$ on $\tilde{x}$ is analytic. Shrinking $G_0$ if necessary, we may assume that $\|x\|=\|x\|_{G_0}$, the norm of $x$ as a $G_0$-analytic vector. A direct calculation shows that the Lie algebra $\mathfrak{gl}_2(\mathbb{Q}_p)$ acts on $\tilde{x}$ via \[\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}\cdot \tilde{x}=(d-a)\tilde{x},~~~~~~~~~~~\begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix} \cdot \tilde{x}=1,~~~~~~~~~~~\begin{pmatrix} 0 & 0 \\ 1& 0\end{pmatrix} \cdot \tilde{x}=-\tilde{x}^2\] (in the same way as on $x$). In particular, $Z$ acts on powers of $\tilde{x}$ via $\tilde{\chi}_1$. We are going to construct maps \[ \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\to \mathcal{O}\mathbb{B}_{\mathrm{dR},l}^{+,\mathrm{la},\tilde\chi_1} (U)^{G_K},l\geq 0\] which are compatible when varying $l$. When $l=2$, this will give us the desired map $s_2: \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\to \mathcal{O}\mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde\chi_1} (U)^{G_K}$. In fact we will construct $G_K$-equivariant maps \begin{eqnarray} \label{tildesl} \tilde{s}_l:\bigcup_{L}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_L}\to \bigcup_{L}\mathcal{O}\mathbb{B}_{\mathrm{dR},l}^{+,\mathrm{la},\tilde\chi_1} (U)^{G_L},l\geq 0 \end{eqnarray} where $L$ runs through all finite extensions of $K$ in $C$. Now we need the explicit description of $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)$ in Theorem \ref{str}. As in \ref{exs}, we choose $x_n\in\mathcal{O}_{V_{G_{r(n)}}}(V_{G_{r(n)}}),n\geq 0$ such that $\|x-x_n\|_{G_{r(n)}}=\|x-x_n\|\leq p^{-n}$.
We may assume that $x_n$ is defined over some finite extension $K_n$ of $K$ in $C$ and $\bigcup_n K_n=\overline{\mathbb{Q}}_p$. Then an element $f\in \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_L}$ can be written as \[f=\sum_{i=0}^{+\infty} c_i(x-x_n)^i\] for some $n\geq 0$ and $c_i\in\mathcal{O}_{V_{G_{r(n)}},K_n}$ such that $c_ip^{(n-1)i}$ is uniformly bounded. Define \[\tilde{s}_l(f):=\sum_{i=0}^{+\infty} c_i(\tilde{x}-x_n)^i\in\mathcal{O}\mathbb{B}_{\mathrm{dR},l}^+(U).\] Note that this series is convergent by Lemma \ref{OBdRconv}. It is easy to see that $\tilde{s}_l(f)$ does not depend on the choice of $x_n$ and $\tilde{s}_l(f)$ is a $G_{r(n)}$-analytic vector. Moreover $Z$ acts on $\tilde{s}_l(f)$ via $\tilde\chi_1$. Hence we obtain the map $\tilde{s}_l$ claimed in \eqref{tildesl}. Clearly the inverse limit of $\tilde{s}_l$ over $l$ defines a map \[\phi_1:\bigcup_{L}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_L}\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U).\] \end{para} \begin{para} Our next step is to compute $\nabla_{\log}\circ \phi_1:\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U) \otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})$. Essentially we need to understand the image of $x$. In fact, the following lemma will be enough for our purpose in view of Lemma \ref{tpowt} below. \end{para} \begin{lem}[$p$-adic Legendre's relation] \label{pLegen} Recall that the map $l_1$ was introduced in \eqref{l_1}. \begin{enumerate} \item For any $g_1,g_2\in D(V_0)$ and $v_1,v_2\in V^*$, the periods satisfy \[l_1(g_1\otimes v_1)l_1(g_2\otimes v_2)-l_1(g_1\otimes v_2)l_1(g_2\otimes v_1)\in t\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U).\] \item $\nabla_{\log}\circ\phi_1(x)\in t\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)\otimes\Omega^1_{V_0}(\mathcal{C})$.
\end{enumerate} \end{lem} \begin{proof} For the first part, it suffices to show that \[\wedge^2D\otimes_{\mathcal{O}_{V_0}}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U) \subseteq t\left(\wedge^2V\otimes_{\mathbb{Q}_p} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)\right),\] cf. \eqref{relOBdR}. Let $\mathbb{M}_0=(D\otimes_{\mathcal{O}_{V_0}}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U))^{\nabla_{\log}}$. It was shown in the proof of Theorem 7.6 of \cite{Sch13} that $\mathbb{M}_0\otimes_{\mathbb{B}_{\mathrm{dR}}^+(U)}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)=D\otimes_{\mathcal{O}_{V_0}}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)$. Then \[\mathbb{M}_0\subseteq V\otimes_{\mathbb{Q}_p}\mathbb{B}_{\mathrm{dR}}^+(U).\] By Proposition 7.9 \textit{ibid.} or \cite[(4.1.2)]{Pan20}, the cokernel of this inclusion is nothing but $\gr^1D\otimes_{\mathcal{O}_{V_0}}\mathcal{O}_{\mathcal{X}_{K^p}}(V_\infty)(-1)\neq 0$. Hence \[\wedge^2 \mathbb{M}_0\subseteq t(\wedge^2 V\otimes_{\mathbb{Q}_p}\mathbb{B}_{\mathrm{dR}}^+(U)),\] which implies our claim by taking the tensor product with $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)$. For the second part, we recall that $\tilde{x}:=-\frac{l_1(f_1\otimes (1,0)^*)}{l_1(f_1\otimes (0,1)^*)}$. Fix a generator $\omega$ of $\Omega^1_{V_0}(\mathcal{C})$ and write $\nabla_{\log}(f_1)=f_2\omega, f_2\in D(V_0)$. Then \[\nabla_{\log}(\tilde{x})=\frac{1}{l_1(f_1\otimes (0,1)^*)^2}(l_1(f_1\otimes v_1)l_1(f_2\otimes v_2)-l_1(f_1\otimes v_2)l_1(f_2\otimes v_1))\otimes\omega,\] where $v_1=(1,0)^*,v_2=(0,1)^*$. Hence by the first part, we have $\nabla_{\log}(\tilde{x})\in t\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)\omega$. \end{proof} \begin{para} It is clear from the construction that $\tilde{x}$ and $\phi_1$ depend on the choice of $f_1\in D(V_0)$. However this lemma shows that $\tilde{x}$ is actually well-defined up to $t\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)$, i.e. $\tilde{x}\mod t\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)$ is independent of the choice of $f_1$.
To see this, we consider the Poincar\'e lemma sequence modulo $t$ \[0\to \mathbb{B}_{\mathrm{dR}}^+(U)/(t) \to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)/(t)\xrightarrow{\nabla_{\log}\mod t} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)/(t)\otimes\Omega^1_{V_0}(\mathcal{C})\to 0\] which is exact as all the terms in the Poincar\'e lemma sequence are free over $\mathbb{B}_{\mathrm{dR}}^+(U)$. Lemma \ref{pLegen} implies that $\tilde{x}\mod (t)\in \mathbb{B}_{\mathrm{dR}}^+(U)/(t)=\mathcal{O}_{K^p}(U)$. On the other hand, $\theta_{\log}(\tilde{x})=x$ by our construction. Hence \[\tilde{x}\mod (t)=x.\] We summarize what we obtained so far. \end{para} \begin{prop} \label{phi1} Given a section $f_1\in D(V_0)$ whose image in $\gr^0 D(V_0)$ is a generator, we can define a $G_K\times G_0$-equivariant map \[\phi_1:\bigcup_{L}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_L}\to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)\] satisfying the following conditions \begin{enumerate} \item $\theta_{\log}\circ\phi_1=\Id$. \item $\phi_1$ is $\mathcal{O}_{V_{K_p,L}}$-linear for any open subgroup $K_p\subseteq G_0$ and finite extension $L$ of $K$ in $C$. \item The composite map $\bigcup_{L}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_L}\xrightarrow{\phi_1} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)\to \mathcal{O}\mathbb{B}_{\mathrm{dR},l}^+(U)$ is continuous for $l\geq 0$. \item $\phi_1(x)\equiv x \mod (t)$. In particular, $\nabla_{\log}\circ\phi_1(x)\in t\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U)\otimes\Omega^1_{V_0}(\mathcal{C})$ and $\phi_1\mod (t)$ is independent of the choice of $f_1$.
\end{enumerate} \end{prop} As mentioned before, we denote by $s_2$ the map $\phi_1|_{\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}}\mod\Fil^2$, hence \[s_2: \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\to \mathcal{O}\mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde{\chi}_1}(U)^{G_K}.\] Moreover the composite map \[ \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\xrightarrow{s_2} \mathcal{O}\mathbb{B}_{\mathrm{dR},2}^{+,\mathrm{la},\tilde{\chi}_1}(U)^{G_K} \xrightarrow{\nabla_{\log}} \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})\] sends $x$ to $0$ and sections $f\in\mathcal{O}_{V_{G_{n}},K}\subseteq \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}$ to $df$. This shows that $\nabla_{\log}\circ s_2=d^1$ in view of the construction of $d^1$, cf. Theorem \ref{I1}. This proves Proposition \ref{comdk} when $k=1$. \begin{para} \label{kgeq2} Now for $k\geq 2$, we follow the same strategy as in Subsection \ref{Do1}. Recall that we have \eqref{l_k} \[l_{n}:\Sym^{n} D(V_0)\otimes_{\mathbb{Q}_p} \Sym^{n} V^* \to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U),~~~~n\geq 0\] which is $\mathcal{O}_{V_0}$-linear. Therefore using $\phi_1$, we can extend this map $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}$-linearly to a map \[l'_{n}:\Sym^{n} D(V_0)\otimes_{\mathcal{O}_{V_0}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\otimes_{\mathbb{Q}_p} \Sym^{n} V^* \to \mathcal{O}\mathbb{B}_{\mathrm{dR}}^+(U).\] Note that there are natural surjective maps coming from the Hodge filtration on $\Sym^n D$ and the Hodge-Tate filtration on $\Sym^n V^*\otimes \mathcal{O}_{\mathcal{X}_{K^p}}$ \[\Sym^n D\to \gr^0 \Sym^n D=\omega^{-n}\otimes (\wedge^2 D)^{\otimes n},\] \[\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathbb{Q}_p} \Sym^{n} V^*=\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}\otimes_{\mathbb{Q}_p} \Sym^{n} V\otimes\det{}^{-n} \to \omega^{n,\mathrm{la},(0,n)}_{K^p}\otimes\det{}^{-n}(-n),\] (cf.
\ref{genklambdak}) and the natural identification \eqref{rmtwddet1}, \[\mathcal{O}^{\mathrm{la},(n_1+1,n_2+1)}_{K^p}=\mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\cdot \mathrm{t}\cong \mathcal{O}^{\mathrm{la},(n_1,n_2)}_{K^p}\otimes_{\mathcal{O}_{K^p}^{\mathrm{sm}}}(\wedge^2 D^{\mathrm{sm}}_{K})^{-1}\otimes \det(1).\] These induce a natural map \[\Sym^{n} D(V_0)\otimes_{\mathcal{O}_{V_0}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\otimes_{\mathbb{Q}_p} \Sym^{n} V^*\to \mathcal{O}^{\mathrm{la},(-n,0)}_{K^p}(U)^{G_K},\] which is nothing but the composite map $\theta_{\log}\circ l'_{n}$ by our discussion around \eqref{l_1}. The BGG constructions for $\Sym^n D$ and $\Sym^n V^*$ give a natural left inverse of this map \begin{eqnarray} \label{psin} \psi_n:\mathcal{O}^{\mathrm{la},(-n,0)}_{K^p}(U)^{G_K}\to\Sym^{n} D(V_0)\otimes_{\mathcal{O}_{V_0}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\otimes_{\mathbb{Q}_p} \Sym^{n} V^*. \end{eqnarray} More precisely, recall that $\Sym^n D(V_0)\otimes_{\mathcal{O}_{V_0}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)$ was denoted by $\Sym^n D^{(0,0)}(U)$ in the proof of Theorem \ref{I1}. The filtration on $\Sym^n D$ naturally extends to $\Sym^n D^{(0,0)}(U)$ in an $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}$-linear way. By Lemma \ref{KSsec}, we have a natural $U(\mathfrak{gl}_2(\mathbb{Q}_p))$-equivariant map \[r'_n:\gr^0\Sym^n D^{(0,0)}(U)^{G_K}\to \Sym^n D^{(0,0)}(U)^{G_K}\] which is a section of $\Sym^n D^{(0,0)}(U)^{G_K}\to \gr^0 \Sym^n D^{(0,0)}(U)^{G_K}$ and was constructed using the Kodaira-Spencer isomorphism. On the other hand, by Lemma \ref{secinf}, the $\tilde\chi_{n+1}$-isotypic part of \[\gr^0\Sym^n D^{(0,0)}(U)^{G_K}\otimes_{\mathbb{Q}_p} \Sym^{n} V^*=\omega^{-n}(V_0)c^n\otimes_{\mathcal{O}_{V_0}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\otimes_{\mathbb{Q}_p} \Sym^{n} V^*\] is canonically isomorphic to $\mathcal{O}^{\mathrm{la},(-n,0)}_{K^p}(U)^{G_K}$ via $\theta_{\log}\circ l'_n$.
Thus we obtain a natural injective $U(\mathfrak{gl}_2(\mathbb{Q}_p))$-equivariant map \[\mathcal{O}^{\mathrm{la},(-n,0)}_{K^p}(U)^{G_K}\to \gr^0\Sym^n D^{(0,0)}(U)^{G_K}\otimes_{\mathbb{Q}_p} \Sym^{n} V^*\] and its composite with $r'_n\otimes 1$ gives the map $\psi_n$ we are looking for. We remark that all the maps defined here are $G_0$-equivariant and continuous. \end{para} \begin{defn} \label{defnsk} Let $s_{k+1}$ be the composite map \[ \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\xrightarrow{\psi_{k-1}}\Sym^{k-1} D(V_0)\otimes\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\otimes \Sym^{k-1} V^* \xrightarrow{l'_{k-1}\mod\Fil^{k+1}}\mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+}(U)^{G_K}.\] It follows from our previous discussion that $s_{k+1}$ actually defines a map \[s_{k+1}:\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\to\mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde{\chi}_k}(U)^{G_K}\] such that $\theta_{\log} \circ s_{k+1}=\Id$. \end{defn} We are ready to prove Proposition \ref{comdk}. We restate it here. \begin{prop} \label{nlogsk+1dk} $\im(\nabla_{\log}\circ s_{k+1})\subseteq \Fil^{k-1} \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde{\chi}_k}(U)^{G_K}\otimes \Omega^1_{V_0}(\mathcal{C})$. Moreover, under the isomorphism $\Fil^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k} (U)^{G_K}\otimes\Omega^1_{V_0}(\mathcal{C}) = \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}$, we have \[\nabla_{\log}\circ s_{k+1}={d}^{k}:\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\to \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}.\] \end{prop} \begin{proof} Our first observation is that it suffices to prove the proposition modulo $t$. 
\begin{lem} \label{tpowt} The natural map \[\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)^{G_K}\to \left(\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)/(t)\right)^{G_K}\] is an isomorphism. \end{lem} \begin{proof} By the same argument as in the proof of Proposition \ref{griFIl}, \[0\to t\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)^K\to \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)^K\to \left(\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)/(t)\right)^{K}\to 0\] is an exact sequence. Moreover each term has a generalized eigenspace decomposition with respect to the action of $1\in\mathbb{Q}_p\cong\Lie(\Gal(K_\infty/K))$. Hence we only need to show that $0$ does not appear in the spectrum on $t\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)^K$, equivalently, \[t\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)^{G_K}=0.\] Note that there is a natural filtration on $t\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)$ coming from the filtration on $\mathcal{O}\mathbb{B}_{\mathrm{dR}}^+$. It suffices to show that \[\left(\gr^i t\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U)\right)^{G_K}=0,~~~~~~i\geq 0.\] We may assume $i\in\{1,\cdots,k-1\}$ because $\gr^i\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+=0$ if $i$ is not in this range. Again by the proof of Proposition \ref{griFIl}, it follows from Faltings's extension \eqref{Fext} that \[0\to \mathcal{O}_{K^p}^{\mathrm{la},\tilde{\chi}_k}(U)(1)^K\to \gr^1\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^K\to \mathcal{O}_{K^p}^{\mathrm{la},\tilde{\chi}_k}(U)^K\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})\to 0\] is exact and $\mathcal{O}_{K^p}^{\mathrm{la},\tilde{\chi}_k}(U)(1)^K = (\gr^1 t\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U))^K$. 
In general, $(\gr^i t\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^{+,\mathrm{la},\tilde\chi_k}(U))^K$ is filtered by \[\mathcal{O}_{K^p}^{\mathrm{la},\tilde{\chi}_k}(U)(j)^K\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes i-j},~~~~~~~~j=1,\cdots,i.\] (cf. the paragraph below \eqref{Fext}.) Recall that the eigenvalues of the Sen operator on $\mathcal{O}_{K^p}^{\mathrm{la},\tilde{\chi}_k}(U)^K$ are $0,-k$, cf. \ref{tilchik}. Therefore $0$ is not an eigenvalue on $\mathcal{O}_{K^p}^{\mathrm{la},\tilde{\chi}_k}(U)(j)^K\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes i-j}$ as $j\in\{1,\cdots,k-1\}$ by our assumption. This certainly implies our claim. \end{proof} Recall the construction of $d^k$ in \ref{Do1}, cf. Lemma \ref{KSsec}. The composite map \[\omega^{-k+1,\mathrm{la},(0,0)}_{K^p}(U)c^{k-1} \xrightarrow{r'_{k-1}} \Sym^{k-1} D^{(0,0)}(U) \xrightarrow{\nabla'_{k-1}} \Sym^{k-1} D^{(0,0)}(U)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})\] has image in $\Fil^{k-1} \Sym^{k-1} D^{(0,0)}(U)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})$, where $\nabla'_{k-1}$ is $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linear and extends the connection on $\Sym^{k-1} D$. Hence $\nabla'_{k-1}$ induces a map $d'^k:$ \[\omega^{-k+1,\mathrm{la},(0,0)}_{K^p}(U)c^{k-1}\to \Fil^{k-1} \Sym^{k-1} D^{(0,0)}(U)\otimes\Omega^1_{V_0}(\mathcal{C})=\omega^{-k+1,\mathrm{la},(0,0)}_{K^p}(U)c^{k-1}\otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}.\] Note that by \ref{kgeq2}, the $\tilde\chi_k$-part of $\omega^{-k+1,\mathrm{la},(0,0)}_{K^p}(U)c^{k-1}\otimes_{\mathbb{Q}_p} \Sym^{k-1} V^*$ is canonically isomorphic to $\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)$. This shows that if we take the tensor product of $d'^k$ with $\Sym^{k-1} V^*$ and take the $\tilde\chi_k$-part, we get a map \[\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)\to \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)\otimes \Omega^1_{V_0}(\mathcal{C})^{\otimes k}\] which is nothing but $d^k$. 
In other words, recall that there is an injection \[\psi_{k-1}: \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \to \Sym^{k-1} D(V_0)\otimes_{\mathcal{O}_{V_0}}\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)^{G_K}\otimes_{\mathbb{Q}_p} \Sym^{k-1} V^* \] cf. \eqref{psin}. Then $d^k|_{\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}}$ is obtained by restricting $\nabla'_{k-1}\otimes 1$ to $\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}$ via $\psi_{k-1}$. For simplicity, we write $DV_{k}$ for $\Sym^{k-1} D^{(0,0)}(U)^{G_K}\otimes_{\mathbb{Q}_p} \Sym^{k-1} V^*$. Consider the diagram \[ \begin{tikzcd} \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \arrow[r,"{\psi_{k-1}}"] \arrow[rd,"s_{k+1}\mod t" '] & DV_k \arrow[r,"\nabla'_{k-1}\otimes 1"] \arrow[d," l'_{k-1}\mod t"] & DV_k\otimes\Omega^1_{V_0}(\mathcal{C}) \arrow[d,"{l}'_{k-1}\otimes 1\mod t"]\\ &\mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^+(U)/(t) ~~~~~~~~~~~~\arrow[r,"\nabla_{\log}~ \mathrm{ mod}~ t"] & ~~~~~~~~~~~\mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)/(t)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C}) \end{tikzcd}. \] \begin{lem} \label{comdiag} This diagram is commutative. \end{lem} Note that this lemma will imply that \[\nabla_{\log}\circ s_{k+1}={d}^{k}\mod t,\] hence $\nabla_{\log}\circ s_{k+1}={d}^{k}$ by Lemma \ref{tpowt}, which is exactly what we need to show. \end{proof} \begin{proof}[Proof of Lemma \ref{comdiag}] The left triangle is commutative by the definition of $s_{k+1}$. For the right square, we observe that ${l}'_{k-1}\mod t$ is a $K$-linear map between $C$-LB spaces. Hence we can extend ${l}'_{k-1}\mod t$ to a $C$-linear map $l':\Sym^{k-1} D^{(0,0)}(U)\otimes_{\mathbb{Q}_p} \Sym^{k-1} V^*\to \mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^+(U)/(t)$. Here we use the fact that $\Sym^{k-1} D^{(0,0)}(U)=\Sym^{k-1} D \otimes \mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)$ is Hodge-Tate of weight $0$. 
Now it suffices to show \[ \begin{tikzcd} \Sym^{k-1} D^{(0,0)}(U)\otimes_{\mathbb{Q}_p} \Sym^{k-1} V^* \arrow[r,"\nabla'_{k-1}"] \arrow[d,"{l}'"] & \Sym^{k-1} D^{(0,0)}(U) \otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C}) \otimes_{\mathbb{Q}_p} \Sym^{k-1} V^* \arrow[d,"l'\otimes 1"]\\ \mathcal{O}\mathbb{B}_{\mathrm{dR},k+1}^+(U)/(t) \arrow[r,"\nabla_{\log}\mod t"] & \mathcal{O}\mathbb{B}_{\mathrm{dR},k}^+(U)/(t)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C}) \end{tikzcd} \] is commutative, i.e. $(\nabla_{\log}\mod t)\circ l'=(l'\otimes 1)\circ\nabla'_{k-1}$. (Here we use $\nabla'_{k-1}$ instead of $\nabla'_{k-1}\otimes 1$ for simplicity.) First, this is clearly true when restricted to $\Sym^{k-1} D\otimes_{\mathbb{Q}_p} \Sym^{k-1} V^*\subseteq \Sym^{k-1} D^{(0,0)}(U)\otimes_{\mathbb{Q}_p} \Sym^{k-1} V^*$, cf. \ref{reldRcomp}. Second, all of the maps in the diagram commute with multiplication by $x\in\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}$: \begin{itemize} \item for $\nabla'_{k-1}$, this is because $\nabla'_{k-1}$ is $\mathcal{O}_{{\mathscr{F}\!\ell}}$-linear; \item for $l'$, this is because $l'$ is $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)$-linear by our construction; \item for $\nabla_{\log}\mod t$, this is the last part of Proposition \ref{phi1}. \end{itemize} Third, both connections $(\nabla_{\log}\mod t)$ and $\nabla'_{k-1}$ satisfy the Leibniz rule for functions on $V_{K_p,L}$ for any $K_p,L$. See \ref{PlsFe} for $\nabla_{\log}$ and the proof of Theorem \ref{I1} for $\nabla'_{k-1}$. Now our claim follows from the observation that all the maps are continuous and elements of the form \[\sum_{i=0}^n x^i \sum_{j=1}^m a_{ij}b_j\in\Sym^{k-1} D^{(0,0)}(U),~~~~~~~~~~~a_{ij}\in \varinjlim_{K_p,L}\mathcal{O}_{V_{K_p,L}}(V_{K_p,L}),b_j\in \Sym^{k-1} D(V_0),\] are dense in $\Sym^{k-1} D^{(0,0)}(U)$ by the explicit description of $\mathcal{O}^{\mathrm{la},(0,0)}_{K^p}(U)$ in Theorem \ref{str}. 
\end{proof} \subsection{Proof of Theorem \ref{MT} III} \begin{para} In this subsection, we prove Proposition \ref{comdbark}. Keep the same notation as in \ref{PfMTI}. We may assume that $e_1$ is invertible on $V_\infty$. First we recall the definition of $N'_k$. Consider the locally analytic vectors in the $k$-th graded piece of the Poincar\'e lemma sequence \begin{eqnarray} \label{grkPls} ~~~~~~~~0\to \gr^k \mathbb{B}^{+,\mathrm{la}}_{\mathrm{dR}}(U)\to \gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U) \xrightarrow{\gr^k\nabla_{\log}} \gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)\otimes\Omega^1_{V_0}(\mathcal{C})\to 0. \end{eqnarray} Take the $\tilde\chi_k$-isotypic part and the $G_{K_\infty}$-fixed, $G_K$-analytic vectors of this sequence: \[0\to \gr^k \mathbb{B}^{+,\mathrm{la},\tilde\chi_k}_{\mathrm{dR}}(U)^K\to \gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^K \to \gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^K\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})\to 0.\] Note that $\Lie(\Gal(K_\infty/K))$ acts semi-simply on every term except the middle one, cf. \ref{constdiag}. 
The action of $1\in\mathbb{Q}_p\cong \Lie(\Gal(K_\infty/K))$ induces a map between the kernels of the action of $\Lie(\Gal(K_\infty/K))$, i.e.\ the $G_K$-invariants, \[N'_k:\gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^{G_K}\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C}) \to \gr^k \mathbb{B}^{+,\mathrm{la},\tilde\chi_k}_{\mathrm{dR}}(U)^{G_K}.\] We obtain the form of $N'_k$ in Proposition \ref{comdbark} by identifying $\gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^{G_K}$ with $\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k-1}$ and $\gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^{G_K}\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})$ with $\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\otimes \Omega^1_{V_0}(\mathcal{C})^{\otimes k}$. In other words, $N'_k$ is obtained by looking at the action of $G_K$ on the $\tilde\chi_k$-isotypic parts. It turns out that we can ``switch'' the roles of $G_K$ and $Z$ here. In fact, we will show that $N'_k$ can also be obtained by looking at the action of $Z$ on the $G_K$-invariants up to a non-zero constant. Then it will be quite straightforward to relate it with $\bar{d}^k$ using this representation-theoretic interpretation of $N'_k$. We introduce some notation. Let $\mathfrak{z}=\{\begin{pmatrix} a & 0\\ 0 & a\end{pmatrix}\}$ be the centre of $\mathfrak{gl}_2(\mathbb{Q}_p)$. Then there is a natural inclusion $U(\mathfrak{z})\subseteq Z$ and $\tilde\chi_k$ induces a character $z_k:\mathfrak{z}\to\mathbb{Q}_p$. For a $U(\mathfrak{z})$-module $M$, we denote by $M^{z_k}$ the $z_k$-isotypic part of $M$. \end{para} \begin{prop} \label{GKinvex} The sequence \eqref{grkPls} remains exact after taking the $G_K$-invariants and $z_k$-isotypic parts, i.e. 
we have a short exact sequence \[ 0\to \gr^k \mathbb{B}^{+,\mathrm{la}}_{\mathrm{dR}}(U)^{G_K,z_k}\to \gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k} \to \gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k}\otimes \Omega^1_{V_0}(\mathcal{C})\to 0. \] \end{prop} \begin{proof} When $k=1$, the exact sequence \eqref{grkPls} is simply \begin{eqnarray} \label{Fextla} 0\to \mathcal{O}^{\mathrm{la}}_{K^p}(U)(1)\to \gr^1 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)\to \mathcal{O}^{\mathrm{la}}_{K^p}(U)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})\to 0. \end{eqnarray} \begin{lem} \label{GKzspli} The surjective map $ \gr^1 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)\to \mathcal{O}^{\mathrm{la}}_{K^p}(U)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})$ has a $G_K\times U(\mathfrak{z})$-equivariant section. Equivalently, \[\gr^1 \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)\cong \mathcal{O}^{\mathrm{la}}_{K^p}(U)(1)\oplus \mathcal{O}^{\mathrm{la}}_{K^p}(U)\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C}) \] as $G_K\times U(\mathfrak{z})$-modules. \end{lem} Clearly this lemma implies our claim when $k=1$. It also implies the general case because \[\gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}\cong \Sym^k \gr^1\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}.\] \end{proof} \begin{proof}[Proof of Lemma \ref{GKzspli}] Faltings proved that the locally analytic vectors in Faltings's extension \eqref{Fextla} are isomorphic (up to a sign) to the locally analytic vectors in a twist of the relative Hodge-Tate filtration sequence \eqref{rHT} \begin{eqnarray} \label{twrHT} 0\to \mathcal{O}^{\mathrm{la}}_{K^p}(U)(1)\xrightarrow{p} V(1)\otimes_{\mathbb{Q}_p} \omega^{1,\mathrm{la}}_{K^p}(U) \xrightarrow{q}\omega^{2,\mathrm{la}}_{K^p}(U)\to 0, \end{eqnarray} after identifying $\omega^2(V_0)$ with $\Omega^1_{V_0}(\mathcal{C})$ using the Kodaira-Spencer isomorphism, cf. 
\cite[Theorem 4.2.2]{Pan20}. Here we choose a trivialization of $\wedge^2 D$, a choice which is implicit in the reference, cf. \ref{XCY}, \ref{KSsm}. The surjective map $V(1)\otimes_{\mathbb{Q}_p} \omega^{1,\mathrm{la}}_{K^p}(U) \to \omega^{2,\mathrm{la}}_{K^p}(U)$ sends the standard basis of $V=\mathbb{Q}_p^2$ to $e_1,e_2$, cf. \ref{exs}. (Recall that we fix a basis of $\mathbb{Q}_p(1)$ in \ref{exs}.) Then a $G_K\times U(\mathfrak{z})$-equivariant section is given by mapping $f\in \omega^{2,\mathrm{la}}_{K^p}(U)$ to $(1,0)\otimes \frac{f}{e_1}\in V(1)\otimes_{\mathbb{Q}_p} \omega^{1,\mathrm{la}}_{K^p}(U)$. \end{proof} We need to understand how $Z$ acts on the exact sequence in Proposition \ref{GKinvex}. By Harish-Chandra's isomorphism, $Z$ is generated by $z=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \in \mathfrak{z}$ and the Casimir operator \[\Omega=\begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}+\begin{pmatrix} 0 & 0 \\ 1 & 0\end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}+\frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}^2\in U(\mathfrak{gl}_2(\mathbb{Q}_p)).\] Recall that we identify a character $\tilde\chi:Z\to \mathbb{Q}_p$ with a pair of weights of the form $\{(a,b),(b-1,1+a)\}$ in \ref{tilchik}. Then $\tilde\chi(z)=a+b$ and $\tilde\chi(\Omega)=\frac{1}{2}(a-b+1)^2-\frac{1}{2}$. For example, $\tilde\chi_k(\Omega)=\frac{1}{2}k^2-\frac{1}{2}$ and $\tilde\chi_k(z)=1-k$. \begin{prop} \label{SenopCasz} Let $\theta_{\mathrm{Sen}}$ denote the action of $1\in\mathbb{Q}_p\cong\Lie(\Gal(K_\infty/K))$ on $\mathcal{O}^{\mathrm{la}}_{K^p}(U)^K$. Then it satisfies the following quadratic relation: \[(2\theta_\mathrm{Sen}-z+1)^2-1=2\Omega.\] \end{prop} \begin{proof} By \cite[Theorem 5.1.8]{Pan20}, $\theta_\mathrm{Sen}=\theta_\mathfrak{h}(\begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix})$. Hence this result follows directly from the relation between $\theta_\mathfrak{h}$ and the action of $Z$, cf. 
\ref{tilchik}, \cite[Corollary 4.2.8]{Pan20}. \end{proof} \begin{cor}\label{ZactonGKinv} \begin{enumerate} \item $Z$ acts on $\gr^k \mathbb{B}^{+,\mathrm{la}}_{\mathrm{dR}}(U)^{G_K,z_k}$ via $\tilde\chi_k$, i.e. \[\gr^k \mathbb{B}^{+,\mathrm{la}}_{\mathrm{dR}}(U)^{G_K,z_k}=\gr^k\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)(k)^{G_K}=\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K}.\] \item The action of $Z$ on $\gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k}$ has a generalized eigenspace decomposition. The $\tilde\chi_k$-generalized eigenspace is equal to the $\tilde\chi_k$-eigenspace. In particular, \[\gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k,\tilde\chi_k}=\gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^{G_K}=\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k-1}.\] \end{enumerate} \end{cor} \begin{proof} For the first part, we note that $\gr^k \mathbb{B}^{+,\mathrm{la}}_{\mathrm{dR}}(U)^{G_K,z_k}=\mathcal{O}^{\mathrm{la}}_{K^p}(U)(k)^{G_K,z_k}$, hence it follows directly from Proposition \ref{SenopCasz}. For the second part, Proposition \ref{griFIl} implies that $\gr^{k-1} \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k}$ is naturally filtered by $U(\mathfrak{gl}_2(\mathbb{Q}_p))$-submodules \[ \mathcal{O}^{\mathrm{la}}_{K^p}(U)(j)^{G_K,z_k}\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes k-1-j},~~~j=0,\cdots,k-1.\] Again by Proposition \ref{SenopCasz}, we see that $\Omega$ acts on $\mathcal{O}^{\mathrm{la}}_{K^p}(U)(j)^{G_K,z_k}\otimes_{\mathcal{O}_{V_0}}\Omega^1_{V_0}(\mathcal{C})^{\otimes k-1-j}$ via $\frac{1}{2}(k-2j)^2-\frac{1}{2}$, which is equal to $\frac{1}{2}k^2-\frac{1}{2}$ only when $j=0$. 
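Let us spell out this last computation. On the $G_K$-invariants of the twist $\mathcal{O}^{\mathrm{la}}_{K^p}(U)(j)$, the operator $\theta_{\mathrm{Sen}}$ acts by $-j$ (the twist shifts the Sen operator by $j$ and $\Lie(\Gal(K_\infty/K))$ acts trivially on $G_K$-invariants), while $z$ acts by $z_k(z)=1-k$. Substituting these values into the quadratic relation of Proposition \ref{SenopCasz} gives \[2\Omega=(2(-j)-(1-k)+1)^2-1=(k-2j)^2-1,\] which is the asserted action of $\Omega$. 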
\end{proof} \begin{para} Now we can take the $\tilde\chi_k$-generalized eigenspace $E_{\tilde\chi_k}$ of $\gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k}$, or equivalently (by Corollary \ref{ZactonGKinv}), the kernel of $(\Omega-\tilde\chi_k(\Omega))^2$. By Proposition \ref{GKinvex} and Corollary \ref{ZactonGKinv}, it sits inside the exact sequence \[0\to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K} \to E_{\tilde\chi_k}\to \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}\to 0.\] The action of $\Omega-\tilde\chi_k(\Omega)$ on $E_{\tilde\chi_k}$ induces a natural map \[\tilde{N}_k:\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K} \otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}\to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K}.\] It is continuous, hence can be extended $C$-linearly to a map (which is also denoted by $\tilde{N}_k$ by abuse of notation) \[\tilde{N}_k:\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)\otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}\to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k).\] \end{para} \begin{lem} $N'_k=\frac{1}{2k}\tilde{N}_k$. \end{lem} \begin{proof} Let $f\in \mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}\otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k} =\gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^{G_K}\otimes \Omega^1_{V_0}(\mathcal{C})$. By definition, $N'_k(f)$ is computed as follows: we may choose $f_1\in \gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la},\tilde\chi_k}(U)^K$ such that \begin{itemize} \item $\theta_K^2 (f_1)=0$, where $\theta_K$ denotes $1\in\mathbb{Q}_p\cong\Lie(\Gal(K_\infty/K))$; \item $\gr^k\nabla_{\log}(f_1)=f$, where $\gr^k\nabla_{\log}:\gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U) \to \gr^{k-1}\mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)\otimes \Omega^1_{V_0}(\mathcal{C})$, cf. \eqref{grkPls}. \end{itemize} Then $N'_k(f)=\theta_K(f_1)$. 
Similarly, $\tilde{N}_k(f)$ is computed as follows: we may choose $f_2\in \gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k}$ such that \begin{itemize} \item $(\Omega-\tilde\chi_k(\Omega))^2\cdot f_2=0$; \item $\gr^k\nabla_{\log}(f_2)=f$. \end{itemize} Then $\tilde{N}_k(f)=(\Omega-\tilde\chi_k(\Omega))\cdot f_2$. Consider $f_c=f_1-f_2\in \gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{z_k,K}$. Since $\gr^k\nabla_{\log}(f_c)=0$, by \eqref{grkPls}, we have \[f_c\in \gr^k \mathbb{B}^{+,\mathrm{la}}_{\mathrm{dR}}(U)^{z_k,K}=\mathcal{O}^{\mathrm{la}}_{K^p}(U)(k)^{z_k,K}. \] By Proposition \ref{SenopCasz}, there is the quadratic relation \[(2(\theta_K-k)-(1-k)+1)^2-1=2\Omega\] on $\mathcal{O}^{\mathrm{la}}_{K^p}(U)(k)^{z_k,K}$. Apply it to $f_c$: expanding the left-hand side as $(2\theta_K-k)^2-1=4\theta_K^2-4k\theta_K+k^2-1$ and using $2\tilde\chi_k(\Omega)=k^2-1$, we get \[2\theta_K^2\cdot f_c-2k\theta_K(f_c)=(\Omega-\tilde\chi_k(\Omega))\cdot f_c.\] Note that $\theta_K^2$ annihilates $f_1$ and $\theta_K$ annihilates $f_2$. Hence $\theta_{K}^2(f_c)=0$. On the other hand, \begin{itemize} \item $\theta_K(f_c)=\theta_K(f_1)=N'_k(f)$ as $\theta_K$ annihilates $f_2$; \item $(\Omega-\tilde\chi_k(\Omega))\cdot f_c=-(\Omega-\tilde\chi_k(\Omega))\cdot f_2=-\tilde{N}_k(f)$ as $f_1$ has infinitesimal character $\tilde\chi_k$. \end{itemize} We conclude that \[N'_k(f)=\frac{1}{2k}\tilde{N}_k(f).\] \end{proof} \begin{para} It remains to compare $\tilde{N}_k$ with $\bar{d}'^k$. Note that both are continuous maps \[\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p}(U)\otimes\Omega^1_{V_0}(\mathcal{C})^{\otimes k}\cong \omega^{2k,\mathrm{la},(1-k,0)}_{K^p}(U)\to \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)\] which are $\mathcal{O}^{\mathrm{sm}}_{K^p}(U)$-linear. Here we use the Kodaira-Spencer isomorphism and our fixed trivialization of $\wedge^2 D$, cf. \ref{HEt}. 
Recall that in Corollary \ref{density}, we introduced \[\omega^{2k,\mathrm{la},(1-k,0)}_{K^p}(U)^{\mathfrak{n}-fin}\subseteq \omega^{2k,\mathrm{la},(1-k,0)}_{K^p}(U)\] which consists of elements of the form $(\frac{e_1}{\mathrm{t}})^{k-1}\sum_{i=0}^n a_ix^i,~~~~~a_i\in\omega^{k+1,\mathrm{sm}}_{K^p}(U)$. There is a natural $\mathfrak{gl}_2(\mathbb{Q}_p)$-equivariant isomorphism \[\omega^{2k,\mathrm{la},(1-k,0)}_{K^p}(U)^{\mathfrak{n}-fin}=\omega^{k+1,\mathrm{sm}}_{K^p}(U)\otimes_{\mathbb{Q}_p} M^{\vee}_{(0,1-k)},\] where $M^{\vee}_{(0,1-k)}\subseteq \omega^{k-1}_{K^p}(U)$ consists of $(\frac{e_1}{\mathrm{t}})^{k-1}\sum_{i=0}^n a_ix^i,~~~~~a_i\in\mathbb{Q}_p$. Similarly, we have \[\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)^{\mathfrak{n}-fin}\subseteq \mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)\] and \[\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)^{\mathfrak{n}-fin}=\omega^{k+1,\mathrm{sm}}_{K^p}(U)\otimes_{\mathbb{Q}_p} M^{\vee}_{(-k,1)},\] where $M^{\vee}_{(-k,1)}$ consists of $\mathrm{t}e_1^{-k-1}\sum_{i=0}^n a_ix^i,~~~~~a_i\in\mathbb{Q}_p$. \end{para} \begin{lem} \label{redVerma} Both $\tilde{N}_k$ and $\bar{d}'^k$ send $\omega^{2k,\mathrm{la},(1-k,0)}_{K^p}(U)^{\mathfrak{n}-fin}$ to $\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)^{\mathfrak{n}-fin}(k)$, i.e. \[\tilde{N}_k,~\bar{d}'^k: \omega^{k+1,\mathrm{sm}}_{K^p}(U)\otimes_{\mathbb{Q}_p} M^{\vee}_{(0,1-k)}\to \omega^{k+1,\mathrm{sm}}_{K^p}(U)\otimes_{\mathbb{Q}_p} M^{\vee}_{(-k,1)}(k).\] Moreover, both $\mathcal{O}^{\mathrm{sm}}_{K^p}(U)$-linear maps are induced from maps \[M^{\vee}_{(0,1-k)}\to M^{\vee}_{(-k,1)}(k)\] which by abuse of notation are still denoted by $\tilde{N}_k$ and $\bar{d}'^k$. \end{lem} \begin{proof} We first compute $\bar{d}'^k$. 
By Theorem \ref{I2}, for $a_n\in\omega^{k+1,\mathrm{sm}}_{K^p}(U)$, there exists a constant $\bar{c}\in\mathbb{Q}^\times$ such that \[\bar{d}^k(a_n(\frac{e_1}{\mathrm{t}})^{k-1}x^n)=\bar{c}a_n(\frac{e_1}{\mathrm{t}})^{k-1}(\frac{d}{dx})^k(x^n)(dx)^k=\bar{c}a_n(\frac{e_1}{\mathrm{t}})^{k-1} k!\binom{n}{k}x^{n-k} (dx)^k.\] Recall that we identify $dx$ with $\frac{\mathrm{t}}{e_1^2}$ in \ref{HEt}. Hence \[\bar{d}'^k(a_n(\frac{e_1}{\mathrm{t}})^{k-1}x^n)=\bar{c}a_n k!\binom{n}{k}\frac{\mathrm{t}x^{n-k}}{e_1^{k+1}}\in\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)^{\mathfrak{n}-fin}(k),\] i.e. $\bar{d}'^k((\frac{e_1}{\mathrm{t}})^{k-1}x^n)=\bar{c} k!\binom{n}{k}{\mathrm{t}e_1^{-k-1}x^{n-k}}$. Next we consider $\tilde{N}_k$. By identifying Faltings's extension with the relative Hodge-Tate filtration sequence (cf. the proof of \ref{GKzspli}), we can identify $\gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k}$ with \[\left(\Sym^k V\otimes_{\mathbb{Q}_p}\omega^{k,\mathrm{la}}_{K^p}(U)(k)\right)^{G_K,z_k}=\Sym^k V\otimes_{\mathbb{Q}_p} \omega^{k,\mathrm{la},(1-k,-k)}_{K^p}(U)(k)^{G_K}\] so that \begin{itemize} \item the injection $i:\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K}\to \gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k}$ is identified with the map $\Sym^k p|_{\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(U)(k)^{G_K}}$, where $p$ is the injective map in \eqref{twrHT}; \item the surjection $j:\gr^k \mathcal{O}\mathbb{B}_{\mathrm{dR}}^{+,\mathrm{la}}(U)^{G_K,z_k}\to \omega^{2k,\mathrm{la},(1-k,0)}_{K^p}(U)^{G_K}$ is identified with the $k$-th symmetric power of the surjective map $q$ in \eqref{twrHT} (when restricted to the $G_K$-invariants and $z_k$-isotypic parts) up to a sign. 
\end{itemize} Extending everything $C$-linearly, we get \begin{eqnarray}\label{kthsymwall} ~~~~~0\to\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p}(k)\xrightarrow{i} \Sym^k V\otimes \omega^{k,\mathrm{la},(1-k,-k)}_{K^p}(k)\xrightarrow{j} \omega^{2k,\mathrm{la},(1-k,0)}_{K^p}\to 0 \end{eqnarray} which becomes exact when passing to the $\tilde\chi_k$-generalized eigenspaces. \begin{lem} The sequence \eqref{twrHT} induces an exact sequence \[0\to\mathcal{O}^{\mathrm{la},(1,-1)}_{K^p}(U)^{\mathfrak{n}-fin}(1)\to V(1)\otimes_{\mathbb{Q}_p} \omega^{1,\mathrm{la},(0,-1)}_{K^p}(U)^{\mathfrak{n}-fin} \to \omega^{2,\mathrm{la},(0,0)}_{K^p}(U)^{\mathfrak{n}-fin}\to 0.\] Moreover under the isomorphism $\mathcal{O}^{\mathrm{la},(1,-1)}_{K^p}(U)^{\mathfrak{n}-fin}=\omega^{2,\mathrm{sm}}_{K^p}(U)\otimes_{\mathbb{Q}_p} M^{\vee}_{(-1,1)}$ and similar isomorphisms for $\omega^{1,\mathrm{la},(0,-1)}_{K^p}(U)^{\mathfrak{n}-fin},\omega^{2,\mathrm{la},(0,0)}_{K^p}(U)^{\mathfrak{n}-fin}$, this exact sequence is the tensor product of $\omega^{2,\mathrm{sm}}_{K^p}(U)$ with an exact sequence \[0\to M^{\vee}_{(-1,1)}(1)\to V(1)\otimes_{\mathbb{Q}_p} M^{\vee}_{(-1,0)} \to M^{\vee}_{(0,0)}\to 0.\] Similar results also hold for the $k$-th symmetric power of the sequence \eqref{twrHT}. \end{lem} \begin{proof} In \eqref{twrHT}, we have $p(f)=(\frac{fe_2}{\mathrm{t}},-\frac{fe_1}{\mathrm{t}})$ up to a sign and $q(f_1,f_2)=e_1f_1+e_2f_2$ for $f\in\mathcal{O}^{\mathrm{la}}_{K^p}(U)(1)$ and $f_1,f_2\in \omega^{1,\mathrm{la}}_{K^p}(U)(1)$. This implies all the claims here. \end{proof} By this lemma, there is a $\mathfrak{gl}_2(\mathbb{Q}_p)$-equivariant sequence \begin{eqnarray} \label{wallcr} 0\to M^{\vee}_{(-k,1)}(k) \to \Sym^k V\otimes_{\mathbb{Q}_p} M^{\vee}_{(-k,1-k)}(k)\to M^{\vee}_{(0,1-k)}\to 0 \end{eqnarray} which is induced from the $k$-th symmetric power of \eqref{twrHT} and becomes exact after passing to the $\tilde\chi_k$-generalized eigenspaces. 
Its tensor product with $\omega^{k+1,\mathrm{sm}}_{K^p}(U)$ is naturally identified with taking $\mathfrak{n}-fin$ vectors of \eqref{kthsymwall} on $U$. Note that $\tilde{N}_k$ is essentially obtained by applying the action of $Z$ to the middle term of \eqref{kthsymwall}. It follows from our discussion here that $\tilde{N}_k$ is induced from a map $M^{\vee}_{(0,1-k)}\to M^{\vee}_{(-k,1)}(k)$. \end{proof} \begin{para} \label{CatO} Now we can prove Proposition \ref{comdbark}. As we explained before, we only need to compare $\tilde{N}_k$ and $\bar{d}'^k$. By Corollary \ref{density}, $\omega^{2k,\mathrm{la},(1-k,0)}_{K^p}(U)^{\mathfrak{n}-fin}$ is a dense subset of $\omega^{2k,\mathrm{la},(1-k,0)}_{K^p}(U)$. Since both $\tilde{N}_k$ and $\bar{d}'^k$ are continuous, by Lemma \ref{redVerma}, it suffices to prove that \[\tilde{N}_k,\bar{d}'^k:M^{\vee}_{(0,1-k)}\to M^{\vee}_{(-k,1)}(k)\] differ by a non-zero constant. There are natural actions of $\mathfrak{gl}_2(\mathbb{Q}_p)$ on $M^{\vee}_{(0,1-k)}$ and $M^{\vee}_{(-k,1)}(k)$ and both maps $\tilde{N}_k,\bar{d}'^k$ are $\mathfrak{gl}_2(\mathbb{Q}_p)$-equivariant. We shall ignore the Tate twist here and only restrict to the action of $\mathfrak{sl}_2(\mathbb{Q}_p)$. Recall that $M^{\vee}_{(0,1-k)}=\{(\frac{e_1}{\mathrm{t}})^{k-1}\sum_{i=0}^n a_ix^i,~~~~~a_i\in\mathbb{Q}_p\}$. The action of $u^+=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ corresponds to the derivation with respect to $x$. From this, it is easy to see that \begin{itemize} \item $M^{\vee}_{(0,1-k)}$ is the dual of the Verma module of $\mathfrak{sl}_2(\mathbb{Q}_p)$ with highest weight $k-1$; \item Similarly, $M^{\vee}_{(-k,1)}$ is the dual of the Verma module with highest weight $-k-1$ (which is also isomorphic to the Verma module with highest weight $-k-1$). \end{itemize} Note that $k\geq 1$, hence $-k-1$ is anti-dominant and $k-1$ is dominant. 
This shows that $M^{\vee}_{(-k,1)}$ is irreducible and $M^{\vee}_{(0,1-k)}$ is an extension of $M^{\vee}_{(-k,1)}$ by a finite-dimensional irreducible representation of $\mathfrak{sl}_2(\mathbb{Q}_p)$. Therefore, \[\dim_{\mathbb{Q}_p}\Hom_{U(\mathfrak{sl}_2(\mathbb{Q}_p))}(M^{\vee}_{(0,1-k)},M^{\vee}_{(-k,1)})=1.\] Since both $\tilde{N}_k$ and $\bar{d}'^k$ belong to this Hom space, it suffices to show the following lemma. \end{para} \begin{lem} Both maps $\tilde{N}_k,\bar{d}'^k$ are non-zero. \end{lem} \begin{proof} For $\bar{d}'^k$, this follows from the explicit formula of $\bar{d}'^k$ in Theorem \ref{I2}. For $\tilde{N}_k$, recall that it is induced from the action of $Z$ on the $\tilde\chi_k$-generalized eigenspace $E_k$ of $\Sym^k V\otimes_{\mathbb{Q}_p} M^{\vee}_{(-k,1-k)}$. Note that $M^{\vee}_{(-k,1-k)}$ is the Verma module of $\mathfrak{sl}_2(\mathbb{Q}_p)$ with highest weight $-1=-\rho$. Hence in the language of translation functors, $E_k$ is the translation of $M^{\vee}_{(-k,1-k)}$ from $-1$ to $-k-1$. It is well-known that $E_k$ is a projective envelope of $M^{\vee}_{(-k,1)}$ in category $\mathcal{O}$ and the action of $Z$ on $E_k$ is non-semisimple. See for example \cite[3.12]{Hum08}. For the reader's convenience, we sketch a proof here. When $k=1$, $E_1=V\otimes_{\mathbb{Q}_p} M^{\vee}_{(-1,0)}$ and a simple computation shows that $Z$ acts non-semisimply on it. This implies the general case $k\geq 2$ because $E_k$ is the translation of $E_1$ from $-2$ to $-k-1$ and this translation functor is an equivalence of categories between $\mathcal{O}_{-2}$ and $\mathcal{O}_{-k-1}$, cf. \cite[7.8]{Hum08}. \end{proof} \begin{rem} \label{Qstr} Clearly $M^{\vee}_{(i,j)}$ has a natural $\mathbb{Q}$-structure preserved by the action of $\mathfrak{sl}_2(\mathbb{Q})\subseteq\mathfrak{sl}_2(\mathbb{Q}_p)$. Hence this proof actually shows that $\tilde{N}_k,\bar{d}^k$ differ by a constant in $\mathbb{Q}$. 
Therefore the constant $c_k$ in Proposition \ref{comdbark} can be chosen to be in $\mathbb{Q}^\times$. \end{rem} \section{Regular de Rham representations in the completed cohomology} \label{mrs} In this section, we will combine results obtained in the previous two sections and study regular de Rham representations appearing in the completed cohomology. \subsection{Completed cohomology} \begin{para} In order to state our main result, we first recall the completed cohomology of modular curves introduced by Emerton \cite{Eme06}. Let $K^p=\prod_{l\neq p}K_l$ be an open compact subgroup of $\mathrm{GL}_2(\mathbb{A}_f^p)$. Set \[\tilde{H}^i(K^p,\mathbb{Z}/p^n):=\varinjlim_{K_p\subseteq\mathrm{GL}_2(\mathbb{Q}_p)} H^i(X_{K^pK_p}(\mathbb{C}),\mathbb{Z}/p^n),\] where $K_p$ runs through all open compact subgroups of $\mathrm{GL}_2(\mathbb{Q}_p)$. Recall that $X_{K^pK_p}$ denotes the complete modular curve of level $K^pK_p$ over $\mathbb{Q}$. The completed cohomology of tame level $K^p$ is defined as \[\tilde{H}^i(K^p,\mathbb{Z}_p):=\varprojlim_n \tilde{H}^i(K^p,\mathbb{Z}/p^n).\] In general, its torsion part is annihilated by $p^n$ for some $n$ by \cite[Theorem 1.1]{CE12}. But in the case of modular curves, it is well-known that $\tilde{H}^i(K^p,\mathbb{Z}_p)$ is torsionfree. Suppose $W$ is a $\mathbb{Q}_p$-Banach space. We denote by \[\tilde{H}^i(K^p,W):=\tilde{H}^i(K^p,\mathbb{Z}_p)\widehat\otimes_{\mathbb{Z}_p} W.\] This is naturally a $\mathbb{Q}_p$-Banach space equipped with a continuous action of $\mathrm{GL}_2(\mathbb{Q}_p)$. If $W=\mathbb{Q}_p$, then $\tilde{H}^i(K^p,\mathbb{Q}_p)$ is an admissible representation. Since $X_{K^pK_p}$ is defined over $\mathbb{Q}$, using the \'etale cohomology, we have a natural Galois action of $G_{\mathbb{Q}}$ on everything, which commutes with the action of $\mathrm{GL}_2(\mathbb{Q}_p)$. Recall that $\mathfrak{n}=\{\begin{pmatrix} 0 & *\\ 0 & 0 \end{pmatrix}\}\subseteq \mathfrak{gl}_2(\mathbb{Q}_p)=\Lie(\mathrm{GL}_2(\mathbb{Q}_p))$. 
\end{para} \begin{thm} \label{MTCH} Let $E$ be a finite extension of $\mathbb{Q}_p$ and \[\rho:G_{\mathbb{Q}}\to\mathrm{GL}_2(E)\] be a two-dimensional continuous absolutely irreducible representation of $G_{\mathbb{Q}}$. By abuse of notation, we also use $\rho$ to denote its underlying representation space. Suppose that \begin{enumerate} \item $\rho$ appears in $\tilde{H}^1(K^p,E)$, i.e. $\Hom_{E[G_{\mathbb{Q}}]}(\rho,\tilde{H}^1(K^p,E))\neq 0$. We denote by \[\tilde{H}^1(K^p,E)[\rho]\subseteq \tilde{H}^1(K^p,E)\] the image of the evaluation map $\rho\otimes_{E}\Hom_{E[G_{\mathbb{Q}}]}(\rho,\tilde{H}^1(K^p,E))\to \tilde{H}^1(K^p,E)$. \item $\rho|_{G_{\mathbb{Q}_p}}$ is de Rham of Hodge-Tate weights $0,k$ for some integer $k>0$. \end{enumerate} Then \begin{enumerate} \item (classicality) $\rho$ arises from a cuspidal eigenform of weight $k+1$. \item (finiteness) Let $K_p=1+p^nM_2(\mathbb{Z}_p)$ for some $n\geq 2$. Denote by $\tilde{H}^1(K^p,E)[\rho]^{K_p-\mathrm{an}}\subseteq \tilde{H}^1(K^p,E)[\rho]$ the subspace of $K_p$-analytic vectors. Then the subspace of $\mathfrak{n}$-invariants \[\tilde{H}^1(K^p,E)[\rho]^{K_p-\mathrm{an},\mathfrak{n}}\] is a finite-dimensional vector space over $E$. \end{enumerate} \end{thm} A description of $\tilde{H}^1(K^p,E)^{\mathrm{la}}[\rho]\widehat\otimes_{\mathbb{Q}_p} C$ as a representation of $\mathrm{GL}_2(\mathbb{Q}_p)$ will be given in Subsection \ref{rbpLLC} below. \begin{rem} \label{relDPS} The main result of \cite{DPS20} (see also \cite[Proposition 6.1.5]{Pan20}) says that $\tilde{H}^1(K^p,E)[\rho]^{\mathrm{la}}$ has an infinitesimal character. In their follow-up work \cite{DPS22}, Dospinescu-Pa\v{s}k\=unas-Schraen show that this is actually enough to deduce our finiteness result. In fact, they even prove a finiteness result on the $K_p$-analytic vectors, cf. \cite[Corollary 6.7]{DPS22}. Their method is purely representation-theoretic.
\end{rem} \begin{para}[Hecke algebra] The proof of Theorem \ref{MTCH} will be given in the next subsection. To make precise the statement that $\rho$ arises from a cuspidal eigenform, we need the (big) Hecke algebra. See \cite[6.1.1]{Pan20} for more details. Let $S$ be a finite set of rational primes containing $p$ such that $K_l\subseteq\mathrm{GL}_2(\mathbb{Q}_l)$ is maximal for $l\notin S$. We have the usual Hecke operators $T_l$ and $S_l$, cf. \ref{Hd}. For an open compact subgroup $K_p$ of $\mathrm{GL}_2(\mathbb{Q}_p)$, we define \[\mathbb{T}(K^pK_p)\subseteq\End_{\mathbb{Z}_p}(H^1(X_{K^pK_p}(\mathbb{C}),\mathbb{Z}_p))\] as the $\mathbb{Z}_p$-subalgebra generated by the Hecke operators $T_l,S_l^{\pm1},l\notin S$; it does not depend on the choice of $S$. The (big) Hecke algebra of tame level $K^p$ is defined as the projective limit \[\mathbb{T}(K^p):=\varprojlim_{K_p} \mathbb{T}(K^pK_p).\] This is a complete semi-local noetherian $\mathbb{Z}_p$-algebra and $\mathbb{T}(K^p)/\mathfrak{m}$ is a finite field for any maximal ideal $\mathfrak{m}$. It acts faithfully on $\tilde{H}^1(K^p,\mathbb{Z}_p)$ and commutes with the action of $\mathrm{GL}_2(\mathbb{Q}_p)\times G_{\mathbb{Q}}$. Moreover, there is a continuous $2$-dimensional determinant $D_S$ of $G_{\mathbb{Q},S}$ valued in $\mathbb{T}(K^p)$ in the sense of Chenevier \cite{Che14} such that the characteristic polynomial of $D_S(\Frob_l)$ is \[X^2-l^{-1}T_lX+l^{-1}S_l.\] Its twist by the inverse of the cyclotomic character satisfies the Eichler-Shimura congruence relation, i.e. \[\Frob_l^2-T_l\Frob_l+lS_l=0\] on $\tilde{H}^1(K^p,\mathbb{Z}_p)$. The Poincar\'e duality on $X_{K^pK_p}(\mathbb{C})$ implies that $\mathbb{T}(K^p)$ acts on $\tilde{H}^0(K^p,\mathbb{Z}_p)$. Let $\lambda:\mathbb{T}(K^p)\to E$ be a $\mathbb{Z}_p$-algebra homomorphism. We can associate a semi-simple Galois representation (unique up to conjugation) \[\rho_{\lambda}:G_{\mathbb{Q},S}\to \mathrm{GL}_2(E)\] whose determinant is $\lambda\circ D_S$.
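Let us spell out the elementary computation relating the characteristic polynomial of $D_S(\Frob_l)$ to the Eichler-Shimura relation: if $\alpha,\beta$ denote its roots, then twisting by the inverse of the cyclotomic character replaces them by $l\alpha,l\beta$, and
\[\alpha+\beta=l^{-1}T_l,\quad \alpha\beta=l^{-1}S_l \;\Longrightarrow\; (l\alpha)+(l\beta)=T_l,\quad (l\alpha)(l\beta)=lS_l,\]
so the twisted Frobenius satisfies $X^2-T_lX+lS_l=0$.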
Denote the $\lambda$-isotypic part by \[\tilde{H}^1(K^p,E)[\lambda]\subseteq \tilde{H}^1(K^p,E).\] Then it follows from the Eichler-Shimura relation that \[\Hom_{E[G_{\mathbb{Q}}]}(\rho_\lambda(-1),\tilde{H}^1(K^p,E))=\Hom_{E[G_{\mathbb{Q}}]}(\rho_\lambda(-1),\tilde{H}^1(K^p,E)[\lambda]).\] Assume that $\rho_\lambda$ is absolutely irreducible. Then by the main result of \cite{BLR91}, $\tilde{H}^1(K^p,E)[\lambda]$ is also $\rho_{\lambda}(-1)$-isotypic, that is \[\tilde{H}^1(K^p,E)[\lambda]=\Hom_{E[G_{\mathbb{Q}}]}(\rho_\lambda(-1),\tilde{H}^1(K^p,E)[\lambda])\otimes_E \rho_\lambda(-1).\] \end{para} \subsection{Fontaine operator on completed cohomology} \begin{para} \label{rholambda} Let $\rho$ be as in Theorem \ref{MTCH} and let $\lambda':\mathbb{T}(K^p)\to E$ be such that $\rho_{\lambda'}\cong \rho$. The classicality part of Theorem \ref{MTCH} is equivalent to saying that $M_{k+1}(K^p)\otimes_{\mathbb{Q}_p} E[\lambda']\neq 0$. Let $\lambda:\mathbb{T}(K^p)\to E$ be the twist of $\lambda'$ by the cyclotomic character, i.e. $\lambda(T_l)=l^{-1}\lambda'(T_l)$, $l\notin S$. Then $\rho_\lambda(-1)\cong \rho$ and \[\tilde{H}^1(K^p,E)[\lambda]=\tilde{H}^1(K^p,E)[\rho].\] Since $\rho$ is de Rham of Hodge-Tate weights $0,k$, there is a Hodge-Tate decomposition \[\tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p}E)[\lambda]=\tilde{H}^1(K^p,E)[\rho]\widehat{\otimes}_{\mathbb{Q}_p}C=\tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p}E)[\lambda]_0\oplus \tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p}E)[\lambda]_k,\] where $\tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p}E)[\lambda]_i$ denotes the Hodge-Tate weight $i$ part for $i=0,k$; both parts are nonzero. Note that $\mathrm{GL}_2(\mathbb{Q}_p)$ acts continuously on $\tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p}E)[\lambda]$. We will prove the following result in this subsection. Recall that $I^1_{k-1}$ was introduced in Definition \ref{I^1k}. \end{para} \begin{thm} \label{HT0kerF} Let $\rho$ be as in Theorem \ref{MTCH}.
Then there is a natural $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant isomorphism of $\lambda$-isotypic parts \[\tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p}E)[\lambda]_0^{\mathrm{la}}\cong \ker I^1_{k-1}\otimes_{\mathbb{Q}_p} E[\lambda].\] \end{thm} \begin{proof}[Proof of Theorem \ref{MTCH}] By Theorem \ref{HT0kerF}, $\ker I^1_{k-1}\otimes_{\mathbb{Q}_p} E [\lambda]\neq 0$. Hence Theorem \ref{sd} implies that $M_{k+1}(K^p)\otimes_{\mathbb{Q}_p} E[\lambda']\cong M_{k+1}\cdot D_0^{-1}[\lambda]\neq 0$, which proves the classicality part. The finiteness part follows from Corollary \ref{knfin}. \end{proof} To prove this result, we recall the comparison theorem between the completed cohomology and the cohomology of $\mathcal{X}_{K^p}$, which is essentially due to Scholze \cite[\S IV.2]{Sch15}. \begin{prop} \label{BdRcomp} There is a natural $G_{\mathbb{Q}_p}$-equivariant isomorphism of $B_{\mathrm{dR},k}^+$-modules \[\tilde{H}^i(K^p,B_{\mathrm{dR},k}^+)\cong H^i({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k}^+),\] where $\mathbb{B}_{\mathrm{dR},k}^+$ denotes the truncated de Rham period sheaf introduced in Subsection \ref{dRps}. \end{prop} \begin{proof} When $k=1$, this isomorphism reduces to \begin{eqnarray} \label{cis} \tilde{H}^i(K^p,C)\cong H^i({\mathscr{F}\!\ell},\mathcal{O}_{K^p}), \end{eqnarray} which was established in \cite[Corollary 4.4.3]{Pan20}. For $k\geq 2$, we need the following lemma. \begin{lem} \label{cmap} There is a natural $G_{\mathbb{Q}_p}$-equivariant homomorphism of $B_{\mathrm{dR},k}^+$-modules \[\tilde{H}^i(K^p,B_{\mathrm{dR},k}^+)\to H^i({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k}^+)\] whose reduction modulo $t$ agrees with the isomorphism \eqref{cis}. \end{lem} Granting this lemma for the moment: since $B_{\mathrm{dR},k}^+$ and $\mathbb{B}_{\mathrm{dR},k}^+$ are flat over $B_{\mathrm{dR},k}^+$, an induction on $k$ shows that this map is an isomorphism.
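One way to run this induction is by d\'evissage, which we sketch: under the identification $B_{\mathrm{dR},k}^+=B_{\mathrm{dR}}^+/t^k$, there is an exact sequence of coefficients
\[0\to t^{k-1}B_{\mathrm{dR},k}^+\to B_{\mathrm{dR},k}^+\to B_{\mathrm{dR},k-1}^+\to 0,\qquad t^{k-1}B_{\mathrm{dR},k}^+\cong C(k-1),\]
and similarly for $\mathbb{B}_{\mathrm{dR},k}^+$. Comparing the two associated long exact cohomology sequences, the maps on the outer terms are isomorphisms in all degrees by the case $k=1$ and the inductive hypothesis, so the five lemma applies to the middle terms.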
\end{proof} \begin{proof}[Proof of Lemma \ref{cmap}] The most natural proof is to use the primitive comparison theorem on modular curves \[H^i_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^pK_p},\mathbb{Z}/{p^n})\otimes_{\mathbb{Z}_p} A^a_{\mathrm{inf}} \cong H^i_{\mathrm{pro\acute{e}t}}(\mathcal{X}_{K^pK_p},\mathbb{A}^a_\mathrm{inf}/p^n),\] cf. \cite[Proof of Theorem 8.4]{Sch13}, and to argue as in \cite[Proof of Theorem IV.2.1]{Sch15}. Here we sketch a construction which only uses the \'etale site instead of the pro-\'etale site. For $X=\mathcal{X}_{K^pK_p}$ or $\mathcal{X}_{K^p}$, we denote by $W_n(\mathcal{O}_X^+/p)$ the sheaf on $X_{\mathrm{\acute{e}t}}$ which is the sheafification of $U\mapsto W_n(\mathcal{O}_X^+(U)/p)$, where $W_n$ denotes the ring of $n$-truncated Witt vectors. There are natural maps \[\begin{multlined} H^i_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^pK_p},\mathbb{Z}/p^n)\otimes_{\mathbb{Z}_p}W_n(\mathcal{O}_C/p)\to H^i_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^pK_p},W_n(\mathcal{O}_C/p))\\ \to H^i_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^pK_p},W_n(\mathcal{O}^+_{\mathcal{X}_{K^pK_p}}/p)) \to H^i_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^p},W_n(\mathcal{O}^+_{\mathcal{X}_{K^p}}/p)). \end{multlined}\] The direct limit over $K_p$ gives a map \[\tilde{H}^i(K^p,\mathbb{Z}/p^n)\otimes_{\mathbb{Z}_p} W_n(\mathcal{O}_C/p)\to H^i_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^p},W_n(\mathcal{O}^+_{\mathcal{X}_{K^p}}/p)).\] There is an almost isomorphism \[H^i_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^p},W_n(\mathcal{O}^+_{\mathcal{X}_{K^p}}/p))^a\cong H^i(\mathcal{X}_{K^p},W_n(\mathcal{O}^+_{\mathcal{X}_{K^p}}/p))^a,\] where the right hand side is computed on the analytic site. Indeed, by induction on $n$, it suffices to prove the case $n=1$, that is $H^i_{\mathrm{\acute{e}t}}(\mathcal{X}_{K^p},\mathcal{O}^{+a}_{\mathcal{X}_{K^p}}/p)\cong H^i(\mathcal{X}_{K^p},\mathcal{O}^{+a}_{\mathcal{X}_{K^p}}/p)$.
But this is clear as, on affinoid subsets, $\mathcal{O}^{+a}_{\mathcal{X}_{K^p}}$ has no higher cohomology on both the \'etale and analytic sites, cf. \cite[Proposition 6.14, 7.13]{Sch12}. Hence there is a map of $W_n(\mathcal{O}_C/p)^a$-modules \[f_n:\tilde{H}^i(K^p,\mathbb{Z}/p^n)\otimes_{\mathbb{Z}_p} W_n(\mathcal{O}_C/p)^a\to H^i(\mathcal{X}_{K^p},W_n(\mathcal{O}^+_{\mathcal{X}_{K^p}}/p))^a.\] Recall that $A_\mathrm{inf}=W(\mathcal{O}_{C^\flat})$ and $\displaystyle \mathcal{O}_{C^\flat}=\varprojlim_{x\mapsto x^p} \mathcal{O}_C/p$. We denote elements of $\mathcal{O}_{C^\flat}$ by $(\cdots,a_1,a_0)$, $a_i\in\mathcal{O}_C/p$, satisfying $a_{n+1}^p=a_n$. For $n\geq 0$, the projection map sending $(\cdots,a_1,a_0)$ to $a_n$ defines a ring homomorphism $\mathcal{O}_{C^\flat}\to\mathcal{O}_C/p$ and induces a quotient map $\phi'_n:A_\mathrm{inf}\to W(\mathcal{O}_C/p)$. Denote by $\phi_n:A_\mathrm{inf}\to W_n(\mathcal{O}_C/p)$ the composite of $\phi'_n$ with the natural projection $W(\mathcal{O}_C/p)\to W_n(\mathcal{O}_C/p)$. We claim that the quotient map \[A_\mathrm{inf}\to A_\mathrm{inf}/((\ker\theta)^k,p^m)\] factors through $\phi_l$ for $l$ sufficiently large. It suffices to show that this map factors through $\phi'_n$ for some $n$. For $a=(\cdots,a_1,a_0)\in\mathcal{O}_{C^\flat}$ with $a_0=0$, it is clear that $[a]\in(\ker\theta,p)$. Now if $a_n=0$, then $a=b^{p^n}$ for some $b=(\cdots,b_1,b_0)\in\mathcal{O}_{C^\flat}$ with $b_0=0$ and $[a]=[b]^{p^n}\in((\ker\theta)^{p^n},p^{n+1})$ by an easy induction on $n$. This implies our claim. Similarly, the quotient map $\mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}\to \mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}/((\ker\theta)^k,p^m)$ factors through some $W_n(\mathcal{O}^+_{\mathcal{X}_{K^p}}/p)$.
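The induction on $n$ used here can be made explicit: if $[b]^{p^{n-1}}=u+p^nv$ with $u\in(\ker\theta)^{p^{n-1}}$ (the case $n=1$ being $[b]\in(\ker\theta,p)$), then
\[[b]^{p^n}=(u+p^nv)^p=u^p+\sum_{j=1}^{p}\binom{p}{j}u^{p-j}(p^nv)^j\in\bigl((\ker\theta)^{p^n},p^{n+1}\bigr),\]
since $u^p\in(\ker\theta)^{p^n}$ and every term with $j\geq 1$ is divisible by $p^{n+1}$: for $1\leq j\leq p-1$ because $p\mid\binom{p}{j}$, and for $j=p$ because $np\geq n+1$.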
Therefore $f_n$ induces maps of $A^a_\mathrm{inf}/((\ker\theta)^k,p^m)$-modules \[g_{k,m}:\tilde{H}^i(K^p,\mathbb{Z}/p^m)\otimes_{\mathbb{Z}_p} A^a_\mathrm{inf}/((\ker\theta)^k,p^m)\to H^i(\mathcal{X}_{K^p}, \mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/((\ker\theta)^k,p^m))\] such that $g_{k,m}\equiv g_{k,m+1}\mod p^m$ and $g_{k,m}\equiv g_{k+1,m}\mod (\ker\theta)^k$. \begin{lem} For $k\geq 1$, \begin{enumerate} \item the torsion part of $H^i(\mathcal{X}_{K^p}, \mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k)$ is killed by $p^n$ for some $n$; \item $\displaystyle H^i(\mathcal{X}_{K^p}, \mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k)\cong\varprojlim_{m}H^i(\mathcal{X}_{K^p}, \mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/((\ker\theta)^k,p^m)).$ \end{enumerate} \end{lem} \begin{proof} First assume $k=1$. Then $\mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}/\ker\theta= \mathcal{O}^+_{\mathcal{X}_{K^p}}$. By \cite[Corollary 4.4.3]{Pan20}, \[\tilde{H}^i(K^p,\mathcal{O}^a_C)\cong H^i(\mathcal{X}_{K^p}, \mathcal{O}^{+a}_{\mathcal{X}_{K^p}})\cong\varprojlim_m H^i(\mathcal{X}_{K^p}, \mathcal{O}^{+a}_{\mathcal{X}_{K^p}}/p^m).\] The torsion part of $\tilde{H}^i(K^p,\mathcal{O}^a_C)$ is killed by some $p^n$, which implies our claims when $k=1$. In general, we do induction on $k$. Note that $\ker\theta$ is principal, hence $\mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k$ is torsionfree as $\mathbb{A}_{\mathrm{inf},\mathcal{X}_{K^p}}/\ker\theta=\mathcal{O}^+_{\mathcal{X}_{K^p}}$ is torsionfree. Let $\mathcal{F}_k=\mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k$ and fix a generator $\tilde{a}$ of $\ker\theta$. Then $\mathcal{F}_1=\mathcal{O}^{+a}_{\mathcal{X}_{K^p}}$. There is an exact sequence \[0\to \mathcal{F}_{k-1}\xrightarrow{\times \tilde{a}} \mathcal{F}_{k}\to \mathcal{F}_1\to 0. \] The first claim follows by induction on $k$.
To see the second claim, consider \[0\to H^i(\mathcal{X}_{K^p},\mathcal{F}_k)/p^n \to H^i(\mathcal{X}_{K^p},\mathcal{F}_k/p^n) \to H^{i+1}(\mathcal{X}_{K^p},\mathcal{F}_k)[p^n],\] which forms a projective system when $n$ varies. The transition map $H^{i+1}(\mathcal{X}_{K^p},\mathcal{F}_k)[p^{n+1}]\to H^{i+1}(\mathcal{X}_{K^p},\mathcal{F}_k)[p^n]$ is multiplication by $p$. Hence $\displaystyle \varprojlim_{n} H^{i+1}(\mathcal{X}_{K^p},\mathcal{F}_k)[p^n] =0$ by what we have proved. In particular, $\displaystyle \varprojlim_{n} H^i(\mathcal{X}_{K^p},\mathcal{F}_k)/p^n \cong \varprojlim_{n} H^i(\mathcal{X}_{K^p},\mathcal{F}_k/p^n) $ with torsion parts annihilated by some $p^n$. From this and \cite[Lemma 4.4.4]{Pan20}, we deduce that \[ H^i(\mathcal{X}_{K^p},\mathcal{F}_k)\cong \varprojlim_{n} H^i(\mathcal{X}_{K^p},\mathcal{F}_k)/p^n \cong \varprojlim_{n} H^i(\mathcal{X}_{K^p},\mathcal{F}_k/p^n)\] in exactly the same way as the proof of \cite[Corollary 4.4.3]{Pan20}. \end{proof} Now taking the inverse limit of $g_{k,m}$ with respect to $m$, we get a map \[\varprojlim_m\tilde{H}^i(K^p,\mathbb{Z}/p^m)\otimes_{\mathbb{Z}_p} A^a_\mathrm{inf}/((\ker\theta)^k,p^m)\to H^i(\mathcal{X}_{K^p}, \mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k),\] which is clearly $G_{\mathbb{Q}_p}$-equivariant and $A^a_\mathrm{inf}$-linear. When $k=1$, this map agrees with the isomorphism \eqref{cis}. In general, after inverting $p$ and granting the following lemma, this gives the desired map in Lemma \ref{cmap}. \end{proof} \begin{lem} There are natural isomorphisms \begin{enumerate} \item $\displaystyle \tilde{H}^i(K^p, A_\mathrm{inf}/(\ker\theta)^k)\cong \varprojlim_m\tilde{H}^i(K^p,\mathbb{Z}/p^m)\otimes_{\mathbb{Z}_p} A_\mathrm{inf}/((\ker\theta)^k,p^m)$. \item $H^i(\mathcal{X}_{K^p}, \mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k)\cong H^i({\mathscr{F}\!\ell},\pi_{\mathrm{HT}*}\mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k )$.
\end{enumerate} \end{lem} \begin{proof} For the first claim, we prove the following stronger result: \[\tilde{H}^i(K^p,M)\cong \varprojlim_m \tilde{H}^i(K^p,\mathbb{Z}/p^m)\otimes_{\mathbb{Z}_p} M\] for any $p$-adically complete torsion-free $\mathbb{Z}_p$-module $M$. Note that there exists a complex $\tilde{S}^\bullet$ of $p$-adically complete torsion-free $\mathbb{Z}_p$-modules such that $H^i(\tilde{S}^\bullet)\cong \tilde{H}^i(K^p,\mathbb{Z}_p)$ and $H^i(\tilde{S}^\bullet/p^m)\cong \tilde{H}^i(K^p,\mathbb{Z}/p^m)$, cf. \cite[p. 27, 28]{Eme06}. Hence there is a projective system of short exact sequences \[0\to \tilde{H}^i(K^p,\mathbb{Z}_p)/p^m\to \tilde{H}^i(K^p,\mathbb{Z}/p^m) \to \tilde{H}^{i+1}(K^p,\mathbb{Z}_p)[p^m],\] and the transition map $\tilde{H}^{i+1}(K^p,\mathbb{Z}_p)[p^{m+1}]\to \tilde{H}^{i+1}(K^p,\mathbb{Z}_p)[p^m]$ is multiplication by $p$. From this we easily deduce our claim by taking the tensor product with $M$. For the second claim, it suffices to show that $R^i\pi_{\mathrm{HT}*}\mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k=0$ when $i\geq 1$. This follows since, for $U\subseteq\mathfrak{B}$, the preimage $\pi_{\mathrm{HT}}^{-1}(U)$ is affinoid perfectoid, hence $H^i(\pi_{\mathrm{HT}}^{-1}(U),\mathbb{A}^a_{\mathrm{inf},\mathcal{X}_{K^p}}/(\ker\theta)^k)=0$ for $i\geq 1$ by induction on $k$. \end{proof} We need one more lemma for proving Theorem \ref{HT0kerF}.
\begin{lem} \label{dRcompne} There is a natural $G_{\mathbb{Q}_p}$-equivariant isomorphism of $B_{\mathrm{dR},k+1}^+$-modules \[\tilde{H}^1(K^p,B_{\mathrm{dR},k+1}^+\otimes_{\mathbb{Q}_p}E)^{\mathrm{la}}[\lambda]\cong H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k})\otimes_{\mathbb{Q}_p} E[\lambda].\] \end{lem} \begin{proof} Recall that $Z$ denotes the center of $U(\mathfrak{gl}_2(\mathbb{Q}_p))$ and $\tilde\chi_k:Z\to\mathbb{Q}_p$ denotes the infinitesimal character of the $(k-1)$-th symmetric power of the dual of the standard representation, cf. Section \ref{tilchik}. Since $\rho_\lambda=\rho(1)$ has Hodge-Tate weights $-1,k-1$, it follows that the subspace of $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic vectors $\tilde{H}^1(K^p,E)^{\mathrm{la}}[\lambda]$ has infinitesimal character $\tilde\chi_k$ by \cite[Proposition 6.1.5]{Pan20}. Hence \[\tilde{H}^1(K^p,B_{\mathrm{dR},k+1}^+\otimes_{\mathbb{Q}_p}E)^{\mathrm{la}}[\lambda]\cong\tilde{H}^1(K^p,B_{\mathrm{dR},k+1}^+\otimes_{\mathbb{Q}_p}E)^{\mathrm{la},\tilde\chi_k}[\lambda]\cong H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la},\tilde\chi_k}\otimes_{\mathbb{Q}_p} E[\lambda],\] where the second isomorphism follows from Proposition \ref{BdRcomp}. It remains to show \[H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+})^{\mathrm{la},\tilde\chi_k}\otimes_{\mathbb{Q}_p} E[\lambda]\cong H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k})\otimes_{\mathbb{Q}_p} E[\lambda].\] This is clear when $k>1$ by Proposition \ref{compinfch}. 
When $k=1$, note that \[H^0({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},2}^+)^{\mathrm{la}}\cong \tilde{H}^0(K^p,B_{\mathrm{dR},2}^+)^{\mathrm{la}}=\tilde{H}^0(K^p,\mathbb{Q}_p)^{\mathrm{la}}\widehat\otimes_{\mathbb{Q}_p}B_{\mathrm{dR},2}^+.\] Again by Proposition \ref{compinfch}, it suffices to show that the localization $\tilde{H}^0(K^p,E)_{\mathfrak{p}_\lambda}$ vanishes, where $\mathfrak{p}_\lambda=\ker \lambda$. But this follows from our assumption that $\rho_\lambda$ is absolutely irreducible. Indeed, we can find an element $g\in [G_{\mathbb{Q}},G_\mathbb{Q}]$ such that $\mathrm{tr}(\rho_\lambda(g))\neq 2$. Let $X^2-a_gX+1$ be the characteristic polynomial of $D_S(g)$. Then $a_g-2\notin\mathfrak{p}_\lambda$ annihilates $\tilde{H}^0(K^p,E)$ because the action of $G_\mathbb{Q}$ on $\tilde{H}^0(K^p,E)$ factors through its abelianization $G_{\mathbb{Q}}^{ab}$. \end{proof} \begin{proof}[Proof of Theorem \ref{HT0kerF}] The $\gr^0$-part of the isomorphism in Lemma \ref{dRcompne} gives \[\tilde{H}^1(K^p, C\otimes_{\mathbb{Q}_p}E)[\lambda]^{\mathrm{la}}\cong H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p})\otimes_{\mathbb{Q}_p} E[\lambda].\] Recall that $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},\tilde\chi_k}_{K^p})=H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p})\oplus H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p})$ with $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p})$ being the Hodge-Tate weight zero part, cf. Section \ref{tilchik}. It is enough to show that \[I^1_{k-1}\otimes 1: H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p})\otimes E\to H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1,-k)}_{K^p})\otimes E(k)\] is zero on $H^1({\mathscr{F}\!\ell},\mathcal{O}^{\mathrm{la},(1-k,0)}_{K^p})\otimes E[\lambda]\cong\tilde{H}^1(K^p, C\otimes_{\mathbb{Q}_p}E)[\lambda]^\mathrm{la}_0$.
Let $N=\Hom_{E[G_\mathbb{Q}]}(\rho,\tilde{H}^1(K^p,E)^{\mathrm{la}})$ and let $W$ be \[\tilde{H}^1(K^p,B_{\mathrm{dR},k+1}^+\otimes_{\mathbb{Q}_p}E)^{\mathrm{la}}[\lambda]\cong ( B_{\mathrm{dR},k+1}^+\otimes_{\mathbb{Q}_p}\rho)\widehat\otimes_E N.\] We can apply the construction in Section \ref{NWLB} to $W$ and get the Fontaine operator \[N_W:W_{0,0}=\tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p} E)[\lambda]^{\mathrm{la}}_0\to W_{0,-k}(k)=\tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p} E)[\lambda]^{\mathrm{la}}_k(k).\] Since $\rho$ is \textit{de Rham} by our assumption, Fontaine's result (cf. Theorem \ref{FondR}) implies that $N_W=0$. On the other hand, it follows from Corollary \ref{I1Fono} that the Fontaine operator on $H^1({\mathscr{F}\!\ell},\mathbb{B}_{\mathrm{dR},k+1}^{+,\mathrm{la},\tilde\chi_k})\otimes_{\mathbb{Q}_p} E$ agrees with $I^1_{k-1}\otimes 1$ up to a unit. Thus, under the isomorphism in Lemma \ref{dRcompne}, it follows from the functorial property of the Fontaine operator that \[I^1_{k-1}\otimes 1|_{\tilde{H}^1(K^p,C\otimes_{\mathbb{Q}_p} E)[\lambda]^{\mathrm{la}}_0}=N_W=0.\] \end{proof} \subsection{Relation with the \texorpdfstring{$p$}{Lg}-adic local Langlands correspondence for \texorpdfstring{$\mathrm{GL}_2(\mathbb{Q}_p)$}{Lg}} \label{rbpLLC} \begin{para} \label{noraf} Let $\rho$ be as in Theorem \ref{MTCH} and keep the notation of Section \ref{rholambda}. Combining Theorem \ref{HT0kerF} with results in Subsection \ref{Sd}, we obtain a geometric description of the $\mathrm{GL}_2(\mathbb{Q}_p)$-locally analytic representation $\tilde{H}^1(K^p,E)[\lambda]^{\mathrm{la}}$. When the automorphic representation corresponding to $\rho$ is supercuspidal at $p$, our result is a special case of the Breuil-Strauch conjecture, which was fully solved by Dospinescu-Le Bras in \cite{DLB17}. See Remark \ref{scla}.
When the automorphic representation is a principal series at $p$, our result is a special case of a conjecture of Berger-Breuil and Emerton, solved by Liu-Xie-Zhang and Colmez. See Remark \ref{psla}. In fact, our work shows that both cases can be formulated in a uniform way, cf. Remark \ref{scpsConj} below for more details. We first introduce some notation. For simplicity, we fix an embedding $\tau:E\to C$ and denote the composite map $\tau\circ\lambda$ by \[\lambda_\tau:\mathbb{T}(K^p)\to C.\] Consider $\displaystyle M_{k+2}:=\varinjlim_{K\subseteq\mathrm{GL}_2(\mathbb{A}_f)} M_{k+2}(K)=\varinjlim_{K^p\subseteq\mathrm{GL}_2(\mathbb{A}^p_f)} M_{k+2}(K^p)$. It admits a natural smooth action of $\mathrm{GL}_2(\mathbb{A}_f)$ and $M_{k+2}(K^p)=M_{k+2}^{K^p}$. We have just shown that $M_{k+2}(K^p)\otimes D_0^{-1}[\lambda_\tau]\neq 0$. (Recall that $\otimes D_0$ is the same as twisting with $|\cdot|^{-1}\circ \det$, cf. Section \ref{D0a}.) Hence $\lambda_\tau$ corresponds to an irreducible representation of $\mathrm{GL}_2(\mathbb{A}_f)$ \[\pi^\infty=\otimes'\pi_l\subseteq M_{k+2}\otimes D_0^{-1}\] such that $(\pi^{\infty})^{K^p}=M_{k+2}(K^p)[\lambda_\tau]$. Now assume that $\pi_p$ is special or supercuspidal. In particular, $\pi^\infty$ can be transferred to $(D\otimes\mathbb{A}_f)^\times$ by the Jacquet-Langlands correspondence. Recall that $\mathcal{A}_{D,(k,0)}$ was introduced in Definition \ref{AFD}. Then there is a $(D\otimes\mathbb{A}_f)^\times$-equivariant embedding \[\pi'{}^{\infty}:=\pi^{\infty,p}\otimes_C \pi'_p\to \mathcal{A}_{D,(k,0)}\] for some irreducible smooth representation $\pi'_p$ (corresponding to $\pi_p$ under the local Jacquet-Langlands correspondence in a suitable normalization) of $D_p^\times$. Here $\pi^{\infty,p}=\otimes'_{l\neq p}\pi_l$. \end{para} \begin{thm} \label{scch} Suppose $\pi_p$ is supercuspidal.
There is a $\mathrm{GL}_2(\mathbb{Q}_p)^0$-equivariant isomorphism between $\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}$ and \[(\pi^{\infty,p})^{K^p}\otimes_C \left[\left(H^1({\mathscr{F}\!\ell},j_{!} \omega^{-k+1,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \pi'_p\right)^{\mathcal{O}_{D_p}^\times}\cdot{\varepsilon'_p}^{-k+1}/ \Sym^{k-1} V^*\otimes_{\mathbb{Q}_p} \pi_p\right].\] \begin{rem} \label{GL2oGL2} As explained in Remark \ref{GL20}, this isomorphism is $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant by writing $(H^1({\mathscr{F}\!\ell},j_{!} \omega^{-k+1,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \pi'_p)^{\mathcal{O}_{D_p}^\times}$ as $\displaystyle (\bigoplus_{i\in\mathbb{Z}} H^1({\mathscr{F}\!\ell},j_{!} \omega^{-k+1,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \pi'_p)^{{D_p}^\times}$. \end{rem} \begin{proof}[Proof of Theorem \ref{scch}] By Theorem \ref{HT0kerF} and Corollary \ref{BScon}, $\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}$ is isomorphic to \[ \left[(\pi^{\infty,p})^{K^p}\otimes \left(H^1({\mathscr{F}\!\ell},j_{!} \omega^{-k+1,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \pi'_p\right)^{\mathcal{O}_{D_p}^\times}\cdot{\varepsilon'_p}^{-k+1}\right]/ \left[(\pi^{\infty,p})^{K^p}\otimes\Sym^{k-1} V^*\otimes \pi_p\right].\] It remains to move the tensor product with $(\pi^{\infty,p})^{K^p}$ outside of the quotient. This can be done by considering new vectors of $\pi_l^{K_l}$, $l\neq p$. \end{proof} \begin{rem} By Corollary \ref{BScon}, there is a $\mathrm{GL}_2(\mathbb{Q}_p)^0$-equivariant exact sequence \[\begin{multlined}0\to (\pi^{\infty})^{K^p}\otimes_C \Sym^{k-1} V^*\to \tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}} \to \\ (H^1({\mathscr{F}\!\ell},j_{!} \omega^{k+1,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes \pi'{}^{\infty})^{K^p\times\mathcal{O}_{D_p}^\times}\otimes \det{}^{k+1}\cdot{\varepsilon'_p}^{-k}\to 0,\end{multlined}\] which can be upgraded to a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant sequence.
\end{rem} \begin{rem} Since $\rho$ is $2$-dimensional, $\rho\otimes_{\mathbb{Q}_p} C\cong (\rho\otimes_{\mathbb{Q}_p} C)_0^{\oplus 2}$ as $C$-vector spaces. Thus there is a non-canonical $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant isomorphism \[\left(\Hom_{E[G_{\mathbb{Q}}]}(\rho,\tilde{H}^1(K^p,E)^{\mathrm{la}})\right)\widehat\otimes_E C\cong\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}.\] Hence Theorem \ref{scch} also gives a description of $\tilde{H}^1(K^p,C)[\lambda_\tau]^{\mathrm{la}}$ as a representation of $\mathrm{GL}_2(\mathbb{Q}_p)$. On the other hand, $\tilde{H}^1(K^p,E)^{\mathrm{la}}$ is an admissible locally analytic $E$-representation of $\mathrm{GL}_2(\mathbb{Q}_p)$ in the sense of Schneider-Teitelbaum \cite{ST02}. We remind the reader that in \cite{ST02}, the coefficient field is assumed to be spherically complete; in particular, $C$ does not satisfy their assumption. Theorem \ref{scch} nonetheless implies a certain admissibility of $(H^1({\mathscr{F}\!\ell},j_{!} \omega^{-k+1,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \pi'_p)^{\mathcal{O}_{D_p}^\times}$ as a locally analytic representation of $\mathrm{GL}_2(\mathbb{Q}_p)$ over $C$. \end{rem} \begin{rem} \label{scla} We explain the connection between Theorem \ref{scch} and the Breuil-Strauch conjecture in this case. For simplicity, we restrict ourselves to the case $k=1$. Note that $\pi'_p$ is fixed by $1+p^n\mathcal{O}_{D_p}$ for some $n\geq 0$. Hence \[(H^1({\mathscr{F}\!\ell},j_{!} \omega^{0,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \pi'_p)^{\mathcal{O}_{D_p}^\times} =\left[H^1\left({\mathscr{F}\!\ell},j_!\pi^{(0)}_{\mathrm{Dr},n*} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}\right)\otimes_C \pi'_p\right]^{\mathcal{O}_{D_p}^\times},\] where $\pi^{(0)}_{\mathrm{Dr},n}:\mathcal{M}_{\mathrm{Dr},n}^{(0)}\to \Omega$ denotes the projection map, cf. Subsection \ref{Drinfty}.
As explained in Section \ref{H1dRc}, there is an exact sequence \[0\to H^1_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},n}^{(0)}) \to H^1(\mathcal{M}_{\mathrm{Dr},n}^{(0)},\mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}})\xrightarrow{d_{\mathrm{dR}}} H^1(\mathcal{M}_{\mathrm{Dr},n}^{(0)},\Omega^1_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}).\] Taking the $\lambda$-isotypic part in Theorem \ref{ijdR}, we deduce that \[\left(H^1_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},n}^{(0)})\otimes_C \pi'_p\right)^{\mathcal{O}_{D_p}^\times}\cong D_{\mathrm{dR}}(\rho|_{G_{\mathbb{Q}_p}})\otimes_E \pi_p.\] See also \cite[Th\'{e}or\`{e}me 1.4]{DLB17}. Here $D_{\mathrm{dR}}(\rho|_{G_{\mathbb{Q}_p}})=(\rho\otimes_{\mathbb{Q}_p}B_{\mathrm{dR}})^{G_{\mathbb{Q}_p}}$. The inclusion map $\pi_p\subseteq \left[H^1\left({\mathscr{F}\!\ell},j_!\pi^{(0)}_{\mathrm{Dr},n*} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}\right)\otimes_C \pi'_p\right]^{\mathcal{O}_{D_p}^\times}$ in Theorem \ref{scch} is nothing but \[\pi_p\cong\Fil^1 D_{\mathrm{dR}}(\rho|_{G_{\mathbb{Q}_p}})\otimes_E \pi_p\subseteq D_{\mathrm{dR}}(\rho|_{G_{\mathbb{Q}_p}})\otimes_E \pi_p\cong \left(H^1_{\mathrm{dR},c}(\mathcal{M}_{\mathrm{Dr},n}^{(0)})\otimes_C \pi'_p\right)^{\mathcal{O}_{D_p}^\times}.\] On the other hand, Emerton's local-global compatibility result (\cite[Corollary 6.3.6]{Pan20}) implies that \[\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}\cong (\pi^{\infty,p})^{K^p}\otimes_E \Pi(\rho|_{G_{\mathbb{Q}_p}})^{\mathrm{la}}\] where $ \Pi(\rho|_{G_{\mathbb{Q}_p}})$ denotes the unitary $E$-Banach space representation of $\mathrm{GL}_2(\mathbb{Q}_p)$ attached to $\rho|_{G_{\mathbb{Q}_p}}$ under the $p$-adic local Langlands correspondence for $\mathrm{GL}_2(\mathbb{Q}_p)$. (Note that $\rho|_{G_{\mathbb{Q}_p}}$ is absolutely irreducible as $\pi_p$ is supercuspidal.) 
Therefore Theorem \ref{scch} is equivalent to \[\Pi(\rho|_{G_{\mathbb{Q}_p}})^{\mathrm{la}}\widehat\otimes_E C\cong \left[H^1\left({\mathscr{F}\!\ell},j_!\pi^{(0)}_{\mathrm{Dr},n*} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}\right)\otimes_C \pi'_p\right]^{\mathcal{O}_{D_p}^\times}/ \Fil^1 D_{\mathrm{dR}}(\rho|_{G_{\mathbb{Q}_p}})\otimes_E \pi_p.\] This is exactly what was conjectured by Breuil-Strauch for $\rho|_{G_{\mathbb{Q}_p}}$. In general, for any de Rham representation $\rho_p:G_{\mathbb{Q}_p}\to\mathrm{GL}_2(E)$ of Hodge-Tate weights $0,1$ such that $\rho_p$ corresponds to $\pi_p$ under the usual local Langlands correspondence, there is an isomorphism $D_{\mathrm{dR}}(\rho_p)\cong D_{\mathrm{dR}}(\rho|_{G_{\mathbb{Q}_p}})$ unique up to $E^\times$, and $\rho_p$ is completely determined by the position of $\Fil^1 D_{\mathrm{dR}}(\rho_p)$. Breuil-Strauch conjectured that there is a $\mathrm{GL}_2(\mathbb{Q}_p)$-isomorphism \[\Pi(\rho_p)^{\mathrm{la}}\widehat\otimes_E C\cong \left[H^1\left({\mathscr{F}\!\ell},j_!\pi^{(0)}_{\mathrm{Dr},n*} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}\right)\otimes_C \pi'_p\right]^{\mathcal{O}_{D_p}^\times}/ \Fil^1 D_{\mathrm{dR}}(\rho_p)\otimes_E \pi_p.\] The full conjecture was proved by Dospinescu-Le Bras \cite[Th\'eor\`eme 1.4]{DLB17}, taking into account the Serre duality \[\Hom_{C}^{\mathrm{cont}} \left(H^1({\mathscr{F}\!\ell},j_!\pi^{(0)}_{\mathrm{Dr},n*} \mathcal{O}_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}),C \right)\cong H^0(\mathcal{M}_{\mathrm{Dr},n}^{(0)}, \Omega^1_{\mathcal{M}_{\mathrm{Dr},n}^{(0)}}).\] \end{rem} It follows from Theorem \ref{HT0kerF} and the second part of Corollary \ref{BScon} that there is also a description of $\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}$ when $\pi_p$ is a principal series. \begin{thm} \label{PSla} Suppose $\pi_p$ is a principal series.
There is a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant isomorphism \[\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}} \cong \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^{k-1}) \cdot \varepsilon^{-k+1}{e'_2}^{k-1}[\lambda]/(\pi^{\infty})^{K^p}\otimes \Sym^{k-1} V^*,\] where $\mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)}$ denotes the locally analytic induction. \end{thm} \begin{rem} By Corollary \ref{BScon}, there is also a $\mathrm{GL}_2(\mathbb{Q}_p)$-equivariant exact sequence \[\begin{multlined}0\to (\pi^{\infty})^{K^p}\otimes_C \Sym^{k-1} V^*\to \tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}} \to \\ \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p),\Sym^{k-1}) \cdot \varepsilon^{-k+1}{e'_1}^{k}{e'_2}^{-1}[\lambda]\to 0.\end{multlined}\] \end{rem} \begin{rem} \label{psla} Again we assume $k=1$ here for simplicity, but the discussion below works for general $k$. In this case, $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))$ can be identified with the log rigid cohomology of the tower of Igusa curves. The comparison theorem between log rigid cohomology and log crystalline cohomology implies that there is an isomorphism $H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p)) [\lambda]\cong (\pi^{\infty,p})^{K^p}\otimes_E D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}})$, where $D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}}):=(\rho\otimes_{\mathbb{Q}_p} B_{\mathrm{cris}})^{G_{\mathbb{Q}_p(\mu_{p^n})}}$ for some sufficiently large $n$ and carries an action of the Weil group $W_{\mathbb{Q}_p}$ by Fontaine's recipe. Using the congruence relation of $B\times W_{\mathbb{Q}_p}^{ab}$ on the Igusa curves (cf. \cite[\S 1.6.4]{Car86}), we can equip $D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}})$ with a $B$-action and this isomorphism becomes $B$-equivariant.
Hence \[\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}} \cong (\pi^{\infty,p})^{K^p}\otimes \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}})/\pi_p,\] where again the embedding $\pi_p\to \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}})$ comes from the $\Fil^1$. If we further assume $\rho|_{G_{\mathbb{Q}_p}}$ is absolutely irreducible, then Emerton's local-global compatibility result implies that \[\Pi(\rho|_{G_{\mathbb{Q}_p}})^{\mathrm{la}}\widehat\otimes_{E} C\cong \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} D_{\mathrm{cris}}(\rho|_{G_{\mathbb{Q}_p}})/\pi_p.\] The local form of this isomorphism was conjectured by Berger-Breuil in \cite[Conjecture 5.3.7]{BB10} and by Emerton \cite[Conjecture 6.7.3]{Eme06C}, and was proved by Liu-Xie-Zhang \cite{LXZ12} and Colmez \cite{Col14}. \end{rem} \begin{rem} \label{scpsConj} As noted in Remark \ref{DRstn}, we have \[\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}} \cong \mathbb{H}^1(DR_k)[\lambda_\tau]/ \Fil^1 \mathbb{H}^1(DR_k)[\lambda_\tau],\] which gives the descriptions of $\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}$ in Theorems \ref{scch} and \ref{PSla}, when $\pi_p$ is either supercuspidal or a principal series. This suggests that the Breuil-Strauch conjecture in the supercuspidal case and the conjecture of Berger-Breuil and Emerton in the principal series case can be formulated in a uniform way. We believe this is also true when $\pi_p$ is special. \end{rem} \begin{rem}\label{stn} Finally we remark on the case when $\pi_p$ is special. Again we assume $k=1$ for simplicity.
By Theorem \ref{dec1}, $\ker I^1_0[\tilde\lambda_\tau]$ has a natural increasing filtration $\Fil_\bullet$ with \[\gr_n \ker I^1_0[\tilde\lambda_\tau]=\left\{ \begin{array}{lll} H^1(\mathcal{O}^{\mathrm{sm}}_{K^p})[\lambda]\cong (\pi^{p,\infty})^{K^p}\otimes_{C} \pi_p , &n=1&\\ (\pi^{p,\infty})^{K^p}\otimes_C (H^1(j_{!} \omega^{2,D_p^\times-\mathrm{sm}}_{\mathrm{Dr}})\otimes_C \pi'_p)^{\mathcal{O}_{D_p}^\times}\otimes\det, &n=2&\\ \mathrm{Ind}_B^{\mathrm{GL}_2(\mathbb{Q}_p)} H^1_{\mathrm{rig}}(\mathrm{Ig}(K^p))[\lambda]\cdot {e'_1}{e'_2}^{-1} , &n=3& \end{array}\right.\] If $\rho|_{G_{\mathbb{Q}_p}}$ is absolutely irreducible, then Emerton's local-global compatibility result implies that $\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}\cong (\pi^{p,\infty})^{K^p}\widehat\otimes_{E} \Pi(\rho|_{G_{\mathbb{Q}_p}})^{\mathrm{la}}$. A description of $ \Pi(\rho|_{G_{\mathbb{Q}_p}})^{\mathrm{la}}$ was conjectured by Emerton \cite[Conjecture 6.7.7]{Eme06C} and proved in \cite{LXZ12,Col14}. By Theorem \ref{HT0kerF}, there is an inclusion $\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}\subseteq \ker I^1_0[\tilde\lambda_\tau]$. We claim that it is actually an equality: a careful analysis using Serre duality shows that $\Fil_2\ker I^1_0[\tilde\lambda_\tau]$ is essentially $(\pi^{p,\infty})^{K^p}\widehat\otimes_{E}\Sigma(2,\mathcal{L})$, where $\Sigma(2,\mathcal{L})$ was introduced in \cite{Bre04}. See the construction of Breuil and Morita's duality in \S2, \S3 of the reference, especially Remarque 3.5.7. On the other hand, it can be shown that $\gr_3\ker I^1_0[\tilde\lambda_\tau]$ essentially agrees with the locally analytic induction in the quotient of the sequence in \cite[Conjecture 6.7.7]{Eme06C} (after taking the completed tensor product with $(\pi^{p,\infty})^{K^p}$). It follows that $\ker I^1_0[\tilde\lambda_\tau]=\tilde{H}^1(K^p,C)[\lambda_\tau]_0^{\mathrm{la}}$, i.e. $\ker I^1_0[\tilde\lambda_\tau]$ is the eigenspace of $\lambda$. \end{rem} \end{document}
\begin{document} \noindent{\small Published version:}\\ \noindent{\emph{Probability and Mathematical Statistics (Wroclaw), }}\\ \noindent{\emph{ Vol. 37, Fasc. 1 (2017), pp.~101-118 (open access)}}\\ \noindent{\emph{ doi: 10.19195/0208-4147.37.1.4 }}\\ \begin{center} {\sf \LARGE Cram\'{e}r Type Large Deviations for Trimmed L-statistics}\footnote{Research partially supported by the Russian Foundation for Basic Research (grant RFBR no. SS-2504.2014.1).} \vspace*{7mm} {\large Nadezhda Gribkova}\footnote{E-mail: n.gribkova@spbu.ru; nv.gribkova@gmail.com} \textit{Faculty of Mathematics and Mechanics, St.\,Petersburg State University,\\ St.\,Petersburg 199034, Russia} \end{center} \begin{quote} \noindent{\bf Abstract.} {\small In this paper, we propose a~new approach to the investigation of asymptotic properties of trimmed $L$-statistics and we apply it to the Cram\'{e}r type large deviation problem. Our results can be compared with the ones in Callaert et~al.~(1982) -- the first and, as far as we know, the only article where some results on probabilities of large deviations for the trimmed $L$-statistics were obtained, but under some strict and unnatural conditions. Our approach is to approximate the trimmed $L$-statistic by a~non-trimmed $L$-statistic (with smooth weight function) based on Winsorized random variables. Using this method, we establish the Cram\'{e}r type large deviation results for the trimmed $L$-statistics under quite mild and natural conditions.} \noindent{\bf Keywords:} trimmed $L$-statistics, central limit theorem, large deviations, moderate deviations. \noindent{\bf MSC:} Primary: 62G30, 62E20; Secondary: 60F05, 60F10. \end{quote} \section{Introduction and main results} \label{imtro} Consider a~sequence $X_1,X_2,\dots $ of independent identically distributed real-valued random variables with distribution function $F$, and let $X_{1:n}\le \dots \le X_{n:n}$ denote the order statistics corresponding to the first $n$ observations.
Define the trimmed $L$-statistic by \begin{equation} \label{tn} L_n=n^{-1}\sum_{i=k_n+1}^{n-m_n}c_{i,n}X_{i:n}, \end{equation} where $c_{i,n}\in \mathbb{R}$, \ $k_n$, $m_n$ are two sequences of integers such that $0\le k_n<n-m_n\le n$. Put $\alpha_n=k_n/n$, \ $\beta_n=m_n/n$. Throughout this paper, we suppose that $\alpha_n \to \alpha$, $\beta_n\to \beta$, as $n\to\infty$, where $0<\alpha<1-\beta<1$, i.e. we focus on the case of heavily trimmed $L$-statistics. In this paper we investigate Cram\'{e}r type large deviations, i.e. relative errors in the central limit theorem for $L_{n}$. First we note that in the case of a non-trimmed $L$-statistic ($k_n=m_n=0$) with the coefficients $c_{i,n}$ generated by a smooth weight function the Cram\'{e}r type large and moderate deviations were studied in a~number of papers (see Vandemaele and Veraverbeke~\cite{vv82}, Bentkus and Zitikis~\cite{bz90}, Aleskeviciene~\cite{al91}). In contrast, to the best of our knowledge, there exists a~sole paper -- Callaert et al.~\cite{cvv82} -- devoted to the large deviations for the trimmed $L$-statistics. However, the result in~\cite{cvv82} was obtained under some restrictive and unnatural conditions imposed on the underlying distribution $F$ and the weights. The method of proof in Callaert et al.~\cite{cvv82} is based on the following two well-known facts: 1. The joint distribution of $X_{i:n}$ coincides with the joint distribution of $F^{-1}(G(Z_{i:n}))$, $i=1,\dots,n$, where $G$ is the distribution function of the standard exponential distribution and $Z_{i:n}$ are the order statistics corresponding to a~sample of $n$ independent random variables from the distribution $G$. 2. The order statistics $Z_{i:n}$ are distributed as $\sum_{k=1}^iZ_k/(n-k+1)$, where the $Z_k$ are independent standard exponential random variables. These two facts and the Taylor expansion together enable one to get an~approximation of $L_{n}$ by a~sum of weighted i.i.d.
random variables for which some suitable known result on Cram\'{e}r type large deviations can be applied. This approach was first implemented by Bjerve~\cite{bj77} to prove a~Berry--Esseen type result for the $L$-statistics. However, the use of this method requires excessive smoothness conditions imposed on $F$ and leads to an unnatural and complicated normalization of the $L$-statistic (cf.~Callaert et al.~\cite{cvv82}). In this article, we propose another approach to the investigation of asymptotic properties of the trimmed $L$-statistics, different from that used in Bjerve~\cite{bj77} and Callaert et al.~\cite{cvv82}. Our idea is to approximate the trimmed $L$-statistic by a~non-trimmed $L$-statistic with weights generated by a~smooth weight function, where the approximating $L$-statistic is based on the order statistics corresponding to a~sample of $n$ i.i.d. Winsorized random variables. The asymptotic properties that we are interested in are often well studied in the case of $L$-statistics with a~smooth weight function and bounded observations; this allows us to obtain a~desired result for the trimmed $L$-statistic by applying a~result of the corresponding type to the approximating non-trimmed $L$-statistic, so it remains only to evaluate the remainder in the approximation. Here, we apply our method to obtain a~result on probabilities of large deviations for the trimmed $L$-statistics, and we establish it under mild and natural conditions. This result on large deviations can be viewed as a~strengthening of the result of Callaert et al.~\cite{cvv82}. To conclude this introduction, we give a~brief review of the relevant literature. The class of $L$-statistics is one of the most commonly used classes in statistical inference. We refer to the monographs by David and Nagaraja~\cite{david_2003}, Serfling~\cite{serfling_90}, Shorack and Wellner~\cite{shorack}, and van der Vaart~\cite{vanderv} for an~introduction to the theory and applications of $L$-statistics.
There is a~vast literature on asymptotic properties of $L$-statistics. Since we focus on the case of heavily trimmed $L$-statistics, we will mention mainly sources appropriate to our case. The most significant contribution to the establishment of the central limit theorem for (trimmed) $L$-statistics was made by Shorack~\cite{shor69}-\cite{shor72} and Stigler~\cite{s69}-\cite{s74}. Mason and Shorack~\cite{mas_shor_90} obtained necessary and sufficient conditions for the asymptotic normality of the trimmed $L$-statistics. Berry--Esseen type bounds under different sets of conditions were obtained by Bjerve~\cite{bj77}, Helmers~\cite{helm81}-\cite{helm_e2}, and Gribkova~\cite{gri}. A~great contribution to the study of second order asymptotic properties of $L$-statistics was made by Helmers~\cite{helm80}-\cite{helm_e2}, who established Edgeworth expansions for (trimmed) $L$-statistics. In papers by Bentkus et al.~\cite{bgz}, Friedrich~\cite{friedrich}, Putter and van Zwet~\cite{pz} and van Zwet~\cite{zwet}, Berry--Esseen type bounds and Edgeworth expansions for $L$-statistics were derived as consequences of the very general results for symmetric statistics established in these papers. Some interesting results on Chernoff type large deviations (for non-trimmed $L$-statistics with smooth weight function) were obtained by Boistard~\cite{boi}. Recently, Gao and Zhao~\cite{gao_zhao} proposed a~general delta method in the theory of Chernoff type large deviations and illustrated it by many examples including $M$-estimators and $L$-statistics. A~survey on $L$-statistics and some modern applications of them in economics and actuarial risk theory can be found in Greselin~et~al.~\cite{zit_2009}. We will now proceed to the statement of our results.
Define the left-continuous inverse of $F$: $F^{-1}(u)= \inf \{ x: F(x) \ge u \}$, \ $0<u\le 1$, \ $F^{-1}(0)=F^{-1}(0^+)$, and let $F_n$, $F_n^{-1}$ denote the empirical distribution function and its inverse respectively. Let $J$ be a~function defined in an~open set $I$ such that $[\alpha,1-\beta]\subset I\subseteq(0,1)$. We will also consider the trimmed $L$-statistics with coefficients generated by the weight function $J$ \begin{equation} \label{tn0} L_{n}^0=n^{-1}\sum_{i=k_n+1}^{n-m_n}c_{i,n}^0X_{i:n}=\int_{\alpha_n}^{1-\beta_n}J(u)F_n^{-1}(u)\,du, \end{equation} where $c_{i,n}^0=n\int_{(i-1)/n}^{i/n}J(u)\,du$. To state our results, we will need the following set of assumptions. \noindent{\bf (i)} {\it $J$ is Lipschitz in $I$, i.e. there exists a~constant $C\ge 0$ such that} \begin{equation} \label{LipJ} |J(u)-J(v)|\le C|u-v|,\quad \forall \ \ u,\,v\in I. \end{equation} \noindent{\bf (ii)} {\it $F^{-1}$ satisfies a~H\"{o}lder condition of order $0<\varepsilon\le 1$ in some neighborhoods $U_{\alpha}$ and $U_{1-\beta}$ of $\alpha$ and $1-\beta$.} \noindent{\bf (iii)} {\it $\max(|\alpha_n-\alpha|,\,|\beta_n-\beta|)=O\bigl( n^{-\frac 1{2+\varepsilon}}\bigr)$, where $\varepsilon$ is the H\"{o}lder index from condition}~{\bf (ii)}. \noindent{\bf (iv)} {\it with $\varepsilon$ from conditions} {\bf (ii)}-{\bf (iii)} $$ \sum_{i=k_n+1}^{n-m_n}|c_{i,n}-c_{i,n}^0|=O(n^{\frac 1{2+\varepsilon}}). $$ Define a~sequence of centering constants \begin{equation} \label{mun} \mu_n=\int_{\alpha_n}^{1-\beta_n}J(u)F^{-1}(u)\,du. \end{equation} Since $\alpha_n\to\alpha$, $\beta_n\to\beta$ as $n \to \infty$, both variables $L_{n}^0$ and $\mu_n$ are well defined for all sufficiently large $n$. 
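As a concrete numerical illustration (ours, not part of the original paper), the statistic $L_{n}^0$ of \eqref{tn0} with coefficients $c_{i,n}^0=n\int_{(i-1)/n}^{i/n}J(u)\,du$ can be computed directly from a sample; numpy and a midpoint quadrature for the coefficient integrals are assumptions of this sketch.

```python
import numpy as np

def gen_coeff(J, i, n, grid=400):
    # c_{i,n}^0 = n * \int_{(i-1)/n}^{i/n} J(u) du, via the midpoint rule;
    # J must accept a numpy array of points in (0, 1)
    h = 1.0 / (n * grid)
    u = (i - 1) / n + (np.arange(grid) + 0.5) * h
    return n * np.sum(J(u)) * h

def trimmed_L(sample, J, k_n, m_n):
    # L_n^0 = n^{-1} * sum_{i=k_n+1}^{n-m_n} c_{i,n}^0 * X_{i:n}
    x = np.sort(np.asarray(sample, dtype=float))   # order statistics X_{i:n}
    n = x.size
    return sum(gen_coeff(J, i, n) * x[i - 1]
               for i in range(k_n + 1, n - m_n + 1)) / n
```

For $J\equiv 1$ the generated coefficients are all $1$ and the statistic reduces to $n^{-1}\sum_{i=k_n+1}^{n-m_n}X_{i:n}$, a trimmed mean up to the normalization by $n$ rather than by the number of retained observations.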
It is well known (cf.,~e.g.,~\cite{mas_shor_90},~\cite{s74},~\cite{vanderv}) that when the inverse $F^{-1}$ is continuous at the two points $\alpha$ and $1-\beta$, smoothness condition~\eqref{LipJ} implies the weak convergence to the normal law: $\sqrt{n}(L_{n}^0-\mu_n)\Rightarrow N(0,\sigma^2)$, where \begin{equation} \label{sigma} \sigma^2=\sigma^2(J,F)=\int_{\alpha}^{1-\beta}\int_{\alpha}^{1-\beta} J(u)J(v)(u\wedge v-uv)\, dF^{-1}(u)\,dF^{-1}(v), \end{equation} and $u\wedge v=\min(u,v)$. Here and in the sequel, we use the convention that $\int_a^b=\int_{[a,b)}$ when integrating with respect to the left-continuous integrator $F^{-1}$. Throughout the article, we assume $\sigma>0$. Define the distribution functions of the normalized $L_{n}$ and $L_{n}^0$ respectively \begin{equation} \label{dfs} F_{L_{n}}(x) =\textbf{P}\{\sqrt{n}(L_{n}-\mu_n)/\sigma\le x\},\quad F_{L_{n}^0}(x) =\textbf{P}\{\sqrt{n}(L_{n}^0-\mu_n)/\sigma\le x\}. \end{equation} Let $\Phi$ denote the standard normal distribution function. Here is our first result on Cram\'{e}r type large deviations for $L_{n}$. \begin{theorem} \label{thm1} Suppose that $F^{-1}$ satisfies condition {\bf (ii)} for some $0<\varepsilon\le 1$ and the sequences $\alpha_n$ and $\beta_n$ satisfy {\bf (iii)}. In addition, assume that the weights $c_{i,n}$ satisfy {\bf (iv)} for some function $J$ satisfying condition {\bf (i)}. Then for every sequence $a_n\to 0$ and each $A>0$ \begin{equation} \label{thm_1} \begin{split} 1- F_{L_{n}}(x) &= [1-\Phi(x)](1+o(1)),\\ F_{L_{n}}(-x)&=\Phi(-x)(1+o(1)), \end{split} \end{equation} as \,$n \to \infty$, uniformly in the range $-A\le x\le a_n n^{\varepsilon/(2(2+\varepsilon))}$. \end{theorem} The proof of our main results is relegated to Section~3. Theorem~\ref{thm1} directly implies the following two corollaries. \begin{corollary} \label{cor1} Suppose that the conditions of Theorem~\ref{thm1} are satisfied with $\varepsilon=1$, i.e.
$F^{-1}$ is Lipschitz in some neighborhoods $U_{\alpha}$ and $U_{1-\beta}$ of $\alpha$ and $1-\beta$. Then for every sequence $a_n\to 0$ and each $A>0$ relations~\eqref{thm_1} hold true, uniformly in the range $-A\le x\le a_n n^{1/6}$. \end{corollary} \begin{corollary} \label{cor2} Let $c_{i,n}=c_{i,n}^0=n\int_{(i-1)/n}^{i/n}J(u)\,du$ \,$(k_n+1 \le i\le n-m_n)$, where $J$ is a~function satisfying {\bf (i)}. Furthermore, assume that conditions {\bf (ii)} and {\bf (iii)} hold for some $0<\varepsilon\le 1$. Then relations~\eqref{thm_1} with $L_{n}=L_{n}^0$ hold true for every sequence $a_n\to 0$ and each $A>0$, uniformly in the range $-A\le x\le a_n n^{\varepsilon/(2(2+\varepsilon))}$. \end{corollary} Theorem~\ref{thm1} can be compared with the~result by~Callaert et al.~\cite{cvv82}, where it was assumed that the derivative $H'=(F^{-1}\circ G)'$ exists and satisfies a~H\"{o}lder condition of order $0<\varepsilon\le 1$ in some open set containing $[G^{-1}(\alpha),G^{-1}(1-\beta)]$, where $G$ is the standard exponential distribution function. Moreover, some unnatural conditions were imposed on the weights and $H'$ (cf.~conditions~(A2) and~(B) in Callaert et al.~\cite{cvv82}). In contrast, we use the natural scale parameter $\sigma$ -- the square root of the asymptotic variance of $L_{n}$ -- for the normalization, and our smoothness condition {\bf (ii)} for $F^{-1}$ is much weaker than the one in Callaert et al.~\cite{cvv82}. Our Theorem~\ref{thm1} is also related to previous results by Vandemaele and Veraverbeke~\cite{vv82} and Bentkus and Zitikis~\cite{bz90} on Cram\'{e}r type large deviations for non-trimmed $L$-statistics with smooth weight function. The method of proof in the first of these articles was based on Helmers's~\cite{helm81}-\cite{helm_e2} $U$-statistic approximation, and in the second one the $\omega^2$-von Mises statistic type approximation was applied. We approximate our trimmed $L$-statistic by $L$-statistics with smooth weight function.
Moreover, we apply the results from the papers mentioned to our approximating non-trimmed $L$-statistic when proving Theorem~\ref{thm1}. Note also that Cram\'{e}r's moment conditions for the underlying distribution assumed in the cited papers are not needed in the case of the trimmed $L$-statistics, whereas the smoothness of $F^{-1}$ near $\alpha$ and $1-\beta$ becomes essential for the Cram\'{e}r type large deviations results. Finally, we state a~version of Theorem~\ref{thm1}, where the scale factor $\sigma/n^{1/2}$ is replaced by $\sqrt{{\text{\em Var}}(L_{n})}$; it is parallel to Theorem~2~(ii) by Vandemaele and Veraverbeke~\cite{vv82}, but now for the trimmed $L$-statistics. We will need the following two somewhat stronger versions of conditions {\bf (iii)} and {\bf (iv)}. \noindent{\bf (iii')} \ {\it $\max(|\alpha_n-\alpha|,\,|\beta_n-\beta|)=O\Bigl( n^{-\frac{1}{2+\varepsilon}\left[ 1 + \frac{\varepsilon(1-\varepsilon)}{2}\right]}(\log n)^{-\frac{\varepsilon}{2}}\Bigr)$, where $\varepsilon$ is the H\"{o}lder index from condition}~{\bf (ii)}. \noindent{\bf (iv')} {\it with $\varepsilon$ from conditions} {\bf (ii)}-{\bf (iii')} $$ \sum_{i=k_n+1}^{n-m_n}|c_{i,n}-c_{i,n}^0|=O\Bigl( n^{\frac 1{2+\varepsilon}\left[ 1-\frac{\varepsilon}{2}\right]}\Bigr). $$ \begin{theorem} \label{thm2} Suppose that the conditions of Theorem~\ref{thm1} are satisfied, where {\bf (iii)} and {\bf (iv)} are replaced by {\bf (iii')} and {\bf (iv')} respectively. In addition, assume that ${\text{\em Var}}(L_{n})<\infty$ for all sufficiently large $n$. Then \begin{equation} \label{thm_2} n \sigma^{-2}{\text{\em Var}}(L_{n})=1 + O\bigl(n^{-\frac{\varepsilon}{2+\varepsilon}}\bigr). \end{equation} Furthermore, relations~\eqref{thm_1}, where $\sigma/n^{1/2}$ is replaced by $\sqrt{{\text{\em Var}}(L_{n})}$, hold true for every sequence $a_n\to 0$ and each $A>0$ as \,$n \to \infty$, uniformly in the range $-A\le x\le a_n n^{\varepsilon/(2(2+\varepsilon))}$.
\end{theorem} Note that in the case of heavily trimmed $L$-statistics the condition $\textbf{E}|X_1|^{\gamma}<\infty$ (for some $\gamma>0$) is sufficient for the finiteness of ${\text{\em Var}}(L_{n})$ for all sufficiently large $n$. \section{Our method (representation of $L_{n}^0$ via a~non-trimmed $L$-statistic)} \label{lemmas} Let $\xi_{\nu}=F^{-1}(\nu)$, $0<\nu<1$, be the $\nu$-th quantile of $F$ and let $W_i$ denote $X_i$ Winsorized outside of $(\xi_{\alpha},\xi_{1-\beta}]$. In other words, \begin{equation} \label{2_2} W_i=\left\{ \begin{array}{ll} \xi_{\alpha},& X_i\le \xi_{\alpha}, \\ X_i,& \xi_{\alpha} < X_i \le \xi_{1-\beta},\\ \xi_{1-\beta},& \xi_{1-\beta} < X_i . \end{array} \right. \end{equation} Let $W_{i:n}$ denote the order statistics corresponding to $W_1,\dots,W_n$ (the sample of $n$ i.i.d. auxiliary random variables). Define the distribution function $G(x)=\textbf{P}\{W_i\le x\}$ of $W_i$; the corresponding quantile function is equal to $G^{-1}(u)= \xi_{\alpha} \vee (F^{-1}(u) \wedge \xi_{1-\beta})$. Here and further on, $(a\vee b)=\max(a,b)$. Let $G_n$ and $G_n^{-1}$ denote the corresponding empirical distribution function and its inverse respectively. We will approximate $L_{n}$ by a~linear combination of the order statistics $W_{i:n}$ with coefficients generated by the weight function \begin{equation} \label{2_3} J_w(u)=\left\{ \begin{array}{ll} J(\alpha),& u\le \alpha, \\ J(u),& \alpha < u \le 1-\beta,\\ J(1-\beta),& 1-\beta < u, \end{array} \right. \end{equation} which is defined in $[0,1]$. It is obvious that when $J$ is Lipschitz in $I$, i.e. satisfies condition \eqref{LipJ} with some positive constant~$C$, the function $J_w$ is Lipschitz in $[0,1]$ with some constant $C_w\le C$. Consider the auxiliary non-trimmed $L$-statistic given by \begin{equation} \label{Ln} \widetilde{L}_n=n^{-1}\sum_{i=1}^{n}\widetilde{c}_{i,n}W_{i:n}=\int_0^1 J_w(u)G_n^{-1}(u)\,du, \end{equation} where $\widetilde{c}_{i,n}=n\int_{(i-1)/n}^{i/n}J_w(u)\,du$.
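The Winsorization \eqref{2_2} and the extended weight function \eqref{2_3} are both clamping operations, which makes them one-liners in code. The following small sketch (ours, not the paper's; numpy is assumed) implements them:

```python
import numpy as np

def winsorize(x, xi_alpha, xi_one_minus_beta):
    # W_i of the displayed definition: replace X_i by xi_alpha when
    # X_i <= xi_alpha, by xi_{1-beta} when X_i > xi_{1-beta}, else keep X_i
    return np.clip(np.asarray(x, dtype=float), xi_alpha, xi_one_minus_beta)

def extend_weight(J, alpha, beta):
    # J_w: J frozen at its boundary values J(alpha), J(1-beta)
    # outside the trimming window (alpha, 1-beta]
    return lambda u: J(np.clip(u, alpha, 1.0 - beta))
```

Both definitions compose a function with `np.clip`; in particular the Lipschitz constant of `extend_weight(J, ...)` never exceeds that of `J`, matching the remark that $C_w\le C$.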
Define the centering constants \begin{equation} \label{muL} \mu_{\widetilde{L}_n}=\int_0^1 J_w(u)G^{-1}(u)\,du. \end{equation} Since $W_i$ has finite moments of all orders and $J_w$ is Lipschitz, the distribution of the normalized $\widetilde{L}_n$ tends to the standard normal law (see, e.g.,~\cite{s74}) \begin{equation*} \label{Ln_to} \sqrt{n}(\widetilde{L}_n-\mu_{\widetilde{L}_n})/\sigma(J_w,G) \Rightarrow N(0,1), \end{equation*} where the asymptotic variance \begin{equation} \label{sigmaw} \sigma^2(J_w,G)=\int_0^1\int_0^1 J_w(u)J_w(v)(u\wedge v-uv)\, dG^{-1}(u)\,dG^{-1}(v). \end{equation} Observe that for $u\in(\alpha,1-\beta]$ we have $J_w(u)=J(u)$, $G^{-1}(u)=F^{-1}(u)$, and that $dG^{-1}(u)\equiv 0$ for $u\notin(\alpha,1-\beta]$. This yields the equality of the asymptotic variances \begin{equation} \label{sigma_eq} \sigma^2(J_w,G)=\sigma^2(J,F)=\sigma^2 \end{equation} of the trimmed $L$-statistic $L_{n}^0$ and the non-trimmed $L$-statistic $\widetilde{L}_n$ based on the Winsorized random variables. Define the binomial random variable $N_{\nu}= \sharp \{i : X_i \le \xi_{\nu} \}$, where $0<\nu <1$. Our representation for $L_{n}^0$ is based on the following simple observation: \begin{equation} \label{observ} W_{i:n}=\left\{ \begin{array}{ll} \xi_{\alpha},& i\le N_{\alpha}, \\ X_{i:n},& N_{\alpha} < i \le N_{1-\beta},\\ \xi_{1-\beta},& i> N_{1-\beta}. \end{array} \right. \end{equation} Put $A_n=N_{\alpha}/n$, \ $B_n=(n-N_{1-\beta})/n$. The following lemma provides us with a~useful representation which is crucial in the proof of our main results.
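The observation \eqref{observ} says that Winsorizing and taking order statistics commute, with the switch points given by the counts $N_{\alpha}$ and $N_{1-\beta}$. This can be checked numerically; the simulated normal sample and the use of numpy are assumptions of this sketch, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)                  # simulated sample X_1, ..., X_n
xi_a, xi_b = np.quantile(x, [0.1, 0.9])   # stand-ins for xi_alpha, xi_{1-beta}
n = x.size

# left-hand side: order statistics of the Winsorized sample W_1, ..., W_n
W_sorted = np.sort(np.clip(x, xi_a, xi_b))

# right-hand side: the three-case formula, with
# N_alpha = #{i : X_i <= xi_alpha} and N_{1-beta} = #{i : X_i <= xi_{1-beta}}
N_a, N_b = np.sum(x <= xi_a), np.sum(x <= xi_b)
xs = np.sort(x)                           # order statistics X_{i:n}
i = np.arange(1, n + 1)
W_alt = np.where(i <= N_a, xi_a, np.where(i <= N_b, xs, xi_b))

assert np.allclose(W_sorted, W_alt)       # the two constructions agree
```

The agreement holds because clamping is monotone: the smallest $N_{\alpha}$ order statistics are exactly those at or below $\xi_{\alpha}$, and symmetrically at the upper end.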
\begin{lemma} \label{lem_2.1} \begin{equation} \label{lem_2.1_1} L_{n}^0-\mu_n=\widetilde{L}_n-\mu_{\widetilde{L}_n}+R_n, \end{equation} where $R_n=R_n^{(1)}+R_n^{(2)}$, \begin{equation} \label{lem_2.1_2} R_n^{(1)}=\int_{\alpha}^{A_n} J_w(u)[F_n^{-1}(u)-\xi_{\alpha}]\,du-\int_{1-\beta}^{1-B_n} J_w(u)[F_n^{-1}(u)-\xi_{1-\beta}]\,du \end{equation} and \begin{equation} \label{lem_2.1_3} \ \ \ R_n^{(2)}=\int_{\alpha_n}^{\alpha} J(u)[F_n^{-1}(u)-F^{-1}(u)]\,du-\int_{1-\beta_n}^{1-\beta} J(u)[F_n^{-1}(u)-F^{-1}(u)]\,du. \end{equation} \end{lemma} \noindent{\bf Proof.} \ First, consider the difference between the centering constants. We obtain \begin{equation} \label{ptf_lem_1} \begin{split} \mu_{\widetilde{L}_n}-\mu_n=&\int_0^1 J_w(u)G^{-1}(u)\,du - \int_{\alpha_n}^{1-\beta_n}J(u)F^{-1}(u)\,du=\alpha J(\alpha)\xi_{\alpha} \\ &+\beta J(1-\beta)\xi_{1-\beta} -\int_{\alpha_n}^{\alpha} J(u)F^{-1}(u)\,du + \int_{1-\beta_n}^{1-\beta} J(u)F^{-1}(u)\,du . \end{split} \end{equation} For the difference between $L_{n}^0$ and $\widetilde{L}_n$ after some simple computations we get \begin{equation} \label{ptf_lem_2} \begin{split} L_{n}^0-\widetilde{L}_n=&\int_{\alpha}^{1-\beta}J(u)[F_n^{-1}(u)-G_n^{-1}(u)]\,du\\ &+ \int_{\alpha_n}^{\alpha} J(u)F_n^{-1}(u)\,du- \int_{1-\beta_n}^{1-\beta} J(u)F_n^{-1}(u)\,du\\ &-J(\alpha)\int_0^{\alpha}G_n^{-1}(u)\,du -J(1-\beta)\int_{1-\beta}^1 G_n^{-1}(u)\,du. \end{split} \end{equation} Relations~\eqref{ptf_lem_1} and \eqref{ptf_lem_2} together imply \begin{equation} \label{ptf_lem_3} L_{n}^0-\widetilde{L}_n +(\mu_{\widetilde{L}_n}-\mu_n)= D_n+ R_n^{(2)}, \end{equation} where \begin{equation*} \label{ptf_lem_4} \begin{split} D_n := &\int_{\alpha}^{1-\beta}J(u)[F_n^{-1}(u)-G_n^{-1}(u)]\,du+ \\ &J(\alpha)\left[ \alpha\,\xi_{\alpha} - \int_0^{\alpha}G_n^{-1}(u)\,du\right] + J(1-\beta) \left[ \beta\,\xi_{1-\beta}- \int_{1-\beta}^1 G_n^{-1}(u)\,du\right]. \end{split} \end{equation*} It remains to show that $D_n=R_n^{(1)}$. 
Let us consider three of the six possible cases (the treatment of the other three cases is similar and therefore omitted). We use the fact that $F_n^{-1}(u)=G_n^{-1}(u)$ for $A_n<u\le 1-B_n$, \ $G_n^{-1}(u)=\xi_{\alpha}$ for $u \le A_n$ and $G_n^{-1}(u)=\xi_{1-\beta}$ \ for $u>1-B_n$.\\ \noindent{\bf Case 1}. $\alpha\le A_n \le 1-B_n<1-\beta$. In this case the second and third terms of $D_n$ are equal to zero, and the first one yields \begin{equation} \label{ptf_lem_5} \begin{split} D_n =\int_{\alpha}^{A_n}J(u)[F_n^{-1}(u)-\xi_{\alpha}]\,du+ \int_{1-B_n}^{1-\beta} J(u) [F_n^{-1}(u)-\xi_{1-\beta}]\,du, \end{split} \end{equation} and since $J(u)=J_w(u)$ for $\alpha<u\le 1-\beta$, we obtain the desired equality.\\ \noindent{\bf Case 2}. $\alpha\le A_n\le 1-\beta < 1-B_n$. In this case we have \begin{equation} \label{ptf_lem_6} \begin{split} D_n =& \int_{\alpha}^{A_n}J(u)[F_n^{-1}(u)-\xi_{\alpha}]\,du\\ + &J(1-\beta) \left[ \beta\,\xi_{1-\beta}- \int_{1-\beta}^{1-B_n} F_n^{-1}(u)\,du -B_n\,\xi_{1-\beta}\right]\\ =&\int_{\alpha}^{A_n}J(u)[F_n^{-1}(u)-\xi_{\alpha}]\,du- \int_{1-\beta}^{1-B_n}J(1-\beta) [F_n^{-1}(u)-\xi_{1-\beta}]\,du, \end{split} \end{equation} and since $J(u)=J_w(u)$ for $\alpha<u\le A_n$ and $J(1-\beta)=J_w(u)$ for $u>1-\beta$, the expression on the r.h.s. in \eqref{ptf_lem_6} is equal to $R_n^{(1)}$. \\ \noindent{\bf Case 3}. $1-\beta\le A_n\le 1-B_n$. In this case $D_n$ can be written as \begin{equation} \label{ptf_lem_7} \begin{split} & \int_{\alpha}^{A_n}J_w(u)[F_n^{-1}(u)-\xi_{\alpha}]\,du\\ &- J(1-\beta)\int_{1-\beta}^{A_n} F_n^{-1}(u)\,du + J(1-\beta)\xi_{\alpha}(A_n-(1-\beta))\\ &+J(1-\beta) \left[ \beta\,\xi_{1-\beta}-\xi_{\alpha}(A_n-(1-\beta))- \int_{A_n}^{1-B_n} F_n^{-1}(u)\,du -B_n\,\xi_{1-\beta}\right]\\ =&\int_{\alpha}^{A_n}J_w(u)[F_n^{-1}(u)-\xi_{\alpha}]\,du- \int_{1-\beta}^{1-B_n}J(1-\beta) [F_n^{-1}(u)-\xi_{1-\beta}]\,du =R_n^{(1)}. \end{split} \end{equation} This completes the proof of representation~\eqref{lem_2.1_1}. The lemma is proved.
\qed To conclude this section, we note that the idea of the $L$-statistic approximation emerged from the observation that the asymptotic variances of $L_{n}^0$ and of the non-trimmed $L$-statistic $\widetilde{L}_n$ based on the Winsorized random variables coincide. This idea of $L$-statistic approximation can also be regarded as an~extension of the one used in Gribkova and Helmers~\cite{gh2006}-\cite{gh2007} and~\cite{gh2014} (where the second order asymptotic properties -- the Berry--Esseen bounds and Edgeworth type expansions -- were established for (slightly) trimmed means and their studentized versions) to the case of trimmed $L$-statistics. In the papers mentioned, we constructed the $U$-statistic type approximations for (slightly) trimmed means using sums of i.i.d. Winsorized observations as the linear $U$-statistic terms; in order to get the quadratic terms, we applied some special Bahadur--Kiefer representations of von Mises statistic type for (intermediate) sample quantiles (cf.~Gribkova and Helmers~\cite{gh2011}). \section{Proof of Theorems~\ref{thm1} and~\ref{thm2}} \label{proof} \noindent{\bf Proof of Theorem~\ref{thm1}}. Obviously, it suffices to prove the first of relations~\eqref{thm_1}. Set \begin{equation} \label{2_1} V_n=L_{n}-L_{n}^0=n^{-1}\sum_{i=k_n+1}^{n-m_n}(c_{i,n} -c_{i,n}^0)X_{i:n}. \end{equation} Lemma~\ref{lem_2.1} and relation~\eqref{2_1} together yield \begin{equation} \label{proof_1} L_{n}-\mu_n=\widetilde{L}_n-\mu_{\widetilde{L}_n}+R_n +V_n.
\end{equation} In view of the classical Slutsky argument applied to~\eqref{proof_1}, $1- F_{L_{n}}(x)$ is bounded above and below by \begin{equation} \label{proof_2} \textbf{P}\{\sqrt{n}(\widetilde{L}_n-\mu_{\widetilde{L}_n})/\sigma> x-2\delta \}+\textbf{P}\{\sqrt{n}|R_n|/\sigma> \delta\}+\textbf{P}\{\sqrt{n}|V_n|/\sigma> \delta\} \end{equation} and \begin{equation} \label{proof_3} \textbf{P}\{\sqrt{n}(\widetilde{L}_n-\mu_{\widetilde{L}_n})/\sigma> x+2\delta \}-\textbf{P}\{\sqrt{n}|R_n|/\sigma> \delta\}-\textbf{P}\{\sqrt{n}|V_n|/\sigma> \delta\} \end{equation} respectively, for each $\delta>0$. Let $z_n=n^{\varepsilon/(2(2+\varepsilon))}$. Fix an~arbitrary sequence $a_n\to 0$ and $A>0$. Without loss of generality we may assume that $a_n\ge 1/\log(1+n)$ (otherwise, we may replace $a_n$ by the~new sequence $a_n'=\max(a_n ,\, 1/\log(1+n))\ge a_n$ without affecting the result). Set $\delta=\delta_n=a_n^{-1/2}/z_n$. From~\eqref{proof_2} and~\eqref{proof_3} it immediately follows that to prove our theorem it suffices to show that \begin{equation} \label{proof_4} \textbf{P}\{\sqrt{n}(\widetilde{L}_n-\mu_{\widetilde{L}_n})/\sigma> x \pm 2\delta\}=[1-\Phi(x)](1+o(1)) , \end{equation} \begin{equation} \label{proof_5} \quad \quad \ \ \,\textbf{P}\{\sqrt{n}|R_n|/\sigma> \delta\}=[1-\Phi(x)]o(1) , \end{equation} \begin{equation} \label{proof_6} \quad \quad \ \ \, \textbf{P}\{\sqrt{n}|V_n|/\sigma> \delta\}=[1-\Phi(x)]o(1) , \end{equation} uniformly in the range $-A\le x\le a_n z_n$. \noindent{\bf Proof of \eqref{proof_4}}. Since $\widetilde{L}_n$ is the non-trimmed linear combination of order statistics corresponding to the sample $W_1,\dots,W_n$ of i.i.d. bounded random variables and because its weight function $J_w$ is Lipschitz in $[0,1]$, we can apply the results on probabilities of large deviations by Vandemaele and Veraverbeke~\cite{vv82} and by Bentkus and Zitikis~\cite{bz90}. Set $B=A+2\sup_{n\ge 1}\delta_n$ and $b_n=a_n+2\delta_n$.
Since $a_n\ge 1/\log(1+n)$, the number $B$ exists, and $b_n\to 0$. Then, by Theorem~2~({\em i}) of Vandemaele and Veraverbeke~\cite{vv82} (for $x$ such that $-B\le x \pm 2\delta <0$) and by Theorem~1.1 of Bentkus and Zitikis~\cite{bz90} (for $x$ such that $0 \le x \pm 2\delta \le b_n n^{1/6}$), we obtain \begin{equation} \label{proof_7} \textbf{P}\{\sqrt{n}(\widetilde{L}_n-\mu_{\widetilde{L}_n})/\sigma> x \pm 2\delta \}=[1-\Phi(x \pm 2\delta)](1+o(1)) , \end{equation} uniformly with respect to $x$ such that $-B\le x \pm 2\delta \le b_n n^{1/6}$. In particular, relation~\eqref{proof_7} holds true uniformly in the range $-A\le x\le a_n n^{1/6}$. To prove~\eqref{proof_4}, it remains to note that since $2\,\delta a_n z_n =2\sqrt{a_n}\to 0$, Lemma~A.1 from Vandemaele and Veraverbeke~\cite{vv82} now yields \begin{equation} \label{proof_8} 1-\Phi(x \pm 2\delta)=[1-\Phi(x)](1+o(1)), \end{equation} as $n \to \infty$, uniformly in the range $-A\le x\le a_n z_n$. \noindent{\bf Proof of \eqref{proof_5}}. Let $I_1^{(j)}$ and $I_2^{(j)}$ denote the first and the second terms of $R^{(j)}_n$ (cf.~\eqref{lem_2.1_2}--\eqref{lem_2.1_3}) respectively, $j=1,2$. In this notation, $R_n=I_1^{(1)}-I_2^{(1)}+I_1^{(2)}-I_2^{(2)}$ and \begin{equation} \label{proof_9} \textbf{P}\{\sqrt{n}|R_n|/\sigma> \delta\}\le \sum_{k=1}^2 \textbf{P}\{\sqrt{n}|I^{(1)}_k|/\sigma> \delta/4\} +\sum_{k=1}^2 \textbf{P}\{\sqrt{n}|I^{(2)}_k|/\sigma> \delta/4\} . \end{equation} Thus, it suffices to show that for each positive $C$ (in particular, for $C=\sigma/4$), \begin{equation} \label{proof_10} \textbf{P}\{\sqrt{n}|I^{(j)}_k|>C \delta \}=[1-\Phi(x)]o(1) , \text{\ \ $k,j=1,2,$} \end{equation} as $n \to \infty$, uniformly in the range $-A\le x\le a_n z_n$. We will prove~\eqref{proof_10} for $I^{(1)}_1$ and $I^{(2)}_1$ (the treatment of $I^{(1)}_2$ and $I^{(2)}_2$ is similar and therefore omitted). Consider $I^{(1)}_1$.
First, note that if $\alpha < A_n$, then $\max_{u\in (\alpha,A_n)} |F_n^{-1}(u)-\xi_{\alpha}|=\xi_{\alpha} -X_{[n\alpha]+1:n}\le \xi_{\alpha} -X_{[n\alpha]:n}$, as $F_n^{-1}$ is monotonic. Here and in what follows $[x]$ represents the greatest integer function. Similarly we find that if $A_n \le \alpha$, then $\max_{u\in (A_n,\alpha)} |F_n^{-1}(u)-\xi_{\alpha}|=X_{[n\alpha]:n}-\xi_{\alpha}$. Furthermore, by the Lipschitz condition for $J$, there exists a~positive $K$ such that $\max_{u\in [0,1]}J_w(u)\le \sup_{u\in I}J(u)\le K$. This yields \begin{equation} \label{proof_11} |I^{(1)}_1| =\left| \int_{\alpha}^{A_n} J_w(u)[ F_n^{-1}(u)-\xi_{\alpha}]\, du \right| \le K |A_n-\alpha| |X_{[n\alpha]:n}-\xi_{\alpha}|. \end{equation} Define a~sequence of intervals $\Gamma_n=[\alpha\wedge\alpha_n,\alpha\vee\alpha_n+1/n)$; then we obtain \begin{equation} \label{proof_11_} |I^{(2)}_1| =\left| \int_{\alpha_n}^{\alpha}J(u) [F_n^{-1}(u)-F^{-1}(u)]\, du \right| \le K |\alpha_n-\alpha| \emph{D}_n, \end{equation} where $\emph{D}_n= \max_{i:\,i/n\in \Gamma_n}|X_{i:n}-F^{-1}(i/n)|\vee |X_{i:n}-F^{-1}((i-1)/n)|$. Let $U_1,\dots,U_n$ be a~sample of independent random variables uniformly distributed on $(0,1)$, and let $U_{i:n}$ denote the corresponding order statistics. Set $M_{\alpha}=\sharp \{i : U_i \le \alpha \}$. Since the joint distribution of $X_{i:n}$ and $N_{\alpha}$ coincides with the joint distribution of $F^{-1}(U_{i:n})$ and $M_{\alpha}$, $i=1,\dots,n$, in order to prove~\eqref{proof_10}, it suffices to show that \begin{equation} \label{proof_12} \begin{split} \textbf{P}\{|M_{\alpha}-n\alpha| |U_{[n \alpha ]:n}-\alpha|^{\varepsilon} > C\sqrt{n}\,\delta \}&=[1-\Phi(x)]o(1) ,\\ \textbf{P}\{\sqrt{n}|\alpha_n-\alpha| \emph{D}_{n,u}^{\varepsilon} > C\,\delta \}&=[1-\Phi(x)]o(1) ,\\ \textbf{P}\Bigl( \bigcup_{i:\,i/n\in \Gamma_n}\left\{ U_{i:n} \notin U_{\alpha}\right\}\Bigr) &=[1-\Phi(x)]o(1) , \end{split} \end{equation} as $n \to \infty$, uniformly in the range $-A\le x\le a_n z_n$.
Here $U_{\alpha}$ is the~neighborhood of $\alpha$, in which $F^{-1}$ satisfies a~H\"{o}lder condition of order $\varepsilon$ (cf.~condition~{\bf (ii)}), \begin{equation} \label{proof_12a} \emph{D}_{n,u}^{\varepsilon}=\max_{i:\,i/n\in \Gamma_n}|U_{i:n}-i/n|^{\varepsilon}\vee |U_{i:n}-(i-1)/n|^{\varepsilon}, \end{equation} and $C$ stands for a~positive constant independent of $n$, which may change its value from line to line. To shorten notation, let $k=[n\alpha]$. Consider the probability on the l.h.s. in the first line of~\eqref{proof_12}. It is equal to \begin{equation} \label{proof_13} \textbf{P}\{|M_{\alpha}-n\alpha| |U_{k:n}-\alpha|^{\varepsilon}> C a_n^{-\frac 12} n^{\frac 1{2+\varepsilon}} \} \le \textbf{P}_1+\textbf{P}_2, \end{equation} where \begin{equation*} \begin{split} &\textbf{P}_1:=\textbf{P}\{|M_{\alpha}-n\alpha| > C_1 a_n^{-\frac 1{2(1+\varepsilon)}} n^{\frac{1+\varepsilon}{2+\varepsilon}} \}, \\ &\textbf{P}_2:=\textbf{P}\{|U_{k:n}-\alpha|^{\varepsilon}> C_2 a_n^{-\frac {\varepsilon}{2(1+\varepsilon)}} n^{-\frac{\varepsilon}{2+\varepsilon}} \}, \end{split} \end{equation*} and $C_1$, $C_2$ are any positive constants such that $C_1C_2=C$. Let us estimate $\textbf{P}_1$ and $\textbf{P}_2$. Set $h=C_1a_n^{-\frac 1{2(1+\varepsilon)}} n^{\frac{1+\varepsilon}{2+\varepsilon}-1}$. Since $h<1-\alpha$ for all sufficiently large $n$ (because $a_n\ge 1/\log(1+n)$), by Theorem~1 of Hoeffding~\cite{ho} we have \begin{equation} \label{proof_14} \textbf{P}_1 = \textbf{P}\{|M_{\alpha}-n\alpha| > n h\}\le 2\exp(-2nh^2)=2\exp(-2C_1^2n^{\frac {\varepsilon}{2+\varepsilon}}a_n^{-\frac 1{1+\varepsilon}}). \end{equation} Next, we evaluate $1/(1-\Phi(x))$. Let $\phi=\Phi'$.
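The standard-normal tail asymptotic $1-\Phi(x)\sim \phi(x)/x$, invoked next, is easy to sanity-check numerically. The following Python sketch (our own illustration only, not part of the proof) compares the exact tail, computed via the complementary error function, with the approximation $\phi(x)/x$:

```python
import math

# Illustrative numeric check (not part of the proof): the Mills-ratio
# asymptotic 1 - Phi(x) ~ phi(x)/x as x -> infinity, where Phi is the
# standard normal c.d.f. and phi its density.
def normal_tail(x):
    # 1 - Phi(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_approx(x):
    # phi(x)/x, the leading-order approximation of the tail
    return math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)

# ratio of exact tail to approximation at a few points
ratios = {x: normal_tail(x) / mills_approx(x) for x in (2.0, 5.0, 10.0)}
```

The ratio approaches $1$ from below as $x$ grows, in line with the classical Mills-ratio bounds.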
Since $1-\Phi(x) \sim \phi(x)/x$ as $x\to\infty$, for $x$ such that $-A\le x\le a_n z_n$ we have \begin{equation} \label{proof_phi} \frac 1{1-\Phi(x)}\le \frac 1{1-\Phi(a_n z_n)} \sim \frac {a_n z_n}{\phi(a_n z_n)} = \sqrt{2\pi}\, a_n \, n^{\frac{\varepsilon}{2(2+\varepsilon)}} \exp\left( a_n^2 n^{\frac{\varepsilon}{2+\varepsilon}}/2\right), \end{equation} and combining \eqref{proof_14} and \eqref{proof_phi}, we obtain that \begin{equation} \label{proof_15} \textbf{P}_1 = [1-\Phi(x)]o(1), \ \text{ as} \ n\to\infty, \end{equation} uniformly in the range $-A\le x\le a_n z_n$. Set $p_k=k/({n+1})$, and note that $0< \alpha-p_k<n^{-1}$. Then for $\textbf{P}_2$ we have \begin{equation} \label{proof_16} \begin{split} \textbf{P}_2 \le &\textbf{P}\{|U_{k:n}-p_k|> C_2^{1/\varepsilon} a_n^{-\frac 1{2(1+\varepsilon)}} n^{-\frac 1{2+\varepsilon}} - n^{-1}\}\\ =&\textbf{P}\{\sqrt{n}|U_{k:n}-p_k|> C_2^{1/\varepsilon} a_n^{-\frac 1{2(1+\varepsilon)}} n^{\frac {\varepsilon}{2(2+\varepsilon)}}-n^{-1/2} \}.\\ \end{split} \end{equation} Note that the term $n^{-1/2}$ on the r.h.s. in~\eqref{proof_16} is of negligible order and therefore we may omit it. Set $\lambda: =C_2^{1/\varepsilon} a_n^{-\frac 1{2(1+\varepsilon)}} n^{\frac {\varepsilon}{2(2+\varepsilon)}} $. We observe that $\lambda /\sqrt{n}= C_2^{1/\varepsilon} a_n^{-\frac 1{2(1+\varepsilon)}} n^{-\frac{1}{2+\varepsilon}}$; the latter quantity tends to zero because $a_n \ge 1/\log(1+n)$, and so we can apply Inequality~1 and Proposition~1 (relation~(12)) given on pages~453 and~455, respectively, in Shorack and Wellner~\cite{shorack}. Then we obtain \begin{equation} \label{proof_17} \begin{split} \textbf{P}_2&\leq 2\exp\Bigl( -\frac {\lambda^2}{2p_k}\,\frac 1{1+2\lambda/(3p_k \sqrt{n})}\Bigr)\\ & = 2 \exp\Bigl( -\frac {1}{2p_k} C_2^{2/\varepsilon} a_n^{-\frac 1{1+\varepsilon}} n^{\frac {\varepsilon}{2+\varepsilon}} [1+o(1)]\Bigr).
\end{split} \end{equation} From~\eqref{proof_phi} and~\eqref{proof_17} it follows that \begin{equation} \label{proof_18} \textbf{P}_2 = [1-\Phi(x)]o(1), \ \text{ as} \ n\to\infty, \end{equation} uniformly in the range $-A\le x\le a_n z_n$. So, the first relation in~\eqref{proof_12} follows directly from~\eqref{proof_13}, \eqref{proof_15} and~\eqref{proof_18}. Next, we prove the second relation in~\eqref{proof_12}. We have \begin{equation} \label{proof_19} \begin{split} &\textbf{P}\{\sqrt{n}|\alpha_n-\alpha| \emph{D}_{n,u}^{\varepsilon} > C\,\delta \}\\ \leq & \sum_{i:\,i/n\in \Gamma_n} \textbf{P}\{\sqrt{n}|\alpha_n-\alpha| |U_{i:n}-i/n|^{\varepsilon}\vee |U_{i:n}-(i-1)/n|^{\varepsilon} > C\,\delta \} . \end{split} \end{equation} By condition {\bf (iii)}, there exists $M>0$ such that $|\alpha_n-\alpha|\le M n^{-1/(2+\varepsilon)}$ for all sufficiently large $n$; hence each term of the sum on the r.h.s. in~\eqref{proof_19} does not exceed \begin{equation} \label{proof_19a} \textbf{P}\{\sqrt{n} |U_{i:n}-i/n| >\lambda \} + \textbf{P}\{\sqrt{n} |U_{i:n}-(i-1)/n| > \lambda \}, \quad i/n\in \Gamma_n, \end{equation} where $\lambda=C_{\varepsilon} a_n^{-1/(2\varepsilon)} n^{\frac {\varepsilon}{2(2+\varepsilon)}}$ and $C_{\varepsilon}=(C/M)^{1/\varepsilon}$. Obviously (cf.~\eqref{proof_16}-\eqref{proof_17}), it suffices to prove the desired bound for the first of the two probabilities in~\eqref{proof_19a}. Applying once more the exponential Inequality~1 for uniform order statistics (cf. Shorack and Wellner~\cite{shorack}, pp.~453,~455) and the fact that $|i/n-\alpha|\leq Mn^{-1/(2+\varepsilon)}$ for all sufficiently large $n$, we obtain \begin{equation*} \textbf{P}\{\sqrt{n} |U_{i:n}-i/n| >\lambda \} \leq 2\exp\left( -\frac {1}{2\alpha} C_{\varepsilon}^2 a_n^{-\frac 1{\varepsilon}} n^{\frac {\varepsilon}{2+\varepsilon}} \Bigl[1+O(n^{-1/(2+\varepsilon)})\Bigr]\right). \end{equation*} Since the number of terms on the r.h.s.
in \eqref{proof_19} does not exceed $n|\alpha-\alpha_n|+1=O(n^{\frac{1+\varepsilon}{2+\varepsilon}})$, the latter bound implies that the quantity on the r.h.s. in \eqref{proof_19} is of the order \begin{equation*} \label{proof_20_} n^{\frac{1+\varepsilon}{2+\varepsilon}} \exp\Bigl( -\frac {1}{2\alpha} C_{\varepsilon}^2 a_n^{-\frac 1{\varepsilon}} n^{\frac {\varepsilon}{2+\varepsilon}} \Bigl[1+o(1)\Bigr]\Bigr). \end{equation*} This together with~\eqref{proof_phi} implies the required relation. It remains to prove the last relation in~\eqref{proof_12}. Fix some $\gamma>0$ such that $[\alpha-\gamma,\alpha+\gamma]\subseteq U_{\alpha}$, and set $r_n=k\wedge k_n$, $s_n=k\vee k_n+1$, where $k_n= n\alpha_n$ (cf.~\eqref{tn}). Then \begin{equation} \label{proof_20} \textbf{P}\Bigl( \bigcup_{i:\,i/n\in \Gamma_n}\lf U_{i:n} \notin U_{\alpha}\rf\Bigr)\leq \textbf{P}(U_{r_n:n} < \alpha -\gamma) + \textbf{P}(U_{s_n:n} > \alpha+\gamma). \end{equation} Observe that both sequences $r_n/n$ and $s_n/n$ satisfy condition~{\bf (iii)}, along with the sequence $\alpha_n=k_n/n$. Let us estimate the first probability on the r.h.s. in~\eqref{proof_20} (the treatment of the second one is similar). Define a~binomial random variable $S_n=\sharp \{i : U_i < \alpha-\gamma \}$; then the first term on the r.h.s. in~\eqref{proof_20} is equal to \begin{equation} \label{proof_12_2} \begin{split} \textbf{P}(S_n \geq r_n) = &\textbf{P}\bigl(S_n-\textbf{E}S_n \geq r_n -n\alpha +\gamma n\bigr)\\ = &\textbf{P}\bigl(n^{-1}(S_n-\textbf{E}S_n)\ge \gamma+ o(1)\bigr) \end{split} \end{equation} and by the~classical Hoeffding~\cite{ho} inequality, the latter quantity is no greater than $\exp(-2n(\gamma+ o(1))^2)$, which is $[1-\Phi(x)]o(1)$, uniformly in the range $-A \le x \le a_n n^{1/2}$, and the last relation in~\eqref{proof_12} follows. Relations~\eqref{proof_11}-\eqref{proof_11_} and~\eqref{proof_12} directly imply~\eqref{proof_10}, which yields~\eqref{proof_5}. \noindent{\bf Proof of \eqref{proof_6}}.
\ By condition~{\bf (iv)}, there exists $b>0$ such that \begin{equation*} \label{proof_21} \sqrt{n}|V_n|\leq bn^{-\varepsilon/(2(2+\varepsilon))}(|X_{(k_n+1):n}|\vee |X_{(n-m_n):n}|), \end{equation*} for all sufficiently large $n$. Thus, \begin{equation*} \label{proof_22} \textbf{P}\left( \sqrt{n}|V_n|/\sigma >\delta \right) \leq \textbf{P}\left( |X_{(k_n+1):n}|\vee |X_{(n-m_n):n}|>\sigma a_n^{-1/2}\right)\leq \textbf{P}_{3}+\textbf{P}_{4}, \end{equation*} where $\textbf{P}_{3}=\textbf{P}\bigl(|X_{(k_n+1):n}|>\sigma a_n^{-1/2}\bigr) $, \ $\textbf{P}_{4}=\textbf{P}\bigl(|X_{(n-m_n):n}|>\sigma a_n^{-1/2}\bigr) $. Let us estimate $\textbf{P}_{3}$ (the treatment of $\textbf{P}_{4}$ is the same and therefore omitted). We have \begin{equation} \label{proof_23} \begin{split} \textbf{P}_{3}&=\textbf{P}\left( \left| F^{-1}(U_{(k_n+1):n})\right| >\sigma a_n^{-1/2}\right) \\ &\leq \textbf{P}\left( \left| F^{-1}(U_{(k_n+1):n})- F^{-1}(\alpha)\right| +\left| F^{-1}(\alpha)\right| >\sigma a_n^{-1/2}\right)\\ &\leq \textbf{P}\left( \left| U_{(k_n+1):n}- \alpha\right|^{\varepsilon} >\sigma a_n^{-1/2} (1+o(1))\right) + \textbf{P}(U_{(k_n+1):n} \notin U_{\alpha}). \end{split} \end{equation} Observe that the first term on the r.h.s. in~\eqref{proof_23} is equal to zero for all sufficiently large $n$, and the second one is $[1-\Phi(x)]o(1)$, uniformly in the range $-A\le x\le a_n z_n$. This completes the proof of~\eqref{proof_6} and of the theorem. \qed \noindent{\bf Proof of Theorem~\ref{thm2}}. Let us first prove relation~\eqref{thm_2}. By Lemma~\ref{lem_2.1} and relation~\eqref{proof_1}, we have \begin{equation*} \label{proof t21} \text{\em Var}(L_{n})=\text{\em Var}(\widetilde{L}_n)+\text{\em Var}(R_n+V_n)+2 \text{\em cov}(\widetilde{L}_n,R_n+V_n). \end{equation*} Since the $W_i$ are bounded, all conditions of Theorem~2\,(ii)~\cite{vv82} are satisfied, and hence \begin{equation*} \label{proof t22} \sigma^{-1} n^{1/2}\sqrt{\text{\em Var}(\widetilde{L}_n)}=1+O(n^{-1/2}) \end{equation*} (cf.
\cite{vv82}, p.~431). \ Furthermore, we have \begin{equation*} \label{proof t21a} \begin{split} n|\text{\em cov}(\widetilde{L}_n,R_n+V_n)|&\leq n[\text{\em Var}(\widetilde{L}_n)\text{\em Var}(R_n+V_n)]^{1/2}\\ &= \sigma[n\text{\em Var}(R_n+V_n)]^{1/2}(1+O(n^{-1/2})). \end{split} \end{equation*} The latter three relations imply that, in order to prove~\eqref{thm_2}, it suffices to show that \begin{equation} \label{proof t23} n \text{\em Var}(R_n+V_n)=O\bigl(n^{-\frac{2\varepsilon}{2+\varepsilon}}\bigr). \end{equation} We have \begin{equation} \label{proof t24} n \text{\em Var}(R_n+V_n)\leq n \textbf{E}(R_n+V_n)^2\leq 5n\Bigl[\sum_{i,j=1}^{2} \textbf{E}\bigl(I_j^{(i)}\bigr)^2 + \textbf{E}V_n^2 \,\Bigr], \end{equation} where $I_j^{(i)}$ are as in \eqref{proof_9}-\eqref{proof_10}. We will show that \begin{equation} \label{proof t25} n \textbf{E}\bigl(I_j^{(1)}\bigr)^2 =O(n^{-\varepsilon})=o\bigl(n^{-\frac{2\varepsilon}{2+\varepsilon}}\bigr), \ \ n \textbf{E}\bigl(I_j^{(2)}\bigr)^2 =O(n^{-\frac{2\varepsilon}{2+\varepsilon}}),\ j=1,2, \end{equation} and that \begin{equation} \label{proof t26} n \textbf{E}V_n^2 =O(n^{-\frac{2\varepsilon}{2+\varepsilon}}). \end{equation} Relations \eqref{proof t24}-\eqref{proof t26} imply the desired bound~\eqref{proof t23}. We first prove~\eqref{proof t25}, and consider in detail only the case $j=1$ (the treatment of the case $j=2$ is the same and therefore omitted). Let, as before, $k=[\alpha n]$ and $k_n=\alpha_n n$. By \eqref{proof_11} and the Schwarz inequality, we have \begin{equation*} \label{proof t27} \begin{split} \textbf{E}\bigl(I_1^{(1)}\bigr)^2 \leq & K^2 [\textbf{E}(A_n-\alpha)^4\textbf{E}(X_{k:n}-\xi_{\alpha})^4]^{1/2}\\ = & K^2 n^{-2}[\textbf{E}(N_{\alpha}-\alpha n)^4\textbf{E}(X_{k:n}-\xi_{\alpha})^4]^{1/2}. \end{split} \end{equation*} By the well-known formula for the fourth central moment of a~binomial random variable, we have $\textbf{E}(N_{\alpha}-\alpha n)^4=3\alpha^2(1-\alpha)^2n^2(1+o(1))$.
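The binomial fourth-moment asymptotics invoked above, namely that the fourth central moment is asymptotically three times the squared variance $[n\alpha(1-\alpha)]^2$, can be verified numerically from the exact probability mass function. The following Python sketch is our own illustration, not part of the proof; the p.m.f. is evaluated in log space to avoid overflowing binomial coefficients:

```python
import math

# Illustrative numeric check (our own sketch, not part of the proof):
# for N ~ Binomial(n, a), the fourth central moment satisfies
#   E(N - n a)^4 = n a (1-a) [1 + 3 (n-2) a (1-a)] ~ 3 [n a (1-a)]^2,
# i.e. three times the squared variance, as n -> infinity.
def log_binom_pmf(n, k, a):
    # log P(N = k), via lgamma to avoid huge binomial coefficients
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(a) + (n - k) * math.log(1.0 - a))

def binom_central_moment4(n, a):
    mean = n * a
    return sum(math.exp(log_binom_pmf(n, k, a)) * (k - mean) ** 4
               for k in range(n + 1))

n, a = 500, 0.3
exact = binom_central_moment4(n, a)
leading = 3.0 * (n * a * (1.0 - a)) ** 2   # 3 [n a (1-a)]^2
ratio = exact / leading
```

For $n=500$ and $a=0.3$ the ratio of the exact moment to the leading term is already within one percent of $1$.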
Thus, there exists a~positive constant $C$ independent of $n$ such that \begin{equation} \label{proof t28} n\textbf{E}\bigl(I_1^{(1)}\bigr)^2 \leq C [\textbf{E}(X_{k:n}-\xi_{\alpha})^4]^{1/2} \end{equation} for all sufficiently large $n$. We have \begin{equation} \label{proof t29} \begin{split} &\textbf{E}(X_{k:n}-\xi_{\alpha})^4= \textbf{E}(F^{-1}(U_{k:n})-F^{-1}(\alpha))^4\\ =&\textbf{E}[(F^{-1}(U_{k:n})-F^{-1}(\alpha))^4 \textbf{1}_{\{U_{k:n}\in U_{\alpha}\}}]\\ +& \textbf{E}[(F^{-1}(U_{k:n})-F^{-1}(\alpha))^4 \textbf{1}_{\{U_{k:n}\notin U_{\alpha}\}}]\\ \leq & C_H^4 \textbf{E}|U_{k:n}-\alpha|^{4\varepsilon}+ \textbf{E}[(X_{k:n}-\xi_{\alpha})^6]^{2/3}[\textbf{P}(U_{k:n}\notin U_{\alpha})]^{1/3}, \end{split} \end{equation} where $C_H$ is a~constant from the H\"{o}lder condition {\bf (ii)}. Note that if $\varepsilon>1/2$, then $\textbf{E}|U_{k:n}-\alpha|^{4\varepsilon}\leq \textbf{E}|U_{k:n}-\alpha|^{2}=O(n^{-1})$, and if $\varepsilon \leq 1/2$, then $\textbf{E}|U_{k:n}-\alpha|^{4\varepsilon}\leq (\textbf{E}|U_{k:n}-\alpha|^{2})^{2\varepsilon}=O(n^{-2 \varepsilon})$. Since moments of any order of $X_{k:n}$ are finite for all sufficiently large $n$ and because $\textbf{P}(U_{k:n}\notin U_{\alpha})=O(\exp(-cn))$ with some $c>0$ (cf.~\eqref{proof_20}-\eqref{proof_12_2}), the latter bounds and relations~\eqref{proof t28}-\eqref{proof t29} imply the first of relations~\eqref{proof t25}. Consider $I_1^{(2)}$. By condition {\bf (iii')}, there exists $d>0$ such that \\ $(\alpha_n-\alpha)^2\leq d n^{-(2+\varepsilon-\varepsilon^2)/(2+\varepsilon)}(\log n)^{-\varepsilon}$, for all sufficiently large $n$. Then in view of~\eqref{proof_11_} we obtain \begin{equation} \label{proof t210} n\textbf{E}\bigl(I_1^{(2)}\bigr)^2 \leq n K^2 (\alpha_n-\alpha)^2 \textbf{E} \emph{D}_n^2 \leq K^2n^{\frac{\varepsilon^2}{2+\varepsilon}}(\log n)^{-\varepsilon}\textbf{E} \emph{D}_n^2. 
\end{equation} Hence, to get the second bound in~\eqref{proof t25}, it suffices to show that \begin{equation} \label{proof t211} \textbf{E}\emph{D}_n^2=O\left( (\log n)^{\varepsilon}n^{-\varepsilon}\right). \end{equation} For all sufficiently large $n$, \ $\alpha_n \in U_{\alpha}$ and \begin{equation*} \label{proof t212} \textbf{E}\emph{D}_n^2\leq C_H^2 \textbf{E}\left(\emph{D}_{n,u}^{\varepsilon}\right)^2= C_H^2\textbf{E}\left( \max_{i:\,i/n\in \Gamma_n}|U_{i:n}-i/n|^{2\varepsilon}\vee |U_{i:n}-(i-1)/n|^{2\varepsilon}\right), \end{equation*} where $\emph{D}_{n,u}^{\varepsilon}$ is as in~\eqref{proof_12a}. The latter quantity does not exceed \begin{equation} \label{proof t213} \begin{split} &t^{\varepsilon}C_H^2 (\log n)^{\varepsilon}n^{-\varepsilon} \\ + &\textbf{P} \left( \bigcup_{i:\,i/n\in \Gamma_n} \lf |U_{i:n}-i/n|\vee |U_{i:n}-(i-1)/n|> \sqrt{t\frac{\log n}n}\rf\right) \\ \leq & t^{\varepsilon}C_H^2 (\log n)^{\varepsilon}n^{-\varepsilon}+|\alpha n-k_n+1|\left( \textbf{P}_1+\textbf{P}_2\right), \end{split} \end{equation} where $t$ is a constant which will be chosen later, and \begin{equation*} \label{proof t214} \textbf{P}_1= \textbf{P}\left( |U_{i:n}-i/n|> \sqrt{t\frac{\log n}n}\right),\ \ \textbf{P}_2=\textbf{P}\left( |U_{i:n}-(i-1)/n|> \sqrt{t\frac{\log n}n}\right). \end{equation*} It is obvious that both $\textbf{P}_1$ and $\textbf{P}_2$ are of the same order of magnitude, so it suffices to estimate $\textbf{P}_1$, to which we can apply once more Inequality~1 from Shorack and Wellner~\cite{shorack}. We have \begin{equation*} \label{proof t215} \textbf{P}_1= \textbf{P}\left( \sqrt{n}|U_{i:n}-i/n|> \sqrt{t\log n}\right)\leq 2\exp\left(-\frac{t\log n}{2\alpha}(1+O(|\alpha_n-\alpha|))\right), \end{equation*} hence, if we choose $t\ge 4\alpha$, we obtain $\textbf{P}_1+\textbf{P}_2=O(n^{-2})$, and the second term on the r.h.s. in~\eqref{proof t213} becomes negligible in order relative to the first one. This proves~\eqref{proof t211} and the second relation in~\eqref{proof t25}.
We now turn to the proof of~\eqref{proof t26}. By condition {\bf (iv')}, there exists a~constant $C>0$, not depending on $n$, such that $\sum_{i=k_n+1}^{n-m_n}|c_{i,n} -c_{i,n}^0|\leq Cn^{\frac{2-\varepsilon}{2(2+\varepsilon)}}$, for all sufficiently large $n$, and \begin{equation*} \begin{split} n\textbf{E}V_n^2 &\leq n^{-1} \left( \sum_{i=k_n+1}^{n-m_n}|c_{i,n} -c_{i,n}^0|\right)^2\textbf{E}\bigl(X^2_{k_n+1:n}\vee X^2_{n-m_n:n}\bigr)\\ &\leq C^2n^{-1} n^{\frac{2-\varepsilon}{2+\varepsilon}} \textbf{E}\bigl(X^2_{k_n+1:n}\vee X^2_{n-m_n:n}\bigr)=O\bigl( n^{-\frac{2\varepsilon}{2+\varepsilon}}\bigr), \end{split} \end{equation*} and~\eqref{proof t26} follows. Thus, relation~\eqref{thm_2} is proved, and we are now in a~position to prove that relations~\eqref{thm_1} hold true if we replace $\sigma/n^{1/2}$ by $\sqrt{{\text{\em Var}}(L_{n})}$. We prove the first of relations~\eqref{thm_1}; the second one will then follow from the first if we replace $c_{i,n}$ by $-c_{i,n}$. Fix an~arbitrary sequence $a_n\to 0$ and $A>0$, set $\lambda_n=\sigma^{-1}n^{1/2} \sqrt{{\text{\em Var}}(L_{n})}$ and write \begin{equation} \label{proof t217} \frac{\textbf{P}\bigl((L_{n}-\mu_n)/ \sqrt{{\text{\em Var}}(L_{n})} >x\bigr)}{1-\Phi(x) }=\frac {1-F_{L_{n}}(\lambda_nx)}{1-\Phi(\lambda_n x)} \, \,\frac{1-\Phi(\lambda_n x)}{1-\Phi(x)}. \end{equation} Set $B=A\sup_{n\in \mathbb{N}}\lambda_n$ and $b_n=\lambda_n a_n$. Since $\lambda_n \to 1$, the number $B$ exists and $b_n \to 0$. Hence, by Theorem~\ref{thm1}, the first ratio on the r.h.s. in~\eqref{proof t217} tends to $1$ as $n \to \infty$, uniformly in $x$ such that $-B \leq \lambda_n x \leq b_n z_n$, where $z_n=n^{\varepsilon/(2(2+\varepsilon))}$; in particular, uniformly in the range $-A \leq x \leq a_n z_n$. Furthermore, we see that $|\lambda_n-1|^{1/2} a_nz_n \to 0$, which is due to the fact that $|\lambda_n-1|^{1/2}=O\bigl(n^{-\frac {\varepsilon}{2(2+\varepsilon)}}\bigr)$.
Hence, by Lemma~A.1 from Vandemaele and Veraverbeke~\cite{vv82}, the second ratio on the r.h.s. in~\eqref{proof t217} also tends to~$1$, uniformly in the range $-A \leq x \leq a_n z_n$. The theorem is proved. \qed {\bf Acknowledgments.} \ The author is grateful to the~referee for his valuable remarks and suggestions that led to the improvement of the article. \end{document}
\begin{document} \begin{frontmatter} \title{Efficient Structural Descriptor Sequence to Identify Graph Isomorphism and Graph Automorphism} \author[siva]{Sivakumar Karunakaran} \ead{sivakumar\_karunakaranm@srmuniv.edu.in} \author[lavanya]{Lavanya Selvaganesh\corref{cor1}} \ead{lavanyas.mat@iitbhu.ac.in} \cortext[cor1]{Corresponding author. This work was presented as an invited talk at ICDM 2019, 4-6 Jan 2019 at Amrita University, Coimbatore, India} \address[siva]{SRM Research Institute, S R M Institute of Science and Technology Kattankulathur, Chennai - 603203, INDIA} \address[lavanya]{Department of Mathematical Sciences, Indian Institute of Technology (BHU), Varanasi-221005, INDIA} \begin{abstract} In this paper, we study the graph isomorphism and graph automorphism problems and propose a novel technique to analyze them. Further, we test our technique on some strongly regular graph datasets to demonstrate its efficiency. The neighbourhood matrix $ \mathcal{NM}(G) $ was proposed in \cite{ALPaper} as a novel representation of graphs and was defined using the neighbourhood sets of the vertices. It was also shown that the matrix exhibits a bijection between the product of two well known graph matrices, namely the adjacency matrix and the Laplacian matrix. Further, in a recent work \cite{NM_SPath}, we introduced the sequence of matrices representing the powers of $\mathcal{NM}(G)$, denoted by $ \mathcal{NM}^{\{l\}}, 1\leq l \leq k(G)$, where $ k(G) $ is called the \textbf{iteration number}, $k(G)=\ceil*{\log_{2}diameter(G)} $. In this article, we introduce a structural descriptor sequence and a clique sequence for any undirected unweighted simple graph with the help of the sequence of matrices $ \mathcal{NM}^{\{l\}} $. The $ i^{th} $ element of the structural descriptor sequence encodes the complete structural information of the graph as seen from the vertex $ i\in V(G) $.
The $ i^{th} $ element of the clique sequence encodes the maximal cliques on $ i $ vertices. The above sequences are shown to be graph invariants and are used to study the graph isomorphism and automorphism problems. \end{abstract} \begin{keyword} Graph Matrices, Graph Isomorphism, Maximal clique, Graph Automorphism, Product of Matrices, Structural descriptors. \MSC{05C50, 05C60, 05C62, 05C85} \end{keyword} \end{frontmatter} \section{Introduction} One of the classical and long standing problems in graph theory is the graph isomorphism problem. The study of graph isomorphisms is not only of theoretical interest, but has also found applications in diversified fields such as cryptography, image processing, chemical structure analysis, biometric analysis, mathematical chemistry, bioinformatics, and gene network analysis, to name a few. The challenging task in solving the graph isomorphism problem is finding the permutation matrix, if it exists, that relates the adjacency matrices of the given graphs. There are $ n! $ possible permutation matrices, where $ n $ is the number of vertices, and finding one suitable choice is time consuming. There are various graph invariants that can be used to identify two non-isomorphic graphs. That is, for any two graphs, if a graph invariant differs, then it immediately follows that the graphs are not isomorphic. For example, the following invariants have been well studied for this purpose: the number of vertices, the number of edges, the degree sequence of the graph, the diameter, the colouring number, the eigenvalues of the associated graph matrices, etc. However, if the invariants are equal, one cannot declare the graphs isomorphic. For example, there are non-isomorphic graphs which have the same spectrum; such graphs are referred to as isospectral graphs. There are several algorithms to solve the graph isomorphism problem.
Computationally, many algorithms have been proposed for the same. Ullman \cite{Sub_Iso_Ullman} proposed the first algorithm for a more general problem known as subgraph isomorphism, whose running time is $ \mathcal{O}(n! n^{3}) $. During the last decade, many algorithms such as VF2 \cite{VF2_Luigi}, VF2++ \cite{Alpar_VF2++}, QuickSI \cite{QuickSI}, and GADDI \cite{GADDI_Algori} have been proposed to improve the efficiency of Ullman's algorithm. A recent development was made in 2016 by Babai \cite{Quasipolynomial_Babai}, who proposed a more theoretical algorithm based on a divide and conquer strategy to solve the graph isomorphism problem in quasipolynomial time ($ 2^{(\log n)^{\mathcal{O}(1)}} $), which has certain implementation difficulties. Due to the non-availability of a polynomial time algorithm, many researchers have also studied this problem for special classes of graphs such as trees, planar graphs, interval graphs, and graphs with bounded parameters such as genus, degree and treewidth \cite{Pol_Fixed_Genus,Poly_tree,Poly_Interval,Poly_Permutation,Poly_Planar}. The maximal clique and graph automorphism problems are also interesting and long standing problems in graph theory; much research has been done on the graph automorphism problem \cite{REF_1,REF_2,REF_3,REF_4,REF_8,REF_9} and the clique problem \cite{REF_10,REF_11,REF_12,REF_13}. In this paper, we study the graph isomorphism and graph automorphism problems using our newly proposed structural descriptor sequence and clique sequence, both constructed by our novel technique. These sequences are used to reduce the running time for classifying non-isomorphic graphs and for finding the automorphism groups. We use the recently proposed matrix representation of graphs using neighbourhood sets, called the neighbourhood matrix $ \mathcal{NM}(G) $ \cite{ALPaper}. Further, we also iteratively construct a finite sequence of powers of the given graph and their corresponding neighbourhood matrices \cite{NM_SPath}.
This sequence of graph matrices helps us to define a collection of measures capturing the structural information of graphs. The paper is organized as follows: Section 2 presents all required basic definitions and results. Section 3 is the main section, which defines the collection of measures and the structural descriptor, and gives the main result for testing graph isomorphism. In Section 4, we discuss the clique sequence and its time complexity. Section 5 demonstrates the efficiency of our structural descriptor sequence and clique sequence. Section 6 describes the way of finding the automorphism groups of a given graph and its time complexity. We conclude the paper in Section 7. \section{Sequence of powers of the $ \mathcal{NM}(G) $ matrix} Throughout this paper, we consider only undirected, unweighted simple graphs. For all basic notations and definitions of graph theory, we follow the books by J.A. Bondy and U.S.R. Murty \cite{Graphtheory} and D.B. West \cite{GraphTheoryWest}. In this section, we present all the required notations and definitions. A graph $ G $ is an ordered pair $ (V(G),E(G)) $ consisting of a set $ V(G) $ of vertices and a set $ E(G) $, disjoint from $ V(G) $, of edges, together with an incidence function $ \psi_{G} $ that associates with each edge of $ G $ an unordered pair of vertices of $ G $. As we work with matrices, the vertex set $ V(G) $ is considered to be labelled with the integers $ \{1,2,\ldots,n\} $. For a vertex $v\in V(G)$, let $N_{G}(v)$ denote the set of all neighbours of $ v $. The degree of a vertex $v$ is given by $ d_{G}(v) $ or $|N_{G}(v)|$. The diameter of the graph is denoted by $ diameter(G)$ and the shortest path distance between two vertices $i$ and $j$ in $G$ is denoted by $ d_{G}(i,j), i,j\in V(G)$. Let $ A_{G} \text{ or } A(G), D(G) $ and $ C(G)(:=D(G)-A(G)) $ denote the adjacency matrix, the degree matrix and the Laplacian/admittance matrix of the graph $ G $, respectively.
Two graphs $ G $ and $ H $ are isomorphic, written $ G \cong H $, if there are bijections $ \theta :V(G)\rightarrow V(H) $ and $ \phi: E(G)\rightarrow E(H) $ such that $ \psi_{G}(e)=uv $ if and only if $ \psi_{H}(\phi(e))=\theta(u)\theta(v) $ (that is, a bijection of vertices preserving adjacency); such a pair of mappings is called an isomorphism between $ G $ and $ H $. An automorphism of a graph is an isomorphism of the graph to itself. In the case of a simple graph, an automorphism is just a permutation $ \alpha $ of its vertex set which preserves adjacency: if $ uv $ is an edge then so is $ \alpha (u) \alpha(v) $. The automorphism group of $ G $, denoted by $ Aut(G) $, is the set of all automorphisms of $ G $. Graphs in which no two vertices are similar are called asymmetric; these are the graphs which have only the identity permutation as an automorphism. The graph isomorphism problem tests whether two given graphs are isomorphic or not. In other words, it asks whether there is a one-to-one mapping between the vertices of the graphs preserving adjacency. \begin{Defn}\label{d3} {\normalfont{ \cite{ALPaper}}} Given a graph $ G $, the neighbourhood matrix, denoted by $ \mathcal{NM}(G) =(\eta_{ij})$, is defined as $$\eta_{ij}=\begin{cases} -|N_{G}(i)|, & \text{ if } i=j\\ |N_{G}(j)-N_{G}(i)|, & \text{ if } (i,j)\in E(G)\\ -|N_{G}(i) \cap N_{G}(j)|, & \text{ if } (i,j)\notin E(G) \\ \end{cases} $$ \end{Defn} The following lemma facilitates us with a way of computing the neighbourhood matrix. Proofs of the following results are given in the appendix for immediate reference, as they are awaiting publication elsewhere. \begin{Lemma}\label{prop.2}{\normalfont{ \cite{ALPaper}}} Given a graph $ G $, the entries of any row of $ \mathcal{NM}(G) $ correspond to the subgraph with vertices from the first two levels of the level decomposition of the graph rooted at the given vertex, with edges connecting the vertices in different levels.
$ \square $ \end{Lemma} \begin{Remark}\label{Remark.2}{\normalfont{ \cite{ALPaper}}} The above lemma also reveals the following information about the neighbourhood matrix, which justifies its terminology. For any $ i^{th} $ row of $ \mathcal{NM}(G) $, \begin{enumerate} \item The diagonal entries are either negative or zero. In particular, if $ \eta_{ii}=-c $, then the degree of the vertex is $ c $ and there will be exactly $ c $ positive entries in that row. If $ \eta_{ii}=0 $, then the vertex $ i $ is isolated. \item For some positive integer $ c $, if $ \eta_{ij}=c $, then $ j\in N_{G}(i) $ and there exist $ (c-1) $ vertices that are adjacent to $ j $ and at distance $ 2 $ from $ i $ through $ j $. \item If $ \eta_{ij}=-c $, then $ d_{G}(i,j)=2 $ and there exist $ c $ paths of length two from vertex $ i $ to $ j $. In other words, there exist $ c $ common neighbours between vertex $ i $ and $ j $. \item If an entry $ \eta_{ij}=0 $, then the distance between vertices $ i $ and $ j $ is at least $ 3 $ or the vertices $ i $ and $ j $ lie in different components.$ \square $ \end{enumerate} \end{Remark} \begin{Defn}\label{Def.1} \normalfont{\cite{NM_SPath} } Given a graph $ G $, let $ G^{\{1\}}=G$ and $ \mathcal{NM}^{\{1\}} = \mathcal{NM}(G^{\{1\}}) $. For $l>1$, let $G^{\{l\}}$ be the graph constructed from the graph $G^{\{l-1\}}$ as below: $ V(G^{\{l\}})=V(G^{\{l-1\}}) $ and $ E(G^{\{l\}})=E(G^{\{l-1\}})\cup \{(i,j): \eta_{ij}^{\{l-1\}}<0, i\neq j\} $. The sequence of matrices, denoted by $ \mathcal{NM}^{\{l\}} $, is defined iteratively as $ \mathcal{NM}^{\{l\}}=\mathcal{NM}(G^{\{l\}}) $ and can be constructed by Definition \ref{d3}. We refer to this finite sequence of matrices as \textbf{\textit{Sequence of Powers of $ \mathcal{NM} $}}.
\end{Defn} \begin{Remark} \normalfont{\cite{NM_SPath} } The adjacency matrix $ A(G^{\{l\}}) $ is given by: $$ A(G^{\{l\}}) = (a^{\{l\}}_{ij}) =\begin{cases} 1,& \text{ if } \eta_{ij}^{\{l-1\}}\neq 0, i\neq j\\ 0,& \text{ otherwise } \end{cases} $$ $ \square $ \end{Remark} \begin{Defn}\label{Def.2} \normalfont{\cite{NM_SPath} } Let $k$ be the smallest possible integer such that $\mathcal{NM}^{\{k\}}$ has either no zero entry or the number of non-zero entries of $\mathcal{NM}^{\{k\}} $ and $\mathcal{NM}^{\{k+1\}} $ are the same. The number $k(G):=k$ is called the \textbf{\textit{Iteration Number}} of $G$. \end{Defn} \begin{Remark}\label{Rem.2} \normalfont{\cite{NM_SPath} } When $G = K_n$, the complete graph, $\mathcal{NM}(K_n)$ has no zero entries. Hence for $K_n$, the iteration number is $k(K_n) = 1$. Further, for a graph $G$ with diameter 2, $\mathcal{NM}(G)$ has no zero entries; hence the iteration number $k(G) = 1$. $ \square $ \end{Remark} Let the number of non-zero entries in $\mathcal{NM}^{\{k\}} $ be denoted by $z$. Note that $z\leq n^2$. \begin{Theorem}\label{Thm.4} \normalfont{\cite{NM_SPath} } A graph $ G $ is connected if and only if $ z = n^{2} $. In addition, the iteration number $k(G)$ of a graph $G\neq K_{n}$ is given by $ k(G)=\ceil*{\log_{2}(diameter(G))} $. (For $ G=K_{n}$, $k(G)=1 $ and $ \mathcal{NM}^{\{1\}}=\mathcal{NM}(G)=-C(G)$.) $ \square $ \end{Theorem} \begin{Corollary}\label{Cor.1} \normalfont{\cite{NM_SPath} } A graph $G$ is disconnected if and only if $ z<n^{2} $. Further, the iteration number $ k(G) $ of $G$ is given by $k(G)=\ceil*{\log_{2}S} $, where $ S $ is the maximum over the diameters of the components of $G$.
$ \square $ \end{Corollary} \begin{Defn}\label{IsoDef1} Let $ N({G^{\{l\}}},i)$ denote the neighbourhood set of a vertex $ i $ in $ G^{\{l\}} $, that is, \begin{equation}\label{Isoeq.1} N({G^{\{l\}}},i) =\{x : \eta_{ix}^{\{l\}}>0\} \end{equation} Let $ X(G^{\{l\}},i) $ be the set of vertices given by \begin{equation}\label{Isoeq.2} X(G^{\{l\}},i)=\{y:\eta_{iy}^{\{l\}}<0, i\neq y \} \end{equation} \end{Defn} Note that, when $ l=1 $, $ N(G^{\{1\}},i)=N_{G}(i)$. For any $ l>1 $, $ N({G^{\{l\}}},i)= N({G^{\{l-1\}}},i) \cup X({G^{\{l-1\}}},i)$, for any given $ i\in V(G). $ \begin{Theorem}\label{Thm.1}\normalfont{\cite{NM_SPath} } Let $G$ be a graph on $n$ vertices and $k(G)$ be the iteration number of $G$. For $1 \leq l \leq k(G)$, the off-diagonal elements of $\mathcal{NM}^{\{l\}}$ can be characterized as follows: For $ 1\leq i\neq j\leq n $ \begin{enumerate} \item {$\eta^{\{l\}}_{ij}=0$ if and only if $ d_{G}(i,j)> 2^{l}$} \item {$\eta^{\{l\}}_{ij}>0$ if and only if $0 < d_{G}(i,j)\leq 2^{l-1}$} \item {$\eta^{\{l\}}_{ij}<0$ if and only if $2^{l-1}<d_{G}(i,j) \leq 2^l$} $ \square $ \end{enumerate} \end{Theorem} By combining definition \ref{d3}, definition \ref{Def.1} and Theorem \ref{Thm.1} we state the following corollaries without proof. \begin{Corollary}\label{RRM1} For a given graph $ G $, if $ \eta^{\{l\}}_{ij} >0$, for some $ l\leq k(G) $, then the following conditions are equivalent: \begin{enumerate} \item $ (i,j)\in E(G^{\{p\}}) $, where $ p\geq l $. \item $ \eta^{\{l\}}_{ij}=|N(G^{\{l\}},j) -N(G^{\{l\}},i)| $. 
\item $ d_{G}(i,j)\leq 2^{l-1} $. $ \square $ \end{enumerate} \end{Corollary} \begin{Corollary}\label{RRM2} For a graph $ G $, if $ \eta^{\{l\}}_{ij}=0 $, $ i\neq j $, then the following conditions are equivalent: \begin{enumerate} \item $ (i,j)\notin E(G^{\{p\}}) $, where $ 1\leq p\leq l+1 $. \item $ d_{G}(i,j)>2^{l} $. $ \square $ \end{enumerate} \end{Corollary} \begin{Corollary}\label{RRM3} For a graph $ G $, if $ \eta^{\{l\}}_{ij}<0 $, $ i\neq j$, for some $ l$, $1\leq l\leq k(G) $, then the following conditions are equivalent: \begin{enumerate} \item $ (i,j)\in E(G^{\{p\}}) $, where $ p\geq l+1 $. \item $ \eta^{\{l\}}_{ij}=-|N(G^{\{l\}},i)\cap N(G^{\{l\}},j)|$. \item $ 2^{l-1}<d_{G}(i,j)\leq 2^{l} $. $ \square $ \end{enumerate} \end{Corollary} \section{Structural descriptor sequence of a graph} Let $ \mathcal{NM}^{\{l\}}(G)$, $ 1\leq l\leq k(G) $, be the sequence of powers of $ \mathcal{NM} $ corresponding to a graph $ G $, where $ k(G)=\ceil*{\log_{2}diameter(G)} $. In the following, we define a novel collection of measures that quantify the structure of a graph. \begin{Defn} \label{IsoDef2} Let $ w_{1}, w_{2}, w_{3}, w_{4}, w_{5}$ and $ w_{6} $ be six distinct irrational numbers. For $ x\in N(G^{\{l\}},i) $, let \begin{equation}\label{Isoeq.3} M_{1}(G^{\{l\}},i,x)={\Bigg(\dfrac{\eta_{ix}^{\{l\}}}{w_{1}} \Bigg)} + {\Bigg(\dfrac{|\eta_{xx}^{\{l\}}|- \eta_{ix}^{\{l\}} +w_{3}}{w_{2}} \Bigg).} \end{equation} Consider an ordering $\langle x_{1}, x_{2},\ldots\rangle $ of the elements of $ N(G^{\{l\}},i) $ such that $ M_{1}(G^{\{l\}},i,x_{1})\leq M_{1}(G^{\{l\}},i,x_{2})\leq\ldots\leq M_{1}(G^{\{l\}},i,x_{|N({G^{\{l\}}},i)|}) $.
For $ y\in X(G^{\{l\}},i) $, let \begin{equation}\label{Isoeq.4} \begin{aligned} M_{2}(G^{\{l\}},i,y) {} = & {} {\Bigg(\dfrac{|\eta_{iy}^{\{l\}}|}{w_{4}} \Bigg)}+ {\Bigg(\dfrac{|N({G^{\{l\}}},y)\cap X({G^{\{l\}}},i)| +w_{3}}{w_{5}} \Bigg)}\\ & + {\Bigg(\dfrac{|\eta_{yy}^{\{l\}}|-|N({G^{\{l\}}},y)\cap X({G^{\{l\}}},i)|-|\eta_{iy}^{\{l\}}|+w_{3}}{w_{6}} \Bigg).} \end{aligned} \end{equation}\normalsize Consider an ordering $ \langle y_{1}, y_{2},\ldots\rangle $ of the elements of $ X(G^{\{l\}},i) $ such that $ M_{2}(G^{\{l\}},i,y_{1})\leq M_{2}(G^{\{l\}},i,y_{2})\leq\ldots\leq M_{2}(G^{\{l\}},i,y_{|X({G^{\{l\}}},i)|}) $. \end{Defn} Note that by Lemma \ref{prop.2}, the induced subgraph obtained from the two level decomposition of $ G^{\{l\}} $ with root $ i $ has vertex set $ [i\cup N(G^{\{l\}},i)\cup X(G^{\{l\}},i) ] $. In the above definition, $ M_{1} $ is a weighted sum counting the edges that connect vertices in level 1 to level 2 and the edges that connect vertices within level 1. That is, for $x\in N(G^{\{l\}},i)$, $\eta^{\{l\}}_{ix} $ represents the number of vertices connected with $ x $ and not belonging to the same level as $ x $, while $ |\eta^{\{l\}}_{xx}|-\eta^{\{l\}}_{ix} $ counts the number of vertices connected to $ x $ and belonging to the same level as $ x $. We use different weights, namely $ \dfrac{1}{w_{1}}, \dfrac{1}{w_{2}}, \ldots $, to distinguish these two counts. Similarly, $ M_{2} $ is a weighted sum counting the edges that connect vertices in level $ 2 $ with vertices in level $ 1 $, with vertices within level $ 2 $, and with vertices in level $ 3 $. That is, for $ y\in X(G^{\{l\}},i)$, $|\eta^{\{l\}}_{iy}| $ represents the number of vertices adjacent to $ y $ from level $ 1 $, while $ |N(G^{\{l\}},y)\cap X(G^{\{l\}},i)| $ counts the number of vertices in level $ 2 $ adjacent with $ y $. Note that the matrix does not immediately reveal the edges present within this level.
Finding them requires some extra effort. The next term, $ |\eta_{yy}^{\{l\}}|-|N({G^{\{l\}}},y)\cap X({G^{\{l\}}},i)|-|\eta_{iy}^{\{l\}}| $, counts the number of vertices adjacent with $ y $ and not belonging to level 1 or level 2. Here again, we use different weights, $ \dfrac{1}{w_{4}},\dfrac{1}{w_{5}}, \dfrac{1}{w_{6}}$, to keep track of these values. The irrational number $ w_{3} $ is used to keep a count of the number of vertices which are isolated within their level in the level decomposition. By Corollaries \ref{RRM1}, \ref{RRM2} and \ref{RRM3}, the integer coefficients in $ M_{1}(G^{\{l\}},i,x) $ and $ M_{2}(G^{\{l\}},i,y) $ give complete information about the induced subgraph $ [i\cup N(G^{\{l\}},i)\cup X(G^{\{l\}},i)] $, along with the volume of edges leaving it, for any given $ l $. Let us now define a finite sequence in which each element is associated with a vertex. We call this sequence the \textbf{\textit{Structural Descriptor Sequence}}. \begin{Defn}\label{IsoDef3} The \textbf{\textit{Structural Descriptor Sequence}} $ R_{G} (i)$, for $ i\in V(G) $, is defined as the weighted sum of the ordered measures $ M_{1}(G^{\{l\}},i,x_{j}) $ and $ M_{2}(G^{\{l\}},i,y_{j}) $ given by \begin{equation} \label{Isoeq.5} R_{G}(i)=\sum\limits_{l=1}^{k} \dfrac{1}{Irr(l)} \Bigg({{\sum\limits_{j=1}^{|N({G^{\{l\}}},i)|}\dfrac{M_{1}({G^{\{l\}}},i,x_{j})}{Irr(j)}}+ {\sum\limits_{j=1}^{|X({G^{\{l\}}},i)|}\dfrac{M_{2}({G^{\{l\}}},i, y_{j})}{Irr(j)}}} \Bigg ) \end{equation} where $ Irr $ is a finite sequence of irrational numbers of the form $ \langle \sqrt{2}, \sqrt{3}, \sqrt{5},\ldots\rangle $. \end{Defn} In the above definition, note that for every $ i\in V(G) $, $ R_{G}(i) $ captures the complete structural information about the node $ i $, and hence the finite sequence $ \{R_{G}(i)\} $ is a complete descriptor of the given graph.
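To make the construction concrete, the following is a minimal Python sketch of $R_{G}$, under the weight choices $w_{1}=\sqrt{7},\ldots,w_{6}=\sqrt{19}$ adopted later in the paper; the function names (\texttt{nm\_sequence}, \texttt{descriptor\_sequence}) are illustrative, and the $\mathcal{NM}$ matrix is assumed to be the product $A(D-A)$ of the adjacency matrix with the Laplacian, as in Algorithm \textsc{SP-$\mathcal{NM}$} of the Appendix.

```python
from math import sqrt

# Weights and the Irr sequence as chosen later in the paper:
# w1..w6 = sqrt 7, sqrt 11, sqrt 3, sqrt 13, sqrt 17, sqrt 19;
# Irr = <sqrt 2, sqrt 3, sqrt 5, ...> (square roots of primes; enough terms for small n).
W1, W2, W3, W4, W5, W6 = (sqrt(p) for p in (7, 11, 3, 13, 17, 19))
IRR = [sqrt(p) for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43)]

def nm_sequence(adj):
    """Powers NM^{1},...,NM^{k} of the neighbourhood matrix NM = A(D - A)."""
    n = len(adj)
    A, seq, prev = [r[:] for r in adj], [], -1
    while True:
        d = [sum(r) for r in A]
        NM = [[sum(A[i][p] * ((d[p] if p == j else 0) - A[p][j]) for p in range(n))
               for j in range(n)] for i in range(n)]
        seq.append(NM)
        nnz = sum(v != 0 for r in NM for v in r)
        if nnz == n * n:          # no zero entries: k = l
            return seq
        if nnz == prev:           # count stabilised: k = l - 1
            return seq[:-1]
        prev = nnz
        A = [[1 if i != j and NM[i][j] != 0 else 0 for j in range(n)] for i in range(n)]

def descriptor_sequence(adj):
    """Structural descriptor sequence R_G of Definition (IsoDef3)."""
    n = len(adj)
    R = [0.0] * n
    for l, NM in enumerate(nm_sequence(adj), start=1):
        for i in range(n):
            N = [x for x in range(n) if NM[i][x] > 0]             # N(G^{l}, i)
            X = [y for y in range(n) if y != i and NM[i][y] < 0]  # X(G^{l}, i)
            m1 = sorted(NM[i][x] / W1 + (abs(NM[x][x]) - NM[i][x] + W3) / W2
                        for x in N)
            m2 = []
            for y in X:
                c = sum(1 for z in X if NM[y][z] > 0)  # |N(G^{l},y) ∩ X(G^{l},i)|
                m2.append(abs(NM[i][y]) / W4 + (c + W3) / W5
                          + (abs(NM[y][y]) - c - abs(NM[i][y]) + W3) / W6)
            m2.sort()
            R[i] += (sum(v / IRR[j] for j, v in enumerate(m1))
                     + sum(v / IRR[j] for j, v in enumerate(m2))) / IRR[l - 1]
    return R
```

For isomorphic graphs the sorted sequences coincide (Theorem \ref{Poly_Iso_Thm_1} below), while symmetric vertices of a single graph receive equal $R_{G}$-values.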
We use the \textbf{\textit{Structural Descriptor Sequence}} explicitly to study the graph isomorphism problem. The rest of this section discusses this problem and a feasible approach to it. \begin{Remark} The following inequalities are immediate from the definition. For any given $ i\in V(G)$, $1\leq l\leq k(G) $, $ x\in N(G^{\{l\}},i)$ and $ y\in X(G^{\{l\}},i) $, we have \begin{enumerate} \item $ 1\leq \eta_{ix}^{\{l\}}\leq (n-1) $ \item $ 0\leq |\eta_{xx}^{\{l\}}|- \eta_{ix}^{\{l\}}\leq (n-2) $ \item $ 1\leq |\eta_{iy}^{\{l\}}|\leq (n-2)$ \item $ 0\leq |N({G^{\{l\}}},y)\cap X({G^{\{l\}}},i)|\leq (n-3) $ \item $ 0\leq \Big(|\eta_{yy}^{\{l\}}|-|N({G^{\{l\}}},y)\cap X({G^{\{l\}}},i)|-|\eta_{iy}^{\{l\}}|\Big)\leq (n-3) $ $ \square $ \end{enumerate} \end{Remark} It is well known that two graphs with different numbers of vertices or with non-identical degree sequences are non-isomorphic. Hence, we next prove a theorem to test isomorphism between two graphs having the same number of vertices and identical degree sequences. \begin{Theorem} \label{Poly_Iso_Thm_1} If two graphs $ G $ and $ H $ are isomorphic, then the corresponding sorted structural descriptor sequences $ R_{G} $ and $ R_{H} $ are identical. \end{Theorem} \begin{proof} Suppose $ G $ and $ H $ are isomorphic. Then $ k(G)=k(H) $, and there exists an adjacency-preserving bijection $\phi :V(G)\rightarrow V(H) $ such that $ (i,j)\in E(G) $ if and only if $ (\phi(i),\phi(j))\in E(H) $, where $ 1\leq i,j\leq n $. It is well known that, for any $ i\in V(G) $ and given $ l$, $1\leq l\leq k(G)$, the subgraph induced by $ \{x\in V(G) : d_{G}(i,x)\leq 2^{l-1}\}$ is isomorphic to the subgraph induced by $\{\phi(x)\in V(H) : d_{H}(\phi(i),\phi(x))\leq 2^{l-1}\} $. Similarly, the subgraph induced by the set of vertices $ \{y\in V(G) : 2^{l-1}<d_{G}(i,y)\leq 2^{l}\}$ is isomorphic to the one induced by $\{\phi(y)\in V(H) :2^{l-1}< d_{H}(\phi(i),\phi(y))\leq 2^{l}\} $.
By Corollaries \ref{RRM1} and \ref{RRM3} and Definition \ref{IsoDef1}, $ \phi $ maps $ N(G^{\{l\}},i) $ onto $ N(H^{\{l\}},\phi(i)) $ and $X(G^{\{l\}},i) $ onto $ X(H^{\{l\}},\phi(i)) $. By equations (\ref{Isoeq.3}) and (\ref{Isoeq.4}), for every $ i\in V(G) $ and every $ x_{j}\in N(G^{\{l\}},i)$ we have $M_{1}(G^{\{l\}},i,x_{j})=M_{1} (H^{\{l\}},\phi(i),\phi(x_{j})) $, and for every $ i\in V(G) $ and $ y_{j}\in X(G^{\{l\}},i) $ we have $ M_{2}(G^{\{l\}},i,y_{j})=M_{2} (H^{\{l\}},\phi(i),\phi(y_{j})) $. Since the quantities $ M_{1}(G^{\{l\}},i,x_{j}), M_{1}(H^{\{l\}},\phi(i),\phi(x_{j})), M_{2}(G^{\{l\}},i,y_{j}) $ and $ M_{2}(H^{\{l\}},\phi(i),\phi(y_{j}) ) $ are linear combinations of irrational numbers with integer coefficients, by equating the coefficients of like terms we further get the following five equalities between entries of $ \mathcal{NM}^{\{l\}}(G) $ and $ \mathcal{NM}^{\{l\}}(H) $: \begin{eqnarray}\label{ee1} \eta^{\{l\}}_{ix_{j}} & = & \eta^{\{l\}}_{\phi(i)\phi(x_{j})} \\ \label{ee2} \eta^{\{l\}}_{iy_{j}} & = & \eta^{\{l\}}_{\phi(i)\phi(y_{j})} \\ \label{ee3} \big(|\eta^{\{l\}}_{x_{j}x_{j}}|-\eta^{\{l\}}_{ix_{j}}\big )& = &\big ( |\eta^{\{l\}}_{\phi (x_{j})\phi (x_{j})}|-\eta^{\{l\}}_{\phi(i) \phi (x_{j})}\big ) \\ \label{ee4} \big |N({G^{\{l\}}},y_{j})\cap X({G^{\{l\}}},i)\big |& = &\big |N({H^{\{l\}}},\phi (y_{j}))\cap X({H^{\{l\}}},\phi(i))\big | \end{eqnarray} \begin{equation} \label{ee5} \begin{aligned} \big(|\eta_{y_{j}y_{j}}^{\{l\}}|-|\eta_{iy_{j}}^{\{l\}}|-|N({G^{\{l\}}},y_{j})\cap X({G^{\{l\}}},i)|\big) {} & = \big(|\eta_{\phi (y_{j})\phi(y_{j})}^{\{l\}}| -|\eta_{\phi(i)\phi(y_{j})}^{\{l\}}| \\ & -|N({H^{\{l\}}},\phi(y_{j}))\cap X({H^{\{l\}}},\phi(i))|\big) \end{aligned} \end{equation} \normalsize where $ x_{j}\in N(G^{\{l\}},i) $ and $ y_{j}\in X(G^{\{l\}},i) $, for every $ i\in V(G) $ and every $ l$, $ 1\leq l\leq k(G) $. Therefore there exists a bijection from $ R_{G}$ to $ R_{H} $ satisfying
$ R_{G}(i)=R_{H}(\phi(i)) $ for all $ i\in V(G)$. \end{proof} \begin{Remark}\label{REMK3} The converse of Theorem \ref{Poly_Iso_Thm_1} need not hold. For example, in the case of strongly regular graphs with parameters $ (n,r,\lambda,\mu) $, where $ n $ is the number of vertices, $ r $ the regularity, $ \lambda $ the number of common neighbours of adjacent vertices and $ \mu $ the number of common neighbours of non-adjacent vertices, the sequences are identical for any two graphs with the same parameters, even when the graphs are non-isomorphic. \end{Remark} \subsection{Algorithm to compute the Structural descriptor sequence} We now present a brief description of the proposed algorithm to find the structural descriptor sequence of any given graph $ G $. The pseudocode of the algorithm is given in the Appendix. The objective is to obtain the unique structural sequence given by a descriptor of the graph $ G $ using Algorithms \ref{ALO.1} - \ref{ISOALO.3}. The algorithm is based on the construction of the sequence of graphs and their corresponding matrices $ \mathcal{NM}^{\{l\}}$, $1\leq l\leq k(G) $. The structural descriptor sequence is a one-time computation for any graph. If the sorted sequences are not identical, then we conclude that the corresponding graphs are non-isomorphic. Moreover, the structural descriptor sequence can be constructed in polynomial time. Given a graph $ G $, we use Algorithm \ref{ALO.1} from \cite{NM_SPath} to compute the sequence of powers of the neighbourhood matrix $ \mathcal{NM}^{\{l\}}$. In this module (Algorithm \ref{ALO.1}), we compute the $ \mathcal{NM}(G) $ matrix and the sequence of $ k(G) $ matrices, namely the powers of $ \mathcal{NM} (G)$ [denoted by $ SPG(:,:,:) $], associated with $ G $. The algorithm is referred to as \textsc{SP-$\mathcal{NM}(A)$}, where $ A $ is the adjacency matrix of $G$.
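As a cross-check of this module, here is a minimal Python sketch of \textsc{SP-$\mathcal{NM}$}, assuming the Laplacian-product construction $\mathcal{NM}=A(D-A)$ used in the pseudocode of the Appendix; the helper name \texttt{sp\_nm} is illustrative.

```python
def sp_nm(adj):
    """Iteration number k and the powers NM^{1},...,NM^{k}, after Algorithm SP-NM.

    adj: n x n 0/1 adjacency matrix (list of lists).  Each step forms NM = A * L
    with L the Laplacian of the current A, then replaces A by the 0/1 support of
    NM with a zero diagonal.  Returns (k, [NM^{1}, ..., NM^{k}]).
    """
    n = len(adj)
    A = [row[:] for row in adj]
    seq, prev_nnz = [], -1
    while True:
        deg = [sum(row) for row in A]
        L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]
        NM = [[sum(A[i][p] * L[p][j] for p in range(n)) for j in range(n)]
              for i in range(n)]
        seq.append(NM)
        nnz = sum(1 for r in NM for v in r if v != 0)
        if nnz == n * n:           # no zero entries: connected and saturated, k = l
            return len(seq), seq
        if nnz == prev_nnz:        # count stabilised: k = l - 1
            return len(seq) - 1, seq[:-1]
        prev_nnz = nnz
        A = [[1 if (i != j and NM[i][j] != 0) else 0 for j in range(n)] for i in range(n)]
```

On the path $P_{5}$ (diameter $4$) this yields $k=2=\ceil*{\log_{2}4}$, and the signs of the off-diagonal entries of $\mathcal{NM}^{\{1\}}$ follow Theorem \ref{Thm.1}.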
Next, we compute the sequences of real numbers $ M_{1}(G^{\{l\}},i,x_{j})$ and $ M_{2}(G^{\{l\}},i,y_{j})$, for all $ i $ and each $l$, $1\leq l\leq k(G) $, using equations (\ref{Isoeq.1}), (\ref{Isoeq.2}), (\ref{Isoeq.3}) and (\ref{Isoeq.4}); these correspond to the entries of $ \mathcal{NM} $ describing the structure of the two level decomposition rooted at each vertex $i\in V(G^{\{l\}})$. For evaluating equations (\ref{Isoeq.3}) and (\ref{Isoeq.4}), we choose the weights $ w_{1} $ to $ w_{6} $ as $ w_{1}=\sqrt{7}, w_{2}=\sqrt{11}, w_{3}=\sqrt{3}, w_{4}=\sqrt{13}, w_{5}=\sqrt{17}, w_{6}=\sqrt{19} $; this is all done by Algorithm \ref{ISOALO.3}. Finally, we compute the sequence of $ n $ real numbers $ R_{G} $ given by equation (\ref{Isoeq.5}), one for every vertex of the given graph $ G $. The sequence $ R_{G} $ is constructed from the structural information of the two level decompositions of $ G^{\{l\}}$, $1\leq l\leq k$, where $k=\ceil*{\log_{2}diameter(G)} $, using the entries of the sequence of powers $ \mathcal{NM}^{\{l\}} $; this is done by Algorithm \ref{ISOALO.2}. \subsection{Time Complexity} As stated before, the above-described algorithm has been designed in three modules and is presented in the Appendix. In the main module (Algorithm \ref{ISOALO.2}), we compute the structural descriptor sequence $ R_{G} $ corresponding to the given graph $ G $. The algorithm is named \textsc{$S_{-}D _{-} S(A,Irr)$}, where $ A $ is the adjacency matrix of $ G $ and $ Irr $ is the sequence of square roots of the first $ (n-1) $ primes. We use Algorithm \ref{ALO.1} within Algorithm \ref{ISOALO.2} to compute the $ \mathcal{NM}(G) $ matrix and the sequence of $ k(G) $ matrices, namely the powers of $ \mathcal{NM} (G)$ [denoted by $ SPG(:,:,:) $], associated with $ G $.
The algorithm is named \textsc{SP-$\mathcal{NM}(A)$} because we apply the product of two matrices, namely the adjacency matrix $A(G)$ and the Laplacian $C(G)$, to obtain $\mathcal{NM}(G)$, and we do this $ k(G) $ times, where $ k(G)= \ceil*{\log_{2}(diameter(G))}$, when $G$ is connected, and $ k(G) + 1$ times when $G $ is disconnected. We use the Coppersmith--Winograd algorithm \cite{C.W} for matrix multiplication. The running time of Algorithm \ref{ALO.1} is $ \mathcal{O}(k\cdot n^{2.3737}) $. Finally, we use Algorithm \ref{ISOALO.3} within Algorithm \ref{ISOALO.2} to compute the structural descriptor for each matrix in the sequence of powers of $ \mathcal{NM}(G) $. The algorithm is named \textsc{Structural$ _{-} $ Descriptor}($ \mathcal{NM},n,Irr $), where $ \mathcal{NM} $ is any matrix in the sequence of powers of $\mathcal{NM}(G) $ corresponding to the given $ G $ and $ n $ is the number of vertices. The running time of Algorithm \ref{ISOALO.3} is $ \mathcal{O}(n^{3}) $. Therefore the total running time of Algorithm \ref{ISOALO.2} is $ \mathcal{O}(n^{3}\log n) $. Note that the contrapositive of Theorem \ref{Poly_Iso_Thm_1} states that if the sequences $ R_{G} $ and $ R_{H} $ differ in at least one position, then the graphs are non-isomorphic. We exploit this to study the graph isomorphism problem. In Algorithms \ref{ALO.1} - \ref{ISOALO.3}, we implement the computation of the structural descriptor sequence and use the theorem to decide if the graphs are non-isomorphic. However, in view of Remark \ref{REMK3}, when the sequences are identical for two graphs, we cannot immediately conclude whether the graphs are isomorphic or not. Hence we extract more information about the structure of the graphs: using the first matrix of the sequence, namely $ \mathcal{NM}(G) $, we find the maximal cliques of all possible sizes. We use this information to further discriminate the given graphs.
\section{Clique sequence of a graph} We first enumerate all maximal cliques of size $ i$, $1\leq i\leq w(G)$, the clique number, and store them in a matrix of size $ t_{i}\times i $, denoted $ LK_{t_{i},i} $, where $ t_{i} $ is the number of maximal cliques of size $ i $. Note that for each $ i $ we obtain one such matrix, that is, $ w(G) $ matrices in total. Secondly, for each $ i $, we count the number of size-$i$ maximal cliques to which each vertex belongs and store these counts as a vector $ (C_{j}^{i})$, $1\leq j\leq n$, of size $ n $. In this fashion we get $ w(G) $ such vectors. Consider the sequence obtained by computing, for each $ j $, $ R=\Bigg\{\sum\limits_{i=1}^{w(G)}\dfrac{C_{j}^{i}}{Irr(i)}\Bigg\}_{j=1}^{n}$. Finally, we construct the clique sequence $ CS $, defined as $ CS=\Bigg\{\sum_{j=1}^{n} C_{j}^{i}\cdot R(j) \Bigg\}_{i=1}^{w(G)} $. \subsection{Description of the Maximal Cliques Algorithm} In this section we present a brief description of the proposed algorithm to find the maximal cliques of all possible sizes of a given graph $ G $. The algorithm is based on the neighbourhood matrix corresponding to $ G $. The enumeration of all maximal cliques is done by Algorithms \ref{Clique_Main} and \ref{Clique_Sub}. Algorithm \ref{Clique_Sub} is run iteratively within Algorithm \ref{Clique_Main} with the following inputs: $ A $, the adjacency matrix of the given graph; $ W \subset V $, a set of vertices inducing a complete subgraph on $ 3d $ vertices, $ d = 1,2,\ldots $; $X\subset V$ with $ X+W\subseteq G $, where the labels of the vertices in $ X $ are greater than the labels of the vertices in $ W $; and $Y\subset V$ with $ Y+W\subset G $, where the labels of the vertices in $ X $ are greater than the labels of the vertices in $ Y $. Initially, $W=\emptyset$, $ X=\{1,2,\ldots,n\} $ and $ Y=\emptyset $. First we find the adjacency matrix $ C $ corresponding to $ [X] $ and the neighbourhood matrix $ CL $ corresponding to $ C $. The algorithm then runs over each vertex in $ [X] $.
Let $ g_{1} $ be the set of neighbours of $ i $; if $ |g_{1}| >0$, then $ i $ is not an isolated vertex and the neighbours of $ i $, together with $ i $, are contained in some $ K_{2},K_{3},\ldots $. We redefine $ g_{1} $ as $ g_{1}=\{q:q>i,q\in g_{1}\} $; this elimination is used to reduce the running time while maintaining the collection of maximal cliques. We then construct the sequence of numbers of triangles passing through each element of $ g_{1} $ together with $ i $, stored in $ p $; this is obtained from the entries of the neighbourhood matrix in step 7. Next we find the set of vertices not contained in any $ K_{3} $, $ r=\{f: p(f)=0,f\in g_{1}\} $. If no vertex in $ Y $ is adjacent to $ \{i,r(f)\}$, $f=1,2,\ldots,|r| $, then the pair $ \{i,r(f)\} $ can be added as a distinct $ K_{2} $; this is done by steps 8 - 12. We again redefine $ g_{1} $ as $ g_{1}=\{q: p(q)>0\} $, find the edges in the upper triangular part of $ C(g_{1},g_{1}) $, and store them in $ a_{4} $. Each edge in $ a_{4} $, together with $ i $, is contained in some $ K_{3},K_{4},\ldots $. Let $ s $ be the number of common neighbours of $ a_{4}(u,1) $ and $ a_{4}(u,2)$, $ u=1,2,\ldots,|a_{4}|$. If $ s>1 $, then we find the common neighbours among $ i,a_{4}(u,1), a_{4}(u,2) $ and store them in $ a_{8} $. Let $ a_{9}=\{q: q>a_{4}(u,2), q\in a_{8}\} $; this set is again used to reduce the running time while maintaining the collection of maximal cliques. If $ |a_{9}|=1 $, then $ \{i,a_{4}(u,1),a_{4}(u,2), a_{9}\} $ can be added as a distinct $ K_{4} $; it cannot be added when there exists an edge between $ a_{8} $ and $ a_{9} $ and at least one vertex in $ Y $ is adjacent to $ \{i,a_{4}(u,1),a_{4}(u,2), a_{9}\} $. This is done by steps 19 - 23. If $ |a_{9}|=0 $ and $ |a_{8}| =0$, then $ \{i,a_{4}(u,1),a_{4}(u,2)\} $ can be added as a distinct $ K_{3} $; it cannot be added when at least one vertex in $ Y $ is adjacent to $ \{i,a_{4}(u,1),a_{4}(u,2)\} $. This is done by steps 25 - 29.
Finally, the set $a_{9} $ together with $ \{i,a_{4}(u,1),a_{4}(u,2)\} $ may be contained in some $ K_{4},K_{5},\ldots $. Upon repeated application of this iterative process to $ W,X $ and $ Y $, we get all maximal cliques. We then obtain the following outputs: $ U $, a cell array containing the number of maximal cliques containing each $ i\in V $, and $ Z $, a cell array containing all maximal cliques. Our aim is to construct a unique sequence from the outputs $ U $ and $ Z $. Let $ C $ be a matrix of size $ t\times n $ constructed from $ U $. Let $ R=\sum_{i=1}^{t}\dfrac{row_{i}(C)}{Irr(i)}$, where $ Irr=\langle\sqrt{2},\sqrt{3},\sqrt{5},\ldots\rangle $. Finally, the clique sequence $ CS $ is defined as $ CS=\Big\{\sum_{j=1}^{n} {C(i,j)}\cdot{R(j)} \Big\}_{i=1}^{t} $. \subsection{Time Complexity} As stated before, the above-described algorithm has been designed in three modules and is presented in the Appendix. In the main module (Algorithm \ref{Clique_Sequence}), we compute the clique sequence and the subsequence of the clique sequence corresponding to the given graph $ G $. The algorithm is named \textsc{Clique$ _{-} $Sequence}($ A,Irr $), where $ A $ is the adjacency matrix of $ G $. We run Algorithm \ref{Clique_Main}, named \textsc{Complete$ _{-} $Cliques}($ A $), in this module. Algorithm \ref{Clique_Sub}, named \textsc{Cliques}($ A,W,X,Y $), is run iteratively within Algorithm \ref{Clique_Main}, where $ A $ is the adjacency matrix of the given graph; $ W \subset V $ is a set of vertices inducing a complete subgraph on $ 3d $ vertices, $ d = 1,2,\ldots $; $X\subset V$ with $ X+W\subseteq G $, where the labels of the vertices in $ X $ are greater than the labels of the vertices in $ W $; and $Y\subset V$ with $ Y+W\subset G $, where the labels of the vertices in $ X $ are greater than the labels of the vertices in $ Y $.
The worst-case running time of Algorithm \ref{Clique_Sub} is $ \mathcal{O}(|X|^{5}) $, and that of Algorithm \ref{Clique_Main} is $ \mathcal{O}(n^{5}) $ + $ \sum_{i=1}^{ct}\sum_{j=3i}^{n-2}(n-j)^{5}{j-1 \choose 3i-1} $, where $ct=\round{\dfrac{n}{4}}$. Expanding this sum in closed form appears to be very hard. Therefore the worst-case running time of Algorithm \ref{Clique_Sequence} is exponential. \section{Computation and Analysis} Identifying non-isomorphic graphs in conjunction with an existing isomorphism algorithm is carried out as follows. Given a collection of graphs $ \mathcal{G} $: \begin{enumerate} \item We first compute the structural descriptor sequence using Algorithms \ref{ALO.1} - \ref{ISOALO.3}, so we have an $ n $-element sequence for each graph in $ \mathcal{G} $. All the distinct sequences computed here represent non-isomorphic graphs ($ \mathcal{G}_{1} $). The remaining graphs $ \mathcal{G}_{2} =(\mathcal{G}-\mathcal{G}_{1}) $ are the input for the next step. \item We compute the clique sequences for the graphs in $ \mathcal{G}_{2} $, yielding an $ n $-element sequence for each graph in $ \mathcal{G}_{2} $. Again, all the distinct sequences represent non-isomorphic graphs; let this collection be $ \mathcal{G}_{3} $, and let $ \mathcal{G}_{4}=\mathcal{G}_{2}-\mathcal{G}_{3} $. \item On the collection $ \mathcal{G}_{4} $ we define a relation $ G_{1}\backsim G_{2} $ if and only if $ CS(G_{1})=CS(G_{2}) $, for $ G_{1},G_{2}\in \mathcal{G}_{4} $. This relation is an equivalence relation and partitions $ \mathcal{G}_{4} $ into equivalence classes, placing graphs with identical $ CS $ sequences in the same class. \item We run the existing isomorphism algorithm to compare graphs within each equivalence class. \end{enumerate} Such preprocessing reduces the computational effort involved in identifying non-isomorphic graphs among a given collection, as compared to running the existing algorithm on all possible pairs in $ \mathcal{G} $.
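The clique sequence used in step (2) can be sketched in Python as follows. For self-containment, this sketch enumerates maximal cliques with the classical Bron--Kerbosch recursion rather than the $\mathcal{NM}$-based Algorithms \ref{Clique_Main}--\ref{Clique_Sub}, and the function names are illustrative.

```python
from math import sqrt

def maximal_cliques(adj):
    """All maximal cliques, via the classical Bron-Kerbosch recursion."""
    n = len(adj)
    nbr = [{j for j in range(n) if adj[i][j]} for i in range(n)]
    out = []
    def bk(r, p, x):
        if not p and not x:
            out.append(sorted(r))   # r is maximal: nothing extends it
            return
        for v in list(p):
            bk(r | {v}, p & nbr[v], x & nbr[v])
            p.remove(v)
            x.add(v)
    bk(set(), set(range(n)), set())
    return out

def clique_sequence(adj):
    """Clique sequence CS built from the per-vertex counts C_j^i."""
    n = len(adj)
    cliques = maximal_cliques(adj)
    w = max(len(c) for c in cliques)                 # clique number w(G)
    irr = [sqrt(p) for p in (2, 3, 5, 7, 11, 13)]    # <sqrt 2, sqrt 3, ...> (enough here)
    # C[i][j]: number of maximal cliques of size i+1 containing vertex j.
    C = [[0] * n for _ in range(w)]
    for c in cliques:
        for j in c:
            C[len(c) - 1][j] += 1
    R = [sum(C[i][j] / irr[i] for i in range(w)) for j in range(n)]
    CS = [sum(C[i][j] * R[j] for j in range(n)) for i in range(w)]
    return CS
```

For $K_{4}$ the only maximal clique is the whole vertex set, so $CS$ has a single non-zero entry $4/\sqrt{7}$ in position $4$; for $C_{5}$ the maximal cliques are the five edges.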
The above procedure was implemented to verify our claim on relevant datasets; for lack of space, we present only existing benchmark datasets. \underline{\textbf{$ DS_{1} $:}} This dataset contains $ 3854 $ graphs, all pairwise non-isomorphic strongly regular graphs with parameters $ (n,r,\lambda,\mu) $, where $ n=35 $, $ r=18 $, $ \lambda=9 $, $ \mu=9 $. \underline{\textbf{$ DS_{2} $:}} This dataset contains $ 180 $ graphs, all pairwise non-isomorphic strongly regular graphs with parameters $ (n,r,\lambda,\mu) $, where $ n=36 $, $ r=14 $, $ \lambda=6 $, $ \mu=4 $. \underline{\textbf{$ DS_{3} $:}} This dataset contains $ 28 $ graphs, all pairwise non-isomorphic strongly regular graphs with parameters $ (n,r,\lambda,\mu) $, where $ n=40 $, $ r=12 $, $ \lambda=2 $, $ \mu=4 $. \underline{\textbf{$ DS_{4} $:}} This dataset contains $ 78 $ graphs, all pairwise non-isomorphic strongly regular graphs with parameters $ (n,r,\lambda,\mu) $, where $ n=45 $, $ r=12 $, $ \lambda=3 $, $ \mu=3 $. \underline{\textbf{$ DS_{5} $:}} This dataset contains $ 18 $ graphs, all pairwise non-isomorphic strongly regular graphs with parameters $ (n,r,\lambda,\mu) $, where $ n=50 $, $ r=21 $, $ \lambda=8 $, $ \mu=9 $. \underline{\textbf{$ DS_{6} $:}} This dataset contains $ 167 $ graphs, all pairwise non-isomorphic strongly regular graphs with parameters $ (n,r,\lambda,\mu) $, where $ n=64 $, $ r=18 $, $ \lambda=2 $, $ \mu=6 $. \underline{\textbf{$ DS_{7} $:}} This dataset contains $ 1000 $ graphs, all regular graphs from the family $ (n,r) $, where $ n=35 $, $ r=18$; in this collection, $ 794 $ graphs are pairwise non-isomorphic. \\ \underline{\textbf{On the dataset $ DS_{1} $}} Among the $ 3854 $ graphs, we could identify $ 3838 $ graphs as non-isomorphic with distinct sequences; the remaining $ 16 $ graphs were classified into $ 8 $ equivalence classes with sizes $v=\{ 2,2,2,2,2,2,2,2 \}$.
Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class $ i $, we can completely distinguish the graph collection; this took only $ 2376.5268 $ seconds in total, instead of running the existing isomorphism algorithm on $ {3854\choose 2} $ pairs, which takes at least 2 days. \underline{\textbf{On the dataset $ DS_{2 } $}} Among the $180 $ graphs, we could identify $ 81 $ graphs as non-isomorphic with distinct sequences; the remaining $ 99 $ graphs were classified into $ 19 $ equivalence classes with sizes\\ $v=\{2,2,2,2,2,3,4,4,4,4,5,6,6,7,7,7,7,9,16 \}$. Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class $ i $, we can completely distinguish the graph collection; this took only $ 18.5414 $ seconds in total, while the existing isomorphism algorithm on $ {180 \choose 2} $ pairs takes $ 62.0438 $ seconds. \underline{\textbf{On the dataset $ DS_{ 3} $}} Among the $28 $ graphs, we could identify $ 20 $ graphs as non-isomorphic with distinct sequences; the remaining $ 8 $ graphs were classified into $ 4 $ equivalence classes with sizes $v=\{ 2,2,2,2 \}$. Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class $ i $, we can completely distinguish the graph collection; this took only $ 2.0855 $ seconds in total, while the existing isomorphism algorithm on $ {28 \choose 2} $ pairs takes $39.9169 $ seconds. \underline{\textbf{On the dataset $ DS_{ 4} $}} Among the $ 78 $ graphs, all $ 78 $ were identified as non-isomorphic with distinct sequences, which took only $ 5.8620 $ seconds in total, while the existing isomorphism algorithm on $ {78 \choose 2} $ pairs takes $ 22.6069$ seconds.
\underline{\textbf{On the dataset $ DS_{5 } $}} Among the $ 18 $ graphs, all $ 18 $ were identified as non-isomorphic with distinct sequences, which took only $ 8.4866 $ seconds in total, while the existing isomorphism algorithm on $ { 18\choose 2} $ pairs takes $ 8.3194$ seconds. \underline{\textbf{On the dataset $ DS_{6 } $}} Among the $167 $ graphs, we could identify $ 146 $ graphs as non-isomorphic with distinct sequences; the remaining $ 21 $ graphs were classified into $ 8 $ equivalence classes with sizes $v=\{ 2,2,2,2,2,3,3,5 \}$. Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class $ i $, we can completely distinguish the graph collection; this took only $ 23.8966 $ seconds in total, while the existing isomorphism algorithm on $ { 167\choose 2} $ pairs takes $ 12597.2899$ seconds. \underline{\textbf{On the dataset $ DS_{ 7} $}} Among the $1000 $ graphs, we could identify $ 583 $ graphs as non-isomorphic with distinct sequences; the remaining $ 417 $ graphs were classified into $ 206 $ equivalence classes with sizes $v=\{2,2,2,\ldots (201 \text{ times}), 3,3,3,3,3 \}$. Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class $ i $, we can completely distinguish the graph collection; this took only $ 508.6248 $ seconds in total, while the existing isomorphism algorithm on $ {1000 \choose 2} $ pairs takes $ 108567.0245$ seconds. \section{Automorphism groups of a graph} In this section we find the automorphism group of a given graph $ G $ using the structural descriptor sequence. This sequence carries complete structural information for each vertex $ i\in V $. Our aim is to obtain an optimal set of candidates for the automorphisms. From this sequence we can conclude that if any two values of the sequence are not identical, then the corresponding vertices can never be mapped to each other by an automorphism.
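This pruning can be sketched in a few lines of Python. As a stand-in vertex invariant, the sketch below uses the degree instead of the full $R_{G}$-values (any invariant with the property above works the same way); the function name is illustrative.

```python
from itertools import permutations, product

def automorphisms_from_invariant(adj, inv):
    """Generate only permutations preserving the vertex invariant `inv`,
    then keep those that also preserve adjacency."""
    n = len(adj)
    classes = {}
    for v in range(n):
        classes.setdefault(inv[v], []).append(v)   # vertices with equal invariant
    groups = list(classes.values())
    auts = []
    # One permutation per class, combined into a full candidate bijection phi.
    for choice in product(*(permutations(g) for g in groups)):
        phi = [0] * n
        for g, perm in zip(groups, choice):
            for src, dst in zip(g, perm):
                phi[src] = dst
        if all(adj[i][j] == adj[phi[i]][phi[j]] for i in range(n) for j in range(n)):
            auts.append(tuple(phi))
    return auts

# Degree as a stand-in invariant for R_G on the path 0-1-2:
P3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
auts = automorphisms_from_invariant(P3, [sum(r) for r in P3])
```

On the path $P_{3}$ only the two end vertices share an invariant value, so only $2!\cdot 1!=2$ candidates are checked instead of all $3!$ permutations, and both candidates turn out to be automorphisms.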
Based on this idea, we proceed as follows: \begin{enumerate} \item Let $ R_{G} $ be the structural descriptor sequence of the given graph. \item Let $ fx_{i} $ be the set of vertices in the $ i^{th} $ component of the given graph, where $ i=1,2,\ldots,w $ and $ w $ is the number of components. \item Let $ h $ be the set of distinct values in $ R_{G}(fx_{i}) $. \item Let $v_{q}= \big\{\alpha : R_{G}(\alpha)=h(q), \alpha\in fx_{i} \big\}$, $q=1,2,\ldots,|h| $. \item Let $ P_{q} $ be the set of all permutations of $ v_{q} $. \item Let $ X $ be the set of all combinations of the $ P_{q}$, $q=1,2,\ldots,|h| $. Then $ X $ is the optimal set of candidates for the automorphisms. \item Finally, we check each candidate permutation in $ X $ against the given graph to obtain the automorphism group. \end{enumerate} \begin{figure} \caption{A graph $ G $ and its structural descriptor sequence $ R_{G} $} \label{autoF.a} \label{autoF.1b} \label{AutoFig} \end{figure} \begin{Corollary}\label{Auto_Cor} If the structural descriptor sequence $ R_{G} $ contains $ n $ distinct elements, then $|Aut( G)|=1 $, that is, $ G $ is an asymmetric graph. \end{Corollary} \begin{proof} Let the structural descriptor sequence $ R_{G} $ contain $ n $ distinct elements, that is, $ R_{G}(i)\neq R_{G}(j)$ for all $ i\neq j$, $i,j\in V(G) $. By the definition of $ R_{G} $, it is immediate that for any two vertices $ i $ and $ j $, the two level decompositions of $ G^{\{l\}} $ rooted at $ i $ and $ j $ are not isomorphic for some $ l$, $1\leq l\leq k(G) $. Since this is true for any two vertices, there exists no non-trivial automorphism of $ G $. Hence $ Aut(G)$ contains only the identity mapping $ e $. \end{proof} \normalsize \begin{Remark} Note that the converse of the above corollary need not hold. For the graph in \normalfont{Figure \ref{autoF.a}}, the structural descriptor sequence $ R_{G} $ is given in Figure \ref{autoF.1b}.
Here, $ R_{G} (4)=R_{G}(7)$, but the neighbours of vertex 4 and the neighbours of vertex 7 differ: they have different degree sequences and their $ R_{G}$-values do not coincide. Hence, we cannot find any automorphism other than the identity. \end{Remark} \subsection{Description of the Automorphism Group Algorithm} In this section, we present a brief description of the proposed algorithm to find all possible automorphisms of a given graph $ G $; this is done by Algorithm \ref{AutoAl_2} and Algorithm \ref{AutoAl_3}. The pseudocode of the algorithm is given in the Appendix. The objective is to find the set of all permutation matrices forming the automorphism group. This work is based on the construction of the structural descriptor sequence of the given graph $ G $, which is done in Algorithm \ref{ISOALO.2}. First we find the vertex sets of the connected components of the given graph; the algorithm is run for each connected component. Let $ fx $ be the vertex set of the $ i^{th} $ component of the given graph, and let $ h $ be the set of distinct values in $ R_{G}(fx) $. For each value in $ h $, we find the set $ v $ of vertices with that identical value in $ R_{G}(fx) $, and then all possible permutations of $ v $. We then form the combinations of all such sets $ v $ corresponding to the elements of $ h $ and store them in $ X $. Finally, we check each permutation in $ X $ against the given graph to obtain the automorphism group. \subsection{Time Complexity} \begin{enumerate} \item In the main module (Algorithm \ref{AutoAl_2}), we compute the automorphism group of the given graph $ G $. The algorithm is named \textsc{$A_{-}M_{-}G(A)$}, where $ A $ is the adjacency matrix of $ G $. Algorithm \ref{AutoAl_3} is run iteratively in this module.
In Algorithm \ref{AutoAl_3}, we compute the optimal set of candidate automorphisms for a given set of vertices in $ G $. This algorithm is named \textsc{$ M_{-}O_{-}A_{-}G(R_{G},h,fx) $}, where $ R_{G} $ is the structural descriptor sequence, $ fx $ is the set of vertices in the connected component, and $ h $ is the set of distinct values in $ R_{G}(fx) $. The worst-case running time of this algorithm is $ \mathcal{O}(n!) $, attained when the graph is connected and all values in the sequence are identical. Even in this case, the running time can be reduced as follows. Let $ M $ be the set of multiplicities of the values of $ h $ in $ R_{G}(fx) $ that are greater than 1, $ M=\{m_{1},m_{2},\ldots,m_{d}\}$, $d\leq |h| $. Let $ y_{1}=(m_{1}! -1) $ and $ y_{j}=(y_{j-1}\times (m_{j}!-1))+(y_{j-1}+ (m_{j}!-1)) $. Then the complexity of Algorithm \ref{AutoAl_3} is $\sum_{j=1}^{d}y_{j}\leq |fx|! \leq n! $. Therefore the worst-case running time of Algorithm \ref{AutoAl_2} is $ \mathcal{O}(n^{2}\cdot n!) $. \end{enumerate} \section{Conclusion} In this paper, we have studied the graph isomorphism and graph automorphism problems, determining classes of non-isomorphic graphs within a given graph collection. We have proposed a novel technique to compute a new feature-vector graph invariant, called the \textit{Structural Descriptor Sequence}, which encodes complete structural information of the graph with respect to each node. This sequence is also unique in the way it is computed. The proposed \textit{Structural Descriptor Sequence} has been shown to be useful in identifying non-isomorphic graphs in a given collection of graphs. We have proved that if two sorted \textit{Structural Descriptor Sequences} are not identical, then the graphs are not isomorphic. Further, we propose a polynomial-time algorithm testing this necessary condition for isomorphism between two graphs, which runs in $ \mathcal{O}(kn^{3}) $, where $ k=\ceil*{\log_{2}diameter(G)} \leq \log n$.
In this paper, we have also proposed an algorithm for finding the automorphism group of a given graph using the structural descriptor sequence. \section*{References} \appendix \section{Algorithms} In this section, we present MATLAB-style pseudocode for the algorithms testing isomorphism between two graphs and discuss their running times. \begin{algorithm}[H] \caption{\textsc{Structural Descriptor Sequence of $ G $.\label{ISOALO.2}}} \textbf{Objective:} To find the {\textit{Structural Descriptor Sequence}} corresponding to the given undirected unweighted simple graph $ G $ on $ n $ vertices using the sequence of powers of $ \mathcal{NM} (G)$. \\ \textbf{Input:} $ A- $ Adjacency matrix corresponding to the graph $ G $ and $ Irr- $ a sequence of $ (n-1)$ irrational numbers, $Irr= \langle \sqrt{2},\sqrt{3},\sqrt{5},...\rangle $.\\ \textbf{Output:} $ R_{G}- $ a sequence of $ n $ real numbers corresponding to the vertices of the graph $ G $. \begin{algorithmic}[1] \Procedure{$ R_{G} = S_{-}D_{-}S $ }{$A,Irr$} \State [$ n $ \text{, } $ k $ \text{, } $ SPG $ ] $ \leftarrow $ \textsc{$ SP$-$\mathcal{NM}$}({$A$}) \For {$ l \leftarrow 1 \text{ to } k $} \State $ \mathcal{NM}\leftarrow SPG(:,:,l) $ \State $ E $\textsc {$ \leftarrow $ $Structural_{-}Descriptor $ ($\mathcal{NM},n,Irr$)} \State $ S(l,:)\leftarrow \dfrac{E}{Irr(l)} $ \EndFor \State \textbf{end} \If {$ k==1 $} $ R_{G}\leftarrow S $ \Else { $ R_{G} \leftarrow \sum S $ } \EndIf \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \cite{NM_SPath} \caption{\textsc{Sequence of powers of $ \mathcal{NM } $.\label{ALO.1}}} \textbf{Objective:} To find the iteration number $k $ and construct the sequence of powers of the $\mathcal{NM} $ matrix of a given graph $G$, that is, $ \mathcal{NM}^{\{l\}} $ for every $l$, $1 \leq l\leq k $. \textbf{Input:} Adjacency matrix $ A $ of an undirected unweighted simple graph $ G $.
\textbf{Output:} The number of vertices $ n $, iteration number $k $ and $SPG-$ A three dimensional matrix of size $(n\times n\times k )$, that is for each $l$, $ 1 \leq l \leq k $, the $n\times n-\mathcal{NM}^{\{l\}} $ matrix is given by $ SPG(:,:,l) $. \begin{algorithmic}[1] \Procedure{$ [n \text{, } k \text{, } SPG ]= $ SP$-\mathcal{NM}$}{$A$} \State $ l\leftarrow 1; $ \State $ n \leftarrow $ Number of rows or columns of matrix $ A $ \State $ z \leftarrow 0$ \Comment{Initialize } \While{(True)} \State $ L\leftarrow $ Laplacian matrix of $ A $ \State $ \mathcal{NM} \leftarrow A \times L$ \Comment{Construct the $ \mathcal{NM} $ matrix from $ A $}. \State $ SPG(:,:,l)\leftarrow \mathcal{NM } $ \Comment{$ SPG $-sequence of powers of $ \mathcal{NM } $ matrix.} \State $ u\leftarrow nnz(\mathcal{NM } ) $ \Comment{$ nnz $-Number of non zero entries } \If {$u==n^{2}$} $k \leftarrow l$ \textbf{break} \ElsIf {$ isequal(u,z)==1 $} $k \leftarrow l-1 $ \textbf{break} \Else { $ z\leftarrow u $ } \State $ A \leftarrow \mathcal{NM} ; A (A \neq 0)\leftarrow 1;$ \For{$ p\leftarrow 1 \text{ to } n $} $A (p,p)\leftarrow 0$ \EndFor \State \textbf{end} \State $l\leftarrow l+1$ \EndIf \State \textbf{end} \EndWhile \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{Structural descriptor of $ G $.\label{ISOALO.3}}} \textbf{Objective:} To find the sequence of $ n $ real numbers corresponding to given $ \mathcal{NM} $ which is constructed from the information of two level decomposition of each vertex $ i $. \\ \textbf{Input:} $ \mathcal{NM}, n $ and $ Irr- $ Square root of first $ (n-1) $ primes. \\ \textbf{Output:} $ E- $ Sequence of real numbers corresponding to $ i\in V $. 
\begin{algorithmic}[1] \Procedure{$ E = Structural_{-}Descriptor $ }{$\mathcal{NM},n,Irr$} \State $ Deg\leftarrow |diag(\mathcal{NM})| $ \For{$ i\leftarrow 1:n$} $ X\leftarrow \mathcal{NM}(i,:); \text{ } X(i)\leftarrow 0 ;$ \State $a\leftarrow \big(find(X>0)\big ); \text{ } b\leftarrow\big(find(X<0)\big ) $ \State $ M_{1} \leftarrow sort\Bigg(\dfrac{X(a)}{\sqrt{7}}+\dfrac{Deg(a)-\mathcal{NM}(i,a)+\sqrt{3}}{\sqrt{11}}\Bigg)$ \Comment{$ sort- $ Sorting increasing order} \For{$ f\leftarrow 1 \text{ to } |a| $} $ S_{1}\leftarrow \sum\Big( \dfrac{M_{1}(f)}{Irr(f)}\Big) $ \textbf{ end } \EndFor \If{$ |b|== 0 $} $ S_{2}\leftarrow 0 $ \Else { $ p\leftarrow \mathcal{NM}(b,b); $ } $ p(p<0)\leftarrow 0; \text{ } p(p>0)\leftarrow 1 ;\text{ } p\leftarrow \sum(p) $ \Comment{Column sum of $ p $} \State $ M_{2} \leftarrow sort\Bigg(\dfrac{|X(b)|}{\sqrt{13}}+\dfrac{p+\sqrt{3}}{\sqrt{17}}+\dfrac{Deg(b)-|X(b)|-p+\sqrt{3}}{\sqrt{19}}\Bigg)$ \Comment{$ sort- $ Sorting increasing order} \For{$ f\leftarrow 1 \text{ to } |b| $} $ S_{2}\leftarrow \sum\Big( \dfrac{M_{2}(f)}{Irr(f)}\Big) $ \textbf{ end } \EndFor \EndIf \State \textbf{end} \State $ E(i)\leftarrow S_{1}+S_{2} $ \EndFor \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{Clique$ _{-} $Sequence of graph $ G $}} \label{Clique_Sequence} \textbf{Objective:} To get a unique sequence for differentiate the graphs using Algorithm \ref{Clique_Main}. \\ \textbf{Input:} Adjacency matrix $ A $ corresponding to given graph $ G $.\\ \textbf{Output:} $ CS- $ Unique sequence of irrational numbers of size $ (1\times t) $, where t is maximum clique number. 
\begin{algorithmic}[1] \Procedure {$ CS =$Clique$ _{-} $Sequence}{$ A,Irr $} \State $ n\leftarrow $ number of rows in $ A $; $ Z\leftarrow zeros(1,n)$; $ R\leftarrow Z $; $ C\leftarrow Z $ \State \textsc{$ [U,Z] \leftarrow $ Complete$ _{-} $Cliques}$ (A) $; \For {$ i\leftarrow 1 $ to $ |U| $} $ C\leftarrow [C;U\{i\}] $ \textbf{end}\EndFor \For {$ i\leftarrow 1 $ to $ |C| $} $ R\leftarrow R+\dfrac{C(i,:)}{Irr(i)} $; \textbf{end} \EndFor \For {$ i\leftarrow 1$ to $ |C| $} $ CS(i)\leftarrow \sum_{j=1}^{n} {C(i,j)}\cdot{R(j)} $ \textbf{end}; \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{All possible Maximal cliques of given graph $ G $}} \label{Clique_Main} \textbf{Objective:} To find all possible distinct complete subgraph of given graph $ G $. \\ \textbf{Input:} $ A- $ Adjacency matrix corresponding to given graph $ G $.\\ \textbf{Output:} $ U- $ cell array which contains the number of Maximal cliques contained in each $ i\in V $, $ Z- $ cell array which contains all possible Maximal cliques. 
\begin{algorithmic}[1] \Procedure{$ [U,Z] $= Complete$ _{-} $Cliques}{$ A $} \State $ n\leftarrow $Number of rows in $ A $; $ X\leftarrow 1,2,...,n $; $ Y\leftarrow \emptyset $ ; $ W\leftarrow \emptyset $ \State $ [a,b,c,d,LK_{1},LK_{2},LK_{3},LK_{4},J] \leftarrow$ \textsc{Cliques}$ (A,W,X,Y) $; $ ct\leftarrow 1 $ \State $ U\{ct\}\leftarrow [a;b;c;d] $; $ Z\{ct,1\}\leftarrow LK_{1} $; $ Z\{ct,2\}\leftarrow LK_{2} $; $ Z\{ct,3\}\leftarrow LK_{3} $; $ Z\{ct,4\}\leftarrow LK_{4}$ \If {$ |J|==\emptyset $} \textbf{return}; \textbf{end} \EndIf \State $ ct\leftarrow 2 $ \While{$ (1) $} \If {$ J==\emptyset $}; \textbf{break}; \textbf{end} \EndIf \State $ a\leftarrow zeros(1,n) $; $ b\leftarrow a $; $ c\leftarrow a $; $ d\leftarrow a $; $ LK_{1}\leftarrow \emptyset $; \State $ LK_{2}\leftarrow \emptyset $; $ LK_{3}\leftarrow \emptyset $; $ LK_{4}\leftarrow \emptyset $; $ F\leftarrow \emptyset $ \For {$ i\leftarrow 1 $ to number of rows in $ J $} $ W\leftarrow J\{i,1\} $; $ X\leftarrow J\{i,2\} $; $ Y\leftarrow J\{i,3\}-X $ \State $ [ac,bc,cc,dc,LK_{1}c,LK_{2}c,LK_{3}c,LK_{4}c,Jc]\leftarrow $ \textsc{Cliques}$ (A,W,X,Y) $; \State $ a(e)\leftarrow a(e)+ac $; $ b(e)\leftarrow b(e)+bc $; $ c(e)\leftarrow c(e)+cc $; $ d(e)\leftarrow d(e)+dc $; \State $ a(W)\leftarrow a(W)$+Number of rows in $ LK_{1}c $; $ b(W)\leftarrow b(W)$+Number of rows in $ LK_{2}c$ ; \State $ c(W)\leftarrow c(W)$+Number of rows in $ LK_{3}c $ ; $ d(W)\leftarrow d(W)$+Number of rows in $ LK_{4}c $; \State $ LK_{1}\leftarrow [LK_{1}; W \text{ concatenation with } LK_{1}c] $; $ LK_{2}\leftarrow [LK_{2}; W \text{concatenation with } LK_{2}c] $; \State $ LK_{3}\leftarrow [LK_{3}; W \text{ concatenation with } LK_{3}c] $; $ LK_{4}\leftarrow [LK_{4}; W \text{concatenation with } LK_{4}c] $; \For {$ t\leftarrow 1 $ to Number of rows in $ Jc $} \State $ Jc\{t,1\}\leftarrow [W\text{ concatenation with } Jc\{t,1\} ] $; \State $ Jc\{t,2\}\leftarrow [Jc\{t,2\} \text{ concatenation with } X] $; $ Q\leftarrow \{k: 
k+Jc\{t,1\},k\in Y\} $ \State $ Jc\{t,3\}\leftarrow [Q,Jc\{t,3\} \text{ concatenation with } X] $ \EndFor \State \textbf{end} \State $ F\leftarrow [F; Jc] $ \EndFor \State \textbf{end} \State $ J\leftarrow F $ \State $ a\leftarrow a+U\{ct-1\}(4,:); U\{ct-1\}(4,:)\leftarrow \emptyset $; \State $ U\{ct\}\leftarrow [a;b;c;d] $; $ Z\{ct,1\}\leftarrow [Z\{ct-1,4\};LK_{1}] $; $ Z\{ct,2\}\leftarrow LK_{2} $; \State $ Z\{ct,3\}\leftarrow LK_{3} $; $ Z\{ct,4\}\leftarrow LK_{4} $; $ Z\{ct-1,4\}\leftarrow \emptyset $; \State $ ct\leftarrow ct+1 $; \EndWhile \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{Distinct $ K_{1}, K_{2},K_{3} $ and $ K_{4} $ of $[X] $ on given $ G $}} \label{Clique_Sub} \textbf{Objective:} To find distinct $ K_{1}, K_{2}, K_{3} $ and $ K_{4} $ of $ [X] $ in $ A $. \\ \textbf{Input:} $ A $ is an adjacency matrix corresponding to given graph, $ W \subset V $, set of vertices of $ W $ is complete subgraph on $ 3d $ vertices where $ d\leftarrow 1,2,... $, $X\subset V, X+W\subseteq G $ and vertex labels of vertices in $ X $ is greater than vertex labels in $ W $ and $Y\subset V, Y+W\subset G $ and vertex labels of vertices in $ X $ is greater than vertex labels in $ Y $. \textbf{Output:} $ a,b,c,d\in (\mathbb{W})^{1\times |X|} $. $ a,b,c,d $ - Number of distinct $ K_{1}, K_{2},K_{3}, K_{4} $ contained in $ [X] $ respectively. $ LK_{1}, LK_{2},LK_{3},LK_{4} $ - Collections of distinct $ K_{1}, K_{2},K_{3},K_{4} $ in $ [X] $ respectively and $ J $ - Collection of possible distinct $ K_{4}, K_{5},... $ \begin{algorithmic}[1] \small \Procedure{$ [a,b,c,d,LK_{1},LK_{2},LK_{3},LK_{4},J]= $ Cliques}{$ A,W,X,Y $} \State $ C $ is an adjacency matrix corresponding to induced subgraph of $ X $. 
\State $ CL $ is an neighbourhood matrix corresponding to $ C $, $ n\leftarrow |X| $; \State $ at\leftarrow 1 $; $ bt\leftarrow 1 $; $ ct\leftarrow 1 $; $ dt\leftarrow 1 $; $ vt\leftarrow 1 $ \For {$ i\leftarrow 1 \text{ to } n $} $ g_{1}\leftarrow N_{[X]}(i) $ \If {$ |g_{1}|>0 $} $ g_{1}\leftarrow g_{1}(g_{1}>i) $ \If {$ |g_{1}|>0 $} $p\leftarrow Degree(g1)-CL(i,g1) $ \State $ r\leftarrow g_{1}(p==0) $ \For {$ q\leftarrow 1 \text{ to } |r| $} \If {for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $ \{i,r(f)\} $} \State $ LK_{2}(bt,:)\leftarrow [r(f),i] $; $ bt\leftarrow bt+1 $ \EndIf \State \textbf{end} \EndFor \State \textbf{end} $ g1\leftarrow g1(p>0) $ \If {$ |g_{1}|>1 $} $ a_{4} \leftarrow $ distinct edges in upper triangular matrix of $ C(g1,g1) $. \For {$ u\leftarrow 1 $ to $ |a_{4}| $} $ s\leftarrow |CL(a_{4}(u,2),a_{4}(u,2))|-CL(a_{4}(u,1),a_{4}(u,2)) $ \If {$ s>1 $} $ a_{8}\leftarrow \{N_{[X]}(i)\cap N_{[X]}(a_{4}(u,1)) \cap N_{[X]}(a_{4}(u,2))\} $ \State $ a_{9}\leftarrow a_{8}(a_{8}>a_{4}(u,2)) $; $ tr\leftarrow |a_{9}| $ \If {$ tr==1 $} \If {there exist no edge between $ a_{8} $ and $ a_{9} $} \If { for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $\{ i,a_{4}(u,1),a_{4}(u,2),a_{9} \}$ } \State $ LK_{4}(dt,:)\leftarrow [i,a_{4}(u,1),a_{4}(u,2),a_{9}] $; $ dt\leftarrow dt+1 $ \EndIf \State \textbf{end} \EndIf \State \textbf{end} \ElsIf {$ tr==0 $} \If {$ |a_{8}|==0 $} \If {for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $\{ i,a_{4}(u,1),a_{4}(u,2) \}$ } \State $ LK_{3}(ct,:)\leftarrow [i,a_{4}(u,1),a_{4}(u,2)] $; $ ct\leftarrow ct+1 $ \EndIf \State \textbf{end} \EndIf \State \textbf{end} \Else { $ J\{vt,1\}\leftarrow \{i,a_{4}(u,1),a_{4}(u,2)\} $; $ J\{vt,2\}\leftarrow a_{9} $; $ J\{vt,3\}\leftarrow a_{8} $} \EndIf \State \textbf{end} \Else { } \If { for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $\{ i,a_{4}(u,1),a_{4}(u,2) \}$ } \State $ LK_{3}(ct,:)\leftarrow [i,a_{4}(u,1),a_{4}(u,2)] $; $ ct\leftarrow ct+1 $ \EndIf 
\State \textbf{end} \EndIf \State \textbf{end} \EndFor \State \textbf{end} \EndIf \State \textbf{end} \EndIf \State \textbf{end} \Else { } \If { for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $\{ i\}$ } \State $ LK_{1}(at,:)\leftarrow [i] $; $ at\leftarrow at+1 $ \EndIf \State \textbf{end} \EndIf \State \textbf{end} \EndFor \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \normalsize \begin{algorithm}[H] \caption{\textsc{Automorphism Groups of given graph $ G $}} \label{AutoAl_2} \textbf{Objective:} To find the automorphism groups of a given undirected unweighted simple graph. \textbf{Input:} Graph $ G $, with adjacency matrix $ A $. \textbf{Output:} $ A_{-}G- $ all the automorphisms of $ G $. \begin{algorithmic}[1] \Procedure{$ A_{-}G= A_{-}M_{-}G $}{$ A $} \State $ Irr\leftarrow $ square root of first $ (n-1) $ primes. \State $ R_{G} \leftarrow S_{-}D_{-}S(A,Irr) $; \textsc{$ [CS,R] \leftarrow $Clique$ _{-} $Sequence}($ A,Irr $) \State $ R_{G}\leftarrow R_{G}+R $; $ Tb\leftarrow $ set of vertices of connected components \For {$ i\leftarrow 1 \text{ to } |Tb| $} \State $ fx\leftarrow Tb\{i\} $; $ Id\leftarrow fx $; \State $ h\leftarrow$ unique values in $ (R_{G}(fx)) $; $ asd\leftarrow 1 $ \If {$ |h|\neq |fx| $} \State { $eL\leftarrow \text{Sorted edge list corresponding to } $} $ [fx] $ \State $ X \leftarrow M_{-}O_{-}A_{-}G(R_{G},h,fx)$ ; \For {$ u\leftarrow 1$ to $ |X| $} $ f\leftarrow X\{u\} $; \Comment{ \scriptsize $ f $ is a candidate automorphism of size $ (zk\times 2), zk\leq 2n $\normalsize } \State $ tL\leftarrow $ sorted edge list corresponding to given $ f $ \If {$ eL $ is identical with $ tL $} \State $ AMor\{asd\}\leftarrow f$ ; $asd\leftarrow asd+1 $;\Comment{$ AMor $ is a cell array which stores permutations of different sizes} \EndIf \State \textbf{end} \EndFor \State \textbf{end} \EndIf \State \textbf{end} \State $ AMor\{asd\}\leftarrow [Id' \text{ } Id'] ; A_{-}G\{i\}\leftarrow AMor$; \EndFor \State
\textbf{end} \State \textbf{return} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{Optimal Options of Automorphism groups of given vertex set of $ G $}} \label{AutoAl_3} \textbf{Objective:} To find the optimal options of the automorphism groups for given graph. \textbf{Input:} $ R_{G}- $ Structural descriptor sequence corresponding to given graph, $ fx- $ Set of vertices in the connected component and $ h- $ unique values of $ R_{G}(fx) $. \textbf{Output:} $ X- $ cell array which contains the optimal options of automorphism for the given graph. \begin{algorithmic}[1] \Procedure{$ X= M_{-}O_{-}A_{-}G $}{$ R_{G},h,fx $} \State $ jt\leftarrow 1 $; \For {$ i\leftarrow 1:|h| $} $ v\leftarrow $ Set of vertices which have the identical values in $ R_{G}(fx) $; \If {$ |v|>1 $} ; $ fz\leftarrow $ All possible permutations of $ v $ ; \State $ U\{jt\}\leftarrow fz $; $ jt\leftarrow jt+1 $; \EndIf \State \textbf{end} \EndFor \State \textbf{end} \If {$ |U|==1 $} $ X\leftarrow U\{:\} $; \Else { $ ct\leftarrow 1 $ };$ X\leftarrow U\{1\} $; $ Y\leftarrow U\{2\} $; \While{(1)} $ bt\leftarrow 1 $; \For {$ i\leftarrow 1\text{ to } |X| $} \For {$ j\leftarrow 1\text{ to }|Y| $ } \State $ RX\{bt\}\leftarrow [X\{i\}\text{ ; } Y\{j\}] $; $ bt\leftarrow bt+1 $ \EndFor \State \textbf{end} \EndFor \State \textbf{end} \State $ X\leftarrow [X \text{, } Y \text{, } RX] $; $ ct\leftarrow ct+1 $; \If {$ ct==|U| $} \textbf{ break}; \textbf{ end} \EndIf \State $ Y\leftarrow U\{ct+1\} $; \EndWhile \State \textbf{end} \EndIf \State \textbf{end} \State \textbf{return} \EndProcedure \end{algorithmic} \end{algorithm} \end{document}
\begin{document} \title{Lectures on Twisted Rabinowitz--Floer Homology} \author{Yannis B\"ahni} \maketitle \begin{abstract} Rabinowitz--Floer homology is the Morse--Bott homology in the sense of Floer associated with the Rabinowitz action functional introduced by Kai Cieliebak and Urs Frauenfelder in 2009. In this manuscript, we consider a generalisation of this theory to a Rabinowitz--Floer homology of a Liouville automorphism. As an application, we show the existence of noncontractible periodic Reeb orbits on quotients of symmetric star-shaped hypersurfaces. In particular, this theory applies to lens spaces. Moreover, we prove a forcing theorem which guarantees the existence of a contractible twisted closed characteristic on a displaceable twisted stable hypersurface in a symplectically aspherical, geometrically bounded symplectic manifold, provided there exists a contractible twisted closed characteristic belonging to a Morse--Bott component whose energy difference is at most the displacement energy of the displaceable hypersurface. \end{abstract} \tableofcontents \section{Introduction} \label{sec:introduction} The existence of closed Reeb orbits on lens spaces is important in the study of celestial mechanics. Indeed, by \cite[Corollary~5.7.5]{frauenfelderkoert:3bp:2018}, the Moser-regularised energy hypersurface near the earth or the moon of the planar circular restricted three-body problem for energy values below the first critical value is diffeomorphic to the real projective space $\mathbb{RP}^3$. See also \cite[Introduction]{hryniewicz:cm:2016} for more details. Our main result is the following.
\begin{theorem}[{\cite[Theorem~1.2]{baehni:rfh:2021}}] Let $\Sigma \subseteq \mathbb{C}^n$, $n \geq 2$, be a compact and connected star-shaped hypersurface invariant under the rotation \begin{equation*} \varphi \colon \mathbb{C}^n \to \mathbb{C}^n, \quad \varphi(z_1,\dots,z_n) := \del[1]{e^{2\pi i k_1/m}z_1,\dots,e^{2\pi i k_n/m}z_n} \end{equation*} \noindent for some even $m \geq 2$ and $k_1,\dots,k_n \in \mathbb{Z}$ coprime to $m$. Then $\Sigma/\mathbb{Z}_m$ admits a noncontractible periodic Reeb orbit. \label{thm:my_result} \end{theorem} Theorem \ref{thm:my_result} has similarities with the following two recent results. \begin{theorem}[{\cite[Corollary~1.6~(iv)]{sandon:reeb:2020}}] \label{thm:lens_space} Any contact form on $\mathbb{S}^{2n - 1}/\mathbb{Z}_m$ defining the standard contact structure admits a closed Reeb orbit. \end{theorem} Using the fact that there is a natural bijection between contact forms on the odd-dimensional sphere equipped with the standard contact structure and star-shaped hypersurfaces, Theorem \ref{thm:lens_space} is actually stronger than Theorem \ref{thm:my_result} in that it does not restrict the parity of the lens space. However, Theorem \ref{thm:lens_space} does not say anything about the topological nature of the Reeb orbit. The proof of this theorem uses a generalisation of Givental's nonlinear Maslov index to lens spaces. \begin{theorem}[{\cite[Theorem~1.2]{liuzhang:noncontractible:2021}}] \label{thm:multiplicity} Every dynamically convex star-shaped $C^3$-hypersurface $\Sigma \subseteq \mathbb{C}^n$, $n \geq 2$, satisfying $\Sigma = -\Sigma$ admits at least two symmetric geometrically distinct closed characteristics. \end{theorem} Theorem \ref{thm:multiplicity} has the advantage of being a \emph{multiplicity result}, but it has the disadvantage of requiring the hypersurface to be dynamically convex, and it only works for $\mathbb{Z}_2$-symmetry.
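As an illustrative aside (not part of any proof in this text), the hypothesis that each $k_l$ is coprime to $m$ makes the $\mathbb{Z}_m$-action generated by the rotation $\varphi$ free on $\mathbb{C}^n \setminus \{0\}$: if $\varphi^j$ fixed a point with a nonzero coordinate $z_l$, then $m \mid k_l j$, hence $m \mid j$. This can be checked numerically in a small Python sketch (all parameter values below are made-up examples):

```python
from math import gcd
import cmath

def rotation(z, ks, m):
    """Apply phi: multiply each coordinate z_l by e^{2*pi*i*k_l/m}."""
    return tuple(zl * cmath.exp(2j * cmath.pi * k / m)
                 for zl, k in zip(z, ks))

def orbit_is_free(z, ks, m, tol=1e-9):
    """Check that phi^j(z) != z for all 1 <= j < m, i.e. the
    Z_m-action is free at the point z."""
    w = z
    for _ in range(1, m):
        w = rotation(w, ks, m)
        if max(abs(wl - zl) for wl, zl in zip(w, z)) < tol:
            return False
    return True

m, ks = 4, (1, 3)             # m even, each k_l coprime to m
assert all(gcd(k, m) == 1 for k in ks)
z = (0.6 + 0.2j, 0.5 - 0.4j)  # a sample point in C^2 \ {0}
assert orbit_is_free(z, ks, m)
```

With non-coprime weights, e.g. $k_1 = k_2 = 2$ and $m = 4$, the check fails, since then $\varphi^2 = \id$.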
To the author's knowledge, the first named author is working on extending Theorem \ref{thm:multiplicity} to lens spaces. As with many multiplicity results, the proof of this theorem makes use of index theory and in particular Ekeland--Hofer theory. The proof of Theorem \ref{thm:my_result} relies on a generalisation of \emph{Rabinowitz--Floer homology}. Rabinowitz--Floer homology is the Morse--Bott homology in the sense of Floer associated with the Rabinowitz action functional introduced by Kai Cieliebak and Urs Frauenfelder in 2009. See the excellent survey article \cite{albersfrauenfelder:rfh:2012} for a brief introduction to Rabinowitz--Floer homology and \cite{schlenk:floer:2019} for an overview of common Floer theories. One important feature of this homology in our work is that it provides an affirmative answer to the \emph{Weinstein conjecture} in some instances. Specifically, we introduce an analogue of the twisted Floer homology \cite{uljarevic:liouville:2017} in the Rabinowitz--Floer setting. Following \cite{cieliebakfrauenfelder:rfh:2009} and \cite{albersfrauenfelder:rfh:2010}, we construct a Morse--Bott homology for a suitable twisted version of the standard Rabinowitz action functional, that is, the Lagrange multiplier functional of the symplectic area functional. \begin{theorem}[{\cite[Theorem~1.1]{baehni:rfh:2021}}] \label{thm:twisted_rfh} Let $(M,\lambda)$ be the completion of a Liouville domain $(W,\lambda)$ and let $\varphi \in \Diff(W)$ be of finite order and such that $\varphi^* \lambda - \lambda = df_\varphi$ for some smooth compactly supported function $f_\varphi \in C^\infty_c(\Int W)$. \begin{enumerate}[label=\textup{(\alph*)}] \item The semi-infinite dimensional Morse--Bott homology $\RFH^\varphi(\partial W,M)$ in the sense of Floer of the twisted Rabinowitz action functional exists and is well-defined. Moreover, twisted Rabinowitz--Floer homology is invariant under twisted homotopies of Liouville domains.
\item If $\partial W$ is simply connected and does not admit any nonconstant twisted Reeb orbit, then $\RFH^\varphi_*(\partial W,M) \cong \operatorname{H}_*(\Fix(\varphi\vert_{\partial W});\mathbb{Z}_2)$. \item If $\partial W$ is displaceable by a compactly supported Hamiltonian symplectomorphism in $(M,\lambda)$, then $\RFH^\varphi(\partial W,M) \cong 0$. \end{enumerate} \end{theorem} Twisted Rabinowitz--Floer homology does indeed generalise standard Rabinowitz--Floer homology as \begin{equation*} \RFH^{\id_W}(\partial W,M) \cong \RFH(\partial W,M). \end{equation*} The proof of Theorem \ref{thm:my_result} is straightforward, once we have computed the $\mathbb{Z}_m$-equivariant twisted Rabinowitz--Floer homology of the spheres $\mathbb{S}^{2n - 1} \subseteq \mathbb{C}^n$. Indeed, by invariance we may assume that $\Sigma = \mathbb{S}^{2n - 1}$, as $\Sigma$ is star-shaped. Then we use the following elementary topological fact (see Lemma \ref{lem:noncontractible} below). Let $\Sigma$ be a simply connected topological manifold and let $\varphi \colon \Sigma \to \Sigma$ be a homeomorphism of finite order $m$ that is not equal to the identity. If the induced discrete action \begin{equation*} \mathbb{Z}_m \times \Sigma \to \Sigma, \qquad [k] \cdot x := \varphi^k(x) \end{equation*} \noindent is free, then $\pi \colon \Sigma \to \Sigma/\mathbb{Z}_m$ is a normal covering map \cite[Theorem~12.26]{lee:tm:2011}. For a point $x \in \Sigma$ define the \bld{based twisted loop space of $\Sigma$ and $\varphi$} by \begin{equation*} \mathscr{L}_\varphi(\Sigma,x) := \cbr[0]{\gamma \in C(I,\Sigma) : \gamma(0) = x \text{ and } \gamma(1) = \varphi(x)}, \end{equation*} \noindent where $I := \intcc[0]{0,1}$. Then we have the following result. See Figure \ref{fig:twisted_lift}. \begin{lemma} If $\gamma \in \mathscr{L}_\varphi(\Sigma,x)$, then $\pi \circ \gamma \in \mathscr{L}(\Sigma/\mathbb{Z}_m,\pi(x))$ is not contractible. 
Conversely, if $\gamma \in \mathscr{L}(\Sigma/\mathbb{Z}_m,\pi(x))$ is not contractible, then there exists $1 \leq k < m$ such that $\widetilde{\gamma}_x \in \mathscr{L}_{\varphi^k}(\Sigma,x)$ for the unique lift $\widetilde{\gamma}_x$ of $\gamma$ with $\widetilde{\gamma}_x(0) = x$. \label{lem:noncontractible} \end{lemma} For a more detailed study of twisted loop spaces of universal covering manifolds as well as a proof of Lemma \ref{lem:noncontractible} see Appendix \ref{sec:twisted_loops_on_universal_covering_manifolds}. Another interesting application of Theorem \ref{thm:twisted_rfh} is the following \emph{forcing result}. Suppose that $\partial W$ is Hamiltonianly displaceable in the completion $(M,\lambda)$ and simply connected. If $\Fix(\varphi\vert_{\partial W}) \neq \emptyset$, then $\partial W$ does admit a twisted periodic Reeb orbit. Indeed, if there does not exist any twisted periodic Reeb orbit on $\partial W$, we compute \begin{equation*} \RFH^\varphi(\partial W,M) \cong \operatorname{H}(\Fix(\varphi\vert_{\partial W});\mathbb{Z}_2) = \bigoplus_{k \geq 0} \operatorname{H}_k(\Fix(\varphi\vert_{\partial W});\mathbb{Z}_2) \neq 0 \end{equation*} \noindent by part (b) of Theorem \ref{thm:twisted_rfh}, contradicting part (c) of Theorem \ref{thm:twisted_rfh}. \begin{figure} \caption{The projection $\pi \circ \gamma \in \mathscr{L}(\Sigma/\mathbb{Z}_m,\pi(x))$ of $\gamma \in \mathscr{L}_\varphi(\Sigma,x)$ is not contractible for the deck transformation $\varphi \neq \id_\Sigma$.} \label{fig:twisted_lift} \end{figure} \begin{theorem}[Forcing] \label{thm:forcing} Let $\Sigma$ be a twisted stable displaceable hypersurface in a symplectically aspherical, geometrically bounded, symplectic manifold $(M,\omega)$ for some $\varphi \in \Symp(M,\omega)$ and suppose that $v_0$ is a contractible twisted closed characteristic on $\Sigma$ belonging to a Morse--Bott component $C$. 
Then there exists a contractible twisted closed characteristic $v \notin C$ such that \begin{equation*} \int_{\mathbb{D}}\overline{v}^*\omega - \int_{\mathbb{D}} \overline{v}_0^*\omega \leq \ord(\varphi)e(\Sigma), \end{equation*} \noindent where $e(\Sigma)$ denotes the displacement energy of $\Sigma$. \end{theorem} The proof of Theorem \ref{thm:forcing} is an adaptation of \cite[Theorem~4.9]{cieliebakfrauenfelderpaternain:mane:2010}. This result was initially shown by Felix Schlenk using quite different methods. Finally, we put Theorem \ref{thm:my_result} into context. If $\Sigma^{2n - 1}/\mathbb{Z}_m$ satisfies the index condition \begin{equation} \label{eq:index_condition} \mu_{\CZ}(\gamma) > 4 - n \end{equation} \noindent for all contractible Reeb orbits $\gamma$, the $\bigvee$-shaped symplectic homology $\check{\SH}(\Sigma)$ can be defined in the positive cylindrical end $\intco{0,+\infty} \times \Sigma$ by \cite[Corollary~3.7]{uebele:reeb:2019}. If $\Sigma/\mathbb{Z}_m$ admits a Liouville filling $W$, then we have \begin{equation*} \check{\SH}_*(\Sigma/\mathbb{Z}_m,M) \cong \RFH_*(\Sigma/\mathbb{Z}_m,M), \end{equation*} \noindent where $M$ denotes the completion of $W$. Note that even in the case of lens spaces this need not be the case, as for example $\mathbb{RP}^{2n - 1}$ is not Liouville fillable for any odd $n \geq 2$ by \cite[Theorem~1.1]{ghiggininiederkrueger:fillings:2020}. As the index condition \eqref{eq:index_condition} is only required for contractible Reeb orbits and they come from the universal covering manifold $\Sigma$, we can say something in the case where $\Sigma$ is strictly convex. Indeed, the Hofer--Wysocki--Zehnder Theorem \cite[Theorem~12.2.1]{frauenfelderkoert:3bp:2018} then implies that $\Sigma$ is dynamically convex, that is, \begin{equation*} \mu_{\CZ}(\gamma) \geq n + 1 \end{equation*} \noindent holds for all periodic Reeb orbits $\gamma$. 
Thus for $n \geq 2$, the index condition is satisfied and we can compute $\check{\SH}_*(\mathbb{S}^{2n - 1}/\mathbb{Z}_m)$ via the $\mathbb{Z}_m$-equivariant version of the symplectic homology $\check{\SH}_*(\mathbb{S}^{2n - 1})$. In the case of hypertight contact manifolds, there is a similar construction without the index condition \eqref{eq:index_condition}. See for example \cite[Theorem~1.1]{meiwesnaef:hypertight:2018}. By \cite[Theorem~1.7]{meiwesnaef:hypertight:2018}, there do exist noncontractible periodic Reeb orbits on hypertight contact manifolds under suitable technical conditions. Moreover, one can show the existence of invariant Reeb orbits in this setting. See \cite[Corollary~1.6]{meiwesnaef:hypertight:2018} as well as \cite[Theorem~1.6]{merrynaef:invariant:2016} in the Liouville-fillable case. The thesis is organised as follows. In Section \ref{sec:Floer_homology}, we review the basics of Hamiltonian Floer homology defined as the Morse--Bott homology associated with the symplectic action functional. We follow the cascade approach introduced by Urs Frauenfelder and define Hamiltonian Floer homology in the simplest case, that is, in the symplectically aspherical case. This is sufficient for our purposes. A detailed proof of the compactness of the relevant moduli spaces is given in Appendix \ref{ch:bubbling_analysis} and in order to deal with transversality, we use the polyfold approach which is sketched in Appendix \ref{ch:polyfolds}. We also review stable Hamiltonian manifolds, a generalisation of contact manifolds. Appendix \ref{ch:bubbling_analysis} is based on lecture notes written for a course on Hamiltonian Floer homology in the winter semester 2021/2022 taught by Urs Frauenfelder at the university of Augsburg. In Section \ref{sec:twisted_rfh}, we introduce the main machinery for defining our new homology theory and prove Theorem \ref{thm:twisted_rfh}. This material is an extended version of \cite{baehni:rfh:2021}. 
In Section \ref{sec:applications}, we prove Theorem \ref{thm:my_result} and the Forcing Theorem \ref{thm:forcing}. Theorem \ref{thm:my_result} and its proof are also taken from \cite{baehni:rfh:2021}. In the final Section \ref{sec:further_steps}, we indicate two further results that may be obtained using the theory developed in this thesis. \section{Hamiltonian Floer Homology} \label{sec:Floer_homology} Floer homology was introduced by Andreas Floer around 1988 to tackle the \emph{homological Arnold conjecture}. Roughly speaking, the conjecture says in its simplest form that the number of nondegenerate solutions of a $1$-periodic Hamiltonian equation is bounded below by the dimension of the singular homology of the symplectic manifold with coefficients in $\mathbb{Z}_2$. That is, the number of such solutions is always bounded below by a topological invariant. This resembles the famous \emph{Morse inequalities}, and thus it is not surprising that the construction of Floer homology was largely influenced by Morse homology. See the excellent article \cite{schlenk:floer:2019}. However, a key technical ingredient for this semi-infinite dimensional version of Morse homology was Gromov's analysis of pseudoholomorphic curves, introduced in 1985. Today there are many flavours of Floer theories, and we shall focus on \emph{Rabinowitz--Floer homology}. This homology was introduced in 2009 by Kai Cieliebak and Urs Frauenfelder. Crucial is the observation that Floer homology can also be constructed in a more general way, namely in the \emph{Morse--Bott} case. In contrast to standard Hamiltonian Floer homology, Rabinowitz--Floer homology considers a fixed energy but arbitrary period problem. This leads to proofs of particular instances of the \emph{Weinstein conjecture} formulated in 1979, including the result of Rabinowitz from 1978. Weinstein conjectured that on every compact manifold admitting a contact form, there must exist a closed Reeb orbit.
For an extensive historical treatment see \cite{mcduffsalamon:st:2017}. The aim of this introductory chapter is to explain the fundamental concepts required later on. In the first section we discuss the finite-dimensional version of Morse--Bott homology, a generalisation of Morse homology. The second section discusses some basic facts coming from Hamiltonian dynamics, focusing on theory we need in subsequent chapters. The third section introduces the archetypical version of Floer homology, \emph{Hamiltonian Floer homology}, on compact symplectic manifolds. We discuss the Morse--Bott approach, as this one will be useful in the discussion of Rabinowitz--Floer homology. In the last section we discuss suitable structures on regular hypersurfaces in symplectic manifolds, including hypersurfaces of restricted contact type. \input{Morse--Bott_homology.tex} \input{Hamiltonian_dynamics.tex} \input{Morse--Bott_homology_for_the_symplectic_action_functional.tex} \input{regular_energy_surfaces.tex} \section{Twisted Rabinowitz--Floer Homology} \label{sec:twisted_rfh} In this chapter we construct the generalisation of Rabinowitz--Floer homology and prove Theorem \ref{thm:twisted_rfh}. To begin, we define the twisted Rabinowitz action functional for an exact symplectic manifold and compute its first and second variation. In the second section we prove a compactness result for the moduli space of twisted negative gradient flow lines in a restricted geometrical setting. This follows a standard procedure, but one has to carefully adapt the constructions and proofs to this more general case. In the third section we define ungraded and graded twisted Rabinowitz--Floer homology and prove part (b) of Theorem \ref{thm:twisted_rfh} in Proposition \ref{prop:fixed_points}. In the fourth section we briefly illustrate how to prove part (a) of Theorem \ref{thm:twisted_rfh} (see Theorem \ref{thm:invariance}) and define the notion of twisted homotopies of Liouville domains. 
In the last section we prove an important vanishing result for twisted Rabinowitz--Floer homology, that is, part (c) of Theorem \ref{thm:twisted_rfh} (see Theorem \ref{thm:displaceable}). \input{the_twisted_Rabinowitz_action_functional.tex} \input{compactness.tex} \input{definition_of_twisted_rfh.tex} \input{invariance.tex} \input{twisted_leaf-wise_intersection_points.tex} \section{Applications} \label{sec:applications} In this chapter we give two applications of the abstract machinery developed in the previous chapter and prove Theorem \ref{thm:my_result} as well as Theorem \ref{thm:forcing} (see Theorem \ref{thm:forcing_theorem}). \input{existence_of_noncontractible_periodic_Reeb_orbits.tex} \input{forcing.tex} \section{Further Steps in Twisted Rabinowitz--Floer Homology} \label{sec:further_steps} In this final chapter we discuss some possible directions for future research in twisted Rabinowitz--Floer homology. One can of course try to find a twisted version of every result provided by standard Rabinowitz--Floer homology. Following the survey article \cite{albersfrauenfelder:rfh:2012}, major results relate Rabinowitz--Floer homology to symplectic homology. In the first section we briefly outline a further computation of twisted Rabinowitz--Floer homology, where the hypersurface is not displaceable. In the second section, we explain an important physical setting where the Forcing Theorem \ref{thm:forcing_theorem} and Theorem \ref{thm:my_result} might be applicable. \input{cotangent_bundles.tex} \input{Stark-Zeeman_systems.tex} \section{Twisted Loops in Universal Covering Manifolds} \label{sec:twisted_loops_on_universal_covering_manifolds} In this appendix, we consider the category of topological manifolds rather than the category of smooth manifolds, because smoothness does not add much to the discussion.
Free and based loop spaces are fundamental objects in algebraic topology; for a comprehensive treatment of the geometry and topology of based as well as free loop spaces see, for example, \cite{loop_spaces:2015}. So-called twisted loop spaces, however, have received far less attention. \begin{theorem}[Twisted Loops in Universal Covering Manifolds] Let $(M,x)$ be a connected pointed topological manifold and $\pi \colon \tilde{M} \to M$ the universal covering. \begin{enumerate}[label=\textup{(\alph*)}] \item Fix $[\eta] \in \pi_1(M,x)$ and denote by $U_\eta \subseteq \mathscr{L}(M,x)$ the path component corresponding to $[\eta]$ via the bijection $\pi_0(\mathscr{L}(M,x)) \cong \pi_1(M,x)$. For every $e,e' \in \pi^{-1}(x)$ and $\varphi \in \Aut_\pi(\tilde{M})$ such that $\varphi(e) = \tilde{\eta}_e(1)$, where $\tilde{\eta}_e$ denotes the unique lift of $\eta$ with $\tilde{\eta}_e(0) = e$, we have a commutative diagram of homeomorphisms \begin{equation} \label{cd:twisted} \qquad \begin{tikzcd} \mathscr{L}_\varphi(\tilde{M},e) \arrow[rr,"L_\psi"] & & \mathscr{L}_{\psi \circ \varphi \circ \psi^{-1}}(\tilde{M},e')\\ & U_\eta \arrow[lu,"\Psi_e"] \arrow[ru,"\Psi_{e'}"'], \end{tikzcd} \end{equation} \noindent where $\psi \in \Aut_\pi(\tilde{M})$ is such that $\psi(e) = e'$, \begin{equation*} \qquad L_\psi \colon \mathscr{L}_\varphi(\tilde{M},e) \to \mathscr{L}_{\psi \circ \varphi \circ \psi^{-1}}(\tilde{M},e'), \qquad L_\psi(\gamma) := \psi \circ \gamma, \end{equation*} \noindent and \begin{align*} \qquad& \Psi_e \colon U_\eta \to \mathscr{L}_\varphi(\tilde{M},e), & \Psi_e(\gamma) := \tilde{\gamma}_e,\\ \qquad& \Psi_{e'} \colon U_\eta \to \mathscr{L}_{\psi \circ \varphi \circ \psi^{-1}}(\tilde{M},e'), & \Psi_{e'}(\gamma) := \tilde{\gamma}_{e'}. \end{align*} Moreover, $U_{c_x} \cong \mathscr{L}_\varphi(\tilde{M},e)$ via $\Psi_e$ if and only if $\varphi = \id_{\tilde{M}}$, where $c_x$ denotes the constant loop at $x$.
\item For every $\varphi \in \Aut_\pi(\tilde{M})$ and $e,e' \in \pi^{-1}(x)$ we have a commutative diagram of isomorphisms \begin{equation*} \qquad \begin{tikzcd} \Aut_\pi(\tilde{M}) \arrow[rr,"C_\psi"] & & \Aut_\pi(\tilde{M})\\ & \pi_1(M,x) \arrow[lu,"\Phi_e"] \arrow[ru,"\Phi_{e'}"'], \end{tikzcd} \end{equation*} \noindent where for $\psi \in \Aut_\pi(\tilde{M})$ such that $\psi(e) = e'$ \begin{equation*} \qquad C_\psi \colon \Aut_\pi(\tilde{M}) \to \Aut_\pi(\tilde{M}), \qquad C_\psi(\varphi) := \psi \circ \varphi \circ \psi^{-1}, \end{equation*} \noindent and \begin{align*} \qquad & \Phi_e \colon \pi_1(M,x) \to \Aut_\pi(\tilde{M}), & \Phi_e([\gamma]) := \varphi^e_{[\gamma]},\\ \qquad & \Phi_{e'} \colon \pi_1(M,x) \to \Aut_\pi(\tilde{M}), & \Phi_{e'}([\gamma]) := \varphi^{e'}_{[\gamma]}, \end{align*} \noindent with $\varphi^e_{[\gamma]}(e) = \tilde{\gamma}_e(1)$ and $\varphi^{e'}_{[\gamma]}(e') = \tilde{\gamma}_{e'}(1)$. \item The projection \begin{equation*} \qquad \tilde{\pi}_x \colon \coprod_{\substack{\varphi \in \Aut_\pi(\tilde{M})\\e \in \pi^{-1}(x)}} \mathscr{L}_\varphi(\tilde{M},e) \to \mathscr{L}(M,x) \end{equation*} \noindent defined by $\tilde{\pi}_x(\gamma) := \pi \circ \gamma$ is a covering map with number of sheets coinciding with the cardinality of $\pi_1(M,x)$. Moreover, $\tilde{\pi}_x$ restricts to define a covering map \begin{equation*} \qquad \tilde{\pi}_x\vert_{\id_{\tilde{M}}} \colon \coprod_{e \in \pi^{-1}(x)} \mathscr{L}(\tilde{M},e) \to U_{c_x}, \end{equation*} \noindent and $\tilde{\pi}_x$ gives rise to a principal $\Aut_\pi(\tilde{M})$-bundle. If $M$ admits a smooth structure, then this bundle is additionally a bundle of smooth Banach manifolds. \end{enumerate} \label{thm:cd_twisted} \end{theorem} \begin{proof} For proving part (a), fix a path class $[\gamma] \in \pi_1(M,x)$.
As any topological manifold is Hausdorff, paracompact and locally metrisable by definition, the Smirnov Metrisation Theorem \cite[Theorem~42.1]{munkres:topology:2000} implies that $M$ is metrisable. Let $d$ be a metric on $M$ and $\bar{d}$ be the standard bounded metric corresponding to $d$, that is, \begin{equation*} \bar{d}(x,y) = \min\cbr[0]{d(x,y),1} \qquad \forall x,y \in M. \end{equation*} The metric $\bar{d}$ induces the same topology on $M$ as $d$ by \cite[Theorem~20.1]{munkres:topology:2000}. Topologise the based loop space $\mathscr{L}(M,x) \subseteq \mathscr{L}M$ as a subspace of the free loop space on $M$, where $\mathscr{L}M$ is equipped with the topology of uniform convergence, that is, with the supremum metric \begin{equation*} \bar{d}_\infty(\gamma,\gamma') = \sup_{t \in \mathbb{S}^1}\bar{d}\del[1]{\gamma(t),\gamma'(t)} \qquad \forall \gamma,\gamma' \in \mathscr{L}M. \end{equation*} There is a canonical pseudometric on the universal covering manifold $\tilde{M}$ induced by $\bar{d}$, given by $\bar{d} \circ (\pi \times \pi)$. As every pseudometric generates a topology, we topologise the based twisted loop space $\mathscr{L}_\varphi(\tilde{M},e) \subseteq \mathscr{P}\tilde{M}$ as a subspace of the free path space on $\tilde{M}$ for every $e \in \pi^{-1}(x)$ via the supremum metric $\tilde{d}_\infty$ corresponding to $\bar{d} \circ (\pi \times \pi)$. In fact, $\tilde{d}_\infty$ is a metric: if $\tilde{d}_\infty(\gamma,\gamma') = 0$, then by definition of $\tilde{d}_\infty$ we have that $\pi \circ \gamma = \pi \circ \gamma'$. But as $\gamma(0) = e = \gamma'(0)$, we conclude $\gamma = \gamma'$ by the unique lifting property of paths \cite[Corollary~11.14]{lee:tm:2011}. Note that the resulting topology of uniform convergence on $\mathscr{L}_\varphi(\tilde{M},e)$ coincides with the compact-open topology by \cite[Theorem~46.8]{munkres:topology:2000} or \cite[Proposition~A.13]{hatcher:at:2001}.
In particular, the topology of uniform convergence does not depend on the choice of a metric (see \cite[Corollary~46.9]{munkres:topology:2000}). It follows from \cite[Theorem~11.15~(b)]{lee:tm:2011} that $\Psi_e$ and $\Psi_{e'}$ are well-defined. Moreover, it is immediate by the fact that the projection $\pi \colon \tilde{M} \to M$ is an isometry with respect to the above metrics, that $\Psi_e$ and $\Psi_{e'}$ are continuous with continuous inverse given by the composition with $\pi$. It is also immediate that $L_\psi$ is continuous with continuous inverse $L_{\psi^{-1}}$. Next we show that the diagram \eqref{cd:twisted} commutes. Note that \begin{equation*} \pi \circ L_\psi \circ \Psi_e = \pi \circ \Psi_e = \id_{U_\eta} = \pi \circ \Psi_{e'}, \end{equation*} \noindent thus by \begin{equation*} (L_\psi \circ \Psi_e(\gamma))(0) = \psi(\tilde{\gamma}_e(0)) = \psi(e) = e' = \tilde{\gamma}_{e'}(0) = \Psi_{e'}(\gamma)(0) \end{equation*} \noindent and by uniqueness it follows that \begin{equation*} L_\psi \circ \Psi_e = \Psi_{e'}. \end{equation*} In particular \begin{equation*} \Psi_{e'}(\gamma)(1) = (L_\psi \circ \Psi_e)(\gamma)(1) = \psi(\tilde{\gamma}_e(1)) = \psi(\varphi(e)) = (\psi \circ \varphi \circ \psi^{-1})(e'), \end{equation*} \noindent and thus $\Psi_{e'}(\gamma) \in \mathscr{L}_{\psi \circ \varphi \circ \psi^{-1}}(\tilde{M},e')$. Consequently, the homeomorphism $\Psi_{e'}$ is well-defined. Recall that by the Monodromy Theorem \cite[Theorem~11.15~(b)]{lee:tm:2011} \begin{equation*} \gamma \simeq \gamma' \qquad \Leftrightarrow \qquad \Psi_e(\gamma)(1) = \Psi_e(\gamma')(1) \end{equation*} \noindent for all paths $\gamma$ and $\gamma'$ in $M$ starting at $x$ and ending at the same point. Note that the statement of the Monodromy Theorem is an if-and-only-if statement since $\tilde{M}$ is simply connected. Suppose $\gamma \in \mathscr{L}(M,x)$ is contractible. Then $\gamma \simeq c_x$, so $\varphi(e) = \Psi_e(\gamma)(1) = \Psi_e(c_x)(1) = e$ by the Monodromy Theorem, implying $e \in \Fix(\varphi)$.
But the only deck transformation of $\pi$ fixing any point of $\tilde{M}$ is $\id_{\tilde{M}}$ by \cite[Proposition~12.1~(a)]{lee:tm:2011}. Conversely, assume that $\gamma \in \mathscr{L}(M,x)$ is not contractible. Then we have that $\Psi_e(\gamma)(1) \neq e$. Indeed, if $\Psi_e(\gamma)(1) = e$, then $\gamma \simeq c_x$ and consequently, $\gamma$ would be contractible. As normal covering maps have transitive automorphism groups by \cite[Corollary~12.5]{lee:tm:2011}, there exists $\psi \in \Aut_\pi(\tilde{M}) \setminus \cbr[0]{\id_{\tilde{M}}}$ such that $\Psi_e(\gamma)(1) = \psi(e)$. For proving part (b), observe that the fact that $\Phi_e$ and $\Phi_{e'}$ are isomorphisms follows from \cite[Corollary~12.9]{lee:tm:2011}. Moreover, it is also clear that $C_\psi$ is an isomorphism with inverse $C_{\psi^{-1}}$. Let $[\gamma] \in \pi_1(M,x)$. Then using part (a) we compute \allowdisplaybreaks \begin{align*} (C_\psi \circ \Phi_e)[\gamma](e') &= (\psi \circ \Phi_e[\gamma] \circ \psi^{-1})(e')\\ &= \psi\del[1]{\varphi_{[\gamma]}^e(e)}\\ &= \psi(\tilde{\gamma}_e(1))\\ &= (L_\psi \circ \Psi_e)(\gamma)(1)\\ &= \Psi_{e'}(\gamma)(1)\\ &= \tilde{\gamma}_{e'}(1)\\ &= \varphi^{e'}_{[\gamma]}(e')\\ &= \Phi_{e'}[\gamma](e'). \end{align*} Thus by uniqueness \cite[Proposition~12.1~(a)]{lee:tm:2011}, we conclude \begin{equation*} C_\psi \circ \Phi_e = \Phi_{e'}. \end{equation*} Finally, for proving (c), define a metric $\tilde{d}_\infty$ on \begin{equation*} E := \coprod_{\substack{\varphi \in \Aut_\pi(\tilde{M})\\e \in \pi^{-1}(x)}} \mathscr{L}_\varphi(\tilde{M},e) \end{equation*} \noindent by \begin{equation*} \tilde{d}_\infty(\gamma,\gamma') := \begin{cases} \bar{d}_\infty\del[1]{\pi(\gamma),\pi(\gamma')} & \gamma,\gamma' \in \mathscr{L}_\varphi(\tilde{M},e),\\ 1 & \text{else}. \end{cases} \end{equation*} Then the induced topology coincides with the disjoint union topology and, with respect to this topology, $\tilde{\pi}_x$ is continuous. It remains to show that $\tilde{\pi}_x$ is a covering map.
Surjectivity is clear. So let $\gamma \in \mathscr{L}(M,x)$. Then $\gamma \in U_\eta$ for some $[\eta] \in \pi_1(M,x)$. Now note that $U_\eta$ is open in $\mathscr{L}(M,x)$ and by part (a) we conclude \begin{equation} \label{eq:twisted_fibre} \tilde{\pi}_x^{-1}(U_\eta) = \coprod_{\psi \in \Aut_\pi(\tilde{M})} \mathscr{L}_{\psi \circ \varphi \circ \psi^{-1}}(\tilde{M},\psi(e)) \end{equation} \noindent for some fixed $e \in \pi^{-1}(x)$ and $\varphi \in \Aut_\pi(\tilde{M})$ such that $\varphi(e) = \tilde{\eta}_e(1)$. As the cardinalities of the fibre $\pi^{-1}(x)$ and of $\Aut_\pi(\tilde{M})$ both coincide with the cardinality of the fundamental group $\pi_1(M,x)$ by \cite[Corollary~11.31]{lee:tm:2011} and part (b), we conclude that the number of sheets is equal to the cardinality of the fundamental group $\pi_1(M,x)$. Equip $\Aut_\pi(\tilde{M})$ with the discrete topology. As the fundamental group of every topological manifold is countable by \cite[Theorem~7.21]{lee:tm:2011}, we have that $\Aut_\pi(\tilde{M})$ is a discrete topological Lie group. Now label the distinct path classes in $\pi_1(M,x)$ by $\beta \in B$ and for fixed $e \in \pi^{-1}(x)$ define local trivialisations \begin{equation*} (\tilde{\pi}_x,\alpha_\beta) \colon \tilde{\pi}_x^{-1}(U_\beta) \xrightarrow{\cong} U_\beta \times \Aut_\pi(\tilde{M}), \end{equation*} \noindent making use of \eqref{eq:twisted_fibre} by \begin{equation*} \alpha_\beta(\gamma) := \psi^{-1}, \end{equation*} \noindent whenever $\gamma \in \mathscr{L}_{\psi \circ \varphi \circ \psi^{-1}}(\tilde{M},\psi(e))$. Consequently, $\tilde{\pi}_x$ is a fibre bundle with discrete fibre $\Aut_\pi(\tilde{M})$ and bundle atlas $(U_\beta,\alpha_\beta)_{\beta \in B}$. Define a free right action \begin{equation*} E \times \Aut_\pi(\tilde{M}) \to E, \qquad \gamma \cdot \xi := \xi^{-1} \circ \gamma. \end{equation*} Then $\alpha_\beta$ is $\Aut_\pi(\tilde{M})$-equivariant with respect to this action for all $\beta \in B$.
Indeed, using again the commutative diagram \eqref{cd:twisted} we compute \begin{equation*} \alpha_\beta(\gamma \cdot \xi) = \alpha_\beta(\xi^{-1} \circ \gamma) = \del[1]{\xi^{-1} \circ \psi}^{-1} = \psi^{-1} \circ \xi = \alpha_\beta(\gamma) \circ \xi \end{equation*} \noindent for all $\xi \in \Aut_\pi(\tilde{M})$ and $\gamma \in \mathscr{L}_{\psi \circ \varphi \circ \psi^{-1}}(\tilde{M},\psi(e))$. Note that here we again use the fact that $\Aut_\pi(\tilde{M})$ acts transitively on the fibre $\pi^{-1}(x)$. Suppose that $M$ admits a smooth structure. Then for every compact smooth manifold $N$ we have that the mapping space $C(N,M)$ admits the structure of a smooth Banach manifold by \cite{wittmann:loop_space:2019}. By \cite[Theorem~1.1~p.~24]{loop_spaces:2015}, there is a smooth fibre bundle, called the \bld{loop-loop fibre bundle}, \begin{equation*} \mathscr{L}(M,x) \hookrightarrow \mathscr{L}M \xrightarrow{\ev_0} M \end{equation*} \noindent where \begin{equation*} \ev_0 \colon \mathscr{L} M \to M, \qquad \ev_0(\gamma) := \gamma(0). \end{equation*} Thus the based loop space $\mathscr{L}(M,x) = \ev_0^{-1}(x)$ on $M$ is a smooth Banach manifold by the implicit function theorem \cite[Theorem~A.3.3]{mcduffsalamon:J-holomorphic_curves:2012} for all $x \in M$. Likewise, by \cite[Theorem~1.2~p.~25]{loop_spaces:2015}, there is a smooth fibre bundle, called the \bld{path-loop fibre bundle}, \begin{equation*} \mathscr{L}(\tilde{M},e) \hookrightarrow \mathscr{P}(\tilde{M},e) \xrightarrow{\ev_1} \tilde{M}, \end{equation*} \noindent where \begin{equation*} \mathscr{P}(\tilde{M},e) := \{\gamma \in C(I,\tilde{M}) : \gamma(0) = e\} \end{equation*} \noindent denotes the based path space and \begin{equation*} \ev_1 \colon \mathscr{P}(\tilde{M},e) \to \tilde{M}, \qquad \ev_1(\gamma) := \gamma(1).
\end{equation*} Therefore, the twisted loop space $\mathscr{L}_\varphi(\tilde{M},e) = \ev_1^{-1}(\varphi(e))$ is also a smooth Banach manifold for all $\varphi \in \Aut_\pi(\tilde{M})$ and $e \in \pi^{-1}(x)$ by the implicit function theorem \cite[Theorem~A.3.3]{mcduffsalamon:J-holomorphic_curves:2012}. As the fundamental group $\pi_1(M,x)$ is countable, the topological space $E$ has only countably many connected components, each of which is a smooth Banach manifold, and thus the total space itself is a smooth Banach manifold. Finally, $\Aut_\pi(\tilde{M})$ is trivially a Banach manifold with $\dim \Aut_\pi(\tilde{M}) = 0$, being a discrete Lie group. \end{proof} \begin{corollary} \label{cor:cd_twisted_abelian} Let $(M,x)$ be a connected pointed topological manifold and denote by $\pi \colon \tilde{M} \to M$ the universal covering of $M$. Assume that $\pi_1(M,x)$ is abelian. \begin{enumerate}[label=\textup{(\alph*)}] \item Fix a path class $[\eta] \in \pi_1(M,x)$. For every $e,e' \in \pi^{-1}(x)$ and deck transformation $\varphi \in \Aut_\pi(\tilde{M})$ such that $\varphi(e) = \tilde{\eta}_e(1)$, we have a commutative diagram of homeomorphisms \begin{equation*} \qquad \begin{tikzcd} \mathscr{L}_\varphi(\tilde{M},e) \arrow[rr,"L_\psi"] & & \mathscr{L}_\varphi(\tilde{M},e')\\ & U_\eta \arrow[lu,"\Psi_e"] \arrow[ru,"\Psi_{e'}"'], \end{tikzcd} \end{equation*} \noindent where $\psi \in \Aut_\pi(\tilde{M})$ is such that $\psi(e) = e'$. \item For every $\varphi \in \Aut_\pi(\tilde{M})$ we have that $\Phi_e = \Phi_{e'}$ for all $e,e' \in \pi^{-1}(x)$. \end{enumerate} \end{corollary} Lemma \ref{lem:noncontractible} now follows from part (a) of Theorem \ref{thm:cd_twisted}.
Indeed, by assumption $\varphi \in \Aut_\pi(\Sigma) \setminus \{\id_\Sigma\}$ and using the long exact sequence of homotopy groups of a fibration \cite[Theorem~4.41]{hatcher:at:2001}, there is a short exact sequence \begin{equation*} \begin{tikzcd}[column sep = scriptsize] 0 \arrow[r] & \pi_1(\Sigma,x) \arrow[r] & \pi_1(\Sigma/\mathbb{Z}_m,\pi(x)) \arrow[r] & \pi_0(\mathbb{Z}_m) \arrow[r] & 0. \end{tikzcd} \end{equation*} In particular, by \cite[Corollary~12.9]{lee:tm:2011} we conclude \begin{equation*} \Aut_\pi(\Sigma) \cong \pi_1(\Sigma/\mathbb{Z}_m,\pi(x)) \cong \mathbb{Z}_m \cong \{\id_\Sigma,\varphi,\dots,\varphi^{m - 1}\}. \end{equation*} Finally, we discuss a smooth structure on the continuous free twisted loop space of a smooth manifold. \begin{lemma} \label{lem:free_twisted_loop_space} Let $M$ be a smooth manifold and $\varphi \in \Diff(M)$. Then the continuous free twisted loop space $\mathscr{L}_\varphi M$ is the pullback of \begin{equation*} (\ev_0,\ev_1) \colon \mathscr{P}M \to M \times M, \qquad \gamma \mapsto (\gamma(0),\gamma(1)), \end{equation*} \noindent where we abbreviate $\mathscr{P}M := C(I,M)$, along the graph of $\varphi$ \begin{equation*} \Gamma_\varphi \colon M \to M \times M, \qquad \Gamma_\varphi(x) := (x,\varphi(x)), \end{equation*} \noindent in the category of smooth Banach manifolds. Moreover, we have that \begin{equation*} T_\gamma \mathscr{L}_\varphi M = \{X \in \Gamma^0(\gamma^*TM) : X(1) = D\varphi(X(0))\} \end{equation*} \noindent for all $\gamma \in \mathscr{L}_\varphi M$. \end{lemma} \begin{proof} Write $f := (\ev_0,\ev_1)$. Then \begin{equation*} \mathscr{L}_\varphi M = f^{-1}(\Gamma_\varphi(M)). \end{equation*} Thus in order to show that the free twisted loop space $\mathscr{L}_\varphi M$ is a smooth Banach manifold, it is enough to show that $f$ is transverse to the properly embedded smooth submanifold $\Gamma_\varphi(M) \subseteq M \times M$. 
By \cite[Proposition~2.4]{lang:dg:1999} we need to show that the composition \begin{equation*} \Phi_\gamma\colon T_\gamma\mathscr{P}M \xrightarrow{Df_\gamma} T_{(x,\varphi(x))}(M \times M) \to T_{(x,\varphi(x))}(M \times M)/T_{(x,\varphi(x))}\Gamma_\varphi(M) \end{equation*} \noindent is surjective and $\ker \Phi_\gamma$ is complemented for all $\gamma \in f^{-1}(\Gamma_\varphi(M))$, where we abbreviate $x := \gamma(0)$. Note that we have a canonical isomorphism \begin{equation*} T_{(x,\varphi(x))}(M \times M)/T_{(x,\varphi(x))}\Gamma_\varphi(M) \to T_{\varphi(x)}M, \quad [(v,u)] \mapsto u - D\varphi(v). \end{equation*} Under this canonical isomorphism, the linear map $\Phi_\gamma$ is given by \begin{equation*} \Phi_\gamma(X) = X(1) - D\varphi(X(0)), \qquad \forall X \in \Gamma^0(\gamma^*TM). \end{equation*} Fix a Riemannian metric on $M$ and let $X_v \in \Gamma(\gamma^*TM)$ be the unique parallel vector field with $X_v(1) = v \in T_{\varphi(x)}M$. Fix a cutoff function $\beta \in C^\infty(I)$ such that $\supp \beta \subseteq \intcc[1]{\frac{1}{2},1}$ and $\beta = 1$ in a neighbourhood of $1$. Then $\Phi_\gamma(\beta X_v) = v$ and consequently, $\Phi_\gamma$ is surjective. Moreover \begin{equation*} \ker \Phi_\gamma = \cbr[0]{X \in \Gamma^0(\gamma^*TM) : X(1) = D\varphi(X(0))} \end{equation*} \noindent is complemented by the finite-dimensional vector space \begin{equation*} V := \cbr[0]{\beta X_v \in \Gamma(\gamma^*TM) : v \in T_{\varphi(x)}M}. \end{equation*} Indeed, any $X \in \Gamma^0(\gamma^*TM)$ can be decomposed uniquely as \begin{equation*} X = X - \beta X_v + \beta X_v, \qquad v := X(1) - D\varphi(X(0)). \end{equation*} Abbreviating $Y := X - \beta X_v \in \Gamma^0(\gamma^*TM)$, we have that \begin{equation*} Y(1) = D\varphi(X(0)) = D\varphi(Y(0)), \end{equation*} \noindent implying $Y \in \ker \Phi_\gamma$. Thus $\mathscr{L}_\varphi M$ is a smooth Banach manifold.
Now note that $\mathscr{L}_\varphi M$ can be identified with the pullback \begin{equation*} f^*\mathscr{P}M = \{(x,\gamma) \in M \times \mathscr{P}M : (\gamma(0),\gamma(1)) = \del[0]{x,\varphi(x)}\}, \end{equation*} \noindent making the diagram \begin{equation*} \begin{tikzcd} f^* \mathscr{P}M \arrow[r,"\pr_2"]\arrow[d,"\pr_1"'] & \mathscr{P}M \arrow[d,"f"]\\ M \arrow[r,"\Gamma_\varphi"'] & M \times M \end{tikzcd} \end{equation*} \noindent commute, via the homeomorphism \begin{equation*} \mathscr{L}_\varphi M \to f^*\mathscr{P}M, \qquad \gamma \mapsto (\gamma(0),\gamma). \end{equation*} Finally, one computes \begin{equation*} T_{(x,\gamma)}f^*\mathscr{P}M = \{(v,X) \in T_xM \times T_\gamma \mathscr{P}M : Df_\gamma X = D\Gamma_\varphi\vert_x(v)\} \end{equation*} \noindent for all $(x,\gamma) \in f^*\mathscr{P}M$. \end{proof} \begin{remark} Using Lemma \ref{lem:free_twisted_loop_space} one should be able to prove results similar to those in Theorem \ref{thm:cd_twisted} in the case of free twisted loop spaces. However, in the non-abelian case the situation gets much more complicated, as in general it is not true that lifts of conjugate elements of the fundamental group lie in the same free twisted loop space by \cite[Theorem~1.6~(i)]{loop_spaces:2015}. \end{remark} \section{On the Nonexistence of the Gradient of the Twisted Rabinowitz Action Functional} Let $(M,\lambda)$ be an exact symplectic manifold and $\varphi \in \Symp(M,d\lambda)$ a symplectomorphism of finite order. For $H \in C^\infty(M)$ such that $H \circ \varphi = H$, one can define the twisted Rabinowitz action functional \begin{equation} \label{eq:twisted_Rabinowitz_action_functional} \mathscr{A}^H_\varphi \colon \mathscr{L}_\varphi M \times \mathbb{R} \to \mathbb{R}, \qquad \mathscr{A}^H_\varphi(\gamma,\tau) := \int_0^1 \gamma^*\lambda - \tau \int_0^1 H(\gamma(t))dt. \end{equation} Let $J$ be a $d\lambda$-compatible almost complex structure such that $\varphi^*J = J$.
Then one can consider the gradient of $\mathscr{A}^H_\varphi$ with respect to the $L^2$-metric \begin{equation} \langle (X,\eta), (Y,\sigma) \rangle_J := \int_0^1 d\lambda(JX(t),Y(t))dt + \eta\sigma \label{eq:L^2-metric} \end{equation} \noindent for all $(X,\eta), (Y,\sigma) \in T_\gamma\mathscr{L}_\varphi M \times \mathbb{R}$ and $(\gamma,\tau) \in \mathscr{L}_\varphi M \times \mathbb{R}$. \begin{theorem}[Nonexistence of the Gradient] \label{thm:nonexistence_gradient} Let $(M,\lambda)$ be a connected exact symplectic manifold, $\varphi \in \Symp(M,d\lambda)$ of finite order and $H \in C^\infty(M)$ such that $H \circ \varphi = H$. If $\varphi^*\lambda \neq \lambda$, then the gradient of the twisted Rabinowitz action functional \eqref{eq:twisted_Rabinowitz_action_functional} with respect to the $L^2$-metric \eqref{eq:L^2-metric} does not exist. \end{theorem} \begin{proof} Assume that the gradient $\grad \mathscr{A}^H_\varphi \in \mathfrak{X}(\mathscr{L}_\varphi M \times \mathbb{R})$ exists. We write \begin{equation*} \grad \mathscr{A}^H_\varphi = \grad \mathscr{A}^H + V \end{equation*} \noindent for some $V \in \mathfrak{X}(\mathscr{L}_\varphi M)$ and \begin{equation*} \grad \mathscr{A}^H\vert_{(\gamma,\tau)} = \begin{pmatrix} \displaystyle J(\dot{\gamma} - \tau X_H(\gamma))\\ \displaystyle -\int_0^1 H(\gamma(t))dt \end{pmatrix} \end{equation*} \noindent for all $(\gamma,\tau) \in \mathscr{L}_\varphi M \times \mathbb{R}$. Indeed, this follows from \begin{equation} d\mathscr{A}^H_\varphi\vert_{(\gamma,\tau)}(X,\eta) = d\mathscr{A}^H\vert_{(\gamma,\tau)}(X,\eta) + (\varphi^*\lambda - \lambda)(X(0)) \label{eq:twisted_differential} \end{equation} \noindent for all $(X,\eta) \in T_\gamma\mathscr{L}_\varphi M \times \mathbb{R}$, where \begin{equation*} d\mathscr{A}^H\vert_{(\gamma,\tau)}(X,\eta) = \int_0^1 d\lambda(X,\dot{\gamma}(t) - \tau X_H(\gamma(t))) dt - \eta \int_0^1 H(\gamma(t))dt.
\end{equation*} \noindent By assumption, there exists $x \in M$ and $v \in T_x M$ with $(\varphi^*\lambda)_x(v) \neq \lambda_x(v)$. As by assumption $M$ is connected, there exists a smooth path $u \in C^\infty(\intcc[0]{0,1},M)$ from $x$ to $\varphi(x)$. Fix a smooth cutoff function $\beta \in C^\infty(\intcc[0]{0,1},\intcc[0]{0,1})$ such that $\beta = 0$ in a neighbourhood of $0$ and $\beta = 1$ in a neighbourhood of $1$. Then we can extend $u$ by \begin{equation*} \gamma(t) := \varphi^k(u(\beta(t - k))) \qquad \forall t \in \intcc[0]{k,k+1}, k \in \mathbb{Z}. \end{equation*} Clearly, $\gamma \in \mathscr{L}_\varphi M$ by construction. Extend $v \in T_{\gamma(0)}M$ to $X_v \in T_\gamma \mathscr{L}_\varphi M$ by \begin{equation*} X_v(t) := (1 - \beta(t - k))P_{0,\beta(t - k)}^{\varphi^k \circ u}(D\varphi^k(v)) + \beta(t - k)P_{1,\beta(t - k)}^{\varphi^k \circ u}(D\varphi^{k + 1}(v)), \end{equation*} \noindent for all $t \in \intcc[0]{k,k+1}$ and $k \in \mathbb{Z}$, where $P$ denotes the parallel transport system induced by the Levi--Civita connection associated with the metric $m_J$. Choose a sequence $(\beta_j) \subseteq C^\infty(\mathbb{S}^1,\intcc[0]{0,1})$ with $\beta_j = 1$ on $\intcc[1]{0,\frac{1}{2j}} \cup \intcc[1]{1 - \frac{1}{2j},1}$ and such that $\supp \beta_j \subseteq \intcc[1]{0,\frac{1}{j}} \cup \intcc[1]{1 - \frac{1}{j},1}$ for all $j \in \mathbb{N}$. 
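Note also that $X_v$ is indeed an element of $T_\gamma\mathscr{L}_\varphi M$ in the sense of Lemma \ref{lem:free_twisted_loop_space}; the following quick check only uses $\beta(0) = 0$, $\beta(1) = 1$, and the fact that parallel transport over a degenerate interval is the identity. For $k = 0$ we compute \begin{align*} X_v(0) &= (1 - \beta(0))P^{u}_{0,0}(v) + \beta(0)P^{u}_{1,0}(D\varphi(v)) = v,\\ X_v(1) &= (1 - \beta(1))P^{u}_{0,1}(v) + \beta(1)P^{u}_{1,1}(D\varphi(v)) = D\varphi(v), \end{align*} \noindent so that $X_v(1) = D\varphi(X_v(0))$, which is precisely the twisted boundary condition from Lemma \ref{lem:free_twisted_loop_space}.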
Using \eqref{eq:twisted_differential} we compute \begin{align*} \langle V, \beta_j X_v \rangle_J &= \langle \grad \mathscr{A}^H_\varphi,\beta_j X_v \rangle_J - \langle \grad \mathscr{A}^H,\beta_j X_v\rangle_J\\ &= d\mathscr{A}^H_\varphi(\beta_j X_v) - d\mathscr{A}^H(\beta_j X_v)\\ &= (\varphi^*\lambda - \lambda)(\beta_j(0)X_v(0))\\ &= (\varphi^*\lambda - \lambda)(v) \end{align*} \noindent for all $j \in \mathbb{N}$, implying \begin{align*} (\varphi^*\lambda - \lambda)(v) &= \lim_{j \to \infty} \langle V,\beta_j X_v \rangle_J\\ &= \lim_{j \to \infty} \int_0^1 d\lambda(JV(t),\beta_j(t)X_v(t))dt\\ &= 0 \end{align*} \noindent by dominated convergence. \end{proof} \section{M-Polyfolds} \label{ch:polyfolds} The classical approach for establishing generic transversality results in Floer theories is via a suitable version of the Sard--Smale Theorem \cite[Theorem~A.5.1]{mcduffsalamon:J-holomorphic_curves:2012}. The idea is to represent the moduli space of negative gradient flow lines as the zero set of an appropriate Fredholm section. Unfortunately, this does not work for the moduli space of unparametrised negative gradient flow lines, as the reparametrisation action is not smooth. Moreover, the transversality results usually require perturbing the given metric to a generic one. There is a more abstract approach for proving transversality results via polyfold theory. This theory was, and still is being, developed by Hofer--Wysocki--Zehnder \cite{hoferwysockizehnder:polyfolds:2021}, primarily with symplectic field theory in mind. Another more algebraic approach to abstract perturbations is via Kuranishi structures, developed by Fukaya--Oh--Ohta--Ono \cite{FOOO:kuranishi:2020}. In the first section we introduce the basic terminology of polyfold theory, namely the notion of scale smoothness on scale Banach spaces. In the second section we formulate a prototypical result for Morse--Bott homology following the brilliant lecture notes \cite{cieliebak:ga:2018}.
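To fix ideas before the formal definitions, we recall the standard motivating example from the polyfold literature; the notation here is purely illustrative and independent of the following sections. The chain of Sobolev spaces \begin{equation*} E_m := H^m(\mathbb{S}^1,\mathbb{R}^n), \qquad m \geq 0, \end{equation*} \noindent defines a scale structure on $E_0 = L^2(\mathbb{S}^1,\mathbb{R}^n)$: each inclusion $E_{m + 1} \hookrightarrow E_m$ is a compact operator by the Rellich embedding theorem, and $E_\infty = \bigcap_{m \geq 0} E_m = C^\infty(\mathbb{S}^1,\mathbb{R}^n)$ is dense in every level. The point of this structure is that the translation action \begin{equation*} \mathbb{R} \times E_0 \to E_0, \qquad (s,\gamma) \mapsto \gamma(\cdot + s), \end{equation*} \noindent fails to be differentiable in the classical Banach space sense, but it is smooth in the scale calculus introduced in the first section.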
\input{scale_calculus.tex} \input{polyfold_setup_MB.tex} \section{Bubbling Analysis} \label{ch:bubbling_analysis} In this chapter we prove the main result about the compactness of the moduli space of negative gradient flow lines of the symplectic action functional. \begin{definition}[Symplectic Asphericity] A connected symplectic manifold $(M,\omega)$ is said to be \bld{symplectically aspherical}, if \begin{equation*} \int_{\mathbb{S}^2} f^*\omega = 0 \qquad \forall f \in C^\infty(\mathbb{S}^2,M). \end{equation*} \end{definition} \begin{remark} A symplectic manifold $(M,\omega)$ is symplectically aspherical, if and only if $[\omega]\vert_{\pi_2(M)} = 0$, where $[\omega] \in \operatorname{H}_{\mathrm{dR}}^2(M;\mathbb{R})$ denotes the cohomology class of the closed form $\omega$. \end{remark} \begin{theorem}[Bubbling] \label{thm:bubbling} Let $(M,\omega)$ be a compact symplectically aspherical symplectic manifold and let $(u_k)$ be a sequence of negative gradient flow lines of the symplectic action functional $\mathcal{A}_H$ for some $H \in C^\infty(M \times \mathbb{T})$ with uniformly bounded energy \begin{equation*} E_J(u_k) := \int_{-\infty}^{+\infty}\norm[0]{\partial_s u_k}^2_J ds \end{equation*} \noindent for some, and hence every, $\omega$-compatible almost complex structure $J$. Then the derivatives of $(u_k)$ are uniformly bounded. \end{theorem} The main idea of the proof is to assume that the derivatives $(Du_k)$ explode and then to construct a nonconstant $J$-holomorphic sphere. Indeed, assume that there exists a sequence $(s_k,t_k)$ in $\mathbb{R} \times \mathbb{T}$ such that \begin{equation*} \lim_{k \to \infty}\norm[0]{\partial_su_k(s_k,t_k)} = +\infty. \end{equation*} Then we rescale the sequence $(u_k)$, see Figure \ref{fig:magnifying_glass}.
Set \begin{equation*} m_k := \norm[0]{\partial_su_k(s_k,t_k)} \qquad \text{and} \qquad v_k(\sigma,\tau) := u_k\del[3]{\frac{\sigma}{m_k} + s_k,\frac{\tau}{m_k} + t_k} \end{equation*} \noindent for all $(\sigma,\tau) \in \mathbb{C}$. \noindent Then we compute \begin{align*} m_k \partial_\sigma v_k(\sigma,\tau) &= \partial_s u_k\del[3]{\frac{\sigma}{m_k} + s_k,\frac{\tau}{m_k} + t_k},\\ m_k \partial_\tau v_k(\sigma,\tau) &= \partial_t u_k\del[3]{\frac{\sigma}{m_k} + s_k,\frac{\tau}{m_k} + t_k}. \end{align*} In particular $\norm[0]{\partial_\sigma v_k(0,0)} = 1$ for all $k \in \mathbb{N}$. Moreover, every $v_k$ solves \begin{equation*} \partial_\sigma v_k(\sigma,\tau) + J\partial_\tau v_k(\sigma,\tau) = \frac{1}{m_k}JX_{H_{\frac{\tau}{m_k}+t_k}}(v_k(\sigma,\tau)) \qquad \forall (\sigma,\tau) \in \mathbb{C} \end{equation*} \noindent as $u_k$ satisfies the Floer equation \eqref{eq:Floer_equation}. If there exists $v_\infty \in C^\infty(\mathbb{C},M)$ such that \begin{equation*} v_k \xrightarrow{C^\infty_{\loc}} v_\infty, \qquad k \to \infty, \end{equation*} \noindent modulo subsequences, then $v_\infty$ satisfies \begin{equation*} \partial_\sigma v_\infty(\sigma,\tau) + J\partial_\tau v_\infty(\sigma,\tau) = 0 \qquad \forall (\sigma,\tau) \in \mathbb{C}. \end{equation*} Consequently, $v_\infty$ is a $J$-holomorphic plane. Using the assumption that the energy of the sequence $(u_k)$ is uniformly bounded, one can extend $v_\infty$ to a $J$-holomorphic sphere $v \in C^\infty(\mathbb{S}^2,M)$ such that $v\vert_\mathbb{C} = v_\infty$ via the identification $\mathbb{S}^2 \cong \mathbb{C} \cup \{\infty\}$. \begin{figure} \caption{Looking at the sequence $(u_k)$ of negative gradient flow lines of the symplectic action functional with a magnifying glass via rescaling.} \label{fig:magnifying_glass} \end{figure} \input{rescaling.tex} \input{elliptic_bootstrapping.tex} \input{removal_of_singularities.tex} \printbibliography \end{document}
\begin{document} \title{Independent joins of tolerance factorable varieties} \author[I.\ Chajda]{Ivan Chajda} \email{ivan.chajda@upol.cz} \address{Palack\'y University Olomouc\\Department of Algebra and Geometry\\17. listopadu 12, 771 46 Olomouc, Czech Republic} \author[G.\ Cz\'edli]{G\'abor Cz\'edli} \email{czedli@math.u-szeged.hu} \urladdr{http://www.math.u-szeged.hu/~czedli/} \address{University of Szeged\\ Bolyai Institute\\Szeged, Aradi v\'ertan\'uk tere 1\\ Hungary 6720} \author[R.\ Hala\v s]{Radom\'\i r Hala\v s} \email{radomir.halas@upol.cz} \address{Palack\'y University Olomouc\\Department of Algebra and Geometry\\17. listopadu 12, 771 46 Olomouc, Czech Republic} \thanks{This research was supported by the project Algebraic Methods in Quantum Logic, No.: CZ.1.07/2.3.00/20.0051, by the NFSR of Hungary (OTKA), grant numbers K77432 and K83219, and by T\'AMOP-4.2.1/B-09/1/KONV-2010-0005} \dedicatory{Dedicated to B\'ela Cs\'ak\'any on his eightieth birthday} \subjclass[2000]{Primary: 08A30. Secondary: 08B99, 06B10, 20M07} \keywords{Tolerance relation, quotient algebra by a tolerance, tolerance factorable algebra, independent join of varieties, product of varieties, rotational lattice, rectangular band} \date{May 11, 2012} \begin{abstract} Let $\lat$ denote the variety of lattices. In 1982, the second author proved that $\lat$ is \emph{{strongly} tolerance factorable}, that is, the members of $\lat$ have quotients in $\lat$ modulo tolerances, although $\lat$ has proper tolerances. We did not know any other nontrivial example of a strongly tolerance factorable variety. Now we prove that this property is preserved by forming independent joins (also called products) of varieties. This enables us to present infinitely many {strongly} tolerance factorable varieties with proper tolerances.
Extending a recent result of G.\ Cz\'edli and G.\ Gr\"atzer, we show that if $\var V$ is a strongly tolerance factorable variety, then the tolerances of $\var V$ are exactly the homomorphic images of congruences of algebras in $\var V$. Our observation that (strong) tolerance factorability is not necessarily preserved when passing from a variety to an equivalent one leads to an open problem. \end{abstract} \maketitle \subsection*{Basic concepts} Given an algebra $\alg A = (A, F)$, a binary reflexive, symmetric, and compatible relation $T \subseteq A\times A=A^2$ is called a \emph{tolerance} on $\alg A$. The set of tolerances of $\alg A$ is denoted by $\Tol{\alg A}$. A tolerance which is not a congruence is called \emph{proper}. By a \emph{block} of a tolerance $T$ we mean a maximal subset $B$ of $A$ such that $B^2\subseteq T$. Let $\bset T$ denote the set of all blocks of $T$. It follows from Zorn's lemma that, for $X\subseteq A$, we have that \begin{equation}\label{HeRZrnLypp} \text{$X^2\subseteq T$ if{f} $X\subseteq U$ for some $U\in \bset T$.} \end{equation} Applying this observation to $X=\{a,b\}$, we obtain that $\bset T$ determines $T$. Furthermore, we also conclude that, for each $n$, each $n$-ary $f \in F$, and all $B_1,\dots,B_n\in\bset T$, there exists a $B\in\bset T$ such that \begin{equation}\label{mfaXCDsF} \setm{f(b_1,\ldots,b_n)} {b_1\in B_1,\ldots, b_n\in B_n}\subseteq B\text. \end{equation} We say that $\alg A$ is \emph{$T$-factorable} if, for each $n$, each $n$-ary $f \in F$ and all $B_1,\dots,B_n\in\bset T$, the block $B$ in \eqref{mfaXCDsF} is uniquely determined. In this case, we define $f(B_1,\ldots, B_n) := B$, and we call the algebra $(\bset T,F)$ the \emph{quotient algebra} $\alg A/T$ of $\alg A$ \emph{modulo the tolerance} $T$. 
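The block construction above is easy to experiment with computationally. The following Python sketch is ours, not part of the paper: the three-element carrier and the relation are illustrative choices. It computes the blocks of a reflexive, symmetric relation by brute force and confirms the consequence of \eqref{HeRZrnLypp} that the blocks determine $T$.

```python
from itertools import combinations

# A hypothetical three-element example: T is the reflexive, symmetric
# closure of {(a,b), (b,c)}.  (Compatibility is vacuous: no operations.)
A = {'a', 'b', 'c'}
T = {(x, x) for x in A} | {('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b')}

def blocks(A, T):
    """Maximal subsets B of A with B x B contained in T."""
    ok = lambda B: all((x, y) in T for x in B for y in B)
    cand = [set(B) for r in range(1, len(A) + 1)
            for B in combinations(sorted(A), r) if ok(set(B))]
    return [B for B in cand if not any(B < C for C in cand)]

bs = blocks(A, T)
print(sorted(sorted(B) for B in bs))   # [['a', 'b'], ['b', 'c']]

# The blocks determine T: (x,y) is in T iff {x,y} lies in some block.
recovered = {(x, y) for B in bs for x in B for y in B}
print(recovered == T)                  # True
```

The overlapping blocks $\set{a,b}$ and $\set{b,c}$ also show that a tolerance need not be transitive: $(a,b),(b,c)\in T$ but $(a,c)\notin T$.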
{If $\alg A$ is $T$-factorable for all $T\in \Tol{\alg A}$, then we say that $\alg A$ is \emph{tolerance factorable}.} In what follows, we focus on the following properties of varieties; $\var V$ denotes a variety of algebras. The \emph{tolerances} of $\var V$ are understood as the tolerances of algebras of $\var V$. \begin{list}{}{\setlength{\leftmargin}{1.6cm}\setlength{\rightmargin}{1cm}} \item[\proponetfac] $\var V$ is \emph{tolerance factorable} if all of its members are tolerance factorable. \item[\proptwostfac] $\var V$ is \emph{strongly tolerance factorable} if it is tolerance factorable and, for all $\alg A\in \var V$ and all $T\in\Tol{\alg A}$, $\alg A/T\in \var V$. \item[\propthreetcp] \emph{The tolerances of $\,\var V$ are the images of {its} congruences} if for each $\alg A\in\var V$ and every $T\in\Tol{\alg A}$, there exist an algebra $\alg B\in \var V$, a congruence $\boldsymbol{\theta}$ of $\alg B$, and a surjective homomorphism $\varphi\colon \alg B\to \alg A$ such that $T=\setm{(\varphi(a),\varphi(b))} {(a,b)\in\boldsymbol{\theta}}$. \item[\propfourprop] $\var V$ \emph{has proper tolerances} if at least one of its members has a proper tolerance. \end{list} \emph{Term equivalence}, in short, \emph{equivalence}, of varieties was introduced by W.\,D.~Neumann~\cite{neumann}. (He called it rational equivalence.) Instead of recalling the technical definition, we mention that the variety of Boolean algebras is equivalent to that of Boolean rings. The variety of sets (with no operations) is denoted by $\Set$. Although the present paper is self-contained, for more information on tolerances the reader is referred to the monograph by I.~Chajda~\cite{chajdabook}. \subsection*{Motivation and the target} {Besides $\lat$ {and $\Set$}, no other {strongly tolerance factorable variety with proper tolerances} has been known since 1982.} Our initial goal was to find {some other ones}.
{We prove} that independent joins, defined below, preserve {each of the properties \proponetfac{}--\propfourprop{}}. This enables us to construct infinitely many, pairwise non-equivalent, strongly tolerance factorable varieties with proper tolerances. Also, we show that if a variety is strongly tolerance factorable, then its tolerances are the images of its congruences, but the converse implication fails. Finally, we show that (strong) tolerance factorability is not always preserved when passing from a variety to an equivalent one, and we raise an open problem based on this fact. \subsection*{Independent joins} Let $n\in\mathbb N=\set{1,2,\ldots}$, and let $\anyvar 1,\ldots,\anyvar n$ be varieties of the same type. These varieties are called \emph{independent} if there exists an $n$-ary term $\indterm$ in their common type such that, for $i=1,\ldots,n$, $\anyvar i$ satisfies the identity $\indterm(x_1,\ldots,x_n)=x_i$. In this case, the join $\nopvar$ of the varieties $\anyvar 1,\ldots,\anyvar n$ is called an \emph{independent join} (in the lattice of all varieties of a given type). This concept was introduced by G.~Gr\"atzer, H.~Lakser, and J.~P\l onka~\cite{gglakplonka}. Independent joins of varieties are also called (direct) \emph{products}. \begin{proposition}[W.~Taylor~\cite{taylor}, G.~Gr\"atzer, H.~Lakser, and J.~P\l onka~\cite{gglakplonka}]\label{propIndep} Assume that a variety $\nopvar$ is the independent join of its subvarieties $\anyvar 1,\ldots,\anyvar n$. \begin{enumeratei} \item \label{xhTalma1} Every algebra $\alg A\in \nopvar$ is $($isomorphic to$)$ a product $\alg A_1\times\cdots\times \alg A_n$ with $\alg A_1\in\anyvar 1$, \dots, $\alg A_n\in\anyvar n$. These $\alg A_i$ are uniquely determined up to isomorphism. \item\label{xhTalma2} If $B$ is a subalgebra of $\alg A=\alg A_1\times\cdots\times \alg A_n$ considered above, then there exist subalgebras $B_i$ of $\alg A_i$ $(i=1,\ldots,n)$ such that $B=B_1\times\cdots\times B_n$.
\item\label{xhTalma3} Every tolerance $T$ of $\alg A$ is of the form $\,T_1\times\cdots\times T_n$ such that $T_i$ is a tolerance of $\alg A_i$ for $i=1,\ldots,n$. {If $T$ is a congruence, then so are the $T_i$.} \end{enumeratei} \end{proposition} Although part \eqref{xhTalma3} above is stated only for congruences in \cite{taylor}, the one-line argument ``regard $T$ as a subalgebra of $\alg A_1^2\times\cdots\times \alg A_n^2$ and apply part \eqref{xhTalma2}'' of \cite{taylor} also works if $T$ is a tolerance rather than a congruence. \subsection*{Results and examples} The properties \proponetfac--\propfourprop{} are not independent of each other, nor of congruence permutability. We know from H.~Werner~\cite{werner}, see also J.\,D.\,H.\ Smith~\cite{smith}, that a variety is congruence permutable if{f} it has no proper tolerances. Obviously, a variety without proper tolerances is strongly tolerance factorable and its tolerances are the images of its congruences. Also, we present the following statement, which generalizes the result of G.~Cz\'edli and G.~Gr\"atzer~\cite{czggg}. (The statements of this section will be proved in the next one.) \begin{proposition}\label{czggggener}~ \begin{enumeratei} \item\label{czggggeneri} Assume that $\alg A$ is a tolerance factorable algebra and $T\in\Tol{\alg A}$. Then there exist an algebra $\alg B$ $($of the same type as $\alg A)$, a congruence $\boldsymbol{\theta}$ of $\alg B$, and a surjective homomorphism $\varphi\colon \alg B\to\alg A$ such that $T=\varphi(\boldsymbol{\theta})$, where $\varphi(\boldsymbol{\theta})=\set{(\varphi(x),\varphi(y)): (x,y)\in\boldsymbol{\theta}}$. \item\label{czggggenerii} If a variety is strongly tolerance factorable, then its tolerances are the images of its congruences. \end{enumeratei} \end{proposition} Tolerance factorability does not imply strong tolerance factorability. For example, let $\var V$ be a nontrivial proper subvariety of the variety $\lat$ of all lattices.
We know from G.~Cz\'edli~\cite{czg82} that $\lat$ is {strongly tolerance factorable}; see also G.~Gr\"atzer and G.\,H.~Wenzel~\cite{gggwenz} for an alternative proof. Consequently, $\var V$ is tolerance factorable. However, {it is not strongly tolerance factorable} by G.~Cz\'edli~\cite[Theorem 3]{czg82}. Our main achievement is the following statement. \begin{theorem}\label{thmmain} Assume that a variety $\nopvar$ is the independent join of its subvarieties $\anyvar 1,\ldots,\anyvar n$. {Consider one of the properties \begin{enumeratei} \item strong tolerance factorability, \item tolerance factorability, \item the tolerances of the variety are the images of its congruences. \end{enumeratei} If this property holds for all the $\anyvar i$, then it also holds for $\nopvar$.} \end{theorem} Now we are ready to give several examples {of strongly tolerance factorable varieties with proper tolerances.} It would be easy to give such examples by taking varieties equivalent to $\lat$. (For example, we could replace the binary join by the $n$-ary operation $f(x_1,\ldots,x_n):=x_1\vee x_2$.) Hence we will give pairwise non-equivalent varieties even though Example~\ref{hnplMrl} implies the surprising fact that strong tolerance factorability is not necessarily preserved when passing from a variety to an equivalent one. For $2\leq n\in \mathbb N$ and $1\leq i\leq n$, let $\setni ni$ be the variety consisting of all algebras $(X,\setop n)$ such that $X$ is a nonempty set and $\setop n$ is an $n$-ary operation symbol inducing the $i$-th projection on $X$. That is, $\setni ni$ is of type $\set{\setop n}$, and it is defined by the identity $\setop n(x_1,\ldots,x_n)=x_i$. Let $\setnn n=\setni n1\vee\cdots\vee \setni nn$ and $\setnn 1=\Set$. \begin{example}\label{exbands} The varieties $\setnn n$, {$n\in\mathbb N$, are strongly tolerance factorable and pairwise non-equivalent, and they have proper tolerances}.
\end{example} Notice that $\setnn 2$ is the variety of \emph{rectangular bands}, which are idempotent semigroups satisfying the identity $xyx=x$. See A.\,H.~Clifford~\cite{clifford}, who introduced this concept, and B.~J\'onsson and C.~Tsinakis \cite{jonssontsinakis}. Next, consider lattices with an additional unary operation $\rotop n$ {that induces an automorphism of the lattice structure such that the identity $\rotop n^n(x)=x$ (where $\rotop n^n(x)$ denotes the $n$-fold iteration $\rotop n\bigl(\rotop n(\dots \rotop n(x)\dots)\bigr)$ of $\rotop n$) holds}. We can call them \emph{rotational lattices of order $n$}. The variety of these lattices is denoted by $\rotlat n$. Note that $\rotlat 1$ is equivalent to $\lat$ while $\rotlat 2$ consists of \emph{lattices with involution}, which were studied, for example, in I.~Chajda and G.~Cz\'edli~\cite{chczinvol}. {Note also that $\rotlat n\subseteq\rotlat m$ if{f} $n\mid m$.} \begin{example}\label{exrotlats} The varieties $\rotlat n$, $n\in\mathbb N$, {are strongly tolerance factorable and pairwise non-equivalent, and they have proper tolerances}. Moreover, none of them is equivalent {to a variety} given in Example~\ref{exbands}. \end{example} Armed with Theorem~\ref{thmmain}, one can give some more sophisticated examples. For example, we present the following. Let $\sophop$ be a binary operation symbol, and let $m,n\in\mathbb N$. We consider the type $\tau_{mn}=\set{\vee,\wedge,\rotop m, \setop n,\sophop}$. Define the action of $\setop n$ and $\sophop$ on the algebras of $\rotlat m$ as first projections. This way these algebras become $\tau_{mn}$-algebras and they form a variety $\rottamp{\rotlat m}{n}$. Similarly, on the members of $\setnn n$, we define $\vee$, $\wedge$, and $\rotop m$ as first projections and $\sophop$ as the binary second projection. The algebras we obtain constitute a variety $\setnntamp{\setnn n}{m}$ of type $\tau_{mn}$. Let $\combvar mn={\rottamp{\rotlat m}{n}}\vee \setnntamp{\setnn n}{m}$. 
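Before moving on, we note that the rectangular band structure mentioned above is easy to verify mechanically. The following Python sketch is our own illustration (the sizes of the two factors are arbitrary choices): it realizes the operation $(a,b)(c,d)=(a,d)$ on a product $X\times Y$ and checks idempotency, associativity, and the identity $xyx=x$.

```python
from itertools import product

# A rectangular band on X x Y: (a,b)*(c,d) = (a,d), i.e. the operation
# acts as the first projection on X and the second projection on Y.
X, Y = range(2), range(3)        # factor sizes are our choice
S = list(product(X, Y))
mul = lambda p, q: (p[0], q[1])

assert all(mul(p, p) == p for p in S)                        # idempotent
assert all(mul(mul(p, q), r) == mul(p, mul(q, r))
           for p in S for q in S for r in S)                 # associative
assert all(mul(mul(p, q), p) == p for p in S for q in S)     # x y x = x
print("rectangular band axioms hold on a", len(S), "element set")
```

This is exactly the decomposition promised by Proposition~\ref{propIndep}\eqref{xhTalma1} for the independent join $\setnn 2$.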
\begin{example}\label{excomb} The varieties $\combvar mn$, $m,n\in\mathbb N$, are {strongly tolerance factorable and they have proper tolerances}. Furthermore, $\combvar mn$ is equivalent to $\combvar ij$ if{f} $(i,j)=(m,n)$. \end{example} Note that the varieties in Example~\ref{exrotlats} are congruence distributive while those in Examples~\ref{exbands} and \ref{excomb} satisfy no nontrivial congruence lattice identity. {Next, in the language of lattices, we consider the ternary lattice terms $\tjoin(x,y,z)=x\vee(y\wedge z)$ and $\tmeet(x,y,z)=x\wedge(y\vee z)$. Clearly, the identities $x\vee y=\tjoin(x,y,y)$ and $x\wedge y=\tmeet(x,y,y)$ hold in all lattices. This motivates the definition of another variety, in the language of $\set{\tjoin,\tmeet}$, as follows. In each of the six usual laws defining $\lat$, we replace $\vee$ and $\wedge$ by $\tjoin(x,y,y)$ and $\tmeet(x,y,y)$. For example, the absorption law $x=x\vee (x\wedge y)$ turns into the identity $x=\tjoin\bigl(x, \tmeet(x,y,y), \tmeet(x,y,y)\bigr)$. The six identities we obtain this way together with the identities $\tjoin(x,y,z)= \tjoin(x, \tmeet(y,z,z), \tmeet(y,z,z))$ and $\tmeet(x,y,z)= \tmeet(x, \tjoin(y,z,z), \tjoin(y,z,z))$ define a variety, which will be denoted by $\tlat$.}
Clearly, the strong tolerance factorability of $\lat$ implies that $\alg A/T\in \tlat$.} {Since $\tlat$ is only an {``}artificial{''} variety, we raise the following problem.} \begin{problem} {Is there a {well-known variety $\var V$ such that although $\var V$ is not tolerance factorable, it} is equivalent to {some tolerance factorable} (possibly {``}artificial{''}) variety?} \end{problem} \subsection*{Proofs} \begin{proof}[{Proof of Proposition~\ref{czggggener}}] We generalize the idea of G.~Cz\'edli and G.~Gr\"atzer~\cite{czggg}. Assume that $\alg A=(A,F)$ is a tolerance factorable algebra and $T\in\Tol{\alg A}$. If $\alg A$ belongs to a strongly tolerance factorable variety $\var V$, then all the algebras we construct in the proof will clearly belong to $\var V$. The quotient algebra $\alg A/T=\bigl(\bset T,F\bigr)$, defined according to formula \eqref{mfaXCDsF}, makes sense. So does the direct product $\alg C=\alg A\times (\alg A/T)$. Denoting $\setm{(x,Y)\in A\times \bset T}{x\in Y}$ by $D$, the construction implies that $\alg D=( D,F)$ is a subalgebra of $\alg C$. This $\alg D$ will play the role of $\alg B$. Define $\boldsymbol{\theta}=\setbm{\bigl((x_1,Y_1),(x_2,Y_2)\bigr) \in {D}^2}{Y_1=Y_2}$. As the kernel of the second projection from $\alg D$ to $\alg A/T$, it is a congruence on {$\alg D$}. The first projection $\varphi\colon \alg D\to \alg A$, $(x,Y)\mapsto x$, is a surjective homomorphism since, for every $x\in A$, \eqref{HeRZrnLypp} allows us to extend $\set x$ to a block of $T$. Clearly, if $\bigl((x_1,Y_1),(x_2,Y_2)\bigr)\in\boldsymbol{\theta}$, then $\set{x_1,x_2}\subseteq Y_1=Y_2\in\bset T$ implies that $\bigl(\varphi(x_1,Y_1),\varphi(x_2,Y_2)\bigr)= (x_1,x_2)\in T$. Conversely, assume that $(x_1,x_2)\in T$. Then, by \eqref{HeRZrnLypp}, there is a $Y\in\bset T$ with $\set{x_1,x_2}\subseteq Y$.
Hence $(x_1,Y), (x_2,Y)\in D$, $\bigl((x_1,Y), (x_2,Y)\bigr)\in\boldsymbol{\theta}$, and $x_i= \varphi(x_i,Y)$ yield the desired equality $T=\setbm{\bigl(\varphi(x_1,Y_1),\varphi(x_2,Y_2)\bigr)} { \bigl((x_1,Y_1),(x_2,Y_2)\bigr)\in\boldsymbol{\theta} }$. \end{proof} \begin{lemma}\label{lemmanoskewblock} Assume that $T$ is as in Proposition \textup{\ref{propIndep}}\eqref{xhTalma3} and $B\in\bset T$. Then there exist $B_i\in \bset{T_i}$, $i\in\set{1,\ldots,n}$, such that $B=B_1\times\cdots\times B_n$, and they are uniquely determined. Furthermore, $\bset T=\bset{T_1}\times\cdots\times \bset{T_n}$. \end{lemma} \begin{proof} Let $\pi_i$ denote the projection map $A\to A_i$, $(x_1,\ldots,x_n)\mapsto x_i$. Define $B_i:=\pi_i(B)$. First we show that $B_1\in\bset{T_1}$. If $a_1,b_1\in B_1$, then $(a_1,a_2,\ldots, a_n),(b_1,b_2,\ldots,b_n)\in B$ for some $a_j,b_j\in A_j$, $2\leq j\leq n$. Hence $B^2\subseteq T$ implies that $(a_1,b_1)\in T_1$. This gives that $B_1^2\subseteq T_1$, and we obtain $B_i^2\subseteq T_i$ for all $i\in\set{1,\ldots,n}$ by symmetric arguments. Thus \[(B_1\times\cdots\times B_n)^2\subseteq T_1\times\cdots\times T_n =T, \] which together with $B\in\bset T$ and the obvious $B\subseteq B_1\times\cdots\times B_n$ implies that \begin{equation}\label{isThx} B = B_1 \times\cdots\times B_n\text. \end{equation} The uniqueness of the $B_i$ is trivial. If $B_1\subseteq C_1\subseteq A_1$ such that $C_1^2\subseteq T_1$, then \[B^2=(B_1 \times\cdots\times B_n)^2\subseteq (C_1 \times B_2\times\cdots\times B_n)^2\subseteq T_1 \times\cdots\times T_n=T\text. \] Hence $B\in\bset T$ yields that the first inclusion above is an equality, which implies that $B_1=C_1$. Thus $B_1\in \bset {T_1}$ and $B_i\in\bset{T_i}$ for all $i$. This together with \eqref{isThx} proves that $\bset T\subseteq \bset{T_1}\times\cdots\times \bset{T_n}$. Finally, to prove the converse inclusion, assume that $U_i\in\bset{T_i}$ for $i=1,\ldots,n$, and let $U=U_1\times\cdots\times U_n$. 
Clearly, $U^2\subseteq T_1\times\cdots \times T_n=T$. By Zorn's lemma, there is a $B\in \bset T$ such that $U\subseteq B$. We already know that {$B_i\in\bset{T_i}$ and \eqref{isThx} holds.} This together with $U\subseteq B$ yields that $U_i\subseteq B_i$. Comparable blocks of $T_i$ are equal, whence $U_i=B_i$, for all $i$. Hence $U=B\in\bset T$, proving that $\bset{T_1}\times\cdots\times \bset{T_n} \subseteq \bset T$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thmmain}] {Assume first that the $\anyvar i$ are tolerance factorable.} Let $T$ be as in Proposition \ref{propIndep}\eqref{xhTalma3}. Assume that $\sterm$ is a $k$-ary term in the language of $\nopvar$ and $B_1,\ldots,B_k\in\bset T$. By Lemma~\ref{lemmanoskewblock}, there are uniquely determined $B_{ij}\in\bset{T_j}$ such that \begin{equation}\label{DdpOsDi} B_i=B_{i1}\times\cdots\times B_{in}\quad\text{for}\quad i=1,\ldots,k\text. \end{equation} Assume that $C$ is in $\bset T$ such that \begin{equation}\label{oWkQm} \setm{\sterm(b_1,\ldots,b_k)} {b_1\in B_1,\ldots, b_k\in B_k}\subseteq C\text. \end{equation} According to $\alg A=\alg A_1\times\cdots\times \alg A_n$, we can write $b_i=(b_{i1},\ldots,b_{in})$. Since $\sterm$ acts componentwise, \begin{equation}\label{NkLSp} \begin{aligned} \setm{&\sterm(b_1,\ldots,b_k)} {b_1\in B_1,\ldots, b_k\in B_k}\cr &=\setbm{\bigl(\sterm(b_{11},\ldots, b_{k1}),\ldots, \sterm(b_{1n},\ldots, b_{kn}) \bigr) }{b_{ij}\in B_{ij}}\cr &=\setm{\sterm(b_{11},\ldots, b_{k1}) }{b_{i1}\in B_{i1}} \times\cdots\times \setm{\sterm(b_{1n},\ldots, b_{kn}) }{b_{in}\in B_{in}}\text. \end{aligned} \end{equation} By Lemma~\ref{lemmanoskewblock}, $C=C_1\times\cdots \times C_n$ with $C_j\in\bset{T_j}$. Combining this with \eqref{oWkQm} and \eqref{NkLSp}, we obtain that, for $j\in\set{1,\ldots,n}$, \begin{equation}\label{isWeicG} \setm{\sterm(b_{1j},\ldots, b_{kj}) }{b_{ij}\in B_{ij}\text{ for }i=1, \ldots,k}\subseteq C_j\text. 
\end{equation} This implies the uniqueness of $C_j$ since $\anyvar j$ is tolerance factorable. Therefore, $C$ in \eqref{oWkQm} is uniquely determined, {and we obtain that $\nopvar$ is tolerance factorable.} {Next, assume that the $\anyvar i$ are strongly tolerance factorable. Observe that \eqref{isWeicG} also yields} that $C_j=\sterm(B_{1j},\ldots,B_{kj})$ in the quotient algebra $\alg A_j/T_j$. This, together with \eqref{DdpOsDi} and $C=C_1\times\cdots \times C_n$, implies that $\alg A/T$ is (isomorphic to) $\alg A_1/T_1\times\cdots\times \alg A_n/T_n$. Since $\anyvar j$ is {strongly} tolerance factorable, we conclude that $\alg A_j/T_j\in\anyvar j\subseteq\nopvar$. Therefore $\alg A/T\in \nopvar$, proving that $\nopvar$ is {strongly} tolerance factorable. {Finally, if the tolerances of $\anyvar i$ are the images of its congruences, for $i=1,\ldots, n$, then Proposition~\ref{propIndep} easily implies the same property of $\nopvar$.} \end{proof} \begin{proof}[Proof of Example~\ref{exbands}] Each of the $\setni ni$ is equivalent to $\Set$, whence it is easy to see that the $\setni ni$ are strongly tolerance factorable. The operation $\setop n$ witnesses that $\setnn n=\setni n1\vee\cdots\vee\setni nn$ is an independent join. Hence $\setnn n$ is {strongly} tolerance factorable by Theorem~\ref{thmmain}. The three-element algebra $\alg A=\bigl(\set{a,b,c},\setop n\bigr)$, where $\setop n$ acts as the first projection, belongs to $\setni n1\subseteq \setnn n$. Consider $T\in\Tol {\alg A}$ determined by $\bset T=\bigl\{\set{a,b},\set{b,c}\bigr\}$. This $T$ witnesses that $\setnn n$ {has proper tolerances}. Next, consider an arbitrary $\alg A\in \setnn n$. It is of the form $\alg A=\alg A_1\times\cdots\times \alg A_n$, where $\alg A_i\in\setni ni$ for $i=1,\ldots,n$. Let $\sterm$ be an arbitrary term in the language of $\setnn n$. Since $\setni ni$ is equivalent to $\Set$, $\sterm$ induces a projection on $A_i$, for $i=1,\ldots,n$.
It follows that $\sterm$ induces an operation on $ A$ that depends on at most $n$ variables. On the other hand, if none of the $\alg A_i$ is one-element, then $\setop n$ defines a term function on $ A$ that depends exactly on $n$ variables. Thus $n$ is the largest integer $k$ such that all term functions on algebras in $\setnn n$ depend on at most $k$ variables and there exists an algebra in $\setnn n$ with a term function depending exactly on $k$ variables. This proves that $\setnn n$ and $\setnn m$ are non-equivalent if $n\neq m$. \end{proof} \begin{proof}[Proof of Example~\ref{exrotlats}] Let $\alg A=(A,\vee,\wedge,\rotop n)\in \rotlat n$ and $T\in\Tol{\alg A}$. Then $T$ is also a tolerance of the lattice reduct $(A,\vee,\wedge)$, and $\bset T$ for the lattice reduct is the same as it is for $\alg A$. {We claim that, for every $B\in \bset T$, \begin{equation}\label{siThKq} \rotop n(B):=\setm{\rotop n(b)}{b\in B}\in \bset T\text. \end{equation} By Zorn's lemma, there is a $C\in \bset T$ such that $\setm{\rotop n(b)}{b\in B}\subseteq C$. Since $\rotop n^{-1}=\rotop n^{n-1}$ preserves $T$, $\setm{\rotop n^{-1}(c)}{c\in C}^2\subseteq T$. This together with $B\subseteq \setm{\rotop n^{-1}(c)}{c\in C}$ and $B\in\bset T$ yields that $B= \setm{\rotop n^{-1}(c)}{c\in C}$. Therefore, $g_n(B)=C\in\bset T$, proving \eqref{siThKq}. } For the lattice operations, $B$ in \eqref{mfaXCDsF} is uniquely determined since $\lat$ is (strongly) tolerance factorable by G. Cz\'edli~\cite{czg82}. By \eqref{siThKq}, the same holds for $g_n$. Thus $\alg A/T$ makes sense. $(A/T,\vee,\wedge)$ is a lattice since $\lat$ is {strongly} tolerance factorable. We conclude from \eqref{siThKq} that $\rotop n$ is a permutation on $\alg A/T$, whose $n$-th power is the identity map. Finally, assume that $B\vee C=D$ in $\alg A/T$; the case of the meet is similar. 
Then, by \eqref{siThKq} and $\setm{b\vee c}{b\in B,\text{ }c\in C}\subseteq D$, \begin{align*} \setm{x\vee y}{x\in \rotop n(B),\text{ } y\in\rotop n(C)} = \setm{\rotop n(b)\vee \rotop n(c)}{b\in B,\text{ } c\in C}\cr =\setm{\rotop n(b\vee c)}{b\in B,\text{ } c\in C} \subseteq \setm{\rotop n(d)}{d\in D} =\rotop n(D)\text. \end{align*} Hence $\rotop n(B)\vee\rotop n(C)=\rotop n(D)$, that is, $\rotop n$ is an automorphism of $(A/T,\vee,\wedge)$. Therefore, $\rotlat n$ {is strongly tolerance factorable. It has proper tolerances since so does $\lat$, which is equivalent to the subvariety $\rotlat 1$ of $\rotlat n$. } The Boolean lattice with $n$ atoms allows an automorphism $\varphi$ of order $n$ such that the subgroup generated by $\varphi$ acts transitively on the set of atoms, but no such automorphism of smaller order is possible. This easily implies that $\rotlat m$ is not equivalent to $\rotlat k$ if $m\neq k$. Since $\rotlat n$ is congruence distributive, it is not equivalent to $\setnn m$. \end{proof} \begin{proof}[Proof of Example~\ref{excomb}] Since $\sophop$ takes care of independence, Examples~\ref{exbands} and \ref{exrotlats} together with Theorem~\ref{thmmain} yield that $\combvar mn$ {is strongly tolerance factorable and it has proper tolerances}. Suppose for a contradiction that $(m,n)\neq (u,v)$ but $\combvar mn$ is equivalent to $\combvar uv$. Suppose first that $m=u$ and $n\neq v$. Let, say, $v<n$. Take the $2^n$-element $\alg A\in \setnntamp{\setnn n}{m} \subseteq \combvar mn$ for which all the $\alg A_i$ in Proposition \ref{propIndep}\eqref{xhTalma1} are 2-element. Let $\sterm$ be a binary term in the language of $\combvar mn$. Since all terms induce projections on $\alg A_i$, the identity $\sterm(x,\sterm(y,x))=x$ holds in $\alg A_i$ for $i=1,\ldots,n$. Therefore, $\alg A$ satisfies the same identity, for every binary term $\sterm$.
Observe that, up to now, we did not use the assumption on the size of $\alg A_i$, whence \begin{equation}\label{sTmidbInhowk}\sterm(x,\sterm(y,x))=x \text{ holds in }\setnn n\text{, for all binary terms }s\text. \end{equation} By the assumption, there is a $\combvar mv${-}structure $\alg B$ on the set $A$ such that $\alg B$ and $\alg A$ have the same term functions. By the definition of $\combvar mv{= \combvar uv}$, $\alg B$ is (isomorphic to) $\alg C\times \alg D$, where $\alg C\in {{\rottamp{\rotlat m}{v}}}$ and $\alg D\in { { \setnntamp{\setnn v}{m} } }$. Since $\alg C$ is a homomorphic image of $\alg B$ and $\alg B$ has the same term functions as $\alg A$, the identity $\sterm(x,\sterm(y,x))=x$ holds in $\alg C$ for all binary terms $\sterm$. Thus $\alg C$ is one-element since otherwise $\sterm(x,y)=x\vee y$ would fail this identity. Hence the term functions of $\alg B$ are the same as those of its $\setnn v$-reduct. Now, we can obtain a contradiction the same way as in the last paragraph of the proof of Example~\ref{exbands}: $\alg A$ has an $n$-ary term function that depends on all of its variables while all term functions of $\alg B$ depend on at most $v$ variables. This proves that $n=v$. Secondly, we suppose that $m\neq u$. Let, say, $m>u$. Consider the algebra $\alg A\in {\rottamp{\rotlat m}{n}}\subseteq \combvar mn$ such that the $\rotlat m$-reduct of $\alg A$ is the $2^m$-element boolean lattice and $g_m$ is a lattice automorphism of order $m$ that acts transitively on the set of atoms. (That is, the restriction of $g_m$ to the set of atoms is a cyclic permutation of order $m$.) Since $\combvar mn$ is equivalent to $\combvar un=\combvar uv$, there exist algebras $\alg C\in {\rottamp{\rotlat u}n}$ and $\alg D\in {\setnntamp{\setnn n}u}$ such that $\alg B:=\alg C\times \alg D\in \combvar un$ is equivalent to $\alg A$. Observe that $\alg D$, which is a homomorphic image of $\alg B$, has a lattice reduct. 
Hence, as in the first part of the proof, \eqref{sTmidbInhowk} easily implies that $\alg D$ is a one-element algebra. Therefore, $\alg A$ is equivalent to $\alg C$, that is, to a member of ${\rottamp{\rotlat u}{n}}$. Hence the $\rotlat m$-reduct of $\alg A$ is equivalent to a member of $\rotlat u$. This leads to a contradiction the same way as in the last paragraph of the proof of Example~\ref{exrotlats}. \end{proof} \begin{proof}[{Proof of Example~\ref{hnplMrl}}] {Consider the lattice $L$ in Figure~\ref{figone} as an algebra of $\tlat$. A tolerance $T\in\Tol L$ is given by its blocks $A=[a_0,a_1]$, \dots, $E=[e_0,e_1]$. (It is easy to check, and it follows even more easily from G.~Cz\'edli~\cite[Theorem 2]{czg82}, that $T$ is a tolerance.)} Since \[\setm{\tjoin(x,y,z)}{x\in A,\,\, y\in B,\,\, z\in C}=[c_0,a_1], \] {this set} is a subset of two {distinct} blocks, $A$ and $C$. Hence $\tlat$ is not tolerance factorable. The rest is trivial. \end{proof} \begin{figure} \caption{$L$ and the blocks of $T$} \label{figone} \end{figure} \begin{ackno} The authors thank Paolo Lipparini for helpful comments and for calling their attention to H.\ Werner~\cite{werner}. \end{ackno} \end{document}
\begin{document} \setlength{\parindent}{0pt} \title{Alternating Signed Bipartite Graphs and Difference-1 Colourings\thanks{Funded by Hardiman Research Scholarship}} \begin{abstract} \noindent We investigate a class of 2-edge-coloured bipartite graphs known as \emph{alternating signed bipartite graphs (ASBGs)} that encode the information in \emph{alternating sign matrices}. The central question is when a given bipartite graph admits an ASBG-colouring; a 2-edge colouring such that the resulting graph is an ASBG. We introduce the concept of a difference-1 colouring, a relaxation of the concept of an ASBG-colouring, and present a set of necessary and sufficient conditions for when a graph admits a difference-1 colouring. The relationship between distinct difference-1 colourings of a particular graph is characterised, and some classes of graphs for which all difference-1 colourings are ASBG-colourings are identified. One key step is Theorem \ref{multimatching}, which generalises Hall's Matching Theorem by describing a necessary and sufficient condition for the existence of a subgraph $H$ of a bipartite graph in which each vertex $v$ of $H$ has some prescribed degree $r(v)$. \end{abstract} \section{Introduction} In this article, we develop a theme introduced by Brualdi, Kiernan, Meyer, and Schroeder in \cite{brualdibib}, by investigating a class of bipartite graphs related to \emph{alternating sign matrices}. In general, we may associate to any matrix the bipartite graph whose vertices correspond to rows and columns, and where an edge between the vertices representing Row $i$ and Column $j$ encodes the information that the $(i,j)$ entry is non-zero. Additional features of the non-zero entries (for example, sign) might be indicated by assigning colours to the edges. In the case of alternating sign matrices, the special matrix structure translates to particular combinatorial properties of the resulting bipartite graphs.
We introduce the main objects of interest in this opening section. \subsection{Alternating Sign Matrices and Alternating Signed Bipartite Graphs} \begin{definition} An \emph{alternating sign matrix (ASM)} is a ${(0,1,-1)}$-matrix in which all row and column sums are $1$, and the non-zero elements in each row and column alternate in sign. \end{definition} It is easily observed that permutation matrices are examples of ASMs, and there are contexts in which the concept of an ASM arises as a natural extension of a permutation. Alternating sign matrices were first investigated by Mills, Robbins, and Rumsey \cite{asmconjecturebib}, who observed their connection to a variant of the ordinary determinant function related to the technique of \emph{Dodgson condensation} \cite{dodgsonbib}. In their construction, the role of ASMs is similar to that of permutations in the usual definition of the determinant. This motivated the problem of enumerating the ASMs of size $n \times n$, leading to the \emph{Alternating Sign Matrix Conjecture}, namely that this number is \[\frac{1!4!7! \ldots (3n-2)!}{n!(n+1)! \ldots (2n-1)!}\text{.}\] Independent and very different proofs were published in 1996 by Zeilberger \cite{asmproof1bib} and Kuperberg \cite{asmproof2bib}, respectively using techniques from enumerative combinatorics and from statistical mechanics. Zeilberger's article establishes that the number of $n \times n$ ASMs is equal to the number of \emph{totally symmetric self-complementary plane partitions} in a $2n \times 2n \times 2n$ box, and the connection between these two classes of objects is further explored by Doran \cite{tsscppbib}. The connection to physics arises from the \emph{square ice} model for two-dimensional crystal structures; square patches of which correspond exactly to alternating sign matrices. Interest in alternating sign matrices intensified following the discovery of these deep connections between apparently disparate fields. 
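The enumeration formula above can be checked directly for small $n$. The following Python sketch is ours, not part of the original development: it evaluates the product formula and compares it with a brute-force count over all $(0,1,-1)$-matrices; the brute force is only practical for $n\le 3$.

```python
from itertools import product
from math import factorial

def asm_formula(n):
    # 1! 4! 7! ... (3n-2)!  divided by  n! (n+1)! ... (2n-1)!
    num = 1
    for k in range(1, n + 1):
        num *= factorial(3 * k - 2)
    den = 1
    for k in range(n, 2 * n):
        den *= factorial(k)
    return num // den

def is_alternating(line):
    # Row/column condition of an ASM: entries sum to 1 and the
    # non-zero entries alternate in sign, starting with +1.
    nz = [e for e in line if e != 0]
    return (sum(line) == 1 and nz and nz[0] == 1
            and all(x * y == -1 for x, y in zip(nz, nz[1:])))

def count_asms(n):
    count = 0
    for entries in product((-1, 0, 1), repeat=n * n):
        m = [entries[i * n:(i + 1) * n] for i in range(n)]
        count += (all(is_alternating(r) for r in m)
                  and all(is_alternating(c) for c in zip(*m)))
    return count

print([asm_formula(n) for n in range(1, 6)])   # [1, 2, 7, 42, 429]
print(all(asm_formula(n) == count_asms(n) for n in (1, 2, 3)))  # True
```

In particular, the brute force recovers the seven $3\times 3$ ASMs discussed below.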
A detailed and engaging account of the resolution of the alternating sign matrix conjecture can be found in the book by Bressoud \cite{bressoudbib}. \* Recent developments in the study of ASMs include an investigation of their spectral properties in \cite{spectralbib}, an extension of the concept of Latin square arising from the replacement of permutation matrices by ASMs in \cite{latinsquarebib}, and a study of some graphs arising from ASMs in \cite{brualdibib}. This latter article introduces the concept of the \emph{alternating signed bipartite graph (ASBG)} of an ASM, which is constructed as follows. The graph has a vertex for each row and each column of the matrix. Edges in the graph represent non-zero entries; the vertices corresponding to Row $i$ and Column $j$ are adjacent if and only if the entry in the $(i,j)$-position of the matrix is non-zero. Edges are marked as positive or negative, respectively represented by blue and red edge colours, according to the sign of the corresponding matrix entry. \begin{example}\label{3x3_ASBGs} All seven $3 \times 3$ ASMs and their corresponding ASBGs (up to isomorphism) are shown below: \includegraphics[width = \textwidth]{asm_asbg.png} \noindent\textsc{Note:} ASMs $A$ and $B$ correspond to the same ASBG if $B = PAQ$ for permutation matrices $P$ and $Q$. \end{example} From the viewpoint of bipartite graphs, we propose the following abstraction of the previous definition. \begin{definition} \label{asbg_def} An \emph{alternating signed bipartite graph (ASBG)} is a bipartite graph ${G}$ with no isolated vertices and edges coloured blue and red, for which there exists an ordering of the vertices of ${G}$ such that, for each vertex $u$ of $G$ with neighbours ordered $v_1, v_2, \dots, v_k$, the edges $uv_1, uv_2, \dots, uv_k$ alternate in colour, starting and ending with blue.\end{definition} Suppose that $G$ is an ASBG in the sense of Definition \ref{asbg_def}, with a bipartition $(P_1,P_2)$ of its vertex set. 
Let $u_1,\dots ,u_k$ be the vertices of $P_1$ and let $v_1,\dots ,v_l$ be the vertices of $P_2$. It is easily confirmed that the following assignment determines a $k\times l$ alternating sign matrix $A(G)$. $$ A(G)_{ij} = \left\{\begin{array}{rl} 1 & \mathrm{if\ }u_iv_j\ \mathrm{is\ a\ blue\ edge\ in}\ G \\ -1 & \mathrm{if\ }u_iv_j\ \mathrm{is\ a\ red\ edge\ in}\ G \\ 0 & \mathrm{if\ }u_iv_j\ \mathrm{is \ not \ an\ edge\ in}\ G \\ \end{array}\right. $$ Since every row and every column of $A(G)$ has entries summing to 1, the sum of all entries of $A(G)$ is equal both to $k$ and $l$. Thus $k=l$ and $G$ is balanced. \* Our main theme in this article is the problem of determining whether a given graph admits a 2-edge-colouring with respect to which it is an ASBG. The article is organised as follows. In Section 2, we introduce the concept of an ASBG-colouring, and a relaxation of this, which we refer to as a difference-1 colouring. We show that these two are equivalent in the case of a graph in which no edge belongs to multiple cycles. In Section 3, we establish general criteria for a graph to admit a difference-1 colouring, establishing a generalisation of Hall's Matching Theorem in the process. In the final section, we consider when a bipartite graph may admit multiple difference-1 colourings, and introduce some variants of such colourings. \section{ASBG-Colourings} In this section, we consider properties of edge-colourings compatible with an ASBG structure. \subsection{Obstacles to ASBG-Colourability} We define a \emph{colouring} ${c}$ of a graph ${G}$ to be a function ${c: E(G) \rightarrow \{r,b\}}$, and denote the graph ${G}$ endowed with the colouring ${c}$ by ${G^c}$. If ${H}$ is a subgraph of ${G}$, then ${c}$ restricts to a colouring ${c_H}$ of ${H}$. We denote by ${H^c}$ the graph ${H}$ with edges coloured according to ${c_H}$. 
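The notation just introduced has a direct computational analogue: a colouring $c$ can be stored as a map from edges to $\{r,b\}$, from which the matrix $A(G)$ of the previous section can be read off. The following is a minimal sketch; the representation and function names are our own, not from the text.

```python
# Sketch: a colouring c: E(G) -> {'r', 'b'} stored as a dictionary keyed by
# (row-vertex, column-vertex) pairs; A(G) is read off as in the previous section.
def matrix_of(colouring, k, l):
    A = [[0] * l for _ in range(k)]
    for (i, j), colour in colouring.items():
        A[i][j] = 1 if colour == 'b' else -1   # blue -> +1, red -> -1
    return A

# The 3x3 ASM containing a -1 entry, viewed as a coloured bipartite graph.
c = {(0, 1): 'b',
     (1, 0): 'b', (1, 1): 'r', (1, 2): 'b',
     (2, 1): 'b'}
A = matrix_of(c, 3, 3)
assert all(sum(row) == 1 for row in A)                              # row sums are 1
assert all(sum(A[i][j] for i in range(3)) == 1 for j in range(3))   # column sums are 1
```

The two assertions check exactly the row- and column-sum conditions used above to conclude that an ASBG is balanced.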
If $G^c$ is an ASBG and $v$ is a vertex of $G^c$, then the numbers ${deg^B(v)}$ and ${deg^R(v)}$ of blue and red edges incident with ${v}$, respectively, must satisfy the following relation, since the edges incident with $v$ alternate in colour, beginning and ending with blue: \begin{equation}\label{degB-degR} deg^B(v) - deg^R(v) = 1 \text{.} \end{equation} \begin{definition} A 2-edge colouring $c$ of a graph $G$ is an \emph{ASBG-colouring} of $G$ if $G^c$ is an ASBG. The graph $G$ is \emph{ASBG-colourable} if there exists an ASBG-colouring of $G$.\end{definition} The following are some basic necessary (but not sufficient) criteria that a graph ${G}$ must meet for it to be ASBG-colourable: \begin{itemize} \item ${G}$ must be bipartite and balanced. \item The degree of each vertex in ${G}$ must be odd, from \eqref{degB-degR}. \end{itemize} \begin{example}\label{unfeasible} Let $G$ be the following graph: \includegraphics[width = \textwidth]{unfeasible0.png} We attempt to colour the edges of $G$ with a colouring $c$, so that \eqref{degB-degR} is satisfied at every vertex: \begin{itemize} \item All edges incident with vertices of degree ${1}$ must be blue. \item Now $4$ out of $5$ edges incident with vertices $A$ and $B$ are blue, so it is not possible for $G^c$ to satisfy \eqref{degB-degR} at each vertex. \end{itemize} Therefore ${G}$ is not ASBG-colourable. \end{example} \begin{example}\label{unconfigurable} Now consider the following graph ${G}$ with edge-colouring ${c}$. \includegraphics[width = \textwidth]{unconfigurable.png} We observe that $G$ has a unique colouring that satisfies \eqref{degB-degR} at every vertex. However, we claim that there is no ordering of the vertices that satisfies the condition of Definition \ref{asbg_def} for this colouring. The condition of Definition \ref{asbg_def}, applied to the vertex $A$, requires that $Y$ occurs between $X$ and $Z$ in an ordering of the vertices of $G$. The same condition applied to $B$ requires that $X$ occurs between $Y$ and $Z$.
Since these requirements are incompatible, we conclude that $G$ is not ASBG-colourable. \end{example} Examples \ref{unfeasible} and \ref{unconfigurable} demonstrate two different ways in which a graph can fail to be ASBG-colourable. If a graph ${G}$ has an edge colouring ${c}$ that satisfies \eqref{degB-degR} at every vertex, we say that ${G}$ has a \emph{difference-${1}$ colouring}, and if the vertices of ${G^c}$ can be ordered such that the edges incident with each vertex alternate in colour, beginning and ending with blue, we say that ${G^c}$ is \emph{configurable}. \* In this paper, we will present necessary and sufficient conditions for a given graph to have a difference-1 colouring, as well as classes of graphs for which every difference-1 colouring is configurable. We will also give a generalisation of \emph{Hall's Matching Theorem}, which will be needed later in the paper. \subsection{Difference-$1$ Colourings} \begin{definition}\label{dif-1_def} A 2-edge colouring $c$ of a graph ${G}$ is a \emph{difference-$1$ colouring} of $G$ if $G^c$ satisfies $deg^B(v)-deg^R(v) = 1$ at every vertex $v$.\end{definition} \noindent\textsc{Note:} We have introduced the concept of a difference-1 colouring specifically to address the question of ASBG-colourability. Therefore, we will consider the existence of difference-1 colourings only for balanced bipartite graphs. \begin{definition}\label{config_def} A difference-1 colouring ${c}$ of a graph $G$ is \emph{configurable} if $G^c$ satisfies the conditions of Definition \ref{asbg_def}.\end{definition} \noindent\textsc{Note:} A graph has a difference-1 colouring if and only if each of its connected components has a difference-1 colouring. Similarly, a graph is an ASBG if and only if each of its connected components is.
Therefore, we will consider the existence of difference-1 and ASBG-colourings only for connected graphs, and so all graphs from now on should be assumed to be connected, balanced, and bipartite, unless otherwise stated. \* We remark that the property of configurability can be usefully visualised in terms of embedding the vertices in the plane along two parallel lines so that the edges incident with each vertex alternate in colour, as in the diagrams in Examples \ref{3x3_ASBGs} and \ref{unconfigurable}. \begin{lemma} Let ${G}$ be a bipartite graph with a difference-1 colouring. Then ${G}$ is balanced. \end{lemma} \begin{proof} Let $c$ be a difference-1 colouring of $G$, and let ${b}$ and ${r}$ be the number of blue and red edges in ${G^c}$, respectively. Let ${(P_1, P_2)}$ be the bipartition of ${V(G)}$. Each vertex ${v}$ of ${P_1}$ satisfies ${deg^B(v) - deg^R(v) = 1}$. Summing this expression over all ${v \in P_1}$, we have ${|P_1| = b - r}$. Similarly, ${|P_2| = b - r}$. Therefore ${|P_1| = |P_2|}$, so ${G}$ is balanced. \end{proof} \begin{definition} A \emph{leaf} is a vertex of degree ${1}$.\end{definition} \begin{definition} A \emph{twig} is a configuration of three vertices, consisting of two leaves incident with the third vertex of the twig (called the base of the twig), which has degree ${3}$. \end{definition} One useful property of leaves and twigs is that the colouring of their edges in any difference-1 colouring is uniquely determined. All edges incident with leaves must be coloured blue, which means that the remaining edge incident with the base of a twig must be coloured red, to satisfy ${(\ref{degB-degR})}$. 
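The defining condition of Definition \ref{dif-1_def} is local and mechanical to check. The sketch below (the edge-list representation and function name are our own, not from the text) verifies it at every vertex:

```python
from collections import defaultdict

def is_difference_1(edges):
    # edges: list of (u, v, colour) triples with colour 'b' (blue) or 'r' (red).
    # Returns True iff deg^B(v) - deg^R(v) = 1 at every vertex v.
    diff = defaultdict(int)
    for u, v, colour in edges:
        d = 1 if colour == 'b' else -1
        diff[u] += d
        diff[v] += d
    return all(d == 1 for d in diff.values())

# P_2 with its single edge blue satisfies the condition at both vertices.
assert is_difference_1([('u', 'v', 'b')])
# A twig base with two blue leaf edges and one red edge fails at the red endpoint.
assert not is_difference_1([('b1', 'l1', 'b'), ('b1', 'l2', 'b'), ('b1', 'x', 'r')])
```

The second example mirrors the observation above: a twig forces blue leaf edges and a red base edge, so the condition can only hold once the red edge's other endpoint acquires further blue edges.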
\begin{definition} A \emph{leaf-twig configuration} at a vertex ${v}$ is a configuration of four vertices (distinct from $v$), consisting of a leaf and a twig, where the base of the twig and the leaf are both incident with ${v}$.\end{definition} \noindent\textsc{Note:} We refer to the operation of deleting the four vertices of a leaf-twig configuration, and their four incident edges, as \emph{removing a leaf-twig configuration from $G$}. We also refer to the operation of adding four vertices to ${G}$ in such a configuration, with the leaf and the base incident with a vertex ${v}$ of ${G}$, as \emph{adding a leaf-twig configuration to ${G}$ at ${v}$}. \includegraphics[width = \textwidth]{leaf-twig_config_and_coloured.png} \begin{center} {\footnotesize A leaf-twig configuration with its only difference-1 colouring.} \end{center} \begin{lemma}\label{leaf-twig_removal_lemma} Let $G$ and $G'$ be graphs with the property that $G'$ is obtained from $G$ by the addition of a leaf-twig configuration. Then $G'$ has a difference-1 colouring if and only if $G$ has a difference-1 colouring. \end{lemma} \begin{proof} Since the addition of a leaf-twig configuration involves one additional edge of each colour at a single vertex of $G$, it is easily observed that any difference-$1$ colouring of $G$ extends in a unique way to a difference-$1$ colouring of $G'$. On the other hand, any difference-$1$ colouring of $G'$ restricts to a difference-$1$ colouring of $G$. \end{proof} A consequence of Lemma \ref{leaf-twig_removal_lemma} is that repeated addition and/or removal of leaf-twig configurations does not affect the status of a graph with respect to the existence of a difference-$1$ colouring. \* \noindent\textsc{Note:} For any graph with a difference-1 colouring that is not configurable, it is possible to add leaf-twig configurations so that the resulting graph is configurable. Adding leaf-twig configurations to a configurable graph will always result in a configurable graph.
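The repeated removal of leaf-twig configurations is easy to mechanise. The sketch below (adjacency-set representation and names our own, not from the text) deletes leaf-twig configurations until none remain:

```python
def remove_leaf_twigs(adj):
    # adj: dict mapping each vertex to the set of its neighbours.
    # Repeatedly deletes a leaf-twig configuration: a leaf, plus a twig whose
    # base has degree 3 and two leaf neighbours, both attached to a common vertex v.
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    def is_leaf(u):
        return len(adj[u]) == 1
    found = True
    while found:
        found = False
        for v in list(adj):
            leaves = [u for u in adj[v] if is_leaf(u)]
            bases = [b for b in adj[v]
                     if len(adj[b]) == 3 and sum(is_leaf(u) for u in adj[b]) == 2]
            if leaves and bases:
                # delete the leaf, the base, and the base's two twig leaves
                doomed = [leaves[0], bases[0]] + \
                         [u for u in adj[bases[0]] if is_leaf(u)]
                for x in doomed:
                    for y in adj.pop(x):
                        if y in adj:
                            adj[y].discard(x)
                found = True
                break
    return adj

# P_2 with one leaf-twig configuration added at a vertex 'a' reduces back to P_2.
G = {'a': {'b', 'l', 'c'}, 'b': {'a'}, 'l': {'a'},
     'c': {'a', 't1', 't2'}, 't1': {'c'}, 't2': {'c'}}
assert len(remove_leaf_twigs(G)) == 2
```

By the remark following Lemma \ref{leaf-twig_removal_lemma}, the output of this procedure has a difference-$1$ colouring if and only if the input does.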
\begin{definition} Given a graph ${G}$, we define the \emph{reduced form} of ${G}$ to be the graph that results from deleting leaf-twig configurations from ${G}$ until none remain. We also refer to a graph as \emph{reduced} if it possesses no leaf-twig configuration. \end{definition} \noindent\textsc{Note:} Although the reduced form of $G$ is not necessarily uniquely determined as a subgraph of $G$, it is uniquely determined as a graph, up to isomorphism. \* It is a consequence of Lemma \ref{leaf-twig_removal_lemma} that ${G}$ has a difference-1 colouring if and only if its reduced form has a difference-1 colouring. \subsection{Configurability for Cactus Graphs}\label{cactus_section} In this section, we examine a class of graphs, namely \emph{cactus graphs}, for which every difference-1 colouring is configurable. \begin{definition} A \emph{cactus graph} is a connected graph in which any pair of cycles share at most one vertex. \end{definition} Equivalently, a cactus graph is a connected graph in which any edge belongs to at most one cycle. Our main result in this section is Theorem \ref{ced_difference-1_is_asbg}, which asserts that for a cactus graph, the existence of a difference-1 colouring is a sufficient condition for ASBG-colourability. The key technical ingredient in the proof is a partition of the vertex set of a bipartite cactus graph, whose properties are described in the following lemma, Lemma \ref{cactus}. We denote the minimum distance in a graph between vertices $v$ and $u$ by $d(v,u)$. \begin{lemma}\label{cactus} Let $v$ be a vertex of a bipartite cactus graph $G$. Write $k = \max_{u\in V(G)}d(v,u)$, and for $i=0,\dots ,k$ define $$ V_i = \{u\in V(G): d(v,u)=i\}. $$ \begin{enumerate} \item If $1\le i\le k$ and $x\in V_i$, then $x$ has at most two neighbours in $V_{i-1}$.
\item If $1\le i\le k-1$ and $x\in V_i$, then there is at most one vertex $y$ in $V_i$, distinct from $x$, for which $x$ and $y$ have a common neighbour in $V_{i+1}$. Moreover, if such a vertex $y$ exists, then $x$ has only one neighbour in $V_{i-1}$. \end{enumerate} \end{lemma} \begin{proof} We note that $V(G)$ is the disjoint union of the sets $V_i$, and that, since $G$ is bipartite, the neighbours of any vertex in $V_i$ belong to $V_{i-1}\cup V_{i+1}$, for $1\le i< k$. For item 1, suppose that $x$ has distinct neighbours $u_1$, $u_2$, $u_3$ in $V_{i-1}$. Each of the edges $xu_1,\ xu_2,\ xu_3$ is the initial edge of a path in $G$ from $x$ to $v$. It follows that $xu_2$ belongs to two distinct cycles, one including the edge $xu_1$ and one including the edge $xu_3$, contrary to the hypothesis that $G$ is a cactus graph. For item 2, suppose that $y_1$ and $y_2$ are two vertices of $V_i$, distinct from $x$, with the property that $z_1$ is a common neighbour of $y_1$ and $x$ in $V_{i+1}$, and $z_2$ is a common neighbour of $y_2$ and $x$ in $V_{i+1}$. Let $u$ be a neighbour of $x$ in $V_{i-1}$. Then the edge $xu$ belongs to two distinct cycles in $G$, one including the edges $z_1x$ and $z_1y_1$, and the other including the edges $z_2x$ and $z_2y_2$. Thus at most one element of $V_i\backslash\{x\}$ shares a neighbour $z$ with $x$ in $V_{i+1}$. Suppose that $y$ is such a vertex, and that $x$ has two neighbours $u_1$ and $u_2$ in $V_{i-1}$. Then the edge $u_1x$ belongs to a cycle in $G$ that includes the edges $xz$ and $zy$, and also to a cycle in $G$ that includes the edge $xu_2$ and no vertex of $V_{i+1}$. From this contradiction we conclude that if $x$ shares a neighbour in $V_{i+1}$ with another vertex of $V_i$, then $x$ has only one neighbour in $V_{i-1}$. \end{proof} \begin{theorem}\label{ced_difference-1_is_asbg} Let ${G}$ be a cactus graph with difference-1 colouring ${c}$. Then ${G^c}$ is configurable.
\end{theorem} \begin{proof}We choose a vertex $v$ of $G$ and partition the vertex set of $G$ into non-empty subsets $V_0=\{v\}, V_1,\dots ,V_k$ as in the statement of Lemma \ref{cactus}. We note that $(P_1,P_2)$ is a bipartition of the vertex set of $G$, where $P_1=\cup_{i \text{ even}}V_i$ and $P_2=\cup_{i \text{ odd}}V_i$. We let $L$ and $M$ be parallel lines embedded in the plane and position the vertex $v$ on the line $L$. We configure $G$ by positioning the vertices of $P_1$ and $P_2$ at distinct locations on $L$ and $M$ respectively, so that the order in which the vertices are positioned along the two lines satisfies the requirements of Definition \ref{config_def}. For $1\le i\le k$, the $i$th step in the process involves the positioning of the vertices of $V_i$ on $L$ or $M$, according to whether $i$ is even or odd. Suppose that $r$ steps have been completed and that every vertex of $\cup_{i=0}^{r-1}V_i$ has the property that its neighbours are positioned in a manner consistent with an ASBG-colouring. Let $x_1,\dots ,x_t$ be the vertices of $V_r$. We position the vertices of $V_{r+1}$ (on $L$ or $M$) in $t$ steps, the first of which is to position the neighbours of $x_1$. At most two neighbours of $x_1$ (in $V_{r-1}$) are already positioned; all further neighbours of $x_1$ belong to $V_{r+1}$ and may be positioned in a manner that satisfies the alternating condition on colours of edges incident with $x_1$. At step $j$ we position the neighbours of $x_j$ in $V_{r+1}$. From Lemma \ref{cactus} it follows that at most two neighbours of $x_j$ in $G$ have already been assigned positions at this stage, potentially two from $V_{r-1}$ or one each from $V_{r-1}$ and $V_{r+1}$. In any case we may position the remaining neighbours of $x_j$ to satisfy the alternating condition on coloured edges. This iterative process results in a configuration of $G$.
\end{proof} \begin{example} \label{ced_config} The diagram below demonstrates how the construction of Theorem \ref{ced_difference-1_is_asbg} might apply to a particular cactus graph with a difference-1 colouring, for a choice of initial vertex $v$. \includegraphics[width = \textwidth]{CEDconfiguration.png} \end{example} \section{Deciding the Existence of a Difference-1 Colouring} In this section, we develop methods of determining whether a given bipartite graph admits a difference-1 colouring. We consider this question first for trees and unicyclic graphs, where the analysis is considerably easier. \subsection{Trees} We now consider necessary and sufficient conditions for a tree to have a difference-1 colouring. As we have seen in Section \ref{cactus_section}, this will resolve the question of ASBG-colourability for trees, as every difference-1 colouring of a tree is configurable. We first note that the only connected ASBG that contains no red edges is $P_2$, the path graph on two vertices (with its edge coloured blue). \begin{theorem}\label{asbt-colouring} Let ${T}$ be a tree. Then ${T}$ has a difference-1 colouring if and only if its reduced form is $P_2$. \end{theorem} \begin{proof} Suppose the reduced form of ${T}$ is $P_2$. Then $T$ can be constructed by repeatedly adding leaf-twig configurations to $P_2$. Hence, by Lemma \ref{leaf-twig_removal_lemma}, $T$ has a difference-1 colouring. \* Now suppose that $T$ has a difference-1 colouring $c$ and is not $P_2$. Choose any vertex ${r}$ of ${T}$. Let ${u}$ be a leaf of ${T}$ which is furthest from ${r}$, and let $b$ be the neighbour of $u$ on the unique path from $u$ to $r$. As $T$ has a difference-1 colouring, every vertex of $T$ has odd degree; in particular, since $T$ is not $P_2$, the vertex $b$ has degree at least $3$. Therefore $u$ shares its neighbour $b$ with at least one other vertex $u_1$ that is further from $r$ than $b$. As $u$ is a furthest leaf from $r$, this means that $u_1$ is also a leaf.
As $u$ and $u_1$ are both leaves, this means the edges $ub$ and $u_1b$ must be blue in $T^c$. This means that a third edge incident with $b$ and another vertex $v$ must be red. If $v$ is not on the unique path from $b$ to $r$, then $v$ is not a leaf (as $bv$ is red, while all edges incident with leaves are blue), and so the subtree beyond $v$ contains a leaf that is further from $r$ than $u$, which is contrary to the choice of $u$. Therefore $v$ is on the unique path from $b$ to $r$. As $c$ is a difference-1 colouring and $bv$ is red in $T^c$, $v$ must be incident with at least two blue edges. Let $l$ be a neighbour of $v$ which is not on the unique path from $v$ to $r$, such that $lv$ is blue. As $d(r,u) = d(r,l) + 1$, this means that $l$ must be a leaf, otherwise it would be incident with at least one red edge leading to a leaf further from $r$ than $u$. This means that ${u}$, ${u_1}$, ${b}$, and ${l}$ make a leaf-twig configuration, with ${b}$ being the base of the twig, and ${l}$ the leaf. Removing this results in another tree with a difference-1 colouring (Lemma \ref{leaf-twig_removal_lemma}). Inductively, this means that leaf-twig configurations can be removed until there are no more red edges. As $P_2$ is the only connected ASBG with no red edges, this process will reduce ${T}$ to $P_2$. \end{proof} \begin{corollary} A tree has an ASBG-colouring if and only if its reduced form is $P_2$, and this colouring is unique. \end{corollary} Note that the process outlined in the proof of Theorem \ref{asbt-colouring} not only gives an existence condition for an ASBG-colouring of a tree, but also constructs the colouring. The following observation will be needed later, for the proof of Lemma \ref{local-tree-lemma}. \begin{corollary} \label{asbt-corollary} Let ${T}$ be an ASBG-colourable tree and let ${r}$ be a vertex of ${T}$. Then there is a sequence of leaf-twig removals that reduces ${T}$ to a subgraph that consists of only leaves and twigs attached to $r$, with one more leaf than twig.
\end{corollary} \begin{proof} In the proof of Theorem \ref{asbt-colouring}, it was shown that a furthest vertex from ${r}$ is always part of a removable leaf-twig configuration. So we successively remove the furthest leaf-twig configuration from $r$ until any remaining leaf-twig configurations are attached to $r$. We call the resulting tree $T'$. Theorem \ref{asbt-colouring} tells us that it is possible to remove leaf-twig configurations from $T$ until only $P_2$ remains, where $r$ is one of the vertices of this $P_2$. Therefore $T'$ must be $P_2$ with extra leaf-twig configurations at $r$, which means that $T'$ consists only of leaves and twigs attached to $r$, with one more leaf than twig. \end{proof} \subsection{Unicyclic Graphs} \begin{definition} A graph is \emph{unicyclic} if it is connected and contains exactly one cycle. \end{definition} When analysing the ASBG-colourability of graphs which are not trees, it is useful to define the \emph{skeleton} of a graph. \begin{definition} For a graph ${G}$ (containing at least one cycle), we refer to the subgraph that results from repeatedly removing leaves and their incident edges, until none remain, as \emph{the skeleton} ${Sk(G)}$ of $G$. \end{definition} \noindent\textsc{Note:} ${Sk(G)}$ is uniquely determined as a subgraph of ${G}$. The process of repeatedly deleting leaves from a tree terminates with a single vertex, but which vertex remains depends on the order in which leaves are deleted. For this reason, we do not define the skeleton of a tree. \* We also note that if ${G'}$ is the reduced form of ${G}$, then ${Sk(G') = Sk(G)}$. \begin{definition} A \emph{junction} in a graph ${G}$ is a vertex $v$ with ${deg_{Sk(G)}(v) \geq 3}$. \end{definition} \noindent\textsc{Note:} If ${G}$ is unicyclic, then the skeleton ${Sk(G)}$ is the graph consisting of the cycle in ${G}$, and ${G}$ has no junctions. \begin{definition} For any graph $G$ that contains a cycle, let ${v}$ be a vertex in ${Sk(G)}$.
We define the \emph{local tree at v}, ${T_v}$, to be the connected component containing ${v}$ that remains when all edges of ${Sk(G)}$ are deleted from ${G}$. \end{definition} \begin{example} ${\;}$ \includegraphics[width = \linewidth]{local_tree.png} \end{example} \begin{lemma}\label{local-tree-lemma} Let ${G}$ be a graph with a difference-1 colouring ${c}$ and reduced form $H$. Then for each vertex ${v \in Sk(H)}$, the local tree ${T_v}$ consists either only of ${v}$, or of $v$ with only leaves or only twigs incident with it. \end{lemma} \begin{proof} The number of blue edges at ${v}$ in ${H^c}$ is one greater than the number of red edges at ${v}$ in ${H^c}$. This also is the case for every vertex in ${T_v^c}$ except possibly for ${v}$ itself. Let ${d = deg_{T_v}^{\hspace{2mm}B}(v) - deg_{T_v}^{\hspace{2mm}R}(v)}$. If ${d > 1}$, we attach ${d-1}$ twigs to ${v}$. If ${d < 1}$, we attach ${1-d}$ leaves to ${v}$. We call the resulting coloured tree ${T_v'}$. ${T_v'}$ is an ASBG, which means that leaf-twig configurations can be removed from $T_v'$ until only leaf-twig configurations attached to $v$ remain (Corollary \ref{asbt-corollary}). If there are any leaf-twig configurations attached to $v$ that are also in $T_v$, we remove them. We have now removed all leaf-twig configurations in $T_v'$ that are also in $G$, and the remaining graph consists of a copy of $P_2$ containing $v$ with $|d-1|$ leaf-twig configurations attached to $v$. We can now delete the $|d-1|$ twigs or leaves that we attached to $v$ to make $T_v'$ that were not part of $T_v$. We are now left with the reduced form of $T_v$; a subgraph of ${T_v}$ that consists either only of $v$, or of $v$ with only leaves or only twigs incident with it. \end{proof} \begin{example} ${\;}$ \includegraphics[width = \linewidth]{leaf-twig_reduction.png} \end{example} \begin{corollary}\label{ltt-corollary} Let ${G}$ be a graph and ${H}$ be the reduced form of ${G}$. 
Then if $G$ has a difference-1 colouring $c$, any vertex $v$ of degree ${2}$ in ${Sk(G)}$ is of one of the following types: \begin{itemize} \item \textbf{Leaf-Type:} A vertex of degree $3$ in $H$, with a leaf incident with it. If a blue and a red edge meet at $v$ in $Sk(G^c)$, then $v$ must be a leaf-type vertex in $G$. \item \textbf{Twig-Type:} A vertex of degree ${3}$ in ${H}$, with the base of a twig incident with it. If two blue edges meet at $v$ in $Sk(G^c)$, then $v$ must be a twig-type vertex in $G$. \item \textbf{Triple-Type:} A vertex of degree ${5}$ in ${H}$, with three leaves incident with it. If two red edges meet at $v$ in $Sk(G^c)$, then $v$ must be a triple-type vertex in $G$. \end{itemize} \end{corollary} \noindent\textsc{Note:} We only use the terms leaf-type, twig-type, and triple-type to classify vertices of degree $2$ in $Sk(G)$. Vertices of higher degree in $Sk(G)$ will be discussed later. We consider twig-type and triple-type vertices to be of \emph{opposite type} to one another. \begin{definition} Let $G$ be a graph whose skeleton contains at least one vertex that is not leaf-type. A \emph{limb} of ${G}$ is a subgraph ${H}$ of ${Sk(G)}$ such that the edges of ${H}$ form a trail whose only non-leaf-type vertices are its (not necessarily distinct) endpoints. \end{definition} We observe that every edge of ${Sk(G)}$ belongs to exactly one limb, and that the skeleton of a graph is the edge-disjoint union of its limbs. \begin{lemma} \label{odd-even-distance-lemma} Let ${G}$ be a graph with a difference-1 colouring and let ${P = v_1 \dots v_k}$ be a limb in ${Sk(G)}$. Then ${P}$ has odd length if ${v_1}$ and ${v_k}$ are both of twig-type or both of triple-type, and even length if $v_1$ and $v_k$ are of opposite type. \end{lemma} \begin{proof} Suppose that $v_1$ and $v_k$ are of the same or opposite type to one another; then each is a twig-type or triple-type vertex.
As $v_1$ is either a twig-type or triple-type vertex, both edges incident with $v_1$ are the same colour. The same is true for $v_k$. As ${v_2, \dots, v_{k-1}}$ are all leaf-type vertices, they are each incident with an edge of each colour, which means that edge colours alternate along $P$. Therefore, if $k-1$ is odd, the colours of the edges incident with $v_1$ are the same as those incident with $v_k$, meaning that $v_1$ and $v_k$ are of the same vertex type. If $k-1$ is even, then the colours of the edges incident with $v_1$ are different to those incident with $v_k$, meaning that $v_1$ and $v_k$ are of opposite vertex type. \end{proof} \begin{proposition}\label{unicyclic_prop} Let ${G}$ be a bipartite unicyclic graph. Then ${G}$ is ASBG-colourable if and only if ${G}$ satisfies the following: \begin{itemize} \item Each vertex in Sk(G) is either a leaf-type, twig-type, or triple-type vertex; \item Each limb has odd (even) length if its endpoints are the same (opposite) type. \end{itemize} \end{proposition} \begin{proof} Suppose ${G}$ satisfies the above conditions. We now give $G$ a colouring $c$, as follows. At any twig-type vertex $v$ of $Sk(G)$, we colour the edges of $Sk(G)$ incident with $v$ blue, we colour the edge between $v$ and the base of the twig red, and the other two edges of the twig blue. So $v$ is incident with one more blue edge than red. At any triple-type vertex $v$ of $Sk(G)$, we colour the edges of $Sk(G)$ incident with $v$ red, and we colour the edges between $v$ and the three leaves blue. So $v$ is incident with one more blue edge than red. As all vertices of $Sk(G)$ that are of the opposite type are an even distance apart, vertices of the opposite type cannot be neighbours, and so it is possible to colour $G$ in this way. Along each limb of $Sk(G)$, the first edge is now coloured red or blue according to the type of vertex it is incident with. 
If the other vertex incident with this edge is of leaf-type, we colour the next edge along the limb the opposite colour to the previous edge. We continue to do this until we reach the other end of the limb. As the limb has odd (even) length if the vertices are of the same (opposite) type, the colours of the last two edges of the limb are opposite to one another, and the vertex incident with these two edges is of leaf-type. Finally, we colour all edges between all leaf-type vertices and leaves blue. Now all leaf-type vertices are incident with one more blue edge than red, and $c$ is a difference-1 colouring of $G$. As ${G}$ is unicyclic, we know that ${G^c}$ is configurable (Theorem \ref{ced_difference-1_is_asbg}). So $G$ is ASBG-colourable. \* On the other hand, suppose ${G}$ has an ASBG-colouring ${c}$. From Corollary \ref{ltt-corollary} and Lemma \ref{odd-even-distance-lemma}, we have that each vertex in $Sk(G)$ is of leaf, twig, or triple-type, and that each limb has odd (even) length if its endpoints are the same (opposite) type. \end{proof} \subsection{Graphs With Junctions} We now turn our attention to bipartite graphs possessing junctions, where the problems of determining the existence of a difference-1 colouring and configurability are both considerably more complicated. \* We have identified necessary and sufficient conditions for trees and unicyclic graphs to admit difference-1 colourings. In this section, we extend our analysis to the situation of graphs whose skeletons include junctions. We note that Lemma \ref{local-tree-lemma}, Corollary \ref{ltt-corollary}, and Lemma \ref{odd-even-distance-lemma} provide a partial test for difference-1 colourability, in the sense that a bipartite graph whose reduced form fails to satisfy the conditions of these results does not have a difference-1 colouring.
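Part of this partial test can be mechanised directly: by Lemma \ref{odd-even-distance-lemma}, the length of each limb must match the types of its endpoints in parity. A minimal sketch (the representation of limbs and the function name are our own, not from the text):

```python
def limb_parity_ok(length, end_type_1, end_type_2):
    # By the odd-even-distance lemma: a limb whose endpoints are twig-type or
    # triple-type must have odd length if the endpoint types agree,
    # and even length if they differ.
    return (length % 2 == 1) == (end_type_1 == end_type_2)

assert limb_parity_ok(3, 'twig', 'twig')         # same type, odd length
assert limb_parity_ok(4, 'twig', 'triple')       # opposite type, even length
assert not limb_parity_ok(2, 'triple', 'triple') # same type, even length: fails
```

A graph whose reduced form contains a limb failing this check has no difference-1 colouring; passing the check on every limb is necessary but not sufficient.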
The main content of this section is an algorithm that, given a graph satisfying these conditions, either detects an obstacle to the existence of a difference-1 colouring or confirms that one exists. We may restrict our attention to reduced graphs satisfying these conditions. \* We now describe the operation of the algorithm on a reduced graph $G$ satisfying the conditions of Lemma \ref{local-tree-lemma}, Corollary \ref{ltt-corollary}, and Lemma \ref{odd-even-distance-lemma}. The algorithm initializes $J$ to be the empty set and then iterates the following step. Each iteration involves at least one addition of a new element to $J$. A junction $j$ with $j \not \in J$ is chosen, and added to the set $J$. Integer weights are assigned in turn to each edge $e$ incident with $j$ as follows. \begin{itemize} \item If $e$ is not in $Sk(G)$, either $e$ is incident with a leaf and is assigned the weight $1$ (indicating that $e$ must be blue in any difference-1 colouring of $G$), or $e$ is incident with the base of a twig and is assigned the weight $-1$ (indicating red). \item If $e$ is in $Sk(G)$, and the limb to which $e$ belongs ends in a vertex of twig/triple-type, then the colour of $e$ in any difference-$1$ colouring is determined by this limb, and $e$ is assigned the weight $1$ or $-1$ accordingly. \item The remaining possibility is that the non-leaf type vertex on the other end of the limb starting at $j$ along $e$ is a junction $j_1$. In this case, the algorithm proceeds as follows: \begin{itemize} \item If $j_1 \in J$, then the weight 0 is assigned to $e$. \item If $j_1 \not \in J$ (which means, in particular, that $j_1 \not = j$), then $j_1$ is adjoined to the set $J$, and the assignment procedure is applied recursively to all edges of $G$ incident with $j_1$, with the exception of the edge $e_1$ that belongs to the same limb as $e$.
An integer weight is then assigned to $e_1$ by the requirement that the sum of the weights on all edges incident with $j_1$ must be $1$. The weight $x_e$ assigned to $e$ is then given by $x_e = \pm x_{e_1}$, according to whether the limb containing $e$ and $e_1$ requires these two edges to have the same or opposite colours. \end{itemize} \end{itemize} This iteration of the algorithm concludes when weights have been assigned to all edges incident with junctions that can be reached from $j$ via limbs of $Sk(G)$ that do not include twig or triple-type vertices. The algorithm stops iterating once every junction of $G$ belongs to $J$. If the algorithm identifies a junction for which the sum of the incident weights is not $1$, then the algorithm returns \emph{False} to indicate that $G$ does not admit a difference-$1$ colouring. Otherwise, it returns \emph{True}, and the algorithm proceeds to assign weights to all edges of the graph that are not incident with junctions, so that every edge of $G$ has now been assigned a weight. If all assigned weights are $1$, $-1$, or $0$, then $G$ admits a difference-$1$ colouring, which we can easily determine from the output of the algorithm. Otherwise, the algorithm has assigned some \emph{surplus weights}: weights of magnitude greater than $1$. In this case, the existence or not of a difference-$1$ colouring depends on properties of the graph consisting of those limbs of $Sk(G)$ that include edges that have been assigned surplus or zero weights. This theme is developed in Section \ref{redistrib}. We now present the details of the algorithm in pseudocode.
\begin{algorithm}[H] \caption{Difference-1 Colouring Algorithm}\label{junction_weight_algorithm} \begin{algorithmic}[1] \State $J \gets \{ \}$ \Function{assign}{j} \For {e in j.edges} \Comment{j.edges is the set of edges incident with j} \If {e.next $=$ "leaf"}\Comment{if $e \not\in Sk(G)$, e.next is the type of the other vertex incident with e} \State e.weight $\gets 1$ \ElsIf {e.next $=$ "base\_of\_twig"} \State e.weight $\gets -1$ \ElsIf {e.next $=$ "twig-type"} \Comment{e.next is the type of the last vertex of the limb L from j along e} \State e.weight $\gets (-1)^{\text{e.distance} + 1}$ \Comment{e.distance returns the length of L} \ElsIf {e.next $=$ "triple-type"} \State e.weight $\gets (-1)^{\text{e.distance}}$ \ElsIf {e.next $=$ "junction"} \State $j_1 \gets$ e.next\_junction \Comment{We must first calculate the weights of edges incident with $j_1$} \If{$j_1 \in J$} \State e.weight $\gets 0$ \Else \State $e_1 \gets$ e.last\_edge \Comment{Label the last edge on L} \State J.append($j_1$) \State $e_1$.next $\gets$ "null" \Comment{In the recursive step, the weight of $e_1$ should not be calculated} \State \textproc{assign}($j_1$) \State $e_1$.weight $\gets 1 -$ sum(f.weight \textbf{for} f \textbf{in} ($j_1$.edges $\setminus \{e_1\}$)) \State e.weight $\gets (-1)^{\text{e.distance} - 1}e_1$.weight \EndIf \EndIf \EndFor \EndFunction \While{$|J| < |\text{junctions}|$} \State $j \gets$ (junctions $\setminus J$)[0] \State J.append($j$) \State \textproc{assign}($j$) \EndWhile \Function{w}{j} \Return sum(e.weight \textbf{for} e \textbf{in} j.edges) \EndFunction \State result $\gets$ True \For{j in junctions} \If {\textproc{w}(j) $\not = 1$} \State result $\gets$ False \Comment{If w(j) is not 1 for every junction j, then the algorithm returns False} \EndIf \EndFor \If{result} \For{e in edges} \If {e.weight = "null"} \Comment{If e is not incident with a junction, it has no weight yet} \State e.weight $\gets (-1)^{\text{e.distance}}$e.next\_weight \Comment{We now assign e a weight} \EndIf \EndFor \EndIf
\State\Return result \end{algorithmic} \end{algorithm} For a vertex $v$ of a graph $G$, let $w(v)$ denote the sum of the weights assigned by Algorithm \ref{junction_weight_algorithm} to the edges incident with $v$. \begin{lemma} \label{algorithm_lemma} Let ${G}$ be a graph with a difference-1 colouring. Then Algorithm \ref{junction_weight_algorithm} assigns ${w(j) = 1}$ for all junctions ${j}$ in ${G}$. \end{lemma} \begin{proof} $G$ has a difference-1 colouring, which means that there is some function $c_1$ assigning a weight $c_1(e) = \pm 1$ to each edge $e$ of $G$ such that for each vertex $v$ of $G$, the sum $w'(v)$ of the weights of the edges incident with $v$ is $1$. Algorithm \ref{junction_weight_algorithm} defines a function ${c_2: E(G) \rightarrow \mathbb{Z}}$ assigning an integer weight to each edge $e$ in $G$. If $c_2(e) = \pm 1$ for some edge $e$, this means that $e$ must be blue/red in any difference-1 colouring of $G$. If we remove these edges and any vertices which are now isolated, the resulting graph $H$ consists of connected components which are all subgraphs of $Sk(G)$ and are composed of limbs whose ends are both junctions. On each iteration of the algorithm, a junction $j$ is chosen, and any other junction $j'$ encountered while assigning weights around $j$ has its weights assigned first, in such a way that $w(j') = 1$. This process partitions the junctions in the same way as the connected components of $H$. For a component $K$ of $H$, because $w'(v) = w(v) = 1$ in $G$ for any vertex $v \not = j$, the fact that $H$ results from removing only the edges $e$ with $c_2(e) = \pm 1$ from $G$ means that $w'(v) =w(v)$ in $H$ for any vertex $v \not = j$. We also know that $w'(j) = 1$ in $G$.
If $(P_1,P_2)$ is the bipartition of $K$ such that $j \in P_1$, because each edge is incident with a vertex in both parts of the bipartition, we have \[\sum_{v \in P_1}w'(v) = \sum_{v \in P_2}w'(v) \hspace{1cm} \text{and} \hspace{1cm} \sum_{v \in P_2}w(v) = \sum_{v \in P_1}w(v)\text{.}\] Because $w'(v) = w(v)$ for all vertices in $P_2$, we have \[\sum_{v \in P_2}w'(v) = \sum_{v \in P_2}w(v)\text{,}\] and therefore \[\sum_{v \in P_1}w'(v) = \sum_{v \in P_1}w(v)\text{.}\] Because $w'(v) = w(v)$ for all $v \not = j$, this implies that $w'(j)= w(j)$. Therefore the algorithm has assigned $w(j) = 1$ for all junctions $j$ in $G$. \end{proof} Note that the weights assigned to each edge incident with a junction can vary depending on the order in which the algorithm deals with the junctions and edges, but the result (whether or not the algorithm concludes that ${G}$ has a difference-1 colouring) is independent of such choices. \* It is possible that a graph has only junctions whose incident weights sum to ${1}$ but has no difference-1 colouring. This is because Algorithm \ref{junction_weight_algorithm} can assign to edges weights ${x(e)}$ of magnitude greater than ${1}$. We call these weights \emph{surplus weights}, and they (as well as zero weights) arise in the case of edges whose colour may differ in distinct difference-1 colourings of a graph, if any such colourings exist. In order to assign a particular colouring to a graph, these surplus weights must be \emph{redistributed} so that every edge has weight ${1}$ or ${-1}$, while maintaining ${w(j) = 1}$ for each junction ${j}$. If this is possible, we say that the surplus weights are \emph{redistributable}; a criterion for exactly when the surplus weights of any graph are redistributable will be outlined in Section \ref{redistrib}. \begin{example} Here, some edges of $G$ have been assigned surplus weights.
These surplus weights are redistributable: the sum of the weights at each junction in $Sk(G)$ remains the same before and after redistribution. \includegraphics[width = \textwidth]{redistribution1.png} \end{example} \subsection{Redistributability}\label{redistrib} We now consider the situation where Algorithm \ref{junction_weight_algorithm} cannot decide on the existence of a difference-1 colouring for a graph ${G}$. This means that the algorithm assigns weights to the edges of ${G}$ such that ${w(j) = 1}$ for each junction $j$ in $G$, but some of the weights assigned to the edges are surplus weights. \begin{definition} Let $G$ be a graph for which Algorithm \ref{junction_weight_algorithm} could not determine the existence of a difference-1 colouring, and let $e$ be an edge of $G$ that has been assigned a surplus weight. A \emph{redistribution} of the weight of $e$ is a redefining of the weight of $e$ and of some set $S$ of the other edges of $G$ such that every weight that has been redefined now has a value of $1$ or $-1$, the sum of the weights of all edges incident with each vertex remains $1$, and there is no proper subset of $S$ for which this is possible. We say the surplus weights of a graph are \emph{redistributable} if it is possible to redistribute all surplus weights in $G$ successively. \end{definition} In order to determine whether or not the surplus weights of a graph are redistributable, we can partition the edge set of the graph into equivalence classes and consider each equivalence class separately, as follows. \begin{definition} Let ${E'(G)}$ be the set of all edges of ${G}$ that belong to a cycle. For ${e_1, e_2 \in E'(G)}$, ${e_1 \sim e_2}$ if and only if ${e_1}$ and ${e_2}$ occur together in a cycle of ${G}$. \end{definition} \begin{lemma} \label{ced_partition} ${\sim}$ is an equivalence relation on $E'(G)$. \end{lemma} \begin{proof} It is immediate that ${\sim}$ is reflexive and symmetric.
For transitivity, let ${e_1 = u_1v_1}$, ${e_2 = u_2v_2}$, and ${e_3 = u_3v_3}$ be edges of ${G}$ with ${e_1 \sim e_2}$ and ${e_2 \sim e_3}$. Let $C_1$ be a cycle containing both $e_1$ and $e_2$, and $C_2$ be a cycle containing both $e_2$ and $e_3$. If $C_1 = C_2$, then $e_1 \sim e_3$. Assume $C_1 \not = C_2$. Let ${P_{u_1}}$ be the path in ${C_1}$ starting at ${u_1}$ and ending at a vertex ${u}$ in ${C_2}$ that does not include ${e_1}$ and contains no other vertex of ${C_2}$. Let ${P_{v_1}}$ be the path in ${C_1}$ starting at ${u_1}$ and ending at a vertex ${v}$ in ${C_2}$ that does include ${e_1}$ and contains no other vertex of ${C_2}$. We know that ${u}$ and ${v}$ are distinct, because ${e_2}$ is common to both ${C_1}$ and ${C_2}$. Let ${P_{u_3}}$ be the path in ${C_2}$ starting at ${u}$ and ending at ${u_3}$ which includes neither ${e_2}$ nor ${e_3}$, and let ${P_{v_3}}$ be the path in ${C_2}$ starting at ${v}$ and ending at ${v_3}$ which includes neither ${e_2}$ nor ${e_3}$ (relabelling ${u_3}$ and ${v_3}$ if necessary so that these two paths are disjoint). Then ${P_{u_1}}$, ${P_{v_1}}$, ${P_{u_3}}$, and ${P_{v_3}}$, together with the edge ${e_3}$, form a cycle which contains ${e_1}$ and ${e_3}$. Therefore ${e_1 \sim e_2}$ and ${e_2 \sim e_3}$ imply ${e_1 \sim e_3}$; the relation is transitive. \end{proof} \noindent\textsc{Note:} We refer to the equivalence classes of ${\sim}$ as the \emph{common cycle classes} of ${E'(G)}$. \begin{lemma}\label{common_cycle_class} Let $G$ be a graph with a difference-$1$ colouring whose edges have been assigned weights by Algorithm \ref{junction_weight_algorithm}. Then all distinct difference-$1$ colourings of $G$ can be obtained by redistributing any surplus and zero weights of $G$ within common cycle classes. \end{lemma} \begin{proof} Define $x: E(G) \to \mathbb{Z}$ to be the function that sends each edge of $G$ to the weight that it was assigned by Algorithm \ref{junction_weight_algorithm}, and $x': E(G) \to \{1,-1\}$ to be a redistribution of the surplus weights of $G$.
We know that at any junction $j$, the sums of the values assigned by each of these functions to the edges incident with $j$ are $1$, and we also have that $x'(e) = x(e)$ for all $e$ for which $x(e) \in \{1,-1\}$. Let us consider the subgraph $D$ of $G$ consisting of all edges $e$ for which $x(e) \not = x'(e)$, and their incident vertices. Define the $\delta$-weight $\delta : E(D) \to \mathbb{Z}$ by $\delta (e) = x(e)-x'(e)$. Note that for all edges in $D$, the $\delta$-weight is a non-zero integer, and the sum of the $\delta$-weights of the edges incident with a vertex in $D$ is $0$. Thus each vertex of $D$ is incident with at least one edge of positive $\delta$-weight and one of negative $\delta$-weight. \* Choose any neighbouring vertices $u_0$ and $u_1$ in $D$ and then repeat the following step: at step $i$, choose a vertex $u_{i+1}$ such that ${\delta (u_iu_{i+1})}$ is of the opposite sign to ${\delta (u_{i-1}u_i)}$. This process terminates at some step $j$ when we choose a vertex $u_{j+1} = u_{k}$, where $0 \leq k \leq j$. Because $G$ is bipartite, the cycle $u_ku_{k+1} \dots u_ju_k$ that results from this process has length at least $4$, and each vertex of the cycle is incident with edges of the cycle whose $\delta$-weights have opposite signs. We now define the graph $D'$ to be the graph $D$ with the $\delta$-weight of each edge in this cycle reduced in magnitude by $1$ while retaining its sign; any edge whose $\delta$-weight becomes $0$ is deleted. Note that all $\delta$-weights in $D'$ are non-zero integers and the sum of the $\delta$-weights incident with each vertex of $D'$ is $0$, and thus we can repeat this process until we are left with an empty graph. Therefore $D$ can be decomposed into cycles, which implies that surplus weights are redistributed within common cycle classes. \end{proof} We now consider when it is possible to redistribute surplus weights in a cactus graph.
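As an aside, the common cycle classes are computable: two edges lie on a common cycle exactly when they lie in the same biconnected component of the graph, so each class is the edge set of a biconnected component containing at least two edges (a single-edge component is a bridge). The following Python sketch, with function and variable names of our own choosing and assuming a simple graph given as an adjacency dictionary, extracts the classes using the classical depth-first search algorithm for biconnected components:

```python
import sys

def common_cycle_classes(adj):
    """adj: dict mapping each vertex to the set of its neighbours (simple graph).
    Returns a list of edge sets, one per common cycle class, i.e. one per
    biconnected component with at least two edges."""
    sys.setrecursionlimit(10000)
    disc, low, stack, classes = {}, {}, [], []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v not in disc:                     # tree edge
                stack.append(frozenset((u, v)))
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:             # u separates a component
                    comp = set()
                    while stack[-1] != frozenset((u, v)):
                        comp.add(stack.pop())
                    comp.add(stack.pop())
                    if len(comp) >= 2:            # single edge = bridge, not a class
                        classes.append(comp)
            elif disc[v] < disc[u]:               # back edge
                stack.append(frozenset((u, v)))
                low[u] = min(low[u], disc[v])

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return classes

# A triangle with a pendant edge: one common cycle class (the triangle);
# the pendant edge is a bridge and belongs to no class.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
classes = common_cycle_classes(adj)
assert len(classes) == 1
assert classes[0] == {frozenset((1, 2)), frozenset((2, 3)), frozenset((1, 3))}
```

For a cactus graph, each class returned is exactly one cycle, in line with the observation used in the next lemma.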
\begin{lemma}\label{redistribute_CED} Let ${G}$ be a cactus graph that passes Algorithm \ref{junction_weight_algorithm} and whose edges have been assigned weights. For a cycle $C$ in $G$, let $w_C(v)$ denote the sum of the weights assigned to the edges of $C$ incident with a vertex $v$ in $C$. Then the surplus weights of $G$ are redistributable if and only if $w_C(v) \in \{-2,0,2\}$ for all cycles $C$ in $G$ and vertices $v$ in $C$, and for every path $P = v_1v_2\dots v_k$ in $C$ with $w_C(v_1),w_C(v_k) = \pm 2$ and $w_C(v_i) = 0$ for all $1 < i < k$, $k$ is even if $w_C(v_1) = w_C(v_k)$ and odd if $w_C(v_1) = -w_C(v_k)$. \end{lemma} \begin{proof} First, assume that the surplus weights of $G$ are redistributable. From Lemma \ref{common_cycle_class}, the surplus weights of $G$ can be redistributed within the common cycle classes of $G$. $G$ is a cactus graph, which means that the common cycle classes of $G$ are the cycles of $G$. Let $C$ be a cycle of $G$. As any redistribution of the surplus weights involves all edges being assigned weights of $\pm 1$, this means that $w_C(v) \in \{-2,0,2\}$ for any vertex $v$ in $C$. If there is a path $P = v_1v_2\dots v_k$ in $C$ such that $w_C(v_1),w_C(v_k) = \pm 2$ and $w_C(v_i) = 0$ for all $1 < i < k$, this means that each such $v_i$ is incident with one edge of each colour in $C$ in any difference-1 colouring, while $v_1$ and $v_k$ are incident with two edges of the same colour. If $v_1v_2$ is the same colour as $v_{k-1}v_k$, then $k$ must be even and $w_C(v_1) = w_C(v_k)$. If $v_1v_2$ is the opposite colour to $v_{k-1}v_k$, then $k$ must be odd and $w_C(v_1) = -w_C(v_k)$. \* Now assume $w_C(v) \in \{-2,0,2\}$ for all cycles $C$ in $G$ and vertices $v$ in $C$, and for every path $P = v_1v_2\dots v_k$ in $C$ with $w_C(v_1),w_C(v_k) = \pm 2$ and $w_C(v_i) = 0$ for all $1 < i < k$, $k$ is even if $w_C(v_1) = w_C(v_k)$ and odd if $w_C(v_1) = -w_C(v_k)$.
We can redistribute the weights incident with $v_1$ and $v_k$ by assigning a weight of $x = \frac{w_C(v_1)}{2}$ to $v_1v_2$, a weight of $-x$ to $v_2v_3$, \dots, and a weight of $(-1)^kx$ to $v_{k-1}v_k$. As $k$ is even if $w_C(v_1) = w_C(v_k)$ and odd if $w_C(v_1) = -w_C(v_k)$, we have $(-1)^kx = \frac{w_C(v_k)}{2}$, so the weights at $v_k$ are also correctly redistributed. Therefore the surplus weights of $G$ are redistributable. \end{proof} To determine in general when surplus weights can be redistributed in a graph $G$ that passes Algorithm \ref{junction_weight_algorithm} and whose edges have been assigned weights, let ${G'}$ be a subgraph of $G$ consisting of all edges from one common cycle class of $G$ and their incident vertices. For each vertex ${v}$ of ${G'}$, let $r(v)$ denote the number of edges of ${G'}$ incident with ${v}$ which are required to be red in any difference-1 colouring of ${G}$ (as determined by Algorithm \ref{junction_weight_algorithm}). Therefore ${r(v)}$ is ${0}$, ${1}$, or ${2}$, if ${v}$ is a twig-type, leaf-type, or triple-type vertex, respectively, and ${r(v) = \frac{1}{2}\big(deg_{G'}(v) - \sum_{uv \in E(G')} x(uv)\big)}$ if ${v}$ is a junction. If we can find a subgraph ${H}$ of ${G'}$ for which ${deg_H(v) = r(v)}$ for each vertex ${v}$ in ${G'}$, then an edge colouring of ${G'}$ where all edges of ${H}$ are coloured red and the remaining edges are coloured blue is consistent with a difference-1 colouring of ${G}$. Therefore, in order to determine if all surplus weights of a graph are redistributable, we need to find, for each common cycle class $G'$, a subgraph ${H}$ of $G'$ such that ${deg_H(v) = r(v)}$ for all vertices ${v}$ of ${G'}$. The following theorem tells us when this is possible. \begin{theorem}\label{multimatching} Let ${G}$ be a bipartite graph with bipartition ${(P_1, P_2)}$, and for each vertex ${v}$ of ${G}$, let ${r(v)}$ be an integer value in the range ${0}$ to ${deg_G(v)}$ such that $\sum_{v \in P_1} r(v) \leq \sum_{v \in P_2} r(v)$.
Then ${G}$ has a subgraph ${H}$ with ${deg_H(v) = r(v)}$ for every $v \in P_1$ and ${deg_H(v) \leq r(v)}$ for every $v \in P_2$ if and only if every subset ${S}$ of ${P_1}$ in ${G}$ satisfies \[\sum_{v \in S} r(v) \leq \sum_{n \in \Gamma(S)} \min\{r(n), |\Gamma(n) \cap S|\}\text{,}\] where ${\Gamma(S)}$ is the set of neighbours of elements of ${S}$ in ${G}$, and ${\Gamma(n)}$ is the set of neighbours of ${n}$.\end{theorem} Note that the problem of redistributability is precisely the case where $\sum_{v \in P_1} r(v) = \sum_{v \in P_2} r(v)$, as we require $deg_H(v) = r(v)$ for all vertices $v$ in $P_2$ as well as for all $v$ in $P_1$. Therefore the surplus weights in a common cycle class $G'$ with bipartition $(P_1,P_2)$ are redistributable if and only if every subset ${S}$ of ${P_1}$ in ${G'}$ satisfies \[\sum_{v \in S} r(v) \leq \sum_{n \in \Gamma(S)} \min\{r(n), |\Gamma(n) \cap S|\}\text{,}\] where ${r(v)=0}$ if ${v}$ is a twig-type vertex, ${r(v)=1}$ if ${v}$ is a leaf-type vertex, ${r(v)=2}$ if ${v}$ is a triple-type vertex, and \[r(v) = \frac{1}{2}\big(deg_{G'}(v) - \sum_{uv \in E(G')} x(uv)\big)\] if ${v}$ is a junction. \* Before we prove this result, we recall some concepts relating to \emph{flow networks}. For our purposes, a flow network is a directed graph $G$ in which every arc is assigned a positive integer weight, called its capacity. The network has exactly one vertex $s$ of indegree zero and positive outdegree, called the source, and exactly one vertex $t$ of outdegree zero and positive indegree, called the sink. A \emph{flow} is an assignment to every arc of a non-negative integer at most equal to its capacity, with the property that for every vertex $v \not \in \{s, t\}$, the total of the flows on arcs directed into $v$ is equal to the total of the flows on arcs directed out of $v$. The total flow is the sum of the flows on all arcs directed out of $s$.
The maximum flow of $G$ is the maximum possible total flow over all flows that can be assigned to $G$ (for a fixed assignment of capacities). \* A \emph{cut} of a flow network ${G}$ is a partition ${(X,Y)}$ of the vertex set of ${G}$ such that ${s \in X}$ and ${t \in Y}$. The \emph{cut capacity} of a cut ${(X,Y)}$ is the sum of the capacities of all arcs $(u,v)$ from a vertex $u$ to a vertex $v$ such that ${u \in X, v \in Y}$. We now prove Theorem \ref{multimatching} by interpreting $G$ as a flow network and applying the \emph{Max-Flow Min-Cut Theorem} \cite{maxminbib}, which states that the maximum flow in a flow network is bounded above by every cut capacity and is equal to the minimum cut capacity. \begin{proof} We construct a directed graph ${G^*}$, with ${V(G^*) = V(G) \cup \{s,t\}}$. The edge set of $G^*$ consists of \begin{itemize} \item Arcs $(s,u)$, for all $u \in P_1$; \item Arcs $(u,v)$, where $u \in P_1$ and $v \in P_2$, for all $uv \in E(G)$; \item Arcs $(v,t)$, for all $v \in P_2$. \end{itemize} We define a flow network structure on $G^*$ by assigning the capacity $r(u)$ to each arc of the form $(s,u)$, $1$ to each arc from $P_1$ to $P_2$, and $r(v)$ to each arc of the form $(v,t)$. The assignment of a flow to $G^*$ determines a subgraph $H$ of $G$ whose edges are the pairs $uv$ for which the arc $(u,v)$ of $G^*$ has flow $1$. \* For a vertex $u \in P_1$, the degree of $u$ in $H$ is equal to the flow assigned to $(s,u)$ in $G^*$. Therefore we can find a subgraph $H$ of $G$ in which every vertex $u \in P_1$ has degree $r(u)$ (and every $v \in P_2$ has degree at most $r(v)$) if and only if we can assign a flow to $G^*$ such that the total flow is equal to $\sum_{v \in P_1} r(v)$. As this desired total flow is equal to the sum of the capacities of all the arcs out of $s$, this is the maximum flow that we could possibly achieve.
Therefore $G$ has a subgraph $H$ with the required properties if and only if, for $G^*$, \[\text{max flow} = \sum_{v \in P_1} r(v) \text{.}\] From the \emph{Max-Flow Min-Cut Theorem}, this is equivalent to the condition that, for every cut, \begin{equation} \label{max-min-eq} \text{cut capacity} \geq \sum_{v \in P_1} r(v) \text{.}\end{equation} To complete the proof of Theorem \ref{multimatching}, we need to show that \eqref{max-min-eq} is equivalent to \[\sum_{v \in S} r(v) \leq \sum_{n \in \Gamma(S)} \min\{r(n), |\Gamma(n) \cap S|\} \text{,}\] for every subset $S$ of $P_1$. \* Let ${S}$ be a subset of ${P_1}$, and choose a cut ${(X,Y)}$ such that ${S = P_1 \cap X}$. We now consider the arcs that contribute to the cut capacity: \begin{itemize} \item If a vertex $v$ in $P_1$ is in $Y$, then the arc $(s,v)$ is from $X$ to $Y$ and has capacity ${r(v)}$. \item If a vertex $v$ in $P_2$ is in $Y$, then any arc $(u,v)$ where $u \in S$ is from $X$ to $Y$. Each such arc has capacity $1$, and therefore the sum of these capacities for a given $v$ is equal to the number of neighbours of $v$ in ${S}$. \item If a vertex $v$ in ${P_2}$ is in $X$, then the arc $(v,t)$ is from $X$ to $Y$ and has capacity $r(v)$. \end{itemize} We therefore have the following: \[\text{cut capacity} = \sum_{v \in P_1 \cap Y} r(v) + \sum_{v \in P_2} f(v), \;\;\;\; \text{where } f(v) = \begin{cases} r(v) & \text{if } v \in P_2 \cap X \\ |\Gamma(v) \cap S| & \text{if } v \in P_2 \cap Y \end{cases}\] Using this and \eqref{max-min-eq}, our condition for the existence of the required subgraph $H$ becomes \[\sum_{v \in P_1} r(v) \leq \sum_{v \in P_1 \cap Y} r(v) + \sum_{v \in P_2} f(v)\text{,}\] or equivalently, \[\sum_{v \in S} r(v) \leq \sum_{v \in P_2} f(v) \text{,}\] for every subset $S$ of $P_1$ and every choice of cut ${(X,Y)}$ with ${S = P_1 \cap X}$.
\* The function $f$ depends on the choice of cut, and $\sum_{v \in P_2} f(v)$ is minimized by the cut $(X',Y')$ where \[X' = S \cup \{v \in P_2 : r(v) \leq |\Gamma(v) \cap S|\}\text{.}\] Therefore the condition for the existence of $H$ becomes the following. For all $S \subset P_1$, \[\sum_{v \in S} r(v) \leq \sum_{n \in \Gamma(S)} \min\{r(n), |\Gamma(n) \cap S|\}\text{.}\] \end{proof} As the proof shows, Theorem \ref{multimatching} is a special case of the \emph{Max-Flow Min-Cut Theorem}. It is also true that \emph{Hall's Matching Theorem} \cite{hallmatchingbib} is a special case of Theorem \ref{multimatching}. A \emph{matching} in a bipartite graph is a subgraph in which every vertex of the smaller part of the bipartition has degree ${1}$ and no vertex has degree exceeding ${1}$. \begin{theorem}[Hall's Matching Theorem] Let ${G}$ be a bipartite graph with bipartition ${(P_1, P_2)}$ such that $|P_1| \leq |P_2|$. Then ${G}$ has a matching ${H}$ if and only if every ${S \subset P_1}$ in ${G}$ satisfies \[|S| \leq |\Gamma(S)| \text{,}\] where ${\Gamma(S)}$ is the set of neighbours of ${S}$ in ${G}$.\end{theorem} We note that this is the special case of Theorem \ref{multimatching} obtained by taking ${r(v) = 1}$ for all vertices ${v}$ in the graph. In this case, ${\sum_{v \in S} r(v) = |S|}$, and ${\sum_{n \in \Gamma(S)} \min\{r(n), |\Gamma(n) \cap S|\} = |\Gamma(S)|}$. \* We now have the following full set of necessary and sufficient conditions for a graph to have a difference-1 colouring, which follows from Lemma \ref{local-tree-lemma}, Lemma \ref{odd-even-distance-lemma}, Lemma \ref{algorithm_lemma}, Lemma \ref{common_cycle_class}, and Theorem \ref{multimatching}. \begin{theorem}\label{general-colourability} Let ${G}$ be a bipartite graph containing at least one cycle.
Then ${G}$ has a difference-1 colouring if and only if ${G}$ satisfies the following conditions: \begin{itemize} \item Each vertex in $Sk(G)$ is either a leaf-type, twig-type, or triple-type vertex, or a junction ${j}$ for which the local tree at $j$ in the reduced form of $G$ consists only of leaves or twigs attached to $j$ and which is assigned ${w(j) = 1}$ by Algorithm \ref{junction_weight_algorithm}; \item For each path ${P = v_1v_2 \dots v_k}$ in ${Sk(G)}$ where ${v_1}$ and ${v_k}$ are both non-leaf-type vertices, and ${v_2, \dots, v_{k-1}}$ are all leaf-type vertices, ${P}$ has odd (even) length if ${v_1}$ and ${v_k}$ are the same (opposite) type; \item Surplus weights in each common cycle class of ${G}$ are redistributable. \end{itemize} \end{theorem} Recall from Theorem \ref{ced_difference-1_is_asbg} that if a cactus graph ${G}$ has a difference-1 colouring ${c}$, then $G^c$ is configurable. Therefore Theorem \ref{general-colourability} determines when a cactus graph is ASBG-colourable. The problem of determining necessary and sufficient conditions for configurability of difference-1 colourings for wider classes of graphs is an ongoing topic of investigation. \section{Uniqueness and Difference-k Colourings} In this last section, we examine what it means for a graph to have more than one distinct difference-1 colouring, and how these difference-1 colourings can differ from one another. We also explore generalising the concept of a difference-1 colouring to a \emph{difference-$k$} colouring. \subsection{Uniqueness of Difference-1 Colourings} \begin{definition} We say that a graph ${G}$ has a \emph{unique colouring} if there is only one difference-1 colouring of ${G}$. \end{definition} It is possible that a graph has multiple difference-1 colourings, some of which are configurable and some of which are not.
\begin{example} The following graph has two distinct difference-1 colourings, only one of which is configurable: \includegraphics[width = \textwidth]{alternating1.png} \end{example} \begin{definition} An \emph{alternating cycle} is a cycle in a graph which has been coloured such that each vertex of the cycle is incident with two edges of the cycle of opposite colours. \end{definition} \begin{definition} The \emph{rotation} of an alternating cycle ${C^c}$ is the coloured cycle ${C^d}$ in which every edge of ${C^d}$ has the opposite colour to the corresponding edge of ${C^c}$. \end{definition} \begin{theorem} Any two distinct difference-1 colourings of a graph ${G}$ differ only by rotations of edge-disjoint alternating cycles. \end{theorem} \begin{proof} Let ${G}$ be a graph with two distinct difference-1 colourings ${c}$ and ${d}$, and let ${H}$ be the subgraph of ${G^c}$ containing only the edges which differ in colour from those in ${G^d}$, and only the vertices which are incident with these edges. Since both $c$ and $d$ are difference-1 colourings, every vertex of $H$ is incident with the same number of red and blue edges, and so every vertex of $H$ has even degree. We may start at any edge of $H$ and construct a trail consisting of edges of alternating colours until a repeated vertex $v$ is encountered for the first time. This completes an alternating cycle from $v$ to itself, since every cycle in $G$ has even length. Removal of this cycle from $H$ results in another graph in which every vertex is incident with an equal number of blue and red edges. So we can remove alternating cycles until no edges remain, and therefore two distinct difference-1 colourings of a graph ${G}$ differ only by rotations of edge-disjoint alternating cycles. \end{proof} \begin{corollary} A difference-1 colouring of a tree ${T}$ is the unique ASBG-colouring of ${T}$.
\end{corollary} \subsection{Difference-k Colourings} The definition and exploration of difference-1 colourings leads very naturally to the more general definition of a \emph{difference-$k$} colouring. \begin{definition} \label{dif-k_def} A 2-edge colouring $c$ of a graph ${G}$ is a \emph{difference-$k$ colouring}, $k \in \mathbb{N}_0$, of $G$ if ${G}$ is bipartite and $G^c$ satisfies $deg^B(v) - deg^R(v) = k$ at every vertex $v$. \end{definition} We propose the problem of determining when a graph admits a difference-$k$ colouring, and note some preliminary observations in the bipartite case. The following is a characterisation of bipartite graphs with a difference-0 colouring. \begin{theorem} A bipartite graph $G$ has a difference-0 colouring if and only if every vertex of $G$ has even degree. \end{theorem} \begin{proof} First, assume that $G$ has a difference-0 colouring. Then $deg^B(v) - deg^R(v) = 0 \implies deg^B(v) = deg^R(v) \implies deg(v) = 2deg^B(v)$. So every vertex of $G$ has even degree. \* Now assume that every vertex of $G$ has even degree. Then the edges of $G$ can be decomposed into edge-disjoint cycles. Since $G$ is bipartite, each of these cycles has even length, so we may colour the edges of each cycle alternately, making every cycle an alternating cycle. Therefore $deg^B(v) = deg^R(v)$ at each vertex $v$, and $G$ has a difference-0 colouring. \end{proof} We can use Theorem \ref{multimatching} to give a general characterisation of the bipartite graphs with difference-$k$ colourings, for a given $k$, as follows.
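As an aside, the subset condition of Theorem \ref{multimatching}, which reappears below, can be checked directly by brute force on small instances. The following Python sketch is illustrative only; the graph encoding and function name are our own.

```python
from itertools import combinations

def satisfies_condition(P1, P2, edges, r):
    """Brute-force check of the subset condition of Theorem (multimatching):
    for every S subset of P1,
        sum_{v in S} r(v) <= sum_{n in Gamma(S)} min(r(n), |Gamma(n) & S|).
    P1, P2: lists of vertices; edges: list of (u, v) pairs with u in P1,
    v in P2; r: dict mapping each vertex to its required degree."""
    nbrs = {v: set() for v in P1 + P2}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    for k in range(1, len(P1) + 1):
        for S in combinations(P1, k):
            S = set(S)
            gamma_S = set().union(*(nbrs[v] for v in S))
            lhs = sum(r[v] for v in S)
            rhs = sum(min(r[n], len(nbrs[n] & S)) for n in gamma_S)
            if lhs > rhs:
                return False
    return True

# With r identically 1 this is exactly Hall's condition for matching P1 into P2.
P1, P2 = ["a", "b"], ["x", "y"]
r1 = {v: 1 for v in P1 + P2}
assert satisfies_condition(P1, P2, [("a", "x"), ("b", "x"), ("b", "y")], r1)
assert not satisfies_condition(P1, P2, [("a", "x"), ("b", "x")], r1)
```

The exhaustive loop over subsets is exponential in $|P_1|$, which is why, for $k=1$, the combination of Algorithm \ref{junction_weight_algorithm} and common cycle classes is preferable in practice.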
\begin{theorem}\label{dif-k} A bipartite graph $G$ with bipartition $(P_1,P_2)$ has a difference-$k$ colouring if and only if every $S \subset P_1$ satisfies \[\sum_{v \in S} \frac{deg(v)-k}{2} \leq \sum_{n \in \Gamma(S)} \min\Big\{\frac{deg(n)-k}{2}, |\Gamma(n) \cap S|\Big\}\text{,}\] where ${\Gamma(S)}$ is the set of neighbours of elements of ${S}$ in ${G}$, and ${\Gamma(n)}$ is the set of neighbours of ${n}$.\end{theorem} \begin{proof} A bipartite graph $G$ having a difference-$k$ colouring is equivalent to $G$ having a subgraph $H$ in which every vertex $v$ of $H$ satisfies $deg_H(v) = \frac{deg_G(v)-k}{2}$. This is because if we find such a subgraph $H$, then the colouring $c$ of the edges of $G$ in which all edges of $H^c$ are red and all edges of $(G \setminus H)^c$ are blue is a difference-$k$ colouring of $G$. From Theorem \ref{multimatching}, we know that this is equivalent to every subset $S$ of $P_1$ satisfying \[\sum_{v \in S} \frac{deg(v)-k}{2} \leq \sum_{n \in \Gamma(S)} \min\Big\{\frac{deg(n)-k}{2}, |\Gamma(n) \cap S|\Big\}\text{.}\] \end{proof} While Theorem \ref{dif-k} arises directly from Theorem \ref{multimatching}, and can be applied to the case of difference-1 colourings, its practical utility is limited. Algorithm \ref{junction_weight_algorithm} and the partitioning of the edge set into common cycle classes simplify the analysis considerably in the case $k=1$. At present, we have no version of these approaches for difference-$k$ colourings in general. The problem of determining when a non-bipartite graph admits a difference-$k$ colouring remains largely unexplored, even in the case $k=1$. \end{document}
\begin{document} \title{Local Coupling Property for Markov Processes with Applications to L\'evy Processes} \author{Kasra Alishahi\\ Sharif University of Technology\\ Erfan Salavati\\ Faculty of Mathematics and Computer Science,\\ Amirkabir University of Technology (Tehran Polytechnic),\\ P.O. Box 15875-4413, Tehran, Iran.} \date{} \maketitle MSC: 60J25, 60G51, 60E07. \begin{abstract} In this article, we define the new concept of the local coupling property for Markov processes and study its relationship with distributional properties of the transition probability. In the special case of L\'evy processes, we show that this property is equivalent to the absolute continuity of the transition probability, and we also provide a sufficient condition for it in terms of the L\'evy measure. Our result is stronger than existing results for absolute continuity of L\'evy distributions. \end{abstract} \section{Introduction} The coupling method is a very powerful tool for studying properties of stochastic processes. In the case of Markov processes, this method is used to prove convergence to the stationary measure. The method can also be used to establish distributional properties of the transition probability measure of a Markov process. A coupling between two stochastic objects means a joint distribution whose marginal distributions are those of the objects. For two stochastic processes, we are interested in couplings in which the two processes eventually coincide; such couplings are called successful couplings. \section{Coupling property and its equivalent statements} Let $\mathbb{R}^+=[0,\infty)$, let $\{X_t\}_{t\in\mathbb{R}^+}$ be a continuous-time Markov process on a metric space $(E,\mathcal{E})$, and let $P^t(x,dy)$ be its transition probability measure. $\mathcal{B}(E)$ and $\mathcal{C}(E)$ denote the spaces of, respectively, bounded and continuous real-valued functions on $E$.
For each $f\in\mathcal{B}(E)$ we define \[ P^t f(x) = \int_E f(y) P^t(x,dy) \] A function $u(t,x)$ is called space-time harmonic if for all $t,s>0$, \[ u(s,x) = P^t u(s+t,.) (x) = \int_E u(s+t,y) P^t(x,dy) \] For any $x\in E$, let $\mathbb{P}_x(X\in .)$ be the probability measure on $(E^{\mathbb{R}^+},\mathcal{E}^{\mathbb{R}^+})$ induced by the Markov process starting at $x$. For any probability measure $\mu$ on $E$, let $\mathbb{P}_\mu(.)$ be the probability measure induced by the process starting with initial distribution $\mu$. In other words, \[ \mathbb{P}_\mu(.) = \int \mathbb{P}_x(.) d\mu(x) \] For any $t\ge 0$, the shift operator $\theta_t$ on $(E^{\mathbb{R}^+},\mathcal{E}^{\mathbb{R}^+})$ is defined by \[ \theta_t X (.) = X(t+.) \] \begin{definition}[Local Coupling Property] The Markov process $X_t$ is said to have the local coupling property if for any $x\in E$ and $\epsilon>0$, there exists $\delta>0$ such that for any $y$ with $d(y,x)<\delta$, there exists a coupling $(X_t,Y_t)$ between $\mathbb{P}_x$ and $\mathbb{P}_y$ with the property that for \[ T=T_{x,y}=\inf\{t\ge 0: X_t=Y_t\} \] we have $P(T>\epsilon)<\epsilon$. \end{definition} Without loss of generality, we can assume that $X_t=Y_t$ for $t\ge T$, since the two processes can be made to move together after time $T$. By $\|\mu\|$ we mean the total variation norm of the signed measure $\mu$. We are now ready to state and prove the main theorem of this section. \begin{theorem}\label{thm:Markov_main} For a Markov process $X_t$ on a Polish space $E$, the following statements are equivalent: \begin{description} \item[(i)] $X$ has the local coupling property.
\item[(ii)] For any $x\in E$ and $t>0$, \[ \lim_{y\to x} \|P^t(y,.)-P^t(x,.)\| = 0 \] \item[(iii)] For any $x\in E$ and any $t>0$, \[ \lim_{y\to x} \|\mathbb{P}_y (\theta_t X\in \cdot)-\mathbb{P}_x (\theta_t X\in \cdot)\| = 0 \] \end{description} \end{theorem} \begin{proof} \begin{description} \item[(i) $\implies$ (ii)] Assume $t>0$ is given and let $(X_t,Y_t)$ be a coupling of $\mathbb{P}_x$ and $\mathbb{P}_y$ which satisfies the definition of the local coupling property for an $\epsilon<t$. By the coupling inequality, \begin{eqnarray*} \|P^t(y,.)-P^t(x,.)\| & \le & 2\,\mathbb{P}(X_t\ne Y_t)\\ &=& 2\,\mathbb{P}(T >t) \le 2\,\mathbb{P}(T >\epsilon) \le 2\epsilon \end{eqnarray*} Letting $\epsilon\to 0$, the statement follows. \item[(ii) $\Leftrightarrow$ (iii)] Let $\mu=P^t(x,.)$ and $\nu=P^t(y,.)$. By the Markov property, \[ \mathbb{P}_y (\theta_t X\in .) - \mathbb{P}_x(\theta_t X\in .) = \mathbb{P}_\nu (.) - \mathbb{P}_\mu(.) \] Now let $\rho=\mu+\nu$ and let $g_\mu=\frac{d\mu}{d\rho}$ and $g_\nu=\frac{d\nu}{d\rho}$ be the Radon-Nikodym derivatives of $\mu$ and $\nu$ with respect to $\rho$. Hence we have, for every measurable set of paths, \[ \mathbb{P}_\nu (.) - \mathbb{P}_\mu(.) = \int \mathbb{P}_x (.) g_\nu(x)d\rho(x) - \int \mathbb{P}_x (.) g_\mu(x) d\rho(x)\] \[ \le \int_{g_\nu\ge g_\mu} (g_\nu - g_\mu) d \rho = \frac{1}{2} \|\nu-\mu\| \] which implies $\|\mathbb{P}_\nu - \mathbb{P}_\mu \| \le \|\nu-\mu \|$. The converse inequality $\|\nu-\mu\| \le \|\mathbb{P}_\nu - \mathbb{P}_\mu\|$ holds because $P^t(x,\cdot)$ is the image of $\mathbb{P}_x(\theta_t X\in\cdot)$ under the projection at time $0$. \item[(iii) $\implies$ (i)] Given $x$ and $\epsilon>0$, there exists $\delta>0$ such that for any $y$ with $d(y,x)<\delta$, \[ \|\mathbb{P}_y(\theta_\epsilon X\in .) - \mathbb{P}_x (\theta_\epsilon X \in .) \| <2 \epsilon \] Consider the maximal coupling between these two measures and denote it by $(\{X_x(t)\}_{t\ge \epsilon},\{X_y(t)\}_{t\ge \epsilon})$. Using regular conditional probabilities, we extend this coupling to $0\le t <\epsilon$, following the machinery used in \cite{Lindvall}, section 15.
Note that by the Polish assumption, there exist regular versions of the conditional probabilities \[ \mathbb{P}_x(X\in .| \theta_\epsilon X =Z) , \quad \mathbb{P}_y(X\in .| \theta_\epsilon X =Z) \] which we denote by the transition kernels $K_x(Z,.)$ and $K_y(Z,.)$ from $E^{\mathbb{R}^+}$ to $E^{\mathbb{R}^+}$. Now define a probability measure on $E^{\mathbb{R}^+}\times E^{\mathbb{R}^+}$ by \[ \tilde{\mathbb{P}} (A\times B)= \mathbb{E}( K_x(\theta_\epsilon X_x,A) K_y(\theta_\epsilon X_y,B) ) \] This extends to a coupling of $\mathbb{P}_x$ and $\mathbb{P}_y$. Now we have \[ \mathbb{P}(T>\epsilon) \le \mathbb{P}(X_x(\epsilon)\ne X_y(\epsilon)) \le \frac{1}{2}\|\mathbb{P}_y(\theta_\epsilon X\in .) - \mathbb{P}_x(\theta_\epsilon X\in .) \| <\epsilon \] \end{description} \end{proof} \begin{proposition} If $X_t$ is a Markov process with the local coupling property then \begin{description} \item[(i)] Every bounded space-time harmonic function is continuous in $x$. \item[(ii)] For any $t>0$ and any $f\in \mathcal{B}(E)$, we have $P^t f\in \mathcal{C}(E)$. \end{description} \end{proposition} \begin{proof} \begin{description} \item[(i)] Let $u(t,x)$ be a bounded space-time harmonic function, fix $s>0$, and choose $t>0$ arbitrarily. We have \[ |u(s,y)-u(s,x)| = |\int u(s+t,z) P^t(y,dz) - \int u(s+t,z) P^t(x,dz)| \le \|u(s+t,.)\| . \|P^t(y,.)-P^t(x,.)\| \to 0\] as $y\to x$. \item[(ii)] \[ |P^t f(y)-P^t f(x)| = |\int f(z) P^t(y,dz) - \int f(z) P^t(x,dz)| \le \|f\| . \|P^t(y,.)-P^t(x,.)\| \to 0\] as $y\to x$. \end{description} \end{proof} \section{Local Coupling Property for L\'evy Processes} In this section we show that for L\'evy processes, the local coupling property is equivalent to absolute continuity of the transition probability measure with respect to the Lebesgue measure. We also provide a sufficient condition for the local coupling property in terms of the L\'evy measure. We first prove a lemma.
\begin{lemma}\label{lem:absolute_cont} Let $\mu$ be a finite Borel measure on $\mathbb{R}$. For $a\in \mathbb{R}$ let $\mu_a$ be the translation of $\mu$ by $a$. Then $\mu$ is absolutely continuous if and only if \[ \lim_{a\to 0} \| \mu_a - \mu \| = 0\] \end{lemma} \begin{proof} Denote the Lebesgue measure on $\mathbb{R}$ by $\lambda$. To prove the if part, assume $\lambda(A)=0$. Then for any $x\in\mathbb{R}$, $\lambda(A+x)=0$ and therefore \begin{multline}\label{equation:proof of lemma_Fubini} 0 = \int_\mathbb{R} \lambda(A+x) d\mu(x) = \int_\mathbb{R} \int_\mathbb{R} 1_A(x+y) d\lambda(y) d\mu(x) \\ = \int_\mathbb{R} \int_\mathbb{R} 1_A(x+y) d\mu(x) d\lambda(y) = \int_\mathbb{R} \mu_y(A) d\lambda(y) \end{multline} On the other hand, by assumption, the function $y\mapsto \mu_y(A)$ is continuous at $y=0$; since it is nonnegative and, by \eqref{equation:proof of lemma_Fubini}, vanishes for $\lambda$-almost every $y$, we get $\mu(A)=\mu_0(A) = 0$. To prove the only if part, note that if $\mu \ll \lambda$, it follows from the Radon-Nikodym theorem that $d\mu(x) = f(x) d\lambda(x)$ for some $f\in L^1(\mathbb{R})$, and therefore $d\mu_a(x) = f(x+a) d\lambda(x)$, which implies \[ \| \mu_a - \mu \| = \| f(a+.)-f(.)\|_{L^1} \] and the right-hand side tends to 0 as $a\to 0$ by continuity of translation in $L^1$. \end{proof} \begin{remark} The special case where $\mu$ is the transition probability of a L\'evy process has been proved in~\cite{Hawkes}. \end{remark} \begin{theorem}\label{thm:Levy_absol} The L\'evy process $X_t$ has the local coupling property if and only if its transition probability measure is absolutely continuous for any $t>0$. \end{theorem} \begin{proof} By Theorem~\ref{thm:Markov_main} the local coupling property is equivalent to \begin{equation}\label{eq1} \lim_{a\to 0} \|P^t(x+a,.)-P^t(x,.)\| = 0, \quad \forall t>0 \end{equation} On the other hand, for L\'evy processes \[ P^t(x+a,.)= P^t(x,a+.) \] and hence by Lemma~\ref{lem:absolute_cont}, equation \eqref{eq1} is equivalent to the absolute continuity of the transition probability measure.
\end{proof} We state two straightforward consequences of Theorem~\ref{thm:Levy_absol}. \begin{corollary} The Brownian motion has the local coupling property. \end{corollary} \begin{proof} Its transition probability measure is Gaussian, hence absolutely continuous. \end{proof} \begin{corollary}\label{cor:sum_ind_Levy} Let $X_t$ and $Y_t$ be two independent L\'evy processes and assume that $X_t$ has the local coupling property. Then so does $X_t+Y_t$. \end{corollary} \begin{proof} The transition probability of the sum is the convolution of the two transition probability measures. Hence if one of them is absolutely continuous, so is the sum. \end{proof} Now consider a general one-dimensional L\'evy process with L\'evy triplet $(b,\sigma,\nu)$. It is clear that $b$ does not play any role in the local coupling property. It also follows from the previous two corollaries that if $\sigma>0$ then the process has the local coupling property. Hence it remains to study the processes with triplets $(0,0,\nu)$. In what follows, we assume that $X_t$ is a L\'evy process with triplet $(0,0,\nu)$. We use the L\'evy-It\"o representation of $X_t$, \[ X_t = \int_{|x|\le 1} x\tilde{N}(t,dx) + \int_{|x|> 1} xN(t,dx)\] where $N$ is the Poisson random measure with intensity $dt\,\nu(dx)$ and $\tilde{N}$ is its compensated measure. The following theorem provides a sufficient condition for the local coupling property of $X_t$ in terms of its L\'evy measure $\nu$. Recall that the minimum of two measures $\mu$ and $\nu$, denoted by $\mu\wedge\nu$, is defined as \[ \mu\wedge\nu = \frac{1}{2} (\mu+\nu - |\mu - \nu |) \] where $|\mu-\nu|$ is the total variation measure of $\mu-\nu$. Now let $\bar{\nu}(dx)=\nu(-dx)$ and define $\rho=\nu\wedge\bar{\nu}$. Also define the auxiliary function $\eta(r)=\int_0^r x^2 \rho(dx)$. \begin{theorem}\label{thm:Levy_suffic} Assume that $\int_0^1 \frac{r}{\eta(r)} dr <\infty$. Then $X_t$ has the local coupling property.
\end{theorem} \begin{remark} In the special case where $\nu$ is symmetric, the above theorem has been implicitly proved in~\cite{Alishahi_Salavati}. \end{remark} In order to prove Theorem~\ref{thm:Levy_suffic} we construct appropriate couplings between two L\'evy processes $X_t$ and $Y_t$ with characteristics $(0,0,\nu)$, starting respectively from 0 and $a$. Without loss of generality we assume $a>0$. If we denote the distribution of $X_1$ by $\mu$, then the distribution of $Y_1$ is $\mu_a(dx)=\mu(a+dx)$. We denote the left limit of a cadlag process $X_t$ at $t$ by $X_{t-}$ and let $\Delta X_t = X_t - X_{t-}$. Note that it suffices to prove the theorem in the case where $\nu$ is supported in $[-1,1]$: by the L\'evy-It\"o representation, $X_t$ is the sum of the two independent L\'evy processes $ \int_{|x|\le 1} x\tilde{N}(t,dx)$ and $\int_{|x|> 1} xN(t,dx)$, and if the former has the local coupling property then by Corollary~\ref{cor:sum_ind_Levy} so does $X_t$. Hence from now on we assume that $\nu$ is supported in $[-1,1]$. Let $L_1(t)$ and $L_2(t)$ be two independent L\'evy processes with characteristics $(0,0,\nu-\frac{1}{2}\rho)$ and $(0,0,\frac{1}{2}\rho)$, and let $\tilde{N}_1(dt,du)$ and $\tilde{N}_2(dt,du)$ be the corresponding compensated Poisson random measures (cPrms). Let $\mathcal{F}_t$ be the filtration generated by $N_1$ and $N_2$ and define \[ X_t = L_1(t) + L_2(t) \] It is clear that $X_t$ is a L\'evy process with characteristics $(0,0,\nu)$.
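As an aside (a numeric sketch, not part of the proof), the hypothesis of Theorem~\ref{thm:Levy_suffic} can be checked in the model case of the symmetric $\alpha$-stable measure $\nu(dx)=|x|^{-1-\alpha}dx$: here $\bar{\nu}=\nu$, so $\rho=\nu$ and $\eta(r)=\int_0^r x^{1-\alpha}dx=r^{2-\alpha}/(2-\alpha)$, whence $\int_0^1 r/\eta(r)\,dr=(2-\alpha)/\alpha<\infty$ for every $\alpha\in(0,2)$.

```python
# Numeric sanity check (illustrative, not from the paper) of the
# hypothesis \int_0^1 r/eta(r) dr < infinity for the symmetric
# alpha-stable Levy measure nu(dx) = |x|^(-1-alpha) dx. Here
# rho = nu and eta(r) = \int_0^r x^2 rho(dx) = r^(2-alpha)/(2-alpha).
def eta(r, alpha):
    return r ** (2.0 - alpha) / (2.0 - alpha)

def integral_r_over_eta(alpha, n=200000):
    # midpoint rule on (0, 1]; the integrand is (2-alpha) * r^(alpha-1),
    # which is integrable at 0 for every alpha in (0, 2)
    h = 1.0 / n
    return sum(((k + 0.5) * h) / eta((k + 0.5) * h, alpha) * h
               for k in range(n))

# compare the quadrature with the closed form (2 - alpha)/alpha
for alpha in (0.5, 1.0, 1.5):
    print(alpha, integral_r_over_eta(alpha), (2.0 - alpha) / alpha)
```

So every symmetric $\alpha$-stable process, $0<\alpha<2$, satisfies the sufficient condition of the theorem.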
Now consider the following stochastic differential equation \begin{equation}\label{definition:Y_t} Y_t = a+ \int_0^t f(s,Y_{s-},\omega) dX_s \end{equation} where $f:\mathbb{R}^+\times \mathbb{R}\times \Omega \to \mathbb{R}$ is defined by \[f(s,y,\omega ) = \begin{cases} -1 & \text{if } \frac{|X_{s-} - y|}{2} \ge \left| \Delta L_2(s) \right| > 0, \\ 1 & \text{otherwise.} \end{cases}\] Notice that by equation~\eqref{definition:Y_t}, the jumps of $X$ and $Y$ occur at the same times and have the same magnitude, but possibly different directions. Note that the classical existence theorems for solutions of SDEs are not applicable to equation~\eqref{definition:Y_t}, since $f$ is not a Lipschitz (not even continuous) function of $y$. In order to prove the existence of a solution, we rewrite~\eqref{definition:Y_t} as an SDE with respect to cPrms as follows, \begin{equation*} Y_t = a + \int_0^t \int_\mathbb{R} u \tilde{N}_1(ds,du) + \int_0^t \int_\mathbb{R} g(s,Y_{s-},u,\omega) u \tilde{N}_2(ds,du) \end{equation*} where \[ g(s,y,u,\omega) = \chi_{\frac{|X_{s-}-y|}{2}<|u|} - \chi_{\frac{|X_{s-}-y|}{2}\ge |u|} \] Now we can use the existence result for such equations (see e.g. Applebaum~\cite{Applebaum}, Theorem 6.2.3). For that, we need to verify the linear growth and Lipschitz conditions for the coefficients. Note that both coefficients are almost surely bounded by $|u|$. Hence the Lipschitz condition reduces to $\int_\mathbb{R} |u|^2 \nu (du) < \infty$, which holds since $\nu$ is a L\'evy measure supported in $[-1,1]$. The linear growth condition is verified similarly. Hence the equation has a square-integrable cadlag solution. Now we claim that \begin{lemma} $Y_t$ is a L\'evy process with characteristics $(0,0,\nu)$ starting from $a$.
\end{lemma} \begin{proof} The idea is that $Y_t$ and $X_t$ have the same movements, except that $Y$ sometimes jumps in the opposite direction to $X$; notice that if we decompose $\nu$ as $(\nu-\rho)+\rho$, the jumps of $X$ that come from the $\rho$ part have a symmetric distribution, and at exactly these jumps $Y$ makes an opposite jump with probability $\frac{1}{2}$. In order to give a rigorous proof, we calculate the conditional characteristic function of $Y_t$, i.e. $\mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s)$. We have \[ Y_t = a + \int_0^t \int_\mathbb{R} u \tilde{N}_1(ds,du) + \int_0^t \int_\mathbb{R} g(s,Y_{s-},u,\omega) u \tilde{N}_2(ds,du) \] We write It\^o's formula for $e^{i\xi Y_t}$, \begin{multline*} e^{i\xi Y_t} - e^{i\xi Y_s} = \int_s^t \int_\mathbb{R} \left( e^{i\xi Y_{r-} + i\xi u} - e^{i \xi Y_{r-}}\right) \tilde{N}_1(dr , du)\\ + \int_s^t \int_\mathbb{R} \left( e^{i\xi Y_{r-} + i\xi g(r,Y_{r-},u) u} - e^{i \xi Y_{r-}}\right) \tilde{N}_2(dr , du)\\ + \int_s^t \int_\mathbb{R} e^{i\xi Y_{r-}} [e^{i\xi u}-1- i\xi u] (\nu-\frac{1}{2}\rho)(du)dr\\ + \int_s^t \int_\mathbb{R} e^{i\xi Y_{r-}} [e^{i \xi g(r,Y_{r-},u) u} - 1 - i\xi g(r,Y_{r-},u) u]\frac{1}{2}\rho(du)dr \end{multline*} Taking expectations conditioned on $\mathcal{F}_s$ and noting that the first two integrals are martingales, we find \begin{multline} \label{eq:proof_characteristic_1} \mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s) - e^{i\xi Y_s} = \int_s^t \mathbb{E} \bigg( e^{i\xi Y_{r-}} \int_\mathbb{R} \Big( [e^{i\xi u}-1- i\xi u](\nu-\frac{1}{2}\rho)(du)\\ + [e^{i \xi g(r,Y_{r-},u) u} - 1 - i\xi g(r,Y_{r-},u) u] \frac{1}{2}\rho(du) \Big) |\mathcal{F}_s \bigg) dr \end{multline} Now note that $\rho$ is a symmetric measure and, for each $\omega$, the function $u\mapsto g(r,Y_{r-},u,\omega)$ is an even function with values $\pm 1$; hence we have \[ \int_\mathbb{R} \left( e^{i \xi g(r,Y_{r-},u) u} - 1 - i\xi g(r,Y_{r-},u) u \right) \rho(du) = \int_\mathbb{R} \left( e^{
i\xi u}-1 -i\xi u \right) \rho(du) \] Substituting in~\eqref{eq:proof_characteristic_1} we find, \begin{multline*} \mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s) - e^{i\xi Y_s} = \int_s^t \mathbb{E} \bigg( e^{i\xi Y_{r-}} \int_\mathbb{R} [e^{i\xi u}-1- i\xi u] \nu(du) |\mathcal{F}_s \bigg) dr \\ =\psi(\xi) \int_s^t \mathbb{E} \left( e^{i\xi Y_{r-}} |\mathcal{F}_s \right)dr \end{multline*} where $\psi(\xi)=\int_\mathbb{R} [e^{i\xi u}-1- i\xi u] \nu(du)$. Note that $\psi(\xi)$ is indeed the characteristic exponent of $X$. Hence if we define $h(t)=\mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s)$, then $h$ satisfies the ordinary differential equation $h^\prime(t)=\psi(\xi)h(t)$. This ODE has the unique solution \[ \mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s) = e^{i\xi Y_s} e^{(t-s)\psi(\xi)} .\] The last equality readily implies that $Y$ is a L\'evy process with characteristics $(0,0,\nu)$. \end{proof} Now let \[ Z_t = Y_t - X_t \] By subtracting the integral representations of $X_t$ and $Y_t$, we find that $Z_t$ satisfies the following SDE, \[ Z_t = -2 \int_0^t \int_\mathbb{R} \chi_{\frac{|Z_{s-}|}{2}\ge |u|} u \tilde{N}_2(ds,du) \] It is clear from this equation that the jumps of $Z_t$ always have magnitude at most $|Z_{t-}|$. Hence, since $Z_0=a>0$, $Z_t$ is always non-negative. Now we define two stopping times \[ \tau_a = \inf \{ t: Z_t = 0 \}, \] \[ \bar{\tau}_a = \inf \{ t: Z_t \notin (0,1) \}. \] Since the jumps of $Z_t$ satisfy $| \Delta Z_s| \le Z_{s-}$, it is clear that $Z_{\bar{\tau}_a} \le 2$. Note also that since $\rho$ is symmetric, the jumps of $Z_t$ have a symmetric distribution. \begin{lemma}\label{lemma:limit} \[ \lim_{t\to\infty} Z_t =0,\quad a.s. \] \end{lemma} \begin{proof} Since $X_t$ and $Y_t$ are martingales, so is $Z_t$. Moreover, $Z_t$ is non-negative, and hence by the martingale convergence theorem it has a limit $Z_\infty$ as $t\to\infty$. On the other hand, by the assumption made on $\eta$, for any $\epsilon>0$ we have $\nu((0,\epsilon))>0$.
Hence jumps of size greater than $\epsilon$ occur in $X_t$ at a positive rate. This implies that if $Z_\infty=\alpha \ne 0$, then $Z_t$ has infinitely many jumps of size greater than some $\epsilon$ with $0<\epsilon<\frac{\alpha}{2}$, which contradicts its convergence. Hence $Z_\infty = 0$ a.s. \end{proof} We define an auxiliary function $g:[0,\infty)\to\mathbb{R}$ by letting \[ g(x) = \int_x^1 \int_y^1 \frac{1}{\eta(r)} dr dy \] \begin{lemma}\label{lemma:g} If $\int_0^1 \frac{r}{\eta(r)} dr <\infty$ then $g$ is finite on $[0,\infty)$. Moreover, it is differentiable on $(0,\infty)$, its derivative is absolutely continuous, and $g^{\prime\prime} (x)= \frac{1}{\eta(x)}$ for almost every $x$. Furthermore, for every $x,y\in[0,\infty)$, we have \[ g(y)-g(x) \ge g^\prime(x) (y-x) + \frac{1}{2} \frac{1}{\eta(x)} (y-x)^2 1_{y<x} \] \end{lemma} \begin{proof} By Fubini's theorem, \[ g(x) = \int_x^1 \int_x^r \frac{1}{\eta(r)} dy dr = \int_x^1 \frac{r-x}{\eta(r)} dr \le \int_0^1 \frac{r}{\eta(r)} dr \] hence $g$ is finite. Differentiating under the integral, $g$ is differentiable on $(0,\infty)$ with $g^\prime(x)=-\int_x^1 \frac{1}{\eta(r)} dr$; this derivative is absolutely continuous and $g^{\prime\prime} = \frac{1}{\eta} \quad a.e $. To prove the last claim, note that since $g^\prime$ is increasing, $g$ is convex and therefore, \[ g(y)-g(x) \ge g^\prime(x) (y-x) \] If $y<x$, by the integral form of the remainder in Taylor's theorem we have \[ g(y)-g(x) = g^\prime(x) (y-x) + \int_y^x (t-y) g^{\prime\prime}(t) dt \] and since $g^{\prime\prime}(t)=\frac{1}{\eta(t)}$ is decreasing, we have $g^{\prime\prime}(t) \ge \frac{1}{\eta(x)}$ for $t\le x$, which implies that in the case $y<x$, \[ g(y)-g(x) \ge g^\prime(x) (y-x) + \frac{1}{2} \frac{1}{\eta(x)} (y-x)^2\] and the proof is complete.
\end{proof} \begin{lemma}\label{lemma:tau_bar} If $\int_0^1 \frac{r}{\eta(r)} dr <\infty$ then \[ \lim_{a\to 0} \mathbb{E} \bar{\tau}_a = 0\] \end{lemma} \begin{proof} By Lemma~\ref{lemma:g} we have \begin{multline} g(Z_{t \wedge \bar{\tau}_a}) = g(a) + \sum_{s\le t\wedge \bar{\tau}_a} \Delta g(Z_s)\\ \ge g(a) + \sum_{s\le t\wedge \bar{\tau}_a} g^\prime(Z_{s-})\Delta Z_s + \frac{1}{2} \sum_{s\le t\wedge \bar{\tau}_a} \frac{1}{\eta(Z_{s-})} (\Delta Z_s)^2 1_{\Delta Z_s<0} \end{multline} Since the second term on the right-hand side is a martingale, we have \begin{equation} \label{equation:Ito's formula2} \mathbb{E} g(Z_{t \wedge \bar{\tau}_a}) \ge g(a) + \frac{1}{2} \mathbb{E} \sum_{s\le t\wedge \bar{\tau}_a} \frac{1}{\eta(Z_{s-})} (\Delta Z_s)^2 1_{\Delta Z_s<0}. \end{equation} Noting that the jumps of $Z$ have a symmetric distribution, we conclude \[ \mathbb{E} \sum_{s\le t\wedge \bar{\tau}_a} \frac{1}{\eta(Z_{s-})} (\Delta Z_s)^2 1_{\Delta Z_s<0} = \frac{1}{2} \mathbb{E} \int_0^{t\wedge \bar{\tau}_a} \frac{1}{\eta(Z_{s-})} \eta(Z_s) ds = \frac{1}{2} \mathbb{E} (t \wedge \bar{\tau}_a)\] Substituting into~\eqref{equation:Ito's formula2} yields \[ \frac{1}{4} \mathbb{E} (t \wedge \bar{\tau}_a) \le \mathbb{E}g(Z_{t \wedge \bar{\tau}_a}) - g(a) \] Now letting $t\to \infty$ and noting that $Z_s$ is uniformly bounded by 2 for $s\le \bar{\tau}_a$, we find that \[ \frac{1}{4} \mathbb{E} (\bar{\tau}_a) \le \mathbb{E}g(Z_{\bar{\tau}_a}) - g(a) \] Now let $a\to 0$. By continuity of $g$, we have $g(a)\to g(0)$. On the other hand, \[ \mathbb{P}(Z_{\bar{\tau}_a}\ge 1) \le \mathbb{E}(Z_{\bar{\tau}_a}) = a\to 0 \] where we have used Markov's inequality and the optional stopping theorem.
Now we can write, \begin{multline} \mathbb{E}g(Z_{\bar{\tau}_a}) = \mathbb{E}(g(Z_{\bar{\tau}_a}); Z_{\bar{\tau}_a}=0) + \mathbb{E}(g(Z_{\bar{\tau}_a});Z_{\bar{\tau}_a}\ge 1) \\ \le g(0) + \Big(\sup_{1\le x\le 2} g(x)\Big)\, \mathbb{P}(Z_{\bar{\tau}_a}\ge 1) \to g(0) \end{multline} where the supremum is finite since $g$ is continuous on $[1,2]$. Hence we find that \[ \lim_{a\to 0} \mathbb{E} \bar{\tau}_a = 0\] \end{proof} We are now ready to prove Theorem~\ref{thm:Levy_suffic}. \begin{proof}[Proof of Theorem~\ref{thm:Levy_suffic}] It suffices to prove that for any $\epsilon>0$, \[\lim_{a\to 0} \mathbb{P} (\tau_a\ge \epsilon) = 0.\] We have \[ \mathbb{P} (\tau_a\ge \epsilon) \le \mathbb{P} (\bar{\tau}_a\ge \epsilon) + \mathbb{P} (\bar{\tau}_a \le \epsilon, Z_{\bar{\tau}_a}\ge 1) \] The first term on the right-hand side is at most $\mathbb{E} \bar{\tau}_a/\epsilon$ by Markov's inequality, and the second term is at most \[ \mathbb{P} (\sup_{0\le t\le \epsilon} Z_t \ge 1)\] which by Doob's maximal inequality is at most $\mathbb{E} Z_\epsilon = a$. Hence, \[ \mathbb{P} (\tau_a\ge \epsilon) \le \mathbb{E} \bar{\tau}_a/\epsilon + a \] and the statement follows from Lemma~\ref{lemma:tau_bar}. \end{proof} \end{document}